US10523870B2 - Contextual display - Google Patents
Contextual display
- Publication number
- US10523870B2 (application US16/223,546, US201816223546A)
- Authority
- US
- United States
- Prior art keywords
- electronic device
- user
- display
- distance
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H04N5/232935—
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/02—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
- G01S15/06—Systems determining the position data of a target
- G01S15/08—Systems for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/02—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
- G01S15/50—Systems of measurement, based on relative movement of the target
- G01S15/58—Velocity or trajectory determination systems; Sense-of-movement determination systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1615—Constructional details or arrangements for portable computers with several enclosures having relative motions, each enclosure supporting at least one I/O or computing function
- G06F1/1616—Constructional details or arrangements for portable computers with several enclosures having relative motions, each enclosure supporting at least one I/O or computing function with folding flat displays, e.g. laptop computers or notebooks having a clamshell configuration, with body parts pivoting to an open position around an axis parallel to the plane they define in closed position
- G06F1/162—Constructional details or arrangements for portable computers with several enclosures having relative motions, each enclosure supporting at least one I/O or computing function with folding flat displays, e.g. laptop computers or notebooks having a clamshell configuration, with body parts pivoting to an open position around an axis parallel to the plane they define in closed position changing, e.g. reversing, the face orientation of the screen with a two degrees of freedom mechanism, e.g. for folding into tablet PC like position or orienting towards the direction opposite to the user to show to a second user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G06K9/00221—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/536—Depth or shape recovery from perspective effects, e.g. by using vanishing points
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
-
- H04N5/23219—
-
- H04N5/232939—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
Definitions
- The present teachings relate to contextual display features for an electronic device.
- Infrared proximity sensing is commonly used to detect the presence of a user and/or the orientation of the device and to change the displayed information accordingly, e.g. as shown in US2016/0219217, WO2017/098524, EP2428864 and EP2615524.
- FoV: field of view
- Display transitions may be accomplished using the touchscreen of the electronic device; however, the touchscreen may be difficult to use with the user's arm extended, for example when taking a self-portrait, or selfie. Moreover, the user is usually required to use a second hand, for example to pinch and zoom on the touchscreen.
- US2012/0287163A1 taught automatically scaling the size of a set of visual content based upon how close a user's face is to a display.
- US2009/0164896A1 taught a technique for managing the display of content on a display of an electronic device based on a distance of the user to the display, where the distance may be estimated by analyzing video data to detect a face of the user.
- The present invention thus provides a device and method for adjusting the information on the display based on the context, especially but not exclusively in situations such as so-called “selfies” or self-portraits, where the content of the display depends on the distance between the device and the user's face. For example, the display may shift from showing a camera preview to showing static images without requiring the user to touch a button or screen. After a selfie image has been captured and the phone is brought closer to the user's head, the captured image is zoomed in. When the arm is extended again, the image zooms out and the viewfinder becomes active again.
- In known solutions, the user needs to touch the gallery icon and use the pinch-and-zoom touch gesture or other touch tools to zoom in and out on the face. This can be time consuming and can potentially lead to missed photo opportunities.
- FIGS. 1A and 1B illustrate an aspect of the teachings where a mobile device adapts its display content based upon the context.
- FIG. 2 illustrates a flow chart for the use of the present invention.
- FIGS. 3a and 3b illustrate the use of the device according to the invention.
- FIGS. 1A and 1B show a first example of a relatively quick and intuitive way to zoom in and out on a captured selfie image according to an aspect of the present teachings, where FIG. 1A illustrates the long-distance display mode and FIG. 1B illustrates the short-distance display mode.
- In FIG. 1A, the user 2 takes a selfie where the preview image on the display 1a of the device or mobile phone 1 shows the complete scene, thus allowing the user to compose the preferred image, e.g. including the background scenery.
- The exposure of the picture may be controlled on the display 1a, for example using a touchless interface based on gestures 3, e.g. as described in WO2015/022498, or automatically when the face is in focus.
- In FIG. 1B, the device 1 detects that it is moved close to the user 2 and thus provides a detailed image on the display 1a, allowing the user to inspect the exposure of the face and, of course, to zoom out to see the complete image using standard gestures or menus.
- This provides one-handed zoom functionality, allowing users to quickly find out if a captured selfie is of good quality by magnifying the part of the image that contains the face, or by zooming into the center of the image, when the user 2 brings the device 1 closer.
- When the arm is extended again, the image zooms out and the viewfinder functionality of the display 1a becomes active again.
- The present solution may be used to switch display contexts between the long- and short-distance display modes, for example as listed under the Description heading below.
- The face detection is preferably performed using an imaging unit such as a camera running well-known face detection algorithms, e.g. as referred to in the abovementioned publications.
- It may also include 3D imaging and recognition of a face, or possibly an acoustic imaging unit as suggested in the article Yoong, K. & McKerrow, P. J. 2005, ‘Face recognition with CTFM sonar’, in C. Sammut (ed.), Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association, Sydney, pp. 1-10, which describes the use of acoustic signals for recognizing a face.
- Image-based analysis may be used to estimate a distance based on known features in a recognized face, but this is slow and inaccurate, and it also, of course, relies on the recognition of a face, which is not always easy.
- The present invention is therefore based on acoustic distance measurements, possibly aided by image processing, e.g. by setting limits on the scanned distance range of the measurements.
- The acoustic measurements may be based on well-known techniques such as pulse echo, chirp, encoded signals, resonance measurements or similar methods using available transducers emitting an acoustic signal and measuring the time elapsed before receiving the echo; a minimal sketch of a pulse-echo measurement is given below.
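- As an illustration only, the following sketch shows how a pulse-echo (time-of-flight) distance estimate could be computed from a recorded echo. The function name, sampling rate handling and the correlation-based delay estimation are assumptions made for this example and are not prescribed by the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def pulse_echo_distance(emitted, received, sample_rate_hz):
    """Estimate the device-to-reflector distance from one pulse-echo measurement.

    emitted:  the transmitted ultrasound pulse (1-D numpy array)
    received: the microphone recording, starting at the moment of emission
    """
    # Locate the echo by cross-correlating the recording with the emitted pulse.
    correlation = np.correlate(received, emitted, mode="valid")
    delay_samples = int(np.argmax(np.abs(correlation)))
    round_trip_time_s = delay_samples / sample_rate_hz
    # The signal travels to the reflector and back, hence the factor of two.
    return SPEED_OF_SOUND * round_trip_time_s / 2.0
```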
- The acoustic measurements are performed in the ultrasound range, outside the audible range, but possibly close to the audible range, making it possible to use transducers already existing in the device that operate in the audible range.
- The acoustic measurement may include analysis in the time and frequency domains.
- The preferred algorithm therefore estimates the motion of the device, equipped with an acoustic transducer, relative to an acoustic reflector, presumably the head of the user.
- At the start of the estimation, the position is reset, so any motion is measured relative to this position. The estimate is based on accumulating the velocity of the reflector relative to the device, where the velocity is estimated using the Doppler effect.
- Alternatively, the movements may simply be found by monitoring the change in the measured distance using a sequence of emitted acoustic codes or patterns; both approaches are sketched below.
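- A minimal sketch of the relative-motion estimate described above follows. The class and variable names, the reset trigger and the simple accumulation of Doppler-derived velocity are illustrative assumptions, not a prescribed implementation.

```python
class RelativeMotionEstimator:
    """Track device-to-reflector displacement relative to a reset position."""

    def __init__(self):
        self.displacement_m = 0.0  # accumulated motion since the last reset

    def reset(self):
        # Called e.g. when an image is captured, so motion is measured from here on.
        self.displacement_m = 0.0

    def update_with_velocity(self, doppler_velocity_mps, dt_s):
        # Accumulate the Doppler-estimated relative velocity over one interval.
        self.displacement_m += doppler_velocity_mps * dt_s

    def update_with_distance(self, previous_distance_m, current_distance_m):
        # Simpler alternative: track the change in measured distance between
        # successive acoustic codes or patterns.
        self.displacement_m += current_distance_m - previous_distance_m
```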
- The displayed information may also be changed continuously, e.g. by using animations illustrating the movements or by continuously zooming in on a captured image or on the image on the display.
- FIG. 2 illustrates the imaging process, starting with the camera being activated.
- A live image 101 is scanned to detect a face. If a face is recognized 101a, the image is captured with a face 102a. The distance and/or movement is also sensed acoustically 103a and possibly also by monitoring the face 104a. To determine whether the device has been moved towards the head after an image of a face has been captured, a combination of ultrasonic detection and facial recognition is used.
- The image may then be zoomed in to concentrate on the face 105a, and if a movement away from the face is detected acoustically 106a and/or visually 107a, the display returns to the live preview mode 101.
- Alternatively, the image 102b is taken without a recognized face, in which case the distance and/or movement is measured only acoustically 103b. If the device is moved toward the user, the central part of the image may be shown 105b, and when the distance increases 107b the system goes back to the live view mode 101, in that step also scanning for faces again. The decision flow is sketched below.
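- The following sketch, with hypothetical function and state names, is one illustrative reading of the FIG. 2 flow chart and not the patented implementation itself.

```python
def review_flow(face_was_recognized, moved_towards_user, moved_away_from_user):
    """Return a display state for one pass through the FIG. 2 flow.

    moved_towards_user / moved_away_from_user come from the acoustic
    measurements (103a/103b) and, when available, face monitoring (104a).
    """
    if moved_away_from_user:          # 106a/107a or 107b
        return "LIVE_PREVIEW"         # back to 101, scanning for faces again
    if moved_towards_user:
        if face_was_recognized:       # image was captured with a face (102a)
            return "ZOOM_ON_FACE"     # 105a
        return "ZOOM_ON_CENTER"       # 105b: no face, zoom into the image center
    return "SHOW_CAPTURED_IMAGE"      # no significant motion: keep showing the capture
```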
- The facial recognition part may use the size of the bounding box around the face to determine whether the device has been moved towards or away from the face after image capture, or it may indicate the approximate distance based on the distance between certain features in the recognized face in the image samples taken during the process. In that case, when the facial bounding box has increased in size above a given threshold, zoom on the display is triggered, as sketched below.
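- A minimal sketch of the bounding-box criterion is given here; the 1.3 growth factor and the box representation are arbitrary assumptions chosen purely for illustration.

```python
def face_box_area(box):
    # box is assumed to be (left, top, right, bottom) in pixels
    left, top, right, bottom = box
    return max(right - left, 0) * max(bottom - top, 0)

def should_zoom(box_at_capture, current_box, growth_threshold=1.3):
    """Trigger display zoom when the face bounding box has grown enough,
    indicating the device has been brought closer to the face after capture."""
    reference = face_box_area(box_at_capture)
    if reference == 0:
        return False
    return face_box_area(current_box) / reference > growth_threshold
```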
- The method according to the present invention involves a device 1 sending an ultrasonic signal 31 and analyzing the echo. This allows fine-grained measurement of the distance between the phone 1 and the body 2 (something that neither facial recognition nor inertial sensors can provide). It has drawbacks, such as picking up reflections from other people 34 and from the hands/arms of the user. Therefore, it is advantageous to include a solid reference (provided by facial recognition) providing an initial distance range 33. The method may also be set to react only when there is strong evidence of motion (to filter out accidental motion measurements), in which case the response will be delayed. Inertial sensors can be used to reduce the delay in the response.
- The transducers used according to the present solution may depend on the device, preferably being a separate transmitter and receiver, but other well-known configurations may also be contemplated, such as a single transmitter/receiver device, or arrays, pairs or matrices adding directivity to the emitted and/or received signal, e.g. for suppressing reflectors outside the field of view of the camera or for taking into account the known lobes of the emitters and receivers. This way, other people 34 or objects in the image may be removed from the distance measurements. If a recognized face occurs in the image, a directional system may automatically select the recognized face, or it may be selected by the user.
- Inertial sensors may therefore be well suited to detect the onset of motion. Inertial sensors can suffer from drift, but facial recognition and ultrasound distance measurements can be used to correct the drift, as in the fusion sketch below.
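- The complementary-filter style fusion below is only one way such drift correction could look; the blending gain and the idea of pulling a fast inertial estimate towards the slower ultrasound or face-size based distance are assumptions made for this example.

```python
class FusedDistanceTracker:
    """Fast inertial distance updates, periodically corrected by slower
    but drift-free ultrasound (or face-size based) distance measurements."""

    def __init__(self, initial_distance_m, correction_gain=0.2):
        self.distance_m = initial_distance_m
        self.velocity_mps = 0.0
        self.correction_gain = correction_gain  # assumed blending weight

    def update_inertial(self, acceleration_mps2, dt_s):
        # Integrate acceleration along the device-user axis; drifts over time.
        self.velocity_mps += acceleration_mps2 * dt_s
        self.distance_m += self.velocity_mps * dt_s

    def correct(self, measured_distance_m):
        # Pull the drifting inertial estimate towards the absolute measurement.
        error = measured_distance_m - self.distance_m
        self.distance_m += self.correction_gain * error
```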
- The facial recognition provides only a rough measurement of distance and may be used to set an expected distance range 33 based on the size of the recognized features, but it reliably identifies the primary user.
- The distance range approximated from the facial recognition may also be used to adjust for the arm length of the user or other choices made during operation. This way, the combination of the two may eliminate the risk of capturing motion from bodies to the side, or reflections from limbs, originating from points outside the suggested range.
- The range 33 may also be linked to the threshold defining the range 32 within which the image is enlarged to show the face, or to show other features on the display, as discussed above for the close-range display mode.
- The range 33 may be defined by the detected motion. If a motion between the device and the user is detected and exceeds a predetermined limit in time or amplitude, the display may change in the same way as if the measurements provided an absolute distance.
- The exact ranges may vary, e.g. depending on the age of the person and the length of the person's arms, and may be set as user preferences.
- Typical user characteristics such as arm length and other variables may also be based on inputs available to the algorithm on the device and on statistics from earlier use.
- The measured size of the recognized facial features may also be monitored continuously so as to track any changes in the distance range and, through that process, to avoid or suppress acoustic reflections from other people, arms etc. close to the device, as illustrated in FIG. 3b where the reflections from the persons 34 to the side are ignored.
- The present invention would also work if the device were stationary, e.g. placed on a support, and the user moved relative to the device, for example when the required distance between the camera and the user is too large to capture both the scenery and the user.
- The present invention relates to a device and a method for controlling displayed content on an electronic device including said display and a camera having a known field of view.
- The method comprises the steps of transmitting a first ultrasound signal towards the user, receiving a second ultrasound signal, and computing a distance between the user and the electronic device, wherein:
- The second ultrasound signal includes a portion constituted by the first ultrasound signal reflected from said user's face, so that, for example, the propagation time may be used to calculate the distance between the device and the face.
- The distance between the user and the electronic device is then computed using acoustic measurements involving at least one acoustic transducer capable of both transmitting and receiving the acoustic signals, or specific transducers for sending and receiving the signal, in which case the transducers may be synchronized to measure the propagation time.
- The device also includes a memory storing at least two sets of predetermined display features, where the device is set to display a first set when the distance between the device and the user is above at least one chosen threshold, and a second set of display features when the distance is less than said threshold.
- The number of thresholds may depend on the function and application and may range from one to, in some cases, a sufficiently high number to allow continuous change in the display depending on the distance.
- The display is thus made dependent on the measured range to the user, where the display can be changed continuously through animations, image transformations or user interface modifications, or the display can change between two predetermined states (images) depending on whether the detected range is above or below defined thresholds.
- The movement of the device relative to the user or face may be measured, e.g. by analyzing the reflected acoustic signal to detect a Doppler shift relative to the emitted signal. Based on the movements, the estimated trajectory of the movement may be used for estimating the time for changing between said sets of display features, and may also be used for presenting or illustrating the measured movements on the screen, for example by using animations or by zooming in on a captured image as a function of the measured movement. The relation between Doppler shift and relative velocity is recalled below.
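- For reference only, the standard narrowband relation between the observed Doppler shift and the radial velocity of the reflector (the round trip doubles the shift) is shown here; the numeric example uses assumed values.

```latex
% Doppler shift for a signal of frequency f_0 reflected from a target moving
% with radial velocity v towards the transducer (c = speed of sound):
\[
  f_d = \frac{2 v}{c}\, f_0
  \qquad\Longleftrightarrow\qquad
  v = \frac{c\, f_d}{2 f_0}
\]
% Example (assumed numbers): f_0 = 40\,\text{kHz}, f_d = 100\,\text{Hz}, c = 343\,\text{m/s}
% gives v = 343 \cdot 100 / (2 \cdot 40000) \approx 0.43\,\text{m/s}.
```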
- The movement and trajectory usually relate to movements along the line between the device and the face, but relative movements perpendicular to this line may also be used, e.g. based on the movements of a recognized face across the screen.
- The imaging unit or camera is preferably capable of identifying at least some facial features of a user facing the device, wherein the imaging unit is operatively coupled to the electronic device.
- The size of the recognized facial features may be used to estimate an approximate distance between the device and the user's face, and to set a limited range for the distance calculated from the acoustic signal, thus ignoring signals originating from objects outside said range and in this way avoiding possible disturbances from outside the range. A sketch of such range gating is shown below.
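- The range-gating sketch below is illustrative only; the ±30% margin around the face-size based distance estimate and the function name are assumptions for the example.

```python
def gate_echo_distances(echo_distances_m, face_estimate_m, margin=0.30):
    """Keep only acoustic distance candidates consistent with the rough
    face-size based estimate, suppressing echoes from bystanders or limbs."""
    lower = face_estimate_m * (1.0 - margin)
    upper = face_estimate_m * (1.0 + margin)
    return [d for d in echo_distances_m if lower <= d <= upper]

# Usage (assumed numbers): a face-size estimate of 0.5 m keeps the 0.45 m echo
# and rejects a 1.2 m echo from a person standing behind the user.
print(gate_echo_distances([0.45, 1.2], face_estimate_m=0.5))  # -> [0.45]
```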
- The display sets may include two or more distance thresholds, so that when the device is outside a first threshold the shutter controls related to the imaging means on the electronic device are activated, when the device is inside the first threshold but outside the second threshold the shutter controls of the device are hidden, and when the device is inside the second threshold the display zoom control is activated, as sketched below.
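- As a sketch only, with threshold values chosen arbitrarily for illustration, the two-threshold behaviour described above could be expressed as:

```python
def select_display_state(distance_m, first_threshold_m=0.45, second_threshold_m=0.25):
    """Map the measured device-to-user distance to a display state.

    Outside the first threshold: show the shutter controls (arm extended).
    Between the thresholds: hide the shutter controls.
    Inside the second threshold: activate the display zoom control.
    """
    if distance_m > first_threshold_m:
        return "SHUTTER_CONTROLS_ACTIVE"
    if distance_m > second_threshold_m:
        return "SHUTTER_CONTROLS_HIDDEN"
    return "ZOOM_CONTROL_ACTIVE"
```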
- The device may be operated manually or be activated when sensing a movement along the direction between the imaging means and the user, e.g. using an inertial sensor.
- The electronic device will therefore comprise:
- an imaging unit including face recognition circuitry;
- an acoustic measuring unit including an emitter for transmitting an acoustic signal with predetermined characteristics; and
- a receiver for receiving and analyzing said acoustic signals, and for measuring the distance, and possibly also a relative movement, between a user's face and the device.
- The transmitter and receiver may be separate units or be constituted by the same transducer.
- The device comprises a display adapted to show the imaged area and chosen information, as well as a display control adapted to present a first set of information on the display when the distance is above a chosen threshold and a second set of information when the distance is below said threshold.
- Each set of display information may be stored in a user-accessible memory, the type of information being chosen by a user.
- The present invention also relates to a software product implementing at least some of the features of the method for controlling the electronic device disclosed herein.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Acoustics & Sound (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Studio Devices (AREA)
- Circuits Of Receivers In General (AREA)
- Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
- Diaphragms For Electromechanical Transducers (AREA)
Abstract
Description
-
- Camera context vs gallery context: with the arm extended, the screen shows the feed from the front facing camera. With the phone close to the body, the screen shows the last photo taken (the Gallery).
- Full shot vs zoomed shot: with the arm extended, the screen shows the full shot. With the phone close to the body, the screen shows the zoomed-in face, as illustrated in FIGS. 1A and 1B.
- Full shot vs customization view: with the arm extended, the screen shows the full shot. With the phone close to the body, the screen shows the sharing/customization/image editing options in social media.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/223,546 US10523870B2 (en) | 2017-12-21 | 2018-12-18 | Contextual display |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762608965P | 2017-12-21 | 2017-12-21 | |
US16/223,546 US10523870B2 (en) | 2017-12-21 | 2018-12-18 | Contextual display |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190199935A1 US20190199935A1 (en) | 2019-06-27 |
US10523870B2 true US10523870B2 (en) | 2019-12-31 |
Family
ID=66950850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/223,546 Active US10523870B2 (en) | 2017-12-21 | 2018-12-18 | Contextual display |
Country Status (2)
Country | Link |
---|---|
US (1) | US10523870B2 (en) |
NO (1) | NO344671B1 (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2578386B (en) | 2017-06-27 | 2021-12-01 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
GB2563953A (en) | 2017-06-28 | 2019-01-02 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
GB201713697D0 (en) | 2017-06-28 | 2017-10-11 | Cirrus Logic Int Semiconductor Ltd | Magnetic detection of replay attack |
GB201801528D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Method, apparatus and systems for biometric processes |
GB201801532D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Methods, apparatus and systems for audio playback |
GB201801527D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Method, apparatus and systems for biometric processes |
GB201801530D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Methods, apparatus and systems for authentication |
GB201801526D0 (en) | 2017-07-07 | 2018-03-14 | Cirrus Logic Int Semiconductor Ltd | Methods, apparatus and systems for authentication |
GB201804843D0 (en) | 2017-11-14 | 2018-05-09 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
GB201801663D0 (en) * | 2017-10-13 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Detection of liveness |
GB2567503A (en) | 2017-10-13 | 2019-04-17 | Cirrus Logic Int Semiconductor Ltd | Analysing speech signals |
GB201801661D0 (en) * | 2017-10-13 | 2018-03-21 | Cirrus Logic International Uk Ltd | Detection of liveness |
GB201801664D0 (en) | 2017-10-13 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Detection of liveness |
GB201801874D0 (en) | 2017-10-13 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Improving robustness of speech processing system against ultrasound and dolphin attacks |
GB201803570D0 (en) | 2017-10-13 | 2018-04-18 | Cirrus Logic Int Semiconductor Ltd | Detection of replay attack |
GB201801659D0 (en) | 2017-11-14 | 2018-03-21 | Cirrus Logic Int Semiconductor Ltd | Detection of loudspeaker playback |
US11264037B2 (en) | 2018-01-23 | 2022-03-01 | Cirrus Logic, Inc. | Speaker identification |
US11735189B2 (en) | 2018-01-23 | 2023-08-22 | Cirrus Logic, Inc. | Speaker identification |
US11475899B2 (en) | 2018-01-23 | 2022-10-18 | Cirrus Logic, Inc. | Speaker identification |
US10692490B2 (en) | 2018-07-31 | 2020-06-23 | Cirrus Logic, Inc. | Detection of replay attack |
US10915614B2 (en) | 2018-08-31 | 2021-02-09 | Cirrus Logic, Inc. | Biometric authentication |
US11037574B2 (en) | 2018-09-05 | 2021-06-15 | Cirrus Logic, Inc. | Speaker recognition and speaker change detection |
US11580657B2 (en) * | 2020-03-30 | 2023-02-14 | Snap Inc. | Depth estimation using biometric data |
US11416132B2 (en) * | 2020-07-10 | 2022-08-16 | Vmware, Inc. | Scaling display contents based on user-to-screen distance |
-
2018
- 2018-01-29 NO NO20180146A patent/NO344671B1/en unknown
- 2018-12-18 US US16/223,546 patent/US10523870B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090164896A1 (en) * | 2007-12-20 | 2009-06-25 | Karl Ola Thorn | System and method for dynamically changing a display |
EP2428864A2 (en) | 2010-09-08 | 2012-03-14 | Apple Inc. | Camera-based orientation fix from portrait to landscape |
US20120287163A1 (en) * | 2011-05-10 | 2012-11-15 | Apple Inc. | Scaling of Visual Content Based Upon User Proximity |
EP2615524A1 (en) | 2012-01-12 | 2013-07-17 | LG Electronics, Inc. | Mobile terminal and control method thereof |
WO2015022498A1 (en) | 2013-08-15 | 2015-02-19 | Elliptic Laboratories As | Touchless user interfaces |
US20160219217A1 (en) * | 2015-01-22 | 2016-07-28 | Apple Inc. | Camera Field Of View Effects Based On Device Orientation And Scene Content |
WO2017098524A1 (en) | 2015-12-09 | 2017-06-15 | Smartron India Private Limited | A system and method for automatically capturing self-images or selfies in a mobile device |
Non-Patent Citations (1)
Title |
---|
Yoong, K. & McKerrow, P. J., 2005, "Face Recognition with CTFM Sonar," in C. Sammut (ed.), Australasian Conference on Robotics and Automation, Australian Robotics and Automation Association, Sydney, pp. 1-10. |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220237743A1 (en) * | 2012-02-29 | 2022-07-28 | Google Llc | Systems, methods, and media for adjusting one or more images displayed to a viewer |
US11242418B2 (en) | 2019-06-12 | 2022-02-08 | Chevron Phillips Chemical Company Lp | Surfactant as titanation ligand |
US11325996B2 (en) | 2019-06-12 | 2022-05-10 | Chevron Phillips Chemical Company Lp | Amino acid chelates of titanium and use thereof in aqueous titanation of polymerization catalysts |
US11384171B2 (en) | 2019-06-12 | 2022-07-12 | Chevron Phillips Chemical Company Lp | Surfactant as titanation ligand |
US12030975B2 (en) | 2019-06-12 | 2024-07-09 | Chevron Phillips Chemical Company Lp | Surfactant as titanation ligand |
WO2023079005A1 (en) | 2021-11-05 | 2023-05-11 | Elliptic Laboratories Asa | Proximity and distance detection |
Also Published As
Publication number | Publication date |
---|---|
US20190199935A1 (en) | 2019-06-27 |
NO344671B1 (en) | 2020-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10523870B2 (en) | Contextual display | |
EP2927634B1 (en) | Single-camera ranging method and system | |
US7620316B2 (en) | Method and device for touchless control of a camera | |
KR101983288B1 (en) | Apparatus and method for controlling a shooting status in a portable device having a dual camera | |
CN107424186B (en) | Depth information measuring method and device | |
KR101627185B1 (en) | Control method of image photographing apparatus | |
JP6016226B2 (en) | Length measuring device, length measuring method, program | |
US9127942B1 (en) | Surface distance determination using time-of-flight of light | |
CN105208287B (en) | A kind of image pickup method and device | |
TW201007583A (en) | Shadow and reflection identification in image capturing devices | |
US20170206001A1 (en) | Method and apparatus for gesture identification | |
KR20170114045A (en) | Apparatus and method for tracking trajectory of target using image sensor and radar sensor | |
JP2000347692A (en) | Person detecting method, person detecting device, and control system using it | |
US9558563B1 (en) | Determining time-of-fight measurement parameters | |
US12154282B2 (en) | Information processing apparatus, control method, and program | |
CN105338241A (en) | Shooting method and device | |
CN109947236B (en) | Method and device for controlling content on a display of an electronic device | |
JP2015166854A (en) | Projection control device of projector, projection control method of projector, projection system, projection control method of projection system, and program | |
US11076123B2 (en) | Photographing control device, photographing system and photographing control method | |
JP2008182321A (en) | Image display system | |
KR100971424B1 (en) | Ultrasound Image Processing System and Method | |
CN108734065B (en) | Gesture image acquisition equipment and method | |
JP6541497B2 (en) | Communication system, control method thereof and program | |
KR102198177B1 (en) | Photographing apparatus, method for controlling the same and a computer-readable storage medium | |
US9915528B1 (en) | Object concealment by inverse time of flight |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
AS | Assignment |
Owner name: ELLIPTIC LABORATORIES AS, NORWAY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DANIELSEN, LAILA;STRUTT, GUENAEL THOMAS;ARCEJAEGER, RACHEL-MIKEL;AND OTHERS;SIGNING DATES FROM 20190221 TO 20190225;REEL/FRAME:048536/0516 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |