US9679199B2 - Fusing device and image motion for user identification, tracking and device association - Google Patents
- Publication number
- US9679199B2 (application US14/096,840)
- Authority
- US
- United States
- Prior art keywords
- image
- acceleration
- mobile device
- computer
- motion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G06K9/00624—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/211—Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/163—Indexing scheme relating to constructional details of the computer
- G06F2200/1637—Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
Definitions
- Tracking a smart phone can be used to identify and track the smart phone's owner in order to provide indoor location-based services, such as establishing the smart phone's connection with nearby infrastructure such as a wall display, or for providing the user of the phone location-specific information and advertisements.
- the cross-modal sensor fusion technique described herein provides a cross-modal sensor fusion approach to track mobile devices and the users carrying them.
- the technique matches motion features captured using sensors on a mobile device to motion features captured in images of the device in order to track the mobile device and/or its user.
- the technique matches the velocities of a mobile device, as measured by an onboard measurement unit, to similar velocities observed in images of the device to track the device and any object rigidly attached thereto (e.g., a user).
- This motion feature matching process is conceptually simple.
- the technique does not require a model of the appearance of either the user or the device, nor in many cases a direct line of sight to the device.
- the technique can track the location of the device even when it is not visible (e.g., it is in a user's pocket).
- the technique can operate in real time and can be applied to a wide variety of scenarios.
- the cross-modal sensor fusion technique locates and tracks a mobile device and its user in video using accelerations.
- the technique matches the mobile device's accelerations with accelerations observed in images (e.g., color and depth images) of the video on a per pixel basis, computing the difference between the image motion features and the device motion features at a number of pixel locations in one or more of the captured images.
- the number of pixels can be predetermined if desired, as can the pixel locations that are selected.
- the technique uses the inertial sensors common to many mobile devices to find the mobile device's acceleration.
- Device and image accelerations are compared in the 3D coordinate frame of the environment, thanks to the absolute orientation sensing capabilities common in today's mobile computing devices such as, for example, smart phones, as well as the range sensing capability of depth cameras which enables computing the real world coordinates (meters) of image features.
- the device and image accelerations are compared at a predetermined number of pixels at various locations in an image. The smallest difference indicates the presence of the mobile device at the location.
- FIG. 1 depicts a flow diagram of a process for practicing one exemplary embodiment of the cross-modal sensor fusion technique described herein.
- FIG. 2 depicts a flow diagram of a process for practicing another exemplary embodiment of the cross-modal sensor fusion technique described herein.
- FIG. 3 depicts a flow diagram of a process for practicing yet another exemplary embodiment of the cross-modal sensor fusion technique described herein.
- FIG. 4 shows one exemplary environment for using a system which correlates motion features obtained from a mobile device and motion features obtained from images of the device in order to track the device according to the cross-modal sensor fusion technique described herein.
- FIG. 5 shows a high-level depiction of an exemplary cross-modal sensor fusion system that can be used in the exemplary environment shown in FIG. 4 .
- FIG. 6 shows an illustrative mobile device for use in the system of FIG. 5 .
- FIG. 7 shows an illustrative external camera system for use in the system of FIG. 5 .
- FIG. 8 shows an illustrative cross-modal sensor fusion system that can be used in conjunction with the external camera system of FIG. 7 .
- FIG. 9 is a schematic of an exemplary computing environment which can be used to practice the cross-modal sensor fusion technique.
- the cross-modal sensor fusion technique is a sensor fusion approach to locating and tracking a mobile device and its user in video.
- the technique matches motion features measured by sensors on the device with image motion features extracted from images taken of the device. These motion features can be velocities or accelerations, for example.
- the technique matches device acceleration with acceleration of the device observed in images (e.g., color and depth images) taken by a camera (such as a depth camera, for example). It uses the inertial sensors common to many mobile devices to find the device's acceleration in three dimensions. Device and image accelerations are compared in a 3D coordinate frame of the environment, thanks to the absolute orientation sensing capabilities common in today's smart phones, as well as the range sensing capability of depth cameras which enables computing the real world coordinates (meters) of image features.
- One prior system, ShakeID, considers which of up to four tracked hands is holding the device.
- fusion can be performed at every pixel in a video image and requires no separate process to suggest candidate objects to track.
- the cross-modal sensor technique requires no knowledge of the appearance of the device or the user, and allows for a wide range of camera placement options and applications.
- An interesting and powerful consequence of the technique is that the mobile device user, and in many cases the device itself, may be reliably tracked even if the device is in the user's pocket, fully out of view of the camera.
- Tracking of a mobile device and its user can be useful in many real-world applications. For example, it can be used to provide navigation instructions to the user or it can be used to provide location-specific advertisements. It may also be used in physical security related applications. For example, it may be used to track objects of interest or people of interest. Many, many other applications are possible.
- “Sensor fusion” refers to the combination of multiple disparate sensors to obtain a more useful signal.
- fusion techniques seek to associate two devices by finding correlation among sensor values taken from both. For example, when two mobile devices are held together and shaken, accelerometer readings from both devices will be highly correlated. Detecting such correlation can cause application software to pair or connect the devices in some useful way. Similarly, when a unique event is observed to happen at the same time at both devices, various pairings may be established. Perhaps the simplest example is connecting two devices by pressing buttons on both devices simultaneously, but the same idea can be applied across a variety of sensors. For example, two devices that are physically bumped together will measure acceleration peaks at the same moment in time. These interactions are sometimes referred to as “synchronous gestures.”
- a mobile phone may be located and paired with an interactive surface by correlating an acceleration peak in the device with the appearance of a touch contact, or when the surface detects the visible flashing of a phone at the precise moment it is triggered.
- An object tagged with a Radio Frequency Identification (RFID) chip can be detected and located as it is placed on an interactive surface by correlating the appearance of a new surface contact with the appearance of a new RFID code.
- Some researchers have proposed correlating accelerometers worn at the waist with visual features to track young children in school. They consider tracking head-worn red LEDs, as well as tracking the position of motion blobs. For the accelerometer measurements, they consider integrating to obtain position for direct comparison with the visual tracking data, as well as deriving pedometer-like features. This research favors pedometer features in combination with markerless motion blob visual features.
- Still other researchers have proposed identifying and tracking people across multiple existing security cameras by correlating mobile device accelerometer and magnetometer readings. They describe a hidden Markov model-based approach to find the best assignment of sensed devices to tracked people. They rely on an external process to generate tracked objects and use a large matching window, though they demonstrate how their approach can recover from some common tracking failures.
- One system matches smart phone accelerometer values with the acceleration of up to four hands tracked by a depth camera (e.g., Microsoft Corporation's Kinect® sensor).
- the hand holding the phone is inferred by matching the device acceleration with acceleration of hand positions over a short window of time (1 s).
- a Kalman filter is used to estimate the acceleration of each hand.
- the hand with the most similar pattern of acceleration is determined to be holding the device. This work further studies the correlation of contacts on a touch screen by the opposite hand.
- touch contacts are associated with the held device by way of the Kinect® tracked skeleton that is seen to be holding the device.
- Kinect® skeletal tracking requires a fronto-parallel view of the users. Thus, relying on Kinect® skeletal tracking constrains where the camera may be placed. For example, skeletal tracking fails when the camera is mounted in the ceiling for an unobstructed top-down view of the room.
- the cross-modal sensor fusion technique described herein avoids the difficulty of choosing candidate objects by matching low-level motion features throughout the image. It may be used in many situations where skeletal tracking is noisy or fails outright and thus can be used in a wide variety of application scenarios. Whereas most of the above-discussed work performs matching over a significant window in time, the cross-modal sensor fusion technique described herein uses a fully recursive formulation that relies on storing only the previous frame's results, not a buffer of motion history. In fact, the recursive nature of the computation allows it to be applied everywhere in the image in real time, avoiding the need to track discrete objects.
- the best approach is to match image motion directly, since as with “synchronous gestures” the pattern of image motion will provide the discriminative power to robustly detect the device or its user. Making fewer assumptions about the appearance of the device or user extends the range of applicability of the approach, and makes the technique less complex, more robust, and ultimately more useful.
- the technique matches motion features measured on a mobile device with motion features observed in images of the device in order to track the device (and its user).
- Some embodiments of the technique use color and depth images as described in the following paragraphs, but it is possible to practice the technique using grayscale and/or just two dimensional images.
- the matching process is performed at a predetermined number of pixels selected from various locations in a color image.
- the pixels used for matching can be selected based on a variety of distributions in the image.
- the matching process is performed at each pixel in a color image.
- FIG. 1 depicts one exemplary process 100 for practicing the cross-modal sensor fusion technique.
- motion features of a mobile device are measured by sensors on the device and images of the device and any object to which it is rigidly attached are simultaneously captured.
- Image motion features of the device in the captured images are found (block 104 ).
- the image motion features can be either velocities or accelerations which are determined on a pixel by pixel basis at various locations in an image.
- the image motion features are converted into the same coordinate frame of the mobile device, as shown in block 106 .
- Device motion features measured on the device are then matched with the image motion features of the device, as shown in block 108 .
- the device motion features can be velocities or accelerations measured by sensors on the device.
- the difference between the image motion features and the device motion features is computed on a per-pixel basis, at a number of pixel locations in one or more of the captured images in the common (possibly real-world) coordinate system, as shown in block 110.
- This number of pixels can be, for example, every pixel in an image, every other pixel in an image, a random distribution of pixels in the image, a uniform distribution of pixels and the like.
- the number of pixels can be predetermined if desired, as can the pixel locations that are selected.
- the real world coordinates of the device's motions are provided by the sensors on the device, while the real world coordinates of the image motion features are determined using the coordinates from the camera that captured the images.
- the presence of the device and the object rigidly attached to it are then determined using the difference at the chosen pixels, as shown in block 112 .
- the smallest difference in an image determines the device location (and any rigid object attached to it, such as the user of the device) in the common (e.g., real-world) coordinate system.
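- As an illustration of the matching and difference steps just described (blocks 108-112), the following sketch compares one device motion feature against a dense field of per-pixel image motion features, both assumed to be expressed in the common real-world coordinate frame; the function and array names are assumptions of this sketch, not elements of the described system.

```python
import numpy as np

def locate_device(image_motion, device_motion):
    """Per-pixel matching of device motion to image motion.

    image_motion: (H, W, 3) per-pixel motion features (e.g., 3D velocity or
        acceleration) already expressed in the common real-world frame.
    device_motion: length-3 motion feature reported by the mobile device
        for the same instant.
    Returns the (row, col) of the pixel whose motion differs least from the
    device motion, together with the full difference map.
    """
    diff = np.linalg.norm(image_motion - device_motion, axis=2)  # Euclidean difference at every pixel
    row, col = np.unravel_index(np.argmin(diff), diff.shape)     # smallest difference -> device location
    return row, col, diff
```

- As noted above, the same comparison can equally be restricted to a predetermined subset of pixel locations rather than the full image.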
- FIG. 2 depicts another exemplary process 200 for practicing the cross-modal sensor fusion technique that matches motion features that are accelerations.
- As shown in block 202, mobile device acceleration and color and depth images of the mobile device and its user are simultaneously captured.
- Three-dimensional (3D) image accelerations are found in the captured images, as shown in block 204 . These can be found, for example, by computing a 2D optical flow on the captured color images and using corresponding depth images to compute the 3D acceleration. These 3D image accelerations are then converted into the same coordinate frame of the mobile device, as shown in block 206 .
- the device accelerations measured by sensors on the device and the image accelerations are then matched, as shown in block 208 .
- the difference between image and device acceleration is computed on a per pixel basis, at a number of pixel locations in the color images, as shown in block 210 .
- the smallest difference value indicates the presence of the device at that pixel or point, as shown in block 212 .
- FIG. 3 depicts yet another exemplary process 300 for practicing the cross-modal sensor fusion technique.
- mobile device acceleration is found.
- color and depth images of the mobile device, and optionally its user, are captured.
- Two-dimensional (2D) image motion is found in the captured images, as shown in block 304 , by simultaneously computing a dense optical flow of flow vectors on the captured color images.
- Each flow vector is converted to a 3D motion using the depth images, as shown in block 306 , and each flow vector is transformed to the coordinate frame of the mobile device, as shown in block 308 .
- Image acceleration is estimated, as shown in block 310 . This 3D acceleration is estimated by a Kalman filter at each point of the image, with the 3D flow at the point provided as input.
- the 3D device and image accelerations are then matched, as shown in block 312 .
- the difference between image and device acceleration is computed at a number of pixels or points throughout one or more of the color images.
- the number of pixel or point locations can be predetermined if desired, as can the pixel or point locations that are selected.
- the smallest difference value in each image indicates the presence of the device at those pixel or point locations, as shown in block 314 .
- FIG. 4 shows an illustrative environment 400 which serves as a vehicle for introducing a system for practicing the cross-modal sensor fusion technique described herein.
- the system receives motion information from a mobile device 402 . More specifically, the system receives device motion features measured by sensors on at least one mobile device 402 .
- the system further receives captured images of the mobile device 402 , from which image motion features are computed, from at least one external camera system 404 .
- the device motion features from the mobile device are generated by the mobile device 402 itself, with respect to a frame of reference 406 of the mobile device 402 .
- the captured images are captured by the external camera system 404 from a frame of reference 408 that is external to the mobile device 402 . In other words, the external camera system 404 observes the mobile device 402 from a vantage point that is external to the mobile device 402 .
- the mobile device 402 is associated with at least one object. That object can be, for example, a user 412 which moves within a scene.
- the mobile device 402 comprises a handheld unit that is rigidly attached to a user 412 . Any of the parts of an object (e.g., a user) 412 may be in motion at any given time.
- one purpose of the system is to track the object (for example, the user 412 ) that is associated with the mobile device 402 .
- the system seeks to track the user 412 that is holding the mobile device 402 .
- the system performs this task by correlating the device motion features obtained from mobile device 402 with the image motion features of the mobile device 402 obtained from the captured images.
- the system matches the device motion features from the mobile device (which are generated by sensors on the mobile device 402 ) with image motion features extracted from the captured images.
- the system then computes the difference between the motion features from the mobile device (which are generated by the mobile device 402 ) and the motion features extracted from the captured images.
- the difference is computed on a pixel by pixel basis for a predetermined number of pixels at various locations in an image.
- the pixel with the smallest difference is determined to be the location of the mobile device 402 (with the user 412 rigidly attached thereto). The system can then use this conclusion to perform any environment-specific actions.
- the mobile device 402 corresponds to a piece of equipment that the user grasps and manipulates with a hand.
- this type of equipment may comprise a pointing device, a mobile telephone device, a game controller device, a game implement (such as a paddle or racket) and so on.
- the mobile device 402 can correspond to any piece of equipment of any size and shape and functionality that can monitor its own movement and report that movement to the system.
- the mobile device 402 may correspond to any piece of equipment that is worn by the user 412 or otherwise detachably fixed to the user.
- the mobile device 402 can be integrated with (or otherwise associated with) a wristwatch, pair of pants, dress, shirt, shoe, hat, belt, wristband, sweatband, patch, button, pin, necklace, ring, bracelet, eyeglasses, goggles, and so on.
- a scene contains two or more subjects, such as two or more users (not shown in FIG. 4 ). Each user may hold (or wear) his or her own mobile device.
- the system can determine the association between mobile devices and respective users.
- the matching process is run for each device.
- image motion estimation, which is computationally expensive, needs to be run only once regardless of how many devices are matched.
- the object that is associated with the mobile device 402 is actually a part of the mobile device 402 itself.
- the object may correspond to the housing of a mobile phone, the paddle of a game implement, etc.
- Still further interpretations of the terms “mobile device” and “object” are possible.
- the object corresponds to the user 412 who holds or is otherwise associated with the mobile device 402.
- FIG. 5 shows a high-level block depiction of a system 500 that performs the functions summarized above.
- the system 500 includes a mobile device 502 , an external camera system 504 , and a cross-modal sensor fusion processing system 506 .
- the mobile device 502 supplies device motion features measured on the mobile device to the cross-modal sensor fusion processing system 506 .
- the external camera system 504 captures images of the device 502 and sends these to the cross-modal sensor fusion processing system 506 .
- the cross-modal sensor fusion processing system 506 computes the image motion features. It also performs a correlation analysis of the motion features measured on the mobile device and the image motion features obtained from the captured images at various locations in the images.
- the cross-modal sensor fusion processing system 506 computes the difference between the device motion features measured on the mobile device and the image motion features obtained from the captured image at these pixel locations and the smallest difference indicates the location of the mobile device (and therefore the user attached thereto) in that image.
- FIG. 6 shows an overview of one type of mobile device 602 .
- the mobile device 602 incorporates or is otherwise associated with one or more position-determining devices 610 .
- the mobile device 602 can include one or more accelerometers 604 , one or more gyro devices 606 , one or more magnetometers 608 , one or more GPS units (not shown), one or more dead reckoning units (not shown), and so on.
- Each of the position-determining devices 610 uses a different technique to detect movement of the device, and, as a result, to provide a part of the motion features measured on the mobile device 602 .
- the mobile device 602 may include one or more other device processing components 612 which make use of the mobile device's motion features for any environment-specific purpose (unrelated to the motion analysis functionality described herein).
- the mobile device 602 also sends the mobile device's motion features to one or more destinations, such as the cross-modal sensor fusion processing system ( 506 of FIG. 5 ).
- the mobile device 602 can also send the mobile device's motion features to any other target system, such as a game system.
- FIG. 7 shows an overview of one type of external camera system 704 .
- the external camera system 704 can use one or more data capture techniques to capture a scene which contains the mobile device and an object, such as the user.
- the external camera system 704 can investigate the scene by irradiating it using any kind of electromagnetic radiation, including one or more of visible light, infrared light, radio waves, etc.
- the external camera system 704 can optionally include an illumination source 702 which bathes the scene in infrared light.
- the infrared light may correspond to structured light which provides a pattern of elements (e.g., dots, lines, etc.).
- the structured light deforms as it is cast over the surfaces of the objects in the scene.
- a depth camera 710 can capture the manner in which the structured light is deformed. Based on that information, the depth camera 710 can derive the distances between different parts of the scene and the external camera system 704 .
- the depth camera 710 can alternatively, or in addition, use other techniques to generate the depth image, such as a time-of-flight technique, a stereoscopic correspondence technique, etc.
- the external camera system 704 can alternatively, or in addition, capture other images of the scene.
- a video camera 706 can capture an RGB video image of the scene or a grayscale video image of the scene.
- An image processing module 708 can process the depth images provided by the depth camera 710 and/or one or more other images of the scene provided by other capture units.
- the Kinect® controller provided by Microsoft Corporation of Redmond, Wash., can be used to implement at least parts of the external camera system.
- the external camera system 704 can capture a video image of the scene.
- the external camera system 704 sends the video images to the cross-modal sensor fusion system 806, described in greater detail with respect to FIG. 8.
- one embodiment 800 of the cross-modal sensor fusion processing system 806 resides on computing device 900 that is described in greater detail with respect to FIG. 9 .
- the cross-modal sensor fusion processing system 806 receives device motion features measured onboard of a mobile device and images captured by the external camera system previously discussed.
- the image motion features are computed by the cross-modal sensor fusion processing system 806 .
- the device motion features can be velocities or 3D accelerations reported by sensors on the mobile device.
- the motion features of the mobile device and the captured images can be transmitted to the cross-modal sensor fusion system 806 via a communications link, such as, for example, a WiFi link or other communications link.
- the system 806 includes a velocity determination module 808 that determines the 2D velocity of the image features.
- the system 806 also includes an image acceleration estimation module that estimates 3D image accelerations by adding depth information to the 2D image velocities.
- a conversion module 814 converts the image coordinates into a common (e.g., real-world) coordinate frame used by the mobile device.
- the system 806 also includes a matching module 810 that matches the device motion features and the image motion features (e.g., matching the image velocities to the device velocities, or the image accelerations to the device accelerations, depending on what type of motion features are being used).
- a difference computation module 812 computes the differences between the device motion features and the image motion features (e.g., 3D device accelerations and the 3D image accelerations) at points in the captured images. The difference computation module 812 determines the location of the mobile device as the point in each image where the difference is the smallest.
- orientation is computed by combining information from the onboard accelerometers, gyroscopes and magnetometers. Because this orientation is with respect to magnetic north (as measured by the magnetometer) and gravity (as measured by the accelerometer, when the device is not moving), it is often considered an “absolute” orientation.
- the mobile device reports orientation to a standard “ENU” (east, north, up) coordinate system. While magnetic north is disturbed by the presence of metal and other magnetic fields present in indoor environments, in practice it tends to be constant in a given room. It is only important that magnetic north not change dramatically as the device moves about the area imaged by the depth camera (e.g., Kinect® sensor).
- Mobile device accelerometers report device acceleration in the 3D coordinate frame of the device. Having computed absolute orientation using the magnetometers, gyros and accelerometers, it is easy to transform the accelerometer outputs to the ENU coordinate frame and subtract acceleration due to gravity. Some mobile devices provide an API that performs this calculation to give the acceleration of the device in the ENU coordinate frame, without acceleration due to gravity. Of course, because it depends on device orientation, its accuracy is only as good as that of the orientation estimate. One mobile device in a prototype implementation transmits this device acceleration (ENU coordinates, gravity removed) over WiFi to the cross-modal sensor fusion system that performs sensor fusion.
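- As a hedged illustration of that transformation, the sketch below rotates a raw accelerometer sample into the ENU frame using the device's absolute orientation and then subtracts gravity. The matrix-based orientation representation and the sign convention (a stationary device, once rotated to ENU, reads +9.81 m/s² on the "up" axis) are assumptions of the sketch; as noted above, many devices expose an equivalent gravity-free acceleration directly through a platform API.

```python
import numpy as np

GRAVITY_ENU = np.array([0.0, 0.0, 9.81])  # assumed convention: stationary device reads +9.81 "up" after rotation

def device_accel_enu(accel_device, R_device_to_enu):
    """Express an accelerometer sample in ENU coordinates with gravity removed.

    accel_device: length-3 accelerometer reading in the device's own frame (m/s^2).
    R_device_to_enu: 3x3 rotation matrix derived from the device's absolute
        orientation (fused magnetometer, gyro and accelerometer data).
    """
    accel_enu = R_device_to_enu @ np.asarray(accel_device)
    return accel_enu - GRAVITY_ENU  # linear acceleration of the device in the ENU frame
```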
- the cross-modal sensor fusion technique compares image motion features from images of the device and device motion features from sensors on the device in order to track the device (and its user).
- only velocities are computed.
- accelerations are also computed.
- the following discussion focuses more on using accelerations in tracking the mobile device.
- the processing for using velocities to track the mobile device is basically a subset of that for using accelerations. For example, estimating velocity from images is already accomplished by computing optical flow. Computing like velocities on the mobile device involves integrating the accelerometer values from the device, as sketched below.
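- Purely as an illustration of that velocity variant, the fragment below integrates gravity-free device accelerations into a velocity estimate; the leaky integration used to limit accelerometer drift is an assumption of this sketch rather than part of the described technique.

```python
import numpy as np

def integrate_velocity(accel_samples, dt, decay=0.95):
    """Accumulate device accelerations (m/s^2) into velocity estimates (m/s).

    A mild exponential decay (illustrative value 0.95) keeps accelerometer
    bias from accumulating into unbounded velocity drift.
    """
    v = np.zeros(3)
    history = []
    for a in accel_samples:
        v = decay * v + np.asarray(a) * dt
        history.append(v.copy())
    return history
```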
- the cross-modal sensor fusion technique compares the 3D acceleration of the mobile device with 3D acceleration observed in video.
- the technique finds acceleration in video by first computing the velocity of movement of all the pixels in a color image using a standard optical flow technique. This 2D image-space velocity is augmented with depth information and converted to velocity in real-world 3D coordinates (meters per second). Acceleration is estimated at each point in the image using a Kalman filter. The following paragraphs describe each of these steps in detail.
- image motion is found by computing a dense optical flow on an entire color image.
- Dense optical flow algorithms model the motion observed in a pair of images as a displacement u, v at each pixel.
- optical flow algorithms There are a variety of optical flow algorithms.
- One implementation of the technique uses an optical flow algorithm known for its accuracy that performs a nonlinear optimization over multiple factors.
- there are many other ways to compute flow, including a conceptually simpler block-matching technique: for each point in the image at time t, the closest patch around the point is found in the neighborhood of the point at time t+1, using the sum of squared differences of image pixel intensities or other similarity metrics.
- the cross-modal sensor fusion technique computes flow from the current frame at time t to the frame at time t−1.
- the velocity u, v at each point x, y is denoted as u x,y and v x,y . It is noted that x, y are integer valued, while u, v are real-valued.
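- As a concrete, non-normative starting point, the sketch below computes a dense displacement field (u, v) with OpenCV's Farnebäck method. This is just one readily available flow implementation with typical default parameters, not the more accurate nonlinear algorithm referenced above.

```python
import cv2

def dense_flow(curr_bgr, prev_bgr):
    """Dense optical flow from the frame at time t back to the frame at time t-1.

    Returns two H x W arrays (u, v) of real-valued per-pixel displacements.
    """
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    # Flow is computed from the current frame (t) to the previous frame (t-1),
    # matching the convention above; the numeric arguments are common defaults.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow[..., 0], flow[..., 1]  # u_{x,y}, v_{x,y}
```

- A block-matching search as described above would be an equally valid, if slower, substitute for the library call.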
- Depth cameras, such as Microsoft Corporation's Kinect® sensor, report the distance to the nearest surface at every point in the depth image. Knowing the focal lengths of the depth and color cameras, and their relative position and orientation, the 3D position of a point in the color image may be calculated.
- One known external camera system provides an API to compute the 3D position of a point in the color camera in real world units (meters). The 3D position corresponding to a 2D point x, y in the color image at time t is denoted as z x,y,t .
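- A minimal pinhole-camera sketch of that lookup follows; it assumes the depth map is registered to the color image and that the color-camera intrinsics (fx, fy, cx, cy) are known from calibration. Real sensors such as the Kinect® provide their own mapping API, which would normally be used instead.

```python
import numpy as np

def backproject(x, y, depth_m, fx, fy, cx, cy):
    """3D position z_{x,y,t} (meters, camera frame) of color pixel (x, y),
    given its registered depth, under a simple pinhole model."""
    Z = depth_m
    X = (x - cx) * Z / fx
    Y = (y - cy) * Z / fy
    return np.array([X, Y, Z])
```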
- one embodiment of the cross-modal sensor fusion technique uses a Kalman filter-based technique that estimates velocity and acceleration at each pixel.
- Some embodiments of the cross-modal sensor fusion technique use a Kalman filter to estimate acceleration of moving objects in the image.
- the Kalman filter incorporates knowledge of sensor noise and is recursive (that is, it incorporates all previous observations). The technique thus allows much better estimates of acceleration compared to the approach of using finite differences.
- the basics of estimating acceleration employed in one embodiment of the cross-modal sensor fusion technique are described below.
- the Kalman filter is closely related to the simpler “exponential” filter.
- the Kalman filter is essentially this improved exponential filter, and includes a principled means to set the value of the gain given the uncertainty in both the prediction x t * and observation z t .
- the motion of a single object in 3D is first considered.
- the equations of motion predict the object's position x t *, velocity v t * and acceleration a t * from previous values, x t-1 , v t-1 and a t-1 :
- x t *=x t-1 +v t-1 Δt+½a t-1 Δt 2
- v t =v t-1 +k v*(z t −x t*)
- a t =a t-1 +k a*(z t −x t*)
- Kalman gain is computed via a conventional method for computing the optimal Kalman gain using two distinct phases of prediction and update.
- the predict phase uses the state estimate from a previous time step to produce an estimate of the state at the current time step.
- This predicted state estimate, or a priori state estimate is an estimate of the state at the current time step, but does not include observation information from the current time step.
- the update phase the current a priori prediction is combined with current observation information to refine the state estimate (called the a posteriori state estimate).
- the two phases alternate, with the prediction advancing the state until the next observation, and the update incorporating the observation, but this is not necessary.
- the Kalman gain is a function of the uncertainty in the predictive model x t * and observations z.
- the uncertainty in z t is related to the noise of the sensor.
- Kalman gain is time-varying. However, if the uncertainty of the predictive model and observations is constant, Kalman gain converges to a constant value, as presented above. This leads to a simplified implementation of the update equations, and further underscores the relationship between the Kalman filter and the simpler exponential filter.
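- To make the recursion concrete, here is a one-dimensional, constant-gain version of the predict/update cycle in the form given above; the gain values are arbitrary placeholders rather than optimally computed Kalman gains.

```python
def kalman_constant_gain(observations, dt, kx=0.6, kv=0.3, ka=0.1):
    """Constant-gain predict/update recursion for one tracked scalar position.

    Each step predicts position from the motion model, then corrects position,
    velocity and acceleration with fixed gains applied to the innovation
    (z - x_pred), mirroring the update equations above.
    """
    x = v = a = 0.0
    estimates = []
    for z in observations:
        x_pred = x + v * dt + 0.5 * a * dt ** 2  # predict phase (a priori estimate)
        innovation = z - x_pred                  # observation minus prediction
        x += kx * innovation                     # update phase
        v += kv * innovation
        a += ka * innovation
        estimates.append((x, v, a))
    return estimates
```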
- the cross-modal sensor fusion technique maintains a Kalman filter of the form described above to estimate 3D acceleration at pixel locations in the image (in some embodiments at each pixel).
- the estimated position, velocity and acceleration at each pixel location x, y are denoted as x x,y,t , v x,y,t and a x,y,t respectively.
- Optical flow information is used in two ways: first, the flow at a point in the image is a measurement of the velocity of the object under that point. It thus acts as input to estimate of acceleration using the Kalman filter. Second, the technique can use flow to propagate motion estimates spatially, so that they track the patches of the image whose motion is being estimated. In this way the Kalman filter can use many observations to accurately estimate the acceleration of a given patch of an object as it moves about the image. This is accomplished in the following manner:
- x x,y,t =x x+u,y+v,t-1 +k x*(z x,y,t −x* x,y,t)
- v x,y,t =v x+u,y+v,t-1 +k v*(z x,y,t −x* x,y,t)
- a x,y,t =a x+u,y+v,t-1 +k a*(z x,y,t −x* x,y,t)
- x, y are integer-valued, while u, v are real-valued.
- x x,y,t-1 , v x,y,t-1 and a x,y,t-1 are stored as arrays with the same dimensions as the color image, but because x+u and y+v are real-valued, the quantities x x+u,y+v,t-1 , v x+u,y+v,t-1 , and a x+u,y+v,t-1 are best computed by bilinear interpolation.
- the Kalman filter at x, y updates motion estimates found at x+u, y+v in the previous time step. In this way motion estimates track the objects whose motion is being estimated.
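- A NumPy sketch of this flow-propagated, per-pixel update is given below. The use of scipy.ndimage.map_coordinates for the bilinear interpolation and the fixed scalar gains are implementation assumptions of the sketch, not details taken from the described embodiment.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def per_pixel_kalman_step(x_prev, v_prev, a_prev, flow_u, flow_v, z, dt,
                          kx=0.6, kv=0.3, ka=0.1):
    """One constant-gain Kalman update at every pixel, with flow propagation.

    x_prev, v_prev, a_prev: (H, W, 3) position/velocity/acceleration estimates at t-1.
    flow_u, flow_v: (H, W) optical flow from frame t back to frame t-1 (pixels).
    z: (H, W, 3) observed 3D position of each pixel at time t (from depth).
    """
    H, W, _ = z.shape
    rows, cols = np.mgrid[0:H, 0:W].astype(np.float64)
    coords = [rows + flow_v, cols + flow_u]        # where each patch was at t-1 (real-valued)

    def warp(state):                               # bilinear sampling of the previous estimates
        return np.stack([map_coordinates(state[..., c], coords, order=1, mode='nearest')
                         for c in range(3)], axis=-1)

    xw, vw, aw = warp(x_prev), warp(v_prev), warp(a_prev)
    x_pred = xw + vw * dt + 0.5 * aw * dt ** 2     # a priori position estimate
    innovation = z - x_pred
    return (xw + kx * innovation,                  # updated position, velocity and acceleration
            vw + kv * innovation,
            aw + ka * innovation)
```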
- the mobile device is placed display-side down on a plane that is easily observed by the camera, such as a wall or desk. Viewing the color video stream of the camera, the user clicks on three or more points on the plane.
- the 3D unit normal n k of the plane in coordinates of the camera is computed by first calculating the 3D position of each clicked point and fitting a plane by a least-squares procedure.
- the same normal n w in ENU coordinates is computed by rotating the unit vector z (out of the display of the device) by the device orientation.
- the gravity unit vector g k in camera coordinates is taken from the 3-axis accelerometer built into some camera systems, such as, for example, the Kinect® sensor.
- Gravity g w in the ENU coordinate frame is by definition −z.
- the 3×3 rotation matrix M camera→world that brings a 3D camera point to the ENU coordinate frame is calculated by matching the normals n k and n w , as well as the gravity vectors g k and g w , and forming orthonormal bases K and W by successive cross products:
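- The closed form is not reproduced here, but the construction can be sketched as follows under the assumption that "matching" means aligning the two vector pairs: orthonormal bases K (camera frame) and W (ENU frame) are built from the normal and gravity directions by successive cross products, and the camera-to-world rotation is then W·Kᵀ.

```python
import numpy as np

def basis_from(a, b):
    """Right-handed orthonormal basis built from two non-parallel unit vectors
    by successive cross products (columns of the returned 3x3 matrix)."""
    e1 = a / np.linalg.norm(a)
    e2 = np.cross(a, b)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return np.column_stack([e1, e2, e3])

def camera_to_world_rotation(n_k, g_k, n_w, g_w):
    """3x3 rotation taking camera-frame points into the ENU ('world') frame,
    found by aligning the plane normal and gravity seen in both frames."""
    K = basis_from(n_k, g_k)   # basis expressed in camera coordinates
    W = basis_from(n_w, g_w)   # the same physical directions expressed in ENU
    return W @ K.T             # maps camera coordinates into ENU coordinates
```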
- 3D image accelerations are estimated at each pixel and transformed to the ENU coordinate system as described above.
- locating the device by computing the instantaneous minimum over r x,y,t will fail to find the device when it is momentarily still or moving with constant velocity.
- device acceleration may be near zero and so matches many parts of the scene that are not moving, such as the background.
- r x,y,t is smoothed with an exponential filter to obtain s x,y,t . This smoothed value is "tracked" using optical flow and bilinear interpolation, in the same manner as the Kalman motion estimates.
- because the latency of the depth camera (e.g., the Kinect® sensor) differs from that of the mobile device's sensors, the measure of similarity may be inaccurate.
- the cross-modal sensor fusion technique accounts for the relative latency of the camera (e.g., Kinect® sensor) by artificially lagging the mobile device readings by some small number of frames. In one prototype implementation this lag is tuned empirically to four frames, approximately 64 ms.
- the minimum value over s x,y,t can be checked against a threshold to reject matches of poor quality.
- the minimum value at x*, y* is denoted as s*.
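- The fragment below sketches how these pieces (the smoothed difference s, the artificial lag of the device stream, and the quality threshold on s*) might fit together. The four-frame lag follows the prototype mentioned above, while the smoothing gain and threshold value are placeholders, and the flow-based tracking of s is omitted for brevity.

```python
import collections
import numpy as np

LAG_FRAMES = 4          # prototype lag (~64 ms); tuned empirically
ALPHA = 0.2             # exponential smoothing gain (illustrative)
MATCH_THRESHOLD = 1.0   # reject matches whose s* exceeds this (illustrative, m/s^2)

class DeviceMatcher:
    def __init__(self):
        self.device_buffer = collections.deque(maxlen=LAG_FRAMES + 1)
        self.s = None   # smoothed per-pixel difference map s_{x,y,t}

    def match(self, image_accel_enu, device_accel_enu):
        """Return (row, col) of the best match for this frame, or None if rejected."""
        self.device_buffer.append(np.asarray(device_accel_enu))
        if len(self.device_buffer) <= LAG_FRAMES:
            return None                                   # not enough history to apply the lag yet
        d_lagged = self.device_buffer[0]                  # device reading from LAG_FRAMES ago
        r = np.linalg.norm(image_accel_enu - d_lagged, axis=2)
        self.s = r if self.s is None else ALPHA * r + (1 - ALPHA) * self.s
        # (flow-based warping of self.s between frames is omitted in this sketch)
        row, col = np.unravel_index(np.argmin(self.s), self.s.shape)
        return (row, col) if self.s[row, col] < MATCH_THRESHOLD else None
```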
- FIG. 9 illustrates a simplified example of a general-purpose computer system on which various embodiments and elements of the cross-modal sensor fusion technique, as described herein, may be implemented. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 9 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
- FIG. 9 shows a general system diagram showing a simplified computing device 900 .
- Such computing devices can typically be found in devices having at least some minimum computational capability, including, but not limited to, personal computers, server computers, hand-held computing devices, laptop or mobile computers, communications devices such as cell phones and PDAs, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, audio or video media players, etc.
- the device should have a sufficient computational capability and system memory to enable basic computational operations.
- the computational capability is generally illustrated by one or more processing unit(s) 910 , and may also include one or more GPUs 915 , either or both in communication with system memory 920 .
- the processing unit(s) 910 of the general computing device may be specialized microprocessors, such as a DSP, a VLIW, or other micro-controller, or can be conventional CPUs having one or more processing cores, including specialized GPU-based cores in a multi-core CPU.
- the computing device can be implemented as an ASIC or FPGA, for example.
- the simplified computing device of FIG. 9 may also include other components, such as, for example, a communications interface 930 .
- the simplified computing device of FIG. 9 may also include one or more conventional computer input devices 940 (e.g., pointing devices, keyboards, audio and speech input devices, video input devices, haptic input devices, devices for receiving wired or wireless data transmissions, etc.).
- the simplified computing device of FIG. 9 may also include other optional components, such as, for example, one or more conventional computer output devices 950 (e.g., display device(s) 955 , audio output devices, video output devices, devices for transmitting wired or wireless data transmissions, etc.).
- typical communications interfaces 930 , input devices 940 , output devices 950 , and storage devices 960 for general-purpose computers are well known to those skilled in the art, and will not be described in detail herein.
- the simplified computing device of FIG. 9 may also include a variety of computer readable media.
- Computer readable media can be any available media that can be accessed by computer 900 via storage devices 960 and includes both volatile and nonvolatile media that is either removable 970 and/or non-removable 980 , for storage of information such as computer-readable or computer-executable instructions, data structures, program modules, or other data.
- Computer readable media may comprise computer storage media and communication media.
- Computer storage media refers to tangible computer or machine readable media or storage devices such as DVD's, CD's, floppy disks, tape drives, hard drives, optical drives, solid state memory devices, RAM, ROM, EEPROM, flash memory or other memory technology, magnetic cassettes, magnetic tapes, magnetic disk storage, or other magnetic storage devices, or any other device which can be used to store the desired information and which can be accessed by one or more computing devices.
- modulated data signal or “carrier wave” generally refer to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection carrying one or more modulated data signals, and wireless media such as acoustic, RF, infrared, laser, and other wireless media for transmitting and/or receiving one or more modulated data signals or carrier waves. Combinations of any of the above should also be included within the scope of communication media.
- software, programs, and/or computer program products embodying some or all of the various embodiments of the cross-modal sensor fusion technique described herein, or portions thereof, may be stored, received, transmitted, or read from any desired combination of computer or machine readable media or storage devices and communication media in the form of computer executable instructions or other data structures.
- cross-modal sensor fusion technique described herein may be further described in the general context of computer-executable instructions, such as program modules, being executed by a computing device.
- program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
- the embodiments described herein may also be practiced in distributed computing environments where tasks are performed by one or more remote processing devices, or within a cloud of one or more devices, that are linked through one or more communications networks.
- program modules may be located in both local and remote computer storage media including media storage devices.
- the aforementioned instructions may be implemented, in part or in whole, as hardware logic circuits, which may or may not include a processor.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
x t =x t-1+α(z t −x t-1)
where the gain α∈(0,1) controls the degree to which the filter incorporates the "innovation" z t −x t-1 . The smaller the gain, the less the filter follows the observation z t , and the more the signal is smoothed. An improved version of this filter is
x t =x t-1+α(z t −x t*)
where xt* is a prediction of xt given xt-1. The Kalman filter is essentially this improved exponential filter, and includes a principled means to set the value of the gain given the uncertainty in both the prediction xt* and observation zt.
x t *=x t-1 +v t-1 Δt+½a t-1 Δt 2
v t *=v t-1 +a t-1 Δt
a t *=a t-1
Given observation zt of the position of a tracked object, the technique updates the estimates of position, velocity and acceleration with
x t =x t-1 +k x*(z t −x t*)
v t =v t-1 +k v*(z t −x t*)
a t =a t-1 +k a*(z t −x t*)
where * denotes element-wise multiplication, and Kalman gains kx, kv, ka relate the innovation, or error in the prediction of position, to changes in each of the estimates of position, velocity and acceleration. Kalman gain is computed via a conventional method for computing the optimal Kalman gain using two distinct phases of prediction and update. The predict phase uses the state estimate from a previous time step to produce an estimate of the state at the current time step. This predicted state estimate, or a priori state estimate, is an estimate of the state at the current time step, but does not include observation information from the current time step. In the update phase, the current a priori prediction is combined with current observation information to refine the state estimate (called the a posteriori state estimate). Typically, the two phases alternate, with the prediction advancing the state until the next observation, and the update incorporating the observation, but this is not necessary. Hence, the Kalman gain is a function of the uncertainty in the predictive model xt* and observations z. In particular, it is preferable to assign a high uncertainty to the estimate of acceleration at to reflect the belief that acceleration of the object varies over time. Similarly, the uncertainty in zt is related to the noise of the sensor.
x x,y,t =x x+u,y+v,t-1 +k x*(z x,y,t −x* x,y,t)
v x,y,t =v x+u,y+v,t-1 +k v*(z x,y,t −x* x,y,t)
a x,y,t =a x+u,y+v,t-1 +k a*(z x,y,t −x* x,y,t)
r x,y,t =√(∥a x,y,t −d t ∥ 2 )
Regions of the image that move with the device will give small values of rx,y,t. In particular, the hope is that pixels that lie on the device will give the smallest values. If one assumes that the device is present in the scene, it may suffice to locate its position in the image by finding x*, y* that minimizes rx,y,t. However, other objects that momentarily move with the device, such as those rigidly attached (e.g., the hand holding the device and the arm) may also match well.
Claims (20)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/096,840 US9679199B2 (en) | 2013-12-04 | 2013-12-04 | Fusing device and image motion for user identification, tracking and device association |
EP14812379.7A EP3077992B1 (en) | 2013-12-04 | 2014-11-26 | Process and system for determining the location of an object by fusing motion features and iamges of the object |
PCT/US2014/067517 WO2015084667A1 (en) | 2013-12-04 | 2014-11-26 | Fusing device and image motion for user identification, tracking and device association |
CN201480066398.7A CN105814609B (en) | 2013-12-04 | 2014-11-26 | For user's identification, tracking and the associated fusion device of equipment and image motion |
US15/592,344 US20170330031A1 (en) | 2013-12-04 | 2017-05-11 | Fusing device and image motion for user identification, tracking and device association |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/096,840 US9679199B2 (en) | 2013-12-04 | 2013-12-04 | Fusing device and image motion for user identification, tracking and device association |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/592,344 Continuation US20170330031A1 (en) | 2013-12-04 | 2017-05-11 | Fusing device and image motion for user identification, tracking and device association |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150154447A1 US20150154447A1 (en) | 2015-06-04 |
US9679199B2 true US9679199B2 (en) | 2017-06-13 |
Family
ID=52101621
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/096,840 Active 2035-05-25 US9679199B2 (en) | 2013-12-04 | 2013-12-04 | Fusing device and image motion for user identification, tracking and device association |
US15/592,344 Abandoned US20170330031A1 (en) | 2013-12-04 | 2017-05-11 | Fusing device and image motion for user identification, tracking and device association |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/592,344 Abandoned US20170330031A1 (en) | 2013-12-04 | 2017-05-11 | Fusing device and image motion for user identification, tracking and device association |
Country Status (4)
Country | Link |
---|---|
US (2) | US9679199B2 (en) |
EP (1) | EP3077992B1 (en) |
CN (1) | CN105814609B (en) |
WO (1) | WO2015084667A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11389962B2 (en) * | 2010-05-24 | 2022-07-19 | Teladoc Health, Inc. | Telepresence robot system that can be accessed by a cellular phone |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9779596B2 (en) | 2012-10-24 | 2017-10-03 | Apple Inc. | Devices and methods for locating accessories of an electronic device |
US9437000B2 (en) * | 2014-02-20 | 2016-09-06 | Google Inc. | Odometry feature matching |
EP2988288A1 (en) * | 2014-08-22 | 2016-02-24 | Moog B.V. | Medical simulator handpiece |
WO2016056449A1 (en) * | 2014-10-10 | 2016-04-14 | 富士通株式会社 | Skill determination program, skill determination method, skill determination device, and server |
US9412169B2 (en) * | 2014-11-21 | 2016-08-09 | iProov | Real-time visual feedback for user positioning with respect to a camera and a display |
WO2016095192A1 (en) * | 2014-12-19 | 2016-06-23 | SZ DJI Technology Co., Ltd. | Optical-flow imaging system and method using ultrasonic depth sensing |
JP6428277B2 (en) * | 2015-01-09 | 2018-11-28 | 富士通株式会社 | Object association method, apparatus, and program |
US10121259B2 (en) * | 2015-06-04 | 2018-11-06 | New York University Langone Medical | System and method for determining motion and structure from optical flow |
US10238976B2 (en) * | 2016-07-07 | 2019-03-26 | Disney Enterprises, Inc. | Location-based experience with interactive merchandise |
US20180204331A1 (en) * | 2016-07-21 | 2018-07-19 | Gopro, Inc. | Subject tracking systems for a movable imaging system |
CN107816990B (en) * | 2016-09-12 | 2020-03-31 | 华为技术有限公司 | Positioning method and positioning device |
KR102656557B1 (en) * | 2016-10-07 | 2024-04-12 | 삼성전자주식회사 | Image processing method and electronic device supporting the same |
CN108696293B (en) * | 2017-03-03 | 2020-11-10 | 株式会社理光 | Wearable device, mobile device and connection method thereof |
CN109074367B (en) * | 2017-03-22 | 2021-02-05 | 华为技术有限公司 | Method for determining terminal held by object in photo and terminal thereof |
EP3479767A1 (en) * | 2017-11-03 | 2019-05-08 | Koninklijke Philips N.V. | Distance measurement devices, systems and methods, particularly for use in vital signs measurements |
US10122969B1 (en) | 2017-12-07 | 2018-11-06 | Microsoft Technology Licensing, Llc | Video capture systems and methods |
US11194842B2 (en) * | 2018-01-18 | 2021-12-07 | Samsung Electronics Company, Ltd. | Methods and systems for interacting with mobile device |
US10706556B2 (en) | 2018-05-09 | 2020-07-07 | Microsoft Technology Licensing, Llc | Skeleton-based supplementation for foreground image segmentation |
EP3605287A1 (en) * | 2018-07-31 | 2020-02-05 | Nokia Technologies Oy | An apparatus, method and computer program for adjusting output signals |
US11641563B2 (en) | 2018-09-28 | 2023-05-02 | Apple Inc. | System and method for locating wireless accessories |
CN109886326B (en) * | 2019-01-31 | 2022-01-04 | 深圳市商汤科技有限公司 | Cross-modal information retrieval method and device and storage medium |
US20220200789A1 (en) * | 2019-04-17 | 2022-06-23 | Apple Inc. | Sharing keys for a wireless accessory |
CN113796099A (en) | 2019-04-17 | 2021-12-14 | 苹果公司 | Finding target device using augmented reality |
US11863671B1 (en) | 2019-04-17 | 2024-01-02 | Apple Inc. | Accessory assisted account recovery |
US11889302B2 (en) | 2020-08-28 | 2024-01-30 | Apple Inc. | Maintenance of wireless devices |
US12073705B2 (en) | 2021-05-07 | 2024-08-27 | Apple Inc. | Separation alerts for notification while traveling |
US12143895B2 (en) | 2021-06-04 | 2024-11-12 | Apple Inc. | Pairing groups of accessories |
CN114185071A (en) * | 2021-12-10 | 2022-03-15 | 武汉市虎联智能科技有限公司 | Positioning system and method based on object recognition and spatial position perception |
CN114821006B (en) * | 2022-06-23 | 2022-09-20 | 盾钰(上海)互联网科技有限公司 | Twin state detection method and system based on interactive indirect reasoning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101950200B (en) * | 2010-09-21 | 2011-12-21 | 浙江大学 | Camera based method and device for controlling game map and role shift by eyeballs |
2013
- 2013-12-04 US US14/096,840 patent/US9679199B2/en active Active
2014
- 2014-11-26 CN CN201480066398.7A patent/CN105814609B/en active Active
- 2014-11-26 EP EP14812379.7A patent/EP3077992B1/en active Active
- 2014-11-26 WO PCT/US2014/067517 patent/WO2015084667A1/en active Application Filing
2017
- 2017-05-11 US US15/592,344 patent/US20170330031A1/en not_active Abandoned
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7583275B2 (en) | 2002-10-15 | 2009-09-01 | University Of Southern California | Modeling and video projection for augmented virtual environments |
US7236091B2 (en) | 2005-02-10 | 2007-06-26 | Pinc Solutions | Position-tracking system |
US20060240866A1 (en) * | 2005-04-25 | 2006-10-26 | Texas Instruments Incorporated | Method and system for controlling a portable communication device based on its orientation |
US7761233B2 (en) | 2006-06-30 | 2010-07-20 | International Business Machines Corporation | Apparatus and method for measuring the accurate position of moving objects in an indoor environment |
US8380246B2 (en) | 2007-03-01 | 2013-02-19 | Microsoft Corporation | Connecting mobile devices via interactive input medium |
US20130166202A1 (en) | 2007-08-06 | 2013-06-27 | Amrit Bandyopadhyay | System and method for locating, tracking, and/or monitoring the status of personnel and/or assets both indoors and outdoors |
US20100197390A1 (en) | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Pose tracking pipeline |
US20130046505A1 (en) | 2011-08-15 | 2013-02-21 | Qualcomm Incorporated | Methods and apparatuses for use in classifying a motion state of a mobile device |
US20130069931A1 (en) | 2011-09-15 | 2013-03-21 | Microsoft Corporation | Correlating movement information received from different sources |
US20130113704A1 (en) | 2011-11-04 | 2013-05-09 | The Regents Of The University Of California | Data fusion and mutual calibration for a sensor network and a vision system |
US20130162778A1 (en) | 2011-12-26 | 2013-06-27 | Semiconductor Energy Laboratory Co., Ltd. | Motion recognition device |
US8548608B2 (en) | 2012-03-02 | 2013-10-01 | Microsoft Corporation | Sensor fusion algorithm |
Non-Patent Citations (27)
Title |
---|
"International Preliminary Report on Patentability Issued in PCT Application No. PCT/US2014/067517", Mailed Date: Jan. 25, 2016, 8 Pages. |
"Search Report and Written Opinion Issued in PCT Patent Application No. PCT/US2014/067517", Mailed Date: Feb. 17, 2015, 10 Pages. |
"Second Written Opinion Issued in PCT Patent Application No. PCT/US2014/067517", Mailed Date: Oct. 27, 2015, 7 Pages. |
Brox, et al., "High Accuracy Optical Flow Estimation Based on a Theory for Warping", In Proceedings of 8th European Conference on Computer Vision, Springer, vol. 4, May, 2004, 12 pages. |
Hinckley, Ken, "Synchronous Gestures for Multiple Persons and Computers", In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Jul. 9, 2009, 10 pages. |
Holmquist, et al., "Smart-Its Friends: A Technique for Users to Easily Establish Connections Between Smart Artefacts", In Proceedings of Third International Conference on Ubiquitous Computing, Sep. 2001, 6 pages. |
Kawai, et al., "Identification and Positioning Based on Motion Sensors and a Video Camera", In Proceedings of 4th IASTED International Conference on Web-Based Education, Feb. 21, 2005, 7 pages. |
Kozlowski, et al., "Recognizing the Sex of a Walker From Dynamic Point-Light Display", In Journal of Perception & Psychophysics, vol. 21, Issue 6, Nov. 1977, 7 pages. |
Marquardt, et al., "Cross-Device Interaction via Micro-Mobility and F-Formations", In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Oct. 7, 2012, 10 pages. |
Mayrhofer, et al., "Shake Well Before Use: Intuitive and Secure Pairing of Mobile Devices", In IEEE Transactions on Mobile Computing, vol. 8, Issue 6, Jun. 2009, 15 pages. |
Olwal, et al., "Surface Fusion: Unobtrusive Tracking of Everyday Objects in Tangible Interfaces", In Proceedings of the Conference of Graphics Interface, May 28, 2008, 8 pages. |
Perera, et al., "Context Aware Computing for the Internet of Things: A Survey", In IEEE Communications Surveys & Tutorials, May 3, 2013, 41 pages. |
Plotz, et al., "Automatic Synchronization of Wearable Sensors and Video-Cameras for Ground Truth Annotation-A Practical Approach", In Proceedings of 16th International Symposium on Wearable Computers, Jun. 18, 2012, 4 pages. |
Plotz, et al., "Automatic Synchronization of Wearable Sensors and Video-Cameras for Ground Truth Annotation—A Practical Approach", In Proceedings of 16th International Symposium on Wearable Computers, Jun. 18, 2012, 4 pages. |
Rekimoto, et al., "SyncTap: An Interaction Technique for Mobile Networking", In Proceedings of 5th International Symposium on Human-Computer Interaction with Mobile Devices and Services, Sep. 8, 2003, 12 pages. |
Rofouei, et al., "Your Phone or Mine?: Fusing Body, Touch, and Device Sensing for Multi-User Device-Display Interaction", In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, May 5, 2012, 4 pages. |
Schmidt, et al., "PhoneTouch: A Technique for Direct Phone Interaction on Surfaces", In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, Oct. 3, 2010, 4 pages. |
Shi et al., "A Rotation Based Method for Detecting On-body Positions of Mobile Devices", Sep. 21, 2011, ACM, Proceedings of 13th Int. Conf. on Ubiquitous Computing, Ubicomp'11, p. 559-560. * |
Shigeta, et al., "Identifying a Moving Object with an Accelerometer in a Camera View", In International Conference on Intelligent Robots and Systems, Sep. 22, 2008, 6 pages. |
Smith, et al., "A Wirelessly-Powered Platform for Sensing and Computation", In Proceedings of 8th International Conference on Ubiquitous Computing, Sep. 17, 2006, 12 pages. |
Stein et al., "Accelerometer Localization in the View of a Stationary Camera", May 30, 2012, IEEE, 2012 9th Conf. on Computer and Robot Vision, p. 109-116. * |
Teixeira, et al., "Tasking Networked CCTV Cameras and Mobile Phones to Identify and Localize Multiple People", In Proceedings of the 12th ACM International Conference on Ubiquitous Computing, Sep. 26, 2010, 10 pages. |
Want, et al., "The Active Badge Location System", In Journal of ACM Transactions on Information Systems, vol. 10, Issue 1, Jan. 1992, 12 pages. |
Weenk, et al., "Automatic Identification of Inertial Sensor Placement on Human Body Segments During Walking", In Journal of Neuro Engineering and Rehabilitation, Mar. 21, 2013, 9 pages. |
Welch, et al., "An Introduction to the Kalman Filter", In Technical Report TR 95-041, Retrieved on: Oct. 16, 2013, 16 pages. |
Wilson, A., "BlueTable: Connecting Wireless Mobile Devices on Interactive Surfaces Using Vision-Based Handshaking", In Proceedings of Graphics Interface, May 28, 2007, 7 pages. |
Wolfe, Jeremy M., "Guided Search 2.0: A Revised Model of Visual Search", In Journal of Psychonomic Bulletin & Review, vol. 1, Issue 2, Jan. 1994, 37 pages. |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11389962B2 (en) * | 2010-05-24 | 2022-07-19 | Teladoc Health, Inc. | Telepresence robot system that can be accessed by a cellular phone |
Also Published As
Publication number | Publication date |
---|---|
CN105814609B (en) | 2019-11-12 |
US20150154447A1 (en) | 2015-06-04 |
CN105814609A (en) | 2016-07-27 |
EP3077992A1 (en) | 2016-10-12 |
EP3077992B1 (en) | 2019-11-06 |
WO2015084667A1 (en) | 2015-06-11 |
US20170330031A1 (en) | 2017-11-16 |
Similar Documents
Publication | Title |
---|---|
US9679199B2 (en) | Fusing device and image motion for user identification, tracking and device association | |
US9411037B2 (en) | Calibration of Wi-Fi localization from video localization | |
US20180328753A1 (en) | Local location mapping method and system | |
US9875579B2 (en) | Techniques for enhanced accurate pose estimation | |
CN105424030B (en) | Fusion navigation device and method based on wireless fingerprint and MEMS sensor | |
US9529426B2 (en) | Head pose tracking using a depth camera | |
Zhao et al. | Enhancing camera-based multimodal indoor localization with device-free movement measurement using WiFi | |
Elloumi et al. | Indoor pedestrian localization with a smartphone: A comparison of inertial and vision-based methods | |
US9576183B2 (en) | Fast initialization for monocular visual SLAM | |
JP5181704B2 (en) | Data processing apparatus, posture estimation system, posture estimation method and program | |
WO2017215024A1 (en) | Pedestrian navigation device and method based on novel multi-sensor fusion technology | |
US10733798B2 (en) | In situ creation of planar natural feature targets | |
US20150185018A1 (en) | Methods and Systems for Determining Estimation of Motion of a Device | |
JP2016538053A (en) | Detection of changing features of objects with non-fixed devices | |
US11162791B2 (en) | Method and system for point of sale ordering | |
Li et al. | RD-VIO: Robust visual-inertial odometry for mobile augmented reality in dynamic environments | |
US20170311125A1 (en) | Method of setting up a tracking system | |
CN112525197A (en) | Ultra-wideband inertial navigation fusion pose estimation method based on graph optimization algorithm | |
Alcázar-Fernández et al. | Seamless mobile indoor navigation with VLP-PDR | |
CN113610702B (en) | Picture construction method and device, electronic equipment and storage medium | |
CN116249872A (en) | Indoor positioning with multiple motion estimators | |
Hoseinitabatabaei et al. | Towards a position and orientation independent approach for pervasive observation of user direction with mobile phones | |
Koc et al. | Indoor mapping and positioning using augmented reality | |
US20240271938A1 (en) | Smartphone-based inertial odometry | |
US12174024B2 (en) | Estimating camera motion through visual tracking in low contrast high motion single camera systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WILSON, ANDREW D.;BENKO, HRVOJE;SIGNING DATES FROM 20131129 TO 20131203;REEL/FRAME:031876/0402 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034747/0417 Effective date: 20141014 |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:039025/0454 Effective date: 20141014 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |