US8818028B2 - Systems and methods for accurate user foreground video extraction - Google Patents
Systems and methods for accurate user foreground video extraction
- Publication number
- US8818028B2 (application US13/083,470)
- Authority
- US
- United States
- Prior art keywords
- pixels
- user
- pixel
- region
- foreground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the present invention is related to the field of video processing, and more specifically towards systems and methods for accurate user foreground video extraction.
- Background subtraction comprises the removal of a background from a live video, which results in a video comprising only the foreground of the live video.
- Conventional video processing techniques use such background subtraction processes for video conference applications. For example, the foreground of the live video may be extracted and then inserted or embedded into a second background.
- conventional video processing techniques insert or embed a foreground from a live video into a second background
- the conventional techniques do not provide an accurate or clear foreground video.
- pixels comprising a user's hair or portions of the user's fingers may not be accurately represented by a depth camera and as such provide a poor representation of a user.
- the extracted foreground video may appear to be of poor quality.
- the systems and methods may provide increased user foreground accuracy through processing of a depth image and a color image.
- the systems and methods disclosed herein provide accurate user foreground video extraction.
- the systems and methods may receive a depth image and a color image of a frame from a live video from at least one camera.
- the depth image is processed by identifying a categorization for each of its pixels.
- the categorization may be based on a comparison of depth values for each of the pixels and a pixel history such that pixels may be categorized as one of a foreground pixel, background pixel, unclear pixel, or an unknown pixel.
- a region map is then created based on the categorization of the pixels.
- an unclear region of the color image is identified based on at least the region map.
- An unclear region band of the color image is then created.
- the unclear region band may at least comprise unclear pixels.
- the unclear pixels in the unclear region band are then segmented or distributed between the foreground and background of the color image.
- FIG. 1 illustrates an example video comprising a foreground portion, background portion and an unclear portion in accordance with some embodiments.
- FIG. 2 illustrates an example video with the background portion subtracted or removed and the unclear portion segmented between the foreground and the background.
- FIG. 3 is a flow diagram illustrating an example embodiment of a method for accurate user foreground video extraction.
- FIG. 4 illustrates a high level abstraction of an example method for the depth based processing of a depth image for identifying sections of the depth image.
- FIG. 5 is a flow diagram illustrating an example embodiment of a method for depth based processing for identifying a foreground portion and a background portion of a video.
- FIG. 6 is an example of grouping pixels of a depth image into connected components in accordance with some embodiments.
- FIG. 7 is a high level abstraction of an example method for color image based processing to identify a foreground portion and a background portion of a video and segmenting an unclear region to extract and accurately define a user foreground video.
- FIG. 8 is a flow diagram of a method for color based processing for the identification of a foreground portion and a background portion to extract a user foreground video.
- FIG. 9A illustrates an example of foreground region filling in accordance with some embodiments.
- FIG. 9B illustrates an example of a completed foreground region filling in accordance with some embodiments.
- FIG. 10 is an example graph cut for segmenting pixels in an unclear band between the foreground and the background in accordance with some embodiments
- FIG. 11 illustrates an embodiment of a camera system for foreground video embedding in accordance with some embodiments.
- FIG. 12 illustrates an embodiment of a computer system and network system that incorporates the foreground video embedding systems and methods of the present invention.
- the systems, methods, and circuits disclosed herein relate to accurate user foreground video extraction.
- FIG. 1 illustrates an example video 100 .
- the example video 100 comprises a background portion 110 , an unclear portion 130 , and a foreground portion 120 .
- the background portion 110 may comprise a wall, outdoor scene, or any other background scene and the foreground portion 120 may comprise a human user or presenter.
- the foreground portion 120 may comprise any identifiable object or entity.
- the unclear portion 130 may comprise the boundary between the background portion 110 and the foreground portion 120 .
- the unclear portion 130 may comprise an area of pixels that have not been identified with certainty of belonging to the background portion 110 or the foreground portion 120 .
- the example video 100 may be divided into at least three portions—a background 110 , a foreground 120 , and the unclear portion 130 .
- the video 100 comprises a user speaking in a room
- the user may comprise the foreground portion 120
- a wall of the room may comprise the background portion 110
- the unclear portion 130 may comprise a boundary around the user and the wall.
- FIG. 2 illustrates an example processed video 200 .
- the processed video 200 comprises a foreground portion 220 and a background portion 210 .
- any unclear portions of the video have been categorized as either foreground or background after processing of depth and color images of the video.
- the processed video 200 approximates the video 100 with the segmentation or categorization of pixels within the unclear portion as foreground pixels or background pixels.
- the segmenting of the unclear pixels of the unclear portion may comprise distributing the unclear pixels between the foreground region and the background region. For example, an erosion or dilation of the image may occur such that the boundaries of the background region and foreground region change.
- the background portion 210 may be subtracted or removed in order to create a foreground video.
- FIG. 3 illustrates an example method 300 for accurate user foreground video extraction.
- the identified background portion may be removed to create a foreground video.
- the method 300 processes a depth image and a color image to extract a user foreground video.
- a color image and a depth image are received.
- the depth image may comprise information indicating the distance of each pixel of an image frame of a video from a sensor.
- the color image may comprise color pixel information of an image frame of a video.
- the depth and color camera information may be received from a three dimensional (3D) camera, depth camera, z-camera, range camera, or from a plurality of sources.
- the color information may be received from a color camera and the depth information may be received from a depth camera.
- the color information and depth information may be received from a single camera.
- the color information may be received from a red-blue-green (RGB) sensor on a camera and the depth information may be received from an infrared (IR) sensor comprised within the same camera. Further details with regard to the camera are described with relation to FIG. 11 .
- the method 300 receives depth and color information of a video.
- the depth image is processed as further discussed with relation to FIGS. 4 , 5 , and 6 .
- the color image is processed as discussed in more detail with relation to FIGS. 7 , 8 , 9 A, and 9 B.
- an alpha mask may be applied to the resulting image.
- FIG. 4 illustrates a high level method 400 for the depth based processing of a depth image for identifying sections of the depth image.
- the method 400 receives a depth image, classifies portions of the depth image, and generates a region map from the depth image.
- a depth image is received.
- the depth image may comprise a depth value for each pixel that indicates the distance of each pixel of an image frame from a sensor.
- the depth image is cleaned and classified or categorized. For example, pixels may be categorized as a user pixel, background pixel, or unknown pixel based on the depth value of the pixel and a user history and/or a background history.
- a section map may be created. The section map may comprise multiple user categorizations or sections, as described in further detail below. Details with regard to the user histories and background history are also discussed in further detail below.
- the pixels are grouped into connected components, as discussed in further detail below.
- connected components are tracked.
- if a component comprises unknown pixels, a head must be detected for the component to be considered a user.
- if the component comprises background pixels (e.g., the component was not detected as a user in a previous frame and it is now part of the background), the component must have a head and be in motion to be re-categorized as a user.
- a plurality of users may be detected. For example, two separate connected components may be detected to move and as such be detected as two separate users.
- the two separate connected components may be at least a predefined distance from each other.
- any or all of the user histories and background history are updated.
- the user histories and background history comprise an average value (e.g., depth value) of each pixel from a plurality of frames.
- the histories may comprise an average depth value of each pixel from a plurality of previously processed depth image frames.
- a region map may be created.
- a region map may comprise a categorization or classification of each pixel of the depth image.
- the region map may specify or categorize which pixel is a background pixel, foreground pixel, unclear pixel, or an unknown pixel.
- the creation of the region map is based upon the previous classification of the depth image and user information.
- the user information may comprise that a first user should be enabled and a second user should be disabled.
- the region map may comprise the previous categorization of the depth image, but with the disabled user's pixels or connected component being categorized as a background and the enabled user's pixels or connected components remaining categorized as a foreground. Details with regard to the method 400 are discussed in further detail below with regard to FIG. 5 .
- FIG. 5 illustrates a method 500 for depth based processing for the identifying of a foreground portion and a background portion of a video.
- the identified background portion may be removed to create a foreground video.
- the method 500 receives depth image information and categorizes image pixels based on the depth image information.
- a depth image is received.
- the depth image is checked.
- the depth image frame is checked to determine whether the depth information is useful. For example, if the majority of pixels from the depth image comprise small or invalid values then the depth image frame may be considered to not be useful and as such may be discarded.
- all pixels in the region map are set to ‘unknown.’
- all depth histories (described in further detail below) and user information may be cleared or deleted.
- each pixel may be categorized or determined to belong to a section of the depth image frame. For example, each pixel may be categorized as unknown, background, a user pixel, or as a bad pixel. In some embodiments, there may be a plurality of types of user pixels. For example, each user may comprise a separate user pixel identification in order to keep different users separate. In some embodiments, the categorization of the pixels is based on a background history and user histories. Each of the background history and each user history comprises an aggregate history of the background pixels and user pixels as compiled from previous depth image frames.
- the current depth value is compared to the depth values in the background and user histories and matched, where possible, as either background or a user.
- how closely a pixel's current depth value must match either the background or user histories may be based upon a confidence level threshold of the pixel. For example, determining the best match (e.g., whether the pixel is a user or background) may comprise calculating a cost for each history, and the history with the lowest cost may be chosen as the pixel's section or categorization. If the depth value of a current pixel does not match any of the background or user histories, then the pixel may be labeled as unknown. In some embodiments, if the pixel has an invalid depth value or a depth value beyond a threshold, then the pixel may be labeled as an invalid pixel (e.g., a bad pixel).
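- As an illustration of this matching step, the following sketch categorizes a single depth pixel against the background and user histories; the cost function, thresholds, and confidence scaling are assumed values, not taken from the disclosure.

```python
UNKNOWN, BACKGROUND, USER, BAD = 0, 1, 2, 3

def categorize_pixel(depth, bg_hist, user_hists, max_depth=4000.0):
    """Categorize one depth pixel against the background and user histories.

    bg_hist and each entry of user_hists are (mean_depth, confidence) pairs.
    All constants below are illustrative.
    """
    if depth <= 0 or depth > max_depth:          # invalid or out-of-range reading
        return BAD

    candidates = [(BACKGROUND, bg_hist)] + [(USER, h) for h in user_hists]
    best_label, best_cost = UNKNOWN, float("inf")
    for label, (mean_depth, confidence) in candidates:
        cost = abs(depth - mean_depth)
        # A higher-confidence history tolerates a larger deviation.
        if cost < 50.0 * (0.5 + confidence) and cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# A pixel at 900 mm against a background history at 2500 mm and a user
# history at 920 mm is matched to the user history (lowest cost).
print(categorize_pixel(900.0, (2500.0, 0.9), [(920.0, 0.7)]))  # -> 2 (USER)
```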
- connected components of the depth image pixels are created.
- the creation of connected components may group pixels into components based on the pixel's section or categorization and the pixel's depth value. For example, each pixel's depth value and categorization (i.e., user, unknown, or background) may be compared with its neighboring pixels' depth value and categorization.
- the categorization may comprise a different categorization for each user. As such, a plurality of user categorizations may be used. If neighboring pixels share a common categorization and have similar depth values, then the neighboring pixels may be considered to be a part of a single component.
- for pixels categorized as having an invalid depth, the pixel's depth value is not compared with a neighboring pixel's depth value.
- neighboring pixels with an invalid depth categorization will be grouped into a single component.
- disjoint sets are used to manage the connected components. Once the connected components are determined (e.g., components are created for foreground components, background components, etc.), each component comprising pixels categorized as unknown is examined. A determination may be made to decide whether the unknown component is connected to a known component such as a background or foreground component. For example, for each unknown component, a list of connections to known categorized components is generated.
- the categorized component selected for the unknown component is based on the total number of connections and the total depth difference between the unknown component and the categorized component. For example, if an unknown component comprises a large number of connections to a background component and there is a small depth difference between the unknown component and the background component, then the unknown component may be categorized as a background component. As such, all pixels in the unknown component may be categorized as a background component and included in the background component. Thus, the previously unknown pixels are regrouped into the background component pixels.
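- The grouping and merging described above can be sketched with a disjoint-set (union-find) structure; the 4-neighborhood and the depth-similarity tolerance below are assumed details, not values from the disclosure.

```python
import numpy as np

def connected_components(categories: np.ndarray, depth: np.ndarray,
                         depth_tol: float = 10.0) -> np.ndarray:
    """Label connected components; pixels join when their category matches
    and their depth values differ by less than depth_tol (assumed threshold)."""
    h, w = categories.shape
    parent = np.arange(h * w)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):          # right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    j = ny * w + nx
                    if categories[y, x] == categories[ny, nx] and \
                       abs(float(depth[y, x]) - float(depth[ny, nx])) < depth_tol:
                        union(i, j)

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

# Tiny example: two plateaus with equal categorization but different depths
# stay in separate components.
cat = np.ones((2, 4), dtype=int)
dep = np.array([[20, 20, 90, 90],
                [20, 20, 90, 90]], dtype=float)
print(len(np.unique(connected_components(cat, dep))))  # -> 2 components
```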
- a motion detection of connected components is performed.
- the motion detection determines if a component is moving between depth image frames.
- a moving component may be determined to be a person (e.g., a user).
- a user may be detected at block 525 .
- a camera may provide an infrared intensity image and as such the difference between the infrared intensity or depth value of the current image frame and a previous image frame may be calculated. If a pixel's infrared intensity increases by a significant amount and the pixel's value is below a specific threshold, then the pixel may be marked as moving.
- a pixel may be considered to be moving if its depth value decreases by a specific amount and the pixel depth value is below a specific threshold.
- Each component comprising a moving pixel may be further examined. If the number of moving pixels in a single component is above a predefined minimum amount and the percentage of moving pixels is not small relative to all pixels of the component, then the component may be tagged as being in motion and as such may comprise a user.
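- A minimal sketch of this motion test, assuming a per-pixel depth-decrease rule and the per-component counting described above (all thresholds are illustrative):

```python
import numpy as np

def moving_pixels(depth_prev: np.ndarray, depth_curr: np.ndarray,
                  min_decrease: float = 30.0, near_thresh: float = 2000.0) -> np.ndarray:
    """Mark pixels whose depth decreased by at least min_decrease and whose
    current depth is below near_thresh. Both thresholds are assumed values."""
    decrease = depth_prev - depth_curr
    return (decrease > min_decrease) & (depth_curr < near_thresh)

def component_in_motion(labels: np.ndarray, moving: np.ndarray, comp_id: int,
                        min_count: int = 50, min_fraction: float = 0.05) -> bool:
    """A component is tagged as in motion when it contains enough moving pixels,
    both absolutely and relative to its size (assumed thresholds)."""
    comp_mask = labels == comp_id
    n_moving = int(np.count_nonzero(comp_mask & moving))
    n_total = int(np.count_nonzero(comp_mask))
    return n_total > 0 and n_moving >= min_count and n_moving / n_total >= min_fraction
```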
- user tracking is performed on the connected components.
- user tracking may be performed every few frames and results in the analysis of all of the connected components.
- a connected component in motion must have a user head detected in order for the connected component to be categorized as a user.
- the user tracking may comprise checking the unknown component to determine whether the unknown component should be a foreground component or if it is a part of an existing foreground component (e.g., the unknown component is a user). If the unknown component is not part of an existing user, then the unknown component may be a new user and thus is analyzed through additional processes at blocks 545 and 550 . Similar processes are performed for a background component.
- For a background component to be re-categorized as a foreground or user component, the background component must be close to a user's center of mass. Moreover, in some embodiments, a new user must have additional features detected and must be in motion. If a background component is determined to be a part of a user or a new user, then the component is removed from the background history.
- the performance of the user tracking at block 530 may further comprise processing checks on foreground or user components. For example, if a foreground or user component is far from a user's center of mass, then it may be re-categorized as an unknown component. If a user component is close to another user's center of mass, then it may be removed from the current user's history and added to the second user's history.
- the user's information may be updated based on the current frame. For example, information related to a user's center of mass, dimensions, and motion may be updated. As such, the positioning and placement of a user may be detected such that a user's gestures may be detected, as described in further detail below.
- a detected gesture from a user may enable or disable the user from the system or the user's standing placement (e.g., depth threshold) may be used to enable or disable the user.
- a history of various characteristics of a user are recorded and updated.
- the user's features are detected.
- the features detected may comprise a user's head and hands.
- the user's torso and neck may first be located by segmenting the user component into a plurality of horizontal slices and moving upward until the width of the horizontal slices begins to diverge from the average width by a set amount. After finding the user's torso and neck, the user's head is identified by examining an area above the identified neck. Once the user's head is found, then the user's hands may be identified by performing a skeletonization of the user component. In some embodiments, the user's hands may be assumed to be the furthest points to the left and the right of the user's torso.
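- As a sketch of this slice-based search, the following scans a user mask from the bottom up and returns the row at which the slice width diverges from the running average width; the divergence factor and the averaging rule are assumptions for illustration only.

```python
import numpy as np

def find_neck_row(user_mask: np.ndarray, divergence: float = 0.6):
    """Scan the user component from the bottom of the image upward; return the
    first row whose width drops below `divergence` times the running average
    width (a neck candidate). Returns None if no such row exists."""
    widths = user_mask.sum(axis=1)                     # pixels per horizontal slice
    rows = [r for r in range(user_mask.shape[0]) if widths[r] > 0]
    if not rows:
        return None
    avg = None
    for r in reversed(rows):                           # bottom-up scan
        w = float(widths[r])
        if avg is not None and w < divergence * avg:
            return r                                   # width diverged: neck candidate
        avg = w if avg is None else 0.9 * avg + 0.1 * w  # running average width
    return None
```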
- a user component's gestures are detected.
- a user raising his or her hand may be detected.
- the detection of a user's gestures is based on the previously provided position of the user's features.
- a user raising his or her hand may be detected by a vertical line comprising the user's hand position as well as a distance.
- a region map may be created.
- the region map may be created based on the previously discussed categorizations and user information.
- the region map may comprise values of foreground, background, unclear, and unknown.
- For a background component the region is set to background.
- an invalid depth component may be set to unknown. If the component is set to unknown, then it may be checked for proximity to a user; if it is close enough to be considered part of the user, it may be categorized as an unclear component. If the user is enabled, then the user component may remain a foreground component, but if the user is disabled, then the user component may be re-categorized as a background component.
- the region map may comprise a categorization of pixels and/or components as foreground, background, unclear, or unknown.
- user histories may be updated.
- a user history is recorded and updated for each user.
- Each pixel in the user history may comprise a depth value and a confidence level.
- the user history is updated for each received depth frame.
- the depth values may be updated using an exponential moving average.
- the confidence level may be updated so as to increase whenever a pixel is categorized as a user and the depth value is similar to the depth value in the user history. However, if the depth value is significantly different, then the confidence level may decrease. If a pixel is labeled as a background then the confidence level decreases, but if a pixel is labeled as another user, then the user confidence may decrease more slowly.
- the user histories enable the systems and methods disclosed herein to determine which pixels are associated with which user in a following frame.
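- A minimal per-pixel sketch of the update rules above, using an exponential moving average for the depth value and the described confidence adjustments; the rates and matching tolerance are assumed values.

```python
def update_user_history(hist_depth, hist_conf, depth, label,
                        alpha=0.1, match_tol=50.0):
    """Update one pixel of a user history.

    hist_depth / hist_conf: stored mean depth and confidence for this pixel.
    label: 'user', 'background', or 'other_user' for the current frame.
    alpha and match_tol are illustrative constants.
    """
    if label == 'user':
        if abs(depth - hist_depth) < match_tol:
            # Matching user observation: blend the depth in, raise confidence.
            hist_depth = (1.0 - alpha) * hist_depth + alpha * depth
            hist_conf = min(1.0, hist_conf + 0.05)
        else:
            # Significantly different depth: confidence drops.
            hist_conf = max(0.0, hist_conf - 0.1)
    elif label == 'background':
        hist_conf = max(0.0, hist_conf - 0.1)
    else:  # labeled as another user: confidence decays more slowly
        hist_conf = max(0.0, hist_conf - 0.02)
    return hist_depth, hist_conf
```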
- a background history may be updated similar to the user history as previously described.
- the background history may comprise two different types of histories such as a trusted and non-trusted history.
- the non-trusted history may be updated per each frame.
- if a pixel is labeled as background and the depth value matches the depth value in the non-trusted history, then the age of the pixel increases. If the age of the pixel reaches a defined minimum age, then the pixel is re-categorized as trusted. If the depth value continues to match the depth value in the trusted history, then the confidence level may increase. However, if the depth value does not match, then the confidence level will decrease, and if the confidence level reaches zero then the history at the pixel may be re-categorized as non-trusted.
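- The two-stage (non-trusted/trusted) background history can be sketched per pixel as follows; the minimum age, matching tolerance, and confidence steps are assumptions.

```python
def update_background_history(pixel, depth, label,
                              match_tol=50.0, min_age=30):
    """Update one pixel of the background history.

    pixel: dict with keys 'trusted', 'depth', 'age', 'confidence'.
    All thresholds are illustrative values.
    """
    matches = abs(depth - pixel['depth']) < match_tol
    if not pixel['trusted']:
        if label == 'background' and matches:
            pixel['age'] += 1
            if pixel['age'] >= min_age:
                pixel['trusted'] = True        # promoted to the trusted history
        else:
            pixel['age'] = 0
            pixel['depth'] = depth             # restart the non-trusted estimate
    else:
        if matches:
            pixel['confidence'] = min(1.0, pixel['confidence'] + 0.05)
        else:
            pixel['confidence'] -= 0.05
            if pixel['confidence'] <= 0.0:
                pixel['trusted'] = False       # demoted back to non-trusted
                pixel['age'] = 0
    return pixel
```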
- FIG. 6 is an example of grouping of pixels 600 of a video into connected components.
- pixels of a similar categorization (i.e., foreground or background) and a similar depth value are grouped into a connected component.
- the grouping of pixels 600 comprises foreground pixels 610 and background pixels 660 .
- Each of the foreground pixels comprises a foreground categorization and a similar depth value.
- each pixel is categorized as a foreground pixel with a depth value of 20.
- the grouping of pixels 600 comprises background pixels 660 .
- each of the background pixels comprises a background categorization and a similar depth value of 90.
- while specific depth values of 20 and 90 are described, one skilled in the art would appreciate that any depth value may be used in the grouping of pixels into connected components.
- pixels of varying depth value may be grouped together into a single connected component.
- pixels with a depth value within a depth value range may be used when grouping pixels into connected components. For example, pixels with depth values of 25 and 29 may be considered to have a similar depth value, while pixels with depth values of 78 and 83 may be considered to have a similar depth value. As such, in some embodiments, a depth value range may be used when grouping pixels into connected components.
- the grouping of pixels 600 comprises connected component 620 and connected component 670 . Foreground pixels 610 with a similar depth value have been grouped together into connected component 620 and background pixels 660 with a similar depth value have been grouped together into connected component 670 . As such, in some embodiments, neighboring or connected pixels with an identical categorization and a similar depth value have been grouped into connected components.
- FIG. 7 illustrates a high level flow diagram of a method 700 for color image based processing to identify a foreground portion and a background portion in order to extract and accurately define a user foreground video.
- the method 700 receives a color image and a region map of a corresponding depth image as previously discussed, segments the foreground and background from the color image to create an unclear region or band of pixels, and processes the unclear region or band of pixels.
- a color image and a region map are received.
- region segmentation may be performed and, at block 730 , the color image edges may be detected.
- the unclear region 130 of FIG. 1 may be detected and then segmented between the background and the foreground.
- an unclear band may be created around the unclear region 130 .
- the unclear band may be expanded to comprise the unclear region, band, and/or pixels as well as neighboring foreground and background pixels.
- the color image pixels in the band may then be segmented between the foreground and the background such that at least two subsets of pixels are created.
- one subset may comprise a portion of the unclear pixels within the unclear band with the foreground and a second subset that may comprise a separate portion of the unclear pixels within the unclear band with the background.
- color image processing is performed on the band comprising the unclear region.
- FIG. 8 illustrates a flow diagram of a method 800 for color based processing for the identification of a foreground portion and a background portion to extract a user foreground video.
- a color image is received.
- a region map as previously discussed with regard to FIG. 5 may also be received.
- the received color image may be down sampled and cropped. For example, if the resolution of the color image is high definition (HD), the color image may be down sampled to a lower resolution, such as a VGA-equivalent size (e.g., 640×480 resolution).
- the boundaries of the received color image may not comprise depth information. As such, the boundaries of the color image may be cropped out or removed so that further processing on the color image may be more efficient.
- a foreground region filling may be performed.
- the depth image as received in FIG. 8 may have a lower resolution than that of the color image.
- a warped foreground region may comprise a sparse set of pixels while unknown pixels within the sparse set of pixels should be labeled as foreground pixels.
- a local window surrounding the pixel may be searched for other foreground pixels. If the unknown pixel is surrounded by foreground pixels, then it may be assumed that the unknown pixel lies within the sparse set of foreground pixels and should thus be re-categorized or labeled as a foreground pixel.
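- A minimal sketch of this window-based fill, assuming an unknown pixel is relabeled when every labeled pixel in its surrounding window is foreground (the window radius and the exact "surrounded" test are assumed details):

```python
import numpy as np

FG, BG, UNKNOWN = 2, 1, 0

def fill_foreground(region: np.ndarray, radius: int = 2) -> np.ndarray:
    """Relabel unknown pixels that are surrounded by foreground pixels."""
    out = region.copy()
    for y, x in zip(*np.where(region == UNKNOWN)):
        window = region[max(0, y - radius):y + radius + 1,
                        max(0, x - radius):x + radius + 1]
        labeled = window[window != UNKNOWN]
        # If every labeled neighbor in the window is foreground, the unknown
        # pixel is assumed to lie inside the sparse foreground set.
        if labeled.size > 0 and np.all(labeled == FG):
            out[y, x] = FG
    return out
```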
- unclear regions of the color image may be identified and segmented out of the foreground and background regions of the color image so that further processing may be performed on the unclear region.
- the unclear region may comprise the area or set of pixels of which may not yet be categorized as a background pixel or a foreground pixel.
- foreground region filling may be performed on unknown pixels that are surrounded by foreground pixels.
- an unclear region may comprise pixels at the position of a user's hair.
- An unclear region surrounding a user's body may be further identified by expanding the contour line of the user body outwards and/or inwards to become a region. As such, unclear regions may be identified.
- a color background history may be applied and updated.
- the color background history may comprise the accumulated color values of a plurality of color images.
- the color background history may be used to remove unclear head pixels from the unclear region that comprise color values that are similar with the corresponding color values in the color background history.
- the application of the color background history may be performed before the processes described with relation to block 840 so as to create a more efficient color image process.
- the color background history may, also be used when applying a graph cut as described in further detail below.
- a graph may be constructed. For example, a graph may be constructed from all of the pixels in the identified unclear region, along with any foreground and background pixels that are adjacent to the unclear region. Each pixel is then connected to its 4 or 8 neighboring pixels, to a source that represents the foreground, and to a sink that represents the background.
- N-links may be inter-pixel links. Terminal links (T-links) may comprise links connecting a pixel to the source or the sink. The capacities of the N-links may be assigned from the color contrast (L1 norm) between pixels according to the following equation:
- cap_N(i, j) = λ_N · e^(−β_N‖p_i − p_j‖²) if ‖p_i − p_j‖₁ < ε_N, and cap_N(i, j) = 0 otherwise.
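- Following the form of the N-link equation above, a sketch of the capacity assignment between two neighboring color pixels; the constants λ_N, β_N, and ε_N are placeholders, not values from the disclosure.

```python
import numpy as np

def n_link_capacity(p_i, p_j, lam=10.0, beta=0.01, eps=60.0):
    """Capacity of the N-link between neighboring pixels p_i, p_j (RGB vectors).

    Similar colors give a high capacity (the cut avoids separating them);
    a strong color edge gives zero capacity. Constants are illustrative.
    """
    diff = np.asarray(p_i, dtype=float) - np.asarray(p_j, dtype=float)
    l1 = float(np.abs(diff).sum())            # L1 color contrast (gating term)
    if l1 < eps:
        return lam * float(np.exp(-beta * float(np.dot(diff, diff))))
    return 0.0

print(n_link_capacity([200, 180, 170], [198, 182, 169]))  # similar colors -> high capacity
print(n_link_capacity([200, 180, 170], [30, 40, 50]))     # strong edge -> 0.0
```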
- the capacities of the T-links may comprise the summation of several factors.
- One such factor may comprise the probability with respect to the Gaussian mixture models of the background and the Gaussian mixture model of the foreground. These models may be learned and updated using the detected background pixels from the previous color image frames.
- Another factor may comprise the temporal coherence of the region map of the current image frame and the region map of the previous image frame. For each pixel i in the graph, a value cap(i) (capacity) may be defined as the following equation:
- a third factor may comprise the color contrast (L1 norm) between a pixel in the graph and its color background history, as in the following equation:
- the cap source of the foreground pixels in the graph may be set to a large enough constant number to prevent its categorization as a background pixel by the graph cut algorithm.
- the cap sink of the background pixel must also be set to a large constant number.
- a fast binary graph cut may be performed on the graph based on a number of factors to obtain a segmentation between the foreground and background.
- the region map may be stabilized in order to reduce small temporal flickering of the foreground-background edges (e.g., edge waviness artifacts).
- noisy pixels may be detected in the unclear region of the region map before the graph cut is performed by counting the foreground to background and background to foreground transition time of each pixel. For every new frame and for each pixel of the new frame, if the pixel doesn't transition from one categorized region to another categorized region (e.g., from a foreground region to a background region), its transition count may decrease.
- however, if the pixel does transition from one categorized region to another, its transition count may increase. If a pixel's transition count is above a threshold value, the region categorization of the pixel may be copied from the pixel's region categorization in the previous image frame's region map.
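- A minimal sketch of this per-pixel transition counting; the counter step sizes and the flicker threshold are assumed values.

```python
import numpy as np

def stabilize_region_map(region_prev: np.ndarray, region_curr: np.ndarray,
                         trans_count: np.ndarray, unclear_mask: np.ndarray,
                         max_transitions: int = 4):
    """Copy the previous-frame label for unclear pixels that flip too often."""
    flipped = region_curr != region_prev
    # Increase the count where the label flipped, decay it (toward 0) where it did not.
    trans_count = np.where(flipped, trans_count + 1, np.maximum(trans_count - 1, 0))
    noisy = unclear_mask & (trans_count > max_transitions)
    stabilized = np.where(noisy, region_prev, region_curr)
    return stabilized, trans_count
```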
- a median filter may be applied to the identified foreground region in order to smoothen the foreground edges.
- the median filter may be applied in the following pseudo code manner:
- an alpha mask may be generated to convert the foreground categorized pixels to a 0xFF alpha value and convert other categorized pixels to a 0x00 alpha value. In some embodiments, this may comprise an up sampling for the alpha mask.
- FIG. 9A illustrates an example embodiment of foreground region filling 900 .
- where a depth image resolution is smaller than a corresponding color image resolution, pixels categorized as unknown pixels may be re-categorized as either a foreground pixel or a background pixel.
- a foreground region 910 may comprise unknown pixels 920 .
- Unknown pixel window 930 comprises a plurality of foreground pixels 940 and at least one unknown pixel 945 .
- since the unknown pixel 945 within the unknown pixel window 930 is surrounded by foreground pixels 940, the unknown pixel 945 may be re-categorized as a foreground pixel. For example, in FIG. 9B, the filled foreground region 930 comprises the foreground pixels 940, and the previously unknown pixel 945 is re-categorized as a foreground pixel.
- FIG. 10 illustrates an example graph cut 1000 in accordance with some embodiments.
- the graph cut 1000 may comprise separating or segmenting unclear pixels into at least two different subsets, one subset comprising unclear pixels grouped with a source and another subset comprising unclear pixels grouped with a sink.
- the source may represent the foreground and the sink may represent the background.
- the unclear region pixels may be segmented into two subsets, one subset including a representation of the foreground and the other subset including a representation of the background.
- a graph cut 1000 may comprise a source 1010 , sink 1020 , and unclear pixels 1030 .
- the cut 1040 segments the unclear pixels 1030 .
- the unclear pixels to the left of the graph cut 1040 are grouped or segmented with the source and the unclear pixels to the right of the graph cut 1040 are grouped or segmented with the sink. Since the source and the sink represent the foreground and the background, the graph cut 1040 segments the unclear pixels 1030 between the foreground and background. As such, the unclear band is processed to create a segmentation of the pixels within it.
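- The source/sink segmentation can be illustrated with a generic min-cut solver on a toy graph; networkx stands in here for whatever fast binary graph-cut implementation is actually used, and the capacities are toy values.

```python
import networkx as nx

# Toy graph: two unclear pixels between a foreground source and a background sink.
G = nx.DiGraph()
G.add_edge('source', 'p1', capacity=5.0)   # T-link: p1 leans foreground
G.add_edge('p1', 'p2', capacity=1.0)       # N-link: weak color similarity
G.add_edge('p2', 'sink', capacity=5.0)     # T-link: p2 leans background

cut_value, (fg_side, bg_side) = nx.minimum_cut(G, 'source', 'sink')
print(fg_side - {'source'})   # pixels segmented as foreground -> {'p1'}
print(bg_side - {'sink'})     # pixels segmented as background -> {'p2'}
```

The minimum cut severs the cheapest set of links, so each unclear pixel ends up on either the source (foreground) side or the sink (background) side of the partition.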
- FIG. 11 illustrates an embodiment of a camera system 1100 for the foreground video embedding systems and methods of the present invention.
- the camera system 1100 comprises a camera 1110 , computer 1120 , and display 1130 .
- a camera 1110 is connected to a computer 1120 .
- the camera 1110 may comprise a three dimensional (3D) camera, depth camera, z-camera, range camera.
- the camera 1110 may comprise a color or RGB camera and a depth camera, or may comprise a single camera with an RGB sensor and a depth sensor.
- the camera 1110 receives color information and depth information.
- the received color information may comprise information related to the color of each pixel of a video.
- the color information is received from a Red-Green-Blue (RGB) sensor 1111 .
- the RGB sensor 1111 may capture the color pixel information in a scene of a captured video image.
- the camera 1110 may further comprise an infrared sensor 1112 and an infrared illuminator 1113 .
- the infrared illuminator 1113 may shine an infrared light through a lens of the camera 1110 onto a scene. As the scene is illuminated by the infrared light, the infrared light will bounce or reflect back to the camera 1110 . The reflected infrared light is received by the infrared sensor 1112 . The reflected light received by the infrared sensor results in depth information of the scene of the camera 1110 . As such, objects within the scene or view of the camera 1110 may be illuminated by infrared light from the infrared illuminator 1113 .
- the infrared light will reflect off of objects within the scene or view of the camera 1110 and the reflected infrared light will be directed towards the camera 1110 .
- the infrared sensor 1112 may receive the reflected infrared light and determine a depth or distance of the objects within the scene or view of the camera 1110 based on the reflected infrared light.
- the camera 1110 may further comprise a synchronization module 1114 to temporally synchronize the information from the RGB sensor 1111 , infrared sensor 1112 , and infrared illuminator 1113 .
- the synchronization module 1114 may be hardware and/or software embedded into the camera 1110 .
- the camera 1110 may further comprise a 3D application programming interface (API) for providing an input-output (IO) structure and interface to communicate the color and depth information to a computer system 1120 .
- the computer system 1120 may process the received color and depth information and comprise and perform the systems and methods disclosed herein.
- the computer system 1120 may display the foreground video embedded into the background feed onto a display screen 1130.
- FIG. 12 is a diagrammatic representation of a network 1200 , including nodes for client computer systems 1202 1 through 1202 N , nodes for server computer systems 1204 1 through 1204 N , nodes for network infrastructure 1206 1 through 1206 N , any of which nodes may comprise a machine 1250 within which a set of instructions for causing the machine to perform any one of the techniques discussed above may be executed.
- the embodiment shown is purely exemplary, and might be implemented in the context of one or more of the figures herein.
- Any node of the network 1200 may comprise a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof capable of performing the functions described herein.
- a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices (e.g. a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration, etc).
- a node may comprise a machine in the form of a virtual machine (VM), a virtual server, a virtual client, a virtual desktop, a virtual volume, a network router, a network switch, a network bridge, a personal digital assistant (PDA), a cellular telephone, a web appliance, or any machine capable of executing a sequence of instructions that specify actions to be taken by that machine.
- Any node of the network may communicate cooperatively with another node on the network.
- any node of the network may communicate cooperatively with every other node of the network.
- any node or group of nodes on the network may comprise one or more computer systems (e.g. a client computer system, a server computer system) and/or may comprise one or more embedded computer systems, a massively parallel computer system, and/or a cloud computer system.
- the computer system 1250 includes a processor 1208 (e.g. a processor core, a microprocessor, a computing device, etc), a main memory 1210 and a static memory 1212 , which communicate with each other via a bus 1214 .
- the machine 1250 may further include a display unit 1216 that may comprise a touch-screen, or a liquid crystal display (LCD), or a light emitting diode (LED) display, or a cathode ray tube (CRT).
- the computer system 1250 also includes a human input/output (I/O) device 1218 (e.g. a keyboard, an alphanumeric keypad, etc.), a pointing device 1220 (e.g. a mouse, a touch screen, etc.), a drive unit 1222 (e.g. a disk drive unit, a CD/DVD drive, a tangible computer readable removable media drive, an SSD storage device, etc.), a signal generation device 1228 (e.g. a speaker, an audio output, etc.), and a network interface device 1230 (e.g. an Ethernet interface, a wired network interface, a wireless network interface, a propagated signal interface, etc.).
- the drive unit 1222 includes a machine-readable medium 1224 on which is stored a set of instructions (i.e. software, firmware, middleware, etc) 1226 embodying any one, or all, of the methodologies described above.
- the set of instructions 1226 is also shown to reside, completely or at least partially, within the main memory 1210 and/or within the processor 1208 .
- the set of instructions 1226 may further be transmitted or received via the network interface device 1230 over the network.
- a machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g. a computer).
- a machine-readable medium includes read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical or acoustical or any other type of media suitable for storing information.
Abstract
Description
If the pixel i is categorized as a foreground pixel in the previous image frame's region map, then capsource(i)=cap(i) and capsink(i)=0. However, if the pixel i is categorized as a background pixel in the previous image frame's region map, then set capsource(i)=0 and capsink(i)=cap(i).
For each pixel p in UC region {
    count = 0;
    For each pixel pi in the NxN support window around pixel p {
        If R(pi) == UC, count++;
    }
    If (count < N*N/2), R(p) = BG;
    Else R(p) = FG;
}
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/083,470 US8818028B2 (en) | 2010-04-09 | 2011-04-08 | Systems and methods for accurate user foreground video extraction |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US32262410P | 2010-04-09 | 2010-04-09 | |
US32262910P | 2010-04-09 | 2010-04-09 | |
US13/083,470 US8818028B2 (en) | 2010-04-09 | 2011-04-08 | Systems and methods for accurate user foreground video extraction |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110249190A1 US20110249190A1 (en) | 2011-10-13 |
US8818028B2 true US8818028B2 (en) | 2014-08-26 |
Family
ID=44760687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/083,470 Active 2033-05-08 US8818028B2 (en) | 2010-04-09 | 2011-04-08 | Systems and methods for accurate user foreground video extraction |
Country Status (1)
Country | Link |
---|---|
US (1) | US8818028B2 (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140232650A1 (en) * | 2013-02-15 | 2014-08-21 | Microsoft Corporation | User Center-Of-Mass And Mass Distribution Extraction Using Depth Images |
US20150029294A1 (en) * | 2013-07-23 | 2015-01-29 | Personify, Inc. | Systems and methods for integrating user personas with content during video conferencing |
US20150035828A1 (en) * | 2013-07-31 | 2015-02-05 | Thomson Licensing | Method for processing a current image of an image sequence, and corresponding computer program and processing device |
US20150371398A1 (en) * | 2014-06-23 | 2015-12-24 | Gang QIAO | Method and system for updating background model based on depth |
US9386303B2 (en) | 2013-12-31 | 2016-07-05 | Personify, Inc. | Transmitting video and sharing content via a network using multiple encoding techniques |
US20160196675A1 (en) * | 2015-01-04 | 2016-07-07 | Personify, Inc. | Methods and Systems for Visually Deemphasizing a Displayed Persona |
US9414016B2 (en) | 2013-12-31 | 2016-08-09 | Personify, Inc. | System and methods for persona identification using combined probability maps |
US20160266650A1 (en) * | 2015-03-11 | 2016-09-15 | Microsoft Technology Licensing, Llc | Background model for user recognition |
US9485433B2 (en) | 2013-12-31 | 2016-11-01 | Personify, Inc. | Systems and methods for iterative adjustment of video-capture settings based on identified persona |
US9530044B2 (en) | 2010-08-30 | 2016-12-27 | The Board Of Trustees Of The University Of Illinois | System for background subtraction with 3D camera |
US9563962B2 (en) | 2015-05-19 | 2017-02-07 | Personify, Inc. | Methods and systems for assigning pixels distance-cost values using a flood fill technique |
US9628722B2 (en) | 2010-03-30 | 2017-04-18 | Personify, Inc. | Systems and methods for embedding a foreground video into a background feed based on a control input |
US9774548B2 (en) | 2013-12-18 | 2017-09-26 | Personify, Inc. | Integrating user personas with chat sessions |
US9881207B1 (en) | 2016-10-25 | 2018-01-30 | Personify, Inc. | Methods and systems for real-time user extraction using deep learning networks |
US9883155B2 (en) | 2016-06-14 | 2018-01-30 | Personify, Inc. | Methods and systems for combining foreground video and background video using chromatic matching |
US9916668B2 (en) | 2015-05-19 | 2018-03-13 | Personify, Inc. | Methods and systems for identifying background in video data using geometric primitives |
US10244224B2 (en) | 2015-05-26 | 2019-03-26 | Personify, Inc. | Methods and systems for classifying pixels as foreground using both short-range depth data and long-range depth data |
US20200058270A1 (en) * | 2017-04-28 | 2020-02-20 | Huawei Technologies Co., Ltd. | Bullet screen display method and electronic device |
US10984589B2 (en) | 2017-08-07 | 2021-04-20 | Verizon Patent And Licensing Inc. | Systems and methods for reference-model-based modification of a three-dimensional (3D) mesh data model |
US11095854B2 (en) | 2017-08-07 | 2021-08-17 | Verizon Patent And Licensing Inc. | Viewpoint-adaptive three-dimensional (3D) personas |
US11475668B2 (en) | 2020-10-09 | 2022-10-18 | Bank Of America Corporation | System and method for automatic video categorization |
US11659133B2 (en) | 2021-02-24 | 2023-05-23 | Logitech Europe S.A. | Image generating system with background replacement or modification capabilities |
US11800056B2 (en) | 2021-02-11 | 2023-10-24 | Logitech Europe S.A. | Smart webcam system |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8265380B1 (en) * | 2008-08-14 | 2012-09-11 | Adobe Systems Incorporated | Reuse of image processing information |
US8787663B2 (en) * | 2010-03-01 | 2014-07-22 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
EP2400768A3 (en) * | 2010-06-25 | 2014-12-31 | Samsung Electronics Co., Ltd. | Method, apparatus and computer-readable medium for coding and decoding depth image using color image |
KR101702948B1 (en) * | 2010-07-20 | 2017-02-06 | 삼성전자주식회사 | Rate-Distortion Optimization Apparatus and Method for depth-image encoding |
US9628755B2 (en) * | 2010-10-14 | 2017-04-18 | Microsoft Technology Licensing, Llc | Automatically tracking user movement in a video chat application |
US8824823B1 (en) * | 2011-01-20 | 2014-09-02 | Verint Americas Inc. | Increased quality of image objects based on depth in scene |
US8773499B2 (en) * | 2011-06-24 | 2014-07-08 | Microsoft Corporation | Automatic video framing |
US9117281B2 (en) * | 2011-11-02 | 2015-08-25 | Microsoft Corporation | Surface segmentation from RGB and depth images |
CN103390164B (en) * | 2012-05-10 | 2017-03-29 | 南京理工大学 | Method for checking object based on depth image and its realize device |
US8831285B2 (en) * | 2012-07-26 | 2014-09-09 | Hewlett-Packard Development Company, L.P. | Detecting objects with a depth sensor |
US9514522B2 (en) | 2012-08-24 | 2016-12-06 | Microsoft Technology Licensing, Llc | Depth data processing and compression |
US9406135B2 (en) * | 2012-10-29 | 2016-08-02 | Samsung Electronics Co., Ltd. | Device and method for estimating head pose |
KR101994319B1 (en) * | 2013-02-20 | 2019-06-28 | 삼성전자주식회사 | Apparatus of recognizing an object using a depth image and method thereof |
JP6167563B2 (en) * | 2013-02-28 | 2017-07-26 | ノーリツプレシジョン株式会社 | Information processing apparatus, information processing method, and program |
US9092657B2 (en) * | 2013-03-13 | 2015-07-28 | Microsoft Technology Licensing, Llc | Depth image processing |
US20140267611A1 (en) * | 2013-03-14 | 2014-09-18 | Microsoft Corporation | Runtime engine for analyzing user motion in 3d images |
AU2013206597A1 (en) * | 2013-06-28 | 2015-01-22 | Canon Kabushiki Kaisha | Depth constrained superpixel-based depth map refinement |
CN104427291B (en) * | 2013-08-19 | 2018-09-28 | 华为技术有限公司 | A kind of image processing method and equipment |
WO2015200820A1 (en) * | 2014-06-26 | 2015-12-30 | Huawei Technologies Co., Ltd. | Method and device for providing depth based block partitioning in high efficiency video coding |
US20160004300A1 (en) * | 2014-07-07 | 2016-01-07 | PinchVR Inc. | System, Method, Device and Computer Readable Medium for Use with Virtual Environments |
US9774793B2 (en) * | 2014-08-01 | 2017-09-26 | Adobe Systems Incorporated | Image segmentation for a live camera feed |
CN105590309B (en) * | 2014-10-23 | 2018-06-15 | 株式会社理光 | Foreground image dividing method and device |
US10033926B2 (en) * | 2015-11-06 | 2018-07-24 | Google Llc | Depth camera based image stabilization |
US10091435B2 (en) * | 2016-06-07 | 2018-10-02 | Disney Enterprises, Inc. | Video segmentation from an uncalibrated camera array |
CN107368188B (en) * | 2017-07-13 | 2020-05-26 | 河北中科恒运软件科技股份有限公司 | Foreground extraction method and system based on multiple spatial positioning in mediated reality |
JP6787844B2 (en) * | 2017-07-21 | 2020-11-18 | Kddi株式会社 | Object extractor and its superpixel labeling method |
CN109903291B (en) | 2017-12-11 | 2021-06-01 | 腾讯科技(深圳)有限公司 | Image processing method and related device |
US10672188B2 (en) * | 2018-04-19 | 2020-06-02 | Microsoft Technology Licensing, Llc | Surface reconstruction for environments with moving objects |
TWI689892B (en) * | 2018-05-18 | 2020-04-01 | 瑞昱半導體股份有限公司 | Background blurred method and electronic apparatus based on foreground image |
US10762219B2 (en) * | 2018-05-18 | 2020-09-01 | Microsoft Technology Licensing, Llc | Automatic permissions for virtual objects |
US10951859B2 (en) | 2018-05-30 | 2021-03-16 | Microsoft Technology Licensing, Llc | Videoconferencing device and method |
CN110309787B (en) * | 2019-07-03 | 2022-07-29 | 电子科技大学 | A human sitting posture detection method based on depth camera |
CN112037121A (en) * | 2020-08-19 | 2020-12-04 | 北京字节跳动网络技术有限公司 | Picture processing method, device, equipment and storage medium |
CN113194270B (en) * | 2021-04-28 | 2022-08-05 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and storage medium |
CN113762253B (en) * | 2021-08-24 | 2022-08-26 | 合肥的卢深视科技有限公司 | Speckle extraction method and device, electronic device and storage medium |
CN113808235A (en) * | 2021-09-16 | 2021-12-17 | 平安普惠企业管理有限公司 | Color filling method, device, equipment and storage medium |
US11979244B2 (en) * | 2021-09-30 | 2024-05-07 | Snap Inc. | Configuring 360-degree video within a virtual conferencing system |
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5001558A (en) | 1985-06-11 | 1991-03-19 | General Motors Corporation | Night vision system with color video camera |
US5506946A (en) | 1991-10-01 | 1996-04-09 | Electronics For Imaging, Inc. | Selective color correction |
US5343311A (en) | 1992-04-14 | 1994-08-30 | Electronics For Imaging, Inc. | Indexed processing of color image data |
US5517334A (en) | 1992-04-14 | 1996-05-14 | Electronics For Imaging, Inc. | Indexed processing of color image data |
US6150930A (en) | 1992-08-14 | 2000-11-21 | Texas Instruments Incorporated | Video equipment and method to assist motor vehicle operators |
US6664973B1 (en) | 1996-04-28 | 2003-12-16 | Fujitsu Limited | Image processing apparatus, method for processing and image and computer-readable recording medium for causing a computer to process images |
US20020158873A1 (en) | 2001-01-26 | 2002-10-31 | Todd Williamson | Real-time virtual viewpoint in simulated reality environment |
US20050094879A1 (en) | 2003-10-31 | 2005-05-05 | Michael Harville | Method for visual-based recognition of an object |
US20070201738A1 (en) | 2005-07-21 | 2007-08-30 | Atsushi Toda | Physical information acquisition method, physical information acquisition device, and semiconductor device |
US20070110298A1 (en) | 2005-11-14 | 2007-05-17 | Microsoft Corporation | Stereo video for gaming |
US20070146512A1 (en) | 2005-12-27 | 2007-06-28 | Sanyo Electric Co., Ltd. | Imaging apparatus provided with imaging device having sensitivity in visible and infrared regions |
US20090244309A1 (en) | 2006-08-03 | 2009-10-01 | Benoit Maison | Method and Device for Identifying and Extracting Images of multiple Users, and for Recognizing User Gestures |
US7773136B2 (en) | 2006-08-28 | 2010-08-10 | Sanyo Electric Co., Ltd. | Image pickup apparatus and image pickup method for equalizing infrared components in each color component signal |
US8649932B2 (en) | 2006-10-27 | 2014-02-11 | International Electronic Machines Corp. | Vehicle evaluation using infrared data |
US8175384B1 (en) * | 2008-03-17 | 2012-05-08 | Adobe Systems Incorporated | Method and apparatus for discriminative alpha matting |
US20110243430A1 (en) | 2008-11-04 | 2011-10-06 | Konica Minolta Opto, Inc. | Image input apparatus |
US20100195898A1 (en) | 2009-01-28 | 2010-08-05 | Electronics And Telecommunications Research Institute | Method and apparatus for improving quality of depth image |
US20100302395A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Environment And/Or Target Segmentation |
US20110193939A1 (en) | 2010-02-09 | 2011-08-11 | Microsoft Corporation | Physical interaction zone for gesture-based user interfaces |
US20110242277A1 (en) * | 2010-03-30 | 2011-10-06 | Do Minh N | Systems and methods for embedding a foreground video into a background feed based on a control input |
US20110293179A1 (en) * | 2010-05-31 | 2011-12-01 | Mert Dikmen | Systems and methods for illumination correction of an image |
Non-Patent Citations (2)
Title |
---|
U.S. Appl. No. 12/871,428, filed Aug. 30, 2010, Quang H. Nguyen et al. |
U.S. Appl. No. 13/076,264, filed Mar. 30, 2011, Minh N. Do et al. |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9628722B2 (en) | 2010-03-30 | 2017-04-18 | Personify, Inc. | Systems and methods for embedding a foreground video into a background feed based on a control input |
US10325360B2 (en) | 2010-08-30 | 2019-06-18 | The Board Of Trustees Of The University Of Illinois | System for background subtraction with 3D camera |
US9792676B2 (en) * | 2010-08-30 | 2017-10-17 | The Board Of Trustees Of The University Of Illinois | System for background subtraction with 3D camera |
US20170109872A1 (en) * | 2010-08-30 | 2017-04-20 | The Board Of Trustees Of The University Of Illinois | System for background subtraction with 3d camera |
US9530044B2 (en) | 2010-08-30 | 2016-12-27 | The Board Of Trustees Of The University Of Illinois | System for background subtraction with 3D camera |
US9052746B2 (en) * | 2013-02-15 | 2015-06-09 | Microsoft Technology Licensing, Llc | User center-of-mass and mass distribution extraction using depth images |
US20140232650A1 (en) * | 2013-02-15 | 2014-08-21 | Microsoft Corporation | User Center-Of-Mass And Mass Distribution Extraction Using Depth Images |
US20150029294A1 (en) * | 2013-07-23 | 2015-01-29 | Personify, Inc. | Systems and methods for integrating user personas with content during video conferencing |
US9055186B2 (en) * | 2013-07-23 | 2015-06-09 | Personify, Inc | Systems and methods for integrating user personas with content during video conferencing |
US20150035828A1 (en) * | 2013-07-31 | 2015-02-05 | Thomson Licensing | Method for processing a current image of an image sequence, and corresponding computer program and processing device |
US10074209B2 (en) * | 2013-07-31 | 2018-09-11 | Thomson Licensing | Method for processing a current image of an image sequence, and corresponding computer program and processing device |
US9774548B2 (en) | 2013-12-18 | 2017-09-26 | Personify, Inc. | Integrating user personas with chat sessions |
US10325172B2 (en) | 2013-12-31 | 2019-06-18 | Personify, Inc. | Transmitting video and sharing content via a network |
US9414016B2 (en) | 2013-12-31 | 2016-08-09 | Personify, Inc. | System and methods for persona identification using combined probability maps |
US9485433B2 (en) | 2013-12-31 | 2016-11-01 | Personify, Inc. | Systems and methods for iterative adjustment of video-capture settings based on identified persona |
US9942481B2 (en) | 2013-12-31 | 2018-04-10 | Personify, Inc. | Systems and methods for iterative adjustment of video-capture settings based on identified persona |
US9740916B2 (en) | 2013-12-31 | 2017-08-22 | Personify Inc. | Systems and methods for persona identification using combined probability maps |
US9386303B2 (en) | 2013-12-31 | 2016-07-05 | Personify, Inc. | Transmitting video and sharing content via a network using multiple encoding techniques |
US20150371398A1 (en) * | 2014-06-23 | 2015-12-24 | Gang QIAO | Method and system for updating background model based on depth |
US9727971B2 (en) * | 2014-06-23 | 2017-08-08 | Ricoh Company, Ltd. | Method and system for updating background model based on depth |
US9671931B2 (en) * | 2015-01-04 | 2017-06-06 | Personify, Inc. | Methods and systems for visually deemphasizing a displayed persona |
US20160196675A1 (en) * | 2015-01-04 | 2016-07-07 | Personify, Inc. | Methods and Systems for Visually Deemphasizing a Displayed Persona |
US20160266650A1 (en) * | 2015-03-11 | 2016-09-15 | Microsoft Technology Licensing, Llc | Background model for user recognition |
US9639166B2 (en) * | 2015-03-11 | 2017-05-02 | Microsoft Technology Licensing, Llc | Background model for user recognition |
US9916668B2 (en) | 2015-05-19 | 2018-03-13 | Personify, Inc. | Methods and systems for identifying background in video data using geometric primitives |
US9563962B2 (en) | 2015-05-19 | 2017-02-07 | Personify, Inc. | Methods and systems for assigning pixels distance-cost values using a flood fill technique |
US9953223B2 (en) | 2015-05-19 | 2018-04-24 | Personify, Inc. | Methods and systems for assigning pixels distance-cost values using a flood fill technique |
US10244224B2 (en) | 2015-05-26 | 2019-03-26 | Personify, Inc. | Methods and systems for classifying pixels as foreground using both short-range depth data and long-range depth data |
US9883155B2 (en) | 2016-06-14 | 2018-01-30 | Personify, Inc. | Methods and systems for combining foreground video and background video using chromatic matching |
US9881207B1 (en) | 2016-10-25 | 2018-01-30 | Personify, Inc. | Methods and systems for real-time user extraction using deep learning networks |
US20200058270A1 (en) * | 2017-04-28 | 2020-02-20 | Huawei Technologies Co., Ltd. | Bullet screen display method and electronic device |
US11095854B2 (en) | 2017-08-07 | 2021-08-17 | Verizon Patent And Licensing Inc. | Viewpoint-adaptive three-dimensional (3D) personas |
US10997786B2 (en) | 2017-08-07 | 2021-05-04 | Verizon Patent And Licensing Inc. | Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas |
US11004264B2 (en) | 2017-08-07 | 2021-05-11 | Verizon Patent And Licensing Inc. | Systems and methods for capturing, transferring, and rendering viewpoint-adaptive three-dimensional (3D) personas |
US11024078B2 (en) | 2017-08-07 | 2021-06-01 | Verizon Patent And Licensing Inc. | Systems and methods compression, transfer, and reconstruction of three-dimensional (3D) data meshes |
US10984589B2 (en) | 2017-08-07 | 2021-04-20 | Verizon Patent And Licensing Inc. | Systems and methods for reference-model-based modification of a three-dimensional (3D) mesh data model |
US11386618B2 (en) | 2017-08-07 | 2022-07-12 | Verizon Patent And Licensing Inc. | Systems and methods for model-based modification of a three-dimensional (3D) mesh |
US11461969B2 (en) | 2017-08-07 | 2022-10-04 | Verizon Patent And Licensing Inc. | Systems and methods compression, transfer, and reconstruction of three-dimensional (3D) data meshes |
US11580697B2 (en) | 2017-08-07 | 2023-02-14 | Verizon Patent And Licensing Inc. | Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas |
US11475668B2 (en) | 2020-10-09 | 2022-10-18 | Bank Of America Corporation | System and method for automatic video categorization |
US11800056B2 (en) | 2021-02-11 | 2023-10-24 | Logitech Europe S.A. | Smart webcam system |
US11659133B2 (en) | 2021-02-24 | 2023-05-23 | Logitech Europe S.A. | Image generating system with background replacement or modification capabilities |
US11800048B2 (en) | 2021-02-24 | 2023-10-24 | Logitech Europe S.A. | Image generating system with background replacement or modification capabilities |
US12058471B2 (en) | 2021-02-24 | 2024-08-06 | Logitech Europe S.A. | Image generating system |
Also Published As
Publication number | Publication date |
---|---|
US20110249190A1 (en) | 2011-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8818028B2 (en) | Systems and methods for accurate user foreground video extraction | |
US9053573B2 (en) | Systems and methods for generating a virtual camera viewpoint for an image | |
US9628722B2 (en) | Systems and methods for embedding a foreground video into a background feed based on a control input | |
Haines et al. | Background subtraction with Dirichlet process mixture models | |
US9008457B2 (en) | Systems and methods for illumination correction of an image | |
WO2019218824A1 (en) | Method for acquiring motion track and device thereof, storage medium, and terminal | |
Denman et al. | An adaptive optical flow technique for person tracking systems | |
CN109035304B (en) | Target tracking method, medium, computing device and apparatus | |
US9087229B2 (en) | System for background subtraction with 3D camera | |
Lu | A multiscale spatio-temporal background model for motion detection | |
Ramya et al. | A modified frame difference method using correlation coefficient for background subtraction | |
Wang et al. | A multi-view learning approach to foreground detection for traffic surveillance applications | |
Moya-Alcover et al. | Modeling depth for nonparametric foreground segmentation using RGBD devices | |
US8947600B2 (en) | Methods, systems, and computer-readable media for detecting scene changes in a video | |
US8995718B2 (en) | System and method for low complexity change detection in a sequence of images through background estimation | |
CN109271848B (en) | Face detection method, face detection device and storage medium | |
KR101337423B1 (en) | Method of moving object detection and tracking using 3d depth and motion information | |
TWI729587B (en) | Object localization system and method thereof | |
CN107742115A (en) | A method and system for detecting and tracking moving objects based on video surveillance | |
Padmashini et al. | Vision based algorithm for people counting using deep learning | |
JP5958557B2 (en) | Object recognition method and object recognition apparatus | |
Zhu et al. | Background subtraction based on non-parametric model | |
JP6603123B2 (en) | Animal body detection apparatus, detection method, and program | |
Mustafah et al. | Skin region detector for real time face detection system | |
Zhu et al. | Improved accuracy of superpixel segmentation by region merging method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: NUVIXA, INC., ILLINOIS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NGUYEN, QUANG H.;MEYER, GREG;DO, MINH N.;AND OTHERS;REEL/FRAME:026100/0277 Effective date: 20110408 |
AS | Assignment | Owner name: PERSONIFY, INC., ILLINOIS Free format text: CHANGE OF NAME;ASSIGNOR:NUVIXA, INC.;REEL/FRAME:032390/0758 Effective date: 20121220 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551) Year of fee payment: 4 |
AS | Assignment | Owner name: HONGFUJIN PRECISION INDUSTRY (WUHAN) CO. LTD., TEX Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PERSONIFY, INC.;REEL/FRAME:051367/0920 Effective date: 20190522 |
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment | Owner name: PERSONIFY, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HONGFUJIN PRECISION INDUSTRY WUHAN;REEL/FRAME:057467/0738 Effective date: 20210909 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |