US11461911B2 - Depth information calculation method and device based on light-field-binocular system - Google Patents
- Publication number
- US11461911B2 (application Ser. No. US17/034,563)
- Authority
- US
- United States
- Prior art keywords
- confidence
- map
- camera
- light field
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/557—Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/232—Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- The present disclosure relates to the field of computer vision, and more particularly, to a depth information calculation method and a depth information calculation device based on a light-field-binocular system.
- Depth detection is commonly used in applications, such as three-dimensional sensing, robotics, and autonomous driving.
- The accuracy and detection range of a depth map are among the most important directions in depth-sensing research.
- Although existing works, such as stereo matching, light-field methods, and learning-based single-view methods, provide depth detection at various scales, none of them can measure over the full depth range.
- A light field camera has a high angular resolution (angular views are densely sampled), but its baseline is too small for a region at a far distance.
- Therefore, the light field camera is unable to obtain depth for an object at a far distance.
- A stereo camera with a large baseline may perform well on a scene at a far distance, but has a limited angular resolution (angular views are sparsely sampled) for an object at a close distance. Therefore, it may be difficult for the stereo camera to obtain the depth of the object at the close distance.
- Embodiments of a first aspect of the present disclosure provide a depth information calculation method based on a light-field-binocular system.
- The method includes: constructing a light-field-binocular system with a monocular camera and a light field camera; calibrating an image obtained by the light field camera and an image obtained by the monocular camera to generate an image pattern after encoding the calibrated images, and determining images in the image pattern as input images of an algorithm; and obtaining a far-distance disparity map based on binocular information of the input images, setting a first confidence for each pixel in the disparity map, and obtaining a first target confidence.
- Embodiments of the present disclosure provide an electronic device.
- the electronic device includes a processor and a memory, having a computer program stored thereon.
- the processor is configured to execute the method described above.
- Embodiments of the present disclosure provide a non-transitory computer readable storage medium, having one or more computer programs stored thereon. When the one or more computer programs are executed by a processor, the method described above is executed.
- FIG. 1 is a flowchart illustrating a depth information calculation method based on a light-field-binocular system according to embodiments of the present disclosure.
- FIG. 2 is a schematic diagram illustrating a depth information calculation method based on a light-field-binocular system according to embodiments of the present disclosure.
- FIG. 3 is a block diagram illustrating a cross-baseline method as illustrated in FIG. 2 according to embodiments of the present disclosure.
- FIG. 4 illustrates examples of an input image according to embodiments of the present disclosure.
- FIG. 5 is a schematic diagram illustrating a correspondence between EPI and variance according to embodiments of the present disclosure.
- FIG. 6 is a schematic diagram illustrating EPI with monocular information according to embodiments of the present disclosure.
- FIG. 7 is a schematic diagram illustrating a depth distribution in a spatial domain and a minimum variance distribution in an EPI domain according to embodiments of the present disclosure.
- FIG. 8 illustrates a comparison among a reference depth map, a part of an inputted raw image, a result according to embodiments of the present disclosure, and results of other methods.
- FIG. 9 is a diagram illustrating a depth information calculation device based on a light-field-binocular system according to embodiments of the present disclosure.
- FIG. 10 illustrates comparison among an image according to embodiments of the present disclosure and images according to other methods.
- FIG. 11 is a diagram illustrating an evaluation parameter curve, according to embodiments of the present disclosure, obtained by varying the baseline (B) between the monocular camera and the light field camera.
- FIG. 12 is a schematic diagram illustrating a depth information calculation device based on a light-field-binocular system according to embodiments of the present disclosure.
- FIG. 13 is a block diagram illustrating an electronic device according to embodiments of the present disclosure.
- Stereo matching may be combined with the light field for large-scale depth range estimation, although such solutions may require more than one light field camera.
- Yan et al. obtained good depth information for a near-distance object by moving the light field camera.
- Dansereau et al. adopted three light field cameras to directly extract feature information based on a 4D (four-dimensional) model of the light field. Therefore, before discussing the depth information calculation method and device based on a light field binocular system according to embodiments of the present disclosure, some methods for calculating depth based on the light field and methods for calculating depth based on the stereo matching may be briefly introduced.
- Depth detection based on the light field camera is similar to depth inference based on optical flow and stereo matching, but the light field camera provides more viewpoints that are densely distributed, making EPI (epipolar-plane image) analysis possible.
- Perwass and Wietzke adopted correspondence information to estimate depth with the light field camera.
- Bishop et al. proposed to iteratively search for a best match using corresponding clues and filters.
- Tao et al. proposed to estimate depth based on correspondence and defocus clues.
- Wanner et al. adopted depth labeling constraints to estimate the slope and direction of EPI lines.
- Yu et al. studied triangulation of ray space in 3D line geometry to improve reconstruction quality.
- Horn et al. coupled a brightness constancy constraint with a spatial smoothness assumption in an energy function, and proposed a variational optical flow method.
- Black et al. proposed a robust framework to deal with exception values, luminance instability, and spatial discontinuities.
- performing a complete search based on the above method is computationally impractical.
- Bruhn et al. adopted a coarse-to-fine, warping-based approach to make the computation tractable, which has become the most popular framework for optical flow methods.
- Dealing with large parallax has always been an important research direction in stereo matching. Most approaches use sparse points to handle the large parallax, while a few use a dense approximate nearest neighbor field (ANNF) to obtain better results.
- Lu et al. adopted super-pixels to obtain a corresponding edge-aware field.
- Bao et al. adopted edge-aware bilateral data terms.
- Some learning-based methods have also been proposed. However, these methods are trained individually and may not perform well in every application scene. When the training data is based on depth extracted by a conventional stereo matching method, the learning-based method does not perform well, since the conventional stereo matching method cannot acquire the depth information of a near-distance object.
- FIG. 1 is a flowchart illustrating a depth information calculation method based on a light-field-binocular system according to embodiments of the present disclosure.
- the depth information calculation method based on the light-field-binocular system may include the following.
- a light-field-binocular system is constructed by a monocular camera and a light field camera.
- The binocular system formed by a common monocular high-definition camera and a Lytro Illum light field camera is established.
- the method may further include: calibrating internal parameters and external parameters of the light-field-binocular system.
- Images obtained by the light field camera and the monocular camera are calibrated and encoded to generate an image pattern, and images having the image pattern are determined as input images of an algorithm.
- the image obtained by the light field camera and the image obtained by a common high-definition camera may be calibrated and encoded into a new image pattern.
- the images in the new image pattern may be determined as input images of the algorithm.
- Calibrating the images obtained by the light field camera and the monocular camera to generate the image pattern after encoding, and determining the images having the image pattern as the input images of the algorithm, may include: calibrating the images such that a resolution of the image obtained by the light field camera is the same as a resolution of the image obtained by the monocular camera; and indexing data of the four-dimensional (4D) light field in a spatial domain. Each pixel corresponds to a preset spatial position.
- Raw data acquired by the light field camera may be decomposed into 225 sub-images, whose viewpoints are slightly translated from each other in the horizontal and vertical directions.
- the monocular image acquired by the monocular camera and a sub-image at a center of the light field (instead of the light field image) acquired by the light field camera are calibrated to avoid a large re-projection error caused by a small baseline among sub-images of the light field.
- Due to the small baseline, no rotation occurs between different viewpoints of the light field. Therefore, the same calibration parameters may be applied to calibrate the sub-image that is parallel with the monocular camera among the sub-images of the light field.
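As a rough illustration of this calibration step, the following Python sketch pairs the monocular camera with the central sub-aperture view using OpenCV's standard checkerboard-based stereo calibration. It is not the patented procedure itself; the checkerboard size, square size, and all function and variable names are assumptions made for illustration.

```python
# A minimal calibration sketch, assuming checkerboard shots of the same
# scenes from the monocular camera and from the central sub-aperture view
# of the light field camera. Pattern size and names are illustrative.
import cv2
import numpy as np

PATTERN = (9, 6)        # assumed inner-corner count of the checkerboard
SQUARE_SIZE = 0.025     # assumed square size in meters

def find_corners(images):
    """Detect checkerboard corners; assumes detection succeeds in every
    image so that the two cameras' point lists stay aligned."""
    obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
    obj *= SQUARE_SIZE
    obj_pts, img_pts = [], []
    for img in images:
        ok, corners = cv2.findChessboardCorners(img, PATTERN)
        if ok:
            obj_pts.append(obj)
            img_pts.append(corners)
    return obj_pts, img_pts

def calibrate_pair(mono_views, center_views):
    obj_pts, mono_pts = find_corners(mono_views)
    _, lf_pts = find_corners(center_views)
    size = mono_views[0].shape[::-1]   # (width, height) of grayscale views
    # Internal parameters of each camera, then the external parameters
    # (R, T) between them; thanks to the tiny intra-light-field baseline,
    # the same parameters can be reused for the other sub-aperture views.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, mono_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, lf_pts, size, None, None)
    ret = cv2.stereoCalibrate(obj_pts, mono_pts, lf_pts, K1, d1, K2, d2,
                              size, flags=cv2.CALIB_FIX_INTRINSIC)
    R, T = ret[5], ret[6]
    return K1, d1, K2, d2, R, T
```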
- the block diagram explaining the cross-baseline method will be described in FIG. 3 .
- a far-distance disparity map is obtained based on binocular information from the input images, a first confidence is set for each pixel in the disparity map, and a first target confidence is obtained.
- a far-distance disparity map I stereo may be obtained using the binocular information from the input images.
- the first confidence C 1 may be set for each pixel in the disparity map and the first target confidence C stereo may be obtained.
- the first confidence is first proposed in embodiments of the present disclosure.
- The sub-images of the light field may be up-sampled to have a resolution the same as the resolution of the monocular high-definition image.
- data of the 4D (four-dimensional) light field is indexed in the spatial domain.
- Each pixel corresponds to a spatial position.
- The 9×9 sub-images corresponding to the views positioned closest to the center view are selected from the 225 sub-images.
- The 9×9 sub-pixel array (the term "pixel" used herein refers to a set of sub-pixels in an angular domain) represents a pixel of one input image.
- The first 9 rows of sub-pixels are from 81 different viewpoints of the light field camera having a small baseline, while the 9 sub-pixels on the last row represent angular information of the other baseline defined by the monocular camera, as sketched below.
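A minimal sketch of assembling this hybrid input, assuming the 225 sub-aperture images form a 15×15 grid already up-sampled and rectified to the monocular resolution; replicating the monocular image across the last angular row is one plausible encoding, and all names and shapes are illustrative assumptions.

```python
# Assemble, per spatial pixel, a 10x9 angular block: 9x9 light-field
# sub-pixels plus one extra row carrying the monocular (large-baseline)
# information. Shapes and the last-row encoding are assumptions.
import numpy as np

def build_hybrid_pattern(lf_views, mono_img):
    """lf_views: (15, 15, H, W) sub-aperture images at monocular resolution;
    mono_img: (H, W). Returns an (H, W, 10, 9) hybrid pattern."""
    assert lf_views.shape[:2] == (15, 15)
    center = lf_views[3:12, 3:12]                       # 9x9 views nearest the center
    h, w = mono_img.shape
    pattern = np.empty((h, w, 10, 9), dtype=lf_views.dtype)
    pattern[..., :9, :] = center.transpose(2, 3, 0, 1)  # small-baseline rows
    pattern[..., 9, :] = mono_img[..., None]            # large-baseline row
    return pattern
```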
- a range for calculating disparity is changed.
- a new disparity value is determined based on the light field information of the input images.
- An updated depth value is calculated based on the new disparity value to replace an original depth value corresponding to the pixel and a second target confidence of the pixel is calculated.
- the first confidence C 1 and the first target confidence C stereo may be calculated by:
- the range for calculating the disparity may be changed when a pixel has a low first confidence.
- a new disparity value may be determined based on the light field information of the input images, which may be represented by I if .
- the new disparity value is converted to a depth value to replace the original depth value corresponding to the pixel.
- the second target confidence C if of the pixel is obtained.
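The per-pixel replacement logic may be pictured as in the sketch below: where the EPI minimum variance (which serves as the first confidence C 1) exceeds the threshold Var th discussed with the variance equations later, the pixel's index and confidence are taken from the light-field estimate instead. This is a hedged sketch; the array names and thresholding convention are assumptions drawn from that variance discussion.

```python
# A minimal sketch of the confidence-gated replacement: a large minimum
# variance marks an unreliable binocular match, so those pixels take the
# light-field index I_lf and confidence C_lf instead. Names are assumed.
import numpy as np

def gate_and_replace(I_stereo, C1, C_stereo, I_lf, C_lf, var_th):
    unreliable = C1 > var_th                      # C1 = Var_min per eq. (2)
    index_map = np.where(unreliable, I_lf, I_stereo)
    confidence = np.where(unreliable, C_lf, C_stereo)
    return index_map, confidence
```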
- The correspondence between the EPI and the variance is illustrated in FIG. 5.
- The EPI with monocular information is illustrated in FIG. 6.
- The depth distribution in the spatial domain and the minimum variance distribution in the EPI domain are illustrated in FIG. 7.
- A variance may be obtained for each pixel under each stretch. For example, res variances in total may be obtained after stretching the EPI res times.
- An index I stereo corresponding to the minimum variance among the res variances may be determined, which is proportional to the disparity value at the pixel.
- A ratio of the average variance to the minimum variance may be determined as the first target confidence C stereo.
- The minimum variance of the res variances may be determined as the first confidence C 1.
- Pixels whose first confidence is greater than a threshold Var th may be selected. For each of these selected pixels, the disparity value is so large that the minimum variance is too large, which requires calculating a new disparity value.
- I stereo = index(Var min)  (1)
- C 1 = Var min  (2)
- C stereo = Var mean/Var min  (3)
- Var min is a minimum value of variances
- Var mean is an average value of variances.
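A minimal sketch of equations (1) through (3), assuming the EPI "stretching" can be approximated by integer shifts of each view under res disparity hypotheses; all names are illustrative assumptions and border wrap-around is ignored.

```python
# For each pixel of one image row, test `res` disparity hypotheses by
# shifting the angular stack into alignment and measuring its variance;
# the arg-min gives the index map and the confidences of eqs. (1)-(3).
import numpy as np

def epi_disparity_and_confidence(pattern_row, res):
    """pattern_row: (W, V) angular stack of one row (V viewpoints).
    Returns the index map I, C1 = Var_min, and C_stereo = Var_mean/Var_min."""
    w, v = pattern_row.shape
    variances = np.empty((res, w))
    offsets = np.arange(v) - v // 2       # signed view offsets from center
    for d in range(res):
        aligned = np.stack([np.roll(pattern_row[:, k], -offsets[k] * d)
                            for k in range(v)], axis=1)
        variances[d] = aligned.var(axis=1)
    I = variances.argmin(axis=0)                               # eq. (1)
    C1 = variances.min(axis=0)                                 # eq. (2)
    C_stereo = variances.mean(axis=0) / np.maximum(C1, 1e-12)  # eq. (3)
    return I, C1, C_stereo
```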
- For these pixels, the disparity value is calculated using the light field information of the aforementioned input images.
- The calculation method is likewise to determine the index corresponding to the minimum variance after the EPI is stretched res times, which is denoted as I if .
- The second target confidence corresponding to the respective pixel is denoted as C if .
- the index I if is proportional to the disparity value of the point of the pixel.
- the disparity maps obtained based on a binocular manner and a light field manner are combined on a same unit into an index map.
- the first confidence and the first target confidence are combined into a confidence map.
- the index map and the confidence map are optimized to obtain a final disparity map.
- the final disparity map is converted into a final depth map.
- a conversion formula of converting the disparity map into the depth map is represented by:
- Depth = f·B/D (for the binocular disparity) or Depth = f·b/D (for the light field disparity), where D is a disparity map, f is a focal length of the camera, B is the baseline between the monocular camera and the light field camera, and b is the baseline between sub-images of the light field camera.
- the disparity maps obtained based on the binocular manner and the light field manner are combined on the same unit to obtain the index map with the conversion formula, and the confidences C stereo and C if may be combined into the confidence map.
- the obtained index map and the obtained confidence map are optimized to obtain the final disparity map with two algorithms.
- the final disparity map is converted to the final depth map.
- The indexes I if and I stereo may be converted into the disparity value d corresponding to each pixel using formula (4). All disparity values form the disparity map (index map).
- the second target confidence C if obtained at block 104 replaces the first target confidence C stereo to obtain the final confidence map.
- the above may be represented by an equation (4):
- Two disparity maps D 1 and D 2 may be obtained using two optimization algorithms: a bilateral solver, and a super-pixel (SP)-wise WLS (weighted least squares) optimization.
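The two optimized maps can then be blended into a single map. Below is a minimal sketch of the geometric combination D = D 1^α × D 2^(1−α) given as equation (5) later in the description, with α assumed to be a weight in [0, 1]; the function name is illustrative.

```python
# Blend the two optimized disparity maps by the geometric mean of eq. (5);
# alpha = 0.5 weights them equally. Clipping guards invalid (zero) pixels.
import numpy as np

def fuse_disparities(D1, D2, alpha=0.5):
    D1 = np.clip(D1, 1e-6, None)
    D2 = np.clip(D2, 1e-6, None)
    return D1 ** alpha * D2 ** (1.0 - alpha)
```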
- the final disparity map may be converted into the final depth map using an equation (6):
- D is the final disparity map
- f is a focal length of the camera
- B is a baseline between the monocular camera and the light field camera
- b is a baseline between sub-images of the light field camera.
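A minimal sketch of this final conversion, assuming the disparity is expressed between the monocular camera and the light field camera (the small intra-light-field baseline b would take the place of B for disparities between sub-aperture views):

```python
# Depth = f * B / D for each pixel; eps guards division by zero where the
# disparity is invalid. Units follow whatever f and B are expressed in.
import numpy as np

def disparity_to_depth(D, f, B, eps=1e-6):
    return f * B / np.maximum(D, eps)
```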
- FIG. 8 illustrates a comparison among a reference depth map, a part of an inputted raw image, a result according to embodiments of the present disclosure, and results of other methods.
- FIG. 9 illustrates a depth information calculation device based on a light-field-binocular system according to embodiments of the present disclosure.
- FIG. 10 illustrates comparison among an image according to embodiments of the present disclosure and images according to other methods.
- FIG. 11 illustrates an evaluation parameter curve, according to embodiments of the present disclosure, obtained by varying the baseline (B) between the monocular camera and the light field camera.
- Far-distance depth and near-distance depth may be obtained by a light field camera system under different baseline sizes.
- A light field camera (Lytro Illum) available on the market and a monocular camera available on the market may be used together.
- The minimum variance in the EPI domain is used as the basis for determining whether the depth of each pixel should be calculated using the cross-baseline camera information.
- accurate near-distance depth information may be obtained using the light field camera, and far-distance depth information may be obtained using a binocular camera.
- accurate near-distance depth information and accurate far-distance depth information may be acquired.
- accurate depth information may be obtained without increasing computation amounts.
- Accurate far-distance depth information may be obtained.
- FIG. 12 is a schematic diagram of a depth information calculation device based on a light-field-binocular system according to embodiments of the present disclosure.
- the depth information calculation device 10 based on the light-field-binocular system may include: a constructing module 100 , a calibrating module 200 , a setting module 300 , a changing module 400 , and a combining module 500 .
- the constructing module 100 may be configured to construct a light-field-binocular system by a monocular camera and a light field camera.
- the calibrating module 200 may be configured to calibrate images obtained by the light field camera and the monocular camera to generate an image pattern after encoding the calibrated images and to determine images having the image pattern as input images of an algorithm.
- the setting module 300 may be configured to obtain a far-distance disparity map based on binocular information of the input images, set a respective first confidence for each pixel in the disparity map, and obtain a first target confidence.
- the changing module 400 may be configured to, in response to detecting that the first confidence of a pixel is smaller than a preset value, change a range for calculating the disparity map, to determine a new disparity value based on the light field information of the input images, to determine an updated depth value based on the new disparity value to replace an original depth value of the pixel, and obtain a second target confidence of the pixel.
- the combining module 500 may be configured to combine the disparity maps obtained based on a binocular manner and a light field manner on a same unit into an index map, and to combine the first confidence and the first target confidence into a confidence map, to optimize the index map and the confidence map to obtain a final disparity map, and to convert the final disparity map into a final depth map.
- The device 10 may obtain accurate far-distance depth information without increasing the amount of computation. In addition, not only accurate near-distance depth information, but also accurate far-distance depth information may be obtained.
- the device 10 may further include: a calibrating unit.
- the calibrating unit may be configured to, after constructing the light-field-binocular system, calibrate internal parameters and external parameters of the light-field-binocular system.
- The calibrating module 200 is further configured to: calibrate the images such that a resolution of the image obtained by the light field camera is the same as a resolution of the image obtained by the monocular camera; and to index data of the four-dimensional light field in a spatial domain. Each pixel corresponds to a preset spatial position.
- The first confidence and the first target confidence may be calculated by formulas (2) and (3) above.
- a conversion formula converting the disparity map into the depth map is represented by:
- Depth = f·B/D (for the binocular disparity) or Depth = f·b/D (for the light field disparity), where D is a disparity map, f is a focal length of the camera, B is the baseline between the monocular camera and the light field camera, and b is the baseline between sub-images of the light field camera.
- accurate near-distance depth information may be obtained using the light field camera, and far-distance depth information may be obtained using a binocular camera.
- accurate near-distance depth information and accurate far-distance depth information may be acquired.
- accurate depth information may be obtained without increasing computation amounts.
- Accurate far-distance depth information may be obtained.
- FIG. 13 is a block diagram illustrating an electronic device according to embodiments of the present disclosure.
- the electronic device 12 illustrated in FIG. 13 is only illustrated as an example, and should not be considered as any restriction on the function and the usage range of embodiments of the present disclosure.
- the electronic device 12 is in the form of a general-purpose computing apparatus.
- the electronic device 12 may include, but is not limited to, one or more processors or processing units 16 , a system memory 28 , and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16 ).
- The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port (AGP), a processor bus, or a local bus using any of a variety of bus architectures.
- these architectures include, but are not limited to, an industry standard architecture (ISA) bus, a micro-channel architecture (MCA) bus, an enhanced ISA bus, a video electronics standards association (VESA) local bus, and a peripheral component interconnect (PCI) bus.
- the electronic device 12 may include multiple kinds of computer-readable media. These media may be any storage media accessible by the electronic device 12 , including transitory or non-transitory storage medium and movable or unmovable storage medium.
- the memory 28 may include a computer-readable medium in a form of volatile memory, such as a random access memory (RAM) 30 and/or a high-speed cache memory 32 .
- the electronic device 12 may further include other transitory/non-transitory storage media and movable/unmovable storage media.
- the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in the figure, commonly referred to as “hard disk drives”).
- Although not shown in FIG. 13, a disk drive for reading and writing removable non-volatile magnetic disks (e.g., "floppy disks") may be provided, as well as an optical drive for reading and writing removable non-volatile optical disks (e.g., a CD-ROM, a DVD-ROM, or other optical media).
- each driver may be connected to the bus 18 via one or more data medium interfaces.
- the memory 28 may include at least one program product, which has a set of (for example at least one) program modules configured to perform the functions of embodiments of the present disclosure.
- A program/application 40 having a set of (at least one) program modules 42 may be stored in the memory 28. The program modules 42 may include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and any one or a combination of these examples may include an implementation in a network environment.
- the program modules 42 are generally configured to implement functions and/or methods described in embodiments of the present disclosure.
- The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. This kind of communication can be achieved through the input/output (I/O) interface 22.
- The electronic device 12 may be connected to and communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 20.
- As illustrated in FIG. 13, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18.
- Other hardware and/or software modules may be used in combination with the electronic device 12, including, but not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
- the processing unit 16 can perform various functional applications and data processing by running programs stored in the system memory 28 , for example, to perform the depth information calculation method based on a light-field-binocular system according to embodiments of the present disclosure.
- Embodiments of the present disclosure provide a non-transitory computer storage medium.
- the computer storage medium of embodiments of the present disclosure may adopt any combination of one or more computer readable media.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- the computer readable storage medium may be, but is not limited to, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, component or any combination thereof.
- Specific examples of the computer readable storage medium include (a non-exhaustive list): an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM) or a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical memory component, a magnetic memory component, or any suitable combination thereof.
- The computer readable storage medium may be any tangible medium including or storing programs. The programs may be used by an instruction execution system, apparatus, or device, or in connection therewith.
- The computer readable signal medium may include a data signal propagating in baseband or as part of a carrier wave, which carries computer readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof.
- The computer readable signal medium may also be any computer readable medium other than the computer readable storage medium, which may send, propagate, or transport programs used by an instruction execution system, apparatus, or device, or in connection therewith.
- the program code stored on the computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination thereof.
- the computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages.
- the programming language includes an object-oriented programming language, such as Java, Smalltalk, C++, as well as conventional procedural programming language, such as “C” language or similar programming language.
- The program code may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- The remote computer may be connected to the user's computer, or to an external computer (for example, over the Internet using an Internet service provider), through any kind of network, including a local area network (LAN) or a wide area network (WAN).
- Terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance.
- A feature defined with "first" or "second" may include one or more of this feature.
- "A plurality of" means at least two, for example, two or three, unless specified otherwise.
Description
where Var min is the minimum value of the variances, Var mean is the average value of the variances, C 1 is the first confidence, and C stereo is the first target confidence.
D = D 1 ^α × D 2 ^(1−α)  (5)
where D is the final disparity map, D 1 and D 2 are the two optimized disparity maps, f is the focal length of the camera, B is the baseline between the monocular camera and the light field camera, and b is the baseline between sub-images of the light field camera.
Claims (15)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911005533.8 | 2019-10-22 | ||
CN201911005533.8A CN111028281B (en) | 2019-10-22 | 2019-10-22 | Depth information calculation method and device based on light field binocular system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210118162A1 US20210118162A1 (en) | 2021-04-22 |
US11461911B2 true US11461911B2 (en) | 2022-10-04 |
Family
ID=70201459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/034,563 Active 2041-06-12 US11461911B2 (en) | 2019-10-22 | 2020-09-28 | Depth information calculation method and device based on light-field-binocular system |
Country Status (2)
Country | Link |
---|---|
US (1) | US11461911B2 (en) |
CN (1) | CN111028281B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102550678B1 (en) | 2020-01-22 | 2023-07-04 | 노다르 인크. | Non-Rigid Stereo Vision Camera System |
CN112530008B (en) * | 2020-12-25 | 2024-10-18 | 中国科学院苏州纳米技术与纳米仿生研究所 | Method, device, equipment and storage medium for determining parameters of stripe structured light |
CN113096175B (en) * | 2021-03-24 | 2023-10-24 | 苏州中科广视文化科技有限公司 | Depth map confidence estimation method based on convolutional neural network |
US11577748B1 (en) * | 2021-10-08 | 2023-02-14 | Nodar Inc. | Real-time perception system for small objects at long range for autonomous vehicles |
CN114173106B (en) * | 2021-12-01 | 2022-08-05 | 北京拙河科技有限公司 | Real-time video stream fusion processing method and system based on light field camera |
CN114529595B (en) * | 2022-02-10 | 2025-04-22 | 展讯通信(上海)有限公司 | Depth map processing method, device and electronic device |
CN114742847B (en) * | 2022-04-18 | 2025-02-14 | 北京信息科技大学 | A light field cutout method and device based on empty angle consistency |
CN114897952B (en) * | 2022-05-30 | 2023-04-04 | 中国测绘科学研究院 | Method and system for estimating accurate depth of single light field image in self-adaptive shielding manner |
US11782145B1 (en) | 2022-06-14 | 2023-10-10 | Nodar Inc. | 3D vision system with automatically calibrated stereo vision sensors and LiDAR sensor |
CN116366826A (en) * | 2023-04-11 | 2023-06-30 | 北京字跳网络技术有限公司 | Image processing method, device, server and storage medium |
CN119383325B (en) * | 2024-12-31 | 2025-03-21 | 汇智天下(杭州)科技有限公司 | Three-dimensional light field display optimization method, system, electronic device and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106023189B (en) * | 2016-05-17 | 2018-11-09 | 北京信息科技大学 | A kind of light field data depth reconstruction method based on matching optimization |
CN109840922B (en) * | 2018-01-31 | 2021-03-02 | 中国科学院计算技术研究所 | Depth acquisition method and system based on binocular field camera |
CN108564620B (en) * | 2018-03-27 | 2020-09-04 | 中国人民解放军国防科技大学 | A Scene Depth Estimation Method for Light Field Array Cameras |
-
2019
- 2019-10-22 CN CN201911005533.8A patent/CN111028281B/en active Active
-
2020
- 2020-09-28 US US17/034,563 patent/US11461911B2/en active Active
Patent Citations (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8355565B1 (en) * | 2009-10-29 | 2013-01-15 | Hewlett-Packard Development Company, L.P. | Producing high quality depth maps |
US9165401B1 (en) * | 2011-10-24 | 2015-10-20 | Disney Enterprises, Inc. | Multi-perspective stereoscopy from light fields |
US8619082B1 (en) * | 2012-08-21 | 2013-12-31 | Pelican Imaging Corporation | Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation |
US9519972B2 (en) * | 2013-03-13 | 2016-12-13 | Kip Peli P1 Lp | Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies |
US20150077522A1 (en) * | 2013-09-18 | 2015-03-19 | Kabushiki Kaisha Toshiba | Solid state imaging device, calculating device, and calculating program |
US10091484B2 (en) * | 2013-10-30 | 2018-10-02 | Tsinghua University | Method for acquiring comfort degree of motion-sensing binocular stereoscopic video |
US9792685B2 (en) * | 2014-01-28 | 2017-10-17 | Altek Semiconductor Corp. | Image capturing device and method for calibrating image deformation thereof |
US9804395B2 (en) * | 2014-01-29 | 2017-10-31 | Ricoh Co., Ltd | Range calibration of a binocular optical augmented reality system |
US9774837B2 (en) * | 2014-04-17 | 2017-09-26 | Electronics And Telecommunications Research Institute | System for performing distortion correction and calibration using pattern projection, and method using the same |
US9918072B2 (en) * | 2014-07-31 | 2018-03-13 | Samsung Electronics Co., Ltd. | Photography apparatus and method thereof |
US9704250B1 (en) * | 2014-10-30 | 2017-07-11 | Amazon Technologies, Inc. | Image optimization techniques using depth planes |
US10805589B2 (en) * | 2015-04-19 | 2020-10-13 | Fotonation Limited | Multi-baseline camera array system architectures for depth augmentation in VR/AR applications |
US10326979B2 (en) * | 2016-05-23 | 2019-06-18 | Microsoft Technology Licensing, Llc | Imaging system comprising real-time image registration |
US10602126B2 (en) * | 2016-06-10 | 2020-03-24 | Lucid VR, Inc. | Digital camera device for 3D imaging |
US10855909B2 (en) * | 2016-07-29 | 2020-12-01 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for obtaining binocular panoramic image, and storage medium |
US10482627B2 (en) * | 2016-09-22 | 2019-11-19 | Samsung Electronics Co., Ltd | Method and electronic device for calibration of stereo camera |
US10728520B2 (en) * | 2016-10-31 | 2020-07-28 | Verizon Patent And Licensing Inc. | Methods and systems for generating depth data by converging independently-captured depth maps |
US10645368B1 (en) * | 2017-06-02 | 2020-05-05 | Shanghaitech University | Method and apparatus for estimating depth of field information |
US10659677B2 (en) * | 2017-07-21 | 2020-05-19 | Panasonic Intellectual Property Managment Co., Ltd. | Camera parameter set calculation apparatus, camera parameter set calculation method, and recording medium |
US20190182475A1 (en) * | 2017-12-12 | 2019-06-13 | Black Sesame International Holding Limited | Dual camera system for real-time depth map generation |
US20200357141A1 (en) * | 2018-01-23 | 2020-11-12 | SZ DJI Technology Co., Ltd. | Systems and methods for calibrating an optical system of a movable object |
US20220178688A1 (en) * | 2018-03-02 | 2022-06-09 | Beijing Tusen Zhitu Technology Co., Ltd. | Method and apparatus for binocular ranging |
US11308579B2 (en) * | 2018-03-13 | 2022-04-19 | Boe Technology Group Co., Ltd. | Image stitching method, image stitching apparatus, display apparatus, and computer product |
US20210274150A1 (en) * | 2018-06-29 | 2021-09-02 | Logistics and Supply Chain MultiTech R&D Centre Limited | Multimodal imaging sensor calibration method for accurate image fusion |
US20200036905A1 (en) * | 2018-07-24 | 2020-01-30 | Black Sesame International Holding Limited | Three camera alignment in mobile devices |
US10764559B2 (en) * | 2018-10-23 | 2020-09-01 | Xi'an Jiaotong University | Depth information acquisition method and device |
US20210321078A1 (en) * | 2018-12-29 | 2021-10-14 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for camera calibration |
US20220046220A1 (en) * | 2019-11-22 | 2022-02-10 | Dalian University Of Technology | Multispectral stereo camera self-calibration algorithm based on track feature registration |
US20210327136A1 (en) * | 2020-04-17 | 2021-10-21 | Mvtec Software Gmbh | System and method for efficient 3d reconstruction of objects with telecentric line-scan cameras |
US20210385425A1 (en) * | 2020-06-05 | 2021-12-09 | Beijing Smarter Eye Technology Co. Ltd. | Method and device for calibrating binocular camera |
US20220044433A1 (en) * | 2020-08-04 | 2022-02-10 | Beijing Smarter Eye Technology Co. Ltd. | Method and system for distance measurement based on binocular camera, device and computer-readable storage medium |
US20220046219A1 (en) * | 2020-08-07 | 2022-02-10 | Owl Autonomous Imaging, Inc. | Multi-aperture ranging devices and methods |
Non-Patent Citations (17)
Title |
---|
"Single Lens 3D Camera with Extended Depth of Field", Powerpoint, Website: www.raytrix.de, raytrix, 2012. |
Bao, L. et al., "Fast Edge-Preserving PatchMatch for Large Displacement Optical Flow", CVPR 2014; DOI: 10.1109/TIP.2014.2359374, IEEE Transactions on Image Processing, vol. 23, issue 12, Dec. 2014. |
Horn, B. et al., "Determining Optical Flow", Artificial Intelligence, vol. 17, issues 1-3, Aug. 1981, pp. 185-203, https://doi.org/10.1016/0004-3702(81)90024-2. |
Bishop, T. et al., "The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution", IEEE Transactions on Pattern Analysis and Machine Intelligence ( vol. 34, Issue: 5, May 2012). |
Black, M. et al., "The Robust Estimation of Multiple Motions: Parametric and Piecewise-Smooth Flow Fields", Computer Vision and Image Understanding (vol. 63, Issue 1, Jan. 1996, pp. 75-104). |
Bruhn, A. et al., "Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods", International Journal of Computer Vision 61(3), 211-231, 2005. |
Chen, C. et al., "Light Field Stereo Matching Using Bilateral Statistics of Surface Cameras", CVPR 2014, DOI: 10.1109/CVPR.2014.197, 2014 IEEE Conference on Computer Vision and Pattern Recognition. |
Chen, J. et al., "Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions", arXiv:1708.01964v1 [cs.CV], Aug. 7, 2017. |
Dansereau, D. et al., "LiFF: Light Field Features in Scale and Depth", arXiv:1901.03916v1 [cs.CV], Jan. 13, 2019. |
Kim, C., et al., "Scene Reconstruction from High Spatio-Angular Resolution Light Fields" ACM Transactions on Graphics, Article No. 73, https://doi.org/10.1145/2461912.2461926, Jul. 2013. |
Lu, J. et al., "PatchMatch Filter: Efficient Edge-Aware Filtering Meets Randomized Search for Fast Correspondence Field Estimation", DOI: 10.1109/CVPR.2013.242, 2013 IEEE Conference on Computer Vision and Pattern Recognition. |
Scott McCloskey, "Masking Light Fields to Remove Partial Occlusion", DOI: 10.1109/ICPR.2014.358, 2014 22nd International Conference on Pattern Recognition. |
Tao, M. et al., "Depth from Combining Defocus and Correspondence Using Light-Field Cameras", ICCV 2013, Proceedings of the 2013 IEEE International Conference on Computer Vision, DOI: 10.1109/ICCV.2013.89, Dec. 2013. |
Wang, T. et al., "Light Field Video Capture Using a Learning-Based Hybrid Imaging System", arXiv:1705.02997v1 [cs.CV], https://doi.org/http://dx.doi.org/10.1145/3072959.3073614, May 8, 2017. |
Wang, TC. et al. "Depth Estimation with Occlusion Modeling Using Light-Field Cameras", Abstract, DOI : 10.1109/TPAMI.2016.2515615, IEEE Transactions on Pattern Analysis and Machine Intelligence ( vol. 38, Issue: 11, Nov. 1, 2016). |
Wanner, S. et al., "Globally Consistent Depth Labeling of 4D Light Fields", Heidelberg Collaboratory for Image Processing, DOI: 10.1109/CVPR.2012.6247656, 2012 IEEE Conference on Computer Vision and Pattern Recognition. |
Yu, Z. et al., "Line Assisted Light Field Triangulation and Stereo Matching", DOI: 10.1109/ICCV.2013.347, 2013 IEEE International Conference on Computer Vision. |
Also Published As
Publication number | Publication date |
---|---|
US20210118162A1 (en) | 2021-04-22 |
CN111028281B (en) | 2022-10-18 |
CN111028281A (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11461911B2 (en) | Depth information calculation method and device based on light-field-binocular system | |
US11321937B1 (en) | Visual localization method and apparatus based on semantic error image | |
WO2021233029A1 (en) | Simultaneous localization and mapping method, device, system and storage medium | |
US20210110599A1 (en) | Depth camera-based three-dimensional reconstruction method and apparatus, device, and storage medium | |
US8355565B1 (en) | Producing high quality depth maps | |
US8463024B1 (en) | Combining narrow-baseline and wide-baseline stereo for three-dimensional modeling | |
CN108965853B (en) | Integrated imaging three-dimensional display method, device, equipment and storage medium | |
CN111239684A (en) | Binocular fast distance measurement method based on YoloV3 deep learning | |
CN116071721A (en) | Transformer-based high-precision map real-time prediction method and system | |
Bermudez-Cameo et al. | Automatic line extraction in uncalibrated omnidirectional cameras with revolution symmetry | |
CN117132649A (en) | Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion | |
WO2025002194A1 (en) | Scene reconstruction method and apparatus, and storage medium and electronic device | |
CN116704048A (en) | Double-light registration method | |
US20240242318A1 (en) | Face deformation compensating method for face depth image, imaging device, and storage medium | |
US20230035477A1 (en) | Method and device for depth map completion | |
KR20230006628A (en) | method and device for processing image, electronic equipment, storage medium and computer program | |
CN114549927A (en) | Feature detection network training, augmented reality virtual reality registration tracking and occlusion processing methods | |
US20240005541A1 (en) | Image depth prediction method and electronic device | |
CN114612572B (en) | A laser radar and camera extrinsic parameter calibration method and device based on deep learning | |
CN116912645A (en) | Three-dimensional target detection method and device integrating texture and geometric features | |
CN113763468B (en) | Positioning method, device, system and storage medium | |
CN110689513B (en) | Color image fusion method and device and terminal equipment | |
CN115438712A (en) | Perception fusion method, device and equipment based on convolution neural network and vehicle-road cooperation and storage medium | |
Arslan | Accuracy assessment of single viewing techniques for metric measurements on single images | |
CN116105720B (en) | Low-illumination scene robot active vision SLAM method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TSINGHUA UNIVERSITY, CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FANG, LU;JIN, DINGJIAN;ZHANG, ANKE;AND OTHERS;REEL/FRAME:053902/0663 Effective date: 20200423 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |