US6205241B1 - Compression of stereoscopic images
- Publication number
- US6205241B1 (application US09/088,617)
- Authority
- US
- United States
- Prior art keywords
- image
- region
- images
- epipolar
- stereoscopic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/005—Statistical coding, e.g. Huffman, run length coding
Abstract
A set of stereoscopic images (300) is compressed. For apparent points (404) in the set of stereoscopic images (300), a region (502) is identified in each image (300) which represents at least the apparent point (404). The locations of these regions (502) within the images (300), together with the geometry of the vantage point (306) locations, specify the apparent depths (416) of the apparent points (404) in the scene. Information relating to the apparent depths (416) is recorded for the apparent points (404). This recorded depth information, together with just one of the stereoscopic images (300), can be used to later reconstruct the other stereoscopic image (300) for stereoscopic viewing. The set of stereoscopic images (300) can be still images or moving images, and they can be captured digitally, scanned from photographs, or computer-generated.
Description
This invention pertains to the field of image compression. More specifically, this invention pertains to a range-based method of compressing a stereoscopic set of images.
Stereoscopic photography has been practiced since almost the beginning of photography itself. Early stereoscopic viewers allowed users to view scenic locations with a realism lacking in ordinary photography. Modern versions of stereoscopic viewers, such as the View-Master, produced by Tyco Toys, have long been staples of the toy industry. Advances in technology have produced such variations as “3-D” movies and, more recently, “virtual reality,” or computer generated interactive stereoscopic simulations. As real-time stereoscopic viewers are beginning to find uses in such areas as the medical field, it is apparent that stereoscopic viewing is becoming more common.
The optical phenomenon exploited by the brain to extract depth information from a three dimensional scene is known as “parallax.” As shown in FIG. 1, a person with two functional eyes 402 viewing point 304 sees slightly different images in each eye 402, due to the slightly different angle from each eye 402 to point 304. The apparent location of point 304 is different in each image formed by eyes 402. By analyzing the differences due to parallax, the brain is able to determine the distance to point 304. By photographing, or otherwise recording, a scene from two distinct locations which mimic the locations of eyes 402, as illustrated in FIG. 2, a set of images can be generated which, when viewed properly, can recreate the parallax of the original scene, giving the illusion of three dimensions in the two dimensional images. Each camera 202 uses a lens or lens system 204 to project an image of point 304 onto image plane 308. As illustrated in FIGS. 3a and 3b, each image point 302 of images 300a and 300b represents a point 304 in a three dimensional scene. Each image 300 is associated with a “vantage point” 306, which is the location of the point of view of that image 300. Each image point 302 corresponds to the intersection of an image plane 308 with a “view line” 310. A view line 310 passes through a vantage point 306 and the point 304 in the scene which is represented by image point 302. The view line 310 which passes through a vantage point 306 and intersects image plane 308 perpendicularly defines a “center point” 312 in the image 300 associated with the vantage point 306.
A set of two or more images 300 is “stereoscopic” if they represent substantially parallel views of substantially the same scene, with the vantage points 306 of the images 300 being separated in a direction substantially perpendicular to the direction of the views, this perpendicular direction defining the “epipolar” axis 314.
As illustrated in FIG. 4, when stereoscopic images 300 are viewed with eyes 402 taking the place of vantage points 306 relative to images 300, a viewer perceives apparent points 404 where points 304 had been. Apparent points 404 appear to be at a distance 416 which is proportional to the actual distance 316 of points 304, scaled by the ratio of distance 418 to distance 318, and the ratio of distance 420 to distance 320. Distance 420 is the distance between each of the viewer's eyes 402, and distance 320 is the distance between vantage points 306. Distance 418 is the distance between the viewer's eyes 402 and images 300, and distance 318 is the distance between vantage points 306 and image plane 308.
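The scaling just described can be written out explicitly. As a sketch, using the distance reference numerals as symbols (shorthand introduced here, not the patent's own notation):

```latex
d_{416} \;=\; d_{316} \cdot \frac{d_{418}}{d_{318}} \cdot \frac{d_{420}}{d_{320}}
```

In particular, viewing geometry that matches the capture geometry (d_418 = d_318 and d_420 = d_320) reproduces the original depths exactly.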
Stereoscopic systems require the use of at least two stereoscopic images 300 to create the illusion of three dimensional apparent points 404. Graphic images typically contain a large amount of information. Because of this, the storage and transmission of graphic images generally benefit from the use of compression techniques which reduce the amount of information necessary to reconstruct an image. A compressed graphics file contains less information than an uncompressed image, but it can be used to recreate, either perfectly or with losses, the uncompressed image. Because multiple graphic images are required by stereoscopic systems, the image storage and transmission requirements of such systems are twice the image storage and transmission requirements of ordinary monocular images. As such, stereoscopic systems are especially prone to benefit from image compression techniques. What is needed is an image compression technique which is especially suited to stereoscopic images, taking advantage of the high level of redundancy in stereoscopic image sets.
In one embodiment, the present invention comprises a method for compressing a set of stereoscopic images (300). For at least one of the apparent points (404) represented in the set of stereoscopic images (300), a region (502) which represents at least the apparent point (404) is identified in each image (300). The locations of these regions (502) within the images (300), together with the geometry of the vantage point (306) locations relative to image plane (308), specify the apparent depths (416) of the apparent points (404) in the scene. Information relating to the apparent depths (416) is recorded for the apparent points (404). This recorded depth information, together with just one of the stereoscopic images (300), can be used to later reconstruct the other stereoscopic image (300) for stereoscopic viewing.
The set of stereoscopic images (300) can be still images or moving images, and they can be captured digitally, scanned from photographs, or computer-generated.
FIG. 1 is an illustration of a pair of eyes 402 viewing a point 304.
FIG. 2 is an illustration of two cameras producing a stereoscopic set of images of a point 304.
FIG. 3a is a planar view of the geometry involved in creating stereoscopic images 300a and 300b.

FIG. 3b is an illustration of stereoscopic images 300a and 300b resulting from the geometry of FIG. 3a.

FIG. 4 is a planar view of a pair of eyes 402 viewing a stereoscopic set of images 300a and 300b.

FIG. 5 is an illustration of a stereoscopic set of images 300a and 300b which contain points 404 bounded by areas 502.
FIG. 6 is an illustration of one embodiment of the invention.
FIG. 7 is an illustration of an undefined region 700 in a reconstructed image 300c which is based on stereoscopic images 300a and 300b.
Referring now to FIGS. 5 and 6, an embodiment of the present invention is described. Camera 608 with two imaging systems is used to capture a stereoscopic set of images 300 based on a real world scene. Camera 608 can capture images 300 in digital form, or analog photographs can be scanned at a later time to produce digital images 300. In other embodiments, images 300 are computer generated images based on a computer model of a three dimensional scene. Images 300 can also be moving images representing either a real world scene or a computer model. Images 300 are stored in digital form in input image memory 604, where they are accessible to central processing unit (CPU) 602. CPU 602 responds to program instructions stored on a computer readable medium, such as program disk 610. In the embodiment illustrated in FIG. 6, input image memory 604, CPU 602, and program disk 610 reside in computer 600. In other embodiments, some or all of these elements are incorporated in camera 608.
Responding to the instructions of the program on disk 610, CPU 602 operates on stereoscopic images 300a and 300b, which each represent at least one apparent point 404 in common. Each image 300a and 300b is examined to determine regions 502. Each region 502a in image 300a represents at least one apparent point 404 which is also represented by a corresponding region 502b in image 300b. There are a number of possible approaches to determining corresponding regions 502. Both object-based and non-object-based approaches are described below, and other approaches can be employed by alternate embodiments of the present invention.
One object-based method for determining corresponding regions 502 is to use standard edge-detection methods to find objects represented in each image 300. The region 502 representing an object in image 300a will likely represent many of the same apparent points 404 as a region 502 representing the same object in image 300b. Edge-detection methods generally analyze small patches of an image to determine contrast. Areas of each image 300 with high contrast are assumed to represent edges of objects. The gradients of contrast in the small patches indicate the direction perpendicular to the object edge. An area of each image 300 which is generally circumscribed by detected edges, and is generally devoid of detected edges on the interior, is assumed to be a region 502 representing an object. In FIG. 5, shaded objects are indicated as being enclosed in regions 502. Less sophisticated object-based methods can be utilized in appropriate circumstances. For example, if each image 300 contains only dark objects against a light background, a luminance threshold can be used to characterize every pixel as either an object pixel or a background pixel. In that case all contiguous object pixels are determined to constitute an area 502 representing a single object.
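As a concrete illustration of the luminance-threshold variant, the following is a minimal sketch assuming a grayscale image with dark objects on a light background; the threshold value and the use of scipy.ndimage.label to group contiguous object pixels are choices made here for illustration, not prescribed by the patent.

```python
import numpy as np
from scipy import ndimage

def find_object_regions(image: np.ndarray, threshold: float = 128.0):
    """Classify each pixel as object (dark) or background (light), then
    treat each set of contiguous object pixels as one region 502
    representing a single object."""
    object_mask = image < threshold            # object pixels are dark
    labels, n_regions = ndimage.label(object_mask)
    # Return one boolean mask per detected region.
    return [labels == k for k in range(1, n_regions + 1)]
```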
After regions 502 which represent objects have been identified in each image 300, it is necessary to match detected objects in image 300a to objects in image 300b. This task is simplified by the geometry of the stereoscopic set-up when the views are substantially parallel. For parallel views, an apparent point 404 will appear in both images 300 at the same vertical location, where vertical is the direction within image plane 308 which is perpendicular to the epipolar direction. This vertical location is indicated by epipolar line 504 in FIG. 5. If the views are almost parallel, the epipolar line 504 along which apparent point 404b can appear in image 300b, given the location of matching apparent point 404a in image 300a, is nearly horizontal. Given accurate information about the alignment of the system used to create images 300, epipolar line 504 is easily calculated, which greatly simplifies the task of matching regions 502. Without accurate alignment information, regions 502 from a larger set of vertical locations in image 300b must be considered as possible matches for a given region 502a in image 300a.
Several techniques can be employed to match regions 502 which have been identified using one of the object-based methods. In one embodiment, the width, height and mean color of target region 502a in image 300a are compared to the width, height and mean color of each region 502b located near the corresponding epipolar line 504 in image 300b. The region 502b which most nearly resembles the target region 502a is considered the match. More sophisticated methods, such as cross-correlation, can also be employed. For example, each element of a two-dimensional Fourier transform of one region 502a is multiplied by the complex conjugate of each corresponding element of a two-dimensional Fourier transform of another region 502b, resulting in a Fourier transform of a cross-correlation. Applying an inverse two-dimensional Fourier transform results in a cross-correlation, the magnitude of which can be used to determine the degree of fit between the two regions 502. Other transforms can also be used in performing cross-correlations, which are useful for determining the best match given a number of possible matches.
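The Fourier-transform procedure above can be sketched in a few lines of numpy; this is a bare illustration (the two regions must be the same size, and practical matchers would add padding, windowing, and normalization):

```python
import numpy as np

def cross_correlation(region_a: np.ndarray, region_b: np.ndarray) -> np.ndarray:
    """Multiply one region's 2-D Fourier transform element-wise by the
    complex conjugate of the other's, giving the transform of their
    cross-correlation; inverting it yields the cross-correlation."""
    fa = np.fft.fft2(region_a)
    fb = np.fft.fft2(region_b)
    return np.real(np.fft.ifft2(fa * np.conj(fb)))

def match_score(region_a: np.ndarray, region_b: np.ndarray) -> float:
    """Peak magnitude of the cross-correlation, used as the degree of
    fit when ranking candidate regions 502b against a target 502a."""
    return float(np.abs(cross_correlation(region_a, region_b)).max())
```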
After regions 502a in image 300a have been matched up with regions 502b in image 300b, the epipolar offset of each region 502a is determined. The epipolar offset is the difference along the epipolar axis 314 between the location of region 502a in image 300a and the location of matching region 502b in image 300b. To determine the epipolar offset of a region 502 in two or more images 300, it is necessary to have a “characteristic” point in each image 300 from which to measure the location of regions 502 within that image 300. The preferred characteristic point is the upper left corner of each image 300. Another possible characteristic point is center point 312, which represents the intersection with the image plane 308 of that view line 310 which is perpendicular to image plane 308, as illustrated in FIG. 3a. A point 304 located an infinite distance in front of image plane 308 would appear at center point 312 in both images 300. Using the characteristic points, the position 506 of each region 502 in each image 300 can be determined.
Non-object-based methods of determining regions 502 and epipolar offsets can also be used. In one such embodiment of the present invention, small areas of image 300a are compared to small areas of image 300b which are located on epipolar lines 504 which correspond to the small areas of image 300a. Correlation techniques like those described above are used to determine which small area in image 300b corresponds to each small area in image 300a. The small areas are determined by applying a grid to each image 300. The small areas can be, at the limit, as small as one pixel each. In cases where areas as small as one pixel are used, however, the correlation techniques will take into account areas surrounding the pixel. Matching small areas are assumed to represent essentially the same apparent points 404.
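A sketch of this grid-based approach follows, assuming parallel, vertically aligned views so that each epipolar line is simply the same row band in both images; the block size, search range, and the sum-of-squared-differences score (used here instead of a full cross-correlation) are illustrative choices:

```python
import numpy as np

def block_epipolar_offsets(img_a: np.ndarray, img_b: np.ndarray,
                           block: int = 8, max_offset: int = 32) -> np.ndarray:
    """For each small grid area of image 300a, search along the
    corresponding epipolar line of image 300b for the best-matching
    area, and record its epipolar offset (x_a - x_b, as in Equation 1
    below)."""
    rows, cols = img_a.shape
    offsets = np.zeros((rows // block, cols // block), dtype=int)
    for by in range(rows // block):
        for bx in range(cols // block):
            y, x = by * block, bx * block
            patch = img_a[y:y + block, x:x + block].astype(float)
            best, best_score = 0, np.inf
            for d in range(-max_offset, max_offset + 1):
                xb = x - d                     # candidate position in 300b
                if 0 <= xb and xb + block <= cols:
                    cand = img_b[y:y + block, xb:xb + block]
                    score = np.sum((patch - cand) ** 2)
                    if score < best_score:
                        best, best_score = d, score
            offsets[by, bx] = best
    return offsets
```

Each grid area can then be kept as its own region 502a, or areas sharing a common offset can be merged into a single region, as described next.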
Having matched small areas in two images 300, the epipolar offset for each small area is easily determined. In one approach, each small area in image 300a is considered an independent region 502a with an independent epipolar offset. In another approach, the collection of all small areas which share a common epipolar offset can be considered to constitute a single region 502a with that epipolar offset. In the second case regions 502a can be abnormally shaped, and might not even be contiguous. In either of these cases, however, each region 502a will have one epipolar offset associated with it.
In image 300a of FIG. 5, the centroid of a region 502a is a distance 506a to the right of the left edge of image 300a. In image 300b, the corresponding region 502b is located a distance 506b to the right of the left edge of image 300b. The difference in locations is distance 506a minus distance 506b, or more generally:

Offset_point = X_A − X_B (Equation 1)

where X_A is the distance of a region 502a to the right of a characteristic point in image 300a, X_B is the distance of the corresponding region 502b to the right of the characteristic point in image 300b, and Offset_point is the epipolar offset of the region 502a in image 300a. Referring to FIG. 3, if distance 320 between the right and left vantage points 306 is Offset_vantage, and distance 318 between vantage points 306 and image plane 308 is Depth_image, then the following relation gives Depth_point, the perpendicular distance 316 from vantage points 306 to a point 304 represented by region 502:

Depth_point = (Offset_vantage × Depth_image) / Offset_point (Equation 2)

From Equation 2, the depth 316 of actual point 304 can be determined.
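A direct transcription of Equations 1 and 2, with hypothetical parameter names; the numeric values in the example (a 65 mm vantage-point separation and 50 mm image-plane distance) are illustrative assumptions, not figures from the patent:

```python
def epipolar_offset(x_a: float, x_b: float) -> float:
    """Equation 1: offset of a region between images 300a and 300b, with
    both positions measured from the same characteristic point."""
    return x_a - x_b

def depth_from_offset(offset_point: float, offset_vantage: float,
                      depth_image: float) -> float:
    """Equation 2: perpendicular distance 316 from the vantage points 306
    to the scene point 304 represented by the region."""
    return offset_vantage * depth_image / offset_point

# A region shifted 2 mm between the images, captured with vantage points
# 65 mm apart and an image plane 50 mm away, lies 1625 mm deep.
print(depth_from_offset(2.0, 65.0, 50.0))  # 1625.0
```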
It is apparent from Equation 2 that Depth_point and Offset_point are inversely related, and that apparent depth 416 for an apparent point 404 can be recorded indirectly by recording Offset_point, the corresponding epipolar offset, assuming the other parameters of Equation 2 are known. In the illustrative embodiment of the present invention, stereoscopic image pair 300 is compressed by replacing image 300b with a list of locations of regions 502a in image 300a, and the corresponding epipolar offset for each. A region 502a can generally be uniquely identified in the list by the location of an image point 302 within region 502a. The list of locations and offsets generally takes considerably less storage space than image 300b alone, and this list constitutes the compressed form of image 300b in the illustrative embodiment.
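The compressed form amounts to image 300a plus a list of anchor points and offsets. A minimal sketch of such a structure (the tuple layout and the use of the region centroid as the identifying image point 302 are assumptions made here):

```python
from typing import List, Tuple

import numpy as np

# Compressed stereo pair: image 300a kept verbatim, image 300b replaced by
# one (x, y, epipolar_offset) entry per region 502a, where (x, y) is an
# image point 302 inside the region that identifies it.
CompressedStereo = Tuple[np.ndarray, List[Tuple[int, int, int]]]

def compress(img_a: np.ndarray,
             region_masks: List[np.ndarray],
             offsets: List[int]) -> CompressedStereo:
    """Replace image 300b with region locations in 300a plus offsets."""
    entries = []
    for mask, off in zip(region_masks, offsets):
        ys, xs = np.nonzero(mask)              # pixels of region 502a
        entries.append((int(xs.mean()), int(ys.mean()), off))
    return img_a, entries
```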
When it is desired to use image pair 300 for stereoscopic viewing, image 300b must be reconstructed. This is done by first making a copy of image 300a. Then, regions 502a in image 300a are determined using the same method as in the compression procedure, and the proper epipolar offset for each region 502a is determined from the list. Finally, the regions 502a in duplicate image 300a are moved by the distance specified by the corresponding epipolar offsets. The portions of duplicate image 300a which had previously been occupied by regions 502a are replaced by either a neutral color or a background pattern. Alternately, the background surrounding the original position of region 502 can be extended into the vacant space. After the movement of regions 502a, duplicate image 300a becomes a reconstructed version of image 300b.
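A sketch of this reconstruction, assuming the region masks are re-derived from image 300a exactly as during compression and using a neutral fill for the vacated areas (one of the fill options mentioned above):

```python
import numpy as np

def reconstruct_b(img_a: np.ndarray, region_masks, offsets, fill=128):
    """Rebuild image 300b: copy image 300a, vacate each region 502a,
    then redraw it shifted by its epipolar offset (x_b = x_a - offset,
    per Equation 1)."""
    img_c = img_a.copy()
    for mask in region_masks:
        img_c[mask] = fill                     # vacate original positions
    for mask, off in zip(region_masks, offsets):
        ys, xs = np.nonzero(mask)
        xs_new = xs - off                      # shift along epipolar axis
        ok = (xs_new >= 0) & (xs_new < img_a.shape[1])
        img_c[ys[ok], xs_new[ok]] = img_a[ys[ok], xs[ok]]
    return img_c
```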
A more accurate reconstructed image 300b can be achieved by recording, as part of the compressed form of image 300b, those portions of image 300b which are visible in image 300b only. Referring now to FIG. 7, a set of stereoscopic images 300a and 300b are shown. Two circular objects are partially overlapping in image 300a, and are not overlapping in image 300b, where both objects are translated along the epipolar axis. A reconstruction 300c of image 300b relying only on the information present in image 300a leaves an undefined region 700. As discussed above, this undefined region can be filled in through a number of methods which approximate a general background pattern. Alternately, the actual portion of image 300b corresponding to undefined region 700 can be recorded as an image. This recorded region 700 is then used in the reconstruction process to fill in the undefined region 700 of image 300c. Because most stereoscopic image sets contain relatively small undefined regions 700, saving the image information from these regions 700 will generally still allow for significant compression.
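A sketch of this refinement: at compression time, while image 300b is still available, find the pixels that no shifted region will cover and store the true pixels of image 300b there, so reconstruction can patch region 700 exactly (the mask-based bookkeeping is an assumption of this sketch):

```python
import numpy as np

def undefined_mask(shape, region_masks, offsets) -> np.ndarray:
    """Pixels vacated by regions 502a but not re-covered by any shifted
    region: the undefined region 700 of the reconstruction."""
    vacated = np.zeros(shape, dtype=bool)
    covered = np.zeros(shape, dtype=bool)
    for mask, off in zip(region_masks, offsets):
        vacated |= mask
        ys, xs = np.nonzero(mask)
        xs_new = xs - off
        ok = (xs_new >= 0) & (xs_new < shape[1])
        covered[ys[ok], xs_new[ok]] = True
    return vacated & ~covered

# At compression time: store patch = img_b[undefined_mask(img_b.shape[:2],
# region_masks, offsets)]; at reconstruction time, write it back into the
# same pixels of image 300c to fill region 700.
```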
The methods described do not generally allow image 300 b to be recreated exactly, but for points 404 which are properly identified and matched, the reconstructed stereoscopic image pair 300 will exhibit the same effect of depth as the original image pair 300. In many applications such a reconstructed image 300 b works acceptably well.
The above description is included to illustrate the operation of exemplary embodiments and is not meant to limit the scope of the invention. The scope of the invention is to be limited only by the following claims. From the above description, many variations will be apparent to one skilled in the art that would be encompassed by the spirit and scope of the present invention.
Claims (30)
1. A method for compressing one of a stereoscopic set of images, each of the images in the stereoscopic set of images having a vantage point associated therewith, the method comprising the steps of:
a) identifying in a first image of the stereoscopic set of images a first region representing an apparent point;
b) identifying in a second image of the stereoscopic set of images a second region representing the apparent point;
c) determining an epipolar offset, where the epipolar offset is the difference in an epipolar direction between the location of the second region relative to a characteristic point of the second image and the location of the first region relative to a characteristic point of the first image, which epipolar direction is substantially parallel to a vector from the vantage point of the first image to the vantage point of the second image; and
d) creating a compressed stereoscopic image by replacing the second region with an indication of the first region and with the corresponding epipolar offset.
2. The method of claim 1, wherein the step of identifying in a first image a first region representing an apparent point comprises the sub-steps of:
using contrast information in the first image to determine edge portions of the first image, which edge portions represent edges of an object; and
determining as the first region a portion of the first image which is generally circumscribed by the edge portions.
3. The method of claim 2, wherein the step of identifying in a second image a second region representing the apparent point comprises the sub-steps of:
using contrast information in the second image to determine edge portions of the second image, which edge portions represent edges of the object; and
determining as the second region a portion of the second image which is generally circumscribed by the edge portions.
4. The method of claim 1, wherein the step of identifying in a first image a first region representing an apparent point comprises selecting as the first region a portion of the first image, and the step of identifying in a second image a second region representing the apparent point comprises the sub-steps of:
selecting in the second image more than one target region, where each target region is the same size as the first region;
performing a cross-correlation calculation to determine the degree of similarity between the first region and each target region; and
selecting as the second region that target region which is most similar to the first region.
5. The method of claim 4, wherein the size of the first region is a predetermined number of pixels.
6. The method of claim 5, wherein the size of the first region is one pixel.
7. The method of claim 1, wherein steps (a) through (d) are repeated for a plurality of first and second regions and wherein the method further comprises the step of creating a compressed stereoscopic image by replacing the second image with a list of locations of the plurality of the first regions and the corresponding epipolar offsets.
8. The method of claim 1, wherein one of the first image and the second image is a scanned photograph of a physical scene.
9. The method of claim 1, wherein one of the first image and the second image is an image of a physical scene which has been captured in digital form.
10. The method of claim 1, wherein one of the first image and the second image is a computer generated image.
11. A method for using a first image from a stereoscopic set of images and a compressed form of a second image from the stereoscopic set to construct the second image in uncompressed form, which compressed form of the second image comprises information specifying an epipolar offset which is the difference in an epipolar direction between the location of a first region of the first image relative to a characteristic point of the first image and the location of a second region of the second image relative to a characteristic point of the second image, which epipolar direction is substantially perpendicular to the direction of view of the first image, the method comprising:
identifying the first region in the first image; and
producing a third image as an uncompressed form of the second image by overlaying the first region, displaced by the epipolar offset, on a duplicate of the first image.
12. The method of claim 11, further comprising the step of causing a background region of the duplicate of the first image to be replaced with a background pattern, where the background region is a portion of the duplicate of the first image which does correspond to the location of the undisplaced first region, and does not correspond to the location of the first region displaced by the epipolar offset.
13. The method of claim 12, wherein the background pattern comprises a combination of image qualities drawn from portions of the first image surrounding the first region.
14. The method of claim 12, wherein the background pattern is a solid color.
15. The method of claim 12, wherein the compressed form of the second image further comprises background image information based on the portion of the second image which corresponds to the location of the background region, and the background pattern is based on the background image information.
16. A computer apparatus for compressing one of a stereoscopic set of images, each of the images in the stereoscopic set of images having a vantage point associated therewith, the apparatus comprising:
a central processing unit (CPU);
an image memory, coupled to the CPU, for storing a first image of the stereoscopic set of images and a second image of the stereoscopic set of images; and
a program memory coupled to the CPU, for storing an array of instructions, which instructions, when executed by the CPU, cause the CPU to:
(a) identify in the first image a first region representing an apparent point;
(b) identify in the second image a second region representing the apparent point;
(c) determine an epipolar offset, where the epipolar offset is the difference in an epipolar direction between the location of the second region relative to a characteristic point of the second image and the location of the first region relative to a characteristic point of the first image, which epipolar direction is parallel to a vector from the vantage point of the first image to the vantage point of the second image; and
(d) create a compressed stereoscopic image by replacing the second region with an indication of the first region and with the corresponding epipolar offset.
17. The apparatus of claim 16, wherein the steps (a) through (d) are repeated for a plurality of first and second regions and wherein the method further comprises the step of creating a compressed stereoscopic image by replacing the second image with a list of locations of the plurality of the first regions and the corresponding epipolar offsets.
18. A computer readable medium containing a computer program for compressing one of a stereoscopic set of images, each of the images in the stereoscopic set of images having a vantage point associated therewith, the computer program performing the steps of:
(a) identifying in a first image of the stereoscopic set of images a first region representing an apparent point;
(b) identifying in a second image of the stereoscopic set of images a second region representing the apparent point;
(c) determining an epipolar offset, where the epipolar offset is the difference in an epipolar direction between the location of the second region relative to a characteristic point of the second image and the location of the first region relative to a characteristic point of the first image, which epipolar direction is substantially parallel to a vector from the vantage point of the first image to the vantage point of the second image; and
(d) creating a compressed stereoscopic image by replacing the second region with an indication of the first region and with the corresponding epipolar offset.
19. The computer readable medium of claim 18, wherein the steps (a) through (d) are repeated for a plurality of first and second regions and wherein the method further comprises the step of creating a compressed stereoscopic image by replacing the second image with a list of locations of the plurality of the first regions and the corresponding epipolar offsets.
20. A computer readable medium containing a computer program for using a first image from a stereoscopic set of images and a compressed form of a second image from the stereoscopic set to construct the second image in uncompressed form, which compressed form of the second image comprises information specifying an epipolar offset which is the difference in an epipolar direction between the location of a first region of the first image relative to a characteristic point of the first image and the location of a second region of the second image relative to a characteristic point of the second image, which epipolar direction is substantially perpendicular to the direction of view of the first image, the computer program performing the steps of:
identifying the first region in the first image; and
producing a third image as an uncompressed form of the second image by overlaying the first region, displaced by the epipolar offset, on a duplicate of the first image.
21. A method for compressing one of a stereoscopic set of images, the method comprising the steps of:
a) identifying in a first image of the stereoscopic set of images a first region representing an apparent point;
b) identifying in a second image of the stereoscopic set of images a second region representing the apparent point;
c) determining a predetermined offset, where the predetermined offset is the difference between the location of the second region relative to a characteristic point of the second image and the location of the first region relative to a characteristic point of the first image; and
d) compressing the stereoscopic image by generating the second region according to an indication of the first region and the predetermined offset.
22. The method of claim 21, wherein the step of identifying in a first image a first region representing an apparent point comprises the sub-steps of:
using contrast information in the first image to determine edge portions of the first image, which edge portions represent edges of an object; and
determining as the first region a portion of the first image which is generally circumscribed by the edge portions.
23. The method of claim 22, wherein the step of identifying in a second image a second region representing the apparent point comprises the sub-steps of:
using contrast information in the second image to determine edge portions of the second image, which edge portions represent edges of the object; and
determining as the second region a portion of the second image which is generally circumscribed by the edge portions.
24. The method of claim 21, wherein the step of identifying in a first image a first region representing an apparent point comprises selecting as the first region a portion of the first image, and the step of identifying in a second image a second region representing the apparent point comprises the sub-steps of:
selecting in the second image more than one target region, where each target region is the same size as the first region;
performing a cross-correlation calculation to determine the degree of similarity between the first region and each target region; and
selecting as the second region that target region which is most similar to the first region.
25. The method of claim 24, wherein the size of the first region is a predetermined number of pixels.
26. The method of claim 25, wherein the size of the first region is one pixel.
27. The method of claim 21, wherein steps (a) through (d) are repeated for a plurality of first and second regions and wherein the method further comprises the step of creating a compressed stereoscopic image by replacing the second image with a list of locations of the plurality of the first regions and the predetermined offsets.
28. The method of claim 21, wherein one of the first image and the second image is a scanned photograph of a physical scene.
29. The method of claim 21, wherein one of the first image and the second image is an image of a physical scene which has been captured in digital form.
30. The method of claim 21, wherein one of the first image and the second image is a computer generated image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/088,617 US6205241B1 (en) | 1998-06-01 | 1998-06-01 | Compression of stereoscopic images |
Publications (1)
Publication Number | Publication Date |
---|---|
US6205241B1 true US6205241B1 (en) | 2001-03-20 |
Family
ID=22212410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/088,617 Expired - Lifetime US6205241B1 (en) | 1998-06-01 | 1998-06-01 | Compression of stereoscopic images |
Country Status (1)
Country | Link |
---|---|
US (1) | US6205241B1 (en) |
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030080282A1 (en) * | 2001-10-26 | 2003-05-01 | Walley Thomas M. | Apparatus and method for three-dimensional relative movement sensing |
US20030152264A1 (en) * | 2002-02-13 | 2003-08-14 | Perkins Christopher H. | Method and system for processing stereoscopic images |
US6704042B2 (en) * | 1998-12-10 | 2004-03-09 | Canon Kabushiki Kaisha | Video processing apparatus, control method therefor, and storage medium |
US20040247159A1 (en) * | 2003-06-07 | 2004-12-09 | Niranjan Damera-Venkata | Motion estimation for compression of calibrated multi-view image sequences |
US20050169543A1 (en) * | 2004-01-30 | 2005-08-04 | Niranjan Damera-Venkata | Motion estimation for compressing multiple view images |
US20110298704A1 (en) * | 2005-10-21 | 2011-12-08 | Apple Inc. | Three-dimensional imaging and display system |
US20140002675A1 (en) * | 2012-06-28 | 2014-01-02 | Pelican Imaging Corporation | Systems and methods for detecting defective camera arrays and optic arrays |
US9025895B2 (en) | 2011-09-28 | 2015-05-05 | Pelican Imaging Corporation | Systems and methods for decoding refocusable light field image files |
US9041829B2 (en) | 2008-05-20 | 2015-05-26 | Pelican Imaging Corporation | Capturing and processing of high dynamic range images using camera arrays |
US9049411B2 (en) | 2008-05-20 | 2015-06-02 | Pelican Imaging Corporation | Camera arrays incorporating 3×3 imager configurations |
US9100586B2 (en) | 2013-03-14 | 2015-08-04 | Pelican Imaging Corporation | Systems and methods for photometric normalization in array cameras |
US9123117B2 (en) | 2012-08-21 | 2015-09-01 | Pelican Imaging Corporation | Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability |
US9124864B2 (en) | 2013-03-10 | 2015-09-01 | Pelican Imaging Corporation | System and methods for calibration of an array camera |
US9128228B2 (en) | 2011-06-28 | 2015-09-08 | Pelican Imaging Corporation | Optical arrangements for use with an array camera |
US9143711B2 (en) | 2012-11-13 | 2015-09-22 | Pelican Imaging Corporation | Systems and methods for array camera focal plane control |
US9185276B2 (en) | 2013-11-07 | 2015-11-10 | Pelican Imaging Corporation | Methods of manufacturing array camera modules incorporating independently aligned lens stacks |
US9210392B2 (en) | 2012-05-01 | 2015-12-08 | Pelican Imaging Corporation | Camera modules patterned with pi filter groups |
US9214013B2 (en) | 2012-09-14 | 2015-12-15 | Pelican Imaging Corporation | Systems and methods for correcting user identified artifacts in light field images |
US9247117B2 (en) | 2014-04-07 | 2016-01-26 | Pelican Imaging Corporation | Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array |
US9253380B2 (en) | 2013-02-24 | 2016-02-02 | Pelican Imaging Corporation | Thin form factor computational array cameras and modular array cameras |
US9264610B2 (en) | 2009-11-20 | 2016-02-16 | Pelican Imaging Corporation | Capturing and processing of images including occlusions captured by heterogeneous camera arrays |
US9412206B2 (en) | 2012-02-21 | 2016-08-09 | Pelican Imaging Corporation | Systems and methods for the manipulation of captured light field image data |
US9426361B2 (en) | 2013-11-26 | 2016-08-23 | Pelican Imaging Corporation | Array camera configurations incorporating multiple constituent array cameras |
US9438888B2 (en) | 2013-03-15 | 2016-09-06 | Pelican Imaging Corporation | Systems and methods for stereo imaging with camera arrays |
US9497429B2 (en) | 2013-03-15 | 2016-11-15 | Pelican Imaging Corporation | Extended color processing on pelican array cameras |
US9497370B2 (en) | 2013-03-15 | 2016-11-15 | Pelican Imaging Corporation | Array camera architecture implementing quantum dot color filters |
US9516222B2 (en) | 2011-06-28 | 2016-12-06 | Kip Peli P1 Lp | Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing |
US9521319B2 (en) | 2014-06-18 | 2016-12-13 | Pelican Imaging Corporation | Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor |
US9578259B2 (en) | 2013-03-14 | 2017-02-21 | Fotonation Cayman Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US9633442B2 (en) | 2013-03-15 | 2017-04-25 | Fotonation Cayman Limited | Array cameras including an array camera module augmented with a separate camera |
US9638883B1 (en) | 2013-03-04 | 2017-05-02 | Fotonation Cayman Limited | Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process |
US9733486B2 (en) | 2013-03-13 | 2017-08-15 | Fotonation Cayman Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing |
US9741118B2 (en) | 2013-03-13 | 2017-08-22 | Fotonation Cayman Limited | System and methods for calibration of an array camera |
US9766380B2 (en) | 2012-06-30 | 2017-09-19 | Fotonation Cayman Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
Worldwide applications
1998: US US09/088,617, filed 1998-06-01, granted as US6205241B1; status: Expired - Lifetime (not active)
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3769889A (en) | 1972-02-09 | 1973-11-06 | R Wechsler | Three dimensional image reproduction with pseudo-scopy correction by image inversion optics which are capable of independently controlling z-axis placement in image space |
US3883251A (en) | 1974-04-12 | 1975-05-13 | Bendix Corp | Single photo epipolar scan instrument |
US4601053A (en) * | 1983-11-21 | 1986-07-15 | Grumman Aerospace Corporation | Automatic TV ranging system |
US5432712A (en) | 1990-05-29 | 1995-07-11 | Axiom Innovation Limited | Machine vision stereo matching |
US5249035A (en) | 1990-11-26 | 1993-09-28 | Kabushiki Kaisha Toshiba | Method of measuring three dimensional shape |
US5455689A (en) | 1991-06-27 | 1995-10-03 | Eastman Kodak Company | Electronically interpolated integral photography system |
US5347363A (en) | 1991-07-25 | 1994-09-13 | Kabushiki Kaisha Toshiba | External lead shape measurement apparatus for measuring lead shape of semiconductor package by using stereoscopic vision |
US5390024A (en) | 1991-08-13 | 1995-02-14 | Wright; Steven | Optical transform generating apparatus |
US5680474A (en) | 1992-10-27 | 1997-10-21 | Canon Kabushiki Kaisha | Corresponding point extraction method for a plurality of images |
US5495576A (en) | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5475422A (en) | 1993-06-21 | 1995-12-12 | Nippon Telegraph And Telephone Corporation | Method and apparatus for reconstructing three-dimensional objects |
US5655033A (en) | 1993-06-21 | 1997-08-05 | Canon Kabushiki Kaisha | Method for extracting corresponding point in plural images |
US5973726A (en) * | 1993-09-24 | 1999-10-26 | Canon Kabushiki Kaisha | Panoramic image processing apparatus |
US5510831A (en) | 1994-02-10 | 1996-04-23 | Vision Iii Imaging, Inc. | Autostereoscopic imaging apparatus and method using suit scanning of parallax images |
US5703961A (en) * | 1994-12-29 | 1997-12-30 | Worldscape L.L.C. | Image transformation and synthesis methods |
US5644651A (en) | 1995-03-31 | 1997-07-01 | Nec Research Institute, Inc. | Method for the estimation of rotation between two frames via epipolar search for use in a three-dimensional representation |
US5852672A (en) * | 1995-07-10 | 1998-12-22 | The Regents Of The University Of California | Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects |
JPH1032840A (en) | 1996-04-05 | 1998-02-03 | Matsushita Electric Ind Co Ltd | Multi-viewpoint image transmission and display method |
Non-Patent Citations (3)
Title |
---|
Dupéret, A., "Automatic Derivation of a DTM to Produce Contour Lines", http://dgrwww.epfl.ch/PHOT/publicat/wks96/Art_3_2.html, Mar. 24, 1998, France. |
Roux, M., "Cartography Updating", http://www-ima.enst.fr/activite_96/en/node42.html, Mar. 24, 1998, France. |
Zhizhuo, W., "From Photogrammetry to Geomatics - a Commemoration of the Accomplishment that is VirtuoZo", The VirtuoZo Manuscript, http://www.squirrel.com.au/virtuozo/Manuscript/wang/html, Dec. 18, 1997, Australia. |
Cited By (187)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6704042B2 (en) * | 1998-12-10 | 2004-03-09 | Canon Kabushiki Kaisha | Video processing apparatus, control method therefor, and storage medium |
US20030080282A1 (en) * | 2001-10-26 | 2003-05-01 | Walley Thomas M. | Apparatus and method for three-dimensional relative movement sensing |
US6770863B2 (en) * | 2001-10-26 | 2004-08-03 | Agilent Technologies, Inc. | Apparatus and method for three-dimensional relative movement sensing |
US20030152264A1 (en) * | 2002-02-13 | 2003-08-14 | Perkins Christopher H. | Method and system for processing stereoscopic images |
US7286689B2 (en) * | 2003-06-07 | 2007-10-23 | Hewlett-Packard Development Company, L.P. | Motion estimation for compression of calibrated multi-view image sequences |
US20040247159A1 (en) * | 2003-06-07 | 2004-12-09 | Niranjan Damera-Venkata | Motion estimation for compression of calibrated multi-view image sequences |
US20050169543A1 (en) * | 2004-01-30 | 2005-08-04 | Niranjan Damera-Venkata | Motion estimation for compressing multiple view images |
US7463778B2 (en) | 2004-01-30 | 2008-12-09 | Hewlett-Packard Development Company, L.P | Motion estimation for compressing multiple view images |
US9766716B2 (en) | 2005-10-21 | 2017-09-19 | Apple Inc. | Three-dimensional imaging and display system |
US20110298704A1 (en) * | 2005-10-21 | 2011-12-08 | Apple Inc. | Three-dimensional imaging and display system |
US20110298798A1 (en) * | 2005-10-21 | 2011-12-08 | Apple Inc. | Three-dimensional imaging and display system |
US9958960B2 (en) | 2005-10-21 | 2018-05-01 | Apple Inc. | Three-dimensional imaging and display system |
US8743345B2 (en) * | 2005-10-21 | 2014-06-03 | Apple Inc. | Three-dimensional imaging and display system |
US8780332B2 (en) * | 2005-10-21 | 2014-07-15 | Apple Inc. | Three-dimensional imaging and display system |
US9049411B2 (en) | 2008-05-20 | 2015-06-02 | Pelican Imaging Corporation | Camera arrays incorporating 3×3 imager configurations |
US9060120B2 (en) | 2008-05-20 | 2015-06-16 | Pelican Imaging Corporation | Systems and methods for generating depth maps using images captured by camera arrays |
US9485496B2 (en) | 2008-05-20 | 2016-11-01 | Pelican Imaging Corporation | Systems and methods for measuring depth using images captured by a camera array including cameras surrounding a central camera |
US11412158B2 (en) | 2008-05-20 | 2022-08-09 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US9041829B2 (en) | 2008-05-20 | 2015-05-26 | Pelican Imaging Corporation | Capturing and processing of high dynamic range images using camera arrays |
US9749547B2 (en) | 2008-05-20 | 2017-08-29 | Fotonation Cayman Limited | Capturing and processing of images using camera array incorporating Bayer cameras having different fields of view
US9041823B2 (en) | 2008-05-20 | 2015-05-26 | Pelican Imaging Corporation | Systems and methods for performing post capture refocus using images captured by camera arrays |
US9049367B2 (en) | 2008-05-20 | 2015-06-02 | Pelican Imaging Corporation | Systems and methods for synthesizing higher resolution images using images captured by camera arrays |
US9191580B2 (en) | 2008-05-20 | 2015-11-17 | Pelican Imaging Corporation | Capturing and processing of images including occlusions captured by camera arrays |
US9049391B2 (en) | 2008-05-20 | 2015-06-02 | Pelican Imaging Corporation | Capturing and processing of near-IR images including occlusions using camera arrays incorporating near-IR light sources |
US9049390B2 (en) | 2008-05-20 | 2015-06-02 | Pelican Imaging Corporation | Capturing and processing of images captured by arrays including polychromatic cameras |
US9049381B2 (en) | 2008-05-20 | 2015-06-02 | Pelican Imaging Corporation | Systems and methods for normalizing image data captured by camera arrays |
US9055233B2 (en) | 2008-05-20 | 2015-06-09 | Pelican Imaging Corporation | Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image |
US9055213B2 (en) | 2008-05-20 | 2015-06-09 | Pelican Imaging Corporation | Systems and methods for measuring depth using images captured by monolithic camera arrays including at least one bayer camera |
US9060124B2 (en) | 2008-05-20 | 2015-06-16 | Pelican Imaging Corporation | Capturing and processing of images using non-monolithic camera arrays |
US9060142B2 (en) | 2008-05-20 | 2015-06-16 | Pelican Imaging Corporation | Capturing and processing of images captured by camera arrays including heterogeneous optics |
US9060121B2 (en) | 2008-05-20 | 2015-06-16 | Pelican Imaging Corporation | Capturing and processing of images captured by camera arrays including cameras dedicated to sampling luma and cameras dedicated to sampling chroma |
US9188765B2 (en) | 2008-05-20 | 2015-11-17 | Pelican Imaging Corporation | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US9077893B2 (en) | 2008-05-20 | 2015-07-07 | Pelican Imaging Corporation | Capturing and processing of images captured by non-grid camera arrays |
US9094661B2 (en) | 2008-05-20 | 2015-07-28 | Pelican Imaging Corporation | Systems and methods for generating depth maps using a set of images containing a baseline image |
US12022207B2 (en) | 2008-05-20 | 2024-06-25 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US9576369B2 (en) | 2008-05-20 | 2017-02-21 | Fotonation Cayman Limited | Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view |
US9712759B2 (en) | 2008-05-20 | 2017-07-18 | Fotonation Cayman Limited | Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras
US9124815B2 (en) | 2008-05-20 | 2015-09-01 | Pelican Imaging Corporation | Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras |
US9235898B2 (en) | 2008-05-20 | 2016-01-12 | Pelican Imaging Corporation | Systems and methods for generating depth maps using light focused on an image sensor by a lens element array |
US12041360B2 (en) | 2008-05-20 | 2024-07-16 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US10142560B2 (en) | 2008-05-20 | 2018-11-27 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US10027901B2 (en) | 2008-05-20 | 2018-07-17 | Fotonation Cayman Limited | Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras
US10306120B2 (en) | 2009-11-20 | 2019-05-28 | Fotonation Limited | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps |
US9264610B2 (en) | 2009-11-20 | 2016-02-16 | Pelican Imaging Corporation | Capturing and processing of images including occlusions captured by heterogeneous camera arrays |
US9936148B2 (en) | 2010-05-12 | 2018-04-03 | Fotonation Cayman Limited | Imager array interfaces |
US10455168B2 (en) | 2010-05-12 | 2019-10-22 | Fotonation Limited | Imager array interfaces |
US11875475B2 (en) | 2010-12-14 | 2024-01-16 | Adeia Imaging Llc | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US12243190B2 (en) | 2010-12-14 | 2025-03-04 | Adeia Imaging Llc | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US10366472B2 (en) | 2010-12-14 | 2019-07-30 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US11423513B2 (en) | 2010-12-14 | 2022-08-23 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US10218889B2 (en) | 2011-05-11 | 2019-02-26 | Fotonation Limited | Systems and methods for transmitting and receiving array camera image data |
US9866739B2 (en) | 2011-05-11 | 2018-01-09 | Fotonation Cayman Limited | Systems and methods for transmitting and receiving array camera image data |
US10742861B2 (en) | 2011-05-11 | 2020-08-11 | Fotonation Limited | Systems and methods for transmitting and receiving array camera image data |
US9128228B2 (en) | 2011-06-28 | 2015-09-08 | Pelican Imaging Corporation | Optical arrangements for use with an array camera |
US9578237B2 (en) | 2011-06-28 | 2017-02-21 | Fotonation Cayman Limited | Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing |
US9516222B2 (en) | 2011-06-28 | 2016-12-06 | Kip Peli P1 Lp | Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing |
US9794476B2 (en) | 2011-09-19 | 2017-10-17 | Fotonation Cayman Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures |
US10375302B2 (en) | 2011-09-19 | 2019-08-06 | Fotonation Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures |
US10019816B2 (en) | 2011-09-28 | 2018-07-10 | Fotonation Cayman Limited | Systems and methods for decoding image files containing depth maps stored as metadata |
US9129183B2 (en) | 2011-09-28 | 2015-09-08 | Pelican Imaging Corporation | Systems and methods for encoding light field image files |
US10984276B2 (en) | 2011-09-28 | 2021-04-20 | Fotonation Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
US9042667B2 (en) | 2011-09-28 | 2015-05-26 | Pelican Imaging Corporation | Systems and methods for decoding light field image files using a depth map |
US9031343B2 (en) | 2011-09-28 | 2015-05-12 | Pelican Imaging Corporation | Systems and methods for encoding light field image files having a depth map |
US9031335B2 (en) | 2011-09-28 | 2015-05-12 | Pelican Imaging Corporation | Systems and methods for encoding light field image files having depth and confidence maps |
US10430682B2 (en) | 2011-09-28 | 2019-10-01 | Fotonation Limited | Systems and methods for decoding image files containing depth maps stored as metadata |
US9036931B2 (en) | 2011-09-28 | 2015-05-19 | Pelican Imaging Corporation | Systems and methods for decoding structured light field image files |
US9025894B2 (en) | 2011-09-28 | 2015-05-05 | Pelican Imaging Corporation | Systems and methods for decoding light field image files having depth and confidence maps |
US9025895B2 (en) | 2011-09-28 | 2015-05-05 | Pelican Imaging Corporation | Systems and methods for decoding refocusable light field image files |
US9811753B2 (en) | 2011-09-28 | 2017-11-07 | Fotonation Cayman Limited | Systems and methods for encoding light field image files |
US11729365B2 (en) | 2011-09-28 | 2023-08-15 | Adeia Imaging Llc | Systems and methods for encoding image files containing depth maps stored as metadata
US20180197035A1 (en) | 2011-09-28 | 2018-07-12 | Fotonation Cayman Limited | Systems and Methods for Encoding Image Files Containing Depth Maps Stored as Metadata |
US10275676B2 (en) | 2011-09-28 | 2019-04-30 | Fotonation Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
US9864921B2 (en) | 2011-09-28 | 2018-01-09 | Fotonation Cayman Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
US12052409B2 (en) | 2011-09-28 | 2024-07-30 | Adeia Imaging Llc | Systems and methods for encoding image files containing depth maps stored as metadata
US10311649B2 (en) | 2012-02-21 | 2019-06-04 | Fotonation Limited | Systems and method for performing depth based image editing |
US9754422B2 (en) | 2012-02-21 | 2017-09-05 | Fotonation Cayman Limited | Systems and method for performing depth based image editing |
US9412206B2 (en) | 2012-02-21 | 2016-08-09 | Pelican Imaging Corporation | Systems and methods for the manipulation of captured light field image data |
US9210392B2 (en) | 2012-05-01 | 2015-12-08 | Pelican Imaging Corporation | Camera modules patterned with pi filter groups
US9706132B2 (en) | 2012-05-01 | 2017-07-11 | Fotonation Cayman Limited | Camera modules patterned with pi filter groups |
US10334241B2 (en) | 2012-06-28 | 2019-06-25 | Fotonation Limited | Systems and methods for detecting defective camera arrays and optic arrays |
US20140002675A1 (en) * | 2012-06-28 | 2014-01-02 | Pelican Imaging Corporation | Systems and methods for detecting defective camera arrays and optic arrays |
US9100635B2 (en) * | 2012-06-28 | 2015-08-04 | Pelican Imaging Corporation | Systems and methods for detecting defective camera arrays and optic arrays |
US9807382B2 (en) | 2012-06-28 | 2017-10-31 | Fotonation Cayman Limited | Systems and methods for detecting defective camera arrays and optic arrays |
US10261219B2 (en) | 2012-06-30 | 2019-04-16 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
US11022725B2 (en) | 2012-06-30 | 2021-06-01 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
US9766380B2 (en) | 2012-06-30 | 2017-09-19 | Fotonation Cayman Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
US12002233B2 (en) | 2012-08-21 | 2024-06-04 | Adeia Imaging Llc | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US9147254B2 (en) | 2012-08-21 | 2015-09-29 | Pelican Imaging Corporation | Systems and methods for measuring depth in the presence of occlusions using a subset of images |
US9129377B2 (en) | 2012-08-21 | 2015-09-08 | Pelican Imaging Corporation | Systems and methods for measuring depth based upon occlusion patterns in images |
US9235900B2 (en) | 2012-08-21 | 2016-01-12 | Pelican Imaging Corporation | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US9123118B2 (en) | 2012-08-21 | 2015-09-01 | Pelican Imaging Corporation | System and methods for measuring depth using an array camera employing a bayer filter |
US9123117B2 (en) | 2012-08-21 | 2015-09-01 | Pelican Imaging Corporation | Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability |
US9240049B2 (en) | 2012-08-21 | 2016-01-19 | Pelican Imaging Corporation | Systems and methods for measuring depth using an array of independently controllable cameras |
US9858673B2 (en) | 2012-08-21 | 2018-01-02 | Fotonation Cayman Limited | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US10380752B2 (en) | 2012-08-21 | 2019-08-13 | Fotonation Limited | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US9813616B2 (en) | 2012-08-23 | 2017-11-07 | Fotonation Cayman Limited | Feature based high resolution motion estimation from low resolution images captured using an array source |
US10462362B2 (en) | 2012-08-23 | 2019-10-29 | Fotonation Limited | Feature based high resolution motion estimation from low resolution images captured using an array source |
US9214013B2 (en) | 2012-09-14 | 2015-12-15 | Pelican Imaging Corporation | Systems and methods for correcting user identified artifacts in light field images |
US10390005B2 (en) | 2012-09-28 | 2019-08-20 | Fotonation Limited | Generating images from light fields utilizing virtual viewpoints |
US9749568B2 (en) | 2012-11-13 | 2017-08-29 | Fotonation Cayman Limited | Systems and methods for array camera focal plane control |
US9143711B2 (en) | 2012-11-13 | 2015-09-22 | Pelican Imaging Corporation | Systems and methods for array camera focal plane control |
US10009538B2 (en) | 2013-02-21 | 2018-06-26 | Fotonation Cayman Limited | Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information |
US9253380B2 (en) | 2013-02-24 | 2016-02-02 | Pelican Imaging Corporation | Thin form factor computational array cameras and modular array cameras |
US9374512B2 (en) | 2013-02-24 | 2016-06-21 | Pelican Imaging Corporation | Thin form factor computational array cameras and modular array cameras |
US9743051B2 (en) | 2013-02-24 | 2017-08-22 | Fotonation Cayman Limited | Thin form factor computational array cameras and modular array cameras |
US9774831B2 (en) | 2013-02-24 | 2017-09-26 | Fotonation Cayman Limited | Thin form factor computational array cameras and modular array cameras |
US9638883B1 (en) | 2013-03-04 | 2017-05-02 | Fotonation Cayman Limited | Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process |
US9774789B2 (en) | 2013-03-08 | 2017-09-26 | Fotonation Cayman Limited | Systems and methods for high dynamic range imaging using array cameras |
US9917998B2 (en) | 2013-03-08 | 2018-03-13 | Fotonation Cayman Limited | Systems and methods for measuring scene information while capturing images using array cameras |
US11570423B2 (en) | 2013-03-10 | 2023-01-31 | Adeia Imaging Llc | System and methods for calibration of an array camera |
US9124864B2 (en) | 2013-03-10 | 2015-09-01 | Pelican Imaging Corporation | System and methods for calibration of an array camera |
US11985293B2 (en) | 2013-03-10 | 2024-05-14 | Adeia Imaging Llc | System and methods for calibration of an array camera |
US11272161B2 (en) | 2013-03-10 | 2022-03-08 | Fotonation Limited | System and methods for calibration of an array camera |
US10958892B2 (en) | 2013-03-10 | 2021-03-23 | Fotonation Limited | System and methods for calibration of an array camera |
US10225543B2 (en) | 2013-03-10 | 2019-03-05 | Fotonation Limited | System and methods for calibration of an array camera |
US9986224B2 (en) | 2013-03-10 | 2018-05-29 | Fotonation Cayman Limited | System and methods for calibration of an array camera |
US9800856B2 (en) | 2013-03-13 | 2017-10-24 | Fotonation Cayman Limited | Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies |
US9888194B2 (en) | 2013-03-13 | 2018-02-06 | Fotonation Cayman Limited | Array camera architecture implementing quantum film image sensors |
US10127682B2 (en) | 2013-03-13 | 2018-11-13 | Fotonation Limited | System and methods for calibration of an array camera |
US9741118B2 (en) | 2013-03-13 | 2017-08-22 | Fotonation Cayman Limited | System and methods for calibration of an array camera |
US9733486B2 (en) | 2013-03-13 | 2017-08-15 | Fotonation Cayman Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing |
US10547772B2 (en) | 2013-03-14 | 2020-01-28 | Fotonation Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US10091405B2 (en) | 2013-03-14 | 2018-10-02 | Fotonation Cayman Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US9787911B2 (en) | 2013-03-14 | 2017-10-10 | Fotonation Cayman Limited | Systems and methods for photometric normalization in array cameras |
US9578259B2 (en) | 2013-03-14 | 2017-02-21 | Fotonation Cayman Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US10412314B2 (en) | 2013-03-14 | 2019-09-10 | Fotonation Limited | Systems and methods for photometric normalization in array cameras |
US9100586B2 (en) | 2013-03-14 | 2015-08-04 | Pelican Imaging Corporation | Systems and methods for photometric normalization in array cameras |
US10674138B2 (en) | 2013-03-15 | 2020-06-02 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
US9800859B2 (en) | 2013-03-15 | 2017-10-24 | Fotonation Cayman Limited | Systems and methods for estimating depth using stereo array cameras |
US9602805B2 (en) | 2013-03-15 | 2017-03-21 | Fotonation Cayman Limited | Systems and methods for estimating depth using ad hoc stereo array cameras |
US9633442B2 (en) | 2013-03-15 | 2017-04-25 | Fotonation Cayman Limited | Array cameras including an array camera module augmented with a separate camera |
US9497370B2 (en) | 2013-03-15 | 2016-11-15 | Pelican Imaging Corporation | Array camera architecture implementing quantum dot color filters |
US10455218B2 (en) | 2013-03-15 | 2019-10-22 | Fotonation Limited | Systems and methods for estimating depth using stereo array cameras |
US9497429B2 (en) | 2013-03-15 | 2016-11-15 | Pelican Imaging Corporation | Extended color processing on pelican array cameras |
US9438888B2 (en) | 2013-03-15 | 2016-09-06 | Pelican Imaging Corporation | Systems and methods for stereo imaging with camera arrays |
US9955070B2 (en) | 2013-03-15 | 2018-04-24 | Fotonation Cayman Limited | Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information |
US10542208B2 (en) | 2013-03-15 | 2020-01-21 | Fotonation Limited | Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information |
US10122993B2 (en) | 2013-03-15 | 2018-11-06 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
US10182216B2 (en) | 2013-03-15 | 2019-01-15 | Fotonation Limited | Extended color processing on pelican array cameras |
US10638099B2 (en) | 2013-03-15 | 2020-04-28 | Fotonation Limited | Extended color processing on pelican array cameras |
US10540806B2 (en) | 2013-09-27 | 2020-01-21 | Fotonation Limited | Systems and methods for depth-assisted perspective distortion correction |
US9898856B2 (en) | 2013-09-27 | 2018-02-20 | Fotonation Cayman Limited | Systems and methods for depth-assisted perspective distortion correction |
US9264592B2 (en) | 2013-11-07 | 2016-02-16 | Pelican Imaging Corporation | Array camera modules incorporating independently aligned lens stacks |
US9185276B2 (en) | 2013-11-07 | 2015-11-10 | Pelican Imaging Corporation | Methods of manufacturing array camera modules incorporating independently aligned lens stacks |
US9924092B2 (en) | 2013-11-07 | 2018-03-20 | Fotonation Cayman Limited | Array cameras incorporating independently aligned lens stacks |
US9426343B2 (en) | 2013-11-07 | 2016-08-23 | Pelican Imaging Corporation | Array cameras incorporating independently aligned lens stacks |
US10119808B2 (en) | 2013-11-18 | 2018-11-06 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US11486698B2 (en) | 2013-11-18 | 2022-11-01 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US10767981B2 (en) | 2013-11-18 | 2020-09-08 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US10708492B2 (en) | 2013-11-26 | 2020-07-07 | Fotonation Limited | Array camera configurations incorporating constituent array cameras and constituent cameras |
US9456134B2 (en) | 2013-11-26 | 2016-09-27 | Pelican Imaging Corporation | Array camera configurations incorporating constituent array cameras and constituent cameras |
US9426361B2 (en) | 2013-11-26 | 2016-08-23 | Pelican Imaging Corporation | Array camera configurations incorporating multiple constituent array cameras |
US9813617B2 (en) | 2013-11-26 | 2017-11-07 | Fotonation Cayman Limited | Array camera configurations incorporating constituent array cameras and constituent cameras |
US10574905B2 (en) | 2014-03-07 | 2020-02-25 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
US10089740B2 (en) | 2014-03-07 | 2018-10-02 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
US9247117B2 (en) | 2014-04-07 | 2016-01-26 | Pelican Imaging Corporation | Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array |
US9521319B2 (en) | 2014-06-18 | 2016-12-13 | Pelican Imaging Corporation | Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor |
US11546576B2 (en) | 2014-09-29 | 2023-01-03 | Adeia Imaging Llc | Systems and methods for dynamic calibration of array cameras |
US10250871B2 (en) | 2014-09-29 | 2019-04-02 | Fotonation Limited | Systems and methods for dynamic calibration of array cameras |
US9942474B2 (en) | 2015-04-17 | 2018-04-10 | Fotonation Cayman Limited | Systems and methods for performing high speed video capture and depth estimation using array cameras |
US10237531B2 (en) | 2016-06-22 | 2019-03-19 | Microsoft Technology Licensing, Llc | Discontinuity-aware reprojection |
US10129523B2 (en) | 2016-06-22 | 2018-11-13 | Microsoft Technology Licensing, Llc | Depth-aware reprojection |
US10818026B2 (en) | 2017-08-21 | 2020-10-27 | Fotonation Limited | Systems and methods for hybrid depth regularization |
US11562498B2 (en) | 2017-08-21 | 2023-01-24 | Adeia Imaging Llc | Systems and methods for hybrid depth regularization
US11983893B2 (en) | 2017-08-21 | 2024-05-14 | Adeia Imaging Llc | Systems and methods for hybrid depth regularization |
US10482618B2 (en) | 2017-08-21 | 2019-11-19 | Fotonation Limited | Systems and methods for hybrid depth regularization |
US11270110B2 (en) | 2019-09-17 | 2022-03-08 | Boston Polarimetrics, Inc. | Systems and methods for surface modeling using polarization cues |
US11699273B2 (en) | 2019-09-17 | 2023-07-11 | Intrinsic Innovation Llc | Systems and methods for surface modeling using polarization cues |
US12099148B2 (en) | 2019-10-07 | 2024-09-24 | Intrinsic Innovation Llc | Systems and methods for surface normals sensing with polarization |
US11982775B2 (en) | 2019-10-07 | 2024-05-14 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
US11525906B2 (en) | 2019-10-07 | 2022-12-13 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
US11842495B2 (en) | 2019-11-30 | 2023-12-12 | Intrinsic Innovation Llc | Systems and methods for transparent object segmentation using polarization cues |
US11302012B2 (en) | 2019-11-30 | 2022-04-12 | Boston Polarimetrics, Inc. | Systems and methods for transparent object segmentation using polarization cues |
US11580667B2 (en) | 2020-01-29 | 2023-02-14 | Intrinsic Innovation Llc | Systems and methods for characterizing object pose detection and measurement systems |
US11797863B2 (en) | 2020-01-30 | 2023-10-24 | Intrinsic Innovation Llc | Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images |
US11953700B2 (en) | 2020-05-27 | 2024-04-09 | Intrinsic Innovation Llc | Multi-aperture polarization optical systems using beam splitters |
US12069227B2 (en) | 2021-03-10 | 2024-08-20 | Intrinsic Innovation Llc | Multi-modal and multi-spectral stereo camera arrays |
US12020455B2 (en) | 2021-03-10 | 2024-06-25 | Intrinsic Innovation Llc | Systems and methods for high dynamic range image reconstruction |
US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
US11683594B2 (en) | 2021-04-15 | 2023-06-20 | Intrinsic Innovation Llc | Systems and methods for camera exposure control |
US12067746B2 (en) | 2021-05-07 | 2024-08-20 | Intrinsic Innovation Llc | Systems and methods for using computer vision to pick up small objects |
US12175741B2 (en) | 2021-06-22 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for a vision guided end effector |
US12172310B2 (en) | 2021-06-29 | 2024-12-24 | Intrinsic Innovation Llc | Systems and methods for picking objects using 3-D geometry and segmentation |
US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
US12293535B2 (en) | 2021-08-03 | 2025-05-06 | Intrinsic Innovation Llc | Systems and methods for training pose estimators in computer vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6205241B1 (en) | Compression of stereoscopic images | |
US6160909A (en) | Depth control for stereoscopic images | |
US7643025B2 (en) | Method and apparatus for applying stereoscopic imagery to three-dimensionally defined substrates | |
US20120182403A1 (en) | Stereoscopic imaging | |
EP0735512B1 (en) | Methods for selecting two frames of a two-dimensional image sequence to form the basis for calculating the relative depth of image objects | |
JP5036132B2 (en) | Critical alignment of parallax images for autostereoscopic display | |
JP4065488B2 (en) | 3D image generation apparatus, 3D image generation method, and storage medium | |
US7983477B2 (en) | Method and apparatus for generating a stereoscopic image | |
US4925294A (en) | Method to convert two dimensional motion pictures for three-dimensional systems | |
CN108513123B (en) | Image array generation method for integrated imaging light field display | |
JP4440066B2 (en) | Stereo image generation program, stereo image generation system, and stereo image generation method | |
JP3524147B2 (en) | 3D image display device | |
JPH08331607A (en) | Three-dimensional display image generating method | |
JP2010510569A (en) | System and method of object model fitting and registration for transforming from 2D to 3D | |
US20150187132A1 (en) | System and method for three-dimensional visualization of geographical data | |
EP1668919B1 (en) | Stereoscopic imaging | |
KR100335617B1 (en) | 3D stereoscopic image synthesis method | |
Knorr et al. | Stereoscopic 3D from 2D video with super-resolution capability | |
Shimamura et al. | Construction of an immersive mixed environment using an omnidirectional stereo image sensor | |
JPH0981746A (en) | Two-dimensional display image generating method | |
KR101163020B1 (en) | Method and scaling unit for scaling a three-dimensional model | |
CN107103620A (en) | The depth extraction method of many pumped FIR laser cameras of spatial sampling under a kind of visual angle based on individual camera | |
Shimamura et al. | Construction and presentation of a virtual environment using panoramic stereo images of a real scene and computer graphics models | |
KR100893855B1 (en) | 3D foreground and 2D background combining method and 3D application engine | |
CN117315164B (en) | Optical waveguide holographic display method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MELEN, ROGER D.;REEL/FRAME:009224/0957 Effective date: 19980529 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FPAY | Fee payment |
Year of fee payment: 12 |