US20060017807A1 - Panoramic vision system and method - Google Patents
- Publication number
- US20060017807A1 (application US10/899,410)
- Authority
- US
- United States
- Prior art keywords
- image
- vision system
- vehicle
- display
- image data
- Legal status: Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/105—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/302—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/303—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/802—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
- B60R2300/8026—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views in addition to a rear-view mirror system
Definitions
- Image sensor array 114 is depicted in camera 110 .
- The sensor array is adapted to moderate extreme variations of light intensity of the scene through the camera system. For a vision system to operate efficiently in different lighting conditions, there is a need for a great dynamic range. This dynamic range requirement stems from the day and night variations in ambient light intensity as well as a wide variation from incidental light sources at night.
- A camera sensor is basically a mosaic of segments, each exposed to the light from a portion of the scene, registering the intensity of the light as output voltages.
- Image capture control 116 sends image frame intensity information to image processor 200 and, based on these data, receives commands for integration optimization, lens iris control, and white balance control.
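- This feedback loop is not spelled out in the patent; as a rough illustration, a proportional controller over the frame's mean intensity might look like the following sketch (all names and numeric values are our own assumptions):

```python
import numpy as np

def exposure_command(frame: np.ndarray, integration_ms: float,
                     target_mean: float = 118.0,
                     min_ms: float = 0.1, max_ms: float = 33.0) -> float:
    """Suggest a new sensor integration time from frame brightness.

    A crude proportional controller: scale the current integration
    time by the ratio of the desired mean intensity to the measured
    mean, clamped to the sensor's limits.
    """
    measured = float(frame.mean())
    if measured < 1.0:            # near-black frame: open all the way
        return max_ms
    scale = target_mean / measured
    return float(np.clip(integration_ms * scale, min_ms, max_ms))

# A dim 8-bit frame (mean 40) asks for a longer integration time.
dim = np.full((480, 640), 40, dtype=np.uint8)
print(exposure_command(dim, integration_ms=8.0))   # 23.6
```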
- Digitizer 118 receives image data from image capture control 116 and converts these data into digital image data.
- The function of digitizer 118 may be integrated in image sensor 114 or in image processor 200.
- Digitizer 118 also receives data from audio device array 117. It then produces digital audio data and combines them with the digital image data. The combined digital image data are then sent to image processor 200.
- FIG. 2A shows image processor 200 in detail.
- The function of the image processor is to receive digital image data, measure image statistics, enhance image quality, compensate for luminance non-uniformity, and correct various distortions in these data to finally generate one or multiple composite images that are significantly distortion free.
- It comprises optics and geometry data interface 236 , which comprises camera optics data, projection optics data, and projection geometry data, as well as control data interface 202 , image measurement module 210 , luminance correction module 220 , distortion convolution stage 250 , distortion correction module 260 , and display controller 280 .
- Image processor 200 measures full and selective image areas and analyzes them to control exposure for the best image quality in the area of interest.
- High-light-level sources such as headlights and spotlights are substantially reduced in intensity, and low-light-level objects are enhanced in detail to aid element recognition.
- Luminance correction module 220 performs histogram analysis of the areas of interest, expanding contrast in low-light areas and reducing contrast in high-light areas to provide enhanced image element recognition.
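- As a hedged sketch of what such a histogram-driven adjustment could look like, the following stretches a low-light region's histogram; the percentile choices and names are illustrative, not taken from the patent:

```python
import numpy as np

def expand_contrast(region: np.ndarray, lo_pct: float = 2.0,
                    hi_pct: float = 98.0) -> np.ndarray:
    """Stretch a low-light region's histogram to the full 8-bit range.

    Pixels below the lo_pct percentile clip to 0, pixels above the
    hi_pct percentile clip to 255, and everything in between is
    remapped linearly.
    """
    lo, hi = np.percentile(region, [lo_pct, hi_pct])
    if hi <= lo:                  # flat region: nothing to stretch
        return region.copy()
    out = (region.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return np.clip(out, 0, 255).astype(np.uint8)
```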
- Luminance non-uniformities are undesired brightness variations across an image.
- The main physical reason for luminance non-uniformity is the fact that light rays going through different portions of an optical system travel different distances and spread over different areas. The intensity of a light ray falls off with the square of the distance it travels. This phenomenon happens in the camera optics as well as the display optics.
- Imperfections in an optical system also cause luminance non-uniformity. Examples of such imperfections are projection lens vignette and lateral fluctuations and non-uniformity in the generated projection light. If the brightness variations are different for the three different color components, they are referred to as chrominance non-uniformity.
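- For illustration only: if the falloff were modeled by the common cos⁴ law (an assumption; the patent derives its correction maps from light-propagation calculations and calibration instead), a radial gain map could be built and applied like this:

```python
import numpy as np

def vignette_gain_map(h: int, w: int, focal_px: float) -> np.ndarray:
    """Inverse cos^4 gain map: brightens pixels far from the optical axis."""
    y, x = np.mgrid[0:h, 0:w]
    r2 = (x - w / 2.0) ** 2 + (y - h / 2.0) ** 2
    cos_theta = focal_px / np.sqrt(focal_px ** 2 + r2)
    return 1.0 / cos_theta ** 4       # gain is the inverse of the falloff

def correct_luminance(img: np.ndarray, gain: np.ndarray) -> np.ndarray:
    """Apply the gain map to each color component of an HxWx3 image."""
    out = img.astype(np.float32) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```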
- Another type of optical distortion is color aberration. It stems from the fact that an optical component like a lens has different indices of refraction for different wavelengths. Light rays propagating through optical components refract at different angles for different wavelengths. This results in lateral shifts of colors in images. Lateral aberrations cause the different color components of a point object to separate and diverge. On a viewing surface a point would look fringed.
- Another type of color aberration is axial in nature and is caused by the fact that a lens has different focal points for different light wavelengths. This type of color aberration cannot be corrected electronically.
- Image processor 200 maps the captured scene onto a viewing surface with certain characteristics such as shape, size, and aspect ratio. For example, an image can be formed from different sections of a captured scene and projected onto a portion of the windshield of a car, in which case it would suffer distortions because of the non-flat shape as well as the particular size of the viewing surface.
- The image processor in the present invention corrects for all these distortions, as explained below.
- Within image processor 200, digital image data are received by image measurement module 210, where image contrast and brightness histograms within the region of interest are measured. These histograms are analyzed by luminance correction module 220 to control sensor exposure and to adjust the digital image data to improve image content for visualization and detection. Some adjustments include highlight compression, contrast expansion, detail enhancement, and noise reduction.
- Luminance correction module 220 receives camera and projection optics data from optics and geometry data interface 236. These data are determined from accurate light-propagation calculations and calibrations of the optical components. They are crucial for luminance corrections since they provide the optical path length of different light rays through the camera as well as the projection system. Separate or combined camera and projection correction maps are generated to compute the correction for each pixel.
- The correction map can be obtained off-line, or it can be computed dynamically according to circumstances.
- The function of luminance correction module 220 is therefore to receive digital image data and produce luminance-adjusted digital image data.
- Luminance correction module 220 preferably applies luminance corrections separately to the three different color components.
- The physical implementation of luminance correction module 220 could be a software program or dedicated processing circuitry such as a digital signal processor or computational logic within an integrated circuit.
- Luminance-adjusted image data should be corrected for geometric, optical, and other spatial distortions. Corrections of this kind are referred to as “warp” corrections, and such correction techniques are called “image warping” in the literature. A discussion of image warping can be found in George Wolberg's “Digital Image Warping”, IEEE Computer Society Press, 1988, hereby incorporated by reference.
- Image warping is basically an efficient parameterization of coordinate transformations, mapping output pixels to input pixels. Ideally, a grid data set represents a mapping of every output pixel to an input pixel. Grid data representation, however, is quite unforgiving in terms of hardware implementation because of the sheer size of the look-up tables. Image warping in this invention provides an efficient way to represent a pixel grid data set via a few parameters. In one example, this parameterization is done by polynomials of degree n, with n determined by the complexity of the combined distortion.
- Different areas of the output space are divided into patches with inherent geometrical properties to reduce the degree of polynomials.
- The higher the number of patches and the degree of the fitting polynomial per patch, the more accurate the parameterization of the grid data set.
- Such warp maps therefore describe a mapping of output pixels to input pixels that encodes the camera optics, display optics, and display geometry, including the final composite image specification and the shape of the viewing surface.
- Any control parameters, including user input parameters, are also combined with the above parameters and represented in a single transformation.
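- To make the parameterization concrete, here is a minimal sketch of fitting and evaluating a single patch with bivariate polynomials by least squares, given NumPy arrays of grid points. A real implementation would store one coefficient set per patch and per color component; the degree and names are placeholders:

```python
import numpy as np

def fit_patch_warp(xo, yo, xi, yi, degree=2):
    """Least-squares fit of x_in = P(x_out, y_out), y_in = Q(x_out, y_out).

    xo, yo: output-pixel coordinates of the grid points in one patch.
    xi, yi: the input-pixel coordinates they map to.
    Returns one coefficient vector per input coordinate; for degree 2
    that is six numbers each, instead of a per-pixel look-up table.
    """
    A = _poly_terms(xo, yo, degree)
    cx, *_ = np.linalg.lstsq(A, xi, rcond=None)
    cy, *_ = np.linalg.lstsq(A, yi, rcond=None)
    return cx, cy

def eval_patch_warp(cx, cy, xo, yo, degree=2):
    """Evaluate the fitted warp: input coordinates for each output pixel."""
    A = _poly_terms(xo, yo, degree)
    return A @ cx, A @ cy

def _poly_terms(x, y, degree):
    terms = [x ** p * y ** q for p in range(degree + 1)
                             for q in range(degree + 1 - p)]
    return np.stack(terms, axis=-1)
```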
- A sampling or filtering function is often needed. Once the output image pixel is mapped onto an input pixel, an area around this input pixel is designated for filtering. This area is referred to as the filter footprint.
- Filtering is basically a weighted averaging function, yielding the intensities of the constituent colors of an output pixel based on all the pixels inside the footprint.
- An anisotropic elliptical footprint is used for optimal image quality. It is known that the larger the size of the footprint, the higher the quality of the output image.
- Image processor 200 in the present invention performs image filtering with simultaneous coordinate transformation.
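- One plausible reading of such a footprint filter, sketched with Gaussian weights inside an ellipse; the weight profile and parameter names are assumptions, not taken from the patent:

```python
import numpy as np

def filter_elliptical(img: np.ndarray, cx: float, cy: float,
                      a: float = 2.0, b: float = 1.0,
                      angle: float = 0.0):
    """Weighted average of input pixels inside an elliptical footprint.

    (cx, cy): the input-pixel position an output pixel mapped to.
    a, b:     semi-axes of the footprint in pixels.
    angle:    orientation of the footprint, in radians.
    Gaussian weights fall off with the normalized elliptical radius
    and are zero outside the ellipse.
    """
    h, w = img.shape[:2]
    r = int(np.ceil(max(a, b)))
    x0, x1 = max(0, int(cx) - r), min(w, int(cx) + r + 1)
    y0, y1 = max(0, int(cy) - r), min(h, int(cy) + r + 1)
    y, x = np.mgrid[y0:y1, x0:x1]
    dx, dy = x - cx, y - cy
    u = dx * np.cos(angle) + dy * np.sin(angle)    # ellipse frame
    v = -dx * np.sin(angle) + dy * np.cos(angle)
    rho2 = (u / a) ** 2 + (v / b) ** 2
    wgt = np.exp(-rho2) * (rho2 <= 1.0)
    wsum = wgt.sum()
    if wsum == 0.0:                                # degenerate footprint
        return img[int(round(cy)), int(round(cx))]
    if img.ndim == 3:
        wgt = wgt[..., None]
    return (img[y0:y1, x0:x1] * wgt).sum(axis=(0, 1)) / wsum
```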
- All geometric and optical distortion parameters explained above are concatenated in distortion convolution stage 250 for the different color components. These parameters include camera optics data, projection optics data, and projection geometry data, via optics and geometry data interface 236, and control inputs via control data interface 202.
- The concatenated optical and geometric distortion parameters are then obtained by distortion correction module 260.
- The function of this module is to transform the position, shape, and color intensities of each element of a scene onto a display pixel.
- The shape of viewing surface 160 is taken into account in the projection geometry data 234. This surface is not necessarily flat and could be any general shape so long as a surface map is obtained and concatenated with the other distortion parameters.
- The display surface map is convoluted with the rest of the distortion data.
- Distortion correction module 260 obtains a warp map covering the entire space of distortion parameters.
- The process is explained in detail in co-pending United States Patent Application Publication Nos. 2003/0020732 A1 and 2003/0043303 A1, hereby incorporated by reference.
- A transformation is computed to compensate for the distortions an image suffers when it propagates through the camera optics, through the display optics, and onto the specific shape of viewing surface 160.
- The formation of the distortion parameter set and the transformation computation could be done offline and stored in a memory to be accessed by image processor 200 via an interface. They could also be performed at least partly dynamically in the case of varying parameters.
- A display image can be composed of view windows that can be independent views or concatenated views.
- Distortion correction module 260 interpolates and calculates the spatial transform and filtering parameters from the warp surface equations and performs the image transformation for the display image.
- Distortion correction module 260 first finds the nearest grid data point in the distortion parameter space. It then interpolates the existing transformation corresponding to that set of parameters to fit the actual distortion parameters. Correction module 260 then applies the transformation to the digital image data to compensate for all distortions.
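- A toy version of that lookup-and-interpolate step, reduced to a single scalar distortion parameter and precomputed per-pixel maps (the shapes and names are illustrative):

```python
import numpy as np

def interpolate_warp(param: float, grid_params: np.ndarray,
                     warp_maps: np.ndarray) -> np.ndarray:
    """Blend the two stored warp maps bracketing the actual parameter.

    grid_params: sorted 1-D array of parameter values with stored maps.
    warp_maps:   shape (n, H, W, 2), per-pixel (x_in, y_in) for each
                 stored parameter value.
    """
    i = int(np.searchsorted(grid_params, param))
    i = min(max(i, 1), len(grid_params) - 1)      # clamp to a valid pair
    p0, p1 = grid_params[i - 1], grid_params[i]
    t = (param - p0) / (p1 - p0) if p1 > p0 else 0.0
    return (1.0 - t) * warp_maps[i - 1] + t * warp_maps[i]
```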
- The digital image data from each frame of each camera are combined to form a composite image fitting the viewing surface and its substructure. The corresponding digital data are corrected in such a way that when an image is formed on the viewing surface, it is visibly distortion free and fits an optimized viewing region on the viewing surface.
- The physical implementation of distortion correction module 260 could be a software program on a general-purpose digital signal processor or dedicated processing circuitry such as an application-specific integrated circuit.
- A physical example of image processor 200 is incorporated in the Silicon Optix Inc. sxW1 and REON chips.
- FIG. 2B shows the flow logic of image processor 200 in one example of the present invention.
- Digital data flow in this chart is indicated via bold lines whereas calculated data flow is depicted via thin lines.
- Brightness and contrast histograms are measured from the digital data at step (10).
- Camera optics along with display optics and display geometry parameters are obtained in steps (14) and (16).
- These data are then used along with the brightness and contrast histograms in step (20), where the image luminance non-uniformity is adjusted.
- Optics and geometry data from steps (14) and (16), as well as control parameters obtained in step (26), are then gathered at step (30).
- There, all distortion parameters are concatenated.
- A transformation inverting the effect of the distortions is then computed at step (40). This compensating transformation is then applied to the luminance-adjusted digital image data obtained from step (20).
- Step (50) constitutes simultaneous coordinate transformation and image processing.
- From step (50), distortion-compensated digital image data are used to generate a composite image for display.
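- Condensed to code, the chart's data flow might read as follows; this is a miniature stand-in using nearest-neighbour sampling on a single greyscale frame, not the patent's implementation:

```python
import numpy as np

def process_frame(frame: np.ndarray, gain_map: np.ndarray,
                  warp_map: np.ndarray) -> np.ndarray:
    """Steps (10)-(50) on one greyscale frame, in miniature.

    frame:    HxW uint8 digital image data.
    gain_map: HxW luminance-correction gains (stands in for steps
              (10)-(20), measurement plus adjustment).
    warp_map: HxWx2 floats mapping each output pixel to an input
              (row, col): the precomputed compensating transformation
              of steps (30)-(40).
    """
    # steps (10)-(20): adjust luminance
    adjusted = np.clip(frame.astype(np.float32) * gain_map, 0, 255)
    # step (50): resample the adjusted input at the warped coordinates
    # (nearest-neighbour here; the real system filters over a footprint)
    h, w = frame.shape
    rows = np.clip(np.rint(warp_map[..., 0]), 0, h - 1).astype(int)
    cols = np.clip(np.rint(warp_map[..., 1]), 0, w - 1).astype(int)
    return adjusted[rows, cols].astype(np.uint8)
```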
- Display controller 280 then generates an image from the distortion-compensated data.
- The image is formed on display device 120, which could be a direct view display or a projection display device.
- The image from a projection display device 120 is projected through display optics 122 onto viewing surface 160.
- Display optics 122 are bulk optics components that direct the projected light from display device 120 onto viewing surface 160. Any particular display system has additional optical and geometric distortions that need to be corrected.
- Image processor 200 concatenates these distortions and corrects for the combined distortion.
- The present invention is not limited to any particular choice of the display system; the display device could be provided as liquid crystal, light emitting diode, cathode ray tube, electroluminescent, plasma, or any other viable display choice.
- The display device could be viewable directly, or it could be projected through the display optics onto an integrated compartment serving as a viewing screen.
- The brightness of the display system is adjusted via ambient light sensors, an independent signal, and a user dimmer switch.
- The size of the viewing surface is also preferably adjustable by the user, for instance according to the distance from the operator's eyes.
- Controller 101 interfaces with image processor 200 . It acquires user parameters from user interface 150 along with inputs from the sensors 194 and sends them to the image processor 200 for display control.
- The user parameters are specific to the application, corresponding to the different embodiments of the invention.
- Sensors 194 preferably include ambient light sensors and direct glare sensors. The data from these sensors are used for display adjustment. Control parameters are convoluted with other parameters to provide a desired image.
- Compression stage 132 receives the video stream from image processor 200 and compresses the digital video data. Compressed data are stored in record device 142 for future use. In another embodiment, the data are encrypted in encryption stage 134 and sent over the network via network interface 144.
- FIG. 3A shows a conventional automotive vehicle 900 with two side mirrors and an in-cabin mirror covering right view 902 , left view 904 , and rear view 906 . It is well known that traditional mirror positions and fields of view cause blind spots and may have distortions due to wide viewing angle. It is hard to get complete situational awareness from the images provided by the three mirrors.
- In FIG. 3B, vehicle 900′ is equipped with camera 110′ and camera 111′ forward of the driver seat. These positions increase coverage by overlapping with the driver's direct forward field of view.
- Camera 113′ in this example is situated at the rear of the vehicle or in the middle rear of its roof. Specific areas of the different views are selected to provide a continuous image without overlap. This prevents driver confusion.
- In FIG. 3C, vehicle 900″ has camera 110″ and camera 111″ at the front corners of the vehicle and camera 113″ at the center rear.
- Here the emphasis is on views adjustable by user input or by control parameters.
- Turning side view 903″ is made available when the turning signal is engaged. The coverage of this view is a function of user inputs and, in principle, covers the whole area between the dashed lines.
- Drive side view 905″ is used in regular driving mode and is likewise expandable to cover the whole area between the dashed lines.
- The rear view in this example has two modes depending on control parameters. When the vehicle is in reverse, reverse rear view 907″ is used for display.
- This view yields complete coverage of the rear of the vehicle, including objects on the pavement. This assures safer backing up and greatly facilitates parallel parking.
- Otherwise, a narrower drive rear view 906″ is used. These positions and angles ensure a convenient view of the exterior of the vehicle by the driver. This example significantly facilitates functions like parallel parking and lane changes.
- FIG. 4A shows an example of the viewing surface 160 as seen by the driver.
- Rear view 166 is at the bottom of the viewing surface while the right view 162 and left view 164 are at the top right and the top left of the viewing surface 160 respectively.
- FIG. 5A shows an example of viewing surface 160′ where a wider display is used and the side displays are at the two sides of the rear view 166′.
- Image processor 200 has the panoramic conversion and image stitching capability to compose this particular display.
- FIG. 4B and FIG. 5B present examples of the reconfigured displays of FIG. 4A and FIG. 5A when the vehicle is turning right or in reverse, respectively.
- FIG. 6A is an example illustration of the reconfigured display of viewing surface 160 when the vehicle is changing lanes, moving into the right lane.
- The right front and rear of the vehicle are completely viewable, providing situational awareness of everything on the right side of the vehicle.
- FIG. 6B is an example illustration of the display configured for turning right, when the right turning signal is engaged.
- Turning side view 903″ of FIG. 3C is now displayed at the top middle portion of the display as turning side view 163″.
- Rear view 166″ and right view 162″ are at the bottom of the display.
- FIG. 7 shows another example of the viewing surface 160 .
- This particular configuration of the display is achievable via gathering visual information from image acquisition devices, as well as distance determination from ultrasound and radar sensors integrated with vision system 100 .
- The vehicle drawn with bold lines and grey shading contains vision system 100 and is shown with respect to the surrounding setting, namely, other vehicles.
- This example implementation of the present invention greatly increases situational awareness and facilitates driving the vehicle. Two of the dimensions of each object are captured directly by the cameras. The shapes and sizes of these vehicles and objects are reconstructed by extrapolating the views of the cameras according to the size and shape of the lanes and curbs. A more accurate visual account of the driving scene could be reconstructed by pattern recognition and look-up tables for specific objects in a database.
- Projection geometry data, projection optics data, and camera optics data cover all these alternative implementations in image processor 200.
- The display brightness is adjustable via ambient light sensors, via a signal from the vehicle headlights, or via a manual dimmer switch.
- The size of the display surface is also adjustable by the driver.
- The focal length of the displayed images lies well within the driver's field of focus, as explained in U.S. Pat. No. 5,949,331, which is hereby incorporated by reference.
- It is preferable that the display system focal length be adjusted according to the speed of the vehicle to ensure the images always form within the depth of focus of the driver. At higher speed, the driver naturally focuses at a longer distance. Such adjustments are achieved via speedometer signal 156 or transmission signal 154.
- FIG. 8 shows an example of control inputs through user interface 150 in a vehicle vision system.
- Signal interface 151 receives signals from different components and interfaces with controller 101 .
- Turning signal 152, for instance, when engaged while turning right, relays a signal to signal interface 151, on to controller 101, and eventually to image processor 200.
- When image processor 200 receives this signal, it configures the display so as to put emphasis on the right display 162 on viewing surface 160.
- FIG. 4B shows the viewing screen 160 under such conditions.
- Right view display now occupies half of viewing surface 160 with the other half dedicated to rear view display 166 .
- FIG. 5B shows the same situation with a wide viewing surface embodiment. Other signals evoke other functions.
- When the vehicle is shifted into reverse, the transmission preferably generates a signal so as to emphasize the rear view.
- Other signals include the steering signal, which is engaged when the steering wheel is turned to one side by more than a preset limit; the brake signal, which is engaged when the brake pedal is depressed; the transmission signal, which conveys information about the traveling speed and direction; and the speedometer signal, which is gauged at various set velocities to input different signals.
- The display system in the present invention adjusts to different situations depending on any of these control parameters and reconfigures automatically to make crucial images available.
- A variety of control signals could be incorporated with image processor 200; we have mentioned only a few examples here. A sketch of how such signals might drive the reconfiguration appears below.
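- As referenced above, a sketch of such signal-driven reconfiguration; the thresholds and view names are invented for illustration:

```python
def select_layout(reverse: bool, turn: str, speed_kmh: float) -> str:
    """Pick a display configuration from vehicle control signals.

    Reverse trumps everything; an active turn signal selects a side
    or corner view depending on speed; otherwise the normal layout
    is used. The 15 km/h threshold is invented for illustration.
    """
    if reverse:
        return "wide rear-bottom view"            # cf. FIG. 5B
    if turn in ("left", "right"):
        if speed_kmh < 15.0:                      # stopped or crawling
            return turn + " corner view"          # cf. FIG. 6B
        return turn + " side view emphasized"     # cf. FIG. 4B
    return "rear view with both side views"       # cf. FIG. 4A
```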
- Vision system 100 accesses these data either through downloaded files or through wireless communications at the driver's request. It then superimposes these data on the image frame data to compose the image for the display system. Important warnings are also preferably received by the vision system and are displayed and broadcast via the display system and audio signal interfaces. Vision system 100 could also be integrated with intelligent highway driving systems via exchanging image data information. It is also integrated with distance determiners and object identifiers as per U.S. Pat. No. 6,498,620 B2, hereby incorporated by reference.
- Vehicle vision system 100 comprises compression stage 132 and record device 142 to make a continuous record of events as detected by the cameras and audio devices. These features make it possible to save specific segments prior to and after an impact, in addition to driver-selected events, to be reviewed after mishaps like accidents, carjacking, and traffic violations. Such information could be used by law enforcement agents, judicial authorities, and insurance companies.
- In another embodiment, the present invention provides a videoconference vision system covering up to 180° or 360° depending on a given setting.
- In this embodiment, audio signals are needed along with the video signals.
- Audio device array 117 provides audio inputs in addition to the video inputs from the cameras.
- The audio signal data are converted to digital data via digitizer 118 and superposed on the digital image frame data.
- A pan function based on triangulation of the audio signal is provided to pan to individual speakers or questioners.
- The pan and zoom functions are provided digitally by image processor 200; in the absence of optical zoom, a digital zoom is provided.
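- The triangulation and digital pan are not detailed in the patent; a toy two-microphone, far-field version might look like this (the model and all names are our assumptions):

```python
import numpy as np

SPEED_OF_SOUND = 343.0                            # m/s in air

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Far-field bearing (radians) from the arrival-time difference
    between two microphones."""
    s = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(s))

def digital_pan(panorama: np.ndarray, bearing_rad: float,
                view_width: int) -> np.ndarray:
    """Crop the 360° strip around the estimated speaker bearing."""
    h, w = panorama.shape[:2]
    center = int((bearing_rad % (2 * np.pi)) / (2 * np.pi) * w)
    cols = np.arange(center - view_width // 2,
                     center + view_width // 2) % w  # wrap around 360°
    return panorama[:, cols]
```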
- FIG. 9A relates to an example of a videoconference system. It shows the view of camera 110 at the center of a conference table.
- Camera 110 has a hemispheric lens system and captures everything above, and including, the table surface. Raw images are upside down, stretched, and disproportionate. The displayed images on viewing surface 160 are, however, distortion free and panoramically converted.
- FIG. 9B shows viewing surface 160, where the speaker's image is enlarged and the rest of the conference is visible in a straightened form on the same surface.
- In another embodiment, the present invention provides a surveillance system with motion detection and object recognition.
- FIG. 2A shows an example of image processor 200 used in the surveillance system.
- Motion detector 270 evaluates successive input image frames and, based on a preset level of detected motion, signals an alarm and sends the motion area co-ordinates to distortion convolution stage 250.
- The input image frames are used to monitor the area outside the current view window.
- The tracking co-ordinates are used by distortion convolution stage 250 to calculate the view window for corrected display of the detected object.
- The distortion-corrected object image can be resolution enhanced by motion-compensated temporal interpolation of the object.
- Object recognition and classification can also be performed.
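- A bare-bones frame-differencing detector of the kind that could feed such a view-window calculation might look like this; the thresholds and names are illustrative, not from the patent:

```python
import numpy as np

def detect_motion(prev: np.ndarray, curr: np.ndarray,
                  threshold: int = 25, min_pixels: int = 50):
    """Return the bounding box (y0, x0, y1, x1) of changed pixels,
    or None if fewer than min_pixels pixels changed.

    prev, curr: greyscale uint8 frames of equal size.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    moving = diff > threshold
    if moving.sum() < min_pixels:
        return None
    rows = np.any(moving, axis=1).nonzero()[0]
    cols = np.any(moving, axis=0).nonzero()[0]
    return rows[0], cols[0], rows[-1], cols[-1]
```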
- FIG. 10 shows an illustrated example of a surveillance system.
- Cameras 110 and 111 are in a hallway where they are mounted on the walls. These two cameras monitor traffic through the hallway.
- Image processor 200 in this embodiment uses motion detector and tracker 270 to track the motion of an object as it passes through the hallway.
- Viewing surface 160 in this example is shown in FIG. 11A at a certain time and in FIG. 11B at a later time.
- The passage of the object is thoroughly captured as it moves from the field of view of one camera to the other.
- The vision system can also perform resolution enhancement by temporal extraction to improve object detail and recognition, and it displays the result at the top of the image in both FIG. 11A and FIG. 11B.
- The top portion in these figures is provided in full resolution or by digitally zooming in on the moving object.
- Recognition and tracking of the object are achieved by comparing the detected object in motion in one frame with the next frames. An outline highlights the tracked object for ease of recognition.
Description
- This invention relates to vision systems, particularly to panoramic vision systems with image distortion correction.
- Improved situational awareness is increasingly required for many applications including surveillance systems, videoconference vision systems, and vehicle vision systems. Essential to such applications is the need to monitor a wide operating area and to form a composite image for easy comprehension in different user modes. To minimize the number of cameras and cost, cameras with panoramic lenses enable a wide field of view but introduce distortions due to their inherent geometric shape and optical non-linearity. In a surveillance application, multiple panoramic cameras may cover the entire exterior and interior area of a building, and the system can provide a continuous view of the area and, manually or automatically, track objects through the area.
- In a vehicle vision system, multiple panoramic cameras can provide a full 360° view of the area and around obstructive objects. Such systems can adapt display views to specific operator modes such as turning, reversing, and lane changing to improve situational awareness. Additional advantages of a vehicle vision system are reducing the wind drag and noise caused by side mirrors, and reducing the width span of the vehicle by eliminating such protruding mirrors. These systems can also have the capability to detect objects in motion, provide warning of close objects, and track such objects through multiple viewing regions. Vision systems could also greatly enhance night vision through various technologies such as infrared, radar, and light sensitive devices.
- Vision systems consist of one or more image acquisition devices, coupled to one or more viewable display units. An image acquisition device can incorporate different lenses with different focal lengths and depths of focus such as planar, panoramic, fish-eye, and annular. Lenses like fish-eye and annular have a wide field of view and a large depth of focus. They can capture a wide and deep field of view. They tend, however, to distort images, especially the edges. Resulting images look disproportionate. In any type of lens, there are also optical distortions caused by tangential and radial lens imperfections, lens offset, focal length of the lens, and light falloff near the outer portions of the lens. In a vision system, there are yet other types of image distortions caused by luminance variations and color aberrations. These distortions affect the quality and sharpness of the image.
- Prior art panoramic vision systems do not remove image distortions while accurately blending multiple images. One such panoramic system is disclosed in U.S. Pat. No. 6,498,620 B2, namely a rearview vision system for a vehicle. This system consists of image capture devices, an image synthesizer, and a display system. Neither in this document, nor in the ones referenced therein, is there an electronic image processing system correcting for geometric, optical, color aberration, and luminance image distortions. In United States Patent Application Publication No. 2003/0103141 A1, a vehicle vision system includes pre-calibration-based luminance correction only, but not other optical and geometric corrections. A thorough luminance and chrominance correction should be based on input and output optical and geometric parameters and be adaptive to changing ambient environments.
- Distorted images and discontinuity between multiple images slow down the operator's visual cognizance, and as such, her/his situational awareness, resulting in potential errors. This is one of the most important impediments in launching an efficient panoramic vision system. It is therefore desirable to provide a panoramic vision system with accurate representation of the situational view via removing geometric, optical, color aberration, luminance, and other image distortions and providing a composite image of multiple views. Such corrections will aid visualization and recognition and will improve visual image quality.
- The present invention in one aspect provides a panoramic vision system having associated camera, display optics and geometric characteristics, said system comprising:
-
- (a) a plurality of image acquisition devices to capture image frame data from a scene and to generate image sensor inputs, said image frame data collectively covering up to a 360° field of view, said image acquisition devices having geometric and optical distortion parameters;
- (b) a digitizer coupled to the plurality of image acquisition devices to sample and convert the image frame data and the image sensor inputs into digital image data;
- (c) an image processor coupled to the digitizer comprising:
- (i) an image measurement device to receive the digital image data and to measure the image luminance histogram and the ambient light level associated with the digital image data;
- (ii) a luminance correction module coupled to the image measurement device to receive the digital image data along with the camera, display optics and geometric characteristics, the image luminance histogram and the ambient light level, and to correct for luminance non-uniformities and to optimize the luminance range of selected regions within the digital image data;
- (iii) a convolution stage coupled to the luminance correction module to combine the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters;
- (iv) a distortion correction module coupled to the convolution stage to generate and apply a distortion correction transformation based on the convoluted distortion parameters to the digital image data to generate corrected digital image data;
- (v) a display controller coupled to the distortion correction module to synthesize a composite image from the corrected digital image data; and
- (d) a display system coupled to said image processor to display the composite image on a viewing surface for viewing, said composite image being visually distortion free.
- In another aspect, the present invention provides a method for providing panoramic vision using a panoramic vision system having camera, display optics and geometric characteristics as well as geometric and optical distortion parameters, to generate a composite image that covers up to 360° or 4π steradians, said method comprising:
-
- (a) acquiring image frame data from a scene, said image frame data collectively covering up to 360° or 4π steradian field of view, and generating a set of image sensor inputs;
- (b) converting the image frame data and the image sensor inputs into digital image data, said digital image data being associated with an image luminance histogram and an ambient light level;
- (c) obtaining the camera, display optics and geometric characteristics, the image luminance histogram and the ambient light level and correcting for luminance non-uniformities to optimize the luminance range of selected regions within the digital image data;
- (d) convoluting the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters;
- (e) generating and applying the distortion correction transformations to the digital image data, said distortion correction transformations being based on the convoluted geometric and optical distortion parameters to generate corrected digital image data;
- (f) synthesizing a composite image from the corrected digital image data; and
- (g) displaying the composite image on a viewing surface for viewing, said composite image being visually distortion free.
- In another aspect, the present invention provides an image processor, for use in a panoramic vision system having associated camera, display optics and geometric characteristics as well as geometric and optical distortion parameters, said panoramic vision system using a plurality of image acquisition devices to capture image frames from a scene and to generate image frame data and image sensor inputs and a digitizer to convert the image frame data and the image sensor inputs into digital image data, said image processor comprising:
-
- (a) an image measurement device to receive the digital image data and to measure the image luminance histogram and the ambient light level associated with the digital image data;
- (b) a luminance correction module coupled to the image measurement device to receive the digital image data along with the camera, display optics and geometric characteristics, the image luminance histogram, and the ambient light level and to correct for luminance non-uniformities and to optimize the luminance range of selected regions within the digital image data;
- (c) a convolution stage coupled to the luminance correction module to combine the geometric and optical distortion parameters including the image sensor inputs, the camera, display optics and geometric characteristics and imperfections associated therein to form convoluted distortion parameters;
- (d) a distortion correction module coupled to the convolution stage to generate and apply a distortion correction transformation, based on the convoluted distortion parameters, to the digital image data to generate corrected digital image data; and
- (e) a display controller coupled to the distortion correction module to synthesize a composite image from the corrected digital image data.
- The present invention in its first embodiment provides a vehicle vision system covering up to 360° horizontal field of view, and up to 180° vertical field of view coverage, with emphasis on the rear, side, and corner views. The situational display image can be integrated with other vehicle information data such as vehicle status or navigation data. The image is preferably displayed on the front panel, contiguous with the driver's field of view. Additional views can be displayed to improve coverage and flexibility such as interior dashboard corner displays that substitute mirrors, eliminating exposure to the external elements and wind drag. The system is adapted to reconfigure the display and toggle between views based on control inputs, when one view becomes more critical than others.
- For instance, the display reconfigures to show a wide view of the rear bottom of the vehicle when the vehicle is backing up, or the right side view when the right turning signal is turned on and the vehicle runs at higher speed, or the right corner view when the right turning signal is active and the vehicle is stopped or runs at low speed. In a preferred embodiment, the system provides such facilities as parking assistance, or lane crossing warning via pattern recognition of curbs, lanes, and objects as well as distance determination.
- In another embodiment, the present invention provides a surveillance system covering up to 360° horizontal or 4π steradian field of view of the exterior and/or interior of a structure such as a building, a bridge, etc. The system can provide multiple distortion corrected views to produce a continuous strip or scanning panoramic view. The system can perform motion detection and object tracking over the surveillance area and object recognition to decipher people, vehicles, etc.
- In yet another embodiment, the invention provides a videoconference vision system producing a wide view of up to 180° or a circular view of up to 360°. In this embodiment, the invention facilitates viewing the participants of a conference or a gathering and provides ability to zoom in on speakers with a single camera.
- Further details of different aspects and advantages of the embodiments of the invention will be revealed in the following description along with the accompanying drawings.
- In the accompanying drawings:
- FIG. 1 represents an overall view of a vision system and its components built in accordance with the present invention;
- FIG. 2A represents the structure of the image processor of FIG. 1 as part of the vision system;
- FIG. 2B represents the flow logic of the image processor of FIG. 1;
- FIG. 3A represents a conventional vehicle with mirrors used for side and rear vision;
- FIG. 3B represents an example setting for the cameras and their views in a vehicle example of the vision system;
- FIG. 3C represents selected views for the cameras in a vehicle example of the vision system;
- FIG. 4A represents the viewing surface of the vision system display in a vehicle vision system example;
- FIG. 4B represents the same viewing surface as FIG. 4A when the right turning signal is engaged in a vehicle vision system example;
- FIG. 5A represents a wide alternative of the viewing surface of the vision system display in a vehicle vision system example;
- FIG. 5B represents the reconfigured viewing surface of FIG. 5A when the vehicle is running in reverse;
- FIG. 6A represents the reconfigured display when the vehicle is changing lanes;
- FIG. 6B represents the reconfigured display when the vehicle is turning right and the driver's view is obscured;
- FIG. 7 represents a top view display showing the vehicle and adjacent objects;
- FIG. 8 represents an example of control inputs in a vehicle vision system;
- FIG. 9A represents the view of a hemispheric, 180° (2π steradians) view camera, set in the center of a conference table in a videoconference vision system example;
- FIG. 9B represents the display of a videoconference vision system example;
- FIG. 10 represents a hallway or a building wall with two cameras installed on the walls as part of a surveillance system example;
- FIG. 11A represents the display of a surveillance system example of the invention; and
- FIG. 11B represents the same display as FIG. 11A at a later time.
- Built in accordance with the present invention, FIG. 1 shows the overall structure of vision system 100. It comprises a plurality of image acquisition devices such as camera 110 to capture image frames, digitizer 118 to convert image frame data to digital image data, image processor 200 to adjust luminance, correct image distortions, and form a composite image from the digital image data and control parameters, controller 101 to relay user and sensor parameters to image processor 200, and display device 120 to display the composed image for viewing on viewing surface 160. The final composed image covers up to 360°, includes several specific regions of interest, and is significantly distortion free. It greatly enhances situational awareness.
- Camera 110 in FIG. 1 comprises image optics 112, sensor array 114, and image capture control 116. For applications requiring wide area coverage, image optics 112 may use a wide-angle lens to minimize the number of cameras required. Wide-angle lenses also have a wider depth of focus, reducing the need for focus adjustment and its implementation cost. Such lenses typically have a field of view in excess of 100° and a depth of focus from a couple of feet to infinity. Both properties make them desirable for a panoramic vision system. Wide-angle lenses do, however, have greater optical distortions, which are difficult and costly to correct optically. In the present invention, electronic correction is used to compensate for any optical and geometric distortion, as well as any other non-linearity or imperfection in the optics or projection path.

- In the present invention, camera lenses need not be perfect or yield perfect images. All image distortions, including those caused by lens geometry or imperfections, are mathematically modeled and electronically corrected. The importance of this feature is that it enables the use of inexpensive wide-angle lenses with an extended depth of focus. The drawback of such lenses is the image distortion and luminance non-uniformity they cause; they tend to stretch objects, especially near the edges of a scene, making them look disproportionate. This effect is well known, for example, for fish-eye or annular lenses. Bulk optic solutions are available to compensate for these distortions, but the addition of bulk optics would make such devices larger, more massive, and more expensive. The cost of bulk optics components is dominated by the labor cost of functions such as polishing and alignment, whereas electronic components constantly become cheaper and more efficient. It is also well known that bulk optics can never remove certain distortions completely.
- Another drawback of bulk optics solutions is their sensitivity to alignment and the risk of being rendered dysfunctional upon impact. There is therefore a crucial need for electronic image correction that makes a vision system represent the true setting while keeping it robust and relatively economical. This is reinforced by the fact that, in a vision system, electronic components are most likely needed anyway to accomplish other tasks, such as panoramic conversion and display control. Image processor 200, described later in this section, provides these essential functions.
- Image sensor array 114 is depicted in camera 110. The sensor array is adapted to moderate extreme variations in the light intensity of the scene reaching the camera system. For a vision system to operate efficiently in different lighting conditions, a great dynamic range is needed. This requirement stems from the day and night variations in ambient light intensity, as well as the wide variation from incidental light sources at night. A camera sensor is basically a mosaic of segments, each exposed to the light from a portion of the scene and registering the intensity of the light as output voltages. Image capture control 116 sends image frame intensity information to image processor 200 and, based on these data, receives commands for integration optimization, lens iris control, and white balance control.
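As a rough illustration of this exposure feedback loop, the sketch below (Python with NumPy; the function name, target level, and step limits are illustrative assumptions, not values from the patent) adjusts the sensor integration time so that the mean intensity of an 8-bit frame approaches a target:

```python
import numpy as np

def suggest_integration_time(frame: np.ndarray, current_time_us: float,
                             target_mean: float = 110.0) -> float:
    """Scale the integration time so the mean luma of an 8-bit frame nears a target."""
    mean_luma = float(frame.mean())
    if mean_luma < 1.0:                      # nearly black frame: open up quickly
        return current_time_us * 2.0
    gain = float(np.clip(target_mean / mean_luma, 0.5, 2.0))  # limit the step size
    return current_time_us * gain            # converges smoothly over several frames
```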
- Digitizer 118 receives image data from image capture control 116 and converts these data into digital image data. In an alternative example, the function of digitizer 118 may be integrated into image sensor 114 or image processor 200. Where audio is also acquired, digitizer 118 receives the audio data from audio device array 117, produces digital audio data, and combines them with the digital image data. The digital image data are then sent to image processor 200.
- FIG. 2A shows image processor 200 in detail. The function of the image processor is to receive digital image data, measure image statistics, enhance image quality, compensate for luminance non-uniformity, and correct various distortions in these data, to finally generate one or multiple composite images that are significantly distortion free. It comprises optics and geometry data interface 236, which comprises camera optics data, projection optics data, and projection geometry data, as well as control data interface 202, image measurement module 210, luminance correction module 220, distortion convolution stage 250, distortion correction module 260, and display controller 280. To understand the function of image processor 200, it is important to briefly discuss the nature and causes of these distortions.

- Ambient light and spot lighting can result in extreme variations in illumination levels, resulting in poor image quality and difficulty in image element recognition. In this invention,
image processor 200 measures full and selective image areas and analyzes them to control exposure for the best image quality in the area of interest. High-intensity sources such as headlights and spotlights are substantially reduced in intensity, and low-light objects are enhanced in detail to aid element recognition.

- While the image sensor may be of any type and resolution, for the applications identified in this invention a high-resolution solid-state CCD or CMOS image sensor would be appropriate in terms of size, cost, integration, and flexibility. Typical adjustments to widely varying ambient light levels are made through iris control and integration time. In addition to iris control and integration time control,
luminance correction module 220 performs histogram analysis of the areas of interest, expands contrast in low-light areas, and reduces contrast in high-light areas to provide enhanced image element recognition.
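A minimal sketch of such a piecewise contrast adjustment is given below (Python/NumPy; the knee and shoulder break points and the slopes are illustrative assumptions, not values from the patent). It builds a 256-entry look-up table whose tone curve is steeper in the shadows and flatter in the highlights:

```python
import numpy as np

def enhance_luma(frame: np.ndarray, knee: int = 64, shoulder: int = 192) -> np.ndarray:
    """Expand shadow contrast and compress highlight contrast on 8-bit luma."""
    x = np.arange(256, dtype=np.float32)
    lo = x * 1.5                                                # steeper in shadows
    mid = knee * 1.5 + (x - knee)                               # unity slope mid-tone
    hi = knee * 1.5 + (shoulder - knee) + (x - shoulder) * 0.5  # flatter highlights
    y = np.where(x < knee, lo, np.where(x < shoulder, mid, hi))
    lut = np.clip(y * 255.0 / y[-1], 0, 255).astype(np.uint8)   # renormalize to 8 bits
    return lut[frame]                                           # apply by table look-up
```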
- Luminance non-uniformities are undesired brightness variations across an image. The main physical reason for luminance non-uniformity is that light rays going through different portions of an optical system travel different distances and area densities. The intensity of a light ray falls off with the square of the distance it travels. This phenomenon occurs in the camera optics as well as the display optics. In addition to this purely physical cause, imperfections in an optical system also cause luminance non-uniformity; examples of such imperfections are projection lens vignetting and lateral fluctuations and non-uniformity in the generated projection light. If the brightness variations differ among the three color components, they are referred to as chrominance non-uniformity.

- Another type of optical distortion is color aberration. It stems from the fact that an optical component such as a lens has different indices of refraction for different wavelengths. Light rays propagating through optical components refract at different angles for different wavelengths, resulting in lateral shifts of colors in images. Lateral aberrations cause the different color components of a point object to separate and diverge; on a viewing surface, a point would look fringed. Another type of color aberration is axial in nature and is caused by the fact that a lens has different focal points for different light wavelengths. This type of color aberration cannot be corrected electronically.
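Lateral color shifts, by contrast, lend themselves to digital correction. A minimal sketch under the simplifying assumption of a purely radial shift about the image centre (the scale factors are illustrative placeholders, not calibrated values) resamples the red and blue channels against the green reference:

```python
import numpy as np

def correct_lateral_ca(img: np.ndarray, scale_r: float = 1.002,
                       scale_b: float = 0.998) -> np.ndarray:
    """Radially rescale the R and B channels about the centre to realign them with G."""
    h, w, _ = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    out = img.copy()
    for ch, s in ((0, scale_r), (2, scale_b)):    # channel 1 (green) is the reference
        src_y = np.clip((yy - cy) * s + cy, 0, h - 1).astype(int)
        src_x = np.clip((xx - cx) * s + cx, 0, w - 1).astype(int)
        out[..., ch] = img[src_y, src_x, ch]      # nearest-neighbour resampling
    return out
```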
- A variety of other distortions might be present in a vision system. Tangential or radial lens imperfections, lens offset, projector imperfections, and keystone distortions from off-axis projection are some of the common distortions.
- In addition to the mentioned distortions,
image processor 200 maps the captured scene onto a viewing surface with certain characteristics such as shape, size, and aspect ratio. For example, an image can be formed from different sections of a captured scene and projected onto a portion of the windshield of a car, in which case it would suffer distortions because of the non-flat shape, as well as the particular size, of the viewing surface. The image processor in the present invention corrects for all these distortions, as explained below.

- Referring now to the details of
image processor 200 in FIG. 2A, digital image data are received by image measurement module 210, where image contrast and brightness histograms within the region of interest are measured. These histograms are analyzed by luminance correction module 220 to control sensor exposure and to adjust the digital image data to improve image content for visualization and detection. Some adjustments include highlight compression, contrast expansion, detail enhancement, and noise reduction.

- Along with this measurement information,
luminance correction module 220 receives camera and projection optics data from optics and geometry data interface 236. These data are determined from accurate light propagation calculations and calibrations of the optical components. They are crucial for luminance correction since they provide the optical path lengths of different light rays through the camera as well as the projection system. Separate or combined camera and projection correction maps are generated to compute the correction for each pixel.
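As a toy illustration of such a correction map, the sketch below (Python/NumPy) builds a per-pixel gain undoing radial intensity falloff under a simple pinhole, inverse-square path-length model; the model and the focal-length parameter are assumptions for illustration, not the patent's calibration data:

```python
import numpy as np

def falloff_gain_map(h: int, w: int, focal_px: float) -> np.ndarray:
    """Per-pixel gain that undoes inverse-square path-length falloff.

    A ray reaching pixel (y, x) at radius r from the optical axis travels
    d = sqrt(f**2 + r**2), so its relative intensity is f**2 / d**2.
    """
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    intensity = focal_px ** 2 / (focal_px ** 2 + r2)  # 1.0 on axis, smaller off axis
    return 1.0 / intensity                            # multiply the frame by this map

# For chrominance non-uniformity the map would be computed and applied
# separately for each color channel, e.g. frame[..., c] * gain_c.
```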
luminance correction module 220 is therefore to receive digital image data and produce luminance-adjusted digital image data. In case of chrominance non-uniformity,luminance correction module 220 preferably applies luminance corrections separately to the three different color components. The physical implementation ofluminance correction module 220 could be by a software program or a dedicated processing circuitry such as a digital signal processor or computational logic within an integrated circuit. - Luminance-adjusted image data should be corrected for geometric, optical, and other spatial distortions. These distortions are referred to as “warp” corrections and such correction techniques are called “image warping” in the literature. A discussion of image warping can be found in George Wolberg's “Digital Image Warping”, IEEE Computer Society Press, 1988, hereby incorporated by reference.
- Image warping is basically an efficient parameterization of coordinate transformations, mapping output pixels to input pixels. Ideally, a grid data set represents a mapping of every output pixel to an input pixel. However, grid data representation is quite unforgiving in terms of hardware implementation because of the sheer size of the look-up tables. Image warping in this invention provides an efficient way to represent a pixel grid data set via a few parameters. In one example, this parameterization is done by polynomials of degree n, with n determined by the complexity of the combined distortion.
- In another example of this invention, different areas of the output space are divided into patches with inherent geometrical properties to reduce the degree of the polynomials. In principle, the higher the number of patches and the degree of the fitting polynomial per patch, the more accurate the parameterization of the grid data set; however, this has to be balanced against the processing power available for real-time applications. Such warp maps therefore represent a mapping of output pixels to input pixels that captures the camera optics, display optics, and display geometry, including the nature of the final composite image specification and the shape of the viewing surface. In addition, any control parameters, including user input parameters, are combined with the above parameters and represented in a single transformation.
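To make the output-to-input mapping concrete, here is a minimal sketch of an inverse warp driven by degree-2 bivariate polynomials (Python/NumPy); the coefficient layout and the nearest-neighbour sampling are illustrative simplifications of such a parameterization:

```python
import numpy as np

def warp_polynomial(src: np.ndarray, coeff_x, coeff_y) -> np.ndarray:
    """Inverse warp: output pixel (u, v) samples the input at polynomial (x, y).

    Degree-2 example: x = a0 + a1*u + a2*v + a3*u*u + a4*u*v + a5*v*v,
    and likewise for y with the six coefficients in coeff_y.
    """
    h, w = src.shape[:2]
    vv, uu = np.mgrid[0:h, 0:w].astype(np.float32)
    basis = np.stack([np.ones_like(uu), uu, vv, uu * uu, uu * vv, vv * vv])
    x = np.tensordot(np.asarray(coeff_x, np.float32), basis, axes=1)
    y = np.tensordot(np.asarray(coeff_y, np.float32), basis, axes=1)
    xi = np.clip(np.rint(x), 0, w - 1).astype(int)
    yi = np.clip(np.rint(y), 0, h - 1).astype(int)
    return src[yi, xi]                  # gather a source pixel per output pixel

# The identity mapping: coeff_x = [0, 1, 0, 0, 0, 0], coeff_y = [0, 0, 1, 0, 0, 0].
```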
- In addition to the coordinate transformation, a sampling or filtering function is often needed. Once an output image pixel is mapped onto an input pixel, an area around this input pixel is designated for filtering; this area is referred to as the filter footprint. Filtering is basically a weighted averaging function that produces the intensities of the constituent colors of an output pixel from all the pixels inside the footprint. In a particular example, an anisotropic elliptical footprint is used for optimal image quality. It is known that the larger the footprint, the higher the quality of the output image.
Image processor 200, in the present invention, performs the image filtering with simultaneous coordinate transformation.
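A sketch of footprint filtering at a single mapped source position follows (Python/NumPy); the Gaussian weighting of the elliptical footprint is one plausible choice, assumed here for illustration:

```python
import numpy as np

def sample_elliptical(src: np.ndarray, x: float, y: float,
                      rx: float = 2.0, ry: float = 1.0, theta: float = 0.0):
    """Weighted average over an anisotropic elliptical footprint centred at (x, y)."""
    h, w = src.shape[:2]
    rad = 3.0 * max(rx, ry)                            # window enclosing the ellipse
    x0, x1 = int(max(x - rad, 0)), int(min(x + rad + 1, w))
    y0, y1 = int(max(y - rad, 0)), int(min(y + rad + 1, h))
    yy, xx = np.mgrid[y0:y1, x0:x1].astype(np.float32)
    dx, dy = xx - x, yy - y
    c, s = np.cos(theta), np.sin(theta)                # rotate into the ellipse frame
    u, v = c * dx + s * dy, -s * dx + c * dy
    wgt = np.exp(-0.5 * ((u / rx) ** 2 + (v / ry) ** 2))
    wgt /= wgt.sum()
    region = src[y0:y1, x0:x1].astype(np.float32)
    if region.ndim == 3:                               # color: weight every channel
        return (region * wgt[..., None]).sum(axis=(0, 1))
    return float((region * wgt).sum())
```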
- To correct for image distortions, all the geometric and optical distortion parameters explained above are concatenated in distortion convolution stage 250 for the different color components. These parameters include camera optics data, projection optics data, and projection geometry data, via optics and geometry data interface 236, and control inputs via control data interface 202. The concatenated optical and geometric distortion parameters are then obtained by distortion correction module 260. The function of this module is to transform the position, shape, and color intensities of each element of a scene onto a display pixel. The shape of viewing surface 160 is taken into account in the projection geometry data 234. This surface is not necessarily flat and can be of any general shape, so long as a surface map is obtained and concatenated with the other distortion parameters. The display surface map is convoluted with the rest of the distortion data.
- Distortion correction module 260, in one example of the present invention, obtains a warp map covering the entire space of distortion parameters. The process is explained in detail in co-pending United States Patent Application Nos. 2003/0020732-A1 and 2003/0043303-A1, hereby incorporated by reference. For each set of distortion parameters, a transformation is computed to compensate for the distortions an image suffers as it propagates through the camera optics, through the display optics, and onto the specific shape of viewing surface 160. The formation of the distortion parameter set and the transformation computation can be done offline and stored in a memory to be accessed by image processor 200 via an interface; they can also be done at least partly dynamically in the case of varying parameters.

- A display image can be composed of view windows that can be independent views or concatenated views. For each view window, the
distortion correction module 260 interpolates and calculates the spatial transform and filtering parameters from the warp surface equations and performs the image transformation for the display image.

- For every image frame,
distortion correction module 260 first finds the nearest grid data point in the distortion parameter space. It then interpolates the existing transformation corresponding to that set of parameters to fit the actual distortion parameters. Correction module 260 then applies the transformation to the digital image data to compensate for all distortions. The digital image data from each frame of each camera are combined to form a composite image fitting the viewing surface and its substructure. The corresponding digital data are corrected in such a way that when an image is formed on the viewing surface, it is visibly distortion free and fits an optimized viewing region on the viewing surface. The physical implementation of distortion correction module 260 could be a software program on a general-purpose digital signal processor or dedicated processing circuitry such as an application-specific integrated circuit. A physical example of image processor 200 is incorporated in the Silicon Optix Inc. sxW1 and REON chip.
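Reduced to a single distortion-parameter axis, the grid-then-interpolate step described above might look like the following sketch (Python/NumPy); a real system would interpolate warp maps over a multi-dimensional parameter space, so this is an illustrative simplification:

```python
import numpy as np

def interpolate_warp_map(param: float, grid_params: np.ndarray,
                         grid_maps: np.ndarray) -> np.ndarray:
    """Linearly blend the two precomputed warp maps that bracket `param`.

    grid_params: strictly increasing 1-D array of sampled parameter values.
    grid_maps:   per-sample warp maps, shape (len(grid_params), H, W, 2).
    """
    idx = int(np.searchsorted(grid_params, param))
    idx = min(max(idx, 1), len(grid_params) - 1)   # clamp to a valid bracket
    p0, p1 = grid_params[idx - 1], grid_params[idx]
    t = (param - p0) / (p1 - p0)
    return (1.0 - t) * grid_maps[idx - 1] + t * grid_maps[idx]
```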
- FIG. 2B shows the flow logic of image processor 200 in one example of the present invention. Digital data flow in this chart is indicated by bold lines, whereas calculated data flow is depicted by thin lines. As seen in this figure, brightness and contrast histograms are measured from the digital data at step (10). Camera optics data, along with display optics and display geometry parameters, are obtained in steps (14) and (16). These data are then combined with the brightness and contrast histograms in step (20), where the image luminance non-uniformity is adjusted. Optics and geometry data from steps (14) and (16), as well as control parameters obtained in step (26), are then gathered at step (30). At this step, all distortion parameters are concatenated. A transformation inverting the effect of the distortions is then computed at step (40). This compensating transformation is then applied to the luminance-adjusted digital image data obtained from step (20).

- As formerly explained, pixel data are also filtered at this step for higher image quality. Accordingly, step (50) constitutes simultaneous coordinate transformation and image filtering. At step (50), distortion-compensated digital image data are used to generate a composite image for display.
- Once the digital image data for a frame are processed and composed,
display controller 280 generates an image from these data. In an example of this invention, the image is formed on display device 120, which could be a direct-view display or a projection display device. In one example, the image from a projection display device 120 is projected through display optics 122 onto viewing surface 160. Display optics 122 are bulk optics components that direct the projected light from display device 120 onto viewing surface 160. Any particular display system has additional optical and geometric distortions that need to be corrected; in this example, image processor 200 concatenates these distortions and corrects for the combined distortion.

- It should be noted that the present invention is not limited to any particular choice of display system; the display device could be a liquid crystal, light-emitting diode, cathode ray tube, electroluminescent, plasma, or any other viable display. The display device could be viewable directly, or it could be projected through the display optics onto an integrated compartment serving as a viewing screen. Preferably, the brightness of the display system is adjusted via ambient light sensors, an independent signal, and a user dimmer switch. The size of the viewing surface is also preferably adjustable by the user, for instance according to the distance from the operator's eyes.

- Controller 101 interfaces with image processor 200. It acquires user parameters from user interface 150, along with inputs from sensors 194, and sends them to image processor 200 for display control. The user parameters are specific to the application corresponding to the different embodiments of the invention. Sensors 194 preferably include ambient light sensors and direct glare sensors; the data from these sensors are used for display adjustment. Control parameters are convoluted with the other parameters to provide the desired image.

- For some applications of this invention, it can be desirable to record the data stream or to send it over a network to different clients. In both cases, it is necessary to compress the video data stream first to meet storage and bandwidth limits.
Compression stage 132, in one example, receives the video stream from image processor 200 and compresses the digital video data. Compressed data are stored in record device 142 for future use. In another embodiment, the data are encoded in encryption stage 134 and sent over the network via network interface 144.

- Having explained the general features of the present invention in detail, a number of example implementations of
vision system 100 will now be discussed. In one embodiment, the present invention provides a vehicle vision system. FIG. 3A shows a conventional automotive vehicle 900 with two side mirrors and an in-cabin mirror covering right view 902, left view 904, and rear view 906. It is well known that traditional mirror positions and fields of view cause blind spots and may introduce distortions due to the wide viewing angle. It is hard to get complete situational awareness from the images provided by the three mirrors.

- In one example of the present invention, illustrated in
FIG. 3B, vehicle 900′ is equipped with camera 110′ and camera 111′ forward of the driver seat. These positions increase coverage by overlapping with the driver's direct forward field of view. Camera 113′ in this example is situated at the rear of the vehicle or in the middle rear of its roof. Specific areas of the different views are selected to provide a continuous image without overlap, which prevents driver confusion.

- In another example, illustrated in
FIG. 3C, vehicle 900″ has camera 110″ and camera 111″ at the front corners of the vehicle and camera 113″ at the center rear. In this example, the emphasis is on views adjustable by user input or by control parameters. For example, turning side view 903″ is made available when the turning signal is engaged. It should be noted that the coverage of this view is a function of user inputs and, in principle, covers the whole area between the dashed lines. Similarly, drive side view 905″ is used in regular driving mode and is also expandable to cover the whole area between the dashed lines. The rear view in this example has two modes, depending on control parameters. When the vehicle is in reverse, reverse rear view 907″ is used for display. This view yields complete coverage of the rear of the vehicle, including objects on the pavement, which assures safer backing up and greatly facilitates parallel parking. When the vehicle is in drive mode, however, a narrower drive rear view 906″ is used. These positions and angles ensure a convenient view of the exterior of the vehicle by the driver. This example significantly facilitates maneuvers like parallel parking and lane changes.
- FIG. 4A shows an example of viewing surface 160 as seen by the driver. Rear view 166 is at the bottom of the viewing surface, while right view 162 and left view 164 are at the top right and top left of viewing surface 160, respectively. FIG. 5A shows an example of viewing surface 160′, where a wider display is used and the side displays are at the two sides of rear view 166′. Image processor 200 has the panoramic conversion and image stitching capability to compose this particular display. FIG. 4B and FIG. 5B present examples of the reconfigured displays of FIG. 4A and FIG. 5A when the vehicle is turning right or in reverse, respectively.
- FIG. 6A is an example illustration of the reconfigured display of viewing surface 160 when the vehicle is changing lanes, moving into the right lane. In this example, the right front and rear of the vehicle are completely viewable, giving situational awareness with respect to everything on the right side of the vehicle. FIG. 6B is an example illustration of the display configured for turning right, when the right turning signal is engaged. Turning side view 903″ of FIG. 3C is now displayed at the top middle portion of the display as turning side view 163″. Rear view 166″ and right view 162″ are at the bottom of the display.
- FIG. 7 shows another example of viewing surface 160. This particular configuration of the display is achievable by gathering visual information from the image acquisition devices, as well as distance determinations from ultrasound and radar sensors integrated with vision system 100. In this illustrated example, the vehicle drawn with bold lines and grey shading contains vision system 100 and is shown with respect to the surrounding setting, namely the other vehicles. This example implementation of the present invention greatly increases situational awareness and facilitates driving the vehicle. Two of the dimensions of each object are captured directly by the cameras. The shapes and sizes of these vehicles and objects are reconstructed by extrapolating the views of the cameras according to the size and shape of the lanes and curbs. A more accurate visual account of the driving scene can be reconstructed through pattern recognition and look-up tables for specific objects in a database.

- It should be noted that the present invention is not limited to these illustrated examples; variations in the number and position of cameras, as well as reconfigurations of the viewing surface, are within the scope of this invention. The projection geometry data, projection optics data, and camera optics data cover all these alternative implementations in
image processor 200.

- Preferably, the display brightness is adjustable via ambient light sensors, via a signal from the vehicle headlights, or via a manual dimmer switch. In addition, the size of the display surface is preferably adjustable by the driver. It is also preferred that the focal length of the displayed images lie substantially within the driver's field of focus, as explained in U.S. Pat. No. 5,949,331, which is hereby incorporated by reference. However, it is also preferred that the display system focal length be adjusted according to the speed of the vehicle, to make sure the images always form within the depth of focus of the driver; at higher speed, the driver naturally focuses at a longer distance. Such adjustments are achieved via
speedometer signal 156 or transmission signal 154.
- FIG. 8 shows an example of user inputs 150 in a vehicle vision system. Signal interface 151 receives signals from different components and interfaces with controller 101. Turning signal 152, for instance, when engaged while turning right, relays a signal to signal interface 151, on to controller 101, and eventually to image processor 200. Once image processor 200 receives this signal, it configures the display so as to put emphasis on right display 162 on viewing surface 160. FIG. 4B shows viewing surface 160 under such conditions: the right view display now occupies half of viewing surface 160, with the other half dedicated to rear view display 166. FIG. 5B shows the same situation in a wide viewing surface embodiment. Other signals evoke other functions.

- For instance, while parking the vehicle, the transmission preferably generates a signal to emphasize the rear view. Other signals include the steering signal, which is engaged when the steering wheel is turned to one side by more than a preset limit; the brake signal, which is engaged when the brake pedal is depressed; the transmission signal, which conveys information about the traveling speed and direction; and the speedometer signal, which is gauged at various set velocities to input different signals. The display system in the present invention adjusts to different situations depending on any of these control parameters and reconfigures automatically to make crucial images available, as sketched below. A variety of control signals could be incorporated with
image processor 200; here we have mentioned only a few examples.
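The signal-driven reconfiguration can be pictured with the sketch below (Python; the signal names, the speed threshold, and the layout labels are illustrative assumptions, not items from the patent):

```python
def select_layout(in_reverse: bool, turn_right: bool, turn_left: bool,
                  speed_kmh: float, low_speed_kmh: float = 15.0) -> str:
    """Map control signals to a display layout, mirroring the behavior described above."""
    if in_reverse:
        return "wide_rear_bottom"       # backing up: wide view of the rear bottom
    if turn_right:
        return "right_corner" if speed_kmh <= low_speed_kmh else "right_side"
    if turn_left:
        return "left_corner" if speed_kmh <= low_speed_kmh else "left_side"
    return "rear_plus_side_views"       # default driving layout

# e.g. select_layout(False, True, False, speed_kmh=60.0) returns "right_side"
```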
- A variety of useful information can be displayed on viewing surface 160, including road maps, roadside information, GPS information, and local weather forecasts. Vision system 100 accesses these data either through downloaded files or through wireless communications at the driver's request. It then superimposes these data on the image frame data to compose the image for the display system. Important warnings are also preferably received by the vision system and are displayed and broadcast via the display system and the audio signal interfaces. Vision system 100 can also be integrated with intelligent highway driving systems by exchanging image data information. It is also integrated with distance determiners and object identifiers as per U.S. Pat. No. 6,498,620 B2, hereby incorporated by reference.

- In one example implementation,
vehicle vision system 100 comprises compression stage 132 and record device 142 to make a continuous record of events as detected by the cameras and audio devices. These features incorporate the ability to save specific segments prior to and after an impact, in addition to driver-initiated events, to be reviewed after mishaps such as accidents, carjackings, and traffic violations. Such information could be used by law enforcement agents, judicial authorities, and insurance companies.

- In a different embodiment, the present invention provides a videoconference vision system covering up to 180° or 360°, depending on the given setting. In this embodiment, audio signals are needed along with the video signals. In the illustrated example of
FIG. 1 of vision system 100, audio device array 117 provides audio inputs in addition to the video inputs from the cameras. The audio signal data are converted to digital data via digitizer 118 and superposed on the digital image frame data. A pan function is provided based on triangulation of the audio signal, to pan to individual speakers or questioners. The pan and zoom functions are provided digitally by image processor 200; in the absence of optical zoom, a digital zoom is provided.
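A minimal sketch of such a digital zoom is given below (Python/NumPy, nearest-neighbour resampling for brevity; the function and parameter names are illustrative):

```python
import numpy as np

def digital_zoom(frame: np.ndarray, cx: int, cy: int, factor: float) -> np.ndarray:
    """Crop a window around (cx, cy) and resample it back to the full frame size."""
    h, w = frame.shape[:2]
    ch, cw = max(int(h / factor), 1), max(int(w / factor), 1)
    y0 = int(np.clip(cy - ch // 2, 0, h - ch))    # keep the crop inside the frame
    x0 = int(np.clip(cx - cw // 2, 0, w - cw))
    yi = y0 + (np.arange(h) * ch) // h            # nearest-neighbour row indices
    xi = x0 + (np.arange(w) * cw) // w            # nearest-neighbour column indices
    return frame[yi][:, xi]
```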
- FIG. 9A relates to an example of a videoconference system. It shows the view of camera 110 set at the center of a conference table. In this particular example, camera 110 has a hemispheric lens system and captures everything above, and including, the table surface. The raw images are upside down, stretched, and disproportionate; the images displayed on viewing surface 160 are, however, distortion free and panoramically converted.
- FIG. 9B shows viewing surface 160, where the speaker's image is blown up and the rest of the conference is visible in a straightened form on the same surface.

- In yet a different embodiment, the present invention provides a surveillance system with motion detection and object recognition.
FIG. 2A shows an example of image processor 200 as used in the surveillance system. Motion detector 270 evaluates successive input image frames, signals an alarm based on a preset level of detected motion, and sends the motion area co-ordinates to distortion convolution stage 250. The input image frames are used to monitor the area outside the current view window. The tracking co-ordinates are used by distortion convolution stage 250 to calculate the view window for corrected display of the detected object. The distortion-corrected object image can be resolution enhanced by motion-compensated and temporal interpolation of the object. Object recognition and classification can also be performed.
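One plausible reduction of the motion-detection step is frame differencing, sketched below (Python/NumPy, for 8-bit grayscale frames; both thresholds are illustrative assumptions). The returned bounding box could seed the view-window calculation described above:

```python
import numpy as np

def detect_motion(prev: np.ndarray, curr: np.ndarray,
                  diff_thresh: int = 25, area_thresh: int = 500):
    """Return (x0, y0, x1, y1) of changed pixels, or None below the preset level."""
    moved = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
    if int(moved.sum()) < area_thresh:           # too little motion: no alarm
        return None
    ys, xs = np.nonzero(moved)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```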
- FIG. 10 shows an illustrated example of a surveillance system, with two cameras installed on the walls of a hallway. Image processor 200 in this embodiment uses motion detector and tracker 270 to track the motion of an object as it passes through the hallway. Viewing surface 160 in this example is shown in FIG. 11A at a certain time and in FIG. 11B at a later time. The passage of the object is thoroughly captured as it moves from the field of view of one camera to the other. The vision system can also perform resolution enhancement by temporal extraction to improve object detail and recognition, and it displays the result at the top of the image in both FIG. 11A and FIG. 11B. The top portion in these figures is provided in full resolution or by digitally zooming in on the moving object. The recognition and tracking of the object are achieved by comparing the detected object in motion in one frame with the following frames. An outline highlights the tracked object for ease of recognition.
Claims (56)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/899,410 US7576767B2 (en) | 2004-07-26 | 2004-07-26 | Panoramic vision system and method |
TW094121880A TWI287402B (en) | 2004-07-26 | 2005-06-29 | Panoramic vision system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/899,410 US7576767B2 (en) | 2004-07-26 | 2004-07-26 | Panoramic vision system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20060017807A1 true US20060017807A1 (en) | 2006-01-26 |
US7576767B2 US7576767B2 (en) | 2009-08-18 |
Family
ID=35656706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/899,410 Active 2027-03-06 US7576767B2 (en) | 2004-07-26 | 2004-07-26 | Panoramic vision system and method |
Country Status (2)
Country | Link |
---|---|
US (1) | US7576767B2 (en) |
TW (1) | TWI287402B (en) |
Cited By (168)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060093190A1 (en) * | 2004-09-17 | 2006-05-04 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US20060204128A1 (en) * | 2005-03-07 | 2006-09-14 | Silverstein D A | System and method for correcting image vignetting |
US20060232672A1 (en) * | 2005-04-18 | 2006-10-19 | Song Sim | Overhead display device with dual panel structure for a vehicle |
US20060268360A1 (en) * | 2005-05-12 | 2006-11-30 | Jones Peter W J | Methods of creating a virtual window |
US20070206556A1 (en) * | 2006-03-06 | 2007-09-06 | Cisco Technology, Inc. | Performance optimization with integrated mobility and MPLS |
DE102006031895A1 (en) * | 2006-07-07 | 2008-01-10 | Siemens Ag | Vehicle interval display in traffic shows the leading vehicle in a changing perspective and color according to the length of gap between it and the own vehicle |
US20080036875A1 (en) * | 2006-08-09 | 2008-02-14 | Jones Peter W | Methods of creating a virtual window |
US20080130951A1 (en) * | 2006-11-30 | 2008-06-05 | Wren Christopher R | System and Method for Modeling Movement of Objects Using Probabilistic Graphs Obtained From Surveillance Data |
WO2008086078A1 (en) * | 2007-01-05 | 2008-07-17 | Silicon Optix, Inc. | Color and geometry distortion correction system and method |
US20080207261A1 (en) * | 2006-11-30 | 2008-08-28 | Paten Navigation Inc. | Panoramic scene capture using a mobile wireless device |
US20080303901A1 (en) * | 2007-06-08 | 2008-12-11 | Variyath Girish S | Tracking an object |
US20090009604A1 (en) * | 2007-07-02 | 2009-01-08 | Nissan Motor Co., Ltd. | Image processing system and method |
US20090040302A1 (en) * | 2005-04-19 | 2009-02-12 | Stuart Thompson | Automated surveillance system |
US20090059018A1 (en) * | 2007-09-05 | 2009-03-05 | Micron Technology, Inc. | Navigation assisted mosaic photography |
US20090109193A1 (en) * | 2007-10-26 | 2009-04-30 | Microsoft Corporation | Detecting ambient light levels in a vision system |
WO2009064504A1 (en) | 2007-11-16 | 2009-05-22 | Tenebraex Corporation | Systems and methods of creating a virtual window |
WO2009102503A2 (en) * | 2008-02-14 | 2009-08-20 | Cisco Technology, Inc. | Adaptive quantization for uniform quality in panoramic videoconferencing |
US20090207233A1 (en) * | 2008-02-14 | 2009-08-20 | Mauchly J William | Method and system for videoconference configuration |
WO2009108039A2 (en) * | 2008-02-27 | 2009-09-03 | Mimos Berhad | Method to detect abnormal events for wide view video images |
EP2101295A1 (en) * | 2008-03-10 | 2009-09-16 | Ricoh Company, Ltd. | Image processing method, image processing device, and image capturing device |
US20090244257A1 (en) * | 2008-03-26 | 2009-10-01 | Macdonald Alan J | Virtual round-table videoconference |
US20090256901A1 (en) * | 2008-04-15 | 2009-10-15 | Mauchly J William | Pop-Up PIP for People Not in Picture |
US20090290033A1 (en) * | 2007-11-16 | 2009-11-26 | Tenebraex Corporation | Systems and methods of creating a virtual window |
US20100082557A1 (en) * | 2008-09-19 | 2010-04-01 | Cisco Technology, Inc. | System and method for enabling communication sessions in a network environment |
US7777783B1 (en) * | 2007-03-23 | 2010-08-17 | Proximex Corporation | Multi-video navigation |
US20100225735A1 (en) * | 2009-03-09 | 2010-09-09 | Cisco Technology, Inc. | System and method for providing three dimensional imaging in a network environment |
US20100225732A1 (en) * | 2009-03-09 | 2010-09-09 | Cisco Technology, Inc. | System and method for providing three dimensional video conferencing in a network environment |
US20100283829A1 (en) * | 2009-05-11 | 2010-11-11 | Cisco Technology, Inc. | System and method for translating communications between participants in a conferencing environment |
US20100302345A1 (en) * | 2009-05-29 | 2010-12-02 | Cisco Technology, Inc. | System and Method for Extending Communications Between Participants in a Conferencing Environment |
CN101951487A (en) * | 2010-08-19 | 2011-01-19 | 深圳大学 | Panoramic image fusion method, system and image processing equipment |
US20110044505A1 (en) * | 2009-08-21 | 2011-02-24 | Korea University Industry And Academy Cooperation | Equipment operation safety monitoring system and method and computer-readable medium recording program for executing the same |
US20110069148A1 (en) * | 2009-09-22 | 2011-03-24 | Tenebraex Corporation | Systems and methods for correcting images in a multi-sensor system |
USD636359S1 (en) | 2010-03-21 | 2011-04-19 | Cisco Technology, Inc. | Video unit with integrated features |
USD636747S1 (en) | 2010-03-21 | 2011-04-26 | Cisco Technology, Inc. | Video unit with integrated features |
US20110106380A1 (en) * | 2009-11-02 | 2011-05-05 | Denso Corporation | Vehicle surrounding monitoring device |
USD637569S1 (en) | 2010-03-21 | 2011-05-10 | Cisco Technology, Inc. | Mounted video unit |
USD637568S1 (en) | 2010-03-21 | 2011-05-10 | Cisco Technology, Inc. | Free-standing video unit |
US20110134245A1 (en) * | 2009-12-07 | 2011-06-09 | Irvine Sensors Corporation | Compact intelligent surveillance system comprising intent recognition |
US20110228096A1 (en) * | 2010-03-18 | 2011-09-22 | Cisco Technology, Inc. | System and method for enhancing video images in a conferencing environment |
US20110234807A1 (en) * | 2007-11-16 | 2011-09-29 | Tenebraex Corporation | Digital security camera |
CN102256154A (en) * | 2011-07-28 | 2011-11-23 | 中国科学院自动化研究所 | Method and system for positioning and playing three-dimensional panoramic video |
US20120002050A1 (en) * | 2010-06-30 | 2012-01-05 | Fujitsu Ten Limited | Image display system |
US8139896B1 (en) * | 2005-03-28 | 2012-03-20 | Grandeye, Ltd. | Tracking moving objects accurately on a wide-angle video |
KR101192800B1 (en) * | 2006-10-26 | 2012-10-18 | 엘지디스플레이 주식회사 | A liquid crystal display device and a method for diving the same |
US20130038681A1 (en) * | 2010-02-08 | 2013-02-14 | Ooo "Sistemy Peredovykh Tekhnologiy" | Method and Device for Determining the Speed of Travel and Coordinates of Vehicles and Subsequently Identifying Same and Automatically Recording Road Traffic Offences |
USD678308S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678320S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678307S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678894S1 (en) | 2010-12-16 | 2013-03-26 | Cisco Technology, Inc. | Display screen with graphical user interface |
US20130100290A1 (en) * | 2010-06-29 | 2013-04-25 | Keiji Sato | Image calibration method and image calibration device |
US20130113876A1 (en) * | 2010-09-29 | 2013-05-09 | Huawei Device Co., Ltd. | Method and Device for Multi-Camera Image Correction |
USD682293S1 (en) | 2010-12-16 | 2013-05-14 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682294S1 (en) | 2010-12-16 | 2013-05-14 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682864S1 (en) | 2010-12-16 | 2013-05-21 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682854S1 (en) | 2010-12-16 | 2013-05-21 | Cisco Technology, Inc. | Display screen for graphical user interface |
WO2013081985A1 (en) * | 2011-11-28 | 2013-06-06 | Magna Electronics, Inc. | Vision system for vehicle |
US20130182077A1 (en) * | 2012-01-17 | 2013-07-18 | David Holz | Enhanced contrast for object detection and characterization by optical imaging |
US20130222593A1 (en) * | 2012-02-22 | 2013-08-29 | Magna Electronics Inc. | Vehicle vision system with multi-paned view |
US8542264B2 (en) | 2010-11-18 | 2013-09-24 | Cisco Technology, Inc. | System and method for managing optics in a video environment |
CN103336268A (en) * | 2013-06-14 | 2013-10-02 | 北京航空航天大学 | Induction type non-contact charging position alignment device and method |
US20130258048A1 (en) * | 2010-12-27 | 2013-10-03 | Panasonic Corporation | Image signal processor and image signal processing method |
US8599934B2 (en) | 2010-09-08 | 2013-12-03 | Cisco Technology, Inc. | System and method for skip coding during video conferencing in a network environment |
US8599865B2 (en) | 2010-10-26 | 2013-12-03 | Cisco Technology, Inc. | System and method for provisioning flows in a mobile network environment |
CN103552507A (en) * | 2013-11-11 | 2014-02-05 | 李良杰 | Panoramic display and record system of external car environment |
US8670019B2 (en) | 2011-04-28 | 2014-03-11 | Cisco Technology, Inc. | System and method for providing enhanced eye gaze in a video conferencing environment |
US8682087B2 (en) | 2011-12-19 | 2014-03-25 | Cisco Technology, Inc. | System and method for depth-guided image filtering in a video conference environment |
CN103679838A (en) * | 2012-09-20 | 2014-03-26 | 上海科沁机电有限公司 | Vehicle monitoring system and method |
US20140085473A1 (en) * | 2011-06-16 | 2014-03-27 | Aisin Seiki Kabushiki Kaisha | In-vehicle camera apparatus |
US8692862B2 (en) | 2011-02-28 | 2014-04-08 | Cisco Technology, Inc. | System and method for selection of video data in a video conference environment |
US20140098229A1 (en) * | 2012-10-05 | 2014-04-10 | Magna Electronics Inc. | Multi-camera image stitching calibration system |
US8699457B2 (en) | 2010-11-03 | 2014-04-15 | Cisco Technology, Inc. | System and method for managing flows in a mobile network environment |
US8723914B2 (en) | 2010-11-19 | 2014-05-13 | Cisco Technology, Inc. | System and method for providing enhanced video processing in a network environment |
US8730297B2 (en) | 2010-11-15 | 2014-05-20 | Cisco Technology, Inc. | System and method for providing camera functions in a video environment |
CN103802729A (en) * | 2014-03-04 | 2014-05-21 | 石春 | Driver assistant system for electric automobile |
CN103886653A (en) * | 2014-01-15 | 2014-06-25 | 柳州市华航电器有限公司 | Panorama image monitoring and driving recording system |
US8786631B1 (en) | 2011-04-30 | 2014-07-22 | Cisco Technology, Inc. | System and method for transferring transparency information in a video environment |
US20140232872A1 (en) * | 2011-09-26 | 2014-08-21 | Magna Electronics Inc. | Vehicle camera image quality improvement in poor visibility conditions by contrast amplification |
WO2014146658A1 (en) * | 2013-03-22 | 2014-09-25 | Conti Temic Microelectronic Gmbh | Method for monitoring a vehicle |
CN104079917A (en) * | 2014-07-14 | 2014-10-01 | 中国地质大学(武汉) | A 360-degree panoramic stereo camera |
US20140300504A1 (en) * | 2013-04-09 | 2014-10-09 | Ford Global Technologies, Llc | Active park assist object detection |
CN104115188A (en) * | 2012-03-01 | 2014-10-22 | 日产自动车株式会社 | Three-dimensional object detection device |
US8896655B2 (en) | 2010-08-31 | 2014-11-25 | Cisco Technology, Inc. | System and method for providing depth adaptive video conferencing |
US20140347709A1 (en) * | 2013-05-21 | 2014-11-27 | Stmicroelectronics, Inc. | Method and apparatus for forming digital images |
US8902244B2 (en) | 2010-11-15 | 2014-12-02 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
CN104182949A (en) * | 2014-08-18 | 2014-12-03 | 武汉大学 | Image inking and fusing method and system based on histogram feature point registration |
KR20140143843A (en) * | 2012-04-12 | 2014-12-17 | 케이엘에이-텐코 코포레이션 | Systems and methods for sample inspection and review |
US8934026B2 (en) | 2011-05-12 | 2015-01-13 | Cisco Technology, Inc. | System and method for video coding in a dynamic environment |
US20150022665A1 (en) * | 2012-02-22 | 2015-01-22 | Magna Electronics Inc. | Vehicle camera system with image manipulation |
US8947493B2 (en) | 2011-11-16 | 2015-02-03 | Cisco Technology, Inc. | System and method for alerting a participant in a video conference |
EP2846532A1 (en) * | 2013-09-06 | 2015-03-11 | Application Solutions (Electronics and Vision) Limited | System, device and method for displaying a harmonised combined image |
US9070019B2 (en) | 2012-01-17 | 2015-06-30 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US9082297B2 (en) | 2009-08-11 | 2015-07-14 | Cisco Technology, Inc. | System and method for verifying parameters in an audiovisual environment |
US9111138B2 (en) | 2010-11-30 | 2015-08-18 | Cisco Technology, Inc. | System and method for gesture interface control |
US9143725B2 (en) | 2010-11-15 | 2015-09-22 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
US9147260B2 (en) | 2010-12-20 | 2015-09-29 | International Business Machines Corporation | Detection and tracking of moving objects |
US9153028B2 (en) | 2012-01-17 | 2015-10-06 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
EP2930070A1 (en) * | 2014-04-08 | 2015-10-14 | Application Solutions (Electronics and Vision) Limited | Monitoring system |
US20150326783A1 (en) * | 2009-01-19 | 2015-11-12 | Microsoft Technology Licensing, Llc | Three dimensional image capture system for imaging building facades using a digital camera, a near-infrared camera, and laser range finder |
US20150341597A1 (en) * | 2014-05-22 | 2015-11-26 | Dr. Ing. H.C. F. Porsche Aktiengesellschaft | Method for presenting a vehicle's environment on a display apparatus; a display apparatus; a system comprising a plurality of image capturing units and a display apparatus; a computer program |
US20150343950A1 (en) * | 2012-12-22 | 2015-12-03 | Audi Ag | Motor vehicle having a camera monitoring system |
CN105197108A (en) * | 2015-01-14 | 2015-12-30 | 河海大学常州校区 | Multi-objective direction-finding system and method based on automotive drive assistant system |
US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
CN105430325A (en) * | 2015-11-03 | 2016-03-23 | 苏交科集团股份有限公司 | Method and system for positioning traffic flow direction rapidly in road monitoring video image |
US9313452B2 (en) | 2010-05-17 | 2016-04-12 | Cisco Technology, Inc. | System and method for providing retracting optics in a video conferencing environment |
US9338394B2 (en) | 2010-11-15 | 2016-05-10 | Cisco Technology, Inc. | System and method for providing enhanced audio in a video environment |
CN105828028A (en) * | 2016-03-04 | 2016-08-03 | 乐卡汽车智能科技(北京)有限公司 | Method for performing safety detection on vehicle bottom and device thereof |
CN105939650A (en) * | 2014-02-14 | 2016-09-14 | 奥林巴斯株式会社 | Endoscope system |
US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
US9501152B2 (en) | 2013-01-15 | 2016-11-22 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US9544563B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation system |
US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
US20170099461A1 (en) * | 2015-10-05 | 2017-04-06 | Polycom, Inc. | Panoramic image placement to minimize full image interference |
US9632658B2 (en) | 2013-01-15 | 2017-04-25 | Leap Motion, Inc. | Dynamic user interactions for display control and scaling responsiveness of display objects |
CN106817539A (en) * | 2016-12-29 | 2017-06-09 | 珠海市魅族科技有限公司 | The image processing method and system of a kind of vehicle |
US9681154B2 (en) | 2012-12-06 | 2017-06-13 | Patent Capital Group | System and method for depth-guided filtering in a video conference environment |
US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
US9702977B2 (en) | 2013-03-15 | 2017-07-11 | Leap Motion, Inc. | Determining positional information of an object in space |
US9747696B2 (en) | 2013-05-17 | 2017-08-29 | Leap Motion, Inc. | Systems and methods for providing normalized parameters of motions of objects in three-dimensional space |
US20170255836A1 (en) * | 2016-03-07 | 2017-09-07 | Vivotek Inc. | Fisheye image display method |
US9804258B2 (en) | 2015-02-27 | 2017-10-31 | The United States Of America, As Represented By The Secretary Of The Army | Surface non-uniformity determination with radio waves |
US9843621B2 (en) | 2013-05-17 | 2017-12-12 | Cisco Technology, Inc. | Calendaring activities based on communication processing |
EP3117406A4 (en) * | 2014-03-13 | 2017-12-20 | Road-IQ, LLC | Device, system and method for aggregating networks and serving data from those networks to computers |
US9900490B2 (en) | 2011-09-21 | 2018-02-20 | Magna Electronics Inc. | Vehicle vision system using image data transmission and power supply via a coaxial cable |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
TWI622297B (en) * | 2016-12-19 | 2018-04-21 | Display method capable of simultaneously displaying rear panorama and turning picture when the vehicle turns | |
US9996638B1 (en) | 2013-10-31 | 2018-06-12 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
CN108769526A (en) * | 2018-06-12 | 2018-11-06 | 广州视源电子科技股份有限公司 | Image adjusting method, device, equipment and storage medium |
EP3366037A4 (en) * | 2016-03-11 | 2018-11-07 | Samsung Electronics Co., Ltd. | Electronic apparatus for providing panorama image and control method thereof |
US10139918B2 (en) | 2013-01-15 | 2018-11-27 | Leap Motion, Inc. | Dynamic, free-space user interactions for machine control |
CN109109790A (en) * | 2018-08-29 | 2019-01-01 | 广东卡仕达电子科技有限公司 | 360 ° of panoramic parking assist systems of one kind and its sound control method |
US10185029B2 (en) | 2016-02-26 | 2019-01-22 | The United States Of America, As Represented By The Secretary Of The Army | Timing and synchronization of radio waves for scanning, detection, and measurement of surface non-uniformity |
CN109478094A (en) * | 2016-07-12 | 2019-03-15 | 奥迪股份公司 | Method for operating a display device of a motor vehicle |
US10232797B2 (en) | 2013-04-29 | 2019-03-19 | Magna Electronics Inc. | Rear vision system for vehicle with dual purpose signal lines |
US20190102869A1 (en) * | 2017-09-29 | 2019-04-04 | Denso Corporation | Apparatus for monitoring surroundings of vehicle and method of calibrating the same |
EP3474532A1 (en) * | 2017-10-17 | 2019-04-24 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
US10281987B1 (en) | 2013-08-09 | 2019-05-07 | Leap Motion, Inc. | Systems and methods of free-space gestural interaction |
CN110103826A (en) * | 2019-03-28 | 2019-08-09 | 上海赫千电子科技有限公司 | A kind of image display and method of electronics rearview mirror |
US10462372B2 (en) * | 2010-08-27 | 2019-10-29 | Sony Corporation | Imaging device, imaging system, and imaging method |
US10567705B2 (en) | 2013-06-10 | 2020-02-18 | Magna Electronics Inc. | Coaxial cable with bidirectional data transmission |
US10573249B2 (en) * | 2016-12-21 | 2020-02-25 | Apical Limited | Display control |
US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
US10620709B2 (en) | 2013-04-05 | 2020-04-14 | Ultrahaptics IP Two Limited | Customized gesture interpretation |
US10645282B2 (en) | 2016-03-11 | 2020-05-05 | Samsung Electronics Co., Ltd. | Electronic apparatus for providing panorama image and control method thereof |
US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US10846942B1 (en) | 2013-08-29 | 2020-11-24 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US10852423B2 (en) | 2018-03-07 | 2020-12-01 | The Government Of The United States, As Represented By The Secretary Of The Army | Vehicle-mounted wave transmission and wave response reception |
US20210150740A1 (en) * | 2019-11-14 | 2021-05-20 | Panasonic Avionics Corporation | Automatic perspective correction for in-flight entertainment (ife) monitors |
CN113165483A (en) * | 2018-10-12 | 2021-07-23 | 靛蓝技术股份有限公司 | Method and apparatus for adjusting a reactive system based on sensory input and vehicle incorporating the same |
US11082631B2 (en) * | 2018-04-03 | 2021-08-03 | Aisin Seiki Kabushiki Kaisha | Image processing device |
US20210237669A1 (en) * | 2020-01-31 | 2021-08-05 | Denso Corporation | Sensor system for vehicle |
US11252338B2 (en) * | 2018-12-14 | 2022-02-15 | Koito Manufacturing Co., Ltd. | Infrared camera system and vehicle |
US11295541B2 (en) * | 2019-02-13 | 2022-04-05 | Tencent America LLC | Method and apparatus of 360 degree camera video processing with targeted view |
CN114795095A (en) * | 2022-04-11 | 2022-07-29 | 重庆高铂瑞骐科技开发有限公司 | Multi-mode anorectal detection endoscope probe and application thereof |
CN115348367A (en) * | 2021-05-12 | 2022-11-15 | 蔚来汽车科技(安徽)有限公司 | Combined computer vision and human vision camera system |
US11720180B2 (en) | 2012-01-17 | 2023-08-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
DE102010035772B4 (en) | 2009-08-27 | 2023-09-28 | Robert Bosch Gmbh | System and method for providing guidance information to a driver of a vehicle |
US11778159B2 (en) | 2014-08-08 | 2023-10-03 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
US11775033B2 (en) | 2013-10-03 | 2023-10-03 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US11875012B2 (en) | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
US11917281B2 (en) | 2017-12-28 | 2024-02-27 | Waymo Llc | Camera system, method and instructions using images captured by a first image sensor and a second image sensor to generate a third image corresponding to a simulated lens having an intermediate focal length |
CN117935127A (en) * | 2024-03-22 | 2024-04-26 | 国任财产保险股份有限公司 | Intelligent damage assessment method and system for panoramic video exploration |
US12032746B2 (en) | 2015-02-13 | 2024-07-09 | Ultrahaptics IP Two Limited | Systems and methods of creating a realistic displacement of a virtual object in virtual reality/augmented reality environments |
US12118134B2 (en) | 2015-02-13 | 2024-10-15 | Ultrahaptics IP Two Limited | Interaction engine for creating a realistic experience in virtual reality/augmented reality environments |
US12115915B2 (en) | 2015-12-17 | 2024-10-15 | Magna Electronics Inc. | Vehicle vision system with electrical noise filtering circuitry |
US12131011B2 (en) | 2013-10-29 | 2024-10-29 | Ultrahaptics IP Two Limited | Virtual interactions for machine control |
US12154238B2 (en) | 2014-05-20 | 2024-11-26 | Ultrahaptics IP Two Limited | Wearable augmented reality devices with object detection and tracking |
US12164694B2 (en) | 2013-10-31 | 2024-12-10 | Ultrahaptics IP Two Limited | Interactions with virtual objects for machine control |
US12260023B2 (en) | 2012-01-17 | 2025-03-25 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1952348A2 (en) * | 2005-11-18 | 2008-08-06 | Koninklijke Philips Electronics N.V. | Method of filtering bending features |
JP4707109B2 (en) * | 2006-03-02 | 2011-06-22 | アルパイン株式会社 | Multi-camera image processing method and apparatus |
DE102006049404A1 (en) * | 2006-10-19 | 2008-04-30 | Carl Zeiss Ag | HMD device |
DE102006052779A1 (en) * | 2006-11-09 | 2008-05-15 | Bayerische Motoren Werke Ag | Method for generating an overall image of the surroundings of a motor vehicle |
JP4450036B2 (en) * | 2007-09-10 | 2010-04-14 | トヨタ自動車株式会社 | Composite image generating apparatus and program |
US7984574B2 (en) * | 2008-03-11 | 2011-07-26 | Deere & Company | Construction vehicle with rear object detection |
US8134589B2 (en) * | 2008-07-17 | 2012-03-13 | Eastman Kodak Company | Zoom by multiple image capture |
JP5344227B2 (en) * | 2009-03-25 | 2013-11-20 | アイシン精機株式会社 | Vehicle periphery monitoring device |
JP4994422B2 (en) * | 2009-05-13 | 2012-08-08 | リズム時計工業株式会社 | Detection system, signal processing method of detection system, and smoke detector |
JP5436086B2 (en) * | 2009-08-03 | 2014-03-05 | アルパイン株式会社 | Vehicle periphery image display device and vehicle periphery image display method |
CN102314259B (en) * | 2010-07-06 | 2015-01-28 | 株式会社理光 | Method for detecting objects in display area and equipment |
TWI494824B (en) * | 2010-08-24 | 2015-08-01 | Quanta Comp Inc | Optical touch system and method |
JP5316550B2 (en) * | 2011-01-05 | 2013-10-16 | 株式会社デンソー | Rear view support system |
EP2687408B1 (en) * | 2011-03-18 | 2017-06-14 | Toyota Jidosha Kabushiki Kaisha | Vehicle periphery monitoring device |
US9105090B2 (en) | 2011-07-13 | 2015-08-11 | Analog Devices, Inc. | Wide-angle lens image correction |
TW201344648A (en) | 2012-04-30 | 2013-11-01 | Tung Thih Electronic Co Ltd | Driving monitoring system and driving monitoring method |
US9058706B2 (en) * | 2012-04-30 | 2015-06-16 | Convoy Technologies Llc | Motor vehicle camera and monitoring system |
US20140009569A1 (en) * | 2012-07-05 | 2014-01-09 | Shih-Yao Chen | Panogramic Camera System for Vehicle Event Data Recorder |
TW201516389A (en) * | 2013-10-30 | 2015-05-01 | Metal Ind Res & Dev Ct | System for examining front pillar structure of vehicle and method thereof |
US9667900B2 (en) * | 2013-12-09 | 2017-05-30 | Optiz, Inc. | Three dimensional system-on-chip image sensor package |
TWI600558B (en) | 2014-04-01 | 2017-10-01 | | Dynamic lane detection system and method
TWI581994B (en) * | 2015-04-20 | 2017-05-11 | 葉振凱 | Device for complex information computing and processing for vehicles
US10486599B2 (en) | 2015-07-17 | 2019-11-26 | Magna Mirrors Of America, Inc. | Rearview vision system for vehicle |
TWI568615B (en) * | 2015-08-12 | 2017-02-01 | | Car door open collision warning system
CA2930738C (en) * | 2015-08-24 | 2017-05-02 | Synaptive Medical (Barbados) Inc. | A medical imaging system for illuminating tissue samples using three-dimensional structured illumination microscopy |
TWI721755B (en) * | 2020-01-10 | 2021-03-11 | 新煒科技有限公司 | Panoramic camera |
CN113114923B (en) | 2020-01-10 | 2022-11-25 | 三赢科技(深圳)有限公司 | Panoramic camera |
2004
- 2004-07-26: US application US10/899,410 filed; granted as US7576767B2 (Active)
2005
- 2005-06-29: TW application TW094121880A filed; granted as TWI287402B (Active)
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5465163A (en) * | 1991-03-18 | 1995-11-07 | Canon Kabushiki Kaisha | Image processing method and apparatus for processing oversized original images and for synthesizing multiple images |
US5185667A (en) * | 1991-05-13 | 1993-02-09 | Telerobotics International, Inc. | Omniview motionless camera orientation system |
US5353061A (en) * | 1992-10-08 | 1994-10-04 | International Business Machines Corporation | System and method for frame-differencing video compression/decompression using perceptually-constant information and image analysis |
US6498620B2 (en) * | 1993-02-26 | 2002-12-24 | Donnelly Corporation | Vision system for a vehicle including an image capture device and a display system having a long focal length |
US5949331A (en) * | 1993-02-26 | 1999-09-07 | Donnelly Corporation | Display enhancements for vehicle vision system |
US5369450A (en) * | 1993-06-01 | 1994-11-29 | The Walt Disney Company | Electronic and computational correction of chromatic aberration associated with an optical system used to view a color video display |
US5699444A (en) * | 1995-03-31 | 1997-12-16 | Synthonics Incorporated | Methods and apparatus for using image data to determine camera location and orientation |
US5978521A (en) * | 1997-09-25 | 1999-11-02 | Cognex Corporation | Machine vision methods using feedback to determine calibration locations of multiple cameras that image a common object |
US20030103141A1 (en) * | 1997-12-31 | 2003-06-05 | Bechtel Jon H. | Vehicle vision system |
US6184781B1 (en) * | 1999-02-02 | 2001-02-06 | Intel Corporation | Rear looking vision system |
US6163361A (en) * | 1999-04-23 | 2000-12-19 | Eastman Kodak Company | Digital camera including a printer for receiving a cartridge having security control circuitry |
US20020027597A1 (en) * | 2000-09-05 | 2002-03-07 | John Sachau | System for mobile videoconferencing |
US6424273B1 (en) * | 2001-03-30 | 2002-07-23 | Koninklijke Philips Electronics N.V. | System to aid a driver to determine whether to change lanes |
US7126616B2 (en) * | 2001-06-12 | 2006-10-24 | Silicon Optix Inc. | Method and system for processing a non-linear two dimensional spatial transformation |
US7239360B2 (en) * | 2002-06-12 | 2007-07-03 | Silicon Optix Inc. | Short throw projection system and method |
Cited By (314)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7956890B2 (en) | 2004-09-17 | 2011-06-07 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US9432632B2 (en) | 2004-09-17 | 2016-08-30 | Proximex Corporation | Adaptive multi-modal integrated biometric identification and surveillance systems |
US20060093190A1 (en) * | 2004-09-17 | 2006-05-04 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US8976237B2 (en) | 2004-09-17 | 2015-03-10 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US20060204128A1 (en) * | 2005-03-07 | 2006-09-14 | Silverstein D A | System and method for correcting image vignetting |
US7634152B2 (en) * | 2005-03-07 | 2009-12-15 | Hewlett-Packard Development Company, L.P. | System and method for correcting image vignetting |
US8139896B1 (en) * | 2005-03-28 | 2012-03-20 | Grandeye, Ltd. | Tracking moving objects accurately on a wide-angle video |
US20060232672A1 (en) * | 2005-04-18 | 2006-10-19 | Song Sim | Overhead display device with dual panel structure for a vehicle |
US20090040302A1 (en) * | 2005-04-19 | 2009-02-12 | Stuart Thompson | Automated surveillance system |
US20060268360A1 (en) * | 2005-05-12 | 2006-11-30 | Jones Peter W J | Methods of creating a virtual window |
US8472415B2 (en) | 2006-03-06 | 2013-06-25 | Cisco Technology, Inc. | Performance optimization with integrated mobility and MPLS |
US20070206556A1 (en) * | 2006-03-06 | 2007-09-06 | Cisco Technology, Inc. | Performance optimization with integrated mobility and MPLS |
DE102006031895A1 (en) * | 2006-07-07 | 2008-01-10 | Siemens Ag | Vehicle distance display that shows the leading vehicle in a changing perspective and color according to the length of the gap between it and the driver's own vehicle
DE102006031895B4 (en) * | 2006-07-07 | 2014-03-27 | Continental Automotive Gmbh | Display system for the visualization of vehicle distances |
US8446509B2 (en) | 2006-08-09 | 2013-05-21 | Tenebraex Corporation | Methods of creating a virtual window |
US20080036875A1 (en) * | 2006-08-09 | 2008-02-14 | Jones Peter W | Methods of creating a virtual window |
KR101192800B1 (en) * | 2006-10-26 | 2012-10-18 | 엘지디스플레이 주식회사 | A liquid crystal display device and a method for driving the same
US8688170B2 (en) * | 2006-11-30 | 2014-04-01 | Patent Navigation, Inc | Panoramic scene capture using a mobile wireless device |
US20080130951A1 (en) * | 2006-11-30 | 2008-06-05 | Wren Christopher R | System and Method for Modeling Movement of Objects Using Probabilistic Graphs Obtained From Surveillance Data |
US8149278B2 (en) * | 2006-11-30 | 2012-04-03 | Mitsubishi Electric Research Laboratories, Inc. | System and method for modeling movement of objects using probabilistic graphs obtained from surveillance data |
US20080207261A1 (en) * | 2006-11-30 | 2008-08-28 | Patent Navigation Inc. | Panoramic scene capture using a mobile wireless device
WO2008086078A1 (en) * | 2007-01-05 | 2008-07-17 | Silicon Optix, Inc. | Color and geometry distortion correction system and method |
US10484611B2 (en) | 2007-03-23 | 2019-11-19 | Sensormatic Electronics, LLC | Multi-video navigation |
US7777783B1 (en) * | 2007-03-23 | 2010-08-17 | Proximex Corporation | Multi-video navigation |
US10326940B2 (en) | 2007-03-23 | 2019-06-18 | Proximex Corporation | Multi-video navigation system |
US9544563B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation system |
US9544496B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation |
US20080303901A1 (en) * | 2007-06-08 | 2008-12-11 | Variyath Girish S | Tracking an object |
US8570373B2 (en) | 2007-06-08 | 2013-10-29 | Cisco Technology, Inc. | Tracking an object utilizing location information associated with a wireless device |
US20090009604A1 (en) * | 2007-07-02 | 2009-01-08 | Nissan Motor Co., Ltd. | Image processing system and method |
EP2012271A3 (en) * | 2007-07-02 | 2009-05-20 | Nissan Motor Co., Ltd. | Image processing system and method |
US20090059018A1 (en) * | 2007-09-05 | 2009-03-05 | Micron Technology, Inc. | Navigation assisted mosaic photography |
US7973779B2 (en) | 2007-10-26 | 2011-07-05 | Microsoft Corporation | Detecting ambient light levels in a vision system |
US20090109193A1 (en) * | 2007-10-26 | 2009-04-30 | Microsoft Corporation | Detecting ambient light levels in a vision system |
US8791984B2 (en) | 2007-11-16 | 2014-07-29 | Scallop Imaging, Llc | Digital security camera |
WO2009064504A1 (en) | 2007-11-16 | 2009-05-22 | Tenebraex Corporation | Systems and methods of creating a virtual window |
EP2223526A4 (en) * | 2007-11-16 | 2011-02-23 | Tenebraex Corp | Systems and methods of creating a virtual window |
US8564640B2 (en) | 2007-11-16 | 2013-10-22 | Tenebraex Corporation | Systems and methods of creating a virtual window |
US20090147071A1 (en) * | 2007-11-16 | 2009-06-11 | Tenebraex Corporation | Systems and methods of creating a virtual window |
EP2223526A1 (en) * | 2007-11-16 | 2010-09-01 | Tenebraex Corporation | Systems and methods of creating a virtual window |
US20090290033A1 (en) * | 2007-11-16 | 2009-11-26 | Tenebraex Corporation | Systems and methods of creating a virtual window |
US20110234807A1 (en) * | 2007-11-16 | 2011-09-29 | Tenebraex Corporation | Digital security camera |
WO2009102503A3 (en) * | 2008-02-14 | 2009-10-08 | Cisco Technology, Inc. | Adaptive quantization for uniform quality in panoramic videoconferencing |
WO2009102503A2 (en) * | 2008-02-14 | 2009-08-20 | Cisco Technology, Inc. | Adaptive quantization for uniform quality in panoramic videoconferencing |
US20090207233A1 (en) * | 2008-02-14 | 2009-08-20 | Mauchly J William | Method and system for videoconference configuration |
US8797377B2 (en) | 2008-02-14 | 2014-08-05 | Cisco Technology, Inc. | Method and system for videoconference configuration |
WO2009108039A2 (en) * | 2008-02-27 | 2009-09-03 | Mimos Berhad | Method to detect abnormal events for wide view video images |
WO2009108039A3 (en) * | 2008-02-27 | 2009-10-22 | Mimos Berhad | Method to detect abnormal events for wide view video images |
US20090231472A1 (en) * | 2008-03-10 | 2009-09-17 | Ryosuke Kasahara | Image processing method, image processing device, and image capturing device |
EP2101295A1 (en) * | 2008-03-10 | 2009-09-16 | Ricoh Company, Ltd. | Image processing method, image processing device, and image capturing device |
US8106973B2 (en) | 2008-03-10 | 2012-01-31 | Ricoh Company, Ltd. | Image processing method, image processing device, and image capturing device |
US20090244257A1 (en) * | 2008-03-26 | 2009-10-01 | Macdonald Alan J | Virtual round-table videoconference |
US8319819B2 (en) | 2008-03-26 | 2012-11-27 | Cisco Technology, Inc. | Virtual round-table videoconference |
US8390667B2 (en) | 2008-04-15 | 2013-03-05 | Cisco Technology, Inc. | Pop-up PIP for people not in picture |
US20090256901A1 (en) * | 2008-04-15 | 2009-10-15 | Mauchly J William | Pop-Up PIP for People Not in Picture |
US8694658B2 (en) | 2008-09-19 | 2014-04-08 | Cisco Technology, Inc. | System and method for enabling communication sessions in a network environment |
US20100082557A1 (en) * | 2008-09-19 | 2010-04-01 | Cisco Technology, Inc. | System and method for enabling communication sessions in a network environment |
US20150326783A1 (en) * | 2009-01-19 | 2015-11-12 | Microsoft Technology Licensing, Llc | Three dimensional image capture system for imaging building facades using a digital camera, a near-infrared camera, and laser range finder |
US10715724B2 (en) * | 2009-01-19 | 2020-07-14 | Microsoft Technology Licensing, Llc | Vehicle-mounted sensor system that includes cameras and laser measurement systems |
US11477374B2 (en) | 2009-01-19 | 2022-10-18 | Microsoft Technology Licensing, Llc | Three dimensional image capture system for imaging building facades using a digital camera, a near-infrared camera, and laser range finder |
US8659637B2 (en) | 2009-03-09 | 2014-02-25 | Cisco Technology, Inc. | System and method for providing three dimensional video conferencing in a network environment |
US20100225735A1 (en) * | 2009-03-09 | 2010-09-09 | Cisco Technology, Inc. | System and method for providing three dimensional imaging in a network environment |
US8477175B2 (en) | 2009-03-09 | 2013-07-02 | Cisco Technology, Inc. | System and method for providing three dimensional imaging in a network environment |
US20100225732A1 (en) * | 2009-03-09 | 2010-09-09 | Cisco Technology, Inc. | System and method for providing three dimensional video conferencing in a network environment |
US20100283829A1 (en) * | 2009-05-11 | 2010-11-11 | Cisco Technology, Inc. | System and method for translating communications between participants in a conferencing environment |
US20100302345A1 (en) * | 2009-05-29 | 2010-12-02 | Cisco Technology, Inc. | System and Method for Extending Communications Between Participants in a Conferencing Environment |
US9204096B2 (en) | 2009-05-29 | 2015-12-01 | Cisco Technology, Inc. | System and method for extending communications between participants in a conferencing environment |
US8659639B2 (en) | 2009-05-29 | 2014-02-25 | Cisco Technology, Inc. | System and method for extending communications between participants in a conferencing environment |
US9082297B2 (en) | 2009-08-11 | 2015-07-14 | Cisco Technology, Inc. | System and method for verifying parameters in an audiovisual environment |
US20110044505A1 (en) * | 2009-08-21 | 2011-02-24 | Korea University Industry And Academy Cooperation | Equipment operation safety monitoring system and method and computer-readable medium recording program for executing the same |
DE102010035772B4 (en) | 2009-08-27 | 2023-09-28 | Robert Bosch Gmbh | System and method for providing guidance information to a driver of a vehicle |
US20110069148A1 (en) * | 2009-09-22 | 2011-03-24 | Tenebraex Corporation | Systems and methods for correcting images in a multi-sensor system |
US20110106380A1 (en) * | 2009-11-02 | 2011-05-05 | Denso Corporation | Vehicle surrounding monitoring device |
US20110134245A1 (en) * | 2009-12-07 | 2011-06-09 | Irvine Sensors Corporation | Compact intelligent surveillance system comprising intent recognition |
US8830299B2 (en) * | 2010-02-08 | 2014-09-09 | OOO “Korporazija Stroy Invest Proekt M” | Method and device for determining the speed of travel and coordinates of vehicles and subsequently identifying same and automatically recording road traffic offences |
US20130038681A1 (en) * | 2010-02-08 | 2013-02-14 | Ooo "Sistemy Peredovykh Tekhnologiy" | Method and Device for Determining the Speed of Travel and Coordinates of Vehicles and Subsequently Identifying Same and Automatically Recording Road Traffic Offences |
US20110228096A1 (en) * | 2010-03-18 | 2011-09-22 | Cisco Technology, Inc. | System and method for enhancing video images in a conferencing environment |
US9225916B2 (en) | 2010-03-18 | 2015-12-29 | Cisco Technology, Inc. | System and method for enhancing video images in a conferencing environment |
USD636747S1 (en) | 2010-03-21 | 2011-04-26 | Cisco Technology, Inc. | Video unit with integrated features |
USD653245S1 (en) | 2010-03-21 | 2012-01-31 | Cisco Technology, Inc. | Video unit with integrated features |
USD637568S1 (en) | 2010-03-21 | 2011-05-10 | Cisco Technology, Inc. | Free-standing video unit |
USD636359S1 (en) | 2010-03-21 | 2011-04-19 | Cisco Technology, Inc. | Video unit with integrated features |
USD637570S1 (en) | 2010-03-21 | 2011-05-10 | Cisco Technology, Inc. | Mounted video unit |
USD655279S1 (en) | 2010-03-21 | 2012-03-06 | Cisco Technology, Inc. | Video unit with integrated features |
USD637569S1 (en) | 2010-03-21 | 2011-05-10 | Cisco Technology, Inc. | Mounted video unit |
US9313452B2 (en) | 2010-05-17 | 2016-04-12 | Cisco Technology, Inc. | System and method for providing retracting optics in a video conferencing environment |
US20130100290A1 (en) * | 2010-06-29 | 2013-04-25 | Keiji Sato | Image calibration method and image calibration device |
US9030561B2 (en) * | 2010-06-29 | 2015-05-12 | Clarion Co., Ltd. | Image calibration method and image calibration device |
US20120002050A1 (en) * | 2010-06-30 | 2012-01-05 | Fujitsu Ten Limited | Image display system |
CN101951487B (en) * | 2010-08-19 | 2012-06-27 | 深圳大学 | Panoramic image fusion method, system and image processing equipment |
CN101951487A (en) * | 2010-08-19 | 2011-01-19 | 深圳大学 | Panoramic image fusion method, system and image processing equipment |
US10462372B2 (en) * | 2010-08-27 | 2019-10-29 | Sony Corporation | Imaging device, imaging system, and imaging method |
EP3654632A1 (en) * | 2010-08-27 | 2020-05-20 | Sony Corporation | Imaging device, imaging system, and imaging method |
EP2424221B1 (en) * | 2010-08-27 | 2020-02-12 | Sony Corporation | Imaging device, imaging system, and imaging method |
US8896655B2 (en) | 2010-08-31 | 2014-11-25 | Cisco Technology, Inc. | System and method for providing depth adaptive video conferencing |
US8599934B2 (en) | 2010-09-08 | 2013-12-03 | Cisco Technology, Inc. | System and method for skip coding during video conferencing in a network environment |
US9172871B2 (en) * | 2010-09-29 | 2015-10-27 | Huawei Device Co., Ltd. | Method and device for multi-camera image correction |
US20130113876A1 (en) * | 2010-09-29 | 2013-05-09 | Huawei Device Co., Ltd. | Method and Device for Multi-Camera Image Correction |
US8599865B2 (en) | 2010-10-26 | 2013-12-03 | Cisco Technology, Inc. | System and method for provisioning flows in a mobile network environment |
US9331948B2 (en) | 2010-10-26 | 2016-05-03 | Cisco Technology, Inc. | System and method for provisioning flows in a mobile network environment |
US8699457B2 (en) | 2010-11-03 | 2014-04-15 | Cisco Technology, Inc. | System and method for managing flows in a mobile network environment |
US9338394B2 (en) | 2010-11-15 | 2016-05-10 | Cisco Technology, Inc. | System and method for providing enhanced audio in a video environment |
US8730297B2 (en) | 2010-11-15 | 2014-05-20 | Cisco Technology, Inc. | System and method for providing camera functions in a video environment |
US8902244B2 (en) | 2010-11-15 | 2014-12-02 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
US9143725B2 (en) | 2010-11-15 | 2015-09-22 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
US8542264B2 (en) | 2010-11-18 | 2013-09-24 | Cisco Technology, Inc. | System and method for managing optics in a video environment |
US8723914B2 (en) | 2010-11-19 | 2014-05-13 | Cisco Technology, Inc. | System and method for providing enhanced video processing in a network environment |
US9111138B2 (en) | 2010-11-30 | 2015-08-18 | Cisco Technology, Inc. | System and method for gesture interface control |
USD678894S1 (en) | 2010-12-16 | 2013-03-26 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682293S1 (en) | 2010-12-16 | 2013-05-14 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678308S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682854S1 (en) | 2010-12-16 | 2013-05-21 | Cisco Technology, Inc. | Display screen for graphical user interface |
USD678320S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678307S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682294S1 (en) | 2010-12-16 | 2013-05-14 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682864S1 (en) | 2010-12-16 | 2013-05-21 | Cisco Technology, Inc. | Display screen with graphical user interface |
US9147260B2 (en) | 2010-12-20 | 2015-09-29 | International Business Machines Corporation | Detection and tracking of moving objects |
US9609181B2 (en) * | 2010-12-27 | 2017-03-28 | Panasonic Intellectual Property Management Co., Ltd. | Image signal processor and method for synthesizing super-resolution images from non-linear distorted images |
US20130258048A1 (en) * | 2010-12-27 | 2013-10-03 | Panasonic Corporation | Image signal processor and image signal processing method |
US8692862B2 (en) | 2011-02-28 | 2014-04-08 | Cisco Technology, Inc. | System and method for selection of video data in a video conference environment |
US8670019B2 (en) | 2011-04-28 | 2014-03-11 | Cisco Technology, Inc. | System and method for providing enhanced eye gaze in a video conferencing environment |
US8786631B1 (en) | 2011-04-30 | 2014-07-22 | Cisco Technology, Inc. | System and method for transferring transparency information in a video environment |
US8934026B2 (en) | 2011-05-12 | 2015-01-13 | Cisco Technology, Inc. | System and method for video coding in a dynamic environment |
US20140085473A1 (en) * | 2011-06-16 | 2014-03-27 | Aisin Seiki Kabushiki Kaisha | In-vehicle camera apparatus |
CN102256154A (en) * | 2011-07-28 | 2011-11-23 | 中国科学院自动化研究所 | Method and system for positioning and playing three-dimensional panoramic video |
US10284764B2 (en) | 2011-09-21 | 2019-05-07 | Magna Electronics Inc. | Vehicle vision using image data transmission and power supply via a coaxial cable |
US12143712B2 (en) | 2011-09-21 | 2024-11-12 | Magna Electronics Inc. | Vehicular vision system using image data transmission and power supply via a coaxial cable |
US11877054B2 (en) | 2011-09-21 | 2024-01-16 | Magna Electronics Inc. | Vehicular vision system using image data transmission and power supply via a coaxial cable |
US10567633B2 (en) | 2011-09-21 | 2020-02-18 | Magna Electronics Inc. | Vehicle vision system using image data transmission and power supply via a coaxial cable |
US10827108B2 (en) | 2011-09-21 | 2020-11-03 | Magna Electronics Inc. | Vehicular vision system using image data transmission and power supply via a coaxial cable |
US11638070B2 (en) | 2011-09-21 | 2023-04-25 | Magna Electronics Inc. | Vehicular vision system using image data transmission and power supply via a coaxial cable |
US9900490B2 (en) | 2011-09-21 | 2018-02-20 | Magna Electronics Inc. | Vehicle vision system using image data transmission and power supply via a coaxial cable |
US11201994B2 (en) | 2011-09-21 | 2021-12-14 | Magna Electronics Inc. | Vehicular multi-camera surround view system using image data transmission and power supply via coaxial cables |
US20140232872A1 (en) * | 2011-09-26 | 2014-08-21 | Magna Electronics Inc. | Vehicle camera image quality improvement in poor visibility conditions by contrast amplification |
US9774790B1 (en) | 2011-09-26 | 2017-09-26 | Magna Electronics Inc. | Method for enhancing vehicle camera image quality |
US9681062B2 (en) * | 2011-09-26 | 2017-06-13 | Magna Electronics Inc. | Vehicle camera image quality improvement in poor visibility conditions by contrast amplification |
US10257432B2 (en) * | 2011-09-26 | 2019-04-09 | Magna Electronics Inc. | Method for enhancing vehicle camera image quality |
US8947493B2 (en) | 2011-11-16 | 2015-02-03 | Cisco Technology, Inc. | System and method for alerting a participant in a video conference |
WO2013081985A1 (en) * | 2011-11-28 | 2013-06-06 | Magna Electronics, Inc. | Vision system for vehicle |
US12100166B2 (en) | 2011-11-28 | 2024-09-24 | Magna Electronics Inc. | Vehicular vision system |
US11634073B2 (en) | 2011-11-28 | 2023-04-25 | Magna Electronics Inc. | Multi-camera vehicular vision system |
US11142123B2 (en) | 2011-11-28 | 2021-10-12 | Magna Electronics Inc. | Multi-camera vehicular vision system |
US8682087B2 (en) | 2011-12-19 | 2014-03-25 | Cisco Technology, Inc. | System and method for depth-guided image filtering in a video conference environment |
US10699155B2 (en) | 2012-01-17 | 2020-06-30 | Ultrahaptics IP Two Limited | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US10366308B2 (en) | 2012-01-17 | 2019-07-30 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US12260023B2 (en) | 2012-01-17 | 2025-03-25 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US9436998B2 (en) | 2012-01-17 | 2016-09-06 | Leap Motion, Inc. | Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections |
US10565784B2 (en) | 2012-01-17 | 2020-02-18 | Ultrahaptics IP Two Limited | Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space |
US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US9495613B2 (en) | 2012-01-17 | 2016-11-15 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging using formed difference images |
US9153028B2 (en) | 2012-01-17 | 2015-10-06 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US9070019B2 (en) | 2012-01-17 | 2015-06-30 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US20130182077A1 (en) * | 2012-01-17 | 2013-07-18 | David Holz | Enhanced contrast for object detection and characterization by optical imaging |
US9945660B2 (en) | 2012-01-17 | 2018-04-17 | Leap Motion, Inc. | Systems and methods of locating a control object appendage in three dimensional (3D) space |
US9934580B2 (en) | 2012-01-17 | 2018-04-03 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US12086327B2 (en) | 2012-01-17 | 2024-09-10 | Ultrahaptics IP Two Limited | Differentiating a detected object from a background using a gaussian brightness falloff pattern |
US11308711B2 (en) | 2012-01-17 | 2022-04-19 | Ultrahaptics IP Two Limited | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US11782516B2 (en) | 2012-01-17 | 2023-10-10 | Ultrahaptics IP Two Limited | Differentiating a detected object from a background using a gaussian brightness falloff pattern |
US9778752B2 (en) | 2012-01-17 | 2017-10-03 | Leap Motion, Inc. | Systems and methods for machine control |
US9626591B2 (en) | 2012-01-17 | 2017-04-18 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
US8693731B2 (en) * | 2012-01-17 | 2014-04-08 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
US9652668B2 (en) | 2012-01-17 | 2017-05-16 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US9672441B2 (en) | 2012-01-17 | 2017-06-06 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US9767345B2 (en) | 2012-01-17 | 2017-09-19 | Leap Motion, Inc. | Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections |
US10767982B2 (en) | 2012-01-17 | 2020-09-08 | Ultrahaptics IP Two Limited | Systems and methods of locating a control object appendage in three dimensional (3D) space |
US9741136B2 (en) | 2012-01-17 | 2017-08-22 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
US11720180B2 (en) | 2012-01-17 | 2023-08-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US9697643B2 (en) | 2012-01-17 | 2017-07-04 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
US10410411B2 (en) | 2012-01-17 | 2019-09-10 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
US11577645B2 (en) * | 2012-02-22 | 2023-02-14 | Magna Electronics Inc. | Vehicular vision system with image manipulation |
US10926702B2 (en) * | 2012-02-22 | 2021-02-23 | Magna Electronics Inc. | Vehicle camera system with image manipulation |
US11007937B2 (en) * | 2012-02-22 | 2021-05-18 | Magna Electronics Inc. | Vehicular display system with multi-paned image display |
US20210268963A1 (en) * | 2012-02-22 | 2021-09-02 | Magna Electronics Inc. | Vehicular display system with multi-paned image display |
US20150022665A1 (en) * | 2012-02-22 | 2015-01-22 | Magna Electronics Inc. | Vehicle camera system with image manipulation |
US10457209B2 (en) * | 2012-02-22 | 2019-10-29 | Magna Electronics Inc. | Vehicle vision system with multi-paned view |
US20130222593A1 (en) * | 2012-02-22 | 2013-08-29 | Magna Electronics Inc. | Vehicle vision system with multi-paned view |
US11607995B2 (en) * | 2012-02-22 | 2023-03-21 | Magna Electronics Inc. | Vehicular display system with multi-paned image display |
US20210178970A1 (en) * | 2012-02-22 | 2021-06-17 | Magna Electronics Inc. | Vehicular vision system with image manipulation |
US20200101900A1 (en) * | 2012-02-22 | 2020-04-02 | Magna Electronics Inc. | Vehicle camera system with image manipulation |
US10493916B2 (en) * | 2012-02-22 | 2019-12-03 | Magna Electronics Inc. | Vehicle camera system with image manipulation |
CN104115188A (en) * | 2012-03-01 | 2014-10-22 | 日产自动车株式会社 | Three-dimensional object detection device |
US10109116B2 (en) | 2012-03-21 | 2018-10-23 | Road-Iq, Llc | Device, system and method for aggregating networks and serving data from those networks to computers |
KR20140143843A (en) * | 2012-04-12 | 2014-12-17 | 케이엘에이-텐코 코포레이션 | Systems and methods for sample inspection and review |
KR102119289B1 (en) * | 2012-04-12 | 2020-06-04 | 케이엘에이 코포레이션 | Systems and methods for sample inspection and review |
US9939386B2 (en) * | 2012-04-12 | 2018-04-10 | KLA-Tencor Corporation | Systems and methods for sample inspection and review
CN103679838A (en) * | 2012-09-20 | 2014-03-26 | 上海科沁机电有限公司 | Vehicle monitoring system and method |
US10284818B2 (en) | 2012-10-05 | 2019-05-07 | Magna Electronics Inc. | Multi-camera image stitching calibration system |
US11265514B2 (en) | 2012-10-05 | 2022-03-01 | Magna Electronics Inc. | Multi-camera calibration method for a vehicle moving along a vehicle assembly line |
US20140098229A1 (en) * | 2012-10-05 | 2014-04-10 | Magna Electronics Inc. | Multi-camera image stitching calibration system |
US9723272B2 (en) * | 2012-10-05 | 2017-08-01 | Magna Electronics Inc. | Multi-camera image stitching calibration system |
US10904489B2 (en) | 2012-10-05 | 2021-01-26 | Magna Electronics Inc. | Multi-camera calibration method for a vehicle moving along a vehicle assembly line |
US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
US9681154B2 (en) | 2012-12-06 | 2017-06-13 | Patent Capital Group | System and method for depth-guided filtering in a video conference environment |
US20150343950A1 (en) * | 2012-12-22 | 2015-12-03 | Audi Ag | Motor vehicle having a camera monitoring system |
US9499099B2 (en) * | 2012-12-22 | 2016-11-22 | Audi Ag | Motor vehicle having a camera monitoring system |
US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
US9626015B2 (en) | 2013-01-08 | 2017-04-18 | Leap Motion, Inc. | Power consumption in motion-capture systems with audio and optical signals |
US10097754B2 (en) | 2013-01-08 | 2018-10-09 | Leap Motion, Inc. | Power consumption in motion-capture systems with audio and optical signals |
US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
US11353962B2 (en) | 2013-01-15 | 2022-06-07 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
US11269481B2 (en) | 2013-01-15 | 2022-03-08 | Ultrahaptics IP Two Limited | Dynamic user interactions for display control and measuring degree of completeness of user gestures |
US10739862B2 (en) | 2013-01-15 | 2020-08-11 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
US10241639B2 (en) | 2013-01-15 | 2019-03-26 | Leap Motion, Inc. | Dynamic user interactions for display control and manipulation of display objects |
US12204695B2 (en) | 2013-01-15 | 2025-01-21 | Ultrahaptics IP Two Limited | Dynamic, free-space user interactions for machine control |
US10139918B2 (en) | 2013-01-15 | 2018-11-27 | Leap Motion, Inc. | Dynamic, free-space user interactions for machine control |
US10042510B2 (en) | 2013-01-15 | 2018-08-07 | Leap Motion, Inc. | Dynamic user interactions for display control and measuring degree of completeness of user gestures |
US10042430B2 (en) | 2013-01-15 | 2018-08-07 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US11243612B2 (en) | 2013-01-15 | 2022-02-08 | Ultrahaptics IP Two Limited | Dynamic, free-space user interactions for machine control |
US10782847B2 (en) | 2013-01-15 | 2020-09-22 | Ultrahaptics IP Two Limited | Dynamic user interactions for display control and scaling responsiveness of display objects |
US10564799B2 (en) | 2013-01-15 | 2020-02-18 | Ultrahaptics IP Two Limited | Dynamic user interactions for display control and identifying dominant gestures |
US9501152B2 (en) | 2013-01-15 | 2016-11-22 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US11874970B2 (en) | 2013-01-15 | 2024-01-16 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
US9696867B2 (en) | 2013-01-15 | 2017-07-04 | Leap Motion, Inc. | Dynamic user interactions for display control and identifying dominant gestures |
US9632658B2 (en) | 2013-01-15 | 2017-04-25 | Leap Motion, Inc. | Dynamic user interactions for display control and scaling responsiveness of display objects |
US11740705B2 (en) | 2013-01-15 | 2023-08-29 | Ultrahaptics IP Two Limited | Method and system for controlling a machine according to a characteristic of a control object |
US10585193B2 (en) | 2013-03-15 | 2020-03-10 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
US9702977B2 (en) | 2013-03-15 | 2017-07-11 | Leap Motion, Inc. | Determining positional information of an object in space |
US11693115B2 (en) | 2013-03-15 | 2023-07-04 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
WO2014146658A1 (en) * | 2013-03-22 | 2014-09-25 | Conti Temic Microelectronic Gmbh | Method for monitoring a vehicle |
US10620709B2 (en) | 2013-04-05 | 2020-04-14 | Ultrahaptics IP Two Limited | Customized gesture interpretation |
US11347317B2 (en) | 2013-04-05 | 2022-05-31 | Ultrahaptics IP Two Limited | Customized gesture interpretation |
US9696420B2 (en) * | 2013-04-09 | 2017-07-04 | Ford Global Technologies, Llc | Active park assist object detection |
US20140300504A1 (en) * | 2013-04-09 | 2014-10-09 | Ford Global Technologies, Llc | Active park assist object detection |
US10452151B2 (en) | 2013-04-26 | 2019-10-22 | Ultrahaptics IP Two Limited | Non-tactile interface systems and methods |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
US11099653B2 (en) | 2013-04-26 | 2021-08-24 | Ultrahaptics IP Two Limited | Machine responsiveness to dynamic user movements and gestures |
US10232797B2 (en) | 2013-04-29 | 2019-03-19 | Magna Electronics Inc. | Rear vision system for vehicle with dual purpose signal lines |
US9843621B2 (en) | 2013-05-17 | 2017-12-12 | Cisco Technology, Inc. | Calendaring activities based on communication processing |
US9747696B2 (en) | 2013-05-17 | 2017-08-29 | Leap Motion, Inc. | Systems and methods for providing normalized parameters of motions of objects in three-dimensional space |
US20140347709A1 (en) * | 2013-05-21 | 2014-11-27 | Stmicroelectronics, Inc. | Method and apparatus for forming digital images |
US12244963B2 (en) | 2013-06-10 | 2025-03-04 | Magna Electronics Inc. | Vehicular vision system |
US11533452B2 (en) | 2013-06-10 | 2022-12-20 | Magna Electronics Inc. | Vehicular multi-camera vision system using coaxial cables with bidirectional data transmission |
US11290679B2 (en) | 2013-06-10 | 2022-03-29 | Magna Electronics Inc. | Vehicular multi-camera vision system using coaxial cables with bidirectional data transmission |
US10567705B2 (en) | 2013-06-10 | 2020-02-18 | Magna Electronics Inc. | Coaxial cable with bidirectional data transmission |
US11025859B2 (en) | 2013-06-10 | 2021-06-01 | Magna Electronics Inc. | Vehicular multi-camera vision system using coaxial cables with bidirectional data transmission |
US11792360B2 (en) | 2013-06-10 | 2023-10-17 | Magna Electronics Inc. | Vehicular vision system using cable with bidirectional data transmission |
CN103336268A (en) * | 2013-06-14 | 2013-10-02 | 北京航空航天大学 | Induction type non-contact charging position alignment device and method |
US11567578B2 (en) | 2013-08-09 | 2023-01-31 | Ultrahaptics IP Two Limited | Systems and methods of free-space gestural interaction |
US10281987B1 (en) | 2013-08-09 | 2019-05-07 | Leap Motion, Inc. | Systems and methods of free-space gestural interaction |
US10831281B2 (en) | 2013-08-09 | 2020-11-10 | Ultrahaptics IP Two Limited | Systems and methods of free-space gestural interaction |
US10846942B1 (en) | 2013-08-29 | 2020-11-24 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US12236528B2 (en) | 2013-08-29 | 2025-02-25 | Ultrahaptics IP Two Limited | Determining spans and span lengths of a control object in a free space gesture control environment |
US11282273B2 (en) | 2013-08-29 | 2022-03-22 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US12086935B2 (en) | 2013-08-29 | 2024-09-10 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US11776208B2 (en) | 2013-08-29 | 2023-10-03 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US11461966B1 (en) | 2013-08-29 | 2022-10-04 | Ultrahaptics IP Two Limited | Determining spans and span lengths of a control object in a free space gesture control environment |
EP2846532A1 (en) * | 2013-09-06 | 2015-03-11 | Application Solutions (Electronics and Vision) Limited | System, device and method for displaying a harmonised combined image |
US9214034B2 (en) | 2013-09-06 | 2015-12-15 | Application Solutions (Electronics and Vision) Ltd. | System, device and method for displaying a harmonized combined image |
US11775033B2 (en) | 2013-10-03 | 2023-10-03 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US12242312B2 (en) | 2013-10-03 | 2025-03-04 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US12131011B2 (en) | 2013-10-29 | 2024-10-29 | Ultrahaptics IP Two Limited | Virtual interactions for machine control |
US12164694B2 (en) | 2013-10-31 | 2024-12-10 | Ultrahaptics IP Two Limited | Interactions with virtual objects for machine control |
US11568105B2 (en) | 2013-10-31 | 2023-01-31 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US9996638B1 (en) | 2013-10-31 | 2018-06-12 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
US11010512B2 (en) | 2013-10-31 | 2021-05-18 | Ultrahaptics IP Two Limited | Improving predictive information for free space gesture control and communication |
US11868687B2 (en) | 2013-10-31 | 2024-01-09 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US12265761B2 (en) | 2013-10-31 | 2025-04-01 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
CN103552507A (en) * | 2013-11-11 | 2014-02-05 | 李良杰 | Panoramic display and record system of external car environment |
US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
CN103886653A (en) * | 2014-01-15 | 2014-06-25 | 柳州市华航电器有限公司 | Panorama image monitoring and driving recording system |
CN105939650A (en) * | 2014-02-14 | 2016-09-14 | 奥林巴斯株式会社 | Endoscope system |
US20160338575A1 (en) * | 2014-02-14 | 2016-11-24 | Olympus Corporation | Endoscope system |
CN103802729A (en) * | 2014-03-04 | 2014-05-21 | 石春 | Driver assistance system for electric automobile
EP3117406A4 (en) * | 2014-03-13 | 2017-12-20 | Road-IQ, LLC | Device, system and method for aggregating networks and serving data from those networks to computers |
EP2930070A1 (en) * | 2014-04-08 | 2015-10-14 | Application Solutions (Electronics and Vision) Limited | Monitoring system |
US12154238B2 (en) | 2014-05-20 | 2024-11-26 | Ultrahaptics IP Two Limited | Wearable augmented reality devices with object detection and tracking |
US20150341597A1 (en) * | 2014-05-22 | 2015-11-26 | Dr. Ing. H.C. F. Porsche Aktiengesellschaft | Method for presenting a vehicle's environment on a display apparatus; a display apparatus; a system comprising a plurality of image capturing units and a display apparatus; a computer program |
CN104079917A (en) * | 2014-07-14 | 2014-10-01 | 中国地质大学(武汉) | A 360-degree panoramic stereo camera |
US12095969B2 (en) | 2014-08-08 | 2024-09-17 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
US11778159B2 (en) | 2014-08-08 | 2023-10-03 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
CN104182949A (en) * | 2014-08-18 | 2014-12-03 | 武汉大学 | Image inking and fusing method and system based on histogram feature point registration |
CN105197108A (en) * | 2015-01-14 | 2015-12-30 | 河海大学常州校区 | Multi-objective direction-finding system and method based on automotive drive assistant system |
US12118134B2 (en) | 2015-02-13 | 2024-10-15 | Ultrahaptics IP Two Limited | Interaction engine for creating a realistic experience in virtual reality/augmented reality environments |
US12032746B2 (en) | 2015-02-13 | 2024-07-09 | Ultrahaptics IP Two Limited | Systems and methods of creating a realistic displacement of a virtual object in virtual reality/augmented reality environments |
US9804258B2 (en) | 2015-02-27 | 2017-10-31 | The United States Of America, As Represented By The Secretary Of The Army | Surface non-uniformity determination with radio waves |
US20170099461A1 (en) * | 2015-10-05 | 2017-04-06 | Polycom, Inc. | Panoramic image placement to minimize full image interference |
US10015446B2 (en) | 2015-10-05 | 2018-07-03 | Polycom, Inc. | Optimizing panoramic image composition |
US9843770B2 (en) * | 2015-10-05 | 2017-12-12 | Polycom, Inc. | Panoramic image placement to minimize full image interference |
US10182208B2 (en) | 2015-10-05 | 2019-01-15 | Polycom, Inc. | Panoramic image placement to minimize full image interference |
CN105430325A (en) * | 2015-11-03 | 2016-03-23 | 苏交科集团股份有限公司 | Method and system for rapidly determining traffic flow direction in road monitoring video images
US12115915B2 (en) | 2015-12-17 | 2024-10-15 | Magna Electronics Inc. | Vehicle vision system with electrical noise filtering circuitry |
US10185029B2 (en) | 2016-02-26 | 2019-01-22 | The United States Of America, As Represented By The Secretary Of The Army | Timing and synchronization of radio waves for scanning, detection, and measurement of surface non-uniformity |
CN105828028A (en) * | 2016-03-04 | 2016-08-03 | 乐卡汽车智能科技(北京)有限公司 | Method for performing safety detection on vehicle bottom and device thereof |
US20170255836A1 (en) * | 2016-03-07 | 2017-09-07 | Vivotek Inc. | Fisheye image display method |
US10002305B2 (en) * | 2016-03-07 | 2018-06-19 | Vivotek Inc. | Fisheye image display method |
US10645282B2 (en) | 2016-03-11 | 2020-05-05 | Samsung Electronics Co., Ltd. | Electronic apparatus for providing panorama image and control method thereof |
EP3366037A4 (en) * | 2016-03-11 | 2018-11-07 | Samsung Electronics Co., Ltd. | Electronic apparatus for providing panorama image and control method thereof |
CN109478094A (en) * | 2016-07-12 | 2019-03-15 | 奥迪股份公司 | Method for operating a display device of a motor vehicle |
TWI622297B (en) * | 2016-12-19 | 2018-04-21 | | Display method capable of simultaneously displaying rear panorama and turning picture when the vehicle turns
US10573249B2 (en) * | 2016-12-21 | 2020-02-25 | Apical Limited | Display control |
CN106817539A (en) * | 2016-12-29 | 2017-06-09 | 珠海市魅族科技有限公司 | An image processing method and system for a vehicle
US10810712B2 (en) * | 2017-09-29 | 2020-10-20 | Denso Corporation | Apparatus for monitoring surroundings of vehicle and method of calibrating the same |
US20190102869A1 (en) * | 2017-09-29 | 2019-04-04 | Denso Corporation | Apparatus for monitoring surroundings of vehicle and method of calibrating the same |
EP3474532A1 (en) * | 2017-10-17 | 2019-04-24 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
US10861194B2 (en) | 2017-10-17 | 2020-12-08 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
US12250447B2 (en) | 2017-12-28 | 2025-03-11 | Waymo Llc | Multiple operating modes to expand dynamic range |
US11917281B2 (en) | 2017-12-28 | 2024-02-27 | Waymo Llc | Camera system, method and instructions using images captured by a first image sensor and a second image sensor to generate a third image corresponding to a simulated lens having an intermediate focal length
US10852423B2 (en) | 2018-03-07 | 2020-12-01 | The Government Of The United States, As Represented By The Secretary Of The Army | Vehicle-mounted wave transmission and wave response reception |
US11082631B2 (en) * | 2018-04-03 | 2021-08-03 | Aisin Seiki Kabushiki Kaisha | Image processing device |
US11875012B2 (en) | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
CN108769526A (en) * | 2018-06-12 | 2018-11-06 | 广州视源电子科技股份有限公司 | Image adjusting method, device, equipment and storage medium |
CN109109790A (en) * | 2018-08-29 | 2019-01-01 | 广东卡仕达电子科技有限公司 | A 360° panoramic parking assist system and voice control method therefor
CN113165483A (en) * | 2018-10-12 | 2021-07-23 | 靛蓝技术股份有限公司 | Method and apparatus for adjusting a reactive system based on sensory input and vehicle incorporating the same |
US11252338B2 (en) * | 2018-12-14 | 2022-02-15 | Koito Manufacturing Co., Ltd. | Infrared camera system and vehicle |
US11295541B2 (en) * | 2019-02-13 | 2022-04-05 | Tencent America LLC | Method and apparatus of 360 degree camera video processing with targeted view |
CN110103826A (en) * | 2019-03-28 | 2019-08-09 | 上海赫千电子科技有限公司 | An image display device and method for an electronic rearview mirror
US20210150740A1 (en) * | 2019-11-14 | 2021-05-20 | Panasonic Avionics Corporation | Automatic perspective correction for in-flight entertainment (IFE) monitors
US11615542B2 (en) * | 2019-11-14 | 2023-03-28 | Panasonic Avionics Corporation | Automatic perspective correction for in-flight entertainment (IFE) monitors |
US11584315B2 (en) * | 2020-01-31 | 2023-02-21 | Denso Corporation | Sensor system for vehicle |
US20210237669A1 (en) * | 2020-01-31 | 2021-08-05 | Denso Corporation | Sensor system for vehicle |
CN115348367A (en) * | 2021-05-12 | 2022-11-15 | 蔚来汽车科技(安徽)有限公司 | Combined computer vision and human vision camera system |
CN114795095A (en) * | 2022-04-11 | 2022-07-29 | 重庆高铂瑞骐科技开发有限公司 | Multi-mode anorectal detection endoscope probe and application thereof |
CN117935127A (en) * | 2024-03-22 | 2024-04-26 | 国任财产保险股份有限公司 | Intelligent damage assessment method and system for panoramic video exploration |
Also Published As
Publication number | Publication date |
---|---|
TW200606048A (en) | 2006-02-16 |
US7576767B2 (en) | 2009-08-18 |
TWI287402B (en) | 2007-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7576767B2 (en) | Panoramic vision system and method | |
JP4543147B2 (en) | Panorama vision system and method | |
JP4491453B2 (en) | Method and apparatus for visualizing the periphery of a vehicle by fusing infrared and visual images depending on the periphery | |
US9445011B2 (en) | Dynamic rearview mirror adaptive dimming overlay through scene brightness estimation | |
US9858639B2 (en) | Imaging surface modeling for camera modeling and virtual view synthesis | |
US20150042799A1 (en) | Object highlighting and sensing in vehicle image display systems | |
CN100462836C (en) | Camera units and devices for monitoring the surroundings of vehicles | |
CN100462835C (en) | Camera units and devices for monitoring the surroundings of vehicles | |
US20140114534A1 (en) | Dynamic rearview mirror display features | |
US20150109444A1 (en) | Vision-based object sensing and highlighting in vehicle image display systems | |
US10506178B2 (en) | Image synthesis device for electronic mirror and method thereof | |
JP4975592B2 (en) | Imaging device | |
JP2008530667A (en) | Method and apparatus for visualizing the periphery of a vehicle by fusing infrared and visible images | |
KR102235951B1 (en) | Imaging Apparatus and method for Automobile | |
WO2020195851A1 (en) | Vehicle-mounted camera device, and image distortion correction method for same | |
WO2013157184A1 (en) | Rearward visibility assistance device for vehicle, and rear visibility assistance method for vehicle | |
WO2017158829A1 (en) | Display control device and display control method | |
US20240217439A1 (en) | Image capturing device, movable apparatus, and storage medium | |
JP7384343B2 (en) | Image processing device, image processing program | |
KR20070049109A (en) | Panoramic vision system and method | |
DE102013220839B4 (en) | A method of dynamically adjusting a brightness of an image of a rear view display device and a corresponding vehicle imaging system | |
JP4795813B2 (en) | Vehicle perimeter monitoring device | |
JP2023046965A (en) | Image processing system, moving device, image processing method, and computer program | |
JP4747670B2 (en) | Image processing device | |
US20240221383A1 (en) | Image capturing apparatus and vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SILICON OPTX, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, LOUIE;VAKILI, MASOUD;REEL/FRAME:015614/0131 Effective date: 20040723 |
|
AS | Assignment |
Owner name: SILICON OPTIX INC., CALIFORNIA Free format text: A CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME ON REEL 015614 FRAME 0131;ASSIGNORS:LEE, LOUIE;VAKILI, MASOUD;REEL/FRAME:015701/0675 Effective date: 20040723 |
|
AS | Assignment |
Owner name: SO DELAWARE CORPORATION, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:SILICON OPTIX INC.;REEL/FRAME:022645/0218 Effective date: 20081021 |
|
AS | Assignment |
Owner name: SO DELAWARE CORPORATION, CALIFORNIA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SERIAL NUMBER 10889410 TO SERIAL NUMBER 10899410 PREVIOUSLY RECORDED ON REEL 022645 FRAME 0218;ASSIGNOR:SILICON OPTIX, INC.;REEL/FRAME:022671/0703 Effective date: 20081021 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SO DELAWARE CORPORATION;REEL/FRAME:023928/0006 Effective date: 20090511 |
|
AS | Assignment |
Owner name: MONTAGE CAPITAL, LLC, CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:025008/0303 Effective date: 20100917 |
|
AS | Assignment |
Owner name: HARRIS & HARRIS GROUP, INC., NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:026036/0934 Effective date: 20110301 |
|
AS | Assignment |
Owner name: BISHOPSGATE HOLDINGS CORPORATION, BERMUDA Free format text: SECURITY AGREEMENT;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:029341/0102 Effective date: 20120510 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MONTAGE CAPITAL, LLC;REEL/FRAME:030183/0179 Effective date: 20130408 |
|
AS | Assignment |
Owner name: BISHOPSGATE HOLDINGS CORPORATION, BERMUDA Free format text: SECURITY AGREEMENT;ASSIGNOR:GEO SEMICONDUCTOR INC;REEL/FRAME:031479/0486 Effective date: 20130809 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: SCOTT LAKE HOLDINGS INC., CANADA Free format text: SECURITY INTEREST;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:044957/0529 Effective date: 20171120 |
|
AS | Assignment |
Owner name: ROADMAP GEO LP III, AS ADMINISTRATIVE AGENT, CANADA Free format text: SECURITY INTEREST;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:044958/0828 Effective date: 20171222 |
|
AS | Assignment |
Owner name: ROADMAP GEO LP III, AS ADMINISTRATIVE AGENT, CANADA Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NO. FROM US12027189 TO PCTUS1227189 PREVIOUSLY RECORDED ON REEL 044958 FRAME 0828. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:045482/0808 Effective date: 20171222 |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BISHOPSGATE HOLDINGS CORPORATION;REEL/FRAME:049286/0365 Effective date: 20190515 |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:180 DEGREE CAPITAL CORP.;REEL/FRAME:049320/0777 Effective date: 20190521 |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ROADMAP GEO LP III;REEL/FRAME:049334/0793 Effective date: 20190515 Owner name: CRESCENT COVE CAPITAL II, LP, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:049337/0040 Effective date: 20190515 Owner name: GEO SEMICONDUCTOR INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SCOTT LAKE HOLDINGS INC.;REEL/FRAME:050340/0516 Effective date: 20190515 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2553); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 12 |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CRESCENT COVE CAPITAL II, LP;REEL/FRAME:060840/0079 Effective date: 20220721 |
|
AS | Assignment |
Owner name: EAST WEST BANK, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:GEO SEMICONDUCTOR INC.;REEL/FRAME:060925/0979 Effective date: 20220726 |
|
AS | Assignment |
Owner name: GEO SEMICONDUCTOR INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:EAST WEST BANK;REEL/FRAME:062955/0700 Effective date: 20230303 |