US10313605B2 - Image processing apparatus and control method thereof for generating high dynamic range image data - Google Patents
- Publication number
- US10313605B2 (application US 15/609,475)
- Authority
- US
- United States
- Prior art keywords
- image data
- gamma
- light region
- image
- applying
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N5/2355
- H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- G06K9/4642
- G06K9/4661
- G06T5/009
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/92 — Dynamic range modification of images or parts thereof based on global image properties
- H04N23/82 — Camera processing pipelines; Components thereof for controlling camera response irrespective of the scene brightness, e.g. gamma correction
- H04N5/202 — Gamma control
- G06T2207/20208 — High dynamic range [HDR] image processing
Definitions
- the present invention relates to a technique of extending the dynamic range of an image captured by a video camera, a digital camera, or the like.
- conventionally, gain adjustment is performed on a plurality of images shot with different exposure times so that an image of a short exposure time (short-duration shot image) matches an image of a long exposure time (long-duration shot image) in characteristic.
- Japanese Patent Laid-Open No. 2002-190983 proposes a method of obtaining an image with a wide dynamic range by subsequently composing images.
- U.S. Patent Application Publication No. 2005/046708 proposes a method of applying a local filter to the surroundings of a pixel of interest, comparing variance values of a local region between images different in exposure time, and increasing a composition ratio at the time of image composition in accordance with the heights of the variance values.
- in this method, it is judged, for each pixel, which of the images having different exposure times could be shot with less pixel saturation, and the pixel from the image with the lower possibility of pixel saturation is used for composition.
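As a rough illustration of that variance-comparison idea, the sketch below computes a per-pixel local variance for a short and a long exposure and weights composition toward the image less likely to be saturated. The window size, the ratio formula, and all function names are my assumptions for illustration, not code from the cited publication.

```python
import numpy as np

def local_variance(img, radius=2):
    """Variance of each pixel's (2*radius+1)^2 neighborhood, using edge
    padding and shifted views of the padded image."""
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    win = 2 * radius + 1
    views = [pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(win) for dx in range(win)]
    return np.stack(views).var(axis=0)

def variance_weight(short_img, long_img, radius=2, eps=1e-6):
    """Composition ratio for the short exposure: higher where its local
    variance dominates, i.e. where the long exposure is likely saturated
    (a saturated region is flat, so its local variance collapses)."""
    v_s = local_variance(short_img, radius)
    v_l = local_variance(long_img, radius)
    return v_s / (v_s + v_l + eps)
```

In a fully blown-out region of the long exposure the variance there is near zero, so the weight approaches 1 and the short exposure is used.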
- a case will be considered here in which a scene to be shot has an extremely large dynamic range, and the brightness distribution is divided into two portions, a light portion and a dark portion. This case is easy to understand by considering, for example, a scene that includes both indoors and outdoors.
- FIG. 14 shows an example of the histogram of a scene in this case.
- a horizontal axis indicates luminance in the scene (image)
- a vertical axis indicates the frequency of a pixel
- a solid line indicates a luminance distribution of the scene
- a dotted line indicates gamma.
- the pixel value of a lighter pixel becomes extremely large, and the pixel values of a composite image also tend to concentrate near the upper and lower limits of the pixel value range. If this is output to a monitor or the like, the gamma indicated by the dotted line in FIG. 14 is applied to the composite image, making the contrast extremely low in the high-luminance portion.
- since the S/N ratio of a sensor decreases at dark pixel values, in a dark portion of a scene the variance value of the local region around a reference pixel is large in the short-duration shot image and small in the long-duration shot image. In a light portion as well, the variance value of the local region in the short-duration shot image becomes large, while in the long-duration shot image pixel saturation occurs and the local variance becomes small.
- the present invention has been made in consideration of the aforementioned problems, and provides a technique of obtaining, from a plurality of images different in exposure amount, an image with a wide dynamic range and a high contrast even after composition.
- an image processing apparatus which generates HDR (High Dynamic Range) image data from a plurality of image data different in exposure amount
- the apparatus comprising: a first composition unit configured to generate first HDR image data by applying first gamma to each of the plurality of image data and composing the plurality of image data after application of the first gamma; a discrimination unit configured to discriminate, based on a light region which satisfies a preset condition, the light region, a dark region, and an intermediate region from one preset image data out of the plurality of image data if the one preset image data includes the light region; and a second composition unit configured to generate second HDR image data by applying second gamma different from the first gamma to one of the plurality of image data, and composing, in accordance with a discrimination result by the discrimination unit, image data obtained by applying the second gamma and the first HDR image data.
- according to the present invention, it becomes possible to obtain, from the plurality of images different in exposure amount, an image with a wide dynamic range and a high contrast even after composition.
- FIG. 1 is a block diagram showing the arrangement of an image processing apparatus according to the first embodiment
- FIG. 2 is a flowchart showing overall image processing according to the first embodiment
- FIG. 3 is a flowchart showing light region determination processing according to the first embodiment
- FIG. 4 is a flowchart showing the sequence of light region pixel decision processing according to the first embodiment
- FIGS. 5A to 5C are views showing an original image and a luminance average image, and a histogram of the luminance average image used in the light region determination processing according to the first embodiment
- FIGS. 6A and 6B are views showing a light region schematic map and a histogram according to the first embodiment
- FIG. 7 shows a schematic histogram of a zero-section count according to the first embodiment
- FIGS. 8A and 8B are views showing ternarization according to the first embodiment
- FIGS. 9A to 9C are views showing luminance image data, a light region map, and a composite map according to the first embodiment
- FIG. 10 is a graph showing a table for generating the composite map according to the first embodiment
- FIGS. 11A and 11B are graphs schematically showing the first gamma and the second gamma according to the first embodiment
- FIG. 12 is a flowchart showing composite map generation processing according to the first embodiment
- FIG. 13 is a graph showing the relationship between the composite map and the composition ratio according to the first embodiment
- FIG. 14 is a graph showing the luminance distribution and the tendency of gamma in a scene with a wide dynamic range
- FIG. 15 is a flowchart showing the sequence of light region determination processing according to the second embodiment.
- FIGS. 16A to 16D are graphs showing gamma curves applied to four normal images
- FIG. 17 is a graph showing an example of the luminance distribution of an HDR image
- FIG. 18 is a flowchart showing image processing according to the third embodiment.
- FIG. 19 is a flowchart showing composition processing of the second gamma image according to the third embodiment.
- First Embodiment. The outline of the first embodiment will be described.
- normal gamma (first gamma)
- basic HDR image data (High Dynamic Range image data)
- this embodiment determines whether, among the four input images, there is an image that can be utilized to improve the contrast of a light portion. If there is no such image, the basic HDR image data is decided as the final HDR image data.
- FIG. 1 is a block diagram showing the arrangement of an image processing apparatus to which the first embodiment is applied.
- An image capturing unit 101 is a unit configured to detect light from an object, and is made of, for example, a zoom lens, a focus lens, a blur correction lens, a stop, a shutter, an optical low-pass filter, an IR cut filter, a color filter, a sensor such as a CMOS or a CCD, and the like.
- An A/D converter 102 is a unit configured to convert a detection amount of the light from the object into a digital value.
- a signal processing unit 103 is a unit configured to process a signal of the above-described digital value and generate a digital image.
- the signal processing unit 103 performs, for example, demosaicing processing, white balance processing, gamma processing, or the like. This signal processing unit 103 also performs image composition processing to be described in this embodiment.
- An encoder unit 105 is a unit configured to perform data compression on the above-described digital image, such as compression into JPEG.
- a media interface unit 106 is an interface to be connected to a PC and other media (for example, a hard disk, a memory card, a CF card, an SD card, and a USB memory).
- a CPU 107 controls all the processing of the respective units described above.
- a ROM 108 and a RAM 109 provide the CPU 107 with programs, data, work areas, and the like needed for the processing.
- the ROM 108 also stores control programs to be described later. Note that if the access speed of the RAM 109 is sufficiently faster than that of the ROM 108 , the programs stored in the ROM 108 may be executed after being temporarily loaded into the RAM 109 .
- An operation unit 111 is a unit which inputs an instruction from a user, and is made of, for example, buttons and a mode dial.
- a character generation unit 112 is a unit configured to generate characters and graphics.
- a D/A converter 104 is a unit configured to convert the above-described digital image into an analog image.
- a display unit 113 is a unit which displays a shot image or an image of a GUI or the like. In general, a CRT, a liquid crystal display, or the like is used for the display unit 113 . Alternatively, the display unit 113 may be a known touch screen. In this case, an input by the touch screen can also be treated as an input by the operation unit 111 .
- An imaging controller 110 is a unit configured to perform the control of an imaging system instructed by the CPU 107 and performs control such as adjusting a focus, opening the shutter, adjusting the stop, and the like. Besides the above-described constituent elements, various constituent elements exist for a system configuration. However, they are not main subjects of the embodiment, and thus a description thereof will be omitted.
- an image input portion ranges from image capturing by the image capturing unit 101 to A/D conversion by the A/D converter 102 , the signal processing unit 103 performs image processing on image data obtained in the image input portion, and the CPU 107 , the ROM 108 , and the RAM 109 are utilized at that time.
- the image data captured by the image capturing unit 101 is image data of a color space having three components R, G, and B, each of which is represented by 8 bits (256 tones). It is to be understood that this is for better understanding of technical contents by showing a concrete example, but this is merely an example.
- This processing is processing when an HDR shooting mode by the operation unit 111 is set, and image capturing processing is performed.
- the CPU 107 controls the imaging controller 110 to change a shutter speed stepwise and to capture four images I 1 to I 4 different in exposure amount.
- the relation of the exposure amounts is I 1 >I 2 >I 3 >I 4 .
- the image I 1 has the largest exposure amount among four images, and thus for the image I 1 , gradation in a dark portion of the object is maintained easily while a light portion of the object is likely to suffer from “blown out highlights”. On the other hand, it can be said that for the image I 4 , gradation in the light portion of the object is maintained easily while the dark portion of the object is likely to suffer from “blocked up shadows”.
- step S 201 the signal processing unit 103 inputs four image data I 1 to I 4 that have been captured by the image capturing unit 101 and have undergone A/D conversion by the A/D converter 102 , and saves them in the RAM 109 .
- step S 202 the signal processing unit 103 applies the first gamma to the input image data I 1 to I 4 and also saves, in the RAM 109 , the image data I 1 to I 4 after the gamma application.
- the first gamma applied here is desirably gamma with less blocked up shadows or blown out highlights appearing while securing a wide dynamic range. Therefore, for example, log gamma as shown in FIG. 11A is applied in the embodiment.
- the log gamma as shown in FIG. 11A yields different output pixel values for the input pixel values obtained from the sensors of the respective exposure images, as in FIGS. 16A to 16D , in accordance with the respective exposure conditions. It is possible, by composing these images after the gamma application, to obtain an HDR composite image whose output pixel value is continuous with respect to the brightness of a shooting scene as shown in FIG. 17 . Note that gamma other than this may be used, as a matter of course.
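A log-shaped first gamma of the kind FIG. 11A suggests can be sketched as follows. The exact curve and its constants are not given in the text, so this particular `log1p` form (and the function name) is only an illustrative stand-in.

```python
import numpy as np

def log_gamma(pixels, max_in=255.0, max_out=255.0):
    """Log-shaped first gamma: lifts shadows and compresses highlights
    so a wide dynamic range fits without blocked-up shadows or blown-out
    highlights. The 1023 / log(1024) constants are illustrative choices
    that map 0 -> 0 and max_in -> max_out exactly."""
    x = np.clip(np.asarray(pixels, dtype=np.float64), 0.0, max_in) / max_in
    return max_out * np.log1p(1023.0 * x) / np.log(1024.0)
```

Per-exposure curves as in FIGS. 16A to 16D could then be obtained by pre-scaling the input by each image's relative exposure before applying the same curve.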
- step S 203 the signal processing unit 103 performs the first image composition processing by using the image data I 1 to I 4 after the application of the first gamma, generates one image data with a wide dynamic range, and saves the generated image data in the RAM 109 .
- the image data with the wide dynamic range generated in this composition processing will be referred to as basic HDR image data hereinafter.
- a range indicating the brightness of the HDR image is divided into three (a value obtained by subtracting 1 from the number of images to be composed). Respective ranges are defined as R 1 , R 2 , and R 3 in ascending order of the brightness.
- the range R 1 is a portion of low brightness, and thus the image data I 1 and I 2 are used to generate image data I (R 1 ) whose gradation is maintained especially in that range R 1 .
- this image data I (R 3 ) is the basic HDR image data described earlier.
- step S 204 it is determined whether the contrast of the light portion region of the object can further be improved from an original image. If it is determined that the contrast cannot further be improved, the basic HDR image data is output as an HDR composition result. On the other hand, if it is determined that the contrast can further be improved, new HDR image data with the contrast of the light portion region further improved from that in the basic HDR image data is generated and output as an HDR composition result. This processing is performed from step S 204 .
- step S 204 the signal processing unit 103 uses the image data I 3 , to which the first gamma is not applied, to determine whether a region having certain brightness and a comparatively large area (details of which will be described later) is included.
- the reason why the image data I 3 is selected as a determination target is as follows.
- Image data having a smaller exposure amount can further maintain the contrast of a lighter region in an object image.
- the image data I 4 having the smallest exposure amount is considered to be good in this regard.
- the present inventor considers that the light region improved in contrast is preferably as wide as possible.
- the lower limit of the target light region is preferably low. Accordingly, the image data I 3 that can maintain even gradation in that intermediate region is selected as the determination target. Note that the user may be able to designate the image data to be determined, or the image data I 2 can be selected as the determination target in some cases.
- step S 205 based on a determination result in step S 204 , the signal processing unit 103 determines whether the contrast of the light region can further be improved. If it is determined that the contrast of the light region cannot further be improved, the process advances to step S 206 in which the signal processing unit 103 outputs, to the encoder unit 105 , the basic HDR image data as composite image data indicating a composition result in this embodiment.
- the composite image data encoded by the encoder unit 105 is output to media via the media I/F 106 or stored in the RAM 109 .
- the signal processing unit 103 advances the process to step S 207 .
- step S 207 based on information indicating the rough position of the light region calculated in step S 204 , the signal processing unit 103 decides a light region pixel indicating which pixel position in an input image is the light region and saves a decided result in the RAM 109 . A decision on this light region pixel will be described in detail later.
- step S 208 the signal processing unit 103 applies the second gamma, different from the first gamma, to the image data I 4 having the smallest exposure amount out of the input image data I 1 to I 4 saved in the RAM 109 . Hereinafter, the image data obtained by applying the second gamma will be denoted as image data L.
- the second gamma has an S-shaped curve with respect to the pixel value of the input image as shown in, for example, FIG. 11B and has the largest (or smallest) output pixel value different from that of the first gamma.
- by applying this curve, the output pixel value in a portion of certain brightness of the object in the scene increases as compared with the case in which the first gamma is applied, making it possible to improve the gradation of the entire HDR image.
- the second gamma may have gamma characteristics other than the S-shaped curve given here.
- the second gamma is gamma in which more gradation levels are assigned to the light portion. Then, the signal processing unit 103 saves, in the RAM 109 , the image data L after the application of the second gamma.
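An S-shaped second gamma of the kind FIG. 11B suggests could look like the logistic curve below. The text only states the S shape, so the logistic form and the `center`/`steepness` parameters are my assumptions.

```python
import numpy as np

def s_curve_gamma(pixels, center=0.5, steepness=8.0, max_val=255.0):
    """S-shaped tone curve (logistic): assigns more output levels around
    `center`, expanding contrast there at the expense of the extremes.
    The curve is rescaled so that 0 -> 0 and max_val -> max_val."""
    x = np.clip(np.asarray(pixels, dtype=np.float64), 0.0, max_val) / max_val
    s = 1.0 / (1.0 + np.exp(-steepness * (x - center)))
    lo = 1.0 / (1.0 + np.exp(steepness * center))           # value at x=0
    hi = 1.0 / (1.0 + np.exp(-steepness * (1.0 - center)))  # value at x=1
    return max_val * (s - lo) / (hi - lo)
```

Shifting `center` toward the dark end would assign the extra gradation levels to the light portion, in the spirit described above.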
- step S 209 the signal processing unit 103 performs composition processing of the basic HDR image data and the image data L obtained by applying the second gamma, and generates the HDR image data with the improved contrast of the light region.
- let I HDR be the HDR image after the contrast of the light region is improved
- step S 210 the signal processing unit 103 outputs, to the encoder unit 105 , the generated HDR image data as composite image data indicating the composition result in this embodiment.
- the composite image data encoded by the encoder unit 105 is output to the media via the media I/F 106 or stored in the RAM 109 .
- the image composition processing in this first embodiment is completed by the above-described processing.
- the light region determination processing roughly checks whether the region which is light and has the comparatively large area exists in the input image.
- the region which is light and has the comparatively large area is referred to as a light region.
- step S 301 the signal processing unit 103 performs color conversion to obtain a luminance component Y on the image data I 3 captured by the image capturing unit 101 .
- a conversion method may be a general transformation from RGB to one luminance component Y.
- Luminance image data generated by this conversion processing is saved in the RAM 109 . Note that the luminance Y is also represented by 8 bits.
- step S 302 the signal processing unit 103 divides the luminance image data generated in step S 301 into a plurality of partial regions (pixel blocks) each having a preset size and obtains an average luminance value Y AV in each partial region.
- Y(x, y) be a luminance value at coordinates (x, y) in one partial region
- the image capturing unit 101 in the first embodiment captures image data of 2,400 pixels in the horizontal direction and 1,400 pixels in the vertical direction, and one partial region has the size of 100×100 pixels.
- the image data is therefore divided into 24×14 partial regions (which can also be regarded as reduced image data of 24×14 pixels), and the average luminance in each partial region is calculated.
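The block averaging of step S 302 can be sketched as follows, assuming (as in the 2,400×1,400 example) that the image dimensions divide evenly by the block size.

```python
import numpy as np

def block_average(lum, block=100):
    """Reduce a luminance image to one average value per block x block
    tile, yielding the 'luminance average image' (e.g. 2400x1400 input
    with block=100 gives a 14x24 reduced image)."""
    h, w = lum.shape
    tiles = lum.reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))
```

The histogram of step S 303 is then simply the histogram of this reduced image.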
- FIG. 5A shows the target image data (I 3 in the embodiment).
- FIG. 5B shows an image (to be referred to as a luminance average image hereinafter) obtained by indicating each partial region with its average value.
- step S 303 the signal processing unit 103 obtains a histogram with respect to the luminance average image obtained by step S 302 .
- the obtained histogram becomes, for example, as shown in FIG. 5C .
- a horizontal axis indicates a luminance value
- a vertical axis indicates a frequency (the number of partial regions).
- step S 304 the signal processing unit 103 obtains a binarization threshold TH from the luminance average image obtained in step S 302 and performs binarization by using that threshold, obtaining a binarized image.
- binarization can be performed by a known algorithm.
- the binarization threshold TH is obtained by the Otsu method or the like, and binarization can be performed by using that threshold TH.
- FIG. 6A shows the binarized image. Note that in the embodiment, in the binarized image, a pixel with luminance equal to or larger than a threshold is “255”, and a pixel with luminance smaller than the threshold is “0”.
- the binarization threshold TH is set in a frequency portion capable of separating the histogram most accurately, as indicated by reference numeral 602 shown in FIG. 6B .
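The Otsu method mentioned above can be sketched directly on the histogram; the code below is the textbook between-class-variance formulation, with the 8-bit value range assumed.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the luminance histogram."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(bins))     # class-0 mean * probability
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # guard empty classes
    return int(np.argmax(sigma_b))
```

Applied to a strongly bimodal luminance average image, the returned bin index lands in the valley between the two modes, as indicated by reference numeral 602.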
- step S 305 the signal processing unit 103 counts zero-sections in the histogram based on the binarization threshold obtained in step S 304 . More specifically, the signal processing unit 103 counts the number of bins in which the frequency is "0", in the direction of increasing lightness with the binarization threshold 602 as a starting point, as shown in FIG. 7 . A bin range in which the frequency is 0 is defined as a zero-section 701 . The signal processing unit 103 then checks whether a zero-section 701 of a certain length (for example, five bins) exists within a predetermined search range starting from the binarization threshold 602 .
- if the zero-section is found, the signal processing unit 103 stores, in the RAM 109 , a determination result that the light region is found in the image data of interest. If the zero-section is not found, the signal processing unit 103 stores, in the RAM 109 , a determination result that there is no light region in the image data of interest.
- each of a method of deciding the field of search of the zero-section and the length of the zero-section described here is merely an example, and they may be decided by using another method in implementing this embodiment.
- the length of the zero-section may be decided depending on the number of partial regions, or the user may be able to set the length as needed.
- the zero-section may be searched for in both the increasing and decreasing directions from the binarization threshold 602 , or on just one side.
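The zero-section search of step S 305 might be sketched as below. The five-bin length matches the example in the text, while the bounded `search_range` parameter is my assumption for the "predetermined range".

```python
def has_zero_section(hist, start_bin, min_len=5, search_range=50):
    """Scan from the binarization threshold toward lighter values and
    report whether a run of `min_len` consecutive empty bins (a
    zero-section) exists within `search_range` bins. The text notes the
    search could also run in both directions."""
    run = 0
    end = min(len(hist), start_bin + search_range)
    for b in range(start_bin, end):
        run = run + 1 if hist[b] == 0 else 0
        if run >= min_len:
            return True
    return False
```

A `True` result corresponds to storing "light region found" for the image of interest.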
- step S 306 based on the binarized image generated in step S 304 , the signal processing unit 103 sets, within the region with the pixel value "0", each pixel contacting a pixel with the value "255" to the pixel value "128", which differs from both "255" and "0", thereby ternarizing the binarized image data.
- FIGS. 8A and 8B show an example of conversion from binarized image data 601 to ternarized image data 801 . They show that, of the pixels with the value "0" (the illustrated black portion) in the binarized image data, those contacting a pixel with the value "255" are set to "128" (the illustrated gray portion).
- each partial region that forms the image data is divided into a light portion region, a dark portion region, and an intermediate region thereof. Note that this ternarized image data is stored in the RAM 109 .
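The ternarization of step S 306 amounts to a one-pixel dilation of the light mask restricted to dark pixels. The 8-neighborhood used below is an assumption, since the text only says the pixel "contacts" a "255" pixel.

```python
import numpy as np

def ternarize(binary):
    """Set "0" pixels that touch a "255" pixel (8-neighborhood) to the
    intermediate value "128", splitting the map into light (255),
    intermediate (128), and dark (0) regions."""
    light = (binary == 255)
    pad = np.pad(light, 1, mode="constant")   # False border
    h, w = light.shape
    dil = np.zeros_like(light)
    for dy in (0, 1, 2):                      # OR of the 3x3 neighborhood
        for dx in (0, 1, 2):
            dil |= pad[dy:dy + h, dx:dx + w]
    out = binary.copy()
    out[(binary == 0) & dil] = 128            # dark pixels touching light
    return out
```

The result is the light region schematic map output in step S 309.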
- step S 309 the signal processing unit 103 outputs the ternarized image data as light region schematic map data. Note that if the zero-section 701 of a predetermined length is not found, the ternarized image data is not generated, and this process thus ends.
- the processing of FIG. 3 is performed on the image data I 3 .
- the processing may be performed as follows.
- the processing of FIG. 3 is performed on all the captured images I 1 to I 4 , and it is determined that there is no light region if none of them has the zero-section of the predetermined length. Then, if even one image data having the zero-section of the predetermined length exists, ternarized image data generated from the image data having the smallest exposure amount among them is output as the light region schematic map data.
- the light region determination processing in step S 204 of FIG. 2 it is determined whether there is the light region, and the light region schematic map data is created.
- the light region pixel decision processing in step S 207 of FIG. 2 light region map data for deciding, for each pixel, a portion serving as the light region in the input image is generated.
- the light region map data is information indicating the ratio at which the basic HDR image data, obtained by applying the first gamma, and the image to which the second gamma is applied are composed, and is held as, for example, an 8-bit monochrome image.
- the basic HDR image data generated by applying the first gamma is used for the dark portion region having the pixel value “0”
- the image data to which the second gamma is applied is used for the light portion region having the pixel value “255”
- a pixel value obtained by composing the basic HDR image data and the image data obtained by applying the second gamma is output for the intermediate region having the pixel value "128". Note that it is only necessary that the three pixel values can be discriminated, and thus the ternarization values need not necessarily be 0, 128, and 255.
- the light region pixel decision processing will be described in detail with reference to a flowchart of FIG. 4 .
- step S 401 the signal processing unit 103 reads the light region schematic map data created in step S 204 .
- step S 402 the light region schematic map data read in step S 401 is enlarged to the same size as the input image, generating the light region map data.
- a nearest neighbor method is used for enlargement so as to prevent the pixels in the light region map data from taking pixel values other than the three values defined earlier.
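Nearest-neighbor enlargement, which guarantees that no new pixel values appear, can be sketched as below; the index arithmetic is one common formulation, not taken from the patent.

```python
import numpy as np

def enlarge_nearest(small, out_h, out_w):
    """Nearest-neighbor enlargement: every output pixel copies exactly
    one input pixel, so only the original values (e.g. 0/128/255)
    can appear in the enlarged light region map."""
    ys = np.arange(out_h) * small.shape[0] // out_h
    xs = np.arange(out_w) * small.shape[1] // out_w
    return small[np.ix_(ys, xs)]
```

Bilinear or bicubic interpolation would instead introduce blended values between 0, 128, and 255, which is what this step avoids.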
- step S 403 the input image (assumed to be the image data I 3 serving as the source of the light region schematic map data) stored in the RAM 109 is read, and luminance image data is generated. First, conversion from RGB of the input image to the luminance Y can be performed by using the general transformation. Then, smoothing processing using a Gaussian filter or the like is performed on the image data constituted by only the obtained luminance component, generating the luminance image data.
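The luminance conversion and smoothing of step S 403 might look like the following. The BT.601 weights and the separable Gaussian kernel parameters are assumptions standing in for "the general transformation" and "a Gaussian filter or the like".

```python
import numpy as np

def rgb_to_luma(rgb):
    """BT.601 luma: one common form of the general RGB-to-Y transform."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def gaussian_smooth(img, sigma=1.0, radius=2):
    """Separable Gaussian smoothing with edge padding, applied first
    along rows and then along columns."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t * t / (2.0 * sigma * sigma))
    k /= k.sum()                                    # normalized kernel
    pad = np.pad(img, ((radius, radius), (0, 0)), mode="edge")
    img = sum(k[i] * pad[i:i + img.shape[0]] for i in range(len(k)))
    pad = np.pad(img, ((0, 0), (radius, radius)), mode="edge")
    return sum(k[i] * pad[:, i:i + img.shape[1]] for i in range(len(k)))
```

The smoothed luminance image is what the later per-pixel decisions read from.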
- step S 404 for each pixel, a location in the input image at which the light region is positioned is decided. More specifically, based on the light region map data generated in step S 402 , it is determined whether each pixel of the luminance image data is the light region pixel, a detail of which will be described later.
- step S 405 data obtained by mapping the light region pixels generated in step S 404 is output as a composite map data and stored in the RAM 109 .
- the light region pixel decision processing in step S 207 is completed by the above-described processing.
- step S 404 The light region pixel determination processing in step S 404 will now be described in detail with reference to FIGS. 9A to 9C and a flowchart of FIG. 12 .
- step S 1201 the signal processing unit 103 first initializes a pixel position at which the light region pixel determination is performed.
- the signal processing unit 103 sets, for example, a pixel at the upper left corner of the input image data as a determination start position.
- the signal processing unit 103 sets a pixel at the upper left corner of a light region map data 902 as a reference start position. Note that in a description below, the respective positions of a determination target pixel and a reference target pixel are updated in the raster scan order.
- step S 1202 it is confirmed whether a value of the light region map data 902 corresponding to a determination target pixel position in the input image is “0”. If the value is 0, the process advances to step S 1208 ; otherwise, the process advances to step S 1203 .
- step S 1203 it is confirmed whether a value of the light region map data 902 corresponding to the determination target pixel position in the input image is “255”. If the value is “255”, the process advances to step S 1207 ; otherwise, the process advances to step S 1204 .
- step S 1204 an output value of the light region pixel is decided. More specifically, as shown in FIG. 10 , a table that takes the luminance image data as an input and decides the value to be output to the composite map data is given, and the output value is decided with reference to that table. Note that at this time, the output value may be 0 for input values equal to or smaller than a certain value, and 255 for input values equal to or larger than another value. This portion is positioned as a boundary in the composite map data; the light region and portions other than it are therefore likely to be mixed, and the determination needs to be made strictly based on the brightness of the input image. Accordingly, the output value of the light region pixel is determined based on the luminance image data.
- step S 1205 it is determined whether the light region determination processing for all the pixels has been completed. If it is determined that the processing has been completed, the process advances to step S 1209 ; otherwise, the process advances to step S 1206 .
- step S 1206 a determination target is moved to a next pixel position to be determined. For example, a position on the right side of the pixel, a left-end pixel one line below, or the like is to be determined.
- step S 1207 the pixel value of the composite map data is set to "255". This portion is a region determined in the light region map data to be definitely a light place, and thus the output value is "255".
- step S 1208 the pixel value of the composite map data is set to "0". In contrast, this portion is a region determined in the light region map data to be definitely a dark place, and thus the output value is "0".
- step S 1209 generated composite map data 903 is output and stored in the RAM 109 .
- the light region pixel determination processing in step S 404 and output processing of the composite map data are completed by the above-described processing.
- the pixel values in the generated composite map data 903 can take values from 0 to 255.
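The loop of steps S 1201 to S 1209 described above can be sketched as follows; the function and variable names are illustrative assumptions, and `table` stands in for the FIG. 10 lookup used in step S 1204:

```python
import numpy as np

def make_composite_map(light_map, luminance, table):
    """Steps S1201-S1209: scan the light region map data in raster
    order and decide each composite map value (names illustrative)."""
    out = np.empty_like(light_map)
    height, width = light_map.shape
    for y in range(height):                 # raster scan order (S1206)
        for x in range(width):
            v = light_map[y, x]
            if v == 0:                      # S1202 -> S1208: definitely dark
                out[y, x] = 0
            elif v == 255:                  # S1203 -> S1207: definitely light
                out[y, x] = 255
            else:                           # S1204: boundary pixel, decided
                out[y, x] = table[luminance[y, x]]  # by the luminance table
    return out
```

The resulting map takes values from 0 to 255, with intermediate values only at boundary pixels.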
- a function G( ) described in step S 203 is basically composite arithmetic processing of two image data as given by:
- x and y are variables representing pixel positions
- A is a composition ratio decided from a value of a pixel position (x, y) in the composite map data
- I Short is an image having a small exposure amount
- I Long is an image having a large exposure amount.
- the luminance component of the image I Short is used to derive the composition ratio A.
- the luminance component of each pixel is obtained from the image I Short , and the smoothing processing is performed on it.
- a Gaussian filter of 5 × 5 can be used for the smoothing processing.
- the output values of the composite map data are referred to with respect to the luminance components after the smoothing processing, as shown in FIG. 13 .
- the value of the composition ratio A in each pixel is decided. In FIG. 13 , the method of deciding A sets thresholds th 2 and th 3 : the output value is set to 0 if the luminance is less than th 2 , to 1 if the luminance is more than th 3 , and to a result obtained by linear interpolation between th 2 and th 3 otherwise.
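The derivation of A described above can be sketched as follows; the separable binomial kernel is an assumed stand-in for the unspecified 5 × 5 Gaussian weights:

```python
import numpy as np

def composition_ratio(luma_short, th2, th3):
    """Smooth the luminance of I_Short with a 5x5 Gaussian filter, then
    map it to A: 0 below th2, 1 above th3, linear in between."""
    # Separable binomial kernel [1 4 6 4 1]/16 as an assumed 5x5 Gaussian.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    padded = np.pad(luma_short.astype(float), 2, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)
    # 0 below th2, 1 above th3, linear interpolation between th2 and th3
    return np.clip((smoothed - th2) / (th3 - th2), 0.0, 1.0)
```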
- step S 203 arithmetic operations are performed on the image data I 1 to I 4 by applying equation (2).
- the thresholds th 2 and th 3 in the arithmetic operations are values decided in advance from the relationship between two images to be composed.
- the composition processing in step S 209 will now be described.
- the image data I Short in equation (2) is the image data L after the application of the second gamma
- the image data I Long is the basic HDR image.
- a “0” region in the composite map data 903 corresponds to a dark region in FIG. 9B , and the basic HDR image data is used to emphasize or maintain the gradation of that region.
- a "255" region corresponds to the light region, and the image data L to which the second gamma is applied is used for that region.
- the basic HDR image data and the image data L are composed based on the composite map data.
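Assuming G( ) is a per-pixel linear blend weighted by the composition ratio A (the text does not reproduce the body of equation (2), so this linear form is an assumption), the composition can be sketched as:

```python
import numpy as np

def compose(i_short, i_long, a):
    """Per-pixel weighted blend: A favors I_Short (here the second-gamma
    image L), 1 - A favors I_Long (here the basic HDR image)."""
    return a * i_short + (1.0 - a) * i_long
```

In step S 209 the basic HDR image plays the role of I Long, so dark regions (A near 0) keep its gradation while light regions take the second-gamma data L.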
- when the HDR image data is generated from the plurality of images different in exposure condition, it is determined whether a predetermined image out of the plurality of images satisfies a preset lightness condition. If the predetermined image does not satisfy the condition, the normal first gamma is applied to each image, generating the normal HDR image. If the predetermined image does satisfy the condition, it becomes possible to generate, from that image and the normal HDR image, an HDR image with a further improved contrast of the light region.
- the outline of the second embodiment will be described.
- two images different in exposure are input, and it is judged, based on either one of the input images, whether to improve the contrast of a light portion. If it is judged that visibility can be improved, normal image composition processing is performed on the images different in exposure, image processing different from the normal one is applied to a selected image, and image composition is further performed. If it is not judged that the contrast can be improved, only the normal image composition processing is performed. Note that this second embodiment describes only the differences from the first embodiment.
- the difference from the first embodiment is the light region determination processing of the shot images in step S 204 and, more particularly, a method of outputting the light region schematic map data in step S 309 .
- FIG. 15 is a flowchart showing the output processing of light region schematic map data according to this second embodiment. The difference from FIG. 3 is that, as described above, there are two images used for composition, and thus the processing is completed in one pass.
- the third embodiment will be described.
- an example will be described in which, assuming four images different in exposure to be one set, a plurality of sets arranged time-serially, that is, a moving image, is processed. In the third embodiment, images for one set of interest are input, an image that satisfies a predetermined condition among them is specified, and it is judged, based on that image, whether the contrast of a light portion can be improved.
- in the third embodiment, normal gamma conversion and gamma conversion under a condition different from the normal one are performed on an image selected when composing images different in exposure, and image composition for improving visibility is further performed in the current frame based on to what extent such image composition was performed in the preceding frame.
- in addition to the image data obtained in the image input portion described in the first embodiment, the RAM 109 holds a composition ratio Ip representing to what extent the second gamma image was composed with the first gamma image when the HDR composite image of the preceding frame was output, and a composition ratio In indicating to what extent the second gamma image is composed in the current frame.
- FIG. 18 shows the sequence of image composition processing in a signal processing unit 103 of an image processing apparatus according to the third embodiment.
- Steps S 1801 to S 1803 are the same as steps S 201 to S 203 of FIG. 2
- steps S 1807 and S 1808 are the same as steps S 207 and S 208 .
- Steps S 1804 and S 1809 are different from the first embodiment.
- step S 1804 the signal processing unit 103 judges whether the contrast of a light region can be improved and decides a numerical value to be substituted in a variable JL to be described later. If judging that the contrast of the light region can be improved, the signal processing unit 103 substitutes 255 in the variable JL. If judging that the contrast of the light region cannot be improved, the signal processing unit 103 substitutes 0 in the variable JL.
- step S 1809 the signal processing unit 103 performs image composition by using the images after the application of the first and second gammas created in steps S 1802 and S 1808 , and generates an image with a wide dynamic range. Based on the result in step S 1804 , image composition is performed by using the composition ratio Ip of the images after the application of the first and second gammas and the HDR composite image in the preceding frame held in the RAM 109 , details of which will be described later.
- FIG. 19 is a flowchart showing the sequence of composition processing of the second gamma image in step S 1806 in the image processing apparatus according to the third embodiment.
- step S 1901 the signal processing unit 103 obtains the composition ratio Ip of the second gamma image in the preceding frame held in the RAM 109 .
- Ip is represented by, for example, an integer from 0 to 255.
- the obtained In is held in the RAM 109 .
- the determination result of whether the light region exists, obtained in step S 203 in the current frame, is used for JL. If the light region exists, 255 obtained in step S 1804 earlier is substituted; if not, 0 is substituted.
- the degree of composition of the second gamma image does not vary largely even if the value of A varies largely for each frame or the result of step S 203 changes for each frame. This makes it possible to obtain an image with little change when the composite image is viewed as a moving image.
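The frame-to-frame update of equation (3) can be sketched as follows; the gain values Kp and Ki are illustrative assumptions:

```python
def update_ratio(ip, jl, kp=0.25, ki=0.05):
    """Equation (3): In = Ip + Kp*(JL - Ip) + Ki*(JL - Ip), clamped to
    the 0-255 range. The Kp and Ki values are illustrative."""
    in_ = ip + kp * (jl - ip) + ki * (jl - ip)
    return max(0.0, min(255.0, in_))

# When a light region appears (JL = 255), In approaches 255 gradually
# instead of jumping, so the composed moving image changes smoothly.
ratio = 0.0
for _ in range(10):
    ratio = update_ratio(ratio, 255)
```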
- according to the third embodiment, it is possible to provide an easy-to-see image without causing a large change in the output image even if whether to compose the second gamma image changes for each frame.
- the effect of improving the contrast is attained with two types of gamma conversion.
- the present invention is not limited to this.
- another tone curve such as a polygonal line gamma may be used.
- the tone curve for improving a contrast desirably assigns many gradation levels to the light region or the dark region whose contrast should be improved.
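A polygonal line gamma of this kind can be sketched as a piecewise linear lookup; the knee points below are illustrative assumptions that allot more output levels to the light region:

```python
import numpy as np

def polygonal_gamma(value, knees):
    """Piecewise linear (polygonal line) tone curve defined by knee
    points; more output levels can be allotted where contrast matters."""
    xs, ys = zip(*knees)
    return np.interp(value, xs, ys)

# Illustrative curve: the light region (input 192-255) gets half of the
# output range, i.e. a steeper slope than the darker part.
light_boost = [(0, 0), (192, 128), (255, 255)]
```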
- the number of image data utilized for composition processing is four in the first embodiment and two in the second embodiment. Needless to say, however, a generalization can be made to N (where N ≥ 2) images.
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
I(R1)=G(I 1 ,I 2)
wherein G(x, y) is a function indicating composition processing of an image x and an image y.
I(R2)=G(I(R1),I 3)
I(R3)=G(I(R2),I 4)
I HDR =G(I(R3),L)
Y AV =ΣY(x,y)/(p×q) (1)
wherein p is the number of pixels in a horizontal direction in the partial region, q is the number of pixels in a vertical direction in the partial region, and Σ represents a sum (integral) function as x is changed to 0, 1, . . . , p−1, and y is changed to 0, 1, . . . , q−1.
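Equation (1) amounts to a plain block average over the partial region, for example:

```python
import numpy as np

def average_luminance(y_block):
    """Equation (1): sum Y(x, y) over the p x q partial region and
    divide by the pixel count p * q."""
    q, p = y_block.shape            # q rows (vertical), p columns (horizontal)
    return float(y_block.sum()) / (p * q)
```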
wherein x and y are variables representing pixel positions, A is a composition ratio decided from a value of a pixel position (x, y) in the composite map data, IShort is an image having a small exposure amount, and ILong is an image having a large exposure amount. A method of generating the composition ratio A will now be described.
In=Ip+Kp*(JL−Ip)+Ki*(JL−Ip) (3)
I Out3(x,y)=In×A/255×I Local(x,y)+(255−In)×((255−A)/255)×I Out1(x,y) (4)
wherein A is a composite map generated in step S204. Then, In is utilized for composition as Ip in a next frame.
Claims (11)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016119150 | 2016-06-15 | ||
JP2016-119150 | 2016-06-15 | ||
JP2017-054872 | 2017-03-21 | ||
JP2017054872A JP6826472B2 (en) | 2016-06-15 | 2017-03-21 | Image processing device and its control method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170366729A1 (en) | 2017-12-21 |
US10313605B2 (en) | 2019-06-04 |
Family
ID=60659930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/609,475 Active US10313605B2 (en) | 2016-06-15 | 2017-05-31 | Image processing apparatus and control method thereof for generating high dynamic range image data |
Country Status (1)
Country | Link |
---|---|
US (1) | US10313605B2 (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2016252993B2 (en) | 2015-04-23 | 2018-01-04 | Apple Inc. | Digital viewfinder user interface for multiple cameras |
US9854156B1 (en) | 2016-06-12 | 2017-12-26 | Apple Inc. | User interface for camera effects |
DK180859B1 (en) | 2017-06-04 | 2022-05-23 | Apple Inc | USER INTERFACE CAMERA EFFECTS |
CN107403422B (en) * | 2017-08-04 | 2020-03-27 | 上海兆芯集成电路有限公司 | Method and system for enhancing image contrast |
US11112964B2 (en) | 2018-02-09 | 2021-09-07 | Apple Inc. | Media capture lock affordance for graphical user interface |
US11722764B2 (en) | 2018-05-07 | 2023-08-08 | Apple Inc. | Creative camera |
US10375313B1 (en) | 2018-05-07 | 2019-08-06 | Apple Inc. | Creative camera |
DK201870623A1 (en) | 2018-09-11 | 2020-04-15 | Apple Inc. | User interfaces for simulated depth effects |
US11128792B2 (en) | 2018-09-28 | 2021-09-21 | Apple Inc. | Capturing and displaying images with multiple focal planes |
US11321857B2 (en) | 2018-09-28 | 2022-05-03 | Apple Inc. | Displaying and editing images with depth information |
CN109819176A (en) * | 2019-01-31 | 2019-05-28 | 深圳达闼科技控股有限公司 | A kind of image pickup method, system, device, electronic equipment and storage medium |
US11770601B2 (en) | 2019-05-06 | 2023-09-26 | Apple Inc. | User interfaces for capturing and managing visual media |
US10645294B1 (en) | 2019-05-06 | 2020-05-05 | Apple Inc. | User interfaces for capturing and managing visual media |
US11706521B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | User interfaces for capturing and managing visual media |
CN112243089B (en) * | 2019-07-17 | 2022-05-13 | 比亚迪股份有限公司 | Camera HDR image effect switch control method and device, rearview mirror, vehicle and storage medium |
WO2021014560A1 (en) | 2019-07-23 | 2021-01-28 | パナソニックIpマネジメント株式会社 | Imaging device |
JP7532040B2 (en) * | 2020-01-28 | 2024-08-13 | キヤノン株式会社 | Imaging device and control method thereof |
US11039074B1 (en) | 2020-06-01 | 2021-06-15 | Apple Inc. | User interfaces for managing media |
US11212449B1 (en) | 2020-09-25 | 2021-12-28 | Apple Inc. | User interfaces for media capture and management |
CN112258417B (en) * | 2020-10-28 | 2023-02-28 | 杭州海康威视数字技术股份有限公司 | Image generation method, device and equipment |
JP7642398B2 (en) * | 2021-02-25 | 2025-03-10 | キヤノン株式会社 | Imaging equipment, surveillance systems |
US11539876B2 (en) | 2021-04-30 | 2022-12-27 | Apple Inc. | User interfaces for altering visual media |
US11778339B2 (en) | 2021-04-30 | 2023-10-03 | Apple Inc. | User interfaces for altering visual media |
US12112024B2 (en) | 2021-06-01 | 2024-10-08 | Apple Inc. | User interfaces for managing media styles |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010007599A1 (en) * | 1999-12-28 | 2001-07-12 | Ryosuke Iguchi | Image processing method and image processing apparatus |
JP2002190983A (en) | 2000-12-20 | 2002-07-05 | Matsushita Electric Ind Co Ltd | Image compositing circuit |
US20040051790A1 (en) * | 2002-06-24 | 2004-03-18 | Masaya Tamaru | Image pickup apparatus and image processing method |
US20050041138A1 (en) * | 2003-07-31 | 2005-02-24 | Nobuo Suzuki | Image composition method, solid-state imaging device, and digital camera |
US20050046708A1 (en) | 2003-08-29 | 2005-03-03 | Chae-Whan Lim | Apparatus and method for improving the quality of a picture having a high illumination difference |
US20100226547A1 (en) * | 2009-03-03 | 2010-09-09 | Microsoft Corporation | Multi-Modal Tone-Mapping of Images |
US20120008006A1 (en) * | 2010-07-08 | 2012-01-12 | Nikon Corporation | Image processing apparatus, electronic camera, and medium storing image processing program |
US20120262600A1 (en) * | 2011-04-18 | 2012-10-18 | Qualcomm Incorporated | White balance optimization with high dynamic range images |
US20140022408A1 (en) * | 2012-07-20 | 2014-01-23 | Canon Kabushiki Kaisha | Image capture apparatus, method of controlling image capture apparatus, and electronic device |
US8965120B2 (en) * | 2012-02-02 | 2015-02-24 | Canon Kabushiki Kaisha | Image processing apparatus and method of controlling the same |
US20160093091A1 (en) | 2014-09-30 | 2016-03-31 | Canon Kabushiki Kaisha | Image processing apparatus and method therefor |
US20170094145A1 (en) * | 2014-06-11 | 2017-03-30 | Sony Corporation | Image capturing apparatus and image capturing method |
US20170206690A1 (en) | 2016-01-20 | 2017-07-20 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
US20180358974A1 (en) * | 2016-03-25 | 2018-12-13 | Fujifilm Corporation | Analog/digital conversion device and control method therefor |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9965607B2 (en) * | 2012-06-29 | 2018-05-08 | Apple Inc. | Expedited biometric validation |
2017
- 2017-05-31 US US15/609,475 patent/US10313605B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20170366729A1 (en) | 2017-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10313605B2 (en) | Image processing apparatus and control method thereof for generating high dynamic range image data | |
US8005271B2 (en) | Face detection method and digital camera | |
US9253374B2 (en) | Image processing apparatus and method for controlling the same for obtaining and applying gamma characteristics to reduce a difference between light and dark areas of an image | |
JP6613697B2 (en) | Image processing apparatus, program, and recording medium | |
JP2013527483A (en) | Autofocus image system | |
EP2911110A2 (en) | Image signal processing apparatus, image signal processing method, and image capturing apparatus | |
US10769800B2 (en) | Moving object detection apparatus, control method for moving object detection apparatus, and non-transitory computer-readable storage medium | |
US9813634B2 (en) | Image processing apparatus and method | |
US20180061029A1 (en) | Image processing apparatus, imaging apparatus, image processing method, and storage medium storing image processing program of image processing apparatus | |
US11336834B2 (en) | Device, control method, and storage medium, with setting exposure condition for each area based on exposure value map | |
US10621703B2 (en) | Image processing apparatus, image processing method, and program | |
US11863873B2 (en) | Image capturing apparatus, method of controlling image capturing apparatus, and storage medium | |
US10482580B2 (en) | Image processing apparatus, image processing method, and program | |
JP2010268426A (en) | Image processing apparatus, image processing method, and program | |
US20180288336A1 (en) | Image processing apparatus | |
US10863103B2 (en) | Setting apparatus, setting method, and storage medium | |
US10832386B2 (en) | Image processing apparatus, image processing method, and storage medium | |
JP6826472B2 (en) | Image processing device and its control method | |
US9710897B2 (en) | Image processing apparatus, image processing method, and recording medium | |
JP2009296210A (en) | Image processor and image processing method | |
JP5219771B2 (en) | Video processing apparatus and video processing apparatus control method | |
JP2016208343A (en) | Image processing system, control method of the same, control program, and imaging apparatus | |
US11696044B2 (en) | Image capturing apparatus, control method, and storage medium | |
JP2012100064A (en) | Imaging apparatus | |
KR20070070771A (en) | Threshold Determination Apparatus and Method Used for Noise Rejection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ITOH, YASUHIRO;REEL/FRAME:043866/0134 Effective date: 20170523 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |