US20150358529A1 - Image processing device, its control method, and storage medium - Google Patents
- Publication number
- US20150358529A1 (U.S. application Ser. No. 14/722,757)
- Authority
- US
- United States
- Prior art keywords
- image
- unit
- generating unit
- light
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N5/23212
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B13/00—Optical objectives specially designed for the purposes specified below
- G02B13/001—Miniaturised objectives for electronic devices, e.g. portable telephones, webcams, PDAs, small digital cameras
- G02B13/0015—Miniaturised objectives for electronic devices, e.g. portable telephones, webcams, PDAs, small digital cameras characterised by the lens design
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0075—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
- G06T7/004
- G06T7/0051
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/571—Depth or shape recovery from multiple images from focus
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/958—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
- H04N23/959—Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
Definitions
- The present invention relates to an image processing device that searches for an object in a shot image and generates an image focused on the object image found by the search, and to its control method.
- LF: light-field
- An advantage of the LF camera is that the viewpoint, the focus position, and the depth of field of an image can be reconstructed after shooting by performing arithmetic processing on the recorded data.
- This reconstruction processing is referred to as development processing of the LF image.
- When the LF image is displayed, the development processing is performed by focusing on a predetermined default focus position. For example, the setting information from the previous display of the LF image is stored as the default focus position, and that setting information is reused the next time.
- An image search (object search) can be performed on the stored images.
- Japanese Patent Application Publication No. 2010-086194 discloses a presentation method of image search results.
- FIG. 16A illustrates an example of the LF image. It shows a state in which an object 122 among three objects 121, 122, and 123 is in focus as the default state. Accordingly, the object specified as the search target (for example, the object 123) is not always in focus even when the development is performed at the default focus position. Therefore, the user needs to look for the search target while readjusting the focus position. For example, the user needs to perform an adjustment operation to focus on the object 123, which is the search target, as shown in the example of the LF image in FIG. 16B, and the operation is thus cumbersome.
- The present invention increases convenience for the user by focusing on the object to be searched for, in an image processing device that processes light-field data.
- A device comprises a first image generating unit configured to generate a first image having a predetermined depth of field from each of a plurality of pieces of light-field data, for each of which the focus state is changeable; a searching unit configured to search for light-field data that include a predetermined object by analyzing the first image generated by the first image generating unit; and a second image generating unit configured to generate, based on the light-field data detected by the searching unit, a second image that has a shallower depth of field than the first image and is focused on the predetermined object.
- Focusing on the object to be searched for increases convenience for the user.
- FIGS. 1A and 1B are schematic diagrams illustrating configuration examples A and B inside an LF camera.
- FIG. 2 is a schematic diagram illustrating a positional relation between microlens array 12 and each pixel in an image sensor 13 .
- FIG. 3 is a schematic diagram illustrating a relation between a travelling direction of incident light rays to microlenses and a recording area in the image sensor 13 .
- FIG. 4 is a schematic diagram illustrating information of light rays that are incident to the image sensor 13 .
- FIG. 5 is a schematic diagram illustrating the refocus arithmetic processing.
- FIG. 6 is a schematic diagram illustrating a relation between differences in incident angles to the microlenses and the recording area in the image sensor 13 .
- FIG. 7 is a schematic diagram illustrating the adjustment processing for the depth of field.
- FIG. 8 is a block diagram illustrating a schema of an image display device according to an embodiment of the present invention.
- FIG. 9 is a block diagram illustrating a schema of an object searching unit of FIG. 8 .
- FIG. 10 is a flowchart illustrating an example of a process of the object searching unit in FIG. 8 .
- FIG. 11 is a block diagram illustrating a schema of a display image generating unit in FIG. 8 .
- FIG. 12 is a flowchart illustrating an example of a process of the display image generating unit in FIG. 8 .
- FIG. 13 is a flowchart illustrating an example of a process of the display image generating unit according to a second embodiment of the present invention.
- FIG. 14 is a flowchart illustrating an example of a process of the display image generating unit according to a third embodiment of the present invention.
- FIG. 15 is a flowchart illustrating an example of a process of the display image generating unit according to a fourth embodiment of the present invention.
- FIGS. 16A and 16B are diagrams illustrating examples of an LF image.
- FIGS. 17A and 17B are diagrams illustrating examples of pan-focus images and examples of notification processing to a user.
- FIG. 1 exemplifies a schematic configuration of the LF camera.
- Light incident from an object onto a microlens array 12 through an imaging lens 11, which constitutes an imaging optical system, is photoelectrically converted by an image sensor 13 to obtain an electric signal.
- imaging data obtained here is LF data.
- An imaging lens 11 projects the light from the object to the microlens array 12 .
- the imaging lens 11 is interchangeable and it is used by being mounted on a main body of an imaging apparatus 10 .
- a user can change the imaging magnification by a zoom operation of the imaging lens 11 .
- The microlens array 12 is configured by arranging microlenses in a grid and is positioned between the imaging lens 11 and the image sensor 13.
- Each of the microlenses configuring the microlens array 12 splits the incident light from the imaging lens 11 and outputs the split light to the image sensor 13 .
- the image sensor 13 configuring an imaging unit is an imaging element having a plurality of pixels and detects light intensity at each of the pixels. The light split through each of the microlenses is incident to each of the pixels in the image sensor 13 that receives the light from the object.
- FIG. 2 is a schematic diagram illustrating a positional relation between the microlens array 12 and each pixel in the image sensor 13 .
- Each of the microlenses of the microlens array 12 is arranged so that it corresponds to a plurality of pixels in the image sensor 13.
- the light that is split by each of the microlenses is incident to each of the pixels in the image sensor 13 and the light intensity (information of light rays) from different directions can be detected at each of the pixels. Additionally, it is possible to know the incident directions (information of direction) of light rays that are incident to each of the pixels in the image sensor 13 through the microlens according to the positional relation between each of the microlenses and each of the pixels in the image sensor 13 .
- information about a travelling direction of the light is detected in accordance with a distribution of the light intensity.
- An image on a focal plane at a different distance from the lens apex plane of the microlens array 12 can be obtained by synthesizing the outputs of the pixels in the image sensor 13 positioned at corresponding eccentricities from the optical axis of each microlens.
- The light rays are represented by a function parameterized by two parallel planes, using parameters such as position, direction, and wavelength.
- the incident direction of the light to each of the pixels is determined according to the arrangement of the plurality of pixels corresponding to each of the microlenses.
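As a concrete illustration of this pixel-to-ray mapping, the sketch below decomposes a raw sensor image into a 4-D array indexed by direction (u, v) and position (x, y). The block size K, the array layout, and the function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

# Sketch: decompose a raw LF sensor image into the 4-D light field
# L(u, v, x, y). Assumes each microlens covers a K x K block of pixels:
# the block index gives the spatial position (x, y), and the pixel's
# offset within the block gives the ray direction (u, v).
def raw_to_lightfield(raw, k):
    ny, nx = raw.shape[0] // k, raw.shape[1] // k
    # reshape to (y, v, x, u), then reorder the axes to (u, v, x, y)
    lf = raw[:ny * k, :nx * k].reshape(ny, k, nx, k)
    return lf.transpose(3, 1, 2, 0)

raw = np.arange(36.0).reshape(6, 6)   # toy 6x6 sensor, K = 3
L = raw_to_lightfield(raw, 3)
print(L.shape)  # (3, 3, 2, 2)
```

With the angular and spatial axes separated this way, each fixed (u, v) slice is a sub-aperture image seen from one viewpoint.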
- the imaging apparatus 10 obtains the information about light rays and the information about directions and performs the sorting of light rays and calculation processing (hereinafter referred to as “reconstruction”), so that the image data at an optional focus position and an optional viewpoint can be generated.
- This information about the light rays and the information about the direction are included in the LF data.
- the focus position in this case allows the user to focus on a desired image area after shooting.
- FIG. 3 is a schematic diagram illustrating a relation between the travelling direction of the incident light rays to the microlenses of the microlens array 12 and the recording area in the image sensor 13 .
- the object image through the imaging lens 11 is formed on the microlens array 12 and the incident light rays to the microlens array 12 are received at the image sensor 13 through the microlens array 12 .
- the light rays that are incident to the microlens array 12 are received at different positions on the image sensor 13 according to their travelling directions, and the object image that becomes a similar figure with respect to the shape of the imaging lens 11 is formed for each of the microlenses.
- FIG. 4 is a schematic diagram illustrating the information of light rays that are incident to the image sensor 13 .
- a description will be given of the light rays received at the image sensor 13 by using FIG. 4 .
- A rectangular coordinate system on the lens surface of the imaging lens 11 is denoted as (u, v), and a rectangular coordinate system on the imaging plane of the image sensor 13 is denoted as (x, y).
- a distance between the lens surface of the imaging lens 11 and the imaging plane of the image sensor 13 is denoted as “F”.
- The intensity of light passing through the imaging lens 11 to the image sensor 13 can be represented by the four-dimensional function L(u, v, x, y) shown in the drawing.
- The four-dimensional function L(u, v, x, y), which holds the travelling direction of the light rays, is recorded in the image sensor 13.
- FIG. 5 is a schematic diagram illustrating the refocus arithmetic processing.
- The intensity of light rays L′(u, v, s, t) in the rectangular coordinate system (s, t) on the refocus plane is represented by formula (1) below.
- The image E′(s, t) obtained on the refocus plane is obtained by integrating the intensity of light rays L′(u, v, s, t) over the lens aperture, and is thus represented by formula (2) below.
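Formulas (1) and (2) appear as images in the original publication and are not reproduced in this text. A plausible reconstruction, following the standard refocusing equations of light-field photography with the refocus plane placed at a distance αF from the lens plane, is:

```latex
% Ray intensity on the refocus plane (s, t), re-parameterized from L(u, v, x, y) -- formula (1):
L'(u, v, s, t) = L\!\left(u,\; v,\; u + \frac{s - u}{\alpha},\; v + \frac{t - v}{\alpha}\right)

% Image on the refocus plane, integrated over the lens aperture (u, v) -- formula (2):
E'(s, t) = \frac{1}{\alpha^{2} F^{2}} \iint L'(u, v, s, t)\, \mathrm{d}u\, \mathrm{d}v
```

Formula (2) integrates the re-parameterized rays over the lens aperture (u, v), which is the synthesis step the description refers to.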
- an image set to an optional focal point can be reconstructed by performing the refocus arithmetic processing by this formula (2).
- Weighting is performed by multiplying each piece of image data forming the image area assigned to each microlens by a weighting coefficient before the refocus arithmetic. For example, when an image having a deep depth of field is desired, the integration processing uses only the information about light rays incident on the receiving plane of the image sensor 13 at relatively small angles. In other words, light rays incident on the image sensor 13 at relatively large angles are excluded from the integration processing by multiplying them by a weighting coefficient of 0 (zero).
- FIG. 6 is a schematic diagram illustrating a relation between the differences in the incident angle to the microlenses and the recording area in the image sensor 13 .
- FIG. 7 is a schematic diagram illustrating the adjustment processing for the depth of field.
- the light rays that are incident to the image sensor 13 at relatively small angles are positioned at a more central area.
- The integration processing is performed by using only the pixel data obtained in the center portion of the area (the hatched areas in the drawing). Through such processing, it is possible to express an image having a deep depth of field, as if the aperture diaphragm of a typical imaging apparatus were narrowed.
- the depth of field of the image can be adjusted after shooting based on the LF data (information of light rays) actually obtained.
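The aperture-narrowing effect described above can be sketched as a weighted integration over the angular coordinates, with a weighting coefficient of 0 for large-angle rays. The array layout and names are illustrative assumptions:

```python
import numpy as np

# Sketch of the depth-of-field adjustment: rays arriving at large angles
# map to (u, v) samples far from the centre of each microlens image, so
# weighting them with 0 before the integration emulates stopping down
# the aperture.
def integrate_with_aperture(lf, aperture_radius):
    # lf has shape (u, v, x, y); (u, v) indexes the ray angle.
    nu, nv = lf.shape[0], lf.shape[1]
    u = np.arange(nu) - (nu - 1) / 2.0
    v = np.arange(nv) - (nv - 1) / 2.0
    # weight = 1 for small-angle rays (near the optical axis), else 0
    w = (u[:, None] ** 2 + v[None, :] ** 2 <= aperture_radius ** 2).astype(float)
    # weighted integration over the (u, v) aperture -> a 2-D image
    return (lf * w[:, :, None, None]).sum(axis=(0, 1)) / w.sum()

lf = np.ones((5, 5, 4, 4))
deep = integrate_with_aperture(lf, 1.0)  # only central rays -> deep depth of field
print(deep.shape)  # (4, 4)
```

A larger `aperture_radius` admits more of the angular samples and reproduces the shallower depth of field of the full aperture.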
- FIG. 8 is a block diagram schematically illustrating a configuration example of an image display device 100 according to the embodiment of the present invention.
- An LF image storing unit 101 receives the data of the LF image shot by the LF camera and stores it in a memory. Additionally, the LF image storing unit 101 transmits the stored LF data according to a request from an LF image developing unit 103 .
- The LF data may be received directly from an LF camera connected to the image display device through a USB (Universal Serial Bus) connection or the like, or the LF data stored on a storage medium, for example an SD card, may be read.
- A search criteria receiving unit 102 receives the search criteria for the object specified by the user through the operation unit and transmits them to an object searching unit 104.
- There is a method of specifying the search criteria by inputting the image to be searched for.
- the LF image developing unit 103 reads out the LF data from the LF image storing unit 101 and performs the predetermined arithmetic processing.
- the predetermined arithmetic processing is necessary processing for developing the LF image, and the processing is performed according to the requests from the object searching unit 104 , a depth map generating unit 105 , and a display image generating unit 106 .
- the object searching unit 104 searches for the image that matches the search criteria of the object received from the search criteria receiving unit 102 , from among all of the images stored in the LF image storing unit 101 .
- the details about the object searching unit 104 will be described below.
- the depth map generating unit 105 executes distance information generating processing according to the request from the display image generating unit 106 .
- A depth map is created by generating the distance information for each pixel in the LF image.
- The distance information corresponding to depth in the image is calculated by generating images from two or more different viewpoints and detecting the positional displacement between the generated images.
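The displacement-based distance estimation described above can be illustrated with a minimal 1-D sketch: two viewpoint images are compared at several trial shifts, and the best-matching shift is taken as the disparity. The brute-force matching and the function name are illustrative, not the patent's method:

```python
import numpy as np

# Sketch: estimate the disparity between two viewpoint signals by
# finding the shift that minimises the absolute matching error.
# Disparity is inversely related to object distance, so it yields the
# relative distance information used for the depth map.
def disparity_1d(left, right, max_shift):
    errors = [np.abs(np.roll(right, s) - left).sum()
              for s in range(max_shift + 1)]
    return int(np.argmin(errors))

# toy example: the right view is the left view shifted by 2 samples
left = np.array([0, 0, 1, 5, 1, 0, 0, 0], dtype=float)
right = np.roll(left, -2)
print(disparity_1d(left, right, 4))  # 2
```

A real depth map would repeat this per pixel (or per block) over 2-D sub-aperture images generated from the LF data.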
- the display image generating unit 106 generates display image data. The details about the display image generating unit 106 will be described below.
- An image displaying unit 107 displays the image on a screen according to the image data generated by the display image generating unit 106 .
- a user adjustment value receiving unit 108 receives information about the focus position specified by the user through the operation unit (specified coordinates information) and transmits it to the display image generating unit 106 .
- the focus position corresponds to a (focus) position on which to focus in the shot LF image.
- To specify the focus position, there is a method in which the user specifies the position of the object to be focused on by using a pointing device, a touch panel, or the like.
- FIG. 9 is a schematic block diagram mainly illustrating the configuration of the object searching unit 104.
- the object searching unit 104 includes a search image generating unit 201 and a feature value comparing unit 202 .
- The search image generating unit 201 requests the LF image developing unit 103 to develop the LF data obtained from the LF image storing unit 101. Although the focus position in the LF image is changeable during development, an object at an unfocused location (a blurred object) cannot be detected when the object search is performed on an image developed by focusing on a specific position. For example, it is difficult to detect the objects 121 and 123 in the state shown in FIG. 16A. Accordingly, the search image generating unit 201 requests the LF image developing unit 103 to generate an image that has the maximum depth of field and focuses on all of the objects in the image (a pan-focus image). The LF image developing unit 103 generates the pan-focus image data as the image data used to search for the object.
- FIG. 17A illustrates an example of the pan-focus image that focuses on all of the objects 121 , 122 and 123 .
- A feature value comparing unit 202 analyzes the images received from the search criteria receiving unit 102 and the search image generating unit 201, and detects the object.
- the feature value comparing unit 202 calculates feature values of the detected object and compares them.
- the image data processed by the feature value comparing unit 202 is image data that has been developed by the LF image developing unit 103 .
- FIG. 10 is a flowchart illustrating an example of a process performed by the object searching unit 104 .
- The process below is achieved by a CPU (Central Processing Unit), which constitutes a control unit of the image processing device, reading out a program from a memory and executing it.
- The object searching unit 104 first receives the image data specifying the search criteria from the search criteria receiving unit 102 (S301). Next, the object searching unit 104 calculates the feature value of the object image from the received image data (S302). Steps S304 to S307, located between S303 and S308, are executed together with S309 as iterating processing. The object searching unit 104 obtains an LF image from the LF image storing unit 101 (S304) and requests the LF image developing unit 103 to develop a pan-focus image (S305).
- The object searching unit 104 obtains the pan-focus image developed by the LF image developing unit 103, detects the object image in the image, and calculates its feature value (S306). The object searching unit 104 determines whether or not the feature value calculated in S306 matches the feature value of the object specified as the search criteria (S307). When the feature values match, the process proceeds to S309; when they do not match, the process returns to S303 via S308 to continue. In S309, the object searching unit 104 notifies the display image generating unit 106 of the file identifier of the LF image whose object feature values were determined to match and of the coordinates information of the corresponding object image. The process then returns to S303 via S308 and continues until all of the LF images have been processed.
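The flow of S301 to S309 can be sketched as follows. The feature value (a simple mean) and all function names are placeholder assumptions; the patent does not specify a particular feature-extraction algorithm:

```python
# Hypothetical sketch of the search flow in FIG. 10 (S301-S309).
def feature_value(image):
    # toy "feature": mean intensity (a real system would use
    # face/object feature extraction)
    return sum(image) / len(image)

def search_objects(query_image, lf_store, develop_pan_focus, tol=1e-6):
    query_feature = feature_value(query_image)           # S302
    hits = []
    for file_id, lf_data in lf_store.items():            # S303-S308 loop
        pan_focus = develop_pan_focus(lf_data)           # S304-S305
        if abs(feature_value(pan_focus) - query_feature) < tol:  # S306-S307
            hits.append(file_id)                         # S309: notify identifier
    return hits

# toy usage: "developing" is the identity, images are flat lists
store = {"img_a": [1, 2, 3], "img_b": [4, 5, 6]}
print(search_objects([2, 2, 2], store, lambda lf: lf))  # ['img_a']
```

The pan-focus development in S305 is what makes blurred objects detectable at all, as explained for FIG. 16A.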
- FIG. 11 is a block diagram mainly illustrating a schematic example of a configuration of the display image generating unit 106 .
- the display image generating unit 106 includes a focus coordinates determining unit 401 , a focus position determining unit 402 , and an image generating unit 403 .
- the focus coordinates determining unit 401 performs coordinates information selecting processing with respect to the coordinates information from the object searching unit 104 and the user adjustment value receiving unit 108 .
- the focus coordinates determining unit 401 receives the coordinates information of the image on which the object to be in focus is shot (object image) from the object searching unit 104 or the user adjustment value receiving unit 108 and transmits either of them to the focus position determining unit 402 .
- the coordinates information transmitted from the focus coordinates determining unit 401 is determined according to whether or not the image to be displayed is a result for the object search. That is, when the image to be displayed is the result of the object search, the focus coordinates determining unit 401 transmits the coordinates information received from the object searching unit 104 to the focus position determining unit 402 .
- Otherwise, the coordinates information received from the user adjustment value receiving unit 108 is transmitted to the focus position determining unit 402.
- the focus position determining unit 402 transmits the coordinates information received from the focus coordinates determining unit 401 to the depth map generating unit 105 .
- the depth map generating unit 105 returns the distance information corresponding to the received coordinates information based on the depth map created through the distance information generating processing.
- the focus position determining unit 402 obtains the distance information corresponding to the coordinates information from the depth map generating unit 105 and transmits it to the image generating unit 403 .
- The image generating unit 403 transmits the distance information received from the focus position determining unit 402 to the LF image developing unit 103 as the focus position information used during development, and requests the development of the LF image.
- the image generating unit 403 transmits the image data developed by the LF image developing unit 103 to the image displaying unit 107 . Hence, the image focusing on the desired object is displayed on the screen of the image displaying unit 107 .
- FIG. 12 is a flowchart illustrating an example of a process performed by the display image generating unit 106 .
- the focus coordinates determining unit 401 obtains the coordinates information to be focused on (focus coordinates) from the user adjustment value receiving unit 108 and the object searching unit 104 (S 501 ).
- the focus coordinates determining unit 401 determines whether or not the current display mode is a display mode of the object search results (S 502 ).
- the process proceeds to S 503 when it is the display mode of the object search results, or the process proceeds to S 504 when it is not the display mode of the object search results.
- In S503, the focus coordinates determining unit 401 sets the coordinates indicated by the information received from the object searching unit 104 (the coordinates of the searched-for object) as the focus coordinates. In S504, the focus coordinates determining unit 401 sets the coordinates indicated by the information received from the user adjustment value receiving unit 108 as the focus coordinates. The process proceeds to S505 after S503 or S504, and the focus position determining unit 402 transmits the coordinates information received from the focus coordinates determining unit 401 to the depth map generating unit 105.
- the focus position determining unit 402 obtains the distance information corresponding to the focus coordinates and transmits it to the image generating unit 403 .
- the image generating unit 403 transmits the distance information obtained in S 505 to the LF image developing unit 103 (S 506 ).
- The LF image developing unit 103 develops the image focused on the coordinates set in S503 or S504, and the image generating unit 403 obtains the developed image data and transmits it to the image displaying unit 107. Accordingly, as shown for example in FIG. 16B, the image focused on the object 123, which is the search result, is displayed on the screen of the image displaying unit 107.
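The selection logic of S502 to S506 can be sketched as a small function; the callables standing in for the depth map generating unit 105 and the LF image developing unit 103, and all names, are illustrative assumptions:

```python
# Hypothetical sketch of the flow in FIG. 12 (S501-S506).
def generate_display_image(search_mode, search_coords, user_coords,
                           depth_map, develop):
    # S502-S504: pick the focus coordinates depending on the display mode
    focus_coords = search_coords if search_mode else user_coords
    # S505: look up the distance corresponding to those coordinates
    distance = depth_map[focus_coords]
    # S506: develop the LF image focused at that distance
    return develop(distance)

depth_map = {(10, 20): 3.5, (40, 50): 1.2}
image = generate_display_image(True, (10, 20), (40, 50),
                               depth_map, lambda d: f"image@{d}")
print(image)  # image@3.5
```

With `search_mode` false, the user-adjusted coordinates (40, 50) would be used instead, matching the S504 branch.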
- When the LF image is displayed as the result of the object search, an image focused on the object searched for is displayed as the search result. Therefore, adjustment of the focus position by the user's manual operation is no longer needed during the object search. That is, displaying the LF image focused on the object to be searched for increases convenience for the user.
- processing in which all of the detected objects are focused on and displayed is executed.
- When a plurality of criteria for the object search are specified, and an LF image in which a plurality of persons who resemble each other, such as siblings, is detected as the result of the object search, the image is displayed in a state of focusing on all of the detected objects.
- An object searching unit 104A transmits the coordinates information of all of the object images detected as the result of the object search to a display image generating unit 106A.
- the display image generating unit 106 A sets the depth of field so as to focus on all of the objects detected by the object searching unit 104 A and develops the LF image.
- FIG. 17A exemplifies an image focusing on three objects 121 , 122 , and 123 .
- FIG. 13 is a flowchart illustrating an example of a process performed by the display image generating unit 106 A.
- The display image generating unit 106A first obtains the coordinates information to be focused on from the user adjustment value receiving unit 108 and the object searching unit 104A (S701). There may be a plurality of pieces of coordinates information.
- The distance information to the object positioned nearest (referred to as "N") is initialized to infinity (S702), and the distance information to the object positioned farthest (referred to as "F") is initialized to zero (S703).
- it is determined whether or not the current display mode is a display mode that is a result of the object search (S 704 ). As the result of the determination, when the current display mode is the display mode as the result of the object search, the process proceeds to S 705 , and when the current display mode is not the display mode as the result of the object search, the process proceeds to S 712 .
- Processes from S 706 to S 710 are executed as iterating processing.
- The display image generating unit 106A obtains the distance information (referred to as "D") for each of the detected objects P from the depth map generating unit 105 (S706).
- S707 is a process of comparing the distance information D with F and determining whether or not D is larger than F. When D is larger than F, that is, when the object P is positioned farthest among the detected objects so far, the process proceeds to S708 and F is updated with D (the value of D is substituted for F). When D is F or less, the process proceeds to S709.
- S709 is a process of comparing the distance information D with N and determining whether or not D is smaller than N. When D is smaller than N, that is, when the object P is positioned nearest among the detected objects so far, the process proceeds to S710 and N is updated with D (the value of D is substituted for N). After S710, the process proceeds to S711 to continue with the next object.
- When the processes from S706 to S710 have ended for all of the objects, the process proceeds to S716.
- the display image generating unit 106 A obtains the distance information D corresponding to the focus coordinates from the depth map generating unit 105 after setting the coordinates of the user adjustment value to the focus coordinates (S 713 ).
- The distance information N is updated with D (the value of D is substituted for N), and in S715 the distance information F is updated with D (the value of D is substituted for F). Then, the process proceeds to S716.
- In S716, the display image generating unit 106A determines a focus range from the values of F and N. That is, the range to be focused on corresponds to the distance information from the N value to the F value, and the display image generating unit 106A notifies the LF image developing unit 103 of the focus position and the depth of field and requests the development of the LF image.
- the display image generating unit 106 A obtains the developed image, that is, image data focusing on the plurality of detected objects, and transmits it to the image displaying unit 107 .
- The image displaying unit 107 displays an image focused on the plurality of objects. According to the present embodiment, an image focusing on all of the detected objects can be displayed when a plurality of objects is detected as the result of the object search.
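The near/far bookkeeping of S702 to S710 reduces to a min/max scan over the detected objects' distances; this sketch uses illustrative names:

```python
import math

# Sketch of S702-S716: the near limit N starts at infinity and the far
# limit F at zero; each detected object's distance D narrows the pair,
# and [N, F] becomes the focus range requested for development.
def focus_range(object_distances):
    near, far = math.inf, 0.0          # S702, S703
    for d in object_distances:         # S706-S710 loop over objects P
        far = max(far, d)              # S707-S708: keep the farthest D
        near = min(near, d)            # S709-S710: keep the nearest D
    return near, far                   # S716: range to keep in focus

print(focus_range([2.0, 5.5, 3.1]))  # (2.0, 5.5)
```

For a single user-specified position (S713 to S715), both limits collapse to the same distance D, giving a shallow focus at one plane.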
- The second embodiment describes the process of generating the display image on the assumption that focusing on the plurality of objects detected as the search result is possible.
- However, focusing on all of the objects is not always possible, depending on the properties of the lens used for shooting or on the shot contents.
- Notification processing is performed when focusing on the objects detected as the result of the object search is not possible. That is, when focusing on the detected objects is not possible, a display image generating unit 106B generates an image that provides notification of this. This image is superimposed on the developed image to generate a single display image.
- FIG. 14 is a flowchart illustrating an example of a process performed by the display image generating unit 106 B.
- the processes from S 501 to S 506 are as described in FIG. 12, and thus only S 901 and S 902, which are different, will be described below.
- the display image generating unit 106 B determines whether or not focusing on a detected object is possible (S 901 ). When focusing on the detected object is possible, the display processing of the image focusing on the object is performed. When focusing on the detected object is not possible, the process proceeds to S 902, and the display image generating unit 106 B generates an image that provides notification that focusing on the corresponding object is not possible, and superimposes this image on the developed image.
- FIG. 17B illustrates an example of an image generated in S 902 . A message that indicates focusing on the detected object is not possible is displayed on the display area 130 in the screen of the image displaying unit 107 .
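The branch between S 901 and S 902 can be sketched as follows. This is a minimal illustrative sketch; the function and parameter names are assumptions and do not come from the patent.

```python
def build_display_image(developed, focusable, overlay):
    """Sketch of FIG. 14: if the detected object cannot be brought into
    focus (S 901), superimpose a notification image on the developed
    image (S 902); otherwise, display the focused image as-is."""
    if focusable:
        return developed
    # Superimpose the "cannot focus" notification on the developed image.
    return overlay(developed, "Cannot focus on the detected object")

# Toy usage: an overlay that simply pairs the image with the message.
out = build_display_image("developed", False,
                          overlay=lambda img, msg: (img, msg))
```

With `focusable=True` the developed image passes through unchanged, matching the normal display path of FIG. 12.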
- a pan-focus image is displayed when focusing on the object detected as the result of the object search is not possible.
- the display image generating unit 106 C performs a process of generating the pan-focus image when focusing on the object detected as the result of the object search is not possible.
- FIG. 15 is a flowchart illustrating a process performed by the display image generating unit 106 C.
- the processes from S 501 to S 506 are as described in FIG. 12 and the processes of S 1101 and S 1102 , which are different, will be described below.
- after the distance information corresponding to the focus coordinates is obtained in S 505, the display image generating unit 106 C determines whether or not focusing on the object is possible, that is, whether or not effective distance information was obtained (S 1101 ). When the effective distance information cannot be obtained, the process proceeds to S 1102, and when it can be obtained, the process proceeds to S 506.
- the display image generating unit 106 C requests the LF image developing unit 103 to develop the pan-focus image, obtains the developed pan-focus image, and transmits it to the image displaying unit 107.
- the developed pan-focus image is displayed on the screen of the image displaying unit 107 (see FIG. 17A ).
- the pan-focus image having a maximum depth of field can be displayed.
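The fallback of S 1101 and S 1102 can be sketched as below; the helper names are illustrative assumptions, not taken from the patent.

```python
def display_for_search_result(distance, develop_at, develop_pan_focus):
    """Sketch of FIG. 15: when no effective distance information can be
    obtained for the detected object (S 1101), fall back to developing
    a pan-focus image with the maximum depth of field (S 1102)."""
    if distance is None:          # effective distance info unavailable
        return develop_pan_focus()
    return develop_at(distance)   # normal focused development (S 506)

img = display_for_search_result(
    None,
    develop_at=lambda d: f"focused@{d}",
    develop_pan_focus=lambda: "pan-focus")
```

When valid distance information is available, the normal focused development path is taken instead.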
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Description
- 1. Field of the Invention
- The present invention relates to an image processing device that searches for an object in a shot image and generates an image focused on an object image found as a search result, and to a control method for the device.
- 2. Description of the Related Art
- An imaging apparatus referred to as a light-field (hereinafter, "LF") camera has been commercialized in recent years. This imaging apparatus splits incident light through a microlens array arranged on an image sensor, which makes it possible to capture light from a plurality of directions and to obtain information of light rays. Hereinafter, an image shot by the LF camera is referred to as an "LF image" and its image data as "LF data". By executing predetermined calculation processing after shooting, based on the intensity of the light rays in the LF data and their incident directions, it is possible to reconstruct an image at an optional viewpoint and an image focused at an optional position with an optional depth of field. That is, an advantage of the LF camera is that the viewpoint, the focus position, and the depth of field of an image can be reconstructed after shooting by performing arithmetic processing on the recorded data. This reconstruction processing is referred to as development processing of the LF image.
- It is common that, when the LF image is displayed, the development processing is performed by focusing on a predetermined default focus position. For example, the setting information from the previous display of the LF image is stored as the default focus position and used the next time. Additionally, an image search (object search) in which an object, such as a person, is specified as a search criterion and an image of the specified object is searched for by using object recognition and metadata is commonly performed. Japanese Patent Application Publication No. 2010-086194 discloses a presentation method of image search results.
- As described above, when the focus position and the like used during the previous display of the LF image are set as the default, development is performed by focusing on that position and the LF image is displayed; however, the object is not always in focus under such a setting.
FIG. 16A illustrates an example of the LF image. This illustrates a state of focusing on an object 122 among three objects. To display an image focused on the object 123 that is the search target, as shown in the example of the LF image in FIG. 16B, the user has to adjust the focus position manually, and thus the operation is complicated. - Additionally, the prior art in Japanese Patent Application Publication No. 2010-086194 discloses changing a shape or color of a frame line that emphasizes the object to be searched for, but the focus position of the image to be displayed is not considered.
- The present invention increases the convenience for a user of an image processing device that processes light-field data by focusing on an object to be searched for.
- A device according to the present invention comprises a first image generating unit configured to generate a first image having a predetermined depth of field from each of a plurality of pieces of light-field data, each of which has a changeable focus state; a searching unit configured to search for light-field data that includes a predetermined object by analyzing the first image generated by the first image generating unit; and a second image generating unit configured to generate a second image that has a shallower depth of field than the first image and is focused on the predetermined object, based on the light-field data detected by the searching unit.
- According to the present invention, focusing on the object to be searched for increases the convenience for the user.
- Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
- FIGS. 1A and 1B are schematic diagrams illustrating configuration examples A and B inside an LF camera.
- FIG. 2 is a schematic diagram illustrating a positional relation between a microlens array 12 and each pixel in an image sensor 13.
- FIG. 3 is a schematic diagram illustrating a relation between a travelling direction of incident light rays to microlenses and a recording area in the image sensor 13.
- FIG. 4 is a schematic diagram illustrating information of light rays that are incident to the image sensor 13.
- FIG. 5 is a schematic diagram illustrating refocus arithmetic processing.
- FIG. 6 is a schematic diagram illustrating a relation between differences in incident angles to the microlenses and the recording area in the image sensor 13.
- FIG. 7 is a schematic diagram illustrating the adjustment processing for the depth of field.
- FIG. 8 is a block diagram illustrating a schema of an image display device according to an embodiment of the present invention.
- FIG. 9 is a block diagram illustrating a schema of an object searching unit of FIG. 8.
- FIG. 10 is a flowchart illustrating an example of a process of the object searching unit in FIG. 8.
- FIG. 11 is a block diagram illustrating a schema of a display image generating unit in FIG. 8.
- FIG. 12 is a flowchart illustrating an example of a process of the display image generating unit in FIG. 8.
- FIG. 13 is a flowchart illustrating an example of a process of the display image generating unit according to a second embodiment of the present invention.
- FIG. 14 is a flowchart illustrating an example of a process of the display image generating unit according to a third embodiment of the present invention.
- FIG. 15 is a flowchart illustrating an example of a process of the display image generating unit according to a fourth embodiment of the present invention.
- FIGS. 16A and 16B are diagrams illustrating examples of an LF image.
- FIGS. 17A and 17B are diagrams illustrating examples of pan-focus images and examples of notification processing to a user.
- Hereinafter, a detailed description will be given of each embodiment of the present invention with reference to the attached drawings. A description will be given of an LF camera before describing an image processing device according to embodiments of the present invention.
- FIG. 1 exemplifies a schematic configuration of the LF camera. Light that is incident from an object to a microlens array 12 through an imaging lens 11 configuring an imaging optical system is photoelectrically converted by an image sensor 13, and an electric signal is obtained. Note that the imaging data obtained here is the LF data.
- An imaging lens 11 projects the light from the object to the microlens array 12. The imaging lens 11 is interchangeable and is used by being mounted on a main body of an imaging apparatus 10. The user can change the imaging magnification by a zoom operation of the imaging lens 11. The microlens array 12 is configured by arranging microlenses in a grid shape and is positioned between the imaging lens 11 and the image sensor 13. Each of the microlenses configuring the microlens array 12 splits the incident light from the imaging lens 11 and outputs the split light to the image sensor 13. The image sensor 13 configuring an imaging unit is an imaging element having a plurality of pixels and detects the light intensity at each of the pixels. The light split through each of the microlenses is incident to the pixels of the image sensor 13 that receive the light from the object. -
FIG. 2 is a schematic diagram illustrating a positional relation between the microlens array 12 and each pixel in the image sensor 13. Each of the microlenses of the microlens array 12 is arranged so as to correspond to a plurality of pixels in the image sensor 13. The light that is split by each of the microlenses is incident to the corresponding pixels in the image sensor 13, and the light intensity (information of light rays) from different directions can be detected at each of the pixels. Additionally, it is possible to know the incident directions (information of direction) of the light rays that are incident to each of the pixels in the image sensor 13 through the microlens, according to the positional relation between each of the microlenses and each of the pixels in the image sensor 13. Specifically, information about the travelling direction of the light is detected in accordance with the distribution of the light intensity. An image on a focal plane at a different distance from the lens apex plane of the microlens array 12 can be obtained by synthesizing the outputs of the pixels in the image sensor 13 positioned at corresponding eccentricities from the optical axis of each of the microlenses. Note that a light ray is represented by a function parameterized by two parallel planes, using parameters such as a position, a direction, and a wavelength. Specifically, the incident direction of the light to each of the pixels is determined according to the arrangement of the plurality of pixels corresponding to each of the microlenses. As described above, the imaging apparatus 10 obtains the information about light rays and the information about directions and performs the sorting of light rays and calculation processing (hereinafter referred to as "reconstruction"), so that image data at an optional focus position and an optional viewpoint can be generated. The information about the light rays and the information about the direction are included in the LF data. The focus position here is the position in a desired image area that the user wants to bring into focus after shooting. -
FIG. 3 is a schematic diagram illustrating a relation between the travelling direction of the incident light rays to the microlenses of the microlens array 12 and the recording area in the image sensor 13. The object image formed through the imaging lens 11 is formed on the microlens array 12, and the incident light rays to the microlens array 12 are received at the image sensor 13 through the microlens array 12. At this time, as shown in FIG. 3, the light rays that are incident to the microlens array 12 are received at different positions on the image sensor 13 according to their travelling directions, and an object image that is a similar figure to the shape of the imaging lens 11 is formed for each of the microlenses. -
FIG. 4 is a schematic diagram illustrating the information of light rays that are incident to the image sensor 13. A description will be given of the light rays received at the image sensor 13 by using FIG. 4. Here, a rectangular coordinate system on the lens surface of the imaging lens 11 is denoted as (u, v) and a rectangular coordinate system on the imaging plane of the image sensor 13 is denoted as (x, y). Further, the distance between the lens surface of the imaging lens 11 and the imaging plane of the image sensor 13 is denoted as "F". Thus, the intensity of the light passing through the imaging lens 11 and the image sensor 13 can be represented by a four-dimensional function L(u, v, x, y) shown in the drawing. Because the light rays that are incident to each of the microlenses are incident to different pixels depending on the travelling direction, the four-dimensional function L(u, v, x, y), which holds the travelling direction of the light rays in addition to their position information, is recorded in the image sensor 13. - Next, a description will be given of refocus arithmetic processing after imaging.
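The four-dimensional function L(u, v, x, y) can be pictured as a 4-D array indexed by aperture coordinates (u, v) and sensor-plane coordinates (x, y). The following is an illustrative sketch with assumed array sizes; the discretization is not taken from the patent.

```python
import numpy as np

# Assumed discretization: 5x5 aperture samples (u, v) recorded behind
# each of 8x8 microlenses (x, y).
NU, NV, NX, NY = 5, 5, 8, 8

# L[u, v, x, y] holds the intensity of the ray that passed through
# aperture position (u, v) and reached microlens position (x, y).
L = np.zeros((NU, NV, NX, NY))
L[2, 2, 4, 4] = 1.0  # one sample ray recorded at the aperture center

# Summing over (u, v) integrates over the aperture, giving the plain
# 2-D image a conventional sensor would record.
image = L.sum(axis=(0, 1))
```

Keeping the (u, v) axes separate, instead of summing them at exposure time as a conventional sensor does, is what makes post-shooting reconstruction possible.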
FIG. 5 is a schematic diagram illustrating the refocus arithmetic processing. As shown in FIG. 5, when the positional relation among the imaging lens surface, the imaging plane, and a refocus plane is set, the intensity of light rays L′(u, v, s, t) in the rectangular coordinate system (s, t) on the refocus plane is represented as the formula (1) below, where the distance between the lens surface and the refocus plane is denoted as αF:

L′(u, v, s, t) = L(u, v, u + (s − u)/α, v + (t − v)/α)   (1)

- Additionally, an image E′(s, t) obtained on the refocus plane is obtained by integrating the intensity of light rays L′(u, v, s, t) with respect to the lens aperture, and thus it is represented as the formula (2) below:

E′(s, t) = (1/(α²F²)) ∬ L′(u, v, s, t) du dv   (2)
- Accordingly, an image set to an optional focal point (refocus plane) can be reconstructed by performing the refocus arithmetic processing by this formula (2).
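In discrete form, formula (2) amounts to a shift-and-add over the aperture samples. The sketch below assumes a light field stored as an array L[u, v, x, y]; the nearest-neighbor shifting and the helper names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def refocus(L, alpha):
    """Approximate E'(s, t) by summing aperture samples L[u, v] shifted
    in proportion to (1 - 1/alpha) times their aperture offset
    (a nearest-neighbor version of formula (2); constant factors such
    as 1/(alpha^2 F^2) are folded into the final normalization)."""
    nu, nv, nx, ny = L.shape
    out = np.zeros((nx, ny))
    for u in range(nu):
        for v in range(nv):
            du = int(round((u - nu // 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - nv // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(np.roll(L[u, v], du, axis=0), dv, axis=1)
    return out / (nu * nv)
```

With alpha = 1 (refocus plane at the original imaging plane) every shift is zero and the result is the plain aperture average; other values of alpha refocus in front of or behind that plane.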
- Next, a description will be given of adjustment processing of the depth of field after shooting. Before the refocus arithmetic, weighting is performed by multiplying each piece of image data that forms the image area assigned to each of the microlenses by a weighting coefficient. For example, when it is desired to generate an image having a deep depth of field, the integration processing is performed by using only the information about light rays that are incident to the receiving plane of the image sensor 13 at relatively small angles. In other words, the light rays that are incident to the image sensor 13 at relatively large angles are excluded from the integration processing by multiplying them by a weighting coefficient of 0 (zero). -
FIG. 6 is a schematic diagram illustrating a relation between the differences in the incident angle to the microlenses and the recording area in the image sensor 13, and FIG. 7 is a schematic diagram illustrating the adjustment processing for the depth of field. As shown in FIG. 6, the light rays that are incident to the image sensor 13 at relatively small angles are positioned in a more central area. Accordingly, as shown in FIG. 7, the integration processing is performed by using only the pixel data obtained in the center portion of the area (hatched areas in the drawing). Through such processing, it is possible to express an image having a deep depth of field as if an aperture diaphragm included in a typical imaging apparatus were narrowed. It is also possible to generate a pan-focus image having an even deeper depth of field by further reducing the pixel data in the center portion that is used. As described above, the depth of field of the image can be adjusted after shooting based on the actually obtained LF data (information of light rays). -
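The zero-weighting described above can be sketched as masking aperture samples by incident angle: only the (u, v) samples within a given radius of the aperture center (small angles) enter the integration. This is a minimal illustrative sketch, not the patent's implementation.

```python
import numpy as np

def integrate_with_aperture(L, radius):
    """Average only the aperture samples (u, v) whose distance from the
    aperture center is within `radius`; samples outside get weight 0,
    which excludes large-angle rays and deepens the depth of field."""
    nu, nv = L.shape[:2]
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    w = np.zeros((nu, nv))
    for u in range(nu):
        for v in range(nv):
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                w[u, v] = 1.0
    # Weighted integration over the aperture (formula (2) with weights).
    return np.tensordot(w, L, axes=([0, 1], [0, 1])) / w.sum()
```

A radius of 0 keeps only the central sample, approximating the pan-focus case, while a large radius reproduces the full-aperture, shallow-depth-of-field image.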
FIG. 8 is a block diagram schematically illustrating a configuration example of an image display device 100 according to the embodiment of the present invention. An LF image storing unit 101 receives the data of the LF image shot by the LF camera and stores it in a memory. Additionally, the LF image storing unit 101 transmits the stored LF data according to a request from an LF image developing unit 103. The LF data may be received directly from an LF camera connected with the image display device through a USB (Universal Serial Bus) interface or the like, or may be read from a storage medium, for example, an SD card. - A search criteria receiving unit 102 receives the search criteria of the object specified by the user through the operation unit and transmits them to an object searching unit 104. One method specifies the search criteria by inputting an image to be searched for. Alternatively, the search criteria may be specified by the user selecting among images registered beforehand. - The LF
image developing unit 103 reads out the LF data from the LF image storing unit 101 and performs predetermined arithmetic processing. The predetermined arithmetic processing is the processing necessary for developing the LF image, and it is performed according to requests from the object searching unit 104, a depth map generating unit 105, and a display image generating unit 106. - The
object searching unit 104 searches, from among all of the images stored in the LF image storing unit 101, for the image that matches the search criteria of the object received from the search criteria receiving unit 102. The details of the object searching unit 104 will be described below. The depth map generating unit 105 executes distance information generating processing according to the request from the display image generating unit 106. A depth map is created by generating the distance information for each of the pixels in the LF image. The distance information corresponding to the depth in the image is calculated by generating images having two or more different viewpoints and detecting the positional displacement between the plurality of generated images. The display image generating unit 106 generates display image data. The details of the display image generating unit 106 will be described below. An image displaying unit 107 displays the image on a screen according to the image data generated by the display image generating unit 106. - A user adjustment
value receiving unit 108 receives information about the focus position specified by the user through the operation unit (specified coordinates information) and transmits it to the display image generating unit 106. The focus position corresponds to a position on which to focus in the shot LF image. As for the specification of the focus position, there is a method in which the user specifies the position of the object to be focused on by utilizing a pointing device, a touch panel, or the like. Alternatively, there is a method in which the user specifies the focus position by utilizing a sliding bar (scroll bar) or the like. - Next, details of the
object searching unit 104 will be described. FIG. 9 is a block diagram mainly illustrating a schematic configuration of the object searching unit 104. The object searching unit 104 includes a search image generating unit 201 and a feature value comparing unit 202. - The search
image generating unit 201 requests the LF image developing unit 103 to develop the LF data obtained from the LF image storing unit 101. While the focus position in the LF image is changeable during the development, an object at an unfocused location (a blurred object) cannot be detected when the object search is performed by using an image developed by focusing on a specific position. For example, it is difficult to detect the unfocused objects in the image shown in FIG. 16A. Accordingly, the search image generating unit 201 requests the LF image developing unit 103 to generate an image having a maximum depth of field and focusing on all of the objects in the image (a pan-focus image). The LF image developing unit 103 generates pan-focus image data as the image data for searching for the object. FIG. 17A illustrates an example of the pan-focus image that focuses on all of the objects. The feature value comparing unit 202 analyzes the images received from the search criteria receiving unit 102 and the search image generating unit 201, and detects the object. The feature value comparing unit 202 calculates feature values of the detected object and compares them. The image data processed by the feature value comparing unit 202 is image data that has been developed by the LF image developing unit 103. Hence, for the extraction of the object image and the calculation of the feature values, known methods for a JPEG (Joint Photographic Experts Group) image and the like can be used. -
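The flow just described, developing a pan-focus image, detecting objects in it, and comparing feature values against the criteria, can be sketched as follows. The helper functions are illustrative stand-ins (assumptions), with features reduced to plain numbers.

```python
def search_objects(lf_images, query_feature, develop_pan_focus,
                   extract_features, is_match):
    """For each stored LF image: develop a pan-focus image so blurred
    objects are detectable, extract (coordinates, feature) pairs from
    it, and report every object whose feature matches the query."""
    results = []
    for file_id, lf_data in lf_images.items():
        pan_focus = develop_pan_focus(lf_data)
        for coords, feature in extract_features(pan_focus):
            if is_match(feature, query_feature):
                results.append((file_id, coords))
    return results

# Toy usage: each image carries one object; a "match" is feature equality.
lfs = {"a.lf": [((0, 0), 7)], "b.lf": [((3, 4), 9)]}
found = search_objects(lfs, 9,
                       develop_pan_focus=lambda d: d,
                       extract_features=lambda img: img,
                       is_match=lambda f, q: f == q)
```

Developing to maximum depth of field first is the key point: with a shallow development, a blurred target object would never pass the feature comparison.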
FIG. 10 is a flowchart illustrating an example of a process performed by the object searching unit 104. The process below is achieved by a CPU (Central Processing Unit) configuring a control unit of the image processing device reading out and executing a program from a memory. - The
object searching unit 104 first receives the image data specifying the search criteria from the search criteria receiving unit 102 (S301). Next, the object searching unit 104 calculates the feature value of the object image from the received image data (S302). The steps from S304 to S307, which are between steps S303 and S308, and step S309 are executed as iterating processing. The object searching unit 104 subsequently obtains an LF image from the LF image storing unit 101 (S304) and requests the LF image developing unit 103 to develop the pan-focus image (S305). The object searching unit 104 obtains the pan-focus image developed by the LF image developing unit 103, detects the object image in the image, and calculates the feature value (S306). The object searching unit 104 determines whether or not the feature value calculated in S306 is identical with the feature value of the object specified as the search criteria (S307). When these feature values are identical, the process proceeds to S309, and when they are not identical, the process proceeds from S308 to S303 to continue the process. In S309, the object searching unit 104 notifies the display image generating unit 106 of the file identifier of the corresponding LF image, for which the feature values of the object have been determined to be identical, and the coordinates information of the corresponding object image. Then, the process proceeds from S308 to S303 and continues until the process for all of the LF images ends. - Next, a detailed description will be given of the display
image generating unit 106. FIG. 11 is a block diagram mainly illustrating a schematic configuration example of the display image generating unit 106. The display image generating unit 106 includes a focus coordinates determining unit 401, a focus position determining unit 402, and an image generating unit 403. The focus coordinates determining unit 401 performs coordinates information selecting processing with respect to the coordinates information from the object searching unit 104 and the user adjustment value receiving unit 108. The focus coordinates determining unit 401 receives, from the object searching unit 104 or the user adjustment value receiving unit 108, the coordinates information of the image area in which the object to be focused on is shot (the object image), and transmits either of them to the focus position determining unit 402. The coordinates information transmitted from the focus coordinates determining unit 401 is determined according to whether or not the image to be displayed is a result of the object search. That is, when the image to be displayed is the result of the object search, the focus coordinates determining unit 401 transmits the coordinates information received from the object searching unit 104 to the focus position determining unit 402. When the image to be displayed is not the result of the object search, the coordinates information received from the user adjustment value receiving unit 108 is transmitted to the focus position determining unit 402. - The focus
position determining unit 402 transmits the coordinates information received from the focus coordinates determining unit 401 to the depth map generating unit 105. The depth map generating unit 105 returns the distance information corresponding to the received coordinates information based on the depth map created through the distance information generating processing. The focus position determining unit 402 obtains the distance information corresponding to the coordinates information from the depth map generating unit 105 and transmits it to the image generating unit 403. The image generating unit 403 transmits the distance information received from the focus position determining unit 402, which serves as the information for the focus position used during the development, to the LF image developing unit 103 and requests the development of the LF image. The image generating unit 403 transmits the image data developed by the LF image developing unit 103 to the image displaying unit 107. Hence, the image focusing on the desired object is displayed on the screen of the image displaying unit 107. -
FIG. 12 is a flowchart illustrating an example of a process performed by the display image generating unit 106. The focus coordinates determining unit 401 obtains the coordinates information to be focused on (the focus coordinates) from the user adjustment value receiving unit 108 and the object searching unit 104 (S501). Next, the focus coordinates determining unit 401 determines whether or not the current display mode is a display mode of the object search results (S502). The process proceeds to S503 when it is the display mode of the object search results, or to S504 when it is not. - In S503, the focus coordinates determining
unit 401 sets the coordinates to be searched for, indicated by the information received from the object searching unit 104, as the focus coordinates. In S504, the focus coordinates determining unit 401 sets the coordinates indicated by the information received from the user adjustment value receiving unit 108 as the focus coordinates. The process proceeds to S505 subsequent to S503 or S504, and the focus position determining unit 402 transmits the coordinates information received from the focus coordinates determining unit 401 to the depth map generating unit 105. That is, after the focus coordinates set in S503 or S504 are transmitted to the depth map generating unit 105, the focus position determining unit 402 obtains the distance information corresponding to the focus coordinates and transmits it to the image generating unit 403. The image generating unit 403 transmits the distance information obtained in S505 to the LF image developing unit 103 (S506). The LF image developing unit 103 develops the image focusing on the coordinates set in S503 or S504, and the image generating unit 403 obtains the image data after the development and transmits it to the image displaying unit 107. Accordingly, for example, as shown in FIG. 16B, the image focusing on the object 123, which is the search result, is displayed on the screen of the image displaying unit 107.
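The steps S 502 to S 506 above can be sketched as follows, with the unit interactions collapsed into plain functions; the names are illustrative assumptions, not taken from the patent.

```python
def generate_display_image(is_search_result, search_coords, user_coords,
                           depth_map, develop):
    """Sketch of FIG. 12: choose the focus coordinates according to the
    display mode (S 502-S 504), look up the corresponding distance in
    the depth map (S 505), and develop an image focused there (S 506)."""
    coords = search_coords if is_search_result else user_coords
    distance = depth_map[coords]
    return develop(distance)

# Toy usage: a two-entry depth map and a develop step that just records
# the focus distance it was asked for.
depth = {(3, 4): 2.5, (1, 1): 7.0}
img = generate_display_image(True, (3, 4), (1, 1), depth,
                             develop=lambda d: f"focused@{d}")
```

In search-result mode the searched coordinates win; otherwise the user-specified coordinates are used.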
- Next, a description will be given of a second embodiment of the present invention. In the present embodiment, when a plurality of objects is detected as a result for the object search, processing in which all of the detected objects are focused on and displayed is executed. For example, when a plurality of criteria for the object search is specified and when the LF image on which a plurality of persons who resemble each other like a sibling is detected as a result for the object search, the image is displayed in a state of focusing on all of the detected objects. Note that detailed explanations are omitted by using reference numerals already used for the structural elements that are identical to the case of the first embodiment, and the points of difference will be explained in detail. Such a manner of omitting explanations is identical in the embodiments explained below.
- A
object searching unit 104A transmits the coordinates information for all of the object images detected as the result for the object search to a displayimage generating unit 106A. The displayimage generating unit 106A sets the depth of field so as to focus on all of the objects detected by theobject searching unit 104A and develops the LF image.FIG. 17A exemplifies an image focusing on threeobjects -
FIG. 13 is a flowchart illustrating an example of a process performed by the display image generating unit 106 A. The display image generating unit 106 A first obtains the coordinates information to be focused on from the user adjustment value receiving unit 108 and the object searching unit 104 A (S701). Here, a plurality of pieces of coordinates information may be obtained. Next, the distance information to the object positioned nearest (referred to as "N") is initialized to infinity (S702), and the distance information to the object positioned farthest (referred to as "F") is initialized to zero (S703). Next, it is determined whether or not the current display mode is the display mode for a result of the object search (S704). When it is, the process proceeds to S705; when it is not, the process proceeds to S712. -
image generating unit 106A obtains the distance information (referred to as “D”) with respect to all of the detected objects P from the depth map generating unit 105 (S706). S707 is a process of comparing the distance information D and F and determining whether or not D is larger than F. When D is larger than F, that is, the object P is positioned farthest among the detected objects, the process proceeds to S708 and the process for updating F by D (process of substituting a value of D for F) is executed. In contrast, when D is F or less, the process proceeds to S709. - S709 is a process of comparing the distance information D and N and determining whether or not D is smaller than N. When D is smaller than N, that is, when the object P is positioned at the nearest position among the detected objects, the process proceeds to S710 and the process of updating N by D (the process of substituting a value of D for N) is executed. After the process of S710, or when D is N or more, the process proceeds to S711 to continue the process with respect to the subsequent object. The processes from S706 to S710 with respect to all of the objects end, and the process proceeds to S716.
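The iterative scan (S702 through S711) and the subsequent focus-range determination (S716) amount to a single-pass minimum/maximum computation followed by a range-to-parameters mapping. A minimal Python sketch follows; the function and variable names are illustrative, and the midpoint/span mapping in `focus_request` is an assumed parameterization, since the specification only states that the focus range runs from N to F:

```python
import math

def nearest_farthest(distances):
    """Scan the distance information of all detected objects, tracking the
    nearest (N) and farthest (F) distances, as in S702-S711 of FIG. 13."""
    N = math.inf  # distance to the nearest object, initialized at infinity (S702)
    F = 0.0       # distance to the farthest object, initialized at zero (S703)
    for D in distances:  # one distance D per detected object P (S706)
        if D > F:        # S707: P is farther than any object seen so far
            F = D        # S708: update F by D
        if D < N:        # S709: P is nearer than any object seen so far
            N = D        # S710: update N by D
    return N, F

def focus_request(N, F):
    """Turn the nearest/farthest distances into a development request (S716).
    The midpoint/span mapping below is an assumption for illustration."""
    return {
        "focus_position": (N + F) / 2.0,  # assumed: focus midway through the range
        "depth_of_field": F - N,          # assumed: the field spans N through F
    }
```

For example, distances of 3.0, 5.0, and 4.0 yield N = 3.0 and F = 5.0, so the requested depth of field spans all three detected objects; on the single-coordinate path (S712 to S715), N and F are both updated by the same D, so the depth of field collapses to zero.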
- When the process proceeds from S704 to S712, the display image generating unit 106A sets the coordinates of the user adjustment value as the focus coordinates (S712) and then obtains the distance information D corresponding to the focus coordinates from the depth map generating unit 105 (S713). In S714, the distance information N is updated by D (the value D is substituted for N), and in S715 the distance information F is updated by D (the value D is substituted for F). Then, the process proceeds to S716. - In S716, the display
image generating unit 106A determines a focus range from the values of F and N. That is, the range to be focused on is the range corresponding to the distance information from the N value to the F value, and the display image generating unit 106A notifies the LF image developing unit 103 of the focus position and the depth of field and requests development of the LF image. The display image generating unit 106A obtains the developed image, that is, image data focusing on the plurality of detected objects, and transmits it to the image displaying unit 107. - Accordingly, the
image displaying unit 107 displays an image focusing on the plurality of objects. According to the present embodiment, when a plurality of objects is detected as the result of the object search, an image focusing on all of the detected objects can be displayed. - Next, a description will be given of a third embodiment of the present invention. The second embodiment describes the process of generating the display image on the assumption that focusing on the plurality of objects detected as the search result is possible. However, in the LF image, focusing on all of the objects is not always possible, depending on the properties of the lens used for the shot or on the shot contents. Accordingly, in the present embodiment, notification processing is performed when focusing on an object detected as the result of the object search is not possible. That is, when such focusing is not possible, the display image generating unit 106B generates an image that provides notification of this. This image is superimposed on the developed image to generate one display image. -
FIG. 14 is a flowchart illustrating an example of a process performed by the display image generating unit 106B. The processes from S501 to S506 are as described in FIG. 12, and thus only S901 and S902, which differ, will be described below. - After the process of S506, the display
image generating unit 106B determines whether or not focusing on a detected object is possible (S901). When focusing is possible, display processing of the image focusing on the object is performed. When focusing is not possible, the process proceeds to S902, where the display image generating unit 106B generates an image providing notification that focusing on the corresponding object is not possible and superimposes this image on the developed image. FIG. 17B illustrates an example of an image generated in S902. A message indicating that focusing on the detected object is not possible is displayed in the display area 130 on the screen of the image displaying unit 107. - According to the present embodiment, when focusing on the object detected as the result of the object search is not possible, the user can be notified of this.
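The S901/S902 branch can be sketched as below; the `develop` and `superimpose` callables are hypothetical stand-ins for the LF image developing unit and the superimposing step, and the message text is illustrative:

```python
def make_display_image(focusable, develop, superimpose):
    """S901/S902 of FIG. 14: when focusing on the detected object is not
    possible, superimpose a notification image on the developed image."""
    image = develop()  # developed image from the LF image developing unit
    if not focusable:  # S901: focusing on the detected object is not possible
        # S902: generate the notification and superimpose it on the developed image
        image = superimpose(image, "Cannot focus on the detected object")
    return image
```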
- Next, a description will be given of a fourth embodiment of the present invention. In the present embodiment, a pan-focus image is displayed when focusing on the object detected as the result of the object search is not possible. In that case, the display image generating unit 106C performs a process of generating the pan-focus image. -
FIG. 15 is a flowchart illustrating a process performed by the display image generating unit 106C. The processes from S501 to S506 are as described in FIG. 12, and the processes of S1101 and S1102, which differ, will be described below. - The display
image generating unit 106C determines, after the distance information corresponding to the focus coordinates is obtained in S505, whether or not focusing on the object is possible, that is, whether or not effective distance information was obtained (S1101). When effective distance information cannot be obtained, the process proceeds to S1102; when it can be obtained, the process proceeds to S506. - In S1102, the display
image generating unit 106C requests the LF image developing unit 103 to develop a pan-focus image, obtains the developed pan-focus image, and transmits it to the image displaying unit 107. Hence, the developed pan-focus image is displayed on the screen of the image displaying unit 107 (see FIG. 17A). According to the present embodiment, when focusing on the object detected as the result of the object search is not possible, a pan-focus image having the maximum depth of field can be displayed. - Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., an application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., a central processing unit (CPU), a micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
- This application claims the benefit of Japanese Patent Application No. 2014-115773, filed Jun. 4, 2014, which is hereby incorporated by reference herein in its entirety.
Claims (15)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014115773A JP6362433B2 (en) | 2014-06-04 | 2014-06-04 | Image processing apparatus, control method therefor, and program |
JP2014-115773 | 2014-06-04 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150358529A1 true US20150358529A1 (en) | 2015-12-10 |
US9936121B2 US9936121B2 (en) | 2018-04-03 |
Family
ID=54706901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/722,757 Expired - Fee Related US9936121B2 (en) | 2014-06-04 | 2015-05-27 | Image processing device, control method of an image processing device, and storage medium that stores a program to execute a control method of an image processing device |
Country Status (5)
Country | Link |
---|---|
US (1) | US9936121B2 (en) |
JP (1) | JP6362433B2 (en) |
KR (1) | KR101761105B1 (en) |
CN (1) | CN105323460B (en) |
DE (1) | DE102015108601A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12200393B2 (en) * | 2022-01-04 | 2025-01-14 | Canon Kabushiki Kaisha | Image processing device, image processing method, and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107241612B (en) * | 2017-07-06 | 2020-05-19 | 北京潘达互娱科技有限公司 | Network live broadcast method and device |
KR20210028808A (en) | 2019-09-04 | 2021-03-15 | 삼성전자주식회사 | Image sensor and imaging apparatus having the same |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070230944A1 (en) * | 2006-04-04 | 2007-10-04 | Georgiev Todor G | Plenoptic camera |
US7936392B2 (en) * | 2004-10-01 | 2011-05-03 | The Board Of Trustees Of The Leland Stanford Junior University | Imaging arrangements and methods therefor |
US20120194709A1 (en) * | 2011-01-31 | 2012-08-02 | Sanyo Electric Co., Ltd. | Image pickup apparatus |
US8279325B2 (en) * | 2008-11-25 | 2012-10-02 | Lytro, Inc. | System and method for acquiring, editing, generating and outputting video data |
US20120249550A1 (en) * | 2009-04-18 | 2012-10-04 | Lytro, Inc. | Selective Transmission of Image Data Based on Device Attributes |
US8559705B2 (en) * | 2006-12-01 | 2013-10-15 | Lytro, Inc. | Interactive refocusing of electronic images |
US20130342526A1 (en) * | 2012-06-26 | 2013-12-26 | Yi-Ren Ng | Depth-assigned content for depth-enhanced pictures |
JP2014103601A (en) * | 2012-11-21 | 2014-06-05 | Canon Inc | Information processing unit, information processing method, and program |
US20140267243A1 (en) * | 2013-03-13 | 2014-09-18 | Pelican Imaging Corporation | Systems and Methods for Synthesizing Images from Image Data Captured by an Array Camera Using Restricted Depth of Field Depth Maps in which Depth Estimation Precision Varies |
US8978984B2 (en) * | 2013-02-28 | 2015-03-17 | Hand Held Products, Inc. | Indicia reading terminals and methods for decoding decodable indicia employing light field imaging |
US8995785B2 (en) * | 2012-02-28 | 2015-03-31 | Lytro, Inc. | Light-field processing and analysis, camera control, and user interfaces and interaction on light-field capture devices |
US20150103192A1 (en) * | 2013-10-14 | 2015-04-16 | Qualcomm Incorporated | Refocusable images |
US9380281B2 (en) * | 2012-08-13 | 2016-06-28 | Canon Kabushiki Kaisha | Image processing apparatus, control method for same, and program |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009081810A (en) | 2007-09-27 | 2009-04-16 | Fujifilm Corp | Photographing device and photographing method |
JP2010086194A (en) | 2008-09-30 | 2010-04-15 | Fujifilm Corp | Share image browsing method and device |
JP5163446B2 (en) | 2008-11-25 | 2013-03-13 | ソニー株式会社 | Imaging apparatus, imaging method, and program |
JP6080417B2 (en) | 2011-08-19 | 2017-02-15 | キヤノン株式会社 | Image processing apparatus and image processing method |
JP2013125050A (en) | 2011-12-13 | 2013-06-24 | Nec Casio Mobile Communications Ltd | Imaging apparatus, focusing method, and program |
JP6014037B2 (en) | 2012-02-09 | 2016-10-25 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Image recognition apparatus, image recognition method, program, and integrated circuit |
CN102638654B (en) | 2012-03-28 | 2015-03-25 | 华为技术有限公司 | Method, device and equipment for outputting multi-pictures |
JP2013254432A (en) * | 2012-06-08 | 2013-12-19 | Canon Inc | Image processing apparatus and image processing method |
JP6016516B2 (en) * | 2012-08-13 | 2016-10-26 | キヤノン株式会社 | Image processing apparatus, control method therefor, image processing program, and imaging apparatus |
JP6074201B2 (en) * | 2012-09-21 | 2017-02-01 | キヤノン株式会社 | Image processing apparatus, control method, and program |
JP6082223B2 (en) * | 2012-10-15 | 2017-02-15 | キヤノン株式会社 | Imaging apparatus, control method thereof, and program |
CN103034987B (en) | 2012-12-16 | 2015-10-21 | 吴凡 | A kind of multiple focussing image file format and generating apparatus and image file processing method |
-
2014
- 2014-06-04 JP JP2014115773A patent/JP6362433B2/en not_active Expired - Fee Related
-
2015
- 2015-05-27 US US14/722,757 patent/US9936121B2/en not_active Expired - Fee Related
- 2015-06-01 KR KR1020150077050A patent/KR101761105B1/en not_active Expired - Fee Related
- 2015-06-01 DE DE102015108601.8A patent/DE102015108601A1/en not_active Ceased
- 2015-06-01 CN CN201510293820.9A patent/CN105323460B/en not_active Expired - Fee Related
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7936392B2 (en) * | 2004-10-01 | 2011-05-03 | The Board Of Trustees Of The Leland Stanford Junior University | Imaging arrangements and methods therefor |
US20070230944A1 (en) * | 2006-04-04 | 2007-10-04 | Georgiev Todor G | Plenoptic camera |
US8559705B2 (en) * | 2006-12-01 | 2013-10-15 | Lytro, Inc. | Interactive refocusing of electronic images |
US20140240463A1 (en) * | 2008-11-25 | 2014-08-28 | Lytro, Inc. | Video Refocusing |
US8279325B2 (en) * | 2008-11-25 | 2012-10-02 | Lytro, Inc. | System and method for acquiring, editing, generating and outputting video data |
US20120249550A1 (en) * | 2009-04-18 | 2012-10-04 | Lytro, Inc. | Selective Transmission of Image Data Based on Device Attributes |
US20120194709A1 (en) * | 2011-01-31 | 2012-08-02 | Sanyo Electric Co., Ltd. | Image pickup apparatus |
US8995785B2 (en) * | 2012-02-28 | 2015-03-31 | Lytro, Inc. | Light-field processing and analysis, camera control, and user interfaces and interaction on light-field capture devices |
US20130342526A1 (en) * | 2012-06-26 | 2013-12-26 | Yi-Ren Ng | Depth-assigned content for depth-enhanced pictures |
US9380281B2 (en) * | 2012-08-13 | 2016-06-28 | Canon Kabushiki Kaisha | Image processing apparatus, control method for same, and program |
JP2014103601A (en) * | 2012-11-21 | 2014-06-05 | Canon Inc | Information processing unit, information processing method, and program |
US8978984B2 (en) * | 2013-02-28 | 2015-03-17 | Hand Held Products, Inc. | Indicia reading terminals and methods for decoding decodable indicia employing light field imaging |
US20140267243A1 (en) * | 2013-03-13 | 2014-09-18 | Pelican Imaging Corporation | Systems and Methods for Synthesizing Images from Image Data Captured by an Array Camera Using Restricted Depth of Field Depth Maps in which Depth Estimation Precision Varies |
US20150103192A1 (en) * | 2013-10-14 | 2015-04-16 | Qualcomm Incorporated | Refocusable images |
Also Published As
Publication number | Publication date |
---|---|
CN105323460A (en) | 2016-02-10 |
KR20150139783A (en) | 2015-12-14 |
JP6362433B2 (en) | 2018-07-25 |
CN105323460B (en) | 2018-09-18 |
KR101761105B1 (en) | 2017-07-25 |
US9936121B2 (en) | 2018-04-03 |
DE102015108601A1 (en) | 2015-12-17 |
JP2015230541A (en) | 2015-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI538508B (en) | Image capturing system obtaining scene depth information and focusing method thereof | |
JP6887960B2 (en) | Systems and methods for autofocus triggers | |
US9727585B2 (en) | Image processing apparatus and method for controlling the same | |
US9619886B2 (en) | Image processing apparatus, imaging apparatus, image processing method and program | |
US9380281B2 (en) | Image processing apparatus, control method for same, and program | |
US10109036B2 (en) | Image processing apparatus, control method for same, and program that performs image processing for image data having a focus state that is changeable | |
US10356381B2 (en) | Image output apparatus, control method, image pickup apparatus, and storage medium | |
US9936121B2 (en) | Image processing device, control method of an image processing device, and storage medium that stores a program to execute a control method of an image processing device | |
JP4952768B2 (en) | Imaging apparatus and image analysis computer program | |
US10332259B2 (en) | Image processing apparatus, image processing method, and program | |
US9319579B2 (en) | Image processing apparatus, control method, and program for the same with focus state specification and deletion confirmation of image data | |
CN104915948B (en) | The system and method for selecting two-dimentional interest region for use scope sensor | |
US10339665B2 (en) | Positional shift amount calculation apparatus and imaging apparatus | |
JP6916627B2 (en) | Imaging device and its control method | |
JP7373297B2 (en) | Image processing device, image processing method and program | |
US10530985B2 (en) | Image capturing apparatus, image capturing system, method of controlling image capturing apparatus, and non-transitory computer-readable storage medium | |
WO2016042721A1 (en) | Positional shift amount calculation apparatus and imaging apparatus | |
JP6584091B2 (en) | Electronic device and display control method | |
JP2016163073A (en) | Electronic equipment and display control method | |
JP2014016687A (en) | Image processing apparatus, image processing method, and program | |
JP2017022464A (en) | Image processing device and image processing method | |
JP2017175400A (en) | Image processing device, imaging apparatus, control method, and program | |
JP2017220767A (en) | Focusing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CANON KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KAWAKAMI, TAKASHI;REEL/FRAME:036483/0533 Effective date: 20150514 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20220403 |