200917828 九、發明說明: 【發明所屬之技術領域】 本發明關於一種攝像設備及一對焦狀況顯示方法,更 明確地係,關於特別是在一自動對焦操作中顯示對焦狀況 之攝像設備’以及一對焦狀況顯示方法。 【先前技術】 用於顯示指示一對象是否對焦的資訊之方法已揭示如 下。 曰本專利申請公開案第6_丨1 3 1 84號提出一攝像設備, 其顯示一長條圖,該長條圖顯示在電子取景器下面的對焦 狀況的開關狀態,以便使用者可檢查一對焦狀況。 日本專利申請早期公開案第6-30 1098號提出一攝像設 備,其顯示一對焦輔助計量器,以便使用者可檢查一對象 是否在其焦深範圍內。 日本專利申請公開案第2002- 3 1 1 4 89號提出一攝像設 備,其可改變十字形目標標記的顯示顔色或顯示格式,以 便使用者可檢查一對焦/未對焦狀況。 【發明內容】 然而,以上專利文獻所說明之攝像設備不利地以每位 使用者不易瞭解的方式顯示指出一對象對焦的資訊。 特別地,在考慮像是兒童使用者不熟悉攝像設備之攝 像操作情況下,此種資訊應該以吸引使用者注意及容易一 目暸然之方式顯示。 存在有用於偵測一對象臉部,在受偵測臉部附近顯示 200917828 一方框的技術(參見第28圖),使得受偵測臉部可對焦,但 是當像是不熟悉攝像設備操作的兒童使用者攝像時,會有 無法容易告知使用者該臉部已偵測及該臉部已對焦的問 題。 本發明係有鑑於以上情況而完成者,且本發明之一目 的係提供攝像設備,其能於攝像操作中顯示決定影像是否 對焦或容易決定哪區域對焦之資訊;及一對焦狀況顯示方 法。 爲了達成此目的,本發明第一態樣之攝像設備包含: 攝像裝置,其拍攝對象的影像;影像擷取裝置,其經由攝 像裝置連續擷取代表該對象的影像信號;顯示裝置,其可 根據所擷取影像信號,顯示鏡後測光影像;自動調焦裝置, 其根據所擷取影像信號進行自動調焦,使該對象之對比能 最大化,封焦狀況偵測裝置,其在藉由該自動調焦裝置的 調整之後’偵測該對象的對焦狀況;及顯示控制裝置,其 將用於顯7K對焦資訊的顯示區域合成在顯示裝置的鏡後測 光影像’且亦合成對焦資訊,該對焦資訊至少在當一影像 未對焦時與當一影像回應該對焦狀況偵測裝置所偵測到的 對焦狀況以對焦在顯示區域時之間不同。 根據第一態樣的攝像設備,可拍攝對象的影像,且可 連續擷取代表該對象的影像信號,所以根據所擺取影像信 號’鏡後測光影像可由顯示裝置顯示,其中合成一顯示區 域以顯不封焦資訊。同時’根據所擷取影像信號,進行自 動調焦,以使對象之對比最大化,及偵測該調焦結果。根 200917828 據所偵測對焦狀況,對焦資訊可改變而 該對焦資訊至少包括一未對焦狀況的資 的資訊。這使指出一對象對焦在一期望 使用者谷易瞭解的方式來顯示。 如第一態樣的攝像設備之本發明第 更包含:臉部偵測裝置,由所擷取影像信 及自動調焦裝置,可在臉部偵測裝置偵 偵測到的臉部上進行自動調焦。 根據本發明第二態樣的攝像設備, 擷取影像信號偵測到,且當偵測到臉部 臉部上進行自動調焦。這避免主要對象 何失敗。 於如第二態樣的攝像設備之本發明 備中’顯示控制裝置將在顯示裝置的鏡 的顯示區域合成在接近該臉部偵測裝 置。 根據第三態樣的攝像設備,當偵測 測光影像上顯示的顯示區域合成在接 置。這使指示哪一區域對焦的資訊能夠 的方式顯示。 於如第三態樣的攝像設備之本發明 備中’藉該顯示控制裝置顯示在顯示裝 上的顯示區域具有對話氣球形狀。 根據第四態樣的攝像設備,具有對 合成在顯示區域。 訊與一已對焦狀況 區域的資訊能夠以 二態樣的攝像設備 [號偵測對象臉部; 測到臉部時,在所 該對象臉部可從所 時,在所偵測到的 之未對焦狀況的任 第三態樣的攝像設 後測光影像上顯示 置所偵出臉部的位 到臉部時,將鏡後 近該偵測臉部的位 以使用者容易瞭解 第四態樣的攝像設 置的鏡後測光影像 話氣球形狀的顯示 200917828 區域被合成在鏡後測光影像。這允許以吸引使用者注意的 顯示來告知使用者一對焦狀況。 如第一至第四態樣中任一者的攝像設備,本發明之第 五態樣的攝像設備更包含:儲存裝置,儲存對應對焦狀況 的對焦資訊,該對焦狀況至少包括一未對焦狀況的資訊與 一已對焦狀況的資訊;以及顯示控制裝置,將儲存在該儲 存裝置中的對焦資訊合成於顯示區域。 根據第五態樣的攝像設備,儲存包括未對焦狀況資訊 與已對焦狀況資訊之對焦資訊,及顯示對應於所儲存對焦 資訊中的對焦狀況之對焦資訊。這允許以使用者容易瞭解 方式顯示對焦狀況,且該顯示讓使用者容易判斷該對焦狀 況。 如第五態樣的攝像設備之本發明第六態樣的攝像設備 更包含:輸入裝置,回應對焦狀況而輸入對焦資訊;及儲 存裝置,儲存該輸入裝置所輸入的對焦資訊,且當對應相 同對焦狀況的複數個對焦資訊部分儲存在該儲存裝置時, 
顯示控制裝置從該複數筆對焦資訊部分選擇期望的一筆對 焦資訊部分,及將該筆部分合成在顯示區域。 根據第六態樣的攝像設備,進一步儲存經由輸入裝置 輸入的對焦資訊,以便儲存對應相同對焦狀況的複數筆對 焦資訊部分。當儲存對應相同對焦狀況的複數筆對焦資訊 部分時,可從複數筆對焦資訊部分選擇一筆對焦資訊的期 望部分’以便明確地顯示。此允許根據使用者的偏好進行 客製化。 200917828 在如第五態樣或第六態樣之攝像設備之本發明第七態 樣的攝像設備中’一旦該對焦狀況偵測裝置偵測到調焦, 該顯示控制裝置即可將對焦資訊從一未對焦狀況資訊切換 至一已對焦狀況資訊。 根據第七態樣的攝像設備’在對焦操作之前,顯示一 未對焦狀況資訊,及在調焦期間,將未對焦狀況資訊切換 成已對焦影像的已對焦狀況之資訊,使得當一影像對焦 時,顯示一已對焦狀況的資訊。此允許以吸引使用者注意 的顯示明確告知使用者一對焦狀況。 在如第二至第六態樣之任一者的攝像設備之本發明第 八態樣的攝像設備中,臉部偵測裝置可偵測對象的臉部與 表情,且該顯示控制裝置根據該臉部偵測裝置所偵測到的 表情來合成對焦資訊在顯示區域。 根據第八態樣的攝像設備,其偵測一對象的臉部與表 情,及根據所偵到的表情,顯示對焦資訊。此允許藉由吸 引使用者注意的顯示,以易於瞭解的方式告知使用者一對 焦狀況。 在如第一至第八態樣之任一者的攝像設備之本發明第 九態樣的攝像設備中,該顯示控制裝置回應對焦狀況偵測 裝置所偵測到的結果,變更顯示區域的尺寸。 根據第九態樣的攝像設備,回應一對焦狀況,改變顯 示區域的尺寸。此允許以易於瞭解的方式告知使用者一對 焦狀況。 根據本發明之第十態樣的攝像設備包括:攝像裝置, 200917828 其可拍攝對象的影像;影像擷取裝置,其可經由攝像裝置 連續擷取代表對象的影像信號;顯示裝置,其可根據所擷 取影像信號顯示一鏡後測光影像;自動調焦裝置,其根據 所擷取影像信號,進行對象的期望區域的自動調焦;對焦 狀況偵測裝置,其在自動調焦裝置的調整之後,偵測對象 的對焦狀況;動畫影像產生裝置,其產生具有下列特徵之 至少一者的動畫影像:可變位置;可變尺寸;及可變形狀, 及至少在當影像未對焦與當一影像已對焦之間顯示不同影 像;及顯示控制裝置’其回應由對焦狀況偵測裝置所偵測 到的的對焦狀況,將該動畫影像合成爲鏡後測光影像。 根據第十態樣之攝像設備,其拍攝一對象的影像,及 連續擷取代表對象的一影像信號’俾根據該擷取的影像信 號’顯示一鏡後測光影像。同時,根據所擷取影像信號, 對對象的期望區域進行自動調焦’且然後偵測該對象的對 焦狀況’俾回應所偵測到的對焦狀況,將動畫影像合成爲 鏡後測光影像。動畫影像具有下列特徵之至少一者:可變 位置;可變尺寸;及可變形狀,且預先產生包括至少在當 〜影像未對焦與一影像已對焦之間顯示的不同影像。在自 動。周焦裝置中,通常’ 一對比af系統可用於調焦,以使一 封象之期望區域之對比最大化,但是可使用其他系統。此 允許指出一對象的期望區域對焦的資訊能夠以使用者容易 瞭解的方式來顯示。 如第十態樣的攝像設備之根據本發明第十一態樣的攝 f象設備更包括:臉部偵測裝置,從所擷取影像信號偵測對 -10- 200917828 象的臉部;及自動調焦裝置,於該臉部偵測裝置偵測到臉 部時’在該偵測的臉部上進行自動調焦。 根據第十一態樣的攝像設備,從所擷取影像信號偵測 到對象的臉部’且當偵測到臉部時,偵測的臉部會被對焦。 此避免一主要對象的未對焦狀況的任何失敗。 在如第十與第態樣之任一者的攝像設備之本發明 第十二態樣的攝像設備中,該顯示控制裝置回應該對焦狀 況偵測裝置所偵測到的對焦狀況,改變動畫影像的色調、 明亮、與飽和度之至少一者。 根據第十二態樣之攝像設備,回應所偵測到的對焦狀 況,改變動畫影像的色調、明亮、與飽和度之至少一者。 此使對焦狀況能以使用者容易瞭解的方式顯示。 在如第十至第十二態樣之任一者的攝像設備之本發明 第十三態樣的攝像設備中,該動畫影像產生裝置產生具有 同心顯示的複數個框之動畫影像,且該等複數個框具有不 同尺寸及旋轉直到該對焦狀況偵測裝置偵測到一對焦狀 況,且具有彼此相同尺寸的複數個框,且當該對焦狀況偵 測裝置偵測到一對焦狀況時便停止。 根據第十三態樣之攝像設備,一具有複數個同心框的 動畫影像係合成爲鏡後測光影像,且該等框具有不同尺 寸,並旋轉直到偵測到一對焦狀況爲止,且具有彼此相等 的尺寸,且當偵測到一對焦狀況時便停止旋轉。該等框可 具有不同形狀,包括任何幾何配置(像是一圓形、一橢圓 形、與一矩形)與一心形。此允許以吸引使用者注意的顯示 200917828 明確告知使用者一對焦狀況。 在如第十至第十二態樣之任一者的攝像設備之本發明 第十四態樣的攝像設備中,該動畫影像產生裝置可產生具 
有複數個框的動畫影像,該等框能以彼此不同方向旋轉, 直到該對焦狀況偵測裝置偵測到一對焦狀況。此種在不同 方向的複數個框旋轉允許更清楚告知使用者一對焦狀況。 在如第十至第十二態樣之任一者的攝像設備之本發明 第十五態樣的攝像設備中,該動畫影像產生裝置產生動畫 影像,其具有能夠以預定角速度,在預定方向中連續旋轉 的框,直到對焦狀況偵測裝置偵測一對焦狀況。 根據第十五態樣之攝像設備,該等框係以預定角速 度,在預定方向中連續旋轉,直到偵測到一對焦狀況爲止。 此以吸引使用者注意的顯示明確告知使用者―對焦;(犬丨兄。 在如第十至第十二態樣之任一者的攝像設備之本發明 第十六態樣的攝像設備中,該動畫影像產生裝置產生_動 畫影像,其具有接近期望區域連續搖動的框,直到對焦狀 況偵測裝置偵測到一對焦狀況爲止。此允許以吸引{吏# 注意的顯示明確告知使用者一對焦狀況。 在如第十至第十二態樣之任一者的攝像設備之第十七 態樣的攝像設備中,該顯示控制裝置改變框與回應對焦狀 況偵測裝置所偵測對焦狀況進行調焦的區域之間的距離, 且當對焦狀況偵測裝置偵測到一已對焦狀況時,框會在已 進行自動調焦的整個區域上重疊及顯示。 200917828 根據第十七態樣的攝像設備,回應對焦狀況,改變框 與已進行調焦的區域之間的距離,且當對焦狀況偵測裝置 偵測到一已對焦狀況時,該框會重疊及顯示在已進行調焦 的區域。此允許指示哪一區域對焦的資訊能以使用者容易 瞭解的方式加以顯示。 在如第十至第十二態樣之任一者的攝像設備之本發明 第十八態樣的攝像設備中,該動畫影像產生裝置可產生一 動物耳朵之動畫影像’其回應對焦狀況偵測裝置所偵測到 之對焦狀況而改變姿態’以便當影像未對焦時耳下垂,且 當一影像對焦時耳伸出’且該顯示控制裝置可使動畫影像 重疊及顯示在已進行調焦的區域。 根據本發明之第十八態樣的攝像設備,一動畫影像合 成爲一鏡後測光影像,其中一動物的耳朵回應一對焦狀況 改變姿態,以便在回應一對焦狀況時,當影像未對焦時耳 下垂,且當影像已對焦時耳伸出。此可讓使用者被明確告 知一對焦狀況。 在如第十八態樣之任一者的攝像設備之本發明第十九 態樣的攝像設備中,該自動調焦裝置進行一對象臉部的調 焦,且該顯示控制裝置使動畫影像重疊及顯示在對象的整 個臉部上。 根據第十九態樣的攝像設備,—動物的耳朵變更姿態 的動畫會重疊及顯示在-偵測對㈣整個臉部,其允許明 確告知使用者一對焦狀況。 在如第十至第十二態樣之任一者的攝像設備之本發明 -1 3 - 200917828 第一十態樣的攝像設備中,該動畫影像產生裝置產生一動 畫影像,其能回應該對焦狀況偵測裝置所偵測到的對焦狀 況來顯示動物的不同部分,以便當對象的期望區域未對 焦時,只顯不動物的一部分動畫,且當對象的期望區域已 對焦時,顯示整隻動物,且該顯示控制裝置使動畫影像重 疊及顯示在已進行自動調焦的整個區域上。 根據第—十態樣的拍攝攝像設備,回應—對焦狀況而 顯示一動物的不同部分的一動畫影像會重疊及顯示在已進 行自動調焦的整個區域上,以便當該對象的期望區域未對 焦時,只顯示像是一動物部分特徵的動畫,且當該對象的 期望區域已對焦日寺,顯示整個特徵。&允許能夠以使用者 谷易瞭解的方式顯示一對焦狀況。 在如第十至第十二態樣之任一者的攝像設備之本發明 第一十一態樣的攝像設備中該動畫影像產生裝置產生一動 畫影像,其回應該對焦狀況偵測裝置所偵測到的對焦狀 況’顯示本質可飛翔的一動物的不同狀態,以便當該對象 的期望區域未對焦時,顯示一飛翔動物,且當該對象的期 望區域已對焦時,顯示一棲息的動物,且當對焦狀況偵測 裝置偵測到一已對焦狀況時’該顯示控制裝置使此飛翔動 物的動畫影像位在接近已進行自動調焦的區域。 根據本發明的第二十一態樣的攝像設備,一動畫影像 被合成在接近已進行自動調焦的區域之一位置,所以當該 對象的期望區域未對焦時,顯示一飛翔動物的動畫影像, 且昌該對象的期望區域已對焦時,動物停止飛翔且位在接 -14- 200917828 近已進行自動調焦的區域處。此允許能夠以使用者容易瞭 解的方式顯示一對焦狀況。此外’此讓使用者知道動物所 在位置是否對焦’即是’能夠以使用者容易瞭解的方式知 道對焦區域所在位置。 於如第十至第十二態樣之任一者的攝像設備之本發明 第二十二態樣的攝像設備中,該動畫影像產生裝置產生— 動畫影像,其回應該對焦狀況偵測裝置所偵測到的對焦狀 況,顯示一不同開花階段,所以當對象的期望區域未對焦 時,顯示一花蕾’且當對象的期望區域已對焦時,顯示— 朵盛開的花,且該顯示控制裝置使動畫影像顯示在接近已 進行自動調焦的區域之一位置。 根據本發明之第二十二態樣的攝像設備,回應一對焦 狀況顯示不同開花階段之一動畫影像被合成在接近已進行 
自動調焦的區域之一位置,所以當對象的期望區域未對焦 時’顯示一花蕾’且當對象的期望區域已對焦時,顯示花 蕾盛開。此允許能以使用者容易瞭解的方式顯示一對焦狀 況。 於如第十至第十一態樣之任一者的攝像設備之本發明 第二十二態樣的攝像設備中’該動畫影像產生裝置產生一 對話氣球的動畫影像,其回應該對焦狀況偵測裝置所偵測 到的對焦狀況而具有不同尺寸,且該顯示控制裝置使動書 影像顯示在接近已進行自動調焦的區域之—位置。 根據本發明之第二十三態樣的攝像設備,回應一對焦 狀況而有不同尺寸的一對話氣球的動畫影像被合成在接近 -15- 200917828 已進行自動調焦的區域之—位置。此允許能以使用者容易 瞭解的方式顯示一對焦狀況。 在如第二十三態樣的攝像設備之本發明第二十四態樣 的攝像設備中’該動畫影像產生裝置可產生一對話氣球的 動畫影像’其至少在當該對象的期望區域已對焦時與當該 對象的期望區域未對焦時之間具有不同影像。 根據本發明之第二十四態樣的攝像設備,回應一對焦 狀況’至少在當該對象的期望區域已對焦時與當該對象的 期望區域未對焦時之間,具有不同尺寸與不同影像的對話 氣球的動畫影像被合成在接近已進行自動調焦的區域之一 位置。此允許能夠以使用者容易瞭解的方式顯示一對焦狀 況。 根據本發明之第二十五態樣,一種對焦狀況顯示方法 包括下列步驟:連續擷取對象的一影像信號之步驟;顯示 步驟’根據所擷取影像信號,顯示一鏡後測光影像;自動 調焦步驟’根據所擷取影像信號,對該對象的期望區域進 行自動調焦;偵測步驟’偵測該調焦狀況;及合成步驟, 將用於顯示對焦資訊的顯示區域合成在鏡後測光影像,亦 將對焦資訊合成在顯示區域,該對焦資訊至少在當該對象 的期望區域未對焦時與當回應偵測的已對焦狀況以使對象 的期望區域對焦時之間不同。 根據本發明之第二十六態樣,一種對焦狀況顯示方法 包括:連續擷取對象的影像信號之步驟;顯示步驟,根據 所擷取影像信號顯示一鏡後測光影像;自動調焦步驟,根 -16- 200917828 據所擷取影像信號’對對象的期望區域進行自動焦到調 整’偵測步驟,偵測該調焦條件;及合成步驟,回應所偵 測到的調焦狀況’將顯示對焦狀況的一動畫影像合成爲該 通過鏡頭影像。 在如第二十六態樣的對焦狀況顯示方法之本發明第二 十七態樣的對焦狀況顯示方法中,該自動調焦步驟更包 括:臉部偵測步驟’從所擷取影像信號偵測對象的臉部之 步驟;及自動調焦步驟’對所偵測臉部進行自動調焦。 在如第二十六態樣的對焦狀況顯示方法之本發明第二 十八態樣的對焦狀況顯示方法中,該將動畫影像合成爲通 過鏡頭影像之步驟更包括: 產生具有下列至少一特徵的動畫影像之步驟:可變位 置;可變尺寸;及可變形狀,及顯示至少當對象的期望區 域未對焦時與當對象的期望區域對焦時之間的一不同影 像:及 將所產生動畫影像合成爲通過鏡頭影像之步驟。 在如第二十八態樣的對焦狀況顯示方法之本發明第二 十九態樣的對焦狀況顯示方法中,在回應所偵測調焦狀 況,改變動畫影像的色調、明亮與飽和度之至少一者之後, 該合成動畫影像的步驟將動畫影像合成爲鏡後測光影像。 根據本發明’容易決定一影像是否對焦、或容易決定 哪一區域對焦的資訊可以一攝像操作加以顯示。 【實施方式】 現在,將根據附圖,詳細解說根據本發明達成照像機 -17- 200917828 之較佳實施例。 <第一實施例> 第1圖爲顯示根據本發明第一實施例的攝像設備實施 例之正透視圖。第2圖爲該攝像設備實施例之後視圖。該 攝像設備爲一數位相機’其於攝像元件接收鏡後測光,並 將該光轉換成數位信號而儲存在一儲存媒體中。 數位相機1 〇具有方型盒狀,橫向長形的照像機本體 1 2 ’且如第1圖所示,該照像機本體1 2具有在前端上的鏡 頭14、電子閃光燈1 6、取景器1 8、自拍定時燈20、AF輔 助燈22、閃光燈調整感應器24及其類似者。照像機本體 1 2亦具有在頂端上的快門鈕26、電源/模式開關28、模式 旋鈕30及其類似者。如第2圖所示,照像機本體12更具 有在背部的監視器3 2、接目鏡3 4、喇叭3 6、縮放鈕3 8、 十字形鈕 40、MENU/ΟΚ 鈕 42、一 DIS P 鈕 4 4、一 B A C K 鈕 46及其類似者。 照像機本體1 2具有下表面(未顯示),其設有螺紋孔, 供用於可打開/可關閉護蓋下的三腳架、電池與記憶體卡 槽,且電池與記憶體卡分別裝入電池盒與記憶體卡槽。 鏡頭14配置成具有可伸縮收進的伸縮鏡頭,並當使用 電源/模式開關2 8設定攝像模式時,可從照像機本體1 2向 外延伸。縮放機構與鏡頭1 4的可伸縮收進機構係根據已知 技術,且該等特定結構將不會在下面詳細解釋。 電子閃光燈1 6包括發光部分,其配置成能在水平方向 
與垂直方向中擺動’以便閃光燈可朝主要對象輻射。電子 -18- 200917828 問光燈1 6的結構將在下面詳細解釋。 取景器1 8爲可決定將被攝像的對象之直通窗。 自拍定時燈20例如由LED形成,並在按下快門鈕26 (稍 後解釋)之後,隔一段時間,使用供攝像的自拍定時燈’在 攝像時發光。 AF輔助燈22例如由高亮度LED形成,並回應一 AF 而發光。 如稍後將說明,閃光燈調整感應器24調整電子閃光燈 f ·. ' 1 6的光量。 快門鈕26爲具有所謂「半按」與「全按」的兩段開關。 快門鈕26的「半按」導致AE/AF操作,且快門鈕26的「全 按」促使數位相機1 0進行攝像。 電源/模式開關28作用於如開啓/關閉數位相機1 〇之電 源開關,亦作用於設定數位相機1 0的模式之模式開關,並 滑動地配置在「關閉位置」、「再現位置」、與「攝像位置」 .之間。當電源/模式開關28滑至與「再現位置」或「攝像 K. 位置」對齊時’數位相機10會啓動,且當電源/模式開關 28與「關閉位置」對齊時’則會關閉。電源/模式開關28 與「再現位置」對齊導致設定成「再現模式」,且與「攝像 位置」對齊時,則導致設定成「攝像模式」。 模式旋鈕3 0作用於如攝像模式設定裝置,可設定數位 相機1 0的攝像模式,且模式旋鈕的設定位置允許數位相機 1 〇的攝像模式改變成不同模式。模式包括例如:「自動攝 像模式」,供自動設定數位相機10的光圈一快門速度及其 200917828 類似者;「動態攝像模式」,供拍攝動態影像;「人像攝像模 式」,其適合拍攝人的影像;「運動攝像模式」,其適合拍攝 移動對象的影像「景觀攝像模式」,其適合拍攝景觀的影 像,「夜景攝像模式」’其適合拍攝夜景的影像;「光圈優先 攝像模式」,其中攝影師設定光圈校準,且數位相機自 動設定快門速度;「光圏速度優先攝像模式」,其中攝影師 故疋快門速度’且數位相機10自動設定光圈校準;「手動 攝像模式」,其中攝影師設定光圈、快門速度及其類似者; 及「人偵測攝像模式」,其中一人可自動被偵測,且閃光燈 朝向此人發光,這將在稍後詳細解釋。 監視器32提供彩色顯示的液晶顯示器。監視器32用 來作爲影像顯示面板,供在再現模式中顯示拍攝影像,亦 用來作爲使用者界面顯示面板,供不同設定操作。此外, 在攝像模式中’鏡後測光影像會依需要顯示,將監視器3 2 當作檢查視角之電子取景器來使用。 , 當模式旋鈕30及其類似者啓動語音輸出時,喇叭36 輸出像是聲音與蜂鳴聲之預定聲響。 縮放鈕3 8作用於指定縮放縮放指定裝置,並包括:放 大鈕38T,其指定朝向望遠鏡端的縮放;及縮小鈕38W, 其指定朝向寬角端的縮放。在數位相機1 0中,在攝像模式 屮,放大鈕38T與縮小鈕38W的操作可使鏡頭14的對焦長 度改變。同時,於再現模式中,放大鈕3 8 T與縮小鈕3 8 W 的操作導致增加或減少再現影像的尺寸。 十字形鈕40作用於方向指定裝置’透過該方向指定裝 -20 - 200917828 置’可輸入上、下、左、與右四方向之指定,並可用來選 擇例如選單功能監視器之選單功能項目。 MENU/ΟΚ鈕42作用於按鈕(MENU按鈕),其指定從每 一模式的正常監視器至一功能選單螢幕之切換,且作用於 按鈕(◦ K鈕),其指定選定內容的決定、處理的進行及其類 似者。 DISP鈕44作用於指定在監視器32上的顯示器的開關 之一按鈕,且在攝像期間,按下DISP鈕44,可使在監視 器32上的顯示切換從on—導框顯示—〇FF。於再現期間, 按下DISP鈕44可使顯示切換從正常再現—無文字的再現 —多再現。 BACK鈕46作用於指定取消輸入操作或返回一先前操 作狀態之按鈕。 第3圖爲顯示該數位相機1〇內部的圖解結構之方塊 圖。 如第3圖所示,數位相機1 〇係配置有c P U 1 1 0、操作 部(快門鈕26、電源/模式開關28、模式旋鈕30、縮放鈕3 8、 十字形鈕 40、MENU/ΟΚ 鈕 42、DISP 鈕 44、BACK 鈕 46 及 其類似者)112、ROM 116、EEPR0M 118、記憶體 120、VRAM 122、攝像兀件 124、時序產生器(TG, “Timing Generator”) U6、類比處理部(CDS/AMP) 1 28、A/D轉換器1 30、影像輸 入控制部132、影像信號處理部1 3 4、視訊編碼器1 3 6、文 字MIX部138、AF偵測部140、AE/AWB偵測部142節、光 圈驅動部144、鏡頭驅動部146、壓縮與解壓縮處理部148、 200917828 媒體控制部150、儲存媒體152、臉部偵測部i54、閃光燈 調整控制部1 6 0及其類似者。 CPU 110係根據從操作部112輸入的操作信號的預定 
控制程式來整體控制整台數位相機1 〇。 經由匯流排114連接至CPU 1 10的R0M丨丨6儲存CPU 1 1 0進行的控制程式及用於控制所需的不同資料,且 EEPROM 118儲存與數位相機10的操作有關的不同設定資 訊,像是使用者設定資訊。記憶體(SDRAM) 120用來作爲供 C PU 1 1 0計算的一區域,亦用來作爲影像資料及其類似者 的暫時儲存區域’而VRAM 122則用來作爲只有影像資料 的暫時儲存區域。 攝像元件124配置有彩色CCD,該彩色CCD具有預定 濾色器的陣列’且電子式拍攝由鏡頭1 4形成的一對象的影 像。時序產生器(TG)126輸出時序訊號,用於回應來自CPU 1 1 0的命令,以驅動攝像元件1 2 4。 類比處理部128取樣及保持(關聯倍抽樣處理)每像素 的R、G和B信號’作爲從攝像元件1 2 4輸出的影像信號, 亦放大該等信號以輸出至A/D轉換器130。 A/D轉換器1 30將從類比處理部1 28輸出的類比R、G 和B信號轉換成數位R、G和B信號,並輸出該等信號。 影像輸入控制部1 3 2將從A / D轉換器1 3 0輸出的數位 R、G和B信號輸出至記憶體1 20。 影像信號處理部1 34包括:同步電路(處理電路藉由補 償在單CCD上的濾色陣列中彩色信號的空間轉向以同時轉 -22 - 200917828 換彩色信號)、白平衡補償電路、灰度校正電路、輪廓校正 電路、亮度/色差信號產生電路及其類似者,並根據來自CPU 1 1 0的命令,進行期望信號處理,輸入影像信號以產生影像 資料(YUV資料),包括亮度資料(Y資料)與色差資料(Cr和 Cb資料)。 視訊編碼器1 3 6根據來自C PU 1 1 0的命令控制在監視 器3 2上的顯示。亦即,根據來自C PU 110的命令,視訊編 碼器1 3 6將輸入影像信號轉換成在監視器3 2上顯示的視訊 信號(例如’ NTSC信號、PAL信號與SCAM信號),及輸出 信號至監視器32,並依需要,亦將由文字MIX部138合成 的預定文字與繪圖資訊輸出至監視器3 2。 A F偵測部1 4 0配置有:高通濾波器,其只通過g信號 高頻組件;絕對値處理部;AF區域偵測部,其移除在預定 對焦區域中的信號(例如,螢幕的中央部分);及整合部, 其整合絕對値資料在AF區域。 AE/AWB偵測部142根據來自CPU 1 10的命令,使用輸 入影像信號來計算AE控制與AWB控制所需的物理量。例 如’如AE控制所需的物理量、藉由將一監視器分成複數個 區域(例如,1 6 X 1 6)所獲得每區域的R、G和b影像信號的 積分値。 光圈驅動部144與鏡頭驅動部146根據來自cpu 11〇 的命令控制攝像元件1 2 4的驅動部1 2 4 A,並控制攝像鏡頭 1 4與光圈1 5的操作。 壓縮與解壓縮處理部148根據來自CPU 11〇的命令來 -23 - 200917828 進行輸入影像資料的預定樣式的壓縮處理,並產生壓縮的 影像資料。壓縮與解壓縮處理部148亦根據來自CPU 1 10 的命令來進行輸入壓縮影像資料的預定樣式解壓縮處理’ 並產生解壓縮的影像資料。 媒體控制部150根據來自CPU 110的命令來控制載入 —媒體槽的儲存媒體1 5 2的資料之讀取/寫入。 臉部偵測部1 54根據來自CPU 1 1 0的命令,從輸入影 像資料萃取影像的臉部區域,並偵測該區域的位置(例如, 1 " 臉部區域的重心),臉部區域的萃取係例如藉由從原始影像 萃取膚色資料、及沿著判定具有膚色的光學測點來萃取資 料集來進行。用於從一影像萃取臉部區域包括下列其他已 知方法包括:藉由將光度資料轉換成色調與飽和來決定一 臉部區域、及產生轉換的色調與飽和的二度空間矩形圖供 分析之方法;藉由萃取對應一人臉部形狀的一臉部候選區 域來決定一臉部區域、及根據在區域中的特徵量來決定一 臉部區域之方法;藉由從一影像萃取人臉輪廓以決定臉部 I 區域之方法;藉由準備具有人臉形的複數個樣本、計算在 樣本與一影像之間的關聯、及根據該關聯値決定一臉部候 選區域之方法,且可使用此等之任一者於萃取。 對焦狀況顯示產生部156產生一對話氣球,其中顯示 文字與符號。C P U 1 1 0可識別一 A F的狀態’其係由A F偵 測部1 4 0進行,並送出一命令至對焦狀況顯示產生部1 5 6。 根據來自CPU 110的命令’對焦狀況顯示產生部156產生 對應至A F狀態的文字或繪圖。然後’根據臉部偵測部1 5 4 -24 - 200917828 所偵測到的臉部位置資訊,CPU 1 1 0送出一命令至文字MIX 部138’顯示接近臉部的對焦狀況顯示產生部156所產生的 顯示。對焦狀況顯示產生部1 5 6所產生的顯示將稍後詳細 解釋。 閃光燈調整控制部1 6 
0根據來自C P U 1 1 0的命令來控 制電子閃光燈1 6的發光。 其次,如上述配置的本實施例的數位相機1 〇之操作將 在下面解釋。 首先,一般攝像與記錄處理的程序將在下面解釋。如 上述,數位相機1 0藉由使電源/模式開關28與一攝像位置 對齊,設定在一攝像模式,並可拍攝一影像。攝像模式的 設定可使鏡頭1 4向外延伸,建立供攝像的備用狀態。 在攝像模式下,一對象光通過鏡頭14,經由光圈15, 對焦在攝像元件1 24的一光接收面上。攝像元件1 24的光 接收面具有透過紅(R)、綠(G)與藍(B)濾色器二維空間排列 的許多光二極體(光接收元件),其係以預定陣列結構加以 排列(例如,貝爾圖形(B a y e r P a 11 e r η)、與 G條絞斑圖(G Stripe Pattern))。通過透鏡14的對象光由光二極體之每一 者接收,且會被轉換成對應入射光量的信號電荷量。 根據從時序產生器(TG) 126給予的驅動脈衝,連續讀出 在每一光二極體中累積的信號電荷,作爲對應信號電荷的 電壓信號(影像信號),其將被加入類比處理部(CDS/AMP) 128°BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an image pickup apparatus and a focus condition display method, and more particularly to an image pickup apparatus that displays a focus condition, particularly in an autofocus operation, and a focus Status display method. [Prior Art] A method for displaying information indicating whether or not an object is in focus has been disclosed as follows. An image pickup apparatus is proposed in the patent application publication No. 6_丨1 3 1 84, which displays a long bar graph showing the on-off state of the focus state under the electronic viewfinder so that the user can check one Focus condition. Japanese Patent Application Laid-Open No. Hei 6-30 1098 proposes an image pickup apparatus which displays a focus assisting meter so that the user can check whether an object is within its depth of focus. Japanese Patent Application Laid-Open Publication No. 2002-3-1140849 proposes a camera device which can change the display color or display format of the cross-shaped target mark so that the user can check a focus/unfocused condition. SUMMARY OF THE INVENTION However, the image pickup apparatus described in the above patent document disadvantageously displays information indicating that an object is in focus in a manner that is difficult for each user to understand. 
In particular, considering camera operation by users unfamiliar with the image pickup apparatus, such as children, such information should be displayed in a manner that attracts the user's attention and is easy to grasp at a glance. There is a known technique that detects a subject's face and displays a box near the detected face (see Fig. 28) so that the detected face can be brought into focus; however, when the photographer is, for example, a child unfamiliar with camera operation, there is a problem that the user cannot easily be told that the face has been detected and that the face is in focus. The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image pickup apparatus capable of displaying, during an image pickup operation, information that makes it easy to determine whether an image is in focus or which area is in focus, and a focus condition display method. In order to achieve this object, an image pickup apparatus according to a first aspect of the present invention includes: an imaging device that captures an image of a subject; an image capturing device that continuously captures, via the imaging device, an image signal representing the subject; a display device that displays a through-the-lens image based on the captured image signal; an automatic focusing device that performs automatic focusing based on the captured image signal so as to maximize the contrast of the subject; a focus condition detecting device that detects the focus condition of the subject after adjustment by the automatic focusing device; and a display control device that combines a display area for displaying focus information with the through-the-lens image on the display device, and also combines focus information into that area, the focus information differing at least between when an image is out of focus and when an image has been brought into focus in the display area in response to the focus condition detected by the focus condition detecting device. According to the image pickup apparatus of the first aspect, an image of the subject can be captured and an image signal representing the subject can be captured continuously, so that a through-the-lens image based on the captured image signal can be shown by the display device, with a display area for focus information combined into it. At the same time, automatic focusing is performed based on the captured image signal so as to maximize the contrast of the subject, and the result of the focusing is detected. According to the detected focus condition, the focus information can change, and it includes at least information for an unfocused condition and information for a focused condition. This allows information indicating that the subject is in focus in a desired area to be displayed in a manner the user can easily understand. The image pickup apparatus of the second aspect of the present invention, as in the first aspect, further includes a face detecting device that detects the subject's face from the captured image signal, and the automatic focusing device performs automatic focusing on the face detected by the face detecting device. According to the image pickup apparatus of the second aspect, the subject's face is detected from the captured image signal, and when a face is detected, automatic focusing is performed on it. This avoids the failure of leaving the main subject out of focus. In the image pickup apparatus of the third aspect of the present invention, as in the second aspect, the display control device combines the display area shown on the through-the-lens image of the display device at a position close to the face detected by the face detecting device. According to the image pickup apparatus of the third aspect, when a face is detected, the display area shown on the through-the-lens image is combined close to the position of the detected face. This allows information indicating which area is in focus to be displayed in a manner the user can easily understand. In the image pickup apparatus of the fourth aspect of the present invention, as in the third aspect, the display area displayed on the display device by the display control device has a dialog balloon shape.
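The third and fourth aspects place a balloon-shaped display area next to the detected face on the through-the-lens image. By way of a non-limiting illustration (the patent specifies behavior, not code, and every name below is invented), a minimal Python sketch of anchoring such an overlay above a face box and clamping it to the frame:

```python
def balloon_anchor(face_box, balloon_size, frame_size):
    """Place a balloon overlay just above a detected face box.

    face_box:     (x, y, w, h) of the detected face
    balloon_size: (bw, bh) of the balloon graphic
    frame_size:   (fw, fh) of the through-the-lens image
    Returns the top-left (bx, by) of the balloon, clamped to the frame.
    Hypothetical helper; the patent does not define this computation.
    """
    x, y, w, h = face_box
    bw, bh = balloon_size
    bx = x + w // 2 - bw // 2      # center the balloon over the face
    by = y - bh                    # sit it on top of the face box
    bx = max(0, min(bx, frame_size[0] - bw))
    by = max(0, min(by, frame_size[1] - bh))
    return bx, by
```

In practice the display control device would composite the balloon onto the live-view image at the returned position; this only computes where it goes.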
According to the image pickup apparatus of the fourth aspect, a display area having a dialog balloon shape is combined with the through-the-lens image. This allows the user to be informed of the focus condition by a display that attracts the user's attention. The image pickup apparatus of the fifth aspect of the present invention, as in any of the first to fourth aspects, further includes a storage device that stores focus information corresponding to focus conditions, including at least information for an unfocused condition and information for a focused condition, and the display control device combines the focus information stored in the storage device into the display area. According to the image pickup apparatus of the fifth aspect, focus information including unfocused-condition information and focused-condition information is stored, and the focus information corresponding to the detected focus condition is displayed from the stored focus information. This allows the focus condition to be displayed in a manner that is easy for the user to understand, and the display makes it easy for the user to judge the focus condition.
The image pickup apparatus of the sixth aspect of the present invention, as in the fifth aspect, further includes an input device through which focus information is input in response to a focus condition, and a storage device that stores the focus information input through the input device; when a plurality of pieces of focus information corresponding to the same focus condition are stored in the storage device, the display control device selects a desired piece from among them and combines that piece into the display area. According to the image pickup apparatus of the sixth aspect, focus information input via the input device is additionally stored, so that a plurality of pieces of focus information corresponding to the same focus condition can be stored; a desired piece can then be selected from among them for display. This allows customization according to the user's preference. In the image pickup apparatus of the seventh aspect of the present invention, as in the fifth or sixth aspect, once the focus condition detecting device detects that focus has been achieved, the display control device switches the focus information from the unfocused-condition information to the focused-condition information. According to the image pickup apparatus of the seventh aspect, the unfocused-condition information is displayed before the focusing operation and is switched to the focused-condition information during focusing, so that when an image comes into focus, the focused-condition information is displayed. This allows the user to be clearly informed of the focus condition by a display that attracts the user's attention.
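The fifth to seventh aspects amount to a store of focus messages keyed by focus condition, with user-registered alternatives and a switch from the unfocused to the focused message once focus is detected. A non-limiting sketch (all names are invented; the patent specifies behavior, not code):

```python
# Messages keyed by focus condition; each key holds one or more pieces
# of focus information, so the user can register alternatives.
FOCUS_INFO = {
    "unfocused": ["Hold on..."],
    "focused":   ["In focus!"],
}

def register_focus_info(state, message):
    """Sixth aspect: store an extra message for the same focus state."""
    FOCUS_INFO.setdefault(state, []).append(message)

def current_info(state, choice=0):
    """Pick one of the possibly several stored messages for this state.
    The seventh aspect's switch is simply calling this with the new
    state once the focus condition detector reports focus."""
    return FOCUS_INFO[state][choice]
```

For example, the display control device would show `current_info("unfocused")` during focusing and `current_info("focused")` once focus is detected; the user's preferred alternative is selected via `choice`.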
In the image pickup apparatus of the eighth aspect of the present invention, as in any of the second to sixth aspects, the face detecting device can detect the subject's face and expression, and the display control device combines focus information into the display area according to the expression detected by the face detecting device. According to the image pickup apparatus of the eighth aspect, the subject's face and expression are detected, and focus information is displayed according to the detected expression. This allows the user to be informed of the focus condition in an easy-to-understand manner by a display that attracts the user's attention. In the image pickup apparatus of the ninth aspect of the present invention, as in any of the first to eighth aspects, the display control device changes the size of the display area in response to the result detected by the focus condition detecting device. According to the image pickup apparatus of the ninth aspect, the size of the display area is changed in response to the focus condition. This allows the user to be informed of the focus condition in an easy-to-understand manner. An image pickup apparatus according to a tenth aspect of the present invention includes: an imaging device that captures an image of a subject; an image capturing device that continuously captures, via the imaging device, an image signal representing the subject; a display device that displays a through-the-lens image based on the captured image signal; an automatic focusing device that performs automatic focusing on a desired area of the subject based on the captured image signal; a focus condition detecting device that detects the focus condition of the subject after adjustment by the automatic focusing device; an animated image generating device that generates an animated image having at least one of a variable position, a variable size, and a variable shape, and showing different images at least between when an image is out of focus and when an image is in focus; and a display control device that combines the animated image with the through-the-lens image in response to the focus condition detected by the focus condition detecting device. According to the image pickup apparatus of the tenth aspect, an image of a subject is captured, and an image signal representing the subject is captured continuously so that a through-the-lens image is displayed based on the captured image signal. At the same time, automatic focusing is performed on a desired area of the subject based on the captured image signal, the focus condition of the subject is then detected, and the animated image is combined with the through-the-lens image in response to the detected focus condition. The animated image has at least one of a variable position, a variable size, and a variable shape, and is generated in advance so as to include different images shown at least between when an image is out of focus and when an image is in focus. In the automatic focusing device, a contrast AF system is typically used for focusing so as to maximize the contrast of the desired area of the subject, but other systems may be used. This allows information indicating that the desired area of the subject is in focus to be displayed in a manner the user can easily understand. The image pickup apparatus of the eleventh aspect of the present invention, as in the tenth aspect, further includes a face detecting device that detects the subject's face from the captured image signal, and the automatic focusing device performs automatic focusing on the detected face when the face detecting device detects a face.
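The tenth aspect's description notes that a contrast AF system typically drives the focusing, and the embodiment elsewhere builds its focus value by integrating the absolute high-pass output of the G signal over the AF area. A minimal sketch of that idea, assuming a simple horizontal-difference high-pass and an exhaustive scan over lens positions (real systems use hill-climbing and hardware filters; all names here are invented):

```python
def focus_value(region):
    """Contrast measure in the spirit of the embodiment's AF detection:
    integrate the absolute high-pass output (here, a horizontal
    difference) of the G channel over the AF area.
    `region` is a 2-D list of G values."""
    return sum(abs(row[i + 1] - row[i]) for row in region
               for i in range(len(row) - 1))

def contrast_af(capture_region, positions):
    """Pick the lens position at which the AF area shows maximum
    contrast. `capture_region(p)` returns the AF-area G values with the
    lens at position p. A scan over `positions` stands in for the real
    search strategy."""
    return max(positions, key=lambda p: focus_value(capture_region(p)))
```

The sketch below simulates an edge that is sharpest at lens position 3, so the scan settles there; in a camera, `capture_region` would read freshly sampled sensor data for each lens step.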
According to the image pickup apparatus of the eleventh aspect, the subject's face is detected from the captured image signal, and when a face is detected, the detected face is brought into focus. This avoids the failure of leaving the main subject out of focus. In the image pickup apparatus of the twelfth aspect of the present invention, as in the tenth or eleventh aspect, the display control device changes at least one of the hue, brightness, and saturation of the animated image in response to the focus condition detected by the focus condition detecting device. According to the image pickup apparatus of the twelfth aspect, at least one of the hue, brightness, and saturation of the animated image is changed in response to the detected focus condition. This allows the focus condition to be displayed in a manner that is easy for the user to understand. In the image pickup apparatus of the thirteenth aspect of the present invention, as in any of the tenth to twelfth aspects, the animated image generating device generates an animated image having a plurality of concentrically displayed frames; the frames have different sizes and rotate until the focus condition detecting device detects a focused condition, and take the same size as one another and stop when the focused condition is detected. According to the image pickup apparatus of the thirteenth aspect, an animated image having a plurality of concentric frames is combined with the through-the-lens image; the frames have different sizes and rotate until a focused condition is detected, then become equal in size and stop rotating. The frames may have various shapes, including any geometric figure (such as a circle, an ellipse, or a rectangle) or a heart shape. This allows the user to be clearly informed of the focus condition by a display that attracts the user's attention. In the image pickup apparatus of the fourteenth aspect of the present invention, as in any of the tenth to twelfth aspects, the animated image generating device can generate an animated image having a plurality of frames that rotate in directions different from one another until the focus condition detecting device detects a focused condition. Such rotation of a plurality of frames in different directions informs the user of the focus condition even more clearly. In the image pickup apparatus of the fifteenth aspect of the present invention, as in any of the tenth to twelfth aspects, the animated image generating device generates an animated image having frames that rotate continuously in a predetermined direction at a predetermined angular velocity until the focus condition detecting device detects a focused condition. According to the image pickup apparatus of the fifteenth aspect, the frames rotate continuously in a predetermined direction at a predetermined angular velocity until a focused condition is detected. This clearly informs the user of the focus condition by a display that attracts the user's attention. In the image pickup apparatus of the sixteenth aspect of the present invention, as in any of the tenth to twelfth aspects, the animated image generating device generates an animated image having a frame that sways continuously near the desired area until the focus condition detecting device detects a focused condition. This allows the user to be clearly informed of the focus condition by a display that attracts the user's attention.
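The thirteenth to fifteenth aspects animate concentric frames that rotate until focus is achieved and then stop at equal size. The per-tick state such an animation could compute can be sketched as follows (the frame count, size steps, and angular velocity are illustrative values, not from the patent):

```python
def frame_states(t, in_focus, n_frames=3, omega=90.0):
    """Return a list of (size_scale, angle_degrees) pairs, one per
    concentric frame, at time t seconds.

    While out of focus: frames of different sizes rotate at the fixed
    angular velocity `omega`, offset from one another.
    Once in focus: all frames take the same size and stop rotating.
    """
    if in_focus:
        return [(1.0, 0.0)] * n_frames
    return [(1.0 + 0.3 * i, (omega * t + 30 * i) % 360)
            for i in range(n_frames)]
```

A renderer would call this every display tick and draw each frame centered on the auto-focused area at the returned scale and rotation.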
In the image pickup apparatus of the seventeenth aspect of the present invention, as in any of the tenth to twelfth aspects, the display control device changes the distance between the frame and the area being focused in response to the focus condition detected by the focus condition detecting device, and when the focus condition detecting device detects a focused condition, the frame is superimposed and displayed over the entire area on which automatic focusing has been performed. According to the image pickup apparatus of the seventeenth aspect, the distance between the frame and the focused area changes in response to the focus condition, and when a focused condition is detected, the frame is superimposed and displayed on that area. This allows information indicating which area is in focus to be displayed in a manner that is easy for the user to understand. In the image pickup apparatus of the eighteenth aspect of the present invention, as in any of the tenth to twelfth aspects, the animated image generating device can generate an animated image of an animal's ears that changes posture in response to the focus condition detected by the focus condition detecting device, so that the ears droop while an image is out of focus and stand up when an image is in focus, and the display control device superimposes and displays the animated image on the area that has been focused. According to the image pickup apparatus of the eighteenth aspect, an animated image in which an animal's ears change posture in response to the focus condition is combined with the through-the-lens image, so that the ears droop while the image is out of focus and stand up once the image is in focus. This allows the user to be clearly informed of the focus condition.
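The seventeenth aspect shrinks the gap between the indicator frame and the auto-focused area as focus improves, until the frame overlaps that area exactly. Assuming the focus condition detector yields a 0.0 to 1.0 confidence (an assumption; the patent defines no such scale), a sketch:

```python
def frame_box(af_box, focus_level):
    """Return the indicator frame as (x, y, w, h), closing in on the
    auto-focused area `af_box` as `focus_level` rises from 0.0 to 1.0.
    At focus_level 1.0 the frame coincides with the AF area, matching
    the seventeenth aspect's overlap-on-focus behavior. The 40-pixel
    maximum gap is an illustrative value."""
    x, y, w, h = af_box
    margin = int(round((1.0 - focus_level) * 40))  # gap when defocused
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)
```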
In the image pickup apparatus of the nineteenth aspect of the present invention, as in the eighteenth aspect, the automatic focusing device performs focusing on the subject's face, and the display control device superimposes and displays the animated image on the subject's entire face. According to the image pickup apparatus of the nineteenth aspect, the animation of the animal's ears changing posture is superimposed and displayed on the entire detected face, which allows the user to be clearly informed of the focus condition. In the image pickup apparatus of the twentieth aspect of the present invention, as in any of the tenth to twelfth aspects, the animated image generating device generates an animated image that shows different parts of an animal in response to the focus condition detected by the focus condition detecting device, so that only part of the animal is shown while the desired area of the subject is out of focus and the whole animal is shown when the desired area is in focus, and the display control device superimposes and displays the animated image on the entire area on which automatic focusing has been performed. According to the image pickup apparatus of the twentieth aspect, an animated image showing different parts of an animal in response to the focus condition is superimposed and displayed on the entire automatically focused area, so that only part of the animal character is shown while the desired area of the subject is out of focus, and the whole character is shown when the desired area is in focus. This allows the focus condition to be displayed in a manner that is easy for the user to understand.
In an image pickup apparatus according to a twenty-first aspect of the present invention, in the image pickup apparatus of any one of the tenth to twelfth aspects, the animated image generating device generates an animated image that shows different states of an animal capable of flying in response to the focus condition detected by the focus condition detecting device, so that when the desired area of the object is out of focus, a flying animal is displayed, and when the desired area of the object is in focus, a perched animal is displayed, and when the focus condition detecting device detects an in-focus condition, the display control device places the animal's animated image in a position close to the area where automatic focusing has been performed. According to the image pickup apparatus of the twenty-first aspect, an animated image is synthesized in a position close to the area where automatic focusing has been performed, so that when the desired area of the object is not in focus, an animated image of a flying animal is displayed, and when the desired area of the object is in focus, the animal stops flying and perches in the vicinity of the area where automatic focusing has been performed. This allows the focus condition to be displayed in a manner that is easy for the user to understand. In addition, the user can tell from the position of the animal which area is in focus, that is, the position of the focus area can be known in a manner that is easy for the user to understand.
In an image pickup apparatus according to a twenty-second aspect of the present invention, in the image pickup apparatus of any one of the tenth to twelfth aspects, the animated image generating device generates an animated image that shows different flowering stages in response to the focus condition detected by the focus condition detecting device, so that when the desired area of the object is out of focus, a flower bud is displayed, and when the desired area of the object is in focus, a blooming flower is displayed, and the display control device displays the animated image in a position close to the area where automatic focusing has been performed. According to the image pickup apparatus of the twenty-second aspect, in response to the focus condition, an animated image of different flowering stages is synthesized in a position close to the area where automatic focusing has been performed, so that when the desired area of the object is not in focus, a flower bud is displayed, and when the desired area of the object is in focus, the flower is displayed in full bloom. This allows the focus condition to be displayed in a manner that is easy for the user to understand. In an image pickup apparatus according to a twenty-third aspect of the present invention, in the image pickup apparatus of any one of the tenth to twelfth aspects, the animated image generating device generates an animated image of a dialog balloon whose size differs in response to the focus condition detected by the focus condition detecting device, and the display control device displays the animated image in a position close to the area where automatic focusing has been performed.
According to the image pickup apparatus of the twenty-third aspect, an animated image of a dialog balloon whose size differs in response to the focus condition is synthesized in a position close to the area where automatic focusing has been performed. This allows the focus condition to be displayed in a way that is easy for the user to understand. In an image pickup apparatus according to a twenty-fourth aspect of the present invention, in the image pickup apparatus of the twenty-third aspect, the animated image generating device generates an animated image of a dialog balloon that differs at least between when the desired area of the object is in focus and when the desired area of the object is not in focus. According to the image pickup apparatus of the twenty-fourth aspect, in response to the focus condition, an animated image of a dialog balloon having a different size and a different image at least between when the desired area of the object is in focus and when it is not in focus is synthesized in a position close to the area where automatic focusing has been performed. This allows the focus condition to be displayed in a manner that is easy for the user to understand.
According to a twenty-fifth aspect of the present invention, a focus condition display method includes: a capturing step of continuously capturing an image signal of an object; a display step of displaying a through-the-lens image based on the captured image signal; an auto focus step of automatically focusing on a desired area of the object based on the captured image signal; a detecting step of detecting the focus condition; and a synthesizing step of synthesizing a display area for displaying focus information into the through-the-lens image and synthesizing the focus information in the display area, the focus information differing at least between when the desired area of the object is out of focus and when the desired area of the object is brought into focus in response to the detected focus condition. According to a twenty-sixth aspect of the present invention, a focus condition display method includes: a capturing step of continuously capturing an image signal of an object; a display step of displaying a through-the-lens image based on the captured image signal; an auto focus step of automatically focusing on a desired area of the object based on the captured image signal; a detecting step of detecting the focus condition; and a synthesizing step of synthesizing, in response to the detected focus condition, an animated image showing the focus condition into the through-the-lens image. In a focus condition display method according to a twenty-seventh aspect of the present invention, in the focus condition display method of the twenty-sixth aspect, the auto focus step further comprises a face detection step of detecting the object's face from the captured image signal, and the auto focus step automatically focuses on the detected face.
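The loop of the twenty-fifth aspect (capture, detect the focus condition, synthesize focus information into the display) can be illustrated with a minimal sketch. All helper names, the contrast proxy, and the threshold below are assumptions for illustration only; the patent does not prescribe any particular implementation.

```python
# Illustrative sketch only: names and the contrast proxy are assumptions,
# not taken from the patent.

def contrast_measure(region):
    """Proxy for an AF evaluation value: sum of absolute differences
    between neighbouring pixels (higher means sharper)."""
    return sum(abs(a - b) for a, b in zip(region, region[1:]))

def focus_display_pipeline(frames, threshold):
    """For each captured frame, detect the focus condition and synthesize
    the matching focus information ('?' while out of focus, 'OK' in focus)."""
    overlays = []
    for frame in frames:
        in_focus = contrast_measure(frame) >= threshold  # detecting step
        overlays.append("OK" if in_focus else "?")       # synthesizing step
    return overlays

# Low-variation frames read as out of focus; the high-contrast one as in focus.
frames = [[10, 11, 10, 11], [10, 12, 11, 10], [0, 255, 0, 255]]
print(focus_display_pipeline(frames, threshold=100))  # ['?', '?', 'OK']
```

The point of the sketch is only that the synthesized focus information differs between the out-of-focus and in-focus states, as the twenty-fifth aspect requires.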
In a focus condition display method according to a twenty-eighth aspect of the present invention, the step of synthesizing the animated image into the through-the-lens image further comprises: a step of generating an animated image that has at least one of a variable position, a variable size, and a variable shape, and that displays a different image at least between when the desired area of the object is out of focus and when it is in focus; and a step of synthesizing the generated animated image into the through-the-lens image. In a focus condition display method according to a twenty-ninth aspect of the present invention, in the focus condition display method of the twenty-eighth aspect, at least one of the hue, brightness, and saturation of the animated image is changed in response to the detected focus condition before the synthesizing step synthesizes the animated image into the through-the-lens image. According to the present invention, information that makes it easy to determine whether an image is in focus, or which area is in focus, can be displayed during an imaging operation. [Embodiment] A preferred embodiment of a camera according to the present invention will now be explained in detail with reference to the drawings. <First Embodiment> Fig. 1 is a front perspective view showing an embodiment of an image pickup apparatus according to a first embodiment of the present invention, and Fig. 2 is a rear view of the same embodiment. The image pickup apparatus is a digital camera that receives object light with an imaging element and converts the light into a digital signal for storage on a storage medium. The digital camera 10 has a laterally elongated, box-shaped camera body 12, and as shown in Fig. 1, the front of the camera body 12 carries a lens 14, an electronic flash 16, a viewfinder 18, a self-timer lamp 20, an AF assist lamp 22, a flash adjustment sensor 24, and the like.
The top of the camera body 12 carries a shutter button 26, a power/mode switch 28, a mode dial 30, and the like. As shown in Fig. 2, the back of the camera body 12 carries a monitor 32, an eyepiece 34, a speaker 36, a zoom button 38, a cross button 40, a MENU/OK button 42, a DISP button 44, a BACK button 46, and the like. The lower surface of the camera body 12 (not shown) is provided with a threaded tripod hole and, under an openable/closable cover, a battery compartment and a memory card slot, into which a battery and a memory card are respectively loaded. The lens 14 is configured as a collapsible zoom lens that extends outward from the camera body 12 when the imaging mode is set with the power/mode switch 28. The zoom mechanism and the collapsing mechanism of the lens 14 follow known techniques, and their specific structures are not explained in detail here. The electronic flash 16 includes a light emitting portion configured to be swingable in the horizontal and vertical directions so that the flash can be directed toward the main object; the structure of the electronic flash 16 is explained in detail later. The viewfinder 18 is a through window for determining the object to be imaged. The self-timer lamp 20 is formed, for example, of an LED and, when the self-timer is used for photographing, emits light during the interval between the press of the shutter button 26 (explained later) and image capture. The AF assist lamp 22 is formed, for example, of a high-brightness LED and emits light during AF as needed. The flash adjustment sensor 24 is used to adjust the amount of light of the electronic flash 16, as will be described later. The shutter button 26 is a two-stage switch with so-called "half press" and "full press" positions.
A "half press" of the shutter button 26 triggers the AE/AF operation, and a "full press" causes the digital camera 10 to take a picture. The power/mode switch 28 serves both as a power switch for turning the digital camera 10 on and off and as a mode switch for setting the mode of the digital camera 10, and slides among an "off position", a "reproduction position", and an "imaging position". The digital camera 10 is turned on when the power/mode switch 28 is slid to the "reproduction position" or the "imaging position", and turned off when it is slid to the "off position". Aligning the power/mode switch 28 with the "reproduction position" sets the "reproduction mode", and aligning it with the "imaging position" sets the "imaging mode". The mode dial 30 serves as an imaging mode setting device for setting the imaging mode of the digital camera 10, and depending on the position at which the mode dial is set, the imaging mode of the digital camera 10 changes to a different mode. The modes include, for example, an "auto imaging mode" for automatically setting the aperture, shutter speed, and the like of the digital camera 10; a "movie mode" for shooting moving pictures; and a "portrait mode" suitable for shooting images of people.
Further modes include a "sports mode" suitable for shooting moving objects; a "landscape mode" suitable for shooting landscapes; a "night scene mode" suitable for shooting night scenes; an "aperture priority mode", in which the photographer sets the aperture value and the digital camera 10 automatically sets the shutter speed; a "shutter speed priority mode", in which the photographer sets the shutter speed and the digital camera 10 automatically sets the aperture value; a "manual mode", in which the photographer sets the aperture, shutter speed, and the like; and a "person detection mode", in which a person can be detected automatically and the flash is directed toward the person, as will be explained in detail later. The monitor 32 is a liquid crystal display capable of color display. The monitor 32 is used as an image display panel for displaying captured images in the reproduction mode and as a user interface display panel for various setting operations. In the imaging mode, the through-the-lens image is displayed as needed, so the monitor 32 is also used as an electronic viewfinder for checking the angle of view. The speaker 36 outputs predetermined audio such as voices and beeps when voice output is activated by the mode dial 30 and the like. The zoom button 38 serves as a zoom specifying device and comprises a tele button 38T, which specifies zooming toward the telephoto end, and a wide button 38W, which specifies zooming toward the wide-angle end. In the digital camera 10, operating the tele button 38T and the wide button 38W in the imaging mode changes the focal length of the lens 14, while operating them in the reproduction mode enlarges or reduces the reproduced image.
The cross button 40 serves as a direction specifying device for inputting the four directions up, down, left, and right, and is used, for example, to select menu items on a menu screen displayed on the monitor. The MENU/OK button 42 serves both as a button (MENU button) for specifying the switch from the normal screen of each mode to the menu screen and as a button (OK button) for specifying confirmation of the selected content, execution of processing, and the like. The DISP button 44 serves as a button for specifying switching of the display on the monitor 32: during recording, pressing the DISP button 44 switches the monitor display between on and off, and during reproduction, pressing it switches between normal reproduction and reproduction without text. The BACK button 46 serves as a button for specifying cancellation of an input operation or a return to the previous operating state. Fig. 3 is a block diagram showing the schematic internal structure of the digital camera. As shown in Fig. 3, the digital camera 10 comprises a CPU 110, an operation unit 112 (the shutter button 26, power/mode switch 28, mode dial 30, zoom button 38, cross button 40, MENU/OK button 42, DISP button 44, BACK button 46, and the like), a ROM 116, an EEPROM 118, a memory 120, a VRAM 122, an imaging element 124, a timing generator (TG) 126, an analog processing unit (CDS/AMP) 128, an A/D converter 130, a video input control unit 132, a video signal processing unit 134, a video encoder 136, a character MIX unit 138, an AF detecting unit 140, an AE/AWB detecting unit 142, a diaphragm driving unit 144, a lens driving unit 146, a compression and decompression processing unit 148, a media control unit 150, a storage medium 152, a face detecting unit 154, a flash adjustment control unit 160, and the like.
The CPU 110 integrally controls the entire digital camera 10 in accordance with a predetermined control program, based on operation signals input from the operation unit 112. The ROM 116, connected to the CPU 110 via the bus 114, stores the control program executed by the CPU 110 and various data required for control, and the EEPROM 118 stores various setting information related to the operation of the digital camera 10, such as user setting information. The memory (SDRAM) 120 is used as a working area for calculations by the CPU 110 and as a temporary storage area for image data and the like, and the VRAM 122 is used as a temporary storage area dedicated to image data. The imaging element 124 is a color CCD provided with color filters in a predetermined array and electronically captures the image of the object formed by the lens 14. The timing generator (TG) 126 outputs timing signals for driving the imaging element 124 in response to commands from the CPU 110. The analog processing unit 128 samples and holds (by correlated double sampling) the R, G, and B signals of each pixel in the video signal output from the imaging element 124, amplifies them, and outputs them to the A/D converter 130. The A/D converter 130 converts the analog R, G, and B signals output from the analog processing unit 128 into digital R, G, and B signals and outputs them. The video input control unit 132 outputs the digital R, G, and B signals output from the A/D converter 130 to the memory 120.
The video signal processing unit 134 includes a synchronization circuit (a processing circuit that compensates for the spatial offsets of the color signals arising from the color filter array of the single-chip CCD and converts the color signals into simultaneous signals), a white balance compensation circuit, a gradation correction circuit, a contour correction circuit, a luminance/color-difference signal generating circuit, and the like; in accordance with commands from the CPU 110, it performs the required signal processing on the input video signal to generate image data (YUV data) comprising luminance data (Y data) and color-difference data (Cr and Cb data). The video encoder 136 controls the display on the monitor 32 in accordance with commands from the CPU 110. That is, in accordance with commands from the CPU 110, the video encoder 136 converts the input video signal into a video signal for display on the monitor 32 (for example, an NTSC signal, a PAL signal, or a SECAM signal) and outputs it to the monitor 32, and also outputs predetermined characters and drawing information synthesized by the character MIX unit 138 to the monitor 32 as needed. The AF detecting unit 140 comprises a high-pass filter that passes only the high-frequency component of the G signal; an absolute value processing unit; an AF area extracting unit that cuts out the signal portion within a predetermined focus area (for example, the center of the screen); and an integrating unit that integrates the absolute value data within the AF area. The AE/AWB detecting unit 142 calculates physical quantities required for AE control and AWB control from the input video signal in accordance with commands from the CPU 110. For example, as a physical quantity required for AE control, it calculates the integral values of the R, G, and B video signals for each of a plurality of areas (for example, 16 x 16) into which one screen is divided.
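The AF detecting unit's chain (G-signal high-pass filtering, absolute value processing, AF area extraction, integration) can be sketched in a few lines. The one-dimensional difference filter and the rectangular area bounds below are illustrative assumptions, not the patent's actual circuit.

```python
# Sketch of the AF evaluation chain: high-pass filter the G plane, take
# absolute values, and integrate over the AF area. The simple horizontal
# difference used as the high-pass filter is an assumption for illustration.

def af_evaluation(g_plane, area):
    """g_plane: 2-D list of G pixel values; area: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = area
    total = 0
    for r in range(r0, r1):
        for c in range(c0, c1 - 1):
            # difference of adjacent pixels = crude high-pass; abs + sum = integrate
            total += abs(g_plane[r][c + 1] - g_plane[r][c])
    return total

# A sharper (higher-contrast) patch yields a larger evaluation value.
blurry = [[10, 11, 10, 11]] * 4
sharp = [[0, 200, 0, 200]] * 4
print(af_evaluation(sharp, (0, 4, 0, 4)) > af_evaluation(blurry, (0, 4, 0, 4)))  # True
```

Contrast AF then simply drives the focus lens to the position at which this evaluation value reaches its local maximum.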
The diaphragm driving unit 144 and the lens driving unit 146, together with the driving unit 124A of the imaging element 124, operate in accordance with commands from the CPU 110 and control the operations of the imaging lens 14 and the aperture 15. The compression and decompression processing unit 148 performs compression processing of a predetermined format on input image data in accordance with commands from the CPU 110 and generates compressed image data; it also performs decompression processing of a predetermined format on input compressed image data in accordance with commands from the CPU 110 and generates decompressed image data. The media control unit 150 controls reading and writing of data on the storage medium 152 loaded in the media slot in accordance with commands from the CPU 110. The face detecting unit 154 extracts the face region of a person from the input image data in accordance with commands from the CPU 110 and detects the position of that region (for example, the center of gravity of the face region). The extraction of the face region is performed, for example, by extracting skin-color data from the original image and extracting the cluster of photometric points having the skin color.
Other known methods of extracting a face region from an image include: a method of determining the face region by converting photometric data into hue and saturation and generating and analyzing a two-dimensional histogram of the converted hue and saturation; a method of extracting face candidate regions corresponding to the shape of a person's face and determining the face region from feature quantities in those regions; a method of determining the face region by extracting the contour of a face from the image; and a method of preparing a plurality of templates having face shapes, calculating the correlation between the templates and the image, and determining the face candidate region from the correlation; any of these may be used for the extraction. The focus condition display generating unit 156 generates the dialog balloon in which characters and symbols are displayed. The CPU 110 recognizes the state of the AF performed by the AF detecting unit 140 and sends commands to the focus condition display generating unit 156, which generates characters or drawings corresponding to the AF state in accordance with those commands. Then, based on the face position information detected by the face detecting unit 154, the CPU 110 sends a command to the character MIX unit 138 to display the result generated by the focus condition display generating unit 156 close to the face. The display generated by the focus condition display generating unit 156 is explained in detail later. The flash adjustment control unit 160 controls the light emission of the electronic flash 16 in accordance with commands from the CPU 110. Next, the operation of the digital camera 10 of this embodiment configured as described above is explained. First, the general imaging and recording processing procedure is explained below.
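As a rough illustration of the skin-color approach described above, the "center of gravity of the face region" can be computed as the centroid of the pixels classified as skin. The pixel representation and the skin predicate here are placeholders, not the patent's actual classification method.

```python
# Illustrative sketch, not the patent's method: classify pixels with a
# caller-supplied skin predicate and return the centroid of the skin region.

def face_centroid(pixels, is_skin):
    """pixels: dict mapping (x, y) -> colour label; is_skin: predicate on colour.
    Returns the (x, y) centre of gravity of the skin-coloured cluster, or None."""
    pts = [(x, y) for (x, y), colour in pixels.items() if is_skin(colour)]
    if not pts:
        return None  # no face region found
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

pixels = {(0, 0): "blue", (2, 2): "skin", (4, 2): "skin"}
print(face_centroid(pixels, lambda c: c == "skin"))  # (3.0, 2.0)
```

In the camera, this centroid is the face position that the CPU 110 passes to the character MIX unit 138 so the dialog balloon can be placed near the face.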
As described above, the digital camera 10 is set to the imaging mode by aligning the power/mode switch 28 with the imaging position, enabling it to take images. Setting the imaging mode causes the lens 14 to extend outward, establishing an imaging standby state. In the imaging mode, object light passes through the lens 14 and is focused on the light-receiving surface of the imaging element 124 via the aperture 15. The light-receiving surface of the imaging element 124 has many photodiodes (light-receiving elements) arranged two-dimensionally behind red (R), green (G), and blue (B) color filters arranged in a predetermined array structure (for example, a Bayer pattern or a G-stripe pattern). The object light passing through the lens 14 is received by each photodiode and converted into an amount of signal charge corresponding to the amount of incident light. The signal charges accumulated in the photodiodes are read out sequentially, in accordance with driving pulses given from the timing generator (TG) 126, as voltage signals (video signals) corresponding to the signal charges, and are fed to the analog processing unit (CDS/AMP) 128.
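The Bayer pattern mentioned above places one of R, G, or B in front of each photodiode in a repeating 2x2 tile, which is why the later "synchronization circuit" must interpolate the missing colors at each site. The GRBG tile orientation below is an assumption for illustration; the patent does not specify it.

```python
# Assumed GRBG Bayer tile (an illustration; orientation not given in the patent):
#     G R
#     B G

def bayer_colour(row, col):
    """Colour filter covering the photodiode at (row, col) for a GRBG tile."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

# Half of all sites are green, matching the eye's higher sensitivity to green.
tile = [[bayer_colour(r, c) for c in range(2)] for r in range(2)]
print(tile)  # [['G', 'R'], ['B', 'G']]
```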
從類比處理部1 28輸出的類似R、G和B信號係被A/D -25 - 200917828 轉換器1 3 0被轉換成數位R、G和B信號,且將被加入影 像輸入控制部1 3 2。影像輸入控制部1 3 2將從A/D轉換器 1 3 0輸出的數位R、G和b信號輸出至記憶體1 2 0。 當所攝影像輸出至監視器3 2時,從影像輸入控制部 1 32輸出至記憶體1 20的影像信號被影像信號處理部1 34 用來產生亮度/色差信號,此等信號被傳送至視訊編碼器 1 3 6。視訊編碼器1 3 6將所輸入亮度/色差信號轉換成供顯 示的信號樣式(例如,NTSC彩色複合視頻信號),且輸出至 1 監視器32。如此’攝像元件1 24所攝影像即顯示在監視器 32上。 影像信號週期性從攝像元件1 2 4擷取,且V R Α Μ 1 2 2 中的影像資料係由從影像信號產生的亮度/色差信號週期 性重寫,以便輸出至監視器3 2,其允許攝像元件1 2 4所攝 影像的即時顯示。攝影師可看見即時顯示在監視器3 2上的 (鏡後測光)影像,以便檢查一攝像的視角。 從V R Α Μ 1 2 2加至視訊編碼器1 3 6的亮度/色差信號依 (: " 需要亦加至文字MIX部1 3 8,所以信號與預定文字或繪圖 合成,且加至視訊編碼器1 3 6。此使需要的攝像資訊重疊顯 示在鏡後測光影像上。 按下快門鈕26開始攝像。當按下快門鈕26至半途時, S10N信號輸入至CPU 110’導致CPU 110進行AE/AF處理。 首先,將攝像元件1 2 4經由影像輸入控制部1 3 2擷取的 影像信號被輸入至A F偵測部1 4 0與A E / A W B偵測部1 4 2。 將AF偵測部140所獲得的整合資料報告給CPU 1 10。 -26- 200917828 C Ρ ϋ 1 1 0控制鏡頭驅動部丨4 6,以移動攝像光學系統中 包含鏡頭1 4的對焦鏡頭組’同時,在複數個a F偵測點計 算對焦評估値(AF評估値),以決定可爲一對焦位置的具有 局部最大評估値之鏡頭位置。然後,爲了要移動該組對焦 鏡頭至獲得的對焦位置,CPU 1 1 0控制鏡頭驅動部1 4 6。 CPU 1 10係根據從AE/AWB偵測部142獲得的積分値, 偵測對象的明亮(對象亮度),計算適於攝像的曝光値(攝像 的EV値)。然後,使用所獲得供攝像及預定程序圖的EV 値來決定光圈値與快門速度,並根據這些値,C P U 1 1 0控 制攝像元件124的電子快門與光圈驅動部144來獲得適當 曝光量。同時’使用所偵測的對象亮度,CPU 110決定是 否需要來自電子閃光燈的發光。 在白平衡自動調整中,AE/AWB偵測部142計算每一劃 分區域的R、G和B信號的每一顏色的平均積分値,並提 供計算結果給CPU 1 1 0。CPU 1 1 0使用所獲得R積分値、所 獲得B積分値及所獲得G積分値來計算每一劃分區域的比 率R/G與比率B/G,以便根據所獲得R/G和B/G値的R/G 和B/G彩色空間分佈來決定一光源類型。例如根據適合決 定光源類型的白平衡値的調整,使每一比率値約爲1 (即 是,在一螢幕中的RGB綜合比率爲R: G: BS1: 1: 1), C P U 1 1 0係在白平衡電路的調整中控制與R、G和B信號有 關的增益値(白平衡校正値)’及修正每一色彩頻道的信號。 如上述,半按快門鈕26促成AE/AF處理。在此處理中, 攝影師依需要操作縮放鈕3 8以藉由調整縮放鏡頭丨4來調 -27 - 200917828 整視角。 在此處理之後,當全按快門鈕26時,S20N信號被輸 入至CPU 1 10,且CPU 110啓動一攝像與記錄處理。即是, 使用根據光學測量結果決定的快門速度與光圈値,使攝像 元件124曝光。在曝光中,當電子閃光燈16發光時,閃光 燈調整控制部1 60控制電子閃光燈1 6的發射。閃光燈調整 控制部1 60切斷電子閃光燈1 6的電流,並當閃光燈調整感 應器24接收預定光量時,停止電子閃光燈1 6的發射。 ! 
從攝像元件124輸出的影像信號由記憶體120透過類 比處理部128、A/D轉換器130、與影像輸入控制部132取 得,並被影像信號處理部1 3 4轉換成一亮度/色差信號而儲 存在記憶體1 2 0中。 將儲存在記憶體1 20中的影像資料加入壓縮與解壓縮 處理部148,並根據待儲存於記憶體120中的預定壓縮格式 (例如,JPEG格式),壓縮成預定影像檔案格式(例如,Exif 格式)之影像檔案,以便經由媒體控制部1 50,記錄在儲存 } '> 媒體1 5 2。 藉由將電源/模式開關28與一再現位置對齊及將數位 相機1 0設定在一再現模式中,記錄在儲存媒體1 5 2中的影 像可以上述方式再現及顯示在監視器3 2上。 當數位相機1 0藉由使電源/模式開關2 8與一再現位置 對齊而設定在一再現模式中時,CPU 1 1 〇輸出命令至媒體 控制部150,以讀出記錄在儲存媒體152中的最近影像檔 案。 -28 - 200917828 將包括在讀取影像檔案中的壓縮影.像資料加入壓縮與 解壓縮處理部148,以便解壓縮成一亮度/色差信號,此信 號係經由視訊編碼器1 3 6輸出至監視器3 2。如此,記錄在 儲存媒體1 5 2中的影像會再現及顯示在監視器3 2上。於再 現中’再現影像的亮度/色差信號亦加入至文字MIX部1 3 8 而與一預定文字或繪圖合成,且其依需要加入視訊編碼器 1 3 6。此使預定攝像資訊重疊在一拍攝影像上,並顯示在監 視器3 2上。 影像的逐框倒帶藉由十字形鈕40的左與右鍵操作進 行,且按下十字形鈕40的右鍵將促成下一影像檔案從儲存 媒體152讀出’其會再現及顯示在監視器32上。按下十字 形鈕40的左鍵將導致一先前影像檔案從儲存媒體1 52讀 出,其會再現及顯示在監視器32上。 在本實施例的數位相機1 0中,爲了對使用者顯示一對 焦狀況,決定一對焦操作的狀態,並提供對應該狀態的顯 示(對焦狀況顯示)。現在,對焦狀況之顯示程序將在下面 解釋。 <對焦狀況顯示的第一實施例> 第4圖爲顯示數位相機1 〇的一對焦狀況顯示處理流程 之流程圖。下列步驟通常係由CPU 1 1 0進行。 在鏡後測光影像顯示在監視器3 2的一攝像備用狀態 中,臉部偵測部1 5 4偵測來自一輸入對象影像的一對象臉 部,且當影像包括一臉部時,進行一臉部區域的萃取及一 臉部位置的偵測(步驟S 1 0)。 -29 - 200917828 決定是(SI ON)否(SI 2)偵測到半按下快門鈕26。當未偵 測到S10N(步驟S12 =撤銷)時,重複步驟S12。 當偵測到S10N(步驟S12 =是)時,決定在步驟S10(步驟 S 1 4)是否從對象影像偵測到一臉。當偵測到一臉(步驟s 1 4 = 是)時,如第5圖所示,一對話氣球將顯示,其爲鄰近由對 焦狀況顯示產生部1 5 6所拍攝臉部區域之一顯示區域,供 顯示對焦資訊(步驟S 1 6),且當對話氣球的內部尙未對焦 時,對焦資訊的文字「?」會顯示(步驟s丨6)。當未偵測到 一臉部(步驟S 14 =否)時,如第6圖所示,一對話氣球係由 對焦狀況顯示產生部156(步驟S18)顯示在監視器的左下角 部分’且當對話氣球的內部尙未對焦時,對焦資訊文字r ?」 會顯示(步驟S20)。 決定是否偵測到全按快門鈕26(S2〇N)。當偵測到S2〇N (步驟S22 =是)時,一AF不能夠正確進行,藉此一組對焦鏡 頭移至一預定位置(步驟S4〇)供—攝像(步驟S42)。 當未偵測到S20N(步驟S22 =否)時,鏡頭驅動部146受 控制以移動該組對焦鏡頭’同時計算在複數個Af偵測點上 的對焦評估値’俾決定具有局部最大評估値的鏡頭位置爲 對焦位置(步驟S 2 4)。然後,爲移動該組對焦鏡頭至所獲 铪封焦位置,鏡頭驅動部丨46受控制啓動該組對焦鏡頭的 移動。The similar R, G, and B signals output from the analog processing unit 1 28 are converted into digital R, G, and B signals by the A/D -25 - 200917828 converter 130, and are added to the video input control unit 13 2. The video input control unit 1 3 2 outputs the digital R, G, and b signals output from the A/D converter 1 30 to the memory 1 220. 
When the captured image is output to the monitor 32, the video signal output from the video input control unit 1 32 to the memory 1 is used by the video signal processing unit 34 to generate a luminance/color difference signal, and the signals are transmitted to the video. Encoder 1 3 6. The video encoder 136 converts the input luminance/color difference signal into a signal pattern for display (for example, an NTSC color composite video signal), and outputs it to the 1 monitor 32. Thus, the image captured by the image pickup device 1 24 is displayed on the monitor 32. The image signal is periodically captured from the image pickup device 1 24, and the image data in VR Α Μ 1 2 2 is periodically rewritten by the luminance/color difference signal generated from the image signal for output to the monitor 32, which allows Instant display of the imaged image of the imaging element 1 2 4 . The photographer can see the (post-mirror) image that is instantly displayed on the monitor 3 2 to check the angle of view of a camera. From VR Α Μ 1 2 2 to the video encoder 1 3 6 brightness/color difference signal (: " needs to be added to the text MIX part 1 3 8, so the signal is combined with the predetermined text or drawing, and added to the video encoding The device 1 3 6. This causes the required camera information to be superimposed on the post-mirror image. The shutter button 26 is pressed to start imaging. When the shutter button 26 is pressed halfway, the S10N signal is input to the CPU 110' causing the CPU 110 to perform AE. /AF processing. First, the image signal captured by the image pickup device 1 2 4 via the image input control unit 1 3 2 is input to the AF detection unit 1 40 and the AE / AWB detection unit 1 4 2 . The integrated data obtained by the department 140 is reported to the CPU 1 10. 
-26- 200917828 C Ρ ϋ 1 1 0 controls the lens driving unit 丨4 6, to move the focus lens group including the lens 14 in the imaging optical system' at the same time, in plural The a F detection point calculates the focus evaluation 値 (AF evaluation 値) to determine the lens position with a local maximum evaluation 可 which can be a focus position. Then, in order to move the focus lens to the obtained focus position, the CPU 1 1 0 Controls the lens drive unit 1 4 6. CPU 1 10 Series According to the integral 値 obtained from the AE/AWB detecting unit 142, the brightness of the object (object brightness) is detected, and an exposure 适于 (image EV 値) suitable for imaging is calculated. Then, the obtained image for imaging and predetermined program map is used. The EV 决定 determines the aperture 値 and the shutter speed, and according to these 値, the CPU 1 10 controls the electronic shutter and the aperture driving unit 144 of the imaging element 124 to obtain an appropriate exposure amount. At the same time, 'using the detected object brightness, the CPU 110 determines Whether or not illumination from the electronic flash is required. In the white balance automatic adjustment, the AE/AWB detecting section 142 calculates the average integral 每一 of each color of the R, G, and B signals of each divided area, and supplies the calculation result to the CPU 1 1 0. The CPU 1 10 0 calculates the ratio R/G and the ratio B/G of each divided region using the obtained R integral 値, the obtained B integral 値, and the obtained G integral , in order to obtain the R/G sum according to the obtained The R/G and B/G color space distribution of B/G値 determines the type of light source. 
For example, according to the adjustment of the white balance 适合 suitable for determining the type of light source, each ratio 値 is about 1 (that is, in a screen) The RGB composite ratio is R: G: BS1: 1: 1), CPU 1 1 0 controls the gain 値 (white balance correction 値)' associated with the R, G, and B signals in the adjustment of the white balance circuit and corrects the signal of each color channel. As described above, pressing the shutter button halfway 26 facilitates AE/AF processing. In this process, the photographer operates the zoom button 38 as needed to adjust the zoom lens 丨4 to adjust the -27 - 200917828 full viewing angle. After this processing, when the shutter button 26 is fully pressed, the S20N signal is input to the CPU 1 10, and the CPU 110 starts an imaging and recording process. That is, the imaging element 124 is exposed by using the shutter speed and the aperture 决定 determined based on the optical measurement result. In the exposure, when the electronic flash 16 emits light, the flash adjustment control unit 160 controls the emission of the electronic flash 16. The flash adjustment control unit 1 60 cuts off the current of the electronic flash 16 and stops the emission of the electronic flash 16 when the flash adjustment sensor 24 receives the predetermined amount of light. The video signal output from the imaging device 124 is transmitted from the memory 120 through the analog processing unit 128, the A/D converter 130, and the video input control unit 132, and is converted into a luminance/color difference signal by the video signal processing unit 134. Stored in memory 1 2 0. 
The image data stored in the memory 120 is supplied to the compression and decompression processing section 148 and compressed according to a predetermined compression format (for example, the JPEG format); the resulting image file in a predetermined format (for example, the Exif format) is stored in the memory 120 and recorded on the storage medium 152 via the media control unit 150. By setting the power/mode switch 28 to the reproduction position and thereby placing the digital camera 10 in the reproduction mode, the images recorded on the storage medium 152 can be reproduced and displayed on the monitor 32 in the manner described above. When the digital camera 10 is set to the reproduction mode by moving the power/mode switch 28 to the reproduction position, the CPU 110 outputs a command to the media control section 150 to read out the most recent image file recorded on the storage medium 152. The compressed image contained in the read image file is supplied to the compression and decompression processing unit 148 and decompressed into a luminance/color-difference signal, which is output to the monitor 32 via the video encoder 136. The image recorded on the storage medium 152 is thus reproduced and displayed on the monitor 32. During reproduction, the luminance/color-difference signal of the reproduced image is also supplied to the character MIX portion 138 to be combined with predetermined text or graphics, which are added via the video encoder 136 as needed. This causes predetermined imaging information to be superimposed on the captured image displayed on the monitor 32. Frame-by-frame advance of images is performed with the left and right buttons of the cross button 40: pressing the right button of the cross button 40 causes the next image file to be read from the storage medium 152 and reproduced and displayed on the monitor 32.
Pressing the left button of the cross button 40 causes the previous image file to be read from the storage medium 152 and reproduced and displayed on the monitor 32. In the digital camera 10 of the present embodiment, in order to show the user the focus condition, the state of the focus operation is determined and a display corresponding to that state (the focus status display) is provided. The focus status display will now be explained below. <First Embodiment of Focus Status Display> Fig. 4 is a flowchart showing the flow of the focus status display process of the digital camera 10. The following steps are normally performed by the CPU 110. In the imaging standby state, with the through-the-lens image displayed on the monitor 32, the face detecting unit 154 detects a target face from the input subject image and, when the image includes a face, extracts the face region and detects the face position (step S10). It is determined whether the half press of the shutter button 26 (S1ON) is detected (step S12). When S1ON is not detected (step S12 = No), step S12 is repeated. When S1ON is detected (step S12 = Yes), it is determined whether a face was detected from the subject image in step S10 (step S14). When a face was detected (step S14 = Yes), as shown in Fig. 5, a dialog balloon for displaying the focus information is displayed by the focus state display generating portion 156 adjacent to the detected face region (step S16), and while the inside of the dialog balloon is not in focus, the focus information text "?" is displayed in it (step S16). When no face was detected (step S14 = No), as shown in Fig. 6, a dialog balloon is displayed by the focus state display generating portion 156 in the lower-left portion of the monitor (step S18), and while the inside of the dialog balloon is not in focus, the focus information text "?" is displayed (step S20).
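The balloon placement of steps S16 and S18 can be sketched as a small helper, assuming rectangles are given as (x, y, w, h) pixel tuples; the 10-pixel offsets and the function name are hypothetical choices, not values from the patent:

```python
def balloon_position(face_rect, screen_w, screen_h):
    """Place the dialog balloon next to the detected face region, or
    in the screen's lower-left corner when no face was found.
    `face_rect` is (x, y, w, h) or None; returns the balloon's
    top-left anchor point."""
    if face_rect is None:
        return (10, screen_h - 60)  # lower-left portion of the screen
    x, y, w, h = face_rect
    return (x + w + 10, y)  # just to the right of the face region
```

A real implementation would also clamp the balloon inside the screen and, as a later variation in the text suggests, could anchor it to a detected mouth instead.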
It is determined whether the full press of the shutter button 26 (S2ON) is detected (step S22). When S2ON is detected at this point (step S22 = Yes), the AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S40) for imaging (step S42). When S2ON is not detected (step S22 = No), the lens driving section 146 is controlled to move the focus lens group while the focus evaluation values are calculated at a plurality of AF detection points, and the lens position with the local maximum evaluation value is determined to be the in-focus position (step S24). Then, to move the focus lens group to the obtained in-focus position, the lens driving unit 146 is controlled to start the movement of the focus lens group.
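Step S24's search for the lens position with the maximum evaluation value is the core of contrast AF. A minimal sketch, assuming the AF detection section is abstracted as an `evaluate` callback that scores a candidate lens position (the callback, the exhaustive sweep, and the function name are illustrative; the camera drives the lens stepwise through the same search):

```python
def find_focus_position(evaluate, positions):
    """Contrast-AF sketch: sweep candidate lens positions, score each
    with a focus-evaluation function (e.g. summed high-frequency
    energy in the AF area), and return the position with the maximum
    evaluation value, taken as the in-focus position."""
    best_pos, best_val = positions[0], evaluate(positions[0])
    for pos in positions[1:]:
        val = evaluate(pos)
        if val > best_val:
            best_pos, best_val = pos, val
    return best_pos
```

The evaluation value peaks where edge contrast is sharpest, so the returned position is where the desired area is in focus.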
It is then determined whether the detected face (or, when no face was detected in step S10, the object in a predetermined area near the center of the image) is in focus (step S26). When it is determined that the desired area is not yet in focus (step S26 = No), the focus lens group continues moving toward the obtained focus point and, at the same time, as shown in Fig. 7, the character "?" displayed in the dialog balloon is rotated so that the character "?" turns into the character "!" (step S32), and step S26 is repeated. When the desired area comes into focus, that is, when the movement of the focus lens group is completed (step S26 = Yes), as shown in Fig. 8, the focus information text "!", indicating that the inside of the dialog balloon is in focus, is displayed (step S28), and the brightness of the image is increased to display the dialog balloon clearly (step S30).
That is, before the focus operation starts (step S20), while the inside of the dialog balloon is not in focus, the focus information text "?" is displayed, and during the focus operation (steps S24 to S32) the character "?" is rotated by 90 degrees to gradually switch the text "?" into the text "!", so that, as the focus processing ends and the inside of the dialog balloon comes into focus, the focus information text "!"
will be displayed (step S28). The character "?" can be rotated at a predetermined rotational speed, at a speed calculated from the approximate time required from the start of focusing to the completion of the focus calculation, or at a rotational speed obtained by calculating the time required for the focus lens group to move to the in-focus position obtained in step S24 and choosing a speed that lets the rotation finish at the same time as the predetermined area comes into focus. It is then determined whether voice output is enabled. When voice output is enabled (step S34 = Yes), a sound indicating that focusing is complete (such as a voice, a melody, a call, or the like) can be output via the speaker 36 (step S36), and it is then determined whether S2ON is detected. When voice output is not enabled (step S34 = No), it is determined whether S2ON is detected (step S38). When S2ON is not detected (step S38 = No), the processing returns to the step of calculating the focus evaluation value (step S24). When S2ON is detected (step S38 = Yes), an image is captured (step S42). According to the present embodiment, displaying a dialog balloon together with the focus information attracts the user's attention and informs the user of the focus information in an easily understandable way. When the AF is not performed correctly, no focus information is displayed, whereas when the AF is performed correctly, the focus information corresponding to the focus condition is displayed; this guides the user through the AF operation. Also, according to the present embodiment, when a face is detected, a dialog balloon or the like is displayed near the face, which shows the user the focus area in an easily understandable way. Moreover, in addition to the display, a sound is output to inform the user that the face is in focus.
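The last timing option above, finishing the 90-degree rotation exactly when the lens reaches the in-focus position, reduces to dividing the remaining rotation angle by the estimated travel time. A sketch under assumed units (lens travel in drive steps, drive speed in steps per second); the parameters and function name are hypothetical:

```python
def rotation_speed(remaining_angle_deg, lens_travel_steps, steps_per_second):
    """Pick an angular speed so the '?' finishes turning into '!'
    exactly when the focus lens reaches the in-focus position.
    Travel time is estimated from the remaining lens travel and the
    lens drive speed; returns degrees per second."""
    travel_time = lens_travel_steps / steps_per_second
    return remaining_angle_deg / travel_time
```

For example, 300 remaining drive steps at 100 steps per second give a 3-second travel, so a 90-degree switch runs at 30 degrees per second and ends just as focusing completes.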
<Second Embodiment of Focus Status Display> In the second embodiment of the focus status display, not only does the text displayed in the dialog balloon respond to the focus condition, but the message also changes in response to the condition of the object. Fig. 9 is a flowchart showing the flow of the focus status display process of the digital camera 10. The following steps are normally performed by the CPU 110. The same components as in the first embodiment are denoted by the same reference numerals and are not described in detail here. In the imaging standby state, with the through-the-lens image displayed on the monitor 32, the face detecting unit 154 detects the face of an object from the input subject image and, when the image includes a face, extracts the face region and detects the face position (step S10). It is determined whether S1ON is detected (step S12). When S1ON is not detected (step S12 = No), step S12 is repeated. When S1ON is detected (step S12 = Yes), it is determined whether a face was detected from the subject image in step S10 (step S14). When a face was detected (step S14 = Yes), a dialog balloon is displayed adjacent to the face region (step S16), and the text "Not yet...", indicating that the inside of the dialog balloon is not in focus, is displayed (step S50). When no face was detected (step S14 = No), a dialog balloon is displayed in the lower-left portion of the screen (step S18), and the focus information text "Not yet...", indicating that the inside of the dialog balloon is not in focus, is displayed (step S50). It is determined whether S2ON is detected (step S22). When S2ON is detected (step S22 = Yes), the focus lens group is moved to a predetermined position (step S40) for imaging (step S42).
When S2ON is not detected (step S22 = No), the calculation of the focus evaluation values and the determination of the in-focus position (the start of the focus operation) are performed (step S24), and it is then determined whether the desired area is in focus (step S26). When it is determined that the desired area is not in focus (step S26 = No), the focus lens group is moved toward the obtained focus point (step S52), and step S26 is repeated. When the desired area is in focus (step S26 = Yes), it is determined whether a face is detected from the focused subject image (step S54). When no face is detected from the subject image (step S54 = No), the focus information text "OK!", indicating that the inside of the dialog balloon is in focus, is displayed (step S56), and the brightness of the image is increased to display the dialog balloon clearly (step S30). When a face is detected from the subject image (step S54 = Yes), the face detecting unit 154 detects the expression of the face detected in step S54 (step S58) and determines whether the expression is a smile (step S60). When the expression is a smile (step S60 = Yes), the focus information text "Shoot this smiling photo!", indicating that the inside of the dialog balloon is in focus, is displayed (step S62), and the brightness of the image is increased to display the dialog balloon clearly (step S30). When the expression is not a smile (step S60 = No), the text "OK!", indicating that the inside of the dialog balloon is in focus, is displayed, and the brightness of the image is increased to display the dialog balloon clearly (step S30). It is determined whether voice output is enabled (step S34); when voice output is enabled (step S34 = Yes), a sound indicating that focusing is complete (such as a voice, a melody, a call, or the like) can be output via the speaker 36 (step S36), and it is then determined whether S2ON is detected (step S38).
When voice output is not enabled (step S34 = No), it is determined whether S2ON is detected (step S38). When S2ON is not detected (step S38 = No), the processing returns to the step of calculating the focus evaluation value (step S24). When S2ON is detected (step S38 = Yes), an image is captured (step S42). According to this embodiment, a sentence is used to announce the focus condition, which informs the user of the focus condition in an easily understandable way and also makes it easy for the user to judge the focus condition. In the embodiments described above, the focus information and the focus conditions are associated with each other and recorded in advance in the ROM 116, so that the focus information is displayed in a dialog balloon; however, the user may also input focus information. In that case, when the user inputs focus information corresponding to a focus condition via the operation unit 112, the input focus information is associated with the focus condition and recorded in the ROM 116, so that, based on the result of determining whether the desired area is in focus (step S26), the focus information corresponding to the focus condition can be selected from the user-input focus information and displayed in a dialog balloon. If a plurality of focus information portions for the same focus condition are recorded in the ROM 116, the most recent portion of the recorded focus information can be selected and displayed automatically, or a portion preset by the user can be displayed. In the embodiments described above, when a face is detected, a dialog balloon or the like is displayed near the face; in addition, as shown in Fig. 10, a frame indicating the detected face can be superimposed and displayed on the face to show more clearly which part is in focus.
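The branching of steps S26, S54, and S60 amounts to a three-way choice of balloon text. A condensed sketch; the English strings stand in for the patent's messages, and the function name is illustrative:

```python
def focus_message(in_focus, face_detected, smiling):
    """Choose the balloon text for the second embodiment's flow:
    'Not yet...' while the desired area is unfocused, a smile prompt
    for a focused smiling face, and 'OK!' otherwise."""
    if not in_focus:
        return "Not yet..."          # steps S26 = No / S50
    if face_detected and smiling:
        return "Shoot this smiling photo!"  # steps S60 = Yes / S62
    return "OK!"                     # steps S54 = No or S60 = No / S56
```

The same table could be extended with user-input messages read from the ROM, as the variation above describes.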
The position where a dialog balloon is displayed is not limited to the vicinity of a face; the face detecting portion 154 can also detect a mouth so that the dialog balloon can be placed as if the words were spoken from the mouth. In the embodiments described above, the face detecting unit 154 detects a face, or a face and its expression, but the face detecting unit 154 can also detect the movement of the face, that is, the movement of the main object, so that focus information responding to the object's movement is displayed. In other words, while the face detecting unit 154 detects movement of the object, displayed focus information such as "Now!" is shaken, and when the movement of the object is no longer detected, that is, when the object stops moving, the shaking of the focus information such as "Now!" is stopped to inform the user that the object has stopped moving. In the embodiments described above, the focus information is displayed in a dialog balloon to inform the user of the focus condition, but the size of the dialog balloon can also be changed to show the focus condition. For example, a small dialog balloon is displayed first (see Fig. 12A), and as the focus operation proceeds the dialog balloon is enlarged (see Fig. 12B) until, when focusing is completed, it has its maximum size (see Fig. 12C). In the embodiments described above, when no face is detected, a dialog balloon is displayed in the lower-left portion of the screen, but other displays can also be used, and an animation of a human face or an animal (such as a bird) can be displayed. For example, as shown in Fig. 11, a bird can be displayed in the lower-left corner of the screen so that a dialog balloon extending from the bird's beak is displayed as if the bird were speaking. Text information and drawing information can also be displayed in a dialog balloon, and an animation corresponding to the focus condition can be displayed in a dialog balloon.
For example, a small face image without a smile can be displayed in the unfocused condition, and a small face image with a smiling face can be displayed in the focused condition. <Second Embodiment> In the first embodiment of the present invention, when a face is detected, a dialog balloon, text information, or the like is displayed close to the face to inform the user of the focus area or the focus condition in an easily understandable way, but the way of showing the user the focus area or the focus condition in an easily understandable manner is not limited to this. In a second embodiment of the present invention, an animated display informs the user of the focus area or the focus condition in an easily understandable way. The second embodiment, a digital camera 11, will now be explained below. The same components as in the first embodiment are denoted by the same reference numerals and are not described here. Fig. 13 is a block diagram showing the schematic internal structure of the digital camera 11.
As shown in Fig. 13, the digital camera 11 includes: the CPU 110; the operation unit 112 (the shutter button 26, the power/mode switch 28, the mode dial 30, the zoom button 38, the cross button 40, the MENU/OK button 42, the DISP button 44, the BACK button 46, and the like); the ROM 116; the EEPROM 118; the memory 120; the VRAM 122; the imaging element 124; the timing generator (TG) 126; the analog processing unit (CDS/AMP) 128; the A/D converter 130; the image input control unit 132; the image signal processing unit 134; the video encoder 136; the AF detection unit 140; the AE/AWB detection unit 142; the aperture driving unit 144; the lens driving unit 146; the compression and decompression processing unit 148; the media control unit 150; the storage medium 152; the face detection unit 154; the flash adjustment control unit 160; the animated image synthesis unit 162; the animated image generation unit 164; and the like.
According to commands from the CPU 110, the video encoder 136 converts the input image signal into a video signal (for example, an NTSC, PAL, or SECAM signal) for display on the monitor 32 and outputs the video signal to the monitor 32, and also outputs, as needed, the predetermined text information or drawing information combined by the animated image synthesis unit 162 to the monitor 32. The animated image generation unit 164 combines a plurality of still images into a moving image (an animation) and produces animated images in formats such as animated GIF or MNG. The CPU 110 recognizes the state of the AF performed by the AF detection unit 140 and sends a command to the animated image generation unit 164. In accordance with the command from the CPU 110, the animated image generation unit 164 selects the still images corresponding to the AF state and generates the animated image. Then, based on the face position information detected by the face detection unit 154, the CPU 110 sends a command to the animated image synthesis unit 162 to display the animated image generated by the animated image generation unit 164. The animated image generation unit 164 can also store in the ROM 116 a program for selecting still images in response to the object state detected by the face detection unit 154 and the like, so that the animated image generation unit 164 can generate an animated image using the program. This allows a more realistic animated image to be produced. The animated image generated by the animated image generation unit 164 will be explained in detail later. Next, the operation of the digital camera 11 of the present embodiment configured as described above will be explained below. In the digital camera 11 of the present embodiment, in order to show the user the focus condition, the state of the focus operation is determined and an animated image corresponding to that state is displayed.
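The command path from the CPU to the animated image generation unit 164 can be modeled as a lookup from AF state to a still-image sequence. A sketch, assuming AF states and frame identifiers are plain strings; both the state names and the frame names are hypothetical:

```python
def select_animation_frames(af_state, frame_sets):
    """Return the still-image sequence matching the current AF state,
    mirroring the command the CPU sends to the animated image
    generation unit. `frame_sets` maps state name -> list of frame
    identifiers to be played in order."""
    try:
        return frame_sets[af_state]
    except KeyError:
        raise ValueError("no animation defined for AF state %r" % af_state)
```

The selected frames would then be assembled into an animated GIF or MNG and handed to the synthesis unit for overlay at the detected face position.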
The procedure for displaying such an animated image will now be explained below. <First Embodiment of Focus Status Display> Fig. 14 is a flowchart showing the flow of the focus status display process of the digital camera 11. The following steps are normally performed by the CPU 110. In the imaging standby state, with the through-the-lens image displayed on the monitor 32, the face detection unit 154 detects an object's face from the input subject image and, when the image includes a face, extracts the face region and detects the face position (step S110). It is determined whether the half press of the shutter button 26 (S1ON) is detected. When S1ON is not detected (step S112 = No), step S112 is repeated. When S1ON is detected (step S112 = Yes), it is determined whether a face was detected from the subject image in step S110 (step S114). When a face was detected (step S114 = Yes), it is determined whether S2ON is detected (step S116). When no face was detected (step S114 = No), it is determined whether S2ON is detected (step S118). When S2ON is not detected (step S118 = No), the face detection step is repeated (step S114), and when S2ON is detected (step S118 = Yes), an image is captured (step S138). When S2ON is detected (step S116 = Yes), the AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S140) for imaging (step S138). When S2ON is not detected (step S116 = No), as shown in Fig. 15A, two circular frames of different sizes are displayed concentrically in different colors at an arbitrary position (for example, the center) of the through-the-lens image (step S120). In this case, the outer circular frame has an area larger than the face region detected in step S114.
When the two circular frames come to have the same area, each circular frame is composed of a plurality of arcs so that the overlap of the two circular frames forms a single circle. While the focus evaluation values are calculated at a plurality of AF detection points, the lens driving unit 146 is controlled to move the focus lens group, so that the lens position with the local maximum evaluation value can be determined to be the in-focus position (step S122). Then, to move the focus lens group to the obtained in-focus position, the lens driving unit 146 is controlled to start the movement of the focus lens group. It is determined whether the face detected in step S114 is in focus (step S124). When it is determined that the face is not yet in focus (step S124 = No), the focus lens group continues moving toward the obtained focus point and, at the same time, the animated image generation unit 164 generates an animated image in which the inner circular frame and the outer circular frame rotate in opposite directions, and the animation is superimposed and displayed on the through-the-lens image (step S126). That is, the animated image generation unit 164 generates an animated image in which, in response to the focus condition, the size of the outer circular frame decreases and the size of the inner circular frame increases, the outer circular frame rotating clockwise at a desired speed and the inner circular frame rotating counterclockwise at a desired speed, so that when the image comes into focus the two sets of arcs have the same size. Generating an animated image in which the circular frames rotate in opposite directions increases the visibility of their rotation. The CPU 110 combines the generated animated image with the through-the-lens image and displays it so that the animated image moves from the position displayed in step S120 to the face detected in step S114. Then, step S124 is repeated.
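The converging outer and inner frames of step S126 can be parameterized by focusing progress. A sketch, assuming linear interpolation of the two radii toward their common final size; the patent only requires that the frames end up the same size when the image is in focus, so the linear schedule is an assumption:

```python
def ring_radii(progress, outer_start, inner_start):
    """Radii of the two concentric frames as focusing proceeds.
    The outer ring shrinks and the inner ring grows linearly with
    `progress` in [0, 1], meeting at their mean radius when the
    image is in focus."""
    target = (outer_start + inner_start) / 2.0
    outer = outer_start + (target - outer_start) * progress
    inner = inner_start + (target - inner_start) * progress
    return outer, inner
```

Driving `progress` from the AF loop (for example, from the fraction of remaining lens travel) makes the two rings coincide exactly when step S124 reports that the face is in focus.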
When the desired area is in focus, that is, when the movement of the focus lens group is completed (step S124 = Yes), as shown in Fig. 15B, the circle formed by the two circular frames, now of the same size and overlapping each other, is superimposed on the face detected in step S114 (step S128). The two circular frames can then be displayed clearly with high brightness (step S130). It is determined whether voice output is enabled (step S132). When voice output is enabled (step S132 = Yes), a sound indicating that focusing is complete (such as a voice, a melody, a call, or the like) can be output via the speaker 36 (step S134), and it is then determined whether S2ON is detected (step S136). When voice output is not enabled (step S132 = No), it is determined whether S2ON is detected (step S136). When S2ON is not detected (step S136 = No), the processing returns to displaying the two circular frames of different sizes (step S120); when S2ON is detected (step S136 = Yes), an image is captured (step S138). According to this embodiment, while the image is not in focus a plurality of rotating frames are displayed, and when the image comes into focus the frames form a stationary circle, which announces both the focusing process and the focused condition in a way the user can easily understand. When the AF is not performed correctly, no frames are displayed, which guides the user through the AF operation. Also, when a face is detected, the frames are superimposed and displayed on the detected face, which shows the user the focus area in an easily understandable way. In addition to the display, a sound is output, indicating to the user, in an easily understandable way, that the area is in focus.
In the present embodiment the displayed frames are circular, but the frames are not limited to circles and may have other shapes, including any geometric figure (such as a triangle, a rectangle, or an ellipse) or an irregular shape such as a heart. This allows the user to be informed of the focus condition by a display that attracts attention. Shapes such as triangles, rectangles, ellipses, and hearts, whose outline visibly changes as they rotate, have the advantage that their stationary state is easier to recognize than that of a circle. Also, in the present embodiment the two circular frames are displayed in colors different from each other, and when the image comes into focus the combined frame can be displayed in a color intermediate between the two. For example, when the outer frame is displayed in blue and the inner frame in yellow, the circle formed by the two frames when the image is in focus is displayed in yellow-green. Displaying the plurality of frames in changing colors in this way increases their visibility, and the color change between the rotating frames and the stationary frame makes it easier to recognize that the frames have stopped. Of course, the two frames may also be displayed in the same color. Further, in the present embodiment visibility is increased by displaying the frame more clearly when the image is in focus, but this is not a limitation; any display that informs the user of the focus condition can be used. For example, the frame may be given a darker color, or a wider line, when the image is in focus. Likewise, in the present embodiment the frames rotate at predetermined rotational speeds, but the rotational speed may be changed in response to the focus condition.
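One simple way to realize an intermediate frame color is a per-channel average of the two frame colors. Note that a naive RGB average of blue and yellow yields gray rather than the yellow-green mentioned above, so a real implementation would blend in a perceptual color space; this sketch only illustrates the per-channel mechanism, with illustrative names:

```python
def intermediate_color(color_a, color_b):
    """Average two RGB colors channel by channel.

    Sketch of the in-focus frame color described in the text: when the two
    differently colored frames merge into one circle, the circle is drawn
    in a color midway between them. A plain RGB average is shown here; a
    perceptual blend (e.g. in HSV) would be needed to get yellow-green
    from blue and yellow.
    """
    return tuple((a + b) // 2 for a, b in zip(color_a, color_b))

blue = (0, 0, 255)
yellow = (255, 255, 0)
print(intermediate_color(blue, yellow))  # -> (127, 127, 127)
```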
The frames can also be rotated at a speed calculated from the time required for the focus lens group to travel from its current position to the in-focus position, so that the rotation ends at the same moment the predetermined area comes into focus. Further, in the present embodiment the plurality of frames rotate in directions different from each other to make the focus state clearer to the user, but the frames may rotate in the same direction; in that case the display is simpler and the amount of processing can be reduced.

<Second Embodiment of the Focus Status Display>

In the second embodiment of the focus condition display, an animation of a shaking rectangular frame is displayed. FIG. 16 is a flowchart showing the focus condition display process of the digital camera 11. The following steps are normally performed by the CPU 110. Components identical to those of the first embodiment are denoted by the same reference numerals and are not described again in detail. It is determined whether S1 ON is detected (step S112); when S1 ON is not detected (step S112 = No), step S112 is repeated. When S1 ON is detected (step S112 = Yes), it is determined whether a face was detected from the subject image in step S110 (step S114). When a face is detected (step S114 = Yes), it is determined whether S2 ON is detected (step S116). When a face is not detected (step S114 = No), it is determined whether S2 ON is detected (step S118); when S2 ON is not detected (step S118 = No), the face detection step is repeated (step S114), and when S2 ON is detected (step S118 = Yes), an image is captured (step S138). When S2 ON is detected at step S116 (step S116 = Yes), AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S140) for image capture (step S138).
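The rotation-speed rule just described (rotation that ends exactly when the predetermined area comes into focus) reduces to dividing the remaining rotation angle by the estimated lens travel time; a minimal sketch under illustrative assumptions:

```python
def rotation_speed_deg_per_s(remaining_angle_deg, lens_travel_time_s):
    """Angular speed so the frame finishes its remaining rotation exactly
    when the focus lens group reaches the in-focus position.

    remaining_angle_deg: rotation still to be displayed (e.g. one full turn).
    lens_travel_time_s: estimated time for the lens to reach the in-focus
    position. Illustrative helper, not from the patent.
    """
    if lens_travel_time_s <= 0:
        return 0.0
    return remaining_angle_deg / lens_travel_time_s

# One full turn spread over an estimated 0.5 s of lens travel:
print(rotation_speed_deg_per_s(360.0, 0.5))  # -> 720.0
```

The travel-time estimate itself would come from the lens drive's known step rate and the distance to the in-focus position.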
When S2 ON is not detected (step S116 = No), as shown in FIG. 17A, an ordinary square frame is displayed at an arbitrary position (for example, the center) of the through-the-lens image (step S220). An example using a single ordinary frame is described below, but as shown in FIG. 17A, one or more auxiliary frames drawn with narrower lines may be displayed in coordination with the main frame (the frame drawn with a wide line). In addition, marks indicating the eyes, the mouth, and the like may be displayed on the detected face. When the focus evaluation values at the plurality of AF detection points have been calculated, the lens driving unit 146 is controlled to move the focus lens group, and the lens position giving the local maximum evaluation value is determined as the in-focus position (step S122). The lens driving unit 146 is then controlled to start moving the focus lens group toward the obtained in-focus position. It is determined whether the face detected in step S114 is in focus (step S124). When it is determined that the face is not yet in focus (step S124 = No), the focus lens group moves toward the obtained in-focus position, and at the same time the moving image generating unit 164 generates an animation in which the ordinary frame keeps moving (that is, the frame shakes), and the animation is superimposed on the through-the-lens image (step S226). That is, the moving image generating unit 164 generates an animation in which the ordinary frame keeps moving at a predetermined speed, in arbitrary directions, within a short distance of a specific area, in response to the focus condition. The CPU 110 combines the generated animation with the through-the-lens image and displays it so that the center of the movement is superimposed on the face detected in step S114. Step S124 is then repeated.
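The shaking-frame animation of step S226 can be sketched as a jitter around the face whose amplitude shrinks with focus progress, so the frame settles on the face exactly when focusing completes; the names and parameters are illustrative assumptions, not from the patent:

```python
import math

def frame_offset(t, progress, max_amplitude=20.0, shake_hz=8.0):
    """Offset (dx, dy) of the shaking frame from the face center at time t.

    The frame jitters around the detected face, and the jitter amplitude
    shrinks as the focus operation progresses (progress runs 0.0 -> 1.0),
    so the frame comes to rest on the face when focusing is complete.
    """
    amplitude = max_amplitude * (1.0 - progress)
    dx = amplitude * math.sin(2 * math.pi * shake_hz * t)
    dy = amplitude * math.cos(2 * math.pi * shake_hz * t)
    return dx, dy

print(frame_offset(0.0, 1.0))  # -> (0.0, 0.0), frame at rest on the face
```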
When the desired area comes into focus, that is, when the movement of the focus lens group is complete (step S124 = Yes), the ordinary square frame is superimposed on the face detected in step S114, as shown in FIG. 17B (step S228). The ordinary frame is then displayed clearly with higher brightness (step S230).
It is then determined whether voice output is enabled (step S132). When voice output is enabled (step S132 = Yes), a sound indicating that focusing is complete (such as a voice, a tune, a chime, or the like) is output through the speaker 36 (step S134), and it is then determined whether S2 ON is detected (step S136). When voice output is not enabled (step S132 = No), it is likewise determined whether S2 ON is detected (step S136). When S2 ON is not detected (step S136 = No), the process returns to the step of displaying the ordinary frame (step S220); when S2 ON is detected (step S136 = Yes), an image is captured (step S138). According to the present embodiment, the moving frame comes to a stop, which informs the user of the focus condition in an easily understood manner. When AF cannot be performed correctly, no frame is displayed, which prompts the user to retry the AF operation. Moreover, when a face is detected, the frame stops moving at the position of the detected face, showing the user the focused area in an easily understood way, and in addition to the display, a sound can be output to tell the user that the area is in focus.
In the present embodiment the displayed frame is an ordinary square frame, but the frame is not limited to this shape and may have other shapes, including any geometric figure (such as a triangle, a rectangle, a polygon, a circle, or an ellipse) or an irregular shape such as a heart. Also, in the present embodiment visibility is increased by displaying the frame more clearly when the image is in focus, but this is not a limitation; any display that informs the user of the focus condition can be used. For example, the frame may be given a darker color, or a wider line, when the image is in focus. Likewise, in the present embodiment the frame moves at a predetermined speed, but the moving speed may be changed in response to the moving distance.

<Third Embodiment of the Focus Status Display>

In the third embodiment of the focus condition display, an animation in which the ears of an animal (for example, a rabbit) move in response to the focus condition is displayed. FIG. 18 is a flowchart showing the focus condition display process of the digital camera 11. The following steps are normally performed by the CPU 110. Components identical to those of the first embodiment are denoted by the same reference numerals and are not described again in detail. It is determined whether S1 ON is detected (step S112); when S1 ON is not detected (step S112 = No), step S112 is repeated. When S1 ON is detected (step S112 = Yes), it is determined whether a face was detected from the subject image in step S110 (step S114). When a face is detected (step S114 = Yes), it is determined whether S2 ON is detected (step S116). When a face is not detected (step S114 = No), it is determined whether S2 ON is detected (step S118); when S2 ON is not detected (step S118 = No), the face detection step is repeated (step S114), and when S2 ON is detected (step S118 = Yes), an image is captured (step S138). When S2 ON is detected at step S116 (step S116 = Yes), AF cannot be performed correctly.
The focus lens group is therefore moved to a predetermined position (step S140) for image capture (step S138). When S2 ON is not detected (step S116 = No), as shown in FIG. 19A, a rabbit's drooping ears are displayed on top of the face detected in step S114 (step S320). When the focus evaluation values at the plurality of AF detection points have been calculated, the lens driving unit 146 is controlled to move the focus lens group, and the lens position giving the local maximum evaluation value is determined as the in-focus position (step S122). The lens driving unit 146 is then controlled to start moving the focus lens group toward the obtained in-focus position. It is determined whether the face detected in step S114 is in focus (step S124). When it is determined that the face is not yet in focus (step S124 = No), the focus lens group moves toward the obtained in-focus position, and at the same time the moving image generating unit 164 generates an animation in which the rabbit's ears, swinging sideways, gradually rise in response to the focus condition, and the CPU 110 superimposes the animation on the through-the-lens image at the same position as the drooping ears of step S320. Step S124 is then repeated. When the desired area comes into focus, that is, when the movement of the focus lens group is complete (step S124 = Yes), the rabbit's fully raised, stationary ears are displayed at the same position as the drooping ears, as shown in FIG. 19B (step S328). The rabbit's ears are then displayed clearly with high brightness (step S330). It is determined whether voice output is enabled (step S132).
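The gradually rising ears can be driven directly by the progress of the focus operation; a minimal sketch (the drooping and raised angles are illustrative assumptions):

```python
def ear_angle_deg(progress, drooping_angle=90.0, raised_angle=0.0):
    """Angle of the rabbit's ears relative to upright, for a focus progress.

    Sketch of the third embodiment: at progress 0.0 the ears droop fully
    (assumed 90 degrees from upright) and at progress 1.0 (in focus) they
    stand straight up. Linear interpolation between the two poses.
    """
    progress = min(max(progress, 0.0), 1.0)
    return drooping_angle + (raised_angle - drooping_angle) * progress

print(ear_angle_deg(0.0))  # -> 90.0 (drooping)
print(ear_angle_deg(0.5))  # -> 45.0 (halfway up)
print(ear_angle_deg(1.0))  # -> 0.0  (fully raised, in focus)
```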
When voice output is enabled (step S132 = Yes), a sound indicating that focusing is complete (such as a voice, a tune, a chime, or the like) is output through the speaker 36 (step S134), and it is then determined whether S2 ON is detected (step S136). When voice output is not enabled (step S132 = No), it is likewise determined whether S2 ON is detected (step S136). When S2 ON is not detected (step S136 = No), the process returns to the step of displaying the rabbit's drooping ears (step S320); when S2 ON is detected (step S136 = Yes), an image is captured (step S138). According to the present embodiment, the rabbit-ear display attracts the user's attention and shows the user that a focus operation is in progress. The degree to which the ears have risen visibly indicates how far the focus operation has progressed, so that both the focusing state and the in-focus state are conveyed in an easily understood manner. When AF cannot be performed correctly, the ears are not displayed, which prompts the user to retry the AF operation, and because the ears are displayed on the detected face, the user can easily see which area is being focused or is in focus. In the present embodiment a rabbit's ears are described as an example, but the display is not limited to rabbit ears; the ears of any animal, such as a dog, an elephant, or a giant panda, may be used. Also, in the present embodiment visibility is increased by displaying the ears more clearly when the image is in focus, but this is not a limitation; any display that makes the focus condition easy to recognize can be used. For example, the whole ear may be given a darker color, or a wider outline, when the image is in focus.
<Fourth Embodiment of the Focus Status Display>

In the fourth embodiment of the focus condition display, an animation in which a character picture of an animal (for example, a bear) gradually appears in response to the focus condition is displayed. FIG. 20 is a flowchart showing the focus condition display process of the digital camera 11. The following steps are normally performed by the CPU 110. Components identical to those of the first embodiment are denoted by the same reference numerals and are not described again in detail. It is determined whether S1 ON is detected (step S112); when S1 ON is not detected (step S112 = No), step S112 is repeated. When S1 ON is detected (step S112 = Yes), it is determined whether a face was detected from the subject image in step S110 (step S114). When a face is detected (step S114 = Yes), it is determined whether S2 ON is detected (step S116). When a face is not detected (step S114 = No), it is determined whether S2 ON is detected (step S118); when S2 ON is not detected (step S118 = No), the face detection step is repeated (step S114), and when S2 ON is detected (step S118 = Yes), an image is captured (step S138). When S2 ON is detected at step S116 (step S116 = Yes), AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S140) for image capture (step S138). When S2 ON is not detected (step S116 = No), the focus evaluation values at the plurality of AF detection points are calculated, the lens driving unit 146 is controlled to move the focus lens group, and the lens position giving the local maximum evaluation value is determined as the in-focus position (step S122). The lens driving unit 146 is then controlled to start moving the focus lens group toward the obtained in-focus position. It is determined whether the face detected in step S114 is in focus (step S124).
When it is determined that the face is not yet in focus (step S124 = No), the focus lens group moves toward the obtained in-focus position, and at the same time an animation in which a bear character gradually appears as the focus operation progresses is superimposed on the through-the-lens image (step S426). That is, as shown in FIG. 21A, the moving image generating unit 164 generates an animation in which the bear character emerges from beside the face, appearing little by little at the side of the face as the focus operation proceeds, until, as shown in FIG. 21B, the entire bear character is displayed beside the face when the image is in focus. The CPU 110 combines the generated animation with the through-the-lens image and displays it so that the bear character appears from the side of the face detected in step S114. Step S124 is then repeated. When the desired area comes into focus, that is, when the movement of the focus lens group is complete (step S124 = Yes), the entire bear character is displayed beside the face detected in step S114, with part of the character overlapping the front of the face, as shown in FIG. 21B (step S428). The bear character is then displayed clearly with high brightness (step S430). It is determined whether voice output is enabled (step S132). When voice output is enabled (step S132 = Yes), a sound indicating that focusing is complete (such as a voice, a tune, a chime, or the like) is output through the speaker 36 (step S134), and it is then determined whether S2 ON is detected (step S136). When voice output is not enabled (step S132 = No), it is likewise determined whether S2 ON is detected (step S136). When S2 ON is not detected (step S136 = No), the process returns to the step of calculating the focus evaluation values (step S122).
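The gradual appearance of the bear character in step S426 can be realized by revealing a growing portion of the character sprite as the focus operation progresses; a minimal sketch with illustrative names:

```python
def visible_columns(sprite_width, progress):
    """Number of sprite pixel columns to reveal for a given focus progress.

    Sketch of the fourth embodiment: the bear character slides out from
    beside the face, so at progress 0.0 nothing is shown and at progress
    1.0 the full sprite width is visible. The sprite width and the linear
    mapping are illustrative assumptions.
    """
    progress = min(max(progress, 0.0), 1.0)
    return round(sprite_width * progress)

print(visible_columns(64, 0.0))   # -> 0  (nothing shown yet)
print(visible_columns(64, 0.25))  # -> 16 (character partly emerged)
print(visible_columns(64, 1.0))   # -> 64 (entire character visible)
```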
When S2 ON is detected (step S136 = Yes), an image is captured (step S138). According to the present embodiment, the bear-character display attracts the user's attention and shows the user that a focus operation is in progress. The extent to which the bear character has appeared visibly indicates how far the focus operation has progressed, so that both the focusing state and the in-focus state are conveyed in an easily understood manner. When AF cannot be performed correctly, the bear character is not displayed, which prompts the user to retry the AF operation, and because the bear character is displayed next to the detected face, the user can easily see which area is being focused or is in focus. Also, in the present embodiment visibility is increased by displaying the bear character more clearly when the image is in focus, but this is not a limitation; any display that makes the focus condition easy to recognize can be used. For example, the bear character may be given a darker color, or a wider outline, when the image is in focus.

<Fifth Embodiment of the Focus Status Display>

In the fifth embodiment of the focus condition display, an animation in which an animal (for example, a bird) moves in response to the focus condition is displayed. FIG. 22 is a flowchart showing the focus condition display process of the digital camera 11. The following steps are normally performed by the CPU 110. Components identical to those of the first embodiment are denoted by the same reference numerals and are not described again in detail. It is determined whether S1 ON is detected (step S112); when S1 ON is not detected (step S112 = No), step S112 is repeated. When S1 ON is detected (step S112 = Yes), it is determined whether a face was detected from the subject image in step S110 (step S114).
When a face is detected (step S114 = Yes), it is determined whether S2 ON is detected (step S116). When a face is not detected (step S114 = No), it is determined whether S2 ON is detected (step S118); when S2 ON is not detected (step S118 = No), the face detection step is repeated (step S114), and when S2 ON is detected (step S118 = Yes), an image is captured (step S138). When S2 ON is detected at step S116 (step S116 = Yes), AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S140) for image capture (step S138). When S2 ON is not detected (step S116 = No), as shown in FIG. 23A, a flying bird is displayed at a predetermined position (for example, the lower-left corner of the screen) (step S520). When the focus evaluation values at the plurality of AF detection points have been calculated, the lens driving unit 146 is controlled to move the focus lens group, and the lens position giving the local maximum evaluation value is determined as the in-focus position (step S122). The lens driving unit 146 is then controlled to start moving the focus lens group toward the obtained in-focus position. It is determined whether the face detected in step S114 is in focus (step S124). When it is determined that the face is not yet in focus (step S124 = No), the focus lens group moves toward the obtained in-focus position, and at the same time an animation of a bird flying around the screen, generated by the moving image generating unit 164, is superimposed by the CPU 110 on the through-the-lens image (step S526). Step S124 is then repeated. When the desired area comes into focus, that is, when the movement of the focus lens group is complete (step S124 = Yes), the bird perches on top of the face detected in step S114, as shown in FIG. 23B (step S528).
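The bird's path from its starting corner to the top of the detected face can be parameterized by focus progress; a minimal sketch (a real implementation would add wandering motion on top of this straight-line path; names and coordinates are illustrative):

```python
def bird_position(start, face_top, progress):
    """Screen position of the bird for a given focus progress.

    Sketch of the fifth embodiment: the bird starts at a predetermined
    corner of the screen and, as the focus operation progresses
    (progress 0.0 -> 1.0), ends up perched on top of the detected face.
    """
    progress = min(max(progress, 0.0), 1.0)
    x = start[0] + (face_top[0] - start[0]) * progress
    y = start[1] + (face_top[1] - start[1]) * progress
    return (x, y)

corner = (0.0, 480.0)  # assumed lower-left corner of a 640x480 screen
face = (320.0, 100.0)  # assumed top of the detected face
print(bird_position(corner, face, 1.0))  # -> (320.0, 100.0), perched
```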
The perched bird is then displayed clearly with high brightness (step S530). It is determined whether voice output is enabled (step S132). When voice output is enabled (step S132 = Yes), a sound indicating that focusing is complete (such as a voice, a tune, a chime, or the like) is output through the speaker 36 (step S134), and it is then determined whether S2 ON is detected (step S136). When voice output is not enabled (step S132 = No), it is likewise determined whether S2 ON is detected (step S136). When S2 ON is not detected (step S136 = No), the process returns to the step of displaying the bird at the predetermined position (step S520); when S2 ON is detected (step S136 = Yes), an image is captured (step S138). According to the present embodiment, the flying-bird display attracts the user's attention and shows the user that a focus operation is in progress, and the bird's perching action shows visibly, in an easily understood manner, that focusing is complete. When AF cannot be performed correctly, the bird is not displayed, which prompts the user to retry the AF operation, and because the bird perches on the detected face, the user can easily see the focus position. In the present embodiment a bird is described as an example, but any creature that can fly, such as a butterfly, a dragonfly, a bee, or a bat, may be used. In the case of a butterfly animation, an animation is generated in which a butterfly flies around the screen in step S526, stops flying and alights on the face detected in step S114 in step S528, and the resting butterfly is displayed clearly with higher brightness in step S530. Also, in the present embodiment visibility is increased by having the bird stop flying and perch when the image is in focus, but this is not a limitation; any display that makes the focus condition easy to recognize can be used.
For example, the bird may be given a darker color, or a wider outline, when the image is in focus.

<Sixth Embodiment of the Focus Status Display>

In the sixth embodiment of the focus condition display, an animation of a flower blooming in response to the focus condition is displayed. FIG. 24 is a flowchart showing the focus condition display process of the digital camera 11. The following steps are normally performed by the CPU 110. Components identical to those of the first embodiment are denoted by the same reference numerals and are not described again in detail. It is determined whether S1 ON is detected (step S112); when S1 ON is not detected (step S112 = No), step S112 is repeated. When S1 ON is detected (step S112 = Yes), it is determined whether a face was detected from the subject image in step S110 (step S114). When a face is detected (step S114 = Yes), it is determined whether S2 ON is detected (step S116). When a face is not detected (step S114 = No), it is determined whether S2 ON is detected (step S118); when S2 ON is not detected (step S118 = No), the face detection step is repeated (step S114), and when S2 ON is detected (step S118 = Yes), an image is captured (step S138). When S2 ON is detected at step S116 (step S116 = Yes), AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S140) for image capture (step S138). When S2 ON is not detected (step S116 = No), as shown in FIG. 25A, a flower bud is displayed on the face detected in step S114 (step S620).
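The flower-opening display of this embodiment can likewise be indexed by focus progress, so that the bud of step S620 reaches full bloom exactly at focus; a minimal sketch (the frame count of the blooming sequence is an illustrative assumption):

```python
def bloom_frame(progress, frame_count=8):
    """Index of the blooming-animation frame to show for a focus progress.

    Sketch of the sixth embodiment: frame 0 is the closed bud and the last
    frame is the fully opened flower, so the animation advances with the
    focus operation and shows full bloom when the image is in focus.
    """
    progress = min(max(progress, 0.0), 1.0)
    return min(int(progress * frame_count), frame_count - 1)

print(bloom_frame(0.0))  # -> 0 (closed bud)
print(bloom_frame(0.5))  # -> 4 (half open)
print(bloom_frame(1.0))  # -> 7 (fully opened flower)
```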
When the focus evaluation values at the plurality of AF detection points have been calculated, the lens driving unit 146 is controlled to move the focus lens group, and the lens position giving the local maximum evaluation value is determined as the in-focus position (step S122). The lens driving unit 146 is then controlled to start moving the focus lens group toward the obtained in-focus position. It is determined whether the face detected in step S114 is in focus (step S124). When it is determined that the face is not yet in focus (step S124 = No), the focus lens group moves toward the obtained in-focus position, and at the same time an animation of a flower gradually blooming in response to the focus condition, generated by the moving image generating unit 164, is combined with the through-the-lens image by the CPU 110 and displayed at the same position as the flower bud of step S620 (step S626). Step S124 is then repeated. When the desired area comes into focus, that is, when the movement of the focus lens group is complete (step S124 = Yes), the fully opened flower is displayed at the same position as the flower bud, as shown in FIG. 25B (step S628). The fully opened flower is then displayed clearly with higher brightness (step S630). It is determined whether voice output is enabled (step S132). When voice output is enabled (step S132 = Yes), a sound indicating that focusing is complete (such as a voice, a tune,
a call, or the like) can be output through the speaker 36 (step S134), and it is then determined
whether S2 ON is detected (step S136).
When voice output is not enabled (step S132 = NO), it is determined whether S2 ON is detected (step S136). When S2 ON is not detected (step S136 = NO), the process returns to the step of displaying the flower bud (step S620). When S2 ON is detected (step S136 = YES), an image is captured (step S138).

According to this embodiment, a flower display that attracts the user's attention can be used to show the user that a focusing operation is in progress. The blooming of the flower visibly shows how far the focusing operation has progressed, which allows a focusing-in-progress state and an in-focus state to be communicated in a way the user easily understands. Also, when AF is not performed correctly, the flower is not displayed, which guides the user to perform the AF operation again. Furthermore, a flower is displayed near the detected face, which shows, in an easily understandable way, the position that will be or has been brought into focus.

Meanwhile, in this embodiment, although visibility is increased by displaying the fully bloomed flower more clearly when the image is in focus, the display is not limited to this; any display that easily informs the user of the focus condition can be used. For example, the flower may have a darker color, or the flower may have a wider line when in focus.

<Seventh Embodiment of Focus Status Display>

In the seventh embodiment of the focus status display, an animation in which a dialog balloon inflates in response to the focus condition is displayed. Fig. 26 is a flow chart showing the flow of the focus status display processing of the digital camera 11. The following steps are usually performed by the CPU 110. The same components as those in the first embodiment are denoted by the same reference numerals and will not be described in detail here.

It is determined whether S1 ON is detected (step S112). When S1 ON is not detected (step S112 = NO), step S112 is repeated.
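Both embodiments rest on the same contrast-AF search (step S122): the focus lens group is scanned and the lens position giving the maximum evaluation value is taken as the in-focus position. A minimal sketch follows; the patent only requires that the evaluation value peak at best focus, so the particular contrast measure used here (sum of absolute differences between horizontally adjacent pixels) is an assumption for illustration.

```python
def focus_evaluation(image_region):
    """One possible focus evaluation value: total horizontal contrast
    of the AF detection region (rows of pixel intensities)."""
    total = 0
    for row in image_region:
        total += sum(abs(a - b) for a, b in zip(row, row[1:]))
    return total

def find_focus_position(lens_positions, image_at):
    """Step S122 sketch: step the focus lens group through its range,
    compute the evaluation value at each position, and return the
    position giving the maximum value."""
    best_pos, best_val = None, float("-inf")
    for pos in lens_positions:
        val = focus_evaluation(image_at(pos))
        if val > best_val:
            best_pos, best_val = pos, val
    return best_pos
```

In the embodiments above, this search runs over the AF detection points covering the detected face, after which the lens is driven to `best_pos` while the animation plays.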
When S1 ON is detected (step S112 = YES), it is determined whether a face was detected from the subject image in step S110 (step S114). When a face is detected (step S114 = YES), it is determined whether S2 ON is detected (step S116). When a face is not detected (step S114 = NO), it is determined whether S2 ON is detected (step S118). When S2 ON is not detected (step S118 = NO), the step of detecting a face is repeated (step S114), and when S2 ON is detected (step S118 = YES), an image is captured (step S138). When S2 ON is detected (step S116 = YES), AF cannot be performed correctly, so the focus lens group is moved to a predetermined position (step S140) and an image is captured (step S138).

When S2 ON is not detected (step S116 = NO), the focus evaluation values at the plurality of AF detection points are calculated, and the lens driving unit 146 is controlled to move the focus lens group, so that the lens position having the local maximum evaluation value can be determined as the in-focus position (step S122). Then, in order to move the focus lens group to the obtained in-focus position, the lens driving unit 146 is controlled to start the movement of the focus lens group.

It is determined whether the face detected in step S114 is in focus (step S124). When it is determined that the face is not yet in focus (step S124 = NO), the focus lens group is moved toward the obtained in-focus position, and at the same time an animated image in which a dialog balloon inflates in response to the focus condition is generated by the animated image generation unit 164, superimposed, and displayed on the live view image (step S726). That is, as shown in Fig. 27A, the animated image generation unit 164 generates an animated image in which a very small dialog balloon (or a small circle) is displayed after the focusing operation starts, and, as shown in Fig.
27B, the size of the dialog balloon gradually increases as the focusing operation proceeds, so that, as shown in Fig. 27C, the dialog balloon has its maximum size when the focusing operation is completed. The CPU 110 displays the generated animated image near the face detected in step S114 (for example, at one end of the face). Then, step S124 is repeated. The position at which the dialog balloon is displayed is not limited to near the face; the face detection unit 154 may detect the mouth so that the dialog balloon is displayed as if spoken from the detected mouth.

When the desired area is in focus, that is, when the movement of the focus lens group is complete (step S124 = YES), the maximum-size dialog balloon is displayed near the face detected in step S114, as shown in Fig. 27C (step S728). Then, the dialog balloon is clearly displayed with higher brightness (step S730).

It is determined whether voice output is enabled (step S132). When voice output is enabled (step S132 = YES), a sound indicating that focusing is complete (such as a voice, music, a call, or the like) is output through the speaker 36 (step S134), and it is then determined whether S2 ON is detected (step S136). When voice output is not enabled (step S132 = NO), it is determined whether S2 ON is detected (step S136).

When S2 ON is not detected (step S136 = NO), the process returns to the step of calculating the focus evaluation values (step S122). When S2 ON is detected (step S136 = YES), an image is captured (step S138).

According to this embodiment, a dialog balloon that attracts the user's attention by changing its size can be used to show the user that a focusing operation is in progress. The size of the dialog balloon visibly shows how far the focusing operation has progressed, which allows a focusing-in-progress state and an in-focus state to be communicated in a way the user easily understands. Also, when AF is not performed correctly, the dialog balloon is not displayed, which guides the user to perform the AF operation again.
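The inflation behavior of steps S726 through S730 can be sketched as a mapping from focusing progress to balloon size. The mapping and the radius values below are illustrative assumptions; the patent specifies only that the balloon grows as focusing proceeds and is drawn at maximum size, with higher brightness, when focusing completes.

```python
def balloon_radius(progress, r_min=4, r_max=40):
    """Map focusing progress in [0, 1] (e.g., the fraction of the
    focus lens travel completed) to the radius of the dialog balloon,
    so the balloon inflates as focus nears (steps S726-S728)."""
    progress = max(0.0, min(1.0, progress))
    return r_min + (r_max - r_min) * progress

def balloon_frames(n_steps):
    """One frame per AF iteration; the final frame is the full-size
    balloon flagged for the high-brightness display of step S730."""
    frames = []
    for i in range(n_steps + 1):
        frames.append({
            "radius": balloon_radius(i / n_steps),
            "bright": i == n_steps,
        })
    return frames
```

Each frame would be superimposed near the detected face (or mouth) on the live view image, as described above.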
Furthermore, the dialog balloon is displayed near the detected face, which shows the user the focus position in an easily understandable way.

In this embodiment, although visibility is increased by displaying the dialog balloon more clearly when the image is in focus, the display is not limited to this; any display that easily informs the user of the focus condition can be used. For example, the dialog balloon may have a wider line when in focus.

Also, in this embodiment, an animated image in which the dialog balloon has a variable size is combined with the live view image; alternatively, a dialog balloon containing the character "?" may be combined with the live view image until the focusing operation is completed, and a dialog balloon containing the character "!" may be generated and combined with the live view image when the focusing operation is completed. This also clearly informs the user of the focus condition.

In the present invention, a face can be detected as the focusing target, but the detection target is not limited to a face; an entire person, an animal (such as a dog, a cat, or a rabbit), a car, and the like can also be detected as the focusing target. Various known techniques can be used to detect an entire person, an animal, a car, and the like. Moreover, the present invention is not limited to a digital camera; it can also be applied to other imaging apparatuses, such as a mobile phone equipped with a camera.

[Brief Description of the Drawings]

Fig. 1 is a front perspective view showing a first embodiment of a digital camera to which the present invention is applied;
Fig. 2 is a rear view of the first embodiment of the digital camera;
Fig. 3 is a block diagram showing the schematic structure of the first embodiment of the digital camera;
Fig. 4 is
a flowchart of a first embodiment of the focus status display processing;
Fig. 5 is an example of the focus status display before a focusing operation is completed;
Fig. 6 is an example of the focus status display before a focusing operation is completed;
Fig. 7 is an example of the focus status display before a focusing operation is completed;
Fig. 8 is an example of the focus status display when a focusing operation is completed;
Fig. 9 is a flowchart of a specific embodiment of the focus status display processing of the digital camera;
Fig. 10 is an example of the focus status display when a focusing operation is completed;
Fig. 11 is an example of the focus status display before a focusing operation is completed;
Fig. 12A is an example of the focus status display after a focusing operation starts; Fig. 12B is an example of the focus status display during a focusing operation; and Fig. 12C is an example of the focus status display when an image is in focus;
Fig. 13 is a block diagram showing a schematic structure of the digital camera;
Fig. 14 is a flowchart of a first embodiment of the animated-image display processing of the digital camera;
Fig. 15A is a display example of the first embodiment of an animated image of the digital camera before a focusing operation is completed; and Fig. 15B is a display example of the first embodiment of the animated image when the focusing operation is completed;
Fig. 16 is a flowchart of a second embodiment of the animated-image display processing of the digital camera;
Fig. 17A is a display example of the second embodiment of an animated image of the digital camera before a focusing operation is completed; and Fig. 17B is a display example of the second embodiment of the animated image when the focusing operation is completed;
Fig. 18 is a flowchart of a third embodiment of the animated-image display processing of the digital camera;
Fig. 19A is a display example of the third embodiment of an animated image of the digital camera before a focusing operation is completed; and Fig.
19B is a display example of the third embodiment of an animated image of the digital camera when the focusing operation is completed;
Fig. 20 is a flowchart of a fourth embodiment of the animated-image display processing of the digital camera;
Fig. 21A is a display example of the fourth embodiment of an animated image of the digital camera before a focusing operation is completed; and Fig. 21B is a display example of the fourth embodiment of the animated image when the focusing operation is completed;
Fig. 22 is a flowchart of a fifth embodiment of the animated-image display processing of the digital camera;
Fig. 23A is a display example of the fifth embodiment of an animated image of the digital camera before a focusing operation is completed; and Fig. 23B is a display example of the fifth embodiment of the animated image when the focusing operation is completed;
Fig. 24 is a flowchart of a sixth embodiment of the animated-image display processing of the digital camera;
Fig. 25A is a display example of the sixth embodiment of an animated image of the digital camera before a focusing operation is completed; and Fig. 25B is a display example of the sixth embodiment of the animated image when the focusing operation is completed;
Fig. 26 is a flowchart of a seventh embodiment of the animated-image display processing of the digital camera;
Figs. 27A to 27C are display examples of the seventh embodiment of an animated image of the digital camera: Fig. 27A is a display example when a focusing operation has started; Fig. 27B is a display example while the focusing operation is in progress; and Fig. 27C is a display example when the focusing operation has been completed; and
Fig. 28 is a display example of the related art.

[Description of Main Component Symbols]

10 Digital camera
11 Digital camera
12 Camera body
14 Lens
15 Aperture
16 Electronic flash
18 Viewfinder
20 Self-timer lamp
22 AF auxiliary lamp
24 Flash adjustment sensor
26 Shutter button
28 Power/mode switch
30 Mode dial
32 Monitor
34 Eyepiece
36 Speaker
38 Zoom button
38W Zoom-out button
38T Zoom-in button
40 Cross button
42 MENU/OK button
44 DISP button
46 BACK button
110 CPU
112 Operation unit