US5367454A - Interactive man-machine interface for simulating human emotions - Google Patents
- Publication number
- US5367454A (application US08/081,703)
- Authority
- US
- United States
- Prior art keywords
- emotions
- intensity
- basic
- emotion
- agent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S345/00—Computer graphics processing and selective visual display systems
- Y10S345/949—Animation processing method
- Y10S345/952—Simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Processing Or Creating Images (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
An interactive man-machine interface system displays an animated face that exhibits human-like emotions. The system stores data representing each of eight basic emotions and continually changes the level of each basic emotion depending on environmental stimuli, internal reactions between the emotions, and the passage of time. The environmental stimuli include, for example, specific comments made by the user that are recognized by the system, the successful completion of a task, and failure to complete a task. The degree of internal reactions between emotions is programmed before operation. For example, an increase in anger causes a predetermined decrease in joy. Finally, all eight basic emotions are made to reduce in intensity over time. Based on a database of facial expressions, the system displays a composite expression corresponding to the intensity levels of all eight basic emotions.
Description
The present invention relates to an interactive system and more particularly to an emotion emulator for producing pseudo-emotions of an artificial agent in an interactive information input/output system using the personified agent.
Heretofore, input means such as a keyboard and mouse have been used to let users communicate with information processing units, and display screens such as CRTs have been used to let the processing units display symbolic information in response, so that various operations are performed in an interactive mode.
However, the information processing environment has recently been developing toward machine-to-man coordination, in which the information unit, or machine, and the user, or human, exchange mutual intentions using a wide range of information, including emotions.
Some of the interactive systems that have been proposed so far are arranged so that a personified artificial agent (hereinafter simply called "agent") appears on the screen and speaks to users by means of its image and speech synthesis (e.g., Suenaga et al., Collection of Papers, Vol. J75-D-II, No. 2, pp. 190-202, Feb. 1992, Electronic Communication Society).
With respect to systems capable of voice conversation, Unexamined Japanese Patent Publication Hei-2-83727 (1990), for example, discloses a system that provides a natural speaking face image by controlling the lip movement of an agent on a display screen in accordance with an utterance produced by speech synthesis on the part of the system.
As studies in models of artificial emotion, a model of artificial emotion with the application of the harmony theory has been referred to by Mogi and Hara in "Shingaku Technical Report HC91-42," and a method of mapping between the mood and the facial expressions has also been proposed by Kitamura et al. in "Trial of Forming Facial Expressions Using Models of Emotion" (Collection of Preliminary Papers, Meeting in Spring 1992, Electronic Data Communication Society).
Nevertheless, an interactive system incorporating pseudo-emotion into an electronic agent still remains unreported.
In a conventional interactive system with such an electronic agent, the agent as a conversation partner merely gives utterances accompanied by simple variations of facial expression; it is not given the power to express pseudo-emotions modeled on human emotions. In consequence, the personification of the agent as perceived by the user is incomplete, and an agent lacking emotional expression not only has little affinity for the novice user but is also weak at encouraging users in general to input information actively. The problem is that a smooth exchange of intentions is unlikely.
An object of the present invention is to provide an emotion emulator which is designed to attain natural interaction with users and candid exchange of mutual intentions by giving an agent a pseudo-emotion model as an internal mood model so as to make the agent behave more like a human.
In order to accomplish the above object, according to the first aspect of the present invention, there is provided a storage means for holding the intensity of basic emotions so as to provide an agent with artificial emotion. In this case, the eight basic elements of emotion identified by psychological studies, as in "Emotion and Personality," Plutchik, R., Modern Fundamental Psychology 8, Tokyo University Press, may be enumerated.
Furthermore, according to the second aspect of the present invention, the intensity of the basic emotions possessed by the agent is varied by making use of phenomena arising in the working environment, that is, by a means for increasing the intensity of predetermined basic emotions of the agent by predefined differential values in accordance with conditions such as the utterance received by the agent from a user, the achievement status of a task being accomplished by the agent, and the presumed likelihood of goal fulfillment.
Furthermore, according to the third aspect of the present invention, there is provided a means for causing the emotional condition to vary autonomously by predetermining the interaction between basic emotions and applying that interaction at fixed time intervals so as to increase and decrease the intensity of the basic emotions.
Furthermore, according to the fourth aspect of the present invention, there is provided a means for placing the intensity of the basic emotions in a steady state, that is, for returning the total emotional condition to the neutral status by exponentially attenuating the intensity of the basic emotions when a sufficiently long time has elapsed without any phenomenon arising in the working environment.
As shown in FIG. 1, an emotion emulator according to the present invention comprises: a basic emotion memory 1 for storing the intensity of basic emotions constituting an overall artificial emotion system, an emotional stimulus detector 2 for detecting input of an emotional stimulus in the working environment, a differential value memory 3 for pre-storing a differential value specifying the extent to which a basic emotion is varied depending on the kind and intensity of the emotional stimulus detected by the emotional stimulus detector 2, an intensity revisor 4 for revising the intensity values of basic emotions stored in the storage means 1 according to the emotional stimulus detected by the emotional stimulus detector 2 and the differential value stored in the differential value memory 3, an internal mutual interaction memory 5 for pre-holding the intensity of internal mutual interaction between basic emotions, an intensity revisor 6 for revising the intensity values of basic emotions stored in the basic emotion memory 1 according to the intensity of basic emotions stored in the basic emotion memory 1 and the intensity of mutual interaction stored in the internal mutual interaction memory 5, and a time dependent attenuator 7 for periodically reducing the intensity of basic emotions stored in the basic emotion memory 1.
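As a rough guide to how these storage means relate to one another, the following is a minimal sketch in Python. It is not the patented implementation; the identifiers (EMOTIONS, basic_emotion_memory, and so on) are hypothetical, and the numeric values are placeholders except where they echo examples given later in the description.

```python
# Minimal sketch of the storage means of FIG. 1 (hypothetical names and values).

# Basic emotion memory 1: intensity of the eight basic emotions.
EMOTIONS = ["joy", "acceptance", "fear", "surprise",
            "sadness", "disgust", "anger", "expectation"]
basic_emotion_memory = {e: 0.0 for e in EMOTIONS}   # all zero = neutral status

# Differential value memory 3: increments of basic emotions for each kind of
# emotional stimulus (the two entries follow examples 1 and 2 given below).
differential_value_memory = {
    "no_utterance_despite_request": {"anger": 2.0},
    "requested_task_completed":     {"joy": 1.0},
}

# Internal mutual interaction memory 5: interaction intensity between every
# pair of basic emotions (to be filled from a table such as the Wij matrix below).
internal_interaction_memory = {i: {j: 0.0 for j in EMOTIONS} for i in EMOTIONS}

# Constants used by the time dependent attenuator 7 (placeholder value).
attenuation_constants = {e: 0.95 for e in EMOTIONS}
```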
With the storage of basic emotions according to the first aspect of the present invention as noted above, the agent's emotional condition can be read out in real time, so that it can immediately be reflected in its facial expression and task performance.
With the acceptance of environmental emotional stimuli according to the second aspect above, the agent's emotions quickly change with various happenings in the task performance environment, thus making flexible variations of pseudo-emotion available.
As the agent's complex emotional behavior is defined by the interaction means according to the third aspect above, subtle autonomous fluctuations of emotion independent of external factors can be brought about.
The time dependent attenuator according to the fourth aspect above functions as a stabilizer of emotion. When the invention is applied to an interactive information processing console, the agent's emotions must be strictly user-friendly and desirably free from any unstable temperament. When no emotional stimulus exists, the time dependent attenuator acts to lead the agent's emotions to a moderate mood.
More specifically, the basic emotion memory 1 for storing the intensity of basic emotions in the system configuration shown in FIG. 1 holds the intensity of basic emotions constituting the overall artificial emotion system offered by the agent.
The differential value memory 3 for storing the intensity of emotional stimulus pre-stores a differential value for instructing the extent of varying basic emotion depending on the kind and intensity of the emotional stimulus detected by the detector 2 for detecting input of an emotional stimulus in working environment.
The intensity revisor 4 for revising intensity values of basic emotions revises the intensity of basic emotions stored in the basic emotion memory 1 according to the emotional stimulus detected by the detector 2 and the differential value stored in the memory 3.
The intensity revisor 6 for revising the intensity value revises the intensity of basic emotions stored in the basic emotion memory 1 according to the intensity of basic emotions stored in the memory 1 and the intensity of internal mutual interaction stored in the memory 5 for pre-holding the intensity of internal mutual interaction between basic emotions.
The time dependent attenuator 7 periodically reduces the intensity of basic emotions stored in the memory 1 with the lapse of time and converges the intensity value to the neutral emotional status on condition that no external stimulus exists and that the interaction between basic emotions is sufficiently small.
With the arrangement above, it is possible to provide an emotion emulator capable of forming pseudo-emotion very similar to human emotion and realizing natural interaction with users by making the agent behave more like a human.
FIG. 1 is a block diagram illustrating a system configuration according to the present invention;
FIG. 2 is a block diagram illustrating a configuration of an emotion emulator embodying the present invention applied to an interactive schedule management system;
FIG. 3 is a process flowchart for the interactive schedule management system of FIG. 2;
FIG. 4 is a block diagram illustrating an artificial emotion system configuration;
FIG. 5 is a diagram illustrating pervasive effects among eight basic emotions; and
FIG. 6 is a diagram illustrating an algorithm for forming pseudo-emotion.
Referring to the accompanying drawings, a detailed description will subsequently be given of an embodiment of the present invention.
FIG. 2 is a block diagram of an emotion emulator according to the present invention as applied to a speech interactive schedule management system in which an agent speaks and interacts with a user in order to lay out schedules for meetings, tour itineraries and the like.
In FIG. 2, numeral 21 denotes an agent type interface, 22 a user, 23 a schedule management information holder, 211 a speech recognition unit, 212 a user's intention presumption unit, 213 a scheduling action unit, 214 a meeting schedule database, 215 an artificial pseudo emotion system, 216 a facial expression animation generator, 217 a facial expression image database, and 218 a speech synthesizing unit.
First, an utterance of the user 22 is recognized in the speech recognition unit 211, and the user's intention presumption unit 212 presumes the intention of the user's operation from the coded contents of the utterance.
While referring to the meeting schedule database 214 and while interacting with the user with the aid of synthesized speech via the speech synthesizing unit 218, the scheduling action unit 213 plans to move or erase a schedule in line with the user's intention. At this time, a request for the activation of certain basic emotions, based on the user's intention, is sent to the artificial emotion system 215. Conversely, the pseudo-emotional condition is also transmitted to the scheduling action unit 213, whereby it affects the agent's action.
When a series of actions scheduled in the scheduling action unit 213 are executed with success (or failure), the results are transmitted to the artificial emotion system 215 and cause the pseudo-emotional condition to vary.
The emotional condition of the agent is continually converted by the facial expression animation generator 216 into an animation of the facial expression, with reference to the facial expression image database 217, and offered via an image display means (not shown) to the user 22 as information on the emotional condition of the agent.
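For orientation, one pass through this pipeline might be outlined as below. This is only an illustrative sketch under assumed function and key names (recognize_speech, presume_intention, receive_stimulus, and so on); the actual units 211-218 are the modules described above, not these Python stand-ins.

```python
# Illustrative outline of one interaction cycle in the FIG. 2 system.
# Every name here is a hypothetical stand-in for units 211-218.

def recognize_speech(audio):                   # speech recognition unit 211
    return audio                               # pretend the audio is already text

def presume_intention(text):                   # user's intention presumption unit 212
    return {"command": "move_schedule", "text": text}

def execute_schedule_action(intention, schedule_db):   # scheduling action unit 213/214
    ok = intention["command"] in schedule_db
    return {"success": ok, "reply": "Done." if ok else "That slot is occupied."}

def interaction_cycle(audio, emotion_system, schedule_db):
    intention = presume_intention(recognize_speech(audio))
    result = execute_schedule_action(intention, schedule_db)

    # Both the presumed intention and the task result act as emotional stimuli,
    # and the resulting pseudo-emotional condition feeds back into behaviour.
    emotion_system.receive_stimulus("requested_task_completed" if result["success"]
                                    else "task_failed")
    mood = emotion_system.current_intensities()
    return result["reply"], mood   # reply goes to speech synthesis 218, mood drives 216/217
```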
FIG. 3 is a flowchart illustrating the arrangement of the steps in a multi-process including "speech recognition" for taking in the user's utterance, "presumption of intention and action" and "formation of pseudo-emotion and facial expression" in the interactive schedule management system.
In the process of "presumption of intention and action" of FIG. 3, the user's intention presumption unit 212 monitors the presence or absence of utterance (S-11). If utterance is absent (No at S-11), data designating the absence of utterance is given to the scheduling action unit 213. The scheduling action unit 213 instructs the speech synthesizing unit 218 to synthesize speech requesting an utterance and notifies the artificial emotion system 215 of the presence of that request (S-12). The artificial emotion system 215 buffers a stimulus such as this request for utterance (S-31), computes variations in pseudo-emotion (S-32) and instructs the facial expression animation generator 216 to form the animation of the facial expression. The facial expression animation generator 216 continuously stitches together facial expression images corresponding to the requested utterance by reference to the facial expression image database 217 (S-33).
When the user gives utterance, the speech recognition unit 211 recognizes it (S-21), buffers the result of recognition (S-22) and delivers data on the contents of utterance to the user's intention presumption unit 212. The user's intention presumption unit 212 converts the user's intention presumed from the utterance into an action command (S-13) and delivers the action command to the scheduling action unit 213.
The scheduling action unit 213 interprets and executes the action command (S-14), requests the corresponding animation of the utterance from the facial expression animation generator 216 (S-15) and instructs the speech synthesizing unit 218 to output corresponding speech so that the synthesized speech is output (S-16).
On the other hand, data on the action command accompanying the presumption of the intention is transmitted to the artificial emotion system 215, and the artificial emotion system 215 likewise buffers a stimulus (S-31), computes variations in pseudo-emotion (S-32) and instructs the facial expression animation generator 216 to form the animation of the facial expression. The facial expression animation generator 216 continuously stitches together facial expression images by reference to the facial expression image database 217 (S-33).
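The "formation of pseudo-emotion and facial expression" process therefore keeps accepting stimuli asynchronously and recomputing the emotional state. A minimal way to picture that buffering is sketched below; the queue and the method names on emotion_system are assumptions for illustration, since the patent only states that stimuli are buffered before the variation is computed.

```python
# Sketch of steps S-31 to S-33 (stimulus buffering, pseudo-emotion variation,
# facial expression animation). The deque-based buffer is an assumption.
from collections import deque

stimulus_buffer = deque()

def buffer_stimulus(stimulus):                       # S-31
    stimulus_buffer.append(stimulus)

def emotion_process_step(emotion_system):
    while stimulus_buffer:                           # S-32: compute variations
        emotion_system.apply_stimulus(stimulus_buffer.popleft())
    emotion_system.tick()                            # periodic interaction/attenuation
    return emotion_system.current_intensities()      # S-33: drives the animation frames
```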
FIG. 4 is a block diagram illustrating an artificial emotion system configuration, wherein numeral 10 denotes a register for the intensity of each of eight basic emotions, 20 an environmental emotional stimulus input, 30 an emotional stimulus intensity storage table, 40 an emotion intensity revisor, 50 an internal interaction intensity storage table, 70 an attenuation constants memory, 80 a clock source, and 90 an output of action schedule/formation of animation of facial expression.
In FIG. 4, the registers 10 are used for respectively storing the intensity of the eight basic emotions (surprise, anger, disgust, fear, joy, acceptance, expectation and sadness).
The emotion intensity revisor 40 revises the contents of the registers 10 for the intensity of each basic emotion on the basis of the occurrence of an emotional stimulus and the mutual interactions between each pair of basic emotions, and also revises the registers 10 in a manner that exponentially attenuates the intensity with the lapse of time according to the attenuation constants stored in the attenuation constants memory 70. The emotion intensity revisor 40 operates according to a clock pulse from the clock source 80 at predetermined time intervals.
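In software, the clock source 80 driving the revisor at predetermined intervals might be emulated by a simple timed loop, as in the hedged sketch below; the 0.5-second period and the revise_once callback are assumptions, since the patent does not specify the interval.

```python
# Minimal emulation of the clock source 80 driving the emotion intensity
# revisor 40 at fixed intervals (the period is an assumed placeholder).
import time

def run_revisor(revise_once, period_s=0.5, ticks=10):
    for _ in range(ticks):
        revise_once()          # one revision pass: stimuli, interaction, attenuation
        time.sleep(period_s)   # wait for the next clock pulse
```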
A description will subsequently be given of the operation of the artificial emotion system thus arranged in the following order: "change of basic emotions under the condition of certain emotional stimulus", "change in internal emotion interaction", and "attenuation of basic emotions with the lapse of time."
1. Change of basic emotions under the condition of certain emotional stimulus.
How the emotions vary in the task performance environment depends on the user's utterance, the contents of the operation sequence planned in the scheduling unit, and the result of that operation sequence. In this case, the working environment as a source of emotional stimuli includes "user's utterance/task schedule/task execution/task results."
The incremental amount of basic emotion is pre-stored in the emotional stimulus intensity storage table 30 of FIG. 4 as a pair consisting of the contents of an emotional stimulus and the corresponding incremental amount of basic emotion.
The following examples show how the intensity of the basic emotions is revised. When the emotion intensity revisor 40 receives an interrupt indicating the occurrence of an emotional stimulus, the contents of the emotional stimulus are matched against the emotional stimulus intensity storage table 30 and the corresponding increments are added to the registers 10 for the intensity of the eight basic emotions.
More specifically, in example 1, IF: no voice signal is obtained from the user (no user's utterance) despite repeated input requests made by the agent→THEN: increase the value of anger in the basic emotion register by 2 units.
In example 2, IF: the requested task is completed by the agent's operation→THEN: increase the value of joy in the basic emotion register by 1 unit.
In example 3, IF: voice recognition fails repeatedly→THEN: increase the value of sadness in the basic emotion register by 0.5 unit.
In example 4, IF: a conference room is occupied, no schedule suits every member, or the meeting schedule conflicts with another meeting→THEN: increase the value of sadness by 1 unit and the value of disgust by 0.5 unit in the basic emotion register.
In example 5, IF: during conversation, the user starts a manual operation with the mouse→THEN: increase the value of sadness by 0.5 unit, the value of anger by 0.5 unit, and the value of disgust by 0.5 unit in the basic emotion register.
In example 6, IF: the scheduling tool hangs up→THEN: increase the value of surprise by 2 units and the value of fear by 1 unit.
In example 7, IF: processing of voice recognition becomes slow→THEN: increase the value of fear by 1 unit and the value of disgust by 0.5 unit.
In example 8, IF: processing of the artificial agent itself becomes slow→THEN: increase the value of fear by 2 units and the value of sadness by 1 unit.
In example 9, IF: the agent's window is moved with the mouse→THEN: increase the value of fear by 0.5 unit and the value of surprise by 1 unit.
In example 10, IF: the agent's window is made smaller→THEN: increase the value of disgust by 0.5 unit and the value of anger by 1 unit.
In example 11, IF: the user name in the utterance is not in the user list→THEN: increase the value of fear by 0.5 unit.
In example 12, IF: a voice signal is input even though the artificial agent is not in a voice recognition state→THEN: increase the value of disgust by 0.5 unit.
In example 13, IF: the user carries out an unacceptable action, for example closing the window during an action→THEN: increase the value of anger by 0.5 unit and the value of disgust by 0.5 unit.
In example 14, IF: a continued search for a conference room and schedule succeeds with a minor change after the search has failed once→THEN: increase the value of surprise by 0.5 unit and the value of joy by 1 unit. In example 15, IF: a trigger word for a basic emotion is detected in the utterance of the user→THEN: the values of the basic emotions are increased in accordance with the following list.
Key word | Joy | Acceptance | Fear | Surprise | Sadness | Disgust | Anger | Expectation |
---|---|---|---|---|---|---|---|---|
"Thank you" | 2.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 |
"Big help!" | 1.0 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 |
"O.K." | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 |
"Well done!" | 0.5 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 |
"That would be well" | 0.5 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 |
"Dumb" | 0.0 | 0.0 | 0.0 | 0.5 | 1.0 | 0.5 | 0.0 | 0.0 |
"Stupid" | 0.0 | 0.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
"Leave me alone" | 0.0 | 0.0 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.0 |
"Stop it" | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.5 | 0.0 | 0.0 |
"Fooling around" | 0.0 | 0.0 | 0.5 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 |
"Hey you!" | 0.0 | 0.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
"Hang on!" | 0.0 | 0.0 | 0.5 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 |
"Hurry up" | 0.0 | 0.0 | 1.0 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 |
"Stop it!" | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.5 | 0.5 | 0.0 |
In that way, the registers 10 for the intensity of each basic emotion are revised.
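To make the lookup concrete, here is a hedged sketch in Python of how the emotional stimulus intensity storage table 30 and the trigger-word list might be applied by the emotion intensity revisor 40. Only a few of examples 1-15 and two rows of the trigger-word list are encoded; the dictionary layout and key strings are assumptions, while the increments are taken from the text above.

```python
# Sketch of the emotional stimulus intensity storage table 30 (a stimulus
# matched against this table yields increments for the basic emotion registers).
STIMULUS_INCREMENTS = {
    "no_utterance_despite_request": {"anger": 2.0},                       # example 1
    "requested_task_completed":     {"joy": 1.0},                         # example 2
    "voice_recognition_failing":    {"sadness": 0.5},                     # example 3
    "scheduling_tool_hang":         {"surprise": 2.0, "fear": 1.0},       # example 6
    "trigger_word:Thank you":       {"joy": 2.0, "acceptance": 1.0, "expectation": 0.5},
    "trigger_word:Stop it":         {"surprise": 1.0, "disgust": 0.5},
}

def apply_stimulus(intensities, stimulus, strength=1.0):
    """Add the pre-stored increments, scaled by the stimulus intensity S,
    to the basic emotion registers."""
    for emotion, increment in STIMULUS_INCREMENTS.get(stimulus, {}).items():
        intensities[emotion] = intensities.get(emotion, 0.0) + strength * increment
    return intensities
```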
2. Change in internal emotion interaction.
Eight basic emotions are set herein as shown in FIG. 5. There exist pervasive effects of excitatory and inhibitory interaction among these eight basic emotions. For example, joy heals sadness and anger overcomes fear.
In addition to the mutual inhibition of opposing basic emotions, more general excitatory/inhibitory interaction is present as noted previously. For example, disgust induces sadness, and anger depresses joy.
With respect to the quantitative interaction intensity for embodying such a model, desirable intensity constants are obtained by trial and error through parameter adjustment. An example matrix of internal interaction constants (Wij) is shown in the following table.
Basic emotion | Acceptance | Fear | Surprise | Sadness | Disgust | Anger | Expectation | Joy |
---|---|---|---|---|---|---|---|---|
Acceptance | -- | 0.28 | 0.00 | -0.03 | -0.52 | -0.08 | 0.00 | 0.42 |
Fear | 0.45 | -- | 0.28 | 0.00 | 0.00 | -0.54 | 0.00 | 0.00 |
Surprise | 0.00 | 0.12 | -- | 0.28 | 0.00 | -0.02 | -0.43 | -0.09 |
Sadness | 0.00 | 0.00 | 0.08 | -- | 0.28 | 0.00 | -0.13 | -0.51 |
Disgust | -0.58 | 0.00 | 0.00 | 0.22 | -- | 0.28 | 0.00 | -0.15 |
Anger | -0.24 | -0.52 | 0.00 | 0.00 | 0.43 | -- | 0.41 | -0.32 |
Expectation | 0.18 | -0.08 | -0.50 | -0.24 | -0.09 | 0.09 | -- | 0.45 |
Joy | 0.51 | -0.06 | 0.00 | -0.54 | -0.27 | -0.18 | 0.32 | -- |
These parameters are preserved in the internal interaction intensity storage table 50.
The emotion intensity revisor 40 revises the emotion intensity registers by computing the interaction according to this parameter table at each clock timing from the clock source 80.
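A hedged sketch of one such interaction pass is given below. The index convention used here (W[src][dst], the contribution of a source emotion to a destination emotion) is an assumption made for illustration; only its sign behaviour, excitation for positive constants and inhibition for negative ones, is taken from the description.

```python
# Sketch of one internal-interaction revision pass over the Wij table.
def apply_internal_interaction(intensities, W):
    deltas = {dst: 0.0 for dst in intensities}
    for src, level in intensities.items():
        for dst, w in W.get(src, {}).items():
            deltas[dst] += level * w      # excitation (w > 0) or inhibition (w < 0)
    for dst in intensities:
        intensities[dst] += deltas[dst]
    return intensities
```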
3. Attenuation of basic emotions with the lapse of time.
The basic emotions exponentially attenuate with the lapse of time. When the interaction between basic emotions is sufficiently small without any emotional stimulus, each basic emotion converges to zero in value, that is, to the neutral emotional status. An example of the attenuation constants (Ri) of the basic emotions is shown in the following table.
Basic emotion | Acceptance | Fear | Surprise | Sadness | Disgust | Anger | Expectation | Joy |
---|---|---|---|---|---|---|---|---|
Attenuation constant R_i | 0.96 | 0.88 | 0.65 | 0.97 | 0.94 | 0.98 | 0.92 | 0.94 |
This time dependent attenuation is accomplished by letting the emotion intensity revisor 40 revise the eight registers 10 for the intensity of each basic emotion in accordance with the attenuation constant memory 70.
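A minimal sketch of this attenuation step, using the R_i values from the table above, might look as follows; the function and dictionary names are assumptions.

```python
# Sketch of the time dependent attenuation: each register is multiplied by its
# attenuation constant R_i on every revision pass, so that with no stimuli and
# negligible interaction every intensity decays exponentially toward zero.
ATTENUATION = {"acceptance": 0.96, "fear": 0.88, "surprise": 0.65, "sadness": 0.97,
               "disgust": 0.94, "anger": 0.98, "expectation": 0.92, "joy": 0.94}

def attenuate(intensities):
    for emotion in intensities:
        intensities[emotion] *= ATTENUATION.get(emotion, 1.0)
    return intensities
```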
FIG. 6 summarizes the operation of the artificial emotion system in consideration of the aforesaid three emotion changing factors.
FIG. 6 illustrates an algorithm for the revision of emotion intensity, with i, j, k = 1 to 8.
When the process of producing pseudo-emotion is started in FIG. 6, the intensity e_i(t) of the i-th basic emotion at time t is multiplied by the attenuation constant R_i of the i-th basic emotion per unit time to obtain e_i(t+1) (S-1), and then e_i(t+1) = e_i(t) + Σ_j (e_j(t) × W_ij) is computed as the interaction between the basic emotions (S-2).
A decision is made on whether an emotional stimulus occurs or not (S-3), and if the stimulus is absent (n), the flow returns to S-1. When such a stimulus occurs (y), the increment D_k of the k-th basic emotion resulting from the emotional stimulus is multiplied by the intensity S of the emotional stimulus and the result is added to the intensity e_k(t) of the k-th basic emotion at time t.
In the above described flowchart, Δt denotes a unit time step; e_i(t), the intensity of the i-th basic emotion at time t; D_ij, the differential value of the i-th basic emotion for the j-th emotional stimulus; W_ij, the interaction constant from the i-th basic emotion to the j-th basic emotion; R_i, the attenuation constant of the i-th basic emotion; S_i, the effect constant of the i-th basic emotion; and i, j, k (=1 to 8), the index of a basic emotion.
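Putting the three factors together, one pass of the FIG. 6 algorithm could be sketched as a single function like the one below. The ordering follows the flowchart (attenuation, interaction, then stimulus increments), but the calling convention and the dictionary-of-dictionaries layout for R, W and D are illustrative assumptions rather than the patented implementation.

```python
# Sketch of one revision pass of the FIG. 6 algorithm.
#   e: current intensities of the basic emotions
#   R: attenuation constants R_i, W: interaction constants, D: differential values
#   stimulus/S: the detected emotional stimulus and its intensity (if any)
def revise_emotions(e, R, W, D, stimulus=None, S=1.0):
    # S-1: exponential attenuation, e_i <- R_i * e_i
    for i in e:
        e[i] *= R.get(i, 1.0)
    # S-2: internal interaction, add the weighted contributions of the other emotions
    deltas = {i: sum(e[j] * W[j].get(i, 0.0) for j in e) for i in e}
    for i in e:
        e[i] += deltas[i]
    # S-3: if an emotional stimulus occurred, add S * D_k to each affected emotion k
    if stimulus is not None:
        for k, d in D.get(stimulus, {}).items():
            e[k] += S * d
    return e
```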
By repeating this process, the artificial emotion system fluctuates at all times so as to offer human-like behaviors.
As set forth above, the emotion emulator according to the present invention is not only sensitive to emotional events in its working environment but also exhibits internal emotional behavior of its own, so that it can simulate human-like emotions. Since the emotion emulator also provides the stability of the agent's mood that is essential when it is applied to a human-computer interface, the agent can be made to behave more like a human by giving it pseudo-emotion according to the present invention. By implementing natural interaction with the user, the exchange of intentions between the user and the agent is greatly enhanced.
Claims (3)
1. An interactive man-machine interface, comprising:
first storage means for storing at least two sets of data representing intensities of at least two basic emotions in an artificial emotion system;
second storage means for storing a time dependent attenuation value;
decrementing means for periodically decrementing the data representing intensity of basic emotions stored in said first storage means according to said time dependent attenuation value;
detecting means for detecting an emotional stimulus input;
third storage means for storing a differential value representing sensitivity of basic emotions to emotional stimuli;
first revising means for revising the decremented data representing intensity of basic emotions according to the emotional stimulus input detected by said detecting means and the differential value stored in said third storage means;
setting means for setting a level of internal interaction between the sets of data representing intensity of basic emotions; and
second revising means for further revising the decremented data representing intensity of basic emotions according to the level of internal interaction set by said setting means.
2. A man-machine interface comprising:
means for detecting predetermined stimuli from a working environment;
means for storing at least one set of data with a magnitude representing the intensity level of at least one artificial emotion;
means for revising the magnitude of each said set of data in response to said predetermined stimuli;
means for displaying an interface agent resembling a person; and
means for choosing, based on the magnitude of each set of data, an expression of said interface agent from a plurality of different expressions.
3. A method of simulating human emotions in a man-machine interface system, comprising the steps of:
storing sets of data representing intensity levels of different basic emotions;
storing predetermined facial characteristics of an interface agent resembling a person, said predetermined facial characteristics representing the sets of data at different intensity levels;
storing trigger words;
storing desired changes in the intensity levels of said different basic emotions for each said trigger word;
detecting said trigger words in the speech of a user;
revising each set of data based on the detected trigger words and said desired changes in the intensity levels;
choosing characteristics representative of each revised set of data from said predetermined facial characteristics of the agent; and
displaying the agent with said chosen facial characteristics.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP4-169574 | 1992-06-26 | ||
JP16957492A JPH0612401A (en) | 1992-06-26 | 1992-06-26 | Emotion simulating device |
Publications (1)
Publication Number | Publication Date |
---|---|
US5367454A true US5367454A (en) | 1994-11-22 |
Family
ID=15889005
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US08/081,703 Expired - Fee Related US5367454A (en) | 1992-06-26 | 1993-06-25 | Interactive man-machine interface for simulating human emotions |
Country Status (2)
Country | Link |
---|---|
US (1) | US5367454A (en) |
JP (1) | JPH0612401A (en) |
Cited By (100)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5732232A (en) * | 1996-09-17 | 1998-03-24 | International Business Machines Corp. | Method and apparatus for directing the expression of emotion for a graphical user interface |
US5734794A (en) * | 1995-06-22 | 1998-03-31 | White; Tom H. | Method and system for voice-activated cell animation |
EP0883090A2 (en) * | 1997-06-06 | 1998-12-09 | AT&T Corp. | Method for generating photo-realistic animated characters |
US5886697A (en) * | 1993-05-24 | 1999-03-23 | Sun Microsystems, Inc. | Method and apparatus for improved graphical user interface having anthropomorphic characters |
US5943648A (en) * | 1996-04-25 | 1999-08-24 | Lernout & Hauspie Speech Products N.V. | Speech signal distribution system providing supplemental parameter associated data |
EP0976039A2 (en) * | 1997-03-21 | 2000-02-02 | International Business Machines Corporation | Apparatus and method for communicating between an intelligent agent and client computer process using disguised messages |
EP0978770A2 (en) * | 1998-08-06 | 2000-02-09 | Yamaha Hatsudoki Kabushiki Kaisha | System and method for controlling object by simulating emotions and a personality in the object |
EP0978790A1 (en) * | 1998-08-06 | 2000-02-09 | Yamaha Hatsudoki Kabushiki Kaisha | Control system and method for controlling object using emotions and personality generated in the object |
US6064383A (en) * | 1996-10-04 | 2000-05-16 | Microsoft Corporation | Method and system for selecting an emotional appearance and prosody for a graphical character |
US6118888A (en) * | 1997-02-28 | 2000-09-12 | Kabushiki Kaisha Toshiba | Multi-modal interface apparatus and method |
US6144938A (en) * | 1998-05-01 | 2000-11-07 | Sun Microsystems, Inc. | Voice user interface with personality |
US6163822A (en) * | 1998-05-04 | 2000-12-19 | Compaq Computer Corporation | Technique for controlling and processing a section of an interactive presentation simultaneously with detecting stimulus event in manner that overrides process |
US6175772B1 (en) * | 1997-04-11 | 2001-01-16 | Yamaha Hatsudoki Kabushiki Kaisha | User adaptive control of object having pseudo-emotions by learning adjustments of emotion generating and behavior generating algorithms |
EP1072297A1 (en) * | 1998-12-24 | 2001-01-31 | Sony Corporation | Information processor, portable device, electronic pet device, recorded medium on which information processing procedure is recorded, and information processing method |
US6192354B1 (en) | 1997-03-21 | 2001-02-20 | International Business Machines Corporation | Apparatus and method for optimizing the performance of computer tasks using multiple intelligent agents having varied degrees of domain knowledge |
EP1083490A2 (en) * | 1999-09-10 | 2001-03-14 | Yamaha Hatsudoki Kabushiki Kaisha | Interactive artificial intelligence |
EP1091306A2 (en) * | 1996-12-20 | 2001-04-11 | Sony Corporation | Method and apparatus for automatic sending of E-mail and automatic sending control program supplying medium |
WO2001048660A1 (en) * | 1999-12-29 | 2001-07-05 | Virtual Personalities, Inc. | Virtual human interface for conducting surveys |
US20010042057A1 (en) * | 2000-01-25 | 2001-11-15 | Nec Corporation | Emotion expressing device |
US6329994B1 (en) * | 1996-03-15 | 2001-12-11 | Zapa Digital Arts Ltd. | Programmable computer graphic objects |
US20010056364A1 (en) * | 2000-03-09 | 2001-12-27 | Diederiks Elmo Marcus Attila | Method of interacting with a consumer electronics system |
US20020010584A1 (en) * | 2000-05-24 | 2002-01-24 | Schultz Mitchell Jay | Interactive voice communication method and system for information and entertainment |
US20020010589A1 (en) * | 2000-07-24 | 2002-01-24 | Tatsushi Nashida | System and method for supporting interactive operations and storage medium |
US20020019678A1 (en) * | 2000-08-07 | 2002-02-14 | Takashi Mizokawa | Pseudo-emotion sound expression system |
WO2002013935A1 (en) | 2000-08-12 | 2002-02-21 | Smirnov Alexander V | Toys imitating characters behaviour |
EP1183997A2 (en) | 2000-09-02 | 2002-03-06 | Samsung Electronics Co., Ltd. | Apparatus and method for perceiving physical and emotional states of a person |
US6368111B2 (en) | 1997-06-24 | 2002-04-09 | Juan Legarda | System and method for interactively simulating and discouraging drug use |
US6401029B1 (en) * | 1999-03-19 | 2002-06-04 | Kabushikikaisha Equos Research | Assist device in designation of destination |
US6401080B1 (en) | 1997-03-21 | 2002-06-04 | International Business Machines Corporation | Intelligent agent with negotiation capability and method of negotiation therewith |
US20020072918A1 (en) * | 1999-04-12 | 2002-06-13 | White George M. | Distributed voice user interface |
EP1226550A1 (en) * | 1999-10-08 | 2002-07-31 | Electronic Arts, Inc. | Remote communication through visual representations |
WO2002067194A2 (en) * | 2001-02-20 | 2002-08-29 | I & A Research Inc. | System for modeling and simulating emotion states |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6337552B1 (en) * | 1999-01-20 | 2002-01-08 | Sony Corporation | Robot apparatus |
JPH07219741A (en) * | 1994-02-03 | 1995-08-18 | Nec Corp | Metaphor interface device |
JPH0916800A (en) * | 1995-07-04 | 1997-01-17 | Fuji Electric Co Ltd | Spoken dialogue system with face image |
JPH0973382A (en) * | 1995-09-04 | 1997-03-18 | Fujitsu Ltd | Computer system |
US5838316A (en) * | 1996-01-26 | 1998-11-17 | International Business Machines Corporation | Method and system for presenting a plurality of animated display objects to a user for selection on a graphical user interface in a data processing system |
JP3353651B2 (en) * | 1997-06-23 | 2002-12-03 | 松下電器産業株式会社 | Agent interface device |
JP3792882B2 (en) * | 1998-03-17 | 2006-07-05 | 株式会社東芝 | Emotion generation device and emotion generation method |
JP3513397B2 (en) * | 1998-07-29 | 2004-03-31 | 日本電信電話株式会社 | Character animation realizing method and recording medium storing the program |
JP3501123B2 (en) * | 1998-11-30 | 2004-03-02 | ソニー株式会社 | Robot apparatus and behavior control method for robot apparatus |
JP4366617B2 (en) * | 1999-01-25 | 2009-11-18 | ソニー株式会社 | Robot device |
JP3514372B2 (en) | 1999-06-04 | 2004-03-31 | 日本電気株式会社 | Multimodal dialogue device |
JP2001075698A (en) * | 2000-08-07 | 2001-03-23 | Nec Corp | Metaphor interface device |
JP2002251473A (en) * | 2001-02-23 | 2002-09-06 | Asahi Kasei Corp | Safety checking system, terminal and program |
JP2014085952A (en) * | 2012-10-25 | 2014-05-12 | Kddi Corp | Expression generation device and program |
JP2014167737A (en) * | 2013-02-28 | 2014-09-11 | Kddi Corp | Device and program for creating gestures |
JP6170834B2 (en) * | 2013-12-26 | 2017-07-26 | Kddi株式会社 | Emotion expression device, emotion expression method, and computer program |
US10429803B2 (en) | 2014-09-23 | 2019-10-01 | Intel Corporation | Multifactor intelligent agent control |
WO2017170404A1 (en) * | 2016-03-30 | 2017-10-05 | 光吉 俊二 | Intention emergence device, intention emergence method, and intention emergence program |
JP6419134B2 (en) * | 2016-11-25 | 2018-11-07 | 本田技研工業株式会社 | Vehicle emotion display device, vehicle emotion display method, and vehicle emotion display program |
- 1992-06-26: JP application JP16957492A filed (published as JPH0612401A, status: Pending)
- 1993-06-25: US application US08/081,703 filed (granted as US5367454A, status: Expired - Fee Related)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4646172A (en) * | 1955-06-14 | 1987-02-24 | Lemelson Jerome H | Video system and method |
US4104625A (en) * | 1977-01-12 | 1978-08-01 | Atari, Inc. | Apparatus for providing facial image animation |
US4569026A (en) * | 1979-02-05 | 1986-02-04 | Best Robert M | TV Movies that talk back |
US4459114A (en) * | 1982-10-25 | 1984-07-10 | Barwick John H | Simulation system trainer |
US4642710A (en) * | 1985-03-15 | 1987-02-10 | Milton Bradley International, Inc. | Animated display controlled by an audio device |
JPH0283727A (en) * | 1988-09-21 | 1990-03-23 | Matsushita Electric Ind Co Ltd | Voice interaction equipment |
Non-Patent Citations (2)
Title |
---|
Suenaga et al., "Human Reader: An Advanced Human Machine Interface Based On Human Images and Speech", The Transactions of the Institute of Electronics, Information and Communication Engineers, D-II, vol. J75-D-II No. 2 pp. 190-202, Feb. 1992. |
Cited By (189)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6160551A (en) * | 1993-05-24 | 2000-12-12 | Sun Microsystems, Inc. | Graphical user interface for displaying and manipulating objects |
US5886697A (en) * | 1993-05-24 | 1999-03-23 | Sun Microsystems, Inc. | Method and apparatus for improved graphical user interface having anthropomorphic characters |
US7240289B2 (en) | 1993-05-24 | 2007-07-03 | Sun Microsystems, Inc. | Graphical user interface for displaying and navigating in a directed graph structure |
US5995106A (en) * | 1993-05-24 | 1999-11-30 | Sun Microsystems, Inc. | Graphical user interface for displaying and navigating in a directed graph structure |
US6154209A (en) * | 1993-05-24 | 2000-11-28 | Sun Microsystems, Inc. | Graphical user interface with method and apparatus for interfacing to remote devices |
US6344861B1 (en) | 1993-05-24 | 2002-02-05 | Sun Microsystems, Inc. | Graphical user interface for displaying and manipulating objects |
US5734794A (en) * | 1995-06-22 | 1998-03-31 | White; Tom H. | Method and system for voice-activated cell animation |
US6329994B1 (en) * | 1996-03-15 | 2001-12-11 | Zapa Digital Arts Ltd. | Programmable computer graphic objects |
US6331861B1 (en) | 1996-03-15 | 2001-12-18 | Gizmoz Ltd. | Programmable computer graphic objects |
US7885912B1 (en) | 1996-03-25 | 2011-02-08 | Stoneman Martin L | Humanoid machine systems, methods, and ontologies |
US8983889B1 (en) | 1996-03-25 | 2015-03-17 | Martin L. Stoneman | Autonomous humanoid cognitive systems |
US20040243529A1 (en) * | 1996-03-25 | 2004-12-02 | Stoneman Martin L. | Machine computational-processing systems for simulated-humanoid autonomous decision systems |
US5943648A (en) * | 1996-04-25 | 1999-08-24 | Lernout & Hauspie Speech Products N.V. | Speech signal distribution system providing supplemental parameter associated data |
US5732232A (en) * | 1996-09-17 | 1998-03-24 | International Business Machines Corp. | Method and apparatus for directing the expression of emotion for a graphical user interface |
US6064383A (en) * | 1996-10-04 | 2000-05-16 | Microsoft Corporation | Method and system for selecting an emotional appearance and prosody for a graphical character |
EP1091306A2 (en) * | 1996-12-20 | 2001-04-11 | Sony Corporation | Method and apparatus for automatic sending of E-mail and automatic sending control program supplying medium |
US6118888A (en) * | 1997-02-28 | 2000-09-12 | Kabushiki Kaisha Toshiba | Multi-modal interface apparatus and method |
US6345111B1 (en) | 1997-02-28 | 2002-02-05 | Kabushiki Kaisha Toshiba | Multi-modal interface apparatus and method |
US6401080B1 (en) | 1997-03-21 | 2002-06-04 | International Business Machines Corporation | Intelligent agent with negotiation capability and method of negotiation therewith |
US7386522B1 (en) | 1997-03-21 | 2008-06-10 | International Business Machines Corporation | Optimizing the performance of computer tasks using intelligent agent with multiple program modules having varied degrees of domain knowledge |
US6085178A (en) * | 1997-03-21 | 2000-07-04 | International Business Machines Corporation | Apparatus and method for communicating between an intelligent agent and client computer process using disguised messages |
US7908225B1 (en) | 1997-03-21 | 2011-03-15 | International Business Machines Corporation | Intelligent agent with negotiation capability and method of negotiation therewith |
EP0976039A4 (en) * | 1997-03-21 | 2005-07-20 | Ibm | DEVICE AND METHOD FOR TRANSMITTING BETWEEN AN INTELLIGENT MEDIUM AND USER COMPUTER USING MASKED MESSAGES |
US6192354B1 (en) | 1997-03-21 | 2001-02-20 | International Business Machines Corporation | Apparatus and method for optimizing the performance of computer tasks using multiple intelligent agents having varied degrees of domain knowledge |
EP0976039A2 (en) * | 1997-03-21 | 2000-02-02 | International Business Machines Corporation | Apparatus and method for communicating between an intelligent agent and client computer process using disguised messages |
US6175772B1 (en) * | 1997-04-11 | 2001-01-16 | Yamaha Hatsudoki Kabushiki Kaisha | User adaptive control of object having pseudo-emotions by learning adjustments of emotion generating and behavior generating algorithms |
EP0883090A2 (en) * | 1997-06-06 | 1998-12-09 | AT&T Corp. | Method for generating photo-realistic animated characters |
EP0883090A3 (en) * | 1997-06-06 | 1999-12-15 | AT&T Corp. | Method for generating photo-realistic animated characters |
US6368111B2 (en) | 1997-06-24 | 2002-04-09 | Juan Legarda | System and method for interactively simulating and discouraging drug use |
US7159009B2 (en) | 1997-12-17 | 2007-01-02 | Sony Corporation | Method and apparatus for automatic sending of e-mail and automatic sending control program supplying medium |
US20030005062A1 (en) * | 1997-12-17 | 2003-01-02 | Kazuhiko Hachiya | Method and apparatus for automatic sending of E-mail and automatic sending control program supplying medium |
US6334103B1 (en) | 1998-05-01 | 2001-12-25 | General Magic, Inc. | Voice user interface with personality |
US20080103777A1 (en) * | 1998-05-01 | 2008-05-01 | Ben Franklin Patent Holding Llc | Voice User Interface With Personality |
US20060106612A1 (en) * | 1998-05-01 | 2006-05-18 | Ben Franklin Patent Holding Llc | Voice user interface with personality |
US9055147B2 (en) | 1998-05-01 | 2015-06-09 | Intellectual Ventures I Llc | Voice user interface with personality |
US6144938A (en) * | 1998-05-01 | 2000-11-07 | Sun Microsystems, Inc. | Voice user interface with personality |
US7058577B2 (en) | 1998-05-01 | 2006-06-06 | Ben Franklin Patent Holding, Llc | Voice user interface with personality |
US7266499B2 (en) | 1998-05-01 | 2007-09-04 | Ben Franklin Patent Holding Llc | Voice user interface with personality |
US20050091056A1 (en) * | 1998-05-01 | 2005-04-28 | Surace Kevin J. | Voice user interface with personality |
US6163822A (en) * | 1998-05-04 | 2000-12-19 | Compaq Computer Corporation | Technique for controlling and processing a section of an interactive presentation simultaneously with detecting stimulus event in manner that overrides process |
US6249780B1 (en) * | 1998-08-06 | 2001-06-19 | Yamaha Hatsudoki Kabushiki Kaisha | Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object |
US6901390B2 (en) | 1998-08-06 | 2005-05-31 | Yamaha Hatsudoki Kabushiki Kaisha | Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object |
EP0978770A3 (en) * | 1998-08-06 | 2001-11-07 | Yamaha Hatsudoki Kabushiki Kaisha | System and method for controlling object by simulating emotions and a personality in the object |
US6230111B1 (en) * | 1998-08-06 | 2001-05-08 | Yamaha Hatsudoki Kabushiki Kaisha | Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object |
EP0978790A1 (en) * | 1998-08-06 | 2000-02-09 | Yamaha Hatsudoki Kabushiki Kaisha | Control system and method for controlling object using emotions and personality generated in the object |
EP0978770A2 (en) * | 1998-08-06 | 2000-02-09 | Yamaha Hatsudoki Kabushiki Kaisha | System and method for controlling object by simulating emotions and a personality in the object |
US6476815B1 (en) * | 1998-10-19 | 2002-11-05 | Canon Kabushiki Kaisha | Information processing apparatus and method and information transmission system |
US20050091305A1 (en) * | 1998-10-23 | 2005-04-28 | General Magic | Network system extensible by users |
US7949752B2 (en) | 1998-10-23 | 2011-05-24 | Ben Franklin Patent Holding Llc | Network system extensible by users |
US8326914B2 (en) | 1998-10-23 | 2012-12-04 | Ben Franklin Patent Holding Llc | Network system extensible by users |
US7648365B2 (en) | 1998-11-25 | 2010-01-19 | The Johns Hopkins University | Apparatus and method for training using a human interaction simulator |
US7198490B1 (en) * | 1998-11-25 | 2007-04-03 | The Johns Hopkins University | Apparatus and method for training using a human interaction simulator |
US7048544B2 (en) * | 1998-11-25 | 2006-05-23 | The Johns Hopkins University | Apparatus and method for training using a human interaction simulator |
US20040018477A1 (en) * | 1998-11-25 | 2004-01-29 | Olsen Dale E. | Apparatus and method for training using a human interaction simulator |
US20070243517A1 (en) * | 1998-11-25 | 2007-10-18 | The Johns Hopkins University | Apparatus and method for training using a human interaction simulator |
US7076331B1 (en) | 1998-11-30 | 2006-07-11 | Sony Corporation | Robot, method of robot control, and program recording medium |
US7804494B2 (en) | 1998-12-15 | 2010-09-28 | Intel Corporation | Pointing device with integrated audio input and associated methods |
US20080170049A1 (en) * | 1998-12-15 | 2008-07-17 | Larson Jim A | Pointing device with integrated audio input and associated methods |
US7233321B1 (en) | 1998-12-15 | 2007-06-19 | Intel Corporation | Pointing device with integrated audio input |
US7656397B2 (en) | 1998-12-15 | 2010-02-02 | Intel Corporation | Pointing device with integrated audio input and associated methods |
EP1748421A3 (en) * | 1998-12-24 | 2007-07-25 | Sony Corporation | Speech input processing with emotion model based response generation |
KR100751957B1 (en) * | 1998-12-24 | 2007-08-24 | 소니 가부시끼 가이샤 | Electronic PET device, information processing device and information processing method |
EP1072297A4 (en) * | 1998-12-24 | 2005-12-14 | Sony Corp | INFORMATION PROCESSING DEVICE, PORTABLE DEVICE, ELECTRONIC PET DEVICE, RECORDING MEDIUM FOR INFORMATION PROCESSING AND INFORMATION PROCESSING METHOD |
EP1072297A1 (en) * | 1998-12-24 | 2001-01-31 | Sony Corporation | Information processor, portable device, electronic pet device, recorded medium on which information processing procedure is recorded, and information processing method |
KR100702645B1 (en) * | 1998-12-24 | 2007-04-02 | 소니 가부시끼 가이샤 | Information processing device, electronic pet device and information processing method |
US6792406B1 (en) * | 1998-12-24 | 2004-09-14 | Sony Corporation | Information processing apparatus, portable device, electronic pet apparatus recording medium storing information processing procedures and information processing method |
EP1748421A2 (en) * | 1998-12-24 | 2007-01-31 | Sony Corporation | Speech input processing with emotion model based response generation |
US6401029B1 (en) * | 1999-03-19 | 2002-06-04 | Kabushikikaisha Equos Research | Assist device in designation of destination |
US8321411B2 (en) | 1999-03-23 | 2012-11-27 | Microstrategy, Incorporated | System and method for management of an automatic OLAP report broadcast system |
US9477740B1 (en) | 1999-03-23 | 2016-10-25 | Microstrategy, Incorporated | System and method for management of an automatic OLAP report broadcast system |
US20080294713A1 (en) * | 1999-03-23 | 2008-11-27 | Saylor Michael J | System and method for management of an automatic olap report broadcast system |
US6559870B1 (en) | 1999-03-26 | 2003-05-06 | Canon Kabushiki Kaisha | User interface method for determining a layout position of an agent, information processing apparatus, and program storage medium |
US20030193504A1 (en) * | 1999-04-07 | 2003-10-16 | Fuji Xerox Co., Ltd. | System for designing and rendering personalities for autonomous synthetic characters |
US8078469B2 (en) | 1999-04-12 | 2011-12-13 | White George M | Distributed voice user interface |
US20050091057A1 (en) * | 1999-04-12 | 2005-04-28 | General Magic, Inc. | Voice application development methodology |
US20020072918A1 (en) * | 1999-04-12 | 2002-06-13 | White George M. | Distributed voice user interface |
US20060293897A1 (en) * | 1999-04-12 | 2006-12-28 | Ben Franklin Patent Holding Llc | Distributed voice user interface |
US8396710B2 (en) | 1999-04-12 | 2013-03-12 | Ben Franklin Patent Holding Llc | Distributed voice user interface |
US7769591B2 (en) | 1999-04-12 | 2010-08-03 | White George M | Distributed voice user interface |
US8762155B2 (en) | 1999-04-12 | 2014-06-24 | Intellectual Ventures I Llc | Voice integration platform |
US8607138B2 (en) | 1999-05-28 | 2013-12-10 | Microstrategy, Incorporated | System and method for OLAP report generation with spreadsheet report within the network user interface |
US9208213B2 (en) | 1999-05-28 | 2015-12-08 | Microstrategy, Incorporated | System and method for network user interface OLAP report formatting |
US10592705B2 (en) | 1999-05-28 | 2020-03-17 | Microstrategy, Incorporated | System and method for network user interface report formatting |
EP1083490A3 (en) * | 1999-09-10 | 2003-12-10 | Yamaha Hatsudoki Kabushiki Kaisha | Interactive artificial intelligence |
EP1083490A2 (en) * | 1999-09-10 | 2001-03-14 | Yamaha Hatsudoki Kabushiki Kaisha | Interactive artificial intelligence |
US8094788B1 (en) * | 1999-09-13 | 2012-01-10 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services with customized message depending on recipient |
US8995628B2 (en) | 1999-09-13 | 2015-03-31 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services with closed loop transaction processing |
US8130918B1 (en) | 1999-09-13 | 2012-03-06 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services, with closed loop transaction processing |
EP1226550A1 (en) * | 1999-10-08 | 2002-07-31 | Electronic Arts, Inc. | Remote communication through visual representations |
EP1226550A4 (en) * | 1999-10-08 | 2005-03-23 | Electronic Arts Inc | Remote communication through visual representations |
US7253817B1 (en) | 1999-12-29 | 2007-08-07 | Virtual Personalities, Inc. | Virtual human interface for conducting surveys |
US6826540B1 (en) * | 1999-12-29 | 2004-11-30 | Virtual Personalities, Inc. | Virtual human interface for conducting surveys |
WO2001048660A1 (en) * | 1999-12-29 | 2001-07-05 | Virtual Personalities, Inc. | Virtual human interface for conducting surveys |
US20010042057A1 (en) * | 2000-01-25 | 2001-11-15 | Nec Corporation | Emotion expressing device |
US20010056364A1 (en) * | 2000-03-09 | 2001-12-27 | Diederiks Elmo Marcus Attila | Method of interacting with a consumer electronics system |
US6778191B2 (en) * | 2000-03-09 | 2004-08-17 | Koninklijke Philips Electronics N.V. | Method of interacting with a consumer electronics system |
US20050008246A1 (en) * | 2000-04-13 | 2005-01-13 | Fuji Photo Film Co., Ltd. | Image Processing method |
US20020010584A1 (en) * | 2000-05-24 | 2002-01-24 | Schultz Mitchell Jay | Interactive voice communication method and system for information and entertainment |
US20020010589A1 (en) * | 2000-07-24 | 2002-01-24 | Tatsushi Nashida | System and method for supporting interactive operations and storage medium |
US7426467B2 (en) * | 2000-07-24 | 2008-09-16 | Sony Corporation | System and method for supporting interactive user interface operations and storage medium |
US20020019678A1 (en) * | 2000-08-07 | 2002-02-14 | Takashi Mizokawa | Pseudo-emotion sound expression system |
WO2002013935A1 (en) | 2000-08-12 | 2002-02-21 | Smirnov Alexander V | Toys imitating characters behaviour |
US6656116B2 (en) | 2000-09-02 | 2003-12-02 | Samsung Electronics Co. Ltd. | Apparatus and method for perceiving physical and emotional state |
EP1183997A2 (en) | 2000-09-02 | 2002-03-06 | Samsung Electronics Co., Ltd. | Apparatus and method for perceiving physical and emotional states of a person |
US20040075677A1 (en) * | 2000-11-03 | 2004-04-22 | Loyall A. Bryan | Interactive character system |
US7478047B2 (en) * | 2000-11-03 | 2009-01-13 | Zoesis, Inc. | Interactive character system |
US6964023B2 (en) * | 2001-02-05 | 2005-11-08 | International Business Machines Corporation | System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input |
US20030028383A1 (en) * | 2001-02-20 | 2003-02-06 | I & A Research Inc. | System for modeling and simulating emotion states |
WO2002067194A3 (en) * | 2001-02-20 | 2003-10-30 | I & A Res Inc | System for modeling and simulating emotion states |
WO2002067194A2 (en) * | 2001-02-20 | 2002-08-29 | I & A Research Inc. | System for modeling and simulating emotion states |
US20030163320A1 (en) * | 2001-03-09 | 2003-08-28 | Nobuhide Yamazaki | Voice synthesis device |
US20020191021A1 (en) * | 2001-03-12 | 2002-12-19 | Lilian Labelle | Method and device for validating parameters defining an image |
US7079711B2 (en) * | 2001-03-12 | 2006-07-18 | Canon Kabushiki Kaisha | Method and device for validating parameters defining an image |
US20030040911A1 (en) * | 2001-08-14 | 2003-02-27 | Oudeyer Pierre Yves | Method and apparatus for controlling the operation of an emotion synthesising device |
US7457752B2 (en) * | 2001-08-14 | 2008-11-25 | Sony France S.A. | Method and apparatus for controlling the operation of an emotion synthesizing device |
US9729690B2 (en) | 2001-08-21 | 2017-08-08 | Ben Franklin Patent Holding Llc | Dynamic interactive voice interface |
US7920682B2 (en) | 2001-08-21 | 2011-04-05 | Byrne William J | Dynamic interactive voice interface |
US20040179659A1 (en) * | 2001-08-21 | 2004-09-16 | Byrne William J. | Dynamic interactive voice interface |
US20030067486A1 (en) * | 2001-10-06 | 2003-04-10 | Samsung Electronics Co., Ltd. | Apparatus and method for synthesizing emotions based on the human nervous system |
KR100624403B1 (en) * | 2001-10-06 | 2006-09-15 | 삼성전자주식회사 | Nervous system based emotion synthesis apparatus and method in human body |
US7333969B2 (en) * | 2001-10-06 | 2008-02-19 | Samsung Electronics Co., Ltd. | Apparatus and method for synthesizing emotions based on the human nervous system |
US20030069847A1 (en) * | 2001-10-10 | 2003-04-10 | Ncr Corporation | Self-service terminal |
DE10154423A1 (en) * | 2001-11-06 | 2003-05-15 | Deutsche Telekom Ag | Speech controlled interface for accessing an information or computer system in which a digital assistant analyses user input and its own output so that it can be personalized to match user requirements |
US20030108241A1 (en) * | 2001-12-11 | 2003-06-12 | Koninklijke Philips Electronics N.V. | Mood based virtual photo album |
US6931147B2 (en) | 2001-12-11 | 2005-08-16 | Koninklijke Philips Electronics N.V. | Mood based virtual photo album |
KR100858079B1 (en) * | 2002-01-03 | 2008-09-10 | 삼성전자주식회사 | Method and apparatus for generating agent emotion |
US20030156304A1 (en) * | 2002-02-19 | 2003-08-21 | Eastman Kodak Company | Method for providing affective information in an imaging system |
US7327505B2 (en) * | 2002-02-19 | 2008-02-05 | Eastman Kodak Company | Method for providing affective information in an imaging system |
DE10210799B4 (en) * | 2002-03-12 | 2006-04-27 | Siemens Ag | Adaptation of a human-machine interface depending on a psycho-profile and a current state of a user |
US20030174122A1 (en) * | 2002-03-12 | 2003-09-18 | Siemens Ag | Adaptation of a human-machine interface as a function of a psychological profile and a current state of being of a user |
DE10210799A1 (en) * | 2002-03-12 | 2004-05-19 | Siemens Ag | Adaptation of a human-machine interface depending on a psycho profile and the current state of a user |
US7283962B2 (en) * | 2002-03-21 | 2007-10-16 | United States Of America As Represented By The Secretary Of The Army | Methods and systems for detecting, measuring, and monitoring stress in speech |
US20070213981A1 (en) * | 2002-03-21 | 2007-09-13 | Meyerhoff James L | Methods and systems for detecting, measuring, and monitoring stress in speech |
US6700965B1 (en) | 2002-05-03 | 2004-03-02 | At&T Corp. | Identifier-triggered personalized customer relations management service |
US20030234871A1 (en) * | 2002-06-25 | 2003-12-25 | Squilla John R. | Apparatus and method of modifying a portrait image |
US20040054534A1 (en) * | 2002-09-13 | 2004-03-18 | Junqua Jean-Claude | Client-server voice customization |
US20060147884A1 (en) * | 2002-09-26 | 2006-07-06 | Anthony Durrell | Psychometric instruments and methods for mood analysis, psychoeducation, mood health promotion, mood health maintenance and mood disorder therapy |
GB2409734A (en) * | 2002-11-11 | 2005-07-06 | Alfred Schurmann | Determination and control of activities of an emotional system |
GB2409734B (en) * | 2002-11-11 | 2007-01-03 | Alfred Schurmann | Determination and control of activities of an emotional system |
WO2004044837A1 (en) * | 2002-11-11 | 2004-05-27 | Alfred Schurmann | Determination and control of the activities of an emotional system |
EP1420366A2 (en) * | 2002-11-14 | 2004-05-19 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
US7154510B2 (en) | 2002-11-14 | 2006-12-26 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
EP1420366A3 (en) * | 2002-11-14 | 2006-03-29 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
US20040095359A1 (en) * | 2002-11-14 | 2004-05-20 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
US20060129405A1 (en) * | 2003-01-12 | 2006-06-15 | Shlomo Elfanbaum | Method and device for determining a personal happiness index and improving it |
US20050143138A1 (en) * | 2003-09-05 | 2005-06-30 | Samsung Electronics Co., Ltd. | Proactive user interface including emotional agent |
US7725419B2 (en) * | 2003-09-05 | 2010-05-25 | Samsung Electronics Co., Ltd | Proactive user interface including emotional agent |
EP1522920A3 (en) * | 2003-09-05 | 2007-01-03 | Samsung Electronics Co., Ltd. | Proactive user interface including emotional agent |
EP1522918A3 (en) * | 2003-09-05 | 2007-04-04 | Samsung Electronics Co., Ltd. | Proactive user interface |
EP1522920A2 (en) * | 2003-09-05 | 2005-04-13 | Samsung Electronics Co., Ltd. | Proactive user interface including emotional agent |
US20050054381A1 (en) * | 2003-09-05 | 2005-03-10 | Samsung Electronics Co., Ltd. | Proactive user interface |
US20050114142A1 (en) * | 2003-11-20 | 2005-05-26 | Masamichi Asukai | Emotion calculating apparatus and method and mobile communication apparatus |
US20070135689A1 (en) * | 2003-11-20 | 2007-06-14 | Sony Corporation | Emotion calculating apparatus and method and mobile communication apparatus |
US20060248461A1 (en) * | 2005-04-29 | 2006-11-02 | Omron Corporation | Socially intelligent agent software |
US20060282493A1 (en) * | 2005-06-14 | 2006-12-14 | Omron Corporation And Stanford University | Apparatus and method for socially intelligent virtual entity |
US7944448B2 (en) | 2005-06-14 | 2011-05-17 | Omron Corporation | Apparatus and method for socially intelligent virtual entity |
US8209182B2 (en) * | 2005-11-30 | 2012-06-26 | University Of Southern California | Emotion recognition system |
US20080052080A1 (en) * | 2005-11-30 | 2008-02-28 | University Of Southern California | Emotion Recognition System |
US20080068397A1 (en) * | 2006-09-14 | 2008-03-20 | Carey James E | Emotion-Based Digital Video Alteration |
US20080177685A1 (en) * | 2006-11-06 | 2008-07-24 | Kadri Faisal L | Artificial Psychology Dialog Player with Aging Simulation |
US7644060B2 (en) * | 2006-11-06 | 2010-01-05 | Kadri Faisal L | Artificial psychology dialog player with aging simulation |
US20090210476A1 (en) * | 2008-02-19 | 2009-08-20 | Joseph Arie Levy | System and method for providing tangible feedback according to a context and personality state |
US20090311654A1 (en) * | 2008-06-16 | 2009-12-17 | Pedro Amador Lopez | Multistage Automatic Coaching Methodology |
US20100098341A1 (en) * | 2008-10-21 | 2010-04-22 | Shang-Tzu Ju | Image recognition device for displaying multimedia data |
US20100149573A1 (en) * | 2008-12-17 | 2010-06-17 | Xerox Corporation | System and method of providing image forming machine power up status information |
US10398366B2 (en) | 2010-07-01 | 2019-09-03 | Nokia Technologies Oy | Responding to changes in emotional condition of a user |
US20120011477A1 (en) * | 2010-07-12 | 2012-01-12 | Nokia Corporation | User interfaces |
US20120130196A1 (en) * | 2010-11-24 | 2012-05-24 | Fujitsu Limited | Mood Sensor |
US8928671B2 (en) | 2010-11-24 | 2015-01-06 | Fujitsu Limited | Recording and analyzing data on a 3D avatar |
US20170163831A1 (en) * | 2010-12-27 | 2017-06-08 | Sharp Kabushiki Kaisha | Image forming apparatus having display section displaying environmental certification information during startup |
US9992369B2 (en) * | 2010-12-27 | 2018-06-05 | Sharp Kabushiki Kaisha | Image forming apparatus having display section displaying environmental certification information during startup and being foldable into a generally flush accommodated state |
US20130311528A1 (en) * | 2012-04-25 | 2013-11-21 | Raanan Liebermann | Communications with a proxy for the departed and other devices and services for communication and presentation in virtual reality |
US20130300645A1 (en) * | 2012-05-12 | 2013-11-14 | Mikhail Fedorov | Human-Computer Interface System |
US9002768B2 (en) * | 2012-05-12 | 2015-04-07 | Mikhail Fedorov | Human-computer interface system |
US9679553B2 (en) * | 2012-11-08 | 2017-06-13 | Nec Corporation | Conversation-sentence generation device, conversation-sentence generation method, and conversation-sentence generation program |
US20150279350A1 (en) * | 2012-11-08 | 2015-10-01 | Nec Corporation | Conversation-sentence generation device, conversation-sentence generation method, and conversation-sentence generation program |
US20160148043A1 (en) * | 2013-06-20 | 2016-05-26 | Elwha Llc | Systems and methods for enhancement of facial expressions |
US9792490B2 (en) * | 2013-06-20 | 2017-10-17 | Elwha Llc | Systems and methods for enhancement of facial expressions |
TWI722160B (en) * | 2016-03-30 | 2021-03-21 | 光吉俊二 | Meaning creation device, meaning creation method and meaning creation program |
US10325616B2 (en) * | 2016-03-30 | 2019-06-18 | Japan Mathematical Institute Inc. | Intention emergence device, intention emergence method, and intention emergence program |
US20240316782A1 (en) * | 2016-11-10 | 2024-09-26 | Warner Bros. Entertainment Inc. | Social robot with environmental control feature |
US12011822B2 (en) * | 2016-11-10 | 2024-06-18 | Warner Bros. Entertainment Inc. | Social robot with environmental control feature |
US11370125B2 (en) * | 2016-11-10 | 2022-06-28 | Warner Bros. Entertainment Inc. | Social robot with environmental control feature |
US20220395983A1 (en) * | 2016-11-10 | 2022-12-15 | Warner Bros. Entertainment Inc. | Social robot with environmental control feature |
CN110073391A (en) * | 2016-12-28 | 2019-07-30 | 本田技研工业株式会社 | Rental system and evaluation system |
CN110073391B (en) * | 2016-12-28 | 2022-02-25 | 本田技研工业株式会社 | Rental system and evaluation system |
US10535344B2 (en) * | 2017-06-08 | 2020-01-14 | Microsoft Technology Licensing, Llc | Conversational system user experience |
US20180358008A1 (en) * | 2017-06-08 | 2018-12-13 | Microsoft Technology Licensing, Llc | Conversational system user experience |
US10957089B2 (en) * | 2018-09-13 | 2021-03-23 | International Business Machines Corporation | Animation generation |
Also Published As
Publication number | Publication date |
---|---|
JPH0612401A (en) | 1994-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5367454A (en) | Interactive man-machine interface for simulating human emotions | |
Levin et al. | A stochastic model of human-machine interaction for learning dialog strategies | |
US20240321011A1 (en) | Nonverbal Information Generation Apparatus, Nonverbal Information Generation Model Learning Apparatus, Methods, And Programs | |
Cassell et al. | Beat: the behavior expression animation toolkit | |
US5819243A (en) | System with collaborative interface agent | |
US20210370519A1 (en) | Nonverbal information generation apparatus, nonverbal information generation model learning apparatus, methods, and programs | |
US20030193504A1 (en) | System for designing and rendering personalities for autonomous synthetic characters | |
Martin | TYCOON: Theoretical framework and software tools for multimodal interfaces | |
US20070250464A1 (en) | Historical figures in today's society | |
US12165672B2 (en) | Nonverbal information generation apparatus, method, and program | |
US11404063B2 (en) | Nonverbal information generation apparatus, nonverbal information generation model learning apparatus, methods, and programs | |
JPWO2005093650A1 (en) | Will expression model device, psychological effect program, will expression simulation method | |
Álvarez et al. | An emotional model for a guide robot | |
US20220129627A1 (en) | Multi-persona social agent | |
CN116966574A (en) | Interaction processing method and device for non-player character, electronic equipment and storage medium | |
US20040179043A1 (en) | Method and system for animating a figure in three dimensions | |
US7016880B1 (en) | Event based system for use within the creation and implementation of educational simulations | |
Prendinger et al. | MPML and SCREAM: Scripting the bodies and minds of life-like characters | |
WO2020256992A1 (en) | System and method for intelligent dialogue based on knowledge tracing | |
Fares et al. | Zero-shot style transfer for multimodal data-driven gesture synthesis | |
Sousa Silva et al. | Forget about it: Entity-level working memory models for referring expression generation in robot cognitive architectures | |
WO2007092795A2 (en) | Method for movie animation | |
Schaat | SiMA-C: a foundational mental architecture | |
Chen et al. | Equipping a lifelike animated agent with a mind | |
CN114127791A (en) | Cognitive mode setting in an animator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: FUJI XEROX CO., LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KAWAMOTO, KOUSHI;OMURA, KENGO;REEL/FRAME:006614/0231 Effective date: 19930617 |
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 19981122 |
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |