US11972277B2 - Emotionally driven software interaction experience - Google Patents
Emotionally driven software interaction experience
- Publication number
- US11972277B2 (application US16/874,987)
- Authority
- US
- United States
- Prior art keywords
- user
- input
- root
- emotional
- user interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/453—Help systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
- G06F16/436—Filtering based on additional data, e.g. user or group profiles using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/542—Event management; Broadcasting; Multicasting; Notifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Definitions
- the present invention generally relates to providing a dynamically interactive software experience to a user that is dependent on an emotional state or intention. More particularly, the invention relates to ascertaining a user emotional goal to provide the user recommendations regarding the fulfillment of that goal through a customized and dynamic software experience.
- Embodiments of the present invention are directed to systems and methods that ascertain an emotional goal and can therefore provide specific recommendations directed towards achieving that goal.
- the embodiments can further provide a dynamic and interactive user interaction experience that is consistent with an identified emotion.
- a computer implemented method for ascertaining an emotional goal includes receiving, via an emotionally responsive computerized system having a user interface communicatively coupled to a networked user device including a processor device, a first user-input concerning a purpose of a user's interaction with a software interface. It can further include registering a second user-input indicative of a target person to whom the purpose of the user's interaction pertains and prompting the user to provide at least one root motivator comprising a root emotion or a root reason for the interaction.
- Some variations of the invention also include generating a user-perceptible output and a set of user interface elements dependent on at least one root motivator along with obtaining the user's specific emotional goal with respect to the target person on the basis of user-inputs in response to a presentation of a sequence of user interface elements to provide the user via the software interface a recommendation regarding a fulfillment of the specific emotional goal.
- generating the user-perceptible output and the set of user interface elements can include generating, as the output and as elements of the set, images or sounds that are semantically, tonally, thematically, or chromatically related to at least one root motivator.
- the generation of a subsequent user interface element and of a subsequent output can each depend, at least in part, on a preceding user interface element interaction, and each subsequent user interface element in the sequence presented to the user that includes a prompt or query can elicit a description of the user's emotional goal with higher specificity than the description elicited by a preceding prompt or query.
- the user interface and its elements, as well as outputs presented to a user can be visually displayed on a screen coupled to a touch sensitive input means that the user can use to interact with embodiments of the invention.
- a user interface device through which user interface elements and outputs can be presented to a user can be an audio input-output device configured to emit sounds and receive voice input.
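- As a minimal, non-authoritative sketch of this dual presentation capability, the Python fragment below abstracts the visual and audio channels behind a common interface; all class and method names (PromptChannel, ScreenChannel, VoiceChannel, present, collect) are hypothetical, and print/input merely stand in for real rendering, text-to-speech, and speech-to-text facilities.

```python
from abc import ABC, abstractmethod

class PromptChannel(ABC):
    """Abstract I/O channel through which UI elements and outputs are presented."""

    @abstractmethod
    def present(self, element: str) -> None:
        """Show or speak a prompt, question, or affirmation to the user."""

    @abstractmethod
    def collect(self) -> str:
        """Return the user's response (typed, touched, or spoken)."""

class ScreenChannel(PromptChannel):
    """Visual display coupled to a touch-sensitive or keyboard input means."""
    def present(self, element: str) -> None:
        print(element)                      # stand-in for on-screen rendering

    def collect(self) -> str:
        return input("> ")                  # stand-in for touch/keyboard entry

class VoiceChannel(PromptChannel):
    """Audio input-output device that emits sounds and receives voice input."""
    def present(self, element: str) -> None:
        print(f"(speaking) {element}")      # stand-in for text-to-speech output

    def collect(self) -> str:
        return input("(listening) ")        # stand-in for speech-to-text input
```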
- a non-transitory computer readable medium includes instructions that are executable by a computing device which when executed can cause the computing device to register a first user-input concerning a purpose of a user's interaction with a software interface of the computing device as well as a second user-input indicative of a target person to whom the purpose of the user's interaction pertains.
- the instructions can also cause the computing device to prompt the user to provide one or more root motivators, including a root emotion or a root reason, for the interaction and then generate a user-perceptible output and a set of user interface elements dependent on at least one root motivator.
- Some versions of the invention have instructions that can then obtain the user's specific emotional goal with respect to the target person on the basis of the user's inputs in response to the presentation of a sequence of user interface elements, and that can also provide the user, via the software interface, a recommendation regarding a fulfillment of the specific emotional goal.
- the user-perceptible output and elements in the set of user interface elements can be reflective of a theme consistent with at least one root motivator and can have the user-perceptible output and/or the user interface elements include images or sounds that are semantically, tonally, thematically, or chromatically related to at least one root motivator.
- Some versions of the invention can have each of a subsequent user interface element and a subsequent output be respectively dependent on a preceding user interface element interaction.
- the elements of the set of user interface elements can be presented to a user in a sequence wherein each subsequent user interface element includes a prompt or query eliciting a more precise description of the user's emotional goal than that provided by the user in response to a previous prompt or query presented to the user earlier in the sequence.
- the instructions can also cause the computing device to present the user-perceptible output and interface elements to a user via a screen, an audio input-output device, or both the screen and the audio input-output device.
- the system for ascertaining an emotional goal includes at least one computing device that has at least one processor device and at least one user interface including an input means and output means communicatively connected to the processor.
- the system can also have the computing device receive a first user-input, via the at least one user interface, that is indicative of a purpose of a user's interaction with the at least one user interface as well as a second user-input indicative of a target person (which could be the user themselves) to whom the purpose of the user's interaction pertains.
- the computing device of the system can, by using the processor device, generate a prompt in response to the first and second user-input for the user to provide at least one root motivator that can include a root emotion or a root reason for the interaction.
- the computing device can also generate user-perceptible output and a set of user interface elements dependent on at least one root motivator and present the output and the set of elements to the user via the output means.
- the computer device can determine the user's specific emotional goal with respect to the target person on the basis of the user-inputs in response to a presentation of a sequence of user interface elements and can also provide, via the output means, for the user a recommendation regarding a fulfillment of the specific emotional goal.
- the user-perceptible output and elements in the set of user interface elements can be reflective of a theme consistent with at least one root motivator.
- subsequent user interface elements and subsequent outputs can be respectively dependent on preceding user interface element interactions, and elements of the set of user interface elements can be presented to a user in a sequence where each subsequent user interface element includes a prompt or query eliciting a more precise description of the user's emotional goal than that provided by the user in response to a previous prompt or query presented to the user earlier in the sequence.
- FIG. 1 is a flow chart of a method for ascertaining an emotional goal, in accordance with an embodiment of the present invention.
- FIGS. 2 A- 2 H depict user interface elements and output appearing as part of an iterative process, in accordance with an embodiment of the present invention.
- FIG. 3 shows a recommendation aimed at fulfilling a user's emotional goal, in accordance with an embodiment of the present invention.
- FIGS. 4 A- 4 E depict logical tree diagrams, in accordance with an embodiment of the present invention.
- FIG. 5 depicts a schematic diagram of a networked system, in accordance with an embodiment of the present invention.
- FIG. 6 shows a schematic diagram of a system, in accordance with an embodiment of the present invention.
- various features may be described as being optional, for example, through the use of the verb "may"; through the use of any of the phrases "in some embodiments," "in some implementations," "in some designs," "in various embodiments," "in various implementations," "in various designs," "in an illustrative example," or "for example"; or through the use of parentheses.
- the present disclosure does not explicitly recite each and every permutation that may be obtained by choosing from the set of optional features.
- the present disclosure is to be interpreted as explicitly disclosing all such permutations.
- a system described as having three optional features may be embodied in seven different ways, namely with just one of the three possible features, with any two of the three possible features or with all three of the three possible features.
- the term “any” may be understood as designating any number of the respective elements, i.e. as designating one, at least one, at least two, each or all of the respective elements.
- the term “any” may be understood as designating any collection(s) of the respective elements, i.e. as designating one or more collections of the respective elements, a collection comprising one, at least one, at least two, each or all of the respective elements.
- the respective collections need not comprise the same number of elements.
- Embodiments of the invention relate to systems and methods of providing an emotion-driven software experience and identifying an emotional goal. Employment of the systems and methods described herein provides a user with an experience that dynamically changes and interacts with the user according to an emotion experienced by or indicated by a user, determines a specific emotional goal, and facilitates achievement of the emotional goal by providing a recommendation.
- embodiments of the invention incorporate the use of an interactive physical interface capable of presenting a dynamically responsive audio, video, or audio-video software interface through which questions and prompts as well as other output can be presented to a user.
- the same interactive interface or an alternative one can also be used to produce outputs perceivable by the user and to receive input and selections from the user, some of which may be entered by the user in response to the aforementioned questions and/or prompts.
- An objective of the various systems and methods described herein is to ascertain or to assist in ascertaining a user's emotional goal.
- the function of the potential questions, prompts, and/or outputs that may be presented to a user is to narrow down, focus on, and identify, with increasing specificity, a user's desired or intended emotional goal so that a recommendation relating to the fulfillment of that emotional goal can then be made.
- the herein described systems and methods are flexible and include the option for a user to skip questions and/or prompts.
- embodiments of the invention can include questions and/or prompts that are either automatically learned by the embodiment through user interaction or added manually to the software interface through periodic updates of the embodiment.
- the various embodiments presented herein are dynamically responsive to a user's emotion or emotional intention.
- the several versions of the invention seek to provide changing software behavior based on an understanding or identification of a user's emotional (or psychological) state. Accordingly, one of the goals achieved thereby is the provision of a better, more natural, dynamic experience of interacting with the software (and its interface) and the humanization of that which conventionally would otherwise be a static transactional process.
- the various embodiments emulate human interaction to the extent that they react and adjust aspects and characteristics of the interaction in accordance with the perceived emotional state or condition of the user. Notably, it is not just the outcome of the interaction that is dependent on the identified emotion, but rather the entirety of the interaction including its constituent steps or stages and their qualities that become contingent on and follow from the emotion as well.
- Some versions of the invention operate in a manner dependent on the collection of user-inputs such that the user's interaction with a software interface, the experience of that interaction including the perception of the user interface elements (e.g., questions, prompts, and selection options), and the manner in which they are presented, can be dynamic and responsive based on the emotional state and intent of the user.
- Subsequent actions, results (obtained at the conclusion of an interactive process), and outputs presented during the course of a user's interaction with a software interface of the invention can depend on understanding (i.e., identifying/defining) a root emotional cause as well as supporting emotional details of that root cause (e.g., sub-emotions, reasons, sub-reasons, etc.) motivating the user to pursue an emotional goal and driving the user's interaction with the embodiment.
- the embodiment is able to compensate for the missing information with pre-determined default states for the selections, interaction options, and interface elements (e.g., prompts, questions, buttons, pictures, sounds, etc.) that can be substituted in lieu of user-input for the remaining stages of the user's interaction with the software interface.
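- A minimal sketch of how such default substitution might look in code, assuming hypothetical names (DEFAULTS, ask) and illustrative default values not taken from the patent:

```python
# Pre-determined default states substituted when the user skips a prompt.
# Stage keys and default values are illustrative assumptions.
DEFAULTS = {
    "relationship": "friend",
    "root_motivator": "thinking of you",
    "main_reason": "just because",
}

def ask(prompt: str, stage: str, user_response: str | None) -> str:
    """Return the user's answer, or the default state for this stage if skipped."""
    if user_response is None or user_response.strip().lower() == "skip":
        return DEFAULTS[stage]
    return user_response

# A skipped root-motivator prompt falls back to its default state:
print(ask("What's the occasion?", "root_motivator", "skip"))  # thinking of you
```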
- the systems and methods described herein provide an emotion-driven software interaction experience that seeks to not only identify the emotional state or condition of the user and facilitate the conveyance of an emotion to a target individual (e.g., a recipient), but to also propagate that emotional state to be reflected in the various stages of the interaction.
- Conveyance of the desired emotion to the target individual can be realized in a variety of ways including, but not limited to, through the presentation of a gift, a card message, or multi-media content.
- Prompts can be understood to be any manner of invitations or suggestions for a user to provide information and can include questions, open ended statements, fillable fields, and selection options.
- Affirmations can be understood to be automatically presented responses provided immediately after a particular user interaction with the software interface (e.g., answering questions, making selections via user interface elements, and responding to prompts) that dynamically reflect the sentiments and emotions that may have been indicated or implied by that interaction (e.g., an affirmation can include a “Yay Exciting!” statement in response to a “Celebrating Something” answer being provided by a user).
- affirmations can contain the appropriate relevant tone and be presented in a manner that is thematically and logically consistent with emotional information gleaned from the user's interaction with the software interface.
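- One plausible implementation of such affirmations is a lookup from a recognized answer to a tonally matching response, as in the Python sketch below; apart from the "Celebrating Something"/"Yay Exciting!" pair quoted above, the table contents are illustrative placeholders.

```python
# Affirmations keyed by the user interaction that triggers them.  Only the
# first pair comes from the example above; the rest are illustrative.
AFFIRMATIONS = {
    "celebrating something": "Yay Exciting!",
    "going through hard stuff": "I'm sorry to hear that.",
    "just because": "Those are the best reasons.",
}

def affirm(user_answer: str) -> str | None:
    """Return an affirmation thematically consistent with the user's answer."""
    return AFFIRMATIONS.get(user_answer.strip().lower())

print(affirm("Celebrating Something"))  # Yay Exciting!
```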
- the phrase “software interface” may refer to any type of user interface with which a user can interact and includes any interactive elements that a user can engage or provide input through as well as any outputs produced as a result of such engagement.
- the phrase “software interactions” include the interactions of a user with a software interface of a version of the invention and can include responses to questions or prompts, selections being made via user interface elements presented to the user, and input being provided by a user.
- the logic followed in the presentation of prompts and/or questions to a user can follow a pattern where subsequent questions/prompts appearing during a user's interaction with the software interface depend on answers, responses, or input provided at an earlier time during the user's interaction so that a deeper emotional understanding of the user's intention can be ascertained. For example, after a user enters an answer to a question and receives the aforementioned affirmation, the dynamically contingent question of "What are you celebrating?" may be asked to further clarify the user's emotional goal or intention. It should be noted that the arrangement, appearance, and perception of user interface elements presented to the user can be guided based on the reuse of emotional elements and answers presented or provided at an earlier point of the user's interaction with the software interface.
- color, imagery, textual elements including the choice of words and phrasing, as well as potential audio elements can be customized to be adapted to correspond to and blend with an emotion indicated or elicited from the user.
- the user interface elements can adapt based on an identified emotion (e.g., bright colors and cheery sounds can be used in situations where a celebratory emotion is indicated and reuse language that reinforces happiness and celebration throughout the course of a user's interaction with the software interface).
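- Such adaptation could be realized with a theme table keyed by the identified root emotion, as in the hedged sketch below; the field names and the concrete colors, sound files, and prompt counts are assumptions for illustration, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Theme:
    palette: list[str]      # chromatic adaptation (color scheme)
    sound: str              # tonal adaptation (audio cue)
    verbiage: str           # semantic adaptation (choice of words)
    max_prompts: int        # how many questions to burden the user with

# Illustrative themes; the patent describes the behavior, not these values.
THEMES = {
    "celebrating": Theme(["#FFD700", "#FF69B4"], "cheerful_chime.wav",
                         "upbeat and festive", max_prompts=6),
    "sympathy":    Theme(["#5B6770", "#8A94A6"], "soft_piano.wav",
                         "gentle and sensitive", max_prompts=3),
}

def theme_for(root_motivator: str) -> Theme:
    """Pick interface styling consistent with the identified root emotion."""
    # Fall back to an arbitrary illustrative default if no theme matches.
    return THEMES.get(root_motivator, THEMES["celebrating"])
```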
- a key element in the inventive processes described herein is the establishment or identification of a primary or root cause or root emotion that can set the mood and serve as a basis for tailoring the rest of the user's experience of interacting with the software interface. It is from this root emotional motivator that the remaining interactions stem in the various embodiments of the invention.
- the purpose of a user's interaction 102 with the software via its interface may be indicated by a user.
- Exemplary purposes may include, but are not limited to, a gift purchase, a presentation of a floral arrangement, self-improvement, or dating, each of which can have a target individual (i.e., an individual who is intended to ultimately be the beneficiary of the result of the interaction).
- the user may be prompted to indicate or select a relationship 104 between the user and the target individual.
- the target individual can be the same person as the user, especially in the case of a self-help or psychotherapy software application implementation of an embodiment of the invention (i.e., the user intends him/herself to be the ultimate beneficiary of the interaction).
- a decision 106 can be made to provide an appropriate subsequent user interface element or output that logically stems from the input provided by the user. Subsequently, if a relationship 104 is indicated or selected, a user may be prompted to provide additional details about the relationship, such as, for example, the target individual's name 112 . In some cases, after providing input, a user may be presented with an affirmation 108 which, as described above, can be a response that is logically related to that input.
- the user can then be prompted to indicate or select one or more root emotions 110 or root reasons (each of "root emotion" and "root reason" also referred to herein as a root motivator 218 ) for the user's interaction with the software.
- This root emotion 110 or reason can then serve as the basis for the subsequent presentation of interface elements and outputs for user interaction as well as the basis for further, more specific, determination of the user's emotional goal.
- a root emotion 110 or root reason can begin a process that may be referred to herein as an “emotional funnel” to the extent that the process is directed towards ascertaining the user's emotional goal with ever more precision and specificity at each subsequent stage of the process.
- the user can be prompted to indicate or select a main reason 118 in support of the root emotion 110 or root reason, further specifying the user's emotional intent from one that was conceptually broader to one that is conceptually narrower and more precise.
- the various embodiments of the invention contemplate the provision of any of a variety of root emotions 110 or reasons as well as any of a variety of applicable main reasons 118 logically stemming therefrom.
- the user can be provided with an affirmation (similar to ones described above) after having provided an input, or, the user can be prompted to provide a sub reason 120 for the main reason 118 .
- the sub reason 120 provided by the user can then serve to further specify and more precisely articulate the user's emotional intent. Accordingly, the process of iteratively affirming a user's previous input and/or prompting the user to provide more specific details regarding the user's emotional intent can be repeated an arbitrary number of times until the desired level of precision for the identification or determination of the user's emotional goal 122 is achieved. It should be understood that the level of depth or precision desired and, consequently, the number of iterative stages can be either decided by the user or pre-determined by a setting of the software. In this manner a precise determination of a user's emotional goal 122 can be made which can then serve as a basis for providing the user a recommendation for the fulfillment of that emotional goal 122 .
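- Read as pseudocode, the loop below sketches one way this iterative "emotional funnel" could be driven: each pass affirms the previous answer and asks a narrower follow-up until the user declines to continue or a configured depth setting is reached. The function and helper names (funnel, next_prompt) are hypothetical, not the patent's.

```python
def next_prompt(trail: list[str]) -> str:
    """Hypothetical helper: phrase a follow-up narrower than the last answer."""
    return f"Tell me more about '{trail[-1]}' (press Enter to stop): "

def funnel(root_motivator: str, max_depth: int = 4) -> list[str]:
    """Iteratively narrow a broad root motivator into a specific emotional goal.

    max_depth plays the role of the pre-determined depth setting; the user can
    also end the process earlier by giving an empty answer.
    """
    trail = [root_motivator]                 # root emotion 110 or root reason
    for _ in range(max_depth):
        print(f"{trail[-1].capitalize()}, noted!")   # affirmation 108 stand-in
        answer = input(next_prompt(trail))   # elicits main reason 118, sub reason 120, ...
        if not answer:                       # user chose to stop narrowing
            break
        trail.append(answer)
    return trail                             # last entry approximates emotional goal 122
```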
- the user interface interaction identifying the user's relationship 104 to the target individual can be employed to further characterize a user's emotional state or emotional intention, particularly in instances where an embodiment of the invention is operated in the context of gift-giving, and can also be a factor in determining a pre-determined set of root emotions 110 that a user may opt to choose from instead of providing one by free entry into a fillable field without selecting from a pre-determined set. It should be understood that throughout the course of a user's interaction with the software interface of an embodiment of the invention, a plurality of various user interface elements may be presented to the user, each of which may be dynamic and changeable with respect and in response to an input provided by the user.
- dynamic elements can be leveraged or affected to provide a more natural interaction experience to the user that is thematically consistent with an input that the user provided. For example, because a key input that a user may provide is a root emotion 110 or reason, subsequent user interface elements or outputs of an embodiment can be adjusted to reflect or correspond with that root emotion 110 or reason.
- images, sounds, and interactive elements presented to the user may be chosen or adjusted to correspond to an emotion by being semantically (i.e., in general meaning), tonally (e.g., by general overarching color tone or sound pitch/frequency), thematically (e.g., by belonging to a particular conceptual category), or chromatically (e.g., by the variety of the selection of colors used or primary note sound used) consistent therewith.
- the emotion-driven interactive software experience can inherit or adopt some aspects, qualities, or properties of the relationship to provide a more human-like and customized interaction to the user (e.g., in the context of an embodiment of the invention being used for gift-giving, the product name or message can include the relationship 104 or target individual's name 112 to make the gift more personal).
- the initial purpose 102 of interacting with the software indicated by the user can determine the remainder of the emotionally-driven interactive software experience, as that initial purpose 102 can define the realm of possibilities of what a user may be feeling physically, emotionally, or psychologically and provide basic information about why they chose to initiate the interaction with the software. Consequently, that information may determine the presence or availability of options or user interface elements appearing in subsequent stages of the user's interaction with the software interface.
- the user can be prompted to provide information that can help understand the deeper meaning or reason for the interaction so that the interaction can be customized to deliver a proper (i.e., appropriate, corresponding) emotional response through the use of imagery and tone applicable to the various user interface elements, and outputs provided to the user throughout the course of a user's experience of interacting with the embodiment of the invention.
- tone can refer to an aspect of a user interface element or an output that can be adjusted to correspond with a root emotion 110 or root reason provided by the user.
- the number of times a user is prompted for input, the verbiage of the prompts, the color scheme, and the types of user interface elements presented can be varied in order to correspond semantically, tonally, thematically, or chromatically with a root emotion 110 .
- the presentation of a recommendation (including a potential course of action, message, product, or service) for the fulfillment of a user's emotional goal 122 can likewise be semantically, tonally, thematically, or chromatically reflective of or corresponding with the root emotion 110 or reason.
- An exemplary method for ascertaining a user's emotional goal may begin with the identification of a basic root emotion 110 or reason (e.g., “celebrating” or “thinking of you”) that can then dictate or filter the subsequent, more specific, identification of a more precise emotional intention (e.g., “celebrating a birthday” or “expressing sympathy for a loss”).
- the visual and/or audio outputs (e.g., cues) and interface elements presented to a user can reflect or be sensitive to the root reason or emotion 110 provided, and thereby make the remainder of the software interaction experience be reflective and representative of that root emotion.
- subsequent prompts or questions can be presented to a user to further ascertain the user's emotional intent with more precision, to a level of specificity that can be determined by the user (e.g., "celebrating because it's my aunt's 50th golden birthday"). Accordingly, this more precise identification of a user's emotional intent can itself further influence the customization and adaptation of the output and user interface elements selected to be presented to the user at subsequent stages of the interaction with the software interface.
- the interface elements and the interaction therewith can become contingent upon a previous input provided by the user as the audio and/or visual user interface elements and outputs (e.g., cues) are presented in correspondence with a user's identification of the user's emotion and reason.
- a user's emotional state and their emotional goal or emotional intention may not coincide with each other.
- a user may want to convey an emotion to the target individual that is different from the one that they themselves are experiencing (e.g., the user is sad because their aunt passed away, but the user's emotional goal is to make the target individual, the user's cousin, feel loved and supported since the cousin's mother passed away).
- subsequent interaction with the software interface defined by the user interface elements and outputs presented to the user, can be independent of or inconsistent with the user's emotional state, but instead rather be dependent on and consistent with the emotional goal or intent that the user has with respect to the target individual.
- the experience of interacting with the emotion-driven software can culminate in the recommendation of a product, service, or course of action that fulfills the user's emotional goal.
- the various versions of the invention can be configured for use with different initial purposes of interaction, including, as mentioned earlier, purposes related to self-help, psychotherapy, and dating in addition to emotional message conveyance and gift-giving.
- a user may desire to improve their social skills or abilities of interacting with others and choose to interact with an embodiment of the invention in order to better understand the user's own emotional state and emotional intentions.
- the user can be presented with an articulation of their emotional state or emotional desire with a level of specificity that the user themselves may not have been able to achieve unaided.
- since affirmations are presented after user input is provided, such an exercise may also be used for the purposes of self-reflection and personal understanding that can improve a user's psychological well-being.
- user interaction as described herein can be realized through the use of various input and output means.
- user-perceptible visual output can be provided to a user through the use of a screen connected to an input means.
- a user can be presented with visual prompts (e.g., questions, fillable fields, menus, selection choices, etc.) to provide input (e.g., information regarding an emotional state or emotional goal) to a software interface.
- user-perceptible audio output can be provided to a user through an audio device such as a speaker. Accordingly, a user can be presented with audible questions or prompts to provide information as well as with options or choices from which a selection can be made.
- user-input can be received using a tactile or a touch-sensitive device as well as through a device capable of registering sound.
- a user may input responses to the prompts presented to the user using a computer peripheral (e.g., a mouse, keyboard), a touch-screen, a microphone, or a combination of such devices.
- FIGS. 2 A- 2 H show more specific examples of the elements discussed above.
- the various elements of the user interfaces of the embodiment presented in these figures can include textual elements such as questions and prompts, static or moving visual image elements including pictures, photographs, and video, as well as interface elements which can be manipulated or engaged to provide user input such as fillable fields, buttons, menus, and selectable item lists.
- a user may initially choose giving a gift (e.g., buying flowers) as the intended purpose of interacting with a software interface of the invention. Then, in a subsequent stage of interaction shown in FIG. 2 A , the user may be prompted to provide or identify a relationship 202 with respect to a target individual who is intended to be the beneficiary of the gift.
- the provision of the relationship can be accomplished by a user completing a fillable field 204 , which may include an autocomplete function presenting choices for selection 210 as the user types 208 or otherwise provides input in response to the prompt as shown in FIG. 2 B .
- the provision of relationship information can be realized by a user selecting from a pre-set list of suggestions 206 .
- the user may be prompted 212 to provide the target individual's name.
- the name can be provided by free-form input into field 214 by the user as shown in FIG. 2 D .
- a prompt for the target individual's name may not be applicable such as when the target individual is the user him/herself or when the purpose of interacting with the software interface does not pertain to gift-giving or conveying an emotional message to another individual.
- the prompt 212 for the target individual's name has been customized to reflect a previous user interaction with a user interface element in a preceding stage. Specifically, the prompt is customized/adapted to refer to a "girlfriend" as a consequence of the user's previous input in response to the prompt to identify the target individual.
- in FIG. 2 E , a user can be presented with an affirmation 217 that is logically related to the input provided previously.
- the affirmation 217 provides an encouraging remark that is logically related to, and emotionally consistent with, the user's input identifying the name 212 of the target individual.
- the user can also be then prompted, with question 216 or otherwise, to provide one or more root emotion/root reason/root motivator 218 .
- the root motivator 218 corresponds to the root emotion 110 or reason previously discussed with reference to FIG. 1 .
- the root motivator 218 can be provided by free input into a fillable field, selection from a menu, voice input, as well as other suitable means. It should be noted that one or more root motivators 218 can be selected and that reference to a single root motivator 218 or a plurality of root motivators 218 can be understood as interchangeable herein where logically consistent.
- the root motivator can identify the emotional or psychological root of the purpose with which the user is interacting with the embodiment of the invention. As noted earlier, these root motivators 218 can subsequently serve as a basis for presenting customized user interface elements (e.g., prompts, buttons, shapes, fields, selectable lists, etc.) and outputs (text, pictures, shapes, sounds, video, etc.) that are reflective of and/or consistent with the root motivators 218 .
- subsequent elements and outputs may be customized/adapted to be presented in subdued hues reflective of sadness; the number of questions or prompts presented to the user may be small (so as not to unduly burden the user); the verbiage used can be chosen to be sensitive to the sad event; and/or the pictures and tones used can be chosen to be dull or darker.
- subsequent elements and outputs may reflect this by being presented in bright colorful hues, including a variety of lengthy questions or prompts, and use bright and exciting imagery and tones.
- audio output and interface elements include sounds and tonality consistent with an emotional mood or theme represented by the root motivator 218 (e.g., sad sounds and tones of voice prompts for a sad emotional motivator, and cheerful and upbeat sounds and tones for a happy emotional motivator).
- a user can thereafter again be presented with another optional affirmation 221 in response to providing a root motivator 218 , which in the instant case was the selection of the “Just Because” option.
- the optional affirmation 221 is customized, logically related to, and determined by the input provided at a previous stage of the user's interaction with a user interface element of an embodiment of the invention.
- the user can also be prompted with a question or an open ended phrase 220 to provide more specific information regarding the user's emotional intent or goal.
- this aspect of the invention can be understood to be a narrowing of the emotional funnel as the user is guided to an ever more precise articulation of the user's emotional goal.
- in response to the prompt, a user can select a more precise emotional intent from a pre-determined list 222 that is derived from, dependent on, and consistent with the root motivator(s) 218 and previous inputs provided by the user.
- the user can also freely input a response into fillable field 224 to more precisely describe the user's emotional intent. It should be understood that this response indicating a more precise emotional intent corresponds to the main reason 118 discussed previously with reference to FIG. 1 .
- a user can be presented with an additional prompt 226 for provision of information that can even further specify or assist in articulating the user's emotional intent.
- the user's emotional intent can be defined in terms of how the user desires for the target individual to feel which may not necessarily be congruent with the user's emotional state or condition.
- a user can provide input further specifying the user's emotional intent by selecting from a list of pre-selected options 228 or providing textual, tactile, or voice input including information that is responsive to the additional prompt 226 . It should be understood that in the various embodiments described herein the iterative process depicted in FIGS. 2 A- 2 G can be repeated or conducted in an order other than the one described herein.
- information concerning a user's reasons or emotional goal may be obtained through facial recognition, gesture recognition, and/or voice signature analysis. Because a user's non-verbal communication, cues, involuntary reflexes/cues, as well as the tone/timbre of their voice may be indicative of their emotional state or emotional intent with respect to a target individual, the information obtained through these alternative means can be used in conjunction with, or in lieu of, the information obtained through the questions/prompts.
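- A hedged sketch of fusing such signals: each modality (prompt answers, facial analysis, voice analysis) is assumed to emit per-emotion confidence scores in [0, 1], which are averaged so the top-scoring emotion wins. The recognizers themselves are out of scope here; only illustrative score maps are shown.

```python
def fuse_emotion_scores(*score_maps: dict[str, float]) -> str:
    """Combine per-emotion confidences from several modalities.

    Each map scores emotions in [0, 1]; an emotion missing from a map counts
    as 0.  The emotion with the highest average score across maps wins.
    """
    emotions = {e for m in score_maps for e in m}
    avg = {e: sum(m.get(e, 0.0) for m in score_maps) / len(score_maps)
           for e in emotions}
    return max(avg, key=avg.get)

# Illustrative values only: prompt answers suggest celebration, but facial and
# voice analysis lean toward sadness, so the fused estimate is "sad".
prompt_scores = {"celebratory": 0.6, "sad": 0.2}
face_scores   = {"sad": 0.8, "celebratory": 0.1}
voice_scores  = {"sad": 0.7}
print(fuse_emotion_scores(prompt_scores, face_scores, voice_scores))  # sad
```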
- the user can then be provided with suggestions or recommendations relating to the fulfillment of that goal.
- a user can be presented with a suggestion of including a message 230 to be sent to the target individual that accurately conveys the emotional goal.
- pre-made recommendations or suggestions 232 of message content can be presented to the user based on either the root motivator 218 and/or one or more inputs provided by the user at an earlier stage of the iterative process.
- the user may be presented with recommendations of multimedia (e.g., audio, video, or audio-video) content to send to the target individual that conveys an emotional message consistent with the user's emotional goal.
- the present invention can assist the user in fulfilling the user's emotional goal by priming the user to write a better card message than the user would have otherwise written by having identified the emotional goal with sufficient precision prior to writing the message.
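- One way the pre-made suggestions 232 could be organized is a table keyed by root motivator and emotional goal, personalized with earlier inputs such as the target individual's name; the sketch below is illustrative, with hypothetical names and message contents.

```python
# Pre-made message suggestions 232 keyed by (root motivator, emotional goal).
# The keys echo labels quoted from FIGS. 4A-4B; message texts are illustrative.
MESSAGE_SUGGESTIONS = {
    ("thinking of you", "good luck"): [
        "Sending you all my luck today!",
        "You've got this! Thinking of you.",
    ],
    ("romance", "rekindle the flame"): [
        "Remembering our first date. Let's make new memories.",
    ],
}

def suggest_messages(root_motivator: str, emotional_goal: str,
                     target_name: str) -> list[str]:
    """Return card-message suggestions personalized with the target's name."""
    drafts = MESSAGE_SUGGESTIONS.get((root_motivator, emotional_goal), [])
    return [f"{target_name}, {d[0].lower()}{d[1:]}" for d in drafts]

print(suggest_messages("thinking of you", "good luck", "Alex"))
```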
- a user can be presented with product or service suggestions 350 that are aimed at fulfilling the user's previously identified emotional goal.
- the presentation of the suggested product or service is an extension of the user's interaction with the software interface of the embodiment and contains elements and outputs representative or reflective of the user's root motivator 218 , of any intervening user-inputs and selections, and/or of the specific emotional goal.
- the recommendation of a product may include a product name 352 that stems from, and is dependent on, an input provided by a user at an earlier time (e.g., by incorporating the name of the target recipient of the product in the product's name). Further, the product recommendation, the message described above, or multimedia content recommended to be conveyed to the target individual to fulfill the emotional goal can also include dynamically depicted inspiration sentence fragments 356 or suggestion sentence fragments that relate to or describe the emotional goal.
- the presentation of a product recommendation to the user can also include other descriptive textual 354 and/or pictorial 346 outputs provided to the user that are semantically, tonally, thematically, or chromatically related to the root motivator(s) 218 , intervening inputs and selections, and/or to the specific emotional goal.
- embodiments of the invention incorporate a multifaceted logical structure that enables a user to home in on their desired emotional goal through an iterative process that can be depicted as a logical tree diagram.
- Exemplary tree diagrams of a user proceeding through the iterative process described above are shown in FIGS. 4 A- 4 E .
- Each of the figures shows a plurality of logical paths that can be followed from an initially identified core motivator to a specific ultimate emotional goal, along with intervening stages of the iterative narrowing and specification of the emotional goal that occur along the way.
- Core motivators can be defined broadly and can include but are not limited to core motivator indications such as "Thinking of You" 402 shown in FIG. 4 A , "Romance" 416 shown in FIG. 4 B , "Celebrating!" shown in FIG. 4 C , and "Feeling Thankful" shown in FIG. 4 D .
- the paths that a user may take through the iterative process may be different in length depending on the user's root motivator(s) 218 , the level of desired specificity, and the number of times the user chooses to undergo the iterative process of the emotional funnel. For example, a user may begin the process by indicating that with respect to a given target individual the user is “Thinking of You” 402 . Afterwards, a user may be presented with a series of optional affirmations as well as with prompts and/or questions attempting to elicit more precise emotional information from the user.
- the user may specify that the user is thinking of the target individual because the target individual is “Going through hard stuff” 404 (e.g., the death of a loved one) and that the user desires to express “Sympathy” 406 to the target individual.
- the user can be guided to specify that the emotional goal of the user includes a “Celebration of Life” 408 that is “Honoring” 410 the life of the individual regarding whose loss the user wants to console the target individual.
- the level of precision of determining the emotional goal can be low resulting in a short path along the logical tree.
- a root motivator of “Thinking of You” 402 can be further specified with only one iteration of prompts/questions to identify that the user has the emotional goal of saying “Good Luck” 412 to the target individual.
- some paths 414 through the iterative process originating with one root motivator 218 can overlap or coincide with paths originating with other root motivators and result in similar emotional goals being identified.
- a path leading from the root motivator of “Romance” 416 may include a single main reason 118 , namely “Rekindle the flame” 418 , that sufficiently specifies the user's emotional goal.
- the same root motivator “Romance” 416 can include a path 414 that overlaps with the path 414 originating with the “Thinking of you” 402 root motivator.
- a user may choose to end the iterative process of more precisely defining the emotional goal at a lower level of specificity.
- a user beginning with a root motivator of “Celebrating!” 420 can choose to end the process after one set of prompts/questions and be satisfied with specifying the goal by narrowing the identified emotional intent to celebrating “Colleagues” 422 or celebrating a “milestone” 424 without going any further.
- an emotional goal defined at a low level of precision can result in a suggestion or recommendation that is tailored to the emotional goal to a correspondingly low level of precision.
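- The variable-length and overlapping paths of FIGS. 4 A- 4 E can be modeled as a nested dictionary in which interior nodes offer further refinements and leaves mark fully specified goals; the fragment below encodes only the labeled examples quoted above (402-418), and the data-structure choice itself is an implementation assumption, not the patent's.

```python
# Fragment of the logical tree from FIGS. 4A-4B; None marks a point where the
# emotional goal is considered fully specified.
TREE = {
    "Thinking of You": {                       # 402
        "Going through hard stuff": {          # 404
            "Sympathy": {                      # 406
                "Celebration of Life": {       # 408
                    "Honoring": None,          # 410
                },
            },
        },
        "Good Luck": None,                     # 412 (a short, one-iteration path)
    },
    "Romance": {                               # 416
        "Rekindle the flame": None,            # 418
    },
}

def options_at(path: list[str]) -> list[str]:
    """Return the selectable refinements after following `path` from the root."""
    node = TREE
    for step in path:
        node = node[step]
    return list(node) if node else []          # empty list: goal fully specified

print(options_at(["Thinking of You"]))  # ['Going through hard stuff', 'Good Luck']
```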
- the above described methods of the embodiments of the present invention can be performed on a single computing device, a set of interlinked computing devices, or embodied as computer readable and/or executable instructions on a non-transitory computer readable medium.
- the computer readable program instructions may execute entirely on a user's computing device, partly on the user's computing device, as a stand-alone software package, partly on the user's computer and partly on a remote computing device (e.g., a server), or entirely on a remote computing device or server.
- the remote computing device may be connected to the user's computing device through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computing device (for example, through the Internet using an Internet Service Provider).
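- For the split-execution case, each user input could be posted to the remote computing device and the next generated prompt returned in the response; the sketch below uses only the Python standard library, and the endpoint URL and JSON field names are hypothetical.

```python
import json
import urllib.request

def next_prompt_from_server(session_id: str, user_input: str) -> str:
    """POST the latest user input to a (hypothetical) server endpoint and
    return the next question/prompt the server generates for this session."""
    payload = json.dumps({"session": session_id, "input": user_input}).encode()
    req = urllib.request.Request(
        "https://example.com/api/funnel/next",    # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt"]          # hypothetical response field
```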
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the invention.
- data may be transferred to the system, stored by the system and/or transferred by the system to users of the system across LANs (e.g., office networks, home networks) or WANs (e.g., the Internet).
- the system may be comprised of numerous computing devices and/or numerous servers communicatively connected across one or more LANs and/or WANs.
- a hand-held computing device may be operated by a user to send input and selections via the internet to a server that generates a series of questions/prompts presented on the display of the hand-held device.
- an audio computing device can be operated by a user's voice to send input and selections via the internet to a server to generate a series of questions/prompts to be presented to a user via audio.
- the system and methods provided herein may be employed by a user of a computing device whether connected to a network or not.
- some steps of the methods provided herein may be performed by components and modules of the system whether connected to a network or not; while such components/modules are offline, the data they generate can be retained and then transmitted to the relevant other parts of the system once the offline component/module comes back online with the rest of the network (or a relevant part thereof).
- a user may be operating a tablet device to input responses and selections after being presented with prompts/questions to arrive at a precise articulation of the user's emotional goal and have the tablet generate a customized recommendation for fulfilling the user's emotional goal without needing the tablet to be connected to the internet.
- some of its applications, features, or functions may not be accessible when not connected to a network; however, a user or a module/component of the system may be able to compose, combine, or generate data offline from the remainder of the system, and that data will be consumed by the system or its other components when the user/offline system component or module is later connected to the system network.
- the system can be comprised of one or more application servers 503 for electronically storing information used by the system.
- Applications in the server 503 may retrieve and manipulate information in storage devices and exchange information through a WAN 501 (e.g., the Internet).
- Applications in server 503 may also be used to manipulate information stored remotely and process and analyze data stored remotely across a WAN 501 (e.g., the Internet).
- exchange of information through the WAN 501 or other network may occur through one or more high speed connections.
- high speed connections may be over-the-air (OTA), passed through networked systems, directly connected to one or more WANs 501 or directed through one or more routers 502 .
- server 503 may connect to WAN 501 for the exchange of information, and various embodiments of the invention are contemplated for use with any method for connecting to networks for the purpose of exchanging information.
- Components, elements, or modules of the system may connect to server 503 via WAN 501 or other network in various ways.
- a component or module may connect to the system (i) through a computing device 512 directly connected to the WAN 501 , (ii) through a computing device connected to the WAN 501 through a routing device 502 , (iii) through a computing device 508 , 509 , 510 , 514 connected to a wireless access point 507 , or (iv) through a computing device 511 via a wireless connection (e.g., WiFi, CDMA, GSM, 3G, 4G, 5G, other suitable means, and means not yet invented) to the WAN 501 .
- components, elements, or modules of the system may connect to server 503 via WAN 501 or other network, and embodiments of the invention are contemplated for use with any method for connecting to server 503 via WAN 501 or other network.
- server 503 could be comprised of a personal computing device, such as a smartphone or tablet, acting as a host for other computing devices to connect to.
- Users 520 of the system in accordance with embodiments of the invention can interact with the system via computing devices such as a laptop 510 , personal computers 508 , cell phones/smart phones 509 , tablets 511 , smart speakers 514 , smart TVs, smart hubs, smart kiosks, and the like.
- Each of the abovementioned steps and aspects can be performed via the input and output means of these respective devices including presentation of software user interface elements, presentation of prompts/questions to the user, collection of user input, presentation of options, suggestions, and recommendations, as well as the subsequent presentation of recommended courses of action, products, or services aimed at achieving the user's emotional goal.
- a user 520 can operate a tablet 511 to navigate to a browser interface presenting a web-based version of the software interface of the invention and be presented with prompts and questions on the screen of the tablet in response to which the user can provide inputs via the touchscreen of the tablet.
- the tablet 511 can provide iteratively more narrow questions and prompts to determine the user's emotional goal by processing the user input locally or having it, in whole or in part, be sent to be processed on a remote device such as a server, to then have a customized recommendation for the fulfillment of that emotional goal be generated on the screen of the tablet 511 .
- the user can interact with the software interface of the invention by engaging user interface elements and entering input through a touch-screen of the tablet 511 .
- a user can initialize an audio software interface to receive audio output and provide audio input to interact with the interface elements.
- user 520 can be presented with prompts, questions and other requests for input or selections via the audio output of a smart speaker 514 (e.g., through statements or questions being presented through a voice emanating from the smart speaker 514). Thereafter, the user 520 can provide input in response to the prompts/questions and make selections from among the options and suggestions via voice input.
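- As a rough illustration of how the same prompt/response steps can be carried over different input and output means, the Python sketch below abstracts the modality behind a common interface; the speech helpers are placeholders that a real embodiment would bind to a device's text-to-speech and speech-recognition services.

```python
# Illustrative sketch only: one dialog routine, two interchangeable
# input/output means (screen+touch vs. speaker+microphone). The speech
# helpers below are placeholders, not a real device API.
from abc import ABC, abstractmethod

def synthesize_speech(text: str) -> None:
    """Placeholder for a smart speaker's text-to-speech output."""
    print(f"[speaker says] {text}")

def transcribe_microphone() -> str:
    """Placeholder for speech-to-text over microphone input."""
    return input("[voice reply] ")

class Modality(ABC):
    @abstractmethod
    def present(self, prompt: str) -> None: ...
    @abstractmethod
    def collect(self) -> str: ...

class ScreenModality(Modality):
    def present(self, prompt: str) -> None:
        print(prompt)            # stand-in for on-screen interface elements

    def collect(self) -> str:
        return input("> ")       # stand-in for touchscreen input

class VoiceModality(Modality):
    def present(self, prompt: str) -> None:
        synthesize_speech(prompt)

    def collect(self) -> str:
        return transcribe_microphone()

def run_dialog(modality: Modality, prompts: list) -> list:
    """Present each prompt and gather responses, regardless of device."""
    responses = []
    for prompt in prompts:
        modality.present(prompt)
        responses.append(modality.collect())
    return responses

if __name__ == "__main__":
    answers = run_dialog(VoiceModality(),
                         ["Who is the gift for?", "What feeling should it convey?"])
    print(answers)
```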
- the aforementioned collection of facial or gesture information can be realized through the use of image capture devices (e.g., camera(s) on a smart phone 209 , laptop 210 , tablet 211 , computer 205 configured with a webcam, smart hubs, smart kiosks etc.) included in a system or device in accordance with an embodiment of the invention.
- collection of voice information in accordance with the various embodiments can be performed through the use of a microphone or other suitable sound capture and recording device that may be included on a variety of devices such as a smart phone 209, laptop 210, tablet 211, computer 205, a smart speaker 214, and the like.
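- For concreteness, the short Python sketch below shows one plausible way to grab a single camera frame as facial input, assuming the OpenCV package (cv2) is available; the emotion classifier is a placeholder for whatever local or server-side facial-expression model a given embodiment employs.

```python
# Illustrative sketch only: capture one frame from a device camera and
# hand it to a placeholder emotion classifier. Assumes OpenCV is
# installed (pip install opencv-python); the classifier is hypothetical.
import cv2

def capture_frame(camera_index: int = 0):
    """Grab a single frame from the camera, or return None if unavailable."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        return frame if ok else None
    finally:
        cap.release()

def classify_emotion(frame) -> str:
    """Placeholder: a real embodiment would run a facial-expression model here."""
    return "no face detected" if frame is None else "neutral"

if __name__ == "__main__":
    print("Detected emotional cue:", classify_emotion(capture_frame()))
```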
- the communications means of the system may be any means for communicating data, including image and video, over one or more networks or to one or more peripheral devices attached to the system, or to a system module or component.
- Appropriate communications means may include, but are not limited to, wireless connections, wired connections, cellular connections, data port connections, Bluetooth® connections, or any combination thereof.
- a computer program includes a finite sequence of computational instructions or program instructions. It will be appreciated that a programmable apparatus or computing device can receive such a computer program and, by processing the computational instructions thereof, produce a technical effect. It should be understood that a programmable apparatus or computing device can include one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors, programmable devices, programmable gate arrays, programmable array logic, memory devices, application specific integrated circuits, or the like, which can be suitably employed or configured to process computer program instructions, execute computer logic, store computer data, and so on. Throughout this specification and elsewhere, a computing device can include any and all suitable combinations of at least one general purpose computer, special-purpose computer, programmable data processing apparatus, processor, processor architecture, and so on.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- Illustrative examples of the computer readable storage medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, a static random access memory (SRAM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computing device or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the computing device 600 can generally comprise a Central Processing Unit (CPU) 604 operatively coupled to other components via a system bus 602, optional further processing units including a graphics processing unit (GPU), a cache, a Read Only Memory (ROM) 608, and a Random Access Memory (RAM) 610.
- the computing device 600 can also include an input/output (I/O) adapter 620 , a sound adapter 630 , a network adapter 640 , a user interface adapter 650 , and a display adapter 660 , all of which may be operatively coupled to the system bus 602 .
- a first storage device 622 and a second storage device 624 can be operatively coupled to system bus 602 by the I/O adapter 620 .
- the storage devices 622 and 624 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid state magnetic device, and so forth. It should be appreciated that the storage devices 622 and 624 can be the same type of storage device or different types of storage devices.
- where the device 600 is embodied by a smart speaker 214 or the like, it can incorporate a speaker 632 which may be operatively coupled to system bus 602 by the sound adapter 630.
- a transceiver 642 may be operatively coupled to system bus 602 by network adapter 640 .
- where the device 600 is embodied by a tablet 511 or a smart phone 509, it can include a display device 662 which may be operatively coupled to system bus 602 by display adapter 660.
- the device 600 may include a motherboard, and alternatively/additionally a different storage medium (e.g., hard disk drive, solid state drive, flash memory, cloud storage), an operating system, one or more software applications, and one or more input/output devices/means, including one or more communication interfaces (e.g., RS232, Ethernet, WiFi, Bluetooth, USB). Accordingly, in some embodiments a first user-input device 652, a second user-input device 654, and a third user-input device 656 may be operatively coupled to system bus 602 by user interface adapter 650.
- the user-input devices 652 , 654 , and 656 can be any of a keyboard, a mouse, a keypad, an image capture device (e.g., a camera), a motion sensing device, a microphone, a touch-sensitive device (e.g., a touch screen or touchpad), a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while remaining within the scope and spirit of the present invention.
- the user-input devices 652 , 654 , and 656 can be the same type of user-input device or different types of user-input devices.
- the user-input devices 652 , 654 , and 656 may be used to input and output information to and from system 600 .
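- The bus-and-adapter layout above can be pictured with a small, purely illustrative Python sketch; the class names and the component labels passed to build_device_600 are hypothetical stand-ins for the numbered elements, not a model of actual hardware.

```python
# Illustrative sketch only: a system bus operatively couples core units
# and adapters, and each adapter in turn couples its peripherals, mirroring
# the arrangement of device 600 described above. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Adapter:
    name: str
    devices: list = field(default_factory=list)

    def attach(self, device: str) -> None:
        """Operatively couple a peripheral to this adapter."""
        self.devices.append(device)

@dataclass
class SystemBus:
    components: list = field(default_factory=list)

    def couple(self, component) -> None:
        """Operatively couple a core unit or adapter to the bus."""
        self.components.append(component)

def build_device_600() -> SystemBus:
    bus = SystemBus()
    for core in ("CPU 604", "ROM 608", "RAM 610"):
        bus.couple(core)
    io = Adapter("I/O adapter 620")
    io.attach("storage device 622")
    io.attach("storage device 624")
    ui = Adapter("user interface adapter 650")
    for dev in ("user-input device 652", "user-input device 654",
                "user-input device 656"):
        ui.attach(dev)
    for adapter in (io, ui, Adapter("sound adapter 630"),
                    Adapter("display adapter 660")):
        bus.couple(adapter)
    return bus

if __name__ == "__main__":
    print(build_device_600())
```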
- the prompts and questions discussed above may be presented to the user via the output means of exemplary device 600 in accordance with the embodiments of the present invention.
- a user can be prompted to respond to questions, enter input, or make selections in accordance with an embodiment of the invention.
- a user can provide the input and selections to interact with the various elements and aspects of the invention to provide the information used for ascertaining a user's emotional goal and providing recommendations regarding the fulfillment of that emotional goal.
- processing system/device 600 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- various other input devices and/or output devices can be included in processing system 600 , depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- various types of wireless and/or wired input and/or output devices can be used and additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art.
- These and other variations of the processing system 600 are readily contemplated by one of ordinary skill in the art given the teachings of the embodiments provided herein.
- Computer program instructions may include computer executable code.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including a functional programming language such as Python, an object-oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- a variety of languages for expressing computer program instructions are possible, including without limitation, Java, JavaScript, assembly language, Lisp, HTML, Perl, and so on.
- Such languages may include assembly languages, hardware description languages, database programming languages, functional programming languages, imperative programming languages, and so on.
- computer program instructions can be stored, compiled, or interpreted to run on a computing device, a programmable data processing apparatus, a heterogeneous combination of processors or processor architectures, and so on.
- embodiments of the system as described herein can take the form of web-based computer software, which includes client/server software, software-as-a-service, peer-to-peer software, or the like.
- the term “hardware processor subsystem”, “hardware processor”, “processing device”, or “computing device” can refer to a processor, memory, software or combinations thereof that cooperate to perform one or more specific tasks.
- the hardware processor subsystem can include one or more data processing elements (e.g., logic circuits, processing circuits, instruction execution devices, etc.).
- the one or more data processing elements can be included in a central processing unit, a graphics processing unit, and/or a separate processor- or computing element-based controller (e.g., logic gates, etc.).
- the hardware processor subsystem can include one or more on-board memories (e.g., caches, dedicated memory arrays, read only memory, etc.).
- the hardware processor subsystem can include one or more memories that can be on or off board or that can be dedicated for use by the hardware processor subsystem (e.g., ROM, RAM, basic input/output system (BIOS), etc.).
- the hardware processor subsystem can include and execute one or more software elements.
- the one or more software elements can include an operating system and/or one or more applications and/or specific code to achieve a specified result.
- the hardware processor subsystem can include dedicated, specialized circuitry that performs one or more electronic processing functions to achieve a specified result.
- Such circuitry can include one or more application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or programmable logic arrays (PLAs).
- the terms "process" and "execute" are used interchangeably to indicate execute, process, interpret, compile, assemble, link, load, any and all combinations of the foregoing, or the like. Therefore, embodiments that process computer program instructions, computer-executable code, or the like can suitably act upon the instructions or code in any and all of the ways just described.
- block diagrams and flowchart illustrations depict methods, apparatuses (e.g., systems), and computer program products.
- Any and all such functions (“depicted functions”) can be implemented by computer program instructions; by special-purpose, hardware-based computer systems; by combinations of special purpose hardware and computer instructions; by combinations of general purpose hardware and computer instructions; and so on—any and all of which may be generally referred to herein as a “component”, “module,” or “system.”
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Business, Economics & Management (AREA)
- Data Mining & Analysis (AREA)
- Strategic Management (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Entrepreneurship & Innovation (AREA)
- Finance (AREA)
- Human Resources & Organizations (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Economics (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physiology (AREA)
- Artificial Intelligence (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Game Theory and Decision Science (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- Operations Research (AREA)
- User Interface Of Digital Computer (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/874,987 US11972277B2 (en) | 2019-05-16 | 2020-05-15 | Emotionally driven software interaction experience |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962848634P | 2019-05-16 | 2019-05-16 | |
US202062968547P | 2020-01-31 | 2020-01-31 | |
US16/874,987 US11972277B2 (en) | 2019-05-16 | 2020-05-15 | Emotionally driven software interaction experience |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200364068A1 US20200364068A1 (en) | 2020-11-19 |
US11972277B2 (en) | 2024-04-30 |
Family
ID=73231628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/874,987 (US11972277B2, Active) | Emotionally driven software interaction experience | 2019-05-16 | 2020-05-15 |
Country Status (1)
Country | Link |
---|---|
US (1) | US11972277B2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12086817B2 (en) * | 2021-03-31 | 2024-09-10 | International Business Machines Corporation | Personalized alert generation based on information dissemination |
CN113625916B (en) * | 2021-07-23 | 2022-08-16 | 广州玺明机械科技有限公司 | Entertainment device for interaction and data acquisition and transmission |
CN116347179A (en) * | 2023-02-16 | 2023-06-27 | 成都光合信号科技有限公司 | Method, device, equipment and medium for live interaction |
Patent Citations (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160171598A1 (en) | 2003-11-06 | 2016-06-16 | Behr Process Corporation | Data-Driven Color Coordinator |
US7720784B1 (en) * | 2005-08-30 | 2010-05-18 | Walt Froloff | Emotive intelligence applied in electronic devices and internet using emotion displacement quantification in pain and pleasure space |
US20110302117A1 (en) * | 2007-11-02 | 2011-12-08 | Thomas Pinckney | Interestingness recommendations in a computing advice facility |
US20090234755A1 (en) * | 2008-03-17 | 2009-09-17 | Sidoruk Trel W | System and method for purchasing a gift |
US20110183305A1 (en) * | 2008-05-28 | 2011-07-28 | Health-Smart Limited | Behaviour Modification |
WO2010128347A1 (en) | 2009-05-06 | 2010-11-11 | Virtual Bouquet Limited | Improvements in remote assembly of a composite arrangement |
US20100312547A1 (en) | 2009-06-05 | 2010-12-09 | Apple Inc. | Contextual voice commands |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US20110252348A1 (en) | 2010-04-08 | 2011-10-13 | Exciting Unlimited LLC | Floral arrangement creation system, method and computer program product |
US20120117005A1 (en) * | 2010-10-11 | 2012-05-10 | Spivack Nova T | System and method for providing distributed intelligent assistance |
US20120245987A1 (en) | 2010-12-14 | 2012-09-27 | Moneyhoney Llc | System and method for processing gift cards via social networks |
US9009096B2 (en) | 2011-07-12 | 2015-04-14 | Ebay Inc. | Recommendations in a computing advice facility |
US20140052563A1 (en) | 2011-09-29 | 2014-02-20 | Electronic Commodities Exchange | Apparatus, Article of Manufacture and Methods for Customized Design of a Jewelry Item |
US20130332308A1 (en) | 2011-11-21 | 2013-12-12 | Facebook, Inc. | Method for recommending a gift to a sender |
US9875483B2 (en) | 2012-05-17 | 2018-01-23 | Wal-Mart Stores, Inc. | Conversational interfaces |
US20140039975A1 (en) * | 2012-08-03 | 2014-02-06 | Sensory Logic, Inc. | Emotional modeling of a subject |
US20150154684A1 (en) * | 2012-09-09 | 2015-06-04 | Lela, Inc | Methods and apparatus for providing automated emotional motivator based service |
US9171048B2 (en) * | 2012-12-03 | 2015-10-27 | Wellclub, Llc | Goal-based content selection and delivery |
US20140223462A1 (en) * | 2012-12-04 | 2014-08-07 | Christopher Allen Aimone | System and method for enhancing content using brain-state data |
US20140276244A1 (en) * | 2013-03-13 | 2014-09-18 | MDMBA Consulting, LLC | Lifestyle Management System |
US20140351332A1 (en) * | 2013-05-21 | 2014-11-27 | Spring, Inc. | Systems and methods for providing on-line services |
US20140379615A1 (en) | 2013-06-20 | 2014-12-25 | Six Five Labs, Inc. | Dynamically evolving cognitive architecture system based on prompting for additional user input |
US20170262604A1 (en) * | 2014-06-09 | 2017-09-14 | Revon Systems, Inc. | Systems and methods for health tracking and management |
US20160275584A1 (en) * | 2014-10-17 | 2016-09-22 | Janell Gibson | System, method, and apparatus for personalizing greeting cards |
US20160203729A1 (en) * | 2015-01-08 | 2016-07-14 | Happify, Inc. | Dynamic interaction system and method |
US20160232131A1 (en) * | 2015-02-11 | 2016-08-11 | Google Inc. | Methods, systems, and media for producing sensory outputs correlated with relevant information |
US20170140563A1 (en) * | 2015-11-13 | 2017-05-18 | Kodak Alaris Inc. | Cross cultural greeting card system |
US10185983B2 (en) | 2015-12-31 | 2019-01-22 | TCL Research America Inc. | Least-ask: conversational recommender system with minimized user interaction |
US20170250930A1 (en) | 2016-02-29 | 2017-08-31 | Outbrain Inc. | Interactive content recommendation personalization assistant |
US9996531B1 (en) | 2016-03-29 | 2018-06-12 | Facebook, Inc. | Conversational understanding |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US20180113583A1 (en) | 2016-10-20 | 2018-04-26 | Samsung Electronics Co., Ltd. | Device and method for providing at least one functionality to a user with respect to at least one of a plurality of webpages |
US20180229372A1 (en) * | 2017-02-10 | 2018-08-16 | JIBO, Inc. | Maintaining attention and conveying believability via expression and goal-directed behavior with a social robot |
US10348658B2 (en) | 2017-06-15 | 2019-07-09 | Google Llc | Suggested items for use with embedded applications in chat conversations |
US10740568B2 (en) * | 2018-01-24 | 2020-08-11 | Servicenow, Inc. | Contextual communication and service interface |
US20200110519A1 (en) | 2018-10-04 | 2020-04-09 | Microsoft Technology Licensing, Llc | User-centric browser location |
US20200159724A1 (en) * | 2018-11-16 | 2020-05-21 | Fuvi Cognitive Network Corp. | Apparatus, method, and system of cognitive data blocks and links for personalization, comprehension, retention, and recall of cognitive contents of a user |
Non-Patent Citations (12)
Title |
---|
"Add deep logic (expertise) to your AI support assistant," https://exvisory.ai, Sep. 6, 2019, 5 pages. |
Chai, Joyce, et al., "Natural language assistant: A dialog system for online product recommendation." AI Magazine 23.3, Jun. 2002, pp. 63-76. |
Christakopoulou, Konstantina, et al., "Towards conversational recommender systems." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, Aug. 2016, pp. 815-824. |
Dibitonto et al., "Chatbot in a Campus Environment: Design of LISA, a Virtual Assistant to Help Students in Their University Life". In: Kurosu, M. (eds) Human-Computer Interaction. Interaction Technologies. HCI 2018. https://doi.org/10.1007/978-3-319-91250-9_9, pp. 103-116 (Year: 2018). * |
Gault, "How Customization is the ‘Glass Slipper’ Solution For E-commerce Evolution" Forbes.com (Year: 2018). |
Heussner, "Gifting 2.0: Letting the Web Pick Presents for Your Friends" ABCNews.com, Nov. 29, 2010, 5 pages. |
Huang, Ting-Hao Kenneth, et al., "'Is There Anything Else I Can Help You With?' Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent." Fourth AAAI Conference on Human Computation and Crowdsourcing, Sep. 2016, pp. 79-88. |
Kandhari, Mandeep Singh, et al., "A Voice Controlled E-Commerce Web Application." 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). IEEE, Nov. 2018, 7 pages. |
McRoberts, Sarah, et al., "Exploring Interactions with Voice-Controlled TV." arXiv preprint arXiv:1905.05851, May 2019, pp. 1-11. |
Morgan, "10 Customer Experience Implementations of Artificial Intelligence" (2018) Forbes.com, Feb. 8, 2018, 5 pages. |
Qu, Chen, et al., "Analyzing and characterizing user intent in information-seeking conversations." The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Jun. 2018, pp. 989-992. |
Sun, Yueming, et al., "Conversational recommender system." The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Jun. 2018, pp. 235-244. |
Similar Documents
Publication | Title
---|---
US20210173548A1 (en) | Virtual assistant acquisitions and training
US11972277B2 (en) | Emotionally driven software interaction experience
O'Brien et al. | Separating the activation, integration, and validation components of reading
US20130305153A1 (en) | Interactive storybook system and method
US20180344242A1 (en) | Systems and methods for training artificially-intelligent classifier
US20020068500A1 (en) | Adaptive toy system and functionality
US20220254334A1 (en) | Wake word selection assistance architectures and methods
Maguire | Development of a heuristic evaluation tool for voice user interfaces
WO2020215128A1 (en) | Augmentative and Alternative Communication (ACC) Reading System
Lee | Voice user interface projects: build voice-enabled applications using dialogflow for google home and Alexa skills kit for Amazon Echo
WO2022259005A1 (en) | Automated no-code coding of app-software using a conversational interface and natural language processing
JP4563440B2 (en) | Electronic picture book system and electronic picture book system controller
JP2011128362A (en) | Learning system
US20200042552A1 (en) | Distributed recording, curation, and transmission system for context-specific concatenated media generation
Zhong et al. | LLM-mediated domain-specific voice agents: the case of TextileBot
Esau-Held et al. | "Foggy sounds like nothing"—enriching the experience of voice assistants with sonic overlays
US12111834B1 (en) | Ambient multi-device framework for agent companions
US10254834B2 (en) | System and method for generating identifiers from user input associated with perceived stimuli
JP4858285B2 (en) | Information display device and information display processing program
US20240058566A1 (en) | Stimulus generator
Van Even | ParCos Deliverable 4.3 Trainer Training Package
Salter | Technological and business fundamentals for mobile app development
US20240242626A1 (en) | Universal method for dynamic intents with accurate sign language output in user interfaces of application programs and operating systems
JP7653828B2 (en) | Robots and robot systems
Taba | Personalized AI Assistant
Legal Events
Code | Title | Description
---|---|---
AS | Assignment | Owner name: LOVINGLY, LLC, NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VEGA, JOSEPH;GARLAND, KENNY;MARQUEZ, DANIELA VIRGINIA;AND OTHERS;SIGNING DATES FROM 20200512 TO 20200513;REEL/FRAME:052673/0836
FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
FEPP | Fee payment procedure | ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | PATENTED CASE