US10431205B2 - Dialog device with dialog support generated using a mixture of language models combined using a recurrent neural network - Google Patents
- Publication number
- US10431205B2 (Application No. US15/139,886)
- Authority
- US
- United States
- Prior art keywords
- dialog
- word
- natural language
- language
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation
-
- G06F17/279—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
- G06F40/35—Discourse or dialogue representation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Definitions
- the following relates to the dialog system arts, customer support arts, call center arts, and related arts.
- a dialog system is based on, or augments, a natural language interfacing device such as a chat interface or a telephonic device.
- a chat interface is typically Internet-based and enables a chat participant to enter and receive textual dialog utterances.
- Each participant operates his or her own chat interface, and typed or dictated utterances from each participant are displayed on both (or all) participants' chat interfaces, so that the participants can conduct a dialog.
- the chat interfaces may support other content exchange, e.g. one participant may present a (possibly multi-media) document that is displayed on both (or all) participants' chat interfaces.
- natural language interfacing devices include telephonic devices such as conventional telephones or cellular telephones, or tablet computers, notebook computers, or the like having appropriate audio components (microphone and earphone or loudspeaker, or full audio headset).
- the telephonic device may provide audio-only interfacing, or the natural language dialog may be augmented by video (e.g., the telephonic device may be a video conferencing device).
- dialog systems are ubiquitous and widely used for diverse applications.
- One common application is to provide customer support for customers, clients, users, or the like.
- the support is typically provided via a call center to which a customer, client, or the like calls.
- the call center is staffed by call center agents who handle customer calls.
- the call center agents have expertise in the product, service, or the like for which support is to be provided. In practice, however, call center agents have limited knowledge and expertise, which can limit their effectiveness.
- in a semi-automated dialog system, the ongoing dialog is recorded and processed to predict an appropriate current utterance for the call center agent to express.
- the current utterance, or a list of current utterance candidates, is typically displayed on a display component of the natural language interfacing device.
- where the interfacing device is a chat interface comprising a computer with a connected headset running chat software, the list of current utterance candidates may be displayed in a suggestions window shown on the computer display.
- the call center agent can refer to this list and may choose one of the utterance candidates for responding to the caller (either verbatim or with edits made by the call center agent).
- in a fully automated mode, the dialog system chooses a single “best” utterance and actually expresses that utterance to the caller, e.g. by automatically generating a typed response via a chat interface, or by operating a speech synthesizer to automatically “speak” the utterance to the caller at the opposite end of a telephonic call.
- Dialog systems can advantageously support a call center agent by providing response suggestions (semi-automated systems) or can replace the call center agent entirely by acting as a “virtual agent” (fully automated systems).
- a complication arises: the utterances generated by the dialog system are generally expected to exhibit expertise in a relatively narrow knowledge domain.
- a dialog system of a call center maintained by an electronics retailer may be expected to provide expert advice regarding various consumer electronic devices, such as various makes/models of cellular telephones, computers, or so forth.
- the utterances generated by the dialog system should be correct as to information within this knowledge domain and pertinent to the current dialog point.
- the utterances generated by the dialog system should be effective natural language communication, for example employing proper vocabulary and grammar, and correctly using common phrases, and so forth.
- a common dialog system architecture includes a natural language understanding (NLU) component, a natural language generation (NLG) component, and a “central orchestrator” usually referred to as a dialog manager (DM), which takes input from the NLU component, updates an internal state, consults a Knowledge Base to decide on a next dialog action (DA) to take, and communicates this DA to the NLG component which generates a natural language utterance implementing the DA.
- a dialog device comprises: a natural language interfacing device comprising a chat interface or a telephonic device; a natural language output device comprising the chat interface, a display device, or a speech synthesizer outputting speech to the telephonic device; and a computer.
- the computer is programmed to store natural language dialog conducted via the natural language interfacing device and to construct a current natural language utterance word-by-word, with each word of the current natural language utterance being chosen by operations including: applying a plurality of language models to a context comprising a concatenation of the stored natural language dialog and the current natural language utterance up to but not including the word being chosen to output, for each applied language model, a distribution over the words of a vocabulary; normalizing the distributions output by the plurality of language models to generate corresponding normalized distributions; applying a recurrent neural network (RNN) to the normalized distributions to generate a mixture distribution; and choosing the next word using the mixture distribution.
- the natural language output device is configured to output the current natural language utterance after it has been constructed by the computer.
- a dialog method comprising: conducting a natural language dialog using a chat interface or a telephonic device; while conducting the natural language dialog, constructing a current natural language utterance word-by-word; and outputting the constructed current natural language utterance via one of the chat interface, a display device, and a speech synthesizer outputting speech to the telephonic device.
- the current natural language utterance is constructed word-by-word using a computer programmed to choose each word of the current natural language utterance by operations including: applying a plurality of language models to a context comprising a concatenation of the natural language dialog and the current natural language utterance up to but not including the word being chosen; applying a recurrent neural network (RNN) to word distributions output by the applied plurality of language models to generate a mixture distribution; and choosing the next word using the mixture distribution.
- a non-transitory storage medium stores instructions readable and executable by a computer to construct a current natural language utterance for continuing a natural language dialog by a method in which each word of the current natural language utterance is chosen by operations including: applying a plurality of language models to a context comprising a concatenation of the natural language dialog and the current natural language utterance up to but not including the word being chosen; normalizing word distributions output by the plurality of language models to generate corresponding normalized word distributions; applying a recurrent neural network (RNN) to the normalized distributions to generate a mixture distribution; and choosing the next word using the mixture distribution.
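- The word-by-word construction common to these embodiments can be illustrated with a short sketch. The following Python code is a minimal, non-authoritative illustration: the `lms`, `rnn`, and `vocab` objects and their methods are hypothetical stand-ins, not an implementation from the patent.

```python
import numpy as np

def construct_utterance(lms, rnn, vocab, dialog_history, max_len=50, eor="<EOR>"):
    """Build the current utterance word-by-word from the stored dialog."""
    utterance = []
    for _ in range(max_len):
        context = dialog_history + utterance          # stored dialog + utterance so far
        dists = [lm.score(context) for lm in lms]     # unnormalized scores per language model
        dists = [np.exp(d - d.max()) / np.exp(d - d.max()).sum() for d in dists]  # softmax-normalize
        weights = rnn.mixture_weights(context)        # normalized weights over the K LMs
        mixture = sum(w * d for w, d in zip(weights, dists))
        word = vocab[int(np.argmax(mixture))]         # or sample from `mixture` for candidate lists
        if word == eor:
            break
        utterance.append(word)
    return utterance
```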
- FIG. 1 diagrammatically illustrates a dialog device.
- FIG. 2 diagrammatically shows an LSTM cell.
- FIG. 3 diagrammatically shows an embodiment of the mixture model implemented by the dialog support module of the dialog device of FIG. 1 .
- FIG. 4 diagrammatically illustrates preprocessing of a dialog.
- FIG. 5 diagrammatically shows an illustrative neural chat language model (LM).
- FIG. 6 diagrammatically shows an illustrative question-answer language model (QA-LM) including a neural model translating natural language questions to formal queries that are posed to the knowledge base of FIG. 1 .
- FIG. 7 diagrammatically shows an illustrative embodiment of the neural model component of the QA-LM of FIGS. 1 and 6 .
- FIG. 8 diagrammatically shows another embodiment of the mixture model implemented by the dialog support module of the dialog device of FIG. 1 , which integrates the neural chat LM of FIG. 5 and the QA-LM of FIGS. 6 and 7 .
- FIGS. 9 and 10 present experimental results as described herein.
- the NLU/DM/NLG utterance generation chain is replaced by a set of language models (LMs) whose outputs are combined using a single Recurrent Neural Network (RNN) that generates a current natural language utterance from the dialogue history up to the point where this current utterance is to be produced.
- the RNN in this context can be seen as a form of conditional neural language model, where the dialogue history provides the context for the production of the current utterance.
- the RNN is a Long Short-Term Memory (LSTM) model, which can effectively exploit long distance memories for computing the current utterance.
- the RNN integrates only two LMs—a question-answer (QA) LM configured to provide a distribution of answers obtained from a knowledge base (KB) over questions directed to the KB; and a second language model that is not configured to provide answers obtained from a KB.
- This mixing of two language models provides an integration of knowledge domain-specific responses (via the QA-LM) and responses that emphasize employing appropriate vocabulary, grammar, proper phrasing possibly including idiomatic phrasing, and so forth (via the second language model).
- the disclosed approaches can employ the RNN to integrate or mix an arbitrary number of LMs.
- with reference to FIG. 1, a dialog device is shown which maintains one end of a two-way or multi-way dialog via a chat system or a telephonic link.
- the dialog is assumed to be between (1) a call center agent at a customer or client support call center and (2) a customer or client seeking support in using a product, service, or the like.
- the dialog device of FIG. 1 is assumed to be utilized by or embody the call center agent in this dialog. This designation is merely for illustrative convenience, and the disclosed dialog devices may be used in other contexts.
- the dialog device includes a natural language interfacing device 10 , such as a chat interface system or a telephonic device.
- the illustrative natural language interfacing device 10 is implemented as a computer 12 including a display component or device 14 and one or more user input components or devices such as an illustrative keyboard 16 , an illustrative mouse 18 or other pointing device (e.g. trackball, trackpad), a touch-sensitive overlay of the display 14 employing, for example, a capacitive or surface acoustic wave (SAW) touch-sensing technology, a dictation microphone (not shown), and/or so forth.
- the computer 12 carries out the dialog using a call center online chat system 20 implemented by execution of a chat program on the computer 12 , via which the call center agent views textual messages on the display component 14 from a customer or client seeking support and sends text messages to the customer or client input via the keyboard 16 or dictated via a dictation microphone.
- in a fully automated mode there is no human call center agent; instead, the computer 12 generates dialog automatically using techniques disclosed herein and transmits the dialog to the client or customer as text messages sent via the chat system 20.
- in telephonic embodiments, the computer 12 includes appropriate audio components (microphone and earphone or loudspeaker, or full audio headset, not shown) and runs suitable teleconferencing software 22 for conducting a telephone call (optionally also including two-way video, i.e. a videoconference call).
- the audio components also include a speech synthesizer module 24 , e.g. including software running on the computer 12 and a physical speaker or electronic signal generator, for synthesizing spoken responses of the automated agent.
- the computer-based telephonic hardware is optionally replaced by a physically separate unit, e.g. a telephone handset.
- a speech recognition module 26 is suitably implemented by a microphone that picks up the spoken dialog and software executing on the computer 12 to electronically transcribe the speech to text content.
- the natural language interfacing device 10 enables the agent to engage in a dialog with the customer or client (by text messaging in chat embodiments, or by speech in telephonic embodiments).
- the natural language interfacing device 10 also automatically outputs the agent-side of the dialog by generating text message responses or synthesized speech responses.
- as the dialog progresses via chat or telephonic communication, it is recorded and stored in a context storage 30 as past dialog 32.
- the past dialog 32 is directly obtained as a concatenation of successive text messages stored electronically, e.g. as ASCII text strings.
- the spoken dialog is picked up by a microphone and converted to text by the speech recognition module 26 in order to be stored as the past dialog 32 .
- the context storage 30 may, for example, be an internal hard disk drive or solid state drive (SSD) of the computer 12 , or an external hard drive or SSD connected with the computer 12 by a cable (e.g. USB cable) or via an electronic network, e.g. an Ethernet.
- the computer 12 is further programmed to construct a current natural language utterance 34 word-by-word, with each word of the current natural language utterance 34 being chosen by a dialog support module 40 implemented by suitable programming of the computer 12 to implement one or more embodiments of the dialog generation approaches disclosed herein.
- a plurality of language models (LMs) 42 are applied to the context comprising a concatenation of the stored natural language dialog 32 and the current natural language utterance 34 up to (but not including) the word being chosen.
- the LMs 42 output, for each applied language model, a distribution over the words of a vocabulary.
- At least one of the LMs 42 may optionally be a question-answer LM (QA-LM) that generates a distribution of words over the vocabulary based at least in part on content of a knowledge base (KB) 44 containing domain-specific information.
- the distributions output by the plurality of LMs 42 are normalized by normalization functions 46, e.g. softmax functions 46 in the illustrative example of FIG. 1.
- a recurrent neural network (RNN) 50 is applied to the normalized distributions to generate a mixture distribution over the words of the vocabulary.
- the RNN is a long short-term memory (LSTM) mixture model 50 , although other RNN architectures such as a Gated Recurrent Unit (GRU) architecture may be employed as the RNN.
- the RNN 50 includes a vector of LM weights 52 that are used by the RNN 50 to appropriately weight each normalized distribution output by a respective LM 42 in the mixture distribution.
- the next word of the agent-side utterance (or suggested utterance) is chosen using the mixture distribution output by the RNN 50 .
- the next word of the utterance is chosen as the most probable word according to the mixture distribution.
- the next word may be chosen by sampling the mixture distribution, and the process may be repeated several times to construct a plurality of current utterance candidates.
- the chosen word is added to (i.e. concatenated to) the current utterance 34 , so that the current utterance 34 is constructed word-by-word.
- the current utterance 34 is fed back to the natural language interfacing device 10 and may be used in various ways depending upon the dialog modality (chat or telephonic) and level of automation (semi-automated or fully automated) as discussed next.
- the current utterance 34 is suitably stored as text, e.g. ASCII text, which in a chat embodiment may directly form a latest agent-side message (or message suggestion).
- the current utterance 34 may be presented as a suggestion on the display component 14 , or may be converted to speech using the speech synthesizer module 24 .
- the current utterance 34 is a suggestion that is suitably displayed in a suggestions window of the display device 14 for reference by a human agent engaged in the dialog.
- the dialog support module 40 may be invoked several times in semi-automatic mode, using sampling of the mixture distribution for each word, so that different suggested utterances are generated by the several invocations, and these several utterance suggestions may be displayed in the suggestions window of the display device 14 .
- the current utterance is transmitted to the customer or client, e.g. as a text message in chat embodiments or by operation of the speech synthesizer 24 and transmission via the telephonic link to the customer or client as synthesized speech.
- some more detailed embodiments of the dialog support module 40 are next described.
- the RNN 50 adaptively mixes the set of LMs 42 .
- the output of this mixture is given by p(w|x) = Σ_{k=1}^{K} p(k|x)·p_k(w|x), where the p(k|x) terms are computed by a gating network, which is a neural network whose output is a normalized vector (e.g., normalized using the softmax layer 46).
- if the various LMs 42 are expert in various sub-fields (e.g. expert in technical content in the case of the QA-LM, expert in natural language phrasing in the case of some other LM), then they can be combined to produce the RNN 50, which is then expert in the span of sub-fields.
- the gating network selects which LM should be chosen to deal with the input.
- the illustrative embodiment employs a LSTM as the RNN 50 .
- the LSTM architecture maintains a memory in a hidden layer of all inputs received over time, by adding up all (gated) inputs to the hidden layer through time to a memory cell. In this way, errors propagated back through time do not vanish and even inputs received a long time ago are still (approximately) preserved and can play a role in computing the output of the network.
- the LSTM cell includes a memory cell c, an input gate i, a forget gate f, and an output gate o.
- i_t = σ(W_xi·x_t + W_hi·h_{t-1} + W_ci·c_{t-1} + b_i)
- f_t = σ(W_xf·x_t + W_hf·h_{t-1} + W_cf·c_{t-1} + b_f)
- c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc·x_t + W_hc·h_{t-1} + b_c)
- o_t = σ(W_xo·x_t + W_ho·h_{t-1} + W_co·c_t + b_o)
- h_t = o_t ⊙ tanh(c_t)
- where σ is the sigmoid function; i_t, f_t, o_t are the outputs (i.e. activations) of the corresponding gates; c_t is the state of the memory cell; the symbol “⊙” denotes the element-wise multiplication operation; and the W and b terms are weight matrices and bias vectors.
- the network can learn to use the input gate to decide when to memorize information, and similarly learn to use the output gate to decide when to access that memory.
- the forget gate, finally, serves to reset the memory.
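- For concreteness, the gate equations above translate directly into code. The numpy sketch below is illustrative only; the parameter dictionary `p` and its key names are assumptions, not the patent's reference implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One time step of the LSTM cell of FIG. 2; p holds the W weight matrices and b bias vectors."""
    i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["W_ci"] @ c_prev + p["b_i"])  # input gate
    f_t = sigmoid(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["W_cf"] @ c_prev + p["b_f"])  # forget gate
    c_t = f_t * c_prev + i_t * np.tanh(p["W_xc"] @ x_t + p["W_hc"] @ h_prev + p["b_c"])  # memory cell
    o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["W_co"] @ c_t + p["b_o"])     # output gate
    h_t = o_t * np.tanh(c_t)                                                             # hidden state
    return h_t, c_t
```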
- an illustrative embodiment of the mixture model implemented by the dialog support module 40 using LSTM architecture as the RNN 50 is diagrammatically shown.
- Let w_1^t = w_1 . . . w_t be the past dialog 32 concatenated with the current utterance 34 (up to but not including the word being chosen), i.e. the dialog history.
- There are K LMs (where K is a positive integer having a value of at least two) that compute distributions over their respective vocabularies V_k, k = 1, . . . , K. The distributions can be written as p_k(w ∈ V_k | w_1^t), ∀k = 1, . . . , K.
- the LSTM is used to encode the history word-by-word into a vector which is the hidden state h t of the LSTM at time step t.
- the softmax layer 46 is used to compute probabilities associated with the K LMs 42 according to p(k|w_1^t) = exp(u(k,h_t)) / Σ_{k′=1}^{K} exp(u(k′,h_t)), where [u(1,h_t), . . . , u(K,h_t)]^T = W·h_t + b.
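- A minimal sketch of this gating computation follows; the names W and b, and the assumption that the K distributions are aligned to one common vocabulary, are illustrative simplifications.

```python
import numpy as np

def mixture_distribution(h_t, W, b, lm_dists):
    """Mix K language-model word distributions using softmax weights computed from h_t.

    lm_dists: list of K word distributions, assumed aligned to a shared vocabulary.
    """
    u = W @ h_t + b                      # scores u(k, h_t), shape (K,)
    weights = np.exp(u - u.max())
    weights /= weights.sum()             # softmax over the K language models
    return sum(w * p for w, p in zip(weights, lm_dists))
```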
- tests of the dialog support module 40 were performed using a dialog corpus consisting of 165,146 dialogs from a technology company in the domain of mobile telephony support.
- the corpus was split into training, development, and test data sets whose sizes were 145,146 dialogs, 10,000 dialogs, and 10,000 dialogs, respectively.
- each dialog was preprocessed to tokenize it (using Stanford CoreNLP in the actually performed tests) and convert it to lowercase. Unused information such as the head, tail, and chat time was removed.
- FIG. 4 shows an example of a dialog after tokenization and lowercasing. The indicated head, tail, and time sections are removed. (Note that actual names are anonymized in FIG. 4 , as well as in all other dialog examples presented herein).
- for each response, a context-response pair was created whose context consists of all sentences appearing before the response. To mark the end of each context, a token “<EOC>” was added to its tail.
- the prefix tokens “<CLIENT>” and “<AGENT>” were also added to indicate whether the following utterance was from the client or from the agent. Similarly, to mark the end of each response, a token “<EOR>” was added to its tail.
- the knowledge base 44 used consisted of 1,744,565 device-attribute-value triples, e.g. [Apple iPhone 5] [camera megapixels] [8.0].
- the target context-response pairs were those in which the client asks about numeric value attributes. Because the dialogs were not classified as to this format, a simple heuristic was employed to select target context-response pairs: a context-response pair is chosen if its response contained a number and one of the following keywords: cpu, processor, ghz, mhz, memory, mb(s), gb(s), byte, pixel, height, width, weigh, size, camera, mp, hour(s), or mah. Using this heuristic, 17,660 pairs were collected for training, 1,362 pairs for development, and 1,394 pairs for testing. These sets are significantly smaller than the collected dialog pairs. Of the 27,600 total tokens in the development set, only 1,875 (6.8%) were value tokens found in the knowledge base.
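- A sketch of this selection heuristic follows; the tokenized-response representation and the exact expansion of the “mb(s)” and “hour(s)” keyword variants are assumptions.

```python
import re

# Keyword list from the heuristic above; plural forms expanded as an assumption.
KEYWORDS = {"cpu", "processor", "ghz", "mhz", "memory", "mb", "mbs", "gb", "gbs",
            "byte", "pixel", "height", "width", "weigh", "size", "camera", "mp",
            "hour", "hours", "mah"}

def is_target_pair(response_tokens):
    """Keep a context-response pair if the response has a number and a spec keyword."""
    has_number = any(re.fullmatch(r"\d+(\.\d+)?", t) for t in response_tokens)
    has_keyword = any(t in KEYWORDS for t in response_tokens)
    return has_number and has_keyword
```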
- the actually performed tests employed two language models: a neural chat model for capturing natural language content, and a QA-LM for capturing domain-specific content. It is to be understood that these are merely illustrative examples and the disclosed approach can in general use any number and type(s) of language models.
- the illustrative neural chat LM is first described, followed by the illustrative QA-LM.
- on the response side, this LSTM inherits the hidden state and the memory cell of the context side LSTM. Differing from the context side, a softmax layer is placed on top of this LSTM to compute the probability of generating word w given the current history, p(w|w_1^t, C) = exp(u(w,h_t)) / Σ_{v∈V_c} exp(u(v,h_t)).
- a question is posed to query a knowledge base in a formalism such as Structured Query Language (SQL).
- a human-like question answering system should take natural questions as input instead of formal expressions.
- a neural model is built to translate natural language questions to formal queries that are then posed to the knowledge base 44 .
- the illustrative QA-LM employs an LSTM to encode a natural question into a vector. Two softmax layers are then used to predict the device name and the attribute, as shown in FIG. 7 .
- the sequence-to-tuple neural model thus translates natural questions to the formal form ⁇ device name, attribute, ?> which is suitably input to the KB 44 as shown in previous FIG. 6 . Because the focus here is on a question-answer (QA) situation in which the customer or client asks about device specifications, this model is sufficient. For more complex types of domain-specific inquiries, more advanced QA models could be employed.
- alternatively, a softmax layer may be used to predict the values directly. However, this alternative approach has certain disadvantages.
- Such a softmax layer would comprise a large number of units, equal to the number of values that can be found in the knowledge base. This number is O(n_d × n_a), where n_d and n_a are respectively the number of devices and the number of attributes.
- the two softmax layers output a distribution over devices p_d(·) and a distribution over attributes p_a(·). From these, a distribution over V_qa, the set of all values found in the knowledge base 44, can be computed according to p_qa(v) = Σ_{<d,a,v>∈T} p_d(d)·p_a(a), where T is the set of all <device-attribute-value> triples in the knowledge base.
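- A sketch of this computation over the knowledge base triples (the dict-based representations are assumptions):

```python
from collections import defaultdict

def value_distribution(p_d, p_a, triples):
    """Combine device and attribute softmax outputs into a distribution over values.

    p_d, p_a: dicts mapping device / attribute names to probabilities;
    triples: iterable of (device, attribute, value) from the knowledge base.
    """
    p_qa = defaultdict(float)
    for d, a, v in triples:
        p_qa[v] += p_d.get(d, 0.0) * p_a.get(a, 0.0)  # sum over triples sharing value v
    return dict(p_qa)
```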
- the training of the described illustrative QA-LM is analogous to the already-described training of the neural chat model.
- natural questions are not available to train the QA model.
- Paraphrasing methods could be used to generate natural questions, but only if suitable paraphrasing sources are available, which may not be the case for domain-specific settings. Instead, for the actually performed training a training dataset was generated by the following process.
- For each tuple <device name, attribute>: (1) the device name was paraphrased by randomly dropping some words (e.g., “apple iphone 4” becomes “iphone 4”); (2) the attribute name was paraphrased by similarly dropping some words and also by using the small manually generated dictionary given in Table 2; (3) l words were drawn from a vocabulary with respect to word frequency, where l ~ Gamma(k, n) (e.g., “I have cat”); and (4) all words generated in steps (1)-(3) were concatenated and shuffled (e.g., “cat iphone 4 have battery I”). The output of step (4) was then taken to form a training datapoint such as:
- cat iphone 4 have battery i → apple_iphone_4 battery_talk_time
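- The four-step generation process can be sketched as follows; the word-drop probability, the Gamma hyperparameter defaults, and the uniform (rather than frequency-weighted) noise sampling are simplifying assumptions.

```python
import random

def make_datapoint(device, attribute, paraphrases, noise_vocab, k=2.0, n=1.5):
    """Generate one synthetic (question, target) training pair for the QA model."""
    dev_words = [w for w in device.split() if random.random() > 0.3]              # (1) drop words
    attr_words = random.choice(paraphrases.get(attribute, [attribute])).split()  # (2) paraphrase
    noise = random.choices(noise_vocab, k=int(random.gammavariate(k, n)))         # (3) l ~ Gamma(k, n)
    question = dev_words + attr_words + noise
    random.shuffle(question)                                                      # (4) shuffle
    return " ".join(question), (device.replace(" ", "_"), attribute.replace(" ", "_"))
```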
- a training set of 7,682,365 datapoints and a development set of 10,000 datapoints were generated for training the QA-LM.
- the neural chat model (described with reference to FIG. 5) and the neural QA model (described with reference to FIGS. 6 and 7) were integrated using the LSTM-based mixture-of-LMs model (described with reference to FIGS. 2 and 3). Because the neural chat model also employs LSTMs to encode histories into vectors, the integration makes use of the hidden state of the neural chat model LSTM to compute normalized weights, as shown in FIG. 8.
- the rationale behind this approach is as follows.
- the neural chat model is expected to generate smooth responses into which the neural QA model effectively “inserts” values retrieved from the knowledge base 44 . Because the hidden state of the neural chat model captures the uncertainty of generating the next word, it is also able to detect whether the next word should be generated by the neural chat model.
- α = σ(w^T·h_t^c + b)
- p(w|w_1^t, C) = α·p_c(w|w_1^t, C) + (1−α)·p_qa(w|w_1^t, C)
- where h_t^c is the hidden state of the neural chat model, σ is the sigmoid function, and w ∈ ℝ^dim(h_t^c) and b are a weight vector and bias learned during training.
- the sigmoid function is equivalent to the softmax function for two output units—hence the sigmoid function can be used in place of the softmax function for the normalization where the input includes only two distributions.
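- In code, the two-model case reduces to a single sigmoid gate. The sketch below assumes learned parameters w and b and precomputed per-word distributions aligned to a common vocabulary.

```python
import numpy as np

def mix_two_models(h_t_chat, w, b, p_chat, p_qa):
    """Mix neural chat and QA word distributions with a sigmoid gate on the chat hidden state."""
    alpha = 1.0 / (1.0 + np.exp(-(w @ h_t_chat + b)))  # alpha = sigmoid(w^T h_t^c + b)
    return alpha * p_chat + (1.0 - alpha) * p_qa       # element-wise mixture over the vocabulary
```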
- β(w) = 100 if w ∈ V_qa\V_c, and β(w) = 1 otherwise
- where λ is the regularization parameter and D is the set of all training device-specification context-response pairs.
- the parameter β(w ∈ V_qa\V_c) is set to the relatively high value of 100 because it is desired for the training phase to focus on those tokens representing values in the knowledge base but not supported by the neural chat model. Another reason for choosing this high value for β(w ∈ V_qa\V_c) is that, in the set of target context-response pairs, tokens found in V_c significantly outnumber tokens found in V_qa.
- a decoder similar to that used in the neural chat model can be used here.
- the current best open prefix (which is not ended by “<EOR>”) is extended with successors by attaching every word in the vocabulary V_c ∪ V_qa to its tail.
- the decoder for the integrated model of FIG. 8 differs in that the decoder first attaches each w ⁇ V c to the tail of the chosen open prefix. The decoder then, for each device name d and each attribute a, attaches to the tail of the prefix a token ⁇ d, a>.
- this generation approach favors generating a value from one single best <d,a>, rather than values from many low-probability <d′,a′>.
- a constraint was stipulated that does not allow a response to contain more than one token from the neural QA model. In other words, a response is to answer not more than one question.
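- A compact sketch of such a constrained beam decoder follows; the `step` callback, which scores candidate continuations of a prefix and flags QA-generated tokens, is a hypothetical interface.

```python
import heapq

def decode(step, beam_size=5, max_len=60, eor="<EOR>"):
    """Beam search in which each hypothesis may contain at most one QA-model token."""
    beams = [(0.0, [], False)]                # (log-prob, prefix, used_qa_token)
    for _ in range(max_len):
        candidates = []
        for score, prefix, used_qa in beams:
            if prefix and prefix[-1] == eor:  # closed hypothesis: carry forward unchanged
                candidates.append((score, prefix, used_qa))
                continue
            for token, logp, from_qa in step(prefix):
                if from_qa and used_qa:
                    continue                  # enforce: at most one QA token per response
                candidates.append((score + logp, prefix + [token], used_qa or from_qa))
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return max(beams, key=lambda c: c[0])[1]
```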
- test results for the actually constructed implementation of the integrated model of FIG. 8 are described.
- perplexity per word is used as a performance metric, and some chat examples are shown for further illustration.
- the neural chat model was tested in isolation using the already described dialog training data.
- the output h_t and memory c_t of each LSTM are 1024-d vectors.
- the vocabulary of each side has 19,300 words (i.e., all words appearing more than once in the training data). Each word was assigned to a 512-d vector, which is initialized randomly and learnt later.
- the model was trained with the learning rate 0.01, until its perplexity per word on the development set converged.
- the neural chat model was compared against a standard KN-smoothing 5-gram model trained on all responses found in the training dataset.
- the 5-gram model was found to have a perplexity per word of 7.7, whereas the neural chat model disclosed herein achieved a significantly lower perplexity of 5.6.
- the neural chat model was trained a second time, but without giving contexts.
- the new perplexity per word was 6.8. This indicates that both the LSTMs and contexts contribute to the significant reduction in perplexity.
- in the following examples (presented in full in the tables below), the “AGENT” outputs were produced by the neural chat model.
- the responses in Examples 1 and 2 look smooth. However, some given values are not correct; for instance, the standby time should be 225 hours. When asked difficult questions, such as comparing two devices, the model also does not provide satisfactory results, as shown in Example 3.
- Tests on device-specification chats are considered next. To this end, training of the neural chat model was continued on the device-specification training data (target context-response pairs selected from the dialogs using the heuristic as described previously) with a smaller learning rate, 0.001. Using this smaller learning rate it may be expected that the model will not forget what it has learnt on the chat corpus.
- the neural QA model was then trained on the data generated as described previously (e.g., by processing <device name, attribute> tuples, including randomly dropping words).
- the output h_t and memory c_t of the LSTM used in the model are 1024-d vectors. Its vocabulary has 12,700 words.
- the model was trained until its likelihood on the development set converged. Finally, the integration model was trained on the device-specification training data.
- Table 3 presents perplexities computed on all tokens and on only value tokens that are found in the knowledge base. It is evident that the integration does not help to decrease the perplexity on all tokens, though the increase (0.7) is small. However, it does help to decrease perplexity 38.3% on value tokens.
- in Example 4, the client asks about an attribute of two devices.
- Example 5 presents a scenario involving a single device, but here with two attributes.
- FIG. 9 presents α, p_c, and p_qa for the context-response pair: {<CLIENT> what is the batt life on the new photon q <AGENT> that's an excellent question, i can definitely help you with that ! <pause> <AGENT> talk: 7.5 hours max. -lrb- 450 minutes -rrb- <newline> <EOC>} — {standby: 220 hours max. -lrb- 9.2 days -rrb- <EOR>}.
- α tends to have high values (>0.9) for tokens (e.g., “hours”, “days”) that should be generated by the neural chat model, and lower values for tokens (e.g., “220”) that should be generated by the QA model. Note that because the value “9.2” is not in the knowledge base, its p_qa is zero.
- FIG. 10 presents α, p_c, and p_qa for the context-response pair: {<CLIENT> hello brandon i was wondering how many gb s of internal memory does the boost mobile galaxy prevail have <AGENT> sorry to hear that this is giving you trouble, but i will be happy to assist you. <CLIENT> ok <AGENT> 117 mb internal storage, available to user <newline> <EOC>} — {plus 2 gigabyte card included <pause> <EOR>}.
- FIG. 10 shows that the neural QA model is not always helpful. In this example, the token “2” receives very high values of α and p_c. This is because the constituent “plus 2” appears many times in the training data; the neural chat model is thus able to remember this fact. Therefore, the integration model chooses the neural chat model instead of the neural QA model to generate this token.
- an LSTM-based (or more generally, RNN-based) mixture-of-experts is employed for language modelling.
- the LSTM is used to encode the whole dialog history to compute a set of normalized weights.
- the final output of the LSTM at each time step is a sum of weighted outputs from given independent component language models.
- This mixture distribution is used to choose a next word of an utterance, and this process is repeated for each word until a suitable terminator is selected, such as ⁇ EOR> or ⁇ EOC> in the illustrative examples.
- the approach can be used to integrate a neural chat model with a neural QA model, so as to combine the linguistic capability of the neural chat and the domain-specific technical expertise of the QA model.
- Experimental results presented herein demonstrate that the integration model is capable of performing chats in which the user asks about device-specification questions.
- artificial question-answer data are generated, e.g. by randomly dropping words, that cover more chat scenarios, which enables effective training with a reduced need to acquire domain-specific training data.
- the disclosed approaches are expected to find use in a wide range of dialog applications.
- the illustrative tests were performed in the domain of mobile telephony support, with questions being posed about device specifications.
- questions may be posed regarding how to solve a technical problem, in which the knowledge base stores answers comprising word strings.
- as another example, an automatic math instructor could be developed by integrating a neural chat language model with a language model that can output mathematical content.
- the disclosed functionality of the dialog device and its constituent components implemented by the computer 12 may additionally or alternatively be embodied as a non-transitory storage medium storing instructions readable and executable by the computer 12 (or another electronic processor or electronic data processing device) to perform the disclosed operations.
- the non-transitory storage medium may, for example, include one or more of: an internal hard disk drive of the computer 12 , external hard drive, a network-accessible hard drive or other magnetic storage medium; a solid state drive (SSD) of the computer 12 or other electronic storage medium; an optical disk or other optical storage medium; various combinations thereof; or so forth.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Machine Translation (AREA)
Abstract
Description
p(w|x) = Σ_{k=1}^{K} p(k|x)·p_k(w|x)
where the p(k|x) terms are computed by a gating network, which is a neural network whose output is a normalized vector (e.g., normalized using the softmax layer 46). Conceptually, if the various LMs are expert in various sub-fields, then they can be combined to produce a mixture model that is expert in the span of the sub-fields.
i_t = σ(W_xi·x_t + W_hi·h_{t-1} + W_ci·c_{t-1} + b_i)
f_t = σ(W_xf·x_t + W_hf·h_{t-1} + W_cf·c_{t-1} + b_f)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc·x_t + W_hc·h_{t-1} + b_c)
o_t = σ(W_xo·x_t + W_ho·h_{t-1} + W_co·c_t + b_o)
h_t = o_t ⊙ tanh(c_t)
where σ is the sigmoid function; i_t, f_t, o_t are the outputs (i.e. activations) of the corresponding gates; c_t is the state of the memory cell; the symbol “⊙” denotes the element-wise multiplication operation; and the W and b terms are weight matrices and bias vectors.
p(k|w_1^t) = exp(u(k,h_t)) / Σ_{k′=1}^{K} exp(u(k′,h_t)), where [u(1,h_t), . . . , u(K,h_t)]^T = W·h_t + b. In the foregoing, the term W ∈ ℝ^(K×dim(h_t)) is a weight matrix and b ∈ ℝ^K is a bias vector learned during training.
- Context <CLIENT> how do i change the text notification on my htc evo <AGENT> sorry you are having problems with that but you are in the right place. before we begin can i start with you name please ?<CLIENT> Y test <EOC>
- Response thank you Y. one moment while i pull up the information on that device. <EOR>
Usually the dialog alternates between <CLIENT> and <AGENT> turns, where in each <AGENT> turn the agent is providing a response to the client utterance of the immediately preceding <CLIENT> turn. However, in some cases the agent may need to provide two or more responses to the last client utterance (or, stated another way, the agent-side turn may include more than one response). In such cases, a token “<pause>” is added to the tail of the agent responses, e.g. in the form “<pause><EOC>”. Similarly, because long responses make both training and generation difficult, a long response is split into smaller ones and “<newline>” tokens are added to the tails of the pieces to mark the break-points. For the actually processed dialog training data, the context-response extraction process yielded 973,684 pairs for training, 74,141 pairs for development, and 75,474 pairs for testing.
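A sketch of this context-response extraction follows; the (speaker, text) turn representation is an assumption, and <pause>/<newline> insertion is omitted for brevity.

```python
def extract_pairs(turns):
    """Yield (context, response) training pairs from a dialog given as (speaker, text) turns."""
    pairs, history = [], []
    for speaker, text in turns:
        if speaker == "AGENT":
            context = " ".join(history) + " <EOC>"   # everything before the response
            pairs.append((context, text + " <EOR>"))
        history.append("<%s> %s" % (speaker, text))  # prefix each turn with its speaker token
    return pairs
```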
- [Apple iPhone 5] [camera megapixels] [8.0]
There were 4,729 devices and 608 attributes in the knowledge base. Because only numeric values were considered, only triples with numeric values were retained, resulting in a set of 65,226 triples covering 34 attributes. The 34 attributes are listed in Table 1. Since this knowledge base used standard units (e.g., “megabyte” for storage, “mhz” for CPU frequency) but call center agents sometimes use other units (e.g., “gigabyte”, “ghz”), each value in the knowledge base was converted to other units where applicable.
TABLE 1: List of attributes

battery capacity
battery standby time
battery talk time
browser screen size
camera digital zoom factor
camera maximum resolution
camera megapixels
camera optical zoom factor
cpu
cpu maximum frequency
internal ram
internal storage
java app usable screen size
java max memory size
native app usable screen size
primary screen physical height
primary screen physical width
primary screen rotate
primary screen type
removable memory maximum size
rendering screen size
screen orientation
screen size
screen size char
secondary camera maximum resolution
secondary camera megapixels
secondary screen physical height
secondary screen physical width
secondary screen size
secondary screen size char
secondary screen type
sync contacts to removable memory
wallpaper external screen usable size
wallpaper internal screen usable size
p(w=v_i|w_1^t, C) = exp(u(v_i,h_t)) / Σ_{j=1}^{|V_c|} exp(u(v_j,h_t)), where [u(v_1,h_t), . . . , u(v_{|V_c|},h_t)]^T = W·h_t + b, with W a weight matrix and b a bias vector learned during training.
The model parameters were learned by maximizing a regularized log-likelihood over D, the set of all context-response pairs in the training data, with λ the regularization parameter. To perform the optimization, the mini-batch gradient descent method was employed, with the gradient computed efficiently by the back-propagation algorithm. In the actually performed training, the AdaGrad method (Duchi et al., “Adaptive subgradient methods for online learning and stochastic optimization”, The Journal of Machine Learning Research, pages 2121-2159 (2011)) was used to automatically adapt the learning rate for each parameter. To speed up the training phase, the forward process used only the last 350 tokens of each context, and the backward process was stopped at the 100th time step after passing “<EOC>”.
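For reference, a minimal sketch of the diagonal AdaGrad per-parameter update; the hyperparameter defaults are illustrative, not values from the patent.

```python
import numpy as np

def adagrad_update(theta, grad, cache, lr=0.01, eps=1e-8):
    """Scale each parameter's step by the inverse root of its accumulated squared gradients."""
    cache += grad ** 2
    theta -= lr * grad / (np.sqrt(cache) + eps)
    return theta, cache
```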
where T is the set of all <device-attribute-value> triples in the knowledge base.
- galaxy s3 dog mouse cpu buy cake cat iphone 4 have battery i → apple_iphone_4 battery_talk_time
TABLE 2: Paraphrasing Attributes

Attribute | Alternatives
battery capacity | battery size
battery standby time | battery life
battery talk time | battery life
camera megapixels | megapixel, mega pixel, mp, mega pixels
cpu maximum frequency | cpu, processor, power
internal ram | ram
internal storage | memory
primary screen physical height | screen height
primary screen physical width | screen width
removable memory maximum size | external memory, external storage, memory card, sd card
secondary camera maximum resolution | front camera resolution
secondary camera megapixels | front megapixel, front mega pixel, front mp, front mega pixels
α = σ(w^T·h_t^c + b)
p(w|w_1^t, C) = α·p_c(w|w_1^t, C) + (1−α)·p_qa(w|w_1^t, C)
where h_t^c is the hidden state of the neural chat model, σ is the sigmoid function, and w ∈ ℝ^dim(h_t^c) and b are a weight vector and bias learned during training.
where β(w) = 100 if w ∈ V_qa\V_c and β(w) = 1 otherwise, λ is the regularization parameter, and D is the set of all training device-specification context-response pairs. The parameter β(w ∈ V_qa\V_c) is set to the relatively high value of 100 because it is desired for the training phase to focus on those tokens representing values in the knowledge base but not supported by the neural chat model. Another reason for choosing this high value for β(w ∈ V_qa\V_c) is that, in the set of target context-response pairs, tokens found in V_c significantly outnumber tokens found in V_qa.
Example 1:
CLIENT: hello, i want to register a customer
AGENT: thanks for chatting with X. i'm happy to support you and ensure that our customer is registered.
CLIENT: what should i do?
AGENT: name, number, carrier, device.
CLIENT: Y, 123, att, iphone 5
AGENT: was there anything else i could help you out with today?
CLIENT: that's all what i need. thanks
AGENT: great, thanks for chatting with X. let us know if you need anything else.
Example 2:
CLIENT: hello
AGENT: hi
CLIENT: could you tell me the battery life of iphone 5?
AGENT: that's an excellent question, i can definitely help you with that! would you mind if i took two or three minutes to do some research on that?
CLIENT: no problem
AGENT: thanks for you patience!
battery
talk time: 14.00 hours
the average is 8 h (509 min)
stand-by time: 8.3 days (200 hours)
the average is 20 days (481 hours)
capacity: 1420 mah
CLIENT: thanks
AGENT: no problem
was there anything else i could help you with today?
CLIENT: no thanks
AGENT: thank you for contacting X. have a great rest of your day!
Example 3:
CLIENT: hi, which phone is better, iphone 5 or galaxy s iii?
AGENT: that's an excellent question, i can definitely help you with that! would you mind if i took two or three minutes to do some research on that?
CLIENT: go ahead
AGENT: thanks for you patience!
#UNKNOWN#
was there anything else i could help you out with today?
TABLE 3: Perplexities of the neural chat model and the integration model

 | Neural chat | Integration
All tokens | 14.7 | 15.4
Only value tokens | 75.8 | 46.8
Example 4:
CLIENT: hello, what is the battery talk time of lumia 900
AGENT: nokia_lumia_900|||battery_talk_time hours
CLIENT: and how about zte blade
AGENT: zte_blade|||battery_talk_time hours
CLIENT: thanks
AGENT: was there anything else i could help you out with today?
CLIENT: that is all. thanks
AGENT: thank you for contacting X. have a great rest of your day!
Example 5:
CLIENT: i want to know about how fast is the processor of apple ipad 2
AGENT: that's an excellent question, i can definitely help you with that!
apple_ipad_2|||cpu_maximum_frequency ghz
was there anything else i could help you out with today?
CLIENT: and its ram?
AGENT: apple_ipad_2|||internal_ram gigabytes
CLIENT: thanks
AGENT: was there anything else i could help you out with today?
Claims (18)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/139,886 US10431205B2 (en) | 2016-04-27 | 2016-04-27 | Dialog device with dialog support generated using a mixture of language models combined using a recurrent neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/139,886 US10431205B2 (en) | 2016-04-27 | 2016-04-27 | Dialog device with dialog support generated using a mixture of language models combined using a recurrent neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170316775A1 US20170316775A1 (en) | 2017-11-02 |
US10431205B2 true US10431205B2 (en) | 2019-10-01 |
Family
ID=60158475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/139,886 Active 2036-05-26 US10431205B2 (en) | 2016-04-27 | 2016-04-27 | Dialog device with dialog support generated using a mixture of language models combined using a recurrent neural network |
Country Status (1)
Country | Link |
---|---|
US (1) | US10431205B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210034817A1 (en) * | 2018-04-13 | 2021-02-04 | National Institute Of Information And Communications Technology | Request paraphrasing system, request paraphrasing model and request determining model training method, and dialogue system |
US11681932B2 (en) * | 2016-06-21 | 2023-06-20 | International Business Machines Corporation | Cognitive question answering pipeline calibrating |
US11947604B2 (en) | 2020-03-17 | 2024-04-02 | International Business Machines Corporation | Ranking of messages in dialogs using fixed point operations |
Families Citing this family (174)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US20120309363A1 (en) | 2011-06-03 | 2012-12-06 | Apple Inc. | Triggering notifications associated with tasks items that represent tasks to perform |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
AU2014214676A1 (en) | 2013-02-07 | 2015-08-27 | Apple Inc. | Voice trigger for a digital assistant |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
EP3937002A1 (en) | 2013-06-09 | 2022-01-12 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
TWI566107B (en) | 2014-05-30 | 2017-01-11 | 蘋果公司 | Method for processing a multi-part voice command, non-transitory computer readable storage medium and electronic device |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US12223282B2 (en) | 2016-06-09 | 2025-02-11 | Apple Inc. | Intelligent automated assistant in a home environment |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US12197817B2 (en) | 2016-06-11 | 2025-01-14 | Apple Inc. | Intelligent device arbitration and control |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179343B1 (en) * | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
US10282546B1 (en) * | 2016-06-21 | 2019-05-07 | Symantec Corporation | Systems and methods for detecting malware based on event dependencies |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11222253B2 (en) * | 2016-11-03 | 2022-01-11 | Salesforce.Com, Inc. | Deep neural network model for processing data through multiple linguistic task hierarchies |
KR102630668B1 (en) * | 2016-12-06 | 2024-01-30 | Electronics and Telecommunications Research Institute | System and method for expanding input text automatically |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US10049106B2 (en) * | 2017-01-18 | 2018-08-14 | Xerox Corporation | Natural language generation through character-based recurrent neural networks with finite-state prior knowledge |
WO2018175291A1 (en) * | 2017-03-20 | 2018-09-27 | Ebay Inc. | Detection of mission change in conversation |
US10360908B2 (en) * | 2017-04-19 | 2019-07-23 | International Business Machines Corporation | Recommending a dialog act using model-based textual analysis |
US10224032B2 (en) * | 2017-04-19 | 2019-03-05 | International Business Machines Corporation | Determining an impact of a proposed dialog act using model-based textual analysis |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | Maintaining privacy of personal information |
DK201770427A1 (en) | 2017-05-12 | 2018-12-20 | Apple Inc. | Low-latency intelligent automated assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | User-specific acoustic models |
DK201770411A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | MULTI-MODAL INTERFACES |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
DK179560B1 (en) | 2017-05-16 | 2019-02-18 | Apple Inc. | Far-field extension for digital assistant services |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
CN107368524B (en) * | 2017-06-07 | 2020-06-02 | Advanced New Technologies Co., Ltd. | Dialog generation method and apparatus, and electronic device |
US10176808B1 (en) * | 2017-06-20 | 2019-01-08 | Microsoft Technology Licensing, Llc | Utilizing spoken cues to influence response rendering for virtual assistants |
US10446147B1 (en) * | 2017-06-27 | 2019-10-15 | Amazon Technologies, Inc. | Contextual voice user interface |
US11316865B2 (en) | 2017-08-10 | 2022-04-26 | Nuance Communications, Inc. | Ambient cooperative intelligence system and method |
US20190066823A1 (en) | 2017-08-10 | 2019-02-28 | Nuance Communications, Inc. | Automated Clinical Documentation System and Method |
US10515625B1 (en) | 2017-08-31 | 2019-12-24 | Amazon Technologies, Inc. | Multi-modal natural language processing |
US10635707B2 (en) * | 2017-09-07 | 2020-04-28 | Xerox Corporation | Contextual memory bandit for proactive dialogs |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
JP2019106054A (en) * | 2017-12-13 | 2019-06-27 | 株式会社東芝 | Dialog system |
KR102608469B1 (en) * | 2017-12-22 | 2023-12-01 | Samsung Electronics Co., Ltd. | Method and apparatus for generating natural language |
CN108038230B (en) * | 2017-12-26 | 2022-05-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Information generation method and device based on artificial intelligence |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
WO2019165260A1 (en) * | 2018-02-22 | 2019-08-29 | Verint Americas Inc. | System and method of highlighting influential samples in sequential analysis |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US11250382B2 (en) | 2018-03-05 | 2022-02-15 | Nuance Communications, Inc. | Automated clinical documentation system and method |
WO2019173333A1 (en) | 2018-03-05 | 2019-09-12 | Nuance Communications, Inc. | Automated clinical documentation system and method |
US20190272895A1 (en) | 2018-03-05 | 2019-09-05 | Nuance Communications, Inc. | System and method for review of automated clinical documentation |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US11568863B1 (en) * | 2018-03-23 | 2023-01-31 | Amazon Technologies, Inc. | Skill shortlister for natural language processing |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
CN108763284B (en) * | 2018-04-13 | 2021-07-20 | South China University of Technology | Question answering system implementation method based on deep learning and topic model |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
CN108681574B (en) * | 2018-05-07 | 2021-11-05 | Hefei Institutes of Physical Science, Chinese Academy of Sciences | Text-summary-based answer selection method and system for non-factoid question answering |
US11600194B2 (en) * | 2018-05-18 | 2023-03-07 | Salesforce.Com, Inc. | Multitask learning as question answering |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
CN108763504B (en) * | 2018-05-30 | 2020-07-24 | Zhejiang University | Dialog reply generation method and system based on reinforced dual-channel sequence learning |
RU2688758C1 (en) * | 2018-05-31 | 2019-05-22 | Public Joint-Stock Company Sberbank of Russia (PJSC Sberbank) | Method and system for arranging a dialog with a user in a channel convenient for the user |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | Attention aware virtual assistant dismissal |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11076039B2 (en) | 2018-06-03 | 2021-07-27 | Apple Inc. | Accelerated task performance |
US10664472B2 (en) | 2018-06-27 | 2020-05-26 | Bitdefender IPR Management Ltd. | Systems and methods for translating natural language sentences into database queries |
CN108897723B (en) * | 2018-06-29 | 2022-08-02 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Scene conversation text recognition method, device, and terminal |
CN109271524B (en) * | 2018-08-02 | 2021-10-15 | Institute of Computing Technology, Chinese Academy of Sciences | Entity linking method in a knowledge base question answering system |
US10748526B2 (en) * | 2018-08-28 | 2020-08-18 | Accenture Global Solutions Limited | Automated data cartridge for conversational AI bots |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
CN109242400A (en) * | 2018-11-02 | 2019-01-18 | Nanjing University of Information Science and Technology | Logistics express waybill number recognition method based on a convolutional gated recurrent neural network |
CN111309990B (en) * | 2018-12-12 | 2024-01-23 | Beijing Didi Infinity Technology and Development Co., Ltd. | Statement response method and device |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11023530B2 (en) * | 2019-02-13 | 2021-06-01 | International Business Machines Corporation | Predicting user preferences and requirements for cloud migration |
US11258730B2 (en) * | 2019-03-06 | 2022-02-22 | Go Daddy Operating Company, LLC | Generating a plurality of selectable responses based on a database indexed by receiver devices storing responses to similar SMS messages |
CN109979450B (en) * | 2019-03-11 | 2021-12-07 | Hisense Visual Technology Co., Ltd. | Information processing method and apparatus, and electronic device |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11704573B2 (en) * | 2019-03-25 | 2023-07-18 | Here Global B.V. | Method, apparatus, and computer program product for identifying and compensating content contributors |
US11501761B2 (en) * | 2019-04-05 | 2022-11-15 | Samsung Electronics Co., Ltd. | Method and apparatus for speech recognition |
KR102758478B1 (en) * | 2019-04-05 | 2025-01-22 | Samsung Electronics Co., Ltd. | Method and apparatus for speech recognition |
US10964309B2 (en) * | 2019-04-16 | 2021-03-30 | Microsoft Technology Licensing, Llc | Code-switching speech recognition with end-to-end connectionist temporal classification model |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | USER ACTIVITY SHORTCUT SUGGESTIONS |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
CN111125333B (en) * | 2019-06-06 | 2022-05-27 | Beijing Institute of Technology | Generative question answering method based on representation learning and a multi-layer coverage mechanism |
CN110222155B (en) * | 2019-06-13 | 2020-10-02 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Dialog generation method, device, and terminal based on a knowledge selection strategy |
US11216480B2 (en) | 2019-06-14 | 2022-01-04 | Nuance Communications, Inc. | System and method for querying data points from graph data structures |
CN110309282B (en) * | 2019-06-14 | 2021-08-27 | Beijing QIYI Century Science and Technology Co., Ltd. | Answer determination method and device |
US11227679B2 (en) | 2019-06-14 | 2022-01-18 | Nuance Communications, Inc. | Ambient clinical intelligence system and method |
US11043207B2 (en) | 2019-06-14 | 2021-06-22 | Nuance Communications, Inc. | System and method for array data simulation and customized acoustic modeling for ambient ASR |
US20200403945A1 (en) * | 2019-06-19 | 2020-12-24 | International Business Machines Corporation | Methods and systems for managing chatbots with tiered social domain adaptation |
US20200401878A1 (en) * | 2019-06-19 | 2020-12-24 | International Business Machines Corporation | Collaborative real-time solution efficacy |
US11531807B2 (en) | 2019-06-28 | 2022-12-20 | Nuance Communications, Inc. | System and method for customized text macros |
WO2021056255A1 (en) | 2019-09-25 | 2021-04-01 | Apple Inc. | Text detection using global geometry estimators |
US11670408B2 (en) | 2019-09-30 | 2023-06-06 | Nuance Communications, Inc. | System and method for review of automated clinical documentation |
US11681911B2 (en) * | 2019-10-15 | 2023-06-20 | Naver Corporation | Method and system for training neural sequence-to-sequence models by incorporating global features |
US11810578B2 (en) | 2020-05-11 | 2023-11-07 | Apple Inc. | Device arbitration for digital assistant-based intercom systems |
US11183193B1 (en) | 2020-05-11 | 2021-11-23 | Apple Inc. | Digital assistant hardware abstraction |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
GB2596141A (en) * | 2020-06-19 | 2021-12-22 | Continental Automotive Gmbh | Driving companion |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US11222103B1 (en) | 2020-10-29 | 2022-01-11 | Nuance Communications, Inc. | Ambient cooperative intelligence system and method |
US20220189501A1 (en) | 2020-12-16 | 2022-06-16 | Truleo, Inc. | Audio analysis of body worn camera |
KR20240080879A (en) * | 2022-11-30 | 2024-06-07 | Samsung Electronics Co., Ltd. | Method and apparatus for linking entities |
CN116595148B (en) * | 2023-05-25 | 2023-12-29 | Beijing Kuainiu Zhiying Technology Co., Ltd. | Method and system for implementing dialogue flows using a large language model |
US12229313B1 (en) | 2023-07-19 | 2025-02-18 | Truleo, Inc. | Systems and methods for analyzing speech data to remove sensitive data |
US12106393B1 (en) | 2023-08-09 | 2024-10-01 | Authenticating.com, LLC | Artificial intelligence for reducing bias in policing |
CN117457015B (en) * | 2023-10-27 | 2024-07-30 | Shenzhen Technology University | Single-channel speech enhancement method and system based on heterogeneous multiple experts |
Application US 15/139,886 filed 2016-04-27; issued as US10431205B2 (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150255060A1 (en) * | 2014-03-05 | 2015-09-10 | Casio Computer Co., Ltd. | Voice search device, voice search method, and non-transitory recording medium |
US20160352656A1 (en) * | 2015-05-31 | 2016-12-01 | Microsoft Technology Licensing, Llc | Context-sensitive generation of conversational responses |
US20170053646A1 (en) * | 2015-08-17 | 2017-02-23 | Mitsubishi Electric Research Laboratories, Inc. | Method for using a Multi-Scale Recurrent Neural Network with Pretraining for Spoken Language Understanding Tasks |
Non-Patent Citations (14)
Title |
---|
Graves, et al., "Speech recognition with deep recurrent neural networks," 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6645-6649 (2013).
Duchi, et al., "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," Journal of Machine Learning Research, vol. 12, pp. 2121-2159 (2011). |
Fader, et al., "Paraphrase-Driven Learning for Open Question Answering," Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pp. 1608-1618 (2013). |
Florian, et al., "Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation," Computer Science Dept. and Center for Language and Speech Processing, Johns Hopkins University, pp. 1-8 (2001). |
Graves, "Supervised Sequence Labelling with Recurrent Neural Networks," Springer Berlin Heidelberg, pp. i-129 (2012). |
Hochreiter, et al., "Long Short-Term Memory," Neural Computation, vol. 9(8), pp. 1735-1780 (1997). |
Jacobs, et al., "A competitive modular connectionist architecture," Advances in neural information processing systems, MIT, pp. 767-773 (1991). |
Radford, et al., "Named entity recognition with document-specific KB tag gazetteers," Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 512-517 (2015). |
Serban, et al., "Hierarchical Neural Network Generative Models for Movie Dialogues," arXiv preprint arXiv:1507.04808, pp. 1-11 (2015). |
Shang, et al., "Neural Responding Machine for Short-Text Conversation," Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 1577-1586 (2015). |
Sordoni, et al., "A Neural Network Approach to Context-Sensitive Generation of Conversational Responses," Human Language Technologies, ACL, pp. 196-205 (2015). |
Sutskever, et al., "Sequence to Sequence Learning with Neural Networks," Advances in neural information processing systems, pp. 3104-3112 (2014). |
Vinyals, et al., "A Neural Conversational Model," arXiv preprint arXiv:1506.05869, pp. 1-8 (2015). |
Weston, et al., "Memory Networks," arXiv preprint arXiv:1410.3916, pp. 1-15 (2014). |
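Several of the references above name the ingredients of the technique in this patent's title: Jacobs et al. describe mixtures of competing expert networks, Hochreiter et al. the LSTM recurrent unit, and Sordoni et al. context-sensitive response generation. For orientation only, the following minimal Python sketch illustrates the core idea (it is not the patented implementation; every vocabulary entry, dimension, and weight below is a hypothetical toy choice): two expert language models each predict a next-word distribution, and a small recurrent network reads the dialog history and outputs the weights used to mix those predictions.

import numpy as np

VOCAB = ["hello", "how", "can", "i", "help", "you", "reset", "password", "</s>"]
V = len(VOCAB)
rng = np.random.default_rng(0)

def make_expert():
    # Toy "expert" language model: a random row-stochastic bigram table.
    m = rng.random((V, V))
    return m / m.sum(axis=1, keepdims=True)

experts = [make_expert(), make_expert()]   # e.g. a generic LM and a domain LM

H = 8                                      # hidden size of a tiny Elman-style RNN
Wxh = rng.normal(scale=0.1, size=(H, V))   # input-to-hidden weights
Whh = rng.normal(scale=0.1, size=(H, H))   # hidden-to-hidden weights
Who = rng.normal(scale=0.1, size=(2, H))   # hidden-to-mixture-weight weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def next_word_distribution(history):
    # Run the RNN over the word history, then use its final state to form
    # a convex combination of the experts' next-word predictions.
    h = np.zeros(H)
    for w in history:
        x = np.eye(V)[VOCAB.index(w)]      # one-hot encoding of the word
        h = np.tanh(Wxh @ x + Whh @ h)
    weights = softmax(Who @ h)             # one mixture weight per expert
    last = VOCAB.index(history[-1])
    return sum(a * e[last] for a, e in zip(weights, experts))

p = next_word_distribution(["hello", "how", "can"])
print({w: round(float(q), 3) for w, q in zip(VOCAB, p)})

In a trained system the recurrent network and the expert models would be learned jointly, so the recurrent state can shift probability mass toward whichever expert language model best fits the current point in the dialog.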
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11681932B2 (en) * | 2016-06-21 | 2023-06-20 | International Business Machines Corporation | Cognitive question answering pipeline calibrating |
US20210034817A1 (en) * | 2018-04-13 | 2021-02-04 | National Institute Of Information And Communications Technology | Request paraphrasing system, request paraphrasing model and request determining model training method, and dialogue system |
US11861307B2 (en) * | 2018-04-13 | 2024-01-02 | National Institute Of Information And Communications Technology | Request paraphrasing system, request paraphrasing model and request determining model training method, and dialogue system |
US11947604B2 (en) | 2020-03-17 | 2024-04-02 | International Business Machines Corporation | Ranking of messages in dialogs using fixed point operations |
Also Published As
Publication number | Publication date |
---|---|
US20170316775A1 (en) | 2017-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10431205B2 (en) | Dialog device with dialog support generated using a mixture of language models combined using a recurrent neural network | |
US20220172707A1 (en) | Speech recognition method and apparatus, device, and storage medium | |
US10740564B2 (en) | Dialog generation method, apparatus, and device, and storage medium | |
US9473637B1 (en) | Learning generation templates from dialog transcripts | |
US11355097B2 (en) | Sample-efficient adaptive text-to-speech | |
US10750018B2 (en) | Modeling voice calls to improve an outcome of a call between a representative and a customer | |
US9053096B2 (en) | Language translation based on speaker-related information | |
US20200042613A1 (en) | Processing an incomplete message with a neural network to generate suggested messages | |
US8811638B2 (en) | Audible assistance | |
US20140081643A1 (en) | System and method for determining expertise through speech analytics | |
US20130144619A1 (en) | Enhanced voice conferencing | |
CN111933115A (en) | Speech recognition method, apparatus, device and storage medium | |
CN107430616A | Interactive re-formulation of voice queries |
US11880666B2 (en) | Generating conversation descriptions using neural networks | |
CN112818109B (en) | Intelligent reply method, medium, device and computing equipment for mail | |
CN113140138A (en) | Interactive teaching method, device, storage medium and electronic equipment | |
CN113782022B (en) | Communication method, device, equipment and storage medium based on intention recognition model | |
CN116016779A (en) | Voice call translation assisting method, system, computer equipment and storage medium | |
US11790302B2 (en) | System and method for calculating a score for a chain of interactions in a call center | |
US20230130777A1 (en) | Method and system for generating voice in an ongoing call session based on artificial intelligent techniques | |
CN113111658B (en) | Method, device, equipment and storage medium for checking information | |
US20240346230A1 (en) | Call Tagging Using Machine Learning Model | |
CN113887554A (en) | Method and device for processing feedback words | |
CN112969000A (en) | Control method and device of network conference, electronic equipment and storage medium | |
US20240346232A1 (en) | Dynamic construction of large language model prompts |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: XEROX CORPORATION, CONNECTICUT
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LE, PHONG;DYMETMAN, MARC;RENDERS, JEAN-MICHEL;REEL/FRAME:038395/0441
Effective date: 20160427 |
|
AS | Assignment |
Owner name: CONDUENT BUSINESS SERVICES, LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:041542/0022
Effective date: 20170112 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK
Free format text: SECURITY AGREEMENT;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:052189/0698
Effective date: 20200318 |
|
AS | Assignment |
Owner name: CONDUENT HEALTH ASSESSMENTS, LLC, NEW JERSEY
Owner name: CONDUENT CASUALTY CLAIMS SOLUTIONS, LLC, NEW JERSEY
Owner name: CONDUENT BUSINESS SOLUTIONS, LLC, NEW JERSEY
Owner name: CONDUENT COMMERCIAL SOLUTIONS, LLC, NEW JERSEY
Owner name: ADVECTIS, INC., GEORGIA
Owner name: CONDUENT TRANSPORT SOLUTIONS, INC., NEW JERSEY
Owner name: CONDUENT STATE & LOCAL SOLUTIONS, INC., NEW JERSEY
Owner name: CONDUENT BUSINESS SERVICES, LLC, NEW JERSEY
Free format text (all owners above): RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180
Effective date: 20211015 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., NORTH CAROLINA
Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057970/0001
Effective date: 20211015
Owner name: U.S. BANK, NATIONAL ASSOCIATION, CONNECTICUT
Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057969/0445
Effective date: 20211015 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |