US20080195388A1 - Context based word prediction - Google Patents
- Publication number
- US20080195388A1 (U.S. patent application Ser. No. 11/704,381)
- Authority
- US
- United States
- Prior art keywords
- document
- words
- context
- data source
- text input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/274—Converting codes to words; Guess-ahead of partial word inputs
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L15/19—Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/219—Managing data history or versioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/93—Document management systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
Definitions
- Typing or otherwise entering information into a computing device can be cumbersome and time consuming where each individual word must be typed in its entirety or handwritten in its entirety in the case of electronic handwriting input methods or spoken accurately in the case of speech recognition input methods.
- Typing information on small mobile devices can be particularly difficult due to the decreased size or form factor of the mobile device and associated keyboard.
- With mobile devices, some type of modified typing method is often required, for example thumb typing on a very small keyboard, or entering text via a twelve-key keypad.
- the prediction user interface may show a number of unhelpful words, such as “three,” “thread,” and the like because the words are being retrieved from a non-contextual source such as a dictionary.
- other words such as names and technical terms are not likely to be included in an available input prediction dictionary, and thus, these words and terms will not be predicted at all. For example, if the user desires to type a person's name, for example, “Alexandro Giordano,” the user may be required to type each and every character making up the name because such a name is not likely to be included in an input prediction dictionary accessible by the input method in use.
- Embodiments of the present invention solve the above and other problems by providing context-based word prediction.
- a software application utilizes words contained in an application document to provide context-based prediction in a related document.
- an electronic mail application may utilize words contained in a received electronic mail message to provide word prediction during the preparation of a reply message to the received message.
- the software application creates an application defined data source and populates the data source with words occurring in a document.
- a prediction engine presents candidate words to the user as the user enters characters of words, and the user may choose from the presented candidate words for automatic population into the document.
- the prediction engine retrieves candidate words from the context-based application defined data source and, if available, from one or more existing sources of words, for example, electronic dictionaries.
- words from the context-based application data source may be ranked higher over words from the one or more existing sources.
- information from the application defined data source may be transferred between computing devices, for example, between a mobile computing device and a desktop (non-mobile) computing device.
- FIG. 1 is a simplified block diagram illustrating a system architecture of a context-based word prediction system.
- FIG. 2A is a logical flow diagram illustrating a method for providing context-based word prediction.
- FIG. 2B is a simplified block diagram illustrating a mobile computing device with which context-based word prediction is employed.
- FIG. 3 is a state diagram and operational flow illustrating a method for providing context-based word prediction via an electronic messaging application.
- FIG. 4 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a software application.
- FIG. 5 is a state diagram and operational flow illustrating a method for providing context-based word prediction from text retrieved from an application document.
- FIG. 6 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a speech or voice recognition input method.
- FIG. 7 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a handwriting recognition input method.
- FIG. 8 is a logical flow diagram illustrating a method for utilizing context-based word prediction information on one computing device that was generated on another computing device.
- FIG. 9 illustrates an exemplary computing operating environment in which embodiments of the present invention may be practiced.
- FIG. 1 is a simplified block diagram illustrating a system architecture of a context-based word prediction system 100 .
- a user 110 utilizes an input method 115 for entry of text or data into a document, for example, an electronic mail message, a word processing document, a spreadsheet application document, a slide presentation application document, an electronic handwriting application document, and the like.
- the input method 115 is illustrative of an input method editor (IME) which is used for typing or otherwise entering text or data input.
- Other suitable input methods 115 include handwriting recognition engines, handwriting text input panels, voice or speech recognition engines, and the like.
- any input method that benefits from a lexicon data source may benefit from words stored in an application defined data source as is described herein.
- an application 170 is illustrative of any software application with which a user 110 may enter and edit text or data using the input method 115, for example, a word processing application, an electronic mail application, a spreadsheet application, a slide presentation application, an electronic handwriting application, and the like.
- the text framework 120 includes a prediction engine 125 and data sources 130 for providing word prediction during the entry of text or data via the input method 115 .
- the prediction engine 125 is a software application module operative to retrieve words from one or more data sources in response to text or data character entry received via the input method 115 .
- the prediction engine 125 is operative to retrieve words from the data sources 130 and for passing the retrieved words back to the input method 115 for presentation to the user 110 in response to text or data characters entered by the user 110 via the input method 115 .
- the data sources 130 may include an application defined data source (ADDS) 150 and may include one or more other existing text prediction data sources 135 .
- the software application 170 parses a received or previously prepared document, for example, a received electronic mail message, a previously prepared word processing document, a previously prepared slide presentation document, and the like and stores words parsed from the received or previously prepared document in the application defined data store 160 .
- An application defined candidate provider 155 serves as an interface between the application defined data store 160 and the prediction engine 125 .
- the input method 115 may parse words from a document for storage in an ADDS created by the input method 115 for subsequent use in a candidate list of predicted words.
- Words stored in the application defined data store 160 may be ranked according to their relevance to each other and according to the probability or likelihood that they will be utilized by the prediction engine 125 for presentation in a candidate word list. For example, statistical weighting may be applied to words based on relationships between words, such as whether a particular word is traditionally a noun followed by a verb. Such ranking analysis is useful in determining a likelihood or probability that a given word is more likely a desired word for completing a text entry, and thus, for inclusion in a candidate word list provided by the context-based word prediction system described herein.
- If the words “thesaurus” and “the” are parsed by the application 170 and are placed in the application defined data source 150 for subsequent use by the prediction engine 125, the word “thesaurus” likely will receive a higher ranking than the word “the” so that if a user subsequently begins typing the characters “th” via the input method 115, the prediction engine may present the word “thesaurus” before the presentation of the word “the” based on the probability that the user 110 will require input assistance for the word “thesaurus” before requiring assistance with the input of the word “the.” Algorithms for ranking words for presentation by a text/word prediction engine are well known to those skilled in the art and need not be discussed in detail herein.
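- For illustration only, the following minimal Python sketch applies one possible ranking of parsed words: common short words are demoted and longer words (which save more keystrokes) are promoted. The patent does not prescribe a particular algorithm, and all function names here are hypothetical.

```python
# Illustrative sketch only: a toy ranking that favors longer, less common
# words (e.g. "thesaurus") over very common short words (e.g. "the").
# All names are hypothetical, not part of the described system.

COMMON_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}

def rank_candidates(words, prefix):
    """Return words starting with `prefix`, most useful first."""
    matches = [w for w in words if w.lower().startswith(prefix.lower())]

    def score(word):
        # Common stop words get a penalty; longer words score higher because
        # completing them saves the user more keystrokes.
        penalty = -10 if word.lower() in COMMON_WORDS else 0
        return penalty + len(word)

    return sorted(matches, key=score, reverse=True)

if __name__ == "__main__":
    parsed = ["thesaurus", "the", "them", "thanks", "check"]
    print(rank_candidates(parsed, "th"))  # ['thesaurus', 'thanks', 'them', 'the']
```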
- the application defined data source 150 is created by the application 170 for each pre-existing or received document on a case-by-case basis.
- the application 170 may create an application defined data source 150 for each received electronic mail message for which a reply message is generated by the user 110 via the input method 115 .
- the existing text prediction data sources 135 are representative of pre-existing data sources, for example dictionaries, previously assembled collections of words entered by the user 110 , contacts databases, technical terms databases and the like.
- the static word dictionary 145 is illustrative of a repository for containing such previously stored or assembled words.
- the static word provider 140 is illustrative of an interface between the static word dictionary database 145 and the prediction engine 125 .
- the words parsed from the related document and stored in the application defined data source may be utilized by the prediction engine 125 before the prediction engine 125 utilizes words from the existing text prediction data sources for presentation to the user via the input method 115 because the words contained in the application defined data source are more likely to be the words being entered by the user in a document related to the received or pre-existing document.
- the input method 115 calls into the prediction engine 125 as each character is entered in order to get word prediction candidates that match the current user input.
- the prediction engine 125 retrieves word results from the application defined data source 150 (which is populated with words from a related document) and from existing lexicon data sources contained in the existing text prediction data sources, including statistical word information, input history, etc.
- the word candidates are then returned by the prediction engine 125 to the input method 115 where they are displayed to the user as word prediction results in a word candidate list.
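- The per-keystroke flow just described can be sketched as follows. This is a hedged illustration rather than the patent's or any shipping text framework's API; the class names (PredictionEngine, StaticWordProvider, ApplicationDefinedCandidateProvider) and method signatures are assumptions.

```python
# Hedged sketch of the per-keystroke flow: the input method asks a prediction
# engine for candidates, which merges context words (ADDS) with a static
# dictionary, listing the context-based words first.

class StaticWordProvider:
    """Stands in for an existing lexicon, e.g. a general dictionary."""
    def __init__(self, dictionary_words):
        self.words = sorted(set(dictionary_words))

    def candidates(self, prefix):
        return [w for w in self.words if w.startswith(prefix)]

class ApplicationDefinedCandidateProvider:
    """Stands in for the ADDS interface populated from a related document."""
    def __init__(self, document_words):
        self.words = sorted(set(document_words))

    def candidates(self, prefix):
        return [w for w in self.words if w.startswith(prefix)]

class PredictionEngine:
    def __init__(self, adds_provider, static_provider):
        self.adds_provider = adds_provider
        self.static_provider = static_provider

    def get_candidates(self, typed_prefix, limit=5):
        # Context-based words come first, mirroring the higher ranking the
        # description gives to the application defined data source.
        seen, merged = set(), []
        for word in (self.adds_provider.candidates(typed_prefix)
                     + self.static_provider.candidates(typed_prefix)):
            if word not in seen:
                seen.add(word)
                merged.append(word)
        return merged[:limit]

engine = PredictionEngine(
    ApplicationDefinedCandidateProvider(["finish", "Karazaki", "security"]),
    StaticWordProvider(["finance", "find", "finish", "first"]),
)
print(engine.get_candidates("fi"))  # ['finish', 'finance', 'find', 'first']
```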
- FIG. 2A is a logical flow diagram illustrating a method for providing context-based word prediction.
- the context-based word prediction routine 200 begins at start operation 205 and proceeds to operation 210 where an application 170 , for example, an electronic mail application, a word processing application, a slide presentation application, a spreadsheet application, and the like creates and populates an application defined data source (ADDS) based on a received or previously created document.
- a description of the preparation of an application defined data source for a received electronic mail message is described below with reference to FIG. 3
- a description of the creation of an application defined data source for a previously prepared document is described below with reference to FIG. 4 .
- the application 170 creates an instance of the application defined data source 150 including an instance of the application defined data store 160 and the application defined candidate provider 155 .
- Text or data contained in the received or previously created document are parsed by the application 170 , and individual words making up the parsed document are populated in the application defined data store 160 .
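- A minimal sketch of this populate step, assuming plain word tokenization and occurrence counting (the patent does not specify a tokenizer or storage format, and the sample message text is hypothetical):

```python
# Minimal sketch: parse the text of a received or previously created document
# into individual words and count their occurrences.

import re
from collections import Counter

def populate_data_store(document_text):
    """Tokenize a document and return a word -> occurrence-count store."""
    words = re.findall(r"[A-Za-z][A-Za-z'-]*", document_text)
    return Counter(words)

store = populate_data_store(
    "Thanks Alexandro. Please finish the Mathematics Models and check them "
    "into the security site."
)
print(store["the"], store["security"])  # 2 1
```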
- the candidate provider is operative to interpret an internal data format of the ADDS and extract words and probability information for extracted words based on a given text input from the user.
- the candidate provider may make complex determinations of the probability that one or more words in the ADDS match a given input based on various properties, such as how the words in the ADDS relate to each other and to the input in a given language model, or the candidate provider may simply return all words from the ADDS that start with one or more text characters of a given text input in alphabetical order.
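- Building on word counts like those above, the two provider behaviors mentioned (plain alphabetical prefix matches versus matches with attached probability estimates) could look like the following sketch; the interface is an assumption:

```python
# Sketch of a candidate provider offering either simple alphabetical prefix
# matches or matches weighted by a rough probability estimate.

from collections import Counter

class SimpleCandidateProvider:
    def __init__(self, word_counts: Counter):
        self.word_counts = word_counts

    def alphabetical_matches(self, prefix):
        return sorted(w for w in self.word_counts
                      if w.lower().startswith(prefix.lower()))

    def weighted_matches(self, prefix):
        matches = [w for w in self.word_counts
                   if w.lower().startswith(prefix.lower())]
        total = sum(self.word_counts[w] for w in matches) or 1
        # "Probability" here is just the word's share of matching occurrences.
        return sorted(((w, self.word_counts[w] / total) for w in matches),
                      key=lambda pair: pair[1], reverse=True)

provider = SimpleCandidateProvider(Counter({"check": 3, "Karazaki": 1, "checked": 1}))
print(provider.alphabetical_matches("ch"))  # ['check', 'checked']
print(provider.weighted_matches("ch"))      # [('check', 0.75), ('checked', 0.25)]
```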
- a text input is received via an input method 115 , for example, a typing input method editor, a speech recognition engine, an electronic handwriting recognition engine, and the like.
- the input method 115 calls the prediction engine 125 as each character is received for the text or data input.
- the prediction engine 125 retrieves words from the application defined candidate provider of the application defined data source responsive to each successive character entry.
- the prediction engine 125 similarly retrieves words from the existing text prediction data sources 135 responsive to each character entry.
- words from the application defined data source and words from the existing text prediction data sources retrieved by the prediction engine 125 are returned to the input method 115, and at operation 240, the words returned to the input method 115 by the prediction engine 125 are displayed for selection by the user 110 for completing a word being entered via the input method 115.
- the context-based word prediction routine 200 ends at operation 295 .
- the prediction engine 125 is able to present words via the input method 115 that may be more contextually relevant to text or data being entered by the user 110 than are words contained in the existing text prediction data sources 135 . For example, consider that the following example electronic mail message is received by the user 110 via an electronic mail message application 170 .
- the application 170 parses the received electronic mail message (Example 1) and stores words contained in the received electronic mail message in the application defined data store 160 of the application defined data source 150 for subsequent display via the input method 115 when the user 110 is preparing the responsive reply electronic mail message (Example 2).
- a number of words contained in the desired responsive electronic mail message are contained in the originally received electronic mail message. Words entered into the responsive electronic mail message that also occurred in the originally received electronic mail message are underlined in the example electronic mail message for emphasis only.
- a number of words are repeated from the originally received electronic mail message. For example, the words “thanks,” “Alexandro,” “finish,” “Karazaki,” “Mathematics,” “Models,” “check,” “them,” “into,” “the,” “security,” “site,” and “James” all appear in the responsive electronic mail message. While some of these repeating words, for example “check,” “finish,” “them,” and “into,” may be stored in existing text prediction data sources, for example, a dictionary, many do not or will not have a high enough probability in the existing text prediction data sources to be presented to the user 110 via the input method 115 without use of the application defined data source 150 after character entry by the user 110.
- Because the word “check” is placed in the application defined data source as being contextually relevant to the document being created, for example, a responsive electronic mail message to an originally received electronic mail message containing the word “check,” the word may be presented at or near the top of a list of word prediction candidates presented to the user 110 via the input method 115 to allow the user to quickly select from the word prediction candidates for completing a word being entered by the user.
- Other more complex or less common words, for example names such as “Alexandro” or “Karazaki,” may not be presented at all without the use of the context-based application defined data source 150, where these words are given a greater probability of subsequent entry by their inclusion in the application defined data source.
- Without such context-based prediction, the user would be forced to type or otherwise enter each and every character of the desired words.
- context-based word prediction may be utilized for phrase prediction and completion.
- the phrase of words may be stored in the application defined data source, and the prediction engine may be utilized for offering a phrase of words in the candidate list for use in automatically completing the entry of the phrase when the entry of the phrase is subsequently commenced.
- the phrase may be stored in the application defined data source, as described above. Subsequently, if a user enters the character “s,” in addition to the word “software” being provided in a candidate list, the phrase “software developer” may also be offered in the candidate list for possible selection by the user.
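- As a sketch of this phrase-completion behavior, assuming phrases are simply stored alongside single words and matched on their first word (one of several possible designs):

```python
# Hedged sketch: offer both single-word and multi-word phrase candidates for a
# typed prefix. Data structures and names are assumptions.

def phrase_candidates(prefix, single_words, phrases):
    """Return single-word and phrase candidates for a typed prefix."""
    prefix = prefix.lower()
    words = [w for w in single_words if w.lower().startswith(prefix)]
    phrase_hits = [p for p in phrases if p.split()[0].lower().startswith(prefix)]
    return words + phrase_hits

print(phrase_candidates("s",
                        single_words=["software", "security", "site"],
                        phrases=["software developer"]))
# ['software', 'security', 'site', 'software developer']
```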
- FIG. 2B is a simplified block diagram illustrating a mobile computing device with which context-based word prediction may be employed.
- the mobile computing device 250 includes a text display area 252 and a keyboard area 255 with which text or data may be entered into the text display area.
- the keyboard 255 is representative of one type of input method 115 , described above with reference to FIG. 1 . Other types of input methods are equally applicable to embodiments described herein.
- the keyboard 255 may take the form of an electronic handwriting recognition engine and stylus for writing.
- the keyboard 255 may take the form of a speech recognition engine and microphone for receiving audible speech.
- a text string “My team has f” has been entered by the user.
- the last character entered by the user is the character “f,” and the cursor 262 is in position for entry of a second character.
- the text string 260 being entered by the user is a reply electronic message to a previously received electronic message which contains the word “finish,” as described above for the example electronic mail message (Example 1). Because the word “finish” has been stored in the application defined data source 150, when the user enters the character “f,” a word candidate list 265 is automatically generated by the input method 115 and is populated with words retrieved by the prediction engine 125 from the application defined data source 150 and from one or more existing text prediction data sources 135, as described above.
- the word “finish” 270 is illustrated in the word candidate list 265 . If the user desires to complete the word presently being entered with the word “finish,” the user may select the desired word from the word candidate list, and the word presently being entered will automatically be completed with the selected word.
- FIG. 3 is a state diagram and operational flow illustrating a method for providing context-based word prediction via an electronic messaging application.
- the operational flow and components illustrated in FIG. 3 provide further detail with respect to operation of embodiments of the present invention in connection with an electronic mail messaging application.
- the operational flow begins at operation 310 when a user 110 of the electronic mail messaging application 170 starts a reply action by attempting to reply to a previously received electronic mail message (see Example 1 above).
- the messaging application 170 creates a new reply window in which the user may type or otherwise enter a reply electronic mail message (see Example 2 above).
- the messaging application 170 creates a new application defined data source (ADDS) 150 for the reply message being entered by the user.
- creation of the new application defined data source includes creating an instance of the application defined data store 160 and an instance of the application defined candidate provider 155 .
- the messaging application 170 parses the originally received electronic mail message and populates the application defined data source with reply text data in the form of words parsed from the received electronic mail message, making the reply text data available to the prediction engine 125.
- the newly created application defined data source 150 is enabled for use by the prediction engine 125 .
- the user 110 begins typing or otherwise entering text or data into the reply message window in response to the received electronic mail message.
- the input method 115 intercepts each character of entered data on a character-by-character basis at operation 345 .
- the input method 115 calls the prediction engine 125 to obtain prediction results responsive to the entered text or data character.
- the prediction engine 125 obtains words from both the application defined data source 150 via the application defined candidate provider 155 and from the existing text prediction data sources 135 via the static word provider 140 at operation 360 .
- words provided from both the application defined data source and the existing text prediction data sources may be ranked according to one or more ranking algorithms for ultimate display to the user at operation 355 via the input method 115 .
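- The reply flow of FIG. 3 could be wired roughly as in the sketch below; the class and function names are assumed for illustration and are not an actual messaging-application API.

```python
# Illustrative wiring of the reply flow: on reply, create an ADDS, populate it
# from the received message, and enable it for the prediction engine.

import re

class ApplicationDefinedDataSource:
    """Toy stand-in for the ADDS created for a single reply message."""
    def __init__(self):
        self.words = set()
        self.enabled = False

    def populate_from_text(self, text):
        self.words.update(re.findall(r"[A-Za-z][A-Za-z'-]*", text))

def start_reply(received_message_body):
    """Create, populate, and enable an ADDS when a reply window is opened."""
    adds = ApplicationDefinedDataSource()           # new ADDS for this reply
    adds.populate_from_text(received_message_body)  # parse the original message
    adds.enabled = True                             # expose it to the prediction engine
    return adds

adds = start_reply("Please finish the Karazaki Mathematics Models and check "
                   "them into the security site. Thanks, James")
print("Karazaki" in adds.words, adds.enabled)  # True True
```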
- the context-based word prediction system of the present invention allows for the presentation of candidate words to the user that are contextually relevant to the document being created or edited that otherwise would not be presented or would be presented at a much lower ranking if the prediction engine 125 could only access the existing text prediction data sources and not the application defined data source.
- FIG. 4 is a state diagram and operational flow illustrating a method for providing context-based word prediction via another type of software application 170 , for example, a word processing application, a spreadsheet application, a slide presentation, and the like, with which documents may be produced containing text or data that may be used for building an application defined data source 150 for predicting words in related documents.
- For example, at operation 410 the user may open a word processing document such as a letter or memorandum.
- the application 170 parses the opened document for words that may be used to create an application defined data source 150 for subsequent use by the prediction engine 125 for generating context-based word prediction.
- the application 170 creates the application defined data source 150 in the same manner as described above with reference to FIG. 3 , and at operation 435 , in conjunction with the prediction engine 125 , the application 170 populates the application defined data source with the words parsed from the opened document.
- the user begins inputting text or data into the open document, and at operation 445 , the input method 115 intercepts the input and proceeds to obtain and display a candidate list of predicted words in the same manner as described above for the electronic messaging application.
- opening a document at operation 410 may include opening the same document for which an application defined data source previously has been generated by the application 170 . That is, according to this embodiment, when a document is generated using the application 170 , an application defined data source may be created and stored for subsequent use in editing or adding/deleting text in the same document. Alternatively, the application defined data source may be generated dynamically as the document is being generated in the first instance. For example, if a first line of text entered into a new document includes the word “modification,” an application defined data source may be dynamically updated to include the word “modification” in the data store 160 . Thus, if in a subsequent sentence or paragraph, the user types the character “m,” the prediction engine may fetch words beginning with the character “m” including the word “modification” dynamically added to the application defined data source during the present editing session of the document.
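- A minimal sketch of this dynamic-update behavior, assuming the data store is simply notified each time text is committed to the document (names are hypothetical):

```python
# Sketch: words committed to a new document are added to the data store so
# they can be predicted later in the same editing session.

import re

class DynamicDataStore:
    def __init__(self):
        self.words = set()

    def on_text_committed(self, committed_text):
        """Called whenever new text is committed to the document."""
        self.words.update(re.findall(r"[A-Za-z][A-Za-z'-]*", committed_text))

    def candidates(self, prefix):
        return sorted(w for w in self.words if w.lower().startswith(prefix.lower()))

store = DynamicDataStore()
store.on_text_committed("The modification is complete.")
print(store.candidates("m"))  # ['modification']
```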
- an application defined data source created for a first document may be related to a second document during creation of the second document or editing of the second document.
- a user may be provided an opportunity to browse to or link to a first document for which an application defined data source has been created which may assist the user in generating or editing the second document.
- the user may associate a related letter document with the previously generated memorandum document so that the application defined data source created for the memorandum document will be available for generating and editing the letter document.
- FIG. 5 illustrates an alternative method for creating an application defined data source (ADDS) from an application document.
- a user opens a document generated by an application 170 , and at operation 515 , the input method 115 retrieves text from the application document, parses the text, and creates an application defined data source at operation 520 .
- the prediction engine 125 populates the application defined data source with words parsed from the opened document, and at operation 530, the prediction engine 125 enables the application defined data source for subsequent use in providing predicted words.
- the input method 115 intercepts the text input at operation 540 .
- the input method 115 obtains prediction results from the prediction engine 125 by obtaining possible candidates from the various data sources 130 , including the created ADDS for this document.
- a candidate list of predicted words is displayed for the user, as described above.
- this embodiment of the present invention allows for creation of an application defined data source from an opened application document without requiring modifications to existing applications.
- the input method 115 may include a number of different types of input methods, for example, typing, electronic handwriting, speech recognition, etc. If the input method 115 is a speech recognition engine, the accuracy of the speech recognition engine in understanding spoken words of a given user may be improved using the context-based word prediction system of the present invention. If a word or phrase is spoken into a speech recognition input method 115 by a user, and the user selects a word from a candidate list provided by the context-based word prediction system 100 for correction of or completion of a word spoken into the speech recognition input method, then the speech recognition input method accuracy will be improved because the input method will learn how to interpret the spoken words of the user with improved accuracy.
- FIG. 6 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a speech or voice recognition input method.
- a user opens a document to be created or edited using a speech or voice recognition engine as the input method 115 .
- the opened document is parsed for relevant words from the document, and at operation 620 , the application creates a new application defined data source for the opened document.
- the prediction engine 125 populates the created application defined data source with words parsed from the opened document.
- the ADDS is enabled for use.
- the user begins speaking into the microphone of the voice or speech recognition input method 115 .
- the input method 115 obtains words from the ADDS that are responsive to the input received at operation 635 .
- the input method 115 may obtain the words from the ADDS via a variety of interfaces operative to allow the retrieval of the stored words.
- the words obtained from the ADDS are added to a speech language model operated by the voice or speech recognition input method.
- the input method 115 listens to the voice or speech input received from the user via the microphone of the input method 115 .
- the input method 115 performs voice recognition on the spoken words, and at operation 665 , the input method 115 displays recognized words including words retrieved from the ADDS responsive to the voice recognition performed on the spoken input from the user.
- the displayed recognized words may be displayed in a candidate list of predicted words from which the user may select for completion of one or more spoken words.
- words retrieved from the ADDS may not be displayed in a candidate list, and the words may instead be used by the voice/speech recognition input method to improve its accuracy by having a greater number of words from which to choose for completing a voice/speech input.
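- One way to picture feeding ADDS words into a speech language model is the toy sketch below, which boosts the weight of document words so that an acoustically ambiguous hypothesis resolves toward the contextual word. Real speech engines expose their own vocabulary and grammar interfaces; this stand-in model and its names are assumptions.

```python
# Toy language model: boosting the weight of words parsed from the document so
# that, between two similar hypotheses, the context word wins.

class ToyLanguageModel:
    def __init__(self, base_weights):
        self.weights = dict(base_weights)

    def add_context_words(self, words, boost=5.0):
        # Words from the ADDS get a higher weight than the base lexicon.
        for w in words:
            self.weights[w] = self.weights.get(w, 1.0) + boost

    def pick(self, hypotheses):
        """Choose the hypothesis the model currently weights highest."""
        return max(hypotheses, key=lambda w: self.weights.get(w, 1.0))

lm = ToyLanguageModel({"cheque": 2.0, "check": 2.0})
lm.add_context_words(["check", "Karazaki"])  # words parsed from the document
print(lm.pick(["cheque", "check"]))          # check
```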
- the context-based word prediction system 100 may be utilized for predicting words or data, and for improving the accuracy of the handwriting recognition engine.
- the handwritten character may be utilized by the prediction engine 125 for retrieving matching words from the application defined data source 150 and from existing text prediction data sources 135 .
- the context-based word prediction system may be used for matching handwriting strokes to words or data in the data sources 130 . For example, if an electronic handwriting stroke is recognized as a stroke that can only belong to the character “A,” then the electronic handwriting input method 115 may pass the character “A” to the prediction engine for retrieving words or data beginning with the character “A.”
- the accuracy of the handwriting input method may be improved for a given user utilizing the context-based word prediction system described herein. For example, if the user handwrites a word not found in the data sources 130 , then the user will be required to correct the results of the handwritten word if the handwriting input method 115 incorrectly interprets the word written by the user.
- the accuracy of the handwriting input method 115 will be improved because the input method will be able to match the handwritten character or text received by the user to the correct word selected from the candidate list so that the handwriting input method will learn how to more accurately recognize the user's personal style of handwriting the subject word.
- Accuracy of the handwriting input method 115 may likewise be improved on a stroke-by-stroke basis.
- the handwriting input method accuracy will be improved where the handwriting input method will now be able to more accurately interpret the electronic handwriting strokes entered by the user.
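- The stroke-level idea can be sketched as a filter that keeps only candidate words consistent with the character(s) a stroke could belong to; the function and data shapes below are assumptions for illustration.

```python
# Sketch: once a stroke narrows the next character to a small set, keep only
# candidate words consistent with that set.

def constrain_by_character(candidates, position, possible_chars):
    """Keep words whose character at `position` is in `possible_chars`."""
    allowed = {c.lower() for c in possible_chars}
    return [w for w in candidates
            if len(w) > position and w[position].lower() in allowed]

words = ["Alexandro", "agenda", "check", "Karazaki"]
# A first stroke recognized as belonging only to the character "A":
print(constrain_by_character(words, 0, {"A"}))  # ['Alexandro', 'agenda']
```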
- FIG. 7 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a handwriting recognition input method.
- a document is opened with application 170 for creation or text input using a handwriting recognition input method 115 .
- the application 170 parses relevant words from the opened document, and at operation 720 , the application 170 creates a new application defined data source.
- the prediction engine 125 populates the application defined data source with words parsed from the opened document, and at operation 730 , the prediction engine 125 enables the ADDS for subsequent use.
- the user begins to edit or add to the open document using the handwriting recognition input method 115 .
- the handwriting recognition input method 115 obtains words from the enabled ADDS that are responsive to the input received at operation 735 .
- the input method 115 may obtain the words from the ADDS via a variety of interfaces operative to allow the retrieval of the stored words.
- the obtained words are added to a handwriting language model operated by the handwriting recognition input method 115 .
- the input method 115 intercepts hand written words or other text entered by the user at operation 735 .
- the input method 115 performs handwriting recognition on the handwritten input.
- the input method 115 displays a candidate list of words responsive to the handwriting recognition, including words obtained from the enabled ADDS that are responsive to the handwriting recognition applied to the handwritten input.
- words retrieved from the ADDS may not be displayed in a candidate list, and the words may instead be used by the handwriting recognition input method to improve its accuracy by having a greater number of words from which to choose for completing a handwriting input.
- the context-based word prediction system 100 illustrated in FIG. 1 may reside and operate on a single computing device, for example, a mobile computing device, a desktop computing device, a server-based computing device, and the like.
- information created and stored in the application defined data source of one computing device may be transferred to a second computing device for use in preparation of a document on the second device.
- a user may be utilizing the context-based word prediction system with the preparation of electronic mail messages on the user's desktop computing device in the user's office or home. Subsequently if the user begins traveling and desires to prepare documents, for example, electronic mail messages, on a mobile computing device, it is advantageous to allow for a transfer of the application defined data source from the user's desktop computing device to the user's mobile computing device.
- FIG. 8 is a logical flow diagram illustrating a method for utilizing distributed context-based word prediction information between different computing devices.
- the application defined data source transfer routine 800 illustrated in FIG. 8 , begins at start operation 805 and proceeds to operation 810 where a synchronization session between a first computing device and a second computing device is initiated.
- the synchronization session initiated at operation 810 synchronizes data between a first computing device, for example, a desktop computing device or server with a second computing device, for example, a separate desktop computing device or a mobile computing device.
- data from the first computing device may be readily exchanged with the second computing device.
- An example synchronization session may be provided by the ACTIVESYNC software provided by MICROSOFT CORPORATION of Redmond, Wash. wherein a synchronization session may be provided between a server and a separate computing device, for example, a mobile computing device.
- other types of synchronization session programs which allow a transfer of data from one computing device to a second computing device may be suitable for the embodiments described herein.
- the synchronization session program 802 calls the computing device on which the text prediction data sources 130 are located, including the application defined data source 150 and the existing text prediction data sources 135 , and requests information from those sources for building a similar word prediction dictionary that will be transferred to the second computing device, for example, the mobile computing device 250 .
- the synchronization session program may call the operating system of the first computing device for retrieving the required information, or the call may be placed to the application 170 for retrieving the required information.
- the synchronization session program collects other words that may be used by the second computing device, for example, words that have been added to a spellchecking program, words that have been provided to an autocorrect dictionary program, user-provided words, and the like.
- the synchronization session program compiles the information obtained at operations 815 and 820 into a data format suitable for the computing device to which the data will be transferred.
- the second computing device is notified of the availability of the compiled information when the compiled information is transferred to the second computing device via the synchronization session established between the first computing device and the second computing device.
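- A minimal sketch of compiling prediction data into a transferable payload (operations 815 and 820), assuming JSON as the interchange format since the text does not specify one; the function name is hypothetical.

```python
# Sketch: merge words from the data sources into one serializable payload that
# can be transferred to the second computing device.

import json

def compile_prediction_data(adds_words, extra_words):
    """Merge data-source words into one serializable payload."""
    payload = {
        "application_defined_words": sorted(set(adds_words)),
        "other_words": sorted(set(extra_words)),  # e.g. spell-check additions
        "format_version": 1,
    }
    return json.dumps(payload)

blob = compile_prediction_data(["Karazaki", "finish"], ["autocorrected-term"])
print(json.loads(blob)["application_defined_words"])  # ['Karazaki', 'finish']
```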
- the context-based word prediction system 100 illustrated and described with reference to FIG. 1 should be available on the second computing device.
- the prediction engine 125 resident on the second computing device detects the newly installed data sources that have been compiled and transferred to the second computing device via the synchronization session program.
- the context-based word prediction system 100 is available on the second computing device, as the user utilizes the second computing device for preparing, responding to or otherwise editing or modifying documents on the second computing device, the context-based word prediction system 100 on the second computing device may create additional application defined data sources 150 or may supplement those data sources transferred to the second computing device from the first computing device.
- the user starts inputting a word in a document on the second computing device in the same manner as described above with reference to FIGS. 2 , 3 and 4 .
- the prediction engine 125 on the second computing device predicts words from the data transferred to the second computing device from the first computing device and provides a candidate list of words to the user for automatic completion of text input.
- the data compiled by the synchronization session program may be maintained at the first computing device, and the prediction engine 125 on the second computing device may request the compiled data from the first computing device as each text input is received at the second device. That is, the compiled data remains on the first computing device, and the second computing device may access and retrieve data from the first computing device via the synchronization session where the data has been compiled in a format required by the second computing device.
- Referring now to FIG. 9, the following discussion is intended to provide a brief, general description of a suitable computing environment in which embodiments of the invention may be implemented. While the invention will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer, those skilled in the art will recognize that the invention may also be implemented in combination with other types of computer systems and program modules.
- program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.
- program modules may be located in both local and remote memory storage devices.
- computer 900 comprises a general purpose desktop, laptop, handheld, mobile or other type of computer (computing device) capable of executing one or more application programs.
- the computer 900 includes at least one central processing unit 908 (“CPU”), a system memory 912 , including a random access memory 918 (“RAM”) and a read-only memory (“ROM”) 920 , and a system bus 910 that couples the memory to the CPU 908 .
- a basic input/output system containing the basic routines that help to transfer information between elements within the computer, such as during startup, is stored in the ROM 920 .
- the computer 902 further includes a mass storage device 914 for storing an operating system 932 , application programs, and other program modules.
- the mass storage device 914 is connected to the CPU 908 through a mass storage controller (not shown) connected to the bus 910 .
- the mass storage device 914 and its associated computer-readable media provide non-volatile storage for the computer 900 .
- computer-readable media can be any available media that can be accessed or utilized by the computer 900 .
- Computer-readable media may comprise computer storage media and communication media.
- Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 900 .
- the computer 900 may operate in a networked environment using logical connections to remote computers through a network 904 , such as a local network, the Internet, etc. for example.
- the computer 902 may connect to the network 904 through a network interface unit 916 connected to the bus 910 .
- the network interface unit 916 may also be utilized to connect to other types of networks and remote computing systems.
- the computer 900 may also include an input/output controller 922 for receiving and processing input from a number of other devices, including a keyboard, mouse, etc. (not shown). Similarly, an input/output controller 922 may provide output to a display screen, a printer, or other type of output device.
- a number of program modules and data files may be stored in the mass storage device 914 and RAM 918 of the computer 900 , including an operating system 932 suitable for controlling the operation of a networked personal computer, such as the WINDOWS® operating systems from MICROSOFT CORPORATION of Redmond, Wash.
- the mass storage device 914 and RAM 918 may also store one or more program modules.
- the mass storage device 914 and the RAM 918 may store application programs, such as a software application 924 , for example, a word processing application, a spreadsheet application, etc.
- a context-based word prediction system program 100 is illustrated for performing context-based word prediction as described herein.
- the context-based word prediction system may operate as a standalone application that may be called by a given software application 170 , or the system 100 may be integrated with the programming of a given application 170 .
- the synchronization session program 802 is a software program operative to provide a synchronization session between two or more computing devices as described above with reference to FIG. 8 .
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Document Processing Apparatus (AREA)
Abstract
Description
- Typing or otherwise entering information into a computing device can be cumbersome and time consuming where each individual word must be typed in its entirety or handwritten in its entirety in the case of electronic handwriting input methods or spoken accurately in the case of speech recognition input methods. Typing information on small mobile devices can be particularly difficult due to the decreased size or form factor of the mobile device and associated keyboard. With mobile devices, often some type of modified typing method, for example thumb typing, is required on a very small keyboard, or typing text via a twelve key keypad is required.
- In response to these and other input difficulties, input methods have been developed that provide word prediction or word suggestions as a user types in order to reduce the number of keys that must be pressed. Prior solutions often make use of static dictionaries containing language dictionaries and lists of words that the user had previously entered using the input method. While these solutions may help the user in general text input, the words that are predicted are not always in the context of the current task the user is trying to complete. For example, according to current data input solutions, a word prediction user interface that changes after each key press may be provided, but if a user wants to type a word such as “threat,” the user must type a number of characters, for example, “thre” before the prediction user interface shows the word “threat” desired by the user. And, the prediction user interface may show a number of unhelpful words, such as “three,” “thread,” and the like because the words are being retrieved from a non-contextual source such as a dictionary. Unfortunately, other words such as names and technical terms are not likely to be included in an available input prediction dictionary, and thus, these words and terms will not be predicted at all. For example, if the user desires to type a person's name, for example, “Alexandro Giordano,” the user may be required to type each and every character making up the name because such a name is not likely to be included in an input prediction dictionary accessible by the input method in use.
- It is with respect to these and other considerations that the present invention has been made.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended as an aid in determining the scope of the claimed subject matter.
- Embodiments of the present invention solve the above and other problems by providing context-based word prediction. A software application utilizes words contained in an application document to provide context-based prediction in a related document. For example, an electronic mail application may utilize words contained in a received electronic mail message to provide word prediction during the preparation of a reply message to the received message.
- According to an embodiment, the software application creates an application defined data source and populates the data source with words occurring in a document. When the same or a related document is being edited or created via an input method, for example, typing, speech recognition, electronic handwriting, etc., a prediction engine presents candidate words to the user as the user enters characters of words, and the user may choose from the presented candidate words for automatic population into the document. The prediction engine retrieves candidate words from the context-based application defined data source and, if available, from one or more existing sources of words, for example, electronic dictionaries. According to one embodiment, words from the context-based application data source may be ranked higher over words from the one or more existing sources. According to another embodiment, information from the application defined data source may be transferred between computing devices, for example, between a mobile computing device and a desktop (non-mobile) computing device.
- These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of the invention as claimed.
- FIG. 1 is a simplified block diagram illustrating a system architecture of a context-based word prediction system.
- FIG. 2A is a logical flow diagram illustrating a method for providing context-based word prediction.
- FIG. 2B is a simplified block diagram illustrating a mobile computing device with which context-based word prediction is employed.
- FIG. 3 is a state diagram and operational flow illustrating a method for providing context-based word prediction via an electronic messaging application.
- FIG. 4 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a software application.
- FIG. 5 is a state diagram and operational flow illustrating a method for providing context-based word prediction from text retrieved from an application document.
- FIG. 6 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a speech or voice recognition input method.
- FIG. 7 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a handwriting recognition input method.
- FIG. 8 is a logical flow diagram illustrating a method for utilizing context-based word prediction information on one computing device that was generated on another computing device.
- FIG. 9 illustrates an exemplary computing operating environment in which embodiments of the present invention may be practiced.
- As briefly described above, embodiments of the present invention are directed to context-based word prediction. The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While embodiments of the invention may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the invention, but instead, the proper scope of the invention is defined by the appended claims.
-
FIG. 1 is a simplified block diagram illustrating a system architecture of a context-basedword prediction system 100. As illustrated inFIG. 1 , auser 110 utilizes aninput method 115 for entry of text or data into a document, for example, an electronic mail message, a word processing document, a spreadsheet application document, a slide presentation application document, an electronic handwriting application document, and the like. Theinput method 115 is illustrative of an input method editor (IME) which is used for typing or otherwise entering text or data input. Othersuitable input methods 115 include handwriting recognition engines, handwriting text input panels, voice or speech recognition engines, and the like. As should be appreciated, any input method that benefits from a lexicon data source may benefit from words stored in an application defined data source as is described herein. - Referring still to
FIG. 1 , anapplication 170 is illustrative of any software application with which auser 110 may enter and edit text or data using theinput method 115, for example, a word processing application, and electronic mail application, a spreadsheet application, a slide presentation application, an electronic handwriting application, and the like. - The
text framework 120 includes aprediction engine 125 anddata sources 130 for providing word prediction during the entry of text or data via theinput method 115. As is described in detail below, theprediction engine 125 is a software application module operative to retrieve words from one or more data sources in response to text or data character entry received via theinput method 115. Theprediction engine 125 is operative to retrieve words from thedata sources 130 and for passing the retrieved words back to theinput method 115 for presentation to theuser 110 in response to text or data characters entered by theuser 110 via theinput method 115. - The
data sources 130 may include an application defined data source (ADDS) 150 and may include one or more other existing textprediction data sources 135. According to embodiments, thesoftware application 170 parses a received or previously prepared document, for example, a received electronic mail message, a previously prepared word processing document, a previously prepared slide presentation document, and the like and stores words parsed from the received or previously prepared document in the application defineddata store 160. An application definedcandidate provider 155 serves as an interface between the application defineddata store 160 and theprediction engine 125. Alternatively, as described further below, theinput method 115 may parse words from a document for storage in an ADDS created by theinput method 115 for subsequent use in a candidate list of predicted words. - Words stored in the application defined
data store 160 may be ranked according to their relevance to each other and according to the probability or likelihood that they will be utilized by theprediction engine 125 for presentation in a candidate word list. For example, statistical weighting may be applied to words based on relationships between words, such as whether a particular word is traditionally a noun followed by a verb. Such ranking analysis is useful in determining a likelihood or probability that a given word is more likely a desired word for completing a text entry, and thus, for inclusion in a candidate word list provided by the context-based word prediction system described herein. For example, if the words “thesaurus” and “the” are parsed by theapplication 170 and are placed in the application defineddata source 150 for subsequent use by theprediction engine 125, the word “thesaurus” likely will receive a higher ranking than the word “the” so that if a user subsequently begins typing the characters “th” via theinput method 115, the prediction engine may present the word “thesaurus” before the presentation of the word “the” based on the probability that theuser 110 will require input assistance for the word “thesaurus” before requiring assistance with the input of the word “the.” Algorithms for ranking words for presentation by a text/word prediction engine are well known to those skilled in the art and need not be discussed in detail herein. - According to an embodiment, the application defined
- According to an embodiment, the application defined data source 150 is created by the application 170 for each pre-existing or received document on a case-by-case basis. For example, if the application 170 is an electronic mail application, the application 170 may create an application defined data source 150 for each received electronic mail message for which a reply message is generated by the user 110 via the input method 115. - Referring still to
FIG. 1, the existing text prediction data sources 135 are representative of pre-existing data sources, for example, dictionaries, previously assembled collections of words entered by the user 110, contacts databases, technical terms databases, and the like. The static word dictionary 145 is illustrative of a repository for containing such previously stored or assembled words. The static word provider 140 is illustrative of an interface between the static word dictionary database 145 and the prediction engine 125. - When a user enters text or data via the
input method 115 in the context of a received, pre-existing, or otherwise related document received by or prepared via the application 170, the words parsed from the related document and stored in the application defined data source may be utilized by the prediction engine 125 before it utilizes words from the existing text prediction data sources for presentation to the user via the input method 115, because the words contained in the application defined data source are more likely to be the words being entered by the user in a document related to the received or pre-existing document. - Thus, from the foregoing, when a
user 110 begins inputting text in a document, the input method 115 calls into the prediction engine 125 as each character is entered in order to get word prediction candidates that match the current user input. The prediction engine 125 retrieves word results from the application defined data source 150 (which is populated with words from a related document) and from existing lexicon data sources contained in the existing text prediction data sources, including statistical word information, input history, etc. The word candidates are then returned by the prediction engine 125 to the input method 115, where they are displayed to the user as word prediction results in a word candidate list.
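- To make the character-by-character flow concrete, the short sketch below shows an input method calling a prediction engine after every keystroke and refreshing a candidate list; it assumes the hypothetical PredictionEngine class from the earlier sketch and is illustrative only.

```python
def type_text(engine, keystrokes: str) -> None:
    """Simulate typing: after each character the candidate list is refreshed."""
    current_word = ""
    for character in keystrokes:
        current_word += character
        candidates = engine.predict(current_word)
        print(f"typed {current_word!r:10} -> candidates {candidates}")


# Usage (assuming the hypothetical StaticWordProvider defined earlier):
# engine = PredictionEngine([StaticWordProvider({"check": 0.4, "chair": 0.6})])
# type_text(engine, "ch")
```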
- Having described a system architecture for context-based word prediction, FIG. 2A is a logical flow diagram illustrating a method for providing context-based word prediction. The context-based word prediction routine 200 begins at start operation 205 and proceeds to operation 210, where an application 170, for example, an electronic mail application, a word processing application, a slide presentation application, a spreadsheet application, and the like, creates and populates an application defined data source (ADDS) based on a received or previously created document. The preparation of an application defined data source for a received electronic mail message is described below with reference to FIG. 3, and the creation of an application defined data source for a previously prepared document is described below with reference to FIG. 4. - At
operation 210, the application 170 creates an instance of the application defined data source 150, including an instance of the application defined data store 160 and the application defined candidate provider 155. Text or data contained in the received or previously created document is parsed by the application 170, and the individual words making up the parsed document are populated in the application defined data store 160. According to an embodiment, the candidate provider is operative to interpret an internal data format of the ADDS and extract words and probability information for the extracted words based on a given text input from the user. As is understood by those skilled in the art, the candidate provider may make complex determinations of the probability that one or more words in the ADDS match a given input based on various properties, such as how the words in the ADDS relate to each other and to the input in a given language model, or the candidate provider may simply return, in alphabetical order, all words from the ADDS that start with one or more text characters of a given text input.
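- The simplest candidate provider described above, one that returns every stored word beginning with the typed characters in alphabetical order, could look roughly like the following sketch; the class name and the use of a sorted list with bisect are illustrative choices, not requirements of the disclosure.

```python
import bisect


class AppDefinedCandidateProvider:
    """Returns ADDS words that start with the typed prefix, in alphabetical order."""

    def __init__(self, words: list[str]):
        self.sorted_words = sorted(set(words))

    def get_candidates(self, prefix: str) -> list[tuple[str, float]]:
        start = bisect.bisect_left(self.sorted_words, prefix)
        matches = []
        for word in self.sorted_words[start:]:
            if not word.startswith(prefix):
                break
            matches.append((word, 1.0))  # flat score; ranking may be applied later
        return matches


provider = AppDefinedCandidateProvider(["check", "Karazaki", "security", "site", "chair"])
print(provider.get_candidates("ch"))  # [('chair', 1.0), ('check', 1.0)]
```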
- At operation 215, a text input is received via an input method 115, for example, a typing input method editor, a speech recognition engine, an electronic handwriting recognition engine, and the like. At operation 220, the input method 115 calls the prediction engine 125 as each character is received for the text or data input. At operation 225, the prediction engine 125 retrieves words from the application defined candidate provider of the application defined data source responsive to each successive character entry. At operation 230, the prediction engine 125 similarly retrieves words from the existing text prediction data sources 135 responsive to each character entry. - At
operation 235, words from the application defined data source and words from the existing text prediction data sources retrieved by the prediction engine 125 are returned to the input method 115, and at operation 240, the words returned to the input method 115 by the prediction engine 125 are displayed for selection by the user 110 for completing a word being entered via the input method 115. The context-based word prediction routine 200 ends at operation 295. - As described above, according to embodiments, because the
application 170 parses words from a received or previously created document and stores the parsed words in the application defined data source, the prediction engine 125 is able to present words via the input method 115 that may be more contextually relevant to the text or data being entered by the user 110 than are words contained in the existing text prediction data sources 135. For example, consider that the following example electronic mail message is received by the user 110 via an electronic mail message application 170.
- Example 1
- From: Alexandro Giordano.
- Sent: Wednesday, Nov. 30, 2006, 9:35 p.m.
- To: James Smith
- Subject: Karazaki Mathematics Models
- Jim, As you finish the Karazaki Mathematics Models, please check them into the Karazaki security site. Thanks, Alexandro
- Now consider that the following desired reply message is to be entered by the receiving user.
- Example 2
- From: James Smith
- Sent: Wednesday, Nov. 30, 2006, 10:03 p.m.
- To: Alexandro Giordano
- Subject: RE: Karazaki Mathematics Models
- Alexandro, My team has drafts of our Karazaki Mathematics Models. Should we check them into the security site now, or wait until they are completely finished? Thanks, James
- According to embodiments of the invention, the
application 170 parses the received electronic mail message (Example 1) and stores words contained in the received electronic mail message in the application defined data store 160 of the application defined data source 150 for subsequent display via the input method 115 when the user 110 is preparing the responsive reply electronic mail message (Example 2). As illustrated above, a number of words contained in the desired responsive electronic mail message are contained in the originally received electronic mail message. Words entered into the responsive electronic mail message that also occurred in the originally received electronic mail message are underlined in the example electronic mail message for emphasis only. - Referring to the example responsive electronic mail message (Example 2), a number of words are repeated from the originally received electronic mail message. For example, the words "thanks," "Alexandro," "finish," "Karazaki," "Mathematics," "Models," "check," "them," "into," "the," "security," "site," and "James" all appear in the responsive electronic mail message. While some of these repeating words, for example "check," "finish," "them," and "into," may be stored in an existing text prediction data source, for example, a dictionary, many will not have a high enough probability in the existing text prediction data sources to be presented to the
user 110 via the input method 115 without use of the application defined data source 150 after character entry by the user 110. For example, without use of the application defined data source 150 for these words, when the user begins to type a word such as "check" by entering the characters "ch," other words may be presented to the user 110 from the existing text prediction data sources 135, for example, "chair," "challenge," "chance," and the like, and the desired word "check" may be presented further down a list of potential words extracted from the existing text prediction data sources, or the word "check" may not be presented at all. Because the word "check" is placed in the application defined data source as being contextually relevant to the document being created, for example, a responsive electronic mail message replying to an originally received electronic mail message containing the word "check," the word may be presented at or near the top of a list of word prediction candidates presented to the user 110 via the input method 115, allowing the user to quickly select from the word prediction candidates to complete the word being entered. Other more complex or less common words, for example, names such as "Alexandro" or "Karazaki," may not be presented at all without the use of the context-based application defined data source 150, where these words are given a greater probability of subsequent entry by their inclusion in the application defined data source. Thus, without the presentation of such complex or unique words in a word prediction list to the user 110, the user would be forced to type or otherwise enter each and every character of the desired words. - According to one embodiment, context-based word prediction may be utilized for phrase prediction and completion. For example, if the
application 170 parses the text contained in a document and determines that two or more words comprise a phrase, the phrase may be stored in the application defined data source, and the prediction engine may be utilized to offer the phrase in the candidate list for use in automatically completing the entry of the phrase when its entry is subsequently commenced. For example, if the application 170 determines that the two words "software" and "developer" are used together to create the phrase "software developer," the phrase may be stored in the application defined data source, as described above. Subsequently, if a user enters the character "s," in addition to the word "software" being provided in a candidate list, the phrase "software developer" may also be offered in the candidate list for possible selection by the user.
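- One simple way to approximate the phrase detection described above, shown here only as a sketch under assumed details (a bigram heuristic and invented function names), is to collect adjacent word pairs from the parsed document and offer any stored phrase whose first word matches the typed prefix.

```python
import re


def extract_phrases(text: str) -> set[str]:
    """Collect adjacent word pairs from the document as candidate phrases."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    return {f"{first} {second}" for first, second in zip(words, words[1:])}


def phrase_candidates(prefix: str, phrases: set[str]) -> list[str]:
    """Offer stored phrases whose first word starts with the typed prefix."""
    return sorted(p for p in phrases if p.startswith(prefix.lower()))


phrases = extract_phrases("The software developer updated the software design.")
print(phrase_candidates("s", phrases))
# ['software design', 'software developer'] would be offered alongside 'software'
```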
- FIG. 2B is a simplified block diagram illustrating a mobile computing device with which context-based word prediction may be employed. The mobile computing device 250 includes a text display area 252 and a keyboard area 255 with which text or data may be entered into the text display area. The keyboard 255 is representative of one type of input method 115, described above with reference to FIG. 1. Other types of input methods are equally applicable to embodiments described herein. For example, for another type of computing device 250, the keyboard 255 may take the form of an electronic handwriting recognition engine and stylus for writing. Similarly, the keyboard 255 may take the form of a speech recognition engine and microphone for receiving audible speech. - As illustrated in
FIG. 2B, a text string "My team has f" has been entered by the user. The last character entered by the user is the character "f," and the cursor 262 is in position for entry of a second character. For purposes of example, consider that the text string 260 being entered by the user is a reply electronic message to a previously received electronic message which contains the word "finish," as described above for the example electronic mail message (Example 1). Because the word "finish" has been stored in the application defined data source 150, when the user enters the character "f," a word candidate list 265 is automatically generated by the input method 115 and is populated with words retrieved by the prediction engine 125 from the application defined data source 150 and from one or more existing text prediction data sources 135, as described above. As illustrated in FIG. 2B, the word "finish" 270 is shown in the word candidate list 265. If the user desires to complete the word presently being entered with the word "finish," the user may select the desired word from the word candidate list, and the word presently being entered will automatically be completed with the selected word. -
FIG. 3 is a state diagram and operational flow illustrating a method for providing context-based word prediction via an electronic messaging application. The operational flow and components illustrated in FIG. 3 provide further detail on the operation of embodiments of the present invention with an electronic mail messaging application. Referring to FIG. 3, the electronic mail messaging application 170 begins at operation 310 when a user 110 starts a reply action by attempting to reply to a previously received electronic mail message (see Example 1 above). At operation 315, the messaging application 170 creates a new reply window in which the user may type or otherwise enter a reply electronic mail message (see Example 2 above). - At
operation 320, the messaging application 170 creates a new application defined data source (ADDS) 150 for the reply message being entered by the user. As described above, creation of the new application defined data source includes creating an instance of the application defined data store 160 and an instance of the application defined candidate provider 155. At operation 335, the messaging application 170 parses the originally received electronic mail message and populates the application defined data source with reply text data in the form of words parsed from the received electronic mail message, making the reply text data available to the prediction engine 125. At operation 340, the newly created application defined data source 150 is enabled for use by the prediction engine 125. - Referring back to the
messaging application 170, at operation 330, the user 110 begins typing or otherwise entering text or data into the reply message window in response to the received electronic mail message. As the user begins entering reply text or data, the input method 115 intercepts each character of entered data on a character-by-character basis at operation 345. At operation 350, the input method 115 calls the prediction engine 125 to obtain prediction results responsive to the entered text or data character. At operation 360, the prediction engine 125 obtains words both from the application defined data source 150 via the application defined candidate provider 155 and from the existing text prediction data sources 135 via the static word provider 140. As described above, words provided from both the application defined data source and the existing text prediction data sources may be ranked according to one or more ranking algorithms for ultimate display to the user at operation 355 via the input method 115. As described above with reference to FIGS. 1 and 2, the context-based word prediction system of the present invention allows for the presentation of candidate words that are contextually relevant to the document being created or edited and that otherwise would not be presented, or would be presented at a much lower ranking, if the prediction engine 125 could only access the existing text prediction data sources and not the application defined data source. -
FIG. 4 is a state diagram and operational flow illustrating a method for providing context-based word prediction via another type of software application 170, for example, a word processing application, a spreadsheet application, a slide presentation application, and the like, with which documents may be produced containing text or data that may be used for building an application defined data source 150 for predicting words in related documents. In contrast to the starting of a reply action described with reference to FIG. 3, at operation 410, the user opens a document generated by the application 170, for example, a word processing document, such as a letter or memorandum. - At
operation 415, the application 170 parses the opened document for words that may be used to create an application defined data source 150 for subsequent use by the prediction engine 125 in generating context-based word prediction. At operation 425, the application 170 creates the application defined data source 150 in the same manner as described above with reference to FIG. 3, and at operation 435, in conjunction with the prediction engine 125, the application 170 populates the application defined data source with the words parsed from the opened document. At operation 430, the user begins inputting text or data into the open document, and at operation 445, the input method 115 intercepts the input and proceeds to obtain and display a candidate list of predicted words in the same manner as described above for the electronic messaging application. - According to an embodiment, opening a document at
operation 410 may include opening the same document for which an application defined data source previously has been generated by the application 170. That is, according to this embodiment, when a document is generated using the application 170, an application defined data source may be created and stored for subsequent use in editing or adding/deleting text in the same document. Alternatively, the application defined data source may be generated dynamically as the document is being generated in the first instance. For example, if a first line of text entered into a new document includes the word "modification," an application defined data source may be dynamically updated to include the word "modification" in the data store 160. Thus, if in a subsequent sentence or paragraph the user types the character "m," the prediction engine may fetch words beginning with the character "m," including the word "modification" dynamically added to the application defined data source during the present editing session of the document. - Alternatively, an application defined data source created for a first document may be related to a second document during creation or editing of the second document. For example, upon launching a second document, a user may be provided an opportunity to browse to or link to a first document for which an application defined data source has been created and which may assist the user in generating or editing the second document. For example, if the user knows that a memorandum was previously produced for which an application defined data source was generated, the user may associate a related letter document with the previously generated memorandum document so that the application defined data source created for the memorandum document will be available for generating and editing the letter document.
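- Under illustrative assumptions, a dynamic update of this kind can be as simple as feeding each word the user commits back into the word store that is queried for predictions, as in the self-contained sketch below; the class and method names (DynamicAdds, commit_word) are hypothetical.

```python
class DynamicAdds:
    """Grows the word store as the user commits words in the current session."""

    def __init__(self) -> None:
        self.word_counts: dict[str, int] = {}

    def commit_word(self, word: str) -> None:
        # Called whenever the user finishes a word (space, punctuation, etc.).
        key = word.lower()
        self.word_counts[key] = self.word_counts.get(key, 0) + 1

    def candidates(self, prefix: str) -> list[str]:
        prefix = prefix.lower()
        return sorted(w for w in self.word_counts if w.startswith(prefix))


adds = DynamicAdds()
for word in "The first modification was approved".split():
    adds.commit_word(word)
print(adds.candidates("m"))  # ['modification']
```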
-
FIG. 5 illustrates an alternative method for creating an application defined data source (ADDS) from an application document. At operation 510, a user opens a document generated by an application 170, and at operation 515, the input method 115 retrieves text from the application document, parses the text, and creates an application defined data source at operation 520. At operation 525, the prediction engine 125 populates the application defined data source with words parsed from the opened document, and at operation 530, the prediction engine 125 enables the application defined data source for subsequent use in providing predicted words. - Referring back to the
application 170, when the user begins entering new text or editing text in the opened document at operation 535, the input method 115 intercepts the text input at operation 540. At operation 545, the input method 115 obtains prediction results from the prediction engine 125 by obtaining possible candidates from the various data sources 130, including the ADDS created for this document. At operation 555, a candidate list of predicted words is displayed for the user, as described above. Advantageously, this embodiment of the present invention allows for creation of an application defined data source from an opened application document without requiring modifications to existing applications. - As described above, the
input method 115 may include a number of different types of input methods, for example, typing, electronic handwriting, speech recognition, etc. If the input method 115 is a speech recognition engine, the accuracy of the speech recognition engine in understanding the spoken words of a given user may be improved using the context-based word prediction system of the present invention. If a word or phrase is spoken into a speech recognition input method 115 by a user, and the user selects a word from a candidate list provided by the context-based word prediction system 100 to correct or complete a word spoken into the speech recognition input method, then the accuracy of the speech recognition input method will be improved because the input method will learn how to interpret the spoken words of the user more accurately. -
FIG. 6 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a speech or voice recognition input method. At operation 610, a user opens a document to be created or edited using a speech or voice recognition engine as the input method 115. At operation 615, the opened document is parsed for relevant words, and at operation 620, the application creates a new application defined data source for the opened document. At operation 625, the prediction engine 125 populates the created application defined data source with words parsed from the opened document. At operation 630, the ADDS is enabled for use. - Referring back to the
application 170, at operation 635, the user begins speaking into the microphone of the voice or speech recognition input method 115. At operation 640, the input method 115 obtains words from the ADDS that are responsive to the input received at operation 635. As should be appreciated, the input method 115 may obtain the words from the ADDS via a variety of interfaces operative to allow the retrieval of the stored words. At operation 650, the words obtained from the ADDS are added to a speech language model operated by the voice or speech recognition input method.
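- The patent does not specify how words are added to the speech language model; one plausible, purely illustrative approach is to boost the unigram probabilities of document words within whatever lexicon the recognizer consults, sketched below with hypothetical names and a made-up boost factor.

```python
def boost_language_model(unigram_probs: dict[str, float],
                         adds_words: list[str],
                         boost: float = 5.0) -> dict[str, float]:
    """Raise the weight of document words, then renormalize to a probability distribution."""
    floor = min(unigram_probs.values()) if unigram_probs else 1e-6
    weights = dict(unigram_probs)
    for word in adds_words:
        weights[word] = weights.get(word, floor) * boost
    total = sum(weights.values())
    return {word: weight / total for word, weight in weights.items()}


base = {"cars": 0.5, "kara": 0.3, "karazaki": 0.2}
boosted = boost_language_model(base, ["karazaki", "models"])
print(max(boosted, key=boosted.get))  # 'karazaki' now outweighs acoustically similar words
```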
- At operation 655, the input method 115 listens to the voice or speech input received from the user via the microphone of the input method 115. At operation 660, the input method 115 performs voice recognition on the spoken words, and at operation 665, the input method 115 displays recognized words, including words retrieved from the ADDS, responsive to the voice recognition performed on the spoken input from the user. As described above, the recognized words may be displayed in a candidate list of predicted words from which the user may select for completion of one or more spoken words. Alternatively, words retrieved from the ADDS may not be displayed in a candidate list, and the words may instead be used by the voice/speech recognition input method to improve its accuracy by having a greater number of words from which to choose for completing a voice/speech input. - In the case of handwriting recognition engines, the context-based
word prediction system 100 may be utilized for predicting words or data and for improving the accuracy of the handwriting recognition engine. As with the typing input method described above, when a user enters a character using an electronic handwriting input method, the handwritten character may be utilized by the prediction engine 125 for retrieving matching words from the application defined data source 150 and from the existing text prediction data sources 135. In addition, the context-based word prediction system may be used for matching handwriting strokes to words or data in the data sources 130. For example, if an electronic handwriting stroke is recognized as a stroke that can only belong to the character "A," then the electronic handwriting input method 115 may pass the character "A" to the prediction engine for retrieving words or data beginning with the character "A."
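- As an illustrative sketch only (the stroke classifier and its outputs are invented for the example), the stroke-matching idea can be modeled by querying the lexicon once per character hypothesis produced for a stroke and merging the results:

```python
def candidates_for_stroke(stroke_hypotheses: list[str],
                          lexicon: list[str]) -> list[str]:
    """Union of prefix matches for every character the stroke could represent."""
    matches: set[str] = set()
    for character in stroke_hypotheses:
        matches.update(w for w in lexicon if w.lower().startswith(character.lower()))
    return sorted(matches)


# A stroke judged to be either "A" or "H" by a hypothetical recognizer:
print(candidates_for_stroke(["A", "H"], ["Alexandro", "Models", "has", "check"]))
# ['Alexandro', 'has']
```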
- In addition, in the case of a handwriting input method 115, the accuracy of the handwriting input method may be improved for a given user utilizing the context-based word prediction system described herein. For example, if the user handwrites a word not found in the data sources 130, then the user will be required to correct the result if the handwriting input method 115 incorrectly interprets the word written by the user. On the other hand, if the user selects a word provided in a word candidate list generated by the context-based word prediction system for completing a partially handwritten word, the accuracy of the handwriting input method 115 will be improved because the input method will be able to match the handwritten characters or text received from the user to the correct word selected from the candidate list, so that the handwriting input method will learn to more accurately recognize the user's personal style of handwriting the subject word. Accuracy of the handwriting input method 115 may likewise be improved on a stroke-by-stroke basis. That is, if the user is provided a candidate list of words in response to one or more strokes entered by the user, and the user selects a word from the candidate list, then the accuracy of the handwriting input method will be improved because the handwriting input method will now be able to more accurately interpret the electronic handwriting strokes entered by the user. -
FIG. 7 is a state diagram and operational flow illustrating a method for providing context-based word prediction via a handwriting recognition input method. At operation 710, a document is opened with the application 170 for creation or text input using a handwriting recognition input method 115. At operation 715, the application 170 parses relevant words from the opened document, and at operation 720, the application 170 creates a new application defined data source. At operation 725, the prediction engine 125 populates the application defined data source with words parsed from the opened document, and at operation 730, the prediction engine 125 enables the ADDS for subsequent use. - At
operation 735, the user begins to edit or add to the open document using the handwriting recognition input method 115. At operation 740, the handwriting recognition input method 115 obtains words from the enabled ADDS that are responsive to the input received at operation 735. As should be appreciated, the input method 115 may obtain the words from the ADDS via a variety of interfaces operative to allow the retrieval of the stored words. At operation 750, the obtained words are added to a handwriting language model operated by the handwriting recognition input method 115. - At
operation 755, the input method 115 intercepts handwritten words or other text entered by the user at operation 735. At operation 760, the input method 115 performs handwriting recognition on the handwritten input. At operation 765, the input method 115 displays a candidate list of words responsive to the handwriting recognition, including words obtained from the enabled ADDS that are responsive to the handwriting recognition applied to the handwritten input. Alternatively, words retrieved from the ADDS may not be displayed in a candidate list, and the words may instead be used by the handwriting recognition input method to improve its accuracy by having a greater number of words from which to choose for completing a handwriting input. - As described above, according to an embodiment of the invention, the context-based
word prediction system 100 illustrated in FIG. 1 may reside and operate on a single computing device, for example, a mobile computing device, a desktop computing device, a server-based computing device, and the like. According to another embodiment, information created and stored in the application defined data source of one computing device may be transferred to a second computing device for use in preparation of a document on the second device. For example, a user may be utilizing the context-based word prediction system in the preparation of electronic mail messages on the user's desktop computing device in the user's office or home. Subsequently, if the user begins traveling and desires to prepare documents, for example, electronic mail messages, on a mobile computing device, it is advantageous to allow for a transfer of the application defined data source from the user's desktop computing device to the user's mobile computing device. -
FIG. 8 is a logical flow diagram illustrating a method for utilizing distributed context-based word prediction information between different computing devices. The application defined data source transfer routine 800, illustrated in FIG. 8, begins at start operation 805 and proceeds to operation 810, where a synchronization session between a first computing device and a second computing device is initiated. The synchronization session initiated at operation 810 synchronizes data between a first computing device, for example, a desktop computing device or server, and a second computing device, for example, a separate desktop computing device or a mobile computing device. During the synchronization session, data from the first computing device may be readily exchanged with the second computing device. An example synchronization session may be provided by the ACTIVESYNC software provided by MICROSOFT CORPORATION of Redmond, Wash., wherein a synchronization session may be provided between a server and a separate computing device, for example, a mobile computing device. As should be appreciated, other types of synchronization session programs which allow a transfer of data from one computing device to a second computing device may be suitable for the embodiments described herein. - At
operation 815, the synchronization session program 802 (FIG. 9) calls the computing device on which the text prediction data sources 130 are located, including the application defined data source 150 and the existing text prediction data sources 135, and requests information from those sources for building a similar word prediction dictionary that will be transferred to the second computing device, for example, the mobile computing device 250. According to embodiments, the synchronization session program may call the operating system of the first computing device for retrieving the required information, or the call may be placed to the application 170 for retrieving the required information. - At
operation 820, the synchronization session program collects other words that may be used by the second computing device, for example, words that have been added to a spellchecking program, words that have been provided to an autocorrect dictionary program, user-provided words, and the like. At operation 825, the synchronization session program compiles the information obtained at operations 815 and 820. Also at operation 825, the second computing device is notified of the availability of the compiled information when the compiled information is transferred to the second computing device via the synchronization session established between the first computing device and the second computing device. As should be understood, in order for the second computing device to utilize the transferred information, the context-based word prediction system 100 illustrated and described with reference to FIG. 1 should be available on the second computing device.
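- As a purely illustrative sketch of the compile-and-transfer step, the words gathered from the various sources could be merged and serialized to a single file that the second device's prediction system reads on arrival; the file layout and function names below are assumptions, not details from the patent.

```python
import json
from pathlib import Path


def compile_prediction_data(adds_words: dict[str, int],
                            extra_words: list[str],
                            out_path: Path) -> None:
    """Merge ADDS words and other collected words into one transferable file."""
    payload = {
        "adds_words": adds_words,                  # word -> count from related documents
        "extra_words": sorted(set(extra_words)),   # spellcheck/autocorrect/user words
    }
    out_path.write_text(json.dumps(payload, indent=2), encoding="utf-8")


def load_prediction_data(in_path: Path) -> dict:
    """Read the compiled data on the second computing device."""
    return json.loads(in_path.read_text(encoding="utf-8"))


compile_prediction_data({"karazaki": 2, "models": 2}, ["Giordano"], Path("prediction_sync.json"))
print(load_prediction_data(Path("prediction_sync.json"))["extra_words"])  # ['Giordano']
```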
- At operation 830, the prediction engine 125 resident on the second computing device detects the newly installed data sources that have been compiled and transferred to the second computing device via the synchronization session program. As should be appreciated, because the context-based word prediction system 100 is available on the second computing device, as the user utilizes the second computing device for preparing, responding to, or otherwise editing or modifying documents, the context-based word prediction system 100 on the second computing device may create additional application defined data sources 150 or may supplement those data sources transferred from the first computing device. - At
operation 835, the user starts inputting a word in a document on the second computing device in the same manner as described above with reference to FIGS. 2, 3, and 4. At operation 840, the prediction engine 125 on the second computing device predicts words from the data transferred to the second computing device from the first computing device and provides a candidate list of words to the user for automatic completion of text input. - According to an alternate embodiment, the data compiled by the synchronization session program may be maintained at the first computing device, and the
prediction engine 125 on the second computing device may request the compiled data from the first computing device as each text input is received at the second device. That is, the compiled data remains on the first computing device, and the second computing device may access and retrieve data from the first computing device via the synchronization session, where the data has been compiled in a format required by the second computing device. - Referring now to
FIG. 9, the following discussion is intended to provide a brief, general description of a suitable computing environment in which embodiments of the invention may be implemented. While the invention will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer, those skilled in the art will recognize that the invention may also be implemented in combination with other types of computer systems and program modules. - Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
- Referring now to
FIG. 9, an illustrative operating environment for embodiments of the invention will be described. As shown in FIG. 9, computer 900 comprises a general purpose desktop, laptop, handheld, mobile, or other type of computer (computing device) capable of executing one or more application programs. The computer 900 includes at least one central processing unit 908 ("CPU"), a system memory 912, including a random access memory 918 ("RAM") and a read-only memory ("ROM") 920, and a system bus 910 that couples the memory to the CPU 908. A basic input/output system containing the basic routines that help to transfer information between elements within the computer, such as during startup, is stored in the ROM 920. The computer 902 further includes a mass storage device 914 for storing an operating system 932, application programs, and other program modules. - The
mass storage device 914 is connected to the CPU 908 through a mass storage controller (not shown) connected to the bus 910. The mass storage device 914 and its associated computer-readable media provide non-volatile storage for the computer 900. Although the description of computer-readable media contained herein refers to a mass storage device, such as a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available media that can be accessed or utilized by the computer 900. - By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks ("DVD") or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the
computer 900. - According to various embodiments of the invention, the
computer 900 may operate in a networked environment using logical connections to remote computers through a network 904, such as a local network, the Internet, etc. The computer 902 may connect to the network 904 through a network interface unit 916 connected to the bus 910. It should be appreciated that the network interface unit 916 may also be utilized to connect to other types of networks and remote computing systems. The computer 900 may also include an input/output controller 922 for receiving and processing input from a number of other devices, including a keyboard, mouse, etc. (not shown). Similarly, the input/output controller 922 may provide output to a display screen, a printer, or other type of output device. - As mentioned briefly above, a number of program modules and data files may be stored in the
mass storage device 914 and RAM 918 of the computer 900, including an operating system 932 suitable for controlling the operation of a networked personal computer, such as the WINDOWS® operating systems from MICROSOFT CORPORATION of Redmond, Wash. The mass storage device 914 and RAM 918 may also store one or more program modules. In particular, the mass storage device 914 and the RAM 918 may store application programs, such as a software application 924, for example, a word processing application, a spreadsheet application, etc. According to embodiments of the present invention, a context-based word prediction system program 100 is illustrated for performing context-based word prediction as described herein. As should be appreciated, the context-based word prediction system may operate as a standalone application that may be called by a given software application 170, or the system 100 may be integrated with the programming of a given application 170. The synchronization session program 802 is a software program operative to provide a synchronization session between two or more computing devices, as described above with reference to FIG. 8. - It should be appreciated that various embodiments of the present invention can be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance requirements of the computing system implementing the invention. Accordingly, logical operations including related algorithms can be referred to variously as operations, structural devices, acts, or modules. It will be recognized by one skilled in the art that these operations, structural devices, acts, and modules may be implemented in software, firmware, special purpose digital logic, and any combination thereof without deviating from the spirit and scope of the present invention as recited within the claims set forth herein.
- Although the invention has been described in connection with various exemplary embodiments, those of ordinary skill in the art will understand that many modifications can be made thereto within the scope of the claims that follow. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/704,381 US7912700B2 (en) | 2007-02-08 | 2007-02-08 | Context based word prediction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/704,381 US7912700B2 (en) | 2007-02-08 | 2007-02-08 | Context based word prediction |
Publications (2)
Publication Number | Publication Date |
---|---|
US20080195388A1 true US20080195388A1 (en) | 2008-08-14 |
US7912700B2 US7912700B2 (en) | 2011-03-22 |
Family
ID=39686602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/704,381 Active 2029-08-20 US7912700B2 (en) | 2007-02-08 | 2007-02-08 | Context based word prediction |
Country Status (1)
Country | Link |
---|---|
US (1) | US7912700B2 (en) |
Cited By (215)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060265208A1 (en) * | 2005-05-18 | 2006-11-23 | Assadollahi Ramin O | Device incorporating improved text input mechanism |
US20070226649A1 (en) * | 2006-03-23 | 2007-09-27 | Agmon Jonathan | Method for predictive typing |
US20080072143A1 (en) * | 2005-05-18 | 2008-03-20 | Ramin Assadollahi | Method and device incorporating improved text input mechanism |
US20080250034A1 (en) * | 2007-04-06 | 2008-10-09 | John Edward Petri | External metadata acquisition and synchronization in a content management system |
US20080266261A1 (en) * | 2007-04-25 | 2008-10-30 | Idzik Jacek S | Keystroke Error Correction Method |
US20090279782A1 (en) * | 2008-05-06 | 2009-11-12 | Wu Yingchao | Candidate selection method for handwriting input |
US20090278853A1 (en) * | 2008-05-12 | 2009-11-12 | Masaharu Ueda | Character input program, character input device, and character input method |
US20100153091A1 (en) * | 2008-12-11 | 2010-06-17 | Microsoft Corporation | User-specified phrase input learning |
US20100250251A1 (en) * | 2009-03-30 | 2010-09-30 | Microsoft Corporation | Adaptation for statistical language model |
US20100332215A1 (en) * | 2009-06-26 | 2010-12-30 | Nokia Corporation | Method and apparatus for converting text input |
US20110029862A1 (en) * | 2009-07-30 | 2011-02-03 | Research In Motion Limited | System and method for context based predictive text entry assistance |
US20110167340A1 (en) * | 2010-01-06 | 2011-07-07 | Bradford Allen Moore | System and Method for Issuing Commands to Applications Based on Contextual Information |
US20110184736A1 (en) * | 2010-01-26 | 2011-07-28 | Benjamin Slotznick | Automated method of recognizing inputted information items and selecting information items |
US20110197128A1 (en) * | 2008-06-11 | 2011-08-11 | EXBSSET MANAGEMENT GmbH | Device and Method Incorporating an Improved Text Input Mechanism |
US20110202876A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | User-centric soft keyboard predictive technologies |
US20110208507A1 (en) * | 2010-02-19 | 2011-08-25 | Google Inc. | Speech Correction for Typed Input |
WO2011107751A3 (en) * | 2010-03-04 | 2011-10-20 | Touchtype Ltd | System and method for inputting text into electronic devices |
US20120029910A1 (en) * | 2009-03-30 | 2012-02-02 | Touchtype Ltd | System and Method for Inputting Text into Electronic Devices |
WO2012042217A1 (en) | 2010-09-29 | 2012-04-05 | Touchtype Ltd. | System and method for inputting text into electronic devices |
US20120110518A1 (en) * | 2010-10-29 | 2012-05-03 | Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. | Translation of directional input to gesture |
US20120110579A1 (en) * | 2010-10-29 | 2012-05-03 | Microsoft Corporation | Enterprise resource planning oriented context-aware environment |
US20120223889A1 (en) * | 2009-03-30 | 2012-09-06 | Touchtype Ltd | System and Method for Inputting Text into Small Screen Devices |
US20120278751A1 (en) * | 2011-04-29 | 2012-11-01 | Chih-Yu Chen | Input method and input module thereof |
US20120290291A1 (en) * | 2011-05-13 | 2012-11-15 | Gabriel Lee Gilbert Shelley | Input processing for character matching and predicted word matching |
US8374846B2 (en) | 2005-05-18 | 2013-02-12 | Neuer Wall Treuhand Gmbh | Text input device and method |
US20130080964A1 (en) * | 2011-09-28 | 2013-03-28 | Kyocera Corporation | Device, method, and storage medium storing program |
US20130085747A1 (en) * | 2011-09-29 | 2013-04-04 | Microsoft Corporation | System, Method and Computer-Readable Storage Device for Providing Cloud-Based Shared Vocabulary/Typing History for Efficient Social Communication |
US20130096918A1 (en) * | 2011-10-12 | 2013-04-18 | Fujitsu Limited | Recognizing device, computer-readable recording medium, recognizing method, generating device, and generating method |
EP2592566A1 (en) * | 2011-11-10 | 2013-05-15 | Research In Motion Limited | Touchscreen keyboard predictive display and generation of a set of characters |
EP2592567A1 (en) * | 2011-11-10 | 2013-05-15 | Research In Motion Limited | Methods and systems for removing or replacing keyboard prediction candidates |
US20130191737A1 (en) * | 2003-04-15 | 2013-07-25 | Dictaphone Corporation | Method, system, and apparatus for data reuse |
US20130212475A1 (en) * | 2010-11-01 | 2013-08-15 | Koninklijke Philips Electronics N.V. | Suggesting relevant terms during text entry |
US20130262680A1 (en) * | 2012-03-28 | 2013-10-03 | Bmc Software, Inc. | Dynamic service resource control |
US20140025371A1 (en) * | 2012-07-17 | 2014-01-23 | Samsung Electronics Co., Ltd. | Method and apparatus for recommending texts |
WO2014015205A1 (en) * | 2012-07-20 | 2014-01-23 | Microsoft Corporation | String predictions from buffer |
CN103547980A (en) * | 2011-05-23 | 2014-01-29 | 微软公司 | Context aware input engine |
WO2014022322A1 (en) * | 2012-07-30 | 2014-02-06 | Microsoft Corporation | Generating string predictions using contexts |
US20140068523A1 (en) * | 2012-08-28 | 2014-03-06 | Huawei Device Co., Ltd | Method and apparatus for optimizing handwriting input method |
US20140108004A1 (en) * | 2012-10-15 | 2014-04-17 | Nuance Communications, Inc. | Text/character input system, such as for use with touch screens on mobile phones |
WO2014089524A1 (en) * | 2012-12-06 | 2014-06-12 | Microsoft Corporation | Communication context based predictive-text suggestion |
US20140188460A1 (en) * | 2012-10-16 | 2014-07-03 | Google Inc. | Feature-based autocorrection |
WO2013171481A3 (en) * | 2012-05-14 | 2014-07-10 | Touchtype Limited | Mechanism, system and method for synchronising devices |
US20140281944A1 (en) * | 2013-03-14 | 2014-09-18 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US20140379325A1 (en) * | 2013-06-21 | 2014-12-25 | Research In Motion Limited | Text entry at electronic communication device |
US20150058718A1 (en) * | 2013-08-26 | 2015-02-26 | Samsung Electronics Co., Ltd. | User device and method for creating handwriting content |
US9032322B2 (en) | 2011-11-10 | 2015-05-12 | Blackberry Limited | Touchscreen keyboard predictive display and generation of a set of characters |
US9063653B2 (en) | 2012-08-31 | 2015-06-23 | Blackberry Limited | Ranking predictions based on typing speed and typing confidence |
US9116552B2 (en) | 2012-06-27 | 2015-08-25 | Blackberry Limited | Touchscreen keyboard providing selection of word predictions in partitions of the touchscreen keyboard |
EP2911148A1 (en) * | 2014-02-24 | 2015-08-26 | Panasonic Intellectual Property Management Co., Ltd. | Data input device, data input method, and in-vehicle apparatus |
WO2015148333A1 (en) * | 2014-03-27 | 2015-10-01 | Microsoft Technology Licensing, Llc | Flexible schema for language model customization |
US9152323B2 (en) | 2012-01-19 | 2015-10-06 | Blackberry Limited | Virtual keyboard providing an indication of received input |
US9201510B2 (en) | 2012-04-16 | 2015-12-01 | Blackberry Limited | Method and device having touchscreen keyboard with visual cues |
US9207860B2 (en) | 2012-05-25 | 2015-12-08 | Blackberry Limited | Method and apparatus for detecting a gesture |
JP2016014987A (en) * | 2014-07-01 | 2016-01-28 | Kddi株式会社 | Input support device, input support system, and program |
US20160026639A1 (en) * | 2014-07-28 | 2016-01-28 | International Business Machines Corporation | Context-based text auto completion |
US9310889B2 (en) | 2011-11-10 | 2016-04-12 | Blackberry Limited | Touchscreen keyboard predictive display and generation of a set of characters |
US9311298B2 (en) | 2013-06-21 | 2016-04-12 | Microsoft Technology Licensing, Llc | Building conversational understanding systems using a toolset |
CN105518577A (en) * | 2013-08-26 | 2016-04-20 | 三星电子株式会社 | User device and method for creating handwriting content |
US20160196150A1 (en) * | 2013-08-09 | 2016-07-07 | Kun Jing | Input Method Editor Providing Language Assistance |
US9390079B1 (en) | 2013-05-10 | 2016-07-12 | D.R. Systems, Inc. | Voice commands for report editing |
US9477625B2 (en) | 2014-06-13 | 2016-10-25 | Microsoft Technology Licensing, Llc | Reversible connector for accessory devices |
US9520127B2 (en) | 2014-04-29 | 2016-12-13 | Microsoft Technology Licensing, Llc | Shared hidden layer combination for speech recognition systems |
US9524290B2 (en) | 2012-08-31 | 2016-12-20 | Blackberry Limited | Scoring predictions based on prediction length and typing speed |
US20170010800A1 (en) * | 2013-02-05 | 2017-01-12 | Google Inc. | Gesture keyboard input of non-dictionary character strings |
US9557913B2 (en) | 2012-01-19 | 2017-01-31 | Blackberry Limited | Virtual keyboard display having a ticker proximate to the virtual keyboard |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9589565B2 (en) | 2013-06-21 | 2017-03-07 | Microsoft Technology Licensing, Llc | Environmentally aware dialog policies and response generation |
US9606634B2 (en) | 2005-05-18 | 2017-03-28 | Nokia Technologies Oy | Device incorporating improved text input mechanism |
US9614724B2 (en) | 2014-04-21 | 2017-04-04 | Microsoft Technology Licensing, Llc | Session-based device configuration |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US9639260B2 (en) | 2007-01-07 | 2017-05-02 | Apple Inc. | Application programming interfaces for gesture operations |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9652448B2 (en) | 2011-11-10 | 2017-05-16 | Blackberry Limited | Methods and systems for removing or replacing on-keyboard prediction candidates |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US20170154030A1 (en) * | 2015-11-30 | 2017-06-01 | Citrix Systems, Inc. | Providing electronic text recommendations to a user based on what is discussed during a meeting |
US9684521B2 (en) | 2010-01-26 | 2017-06-20 | Apple Inc. | Systems having discrete and continuous gesture recognizers |
US9690481B2 (en) | 2008-03-04 | 2017-06-27 | Apple Inc. | Touch event model |
US9703394B2 (en) * | 2015-03-24 | 2017-07-11 | Google Inc. | Unlearning techniques for adaptive language models in text entry |
US9715489B2 (en) | 2011-11-10 | 2017-07-25 | Blackberry Limited | Displaying a prediction candidate after a typing mistake |
US9717006B2 (en) | 2014-06-23 | 2017-07-25 | Microsoft Technology Licensing, Llc | Device quarantine in a wireless network |
US20170220129A1 (en) * | 2014-07-18 | 2017-08-03 | Shanghai Chule (Coo Tek) Information Technology Co., Ltd. | Predictive Text Input Method and Device |
US9728184B2 (en) | 2013-06-18 | 2017-08-08 | Microsoft Technology Licensing, Llc | Restructuring deep neural network acoustic models |
US9733716B2 (en) | 2013-06-09 | 2017-08-15 | Apple Inc. | Proxy gesture recognizer |
US9798459B2 (en) | 2008-03-04 | 2017-10-24 | Apple Inc. | Touch event model for web pages |
CN107341138A (en) * | 2017-06-29 | 2017-11-10 | 珠海市魅族科技有限公司 | A kind of information fill method and device, computer installation and readable storage medium storing program for executing |
US9874914B2 (en) | 2014-05-19 | 2018-01-23 | Microsoft Technology Licensing, Llc | Power management contracts for accessory devices |
US9910588B2 (en) | 2012-02-24 | 2018-03-06 | Blackberry Limited | Touchscreen keyboard providing word predictions in partitions of the touchscreen keyboard in proximate association with candidate letters |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9959296B1 (en) * | 2014-05-12 | 2018-05-01 | Google Llc | Providing suggestions within a document |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9965177B2 (en) | 2009-03-16 | 2018-05-08 | Apple Inc. | Event recognition |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US9996524B1 (en) * | 2017-01-30 | 2018-06-12 | International Business Machines Corporation | Text prediction using multiple devices |
WO2018111702A1 (en) * | 2016-12-15 | 2018-06-21 | Microsoft Technology Licensing, Llc | Word order suggestion taking into account frequency and formatting information |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US10111099B2 (en) | 2014-05-12 | 2018-10-23 | Microsoft Technology Licensing, Llc | Distributing content in managed wireless distribution networks |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10191654B2 (en) | 2009-03-30 | 2019-01-29 | Touchtype Limited | System and method for inputting text into electronic devices |
US10216408B2 (en) | 2010-06-14 | 2019-02-26 | Apple Inc. | Devices and methods for identifying user interface objects based on view hierarchy |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US20190130901A1 (en) * | 2016-06-15 | 2019-05-02 | Sony Corporation | Information processing device and information processing method |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10372310B2 (en) | 2016-06-23 | 2019-08-06 | Microsoft Technology Licensing, Llc | Suppression of input images |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10402493B2 (en) | 2009-03-30 | 2019-09-03 | Touchtype Ltd | System and method for inputting text into electronic devices |
US10412439B2 (en) | 2002-09-24 | 2019-09-10 | Thomson Licensing | PVR channel and PVR IPG information |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10417332B2 (en) | 2016-12-15 | 2019-09-17 | Microsoft Technology Licensing, Llc | Predicting text by combining attempts |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10430045B2 (en) | 2009-03-31 | 2019-10-01 | Samsung Electronics Co., Ltd. | Method for creating short message and portable terminal using the same |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10482133B2 (en) | 2016-09-07 | 2019-11-19 | International Business Machines Corporation | Creating and editing documents using word history |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10558749B2 (en) | 2017-01-30 | 2020-02-11 | International Business Machines Corporation | Text prediction using captured image from an image capture device |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
WO2020051209A1 (en) * | 2018-09-04 | 2020-03-12 | Nuance Communications, Inc. | Multi-character text input system with audio feedback and word completion |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10613746B2 (en) | 2012-01-16 | 2020-04-07 | Touchtype Ltd. | System and method for inputting text |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691445B2 (en) | 2014-06-03 | 2020-06-23 | Microsoft Technology Licensing, Llc | Isolating a portion of an online computing service for testing |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US10719225B2 (en) | 2009-03-16 | 2020-07-21 | Apple Inc. | Event recognition |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
CN112214114A (en) * | 2019-07-12 | 2021-01-12 | 北京搜狗科技发展有限公司 | Input method and device and electronic equipment |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
CN112560477A (en) * | 2020-12-09 | 2021-03-26 | 中科讯飞互联(北京)信息科技有限公司 | Text completion method, electronic device and storage device |
US10963142B2 (en) | 2007-01-07 | 2021-03-30 | Apple Inc. | Application programming interfaces for scrolling |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
CN113076158A (en) * | 2021-03-26 | 2021-07-06 | 维沃移动通信有限公司 | Display control method and display control device |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US11079933B2 (en) * | 2008-01-09 | 2021-08-03 | Apple Inc. | Method, device, and graphical user interface providing word recommendations for text input |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US11080335B2 (en) * | 2019-08-13 | 2021-08-03 | International Business Machines Corporation | Concept-based autosuggest based on previously identified items |
US11120220B2 (en) | 2014-05-30 | 2021-09-14 | Apple Inc. | Device, method, and graphical user interface for a predictive keyboard |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11194547B2 (en) * | 2018-06-22 | 2021-12-07 | Samsung Electronics Co., Ltd. | Text input device and method therefor |
US11194467B2 (en) | 2019-06-01 | 2021-12-07 | Apple Inc. | Keyboard management user interfaces |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11425060B2 (en) * | 2016-09-20 | 2022-08-23 | Google Llc | System and method for transmitting a response in a messaging application |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US20220366137A1 (en) * | 2017-07-31 | 2022-11-17 | Apple Inc. | Correcting input based on user context |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI502380B (en) | 2007-03-29 | 2015-10-01 | Nokia Corp | Method, apparatus, server, system and computer program product for use with predictive text input |
JP2008293403A (en) * | 2007-05-28 | 2008-12-04 | Sony Ericsson Mobile Communications Japan Inc | Character input device, portable terminal and character input program |
CN100592249C (en) * | 2007-09-21 | 2010-02-24 | 上海汉翔信息技术有限公司 | How to Quickly Enter Related Words |
US8756527B2 (en) * | 2008-01-18 | 2014-06-17 | Rpx Corporation | Method, apparatus and computer program product for providing a word input mechanism |
US8010465B2 (en) * | 2008-02-26 | 2011-08-30 | Microsoft Corporation | Predicting candidates using input scopes |
US8180630B2 (en) | 2008-06-06 | 2012-05-15 | Zi Corporation Of Canada, Inc. | Systems and methods for an automated personalized dictionary generator for portable devices |
US8589149B2 (en) * | 2008-08-05 | 2013-11-19 | Nuance Communications, Inc. | Probability-based approach to recognition of user-entered data |
US8224642B2 (en) * | 2008-11-20 | 2012-07-17 | Stratify, Inc. | Automated identification of documents as not belonging to any language |
US8373724B2 (en) | 2009-01-28 | 2013-02-12 | Google Inc. | Selective display of OCR'ed text and corresponding images from publications on a client device |
US8442813B1 (en) * | 2009-02-05 | 2013-05-14 | Google Inc. | Methods and systems for assessing the quality of automatically generated text |
US8660837B2 (en) * | 2009-03-20 | 2014-02-25 | Honda Motor Co., Ltd. | Language processor |
GB0917753D0 (en) | 2009-10-09 | 2009-11-25 | Touchtype Ltd | System and method for inputting text into electronic devices |
JP5587119B2 (en) * | 2010-09-30 | 2014-09-10 | キヤノン株式会社 | CHARACTER INPUT DEVICE, ITS CONTROL METHOD, AND PROGRAM |
KR20120045218A (en) * | 2010-10-29 | 2012-05-09 | 삼성전자주식회사 | Method and apparatus for inputting a message using multi-touch |
US9576573B2 (en) | 2011-08-29 | 2017-02-21 | Microsoft Technology Licensing, Llc | Using multiple modality input to feedback context for natural language understanding |
US8850310B2 (en) * | 2011-10-11 | 2014-09-30 | Microsoft Corporation | Data entry suggestion lists for designated document data entry areas based on data from other document data entry areas |
US9223497B2 (en) | 2012-03-16 | 2015-12-29 | Blackberry Limited | In-context word prediction and word correction |
US9336187B2 (en) * | 2012-05-14 | 2016-05-10 | The Boeing Company | Mediation computing device and associated method for generating semantic tags |
US8918408B2 (en) | 2012-08-24 | 2014-12-23 | Microsoft Corporation | Candidate generation for predictive input using input history |
US9099091B2 (en) * | 2013-01-22 | 2015-08-04 | Nuance Communications, Inc. | Method and apparatus of adaptive textual prediction of voice data |
JP6038700B2 (en) * | 2013-03-25 | 2016-12-07 | 株式会社東芝 | Shaping device |
CN104345899B (en) * | 2013-08-08 | 2018-01-19 | 阿里巴巴集团控股有限公司 | Field conversion method and client for input method |
US9377871B2 (en) | 2014-08-01 | 2016-06-28 | Nuance Communications, Inc. | System and methods for determining keyboard input in the presence of multiple contact points |
US9953646B2 (en) | 2014-09-02 | 2018-04-24 | Belleau Technologies | Method and system for dynamic speech recognition and tracking of prewritten script |
CN104615591B (en) * | 2015-03-10 | 2019-02-05 | 上海触乐信息科技有限公司 | Context-based forward input error correction method and device |
US11157166B2 (en) * | 2015-11-20 | 2021-10-26 | Felt, Inc. | Automove smart transcription |
WO2019026087A1 (en) * | 2017-07-31 | 2019-02-07 | Kulkarni Hrishikesh | An intelligent context based prediction system |
US11205045B2 (en) | 2018-07-06 | 2021-12-21 | International Business Machines Corporation | Context-based autocompletion suggestion |
US11620407B2 (en) | 2019-10-17 | 2023-04-04 | International Business Machines Corporation | Real-time, context based detection and classification of data |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5467425A (en) | 1993-02-26 | 1995-11-14 | International Business Machines Corporation | Building scalable N-gram language models using maximum likelihood maximum entropy N-gram models |
US6839667B2 (en) | 2001-05-16 | 2005-01-04 | International Business Machines Corporation | Method of speech recognition by presenting N-best word candidates |
US6687697B2 (en) | 2001-07-30 | 2004-02-03 | Microsoft Corporation | System and method for improved string matching under noisy channel conditions |
JP4416644B2 (en) | 2004-12-28 | 2010-02-17 | マイクロソフト コーポレーション | Character processing apparatus with prediction function, method, recording medium, and program |
US8036878B2 (en) | 2005-05-18 | 2011-10-11 | Neuer Wall Treuhand GmbH | Device incorporating improved text input mechanism |
2007-02-08: US application US 11/704,381 filed; granted as US7912700B2 (status: Active)
Patent Citations (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5377281A (en) * | 1992-03-18 | 1994-12-27 | At&T Corp. | Knowledge-based character recognition |
US5390279A (en) * | 1992-12-31 | 1995-02-14 | Apple Computer, Inc. | Partitioning speech rules by context for speech recognition |
US5805911A (en) * | 1995-02-01 | 1998-09-08 | Microsoft Corporation | Word prediction system |
US5907839A (en) * | 1996-07-03 | 1999-05-25 | Yeda Research And Development Co., Ltd. | Algorithm for context sensitive spelling correction |
US6346894B1 (en) * | 1997-02-27 | 2002-02-12 | Ameritech Corporation | Method and system for intelligent text entry on a numeric keypad |
US6377965B1 (en) * | 1997-11-07 | 2002-04-23 | Microsoft Corporation | Automatic word completion system for partially entered data |
US5896321A (en) * | 1997-11-14 | 1999-04-20 | Microsoft Corporation | Text completion system for a miniature computer |
US20050017954A1 (en) * | 1998-12-04 | 2005-01-27 | Kay David Jon | Contextual prediction of user words and user actions |
US6223059B1 (en) * | 1999-02-22 | 2001-04-24 | Nokia Mobile Phones Limited | Communication terminal having a predictive editor application |
US6204848B1 (en) * | 1999-04-14 | 2001-03-20 | Motorola, Inc. | Data entry apparatus having a limited number of character keys and method |
US6917910B2 (en) * | 1999-12-27 | 2005-07-12 | International Business Machines Corporation | Method, apparatus, computer system and storage medium for speech recognition |
US6922810B1 (en) * | 2000-03-07 | 2005-07-26 | Microsoft Corporation | Grammar-based automatic data completion and suggestion for user input |
US7031908B1 (en) * | 2000-06-01 | 2006-04-18 | Microsoft Corporation | Creating a language model for a language processing system |
US6578032B1 (en) * | 2000-06-28 | 2003-06-10 | Microsoft Corporation | Method and system for performing phrase/word clustering and cluster merging |
US20030046073A1 (en) * | 2001-08-24 | 2003-03-06 | International Business Machines Corporation | Word predicting method, voice recognition method, and voice recognition apparatus and program using the same methods |
US7111248B2 (en) * | 2002-01-15 | 2006-09-19 | Openwave Systems Inc. | Alphanumeric information input method |
US20040044422A1 (en) * | 2002-07-03 | 2004-03-04 | Vadim Fux | System and method for intelligent text input |
US6970599B2 (en) * | 2002-07-25 | 2005-11-29 | America Online, Inc. | Chinese character handwriting recognition system |
US20040153975A1 (en) * | 2003-02-05 | 2004-08-05 | Williams Roland E. | Text entry mechanism for small keypads |
US7296223B2 (en) * | 2003-06-27 | 2007-11-13 | Xerox Corporation | System and method for structured document authoring |
US7657423B1 (en) * | 2003-10-31 | 2010-02-02 | Google Inc. | Automatic completion of fragments of text |
US20050114770A1 (en) * | 2003-11-21 | 2005-05-26 | Sacher Heiko K. | Electronic device and user interface and input method therefor |
US20080306732A1 (en) * | 2005-01-11 | 2008-12-11 | France Telecom | Method and Device for Carrying Out Optimal Coding Between Two Long-Term Prediction Models |
US7630980B2 (en) * | 2005-01-21 | 2009-12-08 | Prashant Parikh | Automatic dynamic contextual data entry completion system |
US20060173678A1 (en) * | 2005-02-02 | 2006-08-03 | Mazin Gilbert | Method and apparatus for predicting word accuracy in automatic speech recognition systems |
US20060190447A1 (en) * | 2005-02-22 | 2006-08-24 | Microsoft Corporation | Query spelling correction method and system |
US20060190436A1 (en) * | 2005-02-23 | 2006-08-24 | Microsoft Corporation | Dynamic client interaction for search |
US20060259479A1 (en) * | 2005-05-12 | 2006-11-16 | Microsoft Corporation | System and method for automatic generation of suggested inline search terms |
US20060265648A1 (en) * | 2005-05-23 | 2006-11-23 | Roope Rainisto | Electronic text input involving word completion functionality for predicting word candidates for partial word inputs |
US20080076472A1 (en) * | 2006-09-22 | 2008-03-27 | Sony Ericsson Mobile Communications Ab | Intelligent Predictive Text Entry |
Cited By (351)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US10412439B2 (en) | 2002-09-24 | 2019-09-10 | Thomson Licensing | PVR channel and PVR IPG information |
US9251129B2 (en) * | 2003-04-15 | 2016-02-02 | Nuance Communications, Inc. | Method, system, and computer-readable medium for creating a new electronic document from an existing electronic document |
US20130191737A1 (en) * | 2003-04-15 | 2013-07-25 | Dictaphone Corporation | Method, system, and apparatus for data reuse |
US8374850B2 (en) | 2005-05-18 | 2013-02-12 | Neuer Wall Treuhand GmbH | Device incorporating improved text input mechanism |
US8117540B2 (en) * | 2005-05-18 | 2012-02-14 | Neuer Wall Treuhand GmbH | Method and device incorporating improved text input mechanism |
US8374846B2 (en) | 2005-05-18 | 2013-02-12 | Neuer Wall Treuhand GmbH | Text input device and method |
US20080072143A1 (en) * | 2005-05-18 | 2008-03-20 | Ramin Assadollahi | Method and device incorporating improved text input mechanism |
US8036878B2 (en) | 2005-05-18 | 2011-10-11 | Neuer Wall Treuhand GmbH | Device incorporating improved text input mechanism |
US9606634B2 (en) | 2005-05-18 | 2017-03-28 | Nokia Technologies Oy | Device incorporating improved text input mechanism |
US20060265208A1 (en) * | 2005-05-18 | 2006-11-23 | Assadollahi Ramin O | Device incorporating improved text input mechanism |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US20070226649A1 (en) * | 2006-03-23 | 2007-09-27 | Agmon Jonathan | Method for predictive typing |
US10613741B2 (en) | 2007-01-07 | 2020-04-07 | Apple Inc. | Application programming interface for gesture operations |
US10175876B2 (en) | 2007-01-07 | 2019-01-08 | Apple Inc. | Application programming interfaces for gesture operations |
US9639260B2 (en) | 2007-01-07 | 2017-05-02 | Apple Inc. | Application programming interfaces for gesture operations |
US11449217B2 (en) | 2007-01-07 | 2022-09-20 | Apple Inc. | Application programming interfaces for gesture operations |
US11954322B2 (en) | 2007-01-07 | 2024-04-09 | Apple Inc. | Application programming interface for gesture operations |
US10963142B2 (en) | 2007-01-07 | 2021-03-30 | Apple Inc. | Application programming interfaces for scrolling |
US9665265B2 (en) | 2007-01-07 | 2017-05-30 | Apple Inc. | Application programming interfaces for gesture operations |
US20080250034A1 (en) * | 2007-04-06 | 2008-10-09 | John Edward Petri | External metadata acquisition and synchronization in a content management system |
US20080266261A1 (en) * | 2007-04-25 | 2008-10-30 | Idzik Jacek S | Keystroke Error Correction Method |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US11079933B2 (en) * | 2008-01-09 | 2021-08-03 | Apple Inc. | Method, device, and graphical user interface providing word recommendations for text input |
US11474695B2 (en) | 2008-01-09 | 2022-10-18 | Apple Inc. | Method, device, and graphical user interface providing word recommendations for text input |
US9690481B2 (en) | 2008-03-04 | 2017-06-27 | Apple Inc. | Touch event model |
US9798459B2 (en) | 2008-03-04 | 2017-10-24 | Apple Inc. | Touch event model for web pages |
US10521109B2 (en) | 2008-03-04 | 2019-12-31 | Apple Inc. | Touch event model |
US9720594B2 (en) | 2008-03-04 | 2017-08-01 | Apple Inc. | Touch event model |
US9971502B2 (en) | 2008-03-04 | 2018-05-15 | Apple Inc. | Touch event model |
US12236038B2 (en) | 2008-03-04 | 2025-02-25 | Apple Inc. | Devices, methods, and user interfaces for processing input events |
US11740725B2 (en) | 2008-03-04 | 2023-08-29 | Apple Inc. | Devices, methods, and user interfaces for processing touch events |
US10936190B2 (en) | 2008-03-04 | 2021-03-02 | Apple Inc. | Devices, methods, and user interfaces for processing touch events |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US20090279782A1 (en) * | 2008-05-06 | 2009-11-12 | Wu Yingchao | Candidate selection method for handwriting input |
US8229225B2 (en) * | 2008-05-06 | 2012-07-24 | Wu Yingchao | Candidate selection method for handwriting input |
US20090278853A1 (en) * | 2008-05-12 | 2009-11-12 | Masaharu Ueda | Character input program, character input device, and character input method |
US8307281B2 (en) * | 2008-05-12 | 2012-11-06 | Omron Corporation | Predicting conversion candidates based on the current context and the attributes of previously selected conversion candidates |
US20110197128A1 (en) * | 2008-06-11 | 2011-08-11 | EXBSSET MANAGEMENT GmbH | Device and Method Incorporating an Improved Text Input Mechanism |
US8713432B2 (en) | 2008-06-11 | 2014-04-29 | Neuer Wall Treuhand GmbH | Device and method incorporating an improved text input mechanism |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US20100153091A1 (en) * | 2008-12-11 | 2010-06-17 | Microsoft Corporation | User-specified phrase input learning |
US9009591B2 (en) | 2008-12-11 | 2015-04-14 | Microsoft Corporation | User-specified phrase input learning |
US9965177B2 (en) | 2009-03-16 | 2018-05-08 | Apple Inc. | Event recognition |
US12265704B2 (en) | 2009-03-16 | 2025-04-01 | Apple Inc. | Event recognition |
US11163440B2 (en) | 2009-03-16 | 2021-11-02 | Apple Inc. | Event recognition |
US11755196B2 (en) | 2009-03-16 | 2023-09-12 | Apple Inc. | Event recognition |
US10719225B2 (en) | 2009-03-16 | 2020-07-21 | Apple Inc. | Event recognition |
EP2889729B1 (en) * | 2009-03-30 | 2023-03-15 | Microsoft Technology Licensing, LLC | System and method for inputting text into electronic devices |
US10073829B2 (en) * | 2009-03-30 | 2018-09-11 | Touchtype Limited | System and method for inputting text into electronic devices |
US10191654B2 (en) | 2009-03-30 | 2019-01-29 | Touchtype Limited | System and method for inputting text into electronic devices |
US8798983B2 (en) | 2009-03-30 | 2014-08-05 | Microsoft Corporation | Adaptation for statistical language model |
US9189472B2 (en) * | 2009-03-30 | 2015-11-17 | Touchtype Limited | System and method for inputting text into small screen devices |
US20140350920A1 (en) | 2009-03-30 | 2014-11-27 | Touchtype Ltd | System and method for inputting text into electronic devices |
US10445424B2 (en) | 2009-03-30 | 2019-10-15 | Touchtype Limited | System and method for inputting text into electronic devices |
US9659002B2 (en) * | 2009-03-30 | 2017-05-23 | Touchtype Ltd | System and method for inputting text into electronic devices |
US20120029910A1 (en) * | 2009-03-30 | 2012-02-02 | Touchtype Ltd | System and Method for Inputting Text into Electronic Devices |
US10402493B2 (en) | 2009-03-30 | 2019-09-03 | Touchtype Ltd | System and method for inputting text into electronic devices |
US20120223889A1 (en) * | 2009-03-30 | 2012-09-06 | Touchtype Ltd | System and Method for Inputting Text into Small Screen Devices |
US20100250251A1 (en) * | 2009-03-30 | 2010-09-30 | Microsoft Corporation | Adaptation for statistical language model |
US10430045B2 (en) | 2009-03-31 | 2019-10-01 | Samsung Electronics Co., Ltd. | Method for creating short message and portable terminal using the same |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US20100332215A1 (en) * | 2009-06-26 | 2010-12-30 | Nokia Corporation | Method and apparatus for converting text input |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US20110029862A1 (en) * | 2009-07-30 | 2011-02-03 | Research In Motion Limited | System and method for context based predictive text entry assistance |
US9223590B2 (en) * | 2010-01-06 | 2015-12-29 | Apple Inc. | System and method for issuing commands to applications based on contextual information |
US20110167340A1 (en) * | 2010-01-06 | 2011-07-07 | Bradford Allen Moore | System and Method for Issuing Commands to Applications Based on Contextual Information |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US12061915B2 (en) | 2010-01-26 | 2024-08-13 | Apple Inc. | Gesture recognizers with delegates for controlling and modifying gesture recognition |
US20110184736A1 (en) * | 2010-01-26 | 2011-07-28 | Benjamin Slotznick | Automated method of recognizing inputted information items and selecting information items |
US9684521B2 (en) | 2010-01-26 | 2017-06-20 | Apple Inc. | Systems having discrete and continuous gesture recognizers |
US10732997B2 (en) | 2010-01-26 | 2020-08-04 | Apple Inc. | Gesture recognizers with delegates for controlling and modifying gesture recognition |
US9613015B2 (en) | 2010-02-12 | 2017-04-04 | Microsoft Technology Licensing, Llc | User-centric soft keyboard predictive technologies |
US9165257B2 (en) | 2010-02-12 | 2015-10-20 | Microsoft Technology Licensing, Llc | Typing assistance for editing |
US20110201387A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Real-time typing assistance |
US20110202836A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Typing assistance for editing |
US10156981B2 (en) | 2010-02-12 | 2018-12-18 | Microsoft Technology Licensing, Llc | User-centric soft keyboard predictive technologies |
US20110202876A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | User-centric soft keyboard predictive technologies |
US10126936B2 (en) | 2010-02-12 | 2018-11-13 | Microsoft Technology Licensing, Llc | Typing assistance for editing |
US8782556B2 (en) * | 2010-02-12 | 2014-07-15 | Microsoft Corporation | User-centric soft keyboard predictive technologies |
US20110208507A1 (en) * | 2010-02-19 | 2011-08-25 | Google Inc. | Speech Correction for Typed Input |
US8423351B2 (en) * | 2010-02-19 | 2013-04-16 | Google Inc. | Speech correction for typed input |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
WO2011107751A3 (en) * | 2010-03-04 | 2011-10-20 | Touchtype Ltd | System and method for inputting text into electronic devices |
US9052748B2 (en) | 2010-03-04 | 2015-06-09 | Touchtype Limited | System and method for inputting text into electronic devices |
US10216408B2 (en) | 2010-06-14 | 2019-02-26 | Apple Inc. | Devices and methods for identifying user interface objects based on view hierarchy |
US9384185B2 (en) | 2010-09-29 | 2016-07-05 | Touchtype Ltd. | System and method for inputting text into electronic devices |
CN103201707A (en) * | 2010-09-29 | 2013-07-10 | 触摸式有限公司 | System and method for inputting text into electronic devices |
WO2012042217A1 (en) | 2010-09-29 | 2012-04-05 | Touchtype Ltd. | System and method for inputting text into electronic devices |
US10146765B2 (en) | 2010-09-29 | 2018-12-04 | Touchtype Ltd. | System and method for inputting text into electronic devices |
US20120110518A1 (en) * | 2010-10-29 | 2012-05-03 | Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. | Translation of directional input to gesture |
US20120110579A1 (en) * | 2010-10-29 | 2012-05-03 | Microsoft Corporation | Enterprise resource planning oriented context-aware environment |
US10026058B2 (en) * | 2010-10-29 | 2018-07-17 | Microsoft Technology Licensing, Llc | Enterprise resource planning oriented context-aware environment |
US9104306B2 (en) * | 2010-10-29 | 2015-08-11 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Translation of directional input to gesture |
US9886427B2 (en) * | 2010-11-01 | 2018-02-06 | Koninklijke Philips N.V. | Suggesting relevant terms during text entry |
US20130212475A1 (en) * | 2010-11-01 | 2013-08-15 | Koninklijke Philips Electronics N.V. | Suggesting relevant terms during text entry |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US20120278751A1 (en) * | 2011-04-29 | 2012-11-01 | Chih-Yu Chen | Input method and input module thereof |
US20120290291A1 (en) * | 2011-05-13 | 2012-11-15 | Gabriel Lee Gilbert Shelley | Input processing for character matching and predicted word matching |
EP2715489A4 (en) * | 2011-05-23 | 2014-06-18 | Microsoft Corp | Context aware input engine |
CN103547980A (en) * | 2011-05-23 | 2014-01-29 | 微软公司 | Context aware input engine |
EP2715489A2 (en) * | 2011-05-23 | 2014-04-09 | Microsoft Corporation | Context aware input engine |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US20130080964A1 (en) * | 2011-09-28 | 2013-03-28 | Kyocera Corporation | Device, method, and storage medium storing program |
US10235355B2 (en) | 2011-09-29 | 2019-03-19 | Microsoft Technology Licensing, Llc | System, method, and computer-readable storage device for providing cloud-based shared vocabulary/typing history for efficient social communication |
US20130085747A1 (en) * | 2011-09-29 | 2013-04-04 | Microsoft Corporation | System, Method and Computer-Readable Storage Device for Providing Cloud-Based Shared Vocabulary/Typing History for Efficient Social Communication |
US9785628B2 (en) * | 2011-09-29 | 2017-10-10 | Microsoft Technology Licensing, Llc | System, method and computer-readable storage device for providing cloud-based shared vocabulary/typing history for efficient social communication |
US9082404B2 (en) * | 2011-10-12 | 2015-07-14 | Fujitsu Limited | Recognizing device, computer-readable recording medium, recognizing method, generating device, and generating method |
US20130096918A1 (en) * | 2011-10-12 | 2013-04-18 | Fujitsu Limited | Recognizing device, computer-readable recording medium, recognizing method, generating device, and generating method |
US9122672B2 (en) | 2011-11-10 | 2015-09-01 | Blackberry Limited | In-letter word prediction for virtual keyboard |
EP2592567A1 (en) * | 2011-11-10 | 2013-05-15 | Research In Motion Limited | Methods and systems for removing or replacing keyboard prediction candidates |
EP2592566A1 (en) * | 2011-11-10 | 2013-05-15 | Research In Motion Limited | Touchscreen keyboard predictive display and generation of a set of characters |
US9652448B2 (en) | 2011-11-10 | 2017-05-16 | Blackberry Limited | Methods and systems for removing or replacing on-keyboard prediction candidates |
US9032322B2 (en) | 2011-11-10 | 2015-05-12 | Blackberry Limited | Touchscreen keyboard predictive display and generation of a set of characters |
US9715489B2 (en) | 2011-11-10 | 2017-07-25 | Blackberry Limited | Displaying a prediction candidate after a typing mistake |
US9310889B2 (en) | 2011-11-10 | 2016-04-12 | Blackberry Limited | Touchscreen keyboard predictive display and generation of a set of characters |
US10613746B2 (en) | 2012-01-16 | 2020-04-07 | Touchtype Ltd. | System and method for inputting text |
US9557913B2 (en) | 2012-01-19 | 2017-01-31 | Blackberry Limited | Virtual keyboard display having a ticker proximate to the virtual keyboard |
US9152323B2 (en) | 2012-01-19 | 2015-10-06 | Blackberry Limited | Virtual keyboard providing an indication of received input |
US9910588B2 (en) | 2012-02-24 | 2018-03-06 | Blackberry Limited | Touchscreen keyboard providing word predictions in partitions of the touchscreen keyboard in proximate association with candidate letters |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US9223623B2 (en) * | 2012-03-28 | 2015-12-29 | Bmc Software, Inc. | Dynamic service resource control |
US20130262680A1 (en) * | 2012-03-28 | 2013-10-03 | Bmc Software, Inc. | Dynamic service resource control |
US9201510B2 (en) | 2012-04-16 | 2015-12-01 | Blackberry Limited | Method and device having touchscreen keyboard with visual cues |
WO2013171481A3 (en) * | 2012-05-14 | 2014-07-10 | Touchtype Limited | Mechanism, system and method for synchronising devices |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
JP2015523629A (en) * | 2012-05-14 | 2015-08-13 | タッチタイプ リミテッド | Mechanisms, systems, and methods for synchronizing devices |
US20150134326A1 (en) * | 2012-05-14 | 2015-05-14 | Touchtype Limited | Mechanism for synchronising devices, system and method |
US10055397B2 (en) * | 2012-05-14 | 2018-08-21 | Touchtype Limited | Mechanism for synchronising devices, system and method |
CN104541266A (en) * | 2012-05-14 | 2015-04-22 | 触摸式有限公司 | Mechanism for synchronising devices, system and method |
US9207860B2 (en) | 2012-05-25 | 2015-12-08 | Blackberry Limited | Method and apparatus for detecting a gesture |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9116552B2 (en) | 2012-06-27 | 2015-08-25 | Blackberry Limited | Touchscreen keyboard providing selection of word predictions in partitions of the touchscreen keyboard |
US20140025371A1 (en) * | 2012-07-17 | 2014-01-23 | Samsung Electronics Co., Ltd. | Method and apparatus for recommending texts |
CN104487918A (en) * | 2012-07-20 | 2015-04-01 | 微软公司 | String predictions from buffer |
US9298274B2 (en) | 2012-07-20 | 2016-03-29 | Microsoft Technology Licensing, Llc | String predictions from buffer |
EP2875418B1 (en) * | 2012-07-20 | 2018-08-29 | Microsoft Technology Licensing, LLC | String predictions from buffer |
WO2014015205A1 (en) * | 2012-07-20 | 2014-01-23 | Microsoft Corporation | String predictions from buffer |
WO2014022322A1 (en) * | 2012-07-30 | 2014-02-06 | Microsoft Corporation | Generating string predictions using contexts |
US9195645B2 (en) | 2012-07-30 | 2015-11-24 | Microsoft Technology Licensing, Llc | Generating string predictions using contexts |
JP2015528968A (en) * | 2012-07-30 | 2015-10-01 | マイクロソフト コーポレーション | Generating string prediction using context |
CN104508604A (en) * | 2012-07-30 | 2015-04-08 | 微软公司 | Generating string predictions using contexts |
US20140068523A1 (en) * | 2012-08-28 | 2014-03-06 | Huawei Device Co., Ltd | Method and apparatus for optimizing handwriting input method |
US9063653B2 (en) | 2012-08-31 | 2015-06-23 | Blackberry Limited | Ranking predictions based on typing speed and typing confidence |
US9524290B2 (en) | 2012-08-31 | 2016-12-20 | Blackberry Limited | Scoring predictions based on prediction length and typing speed |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US20140108004A1 (en) * | 2012-10-15 | 2014-04-17 | Nuance Communications, Inc. | Text/character input system, such as for use with touch screens on mobile phones |
US9026428B2 (en) * | 2012-10-15 | 2015-05-05 | Nuance Communications, Inc. | Text/character input system, such as for use with touch screens on mobile phones |
US9747272B2 (en) * | 2012-10-16 | 2017-08-29 | Google Inc. | Feature-based autocorrection |
US20140188460A1 (en) * | 2012-10-16 | 2014-07-03 | Google Inc. | Feature-based autocorrection |
US9244905B2 (en) | 2012-12-06 | 2016-01-26 | Microsoft Technology Licensing, Llc | Communication context based predictive-text suggestion |
CN105144040A (en) * | 2012-12-06 | 2015-12-09 | 微软技术许可有限责任公司 | Communication context based predictive-text suggestion |
WO2014089524A1 (en) * | 2012-12-06 | 2014-06-12 | Microsoft Corporation | Communication context based predictive-text suggestion |
US10095405B2 (en) * | 2013-02-05 | 2018-10-09 | Google Llc | Gesture keyboard input of non-dictionary character strings |
US20170010800A1 (en) * | 2013-02-05 | 2017-01-12 | Google Inc. | Gesture keyboard input of non-dictionary character strings |
US20140281944A1 (en) * | 2013-03-14 | 2014-09-18 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US9977779B2 (en) * | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US9390079B1 (en) | 2013-05-10 | 2016-07-12 | D.R. Systems, Inc. | Voice commands for report editing |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11429190B2 (en) | 2013-06-09 | 2022-08-30 | Apple Inc. | Proxy gesture recognizer |
US9733716B2 (en) | 2013-06-09 | 2017-08-15 | Apple Inc. | Proxy gesture recognizer |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US9728184B2 (en) | 2013-06-18 | 2017-08-08 | Microsoft Technology Licensing, Llc | Restructuring deep neural network acoustic models |
US9311298B2 (en) | 2013-06-21 | 2016-04-12 | Microsoft Technology Licensing, Llc | Building conversational understanding systems using a toolset |
US20140379325A1 (en) * | 2013-06-21 | 2014-12-25 | Research In Motion Limited | Text entry at electronic communication device |
US9589565B2 (en) | 2013-06-21 | 2017-03-07 | Microsoft Technology Licensing, Llc | Environmentally aware dialog policies and response generation |
US9697200B2 (en) | 2013-06-21 | 2017-07-04 | Microsoft Technology Licensing, Llc | Building conversational understanding systems using a toolset |
US10572602B2 (en) | 2013-06-21 | 2020-02-25 | Microsoft Technology Licensing, Llc | Building conversational understanding systems using a toolset |
US10304448B2 (en) | 2013-06-21 | 2019-05-28 | Microsoft Technology Licensing, Llc | Environmentally aware dialog policies and response generation |
US9244906B2 (en) * | 2013-06-21 | 2016-01-26 | Blackberry Limited | Text entry at electronic communication device |
US10656957B2 (en) * | 2013-08-09 | 2020-05-19 | Microsoft Technology Licensing, Llc | Input method editor providing language assistance |
US20160196150A1 (en) * | 2013-08-09 | 2016-07-07 | Kun Jing | Input Method Editor Providing Language Assistance |
US11474688B2 (en) | 2013-08-26 | 2022-10-18 | Samsung Electronics Co., Ltd. | User device and method for creating handwriting content |
CN105518577A (en) * | 2013-08-26 | 2016-04-20 | 三星电子株式会社 | User device and method for creating handwriting content |
US10684771B2 (en) * | 2013-08-26 | 2020-06-16 | Samsung Electronics Co., Ltd. | User device and method for creating handwriting content |
US20150058718A1 (en) * | 2013-08-26 | 2015-02-26 | Samsung Electronics Co., Ltd. | User device and method for creating handwriting content |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9613625B2 (en) | 2014-02-24 | 2017-04-04 | Panasonic Intellectual Property Management Co., Ltd. | Data input device, data input method, storage medium, and in-vehicle apparatus |
EP2911148A1 (en) * | 2014-02-24 | 2015-08-26 | Panasonic Intellectual Property Management Co., Ltd. | Data input device, data input method, and in-vehicle apparatus |
AU2015236417B2 (en) * | 2014-03-27 | 2019-12-19 | Microsoft Technology Licensing, Llc | Flexible schema for language model customization |
US10497367B2 (en) | 2014-03-27 | 2019-12-03 | Microsoft Technology Licensing, Llc | Flexible schema for language model customization |
WO2015148333A1 (en) * | 2014-03-27 | 2015-10-01 | Microsoft Technology Licensing, Llc | Flexible schema for language model customization |
CN106133826A (en) * | 2014-03-27 | 2016-11-16 | 微软技术许可有限责任公司 | For the self-defining flexible modes of language model |
US9529794B2 (en) | 2014-03-27 | 2016-12-27 | Microsoft Technology Licensing, Llc | Flexible schema for language model customization |
US9614724B2 (en) | 2014-04-21 | 2017-04-04 | Microsoft Technology Licensing, Llc | Session-based device configuration |
US9520127B2 (en) | 2014-04-29 | 2016-12-13 | Microsoft Technology Licensing, Llc | Shared hidden layer combination for speech recognition systems |
US10901965B1 (en) * | 2014-05-12 | 2021-01-26 | Google Llc | Providing suggestions within a document |
US10223392B1 (en) * | 2014-05-12 | 2019-03-05 | Google Llc | Providing suggestions within a document |
US11907190B1 (en) * | 2014-05-12 | 2024-02-20 | Google Llc | Providing suggestions within a document |
US10111099B2 (en) | 2014-05-12 | 2018-10-23 | Microsoft Technology Licensing, Llc | Distributing content in managed wireless distribution networks |
US9959296B1 (en) * | 2014-05-12 | 2018-05-01 | Google Llc | Providing suggestions within a document |
US12197406B1 (en) * | 2014-05-12 | 2025-01-14 | Google Llc | Providing suggestions within a document |
US9874914B2 (en) | 2014-05-19 | 2018-01-23 | Microsoft Technology Licensing, Llc | Power management contracts for accessory devices |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US11120220B2 (en) | 2014-05-30 | 2021-09-14 | Apple Inc. | Device, method, and graphical user interface for a predictive keyboard |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US10691445B2 (en) | 2014-06-03 | 2020-06-23 | Microsoft Technology Licensing, Llc | Isolating a portion of an online computing service for testing |
US9477625B2 (en) | 2014-06-13 | 2016-10-25 | Microsoft Technology Licensing, Llc | Reversible connector for accessory devices |
US9717006B2 (en) | 2014-06-23 | 2017-07-25 | Microsoft Technology Licensing, Llc | Device quarantine in a wireless network |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
JP2016014987A (en) * | 2014-07-01 | 2016-01-28 | Kddi株式会社 | Input support device, input support system, and program |
US20170220129A1 (en) * | 2014-07-18 | 2017-08-03 | Shanghai Chule (Coo Tek) Information Technology Co., Ltd. | Predictive Text Input Method and Device |
US10031907B2 (en) * | 2014-07-28 | 2018-07-24 | International Business Machines Corporation | Context-based text auto completion |
US10929603B2 (en) * | 2014-07-28 | 2021-02-23 | International Business Machines Corporation | Context-based text auto completion |
US20160026639A1 (en) * | 2014-07-28 | 2016-01-28 | International Business Machines Corporation | Context-based text auto completion |
US20180267953A1 (en) * | 2014-07-28 | 2018-09-20 | International Business Machines Corporation | Context-based text auto completion |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US9703394B2 (en) * | 2015-03-24 | 2017-07-11 | Google Inc. | Unlearning techniques for adaptive language models in text entry |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10613825B2 (en) * | 2015-11-30 | 2020-04-07 | Logmein, Inc. | Providing electronic text recommendations to a user based on what is discussed during a meeting |
US20170154030A1 (en) * | 2015-11-30 | 2017-06-01 | Citrix Systems, Inc. | Providing electronic text recommendations to a user based on what is discussed during a meeting |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10937415B2 (en) * | 2016-06-15 | 2021-03-02 | Sony Corporation | Information processing device and information processing method for presenting character information obtained by converting a voice |
US20190130901A1 (en) * | 2016-06-15 | 2019-05-02 | Sony Corporation | Information processing device and information processing method |
US10372310B2 (en) | 2016-06-23 | 2019-08-06 | Microsoft Technology Licensing, Llc | Suppression of input images |
US10482133B2 (en) | 2016-09-07 | 2019-11-19 | International Business Machines Corporation | Creating and editing documents using word history |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US11425060B2 (en) * | 2016-09-20 | 2022-08-23 | Google Llc | System and method for transmitting a response in a messaging application |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
WO2018111702A1 (en) * | 2016-12-15 | 2018-06-21 | Microsoft Technology Licensing, Llc | Word order suggestion taking into account frequency and formatting information |
US10417332B2 (en) | 2016-12-15 | 2019-09-17 | Microsoft Technology Licensing, Llc | Predicting text by combining attempts |
CN110073349A (en) * | 2016-12-15 | 2019-07-30 | 微软技术许可有限责任公司 | Word order suggestion taking into account frequency and formatting information |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US9996524B1 (en) * | 2017-01-30 | 2018-06-12 | International Business Machines Corporation | Text prediction using multiple devices |
US10558749B2 (en) | 2017-01-30 | 2020-02-11 | International Business Machines Corporation | Text prediction using captured image from an image capture device |
US10223352B2 (en) * | 2017-01-30 | 2019-03-05 | International Business Machines Corporation | Text prediction using multiple devices |
US10223351B2 (en) * | 2017-01-30 | 2019-03-05 | International Business Machines Corporation | Text prediction using multiple devices |
US20180246875A1 (en) * | 2017-01-30 | 2018-08-30 | International Business Machines Corporation | Text prediction using multiple devices |
US10255268B2 (en) * | 2017-01-30 | 2019-04-09 | International Business Machines Corporation | Text prediction using multiple devices |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
CN107341138A (en) * | 2017-06-29 | 2017-11-10 | 珠海市魅族科技有限公司 | Information filling method and apparatus, computer device, and readable storage medium |
US20220366137A1 (en) * | 2017-07-31 | 2022-11-17 | Apple Inc. | Correcting input based on user context |
US11900057B2 (en) * | 2017-07-31 | 2024-02-13 | Apple Inc. | Correcting input based on user context |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US11194547B2 (en) * | 2018-06-22 | 2021-12-07 | Samsung Electronics Co., Ltd. | Text input device and method therefor |
US11762628B2 (en) * | 2018-06-22 | 2023-09-19 | Samsung Electronics Co., Ltd. | Text input device and method therefor |
US20220075593A1 (en) * | 2018-06-22 | 2022-03-10 | Samsung Electronics Co., Ltd. | Text input device and method therefor |
US11106905B2 (en) | 2018-09-04 | 2021-08-31 | Cerence Operating Company | Multi-character text input system with audio feedback and word completion |
WO2020051209A1 (en) * | 2018-09-04 | 2020-03-12 | Nuance Communications, Inc. | Multi-character text input system with audio feedback and word completion |
US11842044B2 (en) | 2019-06-01 | 2023-12-12 | Apple Inc. | Keyboard management user interfaces |
US11620046B2 (en) | 2019-06-01 | 2023-04-04 | Apple Inc. | Keyboard management user interfaces |
US11194467B2 (en) | 2019-06-01 | 2021-12-07 | Apple Inc. | Keyboard management user interfaces |
CN112214114A (en) * | 2019-07-12 | 2021-01-12 | 北京搜狗科技发展有限公司 | Input method and device and electronic equipment |
US11080335B2 (en) * | 2019-08-13 | 2021-08-03 | International Business Machines Corporation | Concept-based autosuggest based on previously identified items |
CN112560477A (en) * | 2020-12-09 | 2021-03-26 | 中科讯飞互联(北京)信息科技有限公司 | Text completion method, electronic device and storage device |
CN113076158A (en) * | 2021-03-26 | 2021-07-06 | 维沃移动通信有限公司 | Display control method and display control device |
Also Published As
Publication number | Publication date |
---|---|
US7912700B2 (en) | 2011-03-22 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US7912700B2 (en) | Context based word prediction | |
US9779080B2 (en) | Text auto-correction via N-grams | |
US7953692B2 (en) | Predicting candidates using information sources | |
US8972240B2 (en) | User-modifiable word lattice display for editing documents and search queries | |
US9471566B1 (en) | Method and apparatus for converting phonetic language input to written language output | |
US8126827B2 (en) | Predicting candidates using input scopes | |
US20190087403A1 (en) | Online spelling correction/phrase completion system | |
US9524291B2 (en) | Visual display of semantic information | |
US7080004B2 (en) | Grammar authoring system | |
US6789231B1 (en) | Method and system for providing alternatives for text derived from stochastic input sources | |
JP5462001B2 (en) | Contextual input method | |
Gong et al. | Alphabetically constrained keypad designs for text entry on mobile devices | |
US20120297294A1 (en) | Network search for writing assistance | |
CN101815996A (en) | Detect name entities and neologisms | |
EP2153352A1 (en) | Recognition architecture for generating asian characters | |
Van Halteren et al. | Linguistic Exploitation of Syntactic Databases: The Use of the Nijmegen LDB Program | |
KR20080085165A (en) | Input data expansion system and method, and wildcard insertion and input data expansion system | |
WO2021034395A1 (en) | Data-driven and rule-based speech recognition output enhancement | |
JP3992348B2 (en) | Morphological analysis method and apparatus, and Japanese morphological analysis method and apparatus | |
US7996768B2 (en) | Operations on document components filtered via text attributes | |
WO2022108671A1 (en) | Automatic document sketching | |
JP2006053906A (en) | Efficient multi-modal method for providing input to computing device | |
US20050165712A1 (en) | Method for operating software object using natural language and program for the same | |
JP5293607B2 (en) | Abbreviation generation apparatus and program, and abbreviation generation method | |
US20240386185A1 (en) | Enhanced generation of formatted and organized guides from unstructured spoken narrative using large language models |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: MICROSOFT CORPORATION, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOWER, JASON;FURUUCHI, KENJI;LIU, SIMON;AND OTHERS;SIGNING DATES FROM 20070416 TO 20070427;REEL/FRAME:019243/0675 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
FPAY | Fee payment | Year of fee payment: 4 |
AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034542/0001; Effective date: 20141014 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 12 |