US20180060312A1 - Providing ideogram translation - Google Patents
Providing ideogram translation
- Publication number
- US20180060312A1
- Authority
- US
- United States
- Prior art keywords
- ideogram
- translation
- message
- context
- sender
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06F17/2881—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G06F17/2775—
-
- G06F17/2836—
-
- G06F17/30684—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/12—Use of codes for handling textual entities
- G06F40/126—Character encoding
- G06F40/129—Handling non-Latin characters, e.g. kana-to-kanji conversion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/274—Converting codes to words; Guess-ahead of partial word inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/42—Data-driven translation
- G06F40/47—Machine-assisted translation, e.g. using translation memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/53—Processing of non-Latin text
Definitions
- Ideograms are a popular communication modality. However, many users do not know how to interpret ideograms or how to type them. Furthermore, some users cannot compose ideogram-based messages with sufficient speed. The sheer number and variety of available ideograms further complicates ideogram-based communication: a typical user spends significant time finding the ideograms they need. The lack of easy-to-use ideogram communication modalities leads to underutilization of ideograms as a communication medium.
- Embodiments are directed to ideogram translation.
- a communication application may detect a message being created, where the message includes one or more ideograms, and generate a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, the contextual information including one or more of a sender context, a recipient context, and a message context.
- the communication application may also identify two or more translations of the one or more ideograms, present the two or more translations to a sender for a selection among the two or more translations, and receive the selection among the two or more translations.
- the communication application may then provide the selection among the two or more translations to a communication module to be transmitted to a recipient for display.
- FIG. 1 is a conceptual diagram illustrating an example of providing ideogram translation, according to embodiments;
- FIG. 2 is a display diagram illustrating example components of a communication application that translates ideogram(s), according to embodiments;
- FIG. 3 is a display diagram illustrating components of a scheme to translate ideogram(s) in a communication application, according to embodiments;
- FIG. 4 is a display diagram illustrating a scheme to translate ideogram(s) using Unicode character intermediaries, according to embodiments;
- FIG. 5 is a simplified networked environment, where a system according to embodiments may be implemented;
- FIG. 6 is a block diagram of an example computing device, which may be used to provide ideogram translation, according to embodiments; and
- FIG. 7 is a logic flow diagram illustrating a process for providing ideogram translation, according to embodiments.
- ideogram(s) in an exchanged message may be translated into text.
- An ideogram or ideograph is a graphic symbol that represents an idea or concept, independent of any particular language, and specific words or phrases. Some ideograms may be comprehensible by familiarity with prior convention; others may convey their meaning through pictorial resemblance to a physical object, and thus may also be referred to as pictograms.
- the communication application may detect a message with ideogram(s), for example a smiling face, a frowning face, and/or a heart, among others.
- the communication application may process ideogram(s) (detected in the message) to generate a translation based on a content of the ideogram(s) and a contextual information associated with the message.
- the contextual information may include a sender context, a recipient context, and/or a message context.
- Each ideogram in the message may be matched to a corresponding word. However, in scenarios where the ideogram may correspond to multiple words, the user may be provided with a selection prompt to select the correct word that may be used to translate the ideogram.
- the translation may be presented to the recipient for display.
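The overview above (detect ideograms in a draft message, look up candidate translations, prompt the sender when a candidate is ambiguous, deliver text to the recipient) can be sketched as a minimal pipeline. The dictionary contents and function names below are hypothetical; the patent does not fix a data format or interface:

```python
# Hypothetical sketch of the claimed flow: detect ideograms in a draft
# message, look up candidate text translations, resolve ambiguity via a
# sender selection, and emit the translated text for the recipient.

IDEOGRAM_DICTIONARY = {              # assumed translation dictionary
    "\u2764": ["love", "heart"],     # heart ideogram
    "\U0001F600": ["smile", "grin"], # smiling-face ideogram
}

def detect_ideograms(message):
    """Return the ideograms present in the draft message."""
    return [ch for ch in message if ch in IDEOGRAM_DICTIONARY]

def translate(message, choose):
    """`choose` models the sender's selection among multiple candidates."""
    out = []
    for ch in message:
        candidates = IDEOGRAM_DICTIONARY.get(ch, [])
        if not candidates:
            out.append(ch)                      # ordinary text passes through
        elif len(candidates) == 1:
            out.append(candidates[0])           # unambiguous: no prompt
        else:
            out.append(choose(ch, candidates))  # prompt sender for a selection
    return "".join(out)

print(translate("I \u2764 you", lambda ch, c: c[0]))  # → "I love you"
```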
- program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.
- embodiments may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and comparable computing devices.
- Embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in both local and remote memory storage devices.
- Some embodiments may be implemented as a computer-implemented process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media.
- the computer program product may be a computer storage medium readable by a computer system and encoding a computer program that comprises instructions for causing a computer or computing system to perform example process(es).
- the computer-readable storage medium is a physical computer-readable memory device.
- the computer-readable storage medium can for example be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable hardware media.
- platform may be a combination of software and hardware components to provide ideogram translation. Examples of platforms include, but are not limited to, a hosted service executed over a plurality of servers, an application executed on a single computing device, and comparable systems.
- server generally refers to a computing device executing one or more software programs typically in a networked environment. More detail on these technologies and example operations is provided below.
- a computing device refers to a device comprising at least a memory and a processor, and includes a desktop computer, a laptop computer, a tablet computer, a smart phone, a vehicle mount computer, or a wearable computer.
- a memory may be a removable or non-removable component of a computing device configured to store one or more instructions to be executed by one or more processors.
- a processor may be a component of a computing device coupled to a memory and configured to execute programs in conjunction with instructions stored by the memory.
- a file is any form of structured data that is associated with audio, video, or similar content.
- An operating system is a system configured to manage hardware and software components of a computing device that provides common services and applications.
- An integrated module is a component of an application or service that is integrated within the application or service such that the application or service is configured to execute the component.
- a computer-readable memory device is a physical computer-readable storage medium implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable hardware media that includes instructions thereon to automatically save content to a location.
- a user experience is a visual display associated with an application or service through which a user interacts with the application or service.
- a user action refers to an interaction between a user and a user experience of an application or a user experience provided by a service that includes one of touch input, gesture input, voice command, eye tracking, gyroscopic input, pen input, mouse input, and keyboard input.
- An application programming interface may be a set of routines, protocols, and tools for an application or service that enable the application or service to interact or communicate with one or more other applications and services managed by separate entities.
- FIG. 1 is a conceptual diagram illustrating examples of providing ideogram translation, according to embodiments.
- a computing device 104 may execute a communication application 102 .
- the communication application 102 may include a messaging application.
- the computing device 104 may include a physical computer and/or a mobile computing device such as a smart phone and/or similar ones.
- the computing device 104 may also include special purpose and/or configured components that are optimized to transmit ideograms through the communication application 102 .
- a communication component of the computing device 104 may be customized to translate an ideogram to Unicode characters and transmit and receive the ideogram(s) as Unicode characters.
- the computing device 104 may execute the communication application 102 .
- the communication application 102 may initiate operations to translate ideogram(s) upon detecting a message 106 being created by a sender 110 that includes ideogram(s).
- An ideogram 108 may include a graphic that reflects an emotional state. Examples of the ideogram may include a smiling face, a frowning face, and/or a heart, among others.
- the ideogram 108 may be displayed as a graphic, an image, an animation, and/or similar ones.
- the message 106 may include components such as the ideogram 108 and word(s) that surround the ideogram 108 . Alternatively, the message 106 may only include the ideogram 108 and other ideogram(s).
- a user of the communication application 102 such as the sender 110 may desire to communicate with ideogram(s) but lack the knowledge or know-how to do so.
- the communication application 102 may provide automated ideogram translation.
- the communication application 102 may process the ideogram 108 to generate a translation 114 based on a content of the ideogram 108 and a contextual information associated with the message 106 .
- the contextual information may include a sender context, a recipient context, and/or a message context, among others.
- relationship(s) between the ideogram 108 and components of the message 106 (such as words that surround the ideogram 108 ) may be analyzed to identify a structure of the message 106 in relation to the ideogram 108 .
- a sentence and/or a set of words that have a structure similar to the message 106 may be selected as the translation 114 .
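The structural analysis above can be illustrated with a toy rule: each candidate translation carries a part of speech, and the slot following a pronoun prefers a verb. The candidate table and the part-of-speech rule are assumptions for illustration; the patent leaves the analysis method open:

```python
# Toy illustration of structure-aware candidate selection: the word
# preceding the ideogram constrains which translation fits the sentence.

CANDIDATES = {
    # heart ideogram: noun and verb readings (hypothetical table)
    "\u2764": [("heart", "noun"), ("love", "verb")],
}

def pick_by_structure(prev_word, ideogram):
    candidates = CANDIDATES[ideogram]
    # A pronoun such as "I" is usually followed by a verb.
    if prev_word.lower() in {"i", "we", "you", "they"}:
        for word, pos in candidates:
            if pos == "verb":
                return word
    return candidates[0][0]  # no structural cue: fall back to the first entry

print(pick_by_structure("I", "\u2764"))   # → "love"
print(pick_by_structure("my", "\u2764"))  # → "heart"
```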
- the computing device 104 may communicate with other client device(s) or server(s) through a network.
- the network may provide wired or wireless communications between network nodes such as the computing device 104 , other client device(s) and/or server(s), among others.
- The previous example(s) of providing ideogram translation in the communication application 102 are not provided in a limiting sense.
- the communication application 102 may transmit the message 106 to an ideogram translation provider and receive the translation 114 from the ideogram translation provider, among others.
- the sender 110 may interact with the communication application 102 with a keyboard based input, a mouse based input, a voice based input, a pen based input, and a gesture based input, among others.
- the gesture based input may include one or more touch based actions such as a touch action, a swipe action, and a combination of each, among others.
- While FIG. 1 has been described with specific components including the computing device 104 and the communication application 102 , embodiments are not limited to these components or system configurations and can be implemented with other system configurations employing fewer or additional components.
- FIG. 2 is a display diagram illustrating example components of a communication application that translates ideogram(s), according to embodiments.
- an inference engine 212 of a communication application 202 may detect a message 206 created by a sender that includes ideograms 208 .
- the ideograms 208 may include a heart and a smiling face.
- the inference engine 212 may generate a translation 216 of the ideograms 208 to text based on a content of the ideograms 208 and contextual information associated with the message 206 .
- the contextual information may include a sender context 220 , a recipient context 222 , and a message context 224 .
- the inference engine 212 may process the ideograms 208 to identify translations of the ideograms 208 .
- the inference engine 212 may query an ideogram translation dictionary of the communication application 202 with the ideograms 208 .
- the inference engine 212 may locate a translation 230 (love and heart) and another translation 232 (smile and face). Upon locating two or more translations, the inference engine 212 may interact with a sender of the message 206 to prompt the sender to select one that may be used as the translation 216 .
- a rendering engine 214 may be instructed to provide a listing of the translation 230 and the translation 232 to prompt the sender to make a selection.
- the inference engine 212 may designate the selection as the translation 216 .
- the translation 216 may be saved into the ideogram translation dictionary in relation to the ideograms 208 .
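Saving the selection back into the dictionary means later messages from the same sender can reuse the chosen translation instead of prompting again. A minimal sketch, assuming a per-sender key (the storage scheme is not specified by the patent):

```python
# Sketch of remembering the sender's selection: once the heart ideogram
# has been resolved to a word for this sender, later lookups reuse the
# stored choice instead of prompting again.

class TranslationDictionary:
    def __init__(self):
        self._chosen = {}  # (sender, ideogram) -> previously selected translation

    def lookup(self, sender, ideogram, candidates):
        """Return (result, needs_prompt)."""
        key = (sender, ideogram)
        if key in self._chosen:
            return self._chosen[key], False  # remembered: no prompt needed
        return candidates, True              # ambiguous: caller must prompt

    def remember(self, sender, ideogram, selection):
        self._chosen[(sender, ideogram)] = selection

d = TranslationDictionary()
print(d.lookup("alice", "\u2764", ["love", "heart"]))  # prompt required
d.remember("alice", "\u2764", "love")
print(d.lookup("alice", "\u2764", ["love", "heart"]))  # reuses "love"
```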
- the rendering engine 214 may be instructed to present the translation 216 to the recipient for display.
- the inference engine 212 may also process the ideograms 208 based on a message context 224 .
- a structure of the message 206 may be detected within the message context 224 .
- the structure may include location of components of the message 206 , relationships that define the location of the components, and/or grammatical relationships between the components, among others.
- the inference engine 212 may process the word 207 and the ideograms 208 within the message 206 to identify relationships 211 between the word and the ideograms 208 .
- the translation 216 may be generated based on the relationships 211 .
- the inference engine 212 may detect a noun such as “I” as the word 207 .
- the inference engine 212 may infer that a verb may follow the word 207 based on a grammatical relationship and a location relationship between the word 207 and the ideograms 208 .
- the inference engine 212 may query an ideogram translation provider with the structure of the message 206 , the word 207 , and the relationships detected between the word 207 and the ideograms 208 (in addition to a content of the ideograms 208 ).
- the inference engine 212 may receive the translation 216 from the ideogram translation provider.
- the translation may match the structure of the message and include the word 207 and the relationships 211 .
- the inference engine 212 may query a sentence fragment provider with the word 207 and the relationships 211 .
- a sentence fragment (such as "I love smile") may be received from the sentence fragment provider.
- the translation 216 may be generated by replacing the word 207 and the ideograms 208 with the sentence fragment. As such, only the set of components of the message surrounding the ideograms 208 may be processed to detect relationships, which may lower resource consumption compared to processing the remaining components 209 of the message 206 .
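The windowing idea can be sketched as follows: only tokens within a small radius of an ideogram are submitted for relationship analysis, and the rest of the message is left untouched. The radius and tokenization are assumptions for illustration:

```python
# Sketch of windowed processing: restrict relationship analysis to the
# tokens near each ideogram, leaving the remaining components untouched.

def context_window(tokens, is_ideogram, radius=2):
    """Return sorted indices of tokens within `radius` of any ideogram."""
    keep = set()
    for i, tok in enumerate(tokens):
        if is_ideogram(tok):
            keep.update(range(max(0, i - radius),
                              min(len(tokens), i + radius + 1)))
    return sorted(keep)

tokens = ["Long", "preamble", "here", ",", "I", "\u2764", "you", "!"]
idx = context_window(tokens, lambda t: t == "\u2764")
print([tokens[i] for i in idx])  # only the tokens near the ideogram
```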
- the inference engine 212 may also analyze contextual information associated with the sender to translate the ideograms 208 .
- the inference engine 212 may identify attributes of the sender.
- the attributes may include a role, a presence information, an emotional state, and/or a location of the sender, among others.
- the translations ( 230 and 232 ) may be filtered based on the attributes. For example, a translation that does not match the emotional state of the sender may not be included in a list of possible translations.
- the filtered translations may be provided to the sender for a selection. Upon receiving the selection from the sender, the translation 216 may be generated from the selection.
- the inference engine 212 may detect an emotional state of the sender as happy (for example, by recognizing the emotional state from a third party information provider such as a social networking provider, a camera associated with the user's device, and/or the context of the message the user has typed). The inference engine 212 may filter out a number of the translations that do not match the emotional state of the sender.
- the translations ( 230 and 232 ) may correlate with the happy emotional state of the sender. As such, the translations ( 230 and 232 ) may be presented to the sender for a selection through the rendering engine 214 . The selected translation may be used to generate the translation 216 .
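The emotional-state filtering described above can be sketched with sentiment-tagged candidates. The sentiment tags and the fallback behavior (never filtering down to an empty list) are assumptions; the patent does not specify how candidates are annotated:

```python
# Sketch of sentiment filtering: candidate translations tagged with a
# sentiment are filtered against the sender's detected emotional state.

CANDIDATES = [
    {"text": "love",  "sentiment": "happy"},
    {"text": "smile", "sentiment": "happy"},
    {"text": "grief", "sentiment": "sad"},
]

def filter_by_state(candidates, emotional_state):
    kept = [c["text"] for c in candidates if c["sentiment"] == emotional_state]
    # Assumed safeguard: if nothing matches, keep all candidates rather
    # than present an empty selection list.
    return kept or [c["text"] for c in candidates]

print(filter_by_state(CANDIDATES, "happy"))  # → ['love', 'smile']
```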
- contextual information associated with the recipient may be analyzed to translate the ideograms 208 .
- the inference engine 212 may identify attributes of the recipient.
- the attributes may include a role, a presence information, an emotional state, and/or a location of the recipient, among others.
- the translations ( 230 and 232 ) may be filtered based on the attributes.
- the filtered translations may be provided to the sender or the recipient for a selection.
- the translation 216 may be generated from the selection.
- the inference engine 212 may detect an emotional state of the recipient as happy (for example, by recognizing the emotional state from a third party information provider such as a social networking provider, a camera associated with the user's device, and/or the context of the message the user has typed). The inference engine 212 may filter out a number of the translations that do not match the emotional state of the recipient. The translations ( 230 and 232 ) may correlate with the happy emotional state of the recipient. The translations ( 230 and 232 ) may be presented to the recipient or the sender for a selection through the rendering engine 214 . The selected translation may be used to generate the translation 216 .
- FIG. 3 is a display diagram illustrating components of a scheme to translate ideogram(s) in a communication application, according to embodiments.
- an inference engine 312 of the communication application 302 may process ideograms 308 within a message 306 to generate a translation 316 .
- the communication engine may translate words of a new message 318 to new ideograms 322 .
- the inference engine 312 may detect a message 306 that includes ideograms 308 .
- the inference engine 312 may query an ideogram translation dictionary 324 to locate translations that match the ideograms 308 . If two or more translations are detected, the rendering engine 314 is prompted to provide the translations to a sender of the message 306 to request the sender to make a selection. Upon receiving the selection, the selection may be used to generate the translation 316 . Alternatively, if the ideograms 308 match a single set of translations, the translations may be used to generate the translation 316 .
- the ideograms 308 may be translated through an ideogram translation provider 326 .
- the ideogram translation provider may be provided with the message 306 to process the ideograms 308 , generate the translation 316 , and transmit the translation 316 to the communication application 302 .
- the rendering engine 314 may be prompted to provide the translation 316 to be transmitted to a recipient for display.
- a new message 318 may be detected.
- the new message 318 may have a content that solely includes words.
- the ideogram translation dictionary may be queried for a new translation 320 that includes new ideograms 322 .
- the new translation 320 may be found in the ideogram translation dictionary 324 .
- the new translation 320 may be presented to the recipient through the rendering engine 314 .
- the ideogram translations may be presented to the sender for a selection. A selected ideogram translation may be used to generate the new translation 320 .
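The reverse direction (words to ideograms) can be sketched by inverting the translation dictionary into a word-to-ideogram index. The index layout is an assumption for illustration:

```python
# Sketch of the reverse direction: invert the ideogram dictionary into a
# word -> ideogram index and rewrite an all-text message with ideograms.

IDEOGRAM_TO_WORDS = {
    "\u2764": ["love"],        # heart ideogram
    "\U0001F600": ["smile"],   # smiling-face ideogram
}

# Inverted index: word -> ideogram
WORD_TO_IDEOGRAM = {
    word: ideo
    for ideo, words in IDEOGRAM_TO_WORDS.items()
    for word in words
}

def to_ideograms(message):
    """Replace any word with a known ideogram; leave other words as-is."""
    return " ".join(WORD_TO_IDEOGRAM.get(w, w) for w in message.split())

print(to_ideograms("I love you"))  # the word "love" becomes the heart ideogram
```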
- the ideogram translation provider 326 may be used to translate the new message 318 .
- the inference engine 312 may directly query the ideogram translation provider 326 to translate the message 318 to the new translation 320 (with the new ideograms 322 ).
- the ideogram translation provider 326 may be queried (with the new message 318 ) upon a failure to locate the new translation 320 within the ideogram translation dictionary 324 .
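The dictionary-first lookup with a provider fallback is a standard cache pattern; a minimal sketch, where `query_provider` stands in for the network call to the ideogram translation provider 326 (hypothetical interface):

```python
# Sketch of dictionary-first translation with a remote-provider fallback:
# a local hit avoids the network round trip; a miss queries the provider
# and caches the result for subsequent messages.

def translate_with_fallback(ideogram, dictionary, query_provider):
    translation = dictionary.get(ideogram)
    if translation is not None:
        return translation                  # local hit
    translation = query_provider(ideogram)  # miss: ask the provider
    dictionary[ideogram] = translation      # cache for next time
    return translation

local = {"\u2764": "love"}
provider = lambda ch: "smile"               # stub for the remote provider
print(translate_with_fallback("\u2764", local, provider))      # local hit
print(translate_with_fallback("\U0001F600", local, provider))  # fallback
```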
- FIG. 4 is a display diagram illustrating a scheme to translate ideogram(s) using Unicode character intermediaries, according to embodiments.
- an inference engine 412 of a communication application 402 may translate a message 406 with an ideogram 408 by converting the ideogram 408 to Unicode characters 410 .
- An ideogram translation dictionary may be queried with the Unicode characters 410 to locate a translation associated with the Unicode characters 410 .
- the translation may be used to construct a translated sentence 416 by replacing the ideogram 408 with the translation.
- the translated sentence 416 may be presented to the recipient as the translation of the message 406 through the rendering engine 414 .
- the inference engine 412 may prompt the rendering engine 414 to provide the two or more translations ( 430 and 432 ) for a selection to the sender.
- the sender may be instructed to make a selection from the two or more translations ( 430 and 432 ).
- the selected translation ( 430 ) may be used to construct the translated sentence 416 .
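The Unicode-intermediary scheme of FIG. 4 can be sketched by normalizing each ideogram to its code point sequence and keying the dictionary on that sequence, so lookups do not depend on how a particular platform renders the glyph. The key format is an assumption:

```python
# Sketch of the Unicode-intermediary scheme: an ideogram is converted to
# a stable code point key, and the translation dictionary is keyed on
# that sequence rather than on raw glyphs.

def to_codepoints(ideogram):
    """Normalize an ideogram to a code point key, e.g. 'U+2764'."""
    return " ".join(f"U+{ord(ch):04X}" for ch in ideogram)

DICTIONARY = {"U+2764": "love"}  # keyed on code points, not glyphs

def translate_via_unicode(ideogram):
    return DICTIONARY.get(to_codepoints(ideogram))

print(to_codepoints("\u2764"))           # → "U+2764"
print(translate_via_unicode("\u2764"))   # → "love"
```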
- the communication application may be employed to provide ideogram translation.
- An increased user efficiency with the communication application 102 may occur as a result of processing the ideogram and components of a message that have a relationship with the ideogram to generate the translation.
- automatically translating ideograms to words or words to ideograms within a communication based on user demand, by the communication application 102 may reduce processor load, increase processing speed, conserve memory, and reduce network bandwidth usage.
- the actions/operations described herein are not a mere use of a computer, but address results that are a direct consequence of software used as a service offered to large numbers of users and applications.
- FIG. 1 through 4 The example scenarios and schemas in FIG. 1 through 4 are shown with specific components, data types, and configurations. Embodiments are not limited to systems according to these example configurations. Providing ideogram translation may be implemented in configurations employing fewer or additional components in applications and user interfaces. Furthermore, the example schema and components shown in FIG. 1 through 4 and their subcomponents may be implemented in a similar manner with other values using the principles described herein.
- FIG. 5 is an example networked environment, where embodiments may be implemented.
- a communication application configured to translate ideograms may be implemented via software executed over one or more servers 514 such as a hosted service.
- the platform may communicate with communication applications on individual computing devices such as a smart phone 513 , a mobile computer 512 , or desktop computer 511 (‘client devices’) through network(s) 510 .
- Communication applications executed on any of the client devices 511 - 513 may facilitate communications via application(s) executed by servers 514 , or on individual server 516 .
- a communication application may detect a message created by a sender that includes ideogram(s).
- the ideogram(s) may be processed to generate a translation based on a content of the ideogram and contextual information associated with the message.
- the contextual information may include a sender context, a recipient context, and/or a message context.
- the translation may be provided for display to the recipient.
- the communication application may store data associated with the ideograms in data store(s) 519 directly or through database server 518 .
- Network(s) 510 may comprise any topology of servers, clients, Internet service providers, and communication media.
- a system according to embodiments may have a static or dynamic topology.
- Network(s) 510 may include secure networks such as an enterprise network, an unsecure network such as a wireless open network, or the Internet.
- Network(s) 510 may also coordinate communication over other networks such as Public Switched Telephone Network (PSTN) or cellular networks.
- network(s) 510 may include short range wireless networks such as Bluetooth or similar ones.
- Network(s) 510 provide communication between the nodes described herein.
- network(s) 510 may include wireless media such as acoustic, RF, infrared and other wireless media.
- FIG. 6 is a block diagram of an example computing device, which may be used to provide ideogram translation, according to embodiments.
- computing device 600 may be used as a server, desktop computer, portable computer, smart phone, special purpose computer, or similar device.
- the computing device 600 may include one or more processors 604 and a system memory 606 .
- a memory bus 608 may be used for communication between the processor 604 and the system memory 606 .
- the basic configuration 602 may be illustrated in FIG. 6 by those components within the inner dashed line.
- the processor 604 may be of any type, including but not limited to a microprocessor ( ⁇ P), a microcontroller ( ⁇ C), a digital signal processor (DSP), or any combination thereof.
- the processor 604 may include one or more levels of caching, such as a level cache memory 612 , one or more processor cores 614 , and registers 616 .
- the example processor cores 614 may (each) include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof.
- An example memory controller 618 may also be used with the processor 604 , or in some implementations, the memory controller 618 may be an internal part of the processor 604 .
- the system memory 606 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof.
- the system memory 606 may include an operating system 620 , a communication application 622 , and a program data 624 .
- the communication application 622 may include components such as an inference engine 626 and a rendering engine 627 .
- the inference engine 626 and the rendering engine 627 may execute the processes associated with the communication application 622 .
- the inference engine 626 may detect a message created by a sender that includes ideogram(s).
- the ideogram(s) may be processed to generate a translation based on a content of the ideogram and contextual information associated with the message.
- the contextual information may include a sender context, a recipient context, and/or a message context.
- the rendering engine 627 may provide the translation to the recipient for display.
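- The detect-translate-provide flow attributed to the inference engine 626 and the rendering engine 627 above may be sketched, purely for illustration, as follows. The function names and the tiny ideogram dictionary are assumptions made for this sketch; the disclosure does not specify concrete interfaces.

```python
# Minimal sketch of the inference/rendering flow described above.
# The dictionary contents and function names are illustrative assumptions.
IDEOGRAM_DICTIONARY = {"\u2661": "heart", "\U0001F60A": "smiling face"}

def detect_ideograms(message):
    """Inference engine step: find known ideograms in a sender's message."""
    return [ch for ch in message if ch in IDEOGRAM_DICTIONARY]

def generate_translation(message):
    """Replace each detected ideogram with its textual equivalent."""
    return "".join(IDEOGRAM_DICTIONARY.get(ch, ch) for ch in message)
```

A message such as "I ♡ you" would be detected as containing one ideogram and translated to "I heart you" before being provided to the recipient for display.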
- the communication application 622 may provide a message through a communication module associated with the computing device 600 .
- An example of the communication module may include a communication device 666 , among other devices that may be communicatively coupled to the computing device 600 .
- the program data 624 may also include, among other data, ideogram data 628 , or the like, as described herein.
- the ideogram data 628 may include translations.
- the computing device 600 may have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 602 and any desired devices and interfaces.
- a bus/interface controller 630 may be used to facilitate communications between the basic configuration 602 and one or more data storage devices 632 via a storage interface bus 634 .
- the data storage devices 632 may be one or more removable storage devices 636 , one or more non-removable storage devices 638 , or a combination thereof.
- Examples of the removable storage and the non-removable storage devices may include magnetic disk devices, such as flexible disk drives and hard-disk drives (HDDs), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSDs), and tape drives, to name a few.
- Example computer storage media may include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
- the system memory 606 , the removable storage devices 636 and the non-removable storage devices 638 are examples of computer storage media.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs), solid state drives, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by the computing device 600 . Any such computer storage media may be part of the computing device 600 .
- the computing device 600 may also include an interface bus 640 for facilitating communication from various interface devices (for example, one or more output devices 642 , one or more peripheral interfaces 644 , and one or more communication devices 666 ) to the basic configuration 602 via the bus/interface controller 630 .
- Some of the example output devices 642 include a graphics processing unit 648 and an audio processing unit 650 , which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 652 .
- One or more example peripheral interfaces 644 may include a serial interface controller 654 or a parallel interface controller 656 , which may be configured to communicate with external devices such as input devices (for example, keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (for example, printer, scanner, etc.) via one or more I/O ports 658 .
- An example of the communication device(s) 666 includes a network controller 660 , which may be arranged to facilitate communications with one or more other computing devices 662 over a network communication link via one or more communication ports 664 .
- the one or more other computing devices 662 may include servers, computing devices, and comparable devices.
- the network communication link may be one example of communication media.
- Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
- a “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media.
- the term computer readable media as used herein may include both storage media and communication media.
- the computing device 600 may be implemented as a part of a general purpose or specialized server, mainframe, or similar computer, which includes any of the above functions.
- the computing device 600 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
- Example embodiments may also include methods to provide ideogram translation. These methods can be implemented in any number of ways, including the structures described herein. One such way may be by machine operations, of devices of the type described in the present disclosure. Another optional way may be for one or more of the individual operations of the methods to be performed in conjunction with one or more human operators performing some of the operations while other operations may be performed by machines. These human operators need not be collocated with each other, but each can be only with a machine that performs a portion of the program. In other embodiments, the human interaction can be automated such as by pre-selected criteria that may be machine automated.
- FIG. 7 is a logic flow diagram illustrating a process for providing ideogram translation, according to embodiments.
- Process 700 may be implemented on a computing device, such as the computing device 600 or another system.
- Process 700 begins with operation 710 , where the communication application detects a message created by a sender that includes ideogram(s).
- An ideogram may include a graphic that reflects an emotional state.
- the communication application may generate a translation of the ideogram(s) based on a content of the ideogram(s) and a contextual information associated with the message at operation 720 .
- the contextual information may include a sender context, a recipient context, and/or a message context.
- Each ideogram in the message may be matched to a translation. However, in scenarios where the ideogram may correspond to multiple translations, the sender may be provided with a selection prompt to select the correct translation that may be used to translate the ideogram.
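- The matching-with-fallback-to-a-prompt behavior described above may be sketched as below. The candidate table and the `choose` callback are assumptions made for illustration; in the described system the choice would come from a selection prompt presented to the sender.

```python
# Illustrative sketch of the selection prompt for an ambiguous ideogram.
CANDIDATE_TRANSLATIONS = {"\u2661": ["love", "heart"]}

def resolve_translation(ideogram, choose):
    """Return the unique translation, or defer the choice to the sender."""
    options = CANDIDATE_TRANSLATIONS.get(ideogram, [])
    if not options:
        return None           # unknown ideogram: leave untranslated
    if len(options) == 1:
        return options[0]     # unambiguous: no prompt needed
    return choose(options)    # ambiguous: the sender selects
```
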
- the translation may be provided to a recipient for display.
- process 700 is for illustration purposes. Providing ideogram translation may be implemented by similar processes with fewer or additional steps, as well as in a different order of operations, using the principles described herein.
- the operations described herein may be executed by one or more processors operated on one or more computing devices, one or more processor cores, specialized processing devices, and/or general purpose processors, among other examples.
- a computing device to provide ideogram translation includes a communication module, a memory configured to store instructions associated with a communication application, and a processor coupled to the memory and the communication module.
- the processor executes the communication application in conjunction with the instructions stored in the memory.
- the communication application includes an inference engine and a rendering engine.
- the inference engine is configured to detect a message created by a sender, where the message includes one or more ideograms and generate a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context.
- the rendering engine is configured to provide the translation to the communication module to be transmitted to a recipient for display.
- the inference engine is further configured to identify two or more translations of the one or more ideograms and prompt the rendering engine to present the two or more translations to the sender for a selection among the two or more translations.
- the inference engine is further configured to receive the selection among the two or more translations from the sender, designate the selection among the two or more translations as the translation corresponding to the one or more ideograms, and save the one or more ideograms and the translation in an ideogram translation dictionary.
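- The designate-and-save step recited above may be sketched as follows; the variable names and the in-memory dictionary are assumptions for illustration, not the engine's actual interfaces.

```python
# Hypothetical sketch: designate the sender's selection and save it to an
# ideogram translation dictionary so later messages can reuse it directly.
ideogram_translation_dictionary = {}

def designate_translation(ideograms, candidates, selected_index):
    """Record the sender's choice for the given ideogram sequence."""
    translation = candidates[selected_index]
    ideogram_translation_dictionary[ideograms] = translation
    return translation
```
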
- the inference engine is further configured to detect a structure of the message as the message context, where the structure includes one or more words adjacent to the one or more ideograms, process the one or more words and the one or more ideograms to identify one or more relationships between the one or more words and the one or more ideograms, and generate the translation of the one or more ideograms based on the one or more relationships with the one or more words.
- the inference engine is further configured to query an ideogram translation provider with the structure of the message, the one or more words, and the one or more relationships and receive the translation from the ideogram translation provider.
- the inference engine is further configured to query a sentence fragment provider with the structure of the message, the one or more words and the one or more relationships, receive a sentence fragment that matches the one or more relationships from the sentence fragment provider, where the sentence fragment includes the one or more words, and generate the translation by replacing the one or more words and the one or more ideograms with the sentence fragment within the message.
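- The substitution step at the end of the recitation above may be sketched as below; the fragment is taken as already returned by a (hypothetical) sentence fragment provider, since only the replacement itself is being illustrated.

```python
# Sketch of the fragment-substitution step: only the word/ideogram span is
# replaced, so the rest of the message never needs to be processed.
def apply_sentence_fragment(message, span, fragment):
    """Replace the first occurrence of the word/ideogram span."""
    return message.replace(span, fragment, 1)
```
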
- the inference engine is further configured to analyze the sender context to identify an attribute of the sender, where the attribute of the sender includes one or more of a role, a presence information, an emotional state, and a location of the sender and generate the translation of the one or more ideograms based on a selection of one or more textual equivalents for the one or more ideograms based on the identified attribute.
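- One possible realization of the attribute-based selection recited above is sketched below, using the emotional state attribute. The mood tags on candidate translations are an assumption for this sketch; the disclosure does not specify how candidates are annotated.

```python
# Hypothetical sketch: each candidate textual equivalent carries a mood tag,
# and candidates that conflict with the sender's emotional state are dropped.
CANDIDATES = [
    {"text": "love and heart", "mood": "happy"},
    {"text": "broken heart", "mood": "sad"},
]

def filter_by_emotional_state(candidates, sender_state):
    """Keep only translations consistent with the sender's emotional state."""
    return [c["text"] for c in candidates if c["mood"] == sender_state]
```
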
- the inference engine is further configured to analyze the recipient context to identify an attribute of the recipient, where the attribute of the recipient includes one or more of a role, a presence information, an emotional state, and a location of the recipient and generate the translation of the one or more ideograms based on a selection of one or more textual equivalents for the one or more ideograms based on the identified attribute.
- the inference engine is further configured to identify two or more textual equivalents for the one or more ideograms, analyze the two or more textual equivalents based on the one or more of the sender context, the recipient context, and the message context, and select one of the two or more textual equivalents as the translation based on the analysis.
- the inference engine is further configured to provide the one or more ideograms along with the translation to the communication module to be transmitted to a recipient for display.
- the one or more ideograms include one of an icon, a pictogram, and an emoji.
- a method executed on a computing device to provide ideogram translation includes detecting a message being created, where the message includes one or more ideograms, generating a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context, identifying two or more translations of the one or more ideograms, presenting the two or more translations to a sender for a selection among the two or more translations, receiving the selection among the two or more translations, and providing the selection among the two or more translations to a communication module to be transmitted to a recipient for display.
- the method further includes converting the one or more ideograms to one or more sets of Unicode characters that correspond to the one or more ideograms, searching an ideogram translation dictionary using the one or more sets of Unicode characters, locating one or more words that match the one or more sets of Unicode characters, and generating the translation from the one or more words.
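- The Unicode-keyed dictionary lookup described above may be sketched as follows; the code point labels and the two-entry dictionary are assumptions made for illustration.

```python
# Sketch of translation via Unicode intermediaries: convert each ideogram
# to a code point key, then search the dictionary with those keys.
IDEOGRAM_TRANSLATION_DICTIONARY = {"U+2661": "heart", "U+1F60A": "smiling face"}

def to_unicode_keys(ideograms):
    """Convert each ideogram character to its Unicode code point label."""
    return [f"U+{ord(ch):04X}" for ch in ideograms]

def translate_via_unicode(ideograms):
    """Locate the words matching the code points and join them."""
    words = [IDEOGRAM_TRANSLATION_DICTIONARY[key]
             for key in to_unicode_keys(ideograms)
             if key in IDEOGRAM_TRANSLATION_DICTIONARY]
    return " ".join(words)
```
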
- Generating the translation of the one or more ideograms based on the sender context includes analyzing a history of the sender's messages to other recipients and identifying the two or more translations based on the analysis.
- Generating the translation of the one or more ideograms based on the recipient context includes analyzing a history of the recipient's messages from other senders and identifying the two or more translations based on the analysis.
- Generating the translation of the one or more ideograms based on the message context includes analyzing one or more of a conversation that includes the message, a prior message, and a number of recipients and identifying the two or more translations based on the analysis.
- a computer-readable memory device with instructions stored thereon to provide ideogram translation includes receiving a message that includes one or more ideograms, generating a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context, and providing the translation to a recipient of the message for display.
- the instructions further include detecting a structure of the message within the message context, where the structure includes one or more words adjacent to the one or more ideograms, processing the one or more words and the one or more ideograms to identify one or more relationships between the one or more words and the one or more ideograms, and generating the translation of the one or more ideograms based on the one or more relationships with the one or more words.
- the instructions further include analyzing one or more of a history of the recipient's messages from other senders, a history of the sender's messages to other recipients, a conversation that includes the message, a prior message, and a number of recipients and generating the translation based on the analysis.
- the means for providing ideogram translation includes a means for detecting a message created by a sender, where the message includes one or more ideograms, a means for generating a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context, and a means for providing the translation to a recipient for display.
Abstract
Various approaches to provide ideogram translation are described. A communication application initiates operations to translate ideogram(s) upon detecting a message created by a sender that includes ideogram(s). A translation of the ideogram(s) is generated based on a content of the ideogram(s) and contextual information associated with the message. The contextual information includes a sender context, a recipient context, or a message context. The translation is provided to the recipient for display.
Description
- Information collection, management, and analysis have changed work processes and associated data management. Automation and improvements in daily processes have expanded the scope of capabilities offered by applications consumed daily by users. With the development of faster and smaller electronics, execution of mass processes at systems providing applications and services has become feasible. Indeed, services enhancing provided applications have become common features in modern application environments. Such systems provide a wide variety of applications, such as web browsers, that present users with expanded functionality. Many such applications provide communication modalities and attempt to improve media consumption. Communication applications consume significant resources but have large potential for performance improvements through automation.
- Ideograms are a popular communication modality. However, many users do not know how to interpret ideograms or how to type them. Furthermore, some users are not sufficiently savvy to communicate with ideograms at sufficient speed. The number and variety of available ideograms further complicate communication with them. A typical user spends significant time finding the ideograms in demand. The lack of easy-to-use ideogram communication modalities leads to underutilization of ideograms as a communication medium.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to exclusively identify key features or essential features of the claimed subject matter, nor is it intended as an aid in determining the scope of the claimed subject matter.
- Embodiments are directed to ideogram translation. A communication application, according to embodiments, may detect a message being created, where the message includes one or more ideograms, and generate a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, the contextual information including one or more of a sender context, a recipient context, and a message context. The communication application may also identify two or more translations of the one or more ideograms, present the two or more translations to a sender for a selection among the two or more translations, and receive the selection among the two or more translations. The communication application may then provide the selection among the two or more translations to a communication module to be transmitted to a recipient for display.
- These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory and do not restrict aspects as claimed.
-
FIG. 1 is a conceptual diagram illustrating an example of providing ideogram translation, according to embodiments; -
FIG. 2 is a display diagram illustrating example components of a communication application that translates ideogram(s), according to embodiments; -
FIG. 3 is a display diagram illustrating components of a scheme to translate ideogram(s) in a communication application, according to embodiments; -
FIG. 4 is a display diagram illustrating a scheme to translate ideogram(s) using Unicode character intermediaries, according to embodiments; -
FIG. 5 is a simplified networked environment, where a system according to embodiments may be implemented; -
FIG. 6 is a block diagram of an example computing device, which may be used to provide ideogram translation, according to embodiments; and -
FIG. 7 is a logic flow diagram illustrating a process for providing ideogram translation, according to embodiments. - As briefly described above, ideogram(s) in an exchanged message may be translated into text. An ideogram or ideograph is a graphic symbol that represents an idea or concept, independent of any particular language and of specific words or phrases. Some ideograms may be comprehensible only by familiarity with prior convention; others may convey their meaning through pictorial resemblance to a physical object, and thus may also be referred to as pictograms. In an example scenario, the communication application may detect a message with ideogram(s), for example a smiling face, a frowning face, and/or a heart (♡), among others. The communication application may process the ideogram(s) (detected in the message) to generate a translation based on a content of the ideogram(s) and contextual information associated with the message. The contextual information may include a sender context, a recipient context, and/or a message context. Each ideogram in the message may be matched to a corresponding word. However, in scenarios where an ideogram may correspond to multiple words, the user may be provided with a selection prompt to select the correct word that may be used to translate the ideogram. Next, the translation may be presented to the recipient for display.
- In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown by way of illustrations, specific embodiments, or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made without departing from the spirit or scope of the present disclosure. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims and their equivalents.
- While some embodiments will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer, those skilled in the art will recognize that aspects may also be implemented in combination with other program modules.
- Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that embodiments may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and comparable computing devices. Embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
- Some embodiments may be implemented as a computer-implemented process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage medium readable by a computer system and encoding a computer program that comprises instructions for causing a computer or computing system to perform example process(es). The computer-readable storage medium is a physical computer-readable memory device. The computer-readable storage medium can for example be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable hardware media.
- Throughout this specification, the term “platform” may be a combination of software and hardware components to provide ideogram translation. Examples of platforms include, but are not limited to, a hosted service executed over a plurality of servers, an application executed on a single computing device, and comparable systems. The term “server” generally refers to a computing device executing one or more software programs typically in a networked environment. More detail on these technologies and example operations is provided below.
- A computing device, as used herein, refers to a device comprising at least a memory and a processor that includes a desktop computer, a laptop computer, a tablet computer, a smart phone, a vehicle mount computer, or a wearable computer. A memory may be a removable or non-removable component of a computing device configured to store one or more instructions to be executed by one or more processors. A processor may be a component of a computing device coupled to a memory and configured to execute programs in conjunction with instructions stored by the memory. A file is any form of structured data that is associated with audio, video, or similar content. An operating system is a system configured to manage hardware and software components of a computing device that provides common services and applications. An integrated module is a component of an application or service that is integrated within the application or service such that the application or service is configured to execute the component. A computer-readable memory device is a physical computer-readable storage medium implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, or a compact disk, and comparable hardware media that includes instructions thereon to automatically save content to a location. A user experience is a visual display associated with an application or service through which a user interacts with the application or service. A user action refers to an interaction between a user and a user experience of an application or a user experience provided by a service that includes one of touch input, gesture input, voice command, eye tracking, gyroscopic input, pen input, mouse input, and keyboard input.
An application programming interface (API) may be a set of routines, protocols, and tools for an application or service that enable the application or service to interact or communicate with one or more other applications and services managed by separate entities.
-
FIG. 1 is a conceptual diagram illustrating examples of providing ideogram translation, according to embodiments. - In a diagram 100, a
computing device 104 may execute a communication application 102 . The communication application 102 may include a messaging application. The computing device 104 may include a physical computer and/or a mobile computing device such as a smart phone and/or similar ones. The computing device 104 may also include special purpose and/or configured components that are optimized to transmit ideograms through the communication application 102 . For example, a communication component of the computing device 104 may be customized to translate an ideogram to Unicode characters and transmit and receive the ideogram(s) as Unicode characters. - The
computing device 104 may execute the communication application 102 . The communication application 102 may initiate operations to translate ideogram(s) upon detecting a message 106 being created by a sender 110 that includes ideogram(s). An ideogram 108 may include a graphic that reflects an emotional state. Examples of the ideogram may include a smiling face, a frowning face, and/or a heart (♡), among others. The ideogram 108 may be displayed as a graphic, an image, an animation, and/or the like. The message 106 may include components such as the ideogram 108 and word(s) that surround the ideogram 108 . Alternatively, the message 106 may include only the ideogram 108 and other ideogram(s). - A user of the
communication application 102 , such as the sender 110 , may desire to communicate with ideogram(s) but lack the knowledge or know-how to do so. As such, the communication application 102 may provide automated ideogram translation. The communication application 102 may process the ideogram 108 to generate a translation 114 based on a content of the ideogram 108 and contextual information associated with the message 106 . The contextual information may include a sender context, a recipient context, and/or a message context, among others. For example, relationship(s) between the ideogram 108 and components of the message 106 (such as words that surround the ideogram 108 ) may be analyzed to identify a structure of the message 106 in relation to the ideogram 108 . A sentence and/or a set of words that has a structure similar to the message 106 may be selected as the translation 114 . - The
computing device 104 may communicate with other client device(s) or server(s) through a network. The network may provide wired or wireless communications between network nodes such as the computing device 104 and other client device(s) and/or server(s), among others. The previous example(s) of providing ideogram translation in the communication application 102 are not provided in a limiting sense. Alternatively, the communication application 102 may transmit the message 106 to an ideogram translation provider and receive the translation 114 from the ideogram translation provider, among other approaches. - The
sender 110 may interact with the communication application 102 through keyboard based input, mouse based input, voice based input, pen based input, and/or gesture based input, among others. The gesture based input may include one or more touch based actions such as a touch action, a swipe action, and a combination of each, among others. - While the example system in
FIG. 1 has been described with specific components including the computing device 104 and the communication application 102 , embodiments are not limited to these components or system configurations and can be implemented with other system configurations employing fewer or additional components. -
FIG. 2 is a display diagram illustrating example components of a communication application that translates ideogram(s), according to embodiments. - In a diagram 200, an
inference engine 212 of a communication application 202 may detect a message 206 created by a sender that includes ideograms 208 . The ideograms 208 may include a heart (♡) and a smiling face. The inference engine 212 may generate a translation 216 of the ideograms 208 into text based on a content of the ideograms 208 and contextual information associated with the message 206 . The contextual information may include a sender context 220 , a recipient context 222 , and a message context 224 . - The
inference engine 212 may process the ideograms 208 to identify translations of the ideograms 208 . For example, the inference engine 212 may query an ideogram translation dictionary of the communication application 202 with the ideograms 208 . The inference engine 212 may locate a translation 230 (love and heart) and another translation 232 (smile and face). Upon locating two or more translations, the inference engine 212 may interact with a sender of the message 206 to prompt the sender to select one that may be used as the translation 216 . - In an example scenario, a
rendering engine 214 may be instructed to provide a listing of the translation 230 and the translation 232 to prompt the sender to make a selection. Upon receiving the selection, the inference engine 212 may designate the selection as the translation 216 . The translation 216 may be saved into the ideogram translation dictionary in relation to the ideograms 208 . Furthermore, the rendering engine 214 may be instructed to present the translation 216 to the recipient for display. - The
inference engine 212 may also process the ideograms 208 based on a message context 224 . For example, a structure of the message 206 may be detected within the message context 224 . The structure may include location of components of the message 206 , relationships that define the location of the components, and/or grammatical relationships between the components, among others. The inference engine 212 may process the word 207 and the ideograms 208 within the message 206 to identify relationships 211 between the word and the ideograms 208 . The translation 216 may be generated based on the relationships 211 . - For example, the
inference engine 212 may detect a noun such as "I" as the word 207. The inference engine 212 may infer that a verb may follow the word 207 based on a grammatical relationship and a location relationship between the word 207 and the ideograms 208. As such, the inference engine 212 may query an ideogram translation provider with the structure of the message 206, the word 207, and the relationships detected between the word 207 and the ideograms 208 (in addition to a content of the ideograms 208). In response, the inference engine 212 may receive the translation 216 from the ideogram translation provider. The translation may match the structure of the message and include the word 207 and the relationships 211. - Alternatively, the
inference engine 212 may query a sentence fragment provider with the word 207 and the relationships 211. In response, a sentence fragment (such as I love smile) may be received from the sentence fragment provider. The translation 216 may be generated by replacing the word 207 and the ideograms 208 with the sentence fragment. As such, only a set of components of the message surrounding the ideograms 208 may be processed to detect relationships, which may lower resource consumption compared to processing the remaining components 209 of the message 206. - The
inference engine 212 may also analyze contextual information associated with the sender to translate the ideograms 208. The inference engine 212 may identify attributes of the sender. The attributes may include a role, presence information, an emotional state, and/or a location of the sender, among others. The translations (230 and 232) may be filtered based on the attributes. For example, a translation that does not match the emotional state of the sender may not be included in a list of possible translations. The filtered translations may be provided to the sender for a selection. Upon receiving the selection from the sender, the translation 216 may be generated from the selection. - For example, the
inference engine 212 may detect an emotional state of the sender as happy (for example, by recognizing the emotional state from a third party information provider such as a social networking provider, a camera associated with the user's device, and/or the context of the message the user has typed). The inference engine 212 may filter out a number of the translations that do not match the emotional state of the sender. The translations (230 and 232) may correlate with the happy emotional state of the sender. As such, the translations (230 and 232) may be presented to the sender for a selection through the rendering engine 214. The selected translation may be used to generate the translation 216. - Similarly, contextual information associated with the recipient may be analyzed to translate the
ideograms 208. The inference engine 212 may identify attributes of the recipient. The attributes may include a role, presence information, an emotional state, and/or a location of the recipient, among others. The translations (230 and 232) may be filtered based on the attributes. The filtered translations may be provided to the sender or the recipient for a selection. Upon receiving the selection from the sender or the recipient, the translation 216 may be generated from the selection. - For example, the
inference engine 212 may detect an emotional state of the recipient as happy (for example, by recognizing the emotional state from a third party information provider such as a social networking provider, a camera associated with the user's device, and/or the context of the message the user has typed). The inference engine 212 may filter out a number of the translations that do not match the emotional state of the recipient. The translations (230 and 232) may correlate with the happy emotional state of the recipient. The translations (230 and 232) may be presented to the recipient or the sender for a selection through the rendering engine 214. The selected translation may be used to generate the translation 216. -
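For illustration only, the lookup, filter, and selection flow described above might be sketched in Python as follows; the dictionary entries, mood tags, and function names are hypothetical assumptions, not part of the embodiments:

```python
# Hypothetical sketch of the inference engine's lookup-filter-select flow.
# The dictionary contents and mood tags are invented for illustration.
IDEOGRAM_DICTIONARY = {
    "\U0001F60A": [  # smiling face
        {"text": "smile", "mood": "happy"},
        {"text": "smirk", "mood": "sarcastic"},
    ],
    "\u2764": [  # heart
        {"text": "love", "mood": "happy"},
        {"text": "heart", "mood": "neutral"},
    ],
}

def candidate_translations(ideogram, sender_mood=None):
    """Return translations for an ideogram, filtered by the sender's
    emotional state (sender context) when one is known."""
    candidates = IDEOGRAM_DICTIONARY.get(ideogram, [])
    if sender_mood is not None:
        filtered = [c for c in candidates if c["mood"] in (sender_mood, "neutral")]
        if filtered:  # fall back to the unfiltered list rather than nothing
            candidates = filtered
    return [c["text"] for c in candidates]

def translate(ideogram, sender_mood=None, choose=None):
    """Pick a single translation; when several remain, defer to the
    sender via the supplied 'choose' callback (the selection prompt)."""
    options = candidate_translations(ideogram, sender_mood)
    if not options:
        return None
    if len(options) > 1 and choose is not None:
        return choose(options)
    return options[0]
```

Under these assumed entries, a call such as translate("\u2764", sender_mood="happy", choose=lambda options: options[0]) would return "love", while the smiling face filtered by a happy mood resolves to "smile" without any prompt.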
FIG. 3 is a display diagram illustrating components of a scheme to translate ideogram(s) in a communication application, according to embodiments. - In a diagram 300, an
inference engine 312 of the communication application 302 may process ideograms 308 within a message 306 to generate a translation 316. Alternatively, the inference engine 312 may translate words of a new message 318 to new ideograms 322. - For example, the
inference engine 312 may detect a message 306 that includes ideograms 308. The inference engine 312 may query an ideogram translation dictionary 324 to locate translations that match the ideograms 308. If two or more translations are detected, the rendering engine 314 is prompted to provide the translations to a sender of the message 306 to request the sender to make a selection. Upon receiving the selection, the selection may be used to generate the translation 316. Alternatively, if the ideograms 308 match a single set of translations, the translations may be used to generate the translation 316. - Furthermore, the
ideograms 308 may be translated through an ideogram translation provider 326. The ideogram translation provider may be provided with the message 306 to process the ideograms 308, generate the translation 316, and transmit the translation 316 to the communication application 302. Upon receiving the translation 316 (from the ideogram translation provider 326), the rendering engine 314 may be prompted to provide the translation 316 to be transmitted to a recipient for display. - In another scenario, a
new message 318 may be detected. The new message 318 may have a content that solely includes words. The ideogram translation dictionary may be queried for a new translation 320 that includes new ideograms 322. The new translation 320 may be found in the ideogram translation dictionary 324. Next, the new translation 320 may be presented to the recipient through the rendering engine 314. Alternatively, if two or more ideogram translations of the words (of the new message 318) are detected, the ideogram translations may be presented to the sender for a selection. A selected ideogram translation may be used to generate the new translation 320. - Furthermore, the
ideogram translation provider 326 may be used to translate the new message 318. For example, the inference engine 312 may directly query the ideogram translation provider 326 to translate the message 318 to the new translation 320 (with the new ideograms 322). Alternatively, the ideogram translation provider 326 may be queried (with the new message 318) upon a failure to locate the new translation 320 within the ideogram translation dictionary 324. -
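The dictionary-first, provider-fallback flow just described might look like the following sketch; WORD_TO_IDEOGRAM and provider_translate are illustrative stand-ins for the ideogram translation dictionary 324 and the ideogram translation provider 326:

```python
# Illustrative sketch of the FIG. 3 fallback: try the local ideogram
# translation dictionary first, then defer to a remote provider.
# Both the dictionary contents and the provider are stand-ins.
WORD_TO_IDEOGRAM = {
    "love": "\u2764",
    "smile": "\U0001F60A",
}

def provider_translate(word):
    """Stand-in for the ideogram translation provider; a real
    implementation would issue a network query here."""
    return None  # pretend the provider found nothing

def words_to_ideograms(message):
    """Replace each word that has a known ideogram; leave others as-is."""
    out = []
    for word in message.split():
        ideogram = WORD_TO_IDEOGRAM.get(word.lower())
        if ideogram is None:
            ideogram = provider_translate(word)  # provider fallback path
        out.append(ideogram if ideogram is not None else word)
    return " ".join(out)
```

With these assumed entries, words_to_ideograms("I love you") yields "I \u2764 you"; any word unknown to both the dictionary and the provider passes through unchanged.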
FIG. 4 is a display diagram illustrating a scheme to translate ideogram(s) using Unicode character intermediaries, according to embodiments. - In a diagram 400, an
inference engine 412 of a communication application 402 may translate a message 406 with an ideogram 408 by converting the ideogram 408 to Unicode characters 410. An ideogram translation dictionary may be queried with the Unicode characters 410 to locate a translation associated with the Unicode characters 410. The translation may be used to construct a translated sentence 416 by replacing the ideogram 408 with the translation. The translated sentence 416 may be presented to the recipient as the translation of the message 406 through the rendering engine 414. - However, if the search results in two or more translations (430 and 432) of the
ideogram 408, then the inference engine 412 may prompt the rendering engine 414 to provide the two or more translations (430 and 432) to the sender for a selection. The sender may be instructed to make a selection from the two or more translations (430 and 432). Upon detecting the selection, the selected translation (430) may be used to construct the translated sentence 416. - As discussed above, the communication application may be employed to provide ideogram translation. An increased user efficiency with the
communication application 102 may occur as a result of processing the ideogram and components of a message that have a relationship with the ideogram to generate the translation. Additionally, automatically translating ideograms to words or words to ideograms within a communication based on user demand, by the communication application 102, may reduce processor load, increase processing speed, conserve memory, and reduce network bandwidth usage. - Embodiments, as described herein, address a need that arises from a lack of efficiency to provide ideogram translation. The actions/operations described herein are not a mere use of a computer, but address results that are a direct consequence of software used as a service offered to large numbers of users and applications.
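The Unicode-intermediary scheme of FIG. 4 might be sketched as follows; keying the dictionary on "U+XXXX" code point strings is an assumption made for illustration:

```python
# Sketch of the FIG. 4 scheme: convert each ideogram to its Unicode
# code point, look the code point up, and splice the text back in.
# The "U+XXXX" keys and sample entries are illustrative assumptions.
CODEPOINT_DICTIONARY = {
    "U+2764": "love",    # heavy black heart
    "U+1F60A": "smile",  # smiling face with smiling eyes
}

def to_codepoint(char):
    """Render a character as a 'U+XXXX' Unicode scalar value string."""
    return "U+%04X" % ord(char)

def translate_sentence(sentence):
    """Build the translated sentence by replacing any character whose
    code point appears in the dictionary with its textual equivalent."""
    parts = []
    for char in sentence:
        translation = CODEPOINT_DICTIONARY.get(to_codepoint(char))
        parts.append(translation if translation is not None else char)
    return "".join(parts)
```

Using the code point as an intermediary keeps the dictionary keys stable and human-readable regardless of how the ideogram itself is encoded in the message.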
- The example scenarios and schemas in
FIG. 1 through 4 are shown with specific components, data types, and configurations. Embodiments are not limited to systems according to these example configurations. Providing ideogram translation may be implemented in configurations employing fewer or additional components in applications and user interfaces. Furthermore, the example schema and components shown in FIG. 1 through 4 and their subcomponents may be implemented in a similar manner with other values using the principles described herein. -
FIG. 5 is an example networked environment, where embodiments may be implemented. A communication application configured to translate ideograms may be implemented via software executed over one or more servers 514 such as a hosted service. The platform may communicate with communication applications on individual computing devices such as a smart phone 513, a mobile computer 512, or desktop computer 511 (‘client devices’) through network(s) 510. - Communication applications executed on any of the client devices 511-513 may facilitate communications via application(s) executed by
servers 514, or on individual server 516. A communication application may detect a message created by a sender that includes ideogram(s). The ideogram(s) may be processed to generate a translation based on a content of the ideogram and contextual information associated with the message. The contextual information may include a sender context, a recipient context, and/or a message context. Next, the translation may be provided for display to the recipient. The communication application may store data associated with the ideograms in data store(s) 519 directly or through database server 518. - Network(s) 510 may comprise any topology of servers, clients, Internet service providers, and communication media. A system according to embodiments may have a static or dynamic topology. Network(s) 510 may include secure networks such as an enterprise network, an unsecure network such as a wireless open network, or the Internet. Network(s) 510 may also coordinate communication over other networks such as Public Switched Telephone Network (PSTN) or cellular networks. Furthermore, network(s) 510 may include short range wireless networks such as Bluetooth or similar ones. Network(s) 510 provide communication between the nodes described herein. By way of example, and not limitation, network(s) 510 may include wireless media such as acoustic, RF, infrared and other wireless media.
- Many other configurations of computing devices, applications, data sources, and data distribution systems may be employed to provide ideogram translation. Furthermore, the networked environments discussed in
FIG. 5 are for illustration purposes only. Embodiments are not limited to the example applications, modules, or processes. -
FIG. 6 is a block diagram of an example computing device, which may be used to provide ideogram translation, according to embodiments. - For example,
computing device 600 may be used as a server, desktop computer, portable computer, smart phone, special purpose computer, or similar device. In an example basic configuration 602, the computing device 600 may include one or more processors 604 and a system memory 606. A memory bus 608 may be used for communication between the processor 604 and the system memory 606. The basic configuration 602 may be illustrated in FIG. 6 by those components within the inner dashed line. - Depending on the desired configuration, the
processor 604 may be of any type, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 604 may include one or more levels of caching, such as a level cache memory 612, one or more processor cores 614, and registers 616. The example processor cores 614 may (each) include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. An example memory controller 618 may also be used with the processor 604, or in some implementations, the memory controller 618 may be an internal part of the processor 604. - Depending on the desired configuration, the
system memory 606 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 606 may include an operating system 620, a communication application 622, and program data 624. The communication application 622 may include components such as an inference engine 626 and a rendering engine 627. The inference engine 626 and the rendering engine 627 may execute the processes associated with the communication application 622. The inference engine 626 may detect a message created by a sender that includes ideogram(s). The ideogram(s) may be processed to generate a translation based on a content of the ideogram and contextual information associated with the message. The contextual information may include a sender context, a recipient context, and/or a message context. Next, the rendering engine 627 may provide the translation to the recipient for display. - The
communication application 622 may provide a message through a communication module associated with the computing device 600. An example of the communication module may include a communication device 666, among others that may be communicatively coupled to the computing device 600. The program data 624 may also include, among other data, ideogram data 628, or the like, as described herein. The ideogram data 628 may include translations. - The
computing device 600 may have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 602 and any desired devices and interfaces. For example, a bus/interface controller 630 may be used to facilitate communications between the basic configuration 602 and one or more data storage devices 632 via a storage interface bus 634. The data storage devices 632 may be one or more removable storage devices 636, one or more non-removable storage devices 638, or a combination thereof. Examples of the removable storage and the non-removable storage devices may include magnetic disk devices, such as flexible disk drives and hard-disk drives (HDDs), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSDs), and tape drives, to name a few. Example computer storage media may include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. - The
system memory 606, the removable storage devices 636 and the non-removable storage devices 638 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs), solid state drives, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by the computing device 600. Any such computer storage media may be part of the computing device 600. - The
computing device 600 may also include an interface bus 640 for facilitating communication from various interface devices (for example, one or more output devices 642, one or more peripheral interfaces 644, and one or more communication devices 666) to the basic configuration 602 via the bus/interface controller 630. Some of the example output devices 642 include a graphics processing unit 648 and an audio processing unit 650, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 652. One or more example peripheral interfaces 644 may include a serial interface controller 654 or a parallel interface controller 656, which may be configured to communicate with external devices such as input devices (for example, keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (for example, printer, scanner, etc.) via one or more I/O ports 658. An example of the communication device(s) 666 includes a network controller 660, which may be arranged to facilitate communications with one or more other computing devices 662 over a network communication link via one or more communication ports 664. The one or more other computing devices 662 may include servers, computing devices, and comparable devices. - The network communication link may be one example of a communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein may include both storage media and communication media.
- The
computing device 600 may be implemented as a part of a general purpose or specialized server, mainframe, or similar computer, which includes any of the above functions. The computing device 600 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations. - Example embodiments may also include methods to provide ideogram translation. These methods can be implemented in any number of ways, including the structures described herein. One such way may be by machine operations, of devices of the type described in the present disclosure. Another optional way may be for one or more of the individual operations of the methods to be performed in conjunction with one or more human operators performing some of the operations while other operations may be performed by machines. These human operators need not be collocated with each other, but each can be only with a machine that performs a portion of the program. In other embodiments, the human interaction can be automated such as by pre-selected criteria that may be machine automated.
-
FIG. 7 is a logic flow diagram illustrating a process for providing ideogram translation, according to embodiments. Process 700 may be implemented on a computing device, such as the computing device 600 or another system. -
Process 700 begins with operation 710, where the communication application detects a message created by a sender that includes ideogram(s). An ideogram may include a graphic that reflects an emotional state. The communication application may generate a translation of the ideogram(s) based on a content of the ideogram(s) and a contextual information associated with the message at operation 720. The contextual information may include a sender context, a recipient context, and/or a message context. Each ideogram in the message may be matched to a translation. However, in scenarios where the ideogram may correspond to multiple translations, the sender may be provided with a selection prompt to select the correct translation that may be used to translate the ideogram. Next, at operation 730, the translation may be provided to a recipient for display. - The operations included in
process 700 are for illustration purposes. Providing ideogram translation may be implemented by similar processes with fewer or additional steps, as well as in a different order of operations, using the principles described herein. The operations described herein may be executed by one or more processors operating on one or more computing devices, one or more processor cores, specialized processing devices, and/or general purpose processors, among other examples. - In some examples, a computing device to provide ideogram translation is described. The computing device includes a communication module, a memory configured to store instructions associated with a communication application, and a processor coupled to the memory and the communication module. The processor executes the communication application in conjunction with the instructions stored in the memory. The communication application includes an inference engine and a rendering engine. The inference engine is configured to detect a message created by a sender, where the message includes one or more ideograms, and generate a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context. The rendering engine is configured to provide the translation to the communication module to be transmitted to a recipient for display.
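For illustration only, operations 710 through 730 of process 700 might be strung together as a minimal pipeline; the helper names and the crude non-ASCII test for ideogram detection are assumptions:

```python
# Minimal pipeline mirroring operations 710-730 of process 700.
# The detect/translate/provide helper names are hypothetical.
def detect_ideograms(message):
    """Operation 710: find characters outside the basic text range,
    a crude stand-in for real ideogram detection."""
    return [ch for ch in message if ord(ch) > 0x2000]

def translate_ideogram(ideogram, dictionary):
    """Operation 720: translate one ideogram via the dictionary,
    leaving it unchanged when no entry is found."""
    return dictionary.get(ideogram, ideogram)

def provide_translation(message, dictionary):
    """Operation 730: return the message with ideograms replaced,
    ready to be rendered for the recipient."""
    for ideogram in detect_ideograms(message):
        message = message.replace(ideogram, translate_ideogram(ideogram, dictionary))
    return message
```

The three functions map one-to-one onto the detect, generate, and provide operations of the flow diagram, which keeps each stage independently testable.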
- In other examples, the inference engine is further configured to identify two or more translations of the one or more ideograms and prompt the rendering engine to present the two or more translations to the sender for a selection among the two or more translations. The inference engine is further configured to receive the selection among the two or more translations from the sender, designate the selection among the two or more translations as the translation corresponding to the one or more ideograms, and save the one or more ideograms and the translation in an ideogram translation dictionary.
- In further examples, the inference engine is further configured to detect a structure of the message as the message context, where the structure includes one or more words adjacent to the one or more ideograms, process the one or more words and the one or more ideograms to identify one or more relationships between the one or more words and the one or more ideograms, and generate the translation of the one or more ideograms based on the one or more relationships with the one or more words. The inference engine is further configured to query an ideogram translation provider with the structure of the message, the one or more words, and the one or more relationships and receive the translation from the ideogram translation provider. The inference engine is further configured to query a sentence fragment provider with the structure of the message, the one or more words and the one or more relationships, receive a sentence fragment that matches the one or more relationships from the sentence fragment provider, where the sentence fragment includes the one or more words, and generate the translation by replacing the one or more words and the one or more ideograms with the sentence fragment within the message.
- In other examples, the inference engine is further configured to analyze the sender context to identify an attribute of the sender, where the attribute of the sender includes one or more of a role, a presence information, an emotional state, and a location of the sender and generate the translation of the one or more ideograms based on a selection of one or more textual equivalents for the one or more ideograms based on the identified attribute. The inference engine is further configured to analyze the recipient context to identify an attribute of the recipient, where the attribute of the recipient includes one or more of a role, a presence information, an emotional state, and a location of the recipient and generate the translation of the one or more ideograms based on a selection of one or more textual equivalents for the one or more ideograms based on the identified attribute.
- In further examples, the inference engine is further configured to identify two or more textual equivalents for the one or more ideograms, analyze the two or more textual equivalents based on the one or more of the sender context, the recipient context, and the message context, and select one of the two or more textual equivalents as the translation based on the analysis. The inference engine is further configured to provide the one or more ideograms along with the translation to the communication module to be transmitted to a recipient for display. The one or more ideograms include one of an icon, a pictogram, and an emoji.
- In some examples, a method executed on a computing device to provide ideogram translation is described. The method includes detecting a message being created, where the message includes one or more ideograms, generating a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context, identifying two or more translations of the one or more ideograms, presenting the two or more translations to a sender for a selection among the two or more translations, receiving the selection among the two or more translations, and providing the selection among the two or more translations to a communication module to be transmitted to a recipient for display.
- In other examples, the method further includes converting the one or more ideograms to one or more sets of Unicode characters that correspond to the one or more ideograms, searching an ideogram translation dictionary using the one or more sets of Unicode characters, locating one or more words that match the one or more sets of Unicode characters, and generating the translation from the one or more words. Generating the translation of the one or more ideograms based on the sender context includes analyzing a history of the sender's messages to other recipients and identifying the two or more translations based on the analysis. Generating the translation of the one or more ideograms based on the recipient context includes analyzing a history of the recipient's messages from other senders and identifying the two or more translations based on the analysis. Generating the translation of the one or more ideograms based on the message context includes analyzing one or more of a conversation that includes the message, a prior message, and a number of recipients and identifying the two or more translations based on the analysis.
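The history analysis described above, which favors candidate translations the sender or recipient has actually used before, might be sketched as follows; the list-of-strings history format is an assumption:

```python
from collections import Counter

# Hypothetical sketch of sender-context analysis: rank candidate
# translations by how often each appeared in past messages.
# The list-of-strings history format is an illustrative assumption.
def rank_by_history(candidates, message_history):
    """Order candidate translations, most-frequently-used first;
    unseen candidates keep their original relative order."""
    counts = Counter()
    for past_message in message_history:
        for word in past_message.lower().split():
            counts[word] += 1
    return sorted(candidates, key=lambda c: -counts[c.lower()])
```

Because Python's sort is stable, candidates that never appear in the history retain their dictionary order rather than being reshuffled arbitrarily.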
- In some examples, a computer-readable memory device with instructions stored thereon to provide ideogram translation is described. The instructions include receiving a message that includes one or more ideograms, generating a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context, and providing the translation to a recipient of the message for display.
- In other examples, the instructions further include detecting a structure of the message within the message context, where the structure includes one or more words adjacent to the one or more ideograms, processing the one or more words and the one or more ideograms to identify one or more relationships between the one or more words and the one or more ideograms, and generating the translation of the one or more ideograms based on the one or more relationships with the one or more words. The instructions further include analyzing one or more of a history of the recipient's messages from other senders, a history of the sender's messages to other recipients, a conversation that includes the message, a prior message, and a number of recipients and generating the translation based on the analysis.
- In some examples a means for providing ideogram translation is described. The means for providing ideogram translation includes a means for detecting a message created by a sender, where the message includes one or more ideograms, a means for generating a translation of the one or more ideograms into text based on a content of the one or more ideograms and a contextual information associated with the message, where the contextual information includes one or more of a sender context, a recipient context, and a message context, and a means for providing the translation to a recipient for display.
- The above specification, examples and data provide a complete description of the manufacture and use of the composition of the embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims and embodiments.
Claims (20)
1. A computing device to provide ideogram translation, the computing device comprising:
a communication module;
a memory configured to store instructions associated with a communication application;
a processor coupled to the memory and the communication module, the processor executing the communication application in conjunction with the instructions stored in the memory, wherein the communication application includes:
an inference engine configured to:
detect a message created by a sender, wherein the message includes an ideogram;
determine contextual information associated with the message by analyzing a sender context, a recipient context and a message context based on one or more of: a presence information, an emotional state, or a location of the sender or a recipient;
automatically generate a translation of the ideogram into text by:
generating a list of possible translations based on a content of the ideogram; and
filtering the list of possible translations based on contextual information associated with the message; and
a rendering engine configured to:
provide the translation to the communication module to be transmitted to the recipient for display.
2. The computing device of claim 1, wherein the inference engine is further configured to:
identify two or more translations of the ideogram; and
prompt the rendering engine to present the two or more translations to the sender for a selection among the two or more translations.
3. The computing device of claim 2, wherein the inference engine is further configured to:
receive the selection among the two or more translations from the sender;
designate the selection among the two or more translations as the translation corresponding to the ideogram; and
save the ideogram and the translation in an ideogram translation dictionary.
4. The computing device of claim 1, wherein the inference engine is further configured to:
detect a structure of the message as the message context, wherein the structure includes one or more words adjacent to the ideogram;
process the one or more words and the ideogram to identify one or more relationships between the one or more words and the ideogram; and
generate the translation of the ideogram based on the one or more relationships with the one or more words.
5. The computing device of claim 4 , wherein the inference engine is further configured to:
query an ideogram translation provider with the structure of the message, the one or more words, and the one or more relationships; and
receive the translation from the ideogram translation provider.
6. The computing device of claim 4 , wherein the inference engine is further configured to:
query a sentence fragment provider with the structure of the message, the one or more words and the one or more relationships;
receive a sentence fragment that matches the one or more relationships from the sentence fragment provider, wherein the sentence fragment includes the one or more words; and
generate the translation by replacing the one or more words and the ideogram with the sentence fragment within the message.
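Claim 6's replacement step can be illustrated with a toy helper: the words adjacent to the ideogram, plus the ideogram itself, are swapped for a sentence fragment supplied by a (hypothetical) fragment provider. The function name and simple string substitution are assumptions for illustration.

```python
# Toy version of claim 6: swap the adjacent words plus the ideogram for a
# sentence fragment supplied by a (hypothetical) fragment provider.

def replace_with_fragment(message: str, words: str, ideogram: str, fragment: str) -> str:
    """Replace the word/ideogram run within the message with the fragment."""
    return message.replace(f"{words} {ideogram}", fragment)
```

For example, with the fragment "was really fun", the message "that party was 🔥" becomes "that party was really fun".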
7. The computing device of claim 1 , wherein the inference engine is further configured to:
analyze the sender context to identify an attribute of the sender; and
generate the translation of the ideogram based on a selection of one or more textual equivalents for the ideogram based on the identified attribute.
8. The computing device of claim 1 , wherein the inference engine is further configured to:
analyze the recipient context to identify an attribute of the recipient; and
generate the translation of the ideogram based on a selection of one or more textual equivalents for the ideogram based on the identified attribute.
9. The computing device of claim 1 , wherein the inference engine is further configured to:
identify two or more textual equivalents for the ideogram;
analyze the two or more textual equivalents based on the one or more of the sender context, the recipient context, and the message context; and
select one of the two or more textual equivalents as the translation based on the analysis.
10. The computing device of claim 1 , wherein the inference engine is further configured to:
provide the ideogram along with the translation to the communication module to be transmitted to the recipient for display.
11. The computing device of claim 1 , wherein the ideogram includes one of an icon, a pictogram, and an emoji.
12. A method executed on a computing device to provide ideogram translation, the method comprising:
detecting a message being created, wherein the message includes an ideogram;
determining contextual information associated with the message by analyzing a sender context, a recipient context, or a message context based on one or more of presence information, an emotional state, or a location of a sender or a recipient;
automatically generating a translation of the ideogram into text by:
generating a list of possible translations based on a content of the ideogram; and
filtering the list of possible translations based on the contextual information associated with the message;
identifying two or more translations of the ideogram;
presenting the two or more translations to the sender for a selection among the two or more translations;
receiving the selection among the two or more translations; and
providing the selection among the two or more translations to a communication module to be transmitted to the recipient for display.
13. The method of claim 12 , further comprising:
converting the ideogram to one or more sets of Unicode characters that correspond to the ideogram.
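The conversion in claim 13 matters because a single emoji can span several Unicode code points (a base character plus variation selectors or zero-width-joiner sequences), so the dictionary key is a sequence, not one character. A sketch, with `to_codepoints` as an illustrative helper name:

```python
# Claim 13's conversion step: map an ideogram to its sequence of Unicode code
# points. A ZWJ emoji such as "woman technologist" yields multiple code points.

def to_codepoints(ideogram: str) -> tuple[str, ...]:
    return tuple(f"U+{ord(ch):04X}" for ch in ideogram)
```

A thumbs-up emoji maps to the single code point U+1F44D, while the woman-technologist emoji (woman + ZWJ + laptop) maps to three.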
14. The method of claim 13 , further comprising:
searching an ideogram translation dictionary using the one or more sets of Unicode characters;
locating one or more words that match the one or more sets of Unicode characters; and
generating the translation from the one or more words.
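The search in claim 14 then amounts to a lookup keyed by those code-point sequences. A minimal sketch; the dictionary entries below are invented for illustration:

```python
# Sketch of claim 14: a translation dictionary keyed by Unicode code-point
# sequences, searched to locate the matching words. Entries are invented.

translation_dictionary: dict[tuple, str] = {
    ("U+1F44D",): "thumbs up",
    ("U+2764", "U+FE0F"): "love",  # red heart with emoji variation selector
}

def lookup(codepoints: tuple):
    """Locate the words matching the code-point sequence, or None."""
    return translation_dictionary.get(codepoints)
```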
15. The method of claim 12 , wherein generating the translation of the ideogram based on the sender context further comprises:
analyzing a history of the sender's messages to other recipients; and
identifying the two or more translations based on the analysis.
16. The method of claim 12 , wherein generating the translation of the ideogram based on the recipient context further comprises:
analyzing a history of the recipient's messages from other senders; and
identifying the two or more translations based on the analysis.
17. The method of claim 12 , wherein generating the translation of the ideogram based on the message context comprises:
analyzing one or more of a conversation that includes the message, a prior message, and a number of recipients; and
identifying the two or more translations based on the analysis.
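Claims 15 through 17 all rank candidate translations against some history (the sender's past messages, the recipient's past messages, or the surrounding conversation). A toy scorer under stated assumptions: whitespace tokenization and a list-of-strings history are invented here, and real analysis would be far richer.

```python
# Toy ranking for claims 15-17: count each candidate's occurrences in prior
# messages and keep the two most frequent, mirroring "identifying the two or
# more translations based on the analysis". History format is an assumption.

from collections import Counter

def top_two(candidates: list[str], history: list[str]) -> list[str]:
    counts = Counter(word for msg in history for word in msg.split())
    return sorted(candidates, key=lambda c: -counts[c])[:2]
```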
18. A computer-readable memory device with instructions stored thereon to provide ideogram translation, the instructions comprising:
receiving a message that includes an ideogram;
determining contextual information associated with the message by analyzing a sender context, a recipient context, or a message context based on one or more of presence information, an emotional state, or a location of a sender or a recipient;
automatically generating a translation of the ideogram into text by:
generating a list of possible translations based on a content of the ideogram; and
filtering the list of possible translations based on the contextual information associated with the message; and
providing the translation to the recipient of the message for display.
19. The computer-readable memory device of claim 18 , wherein the instructions further comprise:
detecting a structure of the message within the message context, wherein the structure includes one or more words adjacent to the ideogram;
processing the one or more words and the ideogram to identify one or more relationships between the one or more words and the ideogram; and
generating the translation of the ideogram based on the one or more relationships with the one or more words.
20. The computer-readable memory device of claim 18 , wherein the instructions further comprise:
analyzing one or more of a history of the recipient's messages from other senders, a history of the sender's messages to other recipients, a conversation that includes the message, a prior message, and a number of recipients; and
generating the translation based on the analysis.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/243,987 US20180060312A1 (en) | 2016-08-23 | 2016-08-23 | Providing ideogram translation |
PCT/US2017/047243 WO2018039008A1 (en) | 2016-08-23 | 2017-08-17 | Providing ideogram translation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/243,987 US20180060312A1 (en) | 2016-08-23 | 2016-08-23 | Providing ideogram translation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180060312A1 true US20180060312A1 (en) | 2018-03-01 |
Family
ID=59714155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/243,987 Abandoned US20180060312A1 (en) | 2016-08-23 | 2016-08-23 | Providing ideogram translation |
Country Status (2)
Country | Link |
---|---|
US (1) | US20180060312A1 (en) |
WO (1) | WO2018039008A1 (en) |
Cited By (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109032377A (en) * | 2018-07-12 | 2018-12-18 | 广州三星通信技术研究有限公司 | The method and apparatus of output input method candidate word for electric terminal |
US20190007356A1 (en) * | 2017-06-30 | 2019-01-03 | Daria A. Loi | Incoming communication filtering system |
US10311144B2 (en) * | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US10681212B2 (en) | 2015-06-05 | 2020-06-09 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10714117B2 (en) | 2013-02-07 | 2020-07-14 | Apple Inc. | Voice trigger for a digital assistant |
US10741185B2 (en) | 2010-01-18 | 2020-08-11 | Apple Inc. | Intelligent automated assistant |
US10741181B2 (en) | 2017-05-09 | 2020-08-11 | Apple Inc. | User interface for correcting recognition errors |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10909171B2 (en) | 2017-05-16 | 2021-02-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10930282B2 (en) | 2015-03-08 | 2021-02-23 | Apple Inc. | Competing devices responding to voice triggers |
US10942703B2 (en) | 2015-12-23 | 2021-03-09 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11656884B2 (en) | 2017-01-09 | 2023-05-23 | Apple Inc. | Application integration with a digital assistant |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11928604B2 (en) | 2005-09-08 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
US12197817B2 (en) | 2016-06-11 | 2025-01-14 | Apple Inc. | Intelligent device arbitration and control |
US12223282B2 (en) | 2016-06-09 | 2025-02-11 | Apple Inc. | Intelligent automated assistant in a home environment |
US12277954B2 (en) | 2024-04-16 | 2025-04-15 | Apple Inc. | Voice trigger for a digital assistant |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7607097B2 (en) * | 2003-09-25 | 2009-10-20 | International Business Machines Corporation | Translating emotion to braille, emoticons and other special symbols |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080262827A1 (en) * | 2007-03-26 | 2008-10-23 | Telestic Llc | Real-Time Translation Of Text, Voice And Ideograms |
US20150100537A1 (en) * | 2013-10-03 | 2015-04-09 | Microsoft Corporation | Emoji for Text Predictions |
2016
- 2016-08-23 US US15/243,987 patent/US20180060312A1/en not_active Abandoned

2017
- 2017-08-17 WO PCT/US2017/047243 patent/WO2018039008A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7607097B2 (en) * | 2003-09-25 | 2009-10-20 | International Business Machines Corporation | Translating emotion to braille, emoticons and other special symbols |
Cited By (174)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11928604B2 (en) | 2005-09-08 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US11979836B2 (en) | 2007-04-03 | 2024-05-07 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11900936B2 (en) | 2008-10-02 | 2024-02-13 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US12165635B2 (en) | 2010-01-18 | 2024-12-10 | Apple Inc. | Intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10741185B2 (en) | 2010-01-18 | 2020-08-11 | Apple Inc. | Intelligent automated assistant |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US12009007B2 (en) | 2013-02-07 | 2024-06-11 | Apple Inc. | Voice trigger for a digital assistant |
US11636869B2 (en) | 2013-02-07 | 2023-04-25 | Apple Inc. | Voice trigger for a digital assistant |
US10714117B2 (en) | 2013-02-07 | 2020-07-14 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US11557310B2 (en) | 2013-02-07 | 2023-01-17 | Apple Inc. | Voice trigger for a digital assistant |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US11798547B2 (en) | 2013-03-15 | 2023-10-24 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11727219B2 (en) | 2013-06-09 | 2023-08-15 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US12073147B2 (en) | 2013-06-09 | 2024-08-27 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US12010262B2 (en) | 2013-08-06 | 2024-06-11 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US12067990B2 (en) | 2014-05-30 | 2024-08-20 | Apple Inc. | Intelligent assistant for home automation |
US11670289B2 (en) | 2014-05-30 | 2023-06-06 | Apple Inc. | Multi-command single utterance input method |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US12118999B2 (en) | 2014-05-30 | 2024-10-15 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US12200297B2 (en) | 2014-06-30 | 2025-01-14 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10930282B2 (en) | 2015-03-08 | 2021-02-23 | Apple Inc. | Competing devices responding to voice triggers |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
US12236952B2 (en) | 2015-03-08 | 2025-02-25 | Apple Inc. | Virtual assistant activation |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US12154016B2 (en) | 2015-05-15 | 2024-11-26 | Apple Inc. | Virtual assistant in a communication session |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US11070949B2 (en) | 2015-05-27 | 2021-07-20 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display |
US10681212B2 (en) | 2015-06-05 | 2020-06-09 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US12204932B2 (en) | 2015-09-08 | 2025-01-21 | Apple Inc. | Distributed personal assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11126400B2 (en) | 2015-09-08 | 2021-09-21 | Apple Inc. | Zero latency digital assistant |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US12051413B2 (en) | 2015-09-30 | 2024-07-30 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US11886805B2 (en) | 2015-11-09 | 2024-01-30 | Apple Inc. | Unconventional virtual assistant interactions |
US10942703B2 (en) | 2015-12-23 | 2021-03-09 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US12223282B2 (en) | 2016-06-09 | 2025-02-11 | Apple Inc. | Intelligent automated assistant in a home environment |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US12175977B2 (en) | 2016-06-10 | 2024-12-24 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US12197817B2 (en) | 2016-06-11 | 2025-01-14 | Apple Inc. | Intelligent device arbitration and control |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US11656884B2 (en) | 2017-01-09 | 2023-05-23 | Apple Inc. | Application integration with a digital assistant |
US12260234B2 (en) | 2017-01-09 | 2025-03-25 | Apple Inc. | Application integration with a digital assistant |
US10741181B2 (en) | 2017-05-09 | 2020-08-11 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11599331B2 (en) | 2017-05-11 | 2023-03-07 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US11538469B2 (en) | 2017-05-12 | 2022-12-27 | Apple Inc. | Low-latency intelligent automated assistant |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11380310B2 (en) | 2017-05-12 | 2022-07-05 | Apple Inc. | Low-latency intelligent automated assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US12014118B2 (en) | 2017-05-15 | 2024-06-18 | Apple Inc. | Multi-modal interfaces having selection disambiguation and text modification capability |
US10909171B2 (en) | 2017-05-16 | 2021-02-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US12254887B2 (en) | 2017-05-16 | 2025-03-18 | Apple Inc. | Far-field extension of digital assistant services for providing a notification of an event to a user |
US11532306B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Detecting a trigger of a digital assistant |
US12026197B2 (en) | 2017-05-16 | 2024-07-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) * | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US20190007356A1 (en) * | 2017-06-30 | 2019-01-03 | Daria A. Loi | Incoming communication filtering system |
US10652183B2 (en) * | 2017-06-30 | 2020-05-12 | Intel Corporation | Incoming communication filtering system |
US11902233B2 (en) * | 2017-06-30 | 2024-02-13 | Intel Corporation | Incoming communication filtering system |
US20230021182A1 (en) * | 2017-06-30 | 2023-01-19 | Intel Corporation | Incoming communication filtering system |
US11477152B2 (en) * | 2017-06-30 | 2022-10-18 | Intel Corporation | Incoming communication filtering system |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US11710482B2 (en) | 2018-03-26 | 2023-07-25 | Apple Inc. | Natural assistant interaction |
US12211502B2 (en) | 2018-03-26 | 2025-01-28 | Apple Inc. | Natural assistant interaction |
US11169616B2 (en) | 2018-05-07 | 2021-11-09 | Apple Inc. | Raise to speak |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11854539B2 (en) | 2018-05-07 | 2023-12-26 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11431642B2 (en) | 2018-06-01 | 2022-08-30 | Apple Inc. | Variable latency device coordination |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US12080287B2 (en) | 2018-06-01 | 2024-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US10720160B2 (en) | 2018-06-01 | 2020-07-21 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US12061752B2 (en) | 2018-06-01 | 2024-08-13 | Apple Inc. | Attention aware virtual assistant dismissal |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
CN109032377A (en) * | 2018-07-12 | 2018-12-18 | 广州三星通信技术研究有限公司 | Method and apparatus for outputting input-method candidate words for an electronic terminal |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
US12136419B2 (en) | 2019-03-18 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
US12216894B2 (en) | 2019-05-06 | 2025-02-04 | Apple Inc. | User configurable task triggers |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
US12154571B2 (en) | 2019-05-06 | 2024-11-26 | Apple Inc. | Spoken notifications |
US11705130B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11360739B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User activity shortcut suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11657813B2 (en) | 2019-05-31 | 2023-05-23 | Apple Inc. | Voice identification in digital assistant systems |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US12197712B2 (en) | 2020-05-11 | 2025-01-14 | Apple Inc. | Providing relevant data items based on context |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
US12219314B2 (en) | 2020-07-21 | 2025-02-04 | Apple Inc. | User identification using headphones |
US12277954B2 (en) | 2024-04-16 | 2025-04-15 | Apple Inc. | Voice trigger for a digital assistant |
Also Published As
Publication number | Publication date |
---|---|
WO2018039008A1 (en) | 2018-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180060312A1 (en) | | Providing ideogram translation |
US10122839B1 (en) | | Techniques for enhancing content on a mobile device |
US10409901B2 (en) | | Providing collaboration communication tools within document editor |
US10379702B2 (en) | | Providing attachment control to manage attachments in conversation |
US20170090705A1 (en) | | Conversation and version control for objects in communications |
US10073826B2 (en) | | Providing action associated with event detected within communication |
JP2015517161A (en) | | Content-based web extensions and content linking |
EP3387556B1 (en) | | Providing automated hashtag suggestions to categorize communication |
EP3374879A1 (en) | | Providing interactive content generation for document |
US11068853B2 (en) | | Providing calendar utility to capture calendar event |
US20170169037A1 (en) | | Organization and discovery of communication based on crowd sourcing |
US20180052696A1 (en) | | Providing teaching user interface activated by user action |
US10474428B2 (en) | | Sorting parsed attachments from communications |
US11163938B2 (en) | | Providing semantic based document editor |
US20190227678A1 (en) | | Providing document feature management in relation to communication |
US10171687B2 (en) | | Providing content and attachment printing for communication |
US10082931B2 (en) | | Transitioning command user interface between toolbar user interface and full menu user interface based on use context |
US20170330236A1 (en) | | Enhancing contact card based on knowledge graph |
US20180308036A1 (en) | | Mitigating absence of skill input during collaboration session |
US20160321226A1 (en) | | Insertion of unsaved content via content channel |
US20170171122A1 (en) | | Providing rich preview of communication in communication summary |
US8935343B2 (en) | | Instant messaging network resource validation |
US20170180279A1 (en) | | Providing interest based navigation of communications |
US20170168654A1 (en) | | Organize communications on timeline |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: WON, SUNG JOON; REEL/FRAME: 039501/0894. Effective date: 20160822 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |