US11574631B2 - Device control system, device control method, and terminal device - Google Patents


Info

Publication number
US11574631B2
Authority
US
United States
Prior art keywords
phrase
user
type
devices
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/888,279
Other versions
US20200294494A1 (en)
Inventor
Akihiko Suyama
Kazuya Mushikabe
Keisuke Tsukada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yamaha Corp
Original Assignee
Yamaha Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yamaha Corp filed Critical Yamaha Corp
Assigned to YAMAHA CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MUSHIKABE, KAZUYA; SUYAMA, AKIHIKO; TSUKADA, KEISUKE
Publication of US20200294494A1
Application granted
Publication of US11574631B2
Legal status: Active
Adjusted expiration

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/16: Sound input; Sound output
              • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
      • G08: SIGNALLING
        • G08C: TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
          • G08C 17/00: Arrangements for transmitting signals characterised by the use of a wireless electrical link
            • G08C 17/02: Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
          • G08C 2201/00: Transmission systems of control signals via wireless link
            • G08C 2201/20: Binding and programming of remote control devices
              • G08C 2201/21: Programming remote control devices via third means
            • G08C 2201/30: User interface
              • G08C 2201/31: Voice input
      • G10: MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L 15/00: Speech recognition
            • G10L 15/08: Speech classification or search
              • G10L 15/18: Speech classification or search using natural language modelling
                • G10L 15/183: Speech classification or search using natural language modelling using context dependencies, e.g. language models
                  • G10L 15/19: Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
            • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
              • G10L 2015/223: Execution procedure of a spoken command
              • G10L 2015/226: Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
                • G10L 2015/228: Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context

Definitions

  • the devices 20 and the speech input device 30 are installed in a local area, and are connected to a LAN 2 .
  • the LAN 2 may be a wired LAN or a wireless LAN.
  • the LAN 2 is connected to the Internet 6 through a router 4 .
  • the speech recognition device 40 and the command processing system 50 are installed outside the local area. In other words, the speech recognition device 40 and the command processing system 50 are installed on the Internet 6 side as viewed from the devices 20 and the speech input device 30 .
  • the “local area” is an area having a limited range in which communication through the LAN 2 is available.
  • the devices 20 are devices to be controlled by the device control system 1 . Although the three devices 20 are illustrated in FIG. 1 , four or more devices 20 may be included, or only two or fewer devices 20 may be included.
  • the device 20 is an audio device or an audio/visual (AV) device.
  • the device 20 is an AV receiver, an AV amplifier, a speaker, an optical disc player (a Blu-ray Disc (trademark) player, a DVD (trademark) player, or the like), or a television receiver.
  • the device 20 may be a musical instrument (an electronic musical instrument, an electric musical instrument, or the like).
  • the device 20 may be a device other than those devices.
  • the first device 20 - 1 is an AV receiver
  • the second device 20 - 2 is a television receiver
  • the third device 20 - 3 is a Blu-ray disc player
  • the second device 20 - 2 is connected to a first high-definition multimedia interface (HDMI) (trademark) terminal (HDMI 1) of the first device 20 - 1 through an HDMI cable.
  • the third device 20 - 3 is connected to a second HDMI terminal (HDMI 2) of the first device 20 - 1 through an HDMI cable.
  • private IP addresses “192.168.0.2”, “192.168.0.3”, and “192.168.0.4” are set to the first device 20 - 1 , the second device 20 - 2 , and the third device 20 - 3 , respectively.
  • the first device 20 - 1 includes a controller 21 , a storage 22 , and a communication unit 23 .
  • the controller 21 includes at least one microprocessor (CPU), and is configured to carry out processing in accordance with programs stored in the storage 22 .
  • the storage 22 includes a main storage (e.g., RAM) and an auxiliary storage (e.g., a nonvolatile semiconductor memory or a hard disk drive).
  • the storage 22 is configured to store programs and data.
  • the communication unit 23 is configured to transmit/receive data to/from other devices.
  • the second device 20 - 2 and the third device 20 - 3 also include the controller 21 , the storage 22 , and the communication unit 23 , which are not shown in FIG. 1 .
  • the device 20 may include a component (e.g., an optical disc drive or a memory card slot) configured to read programs and data stored in an information storage medium (e.g., an optical disc or a memory card). Further, the programs may be supplied to the devices 20 through the information storage medium. The programs may be supplied to the devices 20 through the Internet 6 .
  • the speech input device 30 includes a microphone and is configured to receive speech input.
  • the speech input device 30 is used by a user for the speech input of an operation instruction to the devices 20 .
  • when the user wants to start up the device 20 installed in a living room from a standby state through the command processing system 50 supplied by a company X, the user inputs "AAA, ask MC to turn on the Living Room" or the like to the speech input device 30 .
  • “AAA” is a word (wake word) for starting the speech input to the speech input device 30 , and is, for example, a name of the speech input device 30 .
  • “MC” is a name of the command processing system 50 .
  • the speech data indicating the speech (phrase) input to the speech input device 30 is transmitted together with a user ID to the speech recognition device 40 through the Internet 6 .
  • the speech recognition device 40 is implemented by, for example, a server computer.
  • the speech recognition device 40 may be implemented by a plurality of servers through so-called cloud computing.
  • the speech recognition device 40 is configured to carry out speech recognition processing, to thereby convert the speech data to data in a form easily recognized by a program (command processing system 50 ).
  • the speech recognition device 40 generates an operation instruction in a predetermined form, which contains strings indicating a type of instruction by the user and a subject of the instruction, from the speech data on the phrase.
  • the operation instruction is transmitted together with the user ID to the command processing system 50 .
  • the user ID may be added by any device (processing) on the Internet side before the speech data is transmitted to the command processing system 50 .
  • the speech recognition device 40 is capable of transmitting the operation instruction to a command processing system 50 capable of processing the content of the phrase transmitted from the user, which is selected in accordance with the content of the phrase, for example, a specific word group in the phrase.
  • the user registers the command processing systems 50 to be used in the speech recognition device 40 in advance.
  • the speech recognition device 40 selects any one of the registered command processing systems 50 based on words contained in the phrase input from the user, and transmits the operation instruction to the selected command processing system 50 .
  • the speech recognition device 40 may receive a plurality of types of phrases corresponding to a specific device 20 , and control the device 20 through the command data transmission device 10 corresponding to each of the types of phrases. For example, a format of wording of the instruction in a phrase differs in accordance with the type of the phrase.
  • in the first type, a phrase for starting up the device 20 having a name "Living Room" is "AAA, ask MC to turn on the Living Room".
  • in the second type, a phrase for starting up the device 20 is "AAA, turn on the Living Room". While the phrase in the first type contains "MC", which is a name for identifying the command processing system 50 , the phrase in the second type does not contain the name.
  • the command processing system 50 , which is the transmission destination when the speech recognition device 40 receives the first type of phrase, may be different from that for the second type.
  • a user terminal 60 is configured to receive a physical operation, for example, a touch operation by the user, to thereby control the device 20 . Moreover, the user terminal 60 is configured to set the command processing system 50 and the speech recognition device 40 based on an operation by the user.
  • the user terminal 60 is, for example, a smartphone or a personal computer.
  • the user terminal 60 includes a controller 61 , a storage 62 , and a communication unit 63 .
  • the controller 61 , the storage 62 , and the communication unit 63 are the same as the controller 21 , the storage 22 , and the communication unit 23 , respectively.
  • the command processing system 50 includes the command data transmission device 10 , a database 52 , and a message queueing telemetry transport (MQTT) server 53 .
  • the database 52 stores various types of data.
  • the database 52 stores information on devices 20 owned by respective users.
  • FIG. 2 is a diagram for illustrating an example of device tables stored in the database 52 .
  • a device table T1 is stored for each of the users (in association with the user ID).
  • the user ID used in the command processing system 50 (database 52 ), the user terminal 60 , and the devices 20 may be different from or the same as the user ID used in the speech input device 30 and the speech recognition device 40 .
  • correspondence data for converting those user IDs to each other is stored in the command processing system 50 or the speech recognition device 40 .
  • the device table T1 includes fields of “ID”, “name”, “device ID”, “IP address”, “command type”, “terminals”, “connection destination”, “reception availability”, and “acceptable commands”.
  • the “ID” field indicates information for uniquely identifying each device 20 owned by the user.
  • for example, the first device 20 - 1 corresponds to the ID "1", and the second device 20 - 2 corresponds to the ID "2".
  • the “name” field indicates a name of the device 20 . This name is used by the user to specify the device 20 subject to the operation instruction. As the name, any name set by the user may be used, or, for example, an initial name set by a manufacturer of the device 20 or the like may be used, which may be modifiable by the user.
  • the “device ID” field indicates a device ID for uniquely identifying the device 20 itself.
  • the device ID may be a MAC address of the device 20 , or an ID generated based on the MAC address.
  • the “IP address” field indicates an IP address set to a wireless or wired network interface card provided for the device 20 .
  • the “command type” field indicates a type (system) of commands used in the device 20 .
  • the “terminals” field indicates a list of input terminals provided for the device 20 .
  • the “connection destination” field indicates, in a case where the device 20 is connected to another device 20 and the sound output from the device 20 is input to that other device 20 , the input terminal of the other device 20 to which the device 20 is connected.
  • the “reception availability” field indicates whether a message containing a command can be received through the Internet 6 . Detailed description is later made of the message. For example, “0” or “1” is registered in the “reception availability” field. “0” indicates that a message cannot be received through the Internet 6 . “1” indicates that a message can be received through the Internet 6 .
  • the “acceptable commands” field indicates a list of commands that the device 20 can accept.
  • when “Power” is set in the list of the “acceptable commands” field, the field indicates that the device can be started up from the standby state through an external command. Otherwise, the field indicates that the device cannot be started up from the standby state.
  • when “Volume” is set in the list of the “acceptable commands” field, the field indicates that a volume of the device can be controlled through an external command. Otherwise, the field indicates that the volume cannot be controlled through an external command.
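  • as a concrete illustration of the fields described above, a record of the device table T1 for the first device 20 - 1 might look like the following minimal sketch (in Python; the “device ID” and “command type” values are hypothetical placeholders, and the remaining values follow the examples given in this description):

```python
# Hypothetical record of the device table T1 for the first device 20-1
# (the AV receiver named "Living Room"). The "device_id" and
# "command_type" values are placeholders; the other values follow the
# examples given in the description.
record = {
    "id": 1,                            # per-user ID of the device
    "name": "Living Room",              # name used to specify the device in speech input
    "device_id": "5F9D00000001",        # e.g., the MAC address or an ID generated from it
    "ip_address": "192.168.0.2",
    "command_type": "system-A",         # type (system) of commands used in the device
    "terminals": ["HDMI 1", "HDMI 2"],  # input terminals provided for the device
    "connection_destination": None,     # no other device receives this device's sound output
    "reception_availability": 1,        # 1: messages can be received through the Internet 6
    "acceptable_commands": ["Power", "Volume"],
}
```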
  • the data in the device table T1 is registered by each user.
  • the user can register information on the device 20 owned by the user with the device table T1 through access from the user terminal 60 to the command data transmission device 10 . Detailed description is later made of the registration.
  • Data other than the device tables T1 is stored in the database 52 .
  • for example, data in which a user is associated with the types of phrases that the user can input for the device 20 is stored in the database 52 .
  • data indicating a correspondence between an operation instruction and a command (namely, data for converting the operation instruction to the command) may be stored in the database 52 .
  • the command data transmission device 10 is implemented by, for example, a server computer. As illustrated in FIG. 1 , the command data transmission device 10 includes a controller 11 , a storage 12 , and a communication unit 13 .
  • the controller 11 , the storage 12 , and the communication unit 13 are the same as the controller 21 , the storage 22 , and the communication unit 23 , respectively.
  • the command data transmission device 10 may include a component (e.g., an optical disc drive or a memory card slot) configured to read programs and data stored in an information storage medium (e.g., an optical disc or a memory card). Further, the programs may be supplied to the command data transmission device 10 through the information storage medium. The programs may be supplied to the command data transmission device 10 through the Internet 6 .
  • the command data transmission device 10 can make access to the database 52 .
  • the command data transmission device 10 and the database 52 may be implemented by a single server computer, or may be implemented by individual server computers.
  • the command data transmission device 10 is configured to receive an operation instruction, which is transmitted from the speech recognition device 40 and is directed to the device 20 , generate a message containing a command based on the operation instruction, and transmit the message to the device 20 . More specifically, the message is transmitted to the device 20 through the MQTT server 53 .
  • the MQTT server 53 is configured to transmit/receive data through use of the MQTT protocol.
  • the command data transmission device 10 and the MQTT server 53 may be implemented by a single server computer, or may be implemented by individual server computers.
  • FIG. 3 is a block diagram for illustrating functions implemented by the command data transmission device 10 , the devices 20 , and the user terminal 60 .
  • the command data transmission device 10 includes an operation instruction reception module 110 , a message generation module 120 , a message transmission module 130 , a device information acquisition module 140 , a speech use registration module 150 , a device registration module 160 , and a registration result transmission module 170 .
  • the operation instruction reception module 110 , the message generation module 120 , the message transmission module 130 , the device information acquisition module 140 , the speech use registration module 150 , the device registration module 160 , and the registration result transmission module 170 are implemented by the controller 11 executing programs for the respective functions stored in the storage 12 .
  • the device 20 includes a message reception module 210 , a command execution module 220 , a device information transmission module 230 , and a setting module 260 .
  • the message reception module 210 , the command execution module 220 , the device information transmission module 230 , and the setting module 260 are implemented by the controller 21 executing programs for the respective functions stored in the storage 22 .
  • the user terminal 60 includes a registration control module 610 , a setting acquisition module 620 , a phrase generation module 630 , and a phrase output module 640 .
  • the registration control module 610 , the setting acquisition module 620 , the phrase generation module 630 , and the phrase output module 640 are implemented by the controller 61 executing programs for the respective functions stored in the storage 62 .
  • the operation instruction reception module 110 is configured to receive an operation instruction directed to the device 20 .
  • the operation instruction reception module 110 receives an operation instruction from the speech recognition device 40 .
  • the operation instruction is converted by the speech recognition device 40 to data in a form, for example, text data, which is recognizable by a program.
  • when the operation instruction reception module 110 receives the operation instruction directed to the device 20 , the message generation module 120 generates a message containing a user ID and a command.
  • the user ID is used to identify the user relating to the operation instruction.
  • the command is data, for example, text, for causing an operation to be carried out in accordance with the operation instruction.
  • FIG. 4 is a diagram for illustrating an example of a message D 1 .
  • the message D 1 illustrated in FIG. 4 is an example of a message generated when an operation instruction to start up the first device 20 - 1 from the standby state is received.
  • the message D 1 contains items of “uid”, “type”, “id”, and “command”.
  • the item of “uid” is a user ID of a user who issues the operation instruction.
  • a user ID of “U1” is set to the item of “uid”.
  • the item of “type” indicates a type of the data.
  • “cmd” is set to the item of “type”. This indicates that a command is contained in the message.
  • the item of “id” indicates identification information for uniquely identifying the message.
  • Data set to the item of “command” indicates content of the command.
  • the item of “command” contains items of “ip”, “path”, and “method”.
  • the item of “ip” indicates a destination of the command.
  • the IP address of the first device 20 - 1 is set to the item of “ip”.
  • the item of “path” corresponds to a command itself.
  • the item of “method” indicates a method of the HTTP protocol to be used.
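  • assembled from the items above, the message D 1 of FIG. 4 can be pictured as the following sketch (the “id” and “path” values are illustrative placeholders, since their exact formats are not spelled out in this description):

```python
# Sketch of the message D1 for starting up the first device 20-1 from
# the standby state. "uid", "type", and "ip" follow the description;
# the "id" and "path" strings are illustrative placeholders.
message_d1 = {
    "uid": "U1",              # user ID of the user who issued the operation instruction
    "type": "cmd",            # a command is contained in this message
    "id": "0001",             # illustrative identification information for the message
    "command": {
        "ip": "192.168.0.2",  # destination: IP address of the first device 20-1
        "path": "/api/main/power?value=on",  # illustrative command path
        "method": "GET",      # HTTP method to be used
    },
}
```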
  • the message transmission module 130 is configured to transmit the message generated by the message generation module 120 to the device 20 .
  • the message transmission module 130 may transmit the message to another device 20 , and then cause that device 20 to transfer the message to the subject device 20 .
  • the message is transmitted to the device 20 through the MQTT protocol.
  • the message transmission module 130 transmits the message to the device 20 through the MQTT server 53 .
  • a plurality of topics exist in the MQTT server 53 . Identification information on each of the topics is set based on the device IDs of the devices 20 .
  • the command data transmission device 10 publishes the message to a topic on a request side, which contains identification information corresponding to the device IDs, and the devices 20 receive the message published to the topic on the request side, which contains the identification information on the devices 20 .
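  • a minimal sketch of this exchange, assuming the paho-mqtt 1.x client API and a topic naming scheme derived from the device ID (the topic format and broker address are assumptions; the description only states that topic identification information is based on the device IDs):

```python
# Minimal sketch of the MQTT exchange: the command data transmission
# device 10 publishes to a request-side topic derived from the device ID,
# and the device 20 subscribes to the topic containing its own device ID.
# Topic format and broker address are assumptions; paho-mqtt 1.x style.
import json
import paho.mqtt.client as mqtt

DEVICE_ID = "5F9D00000001"              # hypothetical device ID
REQUEST_TOPIC = f"request/{DEVICE_ID}"  # assumed request-side topic name

# Command data transmission device 10 side: publish a message.
publisher = mqtt.Client()
publisher.connect("mqtt.example.com", 1883)  # placeholder address of the MQTT server 53
publisher.publish(REQUEST_TOPIC, json.dumps({"uid": "U1", "type": "cmd", "id": "0001"}))

# Device 20 side: receive messages published to its own topic.
def on_message(client, userdata, msg):
    message = json.loads(msg.payload)
    print("received:", message)

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect("mqtt.example.com", 1883)
subscriber.subscribe(REQUEST_TOPIC)
subscriber.loop_forever()
```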
  • the communication between the command processing system 50 and the devices 20 may be carried out through a protocol different from the MQTT protocol.
  • the message reception module 210 receives the message through the Internet 6 .
  • the command execution module 220 executes the command based on the command contained in the message.
  • the command execution module 220 may directly interpret the command contained in the message, to thereby directly control the device 20 .
  • the command execution module 220 may include an internal execution module configured to execute a command received from the user terminal 60 or the like existing in the local area through the LAN 2 , and a conversion module configured to convert a command contained in a received message and internally transmit the converted command to the internal execution module.
  • the device 20 may activate the HTTP daemon, and the internal execution module may receive the command from the conversion module through the HTTP protocol.
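  • the conversion module's hand-off to the internal execution module might then look like the following sketch (the localhost endpoint is an assumption; the description only states that the internal execution module receives the command through the HTTP protocol):

```python
# Sketch of the conversion module: take the command contained in a
# received message and pass it to the internal execution module through
# the device's own HTTP daemon. The localhost URL is an assumption.
import requests

def forward_command(command: dict) -> int:
    url = "http://127.0.0.1" + command["path"]  # e.g., "/api/main/power?value=on"
    response = requests.request(command["method"], url)
    return response.status_code
```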
  • the registration control module 610 is configured to enable the speech recognition device 40 to receive a phrase of at least one of the first type and the second type, and to transmit the operation instruction to the command processing system 50 . More specifically, the registration control module 610 causes the speech recognition device 40 and the command processing system 50 to carry out processing of the use registration for this user. Moreover, the registration control module 610 registers the devices 20 subject to the instruction through the speech input to the command processing system 50 , based on an operation by the user.
  • the setting acquisition module 620 is configured to detect the devices 20 connected to the local area, and acquire, from the detected devices 20 , device information containing the names of the devices 20 used in the speech input for the devices 20 . Moreover, the setting acquisition module 620 acquires, from the registration control module 610 or the command processing system 50 , information indicating the types of phrases that the speech recognition device 40 can accept.
  • the device information and the information indicating the types of phrases are hereinafter generally referred to as “user setting”. This is because the name of the device 20 used in the speech input and the available types of phrases are items that can be set by the user.
  • the registration control module 610 registers one or a plurality of devices 20 specified by the user out of the detected devices 20 as devices 20 subject to the instruction through the speech input.
  • the speech use registration module 150 receives, from the registration control module 610 of the user terminal 60 , a request (use registration request) to register the use of the command processing system 50 through the speech input by the user, and carries out processing of enabling the speech input by the user through cooperation of the command data transmission device 10 and the speech recognition device 40 .
  • the device registration module 160 is configured to receive, from the registration control module 610 of the user terminal 60 , a request (device registration request) to register a device 20 subject to the instruction through the speech input, to thereby register the device 20 as the subject to the speech input.
  • the registration result transmission module 170 is configured to transmit a result (device registration result) of the registration of the device 20 and a template of sample phrases.
  • the setting module 260 is configured to receive a user ID registration instruction from the registration control module 610 of the user terminal 60 , and write a user ID contained in the user ID registration instruction in the nonvolatile memory. Moreover, the setting module 260 is configured to receive a connection start instruction from the registration control module 610 of the user terminal 60 , and connect to the MQTT server 53 included in the command processing system 50 , to thereby enable reception from the command processing system 50 .
  • the phrase generation module 630 is configured to generate a phrase capable of controlling a device 20 based on the user setting acquired by the setting acquisition module 620 .
  • the phrase output module 640 is configured to output data for displaying the generated phrase. As a result of the output of the data by the phrase output module 640 , the generated phrase is displayed on a device visually recognizable by the user, for example, a display screen of the user terminal 60 .
  • the device information acquisition module 140 is configured to acquire the device information on the device 20 .
  • the device information acquisition module 140 is configured to generate a message (hereinafter referred to as “device information request”), and transmit the message to the device 20 through the Internet 6 .
  • the device information request contains a command for causing a device 20 to transmit device information on the device 20 to the device information acquisition module 140 and a user ID. More specifically, the device information acquisition module 140 transmits the device information request to the device 20 through the MQTT server 53 .
  • the generation and the transmission of the device information request may also be carried out by the message generation module 120 and the message transmission module 130 .
  • when the device information request is received, the device information transmission module 230 returns the device information on the device 20 to the command data transmission device 10 through the Internet 6 .
  • the device information contains, for example, device type information on the device 20 , the name used by a user to identify the device 20 , the device ID, and the IP address. Moreover, the device information may contain current information indicating a current state of the device 20 .
  • the device information transmission module 230 transmits the device information to the command data transmission device 10 through the MQTT server 53 .
  • the device information acquisition module 140 receives the device information. Then, the device information acquisition module 140 extracts required items out of the received device information, and registers those items in the device table T1.
  • the processing of the use registration for the speech input contains processing of enabling the speech recognition device 40 to receive one or a plurality of types of phrases for a user and to transmit an operation instruction, and processing of registering information enabling the command processing system 50 to receive the operation instruction. This processing is required to be carried out before the user uses the command processing system 50 through the speech input.
  • FIG. 5 is a flowchart for illustrating an example of the processing of the use registration for the speech input.
  • the processing described below to be carried out by the speech use registration module 150 is carried out by the controller 11 executing a program corresponding to its function.
  • the processing to be carried out by the registration control module 610 is carried out by the controller 61 executing a program corresponding to its function.
  • the registration control module 610 of the user terminal 60 transmits the use registration request to the command processing system 50 (Step S 111 ). Moreover, the speech use registration module 150 of the command data transmission device 10 receives the use registration request through the communication unit 13 (Step S 121 ).
  • the use registration request contains information indicating a speech recognition device 40 for receiving the speech input and a command processing system 50 for processing an operation instruction, which are directly or indirectly specified by the user.
  • the registration control module 610 transmits authentication information on the user (Step S 112 ).
  • the speech use registration module 150 receives the authentication information on the user through the communication unit 13 (Step S 122 ).
  • the authentication information transmitted from the user terminal 60 may be, for example, the user ID and a password.
  • the registration control module 610 may transmit, as the authentication information, access permission information (a type of token) acquired from the authentication server by inputting the identification information on the user and the password, which are input by the user, to the authentication server.
  • the registration control module 610 may use the access permission information to acquire the user ID from the authentication server.
  • the user ID input in this case and the user ID used by the command processing system 50 and the device 20 may be different from each other.
  • the speech use registration module 150 may generate a hash value of the user ID contained in the authentication information as the user ID to be used in the subsequent processing.
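  • as a minimal sketch, assuming SHA-256 (the description only says “a hash value” and does not name the hash function):

```python
# Sketch: derive the user ID used in subsequent processing as a hash of
# the user ID contained in the authentication information. SHA-256 is an
# assumption; the description does not specify the hash function.
import hashlib

def internal_user_id(authenticated_user_id: str) -> str:
    return hashlib.sha256(authenticated_user_id.encode("utf-8")).hexdigest()
```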
  • the speech use registration module 150 carries out such setting that the speech recognition device 40 receives speech input of the first type of phrase from the user, and such setting that the speech recognition device 40 transmits an operation instruction for the first type of phrase to the command processing system 50 (Step S 123 ).
  • FIG. 6 is a table for showing an example of a speech input use table.
  • the speech input use table contains fields of “user ID”, “registration for first type”, and “registration for second type”.
  • the “user ID” field serves as a key for uniquely identifying a record in the table.
  • the “registration for first type” field indicates whether the processing for the use registration for receiving the first type of phrase has been carried out for the user indicated by the “user ID” field.
  • the “registration for second type” field indicates whether the processing for the use registration for receiving the second type of phrase has been carried out for the user indicated by the “user ID” field.
  • the speech use registration module 150 transmits a response indicating whether the use registration for the speech input is successful (Step S 125 ).
  • the registration control module 610 receives the use registration response (Step S 113 ).
  • FIG. 7 is a sequence diagram for illustrating the processing of the device registration.
  • FIG. 8 is a flowchart for illustrating an example of processing by the user terminal 60 in the device registration.
  • FIG. 9 is a flowchart for illustrating an example of processing by the device 20 in the device registration.
  • FIG. 10 is a flowchart for illustrating an example of processing by the command data transmission device 10 in the device registration.
  • Processing to be carried out by the device registration module 160 , the registration result transmission module 170 , and the device information acquisition module 140 is carried out by the controller 11 executing programs corresponding to their functions. Processing to be carried out by the registration control module 610 , the setting acquisition module 620 , the phrase generation module 630 , and the phrase output module 640 is carried out by the controller 61 executing programs corresponding to their functions. Processing to be carried out by the device information transmission module 230 and the setting module 260 is carried out by the controller 21 executing programs corresponding to their functions.
  • when the registration control module 610 of the user terminal 60 acquires an instruction to start the device registration from the user, the registration control module 610 transmits a use registration confirmation request to the command processing system 50 through the communication unit 63 (Step S 211 ). More specifically, the use registration confirmation request is information for inquiring whether one or a plurality of types of phrases are registered for use in the speech recognition device 40 . Then, the device registration module 160 of the command data transmission device 10 receives the use registration confirmation request, and transmits a use registration confirmation response indicating the state of the use registration of the speech recognition (Step S 251 ).
  • the device registration module 160 acquires a record stored in the speech input use table for the user who has instructed to start the device registration, and returns information indicating the types of phrases registered for use based on the values of the fields of the registration for the first type and the registration for the second type.
  • the use registration confirmation response may include information indicating the user ID of the user who operates the user terminal 60 .
  • the registration control module 610 of the user terminal 60 receives the use registration confirmation response, and stores the information indicating the types of phrases registered for use and the like in the storage 62 (Step S 212 ).
  • when the use registration has not been carried out, an error message is transmitted as the use registration confirmation response, and the registration control module 610 outputs a screen for enabling the user to carry out the use registration.
  • when the setting acquisition module 620 receives the use registration confirmation response, the setting acquisition module 620 detects one or a plurality of devices 20 connected to the local area (LAN 2 ), and transmits device information requests to the detected one or plurality of devices 20 (Step S 213 ).
  • the device information transmission module 230 of the device 20 receives the device information request, and transmits the device information to the user terminal 60 (Step S 221 ).
  • the device information contains the name of the device set by the user and the device ID.
  • the setting acquisition module 620 of the user terminal 60 receives the transmitted device information (Step S 214 ).
  • the registration control module 610 transmits user ID registration instructions to the retrieved devices 20 (Step S 215 ).
  • the setting module 260 of the device 20 receives the user ID registration instruction (Step S 222 ).
  • the user ID registration instruction contains the user ID to be registered in the device 20 .
  • the setting module 260 writes the user ID in the nonvolatile memory (Step S 223 ).
  • the registration control module 610 of the user terminal 60 transmits, to the retrieved devices, connection start instructions to connect to the MQTT server (Step S 216 ).
  • the setting module 260 of the device 20 receives the connection start instruction to connect to the MQTT server 53 (Step S 224 ). Then, the setting module 260 connects to the MQTT server 53 (Step S 225 ) so as to bring about a state in which an operation instruction can be received from the MQTT server 53 .
  • the registration control module 610 of the user terminal 60 transmits a device registration request to the command processing system 50 (command data transmission device 10 ) (Step S 217 ).
  • the device registration request contains the user ID and the device ID of the device 20 to be registered.
  • the command data transmission device 10 receives the device registration request from the user terminal 60 (Step S 252 ).
  • access permission information indicating permission for access to the authentication server may be contained in the device registration request in place of the user ID. In this case, the command data transmission device 10 may use the access permission information to acquire the user ID, or information that is a source of the user ID, from the authentication server.
  • the device registration module 160 transmits a device information request to the device 20 having the device ID contained in the device registration request (Step S 253 ).
  • the device information transmission module 230 receives the device information request from the command processing system 50 (MQTT server 53 ), and transmits the device information to the command processing system 50 (Step S 226 ).
  • the device registration module 160 receives the device information (Step S 254 ).
  • the device information contains information indicating a system of a command set, information on the input terminals of the device 20 , information on other devices 20 connected to the input terminals, information on whether the device can receive messages through the Internet 6 , and a list of commands that the device 20 can accept.
  • the device registration module 160 stores at least a part (containing the device ID) of the received device information associated with the user ID in the device table T1 (Step S 255 ).
  • the device registration module 160 may sort out and shape the information contained in the device information, and store the resulting information in the device table T1.
  • the registration result transmission module 170 selects templates of sample phrases based on the use registration state of the speech input, namely, the types of phrases that the speech recognition device 40 can receive for the user (Step S 256 ). Moreover, the device registration module 160 transmits, to the user terminal 60 , a device registration result indicating whether each of the devices has been successfully registered and the selected templates of the sample phrases (Step S 257 ).
  • FIG. 11 is a diagram for illustrating an example of the templates of the sample phrases.
  • the sample phrase is a phrase capable of controlling the device 20 when the phrase is input by the user through speech.
  • FIG. 11 is an illustration of an example of a case in which phrases of the first type and the second type are registered for use.
  • a first template TP 1 contains a template of sample phrases of the first type.
  • a second template TP 2 contains a template of sample phrases of the second type.
  • the content of the sample is the character string set to the item of “sentence”.
  • the name of the device can be set by the user, and thus cannot be prepared in advance. Therefore, a temporary character string “%s” is embedded in the content of the sample.
  • a character string set to an item of “feature” indicates a command that the device 20 subject to the operation is required to have.
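  • for illustration, the two templates might be structured as follows (a sketch consistent with the description of FIG. 11 ; the exact sentences and feature names are illustrative):

```python
# Sketch of the templates of FIG. 11. Each entry holds a "sentence" with
# the temporary character string "%s" standing in for the device name,
# and a "feature" naming a command the target device must accept. The
# concrete wording is illustrative.
first_template = [   # first type: contains the name "MC" of the command processing system 50
    {"sentence": "AAA, ask MC to turn on the %s", "feature": "Power"},
    {"sentence": "AAA, ask MC to turn up the volume of the %s", "feature": "Volume"},
]
second_template = [  # second type: does not contain the name "MC"
    {"sentence": "AAA, turn on the %s", "feature": "Power"},
    {"sentence": "AAA, turn up the volume of the %s", "feature": "Volume"},
]
```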
  • the phrase generation module 630 receives the result of the processing for the device registration request and the templates of the sample phrases from the command processing system 50 (command data transmission device 10 ) (Step S 218 ).
  • the phrase generation module 630 generates sample phrases based on the received templates, the use registration state, and the names of the devices, which are set by the user, and the phrase output module 640 outputs the generated sample phrases (Step S 219 ).
  • FIG. 12 is a flowchart for illustrating an example of the processing by the phrase generation module 630 and the phrase output module 640 .
  • the phrase generation module 630 selects a device 20 capable of executing commands required by sample phrases, and acquires the name of the selected device 20 (Step S 311 ).
  • the phrase generation module 630 collects, for example, commands set to the item of “feature” of the template, selects a record having all the collected commands in the field of the acceptable commands out of the records of the user stored in the device table T1, and acquires the name of a device contained in the record.
  • the phrase generation module 630 carries out processing in accordance with the user setting (information indicating the types of phrases registered for use and the name of the device). More specifically, the following processing is carried out.
  • the phrase generation module 630 determines whether the information indicating the types of phrases registered for use indicates a state in which the first type of phrases can be accepted (Step S 312 ).
  • the phrase generation module 630 replaces the portion of the device name out of the wording of the first template TP 1 by the name of the selected device, to thereby generate sample phrases (Step S 313 ).
  • the phrase output module 640 outputs data on the generated sample phrases so that the sample phrases are displayed on the screen of the user terminal 60 (Step S 314 ).
  • when the information indicating the types of phrases registered for use does not indicate the state in which the first type of phrase can be accepted (N in Step S 312 ), the processing in Step S 313 and Step S 314 is skipped.
  • FIG. 13 is a diagram for illustrating an example of the displayed sample phrases.
  • the example of FIG. 13 is an example of a screen displayed based on the first template TP 1 when the first type of phrase is set to be acceptable.
  • a device 20 having the name of “Living Room” can process a command for a volume operation, and corresponding phrases are thus output as the sample phrases.
  • the phrase generation module 630 determines whether the information indicating the types of phrases registered for use indicates that the second type of phrase can be accepted (Step S 315 ).
  • the phrase generation module 630 replaces the portion of the device name out of the wording of the second template TP 2 by the name of the selected device, to thereby generate sample phrases (Step S 316 ).
  • the phrase output module 640 outputs data on the generated sample phrases so that the sample phrases are displayed on the screen of the user terminal 60 (Step S 317 ). In this case, when the information indicating the types of phrases registered for use does not indicate the state in which the second type of phrase can be accepted (N in Step S 315 ), the processing in Step S 316 and Step S 317 is skipped.
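  • taken together, Step S 311 to Step S 317 amount to something like the following sketch (data structures follow the template sketch above; the function and variable names are hypothetical):

```python
# Sketch of the processing of FIG. 12: for each type of phrase registered
# for use, pick a device 20 that accepts every command the template
# requires (Step S311), substitute its name for "%s" (Steps S313/S316),
# and collect the result for display (Steps S314/S317).
def generate_sample_phrases(templates, devices, registered_types):
    phrases = []
    for phrase_type, template in templates.items():
        if phrase_type not in registered_types:  # Steps S312/S315: type not usable
            continue
        required = {entry["feature"] for entry in template}
        device = next(
            (d for d in devices if required <= set(d["acceptable_commands"])),
            None,
        )
        if device is None:                       # no device can execute the commands
            continue
        phrases += [entry["sentence"] % device["name"] for entry in template]
    return phrases

# Usage example with minimal inline data:
samples = generate_sample_phrases(
    {"first": [{"sentence": "AAA, ask MC to turn on the %s", "feature": "Power"}]},
    [{"name": "Living Room", "acceptable_commands": ["Power", "Volume"]}],
    registered_types={"first"},
)
# samples == ["AAA, ask MC to turn on the Living Room"]
```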
  • FIG. 14 is a diagram for illustrating another example of the displayed sample phrases.
  • the example of FIG. 14 is an example of a screen displayed based on the second template TP 2 when the second type of phrase is set to be acceptable.
  • the user can easily and reliably acquire the sample phrases acceptable in the speech input through dynamic control of the sample phrases displayed as the input example of the command. Moreover, the sample phrases are generated through use of the user setting as well, and hence, even when the acceptable phrases change in accordance with the user, the user is not required to replace the content of the sample phrases. The user can thus easily recognize acceptable and valid sample phrases.
  • FIG. 15 is a flowchart for illustrating an example of processing by the operation instruction reception module 110 , the message generation module 120 , the message transmission module 130 , the message reception module 210 , and the command execution module 220 .
  • the processing described below to be carried out by the operation instruction reception module 110 , the message generation module 120 , and the message transmission module 130 is carried out by the controller 11 executing programs corresponding to their functions.
  • the processing to be carried out by the message reception module 210 and the command execution module 220 is carried out by the controller 21 executing programs corresponding to their functions.
  • the operation instruction reception module 110 acquires an operation instruction from the speech recognition device 40 (Step S 511 ). Then, the message generation module 120 acquires the user ID of a user subject to the operation instruction based on the acquired operation instruction, and acquires the device ID of a device 20 that is associated with the user ID and is subject to the operation instruction, based on the name of the device contained in the operation instruction and the device table T1 (Step S 512 ). Then, the message generation module 120 generates a message containing a command corresponding to the operation instruction and the acquired user ID (Step S 513 ). The message transmission module 130 transmits the generated message to the device 20 subject to the operation instruction (Step S 514 ).
  • the message reception module 210 of the device 20 subject to the operation instruction receives the message (Step S 521 ). Then, the message reception module 210 compares the user ID contained in the message and the user ID written in the nonvolatile memory of the device 20 with each other (Step S 522 ). When those user IDs are the same (Y in Step S 522 ), the command execution module 220 executes the command contained in the message (Step S 523 ). On the other hand, when those user IDs are different from each other (N in Step S 522 ), the message is discarded, and the command contained in the message is not executed.
  • the message reception module 210 controls whether the command is to be executed in accordance with the comparison result of the user IDs. As a result, an unexpected operation of the device 20 can be prevented.
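  • the check of Step S 522 amounts to the following sketch (the function names are hypothetical):

```python
# Sketch of Steps S521 to S523: execute the command only when the user ID
# in the received message matches the user ID written in the device's
# nonvolatile memory; otherwise discard the message.
def handle_message(message: dict, stored_user_id: str, execute_command) -> bool:
    if message.get("uid") != stored_user_id:
        return False                       # N in Step S522: discard, do not execute
    execute_command(message["command"])    # Y in Step S522: execute (Step S523)
    return True
```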
  • in a related-art case, in a case where the device 20 is transferred to another user but the device registration is not reset on the command processing system 50 side, when the user who has transferred the device 20 inputs a command directed to the device 20 through speech by mistake, the device 20 may operate by mistake in response to the command.
  • in at least one embodiment, however, the message containing the command is discarded on the device 20 side, and the possibility of such an unexpected operation can thus be suppressed.
  • a device control system includes: setting acquisition means for acquiring user setting relating to a device; phrase generation means for generating a phrase for controlling the device based on the acquired user setting; and display data output means for outputting data for displaying the generated phrase.
  • a device control method includes: acquiring user setting relating to a device; generating a phrase for controlling the device based on the acquired user setting; and outputting data for displaying the generated phrase.
  • a program causes a computer to function as: setting acquisition means for acquiring user setting relating to a device; phrase generation means for generating a phrase for controlling the device based on the acquired user setting; and display data output means for outputting data for displaying the generated phrase.
  • the setting acquisition means may be configured to acquire a name for identifying the device in speech input, and the phrase generation means may be configured to generate a phrase for controlling the device based on the acquired name.
  • the setting acquisition means may be configured to acquire a command executable by each of a plurality of devices in the speech input, and the phrase generation means may be configured to generate a phrase containing any one of names of the plurality of devices based on the command executable by each of the plurality of devices.
  • the setting acquisition means may be configured to acquire information indicating a type of phrase acceptable by a recognition module, which is configured to recognize an instruction speech of a user, as an instruction directed to the device, and the phrase generation means may be configured to generate a phrase for controlling the device based on the acquired information.
  • the recognition module is set by the user so that any one of a first type of phrase and a second type of phrase is acceptable, the first type of phrase containing an identification name for identifying a system configured to acquire an operation instruction generated by the recognition module to control the device, the second type of phrase being inhibited from containing the identification name, and the phrase generation means may be configured to generate a phrase for controlling the device based on whether each of the first type of phrase and the second type of phrase is acceptable.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Provided is a device control system configured to: acquire user setting relating to a device; generate a phrase for controlling the device based on the acquired user setting; and output data for displaying the generated phrase.

Description

CROSS-REFERENCE TO RELATED APPLICATION
The present application is a continuation of International Application No. PCT/JP2018/042864 filed on Nov. 20, 2018, which claims priority from Japanese Application No. JP 2017-231631 filed on Dec. 1, 2017. The contents of these applications are hereby incorporated by reference into this application.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a device control system, a device control method, and a terminal device.
2. Description of the Related Art
In recent years, there has been an increasing number of devices, such as smartphones and smart speakers, that are easily operable, without physical operations, through speech input using speech recognition.
In the speech input, some phrase is input as an instruction. A sentence containing the phrase to be input basically has a high degree of freedom. A user thus does not know which phrases are acceptable as the speech input. In view of this, a method of providing manuals describing the acceptable phrases is conceivable, but such manuals can guide the user to only fixed phrases. Therefore, there has been a problem in that, while the user can be guided to phrases common to all users, the user cannot be guided to phrases that change in accordance with the user.
SUMMARY OF THE INVENTION
The present invention has been made in view of the above-mentioned problem, and therefore has an object to provide a technology that allows a user to easily and reliably acquire phrases acceptable in speech input.
In order to solve the above-mentioned problem, a device control system according to the present invention includes: at least one processor; and at least one memory device that stores a plurality of instructions, which when executed by the at least one processor, causes the at least one processor to operate to: acquire user setting relating to a device; generate a phrase for controlling the device based on the acquired user setting; and output data for displaying the generated phrase.
Moreover, a device control method according to the present invention includes: acquiring user setting relating to a device; generating a phrase for controlling the device based on the acquired user setting; and outputting data for displaying the generated phrase.
Moreover, a program according to the present invention causes a computer to function as: setting acquisition means for acquiring user setting relating to a device; phrase generation means for generating a phrase for controlling the device based on the acquired user setting; and display data output means for outputting data for displaying the generated phrase.
According to at least one embodiment of the present invention, the user can easily acquire phrases acceptable in the speech input.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram for illustrating a configuration of a device control system according to at least one embodiment of the present invention.
FIG. 2 is a diagram for illustrating an example of device tables.
FIG. 3 is a block diagram for illustrating functions implemented by a speech recognition device, a command data transmission device, a device, and a user terminal.
FIG. 4 is a diagram for illustrating an example of a message.
FIG. 5 is a flowchart for illustrating an example of processing of use registration for speech input.
FIG. 6 is a table for showing an example of a speech input use table.
FIG. 7 is a sequence diagram for illustrating processing of device registration.
FIG. 8 is a flowchart for illustrating an example of processing by the user terminal in the device registration.
FIG. 9 is a flowchart for illustrating an example of processing by the device in the device registration.
FIG. 10 is a flowchart for illustrating an example of processing by the command data transmission device in the device registration.
FIG. 11 is a diagram for illustrating an example of templates of sample phrases.
FIG. 12 is a flowchart for illustrating an example of processing by a phrase generation module and a phrase output module.
FIG. 13 is a diagram for illustrating an example of displayed sample phrases.
FIG. 14 is a diagram for illustrating another example of the displayed sample phrases.
FIG. 15 is a flowchart for illustrating an example of processing by an operation instruction reception module, a message generation module, a message transmission module, a message reception module, and a command execution module.
DETAILED DESCRIPTION OF THE INVENTION
Description is now made of an example of at least one embodiment of the present invention with reference to the drawings.
[1. System Configuration] FIG. 1 is a diagram for illustrating a configuration of a device control system according to at least one embodiment of the present invention. As illustrated in FIG. 1, the device control system 1 includes a first device 20-1, a second device 20-2, a third device 20-3, a speech input device 30, a speech recognition device 40, and a command processing system 50. The first device 20-1, the second device 20-2, and the third device 20-3 are hereinafter sometimes generally referred to as “devices 20”.
The devices 20 and the speech input device 30 are installed in a local area, and are connected to a LAN 2. The LAN 2 may be a wired LAN or a wireless LAN. The LAN 2 is connected to the Internet 6 through a router 4. The speech recognition device 40 and the command processing system 50 are installed outside the local area. In other words, the speech recognition device 40 and the command processing system 50 are installed on the Internet 6 side as viewed from the devices 20 and the speech input device 30. The “local area” is an area having a limited range in which communication through the LAN 2 is available.
The devices 20 are devices to be controlled by the device control system 1. Although the three devices 20 are illustrated in FIG. 1, four or more devices 20 may be included, or only two or fewer devices 20 may be included.
For example, the device 20 is an audio device or an audio/visual (AV) device. Specifically, the device 20 is an AV receiver, an AV amplifier, a speaker, an optical disc player (a Blu-ray disc (trademark) player, a DVD (trademark) player, or the like), or a television receiver. Moreover, for example, the device 20 may be a musical instrument (an electronic musical instrument, an electric musical instrument, or the like). The device 20 may be a device other than those devices.
In the following, an example in which the first device 20-1 is an AV receiver, the second device 20-2 is a television receiver, and the third device 20-3 is a Blu-ray disc player is assumed.
The second device 20-2 is connected to a first high-definition multimedia interface (HDMI) (trademark) terminal (HDMI 1) of the first device 20-1 through an HDMI cable. The third device 20-3 is connected to a second HDMI terminal (HDMI 2) of the first device 20-1 through an HDMI cable. Moreover, private IP addresses “192.168.0.2”, “192.168.0.3”, and “192.168.0.4” are set to the first device 20-1, the second device 20-2, and the third device 20-3, respectively.
As illustrated in FIG. 1 , the first device 20-1 includes a controller 21, a storage 22, and a communication unit 23. The controller 21 includes at least one microprocessor (CPU), and is configured to carry out processing in accordance with programs stored in the storage 22. The storage 22 includes a main storage (e.g., RAM) and an auxiliary storage (e.g., a nonvolatile semiconductor memory or a hard disk drive). The storage 22 is configured to store programs and data. The communication unit 23 is configured to transmit/receive data to/from other devices. The second device 20-2 and the third device 20-3 also include the controller 21, the storage 22, and the communication unit 23, which are not shown in FIG. 1 .
The device 20 may include a component (e.g., an optical disc drive or a memory card slot) configured to read programs and data stored in an information storage medium (e.g., an optical disc or a memory card). Further, the programs may be supplied to the devices 20 through the information storage medium. The programs may be supplied to the devices 20 through the Internet 6.
The speech input device 30 includes a microphone and is configured to receive speech input. In at least one embodiment, the speech input device 30 is used by a user for the speech input of an operation instruction to the devices 20. For example, when the user wants to start up the device 20 installed in a living room from a standby state through the command processing system 50 supplied from a company X, the user inputs “AAA, ask MC to turn on the Living Room” or the like to the speech input device 30. In this case, “AAA” is a word (wake word) for starting the speech input to the speech input device 30, and is, for example, a name of the speech input device 30. “MC” is a name of the command processing system 50.
The speech data indicating the speech (phrase) input to the speech input device 30 is transmitted together with a user ID to the speech recognition device 40 through the Internet 6. The speech recognition device 40 is implemented by, for example, a server computer. The speech recognition device 40 may be implemented by a plurality of servers through so-called cloud computing. The speech recognition device 40 is configured to carry out speech recognition processing, to thereby convert the speech data to data in a form easily recognized by a program (command processing system 50). For example, the speech recognition device 40 generates an operation instruction in a predetermined form, which contains strings indicating a type of instruction by the user and a subject to the instruction, from the speech data on the phrase. Then, the operation instruction is transmitted together with the user ID to the command processing system 50. The user ID may be added by any device (processing) on the Internet side before the speech data is transmitted to the command processing system 50.
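For illustration only, an operation instruction in such a predetermined form might look like the following sketch; the field names and values are hypothetical, since the specification does not fix a concrete format.

```python
# Hypothetical sketch of an operation instruction generated by the speech
# recognition device 40 from the phrase "AAA, ask MC to turn on the Living Room".
# All field names here are assumptions, not taken from the specification.
operation_instruction = {
    "userId": "U1",           # user ID transmitted together with the instruction
    "intent": "TurnOn",       # string indicating the type of instruction by the user
    "target": "Living Room",  # string indicating the subject of the instruction
}
```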
Under this state, the speech recognition device 40 is capable of transmitting the operation instruction to a command processing system 50 that is capable of processing the content of the phrase input by the user, in accordance with that content, for example, a specific word group in the phrase. The user registers the command processing systems 50 to be used in the speech recognition device 40 in advance. The speech recognition device 40 selects any one of the registered command processing systems 50 based on words contained in the phrase input from the user, and transmits the operation instruction to the selected command processing system 50. Moreover, the speech recognition device 40 may receive a plurality of types of phrases corresponding to a specific device 20, and control the device 20 through the command data transmission device 10 corresponding to each of the types of phrases. For example, a format of wording of the instruction in a phrase differs in accordance with the type of the phrase.
For example, in a first type of phrase, a phrase for starting up the device 20 having a name “Living Room” is “AAA, ask MC to turn on the Living Room”. In a second type, a phrase for starting up the device 20 is “AAA, turn on the Living Room”. While the phrase in the first type contains “MC”, which is a name for identifying the command processing system 50, the phrase in the second type does not contain the name. Whether the speech recognition device 40 receives the first type of phrase to transmit an operation instruction, or receives the second type of phrase to transmit an operation instruction is set by the user. Detailed description is later made of this setting. The command processing system 50, which is the transmission destination when the speech recognition device 40 receives the first type of phrase, may be different from that of the second type.
A user terminal 60 is configured to receive a physical operation, for example, a touch operation by the user, to thereby control the device 20. Moreover, the user terminal 60 is configured to set the command processing system 50 and the speech recognition device 40 based on an operation by the user. The user terminal 60 is, for example, a smartphone or a personal computer. The user terminal 60 includes a controller 61, a storage 62, and a communication unit 63. The controller 61, the storage 62, and the communication unit 63 are the same as the controller 21, the storage 22, and the communication unit 23, respectively.
As illustrated in FIG. 1 , the command processing system 50 includes the command data transmission device 10, a database 52, and a message queueing telemetry transport (MQTT) server 53.
The database 52 stores various types of data. For example, the database 52 stores information on devices 20 owned by respective users. FIG. 2 is a diagram for illustrating an example of device tables stored in the database 52. A device table T1 is stored for each of the users (while associated with the user ID). The user ID used in the command processing system 50 (database 52), the user terminal 60, and the devices 20 may be different from or the same as the user ID used in the speech input device 30 and the speech recognition device 40. When those user IDs are different from each other, correspondence data for converting those user IDs to each other is stored in the command processing system 50 or the speech recognition device 40.
As shown in FIG. 2 , the device table T1 includes fields of “ID”, “name”, “device ID”, “IP address”, “command type”, “terminals”, “connection destination”, “reception availability”, and “acceptable commands”.
The “ID” field indicates information for uniquely identifying each device 20 owned by the user. In FIG. 2 , the first device 20-1 corresponds to a device ID of “1”, and the second device 20-2 corresponds to a device ID of “2”.
The “name” field indicates a name of the device 20. This name is used by the user to specify the device 20 subject to the operation instruction. The name may be any name set by the user, or may be, for example, an initial name set by a manufacturer of the device 20 or the like, which the user may modify.
The “device ID” field indicates a device ID for uniquely identifying the device 20. The device ID may be a MAC address of the device 20, or an ID generated based on the MAC address. The “IP address” field indicates an IP address set to a wireless or wired network interface card provided for the device 20. The “command type” field indicates a type (system) of commands used in the device 20. The “terminals” field indicates a list of input terminals provided for the device 20. The “connection destination” field indicates an input terminal of another device 20 to which the device 20 is connected when the device 20 is connected to the other device 20 and the sound output from the device 20 is input to the other device 20.
The “reception availability” field indicates whether a message containing a command can be received through the Internet 6. Detailed description is later made of the message. For example, “0” or “1” is registered in the “reception availability” field. “0” indicates that a message cannot be received through the Internet 6. “1” indicates that a message can be received through the Internet 6.
The “acceptable commands” field indicates a list of commands that the device 20 can accept. When “Power” is set in the list of the “acceptable commands” field, the field indicates that the device can be started up from the standby state through an external command. Otherwise, the field indicates that the device cannot be started up from the standby state. Moreover, when “Volume” is set in the list of the “acceptable commands” field, the field indicates that a volume of the device can be controlled through an external command. Otherwise, the field indicates that the volume cannot be controlled through an external command.
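As an illustration, one record of the device table T1 for the first device 20-1 might be represented as follows; this is a sketch assuming the example values given in this description, and the device ID and command-type label are hypothetical placeholders.

```python
# Illustrative record of the device table T1 for the first device 20-1.
# The "device_id" and "command_type" values are hypothetical placeholders.
device_record = {
    "id": 1,
    "name": "Living Room",             # name used in the speech input to specify the device
    "device_id": "AABBCCDDEEFF",       # e.g., derived from the MAC address (hypothetical)
    "ip_address": "192.168.0.2",
    "command_type": "type-A",          # system of commands used by the device (hypothetical)
    "terminals": ["HDMI1", "HDMI2"],   # input terminals provided for the device
    "connection_destination": None,    # input terminal of another device 20, if connected
    "reception_availability": 1,       # 1: messages receivable through the Internet 6
    "acceptable_commands": ["Power", "Volume"],
}
```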
The data in the device table T1 is registered by each user. The user can register information on the device 20 owned by the user with the device table T1 through access from the user terminal 60 to the command data transmission device 10. Detailed description is later made of the registration.
Data other than the device tables T1 is stored in the database 52. For example, the database 52 stores, in association with each user, data indicating the types of phrases that the user can input for the device 20. Additionally, data indicating a correspondence between an operation instruction and a command (namely, data for converting the operation instruction to the command) may be stored in the database 52.
The command data transmission device 10 is implemented by, for example, a server computer. As illustrated in FIG. 1 , the command data transmission device 10 includes a controller 11, a storage 12, and a communication unit 13. The controller 11, the storage 12, and the communication unit 13 are the same as the controller 21, the storage 22, and the communication unit 23, respectively. The command data transmission device 10 may include a component (e.g., an optical disc drive or a memory card slot) configured to read programs and data stored in an information storage medium (e.g., an optical disc or a memory card). Further, the programs may be supplied to the command data transmission device 10 through the information storage medium. The programs may be supplied to the command data transmission device 10 through the Internet 6.
The command data transmission device 10 can make access to the database 52. The command data transmission device 10 and the database 52 may be implemented by a single server computer, or may be implemented by individual server computers.
The command data transmission device 10 is configured to receive an operation instruction, which is transmitted from the speech recognition device 40 and is directed to the device 20, generate a message containing a command based on the operation instruction, and transmit the message to the device 20. More specifically, the message is transmitted to the device 20 through the MQTT server 53. The MQTT server 53 is configured to transmit/receive data through use of the MQTT protocol. The command data transmission device 10 and the MQTT server 53 may be implemented by a single server computer, or may be implemented by individual server computers.
[2. Functional Blocks] FIG. 3 is a block diagram for illustrating functions implemented by the command data transmission device 10, the devices 20, and the user terminal 60.
As illustrated in FIG. 3 , the command data transmission device 10 includes an operation instruction reception module 110, a message generation module 120, a message transmission module 130, a device information acquisition module 140, a speech use registration module 150, a device registration module 160, and a registration result transmission module 170. The operation instruction reception module 110, the message generation module 120, the message transmission module 130, the device information acquisition module 140, the speech use registration module 150, the device registration module 160, and the registration result transmission module 170 are implemented by the controller 11 executing programs for the respective functions stored in the storage 12.
Moreover, as illustrated in FIG. 3 , the device 20 includes a message reception module 210, a command execution module 220, a device information transmission module 230, and a setting module 260. The message reception module 210, the command execution module 220, the device information transmission module 230, and the setting module 260 are implemented by the controller 21 executing programs for the respective functions stored in the storage 22.
Moreover, as illustrated in FIG. 3 , the user terminal 60 includes a registration control module 610, a setting acquisition module 620, a phrase generation module 630, and a phrase output module 640. The registration control module 610, the setting acquisition module 620, the phrase generation module 630, and the phrase output module 640 are implemented by the controller 61 executing programs for the respective functions stored in the storage 62.
[2-1] First, description is made of the operation instruction reception module 110, the message generation module 120, the message transmission module 130, the message reception module 210, and the command execution module 220.
The operation instruction reception module 110 is configured to receive an operation instruction directed to the device 20. For example, the operation instruction reception module 110 receives an operation instruction from the speech recognition device 40. The operation instruction is converted by the speech recognition device 40 to data in a form, for example, text data, which is recognizable by a program.
When the operation instruction reception module 110 receives the operation instruction directed to the device 20, the message generation module 120 generates a message containing a user ID and a command. The user ID is used to identify the user relating to the operation instruction. The command is data, for example, text, for causing an operation to be carried out in accordance with the operation instruction.
FIG. 4 is a diagram for illustrating an example of a message D1. The message D1 illustrated in FIG. 4 is an example of a message generated when an operation instruction to start up the first device 20-1 from the standby state is received.
The message D1 contains items of “uid”, “type”, “id”, and “command”. The item of “uid” is a user ID of a user who issues the operation instruction. In the example illustrated in FIG. 4 , a user ID of “U1” is set to the item of “uid”. The item of “type” indicates a type of the data. In the example illustrated in FIG. 4 , “cmd” is set to the item of “type”. This indicates that a command is contained in the message. The item of “id” indicates identification information for uniquely identifying the message. Data set to the item of “command” indicates content of the command. The item of “command” contains items of “ip”, “path”, and “method”. The item of “ip” indicates a destination of the command. In the example of FIG. 4 , the IP address of the first device 20-1 is set to the item of “ip”. The item of “path” corresponds to a command itself. The item of “method” indicates a method of the HTTP protocol to be used.
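Expressed as data, the message D1 might look like the following sketch; the “id” and “path” values are hypothetical, since the concrete command syntax is not specified here.

```python
# Sketch of the message D1 of FIG. 4 for starting up the first device 20-1.
# The "id" and "path" values are hypothetical placeholders.
message_d1 = {
    "uid": "U1",              # user ID of the user who issued the operation instruction
    "type": "cmd",            # indicates that a command is contained in the message
    "id": "msg-0001",         # identification information for uniquely identifying the message
    "command": {
        "ip": "192.168.0.2",  # destination: IP address of the first device 20-1
        "path": "/power/on",  # the command itself (hypothetical)
        "method": "GET",      # method of the HTTP protocol to be used
    },
}
```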
The message transmission module 130 is configured to transmit the message generated by the message generation module 120 to the device 20. When the device 20 subject to the operation instruction cannot receive the message through the Internet 6 (“0” is set to the “reception availability” field), the message transmission module 130 may transmit the message to another device 20, and then cause the another device 20 to transfer the message to the subject device 20.
The message is transmitted to the device 20 through the MQTT protocol. In other words, the message transmission module 130 transmits the message to the device 20 through the MQTT server 53. A plurality of topics exist in the MQTT server 53. Identification information on each of the topics is set based on the device IDs of the devices 20. At the time of the transmission from the command data transmission device 10 to the devices 20, the command data transmission device 10 publishes the message to a topic on a request side, which contains identification information corresponding to the device IDs, and the devices 20 receive the message published to the topic on the request side, which contains the identification information on the devices 20. The communication between the command processing system 50 and the devices 20 may be carried out through a protocol different from the MQTT protocol.
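The following is a minimal sketch of this exchange, assuming the paho-mqtt client library (1.x style API) and a hypothetical topic scheme cmd/request/&lt;device_id&gt;; the specification only states that topic identifiers are set based on the device IDs.

```python
# Minimal sketch of the MQTT exchange between the command data transmission
# device 10 and a device 20. The broker address, port, and topic naming
# scheme are assumptions for illustration (paho-mqtt 1.x style constructor).
import json
import paho.mqtt.client as mqtt

DEVICE_ID = "AABBCCDDEEFF"            # hypothetical device ID
TOPIC = f"cmd/request/{DEVICE_ID}"    # request-side topic derived from the device ID

# Device 20 side: receive messages published to the topic for its own device ID.
def on_message(client, userdata, msg):
    message = json.loads(msg.payload)
    print("received command:", message["command"])

device = mqtt.Client()
device.on_message = on_message
device.connect("mqtt.example.com", 1883)  # hypothetical address of the MQTT server 53
device.subscribe(TOPIC)
device.loop_start()

# Command data transmission device 10 side: publish the message to the topic.
sender = mqtt.Client()
sender.connect("mqtt.example.com", 1883)
sender.publish(TOPIC, json.dumps({
    "uid": "U1", "type": "cmd", "id": "msg-0001",
    "command": {"ip": "192.168.0.2", "path": "/power/on", "method": "GET"},
}))
```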
The message reception module 210 receives the message through the Internet 6. The command execution module 220 executes the command based on the command contained in the message. The command execution module 220 may directly interpret the command contained in the message, to thereby directly control the device 20. Moreover, the command execution module 220 may include an internal execution module configured to execute a command received from the user terminal 60 or the like existing in the local area through the LAN 2, and a conversion module configured to convert a command contained in a received message and internally transmit the converted command to the internal execution module. For example, the device 20 may activate the HTTP daemon, and the internal execution module may receive the command from the conversion module through the HTTP protocol.
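A sketch of such a conversion module is shown below, assuming the internal execution module is an HTTP daemon listening on the device itself; the port and path values are hypothetical.

```python
# Sketch of a conversion module that forwards a command from a received
# message to an internal HTTP daemon on the device. Port and path values
# are hypothetical.
import requests

def forward_command(command: dict) -> int:
    """Convert a command contained in a message into an internal HTTP request."""
    url = "http://127.0.0.1:8080" + command["path"]    # internal execution module (HTTP daemon)
    response = requests.request(command["method"], url)
    return response.status_code
```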
[2-2] Description is now made of an overview of the registration control module 610, the setting acquisition module 620, the phrase generation module 630, the phrase output module 640, the speech use registration module 150, the device registration module 160, the registration result transmission module 170, and the setting module 260.
The registration control module 610 is configured to enable the speech recognition device 40 to receive a phrase of at least one of the first type and the second type, and to enable the speech recognition device 40 to transmit the operation instruction to the command processing system 50. More specifically, the registration control module 610 causes the speech recognition device 40 and the command processing system 50 to carry out processing of the use registration for this user. Moreover, the registration control module 610 registers, in the command processing system 50, the devices 20 subject to the instruction through the speech input, based on an operation by the user.
The setting acquisition module 620 is configured to detect the devices 20 connected to the local area, and acquire, from the detected devices 20, device information containing the names of the devices 20 used in the speech input for the devices 20. Moreover, the setting acquisition module 620 acquires, from the registration control module 610 or the command processing system 50, information indicating the types of phrases that the speech recognition device 40 can accept. The device information and the information indicating the types of phrases are hereinafter generally referred to as “user setting”. This is because the name of the device 20 used in the speech input and the available types of phrases are items that can be set by the user. When a plurality of devices 20 are detected by the setting acquisition module 620, for example, the registration control module 610 registers one or a plurality of devices 20 specified by the user out of the detected devices 20 as devices 20 subject to the instruction through the speech input.
The speech use registration module 150 receives, from the registration control module 610 of the user terminal 60, a request (use registration request) to register the use of the command processing system 50 through the speech input by the user, and carries out processing of enabling the speech input by the user through cooperation of the command data transmission device 10 and the speech recognition device 40.
The device registration module 160 is configured to receive, from the registration control module 610 of the user terminal 60, a request (device registration request) to register a device 20 subject to the instruction through the speech input, to thereby register the device 20 as the subject to the speech input. The registration result transmission module 170 is configured to transmit a result (device registration result) of the registration of the device 20 and a template of sample phrases.
The setting module 260 is configured to receive a user ID registration instruction from the registration control module 610 of the user terminal 60, and write a user ID contained in the user ID registration instruction in the nonvolatile memory. Moreover, the setting module 260 is configured to receive a connection start instruction from the registration control module 610 of the user terminal 60, and connect to the MQTT server 53 included in the command processing system 50, to thereby enable reception from the command processing system 50.
The phrase generation module 630 is configured to generate a phrase capable of controlling a device 20 based on the user setting acquired by the setting acquisition module 620. The phrase output module 640 is configured to output data for displaying the generated phrase. As a result of the output of the data by the phrase output module 640, the generated phrase is displayed on a device visually recognizable by the user, for example, a display screen of the user terminal 60.
[2-3] Description is now made of the device information acquisition module 140 and the device information transmission module 230.
The device information acquisition module 140 is configured to acquire the device information on the device 20. The device information acquisition module 140 is configured to generate a message (hereinafter referred to as “device information request”), and transmit the message to the device 20 through the Internet 6. The device information request contains a user ID and a command for causing a device 20 to transmit device information on the device 20 to the device information acquisition module 140. More specifically, the device information acquisition module 140 transmits the device information request to the device 20 through the MQTT server 53. The generation and the transmission of the device information request may also be carried out by the message generation module 120 and the message transmission module 130.
When the device information request is received, the device information transmission module 230 returns the device information on the device 20 to the command data transmission device 10 through the Internet 6. The device information contains, for example, device type information on the device 20, the name used by a user to identify the device 20, the device ID, and the IP address. Moreover, the device information may contain current information indicating a current state of the device 20. The device information transmission module 230 transmits the device information to the command data transmission device 10 through the MQTT server 53. The device information acquisition module 140 receives the device information. Then, the device information acquisition module 140 extracts required items out of the received device information, and registers those items in the device table T1.
[3. Processing] Description is now made of processing to be carried out by the device control system 1.
[3-1] First, description is made of processing of the use registration for the speech input. The processing of the use registration for the speech input includes processing of enabling the speech recognition device 40 to receive one or a plurality of types of phrases for a user and to transmit an operation instruction, and processing of registering information for enabling the command processing system 50 to receive the operation instruction. This processing is required to be carried out before the user uses the command processing system 50 through the speech input.
Description is now made of an example of processing for the use registration for the first type of phrases. The second type of phrases may be registered through the same processing as that described below. FIG. 5 is a flowchart for illustrating an example of the processing of the use registration for the speech input. The processing described below to be carried out by the speech use registration module 150 is carried out by the controller 11 executing a program corresponding to its function. The processing to be carried out by the registration control module 610 is carried out by the controller 61 executing a program corresponding to its function.
First, when a user issues an instruction to start processing for the use registration, the registration control module 610 of the user terminal 60 transmits the use registration request to the command processing system 50 (Step S111). Moreover, the speech use registration module 150 of the command data transmission device 10 receives the use registration request through the communication unit 13 (Step S121). The use registration request contains information indicating a speech recognition device 40 for receiving the speech input and a command processing system 50 for processing an operation instruction, which are directly or indirectly specified by the user. Moreover, the registration control module 610 transmits authentication information on the user (Step S112). The speech use registration module 150 receives the authentication information on the user through the communication unit 13 (Step S122). The authentication information transmitted from the user terminal 60 may be, for example, the user ID and a password. Moreover, when an authentication server other than the command processing system 50 exists, the registration control module 610 may transmit, as the authentication information, access permission information (a type of token) acquired from the authentication server by inputting, to the authentication server, the identification information on the user and the password, which are input by the user. In this case, the registration control module 610 may use the access permission information to acquire the user ID from the authentication server. The user ID input in this case and the user ID used by the command processing system 50 and the device 20 may be different from each other. For example, the speech use registration module 150 may generate a hash value of the user ID contained in the authentication information as the user ID to be used in the subsequent processing.
When the authentication information is acquired, the speech use registration module 150 carries out such setting that the speech recognition device 40 receives speech input of the first type of phrase from the user, and such setting that the speech recognition device 40 transmits an operation instruction for the first type of phrase to the command processing system 50 (Step S123).
Then, the speech use registration module 150 stores a use registration state of the speech input in a database (Step S124). FIG. 6 is a table for showing an example of a speech input use table. The speech input use table contains fields of “user ID”, “registration for first type”, and “registration for second type”. The “user ID” field serves as a key for uniquely identifying a record in the table. The “registration for first type” field indicates whether the processing for the use registration for receiving the first type of phrase has been carried out for the user indicated by the “user ID” field. The “registration for second type” field indicates whether the processing for the use registration for receiving the second type of phrase has been carried out for the user indicated by the “user ID” field.
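As an illustration, one record of the speech input use table might be represented as follows; the flag representation is an assumption.

```python
# Illustrative record of the speech input use table of FIG. 6.
speech_input_use_record = {
    "user_id": "U1",
    "registration_for_first_type": True,    # use registration done for the first type of phrase
    "registration_for_second_type": False,  # use registration not done for the second type
}
```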
After the use registration state for the speech input is stored in the database, the speech use registration module 150 transmits a response indicating whether the use registration for the speech input is successful (Step S125). The registration control module 610 receives the use registration response (Step S113).
[3-2] Description is now made of the processing (registration processing) of registering the devices 20 subject to the instruction through the speech input in the command processing system 50 based on the operation by the user. FIG. 7 is a sequence diagram for illustrating the processing of the device registration. FIG. 8 is a flowchart for illustrating an example of processing by the user terminal 60 in the device registration. FIG. 9 is a flowchart for illustrating an example of processing by the device 20 in the device registration. FIG. 10 is a flowchart for illustrating an example of processing by the command data transmission device 10 in the device registration.
Processing to be carried out by the device registration module 160, the registration result transmission module 170, and the device information acquisition module 140 is carried out by the controller 11 executing programs corresponding to their functions. Processing to be carried out by the registration control module 610, the setting acquisition module 620, the phrase generation module 630, and the phrase output module 640 is carried out by the controller 61 executing programs corresponding to their functions. Processing to be carried out by the device information transmission module 230 and the setting module 260 is carried out by the controller 21 executing programs corresponding to their functions.
In the following, description is made of the registration processing to be carried out by the user terminal 60, the command data transmission device 10, and the device 20 in an order of the sequence diagram of FIG. 7 .
First, when the registration control module 610 of the user terminal 60 acquires an instruction to start the device registration from the user, the registration control module 610 transmits a use registration confirmation request to the command processing system 50 through the communication unit 63 (Step S211). More specifically, the use registration confirmation request is information for inquiring whether one or a plurality of types of phrases are registered for use in the speech recognition device 40. Then, the device registration module 160 of the command data transmission device 10 receives the use registration confirmation request, and transmits a use registration confirmation response indicating the state of the use registration of the speech input (Step S251). More specifically, the device registration module 160 acquires a record stored in the speech input use table for the user who has instructed to start the device registration, and returns information indicating the types of phrases registered for use based on the values of the fields of the registration for the first type and the registration for the second type. The use registration confirmation response may include information indicating the user ID of the user who operates the user terminal 60. The registration control module 610 of the user terminal 60 receives the use registration confirmation response, and stores the information indicating the types of phrases registered for use and the like in the storage 62 (Step S212). When the use registration has not been carried out for the user for any type of phrase for operating the device 20, an error message is transmitted as the use registration confirmation response, and the registration control module 610 outputs a screen for enabling the user to carry out the use registration.
When the setting acquisition module 620 receives the use registration confirmation response, the setting acquisition module 620 detects one or a plurality of devices 20 connected to the local area (LAN 2), and transmits device information requests to the detected one or plurality of devices 20 (Step S213). The device information transmission module 230 of the device 20 receives the device information request, and transmits the device information to the user terminal 60 (Step S221). The device information contains the name of the device set by the user and the device ID. The setting acquisition module 620 of the user terminal 60 receives the transmitted device information (Step S214).
When the device information is received, the registration control module 610 transmits user ID registration instructions to the retrieved devices 20 (Step S215). The setting module 260 of the device 20 receives the user ID registration instruction (Step S222). The user ID registration instruction contains the user ID to be registered in the device 20. Moreover, when the user ID registration instruction is received, the setting module 260 writes the user ID in the nonvolatile memory (Step S223).
When the user ID is written, the registration control module 610 of the user terminal 60 transmits, to the retrieved devices, connection start instructions to connect to the MQTT server (Step S216). The setting module 260 of the device 20 receives the connection start instruction to connect to the MQTT server 53 (Step S224). Then, the setting module 260 connects to the MQTT server 53 (Step S225) so as to bring about a state in which an operation instruction can be received from the MQTT server 53.
When the devices 20 connect to the MQTT server 53, the registration control module 610 of the user terminal 60 transmits a device registration request to the command processing system 50 (command data transmission device 10) (Step S217). The device registration request contains the user ID and the device ID of the device 20 to be registered. The command data transmission device 10 receives the device registration request from the user terminal 60 (Step S252). Access permission information indicating permission for access to the authentication server may be contained in the device registration request in place of the user ID. In this case, the command data transmission device 10 may use the access permission information to acquire the user ID, or information that is a source of the user ID, from the authentication server.
Then, the device registration module 160 transmits a device information request to the device 20 having the device ID contained in the device registration request (Step S253). The device information transmission module 230 receives the device information request from the command processing system 50 (MQTT server 53), and transmits the device information to the command processing system 50 (Step S226). The device registration module 160 receives the device information (Step S254). The device information contains information indicating a system of a command set, information on the input terminals of the device 20, information on other devices 20 connected to the input terminals, information on whether the device can receive messages through the Internet 6, and a list of commands that the device 20 can accept. The device registration module 160 stores at least a part (containing the device ID) of the received device information associated with the user ID in the device table T1 (Step S255). The device registration module 160 may sort out and shape the information contained in the device information, and store the resulting information in the device table T1.
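The device information returned in Step S226 might be sketched as follows; the field names are assumptions summarizing the items listed above.

```python
# Sketch of the device information transmitted in Step S226. Field names and
# placeholder values are assumptions for illustration.
device_information = {
    "device_id": "AABBCCDDEEFF",                    # hypothetical device ID
    "name": "Living Room",                          # name of the device set by the user
    "command_type": "type-A",                       # system of the command set (hypothetical label)
    "terminals": ["HDMI1", "HDMI2"],                # input terminals of the device 20
    "connected_devices": {"HDMI1": 2, "HDMI2": 3},  # other devices 20 connected to the terminals
    "reception_availability": 1,                    # whether messages can be received through the Internet 6
    "acceptable_commands": ["Power", "Volume"],     # commands that the device 20 can accept
}
```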
When the device registration module 160 has finished the processing up to Step S255, the registration result transmission module 170 selects templates of sample phrases based on the use registration state of the speech input, namely, the types of phrases that the speech recognition device 40 can receive for the user (Step S256). Moreover, the registration result transmission module 170 transmits, to the user terminal 60, a device registration result indicating whether each of the devices has been successfully registered and the selected templates of the sample phrases (Step S257).
FIG. 11 is a diagram for illustrating an example of the templates of the sample phrases. A sample phrase is a phrase capable of controlling the device 20 when the phrase is input by the user through the speech. FIG. 11 is an illustration of an example of a case in which phrases of the first type and the second type are registered for use. A first template TP1 contains a template of sample phrases of the first type. A second template TP2 contains a template of sample phrases of the second type. The content of the sample is a character string set to an item of “sentence”. The name of the device can be set by the user, and thus cannot be prepared in advance. Therefore, a temporary character string of “%s” is embedded in the content of the sample. A character string set to an item of “feature” indicates a command that the device 20 subject to the operation is required to have.
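For illustration, the two templates might be sketched as follows; the sample sentences are adapted from the phrase examples given earlier, and the exact wording in FIG. 11 may differ.

```python
# Sketch of the first template TP1 and the second template TP2 of FIG. 11.
# "%s" is the temporary character string to be replaced by the device name.
first_template = [   # first type: contains the identification name "MC"
    {"sentence": "AAA, ask MC to turn on the %s", "feature": "Power"},
    {"sentence": "AAA, ask MC to turn up the volume of the %s", "feature": "Volume"},
]
second_template = [  # second type: inhibited from containing the identification name
    {"sentence": "AAA, turn on the %s", "feature": "Power"},
    {"sentence": "AAA, turn up the volume of the %s", "feature": "Volume"},
]
```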
The phrase generation module 630 receives the result of the processing for the device registration request and the templates of the sample phrases from the command processing system 50 (command data transmission device 10) (Step S218). The phrase generation module 630 generates sample phrases based on the received templates, the use registration state, and the names of the devices, which are set by the user, and the phrase output module 640 outputs the generated sample phrases (Step S219).
More detailed description is now made of the processing by the phrase generation module 630 and the phrase output module 640. FIG. 12 is a flowchart for illustrating an example of the processing by the phrase generation module 630 and the phrase output module 640.
First, the phrase generation module 630 selects a device 20 capable of executing the commands required by the sample phrases, and acquires the name of the selected device 20 (Step S311). For example, the phrase generation module 630 collects the commands set to the item of “feature” of the template, selects, out of the records of the user stored in the device table T1, a record having all the collected commands in the “acceptable commands” field, and acquires the name of the device contained in the record.
Then, the phrase generation module 630 carries out processing in accordance with the user setting (the information indicating the types of phrases registered for use and the names of the devices). More specifically, the following processing is carried out. First, the phrase generation module 630 determines whether the information indicating the types of phrases registered for use indicates a state in which the first type of phrase can be accepted (Step S312). When the information indicates the state in which the first type of phrase can be accepted (Y in Step S312), the phrase generation module 630 replaces the portion of the device name in the wording of the first template TP1 with the name of the selected device, to thereby generate sample phrases (Step S313). Then, the phrase output module 640 outputs data on the generated sample phrases so that the sample phrases are displayed on the screen of the user terminal 60 (Step S314). On the other hand, when the information indicating the types of phrases registered for use does not indicate the state in which the first type of phrase can be accepted (N in Step S312), the processing in Step S313 and Step S314 is skipped.
FIG. 13 is a diagram for illustrating an example of the displayed sample phrases. The example of FIG. 13 is an example of a screen displayed based on the first template TP1 when the first type of phrase is set to be acceptable. A device 20 having the name of “Living Room” can process a command for a volume operation, and corresponding phrases are thus output as the sample phrases.
Then, the phrase generation module 630 determines whether the information indicating the types of phrases registered for use indicates that the second type of phrase can be accepted (Step S315). When the information indicates the state in which the second type of phrase can be accepted (Y in Step S315), the phrase generation module 630 replaces the portion of the device name in the wording of the second template TP2 with the name of the selected device, to thereby generate sample phrases (Step S316). Then, the phrase output module 640 outputs data on the generated sample phrases so that the sample phrases are displayed on the screen of the user terminal 60 (Step S317). On the other hand, when the information indicating the types of phrases registered for use does not indicate the state in which the second type of phrase can be accepted (N in Step S315), the processing in Step S316 and Step S317 is skipped.
FIG. 14 is a diagram for illustrating another example of the displayed sample phrases. The example of FIG. 14 is an example of a screen displayed based on the second template TP2 when the second type of phrase is set to be acceptable.
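The flow of FIG. 12 can be condensed into the following sketch, assuming the template and device-record structures sketched above; this is an illustration, not the actual implementation.

```python
# Condensed sketch of the processing of FIG. 12 for one template set.
def generate_sample_phrases(templates, devices, type_accepted):
    if not type_accepted:  # Step S312 / S315: the phrase type is not registered for use
        return []
    required = {t["feature"] for t in templates}
    for device in devices:  # Step S311: select a device accepting every required command
        if required <= set(device["acceptable_commands"]):
            # Step S313 / S316: embed the device name into the template wording.
            return [t["sentence"].replace("%s", device["name"]) for t in templates]
    return []

devices = [{"name": "Living Room", "acceptable_commands": ["Power", "Volume"]}]
templates = [{"sentence": "AAA, ask MC to turn up the volume of the %s", "feature": "Volume"}]
for phrase in generate_sample_phrases(templates, devices, True):
    print(phrase)  # Step S314 / S317: output the generated sample phrases for display
```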
In such a manner, through dynamic control of the sample phrases displayed as input examples of the command, the user can easily and reliably acquire the sample phrases acceptable in the speech input. Moreover, the sample phrases are generated by also using the user setting, and hence, even when the acceptable phrases change in accordance with the user, the user is not required to replace the content of the sample phrases by himself or herself. The user can thus easily recognize acceptable and valid sample phrases.
[3-3] Description is now made of processing by the command processing system 50 of receiving an operation instruction and controlling the device 20. FIG. 15 is a flowchart for illustrating an example of processing by the operation instruction reception module 110, the message generation module 120, the message transmission module 130, the message reception module 210, and the command execution module 220. The processing described below to be carried out by the operation instruction reception module 110, the message generation module 120, and the message transmission module 130 is carried out by the controller 11 executing programs corresponding to their functions. The processing to be carried out by the message reception module 210 and the command execution module 220 is carried out by the controller 21 executing programs corresponding to their functions.
First, the operation instruction reception module 110 acquires an operation instruction from the speech recognition device 40 (Step S511). Then, the message generation module 120 acquires the user ID of a user subject to the operation instruction based on the acquired operation instruction, and acquires the device ID of a device 20 that is associated with the user ID and is subject to the operation instruction, based on the name of the device contained in the operation instruction and the device table T1 (Step S512). Then, the message generation module 120 generates a message containing a command corresponding to the operation instruction and the acquired user ID (Step S513). The message transmission module 130 transmits the generated message to the device 20 subject to the operation instruction (Step S514).
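Steps S511 to S514 might be sketched as follows; the operation instruction format, the device-table keying by user ID, and the intent-to-command conversion are hypothetical, as above.

```python
# Sketch of Steps S511 to S514: resolve the target device by its name from the
# device table T1 and wrap the corresponding command in a message. The
# instruction format and the "/power/on" path are hypothetical.
def build_message(instruction, device_table):
    user_id = instruction["userId"]                       # Step S512: user subject to the instruction
    device = next(d for d in device_table[user_id]
                  if d["name"] == instruction["target"])  # device looked up by its name
    return {                                              # Step S513: message containing the command
        "uid": user_id,
        "type": "cmd",
        "id": "msg-0001",
        "command": {"ip": device["ip_address"], "path": "/power/on", "method": "GET"},
    }
```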
The message reception module 210 of the device 20 subject to the operation instruction receives the message (Step S521). Then, the message reception module 210 compares the user ID contained in the message and the user ID written in the nonvolatile memory of the device 20 with each other (Step S522). When those user IDs are the same (Y in Step S522), the command execution module 220 executes the command contained in the message (Step S523). On the other hand, when those user IDs are different from each other (N in Step S522), the message is discarded, and the command contained in the message is not executed.
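The guard in Steps S521 to S523 might be sketched as follows, as a minimal illustration of the comparison logic.

```python
# Sketch of Steps S521 to S523: the user ID written in the nonvolatile memory
# at registration time is compared with the "uid" of the received message.
def handle_message(message, stored_user_id, execute_command):
    if message["uid"] != stored_user_id:   # N in Step S522: user IDs differ
        return False                       # the message is discarded; no command is executed
    execute_command(message["command"])    # Y in Step S522: execute the command (Step S523)
    return True
```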
As described above, the message reception module 210 controls whether the command is to be executed in accordance with the comparison result of the user IDs. As a result, an unexpected operation of the device 20 can be prevented. In particular, in a related-art case in which the device 20 is transferred to another user but the device registration is not reset on the command processing system 50 side, when the user who has transferred the device 20 inputs, by mistake, a speech command directed to the device 20, the device 20 may operate in response to the command by mistake. In contrast, in at least one embodiment, when the user to whom the device is transferred has registered the device, a message containing a command input by the user who has transferred the device is discarded on the device 20 side, and the possibility of an unexpected operation can thus be suppressed.
Supplementary Notes
As can be understood from the above description of at least one embodiment, in the present application, a variety of technical ideas including embodiments of the invention described below are disclosed.
A device control system according to at least one embodiment of the present invention includes: setting acquisition means for acquiring user setting relating to a device; phrase generation means for generating a phrase for controlling the device based on the acquired user setting; and display data output means for outputting data for displaying the generated phrase.
A device control method according to at least one embodiment of the present invention includes: acquiring user setting relating to a device; generating a phrase for controlling the device based on the acquired user setting; and outputting data for displaying the generated phrase.
A program according to at least one embodiment of the present invention causes a computer to function as: setting acquisition means for acquiring user setting relating to a device; phrase generation means for generating a phrase for controlling the device based on the acquired user setting; and display data output means for outputting data for displaying the generated phrase.
According to one aspect of the invention described above, the setting acquisition means may be configured to acquire a name for identifying the device in speech input, and the phrase generation means may be configured to generate a phrase for controlling the device based on the acquired name.
According to one aspect of the invention described above, the setting acquisition means may be configured to acquire a command executable by each of a plurality of devices in the speech input, and the phrase generation means may be configured to generate a phrase containing any one of names of the plurality of devices based on the command executable by each of the plurality of devices.
According to one aspect of the invention described above, the setting acquisition means may be configured to acquire information indicating a type of phrase acceptable by a recognition module, which is configured to recognize an instruction speech of a user, as an instruction directed to the device, and the phrase generation means may be configured to generate a phrase for controlling the device based on the acquired information.
According to one aspect of the invention described above, the recognition module is set by the user so that any one of a first type of phrase and a second type of phrase is acceptable, the first type of phrase containing an identification name for identifying a system configured to acquire an operation instruction generated by the recognition module to control the device, the second type of phrase being inhibited from containing the identification name, and the phrase generation means may be configured to generate a phrase for controlling the device based on whether each of the first type of phrase and the second type of phrase is acceptable.
While there have been described what are at present considered to be certain embodiments of the invention, it will be understood that various modifications may be made thereto, and it is intended that the appended claims cover all such modifications as fall within the true spirit and scope of the invention.

Claims (12)

What is claimed is:
1. A device control system comprising:
a plurality of devices to be controlled;
a storage device storing a table that includes setting information on each of the plurality of devices, the setting information including a user setting of each of the plurality of devices in a speech input, the user setting of each device identifying a name thereof;
at least one memory storing instructions; and
at least one processor that implements the instructions to:
acquire a template including a phrase containing a predetermined command term;
obtain a predetermined command from the predetermined command term included in the template;
obtain the setting information from the table;
select, from among the plurality of devices, a first device that is configured to execute the obtained predetermined command based on the obtained setting information;
acquire the identified name of the first device from the user setting thereof;
generate a phrase for controlling the first device from the phrase included in the template by adding the identified name obtained from the user setting of the first device; and
output data for displaying the generated phrase.
2. The device control system according to claim 1, wherein the user setting of each of the plurality of devices further includes a command to be controlled in the speech input and executable by the respective device thereof.
3. The device control system according to claim 1, wherein:
the user setting further includes phrase information indicating a type of phrase acceptable by a recognition device, which is configured to recognize an instruction speech of a user, as an instruction directed to the first device, and
the generated phrase is for controlling the first device based on the acquired user setting including the phrase information.
4. The device control system according to claim 3, wherein:
the recognition device is settable by the user so that any one of a first type of phrase or a second type of phrase is acceptable, the first type of phrase containing an identification name for identifying a system configured to acquire an operation instruction generated by the recognition device to control the first device, and the second type of phrase being inhibited from containing the identification name, and
the generated phrase is for controlling the first device based on whether each of the first type of phrase and the second type of phrase is acceptable.
5. A device control method of controlling a plurality of devices using at least one processor, the device control method comprising:
storing, in a storage device, a table that includes setting information on each of the plurality of devices, the setting information including a user setting of each of the plurality of devices in a speech input, the user setting of each device identifying a name thereof;
acquiring a template including a phrase containing a predetermined command term;
obtaining a predetermined command from the predetermined command term included in the template;
obtaining the setting information from the table;
selecting, from among the plurality of devices, a first device that is configured to execute the obtained predetermined command based on the obtained setting information;
acquiring the identified name of the first device from the user setting thereof;
generating, with the at least one processor, a phrase for controlling the first device from the phrase included in the template by adding the identified name obtained from the user setting of the first device; and
outputting, with the at least one processor, data for displaying the generated phrase.
6. The device control method according to claim 5, wherein the user setting of each of the plurality of devices further includes a command to be controlled in the speech input and executable by the respective device thereof.
7. The device control method according to claim 5, wherein:
the user setting further includes phrase information indicating a type of phrase acceptable by a recognition device, which is configured to recognize an instruction speech of a user, as an instruction directed to the first device, and
the generated phrase is for controlling the first device based on the acquired user setting including the phrase information.
8. The device control method according to claim 7, wherein:
the recognition device is settable by the user so that any one of a first type of phrase or a second type of phrase is acceptable, the first type of phrase containing an identification name for identifying a system configured to acquire an operation instruction generated by the recognition device to control the first device, and the second type of phrase being inhibited from containing the identification name, and
the generated phrase is for controlling the first device based on whether each of the first type of phrase and the second type of phrase is acceptable.
9. A terminal device for controlling a plurality of devices in a device control system including a storage device storing a table that includes setting information on each of the plurality of devices, the setting information including a user setting of each of the plurality of devices in a speech input, the user setting of each device identifying a name thereof, the terminal device comprising:
at least one memory storing instructions;
at least one processor that implements the instructions to:
acquire a template including a phrase containing a predetermined command term;
obtain a predetermined command from the predetermined command term included in the template;
obtain the setting information from the table;
select, from among the plurality of devices, a first device that is configured to execute the obtained predetermined command based on the obtained setting information;
acquire the identified name of the first device from the user setting thereof;
generate a phrase for controlling the first device from the phrase included in the template by adding the identified name obtained from the user setting of the first device; and
output data for displaying the generated phrase.
10. The terminal device according to claim 9, wherein the user setting of each of the plurality of devices further includes a command to be controlled in the speech input and executable by the respective device thereof.
11. The terminal device according to claim 9, wherein:
the user setting further includes phrase information indicating a type of phrase acceptable by a recognition device, which is configured to recognize an instruction speech of a user, as an instruction directed to the first device, and
the generated phrase is for controlling the first device based on the acquired user setting including the phrase information.
12. The terminal device according to claim 11, wherein:
the recognition device is settable by the user so that any one of a first type of phrase or a second type of phrase is acceptable, the first type of phrase containing an identification name for identifying a system configured to acquire an operation instruction generated by the recognition device to control the first device, and the second type of phrase being inhibited from containing the identification name, and
the generated phrase is for controlling the first device based on whether each of the first type of phrase and the second type of phrase is acceptable.
US16/888,279 2017-12-01 2020-05-29 Device control system, device control method, and terminal device Active 2039-02-10 US11574631B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2017-231631 2017-12-01
JP2017231631A JP6962158B2 (en) 2017-12-01 2017-12-01 Equipment control system, equipment control method, and program
PCT/JP2018/042864 WO2019107224A1 (en) 2017-12-01 2018-11-20 Apparatus control system, apparatus control method, and program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/042864 Continuation WO2019107224A1 (en) 2017-12-01 2018-11-20 Apparatus control system, apparatus control method, and program

Publications (2)

Publication Number Publication Date
US20200294494A1 US20200294494A1 (en) 2020-09-17
US11574631B2 true US11574631B2 (en) 2023-02-07

Family

ID=66665578

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/888,279 Active 2039-02-10 US11574631B2 (en) 2017-12-01 2020-05-29 Device control system, device control method, and terminal device

Country Status (5)

Country Link
US (1) US11574631B2 (en)
EP (1) EP3719630A4 (en)
JP (1) JP6962158B2 (en)
CN (1) CN111433736B (en)
WO (1) WO2019107224A1 (en)

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
EP3809407A1 (en) 2013-02-07 2021-04-21 Apple Inc. Voice trigger for a digital assistant
AU2014278592B2 (en) 2013-06-09 2017-09-07 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US12223282B2 (en) 2016-06-09 2025-02-11 Apple Inc. Intelligent automated assistant in a home environment
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US12197817B2 (en) 2016-06-11 2025-01-14 Apple Inc. Intelligent device arbitration and control
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770428A1 (en) 2017-05-12 2019-02-18 Apple Inc. Low-latency intelligent automated assistant
DK201770411A1 (en) 2017-05-15 2018-12-20 Apple Inc. MULTI-MODAL INTERFACES
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11468890B2 (en) 2019-06-01 2022-10-11 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11183193B1 (en) 2020-05-11 2021-11-23 Apple Inc. Digital assistant hardware abstraction
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
JP7596926B2 (en) 2021-05-24 2024-12-10 Ricoh Co., Ltd. Information processing system, voice operation system, program, and voice operation method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1138995A (en) * 1997-07-16 1999-02-12 Denso Corp Speech recognition device and navigation system
US20020193989A1 (en) * 1999-05-21 2002-12-19 Michael Geilhufe Method and apparatus for identifying voice controlled devices
US7254543B2 (en) * 2001-12-18 2007-08-07 Toshio Ibaraki Television apparatus having speech recognition function, and method of controlling the same
JP2010130223A (en) * 2008-11-26 2010-06-10 Fujitsu Ten Ltd Voice activation system and voice activation method
JP6029985B2 (en) * 2013-01-11 2016-11-24 クラリオン株式会社 Information processing apparatus, operation system, and method of operating information processing apparatus
JP6054283B2 (en) * 2013-11-27 2016-12-27 シャープ株式会社 Speech recognition terminal, server, server control method, speech recognition system, speech recognition terminal control program, server control program, and speech recognition terminal control method
JP5871088B1 (en) * 2014-07-29 2016-03-01 ヤマハ株式会社 Terminal device, information providing system, information providing method, and program
US9812126B2 (en) * 2014-11-28 2017-11-07 Microsoft Technology Licensing, Llc Device arbitration for listening devices
CN106218557B (en) * 2016-08-31 2020-01-07 北京兴科迪科技有限公司 Vehicle-mounted microphone with voice recognition control function
JP6522679B2 (en) * 2017-03-13 2019-05-29 シャープ株式会社 Speech control apparatus, method, speech system, and program

Patent Citations (111)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09288500A (en) 1996-04-22 1997-11-04 Olympus Optical Co Ltd Voice recording and reproducing device
US5903871A (en) 1996-04-22 1999-05-11 Olympus Optical Co., Ltd. Voice recording and/or reproducing apparatus
JP2001128262A (en) 1999-10-28 2001-05-11 Yokogawa Electric Corp Remote control system
US20020071577A1 (en) 2000-08-21 2002-06-13 Wim Lemay Voice controlled remote control with downloadable set of voice commands
JP2004507936A (en) 2000-08-21 2004-03-11 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Voice-controlled remote controller with a set of downloadable voice commands
JP2002202826A (en) 2000-12-28 2002-07-19 Canon Inc Information processing system, charging method for the system, network device, information processor, and storage medium
JP2002259114A (en) 2001-03-05 2002-09-13 Nec Corp Voice recognition computer system
JP2004015627A (en) 2002-06-10 2004-01-15 Sharp Corp Remote control system of av equipment
US20040015573A1 (en) 2002-07-16 2004-01-22 Matsushita Electric Industrial Co., Ltd. Network terminal setting information management method and information terminal device
US7234115B1 (en) 2002-09-26 2007-06-19 Home Director, Inc. Home entertainment system and method
US20040088535A1 (en) * 2002-10-31 2004-05-06 International Business Machines Corporation Method, apparatus and computer program product for selecting computer system settings for various operating circumstances
US20050009498A1 (en) 2003-07-07 2005-01-13 Lg Electronics Inc. Control system and method for home network system
US7366498B2 (en) 2003-07-07 2008-04-29 Lg Electronics Inc. Control system and method for home network system
JP2005109763A (en) 2003-09-29 2005-04-21 Nec Corp Network system, wol device, network tv tuner, and information device activation method
US20070256027A1 (en) * 2003-12-23 2007-11-01 Daimlerchrysler Ag Control System for a Motor Vehicle
US20050226595A1 (en) * 2004-03-26 2005-10-13 Kreifeldt Richard A Audio-related system node instantiation
US7869577B2 (en) 2004-05-21 2011-01-11 Voice On The Go Inc. Remote access system and method and intelligent agent therefor
US20110119063A1 (en) 2004-05-21 2011-05-19 Voice On The Go Inc. Remote notification system and method and intelligent agent therefor
US20070130337A1 (en) 2004-05-21 2007-06-07 Cablesedge Software Inc. Remote access system and method and intelligent agent therefor
US7983399B2 (en) 2004-05-21 2011-07-19 Voice On The Go Inc. Remote notification system and method and intelligent agent therefor
US20130311180A1 (en) 2004-05-21 2013-11-21 Voice On The Go Inc. Remote access system and method and intelligent agent therefor
US20100083352A1 (en) 2004-05-21 2010-04-01 Voice On The Go Inc. Remote access system and method and intelligent agent therefor
US20060193292A1 (en) 2005-02-28 2006-08-31 Microsoft Corporation Measurement based mechanism to enable two wireless devices to directly communicate with each other to support traffic prioritization
JP2008533935A (en) 2005-03-18 2008-08-21 クゥアルコム・インコーポレイテッド Method and apparatus for monitoring configurable performance levels in a wireless device
US20070091168A1 (en) 2005-10-25 2007-04-26 Hyun Lee Method to support simultaneous wireless connection of multiple media components
US20080263654A1 (en) 2007-04-17 2008-10-23 Microsoft Corporation Dynamic security shielding through a network resource
US20090089065A1 (en) * 2007-10-01 2009-04-02 Markus Buck Adjusting or setting vehicle elements through speech control
US8438218B2 (en) 2007-10-17 2013-05-07 Samsung Electronics Co., Ltd. Apparatus and method for providing accessible home network information in remote access environment
US20100328133A1 (en) 2008-01-31 2010-12-30 Mitsunori Nojima Electronic device, remote control system, signal processing method, control program and recording medium
US20090204410A1 (en) * 2008-02-13 2009-08-13 Sensory, Incorporated Voice interface and search for electronic devices including bluetooth headsets and remote systems
US20120077545A1 (en) 2010-09-29 2012-03-29 Pantech Co., Ltd. Mobile terminal and control method
US8595319B2 (en) 2010-10-13 2013-11-26 Verizon Patent And Licensing Inc. Home network video peer-to-peer for mobile devices
US20120206233A1 (en) 2011-02-15 2012-08-16 Fujifilm Corporation Radiographic imaging device and communication mode setting device
US20120300018A1 (en) 2011-05-23 2012-11-29 Li Gordon Yong Using distributed local qos optimization to achieve global qos optimization for video conferencing services
US20130115927A1 (en) 2011-06-03 2013-05-09 Apple Inc. Active Transport Based Notifications
US20130158980A1 (en) 2011-12-15 2013-06-20 Microsoft Corporation Suggesting intent frame(s) for user request(s)
US20130322634A1 (en) 2012-06-05 2013-12-05 Apple Inc. Context-aware voice guidance
US8520807B1 (en) 2012-08-10 2013-08-27 Google Inc. Phonetically unique communication identifiers
US20150046580A1 (en) * 2012-09-28 2015-02-12 Panasonic Intellectual Property Corporation Of America Information notification method, information notification system, and server device
US20140149118A1 (en) * 2012-11-28 2014-05-29 Lg Electronics Inc. Apparatus and method for driving electric device using speech recognition
US20150294666A1 (en) * 2012-12-28 2015-10-15 Socionext Inc. Device including speech recognition function and method of recognizing speech
US9220012B1 (en) 2013-01-15 2015-12-22 Marvell International Ltd. Systems and methods for provisioning devices
US20140277644A1 (en) * 2013-03-15 2014-09-18 Bose Corporation Audio Systems and Related Devices and Methods
US9281958B2 (en) 2013-04-05 2016-03-08 Electronics And Telecommunications Research Institute Method for providing interworking service in home network
US20150243287A1 (en) 2013-04-19 2015-08-27 Panasonic Intellectual Property Corporation Of America Control method for household electrical appliance, household electrical appliance control system, and gateway
US20140330560A1 (en) 2013-05-06 2014-11-06 Honeywell International Inc. User authentication of voice controlled devices
US20140330569A1 (en) 2013-05-06 2014-11-06 Honeywell International Inc. Device voice recognition systems and methods
US20140351847A1 (en) 2013-05-27 2014-11-27 Kabushiki Kaisha Toshiba Electronic device, and method and storage medium
US20160164694A1 (en) 2013-07-26 2016-06-09 Kocom Co., Ltd. Smart device-based home network system and control method therefor
US20150053781A1 (en) * 2013-08-21 2015-02-26 Honeywell International Inc. Devices and methods for interacting with an hvac controller
US20160291925A1 (en) 2013-11-20 2016-10-06 Yamaha Corporation Synchronized playback system, synchronized playback apparatus, and control method
US20180196630A1 (en) 2013-11-20 2018-07-12 Yamaha Corporation Synchronized playback system, synchronized playback apparatus, and control method
JP2015100085A (en) 2013-11-20 2015-05-28 ヤマハ株式会社 System and device for synchronous reproduction
JP2015106358A (en) 2013-12-02 2015-06-08 日立アプライアンス株式会社 Remote access system and in-residence equipment control device
US20150178099A1 (en) 2013-12-23 2015-06-25 International Business Machines Corporation Interconnecting portal components with dialog state transitions
US20150206529A1 (en) * 2014-01-21 2015-07-23 Samsung Electronics Co., Ltd. Electronic device and voice recognition method thereof
US20150215315A1 (en) 2014-01-27 2015-07-30 Microsoft Corporation Discovering and disambiguating identity providers
US9544310B2 (en) 2014-01-27 2017-01-10 Microsoft Technology Licensing, Llc Discovering and disambiguating identity providers
US20150254057A1 (en) 2014-03-04 2015-09-10 Microsoft Technology Licensing, Llc Voice-command suggestions
US9479496B2 (en) 2014-03-10 2016-10-25 Fujitsu Limited Communication terminal and secure log-in method acquiring password from server using user ID and sensor data
US20150324706A1 (en) * 2014-05-07 2015-11-12 Vivint, Inc. Home automation via voice control
US20150348551A1 (en) 2014-05-30 2015-12-03 Apple Inc. Multi-command single utterance input method
US20150379993A1 (en) 2014-06-30 2015-12-31 Samsung Electronics Co., Ltd. Method of providing voice command and electronic device supporting the same
US20170264451A1 (en) 2014-09-16 2017-09-14 Zte Corporation Intelligent Home Terminal and Control Method of Intelligent Home Terminal
US9094363B1 (en) 2014-11-17 2015-07-28 Microsoft Technology Licensing, Llc Relevant communication mode selection
EP3232160A1 (en) 2014-12-12 2017-10-18 Clarion Co., Ltd. Voice input assistance device, voice input assistance system, and voice input method
US20160221363A1 (en) 2015-01-30 2016-08-04 Samsung Electronics Co., Ltd. Image forming apparatus, recording medium, terminal, server, note printing method, and storage medium
US20160267913A1 (en) 2015-03-13 2016-09-15 Samsung Electronics Co., Ltd. Speech recognition system and speech recognition method thereof
US20180107445A1 (en) 2015-03-31 2018-04-19 Sony Corporation Information processing device, control method, and program
US20160360347A1 (en) 2015-06-04 2016-12-08 Panasonic Intellectual Property Management Co., Ltd. Method for controlling storage battery pack and storage battery pack
US9900730B2 (en) 2015-06-04 2018-02-20 Panasonic Intellectual Property Management Co., Ltd. Method for controlling storage battery pack and storage battery pack
US20170006026A1 (en) 2015-07-01 2017-01-05 Innoaus Korea Inc. Electronic device and method for generating random and unique code
US10341336B2 (en) 2015-07-01 2019-07-02 Innoaus Korea Inc. Electronic device and method for generating random and unique code
JP2017028586A (en) 2015-07-24 2017-02-02 シャープ株式会社 Cooperation system and device-control server
US20170097618A1 (en) 2015-10-05 2017-04-06 Savant Systems, Llc History-based key phrase suggestions for voice control of a home automation system
US20170111423A1 (en) 2015-10-19 2017-04-20 At&T Mobility Ii Llc Real-Time Video Delivery for Connected Home Applications
US20170125035A1 (en) 2015-10-28 2017-05-04 Xiaomi Inc. Controlling smart device by voice
US20180336359A1 (en) 2015-11-17 2018-11-22 Idee Limited Security systems and methods with identity management for access to restricted access locations
US20180332033A1 (en) 2015-11-17 2018-11-15 Idee Limited Security systems and methods for continuous authorized access to restricted access locations
US20180277119A1 (en) 2015-11-25 2018-09-27 Mitsubishi Electric Corporation Speech dialogue device and speech dialogue method
US20170164065A1 (en) 2015-12-07 2017-06-08 Caavo Inc Network-based control of a media device
US20170230705A1 (en) 2016-02-04 2017-08-10 The Directv Group, Inc. Method and system for controlling a user receiving device using voice commands
JP2017167627A (en) 2016-03-14 2017-09-21 コニカミノルタ株式会社 Job execution system, job execution method, image processing device, and job execution program
US20170279949A1 (en) 2016-03-24 2017-09-28 Panasonic Intellectual Property Management Co., Ltd. Home interior monitoring system and communication control method
US20170331807A1 (en) 2016-05-13 2017-11-16 Soundhound, Inc. Hands-free user authentication
US20170344732A1 (en) 2016-05-24 2017-11-30 Mastercard International Incorporated System and method for processing a transaction with secured authentication
US20190259386A1 (en) 2016-06-10 2019-08-22 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US20180007210A1 (en) 2016-06-29 2018-01-04 Paypal, Inc. Voice-controlled audio communication system
US20180014480A1 (en) 2016-07-15 2018-01-18 Rain Bird Corporation Wireless remote irrigation control
US20180048479A1 (en) 2016-08-11 2018-02-15 Xiamen Eco Lighting Co. Ltd. Smart electronic device
US10623198B2 (en) 2016-08-11 2020-04-14 Xiamen Eco Lighting Co., Ltd. Smart electronic device for multi-user environment
US9892732B1 (en) 2016-08-12 2018-02-13 Paypal, Inc. Location based voice recognition system
US20180047394A1 (en) 2016-08-12 2018-02-15 Paypal, Inc. Location based voice association system
US20180068663A1 (en) 2016-09-07 2018-03-08 Samsung Electronics Co., Ltd. Server and method for controlling external device
US20180137858A1 (en) 2016-11-17 2018-05-17 BrainofT Inc. Controlling connected devices using a relationship graph
US20180182399A1 (en) 2016-12-02 2018-06-28 Yamaha Corporation Control method for control device, control method for apparatus control system, and control device
US20180170242A1 (en) * 2016-12-19 2018-06-21 Pilot, Inc. Bluetooth-enabled vehicle lighting control hub
US20180174581A1 (en) * 2016-12-19 2018-06-21 Pilot, Inc. Voice-activated vehicle lighting control hub
US20180191670A1 (en) 2016-12-29 2018-07-05 Yamaha Corporation Command Data Transmission Device, Local Area Device, Apparatus Control System, Method for Controlling Command Data Transmission Device, Method for Controlling Local Area Device, Apparatus Control Method, and Program
US20180190264A1 (en) 2016-12-30 2018-07-05 Google Llc Conversation-Aware Proactive Notifications for a Voice Interface Device
US10679608B2 (en) 2016-12-30 2020-06-09 Google Llc Conversation-aware proactive notifications for a voice interface device
US20180204575A1 (en) 2017-01-14 2018-07-19 Foxconn Interconnect Technology Limited Control system utilizing verbal command
US20190089550A1 (en) 2017-09-15 2019-03-21 Kohler Co. Bathroom speaker
US20190115017A1 (en) 2017-10-13 2019-04-18 Hyundai Motor Company Speech recognition-based vehicle control method
US20190123897A1 (en) 2017-10-19 2019-04-25 Bank Of America Corporation Method and apparatus for perfect forward secrecy using deterministic hierarchy
US20190173834A1 (en) 2017-12-01 2019-06-06 Yamaha Corporation Device Control System, Device, and Computer-Readable Non-Transitory Storage Medium
US20190229945A1 (en) 2018-01-24 2019-07-25 Yamaha Corporation Device control system, device control method, and non-transitory computer readable storage medium
US20190341033A1 (en) 2018-05-01 2019-11-07 Dell Products, L.P. Handling responses from voice services
US20190341037A1 (en) 2018-05-07 2019-11-07 Spotify Ab Voice recognition system for use with a personal media streaming appliance
US20190341038A1 (en) 2018-05-07 2019-11-07 Spotify Ab Voice recognition system for use with a personal media streaming appliance
US20200104094A1 (en) 2018-09-27 2020-04-02 Abl Ip Holding Llc Customizable embedded vocal command sets for a lighting and/or other environmental controller

Non-Patent Citations (19)

* Cited by examiner, † Cited by third party
Title
English translation of Written Opinion issued in Intl. Appln. No. PCT/JP2018/042864 dated Jan. 8, 2019, previously cited in IDS filed May 29, 2020.
Extended European Search Report issued in European Appln. No. 18884611.7 dated Jul. 26, 2021.
International Preliminary Report on Patentability issued in Intl. Appln. No. PCT/JP2016/089215 dated Jul. 11, 2019. English translation provided.
International Preliminary Report on Patentability issued in Intl. Appln. No. PCT/JP2018/042864 dated Jun. 11, 2020. English translation provided.
International Search Report issued in Intl. Appln. No. PCT/JP2018/042864 dated Jan. 8, 2019. English translation provided.
International Search Report issued in Intl. Appln. No. PCT/JP2016/089215 dated Mar. 28, 2017. English translation provided.
Notice of Allowance issued in U.S. Appl. No. 15/908,379 dated Aug. 5, 2019.
Notice of Allowance issued in U.S. Appl. No. 16/255,259 dated Sep. 23, 2020.
Office Action issued in Japanese Appln. No. 2017-231630 dated Oct. 5, 2021. Computer-generated English translation provided.
Office Action issued in Japanese Appln. No. 2018-009918 dated Nov. 9, 2021. Computer-generated English translation provided.
Office Action issued in Japanese Appln. No. 2018-558642 dated Jan. 7, 2020. English translation provided.
Office Action issued in Japanese Appln. No. 2017-231630 dated May 10, 2022. English translation provided.
Office Action issued in U.S. Appl. No. 15/908,379 dated Apr. 8, 2019.
Office Action issued in U.S. Appl. No. 16/205,783 dated Aug. 19, 2020.
Office Action issued in U.S. Appl. No. 16/205,783 dated Jan. 27, 2020.
Office Action issued in U.S. Appl. No. 16/205,783 dated May 14, 2020.
Office Action issued in U.S. Appl. No. 16/255,259 dated May 20, 2020.
Written Opinion issued in Intl. Appln. No. PCT/JP2016/089215 dated Mar. 28, 2017. English translation provided.
Written Opinion issued in Intl. Appln. No. PCT/JP2018/042864 dated Jan. 8, 2019.

Also Published As

Publication number Publication date
WO2019107224A1 (en) 2019-06-06
CN111433736A (en) 2020-07-17
US20200294494A1 (en) 2020-09-17
CN111433736B (en) 2024-05-07
JP2019101730A (en) 2019-06-24
EP3719630A1 (en) 2020-10-07
EP3719630A4 (en) 2021-08-25
JP6962158B2 (en) 2021-11-05

Similar Documents

Publication Title
US11574631B2 (en) Device control system, device control method, and terminal device
US9230559B2 (en) Server and method of controlling the same
CN107153499A (en) The Voice command of interactive whiteboard equipment
US10917381B2 (en) Device control system, device, and computer-readable non-transitory storage medium
JP2019086535A (en) Transmission control device and program
CN103974109A (en) Voice recognition apparatus and method for providing response information
US11574632B2 (en) In-cloud wake-up method and system, terminal and computer-readable storage medium
TW201405546A (en) A voice activation request system and operating process
KR102797533B1 (en) Providing Method of Autofill function and electric device including the same
US20190102530A1 (en) Authentication system and server device
CN111539217B (en) Method, equipment and system for disambiguation of natural language content titles
US20190103117A1 (en) Server device and server client system
CN111404788A (en) Device sharing method and server
US11875786B2 (en) Natural language recognition assistant which handles information in data sessions
JP6715307B2 (en) Equipment discovery method, device, equipment and program
JP2011159189A (en) Communication system, portal server, authentication server, service server, communication method, and program
KR101835091B1 (en) Chatting method and chatting system for learning language
WO2022019145A1 (en) Information processing device, information processing method, and information processing program
JP3867058B2 (en) Authentication system and authentication program
WO2024004010A1 (en) Connection control device, connection control method, and non-transitory computer-readable medium
CN113539251A (en) Control method, device, equipment and storage medium for household electrical appliance

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

AS Assignment

Owner name: YAMAHA CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SUYAMA, AKIHIKO;MUSHIKABE, KAZUYA;TSUKADA, KEISUKE;REEL/FRAME:053721/0446

Effective date: 20200901

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE
