US9749731B2 - Sidetone generation using multiple microphones - Google Patents
- Publication number
- US9749731B2 (Application No. US15/003,339)
- Authority
- US
- United States
- Prior art keywords
- digitized samples
- sidetone
- microphones
- processing
- samples
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor; Earphones; Monophonic headphones
- H04R1/1041—Mechanical or electronic switches, or control elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor; Earphones; Monophonic headphones
- H04R1/1016—Earpieces of the intra-aural type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/02—Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
- H04R2430/23—Direction finding using a sum-delay beam-former
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/05—Electronic compensation of the occlusion effect
Definitions
- This disclosure generally relates to headsets used for communications over a telecommunication system.
- Headsets used for communicating over telecommunication systems include one or more microphones and speakers.
- The speaker portion of such a headset can be enclosed in a housing that may cover a portion of one or both ears of the user, thereby interfering with the user's ability to hear his/her own voice during a conversation. This in turn can cause the conversation to sound unnatural to the user and degrade the user experience of the headset.
- This document features an apparatus that includes an input device, a sidetone generator, and an acoustic transducer.
- The input device includes a set of two or more microphones, and is configured to produce digitized samples of sound captured by the set of two or more microphones.
- The sidetone generator includes one or more processing devices, and is configured to receive digitized samples that include at least one digitized sample for each of two or more microphones of the set.
- The sidetone generator is also configured to process the received digitized samples to generate a sidetone signal.
- The acoustic transducer is configured to generate an audio feedback based on the sidetone signal.
- This document also features a method that includes generating digitized samples of sound captured by a set of two or more microphones, and receiving, at one or more processing devices, digitized samples that include at least one digitized sample for each of two or more microphones of the set. The method also includes processing the digitized samples to generate a sidetone signal, and generating audio feedback based on the sidetone signal.
- This document further features one or more non-transitory machine-readable storage devices that store instructions executable by one or more processing devices to perform various operations.
- The operations include receiving digitized samples that include at least one digitized sample from each of two or more microphones of a set of microphones generating digitized samples of captured sound.
- The operations also include processing the digitized samples to generate a sidetone signal, and causing generation of audio feedback based on the sidetone signal.
- Implementations of the above aspects can include one or more of the following features.
- One or more frames of the digitized samples of the sound captured by the set of two or more microphones can be buffered in a memory.
- The one or more frames of the digitized samples can be processed by circuitry for subsequent transmission.
- The sidetone generator can be configured to generate the sidetone signal in parallel with the buffering of the one or more frames of the digitized samples.
- The sidetone generator can be configured to process the received digitized samples based on one or more parameters provided by the circuitry for processing the one or more frames of the digitized samples.
- The one or more processing devices can be configured to receive a set of multiple digitized samples for each of the two or more microphones of the set to generate the sidetone signal.
- A number of digitized samples in each set of multiple digitized samples can be based on a target latency associated with generating the sidetone signal.
- Processing the received digitized samples can include executing a beamforming operation using samples from the set of two or more microphones.
- Processing the received digitized samples can include executing a microphone mixing operation using samples from the set of two or more microphones.
- Processing the received digitized samples can include executing an equalization operation.
- The sidetone generator can be configured to generate the sidetone signal within 5 ms of receiving the at least one digitized sample for each of two or more microphones of the set.
- FIG. 1 is an example of a headset.
- FIG. 2 is a schematic diagram illustrating signal paths in one example implementation of the technology described herein.
- FIG. 3 is a flow chart of an example process for generating a sidetone signal.
- Sidetone generation is used to provide audible feedback to the user of a communication headset that would otherwise interfere with the user's ability to hear ambient sounds naturally. Naturalness of a conversation can be improved, for example, by detecting the user's own voice using a microphone, and playing it back as an audible feedback via a speaker of the communication headset. Such audible feedback is referred to as a sidetone.
- Such acoustic devices can include, for example, wired or wireless-enabled headsets, headphones, earphones, earbuds, hearing aids, or other in-ear, on-ear, or around-ear acoustic devices.
- Without a sidetone generator in a headset, a user may not be able to hear ambient sounds, including his/her own voice while speaking, and therefore may find the experience to be unnatural or uncomfortable. This in turn can degrade the user experience associated with using headsets for conversations or announcements.
- A sidetone generator may be used in a communication headset to restore, at least partially, the natural acoustic feeling of a conversation.
- A sidetone generator can be used, for example, to provide to the user, through a speaker, acoustic feedback based on the user's own voice captured by a microphone. This may allow the user to hear his/her own voice even when the user's ear is at least partially covered by the headset, thereby making the conversation sound more natural to the user.
- The naturalness of the conversation may depend on the quality of the sidetone signal used for generating the acoustic feedback provided to the user.
- In some cases, the sidetone signal can be based on samples from a single microphone of the headset.
- In such cases, the resulting acoustic feedback may contain a high amount of noise. This may result in an undesirable user experience, for example, when the headset is used in a noisy environment.
- While headsets with multiple microphones may use noise reduction and/or signal enhancing processes such as directive beamforming and microphone mixing (e.g., normalized least mean squares (NLMS) mic mixing), such processes typically require buffering of one or more frames of signal samples, which in turn can make the associated latencies unacceptable for sidetone generation.
- Buffering used in a frame-based architecture or circuit of a headset may result in a latency of 7.5 ms or more, which is greater than the 5 ms standard prescribed by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T).
- Any sidetone generated using such a frame-based circuit may produce undesired acoustic effects such as echoes and reverberations, making the sidetone subjectively unacceptable to the user.
- Frame-based processes are therefore usually used for processing outgoing signals sent out from the headset, and not for sidetone generation.
- The technology described herein facilitates implementing noise reduction and/or signal enhancing processes, such as directive beamforming and microphone mixing, using a sidetone generator that employs a low-latency stream-based architecture.
- The sidetone generator can be configured to process input data provided by multiple microphones, using a small number of samples from each microphone to enable low-latency (e.g., 3-4 ms) processing.
- The number of samples per microphone can be one, two, three, or another suitable number selected based on a target latency. For example, a higher number of samples may provide better frequency resolution at the cost of increased latency, and a lower number of samples may reduce latency at the cost of lower frequency resolution.
- The number of samples per microphone can be selected to be lower than the number of samples buffered for the frame-based processing by the outgoing signal processor.
- The target latency can be based on, for example, a standard (e.g., the 5 ms standard prescribed by ITU-T) or a limit above which undesirable acoustic effects such as echoes or reverberation may be perceived by a human user, as illustrated in the sketch below.
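As a rough illustration of this block-size/latency tradeoff (assuming a 48 kHz sample rate, which the patent does not specify), the arithmetic can be sketched as:

```python
# Rough latency arithmetic for choosing a per-microphone block size.
# The 48 kHz sample rate is an assumption for illustration only.
SAMPLE_RATE_HZ = 48_000

def block_latency_ms(samples_per_block: int, sample_rate_hz: int = SAMPLE_RATE_HZ) -> float:
    """Delay contributed by waiting for one block of input samples."""
    return 1000.0 * samples_per_block / sample_rate_hz

def max_block_for_target(target_ms: float, sample_rate_hz: int = SAMPLE_RATE_HZ) -> int:
    """Largest block size whose buffering delay stays within the target latency."""
    return int(target_ms * sample_rate_hz / 1000.0)

print(block_latency_ms(1))      # ~0.02 ms: single-sample (stream-based) processing
print(block_latency_ms(360))    # 7.5 ms: a frame length that already exceeds the ITU-T limit
print(max_block_for_target(5))  # 240 samples stay within the 5 ms ITU-T guideline
```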
- The low-latency processing may result in a noise-reduced sidetone that reduces undesirable acoustic effects such as reverberation or echoes.
- This in turn can enable the sidetone generator to produce high quality sidetones, possibly in real time or near real time, even in noisy environments.
- The sidetone generator can be configured to process samples from the multiple microphones in parallel with the operations of the frame-based circuit or architecture that processes the data sent out from the headset.
- The sidetone generator may function in conjunction with the frame-based circuit, for example, to obtain one or more parameter values that are calculated by the frame-based circuit but are also usable by the sidetone generator. In some cases, this may reduce the processing load on the sidetone generator.
- FIG. 1 shows an example of a headset 100 . While an in-ear headset is shown in the example, other acoustic devices such as wired or wireless-enabled headsets, headphones, earphones, earbuds, hearing aids, or other in-ear, on-ear, or around-ear acoustic devices are also within the scope of the technology described herein.
- The example headset 100 includes an electronics module 105, an acoustic driver module 110, and an ear interface 115 that fits into the wearer's ear to retain the headset and couple the acoustic output of the driver module 110 to the user's ear canal.
- The ear interface 115 includes an extension 120 that fits into the upper part of the wearer's concha to help retain the headset.
- The extension 120 can include an outer arm or loop 125 and an inner arm or loop 130 configured to allow the extension 120 to engage with the concha.
- The ear interface 115 may also include an ear-tip 135 for forming a sealing configuration between the ear interface and the opening of the ear canal of the user.
- The headset 100 can be configured to connect to another device such as a phone, media player, or transceiver device via one or more connecting wires or cables (e.g., the cable 140 shown in FIG. 1).
- The headset may be wireless, e.g., there may be no wire or cable that mechanically or electronically couples the earpiece to any other device.
- The headset can include a wireless transceiver module capable of communicating with another device such as a mobile phone or transceiver device using, for example, a media access control (MAC) protocol such as Bluetooth®, IEEE 802.11, or another local area network (LAN) or personal area network (PAN) protocol.
- The headset 100 includes multiple microphones that capture the voice of a user and/or other ambient acoustic components such as noise, and produce corresponding electronic input signals.
- The headset 100 can also include circuitry for processing the input signals for subsequent transmission out of the headset, and for generating sidetone signals based on the input signals.
- FIG. 2 is a schematic diagram illustrating signal paths within such circuitry 200 in one example implementation of the technology described herein.
- The circuitry 200 includes a sidetone generator 205 that generates a sidetone based on input signals provided by multiple microphones 210a, 210b (210, in general). Even though the example of FIG. 2 shows two microphones 210a and 210b, fewer or more microphones may also be used.
- The sidetone signals generated by the sidetone generator 205 may be used to produce acoustic feedback via one or more acoustic transducers or speakers 215a, 215b (215, in general). Even though the example of FIG. 2 shows two speakers 215a and 215b, fewer or more speakers may also be used.
- The circuitry 200 can also include an outgoing signal processor 220 that processes the input signals provided by the multiple microphones 210 to generate outgoing signals 222 that are transmitted out of the headset.
- The outgoing signal processor 220 may include a frame-based architecture that processes frames of input samples buffered in a memory device (e.g., one or more registers). Such frame-based processing may allow for implementation of advanced signal conditioning processes (e.g., beamforming and microphone mixing) that improve the outgoing signal 222 and/or reduce noise in the outgoing signal 222. However, the buffering associated with such frame-based processing introduces latency that may be unacceptable for generating sidetones.
- The sidetone generator 205 can be configured to process samples of the input signals provided by the microphones 210 in parallel with the operations of the outgoing signal processor 220, to generate sidetone signals at a lower latency than that associated with the outgoing signal processor 220.
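A minimal sketch of how the two paths might share the same input stream is shown below. The helper functions sidetone_from_samples and process_frame, and the frame size, are hypothetical stand-ins for the sidetone generator 205 and the outgoing signal processor 220; they are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from collections import deque

FRAME_SIZE = 256    # assumed frame length for the frame-based outgoing path

def sidetone_from_samples(mic_samples: np.ndarray) -> float:
    """Hypothetical stand-in for the sidetone generator 205 (beamform/mix/EQ)."""
    return float(np.mean(mic_samples))

def process_frame(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the frame-based outgoing signal processor 220."""
    return frame  # e.g., frame-wise beamforming / noise suppression

def run(mic_stream, num_mics: int = 2):
    """mic_stream yields one array of shape (num_mics,) per sampling instant."""
    frame_buffer = deque()
    for mic_samples in mic_stream:
        # Low-latency path: a sidetone value is produced as soon as one sample
        # per microphone is available, without waiting for a full frame.
        yield sidetone_from_samples(mic_samples)
        # Frame-based path: the same samples are buffered and processed only
        # once a complete frame has accumulated.
        frame_buffer.append(mic_samples)
        if len(frame_buffer) == FRAME_SIZE:
            process_frame(np.stack(frame_buffer))
            frame_buffer.clear()
```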
- The circuitry 200 may include one or more analog-to-digital converters (ADCs) that digitize the analog signals captured by the microphones 210.
- The circuitry 200 includes a sample rate converter 225 that converts the sample rate of the digitized signals to an appropriate rate as required for the corresponding application (e.g., telephony).
- The output of the sample rate converter 225 can be provided to the outgoing signal processor 220, where the samples are buffered in preparation for being processed by the frame-based architecture of the outgoing signal processor 220.
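For instance, such a conversion could be a polyphase resampling from a 48 kHz capture rate to a 16 kHz wideband-telephony rate; both rates here are assumptions chosen only to make the example concrete.

```python
import numpy as np
from scipy.signal import resample_poly

CAPTURE_RATE_HZ = 48_000    # assumed ADC rate
TELEPHONY_RATE_HZ = 16_000  # assumed wideband-telephony rate

mic_signal = np.random.randn(CAPTURE_RATE_HZ)  # one second of dummy microphone input
# 48 kHz -> 16 kHz: decimate by 3 with the built-in anti-aliasing filter.
telephony_signal = resample_poly(mic_signal, up=1, down=3)
```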
- Outputs of the sample rate converter 225 are also provided to circuitry within the sidetone generator 205, where a small number of samples from each microphone are processed to generate the sidetone signals.
- The sidetone generator 205 can be configured to generate a sidetone signal based on a subset of the samples that are buffered for subsequent processing by the outgoing signal processor 220.
- The sidetone generator 205 can be configured to generate a sidetone signal based on one sample each from a set of microphones 210. Therefore, the sidetone signal can be generated multiple times as the samples from the microphones are buffered in the outgoing signal processor 220. For example, a sidetone signal can be produced every 3 milliseconds or less.
- Such fast processing allows the sidetones to be generated in real time or near real time, e.g., with latency that is not high enough for a human ear to perceive any noticeable undesirable acoustic effects such as echoes or reverberations.
- In some implementations, more than one sample from each microphone 210 may be processed to improve the quality of processing by the sidetone generator.
- Processing multiple samples may entail a higher latency, as well as greater complexity of the associated processing circuitry. Therefore, the number of input samples that are processed to generate the sidetone signal can be selected based on various design constraints such as latency, processing goal, available processing power, complexity of associated circuitry, and/or cost.
- In some cases, samples from only a subset of the microphones may be used in generating the sidetone.
- For example, the sidetone generator 205 may use samples from only two microphones to generate the sidetones.
- The sidetone generator 205 can be configured to use various types of processing in generating the sidetone signal.
- In this example, the sidetone generator includes a beamformer 230, a microphone mixer 235, and an equalizer 240.
- Fewer or more processing modules may also be used.
- Although FIG. 2 shows the beamformer 230, mixer 235, and equalizer 240 connected in series, portions of the associated processing may be done in parallel to one another, or in a different order.
- The beamformer 230 can be configured to combine signals from two or more of the microphones to facilitate directional reception. This can be done using a spatial filtering process that processes the signals from the microphones, which are arranged as a set of phased sensor arrays. The signals from the various microphones are combined in such a way that signals arriving at particular angles experience constructive interference while signals at other angles experience destructive interference. This allows for spatial selectivity to reduce the effect of any undesired signal (e.g., noise) coming from a particular direction.
- The beamforming can be implemented as an adaptive process that detects and estimates the signal of interest at the output of a sensor array, for example, using spatial filtering and interference rejection.
- Various types of beamforming techniques can be used by the beamformer 230.
- For example, the beamformer 230 may use a time-domain beamforming technique such as delay-and-sum beamforming, as sketched below.
- Frequency-domain techniques, such as a minimum variance distortionless response (MVDR) beamformer, may also be used, for example for estimating the direction of arrival (DOA) of signals of interest.
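A minimal delay-and-sum sketch for two microphones follows. It assumes a known time difference of arrival (in fractional samples) toward the talker's mouth and a 48 kHz sample rate; both are illustrative assumptions, and the patent does not prescribe a particular beamformer implementation.

```python
import numpy as np

def delay_and_sum(mic1: np.ndarray, mic2: np.ndarray, delay_samples: float,
                  sample_rate_hz: int = 48_000) -> np.ndarray:
    """Steer a two-microphone array by delaying mic2 and averaging it with mic1.

    delay_samples is the (possibly fractional) number of samples by which the
    desired sound reaches mic2 earlier than mic1; delaying mic2 time-aligns the
    two channels so the desired signal adds constructively.
    """
    n = len(mic1)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)   # bin frequencies in Hz
    delay_s = delay_samples / sample_rate_hz             # delay in seconds
    mic2_delayed = np.fft.irfft(
        np.fft.rfft(mic2) * np.exp(-2j * np.pi * freqs * delay_s), n=n)
    # Signals from the steered direction add constructively; off-axis signals do not.
    return 0.5 * (mic1 + mic2_delayed)
```

In a streaming implementation operating on one or a few samples at a time, the same alignment could instead be applied with a short fixed delay line rather than a block FFT.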
- The directional signal generated by the beamformer 230 is passed to the mixer 235 together with an omni-directional signal (e.g., the sum of the signals received by the microphones, without any directional processing).
- The mixer 235 can be configured to combine the signals, for example, to increase (e.g., to maximize) the signal-to-noise ratio in the output signal.
- Various types of mixing processes can be used for combining the signals.
- For example, the mixer 235 can be configured to use a least mean square (LMS) filter such as a normalized LMS (NLMS) filter to combine the directional and omni-directional signals.
- The mixing ratio α can be dynamically calculated by the sidetone generator 205 via an NLMS process. In this formulation the mixer output takes the form y(n) = α·p(n) + (1 − α)·v(n) (equation (1)), where p(n) and v(n) denote the two inputs to the mixer.
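A minimal NLMS-style sketch of such a mixer is shown below. It adapts α per sample so that the mix y(n) = α·p(n) + (1 − α)·v(n) tracks a reference d(n); the choice of reference and the specific update rule are assumptions made for illustration, since the patent defers the details of the mixing process to U.S. Pat. No. 8,620,650.

```python
import numpy as np

class NlmsMixer:
    """Per-sample mixer implementing y(n) = alpha * p(n) + (1 - alpha) * v(n).

    alpha is adapted with a normalized-LMS update so that y(n) tracks a
    reference d(n). Reference choice and update rule are illustrative
    assumptions, not the patent's specific formulation.
    """

    def __init__(self, step_size: float = 0.05, eps: float = 1e-8):
        self.alpha = 0.5      # initial mixing ratio
        self.mu = step_size   # NLMS step size
        self.eps = eps        # regularization to avoid division by zero

    def mix(self, p: float, v: float, d: float) -> float:
        y = self.alpha * p + (1.0 - self.alpha) * v          # equation (1)
        e = d - y                                            # error against the reference
        # NLMS update: the gradient of y with respect to alpha is (p - v),
        # and the step is normalized by the regressor power.
        self.alpha += self.mu * e * (p - v) / (self.eps + (p - v) ** 2)
        self.alpha = float(np.clip(self.alpha, 0.0, 1.0))    # keep the ratio in [0, 1]
        return y
```

When the ratio 250 is instead supplied by the outgoing signal processor 220, as described below, the mix step reduces to the single multiply-add in the first line of mix().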
- One or more parameters used by the sidetone generator 205 can be obtained from the outgoing signal processor 220, for example, to reduce the computational burden on the sidetone generator 205. This may increase the speed of processing of the sidetone generator 205, thereby allowing faster generation of the sidetones.
- For example, the beamforming coefficients 245 used by the beamformer 230 may be obtained from the outgoing signal processor 220.
- The ratio (α) 250 may also be obtained from the outgoing signal processor 220.
- Such cooperation between the sidetone generator 205 and the outgoing signal processor 220 may allow the sidetone generator 205 to generate the sidetones quickly and efficiently, without compromising the accuracy of the parameters, which are generated using the higher computational power afforded by the frame-based processing in the outgoing signal processor 220.
- The cooperative use of the sidetone generator 205 and the outgoing signal processor 220 may reduce the computational burden on the sidetone generator. For example, in implementations where the NLMS ratio 250 is obtained from the outgoing signal processor, the mixer 235 generates an output based on multiplication and addition operations only, whereas the relatively complex operation of generating the NLMS ratio 250 is performed by the outgoing signal processor 220.
- In such cases, the value of the ratio 250 may be one that was calculated based on older samples. However, because the ratio 250 is often not fast-changing, the effect of using a ratio value based on older samples may not be significant.
- The output of the mixer 235 is provided to an equalizer 240, which applies an equalization process to the mixer output to generate the sidetone signal.
- The equalization process can be configured to shape the sidetone signal such that any acoustic feedback generated based on the sidetone signal sounds natural to the user of the headset.
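As one illustration, the shaping could be a fixed low-order filter applied to the mixer output; the band-pass response used below (roughly the voice band) is an assumed target curve, since the patent does not specify one.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Illustrative equalization: a gentle band-pass that emphasizes the voice band.
# The 300 Hz - 3.4 kHz band and the 48 kHz rate are assumptions for illustration.
SOS = butter(2, [300, 3400], btype="bandpass", fs=48_000, output="sos")

def equalize(mixer_output: np.ndarray) -> np.ndarray:
    """Shape the mixed signal before it is used as the sidetone."""
    return sosfilt(SOS, mixer_output)
```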
- The sidetone signal is mixed in with the incoming signal 255, and played back through the acoustic transducers or speakers 215 of the headset.
- The mixing can include a rate conversion (performed by the sample rate converter 225) to adjust the sample rate to a value appropriate for playback through the speakers 215.
- FIG. 3 is a flow chart of an example process 300 for generating a sidetone signal.
- the process 300 can be executed on a headset, for example, by the sidetone generator 205 described above with reference to FIG. 2 .
- Operations of the process 300 can include generating digitized samples of sound captured by a set of two or more microphones ( 310 ).
- The set of microphones can be disposed on a headset such as the headset depicted in FIG. 1.
- In some implementations, the set of microphones can include three or more microphones.
- The microphones may be disposed on the headset in the configuration of a phased sensor array.
- The operations of the process 300 also include receiving, at one or more processing devices, at least one digitized sample for each of two or more microphones of the set (320).
- The digitized samples may also be buffered, in parallel, in a memory device as one or more frames. Such frames may then be processed for subsequent transmission from the headset.
- In some implementations, the one or more processing devices are configured to receive a set of multiple digitized samples for each of the two or more microphones of the set.
- A number of digitized samples in each set of multiple digitized samples can be based on, for example, a target latency associated with generating a sidetone signal based on the samples.
- Processing the digitized samples includes executing a beamforming operation using samples from the set of two or more microphones.
- The beamforming operation can be substantially similar to that described with reference to the beamformer 230 of FIG. 2.
- Processing the digitized samples can also include executing a microphone mixing operation using samples from the set of two or more microphones.
- The microphone mixing operation may be performed, for example, on the beamformed signal, as described above with reference to FIG. 2.
- The microphone mixing operation can be substantially similar to that described in U.S. Pat. No. 8,620,650, the entire content of which is incorporated herein by reference.
- Processing the digitized samples can also include executing an equalization operation.
- The operations of the process 300 can also include generating audio feedback based on the sidetone signal (340).
- The sidetone signal and/or the audio feedback may be generated in parallel with the buffering of the one or more frames of the digitized samples.
- The sidetone signal and/or the acoustic feedback may be generated within 5 ms (e.g., in 3 ms or 4 ms) of receiving the first of the at least one digitized sample for each of two or more microphones of the set.
- Such fast sidetone and/or acoustic feedback generation, based on stream-based processing of a small number of input samples, may reduce undesirable acoustic effects typically associated with increased latency, and contribute towards increasing the naturalness of a conversation or speech to a user of the headset. A compact sketch combining the steps above is shown below.
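Putting the illustrative pieces together, one pass of these steps over a small two-microphone block might look like the sketch below. It reuses the delay_and_sum and equalize sketches above, and assumes the mixing ratio alpha and the steering delay are supplied by the frame-based outgoing signal processor, as the description suggests; all of this is illustrative rather than the patent's implementation.

```python
import numpy as np

def sidetone_block(mic_block: np.ndarray, alpha: float, delay_samples: float = 0.0) -> np.ndarray:
    """One pass of the sidetone-processing step over a small block of samples.

    mic_block has shape (num_samples, 2): a few digitized samples from each of
    two microphones. alpha and delay_samples are assumed to be supplied by the
    frame-based outgoing signal processor; delay_and_sum and equalize are the
    illustrative sketches defined earlier.
    """
    p = delay_and_sum(mic_block[:, 0], mic_block[:, 1], delay_samples)  # beamforming
    v = 0.5 * (mic_block[:, 0] + mic_block[:, 1])                       # omni-directional signal
    mixed = alpha * p + (1.0 - alpha) * v                               # equation (1)
    return equalize(mixed)                                              # equalization
```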
- The functionality described herein, or portions thereof, and its various modifications can be implemented, at least in part, via a computer program product, e.g., a computer program tangibly embodied in an information carrier, such as one or more non-transitory machine-readable media or storage devices, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a DSP, a microcontroller, a computer, multiple computers, and/or programmable logic components.
- A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- A computer program can be deployed to be executed on one or more processing devices at one site, or distributed across multiple sites and interconnected by a network.
- Actions associated with implementing all or part of the functions can be performed by one or more programmable processors or processing devices executing one or more computer programs to perform the functions of the processes described herein. All or part of the functions can be implemented as special purpose logic circuitry, e.g., an FPGA and/or an ASIC (application-specific integrated circuit).
- Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- A processor will receive instructions and data from a read-only memory or a random access memory or both.
- Components of a computer include a processor for executing instructions and one or more memory devices for storing instructions and data.
Abstract
Description
y(n)=α*p(n)+(1−α)*v(n) (1)
In some implementations, the mixing ratio α can be dynamically calculated by the sidetone generator 205 via an NLMS process.
Claims (19)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/003,339 US9749731B2 (en) | 2016-01-21 | 2016-01-21 | Sidetone generation using multiple microphones |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/003,339 US9749731B2 (en) | 2016-01-21 | 2016-01-21 | Sidetone generation using multiple microphones |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170214996A1 US20170214996A1 (en) | 2017-07-27 |
US9749731B2 true US9749731B2 (en) | 2017-08-29 |
Family
ID=59360816
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/003,339 Active US9749731B2 (en) | 2016-01-21 | 2016-01-21 | Sidetone generation using multiple microphones |
Country Status (1)
Country | Link |
---|---|
US (1) | US9749731B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10553195B2 (en) | 2017-03-30 | 2020-02-04 | Bose Corporation | Dynamic compensation in active noise reduction devices |
US10614790B2 (en) | 2017-03-30 | 2020-04-07 | Bose Corporation | Automatic gain control in an active noise reduction (ANR) signal flow path |
- US10616676B2 | 2018-04-02 | 2020-04-07 | Bose Corporation | Dynamically adjustable sidetone generation
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10264030B2 (en) | 2016-02-22 | 2019-04-16 | Sonos, Inc. | Networked microphone device control |
US10142754B2 (en) | 2016-02-22 | 2018-11-27 | Sonos, Inc. | Sensor on moving component of transducer |
US9947316B2 (en) | 2016-02-22 | 2018-04-17 | Sonos, Inc. | Voice control of a media playback system |
US10095470B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Audio response playback |
US9965247B2 (en) | 2016-02-22 | 2018-05-08 | Sonos, Inc. | Voice controlled media playback system based on user profile |
US10509626B2 (en) | 2016-02-22 | 2019-12-17 | Sonos, Inc | Handling of loss of pairing between networked devices |
US10097919B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Music service selection |
US9978390B2 (en) | 2016-06-09 | 2018-05-22 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US10152969B2 (en) | 2016-07-15 | 2018-12-11 | Sonos, Inc. | Voice detection by multiple devices |
US10134399B2 (en) | 2016-07-15 | 2018-11-20 | Sonos, Inc. | Contextualization of voice inputs |
US10115400B2 (en) | 2016-08-05 | 2018-10-30 | Sonos, Inc. | Multiple voice services |
US9942678B1 (en) | 2016-09-27 | 2018-04-10 | Sonos, Inc. | Audio playback settings for voice interaction |
US9743204B1 (en) | 2016-09-30 | 2017-08-22 | Sonos, Inc. | Multi-orientation playback device microphones |
US10181323B2 (en) | 2016-10-19 | 2019-01-15 | Sonos, Inc. | Arbitration-based voice recognition |
US11183181B2 (en) | 2017-03-27 | 2021-11-23 | Sonos, Inc. | Systems and methods of multiple voice services |
US10475449B2 (en) | 2017-08-07 | 2019-11-12 | Sonos, Inc. | Wake-word detection suppression |
US10048930B1 (en) | 2017-09-08 | 2018-08-14 | Sonos, Inc. | Dynamic computation of system response volume |
US10446165B2 (en) | 2017-09-27 | 2019-10-15 | Sonos, Inc. | Robust short-time fourier transform acoustic echo cancellation during audio playback |
US10482868B2 (en) | 2017-09-28 | 2019-11-19 | Sonos, Inc. | Multi-channel acoustic echo cancellation |
US10621981B2 (en) | 2017-09-28 | 2020-04-14 | Sonos, Inc. | Tone interference cancellation |
US10051366B1 (en) | 2017-09-28 | 2018-08-14 | Sonos, Inc. | Three-dimensional beam forming with a microphone array |
US10466962B2 (en) | 2017-09-29 | 2019-11-05 | Sonos, Inc. | Media playback system with voice assistance |
US10880650B2 (en) | 2017-12-10 | 2020-12-29 | Sonos, Inc. | Network microphone devices with automatic do not disturb actuation capabilities |
US10818290B2 (en) | 2017-12-11 | 2020-10-27 | Sonos, Inc. | Home graph |
USD869443S1 (en) * | 2017-12-27 | 2019-12-10 | Sony Corporation | Earphone |
WO2019152722A1 (en) | 2018-01-31 | 2019-08-08 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US11175880B2 (en) | 2018-05-10 | 2021-11-16 | Sonos, Inc. | Systems and methods for voice-assisted media content selection |
US10847178B2 (en) | 2018-05-18 | 2020-11-24 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection |
US10959029B2 (en) | 2018-05-25 | 2021-03-23 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
US10681460B2 (en) | 2018-06-28 | 2020-06-09 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US10461710B1 (en) | 2018-08-28 | 2019-10-29 | Sonos, Inc. | Media playback system with maximum volume setting |
US11076035B2 (en) | 2018-08-28 | 2021-07-27 | Sonos, Inc. | Do not disturb feature for audio notifications |
US10878811B2 (en) | 2018-09-14 | 2020-12-29 | Sonos, Inc. | Networked devices, systems, and methods for intelligently deactivating wake-word engines |
US10587430B1 (en) | 2018-09-14 | 2020-03-10 | Sonos, Inc. | Networked devices, systems, and methods for associating playback devices based on sound codes |
US11024331B2 (en) | 2018-09-21 | 2021-06-01 | Sonos, Inc. | Voice detection optimization using sound metadata |
US10811015B2 (en) | 2018-09-25 | 2020-10-20 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US11100923B2 (en) | 2018-09-28 | 2021-08-24 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
US10692518B2 (en) | 2018-09-29 | 2020-06-23 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection via multiple network microphone devices |
US11899519B2 (en) | 2018-10-23 | 2024-02-13 | Sonos, Inc. | Multiple stage network microphone device with reduced power consumption and processing load |
EP3654249A1 (en) | 2018-11-15 | 2020-05-20 | Snips | Dilated convolutions and gating for efficient keyword spotting |
US11183183B2 (en) | 2018-12-07 | 2021-11-23 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11132989B2 (en) | 2018-12-13 | 2021-09-28 | Sonos, Inc. | Networked microphone devices, systems, and methods of localized arbitration |
US10602268B1 (en) | 2018-12-20 | 2020-03-24 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US10867604B2 (en) | 2019-02-08 | 2020-12-15 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing |
US11315556B2 (en) | 2019-02-08 | 2022-04-26 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification |
US11120794B2 (en) | 2019-05-03 | 2021-09-14 | Sonos, Inc. | Voice assistant persistence across multiple network microphone devices |
US11361756B2 (en) | 2019-06-12 | 2022-06-14 | Sonos, Inc. | Conditional wake word eventing based on environment |
US11200894B2 (en) | 2019-06-12 | 2021-12-14 | Sonos, Inc. | Network microphone device with command keyword eventing |
US10586540B1 (en) | 2019-06-12 | 2020-03-10 | Sonos, Inc. | Network microphone device with command keyword conditioning |
US11138969B2 (en) | 2019-07-31 | 2021-10-05 | Sonos, Inc. | Locally distributed keyword detection |
US11138975B2 (en) | 2019-07-31 | 2021-10-05 | Sonos, Inc. | Locally distributed keyword detection |
US10871943B1 (en) | 2019-07-31 | 2020-12-22 | Sonos, Inc. | Noise classification for event detection |
US11189286B2 (en) | 2019-10-22 | 2021-11-30 | Sonos, Inc. | VAS toggle based on device orientation |
US11200900B2 (en) | 2019-12-20 | 2021-12-14 | Sonos, Inc. | Offline voice control |
US11562740B2 (en) | 2020-01-07 | 2023-01-24 | Sonos, Inc. | Voice verification for media playback |
US11556307B2 (en) | 2020-01-31 | 2023-01-17 | Sonos, Inc. | Local voice data processing |
US11308958B2 (en) | 2020-02-07 | 2022-04-19 | Sonos, Inc. | Localized wakeword verification |
US11308962B2 (en) | 2020-05-20 | 2022-04-19 | Sonos, Inc. | Input detection windowing |
US11727919B2 (en) | 2020-05-20 | 2023-08-15 | Sonos, Inc. | Memory allocation for keyword spotting engines |
US11482224B2 (en) | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
US11698771B2 (en) | 2020-08-25 | 2023-07-11 | Sonos, Inc. | Vocal guidance engines for playback devices |
US11984123B2 (en) | 2020-11-12 | 2024-05-14 | Sonos, Inc. | Network device interaction by range |
US11551700B2 (en) | 2021-01-25 | 2023-01-10 | Sonos, Inc. | Systems and methods for power-efficient keyword detection |
CN115379356B (en) * | 2022-09-23 | 2025-02-28 | 上海艾为电子技术股份有限公司 | A low-latency noise reduction circuit, method and active noise reduction earphone |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100022280A1 (en) * | 2008-07-16 | 2010-01-28 | Qualcomm Incorporated | Method and apparatus for providing sidetone feedback notification to a user of a communication device with multiple microphones |
US20100272284A1 (en) * | 2009-04-28 | 2010-10-28 | Marcel Joho | Feedforward-Based ANR Talk-Through |
US8620650B2 (en) | 2011-04-01 | 2013-12-31 | Bose Corporation | Rejecting noise with paired microphones |
US20140294193A1 (en) * | 2011-02-25 | 2014-10-02 | Nokia Corporation | Transducer apparatus with in-ear microphone |
US20150256660A1 (en) * | 2014-03-05 | 2015-09-10 | Cirrus Logic, Inc. | Frequency-dependent sidetone calibration |
US20150364145A1 (en) | 2014-06-13 | 2015-12-17 | Bose Corporation | Self-voice feedback in communications headsets |
-
2016
- 2016-01-21 US US15/003,339 patent/US9749731B2/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10553195B2 (en) | 2017-03-30 | 2020-02-04 | Bose Corporation | Dynamic compensation in active noise reduction devices |
US10614790B2 (en) | 2017-03-30 | 2020-04-07 | Bose Corporation | Automatic gain control in an active noise reduction (ANR) signal flow path |
US11636841B2 (en) | 2017-03-30 | 2023-04-25 | Bose Corporation | Automatic gain control in an active noise reduction (ANR) signal flow path |
US12211479B2 (en) | 2017-03-30 | 2025-01-28 | Bose Corporation | Automatic gain control in an active noise reduction (ANR) signal flow path |
- US10616676B2 | 2018-04-02 | 2020-04-07 | Bose Corporation | Dynamically adjustable sidetone generation
Also Published As
Publication number | Publication date |
---|---|
US20170214996A1 (en) | 2017-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9749731B2 (en) | Sidetone generation using multiple microphones | |
US11657793B2 (en) | Voice sensing using multiple microphones | |
CN111902866B (en) | Echo control in binaural adaptive noise cancellation system in headphones | |
JP6903153B2 (en) | Audio signal processing for noise reduction | |
US10269369B2 (en) | System and method of noise reduction for a mobile device | |
CN109218912B (en) | Multi-microphone blasting noise control | |
TW201030733A (en) | Systems, methods, apparatus, and computer program products for enhanced active noise cancellation | |
CN110856072A (en) | Earphone conversation noise reduction method and earphone | |
EP3777114B1 (en) | Dynamically adjustable sidetone generation | |
EP3566465A1 (en) | Microphone array beamforming | |
CN112399301A (en) | Earphone and noise reduction method | |
US20230010505A1 (en) | Wearable audio device with enhanced voice pick-up | |
CN116158090A (en) | Audio signal processing method and system for suppressing echo | |
JP5082878B2 (en) | Audio conferencing equipment | |
US11335315B2 (en) | Wearable electronic device with low frequency noise reduction | |
CN113038318B (en) | A kind of voice signal processing method and device | |
Miyahara et al. | A hearing device with an adaptive noise canceller for noise-robust voice input | |
CN208015947U (en) | Earphone | |
CN102970638B (en) | Processing signals | |
CN115398934A (en) | Method, device, earphone and computer program for actively suppressing occlusion effect when reproducing audio signals | |
US20250088793A1 (en) | Wearable audio devices with enhanced voice pickup | |
US20250088794A1 (en) | Wearable audio devices with enhanced voice pickup | |
US20250054479A1 (en) | Audio device with distractor suppression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BOSE CORPORATION, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YEO, XIANG-ERN;REEL/FRAME:038139/0452 Effective date: 20160225 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, MASSACHUSETTS Free format text: SECURITY INTEREST;ASSIGNOR:BOSE CORPORATION;REEL/FRAME:070438/0001 Effective date: 20250228 |