US11877140B2 - Processing object-based audio signals - Google Patents
Processing object-based audio signals
- Publication number
- US11877140B2 (application US17/963,103)
- Authority
- US
- United States
- Prior art keywords
- submix
- audio
- audio objects
- audio signal
- submixes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04S—STEREOPHONIC SYSTEMS
  - H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
  - H04S7/30—Control circuits for electronic adaptation of the sound field
  - H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
  - H04S3/00—Systems employing more than two channels, e.g. quadraphonic
  - H04S3/008—Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
  - H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
  - H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
- G—PHYSICS; G10—MUSICAL INSTRUMENTS; ACOUSTICS; G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
  - G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  - G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
Definitions
- Example embodiments disclosed herein generally relate to audio signal processing, and more specifically, to a method and system for processing an object-based audio signal.
- audio processing algorithms modify audio signals in either the temporal domain or the spectral domain.
- Various audio processing algorithms have been developed to improve the overall quality of audio signals and thus enhance the user's experience during playback.
- existing processing algorithms may include a surround virtualizer, a dialog enhancer, a volume leveler, a dynamic equalizer and the like.
- the surround virtualizer can be used to render a multi-channel audio signal over a stereo device such as a headphone because it creates a virtual surround effect for the stereo device.
- the dialog enhancer aims at enhancing dialogs in order to improve the clarity and intelligibility of human voices.
- the volume leveler aims at modifying an audio signal so as to make the loudness of the audio content more consistent over time, which may lower the output sound level for a very loud object at some time but enhance the output sound level for a whispered object at some other time.
- the dynamic equalizer provides a way to automatically adjust the equalization gains in each frequency band in order to keep the spectral balance consistent with regard to a desired timbre or tone.
- a channel-based audio signal can therefore be spatially rendered in the sound field.
- the input audio channels are first down-mixed into a number of submixes, such as front, center and surround submixes, in order to reduce the computational complexity of the subsequent audio processing algorithms.
- the sound field can be divided into several coverage zones in relation to endpoint arrangements and the submix represents a sum of components of the audio signal in relation to a particular coverage zone.
- An audio signal is typically processed and rendered as a channel-based audio signal, meaning that metadata associated with position, velocity, size and the like of an audio object is absent in the audio signal.
- a rendering algorithm may, for example, render the audio objects to an immersive speaker layout including speakers all around as well as above the listener.
- the object-based audio signals need to be rendered first as channel-based audio signals in order to be down-mixed into submixes for audio processing. This means that the metadata associated with these object-based audio signals is discarded, and the resulting rendering is thus compromised in terms of playback performance.
- example embodiments disclosed herein propose a method and system for processing object-based audio signals.
- example embodiments disclosed herein provide a method of processing an audio signal, the audio signal having a plurality of audio objects.
- the method includes calculating, based on spatial metadata of the audio object, a panning coefficient for each of the audio objects in relation to each of a plurality of predefined channel coverage zones, and converting the audio signal into submixes in relation to all of the predefined channel coverage zones based on the calculated panning coefficients and the audio objects.
- the predefined channel coverage zones are defined by a plurality of endpoints distributed in a sound field.
- Each of the submixes indicates a sum of components of the plurality of the audio objects in relation to one of the predefined channel coverage zones.
- the method also includes generating a submix gain by applying an audio processing to each of the submixes, and controlling an object gain applied to each of the audio objects, the object gain being a function of the panning coefficients for each of the audio objects and the submix gains in relation to each of the predefined channel coverage zones.
- example embodiments disclosed herein provide a system for processing an audio signal, the audio signal having a plurality of audio objects.
- the system includes a panning coefficient calculating unit configured to calculate a panning coefficient for each of the audio objects in relation to each of a plurality of predefined channel coverage zones based on spatial metadata of the audio object, and a submix converting unit configured to convert the audio signal into submixes in relation to all of the predefined channel coverage zones based on the calculated panning coefficients and the audio objects.
- the predefined channel coverage zones are defined by a plurality of endpoints distributed in a sound field.
- Each of the submixes indicates a sum of components of the plurality of the audio objects in relation to one of the predefined channel coverage zones.
- the system also includes a submix gain generating unit configured to generate a submix gain by applying an audio processing to each of the submixes, and an object gain controlling unit configured to control an object gain applied to each of the audio objects, the object gain being a function of the panning coefficients for each of the audio objects and the submix gains in relation to each of the predefined channel coverage zones.
- object-based audio signals can be rendered by taking into account the associated metadata. Because metadata from the original audio signal is preserved and used when rendering all of the audio objects, the audio signal processing and rendering can be carried out more accurately, and the resulting reproduction is thus more immersive when played by, for example, a home theatre system. Meanwhile, with the submixing process described herein, the object-based audio signal can be converted into a number of submixes that can be processed by conventional audio processing algorithms, which is advantageous because existing processing algorithms remain applicable to object-based audio processing.
- the generated panning coefficients are useful to yield object gains for weighting all of the original audio objects.
- FIG. 1 illustrates a flowchart of a method of processing an object-based audio signal in accordance with an example embodiment
- FIG. 2 illustrates an example of predefined channel coverage zones for a typical arrangement of surround endpoints in accordance with an example embodiment
- FIG. 3 illustrates a block diagram of an object-based audio signal rendering in accordance with an example embodiment
- FIG. 4 illustrates a flowchart of a method of processing an object-based audio signal in accordance with another example embodiment
- FIG. 5 illustrates a system for processing an object-based audio signal in accordance with an example embodiment
- FIG. 6 illustrates a block diagram of an example computer system suitable for implementing the example embodiments disclosed herein.
- the audio content or audio signal as input is in an object-based format. It includes one or more audio objects, and each audio object refers to an individual audio element with associated spatial metadata describing properties of the object such as position, velocity, size and so forth.
- the audio objects may be based on single channel or multiple channels.
- the audio signal is meant to be reproduced in predefined and fixed speaker locations, which are able to present the audio objects precisely in terms of location and loudness, as perceived by audiences.
- the object-based audio signal is easily manipulated or processed owing to its informative metadata, and it can be tailored to different acoustic systems such as a 7.1 surround home theatre and a headphone. Therefore, the object-based audio signal can provide a more immersive audio experience through more flexible rendering of the audio objects in comparison to traditional channel-based audio signals.
- FIG. 1 illustrates a flowchart of a method 100 of processing an object-based audio signal in accordance with an example embodiment
- FIG. 3 illustrates an example framework 300 of the object-based audio signal processing and rendering in accordance with the example embodiment.
- FIG. 2 illustrates an example of predefined channel coverage zones defined by a typical arrangement of surround endpoints, which shows a typical environment of use for surround content reproduction. An embodiment will be described hereinafter with reference to FIGS. 1 through 3.
- a panning coefficient for each of audio objects in relation to each of predefined channel coverage zones is calculated based on each object's spatial metadata, namely, its position in a sound field relative to endpoints or speakers.
- the predefined channel coverage zones may be defined by a number of endpoints distributed in a sound field, so that the position of any of the audio objects in the sound field can be described in relation to the zones. For example, if a particular object is meant to be played behind the audience, its positioning should be contributed mostly by the surround zone and less by the other zones.
- the panning coefficient is a weight for describing how close a particular audio object is located relative to each of a number of predefined channel coverage zones.
- Each of the predefined channel coverage zones may correspond to one submix used to cluster components of the audio objects in relation to each of the predefined channel coverage zones.
- FIG. 2 illustrates an example of predefined channel coverage zones distributed in a sound field formed by a number of endpoints or speakers, where a center zone is defined by a center channel 211 (the upper middle circle denoted by 0.5), a front zone is defined by a front left channel 201 and a front right channel 202 (the upper left and upper right circles denoted respectively by 0 and 1.0), and a surround zone is defined by a number of surround channels, for example, two surround left channels 221 , 223 (the left and left bottom circles denoted respectively by 0.5 and 1.0) and two surround right channels 222 , 224 (the right and right bottom circles denoted respectively by 0.5 and 1.0).
- The intersection of the two dashed lines represents the sweet spot, where an audience is recommended to be seated in order to experience the best possible sound quality and surround effect. However, audiences may take seats other than the sweet spot and still perceive an immersive reproduction.
- FIG. 2 only shows a sound field in which a particular audio object can be described by x-axis and y-axis in a 2D manner.
- a height zone also can be defined by a height channel.
- Most commercially available surround systems are arranged in accordance with FIG. 2, and thus the spatial metadata for an audio object may be in the form of [X, Y] or [X, Y, Z], corresponding to the coordinate system in FIG. 2.
- the panning coefficient can be calculated for each audio object in each submix by Equations (1) to (4) for the center zone, the front zone, the surround zone and the height zone, respectively.
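- The formulas of Equations (1) to (4) are not reproduced in this text. As a rough, hypothetical illustration only, the following Python sketch derives zone panning weights from a normalized [x, y, z] object position under an assumed linear panning law; the function name and the panning law are assumptions, not the patent's equations.

```python
import numpy as np

def zone_panning_coefficients(x, y, z):
    """Hypothetical zone panning weights for one audio object.

    Assumes the coordinate system of FIG. 2: x runs from the front left
    channel (0) to the front right channel (1), y runs from the front (0)
    to the back (1), and z is the elevation in [0, 1]. This is NOT the
    patent's Equations (1)-(4), which are not reproduced here; it is only
    an illustrative linear panning law.
    """
    # Center weight: largest on the x = 0.5 axis, fading toward the back.
    center_proximity = max(0.0, 1.0 - 2.0 * abs(x - 0.5))
    alpha_c = center_proximity * (1.0 - y)
    # Front weight: the remaining frontal contribution.
    alpha_f = (1.0 - center_proximity) * (1.0 - y)
    # Surround weight: grows as the object moves toward the back.
    alpha_s = y
    # Height weight: taken directly from the elevation coordinate.
    alpha_h = z
    coeffs = np.array([alpha_c, alpha_f, alpha_s, alpha_h])
    norm = np.linalg.norm(coeffs)
    # Normalize so that the squared weights sum to 1 (energy preserving).
    return coeffs / norm if norm > 0.0 else coeffs
```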
- the endpoint arrangement shown in FIG. 2 and its corresponding coordinate system are illustrative; how the endpoints or speakers are arranged and how the position of the audio object within the sound field is represented are not to be limited.
- although front, center, surround and height zones are illustrated in the example embodiments disclosed herein, it should be appreciated that other ways of zone segmentation are also possible, and the number of the segmented zones is not to be limited.
- the audio signal is converted into submixes in relation to all of the predefined channel coverage zones based on the panning coefficients calculated at the step S 101 , as described above, and the audio objects.
- the step of converting the audio signal into submixes also can be referred to as downmixing.
- the submixes can be generated as a weighted average of each of the audio objects by Equation (6) as below.
- s represents a submix signal including components of a number of audio objects in relation to the predefined channel coverage zones
- j represents one of the four zones c, f, s, h as defined previously
- N represents the total number of the audio objects in the object-based audio signal
- object i represents the signal associated with an audio object i
- α_ij represents the panning coefficient for the i-th object in relation to the j-th zone.
- the submix downmixing process is conducted for each of the zones, with all of the audio objects weighted by their panning coefficients.
- each object may be distributed differently in various zones.
- a gunshot at the right side of the sound field may have its major component downmixed into the front submix represented by 201 and 202 as shown in FIG. 2 , with its minor component(s) downmixed into other submix(es).
- one submix indicates a sum of components of multiple audio objects in relation to one predefined channel coverage zone.
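- As a concrete illustration of Equation (6), the following minimal Python sketch (assuming the panning coefficients have already been computed per Equations (1) to (4)) downmixes N object signals into the zone submixes as weighted sums. The array shapes and the example coefficient values are assumptions for illustration.

```python
import numpy as np

def downmix_to_submixes(objects, alpha):
    """Equation (6): s_j = sum over i of alpha_ij * object_i.

    objects: (N, T) array holding N object signals of T samples each.
    alpha:   (N, J) array of panning coefficients, one column per
             predefined channel coverage zone (e.g., C/F/S/H).
    Returns a (J, T) array of submix signals.
    """
    # Each submix is a weighted sum of all object signals.
    return alpha.T @ objects

# Example: 3 objects, 4 zones (C/F/S/H), one second at 48 kHz.
rng = np.random.default_rng(0)
objects = rng.standard_normal((3, 48000))
alpha = np.array([[0.9, 0.4, 0.1, 0.0],   # mostly center
                  [0.0, 0.1, 0.9, 0.0],   # mostly surround
                  [0.0, 0.0, 0.3, 0.9]])  # mostly height
submixes = downmix_to_submixes(objects, alpha)  # shape (4, 48000)
```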
- the generated height submix can provide a higher resolution and a more immersive experience.
- conventional channel-based audio processing algorithms usually only process front (F), center (C), and surround (S) submixes. Therefore, the algorithms may need to be extended to deal with the height (H) submix in parallel to C/F/S processing.
- the H submix can be processed using the same method as that used for processing the S submix. This requires the least modification to the conventional channel-based audio processing algorithms. It is noted that, although the same method is applied, the obtained gains for the height submix and the surround submix would still be different, since the input signals are different.
- the H submix can alternatively be processed by designing a specific method according to its spatial attribute. For example, a specific loudness model and masking model may be applied to the H submix, since the loudness perception and masking effect there could be quite different from those of the front or surround submixes.
- the steps S 101 and S 102 may be achieved by an object submixer 301 as shown in FIG. 3 which illustrates a framework 300 of the object-based audio signal processing and rendering in accordance with the example embodiment.
- the input audio signal is an object-based audio signal which contains a number of objects and their corresponding metadata such as spatial metadata.
- the spatial metadata is used to calculate the panning coefficients in relation to the four predefined channel coverage zones by Equations (1) to (4), and the resulting panning coefficients and the original objects are used to generate submixes by Equation (6).
- the calculation of the panning coefficients and the generation of the submixes may be performed by the object submixer 301.
- the object submixer 301 is a key component to leverage the existing channel-based audio processing algorithms, which typically downmix the input multichannel audio (e.g., 5.1 or 7.1) into three submixes (F/C/S) in order to reduce computational complexity. Similarly, the object submixer 301 also converts or downmixes the audio objects into submixes based on the objects' spatial metadata, and the submixes can be expanded from the existing F/C/S to include additional spatial resolutions, for example, a height submix as discussed above. If metadata on object type is available, or automatic classification technology is used to identify the types of the audio objects, the submixes can further include other non-spatial attributes, such as a dialog submix for subsequent dialog enhancement, which will be explained in detail later in the description. With these submixes converted in accordance with the methods and systems herein, the existing channel-based audio processing algorithms can be directly used or slightly modified for object-based audio processing.
- a submix gain can be generated by applying an audio processing to each of the submixes.
- This can be achieved by an audio processor 302 as shown in FIG. 3, which receives the submixes from the object submixer 301 and outputs their respective submix gains.
- the audio processor 302 may include the existing channel-based audio processing algorithms, such as a surround virtualizer, a dialog enhancer, a volume leveler, a dynamic equalizer and the like, because the object-based audio objects and their respective metadata are converted into submixes that the channel-based processing can accept.
- the channel-based audio processing need not be changed and can be used for processing the object-based audio objects as well.
- an object gain applied to each of the audio objects can be controlled. This can be achieved by an object gain controller 303 as shown in FIG. 3 , which is used to apply gains to the original audio objects based on the submix gains and the panning coefficients. After applying audio processing algorithms, as discussed previously, a set of submix gains will be estimated for each submix, indicating how the audio signal should be modified. These submix gains are then applied to the original audio objects, in proportion to each object's contribution to each submix. That is, an object gain for each audio object is related to the submix gain obtained for each submix and the panning coefficient for the audio object in each submix. The object gain may be assigned to each of the audio objects based on the following Equation (7):
- ObjGain_i represents the object gain of the i-th object
- g_f, g_s, g_c and g_h represent the submix gains obtained for the front, surround, center and height submixes, respectively
- α_if, α_is, α_ic and α_ih represent the panning coefficients for the i-th object in relation to the front zone, the surround zone, the center zone and the height zone, respectively.
- in Equation (7), both the position relative to the zones (reflected by α_ij, j being one of the four zones c, f, s, h) and the desired processing effect (reflected by g_j, j being one of the four zones c, f, s, h) are considered for each of the objects, resulting in improved accuracy of the audio processing for all the objects.
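- Equation (7) itself does not survive in this text; from the description above, a weighted sum of the submix gains by each object's panning coefficients is one plausible form. The sketch below is therefore an assumption, not necessarily the patent's exact Equation (7).

```python
import numpy as np

def object_gains(alpha, submix_gains):
    """Plausible reading of the object-gain computation around
    Equation (7): each object's gain mixes the submix gains in
    proportion to the object's contribution to each submix.

    alpha:        (N, J) panning coefficients (zones ordered f, s, c, h).
    submix_gains: (J,) submix gains g_f, g_s, g_c, g_h produced by the
                  audio processor.
    Returns an (N,) array of per-object gains ObjGain_i.
    """
    return alpha @ submix_gains
```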
- the audio signal may be rendered based on the original audio objects, their corresponding metadata, and the object gains.
- This rendering step may be achieved by an object renderer 304 , as shown in FIG. 3 .
- the object renderer 304 may render the processed (object-gain applied) audio objects with various playback devices, which can be discrete channels, soundbars, headphones, and the like. Any existing or potentially available off-the-shelf renderers for object-based audio signals may be applied here, and therefore details in the following will be omitted.
- although the object gains for the audio objects are illustrated as being used in an audio rendering process, the object gains may be provided separately, without the audio rendering process.
- a standalone decoding process may yield a number of object gains as its output.
- the object-based audio signal can be converted into a number of submixes that can be processed by conventional audio processing algorithms, which is advantageous because existing processing algorithms remain applicable to object-based audio processing.
- the generated panning coefficients are useful to yield object gains for weighting all of the original audio objects. Because the number of objects in an object-based audio signal is normally much larger than the number of channels in a channel-based audio signal, the separate weighting of the objects produces improved accuracy of the audio signal processing and rendering compared with conventional methods that apply the processed submix gains to the channels. Further, because metadata from the original audio signal is preserved and used when rendering all of the audio objects, the audio signal may be rendered more accurately and the resulting reproduction is thus more immersive when played by, for example, a home theatre system.
- a more sophisticated flowchart 400 is illustrated, involving creating dialog submix(es) and analyzing object type(s).
- the types of the audio objects may be identified.
- Automatic classification technologies can be used to identify audio types of the signal being processed to generate the dialog submix.
- Existing methods, such as the one described in U.S. Patent Application No. 61/811,062, which is incorporated herein by reference in its entirety, may be used for audio type identification.
- an additional dialog (D) submix, representing content rather than spatial attributes, can also be generated. Dialog submixes are useful when human voices such as narration are meant to be processed independently of other audio objects.
- in dialog submix generation, an object can be exclusively assigned to the dialog submix, or partially (with a weight) downmixed to the dialog submix.
- an audio classification algorithm usually outputs a confidence score (in [0, 1]) with regard to its decision on the presence of dialog. This confidence score can be used to estimate a reasonable weight for the object.
- the C/F/S/H/D submixes can be generated by using the following panning coefficients.
- c_i represents the weight for panning to the dialog submix, which can be derived from the dialog confidence of the audio object (or set directly equal to the dialog confidence score)
- α_id represents the panning coefficient for the i-th object in relation to a dialog zone
- α_ij′ represents the modified panning coefficient for the other submixes, taking the dialog confidence score into account
- j represents one of the four zones c, f, s, h as defined previously.
- c_i² is used for energy preservation, and α_ij is calculated in the same way as in Equations (1) to (4). If one or more audio objects are determined to be dialog object(s), the dialog object(s) may be clustered into a dialog submix at step S 403.
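- A minimal Python sketch of Equations (8) and (9), assuming the spatial panning coefficients α_ij and the per-object dialog confidence scores c_i are already available:

```python
import numpy as np

def dialog_panning_coefficients(alpha, dialog_confidence):
    """Equations (8) and (9): split each object's panning energy
    between the dialog submix and the spatial (C/F/S/H) submixes.

    alpha:             (N, J) spatial panning coefficients alpha_ij.
    dialog_confidence: (N,) dialog confidence scores c_i in [0, 1].
    Returns (alpha_d, alpha_mod): the dialog panning coefficients
    alpha_id and the modified spatial coefficients alpha_ij'.
    """
    c2 = np.asarray(dialog_confidence) ** 2
    alpha_d = c2                             # Eq. (8): alpha_id = c_i^2
    alpha_mod = (1.0 - c2)[:, None] * alpha  # Eq. (9): (1 - c_i^2) * alpha_ij
    return alpha_d, alpha_mod
```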
- dialog enhancement can work on clean dialog signals instead of mixed signals (dialog with background music or noise). Another benefit it brings is that dialog at different positions can be enhanced simultaneously, while conventional dialog enhancement may only boost the dialogs in the center channel.
- a submix gain may be generated for the dialog object(s) by applying particular processing algorithms designed for dialog, in order to represent a preferred weighting of the particular dialog submix.
- the remaining audio objects may be downmixed into submixes, similarly to the steps S 101 and S 102 described above.
- the identified type can be used, at step S 406, to automatically steer the behavior of the audio processing algorithms by estimating their most suitable parameters based on the identified type, as in the system presented in U.S. Patent Application No. 61/811,062.
- for example, the amount of the intelligent equalizer may be set close to 1 for a music signal and close to 0 for a speech signal.
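- A hypothetical steering rule along these lines is sketched below; the exact mapping from classifier confidences to the equalizer amount is an assumption, since the description only fixes the two endpoints.

```python
def intelligent_equalizer_amount(music_confidence, speech_confidence):
    """Hypothetical steering of the intelligent equalizer amount:
    close to 1 for music, close to 0 for speech, interpolated by the
    relative classifier confidences (an assumption for illustration).
    """
    total = music_confidence + speech_confidence
    # Fall back to a neutral setting when neither class is detected.
    return music_confidence / total if total > 0.0 else 0.5
```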
- at step S 407, the object gains applied to each of the audio objects may be controlled in a way similar to the step S 104.
- the steps from S 403 to S 406 are not necessarily performed in sequence.
- the dialog object(s) and the other object(s) may be processed simultaneously so that the resulting submix gains for all of the objects are generated at the same time.
- the submix gain for the dialog object(s) may be generated after the submix gains for the remaining object(s) are generated.
- the objects can be rendered more accurately.
- even when the dialog submix is utilized, the computational complexity would not be increased compared with the case with only F/C/S/H submixes.
- FIG. 5 illustrates a system 500 for processing an audio signal having a plurality of audio objects in accordance with an example embodiment described herein.
- the system 500 comprises a panning coefficient calculating unit 501 configured to calculate a panning coefficient for each of the audio objects in relation to each of a plurality of predefined channel coverage zones based on spatial metadata of the audio object.
- the system 500 also comprises a submix converting unit 502 configured to convert the audio signal into submixes in relation to all of the predefined channel coverage zones based on the calculated panning coefficients and the audio objects.
- the predefined channel coverage zones are defined by a plurality of endpoints distributed in a sound field.
- Each of the submixes indicates a sum of components of the plurality of the audio objects in relation to one of the predefined channel coverage zones.
- the system 500 further comprises a submix gain generating unit 503 configured to generate a submix gain by applying an audio processing to each of the submixes, and an object gain controlling unit 504 configured to control an object gain applied to each of the audio objects, the object gain being as a function of the panning coefficients for each of the audio objects and the submix gains in relation to each of the predefined channel coverage zones.
- the system 500 may comprise an audio signal rendering unit configured to render the audio signal based on the audio objects and the object gain.
- each of the submixes may be converted as a weighted average of the plurality of audio objects, with the weight being the panning coefficient for each of the audio objects.
- the number of the predefined channel coverage zones may be equal to the number of the converted submixes.
- the system 500 may further comprise a dialog determining unit configured to determine whether the audio object is a dialog object, and a dialog object clustering unit configured to cluster the audio object into a dialog submix in response to the audio object being determined to be a dialog object.
- whether the audio object is a dialog object may be estimated by a confidence score
- the system 500 may further comprise a dialog submix gain generating unit configured to generate the submix gain for the dialog submix based on the estimated confidence score.
- the predefined channel coverage zones may comprise a front zone defined by a front left channel and a front right channel, a center zone defined by a center channel, a surround zone defined by a surround left channel and a surround right channel, and a height zone defined by a height channel.
- the system 500 further comprises a front submix converting unit configured to convert the audio signal into a front submix in relation to the front zone based on the panning coefficients for the audio objects; a center submix converting unit configured to convert the audio signal into a center submix in relation to the center zone based on the panning coefficients for the audio objects; a surround submix converting unit configured to convert the audio signal into a surround submix in relation to the surround zone based on the panning coefficients for the audio objects; and a height submix converting unit configured to convert the audio signal into a height submix in relation to the height zone based on the panning coefficients for the audio objects.
- the system 500 may further comprise a merging unit configured to merge the center submix and the front submix, and a replacing unit configured to replace the center submix with the dialog submix.
- the surround submix and the height submix may be processed with the same audio processing algorithm in order to generate the corresponding submix gains.
- the system 500 may further comprise an object type identifying unit configured to identify, for each of the audio objects, a type of the audio object, and the submix gain generating unit may be configured to generate the submix gain by applying an audio processing to each of the submixes based on the identified type of the audio object.
- the components of the system 500 may be a hardware module or a software unit module.
- the system 500 may be implemented partially or completely with software and/or firmware, for example, implemented as a computer program product embodied in a computer readable medium.
- the system 500 may be implemented partially or completely based on hardware, for example, as an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on chip (SOC), a field programmable gate array (FPGA), and so forth.
- FIG. 6 shows a block diagram of an example computer system 600 suitable for implementing example embodiments disclosed herein.
- the computer system 600 comprises a central processing unit (CPU) 601 which is capable of performing various processes in accordance with a program stored in a read only memory (ROM) 602 or a program loaded from a storage section 608 to a random access memory (RAM) 603 .
- in the RAM 603, data required when the CPU 601 performs the various processes or the like is also stored as required.
- the CPU 601 , the ROM 602 and the RAM 603 are connected to one another via a bus 604 .
- An input/output (I/O) interface 605 is also connected to the bus 604 .
- the following components are connected to the I/O interface 605 : an input section 606 including a keyboard, a mouse, or the like; an output section 607 including a display, such as a cathode ray tube (CRT), a liquid crystal display (LCD), or the like, and a speaker or the like; the storage section 608 including a hard disk or the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like.
- the communication section 609 performs a communication process via a network such as the Internet.
- a drive 610 is also connected to the I/O interface 605 as required.
- a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 610 as required, so that a computer program read therefrom is installed into the storage section 608 as required.
- example embodiments disclosed herein comprise a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program including program code for performing the methods 100 and/or 400.
- the computer program may be downloaded and mounted from the network via the communication section 609 , and/or installed from the removable medium 611 .
- various example embodiments disclosed herein may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the example embodiments disclosed herein are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- example embodiments disclosed herein include a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program containing program codes configured to carry out the methods as described above.
- a machine readable medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine readable medium may be a machine readable signal medium or a machine readable storage medium.
- a machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- Computer program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
- the program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server or distributed among one or more remote computers or servers.
- example embodiments disclosed herein may be embodied in any of the forms described herein.
- the example embodiments disclosed herein may be described by the following enumerated example embodiments (EEEs).
Description
where α represents the panning coefficient for each zone, i represents the object index, c, f, s, h represent the center, front, surround and height zones, and [x_i, y_i, z_i] represents the modified relative position used for coefficient calculation, derived from the original object position [X_i, Y_i, Z_i].
s_j = Σ_{i=1}^{N} α_ij · object_i   (6)
where s_j represents a submix signal including components of a number of audio objects in relation to one of the predefined channel coverage zones, j represents one of the four zones c, f, s, h as defined previously, N represents the total number of the audio objects in the object-based audio signal, object_i represents the signal associated with an audio object i, and α_ij represents the panning coefficient for the i-th object in relation to the j-th zone.
where ObjGain_i represents the object gain of the i-th object, g_f, g_s, g_c and g_h represent the submix gains obtained for the front, surround, center and height submixes, respectively, and α_if, α_is, α_ic and α_ih represent the panning coefficients for the i-th object in relation to the front zone, the surround zone, the center zone and the height zone, respectively.
α_id = c_i²   (8)
α_ij′ = (1 − c_i²) · α_ij   (9)
where c_i represents the weight for panning to the dialog submix, which can be derived from the dialog confidence of the audio object (or set directly equal to the dialog confidence score), α_id represents the panning coefficient for the i-th object in relation to a dialog zone, α_ij′ represents the modified panning coefficient for the other submixes taking the dialog confidence score into account, and j represents one of the four zones c, f, s, h as defined previously.
- EEE 1. A method for an object audio processing system, including:
- An object submixer that renders/downmixes audio objects into submixes based on the object's spatial metadata;
- An audio processor that processes the generated submixes;
- A gain applier that applies the gains obtained from the audio processor to the original audio objects.
- EEE 2. The method in EEE 1, wherein the object submixer generates four submixes: Center, Front, Surround and Height, and each submix is generated as a weighted average of the audio objects, with the weight being the panning gain of each object in each submix.
- EEE 3. The method in EEE 1, wherein the object submixer further generates a dialog submix based on manual labels or automatic audio classification, the detailed computation being illustrated in Equations (8) and (9).
- EEE 4. The method in EEEs 2 and 3, wherein the object submixer generates four "enhanced" submixes from the five C/F/S/H/D submixes, by replacing C with D and merging the original C and F together.
- EEE 5. The method in EEE 1, wherein the audio processor processes the Height submix using the same method as for processing the Surround submix.
- EEE 6. The method in EEE 1, wherein the audio processor directly uses the dialog submix for dialog enhancement.
- EEE 7. The method in EEE 1, wherein the gain of each audio object is computed from the gain obtained for each submix and the panning gain of the object in each submix, as illustrated in Equation (7).
- EEE 8. The method in EEE 1, wherein a content identification module can be added for automatic content type identification and automatic steering of audio processing algorithms.
Claims (7)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/963,103 US11877140B2 (en) | 2015-06-01 | 2022-10-10 | Processing object-based audio signals |
US18/391,426 US20240205629A1 (en) | 2015-06-01 | 2023-12-20 | Processing object-based audio signals |
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510294063.7A CN106303897A (en) | 2015-06-01 | 2015-06-01 | Process object-based audio signal |
CN201510294063.7 | 2015-06-01 | ||
US201562183491P | 2015-06-23 | 2015-06-23 | |
PCT/US2016/034459 WO2016196226A1 (en) | 2015-06-01 | 2016-05-26 | Processing object-based audio signals |
US201715577510A | 2017-11-28 | 2017-11-28 | |
US16/143,351 US10251010B2 (en) | 2015-06-01 | 2018-09-26 | Processing object-based audio signals |
US16/368,574 US10602294B2 (en) | 2015-06-01 | 2019-03-28 | Processing object-based audio signals |
US16/825,776 US11470437B2 (en) | 2015-06-01 | 2020-03-20 | Processing object-based audio signals |
US17/963,103 US11877140B2 (en) | 2015-06-01 | 2022-10-10 | Processing object-based audio signals |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/825,776 Continuation US11470437B2 (en) | 2015-06-01 | 2020-03-20 | Processing object-based audio signals |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/391,426 Continuation US20240205629A1 (en) | 2015-06-01 | 2023-12-20 | Processing object-based audio signals |
Publications (2)
Publication Number | Publication Date |
---|---|
US20230105114A1 US20230105114A1 (en) | 2023-04-06 |
US11877140B2 true US11877140B2 (en) | 2024-01-16 |
Family
ID=57441671
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/577,510 Active US10111022B2 (en) | 2015-06-01 | 2016-05-26 | Processing object-based audio signals |
US16/143,351 Active US10251010B2 (en) | 2015-06-01 | 2018-09-26 | Processing object-based audio signals |
US16/368,574 Active US10602294B2 (en) | 2015-06-01 | 2019-03-28 | Processing object-based audio signals |
US16/825,776 Active 2037-01-04 US11470437B2 (en) | 2015-06-01 | 2020-03-20 | Processing object-based audio signals |
US17/963,103 Active US11877140B2 (en) | 2015-06-01 | 2022-10-10 | Processing object-based audio signals |
US18/391,426 Pending US20240205629A1 (en) | 2015-06-01 | 2023-12-20 | Processing object-based audio signals |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/577,510 Active US10111022B2 (en) | 2015-06-01 | 2016-05-26 | Processing object-based audio signals |
US16/143,351 Active US10251010B2 (en) | 2015-06-01 | 2018-09-26 | Processing object-based audio signals |
US16/368,574 Active US10602294B2 (en) | 2015-06-01 | 2019-03-28 | Processing object-based audio signals |
US16/825,776 Active 2037-01-04 US11470437B2 (en) | 2015-06-01 | 2020-03-20 | Processing object-based audio signals |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/391,426 Pending US20240205629A1 (en) | 2015-06-01 | 2023-12-20 | Processing object-based audio signals |
Country Status (4)
Country | Link |
---|---|
US (6) | US10111022B2 (en) |
EP (3) | EP3304936B1 (en) |
CN (1) | CN106303897A (en) |
WO (1) | WO2016196226A1 (en) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
CN104969289B (en) | 2013-02-07 | 2021-05-28 | 苹果公司 | Voice trigger of digital assistant |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10553221B2 (en) * | 2015-06-17 | 2020-02-04 | Sony Corporation | Transmitting device, transmitting method, receiving device, and receiving method for audio stream including coded data |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
US12197817B2 (en) | 2016-06-11 | 2025-01-14 | Apple Inc. | Intelligent device arbitration and control |
JP6567479B2 (en) * | 2016-08-31 | 2019-08-28 | 株式会社東芝 | Signal processing apparatus, signal processing method, and program |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
EP3622509B1 (en) * | 2017-05-09 | 2021-03-24 | Dolby Laboratories Licensing Corporation | Processing of a multi-channel spatial audio format input signal |
DK201770428A1 (en) * | 2017-05-12 | 2019-02-18 | Apple Inc. | Low-latency intelligent automated assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
KR102483470B1 (en) * | 2018-02-13 | 2023-01-02 | 한국전자통신연구원 | Apparatus and method for stereophonic sound generating using a multi-rendering method and stereophonic sound reproduction using a multi-rendering method |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US12118987B2 (en) | 2019-04-18 | 2024-10-15 | Dolby Laboratories Licensing Corporation | Dialog detector |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
CN114521334B (en) | 2019-07-30 | 2023-12-01 | 杜比实验室特许公司 | Audio processing systems, methods and media |
US11968268B2 (en) | 2019-07-30 | 2024-04-23 | Dolby Laboratories Licensing Corporation | Coordination of audio devices |
KR102535704B1 (en) | 2019-07-30 | 2023-05-30 | 돌비 레버러토리즈 라이쎈싱 코오포레이션 | Dynamics handling across devices with different playback capabilities |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US12301635B2 (en) | 2020-05-11 | 2025-05-13 | Apple Inc. | Digital assistant hardware abstraction |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US11984124B2 (en) | 2020-11-13 | 2024-05-14 | Apple Inc. | Speculative task flow execution |
EP4256815A2 (en) | 2020-12-03 | 2023-10-11 | Dolby Laboratories Licensing Corporation | Progressive calculation and application of rendering configurations for dynamic applications |
CN119422389A (en) * | 2022-06-27 | 2025-02-11 | 杜比实验室特许公司 | Separation and rendering of height objects |
Citations (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4086433A (en) * | 1974-03-26 | 1978-04-25 | National Research Development Corporation | Sound reproduction system with non-square loudspeaker lay-out |
US5757927A (en) * | 1992-03-02 | 1998-05-26 | Trifield Productions Ltd. | Surround sound apparatus |
US7672744B2 (en) | 2006-11-15 | 2010-03-02 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US20110166867A1 (en) | 2008-07-16 | 2011-07-07 | Electronics And Telecommunications Research Institute | Multi-object audio encoding and decoding apparatus supporting post down-mix signal |
US8139773B2 (en) | 2009-01-28 | 2012-03-20 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8204756B2 (en) | 2007-02-14 | 2012-06-19 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US20120170756A1 (en) | 2011-01-04 | 2012-07-05 | Srs Labs, Inc. | Immersive audio rendering system |
US20120177204A1 (en) | 2009-06-24 | 2012-07-12 | Oliver Hellmuth | Audio Signal Decoder, Method for Decoding an Audio Signal and Computer Program Using Cascaded Audio Object Processing Stages |
US8254600B2 (en) | 2009-01-28 | 2012-08-28 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US20120263308A1 (en) | 2009-10-16 | 2012-10-18 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus, method and computer program for providing one or more adjusted parameters for provision of an upmix signal representation on the basis of a downmix signal representation and a parametric side information associated with the downmix signal representation, using an average value |
US8295494B2 (en) | 2007-08-13 | 2012-10-23 | Lg Electronics Inc. | Enhancing audio with remixing capability |
US20120269353A1 (en) * | 2009-09-29 | 2012-10-25 | Juergen Herre | Audio signal decoder, audio signal encoder, method for providing an upmix signal representation, method for providing a downmix signal representation, computer program and bitstream using a common inter-object-correlation parameter value |
US8315396B2 (en) | 2008-07-17 | 2012-11-20 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating audio output signals using object based metadata |
US20120314875A1 (en) | 2011-06-09 | 2012-12-13 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding 3-dimensional audio signal |
WO2013006330A2 (en) | 2011-07-01 | 2013-01-10 | Dolby Laboratories Licensing Corporation | System and tools for enhanced 3d audio authoring and rendering |
US20130010969A1 (en) | 2010-03-19 | 2013-01-10 | Samsung Electronics Co., Ltd. | Method and apparatus for reproducing three-dimensional sound |
US8364497B2 (en) * | 2006-09-29 | 2013-01-29 | Electronics And Telecommunications Research Institute | Apparatus and method for coding and decoding multi-object audio signal with various channel |
US20140025386A1 (en) | 2012-07-20 | 2014-01-23 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for audio object clustering |
US8639368B2 (en) | 2008-07-15 | 2014-01-28 | Lg Electronics Inc. | Method and an apparatus for processing an audio signal |
US8670575B2 (en) | 2008-12-05 | 2014-03-11 | Lg Electronics Inc. | Method and an apparatus for processing an audio signal |
US8712784B2 (en) | 2009-06-10 | 2014-04-29 | Electronics And Telecommunications Research Institute | Encoding method and encoding device, decoding method and decoding device and transcoding method and transcoder for multi-object audio signals |
WO2014160678A2 (en) | 2013-03-26 | 2014-10-02 | Dolby Laboratories Licensing Corporation | Apparatuses and methods for audio classifying and processing |
US20140297296A1 (en) | 2011-11-01 | 2014-10-02 | Koninklijke Philips N.V. | Audio object encoding and decoding |
WO2014184353A1 (en) | 2013-05-16 | 2014-11-20 | Koninklijke Philips N.V. | An audio processing apparatus and method therefor |
US20140355771A1 (en) | 2013-05-29 | 2014-12-04 | Qualcomm Incorporated | Compression of decomposed representations of a sound field |
US20150016641A1 (en) | 2013-07-09 | 2015-01-15 | Nokia Corporation | Audio processing apparatus |
WO2015010999A1 (en) | 2013-07-22 | 2015-01-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for realizing a saoc downmix of 3d audio content |
WO2015010961A2 (en) | 2013-07-22 | 2015-01-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for mapping first and second input channels to at least one output channel |
WO2015011015A1 (en) | 2013-07-22 | 2015-01-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Multi-channel audio decoder, multi-channel audio encoder, methods, computer program and encoded audio representation using a decorrelation of rendered audio signals |
US20150170657A1 (en) * | 2013-11-27 | 2015-06-18 | Dts, Inc. | Multiplet-based matrix mixing for high-channel count multichannel audio |
US20150194158A1 (en) | 2012-07-31 | 2015-07-09 | Intellectual Discovery Co., Ltd. | Method and device for processing audio signal |
US20150223002A1 (en) | 2012-08-31 | 2015-08-06 | Dolby Laboratories Licensing Corporation | System for Rendering and Playback of Object Based Audio in Various Listening Environments |
US20150350802A1 (en) | 2012-12-04 | 2015-12-03 | Samsung Electronics Co., Ltd. | Audio providing apparatus and audio providing method |
US20160029140A1 (en) | 2013-04-03 | 2016-01-28 | Dolby International Ab | Methods and systems for generating and interactively rendering object based audio |
US20160104491A1 (en) * | 2013-04-27 | 2016-04-14 | Intellectual Discovery Co., Ltd. | Audio signal processing method for sound image localization |
US20160142844A1 (en) | 2013-06-28 | 2016-05-19 | Dolby Laboratories Licensing Corporation | Improved rendering of audio objects using discontinuous rendering-matrix updates |
US20160299738A1 (en) | 2013-04-04 | 2016-10-13 | Nokia Corporation | Visual Audio Processing Apparatus |
US20160316309A1 (en) | 2014-01-07 | 2016-10-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating a plurality of audio channels |
US20160330560A1 (en) | 2014-01-10 | 2016-11-10 | Samsung Electronics Co., Ltd. | Method and apparatus for reproducing three-dimensional audio |
US20170011751A1 (en) | 2014-03-26 | 2017-01-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for screen related audio object remapping |
US20170048640A1 (en) | 2015-08-14 | 2017-02-16 | Dts, Inc. | Bass management for object-based audio |
US20170309288A1 (en) | 2014-10-02 | 2017-10-26 | Dolby International Ab | Decoding method and decoder for dialog enhancement |
US20180091926A1 (en) | 2016-09-23 | 2018-03-29 | Samsung Electronics Co., Ltd. | Electronic device and control method thereof |
US20180174594A1 (en) | 2015-06-17 | 2018-06-21 | Samsung Electronics Co., Ltd. | Method and device for processing internal channels for low complexity format conversion |
US10021504B2 (en) | 2014-06-26 | 2018-07-10 | Samsung Electronics Co., Ltd. | Method and device for rendering acoustic signal, and computer-readable recording medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2497279B1 (en) * | 2009-11-04 | 2018-11-21 | Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for calculating driving coefficients for loudspeakers of a loudspeaker arrangement based on an audio signal associated with a virtual source |
- 2015-06-01: CN application CN201510294063.7A, published as CN106303897A (status: Pending)
- 2016-05-26: EP application EP16728508.9A, granted as EP3304936B1 (status: Active)
- 2016-05-26: EP application EP22203307.8A, published as EP4167601A1 (status: Pending)
- 2016-05-26: EP application EP19209955.4A, granted as EP3651481B1 (status: Active)
- 2016-05-26: WO application PCT/US2016/034459, published as WO2016196226A1 (status: Application Filing)
- 2016-05-26: US application US15/577,510, granted as US10111022B2 (status: Active)
- 2018-09-26: US application US16/143,351, granted as US10251010B2 (status: Active)
- 2019-03-28: US application US16/368,574, granted as US10602294B2 (status: Active)
- 2020-03-20: US application US16/825,776, granted as US11470437B2 (status: Active)
- 2022-10-10: US application US17/963,103, granted as US11877140B2 (status: Active)
- 2023-12-20: US application US18/391,426, published as US20240205629A1 (status: Pending)
Patent Citations (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4086433A (en) * | 1974-03-26 | 1978-04-25 | National Research Development Corporation | Sound reproduction system with non-square loudspeaker lay-out |
US5757927A (en) * | 1992-03-02 | 1998-05-26 | Trifield Productions Ltd. | Surround sound apparatus |
US8364497B2 (en) * | 2006-09-29 | 2013-01-29 | Electronics And Telecommunications Research Institute | Apparatus and method for coding and decoding multi-object audio signal with various channel |
US7672744B2 (en) | 2006-11-15 | 2010-03-02 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8204756B2 (en) | 2007-02-14 | 2012-06-19 | Lg Electronics Inc. | Methods and apparatuses for encoding and decoding object-based audio signals |
US8295494B2 (en) | 2007-08-13 | 2012-10-23 | Lg Electronics Inc. | Enhancing audio with remixing capability |
US8639368B2 (en) | 2008-07-15 | 2014-01-28 | Lg Electronics Inc. | Method and an apparatus for processing an audio signal |
US20110166867A1 (en) | 2008-07-16 | 2011-07-07 | Electronics And Telecommunications Research Institute | Multi-object audio encoding and decoding apparatus supporting post down-mix signal |
US8315396B2 (en) | 2008-07-17 | 2012-11-20 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating audio output signals using object based metadata |
US8670575B2 (en) | 2008-12-05 | 2014-03-11 | Lg Electronics Inc. | Method and an apparatus for processing an audio signal |
US8139773B2 (en) | 2009-01-28 | 2012-03-20 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8254600B2 (en) | 2009-01-28 | 2012-08-28 | Lg Electronics Inc. | Method and an apparatus for decoding an audio signal |
US8712784B2 (en) | 2009-06-10 | 2014-04-29 | Electronics And Telecommunications Research Institute | Encoding method and encoding device, decoding method and decoding device and transcoding method and transcoder for multi-object audio signals |
US20120177204A1 (en) | 2009-06-24 | 2012-07-12 | Oliver Hellmuth | Audio Signal Decoder, Method for Decoding an Audio Signal and Computer Program Using Cascaded Audio Object Processing Stages |
US20120269353A1 (en) * | 2009-09-29 | 2012-10-25 | Juergen Herre | Audio signal decoder, audio signal encoder, method for providing an upmix signal representation, method for providing a downmix signal representation, computer program and bitstream using a common inter-object-correlation parameter value |
US20120263308A1 (en) | 2009-10-16 | 2012-10-18 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus, method and computer program for providing one or more adjusted parameters for provision of an upmix signal representation on the basis of a downmix signal representation and a parametric side information associated with the downmix signal representation, using an average value |
US20130010969A1 (en) | 2010-03-19 | 2013-01-10 | Samsung Electronics Co., Ltd. | Method and apparatus for reproducing three-dimensional sound |
US20120170756A1 (en) | 2011-01-04 | 2012-07-05 | Srs Labs, Inc. | Immersive audio rendering system |
US20120314875A1 (en) | 2011-06-09 | 2012-12-13 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding 3-dimensional audio signal |
WO2013006330A2 (en) | 2011-07-01 | 2013-01-10 | Dolby Laboratories Licensing Corporation | System and tools for enhanced 3D audio authoring and rendering |
US20140297296A1 (en) | 2011-11-01 | 2014-10-02 | Koninklijke Philips N.V. | Audio object encoding and decoding |
US20140025386A1 (en) | 2012-07-20 | 2014-01-23 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for audio object clustering |
US9761229B2 (en) * | 2012-07-20 | 2017-09-12 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for audio object clustering |
US20150194158A1 (en) | 2012-07-31 | 2015-07-09 | Intellectual Discovery Co., Ltd. | Method and device for processing audio signal |
US20180077511A1 (en) | 2012-08-31 | 2018-03-15 | Dolby Laboratories Licensing Corporation | System for Rendering and Playback of Object Based Audio in Various Listening Environments |
US20150223002A1 (en) | 2012-08-31 | 2015-08-06 | Dolby Laboratories Licensing Corporation | System for Rendering and Playback of Object Based Audio in Various Listening Environments |
US20150350802A1 (en) | 2012-12-04 | 2015-12-03 | Samsung Electronics Co., Ltd. | Audio providing apparatus and audio providing method |
WO2014160678A2 (en) | 2013-03-26 | 2014-10-02 | Dolby Laboratories Licensing Corporation | Apparatuses and methods for audio classifying and processing |
US20160078879A1 (en) | 2013-03-26 | 2016-03-17 | Dolby Laboratories Licensing Corporation | Apparatuses and Methods for Audio Classifying and Processing |
US20160029140A1 (en) | 2013-04-03 | 2016-01-28 | Dolby International Ab | Methods and systems for generating and interactively rendering object based audio |
US20160299738A1 (en) | 2013-04-04 | 2016-10-13 | Nokia Corporation | Visual Audio Processing Apparatus |
US20160104491A1 (en) * | 2013-04-27 | 2016-04-14 | Intellectual Discovery Co., Ltd. | Audio signal processing method for sound image localization |
WO2014184353A1 (en) | 2013-05-16 | 2014-11-20 | Koninklijke Philips N.V. | An audio processing apparatus and method therefor |
US20160080886A1 (en) * | 2013-05-16 | 2016-03-17 | Koninklijke Philips N.V. | An audio processing apparatus and method therefor |
US20140355771A1 (en) | 2013-05-29 | 2014-12-04 | Qualcomm Incorporated | Compression of decomposed representations of a sound field |
US20160142844A1 (en) | 2013-06-28 | 2016-05-19 | Dolby Laboratories Licensing Corporation | Improved rendering of audio objects using discontinuous rendering-matrix updates |
US9883311B2 (en) | 2013-06-28 | 2018-01-30 | Dolby Laboratories Licensing Corporation | Rendering of audio objects using discontinuous rendering-matrix updates |
US20150016641A1 (en) | 2013-07-09 | 2015-01-15 | Nokia Corporation | Audio processing apparatus |
WO2015010999A1 (en) | 2013-07-22 | 2015-01-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for realizing a SAOC downmix of 3D audio content |
US20160134989A1 (en) | 2013-07-22 | 2016-05-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Method and signal processing unit for mapping a plurality of input channels of an input channel configuration to output channels of an output channel configuration |
WO2015010961A2 (en) | 2013-07-22 | 2015-01-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for mapping first and second input channels to at least one output channel |
WO2015011015A1 (en) | 2013-07-22 | 2015-01-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Multi-channel audio decoder, multi-channel audio encoder, methods, computer program and encoded audio representation using a decorrelation of rendered audio signals |
US20150170657A1 (en) * | 2013-11-27 | 2015-06-18 | Dts, Inc. | Multiplet-based matrix mixing for high-channel count multichannel audio |
US20160316309A1 (en) | 2014-01-07 | 2016-10-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating a plurality of audio channels |
US20160330560A1 (en) | 2014-01-10 | 2016-11-10 | Samsung Electronics Co., Ltd. | Method and apparatus for reproducing three-dimensional audio |
US20170011751A1 (en) | 2014-03-26 | 2017-01-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for screen related audio object remapping |
US10021504B2 (en) | 2014-06-26 | 2018-07-10 | Samsung Electronics Co., Ltd. | Method and device for rendering acoustic signal, and computer-readable recording medium |
US20170309288A1 (en) | 2014-10-02 | 2017-10-26 | Dolby International Ab | Decoding method and decoder for dialog enhancement |
US20180174594A1 (en) | 2015-06-17 | 2018-06-21 | Samsung Electronics Co., Ltd. | Method and device for processing internal channels for low complexity format conversion |
US20170048640A1 (en) | 2015-08-14 | 2017-02-16 | Dts, Inc. | Bass management for object-based audio |
US20180091926A1 (en) | 2016-09-23 | 2018-03-29 | Samsung Electronics Co., Ltd. | Electronic device and control method thereof |
Also Published As
Publication number | Publication date |
---|---|
US10111022B2 (en) | 2018-10-23 |
US10602294B2 (en) | 2020-03-24 |
US20180152803A1 (en) | 2018-05-31 |
EP4167601A1 (en) | 2023-04-19 |
US20190037333A1 (en) | 2019-01-31 |
EP3651481B1 (en) | 2022-10-26 |
US20240205629A1 (en) | 2024-06-20 |
EP3651481A1 (en) | 2020-05-13 |
US11470437B2 (en) | 2022-10-11 |
EP3304936A1 (en) | 2018-04-11 |
US20200288260A1 (en) | 2020-09-10 |
US20230105114A1 (en) | 2023-04-06 |
WO2016196226A1 (en) | 2016-12-08 |
EP3304936B1 (en) | 2019-11-20 |
US10251010B2 (en) | 2019-04-02 |
US20190222951A1 (en) | 2019-07-18 |
CN106303897A (en) | 2017-01-04 |
Similar Documents
Publication | Title |
---|---|
US11877140B2 (en) | Processing object-based audio signals |
EP3716654B1 (en) | Adaptive audio content generation |
US10362426B2 (en) | Upmixing of audio signals |
US10200804B2 (en) | Video content assisted audio object extraction |
US8532999B2 (en) | Apparatus and method for generating a multi-channel synthesizer control signal, multi-channel synthesizer, method of generating an output signal from an input signal and machine-readable storage medium |
KR101828138B1 (en) | Segment-wise adjustment of spatial audio signal to different playback loudspeaker setup |
US10362401B2 (en) | Orientation-aware surround sound playback |
CN105874533B (en) | Audio object extraction |
US20180103333A1 (en) | Audio object extraction with sub-band object probability estimation |
US20250106577A1 (en) | Upmixing systems and methods for extending stereo signals to multi-channel formats |
Legal Events
Code | Title | Description |
---|---|---|
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
AS | Assignment | Owner name: DOLBY LABORATORIES LICENSING CORPORATION, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SEEFELDT, ALAN J.;LU, LIE;ZHANG, CHEN;SIGNING DATES FROM 20150701 TO 20150806;REEL/FRAME:064909/0105 |
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |