EP2346028A1 - An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal


Info

Publication number
EP2346028A1
Authority
EP
European Patent Office
Prior art keywords
parameter
directional
spatial audio
diffuseness
audio signal
Legal status
Withdrawn
Application number
EP10156263A
Other languages
German (de)
French (fr)
Inventor
Richard Schultz-Amling
Fabian KÜCH
Markus Kallinger
Giovanni Del Galdo
Oliver Thiergart
Dirk Mahne
Achim Kuntz
Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Original Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority to AU2010332934A (AU2010332934B2)
Priority to CN201080063799.9A (CN102859584B)
Priority to RU2012132354/08A (RU2586842C2)
Priority to MX2012006979A (MX2012006979A)
Priority to ES10796353.0T (ES2592217T3)
Priority to CA2784862A (CA2784862C)
Priority to EP10796353.0A (EP2502228B1)
Priority to BR112012015018-9A (BR112012015018B1)
Priority to PCT/EP2010/069669 (WO2011073210A1)
Priority to JP2012543696A (JP5426035B2)
Priority to KR1020127017311A (KR101431934B1)
Priority to TW099143975A (TWI523545B)
Priority to ARP100104731A (AR079517A1)
Publication of EP2346028A1
Priority to US13/523,085 (US9196257B2)
Priority to HK13103678.8A (HK1176733A1)

Classifications

    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G10L19/173 Transcoding, i.e. converting between two coded representations avoiding cascaded coding-decoding
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the type of extracted parameters
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use
    • H04N23/60 Control of cameras or camera modules
    • H04N5/60 Receiver circuitry for the reception of television signals according to analogue transmission standards, for the sound signals
    • H04N5/782 Television signal recording using magnetic recording on tape
    • H04S1/007 Two-channel systems in which the audio signals are in digital form
    • H04S3/008 Systems employing more than two channels, e.g. quadraphonic, in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H04S2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H04S2420/03 Application of parametric coding in stereophonic audio systems

Definitions

  • the present invention relates to the field of audio processing, especially to parametric spatial audio processing and to converting a first parametric spatial audio signal into a second parametric spatial audio signal.
  • Spatial sound recording aims at capturing a sound field with multiple microphones such that, at the reproduction side, a listener perceives the sound image as it was present at the recording location.
  • Standard approaches for spatial sound recording use simple stereo microphones or more sophisticated combinations of directional microphones, e.g. the B-format microphones used in Ambisonics and described by M.A. Gerzon, "Periphony: With-Height Sound Reproduction," J. Audio Eng. Soc., vol. 21, no. 1, pp. 2-10, 1973, in the following referred to as [Ambisonics]. Commonly, these methods are referred to as coincident-microphone techniques.
  • Alternatively, methods based on a parametric representation of sound fields can be applied, which are referred to as parametric spatial audio coders. These methods determine one or more downmix audio signals together with corresponding spatial side information relevant for the perception of spatial sound. An example is Directional Audio Coding (DirAC), as discussed in Pulkki, V., "Directional audio coding in spatial sound reproduction and stereo upmixing," in Proceedings of the AES 28th International Conference.
  • the spatial cue information basically consists of the direction-of-arrival (DOA) of sound and the diffuseness of the sound field in frequency subbands.
  • the desired loudspeaker signals for reproduction are determined based on the downmix signal and the parametric side information.
  • the downmix signals and the corresponding spatial side information represent the audio scene according to the set-up, e.g. the orientation and/or position of the microphones, in relation to the different audio sources used at the time the audio scene was recorded.
  • the recording position i.e. the position of the microphones, can also be referred to as the reference listening position.
  • a modification of the recorded audio scene is not envisaged in these known spatial sound-capturing methods.
  • modification of the visual image is commonly applied, for example, in the context of video capturing.
  • an optical zoom is used in video cameras to change the virtual position of the camera, giving the impression that the image was taken from a different point of view. This can be described by a translation of the camera position.
  • Another simple picture modification is the horizontal or vertical rotation of the camera around its own axis. The horizontal and vertical rotations are also referred to as panning and tilting, respectively.
  • Embodiments of the present invention provide an apparatus and a method, which also allow virtually changing the listening position and/or orientation according to the visual movement.
  • the invention allows altering the acoustic image a listener perceives during reproduction such that it corresponds to the recording obtained using a microphone configuration placed at a virtual position and/or orientation other than the actual physical position of the microphones.
  • the recorded acoustic image can be aligned with the corresponding modified video image.
  • the effect of a video zoom to a certain area of an image can be applied to the recorded spatial audio image in a consistent way. According to the invention, this is achieved by appropriately modifying the spatial cue parameters and/or the downmix signal in the parametric domain of the spatial audio coder.
  • Embodiments of the present invention allow flexibly changing the position and/or orientation of a listener within a given spatial audio scene without having to record the scene with a different microphone setting, for example a different position and/or orientation of the recording microphone set-up with regard to the audio signal sources.
  • embodiments of the present invention allow defining a virtual listening position and/or virtual listening orientation that is different to the recording position or listening position at the time the spatial audio scene was recorded.
  • Certain embodiments of the present invention only use one or several downmix signals and/or the spatial side information, for example, the direction-of-arrival and the diffuseness to adapt the downmix signals and/or spatial side information to reflect the changed listening position and/or orientation. In other words, these embodiments do not require any further set-up information, for example, geometric information of the different audio sources with regard to the original recording position.
  • Embodiments of the present invention further receive parametric spatial audio signals according to a certain spatial audio format, for example mono or stereo downmix signals with direction-of-arrival and diffuseness as spatial side information, convert this data according to control signals, for example zoom or rotation control signals, and output the modified or converted data in the same spatial audio format, i.e. as a mono or stereo downmix signal with the associated direction-of-arrival and diffuseness parameters.
  • embodiments of the present invention can be coupled to a video camera or another video source and modify the received or original spatial audio data into the modified spatial audio data according to the zoom or rotation control signals provided by the video camera. This synchronizes, for example, the audio experience with the video experience: an acoustical zoom is performed in case a video zoom is performed, and/or an audio rotation within the audio scene is performed in case the video camera is rotated while the microphones do not physically rotate with the camera because they are not mounted on it.
  • a typical spatial audio coder is described.
  • the task of a typical parametric spatial audio coder is to reproduce the spatial impression that was present at the point where it was recorded. Therefore, a spatial audio coder consists of an analysis part 100 and a synthesis part 200, as shown in Fig. 1 .
  • N microphones 102 are arranged to obtain N microphone input signals that are processed by the spatial audio analysis unit 100 to produce L downmix signals 112, with L ≤ N, together with spatial side information 114.
  • In the decoder, i.e. the spatial audio synthesis unit 200, the downmix signal 112 and the spatial side information 114 are used to compute M loudspeaker channels for M loudspeakers 202, which reproduce the recorded sound field with the original spatial impression.
  • In Fig. 1, the thick lines symbolize audio data, whereas the thin lines 114 between the spatial audio analysis unit 100 and the spatial audio synthesis unit 200 represent the spatial side information.
  • the microphone signals are processed in a suitable time/frequency representation, e.g., by applying a short-time Fourier Transform (STFT) or any other filterbank.
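  • As a small illustration of such a time/frequency processing, the following minimal sketch (not taken from the patent; window length and overlap are assumptions) computes an STFT of a microphone signal with SciPy:

```python
# A minimal sketch (not from the patent) of obtaining a time/frequency
# representation of a microphone signal with an STFT; window length and
# overlap are illustrative assumptions.
import numpy as np
from scipy.signal import stft

fs = 48000                         # sampling rate (assumed)
w_t = np.random.randn(2 * fs)      # placeholder for the microphone signal w(t)

# In the document's notation, k indexes time blocks and n frequency bins:
# W[n, k] is the complex spectral value of bin n at block k.
freqs, blocks, W = stft(w_t, fs=fs, nperseg=1024, noverlap=512)
```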
  • the spatial side information determined in the analysis stage contains a measure corresponding to the direction-of-arrival (DOA) of sound and a measure of the diffuseness of the sound field, which describes the relation between direct and diffuse sound of the analyzed sound field.
  • the relevant acoustic information is derived from a so-called B-format microphone input, corresponding to the sound pressure and the particle velocity obtained by a microphone configuration providing dipole pick-up patterns aligned with the axes of a Cartesian coordinate system.
  • the B-format consists of four signals, namely w(t), x(t), y(t) and z(t). The first corresponds to the pressure measured by an omnidirectional microphone, whereas the latter three are signals of microphones having figure-of-eight pick-up patterns directed towards the three axes of a Cartesian coordinate system.
  • the signals x(t), y(t) and z(t) are proportional to the components of particle velocity vectors directed towards x, y and z, respectively.
  • the approach presented in SAM uses a priori knowledge of the directivity pattern of stereo microphones to determine the DOA of sound.
  • the diffuseness measure can be obtained by relating the active sound intensity to the overall energy of the sound field as proposed in DirAC.
  • the method as described in SAM proposes to evaluate the coherence between different microphone signals.
  • diffuseness could also be considered a general reliability measure for the estimated DOA. Without loss of generality, in the following it is assumed that the diffuseness lies in the range [0, 1], where a value of 1 indicates a purely diffuse sound field and a value of 0 corresponds to the case where only direct sound is present. In other embodiments, other ranges and values for the diffuseness can be used.
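  • To make the analysis stage concrete, the following is a hedged sketch of a DirAC-style estimation of DOA and diffuseness from B-format signals, following the active-intensity idea described above; the normalization and the temporal averaging usually applied to intensity and energy are simplified assumptions:

```python
# Hedged sketch of a DirAC-style analysis step per time/frequency bin:
# the DOA is taken opposite to the active sound intensity vector, and the
# diffuseness relates the intensity magnitude to the overall energy.
# Normalization constants and temporal averaging are omitted for brevity.
import numpy as np

def dirac_analysis_bin(W, X, Y, eps=1e-12):
    """W, X, Y: complex STFT values of the B-format signals w, x, y
    (two-dimensional case). Returns (phi, psi): DOA angle and diffuseness."""
    u = np.array([X, Y])                       # particle-velocity proxy
    intensity = np.real(np.conj(W) * u)        # active sound intensity
    energy = 0.5 * (np.abs(W) ** 2 + np.sum(np.abs(u) ** 2))
    phi = np.arctan2(-intensity[1], -intensity[0])  # sound arrives against the flow
    psi = 1.0 - np.linalg.norm(intensity) / (energy + eps)
    return phi, float(np.clip(psi, 0.0, 1.0))
```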
  • the downmix signal 112, which is accompanied by the side information 114, is computed from the microphone input signals. It can be mono or include multiple audio channels. In the case of DirAC, commonly only a mono signal, corresponding to the sound pressure as obtained by an omnidirectional microphone, is considered. For the SAM approach, a two-channel stereo signal is used as the downmix signal.
  • the input of the synthesis 200 is the downmix signal 112 and the spatial parameters 114 in their time-frequency representation.
  • M loudspeaker channels are calculated such that the spatial audio image or spatial audio impression is reproduced correctly.
  • the scaling factor gi(k,n) depends on the DOA of the direct sound included in the side information and on the loudspeaker configuration used for playback.
  • Fig. 2 shows a block diagram of an embodiment of the present invention integrated in the exemplary environment of Fig. 1 , i.e. integrated between a spatial analysis unit 100 and a spatial audio synthesis unit 200.
  • the original audio scene is recorded with a specific recording set-up of microphones specifying the location and orientation (in case of directional microphones) relative to the different audio sound sources.
  • the N microphones provide N physical microphone signals or channel signals, which are processed by the spatial audio analysis unit 100 to generate one or several downmix signals W 112 and the spatial side information 114, for example the direction-of-arrival (DOA) φ 114a and the diffuseness ψ 114b.
  • these spatial audio signals 112, 114a, 114b are not provided directly to the spatial audio synthesis unit 200, but are modified by an apparatus for converting or modifying a first parametric spatial audio signal 112, 114a, 114b representing a first listening position and/or a first listening orientation (in this example, the recording position and recording orientation) in a spatial audio scene to a second parametric spatial audio signal 212, 214a, 214b, i.e. a modified downmix signal Wmod 212, a modified direction-of-arrival signal φmod 214a and/or a modified diffuseness signal ψmod 214b, representing a second listening position and/or second listening orientation that is different to the first listening position and/or first listening orientation.
  • the modified direction-of-arrival 214a and the modified diffuseness 214b are also referred to as modified spatial audio information 214.
  • the apparatus 300 is also referred to as a spatial audio signal modification unit or spatial audio signal modification block 300.
  • the apparatus 300 in Fig. 3A is adapted to modify the first parametric spatial audio signal 112, 114 depending on a control signal d 402 provided by an (e.g. external) control unit 400.
  • the control signal 402 can, e.g. be a zoom control signal defining or being a zoom factor d or a zoom parameter d, or a rotation control signal 402 provided by a zoom control and/or a rotational control unit 400 of a video camera.
  • a zoom in a certain direction and a translation in the same direction are just two different ways of describing a virtual movement in that certain direction (the zoom by a zoom factor, the translation by an absolute distance or by a relative distance relative to a reference distance). Therefore, explanations herein with regard to a zoom control signal apply correspondingly to translation control signals and vice versa, and the zoom control signal 402 also refers to a translation control signal.
  • the term d can on one hand represent the control signal 402 itself, and on the other hand the control information or parameter contained in the control signal.
  • in embodiments, the control parameter d already represents the control signal 402.
  • the control parameter or control information d can be a distance, a zoom factor and/or a rotation angle and/or a rotation direction.
  • the apparatus 300 is adapted to provide parametric spatial audio signals 212, 214 (downmix signals and the associated side information/parameters) in the same format as the parametric spatial audio signals 112, 114 it received. Therefore, the spatial audio synthesis unit 200 is capable (without modifications) of processing the modified spatial audio signal 212, 214 in the same manner as the original or recorded spatial audio signal 112, 114 and of converting it to M physical loudspeaker signals 204 to generate the sound experience of the modified spatial audio scene or, in other words, of the modified listening position and/or modified listening orientation within the otherwise unchanged spatial audio scene.
  • a block schematic diagram of an embodiment of the novel apparatus or method is illustrated in Fig. 2 .
  • the output 112, 114 of the spatial audio coder 100 is modified based on the external control information 402 in order to obtain a spatial audio representation 212, 214 corresponding to a listening position different to the one used for the sound capturing at the original location. More precisely, both the downmix signals 112 and the spatial side information 114 are changed appropriately.
  • the modification strategy is determined by an external control 400, which can be acquired directly from a camera 400 or from any other user interface 400 that provides information about the actual position of the camera or zoom.
  • the task of the algorithm is to change the spatial impression of the sound scene in the same way as the optical zoom or camera rotation changes the point-of-view of the spectator.
  • the modification unit 300 is adapted to provide a corresponding acoustical zoom or audio rotation experience corresponding to the video zoom or video rotation.
  • Fig. 3A shows a block diagram or system overview of an embodiment of the apparatus 300 that is referred to as "acoustical zoom unit".
  • the embodiment of the apparatus 300 in Fig. 3A comprises a parameter modification unit 301 and a downmix modification unit 302.
  • the parameter modification unit 301 further comprises a direction-of-arrival modification unit 301a and a diffuseness modification unit 301b.
  • the parameter modification unit 301 is adapted to receive the direction-of-arrival parameter 114a and to modify the first or received direction-of-arrival parameter 114a depending on the control signal d 402 to obtain the modified or second direction-of-arrival parameter 214a.
  • the parameter modification unit 301 is further adapted to receive the first or original diffuseness parameter 114b and to modify the diffuseness parameter 114b by the diffuseness modification unit 301b to obtain the second or modified diffuseness parameter 214b depending on the control signal 402.
  • the downmix modification unit 302 is adapted to receive the one or more downmix signals 112 and to modify the first or original downmix signal 112 to obtain the second or modified downmix signal 212 depending on the first or original direction-of-arrival parameter 114a, the first or original diffuseness parameter 114b and/or the control signal 402.
  • embodiments of the invention provide a possibility to synchronize the change of the audio scene or audio perception according to the camera controls 402.
  • the directions can be shifted without modifying the downmix signals 112 if the camera 400 is only rotated horizontally without the zooming, i.e. applying only a rotation control signal and no zooming control signal 402. This is described by the "rotation controller" in Figs. 2 and 3 .
  • the rotation modification is described in more detail in the section about directional remapping or remapping of directions.
  • the sections about diffuseness and downmix modification are related to the translation or zooming application.
  • Embodiments of the invention can be adapted to perform both, a rotation modification and a translation or zoom modification, e.g. by first performing the rotation modification and afterwards the translation or zoom modification or vice versa, or both at the same time by providing corresponding directional mapping functions.
  • the listening position is virtually changed, which is done by appropriately remapping the analyzed directions.
  • the downmix signal is processed by a filter, which depends on the remapped directions. This filter changes the gains, as, e.g., sounds that are now closer are increased in level, while sounds from regions out-of-interest may be attenuated.
  • the diffuseness is scaled with the same assumptions, as, e.g., sounds that appear closer to the new listening position have to be reproduced less diffuse than before.
  • First, the remapping of the directions is described (block 301a, fp(k,n,φ,d)); then the filter for the diffuseness modification (block 301b, fd(k,n,φ,d)) is illustrated.
  • Block 302 describes the downmix modification, which is dependent on the zoom control and the original spatial parameters.
  • the direction-of-arrival parameter can be represented, for example, by a unit vector e, defined by the azimuth angle φ and, in the three-dimensional case, the elevation angle θ. This vector is altered according to the new virtual position of the microphone, as described next.
  • DOA remapping is given for the two-dimensional case for presentation simplicity ( Fig. 4 ).
  • a corresponding remapping of the three-dimensional DOA can be done with similar considerations.
  • Fig. 4 shows an exemplary geometric overview of the acoustical zoom.
  • the position S marks the original microphone recording position, i.e., the original listening position.
  • A and B mark spatial positions within the observed two-dimensional plane.
  • the listening position is moved from S to S2, e.g. in direction of the first listening orientation.
  • the sound emerging from spatial position A stays in the same angular position relative to the recording location, whereas sounds from the area or spatial position B are moved to the side. This is denoted by a change of the analyzed angle α to β.
  • β thus denotes the direction-of-arrival of sound coming from the angular position of B if the listener had been placed at S2.
  • the azimuth angle is increased from α to β, as shown in Fig. 4.
  • This function is a nonlinear transformation, dependent on the zoom factor d and the original estimated DOAs.
  • Fig. 5A shows examples of the mapping f() for different values of the zoom parameter d, as can be applied in the two-dimensional example shown in Fig. 4.
  • For a zoom control factor of d = 1, i.e. when no zoom is applied, the mapped angles are equal to the original DOAs.
  • For increasing zoom control factors, the value of β increases, too.
  • the function can be derived from geometric considerations or, alternatively, be chosen heuristically.
  • remapping of the directions means that each DOA is modified according to the function f().
  • the mapping fp(k,n,φ,d) is performed for every time and frequency bin (k,n).
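  • As an illustration, the following minimal sketch derives such a remapping function from the geometry of Fig. 4, assuming all sources lie at unit distance from the original listening position S (the unit-radius assumption also mentioned below) and treating d as a translational distance with d < 1:

```python
# A minimal sketch of the DOA remapping f_p, derived from the geometry of
# Fig. 4 under an assumed unit radius (all sources at unit distance from the
# original listening position S); d is treated as a translational distance
# towards the 0-degree listening orientation, with d < 1.
import numpy as np

def f_p(alpha, d):
    """Map an original DOA alpha (radians, 0 = listening orientation) to the
    DOA beta observed from the listening position moved forward by d."""
    x = np.cos(alpha) - d        # source coordinates as seen from S2
    y = np.sin(alpha)
    return np.arctan2(y, x)

beta = f_p(np.deg2rad(30.0), 0.5)   # ~54 degrees: side sources move outwards
```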
  • While the zoom parameter d is depicted as a translational distance d between the original listening position S and the modified listening position S2 (Fig. 4), d can, as mentioned before, also be a factor, e.g. an optical zoom factor like 4x or 8x. Especially for the width or filter control, seeing d as a factor, not as a distance, allows for an easy implementation of the acoustical zoom. In other words, only in the depiction of Fig. 4 is the zoom parameter d a real distance, or at least proportional to a distance.
  • embodiments of the invention can also be adapted to support, besides the "zoom-in" as described above, e.g. reducing the distance to an object (e.g. to object A in Fig. 4 by moving from position S to position S2), also a "zoom-out", e.g. increasing the distance to an object (e.g. to object A in Fig. 4 by moving from position S2 to position S).
  • the inverse considerations apply compared to the zoom-in as described because objects positioned on a side of the listener (e.g. object B with regard to position S2) move to the front of the listener when he moves to position S.
  • Fig. 5A shows an exemplary mapping function (dependent on the zoom factor d) for the directions-of-arrival in the scenario shown in Fig. 4.
  • Embodiments of the invention can be adapted to use the same mapping function f p for all time and frequency bin values defined by k and n, or, may use different mapping functions for different time values and/or frequency bins.
  • the idea behind the filter fd is to change the diffuseness ψ such that it lowers the diffuseness for zoomed-in directions (|φ| ≤ γ) and increases the diffuseness for out-of-focus directions (|φ| > γ).
  • certain embodiments of the modification unit 301a are adapted to only use the direction and to assume that all sources, e.g. A and B, defining the direction-of-arrival of the sound have the same distance to the first listening position, e.g. are arranged on a unit radius.
  • the mapping function f() can be designed such that the maximum angle to which DOAs are remapped is limited. For example, a maximum angle of ±60° is chosen when the loudspeakers are positioned at ±60°. This way, the whole sound scene stays in the front and is only widened when the zoom is applied.
  • the rotational change or difference is derived starting from the first listening orientation or first viewing orientation (e.g. the direction of the "nose" of the listener or viewer), which defines a first reference or 0° orientation. When the listener or camera is rotated, this reference or 0° orientation changes accordingly. Therefore, embodiments of the present invention change the original angles or directions-of-arrival of the sound, i.e. the first directional parameter, according to the new reference or 0° orientation, such that the second directional parameter represents the same direction-of-arrival in the audio scene, however relative to the new reference orientation or coordinate system. Similar considerations apply to the translation or zoom, where the perceived directions-of-arrival change due to the translation or zoom in the direction of the first listening orientation (see Fig. 4).
  • the first directional parameter 114a and the second directional parameter 214a can be two-dimensional or three-dimensional vectors.
  • for example, the first directional parameter 114a can be a vector, and the control signal 402 can be a rotation control signal defining a rotation angle (e.g. 20° in the aforementioned example) and a rotation direction (to the right in the aforementioned two-dimensional example).
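  • A minimal sketch of this rotation case follows; the sign convention for the rotation direction is an assumption:

```python
# A minimal sketch of the rotation case: the analyzed DOAs are shifted by
# the rotation angle in the reverse direction, while downmix and diffuseness
# can stay untouched (see text). The sign convention is an assumption.
import numpy as np

def remap_doa_rotation(phi, rotation):
    """phi: original DOA (radians); rotation: listener/camera rotation
    (radians, positive = to the right here). Returns the DOA relative to the
    new 0-degree reference orientation, wrapped to (-pi, pi]."""
    phi_mod = phi - rotation
    return np.arctan2(np.sin(phi_mod), np.cos(phi_mod))
```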
  • the diffuseness scaling as, for example, performed by the diffuseness modification unit 301b is described in more detail.
  • the diffuseness is scaled with a DOA-dependent window.
  • the range of the visual angle covered by the camera image can be taken as a controller for the scaling by which the diffuseness value is increased or decreased.
  • zoomed-in-directions or directions-of-interest refer to an angular window of interest, also referred to as central range of angles, that is arranged around the first or original listening direction, e.g. the original 0° reference direction.
  • the angular window or central range is determined by the angular values γ defining the borders of the angular window.
  • the angular window and the width of the angular window can be defined by the negative border angle -γ and the positive border angle +γ, wherein the magnitude of the negative border angle may differ from that of the positive border angle.
  • the negative border angle and the positive border angle have the same magnitude (symmetric window or central range of angles centered around the first listening orientation).
  • the magnitude of the border angle is also referred to as angular width and the width of the window (from the negative border angle to the positive border angle) is also referred to as total angular width.
  • direction-of-arrival parameters, diffuseness parameters, and/or direct or diffuse components can be modified differently depending on whether the original direction-of-arrival parameter is inside the window of interest, e.g. whether the magnitude of the DOA angle relative to the first listening orientation is smaller than the border angle or angular width γ, or whether it is outside the window of interest, e.g. whether the magnitude of the DOA angle is larger than γ.
  • These modifications are referred to as direction-dependent, and the corresponding filter functions as direction-dependent filter functions, wherein the angular width or border angle γ defines the angle at which the corresponding filter changes from increasing the parameter to decreasing it, or vice versa.
  • The diffuseness modification unit 301b is adapted to modify the diffuseness ψ by the function fd(k,n,φ,d), which depends on the time/frequency indices k, n, the original direction-of-arrival φ, and the zoom controller d.
  • Fig. 5B shows an embodiment of a filter function f d .
  • the filter fd may be implemented as an inversion of the filter function H1, which will be explained later, adapted, however, to match the diffuseness range, for example the range [0, 1].
  • Fig. 5B shows the mapping function or filter fd, wherein the x-axis represents the original or first diffuseness ψ, in Fig. 5B also referred to as ψin, with the range from 0 to 1, and the y-axis represents the second or modified diffuseness ψmod, also in the range from 0 to 1.
  • Reference sign 552 depicts the bypass line.
  • Fig. 5B shows some prototype functions of fd, namely 562, 564, 572 and 574, depending on the look width or angular width γ.
  • The angular width is smaller for γ2 than for γ1, i.e. γ2 < γ1; γ2 corresponds to a higher zoom factor d than γ1.
  • The area below the bypass line 552 defines the modified diffuseness values ψmod in case the original direction-of-arrival φ is within the angular width γ, which is reflected by a reduction of the modified diffuseness value ψmod compared to the original diffuseness value ψin after the mapping by the filter fd.
  • The area above the bypass line 552 represents the mapping of the original diffuseness ψ to the modified diffuseness values ψmod in case the original direction-of-arrival φ is outside the window. In other words, the area above the bypass line 552 shows the increase of the diffuseness after the mapping.
  • In embodiments, the angular width γ decreases with an increasing zoom factor d.
  • Embodiments can be adapted such that the zoom factor d or translation information not only influences the angular width γ of the filter function fd, but also the degree or factor by which the diffuseness is decreased in case it is inside the window and the degree or factor by which the diffuseness ψ is increased in case it is outside the window defined by the angular width γ.
  • In Fig. 5B, the angular width γ1 corresponds to a zoom factor d1 and the angular width γ2 corresponds to a zoom factor d2, wherein d2 is larger than d1 and, thus, the angular width γ2 is smaller than the angular width γ1.
  • The function fd represented by reference sign 564, corresponding to the larger zoom factor d2, maps the original diffuseness values ψin to lower modified diffuseness values ψmod than the filter function fd represented by 562, corresponding to the lower zoom factor d1.
  • In other words, embodiments of the filter function can be adapted to reduce the original diffuseness the more, the smaller the angular width γ is.
  • Embodiments of the filter function fd can be adapted to map the original diffuseness ψin to the modified diffuseness ψmod dependent on the zoom factor d and the angular width γ: the higher the zoom factor d, the smaller the angular width γ and/or the higher the increase of the diffuseness for directions-of-arrival φ outside the window.
  • In other embodiments, the same direction-dependent window or filter function fd(k,n,φ,d) is applied for all zoom factors.
  • the use of different direction dependent window or filter functions with smaller angular widths for higher translation or zoom factors matches the audio experience of the user better and provides a more realistic audio perception.
  • The application of different mapping values for different zoom factors (a higher reduction of the diffuseness with increasing zoom factor for direction-of-arrival values φ inside the window, and higher diffuseness values for higher zoom factors in case the direction-of-arrival value φ is outside the angular width γ) improves the realistic audio perception even further.
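  • The following hedged sketch summarizes such a direction-dependent diffuseness filter fd in the spirit of Fig. 5B; the concrete γ(d) law and the use of d itself as the scaling strength are assumptions:

```python
# Hedged sketch of the direction-dependent diffuseness filter f_d in the
# spirit of Fig. 5B: decrease psi inside the window |phi| <= gamma, increase
# it outside, with both the window width and the scaling strength driven by
# the zoom factor d. The gamma(d) law and scaling factors are assumptions.
import numpy as np

def f_d(psi, phi, d, gamma_max=np.deg2rad(60.0)):
    """psi: original diffuseness in [0, 1]; phi: original DOA (radians);
    d: zoom factor (>= 1). Returns the modified diffuseness psi_mod."""
    gamma = gamma_max / max(d, 1.0)    # window of interest shrinks with zoom
    if abs(phi) <= gamma:
        psi_mod = psi / d              # zoomed-in directions: less diffuse
    else:
        psi_mod = psi * d              # out-of-focus directions: more diffuse
    return float(np.clip(psi_mod, 0.0, 1.0))
```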
  • the filters for the downmix signal are used to modify the gain of the direct and diffuse part of the output signal.
  • the loudspeaker signals are thus modified.
  • the sound of the zoomed-in area is amplified, while sound from out-of-interest directions can be attenuated.
  • the downmix signal 112 may be a mono or a stereo signal, for directional audio coding (DirAC) or spatial audio microphones (SAM), respectively; in the following, two different embodiments of the modification are described.
  • First, a mono downmix modification, i.e. an embodiment for a modification of a mono downmix audio signal W 112, is described.
  • the mono downmix signal can be modeled as W(k,n) = S(k,n) + N(k,n), wherein
  • S(k,n) denotes the direct sound component of the downmix signal
  • N(k,n) denotes the diffuse sound components in the original downmix signal
  • k denotes the time index or time instant the signal represents and n represents a frequency bin or frequency channel of the signal at the given time instant k .
  • H1(k,n,φ,d) and H2(k,n,φ,d) represent filters applied to the direct and the diffuse components of the signal model, wherein
  • φ represents the original direction-of-arrival and d the zoom factor or zoom parameter.
  • Both filters are directional dependent weighting functions.
  • a cardioid shaped pickup pattern of a microphone can be taken as a design criterion for such weighting functions.
  • the filter H1(k,n,φ,d) can be implemented as a raised-cosine window such that the direct sound is amplified for directions of the zoomed-in area, whereas the level of sound coming from other directions is attenuated.
  • different window shapes can be applied to the direct and the diffuse sound components, respectively.
  • the gain filter implemented by the windows may be controlled by the actual translation or zoom control factor d.
  • the zoom controls the width of equal gain for the focused directions and the width of gain in general. Examples for different gain windows are given in Fig. 6 .
  • Fig. 6 shows different gain windows for the weighting filter H1(k,n,φ,d). Four different gain prototypes are shown:
  • the first listening orientation represented by 0° in Fig. 6 , forms the center of different zoom factor dependent direction dependent windows, wherein the predetermined central range or width of the direction dependent windows is the smaller the greater the zoom factor.
  • the borders of the central range or window are defined by the angle γ at which the gain is 0 dB.
  • Fig. 6 shows symmetric windows with positive and negative borders having the same magnitude.
  • Window 616 has a width of 140° for the maximum gain and a predetermined central region with a width of 180°, with borders or angular widths ±γ3 at ±90°, wherein direct components inside the predetermined central region are increased and direct components outside of it are reduced (negative gain down to -2.5 dB).
  • Window 618 has a width of 30° for the maximum gain and a predetermined central region with a width of 60°, with borders or angular widths ±γ4 at ±30°, wherein direct components inside the predetermined central region are increased and direct components outside of it are reduced (negative gain down to -6 dB).
  • the zoom factor d controls the width, i.e. the negative and positive borders and the total width, and the gain of the prototype windows.
  • In embodiments, the window can already be designed such that the width and the gain are correctly applied to the original directions-of-arrival φ.
  • the maximal gain should be limited, in order to avoid distortions in the output signals.
  • The width of the window, or the exact shape shown here, should be considered an illustrative example of how the zoom factor controls various aspects of a gain window; other implementations may be used in different embodiments.
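  • As one possible reading of Fig. 6, the following hedged sketch implements a raised-cosine gain window for H1 with a flat maximum-gain region, a raised-cosine roll-off and a limited attenuation floor; all concrete widths and gain values are illustrative assumptions and do not reproduce the exact prototypes 616 and 618:

```python
# Hedged sketch of a raised-cosine gain window for H_1 in the spirit of
# Fig. 6. All concrete widths and gain values are illustrative assumptions
# controlled by the zoom factor d.
import numpy as np

def h1_window(phi, d, gain_max_db=3.0, gain_min_db=-6.0,
              gamma_max=np.deg2rad(90.0)):
    """Linear gain for the direct component at DOA phi (radians)."""
    gamma = gamma_max / max(d, 1.0)        # window shrinks with higher zoom
    flat = 0.5 * gamma                     # flat maximum-gain region
    a = abs(phi)
    if a <= flat:
        g_db = gain_max_db                 # focused directions: full gain
    elif a <= flat + gamma:                # raised-cosine transition
        frac = (a - flat) / gamma          # 0 at inner edge, 1 at outer edge
        g_db = gain_min_db + (gain_max_db - gain_min_db) * (
            0.5 * (1.0 + np.cos(np.pi * frac)))
    else:
        g_db = gain_min_db                 # limited attenuation floor
    return 10.0 ** (g_db / 20.0)
```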
  • the filter H2(k,n,φ,d) is used to modify the diffuse part 112b of the downmix signal analogously to the way the diffuseness measure ψ(k,n) has been modified, and can be implemented as a subcardioid window as shown in Fig. 7.
  • Fig. 7 shows a subcardioid window 702 which leaves the diffuse component almost unaltered in an area between -30° and +30° of the original direction-of-arrival φ and attenuates the diffuse component the more, the more the original direction-of-arrival φ deviates from the 0° orientation.
  • the diffuse signal components in the downmix signal remain unaltered. This will result in a more direct sound reproduction in zoom direction.
  • the sounds that come from all other directions are rendered more diffuse, as the microphone has been virtually placed farther away.
  • those diffuse parts will be attenuated compared to those of the original downmix signal.
  • the desired gain filter can also be designed using the previously described raised cosine windows. Note, however, that the scaling will be less pronounced than in case of the direct sound modification.
  • In embodiments, the windows can depend on the zoom factor, wherein the slope of the window function 702 becomes the steeper, the higher the zoom factor is.
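  • A corresponding hedged sketch of a subcardioid-shaped window for H2 as in Fig. 7 follows; the parameterization a + (1 - a)·cos(φ) and the dependence of a on the zoom factor are assumptions:

```python
# Hedged sketch of a subcardioid-shaped window for H_2 as in Fig. 7:
# close to unity around 0 degrees, mildly attenuating towards larger
# deviations, with a slope that steepens for higher zoom factors. The
# parameterization a + (1 - a) * cos(phi) and the a(d) law are assumptions.
import numpy as np

def h2_window(phi, d, base=0.85):
    """Linear gain for the diffuse component at DOA phi (radians)."""
    a = max(0.5, base / np.sqrt(max(d, 1.0)))   # smaller a -> steeper window
    return a + (1.0 - a) * np.cos(phi)          # subcardioid pick-up shape
```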
  • Next, a stereo downmix modification, i.e. a modification of a stereo downmix signal W, is described.
  • the signal S(k,n) represents direct sound
  • N i denotes the diffuse sound for the i -th microphone.
  • the direct and diffuse sound components can be determined from the downmix channels based on the diffuseness measure.
  • the gain factor c corresponds to a different scaling of the direct sound component in the different stereo channels, which arises from the different directivity patterns associated with the two downmix channels. More details on the relation between the scaling factor and the DOA of direct sound can be found in SAM. Since this scaling depends on the DOA of sound of the observed sound field, its value has to be modified in accordance with the DOA remapping resulting from the modified virtual recording location.
  • the computation of the gain filters Gij(k,n,φ,d) is performed in accordance with the corresponding gain filters Hi(k,n,φ,d) as discussed for the mono downmix case.
  • the new stereo scaling factor cmod is determined as a function of the modified DOA such that it corresponds to the new virtual recording location.
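  • The following heavily hedged sketch illustrates the stereo case; the SAM-specific directivity patterns are not reproduced here, so a generic first-order pattern per channel and assumed channel look directions stand in for them:

```python
# Heavily hedged sketch of the stereo case. A generic first-order directivity
# pattern per channel stands in for the SAM-specific patterns, and the
# channel look directions are assumptions. The direct sound of each channel
# is rescaled from c(phi) to c(phi_mod); gain filters analogous to H_1 and
# H_2 (see above) can be applied on top.
import numpy as np

def channel_gain(phi, look_dir, a=0.5):
    """Generic first-order directivity a + (1 - a) * cos(phi - look_dir)."""
    return a + (1.0 - a) * np.cos(phi - look_dir)

def modify_stereo_direct(S, phi, phi_mod, look_dirs=(np.pi / 4, -np.pi / 4)):
    """Rescale the direct sound S per channel to match the remapped DOA."""
    return [S * channel_gain(phi_mod, ld) / max(channel_gain(phi, ld), 1e-6)
            for ld in look_dirs]
```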
  • embodiments of the present invention provide an apparatus 300 for converting a first parametric spatial audio signal 112, 114 representing a first listening position or a first listening orientation in a spatial audio scene to a second parametric spatial audio signal 212, 214 representing a second listening position or a second listening orientation, the second listening position or second listening orientation being different to the first listening position or first listening orientation.
  • the apparatus comprises a spatial audio signal modification unit 301, 302 adapted to modify the first parametric spatial audio signal 112, 114 dependent on a change of the first listening position or the first listening orientation so as to obtain the second parametric spatial audio signal 212, 214, wherein the second listening position or the second listening orientation corresponds to the first listening position or the first listening orientation changed by the change.
  • Embodiments of the apparatus 300 can be adapted to convert only a single side information parameter, for example, the direction-of-arrival 114a or the diffuseness parameter 114b, or only the audio downmix signal 112 or some or all of the aforementioned signals and parameters.
  • the analog microphone signals are digitized and processed to provide a downmixed time/frequency representation W(k,n) of the microphone signals, representing, for each time instant or block k , a frequency representation, wherein each frequency bin of the frequency or spectral representation is denoted by the index n .
  • the spatial audio analysis unit 100 determines, for each time instant k and each frequency bin n of the corresponding time instant k, one unit vector eDOA (cf. equation (4)), providing the directional parameter or information for each frequency bin n and each time instant k.
  • Furthermore, the spatial audio analysis unit 100 determines, for each time instant k and each frequency bin n, a diffuseness parameter ψ defining a relation between the direct sound or audio components and the diffuse sound or audio components, wherein the diffuse components are, for example, caused by two or more audio sources and/or by reflections of audio signals from the audio sources.
  • DirAC is a very processing- and memory-efficient coding scheme, as it reduces the spatial audio information defining the audio scene (for example, audio sources, reflections, position and orientation of the microphones and, respectively, the listener) to one directional information item, i.e. a unit vector eDOA(k,n), and one diffuseness value ψ(k,n) between 0 and 1 (for each time instant k and each frequency bin n), associated with the corresponding one (mono) downmix audio signal W(k,n) or several (e.g. stereo) downmix audio signals W1(k,n) and W2(k,n).
  • Embodiments using the aforementioned directional audio coding are, therefore, adapted to modify, for each time instant k and each frequency bin n, the corresponding downmix value W(k,n) to Wmod(k,n), the corresponding direction-of-arrival parameter value e(k,n) to emod(k,n) (in Figs. 1 to 3 represented by φ and φmod, respectively) and/or the diffuseness parameter value ψ(k,n) to ψmod(k,n).
  • the spatial audio signal modification unit comprises or is formed by, for example, the parameter modification unit 301 and the downmix modification unit 302.
  • the parameter modification unit 301 is adapted to process the original directional parameter 114a to determine the modified directional parameter 214a, to process the diffuseness parameter ψ depending on the original directional parameter φ 114a, to split the downmix signal 112 according to equations (2) and (3) using the original diffuseness parameter ψ 114b, and to apply the direction-dependent filtering H1(k,n,φ,d) and H2(k,n,φ,d) dependent on the original directional parameter φ 114a.
  • These modifications are performed for each time instant k and each frequency bin n to obtain, for each k and n, the respective modified signals and/or parameters, as sketched below.
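  • The following hedged per-(k,n) sketch combines the complete acoustical zoom for a mono downmix, reusing the f_p, f_d, h1_window and h2_window helpers sketched above; the conversion of the zoom factor d into a translational distance for f_p is an assumption, and the direct/diffuse split uses the multiplicators √(1-ψ) and √ψ restated further below (equations (2) and (3)):

```python
# Hedged per-(k, n) sketch of the acoustical zoom for a mono downmix,
# reusing the f_p, f_d, h1_window and h2_window helpers sketched above.
import numpy as np

def acoustical_zoom_bin(W, phi, psi, d):
    """W: complex downmix value; phi: DOA; psi: diffuseness; d: zoom factor
    (>= 1). Returns (W_mod, phi_mod, psi_mod) for one time/frequency bin."""
    S = W * np.sqrt(1.0 - psi)                  # direct component, eq. (2)
    N = W * np.sqrt(psi)                        # diffuse component, eq. (3)
    W_mod = h1_window(phi, d) * S + h2_window(phi, d) * N
    phi_mod = f_p(phi, 1.0 - 1.0 / d)           # assumed factor-to-distance map
    psi_mod = f_d(psi, phi, d)
    return W_mod, phi_mod, psi_mod
```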
  • the apparatus 300 is adapted to only modify the first directional parameter 114a of the first parametric spatial audio signal to obtain a second directional parameter 214a of the second parametric spatial audio signal depending on the control signal 402, for example, a rotation control signal or a zoom control signal.
  • a corresponding modification or shift of the directional parameter φ(k,n) 114a is sufficient.
  • the corresponding diffuseness parameters and downmix signal components can be left un-amended so that the second downmix signal 212 corresponds to the first downmix signal 112 and the second diffuseness parameter 214b corresponds to the first diffuseness parameter 114b.
  • a modification of the directional parameter φ(k,n) 114a according to a remapping function as shown in Fig. 5A already improves the sound experience and provides a better synchronization between the audio signal and, for example, a video signal, compared to the unmodified or original parametric spatial audio signal (without modifying the diffuseness parameter or the downmix signal).
  • In a further embodiment, the apparatus 300 is adapted to only apply the filter H1(k,n,φ,d). In other words, this embodiment performs neither a direction-of-arrival remapping nor a diffuseness modification.
  • This embodiment is adapted to only determine, for example, the direct component 112a from the downmix signal 112 and to apply the filter function H 1 to the direct component to produce a direction dependent weighted version of the direct component.
  • Such embodiments may be further adapted to use the direction dependent weighted version of the direct component as modified downmix signal W mod 212, or to also determine the diffuse component 112b from the original downmix signal W 112 and to generate the modified downmix signal W mod 212 by adding, or in general combining, the direction dependent weighted version of the direct component and the original or unaltered diffuse component 112b.
  • An improved impression of the acoustical zooming can be achieved; however, the zoom effect is limited because the direction-of-arrival is not modified.
  • In another embodiment, the filters H1(k,n,φ,d) and H2(k,n,φ,d) are both applied; however, no direction-of-arrival remapping or diffuseness modification is performed.
  • the acoustic impression is improved compared to the unamended or original parametric spatial audio signal 112, 114.
  • The zooming impression is also better than when only applying the filter function H1(k,n,φ,d) to the direct component in the presence of diffuse sound; however, it is still limited because the direction-of-arrival φ is not modified.
  • In a further embodiment, only the filter fd is applied, or in other words, only the diffuseness parameter ψ is modified.
  • The zooming effect is improved compared to the original parametric spatial audio signal 112, 114 because the diffuseness of zoomed-in areas (areas of interest) is reduced and the diffuseness values of out-of-interest areas are increased.
  • Such embodiments provide a very good zoom impression that is better than only applying the direction-of-arrival remapping.
  • Embodiments applying the direction-of-arrival remapping according to function f p in combination with a downmix modification using both filter functions H 1 (k,n, ⁇ ,d) and H 2 (k,n, ⁇ ,d) provide even better zoom impressions than only applying the direction-of-arrival remapping combined with applying the first filter function H 1 alone.
  • In certain embodiments, the apparatus 300 can be adapted to only modify the directional parameter φ(k,n) and the diffuseness parameter ψ(k,n), but not to modify the downmix signal W(k,n) 112.
  • Preferred embodiments of the apparatus 300 as mentioned above also comprise modifying the downmix signal W(k,n) to even further improve the audio experience with regard to the changed position in the spatial audio scene.
  • In an embodiment, the parameter modification unit 301 is adapted to shift or modify the first directional parameter by an angle defined by a rotation control signal, in the direction reverse to the direction defined by the rotation control signal, to obtain the second directional parameter φmod(k,n) 214a.
  • In a further embodiment, the parameter modification unit 301 is adapted to obtain the second directional parameter 214a using a non-linear mapping function (as, for example, shown in Fig. 5A) defining the second directional parameter 214a depending on the first directional parameter φ(k,n) and a zoom factor d defined by a zoom control signal 402 or other translational control information defined by the change signal.
  • The parameter modification unit 301 can be adapted to modify the first diffuseness parameter ψ(k,n) 114b of the first parametric spatial audio signal to obtain a second diffuseness parameter ψmod(k,n) 214b depending on the first directional parameter φ(k,n) 114a.
  • In a further embodiment, the parameter modification unit 301, 301b is adapted to obtain the second diffuseness parameter 214b using a direction-dependent function adapted to decrease the first diffuseness parameter 114b in case the first directional parameter 114a is within a predetermined central range of directions, with the second or changed listening orientation forming the center of the predetermined two-dimensional or three-dimensional central range, and/or to increase the first diffuseness parameter 114b in case the first directional parameter 114a is outside of the predetermined central range.
  • In embodiments, the first or original listening orientation defines the center, e.g. the 0° direction, of the predetermined central range, wherein a positive and a negative border of the predetermined central range is defined by a positive and a negative angle γ in a two-dimensional (e.g. horizontal) plane (e.g. ±30°), independent of whether the second listening orientation is a two-dimensional or a three-dimensional vector, or by a corresponding angle γ (e.g. 30°) defining a right circular cone around the three-dimensional first listening orientation.
  • Further embodiments can comprise different predetermined central regions or windows, symmetric and asymmetric, arranged or centered around the first listening orientation or a vector defining the first listening orientation.
  • the direction-dependent function f d ( k,n, ⁇ ,d ) depends on the change signal, for example, the zoom control signal, wherein the predetermined central range, respectively the values ⁇ defining the negative and positive border (or in general the border) of the central range is the smaller the greater the translational change or the higher the zoom factor defined by the zoom control signal is.
  • the spatial audio signal modification unit comprises a downmix modification unit 302 adapted to modify the first downmix audio signal W(k,n) of the first parametric spatial audio signal to obtain a second downmix signal Wmod(k,n) of the second parametric spatial audio signal depending on the first directional parameter ϕ(k,n) and the first diffuseness parameter Ψ(k,n).
  • Embodiments of the downmix modification unit 302 can be adapted to split the first downmix audio signal W into a direct component S(k,n) 112a and a diffuse component N(k,n) 112b dependent on the first diffuseness parameter Ψ(k,n), for example, based on equations (2) and (3).
  • the downmix modification unit 302 is adapted to apply a first direction-dependent function H1(k,n,ϕ,d) to obtain a direction-dependent weighted version of the direct component and/or to apply a second direction-dependent function H2(k,n,ϕ,d) to the diffuse component to obtain a direction-dependent weighted version of the diffuse component.
  • the downmix modification unit 302 can be adapted to produce the direction-dependent weighted version of the direct component 112a by applying a further direction-dependent function H1(k,n,ϕ,d) to the direct component, the further direction-dependent function being adapted to increase the direct component 112a in case the first directional parameter 114a is within the further predetermined central range of the first directional parameters and/or to decrease the direct component 112a in case the first directional parameter 114a is outside of the further predetermined range of the first directional parameters.
  • the downmix modification unit can be adapted to produce the direction-dependent weighted version of the diffuse component 112b by applying a direction-dependent function H2(k,n,ϕ,d) to the diffuse component 112b, the direction-dependent function being adapted to decrease the diffuse component in case the first directional parameter 114a is within a predetermined central range of the first directional parameters and/or to increase the diffuse component 112b in case the first directional parameter 114a is outside of the predetermined range of the first directional parameters.
  • the downmix modification unit 302 is adapted to obtain the second downmix signal 212 based on a combination, e.g. a sum, of a direction dependent weighted version of the direct component 112a and a direction dependent weighted version of the diffuse component 112b.
  • further embodiments may apply other algorithms than summing the two components to obtain the modified downmix signal 212.
  • embodiments of the downmix modification unit 302 can be adapted to split up the downmix signal W into a diffuse part or component 112b and a non-diffuse or direct part or component 112a by two multipliers, namely (Ψ)^(1/2) and (1 − Ψ)^(1/2), and to filter the non-diffuse part 112a by the filter function H1 and to filter the diffuse part 112b by the filter function H2, as sketched below.
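  • The split-filter-recombine structure just described can be sketched in a few lines; the scalar H1 and H2 values stand in for the direction-dependent filter gains of a single time/frequency bin:

    import numpy as np

    def modify_downmix(W, psi, H1, H2):
        """Split the downmix into direct and diffuse parts via the
        multipliers sqrt(1 - psi) and sqrt(psi), weight each part with
        its filter value, and recombine into the modified downmix."""
        S = W * np.sqrt(1.0 - psi)  # non-diffuse (direct) part
        N = W * np.sqrt(psi)        # diffuse part
        return H1 * S + H2 * N      # W_mod for this time/frequency bin

    # Example for a single bin: boost the direct part, attenuate the diffuse part.
    print(modify_downmix(W=1.0, psi=0.4, H1=1.5, H2=0.7))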
  • the filter function H1 or H1(k,n,ϕ,d) can be dependent on the time/frequency indices k, n, the original direction-of-arrival ϕ and the zoom parameter d.
  • the filter function H1 may be additionally dependent on the diffuseness Ψ.
  • the filter function H2 or H2(k,n,ϕ,d) can be dependent on the time/frequency indices k, n, the original direction-of-arrival ϕ, and the zoom parameter d.
  • the filter function H2 may be additionally dependent on the diffuseness Ψ.
  • the filter function H2 can be implemented as a subcardioid window as shown in Fig. 7, or as a simple attenuation factor, independent of the direction-of-arrival ϕ.
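  • A subcardioid window for H2 could, for example, be sketched as below; the coefficient a = 0.7 is one common subcardioid choice and is assumed here for illustration:

    import numpy as np

    def H2_subcardioid(phi, a=0.7):
        """Subcardioid window over the DOA phi (radians). a=1 would be
        omnidirectional, a=0.5 a full cardioid."""
        return a + (1.0 - a) * np.cos(phi)

    print(H2_subcardioid(np.radians(0.0)))    # 1.0  (front, no attenuation)
    print(H2_subcardioid(np.radians(180.0)))  # 0.4  (rear, strongest attenuation)

    Such a window never reaches zero, so diffuse sound from out-of-interest directions is attenuated but not removed.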
  • the zoom parameter d can be used to control the filters H1, H2 and the modifiers or functions fd and fp (see Fig. 3A).
  • the zoom parameter d can also control the look width or angular width γ (also referred to as border angle γ) of the applied windows or central regions.
  • the width γ is defined, e.g., as the angle at which the filter function has 0 dB gain (see, e.g., the 0 dB line in Fig. 6).
  • the angular width γ and/or the gain can be controlled by the zoom parameter d.
  • An example of different values for γ and different maximum gains and minimum gains is given in Fig. 6.
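  • One possible coupling of the zoom parameter d to the window half-width and the gain limits, in the spirit of Fig. 6, is sketched below; all numeric ranges are illustrative assumptions:

    import numpy as np

    def window_params(d, gamma_1x=np.radians(60.0),
                      max_gain_db_per_x=3.0, min_gain_db_per_x=-3.0):
        """Map a zoom factor d >= 1 to a window half-width gamma and to
        maximum/minimum gains: higher zoom -> narrower window, stronger
        boost inside it and stronger attenuation outside it."""
        gamma = gamma_1x / d
        max_gain_db = max_gain_db_per_x * (d - 1.0)
        min_gain_db = min_gain_db_per_x * (d - 1.0)
        return gamma, max_gain_db, min_gain_db

    print(window_params(2.0))  # (~0.52 rad, +3.0 dB, -3.0 dB)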
  • As can be seen from Fig. 4, where α corresponds to the original directional parameter ϕ and β corresponds to the modified directional parameter ϕmod (for zoom-in), the higher the zoom factor d, the more object B moves from a central or frontal position to a side position, or even (in case of even higher zoom factors d than shown in Fig. 4) to a position in the back of the virtually modified position.
  • In other words, the higher the zoom factor d, the more the magnitude of an initially small angle representing a position in a frontal area of the listener increases, wherein higher angles represent positions in a side area of the listener.
  • This modification of the directional parameter is taken into account by applying a function as shown in Fig. 5A .
  • the direction-dependent windows or functions for the other parameters and for the direct and diffuse components can also be designed to take into account the modification of the original directional parameter or angle, by reducing the angular width γ with increasing zoom d, for example in a non-linear manner corresponding to the direction-of-arrival or directional parameter mapping as shown in Fig. 5A.
  • these direction-dependent windows or functions can be adapted such that the original directional parameter can be directly used (e.g. without prior modification by the function fp), or, alternatively, first the directional parameter mapping fp is performed and afterwards the direction-dependent weighting fd, H1 and/or H2 based on the modified directional parameter is performed in a similar manner.
  • direction-dependent functions fd, H1 and H2 can thus refer directly to α, representing the original directional parameter (for zoom-in), or to β, representing the modified directional parameter.
  • Embodiments using the modified directional parameter can employ, similar to the embodiments using the original directional parameter, different windows with different angular widths and/or different gains for different zoom factors; or the same windows with the same angular width (because the directional parameter has already been mapped to reflect the different zoom factors) and the same gain; or windows with the same angular widths but different gains, wherein a higher zoom factor results in a higher gain (analogous to the windows in Fig. 6).
  • Fig. 3B shows a further embodiment of the apparatus.
  • the spatial audio signal modification unit in Fig. 3B comprises or is formed by, for example, the parameter modification unit 301 and the downmix modification unit 302.
  • the parameter modification unit 301 is adapted to first process the original parameter 114a to determine the modified directional parameter 214a, to then process the diffuseness parameter Ψ depending on the modified directional parameter ϕmod, respectively 214a, to split the downmix signal 112 using equations (2) and (3) and the original diffuseness parameter Ψ, respectively 114b, as described based on Fig. 3A, and to apply the direction-dependent filtering H1 and H2 dependent on the modified directional parameter ϕmod, respectively 214a.
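  • The processing order of this Fig. 3B variant can be summarized in a short sketch; all helper functions (f_p, f_d, H1, H2) are placeholders for the mappings and filters discussed above:

    import numpy as np

    def acoustic_zoom_fig3b(W, phi, psi, d, f_p, f_d, H1, H2):
        """Fig. 3B order: remap the DOA first, derive the modified
        diffuseness from the *modified* DOA, split the downmix with the
        *original* diffuseness, then filter with DOA-dependent weights."""
        phi_mod = f_p(phi, d)           # 1) DOA remapping
        psi_mod = f_d(psi, phi_mod, d)  # 2) diffuseness from phi_mod
        S = W * np.sqrt(1.0 - psi)      # 3) split with original psi
        N = W * np.sqrt(psi)
        W_mod = H1(phi_mod, d) * S + H2(phi_mod, d) * N  # 4) filter, recombine
        return W_mod, phi_mod, psi_mod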
  • these modifications are performed for each time instant k and each frequency bin n to obtain, for each time instant k and each frequency bin n, the respective modified signals and/or parameters.
  • the parameter modification unit 301 is adapted to process the original parameter 114a to determine the modified directional parameter 214a, to process the diffuseness parameter Ψ depending on the original directional parameter ϕ or 114a to determine the modified diffuseness parameter Ψmod or 214b, to split the downmix signal 112 using equations (2) and (3) and the original diffuseness parameter Ψ or 114b as described based on Fig. 3A, and to apply the direction-dependent filtering H1 and H2 dependent on the modified directional parameter ϕmod or 214a.
  • the apparatus 300 is adapted to only modify the first directional parameter 114a of the first parametric spatial audio signal to obtain a second directional parameter 214a of the second parametric spatial audio signal depending on the control signal 402, for example, a rotation control signal or a zoom control signal.
  • a corresponding modification or shift of the directional parameter ϕ(k,n) 114a is sufficient.
  • the corresponding diffuseness parameters and downmix signal components can be left unchanged so that the second downmix signal 212 corresponds to the first downmix signal 112 and the second diffuseness parameter 214b corresponds to the first diffuseness parameter 114b.
  • a modification of the directional parameter ϕ(k,n) 114a according to a remapping function as shown in Fig. 5A already improves the sound experience and provides for a better synchronization between the audio signal and, for example, a video signal compared to the unmodified or original parametric spatial audio signal (without modifying the diffuseness parameter or the downmix signal).
  • Modifying the diffuseness parameter 114b further improves the audio experience or, in other words, improves the adaptation of the sound experience with regard to the changed position within the spatial audio scene. Therefore, in further embodiments, the apparatus 300 can be adapted to only modify the directional parameter ϕ(k,n) and the diffuseness parameter Ψ(k,n), the latter dependent on the modified directional parameter ϕmod(k,n), but not to modify the downmix signal W(k,n) 112.
  • Preferred embodiments of the apparatus 300 according to Fig. 3B also comprise modifying the downmix signal W(k,n) dependent on the original diffuseness Ψ(k,n) and the modified directional parameter ϕmod(k,n) to even further improve the audio experience with regard to the changed position in the spatial audio scene.
  • the parameter modification unit 301 is adapted to shift or modify the first directional parameter by an angle defined by a rotation control signal, in a direction reverse to the direction defined by the rotation control signal, to obtain the second directional parameter ϕmod(k,n) 214a.
  • the parameter modification unit 301 is adapted to obtain the second directional parameter 214a using a non-linear mapping function (as, for example, shown in Fig. 5A) defining the second directional parameter 214a depending on the first directional parameter ϕ(k,n) and a zoom factor d defined by a zoom control signal 402 or another translational control information defined by the change signal.
  • the parameter modification unit 301 can be adapted to modify the first diffuseness parameter Ψ(k,n) 114b of the first parametric spatial audio signal to obtain a second diffuseness parameter Ψmod(k,n) 214b depending on the second directional parameter ϕmod(k,n) 214a.
  • the parameter modification unit can be further adapted to obtain the second diffuseness parameter Ψmod(k,n) using a direction-dependent function adapted to decrease the first diffuseness parameter Ψ(k,n) to obtain the second diffuseness parameter Ψmod(k,n) in case the second directional parameter ϕmod(k,n) is within a predetermined central range, for example +/-30° of the original reference orientation referred to as the original 0° orientation, and/or to increase the first diffuseness parameter Ψ(k,n) to obtain the second diffuseness parameter Ψmod(k,n) in case the second directional parameter ϕmod(k,n) is outside of the predetermined central range, for example, in a two-dimensional case, outside the central range defined by +30° and -30° from the 0° original reference orientation.
  • the parameter modification unit 301, 301b is adapted to obtain the second diffuseness parameter 214b using a direction-dependent function adapted to decrease the first diffuseness parameter 114b to obtain the second diffuseness parameter 214b in case the second directional parameter 214a is within a predetermined central range of the second directional parameter, the first or original listening orientation forming the center of the predetermined two-dimensional or three-dimensional central range, and/or to increase the first diffuseness parameter 114b to obtain the second diffuseness parameter in case the second directional parameter 214a is outside of the predetermined central range.
  • the first listening orientation defines a center, e.g. a positive and a negative border of the predetermined central range is defined by a positive and a negative angle in a two-dimensional (e.g. horizontal) plane (e.g. +/-30°), independent of whether the first listening orientation is a two-dimensional or a three-dimensional vector, or by a corresponding angle (e.g. 30°) defining a right circular cone around the three-dimensional second listening orientation.
  • Further embodiments can comprise different predetermined central regions, symmetric and asymmetric, arranged around the first listening orientation or vector defining the first listening orientation.
  • the direction-dependent function fd depends on the change signal, for example the zoom control signal, wherein the predetermined central range, respectively the values defining its negative and positive border (or, in general, its border), becomes smaller the greater the translational change or the higher the zoom factor defined by the zoom control signal.
  • the spatial audio signal modification unit comprises a downmix modification unit 302 adapted to modify the first downmix audio signal W(k,n) of the first parametric spatial audio signal to obtain a second downmix signal Wmod(k,n) of the second parametric spatial audio signal depending on the second directional parameter ϕmod(k,n) and the first diffuseness parameter Ψ(k,n).
  • Embodiments of the downmix modification unit 302 can be adapted to split the first downmix audio signal W into a direct component S(k,n) 112a and a diffuse component N(k,n) 112b dependent on the first diffuseness parameter Ψ(k,n), for example, based on equations (2) and (3).
  • the downmix modification unit 302 is adapted to apply a first direction-dependent function H1 to obtain a direction-dependent weighted version of the direct component and/or to apply a second direction-dependent function H2 to the diffuse component to obtain a direction-dependent weighted version of the diffuse component.
  • the downmix modification unit 302 can be adapted to produce the direction-dependent weighted version of the direct component 112a by applying a further direction-dependent function H1 to the direct component, the further direction-dependent function being adapted to increase the direct component 112a in case the second directional parameter 214a is within the further predetermined central range of the second directional parameters and/or to decrease the direct component 112a in case the second directional parameter 214a is outside of the further predetermined range of the second directional parameters.
  • the downmix modification unit can be adapted to produce the direction-dependent weighted version of the diffuse component 112b by applying a direction-dependent function H2 to the diffuse component 112b, the direction-dependent function being adapted to decrease the diffuse component in case the second directional parameter 214a is within a predetermined central range of the second directional parameters and/or to increase the diffuse component 112b in case the second directional parameter 214a is outside of the predetermined range of the second directional parameters.
  • the downmix modification unit 302 is adapted to obtain the second downmix signal 212 based on a combination, e.g. a sum, of a direction dependent weighted version of the direct component 112a and a direction dependent weighted version of the diffuse component 112b.
  • further embodiments may apply other algorithms than summing the two components to obtain the modified downmix signal 212.
  • embodiments of the downmix modification unit 302 according to Fig. 3B can be adapted to split up the downmix signal W into a diffuse part or component 112b and a non-diffuse or direct part or component 112a by two multipliers, namely (Ψ)^(1/2) and (1 − Ψ)^(1/2), and to filter the non-diffuse part 112a by the filter function H1 and to filter the diffuse part 112b by the filter function H2.
  • the filter function H1 or H1(k,n,ϕmod,d) can be dependent on the time/frequency indices k, n, the modified direction-of-arrival ϕmod and the zoom parameter d.
  • the filter function H1 may be additionally dependent on the diffuseness Ψ.
  • the filter function H2 or H2(k,n,ϕ,d) can be dependent on the time/frequency indices k, n, the original direction-of-arrival ϕ, and the zoom parameter d.
  • the filter function H2 may be additionally dependent on the diffuseness Ψ.
  • the filter function H2 can be implemented as a subcardioid window as shown in Fig. 7, or as a simple attenuation factor, independent of the modified direction-of-arrival ϕmod.
  • the zoom parameter d can be used to control the filters H1, H2 and the modifiers or functions fd and fp.
  • the zoom parameter d can also control the angular width γ (also referred to as border angle γ) of the applied windows or central regions.
  • the width γ is defined, e.g., as the angle at which the filter function has 0 dB gain (analogous to the 0 dB line in Fig. 6).
  • the angular width γ and/or the gain can be controlled by the zoom parameter d.
  • inventive embodiments lead to an improved experience of a joint video/audio playback by adjusting the perceived audio image to the zoom control of a video camera.
  • Modern camcorders, for example for home entertainment, are capable of recording surround sound and have a powerful optical zoom. There is, however, no perceptual equivalent interaction between the optical zoom and the recorded sound, as the recorded spatial sound only depends on the actual position of the camera and, thus, the position of the microphones mounted on the camera itself. In case of a scene filmed in a close-up mode, the invention allows adjusting the audio image accordingly. This leads to a more natural and consistent consumer experience, as the sound is zoomed together with the picture.
  • the invention may also be applied in a post-processing phase if the original microphone signals are recorded unaltered with the video and no further processing has been done.
  • Although the original zoom length may not be known, the invention can be used in creative audio-visual post-processing toolboxes. An arbitrary zoom length can be selected and the acoustical zoom can be steered by the user to match the picture. Alternatively, the user can create his own preferred spatial effects. In either case, the original microphone recording position will be altered to a user-defined virtual recording position.
  • the inventive methods can be implemented in hardware or in software.
  • the implementation can be performed using a digital storage medium, in particular, a disc, a CD, a DVD or a Blu-Ray disc having an electronically-readable control signal stored thereon, which cooperates with a programmable computer system such that an embodiment of the inventive method is performed.
  • an embodiment of the present invention is, therefore, a computer program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive method when the computer program product runs on a computer.
  • embodiments of the inventive method are, therefore, a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer.

Abstract

An apparatus (300) for converting a first parametric spatial audio signal (112, 114) representing a first listening position or a first listening orientation in a spatial audio scene to a second parametric spatial audio signal (212, 214) representing a second listening position or a second listening orientation is described, the apparatus comprising: a spatial audio signal modification unit (301, 302) adapted to modify the first parametric spatial audio signal (112, 114) dependent on a change of the first listening position or the first listening orientation so as to obtain the second parametric spatial audio signal (212, 214), wherein the second listening position or the second listening orientation corresponds to the first listening position or the first listening orientation changed by the change.

Description

  • The present invention relates to the field of audio processing, especially to the field of parametric spatial audio processing and to the conversion of a first parametric spatial audio signal into a second parametric spatial audio signal.
  • Background of the Invention
  • Spatial sound recording aims at capturing a sound field with multiple microphones such that, at the reproduction side, a listener perceives the sound image as it was present at the recording location. Standard approaches for spatial sound recording use simple stereo microphones or more sophisticated combinations of directional microphones, e.g., such as the B-format microphones used in Ambisonics and described by M.A. Gerzon, "Periphony: Width-Height Sound Reproduction," J. Aud. Eng. Soc., Vol. 21, No. 1, pp 2-10, 1973, in the following referred to as [Ambisonics]. Commonly, these methods are referred to as coincident-microphone techniques.
  • Alternatively, methods based on a parametric representation of sound fields can be applied, which are referred to as parametric spatial audio coders. These methods determine a downmix audio signal together with corresponding spatial side information, which are relevant for the perception of spatial sound. Examples are Directional Audio Coding (DirAC), as discussed in Pulkki, V., "Directional audio coding in spatial sound reproduction and stereo upmixing," in Proceedings of The AES 28th International Conference, pp. 251-258, Piteå, Sweden, June 30 - July 2, 2006, in the following referred to as [DirAC], or the so-called spatial audio microphones (SAM) approach proposed in Faller, C., "Microphone Front-Ends for Spatial Audio Coders", in Proceedings of the AES 125th International Convention, San Francisco, Oct. 2008, in the following referred to as [SAM]. The spatial cue information basically consists of the direction-of-arrival (DOA) of sound and the diffuseness of the sound field in frequency subbands. In a synthesis stage, the desired loudspeaker signals for reproduction are determined based on the downmix signal and the parametric side information.
  • In other words, the downmix signals and the corresponding spatial side information represent the audio scene according to the set-up, e.g. the orientation and/or position of the microphones, in relation to the different audio sources used at the time the audio scene was recorded.
  • It is the object of the present invention to provide a concept for a flexible adaptation of the recorded audio scene.
  • Summary of the Invention
  • This object is achieved by an apparatus according to claim 1, a method according to claim 17 and a computer program according to claim 18.
  • All the aforementioned methods have in common that they aim at rendering the sound field at the reproduction side as it was perceived at the recording position. The recording position, i.e. the position of the microphones, can also be referred to as the reference listening position. A modification of the recorded audio scene is not envisaged in these known spatial sound-capturing methods.
  • On the other hand, modification of the visual image is commonly applied, for example, in the context of video capturing. For example, an optical zoom is used in video cameras to change the virtual position of the camera, giving the impression that the image was taken from a different point of view. This is described by a translation of the camera position. Another simple picture modification is the horizontal or vertical rotation of the camera around its own axis. The horizontal and vertical rotations are also referred to as panning and tilting, respectively.
  • Embodiments of the present invention provide an apparatus and a method, which also allow virtually changing the listening position and/or orientation according to the visual movement. In other words, the invention allows altering the acoustic image a listener perceives during reproduction such that it corresponds to the recording obtained using a microphone configuration placed at a virtual position and/or orientation other than the actual physical position of the microphones. By doing so, the recorded acoustic image can be aligned with the corresponding modified video image. For example, the effect of a video zoom to a certain area of an image can be applied to the recorded spatial audio image in a consistent way. According to the invention, this is achieved by appropriately modifying the spatial cue parameters and/or the downmix signal in the parametric domain of the spatial audio coder.
  • Embodiments of the present invention allow a flexible change of the position and/or orientation of a listener within a given spatial audio scene without having to record the spatial audio scene with a different microphone setting, for example, a different position and/or orientation of the recording microphone set-up with regard to the audio signal sources. In other words, embodiments of the present invention allow defining a virtual listening position and/or virtual listening orientation that is different from the recording position or listening position at the time the spatial audio scene was recorded.
  • Certain embodiments of the present invention only use one or several downmix signals and/or the spatial side information, for example, the direction-of-arrival and the diffuseness to adapt the downmix signals and/or spatial side information to reflect the changed listening position and/or orientation. In other words, these embodiments do not require any further set-up information, for example, geometric information of the different audio sources with regard to the original recording position.
  • Embodiments of the present invention further receive parametric spatial audio signals according to a certain spatial audio format, for example, mono or stereo downmix signals with direction-of-arrival and diffuseness as spatial side information, convert this data according to control signals, for example, zoom or rotation control signals, and output the modified or converted data in the same spatial audio format, i.e. as a mono or stereo downmix signal with the associated direction-of-arrival and diffuseness parameters.
  • In particular embodiments, the apparatus is coupled to a video camera or another video source and modifies the received or original spatial audio data into the modified spatial audio data according to the zoom control or rotation control signals provided by the video camera, to synchronize, for example, the audio experience with the video experience, e.g. to perform an acoustical zoom in case a video zoom is performed and/or to perform an audio rotation within the audio scene in case the video camera is rotated and the microphones do not physically rotate with the camera because they are not mounted on the camera.
  • Short Description of the Figs.
  • Embodiments of the present invention will be described in detail using the following Figs.
  • Fig. 1 shows a block diagram of a parametric spatial audio coder;
  • Fig. 2 shows the spatial audio coder of Fig. 1 together with an embodiment of the spatial parameter modification block coupled between the spatial audio analysis unit and the spatial audio synthesis unit of the spatial audio coder;
  • Fig. 3A corresponds to Fig. 2 and shows a more detailed embodiment of the spatial parameter modification block;
  • Fig. 3B corresponds to Fig. 2 and shows a further, more detailed embodiment of the spatial parameter modification block;
  • Fig. 4 shows an exemplary geometric overview of an acoustical zoom;
  • Fig. 5A shows an example of a directional mapping function fp(k,n,ϕ,d) for the direction-of-arrival (DOA) mapping;
  • Fig. 5B shows an example of a diffuseness mapping function fd(k,n,ϕ,d) for the diffuseness mapping;
  • Fig. 6 shows different gain windows for the weighting filter H1(k,n,ϕ,d) of the direct sound component depending on a zoom factor; and
  • Fig. 7 shows an exemplary subcardioid window for the weighting filter H2(k,n,ϕ,d) for the diffuse component.
  • Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description of the Figs. by equal or equivalent reference numerals.
  • Detailed Description of the Invention
  • For a better understanding of embodiments of the present invention, a typical spatial audio coder is described. The task of a typical parametric spatial audio coder is to reproduce the spatial impression that was present at the point where it was recorded. Therefore, a spatial audio coder consists of an analysis part 100 and a synthesis part 200, as shown in Fig. 1. At the acoustic front end, N microphones 102 are arranged to obtain N microphone input signals that are processed by the spatial audio analysis unit 100 to produce L downmix signals 112 with L ≤ N together with spatial side information 114. In the decoder, i.e. in the spatial audio synthesis unit, the downmix signal 112 and the spatial side information 114 are used to compute M loudspeaker channels for M loudspeakers 202, which reproduce the recorded sound field with the original spatial impression. The thick lines (the lines between the microphones 102 and the spatial audio analysis unit 100, the L downmix signals 112 and the M signal lines between the spatial audio synthesis unit 200 and the M loudspeakers 202) symbolize audio data, whereas the thin lines 114 between the spatial audio analysis unit 100 and the spatial audio synthesis unit 200 represent the spatial side information.
  • In the following, the basic steps included in the computation of the spatial parameters or, in other words, for the spatial audio analysis as performed by the spatial audio analysis unit 100, will be described in more detail. The microphone signals are processed in a suitable time/frequency representation, e.g., by applying a short-time Fourier Transform (STFT) or any other filterbank. The spatial side information determined in the analysis stage contains a measure corresponding to the direction-of-arrival (DOA) of sound and a measure of the diffuseness of the sound field, which describes the relation between direct and diffuse sound of the analyzed sound field.
  • In DirAC, it has been proposed to determine the DOA of sound as the opposite direction of the active intensity vector. The relevant acoustic information is derived from a so-called B-format microphone input, corresponding to the sound pressure and the particle velocity obtained by a microphone configuration providing dipole pick-up patterns aligned with the axes of a Cartesian coordinate system. In other words, the B-format consists of four signals, namely w(t), x(t), y(t) and z(t). The first corresponds to the pressure measured by an omnidirectional microphone, whereas the latter three are signals of microphones having figure-of-eight pick-up patterns directed towards the three axes of a Cartesian coordinate system. The signals x(t), y(t) and z(t) are proportional to the components of the particle velocity vector directed towards x, y and z, respectively. Alternatively, the approach presented in SAM uses a priori knowledge of the directivity pattern of stereo microphones to determine the DOA of sound.
  • The diffuseness measure can be obtained by relating the active sound intensity to the overall energy of the sound field, as proposed in DirAC. Alternatively, the method described in SAM proposes to evaluate the coherence between different microphone signals. It should be noted that diffuseness could also be considered as a general reliability measure for the estimated DOA. Without loss of generality, in the following it is assumed that the diffuseness lies in the range [0, 1], where a value of 1 indicates a purely diffuse sound field, and a value of 0 corresponds to the case where only direct sound is present. In other embodiments, other ranges and values for the diffuseness can be used.
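  • For orientation only, a rough sketch of how both quantities could be obtained from B-format spectra in the 2D case follows; the sign and normalization conventions, and the omission of the temporal averaging a real estimator needs, are simplifying assumptions:

    import numpy as np

    def analyze_doa_diffuseness(W, X, Y):
        """Per-bin DOA and diffuseness estimates from B-format spectra
        W (omni) and X, Y (figure-of-eight), 2D case."""
        # Quantity proportional to the active intensity; under the assumed
        # B-format convention it points toward the source, i.e. opposite
        # to the physical intensity (and propagation) direction.
        ix = np.real(np.conj(W) * X)
        iy = np.real(np.conj(W) * Y)
        phi = np.arctan2(iy, ix)  # DOA, opposite of the active intensity
        # Diffuseness: 1 - |intensity| / energy. A real implementation
        # averages both quantities over time/frequency first.
        energy = 0.5 * (np.abs(W) ** 2 + 0.5 * (np.abs(X) ** 2 + np.abs(Y) ** 2))
        psi = 1.0 - np.hypot(ix, iy) / (np.sqrt(2.0) * np.maximum(energy, 1e-12))
        return phi, np.clip(psi, 0.0, 1.0)

    # A plane wave from 30 degrees yields phi = 30 degrees and psi = 0:
    phi0 = np.radians(30.0)
    W = np.array([1.0 + 0.0j])
    X = np.sqrt(2.0) * W * np.cos(phi0)
    Y = np.sqrt(2.0) * W * np.sin(phi0)
    phi, psi = analyze_doa_diffuseness(W, X, Y)
    print(np.degrees(phi), psi)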
  • The downmix signal 112, which is accompanied by the side information 114, is computed from the microphone input signals. It can be mono or include multiple audio channels. In case of DirAC, commonly only a mono signal, corresponding to the sound pressure as obtained by an omnidirectional microphone, is considered. For the SAM approach, a two-channel stereo signal is used as downmix signal.
  • In the following, the synthesis of loudspeaker signals used for reproduction as performed by the spatial audio synthesis unit 200 is described in further detail. The input of the synthesis 200 is the downmix signal 112 and the spatial parameters 114 in their time-frequency representation. From this data, M loudspeaker channels are calculated such that the spatial audio image or spatial audio impression is reproduced correctly. Let Yi(k,n), with i = 1...M, denote the signal of the i-th physical loudspeaker channel in time/frequency representation with the time and frequency indices k and n, respectively. The underlying signal model for the synthesis is given by

    Y_i(k,n) = g_i(k,n) \, S(k,n) + D_i\{ N(k,n) \}    (1)

    where S(k,n) corresponds to the direct sound component and N(k,n) represents the diffuse sound component. Note that for correct reproduction of diffuse sound, a decorrelation operation D_i{·} is applied to the diffuse component of each loudspeaker channel. The scaling factor g_i(k,n) depends on the DOA of the direct sound included in the side information and the loudspeaker configuration used for playback. A suitable choice is given by the vector base amplitude panning approach proposed by Pulkki, V., "Virtual sound source positioning using vector base amplitude panning," J. Audio Eng. Soc., Vol. 45, pp 456-466, June 1997, in the following referred to as [VBAP].
  • In DirAC, the direct sound component is determined by appropriate scaling of the mono downmix signal W(k,n), and obtained according to:

    S(k,n) = W(k,n) \sqrt{1 - \Psi(k,n)}    (2)
  • The diffuse sound component is obtained according to

    N(k,n) = \frac{1}{\sqrt{M}} \, W(k,n) \sqrt{\Psi(k,n)}    (3)

    where M is the number of loudspeakers used.
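  • Put into code, the synthesis model (1) with the components (2) and (3) could be sketched per loudspeaker channel as follows; the panning gain g_i and the decorrelator are placeholders:

    import numpy as np

    def synthesize_channel(W, psi, g_i, decorrelate, M):
        """One loudspeaker signal per the model (1): Y_i = g_i*S + D_i{N}."""
        S = W * np.sqrt(1.0 - psi)         # direct component, eq. (2)
        N = W * np.sqrt(psi) / np.sqrt(M)  # diffuse component, eq. (3)
        return g_i * S + decorrelate(N)    # eq. (1)

    # Identity stands in for the decorrelator D_i{.} in this sketch.
    print(synthesize_channel(W=1.0, psi=0.2, g_i=0.8, decorrelate=lambda n: n, M=5))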
  • In SAM, the same signal model as in (1) is applied, however, the direct and diffuse sound components are computed based on the stereo downmix signals instead.
  • Fig. 2 shows a block diagram of an embodiment of the present invention integrated in the exemplary environment of Fig. 1, i.e. integrated between a spatial analysis unit 100 and a spatial audio synthesis unit 200. As explained based on Fig. 1, the original audio scene is recorded with a specific recording set-up of microphones specifying the location and orientation (in case of directional microphones) relative to the different audio sound sources. The N microphones provide N physical microphone signals or channel signals, which are processed by the spatial audio analysis unit 100 to generate one or several downmix signals W 112 and the spatial side information 114, for example, the direction-of-arrival (DOA) ϕ 114a and the diffuseness Ψ 114b. In contrast to Fig. 1, these spatial audio signals 112, 114a, 114b are not provided directly to the spatial audio synthesis unit 200, but are modified by an apparatus for converting or modifying a first parametric spatial audio signal 112, 114a, 114b representing a first listening position and/or a first listening orientation (in this example, the recording position and recording orientation) in a spatial audio scene to a second parametric spatial audio signal 212, 214a, 214b, i.e. a modified downmix signal Wmod 212, a modified direction-of-arrival signal ϕmod 214a and/or a modified diffuseness signal Ψmod 214b representing a second listening position and/or second listening orientation that is different from the first listening position and/or first listening orientation. The modified direction-of-arrival 214a and the modified diffuseness 214b are also referred to as modified spatial audio information 214. The apparatus 300 is also referred to as a spatial audio signal modification unit or spatial audio signal modification block 300. The apparatus 300 in Fig. 3A is adapted to modify the first parametric spatial audio signal 112, 114 depending on a control signal d 402 provided by a, e.g. external, control unit 400. The control signal 402 can, e.g., be a zoom control signal defining or being a zoom factor d or a zoom parameter d, or a rotation control signal 402 provided by a zoom control and/or a rotational control unit 400 of a video camera. It should be noted that a zoom in a certain direction and a translation in the same direction are just two different ways of describing a virtual movement in that certain direction (the zoom by a zoom factor, the translation by an absolute distance or by a relative distance relative to a reference distance). Therefore, explanations herein with regard to a zoom control signal apply correspondingly to translation control signals and vice versa, and the zoom control signal 402 also refers to a translation control signal. The term d can on the one hand represent the control signal 402 itself, and on the other hand the control information or parameter contained in the control signal. In further embodiments, the control parameter d already represents the control signal 402. The control parameter or control information d can be a distance, a zoom factor and/or a rotation angle and/or a rotation direction.
  • As can be seen from Fig. 2, the apparatus 300 is adapted to provide parametric spatial audio signals 212, 214 (downmix signals and the associated side information/parameters) in the same format as the parametric spatial audio signals 112, 114 it received. Therefore, the spatial audio synthesis unit 200 is capable (without modifications) of processing the modified spatial audio signal 212, 214 in the same manner as the original or recorded spatial audio signal 112, 114 and of converting it to M physical loudspeaker signals 204 to generate the sound experience corresponding to the modified spatial audio scene or, in other words, to the modified listening position and/or modified listening orientation within the otherwise unchanged spatial audio scene.
  • In other words, a block schematic diagram of an embodiment of the novel apparatus or method is illustrated in Fig. 2. As can be seen, the output 112, 114 of the spatial audio coder 100 is modified based on the external control information 402 in order to obtain a spatial audio representation 212, 214 corresponding to a listening position which is different from the one used at the original location of the sound capturing. More precisely, both the downmix signals 112 and the spatial side information 114 are changed appropriately. The modification strategy is determined by an external control 400, which can be acquired directly from a camera 400 or from any other user interface 400 that provides information about the actual position of the camera or zoom. In this embodiment, the task of the algorithm, respectively of the modification unit 300, is to change the spatial impression of the sound scene in the same way as the optical zoom or camera rotation changes the point-of-view of the spectator. In other words, the modification unit 300 is adapted to provide a corresponding acoustical zoom or audio rotation experience corresponding to the video zoom or video rotation.
  • Fig. 3A shows a block diagram or system overview of an embodiment of the apparatus 300 that is referred to as "acoustical zoom unit". The embodiment of the apparatus 300 in Fig. 3A comprises a parameter modification unit 301 and a downmix modification unit 302. The parameter modification unit 301 further comprises a direction-of-arrival modification unit 301a and a diffuseness modification unit 301b. The parameter modification unit 301 is adapted to receive the direction-of-arrival parameter 114a and to modify the first or received direction-of-arrival parameter 114a depending on the control signal d 402 to obtain the modified or second direction-of-arrival parameter 214a. The parameter modification unit 301 is further adapted to receive the first or original diffuseness parameter 114b and to modify the diffuseness parameter 114b by the diffuseness modification unit 301b to obtain the second or modified diffuseness parameter 214b depending on the control signal 402. The downmix modification unit 302 is adapted to receive the one or more downmix signals 112 and to modify the first or original downmix signal 112 to obtain the second or modified downmix signal 212 depending on the first or original direction-of-arrival parameter 114a, the first or original diffuseness parameter 114b and/or the control signal 402.
  • If the camera is controlled independently from the microphones 102, embodiments of the invention provide a possibility to synchronize the change of the audio scene or audio perception according to the camera controls 402. In addition, the directions can be shifted without modifying the downmix signals 112 if the camera 400 is only rotated horizontally without the zooming, i.e. applying only a rotation control signal and no zooming control signal 402. This is described by the "rotation controller" in Figs. 2 and 3.
  • The rotation modification is described in more detail in the section about directional remapping or remapping of directions. The sections about diffuseness and downmix modification are related to the translation or zooming application.
  • Embodiments of the invention can be adapted to perform both, a rotation modification and a translation or zoom modification, e.g. by first performing the rotation modification and afterwards the translation or zoom modification or vice versa, or both at the same time by providing corresponding directional mapping functions.
  • To achieve the acoustical zooming effect, the listening position is virtually changed, which is done by appropriately remapping the analyzed directions. To get a correct overall impression of the modified sound scene, the downmix signal is processed by a filter, which depends on the remapped directions. This filter changes the gains, as, e.g., sounds that are now closer are increased in level, while sounds from regions out-of-interest may be attenuated. Also, the diffuseness is scaled with the same assumptions, as, e.g., sounds that appear closer to the new listening position have to be reproduced less diffuse than before.
  • In the following, a more detailed description of the algorithm or method performed by the apparatus 300 is given. An overview of the acoustical zoom unit is given in Fig. 3A. First, the remapping of the directions is described (block 301a, fp(k,n,ϕ,d)), then the filter for the diffuseness modification (block 301b, fd(k,n,ϕ,d)) is illustrated. Block 302 describes the downmix modification, which is dependent on the zoom control and the original spatial parameters.
  • In the following section, the remapping of the directions, respectively the remapping of the direction-of-arrival parameters as, for example, performed by direction modification block 301a, is described.
  • The direction-of-arrival parameter (DOA parameter) can be represented, for example, by a unit vector e. For a three-dimensional (3D) sound field analysis, the vector can be expressed by

    \mathbf{e} = [\cos\varphi \cos\theta, \; \sin\varphi \cos\theta, \; \sin\theta]^T

    where the azimuth angle ϕ corresponds to the DOA in the two-dimensional (2D) plane, namely the horizontal plane. The elevation angle is given by θ. This vector will be altered according to the new virtual position of the microphone, as described next.
  • Without loss of generality, an example of the DOA remapping is given for the two-dimensional case for presentation simplicity (Fig. 4). A corresponding remapping of the three-dimensional DOA can be done with similar considerations.
  • Fig. 4 shows an exemplary geometric overview of the acoustical zoom. The position S marks the original microphone recording position, i.e., the original listening position. A and B mark spatial positions within the observed two-dimensional plane. It is now assumed that the listening position is moved from S to S2, e.g. in the direction of the first listening orientation. As can be seen from Fig. 4, the sound emerging from spatial position A stays in the same angular position relative to the recording location, whereas sounds from the area or spatial position B are moved to the side. This is denoted by a change of the analyzed angle α to β. β thus denotes the direction-of-arrival of sound coming from the angular position of B if the listener had been placed in S2. For the considered example, the azimuth angle is increased from α to β as shown in Fig. 4. This remapping of the direction-of-arrival information can be written as a vector transformation according to

    \mathbf{e}_{mod} = f(\mathbf{e})

    where f(·) denotes a remapping function and e_mod is the modified direction vector. This function is a nonlinear transformation, dependent on the zoom factor d and the original estimated DOAs. Fig. 5A shows examples for the mapping f(·) for different values of α as can be applied in the two-dimensional example shown in Fig. 4. For the zoom control factor d = 1, i.e., no zoom applied, the angles are equal to the original DOA α. For increasing zoom control factors, the value of β increases, too. The function can be derived from geometric considerations or, alternatively, be chosen heuristically. Thus, remapping of the directions means that each DOA is modified according to the function f(·). The mapping fp(k,n,ϕ,d) is performed for every time and frequency bin (k,n).
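  • One geometric derivation of such a remapping function, under the simplifying assumption that all sources lie at the same distance r from the original position S and that the listener moves a distance d (< r) toward the look direction, is sketched below:

    import numpy as np

    def remap_doa(alpha, d, r=1.0):
        """Map the original DOA alpha (radians), seen from S, to the DOA
        beta seen from a position moved by d toward the look direction,
        with the source assumed at distance r (cf. Fig. 4)."""
        return np.arctan2(r * np.sin(alpha), r * np.cos(alpha) - d)

    # Moving halfway toward the source circle pushes a 30 degree DOA to ~54 degrees.
    print(np.degrees(remap_doa(np.radians(30.0), d=0.5)))

    For frontal-only loudspeaker setups, the result could additionally be clamped to the loudspeaker aperture (e.g. ±60°), matching the limiting discussed below.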
  • Although, in Fig. 4, the zoom parameter d is depicted as a translational distance d between the original listening position S and the modified listening position S2, as mentioned before, d can also be a factor, e.g. an optical zoom such as a 4x or 8x zoom. Especially for the width or filter control, seeing d as a factor, not as a distance, allows for an easy implementation of the acoustical zoom. In other embodiments, the zoom parameter d is a real distance, or at least proportional to a distance.
  • It should be further noted that embodiments of the invention can also be adapted to support, besides the "zoom-in" as described above (e.g. reducing a distance to an object, e.g. to object A in Fig. 4, by moving from position S to position S2), also a "zoom-out" (e.g. increasing a distance to an object, e.g. to object A in Fig. 4, by moving from position S2 to position S). In this case, the inverse considerations apply compared to the zoom-in as described, because objects positioned at a side of the listener (e.g. object B with regard to position S2) move to the front of the listener when he moves to position S. In other words, the magnitudes of the angles are reduced (e.g. from β to α).
  • The remapping of the directions or vector transformation is performed by the direction-of-arrival modification unit 301a. Fig. 5A shows an exemplary mapping function (dependent on the zoom factor d) for the directions-of-arrival for the scenario shown in Fig. 4. The diagram of Fig. 5A shows the zoom factor on the x-axis, ranging from 1 to 2, and the modified or mapped angle β on the y-axis. For a zoom factor of 1, β = α, i.e. the initial angle is not modified. Reference sign 512 refers to the mapping function fp for α = 10°, reference sign 514 represents the mapping function fp for α = 30°, reference sign 516 the mapping function fp(k,n,ϕ,d) for α = 50°, reference sign 518 the mapping function fp(k,n,ϕ,d) for α = 70°, and reference sign 520 the mapping function fp(k,n,ϕ,d) for α = 90°.
  • Embodiments of the invention can be adapted to use the same mapping function fp for all time and frequency bins defined by k and n, or may use different mapping functions for different time values and/or frequency bins.
  • As becomes apparent from the above explanations, the idea behind the filter fd is to change the diffuseness ψ such that it lowers the diffuseness for zoomed-in directions (ϕ < |γ|) and increases the diffuseness for out-of-focus directions (ϕ > |γ|).
  • To simplify the determination of the mapped angle β, certain embodiments of the modification unit 301a are adapted to only use the direction and to assume that all sources, e.g. A and B, defining the direction-of-arrival of the sound have the same distance to the first listening position, e.g. are arranged on a unit radius.
  • If a loudspeaker setup is considered, which only reproduces sound coming from frontal directions, e.g., a typical stereo loudspeaker setup, the mapping function f( ) can be designed such that the maximum angle, to where DOAs are remapped, is limited. For example, a maximum angle of ±60° is chosen, when the loudspeakers are positioned at ±60°. This way, the whole sound scene will stay in the front and is only widened, when the zoom is applied.
  • In case of a rotation of the camera, the original azimuth values are just shifted such that the new looking direction corresponds to an angle of zero. Thus, a horizontal rotation of the camera by 20° would result in β = α − 20°. Also, the downmix and the diffuseness are not changed for this special case, unless a rotation and translation are carried out simultaneously.
  • As can be seen from the aforementioned explanations, the rotational change or difference is derived starting from the first listening orientation, respectively the first viewing orientation (e.g. the direction of the "nose" of the listener respectively viewer), defining a first reference or 0° orientation. When the listening orientation changes, the reference or 0° orientation changes accordingly. Therefore, embodiments of the present invention change the original angles or directions-of-arrival of the sound, i.e. the first directional parameter, according to the new reference or 0° orientation such that the second directional parameter represents the same "direction of arrival" in the audio scene, however relative to the new reference orientation or coordinate system. Similar considerations apply to the translation respectively zoom, where the perceived directions-of-arrival change due to the translation or zoom in the direction of the first listening orientation (see Fig. 4).
  • The first directional parameter 114a and the second directional parameter 214a can be two-dimensional or three-dimensional vectors. In addition, the first directional parameter 114a can be a vector, wherein the control signal 402 is a rotation control signal defining a rotation angle (e.g. 20° in the aforementioned example) and a rotation direction (to the right in the aforementioned two-dimensional example), and wherein the parameter modification unit 301, 301a is adapted to rotate the vector by the rotation angle in a reverse direction to the rotation direction (β=α-20° in the aforementioned example) to obtain the second directional parameter, i.e. the second or modified vector 214a.
  • In the following section, the diffuseness scaling as, for example, performed by the diffuseness modification unit 301b is described in more detail.
  • The diffuseness is scaled with a DOA-dependent window. In certain embodiments, values of the diffuseness ψ(k,n) are decreased for the zoomed-in directions, while the diffuseness values for the directions out-of-interest are increased. This corresponds to the observation that sound sources are perceived less diffuse if they are located closer to the listening position. Therefore, for example, for a minimum zoom factor (e.g. d = 1), the diffuseness is not modified. The range of the visual angle covered by the camera image can be taken as a controller for the scaling by which the diffuseness value is increased or decreased.
  • The terms zoomed-in-directions or directions-of-interest refer to an angular window of interest, also referred to as central range of angles, that is arranged around the first or original listening direction, e.g. the original 0° reference direction. The angular window or central range is determined by the angular values γ defining the border of the angular window. The angular window and the width of the angular window can be defined by the negative border angle -γ and the positive border angle γ, wherein the magnitude of the negative border angle may be different to the positive border angle. In preferred embodiments, the negative border angle and the positive border angle have the same magnitude (symmetric window or central range of angles centered around the first listening orientation). The magnitude of the border angle is also referred to as angular width and the width of the window (from the negative border angle to the positive border angle) is also referred to as total angular width.
  • According to embodiments of the invention, direction-of-arrival parameters, diffuseness parameters, and/or direct or diffuse components can be modified differently depending on whether the original direction-of-arrival parameter is inside the window of interest, e.g. whether the DOA-angle or a magnitude of the DOA-angle relative to the first listening position is smaller than the magnitude of the border angle or angular width γ, or whether the original direction-of-arrival parameter is outside the window of interest, e.g. whether the DOA-angle or a magnitude of the DOA-angle relative to the first listening position is larger than the magnitude of the border angle or angular width γ. This is also referred to as direction-dependent and the corresponding filter functions as direction dependent filter functions, wherein the angular width or border angle γ defines the angle at which the corresponding filter changes from increasing the parameter to decreasing the parameter or vice versa.
  • Referring back to the diffuseness modification unit 301b, the diffuseness modification unit 301b is adapted to modify the diffuseness ψ by the function fd(k,n,ϕ,d) or fd, which is dependent on the time/frequency indices k, n, the original direction-of-arrival ϕ, and the zoom controller d. Fig. 5B shows an embodiment of a filter function fd. The filter fd may be implemented as an inversion of the filter function H1, which will be explained later, however adapted to match the diffuseness range, for example the range [0..1]. Fig. 5B shows the mapping function or filter fd, wherein the x-axis represents the original or first diffuseness ψ, in Fig. 5B also referred to as ψin, with the range from 0 to 1, and the y-axis represents the second or modified diffuseness ψmod, also in the range of 0 to 1. In case no zoom is applied (d = 1), the filter fd does not change the diffuseness at all and is set to bypass, i.e. ψmod = ψin. Reference sign 552 depicts the bypass line.
  • If the original direction-of-arrival lies within the angular width γ, the diffuseness is decreased. If the original direction-of-arrival is outside the angular width γ, the diffuseness is increased. Fig. 5B shows some prototype functions of fd, namely 562, 564, 572 and 574 depending on the look width or angular width γ. In the example shown in Fig. 5B the angular width is smaller for γ2 than for γ1, i.e. γ2 < γ1. Thus, γ2 corresponds to a higher zoom factor d than γ1.
  • The area below the bypass line 552 defines the modified diffuseness values ψmod in case the original direction-of-arrival ϕ is within the angular width γ which is reflected by a reduction of the modified diffuseness value ψmod compared to the original diffuseness value ψin or ψ after the mapping by the filter fd. The area above the bypass line 552 represents the mapping of the original diffuseness ψ to the modified diffuseness values ψmod in case the original direction-of-arrival ϕ is outside the window. In other words, the area above the bypass line 552 shows the increase of the diffuseness after the mapping. In preferred embodiments, the angular width γ decreases with an increasing zoom factor d. In other words, the higher a zoom factor d, the smaller the angular width γ. In addition, embodiments can be adapted such that the zoom factor d or translation information not only influences the angular width γ of the filter function fd but also the degree or factor the diffuseness is increased in case it is inside the window and the degree or factor the diffuseness ψ is decreased in case it is outside the window defined by the angular width γ. Such an embodiment is shown in Fig. 5B, wherein the angular width γ1 corresponds to a zoom factor d1, and the angular width γ2 corresponds to a zoom factor d2, wherein d2 is larger than d1 and, thus, the angular width γ2 is smaller than angular width γ1. In addition, the function fd represented by reference sign 564 and corresponding to the larger zoom factor d2 maps the original diffuseness values ψin to lower modified diffuseness values ψmod than the filter function fd represented by 562 corresponding to the lower zoom factor d1. In other words, embodiments of the filter function can be adapted to reduce the original diffuseness the more the smaller the angular width γ. The corresponding applies to the area above the bypass line 552 in an inverse manner. In other words, embodiments of the filter function fd can be adapted to map the original diffuseness ψin to the modified diffuseness ψmod dependent on the zoom factor d and the angular width γ, or the higher the zoom factor d the smaller the angular width γ and/or the higher the increase of the diffuseness for direction-of-arrival ϕ outside the window.
• In further embodiments, the same direction dependent window or filter function fd(k,n,ϕ,d) is applied for all zoom factors. However, the use of different direction dependent window or filter functions with smaller angular widths for higher translation or zoom factors matches the audio experience of the user better and provides a more realistic audio perception. The application of different mapping values for different zoom factors (a higher reduction of the diffuseness with increasing zoom factor for direction-of-arrival values ϕ inside the window, and increasing or higher diffuseness values for higher zoom factors in case the direction-of-arrival value ϕ is outside the angular width γ) even further improves the realistic audio perception.
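• A minimal sketch of such a zoom-dependent diffuseness mapping is given below. The parametric form, the 1/(1 + d) laws and the default angular width are assumptions chosen to reproduce the qualitative behaviour of Fig. 5B (bypass for d = 0, reduction inside the window, increase outside, both effects stronger for higher zoom factors), not the exact prototype curves 562 to 574.

```python
import numpy as np

def f_d(psi_in, phi, d, gamma_0=np.deg2rad(60.0)):
    """Hypothetical diffuseness mapping f_d(k,n,phi,d); all numeric laws
    are illustrative assumptions, not taken from the document.
    psi_in: original diffuseness in [0, 1]
    phi:    original direction-of-arrival in radians
    d:      zoom control (d = 0 means no zoom)"""
    if d <= 0.0:
        return psi_in                        # bypass line 552: psi_mod = psi_in
    gamma = gamma_0 / (1.0 + d)              # window shrinks with the zoom factor
    g = 1.0 / (1.0 + d)                      # mapping strength grows with the zoom
    if abs(phi) <= gamma:                    # inside the window: less diffuse
        psi_mod = g * psi_in
    else:                                    # outside the window: more diffuse
        psi_mod = psi_in + (1.0 - g) * (1.0 - psi_in)
    return float(np.clip(psi_mod, 0.0, 1.0))
```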
  • In the following, embodiments of the downmix modification as, for example, performed by the downmix modification unit 302, are described in more detail.
  • The filters for the downmix signal are used to modify the gain of the direct and diffuse part of the output signal. As a direct consequence of the spatial audio coder concept, the loudspeaker signals are thus modified. The sound of the zoomed-in area is amplified, while sound from out-of-interest directions can be attenuated.
• As the downmix signal 112 may be a mono signal (for directional audio coding, DirAC) or a stereo signal (for spatial audio microphones, SAM), two different embodiments of the modification are described in the following.
• First, an embodiment for a mono downmix modification, i.e. an embodiment for a modification of a mono downmix audio signal W 112, is described. For the following considerations, it is useful to introduce a signal model of the mono downmix signal W(k,n) which is similar to the one already applied for the loudspeaker signal synthesis according to (1):

$$W(k,n) = S(k,n) + N(k,n) \tag{6}$$
• Here, S(k,n) denotes the direct sound component of the downmix signal, N(k,n) denotes the diffuse sound component in the original downmix signal, k denotes the time index or time instant the signal represents, and n denotes the frequency bin or frequency channel of the signal at the given time instant k.
• Let Wmod(k,n) denote the modified mono downmix signal. It is obtained by processing the original downmix signal according to

$$W_{\mathrm{mod}}(k,n) = H_1(k,n,\varphi,d)\,S(k,n) + H_2(k,n,\varphi,d)\,N(k,n)$$

where H1(k,n,ϕ,d) and H2(k,n,ϕ,d) represent filters applied to the direct and the diffuse components of the signal model, ϕ represents the original direction-of-arrival and d the zoom factor or zoom parameter. The direct sound components 112a and the diffuse sound components 112b can be computed analogously to (2), (3), i.e. by

$$S(k,n) = W(k,n)\,\sqrt{1 - \Psi(k,n)}$$

and

$$N(k,n) = W(k,n)\,\sqrt{\Psi(k,n)}.$$
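• In code, the split of equations (2) and (3) amounts to two element-wise scalings of the downmix; the following sketch assumes W and ψ are given as NumPy arrays over the time/frequency grid.

```python
import numpy as np

def split_downmix(W, psi):
    """Split the downmix W(k,n) into the direct component S(k,n) and the
    diffuse component N(k,n) using the diffuseness psi(k,n) in [0, 1]."""
    S = W * np.sqrt(1.0 - psi)   # direct sound component 112a, eq. (2)
    N = W * np.sqrt(psi)         # diffuse sound component 112b, eq. (3)
    return S, N
```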
• Both filters are direction dependent weighting functions. For example, a cardioid-shaped pickup pattern of a microphone can be taken as a design criterion for such weighting functions.
  • The filter H1(k,n,ϕ,d) can be implemented as a raised cosine window such that the direct sound is amplified for directions of the zoomed-in area, whereas the level of sound coming from other directions is attenuated. In general, different window shapes can be applied to the direct and the diffuse sound components, respectively.
• The gain filter implemented by the windows may be controlled by the actual translation or zoom control factor d. For example, the zoom factor controls the width of the region of maximal (equal) gain around the focused direction as well as the overall width and gain of the window. Examples for different gain windows are given in Fig. 6.
  • Fig. 6 shows different gain windows for the weighting filter H1(k,n,ϕ,d). Four different gain prototypes are shown:
1. solid line: no zoom is applied; the gain is 0 dB for all directions (see 612).
2. dashed line: a zoom factor of 1.3 is applied; the window has a width of 210° for the maximal gain, and the maximal gain is 2.3 dB (see 614).
3. dotted line: a zoom factor of 2.1 is applied; the window width for the maximal gain is decreased to 140°, the maximal gain is 3 dB and the lowest gain is -2.5 dB (see 616).
4. dash-dotted line: the zoom factor is 2.8; the window width is 30° for the maximal gain, and the gain is limited to a maximum of +3 dB and a minimum of -6 dB (see 618).
  • As can be seen from Fig. 6, the first listening orientation represented by 0° in Fig. 6, forms the center of different zoom factor dependent direction dependent windows, wherein the predetermined central range or width of the direction dependent windows is the smaller the greater the zoom factor. The borders of the central range or window are defined by the angle γ at which the gain is 0 dB. Fig. 6 shows symmetric windows with positive and negative borders having the same magnitude.
  • Window 614 has a width of 210° for the maximum gain and a predetermined central region with a width of 260° with borders +/- γ2 at +/- 130°, wherein direct components inside or within the predetermined central region are increased and direct components outside of the predetermined central region remain unamended (gain = 0 dB).
  • Window 616 has a width of 140° for the maximum gain and a predetermined central region with a width of 180° with borders or angular widths +/- γ3 at +/- 90°, wherein direct components inside or within the predetermined central region are increased and direct components outside of the predetermined central region are reduced (negative gain down to -2.5dB).
  • Window 618 has a width of 30° for the maximum gain and a predetermined central region with a width of 60° with borders or angular widths +/- γ4 at +/- 30°, wherein direct components inside or within the predetermined central region are increased and direct components outside of the predetermined central region are reduced (negative gain down to -6dB).
• In certain embodiments, therefore, the zoom factor d controls the width, i.e. the negative and positive borders and the total width, and the gain of the prototype windows. Thus, the window can already be designed such that the width and the gain are correctly applied to the original directions-of-arrival ϕ.
• The maximal gain should be limited in order to avoid distortions in the output signals. The width of the window and the exact shape shown here should be considered illustrative examples of how the zoom factor controls various aspects of a gain window. Other implementations may be used in different embodiments.
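• The following sketch shows one possible raised cosine gain window for H1 with a zoom-controlled plateau width and gain limits. The numeric laws for the width and the +3 dB/-6 dB limits are assumptions in the spirit of Fig. 6; they do not reproduce the four prototypes 612 to 618 exactly.

```python
import numpy as np

def h1_gain(phi_deg, d):
    """Hypothetical raised cosine window for the direct-sound filter H1.
    phi_deg: original direction-of-arrival in degrees (-180..180)
    d:       zoom factor (d <= 1 means no zoom)
    Returns a linear gain factor."""
    if d <= 1.0:
        return 1.0                                  # 0 dB for all directions
    plateau = max(30.0, 280.0 / d)                  # width of the maximal-gain region
    g_max_db = min(3.0, 2.0 * (d - 1.0))            # maximal gain, limited to +3 dB
    g_min_db = max(-6.0, -3.0 * (d - 1.0))          # minimal gain, limited to -6 dB
    a = abs(phi_deg)
    if a <= plateau / 2.0:                          # inside the plateau
        g_db = g_max_db
    else:                                           # raised cosine rolloff to the minimum
        t = (a - plateau / 2.0) / (180.0 - plateau / 2.0)
        g_db = g_min_db + (g_max_db - g_min_db) * 0.5 * (1.0 + np.cos(np.pi * t))
    return 10.0 ** (g_db / 20.0)
```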
• The filter H2(k,n,ϕ,d) is used to modify the diffuse part 112b of the downmix signal analogously to the way the diffuseness measure ψ(k,n) has been modified, and can be implemented as a subcardioid window as shown in Fig. 7. By applying such windows, the diffuse parts from the out-of-interest directions are attenuated slightly, while the zoomed-in directions remain unchanged or nearly unchanged. Fig. 7 shows a subcardioid window 702 which keeps the diffuse component almost unaltered in an area between -30° and +30° around the original direction-of-arrival ϕ and attenuates the diffuse component increasingly with the deviation of the original direction-of-arrival ϕ from the 0° orientation. In other words, for the zoomed-in area, the diffuse signal components in the downmix signal remain unaltered. This will result in a more direct sound reproduction in the zoom direction. The sounds that come from all other directions are rendered more diffuse, as the microphone has been virtually placed farther away. Thus, those diffuse parts will be attenuated compared to those of the original downmix signal. Obviously, the desired gain filter can also be designed using the previously described raised cosine windows. Note, however, that the scaling will be less pronounced than in the case of the direct sound modification. In further embodiments, the windows can depend on the zoom factor, wherein the slope of the window function 702 is the steeper, the higher the zoom factor.
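• A subcardioid weighting as used for H2 can be sketched as follows; the parameter alpha is an assumption that sets how flat the window is (the closer to 1, the milder the attenuation of diffuse sound from out-of-interest directions).

```python
import numpy as np

def h2_gain(phi_rad, alpha=0.9):
    """Hypothetical subcardioid window for the diffuse-part filter H2:
    close to unity around 0 degrees, mild attenuation towards the rear."""
    return alpha + (1.0 - alpha) * np.cos(phi_rad)  # 1.0 at 0 deg, 2*alpha - 1 at 180 deg
```

With alpha = 0.9, the gain is about 0.99 at ±30° and 0.8 at 180°, matching the qualitative shape of window 702.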
  • In the following, an embodiment of a stereo downmix modification, i.e. a modification of a stereo downmix signal W is described.
• In the following it is described how the downmix modification has to be performed in case of a stereo downmix as required for the SAM approach. For the original stereo downmix signal, a two-channel signal model analogous to the mono case (6) is introduced:

$$W_1(k,n) = S(k,n) + N_1(k,n)$$

$$W_2(k,n) = c\,S(k,n) + N_2(k,n)$$
• Again, the signal S(k,n) represents direct sound, while Ni denotes the diffuse sound for the i-th microphone. Analogously to (2), (3), the direct and diffuse sound components can be determined from the downmix channels based on the diffuseness measure. The gain factor c corresponds to a different scaling of the direct sound component in the different stereo channels, which arises from the different directivity patterns associated with the two downmix channels. More details on the relation between the scaling factor and the DOA of direct sound can be found in SAM. Since this scaling depends on the DOA of sound of the observed sound field, its value has to be modified in accordance with the DOA remapping resulting from the modified virtual recording location.
• The modified stereo downmix signal corresponding to the new virtual microphone position can be written as

$$W_{1,\mathrm{mod}}(k,n) = G_{11}(k,n,\varphi,d)\,S(k,n) + G_{12}(k,n,\varphi,d)\,N_1(k,n)$$

$$W_{2,\mathrm{mod}}(k,n) = G_{21}(k,n,\varphi,d)\,c_{\mathrm{mod}}\,S(k,n) + G_{22}(k,n,\varphi,d)\,N_2(k,n)$$
• The computation of the gain filters Gij(k,n,ϕ,d) is performed in accordance with the corresponding gain filters Hi(k,n,ϕ,d) as discussed for the mono downmix case. The new stereo scaling factor cmod is determined as a function of the modified DOA such that it corresponds to the new virtual recording location.
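• The stereo case can be sketched as follows; the per-channel component split is one plausible reading of "analogously to (2), (3)", and G11..G22 as well as c_mod are assumed to be supplied by the caller (their derivation follows the mono filters and the DOA remapping, respectively).

```python
import numpy as np

def modify_stereo_downmix(W1, W2, psi, c_mod, G11, G12, G21, G22):
    """Illustrative stereo downmix modification following the two-channel
    signal model; G11..G22 are direction dependent gains already evaluated
    for the current tile, c_mod is the remapped stereo scaling factor.
    This is an illustrative reading of the text, not the SAM reference
    implementation."""
    S = W1 * np.sqrt(1.0 - psi)          # direct sound, taken from channel 1
    N1 = W1 * np.sqrt(psi)               # diffuse sound, channel 1
    N2 = W2 * np.sqrt(psi)               # diffuse sound, channel 2
    W1_mod = G11 * S + G12 * N1
    W2_mod = G21 * c_mod * S + G22 * N2
    return W1_mod, W2_mod
```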
• Referring back to Figs. 2 and 3A, embodiments of the present invention provide an apparatus 300 for converting a first parametric spatial audio signal 112, 114 representing a first listening position or a first listening orientation in a spatial audio scene to a second parametric spatial audio signal 212, 214 representing a second listening position or a second listening orientation, the second listening position or second listening orientation being different to the first listening position or first listening orientation. The apparatus comprises a spatial audio signal modification unit 301, 302 adapted to modify the first parametric spatial audio signal 112, 114 dependent on a change of the first listening position or the first listening orientation so as to obtain the second parametric spatial audio signal 212, 214, wherein the second listening position or the second listening orientation corresponds to the first listening position or the first listening orientation changed by the change.
  • Embodiments of the apparatus 300 can be adapted to convert only a single side information parameter, for example, the direction-of-arrival 114a or the diffuseness parameter 114b, or only the audio downmix signal 112 or some or all of the aforementioned signals and parameters.
• As described before, in embodiments using directional audio coding (DirAC), the analog microphone signals are digitized and processed to provide a downmixed time/frequency representation W(k,n) of the microphone signals, representing, for each time instant or block k, a frequency representation, wherein each frequency bin of the frequency or spectral representation is denoted by the index n. In addition to the downmix signal 112, the spatial audio analysis unit 100 determines, for each time instant k and for each frequency bin n of the corresponding time instant k, one unit vector e_DOA (cf. equation (4)), providing, for each frequency bin n and each time instant k, the directional parameter or information. In addition, the spatial audio analysis unit 100 determines, for each time instant k and each frequency bin n, a diffuseness parameter ψ defining a relation between the direct sound or audio components and the diffuse sound or audio components, wherein the diffuse components are, for example, caused by two or more audio sources and/or by reflections of audio signals from the audio sources.
• DirAC is a very processing- and memory-efficient coding scheme, as it reduces the spatial audio information defining the audio scene, for example, audio sources, reflections, and the position and orientation of the microphones and, respectively, of the listener, (for each time instant k and each frequency bin n) to one directional information item, i.e. a unit vector e_DOA(k,n), and one diffuseness value ψ(k,n) between 0 and 1, associated with the corresponding one (mono) downmix audio signal W(k,n) or several (e.g. stereo) downmix audio signals W1(k,n) and W2(k,n).
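• For illustration, the complete DirAC parameter set of one time/frequency tile can be held in a small container like the following (names are illustrative, not from the document):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DiracTile:
    """Illustrative container for the DirAC parameters of one
    time/frequency tile (k, n), as described above."""
    W: complex            # (mono) downmix value W(k, n)
    e_doa: np.ndarray     # unit vector e_DOA(k, n), 2D or 3D
    psi: float            # diffuseness value in [0, 1]
```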
• Embodiments using the aforementioned directional audio coding (DirAC) are, therefore, adapted to modify, for each time instant k and each frequency bin n, the corresponding downmix value W(k,n) to Wmod(k,n), the corresponding direction-of-arrival parameter value e(k,n) to emod(k,n) (in Figs. 1 to 3 represented by ϕ and ϕmod, respectively) and/or the diffuseness parameter value ψ(k,n) to ψmod(k,n).
• The spatial audio signal modification unit comprises or is formed by, for example, the parameter modification unit 301 and the downmix modification unit 302. According to a preferred embodiment, the parameter modification unit 301 is adapted to process the original directional parameter 114a to determine the modified directional parameter 214a, to process the diffuseness parameter ψ depending on the original directional parameter ϕ, respectively 114a, to split the downmix signal 112 using equations (2) and (3) and the original diffuseness parameter ψ, respectively 114b, and to apply the direction dependent filtering H1(k,n,ϕ,d) and H2(k,n,ϕ,d) dependent on the original directional parameter ϕ, respectively 114a. As explained previously, these modifications are performed for each time instant k and each frequency bin n to obtain, for each time instant k and each frequency bin n, the respective modified signals and/or parameters.
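• The Fig. 3A processing order just described can be summarized per time/frequency tile as in the sketch below, where f_p, f_d, h1 and h2 stand for the remapping and weighting functions discussed above (passed in as callables); note that every direction dependent step uses the original direction-of-arrival ϕ.

```python
def process_tile_fig3a(W, phi, psi, d, f_p, f_d, h1, h2):
    """Per-tile processing in the Fig. 3A order (illustrative sketch)."""
    phi_mod = f_p(phi, d)                    # direction-of-arrival remapping
    psi_mod = f_d(psi, phi, d)               # diffuseness mapping on ORIGINAL phi
    S = W * (1.0 - psi) ** 0.5               # direct part, eq. (2)
    N = W * psi ** 0.5                       # diffuse part, eq. (3)
    W_mod = h1(phi, d) * S + h2(phi, d) * N  # downmix weighting on ORIGINAL phi
    return W_mod, phi_mod, psi_mod
```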
  • According to one embodiment, the apparatus 300 is adapted to only modify the first directional parameter 114a of the first parametric spatial audio signal to obtain a second directional parameter 214a of the second parametric spatial audio signal depending on the control signal 402, for example, a rotation control signal or a zoom control signal. In case the change of the listening position/orientation only comprises a rotation and no translation or zoom, a corresponding modification or shift of the directional parameter ϕ(k,n) 114a is sufficient. The corresponding diffuseness parameters and downmix signal components can be left un-amended so that the second downmix signal 212 corresponds to the first downmix signal 112 and the second diffuseness parameter 214b corresponds to the first diffuseness parameter 114b.
• In case a translational change, for example a zoom, is performed, a modification of the directional parameter ϕ(k,n) 114a according to a remapping function as shown in Fig. 5A already improves the sound experience and provides for a better synchronization between the audio signal and, for example, a video signal compared to the unmodified or original parametric spatial audio signal (without modifying the diffuseness parameter or the downmix signal).
  • The above two embodiments which only comprise adapting or remapping the direction-of-arrival by the filter fp already provide a good impression of the zooming effect.
  • According to another embodiment, the apparatus 300 is adapted to only apply filter H1(k,n,ϕ,d). In other words, this embodiment does not perform direction-of-arrival remapping or diffuseness modification. This embodiment is adapted to only determine, for example, the direct component 112a from the downmix signal 112 and to apply the filter function H1 to the direct component to produce a direction dependent weighted version of the direct component. Such embodiments may be further adapted to use the direction dependent weighted version of the direct component as modified downmix signal W mod 212, or to also determine the diffuse component 112b from the original downmix signal W 112 and to generate the modified downmix signal W mod 212 by adding, or in general combining, the direction dependent weighted version of the direct component and the original or unaltered diffuse component 112b. An improved impression of the acoustic zooming can be achieved, however, the zoom effect is limited because the direction-of-arrival is not modified.
• In an even further embodiment, the filters H1(k,n,ϕ,d) and H2(k,n,ϕ,d) are both applied; however, no direction-of-arrival remapping or diffuseness modification is performed. The acoustic impression is improved compared to the unamended or original parametric spatial audio signal 112, 114. The zooming impression is also better than when only applying the filter function H1(k,n,ϕ,d) to the direct component in the presence of diffuse sound; it is, however, still limited, because the direction-of-arrival ϕ is not modified.
• In an even further embodiment, only the filter fd is applied, or in other words, only the diffuseness parameter ψ is modified. The zooming effect is improved compared to the original parametric spatial audio signal 112, 114 because the diffuseness values of zoomed-in areas (areas of interest) are reduced and the diffuseness values of out-of-interest areas are increased.
  • Further embodiments are adapted to perform the remapping of the direction-of-arrival ϕ by the filter function fp in combination with applying the filter H1(k,n,ϕ,d) alone. In other words, such embodiments do not perform a diffuseness modification according to the filter function fd and do not apply the second filter function H2(k,n,ϕ,d) to a diffuse component of the original downmix signal W 112. Such embodiments provide a very good zoom impression that is better than only applying the direction-of-arrival remapping.
  • Embodiments applying the direction-of-arrival remapping according to function fp in combination with a downmix modification using both filter functions H1(k,n,ϕ,d) and H2(k,n,ϕ,d) provide even better zoom impressions than only applying the direction-of-arrival remapping combined with applying the first filter function H1 alone.
• Applying the direction-of-arrival remapping according to function fp, the downmix modification using filters H1(k,n,ϕ,d) and H2(k,n,ϕ,d), and the diffuseness modification using function fd provides the best acoustical zoom implementation.
• Referring back to the embodiment remapping only the direction-of-arrival, additionally modifying the diffuseness parameter 114b further improves the audio experience or, in other words, improves the adaptation of the sound experience with regard to the changed position within the spatial audio scene. Therefore, in further embodiments, the apparatus 300 can be adapted to only modify the directional parameter ϕ(k,n) and the diffuseness parameter ψ(k,n), but not to modify the downmix signal W(k,n) 112.
  • Preferred embodiments of the apparatus 300 as mentioned above also comprise modifying the downmix signal W(k,n) to even further improve the audio experience with regard to the changed position in the spatial audio scene.
  • Therefore, in embodiments, wherein the first directional parameter ϕ(k,n) 114a is a vector, the parameter modification unit 301 is adapted to shift or modify the first directional parameter by an angle defined by a rotation control signal in a reverse direction to a direction defined by the rotation control signal to obtain the second directional parameter ϕ mod(k,n) 214a.
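• For a two-dimensional directional vector, this counter-rotation can be sketched as follows (the 3D case adds a rotation axis but is otherwise analogous):

```python
import numpy as np

def rotate_doa(e_doa, rotation_angle):
    """Sketch of the rotation case in 2D: the direction-of-arrival unit
    vector is rotated by the rotation angle in the REVERSE direction of
    the listener's rotation (counter-rotation of the scene)."""
    c, s = np.cos(-rotation_angle), np.sin(-rotation_angle)
    rot = np.array([[c, -s], [s, c]])  # standard 2D rotation matrix
    return rot @ e_doa
```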
  • In further embodiments, the parameter modification unit 301 is adapted to obtain the second directional parameter 214a using a non-linear mapping function (as, for example, shown in Fig. 5A) defining the second directional parameter 214a depending on the first directional parameter ϕ(k,n) and a zoom factor d defined by a zoom control signal 402 or another translational control information defined by the change signal.
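• One plausible form of such a non-linear mapping follows directly from the Fig. 4 geometry if one assumes, purely for illustration, that sources lie on a circle of radius r around the original listening position; moving the virtual microphone forward by d then remaps the angle α to β:

```python
import numpy as np

def f_p(alpha, d, r=1.0):
    """Hypothetical non-linear DOA remapping f_p (assumed geometry):
    alpha: original direction-of-arrival in radians
    d:     forward translation / zoom parameter, 0 <= d < r
    r:     assumed source distance from the original position"""
    return np.arctan2(r * np.sin(alpha), r * np.cos(alpha) - d)
```

For d = 0 the mapping is the identity; for growing d, small frontal angles are stretched towards the sides, consistent with object B in Fig. 4 moving from a frontal to a side position.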
  • As described above, in further embodiments, the parameter modification unit 301 can be adapted to modify the first diffuseness parameter ψ(k,n) 114b of the first parametric spatial audio signal to obtain a second diffuseness parameter ψ mod(k,n) 214b depending on the first directional parameter ϕ(k,n) 114a. The parameter modification unit can be further adapted to obtain the second diffuseness parameter ψ mod(k,n) using a direction dependent function adapted to decrease the first diffuseness parameter ψ(k,n) to obtain the second diffuseness parameter ψ mod(k,n) in case the first directional parameter ϕ(k,n) is within a predetermined central range, for example γ = +/- 30° of the original reference orientation (see Fig. 5B), and/or to increase the first diffuseness parameter ψ(k,n) to obtain the second diffuseness parameter ψ mod(k,n) in case the first directional parameter ϕ(k,n) is outside of the predetermined central range, for example, in a two-dimensional case outside the central range defined by + γ = +30° and - γ = -30° from the 0° original reference orientation.
  • In other words, in certain embodiments the parameter modification unit 301, 310b is adapted to obtain the second diffuseness parameter 214b using a direction dependent function adapted to decrease the first diffuseness parameter 114b to obtain the second diffuseness parameter 214b in case the first directional parameter 114a is within a predetermined central range of the second directional parameter with the second or changed listening orientation forming the center of the predetermined two-dimensional or three-dimensional central range and/or to increase the first diffuseness parameter 114b to obtain the second diffuseness parameter in case the first directional parameter 114a is outside of the predetermined central range. The first or original listening orientation defines a center, e.g. 0°, of the predetermined central range of the first directional parameter, wherein a positive and a negative border of the predetermined central range is defined by a positive and a negative angle γ in a two-dimensional (e.g. horizontal) plane (e.g. +/-30°) independent of whether the second listening orientation is a two-dimensional or a three-dimensional vector, or by a corresponding angle γ (e.g. 30°) defining a right circular cone around the three-dimensional first listening orientation. Further embodiments can comprise different predetermined central regions or windows, symmetric and asymmetric, arranged or centered around the first listening orientation or a vector defining the first listening orientation.
• In further embodiments, the direction-dependent function fd(k,n,ϕ,d) depends on the change signal, for example, the zoom control signal, wherein the predetermined central range, respectively the values γ defining the negative and positive border (or, in general, the border) of the central range, is the smaller, the greater the translational change or the higher the zoom factor defined by the zoom control signal.
  • In further embodiments, the spatial audio signal modification unit comprises a downmix modification unit 302 adapted to modify the first downmix audio signal W(k,n) of the first parametric spatial audio signal to obtain a second downmix signal Wmod(k,n) of the second parametric spatial audio signal depending on the first directional parameter ϕ(k,n) and the first diffuseness parameter ψ(k,n). Embodiments of the downmix modification unit 302 can be adapted to split the first downmix audio signal W into a direct component S(k,n) 112a and a diffuse component N(k,n) 112b dependent on the first diffuseness parameter ψ(k,n), for example, based on equations (2) and (3).
• In further embodiments, the downmix modification unit 302 is adapted to apply a first direction dependent function H1(k,n,ϕ,d) to obtain a direction dependent weighted version of the direct component and/or to apply a second direction dependent function H2(k,n,ϕ,d) to the diffuse component to obtain a direction dependent weighted version of the diffuse component. The downmix modification unit 302 can be adapted to produce the direction dependent weighted version of the direct component 112a by applying a further direction dependent function H1(k,n,ϕ,d) to the direct component, the further direction dependent function being adapted to increase the direct component 112a in case the first directional parameter 114a is within the further predetermined central range of the first directional parameters and/or to decrease the direct component 112a in case the first directional parameter 114a is outside of the further predetermined range of the first directional parameters. In even further embodiments, the downmix modification unit can be adapted to produce the direction dependent weighted version of the diffuse component 112b by applying a direction dependent function H2(k,n,ϕ,d) to the diffuse component 112b, the direction dependent function being adapted to decrease the diffuse component in case the first directional parameter 114a is within a predetermined central range of the first directional parameters and/or to increase the diffuse component 112b in case the first directional parameter 114a is outside of the predetermined range of the first directional parameters.
  • In other embodiments, the downmix modification unit 302 is adapted to obtain the second downmix signal 212 based on a combination, e.g. a sum, of a direction dependent weighted version of the direct component 112a and a direction dependent weighted version of the diffuse component 112b. However, further embodiments may apply other algorithms than summing the two components to obtain the modified downmix signal 212.
• As explained previously, embodiments of the downmix modification unit 302 can be adapted to split up the downmix signal W into a diffuse part or component 112b and a non-diffuse or direct part or component 112a by two multiplicators, namely √ψ and √(1 − ψ), and to filter the non-diffuse part 112a by the filter function H1 and the diffuse part 112b by the filter function H2. The filter function H1 or H1(k,n,ϕ,d) can be dependent on the time/frequency indices k, n, the original direction-of-arrival ϕ and the zoom parameter d. The filter function H1 may additionally be dependent on the diffuseness ψ. The filter function H2 or H2(k,n,ϕ,d) can be dependent on the time/frequency indices k, n, the original direction-of-arrival ϕ and the zoom parameter d. The filter function H2 may additionally be dependent on the diffuseness ψ. As was described previously, the filter function H2 can be implemented as a subcardioid window as shown in Fig. 7, or as a simple attenuation factor, independent of the direction-of-arrival ϕ.
  • Referring to the above explanations, the zoom parameter d can be used to control the filters H1, H2 and the modifiers or functions fd and fp (see Fig. 3A). For the filter function H1 and fd the zoom parameter d can also control the look width or angular width γ (also referred to as border angle γ) of the applied windows or central regions. The width γ is defined, e.g. as the angle at which the filter function has 0 dB (see e.g. the 0 dB line in Fig. 6). The angular width γ and/or the gain can be controlled by the zoom parameter d. An example of different values for γ and different maximum gains and minimum gains is given in Fig. 6.
• While embodiments of the apparatus have been described above wherein the direction dependent functions and weighting depend on the first or original directional parameter ϕ (see Fig. 3A), other embodiments can be adapted to determine the second or modified diffuseness ψmod and/or one or both of the filter functions H1 and H2 dependent on the second or modified directional parameter ϕmod. As can be determined from Fig. 4, where α corresponds to the original directional parameter ϕ and β corresponds to the modified directional parameter ϕmod (for zoom-in), the higher the zoom factor d, the more object B moves from a central or frontal position to a side position, or even (in case of even higher zoom factors d than shown in Fig. 4) to a position in the back of the virtually modified position. In other words, the higher the zoom factor d, the more the magnitude of an initially small angle representing a position in a frontal area of the listener increases, wherein higher angles represent positions in a side area of the listener. This modification of the directional parameter is taken into account by applying a function as shown in Fig. 5A. In addition, the direction dependent windows or functions for the other parameters and for the direct and diffuse components can also be designed to take into account the modification of the original directional parameter or angle, by reducing the angular width γ with increasing zoom d, for example in a non-linear manner corresponding to the direction-of-arrival or directional parameter mapping as shown in Fig. 5A. Therefore, these direction dependent windows or functions can be adapted such that the original directional parameter can be used directly (e.g. without prior modification by the function fp), or, alternatively, the directional parameter mapping fp is performed first and afterwards the direction dependent weighting fd, H1 and/or H2 is performed in a similar manner based on the modified directional parameter. Referring to Fig. 4 again, both are thus possible: direction dependent functions fd, H1 and H2 referring directly to α, representing the original directional parameter (for zoom-in), or direction dependent functions fd, H1 and H2 referring to β, representing the modified directional parameter.
  • Embodiments using the modified directional parameter can employ, similar to the embodiments using the original directional parameter, different windows with different angular widths and/or different gains for different zoom factors, or, the same windows with the same angular width (because the directional parameter has already been mapped to reflect the different zoom factors) and the same gain, or windows with the same angular widths but different gains, wherein a higher zoom factor results in a higher gain (analog to the windows in Fig. 6).
• Fig. 3B shows a further embodiment of the apparatus. The spatial audio signal modification unit in Fig. 3B comprises or is formed by, for example, the parameter modification unit 301 and the downmix modification unit 302. According to an alternative embodiment, the parameter modification unit 301 is adapted to first process the original directional parameter 114a to determine the modified directional parameter 214a, to then process the diffuseness parameter ψ depending on the modified directional parameter ϕmod, respectively 214a, to split the downmix signal 112 using equations (2) and (3) and the original diffuseness parameter ψ, respectively 114b, as described based on Fig. 3A, and to apply the direction dependent filtering H1 and H2 dependent on the modified directional parameter ϕmod, respectively 214a. As explained previously, these modifications are performed for each time instant k and each frequency bin n to obtain, for each time instant k and each frequency bin n, the respective modified signals and/or parameters.
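• In contrast to the Fig. 3A sketch given earlier, the Fig. 3B ordering remaps the direction first; a corresponding per-tile sketch (same illustrative conventions as before) is:

```python
def process_tile_fig3b(W, phi, psi, d, f_p, f_d, h1, h2):
    """Per-tile processing in the Fig. 3B order (illustrative sketch):
    the direction dependent steps use the MODIFIED direction phi_mod."""
    phi_mod = f_p(phi, d)                            # remap the direction first
    psi_mod = f_d(psi, phi_mod, d)                   # diffuseness mapping on phi_mod
    S = W * (1.0 - psi) ** 0.5                       # split with the ORIGINAL psi
    N = W * psi ** 0.5
    W_mod = h1(phi_mod, d) * S + h2(phi_mod, d) * N  # weighting on phi_mod
    return W_mod, phi_mod, psi_mod
```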
  • According to another alternative embodiment of the apparatus 300 according to Fig. 3B, the parameter modification unit 301 is adapted to process the original parameter 114a to determine the modified directional parameter 214a, to process the diffuseness parameter ψ depending on the original directional parameter ϕ or 114a, to determine the modified diffuseness parameter ψmod or 214b, to split the downmix signal 112 using equations (2) and (3) and the original diffuseness parameter ψ or 114b as described based on Fig. 3A, and to apply the direction dependent filtering H1 and H2 dependent on the modified directional parameter ϕmod, or 214a.
  • According to one embodiment, the apparatus 300 according to Fig. 3B is adapted to only modify the first directional parameter 114a of the first parametric spatial audio signal to obtain a second directional parameter 214a of the second parametric spatial audio signal depending on the control signal 402, for example, a rotation control signal or a zoom control signal. In case the change of the listening position/orientation only comprises a rotation and no translation or zoom, a corresponding modification or shift of the directional parameter ϕ(k,n) 114a is sufficient. The corresponding diffuseness parameters and downmix signal components can be left un-amended so that the second downmix signal 212 corresponds to the first downmix signal 112 and the second diffuseness parameter 214b corresponds to the first diffuseness parameter 114b.
• In case a translational change, for example a zoom, is performed, a modification of the directional parameter ϕ(k,n) 114a according to a remapping function as shown in Fig. 5A already improves the sound experience and provides for a better synchronization between the audio signal and, for example, a video signal compared to the unmodified or original parametric spatial audio signal (without modifying the diffuseness parameter or the downmix signal).
• Modifying the diffuseness parameter 114b further improves the audio experience or, in other words, improves the adaptation of the sound experience with regard to the changed position within the spatial audio scene. Therefore, in further embodiments, the apparatus 300 can be adapted to only modify the directional parameter ϕ(k,n) and the diffuseness parameter ψ(k,n), the latter dependent on the modified directional parameter ϕmod(k,n), but not to modify the downmix signal W(k,n) 112.
  • Preferred embodiments of the apparatus 300 according to Fig. 3B also comprise modifying the downmix signal W(k,n) dependent on the original diffuseness ψ(k,n) and the modified directional parameter ϕ mod(k,n) to even further improve the audio experience with regard to the changed position in the spatial audio scene.
  • Therefore, in embodiments, wherein the first directional parameter ϕ(k,n) 114a is a vector, the parameter modification unit 301 is adapted to shift or modify the first directional parameter by an angle defined by a rotation control signal in a reverse direction to a direction defined by the rotation control signal to obtain the second directional parameter ϕ mod(k,n) 214a.
  • In further embodiments, the parameter modification unit 301 is adapted to obtain the second directional parameter 214a using a non-linear mapping function (as, for example, shown in Fig. 5A) defining the second directional parameter 214a depending on the first directional parameter ϕ(k,n) and a zoom factor d defined by a zoom control signal 402 or another translational control information defined by the change signal.
  • As described above, in further embodiments, the parameter modification unit 301 can be adapted to modify the first diffuseness parameter ψ(k,n) 114b of the first parametric spatial audio signal to obtain a second diffuseness parameter ψmod (k,n) 214b depending on the second directional parameter ϕ mod(k,n) 214a. The parameter modification unit can be further adapted to obtain the second diffuseness parameter ψmod (k,n) using a direction dependent function adapted to decrease the first diffuseness parameter ψ(k,n) to obtain the second diffuseness parameter ψ mod(k,n) in case the second directional parameter ϕ mod(k,n) is within a predetermined central range, for example +/- 30° of the original reference orientation referred to as original 0° orientation, and/or to increase the first diffuseness parameter ψ(k,n) to obtain the second diffuseness parameter ψmod (k,n) in case the second directional parameter ϕ mod(k,n) is outside of the predetermined central range, for example, in a two-dimensional case outside the central range defined by +30° and -30° from the 0° original reference orientation.
  • In other words, in certain embodiments the parameter modification unit 301, 310b is adapted to obtain the second diffuseness parameter 214b using a direction dependent function adapted to decrease the first diffuseness parameter 114b to obtain the second diffuseness parameter 214b in case the second directional parameter 214a is within a predetermined central range of the second directional parameter with the first or original listening orientation forming the center of the predetermined two-dimensional or three-dimensional central range and/or to increase the first diffuseness parameter 114b to obtain the second diffuseness parameter in case the second directional parameter 214a is outside of the predetermined central range. The first listening orientation defines a center, e.g. 0°, of the predetermined central range of the second directional parameter, wherein a positive and a negative border of the predetermined central range is defined by a positive and a negative angle in a two-dimensional (e.g. horizontal) plane (e.g. +/-30°) independent of whether the first listening orientation is a two-dimensional or a three-dimensional vector, or by a corresponding angle (e.g. 30°) defining a right circular cone around the three-dimensional second listening orientation. Further embodiments can comprise different predetermined central regions, symmetric and asymmetric, arranged around the first listening orientation or vector defining the first listening orientation.
• In further embodiments, the direction-dependent function fd(ψ) depends on the change signal, for example, the zoom control signal, wherein the predetermined central range, respectively the values defining the negative and positive border (or, in general, the border) of the central range, is the smaller, the greater the translational change or the higher the zoom factor defined by the zoom control signal.
  • In further embodiments, the spatial audio signal modification unit comprises a downmix modification unit 302 adapted to modify the first downmix audio signal W(k,n) of the first parametric spatial audio signal to obtain a second downmix signal Wmod(k,n) of the second parametric spatial audio signal depending on the second directional parameter ϕ mod(k,n) and the first diffuseness parameter ψ(k,n). Embodiments of the downmix modification unit 302 can be adapted to split the first downmix audio signal W into a direct component S(k,n) 112a and a diffuse component N(k,n) 112b dependent on the first diffuseness parameter ψ(k,n), for example, based on equations (2) and (3).
  • In further embodiments, the downmix modification unit 302 is adapted to apply a first direction dependent function H1 to obtain a direction dependent weighted version of the direct component and/or to apply a second direction dependent function H2 to the diffuse component to obtain a direction-dependent weighted version of the diffuse component. The downmix modification unit 302 can be adapted to produce the direction dependent weighted version of the direct component 112a by applying a further direction dependent function H1 to the direct component, the further direction dependent function being adapted to increase the direct component 112a in case the second directional parameter 214a is within the further predetermined central range of the second directional parameters and/or to decrease the direct component 112a in case the second directional parameter 214a is outside of the further predetermined range of the second directional parameters. In even further embodiments the downmix modification unit can be adapted to produce the direction dependent weighted version of the diffuse component 112b by applying a direction dependent function H2 to the diffuse component 112b, the direction dependent function being adapted to decrease the diffuse component in case the second directional parameter 214a is within a predetermined central range of the second directional parameters and/or to increase the diffuse component 112b in case the second directional parameter 214a is outside of the predetermined range of the second directional parameters.
  • In other embodiments, the downmix modification unit 302 is adapted to obtain the second downmix signal 212 based on a combination, e.g. a sum, of a direction dependent weighted version of the direct component 112a and a direction dependent weighted version of the diffuse component 112b. However, further embodiments may apply other algorithms than summing the two components to obtain the modified downmix signal 212.
• As explained previously, embodiments of the downmix modification unit 302 according to Fig. 3B can be adapted to split up the downmix signal W into a diffuse part or component 112b and a non-diffuse or direct part or component 112a by two multiplicators, namely √ψ and √(1 − ψ), and to filter the non-diffuse part 112a by the filter function H1 and the diffuse part 112b by the filter function H2. The filter function H1 or H1(k,n,ϕmod,d) can be dependent on the time/frequency indices k, n, the modified direction-of-arrival ϕmod and the zoom parameter d. The filter function H1 may additionally be dependent on the diffuseness ψ. The filter function H2 or H2(k,n,ϕmod,d) can be dependent on the time/frequency indices k, n, the modified direction-of-arrival ϕmod and the zoom parameter d, and may additionally be dependent on the diffuseness ψ. As was described previously, the filter function H2 can be implemented as a subcardioid window as shown in Fig. 7, or as a simple attenuation factor, independent of the modified direction-of-arrival ϕmod.
• Referring to the above explanations, also in embodiments according to Fig. 3B the zoom parameter d can be used to control the filters H1, H2 and the modifiers or functions fd and fp. For the filter functions H1 and fd, the zoom parameter d can also control the angular width γ (also referred to as border angle γ) of the applied windows or central regions. The width γ is defined, e.g., as the angle at which the filter function has 0 dB (analogous to the 0 dB line in Fig. 6). The angular width γ and/or the gain can be controlled by the zoom parameter d. It should be noted that, in general, the explanations given with regard to the embodiments according to Fig. 3A apply in the same manner or at least in an analogous manner to the embodiments according to Fig. 3B.
  • In the following, exemplary applications are described where the inventive embodiments lead to an improved experience of a joint video/audio playback by adjusting the perceived audio image to the zoom control of a video camera.
• In teleconferencing, it is state-of-the-art to automatically steer the camera towards the active speaker. This is usually connected with zooming closer to the talker. The sound is traditionally not matched to the picture. Embodiments of the present invention provide the possibility of also zooming in on the active talker acoustically. This way, the overall impression is more realistic for the far-end users, as not only is the picture changed in its focus, but the sound also matches the desired change of attention. In short, the acoustical cues correspond to the visual cues.
• Modern camcorders, for example for home entertainment, are capable of recording surround sound and have a powerful optical zoom. There is, however, no perceptual equivalent interaction between the optical zoom and the recorded sound, as the recorded spatial sound only depends on the actual position of the camera and, thus, on the position of the microphones mounted on the camera itself. In case of a scene filmed in close-up mode, the invention allows the audio image to be adjusted accordingly. This leads to a more natural and consistent consumer experience, as the sound is zoomed together with the picture.
  • It should be mentioned that the invention may also be applied in a post-processing phase if the original microphone signals are recorded unaltered with the video and no further processing has been done. Although the original zoom length may not be known, the invention can be used in creative audio-visual post-processing toolboxes. An arbitrary zoom-length can be selected and the acoustical zoom can be steered by the user to match the picture. Alternatively, the user can create his own preferred spatial effects. In either case, the original microphone recording position will be altered to a user defined virtual recording position.
• Depending on certain implementation requirements of the inventive methods, the inventive methods can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, in particular a disc, a CD, a DVD or a Blu-ray disc having an electronically readable control signal stored thereon, which cooperates with a programmable computer system such that an embodiment of the inventive method is performed. Generally, an embodiment of the present invention is, therefore, a computer program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive method when the computer program product runs on a computer. In other words, embodiments of the inventive method are, therefore, a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer.
• The foregoing has been particularly shown and described with reference to particular embodiments thereof. It will be understood by those skilled in the art that various other changes in form and details may be made without departing from the spirit and scope thereof. It is, therefore, to be understood that various changes may be made in adapting the different embodiments without departing from the broader concept disclosed herein and comprehended by the claims that follow.

Claims (18)

  1. Apparatus (300) for converting a first parametric spatial audio signal (112, 114) representing a first listening position or a first listening orientation in a spatial audio scene to a second parametric spatial audio signal (212, 214) representing a second listening position or a second listening orientation; the apparatus comprising:
    a spatial audio signal modification unit (301, 302) adapted to modify the first parametric spatial audio signal (112, 114) dependent on a change of the first listening position or the first listening orientation so as to obtain the second parametric spatial audio signal (212, 214), wherein the second listening position or the second listening orientation corresponds to the first listening position or the first listening orientation changed by the change.
  2. Apparatus according to claim 1, wherein the spatial audio signal modification unit (301, 302) comprises:
    a parameter modification unit (301, 301a) adapted to modify a first directional parameter (114a) of the first parametric spatial audio signal (112, 114) so as to obtain a second directional parameter (214a) of the second parametric spatial audio signal (212, 214) depending on a control signal (402) providing information corresponding to the change.
  3. Apparatus according to claim 2, wherein the first directional parameter (114a) and the second directional parameter (214a) are two-dimensional or three-dimensional vectors.
  4. Apparatus according to claim 2 or 3, wherein the first directional parameter (114a) is a vector, wherein the control signal is a rotation control signal defining a rotation angle and a rotation direction, and wherein the parameter modification unit (301, 301a) is adapted to rotate the vector by the rotation angle in a reverse direction to the rotation direction to obtain the second directional parameter (214a).
  5. Apparatus according to one of the claims 2 to 4, wherein the control signal is a translation control signal (402) defining a translation (d) in direction of the first listening orientation, wherein the parameter modification unit (301, 301a) is adapted to obtain the second directional parameter (214a) using a non-linear mapping function (fp) defining the second directional parameter depending on the first directional parameter (114a) and the translation (d) defined by the control signal.
  6. Apparatus according to one of the claims 2 to 4, wherein the control signal is a zoom control signal (402) defining a zoom factor (d) in direction of the first listening orientation, wherein the parameter modification unit (301, 301a) is adapted to obtain the second directional parameter (214a) using a non-linear mapping function (fp) defining the second directional parameter depending on the first directional parameter (114a) and the zoom factor (d) defined by the zoom control signal.
  7. Apparatus according to one of the claims 2 to 6, wherein the parameter modification unit (301, 301b) is adapted to modify a first diffuseness parameter (114b) of the first parametric spatial audio signal so as to obtain a second diffuseness parameter (214b) of the second parametric spatial audio signal depending on the first directional parameter (114a) or depending on the second directional parameter (214a).
  8. Apparatus according to claim 7, wherein the parameter modification unit (301, 310b) is adapted to obtain the second diffuseness parameter (214b) using a direction dependent function (fd) adapted to decrease the first diffuseness parameter (114b) to obtain the second diffuseness parameter (214b) in case the first directional parameter (114a) is within a predetermined central range of the first directional parameter and/or to increase the first diffuseness parameter (114b) to obtain the second diffuseness parameter in case the first directional parameter (114a) is outside of the predetermined central range, or
    wherein the parameter modification unit (301, 310b) is adapted to obtain the second diffuseness parameter (214b) using a direction dependent function (fd) adapted to decrease the first diffuseness parameter (114b) to obtain the second diffuseness parameter (214b) in case the second directional parameter (214a) is within a predetermined central range of the second directional parameter and/or to increase the first diffuseness parameter (114b) to obtain the second diffuseness parameter in case the second directional parameter (214a) is outside of the predetermined central range.
  9. Apparatus according to claim 8, wherein the control signal is a translation control signal (402) defining a translation (d) in direction of the first listening orientation, wherein the direction dependent function depends on the translation, and wherein the predetermined central range is the smaller the greater the translation defined by the translation control signal; or wherein the control signal is a zoom control signal (402) defining a zoom in direction of the first listening orientation, wherein the direction dependent function depends on the zoom, and wherein the predetermined central range is the smaller the greater a zoom factor (d) defined by the zoom control signal.
10. Apparatus according to one of the claims 7 to 9, the spatial audio signal modification unit (300) comprising:
    a downmix modification unit (302) adapted to modify a first downmix audio signal (112) of the first parametric spatial audio signal to obtain a second downmix signal (212) of the second parametric spatial audio signal depending on the first directional parameter (114a) and/or the first diffuseness parameter (114b), or
    a downmix modification unit (302) adapted to modify the first downmix audio signal (112) of the first parametric spatial audio signal to obtain the second downmix signal (212) of the second parametric spatial audio signal depending on the second directional parameter (214a) and/or the first diffuseness parameter (114b).
  11. Apparatus according to claim 10, wherein the downmix modification unit (302) is adapted to derive a direct component (112a) from the first downmix audio signal (112) and/or a diffuse component (112b) from the first downmix audio signal (112) dependent on the first diffuseness parameter (114b).
12. Apparatus according to claim 11, wherein the downmix modification unit (302) is adapted to determine the direct component (112a) by:

$$S(k,n) = W(k,n)\,\sqrt{1 - \Psi(k,n)}$$

and/or the diffuse component by:

$$N(k,n) = W(k,n)\,\sqrt{\Psi(k,n)}$$
    wherein k is a time index, n is a frequency bin index, W(k,n) refers to the first downmix signal, Ψ(k,n) refers to the first diffuseness parameter, S(k,n) refers to the direct component and N(k,n) refers to the diffuse component derived from the first downmix signal.
  13. Apparatus according to claim 11 or 12, wherein the downmix modification unit (302) is adapted to obtain the second downmix signal (212) based on a direction dependent weighted version of the direct component (112a), on a direction dependent weighted version of the diffuse component (112b) or based on a combination of the direction dependent weighted version of the direct component (112a) and the direction dependent weighted version of the diffuse component (112b).
  14. Apparatus according to claim 13, wherein the downmix modification unit (302) is adapted to produce the direction dependent weighted version of the direct component (112a) by applying a further direction dependent function (H1 ) to the direct component, the further direction dependent function being adapted to increase the direct component (112a) in case the first directional parameter (114a) is within a further predetermined central range of the first directional parameters and/or to decrease the direct component (112a) in case the first directional parameter (114a) is outside of the further predetermined range of the first directional parameters.
  15. Apparatus according to claim 13 or 14, wherein the downmix modification unit is adapted to produce the direction dependent weighted version of the diffuse component (112b) by applying a direction dependent function (H2 ) to the diffuse component (112b),
    the direction dependent function being adapted to decrease the diffuse component in case the first directional parameter (114a) is within a predetermined central range of the first directional parameters and/or to increase the diffuse component (112b) in case the first directional parameter (114a) is outside of the predetermined range of the first directional parameters, or
    the direction dependent function being adapted to decrease the diffuse component in case the second directional parameter (214a) is within a predetermined central range of the second directional parameters and/or to increase the diffuse component (112b) in case the second directional parameter (214a) is outside of the predetermined range of the second directional parameters.
  16. System comprising:
    an apparatus according to one of the claims 1 to 15; and
    a video camera, wherein the apparatus is coupled to the video camera and is adapted to receive a video rotation or a video zoom signal as a control signal.
  17. A method for converting a first parametric spatial audio signal (112, 114) representing a first listening position or a first listening orientation in a spatial audio scene to a second parametric spatial audio signal (212, 214) representing a second listening position or a second listening orientation, the method comprising:
    modifying the first parametric spatial audio signal dependent on a change of the first listening position or the first listening orientation so as to obtain the second parametric spatial audio signal, wherein the second listening position or the second listening orientation corresponds to the first listening position or the first listening orientation changed by the change.
  18. A computer program having a program code for performing the method according to claim 17 when the program runs on a computer.
EP10156263A 2009-12-17 2010-03-11 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal Withdrawn EP2346028A1 (en)

Priority Applications (15)

Application Number Priority Date Filing Date Title
KR1020127017311A KR101431934B1 (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
JP2012543696A JP5426035B2 (en) 2009-12-17 2010-12-14 Apparatus and method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
PCT/EP2010/069669 WO2011073210A1 (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
MX2012006979A MX2012006979A (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal.
CN201080063799.9A CN102859584B (en) 2009-12-17 2010-12-14 Apparatus and method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
CA2784862A CA2784862C (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
EP10796353.0A EP2502228B1 (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
AU2010332934A AU2010332934B2 (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
RU2012132354/08A RU2586842C2 (en) 2009-12-17 2010-12-14 Device and method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
ES10796353.0T ES2592217T3 (en) 2009-12-17 2010-12-14 An apparatus and method for converting a first spatial parametric audio signal into a second spatial parametric audio signal
BR112012015018-9A BR112012015018B1 (en) 2009-12-17 2010-12-14 AN APPARATUS AND METHOD FOR CONVERTING A FIRST PARAMETRIC SPATIAL AUDIO SIGNAL TO A SECOND PARAMETRIC SPATIAL AUDIO SIGNAL
TW099143975A TWI523545B (en) 2009-12-17 2010-12-15 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
ARP100104731A AR079517A1 (en) 2009-12-17 2010-12-17 AN APPARATUS AND A METHOD FOR CONVERTING A FIRST PARAMETRIC SPATIAL AUDIO SIGNAL INTO A SECOND PARAMETRIC SPATIAL AUDIO SIGNAL
US13/523,085 US9196257B2 (en) 2009-12-17 2012-06-14 Apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
HK13103678.8A HK1176733A1 (en) 2009-12-17 2013-03-25 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US28759609P 2009-12-17 2009-12-17

Publications (1)

Publication Number Publication Date
EP2346028A1 true EP2346028A1 (en) 2011-07-20

Family

ID=43748019

Family Applications (2)

Application Number Title Priority Date Filing Date
EP10156263A Withdrawn EP2346028A1 (en) 2009-12-17 2010-03-11 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
EP10796353.0A Active EP2502228B1 (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP10796353.0A Active EP2502228B1 (en) 2009-12-17 2010-12-14 An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal

Country Status (15)

Country Link
US (1) US9196257B2 (en)
EP (2) EP2346028A1 (en)
JP (1) JP5426035B2 (en)
KR (1) KR101431934B1 (en)
CN (1) CN102859584B (en)
AR (1) AR079517A1 (en)
AU (1) AU2010332934B2 (en)
BR (1) BR112012015018B1 (en)
CA (1) CA2784862C (en)
ES (1) ES2592217T3 (en)
HK (1) HK1176733A1 (en)
MX (1) MX2012006979A (en)
RU (1) RU2586842C2 (en)
TW (1) TWI523545B (en)
WO (1) WO2011073210A1 (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2600343A1 (en) * 2011-12-02 2013-06-05 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for merging geometry-based spatial audio coding streams
EP2733965A1 (en) * 2012-11-15 2014-05-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
EP2942981A1 (en) * 2014-05-05 2015-11-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. System, apparatus and method for consistent acoustic scene reproduction based on adaptive functions
EP3209029A1 (en) * 2016-02-16 2017-08-23 Sony Corporation Distributed wireless speaker system
US9794724B1 (en) 2016-07-20 2017-10-17 Sony Corporation Ultrasonic speaker assembly using variable carrier frequency to establish third dimension sound locating
US9826330B2 (en) 2016-03-14 2017-11-21 Sony Corporation Gimbal-mounted linear ultrasonic speaker assembly
US9838821B2 (en) 2013-12-27 2017-12-05 Nokia Technologies Oy Method, apparatus, computer program code and storage medium for processing audio signals
US9866986B2 (en) 2014-01-24 2018-01-09 Sony Corporation Audio speaker system with virtual music performance
WO2018050292A1 (en) * 2016-09-16 2018-03-22 Benjamin Bernard Device and method for capturing and processing a three-dimensional acoustic field
EP3340648A1 (en) * 2016-12-23 2018-06-27 Nxp B.V. Processing audio signals
US10021499B2 (en) 2014-05-13 2018-07-10 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for edge fading amplitude panning
WO2018132385A1 (en) * 2017-01-12 2018-07-19 Pcms Holdings, Inc. Audio zooming in natural audio video content service
WO2018193160A1 (en) * 2017-04-20 2018-10-25 Nokia Technologies Oy Ambience generation for spatial audio mixing featuring use of original and extended signal
GB2563635A (en) * 2017-06-21 2018-12-26 Nokia Technologies Oy Recording and rendering audio signals
WO2019097017A1 (en) * 2017-11-17 2019-05-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding directional audio coding parameters using different time/frequency resolutions
WO2019121864A1 (en) * 2017-12-19 2019-06-27 Koninklijke Kpn N.V. Enhanced audiovisual multiuser communication
WO2019121773A1 (en) * 2017-12-18 2019-06-27 Dolby International Ab Method and system for handling local transitions between listening positions in a virtual reality environment
EP3605531A4 (en) * 2017-03-28 2020-04-15 Sony Corporation Information processing device, information processing method, and program
US10635383B2 (en) 2013-04-04 2020-04-28 Nokia Technologies Oy Visual audio processing apparatus
CN111183479A (en) * 2017-07-14 2020-05-19 弗劳恩霍夫应用研究促进协会 Concept for generating an enhanced or modified sound field description using a multi-layer description
WO2020104726A1 (en) * 2018-11-21 2020-05-28 Nokia Technologies Oy Ambience audio representation and associated rendering
WO2021032909A1 (en) * 2019-08-16 2021-02-25 Nokia Technologies Oy Quantization of spatial audio direction parameters
WO2021032908A1 (en) 2019-08-16 2021-02-25 Nokia Technologies Oy Quantization of spatial audio direction parameters
EP3849202A1 (en) * 2020-01-10 2021-07-14 Nokia Technologies Oy Audio and video processing
US11232802B2 (en) 2016-09-30 2022-01-25 Coronal Encoding S.A.S. Method for conversion, stereophonic encoding, decoding and transcoding of a three-dimensional audio signal
WO2022020365A1 (en) * 2020-07-20 2022-01-27 Orbital Audio Laboratories, Inc. Multi-stage processing of audio signals to facilitate rendering of 3d audio via a plurality of playback devices
RU2777921C2 (en) * 2017-12-18 2022-08-11 Долби Интернешнл Аб Method and system for processing local transitions between listening positions in virtual reality environment
US11443737B2 (en) 2020-01-14 2022-09-13 Sony Corporation Audio video translation into multiple languages for respective listeners
WO2023118643A1 (en) * 2021-12-22 2023-06-29 Nokia Technologies Oy Apparatus, methods and computer programs for generating spatial audio output
US11950085B2 (en) 2017-07-14 2024-04-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Concept for generating an enhanced sound field description or a modified sound field description using a multi-point sound field description

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101442446B1 (en) 2010-12-03 2014-09-22 프라운호퍼-게젤샤프트 츄어 푀르더룽 데어 안게반텐 포르슝에.파우. Sound acquisition via the extraction of geometrical information from direction of arrival estimates
EP2541547A1 (en) * 2011-06-30 2013-01-02 Thomson Licensing Method and apparatus for changing the relative positions of sound objects contained within a higher-order ambisonics representation
JP5740531B2 (en) 2011-07-01 2015-06-24 ドルビー ラボラトリーズ ライセンシング コーポレイション Object-based audio upmixing
US9047863B2 (en) 2012-01-12 2015-06-02 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for criticality threshold control
CN104054126B (en) * 2012-01-19 2017-03-29 皇家飞利浦有限公司 Spatial audio rendering and encoding
US9445174B2 (en) 2012-06-14 2016-09-13 Nokia Technologies Oy Audio capture apparatus
KR102581878B1 (en) 2012-07-19 2023-09-25 돌비 인터네셔널 에이비 Method and device for improving the rendering of multi-channel audio signals
EP2901667B1 (en) 2012-09-27 2018-06-27 Dolby Laboratories Licensing Corporation Spatial multiplexing in a soundfield teleconferencing system
JP6031930B2 (en) * 2012-10-02 2016-11-24 ソニー株式会社 Audio processing apparatus and method, program, and recording medium
CN103021414B (en) * 2012-12-04 2014-12-17 武汉大学 Method for distance modulation of three-dimensional audio system
EP2757559A1 (en) * 2013-01-22 2014-07-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for spatial audio object coding employing hidden objects for signal mixture manipulation
US11146903B2 (en) * 2013-05-29 2021-10-12 Qualcomm Incorporated Compression of decomposed representations of a sound field
CN104244164A (en) * 2013-06-18 2014-12-24 杜比实验室特许公司 Method, device and computer program product for generating surround sound field
WO2015000819A1 (en) 2013-07-05 2015-01-08 Dolby International Ab Enhanced soundfield coding using parametric component generation
WO2015178949A1 (en) * 2014-05-19 2015-11-26 Tiskerling Dynamics Llc Using the location of a near-end user in a video stream to adjust audio settings of a far-end system
JP6624068B2 (en) * 2014-11-28 2019-12-25 ソニー株式会社 Transmission device, transmission method, reception device, and reception method
US9602946B2 (en) 2014-12-19 2017-03-21 Nokia Technologies Oy Method and apparatus for providing virtual audio reproduction
KR102516625B1 (en) * 2015-01-30 2023-03-30 디티에스, 인코포레이티드 Systems and methods for capturing, encoding, distributing, and decoding immersive audio
KR102617476B1 (en) * 2016-02-29 2023-12-26 한국전자통신연구원 Apparatus and method for synthesizing separated sound source
KR102561371B1 (en) 2016-07-11 2023-08-01 삼성전자주식회사 Multimedia display apparatus and recording media
US10187740B2 (en) * 2016-09-23 2019-01-22 Apple Inc. Producing headphone driver signals in a digital audio signal processing binaural rendering environment
KR20180090022A (en) * 2017-02-02 2018-08-10 한국전자통신연구원 Method for providing virtual-reality based on multi omni-direction camera and microphone, sound signal processing apparatus, and image signal processing apparatus for performing the method
CN110463226B (en) * 2017-03-14 2022-02-18 株式会社理光 Sound recording device, sound system, sound recording method and carrier device
EP3618463A4 (en) * 2017-04-25 2020-04-29 Sony Corporation Signal processing device, method, and program
GB2562518A (en) * 2017-05-18 2018-11-21 Nokia Technologies Oy Spatial audio processing
US10299039B2 (en) 2017-06-02 2019-05-21 Apple Inc. Audio adaptation to room
RU2736274C1 (en) * 2017-07-14 2020-11-13 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Principle of generating an improved description of the sound field or modified description of the sound field using DirAC technology with depth expansion or other technologies
US11004567B2 (en) 2017-08-15 2021-05-11 Koko Home, Inc. System and method for processing wireless backscattered signal using artificial intelligence processing for activities of daily life
US10412482B2 (en) 2017-11-08 2019-09-10 Merry Electronics (Shenzhen) Co., Ltd. Loudspeaker apparatus
USD877121S1 (en) 2017-12-27 2020-03-03 Yandex Europe Ag Speaker device
RU2707149C2 (en) * 2017-12-27 2019-11-22 Общество С Ограниченной Ответственностью "Яндекс" Device and method for modifying audio output of device
GB201800918D0 (en) 2018-01-19 2018-03-07 Nokia Technologies Oy Associated spatial audio playback
CN109492126B (en) * 2018-11-02 2022-03-01 廊坊市森淼春食用菌有限公司 Intelligent interaction method and device
US10810850B2 (en) 2019-02-19 2020-10-20 Koko Home, Inc. System and method for state identity of a user and initiating feedback using multiple sources
GB2584837A (en) * 2019-06-11 2020-12-23 Nokia Technologies Oy Sound field related rendering
GB2584838A (en) * 2019-06-11 2020-12-23 Nokia Technologies Oy Sound field related rendering
JP2022547253A (en) * 2019-07-08 2022-11-11 ディーティーエス・インコーポレイテッド Discrepancy audiovisual acquisition system
USD947152S1 (en) 2019-09-10 2022-03-29 Yandex Europe Ag Speaker device
GB2587335A (en) * 2019-09-17 2021-03-31 Nokia Technologies Oy Direction estimation enhancement for parametric spatial audio capture using broadband estimates
US11719804B2 (en) 2019-09-30 2023-08-08 Koko Home, Inc. System and method for determining user activities using artificial intelligence processing
US11240635B1 (en) * 2020-04-03 2022-02-01 Koko Home, Inc. System and method for processing using multi-core processors, signals, and AI processors from multiple sources to create a spatial map of selected region
US11184738B1 (en) 2020-04-10 2021-11-23 Koko Home, Inc. System and method for processing using multi core processors, signals, and AI processors from multiple sources to create a spatial heat map of selected region
JP2023549038A (en) * 2020-10-09 2023-11-22 フラウンホーファー-ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン Apparatus, method or computer program for processing encoded audio scenes using parametric transformation
JP2023548650A (en) * 2020-10-09 2023-11-20 フラウンホーファー-ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン Apparatus, method, or computer program for processing encoded audio scenes using bandwidth expansion
CN116438598A (en) * 2020-10-09 2023-07-14 弗劳恩霍夫应用研究促进协会 Apparatus, method or computer program for processing encoded audio scenes using parameter smoothing
WO2022115803A1 (en) * 2020-11-30 2022-06-02 The Regents Of The University Of California Systems and methods for sound-enhanced meeting platforms
CN115472170A (en) * 2021-06-11 2022-12-13 华为技术有限公司 Three-dimensional audio signal processing method and device
CN115086861B (en) * 2022-07-20 2023-07-28 歌尔股份有限公司 Audio processing method, device, equipment and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1473971A2 (en) * 2003-04-07 2004-11-03 Yamaha Corporation Sound field controller
EP1589754A2 (en) * 2004-04-20 2005-10-26 Sony Corporation Information processing apparatus, imaging apparatus, information processing method, and program
US20080298597A1 (en) * 2007-05-30 2008-12-04 Nokia Corporation Spatial Sound Zooming

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4984087A (en) * 1988-05-27 1991-01-08 Matsushita Electric Industrial Co., Ltd. Microphone apparatus for a video camera
JPH03114000A (en) * 1989-09-27 1991-05-15 Nippon Telegr & Teleph Corp <Ntt> Voice reproduction system
JPH07288899A (en) * 1994-04-15 1995-10-31 Matsushita Electric Ind Co Ltd Sound field reproducing device
JPH07312712A (en) * 1994-05-19 1995-11-28 Sanyo Electric Co Ltd Video camera and reproducing device
JP3830997B2 (en) * 1995-10-24 2006-10-11 日本放送協会 Depth direction sound reproducing apparatus and three-dimensional sound reproducing apparatus
JP2002207488A (en) * 2001-01-01 2002-07-26 Junichi Kakumoto System for representing and transmitting presence of sound and image
GB2374507B (en) * 2001-01-29 2004-12-29 Hewlett Packard Co Audio user interface with audio cursor
JP2003244800A (en) * 2002-02-14 2003-08-29 Matsushita Electric Ind Co Ltd Sound image localization apparatus
JP2003284196A (en) * 2002-03-20 2003-10-03 Sony Corp Sound image localizing signal processing apparatus and sound image localizing signal processing method
SE527670C2 (en) * 2003-12-19 2006-05-09 Ericsson Telefon Ab L M Natural fidelity optimized coding with variable frame length
US20090299756A1 (en) 2004-03-01 2009-12-03 Dolby Laboratories Licensing Corporation Ratio of speech to non-speech audio such as for elderly or hearing-impaired listeners
SE0400997D0 (en) 2004-04-16 2004-04-16 Coding Technologies Sweden Ab Efficient coding of multi-channel audio
JP2006050241A (en) * 2004-08-04 2006-02-16 Matsushita Electric Ind Co Ltd Decoder
JP2006074386A (en) 2004-09-01 2006-03-16 Fujitsu Ltd Stereoscopic audio reproducing method, communication apparatus, and program
SE0402650D0 (en) 2004-11-02 2004-11-02 Coding Tech Ab Improved parametric stereo compatible coding of spatial audio
US7751572B2 (en) 2005-04-15 2010-07-06 Dolby International Ab Adaptive residual audio coding
TWI390993B (en) 2005-10-20 2013-03-21 Lg Electronics Inc Method for encoding and decoding multi-channel audio signal and apparatus thereof
EP1974344A4 (en) * 2006-01-19 2011-06-08 Lg Electronics Inc Method and apparatus for decoding a signal
JP4940671B2 (en) 2006-01-26 2012-05-30 ソニー株式会社 Audio signal processing apparatus, audio signal processing method, and audio signal processing program
JP5081838B2 (en) 2006-02-21 2012-11-28 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Audio encoding and decoding
TW200742275A (en) 2006-03-21 2007-11-01 Dolby Lab Licensing Corp Low bit rate audio encoding and decoding in which multiple channels are represented by fewer channels and auxiliary information
ATE539434T1 (en) * 2006-10-16 2012-01-15 Fraunhofer Ges Forschung APPARATUS AND METHOD FOR MULTI-CHANNEL PARAMETER CONVERSION
US8290167B2 (en) * 2007-03-21 2012-10-16 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Method and apparatus for conversion between multi-channel audio formats
US20080298610A1 (en) * 2007-05-30 2008-12-04 Nokia Corporation Parameter Space Re-Panning for Spatial Audio
EP2158791A1 (en) 2007-06-26 2010-03-03 Koninklijke Philips Electronics N.V. A binaural object-oriented audio decoder
JP5284360B2 * 2007-09-26 2013-09-11 フラウンホッファー−ゲゼルシャフト ツァ フェルダールング デァ アンゲヴァンテン フォアシュンク エー.ファオ Apparatus and method for extracting an ambient signal, apparatus and method for obtaining weighting coefficients for extracting an ambient signal, and computer program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1473971A2 (en) * 2003-04-07 2004-11-03 Yamaha Corporation Sound field controller
EP1589754A2 (en) * 2004-04-20 2005-10-26 Sony Corporation Information processing apparatus, imaging apparatus, information processing method, and program
US20080298597A1 (en) * 2007-05-30 2008-12-04 Nokia Corporation Spatial Sound Zooming

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FALLER, C.: "Microphone Front-Ends for Spatial Audio Coders", PROCEEDINGS OF THE AES 125TH INTERNATIONAL CONVENTION, October 2008 (2008-10-01)
KALLINGER MARKUS ET AL: "A Spatial Filtering Approach for Directional Audio Coding", AES CONVENTION 126; MAY 2009, AES, 60 EAST 42ND STREET, ROOM 2520 NEW YORK 10165-2520, USA, 1 May 2009 (2009-05-01), XP040508935 *
M.A. GERZON: "Periphony: Width-Height Sound Reproduction", J. AUD. ENG. SOC., vol. 21, no. 1, 1973, pages 2 - 10
PULKKI V: "VIRTUAL SOUND SOURCE POSITIONING USING VECTOR BASE AMPLITUDE PANNING", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, AUDIO ENGINEERING SOCIETY, NEW YORK, NY, US, vol. 45, no. 6, 1 June 1997 (1997-06-01), pages 456 - 466, XP000695381, ISSN: 1549-4950 *
PULKKI V: "Spatial Sound Reproduction with Directional Audio Coding", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, AUDIO ENGINEERING SOCIETY, NEW YORK, NY, US, vol. 55, no. 6, 1 June 2007 (2007-06-01), pages 503 - 516, XP002526348, ISSN: 0004-7554 *
PULKKI, V.: "Directional audio coding in spatial sound reproduction and stereo upmixing", PROCEEDINGS OF THE AES 28TH INTERNATIONAL CONFERENCE, 30 June 2006 (2006-06-30), pages 251 - 258
PULKKI, V.: "Virtual sound source positioning using vector base amplitude panning", J. AUDIO ENG. SOC., vol. 45, June 1997 (1997-06-01), pages 456 - 466

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104185869B (en) * 2011-12-02 2017-10-17 弗劳恩霍夫应用研究促进协会 Apparatus and method for merging geometry-based spatial audio coding streams
WO2013079663A3 (en) * 2011-12-02 2013-10-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for merging geometry-based spatial audio coding streams
RU2609102C2 (en) * 2011-12-02 2017-01-30 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Device and method for merging geometry-based spatial audio coding streams
US9484038B2 (en) 2011-12-02 2016-11-01 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for merging geometry-based spatial audio coding streams
CN104185869A (en) * 2011-12-02 2014-12-03 弗兰霍菲尔运输应用研究公司 Apparatus and method for merging geometry-based spatial audio coding streams
CN104185869B9 (en) * 2011-12-02 2018-01-12 弗劳恩霍夫应用研究促进协会 Device and method for merging geometry-based spatial audio coding streams
EP2600343A1 (en) * 2011-12-02 2013-06-05 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for merging geometry-based spatial audio coding streams
WO2014076058A1 (en) * 2012-11-15 2014-05-22 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
US10313815B2 (en) 2012-11-15 2019-06-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
TWI512720B (en) * 2012-11-15 2015-12-11 Fraunhofer Ges Forschung Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
CN104904240B (en) * 2012-11-15 2017-06-23 弗劳恩霍夫应用研究促进协会 Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
EP2733965A1 (en) * 2012-11-15 2014-05-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a plurality of parametric audio streams and apparatus and method for generating a plurality of loudspeaker signals
RU2633134C2 (en) * 2012-11-15 2017-10-11 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Device and method for generating a plurality of parametric audio streams and device and method for generating a plurality of loudspeaker signals
US10635383B2 (en) 2013-04-04 2020-04-28 Nokia Technologies Oy Visual audio processing apparatus
US9838821B2 (en) 2013-12-27 2017-12-05 Nokia Technologies Oy Method, apparatus, computer program code and storage medium for processing audio signals
US9866986B2 (en) 2014-01-24 2018-01-09 Sony Corporation Audio speaker system with virtual music performance
WO2015169618A1 (en) * 2014-05-05 2015-11-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
RU2663343C2 (en) * 2014-05-05 2018-08-03 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. System, device and method for consistent reproduction of an acoustic scene based on adaptive functions
EP2942982A1 (en) * 2014-05-05 2015-11-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
WO2015169617A1 (en) * 2014-05-05 2015-11-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. System, apparatus and method for consistent acoustic scene reproduction based on adaptive functions
CN106664501A (en) * 2014-05-05 2017-05-10 弗劳恩霍夫应用研究促进协会 System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
EP2942981A1 (en) * 2014-05-05 2015-11-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. System, apparatus and method for consistent acoustic scene reproduction based on adaptive functions
CN106664501B (en) * 2014-05-05 2019-02-15 弗劳恩霍夫应用研究促进协会 System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
US9936323B2 (en) 2014-05-05 2018-04-03 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
RU2665280C2 (en) * 2014-05-05 2018-08-28 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
US10015613B2 (en) 2014-05-05 2018-07-03 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. System, apparatus and method for consistent acoustic scene reproduction based on adaptive functions
RU2666248C2 (en) * 2014-05-13 2018-09-06 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Device and method for amplitude panning with front fading
US10021499B2 (en) 2014-05-13 2018-07-10 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for edge fading amplitude panning
US9924291B2 (en) 2016-02-16 2018-03-20 Sony Corporation Distributed wireless speaker system
EP3209029A1 (en) * 2016-02-16 2017-08-23 Sony Corporation Distributed wireless speaker system
US9826330B2 (en) 2016-03-14 2017-11-21 Sony Corporation Gimbal-mounted linear ultrasonic speaker assembly
US9794724B1 (en) 2016-07-20 2017-10-17 Sony Corporation Ultrasonic speaker assembly using variable carrier frequency to establish third dimension sound locating
WO2018050292A1 (en) * 2016-09-16 2018-03-22 Benjamin Bernard Device and method for capturing and processing a three-dimensional acoustic field
US10854210B2 (en) 2016-09-16 2020-12-01 Coronal Audio S.A.S. Device and method for capturing and processing a three-dimensional acoustic field
US11232802B2 (en) 2016-09-30 2022-01-25 Coronal Encoding S.A.S. Method for conversion, stereophonic encoding, decoding and transcoding of a three-dimensional audio signal
US10602297B2 (en) 2016-12-23 2020-03-24 Nxp B.V. Processing audio signals
EP3340648A1 (en) * 2016-12-23 2018-06-27 Nxp B.V. Processing audio signals
WO2018132385A1 (en) * 2017-01-12 2018-07-19 Pcms Holdings, Inc. Audio zooming in natural audio video content service
EP3605531A4 (en) * 2017-03-28 2020-04-15 Sony Corporation Information processing device, information processing method, and program
US11074921B2 (en) 2017-03-28 2021-07-27 Sony Corporation Information processing device and information processing method
WO2018193160A1 (en) * 2017-04-20 2018-10-25 Nokia Technologies Oy Ambience generation for spatial audio mixing featuring use of original and extended signal
US11632643B2 (en) 2017-06-21 2023-04-18 Nokia Technologies Oy Recording and rendering audio signals
GB2563635A (en) * 2017-06-21 2018-12-26 Nokia Technologies Oy Recording and rendering audio signals
CN111183479B (en) * 2017-07-14 2023-11-17 弗劳恩霍夫应用研究促进协会 Apparatus and method for generating enhanced sound field description using multi-layer description
US11863962B2 (en) 2017-07-14 2024-01-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Concept for generating an enhanced sound-field description or a modified sound field description using a multi-layer description
US11950085B2 (en) 2017-07-14 2024-04-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Concept for generating an enhanced sound field description or a modified sound field description using a multi-point sound field description
CN111183479A (en) * 2017-07-14 2020-05-19 弗劳恩霍夫应用研究促进协会 Concept for generating an enhanced or modified sound field description using a multi-layer description
WO2019097017A1 (en) * 2017-11-17 2019-05-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding directional audio coding parameters using different time/frequency resolutions
US11367454B2 (en) 2017-11-17 2022-06-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding directional audio coding parameters using quantization and entropy coding
WO2019097018A1 (en) * 2017-11-17 2019-05-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding directional audio coding parameters using quantization and entropy coding
US11783843B2 (en) 2017-11-17 2023-10-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding directional audio coding parameters using different time/frequency resolutions
EP4113512A1 (en) * 2017-11-17 2023-01-04 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for encoding or decoding directional audio coding parameters using different time/frequency resolutions
TWI759240B (en) * 2017-11-17 2022-03-21 弗勞恩霍夫爾協會 Apparatus and method for encoding or decoding directional audio coding parameters using quantization and entropy coding
RU2763155C2 (en) * 2017-11-17 2021-12-27 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Apparatus and method for encoding or decoding the directional audio encoding parameters using quantisation and entropy encoding
RU2763313C2 (en) * 2017-11-17 2021-12-28 Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. Apparatus and method for encoding or decoding the directional audio encoding parameters using various time and frequency resolutions
TWI752281B (en) * 2017-11-17 2022-01-11 弗勞恩霍夫爾協會 Apparatus and method for encoding or decoding directional audio coding parameters using quantization and entropy coding
TWI708241B (en) * 2017-11-17 2020-10-21 弗勞恩霍夫爾協會 Apparatus and method for encoding or decoding directional audio coding parameters using different time/frequency resolutions
EP4203524A1 (en) * 2017-12-18 2023-06-28 Dolby International AB Method and system for handling local transitions between listening positions in a virtual reality environment
US11109178B2 (en) 2017-12-18 2021-08-31 Dolby International Ab Method and system for handling local transitions between listening positions in a virtual reality environment
WO2019121773A1 (en) * 2017-12-18 2019-06-27 Dolby International Ab Method and system for handling local transitions between listening positions in a virtual reality environment
RU2777921C2 (en) * 2017-12-18 2022-08-11 Долби Интернешнл Аб Method and system for processing local transitions between listening positions in virtual reality environment
US11743672B2 (en) 2017-12-18 2023-08-29 Dolby International Ab Method and system for handling local transitions between listening positions in a virtual reality environment
WO2019121864A1 (en) * 2017-12-19 2019-06-27 Koninklijke Kpn N.V. Enhanced audiovisual multiuser communication
US11082662B2 (en) 2017-12-19 2021-08-03 Koninklijke Kpn N.V. Enhanced audiovisual multiuser communication
WO2020104726A1 (en) * 2018-11-21 2020-05-28 Nokia Technologies Oy Ambience audio representation and associated rendering
US11924627B2 (en) 2018-11-21 2024-03-05 Nokia Technologies Oy Ambience audio representation and associated rendering
EP4014235A4 (en) * 2019-08-16 2023-04-05 Nokia Technologies Oy Quantization of spatial audio direction parameters
WO2021032909A1 (en) * 2019-08-16 2021-02-25 Nokia Technologies Oy Quantization of spatial audio direction parameters
WO2021032908A1 (en) 2019-08-16 2021-02-25 Nokia Technologies Oy Quantization of spatial audio direction parameters
US11342001B2 (en) 2020-01-10 2022-05-24 Nokia Technologies Oy Audio and video processing
EP3849202A1 (en) * 2020-01-10 2021-07-14 Nokia Technologies Oy Audio and video processing
US11443737B2 (en) 2020-01-14 2022-09-13 Sony Corporation Audio video translation into multiple languages for respective listeners
WO2022020365A1 (en) * 2020-07-20 2022-01-27 Orbital Audio Laboratories, Inc. Multi-stage processing of audio signals to facilitate rendering of 3d audio via a plurality of playback devices
US11962989B2 (en) 2020-07-20 2024-04-16 Orbital Audio Laboratories, Inc. Multi-stage processing of audio signals to facilitate rendering of 3D audio via a plurality of playback devices
WO2023118643A1 (en) * 2021-12-22 2023-06-29 Nokia Technologies Oy Apparatus, methods and computer programs for generating spatial audio output

Also Published As

Publication number Publication date
AU2010332934A1 (en) 2012-07-26
BR112012015018A2 (en) 2022-05-17
RU2012132354A (en) 2014-01-27
US20130016842A1 (en) 2013-01-17
MX2012006979A (en) 2012-07-17
WO2011073210A1 (en) 2011-06-23
TWI523545B (en) 2016-02-21
TW201146026A (en) 2011-12-16
EP2502228B1 (en) 2016-06-22
RU2586842C2 (en) 2016-06-10
ES2592217T3 (en) 2016-11-28
KR20120089369A (en) 2012-08-09
JP2013514696A (en) 2013-04-25
JP5426035B2 (en) 2014-02-26
AR079517A1 (en) 2012-02-01
EP2502228A1 (en) 2012-09-26
HK1176733A1 (en) 2013-08-02
AU2010332934B2 (en) 2015-02-19
CN102859584B (en) 2015-11-25
CA2784862C (en) 2020-06-16
CA2784862A1 (en) 2011-06-23
CN102859584A (en) 2013-01-02
BR112012015018B1 (en) 2023-11-28
US9196257B2 (en) 2015-11-24
KR101431934B1 (en) 2014-08-19

Similar Documents

Publication Publication Date Title
EP2502228B1 (en) An apparatus and a method for converting a first parametric spatial audio signal into a second parametric spatial audio signal
US11463834B2 (en) Concept for generating an enhanced sound field description or a modified sound field description using a multi-point sound field description
JP6950014B2 (en) Methods and Devices for Decoding Ambisonics Audio Field Representations for Audio Playback Using 2D Setup
EP3197182B1 (en) Method and device for generating and playing back audio signal
CN106664501B (en) 2019-02-15 System, apparatus and method for consistent acoustic scene reproduction based on informed spatial filtering
KR101407200B1 (en) Apparatus and Method for Calculating Driving Coefficients for Loudspeakers of a Loudspeaker Arrangement for an Audio Signal Associated with a Virtual Source
EP2613564A2 (en) Focusing on a portion of an audio scene for an audio signal
EP3975176A2 (en) Apparatus, method and computer program for encoding, scene processing and other procedures related to dirac based spatial audio coding
US11863962B2 (en) Concept for generating an enhanced sound-field description or a modified sound field description using a multi-layer description
CN111108555A (en) Concept for generating an enhanced or modified sound field description using depth extended DirAC techniques or other techniques
US20210168553A1 (en) Headtracking for Pre-Rendered Binaural Audio
Rafaely et al. Spatial audio signal processing for binaural reproduction of recorded acoustic scenes–review and challenges
CN114450977A (en) Apparatus, method or computer program for processing a representation of a sound field in the spatial transform domain
CN114270878A (en) Sound field dependent rendering

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

AX Request for extension of the european patent

Extension state: AL BA ME RS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20120121