The Las Vegas Sphere’s immersive spatial audio system is powered by 167,000 amplifier channels and sophisticated multilayer speaker arrays.
This FAQ begins with a review of the amplifier technology, looks at the structure of the speaker arrays, and closes with the audio beamforming technology used to create spatial audio in the Sphere.
The 167,000 channels of amplification are driven by over 10,000 16-channel high-efficiency amplifiers that deliver about 40% energy savings compared to traditional amplifiers. That energy saving arises from the use of power factor correction (PFC) on the input and a pulse width modulation (PWM) architecture that recycles the reactive energy coming back from the loudspeakers.
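The benefit of power factor correction can be seen with a simple calculation: for the same real power delivered, a higher power factor means less apparent power drawn from the mains and therefore less input current. The sketch below illustrates the idea with hypothetical numbers (the load power, mains voltage, and power factors are illustrative, not Sphere specifications):

```python
def input_current(real_power_w: float, mains_voltage_v: float,
                  power_factor: float) -> float:
    """RMS input current needed to deliver a given real power
    at a given power factor (apparent power = real / PF)."""
    apparent_power_va = real_power_w / power_factor
    return apparent_power_va / mains_voltage_v

# Hypothetical numbers: a 1 kW amplifier load on a 230 V mains feed.
i_uncorrected = input_current(1000.0, 230.0, power_factor=0.65)  # without PFC
i_corrected = input_current(1000.0, 230.0, power_factor=0.99)    # with active PFC

print(f"without PFC: {i_uncorrected:.2f} A rms")
print(f"with PFC:    {i_corrected:.2f} A rms")
```

Lower input current for the same delivered power means lower conduction losses in the supply path, which is one contributor to the overall efficiency gain.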
High-performance audio results from a combination of two technologies. Differential pressure control uses pressure sensors to monitor the differential pressure from pairs of loudspeakers in real time and corrects the drive power to keep the system balanced. In addition, the amplifiers include a custom digital signal processor (DSP) core in the control circuitry that supports 10 µs latency on the critical feedback paths, enabling “analog-like” amplification with the flexibility and efficiency of digital control.
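A pressure-balancing feedback loop of this kind can be sketched as a simple proportional controller that nudges each driver's level toward the pair's mean measured pressure. This is a toy model to illustrate the concept; the sensitivities, gain, and update loop are hypothetical, and the real controller runs in the custom DSP core at microsecond latencies:

```python
def balance_pair(drive_a, drive_b, pressure_a, pressure_b, gain=0.1):
    """One iteration of a proportional correction: nudge each drive level
    toward the pair's mean measured pressure. Illustrative only."""
    target = (pressure_a + pressure_b) / 2.0
    return (drive_a + gain * (target - pressure_a),
            drive_b + gain * (target - pressure_b))

# Hypothetical driver sensitivities (pressure per unit drive); B is 10% weaker.
sens_a, sens_b = 1.0, 0.9
drive_a, drive_b = 1.0, 1.0
for _ in range(200):
    drive_a, drive_b = balance_pair(drive_a, drive_b,
                                    sens_a * drive_a, sens_b * drive_b)
# After the loop, the measured pressures from the two drivers match closely.
```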
The speaker arrays combine loudspeakers of different sizes in a multi-layered configuration, with the amplification channel for each speaker integrated into the array to produce a compact solution. Using different driver sizes enables precise control of sound propagation across a wide frequency range, along with individual control of the sound pressure level (SPL) of each speaker.
The speaker arrays are available as a two-way module that integrates 96 drivers in a two-layer matrix (Figure 1), or as a three-layer module with 80 drivers in the first two layers and a sensor-controlled subwoofer driver in the third layer. The combination of DSP-controlled amplification and speaker arrays supports wave field synthesis and audio beamforming for immersive spatial audio.
Wave field synthesis
Wave field synthesis is used to create audio objects and precisely place them within the Sphere. This technology enables all listeners to perceive the location, distance, and direction of multiple distinct virtual audio sources. Some can be static while others can be moving, even getting closer to the listeners to create realistic and engaging auditory scenes (for more information on wave field synthesis, see the FAQ ‘The Las Vegas Sphere by the numbers’). The amplifier and drive system also compensates for unwanted audio reflections and echoes to create a consistent audio experience across the entire venue.
The drivers use a Linux-based distributed audio operating system running on a DSP platform built around a dual-core Arm Cortex-A9 processor and a high-performance field programmable gate array (FPGA) that runs the algorithms for 3D wave field synthesis and audio beamforming (Figure 2). The processing module provides individual control of each loudspeaker in the array.
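At its core, wave field synthesis drives each loudspeaker with a delayed, attenuated copy of the source signal so that the array's combined wavefront appears to radiate from a chosen virtual position. The following is a minimal delay-and-attenuate sketch of that idea, not HOLOPLOT's actual algorithm; the array geometry and source position are hypothetical:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def wfs_driving(source_xy, driver_positions):
    """Per-driver (delay_s, amplitude) pairs for a virtual point source
    behind the array: a toy 2-D wave-field-synthesis driving function."""
    out = []
    for pos in driver_positions:
        r = math.dist(source_xy, pos)            # source-to-driver distance
        out.append((r / SPEED_OF_SOUND,          # propagation delay
                    1.0 / max(r, 0.1)))          # 1/r spherical spreading
    return out

# Hypothetical 8-driver line array at 0.5 m spacing, source 2 m behind centre.
array = [(x * 0.5, 0.0) for x in range(-4, 4)]
delays = wfs_driving((0.0, -2.0), array)
```

Drivers farther from the virtual source get longer delays and lower amplitudes, so the superposed wavefronts reconstruct the curvature of a wave originating at the virtual point.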
Audio beamforming
Each driver module has over 200 inputs that can accept either object- or channel-based audio streams. The FPGA also runs a routing-matrix algorithm that downmixes these channels into up to 12 individually controlled beams per speaker array. Each beam can be equalized independently using 24 parametric equalization bands.
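A routing matrix of this kind can be modeled as a single matrix multiply: one gain coefficient per (beam, input) pair maps many input channels down to a handful of beam signals. The sketch below uses hypothetical dimensions matching the figures in the text (200 inputs, 12 beams) and arbitrary gains; it is a conceptual model, not the FPGA implementation:

```python
import numpy as np

N_INPUTS, N_BEAMS = 200, 12
rng = np.random.default_rng(0)

# Hypothetical routing matrix: each row holds the mix gains forming one beam.
routing = np.zeros((N_BEAMS, N_INPUTS))
routing[0, 0] = 1.0          # beam 0 carries input 0 at unity gain
routing[1, [1, 2]] = 0.5     # beam 1 is an equal mix of inputs 1 and 2

frame = rng.standard_normal(N_INPUTS)   # one sample frame across all inputs
beams = routing @ frame                 # downmix: 200 inputs -> 12 beams
```

Per-beam parametric equalization would then be applied to each of the 12 beam signals downstream of this matrix, before the beamforming delays are computed.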
Audio beamforming is an advanced application of wave field synthesis. While basic wave field synthesis provides a sound source with a virtual point of origin, beamforming uses control of sound propagation on the vertical and horizontal axes to give sound increased directivity.
Beamforming can simultaneously send different audio content to different locations, with each sound field having a unique equalization, level, shape, and position, while minimizing sound spilling between zones. It can provide customized audio content to small groups and even to individual seats. It can also create customized special effects based on the position of the listener(s) and can bounce focused audio beams off surfaces to create additional audio textures.
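The directivity described above comes from applying per-driver delays so that the drivers' outputs add coherently in the steered direction and partially cancel elsewhere. A minimal delay-and-sum sketch for a uniform line array shows the effect; the driver count, spacing, frequency, and angles are hypothetical, and the real system steers full matrix arrays in both axes:

```python
import cmath
import math

def array_response(n_drivers, spacing_m, freq_hz, steer_deg, look_deg, c=343.0):
    """Normalized delay-and-sum response of a uniform line array:
    per-driver phase shifts steer the main lobe toward steer_deg;
    the response is evaluated in the direction look_deg."""
    k = 2 * math.pi * freq_hz / c  # wavenumber
    phase = k * spacing_m * (math.sin(math.radians(look_deg))
                             - math.sin(math.radians(steer_deg)))
    return abs(sum(cmath.exp(1j * n * phase) for n in range(n_drivers))) / n_drivers

# Hypothetical 16-driver array, 10 cm spacing, 1 kHz, steered 20 degrees off axis.
on_target = array_response(16, 0.1, 1000.0, steer_deg=20.0, look_deg=20.0)
off_target = array_response(16, 0.1, 1000.0, steer_deg=20.0, look_deg=-40.0)
```

In the steered direction all drivers add in phase (response 1.0), while well off axis the contributions largely cancel, which is what keeps one zone's content from spilling into another.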
Summary
The creation of spatial and immersive audio at the Sphere takes a combination of advanced hardware and sophisticated software control. The amplifiers and loudspeakers are designed to deliver high efficiency, minimizing energy consumption, as well as precise audio. The software enables control of that audio using wave field synthesis and beamforming techniques to deliver customized audio experiences throughout the venue.
References
HOLOPLOT X1 The world’s first Matrix Array, HOLOPLOT
Loudspeaker array beamforming for sound projection in a half-space with an impedance boundary, Journal of the Acoustical Society of America
Making All the Right Noises: Shaping Sound with Audio Beamforming, MathWorks
Sphere immersive sound powered by HOLOPLOT, Sphere Entertainment
Wave Field Synthesis, Fraunhofer