Gilly Moon
Part one covered the differences and similarities between dialogue editing and podcast mixing, and between sound reinforcement for musical theater and themed entertainment. Here we compare two other surprisingly related audio disciplines: game audio and sound design for themed entertainment.
Collaboration
Every medium benefits from collaboration. Video games and themed entertainment can’t be made without it! Collaboration manifests in many ways throughout the design process, so I’ll focus on communicating with the art and programming departments. The way audio designers collaborate with those two departments is strikingly similar whether you look at game audio processes or themed entertainment workflows.
Decisions the art department makes affect the sound design. In game audio, it is very hard to create sound effects for an animation that does not exist yet, or to create ambience and music when you do not know what the world looks like. It is wise for sound designers to check in early in the process. Storyboards and renderings can tell a lot about the world of a game before the game itself is even built. Incomplete or temporary animations are often more than enough to get started on sounds for a character. The sound designer combines these resources with clarifying questions such as: What is this character wearing? Are the walls in this room wood or stone?
Reaching out to the art department early on informs the sound design for themed entertainment attractions as well. Working from a script and renderings is a great start, just as with game audio. Set designers for live events will also provide drawings of the scenery, such as ground plans and elevations. This paperwork informs speaker placement, acoustics, backstage tech rooms, and audience pathways. It is wise to have conversations with the art department early on about where you need to put speakers. Directors and set designers will typically want audio equipment hidden, and it is the sound designer’s job to make sure there is a speaker for every special effect and that there are no dead zones. Revealing your system to the rest of the team only after installation will frustrate not only the other departments but also your technicians, who ran all the cable and hung all the speakers and may now have to redo that work. You may also end up with less-than-ideal speaker placement. Communicating from the start empowers you to advocate for the ideal placement of equipment and to ideate with the set designer on ways of hiding it.
Then there are the programmers. Programmers implement sound effects and music and integrate all the art and lighting assets; video game programmers also code the game mechanics. Establishing an ongoing process ensures that sounds are played when and where you want them to be, and the programming team might even have some cool ideas! In both mediums, the most obvious way to relay the necessary information is by keeping an asset list. The asset list should say what the sound is, where and when it plays, how it is triggered, whether it loops, the file name, the file length, the sample rate, the number of channels, and any short creative notes. It is also wise to meet with the programmers early and often so they can flag any limitations on their end. They are implementing your work, so a positive relationship is a must.
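For illustration only, here is a minimal sketch of what one row of such an asset list might look like if it were kept as a data structure rather than a spreadsheet; the field names are hypothetical and would be adapted to whatever format the team actually shares.

```python
from dataclasses import dataclass

@dataclass
class AssetListEntry:
    """One row of a hypothetical sound asset list shared with the programming team."""
    description: str       # what the sound is, e.g. "wooden door creak"
    location: str          # where/when it plays, e.g. "Level 2, library room"
    trigger: str           # how it is triggered, e.g. "player opens door"
    loops: bool            # whether the file loops
    file_name: str         # e.g. "door_creak_01.wav"
    length_seconds: float  # file length
    sample_rate: int       # e.g. 48000
    channels: int          # 1 = mono, 2 = stereo
    notes: str = ""        # short creative notes for the programmer

# Example row
door_creak = AssetListEntry(
    description="wooden door creak",
    location="Level 2, library room",
    trigger="player opens door",
    loops=False,
    file_name="door_creak_01.wav",
    length_seconds=2.4,
    sample_rate=48000,
    channels=1,
)
```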
These examples of collaboration between departments are just the most common and the most similar between video games and live attractions. More on programming audio later.
Non-Linear Processes
When I say that the process is non-linear for both disciplines, I do not just mean that the sound design is iterated on until the desired outcome is achieved. Both types of experiences have to be designed around conditions, not a linear timeline.
In video games, sound (or a change in sound) is triggered when the player enters a new room, area, or level; when they encounter, attack, or kill an enemy; when their health status changes; when a button is pressed; and so on. In themed entertainment, sound plays or changes when the audience enters a new room, when an actor decides to jump out at them (hitting a button to activate a startling sound), when a guest interacts with an object in the world, and much more.
Notice how all of these things may occur at a moment in time, but they are triggered by conditions determined by choice and interaction, or by other conditions that have previously been met. Before the product is launched, sound designers need to be able to work on any part of the project at a given time or make adjustments to one part without affecting the rest of the experience. In addition to having a cohesive design in mind, designers in both sectors need to plan their sounds and programming so change can be implemented without a terrible domino effect on other parts of the experience.
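As a rough sketch of what condition-based design means in practice, imagine sounds registered against named events rather than placed on a timeline. The event names and the play_sound function below are hypothetical placeholders for whatever the game engine or show control system actually provides.

```python
# Hypothetical condition-based trigger map: sounds respond to events,
# not to positions on a timeline.
sound_triggers = {
    "enter_library": "library_ambience_loop.wav",
    "enemy_spotted": "tension_music_layer.wav",
    "health_low":    "heartbeat_loop.wav",
    "scare_button":  "startle_stinger.wav",
}

def play_sound(file_name: str) -> None:
    # Placeholder for the engine / show control call that actually plays audio.
    print(f"Playing {file_name}")

def on_event(event_name: str) -> None:
    """Called whenever a condition is met, in any order and at any time."""
    if event_name in sound_triggers:
        play_sound(sound_triggers[event_name])

# The same code handles whichever condition happens to occur first.
on_event("scare_button")
on_event("enter_library")
```

Because each sound hangs off its own condition, one entry can be changed or retriggered without rippling through the rest of the experience.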
Spatialization and Acoustics
In games and themed entertainment, the goal is to immerse the audience in a realistic, three-dimensional space. For this reason, both video games and live experiences rely on realistic localization of sound. Google Dictionary defines localization as, “the process of making something local in character or restricting it to a particular place.” Music, user interface sounds, and ambiences are often stereo; most other sounds, however, are assigned to an object or character and are therefore usually mono. In real life, sound comes from specific sources in a physical space, and games and attractions emulate that through object-based immersive audio. In games, sound sources are attached to objects in the game engine. Sound designers in themed entertainment use precise speaker placement and acoustics to trick the audience into hearing a sound from a particular source.
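A bare-bones way to picture “attaching a sound to an object”: the mono source simply carries its object’s position, and the engine (or, in a live space, the physical speaker) is responsible for making the sound appear to come from there. The classes below are purely illustrative, not any particular engine’s API.

```python
from dataclasses import dataclass

@dataclass
class GameObject:
    name: str
    x: float
    y: float
    z: float

@dataclass
class SoundEmitter:
    """A mono sound source attached to an object; it inherits the object's position."""
    file_name: str
    attached_to: GameObject

    @property
    def position(self) -> tuple[float, float, float]:
        obj = self.attached_to
        return (obj.x, obj.y, obj.z)

torch = GameObject("wall_torch", x=3.0, y=1.8, z=-2.5)
torch_crackle = SoundEmitter("torch_crackle_loop.wav", attached_to=torch)
print(torch_crackle.position)  # the engine spatializes the mono file from this point
```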
Both mediums require knowledge of and a good ear for acoustics. In game audio, sound designers program virtual room acoustics as part of creating realistic environments. They have to understand how the sounds of voices and objects are affected by the room they are in and by their distance from the player. Themed entertainment deals with real-life acoustics, which follow the same principles to achieve immersion. Knowing how sound will bounce off of or be absorbed by objects informs speaker placement, how the audience perceives the sound, and how the sound designer can work with the set designer to hide the speakers.
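One of the simplest acoustic relationships both disciplines lean on is the inverse distance law for a point source: level drops by roughly 6 dB each time the distance doubles. Here is a quick sketch of that calculation, for the free-field case only; real rooms add reflections and absorption on top of it.

```python
import math

def distance_attenuation_db(ref_distance: float, listener_distance: float) -> float:
    """Level change (dB) of a point source in a free field, relative to ref_distance."""
    return 20 * math.log10(ref_distance / listener_distance)

# Doubling the distance costs about 6 dB:
print(round(distance_attenuation_db(1.0, 2.0), 1))  # -6.0
print(round(distance_attenuation_db(1.0, 4.0), 1))  # -12.0
```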
Both mediums implement audio via specific sound sources and 3D audio environments, and audio designers have to understand acoustics to create realistic immersion.
Audio Implementation for Adaptive Audio
Earlier we talked about condition-based sound design, where sound is triggered when conditions are met: entering a new room, encountering an enemy, or pressing a button. The actual term for this is “adaptive audio.” Games and attractions may have linear elements, but both use adaptive audio principles. So how do those sounds get from your DAW into the game engine (video games) or out through the speakers (themed entertainment)? There is a step in between: implementation.
Games use something called middleware. Sound files are brought into the middleware software, where they are mixed and programmed. Sound designers can even connect to a build of the game and rehearse their mixes. Common middleware programs are Wwise, FMOD, and Unreal Engine’s Blueprints. Some game studios have their own proprietary middleware. Developers then integrate metadata from the middleware into the game. On a very small team, the sound designer will also program in the game engine; on larger teams, a separate technical sound designer handles that programming. No matter the team size, game audio designers implement audio via middleware.
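The actual calls differ between Wwise, FMOD, and proprietary tools, but the shape of the handoff is similar: the sound designer authors an event in the middleware, and a programmer posts that event from game code when a condition is met. The sketch below is generic and purely illustrative; the class, function, and event names are hypothetical, not any real middleware API.

```python
# Purely illustrative: a generic "post event" handoff between game code and
# audio middleware. A real project would call the Wwise or FMOD API instead.
class FakeMiddleware:
    def __init__(self):
        # Events are authored by the sound designer inside the middleware project.
        self.events = {"Play_Footstep_Stone": "footstep_stone_random_container"}

    def post_event(self, event_name: str, game_object: str) -> None:
        if event_name in self.events:
            print(f"{event_name} -> {self.events[event_name]} on {game_object}")
        else:
            print(f"Missing event: {event_name}")  # something to flag on the asset list

audio = FakeMiddleware()

# The gameplay programmer only needs the event name and the object it plays on.
audio.post_event("Play_Footstep_Stone", game_object="player_character")
```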
Themed entertainment attractions, on the other hand, use something called show control software. Show control software mixes and routes audio signals, patches inputs and outputs, and applies other DSP. It is also where triggers for all the technical elements of the experience are programmed. Triggers can include, but are not limited to, a “go” button (at its most basic, the spacebar of a computer), a contact closure, OSC commands, or MIDI commands. Sound, lights, automation, and video can all be triggered from show control software. Think of it as a combination of audio middleware and a game engine. Examples of show control software are QLab, QSys, and WinScript. On a very small team, the sound designer will create content as well as program the audio and all the show control for the experience. As teams get larger, there are more roles: sound content may be separate from programming, and there may even be a programmer, a sound designer, a mixer, and someone dedicated to installing and troubleshooting IP networks.
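As one concrete example of a show control trigger: QLab can be fired remotely over OSC. Assuming the python-osc package and QLab’s published OSC commands, a backstage button or sensor could end up sending something like the messages below; the IP address, port, and cue number are placeholders for a real installation.

```python
from pythonosc.udp_client import SimpleUDPClient

# Placeholder address and QLab's default OSC port; adjust for the actual machine.
qlab = SimpleUDPClient("192.168.1.10", 53000)

# Fire the next cue in the list (equivalent to pressing GO)...
qlab.send_message("/go", [])

# ...or start a specific cue, e.g. the startle effect wired to an actor's button.
qlab.send_message("/cue/12/start", [])
```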
Both video games and themed entertainment require some knowledge of audio implementation. Even if the sound designer is focused solely on content, they need to have an idea of how audio gets into the experience so they can communicate effectively with the programmer and build content appropriate for the sound system.
Agility
As mentioned in part one, many audio disciplines have more in common than most audio professionals realize. The past few years have shown us that being agile can sustain us through unpredictable circumstances such as labor strikes, economic uncertainty, and pandemics. Opening our minds to applying similar skills across mediums can also open up new job possibilities.