Wireless Integration Of Remote Clients For Mixing & Plugin Control
Posted by Nathan Lively
Monday, 2016-12-19

A good operator is as much a part of a performance as a good actor. —Steve Brown

Mixing on iPads is all the rage. Bands are controlling their IEMs (in-ear monitors) from their iPhones. Concert sound engineers are impressing artists by mixing their monitors while standing onstage with them. System techs are adjusting high-frequency coverage from the last row of the venue. Mixing boards are powerful tools, and their power multiplies when they allow for multiple users.

Bob Lentini (software programmer)

Lentini’s Software Audio Console allows remote clients to access twenty-four separate mixing boards within a single console. He talks about how funny it is that he has been offering wireless access to nearly all mix parameters for years, and that the marketing is just now catching up:

Now I laugh when I see all these big consoles with an eight-foot control surface that cost $90,000. When you look at the marketing now, what’s the big push? They are bragging about how they can now mix on their iPad. So what good is this giant control surface? Why would I pay $90,000 to have this hooked up when you are telling me that the coolest thing is to go out in the audience with your iPad? I’m laughing, because SAC was literally designed to work without the need for a control surface, whereas these guys are still designed around a control surface and have now added the ability to get a few faders on an iPad. It is interesting that that’s the big marketing push now. Everybody is standing there with their ad, holding up their iPad with a couple of faders on it. It’s funny because I took note of this idea over twenty years ago. Nineteen ninety-two is when I demoed the SAC concepts for the first time at AES in San Francisco.

GW Rodriguez (sound designer)

Rodriguez is a Southern California sound designer and engineer who helps us imagine an environment where any parameter can be controlled remotely.

Now, I want you to imagine this scenario: you’re the sound designer for a big musical in a venue with a balcony. You can’t afford to hire an assistant to mix for you while you listen in the room to make EQ changes on the actors’ vocals or change the delay times of your front fills. You could just mix the show from the front-of-house position, which is all the way up in the balcony, and make your best guess about the sound of the system, the actors, the instruments, and the sound cues. Or, you can do what I did. I used an iPad running TouchOSC to send Open Sound Control commands wirelessly to a computer running Max, which translated them into MIDI values sent through an interface connected to my digital console.

All right, let me break that down for you. OSC, Open Sound Control, is a communication protocol that was developed to surpass MIDI. It can carry messages between software and hardware in an essentially unlimited way, not constrained to values between 0 and 127. TouchOSC is a paid iOS app that lets you use or create custom interfaces composed of sliders, buttons, knobs, and labels. These user-interface objects send OSC commands wirelessly to another device, typically a software program like Ableton Live or really anything that will accept OSC commands. I decided to send them to Max/MSP. Max is a graphical programming environment by Cycling ’74 that lets you create, control, and manipulate input and output messages, MIDI, and audio. It was originally designed for computer music but has found a wide variety of other artistic and utility applications. I’ve used it to create completely unique sounds, as a teaching tool for visually showing harmonics, and as a basic speaker processor.
I also used it to create an interactive and organic sound design for a production of 4.48 Psychosis by Sarah Kane. The beauty of Max is that it is a full programming language like C, C++, or Java, but instead of writing lines of code, you have visual objects with names that have certain functions. It’s very intuitive: if you want to play a sound file, you simply add a sound-playing object and connect a virtual wire to a DAC, your digital-to-analog converter. I have a show currently running in a small black-box space that doesn’t have any processing equipment. I needed to delay some of my speakers and use a crossover, so I simply built a Max patch that did it all for me. I used QLab, sent my outputs via Soundflower into Max, processed the audio, and sent it to my audio interface. The patch took me a whole thirty minutes to program.
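Rodriguez’s pipeline, TouchOSC sending OSC over Wi-Fi to be translated into MIDI for the console, can be sketched in plain Python. This is a minimal illustration of the two technical steps he describes, not his actual Max patch: the address `/fader1` and the helper names are hypothetical, and only the standard library is used. It encodes a one-float OSC message per the OSC 1.0 wire format and scales a 0.0–1.0 fader value into MIDI’s 7-bit 0–127 range.

```python
import struct


def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message carrying one float argument.

    Per the OSC 1.0 spec, strings are null-terminated and padded to a
    4-byte boundary, and floats are 32-bit big-endian.
    """
    def pad(raw: bytes) -> bytes:
        # Always append at least one NUL, then pad to a multiple of 4.
        return raw + b"\x00" * (4 - len(raw) % 4)

    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)


def osc_to_midi(value: float) -> int:
    """Map a TouchOSC fader value in [0.0, 1.0] to a 7-bit MIDI value."""
    return max(0, min(127, round(value * 127)))
```

In practice the OSC packet would travel over UDP to the listening computer (Max in Rodriguez’s rig), which would apply a mapping like `osc_to_midi` before forwarding MIDI to the console; the point of OSC is that this scaling happens at the receiving end rather than being baked into the controller.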

Dimitris Sotiropoulos (sound engineer)

Sotiropoulos is a Greek sound engineer. He prefers a standard console over software mixing, but he does use his computer for outboard processing. He normally doesn’t work on touring productions big enough to carry a dedicated mixing console, so he uses audio plug-ins to augment a variety of available equipment and create consistency between venues. He carries a collection of cables to connect his audio interface to the local console, and uses TouchOSC on his iPhone to adjust processing parameters wirelessly (like Rodriguez).

What kind of plug-ins are you using?

My favorites lately are Waves. I have the Live Bundle from Waves and I think they have the greatest DeEsser.

What kind of stuff are you using on your vocal tracks, for example? You are using the DeEsser and what else?

I use a DeEsser and a C4, a multiband dynamics plug-in. I’m using my laptop computer with a MOTU 828mk3 sound card and MultiRack by Waves.

I’m interested in this because I did a tour last year where I did all computer-based mixing with Software Audio Console. That was a contained system because I had the preamps and the computer and all the processing and outputs in one rack, so I could just show up and send our outputs to the house system. Tell me how you are hooking that up. Walk me through an event where you use your computer for that.

Most of the small-format analog consoles have Y insert points, so I have a few pairs of insert cables with me all the time, and on my rider I ask for some extras. I figure out what I need; they bring me whatever I don’t have. I get there, we line check, and then I set up the computer. I patch the outputs and the inputs to the insert points. If I have a big setup with drums and everything, I’d probably use the card for the kick drum, the snare drum, the bass, and the vocals.

You’re showing up with your computer instead of a big rack of outboard gear.

Exactly. All in a backpack. You know, it’s really convenient. I’m trying to make it smaller and smaller.
That’s why I love my little 13-inch MacBook.

The latency isn’t a problem?

MultiRack reports which plug-ins have latency and which don’t. This is really convenient, because you can go ahead and say, “I like this plug-in, I like how it sounded at the studio,” and then you plug it in and it has something like 4,000 samples of latency [about 90 ms, or 31 m of acoustic travel; far too much]. It reminds you and you just take it out.

I just found out about this program called TouchOSC. What’s that?

What’s that, dude?! [Laughter] It allows you to communicate with the computer and MultiRack through network MIDI. You have to use a Wi-Fi connection, or I use the 3G connection of my iPhone. Then I assign MIDI controllers through MultiRack. It has a function called Hot Plug-ins for the plug-ins you want to see all the time. So you have Hot Plug-in one through eight. If you have eight little buttons, you can assign them to MultiRack on your iPhone; then you click one and you have your plug-in right in front of you.

See a video of Sotiropoulos’ interface and download his TouchOSC setup.

Nathan Lively

Lively is a San Francisco Bay Area sound engineer. Here he talks about his experience with wireless control on tour.

In early 2010 I went on a four-month national tour with Springer Theatricals as the sound engineer for their musical revue, Route 66. I had a lot of performances over which to tweak the setup, and although it improved with each show, I learned that wireless networks are too unreliable for mission-critical show operation. Even if you create a secure connection, it can be disrupted at any moment by a microwave oven, cordless phone, or Bluetooth device. Wireless networks are great for setup, sound check, and any kind of remote access, but if I did it again, I would use a wired system. The host processing was rock solid; it was only the remote control that was ever in jeopardy. [See Eddie Codel on wireless networks.]

Going into rehearsals I knew that I really wanted a digital board with scene memories, which they didn’t have. I offered to rent them my software mix system, which had the added benefit of wireless control, so I would never need to roll out a snake. It was all contained in one rack. Once that was in place at a venue, all I had to do was set up my laptop and fader bank at FOH.

The first ten performances went off without a hitch. Then, at the end of the second act of the eleventh performance, wireless signal strength dropped sharply and I lost my connection with the host. I ran backstage with my laptop and did the rest of the show from there. For the next several shows I moved the network router out of the rack and built RF reflectors to push signal out into the theatre where I needed it. Next I upgraded my wireless notebook card and added an external Wi-Fi antenna to the router. With each change I made I thought, “This is it. Hello, stable connection!” About three quarters of the way through the tour, I finally got fed up with the problems and spent a day researching wireless networking. I installed network diagnostic utilities on each computer and discovered that my wireless-N network adapter card had caused all of the problems.
I still suspect bad Windows 7 drivers or my PCMCIA card slot, but I switched over to my factory-installed wireless-G card for the rest of the tour and had far fewer problems. I also picked up wireless network analyzer software, which helped me find the best open channels available.