Musician, songwriter, tech pioneer, and now educator, Thomas Dolby is preparing for a multi-city tour in April 2026, covering the U.S. and UK.
Though he briefly took piano lessons as a child, he was largely self-taught by ear, finding early inspiration in jazz and in electronic groups like Kraftwerk. In his youth he went “dumpster diving” for circuit boards near the EMS synthesizer factory. When Thomas Dolby first started, MIDI had not yet been invented and making electronic music was an exercise in patience and engineering. He famously used the PPG Wave and the Fairlight CMI, instruments that cost as much as an F1 car and required a specialized set of skills to operate. Today, those same sounds are available for the price of a mid-range plugin.
Beginning his career as an artist, he learned every detail of the Micromoog and Roland JP-4 hardware synths to create hits like “One of Our Submarines.” In the decades since, he has seen the music industry migrate from a world of room-sized mainframes, temperamental analog oscillators, and expensive, volatile magnetic tape to the infinite possibilities of the modern digital studio.
Dolby’s career transitioned from hardware to software as he beta-tested for companies like Opcode and Digidesign. He later moved into the tech sector, working at Paul Allen’s Interval Research and founding Beatnik, which provided the polyphonic ringtone engine for billions of Nokia phones.
Now a Professor of the Arts at Johns Hopkins University, Dolby teaches film and game music composition at the Peabody Institute, helping students navigate the intersection of storytelling and technology. He emphasizes professionalism and the importance of finding a unique individual voice in an era of generative AI, arguing that while AI is a powerful tool, it lacks the consciousness and contextual awareness found in human musical collaboration.
Thomas was kind enough to take some time from preparations for his Iconic 80s Recollections Tour to chat with us.
Tell us about your musical background. Did you take lessons as a child?
Transcendent 2000 Circuit Board
I sang in a choir at school and took six months of piano lessons or something like that. I moved schools and my mom forgot to sign me up for music lessons at the new school. I was kind of glad because I wasn’t enjoying it, really.
After that, I was self-taught by ear. I would just copy records that I liked and try and figure out what they were playing. I gravitated towards jazz stuff, not from the virtuoso perspective, but more the composition. I was into Dave Brubeck, Bill Evans, people like that. My connection to electronics really started when I was exposed to Kraftwerk, Tangerine Dream, and Popol Vuh.
The very first synthesizer I had didn’t even have a keyboard. It was a Transcendent 2000. It had been built, I think, from a kit; Electronics Today magazine had published a schematic for it.
What inspired you to get your first synthesizer, and what was it?
Thomas Dumpster Diving?
EMS, the synthesizer company that made the VCS3 Pink Floyd used on Dark Side of the Moon, was based in Putney, which is where I lived. There was a garbage strike one year and the dumpsters were piling up. I went dumpster diving and found this circuit board in a skip (dumpster) from the London synthesizer company.
I also had a Wurlitzer piano at the time that had little built-in speakers. I often had the lid open because you had to adjust the reeds with a soldering iron. So one day, I connected the guts of this Transcendent 2000 to the speakers of the Wurlitzer and I was able to make some good bleeps and blips by twiddling little mini-pots and things like that. But I couldn’t get notes from the keyboard to it. So that was really my first experience with a synthesizer.
Obviously, pre-MIDI…
[Laughs.] Yeah. After that, in London, the center for guitar shops and things was Denmark Street. You could go there and, if you were lucky, there was a Moog or an ARP in the back. If you were really lucky, they would let you play around with it until they kicked you out. I was delighted when Moog came out with the Micromoog, which had only one oscillator but was more affordable. I remember finding a second-hand one and bringing it home on the bus. A truly exciting experience.
The Moog Micromoog
In the early days, all of this synth stuff was rarefied; you had to be Sherlock Holmes to find information about it. Roland was making organs, so I went to a Roland store to try one out. There was this box sitting on top that had Rumba and Bossa Nova and Pop beats on it. You’d press these buttons and out came these drum machine sounds. Later, Roland came out with the Boss Dr. Rhythm DR-55, a little battery-operated box that had four programmable beats on it. So what was happening, I think, was that expensive, rarefied technology was beginning to trickle down to street level, where people like myself could actually get their hands on it.
Let us ask about The Golden Age of Wireless. “One of Our Submarines” is a classic synth song. What were you using at the time that you recorded that album?
I had the Micromoog and, for “One of Our Submarines,” I’d just got a Roland JP-4. It was four-voice, had eight user presets on it, and a bunch of very short-range sliders. It was really great for brassy sounds, and you could get a shimmer from it that up to then had been very hard to get with monophonic synths. So those two were really the basis of my arsenal. I think the bass synths on that and on “Science” were both the Micromoog.
And that arpeggio running through the tune was from the JP-4?
Yes, that would have been from the Roland. The JP-4 arpeggiator.
Do you still have that Micromoog?
I don’t, unfortunately. A bunch of my keyboards lived in a cage in North London for years and when I finally went to pick them up, they said, “Oh, that was broken into a couple of years ago.” Everything was gone. I lost the JP-4, a JP-8, and a PPG Wave 2.2, all in the same raid.
When you were touring to support The Golden Age of Wireless, what was your rig?
I think the Jupiter-8 was the only keyboard that I used on stage. Back then I’d run around with a microphone, but it was really the Jupiter-8 for the parts I played.
When did you transition from using hardware synths to software, and what made that important for you at the time?
Opcode Studio Vision
Yeah, I mean, I think the key moment really was when Macs started having software that was usable for professional audio. Before that, I had the Fairlight CMI. I did a lot of programming on the Fairlight and recorded to multitrack tape. But when the Mac started getting usable software, like Studio Vision from Opcode and the tools from Digidesign, that was obviously very exciting, because it was orders of magnitude cheaper than buying a Fairlight for 80 grand or whatever.
So that was really thrilling. This is where your career and mine started to dovetail together. I was living in Los Angeles, you guys were up in Silicon Valley. I would beta test some of your products, meet you at NAMM shows. I’d make suggestions of things to do with them.
Very often, I would see possibilities of things to do that others hadn’t. I mean, as an example, the first Studio Vision, which had audio recording alongside MIDI. A lot of people were using that to lay down guitar tracks or vocal tracks or whatever. I would take a drum track and chop it up into individual kicks and snares and make my own beats. So I had these little fragments of audio, and I remember showing that to Opcode and saying, “Well, this is how I use your product, but it’s got a lot of limitations. If you’re going to use it like this versus just as a multitrack recorder, then there’s a bunch of different things that need to happen.”
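In today’s terms, the workflow he describes is slicing an audio buffer at the hit boundaries and re-triggering the fragments on a rhythmic grid. As a rough modern analogue only, here is a minimal sketch using the Web Audio API; the slice offsets are assumptions you would find by ear, and nothing here is an Opcode or Studio Vision API:

```typescript
// Sketch: chop a drum recording into individual hits, then re-sequence them.
const ctx = new AudioContext();

// Play one slice [offsetSec, offsetSec + durSec) of the buffer at time `when`.
function playSlice(buf: AudioBuffer, offsetSec: number, durSec: number, when: number): void {
  const src = ctx.createBufferSource();
  src.buffer = buf;
  src.connect(ctx.destination);
  src.start(when, offsetSec, durSec); // the offset and duration do the "chopping"
}

// Re-sequence: kick on beats 1 and 3, snare on 2 and 4, from one recording.
// kickAt/snareAt are hypothetical offsets of a kick and a snare hit in the file.
function makeBeat(drums: AudioBuffer, kickAt: number, snareAt: number, bpm: number): void {
  const beat = 60 / bpm;
  const t0 = ctx.currentTime + 0.1; // small scheduling headroom
  [0, 2].forEach((b) => playSlice(drums, kickAt, 0.2, t0 + b * beat));
  [1, 3].forEach((b) => playSlice(drums, snareAt, 0.2, t0 + b * beat));
}
```

The point of the technique is the same now as then: once the hits exist as addressable fragments, the beat becomes data you can rearrange, rather than a fixed recording.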
And lo and behold, six months later, a revision of Vision would come out that had some of my suggestions in it. It was like, wow, that was really cool. I’d be flying back and forth to Silicon Valley, consulting a little bit with you guys and with Digidesign. I got the bug. There’s no way… once you actually feel the power of a software team taking your artistic suggestions and implementing them, there’s just no turning back from that.
Dave Oppenheim and Evan Brooks were both great listeners. That was one of their strengths, and still is, I’m sure. Your career moved along and eventually you became a tech entrepreneur in Silicon Valley.
Yes. Interval Research was a research company set up by Microsoft co-founder Paul Allen. He put together a sort of think tank of people in the multimedia field to try out ideas that possibly in the future could turn into startup companies or into patents or whatever. I became part of that team and that became a lifeline for a while.
Windsurfing
I’m a very keen windsurfer. A windsurfing friend of mine who’s also a musician and I said, “What we need to do is we need to find a rich guy to pay us to mess around with music software in the morning and then in the afternoon we’ll go shred the bay.” So I would work at Interval in the morning and round about 1:00 my beeper would go off saying that the wind on the water was up to 12 knots. I’d say, “Sorry guys, I have to go to a meeting.” This was like a dream for a few years.
But when that came to an end, it coincided with the days when you could get a meeting with a Silicon Valley VC on Sand Hill Road and they would listen to your wacky ideas about internet startups. And the wacky idea that I had was this: the web was taking off, it was big news. Given the bandwidth that we had in those days, you didn’t put a glossy brochure on the web. You put up individual JPEGs and icons and the text and so on, because that was a more efficient way to use the bandwidth we had than one big compressed graphic, right?
So I thought, well, okay, that’s like MIDI and samples. The MIDI is the HTML and the samples are the icons, the graphics, whatever they may be. And that would be the right way to do music over the internet.
So at the time, I think RealAudio had just started up and it sounded horrible—this was early streaming audio over modems. And I thought we could get much, much higher quality sounding music on the web by breaking up the MIDI and the samples and just sending the MIDI. So you pre-load the large assets and then you tell them how to play back using MIDI. So we came up with essentially MIDI HTML, a language for that, using what became JavaScript in the end.
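The bandwidth arithmetic behind that idea still holds: a score of note events is a few kilobytes, while rendered audio is megabytes. As a rough illustration of the “pre-load the assets, then send only the notes” approach, here is a hedged modern sketch in the Web Audio API; the sample URL and event format are made up for the example and bear no relation to Beatnik’s actual engine or file format:

```typescript
const ctx = new AudioContext();

// One-time download of the heavy asset: a single sampled instrument note.
async function loadSample(url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  const encoded = await response.arrayBuffer();
  return ctx.decodeAudioData(encoded);
}

// A MIDI-style event: when to play, which pitch, how loud.
// A whole song of these is tiny compared with streaming the finished audio.
interface NoteEvent { timeSec: number; midiNote: number; velocity: number; }

function playScore(sample: AudioBuffer, rootMidiNote: number, score: NoteEvent[]): void {
  const start = ctx.currentTime + 0.1; // small scheduling headroom
  for (const ev of score) {
    const src = ctx.createBufferSource();
    src.buffer = sample;
    // Repitch the one sample by its semitone distance from the root note.
    src.playbackRate.value = 2 ** ((ev.midiNote - rootMidiNote) / 12);
    const gain = ctx.createGain();
    gain.gain.value = ev.velocity / 127;
    src.connect(gain).connect(ctx.destination);
    src.start(start + ev.timeSec);
  }
}

// Usage: one small sample can play an arbitrarily long melody.
loadSample("piano-c4.wav").then((sample) =>
  playScore(sample, 60, [
    { timeSec: 0.0, midiNote: 60, velocity: 100 },
    { timeSec: 0.5, midiNote: 64, velocity: 90 },
    { timeSec: 1.0, midiNote: 67, velocity: 110 },
  ])
);
```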
That was the concept for the company (Beatnik/Headspace). I thought, well, if the VCs are willing to fund this for a while with the outside chance that it turns into a successful company, I’m going to get a lot of windsurfing done. And it turned into quite a big deal because we got swept along by the whole wave of internet startups.
Millions of people were using our software, but it was all free, so at the end of the decade we would have gone up in smoke like so many other internet startups were it not for one deal: the one with Nokia. Nokia needed a sound engine to do polyphonic ringtones on their cell phones. Once again, the most efficient way to do that was with a small synthesizer engine and MIDI-type instructions for the notes. They licensed it, embedded it, and shipped it in two or three billion phones.
Last question. You’re with the Peabody Institute at Johns Hopkins now. How did that come about? And I’m really curious what kinds of discussions about generative AI they’re having there.
Thomas directing Music at TED
Well, I teach film and game music composition at Peabody. That’s a great career path for musicians right now, partly a result of the advent of prestige TV and the number of sample-based, orchestral-sounding scores being produced. A wider range of composers are making a living in media composing. And video games, obviously: the music, which people used to turn off, is these days absolutely crucial to AAA titles.
Then, supporting the composers, there’s a whole hierarchy of assistant composers and studio engineers and notation people and so on. I have limited experience in film scoring myself, but I have enough to be teaching it. More important is teaching them professionalism. I wish I’d known when I was their age what I know now about working as part of a team. I sort of just assumed that people would think my music was brilliant and therefore they’d keep hiring me. In reality, it doesn’t matter how brilliant your music is if you can’t hit deadlines and you don’t deliver in the right formats and things like that.
So I teach them professionalism, I teach them how to work with a director who is probably not a musician, how to interpret the director’s instinct about the scene. They get a thorough training in theory and sight-reading and some of them play in the symphony orchestra. And then what I bring them is something a little bit different, which is industry know-how and being resourceful.
It’s very tempting for them to cut corners using AI. Within the last couple of years, things like Suno have gone from being sort of a joke to being… well, when they hand in an assignment, I’m concerned whether this is really them or something they generated using the latest revision of some AI program. But what I try to teach them is that unless you learn to express yourself as an individual and find your own voice as a composer, you ain’t going to have a career. If you end up sounding like 10,000 other guys out there, you just ain’t going to compete on that level.
So tempting as it may be to just hit a few keys and solve your problem with a search or with AI or generative programs, you need to actually get down to the nuts and bolts of it and learn how to do it. If I locked a student in a room, turned off the internet and their phone, and said, “An hour from now I need a cue for this scene,” they’ve got to make do with what’s there. They’ve got to be MacGyver, who could get out of the room with a carrot and a piece of copper wire. That’s when you learn to use your ingenuity, your creativity, your individuality.
Thomas with a fan at TED
So that’s what I try to convey to them. Many of them are scared of technology; they’re scared there’ll be no career for them because AI is going to take their jobs away. But you’ve got to put your fears behind you and just focus on the creative possibilities, which are massive.
One of the things I’ve noticed, being around so much great music in the corridors of Peabody every day, is what happens when they’re really in tune, not just intonation but really locked in. It’s like looking at a flock of birds or a school of fish: you don’t notice the individual decision-making, just the cluster. There is this connection between the tips of their fingers and their brain, the open strings and their neighbor’s open strings, the piano lid that’s open and the sound from the back of the hall. When it’s all working, it’s just a magnificent thing that I couldn’t program in a million years. Why? Because those machines we have, they just follow individual instructions like building blocks. They have no knowledge of the context, they don’t listen to each other, they have no consciousness. I don’t think that’s a natural state of affairs.
Somewhere in this whole morass of AI and deep learning is a future in which the elements of electronic music creation actually have some knowledge of context and relate to each other and there’s a resonance between these different elements.
Wow, that’s a great answer. Are you going to get out to California at some point on your tour?
I’m going to be playing with a symphony next year and as a prelude to that, I’m going to be in San Diego in March doing an intro for Foreigner, playing “Waiting for a Girl Like You” with the San Diego Symphony.
Thomas Dolby is currently preparing for a multi-city tour beginning in April 2026. The tour includes dates in the U.S. (NYC, Chicago, Indianapolis) and the UK (London, Manchester, Edinburgh).
Find out more at: www.thomasdolby.com/tour-dates