
Voquent CEO Miles Chicoine: AI's 3 Big Threats to the VO Biz

Today, a fascinating discussion with the CEO of UK-based Voquent, one of the major voiceover service providers globally, about their recent stand against AI. It's an especially strong stand in the context of the ones other providers have taken in recent months, and we talked to Miles Chicoine about that stance. Miles has some really interesting insights about AI and how it's affecting both the voiceover industry and the world at large. This is our chat with Voquent CEO Miles Chicoine.

Miles, first of all, thanks for taking the time to join us all the way from Glasgow, Scotland today.

Miles Chicoine:

Thank you, Paul. Nice to be here.

Paul:

You and Voquent have taken a firm stance against AI voice cloning, and I want to get to that. But first I want to ask you: back in 2020, you said that what many expert voice actors don't realize is that AI voices are currently generating an insatiable appetite for real human voices. Now, in 2023, AI has evolved quite a bit since then. Do you still think that's true?

Miles Chicoine:


Yeah. No, I do think it's true. I think that right now, the appetite for human voiceover is as great as it's ever been. I think the market for human voiceover is still expanding. I think that's part of why Voquent as a company is expanding, and it's also why we still see more inquiries and more requirements for human voiceover than ever before. But I know that that's maybe not exactly what everybody is experiencing. If they're a voice actor, they might not be seeing the same thing. They might find that work is drying up or they're going through a more difficult slump. I know a lot of very, very experienced and professional voice actors right now who are saying that this is the quietest downturn in work that they've ever been through.

And I think there are other factors at play. I think it would be giving AI far too much credit to suggest that that's the reason why there's a downturn. I think there are actually much bigger things going on in the economy right now. We've obviously had the whole situation with Russia and Ukraine. We've had inflation rising sharply and then obviously interest rates being pushed up. And when you look at what we're seeing in the economy, the struggles, the fear of looming recession, and what we're seeing in the television economy, certainly in the United Kingdom, it's very obvious that a lot of the television production companies have burned all the budgets they set for what they thought would be a COVID-era or post-pandemic production schedule. And suddenly a lot of media production agencies and a lot of freelancers, not just people who work in performing, but people on all sides of the cast and crew, are desperate for work right now.

So there is a downturn, I think, in the industry that is visible in the States, in the UK, and around the world. We've been lucky in that we're not experiencing that downturn, I guess. But I think that there's definitely an issue right now with the voiceover economy. And part of that is to do with the media, or rather with the lack of work, but I think AI has got a part to play in that too.

But maybe the better question is: is AI causing a downturn, or is it taking work that was never going to get sold in the first place? And I think that's where you can make a case for AI still having, and I need to be careful with my words here, what you could call a positive influence. There are people who are now purchasing voiceover, even if it is through an AI, who would never have been able to afford any kind of voiceover. And the very existence of that has been the basis for a lot of projects that we're getting where people are saying: we did this with an AI voice, the audience hated it, and we'd like to get it done with a human recording.

So to me, those are good examples of where the market is expanding because of AI. But obviously, there's an offset to that, which is the sour point. And I think the sour points are where the real problems are, certainly in dragging down the rates; it starts there instantly. People think they can get a voiceover done with an AI that sounds pretty good for a couple of dollars, or they can spend $300, or go to Fiverr and spend $50, or talk to an agency and spend $1,000 or $2,000, and all of these are big numbers in comparison to a couple of dollars for an AI voice.

And so it's not really an even playing field when you find yourself as a voice actor competing with pretty good-sounding AI voices for corporate explainers or training videos or bread-and-butter jobs that maybe aren't promotional, but where the narration kind of just about works. And I think that's particularly difficult for new voice actors getting into the game. They don't have the professional skills to be able to compete with these AIs. These AIs are tremendous compared to somebody who's just starting out. And that definitely creates a problem in terms of market value and the perception of what a voiceover should cost to today's modern customer.

Paul:

As the CEO of Voquent, I'm wondering, from where you sit, what the primary threat of AI is. Is it that economic pressure? Is it the commoditization of voiceover, or is it something larger?

Miles Chicoine:

Yeah, so there are really three areas where I think AI represents a threat. I'd say that the first is when it comes to remuneration and compensation. This isn't just voice actors. I mean, this is really any artist: they could be a writer, a painter, a singer, an actor, a voice actor. Any art right now is under threat in terms of remuneration, because if your work is being reproduced by generative AI, particularly without your permission (but let's put the permission side aside just for a moment, we'll come back to it), and you're not getting remunerated for that, or if the perception is that the remuneration should be much smaller because it's already so widely abundant and available, that definitely represents an erosion of rates. And there has already been an erosion-of-rates problem for a long time, particularly in the voiceover industry, but across all the arts, where people are struggling to produce beautiful creative works and get fair value for them.

And that's not something that's easy to regulate, because when somebody starts out, somebody says, okay, I'm going to be a painter. Maybe they're not a very good painter, and they want to sell paintings. Are their paintings worth thousands? Is their master worth thousands, or the copies that they got printed? Maybe not. Maybe they're just not worth that much at all. You have to build up a name for yourself and get yourself to that position where you've marketed yourself, you know how it is. And eventually, you establish that benchmark that says, I know what I'm worth, and therefore I have a line that I'm willing to draw in terms of what I charge and what I expect to get paid. No matter who's paying me, this is what I have to get paid.

But obviously, when you're trying to draw that line and you're competing against a couple of dollars, it means that you're having to sell the difference a lot harder. And that's frustrating when clients are sitting there saying, and we get clients who say this to us: why should I work with one of your voice actors when I can get this done for a couple of dollars? It's weird when you get asked that question; you go, is this person really a customer, or is this someone who's just going to waste everybody's time? Because to me, the obvious difference is visible within a couple of seconds.

And I think people sometimes try to use it to beat us up and try to get us to lower our prices: you know what, if you were just a little bit cheaper... otherwise you're going to force me to go the AI route. And we do hear that a lot. But we're a pretty well-placed professional team. We know what our value is, and we know what the value of our voice actors is. And so we don't sit here and go, oh, well, in that case, since you're only going to pay a couple of dollars, I guess we'll knock a bunch off the price and ask the voice actors to knock a bunch off their prices, and we'll just find a way to give this away because you could obviously get this really great quality thing for a couple of dollars.

But I think it can be difficult if somebody has not been in the industry very long and they start to feel that pressure. They start to think, okay, I do need to lower my rates, I do need to charge maybe $20 instead of the $100 or $200 they would've charged, and they're just starting out. And now they're thinking, yeah, you're right, okay, I'll do it for you for 90 or whatever, whatever the logic that goes through their head. For someone who's new in the industry, this is definitely going to affect them. They're thinking, how do I sell against AI? And they're not prepared. They don't know how to sell against AI. So inevitably, it's going to push the rates down, and the rates were already low before AI came into the scene and made everything much cheaper.

The rates were already a problem before we even started Voquent. I could see that when we came into the industry. I thought, wow, okay, there's a real issue around rates and what people are getting paid. There's an oversubscription of people who want to be voice actors, there's not enough work to go around, and this is creating a real problem, and that's going to continue to exist. But I think that problem exists even more because of AI. It's basically making that problem more visible than ever, and it's dragging the overall rates down.

Paul:

So very recently, you, and therefore Voquent, have in a blog post, I don't want to say taken a new hard stance, but you've certainly solidified and clarified your stance specifically around AI voice cloning. What precipitated that, Miles, and why now?

Miles Chicoine:

Okay. I think the thing is, a year ago we had already recognized that there was a lot of concern among voice actors. We speak to voice actors every day. I would imagine that somebody in our production team or talent management team or engineering team speaks to anywhere between 50 and a hundred voice actors a day on the phone. And when you've got that many conversations, you start to realize what matters to people. People talk about their fears and anxieties and their concerns in the industry, and people have been afraid of AI for quite a long time. Some of the scenarios that they're afraid of are a little bit farfetched: AI is going to take over the world, it's going to become the next Cyberdyne, it's going to be like Terminator and what have you.

And maybe that's not farfetched. Maybe that's what's going to happen, and we don't even realize it's coming. But if I'm dead honest, I feel that some of those scenarios are a little bit farfetched. The fear, though, has been very present and very real for a lot of people, and we've noticed that fear getting stronger and stronger for some years now. And so we would do a lot to try to reassure them and say, hey, look, we're in this game too. We're right here with you, and we wouldn't be growing if there wasn't an appetite for human voices. And therefore, you should be confident that there's going to be an appetite for your voices. The work's not going to dry up. Just keep marketing yourself, keep doing all the things you do, and you can still be successful. Don't think you're going to be out of a job because of AI tomorrow.

I think, though, that some of the narrative is starting to change around that, but we've seen it going on for years. And so what we did a year ago is we thought, what we're going to do is just add into our terms a clause that makes it clear to people that if they upload their audio to us, we are not using that audio to build AI. Now, a year and a half ago, it seemed almost kind of pointless to do that, because to train an AI model a year and a half ago usually took quite a lot of work, generally about 30,000 to 50,000 words. Yes, there were closely guarded secrets in development at Meta and Microsoft and what have you, where they had technology that looked like it could be better than that.

But in practical terms, I would say that it would've taken quite an intentional investment of time and energy to build an AI clone of someone's voice. And so at the time, I really wanted us to add that into our terms, really just to offer some peace of mind, not because people are interested in reading our terms, but so that we could point to our terms and they would understand that we have absolute clarity on what our position is, what we're agreeing to, and what they're agreeing to. We are a company that takes its technology seriously; our whole search engine is obviously something that's quite technically advanced. We don't want people to think that because we're technically advanced, we're also secretly doing something that they didn't agree to, which comes back to the permission and consent aspect.

Nobody's giving us their consent when they upload their audio to our site to build an AI out of it. And we wanted it to be explicit that we're definitely not doing that. So if they think that might be happening, we're giving them the assurance that it definitely isn't, and we're ready to commit to that. So we put that into our terms a year ago, and we didn't want to make a lot of noise about it, because we thought that what it would do is draw attention to that sort of fear and anxiety, and we felt it wasn't necessarily a good thing for everybody to be panicking about AI. But then earlier this year it became clear, and I'm not going to name names, but we probably all know who I'm talking about.

Paul:

Pretty good idea. Yeah.

Miles Chicoine:

A platform decided that it's actually a great thing to get on board with AI. In fact, a couple of platforms seemed to think it was a really good idea to get on board with AI, get in bed with it, and turn it into a wonderful opportunity. And obviously, it becomes more obscure how this is all going to work. It's at that point, back in April, that we felt pretty strongly about pointing back to our terms and saying: hey guys, we're here, our terms make it quite clear, and have done for over a year now, that we don't use voices for this. And other people should be making it clear that they don't either, because if they're not making that clear, then their terms do seem to suggest that they can. And that's a gray area that's unacceptable.

And so we pushed that narrative pretty strongly, and I feel good that we did. I think that was the right thing to do, and it was really exciting to see NAVA then come out with their whole agreement. When I actually connected with Tim, he basically didn't know that much about Voquent, and I ended up saying, Tim, this is great, because we were doing this a year ago, and we really hit it off. Ever since then, I've become an advocate of NAVA. I think NAVA's great, and I'm obviously happy to be subscribing to their terms, because to be fair, their terms are really the same terms that we were trying to write a year ago.

But this year, most recently, we decided to take things a step further. And this is important, because I think we realized that it's not good enough just to say to the voice actors, we're not going to use your voice for AI. We need to actually start actively telling customers who do ask: do you do AI voices, can we commission an AI voice through you? We've seen some huge projects, which we haven't generally participated in, I might add. I mean, we've done some great stuff for good causes. We've worked on some wonderful AI projects for people with disabilities and for children.

And I don't feel bad about having done that. But I would say that what's really become more and more visible, and what's really opened my eyes, is the sheer number of people asking whether they could get an AI built with one of our voice actors. We've been open to that as long as the pay has been pretty substantial, and we're generally talking six figures upwards. So these aren't small sums of money. But I would still ask, even now, how much is enough? Right now, the way things are, I would say nothing. There is no amount of money that is going to be worth cloning your entire identity to the extent that now looks possible.

And so it has changed my opinion. But more importantly, when we realized how the technology is progressing and started to realize how it can be misused, it became really, really clear that we need to raise the alarm to the best extent that we can with our influence. We're not going to be the loudest voice in the world in terms of protecting the safety of modern society, but honestly, voice cloning is extremely dangerous, and it's being used in some very, very bad ways. Seeing how effective the technology is, technology that's being sold to consumers for a few dollars, is frightening. And that's why we realized we don't want to be part of anything to do with these projects, at least until such time as very clear regulation is put in place, and that needs to happen at a government level. Self-regulation is not regulation. It's basically asking a burglar to burgle responsibly.

Paul:

Absolutely. I wanted to ask you about that, because as we sit here recording this interview, probably well within the next six months to a year, Apple will release iOS 17, which is fairly substantially rumored to have a function where you can build an AI model of your own voice. Is this good for society, or is it not so good for society?

Miles Chicoine:

Only Apple could probably answer that question. One of the things I made a note of in the article, well, not even the article, our statement, was that Apple was noticeably absent from the technology agreement made between most of the big players, the Amazons, the Microsofts, and the Metas and what have you, in relation to putting in place some form of regulation that addresses things like the safety of people who use these AI technologies in general. Apple wasn't on there, but I think that's not because Apple doesn't agree with the premise of it. I think Apple has very clear ideas about how they think privacy should be protected and in what way.

And I think that Apple's inclusion maybe comes at a cost to the other parties. Maybe they feel that they'd all have to be beholden to the way Apple wants to do things. But however you look at it, I think it's difficult, because you want to trust a company like Apple, with as much influence as they have, and you want to trust that they're thinking about the bigger picture and about the ramifications for society. And then you see how Apple's phones are made in China, and you look at the labor rights issues that are going on there and some of the terrible things that you see on the news, and you think, is Apple a good company?

But then you could say that about a lot of companies: they've got this kind of Western front, they look really amazing and wonderful with great values, and then you find out that everything's getting subcontracted to a Third World country where human rights are a really big issue. But to get on point, as far as Apple's new technology is concerned, I don't know if what it's suggesting you can do is so beneficial to society. To me, it's beneficial to Apple, because they'll find a way to repurpose it in some other way. And I think Apple's got some work to do to give people assurances that those recordings and those AIs being built on your phone aren't capable of being misappropriated, because if they are, that could be deadly. And that's a problem, isn't it?

Paul:

It is a problem. And you have said before that, for instance, a well-modeled AI voice served over a bad audio device, like a phone, here in 2023 makes children and the elderly especially vulnerable. I want to talk about that, because I think it has applications for the voiceover industry. Talk about that a little bit more.

Miles Chicoine:

Well, there have been a number of different stories. I can't give you an immediate link that I could point to offhand, but there are enough of them out there. You don't have to Google very long to start finding stories where cloned voices have become a case of abuse and misuse, and it doesn't necessarily have to be something as basic as fraud; it can be something much more personal, like speaking to someone else's kids for reasons that we're horrified to think about. I think the thing is that if that technology is going to be widely and openly available without regulation, and it is right now, it is absolutely being used for all the wrong reasons. And it's almost like the governments are waiting for enough of these tragedies to take place before they decide to react. Everybody knows there's an issue with this. They know that there's a problem, but everyone's waiting.

And I think the thing is that this is a modern-day societal safety issue. This is a human rights issue: can my voice be cloned? Someone could watch this video afterwards and say, I'm going to clone Miles' voice and make him say something he didn't say. Make him commit an act of fraud, make him endorse a product that he didn't endorse. This goes well beyond the misappropriation where I'm missing out on compensation. I might not want to have anything to do with how this AI clone is being used at all, for any reason, under any circumstance. There's no amount of money that I'd be willing to accept to have my AI clone used for something else. And I think the very fact that that's even possible is a huge problem.

And we can't just sit back and go, well, the government will fix it. They won't. And we can't go, oh, you know something, AI is really creating a problem for performers, and it's a performers' problem, because the rest of the world doesn't look at creative copyrights and compensation for creatives as their problem. People think about their own problems in their own lives. If somebody goes, well, I'm working at McDonald's or Burger King and I'm trying to make ends meet, and I don't know what this voice actor stuff is all about, but if they're not getting paid for their work, well, whoop-de-do, it's not my problem. If they've got a copyright problem, that sounds like a rich person's problem to me. You don't necessarily have to be rich to think that way; you just have to be disconnected from it.

But if somebody were knocking on their door to pick up their kids and it wasn't actually them, then it is their problem, isn't it? And this misappropriation is happening right now, and if it doesn't get dealt with, it's going to keep getting worse. It's something that affects everyone, and I think there's a wider public interest at heart here. I think the government knows that there's a wider public interest in relation to all aspects of AI, but identity cloning, whether it's audio or visual or both, is particularly dangerous, and it needs to be dealt with more urgently than it is right now.

Paul:

And that's where I think the urgency comes from: all of us being vulnerable, especially children, especially the elderly, especially the disadvantaged. I hope, Miles, and you can tell me whether or not you agree, that that urgency will drive some of the evolution of the legislation and the regulation that's going to protect not just voice actors, but all of us.

Miles Chicoine:

Yeah, absolutely. And I think the thing is, if you think about the scope of the problem and how to resolve it: governments don't want to stand in the way of AI being developed, primarily because they don't have a lot of control over it. AI is not run by the government. It's run by private corporations who are trying to monetize every aspect of what generative AI can do. And so I think where regulation needs to step in is when it comes to licensing how audio is used; I think that's what needs to be fixed. The same with video, particularly in the context of creating a clone.

I see this on LinkedIn, people go, oh, pretty soon all my meetings are going to be held by my virtual clone, and they think that's great. And I think, well, that's really wonderful: you're going to attend that meeting with your virtual clone, and everyone else will attend with their virtual clones, and we'll just have a bunch of clones talking to each other. It sounds pretty useless to me, but this is the kind of future that a lot of people are buying into and are interested in.

And I think that the government doesn't want to stand in the way of what they call the developmental progress of tech, but that's primarily because they're not actually in control of it. They're playing catch-up with the tech companies. And the tech companies are looking to monetize every aspect from every angle, particularly the small ones that are trying to get in on the action. If they can just carve out a niche, if they can just be somebody who plagiarizes art, or somebody who plagiarizes video, or somebody who plagiarizes audio, whatever it is they're plagiarizing, if they can figure out how to plagiarize for creating generative works, then they can get on the map, build up the company, get the technology sold to somebody else, and get out, and then it's somebody else's problem.

And so there are a lot of technical opportunists right now, which is making the whole field very, very noisy. But for voice actors, it obviously represents an issue, because it doesn't just affect their safety, it also affects their livelihoods. And so it's a double or even a triple whammy: they may be getting paid less than ever because AI is around; their voice may be getting used without their consent; or maybe they consented to an AI, and that AI is really successful, more successful than they are, and they're trying to figure out how they're supposed to capitalize on an AI that they haven't managed to make any success out of, and they just start to feel used and abused. And then, of course, there's the safety issue. So I think all these things affect voice actors particularly badly, and actors as well. And I think that problem is going to get worse before it gets better.

Paul:

Well, and I think that's the crux of the issue with the current SAG-AFTRA strike here in the States. Both the SAG-AFTRA and the Writers Guild strikes are essentially about the same issue, and that is we're fighting not to be eliminated by AI, whether it's a mocap performer, an on-camera actor, a writer, or a voice actor. And this is going to affect pretty much every creative, as you said at the top of our chat: this doesn't just affect our industry. The AI impact is going to be felt through every realm and genre of art as we move forward. So the battles we're all fighting, whether as writers, actors, producers, sculptors, you name it, we're all fighting for each other when you really get down to it.

I want to ask you one last question, Miles, and that is: we've talked at several points, even in this chat, about the gap between the technology and the regulation, the legislation. Do we ever close that gap? Do you see a world where we're ever able to either remove the gap or at least minimize it?

Miles Chicoine:

Yeah, I mean, I think regulation could start to lock down the sale of cloning technology in the jurisdictions it operates in. It could be locked down, it could be federated, it could be moderated, and I don't mean self-moderated. It could be government-regulated, the way that in the United Kingdom you have the FCA, the Financial Conduct Authority, which plays a big part in making sure that people aren't able to organize financial schemes just because they decide they want to; they're subject to regulations and have to demonstrate that they're going to be responsible with how they deal with other people's money. And I think that if the models that are trained to clone voices become something of value in themselves, they take on a quality similar to a currency, and therefore they should be subject to the same level of scrutiny, oversight, and responsibility.

And the way to start would be to make it a little bit more difficult to sell these things without a lot more questions being answered right out of the gate. Like, what specifically is this going to be used for? Has the person who's being cloned specifically given their consent for that specific use? And will they always have the ability to pull the plug if they want to? Because there are people who've been asked to do recordings under contracts. I know people right now, people I'm speaking to, whose voices are getting used for AIs, and those AIs were built on audio that they gave 20 years ago. The problem is that they signed a contract not really thinking that they would necessarily have a problem 20 years on, but it is a problem if they feel that they're ultimately going to end up competing with a younger version of themselves as an AI.

But there's one thing that I think should be said, Paul, and that is we shouldn't give AI as much credit as it's currently trying to take. AI still has a long way to go to be as good as the human voice, and if there's anything we should feel good about, it's the fact that it's still really easy to detect. And people still really hate AI voices. There's nobody who wants to watch a really great film, listen to a great audiobook, consume a great series on TV, whether it's animated or acted, or play a video game, and hear a crummy AI voice that they can detect within seconds. It just makes them feel a little bit sick. They just think, oh, this isn't really good quality material. This is cheap crap. Anybody could have made this. Anybody could have written this. It could have even been written by an AI.

People's sensors automatically go off and say, this is garbage. Anything AI is currently associated with, at least from an audio or video perspective, is garbage. Will that change? Will it get harder to detect over time? I think it will. I think that's where the real challenges are. But for now, for the time being, and I think for the near-term, foreseeable future, no change: it's still garbage.

But will it always be that way? I don't know. It's really hard to say. A few years ago, I would've thought that was 10 years away. Maybe it's more like five years away. But I think we've still got a good five-year run where humans can absolutely demolish an AI voice comfortably. No one's going to pick an AI voice for anything that has any meaning or budget. And so people aren't losing work, I think, to clients who have money and who want to protect their creations with a great quality product, whether it's a great quality training video, a great quality corporate explainer, a great quality advertisement or promotion, a great quality radio ad, even a great quality IVR voice; there's no comparison. Anybody who wants quality is not going to go to an AI provider. But it's still there, it's still an issue, and it is getting better. There's no doubt.

Paul:

So some cause for hope and yet still some uncertainty. Miles Chicoine, the CEO of Voquent. Miles, we really appreciate your time today. Thank you so much for joining us and furthering the conversation around this issue that's going to continue to evolve at light speed. Thanks for your time, sir.

Miles Chicoine:

Thank you.