Exclusive: From generative imagery to ‘Star Wars’ deepfakes, what AI’s rise means for TV

An AI-generated image from Adobe Stock

Mark Layton (and ChatGPT) explore the myriad uses for artificial intelligence in the TV sector and how future developments could revolutionise the business.

In a short span of years, the idea that artificial intelligence (AI) could become a part of our daily lives has transformed from the stuff of science fiction into very real discussions around how the technology can and does impact how we work.

While we might not (yet) be at risk of termination by a robotic Arnold Schwarzenegger, many in the industry certainly view the rise of the machines as a threat to their livelihoods.

Look to the Hollywood strikes, and the strikers’ concerns around AI, and you will see these fears writ large – enough to bring US studios to a standstill as creatives seek assurances that they won’t be replaced by the technology.

Content creators often grapple with concerns that AI will gradually encroach upon their livelihoods. As AI technologies become increasingly sophisticated, they can generate text, images and even videos with surprising accuracy. This automation can expedite content production, but it also raises fears of human creators being sidelined.

It’s not hard to understand these fears; that paragraph above was written by AI-powered language model ChatGPT upon this author’s prompt – with AI tools such as this just a click away for anyone who wishes to use them.

Software such as Midjourney, Stable Diffusion or DALL-E was used to explore different characters and backgrounds in very early development. “Doing so gives us a lot of options for what direction we are going to go into stylewise,” says Composition. “Once we have a general idea, we put together mood boards for our artists to use.” This image from ‘Once And Away’ is a product of that process

Speeding up development

However, not everyone is entirely opposed to using AI, and some have been utilising it as a supporting tool for some time, particularly in the animation sector.

Carl Reed, producer of Oscar-winning short film Hair Love, and CEO of US animation studio Composition Media (Bad Grandmas, Catapult Feud), tells TBI he has been using AI machine learning to uncover budget trends and to track assets, speeding up production time and predicting future issues.

“So that’s one side, and then in the actual generation of art and animation, during the concepting process, and during development, [to] rapidly generate ideas.”

Reed estimates that AI allows his visual development team and artists to save around 40% of their time by creating generative imagery or by applying some of their work on to footage. He adds that Composition Media is also looking at developing bespoke internal AI tools.

“Generative AI has raised the floor of even what a non-artist can produce. In actual production, you have a very specific need for specific assets – we aren’t there yet as to where it can meet that need, but if we can get there, then that gives a boutique shop like ours superpowers,” explains Reed.

“It gives us the ability to compete at a level that we’ve never been able to compete before; it gives us the ability to operate in tighter teams; it gives us the ability to operate in faster timelines in a market where the demand is at an all-time high.”

These sentiments are being shared across the animation sector. Speaking to TBI at MIFA in June, Terry Kalagian, president of global animation, kids & family, at Gaumont, said that while the Paris-based firm is firmly opposed to the use of AI in scriptwriting, it has been using the tech to speed up the production process on shows still in development.

“We’re not afraid of it and we’re using it as a tool, not to replace anyone, but as an initial inspiration for visual development, so that we can have a better language to go to artists and say, ‘OK, this is the thing we’re kind of looking for’. Before, it would have taken us maybe months to get there, now it’s taking us weeks.”

Benoît Di Sabatino, CEO of Banijay Kids & Family, meanwhile, told us: “We are also using AI as supporting tools, but not more than that, because I think we are opening the Pandora’s box.”

Disney+ ‘Star Wars’ series ‘Ahsoka’ de-aged Hayden Christensen to match his 2005 feature film appearance

Return of the Jedi

The concern is that once that box is opened, it will lead to job losses and could cause even more dramatic repercussions, as is slowly becoming clear.

Since the 1970s, the Star Wars franchise has been helping to push forward developments in both practical and visual effects. Its recent crop of Disney+ shows has been no different, grabbing headlines, for good or ill, for their use of AI in innovative ways.

The Mandalorian, Obi-Wan Kenobi, Ahsoka and The Book Of Boba Fett have all used AI to recreate either actors’ voices or appearances, with AI-driven de-aging and deepfake tech bringing actors Mark Hamill and Hayden Christensen back to the franchise with a younger appearance to match how they looked in their movie outings decades ago.

The technology to take a step beyond a youthful digital makeover – and to instead resurrect an actor (or their voice and appearance, at least) years after their death and insert them in a new show – is now well within reach.

“That’s definitely going to happen; there’s no question at this point,” says Frank Spotnitz, the Big Light Productions CEO behind titles including Leonardo for Rai and The Man In The High Castle for Prime Video, who was also a writer and EP on sci-fi/horror hit The X-Files.

“I’d watch that; I’d be very curious to see Humphrey Bogart or whoever it is. But I think it’s going to be like watching a cartoon; I don’t think it will replace real live drama,” he told TBI at the Monte-Carlo TV Festival in June.

“I think there already are existing copyright laws. I don’t think you can use Humphrey Bogart or Marilyn Monroe without securing the permission of their estate – but they probably will, that’ll happen,” predicts the US writer-producer.

Speaking on a panel at the festival, Spotnitz further suggested that AI-written shows entering the market was an inevitability, and that projects from human creators would, in effect, become premium content.

“I wouldn’t be surprised if, in time, there isn’t a class of entertainment that is wildly popular, and mass produced and inexpensive that is completely AI generated.

“But there will be people who want human created content. It’ll be like fine artists; more expensive, not for everybody. But I want to see Aaron Sorkin write something, not the Aaron Sorkin AI. I want to see Martin Scorsese – I want to know he did that movie by hand, not Martin Scorsese AI.”

Echoing this sentiment, Composition’s Reed told TBI: “You know, McDonald’s didn’t kill restaurants – there’s a purpose for fast food. I will stop there when I need to, but I still enjoy fine dining.”

The ‘Sky And Luna’ poster has a “neat little generative AI trick hiding in it,” says Composition. “The original was vertical, but we turned it into the one you see here using Adobe Firefly’s generative fill technology in Photoshop then hand drawing in details that the AI did not get right by itself”

The real AI revolution

Production and distribution giant Fremantle has not let the AI revolution pass it by either. Even before the explosion in AI usage over the past year, the firm had been working with companies like Papercup on synthesised translations, as well as using machine learning for predictive analytics around different shows.

In August, TBI revealed that Fremantle had promoted long-standing exec Tom Hoffman to a newly created, dedicated role overseeing how AI can be applied to its entire global business.

“Unlike the last major tech revolution with social media, AI is a transformative technology that is growing to underpin everything we already have been using, from cameras to spreadsheets,” Hoffman tells TBI, explaining the need for this oversight.

Tom Hoffman

“It requires a central position of technical empathy to make sure a talented employee who does not know what [AI program] Midjourney is, is not afraid to ask and subsequently learn it. Likewise, if a maverick editor in Germany has discovered a game-changing post-production tool, we win when our editors from Spain to Indonesia also know about it.”

Hoffman highlights the two main uses of AI in the industry, which he refers to as “the creative bucket and the productivity bucket.”

“Generative AI may seem like the most obvious entertainment panacea, but anyone who has used these tools knows they aren’t as creative as you would expect.

“That’s fine, though, because when we start to look at it more realistically, the practical applications start to reveal themselves; like machine learning filters to upscale 1980s TV into 4K HD.”

Hoffman suggests that the true major uses for AI are less about the tech itself and more “the inspiration it is bringing for clever people to look at their work now and come up with new solutions. That cultural shift is the real AI revolution.”
