Here’s a provocation for you: AI has been used in the creation of this blog post…
When I say I use AI in my work I meet a variety of reactions. Some people shrug: of course you do. Some people are worried, or even horrified, visualising the forthcoming robot army of the AI apocalypse. But most people are ambivalent. As a creator of immersive experiences (many with a historical focus), what does this mean for emotional resonance? What does this mean for IP ownership? What does this mean for integrity? And how does this differ from using the other technologies already available to makers?
For me, immersive technologies have revolutionised how I can tell stories about history. I was lucky enough to be Creative Director on The Gunpowder Plot (Layered Reality), a multi-sensory journey that plunges audiences into the tumultuous events of 1605. The plethora of immersive technologies available to me was inspiring, daunting and delightful, and I believe The Gunpowder Plot demonstrates the incredible potential of these tools to bring the past to life and, more than that, to explore how the past feels to an audience. We blend live performance, a fantastic script and beautiful sets with immersive technologies including VR, ambisonic sound design, projections, automated control systems, smells, wind-flow, and even temperature to offer audiences a visceral connection to the past. I absolutely embrace technology in historical storytelling; but when it comes to AI there are some darker, meatier issues to contend with.
AI vs Humans: Emotional Resonance
The art world is at the sharp end of discussions about AI as a creative force. There are artists, such as Sougwen Chung and Refik Anadol, who embrace AI: Chung treats it as a collaborator, while Anadol argues that AI, like any medium, is a tool in the hands of a creator. But there are also questions about the value of human intervention and intention, both in the confusion about who owns an outputted artwork (where does the machine begin and the artist end?) and in its inherent value as a piece of art. Does artistic intention = artistic merit? In many cases clearly not. The intention to provoke emotion (or the emotion with which a piece of art was made) doesn’t always translate into ‘good’ art. So imagine the opposite for a moment: if, in the case of AI-created output, you remove human intention entirely, does that automatically make ‘bad’ art? When we, as makers of games and immersive experiences, prompt an output from an AI, does our prompt hold the value, or does the AI-created output hold the value? Or is there any value at all?
The Ethical Canvas
When we apply AI to historical storytelling, we discover a whole new layer of complexity. Like every industry on the planet, the immersive industry is increasingly exploring the use of AI. But what does this mean for historical accuracy?
AI systems often draw from vast datasets that include inaccuracies, biases, or anachronistic interpretations. For example, an AI rendering of Guy Fawkes’ execution often reflects Victorian interpretations (which are more numerous in the available datasets) rather than authentic early 17th-century depictions. You can test this yourself. I use Midjourney, but any of the many AI image-generation models will do.
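If you want to run that comparison programmatically rather than through a chat interface, a minimal sketch might look like the following. It uses the OpenAI Images API purely as a stand-in (Midjourney has no simple public API), and the model name and prompts are illustrative assumptions rather than recommendations:

```python
# Minimal sketch: generate period images from two prompts and compare them
# against real historical sources. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompts = [
    "A street in an English Tudor village in 1605, historically accurate",
    "A street scene in Mexico City in the late 1800s, historically accurate",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3",   # assumed model; swap in whichever generator you use
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    # The point is the comparison: check each output against primary sources
    # (or a historian), not against what merely "looks right" to you.
    print(prompt, "->", result.data[0].url)
```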
While preparing a talk on AI and immersive technology for UNAM in Mexico City, I created an image of a ‘Tudor’ village: the header image for this blog. The result? A distilled, almost caricatured version of Tudor architecture.
I also created the image below, of late-1800s Mexico City:
I asked the audience to judge the accuracy of both images. Dangerously, the mainly local audience thought the Tudor image was correct. For my part, I have little knowledge of 1800s Mexico City and could easily have been taken in by the second image; when the audience was shown it, it met incredulous guffaws.
This phenomenon, which is coming to be known as the “Ouroboros Effect,” highlights a troubling cycle: AI tools increasingly rely on datasets polluted by previous AI-generated content, distilling and distorting history further with each iteration, just like the snake eating its own tail. The issue is exacerbated by reliance on sources like Common Crawl, an indiscriminate sweep of the internet that embeds inaccuracies and biases into the foundations of future AI outputs.
And there are even more worrying effects: datasets often skew toward dominant cultural narratives, marginalising alternative perspectives and misrepresenting underrepresented groups.
Safeguarding Historical Integrity in the Digital Age
So what can we do? Well, we must be aware. We must never undervalue research. We must collaborate with historians, archivists, and cultural institutions. And we must question content and its sources as much as we can.
Transparency is also paramount. Best practice at the BBC for documentary makers using AI (for example, to fill in pixel gaps in historical footage) is that audiences should always be made aware when AI has been used.
Looking ahead, the development of ethical guidelines for AI in historical storytelling is essential. Creating an accurate dataset (the collection of data used to train an AI) is key. For closed-model AIs (those not trained on the Common Crawl), developing a historically focused curator role for datasets could mitigate inaccuracies. Encouraging makers to use closed-model AIs, where the data can be verified, would also be a strong step. We need documentation of sources and peer review of generated content. By holding AI systems to high standards of accountability, we can harness their potential without sacrificing historical integrity.
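As a rough illustration of what ‘documentation of sources’ could mean in practice, here is a minimal sketch of a provenance record for items in a curated, closed dataset. The field names and the example entry are my own illustrative assumptions, not an existing standard:

```python
# Minimal sketch of a provenance record for a curated, closed training dataset.
# Field names and the example entry are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceRecord:
    """One item in the dataset, with its provenance and review history."""
    item_id: str                    # identifier of the image or text in the dataset
    description: str                # what the item depicts
    period_depicted: str            # e.g. "early 17th century"
    source: str                     # archive, institution or publication it came from
    reviewed_by: List[str] = field(default_factory=list)   # historians who peer-reviewed it
    known_biases: List[str] = field(default_factory=list)  # e.g. "Victorian reinterpretation"
    ai_generated: bool = False      # flagged so AI-made items can be excluded or down-weighted

# Hypothetical example entry; the archive reference and reviewer are placeholders.
example = SourceRecord(
    item_id="gp-0001",
    description="Contemporary engraving of the execution of the Gunpowder plotters",
    period_depicted="1606",
    source="(placeholder archive reference)",
    reviewed_by=["(named historian)"],
)
print(example)
```

Even a lightweight record like this makes it possible to audit where an image came from, what it depicts, and who has checked it before it ever reaches a training run.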
Conclusion
AI is both a mirror and a doorway—reflecting our hopes and fears while opening paths to possibilities we’ve only dreamed of. It places powerful tools in the hands of creators, democratising access to technologies that were once out of reach for many. Yet, like all revolutionary forces, it challenges us to think deeply about the stories we tell and the truths we hold dear.
This is an era of extraordinary potential, where the past can be reimagined and experienced in entirely new ways. But with that potential comes responsibility, and a lot of painful questions.
One thing is for sure. AI is not going anywhere. How we use it is the next great adventure. To use a cliche I’m sure ChatGPT would avoid: this is history in the making.
Hannah Price is a director working across immersive, theatre, games, video content and VR. She was Creative Director on The Gunpowder Plot Immersive Experience, which received rave five-star reviews and is now one of the longest-running immersive experiences in the world, approaching its third year. She was also Creative Director on the new Museum of Shakespeare, due to open in 2026. Hannah trained at RADA, on the NT Young Directors Course and at the Donmar Warehouse, and has directed for theatres across the world. She established the international new-writing theatre company Theatre Uncut in 2011; it now works in 25 countries with more than 6,000 participants. As a performance director on games, Hannah has directed BAFTA-winning performances for some of the world’s biggest titles, including Alan Wake 2, Final Fantasy 16, CONTROL, Forza Horizon 4 and 5, Fable, Gwent, The Division, Lego Ninjago Movie Video Game, and many, many more. She directs performance capture, mo-cap, face-cap and voice. In the last few years Hannah has moved into VR, crossover projects and immersive productions, adapting Orwell’s ‘1984’ and ‘Down and Out in Paris and London’ for immersive environments in London and Paris. She directed two 360 experiences for the Barbican before moving on to The Gunpowder Plot, the Museum of Shakespeare and multiple other upcoming projects. She is Co-Director of the immersive storytelling company Buried Giants.