About a third of the way up Queen Street used to be the location of a true Auckland CBD icon: the QF Tavern. It has since moved (around the corner to Wyndham Street), and in its place has sat a sign saying “Watch This Space”. For the next seven weeks, “Watch This Space” will be replaced by HyperCinema, which purports to be “the world’s first live AI experience” and “a revolutionary leap in entertainment”.
It’s the co-creation of Dr Miles Gregory, formerly of the Pop-Up Globe, and Tarver Graham, founder of creative agency Gladeye. The press release describes it as incorporating “theatre, film and digital technology”, but I’d mostly describe it as an interactive art experience.
It can be experienced in groups or solo – I did the latter, due to it being a workday and most of my friends being artists whose livelihoods are potentially at risk of being replaced by AI. I am at best agnostic towards it and at worst dreading it. The most useful application of AI that I’ve found is that hearing it brought up in conversation tends to be a great indication of when to walk away and talk to somebody else.
The start of the experience involves taking a cube (allegedly it stores the data recorded throughout the process, but my scepticism and Luddism remain) and placing it into a socket. Each participant then has photos taken of them from multiple angles, a painless process that takes about two minutes. I tend to dissociate when I have my photo taken – a mental state that adequately prepared me for what followed.
Following that, you take your cube and complete a questionnaire. You enter your name, your favourite vegetable and what superpower you would like. The last two questions, which I won’t spoil here, are disarmingly deep, and I picked answers I thought would throw off the AI. For a moment, I pictured myself as some sort of unfit Sarah Connor, trying to outwit SkyNet, and probably failing.
After that, you wait – roughly 10 to 15 minutes – for your AI film to be generated. The first “hyperfilm” in the series is called Enter the Multiverse, and is spread out across three spaces – a cinema, a gallery and a projection space, the last of which was being kept secret.
Most of the AI art I’ve seen looks, unsurprisingly, as if it was generated by a machine trained on the work of many, many artists of varying qualities and styles. It doesn’t feel like art at all – it feels like the concept of art. There’s a disarming lack of specificity to it.
There is something specific to HyperCinema, though, and it’s your face.
While I sat in the cinema with fresh hope, having been thus far impressed at the technical achievement, my dissociation kicked in as I saw my face, over and over again, placed on to the bodies of AI-generated characters. There was a Sam Brooks who was an actor, a Sam Brooks who was an entrepreneur, and many versions of Sam Brooks doing or saying things that I had typed into the questionnaire. These characters were of various genders, ages and races, although that may be the result of my racial ambiguity, which has foiled many intelligences, artificial or otherwise.
The experience of seeing your own face plastered across multiple bodies it does not belong to is quite alarming. The quality of the 10-minute film is sort of beside the point, although it is fairly easy to tell human actors from AI-generated actors. One, I am not an actor and I haven’t done the things that I was shown doing on-screen. Two, human actors, even bad ones, tend not to cycle between facial expressions on a loop. Three, I already know they’re AI actors because that’s the whole point of the entire experience.
The second part of the experience – a gallery with AI “paintings” – is less unnerving. You enter the room and are surrounded by portraits adhering to certain themes. There are elves, pirates, glamorous movie stars, so on and so forth. Your cube goes into a socket next to each portrait, and the portrait’s face morphs to suit your own. It’s neat, although it’s basically just one of those fairground amusements where you put your face through the hole and take a photo.
I never thought I’d see myself dressed up like a cowboy in a painting, and I definitely never thought I’d see myself in a painting dressed as a cowboy with seven fingers on each hand. AI, it seems, may have mastered superimposing faces on other faces, but “hands” remain, somewhat ironically, out of its grasp. (Also, to be pedantic, these are not paintings. Paintings, as the name suggests, involve paint.)
As a novelty, which HyperCinema absolutely is, it’s definitely technically impressive and it has clearly taken the dedicated work of many technicians to turn it into a proper experience, rather than a mere curiosity. If people are intrigued by the capabilities of AI and want to test it out, this seems like a good way to do that. I also imagine children (or adult narcissists) will find seeing their face everywhere highly entertaining for the promised 45 minutes of the experience.
As entertainment, I found HyperCinema lacking. But as a thought-provoking experiment on the role of AI in art, it was a remarkably useful way to quickly and efficiently find out how I feel about it. And how I feel about it is not great!
As an artist living and working in a country where the arts sector is struggling and the work it produces is undervalued, financially and culturally, I find the use of AI in an artistic context to be spiritually bankrupt. So, I am not the target market for this. I would much rather see the art made by my colleagues, my friends or my own imagination.
I walked out of that space, turned around, and thought about the QF Tavern. The number of nights it served as a second, third or fourth location. The countless beers pulled at the bar. The diverse patrons – crotchety regulars, Devonport residents sneaking in a quick beer before running to catch the ferry, out-of-towners led astray – who have occupied it. Give me those people, and their stories, any day.