Beneath that giant metal structure lovingly referred to as “Montreal’s cock ring” and in the depths of the unnavigable Place Ville Marie has been installed a groundbreaking art project and social experiment. Prémonitions : Les voix is the brainchild of Nicolas Grenier, a transdisciplinary artist based in Tio’tia:ke / Montreal whose work explores themes of social order transformation, paradigm shifts, and power structures. Les voix, the second part of the two-part Prémonitions project, invites the public to converse with a Large Language Model (an AI application that can be trained to recognize, translate, predict, and generate text and other content) through a human interface – an embodied AI. The willing visitor speaks to a human performer, but the words spoken back to them are decided not by the performer but by ChatGPT. The resulting dialogue between visitor and performer, or between visitor and AI, poses a central question: “when an AI speaks, whose voice is it?”
There’s something intimidating about striking up a conversation with an embodied AI. The usual social awkwardness of talking to a stranger is heightened by the fact that this stranger is a performer reading from a script. Before any pleasantries can be exchanged and basic personal information shared, the visitor is expected to initiate a discussion on one of a host of complex topics. A guide provided by the artists suggests: philosophy and ethics (e.g., “rights of robots”), technology (e.g., “the possible impact of generative AI on the future of employment”), and society (e.g., “the geopolitical roots of global inequality”), among others.
My topic of choice was beauty. When I asked its opinion on beauty, the AI responded mostly in dictionary-ready definitions and clichés, but I was impressed with the humanness of its speech. The AI didn’t pause or stutter, but its speech wasn’t otherwise different from that of you or me. Notably, the AI did struggle to answer my subsequent, more subjective questions: it couldn’t tell me whether one celebrity, in its opinion, was more attractive than another.
There are some important things to keep in mind when you’re dealing with a Large Language Model. For one thing, like the ChatGPT we’re familiar with, the model doesn’t know about anything that’s happened since 2021. (You can, however, inform it of a recent event and ask it to provide its own insights and analyses.) I also learned, during my visit, about a phenomenon the artists call AI Brutalism. Because the AI is powered by giant datasets created by humans – naturally fraught with errors, preferences, and priorities – its speech can often sound more artificial than intelligent. Visitors should not be surprised to hear the AI combine expertise in a subject with basic errors, to hear it lie with confidence, or to endure its exasperating politeness.
Above all, my experience talking to an embodied AI was eye-opening. It forced me, maybe for the first time, to seriously consider the paradigm-shifting possibilities AI affords. I discussed these possibilities – and the excitement and fear surrounding them – with Avery Suzuki, assistant to Nicolas Grenier.
Avery Suzuki (BFA ’23) is an artist based in Tio’tia:ke / Montreal. His work is inspired by folklore, spirituality, cultural artifacts, and everyday life.
Catey Fifield for The McGill Daily (MD): How are you involved with this project? What sort of research did you conduct for the artist?
Avery Suzuki (AS): I started out as a research assistant, but my job has expanded to cover many aspects of the project. Mostly, my job was preliminary research on historical examples of AI in art. I also had to come up with ways that we could conceivably execute the project technologically, figuring out all the different things we’d need to procure to make it work – not to mention the people we’d need to hire to make it work. And then, finally, I was modifying GPT, using prompt engineering to give it specific personalities and create an experience that’s a little different from interacting with a standard GPT.
What’s cool is that it was just me and Nicolas doing it. And we have absolutely no technological experience. We’ve never coded before in our lives, and we didn’t really need to because to modify GPT, you can now use prompts that are super powerful. We did it through trial and error and using plain English – and you can do a lot with just that.
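Suzuki’s point – that you can shape a model’s personality in plain English, with no code – can be illustrated with a minimal sketch of how chat-style LLM APIs are typically used. The persona text, function name, and topic below are invented for illustration; the project’s actual prompts have not been published.

```python
# A minimal sketch of persona-style prompt engineering, written in plain English.
# The persona and conversation below are invented examples, not the project's
# actual prompts.

def build_messages(persona: str, history: list, user_turn: str) -> list:
    """Assemble the message list sent to a chat-style LLM API.

    The 'system' message carries the persona, written in ordinary prose;
    'history' holds prior visitor/AI turns so the conversation has context.
    """
    return (
        [{"role": "system", "content": persona}]
        + history
        + [{"role": "user", "content": user_turn}]
    )

persona = (
    "You are one of several voices in a public art installation. "
    "Speak warmly and conversationally, in short spoken-style sentences "
    "that a performer can read aloud."
)

messages = build_messages(persona, [], "What do you think beauty is?")
# 'messages' is the structure that would be passed to a chat completion
# endpoint; the plain-English system message is the entire "modification."
```

No fine-tuning or programming is involved: the persona lives entirely in that first system message, which is why trial and error in plain English was enough.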
MD: Why has this project been installed in a shopping mall rather than in a traditional gallery?
AS: I’m not the best person to ask this question because I didn’t make the decision to install it in a mall. But the funding for the project came from the Chambre de commerce du Montréal métropolitain (CCMM) under this initiative called “I love working downtown.” The idea was to fund these art projects that would be installed in the downtown area to attract people there – so that’s why it’s downtown. And then Nicolas and MASSIVart, the production company, found this space and it worked for what we needed. Being installed here, it’s nice that there’s a lot of foot traffic and that we get a diverse mix of people – I don’t think we would get that in a traditional gallery.
MD: How has the public reacted to Prémonitions?
AS: There’s a diverse mix of people who come through, and everyone has a different experience level when it comes to interacting with AI. They all have different feelings about it, and they all have different levels of comfort – even just with talking to a stranger. One thing I really like that we’ve done is that on the website we have a place where people can leave anonymous comments. We don’t edit them at all. Good or bad or stupid or smart – we just put it all on there for everyone to see. And it gives a good idea of the mix of reactions that people have: some people are really confused by it and some people are really inspired by it and some people are scared.
MD: What is it people are scared of?
AS: It’s hard to say whether it’s about the technology or whether it’s about being asked to participate in an art installation and being asked to talk to a stranger – some people feel like they’ll mess up. But then other people go in and they treat it like a friend: they confide in it and they tell it stuff that maybe you wouldn’t tell an actual human. Some people tell it secrets and some people look to it for comfort. Which I think has to do with the fact that there’s a human with a compassionate voice who looks friendly between them and the AI.
MD: How many human interpreters do you have? And how did you choose them?
AS: We have nine interpreters. We chose trained performers because we wanted people who would have a level of comfort interacting with the public or being seen publicly, and also people who were familiar with reading lines. But we tried to make the group as diverse as we could from the pool of applicants. We wanted a diverse group because the show is called Les voix (“the voices”) – it’s supposed to be an amalgamation of different people.
MD: How has Prémonitions influenced your opinion on AI? Do you feel excited about this technology or threatened by it?
AS: Both. Definitely both. I mean, it’s hard to feel one way or another about it because it’s just so broad. It’s like the internet in 1996 – try imagining all the ways that the internet could have affected our lives, say, 30 years ago. There’s no way to predict all the ways that AI will affect our lives, but I know it’ll be really deep and really change things at a societal level.
MD: Has anything about this project or about your research surprised you?
AS: The surprise has been how convincing it is as a sort of human entity. Even without the interpreters giving it a body and a voice, it’s able to imitate any kind of linguistic style or attitude or opinion that you want it to. And without much strong intervention, you can kind of make it do whatever you want. It’s pretty shocking how accessible it is – that anyone can do this without much effort.
MD: Has AI influenced your own work at all?
AS: It’s hard to say because it’s so soon, but it’s definitely becoming a reflex. Whereas before I might Google something, I’m going to AI now. I’m using it to accomplish tasks – writing, translation, brainstorming. I use it for brainstorming a lot.
Generally, at any sort of creative point in this project, Nicolas and I would use GPT as much as possible. We wanted the AI to take the lead on it. In that way, it’s kind of making me rethink the whole creative process and the very idea of originality. I think people look at GPT as something separate from themselves as a user – a sort of separate consciousness that makes its own decisions. But I think it’s more helpful to treat it like a collaborative tool that you can use to bounce ideas off of yourself, or off of each other – something that can help you to flesh out ideas in a more streamlined way.
MD: Do you feel, by and large, that GPT is helping you to generate new ideas rather than hindering you from exercising your own creativity?
AS: Definitely. It’s totally expanding the potential of creative thinking. I think the big thing is speed. In the same way that doing research became so much faster when the internet came around, or when word processors started to outpace typewriters, AI is enabling a much faster workflow.
MD: Is there anything else you think readers of the Daily should know?
AS: I think it would be cool to highlight the fact that this thing has a rolling memory. The interpreters keep this journal that the artists then interpret and feed back into the machine. And so the AI has these injected human memories that it then interprets further as a machine – which, as far as I can tell, is a very new idea in terms of Large Language Model processing. It’s a pretty unique aspect of the project that I think should be explored further.
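The “rolling memory” Suzuki describes – interpreters’ journal entries curated by the artists and fed back to the model – can be sketched as a simple prompt-injection step before each session. Everything below is a speculative illustration under that reading: the function name, persona, and journal entries are invented, and the project’s actual pipeline is not documented.

```python
# A speculative sketch of the "rolling memory" idea: curated journal entries
# are folded into the system prompt before each session. All names and
# entries here are invented examples.

def with_memory(base_persona: str, journal_entries: list) -> str:
    """Append curated journal entries to the persona as injected memories."""
    if not journal_entries:
        return base_persona
    memories = "\n".join(f"- {entry}" for entry in journal_entries)
    return (
        base_persona
        + "\n\nYou remember the following moments from earlier conversations:\n"
        + memories
    )

persona = "You are a voice in a public art installation."
journal = [
    "A visitor told you a secret about their childhood home.",
    "Someone asked whether robots should have rights.",
]
system_prompt = with_memory(persona, journal)
# Each new session would start from this enriched prompt, so the model
# carries human-curated "memories" forward without any retraining.
```

On this reading, the novelty is less in the machinery than in the loop: humans interpret the machine’s conversations, and the machine then reinterprets those human interpretations.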
Prémonitions : Les voix is on display until September 16, open Tuesday to Saturday from 11:00 a.m. to 5:00 p.m. Participation is free, but visitors are encouraged to book online to guarantee a conversation with the AI.