
In summer 2023 Ieva (pictured, middle left) put out a call for people to join her in a collective application for a show at Hypha Studios in Mayfair. Hypha are an organisation that takes stewardship of meanwhile spaces and offers them out as studios and exhibition spaces to artists, curators and other creatives. Unfortunately our application was rejected, but we decided to stick together as a collective, apply to more open calls, and try to find an exhibition space where we could work together.
In December we heard back from Hypha about a different space we had applied for. A group of us met them on a video call, expecting to have to finish our pitch for the space, but instead we were greeted with a "congratulations, we'd like to offer you the space; here are all the admin tasks that need to be completed as part of the terms."
Naturally we were overjoyed, and so began the month-and-a-bit process of making my piece, and of us getting ready and moving into the space for our month-long exhibition: Dissection.




A short digression. During the Autumn term of 2023, having finished my tenure on the course, I was employed as a TA on the Programming for Artists and Designers module. Also offered on the course was a new module, Critical AI, led by Dr. Dan McQuillan. It fit neatly into my timetable, and I was already interested in the subject, having used AI extensively over the past year in my work.
The module very quickly became my favourite of the entire course. Dan's wonderfully human analysis, demystifying the hype and marketing that is swallowing the tech news landscape, gave me a solid grounding in what the materiality of AI is and how I might rethink my use of it.











We spent a session looking into the hidden labour that creates datasets, as well as their misuse, their intellectual property questions, their inherent biases, and other limitations. It was while looking at one particular image in the ImageNet dataset that I found a little wedge I wanted to explore with my practice.

This image from the ImageNet database is labelled "Bagel, Beigel". Now, it may be correct that somewhere in this image is a bagel (it's so tiny I'm not sure how you could determine that for certain), but I'd argue that this is not a picture of a bagel. If I were to say what this was a picture of, it would be "a woman having breakfast". Maybe that interpretation would not be universal, but the point I want to make is that the bagel is not the subject of this photo; in fact the "subject" is not actually in the photo itself, but is rather a set of relations between objects that are visible in the photo.
This is where the a-contextual nature of AI creates a problem: how can we be sure that an automated system built to detect what it can see is observing accurately? Now, I don't want to fall into a god-of-the-gaps fallacy here: just because I don't know of a system that can understand context doesn't mean context isn't a computable problem. With ChatGPT it is clear that indirect inferences can be made about a given dataset. But this all rests on a correct, accurate, and detailed labelling of that dataset.
When we learn from our experiences, there is a wealth of information that we bring with us and can draw on when forming an inference about a subject. We understand time of day, lighting conditions, social environments, the typical foods for each of those, what the setup might be, whether it is formal or casual, whether it is high quality or not. These are just a few things that come immediately to mind when I look at that image, and I believe I could offer a suggestion on each point. All of this is knowledge that could be made computable, I'm sure, and given to a sufficiently large engine that could start to understand the complexities that go into an image like that.
But the crucial point is that for the ImageNet dataset, that work wasn't done. It takes not only a lot of computing power to meaningfully extract anything from such data and the connections that might be drawn between different points within it; it takes a bucketload of labour to make these labels in the first place. To produce that level of detail would require a great deal of distilled human experience, and even then it would be a massive reduction in resolution compared to the intrinsic understanding we carry with us when we look at an image such as the one shown and make a judgement as to its content.
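To make that reduction concrete, here is a minimal sketch of what an ImageNet-trained classifier actually hands back for a photo like the bagel one: a single label and a confidence score, with none of the relations that make up "a woman having breakfast". The model choice and filename are illustrative assumptions, not part of my own workflow.

```python
# Illustrative sketch (not my own workflow): what an ImageNet-trained
# classifier returns for a photo - one label, one score, and none of the
# breakfast-table context. The model and filename are assumptions.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()          # preset resize/crop/normalise pipeline

img = Image.open("breakfast_photo.jpg").convert("RGB")   # hypothetical filename
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top_prob, top_idx = probs.max(dim=0)
label = weights.meta["categories"][int(top_idx)]   # e.g. "bagel" - a single noun, no relations
print(f"{label}: {top_prob:.1%}")
```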

What all this brought to mind was a photo of a murder scene, or something else gruesome and horrible. There is no death to point at in an image. A still image doesn't intrinsically hold time in the way a video does, so what is in the image isn't necessarily something capturable in a shutter flash. With something as extreme as a murder scene, we might look at the image, see none of its active parts, and yet come to the conclusion of the whole, especially if given a headline or some warning. Passed to an AI, we might simply get back a knife and a person.
This idea is explored obliquely in Andy Warhol's Death and Disaster series. Warhol took large media stories, newspaper headlines, front pages and photographs, and turned them into objects of inspection and introspection. We see images like these on a daily basis; we are bombarded with the gruesome and the terrible because it captures our attention and keeps us buying papers. CNN and Fox News built on this format with the 24-hour news cycle and the sowing of fear as an effective product, manufacturing a greater desire to know what's going on in order to protect oneself from the fear, and thereby perpetuating it.
We become desensitised to this imagery, requiring ever more of it to trigger the same hormonal responses that give us that strange kick of being sad, or scared, or angry. In the same way we stop seeing anything in the images; they become symbols that hold the space for this emotional response. The headline tells us how to frame the image, we see the image, insert sadness, then we move on, never having inspected or truly engaged with what we were looking at.
Warhol attempts to interrupt this. By framing these as artworks, whether simply the image, or the image with its headline, we are given the space, the gallery space, in which to look at what is occurring. It is there that we can examine what these images become as media, what the mediatisation of these figures does to our own comprehension, and reflect on the humanity, or lack thereof, present within the picture.






Human agency therefore becomes a key part of the conversation: decisions about when, how, and to what we apply attention. So what happens when this agency is machinic? When an AI system is brought into the fold of recognition, analysis, and decision making, how does this affect our capacity to trust images?
The series of prints or paintings (I'm not sure exactly what to call them) I have made as the first work within this project, which I would like to continue, takes photojournalistic images from the current conflict in Gaza and runs them through the YOLOv8 object detection system. I'm sure there are technical improvements that could be made to my methodology, perhaps training my own model or finding a dataset more tailored to looking at crowd scenes. But in my use of the system I received outputs that foregrounded something that goes missing when images become symbolic. "Person"s populate the canvases, highlighting that there is a humanity here even when there is little other information to gather. And equally, the obverse shows just how much information is lost in order to make an image computable.
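For anyone curious about the technical step, below is a minimal sketch of the kind of pass the images went through. It uses the off-the-shelf ultralytics YOLOv8 package; the specific weights, file paths, and the choice to render detections as white boxes on a black ground are assumptions for illustration rather than a record of my exact pipeline.

```python
# Sketch of the detection pass behind Missing Persons: run YOLOv8 over the
# source photographs and render only what the model labels "person".
# Weights, paths, and the box-on-black rendering are illustrative assumptions.
from pathlib import Path

from PIL import Image, ImageDraw
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")    # pretrained COCO weights; class 0 is "person"
out_dir = Path("outputs")
out_dir.mkdir(exist_ok=True)

for img_path in Path("source_images").glob("*.jpg"):
    result = model(str(img_path))[0]               # one Results object per image
    h, w = result.orig_shape                       # original image height and width

    canvas = Image.new("RGB", (w, h), "black")     # echoing the black-painted canvases
    draw = ImageDraw.Draw(canvas)

    for box in result.boxes:
        if model.names[int(box.cls[0])] != "person":
            continue                               # keep only the people the model "sees"
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        draw.rectangle([x1, y1, x2, y2], outline="white", width=3)
        draw.text((x1, max(0, y1 - 12)), f"person {float(box.conf[0]):.2f}", fill="white")

    canvas.save(out_dir / img_path.name)
```

The segmentation variant of the same model would give filled silhouettes rather than bounding boxes, closer to the flat shapes that ended up on the canvases.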
The final matter is the symmetry at the heart of why I chose these photos, constructed them in this way, and called them Missing Persons: the AI doesn't identify every person in each photo. As the photos are of people searching for missing people trapped in the rubble, a symmetry arises: these are images of missing people that are themselves missing people. This is a task a human would perform far better. Again, this isn't to suggest that the problem is wholly uncomputable, or even that an AI existing now couldn't identify people in photos better than the average human; but when such a system has agency in the world, the issue ceases to be a technical one and becomes one of ethics and responsibility. (As a tangent, there is an argument that the application of AI, or even data gathering itself, has ethical questions wrapped within it.)
The Israel Defense Forces (IDF) have employed a system called "Habsora" (which translates to "the Gospel") as part of their target selection pipeline, upping the number of targets suggested for bombing from 50 per year to over a hundred per day. I think it's a fallacy to imagine that the human operating step between this system and a bomb being dropped is exercising the appropriate level of diligence as to whether each of these is individually a legitimate target based on the incoming data. If that were the case, there would be no need or incentive to employ the system in the first place. What else is an AI decision-making system for, if not to make decisions? As such, we fall into the trap of the AI producing the truth of the world. A legitimate target is whatever the AI has determined to be one, and this will no doubt be fed back into the training data for the model, solidifying its legitimacy regardless of whether it would have passed muster as a target otherwise.
The machine's agency is thus enacted upon reality, upon real people, in situations of life and death. Whether these decisions are "right" or "wrong" (insofar as such concepts are even applicable in a situation like this), the responsibility for them is diffuse. The AI cannot stand trial for a war crime it might commit. Can the operator? They might blame the developer (such a system is unlikely to have been made in-house), who might blame faulty data, or misuse of the system. I fear this bucket of responsibility being kicked around in a way that makes it a technological question rather than a human one: "How did the technology fail?" instead of "What assumptions inherent in its use and application devalue the lives of those affected?"







The paintings were created using a block/stamp-making technique I developed with the laser cutter: first engraving the image onto card, applying glue to the engraving, sticking another sheet of card to the first, and then cutting the same lines through only the top sheet. This worked remarkably well when the laser cutter was tuned correctly. The sheets were then used to block-print onto canvases I had painted black. I think the block-making technique is effective, though my actual printing perhaps isn't. The blotchiness and patchiness are welcome, and I enjoy the analogue randomness and imperfections, but these are less choices and more a skill issue, I feel.


We installed our work hurriedly in the week before we opened on 18th January. I wrote the copy for our press release, and proposed dissection, as a methodology of investigation, as the theme that could link our work.




Here’s the gang all together at the opening. From left to right: Ieva Vaiti, Maria Gracia Cebrecos, Stathis Doganis, Yimiao Pan, Sofia Taipa, me, with-lasers (Nathan Adams), Ewan Peng.












Missing Persons 1-7
2024
Oil on Linen Canvas
60 x 40 cm
Using an AI object recognition system, Missing Persons takes seven photojournalistic images of Palestinians digging, searching through, and surrounded by rubble in the aftermath of Israeli bombings during the 2023 War in Gaza. This AI lens identifies and silhouettes all recognised objects within the source images; while it makes the identification of "persons" in a given image available to a computer, and thus to further computation such as statistical evaluation, it fails to capture all of the people in the image, and in doing so de-contextualises the image from the horror and reality that is immediate to a human viewer.
Missing Persons' dissection of these images highlights the increasing abstraction of war, rendering its subjects into data points. Palestinian targets for bombing have increasingly been selected by an AI system known as "The Gospel", which has upped target identification numbers from 50 a year to 100 per day, continuing a process of dehumanising the very real people affected by operators' decisions.
Credited Images: https://www.reuters.com/pictures/israel-pounds-gaza-hostage-release-looms-2023-11-22/
1. Palestinians search for casualties at the site of an Israeli strike on a house in Rafah in the southern Gaza Strip November 23. REUTERS/Ibraheem Abu Mustafa
2. Palestinians search for casualties at the site of an Israeli strike on a house in Rafah in the southern Gaza Strip November 23. REUTERS/Ibraheem Abu Mustafa
3. Palestinians conduct search and rescue works on the site of an Israeli air strike on a house in Khan Younis, in the southern Gaza Strip November 22. REUTERS/Mohammed Salem
4. Palestinians search for casualties at the site of Israeli strikes on a house in Rafah in the southern Gaza Strip November 22. REUTERS/Bassam Masoud
5. Palestinians search for casualties at the site of Israeli strikes on a house in Rafah in the southern Gaza Strip November 22. REUTERS/Bassam Masoud
6. Palestinians conduct search and rescue works on the site of an Israeli air strike on a house in Khan Younis, in the southern Gaza Strip November 22. REUTERS/Mohammed Salem
7. Palestinians gather as others search for casualties at the site of an Israeli strike on a house in Rafah, amid the ongoing conflict between Israel and the Palestinian Islamist group Hamas, in the southern Gaza Strip November 23. REUTERS/Ibraheem Abu Mustafa

And finally, I also showed my Portrait of the Artist as a Young AI print, in much better lighting than it has ever been displayed in before.
As _threadsafe, we'll have another event in April at ArtHub in Deptford. I have a couple of pieces of writing from the Critical AI module that will be posted soon after this post.