Time Machine at Hayward Gallery
This piece of writing was an assignment on the Critical AI module that I audited in Autumn 2023. I thought there were a number of parallels between the labour relations of the work undertaken to make “AI” happen and those of some of the images and subjects in Hiroshi Sugimoto’s retrospective exhibition “Time Machine” at Hayward Gallery in the same period. This is not an extensive review of the entire show; it focuses mainly on a couple of series and their respective curatorial framing. It’s been a while since I did an exhibition review, and this was quite fun. Now that I have a little more time, perhaps I’ll do more if other shows give me the same compulsion to analyse them.
This essay investigates the positioning of Hiroshi Sugimoto’s photography, or his camera, as a “Time Machine”, as in the title of the recent exhibition at Hayward Gallery, London. I’d like to compare this metaphor, and the attitudes employed in the curation of the work, to those of the Artificial Intelligence (AI) hype cycle: to the production of both types of artefact, and the underlying assumptions made in their representation.
We begin our journey through the gallery space with implausibly dramatic wildlife photography, the type adventurers spend a career trying to capture. The works are in fact of dioramas from the American Museum of Natural History in New York, as stated on the wall plaque. These Victorian-era scenes (assuming the displays had remained unchanged over the 70+ intervening years) captured Sugimoto’s attention as “utterly fake”, but when viewed quickly through one eye, “suddenly they looked very real” with the loss of perspective.

The camera Sugimoto employs takes on this monocular perspective, and through care and technical prowess he is able to transform these “fake” articles into something resembling reality. We observe a flattening that takes place with the intervention of technology, and in this explanation the curators present another flattening Sugimoto employs within his work: the reality presented is already a lensed fiction, a caricature of nature by teams of artists, a series of decisions, extrapolations and fantasies of the natural real.
Sugimoto’s historical portraits illustrate the fiction of images more clearly than the natural scenes. Portraits of Napoleon, Queen Elizabeth II, Henry VIII, and Fidel Castro, among others, are presented in the same greyscale camera wizardry, breathing an ab-life into them. The photographs are of wax figures normally displayed at Madame Tussauds, and here begins the rift between the photographs and their curatorial positioning as having been brought “to life” by Sugimoto’s technical mastery.
Before Sugimoto spent a night with each of these figures in turn, they were the product of many hours of labour by a team of artists, practitioners, and perhaps historians (though the waxworks are all based on readily accessible images). This labour is hidden from view by Sugimoto’s “portraits”, as it is by the attraction itself, which has nothing on its website about the creation or maintenance of the waxworks. This is not to say there is no information available; behind-the-scenes videos exist online. But the point I would like to illustrate is that these objects exist solely as entertainment, with no need to credit the work behind them as part of their public presence, other than attributing it to the Madame Tussaud who began the establishment.

This is the first bugbear I have with the “portraits”: we are told both that the photographic nature of the images is what makes these figures almost alive, and, in consequence, that the skill of liveliness is in the photograph, not within the model. The curation talks about the lighting and photographic effort softening the wax and the artefacts of their actual nature, which suggests that, without Sugimoto’s camera, they are obviously fraudulent objects, not realistic enough to trick anyone. I find this almost laughable, as trickery is the very purpose of the waxwork museum, and it does a convincing enough job that most photographs of the models require a double-take to register their unreality. The framing downplays the skill and effort of the sculptors by appealing to a perceived truism that such work is never good enough to be lifelike in the way that the technological mastery of Sugimoto’s photographs can now muster.
The waxworks do something Sugimoto does not: they have colour and three-dimensional form. The camera here exploits its medium specificity, inherently providing less information than the sculpture would, to suggest that a magic trick is being performed, one that manages your perception as a viewer to instil a sentiment of liveliness in the subjects.
This is a feature of AI technologies being rolled out across a variety of industries, positioned to replace humans at tasks, if not whole jobs. Cost-saving, efficiency, accuracy, and many more reasons are given for this rapid implementation, but at its heart is the fear of being outcompeted by a market rival who does it first and, through the exponential gains of AI, quickly comes to dominate.
But in these implementations there is often a sneaky reduction in the resolution of what a task consists of, paring it down not into its productive parts but into the aspects that can be done by a computer. In essence, these systems reduce the task to a paradigm in which the computer can act, have it do so, and then declare that magic has been performed as it outcompetes a human at that same re-evaluated task. This forgoes all the labour performed by the human that falls outside the shifted goalposts against which they are now measured; labour that may have been integral, but not entirely purchasable. Yanis Varoufakis explains that a capitalist cannot buy labour directly: the sweat of a factory worker’s brow, the inspiration of a marketing guru, the carving of a waxwork. What can be bought from the labourer is only their time, within which, at their capacity and to the extent that they perform satisfactorily, they work and produce goods. This relationship is not the same when the labour is itself capital, as is the case for AI, as it was for the cotton mill.
The cotton mill didn’t create better cloth, of higher quality or more elaborate fashion, than the cottage industry it replaced; it simply made cloth consistently and faster, and in the process reoriented human labour from skilled craft to lesser-skilled, repetitive tasks, easily replaceable by another worker, or by another automated process should one be developed. AI is set to do the same to cognitive labour, and it does so not through response to novelty but through replication of behaviour trained into it. Labour that is invisible in the output, and completely obfuscated by the model it was used to create, is hidden in plain sight as the mundane regression to the mean of the outputs it provides.
Such hidden labour is rife within attitudes towards AI. The added value, skill and effort in AI conversations is attributed to the model, to the organisations and engineers who created it, to the artificers of the technology, but not to the countless hours of labour that provide all the knowledge within a dataset. Tesla, like any self-driving car company, uses user-created data of real-world driving to train and fine-tune its models. This knowledge, this understanding of how to drive, is a social commons, a collective understanding, which is being harvested by such tech giants and re-presented as their original work.

My second issue with the “portraits” addresses why I put the word in quotes. These are not portraits. It’s simple to state that neither Sugimoto nor the camera existed to photograph Napoleon, but grander than this is that these are photographs of waxworks. Still simple? Okay: these waxworks are all based on photographs or paintings, which were political symbols before the camera shutter ever closed. Their unblinking regurgitation lacks reflection on what the poses being struck are, what purpose they held, or what their context was in relation to their medium.
This is not to say that decontextualisation of these historical figures isn’t valuable. Removing them from the environments of their respective images might do something to address the person; I think of Alison Jackson’s lookalikes performing their lives outside the public eye, or Cindy Sherman’s unreal film stars as aspects of her self-portraiture. Through these lenses the figures can be examined, as people, or as symbols standing in for people. Sugimoto’s figures, however, were not dressed down, and the waxworks were not moved to alternate environments that would allow the viewer to consider their subjects in a new light. The photographs instead perpetuate, unquestioningly, the images these people wanted of themselves.

Further, the artisans at Madame Tussauds have already done this work for Sugimoto. Henry VIII might have been a stunningly handsome fat king with a massive cock, but the portrait-accurate 3D reimagining of him, tight-lipped, with grizzled sideburns and prominent codpiece, is just that: portrait-accurate. This is not a painting, nor a statue, of the truth of an individual; it is the political performance of an individual. Perhaps this is more important or valuable, but it is nonetheless different. The camera, like the painter’s brush, will also manufacture, through its image-making, a version of its subject, but there is a difference when the image is not of the subject but of a copy of a copy, and it serves no more life to the subject than a photograph of the Yoda waxwork would.
This is where a second comparison to the world of contemporary AI can be made, particularly to the confidence placed in datasets as faithful representations of truth. Much as, in the previous comparison, the scope of a task is remade to fit the capabilities of the algorithm, rather than the other way around as any employer of a human worker would expect, the truth becomes whatever the algorithm makes of it.
As the early ELIZA experiments taught us, we trust machines. The computer cannot lie to you; sure, it can be wrong, but it cannot wilfully misdirect you, as it has no will. So when “computer says no”, we feel inclined to trust its judgement, because we anthropomorphise its decision-making while discarding the possibility of anthropogenic falsehoods. We are often unaware of the work that goes into creating a process that can make a decision, and as such are unaware of the very human assumptions built into it: bias in data collection, over-reliance on correlations when drawing conclusions, or a failure to account for variations in patterns within the data. The conclusions drawn from these results, when enacted upon the world, serve to clean the data by reproducing its truth in the real world. The classic example is predictive policing: the more heavily you police an area, the more crimes the police will see, and the now higher recorded crime rate suggests a need for even heavier policing. Dataset biases also arise from failing to acknowledge systemic biases in the system that creates the dataset, for example criminal sentencing: if there is a statistically racialised bias to sentencing in the courts, an AI system trained on the untouched record will reproduce that bias.
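The predictive policing loop can be reduced to a toy arithmetic sketch (everything here, the districts, the rates, the allocation rule, is invented for illustration, not drawn from any real deployment). Two districts have an identical true crime rate, but patrols are allocated each year in proportion to the crime that patrols have already recorded, so the initial imbalance justifies itself forever:

```python
# Toy model of the predictive-policing feedback loop described above.
# All numbers are hypothetical. Two districts share the SAME underlying
# crime rate; the only difference is the initial patrol allocation.
TRUE_RATE = 0.1                 # identical true crime rate per patrol-year
patrols = [1000.0, 1200.0]      # district 1 starts with more policing
recorded = [0.0, 0.0]           # cumulative recorded crime

for year in range(20):
    for d in (0, 1):
        # More patrols means more *recorded* crime, at the same true rate.
        recorded[d] += patrols[d] * TRUE_RATE
    total = recorded[0] + recorded[1]
    # Next year's 2200 patrols are split by recorded-crime share.
    patrols = [2200 * recorded[d] / total for d in (0, 1)]

# recorded ≈ [2000, 2400]; patrols remain ≈ [1000, 1200] every year.
# The data "confirms" district 1 is more criminal, though the true
# rates were equal: the allocation rule launders its own assumption.
print(recorded, patrols)
```

The point of the sketch is that the loop never self-corrects: with identical ground truth, the recorded-crime ratio settles at exactly the initial patrol ratio, so the dataset faithfully reproduces the decision that generated it.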
To further illustrate the production of truth that occurs within an improperly understood AI system, I would like to reframe the criminal sentencing bias. Sentences given before lunch are statistically harsher than those given afterwards. This was initially believed to be about ego depletion and hunger, but can be explained by defendants without lawyers being scheduled at the end of sessions. Now take this data, i.e. lawyer or no lawyer, and time of day or time in session, as contextless data points fed into a machine and processed into vectors of sentencing. Suddenly we are introducing biases where otherwise there were none. A judge may even be impressed by the strength of a self-defending individual’s argument precisely because they have no lawyer; in the same case, an AI will tick to the negative, because of the individual’s association with other defendants who had no lawyer, and with the time their argument is being heard.
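The confound can be made concrete with a small synthetic sketch (all numbers, probabilities, and scheduling rules below are invented for illustration; they model the scheduling account, not any real court data). Sentences here are driven only by case severity and by whether a plea deal was negotiated, yet because unrepresented defendants are scheduled late and rarely get plea deals, “no lawyer” and “late hour” both end up statistically associated with harsher outcomes:

```python
import random

random.seed(1)

# Hypothetical synthetic docket. The clock has NO causal effect on the
# sentence; only severity and plea deals do. But scheduling and access
# to counsel entangle "no lawyer" and "late hour" with harsh outcomes.
cases = []
for _ in range(1000):
    severity = random.random()                    # true causal driver
    has_lawyer = random.random() < 0.7
    # Scheduling convention: unrepresented defendants go last.
    hour = random.uniform(9, 11) if has_lawyer else random.uniform(11, 12)
    # Without counsel, plea deals are rarely negotiated.
    plea_deal = has_lawyer and random.random() < 0.6
    sentence = max(severity * 24 - (6 if plea_deal else 0), 0)  # months
    cases.append((has_lawyer, hour, sentence))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

avg_with = mean(s for l, _, s in cases if l)
avg_without = mean(s for l, _, s in cases if not l)
print(f"with lawyer: {avg_with:.1f} months; without: {avg_without:.1f} months")
# A model fed only (has_lawyer, hour) as features would learn a penalty
# for "no lawyer" and "late hour" and apply it to every future
# unrepresented defendant, however strong their individual argument.
```

The judge can discount the pattern for the defendant in front of them; a model trained on these contextless vectors cannot.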
There is a danger in regurgitating something that carries a message within itself, presenting it as truth, and then using it to further inform the reality that produced the message. The Hayward calls the camera a “Time Machine” in the title of the show, drawing this link to suggest that Sugimoto is taking an image of the past; I think it is much closer to an illustration of the present.
References
- Varoufakis, Yanis. Technofeudalism: What killed capitalism. Bodley Head, 2023.
- Ibid.
- Dzieza, Josh. “AI Is a Lot of Work.” The Verge, June 20, 2023. https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots.
- Rowe, Niamh. “Millions of Workers Are Training AI Models for Pennies.” WIRED UK, October 15, 2023. https://www.wired.co.uk/article/low-paid-workers-are-training-ai-models-for-tech-giants.
- Jackson, Alison. Diana Gives the Finger. 1998.
- Sherman, Cindy. Untitled Film Still #21. 1978.
- Glover, Ellen. “What Is the Eliza Effect?” Built In. Accessed January 15, 2024. https://builtin.com/artificial-intelligence/eliza-effect.
- Agüera y Arcas, Blaise. “Physiognomy’s New Clothes.” Medium, May 20, 2017. https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a.
- Angwin, Julia, Jeff Larson, Lauren Kirchner, and Surya Mattu. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Duke, Annie. “No, Judges Don’t Give Harsher Sentences When They Are Hungry.” Annie’s Newsletter, October 5, 2018. https://www.annieduke.com/no-judges-dont-give-harsher-sentences-hungry-annies-newsletter-october-5-2018/.