From the discussion in class, a point Malu raised is the binary between emitter and sensor. From the point of view of the subject, there is always a binary between the subject itself and an object that receives or emits data.
Thinking about this relationship, there are singulars that are produced from the multiplicities of communication. The example Malu gave was that the self-conscious self is the product of the multiple others that it encounters. (Probably some Lacan to come in here about the development of the self in the child.) But even if the self is not the product of these externalities, I think it is true that they are the driver of feedback within the system of self-creation. (It would be interesting to think through the purpose and possibility of ego-death within such a conceptual framework.)
If communication systems are anti-entropic, in that living systems are temporary reverse-entropic islands, water running uphill, then the productive factor of this anti-entropy is the meta-data produced by the relations of emitters and receivers.
For example: a performer (emitter) plays to an audience (sensors), who are all singularly transfixed and not in communication with each other. Here we have a singular in many binary connections to other singulars. But there is also the relational data produced by these many sensors communicating at once with the singular; this is (perhaps) the meta-data, as Mattia condensed it. I am unsure, though, whether this is a qualitatively different form of data from the meta-data of the singular communication, e.g. relative position between emitter and sensor, start and end times of the communication, its length, the size of the data transfer. There is relational data that can be inferred even from non-intercommunicating sensors. I need to think more about what form this data takes; I have a couple of nebulous ideas, but these may reduce to the binary. For example: demographic inferences can't be made by engaging with binary communications singularly, but they can be made from the multitude.
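The performer/audience example can be made concrete with a toy sketch. Every name here is hypothetical, purely for illustration: each binary link carries only its own per-communication meta-data (position, timing, size), while a quantity like the spatial centre of the audience only exists across the multitude of non-intercommunicating sensors.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CommEvent:
    """Meta-data of one binary emitter->sensor communication."""
    emitter: str
    sensor: str
    position: tuple  # sensor position relative to the emitter (x, y)
    start: float
    end: float
    size: int        # bytes transferred

    @property
    def duration(self) -> float:
        return self.end - self.start

# Ten transfixed listeners, none in communication with each other:
# each event is a singular binary link to the performer.
events = [
    CommEvent("performer", f"listener_{i}", (i % 5, i // 5), 0.0, 90.0, 1_000)
    for i in range(10)
]

# Relational meta-data that no single link contains: the spatial
# centre of the audience, inferable only from the multitude.
centre = (
    mean(e.position[0] for e in events),
    mean(e.position[1] for e in events),
)
print(centre)  # → (2.0, 0.5)
```

The point of the sketch is the last two lines: each `CommEvent` is self-contained, but `centre` is a property of the relation between sensors that never passes through any one of the binary channels.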
The network effect, and the efficiency of information transfer through a network, is I think a result of this when nodes within the network are capable of being both emitters and sensors of the data being transferred. The production of meta-data scales super-linearly with the network, roughly quadratically, as each node produces relational data with each other node, and the inferences drawn from it become more precise. However, this may not mean greater accuracy, as noise within a network also scales. Information that is sent is not always received in its intended manner. Data points, and the relationships between them, can be read differently given the different relational position of the sensor node to the emitter. If this noise begins to become formalised, it can be considered "agitation", as was being discussed in class.
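The scaling claim can be made precise with a one-line count (the function name is my own): among n nodes that can each emit and sense, the number of directed relational links is n(n−1), which grows quadratically while the number of nodes grows only linearly.

```python
def relational_links(n: int) -> int:
    """Directed emitter->sensor pairs among n nodes,
    where every node can both emit and sense."""
    return n * (n - 1)

# Nodes grow linearly; relational meta-data grows quadratically.
for n in (2, 10, 100):
    print(n, relational_links(n))
# → 2 2, 10 90, 100 9900
```

The same count applies to the noise: if each link can be misread, the opportunities for misreading scale with the links, not the nodes, which is why precision and accuracy can diverge as the network grows.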
Another point Malu raised about the binaries between communication partners was the idea that within a society there are built symbols of representation that are useful. This came from a point Arjan made, citing Bratton, that modes of standardisation such as traffic lights within a city can be beneficial at the large scale. An individual traffic light may restrict flow for a car or pedestrian, but the system as a whole enables a better flow of traffic, and therefore of goods, information and energy, throughout the system. This in turn was a response to the discussion of normativity and the medicalisation of non-normative behaviours, or of symptoms of political ones. Mattia's example was a workplace meeting about mental health treatment and help for staff who were feeling stressed or overworked. He points out that these feelings are the result of poor management and over-exploitation of staff, and as such are probably healthy sensor responses to such stimuli and pressures. I think there is a major distinction between the systems Malu and Arjan are talking about and the system Mattia is angered by: democracy. The city, in whatever its guise (at least in London, the city we were discussing), has its modern decisions and its major infrastructure, aside from the layout of primarily old major roads, determined by democratically elected councillors, mayors and other officials of the state. The symbols we agree upon are brought about by the agreement of social groups, and very much begin as particulars to social groups before disseminating to a wider audience. Where these differ from Machine Learning models, and the tools derived from them, as Mattia asked the room, is, I believe, the fact that the models are in themselves not the symbols of communication but the infrastructure on top of which interfacial surfaces are placed for interaction.
The traffic lights are an interface to the road system; the symbology we communicate with is an interface to information transfer, a set of shortcuts to preconceived sets of ideas. But they are not, respectively, the highways themselves or the structure which underpins the language of communication. Of further importance is their capacity to be reshaped and remade by the agents using them. For instance: some traffic lights, I'm sure, run hard-coded patterns; some use weight sensors in the road to know when vehicles are waiting; some are controlled by the buttons at crossing points; and quite possibly some combine all three. Here, via the interface of the traffic lights, the Brattonic User has an influence over the whole system simply by engaging with and using it. With language, an example I gave of how symbology is fluid, and how that's a good thing, is the use of the word "man". "Hey, man", "I'm good, man, you?" and the like came as a conscious repositioning of black pride in the 20th century. In the outgrowth of race relations after the abolition of slavery, many whites referred to black men as "boy", especially in the southern United States. Through the growth of black art, music and culture, such terminology found its way into the generic anglo-slang lexicon, where it no longer carries the emphasis of recognition.
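The three traffic-light modes mentioned above, a hard-coded pattern, a road sensor, and a crossing button, can be sketched as one hybrid controller. This is purely illustrative, all names are hypothetical, but it shows the mechanism by which the User's input reshapes the system through its interface.

```python
def next_phase(tick: int, vehicle_waiting: bool, button_pressed: bool) -> str:
    """Decide the next signal phase by combining three control modes."""
    if button_pressed:
        # Crossing-point button: the User's input overrides the pattern.
        return "pedestrian_green"
    if vehicle_waiting:
        # Weight sensor in the road: demand-actuated control.
        return "vehicle_green"
    # Fallback: hard-coded pattern alternating every 30 ticks.
    return "vehicle_green" if (tick // 30) % 2 == 0 else "pedestrian_green"

print(next_phase(0, False, False))   # hard-coded phase → vehicle_green
print(next_phase(5, False, True))    # button overrides → pedestrian_green
```

The design point is the ordering of the checks: the fixed pattern only governs when no agent is pressing on the interface, so the system's behaviour is continuously remade by its Users.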
The importance, then, of open democratic engagement for Machine Learning models is that, as an infrastructure, they are themselves the ossification of a weighted set of ideas and understood symbologies, much like the road layout and departmentalisation of a city. Unlike the city, however, there is no council or figure elected by the Users who might use that bestowed judgement to tender a contract for the city's design to an architect. And whilst it is possible, likely or even common that these elected entities are not truly thinking of the benefit of the Users when they make such commissions (e.g. pork-barrelling; see Brazil), the institutional framework is there that makes the pretence that they are. I would argue that the shoring up of these institutions is all that would be required to reduce much of the misalignment we see between the stated role of elected officials and their actual goals. The entities with the most control and weight in the development of ML technology will always be the ones with access to the most data, the most computing power to process that data, and the most engineers to design that analysis. At the moment that is the Big Tech firms, and their motivations are not the public good but growth and profit. This doesn't mean the two can never be aligned, but it is important to recognise that this alignment never sits outside the interest of growth and profit.
As the technology becomes more widespread and computing power becomes cheaper and better, the ability to train models and create ML platforms will be there for people to engage with. This isn't the iPhone, where manufacture is a prohibitive barrier to anyone with an idea about the way phones could work. But, as Bank of America recently said, ML is about to have its iPhone moment and open a market that could be worth north of $100 trillion. What this means is that though the technical possibility of the upstart ML developer will remain, the infrastructures and platforms of the current and soon-to-emerge tech giants, and the axioms of interaction they bring with them in the data they have already collected, will likely dominate.
So, to return to the class discussion. In my opinion, Malu is right that, on a personal level, normativity is not an evil; the binary distinctions between in-group and out-group are useful to us, and do not solely present a negative that we ourselves have to develop a formlessness to combat. However, I think where her point falls flat is that I do not believe this was ever a discussion of the individual. Agitation isn't the work of an individual in a societal sense. As I explained earlier, I believe individual divergence to be noise; agitation is the combinatory nature of noise given form by its relational structure with other noise-producing nodes. The example of LSD, and the Deleuzian schizo-capitalism that prompted her reaction of "I've been agitated, I've done drugs, I've been lost, it wasn't a good thing, sometimes these things are bad and the norms can be good", is, I think, very individualistic and misses the point. Especially when faced with an institution whose motivations are openly uncaring about your position except as it relates to their true motivation, you don't get to choose where you are in the binary. She makes this point from a position of experience, but one that (and I'm fairly sure she is not a drugs abolitionist) seeks to infantilise others and proclaim that position of experience as the one by which we should measure an institutional framework. (I'm sure if I put that position to her she would not agree, but from my reading of her statements in class, this is the conclusion I see.) And I wish to reiterate: the medicalisation, the abnormalisation, is not judgement by individuals of individuals. That is noise. It is judgement that comes from institutions, where it becomes formalised, from node groups.
I brought up in class that this idea of agitation, and the medicalisation of the same, reminded me of something Mark Fisher writes in Capitalist Realism. He talks about the prevalence of dyslexia in university students and the reluctance to read (preach), and other such observations about the ways in which the modern pupil schismatically interacts with the institutionalised, formalised and traditional methods of learning. This he sees wrapped up in the bow of neurodivergence, a medicalised view to embrace and encompass difference in order to make allowances through extensions and assistance. And then Fisher suggests that this isn't a regressed inability to approach subject material but perhaps an adaptation to the visual culture we inhabit (though he would say that, working in the Visual Cultures department). He suggests the term post-lexia for such a pathology, because these dyslexic individuals aren't struggling socially; their ability to read the images coming through their phones, their capacity to understand TikToks, isn't in jeopardy. And it brings me back to those evolving systems of signs which we are capable of navigating as subjects within society, but which are congealed in ML systems (though perhaps through feedback these biases may be mitigated).
A post came up on Reddit recently with a piece of information I already knew: Sir Paul McCartney cannot read or write music. And here, in music, we have an example of the emitter/sensor symbolic relationship changing. This change, I believe, is due to the mediatisation of the information being transferred. Before Edison and recorded music, music had two lives. It was performed and it was written, and usually written to be performed. In such a way the composer, through the system of signs of written music, was able to communicate expression to the performer, who would in turn communicate ideas to the audience, rather like the larval and adult stages of an insect's life. It would be fair to say, I think, that sheet music is not in itself 'music', but it is the thing shared, the thing interfaced. And so music has form: you have improvisation, which is deformalised expression, though it will carry the form of the instrument and the themes of its stylistic makeup in order to express a consensual system of signs to the audience; and you have performance, which is the concretised momentary expression of a written set of forms.
But then music can be recorded as sound. Now you start to see genres like jazz and blues develop, breaking the mould of the now (improvisation) and the forever (performance), realising that you can have the now and record it forever. That you can play the same piece and all be part of the master piece which is the fossilised recording of a single improvised or practised version. The one becomes many. And music starts to deformalise further, where experimental music is able to forgo structure itself, forgo the thematic ligatures earlier improvisation had held on to. And now liveness isn't simply the way to hear music; it is the contrast to recording, the social venture, the specific in a sea of repetition. Under such a system, one needs only ears and a turntable to play, note, learn and repeat music, and yet still produce and reproduce complex musical language. This is McCartney, a post-lexical musician. And I think we see it further in hip-hop and the development of sampling, and now ML itself will start encroaching on the aspect of the singer by reproducing what makes their voice distinctly theirs. These symbols and signifiers have changed; they are not gone, but they no longer hold a singular supremacy. Musical notation is no longer a language that denotes music itself, but a particular interface, carrying the connotations associated with that.
This fluidity of signs given by social communication is what might be lacking if the medium of communication flows through ML infrastructures, as the design decisions calcify and develop into fixed channels. And equally, the mediatisation of everything, as it seems to be, will bring its own post-lexicons to match; so the danger of normativity, and the source of agitation, might just be that your lexicon does not parse.