I enjoyed this Humans on the Loop essay, which considers various layers of social, capital, psychological, and technical predictive engines.
In general, I’m less optimistic about our ability to secure agency while existing in spaces run by and for surveillance capitalism, but the point about improvisation will stay with me for some time. I often consider various forms of marginalisation as a reciprocal process: being an “edge case” outlier to statistical norms, existing amid constructed norms that centre the privileged, and being pushed even further to the margins in response to those norms. The point about improvisation here highlights the agency that is present in that latter part of the cycle.
This passage stood out to me as a description of how predictive models themselves impact outcomes:
This is evident in economics, a field of study that has slowly been transformed by insights from the evolutionary sciences (3). Every actor in a market is anticipating everybody else’s actions, running simulations of the other parties. As soon as someone changes their behavior based on an updated model, this informs and alters every other model held in every other mind, eventually obsoleting even the most complete and accurate “theory of everything.”
I’ve long wanted to make a piece of art about “model autophagy disorder” or “model collapse”, a phenomenon in which generative AI models trained on data produced by generative AI tend to overestimate probable outcomes and underestimate improbable outcomes. Eventually, they produce gibberish that all looks the same. I think normativity itself can be understood as this kind of predictive model that feeds on its own output. This is part of why heteronormativity, for example, produces distorted stereotypes of gender and relationships that ultimately harm everybody.
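The self-feeding dynamic described above can be sketched as a toy simulation. This is a minimal, illustrative Gaussian model, not any real training pipeline: each "generation" fits a mean and standard deviation to the previous generation's output, and the sampling step discards improbable (far-from-the-mean) outputs, mimicking a model that over-weights likely outcomes. The names (`fit`, `generate`) and the clipping threshold are my own assumptions for illustration.

```python
import random
import statistics

random.seed(0)

def fit(samples):
    # "Train" a trivial generative model: estimate mean and standard deviation.
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(mean, std, n, clip=1.5):
    # Sample from the fitted model, but keep only "probable" outputs:
    # anything beyond `clip` standard deviations is discarded, mimicking
    # a model that overestimates probable outcomes and drops improbable ones.
    out = []
    while len(out) < n:
        x = random.gauss(mean, std)
        if abs(x - mean) <= clip * std:
            out.append(x)
    return out

# Generation 0: "real" data with genuine diversity (std near 1.0).
data = [random.gauss(0.0, 1.0) for _ in range(500)]

for generation in range(10):
    mean, std = fit(data)
    print(f"generation {generation}: std = {std:.3f}")
    # Each new generation trains only on the previous model's output.
    data = generate(mean, std, 500)
```

Because each generation truncates its own tails before the next one trains, the printed standard deviation shrinks toward zero: the diversity of the original data collapses, and everything the model produces starts to look the same.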
But also, importantly, these systems are not just harmful because of problems inherent to predictive models. They are also intentionally extractive and exploitative. The normativities that shape our lives are heavily influenced by power structures whose central aim is to extract as much labour as possible from as many people as possible at the lowest cost possible.
The same is true of content recommendation algorithms. Yes, any predictive model is likely to be susceptible to bias and feedback distortion. But the algorithms of the platforms on which we choose to operate are also shaped by corporations that see their users as resources to be mined, rather than as recipients of a service who have the agency to leave that service or build something better elsewhere. These algorithms are black-boxed, constantly being honed to further reduce our agency, and they are the subject of a growing folk canon of rumour and superstition.
In that context, I worry that the stories people tell about creative engagement with algorithms amount to little more than myth and legend. Instead of contributing to material change that could actually give power to the oppressed, people instead imagine their in-group as a band of heroes who can transform the kingdom from within. Yes, we can improvise with algorithms to produce unexpected outcomes that break their models, but perhaps this improvisational work risks overshadowing the constructive work of building real alternatives for one another.
To return to the analogy with heteronormativity, this ambivalence is almost identical to the issue of marriage. The majority of people seem to feel confident in their ability to transform the institution from within, and I’d never attempt to take that hope away from anyone. But a part of me will always mourn the lost opportunity to build something more liberatory for one another. I guess I need Maggie Nelson’s The Argonauts for algorithms.