In this talk, I try to make sense of our current techno-cultural moment in which reality is disappearing and reappearing in confusing ways. I point out two major developments relevant here. First, as capitalism expanded, it has created a system of communication that cannot, and does not want to, deal with meaning. And, second, as the complexity of the socio-techno-biological connections of society increases, the old ways of organizing knowledge have become increasingly unable to provide an accurate understanding of the world we are living in. In their place, a new type of knowledge regime is emerging that produces its own form of meaninglessness.

I use the analysis of the crisis of meaning as a starting point to also think about some of the "counter-measures" that might be appropriate to overcome this crisis.

Anyway, here's the talk I gave as part of the "New Alphabet: Opening Days" event (10–13 January 2019) at HKW in Berlin.

Following this was a great presentation by Kate Crawford and Trevor Paglen on how machines are trained to "see" and how this creates what they call "predator vision". You can see their talk (starting at 00:33:00) and the discussion (starting at 01:05:00) on HKW's Facebook page.

I must admit, I’m confused. I have an increasingly hard time grasping the world I’m living in, even though I have ever more powerful informational tools at my disposal. Over the years, I have learned to follow the guidance of the navigation system that came with my phone. And it has been great. Despite my terribly poor sense of orientation, I have managed to find my way easily through cities and places I’ve never been before: no need to plan ahead and stick to the route, no need to buy maps, puzzle over how to fold them, and pull them out every five minutes looking like a complete dork. Things really have gotten better.

But they have also gotten worse. One day, we were traveling by car to a small village high up in the Swiss mountains. This being Switzerland, there are excellent roads to almost everywhere, and the navigation system showed a road leading directly to the village. As we drove up, the road got narrower and narrower, to the point where it was impossible to turn around, and then it ceased to be a road altogether. It turned into a forestry path, seriously pushing the capacities of our definitely-not-off-road car. Confusion about where the path would lead and panic over getting stuck in the middle of nowhere mixed with the embarrassment of having trusted the device more than our own senses and the all too obvious shapes of the landscape. It created the kind of self-loathing that comes from not being able to decide whether one is affected by the problem or is its source.

The most likely cause of the problem was a classic category mistake: one type of road had been classified as another. But, frankly, it could have been any number of things: the system might have been hacked, perhaps there was a glitch in the software, perhaps it was designed like that to hide some military facility or a CEO’s villa, perhaps we didn’t use the device correctly. In most instances, the cause of such a problem is, in practice, unknowable. But here we were, stuck with only bad options.

Do a quick search on GPS accidents and you will see that this is clearly not just my problem but a collective one. And it is not limited to GPS navigation; we can take it to represent an ever more prevalent experience of our present techno-cultural condition. Reality is disappearing and reappearing in confusing ways.

But, obviously, we made it out of the woods, and the villagers, who knew the different roads and paths by heart, laughed at us. Now I’m asking myself: how did we – as a culture built around ubiquitous digital communication and self-declared “smart” systems – how did we get here, and what does that tell us about how we can get out of here again?

I can offer two ideas on how we got here, and, if they are relevant, they might point towards possible exits. I cannot offer a map, unfortunately, but neither can your GPS system.

So, let me start with how we got here. There are at least two major developments relevant here. First, as capitalism expanded, it has created a system of communication that cannot, and does not want to, deal with meaning, with very problematic consequences for social life that relies on communication to negotiate shared meaning and to create and navigate the world. And, second, as the complexity of the socio-techno-biological connections of society increases, the old ways of organizing knowledge – the printed maps, if you will – have become increasingly unable to provide an accurate understanding of the world we are living in. In their place, a new type of knowledge regime is emerging that produces its own form of meaninglessness.

In the 1940s, the economist and historian Karl Polanyi developed the idea of a “fictitious commodity”, and he went on to identify three of them: labor, land, and money. A commodity, he argued, is something that has been produced in order to be sold and bought in the market, and its value fluctuates with the market. If there is no market demand, the commodity will cease to be produced. Now, capitalism is very good at producing commodities, but, Polanyi maintained, not everything that is necessary for the economy can be produced as a commodity. The economy is always embedded in the larger social and natural environment and draws on resources produced outside of it. In other words, the economy cannot produce its own preconditions.

Labor, a key component of any economic activity, he argued, is inseparable from the fullness of human life, and this life is not created for and by the market but is an offspring of life itself. To treat human life solely as a commodity, to take its price as labor as the only relevant dimension, is to cut human life off from all other dimensions – sociability and purpose, for example – and to subsume it fully under the dynamics of the market, with the ultimate consequence that if there is no demand for labor, there should be no human life. The market radicals of the 19th century expected that, at some point, a grim Malthusian equilibrium would emerge. The same holds for land. Land, when turned into property or an extractive resource, can be sold and bought in the market, and, as we know, prices can fluctuate quite significantly. But land is nothing but the environment in which we live and to which we are connected in ways so complex that we are only slowly beginning to understand these connections more fully. When nature is turned into a commodity, all these myriad connections and relations are destroyed and replaced by a single one: the price that can be realized through market transactions. But real estate cannot produce nature, even if a developer plants some trees in a corporate plaza or on a golf course inside a gated community.

Land and labor are fictitious commodities because human life and the natural environment are not produced to be sold and bought in the market. They require far more complex relations to be able to reproduce themselves. Destroying all these relationships by insisting on the market relationship as the only relevant one ultimately leads to the destruction of both human life and the natural environment. Polanyi called the market society – as envisioned by his compatriot Friedrich von Hayek – a “stark utopia”, in the sense of a vision that cannot exist. And seeking to realize this impossible dream threatens to annihilate the very complexity necessary for the existence of human life and the natural environment.

Now, with social media, I would argue, contemporary capitalism has produced a fourth type of fictitious commodity: “engagement”. Engagement, according to the first definition Google pointed me to – and I quote the consultant with the best search engine optimization strategy – “simply means getting your fans to do something in response to your post: Like, Comment, Click to open picture, Click on Link, or Share. These are all forms of engagement, and each time one of these things is done, Facebook specifically measures it. Not only that, but it becomes more popular, and on Facebook more people will see that post.” In other words, engagement is any reaction in response to a stimulus. This reaction is measured, and, like all numbers, it needs to be optimized – in this case, increased. And this is the purpose of social media companies: their entire technological infrastructure and all their activities are geared toward producing and selling engagement. Under the imperative of capitalism, this means continuously producing more of it. And they have gotten really good at it. I’m sure quite a few of us are producing it right now.
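To make this reduction concrete, here is a minimal sketch of what counting engagement amounts to. It is purely illustrative: the field names, the function names, and the crude unweighted sum are my own assumptions, not any platform’s actual metric. The point is simply that qualitatively different acts of communication collapse into a single number to be maximized.

```python
# Illustrative sketch only: a toy "engagement" metric of the kind described above.
# Field names and the unweighted sum are hypothetical, not any platform's actual formula.

from dataclasses import dataclass


@dataclass
class Post:
    likes: int
    comments: int
    shares: int
    link_clicks: int


def engagement_score(post: Post) -> int:
    """Collapse qualitatively different reactions into a single number."""
    return post.likes + post.comments + post.shares + post.link_clicks


def rank_feed(posts: list[Post]) -> list[Post]:
    """Whatever provoked the most reactions gets shown more, regardless of meaning."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Whether a comment is an insult, a correction, or a declaration of love makes no difference to the score; only the fact of the reaction counts.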

But like land and labor, engagement is just a small aspect of something much larger: communication. Communication, like human life and the natural environment, is a complex and shifting system. There are infinite ways to say something, infinite ways to understand something, and infinite ways to go back and forth trying to match what has been said to what has been understood. The core of communication is the establishment of meaning, which is always relational and unstable. And communication as meaning, precisely because of its contextuality and subjectivity, cannot be measured. It is not discrete. Communication, the never-ending negotiation of meaning, both rational and affective, is a fundamental element of social life, for establishing and adjusting a sense of self and of other, with whom and with what one can be together and what it means to be together. And engagement is only a by-product of communication. Reacting to something is a means towards something else, not an end in itself.

To turn social communication into the fictitious commodity of measurable engagement means to disembed the acts of communication from the complexity of the production of meaning and to re-embed them in an environment in which only their commodity aspect counts. All the rest is discounted. Communication is reduced to its most behaviorist dimension, to a pattern of stimulus and response. But this is even more radical than anything B.F. Skinner could have thought of, because it has been married to the economic demand to produce more and more.

Thus, part of our disorientation, our driving down the stairs and into the ditch, comes from communicating through systems that have been created and are continuously optimized for, well, the commodity of engagement. It is not entirely impossible to create meaning, that is, shared understanding, through the social mass media, but it happens only as a by-product and is counteracted all the time by the platforms’ relentless orientation towards increasing engagement irrespective of any meaning. This is driving people crazy, a state of mind from which they protect themselves with conspiracy theories.

But this is only one source of the problem. The other source is a deep epistemological crisis in (at least applied) science, that is, in the currently most consequential way of knowing about the world. This crisis comes from the fact that the complexity of the problems that science – and politics – is now dealing with has sharply increased. The last time this happened in Europe was with the influx of new knowledge about the natural and cultural worlds following the first wave of colonization, an influx which helped spark the scientific revolution of the 17th century, as we heard yesterday. Now, as was the case then, there is an explosion in the number of actors – more, and more diverse, human, biological, and technological actors – that need to be accounted for. And, to make things worse, there is an even greater explosion in the number of relationships between these actors whose interactions need to be taken into consideration. Virtually all problems, but particularly the very fundamental ones such as climate change, can no longer be thought of as knowledge objects in front of us with clear demarcations; they are now seen as sprawling networks around us.

Like all networks, these have areas of increased density but no real boundaries, and even the weakest of links can turn out to affect the entire network. More importantly, the observer is most often a node in the network herself, and from every node there can only be a partial view of the network, and every node has a slightly different one.

This makes it impossible to maintain some of the core stances of modern science. One is reductionism, that is, the claim that one can isolate a few relevant relationships, ignore the rest, and still describe a problem accurately. This is the famous “ceteris paribus” clause, introduced into the English language in 1662, two years after the foundation of the Royal Society.

In such an approach, you remove what appear to be weak links from the picture, but what if the weak links turn out to be the most relevant ones?

Second is the claim of distance and disinterestedness – the argument that the observer does not affect the observation and that the result of the observation does not affect the observer. This is clearly not the case when every analysis becomes a prescription for action to affect the problem, and when the different courses of action have massive repercussions on the social system which produces the knowledge.

Finally, there is the claim of objectivity or, more mildly, inter-subjectivity: the claim that the position of the observer doesn’t matter, hence that different people, from different points of view, can observe the same thing. This claim has been subjected to a long and thorough critique at least since the 1970s.

The long history of this critique shows that the crisis of epistemology is not entirely new. What is new is that we now have a new technological infrastructure from which to generate very different knowledge claims.

Machine-driven analysis of large data sets is introducing a new way of knowing. In this, it is answering the crisis of modern science while, at the same time, deepening it. This is the case even when machine-driven analysis works exactly according to its own program. I will therefore ignore practical issues here, such as the quality of data, issues of modeling, and so on. Kate Crawford and Trevor Paglen will speak about that later.

The claim to be able to process large quantities of unstructured data can be seen as a way of avoiding the problem of reductionism. Rather than relying on small sample sizes, on a controlled laboratory environment, or on theory-driven hypotheses, the new promise is to take in all data without any prior separation of important from unimportant aspects of the problem. This separation is now done through machine learning, and the fewer assumptions that go into the process, the higher the chance of finding something new. Yet the opacity and complexity of the tools of analysis re-introduce problems of replicability with a vengeance, because the problem of reductionism has turned into a more fundamental problem of method, the very core of science itself. By focusing on ‘relations that work’, the questions are continuously adapted until they yield a statistically significant answer. The value of these answers lies in their utilitarian effect, that is, in the capacity to accurately map the short term, rather than in revealing some fundamental causation. Machine-driven analysis dispenses with the notion of a disinterested search for an external truth and fully concentrates on relationships that can be manipulated for pre-determined ends.
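As a purely illustrative sketch of this pattern – synthetic random data, an invented threshold standing in for “statistical significance”, nothing drawn from any real analysis pipeline – consider how many ‘relations that work’ can be harvested from pure noise simply by scanning enough candidate questions:

```python
# Illustrative sketch of "adapting the question until it yields a significant answer".
# Synthetic random data and an arbitrary cut-off; not a real analysis pipeline.

import random


def correlation(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


random.seed(0)
outcome = [random.random() for _ in range(100)]  # the behavior one wants to map
candidates = {f"signal_{i}": [random.random() for _ in range(100)] for i in range(500)}

# Keep reformulating the question until some relation "works".
relations_that_work = {}
for name, values in candidates.items():
    r = correlation(values, outcome)
    if abs(r) > 0.25:  # arbitrary threshold standing in for "statistical significance"
        relations_that_work[name] = r

print(f"{len(relations_that_work)} 'useful' relations found in pure noise")
```

The arithmetic is not wrong; the answers are simply selected for the fact that they ‘work’, which is exactly why their value is utilitarian rather than explanatory.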

The researchers who are doing the analysis – most clearly in the case of social media companies – are a core element of the situation they are analyzing. And since they are inside the problem rather than outside of it, the results of the analysis can be immediately fed back into the situation, changing its composition and dynamics and making the analysis unrepeatable, and hence unfalsifiable. From the point of view of the institutions doing the research, this is not a bug but a feature. This is how they learn about the world and their constitutive role in it: by tweaking parameters.

This goes beyond a move from causation with explanatory value to correlation with predictive value. The goal of these knowledge processes is no longer prediction at all, but its opposite: preemption. The standard test for validating scientific claims has always been: does it allow us to predict what will happen under particular circumstances? Now, the goal is to be able to intervene in order to change what is happening.

On an epistemological level, this shift from distanced knowledge towards involved manipulation, towards feeding knowledge directly back into the environment in order to see how it is affected by it, is perhaps an adequate reaction to the sharp rise in the complexity of problems that defy an outside view, prediction, and certainty. But, and this is a very significant but, underlying this turn towards preemption is not the cautionary ethics of a risk society but the neo-liberal drive towards endless optimization. So what is to be prevented is a future that is not tweaked according to the agenda of those seeking this type of knowledge.

So, where does this leave us? If we return to Polanyi’s argument, it would suggest that we need what he called a “protective movement” that fights against this reduction of communication to engagement, just as the labor movement has always fought against the reduction of human life to labor, and the environmental movements have been fighting against the reduction of nature to land. To fight against the reduction of communication to engagement could mean forcing the large companies to acknowledge that they need meaningful communication but cannot produce it themselves. Hence, they need to support the complexity of communication that lies outside of the market. We have a model for that: it’s called public broadcasting. While the established institutions of public broadcasting have their own problems that they need to address urgently, the basic principle – that a democratic society needs sources of information that are independent of both the market and the government – could easily be applied to digital communication as well. There is enough money around. The social media companies are fantastically profitable, and it’s time they contributed to the ecosystem from which they extract their commodities. Rather than paying users individually for their data – a nonsensical idea that would make the problems worse – they would pay per user into a fund from which public-interest communication environments and productions are financed. Acting in the public interest are those who understand communication as social meaning, created through useful controversies and collective decision-making. This, after all, is what democracy is about: making disagreement productive. It is definitely not about producing meaningless engagement to be tallied up in the bottom line.

The second problem, the epistemological crisis of science, is much more complex. As I said, involved manipulation, feeding knowledge directly back into the environment in order to see how it is affected by it, can be an adequate reaction to a sharp rise in complexity. If we cannot predict the long term, we have to act incrementally in the short term and observe the results self-critically. The problem here is that there are only very few actors, with very narrow agendas, who can do this. Thus, there is a need to broaden the range of actors involved in this process. This does not just mean more people with more diverse agendas, but also new kinds of actors, biological as well as technological, with their own affordances. And there is a lot of progress in this area, even if the examples are small and disconnected. One example impressed me in particular: in March 2017, the Whanganui River in New Zealand was given its own legal identity, with the rights, duties, and liabilities of a legal person. This brought to a happy end litigation that had started with a petition to parliament in 1873. So, there are ways to recognize and formally acknowledge different types of agency.

And from this perspective, the image of the car stuck on the stairs can also be read in a hopeful way: not as the failure of communication and information, but as a successful expansion of it. Here, the stairs start to speak very loudly and clearly, so much so that they make themselves heard against all the digital information flows. No matter how many online resources Google may find that question whether stairs exist at all, and no matter in what psychologically optimized tone of voice the navigation system admonishes us to keep on driving, it is very likely that the voice of the stairs will overpower them all.

Thank you!