Sometime in my childhood there was a news cycle that centred around the growing ubiquity of “security cameras,” suggesting that some large percentage of public spaces (at least in Britain) was already being filmed. Areas that you might not imagine having 24-hour monitoring, like street corners and parks, were now possible to watch all the time.
But even as cameras started to be everywhere, we had the idea that an actual human had to be paying attention: hence the movie trope of the distracted, sleeping, or absent security guard, and the meta-camera view of a bank of monitors on which our characters pass unseen except by our own omniscient perspective.
We could assume that our homes or other places we “owned” were not under continual monitoring, unless we were doing things of interest and/or threat to a nation-state. We could say things to other people that no one else would hear and that would live on only as part of human memory, subject to our subjectivity and sense of salience.
Those were the days.
The end of privacy
How far away are we now from near-total surveillance?
Recently, in a meeting I regularly attend on Zoom, one specifically oriented around the sharing of quite vulnerable and personal information, the software began to show a warning as we entered the room.
AI will be listening and summarizing this meeting, it said.
There was no “I don’t consent” option.
Zoom has various reasons to let us know we’re being monitored, but in more and more cases, we may not even know that our locations, likeness, words, or voices are being captured. And what’s more, we’re largely agreeing to this capture without awareness.
Death by a thousand paper cuts
Many things have led to this moment, in which we are experiencing the last days of an expectation of what we called privacy.
Our monitored-ness follows Moore’s law and similar curves that predict the ancillary outcomes of cheaper processors and storage. Digital video has made incredible strides since the early days of tape-based camcorders. Quality cameras are tiny and cheap, and nearly everyone carries at least one audio-visual device around constantly. Servers have room for massive flows of data. And AI can now process language and visual information well enough that, while it may still be costly to save every bit of video, we no longer need humans to watch it all to determine its importance or relevance.
And the emergence of click-wrap licences has accustomed everyone to the idea that they have no recourse but to agree to whatever data usage a company puts forth, if they want access to the benefits, or even to the other people who have already struck such bargains. What’s more, so long as the effects of our surveillance are not authorities acting against us, we seem to have little sense of what it means to lose what we knew as privacy.
Subjects and rulers
In The Dawn of Everything, authors David Graeber and David Wengrow posit the idea that control of knowledge is one of the three elementary principles of domination.
Historically, surveillance was defined primarily in terms of the state, which had the means and motivation to enforce control of knowledge with another of the key principles of domination: violence. We had spies and police; then eventually, as the property rights of individuals other than rulers began to be backed with state violence and technology became more accessible, private detectives and personal surveillance emerged and grew into small industries. But now, we’re mostly being watched by for-profit companies.
When I started down the rabbit hole of “the implications of AI” thirteen years ago, even ideas about human-destroying agentic AI such as “Roko’s basilisk” were thought of by some (notably Eliezer Yudkowsky) as dangerous, akin to releasing blueprints for nuclear weapons.
But most people didn’t think there was much to worry about. Technology was still a domain mostly thought of as benign. iPhones were brand new. Even the idea that AI might be trained in such a way as to maximize its outcomes at human expense, as in the “paper clip maximizer” thought experiment, seemed far-fetched to most.
For me, the idea of the technology being able to become conscious or even agentic was less compelling than the way people who DID think about this outcome were thinking about it at the time. This was my first foray into the Silicon Valley underground, and what I observed was that many people within the subculture were thinking about everything as machines, while simultaneously longing for more human, embodied, emotional connections.
What I didn’t see then was the cascading motivations that would make AI’s surveillance inevitable and not exactly state-based (though the state still acts as an enforcer). It didn’t occur to me that most people would willingly trade in their freedom for access to entertainment. I didn’t see how compelling the forces behind corporate capitalism were becoming.
Voluntary bondage
“Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth’s atmosphere to a company as a monopoly.” —Marshall McLuhan
Though the “Internet of Things” seemed to be hype when it got lots of press in the 90s, we didn’t need to adopt smart appliances to begin shadow surveillance in our private spaces; we invited it in so we could shop more easily.
The current crop of AI tools centres mainly on figuring out how to sell more things, how to optimize selling, how to invent new things to sell. If we made it illegal to profit from AI trained on public data (as opposed to trying to put the genie back in the bottle), we’d surely see less unconsidered damage in the future.
It occurs to me that our only real form of resistance is not buying or selling things. And that form of resistance may actually be harder than smuggling refugees or purloining state secrets.
Each new technological breakthrough recreates the myth of social mobility: “anyone,” it’s said, can become wealthy by using these new tools. Meanwhile, actual wealth is becoming more and more concentrated, and most people making their living using the tools of the digital age (versus creating them) are scraping by.
The upcoming innovations in surveillance will involve not only recording and analysing everything a human observer could, but also ways of seeing that go beyond our natural capabilities, using biometrics like body heat and heartbeats, facial gestures, and network patterns. We will have satellites and drones, we will have wearables, we will have unavoidable scans and movement tracking.
Follow the money
As someone involved in the world of internet Trust &amp; Safety, I’m aware that the collection of vast amounts of information comes with a premise of harm prevention or rule enforcement, just as there has concurrently been a groundswell of behaviour that requires redress.
To me, it seems strange to simply accept all surveillance as fine as long as you’re “not doing anything wrong”; but this is a vestige of the idea that being monitored serves only to enforce the laws of the state. What’s happening now is that we are being tracked as a means of selling us things, or as a means of arbitrating our wages.
None of these thoughts or ideas are particularly innovative, nor do thoughts like these offer any protection against a future of total tracking. We could have some boundaries, perhaps, but I don’t feel optimistic about them in any short-term timeframe.
Instead, I am drawn towards embodied experience of untracked being, while it is still possible. We may be living in the last times where we can know what it feels like to be with other people and not be mediated or observed by technology, to not be on record in any way. We can notice our choice and where we are not offered a choice.
We can feel the grief of this passing.