Copywrongs and the Artifice of Intelligence

The mysterious addiction to our own captivity

All the wrong things are happening when it comes to copyright and AI.

All big AI models were trained on what their makers call “public data,” meaning content that people put somewhere that was not secured. Big corporations are suing the makers of these models for infringement, but where is the class action?

The way copyright actually works, at least in the jurisdictions where these models were trained, is that ALL THAT CONTENT WAS COPYRIGHTED. “Your work is under copyright protection the moment it is created and fixed in a tangible form that it is perceptible either directly or with the aid of a machine or device.” All I must do to copyright my work is to write it down somewhere. Most platforms, in order to stay shielded as mere hosts of user content (think Section 230 and similar safe harbours), do not assume ownership of that copyright. By that logic, every big AI platform should be in violation.

I say this with some trepidation.

Once upon a time, I ran a meetup group called CopyNight NYC that discussed issues around IP, copyright and copyleft, and technology. It was around that time that I was extricating myself from a job in which part of my time was spent enforcing copyright claims on behalf of a film distributor. Over time, I came to believe that the strenuous defense of copyright is largely in service of exploiting artists, not protecting them. This is a game of legal and lobbying war chests, not of any fair or unfair use.

Which is probably even more true now. Whatever so-called protection there was for average people creating content exists in name only, whereas the “cloud capitalists,” as Yanis Varoufakis calls them, need not adhere to any kind of law that gets in the way of their railways, let alone consumer protection for what their hastily assembled tracks might mean for safe travel.

We don’t seem to understand the concept of our own sovereignty. So often I hear people use arguments like “privacy is dead” or “I don’t like the way this platform treats me, but it’s accessible/free/what people already have.” You put it where other people can see it, so how can you complain that it was stolen? — especially as it’s been mixed with so many other things that it’s unidentifiable (though many claims of that unidentifiability have proven untrue, of course).

I have a little bit of that feeling of being duped, even if it was only by not noticing who was doing the talking. As with so many other things that seemed to make all kinds of sense, I had no idea what I was signing up for until I realized how far people with power will take something if it serves them. For example, I supported the EFF for years, but as software ate more of the world, it became clear that zero accountability for content wasn’t just freeing the platforms; it was enabling them to become factories for hate and violence.

The tenets of open source feel aligned to me, and I am against the way copyright has been used as a tool of domination, especially in conditions of fair use or creative expression. But the position now seems to be simply “oh well, people shouldn’t expect ANY protections for what they are contributing.” Protections for us would be VERY expensive for the platforms ingesting it all, whether to sell ads amidst it or to train AIs to sell us things without even needing advertising.

Every company with access to data is jumping on this approach. Even paid services like Slack and Zoom are by default using their customers as sources of free training data. And do you suppose this doesn’t mean, ultimately, normalizing surveillance everywhere, even in your ‘private’ spaces? Even businesses with sensitive or proprietary data will likely be unable to prevent its surveillance without leaving the platforms, and getting fully out of Alphabet, Microsoft, Apple, Amazon, and Meta’s clutches will challenge even the largest enterprises.

For those of us doing work to cultivate something in the world that does not rely on dominance, backing by violence, and extraction, it’s pretty essential to start thinking about how to extricate ourselves. These tools are only “free” in the sense that they don’t charge money for us to use them, but we come up with all sorts of justifications to keep them. “Everyone can use Docs;” “people are already on Instagram or WhatsApp;” “there are no alternatives to AWS or Google for small projects;” the list is endless.

How, do we imagine, will alternatives ever exist if we, on the forefront, are not willing to risk some frustration or learning in order to expand the reach of open source and ethical paid alternatives? How will alternatives get better without the pressure to serve us? What if part of our work as organizations providing community support and service is evangelizing, supporting, and contributing our community design needs to alternatives?

We can stop giving our time, attention, creativity, and content to these platforms for free. What used to be just social media making us the product is now pretty much any large VC-backed technology company. Yes, many of these alternatives are less sexy or more kludgy, but technology is nothing if not iterative. If you can disinvest from big brands for other kinds of ethical violations, why not at least give a few alternatives a shot? With most of these products, network effects can be huge, and can potentially mean at least the beginning of an ecosystem where people are valued and respected, not simply used as training material.

And perhaps we will be able to offer our content and contributions as a gift to reciprocally-focused AIs (or specifically, AI model builders) that want to work in solidarity for the common welfare of all beings. Right now, AI is really about a kind of intelligence I would describe as information processing. Living beings share the capability of sensing, not through the processing of information, but through our entire system(s). What we are giving AI is just the stories we’ve made up about our own sensing. Vastly limited.

We are seduced by imagination. We are loved by what is real.

Nothing left to lose

“When I discover who I am, I’ll be free.”

—Ralph Ellison

I’ve been listening to the Art of Accomplishment podcast for a long time (and am a graduate of the associated Connection Course and Connection Challenge). The podcast often has gems, but this week’s episode was perhaps my favourite, maybe because it comes at a time that feels so aligned with where I’m at right now; it’s all about freedom.

“I’m not constrained by the voice in my head or by the thoughts of the people around me or by some set of ‘shoulds’ that society may place. And that freedom is a birthright. Right now every single person can be themselves. There’s nothing stopping them. There are consequences, potentially. You might not like them. You might choose to not be yourself because you don’t like the consequences. That’s all true. It doesn’t mean you can’t be yourself.”

—Joe Hudson

As much as this resonates and I enjoyed the whole thing, I also want to throw in a ‘yes, and’ regarding freedom. As Joe puts it, freedom of the kind we’re journeying to find is about the freedom to be who we are.

But as soon as we think we are doing this as a self, we’re falling into a trap. We so easily lose our freedom. Why?

At one point, Joe mentions Maslow: having our basic needs met helps in some ways, and, paradoxically, having an abundance of resources can get in the way of freedom, because freedom takes work and vigilance, because people with abundant resources may have more to lose, in a sense, and because material benefits can make us lazy.

Maslow, as we know, had a somewhat warped view himself. His pyramid was an unacknowledged adaptation, really an inversion, of Indigenous thinking that places responsibility to one another and to the world around us as a necessary condition for having what we need.

When we seek to be free without considering how our actions, behaviours, and beliefs prevent other people’s freedom, we are still caught up in something; we’re still entangled. Joe uses Mandela as an example of being free even when his body was imprisoned. And Mandela famously said, “For to be free is not merely to cast off one’s chains, but to live in a way that respects and enhances the freedom of others.”

There’s a paradox here, there’s a crack to fall through. People with status are less able to see certain aspects of their non-freedom. It’s possible to ‘feel free’ in a way that abdicates responsibility, and at the same time, part of freedom is that it’s about not trying to manage, fix, or even judge others or oneself. How can I be free and also belong? How can my freedom explicitly be integrated with yours?

Going deep into freedom means seeing where I’m engaging in dominance even without thinking about it or feeling identified with it. This isn’t about chastising myself or feeling guilt; it’s about recognising that true freedom is not possible in my context as a middle-class white North American, even as I taste freedom sometimes, when I am operating from love and curiosity rather than fear. That I am shaped even when I choose to ‘be myself.’ That true freedom for any individual may not exist, and yet is always available as a choice.

When I am free, I am by necessity not inhibiting your freedom. I am you.

The trick here (and it is a trickstery adventure) is to not call comfort freedom, to not get into spiritual materialism, to be both free and inexorably part of unfreedom, to be a decaying mess of microbes and mites and temporarily animated flesh and to be something else, to be nothing. To be as still as space and as fast as quantum particles. To be myself and bewildered. When I am free, it will not be me.

What does privacy feel like?

Sometime in my childhood there was a news cycle that centred around the growing ubiquity of “security cameras,” suggesting that some large percentage of public spaces (at least in Britain) was already being filmed. Areas that you might not imagine having 24-hour monitoring, like street corners and parks, were now possible to watch all the time.

But even if cameras were starting to be everywhere, we had an idea that an actual human had to be paying attention, hence the movie trope of the distracted, sleeping, or absent security guard and the meta-camera view of a bank of monitors with our characters unseen except through our own omniscient perspective.

We could assume that our homes or other places we “owned” were not under continual monitoring, unless we were doing things of interest and/or threat to a nation-state. We could say things to other people that no one else would hear and that would live on only as part of human memory, subject to our subjectivity and sense of salience.

Those were the days.

The end of privacy

How far away are we now from near-total surveillance?

Recently, in a meeting I regularly attend on Zoom, one specifically oriented around the sharing of quite vulnerable and personal information, the software began to show a warning as we entered the room.

AI will be listening and summarizing this meeting, it said.

There was no “I don’t consent” option.

Zoom has various reasons to let us know we’re being monitored, but in more and more cases, we may not even know that our locations, likeness, words, or voices are being captured. And what’s more, we’re largely agreeing to this capture without awareness.

Death by a thousand paper cuts

Many things have led to this moment, in which we are experiencing the last days of an expectation of what we called privacy.

Our monitored-ness follows Moore’s law and similar exponential curves that predict the ancillary outcomes of cheaper processors and storage. Digital video has made incredible strides since the early days of tape-based camcorders. Quality cameras are tiny and cheap, and nearly everyone is carrying at least one audio-visual device around constantly. We have room for massive amounts of data to flow through servers. And AI can now process language and visual information to the extent that, while it may still be costly to save every bit of video, we don’t need humans to watch it all to determine its importance or relevance.

And the emergence of click-wrap licences has accustomed everyone to the idea that they have no recourse but to agree to whatever data usage a company puts forth if they want access to the benefits, or even to the other people who have already struck such bargains. What’s more, so long as the effects of our surveillance don’t include authorities acting against us, we seem to have little sense of what it means to lose what we knew as privacy.

Subjects and Rulers

In The Dawn of Everything, authors David Graeber and David Wengrow posit that control of knowledge is one of the three elementary principles of domination.

Historically, surveillance was defined primarily in terms of the state, which had the means and motivation to enforce control of knowledge with one of the other key principles of domination: violence. We had spies and police, and then eventually, as the property rights of individuals other than rulers began to be backed with state violence and technology became more accessible, private detectives and personal surveillance emerged and eventually became small industries. But now, we’re mostly being watched by for-profit companies.

When I started down the rabbit hole of “the implications of AI” thirteen years ago, even ideas about human-destroying agentic AI, such as “Roko’s basilisk,” were treated by some (notably Eliezer Yudkowsky) as dangerous even to discuss, akin to releasing blueprints for nuclear weapons.

But most people didn’t think there was much to worry about. Technology was still a domain mostly thought of as benign. iPhones were brand new. Even the idea that AI might be trained in such a way as to maximize its outcomes at human expense, as in the ‘paperclip maximizer’ thought experiment, seemed far-fetched to most.

For me, the idea of the technology being able to become conscious or even agentic was less compelling than the way people who DID think about this outcome were thinking about it at the time. This was my first foray into the Silicon Valley underground, and what I observed was that many people within the subculture were thinking about everything as machines, while simultaneously longing for more human, embodied, emotional connections.

What I didn’t see then was the cascading motivations that would make AI’s surveillance inevitable and not exactly state-based (though the state still acts as an enforcer). It didn’t occur to me that most people would willingly trade in their freedom for access to entertainment. I didn’t see how compelling the forces behind corporate capitalism were becoming.

Voluntary bondage

“Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth’s atmosphere to a company as a monopoly.” —Marshall McLuhan

Though the “Internet of Things” seemed like hype when it got lots of press in the 90s, we didn’t need to adopt smart appliances for shadow surveillance to begin in our private spaces; we invited it in so we could shop more easily.

The current crop of AI tools centres mainly on figuring out how to sell more things, how to optimize selling, and how to invent new things to sell. If we made it illegal to profit from AI trained on public data (as opposed to trying to put the genie back in the bottle), we’d surely see less unconsidered damage in the future.

It occurs to me that our only real form of resistance is not buying or selling things. And that form of resistance may actually be harder than smuggling refugees or purloining state secrets.

Each new technological breakthrough recreates the myth of social mobility: ‘anyone,’ it’s said, can become wealthy by using these new tools. Meanwhile, actual wealth is becoming more and more concentrated, and most people making their living using the tools of the digital age (versus creating them) are scraping by.

The upcoming innovations in surveillance involve not only recording and analysing everything a human observer could, but also ways of seeing that go beyond our natural capabilities, using biometrics like body heat and heartbeats, facial gestures, and network patterns. We will have satellites and drones, we will have wearables, we will have unavoidable scans and movement tracking.

Follow the money

As someone involved in the world of internet Trust & Safety, I’m aware that the collection of vast amounts of information rests on a premise of harm prevention or rule enforcement, just as there has concurrently been a groundswell of behaviour that requires redress.

To me, it seems strange to simply accept all surveillance as fine as long as you’re ‘not doing anything wrong’; but this is a vestige of the idea that being monitored serves only to enforce the laws of the state. What’s happening now is that we are being tracked as a means of selling us things, or as a means of arbitrating our wages.

None of these thoughts or ideas are particularly innovative, nor do thoughts like these offer any protection against a future of total tracking. We could have some boundaries, perhaps, but I don’t feel optimistic about them in any short-term timeframe.

Instead, I am drawn towards embodied experience of untracked being, while it is still possible. We may be living in the last times where we can know what it feels like to be with other people and not be mediated or observed by technology, to not be on record in any way. We can notice our choice and where we are not offered a choice.

We can feel the grief of this passing.