Could AI Break Capitalism?

About a year ago, the US Copyright Office ruled that AI-generated ‘expressive works’ were not covered by US Copyright law.

“Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.”

As we shift into a world where expressive work, in terms of sheer volume, is more often than not produced by AI, we can imagine interesting breakdowns in the nature of ownership itself.

Just a few years ago, we were in an almost diametrically opposed conversation about technology and expressive work, when the advent of NFTs gave artists the idea that they might be able to have ‘more ownership’ of their work and so finally make bank. While that promise proved to be somewhere between overly hopeful and deceptive, it was also an extension of the capitalist sense that everything should be possible to commodify, and that people thus far left out of the dream of wild money-making, namely unknown artists, were suddenly going to be initiated. Such a vision had nothing to do with changing the system, only with who got to be admitted to the extraction part of the equation.

The emergence of AI-everywhere has taken the questions conceptual artists of the last decades have been investigating (not to mention Orson Welles) and turned them into regular people’s conversational chatter. What is art, in fact, and who is it for? Is art about expression, a creative process, a compelling product? Is art about the artist or about the image? Does art come from an idea or an expression of an idea? Does art happen when it’s intentional or by accident? And who “should” own art?

In the 20th century, art became something that could be part of an investment portfolio. Though the move toward art for money’s sake began before digital was a thing (Jeff Koons on the high end, decor-art on the mass-produced side), creative work became far more product-ized at the turn of the century, as ‘creators’ gained access to metrics and feedback about what potential buyers responded to, and the platforms, for which scale was the ultimate value, instilled the idea of “meeting market demand” in creators through their incentive structures and designs. You could make money being a “creator” from platform ad revenue if you could reach a mass audience (that good ole mobility myth again).

Next-generation tools like Patreon, Substack, and even Etsy began to re-instate niche opportunities for creators, and brands began realizing the power of “micro influencers,” but for nearly all creative people, platform income was not going to cover rent or even groceries. Meanwhile, the promise of the ‘creator economy’ as a cultural phenomenon meant that all of us were writing newsletters or pimping ourselves somewhere to get attention and sometimes money, and as with any MLM, there’s a point at which you look around and realize there’s no one left who hasn’t already been pitched the oils or bought the leggings.

When streaming, or really Napster, emerged, we saw that recordings lost value, and artists could only make “real” money through offering an experience (live show) or a tangible good, and those are much harder to scale. That’s now the reality for anyone who was seduced by the idea of making a living through creative work.

The world of “marketing” as we knew it for the past 20 years is about to implode. There’s little incentive for platforms to ‘protect’ people and their ‘intellectual property’ when it’s far less legally onerous to host unprotected content that can be produced in ever more volume and variation tailored to the whims of individuals (which, if the last several years are any indication, will also work to erase some elements of individual taste in favour of promoting an advertising-driven ‘us against them’ tribalism).

That all sounds kind of horrible, but it also has promise. While people have proven very easy for platforms to manipulate, we are still fundamentally built to value cooperative, embodied, process-driven experiences. Most of us actually are aware that being with each other has an unmonetizable value, even with the rise of commodification of our relationships and so-called communities.

If most art we experience is impossible to own, then might we begin to question owning things at all? Or at least, things that are virtual. What feels radical about the USCO decision is that instead of, as in the past, corporations getting to extract from artists through publishing rights, there’s no owner at all for AI-created work, and how will we even be able to know something is not AI-created, if it’s digital?

Whether this means more power to the technocracy or not remains to be seen. Right now, AI models are very dependent on huge compute and lots of venture money, but presumably there will be motion towards locally-running models as well as cross-pollination of different systems that make it hard to fully control using the legal mechanisms we have now.

Maybe we are not going to abandon ownership but we’ll be more apt to return to analogue approaches. Charles Eisenstein proposes investing in a typewriter factory. I still have this fantasy of a recursive AI system in which absolutely everything digital is AI driven and managed, giving people no choice other than to return to the tactile and small-scale, unless they wish to be a product themselves in more ways than just their attention to advertisers.

Maybe embodiment itself is undergoing a kind of system-commodification that happens to most dangerous ideas. Mostly it seems that we’re in a hyper-denial of our bodies, either because we’re thinkers or because social media has amplified our story about what parts of our physical selves are unacceptable. I think of the chapter in Hospicing Modernity about shit, how much the toilet is the metaphor for life in the anthropocene. We produce waste in pristine rooms, poop into clean water, and send it away to be dealt with by someone else. We don’t take responsibility for our waste and we also don’t see the value of this part of our collective metabolic system.

We are going to have to go back to buckets and compost to find out what making things is about, what a fool’s errand it was to own our work or to protect our ideas, and especially to trade our creativity for the crumbs of a surveillance system’s profits. We are only mammals, in the end, dreaming of being stars.

Goodbye, Capitalism

How I will long for your halcyon days

What if capitalism, in any way that an encyclopedia or economics 101 class might describe it, is over? That’s the hypothesis of the former Greek Finance Minister Yanis Varoufakis, who argues that we’re entering into something worse: what he calls Technofeudalism.

If you, like me, have had this sneaking thought like, ‘well, obviously the platforms now have more power than governments,’ then Varoufakis’s argument won’t come out of nowhere, but it’s a bleak picture of how we’ve ceded our economic systems to purely extractive rent-seeking, in ways that leave little recourse for rebellion, given that this autocracy does little to directly govern. We don’t vote for these leaders, and they don’t provide our necessary physical infrastructures, though they own a lot of fiber and servers. They leech off of the systems ‘citizens’ pay for and then determine what else we can buy or pay attention to, how we can communicate, and increasingly, what systemic resources we ourselves can access.

Obviously terrorism is a fail in my eyes, but you can’t help but think a little wistfully about the underlying hope in Ted Kaczynski’s attempts to bring attention to technology’s negative impacts and his desire to get the hell out and off the grid. But I don’t see a shack-in-the-woods future for myself, short of a catastrophe. I’m about as incompetent at living off the land as one could be (aside from a successful attempt to grow kale). Instead, I am here participating in making myself a serf, a sharecropper of this system.

We’ve been approaching a pretty obvious tipping point with ‘creators’ and AI and ‘the sharing economy’ for some time, and now it’s here.

The only antidote I can propose, with even a shred of reason, is to really re-focus on re-wilding ourselves in some way through the practice of practice. We can learn to be with each other, suffer the messiness and frustration that will always be a part of connection and collaboration, and then perhaps start to build tools to support trust-sized networks that can start to provide infrastructure alternatives to the cloud, keeping in mind that we are not in a position to abandon our feudal overlords wholesale yet. (Yes, tech things like mesh networks, private authentication, alternative financial systems that are not about solving trustlessness at scale, but non-tech things are probably more important.)

In this lifetime, we’re only going to sow possibility, and won’t taste the fruit of our labours. And so, it will be quite tempting to just say, ‘but I need that thing from Amazon’ or “it’s fine if I just look at social media a little bit.” I mean, these are just the most obvious things that I still do on the regular. As an advantaged western person, I’m not only choosing my own serfdom, I’m basically forcing it on other people who thus far haven’t even had the option of purchasing Prime.

What will happen about war, or other old-school dominance activities? It’s an interesting question. Surely to innovate in the manner in which technology lords depend, there will need to be enough sense of personal autonomy to be creative, and creativity breeds subversion, as a rule. But we’ve invented these excellent policing technologies such as AI, blockchain, and social media, so perhaps any of our efforts to resist will simply be co-opted into fun memes or lead to banishment.

For a much more entertaining read, while also unhopeful, I recommend The Immortal King Rao, which breezed into the top spot of Novels I Read Last Year That Basically Support and Annihilate My Worldview Simultaneously. No spoilers, but one theme of the book centres around what happens when the algorithm rules us, like, officially. It’s very nearly nonfiction.

What does privacy feel like?

Sometime in my childhood there was a news cycle that centred around the growing ubiquity of “security cameras,” suggesting that some large percentage of public spaces (at least in Britain) was already being filmed. Areas that you might not imagine having 24-hour monitoring, like street corners and parks, were now possible to watch all the time.

But even if cameras were starting to be everywhere, we had an idea that an actual human had to be paying attention, hence the movie trope of the distracted, sleeping, or absent security guard and the meta-camera view of a bank of monitors with our characters unseen except through our own omniscient perspective.

We could assume that our homes or other places we “owned” were not under continual monitoring, unless we were doing things of interest and/or threat to a nation-state. We could say things to other people that no one else would hear and that would live on only as part of human memory, subject to our subjectivity and sense of salience.

Those were the days.

The end of privacy

How far away are we now from near-total surveillance?

Recently, in a meeting I regularly attend on Zoom, one specifically oriented around the sharing of quite vulnerable and personal information, the software began to show a warning as we entered the room.

AI will be listening and summarizing this meeting, it said.

There was no “I don’t consent” option.

Zoom has various reasons to let us know we’re being monitored, but in more and more cases, we may not even know that our locations, likeness, words, or voices are being captured. And what’s more, we’re largely agreeing to this capture without awareness.

Death by a thousand paper cuts

Many things have led to this moment, in which we are experiencing the last days of an expectation of what we called privacy.

Our monitored-ness follows from Moore’s law and similar curves that predict the ancillary outcomes of cheaper processors and storage. Digital video has made incredible strides from the early days of tape-based camcorders. Quality cameras are tiny and cheap, and nearly everyone is carrying at least one audio-visual device around constantly. We have room for massive amounts of data to flow through servers. AI can now process language and visual information to an extent that, while it may still be costly to save every bit of video, we don’t need humans to watch it all to determine its importance or relevance.

And the emergence of click-wrapped licences has accustomed everyone to the idea that they have no recourse but to agree to whatever data usage a company puts forth, if they want access to the benefits or even to the other people who have already struck such bargains. What’s more, we seem to have little sense, so long as the effects of our surveillance are not authorities acting against us, of what it means to lose what we knew as privacy.

Subjects and Rulers

In The Dawn of Everything, authors David Graeber and David Wengrow posit the idea that control of knowledge is one of the three elementary principles of domination.

Historically, surveillance was defined primarily in terms of the state, which had the means and motivation to enforce control of knowledge with one of the other key principles of domination: violence. We had spies and police, and then eventually, as the property rights of individuals other than rulers began to be backed with state violence and technology became more accessible, private detectives and personal surveillance emerged and eventually became small industries. But now, we’re mostly being watched by for-profit companies.

When I started down the rabbit hole of “the implications of AI” thirteen years ago, even ideas about human-destroying agentic AI such as “Roko’s basilisk” were thought of by some (notably Eliezer Yudkowsky) as dangerous, akin to releasing blueprints for nuclear weapons.

But most people didn’t think there was much to worry about. Technology was still a domain mostly thought of as benign. iPhones were brand new. Even the idea that AI might be trained in such a way as to maximize its outcomes at human expense, as in the ‘paper clip factory’ metaphor, seemed far-fetched to most.

For me, the idea of the technology being able to become conscious or even agentic was less compelling than the way people who DID think about this outcome were thinking about it at the time. This was my first foray into the Silicon Valley underground, and what I observed was that many people within the subculture were thinking about everything as machines, while simultaneously longing for more human, embodied, emotional connections.

What I didn’t see then was the cascading motivations that would make AI’s surveillance inevitable and not exactly state-based (though the state still acts as an enforcer). It didn’t occur to me that most people would willingly trade in their freedom for access to entertainment. I didn’t see how compelling the forces behind corporate capitalism were becoming.

Voluntary bondage

“Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth’s atmosphere to a company as a monopoly.” —Marshall McLuhan

Though the “Internet of Things” seemed to be hype when it got lots of press in the 90s, we didn’t need to adopt smart appliances to begin shadow surveillance in our private spaces; we invited it in so we could shop more easily.

The current crop of AI tools centre mainly around figuring out how to sell more things, how to optimize selling, how to invent new things to sell. If we made it illegal to profit from AI trained on public data (as opposed to trying to put the genie back in the bottle), we’d surely see less unconsidered damage in the future.

It occurs to me that our only real form of resistance is not buying or selling things. And that form of resistance may actually be harder than smuggling refugees or purloining state secrets.

Each new technological breakthrough recreates the myth of social mobility- ‘anyone,’ it’s said, can become a wealthy person by using these new tools. Meanwhile, actual wealth is becoming more and more concentrated, and most people making their living using the tools of the digital age (versus creating them) are scraping by.

The upcoming innovations in surveillance involve not only recording and analysing everything a human observer could perceive, but also ways of seeing that go beyond our natural capabilities, using biometrics like body heat or heartbeats, facial gestures, and network patterns. We will have satellites and drones, we will have wearables, we will have unavoidable scans and movement tracking.

Follow the money

As someone involved in the world of internet Trust & Safety, I’m aware that there’s a kind of premise of harm prevention or rule-enforcement that is involved in the collection of vast amounts of information, just as there has concurrently been a groundswell of behaviour that requires redress.

To me, it seems strange to simply accept all surveillance as fine as long as you’re ‘not doing anything wrong,’ but this is a vestige of the idea that being monitored only serves as a way to enforce the laws of the state. What’s happening now is that we are being tracked as a means of selling us things, or as a means of arbitrating our wages.

None of these thoughts or ideas are particularly innovative, nor do thoughts like these have any protection against a future of total tracking. We could have some boundaries, perhaps, but I don’t feel optimistic about them in any short term timeframe.

Instead, I am drawn towards embodied experience of untracked being, while it is still possible. We may be living in the last times where we can know what it feels like to be with other people and not be mediated or observed by technology, to not be on record in any way. We can notice our choice and where we are not offered a choice.

We can feel the grief of this passing.

Sweet Social Media

Should we try to make it keto or just have an apple?

The question of whether ‘social media is bad for you’ generates these annoying debates. Social media has benefits! But there’s no doubt that there are psychological effects from forms of communication that are themselves equivalent to advertising, in which consuming content means being served a lot of actual advertising for things we don’t need, ads that work by fostering insecurity and internal lack, not to mention tribalism and division. And there have been real harms as a result of social media, including genocide.

Unsavory similarities

What if we think of social media like refined sugar? It’s fantastically tasty, but has no nutritional value. Its negative effects go beyond individual health.

Production of sugar comes from a powerful (and subsidised) industry with roots in slavery. It is ubiquitous and seems impossible to avoid in modern life. Our collective palate-shifting towards it has caused all kinds of downstream effects on our health and ability to moderate our behaviour (so much so that we now have pharmaceuticals to address our inability to naturally self-moderate).

Sounds similar to many criticisms of social media. The idea that we should try to hang on to the “good parts” of social media does seem akin to the proliferation of ‘keto snacks,’ highly processed items that are low in ‘net sugar.’ (TBH, some of those snacks are pretty delicious, but probably not great for us.)

There’s always an interesting tension between ‘we’re living with human systems that are leading to our ultimate demise’ and ‘we have to live in these systems, and anyway there are rewards in this system I don’t want to live without.’ Part of the practice, in my mind, is holding both feelings while getting curious about how both are true.

I am fairly certain that my absolute happiness would not be reduced by the non-existence of social media, even though it has its rewards and pleasures. I was alive, even if I was only a child, when we weren’t all connected and ‘sharing’, and people were pretty OK.

Don’t look back

I am not advocating ‘going back in time’; instead, I am asking ‘how can technology support first principles?’ What might ‘unprocessed’ look like in our digital interactions?

If everyone was on Mastodon or some other still-social-media platform that was not ad-driven, would all the problems dissolve? Is it possible to have a way to share thoughts and information and promote your group or art or thinking in a network that isn’t gross?

There’s a distinction between “within my network and x degrees of separation” and “public,” and perhaps there are ways of imagining ourselves being less prone to performance and self-censorship if we have an idea of who we’re talking to. Models of highly cross-pollinated small groups could help us share more thoughtfully than trying to get attention from ‘everyone in the world.’ Decentralization could make this possible, but more needs to be done to set limits, to normalise boundaries.
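To make the “x degrees of separation” idea concrete, here is a tiny sketch of how an audience might be scoped by social distance rather than defaulting to “public.” The graph, names, and cutoff are all hypothetical; the point is only that “who can see this” can be a bounded, knowable set rather than the whole world.

```python
from collections import deque

# A minimal sketch, assuming a simple follow/connection graph.
# Everything here is illustrative, not any platform's actual model.

def within_degrees(graph: dict[str, set[str]], author: str, max_degrees: int) -> set[str]:
    """Return everyone reachable from `author` within `max_degrees` hops."""
    audience: set[str] = set()
    seen = {author}
    frontier = deque([(author, 0)])
    while frontier:
        person, distance = frontier.popleft()
        if distance == max_degrees:
            continue  # don't expand past the chosen boundary
        for neighbour in graph.get(person, set()):
            if neighbour not in seen:
                seen.add(neighbour)
                audience.add(neighbour)
                frontier.append((neighbour, distance + 1))
    return audience

# A tiny, made-up network.
graph = {
    "ana": {"ben", "cal"},
    "ben": {"ana", "dee"},
    "cal": {"ana"},
    "dee": {"ben", "eve"},
}

print(within_degrees(graph, "ana", 1))  # direct connections only
print(within_degrees(graph, "ana", 2))  # friends-of-friends too
```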

The negative effects of social media come not just from bad actors and harassment, not just from being exposed to advertising and algorithms, not even just from participation in a system that mirrors corporate oppression in general. Investment of time and emotional bandwidth into superficial forms of connection, being constantly evaluated, and seeking attention take us out of our own freedom and sense of belonging.

But what about nihilism?!

Is there any real argument for not eating a sugar-filled diet if you are like “well, we’re all going to die eventually?” My experience suggests that as I divest from more of the systems like corporate work, social media, faith in institutions, I not only feel better but I start seeing the possibility of supporting human patterns of connection and belonging with technology, rather than trying to create a successful startup that exploits human behaviour to gain power and influence.

My experience with the path towards internal freedom is that I find more compassion for my behaviour but far fewer reasons, and less need, to choose comfort and convenience over what seems to be right for me. But it’s a curious question of whether you act your way into right thinking or if you heal enough to not need the crutch?

Cultural addictions

Our collective decisions about what to do about addictive things are curiously inconsistent. Some people become alcoholics and there’s no evidence that alcohol has health benefits, but we’ve collectively decided to allow adults to make their own choices about how to use it. Some people become nicotine addicts and cigarettes are still widely available, though there’s awareness of some of the malfeasance of the companies who profit from selling tobacco products. Some people become heroin addicts and we’ve collectively decided to criminalize that behaviour, or at least criminalize possession of heroin. Some people become prescription opioid abusers and we have decided to hold corporations somewhat accountable while continuing to permit doctors to prescribe opioids. Meanwhile, it’s worth pointing out that when there’s a lot of money being made, less profitable alternatives will often be suppressed or vilified, even if they are actually more salubrious.

To take an opposing position, indulgence is fun. And social media is fun. It entertains us, it gets us excited, it is silly and sexy and delightful. We can be creative and be rewarded and recognised. We can find people who we vibe with and share aspects of ourselves that might be unappreciated or censured in our local community. We can learn and discover things and perspectives we wouldn’t have encountered offline.

Everything in moderation, no pun intended

There are no absolutes. I still love to eat a brownie, have a drink, and watch YouTube videos. But I don’t feel happier if I have two brownies or three drinks or spend too long looking at content. It’s only because I generally eat healthy that I notice the ugh feeling of going off the rails. It’s because I have so many other, meaningful things I care about that I’m satiated by a limited amount of entertainment. I don’t long for more stuff. But we’re living in a time where limits are not the norm, and consumption is king.

When people bring up “making the internet weird and fun again,” I am reminded that the online world can feel like a portal, a place of mystery, surprise, and new connections. Part of this to me feels like it’s not compatible with social media, which is designed to be a firehose, an endless amount of stuff, not a place to have an experience, to feel something and feel a reciprocal sense of knowing.

Socially mindful

How can we have social media that is intentional? How can we create an environment that still allows us to perform, to show off our creativity, but to slow it down to an embodied, breathing, collaborative experience?

Could we have the delight and fun of social media without the approval-seeking and ranking algorithms? Let’s start with the ‘feed.’ What does feed-free social media look like? Even in the so-called ‘cozy web,’ popular community platforms have feed metaphors, though they may not be updating at a social media clip.

Could we live without likes and views? Could we have social media that didn’t reinforce unnatural standards of physical appearance or encourage polarizing viewpoints? Could we have social media that didn’t replace actual feelings of interdependence, collective good, and mattering?

I’m excited to live in a world where we’re not going for “not as bad as…” and living into new ways of being and thinking that rest on the fundamental idea of our collective freedom, our collective responsibility for the space and for our individual experience, and it strikes me that centralized social media simply is anathema to that vision.

Yes, we need breaks from all the seriousness, we need to have fun, we need to laugh and play- but I am not convinced social media is a prerequisite for these activities. If anything, it seems much harder to really feel joy when we’re glued to screens, when there’s always another thing queued up to entertain us or to be processed, especially when a huge number of those things are selling us something. This is how it seems to me, but I love to discover the ways I am making assumptions, and I want to understand how you think we can align social media with a world that supports us as humans, isn’t extractive, and doesn’t rely on violence and dominance to function.

Post-Growth Product Management

The discipline of product management and the business goal of exponential growth have emerged in tandem.

Literally billions of dollars have poured into startups and tech companies on the promise and execution of growth, even to the point where actually making a profit has relatively little weight.

Interviewing for my last job in tech, I asked the founder, “what’s the business plan?” and he said (in effect), ‘in Silicon Valley, we don’t need a business model. Our (blue-chip) investors fund us to grow and then, once we have the growth, we’ll figure out how to make money.’

The belief in growth has been so religious that there was actually no need to have a plan to make money (it helps if you have white, male, Stanford grads on the founding team for this model to apply. No shade to that particular company who are now on a revenue track and are genuinely focused on connecting people). I have also been told that having revenue can be a problem for getting investment, and you should fundraise before you’re making money, presumably because it ties the prospects of the company to something real. Investors naturally are seduced by and/or promoters of the breathless aspirational optimism that just saying “we’re on track to have billions of free users” provokes.

There are wild stories of companies spending vast sums to ‘own a market’ so they can keep growing at a rate that admits no competition, even when there’s really no clear path to being profitable.

And look, all of that ‘works’ in a system in which the primary goal is to attain some growth milestone and then ‘exit’ into public ownership. All of this ‘works’ in the sense that people who put money in as equity investors sometimes get it out at a multiple. Ideally most of the losses are written off, avoiding taxes on whatever revenues come in, or underwritten by infrastructure that’s publicly funded. It works for the investors and sometimes the founders; and everyone else? Perhaps some well-paid employees can live well until the layoffs start cascading.

We as PMs learn about moats and owning a market and strategies that involve winning. We listen to well-produced podcasts about blitzscaling and subscribe to the YouTubes of founders and marketers who tell us about the magic of network effects and how worth it selling a lot of your company is in service of having the capital to grow, grow, grow. You’d rather have 20% of $2B than 80% of $1M, right?

So, what if we’re burning down the world in the process?

Like, literally?

How much computing power is wasted, how much money, how much electricity, how many downstream social and economic effects does this approach have?

Product Management => Problem Management

We can use the skills of great product management to play a different game.

PMs are fundamentally problem solvers. Our skills are synthesising needs and creatively building solutions. When we are doing product well, we’re orienting around customer needs and business goals. What if we continue doing that with a much more holistic approach?

Post-growth is a term used to describe what must happen when we approach the limits of growth from a systems standpoint. There are many signs we’re at the limit:

1. We’re overloading the biosphere with pollution and using up the irreplaceable natural resources of the planet

2. We’re creating wealth creation machines that extract from the general public and only benefit a few powerful people

3. We’re living under systems of corporate surveillance and in some places, state surveillance

4. We’ve created global systems of human interaction that essentially commodify our identities and our relationships

5. Social mobility is decreasing, economic inequality is increasing

6. Pandemics

7. We’re experiencing a lack of affordable housing and increasing houselessness

8. We have health care inequities and many people living in health-related precarity

9. There’s a co-occurrence of obesity and malnutrition

10. Ongoing wars and conflicts have a cascading effect on supply chains

11. Climate changes are affecting food production and housing

12. Probably many other things you’ll be able to come up with just by scanning a newspaper

So what might this world look like if we approach changing the outcomes from a product management perspective? (By this, I mean a good PM approach, not a “CEO of the product” approach!)

Human-centric Product Management

First, we’d be trying to understand what people actually care about. This might come from direct research, but as humans with a long history of self-documentation, it’s pretty clear that indicators of satisfaction are tied to some pretty consistent things:

  • Having our basic physical needs met (shelter, clean air, water, and enough nutritious food)
  • Healthy diet and exercise
  • Feeling loved
  • Having a sense of self-worth
  • Being free of oppression, abuse, and domination by other people
  • Living without a threat of violence
  • Spending time in nature
  • Being part of a collective or community where we matter to others
  • Having some sense of security and serenity

What’s crazy to me is that the vast majority of products tech companies are building don’t serve those needs, and many of them directly subvert them. In many cases, companies may have a mission to support well-being or happiness but the actual way the company operates is antithetical to those goals.

Everything is connected, so if we truly understand what supports people, it won’t be a narrow solution for some small problem without an understanding of what problems the solution itself creates. In the old world, that’s not a problem, just more “opportunities” to make money. We can’t keep thinking this way.

Internal Shifts Ahead

One thing I’ve learned in being alive for a while and seeking answers is that change begins within and reverberates.

The way we are in ourselves and in the world informs what we build and how we affect the world around us. So even with the best of intentions to ‘solve problems,’ when we’re coming up with solutions that must meet a concurrent goal of “winning a market” and increasing our status and wealth so that we can have more power or feel more important, we’re just going to fail to truly see the reality of our impact.

I was 100% into playing the game that the rule set laid out. I wrote an actual guide to “startup pirate metrics.” But these days, there are too many signs that this approach leads to very bad global outcomes and I have been on my own journey that has found me feeling like personal responsibility is real freedom.

We can do things differently, but we will have to start small, and what’s more, we’ll need to abandon growth as our measure of impact. As soon as we put growth as our top line metric, we start to undermine the practices necessary for change. It’s not that our products and companies won’t grow, but they will grow like trees, not like kudzu.

What if we tried to solve problems with these constraints instead?

1. We start by understanding what our customers value and what truly matters to them

2. We reject strategies or solutions that involve inevitable extraction

3. We reject building products or services that exploit human psychology rather than fostering well-being

4. We do the work to uncover our own assumptions and biases

5. We prioritise ensuring that the people with whom we work are taken care of and we foster healthy interdependence among our team

6. We put sustainability before growth, meaning that we don’t need capital just to juice our numbers

7. We collaborate with other teams and companies who are working on the same problem rather than trying to beat them

8. We work together in ways that recognise different roles, skillsets, and experience without creating hierarchies that entrench power-over, and without foreclosing the possibility that people with non-traditional backgrounds may be suited and able to do work we’ve traditionally gatekept with hiring requirements

Why is it that we’re so creative and love solving problems but we also hold the belief that things have to work in the way they do now? Aren’t we ‘disruptors’ and ‘innovative thinkers?’

To think this way, we do have to give up some of the status-seeking and lottery-winning mentalities that drive our industry today, but in return, we have a big blue ocean and lots of opportunities to prototype and test. And the best part is, we can do it together, collaboratively, using the very superpower that has made humans such a growth-oriented species in the first place.

Start small and be a listener first

The first step, I think, is to create more partnership between people served by technology and builders. If you’re a technologist, don’t ignore the cultural, emotional, and societal dimensions to what you are building. Work with researchers, UX practitioners, and above all, customers to consider what might be valuable, not how you can exploit behaviour, make things sticky, or otherwise try to growth hack your way into success. Growth may be a consequence, but if you’re moving at the speed of trust, you can build with care, to create something lasting, to have responsibility to your customers and the world at large, not to make some already-wealthy people money within a short time horizon.

How else might we use our skills to solve problems, without shifting into paternalism and manipulation? Mostly, it’s about being willing to recognise our training, to find internal integrity, and to practice with others who can see our blind spots. We need to put down the master’s tools and learn a new approach, one that sees product far more holistically. I’d love to hear how this idea lands for fellow product people and how we might support one another to make a change.

AI and the Myth of the Creator Economy

Once upon a time, I wrote poems. And I sing to myself quite often, so I had this kind of typical random thought, ‘maybe I should learn some easy musical software thing and write some songs.’

And then I thought, oh, well, what would be the point of that? AI will certainly get better at writing songs before I ever will. That self-defeating thought did spark a little bit of insight, though. What am I creative for?

One way to see it: creative practice is for oneself. For example, people learn woodworking or other crafts to make things that would likely look better, take less time and energy, and be cheaper if they just bought a product from an industrial producer.

If you become good at your craft, you might be a maker. You can go out to craft fairs and sell your items, but chances are, you’ll be operating at a loss when materials and labour are factored in. When you start woodworking, you are not thinking, “now maybe I can be rich and famous.”

Even before AI began inducing a mass pearl-clutching about artists’ rights, being a ‘creator’ was a pretty unlikely path to wealth.

Some kinds of creative work seemed like they might lead to a big payout: the ‘artistic’ careers that fell under a lottery system. The lottery system was always primarily one of overall exploitation and extraction.

Making music is an example. Right now there are so many people making music, perhaps more publicly and intentionally than ever before. Platform algorithms primarily drive discovery and popularity, and those things reinforce the patterns that were already in place. In other words, things that are like other things are most likely to surface. And once something does surface, it benefits from network effects; there’s great research that indicates that people listen to things because they think other people like them far more than as a result of their own individual tastes.

Few artists even make much money from the platforms. Even before there were algorithms, there was the corporate consolidation of the music business, which meant that just a few corporations owned nearly all of the sizable record labels and many of the small ones as well, so homogenization had already begun. And from the beginning, stars of the recording industry made little in comparison to their record labels.

This pattern is true in general for creative or generative work. We went from a pre-industrialized situation where ‘artists’ were mostly wealthy or beholden to the wealthy but there were plenty of people practicing creative crafts for themselves or a few people in their community, to a time when companies began to profit from the distribution of other people’s creative work. Within that system, there have been small companies that were not as extractive, but as time has gone on, the direction has been one of ever-increasing disparities between the creators and the distributors in terms of relative individual profit.

We recently went through a kind of collective delusion with the proliferation of creator platforms and the so-called Creator Economy. Many people were called to put out their ideas, art, and creative work as products. As the wave of industrialization-employment has ebbed due to automation, and because industrialization, media, and the internet have created this sense of global scale on which to market ourselves, we found ourselves looking for ways of expressing ourselves for money. And we were seduced by the corporations who distribute creative work into thinking that ‘owning’ the work was the path to protecting creators (had this ever been true, these companies largely would not have existed, since they are the primary predators).

But many ‘creators’ were willing to buy into creator economies and copyright, perhaps because they thought they might be the exception. (Does this remind you of other delusions of social mobility that have led to many collective positions that reinforce the benefits for wealthy people against non-wealthy people’s own self-interest?) We were willing to believe that platforms ‘allowed’ creators to make a living being creative, when they would have otherwise laboured in penniless obscurity. (In fact, artists can be streamed millions of times on Spotify and not receive enough money to pay for two months of a Spotify subscription, on the individual plan, mind you. And most people don’t garner millions of streams.)
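To make the arithmetic concrete, here is a back-of-the-envelope sketch of how millions of streams can shrink to almost nothing by the time they reach an individual musician. Every figure in it, the per-stream rate, the label’s share, the advance, the band size, is an assumption for illustration rather than an actual Spotify number; real rates and deal terms vary widely.

```python
# A back-of-the-envelope sketch; all numbers are assumptions for illustration.

streams = 2_000_000
per_stream_payout = 0.0035      # assumed average gross payout per stream, in USD
label_share = 0.80              # assumed share kept by the label/distributor
unrecouped_advance = 10_000     # assumed advance the label recoups before paying the artist
band_members = 4                # assumed number of people splitting whatever remains

gross = streams * per_stream_payout                            # total royalties generated
artist_pool = gross * (1 - label_share)                        # artist-side share of royalties
after_recoupment = max(0.0, artist_pool - unrecouped_advance)  # nothing is paid until the advance is recouped
per_member = after_recoupment / band_members

print(f"Gross royalties for {streams:,} streams: ${gross:,.2f}")
print(f"Artist pool after the label's cut:       ${artist_pool:,.2f}")
print(f"After recouping the advance:             ${after_recoupment:,.2f}")
print(f"Per band member:                         ${per_member:,.2f}")
```

Under kinder assumptions the result is less stark, but the direction is the same: the distributor and the deal structure absorb most of what the streams generate.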

Many years ago, I wrote about the idea that creators might be best served thinking about making a living much the way one might by having a shoe repair business. It could be possible to create enough direct relationships with people who like your work to get by, and that would be a remarkable success- you’d have a basic income, be in your creative practice, and not have a boss telling you what to do or what to make. Instead, I’ve seen people trying to negotiate the systems by learning how to ‘make more of what people want,’ and creating a glut of sameness, which honestly makes it that much easier for AI to step in and be as ‘good.’

Now, we’re perhaps confronting something that could be transformational to the whole notion of art-as-commerce. It might be that the only real value in being creative is in the practice itself. In the learning, experimenting, doing of the thing, not in the marketing of the product. It might be that we value human-made things because we are part of the process, because the creative output has meaning.

Perhaps we’re headed into a farmer’s market model of ideas, songs, or art. There was a moment when it seemed like NFTs were a version of this (only if you squinted), but OpenSea showed that mostly the money was in applying the same kind of platform economics that the streaming platforms have. Extraction for the few. And so, you may ask, where does that leave creators for making a real living?

Well, right. Corporate capitalism evolves to take more out and leave less for most people. And this is where I think (being fairly ignorant about political science) I don’t resonate with Marx when I think about what’s next, because “workers” seems to me like a function of industrialization itself, and what’s happening is that we won’t have work. This may seem kind of nice for those people with enough advantage to enjoy leisure and minimally-paid creative pursuits. There will likely still be work for those who sell access to their own status for a time, and perhaps people at the upper echelons of corporations will still be needed to formulate strategies or be figureheads for a while.

There are still low-wage jobs and service providers who are more challenging to replace, but industry is plugging away at making them dispensable too. From my life on the edge of Silicon Valley I see that there are ideas to automate everything from drivers to service workers to doctors, lawyers, and therapists.

If we keep going down this path, most of this displacement will come without alternatives for ‘making a living.’ Capitalism is a vacuum hose trying to suck every particle of wealth and power out of the earth and its inhabitants. In the US, the top 1% have more wealth than the bottom 90%. Even with supposedly more access to investing with the advent of platforms like Robinhood, the top 1% own more than half of all stocks. Access to wealth overall is decreasing, with the top 10% owning about 90% of all stocks, and the gap widening every year. A group of 725 individual people has more wealth than a collective 50% of Americans, and that doesn’t even factor in global disparities.

I can see why cryptocurrency seems attractive as a solution. If we just had a way to create capitalism for ourselves! seems to be the idea. I mean, was capitalism a good thing, leading to post-scarcity where, once we find a collective way to revolution our way out of disparity, we can all live in a happy place where we have all our needs met and can just play and be creative and garden and get on up Maslow’s hierarchy? (Or a more appropriate framing.) Hmm. As we experience massive climate upheaval, intense scarcities in housing, pandemics, and all the other things that in the short term money can still largely mitigate, I don’t know if post-scarcity looks imminent.

And yet. We are darn resistant beings. If we can resist the commodification of post-capitalism itself (not a joke- capitalism is cunning, baffling, and powerful!) we might discover this truth- that it really is all about practice. That if we give up the idea that our identities, relationships, and creative process are all really products, we might find out that there’s a lot of power in our collective and interdependent practice. Doing that practice gives us the opportunity to find new ways to collaborate and contradict the idea that it’s just naive to find an alternative to states, corporations, or other systems of control.

AI is not benign. We can regard it with curiosity and wonder, and also recognise that the vast majority of the energy around it right now is focused on figuring out how to make more money and add it to the arsenal of corporate domination. Creative hackers may find ways to use it as a tool of subversion as well. But the general idea that it’s going to put artists out of business implies that artists were in business in the first place, and that’s something we can see through without any help from GPT.

Freedom might not be free

Free sounds great. Who doesn’t love free stuff? Who can say how many random and unnecessary calories I’ve consumed at parties or at those in-store sample stands? Goodness knows I have wasted a lot of hours online that never would have happened if I had to assess the value I was getting from them. (Of course, someone made money from that time I contributed.)

But free is never free; it’s only subsidised, whether by others, by ourselves indirectly, or even by the earth. In a system of capitalism, free things made with someone’s labour lead to unsustainability and poor motive alignment, even if they result from the best intentions. Instead, we could think about products as either coming from collective investment with collective and equitable ownership, like public goods, or as following models in which there is value exchange, even if we eliminate some of the regressive nature of flat pricing models. If we’re taking things from the earth, we might imagine how we can reciprocate, not just take and use.

With tech, we’re making products that are intangible, but they still require labour to produce. When we make them “free,” we are in a situation where we’re going to be dependent on money that isn’t tied to the value we’re creating for the people who use our products. And yet, we’re working in a context where many companies, especially in the social tech world, make their products free.

Free feels like it’s generous, until you’re out of funding. Free suggests there are no needs among the people who are working on the product. And free feels like it’s a commitment to some kind of ethical stance and cooperation (see Open Source philosophies), but Open Source is rife with abandoned projects, projects with only one real contributor, and tools that mostly just serve developers. It leaves out creating the kind of relationships that emerge when people exchange energy for value. It leaves out creating systems of mutual benefit.

I’ve worked on products where we offered free versions. It’s great for growth and for giving people a sense of what value they might find by making a commitment. There’s room for free, but something interesting to me is how much less responsibility people felt when using the product for free. They often didn’t value the work of the people making the product at all, and were more antisocial in their communications with the company.

There’s a real and interesting tension in how to approach charging for technology, especially social tech. As a person who does lean to the cheapskate side, I like to ask myself how my feelings about the products I pay for differ from my feelings about free products. Much of my thinking has to do with user experience, especially collective user experience. It makes sense to me to pay for things that allow me to extend a good experience to others. I suspect that there needs to be a re-norming if we want to create sustainable companies around social and collaborative technologies. For now, most of these make money by offering a business product. Perhaps that’s the right transitional path, as long as we don’t lose our missions along the way.

Feeding on Empty

When we think about communities and community platforms, we as builders help communities thrive in part by not reinforcing an illusion that the platform is the community.

Great community technology can emerge by observing how great communities function outside of technology. And sometimes from questioning some of our assumptions about what communities actually need.

There is one metaphor that shows up in most community platforms that comes not from how communities work, but instead how technology and social media work: the “Post Feed.”

Posts and post feeds are problematic for a number of reasons.

  1. They create a hierarchy – the person posting is dominant.
  2. They encourage a self-promotion mindset. Posts are structured like ads of oneself.
  3. They are not conversational: one posts without knowing who will or won’t see the post, so posts often lack meaningful context or a clear idea of what the information shared is meant for.
  4. They are, in most online communities, dominated by just a few people. They feel like overkill to share a small piece of information and can lead to lower participation overall as a result.
  5. They ‘contain discussions’ that are usually not very easy to follow, not transparent, and easily lost, and yet they lead people in the discussion to feel as though they’ve been sharing transparently and that others should be informed as a result.
  6. The conversations within a post are typically text-based, have a high bar for expression, are disembodied, and are easy to misunderstand.
  7. The feed metaphor creates a finger-in-the-river feel, when most community information-sharing benefits from being either clearly retrievable or clearly ephemeral (such as date-driven information).

Can we actually think of any real-life communities where there even exists something that mirrors the metaphor of a Post (which is essentially one person broadcasting some information and then other people having clearly less important responses to that broadcast) and a Feed (an endless list of things people share)?

Instead, what if we use a different structure, one in which we design for conversations that happen between people more naturally and equitably, and for information-sharing that may prompt discussion but doesn’t masquerade as such?

There are two key ways information-sharing typically works in non-platform-based community.

One is some kind of “bulletin board” or “announcements time”, where there is just the information being shared, and often that kind of broadcast is normatively reserved for things clearly of importance to the group at large.

The other is in an actual conversation or meeting where information is submitted or shared to be discussed (often as an agenda item) and there are facilitated or normative ways that a discussion occurs.

Though it’s perhaps difficult to move away from technology designs people are “used to”, or the way technology has trained us to interact with it and one another online, it’s not actually that difficult to design a different kind of division between information sharing and discussion. Information buried in posts is very difficult to organize and filter, whereas a system where people share just the information, separate from discussion, makes it far easier to make sense of.
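As a thought experiment, here is a minimal sketch of what that division might look like as a data model. The names and fields are hypothetical, not any particular platform’s schema; the point is simply that announcements and discussions are different kinds of objects with different lifecycles.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# A minimal sketch, assuming we separate information-sharing from discussion
# instead of funnelling both through a single "post feed". Names are illustrative.

@dataclass
class Announcement:
    """Broadcast information: either clearly retrievable or clearly ephemeral."""
    author: str
    text: str
    topic: str                          # lets members filter and retrieve by subject
    expires_on: Optional[date] = None   # set for date-driven, ephemeral notices

@dataclass
class AgendaItem:
    """Something submitted to be discussed, rather than broadcast."""
    proposed_by: str
    question: str
    context: str                        # why this matters to the group

@dataclass
class Meeting:
    """A conversation container: participants are known, discussion is bounded."""
    when: date
    participants: list[str]
    facilitator: Optional[str] = None   # discussions get facilitated, not just replied to
    agenda: list[AgendaItem] = field(default_factory=list)

# Usage: a date-driven notice expires; an agenda item waits for an actual conversation.
notice = Announcement("jo", "Potluck this Saturday", topic="events", expires_on=date(2024, 6, 1))
meeting = Meeting(when=date(2024, 6, 8), participants=["jo", "sam", "ari"])
meeting.agenda.append(AgendaItem("sam", "Should we move to a co-op hosting plan?", "Our current host raised prices."))
```

The usage lines at the end show the difference in lifecycle: the notice can expire or be filed under its topic, while the agenda item waits for an actual conversation rather than accumulating replies in a feed.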

When we look at how great communities and collectives operate, connection and trust-building is prioritised and baked into the practice. It’s also fractal, in which individual values, relationships, and collective actions and communication are aligned.

It makes more sense to emphasize meetings and conversations where connection and trust emerge and to let information-sharing be a smaller piece of the platform. It makes sense to choose design patterns that work against dominance. It makes sense to help communities support members’ journeys and to encourage real interaction, rather than to be a private social media platform where everyone sees a feed of posts.

Communities online have begun to regard platforms like Mighty Networks, Circle, Slack, or Discord as gathering places but for the most part, they are not very broadly participatory, inclusive, or connecting.

This is a UX problem. As community platform builders, we have a real opportunity to use the best practices of offline communities to inform the way we imagine the spaces we offer online. And choosing to do things differently can only be a good thing, given the impacts we’ve seen from how technology has been built in the past, not to mention the benefits of positioning and innovating.

A Cold, Cold Problem

One of my apparent side hobbies is reading startup advice books.

These books follow a predictable formula: know your customer, build cheaply, be a painkiller not a vitamin, raise VC, and grow as though your life depends on it. I’ve been in the startup world for a while and most of these books are pretty successful at describing what someone would do to play the game of startup we’ve seen for the last decade or so. They are all manuals for getting funding and then scaling, without much question about whether either one of those is the ideal path for a business.

I remember when I first got into tech and took a lot of this at face value. I wanted to play the game because I wanted to be a winner. I thought people who were successful at tech startups knew something, and that I needed to learn that thing. I learned a lot about building product, and those lessons have been incredibly valuable, no matter what kind of business you want to apply them to. I’ve learned from some very smart people.

But there are some things these books largely ignore or gloss over. For one thing, they will explain how VCs put money into lots of startups but only a few scale, and therefore VC investment is only for products that have huge markets and want to blitzscale. This framing instantly leads all founders playing the game to try to win by articulating solutions that are broad-based enough to be a big brand someday, to ‘go after big markets.’ Most startups should be doing the opposite, looking for niches and building a viable company. What’s more, getting VC funding is, by the accounts of nearly every founder I’ve met who has done it, somewhere between risky and business-killing. Ceding control of your company to VC means trying to extract as much value as quickly as possible, not building a sustainable and profitable business.

Even if you wanted to play this game, good luck to 50% of the population. Of the companies funded by VCs last year, less than 2% were founded by people who were not men. Imagine the stats for being not male AND not white. Why even bother with the game in the first place?

With the waves of tech layoffs, it seems like perhaps at least a few people will question the very nature of the industry, or, I don’t know, corporate-centred capitalism. I’m very curious to see whether what comes of that might look different, or if it’s just going to be more people trying to follow the playbook.

Back when I got into tech, I heard the disparagement of what were known as “lifestyle companies,” which is what you were by default if you weren’t trying to scale and become a monopoly. It’s kind of like regular business fundamentals are simply ignorable when you’re a ‘disruptor.’ And the IV of VC keeps that story alive.

I started reading Andrew Chen’s The Cold Start Problem recently. There are some interesting insights about network effects in the book, but I have to admit I get hot under the collar every time he explains how Uber did growth. You have to be willing to hustle, you have to do what works even if it doesn’t scale, so you can ruthlessly undercut the existing industry and find a way to incentivize people to exploit themselves on your behalf. THIS IS HOW YOU WIN!

But is it? What would happen if founders played a different game? Can we get out of winner-take-all if we just radically decide to do something different? Collaborate, for example?

Peter Thiel’s famous paean to monopoly thinking, “competition is for losers,” will surely be an epitaph for this age of do-anything-to-win, whether it’s because we burn down the planet or because we learn we actually succeed more sustainably by cooperating, making competition for losers of a different sort.

I’m sure this would have sounded naive to my younger self, but it turns out that when you get on a path to having values and living in integrity, you don’t really give an eff whether people playing the game think you’re an all-star. You’d be surprised how many people actually win by building trust, connection, and products and businesses people care about instead.

I began this post a while back, and it’s funny how much more ‘radical’, or possibly just ‘realistic’, I have become since. In the interim, big tech companies have laid off 200,000 people and SVB and other banks have failed. It’s becoming more apparent to me every day that even when a company isn’t in a VC-driven death spiral, tech is largely being built to reinforce systems of extraction that are a death spiral for the whole world, and that tech is being built in a culture with all the hallmarks of white supremacy. It might not be crazy at all to reject the very basis of tech economics, in which foundations and pension funds, through VC, waste so many resources funding companies with a 9/10 failure rate that the answer, even for most investors, is to find “something that can scale.” The game is not just rigged, it’s a battle royale.

Disrupt LinkedIn

I have something to confess. I like LinkedIn.

I’ve trained that algorithm to deliver the things that make me say, “right on!” My feed, which I also feel fine ignoring for periods of time without any sense that my absence will be noticed, is full of people whose faces I like to see.

But I also hate LinkedIn.

Of course, there’s the fundamental problem of a centralized, traditionally-run tech company owning me as much as LinkedIn does. Of my contribution to it profiting someone, of my social relationships being owned. I can “export” a list of people I am connected to, my “data”, but I can’t easily connect LinkedIn to anything that would improve the connections (or so-called connections) I have developed.

Those corporate web 2.0 issues aside, LinkedIn is also kind of the opposite of useful for the very thing I am there for. Every time I am at an event with interesting people, or meet a new person, I “link” with them, only to have zero context about the whole thing immediately. There they are in the collection, but there’s basically no impetus to go further than that, or to be able to use LinkedIn to do the kind of basic follow-up and relationship development that would obviously be necessary for someone to go beyond being a profile to being a colleague. I hate having messages locked away in a UI that leads me to forget I ever spoke to someone. And don’t get me started on the giant misses in other parts of the product: why are LinkedIn Groups so completely terrible?

Yes, there have been attempts to disrupt LinkedIn before, but most of them are just the same ‘own everyone’s data’ kind of approaches. We are perhaps starting to have enough DeCent tech to do something more exciting? So if that’s what you are building, here are some wish list items:

  1. You have a public profile, but you can limit the information you share to classes of people (co-workers, communities, friends) or even to certain people. Over time you can shift the level of sharing based on relationships that develop. (There’s a rough sketch of this after the list.)
  2. You have a social graph, and when someone looks at your profile while logged into theirs, they can see your mutual connections.
  3. When someone connects with you, you have the opportunity (or even requirement? You choose.) to add context about how you know them. You can add more context as time goes on. Their profile reflects messages you’ve shared as well. This shared context is visible to both people but no one else. Could this be contained in some E2E way? Why not?
  4. There’s an RSS feed associated with you that you can feed your various channels of public streams through, à la ActivityPub, and there’s a way for your contacts to see this as a feed with other people’s content. There’s a transparent algorithm you can adjust to favour certain people, relevant-to-you content, or other preferences (a second sketch of this follows the list).
  5. You can use the platform as a mailing list. Contacts can opt in and then you can message groups of people filtered by interests, locations, etc.
  6. It’s easy to pass along other people’s content as well, so there are still metaphors for responding to and amplifying content (though it could be as a comment that is attached to the content, or as a message only delivered to the person posting). There are ways to take content into a better UI for conversations, ideally. I’m imagining here that we’re really talking about something independent of this particular platform, no need to re-invent every wheel.
  7. There is probably benefit in orienting around profiles as a resume as well as simply a way to connect. LinkedIn has managed to stay less toxic by both avoiding being ad-driven and by being a space that represents you to the working world. But you could make many improvements to the way information is presented: making the resume part more modular so you could choose to emphasize non-job work more prominently, and making it easier to share your non-job aspects with people you get to know more deeply. The trick is to have that information be yours and not the platform’s. Now we get into identity management, but you know, people are figuring it out, or trying to. Eep.
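
To make the first wish a bit more concrete, here’s a minimal sketch of per-audience profile visibility. Everything here is hypothetical (the `Audience`, `ProfileField`, and `visibleProfile` names are just illustrations of one way to model “share different things with different classes of people”), not any existing platform’s API.

```typescript
// Hypothetical sketch only: per-audience visibility for profile fields.
// Audience, ProfileField, Viewer, and visibleProfile are illustrative names.

type Audience =
  | "public"
  | "coworkers"
  | "community"
  | "friends"
  | { personId: string }; // share with one specific person

interface ProfileField {
  key: string;            // e.g. "headline", "currentProject", "phone"
  value: string;
  visibleTo: Audience[];  // who is allowed to see this field
}

interface Viewer {
  personId: string;
  relationships: Set<string>; // e.g. "coworkers", "friends"; can grow as trust develops
}

// Return only the fields a given viewer is allowed to see.
function visibleProfile(fields: ProfileField[], viewer: Viewer): ProfileField[] {
  return fields.filter((field) =>
    field.visibleTo.some((audience) => {
      if (audience === "public") return true;
      if (typeof audience === "object") return audience.personId === viewer.personId;
      return viewer.relationships.has(audience);
    })
  );
}
```

The point of item 1 is then just that, as a relationship develops, you add entries to a field’s `visibleTo` list (or to the other person’s relationship set), and they see more of you.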

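And for the transparent, adjustable feed in item 4, a sketch under the same caveat: the names (`FeedItem`, `RankingWeights`, `rankFeed`) are made up, and the scoring is deliberately nothing more than recency decay plus boosts the user has explicitly set.

```typescript
// Hypothetical sketch only: a feed ranking the user can inspect and adjust.
// FeedItem, RankingWeights, and the weight fields are illustrative, not a real API.

interface FeedItem {
  authorId: string;
  topics: string[];
  postedAt: Date;
}

interface RankingWeights {
  favouritePeople: Record<string, number>; // extra weight per person, set by the user
  topicInterest: Record<string, number>;   // extra weight per topic, set by the user
  recencyHalfLifeHours: number;            // how quickly older posts fade
}

// Score = recency decay + the boosts the user has chosen. Nothing hidden, nothing ad-driven.
function scoreItem(item: FeedItem, weights: RankingWeights, now: Date): number {
  const ageHours = (now.getTime() - item.postedAt.getTime()) / 3_600_000;
  const recency = Math.pow(0.5, ageHours / weights.recencyHalfLifeHours);
  const personBoost = weights.favouritePeople[item.authorId] ?? 0;
  const topicBoost = item.topics.reduce(
    (sum, topic) => sum + (weights.topicInterest[topic] ?? 0),
    0
  );
  return recency + personBoost + topicBoost;
}

// The feed is just the items, sorted by a score the user controls.
function rankFeed(items: FeedItem[], weights: RankingWeights, now = new Date()): FeedItem[] {
  return [...items].sort((a, b) => scoreItem(b, weights, now) - scoreItem(a, weights, now));
}
```

The particular weights don’t matter; what matters is that the whole ranking fits in a couple of functions a person could read, understand, and change, which is roughly what I mean by “transparent.”
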
If you’re making this, I am excited. As everyone always says to me, “I want to be a beta tester.” I am guessing something like it is already happening. And then perhaps LinkedIn will finally be… left out.