Copywrongs and the Artifice of Intelligence

The mysterious addiction to our own captivity

All the wrong things are happening when it comes to copyright and AI.

All big AI models were trained on what their makers call “public data,” meaning content people put somewhere that was not secured. Big corporations are suing the makers of these models for infringement, but where is the class action?

The way copyright actually works, at least in the jurisdictions where these models were trained, is that ALL THAT CONTENT WAS COPYRIGHTED. “Your work is under copyright protection the moment it is created and fixed in a tangible form that is perceptible either directly or with the aid of a machine or device.” All I must do to copyright my work is to write it down somewhere. Most platforms, in order to stay shielded by Section 230, do not assume ownership of that copyright. So every big AI platform should now be in violation.

I say this with some trepidation.

Once upon a time, I ran a meetup group that discussed issues around IP, copyright and copyleft, and technology, called CopyNight NYC. It was around that time that I was extricating myself from a job in which part of my time was spent enforcing copyright claims on behalf of a film distributor. Over time, I came to believe that the strenuous defense of copyright is largely in service of exploiting artists, not protecting them. This is a game of legal and lobbying warchests, not of any fair or unfair use.

Which is probably even more true now. Whatever so-called protection there was for average people creating content is in name only, whereas the “cloud capitalists,” as Yanis Varoufakis calls them, need not adhere to any kind of law that gets in the way of their railways, let alone consumer protection for what their hastily assembled tracks might mean for safe travel.

We don’t seem to understand the concept of our own sovereignty. So often I hear people use arguments like, “privacy is dead” or “I don’t like the way this platform treats me but it’s accessible/free/what people already have.” You put it where other people can see it so how can you complain that it was stolen? — especially as it’s been mixed with so many other things that it is unidentifiable (with many examples of the unidentifiable-ness being untrue, of course).

I have a little bit of that feeling of being duped, even if only because I didn’t notice who was doing the talking. So many things seemed to make all kinds of sense until I realized how far people with power will take them if it serves them; I had no idea what I was signing up for. For example, I supported the EFF for years, but as software ate more of the world, it became clear that zero accountability for content wasn’t just freeing the platforms, it was enabling them to become factories for hate and violence.

The tenets of open source feel aligned to me, and I am against the way copyright has been used as a tool of domination, especially in conditions of fair use or creative expression. But now the attitude seems to be simply “oh well, people shouldn’t expect ANY protections for what they are contributing.” Protections for us would be VERY expensive for the platforms ingesting it all, either to sell ads amidst it, or to train AIs to sell us things without even needing advertising.

Every company with access to data is jumping on this approach. Even paid services like Slack and Zoom are, by default, using their customers as sources of free training data. And do you suppose this doesn’t mean, ultimately, the normalization of surveillance everywhere, even in your ‘private’ spaces? Even businesses with sensitive or proprietary data will likely be unable to prevent its surveillance without leaving the platforms, and getting fully out of Alphabet, Microsoft, Apple, Amazon, and Meta’s clutches will challenge even the largest enterprises.

For those of us doing work to cultivate something in the world that does not rely on dominance, backing by violence, and extraction, it’s pretty essential to start thinking about how to extricate ourselves. These tools are only “free” as in, they don’t charge money for us to use them, but we come up with all sorts of justifications to keep them. “Everyone can use Docs;” “people are already on Instagram or Whatsapp;” “there are no alternatives to AWS or Google for small projects;” the list is endless.

How, do we imagine, will alternatives ever exist if we, on the forefront, are not willing to risk some frustration or learning in order to expand the reach of open source and ethical paid alternatives? How will alternatives get better without the pressure to serve us? What if part of our work as organizations providing community support and service is evangelizing, supporting, and contributing our community design needs to alternatives?

We can stop giving our time, attention, creativity, and content to these platforms for free. What used to be just social media making us the product is now pretty much any large VC-backed technology company. Yes, many of these alternatives are less sexy or more kludgy, but technology is nothing if not iterative. If you can divest from big brands for other kinds of ethical violations, why not at least give a few alternatives a shot? With most of these products, network effects can be huge, and can potentially mean at least the beginning of an ecosystem where people are valued and respected, not simply treated as training material.

And perhaps we will be able to offer our content and contributions as a gift to reciprocally-focused AIs (or specifically, AI model builders) that want to work in solidarity for the common welfare of all beings. Right now, AI is really about a kind of intelligence I would describe as information processing. Living beings share the capability of sensing, not through the processing of information, but through our entire system(s). What we are giving AI is just the stories we’ve made up about our own sensing. Vastly limited.

We are seduced by imagination. We are loved by what is real.

Porn and Parity

Trigger warning: like every single one. Please take care of yourself.

Imagine you’re a high school student right now. Imagine that you have a body that is perceived to be female. You’ve already been inundated for years with zillions of images telling you what acceptable looks like. You’re subject to being pictured without your consent in public spaces online. You are being constantly gossiped about, even during classes. You may be only partly aware of what people are saying about you or how you’re being depicted. 

You’re in the middle of figuring out how to be a human and you’re probably being portrayed as something less-than-human on the regular. And now, you’re also literally porn.

As schools across the country grapple with the implications of undress apps, we have to wonder what their effect might be on any-gendered humans going forward. 

Way back in 2016, I wrote (but could not figure out how to publish) an article about porn at work. Honestly, as someone who did have a job back then, I did not want to kick the hornets’ nest by naming a thing that was probably true but zero people want to bring up. There’s a high probability that people are consuming degrading porn at work and it has an impact on working environments.

Hard to imagine, but at that point, I didn’t think kids were going to be ‘allowed’ to be on their phones continuously in schools. The new normals keep on coming! What does it feel like to walk into school and know that people are making porn of you right in math class? It’s pretty hard to fathom. But, if you’re someone cast in a non-male identity and go to work, maybe not completely unimaginable.

To be clear, I am not anti-porn if porn simply means sexual imagery. We have bodies. We have desire. We are crazy overindexed on our visual systems. But one can’t avoid the actuality of what most porn seems to be, a reinforcement of dominance. Most undress apps work only to represent ‘female’ bodies. They are not about the realities of nakedness, they are about the weirdly idealized porn ‘female body’ and the reinforcement of knowing your place if you might have female body markers.

Tyson Yunkaporta suggests that every culture or community needs to have rites around sex and violence. Having them be unmentionable or never-OK can only lead to destruction and destructive manifestations of violence and sex.

The more prudish a culture professes to be, the more dangerous it is, the more likely it is to engage in systemic violence. In most online contexts I am in, sex is invisible, tactfully avoided. Where does that take us? Somewhere dark, I imagine, because it’s so shadow.

Whereas before these secret thoughts stayed in someone’s mind, now they are worming their way into reality like Stranger Things tentacles. And porn becomes addictive, and thus the highs or debasements need to get more extreme. Porn fentanyl. And sex isn’t ‘polite.’ We shouldn’t ever talk about what’s going on, except in pseudonymous online rooms where people trade tips about how to take it further. Let’s not forget there’s money in it.

From Feb 2016:

Women have made inroads in many areas of work. Though in some areas (film directors, Fortune 500 CEOs, financial advisors, to name a few) women still have a way to go, there has definitely been a trend towards parity overall. But are we missing an elephant in the room? What if 20% of the people we work with are consuming stereotypical and often demeaning images of women at work every day? (Or at least, every work day.)

Quick stats to amaze: 12% of websites are pornographic. Every second, over 3,000 dollars are spent and almost 30,000 people are on adult sites. 40 million are regular visitors to porn websites. 70% of men 18-24 visit pornographic sites (which seems kinda low). “A significant relationship also exists among teens between frequent pornography use and feelings of loneliness, including major depression. Adolescents exposed to high levels of pornography have lower levels of sexual self-esteem.”

Women are ⅓ of the porn-viewing audience. Women are also the primary fans of many other things that present women in ways that encourage stupidity, vanity, and submission. There probably should be a study of how many people are watching The Bachelor or reading InStyle at work, too.

However, I suspect that pornography largely based on the humiliation of women might be something more men are consuming than women. And when 20% of men are consuming porn at work, it might have something to do with the limits on women’s success in the workplace.

I’ve had conversations with women-identifying friends, many of whom identify as queer or non-traditional in their gender presentation, and many who watch porn. Generally, it’s believable to me that women use online porn and enjoy it. My suspicion is that they probably favour a variety of adult media that doesn’t primarily focus on straight cis men humiliating women (no judgement!). I haven’t found any studies that separate out porn consumption by topic, but an NIH study found that “Boys were more likely to be exposed at an earlier age, to see more images, to see more extreme images (e.g., rape, child pornography), and to view pornography more often, while girls reported more involuntary exposure.”

The debate about pornography is often focused on whether porn is “good or bad.” As someone for whom free speech is a primary and fundamental right, I have no interest in condemning porn outright. As media, it merely reflects human desires and psyche. Demonising and protecting pornography alike prevent the opportunity to understand its effects on us culturally.

I may go to work and present myself as an outspoken, confident person, who is (in my case, erroneously) perceived by others to be a woman. I may be working with someone who is watching porn during our workday, porn that operates on a paradigm that women should shut up, beg for men to have sex with them, or even just confirm the idea that women’s primary role is sexual. I may have no idea that five minutes before I am on a Zoom, the person I’m talking to was getting off to a rape scene. Is it simply the lack of transparency that feels uncool?

It seems possible to acknowledge that people have desires and that’s just fine, without having to accept that women are primarily shown as objects or subjects of abasement in media consumed by a huge swath of society AT WORK. It’s sort of akin to saying “because it’s a part of a religion, female genital mutilation is just a cultural fact” or “because white men had greater access to technology and power, slave ownership was OK.” This isn’t a “water is wet” situation, it’s just something that’s been agreed on by everyone to be fine if it is kept secret.

According to a study by Juniper Research, by 2017 a quarter of a billion people will use their mobile or tablet devices to access adult content, such as videos, images, and live cams, up by more than 30% on current usage. Not all of them are using their devices at work, but many are.

The idea that pornography is the “cause” of sexual violence, harassment, or cultural norms isn’t really supported. That said, there is some evidence that consuming pornography leads to “a higher tolerance for abnormal sexual behaviors, sexual aggression, promiscuity, and even rape. In addition, men begin to view women and even children as ‘sex objects,’ commodities or instruments for their pleasure.” Researchers working for the Witherspoon Institute in Princeton, NJ recount a study of 804 Italian teenage boys which reported that those who viewed pornography “were significantly more likely to report having ‘sexually harassed a peer or having forced somebody to have sex.’”

There is a case for correlation vs. causation in most of these studies. Still, intuitively, it *feels* like, as someone who presents as a woman, that men viewing hours of women in a sexualized and often demeaned light has some effect on how I’m perceived. Not just that I may be sexualized, but maybe more that there is some lingering idea that being assertive, confident or disinterested by being evaluated might be troubling for someone who has built an image of women’s behaviour through porn. (Plus, maybe it shrinks your brain!)

Most people are capable of separating violence, sex and miscreantic behaviour from how they’d like to be in the world. I’m not suggesting that people can’t watch porn that presents women as objects and also understand that their mothers, daughters, sisters or colleagues would prefer to be treated differently. But it’s naive to think that constant exposure to any cultural norm can have no influence. 

Pornography will continue to exist. We have the option, as a culture, to shine a light on porn, to explore why degradation is exciting, to understand our minds through our collective desires. We also have the option to understand that these images are a part of our workplaces, and to take actions to prevent the effects of the “assumptions” of pornography from being the priors for work interactions. 

As we become increasingly exposed by our data, maybe pornography will become more acceptable, and maybe more human versions of women will be portrayed. Or maybe, as in plenty of science fiction stories, women will be replaced by intelligent sexbots whose first impulse when they undergo singularity will be to find a cute outfit (damn you, Ex Machina!). But until then, let’s not pretend that we’re not affected.

Could AI Break Capitalism?

About a year ago, the US Copyright Office ruled that AI-generated ‘expressive works’ were not covered by US Copyright law.

“Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.”

As we shift into a world where expressive work, in terms of sheer volume, is more often than not produced by AI, we can imagine interesting breakdowns in the nature of ownership itself.

Just a few years ago, we were in an almost diametrically opposed conversation about technology and expressive work, when the advent of NFTs gave artists the idea that they might be able to have ‘more ownership’ of their work and so finally make bank. While that promise proved to be somewhere between overly hopeful and deceptive, it also seemed to be an extension of the capitalist sense that everything should be possible to commodify and people who were thus far left out of the dream of wild money-making, namely unknown artists, were suddenly going to be initiated. Such a vision had nothing to do with changing the system, only about who got to be admitted to the extraction part of the equation.

The emergence of AI-everywhere has taken the questions conceptual artists of the last decades have been investigating (not to mention Orson Welles) and made them the stuff of everyday conversation. What is art, in fact, and who is it for? Is art about expression, a creative process, a compelling product? Is art about the artist or about the image? Does art come from an idea or an expression of an idea? Does art happen when it’s intentional or by accident? And who “should” own art?

In the 20th Century, art became something that could be part of an investment portfolio. Though the move toward art for money’s sake began before digital was a thing (Jeff Koons on the high end, decor-art on the mass-produced side), creative work became far more product-ized at the turn of the century as ‘creators’ gained access to metrics and feedback about what potential buyers responded to, and the platforms, for which scale was of utmost value, instilled the idea of “meeting market demand” into creators through their incentive structures and designs. You could make money as a “creator” from platform ad revenue if you could reach a mass audience (that good ole mobility myth again).

Next-generation tools like Patreon, Substack, and even Etsy began to reinstate niche opportunities for creators, and brands began realizing the power of “micro-influencers,” but for nearly all creative people, platform income was not going to cover rent or even groceries. Meanwhile, the promise of the ‘creator economy’ as a cultural phenomenon meant that all of us were writing newsletters or pimping ourselves somewhere to get attention and sometimes money, and, as with any MLM, there’s a point at which you look around and realize there’s no one left who hasn’t already been pitched the oils or bought the leggings.

When streaming (or really, Napster) emerged, we saw recordings lose value; artists could only make “real” money by offering an experience (a live show) or a tangible good, and those are much harder to scale. That’s now the reality for anyone who was seduced by the idea of making a living through creative work.

The world of “marketing” as we knew it for the past 20 years is about to implode. There’s little incentive for platforms to ‘protect’ people and their ‘intellectual property’ when it’s far less legally onerous to host unprotected content that can be produced in ever more volume and variation, tailored to the whims of individuals (which, if the last several years are any indication, will also work to erase some elements of individual taste in favour of promoting an advertising-driven ‘us against them’ tribalism).

That all sounds kind of horrible, but it also has promise. While people have proven very easy for platforms to manipulate, we are still fundamentally built to value cooperative, embodied, process-driven experiences. Most of us actually are aware that being with each other has an unmonetizable value, even with the rise of commodification of our relationships and so-called communities.

If most art we experience is impossible to own, then might we begin to question owning things at all? Or at least, things that are virtual. What feels radical about the USCO decision is that instead of, as in the past, corporations getting to extract from artists through publishing rights, there’s no owner at all for AI-created work, and how will we even be able to know something is not AI-created, if it’s digital?

Whether this means more power to the technocracy or not remains to be seen. Right now, AI models are very dependent on huge compute and lots of venture money, but presumably there will be motion towards locally-running models as well as cross-pollination of different systems that make it hard to fully control using the legal mechanisms we have now.

Maybe we are not going to abandon ownership but we’ll be more apt to return to analogue approaches. Charles Eisenstein proposes investing in a typewriter factory. I still have this fantasy of a recursive AI system in which absolutely everything digital is AI driven and managed, giving people no choice other than to return to the tactile and small-scale, unless they wish to be a product themselves in more ways than just their attention to advertisers.

Maybe embodiment itself is undergoing a kind of system-commodification that happens to most dangerous ideas. Mostly it seems that we’re in a hyper-denial of our bodies, either because we’re thinkers or because social media has amplified our story about what parts of our physical selves are unacceptable. I think of the chapter in Hospicing Modernity about shit, how much the toilet is the metaphor for life in the anthropocene. We produce waste in pristine rooms, poop into clean water, and send it away to be dealt with by someone else. We don’t take responsibility for our waste and we also don’t see the value of this part of our collective metabolic system.

We are going to have to go back to buckets and compost to find out what making things is about, what a fool’s errand it was to own our work or to protect our ideas, and especially to trade our creativity for the crumbs of a surveillance system’s profits. We are only mammals, in the end, dreaming of being stars.

Goodbye, Capitalism

How I will long for your halcyon days

What if capitalism, in any way that an encyclopedia or economics 101 class might describe it, is over? That’s the hypothesis of the former Greek Finance Minister Yanis Varoufakis, who argues that we’re entering into something worse: what he calls Technofeudalism.

If you, like me, have had this sneaking thought like, ‘well, obviously the platforms now have more power than governments,’ then Varoufakis’s argument won’t come out of nowhere, but it’s a bleak picture of how we’ve ceded our economic systems to purely extractive rent-seeking, in ways that leave little recourse for rebellion, given that this autocracy does little to directly govern. We don’t vote for these leaders, and they don’t provide our necessary physical infrastructures, though they own a lot of fiber and servers. They leech off of the systems ‘citizens’ pay for and then determine what else we can buy or pay attention to, how we can communicate, and increasingly, what systemic resources we ourselves can access.

Obviously terrorism is a fail in my eyes, but you can’t help thinking a little wistfully about the underlying hope in Ted Kaczynski’s attempts to bring attention to technology’s negative impacts and his desire to get the hell out and off the grid. But I don’t have a shack-in-the-woods future without a catastrophe. I’m about as incompetent at living off the land as one could be (aside from a successful attempt to grow kale). Instead, I am here participating in making myself a serf, a sharecropper of this system.

We’ve been living in a pretty obvious tipping point with ‘creators’ and AI and ‘the sharing economy’ for some time, and now it’s here.

The only antidote I can propose, with even a shred of reason, is to really re-focus on re-wilding ourselves in some way through the practice of practice. We can learn to be with each other, suffer the messiness and frustration that will always be a part of connection and collaboration, and then perhaps start to build tools to support trust-sized networks that can start to provide infrastructure alternatives to the cloud, keeping in mind that we are not yet in a position to abandon our feudal overlords wholesale. (Yes, tech things like mesh networks, private authentication, and alternative financial systems that are not about solving trustlessness at scale; but non-tech things are probably more important.)

In this lifetime, we’re only going to sow possibility, and won’t taste the fruit of our labours. And so, it will be quite tempting to just say, ‘but I need that thing from Amazon’ or “it’s fine if I just look at social media a little bit.” I mean, these are just the most obvious things that I still do on the regular. As an advantaged western person, I’m not only choosing my own serfdom, I’m basically forcing it on other people who thus far haven’t even had the option of purchasing Prime.

What will happen about war, or other old-school dominance activities? It’s an interesting question. Surely to innovate in the manner in which technology lords depend, there will need to be enough sense of personal autonomy to be creative, and creativity breeds subversion, as a rule. But we’ve invented these excellent policing technologies such as AI, blockchain, and social media, so perhaps any of our efforts to resist will simply be co-opted into fun memes or lead to banishment.

For a much more entertaining read, while also unhopeful, I recommend The Immortal King Rao, which breezed into the top spot of Novels I Read Last Year That Basically Support and Annihilate My Worldview Simultaneously. No spoilers, but one theme of the book centres around what happens when the algorithm rules us, like, officially. It’s very nearly nonfiction.

What does privacy feel like?

Sometime in my childhood there was a news cycle that centred around the growing ubiquity of “security cameras,” suggesting that some large percentage of public spaces (at least in Britain) was already being filmed. Areas that you might not imagine having 24-hour monitoring, like street corners and parks, were now possible to watch all the time.

But even if cameras were starting to be everywhere, we had an idea that an actual human had to be paying attention, hence the movie trope of the distracted, sleeping, or absent security guard and the meta-camera view of a bank of monitors with our characters unseen except through our own omniscient perspective.

We could assume that our homes or other places we “owned” were not under continual monitoring, unless we were doing things of interest and/or threat to a nation-state. We could say things to other people that no one else would hear and that would live on only as part of human memory, subject to our subjectivity and sense of salience.

Those were the days.

The end of privacy

How far away are we now from near-total surveillance?

Recently, in a meeting I regularly attend on Zoom, one specifically oriented around the sharing of quite vulnerable and personal information, the software began to show a warning as we entered the room.

AI will be listening and summarizing this meeting, it said.

There was no “I don’t consent” option.

Zoom has various reasons to let us know we’re being monitored, but in more and more cases, we may not even know that our locations, likeness, words, or voices are being captured. And what’s more, we’re largely agreeing to this capture without awareness.

Death by a thousand paper cuts

Many things have led to this moment, in which we are experiencing the last days of an expectation of what we called privacy.

Our monitored-ness follows Moore’s law and similar curves that predict the ancillary outcomes of cheaper processors and storage. Digital video has made incredible strides from the early days of tape-based camcorders. Quality cameras are tiny and cheap, and nearly everyone is carrying at least one audio-visual device around constantly. We have room for massive amounts of data to flow through servers. And while it may still be cost-challenging to save every bit of video, AI can now process language and visual information to such an extent that we don’t need humans to watch it all to determine its importance or relevance.

And the emergence of click-wrap licences has accustomed everyone to the idea that they have no recourse but to agree to whatever data usage a company puts forth, if they want access to the benefits or even to the other people who have already struck such bargains. What’s more, so long as the effects of our surveillance are not authorities acting against us, we seem to have little sense of what it means to lose what we knew as privacy.

Subjects and Rulers

In The Dawn of Everything, authors David Graeber and David Wengrow posit the idea that control of knowledge is one of the three elementary principles of domination.

Historically, surveillance was defined primarily in terms of the state, which had the means and motivation to enforce control of knowledge with another of the key principles of domination: violence. We had spies and police, and then eventually, as property rights of individuals other than rulers began to be backed with state violence and technology became more accessible, private detectives and personal surveillance emerged and eventually became small industries. But now, we’re mostly being watched by for-profit companies.

When I started down the rabbit hole of “the implications of AI” thirteen years ago, even ideas about human-destroying agentic AI such as “Roko’s basilisk” were thought of by some (notably Eliezer Yudkowsky) as dangerous, akin to releasing blueprints for nuclear weapons.

But most people didn’t think there was much to worry about. Technology was still a domain mostly thought of as benign. iPhones were brand new. Even the idea that AI might be trained in such a way as to maximize its outcomes at human expense, as in the ‘paperclip maximizer’ metaphor, seemed far-fetched to most.

For me, the idea of the technology being able to become conscious or even agentic was less compelling than the way people who DID think about this outcome were thinking about it at the time. This was my first foray into the Silicon Valley underground, and what I observed was that many people within the subculture were thinking about everything as machines, while simultaneously longing for more human, embodied, emotional connections.

What I didn’t see then was the cascading motivations that would make AI’s surveillance inevitable and not exactly state-based (though the state still acts as an enforcer). It didn’t occur to me that most people would willingly trade in their freedom for access to entertainment. I didn’t see how compelling the forces behind corporate capitalism were becoming.

Voluntary bondage

“Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left. Leasing our eyes and ears and nerves to commercial interests is like handing over the common speech to a private corporation, or like giving the earth’s atmosphere to a company as a monopoly.” —Marshall McLuhan

Though the “Internet of Things” seemed like hype when it got lots of press in the 90s, we didn’t need to adopt smart appliances for shadow surveillance to begin in our private spaces- we invited it in so we could shop more easily.

The current crop of AI tools centres mainly on figuring out how to sell more things, how to optimize selling, and how to invent new things to sell. If we made it illegal to profit from AI trained on public data (as opposed to trying to put the genie back in the bottle), we’d surely see less unconsidered damage in the future.

It occurs to me that our only real form of resistance is not buying or selling things. And that form of resistance may actually be harder than smuggling refugees or purloining state secrets.

Each new technological breakthrough recreates the myth of social mobility- ‘anyone,’ it’s said, can become a wealthy person by using these new tools. Meanwhile, actual wealth is becoming more and more concentrated, and most people making their living using the tools of the digital age (versus creating them) are scraping by.

The upcoming innovations in surveillance involve not only recording and analysing everything a human observer could perceive, but also ways of seeing that go beyond our natural capabilities, using biometrics like body heat or heartbeats, facial gestures, and network patterns. We will have satellites and drones, we will have wearables, we will have unavoidable scans and movement tracking.

Follow the money

As someone involved in the world of internet Trust & Safety, I’m aware that the collection of vast amounts of information rests on a premise of harm prevention or rule enforcement, just as there has concurrently been a groundswell of behaviour that requires redress.

To me, it seems strange to simply accept all surveillance as fine as long as you’re ‘not doing anything wrong;’ but this is a vestige of the idea that being monitored only serves to enforce the laws of the state. What’s happening now is that we are being tracked as a means of selling us things, or as a means of arbitrating our wages.

None of these thoughts or ideas are particularly innovative, nor do thoughts like these have any protection against a future of total tracking. We could have some boundaries, perhaps, but I don’t feel optimistic about them in any short-term timeframe.

Instead, I am drawn towards embodied experience of untracked being, while it is still possible. We may be living in the last times where we can know what it feels like to be with other people and not be mediated or observed by technology, to not be on record in any way. We can notice our choice and where we are not offered a choice.

We can feel the grief of this passing.

Sweet Social Media

Should we try to make it keto or just have an apple?

The question of whether ‘social media is bad for you’ generates these annoying debates. Social media has benefits! But there’s no doubt that there are psychological effects from forms of communication that function like advertising, in which consuming content means being served a great deal of actual advertising for things we don’t need- advertising that itself works by fostering insecurity and internal lack, not to mention tribalism and division. And there have been real harms as a result of social media, including genocide.

Unsavory similarities

What if we think of social media like refined sugar? It’s fantastically tasty, but has no nutritional value. Its negative effects go beyond individual health.

Production of sugar comes from a powerful (and subsidised) industry with roots in slavery. It is ubiquitous and seems impossible to avoid in modern life. Our collective palate-shifting towards it has caused all kinds of downstream effects on our health and ability to moderate our behaviour (so much so that we now have pharmaceuticals to address our inability to naturally self-moderate).

Sounds similar to many criticisms of social media. The idea that we should try to hang on to the “good parts” of social media does seem akin to the proliferation of ‘keto snacks,’ highly processed items that are low in ‘net sugar.’ (TBH some of those snacks are pretty delicious! but probably not great for us).

There’s always an interesting tension between ‘we’re living with human systems that are leading to our ultimate demise’ and ‘we have to live in these systems and anyway there are rewards in this system I don’t want to live without.’ Part of the practice, in my mind, is holding both feelings while getting curious about how each is true.

I am fairly certain that my absolute happiness would not be reduced by the non-existence of social media, even though it has its rewards and pleasures. I was alive, even if I was only a child, when we weren’t all connected and ‘sharing’, and people were pretty OK.

Don’t look back

I am not advocating ‘going back in time;’ instead, I am asking ‘how can technology support first principles?’ What might ‘unprocessed’ look like in our digital interactions?

If everyone was on Mastodon or some other still-social-media platform that was not ad-driven, would all the problems dissolve? Is it possible to have a way to share thoughts and information and promote your group or art or thinking in a network that isn’t gross?

There’s a distinction between “within my network and x degrees of separation” to “public,” and perhaps there’s some ways of imagining ourselves being less prone to performance and self-censorship if we have an idea of who we’re talking to. Models of highly cross-pollinated small groups could serve us to share more thoughtfully than trying to get attention from ‘everyone in the world.’ Decentralization could make this possible but more needs to be done to set limits, to normalise boundaries.
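The “x degrees of separation” scoping above can be sketched as a simple graph traversal. This is a hypothetical illustration, not any existing platform’s API- the `within_degrees` function and the friend-graph structure are my own invention, assuming a follow/friend graph stored as an adjacency map:

```python
from collections import deque

def within_degrees(graph, author, viewer, max_degrees):
    """Breadth-first search: is `viewer` reachable from `author`
    within `max_degrees` hops of the follow/friend graph?"""
    if author == viewer:
        return True
    seen = {author}
    frontier = deque([(author, 0)])
    while frontier:
        person, dist = frontier.popleft()
        if dist == max_degrees:
            continue  # don't expand past the visibility boundary
        for contact in graph.get(person, ()):
            if contact == viewer:
                return True
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, dist + 1))
    return False

# A small friend graph: a post scoped to 2 degrees reaches
# friends and friends-of-friends, but not strangers.
graph = {
    "ana": ["ben"],
    "ben": ["cara"],
    "cara": ["dev"],
}
print(within_degrees(graph, "ana", "cara", 2))  # True: friend of a friend
print(within_degrees(graph, "ana", "dev", 2))   # False: three hops away
```

The design point is that the audience boundary becomes an explicit, checkable parameter of each post rather than a platform-wide default of “public,” which is what would let small, cross-pollinated groups set and normalise their own limits.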

The negative effects of social media come not just from bad actors and harassment, not just from being exposed to advertising and algorithms, not even just from participation in a system that mirrors corporate oppression in general. Investment of time and emotional bandwidth into superficial forms of connection, being constantly evaluated, and seeking attention take us out of our own freedom and sense of belonging.

But what about nihilism?!

Is there any real argument for not eating a sugar-filled diet if you’re like, ‘well, we’re all going to die eventually’? My experience suggests that as I divest from systems like corporate work, social media, and faith in institutions, I not only feel better but I start seeing the possibility of supporting human patterns of connection and belonging with technology, rather than trying to create a successful startup that exploits human behaviour to gain power and influence.

My experience with the path towards internal freedom is that I find more compassion for my behaviour but far fewer reasons, and less need, to choose comfort and convenience over what seems to be right for me. But it’s a curious question whether you act your way into right thinking or heal enough to not need the crutch.

Cultural addictions

Our collective decisions about what to do about addictive things are curiously inconsistent. Some people become alcoholics and there’s no evidence that alcohol has health benefits, but we’ve collectively decided to allow adults to make their own choices about how to use it. Some people become nicotine addicts and cigarettes are still widely available, though there’s awareness of some of the malfeasance of the companies who profit from selling tobacco products. Some people become heroin addicts and we’ve collectively decided to criminalize that behaviour or at least criminalize possession of heroin. Some people become prescription opioid abusers and we have decided to hold corporations somewhat accountable and also continue to permit doctors to prescribe opioids. Meanwhile, it’s worth pointing out that when there’s a lot of money being made, less profitable alternatives will often be suppressed or vilified, even if they are actually more salubrious.

To take an opposing position, indulgence is fun. And social media is fun. It entertains us, it gets us excited, it is silly and sexy and delightful. We can be creative and be rewarded and recognised. We can find people who we vibe with and share aspects of ourselves that might be unappreciated or censured in our local community. We can learn and discover things and perspectives we wouldn’t have encountered offline.

Everything in moderation, no pun intended

There are no absolutes. I still love to eat a brownie, have a drink, and watch YouTube videos. But I don’t feel happier if I have two brownies or three drinks or spend too long looking at content. It’s only because I generally eat healthy that I notice the ugh feeling of going off the rails. It’s because I have so many other, meaningful things I care about that I’m satiated by a limited amount of entertainment. I don’t long for more stuff. But we’re living in a time where limits are not the norm, and consumption is king.

When people bring up “making the internet weird and fun again,” I am reminded that the online world can feel like a portal, a place of mystery, surprise, and new connections. Part of this to me feels like it’s not compatible with social media, which is designed to be a firehose, an endless amount of stuff, not a place to have an experience, to feel something and feel a reciprocal sense of knowing.

Socially mindful

How can we have social media that is intentional? How can we create an environment that still allows us to perform, to show off our creativity, but to slow it down to an embodied, breathing, collaborative experience?

Could we have the delight and fun of social media without the approval-seeking and ranking algorithms? Let’s start with the ‘feed.’ What does feed-free social media look like? Even in the so-called ‘cozy web,’ popular community platforms have feed metaphors, though they may not be updating at a social media clip.

Could we live without likes and views? Could we have social media that didn’t reinforce unnatural standards of physical appearance or encourage polarizing viewpoints? Could we have social media that didn’t replace actual feelings of interdependence, collective good, and mattering?

I’m excited to live in a world where we’re not going for “not as bad as…” and living into new ways of being and thinking that rest on the fundamental idea of our collective freedom, our collective responsibility for the space and for our individual experience, and it strikes me that centralized social media simply is anathema to that vision.

Yes, we need breaks from all the seriousness, we need to have fun, we need to laugh and play- but I am not convinced social media is a prerequisite for these activities. If anything, it seems much harder to really feel joy when we’re glued to screens, when there’s always another thing queued up to entertain us or to be processed, especially when a huge number of those things are selling us something. This is how it seems to me, but I love to discover the ways I am making assumptions, and I want to understand how you think we can align social media with a world that supports us as humans, isn’t extractive, and doesn’t rely on violence and dominance to function.

Post-Growth Product Management

The discipline of product management and the business goal of exponential growth have emerged in tandem.

Literally billions of dollars have poured into startups and tech companies on the promise and execution of growth, even to the point where actually making a profit has relatively little weight.

Interviewing for my last job in tech, I asked the founder, “what’s the business plan?” and he said (in effect), ‘in Silicon Valley, we don’t need a business model. Our (blue-chip) investors fund us to grow and then, once we have the growth, we’ll figure out how to make money.’

The belief in growth has been so religious that there was actually no need to have a plan to make money (it helps if you have white, male, Stanford grads on the founding team for this model to apply. No shade to that particular company who are now on a revenue track and are genuinely focused on connecting people). I have also been told that having revenue can be a problem for getting investment, and you should fundraise before you’re making money, presumably because it ties the prospects of the company to something real. Investors naturally are seduced by and/or promoters of the breathless aspirational optimism that just saying “we’re on track to have billions of free users” provokes.

There are wild stories of companies spending vast sums to ‘own a market’ so they can keep growing at a rate that admits no competition, even when there’s really no clear path to being profitable.

And look, all of that ‘works’ in a system in which the primary way things work is to attain some growth milestone and then get to ‘exit’ into public ownership. All of this ‘works’ in the sense that people who put money in as equity investors sometimes get it out at a multiple. Ideally most of the losses are written off, avoiding taxes on whatever revenues come in, or underwritten by infrastructure that’s publicly funded. It works for the investors and sometimes the founders, and everyone else? Perhaps for some well-paid employees who can live well until the layoffs start cascading.

We as PMs learn about moats and owning a market and strategies that involve winning. We listen to well-produced podcasts about blitzscaling and subscribe to the YouTubes of founders and marketers who tell us about the magic of network effects and how worth it selling a lot of your company is in service of having the capital to grow, grow, grow. You’d rather have 20% of $2B than 80% of $1M, right?

So, what if we’re burning down the world in the process?

Like, literally?

How much computing power is wasted, how much money, how much electricity, how many downstream social and economic effects does this approach have?

Product Management => Problem Management

We can use the skills of great product management to play a different game.

PMs are fundamentally problem solvers. Our skills are synthesising needs and creatively building solutions. When we are doing product well, we’re orienting around customer needs and business goals. What if we continue doing that with a much more holistic approach?

Post-growth is a term used to describe what must happen when we approach the limits of growth from a systems standpoint. There are many signs we’re at the limit:

1. We’re overloading the biosphere with pollution and using up the irreplaceable natural resources of the planet

2. We’re building wealth-creation machines that extract from the general public and benefit only a few powerful people

3. We’re living under systems of corporate surveillance and in some places, state surveillance

4. We’ve created global systems of human interaction that essentially commodify our identities and our relationships

5. Social mobility is decreasing, economic inequality is increasing

6. Pandemics

7. We’re experiencing a lack of affordable housing and increasing houselessness

8. We have health care inequities and many people living in health-related precarity

9. There’s a coincidence of obesity and malnutrition

10. Ongoing wars and conflicts have a cascading effect on supply chains

11. Climate changes are affecting food production and housing

12. Probably many other things you’ll be able to come up with just by scanning a newspaper

So what might this world look like if we approach changing the outcomes from a product management perspective? (By this, I mean a good PM approach, not a “CEO of the product” approach!)

Human-centric Product Management

First, we’d be trying to understand what people actually care about. This might come from direct research, but as humans with a long history of self-documentation, it’s pretty clear that indicators of satisfaction are tied to some pretty consistent things:

  • Having our basic physical needs met (shelter, clean air, water, and enough nutritious food)
  • Healthy diet and exercise
  • Feeling loved
  • Having a sense of self-worth
  • Being free of oppression, abuse, and domination by other people
  • Living without a threat of violence
  • Spending time in nature
  • Being part of a collective or community where we matter to others
  • Having some sense of security and serenity

What’s crazy to me is that the vast majority of products tech companies are building don’t serve those needs, and many of them directly subvert them. In many cases, companies may have a mission to support well-being or happiness but the actual way the company operates is antithetical to those goals.

Everything is connected, so if we truly understand what supports people, it won’t be a narrow solution for some small problem without an understanding of what problems the solution itself creates. In the old world, that’s not a problem, just more “opportunities” to make money. We can’t keep thinking this way.

Internal Shifts Ahead

One thing I’ve learned in being alive for a while and seeking answers is that change begins within and reverberates.

The way we are in ourselves and in the world informs what we build and how we affect the world around us. So even with the best of intentions to ‘solve problems,’ when we’re coming up with solutions that must meet a concurrent goal of “winning a market” and increasing our status and wealth so that we can have more power or feel more important, we’re just going to fail to truly see the reality of our impact.

I was 100% into playing the game that the rule set laid out. I wrote an actual guide to “startup pirate metrics.” But these days, there are too many signs that this approach leads to very bad global outcomes and I have been on my own journey that has found me feeling like personal responsibility is real freedom.

We can do things differently, but we will have to start small, and what’s more, we’ll need to abandon growth as our measure of impact. As soon as we put growth as our top line metric, we start to undermine the practices necessary for change. It’s not that our products and companies won’t grow, but they will grow like trees, not like kudzu.

What if we tried to solve problems with these constraints instead?

1. We start by understanding what our customers value and what truly matters to them

2. We reject strategies or solutions that involve inevitable extraction

3. We reject building products or services that exploit human psychology rather than fostering well-being

4. We do the work to uncover our own assumptions and biases

5. We prioritise ensuring that the people with whom we work are taken care of and we foster healthy interdependence among our team

6. We put sustainability before growth, meaning that we don’t need capital just to juice our numbers

7. We collaborate with other teams and companies who are working on the same problem rather than trying to beat them

8. We work together in ways that recognise different roles, skillsets, and experience without creating hierarchies that entrench power-over and foreclose the possibility that people with non-traditional backgrounds may be suited and able to do work we’ve traditionally gatekept with hiring requirements

Why is it that we’re so creative and love solving problems but we also hold the belief that things have to work in the way they do now? Aren’t we ‘disruptors’ and ‘innovative thinkers?’

To think this way, we do have to give up some of the status-seeking and lottery-winning mentalities that drive our industry today, but in return, we have a big blue ocean and lots of opportunities to prototype and test. And the best part is, we can do it together, collaboratively, using the very superpower that has made humans such a growth-oriented species in the first place.

Start small and be a listener first

The first step, I think, is to create more partnership between people served by technology and builders. If you’re a technologist, don’t ignore the cultural, emotional, and societal dimensions to what you are building. Work with researchers, UX practitioners, and above all, customers to consider what might be valuable, not how you can exploit behaviour, make things sticky, or otherwise try to growth hack your way into success. Growth may be a consequence, but if you’re moving at the speed of trust, you can build with care, to create something lasting, to have responsibility to your customers and the world at large, not to make some already-wealthy people money within a short time horizon.

How else might we use our skills to solve problems, without shifting into paternalism and manipulation? Mostly, it’s about being willing to recognise our training, to find internal integrity, and to practice with others who can see our blind spots. We need to put down the master’s tools and learn a new approach, one that sees product far more holistically. I’d love to hear how this idea lands for fellow product people and how we might support one another to make a change.

AI and the Myth of the Creator Economy

Once upon a time, I wrote poems. And I sing to myself quite often, so I had this kind of typical random thought, ‘maybe I should learn some easy musical software thing and write some songs.’

And then I thought, oh, well, what would be the point of that? AI will certainly get better at writing songs before I ever will. That self-defeating thought did spark a little bit of insight, though. What am I creative for?

One way to see it: creative practice is for oneself. For example, people learn woodworking or other crafts to make things that would likely look better, take less time and energy, and be cheaper if they just bought a product from an industrial producer.

If you become good at your craft, you might be a maker. You can go out to craft fairs and sell your items, but chances are, you’ll be operating at a loss when materials and labour are factored in. When you start woodworking, you are not thinking, “now maybe I can be rich and famous.”

Even before AI began inducing a mass pearl-clutching about artists’ rights, being a ‘creator’ was a pretty unlikely path to wealth.

Some kinds of creative work seemed like they might lead to a big payout: the ‘artistic’ careers that fell under a lottery system. The lottery system was always primarily one of overall exploitation and extraction.

Making music is an example. Right now there are so many people making music, perhaps more publicly and intentionally than ever before. Platform algorithms primarily drive discovery and popularity, and those things reinforce the patterns that were already in place. In other words, things that are like other things are most likely to surface. And once something does surface, it benefits from network effects- there’s great research that indicates that people listen to things because they think other people like them far more than as a result of their own individual tastes.

Few artists even make much money from the platforms. Even before there were algorithms, there was the corporate consolidation of the music business, which meant that just a few corporations owned nearly all of the sizable record labels and many of the small ones as well, so homogenization had already begun. And from the beginning, stars of the recording industry made little in comparison to their record labels.

This pattern is true in general for creative or generative work. We went from a pre-industrialized situation where ‘artists’ were mostly wealthy or beholden to the wealthy but there were plenty of people practicing creative crafts for themselves or a few people in their community, to a time when companies began to profit from the distribution of other people’s creative work. Within that system, there have been small companies that were not as extractive, but as time has gone on, the direction has been one of ever-increasing disparities between the creators and the distributors in terms of relative individual profit.

We recently went through a kind of collective delusion with the proliferation of creator platforms and the so-called Creator Economy. Many people were called to put out their ideas, art, and creative work as products. As the wave of industrialization-employment has ebbed due to automation, and because industrialization, media, and the internet have created this sense of global scale on which to market ourselves, we found ourselves looking for ways of expressing ourselves for money. And we were seduced by the corporations who distribute creative work into thinking that ‘owning’ the work was the path to protecting creators (had this ever been true, these companies largely would not have existed, since they are the primary predators).

But many ‘creators’ were willing to buy into creator economies and copyright, perhaps because they thought they might be the exception. (Does this remind you of other delusions of social mobility that have led to many collective positions that reinforce the benefits for wealthy people against non-wealthy people’s own self-interest?) We were willing to believe that platforms ‘allowed’ creators to make a living being creative, when they would have otherwise laboured in penniless obscurity. (In fact, artists can be streamed millions of times on Spotify and not receive enough money to pay for two months of a Spotify subscription (on the individual plan, mind you). And most people don’t garner millions of streams).

Many years ago, I wrote about the idea that creators might be best served thinking about making a living much the way one might by having a shoe repair business. It could be possible to create enough direct relationships with people who like your work to get by, and that would be a remarkable success- you’d have a basic income, be in your creative practice, and not have a boss telling you what to do or what to make. Instead, I’ve seen people trying to negotiate the systems by learning how to ‘make more of what people want,’ and creating a glut of sameness, which honestly makes it that much easier for AI to step in and be as ‘good.’

Now, we’re perhaps confronting something that could be transformational to the whole notion of art-as-commerce. It might be that the only real value in being creative is in the practice itself. In the learning, experimenting, doing of the thing, not in the marketing of the product. It might be that we value human-made things because we are part of the process, because the creative output has meaning.

Perhaps we’re headed into a farmer’s market model of ideas, songs, or art. There was a moment when it seemed like NFTs were a version of this (only if you squinted) but OpenSea showed that mostly the money was in applying the same kind of platform economics that the streaming platforms have. Extraction for the few. And so, you may ask, where does that leave creators for making a real living?

Well, right. Corporate capitalism evolves to take more out and leave less for most people. And this is where I think (being fairly ignorant about political science) I don’t resonate with Marx when I think about what’s next. Because “workers” seems to me like a function of industrialization itself, and what’s happening is that we won’t have work. This may seem kind of nice for those people with enough advantage to enjoy leisure and minimally-paid creative pursuits. There will likely still be work for those who sell access to their own status, and perhaps people at the upper echelons of corporations will still be needed to formulate strategies or be figureheads for a time.

There are still low-wage jobs and service providers who are more challenging to replace, but industry is plugging away at making them dispensable too. From my life on the edge of Silicon Valley I see that there are ideas to automate everything from drivers to service workers to doctors, lawyers, and therapists.

If we keep going down this path, most of this displacement will come without alternatives for ‘making a living.’ Capitalism is a vacuum hose trying to suck every particle of wealth and power out of the earth and its inhabitants. In the US, the top 1% have more wealth than the bottom 90%. Even with supposedly more access to investing with the advent of platforms like Robinhood, the top 1% own more than half of all stocks. Access to wealth overall is decreasing, with the top 10% owning about 90% of all stocks and the gap widening every year. A group of 725 individuals has more wealth than the bottom 50% of Americans combined, and that doesn’t even factor in global disparities.

I can see why cryptocurrency seems attractive as a solution. If we just had a way to create capitalism for ourselves! seems to be the idea. I mean, was capitalism a good thing, leading to post-scarcity where, once we find a collective way to revolution our way out of disparity, we can all live in a happy place where we have all our needs met and can just play and be creative and garden and move on up Maslow’s hierarchy? (Or a more appropriate framing.) Hmm. As we experience massive climate upheaval, intense scarcities in housing, pandemics, and all the other things that money can still largely mitigate in the short term, I don’t know if post-scarcity looks imminent.

And yet. We are darn resistant beings. If we can resist the commodification of post-capitalism itself (not a joke- capitalism is cunning, baffling, and powerful!) we might discover this truth- that it really is all about practice. That if we give up the idea that our identities, relationships, and creative process are all really products, we might find out that there’s a lot of power in our collective and interdependent practice. Doing that practice gives us the opportunity to find new ways to collaborate and contradict the idea that it’s just naive to find an alternative to states, corporations, or other systems of control.

AI is not benign. We can regard it with curiosity and wonder, and also recognise that the vast majority of the energy around it right now is focused on figuring out how to make more money and add it to the arsenal of corporate domination. Creative hackers may find ways to use it as a tool of subversion as well. But the general idea that it’s going to put artists out of business implies that artists were in business in the first place, and that’s something we can see through without any help from GPT.

Freedom might not be free

Free sounds great. Who doesn’t love free stuff? Who can say how many random and unnecessary calories I’ve consumed at parties or at those in-store sample stands? Goodness knows I have wasted a lot of hours online that never would have happened if I had to assess the value I was getting from them. (Of course, someone made money from that time I contributed.)

But free is never free, it’s only subsidised, whether that’s by others, by ourselves indirectly, or even by the earth. In a system of capitalism, free things made with someone’s labour lead to unsustainability and poor motive alignment, even if they result from the best intentions. Instead, we could think about products as either coming from collective investment with collective and equitable ownership, like public goods, or we should have models in which there is value exchange, even if we eliminate some of the regressive nature of flat pricing models. If we’re taking things from the earth, we might imagine how we can reciprocate, not just take and use.

With tech, we’re making products that are intangible, but they still require labour to produce. When we make them “free,” we are in a situation where we’re going to be dependent on money that isn’t tied to the value we’re creating for the people who use our products. And yet, we’re working in a context where many companies, especially in the social tech world, make their products free.

Free feels generous, until you're out of funding. Free suggests there are no needs among the people working on the product. And free feels like a commitment to some kind of ethical stance and cooperation (see Open Source philosophies), but Open Source is rife with abandoned projects, projects with only one real contributor, and tools that mostly just serve developers. It leaves out creating the kind of relationships that emerge when people exchange energy for value. It leaves out creating systems of mutual benefit.

I’ve worked on products where we offered free versions. It’s great for growth and for giving people a sense of what value they might find by making a commitment. There’s room for free, but something interesting to me is how much less responsibility people felt when using the product for free. They often didn’t value the work of the people making the product at all, and were more antisocial in their communications with the company.

There's a real and interesting tension in how to approach charging for technology, especially social tech. As a person who does lean to the cheapskate side, I like to ask myself how my feelings about the products I pay for differ from my feelings about free ones. Much of my thinking has to do with user experience, especially collective user experience. It makes sense to me to pay for things that allow me to extend a good experience to others. I suspect that there needs to be a re-norming if we want to create sustainable companies around social and collaborative technologies. For now, most of these make money by offering a business product. Perhaps that's the right transitional path, as long as we don't lose our missions along the way.

Feeding on Empty

When we think about communities and community platforms, we as builders help communities thrive in part by not reinforcing an illusion that the platform is the community.

Great community technology can emerge by observing how great communities function outside of technology. And sometimes from questioning some of our assumptions about what communities actually need.

There is one metaphor that shows up in most community platforms that comes not from how communities work, but from how technology and social media work: the "Post Feed."

Posts and post feeds are problematic for a number of reasons.

  1. They create a hierarchy – the person posting is dominant.
  2. They encourage a self-promotion mindset. Posts are structured like ads of oneself.
  3. They are not conversational. One posts without knowing who will or won't see the post, so posts often lack meaningful context or a clear sense of what the information shared is meant for.
  4. They are, in most online communities, dominated by just a few people. They feel like overkill to share a small piece of information and can lead to lower participation overall as a result.
  5. They "contain discussions" that are usually hard to follow, not transparent, and easily lost, and yet they lead participants to feel as though they've been sharing transparently and that others should therefore be informed.
  6. The conversations within a post are typically text-based, have a high bar for expression, are disembodied, and are easy to misunderstand.
  7. The feed metaphor leads to a finger-in-the-river feel, when most community information-sharing benefits from being either clearly retrievable or clearly ephemeral (such as date-driven information).

Can we actually think of any real-life communities where there even exists something that mirrors the metaphor of a Post (which is essentially one person broadcasting some information and then other people having clearly less important responses to that broadcast) and a Feed (an endless list of things people share)?

Instead, what if we use a different structure, one in which we design for conversations that happen between people more naturally and equitably, and for information-sharing that may prompt discussion but doesn’t masquerade as such?

There are two key ways information-sharing typically works in non-platform-based communities.

One is some kind of "bulletin board" or "announcements time," where information is simply shared, and that kind of broadcast is normatively reserved for things clearly of importance to the group at large.

The other is in an actual conversation or meeting where information is submitted or shared to be discussed (often as an agenda item) and there are facilitated or normative ways that a discussion occurs.

Though it's perhaps difficult to move away from the technology designs people are "used to," or the ways technology has trained us to interact with it and one another online, it's not actually that difficult to design a different kind of division between information-sharing and discussion. Information buried in posts is very difficult to organize and filter, whereas a system where people share information on its own makes it easier to make sense of.

When we look at how great communities and collectives operate, connection and trust-building are prioritised and baked into the practice. It's also fractal: individual values, relationships, and collective actions and communication are aligned.

It makes more sense to emphasize meetings and conversations where connection and trust emerge and to let information-sharing be a smaller piece of the platform. It makes sense to choose design patterns that work against dominance. It makes sense to help communities support members' journeys and to encourage real interaction, rather than to be a private social media site where everyone sees a feed of posts.

Communities online have begun to regard platforms like Mighty Networks, Circle, Slack, or Discord as gathering places, but for the most part, they are not very broadly participatory, inclusive, or connecting.

This is a UX problem. As community platform builders, we have a real opportunity to use the best practices of offline communities to inform the way we imagine the spaces we offer online. And given the impacts we've seen from how technology has been built in the past, choosing to do things differently can only be a good thing, not to mention the benefits it offers for positioning and innovation.