A Cold, Cold Problem

One of my apparent side hobbies is reading startup advice books.

These books follow a predictable formula: know your customer, build cheaply, be a painkiller not a vitamin, raise VC, and grow as though your life depends on it. I’ve been in the startup world for a while and most of these books are pretty successful at describing what someone would do to play the game of startup we’ve seen for the last decade or so. They are all manuals for getting funding and then scaling, without much question about whether either one of those is the ideal path for a business.

I remember when I first got into tech and took a lot of this at face value. I wanted to play the game because I wanted to be a winner. I thought people who were successful at tech startups knew something, and that I needed to learn that thing. I learned a lot about building product, and those lessons have been incredibly valuable, no matter what kind of business you want to apply them to. I’ve learned from some very smart people.

But there are some things these books largely ignore or gloss over. For one thing, they’ll explain how VCs put money into lots of startups but only a few scale, and therefore VC investment is only for products that have huge markets and want to blitzscale. This framing instantly leads all founders playing the game to try to win by going after big markets and articulating solutions broad-based enough to be a big brand someday. Most startups should be doing the opposite: looking for niches and building a viable company. What’s more, getting VC funding is, by the accounts of nearly every founder I’ve met who has done it, somewhere between risky and business-killing. Ceding control of your company to VCs means trying to extract as much value as quickly as possible, not building a sustainable and profitable business.

Even if you wanted to play this game, good luck to 50% of the population. Of the founders funded by VCs last year, fewer than 2% were not men. Imagine the stats for being not male AND not white. Why even bother with the game in the first place?

With the waves of tech layoffs, it seems like perhaps at least a few people will question the very nature of the industry or, I don’t know, corporate-centred capitalism. I’m very curious to see if what comes of that might look different or if it’s just going to be more people trying to follow the playbook.

Back when I got into tech, I heard the disparagement of what were known as “lifestyle companies,” which is what you were by default if you weren’t trying to scale and become a monopoly. It’s kind of like regular business fundamentals are simply ignorable when you’re a ‘disruptor.’ And the IV of VC keeps that story alive.

I started reading Andrew Chen’s The Cold Start Problem recently. There are some interesting insights about network effects in the book, but I have to admit I get hot under the collar every time he explains how Uber did growth. You have to be willing to hustle, you have to do what works even if it doesn’t scale, so you can ruthlessly undercut the existing industry and find a way to incentivize people to exploit themselves on your behalf. THIS IS HOW YOU WIN!

But is it? What would happen if founders played a different game? Can we get out of winner-take-all if we just radically decide to do something different? Collaborate, for example?

Peter Thiel’s famous paean to monopoly thinking, “competition is for losers,” will surely be an epitaph for this age of do-anything-to-win, whether it’s because we burn down the planet or because we learn we actually succeed more sustainably by cooperating, making competition for losers of a different sort.

I’m sure this would have sounded naive to my younger self, but it turns out that when you get on a path to having values and living in integrity, you don’t really give an eff whether people playing the game think you’re an all-star. You’d be surprised how many people actually win by building trust, connection, and products and businesses people care about instead.

I began this post a while back and it’s funny how possibly ‘radical’ or ‘realistic’ I have become since. In the interim, big tech companies have laid off 200,000 people and SVB and other banks have failed. It’s becoming more apparent to me every day that even when a company isn’t in a VC-driven death spiral, tech is largely being built to reinforce systems of extraction that are a death spiral for the whole world, and that tech is being built in a culture with all the hallmarks of white supremacy. It might not be crazy at all to reject the very basis of tech economics, in which foundations and pension funds, through VC, waste so many resources funding companies with a 9/10 failure rate that the only answer, even for most investors, is to find “something that can scale.” The game is not just rigged, it’s a battle royale.

Disrupt LinkedIn

I have something to confess. I like LinkedIn.

I’ve trained that algorithm to deliver the things that make me say, “right on!” My feed, which I also feel fine ignoring for periods of time without any sense that my absence will be noticed, is full of people whose faces I like to see.

But I also hate LinkedIn.

Of course, there’s the fundamental problem of a centralized, traditionally run tech company owning me as much as LinkedIn does. Of my contribution to it profiting someone, of my social relationships being owned. I can “export” a list of people I am connected to, my “data,” but I can’t easily connect LinkedIn to anything that would improve the connections – or so-called connections – I have developed.

Those corporate web 2.0 issues aside, LinkedIn is also kind of the opposite of useful for the very thing I am there for. Every time I am at an event with interesting people, or meet a new person, I “link” with them, only to immediately have zero context about the whole thing. There they are in the collection, but there’s basically no impetus to go further than that, or to be able to use LinkedIn to do the kind of basic follow-up and relationship development that would obviously be necessary for someone to go beyond being a profile to being a colleague. I hate having messages locked away in a UI that leads me to forget I ever spoke to someone. And don’t get me started on the giant misses in other parts of the product. Why are LinkedIn Groups so completely terrible?

Yes, there have been attempts to disrupt LinkedIn before, but most of them are just the same ‘own everyone’s data’ kind of approaches. We are perhaps starting to have enough DeCent tech to do something more exciting? So if that’s what you are building, here are some wish list items:

  1. You have a public profile but you can limit the information you share to classes of people (co-workers, communities, friends) or even to certain people. Over time you can shift the level of sharing based on relationships that develop.
  2. You have a social graph, and when someone looks at your profile while logged into theirs, they can see your mutual connections.
  3. When someone connects with you, you have the opportunity (or even requirement? You choose.) to add context about how you know them. You can add more context as time goes on. Their profile reflects messages you’ve shared as well. This shared context is visible to both people but no one else. Could this be contained in some E2E way? Why not?
  4. There’s an RSS feed associated with you that you can feed your various channels of public streams through, à la ActivityPub, and there’s a way for your contacts to see this as a feed with other people’s content. There’s a transparent algorithm you can adjust to favour certain people, relevant-to-you content, or other preferences.
  5. You can use the platform as a mailing list. Contacts can opt in and then you can message groups of people filtered by interests, locations, etc.
  6. It’s easy to pass along other people’s content as well, so there are still metaphors for responding to and amplifying content (though it could be as a comment attached to the content or as a message only delivered to the person posting). There are ways to take content into a better UI for conversations, ideally. I’m imagining here that we’re really talking about something independent of this particular platform, no need to re-invent every wheel.
  7. There is probably benefit in orienting around profiles as a resume as well as simply a way to connect. LinkedIn has managed to stay less toxic both by avoiding being ad-driven and by being a space that represents you to the working world. But you could make many improvements to the way information is presented: making the resume part more modular so you could choose to emphasize non-job work more prominently, and more easily share your non-job aspects with people you get to know more deeply. The trick is to have that information be yours and not the platform’s. Now we get into identity management, but you know, people are figuring it out, or trying to. Eep.
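
The first wish-list item, audience-scoped sharing, is the kind of thing that’s easy to sketch even if it’s hard to ship. Here’s a minimal, hypothetical model (all names are mine, not any real platform’s API): each profile field carries the audience classes allowed to see it, plus per-person grants and revocations that win over the class rules.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileField:
    value: str
    audiences: set = field(default_factory=set)  # e.g. {"co-workers", "friends", "public"}
    allow: set = field(default_factory=set)      # per-person grants
    deny: set = field(default_factory=set)       # per-person revocations

def visible_to(fields: dict, viewer_id: str, viewer_audiences: set) -> dict:
    """Return only the fields this viewer may see."""
    out = {}
    for name, f in fields.items():
        if viewer_id in f.deny:
            continue  # an explicit revocation beats everything
        if viewer_id in f.allow or "public" in f.audiences or f.audiences & viewer_audiences:
            out[name] = f.value
    return out

profile = {
    "name": ProfileField("Ada", audiences={"public"}),
    "phone": ProfileField("555-0101", audiences={"friends"}),
    "resume": ProfileField("product person", audiences={"co-workers"}, allow={"bob"}),
}

# bob is a friend who has also been granted the resume field individually
print(visible_to(profile, "bob", {"friends"}))
# a stranger sees only the public field
print(visible_to(profile, "eve", set()))
```

Shifting the level of sharing over time is then just moving someone between audience classes or adding a grant, which is exactly the “relationship develops, sharing develops” behaviour the wish list describes.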

If you’re making this, I am excited. As everyone always says to me, “I want to be a beta tester.” I am guessing something like it already is happening. And then perhaps LinkedIn will finally be… left out.

Matters of Trust

How does trust work? It’s multi-faceted.

First, there’s congruence.

You say this is how you are; I see you behaving in ways that reflect that. This isn’t something I want to farm out to technology; it’s too easy to game anything that tries to quantify congruence.

Then, there’s connection.

I can’t trust someone who clearly has indifference or disregard for me. There’s no technology that proxies for this.

Next, we can throw in social context, or transitive trust.

When someone behaves negatively to other people with whom I can identify, I am likely to lose trust. This one is problematic in its nature because sometimes people are stuck in in-group thinking where doing bad things to someone we think of as ‘bad’ may feel correct to us, but I would argue that most of the time, there’s a violence and dominance in that behaviour that also provokes fear, rather than trust. If I see someone punishing someone I don’t like, I may feel like it’s warranted, but I also lose trust in the person performing the action.

We may have become enculturated with a kind of paternalism or patriarchal perspective that leads us to see punishers as protectors. My guess is that at heart, we know punishers operate by invoking fear. First they came for…

We can also consider whether people we trust also trust other people and use their trust as a basis for our own.

Should we assume trust?

There’s a norm I’ve seen many communities and companies trying to establish of “trust until the trust is broken.” On some level, it’s a good strategy, a game theory that works. But it also feels like an approach developed by people who don’t regularly have to watch out for danger. Who don’t experience trust-breaking frequently, even in places where everyone has the best intentions.

To suggest that trust is about some kind of contractual, verifiable, identity-based thing feels like what is broken in the whole system (meaning the drive to quantify everything and extract its value). The way many technologists talk about trust today is basically a paradigm of imperialist, heteronormative, white supremacist patriarchy. (Though I’m not sure those labels are continuing to serve me, they have been a helpful lens.)

Of course, trust is hard. Fundamentally, it starts by being able to trust yourself. It might be that a requirement for this kind of self-trust may be to go through the pain and heartbreak of seeing where one is not trustworthy to oneself. To notice how, to speak for myself, I have carried maladaptive lessons from trauma, how I have tried to avoid feeling shame by coming up with justifications, how I have been unwilling to look at my part in the systems I see as broken.

For communities to build trust, we need to start by creating containers that allow people to self-reflect without judgement. Witnessing this in others turns out to be highly trust-building. Offering welcome and checking our judgement builds trust for ourselves and others. It lets us be vulnerable, it lets us notice when we might want to rush to judgement and sit with that impulse, getting curious about what in ourselves we’re running from.

Trust, Identity, Community

As someone who has been around in the tech freedom space for a while, though always a bit on the fringes (‘the fringe’ is dead center I suppose), I’ve been noodling on the idea of what it might look like to have control of what one shares with websites, apps, platforms, or even other people online.

It’s interesting to me (though not exactly a surprise) that the way so many developers approach the problem orients around as much automation and taking people out of the picture as possible. I read debates about “zero knowledge” that largely focus on whether the mechanisms employed are actually zero knowledge, but what problem is that trying to solve?

There is no doubt that there are certain situations where real anonymity has positive utility, primarily in situations where repressive state surveillance has a role. But the downsides of real anonymity are also real and shouldn’t be glossed over. How can we fight repressive state surveillance without orienting everything we build around that problem?

I’m not just talking about the ways anonymity can facilitate human trafficking, child abuse, or terrorism. Those harms are definitely real, and ideally we do not build technology that facilitates them.

But we have a bigger issue. When we consider what we need to build functional communities, democracies, and relationships, trustless systems are not just counter-productive, they create false ideas of security and safety.

Trust-breaking is not a technical problem, it’s a human problem. As we start to find ourselves in less and less authentically human contexts (interacting with ChatGPT, deepfakes, bots, etc.), we’re in dire need of ways to create trusted systems and identity management that help us verify our mutual humanity and trustworthiness as people.

One idea for this might be identity management that happens within actual human communities, where as someone who knows me, you can verify my identity. This doesn’t require a state-level or sanctioned identity, but it does require people vouching for one another. Presumably there would need to be some threshold for this kind of verification (how many people would it take?) and a complementary technology layer to support the process. We’d need to consider accessibility, but I think the genius part of this kind of scheme is that it requires people to be in relation to one another and that might mean creating new kinds of interpersonal networks to accomplish verification.

Imagine, for example, that you’re unhoused, or living with a disability, or don’t have regular access to a computer. How might a human-trust-building identity system serve these use cases? How could this work in a decentralized way, so that identity could be community-verified for communities you participate in, and proxy-verified by having one community trust another’s verification? Is it necessary to have a universally-verified identity or simply one that allows access to your particular contexts?
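
To make the vouching idea concrete, here’s a rough sketch, with all the details assumed (the threshold of three, the data shapes, the function names are mine): an identity claim counts as community-verified once enough already-verified members vouch for it, and one community can choose to trust another community’s verifications as a proxy.

```python
def is_verified(person, community, threshold=3, trusted_communities=()):
    """A person is verified if enough verified members vouch for them,
    or if a community we trust has already verified them."""
    vouchers = {v for v in community["vouches"].get(person, set())
                if v in community["verified"]}  # only verified members count
    if len(vouchers) >= threshold:
        return True
    # proxy verification: accept another community's verdict
    return any(person in c["verified"] for c in trusted_communities)

meditation_group = {
    "verified": {"ana", "ben", "cal", "dee"},
    "vouches": {"eli": {"ana", "ben", "cal"}},  # three verified members know eli
}
housing_coop = {"verified": {"eli"}, "vouches": {}}

print(is_verified("eli", meditation_group))  # meets the threshold: True
print(is_verified("fay", meditation_group, trusted_communities=[housing_coop]))  # False
```

The interesting property is the one the text points at: nothing here requires a state-sanctioned identity, but everything requires actual relationships, because vouchers only count if they’re themselves verified members of the community doing the vouching.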

In general, I believe trust is built among people, not among technologies. This happens in small groups, in situations where we actually are known and show up in trustworthy ways. We just have these crazy complicated and nested systems to deal with more and more scale and therefore, less human trust. We have these systems to help giant corporations and states extract money and time, not because they actually make our life better, necessarily.

I want to build ‘identity systems’ and technology in general that looks at the world as it could be, that gives up on trying to fix something that was never functional in the first place, to take a leap into the unknown because we’re at a point of singularity anyway, so why not start from scratch when it comes to structures that support our collective humanity?

If you want a different world–and if you’re about human liberation you do–you’ll have to start thinking about things from a different perspective. Not how can we use the technologies we’re inventing for good, but what does a world look like that truly reflects freedom?

As the awesome poet, intimacy organizer, and abolitionist Mwende Katwiwa, aka FreeQuency, pointed out on the Emergent Strategy podcast:

When I say ‘better,’ I don’t mean it will be like you’ll get everything that you have here and then some and it will be great… we might never get some of the shit we were promised if we give this world up, but I believe there are things that are better than what this world has actually given us, that are more equitable, that feel better, not just when we consume them, but when we are in relationship, they feel good for us in our collective bodies… Are you willing to lose all of this and believe there is something better that we can’t even actually imagine? (That’s the wildest part about it.) You will have to be able to let go of this shit without tangibly being able to see what’s on the other side and say it’s worth it.

FreeQuency

Web 3.33

In the past, I have struggled to find much necessity for blockchain, but I wonder if ChatGPT has actually created a needed new use case: some way to understand provenance.

I’m a person who often likes to read the source paper or to mine the bibliography section of a book to find ways to go deeper on a certain topic or idea. Now, there were already authors who synthesized thoughts and then wrote their own version without attribution, and I suppose it’s fine (cough, cough), and fine for ChatGPT to do its own “remixing” (to use the charitable perspective of aughts-era copyleft), but it also means an explosion of content that may or may not have any reason or careful thought behind it.

In theory, you might be able to train a model with your own curation rules and at least prevent yourself from accidentally incorporating pure nonsense in the results of queries, but then you’d need to evaluate sources (I mean books, writers, possibly publications) and few of us have read enough to cover our bases well here. I read more than the average person and yet every day I can discover many other things I haven’t read by actual real people. To be fair, many of these are just the same content with mildly different perspectives, but there are reams of subject areas that overlap and I have no idea how to evaluate. In science, there’s the added challenge of new developments happening all the time that update and shift what we understand- but can we be sure that ChatGPT can update and capture the nuance of that shift?

I’m not necessarily arguing that people will be better than AI in synthesizing and analysing information in the long run. People do have lived experience that shapes their own personal worldview, and often being able to triangulate my own experience, theirs, and their ideas helps to contextualize information.

But in the short term, we’re about to experience a potential volume of content that humans, despite being rather prolific themselves, can’t produce, to an exponential degree. In the short term, this volume will most likely be in service of selling things, because at the moment, that’s what most of our technology is for: either a metacognitive kind of selling in the form of ad-tech or just direct sales of the digital product or service itself. (Social media, like TV before it, is fundamentally ad-tech.)

I’m going back to one of my earlier questions – is technology going to start doing the labour technology has created? Is there a level of recursiveness coming that essentially frees me from the digital experience entirely? Could we be headed to a world where one just spends time in person with other people without phones or anything because we’ve replaced ourselves with AI, which is busily selling things to itself with cryptocurrency that doesn’t even apply to physical space at all?

Or, at the very least, will we need to use some kind of immutable ledger to show that a human was involved in ideas and analysis and curation? Because without that, where will attribution exist?
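
The “immutable ledger” part is the mechanically simple piece. A minimal sketch of the chaining idea (my own toy construction, not any real blockchain, and a real system would add signatures to prove the author claim): each entry binds a content hash and an author claim to the hash of the previous entry, so altering any past record breaks everything after it.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic hash of an entry's contents."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, author: str, content: str) -> None:
    """Attest that `author` produced `content`, chained to the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {
        "author": author,
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "prev": prev,
    }
    entry["hash"] = entry_hash(entry)
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Recompute every hash and check the chain is unbroken."""
    prev = "genesis"
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or entry_hash(body) != e["hash"]:
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, "human:me", "An essay I actually wrote.")
append(ledger, "human:me", "A follow-up post.")
print(verify(ledger))        # the intact chain verifies
ledger[0]["author"] = "bot"  # tamper with a past record…
print(verify(ledger))        # …and verification fails
```

Whether that hash needs a blockchain at all, versus an ordinary signed log, is exactly the kind of question the paragraph above is asking; the hard part is attribution at the moment of creation, not the ledger.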

Back when I was filming for Acceleration, I talked to many smart people about their vision of an AI-driven future (or nightmare, as the case may be). That was 10 years ago now, and the way technology developed has still come as a kind of surprise, not that what we have now wasn’t predictable, but how it feels to live in a world where almost all structures of human trust have been torpedoed. That we could be in a situation where simply not knowing whether something has an individual human source could be the destabilizing force that pushes us over the edge, rendering our main evolutionary advantage (cooperation) moot. On the other hand, maybe we’ll band together in the face of non-humanity becoming a force of dominance. I am living into the latter future.

Tipping the scale

Lately I’ve been thinking about “scale” and I have some questions.

  • Can tech scale without VC backing?
  • Is scale inherently problematic? Does it diffuse connection and lead us to behave in ways that are unnatural and possibly polarizing?
  • Is scale mainly held as a good thing because it is a reverse wealth distribution scheme (taking money or labour from regular people and turning it into wealth for a few)?
  • Can we build technologies that scale but actually encourage people not to? (Supporting clustering and “the splinternet” in service of building spaces where trust is possible)
  • How is scale different if it’s open source and not owned by a corporation or a government?
  • How can we scale things without applying a layer of dominance in some form or another?
  • What has been the result of scale when the scale is not extractive (and do Wikipedia, Mozilla, Signal, or perhaps even services like Libby or Kanopy qualify as non-extractive)? Could these be models to emulate?
  • Why isn’t there a definition of “scale” in Merriam-Webster that reflects this notion of expansion? Is scale just nonsense techspeak?

Some of the people I have talked to about trying to raise money to support a prosocial technology face the problem of wanting to build something that likely won’t scale, but also isn’t expensive to build and maintain. Perhaps community support and platforms like Open Collective will make that kind of technology more feasible.

My goal is to build something that does scale, at least in the sense that I can’t imagine any reason why every adult or maybe even young person wouldn’t benefit from being part of a community oriented around connection, sensemaking, support, and transformation. Not just benefit from, but desperately need, though it’s also true that our current scaled tech mostly serves as a distraction from the feelings provoked by not having that need met.

It’s an interesting conundrum: can we create containers for community with enough scaffolding to support self-responsibility and prosocial interaction, but open enough for lots of different kinds of communities and lots of different sets of norms and values?

The Silicon Valley version of scale is “blitzscaling” – a term that inherently reflects violence. The purpose of this scale isn’t to be of service to more people, it’s to eliminate the possibility of competition. Where once you might think of a platform as being a welcome place for people to contribute and collaborate, it’s now more of a word to imply ever-expanding reach, “organizing the world’s information” and the like.

What does organic scaling in tech look like? Growth that isn’t juiced by dark patterns, unconcerned with privacy, and driven by unsustainable spending? Is there such a thing in a platform world? It seems like a lot of the things that are being touted as the future are just the wolf in ethically-sourced shearling. Fundamentally, the kind of scale I hope for comes from a kind of emergence, not a strategy.

Calendar Games

My partner works as a project manager, which is funny for a few reasons. One, because in my world of product management, there’s a palpable shudder when the job is confused with project management: PMs of the product variety should spend their time determining what to build, not managing the building process! Two, because nothing seems to stress this person more than what they refer to as “calendar games.” (Ironically, they’re pretty into other kinds of games, such as ridiculously complex board games or totally uncomplex retro arcade games.)

Calendaring is a weirdly hard problem, for a technology that has been around at least since the industrial age and presumably far earlier. You have a calendar, I have one, and if we need to find a time when both of us are free, it seems pretty intuitive that simply laying one over the other would expose availability.

But there are complications, especially when there are more than two of us. For one thing, I might be in a different time zone. And offsets change and complicate things further (yet another reason why daylight saving time is outmoded).

OK, but Calendly and similar services have basically solved this problem by letting you limit your availability not just by claimed slots, but also by your individual schedule. And Calendly works very well for one person choosing a time on another person’s calendar.

This still gets messy when say, you travel and you’re in another time zone, but that’s possible to deal with if annoying for either the person with the calendar or the developers trying to accommodate the traveler.

There’s another challenge, this time not so technical. We are, as a rule, very emotionally invested in the idea of “owning our own time.”

Every day in my meditation practice space, I hear, “Time just is,” but it’s still quite hard to turn over the perceived control of that time to someone else. Conversely, and as reflected by a social media controversy, some people take affront at being asked to select times based on when another person is available, with the idea that whoever set up the calendar is enforcing their own boundaries as an act of dominance. Anger is a boundary energy, as they say, so it’s no surprise people get irritable when it comes to trying to converge into mutual time without feeling controlled in one way or another.

Along with this generalized frustration, we also tend not to want to have the same personal and work calendars, nor make plans the same way with work and personal contacts. For example, I tried to make a Calendly event for a friend, but discovered we could only meet within business hours unless I could figure out how to block personal things on my ‘work calendar,’ many of which are not as specifically time-blocked, like “going out with a friend.” And it could get very awkward if people at your work can see what’s on your calendar.

But for those of us who find services such as Calendly a welcome relief from sending emails back and forth, it feels mysterious why there’s been almost zero innovation in finding times for multiple people to meet. Partly this is a function of the aforementioned power dynamics- who gets to be the one determining the time for everyone? Because if it’s not one person, then things just return to endless group email chains with one person inevitably coming in at the last minute to spoil everything.

Partly though, it’s something else: there is, as far as I know, no service that can look at multiple people’s calendars across different organizations and calendar clients and let you know when everyone is free. I am guessing there may be some complexities in the technology that make this hard, but I mean, really? Up to 10 people’s calendars in a given week should be within the capability of a not-too-complicated algorithm to evaluate. (No, you tell me: how hard is this, really?)
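
For what it’s worth, the core computation really is simple: merge everyone’s busy intervals and return the gaps. Here’s a sketch (times as minutes since midnight for brevity; the genuinely hard part is pulling busy blocks out of everyone’s different calendar systems, which this assumes away).

```python
def free_slots(busy_calendars, day_start, day_end, min_length=30):
    """Given each person's busy (start, end) intervals, return the gaps
    of at least `min_length` minutes when everyone is free."""
    # pool all busy intervals and sort by start time
    busy = sorted(iv for cal in busy_calendars for iv in cal)
    # merge overlapping/adjacent intervals into one combined busy timeline
    merged = []
    for start, end in busy:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    # the free slots are the gaps between merged busy blocks
    free, cursor = [], day_start
    for start, end in merged:
        if start - cursor >= min_length:
            free.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_length:
        free.append((cursor, day_end))
    return free

# a 9:00–17:00 day; two people's busy blocks in minutes since midnight
alice = [(9 * 60, 10 * 60), (13 * 60, 14 * 60)]   # 9–10, 13–14
bob = [(9 * 60 + 30, 11 * 60)]                    # 9:30–11
print(free_slots([alice, bob], 9 * 60, 17 * 60))  # [(660, 780), (840, 1020)]
```

That’s 11:00–13:00 and 14:00–17:00, computed in a few lines. So the obstacle isn’t the algorithm; it’s access, since every org’s calendar sits behind its own walls and permissions.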

Perhaps the last piece of the problem is that while people may have space in their calendar or have meetings booked, there’s another wrinkle- what I might loosely characterize as FOMO. Yes, I may have a meeting booked but if something more compelling comes along I might want to move it. As a Calendly user, I’m often caught by mildly irritating surprise when someone reschedules an hour before we were to meet, leaving me with a 45 minute gap that doesn’t lend itself to actually getting things done. But as a researcher, I quickly gave up on any rebooking or even no-show resentment since it’s a fact of life. Still, when you have multiple people in a meeting and a key person develops a conflict, it quickly becomes everyone’s headache. But even this seems much better to give to AI than to have to spend additional time negotiating a new slot.

I propose we have an AI that is granted more limited views of people’s schedules, maybe for just enough time to cross reference a week and surface availabilities, with rules input by the players about how to choose the preferred time among what’s possible. I suppose if you wanted to get really fancy, it could even figure out travel time and factor that in.

Could this work? I am guessing the APIs are available if we can have Calendly connections with the various clients. So how about it? Surely, we can surpass Doodle and make meeting as easy as just showing up?

I predict that as we figure out calendar games, we’ll also move away from separate productivity tools and have planning, goal setting, tasks all live in the calendar. Right now this is pretty terrible on Google Calendar, but someone will come along and make this new category of time-management exciting. Or we’ll end up working for our robot overlords when they figure out how to schedule us. Either way, I won’t be filling in Doodles.

Could AI solve ADD?

Does technology work for people or the other way around? We are creative, collaborative, inventive beings, but we’re also social, easily influenced, and normative. What we’ve imagined ourselves “creating” has often been more of a prison we’ve built around ourselves.

Consider Yuval Noah Harari’s provocative idea that wheat domesticated people rather than the other way around.

Think for a moment about the Agricultural Revolution from the viewpoint of wheat. Ten thousand years ago wheat was just a wild grass, one of many, confined to a small range in the Middle East. Suddenly, within just a few short millennia, it was growing all over the world. According to the basic evolutionary criteria of survival and reproduction, wheat has become one of the most successful plants in the history of the earth.

In areas such as the Great Plains of North America, where not a single wheat stalk grew 10,000 years ago, you can today walk for hundreds upon hundreds of kilometers without encountering any other plant. Worldwide, wheat covers about 2.25 million square kilometers of the globe’s surface, almost ten times the size of Britain. How did this grass turn from insignificant to ubiquitous?

Wheat did it by manipulating Homo sapiens to its advantage. This ape had been living a fairly comfortable life hunting and gathering until about 10,000 years ago, but then began to invest more and more effort in cultivating wheat. Within a couple of millennia, humans in many parts of the world were doing little from dawn to dusk other than taking care of wheat plants. It wasn’t easy. Wheat demanded a lot of them. Wheat didn’t like rocks and pebbles, so Sapiens broke their backs clearing fields. Wheat didn’t like sharing its space, water, and nutrients with other plants, so men and women labored long days weeding under the scorching sun. Wheat got sick, so Sapiens had to keep a watch out for worms and blight. Wheat was defenseless against other organisms that liked to eat it, from rabbits to locust swarms, so the farmers had to guard and protect it. Wheat was thirsty, so humans lugged water from springs and streams to water it. Its hunger even impelled Sapiens to collect animal feces to nourish the ground in which wheat grew.

The body of Homo sapiens had not evolved for such tasks. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks, and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped disks, arthritis, and hernias. Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us. The word “domesticate” comes from the Latin domus, which means “house.” Who’s the one living in a house? Not the wheat. It’s the Sapiens.

Yuval Noah Harari, Sapiens: A Brief History of Humankind

When I consider what I spend my own time on, quite a bit of it feels like me working for my technology. I receive hundreds of emails a day (which somehow I'm to blame for), most of which I never open but which still require quite a lot of time: sifting through, managing, responding, adding things to my calendar, and, in the saddest part of the manipulation, checking to see if there's more. And even with all that dedication to its maintenance, every so often I miss an email that matters to me and fail to respond to someone or to take an action in my own interest.

I have ADD, so email is not all I forget. Even if I weren't enticed by the badges and notifications on my phone (which are mostly turned off- I can't imagine what it would be like if they were all allowed), I forget what I went upstairs to get on a pretty regular basis. I will literally say out loud, "don't forget to x," and an hour or a day later think, "how did I forget that thing?" To some extent, this is a product of loving to do a lot of things and having many things to remember. Some of it is my biology.

But when I have tried to use technology to solve this problem, it inevitably fails. And part of the reason, I think, is that using apps produces too explicitly a feeling of having a new boss in the form of some checklist, chart, or calendar.

List interfaces inevitably lead to to-dos too numerous to fit into my waking hours, and what Steven Pressfield has so eloquently outlined in a few books, the Resistance, seems to feed on the notion of having a time, set by me, to do things that only I am accountable for- which describes about 80% of creative work. Suddenly I find myself in that "just one more little thing and then I will do the thing I scheduled" mode, or, you know, reading Slack messages. (Slack being another prime example of a technology I'm working for without compensation.)

So here’s the question. It seems plausible that there’s a near future when I can assign AI to sort through my email, to keep me abreast of information I’m interested in, to help me avoid missing an event on my calendar, to do all the work for other technology I’m currently volunteering for. But I have a sneaking suspicion that this line of thinking may lead to something even worse.

When I am in goal-setting conversations (BASB, communities, coaching, ‘productivity porn’) what I notice is how much of what people aspire to feels kind of… unconsidered? Why is this thing important to you? And of course, we live in a culture where people like me who actually can self-determine what they want to do have this privilege as a result of the mostly non-self-determined work of other people.

I suspect we all mostly want to feel loved, like we matter, and like what we’re doing with our time has utility or even service to others. Deep down, under the stories about having stuff, or being fit, or being enlightened, or whatever your flavour of goal looks like, this is what humans crave when their physical needs are met and they have done the work of healing from trauma.

Will technology make it easier to be in that state or harder? So far, every new “age” seems to be filled with innovation that largely takes us out of that state. So being hopeful about AI does feel a little naïve.

But perhaps what's going on isn't a problem with technology itself but with the violence of the systems feeding it, and that it reinforces. What would it look like for technology to emerge from people and communities who have done the work to heal and to recognise the deep trauma of living within "imperialist white supremacist heteropatriarchy"?

Even in the more "progressive" tech-maker spaces I'm in, it's mostly advantaged people (men, whites, westerners, the elite-college-educated) who predominate. I mean, I'm sure there are spaces I'm not in where technologists have different demographics (please invite me if I'm welcome!), but looking at the leadership in global tech and the funding numbers, it's hard to imagine that AI's current developers are not continuing to operate with pretty gigantic blind spots.

Still, I find myself pretty seduced by the idea of externalizing my executive function to the degree that I can remember things without having to remember to remember them- but what happens when I outsource the process of deciding what exactly is worth remembering? In the end, the answer is probably “embrace forgetting” and appreciate where I am and what’s around me without the endless overlord of achievement made manifest in code.

Who makes the makers?

Around a decade ago, I co-organized a regularly scheduled event named CopyNight NYC. As someone working in the film industry whose job included issuing DMCA takedowns, and at the same time someone who was then a bit of a free speech absolutist (how times change, in so many ways), I was trying to figure out how to negotiate the tension between the idea that people who make things shouldn't be exploited in the process and the obvious impossibility of preventing digital things from being copied, shared, remixed, and repurposed, or even sold with no benefit to the maker.

Now that we have AI that has ingested at least some of what humans have made and is taking remix to a singularity-ish level, we're going to have to have the Napster conversation again, but this time without the chance to blame people for being thieves. When the work of writers, artists, musicians, and other 'creators' becomes just an input for AI, what will that mean?

Part of me thinks it's no surprise this technology is fast-following the short-lived 'creator economy'- the same economy that drove a lot of scammy blockchain projects and produced extreme power-law disparities, meaning nearly everyone feeding the maw of the centralized creator platforms got little to no compensation for their efforts.

This might prove to be an uninformed take on the whole thing, but to me, most of the creators in the creator economy were simply servants of the consumption-at-all-costs, ad-driven models behind social media. If you were able to skim enough of that ad money, you were also likely able to work for "brands" and directly shill things yourself.

Meanwhile, somehow people (millennials?) have been convinced that "personalized ads are better"- better, that is, at getting you to buy things you probably don't need and ultimately burning down the world. I'm happy with my terrible ads for things I couldn't care less about, tyvm.

Back in the day, I wrote an article in which I proposed that the new digital economy could work for artists if they went direct and didn't expect to make a fortune, but instead thought of themselves as solo entrepreneurs- "cobblers," in that case.

This seems truer than ever to me now. Artists can succeed when they focus on having customers of their own- not the platform's customers, since the platform incentivizes erasing difference and uniqueness, but people who care about what they do.

This has changed the nature of art-making, I am sure, since there’s some aspect of art we value that might be thought of as “not caring about an audience,” being true to one’s own vision, not “selling out.” But you could also argue that artists can retain that vision if they think not about making art people like but instead, learning how to find and nurture the people who care about the truth as the artist sees it.

Historically, creative work has been something one could do if one was rich, had a benefactor, or was exploited by people who could distribute and/or sell the work. Nothing much has changed, except that it's perhaps a lot easier, from a technical standpoint, to make things, and for that reason there's a lot more to consume, sift through, or pay for.

Is there any real defense of copyright for individual creators? In this new economy, the best bet creators have is to build relationships and community around their work, and in doing so most likely grow their business slowly, rather than chasing that one lottery-ticket 'hit'.

But what will Disney do as AI strips identity out of the works it ingests? We can hardly have an AI that just leaves out the most popular and best-capitalized media without creating huge gaps in its knowledge- and presumably that cat is way out of the bag by now.

There are so many interesting and possibly radical outcomes of what’s happening with this technology, to state the obvious. My question for the AI: are you just here to keep us captive on our consumer treadmill, or are you going to force people to contend with making their own meaning and coming together as the still-human?

Missing the flight

We were in Taipei, packing up our little Airbnb to head to the airport for the next stop on our trip, a resort in Vietnam. I've never thought of myself as a resort kind of person, but it was our honeymoon, and we hadn't taken a vacation in years. We gathered all our things, I assembled the visa paperwork I'd printed out, and then- hmm, where was my passport?

“Have you seen my passport?” I asked A. This is not a good question to ask A. Immediately, I could tell I was under suspicion. My loose ways were certainly to blame, according to the look I received.

We looked everywhere. I had carried the passport in my bag the day before, and on retracing our steps, soon it was evident what had happened. We had wandered into a “vegetarian expo” being held at the conference center nearby, which had been very packed, back when being in compressed spaces bumping into strangers was a thing. Obviously, someone had reached into my bag, brimming with packages of vegan jerky, and light fingers had made off with my official ID.

OK, this was bad, but I mean, things happen when you travel. I felt mad at myself but then went into ‘how do I solve this’ mode. We took our bags, left the Airbnb, and got an Uber to the American Embassy.

The staff were not very comforting. It could take a while, they said. But I explained that we were supposed to be going to Vietnam that very day, and with some vaguely questionable charges that had to be paid in cash, I was eventually (about three hours later) handed a thin emergency passport with a harried photobooth picture to take away.

We went to the airport, where we had missed our flight, but we went to one of the travel agent kiosks and booked another one. We got to the check-in counter and- oops- the visa was now invalid with my new passport. Oh, and was the ticket refundable? We went back to the kiosk and spent a lot of time trying to convince them to help us, then spent a few hours trying to get online in the one corner of the airport with any wifi, refreshing in that horrible jonesing kind of way, like please please please just give me my fix. Finally, we found another Airbnb and went there, tired and dejected, and I re-applied for our visas, which would perhaps come through in 48 hours, maybe, don't pin your hopes on it, according to the translation on the official government site, which appeared to be built with GeoCities.

We wandered around Taipei as we'd been doing: getting to know the underground malls, getting chilly and wet walking outside, and visiting various Starbucks- though Taipei is a coffee mecca, the idea of decaf coffee has not occurred to it, which, you know, I can't fault them for. I contacted the resort, told them we were coming in a couple of days, and asked them not to cancel our whole reservation. We stayed within the vicinity of wifi on the chance that a magical visa email might appear more quickly than expected.

After two days in the tiny Airbnb, I thought, if we're stuck here, let's at least be on vacation, so I found a hotel near the hot springs northeast of the city. We got there in the evening and wandered around looking for plausible food, then discovered the email had arrived. I got online and booked our flights for early the next day. I booked an Uber to pick us up at some ungodly morning hour. YES!

We woke bleary-eyed and stumbled down to the lobby, where our Uber was right on time. The trip to the airport was remarkably fast. Finally, things are working! I thought, until we opened the door to step out and realized, oh eff, this is not the right airport. Throwing our things back into the car, we sped for an hour through the still-fluorescent-lit streets to get to Taoyuan International, where we rushed to the ticket counters about 30 minutes before the flight was scheduled to depart- and couldn’t find a ticket counter for our airline. Finally, someone explained that since the window to check in had closed, the counter was now being used by another airline.

Our marriage had lasted for six months, but I wasn't sure it was going to make it through this. I could not think straight; I just thought, "there must be a way to get to Ho Chi Minh City today, and we're going to find it." And so I found myself in a conversation with a guy who was a travel agent, I guess, kinda? He seemed to know the people at the ticket counter and had access to tickets for flights that were no longer booking online. How much was this going to cost? A lot. And we weren't going to get our money back for the flights we missed. A was by turns furious and incredulous as I allowed this dude to take pictures of my credit card and watched him flirt with the ticket agents, brandishing multiple phones and eventually producing tickets that took us to Saigon, luckily where it was warm with a little spa outside the airport terminal, and then to Ho Chi Minh.

I’m thinking about this trip because it’s the kind of crazy situation that results from a combination of typical travel hijinks, my own ADD (choosing the wrong airport, for example), and technology systems that create their own layers of bureaucracy on top of what is already bureaucratic, like visas and airline ticketing. Will such situations be a thing of the past as we develop more effective AI?

What would this have looked like in a world with AI managing my affairs? I mean, just having AI deal with the visa in the first place would have been amazing. (I suppose there's also an argument that just having a human other than me responsible for all the arrangements might have ultimately saved us money and heartache.)

I am very good at discovering deals or hacking together possibilities, but I am not great at keeping track of all the details later. Could AI be the answer to my executive function deficiencies?

Imagine a world where all these systems we’ve built talk to each other, in that convivial style we’ve already seen with ChatGPT. Oh, hey, airline system, my AI might say, let me look at all the flights you have scheduled, match them up with all the hotel rooms and Airbnbs in the world, and take my [owner? friend? client?]’s preference into account along with any paperwork involved and get amazing deals that involve no real compromises at all. I mean, this is just the tip of the iceberg.

My AI could be going through all my email, noting the things that need my attention, and deleting or archiving all the dross. It could be making arrangements with other people's AI so there's never a need for calendar games. I mean, all of these things are what people have assistants for, but obviously AI could be even more effective, since it can scan a zillion things in no time and access APIs that are hidden from humans these days.

Maybe having ADD won't matter when most of these memory and time-management functions can be taken care of, especially in a world where AI can just ask you what you want, with no data entry required.

But I wonder then, what will we need ourselves for?

One could get very Buddhist here and recognise that the self was just a construct anyway, so the purpose of oneself is, in a way, just to be, and if you’re next-level, to notice your being and how everything around you is also how you’re being (social networks, systems of domination, ecologies, quanta, whatever). I like sitting in the storm of all of it and feeling it gust against me.

Another way of thinking about this has to do with value. Naturally, in the way things are set up now, one’s access to an AI of this kind, and perhaps the access the AI has to systems it can manage, will cost something.

I haven’t yet read Bullshit Jobs, but what I infer about the concept tells me that the onset of technology (in the context of ‘late-stage capitalism’) has thus far only pushed us into having far more bureaucratic responsibilities than ever before. We spend less time being creative, less time in real connection, and more time checking emails and websites, looking at analytics, scrolling, liking, and dealing with more and more paperwork, even if the paper part of it isn’t visible. If we had actually read all the terms of service we’ve agreed to over the last 10 years, we probably wouldn’t have had time to eat or sleep. My stepson has 20+ apps and sites he has had to learn to navigate for academic assignments, and has yet to read a book after a semester of high school.

The trend is definitely in the wrong direction, but it seems like AI could change that. It could remove the need for all kinds of custodians of paperwork- but then what?

We developed money, it seems, in large part to deal with a lack of trust. You might be fine extending credit to people you know or who are part of your general community, but maybe not so much to a guy with a weapon whom you don't know from Adam. Graeber suggests that we have money for one main reason: armies.

Once we had money, we could use it to buy things or pay people for services, but then things got weirder. People started accumulating money just to have more money, and power and money became more synonymous. I'm really compressing this- please do read Debt: The First 5,000 Years, where it's treated more expansively. But the way one accumulates money depends on there being ways to multiply money, which turn out to be, largely, loans of one form or another.

So in our current system, I can get money by working for someone who pays me for my time, by making a good that someone else buys, by producing goods through farming or the extraction of natural resources, by selling things I 'own', or by offering a service that someone else pays for. Alternatively, I can loan money and demand interest or fees as a condition of the loan, or I can make an 'investment' on which I will get a return, should the business produce money in one of the ways outlined above.

In all these scenarios, I now have money, but what can I do with it? I can purchase things and pay for services. Some of these are necessary for my survival, others may make my life more convenient, easy, or pleasant, and others might signal my status. (Side hot-take question: do you only need status signaling in kinds of work where what you do is largely unnecessary?) Alternatively, I can use the money to make more money, and this is where things maybe go off the rails (but OK, it's a thing).

In a future where we don’t need so many people to produce needed goods, or to perform valuable services, what are people going to do? And what will we do about money?

On the plus side, I won’t have to have systems to organize my to-dos and I’ll have a lot fewer to-dos of the kind that seem boring but necessary.

I think I am lucky, very lucky in fact, that not only do I have systemic advantages, being white and educated, but I also already spend much of my time making meaning for myself, which does seem like one of the things no AI can replace. That doesn't mean we're all suddenly going to have the freedom to spend our time meaning-making, of course, but I do have a pragmatic optimism that if we can avoid burning down the world (a big if), then perhaps we're going to see big, big systemic changes as the unsustainability of what we're doing now becomes readily apparent- not just environmentally, but also in the way we've become bound to superficial levels of meaning.

This morning I was in a dream in which I had forgotten to go to the airport for my flight, and a huge sense of relief came with the realization that I could wake up. Maybe that’s on the horizon for us all.