Resident Evil

‘Evil’ is a concept that is strange to me when I am in the “we’re all part of an inexplicable universe” state of mind. But I think I can understand it from at least one perspective: what we think of when we call a person evil.

Isn’t “evil” just a name for an extreme lack of empathy? What is ‘evil’ to most people at its core? I am a little out of the loop with the good-and-evil framing, but what comes to mind is ‘people being terrible’: people who put themselves first, who are power-hungry, who are cruel to other people or beings. (I am leaving aside the ‘evil’ that just reflects people’s censorious interpretation of things other people shouldn’t do because another human wrote down words attributed to a deity; I am talking about evil you can feel.)

In that case, why isn’t the answer to evil always empathy? This seems obvious, yet somehow the impulse in history is always war or punishment or something of that nature. We fear this kind of lack of empathy, perhaps because it lurks within us. Maybe the real evil is how lack of empathy provokes lack of empathy.

Of course there are those incapable of empathy, but as Robert Sapolsky suggests, as humanity we’re perhaps better off identifying and isolating sociopaths and then using deep empathy with everyone else. To recognise that in almost every case where empathy is possible, anti-social or ‘evil’ behaviour results not from free will, but from being an organism in a system that hasn’t properly nurtured it. As a society, we can look (with empathy) at what leads to this behaviour (and maybe even this thinking) and address it.

I mean, that will probably never happen, but there are interesting ‘microcosm’ experiments with this approach among collectives, people creating containers of empathy and building trust with each other. In a context of collective healing, it’s easier to spot dominance, anti-social behaviour, and power-seeking, and to respond with empathy as well as accountability. It’s possible to have norms that allow people to make mistakes and be called in, and this rests, at least in part, on practices of interaction that involve witnessing, processing, and slowness, with not much dialogue until the trust and norms are clear and held in the space.

It is not easy to meet what feels evil with empathy, and empathy isn’t enough. Empathy alone does not address the systemic scaffold that leads to evil, that supports massive social inequity that yields power of the kind that can drain empathy away. So I’m not saying, let’s just be understanding. We can put boundaries into play and make sure evil isn’t normalized. We can fight the idea that evil is not evil if it has some utility, if it means jobs or profit or security or control.

I do wonder in myself how I can meet evil more with wonder, like “humans are capable of this. I am a human. Am I capable of this?” And notice where the edges of evil might live in me, so that I can love them out of existence.

Matters of Trust

How does trust work? It’s multi-faceted.

First, there’s congruence.

You say this is how you are; I see you behaving in ways that reflect that. This isn’t something I want to farm out to technology: it’s too easy to game anything that tries to quantify congruence.

Then, there’s connection.

I can’t trust someone who shows clear indifference or disregard for me. There’s no technology that can proxy for this.

Next, we can throw in social context, or transitive trust.

When someone behaves negatively toward other people with whom I can identify, I am likely to lose trust. This one is tricky by nature: sometimes people are stuck in in-group thinking, where doing bad things to someone we think of as ‘bad’ can feel correct. But I would argue that most of the time, there’s a violence and dominance in that behaviour that provokes fear rather than trust. If I see someone punishing a person I don’t like, I may feel it’s warranted, but I also lose trust in the person doing the punishing.

We may have become enculturated with a kind of paternalism or patriarchal perspective that leads us to see punishers as protectors. My guess is that at heart, we know punishers operate by invoking fear. First they came for…

We can also consider whether people we trust also trust other people and use their trust as a basis for our own.

Should we assume trust?

There’s a norm I’ve seen many communities and companies try to establish: “trust until the trust is broken.” On some level, it’s a good strategy, game theory that works. But it also feels like an approach developed by people who don’t regularly have to watch out for danger. Who don’t experience trust-breaking frequently, even in places where everyone has the best intentions.

To suggest that trust is about some kind of contractual, verifiable, identity-based thing feels like what is broken in the whole system (meaning the drive to quantify everything and extract its value). The way many technologists talk about trust today is basically a paradigm of imperialist, heteronormative, white-supremacist patriarchy. (Though I’m not sure those labels are continuing to serve me, they have been a helpful lens.)

Of course, trust is hard. Fundamentally, it starts with being able to trust yourself. A requirement for that kind of self-trust may be going through the pain and heartbreak of seeing where one is not trustworthy to oneself. To notice how, to speak for myself, I have carried maladaptive lessons from trauma, how I have tried to avoid feeling shame by coming up with justifications, how I have been unwilling to look at my part in the systems I see as broken.

For communities to build trust, we need to start by creating containers that allow people to self-reflect without judgement. Witnessing this in others turns out to be highly trust-building. Offering welcome and checking our judgement builds trust for ourselves and others. It lets us be vulnerable, it lets us notice when we might want to rush to judgement and sit with that impulse, getting curious about what in ourselves we’re running from.

Trust, Identity, Community

As someone who has been around in the tech freedom space for a while, though always a bit on the fringes (‘the fringe’ is dead center I suppose), I’ve been noodling on the idea of what it might look like to have control of what one shares with websites, apps, platforms, or even other people online.

It’s interesting to me (though not exactly a surprise) that so many developers approach the problem by automating as much as possible and taking people out of the picture. I read debates about “zero knowledge” that largely focus on whether the mechanisms employed are actually zero knowledge, but what problem is that trying to solve?

There is no doubt that there are certain situations where real anonymity has positive utility, primarily where repressive state surveillance plays a role. But the downsides of real anonymity are also real and shouldn’t be glossed over. How can we fight repressive state surveillance without orienting everything we build around that one problem?

I’m not just talking about the ways anonymity can facilitate human trafficking, child abuse, or terrorism. Those are definitely not good, and ideally we do not build technology that facilitates such harms.

But we have a bigger issue. When we consider what we need to build functional communities, democracies, and relationships, trustless systems are not just counter-productive, they create false ideas of security and safety.

Trust-breaking is not a technical problem, it’s a human problem. As we find ourselves in less and less authentically human contexts (interacting with ChatGPT, deepfakes, bots, etc.), we’re in dire need of trusted systems and identity management that help us verify our mutual humanity and trustworthiness as people.

One idea for this might be identity management that happens within actual human communities, where, as someone who knows me, you can verify my identity. This doesn’t require a state-level or sanctioned identity, but it does require people vouching for one another. Presumably there would need to be some threshold for this kind of verification (how many people would it take?) and a complementary technology layer to support the process. We’d need to consider accessibility, but I think the genius part of this kind of scheme is that it requires people to be in relation to one another, and that might mean creating new kinds of interpersonal networks to accomplish verification.

Imagine, for example, that you’re unhoused, or living with a disability, or don’t have regular access to a computer. How might a human-trust-building identity system serve these use cases? How could this work in a decentralized way, so that identity could be community-verified for communities you participate in, and proxy-verified by having one community trust another’s verification? Is it necessary to have a universally-verified identity or simply one that allows access to your particular contexts?
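
As a thought experiment, here is a minimal sketch of those mechanics in code. Everything in it is invented for illustration (the names Community, vouch, and is_verified, the thresholds); it’s a toy under the assumptions above, not a worked-out protocol.

```python
# A minimal sketch of community-based identity verification (all names
# invented): people vouch for one another, an identity counts as verified
# once a threshold of already-verified members has vouched for it, and a
# community can proxy-trust another community's verification.

from collections import defaultdict

class Community:
    def __init__(self, name, threshold=3, founders=()):
        self.name = name
        self.threshold = threshold          # vouches needed for verification
        self.vouches = defaultdict(set)     # person -> set of vouchers
        self.verified = set(founders)       # bootstrap: founders vouch for the rest
        self.trusted_communities = set()    # proxy verification across communities

    def vouch(self, voucher, person):
        # Only already-verified members can vouch, so trust stays rooted in
        # existing human relationships rather than open registration.
        if voucher in self.verified:
            self.vouches[person].add(voucher)
            if len(self.vouches[person]) >= self.threshold:
                self.verified.add(person)

    def is_verified(self, person):
        # Verified locally, or verified by a community this one trusts.
        return person in self.verified or any(
            person in c.verified for c in self.trusted_communities
        )

choir = Community("choir", threshold=2, founders={"ana", "bo"})
choir.vouch("ana", "kim")
choir.vouch("bo", "kim")             # kim reaches the threshold
garden = Community("garden", founders={"lee"})
garden.trusted_communities.add(choir)
print(garden.is_verified("kim"))     # True, via proxy verification
```

The interesting design questions live in the threshold and in who counts as a founder; set those badly and you either exclude people or make collusion easy.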

In general, I believe trust is built among people, not among technologies. This happens in small groups, in situations where we actually are known and show up in trustworthy ways. We just have these crazy complicated and nested systems to deal with more and more scale and therefore, less human trust. We have these systems to help giant corporations and states extract money and time, not because they actually make our life better, necessarily.

I want to build ‘identity systems’, and technology in general, that look at the world as it could be, that give up on trying to fix something that was never functional in the first place, and that take a leap into the unknown. We’re at a point of singularity anyway, so why not start from scratch when it comes to structures that support our collective humanity?

If you want a different world–and if you’re about human liberation you do–you’ll have to start thinking about things from a different perspective. Not how can we use the technologies we’re inventing for good, but what does a world look like that truly reflects freedom?

As the awesome poet, intimacy organizer, and abolitionist Mwende Katwiwa, aka FreeQuency, pointed out on the Emergent Strategy podcast:

When I say ‘better,’ I don’t mean it will be like you’ll get everything that you have here and then some and it will be great… we might never get some of the shit we were promised if we give this world up, but I believe there are things that are better than what this world has actually given us, that are more equitable, that feel better, not just when we consume them, but when we are in relationship, they feel good for us in our collective bodies… Are you willing to lose all of this and believe there is something better that we can’t even actually imagine? (That’s the wildest part about it). You will have to be able to let go of this shit without tangibly being able to see what’s on the other side and say it’s worth it.

FreeQuency

Only Two Values

I had an aha! moment a few months ago while listening to a book on startups that described how many companies create lists of values so long and generic that they become essentially meaningless. Your core values should not be “stuff we think is good” but the specific qualities your company embodies more than 95% of other organizations do. These values should reflect the founders, the team, the product, and the customers. They should be one of your litmus tests for making collective decisions.

The companies I’ve worked for have been led by people who cared about values, but those values were not honed down in this way, and there was some cognitive dissonance as a result.

For GetWith, I created a Three Core Values matrix, which reflects what I’m about and what I want to build with others.

This week I’ve been tasked by my coach with the Brené Brown exercise from Dare to Lead: narrow your personal values down to two, then write about what they mean to you and how you put them into practice. Challenge accepted!

Brown gives these instructions:

I know this is tough, because almost everyone we’ve done this work with (including me) wants to pick somewhere between ten and fifteen. I can soften the blow by suggesting that you start by circling those fifteen. But you can’t stop until you’re down to two core values.

Brené Brown

Here’s the whole list– I will be curious about which ones resonate with you:

The first one was easy. Curiosity is my core motivation, and the practice that feels most rewarding. Curiosity leads me to learn, to examine my own motives, to be able to wonder about the world and other people. It leads me to look for unusual solutions when the common ones feel ineffective.

Curiosity feels amazing, too. When I am curious, I am not in fear, not in judgement, not in suffering. It’s got a natural level of detachment to it and simultaneously leads to listening and compassion.

Curiosity has a downside, I guess, in that there’s a certain insatiability to it. It can lead me to go deep and also to go on meandering paths.

The upside to this downside is that, for the most part, it’s only a problem around people who aren’t also curious. At one point, I thought curiosity was a problem because it led to a lack of focus, but now I see that everything is connected somehow, and productivity can be a result of curiosity as much as a casualty of it.

I had to ponder my second choice carefully. There are many values that feel wrapped together for me. Where I land, perhaps unsurprisingly in retrospect, is on “belonging.”

Belonging ties into nearly everything I think about and care about. For one thing, I can easily look back on many of the paths I’ve chosen or interests I’ve pursued and see how belonging was a factor. I’ve sought belonging, rejected it, and offered it. I found ways to not belong, and I created and contributed to communities where I got to experience how distinct, and perhaps even opposite, the meanings of belonging and fitting in can be.

Belonging feels like my core value, but it doesn’t mean I always feel like I belong in every context. It does mean that I locate belonging in myself and question systems or contexts that exclude people based on factors that don’t have anything to do with connection or mutual resonance among individuals. It means I deeply appreciate systems that promote mattering, interconnection, and caring, which are the bedrocks upon which belonging rests.

In more recent times, I’ve been learning about what facilitates belonging and choosing to belong more actively. I’ve been on a quest to find ways for technology to support the process and practice of belonging and seeking to support the stewards of the spaces where I notice it emerging.

Belonging does have a dark side, but it’s a result of it being half-practiced or using only the instinctive aspects of belonging that rest on us-them dynamics. People feel like they belong because they are not “othered,” and our limbic systems actually run on implicit biases all the time.

As soon as you start going deep into belonging using other parts of your intelligence, it’s clear that the way to have the deepest sense of belonging is by welcoming. There are interesting subtleties to explore, especially because trust and belonging are so intertwined, and for the most part the collective technology we have now doesn’t respect the constraints of systemic trust, which has natural numerical boundaries.

What does it look like to fully live these values? I don’t know, but my guess is that there are a lot of self-defeating behaviours that can’t co-exist with these particular two, and that is fun to think about. Oops, self-judgement isn’t going to work in that system! But honestly seeing where things aren’t working and growth and change? You betcha.

Now I’m super curious where all my closest compatriots land on this exercise. Do tell.

Web 3.33

In the past, I have struggled to find much necessity for blockchain, but I wonder if ChatGPT has actually created a newly needed use case: some way to establish provenance.

I’m a person who often likes to read the source paper or to mine the bibliography section of a book to find ways to go deeper on a certain topic or idea. Now, there were already authors who synthesized thoughts and then wrote their own version without attribution, and I suppose it’s fine (cough, cough), and fine for ChatGPT to do its own “remixing” (to use the charitable perspective of aughts-era copyleft), but it also means an explosion of content that may or may not have any reason or careful thought behind it.

In theory, you might be able to train a model with your own curation rules and at least prevent yourself from accidentally incorporating pure nonsense into the results of queries, but then you’d need to evaluate sources (I mean books, writers, possibly publications), and few of us have read enough to cover our bases well here. I read more than the average person, and yet every day I discover many other things I haven’t read by actual real people. To be fair, many of these are just the same content with mildly different perspectives, but there are reams of overlapping subject areas that I have no idea how to evaluate. In science, there’s the added challenge of new developments happening all the time that update and shift what we understand. Can we be sure that ChatGPT can update and capture the nuance of that shift?

I’m not necessarily arguing that people will be better than AI in synthesizing and analysing information in the long run. People do have lived experience that shapes their own personal worldview, and often being able to triangulate my own experience, theirs, and their ideas helps to contextualize information.

But in the short term, we’re about to experience a volume of content that humans, prolific as we are, simply can’t produce: an exponential leap. In the short term, this volume will most likely be in service of selling things, because at the moment, that’s what most of our technology is for: either a metacognitive kind of selling in the form of ad-tech, or direct sales of the digital product or service itself. (Social media, like TV before it, is fundamentally ad-tech.)

I’m going back to one of my earlier questions – is technology going to start doing the labour technology has created? Is there a level of recursiveness coming that essentially frees me from the digital experience entirely? Could we be headed to a world where one just spends time in person with other people without phones or anything because we’ve replaced ourselves with AI, which is busily selling things to itself with cryptocurrency that doesn’t even apply to physical space at all?

Or, at the very least, will we need to use some kind of immutable ledger to show that a human was involved in ideas and analysis and curation? Because without that, where will attribution exist?
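
Here is a minimal sketch of what that ledger’s bare mechanics might look like, with invented names (ProvenanceLedger, attest). Each entry binds a content hash and an author claim to the previous entry, so the history can’t be quietly rewritten; a real system would add digital signatures and distribute the ledger across many parties, but this is the shape of the idea.

```python
# A minimal sketch of content provenance on an append-only hash chain
# (all names invented). Each entry commits to the previous one, so
# tampering with history breaks every later hash.

import hashlib, json, time

class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def attest(self, author, content):
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        record = {
            "author": author,  # the claimed human source
            "content_hash": hashlib.sha256(content.encode()).hexdigest(),
            "timestamp": time.time(),
            "prev": prev,      # chains this entry to the one before it
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def attributions(self, content):
        # Who, if anyone, has attested to exactly this content?
        h = hashlib.sha256(content.encode()).hexdigest()
        return [e["author"] for e in self.entries if e["content_hash"] == h]

ledger = ProvenanceLedger()
ledger.attest("a human writer", "Trust is built among people, not technologies.")
print(ledger.attributions("Trust is built among people, not technologies."))
```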

Back when I was filming for Acceleration, I talked to many smart people about their vision of an AI-driven future (or nightmare, as the case may be). That was 10 years ago now, and the way technology has developed still comes as a kind of surprise; not that what we have now wasn’t predictable, but how it feels to live in a world where almost all structures of human trust have been torpedoed. That we could be in a situation where simply not being able to know whether something has an individual human source could be the destabilizing force that pushes us over the edge, rendering our main evolutionary advantage (cooperation) moot. On the other hand, maybe we’ll band together in the face of non-humanity becoming a force of dominance. I am living into the latter future.

How distributed governance actually works

I’ve been reflecting over the past few days about how amazingly unusual it is that for most of my life I’ve been a participant and member of groups that operate within large-scale distributed, anti-dominance systems of governance.

Wow, that’s a mouthful.

I’m feeling like the fish who someone asked, how’s the water? And all of a sudden I’m like holy moly, I am totally wet!

I grew up in the unprogrammed version of Quakerism, found healing in a program for families and friends of alcoholics, and generally have gravitated to groups that hold space rather than impose order.

I’m looking at these different [floundering for the right word- it’s not organizations or communities but it’s in that direction] structures? and realizing there is certainly a pattern, a blueprint if you will, for how humans can successfully come together and find meaning, camaraderie, connection, support, trust, a shared sense of purpose.

The orientation towards anti-dominance underlies the success of these communities.

Stay tuned for a deeper dive into each group/community. Here are the commonalities I see repeating over and over:

  1. Trustbuilding with bounded open sharing. A space where people speak and are witnessed without dialogue or helping. A place to learn empathetic listening and keeping in one’s own experience.
  2. A focus on individual choice and responsibility. Everything is voluntary, and each person’s perspective is seen within their own experience. There are common themes and recommended practices but not ‘rules.’
  3. Fellowship or more informal kinds of connection that take place outside the “official” time held by the group.
  4. Autonomous smaller groups within the overall organization. Each group sets its own norms and makes its own decisions through unity-focused, knowledge-based decision-making. However, there are constraints that govern whether such a group is considered part of the larger organization.
  5. Leaderlessness at the group level, though work is done by rotating volunteers who support logistics, communication, and the collection and allocation of resources. These volunteers are known to the group and typically go through some kind of vetting or training process, though it can be very informal (voting by acclamation, being paired with a prior volunteer, etc.).
  6. Wide ranges of opportunities to volunteer in the group and beyond-the-group with different levels of commitment or experience needed. Even at the individual group level, there is typically no need for any particular person to be present in order for a meeting to happen.
  7. Delegate-based governance at beyond-the-group levels, where the delegates meet more infrequently and do not make decisions for the groups, rather they act as conduits between the individual groups and the collective-of-groups at large. They do make decisions about their respective layer of the organization, such as allocation of resources, outreach efforts, organizing events and fundraising. These business meetings are typically open to all members even if they are not designated delegates or position-holders.
  8. Trusted committees, temporary or with rotating membership, formed to complete projects; these may exist at any layer of the collective/organization.
  9. A collective of collectives at the broadest level, which serves to implement decisions by delegates, keeps track of and communicates resource allocation, creates and disseminates general materials and literature based on the collective purpose and intentions of members in aggregate (the unifying purpose). The service of this kind of body only becomes necessary with bottom-up growth, i.e. there are more groups with delegates than can comfortably participate in decision-making. For the most part, these are regional but in global communities that function online, this might be structured differently.

With these components in mind, it’s quite interesting to consider how we might build tools that support various aspects of the community. I’ve seen a number of technologies specific to governance, but I think the more interesting part is the trust-building and fellowship, which currently takes place online with tools that are not designed for the purpose.

Spoiler alert! I think we can use a broader understanding of the way these communities function to inform what kinds of online containers might serve them more effectively.

Tipping the scale

Lately I’ve been thinking about “scale” and I have some questions.

  • Can tech scale without VC backing?
  • Is scale inherently problematic? Does it diffuse connection and lead us to behave in ways that are unnatural and possibly polarizing?
  • Is scale mainly held as a good thing because it is a reverse wealth distribution scheme (taking money or labour from regular people and turning it into wealth for a few)?
  • Can we build technologies that scale but actually encourage people not to? (Supporting clustering and “the splinternet” in service of building spaces where trust is possible)
  • How is scale different if it’s open source and not owned by a corporation or a government?
  • How can we scale things without applying a layer of dominance in some form or another?
  • What has been the result of scale when the scale is not extractive (and do Wikipedia, Mozilla, Signal, or perhaps even services like Libby or Kanopy qualify as non-extractive)? Could these be models to emulate?
  • Why isn’t there a definition of “scale” in Merriam-Webster that reflects this notion of expansion? Is scale just nonsense techspeak?

Some of the people I have talked to about raising money to support prosocial technology face the problem of wanting to build something that likely won’t scale but also isn’t expensive to build and maintain. Perhaps community support and platforms like Open Collective will make that kind of technology more feasible.

My goal is to build something that does scale, at least in the sense that I can’t imagine any reason why every adult, or maybe even young person, wouldn’t benefit from being part of a community oriented around connection, sensemaking, support, and transformation. Not just benefit from, but desperately need, though it’s also true that our current scaled tech mostly serves as a distraction from the feelings that unmet need provokes.

It’s an interesting conundrum: can we create containers for community with enough scaffolding to support self-responsibility and prosocial interaction, yet open enough for lots of different kinds of communities with lots of different sets of norms and values?

The Silicon Valley version of scale is “blitzscaling” – a term that inherently reflects violence. The purpose of this scale isn’t to be of service to more people, it’s to eliminate the possibility of competition. Where once you might think of a platform as being a welcome place for people to contribute and collaborate, it’s now more of a word to imply ever-expanding reach, “organizing the world’s information” and the like.

What does organic scaling in tech look like? Growth that isn’t juiced by dark patterns, indifferent to privacy, or driven by unsustainable spending? Is there such a thing in a platform world? It seems like a lot of the things being touted as the future are just the wolf in ethically-sourced shearling. Fundamentally, the kind of scale I hope for comes from a kind of emergence, not a strategy.

Audiobook note-taking

In April 2020, we adopted our ‘Covid dog’, a rescue who needed a lot of TLC. There’s no better illustration of how love heals trauma than this little guy. He’s slowly becoming affectionate and playful, at least some of the time. He also spends a lot of his time lying under the table but that’s probably sensible in an earthquake zone like Portland.

Since he entered my life, something else has also happened: I started listening to a ton of audiobooks. Long walks are perfect for this. I find audiobooks to sit somewhere between reading and having a discussion, but one thing is annoying: when an audiobook provokes a thought I’d like to return to, that thought is probably destined to go uncaptured.

I took Building A Second Brain for the first time a while ago now, and though I’ve refreshed my understanding several times in later cohorts, I have not adopted a systematic note-taking practice. In a way, this daily writing is as close as I get, and it definitely skips some steps of the CODE (Capture, Organize, Distill, Express) process the course centres around. Is this “bad?” Not exactly. I think the key takeaway of BASB for me is “focus on shipping”, or, in maybe less bro-engineer speak, “move projects ahead and don’t get too caught up in the planning.”

By failing to prepare, you are preparing to fail.

-Benjamin Franklin

I love me some planning. It is valuable to plan, if only because it can help to focus, prioritize, and catch some blind spots. That said, thinking doesn’t manifest the world I want to be in. That requires action, process, practice. For a lot of things, the preparation is life. So my current PARA system (Projects, Areas, Resources, Archives— oh, the acronyms!) is basically all P – a few key things I want to make sure I’m moving forward on – plus random tasks that I guess would be categorized in Areas.

That said, it would be so much more awesome if there were a way for audiobook listening to be more capturable, so here’s my proposal:

  1. Audiobooks should have universal protocols for bookmarking. No matter where I listened to an audiobook, I should be able to take my timestamps and apply them elsewhere.
  2. Audiobook bookmarks should sync with all eBook bookmarks. When I bookmark in the audio, I should be able to open an eBook and find the spot easily so I can grab the text in a note. (A rough sketch of such a bookmark record follows this list.)
  3. The device I listen to the audiobook on should allow me to use voice controls to pause and take notes. As someone living in a place where cold and rainy is pretty common, I can’t keep a little notebook or even use a notes app without pain or at least basic dysfunction (have you ever tried typing with a wet screen?). This seems like it’s the furthest out, but why? One reason: when a phone is playing audio, it isn’t also listening. If a watch could solve this problem, I would fork out for it in a heartbeat, but I was disappointed when I got an Apple Watch and realized it wouldn’t work with Libby nor capture audio separately from my phone.
  4. Stop the terrible BS of walled gardens around eBooks. I get it, everyone believes DRM can keep people from “abusing copyright,” but it’s nuts to me that I have books on my Kindle that I can’t read on my reMarkable, audiobooks in Libby that can’t talk to eBooks from the same library, and Scribd, which rocks for all the content but doesn’t work on a Kindle. Everything about this is dumb. Perhaps THIS is a good use of the blockchain: allowing copyright holders to license content in the various ways they can today, but letting those systems communicate because the licenses are tied to a user. (I’m inclined to say this will never happen because capitalism. But in a post-extractive world, I dunno, maybe?)
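
To make points 1 and 2 concrete, here is a rough sketch of what a portable bookmark record could look like. The format and names are invented; it assumes editions share an identifier (an ISBN, say) and that narration pace maps roughly linearly onto text position, which is crude but enough to land a reader near the right spot for fine adjustment.

```python
# A minimal sketch of a universal audiobook bookmark (invented format):
# one record carries the audio position plus enough context to estimate
# the matching spot in an ebook edition of the same title.

from dataclasses import dataclass, asdict
import json

@dataclass
class UniversalBookmark:
    book_id: str          # shared identifier across editions, e.g. an ISBN
    audio_seconds: float  # position in the audiobook
    audio_total: float    # total runtime, so position works as a ratio
    note: str = ""

    def approx_text_position(self, char_count: int) -> int:
        # Crude audio-to-ebook mapping: assume uniform narration pace.
        return int(char_count * self.audio_seconds / self.audio_total)

    def to_json(self) -> str:
        # A portable representation any reading app could import.
        return json.dumps(asdict(self))

bm = UniversalBookmark("978-0-00-000000-0", audio_seconds=5400,
                       audio_total=36000, note="idea to revisit")
print(bm.approx_text_position(char_count=600_000))  # ~15% into the text
print(bm.to_json())
```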

Imagine you write a book, and you release it in the various ways one does now:

  1. Paper book
  2. E-book for purchase
  3. E-book as a license (like on Amazon)
  4. E-book licensed to a distributor (a public library, a subscription service)
  5. E-book licensed to an organization (corporate, educational)
  6. Audiobook for purchase (note: does this even exist?)
  7. Audiobook for individual license (Amazon, Apple)
  8. Audiobook licensed to a B2C distributor (library or subscription service)
  9. Audiobook licensed to an organization (corporate, educational)

What if these licenses were on some kind of chain, so that whenever an individual accessed the title, they could associate it with another format they have access to? This is convoluted, not for technical reasons but because copyright and licensing is a mess. Wouldn’t there be utility, though, for the copyright holder (ideally the author) in having a way to see how the book was being read, maybe even some way to reach those readers? A sketch of the basic idea follows.
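
Here is a minimal sketch, again with invented names (LicenseRegistry, grant), of license records tied to a user rather than to a platform. In practice each grant would presumably be a signed, append-only ledger entry rather than an in-memory dict, but the point is the query: which formats of this title may this person open?

```python
# A minimal sketch of user-tied content licenses (all names invented):
# grants record who licensed which format of which title, so bookmarks
# and positions could follow the person across formats.

from collections import defaultdict

class LicenseRegistry:
    def __init__(self):
        # user -> book_id -> set of (format, licensor) grants
        self.grants = defaultdict(lambda: defaultdict(set))

    def grant(self, user, book_id, fmt, licensor):
        self.grants[user][book_id].add((fmt, licensor))

    def formats_for(self, user, book_id):
        # The cross-format question a client would ask before syncing.
        return {fmt for fmt, _ in self.grants[user][book_id]}

reg = LicenseRegistry()
reg.grant("kim", "978-0-00-000000-0", "audiobook", "public-library")
reg.grant("kim", "978-0-00-000000-0", "ebook", "public-library")
print(reg.formats_for("kim", "978-0-00-000000-0"))  # {'audiobook', 'ebook'}
```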

I do wonder sometimes if it’s a good thing to imagine business solutions to problems that only exist because of other businesses, I mean, it’s easy for this to all end up creating more bureaucracy rather than creating a clearer path. But if we had a solution like this, would we even need copyright? (If we’re pretending that copyright serves its stated purpose rather than being a tool of power and violence). Can copyright survive Dall-E and ChatGPT anyway?

I like thinking of even more interesting possibilities too, like what if as I’m listening to an audiobook, I could be connected with other people who have recently read the same thing and have a little 10 minute book club? It’s interesting to me that we don’t have a better book-based social media, since people who read books potentially are also more able to have prosocial conversations.

How are you doing your audiobook note-taking? I’d love to know.

Calendar Games

My partner works as a project manager, which is funny for a few reasons. One, because in my world of product management, there’s a palpable shudder when the job is confused with project management- PMs of the product variety should spend their time determining what to build, not managing the building process! Two, because nothing seems to stress this person more than what they refer to as “calendar games.” (Ironically, they’re pretty into other kinds of games, such as ridiculously complex board games or totally uncomplex retro arcade games).

Calendaring is a weirdly hard problem, for a technology that has been around at least since the industrial age and presumably far earlier. You have a calendar, I have one, and if we need to find a time when both of us are free, it seems pretty intuitive that simply laying one over the other would expose availability.

But there are complications, especially when there are more than two of us. For one thing, I might be in a different time zone. And offsets change and complicate things further (yet another reason why daylight savings is outmoded).

OK, but Calendly and similar services have basically solved this problem by letting you limit your availability not just by claimed slots, but also by your individual schedule. And Calendly works very well for one person choosing a time on another person’s calendar.

This still gets messy when say, you travel and you’re in another time zone, but that’s possible to deal with if annoying for either the person with the calendar or the developers trying to accommodate the traveler.

There’s another challenge, this time not so technical. We are, as a rule, very emotionally invested in the idea of “owning our own time.”

Every day in my meditation practice space, I hear, “Time just is,” but it’s still quite hard to turn over the perceived control of that time to someone else. Conversely, and as reflected by a social media controversy, some people take offense at being asked to select times based on when another person is available, feeling that whoever set up the calendar is enforcing their own boundaries as an act of dominance. Anger is a boundary energy, as they say, so it’s no surprise people get irritable when trying to converge on mutual time without feeling controlled in one way or another.

Along with this generalized frustration, we also tend not to want to keep the same personal and work calendars, nor make plans the same way with work and personal contacts. For example, I tried to make a Calendly event for a friend, but discovered it would schedule us within business hours unless I could figure out how to block the personal things on my ‘work calendar’, many of which are not as specifically time-blocked, like “going out with a friend.” And it could get very awkward if people at your work can see what’s on your calendar.

But for those of us who find services such as Calendly a welcome relief from sending emails back and forth, it feels mysterious why there’s been almost zero innovation in finding times for multiple people to meet. Partly this is a function of the aforementioned power dynamics- who gets to be the one determining the time for everyone? Because if it’s not one person, then things just return to endless group email chains with one person inevitably coming in at the last minute to spoil everything.

Partly though, it’s something else: as far as I know, there is no service that can look at multiple people’s calendars across different organizations and calendar clients and tell you when everyone is free. I am guessing there may be some complexities in the technology that make this hard, but really? Up to ten people’s calendars in a given week should be within the capability of a not-too-complicated algorithm to evaluate. (No, you tell me: how hard is this, really?)
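
For what it’s worth, here is a minimal sketch of that not-too-complicated algorithm: pool everyone’s busy intervals, merge them, and read the gaps off as common free slots. Times are minutes from the start of the day; a real service would first pull busy blocks from each calendar API and normalize time zones, which is where the actual difficulty lives.

```python
# A minimal sketch of finding common free slots (invented interface):
# merge all busy intervals across people, and the gaps between merged
# blocks are the times when everyone is free.

def common_free_slots(busy_by_person, day_start, day_end, min_length=30):
    # Pool and sort every busy interval, regardless of whose it is.
    busy = sorted(iv for person in busy_by_person for iv in person)
    free, cursor = [], day_start
    for start, end in busy:
        if start - cursor >= min_length:
            free.append((cursor, start))   # a gap before this busy block
        cursor = max(cursor, end)          # merge overlapping blocks
    if day_end - cursor >= min_length:
        free.append((cursor, day_end))
    return free

alice = [(540, 600), (720, 780)]   # 9:00-10:00 and 12:00-13:00
bob = [(570, 630)]                 # 9:30-10:30
print(common_free_slots([alice, bob], day_start=540, day_end=1020))
# [(630, 720), (780, 1020)] -> 10:30-12:00 and 13:00-17:00 work for both
```

The preference rules I mention below would just mean scoring these surviving slots.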

Perhaps the last piece of the problem is that while people may have space in their calendar or have meetings booked, there’s another wrinkle: what I might loosely characterize as FOMO. Yes, I may have a meeting booked, but if something more compelling comes along, I might want to move it. As a Calendly user, I’m often caught by mildly irritating surprise when someone reschedules an hour before we were to meet, leaving me with a 45-minute gap that doesn’t lend itself to actually getting things done. But as a researcher, I quickly gave up on resenting rebookings or even no-shows, since they’re a fact of life. Still, when you have multiple people in a meeting and a key person develops a conflict, it quickly becomes everyone’s headache. Even this seems much better to hand to an AI than to spend additional time negotiating a new slot.

I propose we have an AI that is granted more limited views of people’s schedules, maybe for just enough time to cross reference a week and surface availabilities, with rules input by the players about how to choose the preferred time among what’s possible. I suppose if you wanted to get really fancy, it could even figure out travel time and factor that in.

Could this work? I am guessing the APIs are available if we can have Calendly connections with the various clients. So how about it? Surely, we can surpass Doodle and make meeting as easy as just showing up?

I predict that as we figure out calendar games, we’ll also move away from separate productivity tools and have planning, goal setting, tasks all live in the calendar. Right now this is pretty terrible on Google Calendar, but someone will come along and make this new category of time-management exciting. Or we’ll end up working for our robot overlords when they figure out how to schedule us. Either way, I won’t be filling in Doodles.

Could AI solve ADD?

Does technology work for people, or the other way around? We are creative, collaborative, inventive beings, but we’re also social, easily influenced, and normative. What we’ve imagined ourselves “creating” has often been more of a prison we’ve built around ourselves.

Consider Yuval Noah Harari’s provocative idea that wheat domesticated people rather than the other way around.

Think for a moment about the Agricultural Revolution from the viewpoint of wheat. Ten thousand years ago wheat was just a wild grass, one of many, confined to a small range in the Middle East. Suddenly, within just a few short millennia, it was growing all over the world. According to the basic evolutionary criteria of survival and reproduction, wheat has become one of the most successful plants in the history of the earth.

In areas such as the Great Plains of North America, where not a single wheat stalk grew 10,000 years ago, you can today walk for hundreds upon hundreds of kilometers without encountering any other plant. Worldwide, wheat covers about 2.25 million square kilometers of the globe’s surface, almost ten times the size of Britain. How did this grass turn from insignificant to ubiquitous?

Wheat did it by manipulating Homo sapiens to its advantage. This ape had been living a fairly comfortable life hunting and gathering until about 10,000 years ago, but then began to invest more and more effort in cultivating wheat. Within a couple of millennia, humans in many parts of the world were doing little from dawn to dusk other than taking care of wheat plants. It wasn’t easy. Wheat demanded a lot of them. Wheat didn’t like rocks and pebbles, so Sapiens broke their backs clearing fields. Wheat didn’t like sharing its space, water, and nutrients with other plants, so men and women labored long days weeding under the scorching sun. Wheat got sick, so Sapiens had to keep a watch out for worms and blight. Wheat was defenseless against other organisms that liked to eat it, from rabbits to locust swarms, so the farmers had to guard and protect it. Wheat was thirsty, so humans lugged water from springs and streams to water it. Its hunger even impelled Sapiens to collect animal feces to nourish the ground in which wheat grew.

The body of Homo sapiens had not evolved for such tasks. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks, and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped disks, arthritis, and hernias. Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us. The word “domesticate” comes from the Latin domus, which means “house.” Who’s the one living in a house? Not the wheat. It’s the Sapiens.

Yuval Noah Harari, Sapiens: A Brief History of Humankind

When I consider what I spend my own time on, quite a bit of it feels like me working for my technology. I receive hundreds of emails a day (which somehow I’m to blame for), most of which I never open, but which still require quite a lot of time: sifting through them, managing them, responding, adding things to my calendar, and, in the saddest part of the manipulation, checking to see if there’s more. And even with all that dedication to its maintenance, every so often I will miss an email that matters to me, and fail to respond to someone or to take an action in my own interest.

I have ADD, so email is not all I forget. Even if I weren’t enticed by the badges and notifications on my phone (which are mostly turned off; I can’t imagine what it would be like if they were all allowed), I forget what I went upstairs to get on a pretty regular basis. I will literally say out loud “don’t forget to x” and an hour or a day later be like, “how did I forget that thing?” To some extent, this is a product of loving to do a lot of things and having many things to remember. Some of it is my biology.

But when I have tried to use technology to solve this problem, it inevitably fails. Part of the reason, I think, is that using apps produces, too explicitly, the feeling of having a new boss in the form of some checklist, chart, or calendar.

List interfaces inevitably lead to to-dos too numerous to fit into my waking hours. And the Resistance, which Steven Pressfield has so eloquently outlined in a few books, seems to feed on the very notion of having a time, set by me, to do things that only I am accountable for, which is like 80% of creative work. Suddenly I find myself in that “just one more little thing and then I will do the thing I scheduled” mode, or, you know, reading Slack messages. (Slack being another prime example of a technology I’m working for without compensation.)

So here’s the question. It seems plausible that there’s a near future when I can assign AI to sort through my email, to keep me abreast of information I’m interested in, to help me avoid missing an event on my calendar, to do all the work for other technology I’m currently volunteering for. But I have a sneaking suspicion that this line of thinking may lead to something even worse.

When I am in goal-setting conversations (BASB, communities, coaching, ‘productivity porn’) what I notice is how much of what people aspire to feels kind of… unconsidered? Why is this thing important to you? And of course, we live in a culture where people like me who actually can self-determine what they want to do have this privilege as a result of the mostly non-self-determined work of other people.

I suspect we all mostly want to feel loved, like we matter, and like what we’re doing with our time has utility or even service to others. Deep down, under the stories about having stuff, or being fit, or being enlightened, or whatever your flavour of goal looks like, this is what humans crave when their physical needs are met and they have done the work of healing from trauma.

Will technology make it easier to be in that state or harder? So far, every new “age” seems to be filled with innovation that largely takes us out of that state. So being hopeful about AI does feel a little naïve.

But perhaps what’s going on isn’t a problem with technology itself but with the violence of the systems feeding it, and that it reinforces. What would it look like for technology to emerge from people and communities who have done the work to heal and to recognise the deep trauma of living within “imperialist white supremacist heteropatriarchy”?

Even in the more “progressive” spaces of tech makers I’m in, it’s mostly advantaged people (men, white people, Westerners, the elite-college-educated) who predominate. I mean, I am sure there are spaces I’m not in where there are technologists with different demographics (please invite me if I’m welcome!), but looking at the leadership in global tech and the funding numbers, it’s hard to imagine that AI’s current developers are not continuing to operate with pretty gigantic blind spots.

Still, I find myself pretty seduced by the idea of externalizing my executive function to the degree that I can remember things without having to remember to remember them- but what happens when I outsource the process of deciding what exactly is worth remembering? In the end, the answer is probably “embrace forgetting” and appreciate where I am and what’s around me without the endless overlord of achievement made manifest in code.