As someone who has been around in the tech freedom space for a while, though always a bit on the fringes (‘the fringe’ is dead center I suppose), I’ve been noodling on the idea of what it might look like to have control of what one shares with websites, apps, platforms, or even other people online.
It’s interesting to me (though not exactly a surprise) that so many developers approach the problem by automating as much as possible and taking people out of the picture. I read debates about “zero knowledge” that largely focus on whether the mechanisms employed are actually zero knowledge, but what problem is that trying to solve?
There is no doubt that real anonymity has positive utility in certain situations, primarily where repressive state surveillance is a factor. But the downsides of real anonymity are also real and shouldn’t be glossed over. How can we fight repressive state surveillance without orienting everything we build around that one problem?
I’m not just talking about the ways anonymity can facilitate human trafficking, child abuse, or terrorism. Those harms are real, and ideally we do not build technology that enables them.
But we have a bigger issue. When we consider what we need to build functional communities, democracies, and relationships, trustless systems are not just counterproductive, they create a false sense of security and safety.
Trust-breaking is not a technical problem, it’s a human problem. As we find ourselves in less and less authentically human contexts (interacting with ChatGPT, deepfakes, bots, etc.), we’re in dire need of trusted systems and identity management that help us verify our mutual humanity and trustworthiness as people.
One idea for this might be identity management that happens within actual human communities, where, as someone who knows me, you can verify my identity. This doesn’t require a state-level or sanctioned identity, but it does require people vouching for one another. Presumably there would need to be some threshold for this kind of verification (how many people would it take?) and a complementary technology layer to support the process. We’d need to consider accessibility, but I think the genius part of this kind of scheme is that it requires people to be in relation to one another, and that might mean creating new kinds of interpersonal networks to accomplish verification.
Imagine, for example, that you’re unhoused, or living with a disability, or don’t have regular access to a computer. How might a human-trust-building identity system serve these use cases? How could this work in a decentralized way, so that identity could be community-verified for communities you participate in, and proxy-verified by having one community trust another’s verification? Is it necessary to have a universally-verified identity or simply one that allows access to your particular contexts?
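To make the shape of that idea a little more concrete, here is a minimal sketch in Python of community vouching with a threshold and proxy verification between communities. Everything in it is an assumption for illustration: the `Community` class, the `vouch`/`verifies`/`accepts` names, the threshold of three, and the example community and person names are all hypothetical, not a claim about how such a system should actually be built.

```python
# A toy sketch of community-vouched identity. All names, the threshold value,
# and the trust relationships below are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Community:
    name: str
    members: set[str] = field(default_factory=set)
    # vouches[person] = the set of members who have vouched for that person
    vouches: dict[str, set[str]] = field(default_factory=dict)
    # other communities whose verifications this one accepts (proxy trust)
    trusted_communities: list["Community"] = field(default_factory=list)
    threshold: int = 3  # how many vouches count as "known here" (assumed value)

    def vouch(self, voucher: str, person: str) -> None:
        """A member personally vouches for someone they know."""
        if voucher not in self.members:
            raise ValueError(f"{voucher} is not a member of {self.name}")
        self.vouches.setdefault(person, set()).add(voucher)

    def verifies(self, person: str) -> bool:
        """Locally verified: enough members have vouched for this person."""
        return len(self.vouches.get(person, set())) >= self.threshold

    def accepts(self, person: str) -> bool:
        """Verified here, or proxy-verified by a community this one trusts."""
        return self.verifies(person) or any(
            c.verifies(person) for c in self.trusted_communities
        )


# Usage: a neighborhood group vouches for someone; a mutual-aid network that
# trusts that group accepts the verification without requiring new vouches.
neighborhood = Community("Elm Street Neighbors", members={"ana", "bo", "chi", "dee"})
mutual_aid = Community("Westside Mutual Aid", trusted_communities=[neighborhood])

for voucher in ["ana", "bo", "chi"]:
    neighborhood.vouch(voucher, "river")

print(neighborhood.verifies("river"))  # True: met the local threshold
print(mutual_aid.accepts("river"))     # True: proxy-verified via a trusted community
```

Even in a toy like this, the interesting questions live outside the code: who counts as a member, how a community decides to trust another community’s verification, and what happens when trust is withdrawn.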
In general, I believe trust is built among people, not among technologies. It happens in small groups, in situations where we are actually known and show up in trustworthy ways. We just have these crazily complicated, nested systems to handle more and more scale, and therefore less human trust. These systems exist to help giant corporations and states extract money and time, not necessarily because they actually make our lives better.
I want to build ‘identity systems’, and technology in general, that look at the world as it could be, that give up on trying to fix something that was never functional in the first place, and that take a leap into the unknown. We’re at a point of singularity anyway, so why not start from scratch when it comes to structures that support our collective humanity?
If you want a different world–and if you’re about human liberation you do–you’ll have to start thinking about things from a different perspective. Not how can we use the technologies we’re inventing for good, but what does a world look like that truly reflects freedom?
As the awesome poet, intimacy organizer, and abolitionist Mwende Katwiwa, aka FreeQuency, pointed out on the Emergent Strategy podcast:
When I say ‘better,’ I don’t mean it will be like you’ll get everything that you have here and then some and it will be great… we might never get some of the shit we were promised if we give this world up, but I believe there are things that are better than what this world has actually given us, that are more equitable, that feel better, not just when we consume them, but when we are in relationship, they feel good for us in our collective bodies… Are you willing to lose all of this and believe there is something better that we can’t even actually imagine? (That’s the wildest part about it). You will have to be able to let go of this shit without tangibly being able to see what’s on the other side and say it’s worth it.
FreeQuency