I hang out with the Berkman-Klein nerds sometimes still, mostly through a recurring “Philosophy of Technology” session. Reed sent me this article a while back on the misuse risks of AI, and we got sidetracked into how we try to mitigate the way technology amplifies human intent (including harmful intent) through law and other agreements. E.g., you agree to abide by traffic laws (a reduction in autonomy) in order to more safely get from one place to another (an increase in autonomy). This of course made me think about one of the main reasons I’m an anarchist: governments can cause large-scale suffering in a way that less organization prevents, and I think we can have infrastructure without control (thanks, Murray Bookchin). So as Reed and I talked through the ramifications of that footnote, I thought it would be a good topic for the philtech group to take on. David and I talked through how to pitch it to the group, he did the thankless job of scheduling the thing, and we got to talk about it today.
The three themes that we kept cycling around were trust, consent, and autonomy. I’ll then end up back on my soapbox about complexity, which also came up.
Trust, Consent, and Autonomy
We all talked a lot about whether the conditions would ever exist for us to trust an AI to make choices for us (our main talking point for “autonomy”). That led into how AIs are black boxes… but so, too, are humans. We talked some about the different ways that trust is created and used by, say, a doctor, and whether making a choice based on the data they give you counts as autonomy, or whether that thumb on the scale removes it. Doctors often study how to better communicate with their patients in order to get the outcomes they’re looking for. What’s different here?
How much autonomy does one have when consenting to something? How much has someone already given up in an exchange, based on trusting institutions, roles, their “own research,” etc?
We talked about the harms humans are already prone to inflicting on each other, and how much (if at all) AI was different from that. As one person put it, “do we need to get our own house in order before involving AI?”
Complexity
I see most AI as adding complexity to an already complex world, when nearly everything else we do (especially tool use) is about increasing predictability instead.
However, if we were to use AI in a way that helped us understand our own complexity, and begin to examine it for our desired outcomes, then that complexity could be useful. The “hungry judges” study I opened this conversation with (often cited as evidence that human error means we should remove humans from the loop) has since been discredited, but I still think bringing technology into decision-making loops is valuable, so long as it’s a partner to us rather than a way to offload cognition (something that already happens).
Jeffrey had some really good points about compartmentalizing where AI factors come in, so you can assess each piece individually and tweak it, rather than the entire system being a black box. And I like that for helping us examine ourselves, too.
Links from our time together
- Misuse risks (already linked)
- AI Safety is a Category Error (from Jeffrey)
- More people trust chatbots than elected leaders
- Public directory of AI agent souls (from Kate)