I’m excited to be going to conferences again, after 5 years of not really doing any. I like the thrum of so many people in one place, conversations with random folks in the lunch line, and seeing old friends. The one I went to this week was [un]prompted, about the overlap of AI and security. I saw some tried and true exploits brought to new scale with AI, and I heard about a lot of potential routes to securing existing code bases with AI. I also saw a fair amount of what I’d call “put a bird on it” approaches to AI.
I’m walking away with two big questions (beyond the preexisting “where is all this energy coming from?” and “how does wealth redistribution work with these new models?”), one about complexity and the other about trustworthiness.
What complexity is worth taking on?
Mudge, I think somewhat famously, long ago pointed out that the likelihood of exploits grew nonlinearly with the size and complexity of a codebase. In contrast, the exploits themselves remained steadily small. So one of my sniff tests now for how load bearing a system can be has to do with how complex and tested it is.
The technical talks I saw at [un]prompted had to do with increasing complexity, not decreasing it. They pile MORE layers on; they don’t remove the unknown or unnecessary. The closest I saw to removing complexity was an analysis of proliferated documentation to produce a summary and a (new) single source of truth. I’d like to see more adventures in “cheap” refactors that simplify and streamline code bases.
I’m the vendor now
The conference organizers did a fabulous job on many fronts, but they did not do a good job of stopping sales pitches from happening on stage. So many of these amounted to “your vendor for $thing is slow and doesn’t meet your needs, but ✨our AI can solve this for you✨” which is just so boring.
Beyond being boring, however, I truly wonder how we can trust any of these providers not to inject backdoors (intentionally or otherwise) when their values so clearly scream that they’re open for business on every front. So saying “hey, just ask for what you want and trust the outputs!” seems shady AF. And if we do what some suggested and make agents fully autonomous, we’d never have cause to pause and reflect on (let alone catch) this happening.
What I am interested in using these things for
I’m interested in reviewing code humans don’t have time for. Several of the better talks shared the goal of complete code coverage. I’m also interested in putting in guidance and nudges towards doing better work (either from humans or from robots), rather than adding layers on other layers. I’m interested in help for what we know needs doing, and investigations in formats that humans are bad at and machines are good at.
From this conference, I’m now prepared to spend even more time on evaluation than I expected to (50% after baseline systems are in place). And I have new ways of talking about where to interject to inspect the system instead of just trusting it’s working.
I now have more supporting evidence for continuing to think that a workflow or premise needs to be figured out before automation, which happens before AI tooling. And that organizational structures need to allow for this happening at a deep layer, not as something that gets tacked on later as an afterthought.
It also seems like we’re moving away from “zero click attacks” towards “zero user intervention attacks” – what can we get agents to do without you noticing?
I really like this post. It speaks to some very nebulous thinking I’ve been doing along the lines of, “AI is here to stay, so how do we make it really useful?” and that includes cutting through the hype that is actually BS. An article in the most recent Harper’s mag covered Reindustrialization 2.0, a conference held in Detroit, and one of the takeaways that sticks with me was, “Wow, if AI is so inevitable, why are these people trying so hard?” And in reading your thoughts, it makes me think that AI is adding to complexity to justify its existence, so it’s very human savvy to ask, how do we use this tool to make things LESS complex. A Peace Corps example, composting toilets can be great. They can also be terrible. There’s a bit of a learning curve, but when the curve is followed, there’s responsible waste management that ends in a sterile, useable product. Yay! Interesting caveat, we came back from PC and decided to use a composting toilet in our house. It was TERRIBLE. Part of that was the added complexity of a heater and rotation components that got in the way of the product working the way it was intended. The less complex your composting toilet, the easier it will be to get the sterile, useable product. And I would forevermore like to talk about AI in terms of managing actual shit.