Mutual Aid and The Crowd

Months ago, one of my friends at the Naval Defense University sent me an article from Scientific American on how social media is making crowds less predictable. It hit a nerve with me; my response was that "social media makes crowds more predictable to themselves." The article talks about uprisings in various countries, popular choice, and collective action. It also cites the following study, shoehorning collective action into a hierarchical framework, which is indicative of how it misses the point.

Matthew Salganik, Peter Dodds, and Duncan Watts conducted large-scale experiments to investigate the effect of the strength of social influence on collective outcomes. People were given a list of previously unknown songs from unknown bands. They listened to the songs and downloaded them if they wanted to. In the independent condition, people did not see other people's choices. In the social influence condition, people saw how many times each song had been downloaded by others. The collective outcome in the social influence condition was more unequal: popular choices were much more popular under social influence.
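The dynamic at work can be sketched with a toy simulation. This is not the original experiment's design or data, just an illustrative cumulative-advantage model: each simulated listener picks a song weighted by its intrinsic appeal, and in the social condition the weight is also boosted by prior download counts. The song counts, appeal range, and reinforcement rule below are all assumptions for illustration.

```python
import random

def gini(counts):
    """Gini coefficient of non-negative counts (0 = perfectly equal, ~1 = one item takes all)."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula: G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, x sorted ascending.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

def run_market(n_songs=20, n_listeners=2000, social=False, seed=1):
    """Toy music market. With social=True, choices are reinforced by prior downloads."""
    rng = random.Random(seed)
    appeal = [rng.uniform(0.05, 0.5) for _ in range(n_songs)]  # intrinsic quality (assumed)
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        if social:
            # Cumulative advantage: visible download counts multiply each song's draw.
            weights = [a * (1 + d) for a, d in zip(appeal, downloads)]
        else:
            weights = appeal
        song = rng.choices(range(n_songs), weights=weights)[0]
        downloads[song] += 1
    return downloads

indep = run_market(social=False)
soc = run_market(social=True)
print(f"independent-condition Gini: {gini(indep):.2f}")
print(f"social-condition Gini:      {gini(soc):.2f}")
```

Run with these settings, the social condition produces a markedly higher Gini coefficient: early random leads compound, so the popular become much more popular, mirroring the inequality the study observed.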

Crowds are only less predictable to the outside. They are becoming more predictable to themselves. I'm not talking about ranking, not talking about decision, simply speaking to awareness and therefore paths to action. This, to me, is related to the core disconnection in disaster response between the official response's view of social media/The Crowd as a resource to be tapped for situational awareness, and the mutual aid of The Crowd as self-organization. Formal organizations tend to think of The Crowd as an input function to their workflows. Their concerns therefore revolve around verifiability, bad actors, and predictability. One manifestation of this is official responders grumbling that roads self-mapped in remote places via OpenStreetMap don't fit into their data hierarchies. That is not the point of the maps being built.

These are identity politics on the scale of a community. These are people using a tool to their own ends, to support themselves, to gain better understanding of their world, not as a resource to be tapped. It is a group of people talking to itself. If institutions exist to serve collective purpose, their role here is to provide institutional knowledge (with awareness and self-reflection of bias), guiding frameworks (possibly), and response at scale (upon request). In this way, we can benefit from history and iterative learning while escaping paternalistic ends.

Which brings us to responsible data practices. If data must be collected on a group of people, either ambiently through things like the Firehose or directly provided, the output should be useful to those people. This is the difference between ethical digital response, which seeks to integrate multiple datasets for better situational awareness, and what the NSA does. For instance, if you're collecting information on homeless shelters and the movements of homeless individuals, that information should be usable by those folks to self-organize. Otherwise we're just recreating the systems we've been trying to get away from. Worse, we're making them more robust with new technologies, their biases hidden away in algorithms.

As a crowd comes to know itself better, intelligence can become an embedded, rather than external, component. We start to see many eyes on the bugs of society.