The Fog of War in the Age of Machines

We may not know what we think we know

The stories we tell ourselves about automation and artificial intelligence are themselves a distraction from the political and economic changes that are underway.

The rapid rate at which artificial intelligence and automated decision making are spreading represents a kind of revolution in how societies choose to govern themselves. Take a look at DeepIndex, the growing list of areas and applications involving AI.

This boom in machine learning applications is also producing a boom in demand for ethicists and lawyers to help adjust existing regimes and moral frameworks to the new world of what is possible. Even the US Department of Defense is seeking an ethicist to help it build killer robots.

Not to fear, dear human: the (mostly) young people of Hong Kong are developing tactics and techniques to sabotage and evade the surveillance state.

Unfortunately, these tactics are not helping us escape the fog of war that surrounds the rise of AI. Rather, the fire hose of public relations and start-up pitches that provides the chorus for automation hides the human elements that drive this digital transformation.

When we think about the impacts of algorithms, we should focus not on the bias of the machines, or their ethics, but rather on the broader notion of systemic algorithmic harms. Algorithms are not individuals; the biases they exhibit and the ethics that should inform their governance are derived from their use in society as a whole. It is important, then, to pay attention to how those algorithms reinforce the status quo and potentially automate inequality.

These systemic impacts become explicit when algorithms are themselves deployed as “individuals” seeking to influence the biases of humans.

Yet this practice does not depend solely on automation. Rather, it takes advantage of mixed or hybrid systems that combine humans and machines to skew or amplify particular narratives or perspectives.

Researchers are, to a certain extent, catching up with the practices honed and developed on imageboards like 4chan and later 8chan, where media manipulation and algorithmic exploitation flourished. VICE recently aired an interview with the founder of 8chan that provides insight into the platforms that incubated these methods (and why).

In hindsight, this fog of war surrounding the rise of the machines will be regarded as the preface to a larger storm: a storm that further erodes a shared reality, pushing us toward a virtual reality that is increasingly subjective and imagined.

Just as with AI, the stories we’ve been sharing about virtual reality are also a distraction, indicative neither of the potential that exists nor of the dystopia they could enable.

Certainly a goal of this newsletter is to attempt to make sense of the terrain, in spite of the fog of war, and to create, together, a reality that empowers and educates rather than exploits and deceives.

Jesse Hirsh
