AI, social media, and the problem with tech evilness
Last year I read Careless People, by Sarah Wynn-Williams, a tell-all book about the inner workings and people of Facebook/Meta. My thought while reading it was “holy shit, these people are evil.”
The overall premise I took away from her book is that they (the people who run Facebook/Meta and others like them) are unintentionally evil, their evilness a byproduct of insane, rapidly accumulated wealth in a sector and a society with no guardrails on that wealth or their products. They are essentially children handed absolute power, with no idea how to use it safely. That power erodes their moral compass over time until they are isolated in a world where they are told, and come to believe, that every impulse they have is correct, no matter the external consequences.
And as Maria Ressa also highlights in her book How to Stand Up to a Dictator, this evilness extends to the potential destruction of nations and world order, a willingness to bend to authoritarianism and destroy democracies, often in a quest for even more wealth and power.
Tech companies have shown a complete willingness to bow down to and even directly support authoritarianism if they think there's even a chance it will bring them more power and wealth. For example:
- Palantir has muzzled its employees as the company collaborates with ICE
- Apple bowed to the administration's demand to remove an app that tracks ICE and then Tim Apple gave dear leader a glass plaque
- Tech companies got rid of their diversity efforts
- Tech companies have embraced the administration's anti-regulation stance for tech and AI
- AI companies have engaged in censorship as a form of preemptive compliance
- Literally everything Elon Musk has done.
And this is all just the tip of the iceberg, because with AI, the potential for tech evilness is exploding.
Dario Amodei, the CEO of Anthropic, recently wrote two great essays. They are intentionally essays in contrast - Machines of Loving Grace and The Adolescence of Technology.
The first is about the amazing potential of AI. The second is about how we are standing on the precipice of real danger at a moment when our collective prefrontal cortex isn’t fully developed.
As a kid I was an "early adopter" of technology. Sierra computer games and the Prodigy online service were my jam. I had my first IBM PC at the age of 7 and even learned a little programming by reading the thick computer manuals my dad brought home from IBM.
And I was "Team Machines of Loving Grace." Even now, I have a tendency to see only the good in technology - its ability to connect us, speed the spread of knowledge, make information accessible, share data, better us as humans. Always the optimist, I have not given enough thought to the potential dangers: how those connections could be abused, information weaponized, data used against us, societies destroyed.
(In just one example of how companies are weaponizing data, Taylor Lorenz shows how Google is now using AI to engage in surveillance pricing.)
But here we are - AI is forcing us to confront the evilness and very real dangers faster than we ever imagined we would need to. Personal computers were widely available in the 80s, but it wasn't until the late 90s or early 2000s that we really had to contemplate, as a society, the effects the internet was having on us. Social media (MySpace, Friendster, and then Facebook) didn't take off until around 2004, and it took about a decade to realize the potential dangers it posed, to us individually and to our political system.
So how long until we are seriously confronted with the dangers of AI? Are we already being confronted with them? There's the very real job loss that is hitting some sectors worse than others. But what about the weaponization of these incredibly powerful AI tools? Can we even begin to contemplate how the companies that operate them could unintentionally or intentionally use them for evil?