We have been bombarded with the idea of “AI inevitability” in the news and on social media. We are told that whether or not you agree with the technologies being developed with AI, find them morally questionable, or find them threatening, it does not matter. These technologies, we are told time and time again, “are here to stay.” Even some who question what is happening with AI are resigning themselves, without question, to the AI companies’ motto of “adapt or die.”
These statements are untrue. If anyone else told us to “do or die” the way the AI companies are telling us, we would recognize the behavior as abusive. What the phrase really means is “do as I say or die”: accept whatever we build with AI technology, or there will be consequences. Many AI companies tell us, as a psychopath would, to “just adapt,” as if it were a moral imperative we must follow, while they feel no obligation to adapt to others’ concerns. They may pretend to listen to the public after getting away with as much as they can, and make concessions only afterwards, if doing so serves their interests. They are telling us there is nothing we can do about these technologies, so we must shut up and accept them.
One of the mottos that represents the worst of the tech world is “move fast and break things,” and it is clearly how many AI companies operate: “I do whatever I want to secure a win for myself, and everyone else has to deal with the consequences later.” “AI inevitability” is really a cover for operating by “move fast and break things”: a seemingly neutral phrase that projects objectivity while hiding these companies’ real motivation.
“AI inevitability” is used to justify appeasing AI companies so they can do whatever they want, even though they are run by a small group of people. Their power is certainly not greater than that of the rest of us combined, but the phrase lets us sleepwalk into handing them the reins. Yet just as with a psychopath who oversteps, we can set hard boundaries that put them in their place. We do not have to be gaslit into believing their unpalatable “do or die” messaging, especially when, ironically, the technology itself is unstable and predicted to put society, and even itself, at great risk.
If a bully were being aggressive and telling us what to do, no one would have to put up with that behavior. We cannot chalk this behavior up to free-market capitalism; given the coercive messaging, it is technofascism by a small group of companies. When we learn to set strong boundaries rather than give in, the bully will have no choice but to respect them. This would be better not only for our wellbeing, but for the bully as well. When healthy boundaries are set, the AI companies will learn to collaborate with us and to listen to the concerns of society before rushing ahead to prioritize their wins. They will instead make AI technology that helps us, and that is safe and secure.
Though technological growth is inevitable, it is easy to confuse “move fast and break things” with technological inevitability. “Move fast and break things” is not inevitable; it is just a dumb, damaging cultural ideal held by a select group. When we collectively put the people who live by that motto in their place, we can greatly mitigate the negative outcomes of AI and instead ensure positive ones.

