
[This post is a continual WIP]
Persona 1: risk of looking silly > AI risk
When I hear bright people dismissing AI-risk claims with comments like “if we realize it’s not safe we just won’t build it”, I find myself unable to take them or their ideas at face value. It’s not that there are no good arguments against AI x-risks, but “humans don’t build dangerous things” is not one of them. Perhaps their identity is so tied up in not looking stupid that they have convinced themselves that publicly downplaying the risk of AI is the best way to protect their intellectual credibility (see the figure below). They might believe they’re avoiding the potential embarrassment of falling for an overhyped threat that may never materialize. Major advancements in technology create space for the “remember the dot-com bubble” trope, and there are always people happy to fill it. There are, of course, historical examples of crashes following hype and speculation. Yet many hype cycles correspond to a technological advancement that did in fact change the world in some fundamental way; it’s just that humans are quick to adapt their expectations to include things that were once unthinkable.
Persona 2: collapsed timelines
Someone certain of AI doom would most likely not work on AI-alignment problems, unless working on those problems allowed them to interact with the parts of the world they hold most dear. The life of a true believer might look like this: you max out your long-term debt, reduce work hours to only what is necessary, pull your kids out of school so you can hug them more often, move back close to your parents, and tell people in your life how much you love them. You might spend Saturday mornings roaming the woodlot in your neighborhood and realize, for the first time since you were a child, that it’s a wild place of dark carbon, birdsong, and everywhere-light, gravid with meristematic potential. You might lie on a hillside with your young family, dozing in springtime grass, thinking what might be some of the very last human thoughts.