Balancing Doomsday and AI

"Doomsday Scenario", by Marcel Gagné, created using Stable Diffusion

A couple of years ago, there was this combination science-fiction, disaster, and heavily satirical film called "Don't Look Up". The movie was written, produced, and directed by Adam McKay, from a story he developed with David Sirota. It featured a star-studded cast: Leonardo DiCaprio, Jennifer Lawrence, Rob Morgan, Jonah Hill, Mark Rylance, Tyler Perry, Timothée Chalamet, Ron Perlman, Ariana Grande, Scott Mescudi, and the iconic Meryl Streep.

The story centers on two astronomers, Dr. Randall Mindy (Leonardo DiCaprio) and Ph.D. candidate Kate Dibiasky (Jennifer Lawrence), who discover a comet, which would normally be a really cool discovery except that this particular comet is on a direct collision course with Earth. Oh, and it's a big one, meaning we're looking at an extinction-level event. Extinction of the human race, in case you weren't clear on that. The two astronomers embark on a media tour to warn humanity about the impending doomsday.

Remember Cassandra...

Some might claim that the satire comes from the responses the two receive: political leaders and greedy CEO types who see the comet's impact as a potential boon for their corporate bottom lines, and a general public showing a complete lack of interest in, or outright disbelief of, the dire warning. Faced with certain death for our species, people use the story to make YouTube videos and post to Instagram or TikTok for clicks and likes... you know, exactly what you'd expect.

That led some, myself included, to comment that this movie wasn't a satire or comedy, but a documentary exploring exactly what would happen if such an event took place. The premise, of course, is aimed at contemporary issues such as climate change denial, political short-termism (Meryl Streep plays a Donald Trump-like President of the US), and media sensationalism.

The zeitgeist of the 21st century seems to be punctuated by an underlying current of anxiety. From climate change to the rise of authoritarianism, to fake news, to killer pathogens (remember COVID-19?), and the overlooked yet persistent threat of nuclear warfare, the list of existential threats is extensive and pressing. Few of these are particularly new. When I was a kid and the Cold War (the original one?) was still a thing, teenagers like me worried that we'd all die in a nuclear war. Clearly, we still have problems to this day. Yet, in this vast arena of doomsday scenarios, one subject has recently been thrust into the limelight: Artificial Intelligence (AI).

The advent of AI, which once occupied a distant corner in the realm of science fiction, has been catapulted into our daily lives. Six months ago, ChatGPT made its grand appearance, and it's probably an understatement to say that it has changed everything. Along with its generative AI brethren like DALL-E, Midjourney, Stable Diffusion, Bard, and so on, AI steadily permeates every facet of our existence. As such, there's growing alarm that this technological marvel could someday go rogue (it's tempting to use a picture of the Terminator here, but I refuse). The dystopian narratives spun around this 'AI Apocalypse' have been compelling, drawing discussion and debate in equal measure.

There's the famous open letter asking for a six-month pause on large AI development. Consider as well the new obsession with p(doom), where people assign a probability to the likelihood of AI bringing about the end of humanity (that being the 'doom' part). The figures bouncing around run the gamut. I estimate my own AI p(doom) to be around 2%.

I started out by mentioning the movie "Don't Look Up". As of a few days ago, a similarly titled film has appeared: "Don't Look Up - The Documentary: The case for AI as an Existential Threat." It runs just over 17 minutes, so you should definitely watch it. Here's a taste of the sort of quote you can expect: "50% of A.I. researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI." So, 50% are going with a 10% or greater p(doom).
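If you're wondering what a statistic like that means mechanically, it's simpler than it sounds: each researcher reports their own p(doom), and the headline number is just the fraction of responses at or above some threshold. Here's a toy sketch in Python; the estimates below are invented purely for illustration and are not the actual survey data behind that quote:

```python
# Toy illustration of how a p(doom) survey statistic is derived.
# These estimates are invented for illustration; they are NOT
# the actual survey data quoted above.
from statistics import mean, median

# Hypothetical p(doom) estimates from ten researchers (as probabilities).
estimates = [0.01, 0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.20, 0.30, 0.50]

# The headline number: what share of respondents are at or above 10%?
at_least_ten_percent = sum(1 for p in estimates if p >= 0.10) / len(estimates)

print(f"Share of researchers with p(doom) >= 10%: {at_least_ten_percent:.0%}")
print(f"Mean estimate:   {mean(estimates):.0%}")
print(f"Median estimate: {median(estimates):.0%}")
```

Notice that the "50% say 10% or more" framing tells you nothing about where the other half sits, which is part of why these numbers make for better headlines than risk assessments.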

Yes, there's a real possibility that AI could be really bad, hence the call for a pause (not going to happen). Some are suggesting we abandon the whole AI experiment as a bad idea and move on. Some, usually Dune fans, might even call for a Butlerian Jihad. However, it's essential to weigh this newfound concern against the backdrop of other longstanding crises.

Take, for instance, climate change—a threat that's not so much looming as it is upon us. The signals have been clear and consistent: escalating temperatures, melting glaciers and collapsing Antarctic ice shelves, intensifying weather extremes, rising sea levels, and mass extinction of species. Of the latter, it is said that we are in the midst of the sixth great extinction, the "Holocene extinction", and we humans are responsible. One of the species that could go extinct is us! And yet, despite the grim reality and substantial scientific consensus, our collective response has been lackluster (I'm being kind), marred by political gridlock and short-term economic interests. We've been granted numerous wake-up calls, yet we just hit the snooze button repeatedly.

Equally alarming is the resurgence of authoritarianism. From democratic backsliding in traditionally liberal countries to the consolidation of power in autocratic regimes, the trend is disturbing. The principles of freedom, equality, and the rule of law are under threat. As we grapple with this political reality, the discourse around an AI doomsday often seems disproportionate. This isn't to downplay the potential risks that AI might pose in the future, but to highlight the ongoing crises that urgently demand our attention and action (talk to the hand).

Let's not forget the ever-present specter of nuclear warfare: a threat that, despite its destructive potential, has been largely relegated to the background. Remember what I said about my teenage years? Ah, high school memories. None of that has changed. With thousands of nuclear weapons globally, the prospect of a nuclear war, accidental or intentional, is not merely a product of Cold War-era paranoia. That shit is still around, and it could still destroy us, several times over. But no, we're worried about AI.

As we engage with the 'AI Apocalypse' discourse, we must remember that this narrative is fundamentally based on a potential future, one that we can shape and guide with prudent policies and ethical AI practices. Sort of like the Ghost of Christmas Future (these are but the shadows of things that might be). In contrast, climate change, authoritarianism, and nuclear warfare are not potential threats; they are current realities. They are issues we've had decades to address and, for a multitude of reasons, have fallen short. And now, suddenly, because we have these amazing Large Language Models (LLMs), AI art generators, etc., we panic. Oh, and somehow, we'll solve that problem if we pause development for six months.

Wait! What about all those other things?

While it's crucial to ponder, debate, and otherwise prepare for the challenges AI might present in the future, it's equally important to ensure that this doesn't distract us from the pressing problems of the present. The stakes are enormous, and the clock is ticking. And we, as a species, are so easily distracted.

As I said in an earlier post, modern AI is a marvel that has given each and every one of us superpowers. We have big problems to solve. Massive problems. Huge scary problems. And now, we have just been handed tools that extend our capacity to imagine alternate futures and to create solutions that were previously beyond us.

Take climate change, for starters, where AI is already integral to forecasting climatic patterns, identifying the areas most vulnerable to climate change, and innovating renewable energy sources. AI algorithms can optimize grid management of renewable energy and predict energy demand. Machine learning models can help us understand climate patterns, improve the accuracy of climate models, and, if we're lucky and policymakers are willing to look up, guide them toward more informed decisions.
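To make that demand prediction concrete, here's a minimal sketch that trains a regression model to predict electricity demand from hour-of-day and outdoor temperature. The data is synthetic and the model deliberately simple; real grid forecasting uses far richer inputs, but the shape of the problem is the same:

```python
# Minimal sketch of demand forecasting: a regression model predicting
# electricity demand from weather and time-of-day features.
# All data here is synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

hour = rng.integers(0, 24, n)          # hour of day
temp = rng.normal(15, 10, n)           # outdoor temperature (deg C)

# Synthetic "true" demand: a daily cycle plus heating/cooling load.
demand = (
    100
    + 30 * np.sin((hour - 6) * np.pi / 12)   # daytime peak
    + 0.8 * np.abs(temp - 18)                # load rises away from 18 C
    + rng.normal(0, 5, n)                    # measurement noise
)

X = np.column_stack([hour, temp])
X_train, X_test, y_train, y_test = train_test_split(X, demand, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```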

We just went through a pandemic. Well, AI can accelerate drug discovery (and has). Just a few days ago, it was announced that a new antibiotic to fight resistant "superbugs" had been discovered using AI. Beyond drug discovery, AI algorithms can analyze vast amounts of data to detect patterns of disease spread and offer predictive analytics, enabling faster responses. What had taken years can sometimes be done in days.
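The basic idea behind that kind of antibiotic discovery is a screen-and-rank workflow: train a model on compounds with known antibacterial activity, then have it score a huge library of untested molecules so chemists only synthesize the most promising few. Below is a heavily simplified Python sketch of the idea; the "fingerprints" and labels are random stand-ins, since real pipelines use learned molecular representations and curated assay data:

```python
# Sketch of the screen-and-rank idea behind AI-assisted drug discovery:
# train a classifier on compounds with known activity, then rank an
# untested library by predicted activity. Features and labels here are
# random stand-ins for real molecular fingerprints and assay results.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in 128-bit "fingerprints" and activity labels (1 = active).
X_known = rng.integers(0, 2, (500, 128))
y_known = rng.integers(0, 2, 500)

model = RandomForestClassifier(n_estimators=200).fit(X_known, y_known)

# Score a large untested library and surface the top candidates.
X_library = rng.integers(0, 2, (10000, 128))
scores = model.predict_proba(X_library)[:, 1]
top = np.argsort(scores)[::-1][:5]
print("Top candidate indices:", top, "scores:", scores[top].round(2))
```

The speedup comes from that last step: scoring ten thousand (or ten million) candidates is cheap for a model, while testing them all in a lab would take years.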

In the political realm, AI has the potential to enhance decision-making processes, predict conflict areas, and streamline public services. On the issue of nuclear war, while AI can't directly prevent it, it can help monitor compliance with nuclear treaties, provide simulations of potential conflict scenarios, and analyze geopolitical data to anticipate, and potentially de-escalate, tensions.

We've got big problems; problems so large that decades have passed and we're still working on answers. AI is a tool, and one that (at the risk of repeating myself too many times) grants us a kind of superpower. Sure, let's continue the dialogue on AI's potential risks, but let's also tone down the panic and the doomsday scenarios. As Uncle Ben told Peter Parker, "With great power comes great responsibility." Our responsibility is to use that power for good, not to throw it away in some great memetic panic.

Our future—AI or otherwise—depends on it.