I’m Not Anti-AI

I’m accused of being anti-AI. I’m not.
This image was generated using Adobe Firefly.

I am accused all the time of being “anti-AI,” and it simply isn’t true.

Some Background

Machine Learning (ML) and Artificial Intelligence (AI) are not new concepts. Many of the foundational mathematical and statistical principles (e.g. the method of Least Squares) that underlie today’s machine learning algorithms date back centuries. Thinking machines have been contemplated, celebrated, and vilified for nearly as long.

What is relatively new is the availability of high-speed, low-cost compute resources that make applying sophisticated algorithms practical. Until the early 21st century, even basic ML techniques like linear regression were difficult to use at scale, because only mathematicians and scientists at universities and big corporations had access to the computing resources needed to run them. Today, most of us have more computing horsepower in our pockets than those researchers had just decades ago. Even if someone could have developed sophisticated AI-based products back then, who could have used them? Widespread use of interconnected computers dates back only to the late 1990s.
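To put that in perspective, here is a rough illustration of my own (not from any particular product): fitting a line by the centuries-old method of least squares mentioned above, in a few lines of Python with numpy. A computation like this once meant hand calculation or scarce mainframe time; today it finishes in milliseconds on a phone.

# Illustrative sketch: ordinary least-squares line fit on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=1_000)            # synthetic predictor
y = 3.0 * x + 2.0 + rng.normal(0, 1, 1_000)   # noisy linear response

# Solve the least-squares problem for slope and intercept.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"slope is about {slope:.2f}, intercept is about {intercept:.2f}")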

All this computing power has enabled the rise of large language models (LLMs). These are ML models trained on very large corpora of text. Using complex mathematical and statistical methods, they attempt to predict the most probable response to a given prompt, based on what they have “learned” from the corpus. The ubiquity of high-performance, GPU-based compute resources has made training LLMs at scale practical.
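To make the “predict the most probable response” idea concrete, here is a toy sketch of my own, not how any real LLM is built: a bigram model that counts which word follows which in a tiny corpus and turns those counts into probabilities. Real LLMs use billions of learned parameters and far richer context, but the predict-the-next-token intuition is the same.

# Toy next-word predictor: count word-to-word transitions in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word):
    """Return the probability of each word that can follow prev_word."""
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(predict("the"))   # e.g. {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}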

A lot of people confuse “artificial intelligence” with artificial general intelligence (AGI). AI is a catch-all for all kinds of algorithmic tasks, including machine learning and AGI, while AGI is a specific type of AI capable of performing as well as or better than humans on a wide variety of tasks in a variety of situations.

If this sounds nuanced, that’s because it is. A dozen different definitions or subcategories of “narrow” and “general” artificial intelligence have been proposed, and more than half of them have already been achieved.

But a lot of us think of (or even fear) specialized (“expert”-level or greater) artificial intelligence that, while perhaps inevitable, simply doesn’t yet exist.

The Gartner Hype Cycle

AI Hype

I work in the field of advanced data analytics—which includes Data Science, Artificial Intelligence, Machine Learning, and a host of related capabilities and technologies. I understand how AI works and what it is (and is not) capable of. Maybe this is why a lot of people are surprised by my AI skepticism. In fact, it’s why I’m an AI skeptic.

The idea of artificial general intelligence or artificial superintelligence (ASI) stirs a lot of excitement in people’s imaginations, which is why anything labeled AI is reported widely, even if it has nothing to do with AGI or ASI. The average Joe doesn’t know the difference.

A lot of what is reported (and assumed by many to be true) is so grossly oversimplified as to be misleading bordering on just plain wrong. The hyperbole—and there’s a lot of it, especially at the utopian and dystopian fringes—drowns out all the good stuff in the middle.

We are still riding a rocketship of hype to the peak of the Gartner Hype Cycle, the point in the evolution of any technology where the promise of something still outshines its actual utility. Some of the stuff at the fringes may eventually come true, but we’re years away from ubiquitous reliable expert AI solutions—to say nothing of superintelligence. We’re still in the pioneer days.

Monetize Monetize Monetize

Business leaders are always eager to capitalize on, well, anything, so here we are. Combine big breakthroughs in LLMs with the entrepreneurial spirit and boards of directors with massive FOMO complexes, and you’ve got a recipe for chaos.

New features that incorporate large language models and generative AI are being foisted onto users at such breakneck speed that (at least in a few cases) they’ve skirted ethical concerns, introduced security vulnerabilities, and given bogus, sometimes life-threatening advice.

Another problem with monetization is that (historically, anyway) the gains associated with increased productivity mostly go to shareholders and not to regular people. True, innovations do trickle down in the form of “smart appliances” and “personal assistants,” but despite being fairly common, they have yet to become robust enough to rely on. Siri and Alexa create as much frustration for me as value. I just want my shit to work.

Where Are We Headed?

The hype cycle will do its thing, the news stories will taper off, and the people working on these projects will continue to make improvements. We’ll eventually get to the “plateau of productivity” and AI will inevitably become part of our daily lives. But we’re not there yet, and getting there will be a bumpy ride.