
It’s the AI event horizon I’m worried about, not the singularity

  • #ai
  • #tech

You’ve probably heard tech/science-y folks refer to the “singularity”: a point in time that separates a hyper-advanced, impossible-to-predict technological future from the status quo before it. That rapid, unforeseeable growth is often tied to the invention of a non-human superintelligence.

The recent, startling deluge of advancements in the field of AI has become almost unavoidable if you spend a reasonable amount of time online. Much of the discourse is a mixture of celebration and awe tempered by job-security concerns. We’re still in the honeymoon period, and the deeper implications of AI’s potential (but not yet guaranteed) democratization are still being explored.

What seems clear to those closer to AI is the power it represents. A war is brewing among our tech juggernauts, with Microsoft taking an early lead. Their investments in and relationship with OpenAI, and by extension its GPT models, are their advantage. That advantage has become so obvious and valuable that OpenAI’s most recent technical report, covering GPT-4, contains no information about the model’s architecture, hardware, or training.

Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

That’s an excerpt from the technical report. OpenAI ought to consider changing their name to avoid easy digs at such obvious irony. And the problem goes beyond a lack of openness: John Montgomery, a Microsoft VP responsible for AI, indicated that there’s pressure to move fast despite reservations from those working on the technology. He said this around the time Microsoft laid off the members of its AI org who were responsible for ethics.

While the team was being reduced last fall, according to Platformer, Microsoft’s corporate vice president of AI, John Montgomery, said that there was great pressure to “take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed.” Employees warned Montgomery of “significant” concerns they had about potential negative impacts of this speed-based strategy, but Montgomery insisted that “the pressures remain the same.”

A pace dictated by public, for-profit corporations, for a technology expected to have a hard-to-predict but meaningful global impact, is noteworthy at the very least. Indeed, a growing number of tech, science, and other influential leaders (including recognizable names like Steve Wozniak, Elon Musk, Victoria Krakovna, Yuval Noah Harari, Evan Sharp, and Andrew Yang) are calling for a six-month pause in further development so risks can be assessed. They’ve released an open letter stating their concerns.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

This is starting to sound like the first act of a sci-fi disaster novel: a transformative technology valued for the advantage it confers in a capitalist system, at the expense of everyone else. Dramatic, sure, but it serves as a useful counterbalance to the general wonder we’re still experiencing in response to it.

As I see more public dissent, I’m becoming more concerned about the time leading up to the singularity: its event horizon. We may not realize we’re sliding down its slope until the grade becomes steep enough for us to lose our footing. At some point, as this technology develops, the momentum will become too great for us to avoid whatever conclusion lies ahead. Tremendous upside, guaranteed power, and large profits are hard variables to account for along any path. Is it already too late? Are these only the first warning bells? I don’t know, but I do know that danger lies before the singularity itself.
