Appreciating the Difference Between Quantitative and Qualitative Change: It Could Be Existential
I would argue, if asked, that technology tends to advance quantitatively until it reaches a tipping point. Then we witness a qualitative change. Consider the nuclear bomb. Throughout World War Two, weapons became more and more deadly. The firebombing of Japanese cities is a case in point. But did that persuade the Japanese Empire to surrender? Emphatically no. Then the A-bomb emerged. Two demonstrations of its qualitatively unique power, and Japan capitulated. Fortunately for us, and for my Baby Boomer/Cold War generation in particular, world leaders, democratic and authoritarian alike, recognized the principle of Mutually Assured Destruction (MAD). Consequently, in the 80 years since Hiroshima and Nagasaki, no nation has unleashed the Bomb's existential power.
Recognizing when a qualitative tipping point has been reached is not so simple in other contexts. Consider immigration. If I had an extra dollar for every time I've heard "All Americans were once immigrants" as the rationale for open borders, I'd be living in a larger house in a balmier climate. When my father's family emigrated from Italy at the turn of the last century, the U.S. population was about 76.3 million. Today it is about 340 million, an increase of nearly 350%. The land mass of the United States has not increased at all during the same period, and annexing Greenland is no solution. Have we reached a tipping point? I don't know. I do know that "All Americans were once immigrants" is no satisfactory rationale for the influx that would occur along our southern border if we allowed it. That amounts to substituting a cliché for a rational policy.
Now consider Generative Artificial Intelligence (GenAI) and Artificial General Intelligence (AGI). Consider the probable proximity of the technological singularity. Per Wikipedia (hey, don't knock it if you haven't tried it), "The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ('explosion') in intelligence which would culminate in a powerful superintelligence, far surpassing all human intelligence." Forbes ran an article last fall that placed the singularity 10 to 20 years from now. The remaining Boomers and I might not be around to see it. But will you?
The cheerleaders for AI like to compare it to the Industrial Revolution or the Internet. I maintain they are wrong. They are misleading us. AI is a lot more like the nuclear bomb and a lot less like immigration, or climate change for that matter. It is so much more than a quantitative change. It is qualitative, as nothing else in the history of technology has been. And if I'm right about that, AI will turn the world of work on its head. How will we cope with that?
Well, for one thing, we might reassess our views on immigration and population. If most of the population will be redundant so far as work is concerned, how will they (we?) be provided for? I have some thoughts on this. Life could be heaven, or it could be hell. I've explored these alternatives in some detail in my new book. For now, suffice it to say that we humans have in our hands a revolutionary technology, comparable in its disruptive power to the nuclear bomb. But we have nothing comparable to MAD restraining us. My money is on 10 years, not 20, for the singularity to occur. Given the pace at which corporations are investing in AI, and the Trump Administration's stance against regulating its development, even that may be too conservative.