AI Agonistes

Apocalypses Now III


The apocalyptic threats associated with AI development are much in the news these days. The field itself is divided between the accelerationists, who want to move forward as quickly as possible, and the effective altruists (aka decelerationists, aka decels, aka doomers), who also want to move forward but who are wary of the anti-human uses AI might be put to. And the latter have reason for concern, as AI’s record up to now is not all that reassuring – it seems to have a natural affinity for the dark side.
Technology has always been catnip to the military, and throughout history great scientific minds from Archimedes to Leonardo da Vinci to Robert Oppenheimer have worked on weapons systems. Moreover, AI research in the US owes its start to the military funding agency DARPA (Defense Advanced Research Projects Agency), and it is still of great interest to the military. For example, the recently announced breakthrough in quantum computing at Harvard had DARPA support (click HERE); moreover, for some time now AI has been a critical part of the software for military drones. At this point (at least as far as the public is allowed to know), these drones are still controlled remotely by a human, and the final decision where to attack is a joint human-AI one. But if the move is made to largely autonomous systems, this will magnify the chances of misguided AI-driven targeting decisions that could lead to unintended slaughter and other disasters – in this context, the threat of nuclear war is often mentioned. And, just in, the IDF is employing an AI system, Habsora (“The Gospel” in Hebrew), to locate Hamas-affiliated targets for bombing while estimating the concomitant civilian casualties.
Yet another way AI is having an unsettling impact on society is through surveillance technology: from NSA eavesdropping to hovering surveillance drones to citywide face-recognition systems and more. London, once the capital of civil liberties and individual freedom, became the surveillance capital of the world – but (breaking news) Shanghai has already overtaken London in this dystopian competition. What is more, we are now subjecting ourselves to constant monitoring: our movements tracked by our cell phones, our keystrokes logged by social media. In the process, the surveillance state has created its own surveillance capitalism: our personal behavioral data are amassed by AI-enhanced software – Fitbit, Alexa, Siri, Google, Facebook, … ; the data are analyzed and sold for targeted advertising and other feeds to guide us in our lives. All this is the subject of a recent book by Harvard Professor Emerita Shoshana Zuboff, The Age of Surveillance Capitalism.
Another major concern people have with AI is the threat to jobs, especially good jobs. Ray Dalio, the billionaire investor and manager of the Bridgewater hedge fund, calls AI a “great disruptor” and predicts massive changes in the workplace within the next five years, starting right now – others, more biblical in spirit, call it the coming jobocalypse. A recent NYTimes article in the Business Section gets right to the point: “The world’s artificial intelligence researchers are transforming chatbots into autonomous systems that could one day replace white-collar workers.” One can imagine a team of 4 accountants, say, being replaced by 1 accountant and 1 AI. The phenomenon will be worldwide; Goldman Sachs estimates that 300 million jobs will be lost. By way of example, Onclusive, a prominent French public-relations/market-research firm in the Paris region, just announced that it will lay off 217 of its 383 employees and use AI to replace them! For more details, click HERE.
This is all consistent with Moravec’s paradox, a principle formulated in the 1980s by AI luminaries Hans Moravec, Rodney Brooks and Marvin Minsky: AI systems outperform humans in the skills that emerged latest in our evolutionary history. Thus motor control is difficult for a machine to emulate, but computations and other “advanced” tasks are easy for machines – and with AI, more and more of these advanced functions will fall victim to such mechanization, while the skill sets developed earlier in our history will insulate many from its ravages. To quote the generally optimistic and ever prescient Steven Pinker (The Language Instinct, 1994):
    “As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.”
Mathematicians, for their part, are not worried – yet. The newest generative AI systems, like OpenAI’s ChatGPT, do not follow logic; rather, they are trained to associate probabilities connecting one node to another in a software emulation of the brain known as a neural network. Thus, very roughly put, they do not make a step toward a conclusion because it is the most logical but because it is the most banal; and a percentage of generative AI “facts” are just wrong, the so-called hallucinations. So the first thing math people did with ChatGPT was to bait it into giving proofs of classical theorems – the “proofs” would sound good but would have obvious gaps in them. Moreover, we do not understand how people actually go about proving theorems. The mathematician (or worker in almost any creative field) turns his or her entire body into an analog processor that labors 24/7 on the issue at hand, knitting things together busily until something clear emerges consciously in language, symbols or paint strokes. Tales along these lines involving famous mathematicians (e.g. Poincaré) are part of the folklore of the subject. Others experience this when they return to a crossword puzzle and suddenly see answers that they were nowhere near seeing a short time before.
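The “most banal, not most logical” point can be caricatured with a toy sketch. The code below is a deliberately simplified stand-in for a real system (actual models are neural networks over billions of parameters; the tiny bigram table and its probabilities here are invented purely for illustration): at each step it greedily takes the statistically likeliest next word, with no notion whatsoever of whether the resulting claim is justified.

```python
# Toy bigram "language model": invented next-word probabilities.
# Real generative AI uses neural networks over subword tokens, but the
# decoding idea is the same: pick a likely continuation, not a logical one.
bigram_probs = {
    "the": {"proof": 0.5, "theorem": 0.3, "lemma": 0.2},
    "proof": {"is": 0.6, "follows": 0.4},
    "is": {"trivial": 0.7, "left": 0.3},  # "trivial" is banal, not justified
}

def most_banal_continuation(word):
    """Greedy decoding: always take the highest-probability next word."""
    candidates = bigram_probs.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start, max_words=5):
    """Chain greedy choices until no continuation is known."""
    words = [start]
    while len(words) < max_words:
        nxt = most_banal_continuation(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the proof is trivial"
```

The model happily asserts that the proof “is trivial” simply because that is the most probable phrase in its (made-up) training statistics – a cartoon of the gap-filled “proofs” described above.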
But maybe mathematicians are too cocky – the AI revolution is still in its early stages and we haven’t yet reached the Singularity, the point where the machines become as intelligent as human beings, the point where AI becomes AGI, Artificial General Intelligence. We could be close: AI visionary Ray Kurzweil has predicted that by the end of this decade an AI will be able to pass for human in an interactive session, the “Turing Test” proposed by Alan Turing in 1950 as a key milestone for AI. For his part, in a piece on AI in The Atlantic, Sam Altman, the once and future CEO of OpenAI, teasingly let the reporter understand that AI researchers are often surprised at how their code has learned something apparently all on its own. The genie is out of the bottle.
Speaking of Sam Altman and AGI, the tech world and the business world were shocked recently, on Nov. 17, when the board at OpenAI fired Altman for not being “consistently candid.” That is rather vague, but the best guess is that Altman was not forthcoming about the development of an AGI-like system (known as Q* according to the rumor mill) and the board was concerned about the ethical and existential issues associated with these new developments – this is basically Elon Musk’s interpretation of events (Business Insider, Nov 29; click HERE). All this makes sense in terms of OpenAI’s original mission: a not-for-profit company that would develop AI systems in the public interest and ensure that these systems would not present threats of whatever kind to humanity. But then OpenAI created a for-profit subsidiary, the purpose being to raise the capital needed to continue with its mission. This move prompted a schism, with a group leaving OpenAI to form a more ethics-oriented company, Anthropic; it was also connected with the ever altruistic Elon Musk’s departure from the company. But ethics remained critical for the OpenAI board, and one member, Helen Toner of Georgetown University’s Center for Security and Emerging Technology (CSET), even published an academic paper in which she criticized the way OpenAI was proceeding; worse, she wrote that the schismatic rival Anthropic was approaching AI in a more responsible way. All this led to the board’s firing the uncandid Altman. But Microsoft, which holds a 49% stake in OpenAI’s for-profit arm, rattled its sabers and announced that it would hire Altman and key members of the technical staff itself; the OpenAI board caved, reconfigured itself and restored Altman to his throne. The whole drama took place in four days – in the end a triumph of the amoral force that is Capitalism: there is just too much money involved for this to be entrusted to the hands of do-gooders.
The New York Times celebrated this victory of the accelerationists with a piece in the Sunday Business section (Dec 11) which even placed Altman and others of the San Francisco AI elite in the avant-garde group of effective accelerationists (aka e/accs, pronounced e/acks). For his part, the Pope himself has entered the lists as a decelerationist – on Dec 14, Francis, using the handle @pontifex, tweeted: “We must make sure that Artificial Intelligence is put at the service of peace in the world, rather than being a threat and that it will make a beneficial contribution to humanity’s future.”
All this conflict between the lucrative and the good is reminiscent of a line of Mae West in her pre-code film Night After Night: seeing the jewels Mae was wearing, the hat-check girl in the speakeasy cries out “Goodness, what beautiful diamonds!”; Mae replies, “Goodness had nothing to do with it.” For the clip from the movie, click HERE. For more on the controversy around Helen Toner, click HERE.
This battle between the accelerationists, who want the technology to move forward as quickly as possible to AGI, and those who want to move with real caution is part of a broader philosophical debate. Indeed, Accelerationism is the name of a philosophical school which emerged from post-Structuralism in the 1970s: roughly put, the Accelerationist position is that, at this point in time, civilization can only be saved from itself by pushing technology forward at the fastest possible pace; citing Nietzsche, they call for us “to accelerate the process.” For background, click HERE.
AGI and its potentially dire consequences were predicted at the very outset of the Computer Revolution: mathematician and computer pioneer John von Neumann (the Von Neumann Architecture, Stored Programming, the EDVAC) expressed his misgivings in the 1950s:
     “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
Writing in 1951, Alan Turing theorized:
     “… once the machine thinking method has started, it would not take long to outstrip our feeble powers. …  At some stage therefore we should have to expect the machines to take control … .”
(For a short recap of the philosophical and mathematical run up to the Computer Revolution from Aristotle to Pascal to Turing, click HERE .)
Much more recently, in 2014, Stephen Hawking put it most dramatically, telling the BBC: “The development of full artificial intelligence could spell the end of the human race.” “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Eerily, Part III of Yuval Harari’s awesome book Homo Deus (2015) is simply entitled “Homo Sapiens Loses Control.” Philosophers, for their part, tend to be pessimistic and, true to form, the Accelerationist philosopher Nick Land is even nihilistic: “The demise of humanity is probably in the cosmic interest.” Ominously, the “godfather of AI,” Geoffrey Hinton, has left his position at Google in order to speak openly about the dangers presented by AI – adding that a part of him now regrets his life’s work (NYTimes, May 1). Concern is spreading – another Google alumnus, AI ethicist and podcaster Tristan Harris, recently appeared on Bill Maher’s TV show to sound the alarm, and even the blasé Maher was left unsettled.
These deep thinkers all put a dystopian slant on all things AI. The main fear is that the machines will “take over.” This threat is real and is also connected to issues of social hierarchy and class struggle that have been in motion for millennia now. Since the dawn of civilization, inequality and hierarchy have gone hand in hand with technology and social complexity, and these trends are on the rise again. In recent times, the “30 glorious years” of the post-WWII period saw the growth of the “middle class” and an opening of economic opportunity. However, since the Reagan era, things in the US and elsewhere have been heading in the opposite direction: in the US, in real terms, working-class income has not gone up since 1980, while for the top 10%, and even more the top 1%, life has become very comfortable indeed; tellingly, the word millionaire has ceded its place to billionaire as the measure of true wealth. Technology has certainly contributed to this as computers and computer chips have inserted themselves into all the nooks and crannies of the economy: it is an axiom of capitalism that it pays to replace people you have to pay with machines you own outright – “ownership of the means of production” and all that, again Marxism 101.
Another technological “gorilla in the room” is AI enhancement of the individual human being. Indeed, today, most dramatically, we are seeing a physical merge of humankind and machinekind taking place, what with nanobots, brain implants, genetic engineering, etc. By way of example, even before the launching of OpenAI, Elon Musk started a company, Neuralink, which is working on chip implants for the brain that would allow humans to control devices through their thoughts; and Neuralink is not alone: one often sees demonstrations of this kind of thing on TV involving seriously injured people recovering some bodily function – one example is Anderson Cooper’s CNN program on Dec. 3, which featured the work of a Dutch company whose AI software and brain implants were enabling a wheelchair-bound man to walk again. The concern is that these new medical technologies will serve not to improve the health of humankind generally but rather to provide enhancements (intelligence, longevity, …) for an elite subset of the population – an elite that will sit atop a new caste system. Activist thinker Bill McKibben warns that a “genetic divide” will be created as the rich alone will have access to these enhancements: since “low level” jobs are not threatened by these developments but “high level” work is, the result will be greater inequality, where the economic elite becomes smaller and biologically enhanced while the professional class all but disappears.
So the smart money is betting that once again a jump forward in technology will lead to hierarchy and domination and maybe worse – which is very consistent with the pattern that goes back to the development of agriculture some 12,000 years ago. Another concerning phenomenon with a long history is that progress in technology can reduce the importance of human intelligence in human survival, and therefore in human evolution – the symbiosis of humankind and machinekind at work. This point was already made by Socrates: in the Phaedrus, Plato quotes Socrates as follows: “If men learn this [writing], it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” For more analysis along these lines, click HERE.
Then there is the HAL Problem, where the machines go rogue. There is also the Control Problem, where the machines simply take over and reduce humanity to blissful servitude.
But not to worry – this apocalypse might all be to the good. Futurists like the Transhumanists, Cybertotalists and Prometheists argue that the role of the human race in galactic history is to serve as a pass-through for the introduction of intelligence into the universe. So humans will have given life to intelligent machines, who will then take over and export precious intelligence around the galaxies – reassuring, perhaps, but admittedly not quite the magical future for humans envisioned by Shakespeare’s heroine Miranda in The Tempest: “How beauteous mankind is! O brave new world!”