Apocalypses Now II

Climate Change is one threatening apocalypse. Here is another.

Artificial Intelligence

Everywhere you turn these days, Artificial Intelligence pops up. Indeed the press has been all over it. Thus for its Sept. 7 edition, Time Magazine, in the tradition of its Person of the Year, published a cover featuring the “Time 100 AI,” billed as “The 100 Most Influential People in Artificial Intelligence.” This past summer, Vanity Fair and The Atlantic ran long pieces on the AI revolution we find ourselves in the middle of. Virtually every day for months now, the self-anointed newspaper of record, the New York Times, has published at least one piece on AI and its threats and promises, sometimes as many as four articles. AI is impacting everything, it seems, from dating sites to the military.
In fact, from the start, the US Military has been a backer of research and development in AI, and the principal military funding agency for this has been the Defense Advanced Research Projects Agency (DARPA). (BTW Also starting in the 1960s, DARPA funded the ARPANET, the source of the Internet, the World Wide Web and all that. These two areas have become symbiotic, as it is the Internet that gives AI companies access today to inexhaustible stores of data for training their engines. Thus, two more examples of how taxpayer-funded R&D led to untold riches for the private sector.) The Military Industrial Complex is still very much interested in AI – a recent piece in the defense industry online press has the breezy title “3 ways DARPA aims to tame ‘strategic chaos’ with AI.” (BTW The site for this article, breakingdefense.com, is underwritten by Lockheed Martin.)
The DARPA people have put forth a scheme dividing the development of AI thus far into two waves, to be followed now by a 3rd Wave which will bring us close to the Singularity, where machines become more intelligent than humans. In DARPA’s terminology, the key tool of the current wave is “statistical learning,” which is based on a technology known as Connectionism. The Connectionist approach to AI employs networks implemented in software, known as “neural networks” or “neural nets,” to imitate the way the neurons in the brain function. “Imitate” is the operative word since the human brain has some 10¹¹ neurons and 10¹⁴ connections – that is, 100 billion neurons and 100 trillion connections – far beyond what can be done in silicon.
Connectionism begins with a paper by McCulloch and Pitts in 1943 which provided the first mathematical model of an artificial neuron; this was followed by the single-layer perceptron of Frank Rosenblatt in 1958 and the Perceptron Learning Theorem, which showed that perceptrons could learn! However, this cognitive model was soon shown to be too limited in power. So perceptrons were strengthened to multiple-layer neural networks, which are composed of large numbers of units together with weights that measure the strength of the connections between the units. These weights model the strength of the synapses that link one neuron to another. Neural nets learn by adjusting the weights according to a feedback method (“backpropagation”) which reacts to the network’s performance on test data, the more data the better. (For a thorough tutorial on neural nets, click HERE.)
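Rosenblatt’s learning rule itself fits in a few lines. Here is a minimal sketch in Python (a toy for illustration only; the function names are this writer’s own): a single perceptron learns the logical OR function by nudging each weight whenever its prediction errs – the same “adjust the connection strengths in response to feedback” idea that backpropagation scales up to many layers.

```python
# Toy perceptron learning the OR function via Rosenblatt's rule.
# Since OR is linearly separable, the Perceptron Learning Theorem
# guarantees these updates eventually converge.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches the targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # feedback signal: -1, 0, or +1
            w[0] += lr * err * x[0]      # strengthen or weaken each connection
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The OR truth table as training data
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, x)` reproduces the OR table; what a perceptron cannot learn (famously, XOR) is exactly the limitation that pushed the field to multiple layers.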
These net architectures (neural nets, convolutional neural nets, recurrent neural nets, …) and associated learning methods (deep learning, reinforcement learning, competitive learning, large language models, transformer models, …) are constantly improving and constitute a true engineering achievement. Driving the advances in software has been the dazzling progress in hardware. Moore’s Law, that the power of chips doubles every two years, has been in effect since the late 1960s, and AI computing power has been multiplied further by parallel computing, by much faster floating point units (FPUs) for numerical processing, etc. In fact the darling of the connectionist world is the Nvidia GPU chip, which was initially developed for graphics processing and computer games; now on Twitter (aka X), one sees ads for processing time on clusters of these chips at what look to be very reasonable rates. And then Arm (the British chip designer originally called Advanced RISC Machines Ltd) had a very recent, very successful IPO which lifted the stock market out of the doldrums. To boot, Google and Meta have been developing their own AI chips; for some idea of the computing power needed for Meta’s large language model, LLaMA, click HERE. And then there is Cerebras with its imposing Wafer Scale System, click HERE.
BTW An important special feature of these processors is the floating point hardware; in many areas of computing – spreadsheets, graphics, scientific programming, updating the weights connecting the nodes in a neural net, etc. – doing arithmetic with decimals (123.45678 divided by 876.54321) is critical and is performed by a special unit of the hardware, the FPU (Floating Point Unit). But floating point operations are not easy for computers: they are time consuming and tricky to implement. As an example of the first, this writer, working on the development of a Constraint Logic Programming language, once tested whether at least one of a pair of floating point numbers was 0.0 by multiplying the two and then testing whether the result was 0.0 – this slowed things down measurably, and the source of the problem was only spotted by a persistent member of the lab (a Russian-speaking undergraduate from Brighton Beach). As an admittedly more dramatic example of the second, in 1994 Intel had to issue a recall of its new Pentium processor because of a bug in the implementation of the FPU; graciously sharing the financial burden with the American taxpayer, Intel reported in its annual report that year “a $475 million pre-tax charge … to recover replacement and write-off of these microprocessors.” But here, the point is that the Nvidia FPU carries fewer digits of precision than a classical FPU yet is much faster – and it is precise enough for neural nets.
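The trickiness of floating point is easy to demonstrate. Here is a short Python sketch (illustrative values chosen by this writer): beyond being needlessly slow, the multiply-then-test-for-zero trick can give a flatly wrong answer, because the product of two tiny but nonzero numbers can underflow to exactly 0.0.

```python
# Pitfall 1: testing "is either number zero?" by multiplying.
a, b = 1e-200, 1e-200          # both clearly nonzero doubles
assert a != 0.0 and b != 0.0
print(a * b == 0.0)            # True: 1e-400 underflows double precision to 0.0

# The safe (and faster) test compares each operand directly:
print(a == 0.0 or b == 0.0)    # False: the correct answer

# Pitfall 2: rounding error, the everyday face of limited precision.
# Neither 0.1 nor 0.2 is exactly representable in binary, so:
print(0.1 + 0.2 == 0.3)        # False
```

Reduced-precision formats of the kind used on AI chips simply accept more of this rounding in exchange for speed – a trade neural nets tolerate well.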
In the first two decades of this century, AI steadily made headlines: the virtual assistant (Amazon’s Alexa, 2014), self-driving cars (Tesla’s Autopilot, 2015), winning on the American television quiz show Jeopardy! by defeating top champions (IBM’s Watson, 2011), outdueling the world’s top Go player (AlphaGo, 2017), … . The transition toward DARPA’s 3rd Wave was ushered in with brio by the DALL-E system (pace Salvador) that generates images from text (OpenAI, 2021) and by the chatbot ChatGPT (OpenAI, 2022), among others. The ChatGPT system is an example of “generative AI” – it generates coherent, substantial text in grammatical English (or Spanish, French, German, Portuguese, Italian, Dutch, Russian, Arabic, Chinese, …) and it is trained using a formidable technique called “deep learning.” Since then, the general release of a more robust ChatGPT (2023) has created a juggernaut, joined by systems such as Google’s Bard (2023), the AI-enhanced Microsoft Bing (2023), IBM’s watsonx (2023), … .
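What “generating text from statistics” means can be caricatured in a few lines of Python – emphatically not how ChatGPT works (which trains a transformer with billions of weights over vast corpora), but a toy bigram model of this writer’s devising that shows the bare idea: learn which words follow which, then sample.

```python
import random
from collections import defaultdict

# "Training" corpus: a tiny bit of text, split into words
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words followed each word in the corpus
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# "Generation": start from a word and repeatedly sample a plausible successor
random.seed(0)                 # fixed seed, so the toy is reproducible
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(following.get(word, corpus))
    output.append(word)
print(" ".join(output))        # a 7-word string of locally plausible English
```

Scale the counts up to trillions of words and replace the lookup table with a deep neural net predicting the next token, and the squiggly outline of a large language model appears.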
Politics also intervenes. In a remarkable turn, Sam Altman (the CEO of OpenAI) and other leaders in the field appeared before a Senate subcommittee (May 16) and actually asked for government regulation – repeat, Silicon Valley techies asking for government interference in their libertarian universe: unimaginable. But IMHO this is a Parthian Retreat – they’ll turn and make Congress do their bidding, using the cover of government regulation to avoid liability when their products misfire or are misused. The US Congress doesn’t have a clue what to do and so will just claim credit for the industry-friendly ideas the AI people feed them, an old trick of the K Street lobbying industry.
On Sept 13, there was another extraordinary meeting of AI magnates with Congress, in a closed-door session set up by Senator Chuck Schumer. The A-list of guests included Elon Musk (Neuralink), Sundar Pichai (Google), Mark Zuckerberg (Meta), Bill Gates (Microsoft), Sam Altman (OpenAI), Satya Nadella (Microsoft), Jensen Huang (Nvidia), Alex Karp (Palantir) and Jack Clark (Anthropic). Senator and tech adversary Josh Hawley loudly compared the guests to the robber baron monopolists of the Gilded Age, making a good point for once. Another participant was UC Berkeley researcher Deborah Raji, who works on AI accountability and the threat of misuse; according to the NYTimes, she addressed a question about Tesla and driverless-car safety to Elon Musk, which he cavalierly chose not to answer. Religious leaders are also starting to weigh in on the impact of AI; by way of example, the Sept 13 meeting prompted a piece by John Stonestreet, president of the Colson Center for Christian Worldview, on the topic of ethics and AI, for which click HERE.
AI-generated texts and images dazzle us and confuse us; epistemology is challenged: “artificial intelligence accelerates a growing difficulty discerning who and what is real,” to cite social critic Naomi Klein. And blatant misuse is already upon us: given a recording of someone’s voice, an AI (the new way to refer to one of these systems, an unusual upper-case common noun) can turn text into an almost perfect recitation in that person’s voice. This trick is already being used by scammers to call a bank and give instructions to transfer money from the victim’s account; in the dialogue with the bank employee, the scammer types responses and the AI “reads” the generated text aloud. This kind of trick with the human voice has spread to TikTok, leading to a new flood of misinformation – the NYTimes piece on this has the title “‘A.I. Obama and Fake Newscasters’: How A.I. Audio Is Swarming TikTok.” Dr. Joy Buolamwini, a member of MIT’s Media Lab, heads up the Algorithmic Justice League (AJL), which leads the effort to build “equitable and accountable” AI-based systems; to quote their website, “Unchecked, unregulated and, at times, unwanted, AI systems can amplify racism, sexism, ableism, and other forms of discrimination.” These concerns are not new: for example, there was Google’s embarrassing misadventure with facial recognition software – for reporting from Forbes, click HERE.
BTW The attempt by the AJL to trademark their name has been challenged by DC Comics, who fielded a Justice League of their own back in the mid 20th Century – superheroes like Superman, Wonder Woman, the Flash, Batman, … in league to defend us from supervillains such as Lex Luthor, The Joker, Circe, … . For more on this unusual litigation, click HERE for reporting from Wired Magazine. One more thing: most recently the old unalgorithmic Justice League gave birth to an eponymous Warner Bros. film (2017) which bombed at the box office despite an impressive cast: Ben Affleck, Gal Gadot, Jeremy Irons, … !
At first educators cried foul and said generative AI must be kept out of the classroom. But then they made a kind of “Marxism 101” analysis and realized that richer kids have computers at home and can afford the price of the software, putting poorer kids at a real disadvantage; as a result, they now accept that they have no choice but to embrace the technology. When it comes to those college admissions essays, though, the academics are still in a quandary. Other applications are less problematic; for example, the Cape Cod Times reported how a local real estate agent was using ChatGPT to jazz up listings: “Coquette cottage with view of Bay, … ” – generative AI at work as a counter to “writer’s block,” to use the agent’s own phrase.
AI is well on its way to becoming pervasive – the goal of every new technology. In fact, Madison Avenue is in on the act, and firms that have been using software with “intelligence” (be it in the form of connectionism, rule-based systems, Bayesian networks or just clever mathematical algorithms like a GPS) are coming out of the closet and jumping on the AI bandwagon; thus tax firms now advertise on Sirius Radio that they use AI to get your liability down, the logistics software company One Network Enterprises (ONE) labels its offerings as AI-empowered, … . And playwrights are in on the act: on Oct 6th and 7th, the Alliance Française in NYC hosted a play with three characters, one of whom is created in the play by AI – tickets cost $30.00.
Elite universities can’t stay away – the word “intelligence” is catnip to them. The Sloan (General Motors CEO) Business School at MIT is advertising a six-week online course for executives, “Artificial Intelligence for Business Strategy”; UC Berkeley has followed suit: the Haas (Levi Strauss CEO) Business School has a two-month online course, “Artificial Intelligence: Business Strategies and Applications”; Columbia has announced that Maria Ressa, the Nobel Peace laureate, is joining the faculty at the Institute of Global Politics (IGP) – to quote their website: “Ressa will lead several projects related to the role of artificial intelligence in democracy”; not to be left out, Harvard has announced that its Radcliffe Institute is fielding a Zoom course, “How Do We Improve Control of AI Systems?”
The Customer Relationship Management company Salesforce advertises a webcast on Facebook called “How Generative AI Will Transform the Media and Entertainment Industry”; Salesforce also ran a full-page ad recently in the NYTimes positioning itself as a righteous player in the new AI world; in giant letters, with a Western movie town in the background, the ad asked “If AI is the Wild West, who’s the sheriff?” – all with the forceful tag line at the bottom, “Bringing Trust to AI.” Given the potential for evil in AI, the Salesforce message makes good marketing sense. Along these lines, Microsoft has announced that it will pay the legal fees of any of its clients who are sued because of Microsoft’s AI products; IBM has announced that “it would indemnify companies against copyright or other intellectual property claims for using its generative AI systems.”
Going abroad, the European Central Bank is studying the use of AI to better understand inflation and to improve its support and oversight of the big European banks; and that British chip company Arm and its parent company, the Japanese giant SoftBank, are in negotiations with Jony Ive, the designer of the iPhone, and Sam Altman, CEO of OpenAI, to create a new device to deliver the benefits of AI to the masses – an iPhone on super steroids is in our future!
AI is now an industry all its own and things are moving fast; indeed, they will accelerate even further with improvements in technology and with lower costs. And so the investment boom is on: PitchBook, which tracks AI investments, reports that in the first half of 2023, funding for generative AI startups reached $15.3 billion – and that ain’t hay! And just in: Kneron, a San Diego-based semiconductor startup, announced that it had raised $49M as part of its campaign to commercialize its AI chips – a new rival to Nvidia, Arm and the others. And the beat goes on.
If all these AI developments are not apocalyptic enough, stay tuned for the next installment of Apocalypses Now.