DARPA stands for Defense Advanced Research Projects Agency, a part of the US Department of Defense that has played a critical role in funding scientific projects since its founding (as ARPA) in 1958, among them the ARPANET, which morphed into the Internet and the World Wide Web. DARPA has also been an important source of funding for research into Artificial Intelligence (AI). Following a scheme put forth by John Launchbury, then director of DARPA’s I2O (Information Innovation Office), the timeline of AI can be divided into three parts like Caesar’s Gaul. The 1st Wave of AI ran from 1950 to the turn of the millennium. The 2nd Wave of AI runs from 2000 to the present. During this period advances continued in fields like expert systems and Bayesian networks; search-based software for games like chess also advanced considerably. However, it is in this period that Connectionism – imitating the way neurons are connected in the brain – came into its own.
The human brain is one of nature’s grandest achievements – a massively parallel, multi-tasking computing device (albeit rather slow by electronic standards) that is the command and control center of the body. Some stats:
It contains about 100 billion nerve cells (neurons) — the “gray matter.”
It contains about 700 billion nerve fibers (1 axon per neuron and 5-7 dendrites per neuron) — the “white matter.”
The neurons are linked by 100 trillion connections (synapses) — structures that permit a neuron to pass an electrical or chemical signal to another neuron or to a target cell.
The Connectionist approach to AI employs networks implemented in software, known as “neural networks” or “neural nets,” to mimic the way the neurons in the brain function. Connectionism proper begins in 1943 with a paper by Warren McCulloch and Walter Pitts which provided the first mathematical model of an artificial neuron. This inspired the single-layer perceptron network that Frank Rosenblatt introduced in 1958; the accompanying Perceptron Learning Theorem – a guarantee that the training procedure converges on any problem the perceptron can represent – showed that machines could learn! However, this cognitive model was soon shown to be very limited in what it could do (Marvin Minsky and Seymour Papert proved in 1969 that a single-layer perceptron cannot even compute the XOR function), which did dull the enthusiasm of AI funding sources – but the idea of machine learning by means of neuron-like networks was established and research went on.
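Rosenblatt’s learning rule is simple enough to sketch in a few lines. Here is a minimal, illustrative Python version; the task (the logical AND of two inputs), the learning rate, and the epoch count are choices made for this sketch, not details of the original perceptron work.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule.
# Illustrative task: learning the logical AND of two inputs.

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of ((x1, x2), target) pairs, with targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = target - output       # zero when the prediction is right
            w1 += lr * error * x1         # nudge the weights toward the target
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
for (x1, x2), target in AND:
    assert (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == target
```

The theorem guarantees that this loop settles on correct weights whenever a straight line can separate the two output classes – which is exactly why it fails on a problem like XOR, where no such line exists.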
So, already by the 1980s, the connectionist model was expanded to include more complex neural networks, composed of large numbers of units together with weights that measure the strength of the connections between the units – in the brain, if enough input accumulates at a neuron, it fires, sending a signal along its axon to the synapses that connect it to other neurons. The weights model the effects of these synapses. Neural nets learn by adjusting the weights according to a feedback method which reacts to the network’s performance on test data, the more data the better – mathematically speaking this is a kind of non-linear optimization, driven by the chain rule of calculus (the heart of the backpropagation algorithm), plus statistics and more.
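That feedback loop – predict, measure the error, nudge the weights – can be sketched with a single artificial neuron. The example below is a hand-rolled illustration, not taken from any particular library; the task (the logical OR function) and the learning rate are assumptions chosen for clarity.

```python
import math

# One artificial neuron trained by gradient descent: an illustrative
# sketch of the "adjust the weights via feedback" loop described above.

def sigmoid(z):
    """Squash the weighted input into an output between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w1 = w2 = b = 0.0
lr = 1.0  # learning rate, an illustrative choice

for _ in range(2000):
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)  # the neuron's output
        grad = pred - target                   # gradient of the cross-entropy loss
        w1 -= lr * grad * x1                   # move each weight against the gradient
        w2 -= lr * grad * x2
        b -= lr * grad

for (x1, x2), target in data:
    assert round(sigmoid(w1 * x1 + w2 * x2 + b)) == target
```

Real networks stack thousands or millions of such units in layers, but the principle is the same: the error signal flows backward through the chain rule and every weight is adjusted a little at a time.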
These net architectures have multiplied and there are now not only classical neural nets but also convolutional neural nets, recurrent neural nets, neural Turing Machines, etc. Along with that, there are multiple new machine learning methods such as deep learning, reinforcement learning, competitive learning, etc. These methods are constantly improving and constitute true engineering achievements. Accordingly, there has been progress in the handling of core applications like text comprehension and translation, vision, sensor technology, voice recognition, face recognition, etc.
Popular apps such as eHarmony, Tinder, Ancestry.com and 23andMe all use AI and machine learning in their mix of algorithms. These algorithms are purported to have learned what makes for a happy marriage and how Italian you really can claim to be.
IBM’s Watson proved it had machine-learned just about everything with its victory on Jeopardy! in 2011; its engine is now being deployed in areas such as cancer detection, finance and eCommerce.
In 2014, Google purchased DeepMind, a British AI company, and soon basked in the success of DeepMind’s Go-playing software. First there was AlphaGo, which stunned the world by beating champion player Lee Se-dol in a five-game match in March 2016 – something that was thought to be still years away, as the number of possible positions in Go dwarfs that of Chess. But things didn’t stop there: AlphaGo was followed by AlphaGo Master, which defeated world champion Ke Jie in 2017, and then by AlphaZero. In fact, AlphaZero can learn how to play multiple games such as Chess and Shogi (Japanese Chess) as well as Go; what is more, AlphaZero does not learn by playing against human beings or other systems: it learns by playing against itself – playing against humans would just be a waste of precious time!
Applying machine learning to create a computer that can win at Go is a milestone. But applying machine learning so that a robot can enjoy “on the job training” is having more of an impact on the world of work. For example, per a recent NY Times article, an AI-trained robot has been deployed in Europe to sort articles for packing and shipping for eCommerce. The robot is trained using reinforcement learning, an engineering extension of the mathematical optimization technique of dynamic programming (the same family of methods GPS systems use to find the best route). This is another example where the system learns pretty much on its own; it is also an example of serious job-killing technology – one of the unsettling things about AI’s potential to force changes in society even beyond the typical “creative destruction” of free-market capitalism.
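Reinforcement learning in miniature: the sketch below uses tabular Q-learning, one standard method in this family (not necessarily the one the sorting robot uses), on a hypothetical toy task – an agent on a line of five cells learns, by trial, error and reward, to walk right to the goal. All the parameters are illustrative choices.

```python
import random

# Tabular Q-learning on a toy task: an agent on a line of five cells
# (0..4) learns to walk right to the reward waiting at cell 4.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                        # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                      # training episodes
    s = random.randrange(GOAL)            # start somewhere short of the goal
    while s != GOAL:
        if random.random() < epsilon:     # explore occasionally...
            a = random.choice(ACTIONS)
        else:                             # ...otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Bellman update: blend the observed reward with the best future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: walk right from every cell short of the goal.
for s in range(GOAL):
    assert max(ACTIONS, key=lambda act: Q[(s, act)]) == +1
```

The Bellman update in the middle is the dynamic-programming heart of the method: the value of a move is the reward it earns now plus a discounted estimate of the best that can follow – the same backward-looking logic a route planner uses to stitch the best path out of best sub-paths.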
Another way AI is having an impact on society is through surveillance technology: from NSA eavesdropping to hovering surveillance drones to citywide face-recognition cameras. London, once the capital of civil liberties and individual freedom, has become the surveillance capital of the world – but (breaking news) Shanghai has already overtaken London in this dystopian competition. What is more, we are now subjecting our own selves to constant monitoring: our movements traced by our cellphones, our keystrokes logged by social media.
In the process, the surveillance state has created its own surveillance capitalism: our personal behavioral data are amassed by AI-enhanced software – Fitbit, Alexa, Siri, Google, Facebook, … ; the data are analyzed and sold for targeted advertising and other feeds to guide us in our lives; an example: as one googles work on machine intelligence, Amazon drops ads for books on the topic (e.g. The Sentient Machine) onto one’s Facebook page. This is only going to get worse as the Internet of Things puts sensors and listening devices throughout the home and machines start to shepherd us through our day – a GPS for everything, adieu free will! For the in-depth story of this latest chapter in the history of capitalism, consult Shoshana Zuboff’s The Age of Surveillance Capitalism (2019).
Word to the wise: machine intelligence is one thing but do avoid googling eHarmony or Tinder – the surveillance capitalists do not know that’s part of your innocent research endeavors.
Moreover, there is the emerging field of telehealth: the provision of healthcare remotely by means of telecommunications technology. In addition to office visits via Zoom or Skype or WhatsApp, there are wearable devices that monitor one’s heart function and report via the internet to an algorithm that checks for abnormalities; such devices are typically worn for a week or so and then have to be carefully returned. Recently Apple and Stanford Medicine have produced an app whereby an Apple Watch checks constantly for cardiac issues and, if something is detected, prompts a call to the wearer’s iPhone from a telehealth doctor. Indeed, in the future we will be permanently connected to the internet for monitoring – the surveillance state on steroids.
In fact, all this information about us lives a life parallel to our own out in the cloud – it has become our avatar, and for many purposes it is more important than we are.
The English philosopher Jeremy Bentham is known for his Utilitarian principle: “it is the greatest happiness of the greatest number of people that is the measure of right and wrong.” From the 1780s on, Bentham also promoted the idea of the panopticon, a prison structured so that the inmates would be under constant surveillance by unseen guards. To update a metaphor from French post-modernist philosopher Michel Foucault, with surveillance technology we have created our own panopticon – one in which we dwell quietly and willingly as our every keystroke, every move is observed.
Some see an upside to all this connectivity: back in 2004, Google’s young founders told Playboy magazine that one day we would have direct access to the Internet through brain implants, with “the entirety of the world’s information as just one of our thoughts.” This hasn’t happened quite yet, but one wouldn’t want to bet against Page and Brin. Indeed, we are now entering the 3rd Wave of AI, which on the DARPA schedule lasts until 2030 – the waves get shorter as progress builds on itself. So what can be expected in the next decade, in this 3rd Wave? And then what? More to come. Affaire à suivre.