AI VI: Towards the Singularity

The next decade (2020-2030) will see the 3rd Wave of Artificial Intelligence (AI). Given the dazzling progress during the 2nd Wave, expectations are high. Another reason for this optimism is that technology feeds on itself and continually accelerates – for example, Moore’s Law: the power of computer chips doubles roughly every 18 months. For the coming 3rd Wave, the futurologists predict that new systems will be able to learn, to reason, to converse in natural language and to generalize! A tall order, indeed.
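The back-of-envelope arithmetic behind that claim is worth making explicit – an 18-month doubling period compounds to roughly a hundredfold increase over the decade:

```python
# Back-of-envelope Moore's Law arithmetic: if chip power doubles
# every 18 months, how much does it grow over the decade 2020-2030?
months = 10 * 12          # ten years
doublings = months / 18   # about 6.67 doublings
growth = 2 ** doublings
print(f"{growth:.0f}x")   # on the order of a hundredfold
```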
One problem for this kind of progress is that current deep learning systems are muscle-bound and highly specialized for a specific domain. They depend on huge training sets and are supervised to master specific tasks. One side effect of all this is that, despite the large training sets, the systems are not good at handling truly exceptional situations that can pop up in the real world (“black swans” to economists, “long-tail distributions” to mathematicians). This is a particular problem for self-driving vehicles: human drivers, with their ability to deal with the unexpected, can handle an unusual situation while self-driving systems still cannot. Tesla’s self-driving vehicles have been involved in fatal accidents because of their inability to react to an unusual situation. Another example (2016): when lines of salt were laid down on a highway in anticipation of a snowstorm, Tesla’s self-driving vehicles confused the salt lines with lane-boundary markings.
In fact, critics have long argued that AI systems cannot “understand” what it is that they are doing. A classic attack was mounted by Berkeley Philosophy professor John Searle as early as 1980 with his example of the “Chinese Room.” Roughly put, suppose there is an AI system that can read and respond in written Chinese; suppose further that a person in a room can take something written in Chinese characters and then (somehow) perform exactly the same mechanical steps on the data that the computer would in order to produce an appropriate response in Chinese characters. Although quite impressive, this performance by the person still would not mean that he or she understood the Chinese language in any real way – the corollary being that the AI system cannot be said to understand it either. Note that Searle’s argument implies that, for him at least, the Turing Test will not be a sufficient standard to establish that a machine can actually think!
For Searle’s talk on all this at Google’s Silicon Valley AI center, including an exchange with futurologist Ray Kurzweil, click HERE. For the record, Google also has AI centers in London, Zurich, Accra, New York, Paris and Beijing – not a shabby address among them.
Another serious issue is that biases can all too easily be embedded into a system. Well-known examples are face-recognition systems that perform badly on images of people of color. Google itself faced a public relations nightmare in 2015 when its photo-tagger cruelly mislabeled images of African-Americans.
In her thoughtful and well-written book Artificial Intelligence: A Guide for Thinking Humans (2019), AI researcher Melanie Mitchell describes how machine learning systems can fall victim to attacks. In fact, a whole field known as adversarial learning has developed in which researchers seek out weaknesses in deep learning systems. And they are good at it! Indeed, it is often the case that changes to a photo that are imperceptible to humans can force a well-trained system to completely misclassify the image. Adversarial learning exploits the fact that we do not know how these AI systems reach the conclusions they do: a tweak to the data that would not affect our judgment can confuse an AI system because it is reacting to clues in the data that are completely different from the ones humans react to.
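To make the mechanism concrete, here is a toy illustration (with made-up numbers, not an example from Mitchell’s book): a tiny linear “classifier” flips its decision when each input feature is nudged by a small amount in the direction indicated by the model’s own weights – the same principle behind gradient-based adversarial attacks on deep networks, where the nudges are too small for a human to notice.

```python
# Toy adversarial perturbation on a linear classifier (hypothetical
# weights and input, chosen for illustration).

weights = [0.5, -1.2, 0.8, 0.3]   # a "trained" linear model (made up)
x = [1.0, 0.9, 0.2, 0.4]          # an input the model scores as negative

def score(v):
    # Weighted sum of the features -- the model's "confidence."
    return sum(w * vi for w, vi in zip(weights, v))

def classify(v):
    return "positive" if score(v) > 0 else "negative"

# Nudge each feature by epsilon in the direction that raises the score
# (the sign of the corresponding weight) -- an FGSM-style step.
epsilon = 0.2
x_adv = [vi + epsilon * (1 if w > 0 else -1) for vi, w in zip(x, weights)]

print(classify(x))      # "negative"
print(classify(x_adv))  # "positive" -- the label flips
```

No feature moved by more than 0.2, yet the decision reversed; in a high-dimensional image, the same trick spreads an even tinier change across thousands of pixels.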
Indeed, we do not really understand how an AI system “reasons.” That is, we do not know “why” the system does what it does; its set of “reasons” for a conclusion might not be anything like what ours would be. A related issue is that the systems cannot explain why and how they arrive at their results. This is an issue for mathematical algorithms as well – there it is a deep problem because extracting an explanation can bring us right back to the dreaded specter of Combinatorial Explosion!
Mitchell also remarks on how IBM’s Watson, though renowned for its Jeopardy victory, has not proved all that successful when ported to domains like medicine and finance. Its special skill at responding to single-answer queries does not carry over well to other areas. Also, for those who watched the shows, the human players outperformed Watson on the more difficult “answers” while Watson’s strength was fielding the easier ones almost instantaneously.
However, futurologists hold that the 3rd Wave of AI will break through these barriers, developing systems that will be proficient at perceiving, learning, reasoning and generalizing; they will not require such huge training sets or such extensive human supervision.
In any case, the area where AI will have its most dramatic impact going forward is Bionics, and it will bring us into the “transhuman era”: the era when, using science and technology, the human race evolves beyond its current physical and mental limitations.
To start, life expectancy will increase. Futurologists “joke” that as we age, accelerated medicine will be able to buy us 10 more years every 10 years, thus making us virtually immortal.
In fact, we are already putting computers—neural implants—directly into people’s brains to counteract Parkinson’s disease and tremors from multiple sclerosis. We have cochlear implants that restore hearing. A retinal implant has been developed that provides some visual perception for some blind individuals, basically by replacing certain visual-processing circuits of the brain.
Recently Apple and Stanford Medicine announced an app in which an Apple Watch constantly checks for cardiac issues and, if something is detected, prompts a call to the wearer’s iPhone from a telehealth doctor. Indeed, in the future we will be permanently connected to the internet for monitoring and for cognitive enhancement – the surveillance state on steroids.
We have already reached the point where there are AI-based prostheses, such as artificial hands that communicate with receptors implanted in the brain. For example, the BrainGate company’s technology uses dime-sized computer chips that connect the mind to computers and the internet: the chip is implanted into the brain and attached to connectors outside the skull which are hooked up to external computers; in one application, the computers can be linked to a robotic arm that a paralyzed patient can control with his or her thoughts.
N.B. BrainGate is headed up by entrepreneur Jeff Stibel, cofounder with the late Kobe Bryant of the venture capital firm Bryant-Stibel. Indeed, Kobe Bryant was a man of parts.
Nanotechnology is the manipulation of matter at the atomic and molecular scale, down to the level of one billionth of a meter: to track the pioneering basic research in this field, round up some of the usual suspects – Caltech, Bell Labs, IBM Research, MIT. This technology promises a suite of miracles far into the future. By way of example, a nanobot will sail around the insides of the body searching out and destroying cancer cells. It is expected that nanobots embedded in the brain will be capable of reprogramming neural connections to enhance human intellectual power.
Indeed, we are at the dawn of a new era, where biology, mathematics, physics, AI and computer science more generally all converge and combine – a development heralded in Prof. Susan Hockfield’s book The Age of Living Machines (2019). Reading Hockfield’s book is like reading Paul de Kruif’s Microbe Hunters – the excitement of the creation of a science. For example, viruses are being employed to build lithium-ion batteries using nanomaterials – to boot, these new batteries will be environmentally safe!
So the momentum is there. A confluence of sciences is thrusting humanity forward to a Brave New World, to the Technological Singularity, where machine intelligence catches up to human intelligence. The run-up to the Singularity will already have a profound impact on human life and human social structure. Historically, humanity and its technology have co-evolved, as seen in the record of human biological evolution and its accompanying, ever-accelerating technological progress. But this time there is reason to fear that dystopia awaits us and not a better world. The surveillance state is here to stay. The new developments in Bionics are bringing us to the point where the specter of a caste of Nietzschean supermen looms large – there is no reason to suppose that Bionics will be uniformly available to the human population as a whole; worse, many think that race as well as class will play a role in future developments. The list goes on. More to come. A story to be continued.

2 thoughts on “AI VI: Towards the Singularity”

  1. As ever, most interesting. Thanks for this.

    As I read the essay, there were a number of places where I copied a sentence or two, intending to comment on each, but I decided in the end that doing so was too left-brained/analytic, when that is not at all what I doubt about AI; machines are already mechanically faster than we are, and at objective, logic-flow brain processes I have no doubt that they will beat us if they already haven’t. As an analytic tool AI seems near boundless.

    But to capture what I have said in response to past posts on this subject (it has always intrigued me) I will introduce a word to my ongoing comments: Synaesthesia, a process we are all born with to varying degrees and which most people generally lose/weaken; some folks retain a greater workable capacity for it because of their unique wiring and relevant nurturing. It is the ability to cross reference/relate the input we get from our senses: Hearing color, seeing sound, etc. It is most prominent in artists and other non-linear thinkers; it also runs in families; my father had it to some degree as do I.

    As a toddler– about age 3– my younger son, Chris, had a real fondness for Stravinsky’s “Firebird”– a very strange piece of music for a little kid to like. He would, at times, bring the record over to me from my collection and ask for it to be played.

    One weekend day at the De Young Museum in San Francisco, he was with his mother while I, walking with his brother, Brian, was across the room; we were at some modern, abstract art exhibit there. She raced over and asked me and Brian to come over to one picture she and Chris had been looking at, and she asked Chris to tell me what the picture was. He said, “Firebird.” A small note next to the painting, well above his head? It said, “The artist’s interpretation of Stravinsky’s ‘Firebird’.” The picture looked to be naught but an abstract series of lines and paint smears and the like: no recognizable image, no words. He could not yet read, and there had been no one near them for him to hear someone read the title card.

    At another museum, shortly thereafter in Berkeley (he was still ~3, maybe just 4), I was there with him and his brother at another modern art exhibit (I was quite into that at the time). One painting was essentially a black canvas, roughly painted with thick impasto and swirly lines, with an irregularly-shaped object in the middle, an intensely red color, done in the same paint style. I asked him – as I was beginning to do whenever we went to art museums (often, back then) – what he thought it was. He said, “A birdie.” The title of the piece? “The Raven.” His hit rate on WHAT the object was came to a staggering 75-80% in those days. I doubt I would have gotten half of that “titling,” though I could have offered MY interpretation of what I saw.

    No AI will do that… for I believe it will never be able to. Nor will it catch subtle turns of phrase, or non-linear grasps of relationships that are not at all objective, especially when a large part of a message may be in the body language/tone (which latter starts getting close to what AI may be able to do, given the predictability of gestures and tones).

    We can do it because we are more than time-bound, linear-oriented machines. And, for that, a 3 year old child could and did do it, and so can others older, if not all– actually mostly not all. Yet he, and I, still can… if I not so well as once. 🙂

    1. Bringing up synaesthesia is a most interesting point and a deep remark. Some people have remarkable forms of it, but even a simple version is a powerful tool that humans use all the time – singing a phone number so you will remember it, seeing French words as you speak so you can see the liaison coming, and so on. The closest thing in Computer Science that comes to mind is parallel algorithms, but that only captures a tiny part of it. Ripe for a PhD thesis and beyond.
