Work on Artificial Intelligence systems continues apace. But one must ask whether this juggernaut could run into obstacles that would seriously slow it down.
Jaron Lanier, a Virtual Reality pioneer and now a researcher at Microsoft, points out that the acceleration in AI so far has been based on hardware (solid state physics/engineering) and algorithms (mathematics/engineering), progress made possible largely by the exponential growth in computing power that the world has known since the 1970s, epitomized by Moore’s Law: the computing power of a chip doubles every 18 months. What has not “accelerated” is the art and science of software development itself.
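To put that doubling rate in perspective, here is a quick back-of-the-envelope calculation in Python (the 18-month figure is the classic formulation; actual chip progress has varied over the decades):

```python
# Moore's Law as classically stated: chip computing power doubles
# every 18 months. Compute the implied growth factor over a span.
def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor implied by a fixed doubling period."""
    return 2 ** (years * 12 / doubling_months)

print(round(moores_law_factor(10)))  # roughly a 100-fold gain per decade
print(round(moores_law_factor(30)))  # about a million-fold over 30 years
```

That million-fold compounding since the 1970s is what carried hardware and algorithms so far while software practice stood comparatively still.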
There are integrated development environments (IDEs) for programmers working in the workhorse computer languages C++ and Java, complete with source code editors, debuggers, profilers, compilers, etc. But even now, debugging is in no way automated, as there are no algorithms as such for debugging. This goes back to the issue of Combinatorial Explosion and those Gödel-like limits on what can be done; for example, by the halting problem, the only way for an algorithm to reliably predict how long an arbitrary program will run is to run the program, alas.
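A tiny Python sketch makes the point concrete. For the famous Collatz iteration below, no known formula predicts how many steps a given input will take; the only reliable way to find out is to run it:

```python
def collatz_steps(n: int) -> int:
    """Count iterations for n to reach 1 under the Collatz rule
    (halve if even, else 3n + 1). No known closed-form formula
    predicts this count -- a small-scale echo of the halting-problem
    limits on analyzing programs without running them."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, far more than its neighbors 26 and 28
```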
But maybe down the road the AI bots or the enhanced humans will be able to create the software science of the future, one that will rationalize the development of the phenomenal new software projects to be undertaken. For now, though, programming is more art than science: in fact, neuroscience research reveals that it is the language areas of the brain that are activated while programming, not the mathematical, logical ones. For one article, click HERE .
BTW, development at the legendary Silicon Valley research center Xerox PARC pioneered features like WYSIWYG editors (what you see is what you get), the GUI (graphical user interface) with windows and icons, the mouse, even Ethernet (the backbone of computer networks). Commercially, Xerox fielded the Alto and then the Dandelion, a machine designed for AI work in the programming language LISP that provided a dazzling IDE. Meanwhile, the savvy Steve Jobs and his Apple engineers visited PARC in 1979 in conjunction with Xerox’s pre-IPO investment in Apple. The Apple people then applied what they learned to build the Lisa and the Macintosh, thinking mass market rather than an expensive, specialized AI workstation aimed at Xerox’s corporate clients.
But, though Moore’s Law is slowing down now if not already over, AI researchers are a resourceful and motivated group: instead of relying on general purpose processors from companies like Intel, they have simply turned to more specialized hardware.
As a case in point, Google developed its own Tensor Processing Unit (TPU), an application-specific chip for neural network machine learning, in 2015. In fact, much of the leap forward in deep learning these last few years was made possible by employing Nvidia’s V100 GPU (graphics processing unit), a line originally designed for video games. (Looking ahead, Nvidia CEO Jen-Hsun Huang has said that the company, like Google, will next be working on a version specially tailored for neural nets.)
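Why do GPUs and TPUs help so much? Because deep learning workloads reduce largely to big matrix multiplications, exactly the operation these chips parallelize. A minimal NumPy sketch of one dense neural-network layer (the sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 784))   # a batch of 32 input vectors
W = rng.standard_normal((784, 256))  # the layer's weight matrix
b = np.zeros(256)                    # bias vector

# One dense layer: y = relu(xW + b). Training and inference consist
# mostly of such multiply-accumulate work, which GPUs/TPUs accelerate.
y = np.maximum(x @ W + b, 0.0)
print(y.shape)  # (32, 256)
```

Stack hundreds of such layers and run them over billions of examples and the appetite for specialized matrix hardware becomes obvious.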
As a newer example of a deep learning system trained on Nvidia processors, we have the third generation Generative Pretrained Transformer (GPT-3), a natural language processing (NLP) system from San Francisco-based OpenAI. It performs most impressively on NLP tasks such as translation and question answering; it even writes articles and blog posts that pass for human-made (nothing is sacred anymore, alas). Using graphics processors, OpenAI trained the underlying neural network on an enormous data set, building a network so vast that it dwarfs even that of Microsoft’s latest powerful NLP system, Turing-NLG.
BTW, OpenAI is a company co-founded by Elon Musk, who has added to his investment in AI by starting Neuralink, a company whose goal is embedding chips in the human brain.
Another force driving progress in AI continues to gain momentum: private sector R&D. In the years following WWII, AI research was for the most part done in universities, largely sponsored by the military. Business interest grew with the development of Expert Systems (aka Rule-Based Systems) and then with the progress made with Connectionist models based on neural nets. In this century, the role of industry and business in the development of AI has become paramount. Not only is there money to be made but, thanks in large part to the Internet, business now has access to extraordinarily rich data sets with abundant information on millions and millions of people. This is exemplified by Chinese companies like Alibaba, ByteDance and Tencent, which provide vertically integrated apps to their “customers.” Where Facebook and Google interact with users through delimited application programs (e.g. even Messenger is a different app from Facebook itself), the Chinese companies offer everything in one product: internet search, social media, messaging, online shopping, in-store payment and much more, all on your phone. (Interestingly, China has gone from a cash economy to a mobile device payment economy without passing through the credit card stage, a development accelerated by these AI technologies.) With that, these companies are integrating the online and offline shopping experience by making the product/consumer relationship symmetric (the product can now seek out the consumer as well), calling it OMO for “Online Merging with Offline.” For a cloying Alibaba video, click HERE .
A key to the success of these Chinese companies is their access to limitless consumer data: their customer base is huge to begin with, and the vertically integrated apps log just about everything those customers do. Kai-Fu Lee, who headed up Google’s foray into China and who is now an AI venture capitalist in China, lays all this out in his book AI Superpowers: China, Silicon Valley and the New World Order (2018). He sums things up with a dash of humor by saying that data is the oil of the AI era and that China is the Saudi Arabia of data.
Indeed, the subject of data is much in the papers these days as we see the Trump administration acting to ban the Chinese apps TikTok and WeChat in the US, charging that the mass of data these apps collect could be used nefariously by the Chinese government or other actors. The Chinese are crying “foul,” but then data is the new black gold. Apropos, because of the role of personal data in all this, the EU is worried about privacy and ethics and plans to regulate AI (NY Times, Feb 17, Business Section, Tech Titans). Such government interference could slow things down considerably – perhaps for the good – but this is unlikely in the US and unthinkable in China.
As per a recent NY Times article, tension with China worries companies and university research labs in the US because, as the authors (ineptly) put it, “much of the groundbreaking working coming out the United States has been powered by Chinese brains” – a point bolstered by statistics on publications and talks at prestigious conferences, etc.
As another example of this superpower AI relationship, we have the TuSimple company, a leader in driverless trucks: it is headquartered in Tucson, AZ, but its research lab is in Beijing.
Indeed, in the world of AI, the US-China relationship has become symbiotic. The thrust of Lee’s book is that China and the US are the two superpowers of AI and stand together as frenemies, leaving Japan, Europe and the rest in the dust. Looked at as a race between the two, Lee gives China the edge because of its access to data, its ferociously competitive business culture and its supportive government policy. But as is happening now, this duel will give the two superpowers a significant advantage over all the others. The side effects will be widespread: e.g., through AI automation, much manufacturing will return to the US, and the loss of manufacturing jobs will leave low-wage countries like Vietnam in the lurch – capital always prefers the machinery it can own to the worker it must pay (as per Karl Marx).
So the AI juggernaut looks poised for yet greater things despite the roadblocks posed by software engineering issues and the plateauing of Moore’s Law. The road to the Singularity, where machine intelligence catches up to human intelligence, is still open. The next decade will see the 3rd Wave of AI, and the futurologists are bullish. More to come.