Artificial Intelligence Or Real Stupidity?

Authored by David Robertson via RealInvestmentAdvice.com,

It’s hard to go anywhere these days without coming across some mention of artificial intelligence (AI). You hear about it, you read about it, and it’s hard to find a presentation deck (on any subject) that doesn’t mention it. There is no doubt there is a lot of hype around the subject.

While the hype does increase awareness of AI, it also facilitates some pretty silly activities and can distract people from much of the real progress being made. Disentangling the reality from the more dramatic headlines promises to provide significant advantages for investors, business people and consumers alike.

Artificial intelligence has gained its recent notoriety in large part due to high-profile successes such as IBM’s Watson winning at Jeopardy and Google’s AlphaGo beating the world champion at the game of Go. Waymo, Tesla and others have also made great strides with self-driving vehicles. The expansiveness of AI applications was captured by Richard Waters in the Financial Times [here]: “If there was a unifying message underpinning the consumer technology on display [at the Consumer Electronics Show] … it was: ‘AI in everything’.”

High-profile AI successes have also captured people’s imaginations to such a degree that they have prompted other far-reaching efforts. One instructive example was documented by Thomas H. Davenport and Rajeev Ronanki in the Harvard Business Review [here]. They describe, “In 2013, the MD Anderson Cancer Center launched a ‘moon shot’ project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system.” Unfortunately, the system didn’t work, and by 2017 “the project was put on hold after costs topped $62 million—and the system had yet to be used on patients.”

Waters also picked up on a different message – that of tempered expectations. In regard to “voice-powered personal assistants”, he notes, “it isn’t clear the technology is capable yet of becoming truly useful as a replacement for the smart phone in navigating the digital world” other than to “play music or check the news and weather”.

Other examples of tempered expectations abound. Genevera Allen of Baylor College of Medicine and Rice University warned [here], “I would not trust a very large fraction of the discoveries that are currently being made using machine learning techniques applied to large sets of data.” The problem is that many of the techniques are designed to deliver definitive answers, while research inherently involves uncertainty. She elaborated, “Sometimes it would be far more useful if they said, ‘I think some of these are really grouped together, but I’m uncertain about these others’.”
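
Allen’s point is easy to demonstrate: a clustering algorithm will confidently partition data whether or not any real groups exist. The sketch below is a minimal illustration, not drawn from her work; it uses a bootstrap-stability check (a standard diagnostic, here an illustrative assumption) to expose the uncertainty that the algorithm itself never reports.

```python
# Minimal sketch: k-means reports k tidy clusters even on pure noise,
# with no hint of doubt. A bootstrap-stability check (an illustrative
# diagnostic, not Allen's own method) can reveal when the "discovered"
# groups should not be trusted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

def cluster_stability(X, k=3, trials=20, seed=0):
    """Re-cluster bootstrap resamples of X and report how well the
    resulting labelings agree (near 1.0 = stable, near 0.0 = arbitrary)."""
    rng = np.random.default_rng(seed)
    base = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    scores = []
    for _ in range(trials):
        idx = rng.integers(0, len(X), len(X))             # bootstrap resample
        km = KMeans(n_clusters=k, n_init=10).fit(X[idx])
        # Compare the two models' labelings of the full dataset; the
        # adjusted Rand index ignores arbitrary label permutations.
        scores.append(adjusted_rand_score(base.predict(X), km.predict(X)))
    return float(np.mean(scores))

noise = np.random.default_rng(1).normal(size=(300, 10))          # no structure
blobs, _ = make_blobs(n_samples=300, centers=3, random_state=1)  # real groups

# Both calls return cluster labels without complaint; only the stability
# score hints that the first result should not be trusted.
print("stability on pure noise:   ", cluster_stability(noise))  # typically low
print("stability on real clusters:", cluster_stability(blobs))  # near 1.0
```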

Worse yet, in extreme cases AI doesn’t merely underperform; it hasn’t even been implemented. The FT reports [here], “Four in 10 of Europe’s ‘artificial intelligence’ startups use no artificial intelligence programs in their products, according to a report that highlights the hype around the technology.”

Cycles of inflated expectations followed by waves of disappointment come as no surprise to those who have been around artificial intelligence for a while: they know all too well that this is not AI’s first rodeo. Indeed, much of the conceptual work dates to the 1950s. In reviewing some of my notes recently, I came across a representative piece that explored neural networks for the purpose of stock picking – dating from 1993 [here].

The best way to get perspective on AI is to go straight to the source, and Martin Ford gives us that opportunity through his book, Architects of Intelligence. Organized as a succession of interviews with the industry’s leading researchers, scholars, and entrepreneurs, the book provides a useful history of AI and highlights the key strands of thinking.

Two high-level insights emerge from the book.

One is that despite the disparate backgrounds and personalities of the interviewees, there is a great deal of consensus on important subjects.

The other is that many of the priorities and concerns of the top AI researchers are noticeably different from those expressed in mainstream media.

Take, for example, the concept of artificial general intelligence (AGI). This is closely related to the notion of the “Singularity”, the point at which artificial intelligence matches that of humans – on its path to massively exceeding human intelligence. The idea has crystallized people’s fears about AI, including massive job losses, killer drones, and a host of other dramatic manifestations.

AI’s leading researchers have very different views; as a group they are completely unperturbed by AGI.

Geoffrey Hinton, Professor of computer science at the University of Toronto and Vice President and Engineering Fellow at Google, said, “If your question is, ‘When are we going to get a Commander Data [from the Star Trek TV series]’, then I don’t think that’s how things are going to develop. I don’t think we’re going to get single, general-purpose things like that.”

Yoshua Bengio, Professor of computer science and operations research at the University of Montreal, tells us, “There are some really hard problems in front of us and we are far from human-level AI.” He adds, “we are all excited because we have made a lot of progress on climbing the hill, but as we approach the top of the hill, we can start to see a series of other hills rising in front of us.”

Barbara Grosz, Professor of natural sciences at Harvard University, expressed her opinion: “I don’t think AGI is the right direction to go”. She argues that the pursuit of AGI (and dealing with its consequences) is so far in the future that it serves as “a distraction”.

Another common thread among the AI researchers is the belief that AI should be used to augment human labor rather than replace it.

Cynthia Breazeal, Director of the Personal Robots Group at the MIT Media Lab, frames the issue: “The question is what’s the synergy, what’s the complementarity, what’s the augmentation that allows us to extend our human capabilities in terms of what we do that allows us to really have greater impact in the world.”

Fei-Fei Li, Professor of computer science at Stanford and Chief Scientist for Google Cloud, observed, “AI as a technology has so much potential to enhance and augment labor, in addition to just replace it.”

James Manyika, Chairman and Director of the McKinsey Global Institute, noted that since about 60% of occupations have roughly a third of their constituent activities automatable, and only about 10% of occupations are more than 90% automatable, “many more occupations will be complemented or augmented by technologies than will be replaced.”

Further, AI can only augment human labor insofar as it can work effectively with humans.

Barbara Grosz pointed out, “I said at one point that ‘AI systems are best if they’re designed with people in mind’.” She continued, “I recommend that we aim to build a system that is a good team partner and works so well with us that we don’t recognize that it isn’t human.”

David Ferrucci, Founder of Elemental Cognition and Director of applied AI at Bridgewater Associates, said, “The future we envision at Elemental Cognition has human and machine intelligence tightly and fluently collaborating.” He elaborated, “We think of it as thought-partnership.” Yoshua Bengio reminds us, however, of the challenges in forming such a partnership: “It’s not just about precision [with AI], it’s about understanding the human context, and computers have absolutely zero clues about that.”

It is interesting that there is a fair amount of consensus on key ideas: that AGI is not an especially useful goal right now, that AI should be applied to augment labor rather than replace it, and that AI should work in partnership with people. It is also interesting that these same lessons are borne out by corporate experience.

In the FT [here], Richard Waters describes how AI implementations are still at a fairly rudimentary stage:

“Strip away the gee-whizz research that hogs many of the headlines (a computer that can beat humans at Go!) and the technology is at a rudimentary stage.”

He also notes, “But beyond this ‘consumerisation’ of IT, which has put easy-to-use tools into more hands, overhauling a company’s internal systems and processes takes a lot of heavy lifting.”

That heavy lifting takes time, and exceptionally few companies are there yet. Ginni Rometty, head of IBM, characterizes her clients’ applications as “random acts of digital” and describes many of the projects as “hit and miss”. Andrew Moore, the head of AI for Google’s cloud business, describes it as “artisanal AI”. Rometty elaborates, “They tend to start with an isolated data set or use case – like streamlining interactions with a particular group of customers. They are not tied into a company’s deeper systems, data or workflow, limiting their impact.”

While the HBR case of the MD Anderson Cancer Center provides a good example of a moonshot AI project that probably overreached, it also provides an excellent indication of the types of work that AI can materially improve. At the same time the center was trying to apply AI to cancer treatment, its “IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems.”

In this endeavor, the center had much better experiences: “The new systems have contributed to increased patient satisfaction, improved financial performance, and a decline in time spent on tedious data entry by the hospital’s care managers.” Such mundane functions may not exactly be Terminator stuff, but they are still important.

Leveraging AI to augment human labor by collaborating with humans was also the focus of an HBR piece by H. James Wilson and Paul R. Daugherty [here]. They point out, “Certainly, many companies have used AI to automate processes, but those that deploy it mainly to displace employees will see only short-term productivity gains. In our research involving 1,500 companies, we found that firms achieve the most significant performance improvements when humans and machines work together … Through such collaborative intelligence, humans and AI actively enhance each other’s complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter.”

Wilson and Daugherty elaborate, “To take full advantage of this collaboration, companies must understand how humans can most effectively augment machines, how machines can enhance what humans do best, and how to redesign business processes to support the partnership.” This takes a lot of work that is well beyond just dumping an AI system into a pre-existing work environment.

The insights from leading AI researchers, combined with the lessons of real-world applications, carry some useful implications. One is that AI is a double-edged sword: the hype can cause distraction and misallocation, but the capabilities are too important to ignore.

Ben Hunt discusses the roles of intellectual property (IP) and AI in regard to the investment business [here], but his comments are broadly relevant to other businesses. He notes,

“The usefulness of IP in preserving pricing power is much less a function of the better mousetrap that the IP helps you build, and much more a function of how neatly the IP fits within the dominant zeitgeist in your industry.”

He goes on to explain that the “WHY” of your IP must “fit the expectations that your customers have for how IP works” in order to protect your product. He continues, “If you don’t fit the zeitgeist, no one will believe that your castle walls exist. Even if they do.” In the investment business (and plenty of others), “NO ONE thinks of human brains as defensible IP any longer. No one.” In other words, if you aren’t employing AI, you won’t get pricing power, regardless of the actual results.

This hints at an even bigger problem with AI: Too many people are simply not ready for it. 

Daniela Rus, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, said, “I want to be a technology optimist. I want to say that I see technology as something that has the huge potential to unite people rather than divide people, and to empower people rather than estrange people. In order to get there, though, we have to advance science and engineering to make technology more capable and more deployable.” She added, “We need to revisit how we educate people to ensure that everyone has the tools and the skills to take advantage of technology.”

Yann LeCun, Chief AI Scientist at Facebook, added, “We’re not going to have widely disseminated AI technology unless a significant proportion of the population is trained to actually take advantage of it.”

Cynthia Breazeal echoed, “In an increasingly AI-powered society, we need an AI-literate society.”

These are not hollow statements either; there is a vast array of free learning materials for AI available online to encourage participation in the field.

If society does not catch up to the AI reality, there will be consequences.

Breazeal notes, “People’s fears about AI can be manipulated because they don’t understand it.”

LeCun points out, “There is a concentration of power. Currently, AI research is very public and open, but it’s widely deployed by a relatively small number of companies at the moment. It’s going to take a while before it’s used by a wider swath of the economy and that’s a redistribution of the cards of power.”

Hinton highlights another consequence, “The problem is in the social systems, and whether we’re going to have a social system that shares fairly … That’s nothing to do with technology.”

In many ways, then, AI provides a wakeup call. Because of AI’s unique interrelationship with humankind, it tends to bring out both the best and the worst in us. Certainly, terrific progress is being made on the technology side, which promises to provide ever more powerful tools for solving difficult problems. However, those promises are also constrained by the capacity of people, and of society as a whole, to embrace AI tools and deploy them in effective ways.

Recent evidence suggests we have our work cut out for us in preparing for an AI-enhanced society. In one case reported by the FT [here], UBS created “recommendation algorithms” (such as those used by Netflix for movies) to suggest trades to its clients. While the technology certainly exists, it strains credulity to see how this application is even remotely useful for society.
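
For readers unfamiliar with the technique the article names, the sketch below shows what a Netflix-style recommendation algorithm boils down to: item-item collaborative filtering over a matrix of past interactions. The data, the cosine-similarity scoring, and the recommend function are all illustrative assumptions; this is a toy version of the general approach, not UBS’s actual system.

```python
# Toy sketch of Netflix-style item-item collaborative filtering.
# The interaction matrix and scoring are made up for illustration.
import numpy as np

# Rows = clients, columns = trade ideas; 1 = the client took the trade.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

# Cosine similarity between trade ideas (columns).
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
sim = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)

def recommend(client, top_n=2):
    """Score unseen items by their similarity to the client's past trades."""
    seen = interactions[client]
    scores = sim @ seen
    scores[seen > 0] = -np.inf          # don't re-recommend past trades
    return np.argsort(scores)[::-1][:top_n]

# Indices of the trade ideas most similar to client 1's history.
print(recommend(client=1))
```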

In another case, Richard Waters reminds us, “It is almost a decade, for instance, since Google rocked the auto world with its first prototype of a self-driving car.” He continues [here]: “The first wave of driverless car technology is nearly ready to hit the mainstream — but some carmakers and tech companies no longer seem so eager to make the leap.” In short, they are getting pushback because current technology is at “a level of autonomy that scares the carmakers — but it also scares lawmakers and regulators.”

In sum, whether you are an investor, businessperson, employee, or consumer, AI has the potential to make things a lot better – and a lot worse. To make the most of the opportunity, an active effort focused on education is a great place to start. If AI’s promises are to be realized, it will also take a lot of effort to establish system infrastructures and to map complementary strengths. In other words, it’s best to think of AI as a long journey rather than a short-term destination.

