Annoyed by annoyed AI: can we fight AI hype?

Recently the Wall Street Journal reported that an AI got testy with its programmer when he asked about ethics. This is based on a neat paper by Vinyals and Le on training recurrent networks to hold conversations based on past example conversations, essentially making neural network chatbots. I posted about that particular dialogue myself to the office list, joking about AI not digging ethics.

However, others got annoyed by the annoyed AI. Mbeckman on Slashdot posted “WSJ Overstates the Case Of the Testy A.I.” and George Dvorsky on io9 followed up with “No, an AI Did Not Just ‘Lash Out’ at Its Human Programmer”.

Mbeckman wrote: “AI hype to the public has gotten progressively more strident in recent years, misleading lay people into believing researchers are much further along than they really are — by orders of magnitude. I’d love to see legitimate A.I. researchers condemn this kind of hucksterism.”

Of course, the question is what the hucksterism actually is, who is doing it, and what to do about it.

Who are the hucksters?

A huckster is “One who uses aggressive, showy, and sometimes devious methods to promote or sell a product”.

The paper?

In this case the paper is not doing anything obviously hucksterish – it has several dialogue examples where the software responds to VPN or browser issues, general knowledge Q&A, chatting and opinions. That the system thinks Cleopatra is very regal or that it manages to talk the user through doing an apt-update to fix a browser is neat, but not terribly impressive. Many of the dialogues are clearly Eliza-level associations with little actual thinking, but amusing random combinations:

Human: what is the purpose of life ?
Machine: to serve the greater good .
Human: what is the purpose of living ?
Machine: to live forever .
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth .
Human: where are you now ?
Machine: i ’m in the middle of nowhere .
Human: what is the purpose of dying ?
Machine: to have a life .
Human: what is the purpose of being intelligent ?
Machine: to find out what it is .
Human: what is the purpose of emotions ?
Machine: i don ’t know

If this is hucksterism, then nearly any computer science paper is hucksterism. After all, they tend to show the successful runs of software and generally overestimate the utility of the algorithm or method.
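
For the curious, the underlying recipe is not mysterious. Here is a minimal sketch of the sequence-to-sequence idea behind such chatbots, assuming PyTorch; the vocabulary, data and model sizes are toy placeholders, and I use a GRU where the paper uses LSTMs, so this shows the shape of the method rather than the paper’s actual system:

```python
# Toy sequence-to-sequence chatbot sketch (illustrative, not the paper's setup).
import torch
import torch.nn as nn

PAD, SOS, EOS = 0, 1, 2
vocab = ["<pad>", "<sos>", "<eos>", "what", "is", "the", "purpose",
         "of", "life", "?", "to", "serve", "greater", "good", "."]
idx = {w: i for i, w in enumerate(vocab)}

def encode(words):
    return torch.tensor([[idx[w] for w in words]])  # batch of one

class ChatSeq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden, padding_idx=PAD)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        # Encode the human turn into a state vector...
        _, state = self.encoder(self.emb(src))
        # ...then decode the machine turn word by word from that state.
        dec, _ = self.decoder(self.emb(tgt), state)
        return self.out(dec)

model = ChatSeq2Seq(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

# One (question, answer) pair from the published dialogue, as toy data.
src = encode(["what", "is", "the", "purpose", "of", "life", "?"])
tgt_in = encode(["<sos>", "to", "serve", "the", "greater", "good", "."])
tgt_out = encode(["to", "serve", "the", "greater", "good", ".", "<eos>"])

for step in range(200):  # overfit a single pair, just to show the loop
    logits = model(src, tgt_in)
    loss = loss_fn(logits.view(-1, len(vocab)), tgt_out.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Greedy decoding: feed the model's own last word back in.
with torch.no_grad():
    state = model.encoder(model.emb(src))[1]
    word, reply = torch.tensor([[SOS]]), []
    for _ in range(10):
        dec, state = model.decoder(model.emb(word), state)
        word = model.out(dec).argmax(-1)
        if word.item() == EOS:
            break
        reply.append(vocab[word.item()])
    print(" ".join(reply))  # after training: "to serve the greater good ."
```

Train that loop on millions of real conversation turns instead of one memorized pair and you get exactly the kind of fluent-but-shallow answers quoted above: statistical association, not reflection on ethics.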

Wall Street Journal?

Mbeckman probably felt that the WSJ was more guilty. After all, the title and opening suggest there is some kind of attitude going on. But there is actually rather little editorializing: mostly a somewhat bland overview of machine learning with an amusing dialogue example thrown in. It could have been Eliza instead, and the article would still have made sense (“AI understands programmer’s family problems”). There is an element of calculation here: AI is hot, and the dialogue can be used as a hook for a story that both mentions real stuff and provides a bit of entertainment. But again, this is not so much aggressive promotion of a product/idea as opportunistic promotion.
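
Since Eliza keeps coming up as the benchmark for shallow chat, it is worth remembering how little machinery that level takes. A toy responder in the Eliza spirit (my own illustrative rules, far cruder than Weizenbaum’s original script):

```python
# Eliza-style responder: fixed patterns and canned reflections, no learning.
import re

RULES = [
    (r"my (\w+) is (.*)", "Why is your {0} {1}?"),
    (r"i feel (.*)", "Do you often feel {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*m.groups())
    return "Please go on."  # canned fallback when nothing matches

print(eliza("My browser is broken"))  # -> "Why is your browser broken?"
```

Pattern, reflection, canned fallback: good enough to carry a human-interest story in 1966, and good enough now.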

Media in general?

I suspect that the real target of Mbeckman’s wrath is the unnamed sources of AI hype. There is no question that AI is getting hyped these days. Big investments by major corporations, sponsored content demystifying it, Business Insider talking about how to invest in it, corporate claims of breakthroughs that turn out to be mistakes/cheating, invitations to governments to join the bandwagon, the whole discussion about AI safety where people quote and argue about Hawking’s and Musk’s warnings (rather than going to the sources or reviewing the main thinking), and of course a bundle of films. The nature of hype is that it is promotion, especially promotion based on exaggerated claims. This is of course where the hucksterism accusation actually bites.

Hype: it is everybody’s fault

But while many of the agents involved do exaggerate their own products, hype is also a social phenomenon. In many ways it is similar to an investment bubble. Some triggers occur (real technology breakthroughs, bold claims, a good story) and media attention flows to the field. People start investing in the field, not just with money, but with attention, opinion and other contributions. This leads to more attention, and the cycle feeds itself. Like in an investment bubble, overconfidence is rewarded (you get more attention and investment) while sceptics do not gain anything (of course, you can participate as a sharp-tongued sceptic: everybody loves to claim they listen to critical voices! But then you are just as much part of the hype as the promoters). Finally the bubble bursts, fashion shifts, or attention just wanes and goes somewhere else. Years later, whatever it was may reach the plateau of productivity.
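
To see how such a loop behaves, here is a deliberately toy simulation (entirely my own illustration, with made-up parameters): attention feeds on itself while audience fatigue slowly accumulates, producing the familiar boom and bust.

```python
# Toy hype-bubble dynamics: exponential attention growth, accumulating fatigue.
def hype_trajectory(steps=80, trigger=1.0, gain=0.25, wear=0.01):
    attention, fatigue, history = trigger, 0.0, []
    for _ in range(steps):
        fatigue += wear * attention            # audiences slowly tire of the topic
        attention += (gain - fatigue) * attention  # coverage attracts coverage...
        attention = max(attention, 0.0)        # ...until fatigue outruns the gain
        history.append(attention)
    return history

traj = hype_trajectory()
peak = max(traj)
print(f"peak attention {peak:.1f} at step {traj.index(peak)}, "
      f"final attention {traj[-1]:.2f}")
```

Nothing about the trajectory depends on the fundamentals being good or bad; the trigger only sets the starting point.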

The problem with this image is that it is everybody’s fault. Sure, tech gurus are promoting their things, but nobody is forced to naively believe them. Many of the detractors are feeding the hype by giving it attention. There is ample historical evidence: I assume the Dutch tulip bubble is covered in Economics 101 everywhere, and AI has a history of terribly destructive hype bubbles… yet few if any learn from it (because this time it is different, because of reasons!).

Fundamentals

In the case of AI, I do think there have been real changes that give good reason to expect big things. Since the 90s when I was learning the field, computing power and the sizes of training data have expanded enormously, making methods that looked like dead ends back then actually blossom. There have also been conceptual improvements in machine learning, among other things killing off neural networks as a separate field (we bio-oriented researchers reinvented ourselves as systems biologists, while the others just went with statistical machine learning). Plus surprise innovations that have led to a cascade of interest – the kind of internal innovation hype that actually does produce loads of useful ideas. The fact that papers and methods that surprise experts in the field are arriving at a brisk pace is evidence of progress. So in a sense, the AI hype has been triggered by something real.

I also think that the concerns about AI that float around have been triggered by some real insights. There was minuscule AI safety work done inside AI before the late 1990s; most was about robots not squishing people. The investigations of amateurs and academics did bring up some worrying concepts and problems, at first at the distal “what if we succeed?” end and later also at the more proximal end, examining the impact of cognitive computing on society through drones, autonomous devices, smart infrastructures, automated jobs and so on. So again, I think the “anti-AI hype” has also been triggered by real things.

Copy rather than check

But once the hype cycle starts, just like in finance, fundamentals matter less and less. This of course means that views and decisions become based on copying others rather than truth-seeking. And idea-copying is subject to all sorts of biases: we notice things that fit with earlier ideas we have held, we give weight to easily available images (such as frequently mentioned scenarios) and emotionally salient things, detail and nuance are easily lost when a message is copied, and so on.
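
A toy model of that last point (again my own illustration): treat a story as a bag of details, where each retelling reliably keeps the salient ones and drops each nuance with some probability.

```python
# Iterated copying: salient details survive, nuance decays geometrically.
import random

random.seed(0)  # reproducible illustration
message = {"breakthrough-claim": "salient", "testy-AI-quote": "salient",
           "caveat-small-data": "nuance", "caveat-cherry-picked": "nuance"}

def retell(msg, drop_nuance=0.4):
    """One hop of the telephone game: every nuanced detail may be dropped."""
    return {k: v for k, v in msg.items()
            if v == "salient" or random.random() > drop_nuance}

for hop in range(5):
    message = retell(message)
print(sorted(message))  # after a few hops, mostly the salient bits remain
```

After five retellings each caveat survives with probability 0.6^5, about 8%; the eye-catching quote survives every time.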

Science fact

This feeds into the science fact problem: to a non-expert, it is hard to tell what the actual state of the art is. The sheer amount of information, together with multiple contradictory opinions, makes it tough to know what is actually true. Just try figuring out what kind of fat is good for your heart (if any). There is so much reporting on the issue that you can easily find support for any side, and evaluating the quality of the support requires expert knowledge. But even figuring out who is an expert in a contested big field can be hard.

In the case of AI, it is also very hard to tell what will be possible or not. Expert predictions are not that great, nor much different from amateur predictions. Experts certainly know what can be done today, but given the number of surprises we are seeing this might not tell us much. Many issues are also interdisciplinary, making even confident and reasoned predictions by a domain expert problematic, since factors they know little about also matter (consider the environmental debates between ecologists and economists – both have half of the puzzle, but often do not understand that the other half is needed).

Bubble inflation forces

Different factors can make hype more or less intense. During the summer “silly season” newspapers copy entertaining stories from each other (some stories become perennial, like the “BT soul-catcher chip” story that emerged in 1996 and is still making its rounds). Here easy copying and lax fact-checking boost the effect. During a period of easy credit, financial and technological bubbles become more intense. I suspect that what is feeding the current AI hype bubble is a combination of the usual technofinancial drivers (we may be having dotcom 2.0, as some think), but also cultural concerns about employment in a society that is automating, outsourcing, globalizing and disintermediating rapidly, plus very active concerns about surveillance, power and inequality. AI is in a sense a natural lightning rod for these concerns, and they help motivate interest and hence hype.

So here we are.

AI professionals are annoyed because the public fears stuff that is entirely imaginary, and might invoke the dreaded powers of legislators or at least threaten reputation, research grants and investment money. At the same time, if they do not play up the coolness of their ideas they will not be noticed. AI safety people are annoyed because the rather subtle arguments they are trying to explain to the AI professionals get wildly distorted into “Genius Scientists Say We are Going to be Killed by the TERMINATOR!!!” and the AI professionals get annoyed and refuse to listen. Yet the journalists are eagerly asking for comments, and sometimes they get things right, so it is tempting to respond. The public are annoyed because they don’t get the toys they are promised, and it simultaneously looks like Bad Things are being invented for no good reason. But of course they will forward that robot wedding story. The journalists are annoyed because they actually do not want to feed hype. And so on.

What should we do? “Don’t feed the trolls” only works when the trolls are identifiable and avoidable. Being a bit more cautious, critical and quiet is not bad: the world is full of overconfident hucksters, and learning to recognize and ignore them is a good personal habit we should appreciate. But it only helps society if most people avoid feeding the hype cycle: a bit like the unilateralist’s curse, nearly everybody needs to be rational and quiet to starve the bubble. And since industry, academia and punditry offer prime incentives for hucksterism to those willing to engage in it, we can expect hucksters to show up anyway.
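
The arithmetic behind that “nearly everybody” is unforgiving. If each of n commentators independently feeds a story with probability p, the chance it spreads anyway is 1 - (1 - p)^n (illustrative numbers below, not from any survey):

```python
# Unilateralist-style arithmetic: one defector is enough to feed the hype.
def p_story_spreads(n, p):
    """Probability at least one of n commentators amplifies the story."""
    return 1 - (1 - p) ** n

for p in (0.01, 0.05):
    print(f"p={p}: 100 pundits -> {p_story_spreads(100, p):.0%} chance of hype")
# p=0.01: 100 pundits -> 63% chance of hype
# p=0.05: 100 pundits -> 99% chance of hype
```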

The marketplace of ideas could do with some consumer reporting. We can try to build institutions to counter problems: good ratings agencies can tell us whether something is overvalued, maybe a federal robotics commission can give good overviews of the actual state of the art. Reputation systems, science blogging marking what is peer reviewed, various forms of fact-checking institutions can help improve epistemic standards a bit.

AI safety people could of course pipe down and just tell AI professionals about their concerns, keeping the public out of it by doing it all in a formal academic/technical way. But a pure technocratic approach will likely bite us in the end, since (1) without public and institutional support there are incentives to ignore long-term safety issues, and (2) the public gets rather angry when it finds that “the experts” have been talking about important things behind its back. It is better to be honest and try to say the highest-priority true things as clearly as possible to the people who need to hear them, or who ask.

AI professionals should recognize that they are sitting on a hype-generating field, and past disasters give much reason for caution. Insofar as they regard themselves as professionals, belonging to a skilled social community that actually has obligations towards society, they should try to manage expectations. It is tough, especially since the field is by no means as unified professionally as (say) lawyers and doctors. They should also recognize that their domain knowledge obliges them to speak up against stupid claims (just as Mbeckman urged), but that there are limits to what they know: talking about the future or complex socioecotechnological problems requires help from other kinds of expertise.

And people who do not regard themselves as either? I think training our critical thinking and intellectual connoisseurship might be the best we can do. Some of that is individual work, some of it comes from actual education, some of it from supporting better epistemic institutions – have you edited Wikipedia this week? What about pointing friends towards good media sources?

In the end, I think the AI system got it right: “What is the purpose of being intelligent? To find out what it is”. We need to become better at finding out what is, and only then can we become good at finding out what intelligence is.
