Best problems to work on?

80,000 Hours has a lovely overview of “What are the biggest problems in the world?” The best part is that each problem gets its own profile with a description, arguments in favor of and against working on it, and what work already exists. I couldn’t resist plotting the table in 3D:

The most important problems according to 80,000 Hours, plotted by scale, neglectedness, and solvability. Color denotes the sum of the three values.
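For anyone who wants to reproduce this kind of plot, here is a minimal matplotlib sketch. The scores and labels below are placeholders, not the actual 80,000 Hours numbers; plug in the real table to get the chart above.

```python
# Minimal sketch of a 3D problem-profile plot (placeholder data, not the real 80,000 Hours scores).
import matplotlib.pyplot as plt

# Each entry: (label, scale, neglectedness, solvability) -- all values are made up for illustration.
problems = [
    ("Problem A", 14, 8, 4),
    ("Problem B", 12, 6, 5),
    ("Problem C", 10, 9, 3),
    ("Problem D", 8, 3, 7),
]

labels, scale, neglected, solvable = zip(*problems)
totals = [s + n + v for s, n, v in zip(scale, neglected, solvable)]  # colour = sum of the three scores

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
points = ax.scatter(scale, neglected, solvable, c=totals, cmap="viridis", s=60)
for label, x, y, z in zip(labels, scale, neglected, solvable):
    ax.text(x, y, z, label, fontsize=8)
ax.set_xlabel("Scale")
ax.set_ylabel("Neglectedness")
ax.set_zlabel("Solvability")
fig.colorbar(points, label="Sum of scores")
plt.show()
```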

There are of course plenty of problems not listed; even if these are truly the most important, there will be a cloud of smaller-scale problems to the right. They list a few potential candidates like cheap green energy, peace, human rights, reducing migration restrictions, etc.

I recently got the same question, and here are my rough answers:

  • Fixing our collective epistemic systems. Societies work as cognitive systems: acquiring information, storing, filtering and transmitting it, synthesising it, making decisions, and implementing actions. This is done through individual minds, media and institutions. Recently we have massively improved some aspects through technology, but our ability to filter, organise and jointly coordinate does not seem to have improved – in fact, many feel it has become worse. Networked media means that information can bounce around multiple times, acquiring heavy bias along the way, while filtering mechanisms that rely on authority have lost credibility (rightly or wrongly). We are seeing all sorts of problems in coordinating diverse, polarised, globalised or confused societies. Decision-making that is not reality-tracking due to (rational or irrational) ignorance, bias or misaligned incentives is at best useless, at worst deadly. Figuring out how to improve these systems seems to be something with tremendous scale (good coordination and governance helps solve most of the problems above), it is fairly neglected (people tend to work on small parts rather than figuring out better systems), and it looks decently solvable (again, many small pieces may be useful together rather than requiring a total, perfect solution).
  • Ageing. Ageing kills roughly 100,000 people per day (a rough back-of-envelope check appears after this list). It is a massive cause of suffering, from chronic disease to loss of quality of life. It destroys human capital at nearly the same rate as all education and individual development together builds it. A reduction in the health toll from ageing would not just save life-years, it would have massive economic benefits. While it would necessitate changes in society (most plausibly shifts in pensions, the concepts of work and the life-course, how families are constituted, some fertility reduction and institutional reform), the cost and trouble of such changes are pretty microscopic compared to the ongoing death toll and losses. The solvability is improving: 20 years ago it was possible to claim that there were no anti-ageing interventions, while today there are enough lab examples to make this untenable. Transferring these results into human clinical practice will, however, be a lot of hard work. It is also fairly neglected: far more work is being spent on symptoms and age-related illness and infirmity than on root causes, partially for cultural reasons.
  • Existential risk reduction: I lumped all the work on securing humanity’s future into one category. Right now I think reducing nuclear war risk is the most urgent (not because of the current incumbent of the White House, but simply because its state risk probability seems to dominate the other current risks), followed by biotechnological risks (where we still have some time to invent solutions before the Collingridge dilemma really bites; I think it is also somewhat neglected) and AI risk (I put it as #3 for humanity, but it may be #1 for research groups like FHI that can do something about the neglectedness while we figure out how much priority it truly deserves). But a lot of the effort might be on the mitigation side: alternative foods to make the world food system more resilient and sun-independent, distributed and more robust infrastructure (better software security, geomagnetic storm/EMP-safe power grids, local energy production, distributed internet solutions, etc.), refuges and backup solutions. The scale is big, most of these are neglected, and many are solvable.
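For the ageing figure above, here is a quick back-of-envelope check, assuming roughly 55 million deaths per year worldwide and the commonly cited estimate that about two-thirds of all deaths are from age-related causes:

```python
# Back-of-envelope check of the "ageing kills ~100,000 people per day" figure.
# Assumed inputs (approximate): ~55 million deaths/year worldwide, of which
# roughly two-thirds are attributed to age-related causes.
deaths_per_year = 55_000_000
age_related_fraction = 2 / 3

deaths_per_day = deaths_per_year / 365                          # ~150,000 deaths per day in total
ageing_deaths_per_day = deaths_per_day * age_related_fraction   # ~100,000 deaths per day from ageing

print(f"Total deaths per day:  {deaths_per_day:,.0f}")
print(f"Ageing deaths per day: {ageing_deaths_per_day:,.0f}")
```

With those inputs the number lands right around 100,000 per day, which is where the commonly quoted figure comes from.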

Another interesting set of problems comes from Robin Hanson’s post about neglected big problems. They are in a sense even more fundamental than mine: they are problems with the human condition itself.

As a transhumanist I do think the human condition entails some rather severe problems – ageing and stupidity are just two of them – and that we should work to fix them. Robin’s problems may not be the easiest to solve, though, even if there might be piecemeal solutions worth pursuing. Many enhancements, like moral capacity and well-being, have great scope and are very neglected, but they lose out to ageing because of their currently low solvability and the higher urgency of coordination and risk reduction. As I see it, if we can ensure that we survive (individually and collectively) and become better at solving problems, then we will have a better chance of fixing the tougher problems of the human condition.

Don’t be evil and make things better

I am one of the signatories of an open letter calling for a stronger aim at socially beneficial artificial intelligence.

It might seem odd to call for something like that: who in their right mind would not want AI to be beneficial? But when we look at the field (and indeed many other research fields), the focus has traditionally been on making AI more capable. Besides some pure research interest and no doubt some “let’s create life” ambition, the bulk of the motivation has been to make systems that do something useful (or push in the direction of something useful).

“Useful” is normally defined in terms of performing some task – translation, driving, giving medical advice – rather than having a good impact on the world. Better-performed tasks are typically good locally – people get translations more cheaply, things get transported, advice may be better – but they have more complex knock-on effects: fewer translators, drivers or doctors are needed, or their jobs get transformed, plus there are potential risks from easy (but possibly faulty) translation, accidents and misuse of autonomous vehicles, or changes in liability. Way messier. Even if the overall impact is great, socially disruptive technologies that appear surprisingly fast can cause trouble, and emergent misbehaviour and bad design choices can lead to systems that amplify risk (consider high-frequency trading, badly used risk models, or anything that empowers crazy people). Some technologies may also lend themselves to centralizing power (surveillance, autonomous weapons) while reducing accountability (learning algorithms internalizing discriminatory assumptions in an opaque way). These considerations should of course be part of any responsible engineering and deployment, even if handling them is by no means solely the job of the scientist or programmer. Doing it right will require far more help from other disciplines.

The most serious risks come from the very smart systems that may emerge further into the future: they either amplify human ability in profound ways, or they are autonomous themselves. In both cases they make achieving goals easier, but do not have any constraints on what goals are sane, moral or beneficial. Keeping such systems safe is a hard problem we ought to start on early. One of the main reasons for the letter is that so little effort has gone into better ways of controlling complex, adaptive and possibly self-improving technological systems. This makes sense even if one doesn’t worry about superintelligence or existential risk.

This is why we have to change some research priorities. In many cases it is just a matter of putting problems on the agenda as useful to work on: they are important but understudied, and a bit of investigation will likely go a long way. In some cases it is more a matter of signalling that different communities need to talk more to each other. And in some instances we really need to have our act together before big shifts occur – if unemployment soars to 50%, engineering design-ahead enables big jumps in tech capability, brains get emulated, or systems start self-improving, we will not have time to carefully develop smart policies.

My experience from talking to the community is that there is not a big split between AI and AI safety practitioners: they roughly want the same thing. There might be a bigger gap between the people working on the theoretical, far-out issues and the people working on the applied here-and-now stuff. I suspect both can learn from each other. More research is, of course, needed.