Truth and laughter

Reassuring

Slate Star Codex has another great post: If the media reported on other dangers like it does AI risk.

The new airborne superplague is said to be 100% fatal, totally untreatable, and able to spread across an entire continent in a matter of days. It is certainly fascinating to think about if your interests tend toward microbiology, and we look forward to continuing academic (and perhaps popular) discussion and debate on the subject.

I have earlier discussed how AI risk suffers from the silliness heuristic.

Of course, one can argue that AI risk is less recognized as a serious issue than superplagues, meteors or economic depressions (although, given what news media have been writing recently about Ebola and 1950 DA, their level of understanding can be debated). There is disagreement on AI risk among people involved in the subject, with some rather bold claims of certainty, rational reasons to be distrustful of predictions, and plenty of vested interests and motivated thinking. But this internal debate is not the reason media make a hash of things: it is not as if there is an AI safety denialist movement pushing the message that worrying about AI risk is silly, or planting stupid arguments to discredit safety concerns. Rather, the whole issue is so far out there that not only the presumed reader but also the journalist will not know what to make of it. It is hard to judge credibility, the quality of arguments, and the size of the risks. So logic does not apply very strongly – and anyway, it does not sell.

This is true for climate change and pandemics too. But here there is more of an infrastructure of concern, there are some standards (despite vehement disagreements) and the risks are not entirely unprecedented. There are more ways of dealing with the issue than referring to fiction or abstract arguments that tend to fly over the heads of most. The discussion has moved further from the frontiers of the thinkable not just among experts but also among journalists and the public.

How do discussions move from silly to mainstream? Part of it is mere exposure: if the issue comes up again and again, and other factors do not reinforce it as being beyond the pale, it will become more thinkable. This is how other issues creep up on the agenda too: small stakeholder groups drive their arguments, and if they are compelling they will eventually leak into the mainstream. High-status groups have an advantage (uncorrelated with the correctness of arguments, except for the very rare groups that gain status from being documented as being right about a lot of things).

Another is demonstrations. They do not have to be real instances of the issue, but close enough to create an association: a small disease outbreak, an impressive AI demo, claims that the Elbonian education policy really works. They make things concrete, acting as a seed crystal for a conversation. Unfortunately these demonstrations do not have to be truthful either: they focus attention and update people’s probabilities, but they might be deeply flawed. Software passing a Turing test does not tell us much about AI. The safety of existing AI software or biohacking does not tell us much about their future safety. 43% of all impressive-sounding statistics quoted anywhere are wrong.

Truth likely makes argumentation easier (reality is biased in your favour, opponents may have more freedom to make up stuff but it is more vulnerable to disproof) and can produce demonstrations. Truth-seeking people are more likely to want to listen to correct argumentation and evidence, and even if they are a minority they might be more stable in their beliefs than people who just view beliefs as clothing to wear (of course, zealots are also very stable in their beliefs since they isolate themselves from inconvenient ideas and facts).

Truth alone cannot efficiently win the battle of bringing an idea in from the silliness of the thinkability frontier to the practical mainstream. But I think humour can act as a lubricant: by showing the actual silliness of mainstream argumentation, we move those arguments outwards towards the frontier, making a bit more space for other things to move inward. When demonstrations are wrong, joke about their flaws. When ideas are pushed merely because of status, poke fun at the hot air cushions holding them up.

Somebody think of the electrons!

Atlas 6

Brian Tomasik has a fascinating essay: Is there suffering in fundamental physics?

He admits from the start that “Any sufficiently advanced consequentialism is indistinguishable from its own parody.” And it would be easy to dismiss this as taking compassion way too far: not just caring about plants or rocks, but the possible suffering of electrons and positrons.

I think he has enough arguments to show that the idea is not entirely crazy: we do not understand the ontology of phenomenal experience well enough to easily rule out small systems having experiential states, panpsychism is a view held by some rational people, it seems a priori unlikely that mid-sized systems would hold all the value in the universe rather than the largest or the smallest scales, we have strong biases towards our kind of system, and information physics might actually link consciousness with physics.

None of these are great arguments, but there are many of them. And the total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them or a tiny probability of them mattering morally will create a large expected moral value. The smallness of moral consideration or the probability needs to be far outside our normal reasoning comfort zone: if you assign a probability lower than 10^{-10^{56}} to a possibility you need amazingly strong reasons given normal human epistemic uncertainty.
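The arithmetic of this expected-value argument is easiest to see in log space, since probabilities that small underflow any floating-point type. A toy sketch, where all three numbers are assumptions invented purely for illustration:

```python
# Toy expected-value calculation in log10 space (every number below is an
# illustrative assumption, not a claim from the essay).
log10_particles = 80.0   # rough particle count of the observable universe
log10_weight = -40.0     # assumed per-particle moral weight vs. one human
log10_p = -30.0          # assumed probability that particles matter at all

# log10 of the expected moral value, measured in human-equivalents:
log10_expected = log10_particles + log10_weight + log10_p  # 10.0

# Despite the tiny weight and probability, the expected value comes out
# around 10^10 human-equivalents; the probability would have to fall below
# 10^-40 before the sheer particle count stopped dominating.
```

The point is just that the huge particle count forces any dismissal to rest on probabilities far smaller than humans can normally justify assigning.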

I suspect most readers will regard this outside their “ultraviolet cutoff” for strange theories: just as physicists successfully invented/discovered a quantum cutoff to solve the ultraviolet catastrophe, most people have a limit where things are too silly or strange to count. Exactly how to draw it rationally (rather than just base it on conformism or surface characteristics) is a hard problem when choosing between the near infinity of odd but barely possible theories.

What is the mass of the question mark?

One useful heuristic is to check whether the opposite theory is equally likely or important: in that case they balance each other (yes, the world could be destroyed by me dropping a pen – but it could also be destroyed by not dropping it). In this case giving greater weight to suffering than neutral states breaks the symmetry: we ought to investigate this possibility since the theory that there is no moral considerability in elementary physics implies no particular value is gained from discovering this fact, while the suffering theory implies it may matter a lot if we found out (and could do something about it). The heuristic is limited but at least a start.

Another way of getting a cutoff for theories of suffering is of course to argue that there must be a lower limit of the system that can have suffering (this is after all how physics very successfully solved the classical UV catastrophe). This gets tricky when we try to apply it to insects, small brains, or other information processing systems. But in physics there might be a better argument: if suffering happens on the elementary particle level, it is going to be quantum suffering. There would be literal superpositions of suffering/non-suffering of the same system. Normal suffering is classical: either it exists or not to some experiencing system, and hence there either is or isn’t a moral obligation to do something. It is not obvious how to evaluate quantum suffering. Maybe we ought to perform a quantum-action that moves the wavefunction to a pure non-suffering state (a bit like quantum game theory: just as game theory might have ties to morality, quantum game theory might link to quantum morality), but this is constrained by the tough limits in quantum mechanics on what can be sensed and done. Quantum suffering might simply be something different from suffering, just as quantum states do not have classical counterparts. Hence our classical moral obligations do not relate to it.

But who knows how molecules feel?

More robots, and how to take over the world with guaranteed minimum income

I was just watching “Humans Need Not Apply” by CGPGrey,

when I noticed a tweet from Wendy Grossman, with whom I participated in a radio panel about robotics (earlier notes on the discussion). She has some good points inspired by our conversation in her post, robots without software.

I think she has a key observation: much of the problem lies in the interaction between the automation and humans. On the human side, that means getting the right information and feedback into the machine side. From the machine side, it means figuring out what humans – those opaque and messy entities who change behaviour for internal reasons – want. At the point where the second demand is somehow resolved we will not only have really useful automation, but also essentially a way of resolving AI safety/ethics. But before that, we will have a situation of only partial understanding, and plenty of areas where either side will not be able to mesh well. That either forces humans to adapt to machines, or machines to get humans to think that what they really wanted was what they got served. That is risky.

Global GMI stability issues

Incidentally, I have noted that many people hearing the current version of the “machines will take our jobs” story bring up the idea of a guaranteed minimum income (GMI) as a remedy. If nobody has a job but there is a GMI we can still live a good life (especially since automation would make most things rather cheap). This idea has a long history, and Hans Moravec suggested it in his book Robot (1998) in regard to a future where AI-run corporations would be running the economy. It can be appealing even from a libertarian standpoint since it does away with a lot of welfare and tax bureaucracy (even Hayek might have been a fan).

I’m not enough of an economist to analyse it properly, but I suspect the real problem is stability when countries compete on tax: if Foobonia has a lower corporate tax rate than Baristan and the Democratic Republic of Baaz, then companies will move there – still making money by selling stuff to people in Baristan and Baaz. The more companies there are in Foobonia, the less taxes are needed to keep the citizens wealthy. In fact, as I mentioned in my earlier post, having fewer citizens might make the remaining ones better off (things like this have happened on a smaller scale). The ideal situation would be to have the lowest taxes in the world and just one citizen. Or none, so the AI parliament can use the entire budget to improve the future prosperity and safety of Foobonia.

In our current world tax competition is only one factor determining where companies go. Not every company moves to the Bahamas, Chile, Estonia or the UAE. One factor is other legal issues and logistics, but a big part is that you need to have people actually working in your company. Human capital is distributed very unevenly, and it is rarely where you want it (and the humans often do not want to move, for social reasons). But in an automated world machine capital will exist wherever you buy it, so it can be placed where taxes are lower. There will be a need to perform some services and transport goods in other areas, but unless they are taxed (hence driving up the price for your citizens) this is going to be a weaker constraint than now. How much weaker, I do not know – it would be interesting to see it investigated properly.

The core problem remains that if humans are largely living off the rents from a burgeoning economy there had better exist stabilizing safeguards so these rents remain, and stabilizers that keep the safeguards stable. This is a non-trivial legal/economic problem, especially since one failure mode might be that some countries become zero-citizen countries with huge economic growth and gradually accumulating investments everywhere (a kind of robotic Piketty situation, where everything eventually ends up owned by the AI consortium/sovereign wealth fund with the strongest growth). In short, it seems to require something just as tricky to develop as the friendly superintelligence program.

In any case, I suspect much of the reason people suggest GMI is that it is an already existing idea and not too strange. Hence it is thinkable and proposable. But there might be far better ideas out there for how to handle a world with powerful automation. One should not just stick with a local optimum idea when there might be way more stable and useful ideas further out.

The last sunset

I recently encountered the paper The last sunset on mainland Europe by Jorge Mira. No, despite the ominous title and my other interests it is not an estimate of when the ultimate sunset would happen (presumably either when Europe is subducted or when it gets vaporized together with Earth a few billion years hence by the ultimate sunrise). It is more along the lines of XKCD What-If’s “When will the sun finally set on the British Empire?”

Mira points out that the terminator is a great circle that changes direction throughout the year, so at different times different parts of Europe will be last. He found that these parts are Cabo de São Vicente (Portugal, Oct 19-Feb 21), Cabo da Roca (Portugal, Feb 21-Mar 24, Sep 20-Oct 19), Cabo Tourinan (Spain, Mar 24-Apr 23, Aug 18-Sep 19), a site near Aglapsvik (Norway, Apr 24-May 1, Aug 11-Aug 18), and a place in Masoy south of Havoysund (Norway, May 1-May 10, Aug 2-Aug 10). From May 11-Aug 1 the point skips along the coast to the Arctic circle and back. Which technically might mean it moves instantly through Sweden, Finland and Russia too at the summer solstice.

I happened to be taking a sunset photo at Cabo de São Vicente when I was there Dec 27: this was the last mainland sunset for that day.
Last sunset

Just outside the Kardashian index danger zone

Renommée des Sciences

My scientific Kardashian index is 3.34 right now.

This week’s talk of the scientific blogosphere is a tongue-in-cheek paper by Neil Hall, The Kardashian index: a measure of discrepant social media profile for scientists (Genome Biology 2014, 15:424). He suggests it as the ratio K=F_a/F_c between actual Twitter followers F_a and the number predicted from a scholar’s citation count C, F_c = 43.3 \cdot C^{0.32}. A value higher than 5 indicates scientists whose visibility exceeds their contributions.
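Hall’s formula is easy to play with; a minimal sketch (the function name is mine, the constants are Hall’s fit):

```python
def kardashian_index(followers: float, citations: float) -> float:
    """K = F_a / F_c, using Hall's fitted relation F_c = 43.3 * C^0.32."""
    predicted_followers = 43.3 * citations ** 0.32
    return followers / predicted_followers

# A value above 5 puts you in Hall's "Science Kardashian" danger zone.
```

For example, a scientist with a single citation is “expected” to have about 43 followers, so ten times that follower count gives K = 10 and lands well inside the danger zone.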

Of course, not everybody took it well, and various debates erupted. Since I am not in the danger zone (just as my blood pressure, cholesterol and weight are all just barely in the normal range and hence entirely acceptable) I can laugh at it, while recognizing that some people may have huge K scores while actually being good scientists – in fact, part of being a good scientific citizen is to engage with the outside world. As Micah Allen at UCL said: “Wear your Kardashian index with pride.”

Incidentally, the paper gives further basis for my thinking about merit vs. fame. There has been debate over whether fame depends linearly on merit (measured by papers published) (Bagrow et al.) or increases exponentially (M.V. Simkin and V.P. Roychowdhury, subsequent paper). The above paper suggests a cube-root law, more dampened than Bagrow’s linear claim. However, Hall left out people on super-cited papers and may have used a small biased sample: I suspect, given other results, that there will be a heavy tail of super-followed scientists (Neil deGrasse Tyson, anyone?).

Risky and rewarding robots

Robot playpen

Yesterday I participated in recording a radio program about robotics, and I noted that the participants were approaching the issue from several very different angles:

  • Robots as symbols: what we project onto them, what this says about humanity, how we change ourselves in relation to them, the role of hype and humanity in our thinking about them.
  • Robots as practical problem: how do you make a safe and trustworthy autonomous device that hangs around people? How do we handle responsibility for complex distributed systems that can generate ‘new’ behaviour?
  • Automation and jobs: what kinds of jobs are threatened or changed by automation? How does it change society, and how do we steer it in desirable directions – and what are they?
  • Long-term risks: how do we handle the potential risks from artificial general intelligence, especially given that many people think there is absolutely no problem while others are convinced that this could be existential if we do not figure out enough before it emerges?

In many cases the discussion got absurd because we talked past each other due to our different perspectives, but there were also some nice synergies. Trying to design automation without taking the anthropological and cultural aspects into account will lead to something that either does not work well with people or forces people to behave more machinelike. Not taking past hype cycles into account when trying to estimate future impact leads to overconfidence. Assuming that just because there has been hype in the past nothing will change is equally overconfident. The problems of trustworthiness and responsibility distribution become truly important when automating many jobs: when the automation is an essential part of the organisation, there need to be mechanisms to trust it and to avoid dissolution of responsibility. Currently robot ethics is more about how humans are impacted by robots than about ethics for robots, but the latter will become quite essential if we get closer to AGI.


Robot on break

I focused on jobs, starting from the Future of Employment paper. Maarten Goos and Alan Manning pointed out that automation seems to lead to a polarisation into “lovely and lousy jobs”: more non-routine manual jobs (lousy), more non-routine cognitive jobs (lovely). The paper strongly supports this, showing that a large chunk of occupations that rely on routine tasks might be possible to automate but things requiring hand-eye coordination, human dexterity, social ability, creativity and intelligence – especially applied flexibly – are pretty safe.

Overall, the economist’s view is relatively clear: automation that embodies skills and ability to do labour can only affect the distribution of jobs and how much certain skills are valued and paid compared with others. There is no rule that if task X can be done by a machine it will be done by a machine: handmade can still pay premium, and the law of comparative advantage might mean it is not worth using the machine to do X when it can do the even more profitable task Y. Still, being entirely dependent on doing X for your living is likely a bad situation.
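The law of comparative advantage invoked above can be made concrete with a toy calculation (all numbers invented for illustration): even a machine that is absolutely better at both tasks should leave the task it is relatively worse at to the human.

```python
# Toy comparative-advantage example; output rates are made-up illustrations.
machine_output = {"X": 10.0, "Y": 100.0}  # units per hour
human_output = {"X": 2.0, "Y": 1.0}       # the machine beats the human at both

# Opportunity cost of one unit of X, measured in forgone units of Y:
machine_cost_X = machine_output["Y"] / machine_output["X"]  # 10 Y per X
human_cost_X = human_output["Y"] / human_output["X"]        # 0.5 Y per X

# The human gives up far less Y per unit of X, so the human has the
# comparative advantage in X: it pays to let the machine specialise in Y.
assert human_cost_X < machine_cost_X
```

The design point is that what matters is each producer’s opportunity cost, not absolute skill; that is why “machine can do X” does not immediately imply “machine will do X”.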

Also, we often underestimate the impact of “small” parts of tasks that in formal analysis don’t seem to matter. Underwriters are on paper eminently replaceable… except that the ability to notice “Hey! Those numbers don’t make sense” or judge the reliability of risk models is quite hard to implement, and actually may constitute most of their value. We care about hard to automate things like social interaction and style. And priests, politicians, prosecutors and prostitutes are all fairly secure because their jobs might inherently require being a human or representing a human.

However, the development of AI ability is not a continuous predictable curve. We get sudden surprises like autonomous cars (just a few years ago most people believed autonomous driving was a very hard, nearly impossible problem) or statistical translation. Confluences of technology conspire to change things radically (consider the digital revolution in printing, both big and small, in the 80s that upended the world for human printers). And since we know we are simultaneously overhyping and missing trends, this should not give us any sense of complacency. Just because we have always failed to automate X in the past doesn’t mean X might not suddenly turn out to be automatable tomorrow: relying on X staying stably in the human domain is a risky assumption, especially when thinking about career choices.


Robin, supply, demand and robots

Robots also have another important property: we can make a lot of them if we have a reason. If there is a huge demand for humans doing X we need to retrain or have children who grow up to be Xers. That makes the price go up a lot. Robots can be manufactured relatively easily, and scaling up the manufacturing is cheaper: even if X-robots are fairly expensive, making a lot more X-robots might be cheaper than trying to get humans if X suddenly matters.

This scaling is a bit worrisome, since robots implement somebody’s action plan (maybe badly, maybe dangerously creatively): they are essentially an extension of somebody or something’s preferences. So if we could make robot soldiers, the group or side that could make the most would have a potentially huge strategic advantage. Innovations in fast manufacture become important, in turn leading to a situation where there is an incentive for an arms race in being able to get an army at the press of a button. This is where I think atomically precise manufacturing is potentially risky: it might enable very quick builds, and that is potentially destabilizing. But even plain automated production matters (remember, this is a scenario where some robotics is good enough to implement useful military action, so manufacturing robotics will be advanced too). Also, in countries running mostly on exports of raw materials, automating production might leave little need for most of the population… An economist would say the population might be used for other profitable activities, but many nasty resource-driven governments do not invest much in their human capital. In fact, they tend to see it as a security problem.

Of course, if we ever get to the level where intellectual tasks and services close to the human scale can be done, the same might apply to more developed economies too. But at that point we are so close to automating the task of making robots and AI better that I expect an intelligence explosion to occur before any social explosions. A society where nobody needs to work might sound nice and might be very worth striving for, but in order to get there we need to at the very least get close to general AI and solve its safety problems.

See also this essay: commercializing the robot ecosystem in the anthropocene.

Mathematical anti-beauty

Browsing Mindfuck Math I came across a humorous Venn diagram that got me to look up the Borwein integral. Wow. A kind of mathematical anti-beauty.

“As we all know”, sinc(x)=sin(x)/x for x\neq 0 and defined to be 1 for x=0. It is not that uncommon as a function. Now look at the following series of integrals:

\int_0^{\infty} sinc(x) dx = \pi/2 ,

\int_0^{\infty} sinc(x) sinc(x/3) dx = \pi/2 ,

\int_0^\infty sinc(x) sinc(x/3) sinc(x/5) dx = \pi/2 .

The pattern continues:

\int_0^\infty sinc(x) sinc(x/3) sinc(x/5) \cdots sinc(x/13) dx = \pi/2 .

And then…

\int_0^\infty sinc(x) sinc(x/3) sinc(x/5) \cdots sinc(x/13) sinc(x/15) dx

It is around 0.499999999992646 \pi – nearly a half, but not quite.

What is going on here? Borwein & Borwein give proofs, but they are not entirely transparent. Basically the reason is that 1/3+1/5+\cdots+1/13 < 1, while 1/3+1/5+\cdots+1/13+1/15 > 1, but why this changes things is clear as mud. Thankfully Hanspeter Schmid has a very intuitive explanation that makes it possible to visualize what is going on. At least if you like convolutions.
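The pattern and its breakdown can be checked numerically. A rough sketch, truncating the integral at x = 500 (for seven or more sinc factors the integrand decays like 1/x^7 or faster, so the discarded tail is far smaller than the 10^-11 effect we are looking for):

```python
import math
import numpy as np

def sinc(x):
    # np.sinc computes sin(pi*x)/(pi*x); rescale to get sin(x)/x, with sinc(0)=1
    return np.sinc(x / np.pi)

def borwein(n_factors, upper=500.0, dx=5e-4):
    """Integrate prod_{k=0}^{n-1} sinc(x/(2k+1)) over [0, upper] by Simpson's rule."""
    x = np.arange(0.0, upper + dx, dx)
    f = np.ones_like(x)
    for k in range(n_factors):
        f *= sinc(x / (2 * k + 1))
    if len(x) % 2 == 0:  # composite Simpson's rule needs an even interval count
        x, f = x[:-1], f[:-1]
    return dx / 3 * (f[0] + f[-1] + 4 * f[1:-1:2].sum() + 2 * f[2:-2:2].sum())

# Seven factors (up to sinc(x/13)): agrees with pi/2 to many decimals.
# Eight factors (up to sinc(x/15)): falls short of pi/2 by about 2.3e-11,
# matching the 0.499999999992646*pi value quoted above.
deficit = math.pi / 2 - borwein(8)
```

With these parameters borwein(7) matches π/2 to roughly ten decimal places, while the eight-factor deficit comes out around 2.3 × 10⁻¹¹.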

In any case, there is something simultaneously ugly and exciting when a neat pattern in math just ends for no apparent reason.

Another good example is the story of the Doomsday conjecture. Gwern tells the story well, based on Klarreich: a certain kind of object is found in dimensions 2, 6, 14, 30 and 62… aha! They are conjectured to occur in all 2^n-2 dimensions. A branch of math was built on this conjecture… and then the pattern failed in dimension 254. Oops.

It is a bit like the opposite case of the number of regular convex polytopes in different dimensions: 1, infinity, 5, 6, 3, 3, 3, 3… Here the series starts out crazy, and then becomes very regular.

Another dimensional “imperfection” is the behaviour of the volume of the n-sphere: V_n(r)=\frac{\pi^{n/2}r^n}{\Gamma(1+n/2)}

Volume of unit hyperspheres as a function of dimension

The volume of a unit sphere increases with dimension until n \approx 5.26, and then decreases. Leaving aside the non-intuitiveness of why volumes would shrink, the real oddness is that the maximum is at a non-integer dimension. We might argue that the formula is needlessly general and only the integer values count, but many derivations naturally bring in the Gamma function and hence the possibility of non-integer values.
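The maximising dimension is easy to locate numerically from the formula above; a quick sketch using ternary search, which works here because the volume is unimodal in n:

```python
import math

def unit_ball_volume(n):
    # V_n(1) = pi^(n/2) / Gamma(1 + n/2), valid for any real n >= 0
    return math.pi ** (n / 2) / math.gamma(1 + n / 2)

# Ternary search for the volume-maximising dimension on the bracket [4, 7]
lo, hi = 4.0, 7.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if unit_ball_volume(m1) < unit_ball_volume(m2):
        lo = m1
    else:
        hi = m2
n_max = (lo + hi) / 2  # comes out near 5.2569
```

Sanity checks: the formula gives π for n = 2 and 4π/3 for n = 3, and the search lands between the integer maxima V_5 ≈ 5.264 and V_6 ≈ 5.168.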

Another association is to this integral problem: given a set of integers x_i, is the integral \int_0^\pi \prod_i \sin(x_i \theta) d\theta = 0 ? As shown in Moore and Mertens, this is NP-complete. Here the strangeness is that integrals normally are pretty well behaved. It seems absurd that a particular not very scary trigonometric integral should require exponential work to analyze. But in fact, multivariate integrals are NP-hard to approximate, and calculating the volume of an n-dimensional polytope is actually #P-complete.

We tend to assume that mathematics is smoother and more regular than reality. Everything is regular and exceptionless because it is generated by universal rules… except when it isn’t. The rules often act as constraints, and when they do not mesh exactly odd things happen. Similarly we may assume that we know what problems are hard or not, but this is an intuition built in our own world rather than the world of mathematics. Finally, some mathematical truths maybe just are. As Gregory Chaitin has argued, some things in math are irreducible; there is no real reason (at least in the sense of a comprehensive explanation) for why they are true.

Mathematical anti-beauty can be very deep. Maybe it is like the insects, rot and other memento mori in classical still life paintings: a deviation from pleasantness and harmony that adds poignancy and a bit of drama. Or perhaps more accurately, it is wabi-sabi.

Thunderbolts and lightning, very very frightening

Cloud power

On The Conversation, I blog about the risks of electromagnetic disruption from solar storms and EMP: Electromagnetic disaster could cost trillions and affect millions. We need to be prepared.

The reports from Lloyd’s and the National Academies are worrying, but as a disaster it would not kill that many people directly. However, an overall weakening of our societal and global systems is nothing to joke about: when societies have fewer resources they are less resilient to other threats. In this case it would weaken information processing, resources and our ability to get things done. Just the thing to make other risks far worse.

As a public goods problem I think this risk is easier to handle than others; it is more like Y2K than climate change, since most people have aligned interests. Nobody wants a breakdown, and few actually win from the status quo. But there are going to be costs and inertia nevertheless. Plus, I don’t think we have a good answer yet to local EMP risks.

Cryonics: too rational, hence fair game?

Cryotag

On Practical Ethics I blog about cryonics acceptance: Freezing critique: privileged views and cryonics. My argument is that cryonics tries to be a rational, scientific approach, which makes it fair game for criticism. Meanwhile many traditional and anti-cryonics views are either directly religious or linked to religious views, which means people refrain from criticising them back. Since views that are criticised are seen as more questionable than non-criticised (if equally strange) views, this makes cryonics look less worth respecting.