Brewing bad policy

The New York Times reports that yeast has been modified to make THC. The paper describing the method uses precursor molecules, so it is not a hugely radical step forward. Still, it dovetails nicely with the recent paper in Science about full biosynthesis of opiates from sugar (still not very efficient compared to plants, though). Already this spring there was a comment piece in Nature about how to regulate the possibly imminent drug microbrewing, which I commented on at Practical Ethics.

Rob Carlson has an excellent commentary on the problems with the regulatory reflex toward new technology. He is basically arguing a “first, do no harm” principle for technology policy.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don’t know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This is such a common conversation that in many fields, like AI, even raising the possibility of a problem makes practitioners suspect you are about to invoke regulation. It fits the hyperbolic tendencies of many domains. For the record, if there is one thing we in the AI safety research community agree on, it is that more research is needed before we can give sensible policy recommendations.

Figuring out what policies can work requires understanding what the domain is actually about (including what it can do now, what it will likely be able to do one day, and what it cannot do), how similar policy options have fared in the past, and what options are actually available to policy-makers. This requires a fair bit of interdisciplinary work between researchers and policy professionals. Clearly we need more forums where this can happen.

And yes, even existential risks need to be handled carefully like this. If their importance overshadows everything, then getting policies that actually reduce the risk is a top priority: dramatic, fast policies don’t guarantee working risk reduction, and once a policy is in place it is hard to shift. For most low-probability threats we gain little survival by rushing policies into place, compared to taking the time to get better policies.