Brewing bad policy

The New York Times reports that yeast has been modified to make THC. The paper describing the method uses precursor molecules, so it is not a hugely radical step forward. Still, it dovetails nicely with the recent paper in Science about full biosynthesis of opiates from sugar (still not very efficient compared to plants, though). Already this spring there was a comment piece in Nature about how to regulate the possibly imminent drug microbrewing, which I commented on at Practical Ethics.

Rob Carlson has an excellent commentary on the problems with the regulatory reflex about new technology. He is basically arguing for a “first, do no harm” principle for technology policy.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don’t know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This is such a common conversation that in many fields like AI even bringing up that there might be a problem makes practitioners think you are planning to invoke regulation. It fits with the hyperbolic tendency of many domains. For the record, if there is one thing we in the AI safety research community agree on, it is that more research is needed before we can give sensible policy recommendations.

Figuring out what policies can work requires understanding what the domain actually is about (including what it can actually do, what it likely will be able to do one day, and what it cannot do), how different policy options have actually worked in the past, and what policy options actually exist in policy-making. This requires a fair bit of interdisciplinary work between researchers and policy professionals. Clearly we need more forums where this can happen.

And yes, even existential risks need to be handled carefully like this. If their importance overshadows everything, then getting policies that actually reduce the risk is a top priority: dramatic, fast policies don’t guarantee working risk reduction, and once a policy is in place it is hard to shift. For most low-probability threats we do not gain much survival by rushing policies into place rather than taking the time to craft better ones.

Brewing more than booze

Over on Practical Ethics I blog about how to handle production of opiates from bioengineered yeast.

The basic problem is that opiates seem to be unusually harmful (rather nasty dependency, social withdrawal and risky methods of administration), yet restricting access looks hard in the long run. I don’t subscribe to the view that mere exposure will turn all people into addicts (it looks like it is a subset of people who are vulnerable), but there is a fair bit of harm here that likely is not outweighed by cheapness and better quality. Yet the proposed methods of restricting access to the modified yeast are unlikely to work in the long run, and may have some bad effects of their own.

My own solution is to recognize that in 10-20 years it will be possible to brew many strong drugs discreetly at home, and that we need to reduce the harm from this by developing other technologies that make them less problematic. It might sound wussy and complex compared to the more easily actionable targets suggested in the article, but I think it has a greater chance of actually reducing harms in the long run than policies that merely delay the broad arrival of microbrew drugs.