Why fears of supersizing are misplaced

I am a co-author of the paper “On the Impossibility of Supersized Machines” (together with Ben Garfinkel, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Andrew Snyder-Beattie, and Max Tegmark):

In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.

In the spirit of using multiple arguments to bound a risk (so that the failure of any single argument does not strongly weaken the joint argument), we show that there are philosophical reasons (the meaninglessness of “human-level largeness”, the universality of human largeness, the hard problem of largeness), psychological reasons (an error theory based on motivated cognition), conceptual reasons (humans plus machines will always be larger than machines alone) and scientific/mathematical reasons (irreducible complexity, the quantum-Gödel issue) not to believe in the possibility of machines larger than humans.

While it is cool to do exploratory engineering to demonstrate what can in principle be built, it is also very reassuring to show that there are boundaries on what is possible. That allows us to focus on the (large) space within them.