Catastrophizing for not-so-fun and non-profit

Oren Cass has an article in Foreign Affairs about the problem of climate catastrophizing. His basic point is that catastrophizing is driven by motivated reasoning, and in turn drives further motivated reasoning, in a vicious circle. Regardless of whether he himself has motivated reasoning too, I think the text is relevant beyond the climate domain.

Some of FHI's research and reports are mentioned in passing. Their role is mainly to show that there could be very bright futures, or other existential risks, which undercuts the climate catastrophists he is really criticising:

Several factors may help to explain why catastrophists sometimes view extreme climate change as more likely than other worst cases. Catastrophists confuse expected and extreme forecasts and thus view climate catastrophe as something we know will happen. But while the expected scenarios of manageable climate change derive from an accumulation of scientific evidence, the extreme ones do not. Catastrophists likewise interpret the present-day effects of climate change as the onset of their worst fears, but those effects are no more proof of existential catastrophes to come than is the 2015 Ebola epidemic a sign of a future civilization-destroying pandemic, or Siri of a coming Singularity.

I think this is an important point for the existential risk community to be aware of. We are mostly interested in existential risks and global catastrophes that look possible but could be impossible (or avoided), rather than trying to predict risks that are going to happen. We deal in extreme cases that are intrinsically uncertain, and leave the more certain things to others (unless maybe they happen to be very under-researched). Siri gives us some singularity-evidence, but we think it is weak evidence, not proof (a hypothetical AI catastrophist would instead say “so, it begins”).

Confirmation bias is easy to fall for. If you are looking for signs of your favourite disaster emerging you will see them, and presumably loudly point at them in order to forestall the disaster. That suggests there is extra value in checking which things might not be xrisks and should not be emphasised too much.

Catastrophizing is not very effective

The nuclear disarmament movement also used a lot of catastrophizing, with plenty of archetypal cartoons showing Earth blowing up as a result of nuclear war, or claims that it would end humanity. The fact that the likely outcome would merely be mega- or gigadeaths and untold suffering was apparently not regarded as rhetorically punchy enough. Ironically, Threads, The Day After or the Charlottesville scenario in Effects of Nuclear War may have been far more effective in driving home the horror and undesirability of nuclear war, largely by giving smaller-scale, more relatable scenarios. Scope insensitivity, psychic numbing, compassion fade and related effects make catastrophizing a weak, perhaps even counterproductive, tool.

Defending bad ideas

Another take-home message: when arguing for the importance of xrisk we should make sure we do not end up in the stupid loop he describes. If something is the most important thing ever, we had better argue for it well, backed up with as much evidence and reason as can possibly be mustered. Turning it all into a game of overcoming cognitive bias through marketing, or attributing psychological explanations to opposing views, is risky.

The catastrophizing problem for very important risks is related to Janet Radcliffe-Richards’ analysis of what is wrong with political correctness (in an extended sense). A community argues for some high-minded ideal X using some arguments or facts Y. Someone points out a problem with Y. The rational response would be to drop Y and replace it with better arguments or facts Z (or, if it is really bad, drop X). The typical human response is to (implicitly or explicitly) assume that since Y is used to argue for X, then criticising Y is intended to reduce support for X. Since X is good (or at least of central tribal importance) the critic must be evil or at least a tribal enemy – get him! This way bad arguments or unlikely scenarios get embedded in a discourse.

Standard groupthink where people with doubts figure out that they better keep their heads down if they want to remain in the group strengthens the effect, and makes criticism even less common (and hence more salient and out-groupish when it happens).

Reasons to be cheerful?

An interesting detail about the opening: the GCR/Xrisk community seems to be way more optimistic than the climate community as described. I mentioned Warren Ellis's little novel Normal earlier on this blog, which is about a mental asylum for futurists affected by looking into the abyss. I suspect he was modelling them on the moody climate people, adding an overlay of other futurist ideas and tropes for the story.

Assuming climate people really are that moody.

An elliptic remark

I recently returned to toying around with circle and sphere inversion fractals, that is, fractal sets that are invariant under inversion in a given set of circles or spheres.

That got me thinking: can you invert points in other things than circles? Of course you can! José L. Ramírez has written a nice overview of inversion in ellipses. Basically a point P is projected to another point P' so that ||P-O||\cdot ||P'-O||=||Q-O||^2, where O is the centre of the ellipse and Q is the point where the ray from O through P and P' intersects the ellipse.

In Cartesian coordinates, for an ellipse centered on the origin and with semimajor and minor axes a,b, the inverse point of P=(u,v) is P'=(x,y) where x=\frac{a^2b^2u}{b^2u^2+a^2v^2} and y=\frac{a^2b^2v}{b^2u^2+a^2v^2}. Basically this is a squashed version of the circle formula.
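In code this is a one-liner (a minimal sketch; ellipseInvert is my own name for the helper, not something from the paper):

function P2=ellipseInvert(P,O,a,b)
% Invert the point P=[u v] in the ellipse centred at O=[ox oy] with semi-axes a,b.
Y=P-O;                        % coordinates relative to the centre
d=b^2*Y(1)^2+a^2*Y(2)^2;      % denominator b^2 u^2 + a^2 v^2
P2=O+a^2*b^2*Y/d;             % the squashed circle-inversion formula
end

A quick sanity check: points on the ellipse are fixed points (there b^2u^2+a^2v^2=a^2b^2, so the formula returns the point itself), and applying the map twice returns the original point, just as for circle inversion.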

Many of the properties remain the same. Lines passing through the centre of the ellipse are unchanged. Other lines get mapped to ellipses; if they intersect the inversion ellipse the new ellipse also intersects it at those points. Hence tangent lines are mapped to tangent ellipses. Ellipses with parallel axes and equal eccentricities map onto other ellipses (or lines if they pass through the centre of the inversion ellipse). Other conics get turned into higher-degree curves; for example a hyperbola gets mapped to a lemniscate. (See also this paper for more examples.)

Now, from a fractal standpoint this means that if you have a set of ellipses that are tangent you should expect a fractal passing through their points of tangency. Basically all of the standard circle inversion fractals hence have elliptic counterparts. Here is the result for a ring of 4 or 6 mutually tangent ellipses:

Invariant set fractal (blue) for inversion in the red ellipses. Generated using an IFS algorithm.

Invariant set fractal (blue) for inversion in the red ellipses. Generated using an IFS algorithm.

These pictures were generated by taking points in the plane and inverting them in randomly selected ellipses; as the process continues they get attracted to the invariant set (this is basically a standard iterated function system). It also has the known problem of reaching the points at the tangencies: to get there the iteration would have to keep alternating between inversions in the two tangent ellipses, but sooner or later a third ellipse is likely to be selected, which throws the point elsewhere.
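In outline, the chaos game looks like this (a minimal sketch, assuming the same ellipse format as the appendix code below, where each row of center is [cx cy a b]):

% Random-inversion IFS for the ring of four tangent ellipses.
center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1];
center(:,3:4)=center(:,3:4)*(2/3);
M=size(center,1);
P=randn(1,2);                          % arbitrary starting point
pts=zeros(100000,2);
for k=1:size(pts,1)
    j=randi(M);                        % pick a random ellipse
    C=center(j,1:2); A2=center(j,3)^2; B2=center(j,4)^2;
    Y=P-C;                             % invert P in ellipse j
    P=C+A2*B2*Y/(B2*Y(1)^2+A2*Y(2)^2);
    pts(k,:)=P;
end
plot(pts(1000:end,1),pts(1000:end,2),'.','MarkerSize',1) % drop the transient
axis equal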

One approach is to deliberately recurse downward to find the points using a depth-first search. We can look at where each ellipse is mapped by each of the inversions, and since the fractal is inside each of the mapped ellipses we can continue mapping the chain of mapped ellipses, getting nice bounds on where it is going (as long as everything keeps shrinking: this is guaranteed for maps from the outside to the inside of the generating ellipses, but if the ellipses were to overlap things can blow up). Doing this for just one step reveals one reason for the quirky shapes above: some of the ellipses get mapped into crescents or pears, adding a lot of bends:

Mappings of the ellipses by their inversions: each of the four main ellipses map the other three to their interior but distort the shape of two of them.

Now, continuing this process makes a nested structure where the invariant set is hidden inside all the other mapped ellipses.

Nested mappings of the ellipses in the chain, bounding the invariant set. Colors are mixtures of the colors of the generating ellipses, with an increase in saturation.

It is still hard to reach the tangent points, but at least now they are easier to detect. They are also numerically tough: most points on the ellipse circumferences are mapped away from them towards the interior of the generating ellipse. Still, if we view the mapped ellipses as uncertainties and shade them in we can get a very pleasing map of the invariant set:

Invariant set of chain of four outer ellipses and a circle tangent to them on the inside.

Here are a few other nice fractals based on these ideas:

Using a mix of circles and ellipses produces a nice mix of the regularity of the circle-based Apollonian gaskets and the swooshy, Hénon fractal shape the ellipses induce.

Appendix: Matlab code

 

% center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1];
% center(:,3:4)=center(:,3:4)*(2/3);
%
%center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1; 3 1 1 2; 3 -1 2 1];
%center(:,3:4)=center(:,3:4)*(2/3);
%center(:,1)=center(:,1)-1;
%
% center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1];
% center(:,3:4)=center(:,3:4)*(2/3);
% center=[center; 0 0 .51 .51];
%
% egg
% center=[0 0 0.6666 1; 2 2 2 2; -2 2 2 2; -2 -2 2 2; 2 -2 2 2];
%
% double
%r=0.5;
%center=[-r 0 r r; r 0 r r; 2 2 2 2; -2 2 2 2; -2 -2 2 2; 2 -2 2 2];
%
% Double egg
center=[0.3 0 0.3 0.845; -0.3 0 0.3 0.845; 2 2 2 2; -2 2 2 2; -2 -2 2 2; 2 -2 2 2];
%
M=size(center,1); % number of ellipses
N=100; % points on fill curves
X=randn(N+1,2); % placeholder; overwritten below with points on each ellipse
clf
hold on
tt=2*pi*(0:N)/N; % parameter values around each ellipse
alpha 0.2
for i=1:M
    % Sample and draw the boundary of ellipse i
    X(:,1)=center(i,1)+center(i,3)*cos(tt);
    X(:,2)=center(i,2)+center(i,4)*sin(tt);
    plot(X(:,1),X(:,2),'k');
    % Map this ellipse through chains of inversions in the other ellipses
    for j=1:M
        if (i~=j)
            recurseDown(X,[i j],10,center)
            drawnow
        end
    end
end

recurseDown.m

function recurseDown(X,ellword,maxlevel,center)
i=ellword(end); % invert in latest ellipse
%
% Perform inversion
C=center(i,1:2);
A2=center(i,3).^2;
B2=center(i,4).^2;
Y(:,1)=X(:,1)-C(:,1);
Y(:,2)=X(:,2)-C(:,2);
X(:,1)=C(:,1)+A2.*B2.*Y(:,1)./(B2.*Y(:,1).^2+A2.*Y(:,2).^2);
X(:,2)=C(:,2)+A2.*B2.*Y(:,2)./(B2.*Y(:,1).^2+A2.*Y(:,2).^2);
%
if (norm(max(X)-min(X))<0.005), return; end % stop once the mapped ellipse is tiny
%
co=hsv(size(center,1)); % one base colour per generating ellipse
coco=mean([1 1 1; 1 1 1; co(ellword,:)]); % blend the chain's colours, washed towards white
%
%    plot(X(:,1),X(:,2),'Color',coco)
fill(X(:,1),X(:,2),coco,'FaceAlpha',.2,'EdgeAlpha',0)
%
if (length(ellword)<maxlevel)
    for j=1:size(center,1)
        if (j~=i)
            recurseDown(X,[ellword j],maxlevel,center)
        end
    end
end

The frightening infinite spaces: apeirophobia

Bobby Azarian writes in The Atlantic about Apeirophobia: The Fear of Eternity. This is the existential vertigo experienced by some when considering everlasting life (typically in a religious context), or just the infinite. Pascal’s Pensées famously touches on the same feeling: “The eternal silence of these infinite spaces frightens me.” For some this is upsetting enough that it actually counts as a specific phobia, although in most cases it seems to be more of a general unease.

Fearing immortality


I found the concept relevant since yesterday I had a conversation with a philosopher arguing against life extension. Many of her arguments were familiar: they come up again and again if you express a positive view of longevity. It is interesting to notice that many other views do not elicit the same critical response. Suggest a future in space and some will think it wasteful or impossible, but they rarely object with the same tenacity as they do to life extension. As soon as one rational argument is disproven another one takes its place.

In the past I have usually attributed this to ego defence and maybe terror management. We learn about our mortality when we are young and have to come up with a way of handling it: ignoring it, denying it by assuming eternal hereafters, that we can live on through works or children, various philosophical solutions, concepts of the appropriate shape of our lives, etc. When life extension comes up, this terror management or self-image is threatened and people try to defend it – their emotional equilibrium is risked by challenges to the coping strategy (and yes, this is also true for transhumanists who resolve mortality by hoping for radical life extension: there is a lot of motivated thinking going on in defending the imminent breakthroughs against death, too). While “longevity is disturbing to me” is not a good argument, it is the motivator for finding arguments that can work in the social context. This is also why no amount of knocking down these arguments actually leads anywhere: the source is a coping strategy, not a rationally consistent position.

However, the apeirophobia essay suggests a different reason some people argue against life extension: they are actually unsettled by indefinite or infinite lives. I do not think everybody who argues this way has apeirophobia; it is probably a minority fear (and might even be a different take on the fear of death). But it is a somewhat more respectable origin than ego defence.

When I encounter arguments for the greatness of finite and perhaps short spans of life, I often rhetorically ask – especially if the interlocutor is from a religious worldview – if they think people will die in Heaven. It is basically Sappho’s argument (“to die is an evil; for the gods have thus decided. For otherwise they would be dying.”) Of course, this rarely succeeds in convincing anybody but it tends to throw a spanner in the works. However, the apeirophobia essay actually shows that some religious people may have a somewhat consistent fear that eternal life in Heaven isn’t a good thing. I respect that. Of course, I might still ask why God in their worldview insists on being eternal, but even I can see a few easy ways out of that issue (e.g. it is a non-human being not affected by eternity in the same way).

Arbitrariness

As I often have to point out, I do not believe immortality is a thing. We are finite beings in a random universe, and sooner or later our luck runs out. What to aim for is indefinitely long lives, lives that go on (with high probability) until we no longer find them meaningful. But even this tends to trigger apeirophobia. Maybe one reason is the indeterminacy: there is nothing pre-set at all.

Pascal’s worry seems to be not just the infinity of the spaces but also their arbitrariness and how insignificant we are relative to them. The full section of the Pensées:

205: When I consider the short duration of my life, swallowed up in the eternity before and after, the little space which I fill, and even can see, engulfed in the infinite immensity of spaces of which I am ignorant, and which know me not, I am frightened, and am astonished at being here rather than there; for there is no reason why here rather than there, why now rather than then. Who has put me here? By whose order and direction have this place and time been allotted to me? Memoria hospitis unius diei prætereuntis.

206: The eternal silence of these infinite spaces frightens me.

207: How many kingdoms know us not!

208: Why is my knowledge limited? Why my stature? Why my life to one hundred years rather than to a thousand? What reason has nature had for giving me such, and for choosing this number rather than another in the infinity of those from which there is no more reason to choose one than another, trying nothing else?

Pascal is clearly unsettled by infinity and eternity, but in the Pensées he tries to resolve this psychologically: since he trusts God, eternity must be a good thing even if it is hard to bear. This is a very different position from my interlocutor yesterday, who insisted that it was the warm finitude of a human life that gave life meaning (a view somewhat echoed in Mark O’Connell’s To Be a Machine). To Pascal apeirophobia was just another challenge on the way to becoming a good Christian; to the mortalist it is a correct, value-tracking intuition.

Apeirophobia as a moral intuition

I have always been sceptical of psychologizing why people hold views. It is sometimes useful for empathising with them and for recognising the futility of knocking down arguments that are actually secondary to a core worldview (which it may or may not be appropriate to challenge). But it is easy to make mistaken guesses. Plus, one often ends up in the “sociological fallacy”: thinking that since one can see non-rational reasons people hold a belief, that belief is unjustified or even untrue. As Yudkowsky pointed out, forecasting empirical facts by psychoanalyzing people never works. I also think this applies to values, insofar as they are not only about internal mental states: that people with certain characteristics are more likely to think something has a certain value than people without the characteristic only gives us information about the value if that characteristic somehow correlates with being right about that kind of value.

Feeling apeirophobia does not tell us that infinity is bad, just as feeling xenophobia does not tell us that foreigners are bad. Feeling suffering, on the other hand, does give us direct knowledge that it is intrinsically aversive (it takes a lot of philosophical footwork to construct an argument that suffering is actually OK). Moral or emotional intuitions certainly can motivate us to investigate a topic with better intellectual tools than the vague unease, conservatism or blind hope that started the process. The validity of the results should not depend on the trigger, since there is no necessary relation between the feeling and the ethical state of the thing triggering it: much of the debate about “the wisdom of repugnance” is about clarifying when we should expect intuitions to overwhelm the actual thinking and when they are actually reliable. I always get very sceptical when somebody claims their intuition comes from an innate sense of what the good is – at least when it differs from mine.

Would people with apeirophobia have a better understanding of the value of infinity than somebody else? I suspect apeirophobes are on average smarter and/or have a higher need for cognition, but this does not imply that they get things right, just that they think more, and more deeply, about concepts many people are happy to gloss over. There are many smart non-apeirophobes too.

A strong reason to be sceptical of apeirophobic intuitions is that intuitions tend to work well when we have plenty of experience to build them from, either evolutionarily or individually. Human practical physics intuitions are great for everyday objects and speeds, and progressively worsen as we reach relativistic or quantum scales. We do not encounter eternal life at all, and hence we should be very suspicious about the validity of apeirophobia as a truth-tracking innate signal. Rather, it is triggered when we become overwhelmed by the lack of references to infinity in our lived experience, or when we discover the arbitrarily extreme nature of “infinite issues” (has anybody not experienced vertigo when they first understood uncountable sets?). It is a correct signal that our minds are swimming above an abyss we do not know, but it does not tell us what is in this abyss. Maybe it is nice down there? Given our human tendency to look more strongly for downsides and losses than positives, we will tend to respond to this uncertainty by imagining diffuse worst-case scenario monsters anyway.

Bad eternities

I do not think I have apeirophobia, but I can still see how chilling belief in eternal lives can be. Unsong’s disutility-maximizing Hell is very nasty, but I do not think it exists. I am not worried about Eternal Returns: if you chronologically live forever but actually just experience a finite-length loop of experiences again and again, then it makes sense to say that your life is just that long.

My real worry is quantum immortality: from a subjective point of view one should expect to survive whatever happens in a multiverse situation, since one cannot be aware in those branches where one died. The problem is that the set of nice states to be in is far smaller than the set of possible states, so over time we should expect to end up horribly crippled and damaged yet unable to die. But here the main problem is the suffering and reduction of circumstances, not the endlessness.

There is a problem with endlessness here though: since random events play a decisive role in our experienced life paths, it seems that we have little control over where we end up and that whatever we experience in the long run is going to be wholly determined by chance (after all, beyond 10^{100} years or so we will all have to be a succession of Boltzmann brains). But the problem seems to be more the pointlessness that emerges from this chance than that it goes on forever: a finite randomised life seems to hold little value, and as Tolstoy put it, maybe we need infinite subjective lives where past acts matter in order to actually have meaning. I wonder what apeirophobes make of Tolstoy?

Embracing the abyss

My recommendation to apeirophobes is not to take Azarian’s advice and put eternity out of mind, but instead to embrace it in a controllable way. Learn set theory and the paradoxes of infinity. And then look at the time interval t=[0, \infty) and realise it can be mapped onto the interval (0,1] (e.g. by f(t)=1/(t+1)). From the infinite perspective any finite length of life is equal. But infinite spans can be manipulated too: in a sense they are also all the same. The infinities hide within what we normally think of as finite.
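Written out (a small check of the claim above): f(t)=1/(t+1) sends t=0 to 1 and ever larger t ever closer to 0, so it maps [0, \infty) one-to-one onto (0,1]; its inverse f^{-1}(s)=1/s-1 maps (0,1] back onto [0, \infty). Composing such maps shows that any two such intervals, finite or infinite in extent, can be matched up point for point.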

I suspect Pascal would have been delighted if he knew this math. However, to him the essential part was how we turn intellectual meditation into emotional or existential equilibrium:

Let us therefore not look for certainty and stability. Our reason is always deceived by fickle shadows; nothing can fix the finite between the two Infinites, which both enclose and fly from it.

If this be well understood, I think that we shall remain at rest, each in the state wherein nature has placed him. As this sphere which has fallen to us as our lot is always distant from either extreme, what matters it that man should have a little more knowledge of the universe? If he has it, he but gets a little higher. Is he not always infinitely removed from the end, and is not the duration of our life equally removed from eternity, even if it lasts ten years longer?

In comparison with these Infinites all finites are equal, and I see no reason for fixing our imagination on one more than on another. The only comparison which we make of ourselves to the finite is painful to us.

In the end it is we who make the infinite frightening or the finite painful. We can train ourselves to stop it. We may need very long lives in order to grow to do it well, though.

Calabi-Yau and Hanson’s surfaces

I have a glass cube on my office windowsill containing a slice of a Calabi-Yau manifold, one of Bathsheba Grossman’s wonderful creations. It is an intricate, self-intersecting surface with lots of unexpected symmetries. A visiting friend got me into trying to make my own version of the surface.

First, what is the equation for it? Grossman-Hanson’s explanation is somewhat involved, but basically what we are seeing is a 2D slice through a 6-dimensional manifold in a projective space, expressed as the manifold z_1^5+z_2^5=1 where the variables are complex (so it lives in the 4D space \mathbb{C}^2). Hanson shows that this is a kind of complex superquadric in this paper. This leads to the formulae:

z_1(\theta,\xi,k_1)=e^{2\pi i k_1 / n}\cosh(\theta+\xi i)^{2/n}

z_2(\theta,\xi,k_2)=e^{2 \pi i k_2 / n}\sinh(\theta+\xi i)^{2/n}/i

where the k’s run through 0 \leq k \leq (n-1). Each pair k_1,k_2 corresponds to one patch of what is essentially a complex catenoid. This still lives in 4D. To plot it, we plot the points

(\Re(z_1),\Re(z_2),\cos(\alpha)\Im(z_1)+\sin(\alpha)\Im(z_2))

where \alpha is some suitable angle to tilt the projection into 3-space. Hanson’s explanation is very clear; I originally reverse-engineered the same formula from the code at Ziyi Zhang’s site.

The Hanson n=4 Calabi-Yau manifold projected into 3-space.

The result is pretty nifty. It is tricky to see how it hangs together in 2D; rotating it in 3D helps a bit. It is composed of 16 identical patches:

The patches making up the Hanson Calabi-Yau surface.

The boundaries of the patches meet other patches except along two open borders (corresponding to large or small values of \theta): these form the edges of the manifold, and strictly speaking I ought to have rendered them out to infinity. That would have made the plot unbounded and somewhat boring to look at: four disks meeting at an angle, with the interesting part hidden inside. By marking the edges we can see that the boundary consists of four linked wobbly circles:

Boundary of the piece of the Hanson Calabi-Yau manifold displayed.

A surface bounded by a knot or a link is called a Seifert surface. While these surfaces look a lot like minimal surfaces, they are not exactly minimal: when I estimate the mean curvature it is not exactly zero, as it would be for a true minimal surface. This could be due to lack of numerical precision, but I think the deviation is real: while minimal surfaces are Ricci-flat, the converse is not necessarily true.
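One rough way to make such an estimate, assuming the per-patch X, Y, Z grids produced by the appendix code below, is via the first and second fundamental forms (a sketch using grid indices as parameters, not production code):

% Estimate the mean curvature H on one parametric patch (X,Y,Z).
[Xu,Xv]=gradient(X); [Yu,Yv]=gradient(Y); [Zu,Zv]=gradient(Z);    % first derivatives
[Xuu,Xuv]=gradient(Xu); [Yuu,Yuv]=gradient(Yu); [Zuu,Zuv]=gradient(Zu);
[~,Xvv]=gradient(Xv); [~,Yvv]=gradient(Yv); [~,Zvv]=gradient(Zv); % second derivatives
E=Xu.^2+Yu.^2+Zu.^2; F=Xu.*Xv+Yu.*Yv+Zu.*Zv; G=Xv.^2+Yv.^2+Zv.^2; % first fundamental form
nx=Yu.*Zv-Zu.*Yv; ny=Zu.*Xv-Xu.*Zv; nz=Xu.*Yv-Yu.*Xv;             % surface normal
nn=sqrt(nx.^2+ny.^2+nz.^2); nx=nx./nn; ny=ny./nn; nz=nz./nn;
L=Xuu.*nx+Yuu.*ny+Zuu.*nz; M=Xuv.*nx+Yuv.*ny+Zuv.*nz; N=Xvv.*nx+Yvv.*ny+Zvv.*nz;
H=(E.*N-2*F.*M+G.*L)./(2*(E.*G-F.^2)); % identically zero for a true minimal surface

Since the mean curvature does not depend on the parametrization, using grid indices as parameters only affects the discretization error, not what H should converge to.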

Changing N produces other surfaces. N=2 is basically a catenoid (tilted and self-intersecting). As N increases it becomes more like a barrel or a pufferfish, with one direction dominated by circular saddle regions, one showing a meshwork of spaces reminiscent of spacefilling minimal surfaces, and one a lot of overlapping “barbs”.

Hanson’s Calabi-Yau surface for N=2, N=3, N=5 and N=8.

Note that just like for minimal surfaces one can multiply z_1, z_2 by e^{i\omega} to get another surface in an associate family. In this case it circulates the patches along their circles without changing the surface much.
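In the appendix code below this amounts to multiplying z1 and z2 by a common phase right after they are computed; a minimal sketch (omega is my own name for the family parameter, not something in the original code):

omega=pi/4;            % associate family angle
z1=exp(i*omega)*z1;    % rotate both complex coordinates by the same phase
z2=exp(i*omega)*z2;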

Hanson also notes that by changing the formula to z_1^{n_1}+z_2^{n_2}=1 we can get boundaries that are torus-knot-like. This leads to the formulae:

z_1(\theta,\xi,k_1)=e^{2\pi i k_1 / n_1}\cosh(\theta+\xi i)^{2/n_1}

z_2(\theta,\xi,k_2)=e^{2 \pi i k_2 / n_2}\sinh(\theta+\xi i)^{2/n_2}/i
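In the appendix code this corresponds to using two exponents instead of one; a minimal sketch of the changed lines (n1 and n2 are my names for the exponents, and the k1, k2 loops should then run to n1-1 and n2-1 respectively):

n1=4; n2=3;   % the two exponents
for k1=0:(n1-1)
    for k2=0:(n2-1)
        z1=exp(k1*2*pi*i/n1)*cosh(z).^(2/n1);
        z2=exp(k2*2*pi*i/n2)*(1/i)*sinh(z).^(2/n2);
        % ...project and plot exactly as in the appendix code...
    end
end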

Knotted surface for n1=4, n2=3.

Appendix: Matlab code

%% Initialization
edge=0; % Mark edge?
coloring=1; % Patch coloring type
n=4;
s=0.1; % Gridsize
alp=1; ca=cos(alp); sa=sin(alp); % Projection
[theta,xi]=meshgrid(-1.5:s:1.5,1*(pi/2)*(0:1:16)/16);
z=theta+xi*i;
% Color scheme
tt=2*pi*(1:200)'/200; co=.5+.5*[cos(tt) cos(tt+1) cos(tt+2)];
colormap(co)
%% Plot
clf
hold on
for k1=0:(n-1)
    for k2=0:(n-1)
        % One patch of the surface: a piece of a complex catenoid
        z1=exp(k1*2*pi*i/n)*cosh(z).^(2/n);
        z2=exp(k2*2*pi*i/n)*(1/i)*sinh(z).^(2/n);
        % Project from C^2 (4 real dimensions) into 3-space
        X=real(z1);
        Y=real(z2);
        Z=ca*imag(z1)+sa*imag(z2);
        if (coloring==0)
            surf(X,Y,Z);
        else
            switch (coloring)
                case 1
                    C=z1*0+(k1+k2*n); % color by patch
                case 2
                    C=abs(z1);
                case 3
                    C=theta;
                case 4
                    C=xi;
                case 5
                    C=angle(z1);
                case 6
                    C=z1*0+1;
            end
            h=surf(X,Y,Z,C);
            set(h,'EdgeAlpha',0.4)
        end
        if (edge>0)
            % Mark the two open borders of the patch in red
            plot3(X(:,end),Y(:,end),Z(:,end),'r','LineWidth',2)
            plot3(X(:,1),Y(:,1),Z(:,1),'r','LineWidth',2)
        end
    end
end
view([2 3 1])
camlight
h=camlight('left');
set(h,'Color',[1 1 1]*.5)
axis equal
axis vis3d
axis off