What causes ascents and collapses? What if the West is in fact in decline on some important indicator? Low interest rates mean it’s hard to find and exploit reliable opportunities. Stagnating real wages mean our techno-progress is not translating into actual wealth for people. Rampant anti-intellectualism, low STEM participation, and seemingly insane social policy are worrying signs. How seriously should we take these indicators and our interpretations of them? How would we know what was going on? We need a model of this stuff.
The model described here is a very basic preliminary crystallization of some of our thoughts on social tech. It is intended as a foundation for further criticism and development in discussions of social technology.
The basic model highlights two related variables:
Material Technology: The stuff we usually call “technology”: computers, energy generation, software, clothes, antibiotics, weapons, transport, etc. It is produced by smart people working together to get things done, and is generally more or less directly valuable. I don’t have to explain this too much.
Social Technology: The social constructs in culture and society that increase a civilization’s ability to build stuff, react to circumstances, and solve problems. Basically, social tech increases aggregate intelligence. Examples: the rule of law, civilized interpersonal norms (politeness), science fiction (yes, really), techno-enlightenment culture, formalized hierarchy, private property, professional specialization, joint-stock corporations, old bodies of expertise transferred by mentorship and apprenticeship, etc. Things that increase internal coordination, specialization, depth of inquiry, and long-range forecasting, align individual incentives with aggregate ones, increase attention efficiency, and so on are what I’d call higher social technology.
These variables are not intended to be interpreted as scalars or even as natural categories. Anything we try to derive here must make sense on the ground level without the abstractions, but the abstractions are useful shorthand.
The core thesis is that these variables are related by a few mechanisms:
Technological development, and other good things besides, depends strongly on aggregate intelligence. Many things affect aggregate intelligence, but we’ll focus for now on those that are social constructs. Without substantial social technology, a prospering civilization is very unlikely; with it, much more likely.
It takes time for changes in social tech to trickle down and propagate, leading to a potentially multi-decade or multi-generation delay between happenings in the core generators of social-tech, and observable technological effects. This obviously depends on the details; really big and well-distributed social-tech interventions can have effects quite quickly.
Any substantial technological system will have enough exposure to changing availabilities, changing circumstances, stuff wearing out, and so on that you need to be able to understand it creatively just to maintain it against entropy. If you remove the generating intelligence, you can’t just keep it running from old manuals and procedures; it falls apart eventually and you lose it. Counterexamples abound at the scale of specific systems, but the gap between creation and maintenance shrinks as the complexity of the system grows.
I consider those points obvious, but we’ll back them up briefly, even if only to clarify:
Why did the industrial revolution happen? My understanding is that it could not have happened without a number of necessary social-tech preconditions including a stable state, a largish politically unified market, the rule of law, joint-stock corporations, scientific enlightenment, and a free market. Besides the obvious material and technological preconditions and a few other social ones, I think that demonstrates the first mechanism reasonably well.
To illustrate the causes of the social-material delay, imagine that social tech is managed from the top by a Chief Social Engineer. Now imagine that he is removed. The rest of the structure quickly closes up the wound and carries on as usual, but isn’t quite as coordinated or visionary anymore. Eventually it hits some bump that damages it in a way it is not capable of recognizing and repairing, but it’s still mostly functional. Decades later, the loss of the CSE has consequences for material tech when some important project doesn’t happen, or political unpredictability retards economic risk-taking a bit. Conversely, reinstate him, and it takes a long time for him to learn the circumstances, reengineer things, encounter and seize some critical opportunities, and for economic culture to catch up. Thus a delay.
Implications Assuming No Feedback
Suppose the social-tech level is randomly doing its thing for inscrutable reasons, with no particular feedback effects. High points in social tech would produce high points in material tech. But because of the delay effect, you would expect the periods of high material tech to lag the peaks in core social tech. Thus, given a relatively high tech level, we expect recently high but currently declining social tech with elevated probability. Note that this is not conclusive; social tech could still be increasing, or constantly high, if those scenarios are otherwise plausible.
So high or even increasing technology is evidence of recently, but not necessarily currently, high social tech. Note that if we assume regression to the mean applies to social tech – that it has a well-defined mean it clusters around – then high technology does become evidence of social decline in a Bayesian sense, because the recent peak will be regressing back toward the mean.
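The no-feedback story can be made concrete with a toy simulation (every number here is invented for illustration, not an empirical claim): let social tech follow a mean-reverting random walk, and let material tech simply track social tech with a delay. Sampling the moments of highest material tech, social tech has usually already fallen below the level that produced them:

```python
import random

random.seed(0)
T, LAG = 5000, 30                        # timesteps; social-to-material delay (invented)
social = [0.0]
for _ in range(T - 1):                   # mean-reverting (AR(1)) social tech
    social.append(0.9 * social[-1] + random.gauss(0, 1))
material = [social[t - LAG] for t in range(LAG, T)]  # material tech lags social tech

# Of the top-decile material-tech moments, how many show social tech
# already below its own level one lag earlier, i.e. in decline?
top = sorted(range(len(material)), key=lambda i: material[i])[-len(material) // 10:]
declining = sum(social[i + LAG] < social[i] for i in top)
print(f"{declining / len(top):.0%} of high-material-tech moments follow a social-tech decline")
```

The lag and the autoregressive coefficient are arbitrary; the qualitative result – peak material tech is evidence of already-declining social tech – is just regression to the mean plus delay, and survives most parameter choices.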
What if we wanted more evidence to discriminate decline from sustained height? Well, if we can look inside social tech a bit, we would look at the state of the generators of social tech. I don’t know exactly what those are, but a visionary leader with the power to actually do stuff, and the wisdom and experience to know what to do, would be a promising sign. Corruption at the top would be a bad sign. Recent large changes in the generators would, a priori, be a bad sign.
To justify large changes being bad, we’ll need an additional premise, which we can call Narrowness-of-Good: most configurations of stuff are meaningless or bad; the useful and the good occupy very particular, narrow regions in the space of possible configurations, usually because they require many things to work together. Thus, large changes to a working piece of technology, social tech included, are a priori bad unless you have strong reason to believe the change is systematically good-tracking. As analogies, think of a randomly generated painting, a chemical plant hit by a powerful tornado, or an organism with a huge number of mutations. You get ugliness, a junk pile, and retardation.
In our own situation, we see large recent changes in leadership structure: the various democratic revolutions of the 18th century, the fall of the church, and the political re-engineering of the world wars and the cold war. I don’t think it’s plausible that we’re running entirely on pre-democratic momentum, but if democracy is negative (which seems likely, at least) but metastasizes slowly, that might not be too far from the truth. (Would democracy have invented capitalism or the rule of law on its own? How has it done at preserving them?)
So to conclude the no-feedback model, peaks in technology trail peaks in social-tech potentially far enough that by the time observable decline occurs, it’s way too late and collapse might be inevitable. Regression to the mean predicts that periods of high technology have elevated probability of declining social tech. Social-tech change will start from some generator process and propagate down to the reality on the ground, so we should be looking there, not only at the ground facts, for predictive insights. It is plausible that recent large changes in social technology (e.g. democracy) are deleterious.
Implications Assuming Feedback
What if technological change perturbs the social order? No particular direction or consistency; it just perturbs it. This seems obvious.
Agriculture, the printing press, the gun, the factory, the car, the atom bomb, the pill, and the Internet are good examples of technologies that perturbed the social order.
A naive extrapolation from the narrowness-of-good thesis and tech-perturbs-society is that good social orders should destroy themselves with disruptive technology. This doesn’t seem entirely supported by history; we stand at the top of an ascent too long and too unique, with too many disruptive changes along the way, to be entirely the result of momentum and chance. As much as it grates on my horrorist aesthetic to add an inexplicable force for good to the equations, it seems we must.
The force might be that social technology is simply civilization-level intelligence, and intelligence means nothing if it can’t adapt to disruptive threats. Thus we might expect things to go a little better than naively expected, just as we would expect Deep Blue to win at chess despite being unable to predict its moves.
I’ll quickly stop being able to speculate with simple models once we add semi-intelligence as a factor, but I think we can still say a few more things:
I’ll note that we now predict that stable high-tech configurations are unlikely; by the time you get to high technology, there are too many things out of equilibrium for it to really be stable: social change has not caught up, the social change that has happened has not propagated, technology morphs as it is maintained, and so on. A rough analogy is an inverted pendulum, or even a double pendulum; add enough energy and you’ll see it near upright sometimes, but getting it to stay there for two moments in succession requires careful balance by a high-frequency feedback controller with some predictive abstraction, especially in the double-pendulum case. So a high enough meta-level of social intelligence might be able to pull off a stable high-tech society, but it won’t happen by accident.
The next thing to note is that really fast technological change might overload the civilization’s ability to intelligently adapt. Faster techno-disruption makes disruption more likely to be deleterious. You can jump out of the way of a slow-moving car, but a fast one will get you. If we had a theory of intelligence, this would be a straightforward theorem; as it is, we’ll have to rely on intuition. Worse, social technology, and therefore a civilization’s aggregate intelligence, is part of the domain being optimized, so any loss of grip for a civilization is also a loss of the ability to keep a grip. This thesis, that accelerating technological change causes accelerating social decay, is the Strong Antisingularity Hypothesis.
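In the same toy spirit as before, a minimal feedback sketch (all coefficients invented, nothing empirical): technology grows at a fixed rate, each unit of technological change randomly perturbs social tech downward, and social tech can only repair itself at a bounded rate. The faster the growth, the further social tech ends up from its healthy level:

```python
import random

def run(tech_growth, steps=400, seed=1):
    """Toy feedback model; every coefficient is invented for illustration."""
    rng = random.Random(seed)
    social, tech = 1.0, 1.0
    for _ in range(steps):
        delta = tech * tech_growth            # technological change this step
        tech += delta                         # compounding tech growth
        social += 0.02 * (1.0 - social)       # bounded self-repair toward 1.0
        social -= 0.1 * delta * abs(rng.gauss(0, 1))  # disruption scales with change
    return social

slow, fast = run(0.001), run(0.01)
print(f"social tech after slow change: {slow:.2f}; after fast change: {fast:.2f}")
```

The mechanism is just that disruption scales with the rate of change while repair capacity is fixed, so fast enough growth outruns the repair term; a fuller model would also let the repair rate itself degrade as social tech falls, which is the “loss of grip is loss of the ability to keep a grip” point.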
Summarizing, if we add the assumption that technology perturbs social order, we need to also add the assumption that the intelligent steering effects of social technology are non-negligible, to account for history. With technology feeding back into social tech, we don’t expect stable high-tech configurations any more than we expect naturally inverted pendulums, but sufficiently powerful social tech might be able to work it. Interestingly, this model supports the Strong Antisingularity Hypothesis: accelerating technological change accelerates social decay.
That’s it for now. Half of it is probably wrong, but the point of this is to lay out our current thoughts and convince some of you that this is an interesting and important point of theory to develop. A theory of social technology would have more than novelty value, I suspect; if developed well enough and used by the powerful, it could put us on a much better path.
I’ll reiterate that the intelligence of civilization is embedded in the domain it is trying to optimize. This means it is fundamentally about reflection. A theory and practice of reflective civilizational science is a hard problem, but the payoff is potentially very high as increasing meta-level social tech begets even better and more stable social tech, assuming the destabilizing effects of technology can be managed.
This piece by a fellow writer was reposted from our previous blog.