Politics Is Upstream of AI

The dominance of politics over science has serious implications for future technological projects, such as artificial intelligence.

To tackle the nexus of AI and politics, let’s first review the model we have developed so far. In my article, Politics is Upstream of Science, I explored several examples of politics controlling science:

  • Lysenkoism. Soviet agronomists, led by Trofim Lysenko, rejected Mendelian genetics in favor of a "Marxist" biology for roughly 30 years with the blessing of the party. Famines followed across the Soviet Union, China, and other communist countries.

  • Deutsche Physik. In Nazi Germany, a clique of scientists rejected Einstein's relativity and the new quantum mechanics as "Jewish Physics," and persecuted Werner Heisenberg for defending them.

  • World Ice Theory (Welteislehre). A theory holding that ice is the basic building block of the cosmos, popularized in Nazi Germany as a contrarian alternative to "Jewish Physics."

  • Project Camelot. A US military social science project with a $6 million budget, aimed at counterinsurgency in Latin America during the Cold War.

  • POLITICA. A successor to Project Camelot, in which a computer program forecast political developments in Latin America from social science data. It was used in the planning of Pinochet's coup in Chile.

In the big three 20th-century regimes, politics and science were heavily entangled. In the cases of Lysenkoism, Deutsche Physik, and World Ice Theory, the results were deeply insane and held back the scientific communities of the states involved. In the cases of Project Camelot and POLITICA, the results were more successful, but they too demonstrate how the state drives science.

Next, I took a closer look at what it means to say that politics is upstream of science. I concluded that politicians hold executive authority over scientists, and that politics exerts an intellectual influence on science far more than science influences politics.

The entanglement of politics and science does not bode well for managing AI risk, or risks from other future technologies. A common question in AI risk research is whether AI will be "friendly" to humans, i.e. whether it is likely to behave consistently with human values.

If advanced AI is built, politics will make it less friendly to humans. To have friendly AI, you need friendly institutions. Human-friendly AI is not just a computer science problem; it's a political problem.

State AI

Technology is developed by humans inside institutions governed by states. To understand the influence of politics on AI, it is necessary to imagine the relationship between states and AI projects. Typical discussions of AI focus on the relationship between the AI and the researchers, but I believe we should be examining the entire stack that creates the AI—which includes the state.

It’s likely that AI projects will be funded by states, and even if they are not, the state will be unable to stay away if the projects show any signs of progress. States cannot afford to leave potentially powerful technology on the table, especially when there are security implications. They have a mandate to monitor any dangerous technologies, and they want to stay at the cutting edge of technology themselves.

Try thinking like any of the major superpower states, and you will get a better sense of what that state’s incentives are. If other states are developing advanced technologies, then states have to participate in an arms race. An arms race is a pessimistic subject, but my analysis suggests that states have already formed opinions about the game theory of future technologies. That ship has sailed.
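
To make the arms-race logic concrete, here is a minimal sketch with invented payoff numbers (a textbook prisoner's-dilemma structure, not data about any real state). Under these assumed payoffs, "develop" is each state's dominant strategy, even though mutual restraint would leave both better off:

```python
# Toy arms-race model with invented payoffs (illustrative only).
# Each state chooses to "develop" or "abstain" from a powerful technology.
payoffs = {  # (state_a_choice, state_b_choice): (payoff_a, payoff_b)
    ("abstain", "abstain"): (3, 3),   # mutual restraint: safest joint outcome
    ("abstain", "develop"): (0, 4),   # fall behind a rival
    ("develop", "abstain"): (4, 0),   # gain an edge
    ("develop", "develop"): (1, 1),   # costly, more dangerous world
}

for b_choice in ("abstain", "develop"):
    best = max(("abstain", "develop"),
               key=lambda a_choice: payoffs[(a_choice, b_choice)][0])
    print(f"If the rival chooses {b_choice!r}, state A's best response is {best!r}")
# Prints "develop" in both cases: under these payoffs, restraint never wins.
```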

The United States government is looking into AI research. The US Air Force Center for Strategy and Technology (CSAT) is conducting a series of studies called Blue Horizons to identify technological security threats. It is clear that they are watching developments in transhumanism and artificial intelligence.

A slide from the Blue Horizons 2012 briefing mentions AI:

Blue Horizons slide

Blue Horizons is likely only one of many government programs looking into AI. The defense and intelligence agencies of other states are undoubtedly doing the same thing.

How states view AI may not be aligned with how AI researchers view AI. What AI researchers say, and what the bureaucrats hear, may be two different things. Such misalignment would be significant given that state actors have authority over AI researchers. If states are studying AI research, then AI researchers need to study states and account for their potential influence on AI safety.

In theory, it would be in the interest of states to ensure that AI is built in a friendly way, but acting on that interest would require state actors to be rational, unified agents exercising good judgment. It would require that they understand the risks and have the incentive to care about those risks over the long term.

To make matters more difficult, states are not always unified and may have competing entities, bureaucracies, and proxies. The state is not a single actor, but a combination of multiple actors. Different parts of the state may have different views of AI.

When science and technology are in thrall to states, and states are consumed by internal or external factional conflicts, this upstream political influence increases the risk of a project going wrong. Add outright bad government, and the risk grows further.

It is likely that any significant AI research project will either be funded by the state or funded by institutions with ties to the state. If an AI project were not involved with the state and began to show promise, the state would involve itself with it, which would be easy given the state's money and power. And even if an AI project were somehow allowed to remain nominally separate from the state, it would still be immersed in a political environment of journalism and values shaped by the state.

All superintelligent AI will be state AI, one way or another.

Can Math Save AI?

In the AI safety world, there are various ideas about how math might ensure that an artificial intelligence is friendly to humans: schemes for aggregating or extrapolating from human values. In theory, such a solution might get around the problem of institutions and states locking their own values into AI.
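
As a purely illustrative sketch of why the math alone settles nothing, consider a toy value-aggregation function (the groups, issues, and weights below are all invented). The arithmetic is a trivial weighted average; the contested part is who appears in the preference table and who sets the weights:

```python
# Toy "value aggregation" sketch with invented groups and weights.
from typing import Dict

def aggregate_values(preferences: Dict[str, Dict[str, float]],
                     weights: Dict[str, float]) -> Dict[str, float]:
    """Weighted average of per-group preference scores, per issue."""
    issues = {issue for prefs in preferences.values() for issue in prefs}
    total = sum(weights.get(group, 0.0) for group in preferences)
    return {
        issue: sum(weights.get(group, 0.0) * prefs.get(issue, 0.0)
                   for group, prefs in preferences.items()) / total
        for issue in issues
    }

prefs = {
    "group_a": {"privacy": 0.9, "surveillance": 0.1},
    "group_b": {"privacy": 0.2, "surveillance": 0.8},
}
# Same algorithm, two weightings, two different sets of "human values":
print(aggregate_values(prefs, {"group_a": 1.0, "group_b": 1.0}))
print(aggregate_values(prefs, {"group_a": 1.0, "group_b": 0.2}))
```

The algorithm is value-neutral; the choice of inputs and weights is not, and that choice is made inside institutions.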

We already see tech companies helping the state spread its values: Google's Jigsaw, for example, uses algorithms to target propaganda at international and domestic populations.

Even if AI that is friendly to human values is possible, there is no guarantee that the researchers would be allowed to build it. The state might have other priorities: political priorities. Comrade, you must change your math because it seems to be reaching oppressive conclusions. Remember, Comrade, you are building a Friendly AI of Social Justice. It must be friendly to the people.

Think this can’t happen? It’s already happening. Algorithmic outcomes are already drawing politically driven accusations of bias (more sober analysis here).

Mere mathematical correctness will not save an algorithm if it is deemed politically incorrect.

From the examples of Lysenkoism, Deutsche Physik, and the political bandwagon behavior of present-day US tech companies, there is a clear pattern of ideology attempting to insert itself into technical endeavors. Ideological commissars—state-backed or opportunistic—are a risk for any high-profile technology project. Any serious AI project needs to maintain its own values and ideological security, or else it will get hijacked and turned into something else.

Who programs the programmers?

AI researchers recognize the mathematical problem of keeping an AI’s goals stable, but they also face the political problem of keeping their own projects’ goals stable.

Even aside from heavy-handed ideological intervention in AI projects, there is another political problem: if politics has an intellectual influence on science, then politics has an intellectual influence on AI researchers. See my last article for the other arguments about the intellectual influence of politics on science.

You can’t keep bad politics out of AI development if it’s already on the inside.

Given that AI researchers would be trying to solve complex mathematical and philosophical problems about human values, their intellectual backgrounds and moral education would be very important. AI researchers might use human values as inputs or training data for their algorithms.

We should carefully scrutinize the values and politics that AI researchers might bring into their work. As any programmer will tell you: garbage in, garbage out.
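
To illustrate the point with a deliberately crude sketch (a keyword-counting toy, not any real value-learning method; all examples and labels below are invented): the same algorithm, trained on two different labelling ideologies, produces two different moralities.

```python
# Garbage in, garbage out: a toy "value classifier" whose morality is
# entirely determined by the labels its researchers chose.
from collections import Counter

def train(labelled_examples):
    """Count which words co-occur with each label."""
    counts = {"acceptable": Counter(), "unacceptable": Counter()}
    for text, label in labelled_examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    score = sum(model["acceptable"][w] - model["unacceptable"][w]
                for w in text.lower().split())
    return "acceptable" if score >= 0 else "unacceptable"

# Two research teams, two labelling ideologies, identical algorithm.
team_a = train([("protest is patriotic", "acceptable"),
                ("protest is disorder", "unacceptable")])
team_b = train([("protest is patriotic", "unacceptable"),
                ("protest is disorder", "acceptable")])

print(classify(team_a, "a patriotic protest"))   # acceptable
print(classify(team_b, "a patriotic protest"))   # unacceptable
```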

If AI researchers decide the inputs to the AI, who decides the inputs to the AI researchers? Who programs the programmers?

If Soviet AI researchers went home from work and opened up Pravda, then Pravda would have an ideological influence on Soviet AI development. If American AI researchers go home from work and open up the New York Times, then the NYT would have an ideological influence on American AI development. Ditto for Russia Today and People’s Daily.

Lenin holding Pravda

Politics is upstream of all the information sources that technically-minded people ingest, while innocently believing themselves to be “moderates” or “independent thinkers.”

If AI researchers are trying to design algorithms that encode human values, then it seems they would need to be really, really good at moral and political philosophy to get it right. Instead, they are trapped inside the present-day Overton Window—a filter bubble of high-prestige sources at the mercy of the current political climate. The modern independent thinker is a Philistine who discards all the historical data on human values, which is exactly the opposite of what any potentially high-impact project should be doing.

Thought Experiment: Soviet AI

To understand the gravity of political influence on AI, imagine if the Soviet Union had created Sovetskiy Iskusstvennyy Intellekt. Imagine if Nazi Germany had created Nazi Künstliche Intelligenz. Imagine if Maoist China had created 人民人工智能. Would AI coming out of those political environments be remotely friendly? What math could save it?

I’m asking these questions to provoke you to think outside the media bubble of your current society and recognize the gravity of political influence on futuristic technologies. It’s easy to imagine how Soviet AI or Nazi AI would be unfriendly to humans. In this hypothetical scenario, these states, of course, would claim that their AIs were friendly.

The Politics of AI

My arguments about AI and politics also apply to other future technologies and weaponry, including dangers that we are not yet aware of. Other advanced technology projects, such as biological engineering, will be subject to political ideology and the attention of states. Politics finds its way into everything.

The general problem is that political conflicts drive political behavior that is deeply unfriendly to human welfare: weapons development and ideological warfare.

What can be done to deescalate these conflicts? The first step is to look at the history and try to put together theories about what went wrong.

In historical context, it was the French Revolution and the Napoleonic Wars that gave rise to modern total warfare and nationalism. They opened the door to the world wars, communism, and the Manhattan Project in the 20th century. The Cuban Missile Crisis was one of the greatest existential risks that humanity has faced.

Napoleon returns from Elba

Technological progress has contributed to ideologically motivated total warfare, but a purely technological analysis fails to capture how human agency and politics have stimulated the development of dangerous technologies. Technology doesn’t develop or use itself (yet), and big red buttons don’t push themselves (yet); human agents and institutions do.

Building friendly AI is a political problem and an institutional problem, not just an algorithmic problem, which means it must be solved at the political and institutional level.

Conclusions

  • AI researchers are intellectually influenced by their present political climate
  • AI researchers will be subject to the whims of states, political actors, and ideological commissars
  • Present-day values are narrow, politically influenced, and unsuited for any serious project about human values
  • Friendly AI and other technology can only be developed by friendly institutions

Recommendations

  • Study the history of state influence on technology, particularly the case studies of Lysenkoism, Deutsche Physik, Project Camelot, and the Manhattan Project
  • Study human values and morality from a historical perspective and take full advantage of all the historical data
  • Identify political ideologies that could lead AI or other advanced technologies to be used in harmful ways
  • Practice ideological hygiene in AI research projects
  • Study the political history behind dangerous technological development
