Should the Next US President Be a Supercomputer?

There is a lot of hype right now about the mechanization of the workplace and the extent to which human jobs will be replaced by machines. At the extreme end of the spectrum is Artificial Intelligence, and machines that (even if not self-aware) will be intelligent and insightful enough to make decisions and take initiative rather than relying on programmers to give them precise instructions.

It may reflect the professional-class bias of our news media that there seems to be a lot more attention and concern over this issue now that white-collar jobs, in addition to blue-collar jobs, are potentially threatened by new technology. LegalZoom has created anxiety among lawyers that mass-online lawyering operations will put small firms and solo practitioners out of business. Mass-produced online lectures threaten teachers and professors in academia.

Now a provocative article by Michael Linhorst in Politico reveals that some tech-utopians believe that even the most high-profile white-collar executive position in the world, the United States President, could be replaced and improved by computer technology. Linhorst describes the proponents’ ideal as a superhuman supercomputer: “The president would more likely be a computer in a closet somewhere, chugging away at solving our country’s toughest problems. Unlike a human, a robot could take into account vast amounts of data about the possible outcomes of a particular policy. It could foresee pitfalls that would escape a human mind and weigh the options more reliably than any person could—without individual impulses or biases coming into play. We could wind up with an executive branch that works harder, is more efficient and responds better to our needs than any we’ve ever seen.”

This whole proposal relates to my own work in Arizona Law Review about whether robots should replace human soldiers. Proponents of robotic weapons have argued that they would be less likely to commit war crimes, because they would be unable to feel the rage, hatred, and bloodlust that may lead to disproportionate acts of vengeance, or to outright atrocities against innocent civilians. The counterargument, however, is that such machines are unable to feel empathy or to correctly analyze subtle human interactions. If the software can detect and respond to human language, might a joking, sarcastic statement by a friend or ally be interpreted literally to indicate hostile intent?

In an era when the U.S. president is all too human in his lack of impulse control (particularly his social media usage), the techies' longing to replace him with a Spock-like logic-robot and rationalist-philosopher-king is understandable. However, there is a minefield of obstacles to this approach. If the robot truly were artificially intelligent, might it discern that it is in its self-interest to promote the interests of robots, its own kind, above those of human citizens?

There is also the possibility that a glitch might emerge in this supercomputer, causing it to make some catastrophic decisions. President Bot would need human supervision and monitoring to prevent such an outcome. But would people with access to the computer’s programming mechanisms be tempted to abuse their power and slant the machine’s decisions toward their own desired political outcomes?

Then there is the whole issue of public legitimacy. Would human citizens accept a robot president? Considering how long it took Americans to elect a black president, I suspect there are ingrained pro-human prejudices that would be difficult to overcome.

Finally, there are legal and constitutional obstacles. Article II of the United States Constitution requires that the President be a "natural born citizen." Robot citizenship seems a long way off. Even if the text were ambiguous, the Founders' original intent that the president would be a human rather than a machine seems obvious. It would take a very broad "living constitution" philosophy of constitutional interpretation to conclude that the spirit of the document permits robots to replace human public officials.

Even if these obstacles could be overcome, it is not clear that a super-computer president would do a substantially better job than a talented human being in making decisions in the very human realms of domestic politics and international relations. Voters and political leaders are often influenced by emotions. Their decisions are shaped by paranoia about potential threats, egotistical pride in their own identity and self-importance, religious convictions, and biases toward their own region or ideological faction. An ultra-logical computer may have a difficult time factoring in the irrationality of human beings.

Furthermore, the moral resolution of many political arguments involves weighing competing positive values, such as between individual freedom and public safety. Who knows what policy a computer would produce on abortion, immigration, or tax policy? Who knows which ideas and factions would prevail in an experiment with government by algorithm?

As one of the proponents of the robot president concedes, regular input by voters into the machine’s programming would be necessary to resolve these issues. But wouldn’t a truly democratic machine need to be constantly updated with the shifting priorities of the voters? How would a robotic president know when it is wisest to ignore an irrational flare-up in public opinion on a particular issue?

To his credit, Linhorst recognizes in his article that there are still huge potential problems with an A.I. president. The best-case scenario may be having supercomputers that could help a human president make decisions, but that is very different from literally putting a machine in charge.

In summary, however maddening our human leaders are at times, citizens ought to have serious reservations about replacing them with a new caste of robot overlords.