Seventy years ago today, a United States Air Force B-29 bomber called Bockscar dropped the plutonium bomb "Fat Man" on the Japanese city of Nagasaki. Along with "Little Boy," which targeted Hiroshima three days before, the Aug. 9 bomb helped bring World War II to a close. The United States had become the first country to deploy nuclear weapons in wartime. Today it is still the only nation to have done so. Over the last seven decades, legions of scholars have picked apart the implications of atomic weapons, the nature of their deployment during the closing stages of the war and the immediate aftermath of the August 1945 bombings.
Even with the end of the Cold War — the nuclear stalemate that characterized the postwar years — the scientific debates and dilemmas encountered by the United States and the other countries that developed nuclear programs remain relevant. Chief among them is the question of what information should be shared, and how, along with the ethical and moral implications of scientific research itself. As new technologies develop, the scientific community will, as before, need to answer these questions as they apply to the next steps in the evolution of warfare. Society will also have to weigh the ethics of developing technologies with the potential to take human life.
Today, one of the main barriers to collaboration is the division between nation-states. Nationalism is, in fact, a relatively modern notion, one that was still in its infancy in the 18th and early 19th centuries. Before this period, the scientific community acted largely as an international fraternity in which innovators shared their findings and research openly. Discoveries, especially in scientific theory, could build on one another freely because national borders mattered less than the pursuit of greater knowledge. International collaboration continues among academics as well as in industry, but complications arise when theoretical science manifests itself in concrete applications and technology. This type of innovation, key to profits or national security, has given rise to protective measures in the form of restricted government programs and commercial patents.
As nationalism and the commercialization of science grew in importance, priorities shifted, especially in times of war. During World War II, the populations of both the Allied and Axis powers were mobilized by love of their countries and peoples. The scientific community was no exception. International collaboration had laid the groundwork for a nuclear weapon in physics, chemistry and engineering, but the environment of free collaboration fractured during the war. Germany, the Soviet Union and the United States each sought to harness nuclear power first, independent of the rest of the world.
Dilemmas of Research
The Manhattan Project was beset by ethical as well as purely scientific questions about the nature of closed research and the weapon it had produced. Leo Szilard and Niels Bohr were two of the loudest voices in the scientific community associated with the project, and both called for the international control of nuclear arms. Bohr remained a scientific idealist to the end. He recognized the potential of the new technology to change the nature of warfare and objected to the lack of open communication between the United States and other allied nations, specifically the Soviet Union. Szilard, a pioneer of the nuclear chain reaction, questioned the need to use nuclear weapons at all. Both men raised concerns about the potential for international competition for nuclear superiority once the Axis powers were defeated. Their fears proved prescient, and the geopolitics of the Cold War gave rise to the iconic nuclear arms race between the United States and the Soviet Union.
In the United States and elsewhere, patent applications covering academic discoveries are on the rise. In the current economic climate, pressure for protection and regulation is growing: startups that package small or seemingly academic advances to generate revenue emerge daily. The line between basic science and commercial technology has blurred, and as the commercial potential of science grows, securing the rights to a discovery or new technology has become essential.
There is, however, an argument that patents slow innovation. In response, a small movement of companies has begun sharing patents openly to spur innovation — an approach exemplified by the electric vehicle sector. But the debate over whether to patent or publish discoveries is best left to the academic and industrial sectors. Military and national security research programs, by contrast, are far more secretive by nature than their civilian counterparts.
The world may never face another scientific and ethical dilemma on par with the creation and use of nuclear weapons. Technology, though, is constantly evolving, and a number of developments will inevitably provoke moral debate. Most scientific discoveries can be adapted for multiple uses, benign or otherwise, and the existence of a potentially harmful application does not mean the associated concepts, techniques or technologies cannot or should not be pursued: research into nuclear weapons also led to nuclear power generation. Virus modification and gene editing, for example, have similar dual-use potential, but there is a reluctance to share the research. A few years ago, publication of studies on the avian flu virus was delayed because many feared non-state actors would use the information to develop biological weapons. And while a genetically engineered "super soldier" is far from becoming a reality, gene editing technology has already sparked a moral debate over what research is and is not appropriate. Ethical oversight is not meant to hamper scientific endeavors, but for dual-use technologies in particular, bans or moratoriums can hinder technological development.
From a military perspective, developments in artificial intelligence and autonomous warfare have the potential to substantially alter military strategy. The pursuit of artificial intelligence once again raises the question of where to draw ethical lines on the battlefield. The topic is by no means black and white. Autonomous weapons could make the decision to open hostilities easier for an aggressor nation, but the deployment of unmanned systems could, in theory, minimize casualties once a war begins. Nuclear weapons raised the specter of total destruction, which acted as a natural deterrent to their use. If wars could be fought at little human cost, would the moral inhibitions against waging them be reduced as a result?
Today, unmanned vehicle technology is the subject of intense debate. Just as with nuclear weapons, the issue of oversight is at the forefront. Nuclear proliferation eventually became subject to international law, and efforts are underway to pursue similar agreements on artificial intelligence and autonomous warfare. Drones presently have human operators; although there is a degree of automation, a person ultimately tells the machine what to do. Debates over the ethics of truly autonomous weapons deciding whether or not to kill a target are already raging — the critical question is how much human involvement is necessary to take a life.
As with nuclear weapons, the moral implications are manifold. There is an argument that the distance between the human operator and the target makes it easier to take a human life. The ability to wage war without risking casualties is also attractive to political and military decision-makers, which could influence the choice to act belligerently. Yet studies show that drone pilots experience post-traumatic stress disorder at the same rate as combat pilots, and remote operators habitually report telepresence, the feeling of being in another location. Killing, irrespective of distance, takes a psychological toll. In theory, artificial intelligence could act more rationally, or even more humanely, in times of war, and with greater consistency. Human combatants must learn rules of engagement and can make mistakes in the heat of the moment; machines do not suffer battle fatigue and act within their programmed parameters. The loss of a machine, moreover, is not mourned the way the loss of a human combatant is. For these reasons, countries and militaries are drawn to autonomous technologies that could give them more options to achieve their goals.
The United Nations has held conferences on lethal autonomous weapon systems, though no binding international agreement has been reached. One major point of debate is what constitutes meaningful human control, a critical factor should war crimes be committed. There is not yet a clear definition of where that line is drawn or how those parameters are set. Prominent developers of the technology are once again voicing concerns. While physicist Stephen Hawking and businessman Elon Musk may be the public face of the movement, many of those engaged in practical research have signed on as well. Developers of artificial intelligence, including scientists and engineers at the London-based firm DeepMind (now owned by Google) and other researchers in deep learning, have signed letters calling for a ban on autonomous weapons. In doing so, they echo the sentiments of their scientific predecessors, Bohr and Szilard.
As frightening as some of the possible endpoints of these emerging technologies may be, research itself cannot be halted, just as research into nuclear power could not be. Staying at the forefront of these technological advances is paramount for nations in competition with one another. The push and pull that tore at the scientific community during World War II still exists. Knowledge for its own sake remains the romanticized scientific ideal for many, but real-world applications of scientific principles inevitably follow. Technological developments, especially those that serve a military purpose, still require scientists to navigate constantly shifting ethical and moral terrain.