In Laudato Si’ and many of his other public remarks, Pope Francis has expressed concern about the rapid advances in technology and their implications for society. Now some noteworthy secular voices, including Tesla co-founder Elon Musk, theoretical physicist Stephen Hawking, and entrepreneur Bill Gates, are echoing the Holy Father’s warnings with regard to the rapidly growing use of Artificial Intelligence.
John McCarthy, later a Stanford computer science professor, coined the term “Artificial Intelligence” for the 1956 workshop held at Dartmouth College. He defined AI as the “science and engineering of making intelligent machines, especially intelligent computer programs.” Over time, however, engineers have moved beyond simply programming computers to solve specific problems. They realized, as Bernard Marr of Forbes magazine phrased it, that “it would be far more efficient to code them [computers] to think like human beings, and then plug them into the internet to give them access to all of the information in the world.”
The increasingly human-like abilities of AI are at once exciting and disturbing. The rapidity with which AI is being incorporated into the home and workplace means that nearly everyone in the world utilizes, on a daily basis, the conveniences that AI makes possible. However, the swift development of the technology has occurred essentially without any corresponding ethical and moral guidelines to govern it. This is an omission that the technology industry, pro-life groups, and the Vatican are feverishly trying to rectify. They believe the preservation of human society depends upon their success.
From such concerns, a new field of study called robot ethics or “roboethics” is emerging.
The purpose of roboethics is to ensure that machines with artificial intelligence (AI) behave in ways that prioritize human safety above their assigned tasks and their own safety, and that are also in accordance with accepted precepts of human morality (whatis.techtarget.com).
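The priority ordering embedded in this definition, where human safety outranks the assigned task, which in turn outranks the machine’s self-preservation, can be sketched in a few lines of code. This is purely an illustration of the concept, not any actual robotics system; every name in it is a hypothetical assumption.

```python
# Illustrative sketch only: the roboethics priority ordering described
# above, expressed as a lexicographic rule hierarchy. All names and
# scores here are hypothetical.

PRIORITIES = ["human_safety", "assigned_task", "self_preservation"]

def choose_action(candidate_actions):
    """Pick the action that best satisfies the highest-ranked concern.

    candidate_actions: list of dicts mapping each concern to a score
    in [0, 1], where higher means the concern is better satisfied.
    """
    # Sort lexicographically: human safety dominates the assigned task,
    # which in turn dominates the robot's own preservation.
    return max(
        candidate_actions,
        key=lambda action: tuple(action[p] for p in PRIORITIES),
    )

# A hypothetical choice: finishing the task efficiently vs. stopping
# to avoid endangering a person. The safer action wins even though it
# scores lower on the task itself.
actions = [
    {"human_safety": 0.2, "assigned_task": 1.0, "self_preservation": 1.0},
    {"human_safety": 0.9, "assigned_task": 0.4, "self_preservation": 0.8},
]
best = choose_action(actions)
```

Under this toy scheme, `best` is the second action: no amount of task performance can outweigh a deficit in human safety.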
John Markoff of the New York Times reported on three programs that were launched in 2016 to study roboethics.
Under the Obama Administration, the National Science and Technology Council (NSTC) published a report titled Preparing for the Future of Artificial Intelligence, which discussed the impact and possible outcomes of AI. Carnegie Mellon University created the K&L Gates Endowment for Ethics and Computational Technologies research center to study the ethics of Artificial Intelligence. Technology giants Microsoft, Amazon, Facebook, Google, and IBM formed the Partnership on AI to “study and formulate best practices, advance public understanding of AI, and serve as an open platform for discussion and engagement about AI and its influences on people and society.”
Effects of AI on the Workforce
In December 2016, the Pontifical Academy of Sciences held its own colloquium, titled Power and Limits of Artificial Intelligence. According to Catholic News Agency, speakers included Stephen Hawking, Demis Hassabis, CEO of Google DeepMind, and Yann LeCun of Facebook. The predominant topic was how the increasing presence of AI in industry will eventually lead to the replacement of a majority of humans in the workforce.
As Patrick Doherty of catholicinsight.com explained, while the use of automation in industry is nothing new, what is new is that robots will replace not only so-called blue-collar jobs, but white-collar ones as well. A 2016 study by Citi and Oxford estimated that “47% of jobs in the US will be replaced by computer automation. Other countries will be hit harder, however, with a projected 87% of jobs being replaced in Ethiopia, 77% in China and 65% in Argentina.”
Obviously, this trend raises significant humanitarian concerns. It is feared that the financial benefits of automation will profit only a small group of shareholders and a few well-educated, highly skilled employees. The majority of workers, the middle class and, especially, the poor, will lose their jobs to the machines and will have insufficient education and job skills to find other employment.
Unless channeled for public benefit, AI will soon raise important concerns for the economy and the stability of society. We are living in a drastic transition period where millions of jobs are being lost to computerized devices, with a resulting increase in income disparity and knowledge gaps. With AI in the hands of companies, the revenues of intelligence may no longer be redistributed equitably. (Pontifical Academy of the Sciences, Final Statement of the Workshop)
The “Terminator Conundrum”
There is a second, more ominous use of AI that is garnering the attention of the Vatican, the United Nations, and pro-life groups around the world: the development of autonomous weapons by militaries in the United States and other countries.
Almost unnoticed outside defense circles, the Pentagon … is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race. (Rosenberg and Markoff, “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own,” New York Times, 2016).
Unlike drones, which rely on commands issued from a distance by a human soldier, autonomous weapons rely entirely on computer software to direct them. Such weapons have the advantage of being much faster and far more accurate than conventional weapons while reducing the need to expose soldiers to the dangers of the battlefield.
However, there is considerable anxiety that the lack of human control could result in the weapons going rogue. Pentagon officials were quick to dismiss this speculation as unfounded. They claim their autonomous weapons will always require “a man in the loop” when making life and death decisions.
Quoted by Rosenberg and Markoff, Deputy Defense Secretary Robert O. Work stated, “There’s so much fear out there about killer robots and Skynet (from the Terminator movies). … That’s not the way we envision it at all.”
A Virtually Inevitable Arms Race?
Nonetheless, international opposition to the development of autonomous weapons is growing. Interestingly, some of the strongest objections come from scientists involved in AI research. An open letter presented at the 2015 International Joint Conference on Artificial Intelligence (IJCAI) by the Future of Life Institute urged world governments to abandon the idea and thus avoid the “virtually inevitable” global arms race.
Unlike nuclear weapons, [autonomous weapons] require no costly or hard-to-obtain raw materials, so they will become [commonplace] for all significant military powers to mass-produce. It will only be a matter of time until they appear … in the hands of terrorists. … Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.
As Archbishop Silvano Tomasi, former Permanent Observer of the Holy See to the United Nations, said, “Technology, … has many beneficial uses and even the idea of a nation keeping its soldiers out of harm’s way is praiseworthy, but when nations are using … technology to target and kill human beings, they are obliged to weigh decisions in a way only a human being can.”
Roboethics does not only apply to how humanity uses robots. There is also what is called “machine ethics” which pertains “to the behavior of robots themselves, whether or not they are considered artificial moral agents and, ultimately, with robot rights.” (wtvox.com)
Up to now, the singular limitation of robots has been that they do not possess a human’s ability to make decisions based on an evaluation of right and wrong. However, this may not be true for much longer, as the British Standards Institution (BSI) recently issued guidelines to help designers create “ethically sound robots.” For example,
Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behavior. (BS-8611 Robots and Robotic Devices)
It may seem strange that it is necessary to clarify upon whom, creator or creation, the responsibility for the actions of a robot should be placed. But there are those who are willing to argue that, if a robot can be held responsible for an action, then it should be possible to ascribe a degree of personhood, and therefore personal rights, to it as well.
Thus, if they weren’t already, the distinctions separating man and machine would become increasingly blurred.
In October 2017, history was made at the Future Investment Initiative when Sophia, a robot created by Hanson Robotics, was granted official citizenship by the host country, Saudi Arabia.
The irony of this was not lost on the international community. Critics pointed out how a robot was given a privilege denied to most foreign workers in the Saudi population. Women’s groups objected to the fact that Sophia enjoyed many privileges forbidden to her female human counterparts, such as appearing in public without a hijab and without being accompanied by a male guardian.
While this was likely nothing more than a publicity stunt, it gave a glimpse of what lies ahead for mankind if the progress of Artificial Intelligence goes unchallenged. Yet, as Pope Francis writes in Laudato Si’, “We have the freedom needed to limit and direct technology; we can put it at the service of another type of progress, one which is healthier, more human, more social, more integral.” (112)
Nobody is suggesting a return to the Stone Age, but we do need to slow down and look at reality in a different way, to appropriate the positive and sustainable progress which has been made, but also to recover the values and the great goals swept away by our unrestrained delusions of grandeur. (114)