Editorial: Ethics in the Age of Artificial Intelligence


Hardly a day goes by in the current news cycle without some mention of artificial intelligence (AI), whether it’s a story on facial recognition software, self-driving cars, or plagiarism in academia. Long the dream of technological optimists and the nightmare scenario of some science fiction writers, AI appears to have arrived as a powerful tool with myriad applications in the present and near future.

In fact, we’ve been living with AI under different names for years. Google’s famous search algorithms? Driven by AI. Ever notice how Facebook and other platforms seem to know exactly which items you’ve been shopping for online? AI again. Even in 1997, when Yahoo was the search engine of choice, there were signs of AI’s evolution when IBM’s Deep Blue supercomputer famously defeated chess champion Garry Kasparov.

As with most technological advances, the most crucial question isn’t whether something can be done, but whether it should be done. If the leap forward has already happened or is happening in real time, then we need to explore how to use AI ethically, be ready for what happens if it is used for ill, and ask ourselves as a society, Is it too late to stop AI from going in the wrong direction? 

The Human Variable 

Fortunately, there are already many influential people, not the least of whom is Pope Francis, sounding the alarm on the possible harmful ramifications of AI. Though the pope has previously admitted to not knowing how to work a computer, his knowledge of ethics and human nature runs deep. So when he and other faith and thought leaders temper their optimism about technological developments with some concern, we should pay heed.

“I am convinced that the development of artificial intelligence and machine learning has the potential to contribute in a positive way to the future of humanity,” Pope Francis said at a Vatican gathering of scientists and technology experts in March. “At the same time, I am certain that this potential will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly.” 

The word that grabs my attention from what the pope said is “potential.” AI, like many innovations brought about by humankind, is not innately good or bad, but the way it is used by people and to what ends determines its relative benefit or detriment to humanity. The phrase nuclear technology, for example, might make you think of atomic weapons or cataclysmic power plant failures such as Chernobyl or Fukushima. But it’s also tied to radiation treatments of cancer and efforts to bring potable water to impoverished regions of the world through desalination. 

Algor-Ethics

The pope is not alone in his assessment that the use of AI must include ethical considerations. Cardinal José Tolentino de Mendonça, prefect of the Dicastery for Culture and Education, spoke at a July conference in Milan titled “The Future of Catholic Universities in the AI Age.” He pointed out that “mere training in the correct use of new technologies will not prove sufficient. It is not enough to simply trust in the moral sense of researchers and developers of devices and algorithms.” The cardinal seems to be suggesting that ethical guardrails need to be built into AI’s capabilities, an idea that has come to be called algor-ethics.

Although the cardinal was speaking about AI’s impact on places of higher learning, the moral implications of AI systems must be considered across all its applications. If the past is any indicator of the future—as it sadly tends to be—we’d do well to balance our enthusiasm for the promise of AI with a healthy dose of skepticism and caution. 

