(OSV News) -- While smoke was still rising from the ruins and rescuers were extracting 175 bodies from the rubble of the Shajarah-Tayyebeh elementary school in Minab, Iran, Feb. 28, the questions started about how this could have happened in the opening hours of the U.S.-Israel war with Iran.
Preliminary reports suggest the school was misidentified as a military site due to outdated human intelligence when it was struck by a U.S. Tomahawk cruise missile. The incident has also drawn attention to the complex role of generative artificial intelligence in mass-processing intelligence data into thousands of potential military targets, identified and ranked for human reviewers to approve.
In what some commentators are calling "the first AI war" -- the current war between the United States, Israel and Iran -- there are a host of both old and new ethical considerations. But many revolve around the same concern: that AI should not be left to its own devices.
Yet on Feb. 27, the eve of Operation Epic Fury, President Donald Trump directed government agencies to no longer work with tech giant Anthropic, amid a sharp disagreement over acceptable uses of its technology by the Department of Defense, also known as the Department of War. In response, Anthropic -- whose tools analyze imagery and intelligence data -- filed suit against the Pentagon March 9.
Anthropic CEO Dario Amodei -- citing mass domestic surveillance and fully autonomous weapons -- stated, "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
On March 13, a group of 14 Catholic moral theologians filed a friend-of-the-court brief in support of Anthropic. They noted that Anthropic's objection to the use of AI for mass surveillance of Americans is "aligned" with Catholic teaching on privacy and subsidiarity, and argued from the Church's teaching on justice in war that "lethal autonomous weapons problematically obscure human agency, dangerously shifting responsibility away from human decision-makers to machines."
Anthony Granado, associate general secretary at the U.S. Conference of Catholic Bishops, told OSV News that "Pope Leo XIV has described the implications of AI technology for humanity as a 'digital revolution.'"
"And the U.S. bishops have underscored what the Holy Father has said, reminding lawmakers that AI impacts a very wide range of issues which the Church considers important for human life and dignity," he said.
In emailed comments, Granado enumerated several aspects of the war that speak to the concerns Pope Leo has raised.
"AI is neither neutral in its implementation nor by its design. The decision to take life in warfare or to undertake war itself must be guided by the moral law and ethical considerations," he explained. "The bishops have expressed their deep concern about the development and use of autonomous weapons and any attempt to replace or undermine human decision-making, which, they urge, remains essential to mitigate what they refer to as 'the horrors of warfare and the undermining of fundamental human rights.'"
U.S. Navy Adm. Brad Cooper, the head of U.S. Central Command, said in a March 11 video the United States is using a variety of advanced AI tools to conduct strikes.
"These systems help us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react," Cooper said. "Humans will always make final decisions on what to shoot and what not to shoot, and when to shoot. But advanced AI tools can turn processes that used to take hours and sometimes even days into seconds."
But 120 Democratic lawmakers in a March 12 letter to Defense Secretary Pete Hegseth raised concern about reports that U.S. and Israeli strikes on Iran have hit "schools, hospitals, gymnasiums, public gathering spaces, and a UNESCO heritage site." They asked what role AI played in selecting targets, assessing intelligence and making legal determinations -- and at what point its outputs were subject to human review and verification -- including in the Feb. 28 strike on the Shajarah-Tayyebeh school.
Historically, the ethics of warfare have been guarded by broad safety rails and rules of engagement, strictly overseen by the Department of Defense and the federal administration, John Slattery, executive director of the Carl G. Grefenstette Center for Ethics in Science, Technology, and Law at Duquesne University in Pittsburgh, told OSV News.
"These ostensibly would also apply to any technological system -- even before generative AI, you still had technology-enhanced tracking systems. You still had an early form of artificial intelligence doing a lot of work; drone warfare, and tracking systems, and targeting, and things like this," he said. "All of these fell under the broader impulses of rules of engagement -- targeting innocent people -- and these were hotly debated well before the Trump administration."
Observers, however, have voiced concerns over the administration's stance concerning how warfare is conducted. Hegseth has referred to a military ethos of "maximum lethality, not tepid legality."
"Both Secretary Hegseth -- as well as President Trump -- have openly said that they don't want guardrails on AI. They don't want to be restricted by international law on international rules of wartime engagement," noted Slattery. "And that raises a lot of problematic questions -- because if you take some of these guardrails off, we already know that generative AI is highly biased, even with guardrails on ... If you start to take these off, there's no telling what types of things it could be recommending."
The Pentagon's pitched battle with Anthropic has some in the business world questioning the placement of a federal thumb on the scales of corporate independence.
"I wouldn't say that Catholic social teaching says the state can never dictate what companies can do; that's clearly wrong," said Anthony Cannizzaro, associate professor of international business and strategic management as well as vice dean at The Catholic University of America Busch School of Business.
"But the idea that you would remove any sort of moral checks from technology is problematic," he qualified. "So then the question becomes, from whom do those moral checks stem -- should it stem from the creator of the product, or should it stem from the government?"
The argument, Cannizzaro said, leans toward the Catholic principle of subsidiarity: That larger institutions in society, such as the state or federal government, should not overwhelm or interfere with smaller or local institutions, such as the family, local schools, the Church community -- or in this case, a corporation.
According to subsidiarity, "Anthropic has a prerogative to build its own values into that product -- and it's not really the government's role to come into lower levels of society, whether it be individuals or organizations, and dictate what those values would be," he explained.
Msgr. Stuart Swetland, a canon lawyer and former U.S. Navy officer, who is president of Donnelly College in Kansas City, Kansas, agreed.
"I think Anthropic and other corporations do have responsibilities -- as part of being a good corporate actor -- to want to see their products used only appropriately and ethically," he told OSV News. "And they have every right to place in their contract limitations on the use of their product. If a buyer -- in this case, the government -- doesn't want to live by those restrictions, they don't have to interact with the company, and they don't have to purchase their service or their product."
"There's limits to what can ethically be done in war," Msgr. Swetland said. "And the Church has been consistent ... that there has to be involved a human in decision-making in the application of just war criteria."
The Catechism of the Catholic Church states the basic conditions of just war criteria: the damage inflicted by the aggressor must be lasting, grave and certain; all other means of ending the aggression must have been shown to be impractical or ineffective; there must be serious prospects of success; and the use of arms must not produce evils graver than the evil to be eliminated.
"Dehumanizing war, even more than it already is, will make it way too easy to use deadly force," Msgr. Swetland said. "And the dangers of that leading to all kinds of disproportionate effects have to be closely guarded against."

