By Brett Wilkins

Microsoft — which recently adopted a set of six core principles for the development of ethical artificial intelligence (AI) — has come under fire for investing in an Israeli company that produces facial recognition technology used to monitor and control Palestinians living in the Israeli-occupied West Bank.

AI For Good?

Earlier this year, Microsoft President Brad Smith met with Pope Francis at the Vatican to discuss the ethical use of AI. According to news reports at the time, their conversation covered topics including “artificial intelligence at the service of the common good.” The Holy See then announced that its Academy for Life would partner with Microsoft to jointly sponsor a prize for the best doctoral dissertation concerning “artificial intelligence at the service of human life.”

A month after Smith’s meeting with the Pope, Microsoft Executive Vice President of AI and Research Harry Shum told an audience at the MIT Technology Review’s EmTech Digital Conference that the company was working on ways to add AI ethics reviews to its standard checklist of audits for new products. Microsoft said that it had “implemented its internal facial recognition principles and is continuing work to operationalize its broader AI principles.”

Additionally, Microsoft has several internal working groups dedicated to AI ethics. These include Fairness, Accountability, Transparency and Ethics in AI (FATE), a group of nine researchers “working on collaborative research projects that address the need for transparency, accountability, and fairness in AI,” and AI, Ethics, and Effects in Engineering and Research (AETHER), an advisory board that reports directly to the company’s C-suite.

“The expanding use of artificial intelligence raises important questions about the relationship between people and technology, and the impact of the new capabilities on individuals and communities,” Microsoft said. “AI augments and amplifies human capabilities, and its governance will require a human-centered approach.”

From all of this came a set of six Microsoft AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Previously, Microsoft Research Labs Director Eric Horvitz had announced that the company had “cut off significant sales” over AI ethics concerns, although he understandably did not give much detail about which potential customers were rejected. Microsoft’s AI ethics page lists these six principles and declares that “applied AI uses technology to empower solutions to humanitarian issues and create a more sustainable and accessible world.”

Palestinians queue at an Israeli military checkpoint at the apartheid wall in Bethlehem, in the illegally-occupied West Bank (Photo: Andrew E. Larsen/Flickr Creative Commons)

A Tool of Oppression

Earlier this week, Palestinian and Israeli news media reported that residents of Kober, a Palestinian village in the Israeli-occupied West Bank, had unearthed a camouflaged surveillance camera hidden by Israeli occupation forces in the local cemetery. The surveillance device was made by the Israeli company AnyVision Interactive Technologies, which specializes in facial recognition technology. Companies including US chipmaker Qualcomm, German engineering and technology company Bosch and California-based graphics processing unit designer Nvidia have all invested in AnyVision, according to Fast Company. In June, Microsoft’s M12 venture fund announced it was joining AnyVision’s $74 million Series A investment round, citing the Israeli company as a “tool for good.”

However, critics, including Ethics In Tech, counter that the company is instead a tool of oppression. In July, the Israeli newspaper Ha’aretz reported that AnyVision products and technology are used to monitor Palestinians entering Israel from the West Bank, which Israel has illegally occupied since 1967. AnyVision President Amir Kain, who headed the Israeli Defense Ministry’s security department from 2007 to 2015, played an instrumental role in perpetuating the occupation, and thus in the violation of Palestinian human rights.

The apartheid wall has divided Palestinian communities and devastated the economic life of the occupied territory. In addition, Palestinians wishing to travel between the occupied territories and Israel are forced to endure wasted days and rampant humiliation at Israel Defense Forces (IDF) checkpoints, which control access not only to Israel but often also to routes between Palestinian towns and villages. Scores of pregnant Palestinian women have given birth while waiting to pass through these checkpoints, and dozens of babies have died there after failing to receive timely medical care. IDF soldiers have sadistically referred to the long, sometimes dangerous checkpoint queues as “time off from life.”

According to Ha’aretz, IDF troops use AnyVision technology to quickly determine whether people attempting to enter Israel for work have the required permit. Advocates say this shortens the notoriously long queues at checkpoints. Ethics In Tech, however, is reminded of the partnership between US tech giant International Business Machines (IBM), acting through its European subsidiaries, and Adolf Hitler’s Third Reich: IBM punch card technology helped speed the mass extermination of Jews during the Holocaust.

AnyVision founder Eylon Etshtein has defended his company, insisting that it is sensitive to racial and gender bias and that it sells only to democracies. But what is that claim worth when AnyVision’s home country, which trumpets itself as the only democracy in a decidedly authoritarian region, is perpetrating some of the world’s worst human rights violations against Palestinian men, women and children, and doing so with AnyVision technology?

Rights Groups Alarmed

Human rights, civil liberties, privacy and anti-occupation groups have implored Microsoft to avoid complicity in grave Israeli human rights violations. Shankar Narayan, director of the Technology and Liberty Project at the American Civil Liberties Union (ACLU), told Forbes that Microsoft has not followed through on its earlier professed interest in curbing the proliferation of facial recognition technology. “This particular investment is not a big surprise to me,” said Narayan. “There’s a demonstrable gap between action and rhetoric in the case of most big tech companies and Microsoft in particular.”

“If Microsoft were serious about technology with ethics, then perhaps these kinds of transactions would receive more scrutiny,” said Narayan, who added that he was particularly concerned about AnyVision’s affect analysis technology, which purportedly reads emotions to determine whether someone is a “threat.”

Amos Toh, senior researcher covering artificial intelligence and human rights at Human Rights Watch, said he believes “it’s incumbent on Microsoft to really look at what that means for the human rights risk associated with the investment in a company that’s providing this technology to an occupying power.”

“It’s not just privacy risk but a privacy risk associated with a minority group that has suffered repression and persecution for a long time,” Toh told Forbes. “There are special considerations of discrimination there.”

The Palestinian Boycott, Divestment and Sanctions (BDS) National Committee is calling for a boycott of AnyVision, which it calls “complicit in the Israeli occupation and repression of Palestinians.”

“AnyVision [also] plays a direct role in Israel’s… illegal wall and military checkpoints,” the committee, which is at the head of the global BDS movement for Palestinian freedom, justice and equality, said in a statement. “AnyVision also maintains cameras for the Israeli military deep inside the West Bank to spy on Palestinians and enable the Israeli military’s illegal targeting of civilians.”

Jewish Voice for Peace (JVP) has published a petition calling on Microsoft CEO Satya Nadella to “drop AnyVision.”

“There is nothing ethical about creating an Orwellian surveillance system to watch over an imprisoned people,” the US-based activist group asserted.

Secret spying on and oppressive surveillance of a long-occupied people ought to be enough to give any ethics-minded investor serious pause. But that’s not all: there is also AnyVision’s assertion that it sells only to democracies, a claim belied by the presence of its cameras in Moscow’s Domodedovo Airport and by its search for a sales executive in Hong Kong. Pro-democracy demonstrators in Hong Kong, who have suffered increasingly violent police repression under pressure from China’s authoritarian government, have used laser pointers to try to thwart facial recognition surveillance devices. Global outrage over the crackdown on Hong Kong protesters has prompted AnyVision to reconsider doing business with the city’s authorities, according to Fast Company.

Potential for Abuse

Even in countries with relatively high regard for civil liberties, privacy advocates and rights groups have raised concerns about the rapid proliferation of facial recognition technology and the growing potential for its misuse by governments. The city of Nice, France, for example, monitors its residents with AnyVision technology, and the company’s cameras surveilled entry to an undisclosed London stadium last summer.

“We recognize such powerful technology has the potential to be misused if placed in the wrong hands, and that we have an inherent responsibility to ensure our technology and products are used properly,” AnyVision said in a statement published by Forbes.

The ACLU recently published a report, “The Dawn of Robot Surveillance: AI, Video Analytics and Privacy,” in which it warned that installing tens of millions of security cameras equipped with AI technology will lead to abuses including over-enforcement and discrimination. The civil liberties group recommended a ban on mass government surveillance and the implementation of “policies that will prevent private-sector deployments from becoming equally oppressive.”

Some people are fighting back. In May, San Francisco became the first US city to pass a law banning government agencies from using facial recognition technology. On Tuesday, California Gov. Gavin Newsom (D) signed into law a measure prohibiting state and local law enforcement from using facial recognition software in officer body cameras for the next three years.

“The public wanted their officers and deputies to use body cameras to provide accountability and transparency for the community,” Assemblyman Phil Ting (D-San Francisco) said in a statement after Newsom signed the bill. “[But] the addition of facial recognition technology essentially turns them into 24-hour surveillance tools, giving law enforcement the ability to track our every move.”

“We cannot become a police state,” Ting added.
