The European Union has published new guidelines explaining how governments and businesses can develop “trustworthy artificial intelligence.”

The EU’s High-Level Expert Group on AI has laid out three core rules for trustworthy AI — artificial intelligence technology should be legal, ethical and robust. To this end, the group presented a set of seven key requirements that AI systems should meet in order to be considered trustworthy:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that even unintentional harm can be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data and ensuring legitimized access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. To foster diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role here, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

The new guidelines are part of the European Commission’s AI strategy, launched last April to deal with “technological, ethical, legal and socio-economic aspects to boost EU’s research and industrial capacity and to put AI at the service of European citizens and economy.”

“Today, we are taking an important step towards ethical and secure AI in the EU,” Commissioner for Digital Economy and Society Mariya Gabriel said in a statement. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”

Implementation of the new guidelines will begin with a “piloting process,” billed as a means of gathering feedback on how the list can be improved. This phase will begin this summer and will be followed by a review period lasting through early 2020. And while the guidelines will not be legally binding, it is hoped that they will play a significant role in shaping future EU legislation relating to artificial intelligence.

“The EU has the potential and responsibility to be in the forefront of this work,” Fanny Hidvégi, a policy manager at digital rights group Access Now who co-authored the guidelines, told The Verge. “We do think that the European Union should not stop at ethics guidelines… It can only come on top of legal compliance.”

