Europe sets Guidelines for AI Use & Development

A European Commission working group has just published its guidelines on the use of artificial intelligence (AI). What are these ethical considerations and what do they mean for the development of AI in Europe?

It's been a long time in the making but the European Commission's High Level Expert Group has finally completed its work on ethics guidelines for the development and use of AI. The Commission set out in April of last year to develop a policy around AI.

The guidelines put forward both technical and non-technical measures for using AI responsibly, with provisions for transparency and accountability. The trading bloc's legislative body is laying down a foundation upon which startups in the region can build as they develop the technology.

What are the Guidelines?

The Commission has set a three-phase approach to its guidelines and their implementation. In the first instance, it sets out the specific requirements for responsible AI. In the second phase, it plans on launching a pilot programme which will invite feedback from stakeholders. The final phase involves establishing international consensus on an ethical approach to AI.

The guidelines themselves incorporate what the High Level Expert Group terms the 'seven essentials' for bringing about trustworthy use of AI:

♦Human agency and oversight: The Commission sets out to ensure that human autonomy is not degraded or misguided through the use of AI.

♦Robustness and safety: AI-based algorithms need to be secure, reliable and sufficiently robust to cope with errors and inconsistencies in AI-based systems.

♦Privacy and data governance: Europe has strengthened its approach to data privacy in recent times with the advent of the GDPR in 2018. It's continued on that path in this instance, with the guidelines calling for citizens to have full control over their data.

♦Transparency: Traceability of AI systems is to be safeguarded.

♦Diversity, non-discrimination and fairness: Where AI is deployed, it must take into account the complete range of human abilities, skills and requirements – whilst ensuring accessibility.

♦Societal and environmental well-being: AI-based systems should be used to effect positive social change, enhancing sustainability and ecological responsibility.

♦Accountability: Checks and balances need to be put in place to facilitate responsibility and accountability when it comes to AI systems.

What does it mean for Europe?

Thomas Metzinger, Professor of Theoretical Philosophy at the University of Mainz, was a member of the expert group that drew up the guidelines.

He told the German daily newspaper Der Tagesspiegel that, whilst he felt the guidelines represented a case of ethical white-washing, they are nonetheless good for Europe.

The resultant guidelines are the consequence of compromises he is not entirely happy with. However, Metzinger maintains that neither the United States nor China has anything comparable in place. "Good AI is ethical AI", he states. "AI is one of the best instruments for practical ethics humankind has. We cannot afford to politically slow down this technology."

"Good AI is ethical AI"

Europe has been lagging behind in the technical development of AI. Major U.S. tech giants have been making progress in developing and deploying the technology. However, whilst they may be leading the surge towards AI in the United States, that progress has not been without its issues.

Last week, Google took the step of dissolving its AI ethics board. Only a week after its formation, the board was dissolved due to anti-LGBTQ rhetoric espoused by one of its members. In the same week, Amazon provoked the ire of leading AI researchers over its use of facial analysis technology. The researchers called on Amazon to stop selling its facial recognition software, Rekognition, to law enforcement agencies.

Aircraft manufacturer Boeing has faced criticism over an AI-based component believed to have been at fault in the recent Ethiopian Airlines Boeing 737 Max crash. Whilst AI can be efficient, it is not always accurate. Against that background, it needs to be used responsibly.

Also in recent weeks, ride-sharing service Uber has been accused of using AI to unethically fleece customers via 'surge' pricing.

Whilst U.S. companies may be ahead in terms of AI development, issues like those experienced by Amazon, Google, Uber and Boeing suggest that the Europeans may have established a better basis for development with their ethics guidelines. However, with the guidelines set, European companies still have a lot to do. As Achim Berg, President of Bitkom (Germany's Federal Association of Information Technology, Telecommunications and New Media), told Reuters, "we must ensure in Germany and Europe that we do not only discuss AI but also make AI".

Source: 150sec.com
