

EU Reveals Ethical Guidelines for Artificial Intelligence

Trending in the latest AI news, the European Union (EU) recently announced a set of ethical guidelines for artificial intelligence, along with a pilot phase to test them. The guidelines are intended to steer the development and application of AI technology and to ensure that they can be implemented in practice. Read on to learn more about this new development in the world of AI.


Why is the EU Drafting Guidelines for AI?

Social media is currently facing a severe crisis of trust, and the EU wants to ensure that AI doesn't head in the same direction. To that end, it recently unveiled ethics guidelines designed to shape the development of AI systems before they become embedded in society.

This intervention aims to break the familiar pattern in which regulators are forced to play catch-up with technological advances, often with negative consequences. The need to act early was emphasized by Liam Benham, IBM's vice president for regulatory affairs in Europe, who was closely involved in drafting the AI guidelines. He said, “It’s like putting the foundations in before you build a house … now is the time to do it.”

Summary of EU Guidelines

The guidelines were drafted by the EU’s High-Level Expert Group on AI, a body of 52 experts from academia, industry and civil society, which published its first draft back in December 2018. The guidelines set out seven “key requirements” that the EU deems necessary for trustworthy AI. These include:

  • Accountability: “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”
  • Diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”
  • Privacy and data governance: “Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”
  • Robustness and safety: “Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.”
  • Societal and environmental well-being: “AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.”
  • Transparency: “The traceability of AI systems should be ensured.”

A Step Towards Ethical and Secure Future AI

According to a statement by Mariya Gabriel, commissioner for Digital Economy and Society, “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”


Following the pilot phase, in early 2020, the European Commission’s group of AI experts will review the assessment lists for the key requirements and incorporate the feedback received. The Commission will then evaluate the outcome and propose any further steps.


Shweta Dabholkar
Shweta is a content writer and an aspiring novelist. She loves keeping up with current affairs and the latest tech developments. Someday, she hopes to walk on Mars and travel in a time machine. When she isn’t writing, she enjoys reading fiction, trekking and cultivating her green thumb.
