Introduction
In recent AI news, the European Union (EU) announced the launch of a pilot project to test its draft ethical guidelines for artificial intelligence. The guidelines are intended to govern the development and application of AI technology, and the pilot will assess whether they can be implemented in practice. Read on to learn more about this new development in the world of AI.
Why is the EU Drafting Guidelines for AI?
Social media is currently facing a severe crisis of trust, and the EU wants to ensure that AI doesn’t head in the same direction. To that end, it recently unveiled ethics guidelines intended to shape the development of AI systems before they become embedded in society.
This intervention aims to break the familiar pattern in which regulators are forced to play catch-up with technological advances, often with negative consequences. Liam Benham, IBM’s vice president for regulatory affairs in Europe, who was closely involved in drafting the guidelines, emphasized the point: “It’s like putting the foundations in before you build a house … now is the time to do it.”
Summary of EU Guidelines
The guidelines were first drafted in December 2018 by the EU’s High-Level Expert Group on AI, a body of 52 experts from academia, industry and civil society. They set out seven “key requirements” that the EU deems necessary for trustworthy AI. These include:
- Accountability: “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”
- Diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”
- Human agency and oversight: “AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.”
- Privacy and data governance: “Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”
- Robustness and safety: “Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.”
- Societal and environmental well-being: “AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.”
- Transparency: “The traceability of AI systems should be ensured.”

A Step Towards Ethical and Secure AI
Mariya Gabriel, Commissioner for Digital Economy and Society, said in a statement: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”
After the early pilot phase, slated for early 2020, the EU Commission’s group of AI experts will review the assessment lists for the key requirements and incorporate the feedback received. The Commission will then evaluate the outcome and propose any next steps.
Conclusion
The European Union is taking significant steps towards shaping the future of artificial intelligence by drafting ethical guidelines to ensure AI systems are developed and implemented responsibly. By focusing on accountability, fairness, transparency, and societal well-being, these guidelines aim to prevent negative consequences and promote a secure, ethical, and human-centric AI ecosystem. As the EU pilots these guidelines, it hopes to lay the foundation for a global conversation around trustworthy AI. The guidelines will continue to evolve with feedback, ensuring that AI development aligns with EU values.
FAQs
- Why is the EU drafting guidelines for AI?
  - The EU is creating ethical guidelines to ensure that AI systems are developed responsibly and prevent issues like discrimination, privacy violations, and lack of accountability, thus promoting a secure and trustworthy AI future.
- What are the key requirements in the EU’s AI guidelines?
  - The guidelines include accountability, fairness, privacy and data governance, robustness, societal well-being, and transparency, ensuring AI systems are ethical and beneficial for society.
- How does this impact AI development in the future?
  - The EU’s guidelines will shape how AI systems are created, ensuring they align with ethical standards, are transparent, and contribute to social and environmental well-being. The guidelines also promote responsible use of AI technology.
- When will these guidelines be implemented?
  - The EU is piloting the guidelines starting in early 2020, with ongoing assessments and reviews by AI experts to incorporate feedback and refine the rules.
- What role does the EU’s High-Level Expert Group on AI play?
  - The High-Level Expert Group on AI, consisting of experts from various sectors, developed the initial guidelines. It is responsible for reviewing and refining the guidelines as they move towards implementation.


