New OECD Artificial Intelligence Principles: Governments Agree on International Standards for Trustworthy AI

OECD member countries approve and promote principles on AI that respect human rights and democratic values.
Fabienne Lang

On 22 May, the Organization for Economic Co-operation and Development (OECD), an intergovernmental organization that works to build better policies for better lives, adopted new Artificial Intelligence (AI) principles.


The OECD principles on AI focus on AI that is innovative and trustworthy. Respect for human rights and democratic values is also a strong focal point of these principles.

These are the first such principles to be agreed upon and put forward by governments. Moreover, countries outside the OECD membership have also adhered to them, including Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.

The trickiest part in this fast-growing industry is keeping up. The standards have therefore been crafted to be practical and flexible enough to keep pace with the rapidly evolving field of AI.

What are the OECD AI principles?

The five new principles complement pre-existing OECD standards on privacy, digital security risk management, and responsible business conduct. Building on them, the new principles set out values-based guidance for the responsible stewardship of trustworthy AI.

  • AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
  • Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.

What are governments to do?

On top of the five principles, the OECD offers five recommendations for governments: 

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  • Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  • Empower people with the skills for AI and support workers for a fair transition.
  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

What are OECD Recommendations?

Although not legally binding, OECD Recommendations are nonetheless highly influential. On a number of occasions, they have formed the basis for international standards and have helped governments design national legislation.

For example, the OECD Privacy Guidelines of 1980, which called for limits on the collection of personal data, now underlie many privacy laws and frameworks in the United States, Europe, and Asia.


Who are the people behind the OECD AI principles?

The OECD set up an expert group of more than 50 members to scope out a set of principles. The group comprised representatives of 20 governments as well as leaders from the business, labor, civil society, academic, and scientific communities. Their proposals were taken up by the OECD and expanded into the OECD AI Principles.

The OECD member states. Source: OECD

What about the future?

Developing metrics to measure AI research and development will be a key focus for the Recommendation going forward, along with gathering the evidence needed to assess how it is being implemented.

The OECD’s forthcoming AI Policy Observatory will provide evidence and guidance on AI metrics, policies, and practices to help implement the Principles, and will serve as a hub for open dialogue and the sharing of best practices on AI policy.
