The OECD principles on Artificial Intelligence
News date: 13-08-2019

Virtual assistants, purchase prediction algorithms, fraud detection systems: we all interact every day with Artificial Intelligence technologies.
Although much development still lies ahead, the current impact of Artificial Intelligence on our lives cannot be denied. When we talk about Artificial Intelligence (or AI) we don't mean humanoid-looking robots that think like us, but rather a set of algorithms that help us extract value from large volumes of data in an agile and efficient way, facilitating automated decision making. These algorithms need to be trained with quality data so that their behaviour conforms to the rules of our social context.
Currently, Artificial Intelligence has a high impact on the business value chain and affects many of the decisions made not only by companies but also by individuals. It is therefore essential that the data these systems use are not biased, and that the systems respect human rights and democratic values.
The European Union and national governments are promoting policies in this regard. To help them in this process, the OECD has developed a set of minimum principles that AI systems should comply with. These principles are practical, flexible standards designed to stand the test of time in a constantly evolving field. They are not legally binding, but they seek to influence international standards and to serve as a basis for national legislation.
The OECD principles on Artificial Intelligence are based on recommendations developed by a working group of 50 AI experts, including representatives of governments and the business community, as well as civil society, academia and the scientific community. These recommendations were adopted on May 22, 2019 by OECD member countries.
The recommendations identify five complementary values-based principles for the responsible stewardship of Artificial Intelligence:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Consistent with these principles, the OECD also provides five recommendations to governments:
- Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
- Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
- Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
- Empower people with the skills for AI and support workers for a fair transition.
- Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.
These recommendations are a first step towards achieving responsible Artificial Intelligence. Among its next steps, the OECD plans to develop the AI Policy Observatory, which will provide guidance on metrics, policies and good practices to help implement the principles outlined above, something essential if these principles are to move from theory into practice.
Governments can take these recommendations as a basis for developing their own policies, which will promote consistency across Artificial Intelligence systems and ensure that their behaviour respects the basic principles of coexistence.