OECD Principles for trustworthy AI
05-28-2019
There is hardly an international body that does not currently deal with the subject of Artificial Intelligence (AI). Whether at the EU level or in the G7 or G20, groups of states around the world are struggling to find a political framework for the technology. Less attention has been paid to the work of the Organization for Economic Co-operation and Development (OECD), the group of industrialized countries with 36 members from Europe, North and South America, Asia, and Oceania. On May 20th, representatives of the member states and four other states adopted a set of rules in Paris that is probably the most binding international agreement on AI so far.
This is not only because it includes the US as an important player for the first time, but also because it goes beyond merely establishing ethical guidelines for companies to follow. Instead, the OECD wants to hold governments responsible, relying on evidence-based monitoring and control mechanisms. By the end of the year, for example, an AI observatory is to be set up to measure how the guidelines are implemented at the policy level. Following the "peer pressure" principle, it will be discussed publicly if certain states implement the rules poorly or too slowly. The observatory is also meant to enable an exchange of best practices. The organization hopes that the legally non-binding rules can nevertheless be effective.
Experts had criticized the German AI strategy, too, for not giving sufficient consideration to the measurability of its projects; in this regard, the OECD's approach could add value. The recommendation adopted by the OECD's annual Ministerial Council contains, on the one hand, "value-based principles" to be observed by so-called "AI actors", i.e. organizations or individuals responsible for AI systems. According to these principles, AI should enable inclusive growth (and thus also prevent discrimination), respect democratic and liberal values, and be transparent and explainable. Its use should be safe, and this should be checked continuously. Actors who use AI should be accountable for it.
In addition to these principles, the Recommendation provides guidelines for governments, both for their national legislation and for international cooperation. Member states should encourage long-term investment in the application of AI, whether by the state or the private sector, and are advised to invest in the development of open and interoperable data applications. States should also promote AI ecosystems, which includes fundamentals such as data infrastructures and cooperative models for data sharing. Finally, states should engage in dialogue with social partners to prepare for the changes that the increasing use of AI will bring to the labor market.