Artificial intelligence (AI) and machine learning (ML), explained in the box below, are two of the most commonly used buzzwords nowadays. What we already see in our lives – and what we will likely increasingly see in the future – are AI-based technologies. These technologies are already used in sectors such as transportation, insurance, health and medical services, administration, media and advertising, and the military. There are three levels of AI technologies:
- Assisted intelligence, widely available today, improves what people and organisations are already doing. A simple example, prevalent in cars today, is the GPS navigation programme that offers directions to drivers.
- Augmented intelligence, emerging today, helps people and organisations do things they couldn’t otherwise do. For example, car- and ride-sharing businesses couldn’t exist without the programmes that match drivers, vehicles and passengers and coordinate these services.
- Autonomous intelligence, being developed for the future, creates machines that can act on their own, such as fully self-driving vehicles.
What are AI and machine learning?
Artificial Intelligence (AI) is a scientific discipline that aims to bring intelligent behaviour to machine-based problem-solving. It intends to simulate human intelligence in machines that are programmed to think like humans (source: D 4.3, TRIGGER project). A distinction is made between narrow AI and general AI. Narrow AI is designed to perform a narrow task, for example, driving a car. General AI covers a broader spectrum of activities and could outperform humans not only in specific tasks, such as face identification, but in most cognitive tasks (Future of Life Institute).

Machine Learning (ML) is a subset of artificial intelligence that enables computer systems to learn from data and produce outcomes that can be used for diagnostics or predictions. In other words, it allows these systems to learn and improve automatically from experience without being explicitly programmed. ML systems learn by extracting clues from raw data using generic statistical constructs, such as smoothness, clustering, spatio-temporal coherence and sparsity, and by organising these clues in neuro-biologically inspired computational architectures (source: D 4.3, TRIGGER project).
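To make the idea of learning from data “without being explicitly programmed” concrete, here is a minimal sketch in Python. It is not part of the TRIGGER project; the data, the choice of the scikit-learn library and all parameters are illustrative assumptions. It uses KMeans clustering, one of the statistical constructs mentioned above, to discover two groups in unlabelled points:

```python
# Minimal machine-learning illustration: no grouping rule is hand-coded;
# the model infers the groups from the data itself (clustering).
# Illustrative example only; requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Invented data: two loose clouds of 2-D points around different centres.
rng = np.random.default_rng(seed=0)
group_a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(50, 2))
group_b = rng.normal(loc=(5.0, 5.0), scale=0.5, size=(50, 2))
points = np.vstack([group_a, group_b])

# "Learning": the model estimates the two cluster centres from the data.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print("Learned cluster centres:\n", model.cluster_centers_)
# The fitted model can now classify a point it has never seen before.
print("Cluster of a new point near (5, 5):", model.predict([[4.8, 5.1]]))
```

The point of the sketch is that the rule separating the two groups is never written by the programmer; it is extracted from the data, which is what distinguishes ML from conventionally programmed software.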
However, the use of artificial intelligence and machine learning poses certain risks when it comes to decision-making in sensitive areas where ethical deliberation is needed. Above all, humans need to guide these technologies and not be guided by them. This is where the concept of “responsible” innovation comes in. Responsibility means taking care of the possible ethical, societal and environmental consequences of these technologies, anticipating their potential use and impact, and discussing how to reduce the risks and increase the benefits for everyone. One example of a responsible approach to artificial intelligence and machine learning is the work of the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group set up by the European Commission in June 2018 as part of its AI strategy. The AI HLEG published its final “Ethics Guidelines for Trustworthy AI” in April 2019, after receiving more than 500 comments through an open consultation on a draft. The initiative is centred in Europe, but countries such as Australia, Japan, Canada and Singapore have already shown interest in it.
Based on fundamental rights and ethical principles, the Guidelines provide seven key requirements that AI systems should meet in order to be trustworthy:
- Human agency and oversight. AI systems should support human agency and fundamental rights, and not decrease, limit or misguide human autonomy. This will require proper human oversight mechanisms.
- Technical robustness and safety. Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies.
- Privacy and data governance. Individuals should have full control over their own data, which will not be used to harm or discriminate against them.
- Transparency. The traceability of AI systems should be ensured. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
- Diversity, non-discrimination and fairness. AI systems should consider the whole range of human abilities, skills and requirements, and should be available to all. Unfair bias should be avoided, as this could have multiple negative implications, including the marginalisation of vulnerable groups.
- Societal and environmental well-being. AI systems should benefit all human beings and must be sustainable and environmentally friendly.
- Accountability. Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
These requirements should be applied whenever AI is developed, deployed and used, and we should be mindful that there may be fundamental tensions between different principles and requirements. This will require the continuous identification, evaluation, documentation and communication of these trade-offs and their solutions. In line with this, the TRIGGER project contributes to the discussion by providing recommendations for the governance of artificial intelligence in Europe and beyond, with the ambition of offering ethical standards for the development of trustworthy AI applications all over the world.