Artificial intelligence is increasingly being used to support decision-making. For the purposes of this blog post, we don’t need to delve deeper into the definition of artificial intelligence. The key point is that artificial intelligence is created by people, as the word itself implies. How intelligent this little helper becomes depends on the data we use to teach it. If the data used to teach AI is distorted, the conclusions drawn by the AI will be distorted too.
Indeed, AI is like a mirror that often reflects our biases and stereotypical thinking.
It is precisely these biases, stereotypes and deficiencies in data that influence women’s lives every day.
Could distorted data kill?
Sometimes, deficient data has consequences that are annoying but not life-threatening. A study by Wunderman Thompson looked at whether AI can recognise a COVID-19 face mask. The results were baffling. While none of the AIs tested in the study were particularly good at recognising a face mask, they were twice as likely to recognise one when it was worn by a man rather than a woman. For example, Google’s Cloud Vision AI thought that a woman pictured wearing a face mask actually had duct tape across her mouth. The results deliver a stark message about the data used to teach AI.
Distortions in data can be awkward and lead to stereotypes being reinforced, but they can also have even worse consequences. In the worst case, they can even endanger women’s lives and health. The crash-test dummies used in testing the collision safety of cars are almost exclusively based on the average male body, which makes driving significantly less safe for women than men. A woman’s risk of serious injury in a car crash is a staggering 73 per cent higher than that of a man. There are enough examples like this to fill an entire book. In her book Invisible Women: Exposing Data Bias in a World Designed for Men, the British non-fiction author Caroline Criado Perez reveals the lack of gender data in our world and the effects of half of the population being pushed to the margins.
AI has been taught using material that consists of imperfect data.
Exactly how imperfect that data is, however, is a harder question to answer. Because algorithms are often protected by intellectual property rights, we rarely have the opportunity to assess how well different population groups have been taken into consideration.
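When access to a model’s predictions is available, the kind of assessment described above can in principle be done with a very simple audit: compare the model’s accuracy across demographic groups. The sketch below is purely illustrative — the data is hypothetical, loosely mirroring the mask-recognition disparity mentioned earlier, and the function name is my own.

```python
def accuracy_by_group(predictions, labels, groups):
    """Return the share of correct predictions for each demographic group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical mask-detection results: every photo shows a mask,
# but the model misses it far more often for women.
preds  = ["mask", "mask", "mask", "mask", "no_mask", "no_mask"]
labels = ["mask", "mask", "mask", "mask", "mask", "mask"]
groups = ["man", "man", "man", "woman", "woman", "woman"]

print(accuracy_by_group(preds, labels, groups))
# A large gap between groups is a signal of distorted training data.
```

An audit like this is only possible when the predictions and group labels can be inspected in the first place — which is exactly why the opacity of proprietary algorithms matters.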
Towards a fairer future?
How, then, do we ensure that the technology of the future is more ethical and equal?
The easiest solution is also the most obvious: more diversity is needed among the people who work in the industry. Women must be given the opportunity to participate in the development of data-driven solutions as researchers, programmers and users. There is a lot of room for improvement in this regard. For example, women currently represent only 22 per cent of AI developers.
The European Commission also wants to promote the equal development of artificial intelligence. The commission’s new white paper on artificial intelligence presents a European approach that is grounded in the EU’s values and fundamental rights, including non-discrimination and gender equality. The next Horizon Europe framework programme, in turn, offers funding for rectifying potential gender distortions in artificial intelligence and debunking gender stereotypes.
Deficient data and competence must be addressed at source
While the sustainable use of artificial intelligence and data is discussed at the European level, legislation on automated decision-making is also being drafted in Finland. However, this alone is not enough. Transparency with regard to algorithms, the use of data and the key information that influences decisions is of such importance that it calls for consistent rules in a broader sense. In fact, companies and the government should embrace digital responsibility even more extensively than the laws require.
The ethics of artificial intelligence and data use need to be developed as much as possible through self-regulation. The international technology association IEEE is one example of an organisation that is developing methods and certificates for ensuring the ethical use of data. Companies already have access to easy-to-adopt tools such as the ODI’s Data Ethics Canvas. Of course, whenever something new is being developed, it is important to take into consideration and engage the groups that are affected – such as women.