EVP General Counsel and Chair of AI Ethics Working Group, Arm
Artificial intelligence is already making key decisions in our lives, whether it’s your smartphone adjusting its lens to snap the ideal portrait or your vehicle making an automated emergency stop. We need methods to identify and place limits on bias in computer algorithms.
New applications for AI are created every day – an exciting frontier for technologists. But new developments in AI have also illuminated a novel problem: human bias reproduced in computer algorithms.
At scale, these biases could contribute to an increasingly lopsided world where the benefits of a modern, digital society are not inclusive.
As the General Counsel and lead for AI ethics initiatives at Arm, a foundational IP processor technology company, I spend a great deal of time thinking about technology, good governance and how AI could and should impact humanity.
To realise the full benefits of AI, it must be built in an inclusive way and be trusted by everyone. Governments around the world have begun to explore these considerations, and the EU has even drawn up proposals for regulating AI in situations where there is a risk of harm.
The price of less-inclusive AI
We’re calling for a vigorous industry-wide effort to take responsibility for a new set of ethical design system principles through the establishment of an AI Trust Manifesto.
A key principle in the manifesto states every effort should be made to eliminate discriminatory bias in designing and developing AI decision systems.
Women are the world’s largest underrepresented group, which means we will need inclusive teams of people – including women of diverse backgrounds and women of colour – involved in engineering AI.
According to STEM Women, the UK saw little to no change in the percentage of women engineering and technology graduates from 2015 to 2018: only 15% of those graduates were women.
That raises an important consideration: AI is programmed to mimic human thought and reasoning. If it is built by a non-diverse workforce, the biases it absorbs can seriously hinder widespread technology development and adoption.
Facial recognition is one example. A system trained only on Caucasian faces may misidentify people from minority groups during recognition scans.
It is widely acknowledged that the careful use of training data is crucial to keeping discrimination and bias out of AI systems; in some cases, deploying biased data may even be illegal or unfair.
If we are to give machines the ability to make life-changing decisions, we must put in place structures to reveal the decision-making behind the outcomes, providing transparency and reassurance.
Companies must take the lead by setting high standards, promoting trust and ensuring they maintain a diverse staff trained in AI ethics.
We must continue to explore different solutions to the complex issue of ethical AI decision making. One possibility is a review process built around the key pillars of AI ethics, including bias and transparency, so that products and technologies reaching the marketplace receive appropriate prior approval for adherence to ethical standards.
This type of system would help consumers trust that the technology has been trained to counter bias and produced using fairness and inclusivity methodologies.
To make this a reality, it’s critical for girls, women and the wider technology industry to use their voices and networks to increase female participation in STEM and AI.
In all its forms, AI has the potential to contribute to an unprecedented level of prosperity and productivity.
To do that, it must be built on a foundation of trust by the diverse range of people the technology will ultimately serve – including women.