Much of the global debate around Artificial Intelligence has become concerned with unaccountable, proprietary systems and algorithms that could control our lives. The Finnish government, however, has decided to embrace the technology's potential by rolling out a nationwide educational campaign.
Artificial Intelligence can have many positive applications, from identifying cancerous cells in biopsy screenings and predicting weather patterns that help farmers increase their crop yields, to improving traffic efficiency. But some believe that A.I expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial.
Finland's approach of making A.I accessible and understandable to its citizens is part of a broader movement to democratize the technology, putting utility and opportunity ahead of profit. The shift toward "democratic A.I" rests on three main principles: public education, transparency around data, and the empowerment of the people the technology affects.
A growing movement across industry and academia believes that A.I needs to be treated like any other "public awareness" program - just like the scheme rolled out in Finland.
Data, on the other hand, is critical to the development of A.I. The most common form of Artificial Intelligence is Machine Learning, in which computers build models from large volumes of data. Those models are only as reliable as the data they are built on, and concern has been growing about recruitment and loan-application tools trained on biased data sets.
One data set, Labeled Faces in the Wild, used to train facial recognition systems, proved to be unrepresentative of both women and people of color: of more than 13,000 images, 83% were of white people and nearly 78% were of men. The discrimination had been baked into the data. And because systems often rely on multiple data sets, it can be hard to identify which decision points relied on bad data.
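To see how that happens in practice, here is a minimal sketch using synthetic toy data (the groups, numbers and function names below are illustrative assumptions, not drawn from Labeled Faces in the Wild or any real system). A simple model trained on data dominated by one group ends up noticeably less accurate for the under-represented group:

```python
# Minimal, illustrative sketch: a classifier trained on skewed data
# performs worse for the group that is under-represented in training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Hypothetical two-feature data; "shift" moves the true decision
    # boundary for this group, so the two groups differ slightly.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
    return X, y

# Training data mirrors the kind of skew described above:
# far more examples of group A than of group B.
Xa, ya = make_group(8300, shift=0.0)   # over-represented group
Xb, yb = make_group(1700, shift=1.5)   # under-represented group

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group:
# accuracy for group B comes out clearly lower than for group A,
# because the single model is fitted mostly to group A's pattern.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

The point of the sketch is simply that the model never "decides" to discriminate; the imbalance in its training data does the work for it, which is why auditing how data is collected matters.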
Understanding how data is collected and how it is used to make decisions is therefore the first crucial step in democratizing A.I and building the necessary trust among the people it impacts the most.
The Institute for Ethical A.I and Machine Learning sees the democratization of A.I developing in four main stages. The strategy initially looks at empowering individuals through best practices and applied principles. The second stage focuses on empowering leaders, while the third and fourth stages focus on entire industries.
The General Data Protection Regulation (GDPR), which was introduced across Europe in 2018, aims to give EU citizens control over their personal data by making consent to collect and process it clear and explicit, most commonly through opt-ins. This has gone a long way toward delivering the initial empowerment described above, but there is still plenty of education to be done. For example, most people applying for a loan won't realize that they have the right for that loan to be assessed by a person rather than a machine.
The global A.I market is forecast to reach $169 billion by 2025, up from $4 billion in 2016. The Pentagon is expecting to spend $2 billion on next-generation A.I, and the U.K. government has pledged £1 billion for similar projects.
However, the democratization of A.I will not be easy, especially as it pushes against aggressive expansion by immensely wealthy, profit-driven technology companies. But beyond profit and growth targets, A.I's real value will lie in its ability to deliver a positive social impact and give citizens the freedom to benefit from their own data. To do that, it has to have the trust and involvement of the people, and earning that trust is today's responsibility.
As Tim Berners-Lee says:
“I believe we’ve reached a critical tipping point, and that powerful change for the better is possible — and necessary.”