By: Kim Furman, Synthesis Marketing Manager
Freedom comes with responsibility. To be free (to have the power and right to act, speak and think as we wish) is to be responsible for the freedom of others. Freedom means respect for those who are also living free. It asks us to stand up and be accountable. But can Artificial Intelligence (AI) be accountable when even we, as humans, can find this challenging? A Google search leads to a plethora of positive and negative results about what AI can do, but does it drive or defy our freedom? The jury is out.
I examined a number of AI use cases and the answers were not what I was expecting. For every case where AI could drive our freedom, there was a risk it could defy it.
A person may be free to do as they wish (to an extent), but many are unable to. AI is opening up a new world for people living with disability. Examples range from AI-powered apps that narrate the surroundings for the visually impaired, to closed-captioning technology that transcribes speech into text in real time for the hearing impaired, to tools that translate sign language into text for those who cannot read it. These types of AI allow anyone to experience and interact with their world, removing obstacles to connection. And AI is opening up the world not only for those with disabilities but for anyone wishing to engage across languages, whether by translating speech in real time or by translating and transcribing foreign-language websites, so our access to the world is not limited.
Yes, AI is opening up our world. However, AI is proving to be vulnerable to bias and stereotyping, and this is problematic. If the descriptions of the world, or the transcriptions and translations of text or speech, are biased, then this creates a warped accessibility, if any accessibility at all. Algorithms can develop sexist or racist traits. Google Translate, for example, was accused of gender stereotyping because it assumed that all doctors were male and all nurses were female. It would be problematic if AI told a visually impaired person that their doctor was male when in fact they were not. That person would not be accessing a correctly represented world.
In 2020 South Africa experienced 621 282 contact crimes (murder, attempted murder, sexual offences, assault and robbery). Yet humans give off non-verbal cues that signal what we plan to do. These are called micro-expressions, and AI algorithms embedded in CCTV systems can detect them among pedestrians and anticipate potential criminal behaviour before it occurs. This approach is being piloted in India and aims to protect the freedom of safety while enforcing accountability. According to Steven Feldstein, an associate professor of public affairs at Boise State University, 75 countries have begun using AI technologies for surveillance, with 52 of them relying on these technologies for preventative measures.
However, some argue that this pursuit of security infringes on the right to privacy. It also becomes problematic if bias creeps into the system or a person is inaccurately identified as a threat. This happened to Nijeer Parks in New Jersey in 2019, when he was arrested based on an incorrect facial recognition scan that matched him to the image on a fake ID left behind at a crime scene. Facial recognition scans are becoming more accurate according to research, but they remain more error-prone with people of colour, which can lead to traumatic and unjust situations like the one Parks endured.
Detecting hate speech, misinformation and disinformation
AI is being used to detect and immediately eliminate hate speech (itself an infringement of freedom), misinformation and disinformation. Facebook is using AI to sift through its immense volume of content and automate decisions so that hate speech and other inappropriate or incorrect content can be removed in real time. Its AI now detects 97% of the hate speech it removes from the platform, up from 80.5% the year before and 24% in 2017.
Where AI can be used to detect and eliminate hate speech, misinformation and disinformation, it can also be used to create them. An example is deepfakes: AI-generated impersonation videos, or, put another way, manipulated footage that looks real. AI is in fact being used to detect these videos; even so, they can do damage quickly, spread disinformation from what looks like a reliable source, and leave anyone at the mercy of extortion.
Is AI driving or defying our freedom?
AI can do either. It can be the hero that tracks down a kidnapped child through CCTV footage or predicts breast cancer with 99% accuracy. It can be the villain that inaccurately labels someone as a threat or downgrades a person’s job application due to bias. The EU is trying to tackle this exact problem with AI regulations. We need to use AI to advance our freedoms and progress, while not allowing its risks to undermine that very advancement, but this is no easy task.
The following is critical:
- The public understanding the possibilities and risks of AI.
- The government playing an active role to ensure human rights are being upheld while innovations are still being advanced.
- The intention of the creator of AI (positive or negative).
- The creators of AI being aware that their biases can creep into the technology, which requires self-reflection on what those biases might entail.
- Diversity within the team to help eliminate bias.
- Quality data (rubbish in, rubbish right on out).
- Constant investigation to ensure the AI has not taken on bias or discrimination even when preventative measures were put in place.
“Algorithmic decision-making is the civil rights issue of the 21st century,” says labor rights scholar Ifeoma Ajunwa.
But where does this leave our verdict? Mahatma Gandhi said: “Freedom is not worth having if it does not include the freedom to make mistakes.” With this type of innovation, mistakes are bound to be made. Yet Uncle Ben’s words seem to answer the question: “With great power comes great responsibility.” AI is today’s great power, and it is the duty of all involved to carry this great responsibility.