
Reading session – Coded Bias

Directed by Shalini Kantayya. Netflix, 2020.

Wednesday 26th April
I find this film fascinating; it examines how algorithms are imbued at their foundation with unconscious bias because they were created by a limited pool of white men. We are introduced to Joy Buolamwini, who loved computer science growing up, especially as coding seemed detached from the problems of the real world. She went to MIT and made art projects using computer vision technology. In her first semester she took a class called Science Fabrication, where the brief was to read science fiction and try to build something it inspired. She created the ‘Aspire Mirror’, which you would look into each morning; it would inspire you by projecting images of inspirational people, such as Serena Williams, onto the viewer’s face. Joy used a camera with computer vision software that should have tracked her face, but surprisingly she found that it didn’t work – until she put on a white mask. Then the software worked perfectly, easily detecting her face.

I learn that we often teach machines to see by providing training examples of what we want them to learn. For example, for a machine to learn what a face is, you must provide lots of examples of faces and lots of examples that aren’t faces. The data sets being used were majority men and majority lighter-skinned individuals. This highlights the issues of bias that can creep into technology.

Ideas about AI come from science fiction, but narrow AI is just maths. The first AI was created in the Dartmouth maths department. As the founders, the male professors there got to decide what AI would be. They decided that intelligence could be demonstrated by the ability to play games, specifically chess; intelligence was defined by the ability to win at them. Yet we know that in reality intelligence is much more than that – there are many different types of intelligence, not a single version. The programming of AI was created by a small, homogenised group of men, all affected by unconscious biases. They ended up embedding their own biases into the technology. The algorithms now perform better on male, lighter faces.
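The training-examples idea above can be sketched in a few lines. This is a toy illustration, not the film’s actual software: the ‘faces’ are invented 2D feature vectors, the two groups and their distributions are made up, and the detector is a simple centroid-and-radius rule – but it shows how a skewed training set makes a detector fail on the under-represented group.

```python
import random

random.seed(0)

def sample(mean, n):
    # Draw n two-dimensional feature vectors scattered around a group mean.
    return [(random.gauss(mean[0], 1.0), random.gauss(mean[1], 1.0)) for _ in range(n)]

# Hypothetical feature distributions for two demographic groups.
group_a = (0.0, 0.0)   # heavily represented in the training data
group_b = (6.0, 6.0)   # barely represented

# Skewed training set: 95 examples from group A, only 5 from group B.
train = sample(group_a, 95) + sample(group_b, 5)

# "Training" the toy detector: compute the centroid of all training faces.
cx = sum(p[0] for p in train) / len(train)
cy = sum(p[1] for p in train) / len(train)

def detected(p, radius=4.0):
    # A face is "detected" if it lies within `radius` of the learned centroid.
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2

test_a = sample(group_a, 200)
test_b = sample(group_b, 200)
rate_a = sum(detected(p) for p in test_a) / 200
rate_b = sum(detected(p) for p in test_b) / 200
print(f"detection rate, group A: {rate_a:.2f}")
print(f"detection rate, group B: {rate_b:.2f}")
```

Because group A dominates the training set, the learned centroid sits almost on top of group A, so group A faces are detected nearly every time while group B faces almost never are – the same shape of failure Joy hit with the white mask.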

‘Data is destiny.’ Data is what we use to teach machines to learn different kinds of patterns, and skewed data sets give skewed results. Data is a reflection of our history; the past dwells within our algorithms, and that shows the inequality. What does it mean to live in a society where AI is starting to govern the liberties we might have, and what does it mean if people are discriminated against? We meet Cathy O’Neil, the writer of ‘Weapons of Math Destruction’. She believes that algorithms can be destructive and harmful. Mathematics is being used as a shield for corrupt practices, ‘using historical information to make a prediction about the future.’ Algorithms use a score system to rate us – the power of the algorithm lies with whoever owns the code. There is no appeal system if the algorithm says ‘no’, and no accountability. Algorithms can be racist, sexist or ableist in their behaviour, because they have been programmed by humans who can carry all of those biases. We need to monitor the process for that bias, and be aware that it exists.
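O’Neil’s point that these systems ‘use historical information to make a prediction about the future’ can be made concrete with a toy score. Everything here is invented for illustration – the postcodes, the loan records and the threshold are all made up – but it shows how a score ‘learned’ from past decisions simply replays whatever bias those decisions contained.

```python
# Toy illustration of "the past dwells within our algorithms": a score
# built from historical approval rates reproduces historical bias.

historical_loans = [
    # (postcode, was_approved) - past decisions, themselves possibly biased
    ("A1", True), ("A1", True), ("A1", True), ("A1", False),
    ("B2", False), ("B2", False), ("B2", True), ("B2", False),
]

def train_score(records):
    # "Learn" a score per postcode: simply the historical approval rate.
    totals, approved = {}, {}
    for postcode, ok in records:
        totals[postcode] = totals.get(postcode, 0) + 1
        approved[postcode] = approved.get(postcode, 0) + (1 if ok else 0)
    return {p: approved[p] / totals[p] for p in totals}

score = train_score(historical_loans)

def decide(postcode, threshold=0.5):
    # A new applicant is judged purely on where past applicants lived.
    return score.get(postcode, 0.0) >= threshold

print(decide("A1"))  # True  - postcode with a favourable history
print(decide("B2"))  # False - postcode with an unfavourable history
```

A new applicant from postcode B2 is refused before anyone looks at them – and with no appeal system, no one ever has to explain why.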

The film moves to the UK, where we learn that there is systematic bias, and that there are issues with the police which will be hardwired into new technologies – creating a shift towards authoritarianism. There is a huge CCTV network in this country: six million cameras operating in the UK, China-style surveillance. Computers are fed data and digest it. We are being watched, our faces scanned against a database to see if we have committed a crime, yet this facial recognition is often inaccurate.

In Hong Kong, surveillance is used to track down those causing problems. In the USA, 117 million people have their faces on a facial recognition system, without any regulation of its accuracy.

The future of AI is being created by nine companies building the next applications of artificial intelligence. Six of these are in the USA – Facebook, Apple, Amazon, IBM, Google and Microsoft – and three are in China, including Tencent. China and the USA are taking two very different tracks: in China, the state has access to all data in order to maintain social order, while in the USA, AI is being developed for commercial purposes, to earn revenue.

In the USA, key-fob entry is being replaced by a biometric security system of facial recognition. However, the recognition system is then used to watch residents and send warning letters if they are pictured conducting behaviour that those who own the housing system deem inappropriate. It is control without the occupants’ agreement, which is a frightening path to tread.

I learn about social media platforms and their automated AI decision-making – what you see in your feed, and what is highlighted, is powered by AI-enabled algorithms. Your view of the world is being governed by AI. These algorithms also decide whether or not you get into college, get a credit card or get a mortgage – how do you get justice when you don’t know how the system works? How does the algorithm work? We don’t know!

The example given is an Amazon recruitment tool; the company discovered it was biased against women – it rejected the résumés it received from women. It was making mathematical, not ethical, decisions. The fear is that civil rights could be rolled back under the guise of machine neutrality. We are living with the awareness of being watched. The internet is increasing inequality through data collection and surveillance: a company can second-guess what you are thinking. With machine learning, we don’t yet understand what the data is capable of predicting. Marketing is not just for products; it can also market ideas. It has been discovered that Amazon has racial and gender bias in some of its AI services. The company created Rekognition, a facial recognition system that was going to be used by the police in some states, until the inequalities embedded in the service were highlighted. ‘When you are an outsider, you always have the perception of the underdog’ – Cathy O’Neil. She talks about how the financial crisis in 2008 caused the largest wipe-out of Black wealth in the history of the USA; it was quite simply discrimination.

What are the social implications of AI? The value-added model used to score teachers is one example – it takes no account of classroom observation. When an AI model makes such decisions, what happens to the constitutional right to due process? The same applies to HR résumé analytics: if candidates are not given a fair chance to apply for a job because the screening is done by an algorithm, that is potentially a big problem. We must bring ethics on board. We need to recognise our differences and make the system more inclusive.

In a more extreme realisation, China’s citizens have an individual social credit score – if you speak out about the Chinese government, this will affect your score, and your family’s and friends’ scores. It is obedience training. Those who have lost credibility are restricted, losing access to travel on trains and planes. The interviewee says, ‘You want to behave because your face represents the state of your credit – you trust someone based on their credit score.’ How frightening to be controlled in this way by the authorities.

Yet in reality, throughout the world we are all being scored all the time – you might be shown better items on Amazon, or pay a different price for toilet roll, and so on. In the UK we have some protection against the misuse of information in the form of GDPR. We need algorithmic justice – this is a major civil rights concern. AI has the potential to rule people’s lives through their liberty, their finances or their livelihood, reducing their options through the decisions that are automatically made. We need AI to work for society and be fair: not racist, not sexist, not discriminating against people with disabilities. We need intelligence and ethics.

Joy founded the Algorithmic Justice League to begin the fight back! She speaks about ‘supremely white data and the coded gaze’, taking her concerns to Congress, who listen and take on board the information she presents to them – including the harvesting of face data: no one should have to give up their face data to access a platform, an economic opportunity or basic services. Facebook has 2.6 billion people on it, which is incomprehensible. Awareness is a beginning – I have learnt so much from this movie, but we must be concerned about our futures, and about the mathematics that holds the potential for so much power and control over our lives the more digitally we choose to exist.

Reference
Kantayya, S. (dir.). Coded Bias. 2020. Available on Netflix.
