The last few weeks have offered a glimpse of what algorithms can do, and how much damage their implementation can cause, after a chaotic distribution of exam marks in the UK.
The UK had decided to use a predictive algorithm, the Direct Centre Performance model, to allocate A-level grades to students. This came after the government indicated that teacher-assessed grades were inflated, which could have left universities with more qualified students than available places.
To rein in that inflation, the algorithm allocated grades based on the past performance of each school. It largely disregarded the grades that teachers had submitted, and as a result hundreds of thousands of students across the UK saw their grades downgraded by an algorithm.
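To see why allocating grades from a school's history can override individual merit, consider a deliberately simplified sketch of rank-based allocation. This is not Ofqual's actual Direct Centre Performance model; the function, student names, and grade distribution below are invented for illustration only.

```python
# Simplified illustration (NOT Ofqual's actual model): grades are filled
# top-down from the school's historical grade mix. Teacher-assessed grades
# play no role here -- only the teachers' ranking of students.

def allocate_grades(teacher_ranking, historical_distribution):
    """Assign grades purely from a school's past results.

    teacher_ranking: students ordered best-first by their teachers.
    historical_distribution: list of (grade, share) pairs summing to 1.0,
        e.g. the school's grade mix in previous years.
    """
    n = len(teacher_ranking)
    allocated = {}
    cumulative = 0.0
    idx = 0
    for grade, share in historical_distribution:
        cumulative += share
        # Hand out this grade until its historical share is used up.
        while idx < n and (idx + 1) / n <= cumulative + 1e-9:
            allocated[teacher_ranking[idx]] = grade
            idx += 1
    # Any rounding leftovers receive the lowest historical grade.
    lowest = historical_distribution[-1][0]
    for student in teacher_ranking[idx:]:
        allocated[student] = lowest
    return allocated

# A school that historically awarded few top grades caps this year's
# cohort at that ceiling, regardless of how strong any individual is.
ranking = ["Asha", "Ben", "Cleo", "Dev", "Ede"]
history = [("A", 0.2), ("B", 0.4), ("C", 0.4)]
print(allocate_grades(ranking, history))
# Only one student can receive an A, because only ~20% did historically.
```

In a sketch like this, a brilliant student at a historically low-performing school can never receive a grade the school has not produced before, which is the core of the unfairness critics identified.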
Algorithms determined the fate of over 40 percent of students
Over 40 percent of students were reported to have had their grades downgraded. Given how heavily A-levels weigh in determining career paths, the decision upended these students' plans.
Once the results came out, universities acted on them, and students who had been accepted on the basis of their predicted grades started being rejected. At a stroke, their lives had been changed by an algorithm.
A deeper dive into the logic of the algorithm showed that the schools most adversely affected were state schools and those serving underprivileged communities.
The clear implication was that the algorithm had learned from past bias and discrimination and projected it forward: it assumed that students from these demographics were more likely to fail, and that their teachers had simply been generous to them.
Future impact of the technology
Cori Crider, co-founder of Foxglove, an organization that challenges alleged abuses of digital technology, noted this problem and warned that it is just the tip of the iceberg: algorithms will keep reinforcing bias and widening existing inequalities in society.
Crider added, however, that this was not a tech problem but a people problem: a political choice had been made and fed into a computer. The model then implemented those biases, producing results that further entrenched the expectation that the targeted groups would fail.
Foxglove and the Joint Council for the Welfare of Immigrants have also demonstrated the impact of algorithms on visa applications in the UK. They argued that the system used to stream visa applications had biases baked in that all but guaranteed applicants from certain countries would be denied.
Global issues with the technology
This problem is not confined to the UK. Around the world, people are waking up to the realization that algorithms can continue to entrench inequality.
In the US, algorithms have been deployed across major industries, including banking, real estate, and policing. In case after case, these systems have ended up hurting the poor and people of color.
Just recently, the Californian city of Santa Cruz banned predictive policing because the system disproportionately targeted people of color. Mayor Justin Cummings said that after reviewing the program and finding that it targeted people of color in the community, the city decided to discontinue it.
Although the UK government ultimately reversed its decision, scrapping the algorithm's results in favor of the teachers' recommendations, the episode offered a glimpse of just how disruptive the technology can be, and how severe the consequences are.