How have Racial Issues been Impacted by Technology and Automation?
26th episode of the Automated Podcast. Check out the podcast episodes and other blogs at https://automatedpodcast.org/
Over the last while I have been talking to a number of my friends in the USA about the evolving situation regarding the race protests and riots. Though I will definitely avoid any political commentary on this charged subject, I do think one very positive thing that has come from it is an increased awareness of both past and present racial issues. But I have yet to hear much talk about the impact of technology on these issues, so that will be the topic of today’s episode. Also, before I jump into things, this will be the last episode before the summer break. I am taking on the second part of a four-section trek across the Pyrenees mountain range and will be out of contact for a few weeks. I will be back in September, however, and I’m currently in the midst of recording a few interesting discussions, which I will release when I get back.
But let’s get into today’s topic. Though this podcast usually focuses on what the future might look like with certain technologies, when it comes to technology and race I think the more relevant questions are connected to past and present issues.
Though it is quite easy to see that an ultimate future of cheap, efficient, and numerous automation systems like robots and AI could well spell the end of abhorrent practices like slavery, the reverse has unfortunately been the case when looking at the past. It has in fact been argued that slavery in the US was not just supported by, but actually grew to such an extent because of, certain automation technologies. In episode 32 of this podcast I explored the four industrial revolutions the world has experienced and is experiencing, and specifically discussed some of the technologies central to the first industrial revolution, such as the steam engine, the power loom, and the cotton gin. One key aspect of these technologies is that they had a tremendous impact on the ability to process cotton, but not on its harvesting or cultivation. Today we see automated harvesters for many agricultural products, but these require sophisticated vision, guidance, and movement systems that were impossible to build a few hundred years ago. The perhaps obvious outcome of this improved processing ability was an increased need for raw material to feed these machines, most notably cotton. If, using a cotton gin, one person could do in a day what would take two months by hand, it naturally followed that the workforce harvesting cotton would grow. And as it happened, the majority of cotton came from the southern states of the US. This in turn pushed up the number of black slaves used to cultivate the crop, and it is easy to see that their numbers grew alongside these technological changes.
But moving on to the present. Just a few episodes ago my guest, Brigitte Tousignant, mentioned Google’s search showing a racist picture of Michelle Obama as a top result in 2009, and how a number of our current algorithms inherently perpetuate racism and bias. I also discussed a water dispenser that only worked for a white colleague. I was actually wrong in that example: it wasn’t a water dispenser but a soap dispenser, which turned into a viral video back in 2015 showing the problem directly. These examples raise one of the recurring questions that has been touched on in a number of episodes: where does the responsibility lie? For the classic example of autonomous vehicles, is it the driver, the car maker, or the software company that is responsible if the vehicle is speeding? The similar question here is: is it the soap dispenser, the manufacturer, or the programmer that is racist or showing inherent bias? The question of accountability is still something we haven’t managed to solve. On the soap dispenser issue: “The soap dispenser was created by a company called Technical Concepts, which actually unintentionally made the dispenser discriminate simply because no one at the company thought to test their product on dark skin.” Other examples of AI and software not working properly for black people exist and can be found quite easily. There appear to be ever more modern examples where algorithms and automation technologies keep making similar mistakes towards different racial groups. One of the better-known biases has been in facial recognition. The most recent example, the Little Mix pop group having two of its members incorrectly identified by an AI that was writing news articles, was also discussed in the previous episode.
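To see how a product can discriminate without anyone intending it to, consider a rough sketch of how such a dispenser might work. Dispensers of this kind are widely reported to use an optical proximity sensor that triggers when enough emitted light reflects back from a hand. The code below is purely illustrative: the threshold and reflectance values are made up, and the real product’s internals were never published. It simply shows how a fixed trigger threshold, calibrated only on highly reflective (light) skin, can silently fail on skin that reflects less light.

```python
def dispense_soap(reflected_signal: float, threshold: float = 0.5) -> bool:
    """Trigger when the reflected optical signal exceeds a fixed threshold.

    The threshold here is a hypothetical value, standing in for a
    calibration that was only ever tested on light skin.
    """
    return reflected_signal >= threshold


# Illustrative (made-up) reflectance readings for two hands:
light_skin_signal = 0.8   # reflects most of the emitted light
dark_skin_signal = 0.3    # absorbs more light, so less is reflected

print(dispense_soap(light_skin_signal))  # True  -> soap is dispensed
print(dispense_soap(dark_skin_signal))   # False -> nothing happens
```

Nothing in the logic mentions race, yet the untested calibration makes the outcome racially skewed, which is exactly why testing on a diverse range of users matters.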
Now these examples may seem small and even slightly comical if this is the first time you’re hearing about them; even the man in the viral video laughed when the soap dispenser wasn’t working for him. But these issues point to two real problems. The first lies in the fact that we are starting to implement these technologies at a larger scale throughout our society, as with the recent misidentification of the Little Mix members in an AI-written news article. As AI is deployed at scale, its inability to correctly identify people of colour will vastly increase the number of problems like those showcased above. The second, more pernicious problem is that these technologies, when implemented in more essential parts of our systems, can cause real harm to individuals. Topically, the use of facial recognition by the police has been a central issue in the protests in the US. “Facial-recognition systems are more likely either to misidentify or fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes.” This is of course wildly different in importance from having or not having soap for your hands. And though accountability matters here too, the damage to an individual can be permanent in these situations, which makes the issue that much more severe. Luckily, there has already been some very positive movement in the last few months: IBM, Microsoft, and Amazon have stepped back from implementing, and in some cases even selling, facial recognition technology to police departments. This has spurred on the discourse about racial issues within automation technologies and has made the problem apparent to more people across the world, which is a great start on a very complicated problem.
So overall, if you’ve been listening to this podcast over the last year, you may have picked up that I generally lean towards being more of a technological positivist than a pessimist. That said, I think events such as those in the USA are really important for bringing to light perspectives that many of us weren’t aware of before. I hope that was as much the case for you as it was for me: in preparing this episode I came across a number of new issues that our past and present automation technologies have imposed on people. One point I didn’t bring up in this episode, but definitely plan to later on, is the current evolution of slavery across the world and how technology will impact it. Though it definitely doesn’t fall under the traditional umbrella of work, I nevertheless think it is relevant for this podcast. But that will be for a future episode. For now, I hope everyone has a great end to their summer, and I’ll be back in September with new episodes.