FAIR AI
Ontological Problems of Creating Conscience
By Andrés Cortés
October 30, 2020
The age of AI
AI is far from being the idyllic, fair decision-making program we would all benefit from. The article "The race problem with Artificial Intelligence: 'Machines are learning to be racist'" on Metro UK illustrates the biggest problems concerning the development of AI.
The lack of inclusivity plays a big role in the growing racism of our automated technology. As the author points out, the tech and computer industry is dominated by privileged white males, with hardly any Black women employed by these companies. The result of this discrimination can be seen in the application of AI:
"In 2017, a video went viral on social media of a soap dispenser that would only automatically release soap onto white hands. The dispenser was created by a company called Technical Concepts, and the flaw occurred because no one on the development team thought to test their product on dark skin. A study in March last year found that driverless cars are more likely to drive into black pedestrians, again because their technology has been designed to detect white skin, so they are less likely to stop for black people crossing the road."
AI is still too complex a tool to be used broadly; as learning software, it picked up even the subtlest biases of its developers. This ultimately shows how racism affects us unconsciously and is then reproduced in decision-making that lacks human supervision. One could ask: is this how racism is socialized in the minds of children? Our purpose here is to focus on the questions one should ask, or in this case, should have asked, before creating such software.
This rests on a simple observation: individuals in a given society are unlikely to think outside the lines drawn by their predecessors. They reproduce the models embedded in their system of beliefs, not out of conviction but out of instruction. This is how unnoticed biases persist across generations, whether intended or merely transmitted, making awareness-raising a key factor both in the creation of such powerful tools and in building a more equal society.
Hannah Arendt, a German political thinker, explained these difficulties in her essay "The Crisis in Education". We live in societies our ancestors could not have imagined, we raise our children in a world they do not know, and they will create societies that do not yet exist. She pointed out the grave danger of merely replicating existing models, and the failure of the US to adapt. Yet what is most striking about the education she refers to is the universality of human experience, independent of geographical or cultural boundaries.
How can we prevent it? The wonders of art in foreseeing the future
Since there is no scientific way to accurately predict the future, we have science fiction to explore these areas. The renowned British series Black Mirror explores these ideas in the episode "Black Museum".
Spoiler alert – In this episode, we follow the protagonist into a creepy museum whose guide narrates the story of his own technological progress. He embodies the opportunistic neoliberal "self-made man", showing disregard for minimal ethical values and a willingness to cross legal boundaries in order to profit from his clinic's technological advances. Throughout the episode, he shows off, with a certain pride, all the technological implants he developed and tested on people, each of which ended in tragedy.
We see here the possible outcomes of unscrupulous technological progress: the dangers we could face if we disregard ethics in matters that touch our human experience and, as the ultimate warning, a world where actions carry no consequences. A state that legislates only after the damage is done. In one sentence, a state that fails to foresee such developments is trapped in its own bureaucracy.
Opinion
As a so-called "digital native" – a term for people born into a world of constant technological upgrades – I have to say that I stand on middle ground. On the one hand, I appreciate all the progress being made in localization, automation, and digital governance. I see how this is the future of our societies and how it has great potential to be inclusive and diverse. On the other hand, I think these technological tools, as they exist today, are subject to much misuse, creating mistrust even among my peers, and feeding the atmosphere of a dark internet with no privacy and no respect for fundamental rights.
In the future, I expect better conversations about these implementations, about transparency, and about sensible restrictions – not only in the technological domain but in all aspects of society.