Bias in AI.

It’s not at all hard to understand a person. It’s only hard to listen without bias.

Abhi Avasthi
3 min read · May 12, 2022

It wouldn’t be a stretch to say that AI is everywhere, right from our phones to our cars and even our homes. So it’s safe to assume that AI will eventually control things where it makes decisions that could decide the fate of people, be it in law (an interesting prospect) or in technologies making decisions in real time, where it might come down to choosing one of two individuals in danger.

This is where it gets really interesting: when we build AI, our researchers and scientists try their best to reduce the bias they introduce into the system, and although there has been a lot of literature around this, bias still manages to seep in, and it compounds when multiple systems are integrated.

The bias could come from the person implementing the system (we are talking about complex implementations, not just standard template algorithms), from the language they use (English could introduce certain biases due to the origins of some of the words it uses), or from the data itself and the processes used to collect it.

Now that we know how bias can enter the system, we can also see its other, non-lethal implications. AI could be used to process admission applications, loan applications, or even rental applications, and any bias in the system could negatively affect outcomes for people through no fault of their own. A slightly more dangerous implication is the involvement of AI in law enforcement, where any sort of bias could have serious ramifications that need no expanding upon.

Even once we’re aware of the dangers of bias in AI, an AI still needs a framework to develop a general “intuition”, or to train itself (model training), and here is where the topic of discussion comes in. Many of the documents used in day-to-day proceedings that are perceived to be sacrosanct contain biases (from the point of view of an AI). When humans are in charge of proceedings, we expect them to knead out these biases with their discretion, but the same cannot be expected from an AI: any such document that aligns with the biases already existing in the AI will only further reinforce them.

How can we tackle this? Well, one way is to interpret constitutions into basic rules, apply those rules as constraints, and let the algorithm learn from past data (which is another dilemma in itself). Most things point toward AI remaining an assistive technology in certain fields until the very end.
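As a rough sketch of what “rules as constraints” could look like in practice, the snippet below wraps a learned scoring model with hard rule checks that run before the model’s output is trusted. Everything here is invented for illustration: the rule set, the feature names, the threshold, and the stand-in scoring function are all hypothetical.

```python
# Hypothetical sketch: hard rules act as constraints around a learned model.
# The rules, feature names, and scoring function are invented for illustration.

def learned_score(application: dict) -> float:
    # Stand-in for a model trained on historical data
    # (which, as the article notes, may itself be biased).
    return 0.3 + 0.5 * application.get("income_ratio", 0.0)

RULES = [
    # Rules derived from a "constitutional" principle: protected attributes
    # must never reach the decision, so any application carrying them is
    # deferred rather than scored.
    lambda app: "ethnicity" not in app,
    lambda app: "gender" not in app,
]

def decide(application: dict, threshold: float = 0.5) -> str:
    # Constraints are checked first; the learned score only applies
    # when every rule is satisfied.
    if not all(rule(application) for rule in RULES):
        return "refer_to_human"  # constraint violated: defer, don't decide
    return "approve" if learned_score(application) >= threshold else "decline"

print(decide({"income_ratio": 0.6}))                 # approve
print(decide({"income_ratio": 0.6, "gender": "F"}))  # refer_to_human
```

The design choice worth noting is that a violated constraint defers to a human instead of producing a decision at all, which matches the article’s conclusion that AI stays assistive in high-stakes fields.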

Finally, one of the biggest follies of AI is that it learns from data (I know it sounds obvious), and most data is heavily biased toward certain ideologies. This is one of the biggest reasons why AI can only continue to be assistive in certain high-importance fields.
