Artificial Intelligence Bias and Ethical Algorithms

AI has been a part of people’s lives for a long time. Algorithms tailor advertisements to people of different ages and backgrounds, and other systems let users order food online or suggest alternative routes from home to work. However, these systems suffer from a lack of diversity and make assumptions based on flawed data. In her TED talk, Kriti Sharma (2018) gives the example of fertility clinics being advertised to women because this is what the AI has learned from human behavior. Likewise, when a company searches for the perfect IT candidate, the recruiting site’s AI shows the CEO primarily male CVs because the company has historically accepted mainly male candidates. Therefore, algorithmic bias exists, and better guidelines should be implemented for AI.

AI’s algorithmic bias rests on assumptions about people of different age groups, races, and genders. The people chosen to build and train machine-learning systems often come from the same background and share similar abilities, so the diverse needs and opportunities of other social groups are never considered. As Kriti Sharma (2018) states in her TED talk, she received backlash for being a woman in the IT industry, and such an experience should be considered typical rather than exceptional.
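As a minimal illustration of this mechanism (the records and the scoring rule below are invented for this sketch, not taken from Sharma’s talk), a toy Python “hiring model” that learns nothing but historical acceptance rates will reproduce past bias in its rankings:

    from collections import Counter

    # Hypothetical historical hiring decisions: (gender, was_hired)
    history = [
        ("male", True), ("male", True), ("male", True), ("male", False),
        ("female", False), ("female", False), ("female", True), ("female", False),
    ]

    hired = Counter(g for g, h in history if h)
    total = Counter(g for g, _ in history)

    def score(gender):
        # The "model": a candidate's score is simply the historical hire
        # rate of their group, so past bias becomes future ranking.
        return hired[gender] / total[gender]

    print(score("male"))    # 0.75 -> shown to the CEO first
    print(score("female"))  # 0.25 -> ranked lower, regardless of skill

Because the historical records favor male hires, the learned score favors male candidates even though it never measured skill at all.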

Alexa, a personal home assistant with a female voice, is given routine commands such as ordering food or scheduling a meeting, whereas male-voiced assistants are assigned more “serious” business tasks. These AI systems are made by people who carry the same assumptions, imagining a man when talking about a CEO and a woman when talking about a personal assistant. Similarly, AI judges an African American more likely to commit a crime than a Caucasian, which happens because of the bias of the people programming the machine.

To solve the problem of insufficient diversity and to assess human needs correctly, better guidelines for AI must be implemented. Such policies could help address domestic abuse, which is frequent in some African households, because an AI trained on the relevant signals would recognize them and understand the circumstances of a typical home. The people who create AI should train their systems on more diverse cases and on the different backgrounds people come from to ensure a better quality of life.
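One sketch of what such a guideline could mean in practice (rebalancing the training data is an assumption on my part; the essay does not name a specific technique, and the records are the invented ones from the previous sketch): oversample hired examples from under-represented groups so that each group contributes equal positive evidence before the model learns.

    from collections import Counter
    import random

    random.seed(0)

    history = [
        ("male", True), ("male", True), ("male", True), ("male", False),
        ("female", False), ("female", False), ("female", True), ("female", False),
    ]

    # Collect the hired examples per group, then oversample the smaller
    # groups until every group has as many positives as the largest one.
    positives = {g: [r for r in history if r[0] == g and r[1]]
                 for g in ("male", "female")}
    target = max(len(rows) for rows in positives.values())
    balanced = list(history)
    for g, rows in positives.items():
        balanced += random.choices(rows, k=target - len(rows))

    print(Counter(g for g, h in balanced if h))  # equal hires per group

Rebalancing alone does not guarantee fairness, but it shows how “teaching the system more diverse cases” can be made operational rather than remaining a slogan.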

Reference

Sharma, K. (2018). How to keep human bias out of AI [Video]. TED.
