We are living in an age where our dependence on technology has grown to the point that we trust machines and artificial intelligence (AI) with activities once considered too sensitive for them, even though they might learn to perpetuate unfavourable human traits. Our trend snack below looks at how humans are trying to iron out unfair bias in AI and technology. Continue reading to find out how far we’ve come.
Textio is an AI-powered platform created to improve job descriptions by eliminating any traces of bias in them. The platform highlights jargon, unnecessary words and any language that is gender-specific or otherwise discriminatory. The aim is to make job descriptions more inclusive, opening them up to a greater pool of qualified candidates. Textio is on Fast Company’s Most Innovative Companies of 2018 list.
Themis is software developed by Sainyam Galhotra, Yuriy Brun and Alexandra Meliou to measure two kinds of discrimination found in software: causal discrimination and group discrimination. Themis automatically generates tests and methodically probes a piece of software (a website or app) to flag discriminatory behaviour. Its aim is to establish fairness on online platforms for people applying for loans or jobs, moving one step closer to ironing out the subconscious biases that slip through the cracks while code is being written for an online platform. Click here to find out more on how Themis works.
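To make the two kinds of discrimination concrete, here is a minimal sketch in the spirit of Themis-style fairness testing. It is not Themis itself: the `loan_decision` function, its fields and its deliberate bias are all hypothetical stand-ins for a real system under test. Causal discrimination asks whether changing only a protected attribute flips the decision for the same applicant; group discrimination compares overall approval rates between groups.

```python
import random

def loan_decision(applicant):
    """Toy decision function under test (hypothetical, deliberately biased)."""
    score = applicant["income"] / 1000 + applicant["credit_years"]
    if applicant["gender"] == "female":  # the injected bias the tests should detect
        score -= 2
    return score >= 50

def causal_discrimination_rate(decide, samples, protected="gender",
                               values=("male", "female")):
    """Fraction of inputs whose outcome flips when only the protected attribute changes."""
    flips = 0
    for applicant in samples:
        outcomes = {decide({**applicant, protected: v}) for v in values}
        if len(outcomes) > 1:  # same applicant, different decision per group
            flips += 1
    return flips / len(samples)

def group_discrimination(decide, samples, protected="gender",
                         values=("male", "female")):
    """Largest gap in approval rates between groups."""
    rates = []
    for v in values:
        group = [{**a, protected: v} for a in samples]
        rates.append(sum(decide(a) for a in group) / len(group))
    return max(rates) - min(rates)

random.seed(0)
samples = [{"income": random.randint(20000, 80000),
            "credit_years": random.randint(0, 20),
            "gender": random.choice(["male", "female"])}
           for _ in range(1000)]

print("causal discrimination rate:", causal_discrimination_rate(loan_decision, samples))
print("group discrimination gap:", group_discrimination(loan_decision, samples))
```

Running the tests against the biased toy function reports a non-zero score for both measures; against a function that ignores gender entirely, both would be zero.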
Facial recognition software has in several instances been found to be biased, especially against people of colour and against women. Amazon’s facial recognition technology, which struggles to identify women of colour, is a case in point. The Algorithmic Justice League has been set up in response to this issue. It is an online community created by Joy Buolamwini, a graduate researcher at the MIT Media Lab, to iron out algorithmic biases detected on online platforms and in apps. The issues that are flagged are addressed from the design stage through to the launch of the coded systems, to actively stop the spread of algorithmic bias in software and facial recognition technology. There is also a team from MIT’s Computer Science and Artificial Intelligence Laboratory that is developing an algorithm to “de-bias” data to make it more balanced. Click here for more information on the work that this team is doing.
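One simple way to picture what “de-biasing data to make it more balanced” can mean is resampling an unbalanced training set so that each group is equally represented. The sketch below is an assumption for illustration only, not the MIT team’s actual algorithm; the `rebalance` helper and the toy face-dataset fields are hypothetical.

```python
import random
from collections import defaultdict

def rebalance(dataset, key):
    """Oversample smaller groups so every value of `key` appears equally often."""
    groups = defaultdict(list)
    for row in dataset:
        groups[row[key]].append(row)
    target = max(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(g)
        # duplicate random members of under-represented groups up to the target size
        balanced.extend(random.choices(g, k=target - len(g)))
    return balanced

random.seed(1)
# Toy, deliberately skewed dataset: 90 light-skinned faces vs 10 dark-skinned faces
faces = ([{"id": i, "skin_tone": "light"} for i in range(90)] +
         [{"id": i, "skin_tone": "dark"} for i in range(90, 100)])

balanced = rebalance(faces, "skin_tone")
counts = defaultdict(int)
for row in balanced:
    counts[row["skin_tone"]] += 1
print(dict(counts))  # both groups are now the same size
```

Oversampling is only one of several balancing strategies (undersampling and reweighting are common alternatives), but it shows the basic idea: a model trained on the balanced set sees each group equally often.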
By Tumelo Mojapelo
As Head of Content and Foresight Facilitator, Tumelo Mojapelo oversees and directs the research undertaken and content generated by the Flux Trends team.
With a wealth of knowledge and experience in the trends analysis space, her mission is to empower entrepreneurs and business people to make better decisions through an understanding of trends – how seemingly unrelated factors and events have the potential to disrupt current business models and society.
Flux Trends’ experts are available for comment and interviews. For all media enquiries please contact Faeeza Khan on firstname.lastname@example.org.