
Tech that targets bias

Posted by Flux on 9 September 2019

We are living in an age where our dependence on technology has grown to the point that we now trust machines and artificial intelligence (AI) with activities long considered “too sensitive” for them, precisely because machines might learn to perpetuate certain unfavourable human traits. Our trend snack below looks at how humans are trying to iron out unfair bias in AI and technology. Continue reading to find out how far we’ve come…

Textio

Textio is an AI-powered platform created to improve job descriptions by eliminating any traces of bias in them. The platform highlights jargon, unnecessary words and any wording that is gender-specific or otherwise discriminatory. The aim is to make job descriptions more inclusive, thereby opening them up to a larger and more diverse pool of qualified candidates. Textio is on Fast Company’s Most Innovative Companies of 2018 list.
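
To give a rough sense of the kind of check such a platform might run, here is a minimal sketch that flags gender-coded wording in a job advert. The word lists and the flag_gendered_terms function are illustrative assumptions for this example, not Textio’s actual models.

```python
# Minimal sketch of flagging gender-coded wording in a job advert.
# The word lists and this function are illustrative assumptions, not Textio's actual models.
MASCULINE_CODED = {"rockstar", "ninja", "dominant", "competitive", "aggressive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "dependable"}

def flag_gendered_terms(text: str) -> dict:
    """Return the words in the text that appear in either coded-word list."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

print(flag_gendered_terms("We need a competitive rockstar developer who is also collaborative."))
# {'masculine_coded': ['competitive', 'rockstar'], 'feminine_coded': ['collaborative']}
```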

Themis

Themis is software developed by Sainyam Galhotra, Yuriy Brun and Alexandra Meliou to measure two kinds of discrimination found in software: causal discrimination and group discrimination. Themis automatically generates test inputs and methodically probes the software under test (a website or app) to flag discriminatory behaviour. Its aim is to establish fairness on online platforms with regard to people applying for loans or jobs, getting one step closer to ironing out subconscious biases that slip through the cracks when code is written for an online platform. Click here to find out more on how Themis works.
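
In this framing, causal discrimination asks whether changing only a protected attribute (such as gender) flips the system’s decision for otherwise identical inputs. The sketch below illustrates that idea with a hypothetical decide() function standing in for the software under test; it is a simplified illustration of the technique, not the Themis tool itself.

```python
import random

# Hypothetical stand-in for the software under test (e.g. a loan-approval system);
# this toy rule is deliberately biased so the test has something to find.
def decide(applicant: dict) -> bool:
    return applicant["income"] > 50_000 and applicant["gender"] != "female"

def causal_discrimination_rate(decide, protected_key, protected_values, n_samples=1_000):
    """Fraction of randomly generated applicants whose decision changes when
    only the protected attribute is altered (the idea behind a causal test)."""
    flips = 0
    for _ in range(n_samples):
        applicant = {"income": random.randint(10_000, 100_000),
                     "age": random.randint(18, 70)}
        outcomes = {decide({**applicant, protected_key: value})
                    for value in protected_values}
        if len(outcomes) > 1:  # decision depends on the protected attribute alone
            flips += 1
    return flips / n_samples

print(causal_discrimination_rate(decide, "gender", ["male", "female"]))
```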

Algorithmic Justice League

Facial recognition software has in several instances been found to be biased, especially against people of colour and women. Amazon’s facial recognition technology, which struggles to identify darker-skinned female subjects, is a case in point. The Algorithmic Justice League was set up in response to this issue. It is an online community created by Joy Buolamwini, a graduate researcher at the MIT Media Lab, to iron out algorithmic biases detected on online platforms and apps. The issues that are flagged are addressed from design through to the launch of the coded systems, to actively stop the spread of algorithmic bias in software and facial recognition technology. There is also a team from MIT’s Computer Science and Artificial Intelligence Laboratory that is developing an algorithm to “de-bias” training data and make it more balanced. Click here for more information on the work that this team is doing.
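
One simple way to “de-bias” training data is to rebalance it so that under-represented subgroups appear as often as over-represented ones. The sketch below does this by oversampling; it is an illustrative assumption about the general technique, not the CSAIL team’s actual algorithm.

```python
import random
from collections import defaultdict

def rebalance_by_group(samples, group_key):
    """Oversample under-represented groups until every group is equally frequent."""
    by_group = defaultdict(list)
    for sample in samples:
        by_group[sample[group_key]].append(sample)
    target = max(len(group) for group in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)                                         # keep the originals
        balanced.extend(random.choices(group, k=target - len(group)))  # top up smaller groups
    random.shuffle(balanced)
    return balanced

faces = [{"skin_tone": "lighter"}] * 900 + [{"skin_tone": "darker"}] * 100
balanced = rebalance_by_group(faces, "skin_tone")
print(sum(f["skin_tone"] == "darker" for f in balanced), "of", len(balanced))  # 900 of 1800
```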

By Tumelo Mojapelo

About Tumelo

As Head of Content and Foresight Facilitator, Tumelo Mojapelo oversees and directs the research undertaken and content generated by the Flux Trends team.

With a wealth of knowledge and experience in the trends analysis space, her mission is to empower entrepreneurs and business people to make better decisions through an understanding of trends – how seemingly unrelated factors and events have the potential to disrupt current business models and society.

Flux Trends’ experts are available for comment and interviews. For all media enquiries please contact Faeeza Khan on info@fluxtrends.co.za.

To book our corporate presentations please contact Bethea Clayton on connected@fluxtrends.co.za.

Image credit: Caspar Camille Rubin, Textio, Dlanor S and MIT Media Lab

