
AI hallucinations

Posted by Flux on 8 May 2023

What’s trending?
The term AI hallucination refers to instances where generative AI platforms, such as ChatGPT, produce answers that are factually incorrect, irrelevant or nonsensical because of limitations in their training data and architecture. In other words, AI models can ‘hallucinate’ unreliable or misleading responses. These can take the form of false news reports or fabricated claims about people, historical events or scientific facts. In one recent example, ChatGPT cited a Guardian newspaper article to a researcher, but the article had never been written; the bot simply invented the reference. ChatGPT also recently named a real-life law professor as the accused in a made-up sexual harassment scandal, citing a fake Washington Post article as evidence.

Why is it important?
Hallucination has become a major obstacle for AI. This lack of reliability may cause users to lose trust in the technology, hindering adoption across various sectors. In fields such as finance, healthcare and law, AI systems are being used to inform critical decisions, and hallucinations could lead to poor choices. From an ethical standpoint, hallucinated outputs could perpetuate harmful stereotypes and spread misinformation, and they could also expose AI developers to legal liability. As a result, those working in AI are actively looking for solutions to this problem.

What can businesses do about it?
Companies are increasingly turning to artificial intelligence tools and analytics to reduce costs, enhance efficiency, raise performance, and minimise bias in hiring and other job-related decisions. The use of generative AI in business is still in its early stages as organisations work out how to apply the technology, but they need to ensure that the creators of these AI models train them on data that is diverse and representative of the real world. They should also monitor the outputs of their AI models to detect hallucinations, so that any false or misleading outputs can be identified and addressed; a simple example of such a check is sketched below. If a hallucination does slip through, it is essential to provide an explanation to the affected parties. This can help build trust and transparency with customers, employees and other stakeholders. Should you trust what the machine is telling you? Not completely. Human oversight is crucial.
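By way of illustration, here is a minimal sketch of one such monitoring check: it extracts URLs from a model's answer and flags any that do not resolve, a common symptom of an invented citation like the Guardian example above. It assumes a Python environment with the `requests` library installed; the function name and the example URL are hypothetical, and this is one narrow signal rather than a complete hallucination detector.

```python
import re
import requests

# Rough pattern for URLs embedded in free text.
URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def flag_unverifiable_links(model_output: str, timeout: float = 5.0) -> list[str]:
    """Return any URLs in the model output that do not resolve.

    This is only a triage signal: a dead link does not prove the citation
    was hallucinated, and a live link does not prove the page says what
    the model claims, so flagged items still need human review.
    """
    suspect = []
    for url in URL_PATTERN.findall(model_output):
        try:
            # HEAD keeps the check lightweight; some servers reject HEAD,
            # so for triage we treat any 4xx/5xx response as unverified.
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            if response.status_code >= 400:
                suspect.append(url)
        except requests.RequestException:
            suspect.append(url)
    return suspect

if __name__ == "__main__":
    # Hypothetical model answer citing a non-existent article.
    answer = (
        "See the Guardian's coverage at "
        "https://www.theguardian.com/made-up-article-slug for details."
    )
    for url in flag_unverifiable_links(answer):
        print(f"Could not verify citation: {url}")
```

A check like this only automates the first pass; anything it flags should be routed to a person, in line with the point about human oversight.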

By Faeeza Khan

Flux Trends is proud to announce the launch of the Flux Innovation Tour 2023: Meet the Solution-Based, Future Innovators Defining the New World Order.

Join us for this unique full-day tour, designed to shift your thinking and challenge your perceptions of the innovation process by, quite literally, introducing delegates to the future: the young innovators, creatives and entrepreneurs building the future of South Africa, Africa and the world.

CPD points and level: 5 CPD points at MPSA level for Designated Members

AMSA Designated Members can attend the Gen Z Immersion Experience and claim these CPD points as well.

Category:  Non-Marketing and Marketing

CPD Approval Number: MA FT 23003

Certificate of completion to be loaded onto MarkEdonline to claim CPD points.

Image credit: Pawel Czerwinski

