
Biased and hallucinatory AI models can produce inequitable results


“Code me a treasure-hunting game.” “Cover ‘Gangnam Style’ by Psy in the style of Adele.” “Create a photorealistic, closeup video of two pirate ships battling each other as they sail inside a cup of coffee.” Even that final prompt is no exaggeration – today’s best AI tools can create all these and more in minutes, making AI seem like a modern-day form of magic.

We know, of course, that it isn’t magic. In fact, a huge amount of work, instruction and information goes into the models that power GenAI and produce its output. AI systems need to be trained to learn patterns from data: GPT-3, the base model of ChatGPT, was trained on 45TB of Common Crawl data, the equivalent of around 45 million 100-page PDF documents. In the same way that we humans learn from experience, training helps AI models to better understand and process information. Only then can they make accurate predictions, perform important tasks and improve over time.
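The point that models learn patterns from their training data – including any skews in it – can be shown in miniature. The following is a deliberately simplified, hypothetical sketch (the data and the trivial "model" are invented for illustration, not drawn from any real system): a model whose training examples are 90% one outcome will simply reproduce that imbalance.

```python
from collections import Counter

# Hypothetical, deliberately skewed "training set": 90 examples of one
# outcome, 10 of another. A model only learns patterns present in its
# data, so this skew becomes the model's behavior.
training_labels = ["approve"] * 90 + ["deny"] * 10

counts = Counter(training_labels)

def predict() -> str:
    # A trivial stand-in for a model: always output the most frequent
    # pattern seen during training -- an extreme illustration of how
    # bias in the data surfaces as bias in the output.
    return counts.most_common(1)[0][0]

print(predict())  # prints "approve" -- the skew, not a balanced judgment
```

Real models are vastly more sophisticated than a frequency counter, but the mechanism is the same in kind: whatever regularities dominate the training data, fair or not, are what the model reproduces.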
