What Are AI Hallucinations?

What are AI hallucinations—and why do they matter in schools?

In this AI Foundations video from Ed3, we explain AI hallucinations: moments when an AI tool gives an answer that sounds correct, confident, and polished—but is actually false, misleading, or completely invented.

AI hallucinations happen because most AI systems (especially LLMs) don’t “know” facts. They generate responses by predicting the next most likely word based on patterns in training data. When the model lacks good grounding sources—or the data is incomplete, contradictory, or outdated—it may still fill in the blanks with something convincing… and wrong.
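To make that mechanism concrete, here is a deliberately tiny, purely illustrative Python sketch (nothing like a real LLM, and not how any product is built). It shows the core behavior the video describes: a next-word predictor always returns its most likely continuation, even for a context it has never seen, rather than saying "I don't know."

```python
# A toy "next word predictor" -- an illustration only.
# The phrases and counts below are made-up "training data".
from collections import Counter

next_word_counts = {
    "the first person on the moon was": Counter({"Neil": 9, "an": 1}),
    "the first teacher selected for spaceflight was": Counter({"Christa": 4}),
}

def predict_next(context: str) -> str:
    counts = next_word_counts.get(context)
    if counts is None:
        # No grounding for this context: fall back to the most common word
        # overall and answer anyway -- this is where made-up content creeps in.
        pooled = Counter()
        for c in next_word_counts.values():
            pooled.update(c)
        counts = pooled
    return counts.most_common(1)[0][0]

print(predict_next("the first person on the moon was"))   # "Neil" (supported by the data)
print(predict_next("the first teacher on the moon was"))  # still answers confidently, with no support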

This video covers:

  • What an AI hallucination is (with clear examples)
  • Why AI often answers anyway instead of saying “I don’t know”
  • When hallucinations are more likely: niche/new topics, long prompts, and precise facts
  • A real-world case where AI-generated fake legal citations caused serious consequences
  • Why teachers and students should never treat AI as a “source of truth”
  • How hallucinations can compromise student learning (wrong dates, facts, explanations)
  • How hallucinations can sneak into teacher resources (quizzes, lesson plans, research summaries)
  • Three practical ways to reduce risk

A common misconception is that hallucinations are rare “glitches.” In reality, they’re a predictable outcome of how many AI systems generate text—especially when users ask for exact facts without providing reliable source material.
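One practical countermeasure from the video is grounding: give the AI the trusted text it should answer from. As a minimal sketch (the prompt wording below is only an illustrative suggestion, not a guaranteed fix, and you still need to verify the answer against the source):

```python
# Strategy one: ground your query in a source you trust.
source_text = (
    "Neil Armstrong became the first person to walk on the Moon "
    "on 20 July 1969, during the Apollo 11 mission."
)

question = "Who was the first person to walk on the Moon, and in what year?"

grounded_prompt = (
    "Answer using ONLY the source below. Quote the sentence you relied on, "
    "and reply 'I don't know' if the source does not contain the answer.\n\n"
    f"SOURCE:\n{source_text}\n\n"
    f"QUESTION:\n{question}"
)

print(grounded_prompt)  # paste this into whichever AI tool you use
```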

This video is part of the AI Foundations series by Ed3, supporting educators worldwide in making informed, ethical, and human-centered decisions about AI in classrooms.

👉 Learn more about Ed3: https://www.ed3global.org

👉 Explore professional learning, courses, and events designed for educators navigating AI responsibly.

👉 Join our community of practice: https://community.ed3global.org

Timestamps

00:14 A confident wrong answer

00:31 What an AI hallucination is

00:40 Why AI doesn’t “know” facts

00:58 When hallucinations are more likely

01:20 Why AI answers anyway

01:36 A real-world legal example

01:48 Why this matters for students and teachers

02:09 Verify before you trust

02:18 Three strategies to handle hallucinations

02:55 Turn it into media literacy instruction

Transcript

1

00:00:14,700 --> 00:00:15,866

Candy,

2

00:00:15,866 --> 00:00:17,600

you know that's not right,

3

00:00:17,600 --> 00:00:18,833

yeah?

4

00:00:18,833 --> 00:00:19,433

Neil Armstrong

5

00:00:19,433 --> 00:00:21,100

was the first person on the moon,

6

00:00:21,100 --> 00:00:22,733

but Christa McAuliffe

7

00:00:22,733 --> 00:00:24,900

was the first teacher in space.

8

00:00:24,900 --> 00:00:26,633

There's a name for this

9

00:00:26,633 --> 00:00:28,200

kind of made-up answer:

10

00:00:28,200 --> 00:00:31,200

it’s called an AI hallucination.

11

00:00:31,300 --> 00:00:33,000

An AI hallucination

12

00:00:33,000 --> 00:00:34,966

is when the AI generates information

13

00:00:34,966 --> 00:00:36,466

that sounds correct

14

00:00:36,466 --> 00:00:38,633

but is actually false, misleading,

15

00:00:38,633 --> 00:00:40,633

or completely invented.

16

00:00:40,633 --> 00:00:42,533

AI doesn't know facts.

17

00:00:42,533 --> 00:00:43,066

It predicts

18

00:00:43,066 --> 00:00:44,600

the next most likely word

19

00:00:44,600 --> 00:00:47,200

based on patterns in its training data.

20

00:00:47,200 --> 00:00:47,866

If it doesn't have

21

00:00:47,866 --> 00:00:49,100

the right grounding sources,

22

00:00:49,100 --> 00:00:50,133

or if the training data

23

00:00:50,133 --> 00:00:51,666

has gaps, contradictions,

24

00:00:51,666 --> 00:00:53,533

or outdated information,

25

00:00:53,533 --> 00:00:55,333

it might still fill in the blank

26

00:00:55,333 --> 00:00:57,533

with something that sounds convincing,

27

00:00:57,533 --> 00:00:58,533

but it isn't.

28

00:00:58,533 --> 00:00:59,633

This is especially true

29

00:00:59,633 --> 00:01:01,566

for niche or new topics,

30

00:01:01,566 --> 00:01:03,033

long or complex prompts,

31

00:01:03,033 --> 00:01:05,533

and for precise facts.

32

00:01:05,533 --> 00:01:08,300

In this example, the data only consists

33

00:01:08,300 --> 00:01:11,300

of green, red, and orange things.

34

00:01:11,400 --> 00:01:13,733

So when I asked for blue things,

35

00:01:13,733 --> 00:01:15,033

it presented green, red

36

00:01:15,033 --> 00:01:17,766

and orange things, but in a blue tint.

37

00:01:17,766 --> 00:01:20,600

This is a hallucination.

38

00:01:20,600 --> 00:01:21,800

Now you're probably thinking,

39

00:01:21,800 --> 00:01:23,366

why doesn't it just tell you

40

00:01:23,366 --> 00:01:24,666

that it doesn't know?

41

00:01:24,666 --> 00:01:24,966

Well, it's

42

00:01:24,966 --> 00:01:26,633

because AI has been programmed

43

00:01:26,633 --> 00:01:27,833

to give you an answer

44

00:01:27,833 --> 00:01:29,200

no matter what.

45

00:01:29,200 --> 00:01:31,266

Its algorithm would rather be

46

00:01:31,266 --> 00:01:32,500

overconfidently incorrect

47

00:01:32,500 --> 00:01:34,733

with its very best educated guess,

48

00:01:34,733 --> 00:01:36,833

than completely silent.

49

00:01:36,833 --> 00:01:39,033

In the Mata v. Avianca case,

50

00:01:39,033 --> 00:01:39,900

ChatGPT

51

00:01:39,900 --> 00:01:41,366

gave this lawyer several

52

00:01:41,366 --> 00:01:42,900

non-existent federal

53

00:01:42,900 --> 00:01:45,366

and state court decisions as precedent

54

00:01:45,366 --> 00:01:46,633

to support his argument.

55

00:01:48,766 --> 00:01:50,900

When students and teachers use AI,

56

00:01:50,900 --> 00:01:51,766

it can be dangerous

57

00:01:51,766 --> 00:01:53,933

to consider it a source of truth.

58

00:01:53,933 --> 00:01:54,866

For example,

59

00:01:54,866 --> 00:01:57,533

if a student asks AI for historical dates

60

00:01:57,533 --> 00:01:59,266

and it provides the wrong years,

61

00:01:59,266 --> 00:01:59,800

the student's

62

00:01:59,800 --> 00:02:01,833

learning will be compromised.

63

00:02:01,833 --> 00:02:02,900

And for teachers,

64

00:02:02,900 --> 00:02:04,300

these types of hallucinations

65

00:02:04,300 --> 00:02:05,666

can challenge the reliability

66

00:02:05,666 --> 00:02:06,533

of the resources

67

00:02:06,533 --> 00:02:07,733

they produce, from quizzes

68

00:02:07,733 --> 00:02:09,600

to lesson plans.

69

00:02:09,600 --> 00:02:11,700

Hallucinations can sneak into all kinds

70

00:02:11,700 --> 00:02:13,733

of resources and research.

71

00:02:13,733 --> 00:02:14,033

That's why

72

00:02:14,033 --> 00:02:15,666

we need to verify the information

73

00:02:15,666 --> 00:02:18,033

before we trust it.

74

00:02:18,033 --> 00:02:18,866

There are three things

75

00:02:18,866 --> 00:02:21,633

we can do to deal with hallucinations.

76

00:02:21,633 --> 00:02:22,500

One.

77

00:02:22,500 --> 00:02:24,600

Make our prompts better.

78

00:02:24,600 --> 00:02:25,666

Ground your query

79

00:02:25,666 --> 00:02:27,366

in sources by providing the text

80

00:02:27,366 --> 00:02:29,466

it needs to derive the answer.

81

00:02:29,466 --> 00:02:31,600

When AI produces a response,

82

00:02:31,600 --> 00:02:33,600

ask it to cite the source.

83

00:02:33,600 --> 00:02:35,233

You can also constrain the task

84

00:02:35,233 --> 00:02:36,300

by breaking the problem

85

00:02:36,300 --> 00:02:38,333

or query into steps.

86

00:02:38,333 --> 00:02:40,166

Ask it to list any uncertainties

87

00:02:40,166 --> 00:02:42,333

it has about the query.

88

00:02:42,333 --> 00:02:45,333

Two. Judge before you trust.

89

00:02:45,333 --> 00:02:45,966

Double check

90

00:02:45,966 --> 00:02:47,733

AI answers with a trusted source

91

00:02:47,733 --> 00:02:48,933

and use your own judgment

92

00:02:48,933 --> 00:02:49,966

to carefully read

93

00:02:49,966 --> 00:02:51,833

and understand the response.

94

00:02:51,833 --> 00:02:54,833

If it sounds fishy, it likely is.

95

00:02:55,033 --> 00:02:57,066

And finally. Three.

96

00:02:57,066 --> 00:02:58,166

Use hallucinations

97

00:02:58,166 --> 00:02:59,300

as a teachable moment

98

00:02:59,300 --> 00:03:02,300

about fact checking and media literacy.

99

00:03:02,466 --> 00:03:04,066

AI is powerful,

100

00:03:04,066 --> 00:03:06,000

but it can still make things up.

101

00:03:06,000 --> 00:03:07,466

It's important to be skeptical

102

00:03:07,466 --> 00:03:09,133

and check the sources.

103

00:03:09,133 --> 00:03:10,133

As educators,

104

00:03:10,133 --> 00:03:11,933

knowing how AI hallucinations

105

00:03:11,933 --> 00:03:13,100

work helps us separate

106

00:03:13,100 --> 00:03:14,966

the hype from the reality

107

00:03:14,966 --> 00:03:16,066

so we can make wise

108

00:03:16,066 --> 00:03:17,366

choices for our classrooms.