What exactly is an AI agent—and why should educators understand how they work?
In this AI Foundations video from Ed3, we explore AI autonomous agents: systems that can take a goal, make decisions, and carry out actions across multiple steps with limited human input. Unlike typical AI tools that respond only when prompted, agents are designed to keep working toward a goal until they believe the task is complete.
Many AI agents are powered by large language models, but they are built with additional layers that allow them to plan actions, choose tools, remember context over time, and trigger real-world tasks such as sending messages, updating files, or running code. In other words, instead of simply answering questions, they can decide what to do next.
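The loop described above — a goal, a decision step, tool use, memory, and a stopping condition — can be sketched in a few lines of Python. This is a hand-rolled illustration only, not any real agent framework: every name here (`plan_next_step`, `run_agent`, the `TOOLS` table) is invented for the example, and the planner function is a stub standing in for the large language model.

```python
def send_message(text):
    # Stand-in for a real-world action, e.g. emailing a summary
    return f"sent: {text}"

def update_file(name):
    # Stand-in for another real-world action, e.g. editing a document
    return f"updated: {name}"

# The tools the agent is allowed to choose between
TOOLS = {"send_message": send_message, "update_file": update_file}

def plan_next_step(goal, memory):
    """Stub for the LLM: picks the next action from the goal and context."""
    if not memory:
        return ("send_message", "Starting work on: " + goal)
    if len(memory) < 2:
        return ("update_file", "progress.txt")
    return ("done", None)  # the agent believes the task is complete

def run_agent(goal, max_steps=10):
    memory = []  # context the agent carries between steps
    for _ in range(max_steps):  # cap steps so the agent cannot loop forever
        action, arg = plan_next_step(goal, memory)
        if action == "done":
            return memory
        memory.append(TOOLS[action](arg))  # trigger an action, remember it
    return memory

print(run_agent("plan a field trip"))
```

The key design point for educators is visible in the loop itself: the human sets the goal and the list of permitted tools, while the agent only chooses the order of steps within those boundaries.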
For educators, that shift matters.
AI agents are already appearing in education—sometimes without being labeled as such. Systems that automatically adjust assignments, monitor engagement data, flag students who may need support, or sequence learning content are increasingly acting with agent-like autonomy. As these systems begin making decisions within learning environments, the role of human judgment becomes even more important.
This video explores what makes AI agents different, how they operate, and what educators should consider as these systems become more common in classrooms.
A common misconception is that AI agents are simply more advanced chatbots. The deeper shift is that agents move from responding to instructions to acting toward goals, sometimes using tools, data, and automated steps along the way. Understanding that distinction helps educators think more clearly about oversight, responsibility, and human decision-making in AI-supported learning environments.
This video is part of the AI Foundations series by Ed3, supporting educators worldwide in making informed, ethical, and human-centered decisions about AI in classrooms.
👉 Learn more about Ed3: https://www.ed3global.org
👉 Explore professional learning, courses, and events designed for educators navigating AI responsibly.
👉 Join our community of practice: https://community.ed3global.org
00:13 Meet Eddie: an example AI agent
00:19 What an AI autonomous agent is
00:30 The key difference between agents and typical AI tools
00:48 How agents decide what to do next
01:03 What powers AI agents under the hood
01:17 What people mean when they say “my AI decided”
01:29 What makes an agent autonomous
01:38 Simulated beliefs, desires, and intentions
02:15 Why today’s agents are still limited
02:28 How agents communicate and optimize interaction
02:39 Memory and long-term task tracking
02:45 Where AI agents are already appearing in education
03:05 Why this changes the role of teacher judgment
03:17 The risk to student agency
03:30 Three ways educators can respond
03:38 Setting clear boundaries
03:47 Keeping humans in the loop
03:57 Making agency visible to students
1
00:00:13,100 --> 00:00:13,933
That's Eddie.
2
00:00:13,933 --> 00:00:16,300
He's my AI autonomous agent.
3
00:00:16,300 --> 00:00:18,400
He's going to be giving me a hand today.
4
00:00:18,400 --> 00:00:19,500
An AI autonomous
5
00:00:19,500 --> 00:00:21,666
agent is a system that can take a goal,
6
00:00:21,666 --> 00:00:22,766
make decisions,
7
00:00:22,766 --> 00:00:24,733
and carry out actions, often across
8
00:00:24,733 --> 00:00:28,066
multiple steps with limited human input.
9
00:00:28,666 --> 00:00:30,000
Here's a simple analogy
10
00:00:30,000 --> 00:00:30,900
that explains the key
11
00:00:30,900 --> 00:00:33,100
difference between an AI agent
12
00:00:33,100 --> 00:00:36,100
and a typical AI tool like an LLM.
13
00:00:36,300 --> 00:00:36,900
A typical
14
00:00:36,900 --> 00:00:38,033
AI tool responds
15
00:00:38,033 --> 00:00:39,100
when you ask it something,
16
00:00:39,100 --> 00:00:41,800
or give it instructions like “go left”
17
00:00:41,800 --> 00:00:44,700
“now go right” and “go right again”.
18
00:00:44,700 --> 00:00:47,200
An AI agent keeps going until it believes
19
00:00:47,200 --> 00:00:48,833
the task is done.
20
00:00:48,833 --> 00:00:51,400
Eddie, go find the cheese.
21
00:00:51,400 --> 00:00:52,333
You give it a goal
22
00:00:52,333 --> 00:00:53,933
like “plan a trip”,
23
00:00:53,933 --> 00:00:55,066
“monitor student progress”,
24
00:00:55,066 --> 00:00:56,966
or “optimize my schedule”,
25
00:00:56,966 --> 00:00:59,200
then the agent decides what to do next,
26
00:00:59,200 --> 00:01:01,766
what tools to use, and when to stop.
27
00:01:02,866 --> 00:01:03,800
Under the hood,
28
00:01:03,800 --> 00:01:04,633
many agents
29
00:01:04,633 --> 00:01:06,900
are powered by large language models,
30
00:01:06,900 --> 00:01:08,633
but they're wrapped in extra layers
31
00:01:08,633 --> 00:01:09,966
that allow them to remember
32
00:01:09,966 --> 00:01:11,433
context over time,
33
00:01:11,433 --> 00:01:13,833
choose between tools and trigger actions
34
00:01:13,833 --> 00:01:15,500
like sending messages, updating
35
00:01:15,500 --> 00:01:17,766
files, or running code.
36
00:01:17,766 --> 00:01:19,066
So if someone says
37
00:01:19,066 --> 00:01:21,366
“My AI decided to do this”,
38
00:01:21,366 --> 00:01:22,500
what they usually mean is
39
00:01:22,500 --> 00:01:24,300
the agent was given permission
40
00:01:24,300 --> 00:01:26,733
to act on their behalf.
41
00:01:26,733 --> 00:01:29,600
So what makes an agent autonomous?
42
00:01:29,600 --> 00:01:30,400
Well,
43
00:01:30,400 --> 00:01:31,833
autonomous agents are programmed
44
00:01:31,833 --> 00:01:34,600
to be just that: autonomous
45
00:01:34,600 --> 00:01:36,266
and agentic.
46
00:01:36,266 --> 00:01:38,466
They can be reactive and proactive,
47
00:01:38,466 --> 00:01:39,900
meaning that they not only do
48
00:01:39,900 --> 00:01:41,300
what you've asked them to do,
49
00:01:41,300 --> 00:01:42,100
but they also use
50
00:01:42,100 --> 00:01:43,500
their own assessment of the task
51
00:01:43,500 --> 00:01:45,900
to offer suggestions and actions
52
00:01:45,900 --> 00:01:48,333
in order to complete the task.
53
00:01:48,333 --> 00:01:49,966
They have simulated beliefs,
54
00:01:49,966 --> 00:01:52,033
desires, and intentions.
55
00:01:52,033 --> 00:01:53,566
And this is a big one.
56
00:01:53,566 --> 00:01:54,900
This means that they are programmed
57
00:01:54,900 --> 00:01:57,200
to have an internal compass.
58
00:01:57,200 --> 00:01:59,500
Remember the movie ‘I, Robot’?
59
00:01:59,500 --> 00:02:01,700
VIKI the supercomputer believed
60
00:02:01,700 --> 00:02:04,466
she needed to save humanity at large,
61
00:02:04,466 --> 00:02:05,933
even if it meant destroying
62
00:02:05,933 --> 00:02:07,766
it before rebuilding.
63
00:02:07,766 --> 00:02:09,466
[VIKI] To ensure your future,
64
00:02:09,466 --> 00:02:12,100
[VIKI] some freedoms must be surrendered.
65
00:02:12,100 --> 00:02:14,266
[VIKI] We must save you from yourselves.
66
00:02:15,500 --> 00:02:16,700
But don't worry.
67
00:02:16,700 --> 00:02:18,800
We're nowhere near that reality.
68
00:02:18,800 --> 00:02:20,900
Today, the simulated beliefs, desires,
69
00:02:20,900 --> 00:02:21,566
and intentions
70
00:02:21,566 --> 00:02:22,500
are meant to help you
71
00:02:22,500 --> 00:02:25,500
execute simple, multi-step tasks.
72
00:02:25,800 --> 00:02:28,366
Agents also have simulated social ability
73
00:02:28,366 --> 00:02:29,600
and communication
74
00:02:29,600 --> 00:02:31,233
so they can optimize their engagement
75
00:02:31,233 --> 00:02:32,066
with you.
76
00:02:32,066 --> 00:02:33,533
They have an internal constitution
77
00:02:33,533 --> 00:02:36,533
that allows them to be task-oriented.
78
00:02:36,733 --> 00:02:37,866
And finally,
79
00:02:37,866 --> 00:02:39,266
they have long-term memory
80
00:02:39,266 --> 00:02:39,900
that stores
81
00:02:39,900 --> 00:02:41,033
unfinished tasks
82
00:02:41,033 --> 00:02:44,033
and can track the passing of time.
83
00:02:44,133 --> 00:02:45,400
In education,
84
00:02:45,400 --> 00:02:47,766
AI agents are already showing up,
85
00:02:47,766 --> 00:02:49,666
even if we don't call them that.
86
00:02:49,666 --> 00:02:51,400
Examples include systems
87
00:02:51,400 --> 00:02:53,133
that automatically adjust assignments
88
00:02:53,133 --> 00:02:54,966
based on student performance,
89
00:02:54,966 --> 00:02:56,566
tools that monitor engagement
90
00:02:56,566 --> 00:02:57,833
data and flag students
91
00:02:57,833 --> 00:02:59,200
who might need support,
92
00:02:59,200 --> 00:03:00,000
and platforms
93
00:03:00,000 --> 00:03:01,500
that sequence learning content
94
00:03:01,500 --> 00:03:04,166
without a teacher approving each step.
95
00:03:04,166 --> 00:03:05,133
For teachers,
96
00:03:05,133 --> 00:03:07,500
this changes the role of judgment.
97
00:03:07,500 --> 00:03:08,700
If an agent is deciding
98
00:03:08,700 --> 00:03:10,000
what happens next,
99
00:03:10,000 --> 00:03:12,366
we need to ask: who set the goal,
100
00:03:12,366 --> 00:03:13,766
what data is it using,
101
00:03:13,766 --> 00:03:16,766
and where does human oversight step in?
102
00:03:16,766 --> 00:03:17,900
For students,
103
00:03:17,900 --> 00:03:19,866
the risk isn't just accuracy.
104
00:03:19,866 --> 00:03:21,366
It's agency.
105
00:03:21,366 --> 00:03:22,933
If learners get used to systems
106
00:03:22,933 --> 00:03:25,166
that decide, plan and act for them,
107
00:03:25,166 --> 00:03:26,533
we have to be intentional
108
00:03:26,533 --> 00:03:27,933
about preserving decision
109
00:03:27,933 --> 00:03:29,600
making as a human skill.
110
00:03:30,566 --> 00:03:31,300
There are three
111
00:03:31,300 --> 00:03:32,766
ways educators can engage
112
00:03:32,766 --> 00:03:35,133
with AI agents wisely.
113
00:03:35,133 --> 00:03:38,133
First, be explicit about boundaries.
114
00:03:38,566 --> 00:03:40,333
If a tool can take action,
115
00:03:40,333 --> 00:03:40,866
understand
116
00:03:40,866 --> 00:03:42,500
exactly what it's allowed to do
117
00:03:42,500 --> 00:03:45,266
and what still requires a human.
118
00:03:45,266 --> 00:03:48,266
Second, keep humans in the loop.
119
00:03:48,300 --> 00:03:49,533
Agents should support
120
00:03:49,533 --> 00:03:50,633
professional judgment,
121
00:03:50,633 --> 00:03:53,366
not replace it. Drafts are fine,
122
00:03:53,366 --> 00:03:56,100
but final decisions are human work.
123
00:03:56,100 --> 00:03:57,100
And third,
124
00:03:57,100 --> 00:03:59,566
make agency visible to students.
125
00:03:59,566 --> 00:04:00,366
Talk openly
126
00:04:00,366 --> 00:04:01,600
about when a system
127
00:04:01,600 --> 00:04:02,833
is making choices versus
128
00:04:02,833 --> 00:04:04,200
when a person is.
129
00:04:04,200 --> 00:04:05,700
This builds critical awareness
130
00:04:05,700 --> 00:04:08,233
instead of quiet dependence.
131
00:04:08,233 --> 00:04:10,600
AI agents can save time, surface
132
00:04:10,600 --> 00:04:13,300
insights, and reduce cognitive load,
133
00:04:13,300 --> 00:04:14,700
but only when they're designed
134
00:04:14,700 --> 00:04:16,566
and used with intention.
135
00:04:16,566 --> 00:04:18,533
They don't understand goals.
136
00:04:18,533 --> 00:04:20,100
They execute them.
137
00:04:20,100 --> 00:04:21,466
Our role is to decide
138
00:04:21,466 --> 00:04:23,533
which goals are worth automating
139
00:04:23,533 --> 00:04:26,333
and which ones should always stay human.
140
00:04:26,333 --> 00:04:28,333
Because while AI can act,
141
00:04:28,333 --> 00:04:30,300
teachers still choose what matters.
142
00:04:31,233 --> 00:04:32,500
As educators,
143
00:04:32,500 --> 00:04:34,366
knowing what AI agents are
144
00:04:34,366 --> 00:04:35,133
helps us separate
145
00:04:35,133 --> 00:04:37,200
the hype from the reality
146
00:04:37,200 --> 00:04:38,300
so we can make wise
147
00:04:38,300 --> 00:04:39,733
choices for our classrooms.