Photo by Gabriella Clair Marino on Unsplash
Since the launch of ChatGPT, followed by a wave of clones, mimics, improvements, and other artificial intelligence (AI) applications, there has been a raft of articles and editorials on the subject, some going so far as to present doomsday scenarios predicting that, if left unchecked, AI will take over the world and destroy us. Less frantic, but no less hair-on-fire, are predictions that AI will make certain human workers obsolete and will retard learning because students will no longer know how to do research or even write their own essays.
Now, there is a certain amount of validity to those latter two opinions. There are likely to be some jobs that AI can do better, more efficiently, and more cheaply than humans. But I predict these will be the drudge, number-crunching jobs that most humans hate doing anyway; a computer can crunch numbers faster and more accurately than the smartest human. There will still be a requirement, though, for humans to decide what to do with those crunched numbers.
As for the impact on students, if educators abdicate their responsibility to set clear standards and requirements and to monitor their students' activities, there could be situations where students 'let the AI do it' and never acquire research and communication skills of their own. I teach online graduate courses in geopolitics, for example, and I use AI for baseline grading, with extensive manual input from me, which frees me to focus on students who are having problems and gives me the time to carefully review their written assignments. I forbid my students to use AI to write their essays, and I caution them, when they use AI for research, to verify everything it provides, preferably against at least one or two non-AI sources. Properly used, AI can be an aid in compiling sources for further study and in establishing outlines for projects.
Because AI, like a human researcher, draws the information it provides from the Internet, it can sometimes be wrong, just as a human researcher who doesn't try to verify what pops up on the screen in a search can be wrong. A good example in the news recently was Michael Cohen, Donald Trump's former lawyer and fixer, having to admit that in a court filing he had submitted phony legal cases that had been provided by an AI. You can't blame the AI for this. Depending on how he worded his request, it produced what looked like relevant cases, but the Internet is like an open shelf: anyone with a computer can put anything on it. If you grab things off the shelf without examining them closely, you, or the AI, just might get the wrong thing.
So, let's stop blaming the AI when things go wrong. AI is a tool, and like any tool it can be used or misused. Don't blame the tool; blame the mechanic.