General Discussion
College professor had students grade ChatGPT-generated essays. All 63 essays had hallucinated errors
Found this thread thanks to a quote-tweet from Gary Marcus, the AI expert who testified before Congress, alongside OpenAI CEO Sam Altman, a couple of weeks ago. Marcus saw the thread because he had suggested this exercise. His comment on Twitter: "Every. Single. One."
Link to tweet
C.W. Howell
@cwhowell123
2h
So I followed @GaryMarcus's suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it--look for hallucinated info and critique its analysis. *All 63* essays had hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized. Every single assignment. I was stunned--I figured the rate would be high, but not that high.

The biggest takeaway from this was that the students all learned that it isn't fully reliable. Before doing it, many of them were under the impression it was always right. Their feedback largely focused on how shocked they were that it could mislead them. Probably 50% of them were unaware it could do this. All of them expressed fears and concerns about mental atrophy and the possibility for misinformation/fake news. One student was worried that their neural pathways formed from critical thinking would start to degrade or weaken. One other student opined that AI both knew more than us but is dumber than we are since it cannot think critically. She wrote, "Im not worried about AI getting to where we are now. Im much more worried about the possibility of us reverting to where AI is."
I'm thinking I should write an article on this and pitch it somewhere...
C.W. Howell is Christopher Howell: https://www.linkedin.com/in/christopher-howell-6ba00b242?trk=people-guest_people_search-card . Re the science fiction video game he was lead writer on: https://opencritic.com/game/5383/the-minds-eclipse/reviews .
47 replies
highplainsdem
May 2023
OP
" I'm much more worried about the possibility of us reverting to where AI is."
WestMichRad
May 2023
#2
I think the difficulty for most of us is in "context switching". When we're interacting with a tool
erronis
May 2023
#12
The best way to explain the productivity destruction of context switching!
Lucky Luciano
May 2023
#24
Very nice depiction of the tangled mess we weave. I think a lot of current software
erronis
May 2023
#32
Did the Wendy's drive-thru, I handed $20 on an $18.60 order and got $7.40 back. Returned it.
TheBlackAdder
May 2023
#30
The correct change is $1.40. You got $7.40 back. Could the cashier have calculated it on a
progree
May 2023
#31
I think you are right - Garry Kasparov was the Grand Master. And he now knows a lot about AI.
erronis
May 2023
#13
Not really. Only linked pages. There's a lot of content that isn't accessed without
erronis
May 2023
#14
The future of Chat bots won't be trained on available datasets like the internet.
Yavin4
May 2023
#19
It probably will get better, but it could also stall out on improving accuracy...
Silent3
May 2023
#35