General Discussion
In reply to the discussion: Something Disturbing Happens When You "Learn" Something With ChatGPT
highplainsdem
to generative AI. The organizing that goes into writing is critical to thinking. It isn't just getting "ink on paper" that then needs some editing to make it your own. Too many students using AI that way now are never learning how to organize their thoughts and communicate effectively.
This is important to me because I've been discussing this for years with educators upset by how genAI is dumbing users down. I've read thousands of articles on AI over the last few years, posted hundreds of threads about it here, and also posted a lot about it elsewhere. Besides the dumbing down of users, there's the theft of intellectual property used to train AI tools, which IMO makes them fundamentally unethical to use. Huge issue. Then there's the environmental damage, the waste of money on the AI bubble, etc.
GenAI isn't like earlier tech. It's much more harmful.
I have nothing against tech in general. I first got a PC and first got online - and moderated a forum on politics and technology - in the mid-1980s. That was before there was a world wide web, when I had to subscribe to three different online services to keep in touch with people for personal and business reasons. I still appreciated the convenience. Thought word processing software was wonderful. Ditto laser printers, though they were expensive then (equivalent to a few thousand dollars today, but computers were also much more expensive then).
Those tools didn't dumb users down and do their thinking for them, and leave them remembering less.
That tech didn't hallucinate while sounding convincing. It didn't have to be checked on every detail because, unlike genAI, it didn't sound rational when it wasn't.
I'm skeptical of AI transcription, and wary of doctors using it, because of the errors it can make and how hard those errors can be to catch. I've seen too many articles on errors made by AI transcription to consider it trustworthy: LLMs not just mangling what was said, but adding statements that weren't made, even adding people to meetings when they weren't there.
I'm sorry you're going to need that surgery.
I was my mom's caregiver for years, took her to lots of doctors' appointments. I'll admit I didn't take notes. I do have a very good memory, and I could catch conflicting/wrong advice.
If I were in your situation, I'd probably take notes from a recording, rather than use AI transcription, to get the most important details as well as anything important about tone of voice. Studies have shown that notes taken by hand are remembered best.
If you simply can't remember, if you have to have a full transcript, I guess you'll feel you have to use AI, though you'll need to check every word.
There was transcription software decades ago, well before LLMs and genAI trained on stolen intellectual property. The Dragon dictation software doctors used goes back a few decades, for instance. It made mistakes, as genAI still does, but at least the older software didn't invent conversations that never happened or people who weren't there.