
Metaphorical

(2,556 posts)
2. I have worked with various forms of AI since the 1980s.
Fri Oct 31, 2025, 01:42 AM
Oct 31

There are several major flaws in your post. The first is that LLMs in general have an accuracy rate of approximately 70% (across the board; OpenAI's models are less accurate than some others). This means that roughly 30% of the time, the information you receive from an AI will be wrong in some critical way. There are sound mathematical reasons why this is the case, and it's fairly fundamental to the underlying transformer model; it has been known for a decade or so. AI can be useful - I use it myself for intellisense, where I know I can reasonably count on the underlying patterns - but even here, the benefit I get from such LLMs has to be weighed against the additional time I now spend analysing the results to make sure that what I'm getting back is valid, and correcting it when it's not.

OpenAI does not "self-train". It (and others like it) typically employs many people (at very low wages) to filter and "pretrain" its data, often at considerable psychological cost to those workers; that labour means much of the hard task of classification has already been done, but it is not sustainable. There have been many attempts to generate synthetic content for pre-training; however, because of the way latent spaces generate narrative threads (something I won't get into here), the mock training data tends to lose intrinsic context, becoming blander and more smoothed-down with each round - much like repeatedly photocopying a photocopy.
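The "photocopy of a photocopy" effect can be seen in a toy simulation. The sketch below is not any real training pipeline - it just repeatedly refits an extremely simple "model" (a Gaussian's mean and spread) to samples drawn from the previous generation's fit. All numbers are illustrative assumptions; the point is only that sampling noise compounds across generations, so the fitted distribution's spread tends to collapse and the "data" gets blander:

```python
import random
import statistics

# Toy illustration of training on your own generated output.
# Generation 0 is the "real" data distribution; each later generation
# is fit only to samples produced by the generation before it.
random.seed(42)

mu, sigma = 0.0, 1.0   # generation-0 model: mean and spread
n = 10                 # samples per generation (small, so drift is visible)

history = [sigma]
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)     # refit mean to our own output
    sigma = statistics.stdev(samples)  # refit spread to our own output
    history.append(sigma)

print(f"spread after   0 generations: {history[0]:.3f}")
print(f"spread after 200 generations: {history[-1]:.3f}")
```

In expectation the log of the spread drifts downward every round (Jensen's inequality applied to the variance estimate), so diversity is lost even though each individual refit looks unbiased - a crude analogue of why purely synthetic corpora degrade.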

Finally, we have effectively used the bulk of the Internet to train the models behind things like ChatGPT 4 (yes, we're up to 5, but the document corpora have not changed significantly), and this means we're seeing a plateauing of improvements, in model design in particular.

There are some interesting areas of research (especially into world models) that I suspect may provide a better approach to GenAI, but right now the consortium of the "Magnificent Seven" companies that effectively underwrite AI are reluctant to go down that road, because it is not beneficial to their longer-term goal of getting the American public to build data centres for them.

Yes, AI might (almost certainly will) improve, but it's probably not going to happen with this architecture, nor with the incredible number of very questionable financial plays going on behind the architecture.
