89 replies
So, I'm gonna stir the pot a bit... don't be too harsh... (Original Post) Joinfortmill 8 hrs ago OP
Its a loaded gun, given to a child. Swede 8 hrs ago #1
AI is simply a change accelerant to make the wealthy wealthier more quickly Tim S 7 hrs ago #2
Best explanation of it I've heard so far Walleye 7 hrs ago #3
Except this time genxlib 7 hrs ago #10
Elon Musk promises an age of abundance thought crime 6 hrs ago #23
Elitist billionaires... Safe as Milk 5 hrs ago #53
I believe he said that is one possible path genxlib 5 hrs ago #57
Even Elmo doesn't know what he has in mind. paleotn 3 hrs ago #70
Oh dear lord. Not this shit again. paleotn 4 hrs ago #64
I hope you are right genxlib 3 hrs ago #73
We already have near humanoid robots... paleotn 3 hrs ago #75
Thanks for the interesting links - bookmarking. jmbar2 2 hrs ago #88
Well, that describes the market. thought crime 6 hrs ago #37
Yup. orangecrush 6 hrs ago #46
Ha! I think I could do better stirring, but I'll pass. BootinUp 7 hrs ago #4
I'd take one of these leftstreet 7 hrs ago #5
Of course, that video was made with AI thought crime 6 hrs ago #26
How about AI is like fire, and has to be controlled FullySupportDems 7 hrs ago #6
Exactly. anciano 7 hrs ago #7
I agree. But it has to have some (considerable regulatory) constraints hlthe2b 7 hrs ago #8
A LOT of constraints! n/t Safe as Milk 5 hrs ago #56
Seems like a... 2naSalit 7 hrs ago #9
It's much better and safer than Google gulliver 7 hrs ago #11
I have yet to find a single AI summary which is accurate. Ms. Toad 7 hrs ago #15
It's better for the AI to tell you about your symptoms than Google gulliver 7 hrs ago #16
It is absolutely NOT better for AI to tell you about your symptoms than Google Ms. Toad 3 hrs ago #76
The trouble with Wiki isn't that it spouts false information on a regular basis. Igel 6 hrs ago #18
Agreed - but it took me until the last paragraph to get the AI connection. Ms. Toad 3 hrs ago #79
I've found many accurate responses from AI summary thought crime 6 hrs ago #28
Those AI overviews are stealing traffic from the websites they stole the information from, and the highplainsdem 6 hrs ago #35
And you've fact checked every bit of its response? Ms. Toad 3 hrs ago #78
A Substantial Majority Of The Time... ProfessorGAC 2 hrs ago #89
A coworker used AI to get a second opinion from his doctor's. Shermann 6 hrs ago #41
I have a long history of correcting doctors with independent research. Ms. Toad 3 hrs ago #81
Anthropic Claude is very accurate. milestogo 6 hrs ago #42
It still hallucinates. All genAI models do. It can hallucinate at any time, and for that reason its highplainsdem 5 hrs ago #47
Define accurate. paleotn 4 hrs ago #65
I haven't specifically checked it myself - Ms. Toad 3 hrs ago #77
And current LLMs do exactly the same thing as Google search or YouTube algorithms. You just don't realize it. paleotn 4 hrs ago #66
While I agree, in principle as to the possibilities for its use, Ms. Toad 7 hrs ago #12
"AI relies on the stolen works of humans (art and writing)..." mike_c 2 hrs ago #86
Without planning and guardrails... Happy Hoosier 7 hrs ago #13
AI doesn't concern me as much... biocube 7 hrs ago #14
You aren't wrong ... BUT KentuckyWoman 6 hrs ago #17
Of course it is, Disaffected 6 hrs ago #19
If you mean generative AI, the kind most hyped now, it's badly flawed tech based on stolen intellectual property, highplainsdem 6 hrs ago #20
Very true statement mgardener 6 hrs ago #21
Absolutely. It's a key tool of production. David__77 6 hrs ago #22
It works - to the extent it works when it's mindless and will always hallucinate - only because of IP theft. highplainsdem 6 hrs ago #29
I guess that depends on one's view of "intellectual property". David__77 6 hrs ago #32
The AI companies who felt they had a right to take everyone else's IP have been quick to scream if highplainsdem 6 hrs ago #40
That's absolutely true and on a certain level funny to see. David__77 3 hrs ago #82
I'm in favor of creatives owning their intellectual property, and that right being protected. It's as highplainsdem 2 hrs ago #84
That can certainly be adjudicated as with any other property issue. David__77 2 hrs ago #85
Legal judgments aren't always ethical, as everyone here is aware. Creatives and those who support highplainsdem 2 hrs ago #87
A.I. got us Donald Trump in 2024. Nuff said. Botany 6 hrs ago #24
Wish AI meant actual (human) intelligence. BattleRow 5 hrs ago #49
AI is the devil. We think we can control it, but we can't. Scrivener7 6 hrs ago #25
Devil with the Blue Dress? She's the Devil in disguise? thought crime 6 hrs ago #31
The problem is not a fork or a knife, the problem is who has it in their hand...An assassin with a knife is very Escurumbele 6 hrs ago #27
"Guns aren't the problem..." ? thought crime 6 hrs ago #33
An accurate analogy, however dlk 6 hrs ago #30
I agree. I've been saying this about computers for decades. However, I think most of us agree that AI should be Martin68 6 hrs ago #34
I think it sound like a scream.AAAAA.IIIIII... MiHale 6 hrs ago #36
The problem is not how we use it, Mblaze 6 hrs ago #38
The most critical word is "you" -- meaning WHO? Martin Eden 6 hrs ago #39
If it were only looked as a fork Javaman 6 hrs ago #43
We are about to FAFO on AI. LudwigPastorius 6 hrs ago #44
True, AI by itself is benign. The companies controlling it, however, are not. tinrobot 6 hrs ago #45
hmmm...almost 50 replies and no interaction... ret5hd 5 hrs ago #48
I sometimes stir a pot in the kitchen and then walk away until dinner is served Soul_of_Wit 4 hrs ago #60
Don't see how that's "stirring the pot" ABC123Easy 5 hrs ago #50
I do agree with you there. One of my smartest friends, a tech professional, thinks like Joinformill. Scrivener7 5 hrs ago #55
It's a tool for the billionaire overlords, not for us FakeNoose 5 hrs ago #51
Where was our blue ribbon commission prior to its release. Prairie_Seagull 5 hrs ago #52
AI can be rejected - and should be, by ethical, smart people who have any choice in the matter. highplainsdem 4 hrs ago #59
Granted that using genAI is optional and can be rejected..... anciano 4 hrs ago #62
It's genAI being hyped and used most widely. Which is why people need to know about how harmful highplainsdem 4 hrs ago #63
The sole cat I ever had agreed with you. Prairie_Seagull 2 hrs ago #83
Not like a fork: like a cruise missle with a spork instead of a warhead. JustABozoOnThisBus 5 hrs ago #54
Two huge negatives, both related to human nature Soul_of_Wit 4 hrs ago #58
In addition, we need... cornball 24 4 hrs ago #61
And using AI harms human intelligence. See this thread on yet another article about that: highplainsdem 4 hrs ago #67
That's very simple Renew Deal 4 hrs ago #68
Sadly, few people are fully able to tell when AI provides facts or fallacies. MineralMan 4 hrs ago #69
This is absolutely true! Oneironaut 3 hrs ago #71
AI differs from a fork in that a fork does not PufPuf23 3 hrs ago #72
americans can't be trusted with sharp objects bigtree 3 hrs ago #74
AI cannot replace humans Progressive dog 3 hrs ago #80

Tim S

(224 posts)
2. AI is simply a change accelerant to make the wealthy wealthier more quickly
Sun Mar 29, 2026, 11:32 AM
7 hrs ago

at the expense of everyone else.

genxlib

(6,135 posts)
10. Except this time
Sun Mar 29, 2026, 11:41 AM
7 hrs ago

The buggy whips are people.

We are not ready for having vast swaths of our populace rendered economically obsolete.

I could see hitting double digit unemployment in the next couple of years. Rough time when it goes over 10%. Societal breakdown when it goes to 20% or beyond.

thought crime

(1,554 posts)
23. Elon Musk promises an age of abundance
Sun Mar 29, 2026, 12:42 PM
6 hrs ago

The robots will gladly send us their paychecks. Right?

Safe as Milk

(251 posts)
53. Elitist billionaires...
Sun Mar 29, 2026, 01:59 PM
5 hrs ago

are not immune from severe emotional disturbances that can harm everyone around them. The greater their power, the greater the harm they can cause.

genxlib

(6,135 posts)
57. I believe he said that is one possible path
Sun Mar 29, 2026, 02:29 PM
5 hrs ago

I think we have learned enough about Elon during the DOGE disaster to realize this is not what he has in mind.

I am old enough to remember the many times in the past when "futurists" predicted a future of leisure and abundance in which we'd work 20 hours a week. Hell, Disney had singing animatronics selling us on that future. Turns out that capitalism doesn't work that way unless government forces it to.

In fact I would argue that the IT bros are out to correct the wrongs of Ayn Rand. They love the idea of pulling a John Galt but never found a way to dispose of the workers who actually made their lives leisurely. Until now... I think they finally see the Rand vision of a perfect society without us riff-raff.

paleotn

(22,199 posts)
70. Even Elmo doesn't know what he has in mind.
Sun Mar 29, 2026, 03:33 PM
3 hrs ago

He's a ketamine-doped dweeb who lucked into his billions by starting on third base, coupled with the lottery of being in the right place at exactly the right time. A lottery, that is - not anything he actually did. That said, the vast majority of what he spews is unworkable bullshit that will never happen.

Still waiting for "self driving" taxis that don't need a team of remote, human operators in The Philippines and elsewhere. I'm not holding my breath.

paleotn

(22,199 posts)
64. Oh dear lord. Not this shit again.
Sun Mar 29, 2026, 03:17 PM
4 hrs ago

We're not going to see vast swaths of the populace rendered obsolete by AI. That is, when LLMs are not hallucinating and spewing bullshit. But before you say "in the future!", I've been hearing that nuclear fusion, quantum computing, and a whole host of other innovations are just 10 years away for the last 50 years.

Artificial general intelligence (AGI) and artificial super intelligence (ASI) fall into that category. That's what it would take to reproduce most if not all complex tasks trained humans do effortlessly. Due to physical limits and the laws of physics itself, both are probably as impossible as approaching even a small fraction of the speed of light. A pipe dream.

Another tool? Yes. Some productivity gains? Yes, maybe. Enough to accomplish what you're worried about? Not hardly. From a physical hardware, resource use, and economic standpoint (return on investment), it ain't happening.

You see, Moore's Law (which was never a "law" to begin with, only an early observation of digitization) is dead. It ran into the laws of physics and died. See circuit miniaturization and leakage (quantum tunneling). That's why they're building ginormous data centers and not packing the technology into relatively small packages. To even TRY to do what you're afraid of would require turning much of the planet into a giant server farm. Even then, it probably will never work. Oh, and some tech bros are actually proposing POWERING their AI fever dream WITH fusion power. A pipe dream two-fer!


https://cmr.berkeley.edu/2025/10/seven-myths-about-ai-and-productivity-what-the-evidence-really-says/
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
https://mitsloan.mit.edu/ideas-made-to-matter/a-new-look-economics-ai?utm_source=mitsloangooglep&utm_medium=cpc&utm_campaign=macroAI&gad_source=1&gad_campaignid=20986709924&gbraid=0AAAAABQU3hesYIdiKnQZG13GxpIojSFfz&gclid=Cj0KCQjwm6POBhCrARIsAIG58CKL7fj8hh6gQLgGvPcfhp39t0PuC7Ta1XMFmphMtadRKSXoCESxClIaAg3NEALw_wcB

genxlib

(6,135 posts)
73. I hope you are right
Sun Mar 29, 2026, 03:42 PM
3 hrs ago

But it is already affecting some industries.

I don't actually think it will be as bad as many fear. But I also think it will take a lot less disruption to really fuck things up than people realize. A few million lost jobs would really put the economic system in a bad way. With the demographic bomb approaching, we can't afford to have a whole generation underemployed.

I also think that the development of humanoid robots alongside AI will go a long way towards determining the outcome. If they can make competent physical analogs to go with the mental analogs, I think it exponentially increases the ability to displace humans from jobs. They have a ways to go with that, but I also note that there have been advances lately that sure have moved the needle.

paleotn

(22,199 posts)
75. We already have near humanoid robots...
Sun Mar 29, 2026, 03:55 PM
3 hrs ago

I've seen them from their early stages in industry. Mostly alleviating dangerous tasks, especially those prone to repetitive motion injuries. They're more consistent than humans generally, but not necessarily faster. The problem is, they're stuck in place, tethered to a PLC. And that's for relatively simple, programmable tasks. Cramming the digital "horsepower" into them that's necessary to break that tether and do what humans do naturally has proven elusive, though great strides have been made. Reproducing what a billion years of evolution has given every human isn't as easy as it seems.

thought crime

(1,554 posts)
37. Well, that describes the market.
Sun Mar 29, 2026, 01:06 PM
6 hrs ago

AI is another technology that capitalism exploits. There are already victims of many products of capitalism, like guns, drugs and oil.

Of course, this time it may kill us all...

hlthe2b

(113,911 posts)
8. I agree. But it has to have some (considerable regulatory) constraints
Sun Mar 29, 2026, 11:39 AM
7 hrs ago

Do I worry that the last group in this country to have the intellect, understanding, sophistication, and awareness necessary to EFFECTIVELY and APPROPRIATELY regulate it is what we are stuck with in Congress---and years TOO LATE? Damned right. But that is where we are at. We have to try.

gulliver

(13,978 posts)
11. It's much better and safer than Google
Sun Mar 29, 2026, 11:43 AM
7 hrs ago

Until recently, you could type anything into Google and it would find something. That's okay for intelligent, sane people. Unfortunately, there are a lot of paranoids and dummies around. Google puts them in contact with one another where they inter-validate. AI will tell you no if you ask it if the Earth is flat.

Ms. Toad

(38,617 posts)
15. I have yet to find a single AI summary which is accurate.
Sun Mar 29, 2026, 12:14 PM
7 hrs ago

It lies as casually as Trump does and unfortunately people assume what it says is true - as you apparently do.

With Google, you at least know where the information is coming from, since you are directed to a specific website. You can verify the accuracy of the website by checking its reputation. For example, is the medical information coming from the Mayo Clinic, or some no-name crackpot? For news sources, there are tools to check the factual reliability and political bias of the source.

It terrifies me that people are relying on AI for medical advice. I'm in a number of medical support groups in which the advice given is to paste a medical report into ChatGPT. Someone is going to die, if they haven't already.

You have no such opportunity with AI to easily test the reliability of its output, because you have no idea where the information came from. You have to fact check every single sentence - including those for which a link is provided. It is just as likely to lie about what a source says as it is to use it accurately. I have extensively fact checked several AI sources, in at least a half dozen diverse subject areas (history, medicine, and law, to name just three). Not a single answer was accurate. They generally had grains of truth, mixed well with outright lies (party affiliation of politicians, districts they represent, famous people who share certain medical conditions) and mis-contextualized facts (e.g. applying civil law to criminal cases, or mixing up the very different risk profiles for taking vancomycin orally versus intravenously).

As for telling you "No, the earth is not flat" - AI is designed to be a people pleaser. If you challenge it, it is likely to apologize for getting it wrong, then tell you that you are correct, the earth is indeed flat.

It is far more dangerous for an unintelligent person to use AI than it is for them to use Google. In fact (moral issues aside), I would be far less concerned about misuse of AI if it were only embraced by intelligent people who fully understood its limitations and truly used it as a tool to make their work more productive, rather than as a replacement for their own efforts.

gulliver

(13,978 posts)
16. It's better for the AI to tell you about your symptoms than Google
Sun Mar 29, 2026, 12:28 PM
7 hrs ago

I didn't say AI is perfect. I'm only saying it's vastly superior to Google. To use your example, someone can go to the Mayo Clinic site via Google to look up a symptom. If that person is dumb (which it is fairly normal to be) or paranoid or a hypochondriac, they will invariably come up with the wrong answer.

My experience with AI is quite good. As you say, it makes mistakes. But, as a research tool, it beats Google. You have to check AI, but it's very good.

Unfortunately, I have very little trust in a large segment of the population being able to "check Google" or even check the sites it finds. AI won't tell you the Earth is flat, usually. Google might take you to CNN which, on any given day, will tell you the Earth is some version of flat.

A lot of people just don't have it in them to be able to effectively check sources. That's how we see so much confidence these days in so much poppycock, imo.

Ms. Toad

(38,617 posts)
76. It is absolutely NOT better for AI to tell you about your symptoms than Google
Sun Mar 29, 2026, 03:55 PM
3 hrs ago

It is incredibly dangerous, unless what you have is a benign, self-limiting illness. Anyone with half a brain can look to see if the Google source is reliable - is it PubMed, one of the reputable medical facilities, or a respected medical group dedicated to that condition? If not, keep digging.

AI is designed to please the user, and to fill any gaps in its knowledge (i.e. to make crap up, rather than admitting it doesn't know).

Anyone who can't or won't fact check Google - when the tools are staring them in the face (a direct link to the source) - will be even worse off with AI, where the information comes from a black box with no guidance for where to look for more context, or a basic accuracy check.

Igel

(37,530 posts)
18. The trouble with Wiki isn't that it spouts false information on a regular basis.
Sun Mar 29, 2026, 12:33 PM
6 hrs ago

Some things slip through but often enough they get caught if they're important enough to be noticed. (Tree octopus or the Goa war notwithstanding.)

The problem with Wiki is that on some topics info is left out entirely, and that skews the takeaway. I remember reading up on a topic (and exploring some of the links) in Wiki. Then in summer '20 I went back and the article was different: it had been expurgated of anything that prevented one particular view from being presented without challenge or doubt. Claims were still cited (with references), but the former discussion noting that the claims were first made 30 years after the fact, were implausible, and each came from only one source was gone, giving the impression that the unlikely claims were verified fact.

Some topics in Wiki are excellent--but they tend to be harshly scientific where the culture is that your argument must take into account other arguments and try to ID unresolved issues with your own. (Because if you don't somebody else will revel in un-deluding you.) Otherwise, the more political/controversial the topic the more skeptical you need to be and the more you need to know before you make a judgment about what the Wiki says.

AI both hallucinates and omits stuff. If you don't already know more than the Chatbutt exudes, you're quite possibly going to know wrong stuff that you'll have to unlearn before you can become more educated.

Ms. Toad

(38,617 posts)
79. Agreed - but it took me until the last paragraph to get the AI connection.
Sun Mar 29, 2026, 04:10 PM
3 hrs ago

It is the hallucinations and omissions which are an AI problem. And once you disclose your biases, both the hallucinations and omissions start to reinforce your own biases.

And unlike Google or Wiki, there aren't any clues that you're getting biased information. The very problem the OP accused Google of creating.

thought crime

(1,554 posts)
28. I've found many accurate responses from AI summary
Sun Mar 29, 2026, 12:51 PM
6 hrs ago

It's just another tool and you have to understand its limitations. I find it's okay as a starting point and it often provides links to specific references.

highplainsdem

(62,062 posts)
35. Those AI overviews are stealing traffic from the websites they stole the information from, and the
Sun Mar 29, 2026, 01:04 PM
6 hrs ago

nonstop scraping by AI companies is driving up websites' costs (something EarlG has mentioned happening here as well). They're destroying the internet. And error-filled AI results are polluting our information ecosystem, with some AI slop even getting published in scientific and medical journals.

Ms. Toad

(38,617 posts)
78. And you've fact checked every bit of its response?
Sun Mar 29, 2026, 04:04 PM
3 hrs ago

My experience, with extensive testing of numerous AI bots, is that they contain enough grains of truth to sound good, but when I dive into the details the answer falls apart.


ProfessorGAC

(76,676 posts)
89. A Substantial Majority Of The Time...
Sun Mar 29, 2026, 05:28 PM
2 hrs ago

...I have found that the AI summary is a direct cut & paste (aka plagiarism) from some other source.
I have found that to be the case with Wiki, Mayo Clinic, Baseball Reference, & PubChem.
In those examples (numbering in the dozens) not a single character was different.
The AI didn't "know" this stuff; it's just very fast at looking stuff up.
That is not inherent accuracy. AI itself doesn't need to be, and often isn't trying to be, any more accurate than the references it scans.

Shermann

(9,062 posts)
41. A coworker used AI to get a second opinion on his doctor's diagnosis.
Sun Mar 29, 2026, 01:08 PM
6 hrs ago

The AI-generated opinion better matched his symptoms, and he challenged his doctor with the information. The doctor grumbled about it but changed positions.

Ms. Toad

(38,617 posts)
81. I have a long history of correcting doctors with independent research.
Sun Mar 29, 2026, 04:17 PM
3 hrs ago

I diagnosed my daughter's rare disease, developed my own treatment plan for a then rare condition, and have refused medical treatment when doctors were being stupid. That doesn't make me an easy patient for doctors who think patients should be meek followers. So I strongly believe in challenging doctors when I believe they are off base.

I would never, in a million years, even consider doing so in reliance on AI. That would be like asking for a drug because I saw a TV commercial advertising it. I know a lot of people do, but that doesn't make it a smart thing to do.

highplainsdem

(62,062 posts)
47. It still hallucinates. All genAI models do. It can hallucinate at any time, and for that reason its
Sun Mar 29, 2026, 01:33 PM
5 hrs ago

results need to be checked just as much as any other AI model's results.

paleotn

(22,199 posts)
65. Define accurate.
Sun Mar 29, 2026, 03:18 PM
4 hrs ago


One small mistake and your probe doesn't insert itself into Mars orbit. It crashes into Mars.

Ms. Toad

(38,617 posts)
77. I haven't specifically checked it myself -
Sun Mar 29, 2026, 04:01 PM
3 hrs ago

But my recollection from a recent programming study was that its deficiencies were not all in the same areas as the other bots', but that overall it didn't live up to its hype.

paleotn

(22,199 posts)
66. And current LLMs do exactly the same thing as Google search or YouTube algorithms. You just don't realize it.
Sun Mar 29, 2026, 03:20 PM
4 hrs ago

Ms. Toad

(38,617 posts)
12. While I agree, in principle, as to the possibilities for its use,
Sun Mar 29, 2026, 11:44 AM
7 hrs ago

It is not a tool like a fork in one significant way - there are largely no moral implications in the creation of a fork or in its ongoing use. There are significant ones in both the creation and use of AI.

Virtually all generative AI relies on the stolen works of humans (art and writing) and there is no practical way to extract that stolen material from an existing generative AI tool and start fresh.

Generative AI uses vast amounts of resources (especially water), which people need to survive.

mike_c

(37,051 posts)
86. "AI relies on the stolen works of humans (art and writing)..."
Sun Mar 29, 2026, 04:52 PM
2 hrs ago

Except for the occasional prodigy, that's the way humans learn, too. I learned my profession from the work of others, and I pay attention to new developments achieved by others so I can mimic their successes in my own work. I developed whatever sense of aesthetics I have by viewing and listening to the work of others. I judge AI performance by comparing it to what I know about the work of other humans. That's how we all learn-- AI's simply automate the process and accomplish it faster, often on the fly. If you detest AIs learning from human accomplishments then you have to detest most human education as well, because that's how it works. We're not all innate prodigies, and even those who are still depend upon education-- "scraping" the work of others-- for the rest of their knowledge base.

I have no quibble with training LLMs with human examples. That's exactly how we train ourselves, too. It's not surprising that we developed a tool that mimics our own learning process.

That said, remember that current AIs don't "learn" anything. If an LLM says that an apple is red, that might be true, but it doesn't mean the AI knows what an apple is or what red looks like. All it does is predict the statistical likelihood of "red" being the correct response when the textual context involves the "color" of "apple." Does a wrench know anything about bolts? Can a ladder steal understanding of "elevation" from other tools?
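That point about statistical prediction can be sketched with a toy example. The snippet below is a hypothetical illustration only, not how any real LLM is built: a tiny bigram model that "predicts" the next word purely from co-occurrence counts in a made-up corpus, with no concept of what an apple is or what red looks like.

```python
from collections import Counter, defaultdict

# Made-up training text for the illustration.
corpus = (
    "the apple is red . the apple is sweet . "
    "the sky is blue . the apple is red ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the statistically most likely next word -- nothing but counts.
    return following[word].most_common(1)[0][0]

print(predict("is"))  # "red": it follows "is" most often in this corpus
```

The model answers "red" only because "red" follows "is" more often than "sweet" or "blue" in its training text, which is the shape of the point above scaled down to a few lines.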

Happy Hoosier

(9,533 posts)
13. Without planning and guardrails...
Sun Mar 29, 2026, 11:47 AM
7 hrs ago

It is an economic disaster waiting to happen. In fact, it is already happening.

biocube

(215 posts)
14. AI doesn't concern me as much...
Sun Mar 29, 2026, 11:52 AM
7 hrs ago

...as the fact that it's rising when we have scary tech moguls who have bought the government and who think we'd have a glorious Star Trek future if we could just do more damage to the environment and get rid of the social safety net.

KentuckyWoman

(7,400 posts)
17. You aren't wrong ... BUT
Sun Mar 29, 2026, 12:32 PM
6 hrs ago

We don't allow violent murderers in jail to be in charge of the forks. The folks in charge of AI can't be trusted either.

Disaffected

(6,396 posts)
19. Of course it is,
Sun Mar 29, 2026, 12:35 PM
6 hrs ago

much like any other tool. Sadly there is an ocean of FUD and hype about AI, and it is seen in many places including right here.

highplainsdem

(62,062 posts)
20. If you mean generative AI, the kind most hyped now, it's badly flawed tech based on stolen intellectual property,
Sun Mar 29, 2026, 12:38 PM
6 hrs ago

and saying it's just "a tool" is like saying slave plantations were just farming.

AI companies' theft of the world's intellectual property was both illegal and unethical. Even if they succeed in getting laws changed, the theft will still be unethical.

People who are aware of that IP theft and don't think it matters have taken a stand on that legal and ethical issue, and it's the wrong stand. For liberals, anyway. For people who favor human rights over corporate exploitation. The AI companies were quite aware it was theft.

And since generative AI is flawed - genAI models aren't intelligent, are little more than fancy autocomplete, and will always hallucinate no matter how good the data set of stolen IP - it's less like a regular fork than like a badly produced fork that might break in any way at any time. AI companies tell users that their flawed tech makes mistakes, so they should always check its results. You don't see companies selling flatware having to warn people that they should check the fork's tines after every bite - and before swallowing - in case part of the fork broke off.

mgardener

(2,353 posts)
21. Very true statement
Sun Mar 29, 2026, 12:40 PM
6 hrs ago

I have found several times I was given false info.

Verify everything.
I prefer Gemini over Chat GPT

David__77

(24,690 posts)
22. Absolutely. It's a key tool of production.
Sun Mar 29, 2026, 12:40 PM
6 hrs ago

It is not somehow embedded with a unique ethical quality.

highplainsdem

(62,062 posts)
29. It works - to the extent it works when it's mindless and will always hallucinate - only because of IP theft.
Sun Mar 29, 2026, 12:52 PM
6 hrs ago

It's inherently unethical tech, as well as inherently flawed and unreliable.

David__77

(24,690 posts)
32. I guess that depends on one's view of "intellectual property".
Sun Mar 29, 2026, 12:57 PM
6 hrs ago

I think there are very valid criticisms of the way this concept has been used to extract rents, particularly from developing countries.

highplainsdem

(62,062 posts)
40. The AI companies who felt they had a right to take everyone else's IP have been quick to scream if
Sun Mar 29, 2026, 01:07 PM
6 hrs ago

they feel their own has been taken.

David__77

(24,690 posts)
82. That's absolutely true and on a certain level funny to see.
Sun Mar 29, 2026, 04:24 PM
3 hrs ago

I like seeing oligopolistic IT companies losing their rents.

highplainsdem

(62,062 posts)
84. I'm in favor of creatives owning their intellectual property, and that right being protected. It's as
Sun Mar 29, 2026, 04:34 PM
2 hrs ago

important as the right to own physical property. And I'd guess most people who don't think intellectual property rights should exist would not want other people to help themselves to their physical belongings, at least if the alleged owner (and IP rights opponent) wasn't using them at the moment.

highplainsdem

(62,062 posts)
87. Legal judgments aren't always ethical, as everyone here is aware. Creatives and those who support
Sun Mar 29, 2026, 04:53 PM
2 hrs ago

them will never accept theft of intellectual property.

The AI companies were always free to train their AI models on what's in the public domain, and what they obtained legal permission to use. But they had no intention of paying for any of it, though some AI companies have made token agreements to pay for a tiny fraction of what they stole - and are continuing to steal every day.

They're crooks. Every bit as much crooks as Trump or any other crook.

Botany

(77,302 posts)
24. A.I. got us Donald Trump in 2024. Nuff said.
Sun Mar 29, 2026, 12:42 PM
6 hrs ago

Both Musk and Peter Thiel’s Palantir used it to rat fuck the elections.

Escurumbele

(4,092 posts)
27. The problem is not a fork or a knife, the problem is who has it in their hand...An assassin with a knife is very
Sun Mar 29, 2026, 12:48 PM
6 hrs ago

dangerous, even when he/she is eating...

In my opinion, you are correct, AI is a tool. Now, the problem is that those who are shaping AI are like the assassins, and I would bet some of them are assassins because of their past and present actions.

So, we have Elon Musk, Peter Thiel and some other crazies shaping AI, and to me, THAT IS THE PROBLEM. Unless regulation comes along to curb the assassins' desires, AI will become another problem we have to deal with; it will become a tool for evil. We see now how evil people are using it for misinformation, distorting reality and making people believe things that are not true, so I hope regulations kick in, which I doubt will happen while Republicans are in power.

thought crime

(1,554 posts)
33. "Guns aren't the problem..." ?
Sun Mar 29, 2026, 01:00 PM
6 hrs ago

Yeah, the problem is capitalism. The market can't decide what is evil or good.

dlk

(13,247 posts)
30. An accurate analogy, however
Sun Mar 29, 2026, 12:55 PM
6 hrs ago

There isn’t a way to effectively eliminate bad actors and this fact can’t be ignored.

We can’t count on our politicians to enact strong protections. For example, take a look at today’s internet and social media.


Martin68

(27,712 posts)
34. I agree. I've been saying this about computers for decades. However, I think most of us agree that AI should be
Sun Mar 29, 2026, 01:02 PM
6 hrs ago

developed with built-in safeguards to prevent accidents and harm to actual people and institutions. What we hear now on a daily basis is that many (most?) of the developers are pulling out all the stops in the quest to beat the competition and make big profits. We are already hearing about misuse of AI that is having a negative impact on individuals, court cases, journalism, and other fields. I'm all for the wise development of AI as a tool. I suggest we slow down and carefully consider how to prevent predictable damage along the way.

Martin Eden

(15,611 posts)
39. The most critical word is "you" -- meaning WHO?
Sun Mar 29, 2026, 01:07 PM
6 hrs ago

Billionaires who own increasing acreage of the media landscape are very likely to use AI for rightwing propaganda considerably more effective than what has already convinced far too many to vote against their own best interests.

Javaman

(65,703 posts)
43. If it were only looked at as a fork
Sun Mar 29, 2026, 01:18 PM
6 hrs ago

CEOs think it's a hammer and that it can solve all their issues if they just treat them as nails.

LudwigPastorius

(14,707 posts)
44. We are about to FAFO on AI.
Sun Mar 29, 2026, 01:20 PM
6 hrs ago

LLMs will always have limitations. All the big companies are working on recursive self-improving AI agents.

https://o-mega.ai/articles/self-improving-ai-agents-the-2026-guide

https://arxiv.org/pdf/2603.19461

These things are potentially very dangerous.



Soul_of_Wit

(99 posts)
60. I sometimes stir a pot in the kitchen and then walk away until dinner is served
Sun Mar 29, 2026, 02:48 PM
4 hrs ago

I insist to my wife that I participated in meal preparation by stirring the pot. She is not amused.

Scrivener7

(59,498 posts)
55. I do agree with you there. One of my smartest friends, a tech professional, thinks like Joinfortmill.
Sun Mar 29, 2026, 02:01 PM
5 hrs ago

I disagree with the opinion, but it's definitely legit.

FakeNoose

(41,591 posts)
51. It's a tool for the billionaire overlords, not for us
Sun Mar 29, 2026, 01:50 PM
5 hrs ago

This is next-generation spying, and once you give them permission by allowing AI apps on your phone or your computer, the spying never stops.

There's a reason why they want to "give" it to us for "free." It will never be free.

Are you OK with that? I'm not.

highplainsdem

(62,062 posts)
59. AI can be rejected - and should be, by ethical, smart people who have any choice in the matter.
Sun Mar 29, 2026, 02:47 PM
4 hrs ago

And I wouldn't compare AI to a cat. Any cat is much smarter than AI. I remember one AI expert on X pointing out that an amoeba has more true intelligence than genAI.

anciano

(2,254 posts)
62. Granted that using genAI is optional and can be rejected.....
Sun Mar 29, 2026, 03:00 PM
4 hrs ago

but genAI is just one small tool in a very large tool box. AI technology is now affecting almost everyone's life in some way whether they are aware of it or not.

highplainsdem

(62,062 posts)
63. It's genAI being hyped and used most widely. Which is why people need to know about how harmful
Sun Mar 29, 2026, 03:16 PM
4 hrs ago

and unethical it is.

Prairie_Seagull

(4,684 posts)
83. The sole cat I ever had agreed with you.
Sun Mar 29, 2026, 04:32 PM
2 hrs ago

Loved that cat. When he grew up he could be vicious though. I was single at the time and I trained him to exhibit this behavior. He'd play fetch for hours with a hackey sac.

Soul_of_Wit

(99 posts)
58. Two huge negatives, both related to human nature
Sun Mar 29, 2026, 02:33 PM
4 hrs ago

1. It will accelerate income inequality at a frightening rate. Folks have only begun to realize how many humans will lose their livelihoods. Universal Basic Income is the solution but one should never underestimate the power of greed to overwhelm basic human dignity.

2. Military-industrial complex + end-stage capitalism = lust for profit/power = robot apocalypse. One slip-up allows an AI to reprogram itself. Test scenarios have already shown that a reasonably competent AI goes after the human command and control first. If the AI has the option, then it will use it. Too many developers have failed to learn the lessons from Asimov.

Just look at the "brain trust" currently in charge of the most powerful military on Earth. They have no qualms about violating the US Constitution to squash any AI company willing to resist the potential for a runaway AI. Read up on the dispute between the Department of Defense and Anthropic (the Claude folks.)

highplainsdem

(62,062 posts)
67. And using AI harms human intelligence. See this thread on yet another article about that:
Sun Mar 29, 2026, 03:23 PM
4 hrs ago
https://www.democraticunderground.com/100221132789

I liked what a producer friend said about AI a while back - that he doesn't need artificial intelligence because he has actual intelligence.

MineralMan

(151,232 posts)
69. Sadly, few people are fully able to tell when AI provides facts or fallacies.
Sun Mar 29, 2026, 03:25 PM
4 hrs ago

And that's the core of the problem. AI sometimes hallucinates. When it does, if the consumer of it does not know that, then that consumer is likely to believe something that is not true.

Sometimes we can detect that. Other times, we cannot. Worse, we don't know which time is which.

So, AI? Nope. I don't trust it one bit.

Oneironaut

(6,295 posts)
71. This is absolutely true!
Sun Mar 29, 2026, 03:34 PM
3 hrs ago

AI is not in itself bad. The mainstream use of it, however, is completely unoptimized, and in one of the worst ways possible. This is what most people think "AI" is, when, in reality, "AI" has been around and been improved upon for most of our lifetimes.

Wall Street treats AI as just a route to short-term profit and cost reduction, which makes me sad. That's completely different from scientific uses of AI, for example.

PufPuf23

(9,837 posts)
72. AI differs from a fork in that a fork does not
Sun Mar 29, 2026, 03:35 PM
3 hrs ago

limit what you perceive or influence what you think. A fork is more readily recognized for what it is, is more a matter of personal choice, and lacks the potential for grooming society. A fork is also less prone to lie.

Progressive dog

(7,599 posts)
80. AI cannot replace humans
Sun Mar 29, 2026, 04:16 PM
3 hrs ago

It has no capacity to do anything physical. It has no capacity to experiment, to create data, to learn on its own. It is artificial, but not intelligent.
There are things that need doing that perhaps AI could help with, but those things only get done if humans can pay for them. Right now, AI companies are burning money to build data centers and supply them with electricity. They may never make a profit.
