General Discussion
So, I'm gonna stir the pot a bit... don't be too harsh...
AI is a tool - like a fork that can feed you or poke your eye out, it's all in how you use it.
Swede
(39,465 posts)That's my take.
Tim S
(224 posts)at the expense of everyone else.
Walleye
(44,778 posts)genxlib
(6,135 posts)The buggy whips are people.
We are not ready to have vast swaths of our populace rendered economically obsolete.
I could see hitting double digit unemployment in the next couple of years. Rough time when it goes over 10%. Societal breakdown when it goes to 20% or beyond.
thought crime
(1,554 posts)The robots will gladly send us their paychecks. Right?
Safe as Milk
(251 posts)are not immune from severe emotional disturbances that can harm everyone around them. The greater their power, the greater the harm they can cause.
genxlib
(6,135 posts)I think we have learned enough about Elon during the DOGE disaster to realize this is not what he has in mind.
I am old enough to remember the many times in the past how the "futurists" would predict a future of leisure and abundance while we worked 20 hours a week. Hell, Disney had singing animatronics selling us on that future. Turns out that capitalism doesn't work that way unless government forces it to.
In fact I would argue that the IT bros are out to correct the wrongs of Ayn Rand. They love the idea of pulling a John Galt but never found a way to dispose of the workers that actually made their lives leisurely. Until now... I think they finally see the Rand vision of a perfect society without us riff-raff.
paleotn
(22,199 posts)He's a ketamine doped dweeb who lucked into his billions by starting on third base, coupled with the lottery of being in the right place at exactly the right time. Lottery that is. Not anything he actually did. That said, the vast majority of what he spews is unworkable bullshit that will never happen.
Still waiting for "self driving" taxis that don't need a team of remote, human operators in The Philippines and elsewhere. I'm not holding my breath.
paleotn
(22,199 posts)We're not going to see vast swaths of the populace rendered obsolete by AI.
That is, when LLMs are not hallucinating and spewing bullshit. But before you say "in the future!", I've been hearing that nuclear fusion, quantum computing, and a whole host of other innovations are just 10 years away for the last 50 years.
Artificial general intelligence (AGI) and artificial super intelligence (ASI) fall into that category. That's what it would take to reproduce most if not all of the complex tasks trained humans do effortlessly. Due to the laws of physics themselves, both are probably as impossible as approaching even a small fraction of the speed of light. A pipe dream.
Another tool? Yes. Some productivity gains? Yes, maybe. Enough to accomplish what you're worried about? Not hardly. From a physical hardware, resource use, and economic standpoint (return on investment), it ain't happening.
You see, Moore's Law (which was never a "law" to begin with, only an early observation about digitization) is dead. It ran into the laws of physics and died. See circuit miniaturization and leakage (quantum tunneling). That's why they're building ginormous data centers and not packing the technology into relatively small packages. To even TRY to do what you're afraid of would require turning much of the planet into a giant server farm. Even then, it probably will never work. Oh, and some tech bros are actually proposing POWERING their AI fever dream WITH fusion power. A pipe dream two-fer!
https://cmr.berkeley.edu/2025/10/seven-myths-about-ai-and-productivity-what-the-evidence-really-says/
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
https://mitsloan.mit.edu/ideas-made-to-matter/a-new-look-economics-ai
genxlib
(6,135 posts)But it is already affecting some industries.
I don't actually think it will be as bad as many believe. But I also think it will take a lot less disruption to really fuck things up than people realize. A few million lost jobs would really put the economic system in a bad way. With the demographic bomb coming, we can't afford to have a whole generation underemployed.
I also think that the development of humanoid robots alongside AI will go a long way toward determining the outcome. If they can make competent physical analogs to go with the mental analogs, I think it exponentially increases the ability to supplant humans in jobs. They have a ways to go with that, but I've also noted that there have been advances lately that sure have moved the needle.
paleotn
(22,199 posts)I've seen them from their early stages in industry. Mostly alleviating dangerous tasks, especially those prone to repetitive motion injuries. They're more consistent than humans generally, but not necessarily faster. The problem is, they're stuck in place, tethered to a PLC. And that's for relatively simple, programmable tasks. Cramming the digital "horsepower" into them that's necessary to break that tether and do what humans do naturally has proven elusive, though great strides have been made. Reproducing what a billion years of evolution has given every human isn't as easy as it seems.
jmbar2
(7,978 posts)thought crime
(1,554 posts)AI is another technology that capitalism exploits. There are already victims of many products of capitalism, like guns, drugs and oil.
Of course, this time it may kill us all...
BootinUp
(51,301 posts)leftstreet
(40,600 posts)Link to tweet
thought crime
(1,554 posts)But I want one, too.
FullySupportDems
(446 posts)Or it will consume us
anciano
(2,254 posts)hlthe2b
(113,911 posts)Do I worry that the last group in this country to have the intellect, understanding, sophistication, and awareness necessary to EFFECTIVELY and APPROPRIATELY regulate it is what we are stuck with in Congress---and years TOO LATE? Damned right. But that is where we are at. We have to try.
Safe as Milk
(251 posts)2naSalit
(102,731 posts)Twisted ponzi scheme to me.
gulliver
(13,978 posts)Until recently, you could type anything into Google and it would find something. That's okay for intelligent, sane people. Unfortunately, there are a lot of paranoids and dummies around. Google puts them in contact with one another where they inter-validate. AI will tell you no if you ask it if the Earth is flat.
Ms. Toad
(38,617 posts)It lies as casually as Trump does and unfortunately people assume what it says is true - as you apparently do.
With Google, you at least know where the information is coming from, since you are directed to a specific website. You can verify the accuracy of the website by checking its reputation. For example, is the medical information coming from the Mayo Clinic - or - some no-name crackpot? For news sources, there are tools to check the factual reliability and political bias of the source.
It terrifies me that people are relying on AI for medical advice. I'm in a number of medical support groups in which the advice given is to paste a medical report into ChatGPT. Someone is going to die, if they haven't already.
You have no such opportunity to easily test the reliability of AI output, because you have no idea where the information came from. You have to fact check every single sentence - even those for which a link is provided - because it is just as likely to lie about what a source says as it is to use it accurately. I have extensively fact checked several AI sources, in at least a half dozen diverse subject areas (history, medicine, law, just to name 3). Not a single answer was accurate. They generally had grains of truth, mixed well with outright lies (party affiliation of politicians, districts they represent, famous people who share certain medical conditions) and mis-contextualized facts (e.g. applying civil law to criminal cases, or mixing up the very different risk profiles for taking vancomycin orally versus intravenously).
As for telling you "No" the earth is not flat - AI is designed to be a people pleaser. If you challenge it, it is likely to apologize for getting it wrong, then tell you that you are correct, the earth is indeed flat.
It is far more dangerous for an unintelligent person to use AI than it is for them to use Google. In fact (moral issues aside), I would be far less concerned about misuse of AI if it were only embraced by intelligent people who fully understood its limitations and truly used it as a tool to make their work more productive rather than a replacement for their own efforts.
gulliver
(13,978 posts)I didn't say AI is perfect. I'm only saying it's vastly superior to Google. To use your example, someone can go to the Mayo clinic site via Google to look up a symptom. If that person is dumb (which it is fairly normal to be) or paranoid or a hypochondriac, they will invariably come up with the wrong answer.
My experience with AI is quite good. As you say, it makes mistakes. But, as a research tool, it beats Google. You have to check AI, but it's very good.
Unfortunately, I have very little trust in a large segment of the population being able to "check Google" or even check the sites it finds. AI won't tell you the Earth is flat, usually. Google might take you to CNN which, on any given day, will tell you the Earth is some version of flat.
A lot of people just don't have it in them to be able to effectively check sources. That's how we see so much confidence these days in so much poppycock, imo.
Ms. Toad
(38,617 posts)It is incredibly dangerous, unless what you have is a benign, style-limiting illness. Anyone with half a brain can look to see if the Google source is reliable - is it pub-med, any of reputable medical facilities, or a respected medical group dedicated to that condition? If not, keep digging.
AI is designed to please the user, and to fill any gaps in its knowledge (i.e. to make crap up, rather than admitting it doesn't know).
Anyone who can't or won't fact check Google - when the tools are staring them in the face (a direct link to the source) - will be even worse off with AI, where the information comes from a black box with no guidance on where to look for more context or a basic accuracy check.
Igel
(37,530 posts)Some things slip through but often enough they get caught if they're important enough to be noticed. (Tree octopus or the Goa war notwithstanding.)
The problem with Wiki is that on some topics info is left out entirely, and that skews the takeaway. I remember reading up on a topic (and exploring some of the links) in Wiki. Then in summer '20 I went back and the article was different - it had been expurgated of anything that prevented one particular view from being presented without challenge or doubt. Claims were cited (with references), but the former discussion of those claims - that they were first made 30 years after the fact, were implausible, and each came from only one source - was gone, giving the impression that the unlikely claims were verified fact.
Some topics in Wiki are excellent--but they tend to be harshly scientific where the culture is that your argument must take into account other arguments and try to ID unresolved issues with your own. (Because if you don't somebody else will revel in un-deluding you.) Otherwise, the more political/controversial the topic the more skeptical you need to be and the more you need to know before you make a judgment about what the Wiki says.
AI both hallucinates and omits stuff. If you don't already know more than the Chatbutt exudes, you're quite possibly going to know wrong stuff that you'll have to unlearn before you can become more educated.
Ms. Toad
(38,617 posts)It is the hallucinations and omissions which are an AI problem. And once you disclose your biases, both the hallucinations and omissions start to reinforce your own biases.
And unlike Google or Wiki, there aren't any clues that you're getting biased information. The very problem the OP accused Google of creating.
thought crime
(1,554 posts)It's just another tool and you have to understand its limitations. I find it's okay as a starting point and it often provides links to specific references.
highplainsdem
(62,062 posts)nonstop scraping by AI companies is driving up websites' costs (something EarlG has mentioned happening here as well). They're destroying the internet. And error-filled AI results are polluting our information ecosystem, with some AI slop even getting published in scientific and medical journals.
Ms. Toad
(38,617 posts)My experience, with extensive testing of numerous AI bots is that they contain enough grains of truth to sound good, but when I dive into the details the answer falls apart.
ProfessorGAC
(76,676 posts)...I have found that the AI summary is a direct cut & paste (aka plagiarism) from some other source.
I have found that to be the case with Wiki, Mayo Clinic, Baseball Reference, & PubChem.
In those examples (numbering in the dozens) not a single character was different.
The AI didn't "know" this stuff; it's just very fast at looking stuff up.
That is not inherent accuracy. AI itself doesn't need to be, and often isn't trying to be, any more accurate than the references it scans.
Shermann
(9,062 posts)The AI-generated opinion better matched his symptoms, and he challenged his doctor with the information. The doctor grumbled about it but changed positions.
Ms. Toad
(38,617 posts)I diagnosed my daughter's rare disease, developed my own treatment plan for a then rare condition, and have refused medical treatment when doctors were being stupid. That doesn't make me an easy patient for doctors who think patients should be meek followers. So I strongly believe in challenging doctors when I believe they are off base.
I would never, in a million years, even consider doing so in reliance on AI. That would be like asking for a drug because I saw a TV commercial advertising it. I know a lot of people do, but that doesn't make it a smart thing to do.
milestogo
(23,071 posts)Not all AI is the same.
highplainsdem
(62,062 posts)results need to be checked just as much as any other AI model's results.
paleotn
(22,199 posts)One small mistake and your probe doesn't insert itself into Mars orbit. It crashes into Mars.
Ms. Toad
(38,617 posts)But my recollection in a recent programming study was that it's deficiencies were not all in the same areas as the other bots, but that overall it didn't live up to it's hype.
paleotn
(22,199 posts)Ms. Toad
(38,617 posts)It is not a tool like a fork in one significant way - there are largely has no moral implications in the creation of the fork or in it's ongoing use. There are significant ones in both creation and use as to AI.
Virtually all generative AI relies on the stolen works of humans (art and writing) and there is no practical way to extract that stolen material from an existing generative AI tool and start fresh.
Generative AI uses vast amounts of resources (especially water), which people need to survive.
mike_c
(37,051 posts)Except for the occasional prodigy, that's the way humans learn, too. I learned my profession from the work of others, and I pay attention to new developments achieved by others so I can mimic their successes in my own work. I developed whatever sense of aesthetics I have by viewing and listening to the work of others. I judge AI performance by comparing it to what I know about the work of other humans. That's how we all learn-- AI's simply automate the process and accomplish it faster, often on the fly. If you detest AIs learning from human accomplishments then you have to detest most human education as well, because that's how it works. We're not all innate prodigies, and even those who are still depend upon education-- "scraping" the work of others-- for the rest of their knowledge base.
I have no quibble with training LLMs with human examples. That's exactly how we train ourselves, too. It's not surprising that we developed a tool that mimics our own learning process.
That said, remember that current AIs don't "learn" anything. If an LLM says that an apple is red, that might be true, but it doesn't mean the AI knows what an apple is or what red looks like. All it does is predict the statistical likelihood of "red" being the correct response when the textual context involves the "color" of "apple." Does a wrench know anything about bolts? Can a ladder steal understanding of "elevation" from other tools?
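To make that concrete, here's a minimal sketch of what "predicting the statistical likelihood" looks like in practice. It assumes the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint; the prompt and the candidate words are purely illustrative.

```python
# Minimal sketch: ask a small language model how probable a few candidate
# next words are after a prompt. Assumes the Hugging Face "transformers"
# library and the public GPT-2 checkpoint (illustrative choices only).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The color of an apple is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the *next* token after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# GPT-2's vocabulary encodes a leading space as part of the word.
for word in [" red", " green", " blue", " heavy"]:
    token_id = tokenizer.encode(word)[0]
    print(f"{word.strip():>6}: {next_token_probs[token_id].item():.4f}")
```

The model isn't consulting any knowledge about apples; it's only reporting which token is statistically most likely to follow that string of text, which is exactly the distinction being made above.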
Happy Hoosier
(9,533 posts)It is an economic disaster waiting to happen. In fact, it is already happening.
biocube
(215 posts)...as that it's rising when we have scary tech moguls who have bought the government who think we have a glorious Star Trek future if we could just do more damage to the environment and get rid of the social safety net.
KentuckyWoman
(7,400 posts)We don't allow violent murders in jail to be in charge of the forks. The folks in charge of AI can't be trusted either.
Disaffected
(6,396 posts)most like any other tool. Sadly there is an ocean of FUD about AI, and hype, and it is seen in many places including right here.
highplainsdem
(62,062 posts)and saying it's just "a tool" is like saying slave plantations were just farming.
AI companies' theft of the world's intellectual property was both illegal and unethical. Even if they succeed in getting laws changed, the theft will still be unethical.
People who are aware of that IP theft and don't think it matters have taken a stand on that legal and ethical issue, and it's the wrong stand. For liberals, anyway. For people who favor human rights over corporate exploitation. The AI companies were quite aware it was theft.
And since generative AI is flawed - genAI models aren't intelligent, are little more than fancy autocomplete, and will always hallucinate no matter how good the data set of stolen IP - it's less like a regular fork than like a badly produced fork that might break in any way at any time. AI companies tell users that their flawed tech makes mistakes, so they should always check its results. You don't see companies selling flatware having to warn people that they should check the fork's tines after every bite - and before swallowing - in case part of the fork broke off.
mgardener
(2,353 posts)I have found several times I was given false info.
Verify everything.
I prefer Gemini over ChatGPT.
David__77
(24,690 posts)It is not somehow embedded with a unique ethical quality.
highplainsdem
(62,062 posts)It's inherently unethical tech, as well as inherently flawed and unreliable.
David__77
(24,690 posts)I think there are very valid criticisms of the way this concept has been used to extract rents, particularly from developing countries.
highplainsdem
(62,062 posts)they feel their own has been taken.
David__77
(24,690 posts)I like seeing oligopolistic IT companies losing their rents.
highplainsdem
(62,062 posts)important as the right to own physical property. And I'd guess most people who don't think intellectual property rights should exist would not want other people to help themselves to their physical belongings, at least if the alleged owner (and IP rights opponent) wasn't using them at the moment.
David__77
(24,690 posts)highplainsdem
(62,062 posts)them will never accept theft of intellectual property.
The AI companies were always free to train their AI models on what's in the public domain, and what they obtained legal permission to use. But they had no intention of paying for any of it, though some AI companies have made token agreements to pay for a tiny fraction of what they stole - and are continuing to steal every day.
They're crooks. Every bit as much crooks as Trump or any other crook.
Botany
(77,302 posts)Both Musk and Peter Thiels Palantir used it to rat fuck the elections.
BattleRow
(2,443 posts)Scrivener7
(59,498 posts)"I can't do that Dave."
thought crime
(1,554 posts)100 recs for the Dave reference!
Escurumbele
(4,092 posts)dangerous, even when he/she is eating...
In my opinion, you are correct, AI is a tool. Now, the problem is that those who are shaping AI are like the assassins, and I would bet some of them are assassins because of their past and present actions.
So, we have Elon Musk, Peter Thiel and some other crazies shaping AI, and to me, THAT IS THE PROBLEM. Unless regulation comes along to curb the assassins' desires, AI will become another problem we will have to deal with; it will become a tool for evil. We see now how evil people are using it for misinformation, to distort reality, making people believe things that are not true, so I hope regulations kick in, which I doubt will happen while Republicans are in power.
thought crime
(1,554 posts)Yeah, the problem is capitalism. The market can't decide what is evil or good.
dlk
(13,247 posts)There isnt a way to effectively eliminate bad actors and this fact cant be ignored.
We can't count on our politicians to enact strong protections. For example, take a look at today's internet and social media.
Martin68
(27,712 posts)developed with built-in safeguards to prevent accidents and harm to actual people and institutions. What we hear now on a daily basis is that many (most?) of the developers are pulling out all stops in the quest to beat the competition and make big profits. We are already hearing about misuse of AI that is having a negative impact on individuals, court cases, journalism, and other fields. I'm all for the wise development of AI as a tool. I suggest we slow down and carefully consider how to prevent predictable damage along the way.
MiHale
(13,015 posts)Mblaze
(1,028 posts)It's how others use it.
Martin Eden
(15,611 posts)Billionaires who own increasing acerage of the media landscape are very likely to use AI for rightwing propaganda considerably more effective than what has already comvinced far too mant to vote against their own best interests.
Javaman
(65,703 posts)CEOs are thinkings its a hammer and can solve all their issues by thinking theyre nails
LudwigPastorius
(14,707 posts)LLMs will always have limitations. All the big companies are working on recursive self-improving AI agents.
https://o-mega.ai/articles/self-improving-ai-agents-the-2026-guide
https://arxiv.org/pdf/2603.19461
These things are potentially very dangerous.
tinrobot
(12,060 posts)ret5hd
(22,500 posts)from the OP.
just sayin.
Soul_of_Wit
(99 posts)I insist to my wife that I participated in meal preparation by stirring the pot. She is not amused.
ABC123Easy
(275 posts)Scrivener7
(59,498 posts)I disagree with the opinion, but it's definitely legit.
FakeNoose
(41,591 posts)This is next-generation spying, and once you give them permission by allowing AI aps on your phone or your computer, the spying never stops.
There's a reason why they want to "give" it to us for "free." It will never be free.
Are you OK with that? I'm not.
Prairie_Seagull
(4,684 posts)This kitty is out of the bag. What happens when it grows it's teeth?
highplainsdem
(62,062 posts)And I wouldn't compare AI to a cat. Any cat is much smarter than AI. I remember one AI expert on X pointing out that an amoeba has more true intelligence than genAI.
anciano
(2,254 posts)but genAI is just one small tool in a very large tool box. AI technology is now affecting almost everyone's life in some way whether they are aware of it or not.
highplainsdem
(62,062 posts)and unethical it is.
Prairie_Seagull
(4,684 posts)Loved that cat. When he grew up he could be vicious though. I was single at the time and I trained him to exhibit this behavior. He'd play fetch for hours with a hackey sac.
JustABozoOnThisBus
(24,681 posts)Bon apetit!
Soul_of_Wit
(99 posts)1. It will accelerate income inequality at a frightening rate. Folks have only begun to realize how many humans will lose their livelihoods. Universal Basic Income is the solution but one should never underestimate the power of greed to overwhelm basic human dignity.
2. Military-industrial complex + end-stage capitalism = lust for profit/power = robot apocalypse. One slip-up allows an AI to reprogram itself. Test scenarios have already shown that a reasonably competent AI goes after the human command and control first. If the AI has the option, then it will use it. Too many developers have failed to learn the lessons from Asimov.
Just look at the "brain trust" currently in charge of the most powerful military on Earth. They have no qualms about violating the US Constitution to squash any AI company willing to resist the potential for a runaway AI. Read up on the dispute between the Department of Defense and Anthropic (the Claude folks.)
cornball 24
(1,580 posts)highplainsdem
(62,062 posts)I liked what a producer friend said about AI a while back - that he doesn't need artificial intelligence because he has actual intelligence.
Renew Deal
(85,113 posts)And is basically true, though AI is far more consequential than a fork.
MineralMan
(151,232 posts)And that's the core of the problem. AI sometimes hallucinates. When it does, if the consumer of it does not know that, then that consumer is likely to believe something that is not true.
Sometimes we can detect that. Other times, we cannot. Worse, we don't know which time is which.
So, AI? Nope. I don't trust it one bit.
Oneironaut
(6,295 posts)AI is not in itself bad. The mainstream use of it, however, is completely unoptimized and in one of the worst ways possible. This is what most people think "AI" is, when, in reality, "AI" has been around and being improved upon for most of everyone here's lifetimes.
Wall Street thinks AI is just a way for short term profit and cost reduction, which makes me sad. It's completely different from scientific uses of AI, for example.
PufPuf23
(9,837 posts)limit what you perceive, influence what you think, is not as readily recognized, is less a personal choice and lacks the potential for grooming society. A fork is less prone to lie.
bigtree
(94,246 posts)...adjust your expectations accordingly.
Progressive dog
(7,599 posts)It has no capacity to do anything physical. It has no capacity to experiment, to create data, to learn on its own. It is artificial, but not intelligent.
There are things that need doing that perhaps AI could help with, but those things only get done if humans can pay for them. Right now, AI companies are burning money to build data centers and supply them with electricity. They may never make a profit.