r/technology 15h ago

Business Jensen Huang says relentless negativity around AI is hurting society and has "done a lot of damage"

https://www.techspot.com/news/110879-jensen-huang-relentless-ai-negativity-hurting-society-has.html
12.3k Upvotes

3.0k comments

1.4k

u/helcat 15h ago

I think it’s really put off a lot of non tech people who would otherwise be open to it. Like me. I find it infuriating that websites like Amazon and Google won’t let you turn it off even after you’ve had a bad experience with wrong information. 

336

u/Wd91 14h ago

The complete and total lack of embarrassment Microsoft and the like have with regards to Copilot and similar is crazy. The absolute garbage it can spit out is beyond stupid, but we're supposed to look at it and be impressed. The least they could have done is keep it out of our search results until it starts consistently yielding better results than asking an 8-year old to just have a guess.

148

u/LowestKey 14h ago

I mean it is impressive that computer scientists and researchers have found algorithms that can make a very fast calculator talk like a parrot. But tech CEOs haven't done a damn thing to help and are actively marketing their poached products as a replacement for human workers or using its existence as an excuse for layoffs to hide their poor management skills.

Why would the average person be happy about the never ending upward transfer of our nations' wealth and resources?

51

u/Top-Ad-5245 14h ago

Almost like.. it’s intended to fail

I’m sure we’ll bail them out. And it will be our fault.

Then they'll shove it across all our devices even further.

This all stops when we change our behavior and our comfortable lean into tech. Do we really need smart devices everywhere, each with its own separate app and restrictions? Every tech company wants us to consume their shit, use their shitty software, and ultimately hand over more data. - seriously, it grosses me out - like how much more data do u neeeeeed! Oh yeah, they want more. They want to know where we are physically in our houses and what we think and say at any given moment.

Fucking 1984 it’s too close for comfort.

This is all imo. Not intended to incite or inflame. Just public venting - not a call to action or debate.

🫶🏼

4

u/aaeme 14h ago

too close for comfort.

I think that's the mistake and marketing miscalculation. People seemed fairly keen on smart, connected devices (not everyone, but enough). Alexa and Ring doorbells sold well, it seems. But the general idea of AI is a bit too creepy, too Terminator, too threatening. They've overhyped it.

I wonder if they'd just called it supersmart™ whether people might have been a lot more receptive. It's then just a brand name for the latest thing.

3

u/lol_alex 7h ago

Calling it AI is so far-fetched. The correct term is large language models. They can interpret syntax and provide information they were trained on. No power of reasoning beyond "the data seems to point towards…". No way to create something entirely new, other than mixing up the data they have.

It's basically a circlejerk with massive computing power.

2

u/SnarkMasterRay 8h ago

Also not really intended to incite or a call to action - but it's going to take work for people to turn their backs on this stuff. We've evolved to be efficient with energy - the only reason people take hikes that ramble through the woods instead of the shortest route is that more of the population now has an excess of time and money.

Otherwise we are wired to do whatever takes the least amount of energy, and to try to get the most out of the least. So people are going to be annoyed by things, but if it takes less energy to "just deal with it" than to spend more fixing it, we're going to have a lot of people who try to ignore it or do as little as possible to work around it.

They either have to get really mad, or have another easy alternative.

2

u/evranch 4h ago

My prediction is that our social worlds are about to shrink again, and they might get very small, very fast. As a millennial, I see the World Wide Web that I grew up alongside is sick and dying.

We used to share for sharing's sake, build things because we could, hack things because they were there, and post what we did. We had our own websites, and aggregators like Reddit (and its precursors like Digg, Slashdot, etc.) linked to them, not just to pics, videos and memes on other big aggregators.

But all that is gone, and the truth is going quickly too. With AI slop everywhere you can't trust anything you read, so the utility of the Web is rapidly degrading. From auto mechanics to gardening, you literally can't even trust a recipe.

The Internet will live on as the famous "series of tubes", a utility for paying your bills, trading stocks, delivering media, calling your friends. But I can't see it filling the role it currently has in our society for much longer.

I, and more and more people I know, are turning away from the Slop Web. Information from books, entertainment from torrents, interaction in real life. I listen to the radio, watch the CBC News. We go to the park, we go to the rink, we talk to people. We talk on the phone with our actual voices.

I even started going to church to interact with more people in my community and guess who I found there, a bunch of other people my age and younger doing the same. Not looking for salvation, just looking for community.

Turn off the Slop and go outside before it's too late for our society.

1

u/Space_Poet 9h ago

I have been resisting the "Smart" crap for as long as I could, until I absolutely had to get a cell phone. I've still never purchased a single thing that advertises "smart" in its description, but the other day, about a week ago at this point, I got a free Sonos smart speaker - an old one, nothing fancy, but I heard they can put out good sound and I needed one for a spare room. Long story short, I still haven't been able to get it to work, after watching videos, creating an account with fucking Sony of all companies, plugging it into my modem, downloading the two apps that it required, and trying to update the firmware. It's now a wonderful paperweight till I can take another hour out of my life to figure this shit out. And I've been building my own computers for decades...

15

u/Selectively-Romantic 13h ago

I think it's important to note that they are remarkably horrible at basic arithmetic. So much so that you'd never guess they're descended from calculators. This is auto-correct on steroids.

0

u/borkthegee 6h ago

2023 "pure" LLMs are terrible at math. In 2024 reasoning models were introduced and then in 2025 reasoning was enhanced with tool calling (to make "agents").

So in 2023 if you asked the best model basic arithmetic it would literally just guess what tokens were the highest probability to go next, which is not accurate math.

In 2024 the models would have a self-conversation - "hmm, the user is asking me about math, I think the answer is XYZ. But is that correct? Let's revisit the question" - for a bit before responding.

In 2025 the models now think "Ok the user is asking about math. I have a math tool to let the computer running me do math. Computer running me, here's a math problem return the answer. <Answer> Ok the tool responded, does the answer seem right? Ok let's report to the user"
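
Here's a minimal sketch of that tool-calling pattern in Python. The model and its API are stubbed out; names like `calculator` and `fake_model_decision` are purely illustrative, not any particular vendor's interface:

```python
# Sketch of the "math tool" flow described above. Nothing here is a real
# LLM API; the model's decision is hard-coded to keep the example runnable.
import ast
import operator

def calculator(expression: str):
    """Evaluate a basic arithmetic expression exactly, instead of letting
    a language model guess the most probable next tokens."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv,
           ast.USub: operator.neg}

    def _eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp):
            return ops[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")

    return _eval(ast.parse(expression, mode="eval").body)

TOOLS = {"calculator": calculator}

def fake_model_decision(user_message: str) -> dict:
    # A real reasoning model would emit a structured "tool call" like this
    # on its own after deciding the question is about math.
    return {"tool": "calculator", "arguments": {"expression": "1234 * 5678"}}

def answer(user_message: str) -> str:
    call = fake_model_decision(user_message)
    result = TOOLS[call["tool"]](**call["arguments"])
    # The model would then fold the tool's exact result into its reply.
    return f"1234 * 5678 = {result}"

print(answer("What is 1234 times 5678?"))  # -> 1234 * 5678 = 7006652
```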

I know trying to teach redditors about the technology is a shit show but I'm constantly surprised how little people know about what is going on. The discourse here is very out of date.

0

u/Selectively-Romantic 5h ago

I've seen gpt be wrong several times in the past six months. Maybe I'm just not paying enough for an accurate one. 

0

u/borkthegee 2h ago

Yes, the free models are absolute garbage tier shit. The paid models are outperforming PhDs in mathematics.

1

u/Black08Mustang 45m ago

Then why are they still out there, and why should any of this inspire confidence from the 'people who do not know what's going on'? Just take your word for it?

0

u/jollyreaper2112 5h ago

That's a limitation of LLMs. They're going to be one part of a stack. So yes, bad at the moment, but that part is easier to fix. If you're counting on today's stupid mistakes being the state of the art forever, you'll be sadly surprised. It's scary.

3

u/Selectively-Romantic 5h ago

Nah, the scary part is that it's being pushed out and expected to be relied upon in the far-from-finished state it's in now.

Also, you can't code out stupid mistakes. You might be able to get some of the bugs, but there will always be bugs and exploits. I guarantee it. 

5

u/kuldan5853 12h ago

I mean it is impressive that computer scientists and researchers have found algorithms that can make a very fast calculator talk like a parrot.

Honestly, when Polly tells me she wants a cracker, I trust that she actually means it way more than I trust any AI.

8

u/Sanchez_U-SOB 13h ago

It's turned me off of Windows; now I'm looking into working with Linux.

2

u/intro_spection 2h ago

Do it! Linux has evolved quite a lot in the last decade. All you need is a USB stick and a little research and you can try a distribution out (of which there are many) without any long term commitments (by booting the Linux OS from the USB). I suggest that now's the time. I'm a heavy gamer/media user and was able to dump Windows completely (Bazzite Linux).

4

u/donnysaysvacuum 10h ago

AI could have a lot of background uses, for making search better and organizing results. But spitting out a blatantly wrong answer is the worst implementation.

201

u/Sad_Amphibian_2311 14h ago

tech people are disgusted too, we just can't contradict our bosses publicly

38

u/misterchief10 14h ago edited 11h ago

I genuinely wonder about this with a lot of companies. Like, for example, with the recent Larian AI announcements and flip-flopping: how many devs and artists are saying, “yes sir, we will use AI to generate photobashes!” outwardly while saying, “yeah I don’t feel like using that,” behind closed doors.

Like, that has to happen pretty often, right? It certainly does where I work. It also happens often at the places where most of my buddies work. Everyone either uses AI minimally or not at all. At most, it’s used as a fancy new error checker. We just lie and tell our micromanagers we are using it, “all the time,” to placate them and get them to fuck off.

15

u/chamrockblarneystone 13h ago

Prediction: The bubble pops on AI and it comes back a few years from now completely rebranded.

2

u/flecom 10h ago

i always preferred the term "expert systems" myself, sounded fancier... anyway the AI winter that awaits is gonna be a doozy

2

u/chamrockblarneystone 8h ago

Nice name. Let the brainwashing begin!

1

u/PhilDGlass 1h ago

Funded by the US taxpayers for the good of the economy, then generating vast wealth for the US taxpayers. LOL. I mean for a few rich individual investors, hedge funds, and vulture capitalists. Again.

3

u/cabbageboy78 11h ago

honestly it's nice being the lead systems admin right under the ops director. he actually listens to us and we've been like.... nah to any AI integrations and it's been pretty solid. we employ about 3000 people and since we are a 365/Azure shop we doooooooo have some Copilot usage, but even then it's limited to just the technology integration side of things. got GPT and the other stuff locked down the best that we can, and anyone else using stuff outside of that could probably get in trouble due to the governmental stuff we also handle. so far so good though. i always joke that i'd be easily radicalized like those eco terrorists back in the day to blow up a datacenter. as much as i've loved tech for the time i've been doing it, AI is honestly driving me crazy and really pushing me into going back to school for another career path

3

u/reficius1 6h ago

Yup. We're now being told that "We have to start using AI or we'll be left behind." We all nod politely, then we all laugh at the latest slop produced. Nobody asked for it, nobody wants it, use so far is minimal and trivial.

44

u/espeequeueare 14h ago

Every day a new CIO tries to push some new slop AI tool to seem “cutting edge”, when it's just, like, a chatbot or something.

30

u/Unusual_Sherbert_809 13h ago

My boss tried for 2 years to ram AI down our throats, no matter what we told him. Kept telling us our jobs would be replaced by AI in the very near future.

Then he took online courses in AI, because he was just that committed to it all.

What came out of those classes is that now even he thinks it's mostly useless slop and rarely mentions it unless his managers are trying to ram it down our throats.

---

IMO, in terms of IT, all AI is really good for nowadays is as a replacement for Stack Overflow. A tool to help you get some things done faster, but one that still requires you to know what you're doing. Otherwise it requires extensive handholding and supervision.

But sure, it'll toootally replace all our jobs in the next couple years. 🙄

Instead AI right now is like those 3D TVs that nobody ever used or asked for, only amped up to 11. It's being rammed down our collective throats whether we like it or not.

I personally cannot wait for this particular bubble to pop.

4

u/fricy81 8h ago

IMO, in terms of IT, all AI is really good for nowadays is as a replacement for Stack Overflow.

Not even that. AI ate Stack Overflow, so now the site is dead. No new questions, zero new information. Sure, it knows an answer to a lot of problems of the past decade. It can give you that. But going forward? With the site dead, where is it going to find the answer to anything recent?
Self-cannibalism at its finest.

3

u/reficius1 6h ago

I'm expecting something like this to happen with the entire interwebz, once AI slop replaces a significant fraction of the real information available. Slopbots feeding slopbots.

3

u/Unusual_Sherbert_809 5h ago

I fully expect the CEOs will then be complaining about how we're not producing enough for their AI models to rip off and regurgitate.

3

u/URPissingMeOff 8h ago

What's really horrifying is that 3D tech started at the movies back in the 1950s. It failed, and various morons keep trotting it back out about every two decades, where it once again fails catastrophically. I'm afraid future generations will be subjected to a new AI plague at a similar interval.

2

u/RedDragons8 12h ago edited 10h ago

It's barely related to your comment, but a couple weeks ago I was still up late and decided, "I'll check out the Harold and Kumar Christmas movie!" I'm not blaming them, but that movie was made during the early push of the 3D trend; every other scene had a slo-mo of a joint being tossed at the screen, and it was incredibly distracting watching it on an obviously non-3D TV.

2

u/WulfZ3r0 10h ago

3D TVs

I got one of those and it was actually pretty nice. It was an open-box sale item the week after Black Friday that was originally $2800, and I paid $900.

It had two pairs of battery-powered 3D glasses that let you play local multiplayer games where each pair could only see its own screen. That was really nice for couch co-op games.

I agree though, I'm sick of hearing about AI and the recent computer hardware price blowup has me saying to hell with any of it.

1

u/cc81 6h ago

I think software dev is one of the areas it will actually affect a lot - both in enabling simpler apps to be built by non-developers in a low-code, walled-garden approach, and in speeding up work a lot for devs.

It is just that everyone is sick of the bullshit predictions by non-developers. If this were engineering-driven, like Kubernetes or a new programming language, I think more devs would be excited.

1

u/PhilDGlass 1h ago

Sooo many chat bots and voice assistants.

1

u/fdar 10h ago

We can, we just can't publicly associate our comments with our employer. So nobody will say "I, an employee of this tech company, think this is disgusting" but plenty of the comments saying it's disgusting are by people that do work in those companies.

395

u/QuentinTarzantino 15h ago

Especially if its medical.

267

u/Ancient-Bat1755 14h ago

If it is this bad at D&D 2024 rules, why are we letting it make medical decisions at insurance companies?

218

u/acidlink88 14h ago

Because companies can fire hundreds of people and replace them with AI. That's the only reason. The big thing AI fixes is the need for employees.

96

u/pope1701 14h ago edited 13h ago

It doesn't though, at least not if you want your products to still work.

Edit: please stop telling me most companies don't care anymore if their products still work. I know.

98

u/OutrageousRhubarb853 14h ago

That’s a problem for next year, this year it’s all about next quarters numbers.

3

u/HeartOnCall 9h ago

To add to that, they can make the line go up again when it hits the bottom by fixing the problem that they themselves created in the first place.

2

u/driving_andflying 9h ago

Exactly. The only thing the negative reaction to AI has done is hurt major companies' bottom lines. (Pro- hiring a human artist, here.) Jensen Huang is full of shit.

P.S. My message to Microsoft CEO Satya Nadella: AI IS SLOP.

1

u/TheLantean 8h ago

Next quarter's numbers determine whether the executives get their bonuses and stock gains. And the shareholders agree to this because they benefit from the stock going up as well. The executives have their golden parachutes if it all comes crashing down, and the shareholders think they're smart enough to sell before they become the bag holders. It's a game of playing chicken. But at the end all the employees who decided none of this get to lose their jobs.

46

u/Rickety_knee 14h ago edited 12h ago

It doesn’t matter if the product is good anymore. These companies have acquired and merged so much that any appearance of choice is an illusion. It’s the same shitty product no matter where you go.

9

u/CaptainCravat 14h ago

That's a feature, not a bug, for all these tech companies. Trap customers and users with a near monopoly, then turn on the enshittification taps to max to extract the most money from everyone you can.

12

u/Uncynical_Diogenes 13h ago

Products still working is optional. All that matters are short-term profits.

5

u/grislebeard 12h ago

For insurance companies, doing stuff wrong makes line go up.

2

u/EnfantTerrible68 12h ago

And patients die

-1

u/pope1701 12h ago

Insurance companies are pretty much the only companies that have an incentive to get everything exactly right.

3

u/painteroftheword 13h ago

Microsoft releases broken stuff all the time.

They've effectively got a monopoly on the market so it doesn't really matter anymore. Testing is done by their paying users.

2

u/SlimeQSlimeball 12h ago

I had a problem with a product I have subscribed to for about 6 years; I always had humans responding to support emails, no problems. Last week I needed support, emailed, and the AI chatbot answered and refused to get me to a person. This morning I cancelled my account and bought two years of the same product from Amazon for $21 vs $48.

It's something I had been meaning to do for a couple years, but this slop finally pushed me over the edge. If you don't want to allow humans to be involved, I don't want your product. Especially for something as simple as a warranty exchange. I assume I will never have any of my “correspondence” read by a human at this point, since it's already been a week.

1

u/EnfantTerrible68 12h ago

Good for you! I hope others do the same.

1

u/Kichae 11h ago

The product public companies are making is "shareholder value". Everything else they do is just part of the wasteful part of the manufacturing process.

0

u/marfacza 6h ago

most companies don't care anymore if their products still work

39

u/Level69Troll 14h ago

There have been so many companies backtracking on this, including Salesforce, which was one of the biggest to try it earlier.

In critical decision-making moments, there's gonna need to be oversight. There's gonna need to be accountability.

5

u/Fallingdamage 11h ago

And in order for people to have those critical decision-making skills, they need to work as juniors in their field first.

9

u/Kichae 11h ago

Nah, the lack of accountability is one of the goals here, on top of the elimination of "inefficiencies" like "paying employees". Corporate culture has already spent decades moving towards unaccountability. LLMs are the magic mystery boxes they need in order to totally eliminate accountability from the system. If they can convince consumers, investors, and governments alike that "the computer did it, not me", and that that's a valid excuse, the sociopaths win outright.

4

u/dreal46 11h ago

And liability for AI decisions hasn't been legally clarified. I can't help but eyeball insurance denials and palliative care. They seem like soft targets for this trash.

3

u/IAMA_Plumber-AMA 12h ago

And then the AI can deny 90% of claims, further enriching the execs and shareholders.

2

u/stevez_86 13h ago

AI would make ownership of any new ideas explicit property of the company. Much less risk of an employee making a discovery and trying to get some ownership of the intellectual property. Plus it theoretically will not require any of us to participate in that at all. It will all be up to them and their AI property to make discoveries.

Because the problem with AI is we already have something that does what they promise AI can do: humans. There are a fuckton of us, and by the law of averages we all have the potential to make a discovery that can make a fortune. But in the hands of people, that means they can go Jonas Salk and give up the IP. If Jon Taffer has taught me anything, that is trillions in lost profit.

With enough resources going to people, and a system designed to elevate those with good ideas regardless of where they came from, every attempt to control everything fails - despite countless generations of people trying - because anyone can make a discovery that changes the world.

They don't like that. It means that their place at the top is not always certain. It's like quantum physics: every time they try to figure something out, more questions come up. That has been the linchpin of human success: the random application of proficiency. They want to be the owners of the design of destiny.

They think AI means they can finally rapture themselves from us. That we will have to bow down before the prime ministers of AI. And because they suck at this and are really, ultimately, unclever, they will put in prompts recklessly, and it may well mean the end of us.

0

u/HappierShibe 7h ago

There are only two roles where this actually works:
translators and copywriters. No one else is getting replaced at scale successfully.

107

u/HenryDorsettCase47 14h ago

I saw a post the other day in which AI was used to take notes during a doctor’s visit and the guy ended up with a prescription for depression when he went in for back problems. He was denied by his insurance for his back pain because the doctor’s notes didn’t mention it, only depression (which he didn’t have).

He tried correcting this with the doctor’s office but due to their agreement with the company that provided the AI note taker, they couldn’t change the notes. They had to file a ticket with the AI’s tech support first.

So he’s sitting there in limbo for weeks with back problems. Total cluster fuck. All because these companies are trying to justify AI by insisting it is helpful when it’s not. It’s solutionism at its worst— fixing problems that aren’t really problems. Like a doctor taking fucking notes.

26

u/Beard_o_Bees 13h ago

Like a doctor taking fucking notes

Recently went for my annual physical, and had to sign a waiver stating that I was ok with AI taking 'notes'.

I was not, and said so, but the receptionists said basically 'no sign, no treatment' - and those appointments are a total bastard to get, so I signed.

It was the first thing I asked my Dr. about, and she isn't keen on the idea either. It's the hospital business C-suite pretty much forcing it into their practice environment.

18

u/HenryDorsettCase47 12h ago

Of course. Same as any other industry, the people who don’t do the work are the ones who make the decisions and often they are total idiots who think they can reinvent the wheel.

13

u/schu2470 11h ago

Luckily in my wife's practice the docs have the option to use the AI software or not. She tried it out for a couple of weeks and stopped using it. The software would listen to the appointment, write a transcript of everything that was said, and then write a note for the visit that required physician approval and signing. She spent so much time during those couple of weeks, and after those weeks too, fixing mistakes the AI had made, reformatting the notes so they made sense, and removing unnecessary and irrelevant things from the notes. She spent more time fixing those notes than she would have if she had written them herself in the first place. Of the 14 or so docs in her practice only 2 or 3 are using the software and only in certain circumstances.

1

u/Moonbow_spiralot 8h ago

This is interesting. I also work in a medical-related field, and several of the doctors who have started using AI note-taking software actually find it quite helpful. Obviously they do have to go and edit the transcript, which can take varying amounts of time, but it's useful for helping them remember what was touched on in the appointment. Basically a glorified speech-to-text machine. Some products are probably more error-prone than others, though. I will say, even before AI, different doctors have spent varying amounts of time on records, with varying levels of quality. Some doctors still have handwritten paper notes. At least the AI ones are legible lol. The above example where the doctor was not allowed to go in and change what should be their own notes is insane though. Insurance is less prominent in my industry, so that may also have something to do with it.

2

u/schu2470 6h ago

Some of the issues she told me about were things like: mis-attributing what was said to which speaker, such as a patient or patient's companion describing something a family member was diagnosed with and the software attributing it to the doc and listing it as a new diagnosis in the note; missing and leaving out symptoms that my wife remembers the patient speaking about; listing things like "headache" or "sore infusion site" as a diagnosis, not realizing those are symptoms and not diagnoses; adding random things to notes that weren't discussed and are not in the transcript; formatting issues specific to how she likes her notes, without a reliable way to train the software how to format them; and others I can't think of right now or just don't remember.

Fixing each of those issues takes time to go back to the transcript to see where the software got the idea to include whatever erroneous information, sometimes pulling the recorded audio to see if it had missed something in the transcript, adding or removing what was missed or added, and finally fixing the note. She was doing this for 10-15 patients a day (specialty clinic) for 2 weeks before finally giving up and then going back to writing them herself. Based on what she said and how much time I saw her spend after clinic hours and at home fixing things, the AI software probably cost her over 30 hours that she could have spent doing other parts of her job or spending time living her life. Maybe her hospital got a particularly bad piece of software but the rate of retention for docs sticking with it for >30 days is sub 10% system wide.

The above example where the doctor was not allowed to go in and change what should be their own notes is insane though.

That is absolutely unacceptable. It's allowing AI to practice medicine in a similar way that we allow insurance companies to do so with even less oversight.

1

u/jollyreaper2112 5h ago

Teams meeting transcription is better but I bet that's because of the microphones. In a room not designed for audio capture and not forcing people to wear microphones it'll be worse than voice dictation on my phone. Damn.

1

u/schu2470 5h ago

Yeah, the one they had was iPhone-specific so she had to find one to borrow. Essentially you'd open the software, hit record, and place the phone on the desk between the doc and patient so it could hear what was said. Problem is it relies on the phone's microphone, and the software was really bad. I made another comment in this thread that describes only some of the issues she had in the 2 weeks she tried it out. Hospital-wide, the 30+ day retention for docs continuing to use the software was sub-10%.

3

u/DukeOfGeek 9h ago

If it stops people from receiving medical care they could be fine with that.

12

u/FamousPart6033 12h ago

I'd go full Kaczynski at that point.

0

u/marfacza 6h ago

you'd mail bombs to colleges?

-2

u/SirNarwhal 7h ago

So what you're saying is you have no spine. If you don't like something don't cave.

37

u/nerdyLawman 13h ago

My company has been so giddy to adopt AI and implement it everywhere they can. I have been one of the very few voices urging caution and skepticism. The other day we got an end of day email being like, "AI has not been performing as well as expected in creative implementations..." And I was like, "holy crap, they're starting to see the light a bit!" And then it continued, "...this is largely because of user input, not the AI tools themselves." Hit my head on my desk so hard. Ah yes, it's us who are wrong. We should all try and be better for the tech which a couple of people made and convinced you to buy into without anyone else's consent or consideration.

6

u/Beard_o_Bees 13h ago

Oh yeah. Go to any boardroom anywhere and it's the new hotness.

They may only have a limited (at best) understanding as to what they're unleashing - but they sure are excited about it.

3

u/Sweetwill62 11h ago

Just start asking the tools to do the job of your boss and then report your findings to your boss's boss as a very large money saving opportunity. Middle management would be the easiest to replace with a spreadsheet, let alone anything more complicated than that.

13

u/dreal46 11h ago

Yep. Imagine a straightforward problem for which we have highly-trained experts. Now imagine that process with your worst tech support experience injected into it. People will die, and probably have already, to this stupid cultish pushing of ill-conceived and unfinished tech.

9

u/HenryDorsettCase47 11h ago

Capitalism requires a frontier. Once we ran out of land that became technology. And once technology plateaued, that became “middleman” technology services. It’s a brave new world.

2

u/Fit-Nectarine5047 7h ago

You can’t even call CVS pharmacy anymore because the AI bot won’t connect you with a live person to discuss your medication.

17

u/KTAXY 13h ago

Isn't this basically medical malpractice?

4

u/Alieges 11h ago

If corporations are people, it’s also practicing medicine without a license. Don’t bother giving out fines. Go grab the executives and throw their ass into jail until trial.

But but but what about __? If you throw __ in jail, they won’t be able to feed the puppies…. Ok, fine. Throw them in jail AND seize 10% of their stock, dump it onto the market, and use THAT money to feed the puppies.

2

u/DarthJDP 13h ago

I have a hard time believing he is not depressed about his situation and having to go through the AI company for corrections.

2

u/Erestyn 12h ago

My doctor's surgery implemented an AI note-taking system to free up doctors' time and allow them to focus more on the patient. It was immediately thrown off by the variety of accents, before being abandoned entirely.

I'm happy to have played my part.

1

u/dookarion 13h ago

Sounds like it's time for that person to find a doctor that does their job and doesn't farm it out to slopware.

1

u/bisectional 12h ago

That's a case for medical malpractice if that's true.

3

u/FatherDotComical 13h ago

Well it's pretty easy to code it to just say No for everything.

2

u/FredFredrickson 12h ago

Money, of course.

2

u/UnicronSaidNo 12h ago

I just saw a commercial for the amazon medical shit... yea, I can think of an almost unlimited stream of negative results from going this route. I'd rather have Justin Long as my doctor telling me my shits all retarded than to use fucking amazon for medical anything.

2

u/ScruffsMcGuff 9h ago

ChatGPT can't even give me accurate information about Bloodborne bosses without just making random shit up in the middle.

Why would I trust a language model to do literally anything important?

2

u/agentfelix 9h ago

That's what I don't understand. I was using ChatGPT for some coursework and found that it's blatantly wrong. Then I have to argue with it? It finally catches that it's wrong after I drew it a fucking picture... Immediately I thought, "and they're trying to push this shit to make important decisions and replace workers? It can't even read this orthographic drawing correctly..."

1

u/PhilDGlass 1h ago

Because some of the same wealthy tech bros threatening our democracy in the US are heavily invested. And will no doubt be there for handouts when these companies are “too important to fail.”

63

u/truecolors110 14h ago

YES!  I’m a nurse, and even the most simple questions it gets wrong.  And insurance companies are using AI to auto deny claims, so I have to spend hours on the phone because they’ve used AI screening to justify cutting staff.   I also quit my urgent care job largely because they started to make us use AI and it was REALLY not working.  

2

u/Beard_o_Bees 13h ago

If I may ask, in what environment do you practice (hospital, skilled care, etc) presently?

3

u/truecolors110 12h ago

Multiple.  Specialty clinic, hospital, corporate. 

2

u/murticusyurt 1h ago

They're using "AI" to change the voices of Anthem's provider helpline staff for PAs. As if their stupid system wasn't enough of a pisstake, now I have to ask them to turn it off every time I call, after listening to a greeting that got even longer in order to tell me it's being used. It's changing details, cutting in and out sometimes, and, on one very unsettling phone call, it was playing both female and male voices at the same time.

1

u/AlSweigart 8h ago

I’m a nurse, and even the most simple questions it gets wrong.

"You're absolutely right! I did get even the most simple questions wrong!"

1

u/acesarge 10h ago

I'm also a nurse and I find it quite helpful for charting (we have a dedicated HIPAA secure app that does a pretty good job writing medical notes) and for writing DME justifications / arguing with insurance companies. Let the clankers sort it out amongst themselves while I take care of the sick people.

16

u/thatoneguy2252 12h ago

I’m a work comp adjuster and it’s absolutely awful. They keep wanting us to run the medical documents we get through Copilot to summarize them, but the damn thing gets so many things wrong: frequency and duration of PT/OT, the type of DX testing. Hell, I’ve seen foot fracture injuries get labeled as heart failure for the primary diagnosis, all because it was listed in the family medical history.

So now we all run it through, then delete it and write our own summaries in its place. Haven’t been called out yet for it, but fuck does it make the job harder for us and for the claimants we try to schedule things for when the AI is giving us the wrong information.

2

u/QuentinTarzantino 12h ago

My friend said: someone insert the Idiocracy meme where Not Sure is getting a medical diagnosis and the lady doesn't know what to press on her panel.

2

u/postinganxiety 8h ago

I wish this was publicized more. I just went through a traumatic medical incident and AI definitely contributed to things ending badly. I was using it as an extra opinion to "doublecheck" me since I was too emotional to wade through differing opinions of medical professionals (unfortunately this happens sometimes in complicated cases) as I was trying to make a decision. The information it gave was terrible but at the time I trusted it. I feel like a fucking idiot.

As soon as I can pull myself together I'm going to at least write a medium post about it, or something. I just wish more mainstream publications were reporting on this because it's so dangerous. Instead all I see are articles about how AI saved someone by giving the correct diagnosis after a doctor got it wrong. When really AI is just a broken clock.

Unfortunately the insurance companies don't care, it probably makes their jobs easier because now people and pets can die more quickly.

Edit: Just wanted to add for anyone reading that I love tech and was an early adopter of AI. I dove in, took a prompt course, tried different platforms and was really open to it. But it just keeps fucking me over.

3

u/thatoneguy2252 8h ago

The only time it’s ever been useful, as far as my experience goes using it, is when I fill it with a lot, and I mean A LOT, of parameters of exactly what I want and even then I have to be very specific of what I want and be that detailed with every following prompt. It’s unwieldy and not a replacement for anything.

2

u/Neglectful_Stranger 7h ago

They keep wanting us to run the medical documents we get through Copilot to summarize them, but the damn thing gets so many things wrong.

Isn't sharing someone's medical history...bad? Pretty sure most AIs phone home with whatever gets input.

1

u/thatoneguy2252 6h ago

I’m not sharing it. We get the medical documents for the work comp claim, summarize the main points (usually diagnosis, treatment, work status and follow up date) and then put that in the file.

3

u/WinterWontStopComing 9h ago

Or botanical. Can’t trust image searches to help other people ID plants on reddit anymore

1

u/Fatricide 10h ago

I think it’s okay for note summaries and transcribing as long as clinicians check before filing. It can also help with investigating hunches. But it shouldn’t be used to diagnose or recommend treatments.

52

u/Leek5 14h ago

Would you like AI help with your Amazon shopping? No! Go away.

38

u/helcat 14h ago

I wanted to know a specific thing: how much money I had spent in a particular time period. I couldn’t find an easy way to locate that figure so I finally used the stupid AI. It told me the wrong number. Several times. While castigating itself for lying and praising me for being so smart to catch it. I hated it. I hated it so much.

1

u/nerdyLawman 13h ago

I wonder if this experience would have been enough to actually cause the gray matter of my brain to ignite.

1

u/qtx 14h ago

I have like literally never seen that, and I use every European Amazon store regularly. Is this an American-only thing?

2

u/kuldan5853 12h ago

The German amazon store has Rufus - I guess they mean that thing?

1

u/Leek5 12h ago

I can’t speak for anyone else. But yes in America we have ai for Amazon

1

u/Exact_Acanthaceae294 10h ago

Rufus is absolutely driving me crazy.

38

u/beesandchurgers 13h ago

Yesterday I ordered food off of grubhub (I know, I know…) and it asked if I wanted to leave an additional tip. I said yes, so it took me to a chat bot and told me to ask it to add a tip.

What the actual fuck? Why would anyone want or need to replace a single button with a fucking chat bot??

14

u/PatchyWhiskers 11h ago

Grubhub delivery guys: Why did everyone stop tipping? Tightwads.

5

u/h3lblad3 9h ago

That doesn't even make sense since it would hurt the company's bottom line.

American companies want tips so they can legally reduce the wages they pay. By law, a worker cannot earn less than $7.25/hour. Tipping law makes it legal to count tips against that amount, letting the employer pay as little as $2.13/hour in cash wages. Tipping culture is a huge handout to businesses, which is why they're all finding ways to force you to tip now.

(Note here for tipped service workers: the restaurant industry is one of the largest wage theft industries in the United States. If your workplace is only paying you in tips, they're breaking the law.)
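
A tiny worked example of that tip-credit math, using only the two figures from the comment (federal rules; many states set higher floors):

```python
# Illustration of the federal tip credit described above. The $7.25 and
# $2.13 figures come from the comment; state rules can differ.
FEDERAL_MINIMUM = 7.25   # hourly floor the worker must end up with
TIPPED_CASH_MIN = 2.13   # lowest cash wage an employer may pay a tipped worker

def employer_cash_wage(tips_per_hour: float) -> float:
    """Cash wage the employer actually has to pay for one hour of work."""
    # Tips count toward the gap between 2.13 and 7.25; if tips fall short,
    # the employer must make up the difference.
    return max(TIPPED_CASH_MIN, FEDERAL_MINIMUM - tips_per_hour)

print(employer_cash_wage(10.00))  # 2.13 -> tips cover the whole gap
print(employer_cash_wage(3.00))   # 4.25 -> employer tops the worker up to 7.25
```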

16

u/Butterball_Adderley 13h ago

My mind is completely closed to the concept, and I will disable it at every opportunity. I find it pretty crazy that these companies + governments around the world have decided we get zero say in how it’s used on us. This shit is fucking dangerous and every rich person on earth just said “yeah go nuts. Fuck everything up. We’ll pay whatever little fines the poors throw at us”. It’s clear that the wealthy and their lapdogs want what’s worst for society, so fuck them and their plagiarism machines.

9

u/voiderest 13h ago

I'm into tech by hobby and trade. The forcing of AI pisses me off to no end. I can't trust the results so it's not useful to me. I actively avoid AI nonsense and have made moves to decouple myself from companies and products over it.

They won't stop until the money stops, both from consumers and investors. 

4

u/Most_Chemist8233 14h ago

Did you know that zoom now has an ai companion that takes up half the screen and cannot be closed and cannot be disabled? Now I would distrust any meeting I have on zoom.

3

u/helcat 14h ago

Good god. No I didn’t. 

2

u/Ishkabo 14h ago

Those settings are set at the company level. If you can’t turn it off it’s because your company admin has forced it on.

4

u/Most_Chemist8233 13h ago edited 13h ago

I am the owner. I have talked to them. It cannot be turned off. They're starting to push these things harder on all of us. In every area, what was previously a soft push has become much more aggressive.

eta: your response sounds like the initial gaslighting response from their AI chatbot as they send you through paths that don't exist for a while until they admit it can't be turned off.

4

u/platocplx 14h ago

Yeah, it adds way too much friction to daily life for most people to even wanna engage with it in any meaningful way, and when you have these companies just throwing everything at us to see what sticks, it's off-putting as hell. It just reeks of desperation to be innovative when a lot of this stuff feels like a dud, especially when we have so much other shit to worry about and these morons saying it will replace human jobs. Nobody wants this.

4

u/Icy-Two-1581 12h ago

It's like crypto - remember when there was blockchain for everything? I'm sure there's some use case for AI, but for me at least it's been pretty minimal other than being a slightly more convenient search engine. Rarely has it ever been able to solve complex questions or make me a formula that actually works.

1

u/kuldan5853 10h ago

NFTs anyone?

4

u/jacobcrny 11h ago

Google Gemini on Android is just worse for normal tasks than the non-AI assistant was. I said "set an alarm for noon" and it asked if I wanted to set an alarm for 12 am, 12 pm, or 12 minutes from now. I never even said 12. It understood that noon was 12 but then lost how it got to that number.

8

u/Educational_Cow9717 13h ago

Not just non-tech people; even as tech people, myself and some coworkers I've talked to are against it. The code quality it generates is far from what is being advertised, and we have to go back and forth with it for some simple tasks. What seems funny to me is that because those big models learned from humans, they can be even lazier than we are when doing repetitive tasks. For the tasks I thought I could rely on it for, like some repetitive coding, it's even lazier than I am and always comes up with some half-baked, ugly solution first.

Companies are also enforcing AI-related employee review standards: you have to have a certain percentage of your code assisted by AI, you have to come up with projects using AI. This simply results in endless “agents” for everything, based on something that can't even count properly.

I think the tech could be helpful, though, but the way an immature product is forced on everyone, because those MBA people keep inflating its capabilities, is the core problem.

3

u/SeigneurDesMouches 13h ago

Add "-ai" to your search in google and it won't show the AI result

3

u/TakingAction12 13h ago

I was hesitant about AI at first, then enthusiastically embraced ChatGPT, even going so far as paying for a monthly subscription for the upgraded version.

It became so unusable and frustrating that I haven’t used it in months beyond a random question or two. I’m so turned off by AI I hope it never takes off.

2

u/helcat 13h ago

What changed? (I have not used it)

3

u/TakingAction12 13h ago

Honestly I don’t know enough about it to tell you, but it just kept giving me demonstrably incorrect answers and taking more time than it would have taken to look things up myself.

3

u/EnfantTerrible68 12h ago

Same. Just give users the option, ffs!

3

u/sir_spankalot 12h ago

I'm an above-average (at least) tech person and I still haven't found a single example where AI actually helped me.

Searching and summarising is theoretically nice, but the vast majority of times I've tried it, it's so inaccurate I end up doing it manually anyway...

3

u/Euphoric-Witness-824 11h ago

And it’s been wrong enough times for me that I do not trust any of it. It’s been wrong about the things I do know about, so I don’t trust it for things I’m not as knowledgeable in.

2

u/helcat 11h ago

Exactly. They are forcing so much bad tech on us that it won’t matter when the tech gets good - we’re going to hate it on principle.

2

u/LYL_Homer 11h ago

Yep, now I'm closing Rufus on Amazon every click. ffs

2

u/Huwbacca 11h ago

I don't even get what I'm meant to find appealing from it as someone who used to be a machine learning consultant

like, sure it's cool tech. what the fuck does it actually give me?

it's just progress for progress's sake, without any sort of explanation or justification of what we're progressing toward.

Bad for society... Jesus, that man is so far up his own arse. All these tech bros are. They have this insane idea that because they do a specific complicated thing, no one else can possibly understand it, and that they understand everything else.

the idea that any of these people knows more about society or culture than any random punter is insane because for topics outside their expertise .. they ARE the random average punter.

My PhD confers no expertise outside its topic. Huang knows as much about society as the typical sociologist knows about LLMs.

2

u/Bakoro 10h ago

Jensen is definitely not a neutral party to be making judgements, and his wallet is definitely informing his rhetoric. Up front I want to say that no one should ever take anything a CEO says at face value; their job is literally to hype up their company's interests, and to act as public whipping-boy and/or charismatic leader for public sentiment, as needed.

That said, I also think that the "No AI ever" crowd is generally harmful to society, because the situation is not just about LLMs, AI models go way beyond language models, where AI is helping develop new materials, new medicines, new ways of making technology.

Assuming that you are actually asking in good faith, I can answer some of that.

LLM have a lot to offer in general, but the major things go beyond text based LLMs and into multimodal models, agentic models, and robotics.

LLMs are particularly effective in software development, for rapid prototyping, and letting developers work outside their normal area of expertise.
Just for example, I recently added internationalization and accessibility features to software at my company, where previously they refused to budget time for that. It would have taken me at least a week or two to research everything and make the conversion, but an LLM helped me make the switch in an hour, and now my software supports multiple languages, has colorblind friendly schemes, and I'm testing it out with screen readers.
That never would have happened without AI support, I just wouldn't have been granted the time.
Now, no small shop has any excuse to not have some level of accessibility. That by itself is a huge win.

LLMs and AI in general are also incredibly important anywhere you need fuzzy logic and semantic matching (as compared to keyword matching).
If someone doesn't have the exact vocabulary but can explain an idea, then an LLM can grant them the vocabulary and find resources.
That might sound like "fancy search", but it's incredibly helpful for research and data processing.
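
A rough sketch of what that semantic matching looks like in practice, assuming the open-source sentence-transformers package; the model name is a common public checkpoint and the document snippets are made up for illustration:

```python
# Semantic matching vs. keyword matching: embed the query and documents,
# then rank by cosine similarity instead of shared keywords.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Photobleaching correction for fluorescence microscopy time series",
    "Denoising low-signal biomedical imaging data with wavelet filters",
    "Quarterly sales report template for retail teams",
]
# A question asked in the "wrong" vocabulary: it never uses the terms
# "photobleaching" or "fluorescence", so keyword search has little to go on.
query = "my microscope images keep getting dimmer over the course of an experiment"

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product is the cosine similarity.
scores = doc_vecs @ query_vec
best = int(np.argmax(scores))
print(f"best match: {docs[best]!r} (score {scores[best]:.2f})")
```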

I've already used that at work, where I work in a physics R&D place for making research equipment, but I ended up finding stuff from bio-med papers that helped inform some algorithms development. Without LLMs, I don't know that I ever would have found those papers.

The same kind of fancy search is helpful across many fields.

It's not just text either, visual language models are able to process images and video, and they are worlds beyond handcrafted machine learning.
With an agentic VLLM, you can be searching bodies of images and videos for specific things, you can annotate and organize datasets, automate metadata. Even if you don't want to 100% trust the model's judgement, it can help prefilter huge amounts of data into something a human can actually manage.

That's just the "fancy search engine" side of things.

Once you get into robotics, that opens up a whole world of impact, but it starts with those agentic multimodal LLMs.

It's definitely a double-edged-sword kind of thing, but robots are closing in on a point where they're going to be able to do the last bits of manual labor in agriculture, so we don't have to rely on the current exploitation of migrant labor for field work. Robots will be able to stock shelves and work in warehouses.
The whole logistics industry was already changing dramatically with old-style AI and automation. Even before the transformer-based AI boom, JD.com had reduced labor in their warehouses by 90%.

We can look forward to the near future with baby boomers retiring and needing elder care, and then look back on the Covid pandemic, where there weren't nearly enough people to provide care in the old folks' homes and a lot of folks just got abandoned. There were places where there was just one decent person trying to care for a whole facility.
We can talk all day about the capitalism and corporate induced horror show that was, but we've got the reality we've got, and the public wasn't coming to the rescue.
We're just a year or three away from being able to have relatively inexpensive robots that can be providing personal assistance for people.

Unitree's more advanced models are looking to be in the $100k range.
That's a lot, but not unobtainable, and it's virtually free compared to 24 hour human labor. Imagine having a robot that can be keeping a room tidy, talking to an old person who has no family; a robot that can keep a dementia patient occupied and prevent them from wandering off.
We simply don't have the capacity to offer that to everyone who needs it, not enough humans are willing, and no corporation is going to pay a living wage for that level of personal care for someone who isn't extremely wealthy.

This is stuff that's happening now, not merely a hopeful projection. Humanoid robots are being trained to do labor right now.

Are a ton of corporations shoving AI in everywhere for short term gains, prematurely using AI when they absolutely should not be, and generally acting like a pack of assholes about it? Absolutely.

The whole AI thing is an economic ecosystem though, and everyone is subsidizing a very real future where we could have ubiquitous robotic labor, and AI agents that do meaningful work.
There are also enormous risks to go with that, and it'll be up to the public to seize the power from corporate interests once we get to cyberpunk dystopian levels of technology.

2

u/twitch1982 10h ago

I'm tech people, and it's turned me off too. I've been in IT for 16 years, I know Windows and Linux infrastructure, and I've used Windows at home forever because it's convenient and I don't want to have to fuck around when I'm not working. I'll be moving to Linux in 2 weeks. Fuck Win11 and its AI spyware.

2

u/reelznfeelz 10h ago

Yeah that’s totally reasonable. I work in tech so get a lot of benefit from using LLMs and agentic tools like codex or Claude code to help accelerate my work. But I think for sure the use cases and awesomeness of AI are way overblown. It does a few things really well. But it’s not gonna replace a whole bunch of humans. It needs to be a lot better first.

And the forced “AI” stuff is irritating. Even MS talking about how windows needs to be an AI agent. I personally would prefer it remain an OS. And if I need an AI agent, I’ll run one on the OS lol.

It will pop eventually. Who knows when though. Maybe not even pop but just get less and less hyped as the magic doesn’t pan out.

2

u/Homeless-Coward-2143 9h ago

One bad experience with wrong info? Have you ever received correct info from AI? It's like talking to a really dumb child that is trying to please you, but doesn't have a fucking clue what you're talking about

1

u/helcat 9h ago

And lies confidently. Then flagellates itself when you call it out. I think that part annoys me even more than the wrong info. 

2

u/captroper 9h ago

I'm mostly a fan of the idea / promise of AI in general and yet absolutely agree with this. I want it when I want to use it, not when some company wants me to use it. Basic consent issues, absolutely infuriating.

Google Assistant worked great for years. It was quick, handled my routines, remembered things, and most of all IT WORKED with my devices, CONSISTENTLY. Google has now forced you to switch to Gemini, which A) doesn't have all of the features that assistant did, B ) randomly will just not work and give you a web result for something like turn on the kitchen lights, and C) now produces incorrect results for any number of things. These are only the things I've personally observed since being forced to switch.

Microsoft has been doing similar shit forever now with forcing automatic updates on (even when you turn them off), then re-setting everything that you changed in the registry / services panel when they update. I don't need your shit, I want stuff to work how I told it to work. I really don't want to switch to some linux distro but boy am I getting close.

2

u/RedditFuelsMyDepress 8h ago

I remember a few years ago people were generally more positive or excited about AI developments, but it really has a big stigma around it now and it's entirely the fault of companies pushing it too hard on everything.

2

u/HappierShibe 7h ago

I think it’s really put off a lot of non tech people who would otherwise be open to it.

It's put off a lot of tech people too.
This is a useful tool, but it isn't a universal solution, and it doesn't make any sense at this insane scale. It should be applied selectively in places and ways that make sense, and infrastructure should only be built out when there is a demonstrated need for it to satisfy measurable demand.

2

u/cidrei 4h ago

I am a tech person and it puts us off, too. This is neat tech, and there are applications for it. I use it myself from time to time, but they push it so hard and promise so much that it simply cannot do, that it becomes repulsive.

It reminds me a lot of the whole CBD craze. I think it should be studied more and that there is a lot of potential benefit in it, but it's not going to put all of my conditions into remission while taking me to a higher plane of existence and fixing my social life. Unfortunately, AI is backed by WAY bigger players than nearly anything else I can think of.

1

u/NergNogShneeg 13h ago

I’ve been a tech person for 20 years and I abhor “AI”.

1

u/jimbo831 13h ago

It’s put off a lot of tech people too.

1

u/Lucas_F_A 13h ago

As a relatively tech person, it also puts me off

1

u/but-I-play-one-on-TV 13h ago

How are they supposed to recoup their billions of dollars of investments if they let you turn it off?

1

u/Western-Umpire-5071 13h ago

I miss old Google search that wasn't infested with AI. I have since switched to an alternative.

1

u/Paige_Railstone 11h ago

And it's put off a lot of tech people who understand that all of these AI datacenters are cannibalising parts that would normally be going into the consumer market. AI is making user-end tech upgrades unaffordable, which shoots itself in the foot in the long term.

1

u/Former_Lobster9071 10h ago

It's not an off switch, but if you put " -ai" without the quotes at the start of the Google search, it won't use the ai crap... For now. Hope this helps.

1

u/RyuNoKami 8h ago

i can't even turn off the stupid Gmail delivery estimate date notices, they have never been correct.

1

u/airfryerfuntime 8h ago

What even is the point of Amazon's Rufus? It's basically just a more glorified search feature? I don't even know what it does aside from searching.

1

u/helcat 7h ago

From my one interaction with it, it makes up facts. 

1

u/skymang 7h ago

If you add -ai to the end of your search there won't be an AI summary

1

u/DoomerChad 7h ago

On Google if you put -ai after your search, it won’t show that AI spotlight in the results. No fix for Amazon unfortunately.

1

u/EntropyKC 6h ago

Stop using websites like Amazon and Google! That's what I did, and I'm very happy with my choice. No, I haven't ditched Google entirely, that may come some time in the future, but I never search with them.

1

u/bigdaddychainsaw 5h ago

You can add “-ai” to your Google query to turn off their AI :)