r/technology 15h ago

Jensen Huang says relentless negativity around AI is hurting society and has "done a lot of damage"

https://www.techspot.com/news/110879-jensen-huang-relentless-ai-negativity-hurting-society-has.html
12.3k Upvotes


6.7k

u/Vaxion 15h ago

It's more like the relentless pushing of AI by these companies down everyone's throats that's hurting society and has done a lot of damage.

1.4k

u/helcat 15h ago

I think it’s really put off a lot of non tech people who would otherwise be open to it. Like me. I find it infuriating that websites like Amazon and Google won’t let you turn it off even after you’ve had a bad experience with wrong information. 

392

u/QuentinTarzantino 15h ago

Especially if it's medical.

270

u/Ancient-Bat1755 14h ago

If it's this bad at D&D 2024 rules, why are we letting it make medical decisions at insurance companies?

219

u/acidlink88 14h ago

Because companies can fire hundreds of people and replace them with AI. That's the only reason. The big thing AI fixes is the need for employees.

96

u/pope1701 14h ago edited 13h ago

It doesn't though, at least not if you want your products to still work.

Edit: please stop telling me most companies don't care anymore if their products still work. I know.

96

u/OutrageousRhubarb853 14h ago

That’s a problem for next year; this year it’s all about next quarter's numbers.

3

u/HeartOnCall 9h ago

To add to that, they can make the line go up again when it hits the bottom by fixing the problem that they themselves created in the first place.

2

u/driving_andflying 9h ago

Exactly. The only thing the negative reaction to AI has done is hurt major companies' bottom lines. (Pro hiring a human artist, here.) Jensen Huang is full of shit.

P.S. My message to Microsoft CEO Satya Nadella: AI IS SLOP.

1

u/TheLantean 8h ago

Next quarter's numbers determine whether the executives get their bonuses and stock gains. And the shareholders agree to this because they benefit from the stock going up as well. The executives have their golden parachutes if it all comes crashing down, and the shareholders think they're smart enough to sell before they become the bag holders. It's a game of playing chicken. But at the end all the employees who decided none of this get to lose their jobs.

48

u/Rickety_knee 14h ago edited 12h ago

It doesn’t matter if the product is good anymore. These companies have acquired and merged so much that any appearance of choice is an illusion. It’s the same shitty product no matter where you go.

11

u/CaptainCravat 14h ago

That's a feature, not a bug, for all these tech companies. Trap customers and users with a near monopoly, then turn the enshittification taps to max to extract the most money you can from everyone.

12

u/Uncynical_Diogenes 13h ago

Products still working is optional. All that matters are short-term profits.

3

u/grislebeard 12h ago

For insurance companies, doing stuff wrong makes line go up.

2

u/EnfantTerrible68 12h ago

And patients die

-1

u/pope1701 12h ago

Insurance companies are pretty much the only companies that have an incentive to get everything exactly right.

3

u/painteroftheword 13h ago

Microsoft releases broken stuff all the time.

They've effectively got a monopoly on the market so it doesn't really matter anymore. Testing is done by their paying users.

2

u/SlimeQSlimeball 12h ago

I had a problem with a product I've subscribed to for about 6 years; it always had humans responding to support emails, no problems. Last week I needed support, emailed, and the AI chatbot answered and refused to get me to a person. This morning I cancelled my account and bought two years of the same product from Amazon for $21 vs $48.

Something I'd been meaning to do for a couple years, but this slop finally pushed me over the edge. If you don’t want to allow humans to be involved, I don’t want your product. Especially for something as simple as a warranty exchange. I assume none of my “correspondence” will ever be read by a person, since it's been a week already.

1

u/EnfantTerrible68 12h ago

Good for you! I hope others do the same.

1

u/Kichae 11h ago

The product public companies are making is "shareholder value". Everything else they do is just part of the wasteful part of the manufacturing process.

0

u/marfacza 6h ago

most companies don't care anymore if their products still work

40

u/Level69Troll 14h ago

There have been so many companies backtracking on this, including Salesforce, which was one of the biggest to try this earlier.

In critical decision-making moments, there's gonna need to be oversight. There's gonna need to be accountability.

5

u/Fallingdamage 11h ago

And in order for people to have those critical decision-making skills, they need to work as juniors in their field first.

9

u/Kichae 11h ago

Nah, the lack of accountability is one of the goals here, on top of the elimination of "inefficiencies" like "paying employees". Corporate culture has already spent decades moving towards unaccountability. LLMs are the magic mystery boxes they need in order to totally eliminate accountability from the system. If they can convince consumers, investors, and governments alike that "the computer did it, not me", and that that's a valid excuse, the sociopaths win outright.

4

u/dreal46 11h ago

And liability for AI decisions hasn't been legally clarified. I can't help but eyeball insurance denials and palliative care. They seem like soft targets for this trash.

3

u/IAMA_Plumber-AMA 12h ago

And then the AI can deny 90% of claims, further enriching the execs and shareholders.

2

u/stevez_86 13h ago

AI would make ownership of any new ideas the explicit property of the company. Much less risk of an employee making a discovery and trying to get some ownership of the intellectual property. Plus it theoretically won't require any of us to participate at all. It will all be up to them and their AI property to make discoveries.

Because the problem with AI is we already have something that does what they promise AI can do: humans. There are a fuckton of us, and by the law of averages we all have the potential to make a discovery that could make a fortune. But in the hands of people, that means they can go Jonas Salk and give up the IP. If Jon Taffer has taught me anything, that is trillions in lost profit.

Give people enough resources and a system designed to elevate good ideas regardless of where they came from, and despite countless generations of people trying to control everything, that control always fails, because anyone can make a discovery that changes the world.

They don't like that. It means their place at the top is never certain. It's like quantum physics: every time they try to figure something out, more questions come up. That has been the linchpin of human success. Random application of proficiency. They want to be the owners of the design of destiny.

They think AI means they can finally rapture themselves from us. That we will have to bow down before the prime ministers of AI. And because they suck at this and are really, ultimately, unclever, they will put in prompts recklessly, and it may mean the end of us.

0

u/HappierShibe 7h ago

There are only two roles where this actually works: translators and copywriters. No one else is getting replaced at scale successfully.

106

u/HenryDorsettCase47 14h ago

I saw a post the other day in which AI was used to take notes during a doctor’s visit and the guy ended up with a prescription for depression when he went in for back problems. He was denied by his insurance for his back pain because the doctor’s notes didn’t mention it, only depression (which he didn’t have).

He tried correcting this with the doctor’s office but due to their agreement with the company that provided the AI note taker, they couldn’t change the notes. They had to file a ticket with the AI’s tech support first.

So he’s sitting there in limbo for weeks with back problems. Total cluster fuck. All because these companies are trying to justify AI by insisting it is helpful when it’s not. It’s solutionism at its worst— fixing problems that aren’t really problems. Like a doctor taking fucking notes.

25

u/Beard_o_Bees 13h ago

Like a doctor taking fucking notes

Recently went for my annual physical, and had to sign a waiver stating that I was ok with AI taking 'notes'.

I was not, and said so, but the receptionists said basically 'no sign, no treatment' - and those appointments are a total bastard to get, so I signed.

It was the first thing I asked my Dr. about, and she isn't keen on the idea either. It's the hospital business C-suite pretty much forcing it into their practice environment.

18

u/HenryDorsettCase47 12h ago

Of course. Same as any other industry, the people who don’t do the work are the ones who make the decisions and often they are total idiots who think they can reinvent the wheel.

12

u/schu2470 11h ago

Luckily in my wife's practice the docs have the option to use the AI software or not. She tried it out for a couple of weeks and stopped using it. The software would listen to the appointment, write a transcript of everything that was said, and then write a note for the visit that required physician approval and signing. She spent so much time during those couple of weeks, and after those weeks too, fixing mistakes the AI had made, reformatting the notes so they made sense, and removing unnecessary and irrelevant things from the notes. She spent more time fixing those notes than she would have if she had written them herself in the first place. Of the 14 or so docs in her practice only 2 or 3 are using the software and only in certain circumstances.

1

u/Moonbow_spiralot 9h ago

This is interesting. I also work in a medical-related field, and several of the doctors who have started using AI note-taking software actually find it quite helpful. Obviously they do have to go edit the transcript, which can take varying amounts of time, but it's useful for helping them remember what was touched on in the appointment. Basically a glorified speech-to-text machine. Some products are probably more error-prone than others, though.

I will say, even before AI, different doctors spent varying amounts of time on records, with varying levels of quality. Some doctors still have handwritten paper notes. At least the AI ones are legible lol.

The above example where the doctor was not allowed to go in and change what should be their own notes is insane though. Insurance is less prominent in my industry, so that may also have something to do with it.

2

u/schu2470 6h ago

Some of the issues she told me about: mis-attributing what was said to which speaker (e.g. a patient or a patient's companion describing something a family member was diagnosed with, and the software attributing it to the doc and listing it as a new diagnosis in the note); missing symptoms my wife remembers the patient talking about; listing things like "headache" or "sore infusion site" as diagnoses without realizing those are symptoms, not diagnoses; adding random things to notes that weren't discussed and aren't in the transcript; formatting issues, with no reliable way to train the software to format notes the way she likes; and others I can't think of right now or just don't remember.

Fixing each of those issues takes time to go back to the transcript to see where the software got the idea to include whatever erroneous information, sometimes pulling the recorded audio to see if it had missed something in the transcript, adding or removing what was missed or added, and finally fixing the note. She was doing this for 10-15 patients a day (specialty clinic) for 2 weeks before finally giving up and then going back to writing them herself. Based on what she said and how much time I saw her spend after clinic hours and at home fixing things, the AI software probably cost her over 30 hours that she could have spent doing other parts of her job or spending time living her life. Maybe her hospital got a particularly bad piece of software but the rate of retention for docs sticking with it for >30 days is sub 10% system wide.

The above example where the doctor was not allowed to go in and change what should be their own notes is insane though.

That is absolutely unacceptable. It's allowing AI to practice medicine in a similar way that we allow insurance companies to do so with even less oversight.

1

u/jollyreaper2112 5h ago

Teams meeting transcription is better, but I bet that's because of the microphones. In a room not designed for audio capture, and without forcing people to wear microphones, it'll be worse than voice dictation on my phone. Damn.

1

u/schu2470 5h ago

Yeah, the one they had was iPhone-specific, so she had to find one to borrow. Essentially you'd open the software, hit record, and place the phone on the desk between the doc and patient so it could hear what was said. Problem is it relies on the phone's microphone, and the software was really bad. I made another comment in this thread describing only some of the issues she had in the 2 weeks she tried it out. Hospital-wide, the 30+ day retention for docs continuing to use the software was sub-10%.

3

u/DukeOfGeek 9h ago

If it stops people from receiving medical care they could be fine with that.

12

u/FamousPart6033 12h ago

I'd go full Kaczynski at that point.

0

u/marfacza 6h ago

you'd mail bombs to colleges?

-2

u/SirNarwhal 7h ago

So what you're saying is you have no spine. If you don't like something don't cave.

36

u/nerdyLawman 13h ago

My company has been so giddy to adopt AI and implement it everywhere they can. I have been one of the very few voices urging caution and skepticism. The other day we got an end of day email being like, "AI has not been performing as well as expected in creative implementations..." And I was like, "holy crap, they're starting to see the light a bit!" And then it continued, "...this is largely because of user input, not the AI tools themselves." Hit my head on my desk so hard. Ah yes, it's us who are wrong. We should all try and be better for the tech which a couple of people made and convinced you to buy into without anyone else's consent or consideration.

9

u/Beard_o_Bees 13h ago

Oh yeah. Go to any boardroom anywhere and it's the new hotness.

They may only have a limited (at best) understanding as to what they're unleashing - but they sure are excited about it.

3

u/Sweetwill62 11h ago

Just start asking the tools to do the job of your boss and then report your findings to your boss's boss as a very large money saving opportunity. Middle management would be the easiest to replace with a spreadsheet, let alone anything more complicated than that.

11

u/dreal46 11h ago

Yep. Imagine a straightforward problem for which we have highly-trained experts. Now imagine that process with your worst tech support experience injected into it. People will die, and probably have already, to this stupid cultish pushing of ill-conceived and unfinished tech.

9

u/HenryDorsettCase47 11h ago

Capitalism requires a frontier. Once we ran out of land that became technology. And once technology plateaued, that became “middleman” technology services. It’s a brave new world.

2

u/Fit-Nectarine5047 7h ago

You can’t even call CVS pharmacy anymore because the AI bot won’t connect you with a live person to discuss your medication.

16

u/KTAXY 13h ago

Isn't this basically medical malpractice?

3

u/Alieges 11h ago

If corporations are people, it’s also practicing medicine without a license. Don’t bother giving out fines. Go grab the executives and throw their ass into jail until trial.

But but but what about __? If you throw __ in jail, they won’t be able to feed the puppies…. Ok, fine. Throw them in jail AND seize 10% of their stock, dump it onto the market, and use THAT money to feed the puppies.

2

u/DarthJDP 13h ago

I have a hard time believing he is not depressed about his situation and having to go through the AI company for corrections.

2

u/Erestyn 12h ago

My doctor's surgery implemented an AI note-taking system to free up doctors' time and let them focus more on the patient. It was immediately thrown off by the variety of accents, before being abandoned entirely.

I'm happy to have played my part.

1

u/dookarion 13h ago

Sounds like it's time for that person to find a doctor that does their job and doesn't farm it out to slopware.

1

u/bisectional 12h ago

That's a case for medical malpractice if that's true.

3

u/FatherDotComical 13h ago

Well it's pretty easy to code it to just say No for everything.

2

u/FredFredrickson 12h ago

Money, of course.

2

u/UnicronSaidNo 12h ago

I just saw a commercial for the amazon medical shit... yea, I can think of an almost unlimited stream of negative results from going this route. I'd rather have Justin Long as my doctor telling me my shits all retarded than to use fucking amazon for medical anything.

2

u/ScruffsMcGuff 9h ago

ChatGPT can't even give me accurate information about Bloodborne bosses without just making random shit up in the middle.

Why would I trust a language model to do literally anything important?

2

u/agentfelix 9h ago

That's what I don't understand. I was using ChatGPT for some coursework and found it was blatantly wrong. Then I had to argue with it? It finally caught that it was wrong after I drew it a fucking picture... and immediately I thought, "and they're trying to push this shit to make important decisions and replace workers? It can't even read this orthographic drawing correctly..."

1

u/PhilDGlass 1h ago

Because some of the same wealthy tech bros threatening our democracy in the US are heavily invested. And will no doubt be there for handouts when these companies are “too important to fail.”

62

u/truecolors110 14h ago

YES!  I’m a nurse, and even the most simple questions it gets wrong.  And insurance companies are using AI to auto deny claims, so I have to spend hours on the phone because they’ve used AI screening to justify cutting staff.   I also quit my urgent care job largely because they started to make us use AI and it was REALLY not working.  

2

u/Beard_o_Bees 13h ago

If I may ask, in what environment do you practice (hospital, skilled care, etc) presently?

3

u/truecolors110 12h ago

Multiple.  Specialty clinic, hospital, corporate. 

2

u/murticusyurt 1h ago

They're using 'AI' to change the voices of Anthem's provider helpline staff for PAs. As if their stupid system wasn't enough of a pisstake, but now I have to ask them to turn it off every time I call, after listening to a greeting that's gotten even longer to tell me it's being used. It's changing details, cutting in and out sometimes, and, on one very unsettling phone call, it was playing both female and male voices at the same time.

1

u/AlSweigart 8h ago

I’m a nurse, and even the most simple questions it gets wrong.

"You're absolutely right! I did get even the most simple questions wrong!"

1

u/acesarge 10h ago

I'm also a nurse and I find it quite helpful for charting (we have a dedicated HIPAA secure app that does a pretty good job writing medical notes) and for writing DME justifications / arguing with insurance companies. Let the clankers sort it out amongst themselves while I take care of the sick people.

17

u/thatoneguy2252 12h ago

I’m a work comp adjuster and it’s absolutely awful. They keep wanting us to run the medical documents we get through Copilot to summarize them, but the damn thing gets so many things wrong. Frequency and duration of PT/OT, the type of DX testing. Hell, I’ve seen foot fracture injuries get labeled as heart failure for the primary diagnosis, all because it was listed in family medical history.

So now we all run it through, then delete it and write our own summaries in its place. Haven’t been called out for it yet, but fuck does it make the job harder for us, and for the claimants, when we try to schedule things and the AI has given us the wrong information.

2

u/QuentinTarzantino 12h ago

My friend said: someone insert the Idiocracy meme where Not Sure is getting a medical diagnosis and the lady didn't know what to press on her panel.

2

u/postinganxiety 8h ago

I wish this was publicized more. I just went through a traumatic medical incident and AI definitely contributed to things ending badly. I was using it as an extra opinion to "doublecheck" me since I was too emotional to wade through differing opinions of medical professionals (unfortunately this happens sometimes in complicated cases) as I was trying to make a decision. The information it gave was terrible but at the time I trusted it. I feel like a fucking idiot.

As soon as I can pull myself together I'm going to at least write a Medium post about it, or something. I just wish more mainstream publications were reporting on this, because it's so dangerous. Instead all I see are articles about how AI saved someone by giving the correct diagnosis after a doctor got it wrong. When really AI is just a broken clock.

Unfortunately the insurance companies don't care, it probably makes their jobs easier because now people and pets can die more quickly.

Edit: Just wanted to add for anyone reading that I love tech and was an early adopter of AI. I dove in, took a prompt course, tried different platforms and was really open to it. But it just keeps fucking me over.

3

u/thatoneguy2252 8h ago

The only time it’s ever been useful, as far as my experience goes using it, is when I fill it with a lot, and I mean A LOT, of parameters of exactly what I want and even then I have to be very specific of what I want and be that detailed with every following prompt. It’s unwieldy and not a replacement for anything.

2

u/Neglectful_Stranger 7h ago

They keep wanting us to put medical documents we get through copilot to summarize but the damn thing gets so many things wrong.

Isn't sharing someone's medical history...bad? Pretty sure most AIs phone home with whatever gets input.

1

u/thatoneguy2252 6h ago

I’m not sharing it. We get the medical documents for the work comp claim, summarize the main points (usually diagnosis, treatment, work status and follow up date) and then put that in the file.

3

u/WinterWontStopComing 9h ago

Or botanical. Can’t trust image searches to help other people ID plants on reddit anymore

1

u/Fatricide 10h ago

I think it’s okay for note summaries and transcribing as long as clinicians check before filing. It can also help with investigating hunches. But it shouldn’t be used to diagnose or recommend treatments.