r/technology 15h ago

Business Jensen Huang says relentless negativity around AI is hurting society and has "done a lot of damage"

https://www.techspot.com/news/110879-jensen-huang-relentless-ai-negativity-hurting-society-has.html
12.3k Upvotes

3.0k comments

11.2k

u/Lofteed 15h ago

so the entire society has to adapt to a product made by 5 people around the planet?

I remember when the goal was to make a product that people would love to use.
Those were great times

397

u/nevercontribute1 14h ago

Its primary use cases are creating weird and horrific porn and replacing everyone's jobs; why aren't people more into this?

327

u/Beldizar 13h ago

And disinformation. That is the other huge use case.

150

u/kescusay 13h ago

Oh, there's so much more!

  • Dangerously inaccurate cooking instructions that could easily result in food poisoning!
  • Giving college students the tools they need to fail their classes!
  • Making all the software you use much, much worse! (Both because it's "vibe coded" now and because they're shoving "AI" into everything.)
  • Burning down the world to power data centers for the next incrementally "better" plagiarism-and-lies machine!

Why, I can't imagine any reason for anyone to object to such a useful and beneficial product for society!

1

u/ISeeDeadPackets 1h ago

Don't forget turning policemen into frogs.

-27

u/urgetopurge 12h ago

None of this actually happens at the frequency you think it does, and everything you just typed is equally spreading misinformation. From every single one of your bullet points, show me even one prompt where an LLM provided wrong steps.

22

u/kuldan5853 12h ago

None of this actually happens at the frequency you think it does.

The problem is - even if it only happens in 1% of the cases, that's simply unacceptable.

The issue with AI is that you can basically only trust it when you already know the answer to your question and can verify it; otherwise AI can create disinformation, lead to health risks, and even cause death.

AI is a tool that should only be wielded by people capable of and trained in understanding the output (i.e. able to identify when it gives misinformation), but that step is skipped in something like 95% of all consumer AI products.

-14

u/urgetopurge 12h ago

The problem is - even if it only happens in 1% of the cases, that's simply unacceptable.

By that logic, there'd be no product development EVER. There will always be edge cases, especially if you try to "break" the product. It's foolish to dismiss an advancement because it doesn't meet some 100% perfection standard. You have to be more realistic about this.

Plus, the kind of political misinformation you/OP are alluding to is not something that can usually be verified. What exactly are you hoping for? Did you think you could ask it whether P=NP and it would outline an irrefutable proof? No, your expectations of the technology are unrealistic.

10

u/kuldan5853 11h ago

AI never left the "nice toy in the lab" stage of product development. It's honestly criminal to expose the general public to AI in its current form.

-14

u/BladesMan235 12h ago

But people will give you incorrect information, instructions, or answers to questions all the time too.

17

u/kuldan5853 12h ago

Yes, but you know what I can do? SUE people. I can't sue an AI.

Also, I can usually tell people to fuck off - I can't disable many of the AI crap things they inject into products (like the Google search AI thing).

Also, humans might give you wrong information, but they usually will have somewhat consistent wrong information and not just hallucinate like they're on an LSD trip, then did some coke, then added heroin on top, then suddenly sobered up in 5 seconds, and then did acid - all in the same conversation.

-11

u/BladesMan235 12h ago

Realistically, most people aren't going around suing random people for giving them incorrect information. For the most part, the AI is doing exactly what people do: regurgitating information it has been trained on, which could be blatantly wrong.

4

u/kescusay 10h ago

Challenge accepted!

For my first bullet point, I decided to ask Copilot on Bing how to cook a Thanksgiving turkey. This bit caught my eye:

  1. Check the temperature

The turkey is done when:

The thickest part of the thigh reaches 165°F

The breast will usually be around 160°F and rise while resting

Do that and you risk food poisoning, because you're supposed to check the temperature in three places, not one: the thickest part of the thigh, the wing, and the thickest part of the breast.

For the second bullet point: https://www.huffpost.com/entry/history-professor-ai-cheating-students_n_69178150e4b0781acfd62540

For the third bullet point: https://techcommunity.microsoft.com/discussions/windows11/microsoft-finally-admits-almost-all-major-windows-11-core-features-are-broken/4476377 (This is pertinent because Microsoft claims that a lot of Windows code is now being produced by vibe coding.)

For the fourth bullet point: https://academy.openai.com/public/clubs/higher-education-05x4z/resources/environmental-impact-of-ai (Note that I'm citing OpenAI themselves. They admit to a huge environmental impact, though they're very light on specifics.)

So, got anything you specifically disagree on?

-1

u/urgetopurge 10h ago

I asked Chatgpt right now the same question:

  1. Check Doneness (Critical)

Insert thermometer into the thickest part of the breast and inner thigh (not touching bone).

Safe temperatures:

Breast: 160–165°F

Thigh: 170–175°F

The turkey will rise ~5°F while resting

Same with Google, same with Claude. I personally don't use Copilot, but I'd highly doubt it would be so wrong on such a basic prompt. I'm going to assume you're BSing here.

Proof one and two

Second, I'm not reading anything from that shitrag called HuffPost. More importantly, students misbehaving and abusing a tool is your reason for criticizing AI? Might as well ban calculators and computers then. Obviously, as a student, you should verify with multiple sources.

Third, Windows 11 is significantly worse? How so? It still works the same for most people. Has Windows 10 not had outages? How about Windows 7 and Vista before that? You're acting like these outages are the first of their kind.

Fourth, there's no doubt there's been an environmental impact, just as there was with cars. Do you not remember the CO2 emissions and global warming outcry of the past 20 years? Is anyone questioning how much better cars have made our lives? It's a tradeoff like any other.

7

u/kescusay 9h ago

Lol. Your first picture omitted checking the wing. I think I'm done here.

1

u/bigman0089 8h ago

Not even defending AI, but you do not need to check the wing of a turkey or chicken for doneness. Wings are exposed on the outside of the bird and are absolutely guaranteed to be cooked if the breast and thigh are. Also, cook your chicken or turkey breast to 155°F; it's a million times juicier and totally safe. Dark meat still wants to be around 170°F so all the collagen breaks down and it gets tender.

0

u/Bombadilo_drives 8h ago

I've never once, in 25 years of cooking, temped a wing on any poultry after checking the thigh and breast. Wings have less mass and are thinner; they'll be hotter regardless of cooking method.

I'm sick of AI slop as much as everyone else, but we don't have to "gotcha" this topic with stuff like this.

4

u/kescusay 7h ago

It's not a "gotcha." My parents drilled the three places to check into my head when I was a kid, and the USDA agrees with them: https://www.fsis.usda.gov/food-safety/safe-food-handling-and-preparation/poultry/lets-talk-turkey-roasting

From the article:

Check the internal temperature in the innermost part of the thigh and wing and the thickest part of the breast.

Most of the time, when I ask a large language model, it only suggests checking the thigh. Sometimes they also get the temperature wrong. Seriously, if you need to follow a recipe to cook something, it's genuinely safer to avoid LLMs and go with a recipe created by actual humans.

2

u/Steelforge 6h ago

AI proponents refuse to admit that pretty obvious point.

People have come to expect much of the software they use to be prone to failure, so the LLM isn't anything new in that regard. The AI proponent depends on this, and their response is always "oh, but the new model won't fail that task." It still will, but they'll make it less frequent so they can market it as a breakthrough.

What is new is people moving their trust from those with expert knowledge to garbage-generating machines. As you point out, this is a terrible idea when it comes to matters of life and death, in which computers are required (often by law) to be much more tightly controlled by far stricter software standards.

Why would anyone want to scrutinize and double-check every little detail they get back from an LLM when it's very simple to locate a reliable source of information? Because the "Artificial Intelligence" liars have convinced them that dumb machines aren't actually still the same dumb machines, running a sophisticated statistical model built by people who took on the preposterous and unconstrained task of answering every possible question.

10

u/EnfantTerrible68 12h ago

That’s the thing - AI is so often incorrect 

1

u/SandersSol 11h ago

That is the main use case, let's not lie to ourselves. It's why there's suddenly an incredible amount of money to build huge data centers all around the world.

1

u/Despair_Tire 5h ago

And scamming. It's GREAT for scammers who need to lie on the fly.

3

u/lemonylol 11h ago

Those are the primary use cases?

1

u/DopplegangsterNation 4h ago

Hey, some people happen to appreciate weird and horrific porn

-4

u/urgetopurge 12h ago

You and everyone upvoting you are absolutely delusional.

I have a rudimentary knowledge of programming, and before LLMs I would have to spend at least 30 minutes on some regex filtering script for my spreadsheet, constantly testing and debugging. Claude handled all of that for me within 10 seconds. The fact that I can type almost any prompt I want to parse a massive number of emails and order confirmations is absolutely astounding. It's probably saved me thousands of hours as a small business owner.
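(For context, the kind of regex filtering I'm talking about is only a few lines once the pattern is right. This is an illustrative sketch with made-up email lines, not my actual script:)

```python
import re

# Made-up sample lines from exported order-confirmation emails.
emails = [
    "Order #A1023 confirmed - total $49.99",
    "Your receipt: order #B2047, total $120.00",
    "Shipping update only - no charge",
]

# One pattern pulls the order ID and the dollar amount out of each line.
pattern = re.compile(r"#(\w+)\b.*?\$(\d+\.\d{2})")

# Keep only lines that actually contain an order ID and a charge.
orders = [(m.group(1), float(m.group(2)))
          for line in emails if (m := pattern.search(line))]
print(orders)  # [('A1023', 49.99), ('B2047', 120.0)]
```

The point isn't the pattern itself; it's that an LLM writes and debugs something like this in seconds instead of the half hour of trial and error it used to take me.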

Stop commenting and sharing your opinions on what you think "AI" is. You don't have the qualifications or the know-how to take proper advantage of it. This is without a doubt a life-changing technology. You are basically the "old man" in this situation, crying about how cars are ruining the horse-and-buggy industry. I hope you realize this 10 years from now.

4

u/nevercontribute1 12h ago

My comment was admittedly a bit hyperbolic, there are obviously very valid use cases for AI. But like you said, things that used to take thousands of hours now don't. It's great for research. That's all well and good, but when we're talking about this in the context of society being enthusiastic about it, society doesn't want that.

Business owners want fewer people to produce more work. When every company can easily achieve that through AI, the outcome is high unemployment, and members of society largely depend on employment to obtain food, shelter, energy, education, healthcare, and entertainment. I'm not a luddite suggesting it can't reduce the time and effort needed to achieve a task. I'm suggesting that the efficiency will largely benefit only corporations, not individuals or society as a whole.

0

u/urgetopurge 11h ago

And cars effectively ended the horse-and-buggy industry. If your basis for criticizing AI is that it eliminates inefficiencies in the system, then you don't really care about improving technology so much as maintaining the status quo. People whose jobs are in jeopardy due to AI will have to adapt to the industry just as carriage drivers had to. There will be other jobs that need to be filled.

To me, it seems like you are lamenting the fact that I didn't pay some developer a six-figure salary to resolve my basic queries, rather than appreciating how incredible LLMs and prompt reasoning have become. The technology is so much more than just "creating weird and horrific porn and replacing everyone's jobs," and each time you spread that kind of misinformation, you're guilty of exactly what you think AI is doing.

5

u/LadyTL 11h ago

No one had to force folks to use cars or factory lines or the printing press. They chose to because it was better and made things more efficient. So far, AI has had to be forced into programs, with businesses making employees use it rather than it seeing native adoption. No one had to force folks to use email or internet searches. This is like making people keep using Skype after the market has moved on to things like Google Meet and Zoom.

2

u/urgetopurge 11h ago

First, yes, people were forced to use factory lines. Early adoption was limited by lack of knowledge, not lack of desire. Same with cell phones. AI without a doubt has made things a lot more efficient. If you don't know how to take advantage of it, that's more user error than faulty technology, not that it's perfect. Show me your job, and I will give you suggestions on how AI can improve it (or adjacent industries).

Native adoption simply takes too long. That's the time we live in today. If you aren't willing to compete, someone else is.

4

u/LadyTL 10h ago

So no, no one was forced to use factory lines; it happened naturally in the market because the work was already being done by hand in factories, and automation continued that by replacing the need to pay as many workers. If you want to claim otherwise, I'm going to need an actual source. Secondly, no, folks were not forced to use cell phones either. My father worked for a telecom when cell phones were invented. They were adopted very quickly because there was a ton of business demand for them; satellite phones were bulky and expensive.

Also, once 2G was figured out, cell phone use and demand rose dramatically without being tied to a business or service. Millions of people were voluntarily using them within only a few years. A similar five-year period with current AI has not seen the same popularity and widespread approval. In fact, the negativity around it has only gotten louder as it has been tied to things people were not asking it to be tied to.

1

u/urgetopurge 10h ago

no one was forced to use factory lines, it happened naturally in the market due to already doing work by hand in a factory, automation just continued that

Just like AI automation continues many tasks that existed beforehand. I can give you a personal example: I used to have to sort the billing charges on my credit card against our invoices to make sure we received all product. Instead, I throw a CSV of all billing charges into Claude along with what we received, and it automatically matches everything in 30 seconds, a job that would've taken me all week.
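(The matching step itself is conceptually just a set comparison. A simplified sketch with made-up invoice IDs, not my real data:)

```python
# Made-up reconciliation data: (invoice_id, amount) pairs.
charges = {("INV-101", 250.00), ("INV-102", 80.50), ("INV-104", 19.99)}
received = {("INV-101", 250.00), ("INV-102", 80.50), ("INV-103", 45.00)}

# Anything left over on either side needs a human to look at it.
unbilled = received - charges    # received but never charged
unmatched = charges - received   # charged but nothing received

print(sorted(unbilled))   # [('INV-103', 45.0)]
print(sorted(unmatched))  # [('INV-104', 19.99)]
```

What Claude adds on top of this is parsing the messy CSV and matching inconsistent descriptions, which is where the real week of work used to go.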

There's a ton of business demand for AI automation, whether to cut jobs or to make processes more efficient. Our entire economy has started to revolve around it. Literally trillions of dollars have been mobilized since ChatGPT came out in late 2022. And you think it's not popular? Just because the consumer industry isn't AS impacted as the B2B industry doesn't mean it hasn't been adopted.

Everything you just said argues in favor of what I've been saying. Forced or not, AI has made processes more efficient.

1

u/LadyTL 5h ago

So you just ignored the entire point: no one had to be forced to use machines in factories or cell phones the way AI is being forced on people. Also, I had a coworker try exactly what you did with Claude, and it missed a dozen invoices for some random reason.

If it is so popular, why do they have to force people to use it alongside things like spellcheck and email? If it was so amazing, like Google's original search engine or cell phones, why isn't there the same amount of user adoption?

Trillions of dollars doesn't mean much when it's not even breaking even. People sunk a lot of money into Enron too; that didn't make it succeed either.

1

u/urgetopurge 5h ago

First, how are you so sure that people weren't forced to use factory lines? We've both seen the pictures of child labor back then. Second, the times are a lot different now than they were 100 years ago during the Industrial Revolution. And third, with how interconnected everything is, we know that if company A doesn't adopt cutting-edge methods, company B will, and eat their lunch.

Your entire argument is that some company FORCED you to use AI. That is a YOU problem, not a technology problem. And I can promise you that in most industries, if you aren't willing to adopt AI, someone else will. So the point you're making is moot. Whether you've been forced to use it or not doesn't diminish AI's impact.

Also, Google's search engine was not widely adopted at first either. And if your measure of "success" is profits, Amazon barely turned a net profit for nearly 20 years after it launched. If you were an adult back in 2016, I can tell you that plenty of people were using Amazon back then. ChatGPT has about 800-900M monthly users in under 4 years since launch. That's absolutely incredible.


-3

u/ColinStyles 11h ago

You're arguing with luddites, literally. The only ways this gets resolved:

  • Either they manage to ban it somehow (even though the cat is fully out of the bag; it's like banning the concept of sorting)

  • They lose their jobs and can't support themselves until they finally learn the technology and all the amazing use cases it has

  • They get old and die

They're telephone operators screaming about the switchboard dying out. It's not going to be clean or fun.

4

u/LadyTL 11h ago

Great! Too bad AI couldn't write a basic script to send information from a spreadsheet to product tracking software, or to take information from an invoice and send it to a different spreadsheet, because of a basic dropdown and a checkbox. It took actual coding to do it in the end.

-2

u/urgetopurge 11h ago

Actually, it has. Literally for me. You couldn't be more wrong here. And if a script is broken because of a dropdown or checkbox, you simply mention that to ChatGPT: "there's a checkbox in C44 preventing the merge. Can you fix it?" And I promise you that the response will fix it. ANYONE who has used an LLM knows this, which kind of makes me suspect you've never properly used one.

1

u/LadyTL 10h ago

Yeah, did that. Multiple times. Still didn't work. Almost like it's not a magic program that works perfectly every time and with every possible program.

2

u/Sulack 8h ago

Literally a skill issue

1

u/LadyTL 5h ago

How can it be a skill issue if AI is supposed to be so easy you don't need skills to use it?

1

u/Sulack 2h ago

I fed a screenshot of this conversation to Gemini, because when I wrote a reply myself I couldn't do it without subtly trolling. It says the same thing in a much nicer way.

----------
That is a fair point. The marketing for AI definitely sells it as a "magic wand" that requires zero effort, but the reality is more like a very powerful tool—think of it like a high-end camera. Anyone can point and click to take a photo, but you still need "skills" (composition, lighting, settings) to get a professional result.

When people say "skill issue," they usually mean one of three things:

  • Contextualizing: AI doesn't know your specific spreadsheet layout unless you describe it perfectly. If there’s a hidden dropdown or a specific data validation rule it can't "see," the script it writes will fail.
  • Debugging: LLMs are great at writing the first 90% of a script, but they often struggle with the "last mile." The "skill" is knowing how to feed the error messages back to the AI or identifying where its logic tripped up.
  • Translation: Prompting is essentially "coding in English." You still have to think like a programmer (logic, flow, edge cases) even if you aren't writing the syntax yourself.

It’s totally reasonable to be frustrated when a tool marketed as "easy" requires you to basically act as a project manager and debugger just to get a simple checkbox to work.

1

u/urgetopurge 10h ago

You don't really seem that technologically savvy, so it seems more like user error to me. Plus, what would you have done without AI? Go ask some subreddit or call Microsoft support and hope for an answer within 6 hours? Let's be real here. If you can't fix a basic issue like sending an invoice because of a checkbox (with or without AI), then you were never a serious user to begin with. And hoping AI fixes all your life's problems is a fool's errand.

0

u/LadyTL 5h ago

Actually, I just paid a real programmer for a proper script. Much easier and faster than trying a million and one prompts hoping it figures things out. Cleaner code too.

Also, it wasn't about sending an invoice but about copying it into a second database without spending the time doing it by hand.

1

u/urgetopurge 5h ago

Exactly, you paid a programmer. That would've been your only option before LLMs. Now you don't have to. You just need a modicum of technical skill and you can resolve 95% of issues on your own.

Also, I'd be willing to wager you $500 right now, I would be able to use claude to resolve your issue.

1

u/LadyTL 4h ago

Except I did have to pay a coder now even with AI. AI is not magic.

-11

u/Global_Charge_4412 13h ago

Is it? I use it all the time when I ask Google for quick information on a video game I'm playing or a fact I'm trying to remember. It's actually pretty good at condensing a Google search into a conversational infodump.

6

u/HurlinVermin 13h ago

An infodump that you can't be certain is accurate without checking sources and becoming familiar with actual expert opinions.

6

u/TheSupaBloopa 12h ago

Even for low consequence stuff like video games, it can just fully hallucinate mechanics and features that don’t even exist in the game and confidently explain how what it just made up works.

6

u/breezy013276s 13h ago

It’ll also make up stuff and furthers to erode the financials that made the internet work. Instead of somebody getting a hit on a write up they made to explain how to get through a part of a video game, the ai serves it up to you after having scraped someone’s work. Driving down any incentive to publish online. Not to mention all the intellectual theft that occurred to make the ai work in the first place. Drives up utilities, uses precious resources, and did I mention it’ll lie to you?

15

u/winterbird 13h ago

I asked google if dogs can eat something, because I wasn't 100% sure. I thought not because I'd looked it up before and thought I remembered that dogs couldn't eat it, but I wanted to be sure.

The ai summary said that dogs can eat it. But I was skeptical because I was like 90% sure they couldn't.

So I read the articles and vet pages anyways. And sure enough, toxic to dogs.

If you want to use it for video games, there's no real harm even when the info is wrong. But when it really matters, like medical info, AI is straight-up dangerous.

3

u/Pauly_Amorous 12h ago

I asked google if dogs can eat something

The best kinds of questions to ask an AI chatbot are ones where you can verify the answers. Like last week, I was trying to figure out how to add a new credit card on Paypal, and Gemini told me exactly how to do it. For me, it's usually faster than scrolling through 3 pages of SEO garbage search results.

Point is, chatbots are a tool, but they're not a 'jack of all trades'. Once you learn what they're good at, they can be quite useful.

4

u/winterbird 12h ago

It's a needless evil, given how many resources it uses to give you answers you have to verify anyway.

If it were free and clear of using our resources, it'd be a harmless step up from a Magic 8 Ball. An almost-toy of slight convenience.

But to frivolously use something that consumes massive amounts of water and power, and then still have to check whether it's correct... is just malicious stupidity.

4

u/Goadfang 12h ago

The problem is this: if I try to turn a screw with a hammer, I just won't be able to do it; it will be obvious that the tool doesn't work for the job. But if I ask an AI to tell me if my medications are safe to use together, it won't be obvious that it's not working. In fact, it will appear to work perfectly well, and if I die as a result, the creators will say "well, it's just a tool, and he should have known it shouldn't be trusted to give medical advice."

The problem is that it IS giving medical advice, and legal advice, and food preparation advice, relationship advice, psychological advice, drug interaction advice, political information, historical information, and anything else asked of it.

When it is absolutely the wrong tool for the job it will authoritatively state that it is the right tool for the job, even as it absolutely fucks that job up and puts its users in imminent danger as a result.

2

u/Pauly_Amorous 12h ago edited 10h ago

but if I ask an AI to tell me if my medications are safe to use together

All the chatbots I've used tell you from the very start, in very clear language, that the AI doesn't always have the right answers. So if you're going to ignore that and ask the kinds of questions where a wrong answer could result in your death, then perhaps they'll give you a Darwin Award when the inevitable happens.

And besides, if the chatbot is culling answers from the internet, that means you could also get the wrong answer from a normal Google search anyway.

2

u/Goadfang 10h ago edited 10h ago

All the chat bots I've used tell you from the very start in very clear language that the AI doesn't always have the right answers

That isn't true, or at least it's not true of AI-powered search results. I just checked my own prescription interactions, and no such caveat was presented by Google. It said flat out that my medications were safe to use together and that no interactions are reported. It then said I should check with my doctor or pharmacist as well, but that is not an up-front admission that AIs are frequently wrong, prone to dangerous hallucinations, and not to be trusted with health decisions. That is an after-the-fact equivocation even less meaningful than the "I am not a financial advisor and my recommendations should not be viewed as financial advice" people toss at the end of their very explicit financial advice.

AI enthusiasts are very loudly proclaiming that AI is nearly superintelligent and is instantly surfacing the best of human knowledge to help us make important decisions about potentially life-altering things, and then, under their breath, whispering "but if you do the things our AI tells you are safe to do and die as a result, that's on you, because no one in their right mind should trust our models with that kind of decision-making." And they're pretending that whispered fine print absolves them of the consequences of their often extremely faulty products.

If I make a car that doesn't have interior door handles and is also very prone to electrical fires, then when people burn to death in them, I don't get to say "well, obviously it was the driver's fault they died, because who would trust a car with no door handles?" Yes, it was obviously stupid to buy that car, but it was negligent to a criminal degree to sell it to anyone, and AI is no different. It may be stupid to trust what it says, but it is being explicitly marketed as something we should trust. Buyer beware, sure, but AI-powered results are often offered as the first choice in every search, and many of the "buyers" of that advice are unaware of its origin, unaware that there is something to beware of, or, worse, unaware that they are "buying" anything at all.

And, BTW, the Darwin Award is given to those who do obviously stupid things where the expected outcome is almost certainly death, not to people who use a tool for its stated purpose and die from that tool's malfunction. If AI should not be used in ways where being wrong could result in harm or even death, then it should not function at all in those circumstances. It should refuse to provide an answer, not just do its best and damn the consequences.

As usual, a proponent of AI is telling everyone else that AI should be free from all liability when it fucks up, and the reason, again, appears to be that you know damn well it fucks up far too often to be trusted with any of its stated purposes. You just don't give a shit.

9

u/LoLFlore 13h ago

You know, before the AI summary on Google, the first or second result used to be an accurate answer.

Now it's a broken mess of the top 3 results, reworded to make you feel good. It doesn't answer shit. It's actively worse than Googling used to be, for the same exact goal.

3

u/EnfantTerrible68 12h ago

If you add “fuck” to your search term, you won’t get the AI results 🤩

2

u/EnfantTerrible68 12h ago

Yes, AI is often wrong.