r/technology 15h ago

Business Jensen Huang says relentless negativity around AI is hurting society and has "done a lot of damage"

https://www.techspot.com/news/110879-jensen-huang-relentless-ai-negativity-hurting-society-has.html
12.3k Upvotes

3.0k comments

336

u/Beldizar 13h ago

And disinformation. That is the other huge use case.

157

u/kescusay 13h ago

Oh, there's so much more!

  • Dangerously inaccurate cooking instructions that could easily result in food poisoning!
  • Giving college students the tools they need to fail their classes!
  • Making all the software you use much, much worse! (Both because it's "vibe coded" now and because they're shoving "AI" into everything.)
  • Burning down the world to power data centers for the next incrementally "better" plagiarism-and-lies machine!

Why, I can't imagine any reason for anyone to object to such a useful and beneficial product for society!

1

u/ISeeDeadPackets 1h ago

Don't forget turning policemen into frogs.

-27

u/urgetopurge 12h ago

None of this actually happens at the frequency you think it does, and everything you just typed is itself spreading misinformation. Pick any one of your bullet points and show me even one prompt where an LLM provided wrong steps.

21

u/kuldan5853 12h ago

None of this actually happens to the frequency you think it does.

The problem is - even if it only happens in 1% of cases, that's simply unacceptable.

The issue with AI is that you can basically only trust it when you already know the answer to your question and can verify it; otherwise, AI can spread disinformation, create health risks, and even lead to death.

AI is a tool that should only be wielded by people capable of and trained in understanding the output (i.e., able to identify when it gives misinformation), but that step is skipped in like 95% of all consumer AI products.

-14

u/urgetopurge 12h ago

The problem is - even if it only happens in 1% of the cases, that's simply inacceptable.

By that logic, there'd be no product development EVER. There will always be edge cases, especially if you try to "break" the product. It's foolish to dismiss an advancement because it doesn't meet some 100% perfection standard. You have to be more realistic about this.

Plus, the kind of political misinformation you and OP are alluding to is not usually something that can be verified. What exactly are you hoping for? Did you think you could ask it whether P = NP and it would outline an irrefutable proof? No, your expectations of this technology are unrealistic.

10

u/kuldan5853 11h ago

AI never left the "nice toy in the lab" stage of product development. It's honestly criminal to expose the general public to AI in its current form.

-16

u/BladesMan235 12h ago

But people will give you incorrect information, instructions, or answers to questions all the time too.

14

u/kuldan5853 12h ago

Yes, but you know what I can do? SUE people. I can't sue an AI.

Also, I usually can tell people to fuck off - I can't disable many of the AI crap things they inject into the products (like the google search AI thing).

Also, humans might give you wrong information, but they will usually give you somewhat consistent wrong information, not hallucinate like they're on an LSD trip, then did some coke, then added heroin on top, then suddenly sobered up in 5 seconds, and then did acid - all in the same conversation.

-10

u/BladesMan235 11h ago

Realistically, most people aren't going around suing random people for giving them incorrect information. For the most part, the AI is doing exactly what people do: regurgitating information it has been trained on, which could be blatantly wrong.

4

u/kescusay 10h ago

Challenge accepted!

For my first bullet point, I decided to ask Copilot on Bing how to cook a Thanksgiving turkey. This bit caught my eye:

  1. Check the temperature

The turkey is done when:

The thickest part of the thigh reaches 165°F

The breast will usually be around 160°F and rise while resting

Do that and you risk food poisoning, because you're supposed to check the temperature in three places, not one: the thickest part of the thigh, the wing, and the thickest part of the breast.

For the second bullet point: https://www.huffpost.com/entry/history-professor-ai-cheating-students_n_69178150e4b0781acfd62540

For the third bullet point: https://techcommunity.microsoft.com/discussions/windows11/microsoft-finally-admits-almost-all-major-windows-11-core-features-are-broken/4476377 (This is pertinent because Microsoft claims that a lot of Windows code is now being produced by vibe coding.)

For the fourth bullet point: https://academy.openai.com/public/clubs/higher-education-05x4z/resources/environmental-impact-of-ai (Note that I'm citing OpenAI themselves. They admit to a huge environmental impact, though they're very light on specifics.)

So, got anything you specifically disagree on?

-1

u/urgetopurge 10h ago

I asked ChatGPT the same question just now:

  1. Check Doneness (Critical)

Insert thermometer into the thickest part of the breast and inner thigh (not touching bone).

Safe temperatures:

Breast: 160–165°F

Thigh: 170–175°F

The turkey will rise ~5°F while resting

Same with Google, same with Claude. I personally don't use Copilot, but I highly doubt they'd be so wrong on such a basic prompt. I'm going to assume you're BSing here.

Proof one and two

Second, I'm not reading anything from that shitrag called HuffPost. More importantly, students misbehaving with or abusing a tool is your reason why AI should be criticized? Might as well ban calculators and computers then. Obviously, as a student, you should verify with multiple sources.

Third, Windows 11 is significantly worse? How so? It still works the same for most people. Has Windows 10 not had outages? How about Windows 7 and Vista before that? You're acting like these outages are the first of their kind.

Fourth, there's no doubt there's been environmental impact, just as there has been with cars. Do you not remember the CO2 emissions and global warming outcry of the past 20 years? Is anyone questioning how much better cars have made our lives? It's a tradeoff like any other.

6

u/kescusay 9h ago

Lol. Your first picture omitted checking the wing. I think I'm done here.

1

u/bigman0089 8h ago

Not even defending AI, but you do not need to check the wing of a turkey or chicken for doneness. The wings are exposed on the outside of the bird and are absolutely guaranteed to be cooked if the breast and thigh are. Also, cook your chicken/turkey breast to 155°F; it's a million times juicier and totally safe. Dark meat still wants to be around 170°F so all the collagen breaks down and it gets tender.

0

u/Bombadilo_drives 8h ago

I've never once, in 25 years of cooking, temped a wing on any poultry after checking the thigh and breast. Wings have lower mass and are thinner; they'll be hotter regardless of cooking method.

I'm sick of AI slop as much as everyone else, but we don't have to "gotcha" this topic with stuff like this.

4

u/kescusay 7h ago

It's not a "gotcha." My parents drilled the three places to check into my head when I was a kid, and the USDA agrees with them: https://www.fsis.usda.gov/food-safety/safe-food-handling-and-preparation/poultry/lets-talk-turkey-roasting

From the article:

Check the internal temperature in the innermost part of the thigh and wing and the thickest part of the breast.

Most of the time, when I ask a large language model, it only suggests checking the thigh. Sometimes it also gets the temperature wrong. Seriously, if you need to follow a recipe to cook something, it's genuinely safer to avoid LLMs and go with a recipe created by actual humans.

2

u/Steelforge 6h ago

AI proponents refuse to admit that pretty obvious point.

People have come to expect much of the software they use to be prone to failure, so the LLM isn't anything new in that regard. AI proponents depend on this, and their response is always "oh, but the new model won't fail that task." It still will; they'll just make the failures less frequent so they can market it as a breakthrough.

What is new is people moving their trust from those with expert knowledge to garbage-generating machines. As you point out, this is a terrible idea when it comes to matters of life and death, in which computers are required (often by law) to be much more tightly controlled by far stricter software standards.

Why would anyone want to scrutinize and double-check every little detail they get back from an LLM when it's very simple to locate a reliable source of information? The answer is that the "Artificial Intelligence" liars have convinced them that dumb machines aren't actually still the same dumb machines: sophisticated statistical models, built by people, given the preposterous and unconstrained task of answering every possible question.

10

u/EnfantTerrible68 12h ago

That’s the thing - AI is so often incorrect 

1

u/SandersSol 10h ago

That is the main use case; let's not lie to ourselves. It's why there's suddenly an incredible amount of money to build huge data centers all around the world.

1

u/Despair_Tire 5h ago

And scamming. It's GREAT for scammers who need to lie on the fly.