Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
On 2025-01-16 16:02:46 +0000, badgolferman said:
Apple has come under intense scrutiny for rolling out an underbaked<snip>
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
On 1/16/2025 4:15 PM, Your Name wrote:
On 2025-01-16 16:02:46 +0000, badgolferman said:
Apple has come under intense scrutiny for rolling out an underbaked<snip>
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
So, no different to any other idiotic AI nonsense. They should all be
avoided by any sane person because they're all just useless, over-hyped
crap that they are. Hopefully it will be just another quickly gone tech
fad.
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
On 2025-01-16 22:18:10 +0000, Rick said:
<snip>
AI is nothing more than software, which means that, like any other
software program, it is only as good as the way it is programmed.
And hence why so-called "Artificial Intelligence" is not actually
intelligence of any kind. It's simply a computer program doing what it
has been programmed to do. It does NOT "learn" and it does NOT "think",
and it can never do either of those things.
I agree that a lot of AI isn't very good (and Apple is clearly on that
list), but from the limited testing I have done, I think certain more
mature LLM products like ChatGPT and Copilot are actually quite useful
when used properly.
Not really. ChatGPT has numerous issues as well.
When it comes to the uselessness of "AI", you just have to look at the
abysmal images that it creates - people with three hands, buildings
floating in mid-air, ...
The really scary part is that morons are trusting this garbage to do
important work like medical diagnosis and supposed self-driving cars!! :-(

I agree with this entirely.
<snip>
Alan <nuh-uh@nope.com> wrote:
On 2025-01-16 08:02, badgolferman wrote:
Apple has come under intense scrutiny for rolling out an underbaked
AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
They're all "underbaked" right now, Sunshine.
This is not news to anyone who has actually been paying attention.
Yes they are, but that’s not the issue here. Apple knows their version is not good enough but still pushes it out to users. They should not be using their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
Users are being fed bad data by Apple Intelligence and the company knows
it. Why don't they make it more mature before rolling it out?
On 1/16/2025 6:55 PM, Your Name wrote:
<snip>
Not really. ChatGPT has numerous issues as well.
That's why I say "when used properly".
badgolferman <REMOVETHISbadgolferman@gmail.com> wrote:
Alan <nuh-uh@nope.com> wrote:
<snip>
Yes they are, but that’s not the issue here. Apple knows their version is
not good enough but still pushes it out to users. They should not be using
their customers as unwilling beta testers.
Apple is not pushing it out to unwilling beta testers. You have to choose
to download and install it beyond the iOS upgrade. It is an option and not
forced on you.
On 2025-01-17 00:48:49 +0000, Rick said:
<snip>
That's why I say "when used properly".
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
Your Name wrote:
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advising them how not to waste
billions of dollars?
On Fri, 17 Jan 2025 12:55:22 -0000 (UTC), badgolferman wrote:
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advise them how not to waste
billions of dollars?
Had nospam's contract not expired, he would have definitively claimed AI is
 "not needed"       "not wanted"
simply because Apple doesn't have it while everyone else already does.
On 1/17/2025 12:38 AM, Your Name wrote:
<snip>
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are jumping
on the bandwagon of as the latest fad.
It's no more "massively flawed" than any other software or programming
- because that's all AI is. It is just software - written by and for
humans - and it is up to each person to assess how much, if any, benefit
to derive from it. You've assessed that it's useless to you and that's
fine. Many people do derive perceived benefits from it, and that's
fine for them. And you can call them "idiot" tech companies if you
want, but that just sounds like envy, as many of those companies are
doing quite well and will likely continue to do so. If AI is just a
fad, as you suggest, it will ultimately fizzle out, but as someone with
more than 40 years of IT experience, I actually see it as the natural
and normal evolution of software development.
On Jan 17, 2025 at 7:55:22 AM EST, "badgolferman" <REMOVETHISbadgolferman@gmail.com> wrote:
Your Name wrote:
You can use it as "properly" as you like, but the entire idea is
massively flawed, so it will never work properly at all. AI is simply
useless, over-hyped crap that all the idiot tech companies are
jumping on the bandwagon of as the latest fad.
It's amazing how much smarter you are than these tech companies. Have
you considered becoming a consultant and advise them how not to waste
billions of dollars?
They won't listen anyway. They need to lose billions of dollars before they will understand.
Again. Remember "smart speakers"? Remember "virtual reality"? Both were once
"the next big thing".
Both are now just quaint memories.
On 2025-01-17 14:34:14 +0000, Johnny LaRue said:
<snip>
They won't listen anyway. They need to lose billions of dollars before
they will understand.
Many of them in big business management won't listen even when they do
lose billions.
There's a company here that was recently declared bankrupt and had lost
hundreds of thousands of dollars,
but the CEO who owned the majority of the stock bought up all the rest
of the stock and "re-opened" the same business under the same name.
Either he's an idiot or the bankruptcy was a scam to get out of
paying all the debt ... or more likely both.
Apple has come under intense scrutiny for rolling out an underbaked AI-powered feature that summarizes breaking news — while often
butchering it beyond recognition.
For over a month, roughly as long as the feature has been available to
iPhone users, publishers have found that it consistently generates
false information and pushes it to millions of users.
Despite broadcasting a barrage of fabrications for weeks, Apple has yet
to meaningfully address the problem.
"This is my periodic rant that Apple Intelligence is so bad that today
it got every fact wrong in its AI summary of Washington Post news
alerts," the newspaper's tech columnist Geoffrey Fowler wrote in a post
on Bluesky this week.
Fowler appended a screenshot of an alert, which claimed that Pete
Hegseth, who's been facing a confrontational confirmation hearing for
the role of defense secretary this week, had been fired by his former employer, Fox News — which is false and not what the WaPo's syndication
of an Associated Press story actually said. The AI alert also claimed
that Florida senator Marco Rubio had been sworn in as secretary of
state, which is also false as of the time of writing.
"It's wildly irresponsible that Apple doesn't turn off summaries for
news apps until it gets a bit better at this AI thing," Fowler added.
The constant blunders of Apple's AI summaries put the tech's nagging shortcomings on full display, demonstrating that even tech giants like
Apple are failing miserably to successfully integrate AI without
constantly embarrassing themselves.
AI models are still coming up with all sorts of "hallucinated" lies, a problem experts believe could be intrinsic to the tech. After all,
large language models like the one powering Apple's summarizing feature simply predict the next word based on probability and are incapable of actually understanding the content they're paraphrasing, at least for
the time being.
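The next-word mechanics described above can be sketched in a toy way. This is not Apple's actual model: the bigram table, words, and probabilities below are invented for illustration. The point is only that each word is chosen by how likely it is to follow the previous one, with no check against whether the resulting sentence is true.

```python
# Toy sketch of next-word prediction (hypothetical data, not a real model).
# A hand-built bigram table stands in for a trained language model: for
# each word, it lists candidate next words with made-up probabilities.
BIGRAM_PROBS = {
    "breaking": {"news": 0.9, "ground": 0.1},
    "news":     {"alert": 0.6, "summary": 0.4},
}

def next_word(prev: str) -> str:
    """Return the most probable next word after `prev` (greedy decoding)."""
    dist = BIGRAM_PROBS.get(prev, {})
    if not dist:
        return "<end>"  # no known continuation; stop generating
    return max(dist, key=dist.get)

def generate(start: str, max_words: int = 5) -> list[str]:
    """Extend `start` one most-probable word at a time."""
    words = [start]
    for _ in range(max_words):
        w = next_word(words[-1])
        if w == "<end>":
            break
        words.append(w)
    return words

print(generate("breaking"))  # picks "news", then "alert", then stops
```

Nothing in this loop consults facts; a fluent but false continuation can score just as "probable" as a true one, which is one intuition for why hallucination may be intrinsic to the approach.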
And the stakes are high, given the context. Apple's notifications are intended to alert iPhone users to breaking news — not sow distrust and confusion.
The story also highlights a stark power imbalance, with news
organizations powerless to determine how Apple represents their work to
its vast number of users.
"News organizations have vigorously complained to Apple about this, but
we have no power over what iOS does to the accurate and expertly
crafted alerts we send out," Fowler wrote in a followup.
In December, the BBC first filed a complaint with Apple after the
feature mistakenly claimed that Luigi Mangione, the man who killed UnitedHealthcare CEO Brian Thompson, had shot himself — an egregious
and easily disproven fabrication.
Last week, Apple finally caved and responded to the complaint, vowing
to add a clarifying disclaimer that the summaries were AI-generated
while also attempting to distance itself from bearing any
responsibility.
"Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback," a company spokesperson
told the BBC in a statement. "A software update in the coming weeks
will further clarify when the text being displayed is summarization
provided by Apple Intelligence."
"We encourage users to report a concern if they view an unexpected notification summary," the company continued.
The disclaimer unintentionally points to the dubious value proposition
of today's AI: what's the point of a summarizing feature if the company
is forced to include a disclaimer on each one that it might be entirely wrong? Should Apple's customers really be the ones responsible for
pointing out each time its AI summaries are spreading lies?
"It just transfers the responsibility to users, who — in an already confusing information landscape — will be expected to check if
information is true or not," Reporters Without Borders technology and journalism desk head Vincent Berthier told the BBC.
Journalists are particularly worried about further eroding trust in the
news industry, a pertinent topic given the tidal wave of AI slop that
has been crashing over the internet.
"At a time where access to accurate reporting has never been more
important, the public must not be placed in a position of
second-guessing the accuracy of news they receive," the National Union
of Journalists general secretary Laura Davison told the BBC.
https://futurism.com/apple-ai-butchering-news-summaries
On 1/16/25 11:02 AM, badgolferman wrote:
<snip>
Maybe so...but it's for the children and the environment :-)
On 2025-01-18 18:57:53 +0000, Colour Sergeant Bourne said:
<snip>
Maybe so...but it's for the children and the environment :-)
According to a recent report, all the servers, cooling, etc. needed for
this silly AI fad will be worse for the environment than all of the cars driving around in California. :-\