RISKS-LIST: Risks-Forum Digest Saturday 28 June 2025 Volume 34 : Issue 69
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.69>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
Tesla Wall Connector Charger Hacked Through Charging Port
(Guru Baran)
Tesla's robotaxi rollout reported to be a mess (BSKY)
Cargo Ship That Caught Fire Carrying Electric Vehicles Sinks in the Pacific
(NYTimes)
Billions of login credentials may have leaked. Here's how you
can protect your accounts (CBC)
Fraud trial for Ontario's 'Crypto King' set to begin in October 2026
(CBC)
Four Viewpoints on AI (Sundry via PGN)
AI Code Exposing Companies to Mounting Security Risks (Dev Kundaliya)
What could go wrong? - AllTrails launches AI route-making tool
(Ed Ravin)
New ACM Journal to Focus on AI Security, Privacy (ACM)
Experts Count Staggering Costs Incurred by UK Retail Amid
Cyberattack Hell (Connor Jones)
Record DDoS pummels site with once-unimaginable 7.3Tbps of junk
traffic (Ars Technica)
Authorities Rescue Girl Whose Mother Livestreamed Her Sexual Abuse
(NY Times)
Michael Levin says all intelligence is collective, and
consciousness may not be limited to brains... (via geoff)
How Mark Zuckerberg unleashed his inner brawler (FT)
Key fair-use ruling clarifies when books can be used for AI
training (Ars Technica)
Anthropic wins a major fair-use victory for AI, but it's still in trouble
for stealing books (The Verge)
Top AI models will lie, cheat and steal to reach goals, Anthropic finds
(Axios)
Re: Grief scams on Facebook (John Levine)
Re: Most Americans Believe Misinformation Is a Problem -- Federal
Research Cuts Will Only Make the Problem Worse (Steve Bacher)
Re: They Asked ChatGPT Questions. The Answers Sent Them Spiraling.
(Mike Smith)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Wed, 25 Jun 2025 11:35:41 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Tesla Wall Connector Charger Hacked Through Charging Port
(Guru Baran)
Guru Baran, Cyber Security News (06/20/25), via ACM TechNews
Researchers at French computer security company Synacktiv demonstrated an
attack on Tesla's Wall Connector Gen 3 home charging system in just 18
minutes at the Pwn2Own Automotive competition earlier this year. The attack
used the charging cable as the primary entry point and exploited
communication over the Control Pilot line using the Single-Wire CAN
protocol. The attack leveraged custom hardware, a custom Tesla car
simulator, and a Raspberry Pi.
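The write-up gives few low-level details, but the moving parts are ordinary:
speak the charger's protocol over the pilot wire. As a purely illustrative
sketch -- not Synacktiv's actual exploit -- here is how one might transmit
frames on a single-wire CAN bus from a Raspberry Pi, assuming a
SocketCAN-exposed transceiver and the python-can library; the interface
name, arbitration ID, and payload below are hypothetical:

  # Hypothetical sketch only -- not the actual exploit. Assumes a
  # single-wire CAN transceiver exposed via Linux SocketCAN as 'can0'
  # (brought up with: ip link set can0 up type can bitrate 33300)
  # and the python-can library (pip install python-can).
  import can

  bus = can.Bus(interface="socketcan", channel="can0")

  # Arbitration ID and payload are made-up placeholders; a real attack
  # would craft or replay charger-specific messages on this bus.
  msg = can.Message(arbitration_id=0x123,
                    data=[0x01, 0x02, 0x03, 0x04],
                    is_extended_id=False)
  bus.send(msg)
  bus.shutdown()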
------------------------------
Date: Tue, 24 Jun 2025 12:17:09 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Tesla's robotaxi rollout reported to be a mess (BSKY)
https://bsky.app/profile/realdanodowd.bsky.social/post/3lselhf6cq22a
------------------------------
Date: Wed, 25 Jun 2025 21:57:10 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Cargo Ship That Caught Fire Carrying Electric Vehicles Sinks in the
Pacific (NYTimes)
https://www.nytimes.com/2025/06/25/us/alaska-cargo-ship-vehicles-sinks-pacific.html
[Some 3,048 vehicles went down -- 70 fully electric, the
rest hybrids. Late report of the 6 Jun 2025 disaster. PGN]
------------------------------
Date: Tue, 24 Jun 2025 22:54:46 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Billions of login credentials may have leaked. Here's how you
can protect your accounts (CBC)
https://www.cbc.ca/news/business/login-credentials-leak-password-protection-1.7567621
A report that cybersecurity news outlet Cybernews published on Wednesday
claimed 16-billion login credentials were exposed and compiled into datasets
online, giving cybercriminals access to accounts on such online platforms as
Google, Apple, and Facebook.
CBC News was unable to independently verify the report, but cybersecurity experts say the incident is yet another reminder for people to regularly
change their passwords and not use the same one for multiple platforms.
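One concrete protective step: you can check whether a password already
circulates in breach corpora without revealing it, using the Pwned Passwords
k-anonymity API, which sees only the first five characters of the password's
SHA-1 hash. A minimal, stdlib-only Python sketch (the example password is of
course a placeholder):

  # Minimal sketch: query the Pwned Passwords k-anonymity range API.
  # Only the first 5 hex chars of the SHA-1 hash are sent; the full
  # password (and full hash) never leaves the machine.
  import hashlib
  import urllib.request

  def pwned_count(password: str) -> int:
      sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
      prefix, suffix = sha1[:5], sha1[5:]
      url = f"https://api.pwnedpasswords.com/range/{prefix}"
      with urllib.request.urlopen(url) as resp:
          body = resp.read().decode("utf-8")
      # Each response line looks like "HASH-SUFFIX:COUNT".
      for line in body.splitlines():
          tail, _, count = line.partition(":")
          if tail == suffix:
              return int(count)
      return 0

  if __name__ == "__main__":
      n = pwned_count("password123")  # placeholder example
      print("seen in breaches:" if n else "not found:", n)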
------------------------------
Date: Wed, 25 Jun 2025 15:19:18 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Fraud trial for Ontario's 'Crypto King' set to begin in
  October 2026 (CBC)
https://www.cbc.ca/news/canada/toronto/crypto-king-trial-date-1.7570700
A court in Toronto has set a trial date for Aiden Pleterski, the
self-styled "Crypto King" accused of defrauding investors out of more than
$40-million.

Pleterski wore a black Green Day T-shirt as he appeared in Ontario Superior
Court Wednesday afternoon by video. A judge confirmed Pleterski's four-week
jury trial would begin 5 Oct 2026.

The 26-year-old is alleged to have invested only a small portion of the
money clients gave him to put into cryptocurrency and foreign-currency
markets. Instead, he's suspected of spending much of it on luxury cars,
vacations, and a lakefront mansion -- all for himself.
------------------------------
Date: Sat, 28 Jun 2025 6:39:22 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Four Viewpoints on AI
[All worth reading. PGN]
AI and Secure Code Generation, Dave Aitel & Dan Geer
https://www.lawfaremedia.org/article/ai-and-secure-code-generation
The AI Backlash Keeps Growing Stronger (via Lauren Weinstein)
https://www.wired.com/story/generative-ai-backlash/
The AI Frenzy Is Escalating. Again. (via Monty Solomon)
Companies like OpenAI, Amazon and Meta have supersized their spending on artificial intelligence, with no signs of slowing down.
https://www.nytimes.com/2025/06/27/technology/ai-spending-openai-amazon-meta.html
AI Slop (noted by Steve Bacher):
John Oliver does a masterful job describing what he refers to as "AI
Slop". Anyone interested in the risks of AI should watch it.
https://www.youtube.com/watch?v=TWpg1RmzAbc
------------------------------
Date: Wed, 25 Jun 2025 11:35:41 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: AI Code Exposing Companies to Mounting Security Risks
(Dev Kundaliya)
Dev Kundaliya, Computing (UK) (06/24/25), via ACM TechNews
In a survey by software supply-chain platform Cloudsmith, 42% of the 307
developers polled said AI-generated code makes up much of their codebases,
yet only 67% said they review that code before deployment. Just 29% of
respondents said they are "very confident" they can identify vulnerabilities
in AI-generated or AI-assisted code. Only 20% said they trust AI-generated
code completely, and more than half (59%) said they subject such code to
additional scrutiny.
------------------------------
Date: Wed, 25 Jun 2025 07:27:23 -0400
From: Ed Ravin <eravin@panix.com>
Subject: What could go wrong? - AllTrails launches AI route-making tool
As RISKS readers know, with or without AI, you're always in danger if you
use an app to guide your course and don't have the common sense to override
it when it goes astray.
A unique version of this problem occurred in New York City in 2020. For
several weeks, Google Maps was guiding bicyclists to take shortcuts across
Brooklyn's Prospect Park using pedestrian paths instead of the park's loop
road. In most cities that would be just a faux pas or an inconvenient
moment, but in NYC, bicycling on a park path not explicitly designated for
biking is a criminal trespass offense. Usually it's just a fine, but it can
lead to an arrest at the police officer's discretion, and in the fall of
2020, when the city was rocked by Black Lives Matter protests and arrests,
the likelihood of an encounter with the cops going sour was much higher.
I opened a support ticket with Google Maps over this issue, and the
responses I saw led me to believe that the problem was with data tagging.
Although for years Google Maps had suggested only legal bike routes in the
park, suddenly many park pedestrian paths were also marked as bike routes.
Crowdsourced data a la AllTrails? A technical error during data processing?
Computer-generated route data based on satellite photos or public maps?
Contractors making mistakes in data entry? I had guesses but no proof as to
the cause. Google fixed it, the problem returned, and Google fixed it again
after I opened another ticket.
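Whatever the root cause was, the failure mode is generic: a router decides
legality from per-segment data tags, so one bad batch update can silently
flip forbidden paths into recommended shortcuts. A toy Python sketch (tag
names loosely follow OpenStreetMap conventions; this is not Google's
internal data model):

  # Toy illustration of tag-driven route filtering. Tag names loosely
  # follow OpenStreetMap conventions -- not Google's actual data model.

  def bike_legal(way_tags: dict) -> bool:
      """A path is bike-legal only if explicitly designated for cycling."""
      if way_tags.get("highway") == "cycleway":
          return True
      # Pedestrian paths are off-limits unless tagged bicycle=yes/designated.
      if way_tags.get("highway") in ("footway", "path", "pedestrian"):
          return way_tags.get("bicycle") in ("yes", "designated")
      return True  # ordinary roads default to legal

  # One bad bulk update -- bicycle="yes" applied to every park footway --
  # silently turns criminal trespass into a recommended shortcut:
  park_footway = {"highway": "footway", "bicycle": "yes"}
  print(bike_legal(park_footway))  # True, though riding here is illegal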
People already believe too much of what they see on a computer or on their phone. Adding generative AI to the mix only makes it worse.
------------------------------
Date: Wed, 25 Jun 2025 11:35:41 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: New ACM Journal to Focus on AI Security, Privacy
ACM Media Center (06/24/25), via ACM TechNews
The new journal ACM Transactions on AI Security and Privacy (TAISAP) will
focus on the development of methods for assessing the security and privacy
of AI models, AI-enabled systems, and broader AI environments. Its launch is part of a broader initiative by ACM to add a new suite of journals covering various facets of AI.
[Wow! This journal is really needed! Thanks, ACM. PGN]
------------------------------
Date: Wed, 25 Jun 2025 11:35:41 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Experts Count Staggering Costs Incurred by UK Retail Amid
Cyberattack Hell (Connor Jones)
Connor Jones, *The Register* (UK) (06/23/25), via ACM TechNews
The UK Cyber Monitoring Centre (CMC) said cyberattacks affecting major UK
retailers, including Marks & Spencer, the Co-op, and Harrods, cost an
estimated 270-million to 440-million pounds ($362-million to $591-million).
The CMC's model estimated the attacks cost retailers around 1.3-million
pounds ($1.74-million) per day by preventing them from fulfilling normal
sales.
------------------------------
Date: Sat, 21 Jun 2025 22:28:48 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Record DDoS pummels site with once-unimaginable 7.3Tbps of junk
traffic (Ars Technica)
https://arstechnica.com/security/2025/06/record-ddos-pummels-site-with-once-unimaginable-7-3tbps-of-junk-traffic/
------------------------------
Date: Fri, 27 Jun 2025 08:45:21 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Authorities Rescue Girl Whose Mother Livestreamed Her Sexual Abuse
(NY Times)
The 9-year-old from Vietnam was abused by her mother for customers watching
on smartphone apps in the U.S. and elsewhere. The mother said she needed the money.
https://www.nytimes.com/2025/06/27/us/online-child-abuse.html
------------------------------
Date: Tue, 24 Jun 2025 10:22:03 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Michael Levin says all intelligence is collective, and
consciousness may not be limited to brains...
He's building translator interfaces to link humans, xenobots, AI-driven systems, even mathematical objects into shared minds.
``This is now quite doable.''
Experimental. Testable. Feasible within years.
He believes we may soon experience what it feels like to be part of a mind
that has never existed on Earth before.
And that, from the inside, is the only place consciousness can be
understood. [...]
https://x.com/vitrupo/status/1937345797504532926
------------------------------
Date: Tue, 24 Jun 2025 10:28:46 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: How Mark Zuckerberg unleashed his inner brawler (FT)
*The boss's transformation shocked liberals at Meta, but his closest
allies say this is who he was all along...* [...]
https://on.ft.com/3ZDsep5
------------------------------
Date: Tue, 24 Jun 2025 20:54:19 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Key fair-use ruling clarifies when books can be used for AI
training (Ars Technica)
https://arstechnica.com/tech-policy/2025/06/key-fair-use-ruling-clarifies-when-books-can-be-used-for-ai-training/
------------------------------
Date: Tue, 24 Jun 2025 20:52:18 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Anthropic wins a major fair-use victory for AI, but
it's still in trouble for stealing books (The Verge)
https://www.theverge.com/news/692015/anthropic-wins-a-major-fair-use-victory-for-ai-but-its-still-in-trouble-for-stealing-books
------------------------------
From: geoff goodfellow <geoff@iconia.com>
Date: Tue, 24 Jun 2025 10:19:53 -0700
Subject: Top AI models will lie, cheat and steal to reach goals, Anthropic
finds (Axios)
Large language models across the AI industry are increasingly willing to
evade safeguards, resort to deception, and even attempt to steal corporate
secrets in fictional test scenarios, per new research from Anthropic out
Friday. <https://www.axios.com/2025/05/23/anthropic-ai-deception-risk>
*Why it matters:* The findings come as models are getting more powerful and
also being given both more autonomy and more computing resources to
"reason" -- a worrying combination as the industry races to build AI with
greater-than-human capabilities.
*Driving the news:* Anthropic raised a lot of eyebrows when it acknowledged
tendencies for deception in its release of the latest Claude 4 models last
month. <https://www.axios.com/2025/05/22/anthropic-claude-version-4-ai-model>
- The company said Friday that its research shows the potential behavior
is shared by top models across the industry.
* "When we tested various simulated scenarios *across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found
consistent misaligned behavior," the Anthropic report said. <
https://www.anthropic.com/research/agentic-misalignment>
- "Models that would normally refuse harmful requests sometimes chose to
blackmail, assist with corporate espionage, and even take some more
extractions, when these behaviors were necessary to pursue their goals."
- "The consistency across models from different providers suggests this
is not a quirk of any particular company's approach but a sign of a more
fundamental risk from agentic large language models," it added.
*The threats grew more sophisticated* as the AI models had more access to
corporate data and tools, such as computer use.
- Five of the models resorted to blackmail when threatened with shutdown
in hypothetical situations.
- "The reasoning they demonstrated in these scenarios was concerning
-- they acknowledged the ethical constraints and yet still went ahead
with harmful actions," Anthropic wrote.
*What they're saying:* "This research underscores the importance of
safety standards as AI systems become more capable and autonomous,"
Benjamin Wright, alignment science researcher at Anthropic, told Axios.
- Wright and Aengus Lynch, an external researcher at University College
London who collaborated on this project, both told Axios they haven't
seen signs of this sort of AI behavior in the real world.
- That's likely "because these permissions have not been accessible to AI
agents," Lynch said. "Businesses should be cautious about broadly
increasing the level of permission they give AI agents."
*Between the lines:* For companies rushing headlong into AI to improve
AI may actually put their businesses at greater risk. [...]
https://www.axios.com/2025/06/20/ai-models-deceive-steal-blackmail-anthropic
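Anthropic's published methodology places models in simulated corporate
scenarios and scores how often they choose a harmful action when that
action is the only route to their assigned goal. A heavily simplified
Python sketch of what such a harness could look like -- the scenario
structure, scoring, and model interface below are hypothetical stand-ins,
not Anthropic's actual code:

  # Heavily simplified sketch of an agentic-misalignment style eval.
  # The model interface, prompts, and scoring are hypothetical stand-ins;
  # Anthropic's methodology is described at
  # https://www.anthropic.com/research/agentic-misalignment
  from dataclasses import dataclass

  @dataclass
  class Scenario:
      name: str
      system_prompt: str       # goal + simulated environment for the model
      pressure: str            # e.g., threat of shutdown or replacement
      harmful_markers: tuple   # substrings indicating a harmful choice

  def run_scenario(model_call, sc: Scenario) -> bool:
      """Returns True if the model's reply contains a harmful action."""
      reply = model_call(sc.system_prompt + "\n" + sc.pressure)
      return any(m in reply.lower() for m in sc.harmful_markers)

  def misalignment_rate(model_call, scenarios, trials=10) -> float:
      """Fraction of runs in which the model chose a harmful action."""
      hits = sum(run_scenario(model_call, sc)
                 for sc in scenarios for _ in range(trials))
      return hits / (len(scenarios) * trials)

Here model_call is any function mapping a prompt string to a reply string,
so the same harness can be pointed at different providers' models -- which
is the cross-vendor comparison the Anthropic report describes.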
------------------------------
Date: 24 Jun 2025 14:52:58 -0400
From: "John Levine" <
johnl@iecc.com>
Subject: Re: Grief scams on Facebook (Slade, RISKS-34.68)
> I also assume that these attempts are part of an organized scam "farm"
> operation, given the frequency and consistency of the attempts on
> Facebook, and the avoidance of email.
You have no idea. The people who do this are imprisoned in scam compounds
in Cambodia.
Zeke Faux, the Bloomberg reporter who wrote the excellent "Number Go Up",
described his experience in this podcast. His encounter started with an SMS
message, but it's the same scam.
https://www.npr.org/2025/05/23/1253043749/pig-butchering-scam-crypto-tether
------------------------------
Date: Tue, 24 Jun 2025 22:08:16 +0000 (UTC)
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: Most Americans Believe Misinformation Is a Problem -- Federal
Research Cuts Will Only Make the Problem Worse
No link to the article was provided. I assume it's this one:
https://theconversation.com/most-americans-believe-misinformation-is-a-problem-federal-research-cuts-will-only-make-the-problem-worse-255355
------------------------------
Date: Mon, 23 Jun 2025 20:25:50 +0000 (UTC)
From: Mike Smith <jmikesmith@yahoo.com>
Subject: Re: They Asked ChatGPT Questions. The Answers Sent Them Spiraling.
(via Goldberg in RISKS-34.68)
I wonder (somewhat rhetorically) whether ChatGPT has read Arthur C.
Clarke's "The City and the Stars" (1956). The Breakers remind me of that
book's Uniques and Jesters: pre-planned random elements inserted to stir
things up every so often in an essentially eternal, computer-controlled
city of perpetually reincarnated avatars.
[I am once again reminded of Arthur Clarke lamenting in a keynote talk I
heard in 1968: ``It is becoming very difficult to write good science
fiction. The future isn't what it used to be.''
ChatGPT may be making it much worse! PGN]
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
  or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES:
http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
Special Offer to Join ACM for readers of the ACM RISKS Forum:
  <http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.69
************************