[continued from previous message]
sometimes dangerous as well.
This is unconscionable. Frankly, whether Google understands this or
not, this behavior is uncaring and evil. Apparently Google's
leadership no longer feels any shame at all. Disgusting.
------------------------------
Date: Wed, 21 May 2025 18:44:01 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Dark LLMs: The Growing Threat of Unaligned AI Models (arxiv)
https://www.arxiv.org/abs/2505.10066
------------------------------
Date: Wed, 21 May 2025 09:10:29 -0700
From: "Jim" <
jgeissman@socal.rr.com>
Subject: Most AI chatbots easily tricked into giving dangerous responses,
  study finds (The Guardian)
Researchers say threat from jail-broken chatbots trained to churn out
illegal information is ``tangible and concerning''.
Hacked AI-powered chatbots threaten to make dangerous knowledge readily
available by churning out illicit information the programs absorb
during training, researchers say.
The warning comes amid a disturbing trend for chatbots that have been
"jailbroken" to circumvent their built-in safety controls. The
restrictions are supposed to prevent the programs from providing
harmful, biased or inappropriate responses to users' questions.
The engines that power chatbots such as ChatGPT, Gemini and Claude --
large language models (LLMs) -- are fed vast amounts of material from
the Internet.
Despite efforts to strip harmful text from the training data, LLMs can
still absorb information about illegal activities such as hacking,
money laundering, insider trading and bomb-making. The security
controls are designed to stop them using that information in their
responses.
In a report <https://www.arxiv.org/abs/2505.10066> on the threat, the
researchers conclude that it is easy to trick most AI-driven chatbots
into generating harmful and illegal information, showing that the risk
is "immediate, tangible and deeply concerning".
"What was once restricted to state actors or organised crime groups may soon
be in the hands of anyone with a laptop or even a mobile phone," the authors warn.
https://www.theguardian.com/technology/2025/may/21/most-ai-chatbots-easily-tricked-into-giving-dangerous-responses-study-finds
------------------------------
Date: Tue, 20 May 2025 19:58:31 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: AI chatbot to be embedded in Google search (BBC)
https://www.bbc.com/news/articles/cpw77qwd117o
Google is introducing a new artificial intelligence (AI) mode that more
firmly embeds chatbot capabilities into its search engine, aiming to give
users the experience of having a conversation with an expert.
The "AI Mode" was made available in the US on Tuesday, appearing as an
option in Google's search bar.
The change, unveiled at the company's annual developers conference in
Mountain View, California, is part of the tech giant's push to remain
competitive against ChatGPT and other AI services, which threaten to
erode Google's dominance of online search.
The company also announced plans for its own augmented reality glasses and
said it planned to offer a subscription AI tool.
------------------------------
Date: Tue, 20 May 2025 16:13:16 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Chicago Sun-Times Prints AI-Generated Summer Reading List With
  Books That Don't Exist (Chicago Sun-Times)
"I can't believe I missed it because it's so obvious. No excuses," the
writer said. "I'm completely embarrassed."
https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/
[Paywalled, but clear enough. GG]
[Also noted by Matthew Kruk and Monty Solomon. PGN]
Good luck picking up the books on an unofficial summer reading list from
the Chicago Sun-Times.
Hoping to delve into the "multigenerational saga" Tidewater Dreams by
Isabel Allende, for instance? Keep dreaming. Maybe a science-driven story
like Andy Weir's The Last Algorithm is more to your taste? The algorithm
can't help you.
OK then, how about Min Jin Lee's "riveting tale set in Seoul's
underground economy," Nightshade Market? Sorry -- all you're going to
find is shade.
That's because, while the authors may be real, the books don't
actually exist. And the Chicago Sun-Times is being roasted online for
publishing the AI-generated list. The paper initially couldn't explain
how the piece was published.
https://www.cbc.ca/news/world/chicago-sun-times-ai-book-list-1.7539016
------------------------------
Date: Fri, 23 May 2025 11:47:23 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Vulnerability Exploitation Probability Metric Proposed by NIST, CISA
  (Eduard Kovacs)
Eduard Kovacs, SecurityWeek (05/20/25), via ACM TechNews
A cybersecurity metric developed by researchers at the U.S.
Cybersecurity and Infrastructure Security Agency (CISA) and the U.S.
National Institute of Standards and Technology (NIST) calculates the
likelihood that a vulnerability has been exploited. The Likely
Exploited Vulnerabilities (LEV) metric could help estimate the
comprehensiveness of Known Exploited Vulnerabilities (KEV) lists and
enhance KEV- and EPSS-based vulnerability remediation prioritization.
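The compounding idea behind LEV can be sketched briefly. What follows
is a minimal, hypothetical Python illustration -- my reading of the
general approach, not the paper's exact equation or weighting: treat
each scoring window's EPSS value as an independent probability of
exploitation, and take the complement of the product of the
non-exploitation probabilities.

  # Hypothetical sketch of the LEV composition (assumed, not the
  # paper's exact formula). Each EPSS score estimates the probability
  # of exploitation within one scoring window; treating windows as
  # independent, P(exploited at least once) = 1 - prod(1 - p_i).
  def likely_exploited(epss_scores):
      p_never = 1.0
      for p in epss_scores:
          p_never *= 1.0 - p  # probability of no exploitation so far
      return 1.0 - p_never

  # Made-up 30-day EPSS scores for one CVE over six months:
  print(round(likely_exploited([0.02, 0.05, 0.10, 0.08, 0.03, 0.04]), 3))
  # -> 0.282

Even modest per-window scores compound quickly, which is why such a
metric can surface vulnerabilities that never cross any single-window
alert threshold.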
------------------------------
Date: Sun, 18 May 2025 14:21:12 +0100
From: Martin Ward <martin@gkc.org.uk>
Subject: Re: Why We're Unlikely to Get Artificial General Intelligence,
  Anytime Soon (NY Times)
Back in the 1940s, Turing wrote about his famous Test, and predicted
that within 20 years we would have machines as intelligent as humans.
Back in the 1960s, when AI research was just beginning, researchers
predicted that within the next 20 years we would have machines as
intelligent as humans. I remember reading some of these predictions in
the 1970s and wondering...
Back in the 1980s, I read Douglas Hofstadter's brilliant book "Godel,
Escher, Bach", in which he predicted that within the next 20 years we
would have machines as intelligent as humans. At that point, I made my
own prediction: "In 20 years' time, people will *still* be predicting
that in 20 years' time we will have machines as intelligent as
humans!"
Back in 2000, Ray Kurzweil (The Age of Spiritual Machines) and Hans
Moravec (Robot) proposed that "perhaps even as early as 2020 to 2030
we will have sufficient hardware complexity, as well as sufficient
insights from cognitive neuroscience (reverse engineering salient
neural structure of the mammalian brain), to create silicon
evolutionary spaces that will develop higher-level intelligence."
Bill Gates said: "Twenty years from now, predicts Ray Kurzweil, $1,000
computers will match the power of the human brain."
(http://us.penguingroup.com/static/packages/us/kurzweil/index.htm)
It seems that *my* prediction was fulfilled!
Now, in 2025, we have Sam Altman, Dario Amodei, and Elon Musk saying
that artificial intelligence will "soon" match the powers of human
brains, but some AI researchers are finally coming around to the
possibility that human-level AI may not actually be achieved within
the next ten years: "At this point, we can't tell." (Yann LeCun, chief
AI scientist at Meta)
Some tentative conclusions:
(1) Twenty years is just about as far ahead as anyone can imagine.
(2) "Moore's Law", observed in 1965 that computer power doubles every two years. This "law" continued to hold for the subsequent four decades, yet despite this huge technological gain, human intelligence is still just as
far away as it ever was. It is as if despite building bigger and bigger ladders, we are getting no closer to Andromeda galaxy!
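To make the magnitude concrete, a back-of-the-envelope check in Python
(my arithmetic, assuming uninterrupted doubling every two years across
four decades):

  # Back-of-the-envelope: cumulative gain from doubling every two
  # years over four decades (assumed uninterrupted for simplicity).
  doublings = 40 / 2        # 20 doublings in 40 years
  gain = 2 ** doublings     # 2**20 = 1,048,576
  print(f"{gain:,.0f}x")    # roughly a million-fold growth in raw power

A million-fold taller ladder, and Andromeda is no closer.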
(3) This suggests that in reality, human intelligence is
*infinitely* far removed from machine intelligence: in other words,
that there really is some *qualitative* difference between man
and machine, and not just a quantitative gap which can be bridged
with a few more transistors and a better programming language.
You simply cannot get to Andromeda by climbing a ladder :-)
(4) In this context, the arguments about a "Technological Singularity" begin
to look more like a "reductio ad absurdum" proof that machine intelligence
will *never* surpass human intelligence. (Since the super-intelligent
machine will be able to design a still more intelligent machine, and so on
ad infinitum. Quod est absurdum).
------------------------------
From: Paul Edwards <paule@paul-edwards.com>
Date: Sun, 18 May 2025 13:44:47 +1000
Subject: Re: IBM Vibe coding
It's probably worth noting that "vibe" in a legal context had its
earliest documented use in Australia as early as 1997:
https://www.youtube.com/watch?v=nMuh33BMZYY
------------------------------
Date: Sun, 18 May 2025 10:03:49 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: Rogue communication devices found in Chinese solar power inverter
  (RISKS-34.63)
The second URL
https://www.huschblackwell.com/newsandinsights/new-executive-order-prohibits-use-of-equipment-produced-by-foreign-adversaries-in-bulk-power-system
gets a page not found error. The correct URL appears to be:
https://www.huschblackwell.com/newsandinsights/new-executive-order-prohibits-use-of-equipment-produced-by-foreign-adversaries-in-bulk-power-systems
------------------------------
Date: Sun, 18 May 2025 17:50:58 -0400
From: Peter Calingaert <pc@cs.unc.edu>
Subject: Re: Peter's Puns (RISKS-34.63)
Puns make me numb.
Math puns make me number.
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES:
http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.64
************************