• Re: Paris : In Rush For Profits, AI Safety Issues Are Ignored

    From chrisv@21:1/5 to All on Thu Feb 13 16:36:34 2025
    XPost: comp.os.linux.advocacy

    D wrote:

    Give AI enhanced facial recognition to the cops -- won't that be fun.
    Enter 'Minority Report'.

    They did that in some country, and it always targeted immigrants. The AI
    was judged racist, and the project shut down. It was hilarious! =D

    Well, of course it was shut down. Even though it was correct.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to rbowman on Fri Feb 14 00:24:11 2025
    XPost: comp.os.linux.advocacy

    On 2025-02-13, rbowman <bowman@montana.com> wrote:

    On Thu, 13 Feb 2025 03:50:11 -0500, WokieSux282@ud0s4.net wrote:

    The "a few mistakes are OK" logic WILL be applied.

    It's often sanitized with that lovely phrase "collateral damage".

    During my first exam with my current primary about 20 years ago she
    offered a PSA test but explained that there are a lot of false positives
    that scare the hell out of people. She would order the test if I wanted or
    we could go the traditional route. I passed on the test.

    I went the other way. I had been getting a DRE (digital rectal exam,
    a.k.a. the finger) every year for 10 years with negative results.
    My wife suggested a PSA, and I figured I could look at a number
    without freaking out. The result came back 20 (where 4 is considered
    cause for concern). I calmly asked for another test. It came out
    the same, making it less likely it was a false positive. Next step
    was a biopsy (8 on the Gleason scale), which led to a radical
    prostatectomy. If I had opted for blissful ignorance I'd probably
    be dead by now.

    At this point I've probably reached the status of men who die with, but
    not from, prostate cancer.

    Me too - but I'm still watching my PSA. It started slowly creeping
    up again, but a round of hormone therapy knocked it back down.
    Gotta keep weeding the garden...

    Please, guys, if you're over 50, get a PSA test.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From D@21:1/5 to The Natural Philosopher on Fri Feb 14 09:45:46 2025
    XPost: comp.os.linux.advocacy

    On Fri, 14 Feb 2025, The Natural Philosopher wrote:

    On 14/02/2025 00:24, Charlie Gibbs wrote:
    Please, guys, if you're over 50, get a PSA test.
    Every time I have a test they come up with yet another incurable condition.

    If I were a gambling man, I'd take bets on which one is going to kill me first.

    And yet, I seem to be still here...

    You are a tough cookie! =)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to nospam@example.net on Fri Feb 14 10:30:06 2025
    XPost: comp.os.linux.advocacy

    On Fri, 14 Feb 2025 09:50:34 +0100, D <nospam@example.net> wrote in <67e55c72-9634-7568-8a35-034bd127c016@example.net>:

    On Fri, 14 Feb 2025, WokieSux282@ud0s4.net wrote:

    On 2/13/25 4:07 PM, D wrote:


    On Thu, 13 Feb 2025, WokieSux282@ud0s4.net wrote:

    On 2/13/25 2:10 AM, rbowman wrote:
    On Wed, 12 Feb 2025 22:58:30 -0500, WokieSux282@ud0s4.net wrote:

    Neural networks can likely do "someone in there" even better,
    eventually. At the moment LLMs get most of the funding so NNs are a
    bit behind the curve. New/better hardware and paradigms are needed
    but WILL eventually arrive.

    So far there is nobody in there for CNNs. You know all the pieces and they
    don't magically start breathing when you put them together. It is true the
    whole system is a bit of a black box but it is describable.


     Well, I agree about "CNNs"  :-)

     As for LLMs ... dunno. Get enough stuff going there and
     something very hard, maybe impossible, to distinguish
     from "someone in there" may be realized. Then what do
     we do - ruthlessly pull the plug ?

    Nope. Volition, will to live, drive, goals are completely missing. The best
    trick to find out if you're talking with an AI is to write nothing. A human
    will write "hello" after a few seconds. The AI will just sit there waiting
    for input.

    Yes... those things can be hardcoded, but what would make me impressed is
    when spontaneous behaviour, motivation, will to live emerges on its own,
    without being hard coded or simulated through logic.

    Then we're talking AI!

    I am not beyond thinking LLMs will eventually, maybe
    kinda soon, exhibit 'conscious', 'self-realized'
    intelligence. The complexity increases apace. At
    SOME point...

    I do not think LLMs will reach consciousness, looking at the technology,
    training data and how they work. I see them as a potential "language
    center" of the brain of an AGI.

    But how do we KNOW and what do we DO about it ?

    We look at the effects. That's all we can do.

    Those are the HARD questions.

    Yes! Let me welcome you to alt.philosophy! =) Hard and interesting
    questions!

    There's also comp.ai.philosophy ...

    --
    -v System76 Thelio Mega v1.1 x86_64 NVIDIA RTX 3090 Ti
    OS: Linux 6.14.0-rc2 Release: Mint 22.1 Mem: 258G
    "These are only my opinions. You should see my convictions."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)