• Google’s AI Is Churning Out a Deluge of Completely…

    From bfh@21:1/5 to All on Mon May 27 14:35:02 2024
    ------------------------------------------------------
    Google's AI search, which swallows up web results and delivers them to
    users in a regurgitated package, delivers each of its AI-paraphrased
    answers to user queries in a concise, coolly confident tone. Just one
    tiny problem: it's wrong. A lot.
    ------------------------------------------------------
    https://futurism.com/google-ai-inaccurate-garbage

    I allege that Google's AI - and other AIs - could significantly
    improve their accuracy if they were prohibited from learning anything
    from Quora, Reddit, and probably most of the rest of social media.
    Failing that, maybe the developer geniuses could create a front-end
    gatekeeper AI to do some fact-checking before the main AI blathers its
    'expert' responses.
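    [A "gatekeeper" like the one proposed above could be sketched, very
    loosely, as a two-stage pipeline: filter social-media sources out of
    retrieval, then hold back any draft answer a fact-check pass won't
    vouch for. Every name, threshold, and function below is hypothetical,
    for illustration only - no real AI product works exactly this way.]

```python
# Hypothetical sketch of the "front-end gatekeeper" idea from the post
# above. All names and the 0.8 threshold are made up for illustration.

BLOCKED_DOMAINS = {"reddit.com", "quora.com", "facebook.com"}

def filter_sources(documents):
    """Drop retrieved documents that come from blocked social-media domains."""
    return [d for d in documents if d["domain"] not in BLOCKED_DOMAINS]

def gatekeeper(query, draft_answer, fact_checker):
    """Release the main model's draft only if the fact-check pass approves it.

    `fact_checker` stands in for some verification model that scores the
    draft against the query, returning a confidence in [0, 1].
    """
    confidence = fact_checker(query, draft_answer)
    if confidence < 0.8:  # arbitrary cutoff for this sketch
        return "I'm not confident enough to answer that."
    return draft_answer

# Example with stub data and a stub fact-checker:
docs = [{"domain": "reddit.com", "text": "eat one small rock a day"},
        {"domain": "nih.gov", "text": "peer-reviewed guidance"}]
print([d["domain"] for d in filter_sources(docs)])        # ['nih.gov']
print(gatekeeper("q", "draft", lambda q, a: 0.1))          # falls back
```

    [The point of the sketch is only the shape of the pipeline: the hard,
    unsolved part is the `fact_checker` itself.]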

    What I further allege is that I hope medical professionals do not -
    and will not - rely too heavily on these damthings for medical
    diagnoses and treatments.

    --
    bill
    Theory don't mean squat if it don't work.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George.Anthony@21:1/5 to bfh on Mon May 27 21:16:31 2024
    bfh <redydog@rye.net> wrote:
    > ------------------------------------------------------
    > Google's AI search, which swallows up web results and delivers them to
    > users in a regurgitated package, delivers each of its AI-paraphrased
    > answers to user queries in a concise, coolly confident tone. Just one
    > tiny problem: it's wrong. A lot.
    > ------------------------------------------------------
    > https://futurism.com/google-ai-inaccurate-garbage
    >
    > I allege that Google's AI - and other AIs - could significantly
    > improve their accuracy if they were prohibited from learning anything
    > from Quora, Reddit, and probably most of the rest of social media.
    > Failing that, maybe the developer geniuses could create a front-end
    > gatekeeper AI to do some fact-checking before the main AI blathers its
    > 'expert' responses.
    >
    > What I further allege is that I hope medical professionals do not -
    > and will not - rely too heavily on these damthings for medical
    > diagnoses and treatments.


    Just one more way to infect the populace with woke-ass stupidity.

    --
    Biden doesn’t need a cognitive test… his voters do.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Van Pelt@21:1/5 to redydog@rye.net on Wed May 29 16:44:56 2024
    In article <rn45O.43762$In22.8003@fx11.iad>, bfh <redydog@rye.net> wrote:
    >What I further allege is that I hope medical professionals do not -
    >and will not - rely too heavily on these damthings for medical
    >diagnoses and treatments.

    Depends on the training set. If the medical "simulated
    intelligence" is trained on actual medical diagnoses, including
    all the "what we found out later that we missed on first
    examination" cases, it looks like it can be really valuable,
    at least as first-line screening. Some of the studies seem
    to show it can be pretty spectacular at picking up exactly
    those missed cases.

    Keep it way, way, WAY away from Reddit, FaceBook, and all
    those "Doctors hate this!" "Kill your belly fat!" clickbaity
    quackpottery ads that most news web pages are flooded with.
    --
    Mike Van Pelt | "I don't advise it unless you're nuts."
    mvp at calweb.com | -- Ray Wilkinson, after riding out Hurricane
    KE6BVH | Ike on Surfside Beach in Galveston

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)