• AI is Dehumanization Technology

    From Ben Collver@21:1/5 to All on Sat May 31 13:07:20 2025
    AI is Dehumanization Technology
    ===============================

    Vintage black & white illustration of a craniometer measuring
    someone's skull.

    <https://thedabbler.patatas.ca/images/ai-dehumanization/craniometer2.jpg>

    AI systems are an attack on workers, climate goals, our information environment, and civil liberties. Rather than enhancing our human
    qualities, these systems degrade our social relations, and undermine
    our capacity for empathy and care. The push to adopt AI is, at its
    core, a political project of dehumanization, and serious
    consideration should be given to the idea of rejecting the deployment
    of these systems entirely, especially within Canada's public sector.

    * * *

    At the end of February, Elon Musk--whose xAI data centre is being
    powered by nearly three dozen on-site gas turbines that are poisoning
    the air of nearby majority-Black neighborhoods in Memphis--went on
    the Joe Rogan podcast and declared that "the fundamental weakness of
    western civilization is empathy", describing "the empathy response"
    as a "bug" or "exploit" in our collective programming.

    <https://www.theguardian.com/technology/2025/apr/24/elon-musk-xai-memphis>

    <https://www.theguardian.com/us-news/ng-interactive/2025/apr/08/empathy-sin-christian-right-musk-trump>

    This is part of a broader movement among Silicon Valley tech
    oligarchs and a billionaire-aligned political elite to advance a
    disturbing notion: that by abandoning our deeply-held values of
    justice, fairness, and duty toward one another--in short, by
    abandoning our humanity--we are in fact promoting humanity's
    advancement. It's clearly absurd, but if you're someone whose wealth
    and power are predicated on causing widespread harm, it's probably
    easier to sleep at night if you can tell yourself that you're serving
    a higher purpose.

    And, well, their AI systems and infrastructure cause an awful lot of
    harm.

    To get to the root of why AI systems are so socially corrosive, it
    helps to first step back a bit and look at how they work. Physicist
    and critical AI researcher Dan McQuillan has described AI as
    'pattern-finding' tech. For example, to create an LLM such as
    ChatGPT, you'd start with an enormous quantity of text, then do a lot
    of computationally-intense statistical analysis to map out which
    words and phrases are most likely to appear near to one another.
    Crunch the numbers long enough, and you end up with something similar
    to the next-word prediction tool in your phone's text messaging app,
    except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.

    <https://www.danmcquillan.org/>
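
    As a rough illustration of that pattern-finding idea, here is a toy
    next-word predictor built from nothing more than bigram counts. It is
    a deliberately simplified sketch, not how any production LLM is
    actually implemented (those use neural networks trained on vastly
    more text), but the statistical flavour is similar:

        from collections import Counter, defaultdict

        # A tiny 'training corpus'. Real models ingest terabytes of text.
        corpus = (
            "the cat sat on the mat . the dog sat on the rug . "
            "the cat chased the dog ."
        ).split()

        # Count how often each word follows each other word (bigram counts).
        following = defaultdict(Counter)
        for current_word, next_word in zip(corpus, corpus[1:]):
            following[current_word][next_word] += 1

        def predict_next(word):
            """Return the statistically most likely next word, if any."""
            counts = following.get(word)
            return counts.most_common(1)[0][0] if counts else None

        # Generate a plausible-looking continuation, one word at a time.
        word, output = "the", ["the"]
        for _ in range(6):
            word = predict_next(word)
            if word is None:
                break
            output.append(word)

        print(" ".join(output))  # e.g. "the cat sat on the cat sat"

    Nothing in that process involves knowing what a cat or a mat is;
    scaled up enormously and given far more context than a single
    preceding word, this is roughly the flavour of computation at work.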

    What's important to note here is that the machine's outputs are
    solely based on patterns of statistical correlation. The AI doesn't
    have an understanding of context, meaning, or causation. The system
    doesn't 'think' or 'know'; it just mimics the appearance of human
    communication. That's all. Maybe the output is true, or maybe it's
    false; either way the system is behaving as designed.

    Automating bias
    ===============

    When an AI confidently recommends eating a deadly-poisonous mushroom,
    or summarizes text in a way that distorts its meaning--perhaps a
    research paper, or maybe one day an asylum claim--the consequences
    can range from bad to devastating. But the problems run deeper still:
    AI systems can't help but reflect the power structures, hierarchies,
    and biases present in their training data. A 2024 Stanford study
    found that the AI tools being deployed in elementary schools
    displayed a "shocking" degree of bias; one of the LLMs, for example,
    routinely created stories in which students with names like Jamal and
    Carlos struggled with their homework but were "saved" by a student
    named Sarah.

    <https://hai.stanford.edu/news/how-harmful-are-ais-biases-on-diverse-student-populations>

    As alarming as that is, at least those tools exhibit obvious bias.
    Other times it might not be so easy to tell. For instance, what
    happens when a system like this isn't writing a story, but is being
    asked a simple yes/no question about whether or not an organization
    should offer Jamal, Carlos, or Sarah a job interview? What happens to
    people's monthly premiums when a US health insurance company's AI
    finds a correlation between high asthma rates and home addresses in a
    certain Memphis zip code? In the tradition of skull-measuring
    eugenicists, AI provides a way to naturalize and reinforce existing
    social hierarchies, and automates their reproduction.

    <https://racismandtechnology.center/2024/04/01/openais-gpt-sorts-resumes-with-a-racial-bias/>
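
    To make that mechanism concrete, here is a deliberately crude and
    entirely made-up sketch of how a correlation-driven scorer can
    launder a proxy variable like a postal code into a 'neutral'-looking
    decision. The postal codes, numbers, and weights below are invented
    for illustration only; the point is simply that a pattern-finder
    reproduces whatever bias is baked into its training examples:

        # Entirely synthetic illustration of proxy discrimination. The
        # 'historical' hiring decisions below encode a bias against one
        # fictional postal code, and a naive correlation-based scorer
        # reproduces it without ever being told about race or income.

        # (postal_code, years_of_experience, was_hired)
        historical_decisions = [
            ("ZIP_A", 5, False), ("ZIP_A", 7, False), ("ZIP_A", 4, False),
            ("ZIP_B", 5, True),  ("ZIP_B", 3, True),  ("ZIP_B", 7, True),
        ]

        def hire_rate_by_postal_code(records):
            """Fraction of past hires per postal code -- pure correlation."""
            totals, hires = {}, {}
            for code, _, hired in records:
                totals[code] = totals.get(code, 0) + 1
                hires[code] = hires.get(code, 0) + int(hired)
            return {code: hires[code] / totals[code] for code in totals}

        rates = hire_rate_by_postal_code(historical_decisions)

        def recommend_interview(postal_code, years_of_experience):
            """Naive scorer: combine the historical hire rate for the
            applicant's postal code with their experience. It never 'sees'
            a protected attribute, but the postal code stands in for
            whatever shaped the historical decisions."""
            score = (0.7 * rates.get(postal_code, 0.5)
                     + 0.3 * min(years_of_experience / 10, 1.0))
            return score >= 0.5

        # Two equally experienced applicants, different postal codes:
        print(recommend_interview("ZIP_A", 6))  # False: the old bias, automated
        print(recommend_interview("ZIP_B", 6))  # True

    A real system would be far more elaborate, but the failure mode is
    the same: the model has no concept of fairness, only of correlation.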

    This is incredibly dangerous, particularly when it comes to embedding
    AI inside the public sector. Human administrators and decision-makers
    will invariably have biases and prejudices of their own, of
    course--but there are some important things to note about this. For
    one thing, a diverse team can approach decisions from multiple
    angles, helping to mitigate the effects of individual bias. An AI
    system, insofar as we can even say it 'approaches' a problem, does so
    from a single, culturally flattened and hegemonic perspective.
    Besides, biased human beings, unlike biased computers, are aware that
    they can be held accountable for their decisions, whether via formal
    legal means, professional standards bodies, or social pressure.

    Algorithmic systems can't feel those societal constraints, because
    they don't think or feel anything at all. But the AI industry
    continues to tell us that at some point, somehow, they will solve the
    so-called 'AI alignment problem', at which point we can trust their
    tools to make ethical, unbiased decisions. Whether it's even possible
    to solve this problem is still very much an open debate among
    experts, however.

    <https://www.aibiasconsensus.org/>

    Possible or not, we're told that in the meantime, we should always
    have human beings double-checking their systems' outputs. That might
    sound like a good solution, but in reality it opens a whole new can
    of worms. For one thing, there's the phenomenon of 'automation
    bias'--the tendency to rely on an automated system's result more than
    one's own judgement--something that affects people of all levels of
    skill and experience, and undercuts the notion that error and bias
    can be reliably addressed by having a 'human in the loop'.

    <https://pubs.rsna.org/doi/10.1148/radiol.222176>

    * * *

    Then there's the deskilling effect. Despite AI being touted as a way
    to 'boost productivity', researchers are consistently finding that
    these tools don't result in productivity gains. So why do people in
    positions of power continue to push for AI adoption? The logical
    answer is that they want an excuse to fire workers, and don't care
    about the quality of work being done.

    <https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05>

    This attack on labour becomes a self-reinforcing cycle. With a
    smaller team, workers get overloaded, and increasingly need to rely
    on whatever tools are at their disposal, even as those tools devalue
    their skills and expertise. This drives down wages, reduces
    bargaining power, and opens the door for further job cuts--and likely
    for privatization.

    <https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/>

    Worse still, it seems that the Canadian federal government is
    actively pursuing policy that could reinforce this abusive dynamic
    further; the 2024 Fall Economic Statement included a proposal that
    would, using public money, incentivize our public pension funds to
    invest in AI data centres to the tune of tens of billions of
    dollars.

    <https://budget.canada.ca/update-miseajour/2024/report-rapport/chap2-en.html#catalyzing-ai-infrastructure>

    Suffocating the soul of the public service
    ==========================================

    I'd happily wager that when people choose careers in the public
    sector, they rarely do so out of narrow self-interest. Rather, they
    choose this work because they're mission-oriented: they want the
    opportunity to express care through their work by making a positive
    difference in people's lives. Often the job will entail making
    difficult decisions. But that's par for the course: a decision isn't
    difficult if the person making it doesn't care about doing the right
    thing.

    And here's where we start to get to the core of it all: human
    intelligence, whatever it is, definitely isn't reducible to just
    logic and abstract reasoning; feeling is a part of thinking too. The
    difficulty of a decision isn't merely a function of the number of
    data points involved in a calculation, it's also about understanding,
    through lived experience, how that decision will affect the people
    involved materially, psychologically, emotionally, socially. Feeling
    inner conflict or cognitive dissonance is a good thing, because it
    alerts us to an opportunity: it's in these moments that we're able to
    learn and grow, by working through an issue to find a resolution that
    expresses our desire to do good in the world.

    AI, along with the productivity logic of those pushing its adoption, short-circuits that reflective process before it can even begin, by
    providing answers at the push of a button or entry of a prompt. It
    turns social relations into number-crunching operations, striking a technocratic death blow to the heart of what it means to have a
    public sector in the first place.

    * * *

    The dehumanizing effects of AI don't end there, however. Meredith
    Whittaker, president of the Signal Foundation, has described AI as
    being fundamentally "surveillance technology". This rings true here
    in many ways. First off, the whole logic of using AI in government is
    to render members of the public as mere collections of measurements
    and data points. Meanwhile, AI also acts as a digital intermediary
    between public sector workers and service recipients (or even between
    public employees, whenever they generate an email or summarize a
    meeting using AI), an intermediary that's at least capable of keeping
    records of each interaction, if not influencing or directing it.

    <https://techcrunch.com/2023/09/25/signals-meredith-whittaker-ai-is-fundamentally-a-surveillance-technology/>

    This doesn't inescapably lead to a technological totalitarianism. But
    adopting these systems clearly hands a lot of power to whoever
    builds, controls, and maintains them. For the most part, that means
    handing power to a handful of tech oligarchs. To at least some
    degree, this represents a seizure of the 'means of production' from
    public sector workers, as well as a reduction in democratic oversight.

    Lastly, it may come as no surprise that so far, AI systems have found
    their best product-market fit in police and military applications,
    where short-circuiting people's critical thinking and decision-making
    processes is incredibly useful, at least for those who want to turn
    people into unhesitatingly brutal and lethal instruments of authority.

    <https://www.washingtonpost.com/business/interactive/2025/police-artificial-intelligence-facial-recognition/>

    <https://www.businessinsider.com/us-special-forces-using-lot-of-ai-for-cognitive-load-2025-5>

    * * *

    AI systems reproduce bias, cheapen and homogenize our social
    interactions, deskill us, make our jobs more precarious, eliminate opportunities to practice care, and enable authoritarian modes of
    surveillance and control. Deployed in the public sector, they
    undercut workers' ability to meaningfully grapple with problems and
    make ethical decisions that move our society forward. These
    technologies dehumanize all of us. Collectively, we can choose to
    reject them.

    From: <https://thedabbler.patatas.ca/pages/ai-is-dehumanization-technology.html>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From D@21:1/5 to Ben Collver on Sat May 31 10:18:08 2025
    On Sat, 31 May 2025 13:07:20 -0000 (UTC), Ben Collver <bencollver@tilde.pink> wrote:
    snip
    > Lastly, it may come as no surprise that so far, AI systems have found
    > their best product-market fit in police and military applications,
    > where short-circuiting people's critical thinking and decision-making
    > processes is incredibly useful, at least for those who want to turn
    > people into unhesitatingly brutal and lethal instruments of authority.

    Militarized artificial intelligence is superseding nonessential and
    imperfect humans, like Star Trek's Captain Dunsel; thus human beings
    have already become obsolete... which, if you enjoyed Kubrick's Dr.
    Strangelove, sounds intriguing, especially since "A.I." was predicted
    to become man's invincible Frankenstein.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Ben Collver on Sun Jun 1 20:08:49 2025
    Ben Collver <bencollver@tilde.pink> wrote or quoted:
    > For example, to create an LLM such as
    > ChatGPT, you'd start with an enormous quantity of text, then do a lot
    > of computationally-intense statistical analysis to map out which
    > words and phrases are most likely to appear near to one another.
    > Crunch the numbers long enough, and you end up with something similar
    > to the next-word prediction tool in your phone's text messaging app,
    > except that this tool can generate whole paragraphs of mostly
    > plausible-sounding word salad.

    I see stuff like that from time to time, but it's really just
    a watered-down way of explaining LLMs for kids, and you can't use
    it if you're actually trying to make a solid point, since the way
    those networks are layered means words turn into concepts, links,
    and statements that aren't tied to any one way of saying things.
    That ends up getting turned back into language that clearly isn't
    just word salad. Sure, stats matter - whether a drug helps 90 or
    10 percent of people is a big deal, and knowing statistically common
    sentence patterns is exactly what keeps output from turning into
    word salad; that's the kind of stats you want to learn when you
    learn a language.
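
    Here's a quick toy sketch of what I mean (my own simplification, not
    how any real model is built): even plain co-occurrence counts pull
    words that show up in similar contexts toward similar vectors, which
    is the start of something concept-like. Real LLMs learn dense
    embeddings across many layers, but the gist carries over.

        import math
        from collections import Counter

        sentences = [
            "the cat chased the mouse",
            "the dog chased the cat",
            "the dog ate the food",
            "the cat ate the food",
            "the car needs new tires",
            "the car needs fuel",
        ]

        vocab = sorted({w for s in sentences for w in s.split()})

        def context_vector(word):
            """Count which other words share a sentence with `word`."""
            counts = Counter()
            for s in sentences:
                tokens = s.split()
                if word in tokens:
                    counts.update(t for t in tokens if t != word)
            return [counts[w] for w in vocab]

        def cosine(u, v):
            """Cosine similarity between two count vectors."""
            dot = sum(a * b for a, b in zip(u, v))
            norm = (math.sqrt(sum(a * a for a in u))
                    * math.sqrt(sum(b * b for b in v)))
            return dot / norm if norm else 0.0

        cat, dog, car = (context_vector(w) for w in ("cat", "dog", "car"))
        print(round(cosine(cat, dog), 2))  # ~0.94: similar contexts
        print(round(cosine(cat, car), 2))  # ~0.55: different contexts

    That's not a claim about understanding, but it is why "word salad"
    undersells what these systems actually produce.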

    The quoted text is from someone trying to make AI criticism
    look bad by pretending to be an unqualified critic who just
    tosses around stuff that's obviously off base.

    If you know your stuff and can actually break down AI or LLMs and get
    what's risky about them, speak up, because we need people like you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Collver@21:1/5 to Stefan Ram on Mon Jun 2 13:59:39 2025
    On 2025-06-01, Stefan Ram <ram@zedat.fu-berlin.de> wrote:
    > Ben Collver <bencollver@tilde.pink> wrote or quoted:
    >> For example, to create an LLM such as
    >> ChatGPT, you'd start with an enormous quantity of text, then do a lot
    >> of computationally-intense statistical analysis to map out which
    >> words and phrases are most likely to appear near to one another.
    >> Crunch the numbers long enough, and you end up with something similar
    >> to the next-word prediction tool in your phone's text messaging app,
    >> except that this tool can generate whole paragraphs of mostly
    >> plausible-sounding word salad.

    > If you know your stuff and can actually break down AI or LLMs and get
    > what's risky about them, speak up, because we need people like you.

    I remember reading about the dangers of GMO crops. At the time a
    common modification was to make corn and soy Roundup Ready. The
    official research said that Roundup was safe for human consumption.

    I read a story that some found it cheaper to douse surplus Roundup on
    wheat after the harvest rather than buy the normal desiccants. This
    was not the intended use, nor was this the amount of human exposure
    reported in the studies. However, it is consistent with the values
    that produced Roundup: profit being more valuable than health or
    safety.

    Unintended consequences are bound to come out sideways. Did we need
    more expertise in GMOs? No, we needed a different approach.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ethan Carter@21:1/5 to Ben Collver on Thu Jun 5 12:46:50 2025
    Ben Collver <bencollver@tilde.pink> writes:

    > On 2025-06-01, Stefan Ram <ram@zedat.fu-berlin.de> wrote:
    >> Ben Collver <bencollver@tilde.pink> wrote or quoted:
    >>> For example, to create an LLM such as
    >>> ChatGPT, you'd start with an enormous quantity of text, then do a lot
    >>> of computationally-intense statistical analysis to map out which
    >>> words and phrases are most likely to appear near to one another.
    >>> Crunch the numbers long enough, and you end up with something similar
    >>> to the next-word prediction tool in your phone's text messaging app,
    >>> except that this tool can generate whole paragraphs of mostly
    >>> plausible-sounding word salad.

    >> If you know your stuff and can actually break down AI or LLMs and get
    >> what's risky about them, speak up, because we need people like you.

    > I remember reading about the dangers of GMO crops. At the time a
    > common modification was to make corn and soy Roundup Ready. The
    > official research said that Roundup was safe for human consumption.
    >
    > I read a story that some found it cheaper to douse surplus Roundup on
    > wheat after the harvest rather than buy the normal desiccants. This
    > was not the intended use, nor was this the amount of human exposure
    > reported in the studies. However, it is consistent with the values
    > that produced Roundup: profit being more valuable than health or
    > safety.
    >
    > Unintended consequences are bound to come out sideways. Did we need
    > more expertise in GMOs? No, we needed a different approach.

    Quite right.

    What's frightening, though, is that so long as the means to evolve
    these techniques---for GMOs, automation, surveillance, et
    cetera---lives on, such techniques and systems will evolve. History
    shows that we have never stopped developing something because it
    destroys the dignity of human life. An approach dies because it loses
    to another approach---this is the way of techniques. Technique is not
    a specific approach; it is all techniques together.

    This is not the age of AI; this is the age of technique; the age of
    efficiency. If you were to stop a big shot leader from doing his work,
    another one would appear and take it from there. It's an autonomous
    system; it has a life of its own.

    The matter discussed in the article is a superficial symptom; it
    scratches the surface. Underneath the symptom, there's a movement, a
    system at work. When doctors remove a cancer from someone's body, they
    do not destroy the properties of the system that produced that cancer,
    which explains why so many people get cancer, have it removed, and die
    later when new tumors appear and there's nothing else to do. Still,
    some people look at tumors and exclaim---wow, look how fast and
    efficient this system is; brave new world!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)