• [OT] Why governments must limit AI violations of copyright

    From Rhino@21:1/5 to All on Tue May 27 12:06:34 2025
    Mary Spender presents a relatively brief but, I think, compelling
    argument for why governments need to reject the tech firms' claims that
    using existing works to train AIs is fair use and does not need to be
    paid for.

    https://www.youtube.com/watch?v=R5pFE85oAnA [8 minutes]

    The tech bros are wallowing in almost unimaginable wealth: they can
    definitely afford to compensate copyright holders for using their work
    as training data. Alternatively, they can let copyright holders exclude
    their works from use in training data and compensate them for what they
    have used without permission.

    I don't believe the tech companies have some kind of natural right to
    generate new works that are closely modelled on existing works without
    paying for their use of those works. The new works generated by humans
    are already pretty derivative in too many cases: we don't need AIs
    generating still more of the same.

    There's a wealth of art (whether music, visual art, or literature)
    freely available in the public domain. Let them use that if they need
    large quantities of art to train their models.

    --
    Rhino

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BTR1701@21:1/5 to All on Tue May 27 18:17:37 2025
    On May 27, 2025 at 9:06:34 AM PDT, "Rhino" <no_offline_contact@example.com> wrote:

    > Mary Spender presents a relatively brief but, I think, compelling
    > argument for why governments need to reject the tech firms' claims
    > that using existing works to train AIs is fair use and does not need
    > to be paid for.
    >
    > https://www.youtube.com/watch?v=R5pFE85oAnA [8 minutes]
    >
    > The tech bros are wallowing in almost unimaginable wealth: they can
    > definitely afford to compensate copyright holders for using their work
    > as training data. Alternatively, they can let copyright holders
    > exclude their works from use in training data and compensate them for
    > what they have used without permission.
    >
    > I don't believe the tech companies have some kind of natural right to
    > generate new works that are closely modelled on existing works without
    > paying for their use of those works.

    If you can show that the AI produces a copy of the work it was trained on, or one substantially similar enough as to be confusing to the reasonable man,
    then yes, I agree.

    E.g., if you ask it to generate a story about a young girl who finds herself lost in a fantasy world and it spits out the plot to Alice in Wonderland.

    But if you ask it that same question and it produces a totally different story that isn't Alice in Wonderland in any recognizable way but it learned how to
    do that from 'reading' Alice in Wonderland, then I don't see how you have a copyright violation under existing law or even under the philosophical framework on which existing law has been built. At that point, it's no different from a human reading Alice in Wonderland and figuring out how to use the elements and techniques employed by Carroll in his story to produce a different story of his own. No one would suggest copyright violation if a
    human did it, so how can it suddenly be one if a computer algorithm does it?

    > The new works generated by humans are already pretty derivative in too
    > many cases: we don't need AIs generating still more of the same.

    Well therein lies the rub. At least in America. We call it the Bill of Rights, not the Bill of Needs, for a reason.

    > There's a wealth of art (whether music, visual art, or literature)
    > freely available in the public domain. Let them use that if they need
    > large quantities of art to train their models.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rhino@21:1/5 to All on Tue May 27 15:20:52 2025
    On 2025-05-27 2:17 PM, BTR1701 wrote:
    > If you can show that the AI produces a copy of the work it was trained
    > on, or one substantially similar enough as to be confusing to the
    > reasonable man, then yes, I agree.
    >
    > E.g., if you ask it to generate a story about a young girl who finds
    > herself lost in a fantasy world and it spits out the plot to Alice in
    > Wonderland.
    >
    > But if you ask it that same question and it produces a totally
    > different story that isn't Alice in Wonderland in any recognizable way
    > but it learned how to do that from 'reading' Alice in Wonderland, then
    > I don't see how you have a copyright violation under existing law or
    > even under the philosophical framework on which existing law has been
    > built. At that point, it's no different from a human reading Alice in
    > Wonderland and figuring out how to use the elements and techniques
    > employed by Carroll in his story to produce a different story of his
    > own. No one would suggest copyright violation if a human did it, so
    > how can it suddenly be one if a computer algorithm does it?
    >
    > Well therein lies the rub. At least in America. We call it the Bill of
    > Rights, not the Bill of Needs, for a reason.

    Your points are well taken. Yes, if the AI-generated material isn't recognizable to someone familiar with Alice in Wonderland, it's hard to
    make a case for copyright infringement. And yes, even if *I* don't see a
    need for yet more derivative works, it's not illegal, even if it is
    annoying.

    The challenge is going to come with deciding whether an AI-generated
    work is "too similar" to something it trained on. I expect that
    similarity, like beauty, is in the eye (or ear) of the beholder. Maybe
    a committee will have to do the deciding, and only if a majority of its
    members thinks the similarity is too close will the AI work be labelled
    a copyright infringement. Of course, selecting this committee will be
    challenging, since the tech companies will favour people who never see
    similarities, even between identical things, and the human creators
    will tend to see similarity in everything, because it's in their
    financial interest to find it.

    --
    Rhino

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From anim8rfsk@21:1/5 to atropos@mac.com on Tue May 27 13:25:50 2025
    BTR1701 <atropos@mac.com> wrote:
    > No one would suggest copyright violation if a human did it, so how can
    > it suddenly be one if a computer algorithm does it?

    I just got unfriended and banned and blocked and thrown out of a group
    for posting a meme that the guy claimed was AI. I don't think it was.
    It was the Creature from the Black Lagoon shopping for fish sticks, and
    it looked like a plastic model kit to me. Plus, the group I posted in
    wasn't even this guy's group that he threw me out of. It seemed a bit
    of an overreaction. He says that all AI is pieced together out of
    stolen works.



    --
    The last thing I want to do is hurt you, but it is still on my list.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From moviePig@21:1/5 to Rhino on Tue May 27 17:16:14 2025
    On 5/27/2025 3:20 PM, Rhino wrote:
    > The challenge is going to come with deciding whether an AI-generated
    > work is "too similar" to something it trained on. I expect that
    > similarity, like beauty, is in the eye (or ear) of the beholder. Maybe
    > a committee will have to do the deciding, and only if a majority of
    > its members thinks the similarity is too close will the AI work be
    > labelled a copyright infringement. Of course, selecting this committee
    > will be challenging, since the tech companies will favour people who
    > never see similarities, even between identical things, and the human
    > creators will tend to see similarity in everything, because it's in
    > their financial interest to find it.

    Two ancillary thoughts: Afaics, we're already within reach of such a
    pilfering AI-agent that can be dialed to a desired degree of "distance"
    from the original work it's copying. Meanwhile, whenever a claim of infringement is brought, adjudicating that "distance" sounds like a
    proper and plausible task for a magistrate that is itself an AI.
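
    (As a back-of-the-envelope illustration of what a quantified "distance"
    could look like: the sketch below is not anything proposed in the video
    or this thread; it scores two texts by word-trigram overlap, and every
    name in it, along with the 0.3 threshold, is an arbitrary assumption
    rather than a legal standard.)

        # Minimal sketch: treat "distance" between two texts as word-trigram
        # overlap (Jaccard similarity). The 0.3 threshold is an illustrative
        # assumption, not a legal standard.

        def ngrams(text, n=3):
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

        def similarity(a, b):
            ga, gb = ngrams(a), ngrams(b)
            if not ga or not gb:
                return 0.0
            # 0.0 = no shared trigrams, 1.0 = identical trigram sets
            return len(ga & gb) / len(ga | gb)

        def too_similar(original, candidate, threshold=0.3):
            return similarity(original, candidate) >= threshold

        opening = ("Alice was beginning to get very tired of sitting "
                   "by her sister on the bank")
        near_copy = ("Alice was beginning to get very tired of sitting "
                     "near her sister on the bank")
        new_story = ("A young girl wanders into a strange land and "
                     "meets a talking cat")

        print(similarity(opening, near_copy))   # ~0.63: one changed word
        print(too_similar(opening, new_story))  # False: shared premise only

    (A real adjudicator, human or AI, would of course need far more than
    surface n-gram counts: paraphrase, plot structure, melody, and so on.)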

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BTR1701@21:1/5 to moviePig on Tue May 27 21:43:59 2025
    On May 27, 2025 at 2:16:14 PM PDT, "moviePig" <nobody@nowhere.com> wrote:

    > Two ancillary thoughts: Afaics, we're already within reach of such a
    > pilfering AI-agent that can be dialed to a desired degree of
    > "distance" from the original work it's copying.

    We're at that point with humans, too, and long have been.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From moviePig@21:1/5 to All on Tue May 27 18:18:45 2025
    On 5/27/2025 5:43 PM, BTR1701 wrote:
    >> Two ancillary thoughts: Afaics, we're already within reach of such a
    >> pilfering AI-agent that can be dialed to a desired degree of
    >> "distance" from the original work it's copying. Meanwhile, whenever a
    >> claim of infringement is brought, adjudicating that "distance" sounds
    >> like a proper and plausible task for a magistrate that is itself an
    >> AI.
    >
    > We're at that point with humans, too, and long have been.

    An answer might lie in my second thought (restored above). An AI that
    could detect similarity between a work and its alleged copy might
    provide sufficient proof of infringement. Even though it'd almost
    certainly be somewhat imprecise, that shouldn't concern any truly
    original author.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From BTR1701@21:1/5 to moviePig on Tue May 27 22:31:17 2025
    On May 27, 2025 at 3:18:45 PM PDT, "moviePig" <nobody@nowhere.com> wrote:

    > An answer might lie in my second thought (restored above). An AI that
    > could detect similarity between a work and its alleged copy might
    > provide sufficient proof of infringement. Even though it'd almost
    > certainly be somewhat imprecise, that shouldn't concern any truly
    > original author.

    Again, you'd have to come up with a coherent, legally acceptable reason
    why de minimis similarity with AI would constitute a violation but the
    same similarity in a human-produced work would not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From anim8rfsk@21:1/5 to moviePig on Tue May 27 15:31:48 2025
    moviePig <nobody@nowhere.com> wrote:
    > An answer might lie in my second thought (restored above). An AI that
    > could detect similarity between a work and its alleged copy might
    > provide sufficient proof of infringement.

    The Facebook AI has started adding to people's posts, offering
    additional information. You can't turn it off. It's explaining to
    people how I do my work and what software I do it with, and so far it's
    been 100% wrong.

    --
    The last thing I want to do is hurt you, but it is still on my list.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From moviePig@21:1/5 to All on Tue May 27 22:44:50 2025
    On 5/27/2025 6:31 PM, BTR1701 wrote:
    > Again, you'd have to come up with a coherent, legally acceptable
    > reason why de minimis similarity with AI would constitute a violation
    > but the same similarity in a human-produced work would not.

    I assume that any claim of infringement would be lodged against the
    copy's publisher, irrespective of his source.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From moviePig@21:1/5 to All on Tue May 27 22:49:16 2025
    On 5/27/2025 6:31 PM, anim8rfsk wrote:
    > The Facebook AI has started adding to people's posts, offering
    > additional information. You can't turn it off. It's explaining to
    > people how I do my work and what software I do it with, and so far
    > it's been 100% wrong.

    I'd almost like to see that in action, except that I shun Facebook.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Horny Goat@21:1/5 to All on Sun Jun 1 12:16:27 2025
    On Tue, 27 May 2025 22:49:16 -0400, moviePig <nobody@nowhere.com>
    wrote:

    >> The Facebook AI has started adding to people's posts, offering
    >> additional information. You can't turn it off. It's explaining to
    >> people how I do my work and what software I do it with, and so far
    >> it's been 100% wrong.
    >
    > I'd almost like to see that in action, except that I shun Facebook.

    I use Facebook for 2 reasons and ignore it the rest of the time.

    (1) Now that my parents (and their siblings) are gone, I have a large
    number of cousins spread all over North America (with a couple in
    Europe) who have chosen to keep in touch via FB. (Plus I like the
    pictures a couple of them post, along with those from a local amateur
    historian who does the same.)

    (2) It seems to be the easiest way to use Messenger, which my kids use
    to keep in touch.

    Other than that, I've pretty much ignored Facebook since "memorializing"
    my late wife's account; to be fair, they were first-rate in holding my
    hand through that process. (My wife posted a ton of photos I don't
    otherwise have access to; I really should go through and download the
    key ones.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)