Mary Spender presents a relatively brief but, I think, compelling
argument for why governments need to reject the tech firms' claims that
using existing works to train AIs is fair use and does not need to be
paid for.
https://www.youtube.com/watch?v=R5pFE85oAnA [8 minutes]
The tech bros are wallowing in almost unimaginable wealth: they can
definitely afford to compensate copyright holders for using their work
as training data. Alternatively, they can let copyright holders exclude
their works from use in training data and compensate them for what they
have used without permission.
I don't believe the tech companies have some kind of natural right to
generate new works that are closely modelled on existing works without
paying for their use of those works.
The new works generated by humans are already pretty derivative in too
many cases: we don't need AIs
generating still more of the same.
There's a wealth of art (whether music, visual art, or literature)
freely available in the public domain. Let them use that if they need
large quantities of art to train their models.
On May 27, 2025 at 9:06:34 AM PDT, "Rhino" <no_offline_contact@example.com> wrote:
> I don't believe the tech companies have some kind of natural right to
> generate new works that are closely modelled on existing works without
> paying for their use of those works.
If you can show that the AI produces a copy of the work it was trained
on, or one substantially similar enough as to be confusing to the
reasonable man, then yes, I agree. E.g., if you ask it to generate a
story about a young girl who finds herself lost in a fantasy world and
it spits out the plot to Alice in Wonderland.

But if you ask it that same question and it produces a totally different
story that isn't Alice in Wonderland in any recognizable way but it
learned how to do that from 'reading' Alice in Wonderland, then I don't
see how you have a copyright violation under existing law or even under
the philosophical framework on which existing law has been built. At
that point, it's no different from a human reading Alice in Wonderland
and figuring out how to use the elements and techniques employed by
Carroll in his story to produce a different story of his own. No one
would suggest copyright violation if a human did it, so how can it
suddenly be one if a computer algorithm does it?
> The new works generated by humans are already pretty derivative in too
> many cases: we don't need AIs generating still more of the same.
Well therein lies the rub. At least in America. We call it the Bill of Rights,
not the Bill of Needs, for a reason.
On 2025-05-27 2:17 PM, BTR1701 wrote:
> No one would suggest copyright violation if a human did it, so how can
> it suddenly be one if a computer algorithm does it?

Your points are well taken. Yes, if the AI-generated material isn't
recognizable to someone familiar with Alice in Wonderland, it's hard to
make a case for copyright infringement. And yes, even if *I* don't see a
need for yet more derivative works, it's not illegal, even if it is
annoying.
The challenge is going to come with deciding if an AI-generated work is
"too similar" to something it trained on. I expect that similarity, like
beauty, is in the eye (or ear) of the beholder. Maybe a committee will
have to do the deciding, and only if a majority of its members thinks
the similarity is too close will the AI work be labelled a copyright
infringement. Of course, selection of this committee will be challenging,
since the tech companies are going to favour people that don't ever see
similarities even in identical things, and the human creators will tend
to see similarity in everything because it's in their financial interest
to find similarity.
On 5/27/2025 3:20 PM, Rhino wrote:
> The challenge is going to come with deciding if an AI-generated work is
> "too similar" to something it trained on. I expect that similarity,
> like beauty, is in the eye (or ear) of the beholder.
Two ancillary thoughts: Afaics, we're already within reach of such a
pilfering AI-agent that can be dialed to a desired degree of "distance"
from the original work it's copying. Meanwhile, whenever a claim of
infringement is brought, adjudicating that "distance" sounds like a
proper and plausible task for a magistrate that is itself an AI.
On May 27, 2025 at 2:16:14 PM PDT, "moviePig" <nobody@nowhere.com> wrote:
> Two ancillary thoughts: Afaics, we're already within reach of such a
> pilfering AI-agent that can be dialed to a desired degree of "distance"
> from the original work it's copying.
We're at that point with humans, too, and long have been.
On 5/27/2025 5:43 PM, BTR1701 wrote:
>> Two ancillary thoughts: Afaics, we're already within reach of such a
>> pilfering AI-agent that can be dialed to a desired degree of "distance"
>> from the original work it's copying. Meanwhile, whenever a claim of
>> infringement is brought, adjudicating that "distance" sounds like a
>> proper and plausible task for a magistrate that is itself an AI.
>
> We're at that point with humans, too, and long have been.
An answer might lie in my second thought (restored above). An AI that
could detect similarity between a work and its alleged copy might be
sufficient proof of infringement. Even though it'd almost certainly be
somewhat imprecise, that shouldn't concern any truly original author.
On May 27, 2025 at 3:18:45 PM PDT, "moviePig" <nobody@nowhere.com> wrote:
> An AI that could detect similarity between a work and its alleged copy
> might be sufficient proof of infringement. Even though it'd almost
> certainly be somewhat imprecise, that shouldn't concern any truly
> original author.
Again, you'd have to come up with a coherent, legally acceptable reason
why de minimis similarity with AI would constitute a violation but the
same similarity in a human-produced work would not.
moviePig <nobody@nowhere.com> wrote:
> An AI that could detect similarity between a work and its alleged copy
> might be sufficient proof of infringement. Even though it'd almost
> certainly be somewhat imprecise, that shouldn't concern any truly
> original author.
Facebook's AI has started adding to people's posts, offering additional
information. You can't turn it off. It's explaining to people how I do
my work and what software I do it with, and so far it's been 100% wrong.
> Facebook's AI has started adding to people's posts, offering additional
> information. You can't turn it off. It's explaining to people how I do
> my work and what software I do it with, and so far it's been 100% wrong.
Except that I shun Facebook, I'd almost like to see that in action.