Unmoderated groups are spammed so much that many have become unusable
and unused.

Is it already (or in the near future) possible to construct an AI
that could moderate a discussion group so that the amount of
off-topic messages stays acceptable but acceptable messages are not
rejected too often?

For your reference, records indicate that
Mikko <mikko.levanto@iki.fi> wrote:

> Unmoderated groups are spammed so much that many have become unusable
> and unused.

If you’re talking about Usenet itself, I would dispute that premise.
There are plenty of online forums that are still used despite being
full of spam; I could even argue that the sum total of social media
exists *to* be a channel for spam, and that’s where the bulk of Usenet
traffic has gone. Network effects are a better explanation for why
nobody goes where nobody goes.

> Is it already (or in the near future) possible to construct an AI
> that could moderate a discussion group so that the amount of
> off-topic messages stays acceptable but acceptable messages are not
> rejected too often?

It has been possible to stop spam for decades, and no AI is required
to do it. It doesn’t even require natural language processing of
message content! Spam (and other forms of abuse) has a source, and
using that metadata to block bad actors is all that is required to
stop the abuse. The problem is that, if you do said analysis, you’ll
quickly discover that the source of abuse turns out to be the same
“too big to fail” companies that exploit network effects for their own
benefit. For Usenet, that means Google Groups; if you have the courage
to acknowledge Google is a hostile actor, cut them off and you’ll
eliminate 90% of the spam on Usenet.
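
A minimal sketch of this kind of source-based filtering, in Python. It
inspects only the article’s injection metadata, never the body;
Injection-Info and Path are standard Netnews headers (RFC 5536), but
the blocked host names below are illustrative assumptions, not a
vetted blocklist.

# Sketch: reject an article based on where it entered the network,
# using header metadata alone. Host patterns are assumptions for
# illustration, not a complete or vetted blocklist.

from email.parser import Parser

BLOCKED_SOURCES = (
    "groups.google.com",    # Google Groups injection (illustrative)
    "postnews.google.com",  # historical Google entry host (illustrative)
)

def should_reject(raw_article: str) -> bool:
    """Return True if the article was injected by a blocked source."""
    headers = Parser().parsestr(raw_article, headersonly=True)
    # Injection-Info and Path both record the article's entry point.
    source = (headers.get("Injection-Info", "") + " " +
              headers.get("Path", "")).lower()
    return any(host in source for host in BLOCKED_SOURCES)

# Example: an article relayed through a blocked entry host is dropped.
spam = ("Path: news.example!postnews.google.com!not-for-mail\n"
        "From: someone@example.com\n"
        "\n"
        "Buy now!")
print(should_reject(spam))  # True

Note that no message content is analyzed at all, which is the point
being made above: the filter keys purely on metadata about the source.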

On 2022-10-23 14:01:00 +0000, Doc O'Leary said:

> For your reference, records indicate that
> Mikko <mikko.levanto@iki.fi> wrote:
>
>> Unmoderated groups are spammed so much that many have become
>> unusable and unused.
>
> If you’re talking about Usenet itself, I would dispute that premise.
> There are plenty of online forums that are still used despite being
> full of spam; I could even argue that the sum total of social media
> exists *to* be a channel for spam, and that’s where the bulk of
> Usenet traffic has gone. Network effects are a better explanation
> for why nobody goes where nobody goes.
>
>> Is it already (or in the near future) possible to construct an AI
>> that could moderate a discussion group so that the amount of
>> off-topic messages stays acceptable but acceptable messages are not
>> rejected too often?
>
> It has been possible to stop spam for decades, and no AI is required
> to do it. It doesn’t even require natural language processing of
> message content! Spam (and other forms of abuse) has a source, and
> using that metadata to block bad actors is all that is required to
> stop the abuse. The problem is that, if you do said analysis, you’ll
> quickly discover that the source of abuse turns out to be the same
> “too big to fail” companies that exploit network effects for their
> own benefit. For Usenet, that means Google Groups; if you have the
> courage to acknowledge Google is a hostile actor, cut them off and
> you’ll eliminate 90% of the spam on Usenet.

That approach depends on identifying spam and spam sources. But my
question about the possibility of identifying on-topic messages is
still unanswered.

Mikko
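
A minimal sketch of the classifier being asked about, assuming
scikit-learn and a small hand-labeled sample; the training posts and
the 0.5 threshold are illustrative, not a working moderator. The
threshold is the knob that trades off the two failure modes named
above: raising it blocks more off-topic messages, but also rejects
acceptable messages more often.

# Sketch: accept a message only if the model's estimated probability
# of being on-topic clears a tunable threshold. The tiny training set
# is a placeholder for real labeled group history.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = [
    "Can a neural network learn to moderate a newsgroup?",     # on-topic
    "New results on language models for text classification",  # on-topic
    "CHEAP WATCHES BEST PRICE CLICK HERE",                     # off-topic
    "Make money fast working from home",                       # off-topic
]
labels = [1, 1, 0, 0]  # 1 = on-topic, 0 = off-topic

vectorizer = TfidfVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(posts), labels)

def accept(message: str, threshold: float = 0.5) -> bool:
    """Accept only when the estimated P(on-topic) reaches the threshold."""
    # classes_ is sorted, so column 1 holds the probability of label 1.
    p_on_topic = model.predict_proba(vectorizer.transform([message]))[0][1]
    return p_on_topic >= threshold

print(accept("A question about training recurrent networks"))  # likely True
print(accept("BEST PRICE, CLICK HERE, make money fast"))        # likely False

Whether acceptable messages are “not rejected too often” then becomes
an empirical question of threshold choice and training data, not a
yes/no property of the AI.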