Hi all,
Since changing to CNFS, it seems my server won't stop expiring articles. According to expire.log, I lost 46,456 articles yesterday. I thought I
had fixed my expire.ctl to never expire articles after losing 20,000+ the
night before.
On a similar note, when adding a new peer, how can I make sure that I
receive everything it wants to throw at me regardless of the age of the articles?
On Apr 26, 2023 at 12:34:34 AM CDT, "Nigel Reed"
<sysop@endofthelinebbs.com> wrote:
Hi all,
Since changing to CNFS it seems my server won't stop expiring
articles. According to expire.log I lost 46,456 articles yesterday.
I thought I fixed my expire.ctl to never expire articles after
losing 20,000+ the night before.
What does your expire.ctl look like, and what is the output from
expire? Are you sure it isn't just removing articles from the
overview that have been overwritten/wrapped in a CNFS buffer? If you
have some buffers for smaller, unimportant groups (*.test, for
example) that wrap, the output may make it appear it is expiring
articles when it is just removing non-existent articles from the
overview.
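For reference, an expire.ctl that keeps everything indefinitely (assuming groupbaseexpiry is set to true in inn.conf) can be as short as this sketch; note that with CNFS the articles themselves still disappear when a buffer wraps, no matter what expire.ctl says:

```
# How long history remembers message-IDs of expired/cancelled articles.
/remember/:11

# pattern:flag:keep:default:purge -- never expire anything.
*:A:never:never:never
```

The "never" values only stop expire/expireover from removing entries; they cannot stop a full CNFS buffer from overwriting its oldest articles.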
On a similar note, when adding a new peer, how can I make sure that
I receive everything it wants to throw at me regardless of the age
of the articles?
The artcutoff parameter is a global option, as far as I know, so if
you change that value and have a peer feed you old articles, you will
re-propagate those articles to your other peers. I'm not aware of
anything like this that can be tweaked on a per-feed basis.
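If accepting old articles really is wanted, the global knob lives in inn.conf and looks like this (a sketch; artcutoff is in days, and 0 disables the age check entirely):

```
# inn.conf: do not reject incoming articles based on age.
# Default is 10 (days); 0 disables the cutoff.
artcutoff: 0
```

Keep the caveat in mind: anything accepted this way will in turn be offered to all your other peers.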
It looks like I'm getting a high number of posts in relcom.test, like several a second, and similar for de.test, both from mixmin.
Nigel Reed wrote:
It looks like I'm getting a high number of posts in relcom.test,
like several a second, and similar for de.test, both from mixmin.
Yes, someone from mixmin has been flooding de.test (since 2022-04-11), misc.test, and other test groups with some 100,000 articles each day.
As cleanfeed's flood protections exclude test groups, manual
intervention is necessary.
I think the problem is some turd dumping 1.4 million articles to
de.test.
de.test: headers received: 429751/1451594
I'm just loading them into slrn now to see who it is and within which
timeframe.
So, abuser@mixmin.net seems to be the reason for this. Unfortunately,
it looks like my CNFS buffers have now started to overwrite, so I'm
losing all my old articles. I'm really pissed off at this stage. If I
had stuck with a traditional spool, I wouldn't be in this mess.
So here are more questions.
I'm going to check with my VPS provider to see if I can get a ZFS disk
and, if so, copy my traditional spool back and then get my peers to
refeed me a bunch of articles. Urgh!
On Apr 27, 2023 at 2:59:11 PM CDT, "Nigel Reed"
<sysop@endofthelinebbs.com> wrote:
It might have been nice if whoever noticed this had mentioned it,
so we could take mitigating action.
I might just blacklist that server if they're not going to do
anything about it.
It was reported in news.admin.net-abuse.usenet on 04/16/2023:
<wwvy1mromnu.fsf@LkoBDZeT.terraraq.uk>
Cleanfeed and PyClean ignore EMP on test groups, I assume because bodies posted there commonly aren't unique but are likely to be seen frequently (i.e., "test"), but recent floods suggest that may not be the best approach.
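The body-hash counting that EMP filters do can be sketched in a few lines of Python. This is not cleanfeed's or PyClean's actual code, just an illustration of why identical "test" bodies would trip the filter unless test groups are exempted (the fixed threshold and the lack of time decay are simplifications):

```python
import hashlib
from collections import Counter

class EmpFilter:
    """Toy EMP (excessive multi-posting) filter: count identical bodies
    and reject once a body has been seen more than `threshold` times."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.seen: Counter = Counter()

    def accept(self, body: str) -> bool:
        # Hash the body so we only store digests, not full articles.
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.seen[digest] += 1
        return self.seen[digest] <= self.threshold

emp = EmpFilter(threshold=3)
print([emp.accept("test") for _ in range(5)])  # [True, True, True, False, False]
```

A real filter ages its counters over time, and exempting test groups from this check is exactly why a sustained flood there sails through.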
CNFS buffers can get tricky when you do not want them to wrap.
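The wrap behaviour is just arithmetic: a buffer holds roughly (buffer size / daily intake) days of articles, so a flood that multiplies intake divides retention. A quick sketch, with made-up numbers:

```python
def retention_days(buffer_gb: float, daily_intake_gb: float) -> float:
    """Rough days until a CNFS buffer wraps and starts overwriting."""
    return buffer_gb / daily_intake_gb

# A 50 GB buffer at a normal 2 GB/day keeps ~25 days of articles...
print(retention_days(50, 2))   # 25.0
# ...but a flood pushing intake to 10 GB/day cuts that to 5 days.
print(retention_days(50, 10))  # 5.0
```

This is why a test-group flood can quietly wipe out months of retention in the buffers it lands in.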