• Stop innd expiring articles

    From Nigel Reed@21:1/5 to All on Wed Apr 26 00:34:34 2023
    Hi all,

    Since changing to CNFS it seems my server won't stop expiring articles. According to expire.log I lost 46,456 articles yesterday. I thought I
    fixed my expire.ctl to never expire articles after losing 20,000+ the
    night before.

    On a similar note, when adding a new peer, how can I make sure that I
    receive everything it wants to throw at me regardless of the age of the articles?

    Knowing my luck, I'll cock it all up and expire everything if I'm not
    careful!

    Thanks,


    --
    End Of The Line BBS - Plano, TX
    telnet endofthelinebbs.com 23

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jesse Rehmer@21:1/5 to All on Thu Apr 27 04:16:01 2023
    On Apr 26, 2023 at 12:34:34 AM CDT, "Nigel Reed" <sysop@endofthelinebbs.com> wrote:

    Hi all,

    Since changing to CNFS it seems my server won't stop expiring articles. According to expire.log I lost 46,456 articles yesterday. I thought I
    fixed my expire.ctl to never expire articles after losing 20,000+ the
    night before.

    What does your expire.ctl look like, and what is the output from expire? Are
    you sure it isn't just removing articles from the overview that have been
    overwritten/wrapped in a CNFS buffer? If you have buffers for smaller,
    unimportant groups (*.test, for example) that wrap, the output may make it
    appear articles are being expired when it is really just removing entries for
    articles that no longer exist from the overview.
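
    For reference, a "keep everything" expire.ctl (just a sketch, assuming
    groupbaseexpiry is set to true in inn.conf) would look roughly like this,
    with /remember/ only controlling how long history remembers the Message-IDs
    of articles that are no longer in the spool:

    /remember/:11
    *:A:never:never:never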

    On a similar note, when adding a new peer, how can I make sure that I
    receive everything it wants to throw at me regardless of the age of the articles?

    The artcutoff parameter is a global option, as far as I know, so if you change that value and have a peer feed you old articles you will re-propagate those articles to your other peers. I'm not aware of anything like this to tweak on
    a per-feed basis.
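
    For reference, the relevant inn.conf setting is just the following (a
    sketch; the value is a number of days, and 0 disables the age check
    entirely). If you do accept old articles, the /remember/ window in
    expire.ctl is what keeps history rejecting copies you have already seen.

    artcutoff: 0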

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nigel Reed@21:1/5 to Jesse Rehmer on Thu Apr 27 10:53:37 2023
    On Thu, 27 Apr 2023 04:16:01 -0000 (UTC)
    Jesse Rehmer <jesse.rehmer@blueworldhosting.com> wrote:

    On Apr 26, 2023 at 12:34:34 AM CDT, "Nigel Reed"
    <sysop@endofthelinebbs.com> wrote:

    Hi all,

    Since changing to CNFS it seems my server won't stop expiring
    articles. According to expire.log I lost 46,456 articles yesterday.
    I thought I fixed my expire.ctl to never expire articles after
    losing 20,000+ the night before.

    What does your expire.ctl look like, and what is the output from
    expire? Are you sure it isn't just removing articles from the
    overview that have been overwritten/wrapped in a CNFS buffer? If you
    have buffers for smaller, unimportant groups (*.test, for example)
    that wrap, the output may make it appear articles are being expired
    when it is really just removing entries for articles that no longer
    exist from the overview.

    /remember/:11
    *:A:never:never:never
    0:never:never:never


    $ inndf -io
    58.26% overview space used

    I have plenty of overview buffer space.

    Unless I've run out of CNFS buffers.
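
    I'll double-check with cnfsstat, which (as I understand it) reports how
    full each cycbuff is and how many times it has cycled:

    $ cnfsstat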

    It looks like I'm getting a high number of posts in relcom.test -
    like several a second? Similar for de.test, both from mixmin.

    On a similar note, when adding a new peer, how can I make sure that
    I receive everything it wants to throw at me regardless of the age
    of the articles?

    The artcutoff parameter is a global option, as far as I know, so if
    you change that value and have a peer feed you old articles you will re-propagate those articles to your other peers. I'm not aware of
    anything like this to tweak on a per-feed basis.

    Maybe I'll expire the test groups on a more rigorous basis.
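
    Something like this in expire.ctl might do it (just a sketch, assuming
    groupbaseexpiry is true; I'd check expire.ctl(5) for how multiple
    matching lines interact before relying on it):

    *.test:A:1:1:1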

    I would think my peers would reject any article I pass on to them
    that is beyond their own limit, right?


    --
    End Of The Line BBS - Plano, TX
    telnet endofthelinebbs.com 23

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nigel Reed@21:1/5 to Jesse Rehmer on Thu Apr 27 14:01:01 2023
    On Thu, 27 Apr 2023 04:16:01 -0000 (UTC)
    Jesse Rehmer <jesse.rehmer@blueworldhosting.com> wrote:

    On Apr 26, 2023 at 12:34:34 AM CDT, "Nigel Reed"
    <sysop@endofthelinebbs.com> wrote:

    Hi all,

    Since changing to CNFS it seems my server won't stop expiring
    articles. According to expire.log I lost 46,456 articles yesterday.
    I thought I fixed my expire.ctl to never expire articles after
    losing 20,000+ the night before.

    What does your expire.ctl look like, and what is the output from
    expire? Are you sure it isn't just removing articles from the
    overview that have been overwritten/wrapped in a CNFS buffer? If you
    have buffers for smaller, unimportant groups (*.test, for example)
    that wrap, the output may make it appear articles are being expired
    when it is really just removing entries for articles that no longer
    exist from the overview.

    On a similar note, when adding a new peer, how can I make sure that
    I receive everything it wants to throw at me regardless of the age
    of the articles?

    The artcutoff parameter is a global option, as far as I know, so if
    you change that value and have a peer feed you old articles you will re-propagate those articles to your other peers. I'm not aware of
    anything like this to tweak on a per-feed basis.


    I think the problem is some turd dumping 1.4 million articles to
    de.test.

    de.test: headers received: 429751/1451594

    I'm just loading them into slrn now to see who and within which
    timeframe.

    So, abuser@mixmin.net seems to be the reason for this. Unfortunately,
    it looks like my CNFS buffers have now started to overwrite, so I'm
    losing all my old articles. I'm really pissed off at this stage. If
    I'd stuck with a traditional spool I wouldn't be in this mess.

    So here are more questions.

    I'm going to check with my VPS provider to see if I can get a ZFS
    disk and, if so, copy my traditional spool back and then get my peers
    to refeed me a bunch of articles. Urgh!


    --
    End Of The Line BBS - Plano, TX
    telnet endofthelinebbs.com 23

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Hochstein@21:1/5 to Nigel Reed on Thu Apr 27 21:27:28 2023
    Nigel Reed wrote:

    It looks like I'm getting a high number of posts in relcom.test - like several a second? Similar for de.test, both from mixmin.

    Yes, someone from mixmin has been flooding de.test (since 2023-04-11),
    misc.test and other test groups with some 100,000 articles each day. As
    cleanfeed's flood protection excludes test groups, manual intervention
    is necessary.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nigel Reed@21:1/5 to Thomas Hochstein on Thu Apr 27 14:59:11 2023
    On Thu, 27 Apr 2023 21:27:28 +0200
    Thomas Hochstein <thh@thh.name> wrote:

    Nigel Reed wrote:

    It looks like I'm getting a high number of posts in relcom.test -
    like several a second? Similar for de.test, both from mixmin.

    Yes, someone from mixmin has been flooding de.test (since 2023-04-11),
    misc.test and other test groups with some 100,000 articles each day.
    As cleanfeed's flood protection excludes test groups, manual
    intervention is necessary.

    Might have been nice if whoever noticed this had mentioned it, so we
    could have taken mitigating action.

    I might just blacklist that server then if they're not going to do
    anything about it.

    --
    End Of The Line BBS - Plano, TX
    telnet endofthelinebbs.com 23

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jesse Rehmer@21:1/5 to All on Thu Apr 27 19:34:20 2023
    On Apr 27, 2023 at 2:01:01 PM CDT, "Nigel Reed" <sysop@endofthelinebbs.com> wrote:

    On Thu, 27 Apr 2023 04:16:01 -0000 (UTC)
    Jesse Rehmer <jesse.rehmer@blueworldhosting.com> wrote:

    On Apr 26, 2023 at 12:34:34 AM CDT, "Nigel Reed"
    <sysop@endofthelinebbs.com> wrote:

    Hi all,

    Since changing to CNFS it seems my server won't stop expiring
    articles. According to expire.log I lost 46,456 articles yesterday.
    I thought I fixed my expire.ctl to never expire articles after
    losing 20,000+ the night before.

    What does your expire.ctl look like, and what is the output from
    expire? Are you sure it isn't just removing articles from the
    overview that have been overwritten/wrapped in a CNFS buffer? If you
    have buffers for smaller, unimportant groups (*.test, for example)
    that wrap, the output may make it appear articles are being expired
    when it is really just removing entries for articles that no longer
    exist from the overview.

    On a similar note, when adding a new peer, how can I make sure that
    I receive everything it wants to throw at me regardless of the age
    of the articles?

    The artcutoff parameter is a global option, as far as I know, so if
    you change that value and have a peer feed you old articles you will
    re-propagate those articles to your other peers. I'm not aware of
    anything like this to tweak on a per-feed basis.


    I think the problem is some turd dumping 1.4 million articles to
    de.test.

    de.test: headers received: 429751/1451594

    I'm just loading them into slrn now to see who and within which
    timeframe.

    So, abuser@mixmin.net seems to be the reason for this. Unfortunately,
    it looks like my CNFS buffers have now started to overwrite, so I'm
    losing all my old articles. I'm really pissed off at this stage. If
    I'd stuck with a traditional spool I wouldn't be in this mess.

    So here are more questions.

    I'm going to check with my VPS provider to see if I can get a ZFS
    disk and, if so, copy my traditional spool back and then get my peers
    to refeed me a bunch of articles. Urgh!

    CNFS buffers can get tricky when you do not want them to wrap. High-traffic
    groups, "junk" groups, and hierarchies you don't mind losing articles from
    should go into a different buffer. I do this for *.test, news.lists.filters,
    junk, control.cancel, etc.
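
    Roughly, the pairing looks like this (the names, path, and 2 GB size are
    made up for illustration; cycbuff sizes are in kilobytes, the buffer files
    have to be created at that size, e.g. with dd, before innd will use them,
    and the class number just needs to be unique among your entries). In
    cycbuff.conf:

    cycbuff:JUNK1:/var/spool/news/cycbuffs/junk1:2048000
    metacycbuff:JUNKMB:JUNK1

    And in storage.conf, ahead of the general-purpose CNFS entry, since (as I
    recall) the first matching entry wins:

    method cnfs {
        newsgroups: *.test,junk,control.cancel,news.lists.filters
        class: 1
        options: JUNKMB
    }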

    Cleanfeed and PyClean ignore EMP checks in test groups, I assume because the
    bodies commonly posted there (e.g. "test") aren't unique and are likely to be
    seen frequently, but the recent floods suggest that may not be the best
    approach.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nigel Reed@21:1/5 to Jesse Rehmer on Thu Apr 27 16:14:22 2023
    On Thu, 27 Apr 2023 21:00:55 -0000 (UTC)
    Jesse Rehmer <jesse.rehmer@blueworldhosting.com> wrote:

    On Apr 27, 2023 at 2:59:11 PM CDT, "Nigel Reed"
    <sysop@endofthelinebbs.com> wrote:

    On Thu, 27 Apr 2023 21:27:28 +0200
    Thomas Hochstein <thh@thh.name> wrote:

    Nigel Reed wrote:

    It looks like I'm getting a high number of posts in relcom.test -
    like several a second? Similar for de.test, both from mixmin.

    Yes, someone from mixmin has been flooding de.test (since
    2023-04-11), misc.test and other test groups with some 100,000
    articles each day. As cleanfeed's flood protection excludes test
    groups, manual intervention is necessary.

    Might have been nice if whoever noticed this had mentioned it, so we
    could have taken mitigating action.

    I might just blacklist that server then if they're not going to do
    anything about it.

    It was reported in news.admin.net-abuse.usenet on 04/16/2023:

    <wwvy1mromnu.fsf@LkoBDZeT.terraraq.uk>

    Ah, I should subscribe to that then :) Thank you.

    --
    End Of The Line BBS - Plano, TX
    telnet endofthelinebbs.com 23

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Hochstein@21:1/5 to Nigel Reed on Thu Apr 27 23:07:49 2023
    Nigel Reed wrote:

    Might have been nice if whoever noticed this had mentioned it, so we
    could have taken mitigating action.

    It was reported in de.admin.net-abuse.news (on 2023-04-11) and news.admin.net-abuse.usenet (on 2023-04-16) [1]; those seem to be the appropriate groups for these kinds of things.

    -thh

    [1] <wwvy1mromnu.fsf@LkoBDZeT.terraraq.uk>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jesse Rehmer@21:1/5 to All on Thu Apr 27 21:00:55 2023
    On Apr 27, 2023 at 2:59:11 PM CDT, "Nigel Reed" <sysop@endofthelinebbs.com> wrote:

    On Thu, 27 Apr 2023 21:27:28 +0200
    Thomas Hochstein <thh@thh.name> wrote:

    Nigel Reed wrote:

    It looks like I'm getting a high number of posts in relcom.test -
    like several a second? Similar for de.test, both from mixmin.

    Yes, someone from mixmin has been flooding de.test (since 2023-04-11),
    misc.test and other test groups with some 100,000 articles each day.
    As cleanfeed's flood protection excludes test groups, manual
    intervention is necessary.

    Might have been nice if whoever noticed this had mentioned it, so we
    could have taken mitigating action.

    I might just blacklist that server then if they're not going to do
    anything about it.

    It was reported in news.admin.net-abuse.usenet on 04/16/2023:

    <wwvy1mromnu.fsf@LkoBDZeT.terraraq.uk>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Hochstein@21:1/5 to Jesse Rehmer on Thu Apr 27 23:07:53 2023
    Jesse Rehmer wrote:

    Cleanfeed and PyClean ignore EMP checks in test groups, I assume because the
    bodies commonly posted there (e.g. "test") aren't unique and are likely to be
    seen frequently, but the recent floods suggest that may not be the best
    approach.

    You can just add the test groups to "flood_groups", for the time being.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julien ÉLIE@21:1/5 to All on Sat Apr 29 23:32:59 2023
    Hi Nigel, Jesse,

    So, abuser@mixmin.net seems to be the reason for this. Unfortunately,
    it looks like my CNFS buffers have now started to overwrite, so I'm
    losing all my old articles. I'm really pissed off at this stage. If
    I'd stuck with a traditional spool I wouldn't be in this mess.

    New CNFS buffers can be added on the fly in cycbuff.conf; you can
    monitor how fast they fill with cnfsstat, and add new buffers when
    appropriate.
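
    For example (the name, path, and size below are placeholders), after
    creating the new buffer file at the right size with dd, you would append
    a line like

    cycbuff:CNFS9:/var/spool/news/cycbuffs/cnfs9:4096000

    to cycbuff.conf, add CNFS9 to the end of the metacycbuff line it should
    feed, and have innd pick up the change (recent versions of INN can reload
    cycbuff.conf through ctlinnd, if I remember correctly; otherwise
    "ctlinnd xexec innd" re-execs the daemon cleanly).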


    I'm going to check with my VPS provider to see if I can get a ZFS
    disk and, if so, copy my traditional spool back and then get my peers
    to refeed me a bunch of articles. Urgh!

    I hope you'll manage to.


    CNFS buffers can get tricky when you do not want them to wrap.

    That's what the timecaf storage method is good at!
    The advantages of CNFS without the wrapping :)
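
    A minimal storage.conf entry for it would be something like this (the
    class number is arbitrary, as long as it is unique):

    method timecaf {
        newsgroups: *
        class: 2
    }

    Articles then live in time-based CAF files and are removed by the normal
    expire run according to expire.ctl, so nothing gets overwritten behind
    your back.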

    --
    Julien ÉLIE

    « A faithful friend who speaks your language very well, and all the
    living languages: Latin, Greek, Celtic, etc. » (Astérix)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)