Hi all, I've run into the following little bug.
For reference, I'm using Eternal September as you can see.
In my ~/.jnewsrc I currently have:
comp.lang.c: 1-254200
1. When I run SLRN, it reports that comp.lang.c has 1 new article.
2. When I select the newsgroup, it immediately reports "No unread
articles" (without showing that any have been killed).
3. The unread count does not go to 0; it stays at 1.
4. When I quit slrn, the count stays at 1-254200.
* Kaz Kylheku wrote:
Hi all, I've run into the following little bug.
For reference, I'm using Eternal September as you can see.
In my ~/.jnewsrc I currently have:
comp.lang.c: 1-254200
1. When I run SLRN, it reports that comp.lang.c has 1 new article.
2. When I select the newsgroup, it immediately reports "No unread
articles" (without showing that any have been killed).
3. The unread count does not go to 0; it stays at 1.
4. When I quit slrn, the count stays at 1-254200.
This is not a bug in slrn or any other newsreader that shows the same
behaviour. This phenomenon occurs when articles are removed from the
server by a cancel message or perl-nocem (because they are spam).
The overview data for the removed article is still present, hence it
shows in the article list, but the article itself can no longer be
retrieved.
As comp.lang.c receives a massive flood of drug/med spam, this
behaviour can best be observed in this group.
Just not with slrn before the commit I found with git bisect.
New slrn leaves it like that; it will not increment to 1-254214 and
write the file.
Old slrn will catch the newsgroup up; no unread articles found,
and it will increment to 1-254214.
I can do that manually too by editing the file; then the newsgroup
is no longer listed as having unread articles.
The catch-up command works too.
This is a bad behavior; there is no use in leaving recalled,
inaccessible articles unread.
Kaz Kylheku <864-117-4973@kylheku.com> writes:
New slrn leaves it like that; it will not increment to 1-254214 and
write the file.
Old slrn will catch the newsgroup up; no unread articles found,
and it will increment to 1-254214.
I can do that manually too by editing the file; then the newsgroup
is no longer listed as having unread articles.
The catch-up command works too.
This is a bad behavior; there is no use in leaving recalled,
inaccessible articles unread.
The basic problem is that from the perspective of slrn it's an
inconsistency in the overview database (and maybe the active file; I'm
not sure precisely what the database looks like since it's been a while
since I've looked at this). There are records for two more articles,
but those articles don't exist when looked up in the spool.
The thing is, slrn doesn't know which way that inconsistency will
eventually resolve. We all know that it's probably because the articles
were deleted and thus this will be resolved by deleting the overview
records as well; this isn't done immediately because, due to the
structure of the overview database in the most common configurations,
deletions are expensive and are therefore batched. But from slrn's
perspective, it's possible that the articles are going to show up later
and make the overview correct (and there are indeed some configurations
where that can possibly happen). So it's making the conservative
decision to keep retrying those articles to see if they start working,
until the overview records are removed.
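To make that concrete, here's a minimal sketch of the kind of mismatch
being described, written with Python's nntplib (so it needs a Python
old enough to still ship that module); the server name is a placeholder
and this is not what slrn itself does internally:

    # Sketch: find article numbers that still have overview records but
    # whose articles are gone from the spool (STAT fails on them).
    import nntplib

    with nntplib.NNTP("news.example.org") as nn:    # placeholder server
        resp, count, first, last, name = nn.group("comp.lang.c")
        lo = max(first, last - 13)                   # last few numbers only
        resp, overviews = nn.over((lo, last))
        gone = []
        for artnum, fields in overviews:
            try:
                nn.stat(artnum)                      # does the article exist?
            except nntplib.NNTPTemporaryError:
                gone.append(artnum)
        print(f"overview records: {len(overviews)}, articles missing: {gone}")

Article numbers that have overview records but fail STAT are exactly the
"records for articles that don't exist in the spool" situation above.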
This is not a bug in slrn or any other newsreader that shows the same
behaviour. This phenomenon occurs when articles are removed from the
server by a cancel message or perl-nocem (because they are spam).
Okay Ray, so which is correct: slrn 1.0.3 or current?
The basic problem is that from the perspective of slrn it's an
inconsistency in the overview database (and maybe the active file; I'm
not sure precisely what the database looks like since it's been a while
since I've looked at this).
The thing is, slrn doesn't know which way that inconsistency will
eventually resolve. We all know that it's probably because the articles
were deleted and thus this will be resolved by deleting the overview
records as well; this isn't done immediately because, due to the
structure of the overview database in the most common configurations,
deletions are expensive and are therefore batched. But from slrn's
perspective, it's possible that the articles are going to show up later
and make the overview correct (and there are indeed some configurations
where that can possibly happen). So it's making the conservative
decision to keep retrying those articles to see if they start working,
until the overview records are removed.
Hi Russ,
The basic problem is that from the perspective of slrn it's an
inconsistency in the overview database (and maybe the active file; I'm
not sure precisely what the database looks like since it's been a while
since I've looked at this).
The high water mark never decreases in the active file as this
information is used by innd to assign the next unused article number.
The thing is, slrn doesn't know which way that inconsistency will
eventually resolve. We all know that it's probably because the articles
were deleted and thus this will be resolved by deleting the overview
records as well; this isn't done immediately because, due to the structure
of the overview database in the most common configurations, deletions are
expensive and are therefore batched. But from slrn's perspective, it's
possible that the articles are going to show up later and make the
overview correct (and there are indeed some configurations where that can
possibly happen). So it's making the conservative decision to keep
retrying those articles to see if they start working, until the overview
records are removed.
In order to improve the experience of news clients, wouldn't a
groupexacthigh parameter in readers.conf be useful in INN?
The idea would be that nnrpd would give a high water mark corresponding
to an existing article when responding to GROUP, LISTGROUP, LIST ACTIVE
and LIST COUNTS.
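To make the proposal concrete: groupexacthigh does not exist today, so
both the response lines and the readers.conf knob below are invented
for illustration (211 is the usual GROUP reply, "count low high group",
and all the numbers are made up):

    Current nnrpd:           211 14 1 254214 comp.lang.c
    With groupexacthigh:     211 14 1 254200 comp.lang.c
                             (254200 = highest article that actually exists)

    access full {
        users: "*"
        newsgroups: "*"
        groupexacthigh: true
    }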
The high water mark never decreases in the active file as this
information is used by innd to assign the next unused article number.
Though overview data may be handled differently, I think the same rule
is also followed: the expiry process does not decrease high water marks
in overview data, but only updates low water marks and article counts,
and removes old articles. That is what I recall from a recent
investigation of the 4 overview methods, looking at how they handle
empty newsgroups.
In order to improve the experience of news clients, wouldn't a
groupexacthigh parameter in readers.conf be useful in INN? The idea
would be that nnrpd would give a high water mark corresponding to an
existing article when responding to GROUP, LISTGROUP, LIST ACTIVE and
LIST COUNTS.
Though overview data may be handled differently, I think the same rule
is also followed: the expiry process does not decrease high water marks
in overview data, but only updates low water marks and article counts,
and removes old articles. That is what I recall from a recent
investigation of the 4 overview methods, looking at how they handle
empty newsgroups.
I thought the overview data for the deleted article would be dropped
from the database during nightly expire, so will not appear in the OVER
output, but it's been a while since I looked at it so I could be wrong.
In other words, I think you're correct that the high water marks would
not change, since that's how INN ensures that it never reuses an
article number, but when you run OVER on the group, I believe the
entries for those articles should be missing.
In order to improve the experience of news clients, wouldn't a
groupexacthigh parameter in readers.conf be useful in INN? The idea
would be that nnrpd would give a high water mark corresponding to an
existing article when responding to GROUP, LISTGROUP, LIST ACTIVE and
LIST COUNTS.
Yes, that would also solve the problem. It won't avoid the appearance
of three unread articles and then seeing only one unread article the
next time an article arrives that isn't deleted, but I think that's
unavoidable in the protocol.
The entries for cancelled articles are no longer in the overview data
returned by an OVER request. When processing a cancel, the rows are
deleted in the ovdb/ovsqlite databases and the index entry is deleted
in tradindexed. OVER no longer shows them.
The high water mark never decreases in the active file as this
information is used by innd to assign the next unused article number.
Though overview data may be handled differently, I think the same rule
is also followed: the expiration process does not decrease high water
marks in overview data, but only updates low water marks and article
counts, and removes old articles.
Just to be sure, I have just cancelled an article in a local testing
group, and will report tomorrow.
Responding to my previous article: [...]
I confirm the high water mark does not decrease in the overview data
after an expiration run.
* Julien ÉLIE wrote:
Responding to my previous article:[...]
I confirm the high water mark does not decrease in the overview data
after an expiration run.
That's good to know, as anything else would break the xrefslave feature.
;-)
It's strange. While there are only junk articles, you get the behavior
where it doesn't catch up. Currently there are 5. As soon as a 6th
article shows up that is good, then when I read it, slrn will catch
up and there will be 0 unread.
There must be a protocol-level difference between what slrn is getting
from OVER vs XOVER.
I'm suspecting that in the situations in which this manifests itself,
OVER is reporting fewer articles than the watermark, which differs
from XOVER.
And then slrn is refusing to catch up beyond what was reported by OVER.
Maybe this slight difference triggers the behaviour slrn has when using
OVER because it does not handle the 423 response code as equivalent to
XOVER returning no articles at all?
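One way to check that from outside slrn is to issue both commands for
the same suspect range and compare the raw status lines. Here is a
rough sketch with Python's nntplib; the server, group and range are
placeholders, and note that nntplib's over() may itself fall back to
sending XOVER if the server doesn't advertise the OVER capability:

    # Probe whether OVER and XOVER answer differently for a range that
    # contains only removed articles.
    import nntplib

    GROUP = "comp.lang.c"
    LO, HI = 254201, 254214          # made-up range of removed articles

    with nntplib.NNTP("news.example.org") as nn:
        nn.group(GROUP)
        for name in ("OVER", "XOVER"):
            try:
                if name == "OVER":
                    resp, entries = nn.over((LO, HI))
                else:
                    resp, entries = nn.xover(LO, HI)
                print(f"{name}: {resp} ({len(entries)} entries)")
            except nntplib.NNTPTemporaryError as e:
                print(f"{name}: {e.response}")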
Hi all, I've run into the following little bug.
bug: OVER issue: not advancing past articles.
We need to handle the 423 code (ERR_NOARTIG) when OVER is used
instead of XOVER. Just pretending that it is 224 (OK_XOVER)
doesn't work because the code then tries to get headers,
and the connection will abruptly close unlike in the XOVER
case.
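For anyone curious what that amounts to in client logic, here is an
illustrative Python sketch of the shape of the handling the commit
describes; it is not slrn's actual C code, just the idea:

    import nntplib

    def fetch_overview(conn, lo, hi):
        """Overview entries for lo-hi, or [] when the server answers 423
        ("no articles in that range"); either way the caller can mark
        lo-hi as read instead of retrying forever."""
        try:
            resp, entries = conn.over((lo, hi))   # 224: entries follow
            return entries
        except nntplib.NNTPTemporaryError as e:
            if e.response.startswith("423"):      # empty range, not an error
                return []
            raise

The point is the one the commit message makes: an empty 423 range has
to be treated as "caught up", not massaged into a fake 224 that the
header-fetching path then trips over.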