I often use the newlib standard C library with the gcc toolchain on Cortex-M platforms. I sometimes need to manage calendar time: seconds
since 1970, or broken-down time. And sometimes I also need to manage the timezone, because the time reference comes from NTP (which is UTC).
newlib, as expected, defines a time() function that calls a syscall
function, _gettimeofday(). It should be defined as in [1].
What is it? There's an assembler instruction that I don't understand:
asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
What is the cleanest way to override this behaviour and let newlib's
time() return a custom calendar time, perhaps counted by a local RTC synchronized with an NTP server?
The solution that comes to my mind is to override _gettimeofday() by
defining a custom function.
[1] https://github.com/eblot/newlib/blob/master/libgloss/arm/syscalls.c
On 30/09/2022 08:29, pozz wrote:
I often use newlib standard C libraries with gcc toolchain for
Cortex-M platforms. It sometimes happens I need to manage calendar
time: seconds from 1970 or broken down time. And it sometimes happens
I need to manage timezone too, because the time reference comes from
NTP (that is UTC).
newlib as expected defines a time() function that calls a syscall
function _gettimeofday(). It should be defined as in [1].
What is it? There's an assembler instruction that I don't understand:
asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
It is a "software interrupt" instruction. If you have a separation of user-space and supervisor-space code in your system, this is the way you
make a call to supervisor mode.
What is the cleanest way to override this behaviour and let newlib
time() to return a custom calendar time, maybe counted by a local RTC,
synchronized with a NTP?
The solution that comes to my mind is to override _gettimeofday() by
defining a custom function.
Yes, that's the way to do it.
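A minimal sketch of such an override, assuming a hypothetical rtc_seconds counter (seconds since the Unix epoch, maintained elsewhere by the RTC/NTP code); note the exact type of the second parameter varies between newlib versions:

```c
#include <stdint.h>
#include <sys/time.h>

/* Hypothetical RTC-backed counter: seconds since the Unix epoch,
 * maintained elsewhere (RTC interrupt, NTP sync, ...). */
static volatile uint32_t rtc_seconds;

/* Defining _gettimeofday() in the application overrides the libgloss
 * stub, so newlib's time() and gettimeofday() pick up the RTC value. */
int _gettimeofday(struct timeval *tv, void *tz)
{
    (void)tz;                        /* the timezone argument is obsolete */
    if (tv) {
        tv->tv_sec  = (time_t)rtc_seconds;
        tv->tv_usec = 0;             /* this RTC only has 1 s resolution */
    }
    return 0;
}
```

Because the linker prefers an application-supplied definition over the library stub, no linker flags or newlib patches should be needed.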
Or define your own time functions that are appropriate to the task. I
have almost never had a use for the standard library time functions -
they are too much for most embedded systems which rarely need all the
locale stuff, time zones, and tracking leap seconds, while lacking the
stuff you /do/ need like high precision time counts.
Use a single 64-bit monotonic timebase running at high speed (if your microcontroller doesn't support that directly, use a timer with an
interrupt for tracking the higher part). That's enough for nanosecond precision for about 600 years.
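That scheme can be sketched as follows (hardware access is stubbed out here; on a real part, timebase_high would be bumped by the timer's overflow interrupt, and the read loop guards against an overflow landing between the two reads):

```c
#include <stdint.h>

static volatile uint32_t timebase_high;  /* upper 32 bits, kept by the ISR */

/* Stand-in for reading the free-running 32-bit hardware counter. */
static volatile uint32_t hw_counter;
static uint32_t hw_timer_read(void) { return hw_counter; }

/* Called from the timer overflow interrupt. */
void timer_overflow_isr(void) { timebase_high++; }

uint64_t timebase_now(void)
{
    uint32_t high, low;
    do {
        high = timebase_high;
        low  = hw_timer_read();
    } while (high != timebase_high);   /* overflow slipped in: retry */
    return ((uint64_t)high << 32) | low;
}
```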
For human-friendly time and dates, either update every second or write
your own simple second-to-human converter. It's easier if you have
your base point relatively recently (there's no need to calculate back
to 01.01.1970).
If you have an internet connection, NTP is pretty simple if you are
happy to use the NTP pools as a rough reference without trying to do millisecond synchronisation.
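If one does query an (S)NTP server, extracting the time from the 48-byte reply is mostly one big-endian read plus an epoch rebase; a sketch with network I/O omitted (the offset 40 for the transmit timestamp comes from the NTPv4 packet layout):

```c
#include <stdint.h>

#define NTP_TO_UNIX 2208988800UL   /* seconds between 1900 and 1970 epochs */

/* Extract the integer seconds of the transmit timestamp (bytes 40..43,
 * big-endian) from a 48-byte NTP response, rebased to the Unix epoch. */
uint32_t sntp_unix_seconds(const uint8_t pkt[48])
{
    uint32_t secs = ((uint32_t)pkt[40] << 24) | ((uint32_t)pkt[41] << 16)
                  | ((uint32_t)pkt[42] << 8)  |  (uint32_t)pkt[43];
    return secs - NTP_TO_UNIX;
}
```

This ignores the fractional-seconds field and round-trip delay compensation, which is exactly the "rough reference" trade-off described above.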
Il 30/09/2022 09:04, David Brown ha scritto:
On 30/09/2022 08:29, pozz wrote:
I often use newlib standard C libraries with gcc toolchain for
Cortex-M platforms. It sometimes happens I need to manage calendar
time: seconds from 1970 or broken down time. And it sometimes happens
I need to manage timezone too, because the time reference comes from
NTP (that is UTC).
newlib as expected defines a time() function that calls a syscall
function _gettimeofday(). It should be defined as in [1].
What is it? There's an assembler instruction that I don't understand:
asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
It is a "software interrupt" instruction. If you have a separation of
user-space and supervisor-space code in your system, this is the way
you make a call to supervisor mode.
Ok, but how does that instruction help in returning a value from
_gettimeofday()?
What is the cleanest way to override this behaviour and let newlib
time() to return a custom calendar time, maybe counted by a local
RTC, synchronized with a NTP?
The solution that comes to my mind is to override _gettimeofday() by
defining a custom function.
Yes, that's the way to do it.
Or define your own time functions that are appropriate to the task. I
have almost never had a use for the standard library time functions -
they are too much for most embedded systems which rarely need all the
locale stuff, time zones, and tracking leap seconds, while lacking the
stuff you /do/ need like high precision time counts.
Use a single 64-bit monotonic timebase running at high speed (if your
microcontroller doesn't support that directly, use a timer with an
interrupt for tracking the higher part). That's enough for nanosecond
precision for about 600 years.
For human-friendly time and dates, either update every second or write
your own simple second-to-human converter. It's easier if you have
your base point relatively recently (there's no need to calculate back
to 01.01.1970).
If you have an internet connection, NTP is pretty simple if you are
happy to use the NTP pools as a rough reference without trying to do
millisecond synchronisation.
I agree with you, and in the past I implemented my own functions to manage
calendar times. Sometimes I used an internal or external RTC that gives
date and time in broken-down fields (seconds, minutes, ...).
However, most RTCs don't handle DST (daylight saving time) automatically, so
I started using a different approach.
I started using a simple 32-bit timer that increments every second.
Maybe a timer clocked from an accurate 32.768 kHz quartz with a 32768 prescaler (many RTCs can be configured as a simple 32-bit counter).
I rarely need calendar times with a resolution better than 1 second.
Now the big question: what exactly does the counter represent? Of course,
seconds elapsed from an epoch (which could be Unix 1970, or 2000, or 2020,
or whatever you choose). But the real question is: UTC or local time?
I started using local time, for example a timer that counts seconds since
year 2020 (so avoiding wrap-around at year 2038) in the Rome timezone.
However this approach pulls-in other issues.
How do you convert this number (seconds since 2020 in Rome) to broken-down
time (day, month, hours...)? It's very complex, because you must account
for leap years, but mostly for DST rules.
In Rome there are calendar times that occur twice, when the clock is moved backward by one hour at the end of DST. What is the counter value, as
seconds since the epoch in Rome, for such a time?
It's much simpler to start from seconds in UTC, as Linux (and maybe Windows) does. In this way you can use standard functions to convert
seconds in UTC to local time. For example, you can use localtime() (or better, localtime_r()).
Another bonus is when you have NTP, which returns seconds in UTC, so you
can set your counter with the exact number retrieved by NTP.
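A sketch of that approach with newlib/glibc-style functions; the assumption here is that the target C library supports POSIX TZ strings (the one below encodes the CET/CEST Rome rules with the EU switch dates):

```c
#include <stdlib.h>
#include <time.h>

/* Convert a UTC counter value to Rome local time, letting the C library
 * do the DST arithmetic via a POSIX TZ string. */
struct tm utc_to_rome(time_t utc_seconds)
{
    struct tm local;
    setenv("TZ", "CET-1CEST,M3.5.0,M10.5.0/3", 1);
    tzset();
    localtime_r(&utc_seconds, &local);  /* reentrant, no static buffer */
    return local;
}
```

With this scheme the counter itself stays unambiguous UTC; only the display-side conversion knows about the timezone.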
Il 30/09/2022 13:51, Clifford Heath ha scritto:
On 30/9/22 19:29, pozz wrote:
Il 30/09/2022 09:04, David Brown ha scritto:
On 30/09/2022 08:29, pozz wrote:
[...]
It's much more simple to start from seconds in UTC, as Linux (and
maybe Windows) does. In this way you can use standard functions to
convert seconds in UTC to localtime
That's a good way to always get the wrong result. You are ignoring the
need for leap seconds. If you want a monotonic counter of seconds
since some epoch, you must not use UTC, but TAI:
<https://en.wikipedia.org/wiki/International_Atomic_Time>
When I implemented this, I used a 64-bit counter in 100s of
nanoseconds since a date about 6000BC, measuring in TAI. You can
convert to UTC easily enough, and then use the timezone tables to get
local times.
What happens if the counter is UTC instead of TAI in a typical embedded application? There's a time when the counter is synchronized (by a
manual operation from the user, by NTP, or by other means). At that moment the broken-down time shown on the display is precise.
You would have to wait for the next leap second to accumulate an error of... 1 second.
On 1/10/22 04:55, Don Y wrote:
On 9/30/2022 4:51 AM, Clifford Heath wrote:
When I implemented this, I used a 64-bit counter in 100s of nanoseconds
since a date about 6000BC, measuring in TAI. You can convert to UTC easily
enough, and then use the timezone tables to get local times.
How did you address calls for times during the Gregorian changeover?
You're asking a question about calendars, not time. Different problem.
I find it easier to treat "system time" as an arbitrary metric
that runs at a nominal 1Hz per second and is never "reset".
Then, "wall time" is a bogus concept introduced just for human
convenience. Do you prevent a user (or an external reference)
from ever setting the wall time backwards?
That doesn't work for someone who's travelling between timezones.
Time keeps advancing regardless, but wall clock time jumps about.
Same problem for DST. Quite a lot of enterprise (financial) systems are barred
from running any transaction processing for an hour during the DST switch-over, because of software that might malfunction.
Correctness is difficult, especially when you build systems on shifting sands.
On 9/30/2022 4:51 AM, Clifford Heath wrote:
When I implemented this, I used a 64-bit counter in 100's of
nanoseconds since a date about 6000BC, measuring in TAI. You can
convert to UTC easily enough, and then use the timezone tables to get
local times.
How did you address calls for times during the Gregorian changeover?
I find it easier to treat "system time" as an arbitrary metric
that runs at a nominal 1Hz per second and is never "reset".
Then, "wall time" is a bogus concept introduced just for human
convenience. Do you prevent a user (or an external reference)
from ever setting the wall time backwards?
There are other time systems (like TAI) that keep track of leap
seconds, but to convert those to wall-clock time you need a historical
table of when leap seconds occurred, and you need to either
refuse to handle the farther future or admit that you need to guess
when future leap seconds will be applied.
Most uses of TAI time are for just short intervals without a need to
convert to wall clock.
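As an illustration of such a table, truncated to the three most recent leap seconds for brevity; real code would carry the full IERS list and a policy for dates beyond the table's horizon:

```c
#include <stdint.h>

/* TAI-UTC offset for a given UTC time (Unix seconds).  Each entry is
 * the Unix second at which the offset increased by one. */
int tai_minus_utc(uint32_t utc)
{
    if (utc >= 1483228800u) return 37;  /* leap second at end of 2016 */
    if (utc >= 1435708800u) return 36;  /* leap second at end of June 2015 */
    if (utc >= 1341100800u) return 35;  /* leap second at end of June 2012 */
    return 34;                          /* table truncated here */
}
```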
On 9/30/2022 2:29 AM, pozz wrote:
Another bonus is when you have NTP, that returns seconds in UTC, so
you can set your counter with the exact number retrieved by NTP.
No. You always have to ensure that time keeps flowing in one direction.
So, time either "doesn't exist" before your initial sync with the
time server (what if the server isn't available when you want to
do that?)
*or* you have to look at your current notion of "now"
and ensure that the "real" value of now, when obtained from the
time server, is always in the future relative to your notion.
[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]
This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".
Il 30/09/2022 20:42, Don Y ha scritto:
On 9/30/2022 2:29 AM, pozz wrote:
Another bonus is when you have NTP, that returns seconds in UTC, so you can set your counter with the exact number retrieved by NTP.
No. You always have to ensure that time keeps flowing in one direction.
So, time either "doesn't exist" before your initial sync with the
time server (what if the server isn't available when you want to
do that?)
At startup, if the NTP server is not available and I don't have any notion of "now", I start from a date in the past, e.g. 01/01/2020.
*or* you have to look at your current notion of "now"
and ensure that the "real" value of now, when obtained from the
time server, is always in the future relative to your notion.
Actually I don't do that: I simply replace the timer counter with the value retrieved from NTP.
What happens if the local timer is clocked by a faster clock than nominal? For
example, 16.001 MHz with a 16M prescaler.
If I try to re-sync with NTP every hour, the local counter is probably greater than the value retrieved from NTP. I'm forced to decrease the local counter, my notion of "now".
What happens if the time doesn't flow in one direction only?
[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]
This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".
On 10/2/2022 3:09 PM, pozz wrote:
Il 30/09/2022 20:42, Don Y ha scritto:
On 9/30/2022 2:29 AM, pozz wrote:
Another bonus is when you have NTP, that returns seconds in UTC, so
you can set your counter with the exact number retrived by NTP.
No. You always have to ensure that time keeps flowing in one direction.
So, time either "doesn't exist" before your initial sync with the
time server (what if the server isn't available when you want to
do that?)
At startup, if NTP server is not available and I don't have any notion
of "now", I start from a date in the past, i.e. 01/01/2020.
Then you have to be able to accept a BIG skew in the time when the first
update arrives. What if that takes an hour, a day or more (because the
server is down, badly configured, or routing is broken)? What if it
*never* arrives?
If you apply the new time as a step function, then all of the potential
time-related events between ~1/1/2020 and "now" will appear to occur
at the same instant -- *now* -- or not at all. And any time-related
calculations will be grossly incorrect.
start_time := now()
dispenser(on)
wait_until(start_time + interval)
Imagine what will happen if the time is changed during this fragment.
If the change adds >= interval to the local notion of now, then the
dispenser will be "on" only momentarily. If it adds (0,interval),
then it will be on for some period LESS than the "interval" intended.
[I'm ignoring the possibility of it going BACKWARDS, for now]
Note that wait_until() could have been expressed as delay(interval)
and, depending on how this is internally implemented, it might be
silently translated to a wait_until() and thus dependant on the
actual value of now().
Likewise, imagine trying to measure the duration of an event:
wait_until(event)
start_time := now()
wait_until(!event)
duration = now() - start_time
Similarly, any implied ordering of actions is vulnerable:
do(THIS, time1)
do(THAT, time2)
What if the value of now() makes a jump from some time prior to
time1 to some time after time1, but before time2. Will THIS happen?
(i.e., will it be scheduled to happen?) How much ACTUAL (execution)
time will there be between THIS being started and THAT?
What if the value of now() makes a jump from some time prior to
time1 to some time after time2. Will THIS happen before THAT?
Will both start (be made ready) concurrently? Who will win the
unintended race?
[Note that many NTP clients won't "accept" a time declaration that is
"too far" from the local notion of now. If you want to *set* the
current time, you use something like ntpdate to impose a specific time
regardless of how far that deviates from your own notion.]
*or* you have to look at your current notion of "now"
and ensure that the "real" value of now, when obtained from the
time server, is always in the future relative to your notion.
Actually I don't do that and I replace the timer counter with the
value retrieved from NTP.
Then you run the risk that the local counter may have already surpassed
the NTP "count" by, for example, N seconds. And, time now jerks backwards as the previous N seconds appear to be relived.
Will you AGAIN do the task that was scheduled for "a few seconds ago"?
(even though it has already been completed) Will you remember to ALSO
do the task that expected to be done an hour before that -- if the "jerk back" wasn't a full hour?
You likely wrote your code (or, your user scheduled events) on the
assumption that there are roughly 60 seconds between any two "minutes",
etc. And that time1 precedes time2 by (time2 - time1) actual seconds.
What happens if the local timer is clocked by a faster clock then
nominal? For example, 16.001MHz with 16M prescaler.
If I try to NTP re-sync every 1-hour, it's probably the local counter
is greater than the value retrieved from NTP. I'm forced to decrease
the local counter, my notion of "now".
No. You change the rate at which you run the local "clock" -- whatever
timebase you are counting. So, if your jiffy was designed to happen at
100 ms intervals (counted down from some XTAL reference by a divisor of
MxN) and you now discover that your notion of 100 was actually 98.7 REAL ms
(because your time has been noted as moving faster than the NTP reference),
then you change the divisor used to generate the jiffy to something
slightly larger, to effectively slow the jiffy down to 100+ ms (the "+"
being present to ensure the local time eventually slows enough so that
"real" time falls into sync).
This is a continuous process. (Read the NTP sources and how the kernel
implements adjtime().)
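A toy version of that steering loop (the constants are arbitrary choices for illustration; real NTP uses a PLL/FLL with carefully tuned time constants):

```c
#include <stdint.h>

/* Nominal tick is 100 ms = 100000 us; 'trim_us' is added per tick so
 * the local clock can be sped up or slowed down without ever stepping. */
static int64_t local_us;     /* local notion of time, microseconds */
static int32_t trim_us = 0;  /* per-tick correction, microseconds  */

void jiffy_tick(void) { local_us += 100000 + trim_us; }

/* On each NTP exchange, steer the rate toward the reference instead of
 * stepping: spread a fraction of the measured offset over future ticks. */
void ntp_adjust(int64_t reference_us)
{
    int64_t offset = reference_us - local_us;
    trim_us = (int32_t)(offset / 16 / 1000);  /* gentle, converges over time */
}
```

Because only the per-tick increment changes, local time remains monotonic even when the device is ahead of the reference.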
What happens if the time doesn't flow in one direction only?
Then everything that (implicitly) relies on time to be monotonic is
hosed.
Repeat the examples at the start of my post with the case of time
jumping backwards and see what happens.
What if time goes backwards enough to muck with some calculation
or event sequence -- but, not far enough to cause the code that
*schedules* those events to reflect the difference.
What would you do if you saw entries in a log file:
12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:04 finished up
[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]
This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".
It's important that the RATE of time passage is reasonably accurate
and consistent (and monotonically increasing). But, the notion of
the "time of day" is dubious and exists just as a convenience for
humans to order events relative to the outside world (which uses
wall clocks). How accurate is YOUR wall clock? Does it agree
with your cell phone's notion of now? The alarm clock in your
bedroom? Your neighbor's timepiece when he comes to visit? etc.
At startup, if NTP server is not available and I don't have any notion of
"now", I start from a date in the past, i.e. 01/01/2020.
Then you have to be able to accept a BIG skew in the time when the first
update arrives. What if that takes an hour, a day or more (because the
server is down, badly configured or incorrect routing)? What if it
*never* arrives?
Certainly there's an exception at startup. When the *first* NTP response is
received, the code should accept a BIG shock to the current notion of now
(which could be undefined, or 2020, or another epoch, until then).
I read that ntpd accepts a -g command line option that enables one (and only
one) big difference between the current system notion of now and "NTP now".
I admit that this could lead to odd behaviours, as you explained. IMHO, however,
there aren't many solutions at startup, mainly if the embedded device should be
autonomous and can't accept suggestions from the user.
One is to suspend, at startup, all the device activities until a "fresh now" is
received from the NTP server. After that, the normal tasks are started. As you
noted, this could introduce a delay (even a BIG delay, depending on the Internet
connection and NTP servers) between power-on and the start of tasks. I think
this isn't compatible with many applications.
Another solution is to fix the code in such a way that it correctly handles a
big forward or backward step in the "now" counter.
The code I'm thinking of is not the one that manages normal timers, which can
depend on a local reference (XTAL, ceramic resonator, ...) completely
independent of the calendar counter. Most of the time the precision of timers
isn't strict and intervals are short: we need to activate a relay for 3 seconds
(but nothing happens if it is activated for 3.01 seconds), or we need to
generate a 100 ms pulse on an output (but no problem if it is 98 ms).
This means having a main counter clocked at 10 ms (or whatever) from a local
clock of 100 Hz (or whatever). This counter isn't corrected by NTP.
The only code that must be fixed is the one that manages events that must occur
at specific calendar times (at 12 o'clock on 1st January, at 8:30 every day,
and so on). So you should have *another* counter, clocked at 1 Hz (or 10 Hz or
100 Hz), that is adjusted by NTP. And abrupt changes should be taken into
account (even if I don't know how).
If you apply the new time in a step function, then all of the potential
time related events between ~1/1/2020 and "now" will appear to occur
at the same instant -- *now* -- or, not at all. And, any time-related
calculations will be grossly incorrect.
start_time := now()
dispenser(on)
wait_until(start_time + interval)
Imagine what will happen if the time is changed during this fragment.
If the change adds >= interval to the local notion of now, then the
dispenser will be "on" only momentarily. If it adds (0,interval),
then it will be on for some period LESS than the "interval" intended.
[I'm ignoring the possibility of it going BACKWARDS, for now]
Note that wait_until() could have been expressed as delay(interval)
and, depending on how this is internally implemented, it might be
silently translated to a wait_until() and thus dependant on the
actual value of now().
Good point. As I wrote before, events that aren't strictly related to wall-clock time shouldn't be coded with functions that use now(). If the code that generates a 100 ms pulse on an output uses now(), it is wrong and must be corrected.
Similarly, any implied ordering of actions is vulnerable:
If you implement it in this way:
do(THIS, time1)
do(THAT, time2)
What if the value of now() makes a jump from some time prior to
time1 to some time after time1, but before time2. Will THIS happen?
(i.e., will it be scheduled to happen?) How much ACTUAL (execution)
time will there be between THIS being started and THAT?
What if the value of now() makes a jump from some time prior to
time1 to some time after time2. Will THIS happen before THAT?
Will both start (be made ready) concurrently? Who will win the
unintended race?
[Note that many NTP clients won't "accept" a time declaration that is
"too far" from the local notion of now. If you want to *set* the
current time, you use something like ntpdate to impose a specific time
regardless of how far that deviates from your own notion.
void do_after(action_fn fn, uint32_t delay_ms)  /* "do" is a C keyword, so the function needs another name */
{
    timer_add(delay_ms, fn);
}
and timer_add() uses the counter that is clocked *only* from the local reference, no problem occurs.
Some problems could occur when time1 and time2 are calendar times. One
solution could be to have one module that manages calendar events with the
following interface:
cevent_hdl_t cevent_add(time_t time, cevent_fn fn, void *arg);
void cevents_do(time_t now);
Every second cevents_do() is called with the new calendar time (seconds from
an epoch).
void cevents_do(time_t now) {
    static time_t old_now;
    if (old_now != 0 && now != old_now + 1) {
        /* There's a discontinuity in now. What can we do?
         * - Remove expired events without calling the callback
         * - Remove expired events and call the callback for each of them
         * I think the choice is application dependent */
    }
    /* Process the head of the FIFO queue (which is sorted by time);
     * compare with <= so no event is skipped if now jumps forward */
    cevent_s *ev;
    while ((ev = cevents_queue_peek()) != NULL && ev->time <= now) {
        cevents_queue_pop();   /* pop before the call, in case fn adds events */
        ev->fn(ev->arg);
    }
    old_now = now;
}
*or* you have to look at your current notion of "now"
and ensure that the "real" value of now, when obtained from the
time server, is always in the future relative to your notion.
Actually I don't do that and I replace the timer counter with the value
retrieved from NTP.
Then you run the risk that the local counter may have already surpassed
the NTP "count" by, for example, N seconds. And time now jerks backwards
as the previous N seconds appear to be relived.
Will you AGAIN do the task that was scheduled for "a few seconds ago"?
(even though it has already been completed) Will you remember to ALSO
do the task that expected to be done an hour before that -- if the "jerk
back" wasn't a full hour?
Good questions. You could try to implement a complex calendar time system in
your device, one that mimics a full-featured OS. I mean that the counter that
tracks "now" (seconds or milliseconds from an epoch) isn't changed abruptly,
but its reference is slowed down or accelerated.
You need hardware that supports this. Many processors have timers that can
be used as counters, but their clock reference is limited to a prescaled main
clock, and the prescaler value is usually an integer, maybe only one from a
limited set of values (1, 2, 4, 8, 32, 64, 256).
Anyway, even if you are smart enough to implement this correctly, you still
have to solve the "startup issue". What happens if the first NTP response
arrives 5 minutes after startup and your notion of now at startup is
completely useless (i.e., no battery is present)?
Maybe during initialization code you already added some calendar events.
What happens if the time doesn't flow in one direction only?
Then everything that (implicitly) relies on time to be monotonic is
hosed.
Repeat the examples at the start of my post with the case of time
jumping backwards and see what happens.
What if time goes backwards enough to muck with some calculation
or event sequence -- but, not far enough to cause the code that
*schedules* those events to reflect the difference.
What would you do if you saw entries in a log file:
12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:04 finished up
In the real world, could this happen? Except at startup, the seconds reported
by NTP should be very similar to the "local seconds" clocked from the local
reference. I didn't make any test, but I expect the offsets measured by NTP
to be well below 1 s in normal situations. The worst case should be:
12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:14 finished up
I admit it's not very good.
[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]
This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".
It's important that the RATE of time passage is reasonably accurate
and consistent (and monotonically increasing). But, the notion of
the "time of day" is dubious and exists just as a convenience for
humans to order events relative to the outside world (which uses
wall clocks). How accurate is YOUR wall clock? Does it agree
with your cell phone's notion of now? The alarm clock in your
bedroom? Your neighbor's timepiece when he comes to visit? etc.
On 12/30/2022 9:08 AM, pozz wrote:
At startup, if NTP server is not available and I don't have any
notion of "now", I start from a date in the past, i.e. 01/01/2020.
Then you have to be able to accept a BIG skew in the time when the first
update arrives. What if that takes an hour, a day or more (because the
server is down, badly configured or incorrect routing)? What if it
*never* arrives?
Certainly there's an exception at startup. When the *first* NTP response is received, the code should accept a BIG shock to the current notion of now (which could be undefined, or 2020, or another epoch until then).
I read that ntpd accepts a -g command line option that enables one (and only one) big difference between the current system notion of now and "NTP now".
Yes. But, your system design still has to "make sense" if it NEVER gets told the current time.
I admit this could lead to odd behaviours, as you explained. IMHO, however, there aren't many solutions at startup, especially if the embedded device should be autonomous and can't accept hints from the user.
Note that "wall/calendar time" is strictly a user convenience. A device need only deal with it if it has to interact with a world in which the
user relates to temporal events using some external timepiece -- which
may actually be inaccurate!
But, your device can always have its own notion of time that monotonically increases -- even if the rate of time that it increases isn't entirely accurate wrt "real units" (i.e., if YOUR second is 1.001 REAL seconds,
that's likely not too important)
So, if you can postpone binding YOUR "system time" (counting jiffies)
to "wall time", then the problem is postponed.
E.g., I deal with events as referenced to *my* timebase (bogounits):
00000006 system initialized
00001003 network on-line
00001100 accepting requests
00020348 request from 10.0.1.88
00020499 reply to 10.0.1.88 issued
...
Eventually, there will be an entry:
XXXXXXXX NTPclient receives update (12:42:00.444)
At this point, you can retroactively update the times in the "log" with "real" times, relative to the time delivered to the NTP client.
(or, leave the log in bogounits and not worry about it)
The real problem comes when <someone> wants <something> to happen at
some *specific* wall time -- and, you don't yet know what the current
wall time happens to be!
If that will be guaranteed to be sufficiently far in the future that
you (think!) the actual wall time will be known to you, then you
can just cross your fingers and wait to sort out "when" that will be.
One is to suspend, at startup, all the device activities until a "fresh now" is received from the NTP server. After that, the normal tasks are started. As you noted, this could introduce a delay (even a BIG delay, depending on the Internet connection and NTP servers) between power-on and the start of the tasks. I think this isn't compatible with many applications.
Another solution is to fix the code so that it correctly handles a big forward or backward step in the "now" counter.
You're better off picking a time you KNOW to be in the past so that
any adjustments continue to run time *forwards*. We inherently think
of A happening after B implying that time(A) > time(B). It's so
fundamental that you likely don't even notice these dependencies in
your code/algorithms.
The code I'm thinking of is not the one that manages normal timers, which can depend on a local reference (XTAL, ceramic resonator, ...) completely independent from the calendar counter. Most of the time, the precision of timers isn't strict and intervals are short: we need to activate a relay for 3 seconds (but nothing happens if it is activated for 3.01 seconds), or we need to generate a 100ms pulse on an output (but no problem if it is 98ms).
This means having a main counter clocked at 10ms (or whatever) from a
local clock of 100Hz (or whatever). This counter isn't corrected with
NTP.
The only code that must be fixed is the one that manages events that must occur at specific calendar times (at 12 o'clock on the 1st of January, at 8:30 every day, and so on). So you should have *another* counter clocked at 1Hz (or 10Hz or 100Hz) that is adjusted by NTP. And abrupt changes should be taken into account (even if I don't know how).
You can use NTP to discipline the local oscillator so that times
measured from it are "more accurate". This, regardless of whether
or not the local time tracks the wall time.
If you apply the new time in a step function, then all of the potential
time related events between ~1/1/2020 and "now" will appear to occur
at the same instant -- *now* -- or, not at all. And, any time-related
calculations will be grossly incorrect.
start_time := now()
dispenser(on)
wait_until(start_time + interval)
Imagine what will happen if the time is changed during this fragment.
If the change adds >= interval to the local notion of now, then the
dispenser will be "on" only momentarily. If it adds (0,interval),
then it will be on for some period LESS than the "interval" intended.
[I'm ignoring the possibility of it going BACKWARDS, for now]
Note that wait_until() could have been expressed as delay(interval) and, depending on how this is internally implemented, it might be silently translated to a wait_until() and thus dependent on the actual value of now().
Good point. As I wrote before, events that aren't strictly related to the wall clock shouldn't be coded with functions that use now(). If the code that makes a 100ms pulse at an output uses now(), it is wrong and must be corrected.
Time should always be treated "fuzzily".
So, if (now() == CONSTANT) may NEVER be satisfied! E.g., if the code
runs at time CONSTANT+1, then you can know that it's never going to
meet that condition (imagine it in a wait_till loop)
If, instead, you assume that something may delay that statement from
being executed *prior* to CONSTANT, you may, instead, want to
code it as "if (now() >= CONSTANT)" to ensure it gets executed.
(and, if you only want it to be executed ONCE, then take steps to
note when you *have* executed it so you don't execute it again)
For example, my system is real-time so every action has an
associated deadline. But, it is entirely possible that some
actions will be blocked until long after their deadlines
have expired. Checking for "now() == deadline" would lead
to erroneous behavior; the time between deadline and now()
effectively doesn't exist, from the perspective of the
action in question. So, the deadline handler should be
invoked for ANY now() >= deadline.
Similarly, any implied ordering of actions is vulnerable:
do(THIS, time1)
do(THAT, time2)
What if the value of now() makes a jump from some time prior to
time1 to some time after time1, but before time2. Will THIS happen?
(i.e., will it be scheduled to happen?) How much ACTUAL (execution)
time will there be between THIS being started and THAT?
What if the value of now() makes a jump from some time prior to
time1 to some time after time2. Will THIS happen before THAT?
Will both start (be made ready) concurrently? Who will win the
unintended race?
[Note that many NTP clients won't "accept" a time declaration that is "too far" from the local notion of now. If you want to *set* the current time, you use something like ntpdate to impose a specific time regardless of how far that deviates from your own notion.]
If you implement it in this way ("do" is a reserved word in C, so a real implementation would need another name):
void do(action_fn fn, uint32_t delay_ms) {
    timer_add(delay_ms, fn);
}
and timer_add() uses the counter that is clocked *only* from the local reference, no problem occurs.
Some problems could occur when time1 and time2 are calendar times. One
solution could be to have one module that manages calendar events with
the following interface:
cevent_hdl_t cevent_add(time_t time, cevent_fn fn, void *arg);
void cevents_do(time_t now);
Every second cevents_do() is called with the new calendar time
(seconds from an epoch).
What if a "second" is skipped (because some higher priority activity
was using the processor)?
void cevents_do(time_t now) {
    static time_t old_now;
    if (now != old_now + 1) {
        /* There's a discontinuity in now. What can we do?
         * - Remove expired events without calling the callback
         * - Remove expired events and call the callback for each of them
         * I think the choice is application dependent */
    }
    /* Process the head of the FIFO queue (which is sorted by time).
     * Use <= rather than == so events aren't lost if a second is
     * skipped, and guard against an empty queue. */
    cevent_s *ev;
    while ((ev = cevents_queue_peek()) != NULL && ev->time <= now) {
        ev->fn(ev->arg);
        cevents_queue_pop();
    }
    old_now = now;
}
You have to figure out how to generalize this FOR YOUR APPLICATION.
Some things may not be important enough to waste effort on;
others may be considerably more sensitive.
*or* you have to look at your current notion of "now"
and ensure that the "real" value of now, when obtained from the
time server, is always in the future relative to your notion.
Actually I don't do that and I replace the timer counter with the
value retrieved from NTP.
Then you run the risk that the local counter may have already surpassed the NTP "count" by, for example, N seconds. And time now jerks backwards, as the previous N seconds appear to be relived.
Will you AGAIN do the task that was scheduled for "a few seconds ago"? (even though it has already been completed) Will you remember to ALSO do the task that expected to be done an hour before that -- if the "jerk back" wasn't a full hour?
Good questions. You could try to implement a complex calendar time system in your device, one that mimics a full-featured OS. I mean, the counter that tracks "now" (seconds or milliseconds from an epoch) isn't changed abruptly; instead, its reference is slowed down or accelerated.
If you ensure time always moves forward, most of these issues are
easy to resolve. You *know* you haven't done things scheduled for
t > now().
You need hardware that supports this. Many processors have timers that can be used as counters, but their clock reference is limited to a prescaled main clock, and the prescaler value is usually an integer, maybe only one from a limited set of values (1, 2, 4, 8, 32, 64, 256).
You can dither the timebase so the average rate tracks your intent.
Anyway, even if you manage to implement this correctly, you still have to solve the "startup issue". What happens if the first NTP response arrives 5 minutes after startup and your notion of now at startup is completely useless (i.e., no battery is present)?
Maybe during initialization code you already added some calendar events.
So, those may *never* get executed -- if the NTP server never replies OR
if you've coded for "time == now()". Or, may get executed (much) later
than intended (e.g., if the NTP server tells you it is 6:00, now, and
you had something scheduled for 5:00...)
What happens if the time doesn't flow in one direction only?
Then everything that (implicitly) relies on time to be monotonic is
hosed.
Repeat the examples at the start of my post with the case of time
jumping backwards and see what happens.
What if time goes backwards enough to muck with some calculation
or event sequence -- but, not far enough to cause the code that
*schedules* those events to reflect the difference.
What would you do if you saw entries in a log file:
12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:04 finished up
In a real world, could this happen?
In a multithreaded application, of course it can!
task0() {
spawn(task1);
log("finished up");
}
task1() {
log("start something");
...
log("did whatever");
log("did something else")
}
Assume log() prepends a timestamp to the message emitted.
Assume task1 is lower priority than task0. It is spawned by
task0 but doesn't get a chance to execute until after
task0 has already printed its final message and quit.
If multiple processors/nodes are involved, then the uncertainty
between their individual clocks further complicates this.
And, of course, what do you do if <something> deliberately
introduces a delta to the current time?
Imagine Bob wants to set an alarm for a meeting at 5:00PM.
He then changes the current time to one hour later -- presumably
because he noticed that the clock was incorrect. Does that
mean the meeting will be one hour *sooner* than it would
appear to have been, previously?
What if he notices the date is off and it's really "tomorrow"
and advances the date by one day. Should the alarm be
canceled as the appointed time has already passed? Or,
should the date component of the alarm time be similarly
advanced?
And, what will *Bob* think the correct answers to these
questions should be? Will he be pissed because the alarm
didn't go off when he *expected* it? Or, pissed because the
alarm went off even though the meeting was YESTERDAY?
Except at startup, the seconds reported from NTP should be very similar to the "local seconds" clocked from the local reference. I haven't run any tests, but I expect offsets measured by NTP to be well below 1s in normal situations. The worst case should be:
12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:14 finished up
I admit it's not very good.
[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]
This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".
It's important that the RATE of time passage is reasonably accurate
and consistent (and monotonically increasing). But, the notion of
the "time of day" is dubious and exists just as a convenience for
humans to order events relative to the outside world (which uses
wall clocks). How accurate is YOUR wall clock? Does it agree
with your cell phone's notion of now? The alarm clock in your
bedroom? Your neighbor's timepiece when he comes to visit? etc.
On 31/12/2022 01:35, Don Y wrote:
On 12/30/2022 9:08 AM, pozz wrote:
At startup, if NTP server is not available and I don't have any notion of "now", I start from a date in the past, i.e. 01/01/2020.
Then you have to be able to accept a BIG skew in the time when the first update arrives. What if that takes an hour, a day or more (because the server is down, badly configured or incorrect routing)? What if it *never* arrives?
Certainly there's an exception at startup. When the *first* NTP response is received, the code should accept a BIG shock to the current notion of now (that could be undefined or 2020 or another epoch until now).
I read that ntpd accepts a -g command line option that enables one (and only one) big difference between the current system notion of now and "NTP now".
Yes. But, your system design still has to "make sense" if it NEVER gets
told the current time.
Yes, the only solution that comes to my mind is to have a startup calendar time, such as 01/01/2023 00:00:00. Until a new time is received from NTP, that is the calendar time the system will use.
Of course, with this wrong "now", any event related to a calendar time would fail.
The code I'm thinking of is not the one that manages normal timers, which can depend on a local reference (XTAL, ceramic resonator, ...) completely independent from the calendar counter. Most of the time, the precision of timers isn't strict and intervals are short: we need to activate a relay for 3 seconds (but nothing happens if it is activated for 3.01 seconds), or we need to generate a 100ms pulse on an output (but no problem if it is 98ms). This means having a main counter clocked at 10ms (or whatever) from a local clock of 100Hz (or whatever). This counter isn't corrected with NTP.
The only code that must be fixed is the one that manages events that must occur at specific calendar times (at 12 o'clock on the 1st of January, at 8:30 every day, and so on). So you should have *another* counter clocked at 1Hz (or 10Hz or 100Hz) that is adjusted by NTP. And abrupt changes should be taken into account (even if I don't know how).
You can use NTP to discipline the local oscillator so that times
measured from it are "more accurate". This, regardless of whether
or not the local time tracks the wall time.
Yes, but I can't remember an application I worked on that didn't track the wall time and, at the same time, needed greater precision than the local oscillator.
So, if (now() == CONSTANT) may NEVER be satisfied! E.g., if the code
runs at time CONSTANT+1, then you can know that it's never going to
meet that condition (imagine it in a wait_till loop)
If, instead, you assume that something may delay that statement from
being executed *prior* to CONSTANT, you may, instead, want to
code it as "if (now() >= CONSTANT)" to ensure it gets executed.
(and, if you only want it to be executed ONCE, then take steps to
note when you *have* executed it so you don't execute it again)
For example, my system is real-time so every action has an
associated deadline. But, it is entirely possible that some
actions will be blocked until long after their deadlines
have expired. Checking for "now() == deadline" would lead
to erroneous behavior; the time between deadline and now()
effectively doesn't exist, from the perspective of the
action in question. So, the deadline handler should be
invoked for ANY now() >= deadline.
Suppose you have some alarms scheduled weekly, for example at 8:00:00 every Monday and at 9:00:00 every Saturday.
In the week you have 604'800 seconds.
8:00 on Monday is at 28'800 seconds from the beginning of the week (I'm considering Monday as the first day of the week).
9:00 on Saturday is at 194'400 secs.
If the alarms manager is called exactly one time each second, it should be very
simple to understand if we are on time for an alarm:
if (now_weekly_secs == 28800) fire_alarm(ALARM1);
if (now_weekly_secs == 194400) fire_alarm(ALARM2);
Note the equality test. With an inequality you can't use this:
if (now_weekly_secs > 28800) fire_alarm(ALARM1);
if (now_weekly_secs > 194400) fire_alarm(ALARM2);
otherwise alarms will fire continuously after the deadline. You should tag the alarm as occurred for the current week to avoid firing it again on the next call.
Is it so difficult to *guarantee* calling alarms_manager(weekly_secs) every second?
Some problems could occur when time1 and time2 are calendar times. One solution could be to have one module that manages calendar events with the following interface:
cevent_hdl_t cevent_add(time_t time, cevent_fn fn, void *arg);
void cevents_do(time_t now);
Every second cevents_do() is called with the new calendar time (seconds from
an epoch).
What if a "second" is skipped (because some higher priority activity
was using the processor)?
A second is a very long interval. It's difficult to think of a system that isn't able to programmatically meet a deadline of one second.
What happens if the time doesn't flow in one direction only?
Then everything that (implicitly) relies on time to be monotonic is
hosed.
Repeat the examples at the start of my post with the case of time
jumping backwards and see what happens.
What if time goes backwards enough to muck with some calculation
or event sequence -- but, not far enough to cause the code that
*schedules* those events to reflect the difference.
What would you do if you saw entries in a log file:
12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:04 finished up
In a real world, could this happen?
In a multithreaded application, of course it can!
task0() {
spawn(task1);
log("finished up");
}
task1() {
log("start something");
...
log("did whatever");
log("did something else")
}
Assume log() prepends a timestamp to the message emitted.
Assume task1 is lower priority than task0. It is spawned by
task0 but doesn't get a chance to execute until after
task0 has already printed its final message and quit.
If multiple processors/nodes are involved, then the uncertainty
between their individual clocks further complicates this.
And, of course, what do you do if <something> deliberately
introduces a delta to the current time?
Imagine Bob wants to set an alarm for a meeting at 5:00PM.
He then changes the current time to one hour later -- presumably
because he noticed that the clock was incorrect. Does that
mean the meeting will be one hour *sooner* than it would
appear to have been, previously?
No. The meeting is always at 5:00PM.
What if he notices the date is off and it's really "tomorrow"
and advances the date by one day. Should the alarm be
canceled as the appointed time has already passed? Or,
should the date component of the alarm time be similarly
advanced?
IMHO, if the user set a time using the wall clock convention (shut the door at 8:00PM every evening), it shouldn't be changed when the calendar time used by the system is adjusted. Anyway, this should be application dependent.
[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]
This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".
It's important that the RATE of time passage is reasonably accurate
and consistent (and monotonically increasing). But, the notion of
the "time of day" is dubious and exists just as a convenience for
humans to order events relative to the outside world (which uses
wall clocks). How accurate is YOUR wall clock? Does it agree
with your cell phone's notion of now? The alarm clock in your
bedroom? Your neighbor's timepiece when he comes to visit? etc.
On 1/4/2023 9:09 AM, pozz wrote:
I designed a product that had a really sensitive "front end".
To reduce the impact of the ACmains on our signal, I would
run the acquisition system at the mains frequency -- and,
included a setting for 50 vs. 60Hz selection (domestic/foreign
markets).
We found that blindly assuming the ACmains frequency was
as stated wasn't enough. Errors in the local oscillator
and variations in the ACmains introduced differences
so we had to frequency lock to the actual mains. Then,
use that to derive all related timing based on the
observed frequency as expressed by the local oscillator.
Suppose you have some alarms scheduled weekly, for example at 8:00:00 every Monday and at 9:00:00 every Saturday.
In the week you have 604'800 seconds.
8:00 on Monday is at 28'800 seconds from the beginning of the week (I'm
considering Monday as the first day of the week).
9:00 on Saturday is at 194'400 secs.
If the alarms manager is called exactly one time each second, it should be very
simple to understand if we are on time for an alarm:
if (now_weekly_secs == 28800) fire_alarm(ALARM1);
if (now_weekly_secs == 194400) fire_alarm(ALARM2);
Can you *guarantee* that it will always be called once and
exactly once per second? Regardless of other activities that
may be going on in your design? Including those that you
haven't yet imagined?
It is unlikely that this "needs" to be a high priority job.
If you have to MAKE it such, then you're bastardizing the
design needlessly.
Note the equality test. With disequality you can't use this:
if (now_weekly_secs > 28800) fire_alarm(ALARM1);
if (now_weekly_secs > 194400) fire_alarm(ALARM2);
otherwise alarms will fire continuously after the deadline. You should tag the alarm as occurred for the current week to avoid firing it again on the next call.
if (!alarm1_done && now_weekly_seconds > 28800) {
fire_alarm(ALARM1);
alarm1_done = TRUE;
}
assuming that your code doesn't implicitly do this (e.g., one typically designs a timer service that lets you schedule alarms and then *clears* them (removes them) once they have expired).
...
A second is a very long interval. It's difficult to think of a system that isn't able to programmatically meet a deadline of one second.
Think harder. :>
If you don't operate in a preemptive environment, then any "task"
that hogs the processor can screw you over. Or, any *series*
of task invocations can screw you without individually misbehaving.
In a preemptive environment, any *effective* change in priorities
can screw you over.
What happens if the time doesn't flow in one direction only?
Then everything that (implicitly) relies on time to be monotonic is
hosed.
If multiple processors/nodes are involved, then the uncertainty
between their individual clocks further complicates this.
And, of course, what do you do if <something> deliberately
introduces a delta to the current time?
Imagine Bob wants to set an alarm for a meeting at 5:00PM.
He then changes the current time to one hour later -- presumably
because he noticed that the clock was incorrect. Does that
mean the meeting will be one hour *sooner* than it would
appear to have been, previously?
No. The meeting is always at 5:00PM.
But *which* 5:00PM? There's the 5:00PM on his wristwatch,
the 5:00PM in your device, the 5:00PM on his *boss's* wristwatch
(which trumps his), etc.
And, was it *today* at 5:00PM? Which "today"? (given that all
of these are arbitrary time references)
On Wed, 4 Jan 2023 12:19:55 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:
I designed a product that had a really sensitive "front end".
To reduce the impact of the ACmains on our signal, I would
run the acquisition system at the mains frequency -- and,
included a setting for 50 vs. 60Hz selection (domestic/foreign
markets).
We found that blindly assuming the ACmains frequency was
as stated wasn't enough. Errors in the local oscillator
and variations in the ACmains introduced differences
so we had to frequency lock to the actual mains. Then,
use that to derive all related timing based on the
observed frequency as expressed by the local oscillator.
You can't assume AC frequency will be held ... generator spin rates
are not constant under changing loads, and rectification is more
difficult because of the high voltages and the fact that generators
typically are 4..12 multiphase (for efficiency) being squashed into
(some resemblance of) a sine wave.
But the utilities are required to provide the expected number of
cycles in a given period. In the US, that period is contractual (not
law) and typically is from 6..24 hours.
It is unlikely that this "needs" to be a high priority job.
If you have to MAKE it such, then you're bastardizing the
design needlessly.
Note the equality test. With disequality you can't use this:
if (now_weekly_secs > 28800) fire_alarm(ALARM1);
if (now_weekly_secs > 194400) fire_alarm(ALARM2);
otherwise alarms will fire continuously after the deadline. You should tag the alarm as occurred for the current week to avoid firing it again on the next call.
if (!alarm1_done && now_weekly_seconds > 28800) {
fire_alarm(ALARM1);
alarm1_done = TRUE;
}
assuming that your code doesn't implicitly do this (e.g., one typically designs a timer service that lets you schedule alarms and then *clears* them (removes them) once they have expired).
Exactly. The above is an example of 'at most once'.
But incomplete: it fails to reset for next week. ;-)
What happens if the time doesn't flow in one direction only?
Then everything that (implicitly) relies on time to be monotonic is >>>>>> hosed.
Simple enough to maintain monotonically increasing system time (at
least until the counter rolls over, but that's easily handled).