Is there an easier way of doing this? End goal is a double number representing centi-secs.
empty decimal
: SPLIT ( a u c -- a2 u2 a3 u3 ) >r 2dup r> scan 2swap 2 pick - ;  \ a3 u3 = field before c, a2 u2 = rest from c
: >INT ( adr len -- u ) 0 0 2swap >number 2drop drop ;  \ an empty string converts to 0
: /T ( a u -- $hour $min $sec )  \ split on ':' ; missing fields come back as empty strings
2 0 do [char] : split 2swap dup if 1 /string then loop
2 0 do dup 0= if 2rot 2rot then loop ;
: .T 2swap 2rot cr >int . ." hr " >int . ." min " >int . ." sec " ;
s" 1:2:3" /t .t
s" 02:03" /t .t
s" 03" /t .t
s" 23:59:59" /t .t
s" 0:00:03" /t .t
Not bad. Here's a translation. Hopefully it's equivalent (?)
: split ( a u c -- a2 u2 a3 u3 )
>r 2dup r> scan 2swap 2 pick - ;
: number ( a u -- ud ) 0 0 2swap >number 2drop ;  \ leaves a double; callers DROP the high cell
: xx. ( u -- ) 0 <# bl hold # # #> type ;
: tab3. ( h m s -- ) 3 spaces ( tab) rot xx. swap xx. xx. ;
: ts_elms ( a u -- h m s )
2>r 0 0 0 2r> begin
[char] : skip [char] : split dup 0> while
number drop 5 roll drop -rot
repeat 2drop 2drop ;
s" 25" ts_elms tab3. 00 00 25 ok
s" 10:25" ts_elms tab3. 00 10 25 ok
s" 2:10:25" ts_elms tab3. 02 10 25 ok
OTOH some implementations
are just neater and it's a matter of finding them!
2>r 0 0 0 2r> begin[..]
/int 5 -roll rot drop dup while [char] : ?skip
repeat 2drop ;
Mr. Fifo - self-proclaimed "Mark Twain of Forth" -
has no idea that writing Forth code doesn't
mean moving bytes around "Back and Forth"
(where did I see that? Let's see... :D ).
Stack jugglery means wasting CPU cycles on
moving the bytes around - it's counterproductive.
Variables have been invented to be used. They're
useful, if you didn't notice, or if they didn't
tell you that in college, or wherever.
----
On 20/06/2025 3:36 pm, minforth wrote:
Counter-example: a good number of my apps involve structs, arrays
and signal vectors in heap memory. Stack juggling? Absolutely not.
The code would be unreadable and a nightmare to debug.
Factoring into smaller code portions is often impossible because
you can't always distribute data that inherently belongs together
over separate words.
Then why factor, when, with named parameters (= locals), the
code is already short, readable, maintainable, and bug-free?
Ask yourself why the Forth Scientific Library makes heavy use of
locals.
Of course things look different with simpler applications.
What you're saying is at the level you program, it hardly matters
whether it's Forth or something else. It's true I have little to
no reason to use floating-point. I did wonder why Julian Noble
persisted with Forth.
On Fri, 20 Jun 2025 5:36:05 +0000, minforth wrote:
Counter-example: a good number of my apps involve structs, arrays
and signal vectors in heap memory. Stack juggling? Absolutely not.
The code would be unreadable and a nightmare to debug.
Factoring into smaller code portions is often impossible because
you can't always distribute data that inherently belongs together
over separate words.
Then why factor, when, with named parameters (= locals), the
code is already short, readable, maintainable, and bug-free?
Interesting questions. My experience says that arrays and vectors are
ok, but structs are dangerous, (especially?) when nested. In a 'C'
project that I contribute to, structs arbitrarily glue data together,
and then forward-defined macros hide the details.
It is impossible to debug this code without tools to decompile/inspect
the source. It is very difficult to change/rearrange/delete struct
fields, because they may be used in other parts of the code for a
completely different purpose. The result is that structs only grow
and nobody dares to prune them. The only remedy is to completely
start over.
Ask yourself why the Forth Scientific Library makes heavy use of
locals.
Because the original algorithms do.
Of course things look different with simpler applications.
And then Einstein's famous quote spoils the fun.
-marcel--
You can repair such things by using new stack paradigms. I've added
several, most of 'em inspired by others. E.g.
"swap 3OS with TOS" (SPIN, a b c -- c b a)
"DUP 2OS" (STOW, a b -- a a b)
But of course, you have to do the work. If you're incapable or too lazy
to do the work, yeah, then you will find Forth bites you. Note that C is
a very nice language as well. Beats Forth performance-wise - so, what's
there not to like :)
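For reference, one possible plain standard-Forth rendering of those two
operations (my sketch, not the poster's actual code):
: SPIN ( a b c -- c b a )  swap rot ;
: STOW ( a b -- a a b )    over swap ;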
So, I made me a small extension to the locals word set. Using your
example SPIN (abc — cba), I can define it as follows:
: SPIN { a b c == c b a } ; \ no need for additional code before ;
or likewise for floats, doubles, strings, matrices
: FSPIN { f: a b c == c b a } ;
: DSPIN { d: a b c == c b a } ;
: "SPIN { s: a b c == c b a } ;
: MSPIN { m: a b c == c b a } ;
Code generation and register optimization is the computer's job.
SPIN/STOW or similar microexamples can, of course, be defined quickly
with classic Forth stack juggling too. The power of the extension
becomes more apparent with mixed parameter types and/or more parameters,
and of course, with some non-trivial algorithm to solve.
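As a point of comparison (my sketch, not minforth's code), the float
variant is indeed a one-liner with classic juggling on systems that
provide the FLOATING word set:
: FSPIN ( F: a b c -- c b a )  fswap frot ;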
On Sun, 22 Jun 2025 21:27:40 +0000, minforth wrote:
[..]
Do you mean your compiler automatically handles/allows combinations
like
... 22e-12 69. A{{ ( F: -- a ) ( D: -- b ) ( M: -- c ) SPIN ...
I found that handling mixed types explodes the code that needs
to be written for a simple compiler like, e.g., Tiny-KISS. It
would be great if that could be automated.
On Mon, 23 Jun 2025 5:40:37 +0000, mhx wrote:
On Sun, 22 Jun 2025 21:27:40 +0000, minforth wrote:
[..]
Do you mean your compiler automatically handles/allows combinations
like
... 22e-12 69. A{{ ( F: -- a ) ( D: -- b ) ( M: -- c ) SPIN ...
I found that handling mixed types explodes the code that needs
to be written for a simple compiler like, e.g., Tiny-KISS. It
would be great if that could be automated.
I don't know if I got you right, because as previously
defined, SPIN expects three integers on the data stack.
On Mon, 23 Jun 2025 10:02:44 +0000, minforth wrote:
[..]
I don't know if I got you right, because as previously
defined, SPIN expects three integers on the data stack.
I was indeed too hasty. If items are stacked, no type conversion
is needed if they are only reordered. (Reordering needs
no code anyway, as it is only a memo to the compiler.) The
problems only arise when a cell must be translated to a complex
extended float, or when using floats to initialize an arbitrary
precision matrix.
Do you really support matrix and string type locals? The former
I only do for arbitrary precision, the latter can be handled
with DLOCALS| .
minforth@gmx.net (minforth) writes:
So, I made me a small extension to the locals word set. Using your
example SPIN (abc — cba), I can define it as follows:
: SPIN { a b c == c b a } ; \ no need for additional code before ;
What is the advantage of using this extension over the Forth-2012:
: spin {: a b c :} c b a ;
?
On 23-06-2025 23:03, minforth wrote:
On Mon, 23 Jun 2025 5:18:34 +0000, Anton Ertl wrote:
minforth@gmx.net (minforth) writes:
So, I made me a small extension to the locals word set. Using your
example SPIN (abc — cba), I can define it as follows:
: SPIN { a b c == c b a } ; \ no need for additional code before ;
What is the advantage of using this extension over the Forth-2012:
: spin {: a b c :} c b a ;
?
Obviously, there is no advantage for such small definitions.
For me, the small syntax extension is a convenience when working
with longer definitions. A bit contrived (:= synonym for TO):
: SOME-APP { a f: b c | temp == n: flag z: freq }
\ inputs: integer a, floats b c
\ uninitialized: float temp
\ outputs: integer flag, complex freq
<: FUNC < ... calc function ... > ;>
\ emulated embedded function using { | xt: func }
< ... calc something ... > := temp
< ... calc other things ... > := freq / basic formula
< ... calc other things ... > := flag
< ... calc correction ... > := freq / better estimation
;
While working on such things, I can focus my eyes on the formulas,
all local values are visible in one place, and I don't have to
worry about tracking the data stack(s) for lost/accumulated items.
As I said, it is nothing spectacular, just helpful. And to my own
eyes, it looks neater. ;-)
And before dxf yowls again: it is still Forth. :o)
Well.. Technically everything written in Forth is Forth. But it is not canonical Forth - because if it were canonical Forth, we would have
covered locals in "Starting Forth" - and we didn't.
Now, let's assume we found we were wrong. But there was a chapter in "Thinking Forth" called "The stylish stack" - not "The stylish locals".
As a matter of fact, it states that "the stack is not an array" -
meaning: not randomly accessible. And what are locals? Right. Randomly accessible.
So, what is this? It's a feeble imitation of C. It's not part of the
original design. Because if it were part of the original design, you
would find out what it means to think differently. This is merely C
thinking. Nothing else. Certainly not Forth thinking.
'Look, Ma - I've solved Forth's biggest problem.' ;-)
No really, I'm not kidding. When done properly Forth actually changes
the way you work. Fundamentally. I explained the sensation at the end of
"Why Choose Forth". I've been able to tackle things I would never have
been able to tackle with a C mindset. ( https://youtu.be/MXKZPGzlx14 )
Like I always wanted to do a real programming language - no matter how primitive. Now I've done at least a dozen - and that particular trick
seems to get easier by the day.
And IMHO a lot can be traced back to the very simple principles Forth is based upon - like a stack. Or the triad "Execute-Number-Error". Or the dictionary. But also the lessons from Thinking Forth.
You'll also find it in my C work. There are a lot more "small functions"
than in your average C program. It works for me like an "inner API". Not
to mention uBasic/4tH - There are plenty of "one-liners" in my
uBasic/4tH programs.
But that train of thought needs to be maintained - and it can only be maintained by submitting to the very philosophy Forth was built upon. I
feel like if I would give in to locals, I'd be back to being an average
C programmer.
I still do C from time to time - but it's not my prime language. For
this reason - and because I'm often just plain faster when using Forth.
It just results in a better program.
The only thing I can say is, "it works for me". And when I sometimes
view the works of others - especially when resorting to a C style - I
feel like it could work for you as well.
Nine times out of ten one doesn't need the number of locals that are applied. One doesn't need a 16-line word - at least not when you
actually want to maintain the darn thing. One could tackle the problem
much more elegantly.
It's that feeling..
On Mon, 23 Jun 2025 5:18:34 +0000, Anton Ertl wrote:
What is the advantage of using this extension over the Forth-2012:
: spin {: a b c :} c b a ;
?
Obviously, there is no advantage for such small definitions.
For me, the small syntax extension is a convenience when working
with longer definitions. A bit contrived (:= synonym for TO):
[..]
My philosophy for developing programs is "follow the problem".
That is, we have a problem to solve (a task to do). We need to
understand it, introduce some data structures, and specify the
needed computation. This is mostly independent of the programming
language. When the problem is not well understood, we need
to do some research. Here experiments may help a lot,
and having an interactive programming language is useful
(so this is a plus of Forth compared to C). Once we have the
data structures and know what computation is needed, we
need to encode (represent) this in the chosen language.
I would say that the large-scale structure of the program
will be mostly independent of the programming language.
There will be differences at small scale, as different
languages have different idioms. "Builtin" features of the
language or "standard" libraries may do a significant
part of the work. The effort of coding may vary widely,
depending on how much is supported by the language and the
surrounding ecosystem and how much must be newly
coded. Also, the debugging features of the programming
system affect the speed of coding.
Frankly, I do not see how missing language features
can improve a design. I mean, there are people who
try to use fancy features when they are not needed.
But the large-scale structure of a program should not be
affected by this. And at smaller scale, with some
experience, it is not hard to avoid unneeded features.
I would say that there is a natural way to approach a
given problem, and usually the best program is one that
follows the natural way. Now, if the problem naturally needs
several interdependent attributes, we need to represent
them in some way. If the dependence is naturally stack-like,
then a stack is a good fit. If the dependence is not
naturally stack-like, using a stack may be possible
after some reorganisation. But my experience is
that if a given structure does not naturally appear
after some research, then reorganisation is not
very likely to lead to such a structure. And even if
one manages to tweak the program into such a structure, it
is not clear if it is a gain. Anyway, there is a substantial
number of problems where a stack is unlikely to work in a
natural way. So how to represent attributes? If they
are needed only inside a single function, then the natural
way is using local variables. One can use globals, but
for variables that are not needed outside a function
this is unnatural. One can use stack juggling; this
works, but IMO is unnatural. One can collect the attributes
in a single structure, dynamically allocated at
function entry and freed at exit. This works, but
again is unnatural and needs extra code.
You have some point about the length of functions. While
pretty small functions using locals are possible, I
have a few longer functions where the main reason for keeping
the code in one function is that various parts need access
to the same local variables. But I doubt that eliminating
locals and splitting such functions leads to better code:
we get a cluster of functions which depend on each other via
common attributes.
These are my observations as well. It all depends on the problem
that you are facing. Now there are some guys who behave
like self-declared Forth mullahs who shout heresy against
those who don't DUP ROT enough.
I realized that (fortunately) long ago - actually
Stephen Pelc made me realize that (thanks) - see
the old thread "Vector additon" here:
https://groups.google.com/g/comp.lang.forth/c/m9xy5k5BfkY/m/qoq664B9IygJ
...and in particular this message:
https://groups.google.com/g/comp.lang.forth/c/m9xy5k5BfkY/m/-SIr9AqdiRsJ
Now there are some guys who behave
like self-declared Forth mullahs who shout heresy against
those who don't DUP ROT enough.
Is theirs the Forth philosophy?? Really?? I thought the main
Forth principle was "keep it simple". When stack reordering
is the easier way, do it. When using locals is the easier way,
do it.
The more common complaint is that you use some feature they dislike
(typically locals) when you would otherwise DUP ROT instead.
But aren't 'locals' actually PICK/ROLL in disguise?
Aren't 'locals' actually PICK/ROLL in disguise?
: 3DUP ( a b c -- a b c a b c ) 3 PICK 3 PICK 3 PICK ;
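And the locals-based rendering of the same word, for comparison (a
sketch using Forth-2012 {: :} syntax; 3DUP-L is just a name I chose to
avoid redefining the word above):
: 3DUP-L ( a b c -- a b c a b c ) {: a b c :} a b c a b c ;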
In a traditional Forth with locals, the locals are stack allocated so accessing them usually costs a memory reference. The programmer gets
the same convenience as a C programmer. The runtime takes a slowdown compared to code from a register-allocating compiler, but such a
slowdown is already present in a threaded interpreter, so it's fine.
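A rough illustration of that point (mine, not any particular system's
implementation; a real system would use a locals or return-stack frame
rather than a static buffer, which is not reentrant):
create FRAME 3 cells allot
: SPIN-FRAMED ( a b c -- c b a )
   FRAME 2 cells + !  FRAME cell+ !  FRAME !    \ spill a b c to memory
   FRAME 2 cells + @  FRAME cell+ @  FRAME @ ;  \ every access is a fetch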
The PDP-11 and 8086 had 8 registers and
programmers found that to be painful.
Eight registers "painful"? Then how would you
describe 6502 and its one plus two half-registers?
:D
----
On 05.07.2025 at 14:41, minforth wrote:
On 05.07.2025 at 14:21, albert@spenarnc.xs4all.nl wrote:
I investigated the instruction set, and I found no way to detect
if the 8-register stack is full.
This would offer the possibility to spill registers to memory only
if it is needed.
IIRC signaling and handling fp-stack overflow is not an easy task.
At worst, the computer would crash.
IOW, spilling makes sense.
A deep dive into the manual
... the C1 condition code flag is used for a variety of functions.
When both the IE and SF flags in the x87 FPU status word are set,
indicating a stack overflow or underflow exception (#IS), the C1
flag distinguishes between overflow (C1=1) and underflow (C1=0).
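Checking this from Forth would need a system-specific way to read the
x87 status word; assuming a hypothetical FSW@ ( -- u ) that returns it
(not a standard word), the bit tests from the excerpt above would read:
: FP-STACK-FAULT? ( -- flag )     \ #IS: IE (bit 0) and SF (bit 6) both set
   FSW@ dup 1 and 0<>  swap 64 and 0<>  and ;
: FP-STACK-OVERFLOW? ( -- flag )  \ C1 (bit 9): 1 = overflow, 0 = underflow
   FSW@ 512 and 0<> ;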
dxf <dxforth@gmail.com> writes:
But was it the case by the mid/late 70's - or did certain individuals see
an opportunity to influence the burgeoning microprocessor market? Notions
of single and double precision already existed in software floating
point -
Hardware floating point also had single and double precision. The
really awful 1960s systems were gone by the mid 70s. But there were a
lot of competing formats, ranging from bad to mostly-ok. VAX floating
point was mostly ok, DEC wanted IEEE to adopt it, Kahan was ok with
that, but Intel thought "go for the best possible". Kahan's
retrospectives on this stuff are good reading:
What is there not to like with the FPU? It provides 80 bits, which
is in itself a useful additional format, and should never have problems
with single and double-precision edge cases.
The only problem is that some languages and companies find it necessary
to boycott FPU use.
[..] if your implementation performs the same
bit-exact operations for computing a transcendental function on two
IEEE 754 compliant platforms, the result will be bit-identical (if it
is a number). So just use the same implementations of transcendental
functions, and your results will be bit-identical; concerning the
NaNs, if you find a difference, check if the involved values are NaNs.
On Mon, 14 Jul 2025 6:04:13 +0000, Anton Ertl wrote:
[..]
When e.g. summing the elements of a DP vector, it is hard to see why
that couldn't be done on the FPU stack (with 80 bits) before (possibly)
storing the result to a DP variable in memory. I am not sure that Forth
users would be able to resist that approach.
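A minimal loop of the kind described (my sketch, not the NAI code
benchmarked in the follow-ups): the running sum stays on the FP stack,
i.e. in an 80-bit register on x87 systems, and only the final store
would narrow it to DP.
: NAIVE-SUM ( addr u -- ) ( F: -- sum )
   0e  0 ?do  dup f@ f+  float+  loop  drop ;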
mhx@iae.nl (mhx) writes:
[..]
On Mon, 14 Jul 2025 6:04:13 +0000, Anton Ertl wrote:
The question is: What properties do you want your computation to have?
[..]
2) A more accurate result? How much more accuracy?
3) More performance?
C) Perform tree addition
a) Using 80-bit addition. This will be faster than sequential
addition because in many cases several additions can run in
parallel. It will also be quite accurate because it uses 80-bit
addition, and because the addition chains are reduced to
ld(length(vector)).
So, as you can see, depending on your objectives there may be more
attractive ways to add a vector than what you suggested. Your
suggestion actually looks pretty unattractive, except if your
objectives are "ease of implementation" and "more accuracy than the
naive approach".
On Mon, 14 Jul 2025 7:50:04 +0000, Anton Ertl wrote:
C) Perform tree addition
a) Using 80-bit addition. This will be faster than sequential
addition because in many cases several additions can run in
parallel. It will also be quite accurate because it uses 80-bit
addition, and because the addition chains are reduced to
ld(length(vector)).
This looks very interesting. I can find Kahan and Neumaier, but
"tree addition" didn't turn up (there is a suspicious-looking
reliability paper about the approach which surely is not what
you meant). Or is it pairwise addition that I should look for?
I did not do any accuracy measurements, but I did performance
measurements on a Ryzen 5800X:
cycles:u
  gforth-fast         iforth            lxf     SwiftForth            VFX
3_057_979_501  6_482_017_334  6_087_130_593  6_021_777_424  6_034_560_441  NAI
6_601_284_920  6_452_716_125  7_001_806_497  6_606_674_147  6_713_703_069  UNR
3_787_327_724  2_949_273_264  1_641_710_689  7_437_654_901  1_298_257_315  REC
9_150_679_812 14_634_786_781                                               SR

cycles:u
   gforth-fast         iforth            lxf     SwiftForth            VFX
13_113_842_702  6_264_132_870  9_011_308_923 11_011_828_048  8_072_637_768  NAI
 6_802_702_884  2_553_418_501  4_238_099_417 11_277_658_203  3_244_590_981  UNR
 9_370_432_755  4_489_562_792  4_955_679_285 12_283_918_226  3_915_367_813  REC
51_113_853_111 29_264_267_850                                               SR
But I decided to use a recursive approach (recursive-sum, REC) that
uses the largest 2^k<n as the left child and the rest as the right
child, and as base cases for the recursion use a straight-line
balanced-tree evaluation for 2^k with k<=7 (and combine these for n
that are not 2^k). For systems with tiny FP stacks, I added the
option to save intermediate results on a software stack in the
recursive word. Concerning the straight-line code, it turned out that
the highest k I could use on sf64 and vfx64 is 5 (corresponding to 6
FP stack items); it's not clear to me why; on lxf I can use k=7 (and
it uses the 387 stack, too).
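For readers who want to try a tree reduction, a much simpler halving
variant can be sketched in standard Forth as follows (my sketch; it
splits at u/2 rather than at the largest 2^k<n and has no straight-line
base cases, so it is not the recursive-sum measured above):
: PAIRWISE-SUM ( addr u -- ) ( F: -- sum )
   dup 2 < if
      if f@ else drop 0e then         \ one element or none
   else
      dup 2/ >r
      over r@ recurse                 \ sum of left half
      swap r@ floats + swap r> -      \ address and length of right half
      recurse f+                      \ sum of right half, combine
   then ;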
Well, that is strange ...
Results with the current iForth are quite different:
FORTH> bench ( see file quoted above + usual iForth timing words )
\ 7963 times
\ naive-sum : 0.999 seconds elapsed. ( 4968257259 )
\ unrolled-sum : 1.004 seconds elapsed. ( 4968257259 )
\ recursive-sum : 0.443 seconds elapsed. ( 4968257259 )
\ shift-reduce-sum : 2.324 seconds elapsed. ( 4968257259 ) ok
mhx@iae.nl (mhx) writes:
[..]
Well, that is strange ...
The output should be the approximate number of seconds. Here's what I
get from the cycles:u numbers for iForth 5.1-mini given in the earlier
postings:
\ ------------ input ---------- | output
6_482_017_334 scale 7 5 3 f.rdp 1.07534 ok
6_452_716_125 scale 7 5 3 f.rdp 1.07048 ok
2_949_273_264 scale 7 5 3 f.rdp 0.48927 ok
14_634_786_781 scale 7 5 3 f.rdp 2.42785 ok
The resulting numbers are not very different from those you show. My measurements include iForth's startup overhead, which may be one
explanation why they are a little higher.