Hi All,

As a 32-bit throwback, I am reluctant to write 64-bit Forth applications. However, 64-bit Forth is becoming a thing anyway. Moreover, 64-bit data is becoming a thing.

64-bit cells are much wider than most data in the application. This opens up the possibility of vector operations. I don't mean SSE and such, I mean treating 64-bit words as vectors. For example, as 4-element groups of 16-bit numbers you can AND, OR, INVERT, or XOR directly.

Vector addition would be a nice-to-have. Add the top two stack elements but break the carry chain every 16 or 32 bits. Being a little lazy today, I'm just going to ask if the i86 architecture supports this stuff. I assume it does. 64-bit Forth will have to standardize on some kind of vector wordset.

Has this been explored before?
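
A minimal sketch of the kind of thing meant here, assuming 64-bit cells and a Forth-2012 system that accepts $-prefixed hex numbers (the names 4H+ and LANE-TOPS are made up for illustration): a lane-wise add of four packed 16-bit fields, done with ordinary cell-wide operations, with carries stopped at every lane boundary.

  $8000800080008000 CONSTANT LANE-TOPS   \ top bit of every 16-bit lane

  : 4H+  ( x y -- z )
     2DUP XOR LANE-TOPS AND >R      \ save x^y at the lane-top positions
     LANE-TOPS INVERT AND           \ clear the top bit of each lane in y
     SWAP LANE-TOPS INVERT AND      \ clear the top bit of each lane in x
     +                              \ add; no carry can now cross a lane
     R> XOR ;                       \ put the correct lane-top bits back

Each 16-bit field wraps modulo 2^16 independently; the same masking trick works for two 32-bit lanes with the mask $8000000080000000.
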
> Hi All,
> As a 32-bit throwback, I am reluctant to write 64-bit Forth
> applications. However, 64-bit Forth is becoming a thing anyway.
> Moreover, 64-bit data is becoming a thing.
Your applications should run on every hosted Forth, no need to write "64-bit Forth".
On Friday, September 15, 2023 at 11:21:37 AM UTC+2, none albert wrote:
> Your applications should run on every hosted Forth, no need to write "64-bit Forth".

It is true that you do not need to rewrite every 32-bit Forth application from scratch,
but you may need to adjust a few things. The following come to mind:
- If your 32-bit application uses assembly code, you must adjust register names
  (e.g. EBX --> RBX).
- If you make use of truncated multiplication (e.g. in an LCG random number generator),
  you must truncate the result using a bit mask (see the sketch after this post).
- If your 32-bit application makes use of double-length math operators, you may be able
  to simplify those operations using single-length operators in the 64-bit Forth system.
I recommend checking the 32-bit Forth source code carefully for possible traps before
running it in a 64-bit Forth system.
Henry
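
To make the second trap concrete, here is a minimal sketch (assuming a 64-bit Forth that accepts $-prefixed hex numbers; the names MASK32, SEED and LCG-NEXT are made up): a classic 32-bit LCG that relied on the cell wrapping at 32 bits now needs an explicit mask, otherwise the sequence changes.

  $FFFFFFFF CONSTANT MASK32
  VARIABLE SEED   12345 SEED !        \ any initial seed

  : LCG-NEXT ( -- u )                 \ next 32-bit pseudo-random number
     SEED @  1664525 *  1013904223 +  \ the well-known Numerical Recipes pair
     MASK32 AND                       \ explicit truncation; was free on 32-bit cells
     DUP SEED ! ;

Conversely, for the third item: a product that needed M* and SM/REM on a 32-bit system often fits in a single 64-bit cell, so plain * and / may be enough.
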
On Friday, September 15, 2023 at 11:43:08 AM UTC-7, Heinrich Hohl wrote:
[..]

Thanks for the responses but I am thinking more along the lines of Forth computers, not i64 monsters. I tend to agree with Marcel that the added complexity probably isn't worth it. It would be interesting to experiment with primitives like >< (swap halves of the top of stack) and H+ (add the top two stack elements as 2D vectors). Adding two vectors would be simple compared to the usual stackrobatics.
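
Rough sketches of those two primitives, assuming 64-bit cells holding two 32-bit halves (hi:lo); the names >< and H+ come from the post above, but the definitions are only illustrative high-level versions, not how a native primitive would do it.

  $FFFFFFFF CONSTANT LO-HALF

  : ><  ( x -- y )        \ exchange the two 32-bit halves of the top cell
     DUP 32 RSHIFT  SWAP 32 LSHIFT  OR ;

  : H+  ( v1 v2 -- v3 )   \ add as 2D vectors; no carry crosses the halves
     2DUP LO-HALF AND  SWAP LO-HALF AND  +  LO-HALF AND  >R   \ low halves, mod 2^32
     SWAP 32 RSHIFT  SWAP 32 RSHIFT  +  32 LSHIFT             \ high halves, mod 2^32
     R> OR ;
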
The cathedrals of modern computing have already been built to suit the C programming paradigm. Apps are written for the hardware and hardware is designed for the apps. The prevailing computing religion is the only game in town unless you want to go live in a tent. It's like old times but they don't burn you for heresy. So, 32-bit and 64-bit paradigms are here to stay. The FPUs came, everyone loved IEEE754 doubles, and it became a good idea to move data in 64-bit chunks. 64-bit also gave segmented

Forth strongly favors integer arithmetic for reasons of ideological purity at this point. Maybe cells should fit floating point numbers. How wide should floating point numbers really be? 32-bit seems a bit small. 64-bit seems a bit big. Where is the Goldilocks point? The B5500 that inspired Chuck Moore had a 48-bit word.

On Sunday, September 17, 2023 at 8:48:50 PM UTC+2, Brad Eckert wrote:
> How wide should floating point numbers really be? 32-bit seems a bit small. 64-bit seems a bit big.
> Where is the Goldilocks point? The B5500 that inspired Chuck Moore had a 48-bit word.

128 bits. The 64-bit doubles and 80-bits extended are enough, but there are algorithms that only work when you have twice the width.

-marcel

> [..]
> The 64-bit doubles and 80-bits extended are enough, but there are algorithms
> that only work when you have twice the width.

Moore would find a better algorithm.

On Monday, September 18, 2023 at 6:13:52 AM UTC+2, dxf wrote:
> I can imagine Tim the Tool Time Guy saying "MORE BITS!".
> [..]
> > The 64-bit doubles and 80-bits extended are enough, but there are algorithms
> > that only work when you have twice the width.
> Moore would find a better algorithm.

I doubt it. It is not that the whole algorithm switches to 128 bits to avoid the problem.
For some equations/problems there can be a mix of very small and very big numbers where floating-point performs badly. By successive refinement the issue can be eliminated, but in its critical operation higher-than-default precision is needed. A typical use-case is throwing a switch.
-marcel
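
A one-line way to see the effect Marcel describes, assuming a system with the optional FLOATING wordset:

  1e16 1e F+  1e16 F-  F.   \ 64-bit doubles absorb the 1 and print 0;
                            \ 80-bit extended intermediates keep it and print 1

That surviving bit is exactly the kind of higher-than-default precision the critical operation needs.
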
On Monday, September 18, 2023 at 10:34:58 AM UTC-7, Marcel Hendrix wrote:
[..]

As long as we are blue-skying, I would propose that arbitrary precision floating point as well as arbitrary precision (bignum) integers be supported in hardware. It would be nice to have IEEE standards for both. Maybe we will see that someday.

In article <63976a7d-f3d4-4f15...@googlegroups.com>,
Brad Eckert <hwf...@gmail.com> wrote:
> On Monday, September 18, 2023 at 10:34:58 AM UTC-7, Marcel Hendrix wrote:
> [..]

Most of the problematic problems are "stiff" problems where there are
orders of magnitude between the small and large eigenvalues of a
matrix. It is relatively easy to predict the position of the earth
years into the future, but the position of the moon relative to the
earth is far less precise. It is related to chaotic problems.
Boosting the precision of the floating point is of little avail
and is surely not an alternative to numerical analysis.