If you implement ALLOCATE yourself, it is easy to incorporate SIZE:

1000 ALLOCATE THROW SIZE .
1000 OK

I have done that in my ALLOCATE, and reportedly Hugh has done that
too. Maybe he even came up with the name.

You can now define:

\ To an allocated vector append a vector. Return the new vector.
: vector-concat OVER SIZE DUP >R OVER SIZE + >R
  SWAP R> RESIZE THROW ( first vector is now enlarged ) ( l e )
  2DUP R> + OVER SIZE CMOVE NIP ;

This is applicable to vectors of cells or characters alike.
If you ALLOCATE your strings you can define

'vector-concat ALIAS $+
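For readers who do not implement their own Forth allocator, the size-header idea behind such an ALLOCATE/SIZE pair can be sketched in C: store the requested size in a cell in front of the returned block, so it can be queried later. This is a minimal illustration, not the poster's actual implementation; the names xallocate/xsize/xresize/xfree are hypothetical.

```c
#include <stdlib.h>

/* Sketch: keep the payload size in a header cell in front of the
   block, so a SIZE-style query is possible later. */

static void *xallocate(size_t n) {
    size_t *p = malloc(sizeof(size_t) + n);
    if (!p) return NULL;
    *p = n;                       /* remember the payload size */
    return p + 1;                 /* hand out the payload */
}

static size_t xsize(void *v) {
    return ((size_t *)v)[-1];     /* read the size back from the header */
}

static void *xresize(void *v, size_t n) {
    size_t *p = realloc((size_t *)v - 1, sizeof(size_t) + n);
    if (!p) return NULL;
    *p = n;                       /* update the recorded size */
    return p + 1;
}

static void xfree(void *v) {
    free((size_t *)v - 1);        /* free the whole block, header included */
}
```

With this, `xsize(xallocate(1000))` yields 1000, mirroring the `1000 ALLOCATE THROW SIZE .` example above.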
none albert wrote on Monday, September 4, 2023 at 14:14:48 UTC+2:
> If you implement ALLOCATE yourself it is easy to incorporate SIZE :
> [...]
> 'vector-concat ALIAS $+
What is so special here? There are zillions of ways to manage heap
objects, including resizing.

One could use a big OS-preallocated heap and do allocation/resizing of
Forth objects within that heap without calling further OS functions.
But I remember reading an essay saying that modern OS allocation
functions are among the most heavily optimized functions anyhow, which
means that it can be hard to beat their performance.

Of course, on bare-metal embedded devices things look different.
In article <e29c5b36-128e-43c6...@googlegroups.com>,
minforth <minf...@arcor.de> wrote:
> none albert wrote on Monday, September 4, 2023 at 14:14:48 UTC+2:
>> If you implement ALLOCATE yourself it is easy to incorporate SIZE :
>> [...]
> What is so special here? There are zillion ways to manage heap objects
> incl. resizing.
> [...]
> Of course in bare metal embedded devices things look different.

What are you talking about? Optimization is the last thing on my mind.
Ease of programming is. A traditional malloc doesn't allow you to
retrieve the size.
none albert wrote on Tuesday, September 5, 2023 at 12:07:37 UTC+2:
> In article <e29c5b36-128e-43c6...@googlegroups.com>,
> minforth <minf...@arcor.de> wrote:
> [...]
> What are you talking about? Optimization is the last thing on my mind.
> Ease of programming is. Traditional malloc don't allow you to
> retrive size.
Okay. malloc/resize have the desired size as an input argument, so it
is already known. When they fail, they return an error indicator. When
they succeed, the actually allocated size can be larger (e.g. by
memory page granularity).

Did I overlook something?
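On the point about the actually allocated size: glibc does expose it through malloc_usable_size(), declared in <malloc.h> (glibc-specific, not portable). Note that it reports the usable size, which can exceed the request, so it is not an exact substitute for a SIZE that returns the requested length. A small probe, with a hypothetical helper name:

```c
#include <malloc.h>   /* glibc: malloc_usable_size() */
#include <stdlib.h>

/* Returns the number of usable bytes glibc actually reserved for a
   request; this can exceed the request due to chunk granularity. */
size_t probe_usable(size_t request) {
    void *p = malloc(request);
    if (!p) return 0;
    size_t usable = malloc_usable_size(p);
    free(p);
    return usable;
}
```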
In article <96c68e49-4de7-4538...@googlegroups.com>,
minforth <minf...@arcor.de> wrote:
> Did I overlook something?
A really substantial hassle. You need to keep track of the sizes.
E.g. concatenate gets 4 stack items instead of two.
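The point about extra stack items has a direct C analogue: without a retrievable allocation size, a concatenation routine must be handed both lengths explicitly, four arguments instead of two. A minimal sketch under that assumption (concat4 is a hypothetical name):

```c
#include <stdlib.h>
#include <string.h>

/* Without a SIZE-style query, the caller must pass both lengths --
   the "4 stack items" of the post.  With a retrievable size, the two
   length arguments would disappear and concat(a, b) would suffice. */
char *concat4(const char *a, size_t la, const char *b, size_t lb) {
    char *r = malloc(la + lb + 1);
    if (!r) return NULL;
    memcpy(r, a, la);             /* copy the first buffer */
    memcpy(r + la, b, lb);        /* append the second buffer */
    r[la + lb] = '\0';            /* terminate for convenience */
    return r;
}
```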
> If you implement ALLOCATE yourself it is easy to incorporate SIZE :
> [...]
> 'vector-concat ALIAS $+

4tH has got it too, but it is called ALLOCATED. Like ERROR I felt SIZE was
Groetjes Albert
--
Don't praise the day before the evening. One swallow doesn't make spring.
You must not say "hey" before you have crossed the bridge. Don't sell the
hide of the bear until you shot it. Better one bird in the hand than ten
in the air. First gain is a cat spinning. - the Wise from Antrim -
minforth <minf...@arcor.de> writes:
> But I remember reading an essay saying that modern os allocation
> functions belong to the most heavily optimized functions anyhow.
> Which means that it can be hard to beat their performance.

I have read that many times over the decades. And then I did the
measurements for Figure 14 of
<http://euroforth.org/ef17/papers/ertl.pdf>, and found those funny
kinky lines; I believe they come from the ALLOCATE and FREE calls
(which used glibc's malloc() and free() implementations). So in
further work <http://euroforth.org/ef18/papers/ertl-chaining.pdf> I
implemented a cache of freed vectors. Some time later a new version
of glibc appeared, and the announcement claimed to reduce the overhead
of thread synchronization (which apparently also slows down
single-threaded programs like my benchmarks) of earlier versions by
using a per-thread cache of freed memory areas.

So maybe the allocation functions have been heavily optimized, at
least since Doug Lea published his allocator. This does not mean that
we are close to the optimum.
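The cache of freed vectors mentioned above can be sketched as a small per-size-class free list sitting in front of malloc(). This is only an illustration of the general idea, not the implementation from the paper; all names and the 16-byte granularity are assumptions.

```c
#include <stdlib.h>

#define NCLASSES    16
#define CLASS_SHIFT 4            /* assumed 16-byte size granularity */

static void *cache[NCLASSES];    /* one singly linked free list per class */

static size_t class_of(size_t n) { return n >> CLASS_SHIFT; }

/* Reuse a cached block of the right size class if one is available,
   otherwise fall back to malloc(), rounding up to the class size. */
static void *cached_alloc(size_t n) {
    size_t c = class_of(n);
    if (c < NCLASSES && cache[c]) {
        void *p = cache[c];
        cache[c] = *(void **)p;  /* pop the free list */
        return p;
    }
    return malloc((c + 1) << CLASS_SHIFT);
}

/* Push the block onto its class's free list instead of freeing it;
   n must be the size the block was requested with. */
static void cached_free(void *p, size_t n) {
    size_t c = class_of(n);
    if (c < NCLASSES) {
        *(void **)p = cache[c];  /* link into the free list */
        cache[c] = p;
    } else {
        free(p);                 /* oversized blocks go back to malloc */
    }
}
```

Freeing and immediately reallocating the same size then avoids the malloc()/free() round trip entirely, which is the effect the paper exploits.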
- anton

That opens an opportunity for Forth to adapt a simple ALLOCATE/FREE
for the situation at hand. It is not so simple to customize glibc's
allocate/free.

Groetjes Albert
Anton Ertl wrote on Thursday, September 7, 2023 at 09:40:13 UTC+2:
> So maybe the allocation functions have been heavily optimized at least
> since Doug Lea published his allocator. This does not mean that we
> are close to the optimum.
Obviously. I am guessing that Linux is still mostly a server OS, and
thus optimizations are targeted more towards running huge numbers of
parallel jobs on multi-CPU machines (e.g. K8s clusters). Not really
Forth's domain.

But this is not my field of expertise. For example, I really can't
estimate whether your benchmark would show different results between
an enterprise or desktop variant of the same Linux distribution, or
whether they all use the same kernel module.