Nasser M. Abbasi <nma@12000.org> wrote:
I have not used Lisp for a long time. I used it once to write
a small program for an AI course I took.
I was reading the history of Macsyma here:
<https://web.archive.org/web/20060404231043/http://www.ma.utexas.edu/pipermail/maxima/2003/005861.html>
Richard Petti says that one of the reasons Macsyma failed is that
Lisp (the language Macsyma was written in) was slow for numerical
and linear algebra. He says:
------------------------------
Macsyma's most damaging product problem was slowness in numerical
analysis. Whenever a customer considered Macsyma for adoption in
a major commercial or government or educational program, the
severity of this problem killed Macsyma's chances. This problem
had two causes.
o Lisp systems had slow numerical analysis. Lisp's support for
uniform garbage collection and run-time type checking (and
possibly other developer-centric features) required software
indirection that slowed arithmetic and basic array operations.
Lisp developers focused on "artificial intelligence" and they
considered numerical analysis of little consequence for Lisp.
o Macsyma built all matrices at user level from list structures
which are terribly slow for linear algebra. For those matrix
operations that were performed internally on arrays, the
conversion between lists and arrays was itself very slow.
The slowness of numerical linear algebra was too big a problem
to tackle, given our tie to Lisp and the seriousness of other
problems that needed attention after Symbolics milked the product
two times in the 1980s. So I hoped I could build a product that
was best at everything and had all the basic numerical analysis
but was slow at numerics.
---------------------
My question is: Is this still the case in 2022? Or have things changed?
I did a lot of performance measurements looking for the fastest code
to use in FriCAS. AFAICS the current state of the art is:
- very high-level code may be slow. This is not specific to Lisp;
in fact, high-level Lisp may be faster than high-level Java
or Python.
- Lisp allows writing very low-level code, similar to Fortran, C,
or (low-level) Java. Here speed depends on the quality of the code
generator, which in turn depends on the amount of work spent on the
compiler. C and Fortran have an advantage: _much_ more work is
spent on them than on Lisp. As a result, the best Lisp compiler
(that is, sbcl) generates code that on average runs at probably
half the speed of C or Fortran. That varies: "memory intensive"
code will normally perform the same memory accesses regardless
of language, so speed will be essentially the same, while "cache
friendly" code will show differences between code generators.
In particular, for two-dimensional array access sbcl currently
generates inline code, but is unable to move common subexpressions
outside the loop. In my code that caused about a 6 times slowdown
compared to the C compiler (which moved the common part of the
addressing expressions outside the loop); see the array-access
sketch after this list.
- dense matrix multiplication is very special. Naive code has a
very bad memory access pattern, so it is quite slow. An optimized
matrix multiply is cache friendly, but the best code depends
quite a lot on the specific processor. AFAIK for best results
the inner loops are hand-written assembly. This is potentially
labor intensive, as there are several variants of the inner loop
and several variants of processors, so there is a lot of code to
write. Some libraries try to generate the code from patterns,
some compromise and use C. Normal users will use library code,
so really the problem is finding a good library (which is
essentially language-independent); see the matrix-multiplication
sketch after this list.
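To make the two-dimensional array access point concrete, below is a
minimal sketch of the kind of fully declared low-level Lisp I mean
(illustrative only, not the code from my measurements; the names are
made up). The first version leaves the index arithmetic to AREF in
the inner loop; the second hoists the row offset by hand via
ROW-MAJOR-AREF, which is roughly what a C compiler does
automatically:

  ;; Minimal sketch, assuming sbcl with full type declarations.
  (defun sum2d-naive (a)
    (declare (type (simple-array double-float (* *)) a)
             (optimize (speed 3) (safety 0)))
    (let ((n (array-dimension a 0))
          (m (array-dimension a 1))
          (s 0d0))
      (declare (type fixnum n m) (type double-float s))
      (dotimes (i n s)
        (dotimes (j m)
          ;; AREF recomputes the full index i*m + j on every iteration
          (incf s (aref a i j))))))

  (defun sum2d-hoisted (a)
    (declare (type (simple-array double-float (* *)) a)
             (optimize (speed 3) (safety 0)))
    (let ((n (array-dimension a 0))
          (m (array-dimension a 1))
          (s 0d0))
      (declare (type fixnum n m) (type double-float s))
      (dotimes (i n s)
        (let ((base (* i m)))          ; row offset computed once per row
          (declare (type fixnum base))
          (dotimes (j m)
            (incf s (row-major-aref a (+ base j))))))))

With a C compiler both variants compile to essentially the same inner
loop; with sbcl you currently have to do the hoisting by hand to get
a comparable one.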
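For the dense matrix multiplication point, the sketch below (again
purely illustrative; real BLAS-style libraries block for cache and
hand-tune the inner loop in assembly, as noted above) shows why the
access pattern matters: the textbook i-j-k loop order walks B
column-wise with a large stride, while the reordered i-k-j version
keeps the inner loop running along rows of B and C and is usually
noticeably faster even without any further tuning:

  ;; Illustrative sketch only; not library-quality code.
  (defun matmul-ijk (a b c)
    "C := A*B in textbook order; B is accessed column-wise."
    (declare (type (simple-array double-float (* *)) a b c)
             (optimize (speed 3) (safety 0)))
    (let ((n (array-dimension a 0))
          (k (array-dimension a 1))
          (m (array-dimension b 1)))
      (declare (type fixnum n k m))
      (dotimes (i n c)
        (dotimes (j m)
          (let ((s 0d0))
            (declare (type double-float s))
            (dotimes (p k)
              (incf s (* (aref a i p) (aref b p j))))
            (setf (aref c i j) s))))))

  (defun matmul-ikj (a b c)
    "Same product, but the inner loop runs along rows of B and C."
    (declare (type (simple-array double-float (* *)) a b c)
             (optimize (speed 3) (safety 0)))
    (let ((n (array-dimension a 0))
          (k (array-dimension a 1))
          (m (array-dimension b 1)))
      (declare (type fixnum n k m))
      (dotimes (i n c)
        (dotimes (j m) (setf (aref c i j) 0d0))   ; clear row i of C
        (dotimes (p k)
          (let ((aip (aref a i p)))
            (declare (type double-float aip))
            (dotimes (j m)
              (incf (aref c i j) (* aip (aref b p j)))))))))

Neither version comes close to an optimized library; the point is
only that the memory access pattern, not the arithmetic, makes the
difference.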
Concerning Macsyma, from my outsider point of view the old problems
are still present in current Maxima: Maxima arrays use lists, there
is dynamic typing, and to avoid dynamic type errors there are
automatic conversions. Automatic conversion at a hot spot can cause
a significant slowdown.
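To see what lists and dynamic typing cost at a hot spot, compare
(purely illustrative, not Maxima code) summing numbers held in a
plain list using generic arithmetic against summing a specialized
double-float vector with declarations. In the first case every +
dispatches on the types of its arguments at run time and the floats
are boxed; in the second sbcl emits plain machine float additions:

  ;; Purely illustrative; not code from Maxima.
  (defun sum-list (xs)
    "Generic arithmetic over a list: every + dispatches at run time."
    (let ((s 0))
      (dolist (x xs s)
        (setf s (+ s x)))))

  (defun sum-vec (v)
    "Declared arithmetic over a specialized vector: no run-time dispatch."
    (declare (type (simple-array double-float (*)) v)
             (optimize (speed 3) (safety 0)))
    (let ((s 0d0))
      (declare (type double-float s))
      (dotimes (i (length v) s)
        (incf s (aref v i)))))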
I think that for FriCAS (which compiles to Lisp) the situation is
somewhat different. Namely, FriCAS is statically typed, and in
library code conversions (if needed) are inserted by hand. FriCAS
code is translated to low-level Lisp with type declarations, and
for real numerics it can get the best possible speed from Lisp.
ATM there is trouble with complex numbers: Lisp complex types can
not handle the generality needed by FriCAS, so FriCAS uses its own
representation of complex, which causes performance trouble on the
Lisp side. There are specialized arrays for real and complex that
can be passed to external routines. Let me add that there are a
lot of things that could be improved with more work, and in the
last 10 years a lot of things related to numerics have improved
significantly.
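For concreteness, the specialized arrays mentioned above are the
standard Lisp ones (this is just a sketch of the representations
involved, not FriCAS library code); sbcl stores their elements
unboxed, so the underlying storage has the same layout as a C or
Fortran array and can be handed to external routines:

  ;; Sketch of the representations involved; not FriCAS library code.
  ;; Specialized real vector: unboxed IEEE doubles.
  (defvar *reals*
    (make-array 1000 :element-type 'double-float
                     :initial-element 0d0))

  ;; Specialized complex vector: unboxed (complex double-float) pairs.
  ;; As said above, FriCAS itself needs a more general notion of
  ;; complex than the Lisp COMPLEX type, so it cannot always use this
  ;; representation directly.
  (defvar *complexes*
    (make-array 1000 :element-type '(complex double-float)
                     :initial-element #c(0d0 0d0)))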
--
Waldek Hebisch