• Fortran was NOT higher level than C. Was: Computer architects leaving I

    From Michael S@21:1/5 to Thomas Koenig on Wed Sep 4 11:31:23 2024
    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of dynamic
    memory and buffers.

    It is entirely possible to have correct use of memory in C,

    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding what it
    is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after all.

    I'd say that C in the form that stabilized around 1975-1976 is a
    significantly higher-level language than contemporary Fortran dialects
    or even the next Fortran dialect (F77).

    EQUIVALENCE is lower level than union.

    COMMON is a lot lower level than both C automatic storage and
    dynamic storage (malloc/free), although the latter probably was not
    considered part of the language in 1976.

    IF cond GOTO 42 is lower level than if (!cond) {}

    Call-by-reference as the only mode of parameter passing is lower level
    than call-by-value. Especially so in context of C, because in C one can
    easily emulate call-by-reference with pointers if/when such need arises.
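
    For illustration, a minimal C sketch of that emulation (the function
    names here are invented, not from any particular codebase):

    #include <stdio.h>

    /* swap() needs to modify the caller's variables, so it takes their
       addresses - C's stand-in for call-by-reference. */
    static void swap(int *a, int *b)
    {
        int t = *a;
        *a = *b;
        *b = t;
    }

    int main(void)
    {
        int x = 1, y = 2;
        swap(&x, &y);               /* pass addresses, not values */
        printf("%d %d\n", x, y);    /* prints "2 1" */
        return 0;
    }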

    A few other higher-level concepts of C appear to have no equivalents at
    all in contemporary Fortran:
    block scopes for variables, including variables with static storage;
    struct;
    enum.
    I don't remember for sure, but it seems that back then Fortran had no
    recursion.
    Standardized preprocessor vs at best non-standard macro systems or at
    worst nothing at all.
    I'd guess there are more features of that sort that I forgot, but they
    are less important than those I listed.

    Overall, the differences in favor of C look rather huge.

    On the other hand, I recollect only two higher-level features present in
    old Fortran that were absent in pre-99 C - VLAs and Complex.
    The first feature can be emulated in an almost satisfactory manner by
    dynamic allocation. Also, I am not sure that VLAs were already part of
    the standard Fortran language in 1976.
    The second feature is very specialized and rather minor.

  • From MitchAlsup1@21:1/5 to Michael S on Wed Sep 4 16:41:26 2024
    On Wed, 4 Sep 2024 8:31:23 +0000, Michael S wrote:

    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of dynamic
    memory and buffers.

    It is entirely possible to have correct use of memory in C,

    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding what it
    is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after all.

    I'd say that C in the form that stabilized around 1975-1976 is
    significantly higher level language than contemporary Fortran dialects
    or even the next Fortran dialect (F77).

    EQUIVALENCE is lower level than union.

    COMMON is ALOT lower level both than C automatic storage and than
    dynamic storage (malloc/free) although the later probably was not
    considered part of the language in 1976.

    COMMON was a way of passing arguments to functions without paying
    the overhead of passing them as arguments. This fell out of favor
    in languages that can pass structures.
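
    A rough C analogue of the two styles, purely for illustration (the
    names are made up): a file-scope struct plays the role of a COMMON
    block, while passing the structure makes the data flow explicit.

    struct state { double x, y, z; };

    /* Style 1: shared block, COMMON-like; callees just reach for it. */
    static struct state blk;

    static void step_shared(void)
    {
        blk.x += blk.y * blk.z;
    }

    /* Style 2: pass the structure (here by pointer) as an argument. */
    static void step_arg(struct state *s)
    {
        s->x += s->y * s->z;
    }

    int main(void)
    {
        struct state s = { 1.0, 2.0, 3.0 };
        blk = s;
        step_shared();   /* works on the shared block  */
        step_arg(&s);    /* works on the passed struct */
        return 0;
    }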

    IF cond GOTO 42 is lower level than if (!cond) {}

    Call-by-reference as the only mode of parameter passing is lower level
    than call-by-value. Especially so in context of C, because in C one can easily emulate call-by-reference with pointers if/when such need arises.

    In Fortran's defense, it needed a way to pass arguments back without
    having pointers.

    Few other higher level concepts of C appear to have no equivalents at
    all in contemporary Fortran:
    block scopes for variables, including variables with static storage;
    struct;
    enum.
    I don't remember for sure, but it seems that back then Fortran had no
    recursion.

    WATFIV did
    FORTRAN 4 IBM did not
    FORTRAN 4 Univac 1108 did

    Standardized preprocessor vs at best non-standard macro systems or at
    worst nothing at all.

    I am often put in a position where I have to read the code after
    preprocessing just so I know what the macros expand into, to make
    heads or tails of code I did not write.

    I'd guess there are more features of that sort that I forgot, but they
    are less important than those I listed.

    Overall, the differences in favor of C looks rather huge.

    On the other hand, I recollect only two higher level feature present in
    old Fortran that were absent in pre-99 C - VLA and Complex.
    The first feature can be emulated in almost satisfactory manner by
    dynamic allocation. Also, I am not sure that VLA were already part of standard Fortran language in 1976.

    C was the first language in which all of C itself could be written;
    printf() is the big example, where varargs was a set of macros.
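
    For reference, the standardized form of those macros is <stdarg.h>
    (the older <varargs.h> interface was indeed just macros). A minimal
    sketch of a variadic function, with invented names:

    #include <stdarg.h>
    #include <stdio.h>

    /* Sum 'count' ints passed after the count itself. */
    static int sum(int count, ...)
    {
        va_list ap;
        int total = 0;

        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        printf("%d\n", sum(3, 10, 20, 30));   /* prints 60 */
        return 0;
    }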

    The second feature is very specialized and rather minor.

  • From Thomas Koenig@21:1/5 to Michael S on Wed Sep 4 17:08:36 2024
    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of dynamic
    memory and buffers.

    It is entirely possible to have correct use of memory in C,

    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding what it
    is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after all.

    I'd say that C in the form that stabilized around 1975-1976 is
    significantly higher level language than contemporary Fortran dialects
    or even the next Fortran dialect (F77).

    I did write Fortran, not FORTRAN :-)

    I agree that C had many very useful things that pre-FORTRAN 90
    did not have. This is not surprising, since the authors of
    C knew FORTRAN well.

    [...]

    Overall, the differences in favor of C looks rather huge.

    You are arguing from the point of view of more than 30 years ago.

    On the other hand, I recollect only two higher level feature present in
    old Fortran that were absent in pre-99 C - VLA and Complex.

    You forget arrays as first-class citizens, and a reasonable way
    to pass multi-dimensional arrays. Sure, you could roll them
    on your own with pointer arithmetic, but...
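
    What "rolling your own" looks like in C, as a minimal sketch (names
    invented): a flat buffer plus explicit row-major index arithmetic.

    #include <stdlib.h>

    /* a(i,j) for an nrows-by-ncols matrix stored row-major. */
    static double get(const double *a, size_t ncols, size_t i, size_t j)
    {
        return a[i * ncols + j];
    }

    int main(void)
    {
        size_t nrows = 4, ncols = 5;
        double *a = malloc(nrows * ncols * sizeof *a);
        if (!a)
            return 1;
        a[2 * ncols + 3] = 1.5;            /* element (2,3), by hand */
        double x = get(a, ncols, 2, 3);    /* read it back           */
        free(a);
        return x == 1.5 ? 0 : 1;
    }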

    The first feature can be emulated in almost satisfactory manner by
    dynamic allocation. Also, I am not sure that VLA were already part of standard Fortran language in 1976.

    It didn't.

    The second feature is very specialized and rather minor.

    Let's take a look at Fortran 95 vs. C99 (similar timeframe), and
    throw in the allocatable TR as well, which everybody implemented.

    Fortran 95 already had (just going through https://en.wikipedia.org/wiki/Fortran_95_language_features
    and looking at the features that C does not have)

    - A sensible numeric model, where you can ask for a certain
    precision and range
    - Usable multi-dimensional arrays
    - Modules where you can specify accessibility
    - Intent for dummy arguments
    - Generics and overloaded operators
    - Assumed-shape arrays, where you don't need to pass array
    bounds explicitly
    - ALLOCATE and ALLOCATABLE variables, where the compiler
    cleans up after variables go out of scope
    - Elemental operations and functions (so you can write
    foo + bar where foo is an array and bar is either an
    array or scalar)
    - Array subobjects, you can specify a start, an end and
    a stride in any dimension
    - Array intrinsics for shifting, packing, unpacking,
    sum, minimum value, ..., matrix multiplication and
    dot product

    The main feature I find lacking is unsigned numbers, but at
    least I'm doing something about that, a few decades later :-)

  • From Michael S@21:1/5 to Thomas Koenig on Thu Sep 5 13:04:24 2024
    On Wed, 4 Sep 2024 17:08:36 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no
    experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of
    dynamic memory and buffers.

    It is entirely possible to have correct use of memory in C,

    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding what it
    is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after all.

    I'd say that C in the form that stabilized around 1975-1976 is significantly higher level language than contemporary Fortran
    dialects or even the next Fortran dialect (F77).

    I did write Fortran, not FORTRAN :-)

    I agree that C had many very useful things that pre-FORTRAN 90
    did not have. This is not surprising, since the authors of
    C knew FORTRAN well.

    [...]

    Overall, the differences in favor of C looks rather huge.

    You are arguing from the point of view of more than 30 years ago.

    On the other hand, I recollect only two higher level feature
    present in old Fortran that were absent in pre-99 C - VLA and
    Complex.

    You forget arrays as first-class citizens,

    In theory, this is an advantage. In practice - not so much.
    Old Fortran lacked two key features that make 1st-class arrays really
    useful - array length as an attribute and pass-by-value.
    So, one can enjoy this first-class citizenship only within the borders
    of a procedure.

    and a reasonable way
    to pass multi-dimensional arrays.

    Considering the total absence of inter-module checking of matching
    dimensions, it probably caused more trouble than it solved.

    Sure, you could roll them
    on your own with pointer arithmetic, but...


    OTOH, while C does not have a formal concept of array slices, they are
    very easily and conveniently emulated in practice. Surely, not quite as
    nice syntactically as in Modern Fortran, but equal to it on practical
    grounds. According to my understanding, emulation of slices in Old
    FORTRAN is more cumbersome.
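
    A sketch of what that emulation typically looks like for one
    dimension (illustrative names only): a slice is just a pointer to
    the first element plus a length.

    #include <stdio.h>

    static double sum(const double *v, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    int main(void)
    {
        double a[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        /* Pass the "slice" a[3..6] by pointing at its start. */
        printf("%g\n", sum(&a[3], 4));    /* prints 18 */
        return 0;
    }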

    The first feature can be emulated in almost satisfactory manner by
    dynamic allocation. Also, I am not sure that VLA were already part
    of standard Fortran language in 1976.

    It didn't.

    The second feature is very specialized and rather minor.

    Let's take a look at Fortran 95 vs. C99 (similar timeframe), and
    thrown in the allocatable TR as well, which everybody implemented.

    Fortran 95 already had (just going through https://en.wikipedia.org/wiki/Fortran_95_language_features
    and looking at the features that C does not have)

    - A sensible numeric model, where you can ask for a certain
    precision and range

    Maybe it's good in a theoretical sense, though I am not sure even about
    that. I most certainly don't like it as a numerics professional (which I
    am formally not, but a lot closer to being such than an average programmer
    or an average physicist/chemist/biologist).
    I very much prefer the IEEE-754 approach of a fixed list of types with
    very strictly specified properties.

    - Usable multi-dimensional arrays
    - Modules where you can specify accessibility
    - Intent for dummy arguments
    - Generics and overloaded operators

    Handy, but dangerous.

    - Assumed-shape arrays, where you don't need to pass array
    bounds explicitly
    - ALLOCATE and ALLOCATABLE variables, where the compiler
    cleans up after variables go out of scope

    How does it differ from the automatic variables that C, together with
    nearly all other Algol derivatives, had from the very beginning?

    - Elemental operations and functions (so you can write
    foo + bar where foo is an array and bar is either an
    array or scalar)

    Yes, it is handy for certain classes of matrix and array processing.
    Still less powerful than the similar features of Matlab and esp.
    of GNU Octave, where you have both matrix operations like * and
    cell-by-cell operations like .*
    Unfortunately, the meaning of code that intensively uses these features
    is not always obvious to the reader.
    C code that achieves the same effect with utility functions looks
    much less nice, but at least it does not suffer from the above-mentioned
    problem.

    - Array subobjects, you can specify a start, an end and
    a stride in any dimension
    - Array intrinsics for shifting, packing, unpacking,
    sum, minimum value, ..., matrix multiplication and
    dot product

    The main feature I find lacking is unsigned numbers, but at
    least I'm doing something about that, a few decades later :-)

    I don't know much about typical users of Modern Fortran, but would
    think that those coming from other languages, esp. from Python, would
    appreciate built-in infinite-precision integers much more than unsigned
    integers.
    BTW, do your unsigned integers have defined behavior in case of
    overflow? Is it defined as modulo 2**size?
    If the answers are yes, then maybe you can find a better name than
    'unsigned'?

  • From Michael S@21:1/5 to Michael S on Thu Sep 5 14:36:30 2024
    On Thu, 5 Sep 2024 13:04:24 +0300
    Michael S <already5chosen@yahoo.com> wrote:

    I don't know much about typical users of Modern Fortran, but would
    think that those coming from other languages, esp. from Python, would appreciate built-in infinite-precision integers

    Somehow I feel that "infinite-precision integers" and "arbitrary
    precision integers" are both misnomers. But they are established terms
    and I don't know how to express it better. Maybe "arbitrary range"?

  • From Niklas Holsti@21:1/5 to Michael S on Thu Sep 5 15:07:23 2024
    On 2024-09-05 14:36, Michael S wrote:
    On Thu, 5 Sep 2024 13:04:24 +0300
    Michael S <already5chosen@yahoo.com> wrote:

    I don't know much about typical users of Modern Fortran, but would
    think that those coming from other languages, esp. from Python, would
    appreciate built-in infinite-precision integers

    Somehow I feel that both "infinite-precision integers" and "arbitrary precision integers" are both misnomers. But they are established terms
    and I don't know how to express it better. May be, "arbitrary range" ?


    Ada calls them "Big", as in Big_Integers, Big_Reals.

    One of the few cases where Ada follows the common jargon -- "bignums" --
    to keep things short.

  • From Thomas Koenig@21:1/5 to Michael S on Thu Sep 5 11:36:22 2024
    Michael S <already5chosen@yahoo.com> schrieb:
    On Wed, 4 Sep 2024 17:08:36 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no
    experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of
    dynamic memory and buffers.

    It is entirely possible to have correct use of memory in C,

    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding what it
    is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after all.

    I'd say that C in the form that stabilized around 1975-1976 is
    significantly higher level language than contemporary Fortran
    dialects or even the next Fortran dialect (F77).

    I did write Fortran, not FORTRAN :-)

    I agree that C had many very useful things that pre-FORTRAN 90
    did not have. This is not surprising, since the authors of
    C knew FORTRAN well.

    [...]

    Overall, the differences in favor of C looks rather huge.

    You are arguing from the point of view of more than 30 years ago.

    On the other hand, I recollect only two higher level feature
    present in old Fortran that were absent in pre-99 C - VLA and
    Complex.

    You forget arrays as first-class citizens,

    In theory, this is an advantage. In practice - not so much.
    Old Fortran lacked two key features that make 1st-class arrays really
    useful - array length as an attribute and pass-by-value.

    You want to pass arrays by value? In practice, that would mean
    copy-in and copy-out. Is this something that you do often?

    So, one can enjoy his 1st class citizenship only with borders
    of procedure.

    Nope - you can declare a dummy array of DIMENSION (n,m,...), and
    then not have to worry about implementing the index arithmetic
    yourself. That was a big deal, in which C didn't follow Fortran.

    But numerical code was only an afterthought in C, as you can
    also see by its brain-damaged handling of errno for mathematical
    functions and the fact that everything is promoted to double -
    were sinf and friends even introduced before C99 (if you want to
    be historical)?


    and a reasonable way
    to pass multi-dimensional arrays.

    Considering total absence of inter-module check of matching dimensions,
    it's probably caused more troubles than it solved.

    Definitely not. Yes, you had to keep counting dimensions, which
    was a drag, but multi-dimensional arrays in C... whenever I needed
    those, I used Fortran instead, also in the pre-F90 days.


    Sure, you could roll them
    on your own with pointer arithmetic, but...


    OTOH, while C does not have formal concept of array slices, they are
    very easily and conveniently emulated in practice. Surely, not quite as nicely syntactically as in Modern Fortran, but equal to it on practical ground.

    Please show an example of how you would pass a 2*2 submatrix of a
    3*3 matrix in C.

    In Fortran, this is, on the caller's side,

    real, dimension(3,3) :: a

    call foo(a(1:2,1:2))

    or also

    call foo(a(1:3,1:3))

    and on the callee's side

    subroutine foo(a)
    real, dimension(:,:) :: a

    According to my understanding, emulation of slices in Old
    FORTRAN is more cumbersome.

    In the original post, I was talking about Fortran (=modern Fortran,
    F95ff), not F77 or earlier. So this is a bit of a red herring.

    The first feature can be emulated in almost satisfactory manner by
    dynamic allocation. Also, I am not sure that VLA were already part
    of standard Fortran language in 1976.

    It didn't.

    The second feature is very specialized and rather minor.

    Let's take a look at Fortran 95 vs. C99 (similar timeframe), and
    thrown in the allocatable TR as well, which everybody implemented.

    Fortran 95 already had (just going through
    https://en.wikipedia.org/wiki/Fortran_95_language_features
    and looking at the features that C does not have)

    - A sensible numeric model, where you can ask for a certain
    precision and range

    May be, it's good in theoretical sense, also I am not sure even about
    it.

    That's as may be.

    I most certainly don't like it as numerics professional (which I am
    formally not, but a lot closer to being such than an average programmer
    or an average physicists/chemist/biologist).
    I very much prefer IEEE-754 approach of fixed list of types with very strictly specified properties.

    If you want that, you can also have it (in more modern versions of
    Fortran than Fortran 95). But don't forget that, in this timeframe,
    there were still dinosaurs^W Cray and IBM-compatible mainframes
    roaming the computer centers, so it was eminently reasonable. Fortran
    then caught up with IEEE in 2003, and has very good support there.

    - Usable multi-dimensional arrays
    - Modules where you can specify accessibility
    - Intent for dummy arguments
    - Generics and overloaded operators

    Handy, but dangerous.

    Quite handy for putting in an operator like .cross. for
    the cross product of vectors, for example.


    - Assumed-shape arrays, where you don't need to pass array
    bounds explicitly
    - ALLOCATE and ALLOCATABLE variables, where the compiler
    cleans up after variables go out of scope

    How does it differ from automatic variables that C together with nearly
    all other Algol derivatives, had from the very beginning?

    You can allocate and deallocate whenever, so you have the
    flexibility of C's pointers with the deallocation handled
    by the compiler.

    For example

    type foo
    real, allocatable, dimension(:) :: x, y, z, f
    end type foo

    ...

    type(foo) :: p

    ...

    allocate (p%x(n), p%y(n), p%z(n), p%f(n))

    All of this will be deallocated when p gets out of scope.


    - Elemental operations and functions (so you can write
    foo + bar where foo is an array and bar is either an
    array or scalar)

    Yes, it is handy for certain classes of matrix and array processing.
    Still less powerful than similar features of Matlab and esp.
    of Gnu/Octave, where you have both matrix operations like * and
    cell-by-cell operations like .*

    Use MATMUL for the array operations, it's an intrinsic.

    Unfortunately, the meaning of code that intensively uses this features
    not always obvious to reader.
    C code that achieves the same effect with utility functions is looks
    much less nice, but at least it does not suffer from above mentioned
    problem.

    - Array subobjects, you can specify a start, an end and
    a stride in any dimension
    - Array intrinsics for shifting, packing, unpacking,
    sum, minimum value, ..., matrix multiplication and
    dot product

    The main feature I find lacking is unsigned numbers, but at
    least I'm doing something about that, a few decades later :-)

    I don't know much about typical users of Modern Fortran, but would
    think that those coming from other languages, esp. from Python, would appreciate built-in infinite-precision integers much more than unsigned integers.

    That wasn't the proposal I made.

    BTW, do your unsigned integers have defined behavior in case of
    overflow? Is it defined as a modulo 2**size?

    Yes.

    If the answers are yes, then may be you can find better name than
    'unsigned'?

    It's the name that C uses, and what people are used to. It is a
    bit out of my hands now, because the proposal has been accepted
    by J3, but what other suggestions would you have?

  • From Niklas Holsti@21:1/5 to Michael S on Thu Sep 5 15:43:02 2024
    On 2024-09-05 15:31, Michael S wrote:
    On Thu, 5 Sep 2024 11:36:22 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Wed, 4 Sep 2024 17:08:36 -0000 (UTC)

    [snip]

    BTW, do your unsigned integers have defined behavior in case of
    overflow? Is it defined as a modulo 2**size?

    Yes.

    If the answers are yes, then may be you can find better name than
    'unsigned'?

    It's the name that C uses, and what people are used to. It is a
    bit out of my hand now, because the proposal has been accepted
    by J3, but what other suggestions would you have?


    In Ada Language manual they are called Modular types, which is not bad. Unfortunately, specific modular types defined in Ada's predefined
    packages are named Unsigned_nn and Cardinal. Neither is a name I would suggest.


    The Unsigned_nn names are defined in the library package Ada.Interfaces,
    so the names were chosen to match common usage in other languages and in HW-speak. The "_nn" is target-specific, no values are standardized, but
    of course most common HW has some powers-of-two values.

    I can't find any occurrence of "Cardinal" in the Ada Reference Manual
    index. I don't think it is used in Ada; I think Modula-2 uses it.

  • From Michael S@21:1/5 to Thomas Koenig on Thu Sep 5 15:31:03 2024
    On Thu, 5 Sep 2024 11:36:22 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Wed, 4 Sep 2024 17:08:36 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no
    experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of
    dynamic memory and buffers.

    It is entirely possible to have correct use of memory in C,


    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding
    what it is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after
    all.

    I'd say that C in the form that stabilized around 1975-1976 is
    significantly higher level language than contemporary Fortran
    dialects or even the next Fortran dialect (F77).

    I did write Fortran, not FORTRAN :-)

    I agree that C had many very useful things that pre-FORTRAN 90
    did not have. This is not surprising, since the authors of
    C knew FORTRAN well.

    [...]

    Overall, the differences in favor of C looks rather huge.

    You are arguing from the point of view of more than 30 years ago.

    On the other hand, I recollect only two higher level feature
    present in old Fortran that were absent in pre-99 C - VLA and
    Complex.

    You forget arrays as first-class citizens,

    In theory, this is an advantage. In practice - not so much.
    Old Fortran lacked two key features that make 1st-class arrays
    really useful - array length as an attribute and pass-by-value.

    You want to pass arrays by value? In practice, that would mean
    copy-in and copy-out. Is this something that you do often?


    In the languages that I use daily it's not something you can decide freely.

    In one group (C, and to a slightly lesser extent C++) passing arrays by
    value is cumbersome, so I use it less than I probably would if it were
    more convenient.

    In the other group (Matlab/Octave) pass-by-value is the only available
    option, so I use it all the time, but it does not mean much.

    So, one can enjoy his 1st class citizenship only with borders
    of procedure.

    Nope - you can declare a dummy array of DIMENSION (n,m,...), and
    then not to have to worry about implementing the index arithmetic
    yourself. That was a big deal, in which C didn't follow Fortran.

    But numerical code was only an afterthought in C, as you can
    also see by its brain-damaged handling of errno for mathematical
    functions and the fact that everything is promoted to double -
    were sinf and friends even introduced before C99 (if you want to
    be historical)?


    and a reasonable way
    to pass multi-dimensional arrays.

    Considering total absence of inter-module check of matching
    dimensions, it's probably caused more troubles than it solved.

    Definitely not. Yes, you had to keep counting dimensions, which
    was a drag, but multi-dimensional arrays in C... whenever I needed
    those, I used Fortran instead, also in the pre-F90 days.


    Sure, you could roll them
    on your own with pointer arithmetic, but...


    OTOH, while C does not have formal concept of array slices, they are
    very easily and conveniently emulated in practice. Surely, not
    quite as nicely syntactically as in Modern Fortran, but equal to it
    on practical ground.

    Please show an example how you would pass an 2*2 submatrix of a
    3*3 matrix in C.


    I said 'arrays'. I never said that it is easy for matrices :(
    But it definitely works for matrices as well. The C binding of LAPACK is a
    good example of the typical API. The trick is not to forget to keep the
    leading dimension in a separate parameter from the number of columns
    (assuming C conventions for the order of elements in a matrix).
    I do use this trick in my signal-processing practice.
    But I agree that for the case of matrices C is only very slightly more
    convenient than old FORTRAN.
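
    A sketch of that leading-dimension trick (made-up names, row-major
    as assumed above): the callee gets the submatrix size and the row
    stride of the full matrix as separate parameters.

    #include <stdio.h>

    /* Print an m*n block of a matrix whose rows are 'ld' elements apart. */
    static void print_block(const double *a, int m, int n, int ld)
    {
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++)
                printf("%6.1f", a[i * ld + j]);
            printf("\n");
        }
    }

    int main(void)
    {
        double a[3][3] = { { 1, 2, 3 },
                           { 4, 5, 6 },
                           { 7, 8, 9 } };
        print_block(&a[0][0], 2, 2, 3);   /* the top-left 2*2 submatrix */
        return 0;
    }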

    In Fortran, this is, on the caller's side,

    real, dimension(3,3) :: a

    call foo(a(1:2,1:2))

    or also

    call foo(a(1:3,1:3))

    and on the callee's side

    subroutine foo(a)
    real, dimension(:,:) :: a

    According to my understanding, emulation of slices in Old
    FORTRAN is more cumbersome.

    In the original post, I was talking about Fortran (=modern Fortran,
    F95ff), not F77 or earlier. So this is a bit of a red herring.

    The first feature can be emulated in almost satisfactory manner
    by dynamic allocation. Also, I am not sure that VLA were already
    part of standard Fortran language in 1976.

    It didn't.

    The second feature is very specialized and rather minor.

    Let's take a look at Fortran 95 vs. C99 (similar timeframe), and
    thrown in the allocatable TR as well, which everybody implemented.

    Fortran 95 already had (just going through
    https://en.wikipedia.org/wiki/Fortran_95_language_features
    and looking at the features that C does not have)

    - A sensible numeric model, where you can ask for a certain
    precision and range

    May be, it's good in theoretical sense, also I am not sure even
    about it.

    That's as may be.

    I most certainly don't like it as numerics professional (which I am
    formally not, but a lot closer to being such than an average
    programmer or an average physicists/chemist/biologist).
    I very much prefer IEEE-754 approach of fixed list of types with
    very strictly specified properties.

    If you want that, you can also have it (in more modern versions of
    Fortran than Fortran 95). But don't forget that, in this timeframe,
    there were still dinosaurs^W Cray and IBM-compatible mainframes
    roaming the computer centers, so it was eminently reasonable. Fortran
    then caught up with IEEE in 2003, and has very good support there.

    - Usable multi-dimensional arrays
    - Modules where you can specify accessibility
    - Intent for dummy arguments
    - Generics and overloaded operators

    Handy, but dangerous.

    Quite handy for putting in an operator like .cross. for
    the cross product of vectors, for example.


    - Assumed-shape arrays, where you don't need to pass array
    bounds explicitly
    - ALLOCATE and ALLOCATABLE variables, where the compiler
    cleans up after variables go out of scope

    How does it differ from automatic variables that C together with
    nearly all other Algol derivatives, had from the very beginning?

    You can allocate and deallocate whenever, so you have the
    flexibility of C's pointers with the deallocation handled
    by the compiler.

    For example

    type foo
    real, allocatable, dimension(:) :: x, y, z, f
    end type foo

    ...

    type(foo) :: p

    ...

    allocate (p%x(n), p%y(n), p%z(n), p%f(n))

    All of this will be deallocated when p gets out of scope.


    I still don't see how it's more capable than C99, except for the minor
    ability to group automatic VLAs in a sort of struct.


    - Elemental operations and functions (so you can write
    foo + bar where foo is an array and bar is either an
    array or scalar)

    Yes, it is handy for certain classes of matrix and array processing.
    Still less powerful than similar features of Matlab and esp.
    of Gnu/Octave, where you have both matrix operations like * and cell-by-cell operations like .*

    Use MATMUL for the array operations, it's an intrinsic.

    Unfortunately, the meaning of code that intensively uses this
    features not always obvious to reader.
    C code that achieves the same effect with utility functions is looks
    much less nice, but at least it does not suffer from above mentioned problem.

    - Array subobjects, you can specify a start, an end and
    a stride in any dimension
    - Array intrinsics for shifting, packing, unpacking,
    sum, minimum value, ..., matrix multiplication and
    dot product

    The main feature I find lacking is unsigned numbers, but at
    least I'm doing something about that, a few decades later :-)

    I don't know much about typical users of Modern Fortran, but would
    think that those coming from other languages, esp. from Python,
    would appreciate built-in infinite-precision integers much more
    than unsigned integers.

    That wasn't the proposal I made.

    BTW, do your unsigned integers have defined behavior in case of
    overflow? Is it defined as a modulo 2**size?

    Yes.

    If the answers are yes, then may be you can find better name than 'unsigned'?

    It's the name that C uses, and what people are used to. It is a
    bit out of my hand now, because the proposal has been accepted
    by J3, but what other suggestions would you have?


    In the Ada language manual they are called Modular types, which is not
    bad. Unfortunately, the specific modular types defined in Ada's predefined
    packages are named Unsigned_nn and Cardinal. Neither is a name I would
    suggest.

  • From Thomas Koenig@21:1/5 to Michael S on Thu Sep 5 14:37:56 2024
    Michael S <already5chosen@yahoo.com> schrieb:
    On Thu, 5 Sep 2024 11:36:22 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Wed, 4 Sep 2024 17:08:36 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 3 Sep 2024 20:05:14 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Stefan Monnier <monnier@iro.umontreal.ca> schrieb:
    My impression - based on hearsay for Rust as I have no
    experience
    - is that the key point of Rust is memory "safety". I use
    scare-quotes here, since it is simply about correct use of
    dynamic memory and buffers.

    It is entirely possible to have correct use of memory in C,


    If you look at the evolution of programming languages,
    "higher-level" doesn't mean "you can do more stuff". On the
    contrary, making a language "higher-level" means deciding
    what it is we want to make harder or even impossible.

    Really?

    I thought Fortran was higher level than C, and you can do a lot
    more things in Fortran than in C.

    Or rather, Fortran allows you to do things which are possible,
    but very cumbersome, in C. Both are Turing complete, after
    all.

    I'd say that C in the form that stabilized around 1975-1976 is
    significantly higher level language than contemporary Fortran
    dialects or even the next Fortran dialect (F77).

    I did write Fortran, not FORTRAN :-)

    I agree that C had many very useful things that pre-FORTRAN 90
    did not have. This is not surprising, since the authors of
    C knew FORTRAN well.

    [...]

    Overall, the differences in favor of C looks rather huge.

    You are arguing from the point of view of more than 30 years ago.

    On the other hand, I recollect only two higher level feature
    present in old Fortran that were absent in pre-99 C - VLA and
    Complex.

    You forget arrays as first-class citizens,

    In theory, this is an advantage. In practice - not so much.
    Old Fortran lacked two key features that make 1st-class arrays
    really useful - array length as an attribute and pass-by-value.

    You want to pass arrays by value? In practice, that would mean
    copy-in and copy-out. Is this something that you do often?


    In languages that I use daily it's not something you can decide freely.

    My sympathies.

    It is not always possible to avoid packing/unpacking of arrays in
    Fortran, for example when passing a non-contiguous array slice
    to an old-style or contiguous array, but at least gfortran
    has -Warray-temporaries, so the user can be notified.

    In one group (C, to slightly less extent C++) passing arrays by value is cumbersome so I use it less than I would probably do if it was
    more convenient.

    It is also a potential performance killer - allocating the
    temporary array, copying (which will impact the cache), and
    then possibly doing the same thing in reverse.

    This is one reason why INTENT is quite useful, it can inform
    the compiler that copying in or copying out may not be needed.

    In other group (Matlab/Octave) pass-by-value is the only available
    option, so I use it all the time, but it does not mean much.

    I knew there's a reason for me not using matlab or octave :-)
    But of course, if it's your day job, you have little choice
    in the matter.

    I prefer Julia for the more script-oriented stuff, it can be
    quite fast.

    [...]

    Please show an example how you would pass an 2*2 submatrix of a
    3*3 matrix in C.


    I said 'arrays'. I never said that it is easy for matrices :(

    Two-dimensional arrays or matrices, I don't see a big difference.

    But it definitely works for matrices as well. C binding of LAPACK is a
    good example of the typical API.

    Lapack, whose C interface is as cumbersome as the original FORTRAN one,
    is a good example of why assumed-shape arrays are so powerful.

    The trick is to not forget to keep
    lead dimension in separate parameter from number of columns (assuming C conventions for order of elements in matrix).
    I do use this trick in my signal processing practice.
    But agree that for case of matrices C is only very slightly more
    convenient than old FORTRAN.

    Then we disagree - old-style FORTRAN was more convenient for arrays
    than C.


    In Fortran, this is, on the caller's side,

    real, dimension(3,3) :: a

    call foo(a(1:2,1:2))

    or also

    call foo(a(1:3,1:3))

    and on the callee's side

    subroutine foo(a)
    real, dimension(:,:) :: a

    Or, maybe even harder to do in C

    call foo(a(1:3:2,1:3:2))

    which will pass the elements with indices (1,1),(3,1),(1,3),(3,3)
    to foo, whose programmer can be perfectly oblivious of anything
    strange being passed; the arrays just work.

    [... going into F95 features...]

    - ALLOCATE and ALLOCATABLE variables, where the compiler
    cleans up after variables go out of scope

    How does it differ from automatic variables that C together with
    nearly all other Algol derivatives, had from the very beginning?

    You can allocate and deallocate whenever, so you have the
    flexibility of C's pointers with the deallocation handled
    by the compiler.

    For example

    type foo
    real, allocatable, dimension(:) :: x, y, z, f
    end type foo

    ...

    type(foo) :: p

    ...

    allocate (p%x(n), p%y(n), p%z(n), p%f(n))

    All of this will be deallocated when p gets out of scope.


    I still don't see how it's more capable than C99 except for minor
    ability to group automatic VLAs in sort of struct.

    You can do the following in Fortran:

    subroutine init_foo(p,n)
    type(foo), intent(out), dimension(:) :: p
    integer, intent(in) :: n
    integer :: i
    do i=1,size(p)
    allocate (p(i)%x(n), p(i)%y(n), p(i)%z(n))
    end do
    end subroutine init_foo

    init_foo will deallocate every component upon entry to the
    subroutine (automatically), and then it allocates the components x,
    y and z. The caller then can do things with it.

    This would not be possible with pointers to VLA-allocated memory
    in C.

    It is very much like pointers in C, except the compiler cleans
    up after the variables go out of scope.

    Fortran also has pointers, which can also point to arrays and
    array slices. For a pointer to point to anything, either
    you have to allocate it (same syntax as allocatables, but
    you have to deallocate explicitly), or you can associate
    it with a target, which has to be explicitly marked TARGET.

    Plus, you don't need to pass a pointer to a variable if
    you want its value set (or do array stuff on it). One of
    my pet peeves about C is that in

    int a;

    /* Set value for a. */
    foo(&a);
    ...
    bar();

    the value of a could be changed behind the programmer's back
    in bar() according to C's language definition.
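
    A minimal sketch of how that can happen (names made up): once foo()
    has seen &a, nothing stops it from remembering the pointer, and a
    later call to bar() may legally write through it.

    #include <stdio.h>

    static int *stashed;          /* foo() squirrels the address away */

    void foo(int *p)
    {
        *p = 1;                   /* set a value for the caller */
        stashed = p;
    }

    void bar(void)
    {
        if (stashed)
            *stashed = 42;        /* changes the caller's 'a' behind its back */
    }

    int main(void)
    {
        int a;
        foo(&a);
        bar();
        printf("%d\n", a);        /* prints 42, not 1 */
        return 0;
    }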

  • From Stephen Fuld@21:1/5 to Thomas Koenig on Thu Sep 5 10:52:51 2024
    On 9/5/2024 7:37 AM, Thomas Koenig wrote:


    I prefer Julia for the more script-oriented stuff, it can be
    quite fast.


    When I first saw Julia some years ago, I was very impressed. It
    certainly has some nice features. But apparently it hasn't caught on as quickly as I had hoped. :-(

    Can you talk about why you think it isn't more popular?



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From Thomas Koenig@21:1/5 to Stephen Fuld on Thu Sep 5 19:08:56 2024
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> schrieb:
    On 9/5/2024 7:37 AM, Thomas Koenig wrote:


    I prefer Julia for the more script-oriented stuff, it can be
    quite fast.


    When I first saw Julia some years ago, I was very impressed. It
    certainly has some nice features. But apparently it hasn't caught on as quickly as I had hoped. :-(

    Can you talk about why you think it isn't more popular?

    I can make guesses, but I'm not more informed than you.

    Python's popularity, due to the sheer number of people using it,
    is one reason. People who know Python will continue using it
    and see little reason to learn another language. Many don't care
    about Python's inefficiency when not using highly efficient compiled
    code and, truth be told, for many applications it doesn't matter,
    you have 3*10⁹ cycles to throw at it per second.
    But when it does, it suddenly starts to bite people...

    Also, Julia is simply less known than many other languages.

    Julia is quite popular in some areas, which results in some
    excellent packages. Autodifferentiation plays a large role there,
    which allows, for example, for excellent ODE solvers (a lot of
    cutting-edge ODE research seems to be done in Julia). If I needed
    to solve lots of coupled ODEs and had my own choice of tools,
    I would very probably use Julia.

  • From Stephen Fuld@21:1/5 to Thomas Koenig on Thu Sep 5 23:02:15 2024
    On 9/5/2024 12:08 PM, Thomas Koenig wrote:
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> schrieb:
    On 9/5/2024 7:37 AM, Thomas Koenig wrote:


    I prefer Julia for the more script-oriented stuff, it can be
    quite fast.


    When I first saw Julia some years ago, I was very impressed. It
    certainly has some nice features. But apparently it hasn't caught on as
    quickly as I had hoped. :-(

    Can you talk about why you think it isn't more popular?

    I can make guesses, but I'm not more informed than you.

    Python's popularity, due to the sheer number of people using it,
    is one reason. People who know Python will continue using it
    and see little reason to learn another language. Many don't care
    about Python's inefficiency when not using highly efficient compiled
    code and, truth be told, for many applications it doesn't matter,
    you have 3*10⁹ cycles to throw at it per second.

    Wow! I hadn't thought of that as a reason, but I think it is correct.
    The Python bandwagon keeps attracting more and more followers. I see
    from the Tiobe index that Python is number one by a huge amount and is
    even growing its lead.

    Thanks.


    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From Tim Rentsch@21:1/5 to Michael S on Fri Sep 6 08:15:52 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Thu, 5 Sep 2024 13:04:24 +0300
    Michael S <already5chosen@yahoo.com> wrote:

    I don't know much about typical users of Modern Fortran, but would
    think that those coming from other languages, esp. from Python, would
    appreciate built-in infinite-precision integers

    Somehow I feel that both "infinite-precision integers" and "arbitrary precision integers" are both misnomers. But they are established terms
    and I don't know how to express it better. May be, "arbitrary range" ?

    Knuth uses the term multiple-precision arithmetic, meaning operations
    with no fixed upper limit on range.

    In mathematical terminology, "infinite precision" is simply wrong;
    that should be "unbounded precision".

    Lisp has a long history of using the term Bignums (or is it BigNums?).

    I would like to see programming move in the direction of referring
    to 'integers' and 'limited-range integers', so multiple precision
    is the default unless specified otherwise.

    In Smalltalk, IIRC, there is class Integer, with subclasses
    SmallInteger, LargePositiveInteger, and LargeNegativeInteger.
    Both of the Large variants grow as needed. In fact that applies
    to SmallInteger objects as well: 10000 * 10000 * 10000 * 10000
    starts off with SmallInteger (for 10000) but ends up giving a LargePositiveInteger if SmallInteger cannot accommodate the
    resulting value.

  • From Michael S@21:1/5 to Thomas Koenig on Sun Sep 8 12:33:30 2024
    On Thu, 5 Sep 2024 14:37:56 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:

    In other group (Matlab/Octave) pass-by-value is the only available
    option, so I use it all the time, but it does not mean much.

    I knew there's a reason for me not using matlab or octave :-)
    But of course, if it's your day job, you have little choice
    in the matter.


    Since in Matlab/Octave a function can return as many arrays/matrices as
    one wants (by value, of course) and since memory management is
    automatic, it all ends up sufficiently convenient. And it is certainly
    easier to follow for the reader and less error-prone for the writer than
    most forms of passing arrays by reference.
    The only serious downside of this approach is a performance hit due to
    sometimes unnecessary copying and allocation/freeing. More often than
    not it's not a big deal and certainly not the main performance bottleneck
    of these environments.

  • From Thomas Koenig@21:1/5 to Michael S on Sun Sep 8 14:05:00 2024
    Michael S <already5chosen@yahoo.com> schrieb:
    On Thu, 5 Sep 2024 14:37:56 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:

    In other group (Matlab/Octave) pass-by-value is the only available
    option, so I use it all the time, but it does not mean much.

    I knew there's a reason for me not using matlab or octave :-)
    But of course, if it's your day job, you have little choice
    in the matter.


    Since in Matlab/Octave function can return as many arrays/matrices as
    one wants (by value, of course)

    If you want to, you can also do so in Fortran.

    and since memory management is
    automatic,

    If you want to, you can also do so in Fortran.

    You can also do allocation on assignment, so the size is
    calculated automatically for you (since Fortran 2003).

    (Of course, Fortran stole^H^H^H^H^H borrowed this feature from
    Matlab, but for a compiled language).

    it's all ends up sufficiently convenient. And certainly
    easier to follow for reader and less error prone to writer than most
    forms of passing arrays by reference.

    I find the Fortran notation of having INTENT(IN), INTENT(OUT) and
    INTENT(INOUT) very convenient; you get to say explicitly what
    you want, and the compiler will check it for you.

    The only serious downside of this approach is a performance hit due to sometimes unnecessary copying and allocation/freeing. More often than
    not it's not a big deal and certainly not a main performance bottleneck
    of this environments.

    In an interpreted language, I guess the focus is less on squeezing
    out the last bit of speed... in Fortran, we find this
    quite important.
