In no particular order.
* Software development can ONLY be done on a Unix-related OS
* It is impossible to develop any software, let alone C, on pure Windows
* You can spend decades developing and implementing systems languages at
the level of C, but you still apparently know nothing of the subject
* You can spend a decade developing whole-program compilers and a
suitably matched language, and somebody can still lecture you on exactly
what a whole-program compiler is, because you've got it wrong
* No matter how crazy the interface or behaviour of some Linux utility,
no one is ever going to admit there's anything wrong with it
* Discussing my C compiler is off-topic, but discussing gcc is fine
* Nobody here apparently knows how to build a program consisting purely
of C source files, using only a C compiler.
* Simply enumerating the N files and submitting them to the compiler in
any of several easy methods seems to be out of the question. Nobody has explained why.
* Nearly everyone here is working on massively huge and complex
projects, which all take from minutes to hours for a full build.
* There is not a single feature of my alternate systems language that is superior to the C equivalent
* Having fast compilation speed of C is of no use to anyone and
impresses nobody.
* Having code where you naughtily cast a function pointer to or from an object pointer is a no-no.
* There is no benefit at all in having a tool like a compiler be a
small, self-contained executable.
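For reference, the "enumerate the N files and submit them" build described in the list above can be sketched as follows, assuming a POSIX-style shell, a compiler driver installed as `cc`, and sources that form one complete program (the file names are illustrative):

```shell
# Build a program from nothing but C source files and a C compiler.
cc -O2 -o myprog *.c
# Or enumerate the N files explicitly:
cc -O2 -o myprog main.c util.c io.c
```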
On 2/4/2024 9:58 PM, Kaz Kylheku wrote:
On 2024-02-05, bart <bc@freeuk.com> wrote:
In no particular order.
* Software development can ONLY be done on a Unix-related OS
* It is impossible to develop any software, let alone C, on pure Windows
I've developed on DOS,
TSR's?
[...]
* Hardly anybody here has a project which can be built simply by
compiling and linking all the modules.
Even Tim Rentsch's simplest project has a dizzying set of special requirements.
On 2/4/2024 11:51 PM, Chris M. Thomasson wrote:
Nice. I only messed around with them a couple of times. There was a cool
one, iirc, called key correspondence (KEYCOR). It was
programmable, and could be used with any program. I used it for a
reporting system and to control WordPerfect 5.1. I still have it! lol.
For legacy purposes.
Writing WordPerfect 5.1 macros was a fun time.... ;^o
[...]
* Nobody here apparently knows how to build a program consisting purely
of C source files, using only a C compiler.
In no particular order.
Shall I post this pile of crap or not?
I really need to get back to some of those pointless, worthless toy
projects of mine.
On 05/02/2024 02:09, bart wrote:
In no particular order.
<snip>
Shall I post this pile of crap or not?
No. It is, after all, a pile of crap.
bart <bc@freeuk.com> writes:
[...]
* Hardly anybody here has a project which can be built simply by
compiling and linking all the modules.
Not everyone is working on a project where the deliverable is a
single executable. It's much more difficult to work on a project
where the deliverables form a set of related files that make up a
third-party library, and target multiple platforms.
Even Tim Rentsch's simplest project has a dizzying set of special
requirements.
This statement is a misrepresentation, and undoubtedly a deliberate
one. Furthermore how it is expressed is petty and childish.
On 2024-02-05, bart <bc@freeuk.com> wrote:
In no particular order.
* Software development can ONLY be done on a Unix-related OS
* It is impossible to develop any software, let alone C, on pure
Windows
I've developed on DOS, Windows as well as for DSP chips and some microcontrollers. I find most of the crap that you say is simply
wrong.
Speaking of Windows, the CL.EXE compiler does not know where its
include files are. You literally cannot do "cl program.c".
You have to give it options which tell it where the SDK is installed:
where the headers and libraries are.
The Visual Studio project-file-driven build system passes all
those details to every invocation of CL.EXE. Your project file (called
a "solution" nowadays) includes information like the path where your
SDK is installed. In the GUI there is some panel where you specify
it.
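A sketch of the usual command-line workaround: Microsoft ships batch files (vcvars64.bat and friends) that export INCLUDE, LIB and PATH into the environment, after which a bare cl invocation does work. The install path below is illustrative and varies by Visual Studio version and edition.

```bat
:: Illustrative path - adjust for your Visual Studio version/edition.
call "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"

:: vcvars64.bat has now set INCLUDE, LIB and PATH, so this works:
cl program.c
```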
If I'm going to be doing programming on Windows today, it's either
going to be some version of that CL.EXE compiler from Microsoft, or GCC.
Is it due to decades of legacy code in GCC? Clang is a newer implementation, so you might think it's faster than GCC. But it
manages only to be about the same.
* Nearly everyone here is working on massively huge and complex
projects, which all take from minutes to hours for a full build.
That's the landscape. Nobody is going to pay you for writing small
utilities in C. That sort of thing all went to scripting languages.
(It happens from time to time as a side task.)
I currently work on a firmware application that compiles to a 100
megabyte (stripped!) executable.
On Mon, 5 Feb 2024 05:58:55 -0000 (UTC)
Kaz Kylheku <433-929-6894@kylheku.com> wrote:
Is it due to decades of legacy code in GCC? Clang is a newer
implementation, so you might think it's faster than GCC. But it
manages only to be about the same.
I still believe that "decades of legacy" are the main reason.
clang *was* much faster than gcc 10-12 years ago. Since then it has
accumulated a decade of legacy of its own. And this particular decade
mostly consisted of code written by people who (a) are less experienced
than gcc maintainers and (b) care about speed of compilation even less
than gcc maintainers. Well, for the latter, I don't really believe that
it is possible, but I need to offer a plausible explanation, don't I?
On 05/02/2024 17:32, Michael S wrote:
On Mon, 5 Feb 2024 05:58:55 -0000 (UTC)
Kaz Kylheku <433-929-6894@kylheku.com> wrote:
Is it due to decades of legacy code in GCC? Clang is a newer
implementation, so you might think it's faster than GCC. But it
manages only to be about the same.
I still believe that "decades of legacy" are the main reason.
clang *was* much faster than gcc 10-12 years ago. Since then it has
accumulated a decade of legacy of its own. And this particular decade
mostly consisted of code written by people who (a) are less experienced
than gcc maintainers and (b) care about speed of compilation even less
than gcc maintainers. Well, for the latter, I don't really believe that
it is possible, but I need to offer a plausible explanation, don't I?
Early clang was faster than gcc at compilation and static error checking.
And it had much nicer formats and outputs for its warnings. But it
wasn't close to gcc for optimisation and generated code efficiency, and
had less powerful checking.
Over time, clang has gained a lot more optimisation and is now similar
to gcc in code generation (each is better at some things), while gcc has
sped up some aspects and greatly improved the warning formats.
clang is now a similar speed to gcc because it does a similar job. It
turns out that doing a lot of analysis and code optimisation takes effort.
I have reached the point where it's not worth my time to respond
to bart, even to correct his misrepresentations of what I and
others have said.
On Mon, 5 Feb 2024 19:02:09 +0200, Michael S wrote:
Windows by itself is not a measurable slowdown, but antivirus is, and
until now I didn't find a way to get antivirus-free Windows at work.
But if you don’t have antivirus on your build machine, the sad fact of development on Windows is that there are viruses that will insinuate themselves into the build products.
On Mon, 5 Feb 2024 23:28:04 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Mon, 5 Feb 2024 19:02:09 +0200, Michael S wrote:
Windows by itself is not a measurable slowdown, but antivirus is,
and until now I didn't find a way to get antivirus-free Windows at
work.
But if you don’t have antivirus on your build machine, the sad fact
of development on Windows is that there are viruses that will
insinuate themselves into the build products.
No, if I use Windows there is no danger of viruses like these.
Besides, it's not like antivirus could have helped against viruses if
I was stupid enough to catch them. On the contrary, I suspect that the
presence of antivirus increases the attack surface.
On 2024-02-05, David Brown <david.brown@hesbynett.no> wrote:
On 05/02/2024 17:32, Michael S wrote:
On Mon, 5 Feb 2024 05:58:55 -0000 (UTC)
Kaz Kylheku <433-929-6894@kylheku.com> wrote:
Is it due to decades of legacy code in GCC? Clang is a newer
implementation, so you might think it's faster than GCC. But it
manages only to be about the same.
I still believe that "decades of legacy" are the main reason.
clang *was* much faster than gcc 10-12 years ago. Since then it has
accumulated a decade of legacy of its own. And this particular decade
mostly consisted of code written by people who (a) are less experienced
than gcc maintainers and (b) care about speed of compilation even less
than gcc maintainers. Well, for the latter, I don't really believe that
it is possible, but I need to offer a plausible explanation, don't I?
Early clang was faster than gcc at compilation and static error checking.
And it had much nicer formats and outputs for its warnings. But it
wasn't close to gcc for optimisation and generated code efficiency, and
had less powerful checking.
Over time, clang has gained a lot more optimisation and is now similar
to gcc in code generation (each is better at some things), while gcc has
sped up some aspects and greatly improved the warning formats.
clang is now a similar speed to gcc because it does a similar job. It
turns out that doing a lot of analysis and code optimisation takes effort.
It takes more and more effort for diminishing results.
A compiler can spend a lot of time just searching for the conditions
that allow a certain optimization, where those conditions turn out to be false most of the time. So that in a large code base, there will be just
a couple of "hits" (the conditions are met, and the optimization can
take place). Yet all the instruction sequences in every basic block in
every file had to be looked at to determine that.
Many of these conditions are specific to the optimization. Another
kind of optimization has its own conditions that don't reuse anything
from that one. So the more optimizations you add, the more work it takes
just to determine applicability.
The optimizer may have to iterate on the program graph. After certain optimizations are applied, the program graph changes. And that may
"unlock" more opportunities to do optimizations that were not possible before. But because the program graph changed, its properties have to be recalculated, like liveness of variables/temporaries and whatnot.
More time.
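The "search for conditions that are usually false" point can be sketched with a toy peephole pass. Everything here - the IR, the single rule, the function name - is invented for illustration: the pass must look at every instruction even when the rule almost never fires.

```c
#include <stddef.h>

/* Toy IR: an opcode plus an immediate operand. */
enum op { OP_LOAD, OP_ADD, OP_MUL };
struct insn { enum op op; int imm; };

/* One peephole rule: "multiply by 1" is a no-op and can be dropped.
   Every instruction is inspected; only the rare hits change anything. */
size_t drop_mul_by_one(struct insn *code, size_t n)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (code[i].op == OP_MUL && code[i].imm == 1)
            continue;              /* condition met: optimization applies */
        code[out++] = code[i];     /* usual case: copy through unchanged */
    }
    return out;                    /* new instruction count */
}
```

A real compiler has dozens of such rules, each with its own applicability test, which is where the scan time goes.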
A compiler can spend a lot of time just searching for the conditions
that allow a certain optimization, where those conditions turn out
to be false most of the time. So that in a large code base, there
will be just a couple of "hits" (the conditions are met, and the optimization can take place). Yet all the instruction sequences in
every basic block in every file had to be looked at to determine
that.
This is always the case with optimisations. Each pass might only
give a few percent increase in speed - but when you have 50 passes,
this adds up to a lot. And some passes (that is, some types of
optimisation) can open up new opportunities if you redo previous
passes.
And the same applies to static error checking - there is
quite an overlap in the kinds of analysis used for optimisations and
for static error checking.
On Tue, 6 Feb 2024 09:44:20 +0100
David Brown <david.brown@hesbynett.no> wrote:
A compiler can spend a lot of time just searching for the conditions
that allow a certain optimization, where those conditions turn out
to be false most of the time. So that in a large code base, there
will be just a couple of "hits" (the conditions are met, and the
optimization can take place). Yet all the instruction sequences in
every basic block in every file had to be looked at to determine
that.
This is always the case with optimisations. Each pass might only
give a few percent increase in speed - but when you have 50 passes,
this adds up to a lot. And some passes (that is, some types of
optimisation) can open up new opportunities if you redo previous
passes.
Except that gcc, at least, by design never redoes previous passes. More
than that, it does not even try to compare the result of optimization
with a certain pass against the result without this pass and take the
better of the two.
I don't know if the same applies to clang; I never had
conversations with clang maintainers (I've had plenty with gcc
maintainers). However, the bottom line for the last 2-3 years is that
when I compare the speed of gcc-compiled code vs clang-compiled code,
both can do a good job and both can do ordinarily stupid things, but
clang is much more likely than gcc to do astonishingly stupid things.
Like, for example, vectorization that reduces the speed by a factor of
3 vs the non-vectorized variant.
So, most likely, clang also proceeds pass after pass after pass and
never ever looks back. Seems like they took the lesson of Lot's wife
very seriously.
And the same applies to static error checking - there is
quite an overlap in the kinds of analysis used for optimisations and
for static error checking.
They reuse "temp" variables instead of making new ones.
And of course there are those two or three unfortunate people that have
to work with embedded Windows.
On 05/02/2024 01:09, bart wrote:
In no particular order.
* Software development can ONLY be done on a Unix-related OS
* It is impossible to develop any software, let alone C, on pure Windows
* You can spend decades developing and implementing systems languages
at the level of C, but you still apparently know nothing of the subject
The tone's currently rather bad, and somehow it has developed that you
and I are on one side and pretty much everyone else on the other. We
both have open source projects which are or at least attempt to be
actually useful to other people, whilst I don't think many of the others
can say that, and maybe that's the underlying reason. But who knows.
I'm trying to improve the tone. It's hard because people have got lots
of motivations for posting, and some of them aren't very compatible with
a good humoured, civilised group. And we've got a lot of bad behaviour,
not all of it directed at us by any means. However whilst you're very
critical of other people's design decisions, I've rarely if ever seen
you criticise someone's general character.
But finally tolerance has snapped.
Well we've both posted code of sizeable, actual and practical projects.
Very few on the 'other side' have. Maybe it's proprietary or there are
other reasons. But it means their own output can't be criticised here.
On 2024-02-05, bart <bc@freeuk.com> wrote:
Writing a compiler is pretty easy, because the bar can be set very low
while still calling it a compiler.
Whole-program compilers are easier because there are fewer requirements.
You have only one kind of deliverable to produce: the executable.
You don't have to deal with linkage and produce a linkable format.
GCC is maintained by people who know what a C compiler is, and GCC can
be asked to be one.
Your idea of writing a C compiler seems to be to pick some random
examples of code believed to be C and make them work. (Where "work"
means that they compile and show a few behaviors that look like
the expected ones.)
Basically, you don't present a very credible case that you've actually written a C compiler.
I currently work on a firmware application that compiles to a 100
megabyte (stripped!) executable.
* There is not a single feature of my alternate systems language that is
superior to the C equivalent
The worst curve ball someone could throw you would be to
be eagerly interested in your language, and ask for guidance
in how to get it installed and start working in it.
Not as much as fast executable code, unfortunately.
Compilers that blaze through large amounts of code in the blink of an
eye are almost certainly dodging on the optimization.
And because they
don't need the internal /architecture/ to support the kinds of
optimizations they are not doing, they can speed up the code generation
also. There is no need to generate an intermediate representation like
SSA; you can pretty much just parse the syntax and emit assembly code in
the same pass. Particularly if you only target one architecture.
A poorly optimizing retargetable compiler that emits an abstract
intermediate code will never be as blazingly fast as something equally
poorly optimizing that goes straight to code in one pass.
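The "parse the syntax and emit assembly code in the same pass" idea can be sketched with a toy translator. Everything here - the grammar (digit '+' digit ...), the stack-machine pseudo-assembly, the function name - is invented for illustration; the point is that code comes out as the input is parsed, with no IR and no second pass.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Toy single-pass translator: appends pseudo-assembly to `out`
   while scanning `src`. No intermediate representation is built. */
void compile_expr(const char *src, char *out, size_t cap)
{
    char line[32];
    out[0] = '\0';
    if (isdigit((unsigned char)*src)) {              /* first operand */
        snprintf(line, sizeof line, "push %c\n", *src++);
        strncat(out, line, cap - strlen(out) - 1);
    }
    while (*src == '+') {                            /* remaining terms */
        src++;
        if (isdigit((unsigned char)*src)) {
            snprintf(line, sizeof line, "push %c\n", *src++);
            strncat(out, line, cap - strlen(out) - 1);
        }
        strncat(out, "add\n", cap - strlen(out) - 1); /* emitted mid-parse */
    }
}
```

For example, compile_expr("1+2", buf, sizeof buf) leaves "push 1\npush 2\nadd\n" in buf, each line written the moment the parser saw the corresponding token.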
* There is no benefit at all in having a tool like a compiler, be a
small, self-contained executable.
Not as much as there used to be, decades ago.
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is as easy
as sticking a pair of braces around a few statements.
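A small sketch of that brace trick (the function and names are invented for illustration):

```c
/* The bare braces limit `tmp` to the statements that need it;
   after the block it is out of scope and cannot be misused. */
int clamp_sum(int a, int b)
{
    int result;
    {
        long tmp = (long)a + b;   /* exists only inside this block */
        result = (tmp > 100) ? 100 : (int)tmp;
    }
    /* referring to tmp here would be a compile error */
    return result;
}
```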
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people that have
to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
On 05/02/2024 05:58, Kaz Kylheku wrote:
On 2024-02-05, bart <bc@freeuk.com> wrote:
Writing a compiler is pretty easy, because the bar can be set very low
while still calling it a compiler.
Whole-program compilers are easier because there are fewer requirements.
You have only one kind of deliverable to produce: the executable.
You don't have to deal with linkage and produce a linkable format.
David Brown suggested that they were harder than I said. You're saying
they are easier.
GCC is maintained by people who know what a C compiler is, and GCC can
be asked to be one.
So what is it when it's not a C compiler? What language is it compiling
here:
Whatever language it is that mcc processes must be similar to the one that gcc processes.
Yet it is true that gcc can be tuned to a particular standard, dialect,
set of extensions and a set of user-specified behaviours. Which means it
can also compile some Frankensteinian version of 'C' that anyone can
devise.
Mine at least is a more rigid subset.
Your idea of writing a C compiler seems to be to pick some random
examples of code believed to be C and make them work. (Where "work"
means that they compile and show a few behaviors that look like
the expected ones.)
That's what most people expect!
Making some "temp" variables and re-using them was also common for some people in idiomatic C90 code, where all your variables are declared at the top of the function.
On 07/02/2024 00:24, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people that
have to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
It was never common in the first place, and yes, it is almost
entirely non-existent now. I'm sure there are a few legacy products
still produced that use some kind of embedded Windows, but few more
than that
- which is what I was hinting at in my post.
On 2024-02-07, bart <bc@freeuk.com> wrote:
Well we've both posted code of sizeable, actual and practical projects.
Very few on the 'other side' have. Maybe it's proprietory or there are
other reasons. But it means their own output can't be criticised here.
Posting large amounts of code into discussion groups isn't practical,
and against netiquette.
The right thing is to host your code somewhere (which it behooves you to
do for obvious other reasons) and post a link to it.
People used to share code via comp.sources.*. Some well-known old
projects first made their appearance that way. E.g. Dick Grune posted
the first version of CVS in what was then called mod.sources in 1986.
On 07/02/2024 07:54, David Brown wrote:
On 07/02/2024 00:23, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is as
easy as sticking a pair of braces around a few statements.
Generally, you want to have the minimum practical scope for your local
variables. It's rare that you need to add braces just to make a scope
for a variable - usually you have enough braces in loops or conditionals
- but it happens.
The two common patterns are to give each variable the minimum scope, or
to declare all variables at the start of the function and give them all
function scope.
The case for minimum scope is the same as the case for scope itself. The
variable is accessible where it is used and not elsewhere, which makes
it less likely it will be used in error, and means there are fewer names
to understand.
However there are also strong arguments for function scope. A function
is a natural unit. And all the variables used in that unit are listed
together and, ideally, commented. So at a glance you can see what is in
scope and what is being operated on. And there are only three levels of
scope. A variable is global, or it is file scope, or it is scoped to the
function.
I tend to prefer function scope for C. However I use a lot of C++ these
days, and in C++ local scope is often better, and in some cases even
necessary. So I find that I'm tending to use local scope in C more.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
However there are also strong arguments for function scope. A function
is a natural unit. And all the variables used in that unit are listed
together and, ideally, commented. So at a glance you can see what is in
scope and what is being operated on. [typos fixed]
You should not need an inventory of what's being operated on. Any
function so complex that I can't tell immediately what declaration corresponds to which name needs to be re-written.
And there are only three levels of scope. A
variable is global, or it is file scope, or it is scoped to the
function.
You are mixing up scope and lifetime. C has no "global scope". A name
may have external linkage (which is probably what you are referring to),
but that is not directly connected to its scope.
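Ben's distinction can be shown in a couple of lines (the variable names are invented here): both objects below have file scope; what people casually call "global" is really the linkage.

```c
/* Both names have file scope; they differ in linkage. */
static int counter = 0;  /* internal linkage: visible in this file only */
int shared = 0;          /* external linkage: other translation units
                            can declare and use this object */
```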
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
However there are also strong arguments for function scope. A function
is a natural unit. And all the variables used in that unit are listed
together and, ideally, commented. So at a glance you can see what is in
scope and what is being operated on. [typos fixed]
You should not need an inventory of what's being operated on. Any
function so complex that I can't tell immediately what declaration
corresponds to which name needs to be re-written.
But if you keep functions small, e.g. so that the whole body is visible
at the same time, then there is less need for declarations to clutter up
the code. They can go at the top, so that you can literally just glance
there.
And there are only three levels of scope. A
variable is global, or it is file scope, or it is scoped to the
function.
You are mixing up scope and lifetime. C has no "global scope". A name
may have external linkage (which is probably what you are referring to),
but that is not directly connected to its scope.
Funny, I use the same definitions of scope:
On 07/02/2024 09:59, Malcolm McLean wrote:
On 07/02/2024 07:54, David Brown wrote:
On 07/02/2024 00:23, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is
as easy as sticking a pair of braces around a few statements.
Generally, you want to have the minimum practical scope for your
local variables. It's rare that you need to add braces just to make
a scope for a variable - usually you have enough braces in loops or
conditionals - but it happens.
The two common patterns are to give each variable the minimum scope,
or to declare all variables at the start of the function and give them
all function scope.
The case for minimum scope is the same as the case for scope itself.
The variable is accessible where it is used and not elsewhere, which
makes it less likely it will be used in error, and means there are
fewer names to understand.
It makes code simpler, clearer, easier to reuse, easier to see that it
is correct, and easier to see if there is an error. It is very much
easier for automatic tools (static warnings) to spot issues.
However there are also strong arguments for function scope.
Not in my experience and in my opinion.
A function is a natural unit.
True, but irrelevant.
And all the variables used in that unit are listed together and,
ideally, commented.
In reality, not commented. And if commented, then commented incorrectly.
Rather than trying to write vague comments to say what something is or how
it is used, it is better to write the code so that it is clear. Giving
if you think a variable needs a comment, your code is not clear enough
or has poor structure.
It is /massively/ simpler and clearer to write :
for (int i = 0; i < 10; i++) { ... }
than
int i;
/* ... big gap ... */
for (i = 0; i < 10; i++) { ... }
It doesn't help if you have "int loop_index;" or add a comment to the variable definition. Putting it at the loop itself is better.
So at a glance you can see what is in scope and what is being operated
on. And there are only three levels of scope. A variable is global, or
it is file scope, or it is scoped to the function.
Every block is a new scope. Function scope in C is only for labels.
I tend to prefer function scope for C. However I use a lot of C++
these days, and in C++ local scope is often better, and in some cases
even necessary. So I find that I'm tending to use local scope in C more.
I hate having to work with code written in long-outdated "declare
everything at the top of the function" style. I realise style and experience are subjective, but I have not seen any code or any argument
that has led me to doubt my preferences.
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 07:54, David Brown wrote:
On 07/02/2024 00:23, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is as
easy as sticking a pair of braces around a few statements.
Generally, you want to have the minimum practical scope for your local
variables. It's rare that you need to add braces just to make a scope
for a variable - usually you have enough braces in loops or
conditionals - but it happens.
The two common patterns are to give each variable the minimum scope, or
to declare all variables at the start of the function and give them all
function scope.
The term "function scope" has a specific meaning in C. Only labels have
function scope. I know you are not very interested in using exact
terms, but some people might like to know the details.
To explain this, if we have
void function(void)
{
    int i;
    for (i = 0; i < 10; i++)
        dosomething();
    if (condition)
    {
        int i;
        for (i = 0; i < 11; i++)
            dosomething();
        if (i == 10)
            ;  /* always false */
    }
}
The first i is not in scope when we test for i == 10, and the test will
be false. So "function scope" isn't the term.
However if we have this:
void function(void)
{
label:
    dosomething();
    if (condition)
    {
    label:
        dosomething();
    }
    goto label;
}
Then it is an error. Both labels are in scope and that isn't allowed.
Since you want to argue for the peculiar (but common) practice of giving
names the largest possible scope (without altering their linkage) you
need a term for the outer-most block scope, but "function scope" is
taken.
So "function scope" isn't the correct term. So we need another. I expect
that at this point someone will jump in and say it must be "Malcolm
scope". As you say, it's common enough to need a term for it.
The case for minimum scope is the same as the case for scope itself.
Someone might well misinterpret the term "minimum scope" since it would
require adding lots of otherwise redundant braces. I *think* you mean
declaring names at the point of first use. The resulting scope is not
minimum because it often extends beyond the point of last use.
Yes, I don't mean literally the minimum scope that would be possible by artificially ending a block when a variable is used for the last time.
No one would do that. I mean that the variable is either declared at
point of first use or, if this isn't allowed because of the C version,
at the top of the block in which it is used. But also that variables are
not reused if in fact the value is discarded between statements or
especially between blocks.
Other people, not familiar with "modern" C, might interpret the term to
mean declaring names at the top of the inner-most appropriate block.
Top of the block or point of first use?
The variable is accessible where it is used and not elsewhere, which
makes it less likely it will be used in error, and means there are fewer
names to understand.
The case for declaration at first use is much stronger than this. It
almost always allows for a meaningful initialisation at the same point,
so the initialisation does not need to be hunted down and checked. For
me, this is a big win. (Yes, some people then insist on a dummy
initialisation when the proper one isn't known, but that's a fudge that
is, to my mind, even worse.)
If you go for top of block and you don't have a value, you either
initialise, usually to zero, or leave it wild. Neither is ideal.
But it
rarely makes a big difference. However if you go for policy two, all the variables are either given initial values at the top of the function or
they are not given initial values at the top of the function,and so you
can easily check, and ensure that all the initial values are consistent
woth each other.
We could call it outer-most block scope rather than re-use a term with
an existing, but different, technical meaning.
The variable has scope within the function, within the whole of the
function, and the motive is that the function is the natural unit of
thought. So I think we need the word "function".
However I use a lot of C++ these
days, and in C++ local scope is often better, and in some cases even
necessary. So I find that I'm tending to use local scope in C more.
Interesting. Is it just that using C++ has given you what you would
think of as a bad habit in C, or has using C++ led you to see that your
old preference was not the best one?
Not sure. If I thought it was a terrible habit of course I wouldn't do
it. I do think it makes the code look a little bit less clear. But it's slightly easier to write and hack, which is why I do it.
David Brown <david.brown@hesbynett.no> writes:
Making some "temp" variables and re-using them was also common for some
people in idiomatic C90 code, where all your variables are declared at the >> top of the function.
The comma suggests (I think) that it is C90 that mandates that all one's variables are declared at the top of the function. But that's not the
case (as I am sure you know).
The other reading -- that this is done in
idiomatic C90 code -- is also something that I'd question, but not
something that I'd want to argue.
I comment just because there seems to be a myth that "old C" had to have
all the declarations at the top of a function. That was true once, but
so long ago as to be irrelevant. Even K&R C allowed declarations at the
top of a compound statement.
On Wed, 7 Feb 2024 08:56:15 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 07/02/2024 00:24, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people that
have to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
It was never common in the first place, and yes, it is almost
entirely non-existent now. I'm sure there are a few legacy products
still produced that use some kind of embedded Windows, but few more
than that
- which is what I was hinting at in my post.
Is there any digital oscilloscope that is not Windows under the hood?
How about medical equipment?
The first question is mostly rhetorical, the second is not.
On 07/02/2024 12:04, bart wrote:
Funny, I use the same definitions of scope:
For discussions of C, it's best to use the well-defined C terms for
scope and lifetime. Other languages may use different terms.
On 07/02/2024 11:04, Ben Bacarisse wrote:
David Brown <david.brown@hesbynett.no> writes:
Making some "temp" variables and re-using them was also common for some
people in idiomatic C90 code, where all your variables are declared at the
top of the function.
The comma suggests (I think) that it is C90 that mandates that all one's
variables are declared at the top of the function. But that's not the
case (as I am sure you know).
Yes.
The other reading -- that this is done in
idiomatic C90 code -- is also something that I'd question, but not
something that I'd want to argue.
"Idiomatic" is perhaps not the best word. (And "idiotic" is too strong!)
I mean written in a way that is quite common in C90 code.
On 07/02/2024 13:01, David Brown wrote:
On 07/02/2024 09:59, Malcolm McLean wrote:
The case for minimum scope is the same as the case for scope itself.
The variable is accessible where it is used and not elsewhere, which
makes it less likely it will be used in error, and means there are
fewer names to understand.
It makes code simpler, clearer, easier to reuse, easier to see that it
is correct, and easier to see if there is an error. It is very much
easier for automatic tools (static warnings) to spot issues.
This is all true, but only in one way. Whilst it's easier to see that
there are errors in one way, because you have to look at a smaller
section of code, it's harder in others, for example because that small
section is more cluttered. From experience with automatic tools, they
give too many false warnings for correct code, and then programmers
often rewrite the code less clearly to suppress the warning.
However there are also strong arguments for function scope.
Not in my experience and in my opinion.
That's not a legitimate response. The correct thing to say is "you have
given an argument there but I don't think it is a strong one". Unless
you are claiming to be experienced in arguing with people over scope,
and I don't think that is what you mean to say.
A function is a natural unit.
True, but irrelevant.
And all the variables used in that unit are listed together and,
ideally, commented.
In reality, not commented. And if commented, then commented incorrectly.
Variable names mean something. The classic name for a variable is "x".
This usually means either "the value that is given" or "the horizontal
value on an axis". But it can of course mean "a value which we shall
calculate that doesn't have an obvious other name", or even maybe "the
number of times the letter "x" appears in the data". It depends on
context. However the important thing is that x should always mean the
same thing within the same function.
Rather than trying to write vague comments to say what something is or
how it is used, it is better to write the code so that it is clear.
Giving variables appropriate names is part of that. For the most
part, I'd say if you think a variable needs a comment, your code is
not clear enough or has poor structure.
I prefer short variable names because it is the mathematical convention
and because it makes complex expressions easier to read. But of course
then they can't be as meaningful. So to use a short name and add a
comment is a reasonable way to achieve both goals. This pattern is
quite common in C.
It is /massively/ simpler and clearer to write :
for (int i = 0; i < 10; i++) { ... }
than
int i;
/* ... big gap ... */
for (i = 0; i < 10; i++) { ... }
It doesn't help if you have "int loop_index;" or add a comment to the
variable definition. Putting it at the loop itself is better.
for (i = 0; i < N; i++)
    if (x[i] == 0)
        break;
if (i == N) /* no zero found */
So you can't scope the counter to the loop.
i is always a loop index. Usually I just put one at the top so it is
hanging around and handy.
I hate having to work with code written in long-outdated "declare
everything at the top of the function" style. I realise style and
experience are subjective, but I have not seen any code or any
argument that has led me to doubt my preferences.
I quite often work with code which was written a very long time ago and
is still useful. That's one of the big strengths of C. It is subjective
however. It's not about making life easier for the compiler. It's about
what is clearer. That depends on the way people read code and think
about it, and that won't necessarily be the same for every person.
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
However there are also strong arguments for function scope. A function
is a natural unit. And all the variables used in that unit are listed
together and, ideally, commented. So at a glance you can see what is in
scope and what is being operated on. [typos fixed]
You should not need an inventory of what's being operated on. Any
function so complex that I can't tell immediately what declaration
corresponds to which name needs to be re-written.
But if you keep functions small, e.g. the whole body is visible at the
same time, then there is less need for declarations to clutter up the
code. They can go at the top, so that you can literally just glance there.
And there are only three levels of scope. A
variable is global, or it is file scope, or it is scoped to the
function.
You are mixing up scope and lifetime. C has no "global scope". A name
may have external linkage (which is probably what you are referring to),
but that is not directly connected to its scope.
Funny, I use the same definitions of scope:
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 07:54, David Brown wrote:
On 07/02/2024 00:23, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is as
easy as sticking a pair of braces around a few statements.
Generally, you want to have the minimum practical scope for your local
variables. It's rare that you need to add braces just to make a scope
for a variable - usually you have enough braces in loops or conditionals
- but it happens.
The two common patterns are to give each variable the minimum scope, or
to declare all variables at the start of the function and give them all
function scope.
The term "function scope" has a specific meaning in C. Only labels have
function scope. I know you are not very interested in using exact
terms, but some people might like to know the details.
To explain this, if we have
void function(void)
{
    int i;
    for (i = 0; i < 10; i++)
        dosomething();
    if (condition)
    {
        int i;
        for (i = 0; i < 11; i++)
            dosomething();
        if (i == 10)    /* always false */
            dosomething();
    }
}
The first i is not in scope when we test for i == 10 and the test will
be false. So "function scope" isn't the term.
However if we have this:
void function(void)
{
label:
    dosomething();
    if (condition)
    {
    label:
        dosomething();
    }
    got label:
}
(you mean "goto label;")
Then it is an error. Both labels are in scope and that isn't allowed.
Since you want to argue for the peculiar (but common) practice of giving
names the largest possible scope (without altering their linkage) you
need a term for the outer-most block scope, but "function scope" is
taken.
So "function scope" isn't the correct term. So we need another. I expect
that at this point someone will jump in and say it must be "Malcolm
scope". As you say, it's common enough to need a term for it.
The case for minimum scope is the same as the case for scope itself.
Someone might well misinterpret the term "minimum scope" since it would
require adding lots of otherwise redundant braces. I *think* you mean
declaring names at the point of first use. The resulting scope is not
minimum because it often extends beyond the point of last use.
Yes, I don't mean literally the minimum scope that would be possible by artificially ending a block when a variable is used for the last time. No
one would do that. I mean that the variable is either declared at point of first use or, if this isn't allowed because of the C version, at the top of the block in which it is used. But also that variables are not reused if in fact the value is discarded between statements or especially between
blocks.
Other people, not familiar with "modern" C, might interpret the term to
mean declaring names at the top of the inner-most appropriate block.
Top of the block or point of first use? If you go for top of block and
you don't have a value, you either initialise, usually to zero, or leave
it wild. Neither is ideal. But it rarely makes a big difference. However
if you go for policy two, all the variables are either given initial
values at the top of the function or they are not given initial values
at the top of the function, and so you can easily check, and ensure that
all the initial values are consistent with each other.
The variable is accessible where it is used and not elsewhere, which
makes it less likely it will be used in error, and means there are fewer
names to understand.
The case for declaration at first use is much stronger than this. It
almost always allows for a meaningful initialisation at the same point,
so the initialisation does not need to be hunted down and checked. For
me, this is a big win. (Yes, some people then insist on a dummy
initialisation when the proper one isn't known, but that's a fudge that
is, to my mind, even worse.)
We could call it outer-most block scope rather than re-use a term with
an existing, but different, technical meaning.
The variable has scope within the function, within the whole of the
function, and the motive is that the function is the natural unit of
thought. So I think we need the word "function".
On 07/02/2024 00:24, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people that have
to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
It was never common in the first place, and yes, it is almost entirely
non-existent now. I'm sure there are a few legacy products still
produced that use some kind of embedded Windows, but few more than that
- which is what I was hinting at in my post.
On 07/02/2024 07:54, David Brown wrote:
On 07/02/2024 00:23, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is as
easy as sticking a pair of braces around a few statements.
Generally, you want to have the minimum practical scope for your local
variables. It's rare that you need to add braces just to make a scope
for a variable - usually you have enough braces in loops or conditionals
- but it happens.
The two common patterns are to give each variable the minimum scope, or
to declare all variables at the start of the function and give them all
function scope.
The case for minimum scope is the same as the case for scope itself. The
variable is accessible where it is used and not elsewhere, which makes
it less likely it will be used in error, and means there are fewer names
to understand.
bart <bc@freeuk.com> writes:
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
However there are also strong arguments for function scope. A function
is a natural unit. And all the variables used in that unit are listed
together and, ideally, commented. So at a glance you can see what is in
scope and what is being operated on. [typos fixed]
You should not need an inventory of what's being operated on. Any
function so complex that I can't tell immediately what declaration
corresponds to which name needs to be re-written.
But if you keep functions small, e.g. the whole body is visible at the
same time, then there is less need for declarations to clutter up the
code. They can go at the top, so that you can literally just glance there.
Declarations don't clutter up the code, just as the code does not
clutter up the declarations. That's just your own spin on the matter.
They are both important parts of a C program.
And there are only three levels of scope. A
variable is global, or it is file scope, or it is scoped to the
function.
You are mixing up scope and lifetime. C has no "global scope". A name
may have external linkage (which is probably what you are referring to),
but that is not directly connected to its scope.
Funny, I use the same definitions of scope:
You can use any definition you like, provided you don't insist that
others use your own terms. I was just pointing out the problems
associated with using the wrong terms in a public post.
I'll cut the text where you use the wrong terms, because there is
nothing to be gained from correcting your usage.
On 07/02/2024 15:36, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
However there are also strong arguments for function scope. A function
is a natural unit. And all the variables used in that unit are listed
together and, ideally, commented. So at a glance you can see what is in
scope and what is being operated on. [typos fixed]
You should not need an inventory of what's being operated on. Any
function so complex that I can't tell immediately what declaration
corresponds to which name needs to be re-written.
But if you keep functions small, e.g. the whole body is visible at the
same time, then there is less need for declarations to clutter up the
code. They can go at the top, so that you can literally just glance there.
Declarations don't clutter up the code, just as the code does not
clutter up the declarations. That's just your own spin on the matter.
They are both important parts of a C program.
That sounds like your opinion against mine. It's nothing to do with
spin, whatever that means.
I would argue however that if you take a clear, cleanly written
language-neutral algorithm, and then introduce type annotations /within/
that code rather than segregated, then it is no longer quite as clear or
as clean looking.
As a related example, suppose you had this function:
void F(int a, double* b) {...}
All the parameters are specified with their names and types at the top.
Now imagine if only the names were given,
bart <bc@freeuk.com> writes:
On 07/02/2024 15:36, Ben Bacarisse wrote
Declarations don't clutter up the code, just as the code does not
clutter up the declarations. That's just your own spin on the matter.
They are both important parts of a C program.
That sounds like your opinion against mine. It's nothing to do with
spin, whatever that means.
I would argue however that if you take a clear, cleanly written
language-neutral algorithm, and then introduce type annotations /within/
that code rather than segregated, then it is no longer quite as clear or
as clean looking.
As a related example, suppose you had this function:
void F(int a, double* b) {...}
All the parameters are specified with their names and types at the top.
Now imagine if only the names were given,
Now imagine if the moon was made from green cheese. It's just as
likely, and neither are C.
On 07/02/2024 13:21, David Brown wrote:
On 07/02/2024 12:04, bart wrote:
Funny, I use the same definitions of scope:
For discussions of C, it's best to use the well-defined C terms for
scope and lifetime. Other languages may use different terms.
Many of the terms used in the C grammar remind me exactly of the 'twisty little passages' variations from the original text Adventure game.
In my program, I choose to use identifiers that make more sense to me,
and that match my view of how the language works.
That's a shame. I think there is something to be gained by not sticking
slavishly to what the C standard says (which very few people will study)
and using more colloquial terms or ones that more people can relate to.
Apparently both 'typedef' and 'static' are forms of 'linkage'. But no identifiers declared with those will ever be linked to anything!
On 07/02/2024 15:30, Ben Bacarisse wrote:
David Brown <david.brown@hesbynett.no> writes:
On 07/02/2024 11:04, Ben Bacarisse wrote:
David Brown <david.brown@hesbynett.no> writes:
Making some "temp" variables and re-using them was also common forThe comma suggests (I think) that it is C90 that mandates that all
some
people in idiomatic C90 code, where all your variables are declared
at the
top of the function.
one's
variables are declared at the top of the function. But that's not the >>>> case (as I am sure you know).
Yes.
The other reading -- that this is done in
idiomatic C90 code -- is also something that I'd question, but not
something that I'd want to argue.
"Idiomatic" is perhaps not the best word. (And "idiotic" is too
strong!)
I mean written in a way that is quite common in C90 code.
The most common meaning of "idiomatic", and the one I usually associate
with it in this context, is "containing expressions that are natural and
correct". That's not how I would describe eschewing declarations in
inner blocks.
No. It means writing the code in a way which is common in C and has
certain advantages, but is not so in other languages.
David Brown <david.brown@hesbynett.no> writes:
On 07/02/2024 00:24, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people that have >>>> to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
It was never common in the first place, and yes, it is almost entirely
non-existent now. I'm sure there are a few legacy products still
produced that use some kind of embedded Windows, but few more than that
- which is what I was hinting at in my post.
Wind River is still popular, I believe, but the Linux kernel + busybox is
probably the most common.
It makes code simpler, clearer, easier to reuse, easier to see that it
is correct, and easier to see if there is an error. It is very much
easier for automatic tools (static warnings) to spot issues.
BTW I've just done a quick survey of some codebases; functions tend to
have 3 local variables on average.
Is it really worth spreading them out in nested block scopes?
On 07/02/2024 17:25, Scott Lurndal wrote:
David Brown <david.brown@hesbynett.no> writes:
On 07/02/2024 00:24, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people
that have to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
It was never common in the first place, and yes, it is almost
entirely non-existent now. I'm sure there are a few legacy
products still produced that use some kind of embedded Windows,
but few more than that
- which is what I was hinting at in my post.
Wind river is still popular, I believe, but the linux kernel +
busybox is probably the most common.
VxWorks, you mean? Yes, that is still used in what might be called
"big" embedded systems. There are other RTOS's that have been common
for embedded systems with screens (and no one would bother with
embedded Windows without a screen!),
including QNX, Integrity, eCOS,
and Nucleus.
(There are many small RTOS's, but they are competing in a different
field.)
... the linux kernel + busybox is probably the most common.
On 07/02/2024 19:05, bart wrote:
That's a shame. I think there is something to be gained by not
sticking slavishly to what the C standard says (which very few people
will study) and using more colloquial terms or ones that more can
relate to.
There is something to be said for explaining the technical terms from
the C standards in more colloquial language to make it easier for others
to understand. There is nothing at all to be said for using C standard terms in clearly and obviously incorrect ways. That's just going to
confuse these non-standard-reading C programmers when they try to find
out more, no matter where they look for additional information.
Apparently both 'typedef' and 'static' are forms of 'linkage'. But no
identifiers declared with those will ever be linked to anything!
Could you point to the paragraph of the C standards that justifies that claim? Or are you perhaps mixing things up? (I can tell you the
correct answer, with references, if you are stuck - but I'd like to give
you the chance to show off your extensive C knowledge first.)
On 05/02/2024 05:58, Kaz Kylheku wrote:
On 2024-02-05, bart <bc@freeuk.com> wrote:
Writing a compiler is pretty easy, because the bar can be set very low
while still calling it a compiler.
Whole-program compilers are easier because there are fewer requirements.
You have only one kind of deliverable to produce: the executable.
You don't have to deal with linkage and produce a linkable format.
David Brown suggested that they were harder than I said. You're saying
they are easier.
GCC is maintained by people who know what a C compiler is, and GCC can
be asked to be one.
So what is it when it's not a C compiler? What language is it compiling
here:
c:\qx>gcc qc.c
c:\qx>
Mine at least is a more rigid subset.
Your idea of writing a C compiler seems to be to pick some random
examples of code believed to be C and make them work. (Where "work"
means that they compile and show a few behaviors that look like
the expected ones.)
That's what most people expect!
Basically, you don't present a very credible case that you've actually
written a C compiler.
Well, don't believe it if you don't want.
The NASM.EXE program is a bit larger at 1.3MB for example; that's 98.7%
smaller than your giant program.
I mean, where is YOUR lower-level system language? Where is anybody's? I don't mean the Zigs and Rusts because that would be like comparing a
40-tonne truck with a car.
Compilers that blaze through large amounts of code in the blink of an
eye are almost certainly dodging on the optimization.
Yes, probably. But the optimisation is overrated. Do you really need optimised code to test each of those 200 builds you're going to do today?
* There is no benefit at all in having a tool like a compiler, be a
small, self-contained executable.
Not as much as there used to, decades ago.
Simplicity is always good. Somebody deletes one of the 1000s of files of
your gcc installation. Is it something that is essential? Who knows.
But if your compiler is the one file mm.exe, it's easy to spot if it's missing!
* The standard talks a lot about Linkage but there are no specific
lexical elements for those.
* Instead the standard uses lexical elements called 'storage-class specifiers' to control what kind of linkage is applied to identifiers
* Because of this association, I use 'linkage symbol' to refer to those particular tokens
* The tokens include 'typedef extern static'
6.2.2p3 says: "If the declaration of a file scope identifier for an
object or a function contains the storage-class specifier static, the
identifier has internal linkage."
So it talks about statics as having linkage of some kind. What did I
say? I said statics will never be linked to anything.
6.2.2p6 excludes typedefs (by omission). Or rather it says they have 'no
linkage', which is one of the three kinds of linkage (external,
internal, none).
So as far as I can see, statics and typedef are still lumped in to the
class of entities that have a form of linkage, and are part of the set
of tokens that control linkage.
This to me is all a bit mixed up. Much as you dislike other languages
being brought in, they can give an enlightening perspective.
We must be able to point to at least one other language where it is not
the idiom, in order to say that it is an idiom.
On 07/02/2024 20:44, David Brown wrote:
On 07/02/2024 16:45, Malcolm McLean wrote:
On 07/02/2024 15:30, Ben Bacarisse wrote:
The most common meaning of "idiomatic", and the one I usually associate
with it in this context, is "containing expressions that are natural and
correct". That's not how I would describe eschewing declarations in
inner blocks.
No. It means writing the code in a way which is common in C and has
certain advantages, but is not so in other languages.
We must be able to point to at least one other language where it is not
the idiom, in order to say that it is an idiom.
An idiom in C could also be an idiom in C++, Python, or any other
language. Nothing in "idiomatic" implies that it is unique to a
particular language, just that it is commonly used in that language.
On 2024-02-07, bart <bc@freeuk.com> wrote:
On 05/02/2024 05:58, Kaz Kylheku wrote:
On 2024-02-05, bart <bc@freeuk.com> wrote:
Writing a compiler is pretty easy, because the bar can be set very low
while still calling it a compiler.
Whole-program compilers are easier because there are fewer requirements. >>> You have only one kind of deliverable to produce: the executable.
You don't have to deal with linkage and produce a linkable format.
David Brown suggested that they were harder than I said. You're saying
they are easier.
I'm saying it's somewhat easier to make a compiler which produces an
object file than to produce a compiler that produces object files *and*
a linker that combines them.
Programs that generate object files usually invoke other people's linkers.
There is all that code you don't have to write to produce object files,
read them, and link them. You don't have to solve the problem of how to
represent unresolved references in an externalized form in a file.
David made it clear he was referring to whole program optimization.
GCC is maintained by people who know what a C compiler is, and GCC can
be asked to be one.
So what is it when it's not a C compiler? What language is it compiling
here:
c:\qx>gcc qc.c
c:\qx>
Yes, sorry. It is compiling C also: a certain revision of GNU C,
which is a family of dialects in the C family.
Mine at least is a more rigid subset.
Rigid? Where is this subset documented, other than in the code?
GNU C is documented, and tested.
Your idea of writing a C compiler seems to be to pick some random
examples of code believed to be C and make them work. (Where "work"
means that they compile and show a few behaviors that look like
the expected ones.)
That's what most people expect!
That may be a verbal way of expressing what a lot of developers
want, but it has to be carefully interpreted to avoid a fallacy.
"Most people" expect the C compiler to work on /their/ respective code
they care about, which is different based on who you ask. The more
people you include in a sample of "most people", the more code that is.
Most people don't just expect a compiler to work on /your/ few examples.
Basically, you don't present a very credible case that you've actually
written a C compiler.
Well, don't believe it if you don't want.
Oh I want to believe; I just can't do that which I want, without
proper evidence.
Do you have a reference manual for your C dialect, and is it covered by tests? What programs and constructs are required to work in your C dialect? What are required to be diagnosed? What is left undefined?
The NASM.EXE program is bit larger at 1.3MB for example, that's 98.7%
smaller than your giant program.
That's amazingly large for an assembler. Is that stripped of debug info?
Yes, probably. But the optimisation is overrated. Do you really need
optimised code to test each of those 200 builds you're going to do today?
Yes, because of the principle that you should test what you ship.
On 2024-02-07, bart <bc@freeuk.com> wrote:
This to me is all a bit mixed up. Much as you dislike other languages
being brought in, they can give an enlightening perspective.
Right, nobody here knows anything outside of C, or can think outside of
the C box, except for you.
On 08/02/2024 01:13, Kaz Kylheku wrote:
On 2024-02-07, bart <bc@freeuk.com> wrote:
This is to me is all a bit mixed up. Much as you dislike other languages >>> being brought in, they can give an enlightening perspective.
Right, nobody here knows anything outside of C, or can think outside of
the C box, except for you.
Well, quite. AFAIK, nobody here HAS (1) used a comparable language to C;
(2) over such a long term; (3) which they have invented themselves; (4)
have implemented themselves; (5) is similar enough to C yet different
enough in how it works to give that perspective.
See, I gave an interesting comparison of how my module scheme works orthogonally across all kinds of entities, compared with the confusing
mess of C, and you shut down that view.
You're never in a million years going to admit that my language has some
good points are you? Exactly as I said in my OP.
On 07/02/2024 23:24, Kaz Kylheku wrote:
On 2024-02-07, bart <bc@freeuk.com> wrote:
On 05/02/2024 05:58, Kaz Kylheku wrote:
On 2024-02-05, bart <bc@freeuk.com> wrote:
Writing a compiler is pretty easy, because the bar can be set very low while still calling it a compiler.
Whole-program compilers are easier because there are fewer requirements. You have only one kind of deliverable to produce: the executable. You don't have to deal with linkage and produce a linkable format.
David Brown suggested that they were harder than I said. You're saying
they are easier.
I'm saying it's somewhat easier to make a compiler which produces an
object file than to produce a compiler that produces object files *and*
a linker that combines them.
Is there a 'than' missing above? Otherwise it's contradictory.
There is all that code you don't have to write to produce object files,
read them, and link them. You don't have to solve the problem of how to
represent unresolved references in an externalized form in a file.
Programs that generate object files usually invoke other people's linkers.
But your comments are simplistic. EXE formats can be as hard to generate
as OBJ files. You still have to resolve the dynamic imports into an EXE.
You need to either have a ready-made language designed for whole-program work, or you need to devise one.
Plus, if the minimal compilation unit is now all N source modules of a project rather than just 1 module, then you'd better have a pretty fast compiler, and some strategies for dealing with scale.
Well, don't believe it if you don't want.
Oh I want to believe; I just can't do that which I want, without
proper evidence.
Do you have a reference manual for your C dialect, and is it covered by
tests? What programs and constructs are required to work in your C dialect?
What are required to be diagnosed? What is left undefined?
So no one can claim to write a 'C' compiler unless it does everything as
well as gcc which started in 1987, has had hordes of people working with
it, and has had feedback from myriads of users?
I had some particular aims with my project, most of which were achieved, boxes ticked.
The NASM.EXE program is a bit larger at 1.3MB for example; that's 98.7%
smaller than your giant program.
That's amazingly large for an assembler. Is that stripped of debug info?
The as.exe assembler for gcc/TDM 10.3 is 1.8MB. For mingw 13.2 it was 1.5MB.
Yes, probably. But the optimisation is overrated. Do you really need
optimised code to test each of those 200 builds you're going to do today?
Yes, because of the principle that you should test what you ship.
Then you're being silly. You're not shipping build#139 of 200 that day,
not even #1000 that week. You're debugging a logic bug that is nothing
to do with optimisation.
On 08/02/2024 01:38, Lawrence D'Oliveiro wrote:
On Thu, 8 Feb 2024 00:33:35 +0000, Malcolm McLean wrote:
We must be able to point to at least one other language where it is not
the idiom, in order to say that it is an idiom.
How about pointing to alternative ways it might be said in the same
language, and then proclaiming that “for some reason, nobody who uses the
language is supposed to do it that way”?
So how do you say "My French is lousy" in idiomatic French?
On Wed, 7 Feb 2024 21:49:52 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 07/02/2024 17:25, Scott Lurndal wrote:
David Brown <david.brown@hesbynett.no> writes:
On 07/02/2024 00:24, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:50:02 +0100, David Brown wrote:
And of course there are those two or three unfortunate people
that have to work with embedded Windows.
I thought this has pretty much gone away, pushed aside by Linux.
It was never common in the first place, and yes, it is almost
entirely non-existent now. I'm sure there are a few legacy
products still produced that use some kind of embedded Windows,
but few more than that
- which is what I was hinting at in my post.
Wind river is still popular, I believe, but the linux kernel +
busybox is probably the most common.
VxWorks, you mean? Yes, that is still used in what might be called
"big" embedded systems. There are other RTOS's that have been common
for embedded systems with screens (and no one would bother with
embedded Windows without a screen!),
Then our company and I personally are no-ones 1.5 times.
The first time it was WinCE on a small Arm-based board that served as
an Ethernet interface and control-plane controller for big boards that
were important building blocks in very expensive industrial
equipment. The equipment as a whole was not ours; we were a sub-contractor
for this particular piece. This instance of Windows never ever had a display
or keyboard.
We still make a few boards per year more than 15 years later.
The second one was/is [part of] our own product, a regular Windows
Embedded, starting with XP, then 7, then 10. It runs on an SBC that
functions as the host of a Compact PCI frame with various I/O boards, mostly
of our own making. The SBC does both control-plane and partial data-plane
processing and handles Ethernet communication with the rest of the
system. It's a completely different industry; the system as a whole is not
nearly as expensive as the first one, but still expensive enough for
this particular computer to be a small part of the total cost.
The system does have connectors for display, keyboard and mouse.
Sometimes it is handy to connect them during manufacturing testing. But
they are never connected in the fully assembled product. However, since they
exist, with relation to this system I count myself as half-no-one
rather than full no-one.
including QNX, Integrity, eCOS,
and Nucleus.
(There are many small RTOS's, but they are competing in a different
field.)
On 2024-02-08, bart <bc@freeuk.com> wrote:
But your comments are simplistic. EXE formats can be as hard to generate
as OBJ files. You still have to resolve the dynamic imports into an EXE.
Generating just the EXE format is objectively less work than generating
OBJ files and linking them into that ... same EXE format, right?
Plus, if the minimal compilation unit is now all N source modules of a
project rather than just 1 module, then you'd better have a pretty fast
compiler, and some strategies for dealing with scale.
Easy; just drop language conformance, diagnostics, optimization.
"as" on Ubuntu 18, 32 bit.
$ size /usr/bin/i686-linux-gnu-as
text data bss dec hex filename
430315 12544 37836 480695 755b7 /usr/bin/i686-linux-gnu-as
Still pretty large. Always use the "size" utility, rather than raw
file size. This has 430315 bytes of code, 12544 of non-zero static data, 37836
bytes of zeroed data (not part of the executable size).
That's still large for an assembler, but at least it's not larger
than GNU Awk.
Yes, probably. But the optimisation is overrated. Do you really need
optimised code to test each of those 200 builds you're going to do today?
Yes, because of the principle that you should test what you ship.
Then you're being silly. You're not shipping build#139 of 200 that day,
If I make a certain change for build #139, and that part of the code (function or entire source file) is not touched until build #1459 which ships, that compiled code remains the same! So in fact, the #139 version of that code is what build #1459 ships with. That code is being tested as part of
#140, #141, #142, ... even while some other things are changing.
You should not be doing all your development and developer testing with unoptimized builds so that only QA people test optimized code before shipping.
Every test, even of a private build, is a potential opportunity to find something wrong with some optimized code that would end up shipping otherwise.
Here is another reason to work with optimized code. If you have to debug
at the machine language level, optimized code is shorter and way more readable.
And it can help you understand logic bugs, because the compiler performs logical analysis in doing optimizations. The optimized code shows you what your
calculation reduced to, and can even help you see a better way of writing the code, like a tutor.
not even #1000 that week. You're debugging a logic bug that is nothing
to do with optimisation.
Though debugging logic bugs that have nothing to do with optimization can be somewhat impeded by optimization, it's still better to prioritize working with
the code in the intended shipping state.
You can drop to an unoptimized build when necessary.
Pretty much that only happens when:
1. It is just a logic bug, but you have to resort to a debugger, and
the optimizations are interfering with being able to see variable values.
2. You suspect it does have to do with optimization, so you see if
the issue goes away in the unoptimized build.
On 2024-02-08, bart <bc@freeuk.com> wrote:
On 07/02/2024 23:24, Kaz Kylheku wrote:
On 2024-02-07, bart <bc@freeuk.com> wrote:
On 05/02/2024 05:58, Kaz Kylheku wrote:
On 2024-02-05, bart <bc@freeuk.com> wrote:
Writing a compiler is pretty easy, because the bar can be set
very low while still calling it a compiler.
Whole-program compilers are easier because there are fewer
requirements. You have only one kind of deliverable to produce:
the executable. You don't have to deal with linkage and produce
a linkable format.
David Brown suggested that they were harder than I said. You're
saying they are easier.
I'm saying it's somewhat easier to make a compiler which produces
an object file than to produce a compiler that produces object
files *and*
a linker that combines them.
Is there a 'than' missing above? Otherwise it's contradictory.
Other "than" that one? Hmm.
There is all that code you don't have to write to produce object
files, read them, and link them. You don't have to solve the
problem of how to represent unresolved references in an
externalized form in a file.
Programs that generate object files usually invoke other people's
linkers.
But your comments are simplistic. EXE formats can be as hard to
generate as OBJ files. You still have to resolve the dynamic
imports into an EXE.
Generating just the EXE format is objectively less work than
generating OBJ files and linking them into that ... same EXE format,
right?
You need to either have a ready-made language designed for
whole-program work, or you need to devise one.
Plus, if the minimal compilation unit is now all N source modules
of a project rather than just 1 module, then you'd better have a
pretty fast compiler, and some strategies for dealing with scale.
Easy; just drop language conformance, diagnostics, optimization.
Well, don't believe it if you don't want.
Oh I want to believe; I just can't do that which I want, without
proper evidence.
Do you have a reference manual for your C dialect, and is it
covered by tests? What programs and constructs are required to
work in your C dialect? What are required to be diagnosed? What is
left undefined?
So no one can claim to write a 'C' compiler unless it does
everything as well as gcc which started in 1987, has had hordes of
people working with it, and has had feedback from myriads of users?
Nope; unless it is documented so that there is a box, where it says
what is in the box, and some way to tell that what's on the box is in
the box.
I had some particular aims with my project, most of which were
achieved, boxes ticked.
The NASM.EXE program is a bit larger at 1.3MB for example; that's
98.7% smaller than your giant program.
That's amazingly large for an assembler. Is that stripped of debug
info?
The as.exe assembler for gcc/TDM 10.3 is 1.8MB. For mingw 13.2 it
was 1.5MB.
"as" on Ubuntu 18, 32 bit.
$ size /usr/bin/i686-linux-gnu-as
text data bss dec hex filename
430315 12544 37836 480695 755b7 /usr/bin/i686-linux-gnu-as
Still pretty large. Always use the "size" utility, rather than raw
file size. This has 430315 bytes of code, 12544 of non-zero static
data, 37836 bytes of zeroed data (not part of the executable size).
That's still large for an assembler, but at least it's not larger
than GNU Awk.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 07:54, David Brown wrote:
On 07/02/2024 00:23, Lawrence D'Oliveiro wrote:
On Tue, 6 Feb 2024 09:44:20 +0100, David Brown wrote:
They reuse "temp" variables instead of making new ones.
I like to limit the scope of my temporary variables. In C, this is as easy
as sticking a pair of braces around a few statements.
Generally, you want to have the minimum practical scope for your
local variables. It's rare that you need to add braces just to
make a scope for a variable - usually you have enough braces in
loops or conditionals - but it happens.
The two common patterns are to give each variable the minimum scope,
or to declare all variables at the start of the function and give
them all function scope.
The case for minimum scope is the same as the case for scope itself.
The variable is accessible where it is used and not elsewhere, which
makes it less likely it will be used in error, and means there are
fewer names to understand.
And it means the compiler can re-use the local storage (if any was
allocated) for subsequent minimal scope variables (or even same scope
if the compiler knows the original variable is never used again),
so long as the address of the variable isn't taken.
On 07/02/2024 15:30, Ben Bacarisse wrote:
David Brown <david.brown@hesbynett.no> writes:
On 07/02/2024 11:04, Ben Bacarisse wrote:
David Brown <david.brown@hesbynett.no> writes:
Making some "temp" variables and re-using them was also common for some
people in idiomatic C90 code, where all your variables are declared at the
top of the function.
The comma suggests (I think) that it is C90 that mandates that all one's
variables are declared at the top of the function. But that's not the
case (as I am sure you know).
Yes.
The other reading -- that this is done in idiomatic C90 code -- is also
something that I'd question, but not something that I'd want to argue.
"Idiomatic" is perhaps not the best word. (And "idiotic" is too strong!)
I mean written in a way that is quite common in C90 code.
The most common meaning of "idiomatic", and the one I usually associate
with it in this context, is "containing expressions that are natural and
correct". That's not how I would describe eschewing declarations in
inner blocks.
No. It means writing the code in a way which is common in C and has certain
advantages, but is not so in other languages.
On 07/02/2024 15:36, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
On 07/02/2024 10:47, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
However there are also strong arguments for function scope. A function is a
natural unit. And all the variables used in that unit are listed together
and, ideally, commented. So at a glance you can see what is in scope and
what is being operated on. [typos fixed]
You should not need an inventory of what's being operated on. Any
function so complex that I can't tell immediately what declaration
corresponds to which name needs to be re-written.
But if you keep functions small, eg. the whole body is visible at the same
time, then there is less need for declarations to clutter up the code. They
can go at the top, so that you can literally just glance there.
Declarations don't clutter up the code, just as the code does not
clutter up the declarations. That's just your own spin on the matter.
They are both important parts of a C program.
That sounds like your opinion against mine. It's nothing to do with spin,
whatever that means.
I would argue however that if you take a clear, cleanly written
language-neutral algorithm, and then introduce type annotations /within/
that code rather than segregated, then it is no longer quite as clear or as
clean looking.
As a related example, suppose you had this function:
void F(int a, double* b) {...}
All the parameters are specified with their names and types at the top. Now imagine if only the names were given, but the types specified only at their first usage within the body:
void F(a, b) {...}
I /like/ having a summary of both parameters and locals at the top. I
/like/ code looking clean, and as aligned as possible (some decls will push code to the right). I /like/ knowing that there is only one instance of a variable /abc/, and it is the one at the top.
And there are only three levels of scope. A variable is global, or it is
file scope, or it is scoped to the function.
You are mixing up scope and lifetime. C has no "global scope". A name
may have external linkage (which is probably what you are referring to),
but that is not directly connected to its scope.
Funny, I use the same definitions of scope:
You can use any definition you like, provided you don't insist that others
use your own terms. I was just pointing out the problems
associated with using the wrong terms in a public post.
I'll cut the text where you use the wrong terms, because there is
nothing to be gained from correcting your usage.
That's a shame. I think there is something to be gained by not sticking slavishly to what the C standard says (which very few people will study)
and using more colloquial terms or ones that more can relate to.
Apparently both 'typedef' and 'static' are forms of 'linkage'. But no identifiers declared with those will ever be linked to anything!
On 07/02/2024 20:37, David Brown wrote:
On 07/02/2024 19:05, bart wrote:
That's a shame. I think there is something to be gained by not
sticking slavishly to what the C standard says (which very few people
will study) and using more colloquial terms or ones that more can
relate to.
There is something to be said for explaining the technical terms from
the C standards in more colloquial language to make it easier for
others to understand. There is nothing at all to be said for using C
standard terms in clearly and obviously incorrect ways. That's just
going to confuse these non-standard-reading C programmers when they
try to find out more, no matter where they look for additional
information.
Apparently both 'typedef' and 'static' are forms of 'linkage'. But no
identifiers declared with those will ever be linked to anything!
Could you point to the paragraph of the C standards that justifies
that claim? Or are you perhaps mixing things up? (I can tell you the
correct answer, with references, if you are stuck - but I'd like to
give you the chance to show off your extensive C knowledge first.)
* The standard talks a lot about Linkage but there are no specific
lexical elements for those.
* Instead the standard uses lexical elements called 'storage-class specifiers' to control what kind of linkage is applied to identifiers
* Because of this association, I use 'linkage symbol' to refer to those particular tokens
* The tokens include 'typedef', 'extern' and 'static'
6.2.2p3 says: "If the declaration of a file scope identifier for an
object or a function contains the storage-class specifier static, the
identifier has internal linkage."
So it talks about statics as having linkage of some kind. What did I
say? I said statics will never be linked to anything.
6.2.2p6 excludes typedefs (by omission). Or rather it says they have 'no
linkage', which is one of the three kinds of linkage (external,
internal, none).
So as far as I can see, statics and typedef are still lumped in to the
class of entities that have a form of linkage, and are part of the set
of tokens that control linkage.
---------------------------------------------------
This to me is all a bit mixed up. Much as you dislike other languages being brought in, they can give an enlightening perspective.
So for me, linking applies to all named entities that occupy memory, and
that have global/export scope.
On 08/02/2024 11:45, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
No. It means writing the code in a way which is common in C and has certain
advantages, but is not so in other languages.
Where do you get your superior knowledge of English from, and is there a
way anyone else can hope to achieve your level of competence?
Degree in English literature.
On 08/02/2024 11:37, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
That sounds like your opinion against mine. It's nothing to do with spin,
whatever that means.
It's spin, because the term is emotive. "Cluttering up" is how you feel
about it. The phrase is just a mildly pejorative one about appearances.
There's no substance there. To make a technical point you would have to
explain how, for example,
struct item *items;
...
n_elements = get_number_of_items(...);
items = malloc(n_elements * sizeof *items);
...
is technically better than
n_elements = get_number_of_items(...);
struct item *items = malloc(n_elements * sizeof *items);
I've explained (more than once) how I find reasoning about the direct
initialise at first use style easier with fewer distractions.
items = malloc(n_elements * sizeof *items);
is shorter than
struct item *items = malloc(n_elements * sizeof *items);
and that is an objective statement about which there can be no dispute.
On 08/02/2024 13:10, Malcolm McLean wrote:
On 08/02/2024 11:37, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
That sounds like your opinion against mine. It's nothing to do with spin,
whatever that means.
It's spin, because the term is emotive. "Cluttering up" is how you feel
about it. The phrase is just a mildly pejorative one about appearances.
There's no substance there. To make a technical point you would have to
explain how, for example,
struct item *items;
...
n_elements = get_number_of_items(...);
items = malloc(n_elements * sizeof *items);
...
is technically better than
n_elements = get_number_of_items(...);
struct item *items = malloc(n_elements * sizeof *items);
I've explained (more than once) how I find reasoning about the direct
initialise at first use style easier with fewer distractions.
items = malloc(n_elements * sizeof *items);
is shorter than
struct item *items = malloc(n_elements * sizeof *items);
and that is an objective statement about which there can be no dispute.
But that is not the comparison.
struct item *items = malloc(n_elements * sizeof *items);
is shorter than:
struct item *items;
items = malloc(n_elements * sizeof *items);
You have to define the variable somewhere. Doing so when you initialise
it when you first need it, is, without doubt, objectively shorter.
Opinions may differ on whether it is clearer, or "cluttered", but which
is shorter is not in doubt. (What relevance that might have, is much
more in doubt.)
On 2024-02-08, bart <bc@freeuk.com> wrote:
On 08/02/2024 01:13, Kaz Kylheku wrote:
On 2024-02-07, bart <bc@freeuk.com> wrote:
This to me is all a bit mixed up. Much as you dislike other languages
being brought in, they can give an enlightening perspective.
Right, nobody here knows anything outside of C, or can think outside of
the C box, except for you.
Well, quite. AFAIK, nobody here HAS (1) used a comparable language to C;
(2) over such a long term; (3) which they have invented themselves; (4)
have implemented themselves; (5) is similar enough to C yet different
enough in how it works to give that perspective.
You've taken a perspective that is not transferable to others.
If one can only see something after using your own invention for many
years, and other people don't have that same invention and
implementation experience, then they just cannot see what you see.
You cannot teach (2) through (4), just like a basketball coach cannot
teach a player to be seven foot tall.
See, I gave an interesting comparison of how my module scheme works
orthogonally across all kinds of entities, compared with the confusing
mess of C, and you shut down that view.
You're never in a million years going to admit that my language has some
good points are you? Exactly as I said in my OP.
I have no idea what it is;
On 08/02/2024 13:15, Malcolm McLean wrote:
On 08/02/2024 11:45, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
No. It means writing the code in a way which is common in C and has certain
advantages, but is not so in other languages.
Where do you get your superior knowledge of English from, and is there a
way anyone else can hope to achieve your level of competence?
Degree in English literature.
I would never have guessed that from the way you write and from how
often you fail to read other people's posts.
On 08/02/2024 12:24, David Brown wrote:
On 08/02/2024 13:10, Malcolm McLean wrote:
On 08/02/2024 11:37, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
That sounds like your opinion against mine. It's nothing to do with spin,
whatever that means.
It's spin, because the term is emotive. "Cluttering up" is how you
feel
about it. The phrase is just a mildly pejorative one about
appearances.
There's no substance there. To make a technical point you would
have to
explain how, for example,
struct item *items;
...
n_elements = get_number_of_items(...);
items = malloc(n_elements * sizeof *items);
...
is technically better than
n_elements = get_number_of_items(...);
struct item *items = malloc(n_elements * sizeof *items);
I've explained (more than once) how I find reasoning about the direct
initialise at first use style easier with fewer distractions.
items = malloc(n_elements * sizeof *items);
is shorter than
struct item *items = malloc(n_elements * sizeof *items);
and that is an objective statement about which there can be no dispute.
But that is not the comparison.
struct item *items = malloc(n_elements * sizeof *items);
is shorter than:
struct item *items;
items = malloc(n_elements * sizeof *items);
You have to define the variable somewhere. Doing so when you
initialise it when you first need it, is, without doubt, objectively
shorter. Opinions may differ on whether it is clearer, or "cluttered",
but which is shorter is not in doubt. (What relevance that might
have, is much more in doubt.)
If you want to isolate the executable code then you'd write it like this:
struct item *items;
...
items = malloc(n_elements * sizeof *items);
That code is now cleaner.
It also doesn't have that somewhat confusing
(partly due to spacing) '... *items = malloc ...' which makes it look
like an indirect assignment to a pointer called 'item' (compounded by
that '*items' term as the sizeof operand).
It doesn't have the distracting juxtaposition in 'item * items'.
If there was a subsequent assignment to 'items':
items = malloc(n_elements * sizeof *items);
...
items = malloc(m_elements * sizeof *items);
the two look the same; you don't have one pushed over to the right. (I
was able to copy&paste with only a small tweak.)
This is especially the case if I have 'aitems' and 'bitems' of the same
type:
struct item *aitems, *bitems;
* I don't need a separate declaration for each
* I can instantly see they are the same type without needing to infer
* I can change the type of both in one place; they can't get out of step
Shall I go on?
Did you see my post where I established that C programs typically have
only 3 locals per function on average?
David Brown <david.brown@hesbynett.no> writes:
On 08/02/2024 13:15, Malcolm McLean wrote:
On 08/02/2024 11:45, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
No. It means writing the code in a way which is common in C and has certain
advantages, but is not so in other languages.
Where do you get your superior knowledge of English from, and is there a
way anyone else can hope to achieve your level of competence?
Degree in English literature.
I would never have guessed that from the way you write and from how
often you fail to read other people's posts.
And Malcolm apparently doesn't recognize sarcasm.
On 08/02/2024 11:37, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
That sounds like your opinion against mine. It's nothing to do with spin,
whatever that means.
It's spin, because the term is emotive. "Cluttering up" is how you feel
about it. The phrase is just a mildly pejorative one about appearances.
There's no substance there. To make a technical point you would have to
explain how, for example,
struct item *items;
...
n_elements = get_number_of_items(...);
items = malloc(n_elements * sizeof *items);
...
is technically better than
n_elements = get_number_of_items(...);
struct item *items = malloc(n_elements * sizeof *items);
I've explained (more than once) how I find reasoning about the direct
initialise at first use style easier with fewer distractions.
items = malloc(n_elements * sizeof *items);
is shorter than
struct item *items = malloc(n_elements * sizeof *items);
and that is an objective statement about which there can be no dispute.
$ /mingw32/bin/as.exe --version
GNU assembler (GNU Binutils) 2.40
Copyright (C) 2023 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms
of the GNU General Public License version 3 or later.
This program has absolutely no warranty.
This assembler was configured for a target of `i686-w64-mingw32'.
$ size /mingw32/bin/as.exe
text data bss dec hex filename
2941952 10392 43416 2995760 2db630
C:/bin/msys64a/mingw32/bin/as.exe
On 2024-02-08, Michael S <already5chosen@yahoo.com> wrote:
$ /mingw32/bin/as.exe --version
GNU assembler (GNU Binutils) 2.40
Copyright (C) 2023 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the
terms of the GNU General Public License version 3 or later.
This program has absolutely no warranty.
This assembler was configured for a target of `i686-w64-mingw32'.
$ size /mingw32/bin/as.exe
text data bss dec hex filename
2941952 10392 43416 2995760 2db630
C:/bin/msys64a/mingw32/bin/as.exe
LOL, even a 400 kilobyte assembler is bat shit crazy.
Can you imagine working with that on a 512 Kb IBM PC?
GNU as probably doesn't have half the features of, say, Randy Hyde's
LISA for APPLE II. (Lazer Systems Interactive Symbolic Assembler).
Might that be statically linking numerous libraries, like libbfd and
whatnot? Possibly it supports unnecessary object formats that the
MinGW user will never use.
On 2024-02-08, Michael S <already5chosen@yahoo.com> wrote:
$ /mingw32/bin/as.exe --version
GNU assembler (GNU Binutils) 2.40
Copyright (C) 2023 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms
of the GNU General Public License version 3 or later.
This program has absolutely no warranty.
This assembler was configured for a target of `i686-w64-mingw32'.
$ size /mingw32/bin/as.exe
text data bss dec hex filename
2941952 10392 43416 2995760 2db630
C:/bin/msys64a/mingw32/bin/as.exe
LOL, even a 400 kilobyte assembler is bat shit crazy.
Can you imagine working with that on a 512 Kb IBM PC?
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 2/8/2024 9:48 AM, Kaz Kylheku wrote:
On 2024-02-08, Michael S <already5chosen@yahoo.com> wrote:[...]
LOL, even a 400 kilobyte assembler is bat shit crazy.
Can you imagine working with that on a 512 Kb IBM PC?
This assembler was pretty small... ;^D
https://youtu.be/bCSDkjhVM2A
Please keep doing this.
You're never in a million years going to admit that my language
has some good points, are you?
bart <bc@freeuk.com> writes:
You're never in a million years going to admit that my language
has some good points, are you?
Where can I get a user manual for it?
The most common meaning of "idiomatic", and the one I usually
associate with it in [the context of some C coding conventions],
is "containing expressions that are natural and correct". [...]
On 09/02/2024 00:02, Tim Rentsch wrote:
bart <bc@freeuk.com> writes:
You're never in a million years going to admit that my language
has some good points, are you?
Where can I get a user manual for it?
Why? I haven't read a language manual since the 1980s.
I can still appreciate interesting ideas another language might have, or
even just ideas in isolation.
This is just churlishness.
On 08/02/2024 12:29, David Brown wrote:
On 08/02/2024 13:15, Malcolm McLean wrote:
On 08/02/2024 11:45, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
Degree in English literature.
No. It means writing the code in a way which is common in C and has
certain advantages, but is not so in other languages.
Where do you get your superior knowledge of English from, and is there a
way anyone else can hope to achieve your level of competence?
I would never have guessed that from the way you write and from how
often you fail to read other people's posts.
What an Oxford English tutor mostly cares about is that the essay is
produced, not so much what it says. You'll always get some marks for
writing something and will therefore pass, but you can get no marks for
nothing and can only fail. So handing in a stupid essay is acceptable,
but not handing in the essay at all is a big no-no, and tutors really
hate that. It's all about pulling something together to a tight deadline
on a subject about which you knew absolutely nothing last week, and
that's what they value. So it does encourage a rather slapdash approach
and reading things quickly, and maybe that is a fault.
On 2/8/2024 2:29 PM, Kenny McCormack wrote:
In article <87il2yfvmp.fsf@nosuchdomain.example.com>,
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 2/8/2024 9:48 AM, Kaz Kylheku wrote:
On 2024-02-08, Michael S <already5chosen@yahoo.com> wrote: [...]
LOL, even a 400 kilobyte assembler is bat shit crazy.
Can you imagine working with that on a 512 Kb IBM PC?
This assembler was pretty small... ;^D
https://youtu.be/bCSDkjhVM2A
Please keep doing this.
"Introduction to Assembly Language Programming on the Apple IIgs -
Lesson 1"
Yup. I forgot to put in the title. Sorry everybody. ;^o
You are just a rounding error :-)
But it is interesting to hear of exceptions to the general trend.
On 09/02/2024 14:55, Michael S wrote:
On Thu, 8 Feb 2024 08:52:12 +0100
David Brown <david.brown@hesbynett.no> wrote:
You are just a rounding error :-)
But it is interesting to hear of exceptions to the general trend.
That is one option.
Another one is you pulling your statistics out of one of your major anatomical features.
You do know that embedded Windows - "WinCE" - had its last version
release in 2013, and ended extended support last year? Its share of
the market (whatever market you choose) was never particularly
significant despite significant effort from MS, which is why they
dropped it.
Clearly my comment about "two or three unfortunate people" was not
meant as a serious statistic.
And of course people also make systems that can be classified as
"embedded", but with a desktop (or even server) version of Windows.
On Fri, 9 Feb 2024 15:29:09 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 09/02/2024 14:55, Michael S wrote:
On Thu, 8 Feb 2024 08:52:12 +0100
David Brown <david.brown@hesbynett.no> wrote:
You are just a rounding error :-)
But it is interesting to hear of exceptions to the general trend.
That is one option.
Another one is you pulling your statistics out of one of your major
anatomical features.
You do know that embedded Windows - "WinCE" - had its last version
release in 2013, and ended extended support last year? Its share of
the market (whatever market you choose) was never particularly
significant despite significant effort from MS, which is why they
dropped it.
Clearly my comment about "two or three unfortunate people" was not
meant as a serious statistic.
And of course people also make systems that can be classified as
"embedded", but with a desktop (or even server) version of Windows.
Do you know that there were two families of Windows OSes intended for
use in embedded devices that used completely different kernels and were
similar only in sharing a [significant] part of the user-level API? One
is discontinued; I'd guess because the CE kernel was designed for a
single core, and nowadays multiple cores are common even at the low end.
Discontinued, but still available.
The other, based on the NT family of kernels, is doing about as well as
it did for the last couple of decades.
The major blow that could kill it in the future is a relatively recent
requirement that all 64-bit kernel drivers be not just crypto-signed
(that was always the case) but signed by Microsoft's test lab, which
means it not only costs money but also requires bureaucratic procedures.
But that is what could kill it in the future, rather than something
already happening.
bart <bc@freeuk.com> writes:
On 07/02/2024 15:36, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
On 07/02/2024 10:47, Ben Bacarisse wrote:
[on choosing between declaring local variables always at the
start of a function, before any statements, or declaring
local variables throughout the body of a function, usually
with an initializing declaration at the point of first use;
an all-at-the-top style gives a single place to look for
all locals used in the function body]
You should not need an inventory of what's being operated on.
Any function so complex that I can't tell immediately what
declaration corresponds to which name needs to be re-written.
But if you keep functions small, eg. the whole body is visible
at the same time, then there is less need for declarations to
clutter up the code. They can go at the top, so that you can
literally can just glance there.
Declarations don't clutter up the code, just as the code does not
clutter up the declarations. That's just your own spin on the
matter. They are both important parts of a C program.
That sounds like your opinion against mine. It's nothing to do
with spin, whatever that means.
It's spin, because the term is emotive. "Cluttering up" is how
you feel about it. The phrase is just a mildly pejorative one
about appearances. There's no substance there. To make a
technical point you would have to explain how, for example,
struct item *items;
...
n_elements = get_number_of_items(...);
items = malloc(n_elements * sizeof *items);
...
is technically better than
n_elements = get_number_of_items(...);
struct item *items = malloc(n_elements * sizeof *items);
I've explained (more than once) how I find reasoning about
the direct initialise at first use style easier with fewer
distractions.
But we have lots of them using embedded Linux.
On Fri, 9 Feb 2024 17:22:53 +0100, David Brown wrote:
But we have lots of them using embedded Linux.
What sort of CPUs, out of interest? Presumably x86/x86-64, ARM ... maybe
even RISC-V by now? Anything else (e.g. Motorola ColdFire)? Any feel for
relative popularity?
Long ago, I worked a little with both MIPS and ColdFire embedded Linux
systems, but those are pretty much gone from the market now.
On 09/02/2024 00:02, Tim Rentsch wrote:
bart <bc@freeuk.com> writes:
You're never in a million years going to admit that my language
has some good points, are you?
Where can I get a user manual for it?
Why?
On Sat, 10 Feb 2024 17:11:47 +0100, David Brown wrote:
Long ago, I worked a little with both MIPS and Coldfire embedded Linux
systems, but those are pretty much gone from the market now.
MIPS gone as well? At one point I heard they were accounting for something like 840 million chips per year. They had a particular niche in wireless routers.
The company that used to own what there was of the MIPS IP is now called Imagination Tech, and has gone all-in on RISC-V.