Presumably everyone knows what AND and OR mean. So
why not just use AND and OR?
bart <bc@freeuk.com> wrote:
Presumably everyone knows what AND and OR mean. So
why not just use AND and OR?
Maybe back in the early days of C even one byte
mattered?
strlen("AND") is 3 and strlen("&&") is 2.
So they chose "&&" and then "||" was chosen because
of symmetry? Just speculation, of course.
Dennis Ritchie has said that he regretted the
spelling of creat(2) function. Presumably the
abbreviation was supposed to save one byte of
storage.
Anyway, sorry for going off-topic, but Python aims to
be like what you say: it should be like executable
pseudo-code.
On 23/01/2024 19:32, Kalevi Kolttonen wrote:
bart <bc@freeuk.com> wrote:
Presumably everyone knows what AND and OR mean. So
why not just use AND and OR?
Maybe back in the early days of C even one byte
mattered?
Plenty of languages already existed that used 'and', 'or' and 'not'.
My first language did so as well, and was implemented on a smaller
machine than the first C compiler.
If compact code was advantageous, then you have to wonder about the considerable amount of punctuation in C.
strlen("AND") is 3 and strlen("&&") is 2.
So they chose "&&" and then "||" was chosen because
of symmetry? Just speculation, of course.
Why two & and two |? Was '|' even commonly available on keyboards at the time? I'm pretty sure 'o' and 'r' were!
Although whether in lower case, I don't know. If not,
then just typing
'if' was going to be a problem.
Dennis Ritchie has said that he regretted the
spelling of creat(2) function. Presumably the
abbreviation was supposed to save one byte of
storage.
Such identifiers would have been exposed to external tools such as assemblers
and linkers, which may have had their own limits. Keywords such as 'and'
are internal to the C compiler.
Anyway, sorry for going off-topic, but Python aims to
be like what you say: it should be like executable
pseudo-code.
That's a good idea. I've often wondered why, if pseudo-code
is so easy to understand, it isn't used more for real
languages.
Dennis Ritchie has said that he regretted the spelling of creat(2)
function. Presumably the abbreviation was supposed to save one byte of storage.
But the Python designers botched at least one thing concerning that
philosophy: private functions are not defined by a "private" keyword as
in Java; instead the designers violated their own basic principle of
easy readability: the "privateness" is defined by prefixing the
function names with an underscore ("_").
C's predecessor language B ...
On Tue, 23 Jan 2024 19:32:26 -0000 (UTC), Kalevi Kolttonen wrote:
Dennis Ritchie has said that he regretted the spelling of creat(2)
function. Presumably the abbreviation was supposed to save one byte of
storage.
Given that, in POSIX, all the functions of creat(2) have been subsumed
under open(2) anyway, we can largely ignore that.
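For reference, the equivalence can be written out directly in C; a minimal sketch, where the file name and mode are only illustrative values:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Per POSIX these two calls behave identically;
       "example.txt" and 0644 are only illustrative. */
    int fd1 = creat("example.txt", 0644);
    int fd2 = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd1 != -1) close(fd1);
    if (fd2 != -1) close(fd2);
    return 0;
}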
There are other worse problems with C. Like its use of “=” (the mathematical equality operator) for assignment, instead of using the “:=” symbol that the Algols had adopted.
But the Python designers botched at least one thing concerning that
philosophy: private functions are not defined by "private" keyword like
in Java but instead the designers violated their own basic principle of
easy readability: the "privateness" is defined by prefixing the
function names with an underscore ("_").
That is merely a convention, a hint that the user might want to stay away from such a symbol, not a hard requirement, which is why it was designed
that way. As Guido van Rossum has said: “We are all consenting adults here”.
What?! Are you saying that there is no way to label functions as private
in Python? That sounds absolutely horrible. Why would anyone design a language with object-oriented features without support for
encapsulation?
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 23 Jan 2024 19:32:26 -0000 (UTC), Kalevi Kolttonen wrote:
Dennis Ritchie has said that he regretted the spelling of creat(2)
function. Presumably the abbreviation was supposed to save one byte of
storage.
Given that, in POSIX, all the functions of creat(2) have been subsumed
under open(2) anyway, we can largely ignore that.
I know. creat() is now completely redundant because
open() suffices. The reason I brought it up was just
so that people would realize that at least sometimes
it was a matter of saving storage space - even one
byte.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 23 Jan 2024 19:32:26 -0000 (UTC), Kalevi Kolttonen wrote:
Dennis Ritchie has said that he regretted the spelling of creat(2)
function. Presumably the abbreviation was supposed to save one byte of
storage.
Given that, in POSIX, all the functions of creat(2) have been subsumed
under open(2) anyway, we can largely ignore that.
I know. creat() is now completely redundant because
open() suffices. The reason I brought it up was just
so that people would realize that at least sometimes
it was a matter of saving storage space - even one
byte.
There are other worse problems with C. Like its use of “=” (the
mathematical equality operator) for assignment, instead of using the “:=”
symbol that the Algols had adopted.
Well, I started with Microsoft BASIC on a Commodore 64 39 years ago.
So I am well used to "=" and do not consider it a problem.
But the Python designers botched at least one thing concerning that
philosophy: private functions are not defined by "private" keyword like
in Java but instead the designers violated their own basic principle of
easy readability: the "privateness" is defined by prefixing the
function names with an underscore ("_").
That is merely a convention, a hint that the user might want to stay away
from such a symbol, not a hard requirement, which is why it was designed
that way. As Guido van Rossum has said: “We are all consenting adults
here”.
What?! Are you saying that there is no way to label functions as private
in Python? That sounds absolutely horrible. Why would anyone design a language with object-oriented features without support for encapsulation?
The keyboard required a lot more physical force than modern keyboards
do, which made names like "mv" and "rm" easier to type than, say
"rename" and "delete".
On Tue, 23 Jan 2024 15:35:39 -0800, Keith Thompson wrote:
The keyboard required a lot more physical force than modern keyboards
do, which made names like "mv" and "rm" easier to type than, say
"rename" and "delete".
And yet it turns out the OS calls are indeed things like “rename” and “unlink”, not “mv” or “rm”.
The term "conditional and" is probably not so good, but the
meaning of it here refers to the familiar short-circuiting
behaviour of C's "&&". The same behaviour exists in, I
think, all UNIX shells.
If I write this in bash:
rm foo.txt && rm bar.txt
then if the first rm-command fails with a non-zero exit value,
then the second rm-command is not executed at all.
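The C equivalent of that guarantee is that the right-hand operand of && is evaluated only when the left-hand one is true; a small illustrative sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = NULL;

    /* strlen(s) is never called: s != NULL is false, so &&
       short-circuits and skips the right-hand operand entirely. */
    if (s != NULL && strlen(s) > 0)
        printf("non-empty string\n");
    else
        printf("null or empty\n");

    return 0;
}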
On 23/01/2024 22:47, Kalevi Kolttonen wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 23 Jan 2024 19:32:26 -0000 (UTC), Kalevi Kolttonen wrote:
Dennis Ritchie has said that he regretted the spelling of creat(2)
function. Presumably the abbreviation was supposed to save one byte of storage.
Given that, in POSIX, all the functions of creat(2) have been subsumed
under open(2) anyway, we can largely ignore that.
I know. creat() is now completely redundant because
open() suffices. The reason I brought it up was just
so that people would realize that at least sometimes
it was a matter of saving storage space - even one
byte.
There are other worse problems with C. Like its use of “=” (the
mathematical equality operator) for assignment, instead of using the
“:=”
symbol that the Algols had adopted.
Well I started with Microsoft BASIC on Commodore 64 39 years ago.
So I am well used to "=" and do not consider it a problem.
I've used ":=" with Algol and "=" with Fortran (a bit longer ago).
That is not itself the problem.
The problem is allowing both assignment and equality within the same expression, which is what C does. Algol, Fortran and BASIC didn't do that.
If that wasn't the case, then you could use "=" for both assignment and equality without ambiguity.
Languages that use ":=" and "=" for those operations fare better than C,
which uses "=" and "==".
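The classic illustration of that ambiguity is the one-character slip in a condition; a deliberately buggy sketch:

#include <stdio.h>

int main(void)
{
    int x = 5;

    /* Intended: compare x with 0.  Actually: assigns 0 to x, and the
       expression's value is 0, so the branch is never taken. */
    if (x = 0)
        printf("x is zero\n");

    printf("x is now %d\n", x);   /* prints 0; the test should have been x == 0 */
    return 0;
}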
But the Python designers botched at least one thing concerning that
philosophy: private functions are not defined by a "private" keyword as in Java; instead the designers violated their own basic principle of easy readability: the "privateness" is defined by prefixing the
function names with an underscore ("_").
That is merely a convention, a hint that the user might want to stay
away
from such a symbol, not a hard requirement, which is why it was designed that way. As Guido van Rossum has said: “We are all consenting adults
here”.
What?! Are you saying that there is no way to label functions as private
in Python? That sounds absolutely horrible. Why would anyone design a
language with object-oriented features without support for encapsulation?
Python is 'ultra-dynamic'. That's why it's so hard to make it
performant. So you can do this:
import math
math.pi = "pie"
math.sqrt = lambda x:x*x
print(math.pi*2)
print(math.sqrt(10))
Output is:
piepie
100
bart <bc@freeuk.com> wrote:
Presumably everyone knows what AND and OR mean. So
why not just use AND and OR?
Maybe back in the early days of C even one byte
mattered? strlen("AND") is 3 and strlen("&&") is 2.
So they chose "&&" and then "||" was chosen because
of symmetry? Just speculation, of course.
Dennis Ritchie has said that he regretted the
spelling of creat(2) function. Presumably the
abbreviation was supposed to save one byte of
storage.
Back when I was learning Unix and C (late 1970s), the consensus belief
in my department was that they were simply poor typists. So many
commands that use two letters, one that can be typed with each hand, or commands that are one letter repeated (I've got a dd copying a disk
image even as we speak).
bart <bc@freeuk.com> wrote:
On 23/01/2024 19:32, Kalevi Kolttonen wrote:
bart <bc@freeuk.com> wrote:
Presumably everyone knows what AND and OR mean. So
why not just use AND and OR?
Maybe back in the early days of C even one byte
mattered?
Plenty of languages already existed that used 'and', 'or' and 'not'.
I actually know that but it could have mattered
to Dennis Ritchie to be as succinct as possible.
In the same spirit as UNIX has "ls" not "list", "cp"
not "copy" and "mv" not "move".
My first language did so as well, and was implemented on a smaller
machine than the first C compiler.
If compact code was advantageous, then you have to wonder about the
considerable amount of punctuation in C.
True.
strlen("AND") is 3 and strlen("&&") is 2.
So they chose "&&" and then "||" was chosen because
of symmetry? Just speculation, of course.
Why two & and two |? Was '|' even commonly available on keyboards at the
time? I'm pretty sure 'o' and 'r' were!
I really do not know. I never thought of that but
it does seem strange especially if one claims
that they wanted to save storage.
kalevi@kolttonen.fi (Kalevi Kolttonen) writes:
[snip]
Why two & and two |? Was '|' even commonly available on keyboards at the time? I'm pretty sure 'o' and 'r' were!
I really do not know. I never thought of that but
it does seem strange especially if one claims
that they wanted to save storage.
Back when I was learning Unix and C (late 1970s), the consensus belief
in my department was that they were simply poor typists. So many
commands that use two letters, one that can be typed with each hand, or commands that are one letter repeated (I've got a dd copying a disk
image even as we speak).
kalevi@kolttonen.fi (Kalevi Kolttonen) writes:
bart <bc@freeuk.com> wrote:
On 23/01/2024 19:32, Kalevi Kolttonen wrote:
bart <bc@freeuk.com> wrote:
Presumably everyone knows what AND and OR mean. So
why not just use AND and OR?
Maybe back in the early days of C even one byte
mattered?
Plenty of languages already existed that used 'and', 'or' and 'not'.
I actually know that but it could have mattered
to Dennis Ritchie to be as succinct as possible.
In the same spirit as UNIX has "ls" not "list", "cp"
not "copy" and "mv" not "move".
My first language did so as well, and was implemented on a smaller
machine than the first C compiler.
If compact code was advantageous, then you have to wonder about the
considerable amount of punctuation in C.
True.
strlen("AND") is 3 and strlen("&&") is 2.
So they chose "&&" and then "||" was chosen because
of symmetry? Just speculation, of course.
Why two & and two |? Was '|' even commonly available on keyboards at the time? I'm pretty sure 'o' and 'r' were!
I really do not know. I never thought of that but
it does seem strange especially if one claims
that they wanted to save storage.
Back when I was learning Unix and C (late 1970s), the consensus belief
in my department was that they were simply poor typists.
In article <1bbk9a4wi2.fsf@pfeifferfamily.net>,
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> wrote:
kalevi@kolttonen.fi (Kalevi Kolttonen) writes:
[snip]
Why two & and two |? Was '|' even commonly available on keyboards at the time? I'm pretty sure 'o' and 'r' were!
I really do not know. I never thought of that but
it does seem strange especially if one claims
that they wanted to save storage.
Back when I was learning Unix and C (late 1970s), the consensus belief
in my department was that they were simply poor typists. So many
commands that use two letters, one that can be typed with each hand, or
commands that are one letter repeated (I've got a dd copying a disk
image even as we speak).
The two-letter command thing dates from Multics, where most
commands had a "long" form (e.g., `list`) and a short form
(`ls`).
It is true that they were using teletypes like the ASR-33, which
were both slow and difficult to type on (more force required per
keystroke compared to modern keyboards), so brevity was prized.
Also, line rates were very slow, 110 or 300 baud; economy of
expression led to greater throughput.
On 24/01/2024 15:17, Dan Cross wrote:
In article <1bbk9a4wi2.fsf@pfeifferfamily.net>,
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> wrote:
kalevi@kolttonen.fi (Kalevi Kolttonen) writes:
[snip]
Why two & and two |? Was '|' even commonly available on keyboards at the time? I'm pretty sure 'o' and 'r' were!
I really do not know. I never thought of that but
it does seem strange especially if one claims
that they wanted to save storage.
Back when I was learning Unix and C (late 1970s), the consensus belief
in my department was that they were simply poor typists. So many
commands that use two letters, one that can be typed with each hand, or
commands that are one letter repeated (I've got a dd copying a disk
image even as we speak).
The two-letter command thing dates from Multics, where most
commands had a "long" form (e.g., `list`) and a short form
(`ls`).
It is true that they were using teletypes like the ASR-33, which
were both slow and difficult to type on (more force required per
keystroke compared to modern keyboards), so brevity was prized.
Also, line rates were very slow, 110 or 300 baud; economy of
expression led to greater throughput.
I think you're just making excuses.
I used ASR33s extensively and had no trouble typing on them. (Other than
a small latency between pressing a key and having it printed that you
got used to, but it didn't affect your typing speed.)
A 110 baud speed means max 10 characters per second, about 120 words per minute. That's twice as fast as an expert typist on a modern keyboard.
So it's hardly as though that was the limiting factor.
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
compared with a minimal:
cc prog
but the use of 2-letter abbreviations to save having to type out a 3- or 4-letter word (which is easier and quicker to type than random punctuation) gets lauded.
In article <uorbbr$1s2e2$1@dont-email.me>, bart <bc@freeuk.com> wrote:
I think you're just making excuses.
Rather, I'm simply explaining the history.
I used ASR33s extensively and had no trouble typing on them. (Other than
a small latency between pressing a key and having it printed that you
got used to, but it didn't affect your typing speed.)
Anecdotal. I, too, have typed on an ASR-33. I did not find it
pleasant and my hands hurt after a short time.
On 24/01/2024 16:27, Dan Cross wrote:
In article <uorbbr$1s2e2$1@dont-email.me>, bart <bc@freeuk.com> wrote:
I think you're just making excuses.
Rather, I'm simply explaining the history.
I used ASR33s extensively and had no trouble typing on them. (Other than a small latency between pressing a key and having it printed that you
got used to, but it didn't affect your typing speed.)
Anecdotal. I, too, have typed on an ASR-33. I did not find it
pleasant and my hands hurt after a short time.
If this is the same terminal that you have to type program source code
on (especially C source!), and edit it, then clearly having two-letter
shell file commands instead of 3 or 4 letters is going to make little overall difference to how long development might take, or any fatigue experienced.
(I don't remember that bit, but maybe my programs were short, and I
could only book terminals for an hour at a time anyway.)
I think it was suggested somewhere that the characteristics of such a terminal were a reason for the short commands.
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
On 24/01/2024 20:17, Lawrence D'Oliveiro wrote:
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
You have 17 different source files like 'prog.c' in the same folder,
each a different program, and are using 4 different compilers, which are invoked in ad hoc permutations.
So your answer is to just put it all in a makefile (I'd like to see what you'd put in it) and type 'make', which magically knows your intentions?
bart <bc@freeuk.com> wrote:
On 24/01/2024 20:17, Lawrence D'Oliveiro wrote:
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
You have 17 different source files like 'prog.c' in the same folder,
each a different program, and are using 4 different compilers, which are
invoked in ad hoc permutations.
So your answer is to just put it all in a makefile (I'd like to see what
you'd put in it) and type 'make', which magically knows your intentions?
I think I have said this before: The first mistake in that approach is lumping all the different C source files in the same directory. You
should use subdirectories for different programs and their corresponding source files.
Anyway, even in this insane situation, you could use a Makefile. But
typing "make" would not suffice, you would have to use something like:
make <program name>
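For a directory full of single-file programs, even GNU make's built-in rules are enough; a minimal sketch (the -lm mirrors the command line quoted above, and the flag values are only examples):

# GNU make's built-in "%: %.c" rule means "make prog" compiles prog.c
# into prog using $(CC), $(CFLAGS) and $(LDLIBS); no explicit rules needed.
CC     = cc
CFLAGS = -O2
LDLIBS = -lm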
bart <bc@freeuk.com> wrote:
On 24/01/2024 20:17, Lawrence D'Oliveiro wrote:
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
You have 17 different source files like 'prog.c' in the same folder,
each a different program, and are using 4 different compilers, which are
invoked in ad hoc permutations.
So your answer is to just put it all in a makefile (I'd like to see what
you'd put in it) and type 'make', which magically knows your intentions?
I think I have said this before: The first mistake in that approach is lumping all the different C source files in the same directory.
You
should use subdirectories for different programs and their corresponding source files.
Anyway, even in this insane situation, you could use a Makefile.
On 25/01/2024 14:57, Kalevi Kolttonen wrote:
bart <bc@freeuk.com> wrote:
On 24/01/2024 20:17, Lawrence D'Oliveiro wrote:
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
You have 17 different source files like 'prog.c' in the same folder,
each a different program, and are using 4 different compilers, which are invoked in ad hoc permutations.
So your answer is to just put it all in a makefile (I'd like to see what you'd put in it) and type 'make', which magically knows your intentions?
I think I have said this before: The first mistake in that approach is
lumping all the different C source files in the same directory.
Then you will instead be issuing commands to switch between directories
all the time.
Is that what you do with, say, image files, have a dedicated folder per picture?
What's wrong with having lots of instances of the same type of file in one folder? (Most cameras do exactly that.) Then if you had a program P to operate on an image, you just type:
P file
(Without even an extension if P only works with .jpg for example.)
This is how I think it is reasonable for a compiler for a specific
language to work.
You
should use subdirectories for different programs and their corresponding
source files.
Anyway, even in this insane situation, you could use a Makefile.
A typical session in a console or terminal (at least for me) is typing commands to copy, create, delete, rename files and folders, or to
navigate between folders, launch programs which have parameters, etc. etc.
Some of those programs could be compilers, others the programs they
generate.
All lots of ad hoc activities with no particular pattern.
You wouldn't put such an arbitrary sequence, that you don't know in
advance anyway, into a script such as a makefile. Only if you see a
repeating pattern that you will invoke repeatedly.
Fortunately most such commands mentioned are sensible: there is rarely extraneous detail you have to enter over and over again, when it is
something that it should know or assume anyway.
My comment was anyway just highlighting the difference between people
getting uptight about having to type 'list' instead of 'ls', and those who
just shrug at having to type 'gcc prog.c -o prog -lm ...'. Apparently
that is acceptable.
On 24/01/2024 20:17, Lawrence D'Oliveiro wrote:
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
You have 17 different source files like 'prog.c' in the same folder,
each a different program, and are using 4 different compilers, which are invoked in ad hoc permutations.
So your answer is to just put it all in a makefile (I'd like to see what you'd put in it) and type 'make', which magically knows your intentions?
On 2024-01-25, bart <bc@freeuk.com> wrote:
On 24/01/2024 20:17, Lawrence D'Oliveiro wrote:
On Wed, 24 Jan 2024 15:46:04 +0000, bart wrote:
It's also laughable that I get shouted down when I complain about all
the extra junk you have to type here:
cc prog.c -o prog -lm
We normally have Makefiles to do that for us.
You have 17 different source files like 'prog.c' in the same folder,
each a different program, and are using 4 different compilers, which are
invoked in ad hoc permutations.
So your answer is to just put it all in a makefile (I'd like to see what
you'd put in it) and type 'make', which magically knows your intentions?
Putting that into any file at all would be a good start.
The situation can be organized very nicely with GNU make.
compile_xcc = touch $(2) # fictitious
compile_gcc = gcc $(1) -o $(2)
srcs := $(wildcard *.c)
progs.gcc := $(patsubst %.c,%.gcc,$(srcs))
progs.xcc := $(patsubst %.c,%.xcc,$(srcs))
progs := $(progs.gcc) $(progs.xcc)
%.gcc : %.c; $(call compile_gcc,$<,$@)
%.xcc : %.c; $(call compile_xcc,$<,$@)
.PHONY: all clean
all : $(progs)
clean : ; rm -f $(progs)
Now if we type "make", every .c file is built into a .gcc
and .xcc file.
We can use "make whatever.gcc" just to update that, if necessary.
"make clean" removes all the executables.
"make -n" will show commands without running them.
Suppose we want the gcc executables in a gcc/ directory,
and the xcc ones in xcc/. We can make it a bit more complicated,
and actually make the creation of the directory a dependency.
Because the timestamps of directories are irrelevant (only their
existence or absence) we use the "order only prerequisite" feature
of GNU Make, indicated by the | character.
compile_xcc = touch $(2) # fictitious
compile_gcc = gcc $(1) -o $(2)
srcs := $(wildcard *.c)
progs.gcc := $(patsubst %.c,gcc/%,$(srcs))
progs.xcc := $(patsubst %.c,xcc/%,$(srcs))
progs := $(progs.gcc) $(progs.xcc)
gcc/% : %.c; $(call compile_gcc,$<,$@)
xcc/% : %.c; $(call compile_xcc,$<,$@)
% : ; mkdir $@
.PHONY: all clean
all : $(progs)
$(progs.gcc) : | gcc
$(progs.xcc) : | xcc
clean : ; rm -f $(progs); rmdir xcc gcc
Example session:
$ ls
Makefile prog.c
$ make
mkdir gcc
gcc prog.c -o gcc/prog
mkdir xcc
touch xcc/prog
$ ls gcc
prog
$ ls xcc
prog
$ rm -rf gcc
$ make
mkdir gcc
gcc prog.c -o gcc/prog
$ rm -rf xcc
$ make
mkdir xcc
touch xcc/prog
$ make clean
rm -f gcc/prog xcc/prog; rmdir xcc gcc
What?! Are you saying that there is no way to label functions as private
in Python? That sounds absolutely horrible. Why would anyone design a language with object-oriented features without support for
encapsulation?
On Tue, 23 Jan 2024 22:47:30 -0000 (UTC), Kalevi Kolttonen wrote:
What?! Are you saying that there is no way to label functions as
private in Python? That sounds absolutely horrible. Why would
anyone design a language with object-oriented features without
support for encapsulation?
[...]
Python also has metaclasses. Does your favourite OO language have metaclasses?
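To make the earlier point about the underscore concrete, here is a tiny sketch of the convention (and of the stricter double-underscore name mangling); the class and attribute names are just illustrative:

class Account:
    def __init__(self, balance):
        self._balance = balance      # single underscore: convention only
        self.__secret = "hunter2"    # double underscore: name-mangled

acct = Account(100)
print(acct._balance)            # allowed; nothing enforces the convention
print(acct._Account__secret)    # the mangled name is still reachable
# acct.__secret                 # would raise AttributeError outside the class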
On Tue, 23 Jan 2024 12:09:09 -0800, Keith Thompson wrote:
C's predecessor language B ...
Let me see if I understand the genealogy:
B was an adaptation of Martin Richards’ BCPL. BCPL was a low-level, typeless language designed for system programming. It originated before byte-addressable machines became popular.
In fact, Richards looked at
adapting BCPL to the PDP-11, and came away unimpressed with the latter’s architecture.
So B was BCPL adapted by the Bell Labs crew to work on a byte-addressable machine.
But it still didn’t have types. And types turned out to be rather
important to the implementation of a decent OS and accompanying
software.
So the Bell Labs crew came up with the successor language C, which did have
types.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
In fact, Richards looked at adapting BCPL to the PDP-11, and came away
unimpressed with the latter’s architecture.
I didn't know that. Do you have a reference I can read?
So B was BCPL adapted by the Bell Labs crew to work on a
byte-addressable machine.
Not really. B also worked on words.
I say early BCPL because, while B evolved into C,
BCPL also evolved and later acquired a byte access operator, "%", along
with the word access one: "!".