An interesting detail about the /360 design was that they originally
wanted to do a stack-based machine. It would have been OK for the
mid- and high-end machines, but on low-end machines it would have
been uncompetitive, so they rejected that approach.
He discusses the book on computer architecture he co-authored with
Gerrit Blaauw in it (as a project). Would be _very_ nice to read,
but the price on Amazon is somewhat steep, a bit more than 150 Euros.
They had the insight to see that the 16 fixed-size registers could be
in fast storage on high end machines, main memory on low end machines,
so the high end machines were fast and the low end no slower than a memory-memory architecture which is what it in practice was. It was
really an amazing design, no wonder it's the only architecture of its
era that still has hardware implementations.
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
And they are making good money on it, too.
Prompted by a remark in another newsgroup, I looked at IBM's 2023
annual report, where zSystems is put under "Hybrid Infrastructure"
(lumped together with POWER). The revenue for both lumped together
is around 9.215 billion dollars, with a pre-tax margin of more
than 50%.
At those margins, they can certainly pay for a development team
for future hardware generations.
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
Yes, although it isn't clear how much of its success is due to
technical superiority versus marketing superiority.
According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
Yes, although it isn't clear how much of its success is due to
technical superiority versus marketing superiority.
S/360 invented eight bit byte addressed memory with larger power of 2
data sizes, which I think all by itself is enough to explain why it
survived. All the others, which were word or maybe decimal digit
addressed, died. Its addresses could handle 16MB which without too
many contortions was expanded to 2GB, a lot more than any other design
of the era. We all know that the thing that kills architectures is
running out of address space.
I thought the PDP-10 was swell, but even if DEC had been able to
design and ship the Jupiter follow-on to the KL-10, its expanded
addressing was a kludge. It only provided addressing 8M words or about
32M bytes with no way to go past that.
S/360 invented eight bit byte addressed memory with larger power of 2
data sizes, which I think all by itself is enough to explain why it
survived. All the others, which were word or maybe decimal digit
addressed, died. Its addresses could handle 16MB which without too
many contortions was expanded to 2GB, a lot more than any other design
of the era. We all know that the thing that kills architectures is
running out of address space.
Note to self: when designing a 36-bit machine, do not cripple it
with 18-bit addresses with inherent indirection....
S/360 invented eight bit byte addressed memory with larger power of 2
data sizes, which I think all by itself is enough to explain why it
survived. All the others, which were word or maybe decimal digit
addressed, died. Its addresses could handle 16MB which without too
many contortions was expanded to 2GB, a lot more than any other design
of the era. We all know that the thing that kills architectures is
running out of address space.
I thought the PDP-10 was swell, but even if DEC had been able to
design and ship the Jupiter follow-on to the KL-10, its expanded
addressing was a kludge. It only provided addressing 8M words or about
32M bytes with no way to go past that.
According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
Yes, although it isn't clear how much of its success is due to
technical superiority versus marketing superiority.
S/360 invented eight bit byte addressed memory with larger power of 2
data sizes, which I think all by itself is enough to explain why it
survived. All the others, which were word or maybe decimal digit
addressed, died. Its addresses could handle 16MB which without too
many contortions was expanded to 2GB, a lot more than any other design
of the era. We all know that the thing that kills architectures is
running out of address space.
However, one question. Designs like the PDP-10 or the UNIVAC
(from what I read on Wikipedia) had "registers" at certain
memory locations.
I do want to note that another factor in S/360's success was the
quality of the paper peripherals, especially the 1401 printer, which
was a true marvel in its time. IBM got that advantage from their long
experience with punch card business systems.
All the others, which were word or maybe decimal digit
addressed, died. ...
The Univac 1110 (circa 1972, about a decade before XA) had banking,
which allowed an instruction to address anywhere within a 262K
(approximately 1 MB) "window" into what could be an "address space" of
about 4 GB. It was a little awkward in that, while you could have 4 of
such "windows" available at any time, changing windows required
executing an (unprivileged) instruction.
According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
I do want to note that another factor in S/360's success was the
quality of the paper peripherals, especially the 1401 printer, which
was a true marvel in its time. IBM got that advantage from their
long experience with punch card business systems.
I presume you mean the 1403 which was indeed a great printer. I
printed a lot of term papers on them.
All the others, which were word or maybe decimal digit
addressed, died. ...
The Univac 1110 (circa 1972, about a decade before XA) had
banking, which allowed an instruction to address anywhere within a
262K (approximately 1 MB) "window" into what could be an "address
space" of about 4 GB. It was a little awkward in that, while you
could have 4 of such "windows" available at any time, changing
windows required executing an (unprivileged) instruction.
There were a lot of segmented address schemes and as far as I can tell
nobody liked them except maybe the Burroughs machines where the
compilers made it largely invisible. The most famous was the 8086 and
286 but the PDP-10 extended addressing was sort of like that and even
the PDP-8 had a bit to say whether an address was to page 0 or the
current one.
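A rough C sketch of the banked-window idea described above; the names,
sizes, and layout are invented for illustration and not taken from the
1110 or any other real machine:

#include <stdint.h>
#include <stdio.h>

/* Illustrative banked addressing: an instruction can only name an
 * offset within one of a few active "windows", each of which maps onto
 * a much larger address space.  All constants here are made up. */

#define NUM_WINDOWS  4
#define WINDOW_WORDS (256u * 1024u)       /* roughly the 262K window above */

static uint32_t window_base[NUM_WINDOWS]; /* where each window points */

/* Turn (window, offset-the-instruction-can-express) into a full address. */
static uint32_t translate(unsigned window, uint32_t offset)
{
    return window_base[window % NUM_WINDOWS] + (offset % WINDOW_WORDS);
}

/* Reaching data outside the active windows means re-pointing a window
 * first -- the awkward extra instruction mentioned above. */
static void switch_window(unsigned window, uint32_t new_base)
{
    window_base[window % NUM_WINDOWS] = new_base;
}

int main(void)
{
    switch_window(0, 0x00000000u);
    switch_window(1, 0x00400000u);
    printf("%08x\n", (unsigned)translate(1, 42));   /* 0040002a */
    return 0;
}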
S/360 invented eight bit byte addressed memory with larger power of 2
Brooks wrote that the design was supposed to have been 32-bit
clean from the start, but that the people who implemented the BALR
instruction (which puts some bits of the PSW into the high-value
byte) didn't follow that guideline. He blamed himself for not making
that sufficiently clear to all the design team.
I thought the PDP-10 was swell, but even if DEC had been able to
design and ship the Jupiter follow-on to the KL-10, its expanded
addressing was a kludge. It only provided addressing 8M words or about
32M bytes with no way to go past that.
Reading
http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/pdp10/KC10_Jupiter/ExtendedAddressing_Jul83.pdf
I concur that it was a kludge, but at least they seem to have
allowed for further extension by reserving a 1-1 bit pattern
as an illegal indirect word.
However, one question. Designs like the PDP-10 or the UNIVAC
(from what I read on Wikipedia) had "registers" at certain
memory locations. On the PDP-10, it even appears to have been
possible to run code in the first memory locations/registers.
It seems that the /360 was the first machine which put many
registers into a (conceptually) separate space, leaving them open
to implementing them either in memory or as faster logic.
Is that the case, or did anybody beat them to it?
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
Yes, although it isn't clear how much of its success is due to
technical superiority versus marketing superiority.
Prompted by a remark in another newsgroup, I looked at IBM's 2023
annual report, where zSystems is put under "Hybrid Infrastructure"
(lumped together with POWER). The revenue for both lumped together
is around 9.215 billion dollars, with a pre-tax margin of more
than 50%.
At those margins, they can certainly pay for a development team
for future hardware generations.
Yes, but remember that includes software revenue, which has higher
margins than hardware revenue. I believe I saw somewhere that IBM made
more from MVS, DB2, CICS, etc.
than they do on the hardware itself. So
one could argue that they have to develop new hardware in order to
protect their software revenue!
I misread the manual. The extended addresses were 30 bits or about 4GB
which was plenty for that era, but the way they did it in 256K word
sections was still a kludge. In the original PDP-6/10 every
instruction could address all of memory. In extended mode you could
directly address only the current section, and everything else needed
an index register or an indirect address.
While this wasn't terribly hard, it did mean that any time you wanted
to change a program to run in extended mode you had to look at all the
code and check every instruction that did an address calculation,
which was tedious.
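A rough C sketch of that section/offset split; the 18-bit in-section
field and 30-bit total width follow the description above, everything
else (names and example values) is invented:

#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 18u                          /* 256K words per section */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1u)

/* "Old style" reference: the instruction supplies only the 18-bit part,
 * so it implicitly stays in the section the PC is in. */
static uint32_t local_ea(uint32_t pc, uint32_t insn_addr_field)
{
    return (pc & ~OFFSET_MASK) | (insn_addr_field & OFFSET_MASK);
}

/* Cross-section reference: a full 30-bit address has to come from an
 * index register or an indirect word. */
static uint32_t global_ea(uint32_t index_reg)
{
    return index_reg & ((1u << 30) - 1u);
}

int main(void)
{
    uint32_t pc = (5u << OFFSET_BITS) | 01234u;           /* in section 5 */
    printf("%o\n", (unsigned)local_ea(pc, 0777u));        /* stays in 5 */
    printf("%o\n", (unsigned)global_ea((7u << OFFSET_BITS) | 0777u));
    return 0;
}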
Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
Yes, although it isn't clear how much of its success is due to
technical superiority versus marketing superiority.
"Put a bullet through a CPU without missing a single transation"
is also a technical achievement :-)
I recently heard (but did not find a source) that IBM did RAID on some
of their caches.
Prompted by a remark in another newsgroup, I looked at IBM's 2023
annual report, where zSystems is put under "Hybrid Infrastructure"
(lumped together with POWER). The revenue for both lumped together
is around 9.215 billion dollars, with a pre-tax margin of more
than 50%.
At those margins, they can certainly pay for a development team
for future hardware generations.
Yes, but remember that includes software revenue, which has higher
margins than hardware revenue. I believe I saw somewhere that IBM
made more from MVS, DB2, CICS, etc.
SAP S4/HANA is going to hurt their bottom line, then. Earlier
versions of SAP could, I understand, run on zOS and DB2, S4/HANA
requires SAP's in-house database and requires Linux.
than they do on the hardware itself. So
one could argue that they have to develop new hardware in order to
protect their software revenue!
Sounds reasonable, and the reverse of what they did in the
(far-away) past.
While this wasn't terribly hard, it did mean that any time you wanted
to change a program to run in extended mode you had to look at all the
code and check every instruction that did an address calculation,
which was tedious.
Hmm... would a simple recompilation have done the trick, or were there
also issues with integers being restricted to 18 bits, for example?
Yes, but remember that includes software revenue, which has higher
margins than hardware revenue. I believe I saw somewhere that IBM made
more from MVS, DB2, CICS, etc.
SAP S4/HANA is going to hurt their bottom line, then. Earlier
versions of SAP could, I understand, run on zOS and DB2, S4/HANA
requires SAP's in-house database and requires Linux.
According to Thomas Koenig <tkoenig@netcologne.de>:
Yes, but remember that includes software revenue, which has higher
margins than hardware revenue. I believe I saw somewhere that IBM made
more from MVS, DB2, CICS, etc.
SAP S4/HANA is going to hurt their bottom line, then. Earlier
versions of SAP could, I understand, run on zOS and DB2, S4/HANA
requires SAP's in-house database and requires Linux.
I dunno how much of a problem it'll be. IBM has put a lot of work into getting zSeries to run Linux well.
I realize neither you nor I would buy a mainframe to run Linux, but we wouldn't run SAP either.
According to Thomas Koenig <tkoenig@netcologne.de>:
While this wasn't terribly hard, it did mean that any time you wanted
to change a program to run in extended mode you had to look at all the
code and check every instruction that did an address calculation,
which was tedious.
Hmm... would a simple recompilation have done the trick, or were there
also issues with integers being restricted to 18 bits, for example?
This was 50 years ago. The system software was mostly written in
assembler. Some was written in BLISS which was more concise but still extremely machine specific.
I suppose you could recompile your Fortran programs, but the Fortran compiler was written in BLISS.
There were later versions of BLISS for the PDP-11, VAX and other
machines but they were not compatible with each other.
The earliest
places I can think of system programming languages with different
targets were when Bell Labs ported Unix to the Interdata, and the IBM
S/38 and its successors that had (still has) a virtual machine
language that is translated to whatever hardware it's running on.
This was 50 years ago. The system software was mostly written in
assembler. Some was written in BLISS which was more concise but still
extremely machine specific.
BLISS reads a LOT like the original K&R C.
There were later versions of BLISS for the PDP-11, VAX and other
machines but they were not compatible with each other.
Imagine if BLISS were machine independent ?!!
Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:[...]
[...] remember that includes software revenue, which has higher
margins than hardware revenue. I believe I saw somewhere that IBM
made more from MVS, DB2, CICS, etc.
SAP S4/HANA is going to hurt their bottom line, then. Earlier
versions of SAP could, I understand, run on zOS and DB2, S4/HANA
requires SAP's in-house database and requires Linux.
than they do on the hardware itself. So
one could argue that they have to develop new hardware in order to
protect their software revenue!
Sounds reasonable, and the reverse of what they did in the
(far-away) past.
I've just read (most of) "The Design of Design" by Fred Brooks,
especially the chapters dealing with the design of the /360,
and it's certainly worth reading. (I had finished "The Mythical
Man-Month" before). There are chapters on computer and software architectures, but also something on a house he himself built.
An interesting detail about the /360 design was that they originally
wanted to do a stack-based machine. It would have been OK for the
mid- and high-end machines, but on low-end machines it would have
been uncompetitive, so they rejected that approach.
He discusses the book on computer architecture he co-authored with
Gerrit Blaauw in it (as a project). Would be _very_ nice to read,
but the price on Amazon is somewhat steep, a bit more than 150 Euros.
One thing about Brooks - he is not shy of criticizing his own
works when his views changed. I liked his scathing comments on JCL
so much that I put them in the Wikipedia article :-)
His main criticism of his own book on computer architecture was
that it treated computer architecture as a finite field which had
been explored already.
@John S: Not sure if you've read "The Design of Design", but if you
haven't, you probably should. It might help you to refocus in your
quest to recreate a S/360 (especially the requirement to get the
architecture to work well on a very small machine like the 360/30).
Soo... good to read. Anything else?
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
They had the insight to see that the 16 fixed-size registers could
be in fast storage on high end machines, main memory on low end
machines, so the high end machines were fast and the low end no
slower than a memory-memory architecture which is what it in
practice was. It was really an amazing design, no wonder it's the
only architecture of its era that still has hardware
implementations.
Yes, although it isn't clear how much of its success is due to
technical superiority versus marketing superiority.
Sounds reasonable, and the reverse of what they did in the
(far-away) past.
IBM was forced to change what it did in the past as a
consequence of an antitrust action filed by the US
government. And in fact there was more than one of
those.
John Levine <johnl@taugh.com> schrieb:
S/360 invented eight bit byte addressed memory with larger power of 2
data sizes, which I think all by itself is enough to explain why it
survived. All the others, which were word or maybe decimal digit
addressed, died. Its addresses could handle 16MB which without too
many contortions was expanded to 2GB, a lot more than any other design
of the era. We all know that the thing that kills architectures is
running out of address space.
Brooks wrote that the design was supposed to have been 32-bit
clean from the start, but that the people who implemented the BALR
instruction (which puts some bits of the PSW into the high-value
byte) didn't follow that guideline. He blamed himself for not making
that sufficiently clear to all the design team.
He also commented on the carefully-designed gaps in the opcode space;
extensibility was designed in from the beginning. @John S: Another
important point about S/360 you might want to follow, as Mitch
keeps telling you...
I thought the PDP-10 was swell, but even if DEC had been able to
design and ship the Jupiter follow-on to the KL-10, its expanded
addressing was a kludge. It only provided addressing 8M words or about
32M bytes with no way to go past that.
Reading
http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/pdp10/KC10_Jupiter/ExtendedAddressing_Jul83.pdf
I concur that it was a kludge, but at least they seem to have
allowed for further extension by reserving a 1-1 bit pattern
as an illegal indirect word.
However, one question. Designs like the PDP-10 or the UNIVAC
(from what I read on Wikipedia) had "registers" at certain
memory locations. On the PDP-10, it even appears to have been
possible to run code in the first memory locations/registers.
It seems that the /360 was the first machine which put many
registers into a (conceptually) separate space, leaving them open
to implementing them either in memory or as faster logic.
Is that the case, or did anybody beat them to it?
According to Thomas Koenig <tkoenig@netcologne.de>:
S/360 invented eight bit byte addressed memory with larger power of 2
Brooks wrote that the design was supposed to have been 32-bit
clean from the start, but that the people who implemented the BALR
instruction (which puts some bits of the PSW into the high-value
byte) didn't follow that guideline. He blamed himself for not making
that sufficiently clear to all the design team.
Yup. Even worse, the OS programmers were under extreme pressure
to save memory so in every data structure with address words,
they used the high byte for flags or other stuff. So when they
went to 31 bit addressing, they needed new versions of all of
the control blocks.
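A small illustrative C sketch (not IBM's actual control-block layouts)
of why that trick breaks: with 24-bit addresses stored in 32-bit words
the top byte looks free, so it collects flags, and any wider address
then collides with them:

#include <stdint.h>
#include <assert.h>

#define FLAG_IN_USE 0x80000000u     /* invented flag in the "spare" byte */
#define ADDR24_MASK 0x00FFFFFFu     /* 24-bit address part */

typedef struct {
    uint32_t flags_and_addr;        /* flags in the high byte, address below */
} old_block;                        /* hypothetical control block */

static uint32_t get_addr24(const old_block *b)
{
    return b->flags_and_addr & ADDR24_MASK;
}

int main(void)
{
    old_block ok = { FLAG_IN_USE | 0x00123456u };
    assert(get_addr24(&ok) == 0x00123456u);     /* fine with 24-bit addresses */

    old_block broken = { FLAG_IN_USE | 0x40123456u };
    assert(get_addr24(&broken) != 0x40123456u); /* 31-bit address: high bits lost */
    return 0;
}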
John Levine <johnl@taugh.com> writes:
According to Thomas Koenig <tkoenig@netcologne.de>:
S/360 invented eight bit byte addressed memory with larger power of 2
Brooks wrote that the design was supposed to have been 32-bit
clean from the start, but that the people who implemented the BALR
instruction (which puts some bits of the PSW into the high-value
byte) didn't follow that guideline. He blamed himself for not making
that sufficiently clear to all the design team.
Yup. Even worse, the OS programmers were under extreme pressure
to save memory so in every data structure with address words,
they used the high byte for flags or other stuff. So when they
went to 31 bit addressing, they needed new versions of all of
the control blocks.
The B300 had fixed instruction format that included three
operand fields. For instructions that didn't use all three
operands, the programmer was encouraged to use the unused
operand fields as scratch fields.
The B300 had fixed instruction format that included three
operand fields. For instructions that didn't use all three
operands, the programmer was encouraged to use the unused
operand fields as scratch fields.
For a modern ISA, the architect should specify that various bits
of the general format "must be zero"* when those bits are not used
in the instruction.
It seems that the /360 was the first machine which put many
registers into a (conceptually) separate space, leaving them open
to implementing them either in memory or as faster logic.
PDP-8 had both (auto-increment index registers in memory) and
the separate accumulator and link registers.
Scott Lurndal wrote:
John Levine <johnl@taugh.com> writes:
According to Thomas Koenig <tkoenig@netcologne.de>:
S/360 invented eight bit byte addressed memory with larger power of 2
Brooks wrote that the design was supposed to have been 32-bit
clean from the start, but that the people who implemented the BALR
instruction (which puts some bits of the PSW into the high-value
byte) didn't follow that guideline. He blamed himself for not making
that sufficiently clear to all the design team.
Yup. Even worse, the OS programmers were under extreme pressure
to save memory so in every data structure with address words,
they used the high byte for flags or other stuff. So when they
went to 31 bit addressing, they needed new versions of all of
the control blocks.
The B300 had fixed instruction format that included three
operand fields. For instructions that didn't use all three
operands, the programmer was encouraged to use the unused
operand fields as scratch fields.
For a modern ISA, the architect should specify that various bits
mitchalsup@aol.com (MitchAlsup1) writes:
Scott Lurndal wrote:
John Levine <johnl@taugh.com> writes:
According to Thomas Koenig <tkoenig@netcologne.de>:
S/360 invented eight bit byte addressed memory with larger power of 2
Brooks wrote that the design was supposed to have been 32-bit
clean from the start, but that the people who implemented the BALR
instruction (which puts some bits of the PSW into the high-value
byte) didn't follow that guideline. He blamed himself for not making
that sufficiently clear to all the design team.
Yup. Even worse, the OS programmers were under extreme pressure
to save memory so in every data structure with address words,
they used the high byte for flags or other stuff. So when they
went to 31 bit addressing, they needed new versions of all of
the control blocks.
The B300 had fixed instruction format that included three
operand fields. For instructions that didn't use all three
operands, the programmer was encouraged to use the unused
operand fields as scratch fields.
For a modern ISA, the architect should specify that various bits
The B300 was an extension of the 1950's Electrodata 220, and had
very little total memory.
In modern systems with several orders of magnitude more memory, the
more useful restriction is to make the text section read-only
via the MMU.
Yes, for extensibility, the hardware should, generally, fault
on unused instruction encodings (having a NOP space that can be
extended with 'hint' instructions in future versions of the
instruction space maintains backwards compatibility with software
built for later generations when run on earlier generations which
treat the encoding as a NOP, viz. ARM64).
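A toy C sketch of that hint-space idea; the opcode values and field
layout are invented, this is just the shape of the mechanism:

#include <stdint.h>
#include <stdio.h>

#define OPC(insn)  ((uint8_t)((insn) >> 24))
#define HINT_FIRST 0xE0u             /* 0xE0..0xEF architected as NOP-hints */
#define HINT_LAST  0xEFu

/* Earlier generation: the whole hint range does nothing, by definition. */
static void old_cpu(uint32_t insn)
{
    uint8_t opc = OPC(insn);
    if (opc >= HINT_FIRST && opc <= HINT_LAST)
        return;                      /* NOP */
    printf("old cpu: execute opcode %02x\n", (unsigned)opc);
}

/* Later generation: one hint encoding has gained a meaning; the rest of
 * the range is still NOP, so future software keeps working too. */
static void new_cpu(uint32_t insn)
{
    uint8_t opc = OPC(insn);
    if (opc == 0xE1u) { printf("new cpu: prefetch hint\n"); return; }
    if (opc >= HINT_FIRST && opc <= HINT_LAST)
        return;                      /* still-unassigned hint: NOP */
    printf("new cpu: execute opcode %02x\n", (unsigned)opc);
}

int main(void)
{
    uint32_t hint = 0xE1000000u;     /* emitted by newer software */
    old_cpu(hint);                   /* silently ignored */
    new_cpu(hint);                   /* acted on */
    return 0;
}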
According to MitchAlsup1 <mitchalsup@aol.com>:
For a modern ISA, the architect should specify that various bits
of the general format "must be zero"* when those bits are not used
in the instruction.
That was another innovation of the 360. It specifically said that
unused bits (of which there were a few) and unused instructions (of
which there were a lot) were reserved. The unused bits had to be
zero, and the instructions all trapped.
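A toy decoder sketch in C of that rule, reserved bits must be zero and
undefined opcodes trap; the field layout and opcode numbers are invented
for the example, not taken from S/360:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define OPC_NOP 0x00u
#define OPC_ADD 0x01u

static bool defined_opcode(uint8_t opc)
{
    return opc == OPC_NOP || opc == OPC_ADD;
}

/* Which of the remaining 24 bits each opcode actually uses. */
static uint32_t used_mask(uint8_t opc)
{
    return (opc == OPC_ADD) ? 0x00FFFF00u : 0u;
}

/* Returns true if the instruction may execute, false for "trap". */
static bool decode(uint32_t insn)
{
    uint8_t  opc  = (uint8_t)(insn >> 24);
    uint32_t rest = insn & 0x00FFFFFFu;

    if (!defined_opcode(opc))   return false;   /* undefined opcode: trap */
    if (rest & ~used_mask(opc)) return false;   /* reserved bit set: trap */
    return true;
}

int main(void)
{
    printf("%d\n", decode(0x01123400u));   /* 1: well-formed */
    printf("%d\n", decode(0x01123401u));   /* 0: reserved bit nonzero */
    printf("%d\n", decode(0x7F000000u));   /* 0: opcode never defined */
    return 0;
}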
According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
Sounds reasonable, and the reverse of what they did in the
(far-away) past.
IBM was forced to change what it did in the past as a
consequence of an antitrust action filed by the US
government. And in fact there was more than one of
those.
That's true but they didn't have that much practical effect.
The 1956 agreement required that they sell equipment, rather than only leasing it, let customers buy their cards from vendors other than IBM,
and some other related stuff. A big deal then, irrelevant now.
In 1969 they preemptively unbundled software and services, expecting
that an antitrust suit could force them to do so. There were many antitrust suits through 1982, all of which IBM won or which were dismissed.
John Levine <johnl@taugh.com> writes:
According to MitchAlsup1 <mitchalsup@aol.com>:
For a modern ISA, the architect should specify that various bits
of the general format "must be zero"* when those bits are not used
in the instruction.
That was another innovation of the 360. It specifically said that
unused bits (of which there were a few) and unused instructions (of
which there were a lot) were reserved. The unused bits had to be
zero, and the instructions all trapped.
I would describe this not so much as an innovation but just as
applying a lesson learned from earlier experience.
Some earlier IBM
model (don't remember which one) had the property that instructions
were somewhat like microcode, and some undocumented combinations of
bits would do useful things.
Personally I think his assessment of JCL is harsher than it
deserves. Don't get me wrong, JCL is not my idea of a great
control language, but it was usable enough in the environment
that customers were used to.
The biggest fault of JCL is that it
is trying to solve the wrong problem.
It isn't clear that trying
to do something more ambitious would have fared any better in the
early 1960s (see also The Second System Effect in MMM).
No comment about JCL still being used today.
Tim Rentsch wrote:
snip
Personally I think his assessment of JCL is harsher than it
deserves. Don't get me wrong, JCL is not my idea of a great
control language, but it was usable enough in the environment
that customers were used to.
From 1972-1979 I worked at a site that had both S/360s (mostly /65s)
running OS/MVT, and Univac 1108s running Exec 8. I used both, though
did mostly 1108 stuff.
For several reasons, JCL was terrible. One was its seemingly needless
obscurity. For example, IIRC the program name of the COBOL compiler
was ICKFBL00. In contrast, the COBOL compiler under Exec 8 was called
COB. It also lacked intelligent defaults, which made it more
cumbersome to use. But this was mostly hidden due to a much bigger
problem.
The worst parts of JCL were "DD" cards, manual track allocation, and the lack of any form of real filesystem. PDS do not count.
What problem was it trying to solve and what was the "right" problem?
The worst parts of JCL were "DD" cards, manual track allocation, and the lack of any form of real filesystem. PDS do not count.
According to Scott Lurndal <slp53@pacbell.net>:
The worst parts of JCL were "DD" cards, manual track allocation,
and the lack of any form of real filesystem. PDS do not count.
OS had named files (which they called datasets) and a catalog that
said which disk or tape it was on, so you could refer to SYS1.FOOLIB
and it would find it for you.
I agree the explicit track and cylinder allocation was painful. IBM outsmarted themselves with CKD disks that put the record boundaries in hardware
and did the key search for indexed files in the disk
controller.
That was fine on a 360/30 which was too slow to do
anything else (the CPU and channel shared the microengine and the CPU basically stopped during an index search) but very inefficient on
larger machines.
Later on VSAM handled disks with fixed block sizes and used B-trees to
do index searches but by then the cruft was all there.
John Levine wrote:
According to Scott Lurndal <slp53@pacbell.net>:
The worst parts of JCL were "DD" cards, manual track allocation,
and the lack of any form of real filesystem. PDS do not count.
OS had named files (which they called datasets) and a catalog that
said which disk or tape it was on, so you could refer to SYS1.FOOLIB
and it would find it for you.
Agreed. But by "real file system", Scott meant a hierarchical system
such as we are used to today with Windows and Unix,
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
John Levine wrote:
According to Scott Lurndal <slp53@pacbell.net>:
The worst parts of JCL were "DD" cards, manual track allocation,
and the lack of any form of real filesystem. PDS do not count.
OS had named files (which they called datasets) and a catalog that
said which disk or tape it was on, so you could refer to SYS1.FOOLIB
and it would find it for you.
Agreed. But by "real file system", Scott meant a hierarchical system
such as we are used to today with Windows and Unix,
I don't consider a partitioned dataset to be a 'filesystem'.
Burroughs MCP had a traditional filesystem (albeit not hierarchical)
where the filesystem automatically handled area allocation for
files[*], which were cataloged in a directory; there was a global disk
directory supporting all 100-byte media units (which provided a pool
of disk units from which file areas were allocated; the pool could be
partitioned by operations personnel into 'subsystems' from which
applications or the operator could specify that a given file's areas
should be allocated).
Disk pack families (one or more packs in a family) had a directory
for the family, and the filesystem allocated space from any unit
in the family for files created on that family. The pack family
name was the root of the filename (e.g. MASTER/PAYROL, where MASTER
is the family name and PAYROL is a file on that family).
According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
John Levine <johnl@taugh.com> writes:
According to MitchAlsup1 <mitchalsup@aol.com>:
For a modern ISA, the architect should specify that various bits
of the general format "must be zero"* when those bits are not used
in the instruction.
That was another innovation of the 360. It specifically said that
unused bits (of which there were a few) and unused instructions (of
which there were a lot) were reserved. The unused bits had to be
zero, and the instructions all trapped.
I would describe this not so much as an innovation but just as
applying a lesson learned from earlier experience.
Well, yes, but another 360 innovation was the whole idea of computer architecture, as well as the term. It was the first time that the programmer's view of the computer was described independently of any implementation.
Some earlier IBM
model (don't remember which one) had the property that instructions
were somewhat like microcode, and some undocumented combinations of
bits would do useful things.
I wonder if that was the way that the 704 OR'ed the index registers.
There were three of them, numbered 1, 2, and 4, so if your index field
was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
OR'ed combination of indexes) from the base address, so it would have
taken some really tricky programming to make use of that. But someone
must have since they documented it and it continued to work on the
709, 7090, and 7094 until they provided 7 index registers and a mode
bit to switch between the old OR and the new 7 registers.
I have never found anything that says whether it was deliberate or an accident of the 704's implementation, and I have looked pretty hard.
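For concreteness, a small C sketch of the effective-address rule just
described (one tag bit per index register, selected registers ORed,
the result subtracted from the address field); the code is only an
illustration, not 704 logic:

#include <stdint.h>
#include <stdio.h>

#define ADDR_MASK 077777u            /* 15-bit address field */

static uint32_t xr[3];               /* index registers 1, 2 and 4 */

static uint32_t effective_address(uint32_t addr_field, unsigned tag)
{
    uint32_t modifier = 0;
    for (int i = 0; i < 3; i++)
        if (tag & (1u << i))         /* tag bit selects register 2^i */
            modifier |= xr[i];       /* selected registers are ORed */
    return (addr_field - modifier) & ADDR_MASK;   /* then subtracted */
}

int main(void)
{
    xr[0] = 010;                                  /* register 1 */
    xr[2] = 003;                                  /* register 4 */
    /* tag = 5 selects registers 1 and 4: modifier = 010 | 003 = 013 */
    printf("%o\n", (unsigned)effective_address(0100, 5));   /* 065 */
    return 0;
}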
John Levine wrote:
Some earlier IBM
model (don't remember which one) had the property that instructions
were somewhat like microcode, and some undocumented combinations of
bits would do useful things.
I wonder if that was the way that the 704 OR'ed the index registers.
Having three 1-bit register select fields saves a decoder for the
index register specifier.
This feature looks like it was just a consequence of using
a wired-OR bus and skipping the decoder on the index field.
For completeness they documented what happens if one enables multiple
index registers at once - that it ORs them (as opposed to burning out).
I wonder if that was the way that the 704 OR'ed the index registers.
There were three of them, numbered 1, 2, and 4, so if your index field
was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
OR'ed combination of indexes) from the base address, so it would have
taken some really tricky programming to make use of that. But someone
must have since they documented it and it continued to work on the
709, 7090, and 7094 until they provided 7 index registers and a mode
bit to switch between the old OR and the new 7 registers.
I have never found anything that says whether it was deliberate or an accident of the 704's implementation, and I have looked pretty hard.
John Levine <johnl@taugh.com> schrieb:
I wonder if that was the way that the 704 OR'ed the index registers.
There were three of them, numbered 1, 2, and 4, so if your index field
was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
OR'ed combination of indexes) from the base address, so it would have
taken some really tricky programming to make use of that. But someone
must have since they documented it and it continued to work on the
709, 7090, and 7094 until they provided 7 index registers and a mode
bit to switch between the old OR and the new 7 registers.
I have never found anything that says whether it was deliberate or an
accident of the 704's implementation, and I have looked pretty hard.
We've had that discussion before :-)
Looking at the "manual of operation" from 1955, the ORing is shown,
and it is not listed in the changes from the 1954 version.
So, documented from the release, at least.
The (incomplete) schematics at Bitsavers will probably show the
ORs, if anybody can dig through them and the relevant drawings
are not in the missing parts. I can read "AND" and "OR", but I have
no idea what "CF", "T" or "2PCF" stand for.
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
I wonder if that was the way that the 704 OR'ed the index registers.
There were three of them, numbered 1, 2, and 4, so if your index field
was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
OR'ed combination of indexes) from the base address, so it would have
taken some really tricky programming to make use of that. But someone
must have since they documented it and it continued to work on the
709, 7090, and 7094 until they provided 7 index registers and a mode
bit to switch between the old OR and the new 7 registers.
I have never found anything that says whether it was deliberate or an
accident of the 704's implementation, and I have looked pretty hard.
We've had that discussion before :-)
Looking at the "manual of operation" from 1955, the ORing is shown,
and it is not listed in the changes from the 1954 version.
So, documented from the release, at least.
The (incomplete) schematics at Bitsavers will probably show the
ORs, if anybody can dig through them and the relevant drawings
are not in the missing parts. I can read "AND" and "OR", but I have
no idea what "CF", "T" or "2PCF" stand for.
I found a 704 glossary that defines:
CF = Cathode Follower
PCF = Power Cathode Follower
THY = Thyratron
A quick search finds that a cathode follower is a signal regenerator buffer
tube circuit for driving high fan-out loads.
Unfortunately I couldn't find any 704 documents which detail
its tube logic circuit designs.
BUT... in searching for "IBM 704 tube" I came across this, which
shows a picture of a 704 logic circuit
https://computermuseum.uwaterloo.ca/index.php/Detail/objects/13
and says the 704 tube logic circuits were designed by someone named
A. Halsey Dickinson, AND it seems he also designed the 604 tube
circuits, which were circa 1948, and those are documented.
This document is dated 1958 so contemporaneous with the 704
and details the 604 tube logic circuits:
http://www.bitsavers.org/pdf/ibm/604/227-7609-0_604_CE_man_1958.pdf
It is possible the 704's "T" gate stands for what 604 called TR or
Trigger units, which appears to be what we today call an SR Latch.
https://computermuseum.uwaterloo.ca/index.php/Detail/objects/13
and says the 704 tube logic circuits were designed by someone named
A. Halsey Dickinson, AND it seems he also designed the 604 tube circuits, which were circa 1948, and those are documented.
This document is dated 1958 so contemporaneous with the 704
and details the 604 tube logic circuits:
http://www.bitsavers.org/pdf/ibm/604/227-7609-0_604_CE_man_1958.pdf
It is possible the 704's "T" gate stands for what 604 called TR or
Trigger units, which appears to be what we today call an SR Latch.
They had inverter, two-input NAND, two-input NOR, Pentagrid as a
two-input OR, and a cheap Diode Switch (DS) as a two-input AND as
logic gates. The 704 seems to have used mostly AND and OR gates,
so the decision to AND the index register with the bit from the
instruction and then OR them together actually seems straightforward;
this also gives you zero if none of them is selected.
Having the possibility of more than one index register seems to
have been a consequence of a design which allowed for zero or the
content of one register as the main purpose. Even if no documents
survive to prove this, I'm fairly confident that this is why
they did it.
Programmers being programmers, they probably started using the
feature for some multi-dimensional arrays with sizes of powers
of two, and IBM was then stuck with the feature.
Tim Rentsch wrote:
snip
Personally I think his assessment of JCL is harsher than it
deserves. Don't get me wrong, JCL is not my idea of a great
control language, but it was usable enough in the environment
that customers were used to.
From 1972-1979 I worked at a site that had both S/360s (mostly /65s)
running OS/MVT, and Univac 1108s running Exec 8. I used both,
though did mostly 1108 stuff.
For several reasons, JCL was terrible. One was its seemingly
needless obscurity. For example, IIRC the program name of the COBOL
compiler was ICKFBL00. In contrast, the COBOL compiler under Exec 8
was called COB. It also lacked intelligent defaults, which made
it more cumbersome to use. But this was mostly hidden due to a much
bigger problem.
Perhaps due to the architecture's inability to swap a program out and
reload it to any real address other than the one it had originally,
all resources to be used had to be available at the beginning of the
job, so all JCL was scanned at the beginning of the job, and no
"dynamic" allocations were possible.
So, for example, the COBOL compiler needed, besides the input file
name, IIRC four scratch files, an output file and a place to put the
(spooled) print listing. These had to be explicitly described (JCL DD
commands) in the JCL for the job. Similarly for other programs.
This was so inconvenient that IBM provided "Procedures" (essentially
JCL macros) that included all the necessary DD statements, hid the
actual program names, etc. Thus to compile, link, and execute a
COBOL program you invoked the procedure called something like
ICOBUCLG (I have forgotten exactly, but the last three characters
were for Compile, Link, and GO). Contrast that with the EXEC 8
command
@COB programname
(The @ was Exec's equivalent to // to indicate a command.) The
internal scratch files were allocated internally by the compiler,
the default print output (which could be overridden) went to the
printer, the default output name (again overridable) was the same
as the input (object files and source files could have the same
name).
Similarly, to copy a file from one place to another, JCL required
at least two DD cards and an exec card with the program IEBGENER.
Under Exec 8, the command
@Copy sourcefile, destinationfile
was sufficient, as both files would be dynamically assigned (Exec
term) internally by the copy program, and the indicator of success
or failure went to the default print output.
As you stated, programmers dealt with this, and it worked
in batch mode. But it clearly wouldn't work once time sharing
(called Demand in Exec terminology) became available. Thus IBM
had to invent a whole new, incompatible set of commands for TSO.
But the Exec 8 syntax was so straightforward that users used
exactly the same commands, keyed in at the terminal as were put on
cards or a file in batch mode. That difference persists to this
day.
The biggest fault of JCL is that it
is trying to solve the wrong problem.
What problem was it trying to solve and what was the "right"
problem?
It isn't clear that trying
to do something more ambitious would have fared any better in the
early 1960s (see also The Second System Effect in MMM).
Exec 8 was roughly contemporaneous with OS/MVT and, I claim, was a
much better choice.
According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
John Levine <johnl@taugh.com> writes:
According to MitchAlsup1 <mitchalsup@aol.com>:
For a modern ISA, the architect should specify that various bits
of the general format "must be zero"* when those bits are not used
in the instruction.
That was another innovation of the 360. It specifically said that
unused bits (of which there were a few) and unused instructions (of
which there were a lot) were reserved. The unused bits had to be
zero, and the instructions all trapped.
I would describe this not so much as an innovation but just as
applying a lesson learned from earlier experience.
Well, yes, but another 360 innovation was the whole idea of computer architecture, as well as the term. It was the first time that the programmer's view of the computer was described independently of any implementation.
Some earlier IBM
model (don't remember which one) had the property that instructions
were somewhat like microcode, and some undocumented combinations of
bits would do useful things.
I wonder if that was the way that the 704 OR'ed the index registers.
There were three of them, numbered 1, 2, and 4, so if your index field
was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
OR'ed combination of indexes) from the base address, so it would have
taken some really tricky programming to make use of that. But someone
must have since they documented it and it continued to work on the
709, 7090, and 7094 until they provided 7 index registers and a mode
bit to switch between the old OR and the new 7 registers.
I have never found anything that says whether it was deliberate or an accident of the 704's implementation, and I have looked pretty hard.
John Levine <johnl@taugh.com> writes:
Well, yes, but another 360 innovation was the whole idea of computer
architecture, as well as the term. It was the first time that the
programmer's view of the computer was described independently of any
implementation.
I don't buy it. An architecture is just a description of system
behavior, and surely there were descriptions of system behavior
before System/360.
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The biggest fault of JCL is that it
is trying to solve the wrong problem.
What problem was it trying to solve and what was the "right"
problem?
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
The problem that was in need of addressing is interactive use. I
think there are two reasons why JCL was so poor at that. One is
that they knew that teleprocessing would be important, but they
tried to cram it into the batch processing model, rather than
understanding a more interactive work style. The second reason is
that the culture at IBM, at least at that time, never understood the
idea that using computers can be (and should be) easy and fun. The
B in IBM is Business, and Business isn't supposed to be fun. And I
think that's part of why JCL was not viewed (at IBM) as a failure,
because their Business customers didn't mind. Needless to say, I am speculating, but for what it's worth those are my speculations.
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
but correct me if I am wrong, there was no
interactive model in the mid 1960s when JCL was devised. They
didn't address it because they couldn't forecast (obviously
incorrectly) that it would be a problem to solve.
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
I think the point he made is subtly different.
The UNIX shells have demonstrated that a command interface is,
and should be, a programming language in its own right.
JCL has
the rudiments of a programming language with its COND parameter
(which ties my brain into knots every time I think about it) and
the possibility of iteration via submitting new jobs via INTRDR,
plus its macro facility (but with global variables only).
Viewed through that lens, I can't think of any (serious) programming
language that is worse than JCL. Joke languages need not apply.
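For anyone who has not fought with COND, a small C sketch of its
inverted logic as usually described: the constant is compared against
earlier return codes, and a test coming out TRUE means the step is
*skipped*; this is only an illustration of the reading, not IBM code:

#include <stdbool.h>
#include <stdio.h>

typedef enum { LT, LE, EQ, NE, GE, GT } rel;

static bool cond_test(int constant, rel r, int rc)
{
    switch (r) {
    case LT: return constant <  rc;
    case LE: return constant <= rc;
    case EQ: return constant == rc;
    case NE: return constant != rc;
    case GE: return constant >= rc;
    case GT: return constant >  rc;
    }
    return false;
}

/* The step is bypassed if the test is true against ANY earlier return code. */
static bool bypass_step(int constant, rel r, const int *prior_rcs, int n)
{
    for (int i = 0; i < n; i++)
        if (cond_test(constant, r, prior_rcs[i]))
            return true;
    return false;
}

int main(void)
{
    int rcs[] = { 0, 8 };            /* one earlier step ended with RC=8 */
    /* COND=(4,LT): "4 < 8" is true, so this step does NOT run. */
    printf("%s\n", bypass_step(4, LT, rcs, 2) ? "skipped" : "run");
    return 0;
}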
On Tue, 7 May 2024 06:54:17 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
I think the point he made is subtly different.
The UNIX shells have demonstrated that a command interface is,
and should be, a programming language in its own right.
I wouldn't give that credit to UNIX.
I was not around, but my impression is that by the time of the creation
of UNIX it was a common understanding. For example, DEC supplied RSX-11
with DCL at about the same time (as UNIX got the Thompson shell) and I never
heard that anybody considered it novel.
I don't buy it. An architecture is just a description of system
behavior, and surely there were descriptions of system behavior
before System/360. Even in the 1950s companies must have changed implementations of a given model while still conforming to its
earlier description.
John Levine <johnl@taugh.com> writes:
Well, yes, but another 360 innovation was the whole idea of computer
architecture, as well as the term. It was the first time that the
programmer's view of the computer was described independently of any
implementation.
I don't buy it. An architecture is just a description of system
behavior, and surely there were descriptions of system behavior
before System/360. Even in the 1950s companies must have changed
implementations of a given model while still conforming to its
earlier description.
As for the word architecture, it seems like
an obvious and natural word choice, given the hundreds (or more)
of years of experience with blueprints and buildings.
On Mon, 06 May 2024 18:22:59 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
I don't buy it. An architecture is just a description of system
behavior, and surely there were descriptions of system behavior
before System/360. Even in the 1950s companies must have changed
implementations of a given model while still conforming to its
earlier description.
Were they?
My impression is that until S/360 there was no such thing as different
by 100% SW compatible models.
Michael S <already5chosen@yahoo.com> schrieb:
On Tue, 7 May 2024 06:54:17 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
I think the point he made is subtly different.
The UNIX shells have demonstrated that a command interface is,
and should be, a programming language in its own right.
I wouldn't give that credit to UNIX.
I think I should have qualified that statement somewhat. What I
think the full set of features of the Bourne C shells finally made
Thomas Koenig <tkoenig@netcologne.de> writes:
Michael S <already5chosen@yahoo.com> schrieb:
On Tue, 7 May 2024 06:54:17 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
I think the point he made is subtly different.
The UNIX shells have demonstrated that a command interface is,
and should be, a programming language in its own right.
I wouldn't give that credit to UNIX.
I think I should have qualified that statement somewhat. What I
think the full set of features of the Bourne C shells finally made
The Bourne shell and the C shell were two completely different
shells (the latter followed the former by several years).
Tim Rentsch wrote:
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no
interactive model in the mid 1960s when JCL was devised.
Thomas Koenig <tkoenig@netcologne.de> writes:
Scott Lurndal <scott@slp53.sl.home> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Michael S <already5chosen@yahoo.com> schrieb:
On Tue, 7 May 2024 06:54:17 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
I think the point he made is subtly different.
The UNIX shells have demonstrated that a command interface is,
and should be, a programming language in its own right.
I wouldn't give that credit to UNIX.
I think I should have qualified that statement somewhat. What I
think the full set of features of the Bourne C shells finally made
The Bourne shell and the C shell were two completely different
shells (the latter followed the former by several years).
Having worked with both, I certainly know the differences.
But if Wikipedia is to be trusted, Bill Joy released the C shell in
1978, and the Bourne shell was released in 1979.
The V6 shell was released in 1975.
Scott Lurndal <scott@slp53.sl.home> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Michael S <already5chosen@yahoo.com> schrieb:
On Tue, 7 May 2024 06:54:17 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
Like I said, I'm not a fan of JCL, not at all, I just
think it wasn't as bad as the commentary in The Design of Design
makes it out to be.
I think the point he made is subtly different.
The UNIX shells have demonstrated that a command interface is,
and should be, a programming language in its own right.
I wouldn't give that credit to UNIX.
I think I should have qualified that statement somewhat. What I
think the full set of features of the Bourne C shells finally made
The Bourne shell and the C shell were two completely different
shells (the latter followed the former by several years).
Having worked with both, I certainly know the differences.
But if Wikipedia is to be trusted, Bill Joy released the C shell in
1978, and the Bourne shell was released in 1979.
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no interactive model in the mid 1960s when JCL was devised.
BASIC and DTSS was developed in 1963.
Scott Lurndal wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no
interactive model in the mid 1960s when JCL was devised.
BASIC and DTSS was developed in 1963.
Good point. So IBM was "guilty" of vastly misunderstanding and underestimating the future importance of interactive users.
Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
Scott Lurndal wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no
interactive model in the mid 1960s when JCL was devised.
BASIC and DTSS was developed in 1963.
Good point. So IBM was "guilty" of vastly misunderstanding and
underestimating the future importance of interactive users.
Only the team that made JCL, it seems.
Brooks claims that System/360 was premeditated for terminal
use from the start, and that somebody didn't get the memo
when designing JCL (my words).
My impression is that until S/360 there was no such thing as different
by 100% SW compatible models.
The Burroughs B5500 and B3500 were contemporaneous with the S/360
and provided 100% SW compatible models across a performance range
during the same 1965 to 1978 time period as the S/360.
In what sense was the S/360 architecture designed for terminal use? I
already talked about the base register, BALR/Using stuff that prevented
an interactive program from being swapped out and swapped in to a
different real memory location. This was a significant hindrance to
"terminal use".
BTW, another problem occurs in transaction workloads where there is
another level of software between the user and the OS, but instead of
TSO, it was IMS or CICS, ...
I was not around, but my impression is that by the time of the creation
of UNIX it was a common understanding. For example, DEC supplied RSX-11
with DCL at about the same time (as UNIX got the Thompson shell) and I never
heard that anybody considered it novel.
The Thompson shell was still restricted to GOTO (as was the RSX-11
shell).
According to Scott Lurndal <slp53@pacbell.net>:
My impression is that until S/360 there was no such thing as different
by 100% SW compatible models.
The Burroughs B5500 and B3500 were contemporaneous with the S/360
and provided 100% SW compatible models across a performance range
during the same 1965 to 1978 time period as the S/360.
Wikipedia says that while S/360 and the B5500 were announced in 1964,
the B3500 was announced in 1966. In the discussion of MCP on the B3500
it says "It shared many architectural features with the MCP of
Burroughs' Large Systems stack machines, but was entirely different
internally, and was coded in assembly language, not an ALGOL
derivative." That suggests it was compatible for user programs, but
not for operating systems.
On the 360, if two models had similar memory and peripherals, you
could IPL and run the same operating system since it was specified
down to the details of interrupts and I/O instructions.
According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
This paper from U of Michigan lays out the problem and proposes a
paging design which soon became the 360/67:
https://dl.acm.org/doi/pdf/10.1145/321312.321313
TSS was a disaster due to an extreme case of second system syndrome,
but Michigan's MTS and IBM skunkworks CP/67 worked great.
Wikipedia says that while S/360 and the B5500 were announced in 1964,
the B3500 was announced in 1966. In the discussion of MCP on the B3500
Sorry, I meant to imply that the B3500 (and successors) were 100% SW
compatible within the medium systems family.
Likewise for the large systems (B5500) line. ...
TSS was a disaster due to an extreme case of second system syndrome,
but Michigan's MTS and IBM skunkworks CP/67 worked great.
TSS at CMU was extensively rewritten in assembly and became quite
tolerable--hosting 30+ interactive jobs along with a background
batch processing system. When I arrived in Sept 1975 it was quite
unstable with up times less than 1 hour. 2 years later it would run
for weeks at a time without going down.
According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
In what sense was the S/360 architecture designed for terminal use? I
already talked about the base register, BALR/USING stuff that prevented
an interactive program from being swapped out and swapped in to a
different real memory location. This was a significant hindrance to
"terminal use".
With sufficiently disciplined programming, you could swap and move data
by updating the base registers. APL\360 did this quite successfully
and handled a lot of interactive users on a 360/50.
Reading between the lines in the IBMSJ architecture paper, I get the
impression they believed that moving code and data with base registers
would be a lot easier than it was, and missed the fact that a lot of
pointers are stored in memory, and it is hard to know which registers
are being used as base registers when.
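A minimal C sketch of the scheme being described, with everything about
it assumed for illustration (the toy word-addressed storage array, the
choice of R12 as the single base register, the workspace size): copying
the workspace and bumping the base register fixes every base+displacement
reference, but an absolute address the program has stashed in memory
keeps pointing at the old location, which is the stored-pointer problem.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MEMORY_WORDS    (1u << 16)   /* toy word-addressed "real storage"   */
#define WORKSPACE_WORDS 1024u
enum { BASE_REG = 12 };              /* assumed convention: R12 is the base */

static uint32_t storage[MEMORY_WORDS];

typedef struct {
    uint32_t regs[16];               /* the 16 general registers */
} user_context;

/* Swap a workspace back in at a different "real" address and fix up the
 * base register.  Every base+displacement reference now resolves to the
 * new location, but an absolute address the program stored inside the
 * workspace still names the old one -- the stored-pointer problem. */
static void swap_in_at(user_context *ctx, uint32_t old_addr, uint32_t new_addr)
{
    memmove(&storage[new_addr], &storage[old_addr],
            WORKSPACE_WORDS * sizeof storage[0]);
    ctx->regs[BASE_REG] += new_addr - old_addr;
}

int main(void)
{
    user_context ctx = { .regs = { 0 } };
    ctx.regs[BASE_REG] = 0x1000;             /* workspace lives at 0x1000   */
    storage[0x1000 + 5] = 0x1000 + 7;        /* an absolute pointer in data */
    swap_in_at(&ctx, 0x1000, 0x3000);
    printf("base register %#x, stored pointer still %#x\n",
           (unsigned)ctx.regs[BASE_REG], (unsigned)storage[0x3000 + 5]);
    return 0;
}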
This paper from U of Michigan lays out the problem and proposes a
paging design which soon became the 360/67:
https://dl.acm.org/doi/pdf/10.1145/321312.321313
TSS was a disaster due to an extreme case of second system syndrome,
but Michigan's MTS and IBM skunkworks CP/67 worked great.
BTW, another problem occurs in transaction workloads where there is
another level of software between the user and the OS, but instead of
TSO, it was IMS or CICS, ...
There are two ways to write interactive software, which I call the
time-sharing approach and the SAGE approach. In the time-sharing
approach, the operating system stops and starts user processes and
transparently saves and restores the process status. In the SAGE
approach, programs are broken up into little pieces, each of which runs
straight through, explicitly saves whatever context it needs to, and
then returns to the OS.
The bad news about the SAGE approach is that the programming is
tedious and as you note bugs can be catastrophic. The good news is
that it can get fantastic performance for lots of users.
It was
invented for the SAGE missile defense system on tube computers in the
1950s, adapted for the SABRE airline reservation system on 7094s in
the 1960s and has been used over and over, with the current trendy
version being node.js. We now have better ways to describe
continuations which make the programming a little easier, but it's
still a tradeoff. IMS and CICS used the SAGE approach to provide good performance on specific applications.
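A hedged C sketch of the SAGE/continuation style, with made-up names
(txn, step_read_record, step_commit) rather than anything from SAGE,
SABRE, IMS or CICS: each step runs straight through, records which step
should run next, and returns to a central dispatcher, so no per-user
stack has to be saved.

#include <stdio.h>

typedef struct txn txn;
typedef void (*step_fn)(txn *);

struct txn {
    step_fn next;      /* explicit "continuation": which step runs next */
    int     account;   /* whatever state must survive between steps     */
    int     amount;
};

static void step_commit(txn *t)
{
    printf("commit %d to account %d\n", t->amount, t->account);
    t->next = NULL;                      /* transaction finished */
}

static void step_read_record(txn *t)
{
    t->amount = 100;                     /* pretend the record just arrived */
    t->next = step_commit;               /* save context, return to the OS  */
}

int main(void)
{
    txn t = { step_read_record, 42, 0 };
    while (t.next)                       /* the dispatcher / OS main loop */
        t.next(&t);
    return 0;
}

The time-sharing approach would instead let the handler block in the
middle and have the OS save the whole register and stack state
transparently, which is easier to program but costs more per user.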
According to Thomas Koenig <tkoenig@netcologne.de>:
I was not around, but my impression is that by the time of the creation
of UNIX it was a common understanding. For example, DEC supplied RSX-11
with DCL at about the same time (as UNIX got the Thompson shell) and I
never heard that anybody considered it novel.
The Thompson shell was still restricted to GOTO (as was the RSX-11
shell).
You're probably thinking of the Mashey shell.
One of the first usenix
tapes has patches I wrote in about 1976 to add simple variables with
single character names to that shell. It was an improvement, but the
Bourne shell was way better.
Re when this stuff was invented, I did some work on CP/67 when I was
in high school in about 1970 and I recall that even then people
routinely ran files of CMS commands. Don't remember whether there were variables and control flow or that came later with REXX.
Thomas Koenig wrote:
Only the team that made JCL, it seems.
Just as an aside, though this thread may be somewhat OT, I consider it
fun and interesting.
I am not sure exactly what he is saying here. By JCL, does he mean
just the syntax of the language,
On Mon, 06 May 2024 18:22:59 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
I don't buy it. An architecture is just a description of system
behavior, and surely there were descriptions of system behavior
before System/360. Even in the 1950s companies must have changed
implementations of a given model while still conforming to its
earlier description.
Were they?
My impression is that until S/360 there was no such thing as different
but 100% SW compatible models.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
John Levine <johnl@taugh.com> writes:
Well, yes, but another 360 innovation was the whole idea of computer
architecture, as well as the term. It was the first time that the
programmer's view of the computer was described independently of any
implementation.
I don't buy it. An architecture is just a description of system
behavior, and surely there were descriptions of system behavior
before System/360. Even in the 1950s companies must have changed
implementations of a given model while still conforming to its
earlier description.
Sure, the 7094 was a compatible successor of the 704,
but the idea of
implementation independence turns out to be much more profound than
most people (probably including its inventors at the time) realized.
Michael S <already5chosen@yahoo.com> schrieb:
My impression is that until S/360 there was no such thing as
different but 100% SW compatible models.
I think the important thing was that S/360 was designed and built,
right from the start, as a _series_ of compatible computers, which
were upward- and downward-compatible. They had the challenge
of designing an architecture where the instructions for the
high-end supercomputers still needed to work (although slowly)
on the low-end bread and butter machines, and what was efficient
on the low-end bread and butter machines should not constrain the
high-end supercomputers.
Most other computer series were built one at a time, with successors
usually extending the previous ones (which IBM also did with the /370
series). The VAX may have been another such line - DEC did not
release several models all at once, but they did release the cheaper
and slower 11/750 after they had released the 11/780.
John Levine <johnl@taugh.com> schrieb:
According to Thomas Koenig <tkoenig@netcologne.de>:
I was not around, but my impression is that by the time of the creation
of UNIX it was a common understanding. For example, DEC supplied RSX-11
with DCL at about the same time (as UNIX got the Thompson shell) and I
never heard that anybody considered it novel.
The Thompson shell was still restricted to GOTO (as was the RSX-11
shell).
You're probably thinking of the Mashey shell.
Disclaimer: I never worked on those old systems, my first UNIX
experience was with HP-UX in the late 1980s (where I accidentally
landed in vi and could not get out, but that's another story).
One of the first usenix
tapes has patches I wrote in about 1976 to add simple variables with
single character names to that shell. It was an improvement, but the
Bourne shell was way better.
https://grosskurth.ca/bib/1976/mashey-command.pdf (written by Mashey)
credits the original shell to Thompson, so I believe we are talking
about the same shell, just with different names.
Re when this stuff was invented, I did some work on CP/67 when I was
in high school in about 1970 and I recall that even then people
routinely ran files of CMS commands. Don't remember whether there were
variables and control flow or that came later with REXX.
Hmmm... I looked at
https://bitsavers.org/pdf/ibm/370/VM/370/Release_1/GX20-1926-1_VM_370_Quick_Guide_For_Users__Rel_1_Apr73.pdf
and found a reference to $LOOP and a reference to "tokens" (which I
suppose are variables), so that definitely predated the UNIX shells.
According to Scott Lurndal <slp53@pacbell.net>:
Wikipedia says that while S/360 and the B5500 were announced in 1964,
the B3500 was announced in 1966. In the discussion of MCP on the B3500
Sorry, I meant to imply that the B3500 (and successors) were 100% SW
compatible within the medium systems family.
Likewise for the large systems (B5500) line. ...
Oh, OK. In view of the timing, I'd guess that the people at Burroughs,
who were certainly not dumb, looked at the S/360 material and figured
oh, that's a good idea, we can do that too.
Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
Thomas Koenig wrote:
Only the team that made JCL, it seems.
Just as an aside, though this thread may be somewhat OT, I consider
it fun and interesting.
I am not sure exactly what he is saying here. By JCL, does he mean
just the syntax of the language,
His main criticism is that the design team failed to notice that
JCL was, in fact, a programming language, that the design team
thought of it as "just a few cards for job control". This led to
attributes such as DISP doing what he called "verbish things",
i.e. commands, dependence on card formats, a syntax similar to,
but incompatible with, the S/360 assembler, insufficient control
structures etc.
He did not criticize the OS itself too much, with its complicated
allocation strategies etc., mostly some remarks on the file structure
which he says could have been simplified.
According to MitchAlsup1 <mitchalsup@aol.com>:
TSS was a disaster due to an extreme case of second system syndrome,
but Michigan's MTS and IBM skunkworks CP/67 worked great.
TSS at CMU was extensively rewritten in assembly and became quite
tolerable--hosting 30+ interactive jobs along with a background
batch processing system. When I arrived in Sept 1975 it was quite
unstable with up times less than 1 hour. 2 years later it would run
for weeks at a time without going down.
For reasons I do not want to try to guess, AT&T did the software
development for the 5ESS phone switches in a Unix system that sat on
top of TSS. After IBM cancelled TSS, AT&T continued to use it as some
sort of special order thing. At IBM there were only a handful of
programmers working on it, by that time all quite experienced, and I
hear that they also got rid of a lot of cruft and made it much faster
and more reliable.
At the same time, IBM turned the skunkworks CP/67 into VM/370 with a
much larger staff, leading to predictable consequences.
Of course, there is a theory and there is a practice.
In practice, downward compatibility lasted ~half a year, until Model 20.
Upward compatibility did not fare much better and was broken
approximately one year after initial release, in Model 67.
That is, if I didn't get upward and downward backward.
According to my understanding, since ~1970, IBM completely gave up on
all sorts of compatibility except backward compatibility.
I'd think that by 1977 (VAX) backward compatibility was widespread in
the industry.
For some reason AT&T longlines got an early version of my production
VM370 CSC/VM (before the multiprocessor support) ... and over the years
moved it to latest IBM 370s and propagated around to other
locations. Then comes the early 80s when next new IBM was 3081 ... which
was originally a multiprocessor only machine. The IBM corporate
marketing rep for AT&T tracks me down to ask for help with retrofitting
multiprocessor support to old CSC/VM ... concern was that all those AT&T
machines would migrate to the latest Amdahl single processor (which had
about the same processing as aggregate of the 3081 two processor).
With sufficiently disciplined programming, you could swap and move data
by updating the base registers. APL\360 did this quite successfully
and handled a lot of interactive users on a 360/50.
Wasn't APL\360 an interpreter? If so, then moving instructions and data
around was considerably simpler.
Reading between the lines in the IBMSJ architecture paper, I get the
impression they believed that moving code and data with base registers
would be a lot easier than it was, and missed the fact that a lot of
pointers are stored in memory, and it is hard to know which registers
are being used as base registers when.
Interesting. That would seem to imply that it wasn't that they didn't
think about the problems that base addressing would cause, they just
(vastly) underestimated the cost of fixing it. A different "design"
problem indeed.
Lynn Wheeler wrote:
For some reason AT&T longlines got an early version of my production
VM370 CSC/VM (before the multiprocessor support) ... and over the years
moved it to latest IBM 370s and propagated around to other
locations. Then comes the early 80s when next new IBM was 3081 ... which
was originally a multiprocessor only machine. The IBM corporate
marketing rep for AT&T tracks me down to ask for help with retrofitting
multiprocessor support to old CSC/VM ... concern was that all those AT&T
machines would migrate to the latest Amdahl single processor (which had
about the same processing as aggregate of the 3081 two processor).
Regarding retrofitting multiprocessor support to old CSC/VM,
by which I take it you mean adding SMP support to a uni-processor OS,
do you remember what changes that entailed? Presumably a lot more than
acquiring one big spinlock every time the OS was entered.
That seems like a lot of work for one person.
My impression is that until S/360 there was no such thing as different
but 100% SW compatible models.
I think a counterexample is the LGP-30 (1956) and its successor
the LGP-21 (1963).
Another example may be the IBM 709 and IBM 7090, both done in the 1950s.
According to Stephen Fuld <sfuld@alumni.cmu.edu.invalid>:
With sufficiently disciplined programming, you could swap and move data
by updating the base registers. APL\360 did this quite successfully
and handled a lot of interactive users on a 360/50.
Wasn't APL\360 an interpreter? If so, then moving instructions and
data around was considerably simpler.
That's right. It could switch between users at well defined points
that made it practical to update the base registers pointing to the
user's workspace.
Reading between the lines in the IBMSJ architecture paper, I get the
impression they believed that moving code and data with base registers
would be a lot easier than it was, and missed the fact that a lot of
pointers are stored in memory, and it is hard to know which registers
are being used as base registers when.
Interesting. That would seem to imply that it wasn't that they
didn't think about the problems that base addressing would cause,
they just (vastly) underestimated the cost of fixing it. A
different "design" problem indeed.
In Design of Design, Brooks said they knew about virtual memory but
thought it was too expensive, which he also says was a mistake, soon
fixed in S/370.
According to Michael S <already5chosen@yahoo.com>:
Of course, there is a theory and there is a practice.
In practice, downward compatibility lasted ~half a year, until Model
20. Upward compatibility did not fare much better and was broken
approximately one year after initial release, in Model 67.
That is, if I didn't get upward and downward backward.
The 360/22, /25, /30, /40, /50, /65, /75, and /85 were all compatible
implementations of S/360. You could write a program that ran on any of
them, and it would also run on larger and smaller models.
The /20, /44, and /67 were each for special markets. The /20 was
basically for people who still wanted a 1401 (admittedly a pretty big
market), the /44 for realtime, and the /67 for a handful of
time-sharing customers. The /67 was close enough to a /65 that you
could use it as one, often /67 timesharing during the day, and /65
batch overnight.
The /91 and /95 were also compatible except that the
/91 left out decimal arithmetic, which OS/360 would trap and slowly
emulate if need be.
According to my understanding, since ~1970, IBM completely gave up on
all sorts of compatibility except backward compatibility.
No. When they updated the architecture, they shipped multiple
implementations of each one. So when they went to S/370, there was the
370/115, /125, /135, /138, /145, /148, /158, and /168, which were
upward and downward compatible, as were the 303x and 434x series. The
/155 and /165 were originally missing the paging hardware but later
could be field upgraded.
The point here is that you could write a program for any model, and
you could expect it to work unmodified on both larger and smaller
models. Later on there was S/390 and zSeries, again each with models
that were both upward and downward compatible.
I'd think that by 1977 (VAX) backward compatibility was widespread in
the industry.
More like 1957. The IBM 705 was mostly backward compatible with the
702, and the 709 with the 704. But only in one direction -- if you
wanted your 709 program to work on a 704, you had to be careful not to
use any of the new 709 stuff, and since the I/O was completely
different, you needed suitable operating systems or at least I/O
libraries.
Can you, please, define the meaning of upward and downward
compatibility? I had never seen these terms before this thread, so it
is possible that I don't understand the meaning.
Michael S <already5chosen@yahoo.com> schrieb:
Can you, please, define the meaning of upward and downward
compatibility? I had never seen these terms before this thread, so
it is possible that I don't understand the meaning.
The term comes from Brooks. Specifically, he applied it to the
S/360 line of computers which had a very wide performance and
price range, and programs (including operating systems) were
binary compatible from the lowest to the highest performance and
price machine.
On Thu, 9 May 2024 08:19:39 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Michael S <already5chosen@yahoo.com> schrieb:
Can you, please, define the meaning of upward and downward
compatibility? I had never seen these terms before this thread, so
it is possible that I don't understand the meaning.
The term comes from Brooks. Specifically, he applied it to the
S/360 line of computers which had a very wide performance and
price range, and programs (including operating systems) were
binary compatible from the lowest to the highest performance and
price machine.
I suppose, it means that my old home PC (Core-i5 3550) is downward
compatible with my old work PC (Core-i7 3770). And my old work PC is
upward compatible with my old home PC.
But I still don't know if it would be correct to say that my old work
PC is downward compatible with my just a little newer small FPGA
development server (E3 1271 v3). My guess is that it would be
incorrect, but it's just a guess.
If Brooks were still alive, we could have tried to ask him. But since
he is not, and since I have no plans to read his books by myself, my
only chance of knowing is for you or for John Levine to find the
definition in his writings and then tell me.
Michael S <already5chosen@yahoo.com> writes:
On Thu, 9 May 2024 08:19:39 -0000 (UTC)
The term comes from Brooks. Specifically, he applied it to the
S/360 line of computers which had a very wide performance and
price range, and programs (including operating systems) were
binary compatible from the lowest to the highest performance and
price machine.
I suppose, it means that my old home PC (Core-i5 3550) is downward
compatible with my old work PC (Core-i7 3770). And my old work PC is
upward compatible with my old home PC.
Given that both use Ivy Bridge CPUs, there is no compatibility issue
as far as the CPU is concerned.
Michael S wrote:
On Thu, 9 May 2024 08:19:39 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Michael S <already5chosen@yahoo.com> schrieb:
Can you, please, define the meaning of upward and downward
compatibility? I had never seen these terms before this thread,
so it is possible that I don't understand the meaning.
The term comes from Brooks. Specifically, he applied it to the
S/360 line of computers which had a very wide performance and
price range, and programs (including operating systems) were
binary compatible from the lowest to the highest performance and
price machine.
I suppose, it means that my old home PC (Core-i5 3550) is downward
compatible with my old work PC (Core-i7 3770). And my old work PC is
upward compatible with my old home PC.
But I still don't know if it would be correct to say that my old
work PC is downward compatible with my just a little newer small
FPGA development server (E3 1271 v3). My guess is that it would
be incorrect, but it's just a guess.
If Brooks were still alive, we could have tried to ask him. But since
he is not, and since I have no plans to read his books by myself,
my only chance of knowing is for you or for John Levine to find the
definition in his writings and then tell me.
Perhaps this interpretation will help clear things up. Think of
compatibility as a two-dimensional graph. On the Y axis is some
measure of compute power. The X axis is time. So upward/downward
compatibility is among models announced at the same time and delivered
within a small time of each other. Backward compatibility is along
the X axis, that is, between models announced/delivered at different
points in time. So under this scheme, the S/360 model 30 was upward
compatible with the model /65 (different Y values, but the same X
value), but the S/370s (not counting the /155 and /165) were backward
compatible with the S/360 models (different X values).
The key innovation that IBM made with the S/360 was to announce
systems with a wide range of performance *at the same time*, i.e.
different Y values and the same X value.
So, when two models are pretty close on the time scale, but from the
software perspective one of them is a superset of the other, then
they are not upward/downward compatible?
In Design of Design, Brooks said they knew about virtual memory but
thought it was too expensive, which he also says was a mistake, soon
fixed in S/370.
While I agree that virtual memory was probably too expensive in the mid
1960s, I disagree that it was required, or even the optimal solution
back then. A better solution would have been to have a small number of
"base registers" that were not part of the user set, but could be
reloaded by the OS whenever a program needed to be swapped in to a
different address than it was swapped out to.
The /20, /44, and /67 were each for special markets. ...
But programs (or OSes) that utilize the features of /67 would not run
on anything else, right?
How about programs that depend on precise floating-point exceptions?
What about various vector facilities that they were adding and removing
seemingly at random during the 1970s and 1980s?
More like 1957. The IBM 705 was mostly backward compatible with the
702, and the 709 with the 704. But only in one direction
"One direction" is synonymous with "backward compatible", is it not?
But the word "mostly" is suspect.
Michael S <already5chosen@yahoo.com> writes:
[...]
Can you, please, define the meaning of upward and downward
compatibility?
The System/360 model 20 is described in TDOD as being "upward
compatible", which means that programs that run on a model 20
could be run on higher-numbered models, but usually not vice
versa.
Most models of System/360 had the property that code that runs on
model M would also run on model N > M and on model K < M, for other
models in the set. (The model 20, and arguably the model 30, were
exceptions, and probably some other models as well;
A little over a decade ago I was asked to track down the decision to
add virtual memory to all 370s and found the staff member to the
executive who made the decision. Basically OS/360 MVT storage
management was so bad that the execution regions had to be specified
four times larger than used; as a result, a 1mbyte 370/165 normally
would only run four regions concurrently, insufficient to keep the
system busy and justified. Mapping MVT to a 16mbyte virtual address
space (aka VS2/SVS) would allow increasing the number of concurrently
running regions by a factor of four (with little or no paging),
keeping 165 systems busy ... overlapping execution with disk I/O.
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The key innovation that IBM made with the S/360 was to announce
systems with a wide range of performance *at the same time*,
i.e. different Y values and the same X value.
I would argue that this property is only one of three factors
that made System/360 successful, and perhaps the least important
of the three. The other two factors are, one, addressing both
business computing and scientific computing rather than having
separate models for the two markets, and two, replacing and
discontinuing all of IBM's other lines of computers. I think
it's hard to overstate the importance of the last item.
According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
In Design of Design, Brooks said they knew about virtual memory but
thought it was too expensive, which he also says was a mistake, soon
fixed in S/370.
While I agree that virtual memory was probably too expensive in the
mid 1960s, I disagree that it was required, or even the optimal
solution back then. A better solution would have been to have a
small number of "base registers" that were not part of the user
set, but could be reloaded by the OS whenever a program needed to
be swapped in to a different address than it was swapped out to.
Well, Brooks was there and said not having virtual memory was a
mistake. Dunno how much that is related to Lynn's point that paging
let them avoid the consequences of terrible storage management in MVS.
When designing the address structure of S/360 they had a big problem
in that they knew they wanted large addresses, 24 bits to be extended
later to 31 or 32, but they didn't want to waste a full word on every
address in programs running on small models. Base register with 12 bit
offset solved that quite well, making the address part of an
instruction 16 bits while not segmenting the memory. Since there were
a lot of registers it was usually possible to set up a few base
registers at the start of a routine and not do a lot of reloading. (At
least if the compiler was smart enough; Fortran G had a bad habit of
loading an address from the constant pool every time it wanted to use
a variable or an array.)
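The arithmetic behind that 16-bit address part is easy to show in a few
lines of C; the function below is only an illustration of the published
S/360 rule (4-bit base register number, 12-bit displacement, register 0
meaning no base, addresses truncated to 24 bits), not code from any IBM
implementation.

#include <stdint.h>
#include <stdio.h>

/* Resolve a 16-bit base/displacement halfword: 4 bits of base register
 * number, 12 bits of displacement.  Base register 0 means "no base", and
 * effective addresses wrap to 24 bits. */
static uint32_t effective_address(const uint32_t regs[16], uint16_t field)
{
    unsigned base = (field >> 12) & 0xFu;
    uint32_t disp = field & 0xFFFu;
    uint32_t addr = disp + (base ? regs[base] : 0);
    return addr & 0x00FFFFFFu;
}

int main(void)
{
    uint32_t regs[16] = { 0 };
    regs[12] = 0x012000;                     /* a routine's base register */
    /* B=12, D=0x034 addresses 0x012034 without a full-word address field */
    printf("%06X\n", (unsigned)effective_address(regs, (12u << 12) | 0x034u));
    return 0;
}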
Brooks said it was ugly that some instructions (RX) had both base and
index registers while others (SS) only had base registers, which I
expect made it even harder to do what you suggested.
John Levine <johnl@taugh.com> schrieb:
Brooks said it was ugly that some instructions (RX) had both base and
index registers while others (SS) only had base registers, which I
expect made it even harder to do what you suggested.
Depending on base registers for both data and branches was one
of the ideas that did not age well, I think. We have since
seen in the RISC machines that having a stack implemented via
a register, with possibly a frame pointer, a global offset and
larger offsets (16 bits) works well, and we know how to generate
position-independent code.
This is, of course, with 20/20 hindsight.
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
Brooks said it was ugly that some instructions (RX) had both base and
index registers while others (SS) only had base registers, which I
expect made it even harder to do what you suggested.
Depending on base registers for both data and branches was one
of the ideas that did not age well, I think. We have since
seen in the RISC machines that having a stack implemented via
a register, with possibly a frame pointer, a global offset and
larger offsets (16 bits) works well, and we know how to generate
position-independent code.
Position independent data is still difficult, though.
Do we know who invented relative branches? The PDP-11 had them in 1969
but I don't think they were new then.
They feel like one of those
things that are obvious in retrospect, but not at the time. (Why do
you want to make branch addressing different? And run them all through
an adder? Do you think gates grow on trees?)
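The addition being begrudged is small; here is a C sketch of the PDP-11
branch-target calculation (the function and the octal example are mine,
while the sign-extend, double, and add-to-the-updated-PC rule is the
documented PDP-11 one).

#include <stdint.h>
#include <stdio.h>

/* PDP-11 relative branch: sign-extend the 8-bit offset field, double it
 * (offsets are in words, memory is byte-addressed), and add it to the PC,
 * which by then already points at the word after the branch. */
static uint16_t branch_target(uint16_t pc_after_branch, uint8_t offset_field)
{
    int16_t words = (int8_t)offset_field;      /* sign-extend */
    return (uint16_t)(pc_after_branch + 2 * words);
}

int main(void)
{
    /* a branch at 001000 (updated PC 001002) with offset -3 lands at 000774 */
    printf("%06o\n", (unsigned)branch_target(01002, (uint8_t)-3));
    return 0;
}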
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Scott Lurndal wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no
non-interactive model in the mid 1960s when JCL was devised.
BASIC and DTSS were developed in 1963.
Good point. So IBM was "guilty" of vastly misunderstanding and
underestimating the future importance of interactive users.
Work on System/360 started in 1961 (and in some sense two years
earlier, but let's not get into that). System/360 and OS/360
were announced in April 1964. The Dartmouth Time Sharing System
first became operational in early 1964 and wasn't available for
use until after System/360 and OS/360 had been announced and
already had years of development.
John Levine <johnl@taugh.com> schrieb:
Do we know who invented relative branches? The PDP-11 had them in 1969
but I don't think they were new then.
Very good question.
MitchAlsup1 <mitchalsup@aol.com> schrieb:
Thomas Koenig wrote:
John Levine <johnl@taugh.com> schrieb:
Brooks said it was ugly that some instructions (RX) had both base and
index registers while others (SS) only had base registers, which I
expect made it even harder to do what you suggested.
Depending on base registers for both data and branches was one
of the ideas that did not age well, I think. We have since
seen in the RISC machines that having a stack implemented via
a register, with possibly a frame pointer, a global offset and
larger offsets (16 bits) works well, and we know how to generate
position-independent code.
Position independent data is still difficult, though.
Touché. Data is not the problem, but pointers to data (such as
addresses of arguments) are...
So, having a base register added to all addresses of user code
would definitely have been a better choice.
Do we know who invented relative branches? The PDP-11 had them in 1969
but I don't think they were new then.
I am just trying to make sense of the little documentation there
is of the PDP-X, and it seems it would have had PC-relative
branches too (but also branches relative to index registers),
either with an 8-bit or a 16-bit offset. The Nova had something
similar, but only jumps relative to PC or its index registers,
the PDP-11 went to relative-only branches.
According to Thomas Koenig <tkoenig@netcologne.de>:
John Levine <johnl@taugh.com> schrieb:
Do we know who invented relative branches? The PDP-11 had them in 1969
but I don't think they were new then.
Very good question.
Flipping through the machine descriptions in Blaauw and Brooks, I see
that the B5500 had relative addressing as one of its gazillion address
modes, which was quite possibly the first time they were used. But I
would not count on the PDP-11 designers being aware of that.
The page addressing on the PDP-8 is a pain since you have to divide
your code into little blocks of the right size to make it work.
Relative branching on the PDP-11 let them keep small branch addresses
but not force the memory into pages.
According to Thomas Koenig <tkoenig@netcologne.de>:
Do we know who invented relative branches? The PDP-11 had them in 1969
but I don't think they were new then.
I am just trying to make sense of the little documentation there
is of the PDP-X, and it seems it would have had PC-relative
branches too (but also branches relative to index registers),
either with an 8-bit or a 16-bit offset. The Nova had something
similar, but only jumps relative to PC or its index registers,
the PDP-11 went to relative-only branches.
This draft is pretty clear:
https://bitsavers.org/pdf/dec/pdp-x/29_Nov67.pdf
It had both short page 0 addressing like the PDP-8 and short relative,
as well as long and short indexed addressing.
I'd now say relative branches were obvious once you got to the point
where the cost of the addition to the PC wasn't a big deal, so they
probably occurred to a lot of people around the same time.
The PDP-11 had short relative branches and a long jump that could use
any address mode that made sense, typically absolute or indirect. The
Unix assembler had conditional jump pseudo-ops that turned into a
branch if the target was close enough or a reverse branch around a
jump otherwise. If you allow chaining branches to the same place,
coming up with an optimal set of long and short is NP complete. If you
just do long and short, you can get close enough by starting with
everything long and making passes over the code shortening the ones
you can until you can't shorten anything else. (I did that for the AIX
ROMP assembler, same deal.)
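A hedged sketch of that start-long-and-shrink pass in C, with toy sizes
and a pretend +/- 8-word short-branch range instead of the real PDP-11
or ROMP encodings; the point is only the shape of the algorithm: since
shrinking a jump can only bring other targets closer, repeating the pass
until nothing changes converges.

#include <stdio.h>

#define N 4
#define SHORT_SIZE 1   /* words taken by a short branch                  */
#define LONG_SIZE  3   /* words: short reverse branch around a long jump */
#define RANGE      8   /* pretend short branches reach +/- 8 words       */

int main(void)
{
    /* jump i sits at the start of slot i and targets the start of slot
       target[i]; each slot also holds 2 words of ordinary code */
    int target[N] = { 3, 0, 3, 1 };
    int size[N], addr[N + 1];

    for (int i = 0; i < N; i++)
        size[i] = LONG_SIZE;                       /* start everything long */

    for (int changed = 1; changed; ) {
        changed = 0;
        addr[0] = 0;                               /* re-lay-out the code   */
        for (int i = 0; i < N; i++)
            addr[i + 1] = addr[i] + 2 + size[i];
        for (int i = 0; i < N; i++) {
            int delta = addr[target[i]] - addr[i];
            if (size[i] == LONG_SIZE && delta >= -RANGE && delta <= RANGE) {
                size[i] = SHORT_SIZE;              /* shrinking only brings  */
                changed = 1;                       /* targets closer, so the */
            }                                      /* loop terminates        */
        }
    }
    for (int i = 0; i < N; i++)
        printf("jump %d: %s\n", i, size[i] == SHORT_SIZE ? "short" : "long");
    return 0;
}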
You could do some funky things with PDP-11 jumps like
JMP @(R4)+
which dispatched to the next routine of threaded code pointed to by R4.
JSR PC,@(SP)+
Popped the return address off the stack, pushed another return address
on the stack, and transferred control. This is how we did coroutines.
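For readers who never wrote PDP-11 code, a C analogue of the JMP @(R4)+
threaded-code dispatch, with invented routine names: the "thread" is
just an array of routine addresses, and an instruction pointer is
fetched through and bumped on every dispatch.

#include <stdio.h>

typedef void (*word_fn)(void);

static void push_one(void)  { puts("push 1"); }
static void add_top(void)   { puts("add");    }
static void print_top(void) { puts("print");  }

int main(void)
{
    /* the "thread": a list of routine addresses, like what R4 pointed at */
    word_fn thread[] = { push_one, push_one, add_top, print_top, NULL };

    /* each dispatch: fetch through the pointer, bump it, call the routine */
    for (word_fn *ip = thread; *ip; ip++)
        (*ip)();
    return 0;
}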
According to MitchAlsup1 <mitchalsup@aol.com>:
JSR PC,@(SP)+
Popped the return address off the stack, pushed another return
address on the stack and transfers control. This is how we did
coroutines.
When I was teaching an operating system class in about 1977 I
challenged the class to come up with a minimal coroutine package. They
all found that pretty quickly.
It's not very good coroutines since it just switches the return
address, not any other stack context, but it can sometimes be useful.
Unfortunately I couldn't find any 704 documents which detail
its tube logic circuit designs.
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The key innovation that IBM made with the S/360 was to announce
systems with a wide range of performance *at the same time*,
i.e. different Y values and the same X value.
I would argue that this property is only one of three factors
that made System/360 successful, and perhaps the least important
of the three. The other two factors are, one, addressing both
business computing and scientific computing rather than having
separate models for the two markets, and two, replacing and
discontinuing all of IBM's other lines of computers. I think
it's hard to overstate the importance of the last item.
I didn't mean to imply that the performance range was the only factor
in S/360's success. Just that with S/360, IBM was the first to use
that strategy, and it was a factor in its success.
As to the other two factors you mentioned, I don't necessarily
disagree, but I do want to note that discontinuing older lines of
computers was facilitated by the ability of various S/360 models to
emulate various older computers. So a site that had, say, a 1401, could
upgrade to a S/360 mod 30, which could run in 1401 emulation mode, so
sites could keep their old programs running until they were replaced by
newer native S/360 applications. Similarly for 7080 emulation on
360/65s. There were probably others that I don't know about.
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
snip
The biggest fault of JCL is that it
is trying to solve the wrong problem.
What problem was it trying to solve and what was the "right"
problem?
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no
non-interactive model in the mid 1960s
when JCL was devised.
They didn't address it because they couldn't forecast (obviously
incorrectly) that it would be a problem to solve.
The problem that was in need of addressing is interactive use. I
think there are two reasons why JCL was so poor at that. One is
that they knew that teleprocessing would be important, but they
tried to cram it into the batch processing model, rather than
understanding a more interactive work style. The second reason is
that the culture at IBM, at least at that time, never understood the
idea that using computers can be (and should be) easy and fun. The
B in IBM is Business, and Business isn't supposed to be fun. And I
think that's part of why JCL was not viewed (at IBM) as a failure,
because their Business customers didn't mind. Needless to say, I am
speculating, but for what it's worth those are my speculations.
Fair enough. A couple of comments. By the time TSO/360 came out, IIRC
in the early 1970s, they were already committed to JCL. TSO ran as a
batch job on top of the OS, and handled swapping, etc. itself within
the region allocated to TSO within the OS. It was a disaster. Of course
this was later addressed by unifying TSO into the OS, but that couldn't
happen until the S/370s (except the 155 and 165) and virtual memory.
But the legacy of two control languages was already set by then.
As for "fun". I agree that IBM didn't think of computers as fun, but
there were plenty of reasons to support interactive terminals for
purely business reasons, a major one being programmer productivity in
developing business applications.
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The key innovation that IBM made with the S/360 was to announce
systems with a wide range of performance *at the same time*,
i.e. different Y values and the same X value.
I would argue that this property is only one of three factors
that made System/360 successful, and perhaps the least important
of the three. The other two factors are, one, addressing both
business computing and scientific computing rather than having
separate models for the two markets, and two, replacing and
discontinuing all of IBM's other lines of computers. I think
it's hard to overstate the importance of the last item.
I didn't mean to imply that the performance range was the only
factor in S/360's success. Just that with S/360, IBM was the first
to use that strategy, and it was a factor in its success.
We agree that having multiple price/performance models helped
System/360 succeed. Where I think we don't agree is how big
a factor it was, or how innovative it was. Supporting multiple
models that differ only in price/performance is an obvious
idea, even in the early 1960s.
As to the other two factors you mentioned, I don't necessarily
disagree, but I do want to note that discontinuing older lines of
computers was facilitated by the ability of various S/360 models to
emulate various older computers. So a site that had, say, a 1401,
could upgrade to a S/360 mod 30, which could run in 1401 emulation
mode, so sites could keep their old programs running until they
were replaced by newer native S/360 applications. Similarly for
7080 emulation on 360/65s. There were probably others that I don't
know about.
Read the chapter on System/360 in The Design of Design and you
may change your mind. It isn't surprising that IBM provided
a path for people who wanted to keep running their old software.
That is very different from deciding IBM wasn't going to sell
the old hardware.
Brooks points out that the decision to
drop all further development of IBM's six existing product
lines was made by CEO Thomas Watson (Jr).
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
snip
The biggest fault of JCL is that it
is trying to solve the wrong problem.
What problem was it trying to solve and what was the "right"
problem?
The problem it was trying to solve is contained in its name: Job
Control Language. It tacitly accepted the non-interactive batch
model for what it needed to address.
You may be right, but correct me if I am wrong, there was no non-interactive model in the mid 1960s
I'm having trouble making sense of this question. Did you mean
there was no interactive model? Certainly there was a
non-interactive model, which is in the batch approach to the
world.
when JCL was devised.
OS/360 was announced in April 1964, at the same time as System/360.
Surely there had been significant thought put into what JCL would
look like by that time, which puts it in the early 1960s.
They didn't address it because they couldn't forecast (obviously
incorrectly) that it would be a problem to solve.
Talking about System/360 and OS/360 in The Design of Design, Brooks
distinguishes between teleprocessing and interactive use, aka
time-sharing.
Teleprocessing is for remote submission of batch jobs
and for fixed applications such as airline reservation systems.
He
doesn't say why time-sharing was given short shrift but there is
this interesting statement: "There was no conscious decision to
cater to two use modes [namely, batch and interactive]; it merely
reflected subgroups holding differing use models." It seems clear
that the design team, including Brooks himself, expected that the
primary use mode would be batch-like (which includes teleprocessing
applications such as airline reservation systems).
The problem that was in need of addressing is interactive use. I
think there are two reasons why JCL was so poor at that. One is
that they knew that teleprocessing would be important, but they
tried to cram it into the batch processing model, rather than
understanding a more interactive work style. The second reason is
that the culture at IBM, at least at that time, never understood
the idea that using computers can be (and should be) easy and fun.
The B in IBM is Business, and Business isn't supposed to be fun.
And I think that's part of why JCL was not viewed (at IBM) as a
failure, because their Business customers didn't mind. Needless
to say, I am speculating, but for what it's worth those are my
speculations.
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The problem that was in need of addressing is interactive use. I
think there are two reasons why JCL was so poor at that. One is
that they knew that teleprocessing would be important, but they
tried to cram it into the batch processing model, rather than
understanding a more interactive work style. The second reason is
that the culture at IBM, at least at that time, never understood
the idea that using computers can be (and should be) easy and fun.
The B in IBM is Business, and Business isn't supposed to be fun.
And I think that's part of why JCL was not viewed (at IBM) as a
failure, because their Business customers didn't mind. Needless
to say, I am speculating, but for what it's worth those are my
speculations.
I don't think we have a major disagreement that IBM didn't address the
interactive user. We may have a slight disagreement as to the reason
for that. I believe you think that they considered it, but rejected it
because it was too much like fun. I don't attribute that motivation,
and don't know what the reasons for the rejection were, but we both
agree that they underestimated its importance for non-fun uses.
Another (and perhaps larger?) part of the motivation was about the
relative priorities, and this was (I believe) a conscious element.
In particular, interactive use was thought to be important for
program development (Brooks says something along these lines in
TDOD). I conjecture that IBM consciously decided -- whether
rightly or wrongly -- that program development was only a small
fraction of what IBM's market wanted to do with their computers,
and so IBM didn't prioritize it; they thought that what little
program development was needed could be carried out adequately
under the batch processing model. That's understandable -
A quote from Tom Watson Sr comes
to mind (paraphrased): "I think there's a world market for about
five computers."
I conjecture that there was an unconscious
attitude at IBM at that time that interfered with them giving
interactive use serious consideration. Furthermore it isn't obvious
that they made a bad decision, considering the environment of the
marketplace of the time.
Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
[...]
I conjecture that there was an unconscious
attitude at IBM at that time that interfered with them giving
interactive use serious consideration. Furthermore it isn't obvious
that they made a bad decision, considering the environment of the
marketplace of the time.
I assume you're right. There actually may have been one additional
factor: I don't think the 360/30 would have been powerful enough
for timesharing. It's hard to find comparative figures, but
I suspect it was considerably slower than a PDP/8.
According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
Among the many differences between IBM and DEC computers was that
IBM's had channels which did a lot of work between relatively
infrequent interrupts, while DEC's did not, and often had interrupts
for each word of data. (They did have DMA, which they called data
break, but only for fast devices like disks, not terminals or medium
speed DECtapes.)
At the time IBM's choice made sense. Now of course everything is so
much faster that the mini-server on my desk that is about the size of
an orange has an ATA disc controller that does more than a 1960s
channel, and the CPU takes thousands of interrupts per second
without noticeably slowing down.
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The key innovation that IBM made with the S/360 was to announce
systems with a wide range of performance *at the same time*,
i.e. different Y values and the same X value.
I would argue that this property is only one of three factors
that made System/360 successful, and perhaps the least important
of the three. The other two factors are, one, addressing both
business computing and scientific computing rather than having
separate models for the two markets, and two, replacing and
discontinuing all of IBM's other lines of computers. I think
it's hard to overstate the importance of the last item.
I didn't mean to imply that the performance range was the only
factor in S/360's success. Just that with S/360, IBM was the first
to use that strategy, and it was a factor in its success.
We agree that having multiple price/performance models helped
System/360 succeed. Where I think we don't agree is how big
a factor it was, or how innovative it was. Supporting multiple
models that differ only in price/performance is an obvious
idea, even in the early 1960s.
I don't have an opinion on how big a factor it was, but if you think it
was innovative, can you name any other computer manufacturer who did
it, i.e. announced at the same time multiple models with different
performance?
As to the other two factors you mentioned, I don't necessarily
disagree, but I do want to note that discontinuing older lines of
computers was facilitated by the ability of various S/360 models to
emulate various older computers. So a site that had, say, a 1401,
could upgrade to a S/360 mod 30, which could run in 1401 emulation
mode, so sites could keep their old programs running until they
were replaced by newer native S/360 applications. Similarly for
7080 emulation on 360/65s. There were probably others that I don't
know about.
Read the chapter on System/360 in The Design of Design and you
may change your mind. It isn't surprising that IBM provided
a path for people who wanted to keep running their old software.
Again, did any other manufacturer at the time provide, in their new
models, emulation of their older models with radically different
architectures?
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
Tim Rentsch wrote:
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
The key innovation that IBM made with the S/360 was to announce
systems with a wide range of performance *at the same time*,
i.e. different Y values and the same X value.
I would argue that this property is only one of three factors
that made System/360 successful, and perhaps the least important
of the three. The other two factors are, one, addressing both
business computing and scientific computing rather than having
separate models for the two markets, and two, replacing and
discontinuing all of IBM's other lines of computers. I think
it's hard to overstate the importance of the last item.
I didn't mean to imply that the performance range was the only
factor in S/360's success. Just that with S/360, IBM was the first
to use that strategy, and it was a factor in its success.
We agree that having multiple price/performance models helped
System/360 succeed. Where I think we don't agree is how big
a factor it was,or how innovative it was. Supporting multiple
models that differ only in price/performance is an obvious
idea, even in the early 1960s.
I don't have an opinion on how big a factor it was, but if you think it
was innovative, can you name any other computer manufacturer who did
it, i.e. announced at the same time multiple models with different
performance?
I think it was an obvious idea at the time, even before IBM started
Among the many differences between IBM and DEC computers was that
IBM's had channels ...
Burroughs I/O subsystem offloaded even more than IBM channel
programs could provide. It was fire and forget from the MCP
perspective (e.g. read a set of cards or read a bunch of sectors
was one instruction that initiated a high level operation (read
card/cards, print line/lines, read sector/sectors, write sector/sectors,
backspace tape, etc) and the hardware took care of all the fiddley
little details.
Modern server-grade I/O hardware is more along the fire-and-forget model
than bit-twiddling models from the 8086 timeframe. Even SATA (which
is more capable than IDE) is fairly high level, as is FC and
NVMe. Server-grade NICs are also pretty capable and require
far fewer interrupts than early NICs to transfer a given amount
of data.
I think it was an obvious idea at the time, even before IBM started
The Burroughs B100/200/300 systems were just that, multiple models
with different performance using a common cpu architecture. Early
1960's.
According to Scott Lurndal <slp53@pacbell.net>:
Among the many differences between IBM and DEC computers was that
IBM's had channels ...
Burroughs I/O subsystem offloaded even more than IBM channel
programs could provide. It was fire and forget from the MCP
perspective (e.g. read a set of cards or read a bunch of sectors
was one instruction that initiated a high level operation (read
card/cards, print line/lines, read sector/sectors, write
sector/sectors, backspace tape, etc) and the hardware took care of
all the fiddley little details.
I can believe that Burroughs I/O was more flexible but IBM 360
channels could run channel programs that could be arbitrarily long and
had loops. If you wanted to write a channel program to read a dozen
cards or read all the records on a disk track, that wasn't hard. There
were even some self-modifying channel programs that were a pain to
virtualize on CP/67.
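For concreteness, a hedged C rendering of the kind of chained channel
program being described. The struct below is a flattened stand-in for a
real CCW (which packs the same fields into a doubleword), and the
command codes and chaining flag are quoted from memory, so treat the
constants as assumptions rather than gospel.

#include <stdint.h>

enum { CMD_READ = 0x02, CMD_TIC = 0x08 };   /* TIC = transfer in channel   */
enum { FLAG_CC  = 0x40 };                   /* CC  = command chaining      */

struct ccw {                 /* flattened stand-in for a doubleword CCW     */
    uint8_t  cmd;            /* command code                                */
    uint32_t data_addr;      /* 24-bit buffer address                       */
    uint8_t  flags;          /* CC set => channel fetches the next CCW      */
    uint16_t count;          /* bytes to transfer                           */
};

static struct ccw read_cards[12];

/* Read a dozen 80-column cards with one start-I/O: eleven chained READs
 * plus a final READ with chaining off.  A TIC whose data address points
 * back up the list is how the loops mentioned above were built. */
static void build_program(uint32_t buffer)
{
    for (int i = 0; i < 12; i++) {
        read_cards[i].cmd       = CMD_READ;
        read_cards[i].data_addr = buffer + 80u * (uint32_t)i;
        read_cards[i].count     = 80;
        read_cards[i].flags     = (i < 11) ? FLAG_CC : 0;
    }
}

int main(void)
{
    build_program(0x020000);   /* buffer at an arbitrary 24-bit address */
    return 0;
}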
Modern server-grade I/O hardware is more along the fire-and-forget
model than bit-twiddling models from the 8086 timeframe. Even SATA
(which is more capable than IDE) is fairly high level, as is FC and
NVMe. Server-grade NICs are also pretty capable and require
far fewer interrupts than early NICs to transfer a given amount
of data.
Yup, now it's all channels all the time.