• The Design of Design

  • From Thomas Koenig@21:1/5 to All on Sun Apr 21 20:56:05 2024
    I've just read (most of) "The Design of Design" by Fred Brooks,
    especially the chapters dealing with the design of the /360,
    and it's certainly worth reading. (I had finished "The Mythical
    Man-Month" before). There are chapters on computer and software
    architectures, but also something on a house he himself built.

    An interesting detail about the /360 design was that they originally
    wanted to do a stack-based machine. It would have been OK for the
    mid- and high-end machines, but on low-end machines it would have
    been uncompetitive, so they rejected that approach.

    He discusses the book on computer architecture he co-authored with
    Gerrit Blaauw in it (as a project). Would be _very_ nice to read,
    but the price on Amazon is somewhat steep, a bit more than 150 Euros.

    One thing about Brooks - he is not shy of criticizing his own
    works when his views changed. I liked his scathing comments on JCL
    so much that I put them in the Wikipedia article :-)

    His main criticism of his own book on computer architecture was
    that it treated computer architecture as a finite field which had
    been explored already.

    @John S: Not sure if you've read "The Design of Design", but if you
    haven't, you probably should. It might help you to refocus in your
    quest to recreate a S/360 (especially the requirement to get the
    architecture to work well on a very small machine like the 360/30).

    Soo... good to read. Anything else?

  • From John Levine@21:1/5 to tkoenig@netcologne.de on Sun Apr 21 21:45:57 2024
    It appears that Thomas Koenig <tkoenig@netcologne.de> said:
    An interesting detail about the /360 design was that they originally
    wanted to do a stack-based machine. It would have been OK for the
    mid- and high-end machines, but on low-end machines it would have
    been uncompetitive, so they rejected that approach.

    The 1964 IBM Systems Journal paper has half a page on that. They felt
    it would be about as good for scientific machines, worse for commercial.
    Stack machines have more compact instructions due to zero-address, but
    they need more instructions to move stuff around in the stack so that
    was a wash, and the performance depends on how much of the stack it
    can keep in fast memory.
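
    As a rough sketch of the trade-off described above (a toy model in
    C, not the 1964 paper's analysis or any real instruction set): a
    zero-address stack machine spends extra instructions pushing and
    shuffling operands that a register machine avoids, even though each
    stack opcode is shorter.

    #include <stdio.h>

    /* zero-address stack ops, plus PUSH with an immediate operand */
    enum op { PUSH, ADD, MUL };
    struct insn { enum op op; int imm; };

    static int run(const struct insn *code, int n)
    {
        int stack[16], sp = 0;
        for (int i = 0; i < n; i++) {
            switch (code[i].op) {
            case PUSH: stack[sp++] = code[i].imm; break;
            case ADD:  sp--; stack[sp-1] += stack[sp]; break;
            case MUL:  sp--; stack[sp-1] *= stack[sp]; break;
            }
        }
        return stack[0];
    }

    int main(void)
    {
        /* (2+3)*(4+5) takes 7 compact stack instructions */
        const struct insn prog[] = {
            {PUSH,2},{PUSH,3},{ADD,0},{PUSH,4},{PUSH,5},{ADD,0},{MUL,0}
        };
        printf("stack machine: %d in 7 instructions\n", run(prog, 7));

        /* a register machine does the same work in about 3 wider
           instructions once the operands are in registers, with no
           stack shuffling */
        int r1 = 2 + 3, r2 = 4 + 5;
        printf("register machine: %d in 3 instructions\n", r1 * r2);
        return 0;
    }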

    The 360 had way more registers than any previous IBM machine. The 7094
    had accumulator, MQ, and 7 half length index registers. STRETCH had an overcomplex architecture with 7 fast registers, mostly special
    purpose. Some of the commercial machines had an odd circular store
    treated as some number of variable length registers.

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.

    He discusses the book on computer architecture he co-authored with
    Gerrit Blaauw in it (as a project). Would be _very_ nice to read,
    but the price on Amazon is somewhat steep, a bit more than 150 Euros.

    I have a copy. The first half is the textbook, which is pretty good.
    The second half is descriptions and evaluations of 30 architectures from Babbage and the Mark I to the 6502 and 68000, which are great.

    I see a used copy here for $105 which is what textbooks cost these days:

    https://www.valore.com/textbooks/computer-architecture-concepts-and-evolution-1stth-edition/0201105578

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Thomas Koenig@21:1/5 to John Levine on Thu Apr 25 05:24:58 2024
    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.

    And they are making good money on it, too.

    Prompted by a remark in another newsgroup, I looked at IBM's 2023
    annual report, where zSystems is put under "Hybrid Infrastructure"
    (lumped together with POWER). The revenue for both lumped together
    is around 9.215 billion dollars, with a pre-tax margin of more
    than 50%.

    At those margins, they can certainly pay for a development team
    for future hardware generations.

  • From Stephen Fuld@21:1/5 to Thomas Koenig on Thu Apr 25 16:06:49 2024
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.


    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.




    And they are making good money on it, too.

    Prompted by a remark in another newsgroup, I looked at IBM's 2023
    annual report, where zSystems is put under "Hybrid Infrastructure"
    (lumped together with POWER). The revenue for both lumped together
    is around 9.215 billion dollars, with a pre-tax margin of more
    than 50%.

    At those margins, they can certainly pay for a development team
    for future hardware generations.



    Yes, but remember that includes software revenue, which has higher
    margins than hardware revenue. I believe I saw somewhere that IBM made
    more from MVS, DB2, CICS, etc. than they do on the hardware itself. So
    one could argue that they have to develop new hardware in order to
    protect their software revenue!




    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From John Levine@21:1/5 to All on Thu Apr 25 22:13:33 2024
    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.

    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.

    S/360 invented eight bit byte addressed memory with larger power of 2
    data sizes, which I think all by itself is enough to explain why it
    survived. All the others, which were word or maybe decimal digit
    addressed, died. Its addresses could handle 16MB which without too
    many contortions was expanded to 2GB, a lot more than any other design
    of the era. We all know that the thing that kills architectures is
    running out of address space.

    I thought the PDP-10 was swell, but even if DEC had been able to
    design and ship the Jupiter follow-on to the KL-10, its expanded
    addressing was a kludge. It only provided addressing 8M words or about
    32M bytes with no way to go past that.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From MitchAlsup1@21:1/5 to John Levine on Fri Apr 26 00:56:28 2024
    John Levine wrote:

    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.

    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.

    S/360 invented eight bit byte addressed memory with larger power of 2
    data sizes, which I think all by itself is enough to explain why it
    survived. All the others, which were word or maybe decimal digit
    addressed, died. Its addresses could handle 16MB which without too
    many contortions was expanded to 2GB, a lot more than any other design
    of the era. We all know that the thing that kills architectures is
    running out of address space.

    Note to self:: when designing a 36-bit machine, do not cripple it
    with 18-bit addresses with inherent indirection....

    I thought the PDP-10 was swell, but even if DEC had been able to
    design and ship the Jupiter follow-on to the KL-10, its expanded
    addressing was a kludge. It only provided addressing 8M words or about
    32M bytes with no way to go past that.

  • From John Levine@21:1/5 to All on Fri Apr 26 01:40:46 2024
    According to MitchAlsup1 <mitchalsup@aol.com>:
    S/360 invented eight bit byte addressed memory with larger power of 2
    data sizes, which I think all by itself is enough to explain why it
    survived. All the others, which were word or maybe decimal digit
    addressed, died. Its addresses could handle 16MB which without too
    many contortions was expanded to 2GB, a lot more than any other design
    of the era. We all know that the thing that kills architectures is
    running out of address space.

    Note to self:: when designing a 36-bit machine, do not cripple it
    with 18-bit addresses with inherent indirection....

    In fairness, in 1963 when the PDP-6 was designed, 256K words seemed
    like an enormous amount of memory. A 7094 could only address 32K. Even
    with that limit, PDP-10 series lasted until 1983. Twenty years was a
    pretty good run.

    When it was new, S/360 was considered a memory hog. When they realized
    that OS/360 needed 64K to run and a lot more to run well, they quickly
    came up with DOS and TOS that ran in 16K and a minimal BOS that ran in
    8K. In retrospect we know that memory prices dropped quickly and the
    big address space was a good idea, but it was a gamble at the time.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Thomas Koenig@21:1/5 to John Levine on Fri Apr 26 07:39:13 2024
    John Levine <johnl@taugh.com> schrieb:

    S/360 invented eight bit byte addressed memory with larger power of 2
    data sizes, which I think all by itself is enough to explain why it
    survived. All the others, which were word or maybe decimal digit
    addressed, died. Its addresses could handle 16MB which without too
    many contortions was expanded to 2GB, a lot more than any other design
    of the era. We all know that the thing that kills architectures is
    running out of address space.

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    He also commented on the carefully-designed gaps in the opcode space; extensibility was designed in from the beginning. @John S: Another
    important point about S/360 you might want to follow, as Mitch
    keeps telling you...

    I thought the PDP-10 was swell, but even if DEC had been able to
    design and ship the Jupiter follow-on to the KL-10, its expanded
    addressing was a kludge. It only provided addressing 8M words or about
    32M bytes with no way to go past that.

    Reading

    http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/pdp10/KC10_Jupiter/ExtendedAddressing_Jul83.pdf

    I concur that it was a kludge, but at least they seem to have
    allowed for further extension by reserving the 1-1 bit pattern
    as an illegal indirect word.

    However, one question. Designs like the PDP-10 or the UNIVAC
    (from what I read on Wikipedia) had "registers" at certain
    memory locations. On the PDP-10, it even appears to have been
    possible to run code in the first memory locations/registers.

    It seems that the /360 was the first machine which put many
    registers into a (conceptually) separate space, leaving them open
    to implementing them either in memory or as faster logic.

    Is that the case, or did anybody beat them to it?

  • From Stephen Fuld@21:1/5 to John Levine on Fri Apr 26 15:28:27 2024
    John Levine wrote:

    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.

    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.

    S/360 invented eight bit byte addressed memory with larger power of 2
    data sizes, which I think all by itself is enough to explain why it
    survived.


    Of course, I agree about IBM inventing the 8 bit byte (although the
    Burroughs large scale systems used it too), and the power of two data
    sizes (although the Univac 1108 and successors sort of had that with
    quarter word, half word, word and double word data sizes). While
    important, I am not sure it was sufficient.

    I do want to note that another factor in S/360's success was the
    quality of the paper peripherals, especially the 1401 printer, which
    was a true marvel in its time. IBM got that advantage from their long experience with punch card business systems.




    All the others, which were word or maybe decimal digit
    addressed, died. Its addresses could handle 16MB which without too
    many contortions was expanded to 2GB, a lot more than any other design
    of the era. We all know that the thing that kills architectures is
    running out of address space.

    The Univac 1110 (circa 1972, about a decade before XA) had banking,
    which allowed an instruction to address anywhere within a 262K
    (approximately 1 MB) "window" into what could be an "address space" of
    about 4 GB. It was a little awkward in that, while you could have 4 of
    such "windows" available at any time, changing windows required
    executing an (unprivileged) instruction.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From Stephen Fuld@21:1/5 to Thomas Koenig on Fri Apr 26 15:39:42 2024
    Thomas Koenig wrote:

    snip

    However, one question. Designs like the PDP-10 or the UNIVAC
    (from what I read on Wikipedia) had "registers" at certain
    memory locations.

    Univac reserved the first 0200 locations of the user's address space
    as "aliases" of the registers. This is because the instruction format
    only had room for one arithmetic register and one index (address)
    register. Thus to, for example, load one register with the contents
    of another, you put the source register's address in the displacement
    field. But physically, the registers were separate.
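
    A toy model of that aliasing (illustrative C, not the 1108's actual
    register set or sizes): the real registers live in separate
    hardware, but loads and stores whose address falls below 0200 are
    redirected to them, so an ordinary load whose displacement names
    another register behaves as a register-to-register move.

    #include <stdint.h>
    #include <stdio.h>

    #define NREGS 0200            /* addresses 0-0177 alias the registers */
    static uint32_t regs[NREGS];
    static uint32_t core[4096];

    static uint32_t fetch(uint32_t addr)      /* what a load sees */
    {
        return addr < NREGS ? regs[addr] : core[addr];
    }

    static void store(uint32_t addr, uint32_t v)
    {
        if (addr < NREGS) regs[addr] = v; else core[addr] = v;
    }

    int main(void)
    {
        store(05, 42);              /* value sits in "register" 05 */
        store(012, fetch(05));      /* load whose address is 05: in effect
                                       a register-to-register move */
        printf("%u\n", fetch(012));
        return 0;
    }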


    In fact, on the 1108, since the first 0200 memory locations weren't
    directly usable by the software, they were reserved such that upon a
    system power loss, the actual CPU registers were saved into those (core
    - hence non-volatile) memory locations so at least in theory, you could
    recover from a power loss. It didn't work very well.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From John Levine@21:1/5 to All on Fri Apr 26 18:02:19 2024
    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    I do want to note that another factor in S/360's success was the
    quality of the paper peripherals, especially the 1401 printer, which
    was a true marvel in its time. IBM got that advantage from their long
    experience with punch card business systems.

    I presume you mean the 1403 which was indeed a great printer. I printed a
    lot of term papers on them.

    All the others, which were word or maybe decimal digit
    addressed, died. ...

    The Univac 1110 (circa 1972, about a decade before XA) had banking,
    which allowed an instruction to address anywhere within a 262K
    (approximately 1 MB) "window" into what could be an "address space" of
    about 4 GB. It was a little awkward in that, while you could have 4 of
    such "windows" available at any time, changing windows required
    executing an (unprivileged) instruction.

    There were a lot of segmented address schemes and as far as I can tell
    nobody liked them except maybe the Burroughs machines where the
    compilers made it largely invisible. The most famous was the 8086 and
    286 but the PDP-10 extended addressing was sort of like that and even
    the PDP-8 had a bit to say whether an address was to page 0 or the
    current one.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Stephen Fuld@21:1/5 to John Levine on Fri Apr 26 19:01:50 2024
    John Levine wrote:

    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    I do want to note that another factor in S/360's success was the
    quality of the paper peripherals, especially the 1401 printer, which
    was a true marvel in its time. IBM got that advantage from their
    long experience with punch card business systems.

    I presume you mean the 1403 which was indeed a great printer. I
    printed a lot of term papers on them.

    Yes, 1403. Sorry.





    All the others, which were word or maybe decimal digit
    addressed, died. ...

    The Univac 1110 (circa 1972, about a decade before XA) had
    banking, which allowed an instruction to address anywhere within a
    262K (approximately 1 MB) "window" into what could be an "address
    space" of about 4 GB. It was a little awkward in that, while you
    could have 4 of such "windows" available at any time, changing
    windows required executing an (unprivileged) instruction.

    There were a lot of segmented address schemes and as far as I can tell
    nobody liked them except maybe the Burroughs machines where the
    compilers made it largely invisible. The most famous was the 8086 and
    286 but the PDP-10 extended addressing was sort of like that and even
    the PDP-8 had a bit to say whether an address was to page 0 or the
    current one.

    The 1100 series scheme, which was called multi banking, wasn't exactly
    a segment scheme, but the differences are subtle. It certainly wasn't
    as clean as a single large address space, but it did make certain things,
    like shared libraries (common banks in 1100 terminology) very easy.
    And it did eliminate the need for the BALR/Using stuff. And it didn't
    have the issue of having already used the upper bits that caused such
    problems with IBM's XA transition.


    My point is not that the 1100 scheme was better or worse than the
    S/360 scheme. Each had benefits and drawbacks. But it is that the
    S/360 CPU architecture wasn't the only factor in its success. Other
    factors, like marketing and peripherals, were significant, perhaps
    the major, factors.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From John Levine@21:1/5 to All on Fri Apr 26 18:38:43 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    S/360 invented eight bit byte addressed memory with larger power of 2

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    Yup. Even worse, the OS programmers were under extreme pressure
    to save memory so in every data structure with address words,
    they used the high byte for flags or other stuff. So when they
    went to 31 bit addressing, they needed new versions of all of
    the control blocks.
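
    A minimal sketch of that habit (hypothetical C, not actual OS/360
    control-block layouts): a word holds a 24-bit address in its low
    three bytes and flags in the high byte, which works until addresses
    need more than 24 bits.

    #include <stdint.h>
    #include <stdio.h>

    #define FLAG_BUSY 0x80u

    /* flags in bits 31..24, address in bits 23..0 */
    static uint32_t pack24(uint8_t flags, uint32_t addr)
    {
        return ((uint32_t)flags << 24) | (addr & 0x00FFFFFFu);
    }

    int main(void)
    {
        uint32_t below16M = 0x00123456u;   /* fits in 24 bits */
        uint32_t above16M = 0x01234567u;   /* needs 25 bits   */

        printf("%08X\n", pack24(FLAG_BUSY, below16M)); /* 80123456: OK */
        printf("%08X\n", pack24(FLAG_BUSY, above16M)); /* 80234567: the
            high address bits collide with the flag byte and are lost */
        return 0;
    }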

    I thought the PDP-10 was swell, but even if DEC had been able to
    design and ship the Jupiter follow-on to the KL-10, its expanded
    addressing was a kludge. It only provided addressing 8M words or about
    32M bytes with no way to go past that.

    I misread the manual. The extended addresses were 30 bits or about 4GB
    which was plenty for that era, but the way they did it in 256K word
    sections was still a kludge. In the original PDP-6/10 every
    instruction could address all of memory. In extended mode you could
    directly address only the current section, and everything else needed
    an index register or an indirect address.

    While this wasn't terribly hard, it did mean that any time you wanted
    to change a program to run in extended mode you had to look at all the
    code and check every instruction that did an address calculation,
    which was tedious.
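
    A small sketch of the section/offset split behind that (the 12-bit
    section and 18-bit in-section offset come from the extended
    addressing description; the C itself is only illustrative): only the
    18-bit part fits in the instruction word, so a reference outside the
    current section needs an index register or indirect word to supply
    the section bits.

    #include <stdint.h>
    #include <stdio.h>

    #define SECTION(a) (((a) >> 18) & 07777u)   /* 12-bit section number */
    #define OFFSET(a)  ((a) & 0777777u)         /* 18-bit local address  */

    int main(void)
    {
        uint32_t pc   = ((uint32_t)2 << 18) | 0100u;  /* running in section 2 */
        uint32_t near = ((uint32_t)2 << 18) | 0500u;  /* same section         */
        uint32_t far  = ((uint32_t)5 << 18) | 0500u;  /* different section    */

        printf("pc is in section %o\n", (unsigned)SECTION(pc));
        printf("near: section %o offset %06o - 18-bit field is enough\n",
               (unsigned)SECTION(near), (unsigned)OFFSET(near));
        printf("far:  section %o offset %06o - needs an index register\n"
               "      or indirect word to supply the section bits\n",
               (unsigned)SECTION(far), (unsigned)OFFSET(far));
        return 0;
    }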

    Reading

    http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/pdp10/KC10_Jupiter/ExtendedAddressing_Jul83.pdf

    I concur that it was a kludge, but at least they seem to have
    allowed for further extension by reserving the 1-1 bit pattern
    as an illegal indirect word.

    Given that it could already address 4GB I don't think that would help, since anything
    larger would need multi-word addresses which would be an even worse kludge.

    However, one question. Designs like the PDP-10 or the UNIVAC
    (from what I read on Wikipedia) had "registers" at certain
    memory locations. On the PDP-10, it even appears to have been
    possible to run code in the first memory locations/registers.

    Funny you should mention that. On the PDP-6/10, the registers were the
    first 16 memory locations. There were no register to register
    instructions since you used the regular instruction with a memory
    address between 0 and 017. You could indeed run code in the registers
    which was somewhat faster. I wrote a little multi-precision factorial
    routine that ran in the registers.

    It seems that the /360 was the first machine which put many
    registers into a (conceptually) separate space, leaving them open
    to implementing them either in memory or as faster logic.

    Is that the case, or did anybody beat them to it?

    On the PDP-6 and KA-10 the transistor registers were an extra cost
    option, so you could order your machine either way. I believe that DEC
    never shipped a machine without the fast registers since the speed
    difference was so great.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Thomas Koenig@21:1/5 to Stephen Fuld on Sat Apr 27 07:13:16 2024
    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.


    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.

    "Put a bullet through a CPU without missing a single transation"
    is also a technical achievement :-)

    I recently heard (but did not find a source) that IBM did RAID on some
    of their caches.

    Prompted by a remark in another newsgroup, I looked at IBM's 2023
    annual report, where zSystems is put under "Hybrid Infrastructure"
    (lumped together with POWER). The revenue for both lumped together
    is around 9.215 billion dollars, with a pre-tax margin of more
    than 50%.

    At those margins, they can certainly pay for a development team
    for future hardware generations.

    Yes, but remember that includes software revenue, which has higher
    margins than hardware revenue. I believe I saw somewhere that IBM made
    more from MVS, DB2, CICS, etc.

    SAP S4/HANA is going to hurt their bottom line, then. Earlier
    versions of SAP could, I understand, run on zOS and DB2; S4/HANA
    requires SAP's in-house database and requires Linux.

    than they do on the hardware itself. So
    one could argue that they have to develop new hardware in order to
    protect their software revenue!

    Sounds reasonable, and the reverse of what they did in the
    (far-away) past.

  • From Thomas Koenig@21:1/5 to John Levine on Sat Apr 27 11:18:47 2024
    John Levine <johnl@taugh.com> schrieb:

    [PDP-10]

    I misread the manual. The extended addresses were 30 bits or about 4GB
    which was plenty for that era, but the way they did it in 256K word
    sections was still a kludge. In the original PDP-6/10 every
    instruction could address all of memory. In extended mode you could
    directly address only the current section, and everything else needed
    an index register or an indirect address.

    While this wasn't terribly hard, it did mean that any time you wanted
    to change a program to run in extended mode you had to look at all the
    code and check every instruction that did an address calculation,
    which was tedious.

    Hmm... would a simple recompilation have done the trick, or were there
    also issues with integers being restricted to 18 bits, for example?

  • From Stephen Fuld@21:1/5 to Thomas Koenig on Sat Apr 27 13:33:31 2024
    Thomas Koenig wrote:

    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.


    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.

    "Put a bullet through a CPU without missing a single transation"
    is also a technical achievement :-)

    True. But that is a relatively recent achievement. Long after IBM's
    mainframe dominance.





    I recently heard (but did not find a source) that IBM did RAID on some
    of their caches.

    Prompted by a remark in another newsgroup, I looked at IBM's 2023
    annual report, where zSystems is put under "Hybrid Infrastructure"
    (lumped together with POWER). The revenue for both lumped together
    is around 9.215 billion dollars, with a pre-tax margin of more
    than 50%.

    At those margins, they can certainly pay for a development team
    for future hardware generations.

    Yes, but remember that includes software revenue, which has higher
    margins than hardware revenue. I believe I saw somewhere that IBM
    made more from MVS, DB2, CICS, etc.

    SAP S4/HANA is going to hurt their bottom line, then. Earlier
    versions of SAP could, I understand, run on zOS and DB2, S4/HANA
    requires SAP's in-house database and requires Linux.

    than they do on the hardware itself. So
    one could argue that they have to develop new hardware in order to
    protect their software revenue!

    Sounds reasonable, and the reverse of what they did in the
    (far-away) past.


    Yup. I remember when the software was free! Amdahl and the other
    PCM's forced that to change.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From John Levine@21:1/5 to All on Sat Apr 27 16:38:15 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    While this wasn't terribly hard, it did mean that any time you wanted
    to change a program to run in extended mode you had to look at all the
    code and check every instruction that did an address calculation,
    which was tedious.

    Hmm... would a simple recompilation have done the trick, or were there
    also issues with integers being restricted to 18 bits, for example?

    This was 50 years ago. The system software was mostly written in
    assembler. Some was written in BLISS which was more concise but still
    extremely machine specific. I suppose you could recompile your Fortran programs, but the Fortran compiler was written in BLISS.

    There were later versions of BLISS for the PDP-11, VAX and other
    machines but they were not compatible with each other. The earliest
    places I can think of system programming languages with different
    targets were when Bell Labs ported Unix to the Interdata, and the IBM
    S/38 and its successors that had (still has) a virtual machine
    language that is translated to whatever hardware it's running on.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From John Levine@21:1/5 to All on Sat Apr 27 16:43:32 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    Yes, but remember that includes software revenue, which has higher
    margins than hardware revenue. I believe I saw somewhere that IBM made
    more from MVS, DB2, CICS, etc.

    SAP S4/HANA is going to hurt their bottom line, then. Earlier
    versions of SAP could, I understand, run on zOS and DB2, S4/HANA
    requires SAP's in-house database and requires Linux.

    I dunno how much of a problem it'll be. IBM has put a lot of work into
    getting zSeries to run Linux well.

    I realize neither you nor I would buy a mainframe to run Linux, but we
    wouldn't run SAP either.


    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Thomas Koenig@21:1/5 to John Levine on Sat Apr 27 18:12:14 2024
    John Levine <johnl@taugh.com> schrieb:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    Yes, but remember that includes software revenue, which has higher
    margins than hardware revenue. I believe I saw somewhere that IBM made
    more from MVS, DB2, CICS, etc.

    SAP S4/HANA is going to hurt their bottom line, then. Earlier
    versions of SAP could, I understand, run on zOS and DB2, S4/HANA
    requires SAP's in-house database and requires Linux.

    I dunno how much of a problem it'll be. IBM has put a lot of work into getting zSeries to run Linux well.

    They won't get DB2 royalties, though. I'm also not sure what they
    could charge for Linux vs. zOS.

    I realize neither you nor I would buy a mainframe to run Linux, but we wouldn't run SAP either.

    Certainly not :-)

    I've worked with SAP's user interface a bit, for entering hours
    for accounting. The user interface, well, let's just say it took
    longer to get used to than I used it (quite a few years).

  • From MitchAlsup1@21:1/5 to John Levine on Sat Apr 27 17:50:03 2024
    John Levine wrote:

    According to Thomas Koenig <tkoenig@netcologne.de>:
    While this wasn't terribly hard, it did mean that any time you wanted
    to change a program to run in extended mode you had to look at all the
    code and check every instruction that did an address calculation,
    which was tedious.

    Hmm... would a simple recompilation have done the trick, or were there
    also issues with integers being restricted to 18 bits, for example?

    This was 50 years ago. The system software was mostly written in
    assembler. Some was written in BLISS which was more concise but still extremely machine specific.

    BLISS reads a LOT like the original K&R C.

    I suppose you could recompile your Fortran programs, but the Fortran compiler was written in BLISS.

    There were later versions of BLISS for the PDP-11, VAX and other
    machines but they were not compatible with each other.

    Imagine if BLISS were machine independent ?!!

    The earliest
    places I can think of system programming languages with different
    targets were when Bell Labs ported Unix to the Interdata, and the IBM
    S/38 and its successors that had (still has) a virtual machine
    language that is translated to whatever hardware it's running on.

  • From John Levine@21:1/5 to All on Sat Apr 27 18:17:28 2024
    According to MitchAlsup1 <mitchalsup@aol.com>:
    This was 50 years ago. The system software was mostly written in
    assembler. Some was written in BLISS which was more concise but still
    extremely machine specific.

    BLISS reads a LOT like the original K&R C.

    Not really, a little more like BCPL, maybe. It didn't have types or
    structures. It did have a rather extensive way to define pointer
    dereferencing which meant it was easy to describe an upper diagonal
    array, but clumsy to describe a thing with two ints and a string.

    There were later versions of BLISS for the PDP-11, VAX and other
    machines but they were not compatible with each other.

    Imagine if BLISS were machine independent ?!!

    Much later DEC came up with versions of BLISS that were similar enough
    that you could write fairly portable code, with moderate amounts of
    per-machine conditional compilation. This article is a good summary
    of the language and its evolution.

    https://www.cs.tufts.edu/~nr/cs257/archive/ronald-brender/bliss.pdf

    By that time, though, Unix had been ported to lots of machines. BLISS
    suffered by its origin on the word-addressed PDP-10, while after its
    earliest years people only cared about C on 8-bit byte addressed
    machines.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Tim Rentsch@21:1/5 to Thomas Koenig on Sun Apr 28 14:27:38 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:

    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    [...]

    [...] remember that includes software revenue, which has higher
    margins than hardware revenue. I believe I saw somewhere that IBM
    made more from MVS, DB2, CICS, etc.

    SAP S4/HANA is going to hurt their bottom line, then. Earlier
    versions of SAP could, I understand, run on zOS and DB2, S4/HANA
    requires SAP's in-house database and requires Linux.

    than they do on the hardware itself. So
    one could argue that they have to develop new hardware in order to
    protect their software revenue!

    Sounds reasonable, and the reverse of what they did in the
    (far-away) past.

    IBM was forced to change what it did in the past as a
    consequence of an antitrust action filed by the US
    government. And in fact there was more than one of
    those.

  • From Tim Rentsch@21:1/5 to Thomas Koenig on Sun Apr 28 16:04:10 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:

    I've just read (most of) "The Design of Design" by Fred Brooks,
    especially the chapters dealing with the design of the /360,
    and it's certainly worth reading. (I had finished "The Mythical
    Man-Month" before). There are chapters on computer and software architectures, but also something on a house he himself built.

    That he designed (with the help of a professional architect). It
    may be that Brooks and his family helped with some of the interior
    work, but professional contractors did the building.

    An interesting detail about the /360 design was that they originally
    wanted to do a stack-based machine. It would have been OK for the
    mid- and high-end machines, but on low-end machines it would have
    been uncompetitive, so they rejected that approach.

    And it was a serious consideration, the team spending six months
    before rejecting it due to those performance limitations.

    He discusses the book on computer architecture he co-authored with
    Gerrit Blaauw in it (as a project). Would be _very_ nice to read,
    but the price on Amazon is somewhat steep, a bit more than 150 Euros.

    Yow. I think I'll try a local library.

    One thing about Brooks - he is not shy of criticizing his own
    works when his views changed. I liked his scathing comments on JCL
    so much that I put them in the Wikipedia article :-)

    Personally I think his assessment of JCL is harsher than it
    deserves. Don't get me wrong, JCL is not my idea of a great
    control language, but it was usable enough in the environment
    that customers were used to. The biggest fault of JCL is that it
    is trying to solve the wrong problem. It isn't clear that trying
    to do something more ambitious would have fared any better in the
    early 1960s (see also The Second System Effect in MMM).

    No comment about JCL still being used today.

    His main criticism of his own book on computer architecture was
    that it treated computer architecture as a finite field which had
    been explored already.

    @John S: Not sure if you've read "The Design of Design", but if you
    haven't, you probably should. It might help you to refocus in your
    quest to recreate a S/360 (especially the requirement to get the
    architecture to work well on a very small machine like the 360/30).

    Soo... good to read. Anything else?

    I read TDOD somewhat quickly all the way through. After a time I
    went back and started re-reading, going more slowly the second
    time. That has turned out to be rather useful, and I would at
    least suggest that people try a second, and slower, reading.

  • From Tim Rentsch@21:1/5 to Stephen Fuld on Sun Apr 28 20:22:43 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    They had the insight to see that the 16 fixed-size registers could be
    in fast storage on high end machines, main memory on low end machines,
    so the high end machines were fast and the low end no slower than a
    memory-memory architecture, which is what it was in practice. It was
    really an amazing design, no wonder it's the only architecture of its
    era that still has hardware implementations.

    Yes, although it isn't clear how much of its success is due to
    technical superiority versus marketing superiority.

    To me it seems clear that the success of System/360 was largely or
    mostly due to good technical decisions having been made. IBM
    salesmen were very effective in getting customers for the new
    line, but a lot of the reason for that is that they had a good
    product to sell. I don't mean just the number of registers or
    the size of the address space, but a commitment to a forward
    looking architecture that handles all the needs of every
    customer, both big and small, and would continue to do so in
    the future. In those days the vast majority of computer buyers
    were businesses, at least as measured by dollars, and in the
    business world that sort of pitch has enormous appeal.

  • From John Levine@21:1/5 to All on Mon Apr 29 03:21:11 2024
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    Sounds reasonable, and the reverse of what they did in the
    (far-away) past.

    IBM was forced to change what it did in the past as a
    consequence of an antitrust action filed by the US
    government. And in fact there was more than one of
    those.

    That's true but they didn't have that much practical effect.

    The 1956 agreement required that they sell equipment, rather than only
    leasing it, let customers buy their cards from vendors other than IBM,
    and some other related stuff. A big deal then, irrelevant now.

    In 1969 they preemptively unbundled software and services, expecting
    that an antitrust suit could force them to do so. There were many
    antitrust suits through 1982, all of which IBM won or which were
    dismissed.

    Telex (a company unrelated to the Western Union telex) won a narrow
    case about peripheral interfaces, but lost on appeal. Around the same
    time there was an EU case that IBM settled and agreed to publish
    device interfaces, which was basically what Telex wanted.

    I don't think that any of these had a significant long term effect on
    the computer industry. When minicomputers appeared IBM never competed
    very successfully (nobody would have bought a slow expensive IBM 1130
    if IBM didn't make it), and when micros came along they had a short
    term success with the IBM PC but soon lost control of that market.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Scott Lurndal@21:1/5 to Thomas Koenig on Tue Apr 30 15:51:20 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:
    John Levine <johnl@taugh.com> schrieb:

    S/360 invented eight bit byte addressed memory with larger power of 2
    data sizes, which I think all by itself is enough to explain why it
    survived. All the others, which were word or maybe decimal digit
    addressed, died. Its addresses could handle 16MB which without too
    many contortions was expanded to 2GB, a lot more than any other design
    of the era. We all know that the thing that kills architectures is
    running out of address space.

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    He also commented on the carefully-designed gaps in the opcode space;
    extensibility was designed in from the beginning. @John S: Another
    important point about S/360 you might want to follow, as Mitch
    keeps telling you...

    I thought the PDP-10 was swell, but even if DEC had been able to
    design and ship the Jupiter follow-on to the KL-10, its expanded
    addressing was a kludge. It only provided addressing 8M words or about
    32M bytes with no way to go past that.

    Reading

    http://bitsavers.informatik.uni-stuttgart.de/pdf/dec/pdp10/KC10_Jupiter/ExtendedAddressing_Jul83.pdf

    I concur that it was a kludge, but at least they seem to have
    allowed for further extension by reserving the 1-1 bit pattern
    as an illegal indirect word.

    However, one question. Designs like the PDP-10 or the UNIVAC
    (from what I read on Wikipedia) had "registers" at certain
    memory locations. On the PDP-10, it even appears to have been
    possible to run code in the first memory locations/registers.

    It seems that the /360 was the first machine which put many
    registers into a (conceptually) separate space, leaving them open
    to implementing them either in memory or as faster logic.

    PDP-8 had both (auto-increment index registers in memory) and
    the separate accumulator and link registers.

    B3500 had index registers in low memory (relative to the base
    register) and an accumulator for fixed floating point, and
    later added four more index registers.


    Is that the case, or did anybody beat them to it?

  • From Scott Lurndal@21:1/5 to John Levine on Tue Apr 30 15:52:53 2024
    John Levine <johnl@taugh.com> writes:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    S/360 invented eight bit byte addressed memory with larger power of 2

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    Yup. Even worse, the OS programmers were under extreme pressure
    to save memory so in every data structure with address words,
    they used the high byte for flags or other stuff. So when they
    went to 31 bit addressing, they needed new versions of all of
    the control blocks.

    The B300 had a fixed instruction format that included three
    operand fields. For instructions that didn't use all three
    operands, the programmer was encouraged to use the unused
    operand fields as scratch fields.

  • From MitchAlsup1@21:1/5 to Scott Lurndal on Tue Apr 30 19:03:41 2024
    Scott Lurndal wrote:

    John Levine <johnl@taugh.com> writes:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    S/360 invented eight bit byte addressed memory with larger power of 2

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    Yup. Even worse, the OS programmers were under extreme pressure
    to save memory so in every data structure with address words,
    they used the high byte for flags or other stuff. So when they
    went to 31 bit addressing, they needed new versions of all of
    the control blocks.

    The B300 had fixed instruction format that included three
    operand fields. For instructions that didn't use all three
    operands, the programmer was encouraged to use the unused
    operand fields as scratch fields.


    For a modern ISA, the architect should specify that various bits
    of the general format "must be zero"* when those bits are not used
    in the instruction. We do this (and we have verification guys check
    that the HW raises an exception when those bits are non-zero). This
    is the only sane way to prevent malicious use of the ISA by erroneous
    compilers or by malicious SW with writeable access to code memory.

    (*) "must be zero" can also mean "must be one" or "must be this
    pattern", should 1 or "this pattern" be meaningful.

    What you want is for the ISA document to specify everything every
    instruction can do, and also to specify that all encodings not
    leading to well defined calculations and accesses fail. This
    preserves the encoding space for the future.

    {{And in a strange way: this makes verification easier, too}}
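
    A minimal sketch of that rule (a hypothetical 32-bit encoding, not
    any shipping ISA): bits an instruction does not use are reserved and
    must be zero, and the decoder faults on anything else, so the
    pattern stays available for the future.

    #include <stdint.h>
    #include <stdio.h>

    #define OPCODE(w)  ((w) >> 26)     /* bits 31..26: major opcode      */
    #define MBZ_MASK   0x0000F000u     /* bits 15..12: unused, must be 0 */

    static int decode(uint32_t w)
    {
        if (w & MBZ_MASK) {            /* models an illegal-instruction trap */
            printf("%08X: fault, reserved bits set\n", w);
            return -1;
        }
        printf("%08X: opcode %u accepted\n", w, (unsigned)OPCODE(w));
        return 0;
    }

    int main(void)
    {
        decode(0x84000001u);           /* reserved bits clear: executes */
        decode(0x84003001u);           /* a reserved bit set: faults    */
        return 0;
    }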

  • From John Levine@21:1/5 to All on Tue Apr 30 19:46:39 2024
    According to MitchAlsup1 <mitchalsup@aol.com>:
    The B300 had fixed instruction format that included three
    operand fields. For instructions that didn't use all three
    operands, the programmer was encouraged to use the unused
    operand fields as scratch fields.

    Cue "The Story of Mel"

    For a modern ISA, the architect should specify that various bits
    of the general format "must be zero"* when those bits are not used
    in the instruction.

    That was another innovation of the 360. It specifically said that
    unused bits (of which there were a few) and unused instructions (of
    which there were a lot) were reserved. The unused bits had to be zero,
    and the instructions all trapped.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From John Levine@21:1/5 to All on Tue Apr 30 20:22:27 2024
    According to Scott Lurndal <slp53@pacbell.net>:
    It seems that the /360 was the first machine which put many
    registers into a (conceptually) separate space, leaving them open
    to implementing them either in memory or as faster logic.

    PDP-8 had both (auto-increment index registers in memory) and
    the separate accumulator and link registers.

    The auto-increment "registers" in the PDP-8 and its 18-bit cousins
    were just regular memory locations with a hack that incremented them
    when you used them as an indirect address. They weren't anything like
    what we call index registers, since you couldn't combine them with
    anything else to get an address. (I speak from direct experience.)
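
    A small model of that hack (a C simulation, illustrative only):
    words 010-017 are ordinary storage, but an indirect reference
    through one of them bumps the word first, giving a crude stepping
    pointer rather than a true index register.

    #include <stdio.h>

    static unsigned short mem[4096];       /* 12-bit words */

    static unsigned short load_indirect(unsigned short ptr_loc)
    {
        if (ptr_loc >= 010 && ptr_loc <= 017)           /* auto-index range  */
            mem[ptr_loc] = (mem[ptr_loc] + 1) & 07777;  /* bumped before use */
        return mem[mem[ptr_loc]];
    }

    int main(void)
    {
        mem[0200] = 0111; mem[0201] = 0222; mem[0202] = 0333;
        mem[010]  = 0177;       /* pointer starts one word before the data */

        for (int i = 0; i < 3; i++)     /* walks 0200, 0201, 0202 */
            printf("%04o\n", load_indirect(010));
        return 0;
    }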

    Much later the 18-bit PDP-15 added a real index register, but there is
    no way you could have done that on a PDP-8 since there was no spare
    place in the instruction word to put an index bit. And by that time it
    was clear that the PDP-8 was headed for niche process control
    applications and oblivion while the PDP-11 was the future.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

  • From Scott Lurndal@21:1/5 to mitchalsup@aol.com on Tue Apr 30 20:37:22 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    Scott Lurndal wrote:

    John Levine <johnl@taugh.com> writes:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    S/360 invented eight bit byte addressed memory with larger power of 2

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    Yup. Even worse, the OS programmers were under extreme pressure
    to save memory so in every data structure with address words,
    they used the high byte for flags or other stuff. So when they
    went to 31 bit addressing, they needed new versions of all of
    the control blocks.

    The B300 had fixed instruction format that included three
    operand fields. For instructions that didn't use all three
    operands, the programmer was encouraged to use the unused
    operand fields as scratch fields.


    For a modern ISA, the architect should specify that various bits

    The B300 was an extension of the 1950's Electrodata 220, and had
    very little total memory.

    In modern systems with several orders of magnitude more memory, the
    more useful restriction is to make the text section read-only
    via the MMU.

    Yes, for extensibility, the hardware should, generally, fault
    on unused instruction encodings (having a NOP space that can be
    extended with 'hint' instructions in future versions of the
    instruction space maintains backwards compatibility with software
    built for later generations when run on earlier generations which
    treat the encoding as a NOP, viz. ARM64).
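
    A small sketch of the two policies side by side (hypothetical
    encodings, not ARM64's actual ones): reserved encodings trap, while
    a designated hint block decodes as a NOP, so code using hints
    defined in a later generation still runs on older hardware.

    #include <stdint.h>
    #include <stdio.h>

    #define MAJOR(w)    ((w) >> 24)
    #define HINT_MAJOR  0x1Fu   /* hint block: NOP when not recognized */

    static void execute(uint32_t w)
    {
        switch (MAJOR(w)) {
        case 0x01:       printf("%08X: add\n", w);                     break;
        case HINT_MAJOR: printf("%08X: nop (unrecognized hint)\n", w); break;
        default:         printf("%08X: trap, reserved encoding\n", w); break;
        }
    }

    int main(void)
    {
        execute(0x01000000u);   /* known instruction                 */
        execute(0x1F000007u);   /* hint from a later generation: NOP */
        execute(0x7E000000u);   /* reserved encoding: faults         */
        return 0;
    }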

  • From MitchAlsup1@21:1/5 to Scott Lurndal on Tue Apr 30 20:59:47 2024
    Scott Lurndal wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    Scott Lurndal wrote:

    John Levine <johnl@taugh.com> writes:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    S/360 invented eight bit byte addressed memory with larger power of 2

    Brooks wrote that the design was supposed to have been 32-bit
    clean from the start, but that the people who implemented the BALR
    instruction (which puts some bits of the PSW into the high-order
    byte) didn't follow that guideline. He blamed himself for not making
    that sufficiently clear to all the design team.

    Yup. Even worse, the OS programmers were under extreme pressure
    to save memory so in every data structure with address words,
    they used the high byte for flags or other stuff. So when they
    went to 31 bit addressing, they needed new versions of all of
    the control blocks.

    The B300 had fixed instruction format that included three
    operand fields. For instructions that didn't use all three
    operands, the programmer was encouraged to use the unused
    operand fields as scratch fields.


    For a modern ISA, the architect should specify that various bits

    The B300 was an extension of the 1950's Electrodata 220, and had
    very little total memory.

    In modern systems with several orders of magnitude more memory, the
    more useful restriction is to make the text section read-only
    via the MMU.

    Necessary but insufficient. You not only want to prevent malicious
    programs from altering .text, you also need to prevent software
    generating programs (compilers, linkers, JITs) from creating BAD
    bit-patterns that could be mistaken for an instruction.

    Yes, for extensibility, the hardware should, generally, fault
    on unused instruction encodings (having a NOP space that can be
    extended with 'hint' instructions in future versions of the
    instruction space maintains backwards compatibility with software
    built for later generations when run on earlier generations which
    treat the encoding as a NOP, viz. ARM64).
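
    To make that policy concrete, here is a rough C sketch of such a decoder
    (the 32-bit layout, field positions and opcode ranges below are made up
    for illustration; they are not ARM64's or any real ISA's encodings):
    reserved opcodes and nonzero "must be zero" bits trap, while a designated
    hint range executes as a NOP on implementations that do not yet assign it
    a meaning.

        #include <stdio.h>
        #include <stdint.h>

        /* Invented format: opcode in bits 31..24, "must be zero" bits in
           23..16, operand fields below that.  Illustration only.          */
        enum action { EXECUTE, NOP_HINT, FAULT };

        static enum action decode(uint32_t insn)
        {
            uint8_t opc = insn >> 24;
            uint8_t mbz = (insn >> 16) & 0xFF;

            if (opc >= 0xF0)      /* reserved hint block: future extensions  */
                return NOP_HINT;  /* run as NOPs on older implementations    */
            if (opc >= 0x40)      /* unassigned opcode space: trap, so the   */
                return FAULT;     /* encoding can be given a meaning later   */
            if (mbz != 0)         /* "must be zero" bits really must be zero */
                return FAULT;
            return EXECUTE;
        }

        int main(void)
        {
            uint32_t samples[] = { 0x01000005u, 0x01010005u,
                                   0x7F000000u, 0xF1000000u };
            for (unsigned i = 0; i < 4; i++)    /* 0=execute 1=hint 2=fault */
                printf("%08X -> %d\n", (unsigned)samples[i], (int)decode(samples[i]));
            return 0;
        }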

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to John Levine on Tue Apr 30 23:07:09 2024
    John Levine <johnl@taugh.com> writes:

    According to MitchAlsup1 <mitchalsup@aol.com>:

    For a modern ISA, the architect should specify that various bits
    of the general format "must be zero"* when those bits are not used
    in the instruction.

    That was another innovation of the 360. It specifically said that
    unused bits (of which there were a few) and unused instructions (of
    which there were a lot) were reserved. The unused bits had to be
    zero, and the instructions all trapped.

    I would describe this not so much as an innovation but just as
    applying a lesson learned from earlier experience. Some earlier IBM
    model (don't remember which one) had the property that instructions
    were somewhat like microcode, and some undocumented combinations of
    bits would do useful things. Needless to say these undocumented
    behaviors were discovered by programmers in the wild, and when IBM
    went to produce a newer model they found that they had to duplicate
    these "accidental features" or old programs wouldn't work on the
    newer, supposedly compatible, model. To stop that from happening
    again, the people doing System/360 insisted that there be no
    "accidental" behaviors, especially since an "accidental" behavior
    would likely be different on different models of the same line.

    I remember hearing or reading about this somewhere, but I don't
    remember where. It may have been when I took Fred Brooks's
    architecture class when I was a grad student.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to John Levine on Tue Apr 30 22:36:05 2024
    John Levine <johnl@taugh.com> writes:

    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:

    Sounds reasonable, and the reverse of what they did in the
    (far-away) past.

    IBM was forced to change what it did in the past as a
    consequence of an antitrust action filed by the US
    government. And in fact there was more than one of
    those.

    That's true but they didn't have that much practical effect.

    The 1956 agreement required that they sell equipment, rather than only leasing it, let customers buy their cards from vendors other than IBM,
    and some other related stuff. A big deal then, irrelevant now.

    In 1969 they preemptively unbundled software and services, expecting
    that an antitrust suit could force them to do so. There were a number of antitrust suits through 1982, all of which IBM won or which were dismissed.

    Your impression of the history is rosier than mine. The danger to
    IBM was not that they would be forced to unbundle software but that
    the court would decide the company needed to be broken up, and that
    was a real fear on IBM's part. In any case I wasn't making any
    claim about how large the effect was (even if I might think it was
    larger than what you think it was) but only that they were forced to
    change by the antitrust action, which they were: IBM management
    certainly did not want to unbundle software, and they would not have
    done so if they thought they had a choice.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 1 17:40:45 2024
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    John Levine <johnl@taugh.com> writes:

    According to MitchAlsup1 <mitchalsup@aol.com>:

    For a modern ISA, the architect should specify that various bits
    of the general format "must be zero"* when those bits are not used
    in the instruction.

    That was another innovation of the 360. It specifically said that
    unused bits (of which there were a few) and unused instructions (of
    which there were a lot) were reserved. The unused bits had to be
    zero, and the instructions all trapped.

    I would describe this not so much as an innovation but just as
    applying a lesson learned from earlier experience.

    Well, yes, but another 360 innovation was the whole idea of computer architecture, as well as the term. It was the first time that the
    programmer's view of the computer was described independently of any implementation.

    Some earlier IBM
    model (don't remember which one) had the property that instructions
    were somewhat like microcode, and some undocumented combinations of
    bits would do useful things.

    I wonder if that was the way that the 704 OR'ed the index registers.
    There were three of them, numbered 1, 2, and 4, so if your index field
    was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
    OR'ed combination of indexes) from the base address, so it would have
    taken some really tricky programming to make use of that. But someone
    must have since they documented it and it continued to work on the
    709, 7090, and 7094 until they provided 7 index registers and a mode
    bit to switch between the old OR and the new 7 registers.

    I have never found anything that says whether it was deliberate or an
    accident of the 704's implementation, and I have looked pretty hard.
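
    A toy C model of the addressing rule described above (widths and values
    are simplified; the real machine had 36-bit words, 15-bit addresses and
    a 3-bit tag): each tag bit selects one index register, the selected
    registers are ORed together, and the result is subtracted from the
    instruction's address field.

        #include <stdio.h>

        /* Simplified model of 704-style indexing: index registers 1, 2 and 4
           are selected by the tag bits, ORed together, and subtracted (two's
           complement, 15 bits) from the instruction's address field.        */
        static unsigned effective_address(unsigned addr, unsigned tag,
                                          const unsigned xr[3])
        {
            unsigned sel = 0;
            if (tag & 1) sel |= xr[0];    /* index register 1 */
            if (tag & 2) sel |= xr[1];    /* index register 2 */
            if (tag & 4) sel |= xr[2];    /* index register 4 */
            return (addr - sel) & 077777; /* keep 15 bits     */
        }

        int main(void)
        {
            unsigned xr[3] = { 010, 0200, 03 };
            printf("tag=0: %05o\n", effective_address(01000, 0, xr));
            printf("tag=1: %05o\n", effective_address(01000, 1, xr));
            printf("tag=5: %05o\n", effective_address(01000, 5, xr));
            /* tag=5 subtracts XR1 | XR4 = 010 | 03 = 013 */
            return 0;
        }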

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Tim Rentsch on Wed May 1 18:00:20 2024
    Tim Rentsch wrote:


    snip

    Personally I think his assessment of JCL is harsher than it
    deserves. Don't get me wrong, JCL is not my idea of a great
    control language, but it was usable enough in the environment
    that customers were used to.


    From 1972-1979 I worked at a site that had both S/360s (mostly /65s)
    running OS/MVT, and Univac 1108s running Exec 8. I used both, though
    did mostly 1108 stuff.

    For several reasons, JCL was terrible. One was its seemingly needless
    obscurity. For example, IIRC the program name of the COBOL compiler
    was IKFCBL00. In contrast, the COBOL compiler under Exec 8 was called
    COB. It also lacked intelligent defaults, which made it more
    cumbersome to use. But this was mostly hidden due to a much bigger
    problem.

    Perhaps due to the architecture's inability to swap a program out and
    reload it at any real address other than the one it had originally, all
    resources to be used had to be available at the beginning of the job, so
    all JCL was scanned at the beginning of the job, and no "dynamic"
    allocations were possible.

    So, for example, the COBOL compiler needed, besides the input file
    name, IIRC four scratch files, an output file and a place to put the
    (spooled) print listing. These had to be explicitly described (as JCL
    DD statements) in the JCL for the job; similarly for other programs. This
    was so inconvenient that IBM provided "Procedures" (essentially JCL
    macros) that included all the necessary DD statements, hid the actual
    program names, etc. Thus to compile, link, and execute a COBOL program
    you invoked a procedure called something like ICOBUCLG (I have
    forgotten exactly, but the last three characters were for Compile, Link,
    and Go). Contrast that with the EXEC 8 command

    @COB programname

    (The @ was Exec's equivalent of // to indicate a command.) The
    scratch files were allocated internally by the compiler, the default
    print output (which could be overridden) went to the printer, and the
    default output name (again overridable) was the same as the input
    (object files and source files could have the same name).

    Similarly, to copy a file from one place to another, JCL required at
    least two DD cards and an exec card with the program IEBGENER. Under
    Exec 8, the command

    @Copy sourcefile, destinationfile

    was sufficient, as both files would be dynamically assigned (Exec term) internally by the copy program, and the indicator of success or failure
    went to the default print output.

    As you stated, programmers dealt with this, and it worked in
    batch mode. But it clearly wouldn't work once time sharing (called
    Demand in Exec terminology) became available, so IBM had to invent a
    whole new, incompatible set of commands for TSO. The Exec 8 syntax,
    by contrast, was so straightforward that users keyed in exactly the
    same commands at the terminal as they put on cards or in a file in
    batch mode. That difference persists to this day.




    The biggest fault of JCL is that it
    is trying to solve the wrong problem.



    What problem was it trying to solve and what was the "right" problem?







    It isn't clear that trying
    to do something more ambitious would have fared any better in the
    early 1960s (see also The Second System Effect in MMM).


    Exec 8 was roughly contemporaneous with OS/MVT. I claim it was a much
    better choice.






    No comment about JCL still being used today.



    IBM takes backward compatibility really seriously.




    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Stephen Fuld on Wed May 1 18:33:07 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    Tim Rentsch wrote:


    snip

    Personally I think his assessment of JCL is harsher than it
    deserves. Don't get me wrong, JCL is not my idea of a great
    control language, but it was usable enough in the environment
    that customers were used to.


    From 1972-1979 I worked at a site that had both S/360s (mostly /65s)
    running OS/MVT, and Univac 1108s running Exec 8. I used both, though
    did mostly 1108 stuff.

    For several reasons, JCL was terrible. One was its seemingly needless
    obscurity. For example, IIRC the program name of the COBOL compiler
    was IKFCBL00. In contrast, the COBOL compiler under Exec 8 was called
    COB. It also lacked intelligent defaults, which made it more
    cumbersome to use. But this was mostly hidden due to a much bigger
    problem.

    The worst parts of JCL were "DD" cards, manual track allocation, and the lack of any form of real filesystem. PDS do not count.

    On the Burroughs Medium systems, COBOL74 was called COBOL.

    ?COMPILE PAYROL WITH COBOL MEM +300
    ?FILE INPUT = SRC CRD
    ?FILE PRINT = PAYROL/COBOL PRN.

    ?EXECUTE PAYROL
    ?FILE MASTER = PAYROL/MASTER DPK
    ?FILE UPDATE = UPDATE CRD
    ?FILE PRINT = PAYROL/LOG PRN.

    (master file on pack, timecards on card reader,
    detail log to printer (PRN) or spooled to disk
    with (PBD) or pack with (PBP)).

    COBOL68 compiler was "COBOLV".

    To specify input from a disk (100-byte sectors) file:

    ?FILE INPUT = PAYRLI DSK

    or from disk pack (180-byte sectors) named SOURCE:

    ?FILE INPUT = SOURCE/PAYROL DPK

    The major shortcoming with MCP was six-letter file and pack (family) names.

    The commands could be entered on cards (control cards were indicated by an invalid 1-2-3 punch in column 1) or directly on the operator console or
    CANDE (timesharing interactive editor) session.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 1 18:47:02 2024
    According to Scott Lurndal <slp53@pacbell.net>:
    The worst parts of JCL were "DD" cards, manual track allocation, and the lack of any form of real filesystem. PDS do not count.

    OS had named files (which they called datasets) and a catalog that said
    which disk or tape it was on, so you could refer to SYS1.FOOLIB and it
    would find it for you.

    I agree the explicit track and cylinder allocation was painful. IBM
    outsmarted themselves with CKD disks that put the record boundaries in
    hardware and did the key search for indexed files in the disk
    controller. That was fine on a 360/30 which was too slow to do
    anything else (the CPU and channel shared the microengine and the CPU
    basically stopped during an index search) but very inefficient on
    larger machines.

    Later on VSAM handled disks with fixed block sizes and used B-trees to
    do index searches but by then the cruft was all there.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Stephen Fuld on Wed May 1 19:01:39 2024
    Stephen Fuld wrote:

    What problem was it trying to solve and what was the "right" problem?

    It was trying to solve the resource management problem when it should
    have been solving the program launch problem (ala c-shell)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Scott Lurndal on Wed May 1 19:28:55 2024
    Scott Lurndal <scott@slp53.sl.home> schrieb:

    The worst parts of JCL were "DD" cards, manual track allocation, and the lack of any form of real filesystem. PDS do not count.

    The lack of a hierarchical file system was due to the OS, not
    really JCL itself. But the difference is mostly academic to the
    user, who has to deal with the idiosyncrasies of the OS through JCL.

    But PDS (which you needed to use, in reality) were also a trap if
    you wrote to one member and read another one... or you locked it
    with DISP=OLD and could not access it for reading. Rather like
    Microsoft Windows these days, by the way...

    Personally, I found the limited number of extents for a file
    (sorry, dataset) to be most bothersome. If you exceeded this
    (and space from deleted or replaced PDS members was not reclaimed), you got an
    ABEND E37 (or something similar). Bah.

    Although it wasn't all bad... as a student, I did a side job of
    keeping track of file allocations etc on a mainframe for one of
    the institutes, plus some scientific work, on an hourly basis.
    This took quite some hours. I also accidentally broke through
    RACF once; that was fun.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to John Levine on Wed May 1 19:16:41 2024
    John Levine wrote:

    According to Scott Lurndal <slp53@pacbell.net>:
    The worst parts of JCL were "DD" cards, manual track allocation,
    and the lack of any form of real filesystem. PDS do not count.

    OS had named files (which they called datasets) and a catalog that
    said which disk or tape it was on, so you could refer to SYS1.FOOLIB
    and it would find it for you.

    Agreed. But by "real file system", Scott meant a hierarchical system
    such as we are used to today with Windows and Unix, which neither OS nor
    Exec 8 had.



    I agree the explicit track and cylinder allocation was painful. IBM outsmarted themselves with CKD disks that put the record boundaries in hardware

    Technically block boundaries. Anyone who put unblocked 80-byte card
    image data on a disk got the poor space usage and performance he
    deserved.




    and did the key search for indexed files in the disk
    controller.


    It was worse than that, even for non-keyed datasets: as each key or
    count field for a block came under the read head, the controller sent
    an indication of a match or not back to the host channel, which, if it
    wasn't a match, reissued the search command.
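
    A much-simplified C model of that loop (the record layout and names are
    invented; a real channel program would typically be a Search ID/Key Equal
    CCW with a TIC looping back to it): the "controller" compares one count or
    key field per record as it passes the head, and the "channel" keeps
    reissuing the search until it gets a match, tying up the channel the
    whole time.

        #include <stdio.h>
        #include <string.h>

        /* Toy model of a CKD key search; names and layout are invented. */
        struct record { char key[8]; int data; };

        /* One "search" compares a single record's key; the caller (the
           channel, in this model) reissues it for record after record.  */
        static int key_matches(const struct record *r, const char *want)
        {
            return strncmp(r->key, want, sizeof r->key) == 0;
        }

        int main(void)
        {
            struct record track[] = {
                { "ALPHA", 1 }, { "BRAVO", 2 }, { "CHARLIE", 3 },
            };
            const char *want = "CHARLIE";
            int nrec = sizeof track / sizeof track[0];

            for (int i = 0; i < nrec; i++) {          /* the channel's loop */
                if (key_matches(&track[i], want)) {
                    printf("match at record %d, data %d\n", i, track[i].data);
                    return 0;
                }
                /* no match: reissue the search for the next record */
            }
            printf("no record with key %s on this track\n", want);
            return 0;
        }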



    That was fine on a 360/30 which was too slow to do
    anything else (the CPU and channel shared the microengine and the CPU basically stopped during an index search) but very inefficient on
    larger machines.

    Later on VSAM handled disks with fixed block sizes and used B-trees to
    do index searches but by then the cruft was all there.


    Yes, but even early VSAM incurred the overhead until the Extended CKD
    stuff (i.e. define extent and locate commands) came into common use.




    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Stephen Fuld on Wed May 1 20:31:50 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    John Levine wrote:

    According to Scott Lurndal <slp53@pacbell.net>:
    The worst parts of JCL were "DD" cards, manual track allocation,
    and the lack of any form of real filesystem. PDS do not count.

    OS had named files (which they called datasets) and a catalog that
    said which disk or tape it was on, so you could refer to SYS1.FOOLIB
    and it would find it for you.

    Agreed. But by "real file system", Scott meant a hierarchical system
    such as we are used to today with Windows and Unix,

    I don't consider a partitioned dataset to be a 'filesystem'. Burroughs MCP
    had a traditional filesystem (albeit not hierarchical) where the filesystem
    automatically handled area allocation for files[*], which were cataloged in
    a directory. There was a global disk directory supporting all 100-byte media
    units, which provided a pool of disk units from which file areas were
    allocated; the pool could be partitioned by operations personnel into
    'subsystems', and applications or the operator could specify the subsystem
    from which a given file's areas should be allocated.

    Disk pack families (one or more packs in a family) had a directory
    for the family, and the filesystem allocated space from any unit
    in the family for files created on that family. The pack family
    name was the root of the filename (e.g. MASTER/PAYROL, where MASTER
    is the family name and PAYROL is a file on that family).

    [*] There were mechanisms where the operator could allocate specific
    portions of a disk or pack to a file, but that capability was removed
    sometime in the 70s and hadn't been used by customers since the mid 60s.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Scott Lurndal on Wed May 1 13:44:20 2024
    On 5/1/2024 1:31 PM, Scott Lurndal wrote:
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    John Levine wrote:

    According to Scott Lurndal <slp53@pacbell.net>:
    The worst parts of JCL were "DD" cards, manual track allocation,
    and the lack of any form of real filesystem. PDS do not count.

    OS had named files (which they called datasets) and a catalog that
    said which disk or tape it was on, so you could refer to SYS1.FOOLIB
    and it would find it for you.

    Agreed. But by "real file system", Scott meant a hierarchical system
    such as we are used to today with Windows and Unix,

    I don't consider a partitioned dataset to be a 'filesystem'.

    Based on his response, I don't think John does either, and I certainly
    don't.




    Burroughs MCP
    had a traditional filesystem (albeit not hierarchical) where the filesystem
    automatically handled area allocation for files[*], which were cataloged in
    a directory. There was a global disk directory supporting all 100-byte media
    units, which provided a pool of disk units from which file areas were
    allocated; the pool could be partitioned by operations personnel into
    'subsystems', and applications or the operator could specify the subsystem
    from which a given file's areas should be allocated.

    I think OS had essentially that, though you did have to specify more
    information than on non-CKD systems. Exec 8 had pretty much all of that,
    except that there was only one "pool" per device type, but users could
    specify that two files were to be placed on the same or different devices
    (for performance). Of course, removable disks were handled individually.


    Disk pack families (one or more packs in a family) had a directory
    for the family, and the filesystem allocated space from any unit
    in the family for files created on that family. The pack family
    name was the root of the filename (e.g. MASTER/PAYROL, where MASTER
    is the family name and PAYROL is a file on that family).

    No "families" under Exec, though as I indicated above, the user could
    force allocation of multiple files to be on the same or different
    devices, but I think this was rarely used.


    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to John Levine on Thu May 2 10:59:37 2024
    John Levine wrote:
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    John Levine <johnl@taugh.com> writes:

    According to MitchAlsup1 <mitchalsup@aol.com>:

    For a modern ISA, the architect should specify that various bits
    of the general format "must be zero"* when those bits are not used
    in the instruction.
    That was another innovation of the 360. It specifically said that
    unused bits (of which there were a few) and unused instructions (of
    which there were a lot) were reserved. The unused bits had to be
    zero, and the instructions all trapped.
    I would describe this not so much as an innovation but just as
    applying a lesson learned from earlier experience.

    Well, yes, but another 360 innovation was the whole idea of computer architecture, as well as the term. It was the first time that the programmer's view of the computer was described independently of any implementation.

    Some earlier IBM
    model (don't remember which one) had the property that instructions
    were somewhat like microcode, and some undocumented combinations of
    bits would do useful things.

    I wonder if that was the way that the 704 OR'ed the index registers.
    There were three of them, numbered 1, 2, and 4, so if your index field
    was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
    OR'ed combination of indexes) from the base address, so it would have
    taken some really tricky programming to make use of that. But someone
    must have since they documented it and it continued to work on the
    709, 7090, and 7094 until they provided 7 index registers and a mode
    bit to switch between the old OR and the new 7 registers.

    I have never found anything that says whether it was deliberate or an accident of the 704's implementation, and I have looked pretty hard.

    Some TTL of the late 1960's used open collector "wired-OR" for buses
    before tri-state was invented. Generally, such a bus has a pull-down
    resistor and pull-up transistors, so that if any output drives a 1,
    the bus goes to 1.
    A pull-up resistor with pull-down transistors gives a "wired-AND".

    One difference between open collector and tri-state is what happens if
    two outputs drive the bus line at once with different 0/1 values.
    With open collector you just get an OR or AND of the two outputs.
    With tri-state it would burn out the chip.

    The IBM 704 was a tube machine circa 1954.
    They might have used the tube equivalent of an open collector "wired-OR" bus.

    Having three 1-bit register select fields saves a decoder for
    the index register specifier.

    This feature looks like it was just a consequence of using
    a wired-OR bus and skipping the decoder on the index field.
    For completeness they documented what happens if one enables multiple
    index registers at once - that it ORs (as opposed to burning out).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Thu May 2 15:35:17 2024
    According to EricP <ThatWouldBeTelling@thevillage.com>:
    John Levine wrote:
    Some earlier IBM
    model (don't remember which one) had the property that instructions
    were somewhat like microcode, and some undocumented combinations of
    bits would do useful things.

    I wonder if that was the way that the 704 OR'ed the index registers.

    Having three 1-bit register select fields saves a decoder for
    the index register specifier.

    This feature looks like it was just a consequence of using
    a wired-OR bus and skipping the decoder on the index field.
    For completeness they documented what happens if one enables multiple
    index registers at once - that it ORs (as opposed to burning out).

    That's what I was thinking, but as I said, I have never found anything
    to say whether it was deliberate ("it's free, maybe someone will find
    a way to use it") or not ("hey, boss, I found this when writing the diagnostics.")

    For that matter, the 704 was mostly sign-magnitude, but indexing did a
    two's complement subtract of the index register(s) from the address in
    the instruction. I've never found an explanation of why they
    subtracted or why it was a two's comp subtraction.

    Again, there are plenty of guesses like they thought it would let you
    use the index register both as an index and a counter down to zero,
    but that's just a guess, nothing documented. In fact, Fortran stored
    its arrays backward to avoid having to negate values before using them
    as an index, which suggests that whatever they expected didn't happen.

    Since it was 70 years ago, everyone involved has since died so we're
    stuck with the sparse material they left.


    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to John Levine on Thu May 2 16:15:55 2024
    John Levine <johnl@taugh.com> schrieb:

    I wonder if that was the way that the 704 OR'ed the index registers.
    There were three of them, numbered 1, 2, and 4, so if your index field
    was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
    OR'ed combination of indexes) from the base address, so it would have
    taken some really tricky programming to make use of that. But someone
    must have since they documented it and it continued to work on the
    709, 7090, and 7094 until they provided 7 index registers and a mode
    bit to switch between the old OR and the new 7 registers.

    I have never found anything that says whether it was deliberate or an accident of the 704's implementation, and I have looked pretty hard.

    We've had that discussion before :-)

    Looking at the "manual of operation" from 1955, the ORing is shown,
    and it is not listed in the changes from the 1954 version

    So, documented from the release, at least.

    The (incomplete) schematics at Bitsavers will probably show the
    ORs, if anybody can dig through them and the relevant drawings
    are not in the missing parts. I can read "AND" and "OR", but I have
    no idea what "CF", "T" or "2PCF" stand for.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to Thomas Koenig on Thu May 2 16:51:01 2024
    Thomas Koenig wrote:
    John Levine <johnl@taugh.com> schrieb:

    I wonder if that was the way that the 704 OR'ed the index registers.
    There were three of them, numbered 1, 2, and 4, so if your index field
    was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
    OR'ed combination of indexes) from the base address, so it would have
    taken some really tricky programming to make use of that. But someone
    must have since they documented it and it continued to work on the
    709, 7090, and 7094 until they provided 7 index registers and a mode
    bit to switch between the old OR and the new 7 registers.

    I have never found anything that says whether it was deliberate or an
    accident of the 704's implementation, and I have looked pretty hard.

    We've had that discussion before :-)

    Looking at the "manual of operation" from 1955, the ORing is shown,
    and it is not listed in the changes from the 1954 version

    So, documented from the release, at least.

    The (incomplete) schematics at Bitsavers will probably show the
    ORs, if anybody can dig through them and the relevant drawings
    are not in the missing parts. I can read "AND" and "OR", but I have
    no idea what "CF", "T" or "2PCF" stand for.

    I found a 704 glossary that defines:

    CF = Cathode Follower
    PCF = Power Cathode Follower
    THY = Thyratron

    A quick search finds that a cathode follower is a signal-regenerating
    buffer tube circuit for driving high fan-out loads.

    Unfortunately I couldn't find any 704 documents which detail
    its tube logic circuit designs.

    BUT... in searching for "IBM 704 tube" I came across this, which
    shows a picture of a 704 logic circuit

    https://computermuseum.uwaterloo.ca/index.php/Detail/objects/13

    and says the 704 tube logic circuits were designed by someone named
    A. Halsey Dickinson, AND it seems he also designed the 604 tube circuits,
    which were circa 1948, and those are documented.
    This document is dated 1958 so contemporaneous with the 704
    and details the 604 tube logic circuits:

    http://www.bitsavers.org/pdf/ibm/604/227-7609-0_604_CE_man_1958.pdf

    It is possible the 704's "T" gate stands for what 604 called TR or
    Trigger units, which appears to be what we today call an SR Latch.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to EricP on Thu May 2 22:47:57 2024
    EricP wrote:

    Thomas Koenig wrote:
    John Levine <johnl@taugh.com> schrieb:

    I wonder if that was the way that the 704 OR'ed the index registers.
    There were three of them, numbered 1, 2, and 4, so if your index field
    was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
    OR'ed combination of indexes) from the base address, so it would have
    taken some really tricky programming to make use of that. But someone
    must have since they documented it and it continued to work on the
    709, 7090, and 7094 until they provided 7 index registers and a mode
    bit to switch between the old OR and the new 7 registers.

    I have never found anything that says whether it was deliberate or an
    accident of the 704's implementation, and I have looked pretty hard.

    We've had that discussion before :-)

    Looking at the "manual of operation" from 1955, the ORing is shown,
    and it is not listed in the changes from the 1954 version

    So, documented from the release, at least.

    The (incomplete) schematics at Bitsavers will probably show the
    ORs, if anybody can dig through them and the relevant drawings
    are not in the missing parts. I can read "AND" and "OR", but I have
    no idea what "CF", "T" or "2PCF" stand for.

    I found a 704 glossary that defines:

    CF = Cathode Follower
    PCF = Power Cathode Follower
    THY = Thyratron

    A quick search finds that a cathode follower is a signal-regenerating
    buffer tube circuit for driving high fan-out loads.

    Equivalent to bipolar configuration known as Emitter Follower.

    Unfortunately I couldn't find any 704 documents which detail
    its tube logic circuit designs.

    BUT... in searching for "IBM 704 tube" I came across this, which
    shows a picture of a 704 logic circuit

    https://computermuseum.uwaterloo.ca/index.php/Detail/objects/13

    and says the 704 tube logic circuits were designed by someone named
    A. Halsey Dickinson, AND it seems he also designed the 604 tube
    circuits,
    which were circa 1948, and those are documented.
    This document is dated 1958 so contemporaneous with the 704
    and details the 604 tube logic circuits:

    http://www.bitsavers.org/pdf/ibm/604/227-7609-0_604_CE_man_1958.pdf

    Fascinating.

    It is possible the 704's "T" gate stands for what 604 called TR or
    Trigger units, which appears to be what we today call an SR Latch.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to EricP on Sun May 5 10:50:34 2024
    EricP <ThatWouldBeTelling@thevillage.com> schrieb:

    https://computermuseum.uwaterloo.ca/index.php/Detail/objects/13

    and says the 704 tube logic circuits were designed by someone named
    A. Halsey Dickinson, AND it seems he also designed the 604 tube circuits, which were circa 1948, and those are documented.
    This document is dated 1958 so contemporaneous with the 704
    and details the 604 tube logic circuits:

    http://www.bitsavers.org/pdf/ibm/604/227-7609-0_604_CE_man_1958.pdf

    It is possible the 704's "T" gate stands for what 604 called TR or
    Trigger units, which appears to be what we today call an SR Latch.

    Quite interesting.

    As logic gates, they had an inverter, a two-input NAND, a two-input
    NOR, a Pentagrid as a two-input OR, and a cheap Diode Switch (DS) as
    a two-input AND. The 704 seems to have used mostly AND and OR gates,
    so the decision to AND each index register with its bit from the
    instruction and then OR them together actually seems straightforward;
    this also gives you zero if none of them is selected.

    Having the possibility of more than one index register seems to
    have been a consequence of a design whose main purpose was to allow
    either zero or the contents of one register. Even if no documents
    survive to prove this, I'm fairly confident that this is why
    they did it.

    Programmers being programmers, they probably started using the
    feature for some multi-dimensional arrays with sizes of powers
    of two, and IBM was then stuck with the feature.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Sun May 5 19:36:28 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    As logic gates, they had an inverter, a two-input NAND, a two-input
    NOR, a Pentagrid as a two-input OR, and a cheap Diode Switch (DS) as
    a two-input AND. The 704 seems to have used mostly AND and OR gates,
    so the decision to AND each index register with its bit from the
    instruction and then OR them together actually seems straightforward;
    this also gives you zero if none of them is selected.

    Having the possibility of more than one index register seems to
    have been a consequence of a design whose main purpose was to allow
    either zero or the contents of one register. Even if no documents
    survive to prove this, I'm fairly confident that this is why
    they did it.

    That has been my assumption too, despite the lack of documentation.

    Each instruction used 3 bits to specify the index register(s). They
    could have used 2 bits and decoded them with more logic, but my guess,
    again with no documentation at all, is that they already had 15
    address bits, which allowed 32K words, which in 1954 was an enormous
    amount of memory. Encoding the index would give an extra address bit
    and 64K of memory, but who'd be able to afford that much?

    The 704 manual says that you could get 4K, 8K, or 32K. I haven't been
    able to find out how big typical memories were. General Motors' 704
    had 8K, no hint about anyone else's.

    Programmers being programmers, they probably started using the
    feature for some multi-dimensional arrays with sizes of powers
    of two, and IBM was then stuck with the feature.

    There was a somewhat less implausible use. The PAX and PDX
    instructions put the address or decrement part of the AC into
    an index register, and you could load 2 or 3 index registers
    at the same time if you wanted.


    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Mon May 6 15:45:51 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    snip

    Personally I think his assessment of JCL is harsher than it
    deserves. Don't get me wrong, JCL is not my idea of a great
    control language, but it was usable enough in the environment
    that customers were used to.

    From 1972-1979 I worked at a site that had both S/360s (mostly /65s)
    running OS/MVT, and Univac 1108s running Exec 8. I used both,
    though did mostly 1108 stuff.

    For several reasons, JCL was terrible. One was its seemingly
    needless obscurity. For example, IIRC the program name of the COBOL
    compiler was IKFCBL00. In contrast, the COBOL compiler under Exec 8
    was called COB. It also lacked intelligent defaults, which made
    it more cumbersome to use. But this was mostly hidden due to a much
    bigger problem.

    Perhaps due to the architecture's inability to swap a program out and
    reload it at any real address other than the one it had originally,
    all resources to be used had to be available at the beginning of the
    job, so all JCL was scanned at the beginning of the job, and no
    "dynamic" allocations were possible.

    So, for example, the COBOL compiler needed, besides the input file
    name, IIRC four scratch files, an output file and a place to put the
    (spooled) print listing. These had to be explicitly described (as JCL
    DD statements) in the JCL for the job; similarly for other programs.
    This was so inconvenient that IBM provided "Procedures" (essentially
    JCL macros) that included all the necessary DD statements, hid the
    actual program names, etc. Thus to compile, link, and execute a
    COBOL program you invoked a procedure called something like
    ICOBUCLG (I have forgotten exactly, but the last three characters
    were for Compile, Link, and Go). Contrast that with the EXEC 8
    command

    @COB programname

    (The @ was Exec's equivalent of // to indicate a command.) The
    scratch files were allocated internally by the compiler,
    the default print output (which could be overridden) went to the
    printer, and the default output name (again overridable) was the same
    as the input (object files and source files could have the same
    name).

    Similarly, to copy a file from one place to another, JCL required
    at least two DD cards and an exec card with the program IEBGENER.
    Under Exec 8, the command

    @Copy sourcefile, destinationfile

    was sufficient, as both files would be dynamically assigned (Exec
    term) internally by the copy program, and the indicator of success
    or failure went to the default print output.

    As you stated, programmers dealt with this, and it worked
    in batch mode. But it clearly wouldn't work once time sharing
    (called Demand in Exec terminology) became available, so IBM
    had to invent a whole new, incompatible set of commands for TSO.
    The Exec 8 syntax, by contrast, was so straightforward that users
    keyed in exactly the same commands at the terminal as they put on
    cards or in a file in batch mode. That difference persists to this
    day.

    I have no problem accepting all of your characterizations as
    accurate. Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.


    The biggest fault of JCL is that it
    is trying to solve the wrong problem.

    What problem was it trying to solve and what was the "right"
    problem?

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    The problem that was in need of addressing is interactive use. I
    think there are two reasons why JCL was so poor at that. One is
    that they knew that teleprocessing would be important, but they
    tried to cram it into the batch processing model, rather than
    understanding a more interactive work style. The second reason is
    that the culture at IBM, at least at that time, never understood the
    idea that using computers can be (and should be) easy and fun. The
    B in IBM is Business, and Business isn't supposed to be fun. And I
    think that's part of why JCL was not viewed (at IBM) as a failure,
    because their Business customers didn't mind. Needless to say, I am speculating, but for what it's worth those are my speculations.


    It isn't clear that trying
    to do something more ambitious would have fared any better in the
    early 1960s (see also The Second System Effect in MMM).

    Exec 8 was roughly contemporaneous with OS/MVT. I claim it was a
    much better choice.

    Let me clarify my earlier statement: It isn't clear that >>IBM<<
    trying to do something more ambitious would have fared any better
    (at least not at that time). The people who did Exec 8 didn't have
    the baggage of IBM's customer model (here again I am speculating),
    so it was easier for them to do a better job.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to John Levine on Mon May 6 18:22:59 2024
    John Levine <johnl@taugh.com> writes:

    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:

    John Levine <johnl@taugh.com> writes:

    According to MitchAlsup1 <mitchalsup@aol.com>:

    For a modern ISA, the architect should specify that various bits
    of the general format "must be zero"* when those bits are not used
    in the instruction.

    That was another innovation of the 360. It specifically said that
    unused bits (of which there were a few) and unused instructions (of
    which there were a lot) were reserved. The unused bits had to be
    zero, and the instructions all trapped.

    I would describe this not so much as an innovation but just as
    applying a lesson learned from earlier experience.

    Well, yes, but another 360 innovation was the whole idea of computer architecture, as well as the term. It was the first time that the programmer's view of the computer was described independently of any implementation.

    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360. Even in the 1950s companies must have changed implementations of a given model while still conforming to its
    earlier description. As for the word architecture, it seems like
    an obvious and natural word choice, given the hundreds (or more)
    of years of experience with blueprints and buildings.

    The Forbidden City in China was designed and built in a very
    short time (less than 15 years) in the early 1400s, and is
    still there today. The intellectual history of architecture
    is well established; it doesn't seem like any great leap to
    use the word "architecture" for something that is very much
    like a blueprint.

    I grant you that the idea of having a single architecture for a
    line of computers covering a large range of performance was a new
    idea, and a revolutionary one. Certainly IBM deserves credit for
    that. Furthermore that one idea is responsible for much if not
    most of the success of System/360, and well worth recognizing as
    such. However that idea is much more than just the notion of
    describing system behavior.

    Some earlier IBM
    model (don't remember which one) had the property that instructions
    were somewhat like microcode, and some undocumented combinations of
    bits would do useful things.

    I wonder if that was the way that the 704 OR'ed the index registers.
    There were three of them, numbered 1, 2, and 4, so if your index field
    was 5, it OR'ed registers 1 and 4. It subtracted the index (or the
    OR'ed combination of indexes) from the base address, so it would have
    taken some really tricky programming to make use of that. But someone
    must have since they documented it and it continued to work on the
    709, 7090, and 7094 until they provided 7 index registers and a mode
    bit to switch between the old OR and the new 7 registers.

    My memory (fuzzy and unreliable though it may be) is that the
    property I mentioned had nothing to do with index registers, but
    rather was about what operation (or combination of operations)
    would be performed. Apparently there was very little encoding of
    the "opcode" bits, so undocumented combinations of bits would
    have a different, and sometimes useful, effect. Like I said my
    memory is less than 100% reliable so feel free to apply any number of
    grains of salt.

    I have never found anything that says whether it was deliberate or an accident of the 704's implementation, and I have looked pretty hard.

    Another dim memory from ages past is that the choice in early
    versions of FORTRAN to limit arrays to three dimensions was due
    to an early IBM model having three index registers. Probably an
    interested person could track that down if they wanted to, but
    for myself I am content to let the question fade into the mists
    of time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Tim Rentsch on Tue May 7 06:20:40 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
    John Levine <johnl@taugh.com> writes:

    Well, yes, but another 360 innovation was the whole idea of computer
    architecture, as well as the term. It was the first time that the
    programmer's view of the computer was described independently of any
    implementation.

    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360.

    Indeed, but "independently of any implementation" is the key here.
    Brooks wrote that programmers viewed the "Principles of Operation"
    as *the* S/360, rather than the individual models.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Tim Rentsch on Tue May 7 06:19:18 2024
    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:


    snip



    The biggest fault of JCL is that it
    is trying to solve the wrong problem.

    What problem was it trying to solve and what was the "right"
    problem?

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong: there was no
    interactive model in the mid 1960s when JCL was devised. They
    didn't address it because they couldn't forecast (obviously incorrectly)
    that it would be a problem to solve.



    The problem that was in need of addressing is interactive use. I
    think there are two reasons why JCL was so poor at that. One is
    that they knew that teleprocessing would be important, but they
    tried to cram it into the batch processing model, rather than
    understanding a more interactive work style. The second reason is
    that the culture at IBM, at least at that time, never understood the
    idea that using computers can be (and should be) easy and fun. The
    B in IBM is Business, and Business isn't supposed to be fun. And I
    think that's part of why JCL was not viewed (at IBM) as a failure,
    because their Business customers didn't mind. Needless to say, I am speculating, but for what it's worth those are my speculations.


    Fair enough. A couple of comments. By the time TSO/360 came out,
    IIRC in the early 1970s, they were already committed to JCL. TSO ran as a
    batch job on top of the OS, and handled swapping, etc., itself within the
    region allocated to TSO by the OS. It was a disaster. Of course
    this was later addressed by unifying TSO into the OS, but that couldn't
    happen until the S/370s (except the 155 and 165) and virtual memory.
    But the legacy of two control languages was already set by then.

    As for "fun". I agree that IBM didn't think of computers as fun, but
    there were plenty of reasons to support interactive terminals for
    purely business reasons, a major one being programmer productivity in developing business applications.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Tim Rentsch on Tue May 7 06:54:17 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right. JCL has
    the rudiments of a programming language with its COND parameter
    (which ties my brain into knots every time I think about it) and
    the possibility of iteration via submitting new jobs via INTRDR,
    plus its macro facility (but with global variables only).

    Viewed through that lens, I can't think of any (serious) programming
    language that is worse than JCL. Joke languages need not apply.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Stephen Fuld on Tue May 7 06:46:34 2024
    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    but correct me if I am wrong: there was no
    interactive model in the mid 1960s when JCL was devised. They
    didn't address it because they couldn't forecast (obviously incorrectly)
    that it would be a problem to solve.

    That is one of the issues that Brooks raises. OS/360 was already
    predicated on terminal access, but the JCL designers missed that
    and chose punched cards.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Thomas Koenig on Tue May 7 11:38:45 2024
    On Tue, 7 May 2024 06:54:17 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right.

    I wouldn't give that credit to UNIX.
    I was not around, but my impression is that by the time of the creation
    of UNIX it was a common understanding. For example, DEC supplied RSX-11
    with DCL at about the same time (as UNIX got the Thompson shell) and I
    never heard that anybody considered it novel.

    JCL has
    the rudiments of a programming language with its COND parameter
    (which ties my brain into knots every time I think about it) and
    the possibility of iteration via submitting new jobs via INTRDR,
    plus its macro facility (but with global variables only).

    Viewed through that lens, I can't think of any (serious) programming
    language that is worse than JCL. Joke languages need not apply.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Michael S on Tue May 7 08:49:41 2024
    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 7 May 2024 06:54:17 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right.

    I wouldn't give that credit to UNIX.

    I think I should have qualified that statement somewhat. What I
    think is that the full set of features of the Bourne C shells finally
    made it clear to everybody that shells could and should be a complete
    programming language.

    I was not around, but my impression is that by the time of the creation
    of UNIX it was a common understanding. For example, DEC supplied RSX-11
    with DCL at about the same time (as UNIX got the Thompson shell) and I
    never heard that anybody considered it novel.

    The Thompson shell was still restricted to GOTO (as was the RSX-11
    shell).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Tim Rentsch on Tue May 7 11:54:33 2024
    On Mon, 06 May 2024 18:22:59 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360. Even in the 1950s companies must have changed implementations of a given model while still conforming to its
    earlier description.

    Were they?
    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Tim Rentsch on Tue May 7 08:50:11 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    John Levine <johnl@taugh.com> writes:
    Well, yes, but another 360 innovation was the whole idea of computer
    architecture, as well as the term. It was the first time that the
    programmer's view of the computer was described independently of any
    implementation.

    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360. Even in the 1950s companies must have changed implementations of a given model while still conforming to its
    earlier description.

    Sure, the 7094 was a compatible successor of the 704, but the idea of implementation independence turns out to be much more profound than
    most people (probably including its inventors at the time) realized.
    The hardware behind the z16 is vastly different from that of any of
    the initial members of the 360 family, its microarchitecture is also
    vastly different (caches, virtual memory, out-of-order execution,
    speculative execution, superscalar execution), and yet it can run
    software written for the S/360 60 years ago.

    And the important part is not just what to put into the architecture,
    but also what to leave out. There is no end to clever ideas that
    would improve the performance of a particular implementation but are bad
    ideas when it comes to architecture. A widely accepted example is
    branch delay slots.

    An interesting example is IA-64: It was designed as a long-lived
    architecture (while the VLIW machines that provided some of the ideas
    for IA-64 seem to be mainly designed as implementations), but it
    turned out that the special architectural features it had could be
    provided through microarchitecture (in particular, OoO execution) to
    earlier architectures, and that these features were pointless for OoO microarchitectures.

    Another interesting example is Alpha. It was claimed to be designed
    for a 25-year lifetime (actually there were 9 years between the
    introduction of the 21064 in 1992 and the cancellation of the 21464 in
    2001). They left out all features that (they claimed) hindered
    performance, such as a flags register and byte/word-access (BWX)
    instructions; the BWX case was supposedly because they required ECC
    for write-back caches. But the first implementations (EV4, EV45, EV5,
    EV56) all have write-through L1 caches, so BWX instructions would have
    been no problem (and they were added in EV56 in 1996, i.e. after 4
    years). EV6, which has a write-back L1 cache, has a write buffer, so generating ECC for the BWX instructions was not particularly
    expensive. The downside of leaving an architectural feature like BWX
    out in the first implementation is that much software would forego
    using BWX for a very long time (if Alpha had lived that long).

    As for the word architecture, it seems like
    an obvious and natural word choice, given the hundreds (or more)
    of years of experience with blueprints and buildings.

    I don't think that their achievement is in choosing the word
    "architecture". It reflects on the division of labor between the
    planner of a building and the people who implement the plan, but it
    does not transport the fact that computer ISA "architects" design the
    interface between software and hardware for a vast amount of software
    and a vast difference in potential hardware across the decades, much
    of it unforeseeable for the computer architect. By contrast, building architects are more like computer microarchitects, designing for
    building materials of the day, and the uses of the buildings tend to
    be less varied than the software that runs on a general-purpose ISA;
    ok, my office is in a building that was built as a residence building
    in 1913, but that's not because the architect designed it for
    general-purpose usage.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Michael S on Tue May 7 13:33:54 2024
    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 06 May 2024 18:22:59 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360. Even in the 1950s companies must have changed
    implementations of a given model while still conforming to its
    earlier description.

    Were they?
    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    The Burroughs B5500 and B3500 were contemporaneous with the S/360
    and provided 100% SW compatible models across a performance range
    during the same 1965 to 1978 time period as the S/360.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Thomas Koenig on Tue May 7 13:40:54 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 7 May 2024 06:54:17 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right.

    I wouldn't give that credit to UNIX.

    I think I should have qualified that statement somewhat. What I
    think the full set of features of the Bourne C shells finally made

    The Bourne shell and the C shell were two completely different
    shells (the latter followed the former by several years).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Scott Lurndal on Tue May 7 13:59:39 2024
    Scott Lurndal <scott@slp53.sl.home> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 7 May 2024 06:54:17 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right.

    I wouldn't give that credit to UNIX.

    I think I should have qualified that statement somewhat. What I
    think the full set of features of the Bourne C shells finally made

    The Bourne shell and the C shell were two completely different
    shells (the latter followed the former by several years).

    Having worked with both, I certainly know the differences.

    But if Wikipedia is to be trusted, Bill Joy released the C shell in
    1978, and the Bourne shell was released in 1979.

    That makes them roughly contemporary, considering that the C shell
    may have been released at an earlier stage of development.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Stephen Fuld on Tue May 7 13:39:07 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    Tim Rentsch wrote:

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no
    interactive model in the mid 1960s when JCL was devised.

    BASIC and DTSS were developed in 1963.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Scott Lurndal on Tue May 7 14:56:44 2024
    Scott Lurndal <scott@slp53.sl.home> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Scott Lurndal <scott@slp53.sl.home> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 7 May 2024 06:54:17 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right.

    I wouldn't give that credit to UNIX.

    I think I should have qualified that statement somewhat. What I
    think the full set of features of the Bourne C shells finally made

    The Bourne shell and the C shell were two completely different
    shells (the latter followed the former by several years).

    Having worked with both, I certainly know the differences.

    But if Wikipedia is to be trusted, Bill Joy released the C shell in
    1978, and the Bourne shell was released in 1979.

    The V6 shell was released in 1975.

    Wikipedia claims that this still used the Thompson shell, and looking
    at its man page at http://man.cat-v.org/unix-6th/1/sh , that seems
    to be the case - it makes no mention of a lot of the more elaborate
    features of the Bourne shell, which appears to be described in the man
    page at http://man.cat-v.org/unix_7th/1/sh .

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Thomas Koenig on Tue May 7 14:19:32 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Scott Lurndal <scott@slp53.sl.home> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Michael S <already5chosen@yahoo.com> schrieb:
    On Tue, 7 May 2024 06:54:17 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    Like I said, I'm not a fan of JCL, not at all, I just
    think it wasn't as bad as the commentary in The Design of Design
    makes it out to be.

    I think the point he made is subtly different.

    The UNIX shells have demonstrated that a command interface is,
    and should be, a programming language in its own right.

    I wouldn't give that credit to UNIX.

    I think I should have qualified that statement somewhat. What I
    think the full set of features of the Bourne C shells finally made

    The Bourne shell and the C shell were two completely different
    shells (the latter followed the former by several years).

    Having worked with both, I certainly know the differences.

    But if Wikipedia is to be trusted, Bill Joy released the C shell in
    1978, and the Bourne shell was released in 1979.

    The V6 shell was released in 1975.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Scott Lurndal on Tue May 7 18:00:50 2024
    Scott Lurndal wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    Tim Rentsch wrote:

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no interactive model in the mid 1960s when JCL was devised.

    BASIC and DTSS were developed in 1963.


    Good point. So IBM was "guilty" of vastly misunderstanding and underestimating the future importance of interactive users.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Stephen Fuld on Tue May 7 18:14:13 2024
    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    Scott Lurndal wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    Tim Rentsch wrote:

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no
    interactive model in the mid 1960s when JCL was devised.

    BASIC and DTSS were developed in 1963.


    Good point. So IBM was "guilty" of vastly misunderstanding and underestimating the future importance of interactive users.

    Only the team that made JCL, it seems.

    Brooks claims that System/360 was premeditated for terminal
    use from the start, and that somebody didn't get the memo
    when designing JCL (my words).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Thomas Koenig on Tue May 7 19:11:33 2024
    Thomas Koenig wrote:

    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    Scott Lurndal wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
    Tim Rentsch wrote:

    The problem it was trying to solve is contained in its name:
    Job Control Language. It tacitly accepted the non-interactive
    batch model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no
    interactive model in the mid 1960s when JCL was devised.

    BASIC and DTSS were developed in 1963.


    Good point. So IBM was "guilty" of vastly misunderstanding and
    underestimating the future importance of interactive users.

    Only the team that made JCL, it seems.


    Just as an aside, though this thread may be somewhat OT, I consider it
    fun and interesting.

    I am not sure exactly what he is saying here. By JCL, does he mean
    just the syntax of the language, the funky program names, etc., or does
    he include things like the requirement that all allocation be done
    before the first program executes, which is perhaps more of an OS
    design issue?



    Brooks claims that System/360 was premeditated for terminal
    use from the start, and that somebody didn't get the memo
    when designing JCL (my words).

    In what sense was the S/360 architecture designed for terminal use? I
    already talked about the base register, BALR/Using stuff that prevented
    an interactive program from being swapped out and swapped in to a
    different real memory location. This was a significant hindrance to
    "terminal use".

    BTW, another problem occurs in transaction workloads where there is
    another level of software between the user and the OS, but instead of
    TSO, it was IMS or CICS, which did the terminal handling and imposed
    another level of scheduling (i.e. IMS/CICS competed with other programs
    for OS resources and the transactions within IMS/CICS competed with
    each other for the resources that IMS/CICS got.) Note that this
    overhead was so much that the high volume transaction users couldn't
    use it and instead developed their own OS (ACP).

    Also, the protection mechanism of S/360 was such that there was no
    protection between transactions within CICS (I don't know about IMS),
    so that an errant subscript could cause the overwriting of other transactions, or even CICS itself. This was also fixed with the
    S/370's virtual memory.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Tue May 7 19:25:55 2024
    According to Scott Lurndal <slp53@pacbell.net>:
    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    The Burroughs B5500 and B3500 were contemporaneous with the S/360
    and provided 100% SW compatible models across a performance range
    during the same 1965 to 1978 time period as the S/360.

    Wikipedia says that while S/360 and the B5500 were announced in 1964,
    the B3500 was announced in 1966. In the discussion of MCP on the B3500
    it says "It shared many architectural features with the MCP of
    Burroughs' Large Systems stack machines, but was entirely different
    internally, and was coded in assembly language, not an ALGOL
    derivative." That suggests it was compatible for user programs, but
    not for operating systems.

    On the 360, if two models had similar memory and peripherals, you
    could IPL and run the same operating system since it was specified
    down to the details of interrupts and I/O instructions.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Tue May 7 19:47:46 2024
    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    In what sense was the S/360 architecture designed for terminal use? I
    already talked about the base register, BALR/Using stuff that prevented
    an interactive program from being swapped out and swapped in to a
    different real memory location. This was a significant hindrance to
    "terminal use".

    With sufficiently disciplined programming, you could swap and move data
    by updating the base registers. APL\360 did this quite successfully
    and handled a lot of interactive users on a 360/50.

    Reading between the lines in the IBMSJ architecture paper, I get the
    impression they believed that moving code and data with base registers
    would be a lot easier than it was, and missed the facts that a lot of
    pointers are stored in memory, and it is hard to know what registers
    are being used as base registers when.
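
    A minimal C sketch of the trap being described, with invented buffer and
    variable names: an offset relative to a movable base survives the move once
    the supervisor fixes up the base register, but an absolute pointer that was
    saved in memory silently keeps pointing at the old location.

        #include <stdio.h>
        #include <string.h>

        /* Two buffers standing in for "real memory before swap-out" and
         * "real memory after swap-in at a different address". */
        static char region_a[64], region_b[64];

        int main(void)
        {
            char  *base = region_a;   /* plays the role of the base register   */
            size_t off  = 8;          /* base-relative offset: survives a move */

            strcpy(base + off, "hello");
            char *saved = base + off; /* absolute pointer kept in memory       */

            /* The supervisor moves the program and fixes up the base register. */
            memcpy(region_b, region_a, sizeof region_a);
            memset(region_a, 0, sizeof region_a);
            base = region_b;

            printf("base+offset: '%s'\n", base + off);  /* still finds the data     */
            printf("saved ptr:   '%s'\n", saved);       /* stale: old, cleared copy */
            return 0;
        }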

    This paper from U of Michigan lays out the problem and proposes a
    paging design which soon became the 360/67:

    https://dl.acm.org/doi/pdf/10.1145/321312.321313

    TSS was a disaster due to an extreme case of second system syndrome,
    but Michigan's MTS and IBM skunkworks CP/67 worked great.

    BTW, another problem occurs in transaction workloads where there is
    another level of software between the user and the OS, but instead of
    TSO, it was IMS or CICS, ...

    There are two ways to write interactive software, which I call the
    time-sharing approach and the SAGE approach. In the time-sharing
    approach, the operating system stops and starts user processes and
    transparently saves and restores the process status. In the SAGE
    approach, programs are broken up into little pieces each of which runs
    straight through, explicitly saves whatever context it needs to, and
    then returns to the OS.

    The bad news about the SAGE approach is that the programming is
    tedious and as you note bugs can be catastrophic. The good news is
    that it can get fantastic performance for lots of users. It was
    invented for the SAGE missile defense system on tube computers in the
    1950s, adapted for the SABRE airline reservation system on 7094s in
    the 1960s and has been used over and over, with the current trendy
    version being node.js. We now have better ways to describe
    continuations which make the programming a little easier, but it's
    still a tradeoff. IMS and CICS used the SAGE approach to provide good performance on specific applications.
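
    A minimal sketch in C of the SAGE style, with invented structure and
    function names: each handler runs one step straight through, records its
    own continuation in a per-terminal context block, and returns to a central
    dispatch loop, instead of relying on the OS to suspend and resume it
    transparently.

        #include <stdio.h>

        /* Per-terminal context the program must save and restore by hand;
         * under the time-sharing approach the OS would keep the equivalent
         * state (registers, stack, program counter) for us. */
        struct txn {
            int  step;      /* explicit "where was I?" continuation marker */
            int  account;
            long balance;
        };

        /* One step of a transaction: runs straight through, records the next
         * step, and returns to the dispatcher (no blocking, no hidden state). */
        static int handle(struct txn *t)
        {
            switch (t->step) {
            case 0:                      /* parse request, start "disk read" */
                t->account = 42;
                t->step = 1;
                return 1;                /* more work pending */
            case 1:                      /* "disk read" done: compute, reply */
                t->balance = 1000;
                printf("account %d balance %ld\n", t->account, t->balance);
                t->step = 2;
                return 0;                /* finished */
            default:
                return 0;
            }
        }

        int main(void)
        {
            struct txn terminals[3] = {{0}};
            int pending = 3;
            while (pending > 0) {        /* the central dispatch loop */
                pending = 0;
                for (int i = 0; i < 3; i++)
                    pending += handle(&terminals[i]);
            }
            return 0;
        }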

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Tue May 7 19:51:43 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    I was not around, but my impression is that by the time of the creation of UNIX
    it was a common understanding. For example, DEC supplied RSX-11 with
    DCL at about the same time (as UNIX got the Thompson shell) and I never
    heard that anybody considered it novel.

    The Thompson shell was still restricted to GOTO (as was the RSX-11
    shell).

    You're probably thinking of the Mashey shell. One of the first usenix
    tapes has patches I wrote in about 1976 to add simple variables with
    single character names to that shell. It was an improvement, but the
    Bourne shell was way better.

    Re when this stuff was invented, I did some work on CP/67 when I was
    in high school in about 1970 and I recall that even then people
    routinely ran files of CMS commands. Don't remember whether there were variables and control flow or whether that came later with REXX.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Levine on Tue May 7 19:58:24 2024
    John Levine <johnl@taugh.com> writes:
    According to Scott Lurndal <slp53@pacbell.net>:
    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    The Burroughs B5500 and B3500 were contemporaneous with the S/360
    and provided 100% SW compatible models across a performance range
    during the same 1965 to 1978 time period as the S/360.

    Wikipedia says that while S/360 and the B5500 were announced in 1964,
    the B3500 was announced in 1966. In the discussion of MCP on the B3500
    it says "It shared many architectural features with the MCP of
    Burroughs' Large Systems stack machines, but was entirely different
    internally, and was coded in assembly language, not an ALGOL
    derivative." That suggests it was compatible for user programs, but
    not for operating systems.

    I spent almost a decade working on the later versions of the
    B3500 operating system. With minor changes (detected at runtime), the same MCP ran
    on B3500/B3700/B4700, B4800, B4925/B4955; three generations. We rewrote the MCP
    to enable access to more memory circa 1982 in a high-level language
    called SPRITE, incorporating quite a bit of the assembler code
    from the prior MCP and changed the name to V-Series which included
    three distinct models each using the same MCP/VS: V340/V380, V420 and
    the four processor ECL SMP V5x0.

    The customer never needed to build the MCP or SYSGEN it.
    It configured itself when installed and could be dynamically
    reconfigured without a halt/load (i.e. reboot).


    On the 360, if two models had similar memory and peripherals, you
    could IPL and run the same operating system since it was specified
    down to the details of interrupts and I/O instructions.

    Same for the B3/4/5xxx series.

    The first major architectural change occurred in 1982; prior
    models all ran the same MCP.

    User application binaries were of course forward portable to _all_ generations of the MCP and CPU across the entire life of the product line.

    The I/O subsystems were always superior to the IBM channel programs:
    there was a separate I/O processor to which the OS presented
    an I/O descriptor, and the I/O processor wrote a result descriptor
    into memory before raising the I/O complete interrupt. The IOP
    managed data transfer between the peripheral and host.

    The I/O descriptor would indicate the high level operation:
    - Read Card, Read Tape Forward, Read Tape Backward, Read Disk Block
    - Punch Card, Write Tape Forward, Write Tape Backward, Write Tapemark, Write disk Block
    - Print Line, etc.
    - Terminal Read/Write
    - Cancel prior operation (for e.g. interactive READ, or to recover from error)
    - Identify channel (returned a peripheral identifier unique to the controller type,
    which identified which driver should be loaded during boot).

    It included a pair of addresses defining the bounds of the
    buffer that the I/O processor would DMA to/from; for disk it
    included the sector number, and for tape the skip count for
    space forward/backward operations.

    The R/D varied from 16 to 64 bits depending on the device,
    with the first 16 bits common across all devices, and the
    rest was device specific.

    These I/O peripherals were common across both large systems (B[567]xxx)
    and medium systems (B[234]xxx).
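
    A rough C sketch of the shape of such an interface, with entirely
    hypothetical field names and widths (not the actual Burroughs descriptor
    formats): the OS fills in an I/O descriptor, the IOP performs the transfer
    and writes a result descriptor, and only then raises the completion
    interrupt.

        #include <stdint.h>

        /* Hypothetical layouts, for illustration only. */
        enum io_op {
            IO_READ_CARD, IO_PUNCH_CARD,
            IO_READ_TAPE_FWD, IO_WRITE_TAPE_FWD,
            IO_READ_DISK_BLOCK, IO_WRITE_DISK_BLOCK,
            IO_PRINT_LINE, IO_TERMINAL_READ, IO_TERMINAL_WRITE,
            IO_CANCEL, IO_IDENTIFY_CHANNEL
        };

        struct io_descriptor {          /* built by the OS, handed to the IOP */
            enum io_op op;
            uint64_t   buf_begin;       /* bounds of the buffer the IOP DMAs to/from */
            uint64_t   buf_end;
            uint64_t   sector_or_skip;  /* disk: sector number; tape: skip count */
        };

        struct result_descriptor {      /* written by the IOP before the interrupt */
            uint16_t   common;          /* first 16 bits common to all devices */
            uint64_t   device_bits;     /* remainder is device specific */
        };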

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Levine on Tue May 7 20:07:43 2024
    John Levine <johnl@taugh.com> writes:
    According to Scott Lurndal <slp53@pacbell.net>:
    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    The Burroughs B5500 and B3500 were contemporaneous with the S/360
    and provided 100% SW compatible models across a performance range
    during the same 1965 to 1978 time period as the S/360.

    Wikipedia says that while S/360 and the B5500 were announced in 1964,
    the B3500 was announced in 1966. In the discussion of MCP on the B3500

    Sorry, I meant to imply that the B3500 (and successors) were 100% sw
    compatible within the medium systems family.

    Likewise for the large systems (B5500) line. I did not intend to imply
    that the B3500 and B5500 were application (or MCP) compatible with each other; they weren't (a 48-bit stack machine and a variable length BCD
    machine have little in common architecturally).

    The MCP had some usability and command similarities between the families,
    but the implementation was family specific.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to John Levine on Tue May 7 20:51:25 2024
    John Levine wrote:

    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:


    This paper from U of Michigan lays out the problem and proposes a
    paging design which soon became the 360/67:

    https://dl.acm.org/doi/pdf/10.1145/321312.321313

    TSS was a disaster due to an extreme case of second system syndrome,
    but Michigan's MTS and IBM skunkworks CP/67 worked great.

    TSS at CMU was extensively rewritten in assembly and became quite tolerable--hosting 30+ interactive jobs along with a background
    batch processing system. When I arrived in Sept 1975 it was quite
    unstable with up times less than 1 hour. 2 years later it would run
    for weeks at a time without going down.

    As I understand it* most of the changes were simply getting rid of
    things that were not present on CMU's 360/67.

    (*) was told by someone who should have known circa 1974 who also
    worked in the machine room 3rd floor in what became Scaife hall.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 8 02:51:30 2024
    According to Scott Lurndal <slp53@pacbell.net>:
    Wikipedia says that while S/360 and the B5500 were announced in 1964,
    the B3500 was announced in 1966. In the discussion of MCP on the B3500

    Sorry, I meant to imply that the B3500 (and successors) were 100% sw compatible within the medium systems family.

    Likewise for the large systems (B5500) line. ...

    Oh, OK. In view of the timing, I'd guess that the people at Burroughs,
    who were certainly not dumb, looked at the S/360 material and figured
    oh, that's a good idea, we can do that too.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 8 02:56:15 2024
    According to MitchAlsup1 <mitchalsup@aol.com>:
    TSS was a disaster due to an extreme case of second system syndrome,
    but Michigan's MTS and IBM skunkworks CP/67 worked great.

    TSS at CMU was extensively rewritten in assembly and became quite tolerable--hosting 30+ interactive jobs along with a background
    batch processing system. When I arrived in Sept 1975 it was quite
    unstable with up times less than 1 hour. 2 years later it would run
    for weeks at a time without going down.

    For reasons I do not want to try to guess, AT&T did the software
    development for the 5ESS phone switches in a Unix system that sat on
    top of TSS. After IBM cancelled TSS, AT&T continued to use it as some
    sort of special order thing. At IBM there were only a handful of
    programmers working on it, by that time all quite experienced, and I
    hear that they also got rid of a lot of cruft and made it much faster
    and more reliable.

    At the same time, IBM turned the skunkworks CP/67 into VM/370 with a
    much larger staff, leading to predictable consequences.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to John Levine on Tue May 7 23:50:04 2024
    On 5/7/2024 12:47 PM, John Levine wrote:
    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    In what sense was the S/360 architecture designed for terminal use? I
    already talked about the base register, BALR/Using stuff that prevented
    an interactive program from being swapped out and swapped in to a
    different real memory location. This was a significant hindrance to
    "terminal use".

    With sufficiently disciplined programming, you could swap and move data
    by updating the base registers. APL\360 did this quite successfully
    and handled a lot of interactive users on a 360/50.

    Wasn't APL\360 an interpreter? If so, then moving instructions and data
    around was considerably simpler.



    Reading between the lines in the IBMSJ architecture paper, I get the impression they believed that moving code and data with base registers
    would be a lot easier than it was, and missed the facts that a lot of pointers are stored in memory, and it is hard to know what registers
    are being used as base registers when.

    Interesting. That would seem to imply that it wasn't that they didn't
    think about the problems that base addressing would cause, they just
    (vastly) underestimated the cost of fixing it. A different "design"
    problem indeed.



    This paper from U of Michigan lays out the problem and proposes a
    paging design which soon became the 360/67:

    https://dl.acm.org/doi/pdf/10.1145/321312.321313

    TSS was a disaster due to an extreme case of second system syndrome,
    but Michigan's MTS and IBM skunkworks CP/67 worked great.

    BTW, another problem occurs in transaction workloads where there is
    another level of software between the user and the OS, but instead of
    TSO, it was IMS or CICS, ...

    There are two ways to write interactive software, which I call the time-sharing
    approach and the SAGE approach. In the time-sharing approach, the operating system
    stops and starts user processes and transparently saves and restores the process
    status. In the SAGE approach, programs are broken up into little pieces each of
    which runs straight through, explicitly saves whatever context it needs to, and
    then returns to the OS.

    Unconventional terminology, but clear and I agree with your point. It
    is perhaps ironic that TSO (Time Sharing Option)/360 did not use the
    "time sharing" approach. :-(



    The bad news about the SAGE approach is that the programming is
    tedious and as you note bugs can be catastrophic. The good news is
    that it can get fantastic performance for lots of users.

    I think the key word here is "can". TSO/360 was a performance dog. :-(



    It was
    invented for the SAGE missile defense system on tube computers in the
    1950s, adapted for the SABRE airline reservation system on 7094s in
    the 1960s and has been used over and over, with the current trendy
    version being node.js. We now have better ways to describe
    continuations which make the programming a little easier, but it's
    still a tradeoff. IMS and CICS used the SAGE approach to provide good performance on specific applications.

    Agreed. But the tradeoff with CICS (I don't know about IMS) was the
    extra overhead of two levels of scheduling. I believe this is why it
    was not useful for the highest performance systems that instead used ACP.




    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to John Levine on Wed May 8 07:36:32 2024
    John Levine <johnl@taugh.com> schrieb:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    I was not around, but my impression is that by the time of the creation of UNIX
    it was a common understanding. For example, DEC supplied RSX-11 with
    DCL at about the same time (as UNIX got the Thompson shell) and I never
    heard that anybody considered it novel.

    The Thompson shell was still restricted to GOTO (as was the RSX-11
    shell).

    You're probably thinking of the Mashey shell.

    Disclaimer: I never worked on those old systems, my first UNIX
    experience was with HP-UX in the late 1980s (where I accidentally
    landed in vi and could not get out, but that's another story).

    One of the first usenix
    tapes has patches I wrote in about 1976 to add simple variables with
    single character names to that shell. It was an improvement, but the
    Bourne shell was way better.

    https://grosskurth.ca/bib/1976/mashey-command.pdf (written by Mashey)
    credits the original shell to Thompson, so I believe we are talking
    about the same shell, just with different names.

    Re when this stuff was invented, I did some work on CP/67 when I was
    in high school in about 1970 and I recall that even then people
    routinely ran files of CMS commands. Don't remember whether there were variables and control flow or whether that came later with REXX.

    Hmmm... I looked at

    https://bitsavers.org/pdf/ibm/370/VM/370/Release_1/GX20-1926-1_VM_370_Quick_Guide_For_Users__Rel_1_Apr73.pdf

    and found a reference to $LOOP and a reference to "tokens" (which I
    suppose are variables), so that definitely predated the UNIX shells.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Stephen Fuld on Wed May 8 09:38:41 2024
    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    Thomas Koenig wrote:

    Only the team that made JCL, it seems.


    Just as an aside, though this thread may be somewhat OT, I consider it
    fun and interesting.

    I am not sure exactly what he is saying here. By JCL, does he mean
    just the syntax of the language,

    His main criticism is that the design team failed to notice that
    JCL was, in fact, a programming language, that the design team
    thought of it as "just a few cards for job control". This led to
    attributes such as DISP doing what he called "verbish things",
    i.e. commands, dependence on card formats, a syntax similar to,
    but incompatible with, the S/360 assembler, insufficient control
    structures etc.

    He did not criticize the OS itself too much, with its complicated
    allocation strategies etc., mostly some remarks on the file structure
    which he says could have been simplified.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Wed May 8 02:27:02 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 06 May 2024 18:22:59 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360. Even in the 1950s companies must have changed
    implementations of a given model while still conforming to its
    earlier description.

    Were they?
    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    I think a counterexample is the LGP-30 (1956) and its successor
    the LGP-21 (1963). (For reference System/360 and OS/360 were
    announced in April 1964.) I expect there are other examples
    but it's hard to get the historical data needed to answer the
    question. Another example may be the IBM 709 and IBM 7090, both
    done in the 1950s.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Michael S on Wed May 8 10:03:51 2024
    Michael S <already5chosen@yahoo.com> schrieb:

    My impression is that until S/360 there was no such thing as different
    but 100% SW compatible models.

    I think the important thing was that S/360 was designed and built,
    right from the start, as a _series_ of compatible computers, which
    were upward- and downward-compatible. They had the challenge
    of designing an architecture where the instructions for the
    high-end supercomputers still needed to work (although slowly)
    on the low-end bread and butter machines, and what was efficient
    on the low-end bread and butter machines should not constrain the
    high-end supercomputers.

    Most other computer series were built one at a time, with successors
    usually extending the previous ones (which IBM also did with the /370,
    series). The VAX may have been another such line - DEC did not release
    several models all at once, but they did release the cheaper and slower
    11/750 after they had released the 11/780.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Anton Ertl on Wed May 8 03:37:31 2024
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    John Levine <johnl@taugh.com> writes:

    Well, yes, but another 360 innovation was the whole idea of computer
    architecture, as well as the term. It was the first time that the
    programmer's view of the computer was described independently of any
    implementation.

    I don't buy it. An architecture is just a description of system
    behavior, and surely there were descriptions of system behavior
    before System/360. Even in the 1950s companies must have changed
    implementations of a given model while still conforming to its
    earlier description.

    Sure, the 7094 was a compatible successor of the 704,

    I think a more accurate description is to say that the IBM 709 was
    an upgraded version of the IBM 704, the IBM 7090 was a compatible
    replacement for the 709, and the IBM 7094 was an upgraded version
    of the 7090. In both cases the upgrades included changes. The
    7094, for example, still had a three-bit field to select an index
    register, but there were 7 index registers, not 3, only one of
    which could be selected rather than OR-ing together all the index
    registers whose bits were on. To be fair I should add that the
    7094 could be run in a compatible mode where only 3 index registers
    were used, with OR-ing like in the earlier models, but there were
    other changes (or maybe only additions) as well. The 7094 may have
    been upward compatible relative to the 7090, but it wasn't plug
    compatible, and TTBOMU wasn't even upward compatible relative to
    the 704.
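
    A small C sketch of the addressing difference being described (schematic
    only; as I understand the 704-family convention, the selected index value
    is subtracted from the 15-bit address field): in multiple-tag mode the
    three tag bits select index registers that are OR'd together, while in
    7094 mode the same field is just a register number.

        #include <stdio.h>

        #define ADDR_MASK 077777u   /* 15-bit addresses on this family */

        /* Earlier models (and 7094 "multiple tag mode"): each of the three tag
         * bits selects an index register, and the selected registers are OR'd
         * together before being applied to the address field. */
        static unsigned ea_multiple_tag(unsigned addr, unsigned tag,
                                        const unsigned xr[4])
        {
            unsigned idx = 0;
            if (tag & 1) idx |= xr[1];
            if (tag & 2) idx |= xr[2];
            if (tag & 4) idx |= xr[3];
            return (addr - idx) & ADDR_MASK;   /* index value is subtracted */
        }

        /* 7094 normal mode: the same 3-bit tag is simply a register number 1..7. */
        static unsigned ea_7094(unsigned addr, unsigned tag, const unsigned xr[8])
        {
            unsigned idx = tag ? xr[tag] : 0;
            return (addr - idx) & ADDR_MASK;
        }

        int main(void)
        {
            unsigned xr3[4] = {0, 5, 12, 0};             /* XR1=5, XR2=12       */
            unsigned xr7[8] = {0, 5, 12, 0, 0, 0, 0, 9}; /* ... XR7=9           */
            printf("%o\n", ea_multiple_tag(0100, 3, xr3)); /* subtracts 5|12=13 */
            printf("%o\n", ea_7094(0100, 7, xr7));         /* subtracts XR7=9   */
            return 0;
        }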

    but the idea of
    implementation independence turns out to be much more profound than
    most people (probably including its inventors at the time) realized.

    The point I was trying to make upthread (and whose significance seems
    to have been missed by some people) is that the important lessons had
    already been learned and understood -- by some key people at IBM,
    although certainly not all -- before the System/360 effort started.
    It isn't an accident that IBM decided to make an upward- and downward-compatible family of computer models. That the System/360 effort
    ended up producing a system description that is independent of any
    particular model, and the benefits that accrue as a result, is simply
    a consequence of that earlier and deeper understanding.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Thomas Koenig on Wed May 8 14:18:04 2024
    On Wed, 8 May 2024 10:03:51 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:

    My impression is that until S/360 there was no such thing as
    different but 100% SW compatible models.

    I think the important thing was that S/360 was designed and built,
    right from the start, as a _series_ of compatible computers, which
    were upward- and downward-compatible. They had the challenge
    of designing an architecture where the instructions for the
    high-end supercomputers still needed to work (although slowly)
    on the low-end bread and butter machines, and what was efficient
    on the low-end bread and butter machines should not constrain the
    high-end supercomputers.


    Of course, there is a theory and there is a practice.
    In practice, downward compatibility lasted ~half a year, until Model 20.
    Upward compatibility did not fare much better and was broken
    approximately one year after initial release, in Model 67.
    That is, if I didn't get upward and downward backward.

    According to my understanding, since ~1970, IBM completely gave up on
    all sorts of compatibility except backward compatibility. In more
    recent decades it was further reduced to application-level backward compatibility.

    Most other computer series were built one at a time, with successors
    usually extending the previous ones (which IBM also did with the /370 series). The VAX may have been another such line - DEC did not
    release several models all at once, but they did release the cheaper
    and slower 11/750 after they had released the 11/780.

    I'd think that by 1977 (VAX) backward compatibility was widespread in
    the industry.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Thomas Koenig on Wed May 8 14:31:38 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:
    John Levine <johnl@taugh.com> schrieb:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    I was not around, but my impression is that by the time of the creation of UNIX
    it was a common understanding. For example, DEC supplied RSX-11 with
    DCL at about the same time (as UNIX got the Thompson shell) and I never
    heard that anybody considered it novel.

    The Thompson shell was still restricted to GOTO (as was the RSX-11 shell).

    You're probably thinking of the Mashey shell.

    Disclaimer: I never worked on those old systems, my first UNIX
    experience was with HP-UX in the late 1980s (where I accidentally
    landed in vi and could not get out, but that's another story).

    One of the first usenix
    tapes has patches I wrote in about 1976 to add simple variables with
    single character names to that shell. It was an improvement, but the
    Bourne shell was way better.

    https://grosskurth.ca/bib/1976/mashey-command.pdf (written by Mashey)
    credits the original shell to Thompson, so I believe we are talking
    about the same shell, just with different names.

    Re when this stuff was invented, I did some work on CP/67 when I was
    in high school in about 1970 and I recall that even then people
    routinely ran files of CMS commands. Don't remember whether there were
    variables and control flow or whether that came later with REXX.

    Hmmm... I looked at

    https://bitsavers.org/pdf/ibm/370/VM/370/Release_1/GX20-1926-1_VM_370_Quick_Guide_For_Users__Rel_1_Apr73.pdf

    and found a reference to $LOOP and a reference to "tokens" (which I
    suppose are variables), so that definitely predated the UNIX shells.

    Burroughs had something called WFL (WorkFlow Language) that was
    effectively a compiler for a shell-like language.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Levine on Wed May 8 14:28:37 2024
    John Levine <johnl@taugh.com> writes:
    According to Scott Lurndal <slp53@pacbell.net>:
    Wikipedia says that while S/360 and the B5500 were announced in 1964,
    the B3500 was announced in 1966. In the discussion of MCP on the B3500

    Sorry, I meant to imply that the B3500 (and successors) were 100% sw compatible within the medium systems family.

    Likewise for the large systems (B5500) line. ...

    Oh, OK. In view of the timing, I'd guess that the people at Burroughs,
    who were certainly not dumb, looked at the S/360 material and figured
    oh, that's a good idea, we can do that too.

    Note that it takes several years to design and build the
    machine before first delivery. I'd say that Burroughs
    didn't look at the S/360 material before developing
    either family; rather, the B3500 was a logical extension
    of the B300 family and the B5500 was a logical extension
    of the B5000.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Thomas Koenig on Wed May 8 15:55:16 2024
    Thomas Koenig wrote:

    Stephen Fuld <SFuld@alumni.cmu.edu.invalid> schrieb:
    Thomas Koenig wrote:

    Only the team that made JCL, it seems.


    Just as an aside, though this thread may be somewhat OT, I consider
    it fun and interesting.

    I am not sure exactly what he is saying here. By JCL, does he mean
    just the syntax of the language,

    His main criticism is that the design team failed to notice that
    JCL was, in fact, a programming language, that the design team
    thought of it as "just a few cards for job control". This led to
    attributes such as DISP doing what he called "verbish things",
    i.e. commands, dependence on card formats, a syntax similar to,
    but incompatible with, the S/360 assembler, insufficient control
    structures etc.

    He did not criticize the OS itself too much, with its complicated
    allocation strategies etc., mostly some remarks on the file structure
    which he says could have been simplified.


    Thank you Thomas. That clarifies it. As must be clear by now, I agree
    with him about the syntax, etc.; my criticisms go much deeper.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn Wheeler@21:1/5 to John Levine on Wed May 8 07:58:52 2024
    John Levine <johnl@taugh.com> writes:
    According to MitchAlsup1 <mitchalsup@aol.com>:
    TSS was a disaster due to an extreme case of second system syndrome,
    but Michigan's MTS and IBM skunkworks CP/67 worked great.

    TSS at CMU was extensively rewritten in assembly and became quite tolerable--hosting 30+ interactive jobs along with a background
    batch processing system. When I arrived in Sept 1975 it was quite
    unstable with up times less than 1 hour. 2 years later it would run
    for weeks at a time without going down.

    For reasons I do not want to try to guess, AT&T did the software
    development for the 5ESS phone switches in a Unix system that sat on
    top of TSS. After IBM cancelled TSS, AT&T continued to use it as some
    sort of special order thing. At IBM there were only a handful of
    programmers working on it, by that time all quite experienced, and I
    hear that they also got rid of a lot of cruft and made it much faster
    and more reliable.

    At the same time, IBM turned the skunkworks CP/67 into VM/370 with a
    much larger staff, leading to predictable consequences.

    TSS/360 was decommitted and group reduced from 1100 to 20. Morph of
    TSS/360 to TSS/370 was much better (with only 20 people).

    Both Amdahl and IBM hardware field support claimed they wouldn't support
    370 machines w/o industrial strength EREP. The effort to add industrial strength EREP to UNIX was many times the effort to do 370 port. They did
    a stripped down TSS/370 with just hardware layer and EREP (called SSUP)
    with UNIX built on top. IBM AIX/370 and Amdahl UTS were run in VM/370
    virtual machines ... leveraging VM/370 industrial EREP.

    CP/40 was done on 360/40 with virtual memory hardware mods; it morphs
    into CP/67 when the 360/67, with virtual memory standard, became available.
    Group had 11 people (1/100th TSS/360).

    When I graduate and join IBM, one of my hobbies was enhanced production operating systems for internal datacenters. With the decision to
    add virtual memory to all 370s, it was decided to do VM/370 and some
    of the science center people move to the 3rd flr taking over the
    IBM Boston Programming Center for the VM/370 group. The group was
    expanding to 200+ and outgrew the 3rd flr, moving to the vacant IBM SBS
    bldg out in Burlington Mall (off rt128).

    Note the morph of CP67->VM370 dropped and/or simplified a bunch of
    features (including multiprocessor support). In 1974, I started
    migrating a bunch of CP67 stuff to VM370 R2. I had also done an automated benchmarking system and it was the 1st thing I migrated ... however,
    VM370 couldn't complete a full set of benchmarks w/o crashing ... so the
    next thing I had to migrate was the CP67 kernel synchronization &
    serialization function ... in order for VM370 to complete the benchmark
    series. Then I started migrating a bunch of my enhancements.

    For some reason AT&T longlines got an early version of my production
    VM370 CSC/VM (before the multiprocessor support) ... and over the years
    moved it to the latest IBM 370s and propagated it around to other
    locations. Then comes the early 80s when next new IBM was 3081 ... which
    was originally a multiprocessor only machine. The IBM corporate
    marketing rep for AT&T tracks me down to ask for help with retrofitting multiprocessor support to old CSC/VM ... concern was that all those AT&T machines would migrate to the latest Amdahl single processor (which had
    about the same processing as aggregate of the 3081 two processor).


    --
    virtualization experience starting Jan1968, online at home since Mar1970

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 8 20:30:17 2024
    According to Michael S <already5chosen@yahoo.com>:
    Of course, there is a theory and there is a practice.
    In practice, downward compatibility lasted ~half a year, until Model 20. Upward compatibility did not fare much better and was broken
    approximately one year after initial release, in Model 67.
    That is, if I didn't get upward and downward backward.

    The 360/22, /25, /30, /40, /50, /65, /75, and /85 were all compatible implementations of S/360. You could write a program that ran on any of
    them, and it would also run on larger and smaller models.

    The /20, /44, and /67 were each for special markets. The /20 was
    basically for people who still wanted a 1401 (admittedly a pretty big
    market), the /44 for realtime, and the /67 for a handful of
    time-sharing customers. The /67 was close enough to a /65 that you
    could use it as one, often /67 timesharing during the day, and /65
    batch overnight. The /91 and /95 were also compatible except that the
    /91 left out decimal arithmetic, which OS/360 would trap and slowly
    emulate if need be.

    According to my understanding, since ~1970, IBM completely gave up on
    all sorts of compatibility except backward compatibility.

    No. They updated the architecture and then shipped multiple
    implementations of each one. So when they went to S/370, there was the
    370/115, /125, /135, /138, /145, /148, /158, and /168 which were
    upward and downward compatible as were the 303x and 434x series. The
    /155 and /165 were originally missing the paging hardware but later
    could be field upgraded.

    The point here is that you could write a program for any model, and
    you could expect it to work unmodified on both larger and smaller
    models. Later on there was S/390 and zSeries, again each with models
    that were both upward and downward compatible.

    I'd think that by 1977 (VAX) backward compatibility was widespread in
    the industry.

    More like 1957. The IBM 705 was mostly backward compatible with the
    702, and the 709 with the 704. But only in one direction -- if you
    wanted your 709 program to work on a 704, you had to be careful not to
    use any of the new 709 stuff, and since the I/O was completely
    different, you needed suitable operating systems or at least I/O
    libraries.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to Lynn Wheeler on Wed May 8 16:50:25 2024
    Lynn Wheeler wrote:

    For some reason AT&T longlines got an early version of my production
    VM370 CSC/VM (before the multiprocessor support) ... and over the years
    moved it to the latest IBM 370s and propagated it around to other
    locations. Then comes the early 80s when next new IBM was 3081 ... which
    was originally a multiprocessor only machine. The IBM corporate
    marketing rep for AT&T tracks me down to ask for help with retrofitting multiprocessor support to old CSC/VM ... concern was that all those AT&T machines would migrate to the latest Amdahl single processor (which had
    about the same processing as aggregate of the 3081 two processor).

    Regarding retrofitting multiprocessor support to old CSC/VM,
    by which I take it you mean adding SMP support to a uni-processor OS,
    do you remember what changes that entailed? Presumably a lot more than acquiring one big spinlock every time the OS was entered.
    That seems like a lot of work for one person.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Wed May 8 21:02:41 2024
    According to Stephen Fuld <sfuld@alumni.cmu.edu.invalid>:
    With sufficiently disciplined programming, you could swap and move data
    by updating the base registers. APL\360 did this quite successfully
    and handled a lot of interactive users on a 360/50.

    Wasn't APL\360 an interpreter? If so, then moving instructions and data around was considerably simpler.

    That's right. It could switch between users at well defined points that
    made it practical to update the base registers pointing to the user's workspace.

    Reading between the lines in the IBMSJ architecture paper, I get the
    impression they believed that moving code and data with base registers
    would be a lot easier than it was, and missed the facts that a lot of
    pointers are stored in memory, and it is hard to know what registers
    are being used as base registers when.

    Interesting. That would seem to imply that it wasn't that they didn't
    think about the problems that base addressing would cause, they just
    (vastly) underestimated the cost of fixing it. A different "design"
    problem indeed.

    In Design of Design, Brooks said they knew about virtual memory but thought
    it was too expensive, which he also says was a mistake, soon fixed in S/370.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn Wheeler@21:1/5 to EricP on Wed May 8 15:10:20 2024
    EricP <ThatWouldBeTelling@thevillage.com> writes:
    Lynn Wheeler wrote:
    For some reason AT&T longlines got an early version of my production
    VM370 CSC/VM (before the multiprocessor support) ... and over the years
    moved it to the latest IBM 370s and propagated it around to other
    locations. Then comes the early 80s when next new IBM was 3081 ... which
    was originally a multiprocessor only machine. The IBM corporate
    marketing rep for AT&T tracks me down to ask for help with retrofitting
    multiprocessor support to old CSC/VM ... concern was that all those AT&T
    machines would migrate to the latest Amdahl single processor (which had
    about the same processing as aggregate of the 3081 two processor).

    Regarding retrofitting multiprocessor support to old CSC/VM,
    by which I take it you mean adding SMP support to a uni-processor OS,
    do you remember what changes that entailed? Presumably a lot more than acquiring one big spinlock every time the OS was entered.
    That seems like a lot of work for one person.

    Charlie had invented compare&swap (for his initials CAS) when he was
    doing fine-grain CP/67 multiprocessor locking at the science center
    ... when presented to the 370 architecture owners for adding to 370
    ... they said that the POK favorite son operating system (OS/360
    MVT/MVS) owners claimed that 360/67 test&set was sufficient (i.e. they had a big
    kernel spin-lock) ... this also accounted for MVS documentation saying
    that two-processor support only had 1.2-1.5 times the throughput of
    a single processor.
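
    A minimal sketch in C11 atomics of the two styles being contrasted (the
    library calls stand in for the 360/370 Test and Set and Compare and Swap
    instructions; this is not VM or MVS code): a single test-and-set lock
    serializes every entry to the kernel, while compare-and-swap lets an
    individual word be updated without any global lock.

        #include <stdatomic.h>

        /* One big kernel spin-lock in the test-and-set style: every processor
         * entering the kernel contends for the same flag. */
        static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

        void kernel_enter(void) { while (atomic_flag_test_and_set(&kernel_lock)) ; }
        void kernel_leave(void) { atomic_flag_clear(&kernel_lock); }

        /* Fine-grained update in the compare-and-swap style: retry the single
         * word update until no other processor changed it in between; no
         * global lock is held. */
        void counter_add(_Atomic long *ctr, long delta)
        {
            long old = atomic_load(ctr);
            while (!atomic_compare_exchange_weak(ctr, &old, old + delta))
                ;   /* 'old' is refreshed by the failed exchange; just retry */
        }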

    I had initially done the multiprocessor kernel re-org for the VM/370
    Release2 based CSC/VM ... but not the actual multiprocessor
    support. The internal world-wide sales&marketing support HONE systems
    were long-time customers for my enhanced CSC/VMs, and then the US HONE
    datacenters were consolidated in silicon valley (trivia: when facebook
    1st moved into silicon valley, it was into a new bldg built next
    door to the former US HONE consolidated datacenter). They had added
    "loosely-coupled" shared DASD support to a complex of eight large systems
    with load-balancing and fall-over. I then added SMP, tightly-coupled,
    multiprocessor support to VM/370 Release3 based CSC/VM so they could add
    a 2nd processor to each system (for 16 processors total). Their two processor
    systems were getting twice the throughput of a single processor ... a
    combination of very low overhead SMP, tightly-coupled, multiprocessor
    locking support and a hack for cache affinity that improved the cache
    hit ratio (with faster processing offsetting the multiprocessor
    overhead).
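
    The cache-affinity hack can be pictured with a schematic dispatcher sketch
    in C (invented names, not VM/370 code): when picking the next runnable
    task, prefer one that last ran on the current processor, since its working
    set is more likely to still be in that processor's cache.

        #include <stddef.h>

        struct task {
            int last_cpu;    /* processor this task last ran on (-1 = never) */
            int runnable;
        };

        /* Schematic dispatcher: prefer a runnable task that last ran on this
         * processor (its working set may still be cached), otherwise fall
         * back to any runnable task. */
        struct task *pick_next(struct task *tasks, size_t n, int this_cpu)
        {
            struct task *fallback = NULL;
            for (size_t i = 0; i < n; i++) {
                if (!tasks[i].runnable)
                    continue;
                if (tasks[i].last_cpu == this_cpu)
                    return &tasks[i];       /* cache-warm candidate */
                if (!fallback)
                    fallback = &tasks[i];
            }
            return fallback;                /* NULL if nothing is runnable */
        }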

    The VM/370 SMP, tightly-coupled, multiprocessor locking was a rather
    modest amount of work ... compared to all the other stuff I was doing.

    trivia: The future system stuff (to replace all 370) was going on during
    much of this period. When FS implodes there was a mad rush to get stuff back
    into the 370 product pipelines, including kicking off the quick&dirty 3033
    and 3081 in parallel
    http://www.jfsowa.com/computer/memo125.htm

    about the same time, I'm roped into helping with a 16-processor
    tightly-coupled 370 effort and we con the 3033 processor engineers to
    work on it in their spare time (a lot more interesting than remapping
    168 logic to 20% faster chips) ... everybody thought it was great until
    somebody tells the head of POK that it could be decades before the POK
    favorite son operating system had (effective) 16-processor support (aka
    their spin-lock; POK doesn't ship 16-processor SMP until after the turn
    of the century). Then the head of POK invites some of us to never visit POK
    again. The head of POK also manages to convince corporate to kill the
    VM370 product, shutdown the development group and transfer all the
    people to POK for MVS/XA (supposedly otherwise they wouldn't be able to
    ship MVS/XA on time) ... Endicott eventually manages to save the VM370
    product mission for the low&midrange ... but has to recreate a VM370 development group from scratch.

    I then transfer out to west coast and get to wander around (both IBM &
    non-IBM) datacenters in silicon valley, including disk engineering
    (bldg14) and disk product test (bldg15) across the street. At the time
    they are running prescheduled, 7x24, stand-alone testing ... and had
    recently tried MVS but it had a 15min mean-time-between-failure (in that
    environment, lots of faulty hardware). I offer to rewrite the I/O
    supervisor to make it bullet-proof and never fail so they can have any
    amount of on-demand testing, greatly improving productivity (downside:
    any time they have problems, they imply it's my software and I have to
    spend increasing time playing disk engineer diagnosing their hardware
    problems). I do an (internal only) San Jose Research report on the I/O
    Integrity work and happen to mention the MVS 15min MTBF, bringing down
    the wrath of the MVS organization on my head.

    --
    virtualization experience starting Jan1968, online at home since Mar1970

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn Wheeler@21:1/5 to John Levine on Wed May 8 15:43:52 2024
    John Levine <johnl@taugh.com> writes:
    implementations of each one. So when they went to S/370, there was the 370/115, /125, /135, /138, /145, /148, /158, and /168 which were
    upward and downward compatible as were the 303x and 434x series. The
    /155 and /165 were originally missing the paging hardware but later
    could be field upgraded.

    shortly after joining IBM I get con'ed into helping 370/195 to add multi-threading; 195 pipeline didn't have branch prediction, speculative execution, etc ... so conditional branches drained the pipeline ... and
    most codes only ran at half rate. Multi-thread would simulate two
    processor operation ... and two i-streams running at half rate might
    keep aggregate 195 throughput much higher (modulo the OS360 MVT
    multiprocessor support only having 1.2-1.5 times the throughput of a
    single processor).

    A little over a decade ago I was asked to track down the decision to add
    virtual memory to all 370s and found the staff member who reported to
    the executive making the decision. Basically OS/360 MVT storage
    management was so bad that execution regions had to be specified four
    times larger than used; as a result a 1mbyte 370/165 normally would only
    run four regions concurrently, insufficient to keep the system busy and
    justified. Mapping MVT to a 16mbyte virtual address space (aka VS2/SVS)
    would allow increasing the number of concurrently running regions by a
    factor of four (with little or no paging), keeping 165 systems busy
    ... overlapping execution with disk I/O.

    I had gotten into something of a dustup with VS2/SVS, claiming their
    page replacement algorithm was making poor choices ... they eventually
    fell back to the position that, since they were expecting nearly negligible paging rates,
    it wouldn't make any difference.

    Along the way, 370/165 engineers said that if they had to retrofit the
    full 370 virtual memory architecture ... it would slip the announcement
    date by six months ... so the decision was made to drop features ... and all
    the other systems had to retrench to the 165 subset ... and any software dependent on the dropped features had to be reworked. For VM/370, they
    were planning on using R/O shared segment protection (one of the
    features dropped for 370/165) for sharing CMS pages ... and so had to substitute a real kludge. Also the 370/195 multi-threading was canceled
    ... since it was deemed too difficult to upgrade the 195 for virtual memory.

    Amdahl had won the battle to make ACS 360-compatible ... folklore is
    that executives then killed ACS/360 because it would advance the
    state-of-the-art too fast and IBM could lose control of the market
    (Amdahl leaves IBM shortly afterward) https://people.computing.clemson.edu/~mark/acs_end.html
    ... above also has multi-threading reference.

    --
    virtualization experience starting Jan1968, online at home since Mar1970

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Thu May 9 02:10:17 2024
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    My impression is that until S/360 there was no such thing as different
    by 100% SW compatible models.

    I think a counterexample is the LGP-30 (1956) and its successor
    the LGP-21 (1963).

    They were pretty close but it says on the intertubes that the -30 put
    memory words 9 apart on the drum and the -21 put them 18 apart, which
    I presume means that you would need to arrange your data differently
    to get good performance.

    Another example may be the IBM 709 and IBM 7090, both done in the 1950s.

    They were pretty similar but the 7090 had a more complex channel and
    new instructions to manage it. There was a trap mode that caught the
    old I/O instructions they used to run 704 or 709 code on a 7090 but of
    course not vice versa.

    Compare that to S/360 where every model had the same channel
    interface, the same I/O instructions, and the same I/O interrupts.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to John Levine on Thu May 9 05:08:44 2024
    John Levine wrote:

    According to Stephen Fuld <sfuld@alumni.cmu.edu.invalid>:
    With sufficiently disciplined programming, you could swap and move
    data by updating the base registers. APL\360 did this quite
    successfully and handled a lot of interactive users on a 360/50.

    Wasn't APL\360 an interpreter? If so, then moving instructions and
    data around was considerably simpler.

    That's right. It could switch between users at well defined points
    that made it practical to update the base registers pointing to the
    user's workspace.

    Reading between the lines in the IBMSJ architecture paper, I get
    the impression they believed that moving code and data with base
    registers would be a lot easier than it was, and missed the facts
    that a lot of pointers are stored in memory, and it is hard to
    know what registers are being used as base registers when.

    Interesting. That would seem to imply that it wasn't that they
    didn't think about the problems that base addressing would cause,
    they just (vastly) underestimated the cost of fixing it. A
    different "design" problem indeed.

    In Design of Design, Brooks said they knew about virtual memory but
    thought it was too expensive, which he also says was a mistake, soon
    fixed in S/370.


    While I agree that virtual memory was probably too expensive in the mid
    1960s, I disagree that it was required, or even the optimal solution
    back then. A better solution would have been to have a small number of
    "base registers" that were not part of the user set, but could be
    reloaded by the OS whenever a program needed to be swapped in at a
    different address than the one it was swapped out from.




    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to John Levine on Thu May 9 10:54:22 2024
    On Wed, 8 May 2024 20:30:17 -0000 (UTC)
    John Levine <johnl@taugh.com> wrote:

    According to Michael S <already5chosen@yahoo.com>:
    Of course, there is a theory and there is a practice.
    In practice, downward compatibility lasted ~half a year, until Model
    20. Upward compatibility did not fare much better and was broken approximately one year after initial release, in Model 67.
    That is, if I didn't get upward and downward backward.

    The 360/22, /25, /30, /40, /50, /65, /75, and /85 were all compatible implementations of S/360. You could write a program that ran on any of
    them, and it would also run on larger and smaller models.

    The /20, /44, and /67 were each for special markets. The /20 was
    basically for people who still wanted a 1401 (admittedly a pretty big market), the /44 for realtime, and the /67 for a handful of
    time-sharing customers. The /67 was close enough to a /65 that you
    could use it as one, often /67 timesharing during the day, and /65
    batch overnight.

    But programs (or OSes) that utilize the features of /67 would not run
    on anything else, right?

    The /91 and /95 were also compatible except that the
    /91 left out decimal arithmetic, which OS/360 would trap and slowly
    emulate if need be.


    How about programs that depend on precise floating-point exceptions?

    According to my understanding, since ~1970, IBM completely gave up on
    all sorts of compatibility except backward compatibility.

    No. They updated the architecture and then shipped multiple implementations of each one. So when they went to S/370, there was the 370/115, /125, /135, /138, /145, /148, /158, and /168 which were
    upward and downward compatible as were the 303x and 434x series. The
    /155 and /165 were originally missing the paging hardware but later
    could be field upgraded.


    What about various vector facilities that they were adding and removing seemingly at random during the 1970s and 1980s?
    My impression was that absent vector facilities were not emulated. Is
    that wrong?

    The point here is that you could write a program for any model, and
    you could expect it to work unmodified on both larger and smaller
    models. Later on there was S/390 and zSeries, again each with models
    that were both upward and downward compatible.

    I'd think that by 1977 (VAX) backward compatibility was widespread in
    the industry.

    More like 1957. The IBM 705 was mostly backward compatible with the
    702, and the 709 with the 704. But only in one direction

    "One direction" is synonymous with "backward compatible", is it not?
    But the word "mostly" is suspect.

    -- if you
    wanted your 709 program to work on a 704, you had to be careful not to
    use any of the new 709 stuff, and since the I/O was completely
    different, you needed suitable operating systems or at least I/O
    libraries.

    Can you, please, define the meaning of upward and downward
    compatibility? I had never seen these terms before this thread, so it is possible that I don't understand the meaning.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Michael S on Thu May 9 08:19:39 2024
    Michael S <already5chosen@yahoo.com> schrieb:

    Can you, please, define the meaning of upward and downward
    compatibility? I had never seen these terms before this thread, so it is possible that I don't understand the meaning.

    The term comes from Brooks. Specifically, he applied it to the
    S/360 line of computers which had a very wide performance and
    price range, and programs (including operating systems) were
    binary compatible from the lowest to the highest performance and
    price machine.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Thomas Koenig on Thu May 9 13:53:56 2024
    On Thu, 9 May 2024 08:19:39 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:

    Can you, please, define the meaning of upward and downward
    compatibility? I had never seen these terms before this thread, so
    it is possible that I don't understand the meaning.

    The term comes from Brooks. Specifically, he applied it to the
    S/360 line of computers which had a very wide performance and
    price range, and programs (including operating systems) were
    binary compatible from the lowest to the highest performance and
    price machine.


    I suppose, it means that my old home PC (Core-i5 3550) is downward
    compatible with my old work PC (Core-i7 3770). And my old work PC is
    upward compatible with my old home PC.

    But I still don't know if it would be correct to say that my old work
    PC is downward compatible with my just a little newer small FPGA
    development server (E3 1271 v3). My guess is that it would be incorrect,
    but it's just a guess.

    If Brooks were still alive, we could have tried to ask him. But since he
    is not, and since I have no plans to read his books myself, my only
    chance of knowing is for you or for John Levine to find the definition
    in his writings and then tell me.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Michael S on Thu May 9 13:10:42 2024
    Michael S wrote:

    On Thu, 9 May 2024 08:19:39 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:

    Can you, please, define the meaning of upward and downward
    compatibility? I had never seen these terms before this thread, so
    it is possible that I don't understand the meaning.

    The term comes from Brooks. Specifically, he applied it to the
    S/360 line of computers which had a very wide performance and
    price range, and programs (including operating systems) were
    binary compatible from the lowest to the highest performance and
    price machine.


    I suppose, it means that my old home PC (Core-i5 3550) is downward
    compatible with my old work PC (Core-i7 3770). And my old work PC is
    upward compatible with my old home PC.

    But I still don't know if it would be correct to say that my old work
    PC is downward compatible with with my just a little newer small FOGA development server (E3 1271 v3). My guess that it would be incorrect,
    but it's just guess.

    If Brooks were still alive, we could have tried to ask him. But since he
    is not, and since I have no plans to read his books myself, my only
    chance of knowing is for you or for John Levine to find the
    definition in his writings and then tell me.

    Perhaps this interpretation will help clear things up. Think of
    compatibility as a two dimensional graph. On the Y axis is some
    measure of compute power. The X axis is time. So upward/downward compatibility is among models announced at the same time and delivered
    within a small time of each other. Backward compatibility is along the
    X axis, that is, between models announced/delivered at a different
    points in time. So under this scheme, the S/360 model 30 was upward
    compatible with the model /65 (different Y values, but the same X
    values), but the S/370s (not counting the /155 and /165) were backward compatible with the S/360 models (different X values).

    The key innovation that IBM made with the S/360 was to announce systems
    with a wide range of performance *at the same time*, i.e. different Y
    values and the same X value.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Michael S on Thu May 9 12:13:47 2024
    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 9 May 2024 08:19:39 -0000 (UTC)
    The term comes from Brooks. Specifically, he applied it to the
    S/360 line of computers which had a very wide performance and
    price range, and programs (including operating systems) were
    binary compatible from the lowest to the highest performance and
    price machine.


    I suppose, it means that my old home PC (Core-i5 3550) is downward
    compatible with my old work PC (Core-i7 3770). And my old work PC is
    upward compatible with my old home PC.

    Given that both use Ivy Bridge CPUs, there is no compatibility issue
    as far as the CPU is concerned. For other parts of the PCs, one would
    have to discuss everything separately.

    When it comes to upwards/downwards, you would have to compare with
    Saltwell or Silvermont. Ivy Bridge supports AVX, while Saltwell and
    Silvermont don't.

    AMD is somewhat better at these things: They added AVX support to
    their small cores in 2013 (Jaguar) and their big cores in 2011
    (Bulldozer), while Intel added AVX to their small cores in 2021
    (Gracemont) and to their big cores in 2011 (Sandy Bridge).

    For AVX2, AMD added it to their big cores in 2015 (Excavator), but by
    that time they had given up on the two-pronged approach and were on
    the way to the one-size-fits-all Zen line.

    But I still don't know if it would be correct to say that my old work
    PC is downward compatible with my just a little newer small FPGA development server (E3 1271 v3).

    Haswell is a successor to Ivy Bridge and supports AVX2 (unlike Ivy
    Bridge). So it's a case of unidirectional compatibility.
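
    For what it's worth, this is also why portable x86 binaries usually
    probe for such features at run time rather than assume them. A minimal
    sketch (GCC/Clang on x86; __builtin_cpu_supports is their builtin, and
    the sum_* routines here are hypothetical stand-ins for real AVX/AVX2
    code paths):

        static long sum_scalar(const int *v, long n)
        {
            long s = 0;
            for (long i = 0; i < n; i++) s += v[i];
            return s;
        }
        /* In a real program these would contain AVX / AVX2 intrinsics. */
        static long sum_avx (const int *v, long n) { return sum_scalar(v, n); }
        static long sum_avx2(const int *v, long n) { return sum_scalar(v, n); }

        long sum(const int *v, long n)
        {
            if (__builtin_cpu_supports("avx2")) return sum_avx2(v, n); /* Haswell and later  */
            if (__builtin_cpu_supports("avx"))  return sum_avx(v, n);  /* Sandy/Ivy Bridge   */
            return sum_scalar(v, n);            /* older parts, or parts with AVX fused off  */
        }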

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Anton Ertl on Thu May 9 13:39:55 2024
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 9 May 2024 08:19:39 -0000 (UTC)
    The term comes from Brooks. Specifically, he applied it to the
    S/360 line of computers which had a very wide performance and
    price range, and programs (including operating systems) were
    binary compatible from the lowest to the highest performance and
    price machine.


    I suppose, it means that my old home PC (Core-i5 3550) is downward compatible with my old work PC (Core-i7 3770). And my old work PC is
    upward compatible with my old home PC.

    Given that both use Ivy Bridge CPUs, there is no compatibility issue
    as far as the CPU is concerned.

    Actually, there are cases where this is not true: Intel sabotages upwards/downwards compatibility by disabling architectural features on
    cheaper models (in particular, they disabled AVX on Ivy Bridge CPUs
    sold as Pentium G or Celeron G). But for your Core ix-based Ivy
    Bridges, AFAIK there is no such problem.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Stephen Fuld on Thu May 9 19:52:34 2024
    On Thu, 9 May 2024 13:10:42 -0000 (UTC)
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> wrote:

    Michael S wrote:

    On Thu, 9 May 2024 08:19:39 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Michael S <already5chosen@yahoo.com> schrieb:

    Can you, please, define the meaning of upward and downward compatibility? I had never seen these terms before this thread,
    so it is possible that I don't understand the meaning.

    The term comes from Brooks. Specifically, he applied it to the
    S/360 line of computers which had a very wide performance and
    price range, and programs (including operating systems) were
    binary compatible from the lowest to the highest performance and
    price machine.


    I suppose, it means that my old home PC (Core-i5 3550) is downward compatible with my old work PC (Core-i7 3770). And my old work PC is
    upward compatible with my old home PC.

    But I still don't know if it would be correct to say that my old
    work PC is downward compatible with my just a little newer
    small FPGA development server (E3 1271 v3). My guess is that it would
    be incorrect, but it's just a guess.

    If Brooks were still alive, we could have tried to ask him. But since
    he is not, and since I have no plans to read his books myself,
    my only chance of knowing is for you or for John Levine to find the definition in his writings and then tell me.

    Perhaps this interpretation will help clear things up. Think of compatibility as a two dimensional graph. On the Y axis is some
    measure of compute power. The X axis is time. So upward/downward compatibility is among models announced at the same time and delivered
    within a small time of each other. Backward compatibility is along
    the X axis, that is, between models announced/delivered at a different
    points in time. So under this scheme, the S/360 model 30 was upward compatible with the model /65 ( different Y values, but the same x
    values) , but the S370s (not counting the /155 and /165) were backward compatible with the S/260 models (different x values)

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*, i.e.
    different Y values and the same X value.






    So, when two models are pretty close on the time scale, but from the
    software perspective one of them is a superset of the other, then they
    are not upward/downward compatible?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Michael S on Thu May 9 20:28:00 2024
    In article <20240509195234.000000c5@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

    So, when two models are pretty close on time scale, but from the
    software perspective one of them is a superset of the other then
    they are not upward/downward compatible?

    There's one-way compatibility, from the subset machine to the superset
    machine. That might be upwards or downwards, depending on the vagaries of marketing and their control of features.

    One may, of course, also restrict oneself to the common subset. Some
    compilers make that easier than others.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Fri May 10 00:08:54 2024
    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    In Design of Design, Brooks said they knew about virtual memory but
    thought it was too expensive, which he also says was a mistake, soon
    fixed in S/370.

    While I agree that virtual memory was probably too expensive in the mid 1960s, I disagree that it was required, or even the optimal solution
    back then. A better solution would have been to have a small number of
    "base registers" that were not part of the user set, but could be
    reloaded by the OS whenever a program needed to be swapped in to a
    different address than it was swapped out to.

    Well, Brooks was there and said not having virtual memory was a
    mistake. Dunno how much that is related to Lynn's point that paging
    let them avoid the consequences of terrible storage management in MVS.

    When designing the address structure of S/360 they had a big problem
    in that they knew they wanted large addresses, 24 bits to be extended
    later to 31 or 32, but they didn't want to waste a full word on every
    address in programs running on small models. Base register with 12 bit
    offset solved that quite well, making the address part of an
    instruction 16 bits while not segmenting the memory. Since there were
    a lot of registers it was usually possible to set up a few base
    registers at the start of a routine and not do a lot of reloading. (At
    least if the compiler was smart enough; Fortran G had a bad habit of
    loading an address from the constant pool every time it wanted to use
    a variable or an array.)
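
    For concreteness, the RX-style address calculation being described is
    small enough to sketch (a rough model, not IBM's formal definition;
    register 0 as base or index means "no register", and addresses wrap to
    24 bits):

        #include <stdint.h>

        #define ADDR_MASK 0x00FFFFFFu              /* 24-bit addressing      */

        /* base and index are 4-bit register numbers, disp12 the 12-bit
           displacement; together they fit in 16 bits of the instruction. */
        uint32_t rx_effective_address(const uint32_t gpr[16], unsigned base,
                                      unsigned index, unsigned disp12)
        {
            uint32_t ea = disp12 & 0x0FFFu;        /* 0..4095 byte reach     */
            if (base  != 0) ea += gpr[base];
            if (index != 0) ea += gpr[index];
            return ea & ADDR_MASK;
        }

    The 12-bit displacement is what limits each base register to covering
    4096 bytes, hence setting up a few base registers per routine.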

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.





    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Thu May 9 23:49:30 2024
    According to Michael S <already5chosen@yahoo.com>:
    The /20, /44, and /67 were each for special markets. ...

    But programs (or OSes) that utilize the features of /67 would not run
    on anything else, right?

    How about programs that depend on precise floating-point exceptions?

    As I think I said in the message you quoted, those three models were
    for special markets and they weren't fully compatible with other
    models. Having run my share of Fortran programs on a 360/91, I can
    report that you used the same Fortran compilers as you'd use on any
    other model of 360. The imprecise interrupts were a pain for debugging
    but in practice no useful Fortran programs depended on catching and
    recovering from floating point exceptions so it didn't matter.

    I also wrote some programs for a /20 and considering that it was a 16
    bit machine with 8 registers, it was surprising how similar the
    programming was to a real 32 bit 360 with 16 registers.

    The other eight models were extremely compatible including the exceptions.

    What about various vector facilities that they were adding and removing seemingly at random during the 1970s and 1980s?

    Those were defined as special features. If you wanted to run your
    program, the machine needed to have the features your program used.

    More like 1957. The IBM 705 was mostly backward compatible with the
    702, and the 709 with the 704. But only in one direction

    "One direction" is synonymous with "backward compatible", is it not?
    But the word "mostly" is suspect.

    It was close enough that you could run 704 programs on the 709 or
    7090. Read the 709 and 7090 manuals at Bitsavers if you actually care
    about this.

    Can you, please, define the meaning of upward and downward
    compatibility?

    See page 5 of "IBM System/360 Principles of Operation" published in
    1966. This is not new.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Thu May 9 19:30:03 2024
    Michael S <already5chosen@yahoo.com> writes:

    [...]

    Can you, please, define the meaning of upward and downward
    compatibility? I had never seen these terms before this thread,
    so it is possible that I don't understand the meaning.

    The System/360 model 20 is described in TDOD as being "upward
    compatible", which means that programs that run on a model 20
    could be run on higher-numbered models, but usually not vice
    versa.

    Most models of System/360 had the property that code that runs on
    model M would also run on model N > M and on model K < M, for other
    models in the set. (The model 20, and arguably the model 30, were
    exceptions, and probably some other models as well; I don't have
    enough information to be precise.) The point is that most models
    were compatible with both higher-numbered ("upwards compatible")
    and lower-numbered ("downwards compatible") models.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Fri May 10 03:05:26 2024
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    Michael S <already5chosen@yahoo.com> writes:

    [...]

    Can you, please, define the meaning of upward and downward
    compatibility?

    It's still in the S/360 Principles of Operation on page 5.

    The System/360 model 20 is described in TDOD as being "upward
    compatible", which means that programs that run on a model 20
    could be run on higher-numbered models, but usually not vice
    versa.

    Not really. It only had 8 16-bit registers, numbered 8 to 15, with a
    mutant form of addressing. If the high bit of the base register was 1,
    it worked normally, if 0 the low three bits were prefixed to the
    displacement which allowed absolute addressing. In assembler programs
    we pretended that base registers 1, 2, 3, contained 4K, 8K, 12K.
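
    Taking that description at face value, the /20 rule can be sketched as
    follows (an assumption-laden model, not the official definition; regs[]
    holds the eight 16-bit registers 8-15):

        #include <stdint.h>

        uint16_t m20_effective_address(const uint16_t regs[16],
                                       unsigned base, unsigned disp12)
        {
            if (base & 0x8)      /* high bit set: a real register, 8..15    */
                return (uint16_t)(regs[base] + (disp12 & 0x0FFFu));
            /* high bit clear: low three bits prefixed to the displacement,
               so base field 1 acts like 4K, 2 like 8K, 3 like 12K.         */
            return (uint16_t)(((base & 0x7u) << 12) | (disp12 & 0x0FFFu));
        }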

    The application instructions were a subset of the 360's, but the I/O
    was completely different and much simpler, as were the interrupts. You
    could write application code that would work on both a /20 and a real
    360 if you were careful, but you needed different I/O libraries.

    Most models of System/360 had the property that code that runs on
    model M would also run on model N > M and on model K < M, for other
    models in the set. (The model 20, and arguably the model 30, were exceptions, and probably some other models as well;

    As I said, the /20, /44, and /67 were special, but all of the other
    models had the same instruction set and I/O structure. The /30 was
    quite slow but it was a real 360 and ran DOS or OS just like any other
    model, just slower.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn Wheeler@21:1/5 to Lynn Wheeler on Thu May 9 17:45:06 2024
    Lynn Wheeler <lynn@garlic.com> writes:
    A little over a decade ago I was asked to track down the decision to add
    virtual memory to all 370s and found the staff member who reported to
    the executive making the decision. Basically OS/360 MVT storage
    management was so bad that execution regions had to be specified four
    times larger than used; as a result a 1mbyte 370/165 normally would only
    run four regions concurrently, insufficient to keep the system busy and
    justified. Mapping MVT to a 16mbyte virtual address space (aka VS2/SVS)
    would allow increasing the number of concurrently running regions by a
    factor of four (with little or no paging), keeping 165 systems busy
    ... overlapping execution with disk I/O.

    In some sense IBM CKD DASD was a tech trade-off, being able to use
    disk&channel capacity to search for information because of limited real
    memory for keeping track of it. By the mid-70s that trade-off was
    starting to invert. In the early 80s, I was also pontificating that
    since mid-60s 360, relative system disk throughput had declined by an
    order of magnitude ... disks had gotten 3-5 times faster while systems
    had gotten 40-50 times faster. A disk division executive took exception
    to my statements and assigned the division performance group to refute
    it. After a couple weeks, they came back and explained that I had
    understated the problem. They then respun the analysis for
    recommendations for optimizing disk configurations for system throughput
    ... that was presented at IBM mainframe user groups.

    Now the MVT->VS2/SVS was actually capped at 15 concurrently executing
    regions because it was (still) using 4bit storage protect keys to keep
    the regions separate (in a single 16mbyte virtual address space)
    ... which prompted SVS->MVS with a different virtual address space for
    each executing region. However the OS/360 history was heavily pointer
    passing APIs ... and to facilitate kernel calls, an 8mbyte image of the
    MVS kernel was mapped into each 16mbyte application address space (so
    kernel code could easily fetch/store application data). However, for
    MVS, MVT subsystems were given their own virtual address space ... so
    for passing API parameters and returning information a one-mbyte common
    segment area (CSA) was (also) mapped into every 16mbyte virtual address
    space (leaving 7mbytes for application). However, the requirement for
    CSA space is somewhat proportional to the number of subsystems and the
    number of concurrently running applications ... and CSA quickly becomes
    a multiple-segment area and the "Common System Area" ... and by the late
    70s and 3033, it was common for it to be 5-6mbytes (leaving 2-3mbytes
    for applications) and threatening to become 8mbytes (leaving zero).

    That was part of the mad rush to get to 370/XA (31-bit) and MVS/XA
    (while separate virtual address spaces theoretically allowed a large
    number of concurrently executing programs, able to overlap execution
    with waiting on disk i/o, the CSA kludge had severely capped it).

    There were a number of 3033 temporary hacks. One was retrofitting part
    of 370/xa access registers to 3033 as "dual-address space". A called
    subsystem in its own address space could have a secondary address space
    pointing to the calling application's address space ... so it didn't
    require CSA for passing & returning API information. They also took two
    "unused" bits from the page table entry to prefix to the real page number ... while
    all instructions could only specify real & virtual 24bit address
    (16mbytes), it was possible to have virtual->real mapping up to 64mbytes
    for execution (attaching more than 16mbytes of real storage to 3033).
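
    The arithmetic of that last hack is easy to sketch (assuming 4K pages;
    370 also had 2K pages, and the field layout below is illustrative rather
    than the real 3033 page-table format):

        #include <stdint.h>

        /* 24-bit virtual address = 12-bit page index + 12-bit byte offset.
           Two formerly unused PTE bits are prefixed to the 12-bit frame
           number, giving a 14-bit frame and so a 26-bit (64mbyte) real
           address, even though instructions still only name 24 bits. */
        uint32_t real_address(uint32_t vaddr24, const uint16_t pte[4096])
        {
            uint32_t page   = (vaddr24 >> 12) & 0x0FFFu;
            uint32_t offset =  vaddr24        & 0x0FFFu;
            uint16_t e      = pte[page];
            uint32_t frame  = (((uint32_t)(e >> 12) & 0x3u) << 12)  /* 2 extra bits */
                            |  ((uint32_t)e & 0x0FFFu);             /* 12-bit frame */
            return (frame << 12) | offset;
        }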


    --
    virtualization experience starting Jan1968, online at home since Mar1970

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Thu May 9 22:39:53 2024
    Michael S <already5chosen@yahoo.com> writes:

    If Brooks were still alive, we could have tried to ask him. But since
    he is not, and since I have no plans to read his books myself, my
    only chance of knowing is for you or for John Levine to find the
    definition in his writings and then tell me.

    For what it's worth, I recommend reading both The Mythical Man-Month
    and The Design of Design. Fred Brooks is (or now was) a perceptive
    and experienced guy, who also puts effort into his writing, and there
    is a lot of insight in what he has to say.

    Incidentally I recommend the later 20th Anniversary edition of MMM,
    mainly for the update of his comments in the earlier edition.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Thu May 9 22:49:26 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*,
    i.e. different Y values and the same X value.

    I would argue that this property is only one of three factors
    that made System/360 successful, and perhaps the least important
    of the three. The other two factors are, one, addressing both
    business computing and scientific computing rather than having
    separate models for the two markets, and two, replacing and
    discontinuing all of IBM's other lines of computers. I think
    it's hard to overstate the importance of the last item.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Thu May 9 22:17:53 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Scott Lurndal wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no
    interactive model in the mid 1960s when JCL was devised.

    BASIC and DTSS was developed in 1963.

    Good Point. So IBM was "guilty" of vastly mis-understanding and underestimating the future importance of interactive users.

    Work on System/360 started in 1961 (and in some sense two years
    earlier, but let's not get into that). System/360 and OS/360
    were announced in April 1964. The Dartmouth Time Sharing System
    first became operational in early 1964 and wasn't available for
    use until after System/360 and OS/360 had been announced and
    already had years of development.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Tim Rentsch on Fri May 10 06:20:00 2024
    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*,
    i.e. different Y values and the same X value.

    I would argue that this property is only one of three factors
    that made System/360 successful, and perhaps the least important
    of the three. The other two factors are, one, addressing both
    business computing and scientific computing rather than having
    separate models for the two markets, and two, replacing and
    discontinuing all of IBM's other lines of computers. I think
    it's hard to overstate the importance of the last item.

    I didn't mean to imply that the performance range was the only factor
    in S/360's success. Just that with S/360, IBM was the first to use
    that strategy, and it was a factor in its success.

    As to the other two factors you mentioned, I don't necessarily
    disagree, but I do want to note that discontinuing older lines of
    computers was facilitated by the ability of various S/360 models to
    emulate various older computers. So a site that had, say, a 1401 could
    upgrade to a S/360 mod 30, which could run in 1401 emulation mode, so
    sites could keep their old programs running until they were replaced by
    newer native S/360 applications. Similarly for 7080 emulation on
    360/65s. There were probably others that I don't know about.

    And, of course, we have already discussed several other factors in its
    success.





    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to John Levine on Fri May 10 06:28:33 2024
    John Levine wrote:

    According to Stephen Fuld <SFuld@alumni.cmu.edu.invalid>:
    In Design of Design, Brooks said they knew about virtual memory but
    thought it was too expensive, which he also says was a mistake,
    soon fixed in S/370.

    While I agree that virtual memory was probably too expensive in the
    mid 1960s, I disagree that it was required, or even the optimal
    solution back then. A better solution would have been to have a
    small number of "base registers" that were not part of the user
    set, but could be reloaded by the OS whenever a program needed to
    be swapped in to a different address than it was swapped out to.

    Well, Brooks was there and said not having virtual memory was a
    mistake. Dunno how much that is related to Lynn's point that paging
    let them avoid the consequences of terrible storage management in MVS.

    MVT?




    When designing the address structure of S/360 they had a big problem
    in that they knew they wanted large addresses, 24 bits to be extended
    later to 31 or 32, but they didn't want to waste a full word on every
    address in programs running on small models. Base register with 12 bit
    offset solved that quite well, making the address part of an
    instruction 16 bits while not segmenting the memory. Since there were
    a lot of registers it was usually possible to set up a few base
    registers at the start of a routine and not do a lot of reloading. (At
    least if the compiler was smart enough; Fortran G had a bad habit of
    loading an address from the constant pool every time it wanted to use
    a variable or an array.)

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.

    Good points. I'll have to think about it.







    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Fri May 10 21:35:46 2024
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    BASIC and DTSS was developed in 1963.

    Good Point. So IBM was "guilty" of vastly mis-understanding and
    underestimating the future importance of interactive users.

    Work on System/360 started in 1961 (and in some sense two years
    earlier, but let's not get into that).

    IBM was certainly aware of CTSS which was running on a 709 and then
    7090 in 1961.

    As I've said before, I think they thought the 360's design was
    adequate for time-sharing, but they guessed wrong and believed it
    would be practical to move code or data by updating base registers.
    Hence Brooks' comment about SS instructions not having both base and
    index fields, and the flat statement in Design of Design that their
    worst mistake was not to include virtual memory.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to John Levine on Sat May 11 07:04:55 2024
    John Levine <johnl@taugh.com> schrieb:

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.

    Depending on base registers for both data and branches was one
    of the ideas that did not age well, I think. We have since
    seen in the RISC machines that having a stack implemented via
    a register, with possibly a frame pointer, a global offset and
    larger offsets (16 bits) works well, and we know how to generate position-independent code.

    This is, of course, with 20/20 hindsight.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Thomas Koenig on Sat May 11 17:21:32 2024
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.

    Depending on base registers for both data and branches was one
    of the ideas that did not age well, I think. We have since
    seen in the RISC machines that having a stack implemented via
    a register, with possibly a frame pointer, a global offset and
    larger offsets (16 bits) works well, and we know how to generate position-independent code.

    Position independent data is still difficult, though.

    This is, of course, with 20/20 hindsight.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Sat May 11 20:54:53 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    John Levine <johnl@taugh.com> schrieb:

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.

    Depending on base registers for both data and branches was one
    of the ideas that did not age well, I think.

    Yup. S/390 added relative versions of all the branches with a 16 bit
    signed offset. Since instructions are aligned on two byte boundaries,
    the offset is shifted one bit left to allow 64K in either direction.
    zSeries added long versions of most branches with a 32 bit offset.
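
    In other words (a small sketch, field names mine), the target of such a
    relative branch is just:

        #include <stdint.h>

        /* 16-bit signed halfword count, scaled by 2 because instructions
           sit on two-byte boundaries: reach is -65536 .. +65534 bytes. */
        uint64_t rel_branch_target(uint64_t branch_addr, int16_t halfwords)
        {
            return branch_addr + ((int64_t)halfwords << 1);
        }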

    Do we know who invented relative branches? The PDP-11 had them in 1969
    but I don't think they were new then. They feel like one of those
    things that are obvious in retrospect, but not at the time. (Why do
    you want to make branch addressing different? And run them all through
    an adder? Do you think gates grow on trees?)

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to mitchalsup@aol.com on Sun May 12 10:31:44 2024
    MitchAlsup1 <mitchalsup@aol.com> schrieb:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.

    Depending on base registers for both data and branches was one
    of the ideas that did not age well, I think. We have since
    seen in the RISC machines that having a stack implemented via
    a register, with possibly a frame pointer, a global offset and
    larger offsets (16 bits) works well, and we know how to generate
    position-independent code.

    Position independent data is still difficult, though.

    Touché. Data is not the problem, but pointers to data (such as
    addresses of arguments) are...

    So, having a base register added to all addresses of user code
    would definitely have been a better choice.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to John Levine on Sun May 12 11:13:37 2024
    John Levine <johnl@taugh.com> schrieb:

    Do we know who invented relative branches? The PDP-11 had them in 1969
    but I don't think they were new then.

    Very good question.

    The Nova offered this possibility. They had three addressing modes:
    zero page with an 8-bit unsigned offset, PC-relative with an 8-bit
    signed offset, and one of the two index registers with a signed 8-bit
    offset. This worked for load/store and for jumping. They implemented
    conditionals by skipping over jump instructions.

    So, PC-relative jumps already existed before the PDP-11 at least,
    although not as branches as such.


    They feel like one of those
    things that are obvious in retrospect, but not at the time. (Why do
    you want to make branch addressing different? And run them all through
    an adder? Do you think gates grow on trees?)

    Obviously the Nova designers thought they could do so (but they
    only had a single 4-bit adder originally; this is why the load
    and store instructions were so slow, apparently, see appendix F of http://bitsavers.org/pdf/dg/015-000023-03_NOVA_PgmrRefMan_Jan76.pdf )

    Hmm... browsing appendix E, loading and storing bytes really
    was a hack on that machine; the PDP-11 (and the S/360) were much
    faster there. I wonder why deCastro didn't just put his byte
    pointer bit in the lowest position, it would have saved programmers
    a lot of grief (I assume...)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Tim Rentsch on Sun May 12 13:17:56 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Scott Lurndal wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no
    interactive model in the mid 1960s when JCL was devised.

    BASIC and DTSS was developed in 1963.

    Good Point. So IBM was "guilty" of vastly mis-understanding and
    underestimating the future importance of interactive users.

    Work on System/360 started in 1961 (and in some sense two years
    earlier, but let's not get into that). System/360 and OS/360
    were announced in April 1964. The Dartmouth Time Sharing System
    first became operational in early 1964 and wasn't available for
    use until after System/360 and OS/360 had been announced and
    already had years of development.

    Brooks mentions this in the requirements: The /360 was supposed
    to allow remote access for real-time database access and batch
    job execution (for which he mentioned airline reservation
    systems). Interactive use was not in the requirements.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Sun May 12 22:21:37 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    John Levine <johnl@taugh.com> schrieb:

    Do we know who invented relative branches? The PDP-11 had them in 1969
    but I don't think they were new then.

    Very good question.

    Flipping through the machine descriptions in Blaauw and Brooks, I see
    that the B5500 had relative addressing as one of its gazillion address
    modes, which was quite possibly the first time they were used. But I
    would not count on the PDP-11 designers being aware of that.

    The page addressing on the PDP-8 is a pain since you have to divide
    your code into little blocks of the right size to make it work.
    Relative branching on the PDP-11 let them keep small branch addresses
    but not force the memory into pages.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Thomas Koenig on Sun May 12 23:02:21 2024
    Thomas Koenig wrote:

    MitchAlsup1 <mitchalsup@aol.com> schrieb:
    Thomas Koenig wrote:

    John Levine <johnl@taugh.com> schrieb:

    Brooks said it was ugly that some instructions (RX) had both base and
    index registers while others (SS) only had base registers, which I
    expect made it even harder to do what you suggested.

    Depending on base registers for both data and branches was one
    of the ideas that did not age well, I think. We have since
    seen in the RISC machines that having a stack implemented via
    a register, with possibly a frame pointer, a global offset and
    larger offsets (16 bits) works well, and we know how to generate
    position-independent code.

    Position independent data is still difficult, though.

    Touché. Data is not the problem, but pointers to date (such as
    addresses of arguments) is...

    Position independent external data requires a load of the base of the
    region the data sits in, and a second memory access to do something with
    the data. You CAN use this base register multiple times--IFF you can
    figure out which region the external data resides in and share use of
    the base register.

    Modern systems put these bases in the GOT.

    This is a hangover from the days where one built fully resolved object
    modules. ASLR is a driving force for a "better solution" {whatever it
    ends up being.}

    So, having a base register added to all addresses of user code
    would definitely have been a better choice.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Mon May 13 20:39:55 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    Do we know who invented relative branches? The PDP-11 had them in 1969
    but I don't think they were new then.

    I am just trying to make sense of the little documentation there
    is of the PDP-X, and it seems it would have had PC- relative
    branches too (but also branches relative to index registers),
    either with an 8-bit or a 16-bit offset. The Nova had something
    similar, but only jumps relative to PC or its index registers,
    the PDP-11 went to relative-only branches.

    This draft is pretty clear:

    https://bitsavers.org/pdf/dec/pdp-x/29_Nov67.pdf

    It had short page 0 addressing like the PDP-8, short relative
    addressing, and long and short indexed addressing.

    I'd now say relative branches were obvious once you got to the point
    where the cost of the addition to the PC wasn't a big deal, so they
    probably occurred to a lot of people around the same time.

    The PDP-11 had short relative branches and a long jump that could use
    any address mode that made sense, typically absolute or indirect. The
    Unix assembler had conditional jump pseudo-ops that turned into a
    branch if the target was close enough or a reverse branch around a
    jump otherwise. If you allow chaining branches to the same place,
    coming up with an optimal set of long and short is NP complete. If you
    just do long and short, you can get close enough by starting with
    everything long and making passes over the code shortening the ones
    you can until you can't shorten anything else. (I did that for the AIX
    ROMP assembler, same deal.)
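
    That shorten-until-fixpoint pass is simple enough to sketch (a toy
    model; a real assembler tracks every instruction, not just the branches,
    and the reach and length values here are made up):

        #include <stdbool.h>
        #include <stddef.h>

        enum { SHORT_LEN = 2, LONG_LEN = 6, SHORT_REACH = 127 };

        struct branch { long addr; size_t target; int len; };  /* target = index */

        static void relayout(struct branch *b, size_t n)
        {
            long addr = 0;                 /* code between branches elided      */
            for (size_t i = 0; i < n; i++) { b[i].addr = addr; addr += b[i].len; }
        }

        void relax(struct branch *b, size_t n)
        {
            for (size_t i = 0; i < n; i++) b[i].len = LONG_LEN;  /* start long   */
            bool changed = true;
            while (changed) {              /* repeat until nothing shortens      */
                changed = false;
                relayout(b, n);
                for (size_t i = 0; i < n; i++) {
                    long d = b[b[i].target].addr - b[i].addr;
                    if (b[i].len == LONG_LEN && d >= -SHORT_REACH && d <= SHORT_REACH) {
                        b[i].len = SHORT_LEN;
                        changed = true;
                    }
                }
            }
        }

    Shortening only ever brings targets closer together, so a branch never
    has to be re-lengthened, which is why the monotone pass converges.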

    You could do some funky things with PDP-11 jumps like

    JMP @(R4)+

    which dispatched to the next routine of threaded code pointed to by R4.
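
    In C clothing (a sketch; the op_* names are invented), that dispatch
    idiom is an instruction pointer walking a table of routine addresses,
    with ip playing the role of R4:

        #include <stdio.h>

        typedef void (*op_fn)(void);

        static void op_hello(void) { printf("hello "); }
        static void op_world(void) { printf("world\n"); }
        static void op_stop (void) { }                 /* sentinel */

        static const op_fn program[] = { op_hello, op_world, op_stop };

        int main(void)
        {
            const op_fn *ip = program;     /* the "R4" of the sketch            */
            op_fn next;
            while ((next = *ip++) != op_stop)
                next();                    /* "JMP @(R4)+": fetch, advance, go  */
            return 0;
        }
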
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to John Levine on Mon May 13 20:15:59 2024
    John Levine <johnl@taugh.com> schrieb:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    John Levine <johnl@taugh.com> schrieb:

    Do we know who invented relative branches? The PDP-11 had them in 1969
    but I don't think they were new then.

    Very good question.

    Flipping through the machine descriptions in Blaauw and Brooks, I see
    that the B5500 had relative addressing as one of its gazillion address
    modes, which was quite possibly the first time they were used. But I
    would not count on the PDP-11 designers being aware of that.

    The page addressing on the PDP-8 is a pain since you have to divide
    your code into little blocks of the right size to make it work.
    Relative branching on the PDP-11 let them keep small branch addresses
    but not force the memor into pages.

    I am just trying to make sense of the little documentation there
    is of the PDP-X, and it seems it would have had PC-relative
    branches too (but also branches relative to index registers),
    either with an 8-bit or a 16-bit offset. The Nova had something
    similar, but only jumps relative to PC or its index registers,
    the PDP-11 went to relative-only branches.

    Interesting.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to John Levine on Mon May 13 21:09:13 2024
    John Levine wrote:

    According to Thomas Koenig <tkoenig@netcologne.de>:
    Do we know who invented relative branches? The PDP-11 had them in 1969 but I don't think they were new then.

    I am just trying to make sense of the little documentation there
    is of the PDP-X, and it seems it would have had PC-relative
    branches too (but also branches relative to index registers),
    either with an 8-bit or a 16-bit offset. The Nova had something
    similar, but only jumps relative to PC or its index registers,
    the PDP-11 went to relative-only branches.

    This draft is pretty clear:

    https://bitsavers.org/pdf/dec/pdp-x/29_Nov67.pdf

    It had short page 0 addressing like the PDP-8, short relative
    addressing, and long and short indexed addressing.

    I'd now say relative branches were obvious once you got to the point
    where the cost of the addition to the PC wasn't a big deal, so they
    probably occurred to a lot of people around the same time.

    This was about the time when new architectures were being designed
    where having an I-Cache was assumed.

    ----------------------------------------------------------------------

    The PDP-11 had short relative branches and a long jump that could use
    any address mode that made sense, typically absolute or indirect. The
    Unix assembler had conditional jump pseudo-ops that turned into a
    branch if the target was close enough or a reverse branch around a
    jump otherwise. If you allow chaining branches to the same place,
    coming up with an optimal set of long and short is NP complete. If you
    just do long and short, you can get close enough by starting with
    everything long and making passes over the code shortening the ones
    you can until you can't shorten anything else. (I did that for the AIX
    ROMP assembler, same deal.)

    This is exactly what the Mc 88100 linker did--the compiler emitted
    unresolved branches as long and the linker shortened them up when
    they were found to be in range.

    ----------------------------------------------------------------------


    You could do some funky things with PDP-11 jumps like

    JMP @(R4)+

    which dispatched to the next routine of threaded code pointed to by R4.

    JSR PC,@(SP)+

    Popped the return address off the stack, pushed another return address on
    the stack and transfers control. This is how we did coroutines.
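
    A toy C model of that return-address swap may make the control flow
    easier to see; the one-word "stack" and the numeric labels below are
    invented purely for illustration:

    #include <stdio.h>

    int main(void)
    {
        int stack = 1;   /* holds the partner's resume point, like the SP slot */
        int pc    = 10;  /* start in coroutine A */

        for (;;) {
            switch (pc) {
            /* coroutine A */
            case 10: puts("A: part 1");
                     { int r = stack; stack = 11; pc = r; } break; /* the JSR */
            case 11: puts("A: part 2");
                     { int r = stack; stack = 12; pc = r; } break;
            case 12: puts("A: done"); return 0;
            /* coroutine B */
            case 1:  puts("B: part 1");
                     { int r = stack; stack = 2;  pc = r; } break;
            case 2:  puts("B: part 2");
                     { int r = stack; stack = 3;  pc = r; } break;
            default: return 0;
            }
        }
    }

    Each "JSR" line pops the partner's resume point, pushes its own
    continuation, and transfers control, so the output interleaves
    A/B/A/B just as the PDP-11 trick does.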

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Tue May 14 02:59:38 2024
    According to MitchAlsup1 <mitchalsup@aol.com>:
    JSR PC,@(SP)+

    Popped the return address off the stack, pushed another return address on
    the stack and transfers control. This is how we did coroutines.

    When I was teaching an operating system class in about 1977 I
    challenged the class to come up with a minimal coroutine package. They
    all found that pretty quickly.

    It's not very good coroutines since it just switches the return
    address, not any other stack context, but it can sometimes be useful.


    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to John Levine on Tue May 14 12:35:24 2024
    On Tue, 14 May 2024 02:59:38 -0000 (UTC)
    John Levine <johnl@taugh.com> wrote:

    According to MitchAlsup1 <mitchalsup@aol.com>:
    JSR PC,@(SP)+

    Popped the return address off the stack, pushed another return
    address on the stack and transfers control. This is how we did
    coroutines.

    When I was teaching an operating system class in about 1977 I
    challenged the class to come up with a minimal coroutine package. They
    all found that pretty quickly.

    It's not very good coroutines since it just switches the return
    address, not any other stack context, but it can sometimes be useful.



    I would guess that it was sometimes useful in 1967, much less often
    useful in 1977, and almost never useful (on "big" general-purpose
    computers) in 1997 or later.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to John Levine on Tue May 14 16:38:34 2024
    John Levine <johnl@taugh.com> schrieb:
    According to Thomas Koenig <tkoenig@netcologne.de>:
    Do we know who invented relative branches? The PDP-11 had them in 1969 but I don't think they were new then.

    I am just trying to make sense of the little documentation there
    is of the PDP-X, and it seems it would have had PC-relative
    branches too (but also branches relative to index registers),
    either with an 8-bit or a 16-bit offset. The Nova had something
    similar, but only jumps relative to PC or its index registers,
    the PDP-11 went to relative-only branches.

    This draft is pretty clear:

    https://bitsavers.org/pdf/dec/pdp-x/29_Nov67.pdf

    I'd actually missed that one, thanks! (Didn't think to
    look at Bitsavers).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to John Levine on Sat May 25 17:35:57 2024
    John Levine <johnl@taugh.com> writes:
    You could do some funky things with PDP-11 jumps like

    JMP @(R4)+

    which dispatched to the next routine of threaded code pointed to by R4.

    Interestingly, the 68000 has a (An)+ addressing mode, but

    1) "JMP op" is equivalent to "LEA op -> PC" rather than "MOV.l op -> PC"

    2) The 68000 does not allow JMP (An)+

    So you need to write that as

    MOV.l (A0)+,A1
    JMP (A1)

    IA-32 (and AMD64) does not make mistake 1), but it has no
    autoincrement addressing mode, so you have to implement that as, e.g.

    add $8, %rbx
    jmp *-8(%rbx)

    (or possibly arrange the convention between the threaded-code routines
    to avoid the -8).

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to John Levine on Sat May 25 20:29:20 2024
    John Levine wrote:

    According to MitchAlsup1 <mitchalsup@aol.com>:
    JSR PC,@(SP)+

    Popped the return address off the stack, pushed another return address
    on
    the stack and transfers control. This is how we did coroutines.

    When I was teaching an operating system class in about 1977 I
    challenged the class to come up with a minimal coroutine package. They
    all found that pretty quickly.

    It's not very good coroutines since it just switches the return
    address, not any other stack context, but it can sometimes be useful.


    The kinds of co-routines this is perfect for are those where each co-
    routine is not reentrant, so they can use their own data in their own
    memory to control their own state transitions. One accesses these
    variables with disp(PC) addressing, almost as free as registers
    containing those values (it was the PDP-11 era...). I used this
    extensively in RSTS controller applications.

    But no-one would use co-routines this way with all our "accumulated"
    knowledge of how to write small simple efficient software these days.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From EricP@21:1/5 to EricP on Sat May 25 22:16:20 2024
    EricP wrote:

    Unfortunately I couldn't find any 704 documents which detail
    its tube logic circuit designs.

    I found a copy of the 700 series tube logic module designs:

    IBM 700 Series Data Processing Component Circuits 1955-1959
    http://www.piercefuller.com/scan/700circ.pdf

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Wed May 29 21:43:15 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*,
    i.e. different Y values and the same X value.

    I would argue that this property is only one of three factors
    that made System/360 successful, and perhaps the least important
    of the three. The other two factors are, one, addressing both
    business computing and scientific computing rather than having
    separate models for the two markets, and two, replacing and
    discontinuing all of IBM's other lines of computers. I think
    it's hard to overstate the importance of the last item.

    I didn't mean to imply that the performance range was the only factor
    in S/360's success. Just that with S/360, IBM was the first to use
    that strategy, and it was a factor in its success.

    We agree that having multiple price/performance models helped
    System/360 succeed. Where I think we don't agree is how big
    a factor it was, or how innovative it was. Supporting multiple
    models that differ only in price/performance is an obvious
    idea, even in the early 1960s.

    As to the other two factors you mentioned, I don't necessarily
    disagree, but I do want to note that discontinuing older lines of
    computers was facilitated by the ability of various S/360 models to
    emulate various older computers. So a site that had, say, a 1401, could
    upgrade to an S/360 mod 30, which could run in 1401 emulation mode, so
    sites could keep their old programs running until they were replaced by
    newer native S/360 applications. Similarly for 7080 emulation on
    360/65s. There were probably others that I don't know about.

    Read the chapter on System/360 in The Design of Design and you
    may change your mind. It isn't surprising that IBM provided
    a path for people who wanted to keep running their old software.
    That is very different from deciding IBM wasn't going to sell
    the old hardware. Brooks points out that the decision to
    drop all further development of IBM's six existing product
    lines was made by CEO Thomas Watson (Jr).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Wed May 29 22:52:15 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    snip


    The biggest fault of JCL is that it
    is trying to solve the wrong problem.

    What problem was it trying to solve and what was the "right"
    problem?

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no
    non-interactive model in the mid 1960s

    I'm having trouble making sense of this question. Did you mean
    there was no interactive model? Certainly there was a
    non-interactive model, which is in the batch approach to the
    world.

    when JCL was devised.

    OS/360 was announced in April 1964, at the same time as System/360.
    Surely there had been significant thought put into what JCL would
    look like by that time, which puts it in the early 1960s.

    They didn't address it because they couldn't forecast
    (obviously incorrectly) that it would be a problem to solve.

    Talking about System/360 and OS/360 in The Design of Design, Brooks distinguishes between teleprocessing and interactive use, aka
    time-sharing. Teleprocessing is for remote submission of batch jobs
    and for fixed applications such as airline reservation systems. He
    doesn't say why time-sharing was given short shrift but there is
    this interesting statement: "There was no conscious decision to
    cater to two use modes [namely, batch and interactive]; it merely
    reflected subgroups holding differing use models." It seems clear
    that the design team, including Brooks himself, expected that the
    primary use mode would be batch-like (which includes teleprocessing applications such as airline reservation systems).

    The problem that was in need of addressing is interactive use. I
    think there are two reasons why JCL was so poor at that. One is
    that they knew that teleprocessing would be important, but they
    tried to cram it into the batch processing model, rather than
    understanding a more interactive work style. The second reason is
    that the culture at IBM, at least at that time, never understood the
    idea that using computers can be (and should be) easy and fun. The
    B in IBM is Business, and Business isn't supposed to be fun. And I
    think that's part of why JCL was not viewed (at IBM) as a failure,
    because their Business customers didn't mind. Needless to say, I am
    speculating, but for what it's worth those are my speculations.

    Fair enough. A couple of comments. By the time TSO/360 came out, IIRC
    in the early 1970s, they were already committed to JCL. TSO ran as a
    batch job on top of the OS, and handled swapping, etc. itself within the
    region allocated to TSO within the OS. It was a disaster. Of course
    this was later addressed by unifying TSO into the OS, but that couldn't
    happen until the S/370s (except the 155 and 165) and virtual memory.
    But the legacy of two control languages was already set by then.

    To me this sounds like another manifestation of the business
    mindset. IBM didn't understand interactive computing because
    business users, their primary market, spend their time "working" and
    not "goofing off". Interactive use was seen as catering to people
    who aren't serious about getting work done. Here again I am of
    course speculating, although the speculations are consistent with
    what I remember from that time.

    By the way I used TSO and also supported CICS for a while in the
    1970s. So my speculations have at least some foundation in real
    experience.

    As for "fun". I agree that IBM didn't think of computers as fun, but
    there were plenty of reasons to support interactive terminals for
    purely business reasons, a major one being programmer productivity in developing business applications.

    I don't disagree. I conjecture that there was an unconscious
    attitude at IBM at that time that interfered with them giving
    interactive use serious consideration. Furthermore it isn't obvious
    that they made a bad decision, considering the environment of the
    marketplace of the time. Real interactive use was just not very
    important for IBM's primary market at that time, and investing
    effort in supporting interactive computing might very well have
    cost them sales in their primary market. After all, IBM was in
    business to make money; they weren't in business to advance the
    state of the art in new computing technologies.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Tim Rentsch on Thu May 30 06:07:09 2024
    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*,
    i.e. different Y values and the same X value.

    I would argue that this property is only one of three factors
    that made System/360 successful, and perhaps the least important
    of the three. The other two factors are, one, addressing both
    business computing and scientific computing rather than having
    separate models for the two markets, and two, replacing and
    discontinuing all of IBM's other lines of computers. I think
    it's hard to overstate the importance of the last item.

    I didn't mean to imply that the performance range was the only
    factor in S/360's success. Just that with S/360, IBM was the first
    to use that strategy, and it was a factor in its success.

    We agree that having multiple price/performance models helped
    System/360 succeed. Where I think we don't agree is how big
    a factor it was, or how innovative it was. Supporting multiple
    models that differ only in price/performance is an obvious
    idea, even in the early 1960s.

    I don't have an opinion on how big a factor it was, but if you think it
    was innovative, can you name any other computer manufacturer who did
    it, i.e. announced at the same time multiple models with different performance?





    As to the other two factors you mentioned, I don't necessarily
    disagree, but I do want to note that discontinuing older lines of
    computers was facilitated by the ability of various S/360 models to
    emulate various older computers. So a site that had, say, a 1401,
    could upgrade to an S/360 mod 30, which could run in 1401 emulation
    mode, so sites could keep their old programs running until they
    were replaced by newer native S/360 applications. Similarly for
    7080 emulation on 360/65s. There were probably others that I don't
    know about.

    Read the chapter on System/360 in The Design of Design and you
    may change your mind. It isn't surprising that IBM provided
    a path for people who wanted to keep running their old software.

    Again, did any other manufacturer at the time provide, in their new
    models, emulation of their older models with radically different
    architectures?


    That is very different from deciding IBM wasn't going to sell
    the old hardware.

    Agreed.



    Brooks points out that the decision to
    drop all further development of IBM's six existing product
    lines was made by CEO Thomas Watson (Jr).

    OK.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Tim Rentsch on Thu May 30 06:28:01 2024
    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    snip


    The biggest fault of JCL is that it
    is trying to solve the wrong problem.

    What problem was it trying to solve and what was the "right"
    problem?

    The problem it was trying to solve is contained in its name: Job
    Control Language. It tacitly accepted the non-interactive batch
    model for what it needed to address.

    You may be right, but correct me if I am wrong, there was no non-interactive model in the mid 1960s

    I'm having trouble making sense of this question. Did you mean
    there was no interactive model? Certainly there was a
    non-interactive model, which is in the batch approach to the
    world.

    Sorry, my error. I meant interactive (i.e. Time Sharing), not
    non-interactive. But as others have pointed out, there were such
    models, but IBM chose to ignore them.




    when JCL was devised.

    OS/360 was announced in April 1964, at the same time as System/360.
    Surely there had been significant thought put into what JCL would
    look like by that time, which puts it in the early 1960s.

    They didn't address it because they couldn't forecast (obviously incorrectly) that it would be a problem to solve.

    Talking about System/360 and OS/360 in The Design of Design, Brooks distinguishes between teleprocessing and interactive use, aka
    time-sharing.

    Good.


    Teleprocessing is for remote submission of batch jobs
    and for fixed applications such as airline reservation systems.

    Agreed.

    He
    doesn't say why time-sharing was given short shrift but there is
    this interesting statement: "There was no conscious decision to
    cater to two use modes [namely, batch and interactive]; it merely
    reflected subgroups holding differing use models." It seems clear
    that the design team, including Brooks himself, expected that the
    primary use mode would be batch-like (which includes teleprocessing applications such as airline reservation systems).

    Yes, that is consistent with what happened.




    The problem that was in need of addressing is interactive use. I
    think there are two reasons why JCL was so poor at that. One is
    that they knew that teleprocessing would be important, but they
    tried to cram it into the batch processing model, rather than
    understanding a more interactive work style. The second reason is
    that the culture at IBM, at least at that time, never understood the
    idea that using computers can be (and should be) easy and fun. The
    B in IBM is Business, and Business isn't supposed to be fun. And I
    think that's part of why JCL was not viewed (at IBM) as a failure,
    because their Business customers didn't mind. Needless to say, I am
    speculating, but for what it's worth those are my speculations.

    I don't think we have a major disagreement that IBM didn't address the interactive user. We may have a slight disagreement as to the reason
    for that. I believe you think that they considered it, but rejected it
    because it was too much like fun. I don't attribute that motivation,
    and don't know what the reasons for the rejection were, but we both
    agree that they underestimated its importance for non-fun uses.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Thu May 30 08:42:42 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    [a bunch of stuff we agree on]

    The problem that was in need of addressing is interactive use. I
    think there are two reasons why JCL was so poor at that. One is
    that they knew that teleprocessing would be important, but they
    tried to cram it into the batch processing model, rather than
    understanding a more interactive work style. The second reason is
    that the culture at IBM, at least at that time, never understood the
    idea that using computers can be (and should be) easy and fun. The
    B in IBM is Business, and Business isn't supposed to be fun. And I
    think that's part of why JCL was not viewed (at IBM) as a failure,
    because their Business customers didn't mind. Needless to say, I am
    speculating, but for what it's worth those are my speculations.

    I don't think we have a major disagreement that IBM didn't address the interactive user. We may have a slight disagreement as to the reason
    for that. I believe you think that they considered it, but rejected it because it was too much like fun. I don't attribute that motivation,
    and don't know what the reasons for the rejection were, but we both
    agree that they underestimated its importance for non-fun uses.

    Let me expand on my previous statement.

    I think the "fun" aspect was part of the motivation, but a mostly
    unconscious one.

    Another (and perhaps larger?) part of the motivation was about the
    relative priorities, and this was (I believe) a conscious element.
    In particular, interactive use was thought to be important for
    program development (Brooks says something along these lines in
    TDOD). I conjecture that IBM consciously decided -- whether
    rightly or wrongly -- that program development was only a small
    fraction of what IBM's market wanted to do with their computers,
    and so IBM didn't prioritize it; they thought that what little
    program development was needed could be carried out adequately
    under the batch processing model. That's understandable - it's
    hard for people who have a lot of experience in an old technology
    to appreciate the benefits of a new technology (countless examples
    over the last 50 or 60 years). A quote from Tom Watson Sr comes
    to mind (paraphrased): "I think there's a world market for about
    five computers." It isn't just coincidence that innovation tends
    to come from the young. DEC, to give one example, was a much
    younger company, and fully embraced the interactive model early
    on. Even 20 years later, I think IBM made a wise decision to
    farm out the development of an operating system for the PC,
    because it just wasn't in IBM's culture to know what those
    customers wanted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Thu May 30 17:05:59 2024
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:
    Another (and perhaps larger?) part of the motivation was about the
    relative priorities, and this was (I believe) a conscious element.
    In particular, interactive use was thought to be important for
    program development (Brooks says something along these lines in
    TDOD). I conjecture that IBM consciously decided -- whether
    rightly or wrongly -- that program development was only a small
    fraction of what IBM's market wanted to do with their computers,
    and so IBM didn't prioritize it; they thought that what little
    program development was needed could be carried out adequately
    under the batch processing model. That's understandable -

    It's the Jevons Paradox. If you improve your processes to use
    something more efficiently, you often end up using more of it because
    the overall usage increases. Jevons said it about coal and steam
    engines but it happens all the time.

    IBM was certainly familiar with time-sharing from Project MAC, which
    ran on a 7090. Time-sharing needs system managed context switches
    which are expensive, as opposed to SAGE/SABRE style transaction
    processing where each bit of application code manages its own context.
    In practice it also needs dynamic address translation which they added
    to the 360/67 to bid on the Multics project, and to all 370s both for
    that reason and the horrible OS memory management Lynn has described.

    Among the many differences between IBM and DEC computers was that
    IBM's had channels which did a lot of work between relatively
    infrequent interrupts, while DEC's did not, and often had interrupts for
    each word of data. (They did have DMA which they called data break but
    only for fast devices like disks, not terminals or medium speed
    DECtapes.)

    At the time IBM's choice made sense. Now of course everything is so
    much faster that the mini-server on my desk that is about the size of
    an orange has an ATA disc controller that does more than a 1960s
    channel, and the CPU takes thousands of interrupts per second
    without noticeably slowing down.

    PS:

    A quote from Tom Watson Sr comes
    to mind (paraphrased): "I think there's a world market for about
    five computers."

    He never said that. What he probably said in 1943 was more like five
    computers could do all the computing the world is doing, which was
    true. I would think he was quite aware that as costs came down the
    demand would go up but I think everyone was surprised at how fast that happened.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Tim Rentsch on Thu May 30 17:29:04 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    [...]

    I conjecture that there was an unconscious
    attitude at IBM at that time that interfered with them giving
    interactive use serious consideration. Furthermore it isn't obvious
    that they made a bad decision, considering the environment of the
    marketplace of the time.

    I assume you're right. There actually may have been one additional
    factor: I don't think the 360/30 would have been powerful enough
    for timesharing. It's hard to find comparative figures, but
    I suspect it was considerably slower than a PDP-8.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Thomas Koenig on Thu May 30 18:27:07 2024
    Thomas Koenig wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> schrieb:

    [...]

    I conjecture that there was an unconscious
    attitude at IBM at that time that interfered with them giving
    interactive use serious consideration. Furthermore it isn't obvious
    that they made a bad decision, considering the environment of the
    marketplace of the time.

    I assume you're right. There actually may have been one additional
    factor: I don't think the 360/30 would have been powerful enough
    for timesharing. It's hard to find comparative figures, but
    I suspect it was considerably slower than a PDP/8.


    Our PDP-8 supported a 6-person time sharing OS.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Thu May 30 18:57:53 2024
    According to Thomas Koenig <tkoenig@netcologne.de>:
    I assume you're right. There actually may have been one additional
    factor: I don't think the 360/30 would have been powerful enough
    for timesharing. It's hard to find comparative figures, but
    I suspect it was considerably slower than a PDP/8.

    See some of my previous messages. It was a lot slower, 27us for a 16
    bit memory to register add vs about 3us for a 12 bit add on a PDP-8.

    On the other hand, a typical PDP-8 had a DECtape or two (a spiritual predecessor of floppy disks) and a teletype, while a /30 had a card
    reader, a line printer, and a real disk or some 9 track tape drives.

    You didn't rent a /30 for the CPU, it was for the peripherals. The
    channel shared the microcode engine with the CPU and when it was doing
    disk operations, the CPU pretty much stopped.

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Levine on Thu May 30 19:42:02 2024
    John Levine <johnl@taugh.com> writes:
    According to Tim Rentsch <tr.17687@z991.linuxsc.com>:

    Among the many differences between IBM and DEC computers was that
    IBM's had channels which did a lot of work between relatively
    infrequent interrupts, while DEC's did not, and often had interrupts for
    each word of data. (They did have DMA which they called data break but
    only for fast devices like disks, not terminals or medium speed
    DECtapes.)

    The Burroughs I/O subsystem offloaded even more than IBM channel
    programs could provide. It was fire-and-forget from the MCP
    perspective: e.g. reading a set of cards or a bunch of sectors
    was one instruction that initiated a high-level operation (read
    card/cards, print line/lines, read sector/sectors, write sector/sectors,
    backspace tape, etc.) and the hardware took care of all the fiddly
    little details.


    At the time IBM's choice made sense. Now of course everything is so
    much faster that the mini-server on my desk that is about the size of
    an orange has an ATA disc controller that does more than a 1960s
    channel, and the CPU takes thousands of interrupts per second
    without noticeably slowing down.

    Modern server-grade I/O hardware is more along the fire-and-forget model than bit-twiddling models from the 8086 timeframe. Even SATA (which
    is more capable than IDE) is fairly high level, as is FC and
    NVMe. Server-grade NICs are also pretty capable and require
    far fewer interrupts than early NICs to transfer a given amount
    of data.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Stephen Fuld on Thu May 30 14:12:01 2024
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*,
    i.e. different Y values and the same X value.

    I would argue that this property is only one of three factors
    that made System/360 successful, and perhaps the least important
    of the three. The other two factors are, one, addressing both
    business computing and scientific computing rather than having
    separate models for the two markets, and two, replacing and
    discontinuing all of IBM's other lines of computers. I think
    it's hard to overstate the importance of the last item.

    I didn't mean to imply that the performance range was the only
    factor in S/360's success. Just that with S/360, IBM was the first
    to use that strategy, and it was a factor in its success.

    We agree that having multiple price/performance models helped
    System/360 succeed. Where I think we don't agree is how big
    a factor it was, or how innovative it was. Supporting multiple
    models that differ only in price/performance is an obvious
    idea, even in the early 1960s.

    I don't have an opinion on how big a factor it was, but if you think it
    was innovative, can you name any other computer manufacturer who did
    it, i.e. announced at the same time multiple models with different performance?

    I think it was an obvious idea at the time, even before IBM started
    work on System/360. What made System/360 different was not the idea
    of having a common architecture but the large range of performance for
    the various models. Besides being an impressive feat technically, it
    clearly showed IBM's commitment to the architecture, not just in the
    present but for many years into the future, and I believe that
    commitment being demonstrated (ignoring for the moment the
    discontinuing of other product lines) was the larger part of the
    success of System/360.

    As to the other two factors you mentioned, I don't necessarily
    disagree, but I do want to note that discontinuing older lines of
    computers was facilitated by the ability of various S/360 models to
    emulate various older computers. So a site that had, say, a 1401,
    could upgrade to an S/360 mod 30, which could run in 1401 emulation
    mode, so sites could keep their old programs running until they
    were replaced by newer native S/360 applications. Similarly for
    7080 emulation on 360/65s. There were probably others that I don't
    know about.

    Read the chapter on System/360 in The Design of Design and you
    may change your mind. It isn't surprising that IBM provided
    a path for people who wanted to keep running their old software.

    Again, did any other manufacturer at the time provide, in their new
    models, emulation of their older models with radically different
    architectures?

    Emulation was not a new idea, and there were historical precedents,
    for example the 7094 being able to run 7090 code even though how
    indexing was done on the two machines was completely different
    (admittedly this example is on a smaller scale than emulating a
    completely different architecture). Also it isn't like IBM knew
    going in that emulation would be the way that they would address the
    issue of bringing old customers forward. The System/360 effort
    started in 1961 and was announced in April 1964. Quoting from TDOD:
    "At a crucial point in January 1964, William Harms, Gerald Ottoway,
    and William Wright devised almost overnight a microprogrammed
    emulation of the 1401 on the Model 30. This mightily addressed the
    biggest single customer conversion problem." IBM knew they needed
    to provide a way forward but didn't know at the start how they would
    do that. It was a happy byproduct of the decision to use microcode
    in the smaller 360 models that emulation was possible, however IBM
    didn't realize or plan that until fairly late in the game.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Tim Rentsch on Thu May 30 21:42:08 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    Tim Rentsch wrote:

    "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:

    The key innovation that IBM made with the S/360 was to announce
    systems with a wide range of performance *at the same time*,
    i.e. different Y values and the same X value.

    I would argue that this property is only one of three factors
    that made System/360 successful, and perhaps the least important
    of the three. The other two factors are, one, addressing both
    business computing and scientific computing rather than having
    separate models for the two markets, and two, replacing and
    discontinuing all of IBM's other lines of computers. I think
    it's hard to overstate the importance of the last item.

    I didn't mean to imply that the performance range was the only
    factor in S/360's success. Just that with S/360, IBM was the first
    to use that strategy, and it was a factor in its success.

    We agree that having multiple price/performance models helped
    System/360 succeed. Where I think we don't agree is how big
    a factor it was, or how innovative it was. Supporting multiple
    models that differ only in price/performance is an obvious
    idea, even in the early 1960s.

    I don't have an opinion on how big a factor it was, but if you think it
    was innovative, can you name any other computer manufacturer who did
    it, i.e. announced at the same time multiple models with different
    performance?

    I think it was an obvious idea at the time, even before IBM started

    The Burroughs B100/200/300 systems were just that, multiple models
    with different performance using a common CPU architecture. Early
    1960s.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Fri May 31 00:37:28 2024
    According to Scott Lurndal <slp53@pacbell.net>:
    Among the many differences between IBM and DEC computers was that
    IBM's had channels ...

    The Burroughs I/O subsystem offloaded even more than IBM channel
    programs could provide. It was fire-and-forget from the MCP
    perspective: e.g. reading a set of cards or a bunch of sectors
    was one instruction that initiated a high-level operation (read
    card/cards, print line/lines, read sector/sectors, write sector/sectors,
    backspace tape, etc.) and the hardware took care of all the fiddly
    little details.

    I can believe that Burroughs I/O was more flexible but IBM 360
    channels could run channel programs that could be arbitrarily long and
    had loops. If you wanted to write a channel program to read a dozen
    cards or read all the records on a disk track, that wasn't hard. There
    were even some self-modifying channel programs that were a pain to
    virtualize on CP/67.
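
    To make the shape of such a channel program concrete, here is a rough
    C sketch of a two-CCW chain, a READ command-chained into a TIC that
    loops back to the start. The format-0 CCW layout and the command codes
    (0x02 read, 0x08 Transfer In Channel) are written from memory and
    should be checked against the Principles of Operation rather than
    trusted; byte counts are big-endian on the real machine:

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint8_t  cmd;      /* command code                            */
        uint8_t  addr[3];  /* 24-bit data (or CCW) address            */
        uint8_t  flags;    /* 0x80 = chain data, 0x40 = chain command */
        uint8_t  unused;
        uint16_t count;    /* byte count                              */
    } CCW;

    enum { CMD_READ = 0x02, CMD_TIC = 0x08, FLAG_CC = 0x40 };

    static void set_addr(CCW *ccw, uint32_t a)
    {
        ccw->addr[0] = (a >> 16) & 0xff;
        ccw->addr[1] = (a >> 8)  & 0xff;
        ccw->addr[2] = a & 0xff;
    }

    /* Build "read a card, then loop back": a READ with command chaining
       followed by a TIC pointing at the first CCW.  Real programs would
       end the loop from status or by modifying a CCW in flight, which is
       exactly the self-modification that made CP/67's life hard. */
    void build_loop(CCW chain[2], uint32_t buf_addr, uint32_t chain_addr)
    {
        memset(chain, 0, 2 * sizeof(CCW));
        chain[0].cmd   = CMD_READ;
        set_addr(&chain[0], buf_addr);
        chain[0].flags = FLAG_CC;
        chain[0].count = 80;              /* one card image */
        chain[1].cmd   = CMD_TIC;
        set_addr(&chain[1], chain_addr);  /* back to CCW 0 */
    }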

    Modern server-grade I/O hardware is more along the fire-and-forget model than bit-twiddling models from the 8086 timeframe. Even SATA (which
    is more capable than IDE) is fairly high level, as is FC and
    NVMe. Server-grade NICs are also pretty capable and require
    far fewer interrupts than early NICs to transfer a given amount
    of data.

    Yup, now it's all channels all the time.



    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Fri May 31 00:52:47 2024
    According to Scott Lurndal <slp53@pacbell.net>:
    I think it was an obvious idea at the time, even before IBM started

    The Burroughs B100/200/300 systems were just that, multiple models
    with different performance using a common cpu architecture. Early
    1960's.

    I'm looking at the B100/200/300 manuals on bitsavers and I dunno.

    For one thing it appears that the models were the same machine adding
    more hardware components as you went up the scale. More importantly
    the larger machines had more instructions than the smaller ones. You
    could run B100 code on a B300 but not vice versa.

    IBM made the decimal and floating point instruction sets optional on
    the smaller models but if you got a 360/30 with both (the universal
    instruction set) it had precisely the same instruction set as a 360/65
    or /75. You could write code on a /75 and so long as there was enough
    memory and it didn't have timing dependencies you could count on it to
    run on a /30.

    The big innovation in the 360 architecture was that the models were
    not just upward compatible, which was familiar from newer machines
    running code for older machines, but also downward compatible. They
    could do all their software development on a /65 and not have to worry
    about whether it would work on a /50 or /40 or /30 or the later /25
    and /22. If it worked on one it would work on all of them.

    There were a few minor variations like the 360/20 which was a 16 bit
    subset with simpler I/O, the 360/44 which was a scientific subset
    with realtime extensions and the 360/91 which had slightly different
    floating point rounding and some strangeness with imprecise
    interrupts, but even there, there was a great deal of compatibility.

    I wrote Fortran programs on the 360/91 using the same Fortran G and H
    compilers that ran on every other model and they worked the same.



    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to John Levine on Fri May 31 12:46:46 2024
    On Fri, 31 May 2024 00:37:28 -0000 (UTC)
    John Levine <johnl@taugh.com> wrote:

    According to Scott Lurndal <slp53@pacbell.net>:
    Among the many differences between IBM and DEC computers was that
    IBM's had channels ...

    Burroughs I/O subsystem offloaded even more than IBM channel
    programs could provide. It was fire and forget from the MCP
    perspective (e.g. read a set of cards or read a bunch of sectors
    was one instruction that initiated a high level operation (read
    card/cards, print line/lines, read sector/sectors, write
    sector/sectors, backspace tape, etc) and the hardware took care of
    all the fiddley little details.

    I can believe that Burroughs I/O was more flexible but IBM 360
    channels could run channel programs that could be arbitrarily long and
    had loops. If you wanted to write a channel program to read a dozen
    cards or read all the records on a disk track, that wasn't hard. There
    were even some self-modifying channel programs that were a pain to
    virtualize on CP/67.

    Modern server-grade I/O hardware is more along the fire-and-forget
    model than bit-twiddling models from the 8086 timeframe. Even SATA
    (which is more capable than IDE) is fairly high level, as is FC and
    NVMe. Server-grade NICs are also pretty capable and require
    far fewer interrupts than early NICs to transfer a given amount
    of data.

    Yup, now it's all channels all the time.




    I don't think so.
    The processor and the rest of the hardware on a server-grade NIC are
    more like an IBM PP than like an IBM CP.
    There were attempts to use CP-like functionality in non-mainframe
    computers, sometimes with some level of initial success. Even early IBM
    PCs had host-side DMA channels. But long term all such attempts
    [outside of mainframes] failed. On the other hand, PP-like things are
    successful. The distinctions between a CP and a PP are two:
    1. Physical. Which side of the I/O bus does it sit on?
    2. Responsibility. Who writes the programs that run on this intelligent
    processing element? The OS and application programmer (seen as the same
    in this particular case), or the device manufacturer? And at which level
    is the whole thing standardized: internal instructions or bus
    transfers/packets?

    I am not happy about the 2nd part of my definition, but right now I
    can't formulate it better.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Levine on Fri May 31 13:09:00 2024
    John Levine <johnl@taugh.com> writes:
    According to Scott Lurndal <slp53@pacbell.net>:
    Among the many differences between IBM and DEC computers was that
    IBM's had channels ...

    The Burroughs I/O subsystem offloaded even more than IBM channel
    programs could provide. It was fire-and-forget from the MCP
    perspective: e.g. reading a set of cards or a bunch of sectors
    was one instruction that initiated a high-level operation (read
    card/cards, print line/lines, read sector/sectors, write sector/sectors,
    backspace tape, etc.) and the hardware took care of all the fiddly
    little details.

    I can believe that Burroughs I/O was more flexible but IBM 360
    channels could run channel programs that could be arbitrarily long and
    had loops. If you wanted to write a channel program to read a dozen
    cards or read all the records on a disk track, that wasn't hard. There
    were even some self-modifying channel programs that were a pain to
    virtualize on CP/67.

    To read a dozen cards on medium systems simply meant providing
    a 960 byte buffer for the "Read Cards" I/O descriptor.

    The Burroughs disks were always sector-based. There was generally no
    need to read a whole track (but if necessary, specifying
    a track-sized buffer when initiating a READ SECTORS operation
    was sufficient to the task). The common disk I/O size was
    based on the file characteristics (record and block sizes,
    which varied on a per-file basis).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to John Levine on Fri May 31 14:10:47 2024
    John Levine wrote:

    According to Scott Lurndal <slp53@pacbell.net>:
    Among the many differences between IBM and DEC computers was that
    IBM's had channels ...

    Burroughs I/O subsystem offloaded even more than IBM channel
    programs could provide. It was fire and forget from the MCP
    perspective (e.g. read a set of cards or read a bunch of sectors
    was one instruction that initiated a high level operation (read
    card/cards, print line/lines, read sector/sectors, write
    sector/sectors, backspace tape, etc) and the hardware took care of
    all the fiddley little details.

    I can believe that Burroughs I/O was more flexible but IBM 360
    channels could ran channel progarms that could be arbitrarily long and
    had loops. If you wanted to write a channel program to read a dozen
    cards or read all the records on a disk track, that wasn't hard. There
    were even some self-modifying channel programs that were a pain to
    virtualize on CP/67.


    While IBM's channels did provide a lot of flexibility, that flexibility
    came at a tremendous cost which, in the fullness of time, proved to be a
    bad tradeoff. I think it is unfair to compare IBM's mainframe
    implementation to a DEC mini, but it is fair to compare it to the
    other contemporaneous mainframe systems. I can't speak to paper
    peripherals, but I can about storage peripherals.

    We have discussed some of the problems with CKD disks. Others,
    including at least Univac, Burroughs (both large and medium scale) and
    CDC used disks with fixed block lengths. I don't know enough about
    Honeywell, nor NCR to comment on those. Sending a command to the
    channel was straight forward. On the Univac, for example, you executed
    a Load Function in Channel instruction to send a command to the
    channel, followed by a Load input or Output channel with gave the
    memory address and length of the data. The actual function was
    different depending upon the device, but was typically one or two words.

    You mentioned reading all the records on a track. With CKD, you had to
    know how many records that was, and there was work by the channel
    communicating with the disk controller for each record. With fixed
    length blocks, you could just specify the number of blocks you wanted
    in the initial command and would get interrupted when all of them had
    been transferred.

    With tape, I don't see an advantage in being able to read multiple
    blocks at once, as if you had the memory to do that, it would have been
    better to just write longer blocks to the tape and get better tape untilization.

    And, we have mentioned the bad decision to allow key searches on the
    disk. This persisted into the 1990s, since PDSs used key searches for
    members. A PDS with many members could tie up the channel for multiple
    disk rotations at a time, which was only alleviated by the fast PDS
    search capability, which, while it didn't speed up the (linear) search,
    at least allowed the channel to be freed up during the search.

    IBM tried to get away from CKD, supporting the fixed-block 3370 for VM
    and DOS, but supporting it under MVS was too big a lift, and it died.
    Today, of course, CKD (actually ECKD, Extended CKD, which fixed some of
    the problems) is emulated in the disk controller using industry-standard
    fixed-block disks.


    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)