• scale-out of OS2200

    From Kurt Duncan@21:1/5 to All on Thu Jun 29 18:33:43 2023
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

    Spin up one or more database instances as needed.

    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.

    Same thing with BIS.

    Then you would truly have cloud-native...ish OS2200. Thoughts?
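
    The "more than, say, 100 users" rule above is just a scale-out control loop. A minimal sketch, assuming a hypothetical orchestrator object (none of the service calls below name a real API; Python is used only for illustration):

        import time

        MAX_USERS_PER_INSTANCE = 100   # the threshold suggested above

        def autoscale(service, poll_seconds=30):
            # Naive scale-out loop: add an instance whenever the current
            # instances are carrying more users than the threshold allows,
            # and never scale below one instance.
            while True:
                users = service.current_user_count()      # hypothetical API
                instances = service.instance_count()      # hypothetical API
                if users > MAX_USERS_PER_INSTANCE * instances:
                    service.spin_up_instance()            # hypothetical API
                elif instances > 1 and users <= MAX_USERS_PER_INSTANCE * (instances - 1):
                    service.spin_down_instance()          # hypothetical API
                time.sleep(poll_seconds)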

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Kurt Duncan on Fri Jun 30 07:30:21 2023
    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.



    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    Does this mean that a file name has to specify somehow which MFD
    instance it is in, or does a run specify which instance all files that
    run references belong to? What if a run wants to access a file in a
    different instance? Or do you mean multiple copies of the same
    information? In that case, updates have to be propagated to multiple instances. Both are messy, and what do you gain?



    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.


    Sounds like early CICS on IBM S/360. Multiple "instances" of CICS, each
    managing its own set of transactions. It is costly performance-wise, as
    you are doing CPU dispatching twice, once in the OS and once in the CICS
    instance. There is a reason why TIP tended to outperform CICS.


    Spin up one or more database instances as needed.

    What is an instance here? The common code to handle the database? Why
    waste the space? The user code? Already done. Again, I am not sure
    what you mean here.



    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.

    One of the really nice things about OS/2200 is that batch and demand
    share so much of the same facilities, both within the OS, and things
    like ECL. What is the advantage of separating and duplicating them?
    And what does adding another instance of a demand service if one gets
    over 100 users buy you?


    Same thing with BIS.

    IIRC BIS is what used to be called MAPPER. If so, I think you can have multiple Mapper runs open simultaneously. I don't know if anyone does
    this, or why.


    Then you would truly have cloud-native...ish OS2200. Thoughts?

    My initial thoughts are that it seems like a lot of work for minimal
    benefit. But I freely admit, I don't fully understand your proposal. :-(

    One further note. Your talk of "cloudish" could be sort of like
    multiple 2200s in a shared disk environment. I don't know if such configurations are still supported.

    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kurt Duncan@21:1/5 to Stephen Fuld on Fri Jun 30 09:18:16 2023
    On Friday, June 30, 2023 at 8:30:25 AM UTC-6, Stephen Fuld wrote:
    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.
    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.
    Does this mean that a file name has to specify somehow which MFD
    instance it is in, or does a run specify which instance all files that
    run references belong to? What if a run wants to access a file in a different instance? Or do you mean multiple copies of the same
    information? In that case, updates have to be propagated to multiple instances. Both are messy, and what do you gain?
    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.
    Sounds like early CICS on IBM S/360. Multiple "instances" of CICS, each managing its own set of transactions. It is costly performance-wise, as
    you are doing CPU dispatching twice, once in the OS and once in the CICS instance. There is a reason why TIP tended to outperform CICS.
    Spin up one or more database instances as needed.
    What is an instance here? The common code to handle the database? Why
    waste the space? The user code? Already done. Again, I am not sure
    what you mean here.
    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.
    One of the really nice things about OS/2200 is that batch and demand
    share so much of the same facilities, both within the OS, and things
    like ECL. What is the advantage of separating and duplicating them?
    And what does adding another instance of a demand service if one gets
    over 100 users buy you?


    Same thing with BIS.

    IIRC BIS is what used to be called MAPPER. If so, I think you can have multiple Mapper runs open simultaneously. I don't know if anyone does
    this, or why.
    Then you would truly have cloud-native...ish OS2200. Thoughts?
    My initial thoughts are that it seems like a lot of work for minimal benefit. But I freely admit, I don't fully understand your proposal. :-(

    One further note. Your talk of "cloudish" could be sort of like
    multiple 2200s in a shared disk environment. I don't know if such configurations are still supported.

    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    Each instance boots from/into its local MFD (think STD#Q*F...).
    For TIP, the registered files would be in {something}#Q*F which is, yes, a shared MFD.
    But for transaction world, if you can live in different application contexts, then you can live in different shared directories for the TIP/database files.
    One TIP instance runs from the Acct shared MFD, five TIP instances from the ShoppingCart shared MFD, two TIP instances from... you get the idea.
    When you have a sale on Mary-Lou Retton action figures, you might scale the shopping cart microservices up to eight or ten TIP instances, until the sale is over.
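
    One way to picture the bookkeeping implied here, as a minimal sketch (the contexts come from the example above; "Payroll" and the function names are made up): each shared-MFD application context maps to a desired TIP instance count that operations can bump for the sale and drop afterward.

        # Target TIP instance counts per shared-MFD application context.
        desired_instances = {
            "Acct": 1,
            "ShoppingCart": 5,
            "Payroll": 2,        # hypothetical third context
        }

        def scale(context, count):
            # Record a new target; a reconciler (not shown) would start or
            # stop TIP instances against that context's shared MFD.
            desired_instances[context] = count

        scale("ShoppingCart", 10)   # during the action-figure sale
        scale("ShoppingCart", 5)    # back to normal when it ends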

    Developers would have their own sandboxes in the local MFD, possibly snapshotted each night or whatever/whenever, with pushes to production going to
    one common shared MFD.

    The point of cloud-native is that you break up your monolithic... whatever... into micro-services, which you can then scale out... into multiple instances of the service.
    If one service crashes, the others continue, *and* your spend is completely dependent upon your usage. So.... you don't pay for all the cycles you don't use during the evening, or whatever.
    And... if you suddenly have a huge spike in usage, you can push a button and have enough capacity to deal with it.

    Batch/Demand using the same facilities is really just a matter of using a lot of the same OS code. Which can be broken down into modules, and linked into the batch executable and into the demand executable.
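
    As a toy illustration of that factoring (module and function names are hypothetical), the shared code becomes one module that both executables pull in:

        # ecl_common.py -- one shared module used by both services

        def parse_ecl(statement: str):
            # Split an ECL image such as '@ASG,T WORK.' into its
            # verb+options token and the remainder of the statement.
            token, _, remainder = statement.lstrip("@").partition(" ")
            return token, remainder.strip()

        # batch_service.py and demand_service.py would each just do:
        #     from ecl_common import parse_ecl
        # so the same code is linked into both executables.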

    You already have something of a concept of instances in terms of recovery applications (and the whole concept of IRU is really the main hurdle... I think...).

    The why is to allow *some* existing applications to migrate into the micro-services cloud world, with only a small rewrite, and hopefully little to no re-architecting, and to relieve the monolithic systems of the load of lots of developers, by moving
    those apps into scheduled-as-needed u-services.

    Unisys does metering, but you still have to have the whole system on-prem. Unless you do the cloud thing, but then you are still lugging around a whole virtual mainframe...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David W Schroth@21:1/5 to sfuld@alumni.cmu.edu.invalid on Fri Jun 30 22:16:05 2023
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
    <sfuld@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.


    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    Does this mean that a file name has to specify somehow which MFD
    instance it is in, or does a run specify which instance all files that
    run references belong to? What if a run wants to access a file in a different instance? Or do you mean multiple copies of the same
    information? In that case, updates have to be propagated to multiple instances. Both are messy, and what do you gain?



    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.


    Sounds like early CICS on IBM S/360. Multiple "instances" of CICS, each managing its own set of transactions. It is costly performance-wise, as
    you are doing CPU dispatching twice, once in the OS and once in the CICS instance. There is a reason why TIP tended to outperform CICS.

    I tend to believe the per-Application-Group approach does a better job
    of partitioning transactions.


    Spin up one or more database instances as needed.

    What is an instance here? The common code to handle the database? Why
    waste the space? The user code? Already done. Again, I am not sure
    what you mean here.

    Each Application Group is a different DMS instance. I forget just how
    many Application Groups are currently supported; I'm pretty sure it is
    more than 10.



    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.

    One of the really nice things about OS/2200 is that batch and demand
    share so much of the same facilities, both within the OS, and things
    like ECL. What is the advantage of separating and duplicating them?
    And what does adding another instance of a demand service if one gets
    over 100 users buy you?


    Same thing with BIS.

    IIRC BIS is what used to be called MAPPER. If so, I think you can have multiple Mapper runs open simultaneously. I don't know if anyone does
    this, or why.


    Then you would truly have cloud-native...ish OS2200. Thoughts?

    My initial thoughts are that it seems like a lot of work for minimal
    benefit. But I freely admit, I don't fully understand your proposal. :-(

    One further note. Your talk of "cloudish" could be sort of like
    multiple 2200s in a shared disk environment. I don't know if such configurations are still supported.

    As far as I know, we still support multiple 2200s in a shared disk
    environment. I typically have demand sessions open on RS06, RS08,
    RS15, and RS36 when I'm working. My Exec builds run from shared file
    sets, RS06 PRIMUS (a DMS application) runs on all systems in the RS06
    complex. And I know there are other 2200 systems in the RS06 complex.

    Regards,

    David W. Schroth

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kurt Duncan@21:1/5 to David W Schroth on Fri Jun 30 21:19:53 2023
    On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld <sf...@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.

    I do have *some* small insight into OS2200 - not great, but some.
    Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
    I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum
    amount of re-coding and/or re-architecting.

    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.




    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    Does this mean that a file name has to specify somehow which MFD
    instance it is in, or does a run specify which instance all files that
    run references belong to? What if a run wants to access a file in a different instance? Or do you mean multiple copies of the same information? In that case, updates have to be propagated to multiple instances. Both are messy, and what do you gain?



    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.


    Sounds like early CICS on IBM S/360. Multiple "instances" of CICS, each managing its own set of transactions. It is costly performance-wise, as
    you are doing CPU dispatching twice, once in the OS and once in the CICS instance. There is a reason why TIP tended to outperform CICS.
    I tend to believe the per-Application-Group approach does a better job of partitioning transactions.


    Spin up one or more database instances as needed.

    What is an instance here? The common code to handle the database? Why waste the space? The user code? Already done. Again, I am not sure
    what you mean here.
    Each Application Group is a different DMS instance. I forget just how
    many Application Groups are currently supported; I'm pretty sure it is
    more than 10.



    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.

    One of the really nice things about OS/2200 is that batch and demand
    share so much of the same facilities, both within the OS, and things
    like ECL. What is the advantage of separating and duplicating them?
    And what does adding another instance of a demand service if one gets
    over 100 users buy you?


    Same thing with BIS.

    IIRC BIS is what used to be called MAPPER. If so, I think you can have multiple Mapper runs open simultaneously. I don't know if anyone does this, or why.


    Then you would truly have cloud-native...ish OS2200. Thoughts?

    My initial thoughts are that it seems like a lot of work for minimal benefit. But I freely admit, I don't fully understand your proposal. :-(

    One further note. Your talk of "cloudish" could be sort of like
    multiple 2200s in a shared disk environment. I don't know if such configurations are still supported.
    As far as I know, we still support multiple 2200s in a shared disk environment. I typically have demand sessions open on RS06, RS08,
    RS15, and RS36 when I'm working. My Exec builds run from shared file
    sets, RS06 PRIMUS (a DMS application) runs on all systems in the RS06 complex. And I know there are other 2200 systems in the RS06 complex.

    Regards,

    David W. Schroth

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lewis Cole@21:1/5 to Kurt Duncan on Fri Jun 30 22:30:05 2023
    On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:
    < snip >
    Let me clarify: I am not suggesting
    breaking up OS2200 into pieces, nor
    deploying lots of copies of it.
    I'm suggesting completely rewriting
    enough of the various parts of OS2200
    to allow what had been OS2200
    applications to begin looking more
    like cloud-native solutions...
    containers, all that fun stuff; and
    doing it in such a way as to require
    a minimum amount of re-coding and/or
    re-architecting.

    Okay, so you're trying to come up with some way or ways to effectively keep OS2200 alive and perhaps even growing. That's nice.
    (I don't understand why you think that "containers" should be a good selling point, since a 2200 system VM should be able to do just about anything you might want to do with a container, and it just so happens that a 2200 system VM already exists [AKA PS2200] and doesn't require much, if any, OS2200 modification to work. Maybe I've missed something, but I wasn't aware of a significant up-tick in OS2200 acceptance and use because of PS2200.)

    It is certainly a mind-bender for a
    person steeped in monolithic OS's
    and hardware (such as I am), to
    understand and adapt to the
    cloud/container world.
    But in doing so - as almost the
    entire rest of the non-mainframe
    world is doing - I'd hate to see
    the OS2200 architecture lose even
    more ground.
    I'm trying to think of unique ways
    in which the 2200 eco-sphere
    -- from a customer point of view --
    might be altered such that it can
    compete in this new goofy
    frustrating world.
    < snip >

    I was tempted to ask what thing(s) you think is preventing OS2200 and its applications from being more prevalent -- something that doesn't involve buzzwords like "containers" and "cloud based", but I think I'd like to try a thought experiment and see
    where it leads.
    Suppose that OS2200 was completely re-written to do something "magical" -- something that no other OS can do right now -- being able to potentially scale to billions and billions of processors efficiently like Barrelfish was hoping to show the way toward
    (i.e. the Multi-Kernel approach), for example.
    In your humble opinion, why would anyone in their right mind jump on to using the new and improved OS2200 rather than wait for someone to hopefully come up with a way to get Linux to be able to do the same thing(s) or actually take an active part to make
    it do the same thing(s)?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kurt Duncan@21:1/5 to Lewis Cole on Sat Jul 1 08:56:53 2023
    On Friday, June 30, 2023 at 11:30:08 PM UTC-6, Lewis Cole wrote:
    On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:
    < snip >
    Let me clarify: I am not suggesting
    breaking up OS2200 into pieces, nor
    deploying lots of copies of it.
    I'm suggesting completely rewriting
    enough of the various parts of OS2200
    to allow what had been OS2200
    applications to begin looking more
    like cloud-native solutions...
    containers, all that fun stuff; and
    doing it in such a way as to require
    a minimum amount of re-coding and/or
    re-architecting.
    Okay, so you're trying to come up with some way or ways to effectively keep OS2200 alive and perhaps even growing. That's nice.
    (I don't understand why you think that "containers" should be a good selling point, since a 2200 system VM should be able to do just about anything you might want to do with a container, and it just so happens that a 2200 system VM already exists [AKA PS2200] and doesn't require much, if any, OS2200 modification to work.
    Maybe I've missed something, but I wasn't aware of a significant up-tick in OS2200 acceptance and use because of PS2200.)
    It is certainly a mind-bender for a
    person steeped in monolithic OS's
    and hardware (such as I am), to
    understand and adapt to the
    cloud/container world.
    But in doing so - as almost the
    entire rest of the non-mainframe
    world is doing - I'd hate to see
    the OS2200 architecture lose even
    more ground.
    I'm trying to think of unique ways
    in which the 2200 eco-sphere
    -- from a customer point of view --
    might be altered such that it can
    compete in this new goofy
    frustrating world.
    < snip >

    I was tempted to ask what thing(s) you think is preventing OS2200 and its applications from being more prevalent -- something that doesn't involve buzzwords like "containers" and "cloud based", but I think I'd like to try a thought experiment and see
    where it leads.
    Suppose that OS2200 was completely re-written to do something "magical" -- something that no other OS can do right now -- being able to potentially scale to billions and billions of processors efficiently like Barrelfish was hoping to show the way
    toward (i.e. the Multi-Kernel approach), for example.
    In your humble opinion, why would anyone in their right mind jump on to using the new and improved OS2200 rather than wait for someone to hopefully come up with a way to get Linux to be able to do the same thing(s) or actually take an active part to
    make it do the same thing(s)?

    I'm suggesting that someone might take a more active role in making the architectures and facilities which, buzzwords or not, actually do define the state of general enterprise computing, available to those people who are still in the world of TIP/HVTIP
    and RDMS, and batch processing. So that they can leverage at least some major portion of their existing code, while moving toward a computing paradigm which is in use today, has been for a number of years, and (for better or worse) will be for some number
    of years ahead.

    I am not suggesting billions of processors. I am unsure that, if massive processing or IO is in a customer's application mix, anything other than a 2200 would satisfy them. But I don't think the entire world of OS2200 users requires unobtainium. And
    I am not suggesting scaling an OS. I am suggesting providing the minimal environment necessary for a particular mix of (e.g., TIP) applications to function, and making that environment operate in exactly the same world in which so many non-OS2200
    applications operate.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lewis Cole@21:1/5 to Kurt Duncan on Sat Jul 1 09:59:33 2023
    On Saturday, July 1, 2023 at 8:56:55 AM UTC-7, Kurt Duncan wrote:
    On Friday, June 30, 2023 at 11:30:08 PM UTC-6, Lewis Cole wrote:
    On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:
    < snip >
    I'm suggesting that someone might
    take a more active role in making
    the architectures and facilities
    which, buzzwords or not, actually
    do define the state of general
    enterprise computing, available
    to those people who are still in
    the world of TIP/HVTIP and RDMS,
    and batch processing. So that
    they can leverage at least some
    major portion of their existing
    code, while moving toward a
    computing paradigm which is in
    use today, has been for a number
    of years and (for better or worse)
    will be for some number of years
    ahead.

    I think we may be talking past each other.

    While you seem to be looking for ways to make things "better"/"easier" for software developers out in the field, ISTM that "better"/"easier" doesn't mean spit unless you can make an economic argument to convince people who are "in charge" (who might well
    not be software developers) that the changes you are suggesting are A Good Thing (meaning worth money).

    I am not suggesting billions of
    processors. I am unsure that, if
    massive processing or IO is in a
    customer's application mix,
    anything other than a 2200 would
    satisfy them. [...]

    Supporting billions and billions (so to speak) of processors is something that (IMHO) is coming "Real Soon Now" because It Has To.
    The shared memory paradigm depends on the underlying hardware keeping a cached, shared view of memory consistent quickly and efficiently, and hardware has been getting ever closer to not being able to make that happen, especially as the
    number of processors goes up.
    Barrelfish was/is an attempt to address this problem, and while it has inspired other OS developers to see what they can do with the multi-kernel paradigm, neither Barrelfish nor any of the things it has inspired has come anywhere close to being "mainstream"
    yet, despite 10+ years of effort.
    Given that it was a research project, this is to be expected, but I think it clearly shows that there can be a LONG lead time before any sort of pay off shows up even for something that Has To be coming.

    ISTM that the same is likely true with respect to the suggestions you are making.
    Even if you are right about the things you are waving your arms at, it might take a long time before anyone, not least of all the Company, sees any benefit.
    The Company doesn't have the bodies that it used to, and so I can't really see why they (or anyone else) would bother doing anything along the lines of what you're suggesting unless it's going to almost certainly add to the bottom line "Real Soon Now".

    ISTM that you're asking for changes to make things "better"/"easier" for someone, but not someone who can/will cough up money in exchange, and I'm asking you to wave your arms at an economic argument that rebuts this.

    [...] But I don't think the entire
    world of OS2200 users requires
    unobtainium. And I am not suggesting
    scaling an OS. I am suggesting
    providing the minimal environment
    necessary for a particular mix of
    (e.g., TIP) applications to function,
    and making that environment operate
    in exactly the same world in which so
    many non-OS2200 applications operate.

    Once Upon a Time, many moons ago, a customer sent in a SUR (Software User Report) asking for the Company to make some change.
    The response to the SUR was basically, "Send Money".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David W Schroth@21:1/5 to kurtaduncan@gmail.com on Sat Jul 8 13:04:37 2023
    On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
    <kurtaduncan@gmail.com> wrote:

    On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
    <sf...@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.

    I do have *some* small insight into OS2200 - not great, but some.

    I regard myself as reasonably cognizant of at least *some* of your
    exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
    in 1978.

    Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
    I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a minimum
    amount of re-coding and/or re-architecting.


    Which leads to the question I can never really find the answer for -
    what is a good reason for containers?

    As best I can tell, containers appear to be a response to Open Source
    Software shortcomings with a heavy overlay of "This isn't a bug, it's
    a feature" marketing.

    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.


    <snip>

    Regards,

    David W. Schroth

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David W Schroth on Sat Jul 8 22:46:52 2023
    David W Schroth <davidschroth@harrietmanor.com> writes:
    On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
    <kurtaduncan@gmail.com> wrote:

    Which leads to the question I can never really find the answer for -
    what is a good reason for containers?

    In one word, devops.

    Makes for simple software deployment (and isolation) in large
    data centers.

    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.

    Managing one mainframe (or four) running batch production or on-line transaction processing is not a huge job with much dynamic management
    activity.

    Managing a large data center with applications intended to scale with
    usage is where containers come into play. Simply deploy the container
    to any host in the data center and it's up and running. All done
    either automatically based on demand, or as commanded.

    The application is isolated within the container both from the
    system as well as the other applications. A security benefit.
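
    For concreteness, a minimal sketch of that kind of deployment using the Python Docker SDK (the image name and port are made up; an orchestrator would normally pick the host and do this automatically):

        import docker   # pip install docker

        client = docker.from_env()

        # Start one instance of a (hypothetical) service image; the
        # container is isolated from the host and from other containers.
        container = client.containers.run(
            "example/tip-service:latest",    # hypothetical image name
            detach=True,
            ports={"8080/tcp": None},        # let the host pick a free port
            restart_policy={"Name": "on-failure"},
        )
        print(container.short_id)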

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kurt Duncan@21:1/5 to David W Schroth on Mon Jul 10 07:25:59 2023
    On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
    <kurta...@gmail.com> wrote:
    On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
    <sf...@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.

    I do have *some* small insight into OS2200 - not great, but some.
    I regard myself as reasonably cognizant of at least *some* of your
    exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
    in 1978.
    Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
    I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a
    minimum amount of re-coding and/or re-architecting.

    Which leads to the question I can never really find the answer for -
    what is a good reason for containers?

    As best I can tell, containers appear to be a response to Open Source Software shortcomings with a heavy overlay of "This isn't a bug, it's
    a feature" marketing.
    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.


    <snip>

    Regards,

    David W. Schroth

    Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS...
    is evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application
    groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/
    whatever.

    WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until
    things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those
    instances were on Amazon EKS or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.
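
    The arithmetic behind that claim is simple enough to sketch (the hourly rate is made up; only the hour ratio matters):

        HOURS_24_7 = 24 * 7   # 168 hours/week for an always-on, on-prem box
        HOURS_8_5 = 8 * 5     # 40 hours/week for an on-demand dev environment

        rate = 10.0           # hypothetical $/hour for one instance
        print(HOURS_8_5 / HOURS_24_7)            # ~0.24: pay about 24% of always-on
        print((HOURS_24_7 - HOURS_8_5) * rate)   # 1280.0: $/week not spent idle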

    For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely and non-coupled systems.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mperew@gmail.com@21:1/5 to Kurt Duncan on Mon Jul 10 10:51:22 2023
    On Monday, July 10, 2023 at 7:26:01 AM UTC-7, Kurt Duncan wrote:
    On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
    <kurta...@gmail.com> wrote:
    On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
    <sf...@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.

    I do have *some* small insight into OS2200 - not great, but some.
    I regard myself as reasonably cognizant of at least *some* of your exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
    in 1978.
    Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
    I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a
    minimum amount of re-coding and/or re-architecting.

    Which leads to the question I can never really find the answer for -
    what is a good reason for containers?

    As best I can tell, containers appear to be a response to Open Source Software shortcomings with a heavy overlay of "This isn't a bug, it's
    a feature" marketing.
    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.


    <snip>

    Regards,

    David W. Schroth
    Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS...
    is evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application
    groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/
    whatever.

    WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until
    things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those
    instances were on Amazon EKS or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.

    For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely and non-coupled systems.

    There is a somewhat new acronym being tossed around for server security.

    DIE: Distributed, Immutable, and Ephemeral

    Distributed: Work can and should be spread across multiple devices for <reasons>. Here in the mainframe community we can argue about that, but we all know the notion is out there and dominant.

    Immutable: You can't change settings, code, databases, etc. You install a known secure configuration and set of software (to the degree that vulnerabilities are known and remediated) and nothing can cause configuration drift on that device. Malware
    can't be installed. Configuration settings can't be changed to open up security holes. And so on.

    Ephemeral: When it is time to make a controlled change, new VMs are spun up and the immutable image is installed there. The old images are then destroyed.

    Containers are a way of deploying those software-defined images.
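
    A minimal sketch of the immutable/ephemeral rollout described above (all of the cluster calls are hypothetical placeholders): bring up a fresh generation from the new image first, then destroy the old one.

        def rolling_replace(cluster, new_image):
            # Never mutate a running VM: launch replacements from the new
            # immutable image, wait until they are healthy, then retire
            # the old generation.
            old = list(cluster.instances())        # hypothetical API
            new = [cluster.launch(new_image) for _ in old]
            for vm in new:
                cluster.wait_healthy(vm)           # hypothetical API
            for vm in old:
                cluster.destroy(vm)                # old images destroyed
            return new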

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Kurt Duncan on Mon Jul 10 16:49:34 2023
    On 7/10/2023 7:25 AM, Kurt Duncan wrote:
    On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
    <kurta...@gmail.com> wrote:
    On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
    <sf...@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.

    I do have *some* small insight into OS2200 - not great, but some.
    I regard myself as reasonably cognizant of at least *some* of your
    exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
    in 1978.
    Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
    I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a
    minimum amount of re-coding and/or re-architecting.

    Which leads to the question I can never really find the answer for -
    what is a good reason for containers?

    As best I can tell, containers appear to be a response to Open Source
    Software shortcomings with a heavy overlay of "This isn't a bug, it's
    a feature" marketing.
    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.


    <snip>

    Regards,

    David W. Schroth

    Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS...
    is evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application
    groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/
    whatever.

    WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until
    things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If those
    instances were on Amazon EKS or whatever, then we would only pay for the time we used *plus* we would not need to pay for the data center costs 24/7 for something used 8/5.

    For batch mode... spin up what you need when you need it. Same economic argument as the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely and non-coupled systems.

    ISTM that there is some conflation of two things here. One is the
    ability to provide greater isolation between applications than a single
    OS can provide (though the single OS side could argue that a well
    designed OS could provide all that you need); the other is the ability to adjust
    compute capacity on the fly, to meet varying demand.

    If you really wanted more separation among instances of OS 2200, I don't
    think there are any technical barriers preventing you from running
    multiple instances of Linux, each with its own copy of the emulator and
    OS/2200, to gain that separation. I know that Unisys modified Linux in
    some way to make OS/2200 run better on it, but I don't know if it is technically possible or not to run multiple copies of the emulator on a
    single Linux, or multiple copies of OS/2200 on a single copy of the
    emulator. That would achieve the separation. But all of that is not
    related to varying the amount of CPU power available.

    No matter what OS you have, if you need more compute capacity, you need additional hardware. A single customer seems to me to be unlikely to
    have additional hardware available "just in case", so that requirement
    is met by some sort of "rent capacity as required" from some company
    that has lots of extra capacity to make available to different customers
    as needed, i.e. a "cloud computing facility". Having that seems to me
    to be independent of what software you run on that cloud. For example,
    assume that a business case could be made for it, I don't think there is
    any substantial technical problem from getting extra X86 computer power
    from a cloud, running Linux on that new CPU with OS2200 on top of that
    Linux.

    Of course, there are business issues that I am not competent to discuss.

    On a related note, regarding the separation issue (again, not the
    varying capacity issue), how is the solution different from something
    like IBM's VM, which has been around since the 1970s?

    And I may be misremembering, but I vaguely recall an aborted attempt by Univac/Sperry to add a VM-like capability in the form of a specialized instruction to aid that, perhaps in the 1110? Am I just having a fever
    dream?

    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lewis Cole@21:1/5 to Stephen Fuld on Mon Jul 10 18:28:53 2023
    On Monday, July 10, 2023 at 4:49:37 PM UTC-7, Stephen Fuld wrote:
    ISTM that there is some conflation of two things here. One is the
    ability to provide greater isolation between applications than a single
    OS can provide (though the single OS side could argue that a well
    designed OS could provide all that you need); the other is the ability to adjust
    compute capacity on the fly, to meet varying demand.

    While containers supposedly provide greater isolation, I don't see that as something that Mr. Duncan is really pushing.
    Instead, ISTM that it's just another selling point to supposedly make the case that implementing something container-like on OS2200 is A Good Thing.
    What I see Mr. Duncan trying to do is to come up with something that will increase OS2200's desirability.

    I can't fault him for that, but I don't see the value of what he's proposing economically.
    I keep remembering when some of the Blue Bell suits came to talk to the troops in the Building 2 cafeteria and one of them basically said, "We can't cost-cut our way to profitability ... we have to grow our markets."
    I took that to mean that if the Company wanted to do more than just barely survive, it had to sell our products (in particular, 1100/2200 hardware) to lots of people who weren't already our customers.

    I still think that's the "correct" strategy, but at this point, I just don't know if there's anyone else to sell to and adding containers to OS2200 doesn't change that.

    If you really wanted more separation among instances of OS 2200, I don't think there are any technical barriers preventing you from running
    multiple instances of Linux each with its own copy of the emulator and OS/2200 to gain that separation. [...]

    I don't think there's anything (aside from cost) preventing running 2200 emulator software directly on Intel hardware with no hypervisor, or on a type-1 (bare metal) hypervisor, rather than on a KVM Linux hypervisor.
    But while doing so might make an emulated 2200 system more secure, I just don't see that as being economically worthwhile either.
    (I've spent some time wondering how difficult it would be to get BOOTBOOT to load up a 2200 emulator on bare metal directly, but that's something that a hobbyist can do, not a for-profit company with a limited body count.)

    [...] I know that Unisys modified Linux in
    some way to make OS/2200 run better on it, but I don't know if it is technically possible or not to run multiple copies of the emulator on a single Linux, or multiple copies of OS/2200 on a single copy of the
    emulator. That would achieve the separation. But all of that is not
    related to varying the amount of CPU power available.

    It is my understanding that the Company has a modified Linux kernel as the hypervisor (which it calls SAIL if you want to look up the Unisys documentation for it) upon which its emulated 2200 software runs.
    Since the Company tweaked it, I would kinda suspect that it's not just a bog-standard KVM Linux.

    No matter what OS you have, if you need more compute capacity, you need additional hardware. A single customer seems to me to be unlikely to
    have additional hardware available "just in case", so that requirement
    is met by some sort of "rent capacity as required" from some company
    that has lots of extra capacity to make available to different customers
    as needed, i.e. a "cloud computing facility". Having that seems to me
    to be independent of what software you run on that cloud. For example,
    assuming that a business case could be made for it, I don't think there is
    any substantial technical problem in getting extra X86 compute power
    from a cloud, running Linux on that new CPU with OS2200 on top of that
    Linux.

    I agree which is why trying to modify OS2200 to support something container-like doesn't make much sense to me.
    ISTM that it might if you had a server farm of 2200s (emulated or real) with a lot of excess compute capacity that you wanted to be able to re-purpose on the fly, but for some reason, I doubt that's what current Unisys customers have laying around.

    Of course, there are business issues that I am not competent to discuss.

    I disagree. I think that people who are actually familiar with the good or service that a company sells are as competent as those who "run" a company but don't have a clue what that company really sells (because everything is a "widget").

    On a related note, regarding the separation issue (again, not the
    varying capacity issue), how is the solution different from something
    like IBM's VM, which has been around since the 1970s?

    If you're referring to how a container is different from a VM, then I would say my understanding is that they are both functionally equivalent, but that containers are supposedly "lighter weight" when it comes to the underlying resources, quicker to spin
    up, and (usually?) faster.
    Meanwhile, VMs are supposedly more secure.
    Keep in mind that I'm just a poor dumb former Bootstrap programmer though who has no experience with containers and so anything I say on the subject should probably be taken with a large salt lick.

    And I may be misremembering, but I vaguely recall an aborted attempt by Univac/Sperry to add a VM-like capability in the form of a specialized instruction to aid that, perhaps in the 1110? Am I just having a fever dream?

    I would think that it would take more than a single instruction to add VM support to the 1100 architecture, although I suppose that one could do it with a VERY modified "execute" instruction.
    FWIW, once upon a time, I recall hearing that some processor (not a Univac/Sperry one) was going to have an "execute alternate architecture" instruction.
    I don't know who was supposedly going to make it or whether or not anything actually became real.
    I gather that IBM's "Future System" CPUs could be reprogrammed to be function specific CPUs.
    Of course, at least some Burroughs machines could be loaded up with different instruction sets for programs written in different HLLs.
    The only thing that kinda-sorta-vaguely strikes me as the same thing was Roanoake.

    If you come up with more details, I would be interested in hearing them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David W Schroth@21:1/5 to sfuld@alumni.cmu.edu.invalid on Tue Jul 11 00:28:10 2023
    On Mon, 10 Jul 2023 16:49:34 -0700, Stephen Fuld
    <sfuld@alumni.cmu.edu.invalid> wrote:

    On 7/10/2023 7:25 AM, Kurt Duncan wrote:
    On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
    <kurta...@gmail.com> wrote:
    On Friday, June 30, 2023 at 9:10:45 PM UTC-6, David W Schroth wrote:
    On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld
    <sf...@alumni.cmu.edu.invalid> wrote:

    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    I am not sure exactly what you are proposing here, nor what the
    advantages of it are. See below for specifics.

    My take is someone has a shaky grasp of how Operating Systems
    (including OS2200) are structured.

    I do have *some* small insight into OS2200 - not great, but some.
    I regard myself as reasonably cognizant of at least *some* of your
    exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
    in 1978.
    Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor deploying lots of copies of it.
    I'm suggesting completely rewriting enough of the various parts of OS2200 to allow what had been OS2200 applications to begin looking more like cloud-native solutions... containers, all that fun stuff; and doing it in such a way as to require a
    minimum amount of re-coding and/or re-architecting.

    Which leads to the question I can never really find the answer for -
    what is a good reason for containers?

    As best I can tell, containers appear to be a response to Open Source
    Software shortcomings with a heavy overlay of "This isn't a bug, it's
    a feature" marketing.
    It is certainly a mind-bender for a person steeped in monolithic OS's and hardware (such as I am), to understand and adapt to the cloud/container world.
    But in doing so - as almost the entire rest of the non-mainframe world is doing - I'd hate to see the OS2200 architecture lose even more ground.
    I'm trying to think of unique ways in which the 2200 eco-sphere -- from a customer point of view -- might be altered such that it can compete in this new goofy frustrating world.


    <snip>

    Regards,

    David W. Schroth

    Your impression of the impetus for containers is shared by a lot of us. Nonetheless, they are certainly in use (Scott's answer is spot on). The combination of ease of rolling out new TIP transactions, married with the advantages of HVTIP and/or RTPS...
    is evocative of the advantages of containers. Having a single OS (not relevant to this discussion really, but) while all the containers are in separate security worlds is also a plus. Maybe... look at it this way. Instead of eight or ten application
    groups, you could have 100 of them, with data atomicity a feature of a combination of an app container and a db container, while execution atomicity is strictly in the domain of the particular container manager and the input/ingress/load-balancer/
    whatever.
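    To make that concrete, here is a minimal, purely hypothetical sketch of the app-container/db-container pairing, using the docker SDK for Python; the image names ("appgroup-db", "appgroup-tip") are placeholders of my own, not real Unisys artifacts:

        # Hypothetical sketch: one private network per "application group",
        # each carrying a paired app container and db container.
        # Requires the docker SDK for Python (pip install docker).
        import docker

        client = docker.from_env()

        def start_app_group(group_id):
            # A dedicated bridge network gives each group's app and database
            # a private channel - a rough analogue of a per-group security world.
            net = client.networks.create(f"appgrp-{group_id}", driver="bridge")
            db = client.containers.run("appgroup-db", detach=True,
                                       name=f"db-{group_id}", network=net.name)
            app = client.containers.run("appgroup-tip", detach=True,
                                        name=f"tip-{group_id}", network=net.name,
                                        environment={"DB_HOST": f"db-{group_id}"})
            return app, db

        # 100 application groups instead of eight or ten:
        for gid in range(100):
            start_app_group(gid)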


    OS2200 currently supports more than 8 or 10 Application Groups, and has
    for some time. I am unaware of any customer using anywhere near the
    maximum number of Application Groups.
    WRT demand mode... back in the dawn of history, we did have performance/capacity issues... we ran production full bore, and the developers suffered. We had a hard time convincing the money people to let us buy another processor and more memory, until
    things got to where we were time-shifting the developers, and even that didn't help. If we had something that let us spin up a dev environment only for as long as the developer needed it, then we wouldn't have to buy additional hardware for 24/7. If
    those instances were on Amazon EKS or whatever, then we would only pay for the time we used *plus* we would not need to pay data center costs 24/7 for something used 8/5.


    Back in the dawn of history, *everybody* had performance/capacity
    issues, including OS1100. This does not generally seem to be the case
    these days.

    For batch mode... spin up what you need when you need it. Same economic argument as for the DEMAND users. And finally... when hardware flakes out, it affects less of your production/dev if you are spread across a lot of loosely coupled (or not coupled at all) systems.




    ISTM that there is some conflation of two things here. One is the
    ability to provide greater isolation between applications than a single
    OS can provide (though the single-OS side could argue that a well
    designed OS could provide all that you need); the other is the ability
    to adjust compute capacity on the fly, to meet varying demand.

    Not to pick on you (I had to respond somewhere), but I'd *really* like
    to see some evidence that containers are more secure than running
    applications on a 2200 or an MCP ayatwm. Or more secure than running applications on IBM kit, for that matter.

    Historically, one could adjust capacity on the fly by using a fully
    configured (as regards processors) system and moving processors
    between "partitions". Most of the time, people didn't bother.

    Much later in the game, the systems had more capacity than sites
    needed, so sites paid for what they used (or, more likely, what they
    thought they would use), with the ability to register a new
    performance key on (relatively) short notice if they underestimated
    how much performance they needed.


    If you really wanted more separation among instances of OS 2200, I don't
    think there are any technical barriers preventing you from running
    multiple instances of Linux, each with its own copy of the emulator and
    OS/2200, to gain that separation. I know that Unisys modified Linux in
    some way to make OS/2200 run better on it, but I don't know whether it is
    technically possible to run multiple copies of the emulator on a single
    Linux, or multiple copies of OS/2200 on a single copy of the emulator.
    Either of those would achieve the separation. But all of that is not
    related to varying the amount of CPU power available.

    No matter what OS you have, if you need more compute capacity, you need
    additional hardware. A single customer seems to me unlikely to have
    additional hardware available "just in case", so that requirement is
    met by some sort of "rent capacity as required" arrangement with some
    company that has lots of extra capacity to make available to different
    customers as needed, i.e. a "cloud computing facility". Having that
    seems to me to be independent of what software you run on that cloud.
    For example, assuming that a business case could be made for it, I
    don't think there is any substantial technical problem in getting extra
    X86 compute power from a cloud and running Linux on that new CPU with
    OS2200 on top of that Linux.
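    For what it is worth, the "rent capacity as required" part is mechanically simple. A hypothetical sketch with AWS EC2 via boto3, where the AMI id is a placeholder for a Linux image carrying the emulator stack:

        # Hypothetical sketch: ask the cloud for more X86 hosts on demand,
        # and give them back when the peak has passed.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        def rent_capacity(count):
            resp = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",   # placeholder AMI
                InstanceType="m5.2xlarge",
                MinCount=count,
                MaxCount=count,
            )
            return [i["InstanceId"] for i in resp["Instances"]]

        ids = rent_capacity(2)
        # Later, when demand drops:
        # ec2.terminate_instances(InstanceIds=ids)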

    Of course, there are business issues that I am not competent to discuss.

    On a related note, regarding the separation issue (again, not the
    varying capacity issue), how is the solution different from something
    like IBM's VM, which has been around since the 1970s?

    And I may be misremembering, but I vaguely recall an aborted attempt by Univac/Sperry to add a VM-like capability in the form of a specialized instruction to aid that, perhaps in the 1110? Am I just having a fever dream??

    IIRC, the capability was designed into a system post 1100/80 and pre
    2200/900, but never saw the light of day.

    Regards,

    David W. Schroth

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Stephen Fuld on Wed Jul 12 14:13:19 2023
    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
    On 7/10/2023 7:25 AM, Kurt Duncan wrote:

    On a related note, regarding the separation issue (again, not the
    varying capacity issue), how is the solution different from something
    like IBM's VM, which has been around since the 1970s?

    A docker container (as an example) is much lighter weight than
    a VM, yet still provides most of the isolation and security features.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lewis Cole on Wed Jul 12 14:16:27 2023
    Lewis Cole <l_cole@juno.com> writes:
    On Monday, July 10, 2023 at 4:49:37 PM UTC-7, Stephen Fuld wrote:

    I don't think there's anything (aside from cost) preventing one from running 2200 emulator software directly on Intel hardware with no hypervisor, or on a type-1 (bare-metal) hypervisor, rather than on KVM under Linux.
    But while doing so might make an emulated 2200 system more secure, I just don't see that as being economically worthwhile either.
    (I've spent some time wondering how difficult it would be to get BOOTBOOT to load up a 2200 emulator on bare metal directly, but that's something that
    a hobbyist can do, not a for-profit company with a limited body count.)

    Actually it should be quite straightforward to put the 2200 emulator
    in a container and simply deploy as many containers as needed (cf. docker).
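    Something like the following, say - a minimal sketch with the docker SDK for Python, where "os2200-emulator" is a placeholder image name (no such public image exists, as far as I know):

        # Hypothetical sketch: N identical emulator containers, each with a
        # fixed CPU and memory budget so they don't trample each other.
        import docker

        client = docker.from_env()

        def scale_emulators(count):
            return [
                client.containers.run("os2200-emulator", detach=True,
                                      name=f"emu-{i}",
                                      nano_cpus=2_000_000_000,  # roughly 2 CPUs
                                      mem_limit="4g")
                for i in range(count)
            ]

        emulators = scale_emulators(4)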

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Kurt Duncan on Thu Jul 20 10:47:29 2023
    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

    Spin up one or more database instances as needed.

    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.

    Same thing with BIS.

    Then you would truly have cloud-native...ish OS2200. Thoughts?

    After thinking about all the comments, ISTM that, without using the
    names, you can do most, if not all of what you are suggesting using
    already existing capabilities, without the need for any modifications to
    the OS. The possible exception (and I am not sure about this and I
    think this is a business issue, not so much a technical one), is if a
    customer wants to avoid any direct hardware and hardware management
    costs by using a commercial cloud service such as AWS.

    But the claimed reason for doing this, to attract more 2200 customers,
    seems dubious to me. While it might be attractive to an existing
    customer, and thus prevent losing that customer, I just don't see it
    attracting any new name customers. There are just not enough
    advantages, and too high a cost for a new name customer to adopt the
    2200 environment.

    Don't get me wrong. I think the 2200 environment has a lot of very nice features.

    I have often said that the problem is with Univac/Sperry/Unisys
    marketing's utter failure to convince the world that 36 is an integral
    power of 2. The world has settled on 8-bit byte addressability. That
    the Unisys development and marketing teams have kept the systems viable
    for so long (I believe the 2200 is by far the oldest, last surviving,
    and still actively marketed 36-bit system) is a tribute to their
    abilities and determination. But it is ultimately doomed, and their job
    now is to stave off that doom for as long as possible, and I think they
    are doing that well.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kurt Duncan@21:1/5 to Stephen Fuld on Thu Jul 20 14:10:11 2023
    On Thursday, July 20, 2023 at 11:47:34 AM UTC-6, Stephen Fuld wrote:
    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

    Spin up one or more database instances as needed.

    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.

    Same thing with BIS.

    Then you would truly have cloud-native...ish OS2200. Thoughts?
    After thinking about all the comments, ISTM that, without using the
    names, you can do most, if not all of what you are suggesting using
    already existing capabilities, without the need for any modifications to
    the OS. The possible exception (and I am not sure about this and I
    think this is a business issue, not so much a technical one), is if a customer wants to avoid any direct hardware and hardware management
    costs by using a commercial cloud service such as AWS.

    But the claimed reason for doing this, to attract more 2200 customers,
    seems dubious to me. While it might be attractive to an existing
    customer, and thus prevent losing that customer, I just don't see it attracting any new name customers. There are just not enough
    advantages, and too high a cost for a new name customer to adopt the
    2200 environment.

    Don't get me wrong. I think the 2200 environment has a lot of very nice features.

    I have often said that the problem is with Univac/Sperry/Unisys
    marketing's utter failure to convince the world that 36 is an integral
    power of 2. The world has settled on 8-bit byte addressability. That
    the Unisys development and marketing teams have kept the systems viable
    for so long (I believe the 2200 is by far the oldest, last surviving,
    and still actively marketed 36-bit system) is a tribute to their
    abilities and determination. But it is ultimately doomed, and their job
    now is to stave off that doom for as long as possible, and I think they
    are doing that well.
    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    Thank you for an amazing discussion. I have much to consider, but I appreciate the knowledge in this forum, and your willingness to share.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Fuld@21:1/5 to Kurt Duncan on Thu Aug 17 10:45:51 2023
    On 7/20/2023 2:10 PM, Kurt Duncan wrote:
    On Thursday, July 20, 2023 at 11:47:34 AM UTC-6, Stephen Fuld wrote:
    On 6/29/2023 6:33 PM, Kurt Duncan wrote:
    So, just spit-balling ideas here.
    Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

    One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

    TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

    Spin up one or more database instances as needed.

    Spin up two or three batch services for nightly processing.

    Spin up a DEMAND service, and if you get more than say, 100 users on that one service, spin up another one.

    Same thing with BIS.

    Then you would truly have cloud-native...ish OS2200. Thoughts?
    After thinking about all the comments, ISTM that, without using the
    names, you can do most, if not all of what you are suggesting using
    already existing capabilities, without the need for any modifications to
    the OS. The possible exception (and I am not sure about this and I
    think this is a business issue, not so much a technical one), is if a
    customer wants to avoid any direct hardware and hardware management
    costs by using a commercial cloud service such as AWS.

    But the claimed reason for doing this, to attract more 2200 customers,
    seems dubious to me. While it might be attractive to an existing
    customer, and thus prevent losing that customer, I just don't see it
    attracting any new name customers. There are just not enough
    advantages, and too high a cost for a new name customer to adopt the
    2200 environment.

    Don't get me wrong. I think the 2200 environment has a lot of very nice
    features.

    I have often said that the problem is with Univac/Sperry/Unisys
    marketing's utter failure to convince the world that 36 is an integral
    power of 2. The world has settled on 8-bit byte addressability. That
    the Unisys development and marketing teams have kept the systems viable
    for so long (I believe the 2200 is by far the oldest, last surviving,
    and still actively marketed 36-bit system) is a tribute to their
    abilities and determination. But it is ultimately doomed, and their job
    now is to stave off that doom for as long as possible, and I think they
    are doing that well.
    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    Thank you for an amazing discussion. I have much to consider, but I appreciate the knowledge in this forum, and your willingness to share.

    Speaking of 2200 "in the cloud", look at what Unisys just announced.

    https://www.unisys.com/announcements-and-updates/ecs/new-capability-released-allowing-clearpath-os-2200/




    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)