So, just spit-balling ideas here.
Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.
TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.
Spin up one or more database instances as needed.
Spin up two or three batch services for nightly processing.
Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.
Same thing with BIS.
Then you would truly have cloud-native...ish OS2200. Thoughts?
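Purely to illustrate the "more than 100 users, spin up another one" rule (all names here are hypothetical; nothing below is an existing OS2200 or Unisys interface), a scale-out check might look something like this:

# Hypothetical sketch of a threshold-based scale-out rule for DEMAND
# "instances". None of these names correspond to real OS2200 interfaces.

MAX_SESSIONS_PER_INSTANCE = 100

def rebalance(session_counts, start_instance):
    # Start another DEMAND instance only when every existing one is at
    # the limit; start_instance is whatever actually provisions one.
    if all(n >= MAX_SESSIONS_PER_INSTANCE for n in session_counts):
        start_instance()
        session_counts.append(0)
    return session_counts

def route_new_session(session_counts):
    # Send a new sign-on to the least-loaded instance.
    target = min(range(len(session_counts)), key=lambda i: session_counts[i])
    session_counts[target] += 1
    return target

# Example: two instances already at the limit, one partly loaded.
counts = rebalance([100, 100, 57], lambda: print("spin up DEMAND-4"))
print(route_new_session(counts))   # -> 2 (the 57-session instance)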
On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> So, just spit-balling ideas here.
> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.

I am not sure exactly what you are proposing here, nor what the
advantages of it are. See below for specifics.
> One could re-imagine the idea of shared directories... each major MFD instance is a separately managed directory, as a service.

Does this mean that a file name has to specify somehow which MFD
instance it is in, or does a run specify which instance all files that
run references belong to? What if a run wants to access a file in a
different instance? Or do you mean multiple copies of the same
information? In that case, updates have to be propagated to multiple
instances. Both are messy, and what do you gain?
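Just to make the two readings concrete (a toy sketch with made-up names -- nothing here reflects the real MFD or Exec interfaces), the difference is roughly whether the instance is part of the file reference or part of the run's context:

# Hypothetical sketch of the two readings above.

MFD_SERVICES = {
    "MFD-A": {("Q1", "PAYROLL"): "file-handle-1"},
    "MFD-B": {("Q2", "LEDGER"):  "file-handle-2"},
}

# Reading 1: the MFD instance is part of the file reference itself.
def lookup_qualified(instance, qualifier, filename):
    return MFD_SERVICES[instance].get((qualifier, filename))

# Reading 2: the run is bound to one instance, and every plain
# qualifier*filename it uses is resolved against that instance only.
class Run:
    def __init__(self, mfd_instance):
        self.mfd_instance = mfd_instance

    def lookup(self, qualifier, filename):
        return MFD_SERVICES[self.mfd_instance].get((qualifier, filename))

# Either way, a run bound to MFD-A that needs Q2*LEDGER (held by MFD-B)
# has to cross a service boundary -- which is the messy case above.
print(lookup_qualified("MFD-B", "Q2", "LEDGER"))   # "file-handle-2"
print(Run("MFD-A").lookup("Q2", "LEDGER"))         # None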
> TIP is a service, and you could scale out as many instances of that as you need, with all the IO on the back end being handled by one or more of the MFD services.

Sounds like early CICS on IBM S/360: multiple "instances" of CICS, each
managing its own set of transactions. It is costly performance-wise, as
you are doing CPU dispatching twice, once in the OS and once in the CICS
instance. There is a reason why TIP tended to outperform CICS.
> Spin up one or more database instances as needed.

What is an instance here? The common code to handle the database? Why
waste the space? The user code? Already done. Again, I am not sure
what you mean here.
> Spin up two or three batch services for nightly processing.
> Spin up a DEMAND service, and if you get more than, say, 100 users on that one service, spin up another one.

One of the really nice things about OS/2200 is that batch and demand
share so much of the same facilities, both within the OS and in things
like ECL. What is the advantage of separating and duplicating them?
And what does adding another instance of a demand service, if one gets
over 100 users, buy you?
> Same thing with BIS.

IIRC, BIS is what used to be called MAPPER. If so, I think you can have
multiple MAPPER runs open simultaneously. I don't know if anyone does
this, or why.
> Then you would truly have cloud-native...ish OS2200. Thoughts?

My initial thoughts are that it seems like a lot of work for minimal
benefit. But I freely admit, I don't fully understand your proposal. :-(

One further note: your talk of "cloudish" could be sort of like
multiple 2200s in a shared disk environment. I don't know if such
configurations are still supported.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
On Fri, 30 Jun 2023 07:30:21 -0700, Stephen Fuld <sf...@alumni.cmu.edu.invalid> wrote:
> On 6/29/2023 6:33 PM, Kurt Duncan wrote:
>> So, just spit-balling ideas here.
>> Suppose one could break out the OS into modules, then set up multiple instances of the various modules a la micro-services.
>
> I am not sure exactly what you are proposing here, nor what the
> advantages of it are. See below for specifics.

My take is someone has a shaky grasp of how Operating Systems
(including OS2200) are structured.

<snip>

> Sounds like early CICS on IBM S/360: multiple "instances" of CICS, each
> managing its own set of transactions. It is costly performance-wise, as
> you are doing CPU dispatching twice, once in the OS and once in the CICS
> instance. There is a reason why TIP tended to outperform CICS.

I tend to believe the per-Application Group approach does a better job
of partitioning transactions.

>> Spin up one or more database instances as needed.
>
> What is an instance here? The common code to handle the database? Why
> waste the space? The user code? Already done. Again, I am not sure
> what you mean here.

Each Application Group is a different DMS instance. I forget just how
many Application Groups are currently supported; I'm pretty sure it is
more than 10.

<snip>

> One further note: your talk of "cloudish" could be sort of like
> multiple 2200s in a shared disk environment. I don't know if such
> configurations are still supported.

As far as I know, we still support multiple 2200s in a shared disk
environment. I typically have demand sessions open on RS06, RS08,
RS15, and RS36 when I'm working. My Exec builds run from shared file
sets, RS06 PRIMUS (a DMS application) runs on all systems in the RS06
complex, and I know there are other 2200 systems in the RS06 complex.
Regards,
David W. Schroth
Let me clarify: I am not suggesting breaking up OS2200 into pieces, nor
deploying lots of copies of it.

I'm suggesting completely rewriting enough of the various parts of
OS2200 to allow what had been OS2200 applications to begin looking more
like cloud-native solutions... containers, all that fun stuff; and doing
it in such a way as to require a minimum amount of re-coding and/or
re-architecting.

It is certainly a mind-bender for a person steeped in monolithic OS's
and hardware (such as I am) to understand and adapt to the
cloud/container world. But in doing so - as almost the entire rest of
the non-mainframe world is doing - I'd hate to see the OS2200
architecture lose even more ground.

I'm trying to think of unique ways in which the 2200 eco-sphere -- from
a customer point of view -- might be altered such that it can compete in
this new goofy, frustrating world.
On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:

<snip>

> Let me clarify: I am not suggesting breaking up OS2200 into pieces,
> nor deploying lots of copies of it.
> I'm suggesting completely rewriting enough of the various parts of
> OS2200 to allow what had been OS2200 applications to begin looking
> more like cloud-native solutions... containers, all that fun stuff;
> and doing it in such a way as to require a minimum amount of
> re-coding and/or re-architecting.

Okay, so you're trying to come up with some way or ways to effectively
keep OS2200 alive and perhaps even growing. That's nice.

(I don't understand why you think that "containers" should be a good
selling point, since a 2200 system VM should be able to do just about
anything you might want to do with a container, and it just so happens
that a 2200 system VM already exists [AKA PS2200] and doesn't require
much, if any, OS2200 modifications to work. Maybe I've missed something,
but I wasn't aware of a significant up-tick in OS2200 acceptance and use
because of PS2200.)

> It is certainly a mind-bender for a person steeped in monolithic OS's
> and hardware (such as I am) to understand and adapt to the
> cloud/container world. But in doing so - as almost the entire rest of
> the non-mainframe world is doing - I'd hate to see the OS2200
> architecture lose even more ground.
> I'm trying to think of unique ways in which the 2200 eco-sphere --
> from a customer point of view -- might be altered such that it can
> compete in this new goofy, frustrating world.

I was tempted to ask what thing(s) you think are preventing OS2200 and
its applications from being more prevalent -- something that doesn't
involve buzzwords like "containers" and "cloud based" -- but I think I'd
like to try a thought experiment and see where it leads.

Suppose that OS2200 was completely re-written to do something "magical"
-- something that no other OS can do right now -- for example, being
able to scale to billions and billions of processors efficiently, as
Barrelfish was hoping to show the way toward (i.e., the multi-kernel
approach).

In your humble opinion, why would anyone in their right mind jump on to
using the new and improved OS2200, rather than wait for someone to
hopefully come up with a way to get Linux to do the same thing(s), or
actually take an active part in making it do the same thing(s)?
On Friday, June 30, 2023 at 11:30:08 PM UTC-6, Lewis Cole wrote:
> On Friday, June 30, 2023 at 9:19:55 PM UTC-7, Kurt Duncan wrote:

<snip>

I'm suggesting that someone might take a more active role in making the
architectures and facilities which, buzzwords or not, actually do define
the state of general enterprise computing, available to those people who
are still in the world of TIP/HVTIP and RDMS, and batch processing. So
that they can leverage at least some major portion of their existing
code, while moving toward a computing paradigm which is in use today,
has been for a number of years, and (for better or worse) will be for
some number of years ahead.

I am not suggesting billions of processors. I am unsure that, if massive
processing or IO is in a customer's application mix, anything other than
a 2200 would satisfy them. [...]

[...] But I don't think the entire world of OS2200 users requires
un-obtanium. And I am not suggesting scaling an OS. I am suggesting
providing the minimal environment necessary for a particular mix of
(e.g., TIP) applications to function, and making that environment
operate in exactly the same world in which so many non-OS2200
applications operate.
On Fri, 30 Jun 2023 21:19:53 -0700 (PDT), Kurt Duncan
<kurta...@gmail.com> wrote:

<snip>

> I do have *some* small insight into OS2200 - not great, but some.

I regard myself as reasonably cognizant of at least *some* of your
exposure to OS2200. Although, as I (barely) recall, it wasn't OS2200
in 1978.

> Let me clarify: I am not suggesting breaking up OS2200 into pieces,
> nor deploying lots of copies of it.
> I'm suggesting completely rewriting enough of the various parts of
> OS2200 to allow what had been OS2200 applications to begin looking
> more like cloud-native solutions... containers, all that fun stuff;
> and doing it in such a way as to require a minimum amount of
> re-coding and/or re-architecting.

Which leads to the question I can never really find the answer for -
what is a good reason for containers?

As best I can tell, containers appear to be a response to Open Source
Software shortcomings with a heavy overlay of "This isn't a bug, it's
a feature" marketing.

<snip>
Regards,
David W. Schroth
On Saturday, July 8, 2023 at 11:59:07 AM UTC-6, David W Schroth wrote:
> Which leads to the question I can never really find the answer for -
> what is a good reason for containers?
>
> As best I can tell, containers appear to be a response to Open Source
> Software shortcomings with a heavy overlay of "This isn't a bug, it's
> a feature" marketing.

<snip>

Your impression of the impetus for containers is shared by a lot of us.
Nonetheless, they are certainly in use (Scott's answer is spot on). The
combination of ease of rolling out new TIP transactions, married with
the advantages of HVTIP and/or RTPS... is evocative of the advantages of
containers. Having a single OS (not relevant to this discussion really,
but) while all the containers are in separate security worlds is also a
plus. Maybe... look at it this way. Instead of eight or ten application
[...]

WRT demand mode... back in the dawn of history, we did have
performance/capacity issues... we ran production full bore, and the
developers suffered. We had a hard time convincing the money people to
let us buy another processor and more memory, until things got to where
we were time-shifting the developers, and even that didn't help. If we
had something that let us spin up a dev environment only as long as the
developer needed it, then we wouldn't have to buy additional hardware
for 24/7. If those [...]

For batch mode... spin up what you need when you need it. Same economic
argument as the DEMAND users. And finally... when hardware flakes out,
it affects less of your production/dev if you are spread across a lot of
loosely and "nonely" coupled systems.
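As a rough sketch of the "spin up a dev environment only as long as the developer needs it" idea (the provision/teardown hooks below are placeholders, not any real tooling or 2200 packaging), the lifecycle is basically acquire, use, release:

# Hypothetical sketch of an on-demand developer environment lifecycle.
# provision/teardown stand in for whatever would actually create and
# destroy an isolated 2200-like environment; they are not real commands.

import contextlib
import time

@contextlib.contextmanager
def dev_environment(provision, teardown):
    env = provision()                 # capacity is paid for from here...
    started = time.time()
    try:
        yield env
    finally:
        teardown(env)                 # ...to here, instead of 24/7
        print(f"environment lived {time.time() - started:.0f}s")

# The developer's session holds the environment open only while they are
# actually working; when the block exits, it goes away.
with dev_environment(lambda: "DEV-ENV-1", lambda env: None) as env:
    pass  # edit, compile, and test against env here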
On 7/10/2023 7:25 AM, Kurt Duncan wrote:

<snip>
ISTM that there is some conflation of two things here. One is the
ability to provide greater isolation between applications than a single
OS can provide (though the single OS side could argue that a well
designed OS could provide all that you need), and the ability to adjust
compute capacity on the fly to meet varying demand.
If you really wanted more separation among instances of OS 2200, I don't
think there are any technical barriers preventing you from running
multiple instances of Linux, each with its own copy of the emulator and
OS/2200, to gain that separation. [...]

[...] I know that Unisys modified Linux in some way to make OS/2200 run
better on it, but I don't know whether it is technically possible to run
multiple copies of the emulator on a single Linux, or multiple copies of
OS/2200 on a single copy of the emulator. That would achieve the
separation. But all of that is not related to varying the amount of CPU
power available.
No matter what OS you have, if you need more compute capacity, you need
additional hardware. A single customer seems to me to be unlikely to
have additional hardware available "just in case", so that requirement
is met by some sort of "rent capacity as required" arrangement with some
company that has lots of extra capacity to make available to different
customers as needed, i.e. a "cloud computing facility". Having that
seems to me to be independent of what software you run on that cloud.
For example, assuming that a business case could be made for it, I don't
think there is any substantial technical problem with getting extra X86
compute power from a cloud, running Linux on that new CPU, with OS2200
on top of that Linux.
Of course, there are business issues that I am not competent to discuss.
On a related note, regarding the separation issue (again, not the
varying capacity issue), how is the solution different from something
like IBM's VM, which has been around since the 1970s?
And I may be misremembering, but I vaguely recall an aborted attempt by
Univac/Sperry to add a VM-like capability in the form of a specialized
instruction to aid that, perhaps in the 1110? Am I just having a fever
dream?
On Monday, July 10, 2023 at 4:49:37 PM UTC-7, Stephen Fuld wrote:
> On 7/10/2023 7:25 AM, Kurt Duncan wrote:

<snip>

> On a related note, regarding the separation issue (again, not the
> varying capacity issue), how is the solution different from something
> like IBM's VM, which has been around since the 1970s?

I don't think there's anything (aside from cost) to running 2200
emulator software directly on Intel hardware with no hypervisor, or on a
type-2 (bare metal) hypervisor, rather than on a type-1 KVM Linux
hypervisor.

But while doing so might make an emulated 2200 system more secure, I
just don't see that as being economically worthwhile either.

(I've spent some time wondering how difficult it would be to get
BOOTBOOT to load up a 2200 emulator on bare metal directly, but that's
something that a hobbyist can do, not a for-profit company with a
limited body count.)
On 6/29/2023 6:33 PM, Kurt Duncan wrote:
> So, just spit-balling ideas here.

<snip>

> Then you would truly have cloud-native...ish OS2200. Thoughts?

After thinking about all the comments, ISTM that, without using the
names, you can do most, if not all, of what you are suggesting using
already existing capabilities, without the need for any modifications to
the OS. The possible exception (and I am not sure about this, and I
think it is a business issue, not so much a technical one) is if a
customer wants to avoid any direct hardware and hardware management
costs by using a commercial cloud service such as AWS.
But the claimed reason for doing this, to attract more 2200 customers,
seems dubious to me. While it might be attractive to an existing
customer, and thus prevent losing that customer, I just don't see it
attracting any new-name customers. There are just not enough
advantages, and too high a cost, for a new-name customer to adopt the
2200 environment.
Don't get me wrong. I think the 2200 environment has a lot of very nice features.
I have often said that the problem is with Univac/Sperry/Unisys
marketing's utter failure to convince the world that 36 is an integral
power of 2. The world has settled on 8-bit byte addressability. That
the Unisys development and marketing teams have kept the systems viable
for so long (I believe the 2200 is by far the oldest, and the last
surviving and actively marketed, 36-bit system) is a tribute to their
abilities and determination. But it is ultimately doomed, and their job
now is to stave off that doom for as long as possible, and I think they
are doing that well.
--
- Stephen Fuld
(e-mail address disguised to prevent spam)
On Thursday, July 20, 2023 at 11:47:34 AM UTC-6, Stephen Fuld wrote:
<snip>
Thank you for an amazing discussion. I have much to consider, but I appreciate the knowledge in this forum, and your willingness to share.