• The Attack of the Killer Micros

    From Quadibloc@21:1/5 to All on Wed Feb 14 05:50:41 2024
    In the early days of the microcomputer era, one could either
    have a cheap small computer with a single-chip CPU, or, if
    one wanted something bigger, moderate performance was available
    from bit-slice chips.

    If you wanted higher performance than a bit-slice design would
    allow, you had to use older, less highly integrated technology,
    so the increase in cost was too large to be justified by the
    increase in performance.

Eventually, the Pentium Pro, and its popular successor the Pentium
II, came along, and now a System/360 Model 195 class architecture was
placed on a single chip (two dies in one package, though, as the L2
cache had to sit on a separate die) and the problem was solved.

This explains my goal of including a Cray-1 style vector capability
on a microprocessor - this is the one historic capability, extending
into a performance space beyond that of the 360/195, that has not yet
been reduced to a single chip. My reasoning may be very naive, because
I'm failing to take into account how the current gap between CPU and
DRAM speeds makes older architectures impractical.

And, as I've noted also, the overwhelming dominance of Windows on
the x86 shows "there can be only one", which is why I want my new
architecture to offer something the x86 doesn't... efficient
emulation of older architectures with 36-bit, 48-bit, and 60-bit
words, so that those who have really old programs to run are no
longer disadvantaged.
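
To make concrete what "efficient" is up against, here is a minimal C
sketch of pure software emulation, assuming one common host
representation (one 36-bit word per 64-bit container - an illustrative
choice, not anything proposed here). Every operation pays extra masking
and shifting that dedicated hardware support would avoid.

    #include <stdint.h>

    #define WORD36_MASK ((1ULL << 36) - 1)

    /* 36-bit wraparound add: one extra mask per operation. */
    static inline uint64_t add36(uint64_t a, uint64_t b) {
        return (a + b) & WORD36_MASK;
    }

    /* Reinterpret a 36-bit pattern as signed two's complement
       (assumes the usual arithmetic right shift on signed types). */
    static inline int64_t sext36(uint64_t w) {
        return (int64_t)(w << 28) >> 28;
    }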

    While this seems like a super-niche thing to some, I see it as
    something that's practically _essential_ to have a future world of
    computers that doesn't leave older code behind - so that the
    computer you already have on your desktop is truly general in its
    capabilities.

    I don't see FPGAs in their current form as efficient enough to
    offer a route to the kind of generality I'm seeking.

    By explaining what my goals are, rather than discussing the ISA
    proposals that I see as a means to those goals, perhaps this makes
    it possible for a better and more practical way to achieve those
    goals to be suggested.

    John Savard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Quadibloc on Wed Feb 14 09:56:00 2024
    In article <uqhkbh$2grub$2@dont-email.me>, quadibloc@servername.invalid (Quadibloc) wrote:

I want my new architecture to offer something the x86 doesn't...
efficient emulation of older architectures with 36-bit, 48-bit,
and 60-bit words, so that those who have really old programs to
run are no longer disadvantaged.

    While this seems like a super-niche thing to some, I see it as
    something that's practically _essential_ to have a future world of
    computers that doesn't leave older code behind - so that the
    computer you already have on your desktop is truly general in its capabilities.

    If this had been available in the 1970s, as the IBM 700/7000 series and
    others of their generation faded out of use, it would have been quite
    useful.

All that code has been re-written for newer architectures or abandoned by
now; it ran on expensive systems for expensive purposes, so if it was
going to have continued use there was usually budget to re-write it.

    Now that there's general alignment on 32-bit or 64-bit addressing, 8-bit
    bytes, and IEEE floating-point, portability is not such a big problem.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Quadibloc on Wed Feb 14 17:50:38 2024
    Quadibloc wrote:

    In the early days of the microcomputer era, one could either
    have a cheap small computer with a single-chip CPU, or, if
    one wanted something bigger, moderate performance was available
    from bit-slice chips.

    If you wanted higher performance than a bit-slice design would
    allow, you had to use older, less highly integrated technology,
    so the increase in cost was too large to be justified by the
    increase in performance.

Eventually, the Pentium Pro, and its popular successor the Pentium
II, came along, and now a System/360 Model 195 class architecture was
placed on a single chip (two dies in one package, though, as the L2
cache had to sit on a separate die) and the problem was solved.

This explains my goal of including a Cray-1 style vector capability
on a microprocessor - this is the one historic capability, extending
into a performance space beyond that of the 360/195, that has not yet
been reduced to a single chip.

It has not been reduced to practice because it takes too many pins,
wiggling at too high a rate, ...

My reasoning may be very naive, because I'm failing to take
into account how the current gap between CPU and DRAM speeds makes
older architectures impractical.

    3 accesses per CPU cycle continuously (2 LDs and 1 ST) and hundreds
    of banks {Without cache lines}
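
A rough, illustrative calculation of why that demand means hundreds of
banks (the clock rate and DRAM timing below are assumptions, not
figures from this thread):

    /* Banks needed to sustain 3 accesses/cycle with no cache lines.
       All numbers are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        double cpu_ghz = 4.0;             /* assumed core clock */
        double accesses_per_cycle = 3.0;  /* 2 LDs + 1 ST, as above */
        double dram_access_ns = 30.0;     /* assumed DRAM random access */

        /* Each bank serves one access per dram_access_ns, so meeting
           the demand rate takes at least this many independent banks,
           before allowing anything extra for bank conflicts. */
        double banks = cpu_ghz * accesses_per_cycle * dram_access_ns;
        printf("banks needed: ~%.0f\n", banks);  /* ~360 here */
        return 0;
    }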

And, as I've noted also, the overwhelming dominance of Windows on
the x86 shows "there can be only one",

    There is now an ARM Windows.

which is why I want my new architecture to offer something the x86
doesn't... efficient emulation of older architectures with 36-bit,
48-bit, and 60-bit words, so that those who have really old programs
to run are no longer disadvantaged.

    Do you have a market demand survey ??

    While this seems like a super-niche thing to some, I see it as
    something that's practically _essential_ to have a future world of
    computers that doesn't leave older code behind - so that the
    computer you already have on your desktop is truly general in its capabilities.

    I don't see FPGAs in their current form as efficient enough to
    offer a route to the kind of generality I'm seeking.

    By explaining what my goals are, rather than discussing the ISA
    proposals that I see as a means to those goals, perhaps this makes
    it possible for a better and more practical way to achieve those
    goals to be suggested.

    John Savard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Quadibloc on Wed Feb 14 20:33:48 2024
    On Wed, 14 Feb 2024 05:50:41 -0000 (UTC), Quadibloc wrote:

    Eventually, the Pentium Pro ...

    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was that
    Intel expected that the majority of Windows code would be 32-bit by that
    point. It wasn’t.

And, as I've noted also, the overwhelming dominance of Windows on the
x86 shows "there can be only one", which is why I want my new
architecture to offer something the x86 doesn't... efficient emulation
of older architectures with 36-bit, 48-bit, and 60-bit words, so that
those who have really old programs to run are no longer disadvantaged.

    Didn’t a company called “Transmeta” try that ... something like 30 years ago? It didn’t work.

    There is no path forward for Windows on non-x86. Only open-source software
    is capable of being truly cross-platform.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Feb 14 21:32:11 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 14 Feb 2024 05:50:41 -0000 (UTC), Quadibloc wrote:

    Eventually, the Pentium Pro ...

Ah, the poor Pentium Pro, that was a bit of a joke. The problem was that
Intel expected that the majority of Windows code would be 32-bit by that
point.

    We used the P6 (aka the Pentium Pro) for a large massively parallel system
    (64 2-processor nodes, each with a SCSI controller and 1Gb ethernet port) running a single-system-image version of SVR4.2ES/MP.

    I wouldn't call it a joke. We also had the orange books for the
    never-built P7 (which morphed eventually into Itanium).

Didn’t a company called “Transmeta” try that ... something like 30 years
ago? It didn’t work.

They tried to build an architecture that supported run-time
translation of x86 instructions to native instructions. Several
former colleagues worked there - one of whom is now with Apple managing
their ARM core development group. He used to take Linus Torvalds
(another former Transmeta employee) up in his Cessna 414 (a fun plane
to fly).


    There is no path forward for Windows on non-x86.

That's entirely up to Microsoft. As has been noted, they do have
ARMv8 versions of Windows 11.

    https://learn.microsoft.com/en-us/windows/arm/overview

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Wed Feb 14 16:57:54 2024
    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
    that Intel expected that the majority of Windows code would be 32-bit
    by that point. It wasn’t.

    Maybe for some segment of the Windows world, but for the
    workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
    a game changer.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Wed Feb 14 22:29:39 2024
    Lawrence D'Oliveiro wrote:

    On Wed, 14 Feb 2024 05:50:41 -0000 (UTC), Quadibloc wrote:

    Eventually, the Pentium Pro ...

    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was that Intel expected that the majority of Windows code would be 32-bit by that point. It wasn’t.

    I had the first 200 MHz Pentium Pro out of the Micron factory.
    It ran DOOM at 73 fps and Quake at 45+ fps both full screen.
    I would not call that a joke.

    It was <essentially> the death knell for RISC workstations.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Thu Feb 15 00:50:09 2024
    On Wed, 14 Feb 2024 21:32:11 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    There is no path forward for Windows on non-x86.

That's entirely up to Microsoft. As has been noted, they do have ARMv8
versions of Windows 11.

They’ve been trying for years: Windows Phone 8, Windows RT, that
laughable “Windows 10 IoT Edition” for the Raspberry Pi, whatever the
name is for the current effort ... Windows-on-ARM has always been a
trainwreck.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Stefan Monnier on Thu Feb 15 00:51:03 2024
    On Wed, 14 Feb 2024 16:57:54 -0500, Stefan Monnier wrote:

    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
    that Intel expected that the majority of Windows code would be 32-bit
    by that point. It wasn’t.

    Maybe for some segment of the Windows world, but for the workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
    a game changer.

    A chip with the emphasis on 32-bit performance, later replaced by the
    Pentium II, with a greater emphasis on 16-bit performance ... only in the
    x86 world, eh?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Thu Feb 15 01:00:09 2024
    Lawrence D'Oliveiro wrote:

    On Wed, 14 Feb 2024 16:57:54 -0500, Stefan Monnier wrote:

    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
    that Intel expected that the majority of Windows code would be 32-bit
    by that point. It wasn’t.

    Maybe for some segment of the Windows world, but for the
    workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
    a game changer.

    A chip with the emphasis on 32-bit performance, later replaced by the
    Pentium II, with a greater emphasis on 16-bit performance ... only in the
    x86 world, eh?

    This sounds remarkably like you expected sane behavior from x86 land.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Lawrence D'Oliveiro on Thu Feb 15 07:54:57 2024
    Lawrence D'Oliveiro wrote:
    On Wed, 14 Feb 2024 05:50:41 -0000 (UTC), Quadibloc wrote:

    Eventually, the Pentium Pro ...

    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was that

    That is so wrong that it isn't even funny.

    Intel expected that the majority of Windows code would be 32-bit by that point. It wasn’t.

    This is of course correct, but it really didn't matter!

What did matter, a lot, was the fact that when the PPro arrived, at an
initial speed of up to 200 MHz, it immediately took over the crown as
the fastest SPECint processor in the world. I.e. it was a huge deal and
has been the basis for pretty much all x86 processors since then.

    Dominating a market for ~30 years is not "a bit of a joke" imho.


And, as I've noted also, the overwhelming dominance of Windows on the
x86 shows "there can be only one", which is why I want my new
architecture to offer something the x86 doesn't... efficient emulation
of older architectures with 36-bit, 48-bit, and 60-bit words, so that
those who have really old programs to run are no longer disadvantaged.

    Didn’t a company called “Transmeta” try that ... something like 30 years
    ago? It didn’t work.

    There is no path forward for Windows on non-x86. Only open-source software
    is capable of being truly cross-platform.

That is correct, with the exception of special single-vendor platforms,
like the AS/400 and several mainframes, where the vendor makes sure that
all the old software can still run with acceptable performance.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Scott Lurndal on Thu Feb 15 07:24:56 2024
    scott@slp53.sl.home (Scott Lurndal) writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    There is no path forward for Windows on non-x86.

    That's entirely up to Microsoft.

No. Microsoft is trying to commoditize their complement (in
particular, Intel) by making Windows on ARM viable, but the ISVs don't
play along. Of course some of that is Microsoft's own doing, as they
ensured in earlier iterations of this strategy (MIPS, PowerPC, Alpha
during the 1990s, IA-64 during the 2000s; there was also Windows RT)
that all ISVs who invested in non-IA-32/x64 Windows lost their
investment by MS dropping the support for these platforms. So now
every sane ISV just sits back and waits until Microsoft has made the
Windows-on-ARM market big on their own. Of course this does not work,
and the high prices and lack of alternative OS options of the
Windows-on-ARM hardware do not help, either.

    As has been noted, they do have
    ARMv8 versions of windows 11.

    https://learn.microsoft.com/en-us/windows/arm/overview

    Doomed.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Lawrence D'Oliveiro on Thu Feb 15 08:42:54 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 15 Feb 2024 07:24:56 GMT, Anton Ertl wrote:

    Microsoft is trying to commoditize their complement (in
    particular, Intel) by making Windows on ARM viable, but the ISVs don't
    play along.

Can you blame them? They are not going to port their proprietary apps to
ARM until they see the customers buying lots of ARM-based machines, and
customers are staying away from buying ARM-based machines because they
don’t see lots of software that will take advantage of the hardware.

    Chicken-and-egg situation, and no way to break out of it.

    A possible way would be to offer the ARM-based systems much cheaper,
    making the hardware attractive to users who do not use
    architecture-specific ISV software. That would result in a
    significant number of systems out there, and would inspire big ISVs
    like Adobe to support them, increasing the appeal of the platform,
    which again would result in increased sales, which would make the
    platform attractive to additional ISVs, and so on.

    The first part happened for Chromebooks and the Raspberry Pi, and,
    e.g., VFX Forth (a proprietary Forth system, i.e., an ISV product) is
    available on the Raspi, even though it does not run Windows.

But wrt Windows-on-ARM, what actually happens is that laptops with
it are rather expensive. It seems that someone (Qualcomm? The
laptop producers? MS?) wants to milk that market before it has
calved. This doesn't work.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Anton Ertl on Thu Feb 15 08:18:17 2024
    On Thu, 15 Feb 2024 07:24:56 GMT, Anton Ertl wrote:

    Microsoft is trying to commoditize their complement (in
    particular, Intel) by making Windows on ARM viable, but the ISVs don't
    play along.

    Can you blame them? They are not going to port their proprietary apps to
    ARM until they see the customers buying lots of ARM-based machines, and customers are staying away from buying ARM-based machines because they
    don’t see lots of software that will take advantage of the hardware.

    Chicken-and-egg situation, and no way to break out of it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Quadibloc@21:1/5 to All on Thu Feb 15 11:27:21 2024
    On Thu, 15 Feb 2024 01:00:09 +0000, MitchAlsup1 wrote:
    Lawrence D'Oliveiro wrote:
    On Wed, 14 Feb 2024 16:57:54 -0500, Stefan Monnier wrote:

    Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
    that Intel expected that the majority of Windows code would be 32-bit
    by that point. It wasn’t.

    Maybe for some segment of the Windows world, but for the
    workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
    a game changer.

    A chip with the emphasis on 32-bit performance, later replaced by the
    Pentium II, with a greater emphasis on 16-bit performance ... only in the
    x86 world, eh?

    This sounds remarkably like you expected sane behavior from x86 land.

    A chip which had leading-edge 32-bit performance, but which performed
    poorly on the existing software users already had installed, was replaced
    by one which _still_ had great 32-bit performance, but which fixed the
    defect of inferior support for the older software that was also in use.

    How was that not eminently sane behavior on the part of Intel? And what
    isn't sane about x86 users not spending money to replace software that
    was doing the job perfectly well?

    Only the reduced cache speed - which reduced manufacturing cost to something sustainable in a consumer-priced product - compromised performance in general.

    John Savard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Terje Mathisen on Thu Feb 15 14:25:34 2024
    On Thu, 15 Feb 2024 07:54:57 +0100, Terje Mathisen wrote:

    What did matter, a lot, was the fact that when the PPro arrived, at an initial speed of up to 200 MHz, it immediately took over the crown as
    the fastest specINT processor in the world.

    SPECint, but not SPECfp? After all, decent workstations had to have good floating-point performance, and x86 was still saddled with that antiquated 8087-derived joke of a floating-point architecture.

    Windows NT liked to call itself a “workstation” OS, but it was really just a “desktop” OS.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Thu Feb 15 15:02:38 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 14 Feb 2024 21:32:11 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    There is no path forward for Windows on non-x86.

That's entirely up to Microsoft. As has been noted, they do have ARMv8
versions of Windows 11.

They’ve been trying for years: Windows Phone 8, Windows RT, that
laughable “Windows 10 IoT Edition” for the Raspberry Pi, whatever the
name is for the current effort ... Windows-on-ARM has always been a
trainwreck.

    https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Thu Feb 15 20:19:29 2024
    On Thu, 15 Feb 2024 15:02:38 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 14 Feb 2024 21:32:11 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    There is no path forward for Windows on non-x86.

That's entirely up to Microsoft. As has been noted, they do have ARMv8
versions of Windows 11.

They’ve been trying for years: Windows Phone 8, Windows RT, that
laughable “Windows 10 IoT Edition” for the Raspberry Pi, whatever the
name is for the current effort ... Windows-on-ARM has always been a
trainwreck.

    https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/

    You know that most of Microsoft’s cloud is running Linux, right?
    They’ve admitted as much themselves.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From sarr.blumson@alum.dartmouth.org@21:1/5 to Quadibloc on Thu Feb 15 22:43:21 2024
    Quadibloc <quadibloc@servername.invalid> wrote:
    : While this seems like a super-niche thing to some, I see it as
    : something that's practically _essential_ to have a future world of
    : computers that doesn't leave older code behind - so that the
    : computer you already have on your desktop is truly general in its
    : capabilities.

This need is very real. At my first job the payroll ran on a
360 using the hardware emulator to run a 1401 simulator
for the 705 which ran the actual payroll. But...

The only example I pay much attention to is the various PDP-10
(not to be confused with DECSystem-10) simulators that run
PDP-10 code on current hardware faster than any actual 10
ever could. This seems like a much cheaper solution.

    Sarr

    --
    --------
    Sarr Blumson sarr.blumson@alum.dartmouth.org http://www-personal.umich.edu/~sarr/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Scott Lurndal on Fri Feb 16 08:55:00 2024
    In article <vjazN.324759$Wp_8.217967@fx17.iad>, scott@slp53.sl.home
    (Scott Lurndal) wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    There is no path forward for Windows on non-x86.

    That's entirely up to Microsoft. As has been noted, they do have
    ARMv8 versions of windows 11.

    https://learn.microsoft.com/en-us/windows/arm/overview

Their attitude to it has evolved quite a bit. At first, there was
Windows RT, a cut-down version of Windows for 32-bit ARM, which was
unsuccessful. Then they produced full Windows for 64-bit ARM, which
initially came with a simplified GUI that was very limiting, although
it could be turned off to get the full OS.

    That was viewed by MS as an "iPad killer", since it had a keyboard and
    the "vastly superior Windows GUI" which did seem to be missing the point
    quite badly. Development for it was supposed to be done on x64 Windows,
    with the ARM Windows device being used via a USB connection, like iPad development.

    However, I found that was hopelessly inconvenient, and installed
    compilers /on/ ARM Windows, using the built-in emulator, which was much
    easier to work with, although a bit slow. It appears that plenty of other people did the same thing, because MS now produce a native ARM64 Visual
    Studio, after not producing non-x86 versions since NT4 days.

The available hardware has also evolved. At first, there were only
tablets and laptops, but now Microsoft and Qualcomm sell various
mini-desktop systems for development, which are cheaper and faster
than the laptops.

    The ecosystem is gradually growing, and ARM Windows is available on Azure. Qualcomm claim their Snapdragon X Elite CPUs will compete with Apple's
    CPUs, although proof will have to wait for them to be available.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to BGB on Fri Feb 16 14:32:00 2024
    In article <uqm2o9$3gha2$1@dont-email.me>, cr88192@gmail.com (BGB) wrote:

    Though, I suspect, this may be similar to what killed the IA-64.
    History might have gone quite differently if Intel, instead of
    targeting it at the high-end, made it first available as a
    lower-cost alternative to the Celeron line

    Selling it to the Celeron market would have been impossible: the games producers would not have wanted to support it, or found it too hard, much
    like Cell a few years later. The x86 emulation would not have saved it:
    that was slow by the standards of the time.

    Had it survived for longer, it could have maybe been a viable
    option for smartphones and tablets.

    IA-64 ran way too hot for portable devices. HP, who'd devised the
    architecture, wanted it for large servers, and that was what it was
    designed for. In the late 1990s, when those decisions were made, smart
    mobile devices didn't exist.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Fri Feb 16 21:49:52 2024
    On Fri, 16 Feb 2024 08:55 +0000 (GMT Standard Time), John Dallman wrote:

    That was viewed by MS as an "iPad killer", since it had a keyboard and
    the "vastly superior Windows GUI" which did seem to be missing the point quite badly.

A similar thing is happening again, with Valve’s Linux-based Steam Deck,
which offers a handheld gaming platform with a purpose-built UI. Even
though WINE/Proton offers less-than-perfect compatibility with Windows-
only games, it still seems to have found a sustainable niche in the
market.

    Microsoft has been showing off a “Handheld Mode” for Windows, in an
    attempt to compete, but so far that’s just vapourware.

    Development for it was supposed to be done on x64 Windows,
    with the ARM Windows device being used via a USB connection, like iPad development.

Which is such a dumb thing to do, given the Linux alternatives offer
self-hosted development and deployment stacks. Even the humble
Raspberry Pi could manage that from Day 1.

    Qualcomm claim their Snapdragon X Elite CPUs will compete with Apple's
    CPUs, although proof will have to wait for them to be available.

    The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
    chips? ARM Linux can run on a whole range of ARM chips from a whole range
    of different vendors.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Fri Feb 16 21:51:31 2024
    On Fri, 16 Feb 2024 14:32 +0000 (GMT Standard Time), John Dallman wrote:

    In the late 1990s, when those decisions were made, smart
    mobile devices didn't exist.

    Actually, they did. PDAs, remember?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to D'Oliveiro on Sat Feb 17 00:10:00 2024
    In article <uqola0$1ha3$3@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    Development for it was supposed to be done on x64 Windows,
    with the ARM Windows device being used via a USB connection, like
    iPad development.

    Which is such a dumb thing to do, given the Linux alternatives
    offer self-hosted development and deployment stacks. Even the
    humble Raspberry Pi could manage that from Day 1.

    As I said, Microsoft's approach was widely rejected and they've abandoned
    it.

    The other thing is: why is Windows-on-ARM so heavily tied to
    Qualcomm chips? ARM Linux can run on a whole range of ARM chips
    from a whole range of different vendors.

    My knowledge of that story is under NDA at present.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup@21:1/5 to Lawrence D'Oliveiro on Fri Feb 16 23:38:03 2024
    Lawrence D'Oliveiro wrote:

    On Fri, 16 Feb 2024 08:55 +0000 (GMT Standard Time), John Dallman wrote:


    The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
    chips? ARM Linux can run on a whole range of ARM chips from a whole range
    of different vendors.

    Qualcomm paid for the port ?!?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Sat Feb 17 00:39:22 2024
    On Sat, 17 Feb 2024 00:10 +0000 (GMT Standard Time), John Dallman wrote:

    In article <uqold3$1ha3$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    On Fri, 16 Feb 2024 14:32 +0000 (GMT Standard Time), John Dallman
    wrote:

    In the late 1990s, when those decisions were made, smart mobile
    devices didn't exist.

    Actually, they did. PDAs, remember?

    True, but batteries of the period could not have supported Itanium's
    100W+ power consumption for any useful time.

    Nevertheless, smart mobile devices did exist.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to MitchAlsup on Sat Feb 17 00:38:13 2024
    On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:

    Lawrence D'Oliveiro wrote:

    The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
    chips? ARM Linux can run on a whole range of ARM chips from a whole
    range of different vendors.

    Qualcomm paid for the port ?!?

    Can’t Microsoft afford to port Windows to anything else?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Sat Feb 17 02:40:29 2024
    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
    chips? ARM Linux can run on a whole range of ARM chips from a whole range
    of different vendors.

    More likely the Qualcomm chips have some peripherals that Windows wants.



    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Levine on Sat Feb 17 05:20:47 2024
    On Sat, 17 Feb 2024 02:40:29 -0000 (UTC), John Levine wrote:

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:

The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.

    More likely the Qualcomm chips have some peripherals that Windows wants.

    I wonder what they could be?

    What’s so special about Qualcomm chips, that is so specific to Windows? Because the products themselves don’t seem to reflect anything special.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to BGB on Sat Feb 17 11:41:00 2024
    In article <uqp7n4$87oj$1@dont-email.me>, cr88192@gmail.com (BGB) wrote:

Presumably they could have scaled [IA-64] down, while still keeping the
core ISA design intact?...

    Like, presumably they had wanted to use the design for things big
    and small, which would not have made sense if it could only be used
    in big server chips.

    Intel and HP showed no desire at the time to use IA-64 in anything
    smaller than a workstation.

    The huge number of architectural registers (128 64-bit integer, 128
    82-bit floating point) would have made shrinks hard. But most of all, the design is based on the compilers being able to solve a problem that can't
    be solved in practice: static scheduling of memory loads in a system with multiple levels of cache.
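
A toy C fragment makes the problem visible (the latency numbers are
hypothetical, for illustration only): the compiler must fix, at compile
time, how far apart to schedule a load and its use, but the true
distance depends on which level of the hierarchy happens to answer.

    /* The compiler must pick ONE static schedule for this load-use
       pair, yet the load may take ~4 cycles (L1 hit), ~12 (L2 hit),
       or 200+ (DRAM). Schedule for the best case and you stall;
       schedule for the worst and you waste issue slots. */
    double sum_indirect(const double *a, const int *idx, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            double x = a[idx[i]];  /* latency varies per iteration */
            s += x;
        }
        return s;
    }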

    But, maybe, say, as a CPU for home game-consoles or set-top
    boxes?...

    Or those thin clients that did little other than dial into the
    internet and run a web-browser?...

    It doesn't have any advantages for these roles over simpler, cheaper and
    faster RISC or x86 designs.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Dallman on Sat Feb 17 15:36:02 2024
    jgd@cix.co.uk (John Dallman) writes:
In article <uqold3$1ha3$4@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
    In the late 1990s, when those decisions were made, smart
    mobile devices didn't exist.
    Actually, they did. PDAs, remember?

True, but batteries of the period could not have supported Itanium's
100W+ power consumption for any useful time.

I was happy with my Linux-based Sharp Zaurus SL-5000 when it first came
out in 2001.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Levine on Sat Feb 17 16:45:48 2024
    John Levine <johnl@taugh.com> writes:
    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.

    More likely the Qualcomm chips have some peripherals that Windows wants.

Unlikely. More likely they fit the power curves required for the
portable devices like the Surface and the Lenovo ThinkPad.

    https://github.com/AmpereComputing/Windows-11-On-Ampere

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to John Dallman on Sat Feb 17 19:22:25 2024
    On Sat, 17 Feb 2024 00:10 +0000 (GMT Standard Time)
    jgd@cix.co.uk (John Dallman) wrote:

    In article <uqola0$1ha3$3@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    Development for it was supposed to be done on x64 Windows,
    with the ARM Windows device being used via a USB connection, like
    iPad development.

    Which is such a dumb thing to do, given the Linux alternatives
    offer self-hosted development and deployment stacks. Even the
    humble Raspberry Pi could manage that from Day 1.

    As I said, Microsoft's approach was widely rejected and they've
    abandoned it.

    The other thing is: why is Windows-on-ARM so heavily tied to
    Qualcomm chips? ARM Linux can run on a whole range of ARM chips
    from a whole range of different vendors.

    My knowledge of that story is under NDA at present.

    John

I don't know about you, but I personally find well-implemented
cross-development far more convenient than 'native' development.
I never developed for Win-ARM64, so I don't know how well-implemented
it was.
Many years ago I wrote a few programs for Win-CE on ARM32. Those were
relatively simple programs. So simple that I didn't bother to set up
the link between Visual Studio and my target platform. I just compiled
on my PC, copied to the target (originally via Windows sharing, but
later on that was found to be limiting, so we quickly switched to FTP)
and then ran them there via telnet.

In the case of CE, native development was not an option, but even if
it had been an option I would not have used it. First, because my
preferred programmer's editor would probably not be installed. Second,
and far more important, because it would be too much trouble keeping
all sources synchronized with the company's source-control servers.
There is approximately zero chance that the target would be allowed to
be connected to the corporate network. And it does not matter whether
the target is WinArm32, WinArm64, or the LinArm32 that I developed for
a couple of years ago and am likely to touch again in the next couple
of weeks. I would not do it natively, even without well-implemented
integration between the cross-compiler and the target.

Maybe, if my apps were an order of magnitude more complicated than
they actually are, I'd feel differently. Maybe, in that case, I would
prefer a good native development environment over a non-integrated
cross setup. But I am sure that even then I'd prefer well-integrated
cross-development over any native setup.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Sat Feb 17 19:34:03 2024
    On Sat, 17 Feb 2024 16:45:48 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    John Levine <johnl@taugh.com> writes:
    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    The other thing is: why is Windows-on-ARM so heavily tied to
    Qualcomm chips? ARM Linux can run on a whole range of ARM chips
    from a whole range of different vendors.

    More likely the Qualcomm chips have some peripherals that Windows
    wants.

    Unlikely. More likely they fit the power curves required for the
    portable devices like the Surface and the Lenovo Thinkpad.


So do Mediatek chips.
And HiSilicon chips as well, but those, of course, are not an option
in the current political climate.

    https://github.com/AmpereComputing/Windows-11-On-Ampere

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Lawrence D'Oliveiro on Sat Feb 17 18:08:36 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:

    Lawrence D'Oliveiro wrote:

    The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
    chips? ARM Linux can run on a whole range of ARM chips from a whole
    range of different vendors.

    Qualcomm paid for the port ?!?

    Can’t Microsoft afford to port Windows to anything else?

    Given what I read about the woes of running Linux (Android) on various ARM-based SoCs, and the way that Windows deals with driver variations,
    MS would have to pay additional SoC manufacturers to produce Windows
    drivers, something that these SoC manufacturers are not set up to do.
    So I guess that, indeed, MS does not want to afford the substantial
    expense for porting Windows to additional SoCs, for now. I expect
    that Qualcomm asked for money or other benefits to do that work for
    MS, and likewise, the laptop manufacturer also had to be subsidized by
    MS.

    One solution would be if MS finally switched to using Linux as the
    basis for Windows. Then they would automatically get all the stuff
    that is done for Android and for the SBCs, although that is a sad
    story, too.

    Given the choice of an ARM-based system with some SoC-specific kernel
    that is only supported for a few years, or some AMD64-based system,
    which is supported by the Linux mainline for decades, I go for the
    AMD64 system.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Michael S on Sat Feb 17 18:22:00 2024
    In article <20240217192225.0000779b@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

I don't know about you, but I personally find well-implemented
cross-development far more convenient than 'native' development.
I never developed for Win-ARM64, so I don't know how well-implemented
it was.

    Not well, for my uses. I don't do applications; I do porting and
    performance work for mathematical modelling libraries. These are tested
    in a command-line harness, which reads test data from a network server
    (there's a lot of test data).

    The Microsoft cross-development setup required doing everything in their
    IDE. I find that very hard to use, because I'm partially sighted, and it
    also doesn't understand our domain-specific programming language. That
    compiles to C, but editing the C is a very poor idea: it's used as a
    high-level assembly language and regenerated on every compile. Any
    changes you make in it have to be back-translated into the DSL by hand
    and edited into that, so nobody works that way, and the IDE is only
    useful as a debugger.

    The cross-development setup also required that all your test data be
    bundled with the app and pushed onto the device via USB, controlled by
    the IDE. There's enough test data to make that very slow indeed, and it
    didn't appear possible to operate the device through the IDE. Instead,
    you had to physically operate it. Somebody had apparently been told to
    make it just like developing for iOS, and had given it most of those disadvantages.

    We'd killed all of those dragons in supporting iOS, and we really didn't
    want to do it all again for a different platform. It was far easier to
    put the devices on Ethernet, unlock the GUI and use them as ordinary
    Windows machines, with our custom-written development environment.

In the case of CE, native development was not an option, but even if
it had been an option I would not have used it. First, because my
preferred programmer's editor would probably not be installed.

    My favoured editor and tools ran straight away on ARM Windows 10, in the
    x86 emulator. That made all of this practical. The difference from CE was
    that ARM Windows 10 is real, full-fat Windows: the same kernel, userland,
    APIs and utilities. It's compiled for ARM64, but it has an emulator to
    run x86 Windows binaries (plus x86-64 if you're running Windows 11) which works.

    Lots of people seem to have done the same thing, given that MS have
    switched plans and started producing native ARM64 versions of Visual
    Studio (which I still don't use) and its compiler, linker, and so on,
    which I will when I get to start that project.

Second, and far more important, because it would be too much
trouble keeping all sources synchronized with the company's
source-control servers.

    This is no problem at all for me. Being able to mount network filesystems
    on ARM Windows solves that problem. This is partly because we don't have
    full source trees in our working directories: the product is too big for
    that, and takes too long to compile. So we have just a few source files
    in our working directories and compile and link against the central build
    tree. We can do that because the domain-specific language gives us far
    more control over imports and exports than normal C or C++ programming.

There is approximately zero chance that the target would be allowed
to be connected to the corporate network.

    It's real Windows. It integrates fine. Corporate IT can't forbid it
    without rendering the company unable to produce software for paying
    customers. They did not try.

Maybe, if my apps were an order of magnitude more complicated than
they actually are, I'd feel differently.

    The main library that I work on is about 65MB as a Windows DLL; similar
    sizes on x86-64 and ARM64. The test harness is about 5MB. The full test
    data is somewhere over 300GB.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Anton Ertl on Sat Feb 17 18:58:00 2024
    In article <2024Feb17.190836@mips.complang.tuwien.ac.at>, anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    One solution would be if MS finally switched to using Linux as the
    basis for Windows. Then they would automatically get all the stuff
    that is done for Android and for the SBCs, although that is a sad
    story, too.

    Most Android device drivers are proprietary closed-source, belonging to
    the SoC designers or device designers. Open-source Android drivers are
    mostly written by reverse engineering the hardware, which is why fully open-source Android offshoots, like LineageOS, usually only support
    obsolete hardware.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Anton Ertl on Sat Feb 17 18:43:48 2024
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:

    Given the choice of an ARM-based system with some SoC-specific kernel
    that is only supported for a few years

    That's a false choice. See ARM BSA and SBSA.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Sat Feb 17 20:05:05 2024
    Lawrence D'Oliveiro wrote:

    On Sat, 17 Feb 2024 02:40:29 -0000 (UTC), John Levine wrote:

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:

The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.

    More likely the Qualcomm chips have some peripherals that Windows wants.

    I wonder what they could be?

WiFi radio transceivers, Bluetooth, ...

    What’s so special about Qualcomm chips, that is so specific to Windows? Because the products themselves don’t seem to reflect anything special.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Sat Feb 17 22:20:57 2024
    On Sat, 17 Feb 2024 20:05:05 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    Lawrence D'Oliveiro wrote:

    On Sat, 17 Feb 2024 02:40:29 -0000 (UTC), John Levine wrote:

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:

    The other thing is: why is Windows-on-ARM so heavily tied to
    Qualcomm chips? ARM Linux can run on a whole range of ARM chips
    from a whole range of different vendors.

    More likely the Qualcomm chips have some peripherals that Windows
    wants.

    I wonder what they could be?

    WiFi radio transceivers, bluetooth, ...


Those are trivial parts.
Much more importantly, they all have cellular modems.
MS wants their WinARM customers to be connected to the Internet all
the time, preferably even when the big application processor is put
to sleep.

    What’s so special about Qualcomm chips, that is so specific to
    Windows? Because the products themselves don’t seem to reflect
    anything special.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to John Dallman on Sat Feb 17 22:48:43 2024
    On Sat, 17 Feb 2024 18:22 +0000 (GMT Standard Time)
    jgd@cix.co.uk (John Dallman) wrote:


    The Microsoft cross-development setup required doing everything in
    their IDE.

That's very strange. I know for sure that VS2019 has fully
functioning command-line tools for aarch64. I was under the impression
that VS2017 also has them.
It is typically more convenient to prepare the setup (project file) in
the IDE, but after that you don't have to touch the IDE at all if you
don't want to. Just type 'msbuild' from a command prompt and everything
is compiled exactly the same as from the IDE. At worst, you sometimes
need to add a few magic compilation options, like 'msbuild
-p:Configuration=Release'.
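
For instance, a command-line cross-build of a hypothetical project for
ARM64 (the project name is made up; the properties are standard msbuild
ones, assuming an ARM64 configuration exists in the project):

    msbuild MyApp.vcxproj -p:Configuration=Release -p:Platform=ARM64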

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Michael S on Sat Feb 17 21:37:00 2024
    In article <20240217224843.000052c3@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

    The Microsoft cross-development setup required doing everything in
    their IDE.
    That's very strange. I know for sure that in Vs2019 they have fully functioning command line tools for aarch64. Was under impression
    that VS2017 also has them.

    They are there, and I use them. Do not try to use aarch64 tools before
    VS.2019 v16.7, which was when some significant code generator fixes
    appeared. VS.2022 is good from v17.0.0, and fixes a major misfeature in VS.2019's floating-point code generation.

It is typically more convenient to prepare the setup (project file)
in the IDE, but after that you don't have to touch the IDE at all if
you don't want to. Just type 'msbuild' from a command prompt and
everything is compiled exactly the same as from the IDE. At worst, you
sometimes need to add a few magic compilation options, like 'msbuild
-p:Configuration=Release'.

    The problems were (a) the project system can't build the domain-specific language I'm working in and (b) I could not find a way other than the IDE
    to do the pushing to the device and running the app. I stopped looking
    when I realised I could work on the device, so there may be a way to do
    it without the IDE, but it's not something that's easy to find.

    The IDE really does not work well with my partial sight. It expects me to
    be able to sit far enough from the screen to see the whole screen, while
    still able to read text on it. This is not achievable. I simply don't
    have the angular discrimination to do it. I need to be very close to a
    screen to read it - about 20cm is best - and then I'm simply unaware of
    things happening around the edge. I have this problem with all IDEs;
    Xcode is even more annoying than Visual Studio.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sat Feb 17 22:05:00 2024
    On Sat, 17 Feb 2024 19:22:25 +0200, Michael S wrote:

    First, because probably there would not be my preferred programmer's
    editor installed.

A commonality of OS distribution would fix that. Seems a lot of
development is moving to Linux now, which is why Microsoft is putting
so much effort into WSL. The Raspberry Pi, in particular, runs the same
sort of Debian distro widely available on x86 and over half a dozen
other architectures.

Second, and far more important, because it would be too much trouble
keeping all sources synchronized with the company's source-control
servers. There is approximately zero chance that the target would be
allowed to be connected to the corporate network.

    But the target is connected to your main PC, so it could pull indirectly
    from there. Or alternatively your main PC could push to it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Anton Ertl on Sat Feb 17 22:13:00 2024
    On Sat, 17 Feb 2024 18:08:36 GMT, Anton Ertl wrote:

    One solution would be if MS finally switched to using Linux as the basis
    for Windows.

    Once they brought a Linux kernel into Windows with WSL2, it seemed
    inevitable that they would rely on it more and more, until it became a mandatory part of a Windows install.

    I would call this <https://www.theregister.com/2023/12/14/windows_ai_studio_preview/>
    the first step.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Sat Feb 17 22:09:40 2024
    On Sat, 17 Feb 2024 11:41 +0000 (GMT Standard Time), John Dallman wrote:

    But most of all,
    the design is based on the compilers being able to solve a problem that
    can't be solved in practice: static scheduling of memory loads in a
    system with multiple levels of cache.

That seems insane. Since when did architectural specs dictate the levels
of cache you could have? Normally, that is an implementation detail that
can vary between different instances of the same architecture.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to D'Oliveiro on Sat Feb 17 22:30:00 2024
    In article <uqrar3$k3pf$7@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:
    On Sat, 17 Feb 2024 11:41 +0000 (GMT Standard Time), John Dallman
    wrote:
    But most of all, the design is based on the compilers being
    able to solve a problem that can't be solved in practice:
    static scheduling of memory loads in a system with multiple
    levels of cache.
    That seems insane.

    To a modern understanding, it is insane. That's why I try to explain to
    people who think "weird architecture from twenty years ago, didn't work
    out, maybe I could make it work" that it is fundamentally flawed.

    Since when did architectural specs dictate the levels of cache
    you could have? Normally, that is an implementation detail, that
    can vary between different instances of the same architecture.

    IA-64 did not attempt to dictate that, and implementations did have
    varying levels and sizes of cache. That makes the attempt at static load scheduling impractical, even if the processor wasn't taking interrupts.


    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to BGB on Sat Feb 17 22:30:00 2024
    In article <uqr91u$k0jd$1@dont-email.me>, cr88192@gmail.com (BGB) wrote:

    Except, if they could have made the chip both cheaper and faster
    than a corresponding OoO x86 chip.

    As I understand it, this was the promise of IA-64.

They never got anywhere close to "faster", which meant they never had
the manufacturing volume to start working on "cheaper". The bulky
instruction set meant the caches had to be larger than x86's, which ate
up die area, and the lack of OoO meant that it spent lots of time with
the pipeline stalled.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Sun Feb 18 00:27:21 2024
    On Sat, 17 Feb 2024 22:30 +0000 (GMT Standard Time), John Dallman wrote:

    To a modern understanding, it is insane.

    I think that was already becoming apparent even before it finally shipped.

    I think HP and Intel started the project around 1990, and it only reached production quality by nearly the end of that decade. During that time,
    RISC architectures continued to improve, with things like superscalar,
    multiple function units and out-of-order execution--basically leaving
    IA-64 in the dust before it could even ship.

    I think it was only fear of loss of corporate face that kept the project
    going when it became clear it should have been abandoned.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Sun Feb 18 01:10:46 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Sat, 17 Feb 2024 22:30 +0000 (GMT Standard Time), John Dallman wrote:

    To a modern understanding, it is insane.

    I think that was already becoming apparent even before it finally shipped.

    I think HP and Intel started the project around 1990,

HP and Intel didn't join forces on what became Itanium
until Intel gave up on the P7 project in 1994.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Lurndal on Sun Feb 18 08:59:00 2024
    In article <qOcAN.65951$6ePe.26632@fx42.iad>, scott@slp53.sl.home (Scott Lurndal) wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    I think HP and Intel started the project around 1990,
    The HP and Intel didn't join forces on what became Itanium
    until intel gave up on the P7 project in 1994.

    And they didn't start publicising it until 1998, IIRC. If they thought it wasn't going to work, they could have quietly cancelled it.

It seems to have been a result of groupthink that got established, rather
than face-saving. It was moderately convincing at the time; it took me a
fair while to abandon the intuitive reaction that it ought to be very
fast, and accept that measurements were the only true knowledge.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to BGB on Sun Feb 18 08:26:24 2024
    BGB <cr88192@gmail.com> writes:
    On 2/17/2024 5:41 AM, John Dallman wrote:
    The huge number of architectural registers (128 64-bit integer, 128
    82-bit floating point) would have made shrinks hard.

    By the time the Itanium and Itanium II were delivered, not really. At
    that time they already had the Pentium 4 with 128 physical integer
    registers and 128 FP/SIMD registers <https://i0.wp.com/chipsandcheese.com/wp-content/uploads/2022/06/pentium4_65nm.drawio-1.png?w=905&ssl=1>
    and the Pentium 4 was the bread-and-butter CPU for Intel; and if it
    had been less power-hungry, it would also have been used for mobile.

    AFAIK:
    I think the idea was that they already had 100+ registers internally
    with their x86 chips (due to register renaming). And the idea of having
    128 GPRs in IA-64 was to eliminate the register renaming?...

    No. The idea was that the IA-64 implementations would be ready in
    1997, and that it would be superior in performance to the OoO
    competition. That's also why they wanted to introduce it to the
    market from the high end.

    Another idea (and you see it in the IA-64 name that was later dropped
    in favour of IPF, and in the IA-32 name that was invented around the
    same time) was that in the transition to 64 bits, Intel's customers
    would switch from IA-32 to IA-64, and of course that would happen on
    servers and workstations first.

    The reality was that IA-64 implementations were never generally
    superior to the OoO competition. They were doing fine in HPC stuff,
    but sucked in anything where performance is not dominated by simple (software-pipelineable) loops.
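
    A small C illustration of that split (my example, not Anton's): the
    first loop has independent iterations, so a compiler can
    software-pipeline it and hide load latency statically; the second
    carries a dependence through memory, where static scheduling is
    helpless and OoO hardware shines:

        /* EPIC-friendly: iterations are independent, so the loop is
           software-pipelineable. */
        void daxpy(double *y, const double *x, double a, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] += a * x[i];
        }

        struct node { struct node *next; long val; };

        /* EPIC-hostile: each load depends on the previous one, so no
           compile-time schedule can start the next load early. */
        long walk(const struct node *p)
        {
            long s = 0;
            while (p) {
                s += p->val;
                p = p->next;
            }
            return s;
        }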

    Except, if they could have made the chip both cheaper and faster than
    a corresponding OoO x86 chip.

    As I understand it, this was the promise of IA-64.

    Yes. They just were not able to keep it. And the reason is that they
    thought that scheduling in hardware is hard and inefficient, but it
    turns out that branch prediction at compile time is so much worse than
    hardware branch prediction at run-time that EPIC was not competitive
    with OoO.

    It is as if one looked at a Xeon and concluded that the Atom would
    have been impossible, because of how expensive and power-hungry the
    Xeon is.

    They wanted to produce superior performance by being wider than (they
    thought) was practical for OoO: 6 wide for Merced and McKinley (later,
    with Poulson, 12 wide). They did not produce superior performance,
    and nowadays, the Cortex-X4 is 10-wide; and Golden Cove (Alder Lake
    P-core) renames 6 instructions per cycle and at the same time
    eliminates transitive moves and also transitive addition-by-constants.

    They could have made a chip, say, with only a tiny fraction as much
    cache, ...

    Yes, they could have made a, say, 3-wide IA-64 implementation and
    designed it for low power and low area. The result would have been
    even slower than the implementations they actually produced. But of
    course, given that they thought that their architecture would show its strengths at wide designs, they certainly did not want to go there at
    the start.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Lawrence D'Oliveiro on Sun Feb 18 08:59:45 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Sat, 17 Feb 2024 18:08:36 GMT, Anton Ertl wrote:

    One solution would be if MS finally switched to using Linux as the basis
    for Windows.

    Once they brought a Linux kernel into Windows with WSL2, it seemed
    inevitable that they would rely on it more and more, until it became
    a mandatory part of a Windows install.

    That's not what I mean. What I mean is to have Windows use the
    Linux kernel rather than its current VMS-inspired kernel, and on top
    of Linux provide a proprietary layer that provides the Win32 etc. ABIs
    and APIs (what WINE is trying to do, but of course the WINE project
    has neither the resources nor the authority of Microsoft). Similar to
    Android.

    The benefit for Windows-on-ARM would be that all those SoCs that are
    supported by Android would also support Windows right away. The
    disadvantage would be that this support might be just as bad and
    short-lived as for Android.

    Thinking about it again, the proprietary-binary driver model of
    Windows fits the tastes of these SoC manufacturers better than the
    free source-level driver model of Linux, so once Windows-on-ARM
    actually sells a significant number of SoCs, the SoC manufacturers
    will happily provide such drivers.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Anton Ertl on Sun Feb 18 10:44:00 2024
    In article <2024Feb18.095945@mips.complang.tuwien.ac.at>, anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    Thinking about it again, the proprietary-binary driver model of
    Windows fits the tastes of these SoC manufacturers better than the
    free source-level driver model of Linux, so once Windows-on-ARM
    actually sells a significant number of SoCs, the SoC manufacturers
    will happily provide such drivers.

    Windows is hungry for CPU power, so it has potential to sell the
    higher-end SoCs in more volume than flagship Android devices. Hum ...

    MediaTek recently launched a high-end SoC, their Dimensity 9300, which
    has four ARM Cortex-X4s and four Cortex-A720s. <https://en.wikipedia.org/wiki/List_of_MediaTek_systems_on_chips#Dimensity_9000_Series>

    That's a lot like the Qualcomm Snapdragon 8cx family which are intended
    for Windows. <https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_systems_on_chips#Snapdragon_8cx_Compute_Platforms>

    I suspect MediaTek may be preparing to join the Windows market; they also
    say that these fast cores end up using less total energy for a given task
    than slower cores.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Scott Lurndal on Sun Feb 18 11:19:33 2024
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:

    Given the choice of an ARM-based system with some SoC-specific kernel
    that is only supported for a few years

    That's a false choice. See ARM BSA and SBSA.

    Ok, I found "ARM Base System Architecture" and "Server Base System Architecture". What I have not found (and I doubt that I will find it
    there) is a mainline Linux kernel that runs on our Odroid N2 (SoC:
    Amlogic S922X) and where perf stat produces results. I doubt that I
    will find such a kernel in BSA or SBSA. By contrast, that's something
    that our complete arsenal of machines with the AMD64 architecture
    manages just fine. And that's just one thing.

    For a more mainstream problem, installing a new kernel on an AMD64 PC
    works the same way across the whole platform (well, UEFI introduced
    some excitement and problems, but for the earlier machines, and the
    ones from after the first years of UEFI, this went smoothly). By
    contrast, for the ARM-based SoCs, I have to read up on the Do's and
    Don'ts of U-Boot for this particular SoC; I don't have time for
    this nonsense, so I don't remember what the specific issues are, only
    that there is quite a bit of uncertainty involved.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to John Dallman on Sun Feb 18 11:50:49 2024
    jgd@cix.co.uk (John Dallman) writes:
    And they didn't start publicising it until 1998, IIRC.

    Well, according to ZDNet <https://web.archive.org/web/20080209211056/http://news.zdnet.com/2100-9584-5984747.html>,
    Intel and HP announced their collaboration in 1994, and revealed more
    details in 1997. I find postings about IA64 in my archive from 1997,
    but I remember reading stuff about it with no details for several
    years. I posted my short review of the architecture in October 1999 <https://www.complang.tuwien.ac.at/anton/ia-64-1999.txt>, so by that
    time the architecture specification had already been published.

    If they thought it
    wasn't going to work, they could have quietly cancelled it.

    After the 1994 announcement, some people might have asked at one point
    what became of the project, but yes.

    It seems to have been a result of groupthink that got established,
    rather than face-saving.

    Yes.

    It was moderately convincing at the time; it took me a
    fair while to abandon the intuitive reaction that it ought to be very
    fast, and accept that measurements were the only true knowledge.

    I certainly thought at the time that they were on the right track.
    Everything we knew about the success of RISC in the 1980s and about
    the difficulties of getting more instruction-level parallelism in the
    early 1990s suggested that EPIC would be a good idea.

    The worrying thing is that a few decades later, these ideas are still
    so seductive, and the reasons why OoO+SIMD worked out better are
    still so little-known that people still think that EPIC (and its
    incarnations IA-64 and Transmeta) is basically a good idea that just
    had some marketing mistake (e.g., in this thread), or just would need
    a few more good ideas (e.g., the Mill with its belt rather than
    rotating register files).

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Anton Ertl on Sun Feb 18 15:44:00 2024
    In article <2024Feb18.125049@mips.complang.tuwien.ac.at>, anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    The worrying thing is that a few decades later, these ideas are
    still so seductive, and the reasons of why OoO+SIMD worked out
    better are still so little-known that people still think that
    EPIC (and their incarnations IA-64 and Transmeta) are basically
    good ideas that just had some marketing mistake (e.g., in this
    thread),

    IA-64 certainly did have some marketing mistakes, but they weren't what
    sank it.

    or just would need a few more good ideas (e.g., the Mill with
    its belt rather than rotating register files).

    That . . . seems fair, actually. Oh, well. I'll pull it out of my list of tentative platform names.

    It's been clear to me for a while that the differences between
    conventional ISAs aren't actually very important, provided they can
    exploit all the memory and cache bandwidth and latency available. As
    things evolve, new problems arise with existing ISAs.

    The triumph of OoO as a means of managing the delays between memory and
    CPU suggests that an ISA that made it easier for a CPU to determine dependencies in some way has potential to make fast processors cheaper. I
    don't know how to do that, but it's worth thinking about.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to John Dallman on Sun Feb 18 16:16:10 2024
    jgd@cix.co.uk (John Dallman) writes:
    In article <qOcAN.65951$6ePe.26632@fx42.iad>, scott@slp53.sl.home (Scott Lurndal) wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    I think HP and Intel started the project around 1990,
    The HP and Intel didn't join forces on what became Itanium
    until intel gave up on the P7 project in 1994.

    And they didn't start publicising it until 1998, IIRC. If they thought
    it wasn't going to work, they could have quietly cancelled it.

    I was at SGI in 1998, when some of SGI's compiler technology was
    being considered for Merced.


    It seems to have been a result of groupthink that got established,
    rather than face-saving. It was moderately convincing at the time; it
    took me a fair while to abandon the intuitive reaction that it ought
    to be very fast, and accept that measurements were the only true
    knowledge.

    While that's fair, I'd suggest that there haven't been many successes
    in the industry when attempting radical new architectures (Cray aside).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Anton Ertl on Sun Feb 18 16:22:59 2024
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:

    Given the choice of an ARM-based system with some SoC-specific kernel
    that is only supported for a few years

    That's a false choice. See ARM BSA and SBSA.

    Ok, I found "ARM Base System Architecture" and "Server Base System
    Architecture". What I have not found (and I doubt that I will find it
    there) is a mainline Linux kernel that runs on our Odroid N2 (SoC:
    Amlogic S922X) and where perf stat produces results.

    Does the Odroid N2 claim compliance with the BSA?

    (It won't claim the SBSA, since it's not a server).

    All the major OS vendors participate in the SBSA, and all
    work properly on SBSA-compliant ARMv8/v9 systems, provided
    drivers for proprietary hardware are available upstream
    in the linux tree (something high-end SoC customers usually require).


    I doubt that I
    will find such a kernel in BSA or SBSA. By contrast, that's something
    that our complete arsenal of machines with the AMD64 architecture
    manages just fine. And that's just one thing.

    For a more mainstream problem, installing a new kernel on an AMD64 PC
    works the same way across the whole platform (well, UEFI introduced
    some excitement and problems, but for the earler machines, and the
    ones from after the first years of UEFI, this went smooth).

    All of our ARMv8 SoCs support either UEFI or U-Boot; it's up
    to the customer to choose which to use based on their
    requirements.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Scott Lurndal on Sun Feb 18 18:05:42 2024
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:

    Given the choice of an ARM-based system with some SoC-specific kernel
    that is only supported for a few years

    That's a false choice. See ARM BSA and SBSA.

    Ok, I found "ARM Base System Architecture" and "Server Base System
    Architecture". What I have not found (and I doubt that I will find it
    there) is a mainline Linux kernel that runs on our Odroid N2 (SoC:
    Amlogic S922X) and where perf stat produces results.

    Does the Odroid N2 claim compliance with the BSA?

    I have no idea.

    All the major OS vendors participate in the SBSA, and all
    work properly on SBSA-compliant ARMv8/v9 systems, provided
    drivers for proprietary hardware are available upstream
    in the linux tree (something high-end SoC customers usually require).

    So the BSA label, if present, tells me that the SoC is supported by
    mainline Linux. Unfortunately, most SoCs are not supported by
    mainline Linux, because apparently significant hardware on the SoC is
    supported only by some driver that sits on some forked Linux without
    being upstreamed. And that's what results in smartphones with these
    SoCs eventually not being able to get security updates.

    As for high-end, I doubt that the SoC on a EUR 100 SBC meets that
    description. But I don't think I will find a high-end SoC with a
    Cortex-A73, much less in an SBC with support for a GNU/Linux
    distribution rather than some Android system.

    Overall, there are not that many SBCs around, and even fewer SoCs that
    are used in them. The Rockchip SoCs we have used (RK3399, RK3588)
    seem to be better supported than the Amlogic ones (S905, S922X). The
    Raspis, when they eventually arrive, have good support, but they tend
    to be quite late. E.g., we have had the Rock5B (with RK3588,
    Cortex-A76s and A55s) for IIRC more than half a year before any word
    about the Raspi5 (with a SoC with A76 cores) reached me. The bottom
    line is that, for measuring how the A73 performs, the Odroid N2(+) is
    the only game in town.

    For a more mainstream problem, installing a new kernel on an AMD64 PC
    works the same way across the whole platform (well, UEFI introduced
    some excitement and problems, but for the earler machines, and the
    ones from after the first years of UEFI, this went smooth).

    All of our ARMv8 SoCs support either UEFI or U-Boot; it's up
    to the customer to choose which to use based on their
    requirements.

    Yes, I have seen U-Boot stuff in the documentation of the SBCs we use.
    But the instructions for upgrading to a new kernel on these SBCs are
    worrying.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Sun Feb 18 21:05:30 2024
    On Sun, 18 Feb 2024 08:59 +0000 (GMT Standard Time), John Dallman wrote:

    And they didn't start publicising it until 1998, IIRC. If they thought
    it wasn't going to work, they could have quietly cancelled it.

    I certainly heard about it before then. As I understood it, things went
    quiet because it was taking longer than expected to make it all work. But
    there were obviously those sufficiently high up in the management chain
    who were determined not to be proven wrong. Otherwise, it could have been cancelled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Anton Ertl on Sun Feb 18 21:08:50 2024
    On Sun, 18 Feb 2024 11:50:49 GMT, Anton Ertl wrote:

    The worrying thing is that a few decades later, these ideas are still
    so seductive, and the reasons why OoO+SIMD worked out better are
    still so little-known that people still think that EPIC (and its
    incarnations IA-64 and Transmeta) is basically a good idea that just
    had some marketing mistake ...

    The equivalent on the software side would be microkernels--again, there
    are those who still think they can be made to work efficiently, in spite
    of mounting evidence to the contrary.

    Also, SIMD, while very fashionable nowadays, with its combinatorial
    explosion in the number of added instructions, does tend to make a mockery
    of the “R” in “RISC”. That’s why RISC-V is resurrecting the old Cray-style
    long vectors instead.
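
    A rough C sketch of the difference, where hw_setvl() is a
    hypothetical stand-in for something like RVV's vsetvli: Cray-style
    code asks the hardware for a vector length on every strip, so one
    binary fits any implementation, while fixed-width SIMD bakes the
    register width (and a fresh batch of instructions) into the code:

        enum { VLMAX = 8 };   /* pretend hardware vector length */

        /* Hypothetical: hardware grants some VL, capped at n. */
        static long hw_setvl(long n) { return n < VLMAX ? n : VLMAX; }

        /* Vector-length-agnostic strip-mining, Cray/RVV style. */
        void vscale(double *x, double a, long n)
        {
            while (n > 0) {
                long vl = hw_setvl(n);
                for (long i = 0; i < vl; i++)  /* one "vector op" */
                    x[i] *= a;
                x += vl;
                n -= vl;
            }
        }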

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Anton Ertl on Sun Feb 18 21:01:15 2024
    On Sun, 18 Feb 2024 08:59:45 GMT, Anton Ertl wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Sat, 17 Feb 2024 18:08:36 GMT, Anton Ertl wrote:

    One solution would be if MS finally switched to using Linux as the
    basis for Windows.

    Once they brought a Linux kernel into Windows with WSL2, it seemed
    inevitable that they would rely on it more and more, until it became
    a mandatory part of a Windows install.

    That's not what I mean. What I mean is to have Windows use the
    Linux kernel rather than its current VMS-inspired kernel ...

    That is the next step. It would be the path of least resistance to
    implement new functionality on the Linux side, and let the Windows kernel wither away.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Sun Feb 18 21:12:08 2024
    On Sun, 18 Feb 2024 16:16:10 GMT, Scott Lurndal wrote:

    ... I'd suggest that there haven't been many successes in
    the industry when attempting radical new architectures (Cray aside).

    Risky ideas are risky ...

    After he left CDC, one might say Seymour Cray’s only real success was the Cray-1. Not sure if the Cray-2 made much money, and the 3 and 4 didn’t
    even make it into regular production.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Sun Feb 18 21:41:55 2024
    Lawrence D'Oliveiro wrote:

    On Sun, 18 Feb 2024 16:16:10 GMT, Scott Lurndal wrote:

    ... I'd suggest that there haven't been many successes in
    the industry when attempting radical new architectures (Cray aside).

    Risky ideas are risky ...

    After he left CDC, one might say Seymour Cray’s only real success was the Cray-1. Not sure if the Cray-2 made much money, and the 3 and 4 didn’t
    even make it into regular production.

    Seymour's talent was in packaging not in computer architecture.
    Thornton was the computer µarchitect of the group.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Sun Feb 18 21:48:57 2024
    Lawrence D'Oliveiro wrote:

    On Sun, 18 Feb 2024 11:50:49 GMT, Anton Ertl wrote:

    The worrying thing is that a few decades later, these ideas are still
    so seductive, and the reasons why OoO+SIMD worked out better are
    still so little-known that people still think that EPIC (and its
    incarnations IA-64 and Transmeta) is basically a good idea that just
    had some marketing mistake ...

    The equivalent on the software side would be microkernels--again, there
    are those who still think they can be made to work efficiently, in spite
    of mounting evidence to the contrary.

    When context switches take 1,000+ cycles but CALL/RET only take 5, µKernels will never succeed. {That is a full context switch including ASID, IP, ROOT pointers, complete register file, and all associated thread-state.}

    µKernels can only succeed when context switch times are similar to
    CALL/RET; otherwise the performance requirements will end up dictating
    a monolithic design.
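
    Back-of-envelope C with the cycle counts from above (illustrative,
    not measured): a service reached by CALL/RET in a monolithic kernel
    costs a microkernel two full context switches per invocation:

        #include <stdio.h>

        int main(void)
        {
            const double call_ret = 2 * 5.0;     /* CALL + RET      */
            const double ipc_cost = 2 * 1000.0;  /* switch in + out */
            printf("overhead: %.0fx\n", ipc_cost / call_ret); /* 200x */
            return 0;
        }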

    Also, SIMD, while very fashionable nowadays, with its combinatorial
    explosion in the number of added instructions, does tend to make a mockery
    of the “R” in “RISC”. That’s why RISC-V is resurrecting the old Cray-style
    long vectors instead.


    Which has been my point over the last ~year~: the R in RISC needs to
    actually mean REDUCED. {{Any ISA with more than 200 instructions
    cannot be called RISC.}}

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Sun Feb 18 23:48:46 2024
    On Sun, 18 Feb 2024 21:41:55 +0000, MitchAlsup1 wrote:

    Seymour's talent was in packaging not in computer architecture.

    Bit unlikely, considering his supers didn’t use any very fancy packaging techniques at all.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Mon Feb 19 02:13:08 2024
    On Sun, 18 Feb 2024 18:13:42 -0600, BGB wrote:

    ... things like GUI can be handled with IPC calls.

    Which is how X11 and Wayland do it. The bottleneck is in the user response time, so the overhead of message-passing calls is insignificant.

    Granted, none of the mainstream OS's run the GUI directly in the kernel,
    so this may not be a factor.

    Both Microsoft and Apple do tie their GUIs quite inextricably into the OS kernel. That’s why you can’t customize them--at least, not in any easy way that doesn’t threaten the stability of the system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Anton Ertl on Mon Feb 19 02:42:17 2024
    Anton Ertl wrote:

    jgd@cix.co.uk (John Dallman) writes:
    And they didn't start publicising it until 1998, IIRC.

    Well, according to ZDNet <https://web.archive.org/web/20080209211056/http://news.zdnet.com/2100-9584-5984747.html>,
    Intel and HP announced their collaboration in 1994, and revealed more
    details in 1997. I find postings about IA64 in my archive from 1997,
    but I remember reading stuff about it with no details for several
    years. I posted my short review of the architecture in October 1999 <https://www.complang.tuwien.ac.at/anton/ia-64-1999.txt>, so by that
    time the architecture specification had already been published.

    If they thought it
    wasn't going to work, they could have quietly cancelled it.

    After the 1994 announcement, some people might have asked at one point
    what became of the project, but yes.

    It seems to have been a result of groupthink that got established,
    rather than face-saving.

    Yes.

    It was moderately convincing at the time; it took me a
    fair while to abandon the intuitive reaction that it ought to be very
    fast, and accept that measurements were the only true knowledge.

    I certainly thought at the time that they were on the right track.

    In 1991, when I first heard of what became Itanic, I was designing a
    6-wide GBOoO machine; we had a quick look-see and came to the
    conclusion it was doomed from the start.

    Everything we knew about the success of RISC in the 1980s and about
    the difficulties of getting more instruction-level parallelism in the
    early 1990s suggested that EPIC would be a good idea.

    We came to the opposite conclusion.

    The worrying thing is that a few decades later, these ideas are still
    so seductive, and the reasons why OoO+SIMD worked out better
    are still so little-known that people still think that EPIC (and its
    incarnations IA-64 and Transmeta) is basically a good idea that just
    had some marketing mistake (e.g., in this thread), or just would need
    a few more good ideas (e.g., the Mill with its belt rather than
    rotating register files).

    - anton

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Mon Feb 19 05:05:47 2024
    On Mon, 19 Feb 2024 02:42:17 +0000, MitchAlsup1 wrote:

    We came to the opposite conclusion.

    As they say, hindsight is 6/6.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to BGB on Mon Feb 19 15:06:35 2024
    BGB <cr88192@gmail.com> writes:
    On 2/18/2024 8:13 PM, Lawrence D'Oliveiro wrote:
    On Sun, 18 Feb 2024 18:13:42 -0600, BGB wrote:

    ... things like GUI can be handled with IPC calls.

    Which is how X11 and Wayland do it. The bottleneck is in the user
    response time, so the overhead of message-passing calls is
    insignificant.


    IIRC, X11 worked by passing message buffers over Unix sockets (with
    Xlib as a wrapper interface over the socket-level interface).

    The shared memory extension allows clients to directly access
    buffers in the server.

    https://www.x.org/releases/X11R7.7/doc/xextproto/shm.html
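
    A minimal C sketch of that path (error handling omitted; assumes
    libX11/libXext and the MIT-SHM extension are present): the client
    creates a SysV segment, the server attaches the same segment, and
    pixel data then never crosses the socket:

        #include <X11/Xlib.h>
        #include <X11/extensions/XShm.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        /* Build an XImage whose pixels live in shared memory. */
        XImage *make_shm_image(Display *dpy, XShmSegmentInfo *info,
                               int w, int h)
        {
            int scr = DefaultScreen(dpy);
            XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                          DefaultDepth(dpy, scr),
                                          ZPixmap, NULL, info, w, h);
            info->shmid = shmget(IPC_PRIVATE,
                                 img->bytes_per_line * img->height,
                                 IPC_CREAT | 0600);
            info->shmaddr = img->data = shmat(info->shmid, NULL, 0);
            info->readOnly = False;
            XShmAttach(dpy, info);  /* server maps the same segment */
            return img;             /* draw with XShmPutImage()     */
        }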

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Levine@21:1/5 to All on Mon Feb 19 16:25:19 2024
    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    On Sun, 18 Feb 2024 21:41:55 +0000, MitchAlsup1 wrote:

    Seymour's talent was in packaging not in computer architecture.

    Bit unlikely, considering his supers didn’t use any very fancy
    packaging techniques at all.

    Huh? Maybe not for individual chips, but the wiring and cooling and overall physical design were famous. Here's an article about it:

    https://american.cs.ucdavis.edu/academic/readings/papers/CRAY-technology.pdf

    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Mon Feb 19 18:22:17 2024
    Lawrence D'Oliveiro wrote:

    On Sun, 18 Feb 2024 21:41:55 +0000, MitchAlsup1 wrote:

    Seymour's talent was in packaging not in computer architecture.

    Bit unlikely, considering his supers didn’t use any very fancy packaging techniques at all.

    Consider cooling a refrigerator-sized computer that emits 300 KW
    of heat?? That IS a packaging problem, and an interesting one, too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Lawrence D'Oliveiro on Mon Feb 19 23:04:50 2024
    Lawrence D'Oliveiro wrote:
    On Sun, 18 Feb 2024 08:59 +0000 (GMT Standard Time), John Dallman wrote:

    And they didn't start publicising it until 1998, IIRC. If they thought
    it wasn't going to work, they could have quietly cancelled it.

    I certainly heard about it before then. As I understood it, things went
    quiet because it was taking longer than expected to make it all work. But there were obviously those sufficiently high up in the management chain
    who were determined not to be proven wrong. Otherwise, it could have been cancelled.

    I ordered the Itanium architecture manual as soon as the CPU was
    announced, and was very impressed. If it had turned up just 3 years
    later (instead of 7?), and at the originally promised speed/clock
    frequency, it would have been extremely competitive indeed.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Tue Feb 20 00:52:18 2024
    On Mon, 19 Feb 2024 15:06:35 GMT, Scott Lurndal wrote:

    On 2/18/2024 8:13 PM, Lawrence D'Oliveiro wrote:

    On Sun, 18 Feb 2024 18:13:42 -0600, BGB wrote:

    ... things like GUI can be handled with IPC calls.

    Which is how X11 and Wayland do it. The bottleneck is in the user
    response time, so the overhead of message-passing calls is
    insignificant.

    The shared memory extension allows clients to directly access buffers in
    the server.

    https://www.x.org/releases/X11R7.7/doc/xextproto/shm.html

    Notice that’s only for sharing image data. All the rest of the messages
    (e.g. event notifications) still go over a stream-oriented socket.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Levine on Tue Feb 20 01:05:17 2024
    On Mon, 19 Feb 2024 16:25:19 -0000 (UTC), John Levine wrote:

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    On Sun, 18 Feb 2024 21:41:55 +0000, MitchAlsup1 wrote:

    Seymour's talent was in packaging not in computer architecture.

    Bit unlikely, considering his supers didn’t use any very fancy
    packaging techniques at all.

    Huh? Maybe not for individual chips, but the wiring and cooling and
    overall physical design were famous.

    From Charles J Murray’s “The Supermen” (1997), pages 128-129:

    “Cray had avoided the use of integrated circuits, or chips, for
    nearly six years. As early as 1966, when he’d started on the CDC
    7600, integrated circuits were commercially available at about
    five dollars each, making them roughly equivalent in price to a
    pile of discrete components. Even then, engineers understood the
    advantages of integrated circuits: They eliminated the need for
    careful hand soldering of individual components to a printed
    circuit board.

    “But Cray had always made a point of lagging a generation behind
    the technology curve. That was precisely what he’d done on the
    6600—using the silicon transistor almost a decade after its
    introduction. ...

    “In 1972 Cray knew it was time to use integrated circuits.”

    So the Cray-1 was his first computer using integrated circuits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn Wheeler@21:1/5 to mitchalsup@aol.com on Mon Feb 26 07:58:42 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    I had the first 200 MHz Pentium Pro out of the Micron factory.
    It ran DOOM at 73 fps and Quake at 45+ fps both full screen.
    I would not call that a joke.

    It was <essentially> the death knell for RISC workstations.

    In 2003, the 32-processor, max-configured IBM mainframe Z990
    benchmarked at an aggregate 9 BIPS.

    In 2003, a single Pentium 4 processor benchmarked at 9.7 BIPS.

    Also, in 1988, an IBM branch office asked if I could help LLNL
    standardize some serial stuff they were playing with, which quickly
    became the fibre channel standard (FCS: initially 1 Gbit/sec,
    full-duplex, 200 Mbytes/sec aggregate). Then some IBM mainframe
    engineers became involved and defined a heavy-weight protocol that
    significantly reduced the native throughput; it was released as FICON.

    The most recent public benchmark I can find is a "PEAK I/O" benchmark
    for a max-configured z196 getting 2M IOPS using 104 FICON (running
    over 104 FCS). About the same time, an FCS was announced for E5-2600
    blades claiming over a million IOPS (two of them having higher
    throughput than 104 FICON). Also, IBM pubs recommend that System
    Assist Processors ("SAPs", which do the actual I/O) be kept to no more
    than 70% processor utilization, which would be about 1.5M IOPS.

    --
    virtualization experience starting Jan1968, online at home since Mar1970

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to mitchalsup@aol.com on Mon Feb 26 19:26:00 2024
    In article <79833d0dcdebb9e173c5cd2c6029e851@www.novabbs.org>, mitchalsup@aol.com (MitchAlsup1) wrote:

    I had the first 200 MHz Pentium Pro out of the Micron factory...
    It was <essentially> the death knell for RISC workstations.

    Yup. They struggled on for some time, but they never got near the
    price-performance. When the Pentium Pro appeared, my boss was porting
    the software I work on to Windows NT on MIPS, because NetPower
    reckoned they had a market opportunity until they saw how fast PPro
    was. They switched shortly thereafter: <https://www.hpcwire.com/1996/02/16/netpower-migrates-from-mips-to-intels-x86-architecture/>

    Just as well, really: the Microsoft MIPS compiler was missing some vital
    fixes that had gone into SGI's compiler, and would have given loads of
    trouble to anyone attempting to do anything mildly complicated.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jean-Marc Bourguet@21:1/5 to John Dallman on Mon Feb 26 20:48:50 2024
    jgd@cix.co.uk (John Dallman) writes:

    In article <79833d0dcdebb9e173c5cd2c6029e851@www.novabbs.org>, mitchalsup@aol.com (MitchAlsup1) wrote:

    I had the first 200 MHz Pentium Pro out of the Micron factory...
    It was <essentially> the death knell for RISC workstations.

    Yup. They struggled on for some time, but they never got near the price-performance.

    64-bit support was what kept RISC workstations alive for a time.

    --
    Jean-Marc

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to D'Oliveiro on Mon Feb 26 21:57:00 2024
    In article <urivlh$2olmn$1@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On Mon, 26 Feb 2024 20:48:50 +0100, Jean-Marc Bourguet wrote:
    64-bit support was what kept RISC workstations alive for a time.
    Still, nowadays it seems a lot of Windows software is still 32-bit.
    Whereas on a 64-bit Linux workstation, everything is 64-bit.

    It is a little harder to port 32-bit Windows applications to 64-bit,
    because Windows uses the IL32LLP64 memory model, rather than I32LP64.
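
    A small C illustration (my example): under IL32LLP64, long stays 32
    bits while pointers are 64, so Unix-ish code that round-trips a
    pointer through long truncates it; intptr_t is the portable fix:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* I32LP64 (64-bit Unix):  long == 8 bytes, void* == 8
               IL32LLP64 (Win64):      long == 4 bytes, void* == 8 */
            printf("long=%zu void*=%zu\n", sizeof(long), sizeof(void *));

            int x = 42;
            void *p = &x;
            /* long l = (long)p;       truncates under IL32LLP64 */
            intptr_t l = (intptr_t)p;  /* as wide as a pointer    */
            printf("%p\n", (void *)l);
            return 0;
        }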

    Microsoft are gradually retiring 32-bit x86 versions of their operating
    system, but they won't take away the ability to run 32-bit applications
    in the foreseeable future, because there are still plenty around. That
    means that applications that don't actually need 64-bit data addressing
    can stay 32-bit, until someone decides to make the change. Even in a
    single market segment, some companies have dropped 32-bit, while others
    are still firmly 32-bit and seem scared of 64-bit.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Jean-Marc Bourguet on Mon Feb 26 21:26:09 2024
    On Mon, 26 Feb 2024 20:48:50 +0100, Jean-Marc Bourguet wrote:

    64-bit support was what kept RISC workstations alive for a time.

    Still, nowadays it seems a lot of Windows software is still 32-bit.
    Whereas on a 64-bit Linux workstation, everything is 64-bit.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Mon Feb 26 23:27:26 2024
    On Mon, 26 Feb 2024 21:57 +0000 (GMT Standard Time), John Dallman wrote:

    Microsoft are gradually retiring 32-bit x86 versions of their operating system, but they won't take away the ability to run 32-bit applications
    in the foreseeable future, because there are still plenty around.

    I was mildly surprised to discover recently that Microsoft Visual
    Studio only made the transition to 64-bit a couple of years ago. And
    today I was even more surprised to discover that they haven’t quite
    completed the transition: it seems the Windows Forms designer has
    trouble because a lot of components are still 32-bit <https://devclass.com/2024/02/26/microsoft-struggles-to-address-fallout-from-windows-forms-designer-failure-in-64-bit-visual-studio/>.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)