I want my new architecture to offer something the x86 doesn't...
efficient emulation of older architectures with 36-bit, 48-bit,
and 60-bit words, so that those who have really old programs to
run are no longer disadvantaged.
While this seems like a super-niche thing to some, I see it as
something that's practically _essential_ if we are to have a future world
of computers that doesn't leave older code behind - so that the
computer you already have on your desktop is truly general in its capabilities.
In the early days of the microcomputer era, one could either
have a cheap small computer with a single-chip CPU or, if
one wanted something bigger, get moderate performance
from bit-slice chips.
If you wanted higher performance than a bit-slice design would
allow, you had to use older, less highly integrated technology,
so the increase in cost was too large to be justified by the
increase in performance.
Eventually, the Pentium Pro, and its popular successor the Pentium
II, came along, and now a processor of System 360/195 class was placed
on a single chip (two dies, though, as the L2 cache, which was
in the package, had to be on a separate die) and the problem was
solved.
This explains my goal of including a Cray-1 style vector capability
on a microprocessor - this is the one historic capability not yet reduced
to a single chip which extends into a performance space beyond that of the
360/195.
My reasoning may be very naive, in that I may be failing to take
into account how the current gap between CPU and DRAM speeds makes
older architectures impractical.
And, as I've noted also, the overwhelming dominance of Windows on
the x86 shows "there can be only one",
which is why I want my new architecture to offer something the x86 doesn't... efficient
emulation of older architectures with 36-bit, 48-bit, and 60-bit
words, so that those who have really old programs to run are no
longer disadvantaged.
I don't see FPGAs in their current form as efficient enough to
offer a route to the kind of generality I'm seeking.
By explaining what my goals are, rather than discussing the ISA
proposals that I see as a means to those goals, perhaps I make it
possible for someone to suggest a better and more practical way to
achieve those goals.
John Savard
Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
that Intel expected that the majority of Windows code would be 32-bit
by that point. It wasn’t.
Maybe for some segment of the Windows world, but for the workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
a game changer.
On Wed, 14 Feb 2024 16:57:54 -0500, Stefan Monnier wrote:
Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
that Intel expected that the majority of Windows code would be 32-bit
by that point. It wasn’t.
Maybe for some segment of the Windows world, but for the
workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
a game changer.
A chip with the emphasis on 32-bit performance, later replaced by the
Pentium II, with a greater emphasis on 16-bit performance ... only in the
x86 world, eh?
On Wed, 14 Feb 2024 05:50:41 -0000 (UTC), Quadibloc wrote:
Eventually, the Pentium Pro ...
Ah, the poor Pentium Pro, that was a bit of a joke. The problem was that
Intel expected that the majority of Windows code would be 32-bit by that point. It wasn’t.
And, as I've noted also, the overwhelming dominance of Windows on the
x86 shows "there can be only one", which is why I want my new
architecture to offer something the x86 doesn't... efficient emulation
of older architectures with 36-bit, 48-bit, and 60-bit words, so that
those who have really old programs to run are no longer disadvantaged.
Didn’t a company called “Transmeta” try that ... something like 30 years
ago? It didn’t work.
There is no path forward for Windows on non-x86. Only open-source software
is capable of being truly cross-platform.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
There is no path forward for Windows on non-x86.
That's entirely up to Microsoft.
As has been noted, they do have
ARMv8 versions of windows 11.
https://learn.microsoft.com/en-us/windows/arm/overview
On Thu, 15 Feb 2024 07:24:56 GMT, Anton Ertl wrote:
Microsoft is trying to commoditize their complement (in
particular, Intel) by making Windows on ARM viable, but the ISVs don't
play along.
Can you blame them? They are not going to port their proprietary apps to
ARM until they see the customers buying lots of ARM-based machines, and customers are staying away from buying ARM-based machines because they don’t see lots of software that will take advantage of the hardware.
Chicken-and-egg situation, and no way to break out of it.
Lawrence D'Oliveiro wrote:
On Wed, 14 Feb 2024 16:57:54 -0500, Stefan Monnier wrote:
Ah, the poor Pentium Pro, that was a bit of a joke. The problem was
that Intel expected that the majority of Windows code would be 32-bit
by that point. It wasn’t.
Maybe for some segment of the Windows world, but for the
workstation/unix/RISC world, the Pentium Pro was no joke at all: it was
a game changer.
A chip with the emphasis on 32-bit performance, later replaced by the
Pentium II, with a greater emphasis on 16-bit performance ... only in the
x86 world, eh?
This sounds remarkably like you expected sane behavior from x86 land.
What did matter, a lot, was the fact that when the PPro arrived, at an initial speed of up to 200 MHz, it immediately took over the crown as
the fastest SPECint processor in the world.
On Wed, 14 Feb 2024 21:32:11 GMT, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
There is no path forward for Windows on non-x86.
That's entirely up to Microsoft. As has been noted, they do have ARMv8
versions of windows 11.
They’ve been trying for years: Windows Phone 8, Windows RT, that laughable “Windows 10 IoT Edition” for the Raspberry Pi, whatever the name is for the current effort ... Windows-on-ARM has always been a trainwreck.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 14 Feb 2024 21:32:11 GMT, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
There is no path forward for Windows on non-x86.
That's entirely up to Microsoft. As has been noted, they do have ARMv8
versions of windows 11.
They’ve been trying for years: Windows Phone 8, Windows RT, that laughable “Windows 10 IoT Edition” for the Raspberry Pi, whatever the name is for the current effort ... Windows-on-ARM has always been a trainwreck.
https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
There is no path forward for Windows on non-x86.
That's entirely up to Microsoft. As has been noted, they do have
ARMv8 versions of windows 11.
https://learn.microsoft.com/en-us/windows/arm/overview
Though, I suspect, this may be similar to what killed the IA-64.
History might have gone quite differently if Intel, instead of
targeting it at the high end, had made it first available as a
lower-cost alternative to the Celeron line.
Had it survived for longer, it could have maybe been a viable
option for smartphones and tablets.
That was viewed by MS as an "iPad killer", since it had a keyboard and
the "vastly superior Windows GUI" which did seem to be missing the point quite badly.
Development for it was supposed to be done on x64 Windows,
with the ARM Windows device being used via a USB connection, like iPad development.
Qualcomm claim their Snapdragon X Elite CPUs will compete with Apple's
CPUs, although proof will have to wait for them to be available.
In the late 1990s, when those decisions were made, smart
mobile devices didn't exist.
Development for it was supposed to be done on x64 Windows,
with the ARM Windows device being used via a USB connection, like
iPad development.
Which is such a dumb thing to do, given the Linux alternatives
offer self-hosted development and deployment stacks. Even the
humble Raspberry Pi could manage that from Day 1.
The other thing is: why is Windows-on-ARM so heavily tied to
Qualcomm chips? ARM Linux can run on a whole range of ARM chips
from a whole range of different vendors.
In article <uqold3$1ha3$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
On Fri, 16 Feb 2024 14:32 +0000 (GMT Standard Time), John Dallman
wrote:
In the late 1990s, when those decisions were made, smart mobile
devices didn't exist.
Actually, they did. PDAs, remember?
True, but batteries of the period could not have supported Itanium's
100W+ power consumption for any useful time.
Lawrence D'Oliveiro wrote:
The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.
Qualcomm paid for the port ?!?
According to Lawrence D'Oliveiro <ldo@nz.invalid>:
The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.
More likely the Qualcomm chips have some peripherals that Windows wants.
Presumably they could have scaled [IA-64] down, while still keeping the
core ISA design intact?...
Like, presumably they had wanted to use the design for things big
and small, which would not have made sense if it could only be used
in big server chips.
But, maybe, say, as a CPU for home game-consoles or set-top
boxes?...
Or those thin clients that did little other than dial into the
internet and run a web-browser?...
In article <uqola0$1ha3$3@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Development for it was supposed to be done on x64 Windows,
with the ARM Windows device being used via a USB connection, like
iPad development.
Which is such a dumb thing to do, given the Linux alternatives
offer self-hosted development and deployment stacks. Even the
humble Raspberry Pi could manage that from Day 1.
As I said, Microsoft's approach was widely rejected and they've
abandoned it.
The other thing is: why is Windows-on-ARM so heavily tied to
Qualcomm chips? ARM Linux can run on a whole range of ARM chips
from a whole range of different vendors.
My knowledge of that story is under NDA at present.
John
John Levine <johnl@taugh.com> writes:
According to Lawrence D'Oliveiro <ldo@nz.invalid>:
The other thing is: why is Windows-on-ARM so heavily tied to
Qualcomm chips? ARM Linux can run on a whole range of ARM chips
from a whole range of different vendors.
More likely the Qualcomm chips have some peripherals that Windows
wants.
Unlikely. More likely they fit the power curves required for the
portable devices like the Surface and the Lenovo Thinkpad.
https://github.com/AmpereComputing/Windows-11-On-Ampere
On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:
Lawrence D'Oliveiro wrote:
The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm
chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.
Qualcomm paid for the port ?!?
Can’t Microsoft afford to port Windows to anything else?
I don't know about you, but I personally find well-implemented cross-development far more convenient than 'native' development.
I never developed for Win-ARM64, so don't know how well-implemented
it was.
In the case of CE, native development was not an option, but even if it
had been an option I would not use it. First, because my preferred
programmer's editor would probably not be installed.
Second, and far more important, because it would be too much
trouble keeping all sources synchronized with the company's source
control servers.
There is approximately zero chance that the target would be allowed
to be connected to the corporate network.
Maybe, if my apps were an order of magnitude more complicated than
they actually are, I'd feel differently.
One solution would be if MS finally switched to using Linux as the
basis for Windows. Then they would automatically get all the stuff
that is done for Android and for the SBCs, although that is a sad
story, too.
On Sat, 17 Feb 2024 02:40:29 -0000 (UTC), John Levine wrote:
According to Lawrence D'Oliveiro <ldo@nz.invalid>:
The other thing is: why is Windows-on-ARM so heavily tied to Qualcomm chips? ARM Linux can run on a whole range of ARM chips from a whole
range of different vendors.
More likely the Qualcomm chips have some peripherals that Windows wants.
I wonder what they could be?
What’s so special about Qualcomm chips, that is so specific to Windows? Because the products themselves don’t seem to reflect anything special.
Lawrence D'Oliveiro wrote:
On Sat, 17 Feb 2024 02:40:29 -0000 (UTC), John Levine wrote:
According to Lawrence D'Oliveiro <ldo@nz.invalid>:
The other thing is: why is Windows-on-ARM so heavily tied to
Qualcomm chips? ARM Linux can run on a whole range of ARM chips
from a whole range of different vendors.
More likely the Qualcomm chips have some peripherals that Windows
wants.
I wonder what they could be?
WiFi radio transceivers, bluetooth, ...
What’s so special about Qualcomm chips, that is so specific to
Windows? Because the products themselves don’t seem to reflect
anything special.
The Microsoft cross-development setup required doing everything in
their IDE.
That's very strange. I know for sure that in VS2019 they have fully
functioning command-line tools for AArch64. I was under the impression
that VS2017 also has them.
It is typically more convenient to prepare the setup (project file)
in the IDE, but after that you don't have to touch the IDE at all if you don't
want to. Just type 'msbuild' from the command prompt and everything is
compiled exactly the same as from the IDE. At worst, sometimes you need
to add a few magic compilation options like 'msbuild
-p:Configuration=Release'.
But most of all,
the design is based on the compilers being able to solve a problem that
can't be solved in practice: static scheduling of memory loads in a
system with multiple levels of cache.
On Sat, 17 Feb 2024 11:41 +0000 (GMT Standard Time), John Dallman
wrote:
But most of all, the design is based on the compilers being
able to solve a problem that can't be solved in practice:
static scheduling of memory loads in a system with multiple
levels of cache.
That seems insane.
Since when did architectural specs dictate the levels of cache
you could have? Normally, that is an implementation detail that
can vary between different instances of the same architecture.
Except, if they could have made the chip both cheaper and faster
than a corresponding OoO x86 chip.
As I understand it, this was the promise of IA-64.
To a modern understanding, it is insane.
On Sat, 17 Feb 2024 22:30 +0000 (GMT Standard Time), John Dallman wrote:
To a modern understanding, it is insane.
I think that was already becoming apparent even before it finally shipped.
I think HP and Intel started the project around 1990,
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
I think HP and Intel started the project around 1990,
HP and Intel didn't join forces on what became Itanium
until Intel gave up on the P7 project in 1994.
On 2/17/2024 5:41 AM, John Dallman wrote:
The huge number of architectural registers (128 64-bit integer, 128
82-bit floating point) would have made shrinks hard.
AFAIK:
I think the idea was that they already had 100+ registers internally
with their x86 chips (due to register renaming). And the idea of having
128 GPRs in the IA-64 was to eliminate the register renaming?...
Except, if they could have made the chip both cheaper and faster than a corresponding OoO x86 chip.
As I understand it, this was the promise of IA-64.
It is like looking at a Xeon and then concluding that the Atom
would have been impossible because of how expensive and power-hungry
the Xeon is.
They could have made a chip, say, with only a tiny fraction as much
cache, ...
On Sat, 17 Feb 2024 18:08:36 GMT, Anton Ertl wrote:
One solution would be if MS finally switched to using Linux as the basis
for Windows.
Once they brought a Linux kernel into Windows with WSL2, it seemed
inevitable that they would rely on it more and more, until it became a mandatory part of a Windows install.
Thinking about it again, the proprietary-binary driver model of
Windows fits the tastes of these SoC manufacturers better than the
free source-level driver model of Linux, so once Windows-on-ARM
actually sells a significant number of SoCs, the SoC manufacturers
will happily provide such drivers.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:
Given the choice of an ARM-based system with some SoC-specific kernel
that is only supported for a few years
That's a false choice. See ARM BSA and SBSA.
In article <qOcAN.65951$6ePe.26632@fx42.iad>, scott@slp53.sl.home (Scott Lurndal) wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
I think HP and Intel started the project around 1990,
HP and Intel didn't join forces on what became Itanium until Intel gave up on the P7 project in 1994.
And they didn't start publicising it until 1998, IIRC. If they thought it wasn't going to work, they could have quietly cancelled it.
It seems to have been a result of groupthink that got established, rather than face-saving. It was moderately convincing at the time; it took me a
fair while to abandon the intuitive reaction that it ought to be very
fast, and accept that measurements were the only true knowledge.
scott@slp53.sl.home (Scott Lurndal) writes:
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:
Given the choice of an ARM-based system with some SoC-specific kernel that is only supported for a few years
That's a false choice. See ARM BSA and SBSA.
Ok, I found "ARM Base System Architecture" and "Server Base System >Architecture". What I have not found (and I doubt that I will find it
there) is a mainline Linux kernel that runs on our Odroid N2 (SoC:
Amlogic S922X) and where perf stat produces results.
I doubt that I
will find such a kernel in BSA or SBSA. By contrast, that's something
that our complete arsenal of machines with the AMD64 architecture
manages just fine. And that's just one thing.
For a more mainstream problem, installing a new kernel on an AMD64 PC
works the same way across the whole platform (well, UEFI introduced
some excitement and problems, but for the earlier machines, and the
ones from after the first years of UEFI, this went smoothly).
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
scott@slp53.sl.home (Scott Lurndal) writes:
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Fri, 16 Feb 2024 23:38:03 +0000, MitchAlsup wrote:
Given the choice of an ARM-based system with some SoC-specific kernel that is only supported for a few years
That's a false choice. See ARM BSA and SBSA.
Ok, I found "ARM Base System Architecture" and "Server Base System Architecture". What I have not found (and I doubt that I will find it there) is a mainline Linux kernel that runs on our Odroid N2 (SoC:
Amlogic S922X) and where perf stat produces results.
Does the Odroid N2 claim compliance with the BSA?
All the major OS vendors participate in the SBSA, and all
work properly on SBSA-compliant ARMv8/v9 systems, provided
drivers for proprietary hardware are available upstream
in the linux tree (something high-end SoC customers usually require).
For a more mainstream problem, installing a new kernel on an AMD64 PC
works the same way across the whole platform (well, UEFI introduced
some excitement and problems, but for the earlier machines, and the
ones from after the first years of UEFI, this went smoothly).
All of our ARMv8 SoC's support either UEFI or uboot, it's up
to the customer to choose which to use based on their
requirements.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Sat, 17 Feb 2024 18:08:36 GMT, Anton Ertl wrote:
One solution would be if MS finally switched to using Linux as the
basis for Windows.
Once they brought a Linux kernel into Windows with WSL2, it seemed inevitable that they would rely on it more and more, until it became a mandatory part of a Windows install.
That's not what I mean. What I mean is to turn Windows into using the
Linux kernel rather than its current VMS-inspired kernel ...
... I'd suggest that there haven't been many successes in
the industry when attempting radical new architectures (Cray aside).
On Sun, 18 Feb 2024 16:16:10 GMT, Scott Lurndal wrote:
... I'd suggest that there haven't been many successes in
the industry when attempting radical new architectures (Cray aside).
Risky ideas are risky ...
After he left CDC, one might say Seymour Cray’s only real success was the Cray-1. Not sure if the Cray-2 made much money, and the 3 and 4 didn’t
even make it into regular production.
On Sun, 18 Feb 2024 11:50:49 GMT, Anton Ertl wrote:
The worrying thing is that a few decades later, these ideas are still so
seductive, and the reasons why OoO+SIMD worked out better are
still so little-known that people still think that EPIC (and their
incarnations IA-64 and Transmeta) are basically good ideas that just had
some marketing mistake ...
The equivalent on the software side would be microkernels--again, there
are those who still think they can be made to work efficiently, in spite
of mounting evidence to the contrary.
Also, SIMD, while very fashionable nowadays, with its combinatorial
explosion in the number of added instructions, does tend to make a mockery
of the “R” in “RISC”. That’s why RISC-V is resurrecting the old Cray-style
long vectors instead.
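A rough C model of that difference, for illustration only: SIMD_WIDTH stands in for a width baked into the ISA (say, four floats per SSE register), and set_vl() is a stand-in for hardware that reports how many elements it can handle per trip, in the spirit of RISC-V's vsetvli; neither name is a real intrinsic.

#include <stdio.h>
#include <stddef.h>

/* Fixed-width SIMD style: the width is part of the instruction set
   (and of the compiled binary), so every wider generation means a new
   batch of instructions, and a scalar tail loop handles the leftovers. */
#define SIMD_WIDTH 4

static void add_simd(float *a, const float *b, size_t n)
{
    size_t i = 0;
    for (; i + SIMD_WIDTH <= n; i += SIMD_WIDTH)
        for (size_t j = 0; j < SIMD_WIDTH; j++)  /* one fixed-width op */
            a[i + j] += b[i + j];
    for (; i < n; i++)                           /* scalar tail */
        a[i] += b[i];
}

/* Cray/RVV style: the code asks the hardware how many elements it may
   process this trip, so the same binary runs unchanged on narrow or
   wide implementations, and there is no tail loop. */
static size_t set_vl(size_t remaining, size_t vlmax)
{
    return remaining < vlmax ? remaining : vlmax;
}

static void add_vector(float *a, const float *b, size_t n, size_t vlmax)
{
    size_t i = 0;
    while (i < n) {
        size_t vl = set_vl(n - i, vlmax);        /* hardware-chosen length */
        for (size_t j = 0; j < vl; j++)          /* one vector op per trip */
            a[i + j] += b[i + j];
        i += vl;
    }
}

int main(void)
{
    float a[10] = {0}, b[10];
    for (int i = 0; i < 10; i++)
        b[i] = (float)i;
    add_simd(a, b, 10);
    add_vector(a, b, 10, 8);       /* pretend the hardware has 8-element vectors */
    printf("a[9] = %g\n", a[9]);   /* 9 + 9 = 18 */
    return 0;
}

The contrast is the point: each time the fixed width grows (MMX, SSE, AVX, AVX-512) the instruction count multiplies, while the vector-length-agnostic loop never has to change.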
Seymour's talent was in packaging not in computer architecture.
... things like GUI can be handled with IPC calls.
Granted, none of the mainstream OS's run the GUI directly in the kernel,
so this may not be a factor.
jgd@cix.co.uk (John Dallman) writes:
And they didn't start publicising it until 1998, IIRC.
Well, according to ZDNet <https://web.archive.org/web/20080209211056/http://news.zdnet.com/2100-9584-5984747.html>,
Intel and HP announced their collaboration in 1994, and revealed more
details in 1997. I find postings about IA64 in my archive from 1997,
but I remember reading stuff about it with no details for several
years. I posted my short review of the architecture in October 1999 <https://www.complang.tuwien.ac.at/anton/ia-64-1999.txt>, so by that
time the architecture specification had already been published.
If they thought it
wasn't going to work, they could have quietly cancelled it.
After the 1994 announcement, some people might have asked at one point
what became of the project, but yes.
It seems to have been a result of groupthink that got established, rather than face-saving.
Yes.
It was moderately convincing at the time; it took me a
fair while to abandon the intuitive reaction that it ought to be very
fast, and accept that measurements were the only true knowledge.
I certainly thought at the time that they were on the right track.
Everything we knew about the success of RISC in the 1980s and about
the difficulties of getting more instruction-level parallelism in the
early 1990s suggested that EPIC would be a good idea.
The worrying thing is that a few decades later, these ideas are still
so seductive, and the reasons why OoO+SIMD worked out better
are still so little-known that people still think that EPIC (and their incarnations IA-64 and Transmeta) are basically good ideas that just
had some marketing mistake (e.g., in this thread), or just would need
a few more good ideas (e.g., the Mill with its belt rather than
rotating register files).
- anton
We came to the opposite conclusion.
On 2/18/2024 8:13 PM, Lawrence D'Oliveiro wrote:
On Sun, 18 Feb 2024 18:13:42 -0600, BGB wrote:
... things like GUI can be handled with IPC calls.
Which is how X11 and Wayland do it. The bottleneck is in the user response time, so the overhead of message-passing calls is insignificant.
IIRC, X11 worked by passing message buffers over Unix sockets (with
Xlib as a wrapper interface over the socket-level interface).
On Sun, 18 Feb 2024 21:41:55 +0000, MitchAlsup1 wrote:
Seymour's talent was in packaging not in computer architecture.
Bit unlikely, considering his supers didn’t use any very fancy packaging techniques at all.
On Sun, 18 Feb 2024 08:59 +0000 (GMT Standard Time), John Dallman wrote:
And they didn't start publicising it until 1998, IIRC. If they thought
it wasn't going to work, they could have quietly cancelled it.
I certainly heard about it before then. As I understood it, things went
quiet because it was taking longer than expected to make it all work. But there were obviously those sufficiently high up in the management chain
who were determined not to be proven wrong. Otherwise, it could have been cancelled.
On 2/18/2024 8:13 PM, Lawrence D'Oliveiro wrote:
On Sun, 18 Feb 2024 18:13:42 -0600, BGB wrote:
... things like GUI can be handled with IPC calls.
Which is how X11 and Wayland do it. The bottleneck is in the user
response time, so the overhead of message-passing calls is
insignificant.
The shared memory extension allows clients to directly access buffers in
the server.
https://www.x.org/releases/X11R7.7/doc/xextproto/shm.html
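For illustration, here is a minimal C sketch of an MIT-SHM client, assuming Xlib and libXext are installed and the server supports the extension; error handling and the XShmQueryExtension() check are omitted, and it would be built with -lX11 -lXext.

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     640, 480, 0, 0, BlackPixel(dpy, scr));
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XMapWindow(dpy, win);

    /* Create an XImage whose pixel buffer lives in a System V shared
       memory segment, visible to both this client and the X server. */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, 640, 480);
    shminfo.shmid = shmget(IPC_PRIVATE,
                           (size_t)img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);     /* the server maps the same segment */

    /* The client draws pixels straight into img->data; only the small
       put-image request crosses the socket, never the pixels themselves. */
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, 640, 480, False);
    XSync(dpy, False);

    /* Teardown: detach in the server, destroy the image, then release
       the shared segment. */
    XShmDetach(dpy, &shminfo);
    XDestroyImage(img);
    shmdt(shminfo.shmaddr);
    shmctl(shminfo.shmid, IPC_RMID, NULL);
    XCloseDisplay(dpy);
    return 0;
}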
According to Lawrence D'Oliveiro <ldo@nz.invalid>:
On Sun, 18 Feb 2024 21:41:55 +0000, MitchAlsup1 wrote:
Seymour's talent was in packaging not in computer architecture.
Bit unlikely, considering his supers didn’t use any very fancy packaging techniques at all.
Huh? Maybe not for individual chips, but the wiring and cooling and
overall physical design were famous.
I had the first 200 MHz Pentium Pro out of the Micron factory.
It ran DOOM at 73 fps and Quake at 45+ fps both full screen.
I would not call that a joke.
It was <essentially> the death knell for RISC workstations.
In article <79833d0dcdebb9e173c5cd2c6029e851@www.novabbs.org>, mitchalsup@aol.com (MitchAlsup1) wrote:
I had the first 200 MHz Pentium Pro out of the Micron factory...
It was <essentially> the death knell for RISC workstations.
Yup. They struggled on for some time, but they never got near the price-performance.
On Mon, 26 Feb 2024 20:48:50 +0100, Jean-Marc Bourguet wrote:
64-bit support was what kept RISC workstations alive for a time.
Still, nowadays it seems a lot of Windows software is still 32-bit.
Whereas on a 64-bit Linux workstation, everything is 64-bit.
Microsoft are gradually retiring 32-bit x86 versions of their operating system, but they won't take away the ability to run 32-bit applications
in the foreseeable future, because there are still plenty around.