A uniquely difficult architecture like x86 increases the barrier
to competition both from patents and organizational knowledge and
tools. While MIPS managed to suppress clones with its patent on
unaligned loads (please correct any historical inaccuracy), Intel
was better positioned to discourage software-compatible
competition — and not just financially.
I suspect that the bad reputation of x86 among computer architects
— reinforced by the biases of Computer Architecture: A Quantitative
Approach, which substantially informs computer architecture education
— also helped discourage would-be competitors.
The binary lock-in advantage of x86 makes architectural changes
more challenging. While something like the 8080-to-8086 "assembly
compatible" transition might have been practical and beneficial in
the long term from an engineering perspective, from a business
perspective such a transition would have validated binary
translation, reducing the competitive barriers.
(Itanium showed that mediocre hardware translation between x86 and
a rather incompatible architecture (and microarchitecture) would
have been problematic even if native Itanium code had competitive
performance.)
On the other hand, ARM designed a 64-bit
architecture that is only moderately compatible with the 32-bit
architecture — flags being one example of compatibility.
MIPS (even with its delayed branches, lack of variable length
encoding, etc.) would probably be a better architecture in 2023
than x86 was around 2010.
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On the other hand, ARM designed a 64-bit
architecture that is only moderately compatible with the 32-bit
architecture — flags being one example of compatibility.
There is no compatibility at the ISA level. Using flags is a
similarity, not compatibility.
Anton Ertl wrote:
There is no compatibility at the ISA level. Using flags is a
similarity, not compatibility.
I would say it's compatibility because it allows A64 to emulate
A32 functional behavior with minimal overhead.
That could keep your customers from fleeing to other architectures.
EricP <ThatWouldBeTelling@thevillage.com> writes:
Anton Ertl wrote:
There is no compatibility at the ISA level. Using flags is a
similarity, not compatibility.
I would say it's compatibility because it allows A64 to emulate
A32 functional behavior with minimal overhead.
Emulation of A32 has not been relevant for quite a number of years,
because all cores that understood A64 also understood A32/T32. Of
course, implementing those cores was simplified by having the same
flags in the same order, but if there had been good reason, they could
just as well have built a data path that produces both kinds of flags
(or, if they had decided to forego flags on A64, that implemented
flags just for A32/T32).
That could keep your customers from fleeing to other architectures.
They kept customers by providing cores with both A64 and A32/T32.
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
Emulation of A32 has not been relevant for quite a number of years,
because all cores that understood A64 also understood A32/T32.
All cores from ARM did. Cavium's cores were A64 only. The
latest Neoverse cores are A64 only. Armv9.x deprecates A32/T32
completely.
But nobody really used A32/T32 on ARMv8 cores. It wasn't
customer demand that led to implementation on early ARM designs,
but rather the expectation of customer demand (ultimately,
non-existent).
But nobody really used A32/T32 on ARMv8 cores. It wasn't
customer demand that led to implementation on early ARM designs,
but rather the expectation of customer demand (ultimately, non-existent).
FWIW, I'm running Debian's armhf port (tho on top of an AArch64 kernel)
on my only ARMv8 machine. For machines with small enough RAM, A32/T32
still makes sense.
But nobody really used A32/T32 on ARMv8 cores. It wasn't
customer demand that led to implementation on early ARM designs,
but rather the expectation of customer demand (ultimately, non-existent).
ARMv7 cores are suitable for those machines. And less expensive.
ARMv7 cores are suitable for those machines. And less expensive.
Maybe that's relevant for those designing the SoC, but for people like
me, machines with ARMv7 cores tend to be significantly less powerful,
typically in terms of number of CPUs, speed of each CPU, speed of
available IOs, etc...
Virtually all SBCs brought to market in the last 5 years use ARMv8 CPUs
rather than ARMv7 ones. Yet the majority of them still have ≤4GB of RAM.
Given that A64, A32 and (for the most part) T32 have the same
32-bit instruction footprint, I don't see RAM size as a determinant
in this case.
On 12/4/2023 11:58 AM, Stefan Monnier wrote:
ARMv7 cores are suitable for those machines. And less expensive.
Maybe that's relevant for those designing the SoC, but for people like
me, machines with ARMv7 cores tend to be significantly less powerful,
typically in terms of number of CPUs, speed of each CPU, speed of
available IOs, etc...
Virtually all SBCs brought to market in the last 5 years use ARMv8 CPUs
rather than ARMv7 ones. Yet the majority of them still have ≤4GB of RAM.
Yeah, RAM isn't free...
And, as with other things, the endless march of "bigger and faster"
seems to have slowed down.
As for 4GB, probably the majority of programs would be happy enough with
a 4GB limit, so in this sense, 32-bit almost still makes sense.
Though, possibly slightly more useful is to use a 64-bit ISA, but using
a 32-bit virtual address space and pointers. Then one can potentially
save some memory, while still having the advantages of being able to
work efficiently with larger data.
In practice, though, the savings were small enough to be like, "meh
whatever, will just go for 64-bit pointers even if they are probably
unnecessary".
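As a rough illustration (my sketch, not from the original post; the node
type is made up), here is how much a pointer-heavy structure shrinks when
pointers are 32-bit on a 64-bit ISA, as with the Linux x32 or AArch64
ILP32 ABIs:

    /* Hypothetical node type: two pointers plus 64-bit data. */
    #include <stdio.h>
    #include <stdint.h>

    struct node {
        struct node *left, *right;  /* 16 bytes under LP64, 8 under ILP32 */
        int64_t      key;           /* 64-bit data either way */
    };

    int main(void) {
        /* LP64: 8+8+8 = 24 bytes; ILP32-on-64: 4+4+8 = 16 bytes. */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

The 64-bit registers and arithmetic remain available; only addresses are
narrowed.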
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
EricP <ThatWouldBeTelling@thevillage.com> writes:
Anton Ertl wrote:
There is no compatibility at the ISA level. Using flags is a
similarity, not compatibility.
I would say it's compatibility because it allows A64 to emulate
A32 functional behavior with minimal overhead.
Emulation of A32 has not been relevant for quite a number of years,
because all cores that understood A64 also understood A32/T32.
All cores from ARM did. Cavium's cores were A64 only.
The only flags in common are NZCV; Armv7/A32 adds Q (indicating
saturation), the state bits for the IT instruction, and the GE flags
for the parallel instructions.
There's really not much in common between A64 and A32/T32, other than
the first 16 registers of the register file.
They kept customers by providing cores with both A64 and A32/T32.
But nobody really used A32/T32 on ARMv8 cores. It wasn't customer
demand that led to implementation on early ARM designs, but rather
the expectation of customer demand (ultimately, non-existent).
scott@slp53.sl.home (Scott Lurndal) writes:
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
EricP <ThatWouldBeTelling@thevillage.com> writes:
Anton Ertl wrote:
There is no compatibility at the ISA level. Using flags is a
similarity, not compatibility.
I would say it's compatibility because it allows A64 to emulate
A32 functional behavior with minimal overhead.
Emulation of A32 has not been relevant for quite a number of years,
because all cores that understood A64 also understood A32/T32.
All cores from ARM did. Cavium's cores were A64 only.
Given that ARM designed A64 primarily for their own plans, and their
plans were to deliver cores with A64 and A32/T32 (and their customers
had the same option), there was no reason for them to design A64 for
easy A32/T32 emulation.
My guess is that Cavium expected their customers not to need A32/T32
rather than running an emulator.
The only flags in common are NZCV; Armv7/A32 adds Q (indicating
saturation), the state bits for the IT instruction, and the GE flags
for the parallel instructions.
That's interesting. NZCV seem to be the most relevant for binary
translation, though. The IT instruction should not be hard to
translate without reifying these state bits. I don't know anything
about the parallel instructions.
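(A rough aside of mine, not from the thread: when the host does not
expose matching flags, a translator or emulator has to materialize NZCV
in software along these lines; the helper below is hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { unsigned n, z, c, v; } Flags;

    /* Compute A32-style NZCV for a 32-bit ADDS entirely in software. */
    static Flags adds32_flags(uint32_t a, uint32_t b) {
        uint32_t r = a + b;
        Flags f;
        f.n = r >> 31;                       /* N: sign bit of result */
        f.z = (r == 0);                      /* Z: result is zero */
        f.c = (r < a);                       /* C: unsigned carry out */
        f.v = (~(a ^ b) & (a ^ r)) >> 31;    /* V: signed overflow */
        return f;
    }

    int main(void) {
        Flags f = adds32_flags(0x7fffffff, 1);  /* overflows into the sign bit */
        printf("N=%u Z=%u C=%u V=%u\n", f.n, f.z, f.c, f.v);  /* N=1 Z=0 C=0 V=1 */
        return 0;
    }

With A64 keeping the same four flags, none of that bookkeeping is
needed; a translated ADDS can simply set the host flags directly.)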
There's really not much in common between A64 and A32/T32, other than
the first 16 registers of the register file.
In what way are the first 16 registers common? AFAIK A32/T32 has the
PC as one of those registers, A64 doesn't.
On 12/3/23 10:01 AM, Anton Ertl wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
A uniquely difficult architecture like x86 increases the barrier
to competition both from patents and organizational knowledge and
tools. While MIPS managed to suppress clones with its patent on
unaligned loads (please correct any historical inaccuracy), Intel
was better positioned to discourage software-compatible
competition — and not just financially.
Really? There is software-compatible competition to Intel. Not so
much for MIPS (maybe Loongson).
There is also less economic incentive to seek binary compatibility
with MIPS. Even when MIPS was used by multiple UNIX system
vendors, binaries would not be compatible across UNIXes. Targeting
workstations also influenced the economics of cloning.
That was noticed by Motorola when developing the 88100. They
sponsored a binary compatibility standard (BCS) and an object
compatibility standard (OCS) in conjunction with their customers
to provide standard portable binaries across operating
systems.
In article <P0mdN.7690$83n7.6186@fx18.iad>, scott@slp53.sl.home (Scott
Lurndal) wrote:
That was noticed by Motorola when developing the 88100. They
sponsored a binary compatibility standard (BCS) and an object
compatibility standard (OCS) in conjunction with their customers
to provide standard portable binaries across operating
systems.
Doing that for the file formats and calling standard is eminently
practical, but how was it managed for library APIs? Were there baseline
standards for libc, libm, libpthread and so on, which vendors could
extend, or were those things fully standardised?
On 12/10/2023 11:56 AM, John Dallman wrote:
In article <P0mdN.7690$83n7.6186@fx18.iad>, scott@slp53.sl.home (Scott
Lurndal) wrote:
That was noticed by Motorola when developing the 88100. They
sponsored a binary compatibility standard (BCS) and an object
compatibility standard (OCS) in conjunction with their customers
to provide standard portable binaries across operating
systems.
Doing that for the file formats and calling standard is eminently
practical, but how was it managed for library APIs? Were there baseline
standards for libc, libm, libpthread and so on, which vendors could
extend, or were those things fully standardised?
Seems like, yeah, to have any hope of binary compatibility, one also
needs to standardize on either the libraries or the specific syscalls
and syscall mechanism (like how it was in the MS-DOS era).
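(A rough illustration of mine, not from the post: the same "write" can be
reached through the C library ABI or through the raw Linux syscall ABI,
and a portable-binary standard has to pin down one of those layers, along
with the calling convention and object format.

    #include <unistd.h>
    #include <string.h>
    #include <sys/syscall.h>

    int main(void) {
        const char *msg = "hello\n";
        /* Library ABI: the binary depends on libc's exported write(). */
        ssize_t a = write(STDOUT_FILENO, msg, strlen(msg));
        /* Kernel ABI: the binary depends on the syscall number and trap mechanism. */
        long b = syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
        (void)a; (void)b;
        return 0;
    }

Standardize the former and you need a common libc ABI; standardize the
latter and you need a common kernel ABI, which is roughly what the MS-DOS
INT 21h convention provided.)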
But, hoping for cross-platform seems like a bit of an ask when modern
OS's, like Linux, are hard-pressed to have binary compatibility between different versions or different variants of the same OS on the same
hardware.
Also annoys me that most FOSS projects are solely focused on
implementation, rather than on specifying things well enough to
potentially allow for multiple implementations to exist.
Say, people put all their effort trying to make "the one true whatever", rather than specifying things well enough to allow for re-implementation as-needed.
Or, like the annoyance of the whole Linux software stack, where one
seemingly can't really build or reuse any single part of it without effectively bringing over the entire GNU ecosystem in the process.
On 12/10/2023 3:54 PM, MitchAlsup wrote:
This is why your fresh-out-of-college ISA architect is ill-equipped to
create an ISA that stands the test of time.
By the time an architect has been exposed to enough of the whole system
to architect a small block with it, it is past time to retire. He needs
{
    ISA, compilers, Linkers,
    Environments{ elementary functions, dynamic linking, memory management,
                  signals, }
    interrupts, exceptions, privilege, priority, supervisor, hypervisor,
    Atomicity,
    cache coherence protocols,
    cache units,
    function units,
    pipelining, reservation stations, scoreboards,
    order{ processor, memory, interconnect, interrupt, }
    memory{ strong, sequential, total store order, causal,
            weak, relaxed } consistency,
    FP, FDIV&SQRT,
    PCIe{ bridges, devices, timers, counters, IO/MMUs, }
    aliasing, disambiguation, logic design, Interconnect design,
    Block design,
    Verification {an entire realm in its own right},
    a smattering of layout,
    a smattering of tapeout,
    a smattering of testing,
    more than a smattering of management, budgeting,
    ...
} {{and other annoyances}}
in order to properly design an ISA !! with a reasonable chance of
long-term success.
Yeah, dunno...
I was mostly doing stuff for my own reasons.
I don't think I have done too horribly for my first attempt.
But, my project isn't terribly useful if all it can do is run custom
ports of Doom and similar.
On 12/10/23 4:54 PM, MitchAlsup wrote:
BGB wrote: [snip]
Say, people put all their effort trying to make "the one true
whatever", rather than specifying things well enough to allow
for re-implementation as-needed.
This is why your fresh-out-of-college ISA architect is ill-equipped
to create an ISA that stands the test of time.
[Necromancy]
I agree. Technical knowledge and intuition are also not the only
factors. Being able to listen (not only to data but to others'
intuitions) and being able to make decisions (not only self-
confidence but also a willingness and ability to accept the
consequences of mistaken judgments) — these abilities are
important and seem to require time/experience to develop.
By the time an architect has been exposed to enough of the whole system
to architect a small block with it, it is past time to retire. He needs
{
    ISA, compilers, Linkers,
    Environments{ elementary functions, dynamic linking, memory management,
                  signals, }
    interrupts, exceptions, privilege, priority, supervisor, hypervisor,
    Atomicity,
    cache coherence protocols,
    cache units,
    function units,
    pipelining, reservation stations, scoreboards,
    order{ processor, memory, interconnect, interrupt, }
    memory{ strong, sequential, total store order, causal,
            weak, relaxed } consistency,
    FP, FDIV&SQRT,
    PCIe{ bridges, devices, timers, counters, IO/MMUs, }
    aliasing, disambiguation, logic design, Interconnect design,
    Block design,
    Verification {an entire realm in its own right},
    a smattering of layout,
    a smattering of tapeout,
    a smattering of testing,
    more than a smattering of management, budgeting,
    ...
} {{and other annoyances}}
in order to properly design an ISA !! with a reasonable chance of
long-term success.
I disagree that all of this needs to be in a single head.
While intra-brain communication is much faster than *inter*-brain
communication, I suspect a team of less broad experts could
"properly design an ISA" perhaps taking only three times as long.
(The result might be a little less elegant as well. Even with
trust, communication would be filtered by not wanting to bother
another with "useless" information.)
One advantage of a team over an individual is that other members
of the team can make discordant statements (whereas data comes
from what one chooses to test).
On 1/21/24 4:33 PM, MitchAlsup1 wrote:
Paul A. Clayton wrote: [snip list of areas of familiarity needed for ISA design]
On 12/10/23 4:54 PM, MitchAlsup wrote:
I disagree that all of this needs to be in a single head.
Not in their head--but exposed to enough of it not to make some
sort of serious mistake with respect to most of the items on the
list.
I vaguely recall a general principle that an expert user of an
interface should understand both sides of the interface, whereas
an expert interface designer should understand at least one layer
beyond on both sides. (A quick web search did not find any such
reference, but I suspect something like that has been stated.)
I suspect "management" would surprise some, but understanding
how people interact and how to get the best from people and
understanding the time and money economics of a large project are
important — even for a non-commercial effort. Economics as a
study of how resource constraints impact systems and vice versa
is presumably useful even in hardware design since such involves
constrained resources.
Understanding process or principles is also a distinction
commonly made between engineering and science.
While intra-brain communication is much faster than *inter*-brain
communication, I suspect a team of less broad experts could
"properly design an ISA" perhaps taking only three times as long.
Here you are equating ISA with architecture.
In the distant past I suggested that prior to designing an ISA, one
should have to write a code generator for an existing architecture.
Now, I am expanding my recommendation that they also be exposed to
the "rest of the disease" of computer architecture. {see list above}
Yet the list is so broad that I fear your other conclusion would
apply: by the time one person has enough exposure to "the disease"
the person is (considered) too old to work in the field.
Distributing the expertise might improve the odds.
I think you, Mitch, are exceptional not just in the depth of
your knowledge but also in its breadth. I suspect most people — even
highly intellectual people, people who love ideas and learning —
tend to focus on a single subject in their professional life.
I vaguely recall a general principle that an expert user of an
interface should understand both sides of the interface, whereas
an expert interface designer should understand at least one layer
beyond on both sides.
(The result might be a little less elegant as well. Even with
trust, communication would be filtered by not wanting to bother
another with "useless" information.)
One advantage of a team over an individual is that other members
of the team can make discordant statements (whereas data comes
from what one chooses to test).
Nor am I suggesting a single individual is better than a well-oiled
team... but that the totality of the team be exposed to "just
about everything in the above list".
So not all hope is lost?☺ (Sadly good teams seem to be rare.)