The tribe of x86 architectures didn't originate as an Intel design. The
8008 ISA originated at Datapoint, and grew through the 8080 and 8085.
Intel recognised their limitations, and decided to make something better,
but the iAPX 432 took time to mature and the 8086 was designed as an
extended 8080 to keep the company going until the 432 succeeded.
The 432 was a total failure, but the x86 line kept the company going and growing. Then they came up with the i960, which had some success as a high-end embedded processor, but was cancelled when Intel acquired rights to DEC's StrongARM cores.
The i860 was a pretty comprehensive failure, but the x86 line made them
into a behemoth.
Then they decided to phase that out and do Itanium. It was less of a failure than the 432 or i860, but they had to adopt AMD's x86-64 ISA to avoid shrinking themselves into a subsidiary of HP.
It seems to me that IA-64 was a bigger failure: More money invested,
and more money lost (probably even relative to the size of the company
at the time).
- anton
On Sat, 14 Sep 2024 07:29:02 GMT
anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:
It seems to me that IA-64 was a bigger failure: More money invested,
and more money lost (probably even relative to the size of the company
at the time).
- anton
But more money made, too.
I'd suppose, in its later days, when all ambitions evaporated, Itanium
became a decent cache cow for Intel. Not spectacular, of course, just
decent.
i860 didn't quite reach the state of cache cow. And i432 didn't reach anything.
... cache cow ...
On Sun, 15 Sep 2024 00:06:39 +0300, Michael S wrote:
... cache cow ...
Freudian slip? ;)
On Sat, 14 Sep 2024 21:06:39 +0000, Michael S wrote:
On Sat, 14 Sep 2024 07:29:02 GMT
anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:
It seems to me that IA-64 was a bigger failure: More money
invested, and more money lost (probably even relative to the size
of the company at the time).
- anton
But more money made, too.
I'd suppose, in its later days, when all ambitions evaporated,
Itanium became a decent cache cow for Intel. Not spectacular, of
course, just decent.
I do not believe that the sales revenue even met the engineering and manufacturing costs.
On Sat, 14 Sep 2024 22:49:21 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
I do not believe that the sales revenue even met the engineering and
manufacturing costs.
ASP was certainly many times higher than manufacturing cost, especially after migration to 90nm in 2006.
Engineering cost was huge up until 2010, but significant part of what
was spent in 2005-2010 (development of QPI) was reused by Xeons.
In 2010-2012 engineering cost was probably quite moderate.
From 2013 to EOL in 2022 engineering cost was very low.
So, even if the Itanium enterprise as a whole lost a lot of money, its
last 12-13 years taken in isolation were likely quite profitable.
On Sun, 15 Sep 2024 8:22:16 +0000, Michael S wrote:
On Sat, 14 Sep 2024 22:49:21 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
I do not believe that the sales revenue even met the engineering
and manufacturing costs.
ASP was certainly many times higher than manufacturing cost, especially
after migration to 90nm in 2006.
Engineering cost was huge up until 2010, but significant part of
what was spent in 2005-2010 (development of QPI) was reused by
Xeons.
Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.
Did they sell $1B of these things ??
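The $0.8B figure above follows directly from the stated assumptions (the headcount, duration, and loaded cost per engineer are the poster's estimates, not published numbers); a quick sketch:

```python
# Rough check of the engineering-cost estimate above.
# All inputs are the poster's assumptions, not published Intel figures.
engineers = 200
years = 20
loaded_cost_per_engineer = 200_000  # USD/year: ~half salary, half SW+HW+overhead

total = engineers * years * loaded_cost_per_engineer
print(f"${total / 1e9:.1f}B")  # -> $0.8B
```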
In 2010-2012 engineering cost was probably quite moderate.
From 2013 to EOL in 2022 engineering cost was very low.
So, even if the Itanium enterprise as a whole lost a lot of money, its
last 12-13 years taken in isolation were likely quite profitable.
On Mon, 16 Sep 2024 23:48:56 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.
Why would they need 200 engineers before 1998?
Or after 2010?
Why would they need more than 2-3 engineers after 2012?
Did they sell $1B of these things ??
I don't know, but would think that the answer is yes.
In the best years (2007-2008) HP sold approximately 75K Itanium boxen
per year. Assuming an average of 3 CPUs per box and 3.5K USD per CPU
that gives 0.79 B/y. For the rest of IPF life they were selling
significantly less, but still selling something.
And there were other vendors beyond HP, although nearly all of them
jumped ship before 2008.
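Michael S's peak-year revenue estimate can be reproduced the same way (box count, CPUs per box, and average selling price are all his stated assumptions):

```python
# Rough check of the peak-year (2007-2008) HP Itanium revenue estimate above.
# Box count, CPUs/box, and ASP are the poster's assumptions.
boxes_per_year = 75_000
cpus_per_box = 3
asp_usd = 3_500  # assumed average selling price per CPU

revenue = boxes_per_year * cpus_per_box * asp_usd
print(f"${revenue / 1e9:.2f}B/y")  # -> $0.79B/y
```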
On Tue, 17 Sep 2024 7:57:35 +0000, Michael S wrote:
On Mon, 16 Sep 2024 23:48:56 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.
Why would they need 200 engineers before 1998?
Or after 2010?
Why would they need more than 2-3 engineers after 2012?
A friend of mine worked on Itanic in Longmont, CO and related the
size of the team. He worked there from about 1995-2019.
And my numbers did not include the software engineers on the project.
Did they sell $1B of these things ??
I don't know, but would think that the answer is yes.
In the best years (2007-2008) HP sold approximately 75K Itanium
boxen per year. Assuming an average of 3 CPUs per box and 3.5K USD
per CPU that gives 0.79 B/y. For the rest of IPF life they were
selling significantly less, but still selling something.
And there were other vendors beyond HP, although nearly all of them
jumped ship before 2008.
To make "real" money MFG costs have to be less than sales price.
Somebody has to "Pay for * the FAB".
And it always bothered me that companies spend $1B+ to make a
FAB that produces $0.50 parts that go in $1.00 packages made
in a factory costing $50M, with $0.25 test costs.
(*) that part of the FAB capacity they occupy.
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has that
record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies
more renowned for marketing prowess than actual technical brilliance.
And now that that marketing prowess is fading somewhat, both companies
are suffering a bit from a marketplace that is changing faster than
they can cope with.
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has
that record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies
more renowned for marketing prowess than actual technical brilliance.
And now that that marketing prowess is fading somewhat, both
companies are suffering a bit from a marketplace that is changing
faster than they can cope.
Not many computer companies survive three failed architectures: has that record been beaten?
There are few things Intel would wish more than to "suffer"
financially like Microsoft.
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have access
to the sheer quantity of RAM that is available to the CPU. And motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.
... all the CPU has to
do is program the I/O MMU to allow them to do their own thing, and deal
with the keyboard and mouse activities.
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has that
record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies more renowned for marketing prowess than actual technical brilliance.
Just last night, I was in a conversation with someone trying to start up
a new company that wants to compete in the "server market". Direct
quote;
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
Who here thinks that CPUs have become the CDC 6600 PPs for the GPUs and Inference Engines??
On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has
that record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies
more renowned for marketing prowess than actual technical brilliance.
For some years, Intel was known for its prowess in FAB technology. After
all, they managed to make a "difficult" architecture outperform CPUs
from better architectures.
Some time ago, they seem to have lost that FAB leadership, particularly
to TSMC.
Just last night I talked to Jens Palsberg who works on quantum
computing.
On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:
There are few things Intel would wish more than to "suffer"
financially like Microsoft.
It is true that Microsoft is not (yet) losing money, but still the
revenues from its Windows cash cow cannot be what they used to be, if you look at the declining level of investment Microsoft is putting back into
its flagship OS.
Its game division is also undergoing a bit of an upheaval at the moment.
Its own games are moving away from being exclusives to its own console platform.
And look at its ongoing unsuccessful attempts to port Windows to the ARM architecture.
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.
Was it upgradeable? Or was it soldered in?
... all the CPU has to
do is program the I/O MMU to allow them to do their own thing, and deal
with the keyboard and mouse activities.
That’s not how interactive timesharing worked in the old days, and it certainly won’t be sufficient for interactive work today.
You *did* say “servers” though, didn’t you?
On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.
Was it upgradeable? Or was it soldered in?
Soldered in.
... all the CPU has to
do is program the I/O MMU to allow them to do their own thing, and deal
with the keyboard and mouse activities.
That’s not how interactive timesharing worked in the old days, and it
certainly won’t be sufficient for interactive work today.
All the interaction is keyboard, mouse, display and internet.
The server can be located anywhere in the world.
You *did* say “servers” though, didn’t you?
Yes, server, not something within 10 feet of user.
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have access
to the sheer quantity of RAM that is available to the CPU.
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.
Was it upgradeable? Or was it soldered in?
... all the CPU has to
do is program the I/O MMU to allow them to do their own thing, and deal
with the keyboard and mouse activities.
That’s not how interactive timesharing worked in the old days, and it certainly won’t be sufficient for interactive work today.
On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.
Was it upgradeable? Or was it soldered in?
Soldered in.
On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:
On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has
that record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies
more renowned for marketing prowess than actual technical brilliance.
For some years, Intel was known for its prowess in FAB technology. After
all, they managed to make a "difficult" architecture outperform CPUs
from better architectures.
Sure. By spending 10× what RISC-based competitors were able to.
Some time ago, they seem to have lost that FAB leadership particularly
to TSMC.
And the reason? Those costs kept going up and up, while the profits from sales of x86 chips failed to keep pace.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
And, your lack of knowledge strikes again. Such bespoke CPUs
are more and more common every month, with many currently
in tape-out or late-stage design by several fabless semiconductor
companies.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU.
That's also incorrect. There is nothing preventing them from
accessing huge amounts of RAM when included on-die or via
chiplets. Consider CXL-Cache, which provides high-bandwidth
low-latency access to huge amounts of DRAM. Consider stacked
HBM. Consider not drawing conclusions from insufficient knowledge.
mitchalsup@aol.com (MitchAlsup1) writes:
On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the
CPU. And motherboard-based CPU RAM is upgradeable, as well,
whereas addon cards tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and
2×4-wide.
Was it upgradeable? Or was it soldered in?
Soldered in.
Or in the case of HBM, directly stacked on the processor at the
fab.
On Wed, 18 Sep 2024 15:40:51 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
And, your lack of knowledge strikes again. Such bespoke CPUs
are more and more common every month, with many currently
in tape-out or late-stage design by several fabless semiconductor
companies.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU.
That's also incorrect. There is nothing preventing them from
accessing huge amounts of RAM when included on-die or via
chiplets. Consider CXL-Cache, which provides high-bandwidth
low-latency access to huge amounts of DRAM. Consider stacked
HBM. Consider not drawing conclusions from insufficient knowledge.
Low latency?
I'd think that the latency here is at the very least 5x higher than the
45-60ns figures typical for Intel/AMD/Apple/Qualcomm client CPUs.
And I am afraid that 10x is more common than 5x.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:
On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has
that record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies
more renowned for marketing prowess than actual technical brilliance.
For some years, Intel was known for its prowess in FAB technology. After
all, they managed to make a "difficult" architecture outperform CPUs
from better architectures.
Sure. By spending 10× what RISC-based competitors were able to.
Some time ago, they seem to have lost that FAB leadership, particularly
to TSMC.
And the reason? Those costs kept going up and up, while the profits from
sales of x86 chips failed to keep pace.
No. Intel made several management mis-steps and was too late to the
party. Had nothing to do with "keeping pace".
Scott Lurndal <scott@slp53.sl.home> schrieb:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:
On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:
On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:
Not many computer companies survive three failed architectures: has
that record been beaten?
I think it’s fair to say that both Intel and Microsoft were companies
more renowned for marketing prowess than actual technical brilliance.
For some years, Intel was known for its prowess in FAB technology. After
all, they managed to make a "difficult" architecture outperform CPUs
from better architectures.
Sure. By spending 10× what RISC-based competitors were able to.
Some time ago, they seem to have lost that FAB leadership, particularly
to TSMC.
And the reason? Those costs kept going up and up, while the profits from
sales of x86 chips failed to keep pace.
No. Intel made several management mis-steps and was too late to the party.
Which management missteps did you mean, and for which particular party
were they late?
Had nothing to do with "keeping pace".
Not sure what you mean... could you explain a bit more?
On Wed, 18 Sep 2024 15:50:09 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the
CPU. And motherboard-based CPU RAM is upgradeable, as well,
whereas addon cards tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and
2×4-wide.
Was it upgradeable? Or was it soldered in?
Soldered in.
Or in the case of HBM, directly stacked on the processor at the
fab.
It's not easy to get 256GB via HBM.
To give one example, Fujitsu A64FX got only 32GB.
It was 5 years ago and some progress was made since then, but density
improvements nowadays are sloooooooow.
On 18/09/2024 18:00, Michael S wrote:
On Wed, 18 Sep 2024 15:50:09 GMT
It's not easy to get 256GB via HBM.
To give one example, Fujitsu A64Fx got only 32GB.
It was 5 years ago and some progress was made since then, but density
improvements nowadays are sloooooooow.
<https://www.servethehome.com/micron-hbm3e-12-high-36gb-higher-capacity-ai-accelerators-shipping/>
<https://www.servethehome.com/a-quick-introduction-to-the-nvidia-gh200-aka-grace-hopper-arm/>
It's certainly not /cheap/ to have 256GB (or more) with HBM, but it is
not unrealistic.
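Using the stack capacity from the linked Micron article (HBM3E 12-high stacks at 36GB each), reaching the 256GB figure discussed above takes only a handful of stacks; the arithmetic:

```python
# How 256GB of HBM could be reached with current parts.
# Stack capacity (36GB per HBM3E 12-high stack) is taken from the
# servethehome article linked above; everything else is arithmetic.
import math

stack_gb = 36           # one HBM3E 12-high stack
target_gb = 256
stacks_needed = math.ceil(target_gb / stack_gb)
print(stacks_needed, stacks_needed * stack_gb)  # -> 8 stacks, 288GB total
```

Compare the A64FX example: five years ago it used 8GB stacks, so four stacks gave only 32GB.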
On Wed, 18 Sep 2024 05:40:07 GMT, Anton Ertl wrote:
Just last night I talked to Jens Palsberg who works on quantum
computing.
What kind?
Has he got Shor’s algorithm working yet?
On 18/09/2024 02:42, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:
There are few things Intel would wish more than to "suffer"
financially like Microsoft.
It is true that Microsoft is not (yet) losing money, but still the
revenues from its Windows cash cow cannot be what they used to be, if you
look at the declining level of investment Microsoft is putting back into
its flagship OS.
I think MS has long ago stopped viewing desktop Windows as a cash cow.
But it still gets in a lot of money from server versions, as well as
server software such as MS SQL server. (The client access licences for
these cost far more than Windows desktop ever did.)
Their main cash
cow, I believe, is subscriptions to Office365 and associated software
where they have a near-monopoly for business use. (I expect Azure and everything there also makes money, but it has to compete with other
cloud companies.)
Its game division is also undergoing a bit of an upheaval at the moment.
Its own games are moving away from being exclusives to its own console
platform.
And look at its ongoing unsuccessful attempts to port Windows to the ARM
architecture.
I'd rather not look at that, thanks!
On the other hand, and this is where the deprecation of the CPUs comes
in: the engines consuming the data are bandwidth machines {GPUs and
Inference Engines} which are quite insensitive to latency (they are not
latency-bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly degrade
the overall GPU performance--while it would KILL any typical CPU architecture.
There are lots of free SQL servers now, this has forced Microsoft to
make MS SQL Express free for smaller than enterprise editions.
https://www.microsoft.com/en-gb/download/details.aspx?id=101064
https://josipmisko.com/posts/sql-express-limitations#
Those limits dwarf our needs.
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
mitchalsup@aol.com (MitchAlsup1) writes:
On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU.
And motherboard-based CPU RAM is upgradeable, as well, whereas
addon cards tend not to offer this option.
He showed a die figure with 256GB of DRAM stacked 8-deep and
2×4-wide.
Was it upgradeable? Or was it soldered in?
Soldered in.
It's not easy to get 256GB via HBM.
To give one example, Fujitsu A64FX got only 32GB.
It was 5 years ago and some progress was made since then, but density
improvements nowadays are sloooooooow.
All the interaction is keyboard, mouse, display and internet.
The server can be located anywhere in the world.
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs come
in, The engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are
not latency bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly degrade
the overall GPU performance--while it would KILL any typical CPU
architecture.
But if it’s supposed to be for “interactive” use, it’s still going to take
those 400 memory-cycle times to return a response.
On Wed, 18 Sep 2024 22:54:33 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs come
in, The engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are
not latency bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly
degrade the overall GPU performance--while it would KILL any typical
CPU architecture.
But if it’s supposed to be for “interactive” use, it’s still going to
take those 400 memory-cycle times to return a response.
That is why you use the CPU for human interactions and bandwidth
engines for the muscle.
On Wed, 18 Sep 2024 17:01:48 +0000, David Brown wrote:
On 18/09/2024 18:00, Michael S wrote:
On Wed, 18 Sep 2024 15:50:09 GMT
It's not easy to get 256GB via HBM.
To give one example, Fujitsu A64Fx got only 32GB.
It was 5 years ago and some progress was made since then, but density
improvements nowadays are sloooooooow.
<https://www.servethehome.com/micron-hbm3e-12-high-36gb-higher-capacity-ai-accelerators-shipping/>
<https://www.servethehome.com/a-quick-introduction-to-the-nvidia-gh200-aka-grace-hopper-arm/>
It's certainly not /cheap/ to have 256GB (or more) with HBM, but it is
not unrealistic.
Consider the cost of the power it takes to feed a rack that consumes
100KW continuously for a year, and don't forget to add in the cooling
costs to remove that 100KW from that rack while computing the cost of
the power.
Using $0.15 /KWh = $170,000 per year per rack (including cooling).
The cost of 256GB of memory fades into insignificance.
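The $170,000 figure above is consistent with the stated inputs once a cooling overhead is folded in (the PUE of ~1.3 below is my assumption; the poster only says "including cooling"):

```python
# Rough check of the ~$170K/year/rack figure above: 100KW continuous
# at $0.15/kWh. The PUE (power-usage-effectiveness) of 1.3 used for
# the cooling overhead is an assumption, not from the post.
rack_kw = 100
hours_per_year = 24 * 365        # 8760
price_per_kwh = 0.15
pue = 1.3                        # assumed overhead for cooling etc.

it_cost = rack_kw * hours_per_year * price_per_kwh   # IT power alone
total = it_cost * pue                                # with cooling
print(f"${it_cost:,.0f} IT power, ${total:,.0f} including cooling")
```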
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs come
in, The engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are
not latency bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly degrade
the overall GPU performance--while it would KILL any typical CPU
architecture.
But if it’s supposed to be for “interactive” use, it’s still going to take
those 400 memory-cycle times to return a response.
David Brown <david.brown@hesbynett.no> wrote:
On 18/09/2024 02:42, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:
There are few things Intel would wish more than to "suffer"
financially like Microsoft.
It is true that Microsoft is not (yet) losing money, but still the
revenues from its Windows cash cow cannot be what they used to be, if you
look at the declining level of investment Microsoft is putting back into
its flagship OS.
I think MS has long ago stopped viewing desktop Windows as a cash cow.
But it still gets in a lot of money from server versions, as well as
server software such as MS SQL server. (The client access licences for
these cost far more than Windows desktop ever did.)
There are lots of free SQL servers now, this has forced Microsoft to make
MS SQL Express free for smaller than enterprise editions.
https://www.microsoft.com/en-gb/download/details.aspx?id=101064
https://josipmisko.com/posts/sql-express-limitations#
Those limits dwarf our needs.
On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs come
in, The engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are
not latency bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly degrade
the overall GPU performance--while it would KILL any typical CPU
architecture.
But if it’s supposed to be for “interactive” use, it’s still going to
take those 400 memory-cycle times to return a response.
In human terms, those 400 memory cycles are completely negligible. For
most purposes, anything less than 100 milliseconds is an instant
response. For high-speed games played by experts, 10 milliseconds is a
good target. For the most demanding tasks, such as making music, 1
millisecond might be required.
For anything interactive, an extra 400 memory cycles latency means
nothing - even if it is relatively slow memory - as long as you can keep
the throughput. Network latency is massively bigger than this extra
memory latency would be.
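To put numbers on "completely negligible": even at a modest memory clock, 400 extra cycles are a vanishing fraction of a 100ms interactive budget (the 1ns/cycle figure below is illustrative, not from the post):

```python
# 400 memory cycles on a human scale. The ~1ns cycle time (~1GHz
# effective memory clock) is an illustrative assumption.
cycles = 400
cycle_ns = 1.0
latency_ms = cycles * cycle_ns / 1e6       # 0.0004 ms
fraction = latency_ms / 100                # share of a 100ms budget
print(f"{latency_ms} ms, {fraction:.0e} of the interactive budget")
```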
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that you can’t get something for nothing.
The promise of an exponential increase in computing power for a linear increase in the number of processing elements sounds very much like “something for nothing” under another name, wouldn’t you say?
Consider the cost of the power it takes to feed a rack that consumes
100KW continuously for a year, and don't forget to add in the cooling costs to
remove that 100KW from that rack while computing the cost of the power.
Using $0.15 /KWh = $170,000 per year per rack (including cooling).
On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that
you can’t get something for nothing.
Stupid argument. Look at the effort and tech it takes to make quantum computers... that is not "nothing".
The promise of an exponential increase in computing power for a linear
increase in the number of processing elements sounds very much like
“something for nothing” under another name, wouldn’t you say?
No, it is exploiting the very non-intuitive nature of quantum
entanglement to create an exponential number of collective states of a
linear number of elements.
David Brown wrote:
On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs come
in, The engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are
not latency bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly degrade
the overall GPU performance--while it would KILL any typical CPU
architecture.
But if it’s supposed to be for “interactive” use, it’s still going to take
those 400 memory-cycle times to return a response.
In human terms, those 400 memory cycles are completely negligible.
For most purposes, anything under 100 milliseconds is an instant
You actually need 20 Hz/50 ms even for joystick/mouse response when you
are not in a hurry. (Was proven by the space station external arm
joystick controller, which was initially specified to operate at 10 Hz,
but that turned out to be far too laggy for the astronauts, so it was
doubled to 20 Hz.)
response. For high-speed games played by experts, 10 milliseconds is
a good target. For the most demanding tasks, such as making music, 1
millisecond might be required.
My cousin Nils has hearing loss after a lifetime spent in studios and
playing music; he can't use the offered hearing aids because they add
3-4 ms of latency. (Something he noticed _immediately_ when first trying a pair.)
Early multiplayer games had to invent all sorts of tricks to try to hide
For anything interactive, an extra 400 memory cycles latency means
nothing - even if it is relatively slow memory - as long as you can
keep the throughput. Network latency is massively bigger than this
extra memory latency would be.
away that latency, and well before that, around 1987 (?) I made a
version of my terminal emulator which could do the same:
I.e. give instant feedback for keystrokes while in reality buffering
them so that I could send out a single packet (over pay-per-packet X.25 networks) when I got a keystroke that I could not handle locally.
This one hack (designed and implemented overnight) saved Hydro and the Oseberg project NOK 2 million per year per remote location.
The only noticeable (to the user) artifact was when they were entering
data into an uppercase-only field: they would see the lowercase local
response until they hit Enter or Tab, then the remote response would
overwrite the field with uppercase instead. Normally I simply checked whether
the new remote data was reducing the offset between the local buffer and
the official terminal view.
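The local-echo-with-buffering idea can be sketched in a few lines (all names here are hypothetical illustrations; the real emulator handled far more cases, including the offset tracking described above):

```python
# Minimal sketch of local echo with deferred transmission: echo plain
# keystrokes instantly, and send a single packet only when a key
# arrives that cannot be handled locally (e.g. Enter or Tab).
LOCAL_KEYS = set("abcdefghijklmnopqrstuvwxyz0123456789 ")

def make_session(send_packet):
    buffer = []
    def on_key(key):
        if key in LOCAL_KEYS:
            buffer.append(key)        # defer transmission...
            return key                # ...but give instant local feedback
        # Key needs the remote host: flush everything in one packet.
        send_packet("".join(buffer) + key)
        buffer.clear()
        return key
    return on_key

sent = []                             # stands in for the X.25 link
on_key = make_session(sent.append)
for k in "ab\t":                      # two local keys, then a Tab
    on_key(k)
print(sent)  # ['ab\t'] -- one packet instead of three
```

On a pay-per-packet network, collapsing a keystroke-per-packet stream into one packet per field is where the savings come from.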
On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that you
can’t get something for nothing.
Stupid argument. Look at the effort and tech it takes to make quantum computers... that is not "nothing".
The promise of an exponential increase in computing power for a linear
increase in the number of processing elements sounds very much like
“something for nothing” under another name, wouldn’t you say?
No, it is exploiting the very non-intuitive nature of quantum
entanglement to create an exponential number of collective states of a
linear number of elements. Medieval arguments about "nothing" vs
"something" don't work there.
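The "exponential states from a linear number of elements" point is what makes classical simulation blow up: a state vector for n entangled qubits needs 2^n complex amplitudes. A quick illustration (sizes are just arithmetic, not a simulator):

```python
# Classical state-vector simulation cost for n entangled qubits:
# 2**n complex amplitudes, here assumed stored as complex128 (16 bytes).
n = 20
amplitudes = 2 ** n
bytes_needed = amplitudes * 16
print(amplitudes, bytes_needed)  # 1048576 amplitudes, 16 MiB
```

At n = 50 the same arithmetic gives ~16 PiB, which is why a linear increase in qubits outruns classical hardware so quickly.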
On 19/09/2024 09:26, Terje Mathisen wrote:
David Brown wrote:
On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs comes
in, the engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are not
latency-bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly degrade
the overall GPU performance--while it would KILL any typical CPU
architecture.
But if it’s supposed to be for “interactive” use, it’s still going to take
those 400 memory-cycle times to return a response.
In human terms, those 400 memory cycles are completely negligible.
For most purposes, anything under 100 milliseconds is an instant
You actually need 20 Hz/50 ms even for joystick/mouse response when
you are not in a hurry. (Was proven by the space station external arm
joystick controller, which was initially specified to operate at 10 Hz,
but that turned out to be far too laggy for the astronauts, so it was
doubled to 20 Hz.)
For that kind of thing, the latency you can tolerate will depend on the physical lag of the system, what you are trying to control, and the experience of the person controlling it. It will therefore lie
somewhere between the "100 ms feels instantaneous" that you see for many purposes, and the speed you need for gaming.
response. For high-speed games played by experts, 10 milliseconds
is a good target. For the most demanding tasks, such as making
music, 1 millisecond might be required.
My cousin Nils has hearing loss after a lifetime spent in studios and
playing music; he can't use the offered hearing aids because they add
3-4 ms of latency. (Something he noticed _immediately_ when
first trying a pair.)
Even a complete amateur can notice time mismatches of 10 ms in a musical context, so for a professional this does not surprise me. I don't know
of any human endeavour that requires lower latency or more precise
timing than music.
Early multiplayer games had to invent all sorts of tricks to try to
For anything interactive, an extra 400 memory cycles latency means
nothing - even if it is relatively slow memory - as long as you can
keep the throughput. Network latency is massively bigger than this
extra memory latency would be.
hide away that latency, and well before that, around 1987 (?) I made a
version of my terminal emulator which could do the same:
I.e. give instant feedback for keystrokes while in reality buffering
them so that I could send out a single packet (over pay-per-packet
X.25 networks) when I got a keystroke that I could not handle locally.
The first modem I used was, I believe, 300 baud and there was a definite
lag between typing and the characters appearing on-screen.
Fortunately, such slow speeds are quite rare these days. The slowest system I have seen in practice, however, had about 150 mbps in one
direction and 70 mbps in the other, half-duplex. (Note that the "m" is lower-case here!)
This one hack (designed and implemented overnight) saved Hydro and the
Oseberg project NOK 2 Mill per year per remote location.
The only noticeable (to the user) artifact was when they were entering
data into an uppercase-only field: they would see the lowercase local
response until they hit Enter or Tab, then the remote response would
overwrite the field with uppercase instead. Normally I simply checked
whether the new remote data was reducing the offset between the local
buffer and the official terminal view.
I presume you called the case-change a feature, rather than an artefact, giving the user confirmation that the data was entered correctly?
On 19/09/2024 09:44, Niklas Holsti wrote:
On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that you
can’t get something for nothing.
Stupid argument. Look at the effort and tech it takes to make quantum
computers... that is not "nothing".
The promise of an exponential increase in computing power for a linear
increase in the number of processing elements sounds very much like
“something for nothing” under another name, wouldn’t you say?
No, it is exploiting the very non-intuitive nature of quantum
entanglement to create an exponential number of collective states of a
linear number of elements. Medieval arguments about "nothing" vs
"something" don't work there.
Quantum computing certainly gives you some tricks that are hard to
replicate with classical computers. (And of course some quantum effects are impossible to replicate classically, but those are not actually computations.)
But it is still ultimately limited in many ways. Landauer's principle about the minimal energy costs of calculations applies equally to
quantum calculations.
The practical limitations for quantum computers are far more
significant. Roughly speaking, when you entangle more states at once,
you need tighter tolerances to maintain coherence, which translates to
lower temperatures, higher energy costs, and lower times to do your calculations. And to be useful, you need large numbers of qubits, which again makes maintaining coherence increasingly difficult.
I'm sure that there will be breakthroughs that improve some of this, but
I am not holding my breath - I don't believe quantum computers will ever
be cost-effective for anything but a few very niche problems. Currently they have only beaten classical computers in tasks that involve simulating some quantum effects. That's a bit like noticing that soap bubble computers are really good at solving 2D minimal energy surface problems.
Remember, the current record for Shor's algorithm is factorising 21 into
3 x 7. Factorising 35 is still beyond current engineering levels.
On Thu, 19 Sep 2024 10:44:24 +0300, Niklas Holsti wrote:
On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that
you can’t get something for nothing.
Stupid argument. Look at the effort and tech it takes to make quantum
computers... that is not "nothing".
Is there some ongoing “Nature’s Rentware” involved?
The promise of an exponential increase in computing power for a linear
increase in the number of processing elements sounds very much like
“something for nothing” under another name, wouldn’t you say?
No, it is exploiting the very non-intuitive nature of quantum
entanglement to create an exponential number of collective states of a
linear number of elements.
That’s called the “many worlds interpretation” of quantum mechanics, and
it is philosophical mumbo-jumbo nonsense.
David Brown wrote:
On 19/09/2024 09:44, Niklas Holsti wrote:
On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that
you
can’t get something for nothing.
Stupid argument. Look at the effort and tech it takes to make quantum
computers... that is not "nothing".
The promise of an exponential increase in computing power for a linear
increase in the number of processing elements sounds very much like
“something for nothing” under another name, wouldn’t you say?
No, it is exploiting the very non-intuitive nature of quantum
entanglement to create an exponential number of collective states of
a linear number of elements. Medieval arguments about "nothing" vs
"something" don't work there.
Quantum computing certainly gives you some tricks that are hard to
replicate with classical computers. (And of course some quantum
effects are impossible to replicate classically, but those are not
actually computations.)
But it is still ultimately limited in many ways. Landauer's principle
about the minimal energy costs of calculations applies equally to
quantum calculations.
The practical limitations for quantum computers are far more
significant. Roughly speaking, when you entangle more states at once,
you need tighter tolerances to maintain coherence, which translates to
lower temperatures, higher energy costs, and lower times to do your
calculations. And to be useful, you need large numbers of qubits,
which again makes maintaining coherence increasingly difficult.
I'm sure that there will be breakthroughs that improve some of this,
but I am not holding my breath - I don't believe quantum computers
will ever be cost-effective for anything but a few very niche
problems. Currently they have only beaten classical computers in tasks
that involve simulating some quantum effects. That's a bit like
noticing that soap bubble computers are really good at solving 2D
minimal energy surface problems.
Remember, the current record for Shor's algorithm is factorising 21
into 3 x 7. Factorising 35 is still beyond current engineering levels.
From my recent reading, it seems like factoring 21 (5 bits) requires at least 5+10=15 bits all staying entangled, plus a number of additional
bits for error correction. I'm guessing you also need some extra bits/redundancy in order to successfully read out the results?
Getting to at the very least 3K entangled bits in order to speed up RSA
1024 decryption will certainly be out of the question for the remainder
of my professional career, and most probably also the rest of my life.
On Thu, 19 Sep 2024 00:29:09 +0000, MitchAlsup1 wrote:
On Wed, 18 Sep 2024 22:54:33 +0000, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
On the other hand, and this is where the deprecation of the CPUs come
in, The engines consuming the data are bandwidth machines {GPUs and
Inference engines} which are quite insensitive to latency (they are
not not latency bound machines like CPUs).
When doing GPUs, a memory access taking 400 cycles would hardly
degrade the overall GPU performance--while it would KILL any typical
CPU architecture.
But if it’s supposed to be for “interactive” use, it’s still going to
take those 400 memory-cycle times to return a response.
That is why you use the CPU for human interactions and bandwidth
engines for the muscle.
But then those bandwidth engines become the interactivity bottleneck,
don’t they?
Unless you use them only for precomputing stuff in some kind of batch mode
for later use, rather than doing processing on-demand.
On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
But if it’s supposed to be for “interactive” use, it’s still going to take
those 400 memory-cycle times to return a response.
In human terms, those 400 memory cycles are completely negligible. For
most purposes, anything under 100 milliseconds is an instant
response. For high-speed games played by experts, 10 milliseconds is a
good target. For the most demanding tasks, such as making music, 1 millisecond might be required.
For anything interactive, an extra 400 memory cycles latency means
nothing - even if it is relatively slow memory - as long as you can keep
the throughput. Network latency is massively bigger than this extra
memory latency would be.
On 19/09/2024 09:26, Terje Mathisen wrote:
David Brown wrote:
Even a complete amateur can notice time mismatches of 10 ms in a musical context, so for a professional this does not surprise me. I don't know
of any human endeavour that requires lower latency or more precise
timing than music.
On 19/09/2024 09:44, Niklas Holsti wrote:
The practical limitations for quantum computers are far more
significant. Roughly speaking, when you entangle more states at once,
you need tighter tolerances to maintain coherence, which translates to
lower temperatures, higher energy costs, and lower times to do your calculations. And to be useful, you need large numbers of qubits, which again makes maintaining coherence increasingly difficult.
On 19/09/2024 12:59, Terje Mathisen wrote:
According to someone on the internet (that ever-reliable source of information), an n-bit integer takes 2n + 2 fully entangled qubits and
448·n³·log(n) gates. For 1024-bit RSA, that's 2050 logical qubits and
about 5×10¹² gates. For the common default size of 2048-bit RSA,
it's 4098 logical qubits and 4.2×10¹³ gates.
Then you need the quantum error correction in addition. I am not at all convinced that I understand the details here or if I am applying them correctly, but I think that for larger systems you need perhaps 1000
physical qubits per logical qubit.
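Plugging the quoted formula into a quick calculation reproduces those figures, assuming the log is base 2 (which appears to be the intended base, since it makes both numbers come out right):

```python
import math

# Resource estimate from the quoted formula: 2n + 2 logical qubits and
# 448 * n^3 * log2(n) gates for factoring an n-bit RSA modulus.
# (Base-2 log is an assumption; the post just says "log".)
def shor_cost(n):
    qubits = 2 * n + 2
    gates = 448 * n**3 * math.log2(n)
    return qubits, gates

print(shor_cost(1024))  # (2050, ~4.8e12 gates)
print(shor_cost(2048))  # (4098, ~4.2e13 gates)
```

Multiplying the logical-qubit counts by the ~1000 physical qubits per logical qubit mentioned above puts even 1024-bit RSA at millions of physical qubits.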
On Thu, 19 Sep 2024 14:15:09 +0000, David Brown wrote:
On 19/09/2024 12:59, Terje Mathisen wrote:
According to someone on the internet (that ever-reliable source of
information), an n-bit integer takes 2n + 2 fully entangled qubits and
448·n³·log(n) gates. For 1024-bit RSA, that's 2050 logical qubits and
about 5×10¹² gates. For the common default size of 2048-bit RSA,
it's 4098 logical qubits and 4.2×10¹³ gates.
Then you need the quantum error correction in addition. I am not at all
convinced that I understand the details here or if I am applying them
correctly, but I think that for larger systems you need perhaps 1000
physical qubits per logical qubit.
I am convinced that quantum computers will eventually be good at
some things that regular computers are not and cannot be.
I am not convinced that any current application is one of those.
And for the things that quantum computers may be great at
{Deciphering without keys} they may do more harm than good.
On Thu, 19 Sep 2024 7:01:13 +0000, David Brown wrote:
On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:
But if it’s supposed to be for “interactive” use, it’s still going to take
those 400 memory-cycle times to return a response.
In human terms, those 400 memory cycles are completely negligible. For
most purposes, anything under 100 milliseconds is an instant
response. For high-speed games played by experts, 10 milliseconds is a
good target. For the most demanding tasks, such as making music, 1
millisecond might be required.
400 cycles IS negligible.
400 cycles for each LD is non-negligible.
Remember LDs are 20%-22% of the instruction stream and with 400 cycles
per LD you see an average of 80-cycles per instruction even if all
other instructions take 1 cycle. This is 160× SLOWER than current
CPUs. But GPUs with thousands of cores can use memory that slow and
still deliver big gains in performance (6×-50×).
For anything interactive, an extra 400 memory cycles latency means
nothing - even if it is relatively slow memory - as long as you can keep
the throughput. Network latency is massively bigger than this extra
memory latency would be.
Most CPUs can't even deliver control in 400 cycles to an interrupt
or exception handler.
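The load-latency arithmetic above checks out if one assumes current CPUs sustain a CPI of about 0.5 (roughly 2 instructions per cycle; that baseline is my assumption, as the post only states the 160× result):

```python
# Check of the load-latency arithmetic above: ~20% loads at 400 cycles
# each, everything else single-cycle.
ld_fraction = 0.20          # loads are ~20% of the instruction stream
ld_latency = 400            # cycles per load
other_cpi = 1               # all other instructions: 1 cycle
avg_cpi = ld_fraction * ld_latency + (1 - ld_fraction) * other_cpi
baseline_cpi = 0.5          # assumed: a modern CPU sustaining ~2 IPC
print(avg_cpi, avg_cpi / baseline_cpi)  # ~80.8 CPI, ~162x slowdown
```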
On Thu, 19 Sep 2024 9:35:41 +0000, David Brown wrote:
On 19/09/2024 09:44, Niklas Holsti wrote:
The practical limitations for quantum computers are far more
significant. Roughly speaking, when you entangle more states at once,
you need tighter tolerances to maintain coherence, which translates to
lower temperatures, higher energy costs, and lower times to do your
calculations. And to be useful, you need large numbers of qubits, which
again makes maintaining coherence increasingly difficult.
One can say exactly the same about PAM4 signaling compared to NRZ or
even RZ coding.
On 2024-09-19 11:43, Lawrence D'Oliveiro wrote:
On Thu, 19 Sep 2024 10:44:24 +0300, Niklas Holsti wrote:
On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:
He mentioned that several physics breakthroughs
are needed for quantum computing to become useful.
The biggest one would be getting around the fundamental problem that
you can’t get something for nothing.
Stupid argument. Look at the effort and tech it takes to make quantum
computers... that is not "nothing".
Is there some ongoing “Nature’s Rentware” involved?
I have no idea what you mean by that.
The promise of an exponential increase in computing power for a linear
increase in the number of processing elements sounds very much like
No, it is exploiting the very non-intuitive nature of quantum
entanglement to create an exponential number of collective states of a
linear number of elements.
That’s called the “many worlds interpretation” of quantum mechanics, and
it is philosophical mumbo-jumbo nonsense.
The /fact/ that quantum mechanics describes how the world works,
entanglement and all, does not depend on the various attempts to
"interpret" or understand its foundations.
Quantum mechanics is high IQ bullshit to make professors look important.
I am convinced that quantum computers will eventually be good at some
things that regular computers are not and cannot be.
From my recent reading, it seems like factoring 21 (5 bits) requires at
least 5+10=15 bits all staying entangled, plus a number of additional
bits for error correction.
On Thu, 19 Sep 2024 16:23:09 +0000, MitchAlsup1 wrote:
I am convinced that quantum computers will eventually be good at some
things that regular computers are not and cannot be.
They are currently having some success in physical-optimization problems,
with precision limits. That means they are basically just a revival of the
old analog computers: fast at solving physical-related problems, but with
much less precision than digital computers.
So far, the progress in making them handle number-theoretic calculations
has been essentially zero.
On Thu, 19 Sep 2024 12:59:42 +0200, Terje Mathisen wrote:
From my recent reading, it seems like factoring 21 (5 bits) requires at
least 5+10=15 bits all staying entangled, plus a number of additional
bits for error correction.
The noise factor was something the original ideas about quantum computers
had not taken into account.
But it’s pretty obvious why it happens: “quantum” computing was something
thought up by people who took the “many worlds” interpretation of quantum theory just a little too seriously: if you could take advantage of “superposition of states” to run your computation simultaneously across multiple alternate universes, you could access a whole lot more computing power!
The reason why it doesn’t work is because of conservation of energy. Accessing those hypothetical “alternate universes” requires spreading the same amount of energy more thinly. And that’s where the noise comes from. So ultimately there will be no way to get rid of it.
I watched a video several months ago where a music producer demonstrated
how just moving the notes around by microsecond-scale time intervals
destroys the "musicality" of the <ahem> music.
400 cycles IS negligible.
400 cycles for each LD is non-negligible.
Remember LDs are 20%-22% of the instruction stream and with 400 cycles
per LD you see an average of 80-cycles per instruction even if all other instructions take 1 cycle. This is 160× SLOWER than current CPUs. But
GPUs with thousands of cores can use memory that slow and still deliver
big gains in performance (6×-50×).
The way I implemented it was by updating the "official" back frame
buffer, and compare the update with the visible front buffer. If at any
time a write to the back buffer did not result in something that was
already in the front buffer, I just copied the back buffer to the front
and went on from there.
On Thu, 19 Sep 2024 20:48:38 +0000, Lawrence D'Oliveiro wrote:
On Thu, 19 Sep 2024 16:23:09 +0000, MitchAlsup1 wrote:
I am convinced that quantum computers will eventually be good at some
things that regular computers are not and cannot be.
They are currently having some success in physical-optimization
problems, with precision limits. That means they are basically just a
revival of the old analog computers: fast at solving physical-related
problems, but with much less precision than digital computers.
They seem to be rather exceptional at protein folding compared to
classical computing.
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
On Thu, 19 Sep 2024 16:09:15 +0000, MitchAlsup1 wrote:
400 cycles IS negligible.
400 cycles for each LD is non-negligible.
Remember LDs are 20%-22% of the instruction stream and with 400 cycles
per LD you see an average of 80-cycles per instruction even if all other
instructions take 1 cycle. This is 160× SLOWER than current CPUs. But
GPUs with thousands of cores can use memory that slow and still deliver
big gains in performance (6×-50×).
How can they do that? What proportion of their instruction stream is
LDs?
It seems to me they are accessing memory in 100% of their instructions,
since they would have less sophisticated memory controllers than CPUs commonly have.
Hint: they can context-switch every instruction. So if an instruction
does not complete in its cycle, they switch to a different set of
threads.
Also note: a single instruction causes 32-128 threads to make 1 step of
forward progress.
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type “quantum mechanics criticism” and variants into Google and have at it.
On Thu, 19 Sep 2024 12:54:23 +0200, Terje Mathisen wrote:
The way I implemented it was by updating the "official" back frame
buffer, and compare the update with the visible front buffer. If at any
time a write to the back buffer did not result in something that was
already in the front buffer, I just copied the back buffer to the front
and went on from there.
Is this where the need for “triple buffering” comes from -- the fact that you need to copy the entire contents of one buffer to another?
The way I understood to do flicker-free drawing was with just two buffers
-- “double buffering”. And rather than swap the buffer contents, you just swapped the pointers to them.
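The pointer-swap version of double buffering described above can be sketched like this (a minimal illustration; real display hardware flips a scan-out register rather than a Python index):

```python
# Minimal sketch of double buffering with a pointer (index) swap:
# draw into the hidden back buffer, then flip which buffer is "front".
class DoubleBuffer:
    def __init__(self, size):
        self.buffers = [bytearray(size), bytearray(size)]
        self.front = 0                # index of the visible buffer

    def back(self):
        return self.buffers[1 - self.front]

    def swap(self):
        # No copy: just flip which buffer the display scans out.
        self.front = 1 - self.front

db = DoubleBuffer(4)
db.back()[0] = 42   # render into the hidden buffer
db.swap()           # flip pointers; the new frame becomes visible
print(db.buffers[db.front][0])  # 42
```

Triple buffering adds a third buffer so rendering never has to wait for the display refresh; it is about decoupling producer and consumer, not about needing a full-buffer copy.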
Brett <ggtgp@yahoo.com> schrieb:
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type “quantum mechanics criticism” and variants into Google and have at it.
I've read enough crackpot theories already, thank you, I don't need
any more.
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type “quantum mechanics criticism” and variants into Google and have at it.
But we know the earth is flat.
On 20/09/2024 07:46, Thomas Koenig wrote:
Brett <ggtgp@yahoo.com> schrieb:
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type “quantum mechanics criticism” and variants into Google and have at it.
I've read enough crackpot theories already, thank you, I don't need
any more.
Quantum mechanics describes the rules that give structure to atoms and molecules. On a larger scale, those structures build up to explain
spherical planets. But we know the earth is flat. Therefore, quantum mechanics is bullshit. What more evidence could you want?
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
David Brown <david.brown@hesbynett.no> wrote:
On 20/09/2024 07:46, Thomas Koenig wrote:
Brett <ggtgp@yahoo.com> schrieb:
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type “quantum mechanics criticism” and variants into Google and have at it.
I've read enough crackpot theories already, thank you, I don't need
any more.
Quantum mechanics describes the rules that give structure to atoms and
molecules. On a larger scale, those structures build up to explain
spherical planets. But we know the earth is flat. Therefore, quantum
mechanics is bullshit. What more evidence could you want?
Yup, you just explained the Einstein argument, just like I said.
Even a complete amateur can notice time mismatches of 10 ms in a
musical context, so for a professional this does not surprise me.
I don't know of any human endeavour that requires lower latency or
more precise timing than music.
On Fri, 20 Sep 2024 00:58:44 +0000, MitchAlsup1 wrote:
Hint: they can context-switch every instruction.
How does that help?
In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
(David Brown) wrote:
Even a complete amateur can notice time mismatches of 10 ms in a
musical context, so for a professional this does not surprise me.
I don't know of any human endeavour that requires lower latency or
more precise timing than music.
A friend used to work on set-top boxes, with fairly slow hardware. They
had demonstrations of two different ways of handling inability to keep up
with the data stream:
- Keeping the picture on schedule, and dropping a few milliseconds
of sound.
- Dropping a frame of the picture, and keeping the sound on-track.
Potential customers always thought they wanted the first approach, until
they watched the demos. Human vision fakes a lot of what we "see" at the
best of times, but hearing is more sensitive to glitches.
John
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
Lawrence D'Oliveiro wrote:
If you cannot swap the buffers with pointer updates ...
The way I understood to do flicker-free drawing was with just two
buffers -- “double buffering”. And rather than swap the buffer
contents, you just swapped the pointers to them.
Having ears that can hear millisecond differences in sound
arrival times is key to our ability to hunt and to evade predators.
If you can back up that claim (that noise in quantum computing comes
from "many worlds") ...
On Fri, 20 Sep 2024 01:08:23 +0300, Niklas Holsti wrote:
If you can back up that claim (that noise in quantum computing comes
from "many worlds") ...
No, I’m saying the opposite: the noise comes from the fact that “many worlds” is nonsense.
On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
Yes, but that’s a lot more expensive.
I had this crazy idea of putting CPUs right on the RAM. So, if you add
more memory to your system you automatically get more CPUs... Think
NUMA for a moment... ;^)
On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:
On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
Yes, but that’s a lot more expensive.
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
Can software use the extra CPUs ?
Also note: DRAMs are made on a P-Channel process (leakage) with only a few layers of metal while CPUs are based on an N-Channel process (speed) with
many layers of metal.
Bus interconnects are not set up to take a CPU cache miss from one
DRAM to a different DRAM on behalf of its contained CPU(s).
{Chicken and egg problem}
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:
On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:
Yes, but that’s a lot more expensive.
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
Can software use the extra CPUs ?
Also note: DRAMs are made on a P-Channel process (leakage) with only a few
layers of metal while CPUs are based on an N-Channel process (speed) with
many layers of metal.
Didn’t you work on the MC68000 which had one layer of metal?
This could be fine if you are going for the AI market of slow AI cpu
with huge memory and bandwidth.
The AI market is bigger than the general server market as seen in
NVidia’s sales.
Bus interconnects are not set up to take a CPU cache miss from one
DRAM to a different DRAM on behalf of its contained CPU(s).
{Chicken and egg problem}
Such a DRAM would be on the PCIe buses, and the main CPUs would barely touch that RAM, and the AI only searches locally.
On Fri, 20 Sep 2024 01:08:23 +0300, Niklas Holsti wrote:
If you can back up that claim (that noise in quantum computing comes
from "many worlds") ...
No, I’m saying the opposite: the noise comes from the fact that “many worlds” is nonsense.
All of these noise sources will remain even if the many-worlds theory collapses and dies (low probability).
Is there any activity going on at absolute zero?
Can software use the extra CPUs ?
I had this crazy idea of putting cpus right on the ram.
Quantum mechanics is high IQ bullshit to make professors look important.
Shit man, remember all of the slots in the old Apple IIgs's?
Type “quantum mechanics criticism” and variants into Google and have at it.
On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:
This could be fine if you are going for the AI market of slow AI cpu
with huge memory and bandwidth.
The AI market is bigger than the general server market as seen in
NVidia’s sales.
Bus interconnects are not set up to take a CPU cache miss from one
DRAM to a different DRAM on behalf of its contained CPU(s).
{Chicken and egg problem}
Thus a problem with the CPU on DRAM approach.
Such a DRAM would be on the PCIe buses, and the main CPUs would barely touch that RAM, and the AI only searches locally.
Better make it PCIe+CXL so the downstream CPU is cache coherent.
On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:
In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
(David
Brown) wrote:
Even a complete amateur can notice time mismatches of 10 ms in a
musical context, so for a professional this does not surprise me.
I don't know of any human endeavour that requires lower latency or
more precise timing than music.
A friend used to work on set-top boxes, with fairly slow hardware. They
had demonstrations of two different ways of handling inability to keep
up
with the data stream:
- Keeping the picture on schedule, and dropping a few milliseconds
of sound.
- Dropping a frame of the picture, and keeping the sound on-track.
Potential customers always thought they wanted the first approach, until
they watched the demos. Human vision fakes a lot of what we "see" at the
best of times, but hearing is more sensitive to glitches.
Having the ears being able to hear millisecond differences in sound
arrival times is key to our ability to hunt and evade predators.
While our eyes have a time constant closer to 0.1 seconds.
That is, I blame natural selection on the above.
On 9/20/2024 6:12 PM, Brett wrote:
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:
On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:
Yes, but that’s a lot more expensive.
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
Can software use the extra CPUs ?
Also note: DRAMs are made on a P-Channel process (leakage) with only a few
layers of metal while CPUs are based on an N-Channel process (speed) with
many layers of metal.
Didn’t you work on the MC68000 which had one layer of metal?
This could be fine if you are going for the AI market of slow AI cpu with
huge memory and bandwidth.
The AI market is bigger than the general server market as seen in NVidia’s sales.
Bus interconnects are not set up to take a CPU cache miss from one
DRAM to a different DRAM on behalf of its contained CPU(s).
{Chicken and egg problem}
Such a DRAM would be on the PCIe buses, and the main CPUs would barely touch that RAM, and the AI only searches locally.
My crazy idea would be akin to a motherboard with a processor and a
bunch of slots. One would be filled with a special memory with cpu's on
it. If the user wants to add more memory they would gain extra cpu's. It would be a NUMA like scheme. Programs running on cpus with _very_ local
ram would be happy. The main cpu's on the motherboard can be physically
close to the ram slots as well. Adding more memory means we have access
to more cpus that are very close to the memory. So, it might be
interesting out there in the middle of fantasy land for a moment... ;^o
Ouch!
The manual says if you don't need to share data, don't do it... Right on
the cover! lol. ;^D
On 9/20/2024 6:48 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:
On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
Yes, but that’s a lot more expensive.
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
Can software use the extra CPUs ?
Also note: DRAMs are made on a P-Channel process (leakage) with only a few
layers of metal while CPUs are based on an N-Channel process (speed) with
many layers of metal.
Didn’t you work on the MC68000 which had one layer of metal?
Yes, but it was the 68020 and had polysilicide which we used as
a second layer of metal.
Mc88100 had 2 layers of metal and silicide.
The number of metal layers went about::
1978: 1
1980: 1+silicide
1982: 2+silicide
1988: 3+silicide
1990: 4+silicide
1995: 6
..
This could be fine if you are going for the AI market of slow AI cpu
with huge memory and bandwidth.
The AI market is bigger than the general server market as seen in
NVidia’s sales.
Bus interconnects are not set up to take a CPU cache miss from one
DRAM to a different DRAM on behalf of its contained CPU(s).
{Chicken and egg problem}
Thus a problem with the CPU on DRAM approach.
It would be HIGHLY local wrt its processing units and its memory for
they would all be one.
The programming for it would not be all that easy... It would be like a
NUMA where a program can divide itself up and run parts of itself on
each slot (aka memory-cpu hybrid unit card if you will). If a program
can be embarrassingly parallel, well that would be great! The Cell
processor comes to mind. But it failed. Shit.
A system with a motherboard that has slots for several GPUs (think CrossFire) and slots for memory+CPU units. The kicker is that adding
more memory gives you more cpus...
How crazy is this? Well, on a scale from:
Retarded to Moronic?
Pretty bad? Shit...
Shit man, remember all of the slots in the old Apple IIgs's?
;^o
Such a DRAM would be on the PCIe buses, and the main CPUs would barely touch that RAM, and the AI only searches locally.
Better make it PCIe+CXL so the downstream CPU is cache coherent.
On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:
Quantum mechanics is high IQ bullshit to make professors look important.
Quantum mechanics is real. Quantum effects are real. Transistors only work because electrons can “tunnel” through a barrier with higher energy than they have, which should be classically impossible.
Matter only hangs together because electrons don’t actually orbit nuclei like planets in a miniature solar system: if they did, they would emit radiation (“bremsstrahlung radiation”), thereby losing energy and spiralling into the nucleus until the atom collapses. And that would
happen to every atom in the Universe. Clearly that is not the case.
Even an old-style incandescent light bulb only works because of quantum effects: the shape of the radiation curve depends only on the temperature
of the radiating body, once it gets sufficiently hot, with little or no dependence on what material the body is made of. This applies to your
light bulb and also to our Sun and the other stars.
It is true that quantum theory sounds completely crazy when you try to explain it. But it works, and gives the right answers, that have been verified repeatedly in countless tests. And in science, that counts for
more than anything.
On Fri, 20 Sep 2024 19:28:51 -0700, Chris M. Thomasson wrote:
Shit man, remember all of the slots in the old Apple IIgs's?
In those days, RAM was slow enough that you could put RAM expansion on bus cards.
On 9/20/2024 7:28 PM, Chris M. Thomasson wrote:
It would be HIGHLY local wrt its processing units and its memory for
they would all be one.
The programming for it would not be all that easy... It would be like a
NUMA where a program can divide itself up and run parts of itself on
each slot (aka memory-cpu hybrid unit card if you will). If a program
can be embarrassingly parallel, well that would be great! The Cell
processor comes to mind. But it failed. Shit.
A system with a motherboard that has slots for several GPUs (think
CrossFire) and slots for memory+CPU units. The kicker is that adding
more memory gives you more cpus...
How crazy is this? Well, on a scale from:
Retarded to Moronic?
Pretty bad? Shit...
Shit man, remember all of the slots in the old Apple IIgs's?
;^o
Think of the transwarp card for the apple iigs. I think it was called
that. It sped up the system for sure.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:
Quantum mechanics is high IQ bullshit to make professors look important.
Quantum mechanics is real. Quantum effects are real. Transistors only work
because electrons can “tunnel” through a barrier with higher energy than
they have, which should be classically impossible.
I did not criticize quantum effects, I criticized quantum mechanics which
is dumbshit SWAG that hides the truth of what is happening behind bullshit. With greater understanding we can come up with classical explanations, but those truths are too scary and could lead to the destruction of mankind.
If you want to know what is really going on watch the Eric Weinstein The Portal videos.
https://youtu.be/xBx5Y1YLfZY?si=0sVFmvOh-bot2ok7
Eric is a scary bright physicist, he does not know the truth, but he knows it’s being hidden, and he shows you.
“You can’t handle the truth.”
https://youtu.be/9FnO3igOkOk?si=0xmQuxz6yaLnBkCC
I don’t know the truth either, and if I did I would not speculate on it, too dangerous.
https://www.amazon.com/Experts-Vs-Conspiracy-Theorists-T-Shirt/dp/B0CKLLCN9M/ref=asc_df_B0CKLLCN9M
MitchAlsup1 wrote:
On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:
In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
(David
Brown) wrote:
Even a complete amateur can notice time mismatches of 10 ms in a
musical context, so for a professional this does not surprise me.
I don't know of any human endeavour that requires lower latency or
more precise timing than music.
A friend used to work on set-top boxes, with fairly slow hardware.
They had demonstrations of two different ways of handling
inability to keep up
with the data stream:
- Keeping the picture on schedule, and dropping a few milliseconds
of sound.
- Dropping a frame of the picture, and keeping the sound on-track.
Potential customers always thought they wanted the first approach,
until they watched the demos. Human vision fakes a lot of what we
"see" at the best of times, but hearing is more sensitive to
glitches.
Having the ears being able to hear millisecond differences in sound
arrival times is key to our ability to hunt and evade predators.
Not only that, but the slight non-cylindrical shape of the ear
opening & canal causes _really_ minute phase shifts, but they are what
makes it possible for us to differentiate between a sound coming from directly behind vs directly ahead.
While our eyes have a time constant closer to 0.1 seconds.
That is, I blame natural selection on the above.
Supposedly, we devote more of our brain to hearing than to vision?
Terje
On Sat, 21 Sep 2024 15:39:41 +0200
Terje Mathisen <terje.mathisen@tmsw.no> wrote:
MitchAlsup1 wrote:
On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:
In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
(David
Brown) wrote:
Even a complete amateur can notice time mismatches of 10 ms in a
musical context, so for a professional this does not surprise me.
I don't know of any human endeavour that requires lower latency or
more precise timing than music.
A friend used to work on set-top boxes, with fairly slow hardware.
They had demonstrations of two different ways of handling
inability to keep up
with the data stream:
- Keeping the picture on schedule, and dropping a few milliseconds
of sound.
- Dropping a frame of the picture, and keeping the sound on-track.
Potential customers always thought they wanted the first approach,
until they watched the demos. Human vision fakes a lot of what we
"see" at the best of times, but hearing is more sensitive to
glitches.
Having the ears being able to hear millisecond differences in sound
arrival times is key to our ability to hunt and evade predators.
Not only that, but the slight non-cylindrical shape of the ear
opening & canal causes _really_ minute phase shifts, but they are what
makes it possible for us to differentiate between a sound coming from
directly behind vs directly ahead.
While our eyes have a time constant closer to 0.1 seconds.
That is, I blame natural selection on the above.
Supposedly, we devote more of our brain to hearing than to vision?
Terje
I think it's not even close; it's heavily in favor of vision.
On 9/21/2024 6:54 AM, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
https://www.marvell.com/products/cxl.html
What about a weak coherency where a programmer has to use the correct
membars to get the coherency required for their specific needs? Along
the lines of UltraSPARC in RMO mode?
On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:
On 9/21/2024 6:54 AM, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
https://www.marvell.com/products/cxl.html
What about a weak coherency where a programmer has to use the correct
membars to get the coherency required for their specific needs? Along
the lines of UltraSPARC in RMO mode?
In my case, I suffered through enough of these to implement a
memory hierarchy free from the need of any MemBars yet provide
the performance of <mostly> relaxed memory order, except when
certain kinds of addresses are touched {MMI/O, configuration
space, ATOMIC accesses,...} In these cases, the core becomes
{sequentially consistent, or strongly ordered} depending on the
touched address.
On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:
Is there any activity going on at absolute zero?
No, because the Third Law of Thermodynamics says you can’t get there
anyway.
How close can one get?
CPU's that are hyper close to the memory is good wrt locality. However,
the programming for it might turn some programmers off. NUMA like for
sure.
All the threads are executing exactly the same instructions, on the same
code path.
On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:
CPU's that are hyper close to the memory is good wrt locality.
However, the programming for it might turn some programmers off.
NUMA like for sure.
I’m pretty sure those multi-million-node Linux supers that fill the
top of the Top500 list have a NUMA-style memory-addressing model.
On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:
On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:
Is there any activity going on at absolute zero?
No, because the Third Law of Thermodynamics says you can’t get there
anyway.
How close can one get?
Arbitrarily close. I heard of experiments already being done in the microkelvin range.
Correction: just checked, and the Guinness World Record site reports a
figure of 38pK.
On 9/21/2024 4:55 PM, Lawrence D'Oliveiro wrote:
On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:
On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:
Is there any activity going on at absolute zero?
No, because the Third Law of Thermodynamics says you can’t get there >>>> anyway.
How close can one get?
Arbitrarily close. I heard of experiments already being done in the
microkelvin range.
Odd. So absolute zero is the "limit" and we can get arbitrarily close to
it? Kind of reminds me of the infinity in the unit fractions. Say they
are signed for a moment... ;^)
Correction: just checked, and the Guinness World Record site reports a
figure of 38pK.
From what I understand, GPUs also typically have
memory controllers optimized for throughput rather than latency, with
larger queue depth.
On Sat, 21 Sep 2024 23:34:40 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:
CPU's that are hyper close to the memory is good wrt locality.
However, the programming for it might turn some programmers off.
NUMA like for sure.
I’m pretty sure those multi-million-node Linux supers that fill the top
of the Top500 list have a NUMA-style memory-addressing model.
You are wrong.
Last ccNuma was pushed out of top100 more than a decade ago.
All top machines today are MPP or clusters.
Not that the difference between the two is well-defined.
On Fri, 20 Sep 2024 06:52:07 -0400, Paul A. Clayton wrote:
From what I understand, GPUs also typically have
memory controllers optimized for throughput rather than latency, with
larger queue depth.
Fine. If they aren’t designed for low latency, then you can’t call them “interactive”, can you? Since that requires quick response. They seem more oriented towards batch operation in the background.
On 9/21/2024 9:39 AM, Brett wrote:
Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
On 9/20/2024 6:48 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:
On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:
The basic issue is:
* CPU+motherboard RAM -- usually upgradeable
* Addon coprocessor RAM -- usually not upgradeable
Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
addon board itself can be replaced with another one (one with more RAM).
Yes, but that’s a lot more expensive.
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
Can software use the extra CPUs ?
Also note: DRAMs are made on a P-Channel process (leakage) with only a few
layers of metal while CPUs are based on an N-Channel process (speed) with
many layers of metal.
Didn’t you work on the MC68000 which had one layer of metal?
Yes, but it was the 68020 and had polysilicide which we used as
a second layer of metal.
Mc88100 had 2 layers of metal and silicide.
The number of metal layers went about::
1978: 1
1980: 1+silicide
1982: 2+silicide
1988: 3+silicide
1990: 4+silicide
1995: 6
..
This could be fine if you are going for the AI market of slow AI cpu
with huge memory and bandwidth.
The AI market is bigger than the general server market as seen in
NVidia’s sales.
Bus interconnects are not set up to take a CPU cache miss from one
DRAM to a different DRAM on behalf of its contained CPU(s).
{Chicken and egg problem}
Thus a problem with the CPU on DRAM approach.
It would be HIGHLY local wrt its processing units and its memory for
they would all be one.
The programming for it would not be all that easy... It would be like a
NUMA where a program can divide itself up and run parts of itself on
each slot (aka memory-cpu hybrid unit card if you will). If a program
can be embarrassingly parallel, well that would be great! The Cell
processor comes to mind. But it failed. Shit.
Cell was in the PlayStation which Sony sold a huge number of and made
billions of dollars, so successful, not failed.
Touche! :^)
However, iirc, not all the games for it even used the SPE's. Instead
they used the PPC. I guess that might have been due to the "complexity"
of the programming? Not sure.
I programmed for Cell, it was actually a nice architecture for what it did.
Iirc, you had to use DMA to communicate with the SPE's?
If you think programming for AI is easy, I have news for you…
Those NVidia AI chips are at the brain damaged level for programming.
No shit? I was thinking along the lines of compute shaders in the GPU?
10’s of billions of dollars are invested in this market.
A system with a motherboard that has slots for several GPUs (think
CrossFire) and slots for memory+CPU units. The kicker is that adding
more memory gives you more cpus...
How crazy is this? Well, on a scale from:
Retarded to Moronic?
Pretty bad? Shit...
Shit man, remember all of the slots in the old Apple IIgs's?
;^o
Such a DRAM would be on the PCIe buses, and the main CPUs would barely
touch that RAM, and the AI only searches locally.
Better make it PCIe+CXL so the downstream CPU is cache coherent.
On 21/09/2024 19:40, Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:
Quantum mechanics is high IQ bullshit to make professors look important.
Quantum mechanics is real. Quantum effects are real. Transistors only work
because electrons can “tunnel” through a barrier with higher energy than
they have, which should be classically impossible.
I did not criticize quantum effects, I criticized quantum mechanics which
is dumbshit SWAG that hides the truth of what is happening behind bullshit.
With greater understanding we can come up with classical explanations, but
those truths are too scary and could lead to the destruction of mankind.
If you want to know what is really going on watch the Eric Weinstein The
Portal videos.
https://youtu.be/xBx5Y1YLfZY?si=0sVFmvOh-bot2ok7
Eric is a scary bright physicist, he does not know the truth, but he knows
it’s being hidden, and he shows you.
“You can’t handle the truth.”
https://youtu.be/9FnO3igOkOk?si=0xmQuxz6yaLnBkCC
I don’t know the truth either, and if I did I would not speculate on it,
too dangerous.
https://www.amazon.com/Experts-Vs-Conspiracy-Theorists-T-Shirt/dp/B0CKLLCN9M/ref=asc_df_B0CKLLCN9M
In case anyone wants a safe link to information about this particular
muppet, without Google thinking they are interested in loony conspiracy theories on the level of "birds don't exist", you can read about him at <https://rationalwiki.org/wiki/Eric_Weinstein>.
"""
Repressed genius
For years as a mathematician he has said that he has some kind of theory
of everything that will knock everyone out and overturn the field of
physics, but he just can't publish it yet because the world isn't ready,
and the information will only be suppressed.[6]
In 2020, Weinstein published his much-hyped Oxford lecture on his Theory
of Geometric Unity.[7] It was met with silence and indifference among theoretical physicists and the scientific community at large.
On 1st April 2021, Weinstein released a draft of his paper online.[10]
Given the 1st April release date and the author details on the cover
page describing Weinstein as an "entertainer" and the paper itself as a
"work of entertainment", it is unclear at this stage whether Geometrical Unity was an elaborate April Fools' Day Wikipedia prank all along.
"""
Rather than publishing his theories in peer-reviewed journals, like real scary bright physicists, he promoted his views on Joe Rogan's podcast.
And he described himself as "not a physicist" (he is a venture
capitalist, not a scientist) and that the paper was "a work of entertainment". That should give you some idea of how seriously his
ideas should be taken.
Actual physicists know that quantum mechanics is not complete - it is
not a "theory of everything", and does not explain everything. It is,
like Newtonian gravity and general relativity, a simplification that
gives an accurate model of reality within certain limitations, and
hopefully it will one day be superseded by a new theory that models
reality more accurately and over a wider range of circumstances. That
is how science works.
As things stand today, no such better theory has been developed. There
are a number of ideas and hypotheses (still far from being classifiable
as scientific theories) that show promise and have not yet been
demonstrated to be wrong, but that's as far as we have got. Weinstein's "Geometric Unity" is not such a hypotheses - the little that has been published has been shown to be either wrong, or "not even wrong".
It's fine to come up with strange new ideas about how the universe
works. You then publish and discuss those ideas, and work with other scientists to weed out the clearly incorrect parts, try to expand and
modify it to fit what we know about reality, and to think about how it
could make predictions that could be tested. That's part of the process
of science.
It's not fine to believe half-baked ramblings from someone who doesn't understand what they are working with and won't listen to those who do.
The alternative to "I don't understand quantum mechanics" is /not/ to
believe whatever gobbledegook someone spouts on youtube.
i.e. at least an order of magnitude more vision than hearing.
A GPU can perform 1T-10T calculations per second running at 1GHz.
Try doing that with a CPU.
On 9/21/2024 6:28 PM, MitchAlsup1 wrote:
Using lasers to slow the particles down!
When a particle is vibrating towards the laser, a picosecond blast of
energy slows it back down. Using heat to achieve cold.
Targeting a single particle without casting any effect on any other
particle? Can that be done?
I did not criticize quantum effects, I criticized quantum mechanics
which is dumbshit SWAG that hides the truth of what is happening behind bullshit. With greater understanding we can come up with classical explanations ...
Temperature is an unsigned quantity.
Astronomers have only found a dozen Einstein Rings ...
On Sun, 22 Sep 2024 02:57:21 +0300, Michael S wrote:
On Sat, 21 Sep 2024 23:34:40 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:
CPU's that are hyper close to the memory is good wrt locality.
However, the programming for it might turn some programmers off.
NUMA like for sure.
I’m pretty sure those multi-million-node Linux supers that fill
the top of the Top500 list have a NUMA-style memory-addressing
model.
You are wrong.
Last ccNuma was pushed out of top100 more than a decade ago.
All top machines today are MPP or clusters.
Not that the difference between the two is well-defined.
If the difference is not “well-defined”, then how can I be “wrong”?
David Brown <david.brown@hesbynett.no> wrote:
On 20/09/2024 07:46, Thomas Koenig wrote:
Brett <ggtgp@yahoo.com> schrieb:
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look
important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type “quantum mechanics criticism” and variants into Google and
have at it.
I've read enough crackpot theories already, thank you, I don't need
any more.
Quantum mechanics describes the rules that give structure to atoms
and molecules. On a larger scale, those structures build up to
explain spherical planets. But we know the earth is flat.
Therefore, quantum mechanics is bullshit. What more evidence could
you want?
Yup, you just explained the Einstein argument, just like I said.
Actual physicists know that quantum mechanics is not complete - it is
not a "theory of everything", and does not explain everything. It
is, like Newtonian gravity and general relativity, a simplification
that gives an accurate model of reality within certain limitations,
and hopefully it will one day be superseded by a new theory that
models reality more accurately and over a wider range of
circumstances. That is how science works.
As things stand today, no such better theory has been developed.
There are a number of ideas and hypotheses (still far from being
classifiable as scientific theories) that show promise and have not
yet been demonstrated to be wrong, but that's as far as we have got.
Weinstein's "Geometric Unity" is not such a hypothesis - the little
that has been published has been shown to be either wrong, or "not
even wrong".
On Sun, 22 Sep 2024 2:13:42 +0000, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 06:52:07 -0400, Paul A. Clayton wrote:
From what I understand, GPUs also typically have
memory controllers optimized for throughput rather than latency,
with larger queue depth.
Fine. If they aren’t designed for low latency, then you can’t call
them “interactive”, can you? Since that requires quick response.
They seem more oriented towards batch operation in the background.
Think about it like this::
A GPU can perform 1T-10T calculations per second running at 1GHz.
Try doing that with a CPU.
They are entirely different on the spectrum of design and
architecture. Things that work to make CPUs faster do not make GPUs faster--and for the most part--vice versa.
Brett <ggtgp@yahoo.com> writes:
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look
important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type quantum mechanics criticism and
variants into Google and have at it.
Why should one do that?
On Sun, 22 Sep 2024 02:21:54 +0000, MitchAlsup1 wrote:
A GPU can perform 1T-10T calculations per second running at 1GHz.
Try doing that with a CPU.
Why does the GPU have a lower clock speed? It’s because it’s locked
to the RAM speed, in a way that the CPU is not. Once you allow for
this, you realize that their performance limitations, doing like for
like, are not that different after all.
On 9/21/2024 6:28 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 23:55:13 +0000, Lawrence D'Oliveiro wrote:
On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:
On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:
Is there any activity going on at absolute zero?
No, because the Third Law of Thermodynamics says you can’t get there anyway.
How close can one get?
Arbitrarily close. I heard of experiments already being done in the
microkelvin range.
Correction: just checked, and the Guinness World Record site reports a
figure of 38pK.
Using lasers to slow the particles down!
When a particle is vibrating towards the laser, a picosecond blast
of energy slows it back down. Using heat to achieve cold.
Targeting a single particle without casting any effect on any other
particle? Can that be done?
Having ears able to hear millisecond differences in sound arrival
times is key to our ability to hunt and to evade predators.
Michael S wrote:
On Sat, 21 Sep 2024 15:39:41 +0200
Terje Mathisen <terje.mathisen@tmsw.no> wrote:
MitchAlsup1 wrote:
On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:
In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
(David
Brown) wrote:
Even a complete amateur can notice time mismatches of 10 ms in a
musical context, so for a professional this does not surprise me.
I don't know of any human endeavour that requires lower latency or more precise timing than music.
A friend used to work on set-top boxes, with fairly slow hardware.
They had demonstrations of two different ways of handling
inability to keep up
with the data stream:
- Keeping the picture on schedule, and dropping a few milliseconds
of sound.
- Dropping a frame of the picture, and keeping the sound on-track.
Potential customers always thought they wanted the first approach,
until they watched the demos. Human vision fakes a lot of what we
"see" at the best of times, but hearing is more sensitive to
glitches.
Having ears able to hear millisecond differences in sound arrival
times is key to our ability to hunt and to evade predators.
Not only that, but the slight non-cylindrical shape of the ear
opening and canal causes _really_ minute phase shifts, and they are
what makes it possible for us to differentiate between a sound coming
from directly behind vs directly ahead.
While our eyes have a time constant closer to 0.1 seconds.
That is, I blame natural selection on the above.
Supposedly, we devote more of our brain to hearing than to vision?
Terje
I think, it's not even close in favor of vision.
That would have been my guess as well, but as I wrote above, a few
years ago I was told it was otherwise. Now I have actually read a few
papers about how you can actually measure this, and it did make sense,
i.e. at least an order of magnitude more vision than hearing.
Having done both audio and video codec optimization I know that even
the very highest levels of audio quality are near-DC compared to video
signals. :-)
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it is
not a "theory of everything", and does not explain everything. It
is, like Newtonian gravity and general relativity, a simplification
that gives an accurate model of reality within certain limitations,
and hopefully it will one day be superseded by a new theory that
models reality more accurately and over a wider range of
circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated understanding) is
that unlike the Schrodinger equation, approximate solutions for QED
equations can't be calculated numerically by means of Green's function.
Because of that QED is rarely used outside the field of high-energy
particles and such.
But then, I am almost 40 years out of date. Things could have changed.
There are a number of ideas and hypotheses (still far from being
classifiable as scientific theories) that show promise and have not
yet been demonstrated to be wrong, but that's as far as we have got.
Weinstein's "Geometric Unity" is not such a hypotheses - the little
that has been published has been shown to be either wrong, or "not
even wrong".
On 22/09/2024 10:48, Michael S wrote:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it
is not a "theory of everything", and does not explain everything.
It is, like Newtonian gravity and general relativity, a
simplification that gives an accurate model of reality within
certain limitations, and hopefully it will one day be superseded
by a new theory that models reality more accurately and over a
wider range of circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in
the 1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated
understanding) is that unlike the Schrodinger equation, approximate
solutions for QED equations can't be calculated numerically by
means of Green's function. Because of that QED is rarely used
outside the field of high-energy particles and such.
But then, I am almost 40 years out of date. Things could have
changed.
I don't claim to be an expert on this field in any way, and could
easily be muddled on the details.
I thought QED only covered special relativity, not general relativity
- i.e., it describes particles travelling near the speed of light,
but does not handle gravity or the curvature of space-time.
On Sun, 22 Sep 2024 0:13:38 +0000, Chris M. Thomasson wrote:
On 9/21/2024 4:55 PM, Lawrence D'Oliveiro wrote:
On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:
On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:
Is there any activity going on at absolute zero?
No, because the Third Law of Thermodynamics says you can’t get there anyway.
How close can one get?
Arbitrarily close. I heard of experiments already being done in the
microkelvin range.
Odd. So absolute zero is the "limit" and we can get arbitrarily close to
it? Kind of reminds me of the infinity in the unit fractions. Say they
are signed for a moment... ;^)
Temperature is an unsigned quantity.
Correction: just checked, and the Guinness World Record site reports a
figure of 38pK.
On 2024-09-22 7:02, Chris M. Thomasson wrote:
On 9/21/2024 6:28 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 23:55:13 +0000, Lawrence D'Oliveiro wrote:
On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:
On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:
Is there any activity going on at absolute zero?
No, because the Third Law of Thermodynamics says you can’t get there anyway.
How close can one get?
Arbitrarily close. I heard of experiments already being done in the
microkelvin range.
Correction: just checked, and the Guinness World Record site reports a figure of 38pK.
Using lasers to slow the particles down !
When a particle is vibrating towards the laser, a picosecond blast
of energy slows it back down. Using heat to achieve cold.
Targeting a single particle without casting any effect on any other
particle? Can that be done?
It's not done that way - the laser beams are continuous, but they are
tuned and/or polarized to interact more with particles moving the
"wrong way", slowing them down on average, which means cooling them.
The particles "self select" to interact with the beams, based on
Doppler effects or other effects that depend on particle movements.
https://en.wikipedia.org/wiki/Laser_cooling
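The per-photon effect is tiny, which is why that self-selection has to repeat millions of times per second. Each absorbed photon transfers momentum h/lambda, so the velocity kick is h/(m*lambda); a rough sketch of the numbers for the sodium D line (589 nm), a textbook laser-cooling case:

```c
#include <stdio.h>

/* Each absorbed photon changes the atom's velocity by dv = h/(m*lambda).
 * For sodium at 589 nm this is about 3 cm/s, so stopping a thermal
 * (~570 m/s) atom takes tens of thousands of absorption cycles. */
double recoil_velocity(double mass_kg, double lambda_m)
{
    const double h = 6.62607015e-34;    /* Planck constant, J*s */
    return h / (mass_kg * lambda_m);
}

void sodium_example(void)
{
    const double m_na = 3.818e-26;      /* sodium-23 atomic mass, kg */
    double dv = recoil_velocity(m_na, 589e-9);
    printf("recoil kick: %.4f m/s; kicks to stop a 570 m/s atom: %.0f\n",
           dv, 570.0 / dv);
}
```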
Well, we have asymmetric memory barriers now (membarrier() in Linux),
so we can get rid of memory barriers in some cases. For hazard
pointers, what used to be a (load, store, mb, load) is now just
a (load, store, load). Much faster: from 8.02 nsecs to 0.79 nsecs.
So much so that other things which have heretofore been considered
to add negligible overhead are not so much by comparison. Which can
be a little annoying because some like using those a lot.
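For the curious, the reader-side (load, store, load) pattern looks roughly like the sketch below. This is only an illustrative single-slot version with made-up names; on Linux the reclaimer would issue the asymmetric barrier via membarrier(MEMBARRIER_CMD_GLOBAL, ...), for which a seq_cst fence stands in here so the sketch stays portable:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative single-slot hazard pointer. The whole point: the
 * reader's fast path has no memory barrier; the reclaimer pays for
 * one heavyweight, process-wide barrier instead (membarrier() on
 * Linux; a seq_cst fence stands in here). */
_Atomic(void *) hazard_slot;
_Atomic(void *) shared_obj;

/* Reader: (load, store, load) -- the old (load, store, mb, load)
 * minus the mb, which is where the 8.02ns -> 0.79ns saving lives. */
void *hp_acquire(void)
{
    void *p;
    do {
        p = atomic_load_explicit(&shared_obj, memory_order_acquire);
        atomic_store_explicit(&hazard_slot, p, memory_order_relaxed);
        /* no barrier here; just re-check the object didn't change */
    } while (p != atomic_load_explicit(&shared_obj, memory_order_acquire));
    return p;
}

void hp_release(void)
{
    atomic_store_explicit(&hazard_slot, NULL, memory_order_release);
}

/* Reclaimer: only after the expensive global barrier is it valid to
 * scan hazard slots and free anything no slot still points at. */
int hp_safe_to_free(void *p)
{
    atomic_thread_fence(memory_order_seq_cst); /* membarrier() stand-in */
    return atomic_load_explicit(&hazard_slot, memory_order_acquire) != p;
}
```

In a real implementation there is one slot per thread and the reclaimer scans them all; the asymmetric trick is that membarrier() forces the equivalent of a full barrier onto every running thread, so readers never need one of their own.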
On Sun, 22 Sep 2024 12:58:36 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 22/09/2024 10:48, Michael S wrote:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it
is not a "theory of everything", and does not explain everything.
It is, like Newtonian gravity and general relativity, a
simplification that gives an accurate model of reality within
certain limitations, and hopefully it will one day be superseded
by a new theory that models reality more accurately and over a
wider range of circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in
the 1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated
understanding) is that unlike the Schrodinger equation, approximate
solutions for QED equations can't be calculated numerically by
means of Green's function. Because of that QED is rarely used
outside the field of high-energy particles and such.
But then, I am almost 40 years out of date. Things could have
changed.
I don't claim to be an expert on this field in any way, and could
easily be muddled on the details.
I thought QED only covered special relativity, not general relativity
- i.e., it describes particles travelling near the speed of light,
but does not handle gravity or the curvature of space-time.
That sounds correct, at least for Dirac's form of QED. Maybe it was
amended later.
But that was not my point.
My point was that QED is well known to be a better approximation of
reality than Heisenberg's matrix mechanics or Schrodinger's equivalent
of it. Despite that, in practice the "worse" approximation is used far
more often.
On Sun, 22 Sep 2024 03:20:59 -0000 (UTC), Brett wrote:
Astronomers have only found a dozen Einstein Rings ...
It only took a minute to prove that false. From
<https://en.wikipedia.org/wiki/Einstein_ring>: “Hundreds of
gravitational lenses are currently known”. Also: “The degree of
completeness needed for an image seen through a gravitational lens to
qualify as an Einstein ring is yet to be defined.”
And there are other, subtler kinds of gravitational lensing. A link
from <https://en.wikipedia.org/wiki/Gravitational_lens> mentions a
survey of older data that discovered 1210 new lenses, doubling the
number known.
Not that this really has anything to do with quantum theory ...
scott@slp53.sl.home (Scott Lurndal) writes:
Brett <ggtgp@yahoo.com> writes:
Thomas Koenig <tkoenig@netcologne.de> wrote:
Brett <ggtgp@yahoo.com> schrieb:
Quantum mechanics is high IQ bullshit to make professors look
important.
You need quantum mechanics to describe solid-state electronics
(or all atoms, for that matter).
Type quantum mechanics criticism and
variants into Google and have at it.
Why should one do that?
To discover the truly brilliant explanations at crackpot-conspiracy-theories.com.
On Sat, 21 Sep 2024 17:40:21 -0000 (UTC), Brett wrote:
I did not criticize quantum effects, I criticized quantum mechanics
which is dumbshit SWAG that hides the truth of what is happening behind
bullshit. With greater understanding we can come up with classical
explanations ...
No we cannot. Some have hypothesized the existence of “hidden variables” which can be used to come up with classical explanations of quantum
effects. Bell’s Theorem offered a way to test for those, and the tests (there have been several of them so far, done in several different ways)
show that such “hidden variables” cannot exist.
Michael S <already5chosen@yahoo.com> writes:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it is
not a "theory of everything", and does not explain everything. It
is, like Newtonian gravity and general relativity, a simplification
that gives an accurate model of reality within certain limitations,
and hopefully it will one day be superseded by a new theory that
models reality more accurately and over a wider range of
circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated understanding) is
that unlike the Schrodinger equation, approximate solutions for QED
equations can't be calculated numerically by means of Green's function.
Because of that QED is rarely used outside the field of high-energy
particles and such.
But then, I am almost 40 years out of date. Things could have changed.
Quantum electrodynamics, aka QED, is a quantum field theory for the electromagnetic force. QED accounts for almost everything we can
directly see in the world, not counting gravity.
The original QED of Dirac, as expressed in the Dirac equation, has a
problem: according to that formulation, the self-energy of the
electron is infinite. To address this deficiency, for about 20
years physicists applied a convenient approximation, namely, they
treated the theoretically infinite quantity as zero. Surprisingly,
that approximation gave results that agreed with all the experiments
that were done up until about the mid 1940s.
In the late 1940s, Richard Feynman, Julian Schwinger, and Shinichiro
Tomonaga independently developed versions of QED that address the
infinite self-energy problem. (Tomonaga's work was done somewhat
earlier, but wasn't publicized until later because of the isolation
of Japan during World War II.) It wasn't at all obvious that the
QED of Feynman and the QED of Schwinger were equivalent. That they
were equivalent was established and publicized by Freeman Dyson
(while he was a graduate student, no less).
The problem of the seemingly infinite self-energy of the electron
was addressed by a technique known as renormalization. We could say
that renormalization is only an approximation: it is known to be mathematically unsound, breaking down after a mere 400 or so decimal
places. Despite that, QED gives numerical results that are correct
up to the limits of our ability to measure. A computation done
using QED matched an experimental result to within the tolerance
of the measurement, which was 13 decimal places. An analogy given
by Feynman is that this is like measuring the distance from LA to
New York to an accuracy of the width of one human hair.
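The number being matched there is the electron's anomalous magnetic moment, computed as a perturbation series in the fine-structure constant; even the first (Schwinger) term alone lands within about 0.2% of the measured value (higher-order coefficients shown only schematically):

```latex
a_e \equiv \frac{g-2}{2}
    = \frac{\alpha}{2\pi} + C_2\left(\frac{\alpha}{\pi}\right)^2 + \cdots,
\qquad \frac{\alpha}{2\pi} \approx 0.0011614,
\quad a_e^{\text{measured}} \approx 0.0011597
```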
QED has implications that are visible in the "normal" world, by
which I mean using ordinary equipment rather than things like
synchrotrons and particle accelerators, and that leaves atoms
intact. Basically all of chemistry depends on QED and not on
anything more exotic.
There are three fundamental forces other than the electromagnetic
force, namely, gravity, the weak force, and the strong force. The
strong force is what holds together the protons and neutrons in the
nucleus of an atom; it has to be stronger than the electromagnetic
force so that protons don't just fly away from each other. The weak
force is related to radioactive decay; it works only over very
short distances because the carrier particle of the weak force is
fairly massive (about 80 times the mass of a proton IIRC). For
comparison the carrier particle of the electromagnetic force is the
photon, which is massless; that means the electromagnetic force
operates over arbitrarily large distances (although of course with a
strength that diminishes as the distance gets larger).
The strong force (sometimes called the color force) is peculiar in
that the strong force actually *increases* with distance. That
happens because the carrier particle of the color force has a color
charge. For comparison photons are electrically neutral. It's
because of this property that we never see isolated quarks.
Basically, trying to pull two quarks apart takes so much energy that
new quarks come into existence out of nothing.
Quarks come in three
"colors" (having nothing to do with ordinary color), times three
families of quarks, times two quarks in each family. The carrier
particle of the strong force is called a gluon, and there are eight
different kinds of gluons. (It seems like there should be nine, to
allow each of the 3x3 possible combinations of colors, but there are
only eight.) The corresponding theory to QED for the strong force
is called QCD, for Quantum chromodynamics.
A joke that I like to tell: because the carrier particle for the
strong force can change a quark from one color to another, rather
than calling it a gluon it should have been called a crayon.
The field theories for electromagnetism, the strong force, and the
weak force have been unified in the sense that there is a
mathematically consistent framework that accommodates all three.
That unification is only mathematical, by which I mean that there
are no testable physical implications, only a kind of tautological consistency. We can see all three field theories through a common mathematical lens, but that doesn't say anything about how the three
theories interact physically.
The gravitational force is much weaker, by 42 orders of magnitude,
than the other three fundamental forces. The General Theory of
Relativity is not a quantized theory. There are ideas about how to
unify gravity and the other three fundamental forces, but none of
these "grand unified" theories have any hypotheses that we are able
to test experimentally. It's unclear how gravity fits in to the
overall picture.
The foregoing represents my best understanding of QED and the other fundamental forces of physics. I've done a fair amount of reading
on the subject but I wouldn't claim even to be a physicist, let
alone an expert.
On 9/21/24 4:45 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:
On 9/21/2024 6:54 AM, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
https://www.marvell.com/products/cxl.html
What about a weak coherency where a programmer has to use the
correct
membars to get the coherency required for their specific needs?
Along
the lines of UltraSPARC in RMO mode?
In my case, I suffered through enough of these to implement a
memory hierarchy free from the need of any MemBars yet provide
the performance of <mostly> relaxed memory order, except when
certain kinds of addresses are touched {MMI/O, configuration
space, ATOMIC accesses,...} In these cases, the core becomes
{sequentially consistent, or strongly ordered} depending on the
touched address.
If I understand correctly, atomic accesses (Enhanced
Synchronization Facility) effectively use a complete memory barrier;
software could effectively provide a memory barrier "instruction"
by performing an otherwise pointless atomic/ESF operation.
Are there no cases where an atomic operation is desired but
sequential consistency is not required?
Or is this a tradeoff of frequency/criticality and the expected overhead of the implicit
memory barrier? (Memory barriers may be similar to context
switches, not needing to be as expensive as they are in most implementations.)
As far as PCIe device to device data routing, this will all be
based on the chosen virtual channel. Same channel=in order,
different channel=who knows.
On Sun, 22 Sep 2024 19:37:02 +0000, Paul A. Clayton wrote:
On 9/21/24 4:45 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:
On 9/21/2024 6:54 AM, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
https://www.marvell.com/products/cxl.html
What about a weak coherency where a programmer has to use the
correct
membars to get the coherency required for their specific needs?
Along
the lines of UltraSPARC in RMO mode?
In my case, I suffered through enough of these to implement a
memory hierarchy free from the need of any MemBars yet provide
the performance of <mostly> relaxed memory order, except when
certain kinds of addresses are touched {MMI/O, configuration
space, ATOMIC accesses,...} In these cases, the core becomes
{sequentially consistent, or strongly ordered} depending on the
touched address.
If I understand correctly, atomic accesses (Enhanced
Synchronization Facility) effectively use a complete memory barrier;
software could effectively provide a memory barrier "instruction"
by performing an otherwise pointless atomic/ESF operation.
Are there no cases where an atomic operation is desired but
sequential consistency is not required?
Probably--but in the realm of ATOMICs it is FAR better to be
a bit slower than to ever allow the illusion of atomicity to
be lost. This criterion is significantly harder when doing
multi-location ATOMIC stuff than single location ATOMIC stuff.
Or is this a tradeoff of
frequency/criticality and the expected overhead of the implicit
memory barrier? (Memory barriers may be similar to context
switches, not needing to be as expensive as they are in most
implementations.)
The R in RISC stands for Reduced. An ISA devoid of MemBar is
more reduced than one with MemBars. Programmers are rarely
in a position to understand all the cases where MemBar are
needed or not needed {{excepting our own Chris M. Thomasson}}
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
On Sun, 22 Sep 2024 19:37:02 +0000, Paul A. Clayton wrote:
On 9/21/24 4:45 PM, MitchAlsup1 wrote:
On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:
On 9/21/2024 6:54 AM, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
https://www.marvell.com/products/cxl.html
What about a weak coherency where a programmer has to use the
correct
membars to get the coherency required for their specific needs?
Along
the lines of UltraSPARC in RMO mode?
In my case, I suffered through enough of these to implement a
memory hierarchy free from the need of any MemBars yet provide
the performance of <mostly> relaxed memory order, except when
certain kinds of addresses are touched {MMI/O, configuration
space, ATOMIC accesses,...} In these cases, the core becomes
{sequentially consistent, or strongly ordered} depending on the
touched address.
If I understand correctly, atomic accesses (Enhanced
Synchronization Facility) effectively use a complete memory barrier;
software could effectively provide a memory barrier "instruction"
by performing an otherwise pointless atomic/ESF operation.
Are there no cases where an atomic operation is desired but
sequential consistency is not required?
Probably--but in the realm of ATOMICs it is FAR better to be
a bit slower than to ever allow the illusion of atomicity to
be lost. This criterion is significantly harder when doing
multi-location ATOMIC stuff than single location ATOMIC stuff.
Or is this a tradeoff of
frequency/criticality and the expected overhead of the implicit
memory barrier? (Memory barriers may be similar to context
switches, not needing to be as expensive as they are in most
implementations.)
The R in RISC stands for Reduced. An ISA devoid of MemBar is
more reduced than one with MemBars. Programmers are rarely
in a position to understand all the cases where MemBar are
needed or not needed {{excepting our own Chris M. Thomasson}}
Not quite sure what we are talking about here but I won't
let that stop me from commenting. :)
As far as loads and stores go, if they are atomic then
a load will not see a value that was not from some store.
Regarding memory barriers, that depends on the hardware
memory model and the program logic assuming one knows
how to do concurrent algorithms.
Speaking of memory models, remember when x86 didn't have
a formal memory model? They didn't put one in until
after Itanium. Before that it was a sort of processor
consistency type 2, which was a real impedance mismatch
with what most concurrent software used as a memory model.
Joe Seigh
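The canonical probe for exactly that mismatch is the store-buffering litmus test: each thread stores to one flag and then loads the other. Under sequential consistency (what much concurrent software assumed) at least one thread must see a 1, but under x86's actual model the stores can sit in store buffers while both loads run, so both threads can read 0 unless a full barrier separates each store from the following load. A C11 sketch, with seq_cst supplying that barrier:

```c
#include <stdatomic.h>
#include <threads.h>

/* Store-buffering (SB) litmus test. With seq_cst the r1==0 && r2==0
 * outcome is forbidden. Demote the operations to memory_order_relaxed
 * and real x86 hardware can exhibit it, because each store lingers in
 * its core's store buffer while the other core's load runs. */
static atomic_int x, y;
static int r1, r2;

static int writer_x(void *arg)
{
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_seq_cst);
    r1 = atomic_load_explicit(&y, memory_order_seq_cst);
    return 0;
}

static int writer_y(void *arg)
{
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_seq_cst);
    r2 = atomic_load_explicit(&x, memory_order_seq_cst);
    return 0;
}

/* Run the race `iters` times; return how often both loads saw 0. */
int sb_both_zero(int iters)
{
    int seen = 0;
    for (int i = 0; i < iters; i++) {
        atomic_store(&x, 0);
        atomic_store(&y, 0);
        thrd_t a, b;
        thrd_create(&a, writer_x, NULL);
        thrd_create(&b, writer_y, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        seen += (r1 == 0 && r2 == 0);
    }
    return seen;
}
```

With both accesses demoted to relaxed, a few hundred iterations on a multicore x86 machine will usually show a nonzero count, which is the store buffer making itself visible.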
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
Speaking of memory models, remember when x86 didn't have
a formal memory model? They didn't put one in until
after Itanium. Before that it was a sort of processor
consistency type 2, which was a real impedance mismatch
with what most concurrent software used as a memory model.
When only 1 x86 would fit on a die, it really did not matter
much. I was at AMD when they were designing their memory
model.
Joe Seigh
Are you not amazed that everything physicists know about the universe
can be written in 13 equations.
MitchAlsup1 <mitchalsup@aol.com> schrieb:
Are you not amazed that everything physicists know about the
universe can be written in 13 equations.
Randall Munroe has some comment on that... https://xkcd.com/1867/
(Among others, he left out turbulence, where we have some
understanding, but do not yet understand the Navier-Stokes
equations - one of the Millennium Problems).
Einstein didn't like the Copenhagen interpretation of Quantum Mechanics.
He didn't question the validity of the equations.
On 23/09/2024 12:38, Thomas Koenig wrote:
MitchAlsup1 <mitchalsup@aol.com> schrieb:
Are you not amazed that everything physicists know about the universe
can be written in 13 equations.
Randall Munroe has some comment on that... https://xkcd.com/1867/
(Among others, he left out turbulence, where we have some
understanding, but do not yet understand the Navier-Stokes
equations - one of the Millennium Problems).
Are you suggesting that "Gifted" was not an accurate documentary?
On Sun, 22 Sep 2024 13:10:34 +0000, Tim Rentsch wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it is
not a "theory of everything", and does not explain everything. It
is, like Newtonian gravity and general relativity, a simplification
that gives an accurate model of reality within certain limitations,
and hopefully it will one day be superseded by a new theory that
models reality more accurately and over a wider range of
circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated understanding) is
that unlike the Schrodinger equation, approximate solutions for QED
equations can't be calculated numerically by means of Green's function.
Because of that QED is rarely used outside the field of high-energy
particles and such.
But then, I am almost 40 years out of date. Things could have changed.
Quantum electrodynamics, aka QED, is a quantum field theory for the
electromagnetic force. QED accounts for almost everything we can
directly see in the world, not counting gravity.
The original QED of Dirac, as expressed in the Dirac equation, has a
problem: according to that formulation, the self-energy of the
electron is infinite. To address this deficiency, for about 20
years physicists applied a convenient approximation, namely, they
treated the theoretically infinite quantity as zero. Surprisingly,
that approximation gave results that agreed with all the experiments
that were done up until about the mid 1940s.
In the late 1940s, Richard Feynman, Julian Schwinger, and Shinichiro
Tomonaga independently developed versions of QED that address the
infinite self-energy problem. (Tomonaga's work was done somewhat
earlier, but wasn't publicized until later because of the isolation
of Japan during World War II.) It wasn't at all obvious that the
QED of Feynman and the QED of Schwinger were equivalent. That they
were equivalent was established and publicized by Freeman Dyson
(while he was a graduate student, no less).
The problem of the seemingly infinite self-energy of the electron
was addressed by a technique known as renormalization. We could say
that renormalization is only an approximation: it is known to be
mathematically unsound, breaking down after a mere 400 or so decimal
places. Despite that, QED gives numerical results that are correct
up to the limits of our ability to measure. A computation done
using QED matched an experimental result to within the tolerance
of the measurement, which was 13 decimal places. An analogy given
by Feynman is that this is like measuring the distance from LA to
New York to an accuracy of the width of one human hair.
QED has implications that are visible in the "normal" world, by
which I mean using ordinary equipment rather than things like
synchrotrons and particle accelerators, and that leaves atoms
intact. Basically all of chemistry depends on QED and not on
anything more exotic.
There are three fundamental forces other than the electromagnetic
force, namely, gravity, the weak force, and the strong force. The
strong force is what holds together the protons and neutrons in the
nucleus of an atom; it has to be stronger than the electromagnetic
force so that protons don't just fly away from each other. The weak
force is related to radioactive decay; it works only over very
short distances because the carrier particle of the weak force is
fairly massive (about 80 times the mass of a proton IIRC). For
comparison the carrier particle of the electromagnetic force is the
photon, which is massless; that means the electromagnetic force
operates over arbitrarily large distances (although of course with a
strength that diminishes as the distance gets larger).
The strong force (sometimes called the color force) is peculiar in
that the strong force actually *increases* with distance. That
happens because the carrier particle of the color force has a color
charge. For comparison photons are electrically neutral. It's
because of this property that we never see isolated quarks.
Basically, trying to pull two quarks apart takes so much energy that
new quarks come into existence out of nothing.
It does not come out of nothing, it comes out of the energy being
applied to pull the 2 quarks apart. Once the energy gets that big,
it (the energy) condenses into a pair of quarks which then pair up
to prevent the quarks from being seen in isolation.
Quarks come in three
"colors" (having nothing to do with ordinary color), times three
families of quarks, times two quarks in each family. The carrier
particle of the strong force is called a gluon, and there are eight
different kinds of gluons. (It seems like there should be nine, to
allow each of the 3x3 possible combinations of colors, but there are
only eight.) The corresponding theory to QED for the strong force
is called QCD, for Quantum chromodynamics.
A joke that I like to tell is because the carrier particle for the
strong force can change a quark from one color to another, rather
than calling it a gluon it should have been called a crayon.
The field theories for electromagnetism, the strong force, and the
weak force have been unified in the sense that there is a
mathematically consistent framework that accommodates all three.
That unification is only mathematical, by which I mean that there
are no testable physical implications, only a kind of tautological
consistency. We can see all three field theories through a common
mathematical lens, but that doesn't say anything about how the three
theories interact physically.
The gravitational force is much weaker, by 42 orders of magnitude,
than the other three fundamental forces. The General Theory of
Relativity is not a quantized theory. There are ideas about how to
unify gravity and the other three fundamental forces, but none of
these "grand unified" theories have any hypotheses that we are able
to test experimentally. It's unclear how gravity fits in to the
overall picture.
Are you not amazed that everything physicists know about the
universe can be written in 13 equations.
MitchAlsup1 <mitchalsup@aol.com> schrieb:
Are you not amazed that everything physicists know about the universe
can be written in 13 equations.
Randall Munroe has some comment on that... https://xkcd.com/1867/
(Among others, he left out turbulence, where we have some
understanding, but do not yet understand the Navier-Stokes
equations - one of the Millennium Problems).
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have access
to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
As far as loads and stores go, if they are atomic then
a load will not see a value that was not from some store.
When you include stores from devices into memory, we agree.
A LD should return the last written value.
When you include device control registers; all bets are off.
Regarding memory barriers, that depends on the hardware
memory model and the program logic assuming one knows
how to do concurrent algorithms.
In particular, we are talking about a sequence of instructions
with the properties:: a) an external observer can see only
the previous or new values of a concurrent data structure
and none of the intermediate changes, and b) should the
event fail somewhere in the middle, no-one saw any of
the intermediate state, either.
The event is bigger than the memory reference instruction.
And finally, getting the MemBarIzation correct seems to
be beyond many (if not most) programmers, leading to
error-prone applications.
Michael S <already5chosen@yahoo.com> writes:
On Sun, 22 Sep 2024 12:58:36 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 22/09/2024 10:48, Michael S wrote:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it
is not a "theory of everything", and does not explain everything.
It is, like Newtonian gravity and general relativity, a
simplification that gives an accurate model of reality within
certain limitations, and hopefully it will one day be superseded
by a new theory that models reality more accurately and over a
wider range of circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated
understanding) is that unlike Schrodinger equation, approximate
solutions for QED equations can't be calculated numerically by
means of Green's function. Because of that QED is rarely used
outside of field of high-energy particles and such.
But then, I am almost 40 years out of date. Things could have
changed.
I don't claim to be an expert on this field in any way, and could
easily be muddled on the details.
I thought QED only covered special relativity, not general relativity
- i.e., it describes particles travelling near the speed of light,
but does not handle gravity or the curvature of space-time.
That sounds correct, at least for Dirac's form of QED. Maybe it was
amended later.
No one does this because the gravitational effects are way beyond
negligible. It would be like, when doing an experiment on a
sunny day, wanting to take into account the effects of a star ten
quadrillion light years away. To say the effects are down in the
noise is a vast understatement. (The distance of ten quadrillion
light years reflects the relative strength of gravity compared to
the electromagnetic force.)
But that was not my point.
My point was that QED is well known to be a better approximation of
reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
of it. Despite that, in practice the "worse" approximation is used far
more often.
I would say simpler approximation, and simpler approximations are
usually used when they suffice.  If for example we want to
calculate how much speed is needed to pass a moving car, we don't
need to take into account how distances change due to special
relativity. When we want to set a timer to cook something on the
stove, we don't worry about whether we are at sea level or up in
the mountains, even though we know that the difference in gravity
changes how fast the timer will run (and even can be measured).
Tim Rentsch wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 22 Sep 2024 12:58:36 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 22/09/2024 10:48, Michael S wrote:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it
is not a "theory of everything", and does not explain everything.
It is, like Newtonian gravity and general relativity, a
simplification that gives an accurate model of reality within
certain limitations, and hopefully it will one day be superseded
by a new theory that models reality more accurately and over a
wider range of circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated
understanding) is that unlike Schrodinger equation, approximate
solutions for QED equations can't be calculated numerically by
means of Green's function. Because of that QED is rarely used
outside of field of high-energy particles and such.
But then, I am almost 40 years out of date. Things could have
changed.
I don't claim to be an expert on this field in any way, and could
easily be muddled on the details.
I thought QED only covered special relativity, not general relativity
- i.e., it describes particles travelling near the speed of light,
but does not handle gravity or the curvature of space-time.
That sounds correct, at least for Dirac's form of QED. Maybe it was
amended later.
No one does this because the gravitational effects are way beyond
negligible. It would be like, when doing an experiment on a
sunny day, wanting to take into account the effects of a star ten
quadrillion light years away. To say the effects are down in the
noise is a vast understatement. (The distance of ten quadrillion
light years reflects the relative strength of gravity compared to
the electromagnetic force.)
But that was not my point.
My point was that QED is well known to be a better approximation of
reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
of it. Despite that, in practice the "worse" approximation is used far
more often.
I would say simpler approximation, and simpler approximations are
usually used when they suffice.  If for example we want to
calculate how much speed is needed to pass a moving car, we don't
need to take into account how distances change due to special
relativity. When we want to set a timer to cook something on the
stove, we don't worry about whether we are at sea level or up in
the mountains, even though we know that the difference in gravity
changes how fast the timer will run (and even can be measured).
No, no, no!
The change in pressure directly impacts the cooking temperature, and therefore also the time needed.
Terje Mathisen <terje.mathisen@tmsw.no> writes:
Tim Rentsch wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 22 Sep 2024 12:58:36 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 22/09/2024 10:48, Michael S wrote:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it
is not a "theory of everything", and does not explain everything.
It is, like Newtonian gravity and general relativity, a
simplification that gives an accurate model of reality within
certain limitations, and hopefully it will one day be superseded
by a new theory that models reality more accurately and over a
wider range of circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated
understanding) is that unlike Schrodinger equation, approximate
solutions for QED equations can't be calculated numerically by
means of Green's function. Because of that QED is rarely used
outside of field of high-energy particles and such.
But then, I am almost 40 years out of date. Things could have
changed.
I don't claim to be an expert on this field in any way, and could
easily be muddled on the details.
I thought QED only covered special relativity, not general relativity
- i.e., it describes particles travelling near the speed of light,
but does not handle gravity or the curvature of space-time.
That sounds correct, at least for Dirac's form of QED. Maybe it was
amended later.
No one does this because the gravitational effects are way beyond
negligible. It would be like, when doing an experiment on a
sunny day, wanting to take into account the effects of a star ten
quadrillion light years away. To say the effects are down in the
noise is a vast understatement. (The distance of ten quadrillion
light years reflects the relative strength of gravity compared to
the electromagnetic force.)
But that was not my point.
My point was that QED is well known to be a better approximation of
reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
of it. Despite that, in practice the "worse" approximation is used far
more often.
I would say simpler approximation, and simpler approximations are
usually used when they suffice.  If for example we want to
calculate how much speed is needed to pass a moving car, we don't
need to take into account how distances change due to special
relativity. When we want to set a timer to cook something on the
stove, we don't worry about whether we are at sea level or up in
the mountains, even though we know that the difference in gravity
changes how fast the timer will run (and even can be measured).
No, no, no!
The change in pressure directly impacts the cooking temperature, and
therefore also the time needed.
I concede your point. My point was only about how the change
in gravity affects the speed at which the timer runs.
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
Yes, but that’s a lot more expensive.
On Mon, 23 Sep 2024 01:34:55 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
Speaking of memory models, remember when x86 didn't have
a formal memory model. They didn't put one in until
after itanium. Before that it was a sort of processor
consistency type 2 which was a real impedance mismatch
with what most concurrent software used as a memory model.
When only 1 x86 would fit on a die, it really did not matter
much. I was at AMD when they were designing their memory
model.
Joe Seigh
Why is the # of CPU cores on a die of particular importance?
According to my understanding, what matters is # of CPU cores with
coherent access to the same memory+IO.
For x86, 4 cores (CPUs) were relatively common since 1996. There
existed a few odd 8-core systems too, back in the last century.
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have access
to the sheer quantity of RAM that is available to the CPU. And
motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity of
RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that supports
up to 4TB of DRAM with 16 high-end ARM64 V series cores.
On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU.
And motherboard-based CPU RAM is upgradeable, as well, whereas
addon cards tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity of
RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that supports
up to 4TB of DRAM with 16 high-end ARM64 V series cores.
At somewhere near 3× the latency to DRAM.
If the size works for your application--great !
If the latency does not work for you--less great.
On Mon, 23 Sep 2024 21:10:00 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU.
And motherboard-based CPU RAM is upgradeable, as well, whereas
addon cards tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity of
RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that supports
up to 4TB of DRAM with 16 high-end ARM64 V series cores.
At somewhere near 3× the latency to DRAM.
Where did you find this figure?
I have read both product brief and press release and didn't see any
latency numbers mentioned, not even an order of magnitude.
I suppose, in order to get a real datasheet one would have to sign an NDA.
Somehow I don't see how anything running over PCIe-like link can be
as fast as you suggest.
On 9/23/2024 1:59 PM, MitchAlsup1 wrote:
On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:
On Mon, 23 Sep 2024 01:34:55 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
Speaking of memory models, remember when x86 didn't have
a formal memory model. They didn't put one in until
after itanium. Before that it was a sort of processor
consistency type 2 which was a real impedance mismatch
with what most concurrent software used as a memory model.
When only 1 x86 would fit on a die, it really did not matter
much. I was at AMD when they were designing their memory
model.
Joe Seigh
Why is the # of CPU cores on a die of particular importance?
Prior to multi-CPUs on a die; 99% of all x86 systems were
mono-CPU systems, and the necessity of having a well known
memory model was more vague. Although there were servers
with multiple CPUs in them they represented "an afternoon
in the FAB" compared to the PC oriented x86s.
That is "we did not see the problem until it hit us in
the face." Once it did, we understood what we had to do:
presto memory model.
Also note: this was just after the execution pipeline went
Great Big Out of Order, and thus made the lack of order
problems much more visible to applications. {Pentium Pro}
Iirc, been a while, I think there was a problem on one of the Pentiums,
might be the pro, where it had an issue with releasing a spinlock with a
normal store. I am most likely misremembering, but it is sparking some
strange memories. Way back on c.p.t, Alex Terekhov (hope I did not
butcher the spelling of his name), anyway, wrote about it, I think...
Way back, early 2000s I think.
According to my understanding, what matters is # of CPU cores with
coherent access to the same memory+IO.
For x86, 4 cores (CPUs) were relatively common since 1996. There
existed few odd 8-core systems too, still back in the last century.
On Mon, 23 Sep 2024 21:10:00 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never have
access to the sheer quantity of RAM that is available to the CPU.
And motherboard-based CPU RAM is upgradeable, as well, whereas
addon cards tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity of
RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that supports
up to 4TB of DRAM with 16 high-end ARM64 V series cores.
At somewhere near 3× the latency to DRAM.
Where did you find this figure?
I have read both product brief and press release and didn't see any
latency numbers mentioned, not even an order of magnitude.
I suppose, in order to get a real datasheet one would have to sign an NDA.
Somehow I don't see how anything running over PCIe-like link can be
as fast as you suggest.
If the size works for your application--great !
If the latency does not work for you--less great.
On 9/23/2024 2:58 PM, MitchAlsup1 wrote:
On Mon, 23 Sep 2024 21:35:53 +0000, Chris M. Thomasson wrote:
Iirc, been a while, I think there was a problem on one of the Pentiums,
might be the pro, where it had an issue with releasing a spinlock with a
normal store. I am most likely misremembering, but it is sparking some
strange memories. Way back on c.p.t, Alex Terekhov (hope I did not
butcher the spelling of his name), anyway, wrote about it, I think...
Way back. early 2000's I think.
Many ATOMIC sequences start or end without any note on the memory
reference that it bounds an ATOMIC event. CAS has this problem
on the value to ultimately be compared (the start), T&S has this
problem on ST that unlocks the lock (the end). It is like using
indentation as the only means of signaling block structure in
your language of choice.
_Strong_ CAS in C++ terms, a la cmpxchg, will only fail if the comparands
are different.
This can be implemented with LL/SC for sure.
Scott
mentioned something about a bus lock after a certain amount of
failures... (side note) Weak CAS can fail even if the comparands are
identical to each other, a la LL/SC. This reminds me of LL/SC: the ABA
problem can be worked around and/or eliminated without using LL/SC. I
remember reading papers about LL/SC getting around ABA, but then read
about how they can have their own can of worms. Pessimistic vs
optimistic sync... Wait/ Lock / Obstruction free things... ;^)
Fwiw, getting rid of the StoreLoad membar in algorithms like SMR is
great. There is a way to do this in existing systems. So, no hardware
changes required, and makes the system run fast.
Think of allowing a rogue thread to pound a CAS with random data wrt the
comparand, trying to get it to fail... Of course this can be modifying a
reservation granule wrt the LL/SC side of things, right? Pessimistic (CAS)
vs Optimistic (LL/SC)?
Both are bad practice in making HW that can perform these things
efficiently. But notice that LL-SC does not have this problem.
Neither does ESM.
On 2024-09-23 20:43, Tim Rentsch wrote:
Terje Mathisen <terje.mathisen@tmsw.no> writes:
Tim Rentsch wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sun, 22 Sep 2024 12:58:36 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 22/09/2024 10:48, Michael S wrote:
On Sat, 21 Sep 2024 20:30:40 +0200
David Brown <david.brown@hesbynett.no> wrote:
Actual physicists know that quantum mechanics is not complete - it
is not a "theory of everything", and does not explain everything.
It is, like Newtonian gravity and general relativity, a
simplification that gives an accurate model of reality within
certain limitations, and hopefully it will one day be superseded
by a new theory that models reality more accurately and over a
wider range of circumstances. That is how science works.
As things stand today, no such better theory has been developed.
Actually, such a theory (QED) was proposed by Paul Dirac back in the
1920s and further developed by many other bright minds.
The trouble with it (according to my not too educated
understanding) is that unlike Schrodinger equation, approximate
solutions for QED equations can't be calculated numerically by
means of Green's function. Because of that QED is rarely used
outside of field of high-energy particles and such.
But then, I am almost 40 years out of date. Things could have
changed.
I don't claim to be an expert on this field in any way, and could
easily be muddled on the details.
I thought QED only covered special relativity, not general relativity
- i.e., it describes particles travelling near the speed of light,
but does not handle gravity or the curvature of space-time.
That sounds correct, at least for Dirac's form of QED. Maybe it was
amended later.
No one does this because the gravitational effects are way beyond
negligible. It would be like, when doing an experiment on a
sunny day, wanting to take into account the effects of a star ten
quadrillion light years away. To say the effects are down in the
noise is a vast understatement. (The distance of ten quadrillion
light years reflects the relative strength of gravity compared to
the electromagnetic force.)
But that was not my point.
My point was that QED is well known to be a better approximation of
reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
of it. Despite that, in practice the "worse" approximation is used far
more often.
I would say simpler approximation, and simpler approximations are
usually used when they suffice.  If for example we want to
calculate how much speed is needed to pass a moving car, we don't
need to take into account how distances change due to special
relativity. When we want to set a timer to cook something on the
stove, we don't worry about whether we are at sea level or up in
the mountains, even though we know that the difference in gravity
changes how fast the timer will run (and even can be measured).
No, no, no!
The change in pressure directly impacts the cooking temperature, and
therefore also the time needed.
I concede your point. My point was only about how the change
in gravity affects the speed at which the timer runs.
If the timer and the stove are at the same altitude, as seems natural,
you never have to consider gravity in timing the cooking - any gravity
effect on the timer rate is exactly the same as the effect on the
heating rate of the water in the pot and the cooking rate of its
contents. If it takes 10 minutes by the timer at sea level, it will
take 10 minutes by the timer in any other gravity, all other things
(such as the air pressure) being the same.
However, if you compare two timers (or stoves) at different altitudes,
that is where you can see the effect of gravity on time -- and it is
of course negligible for practical cookery on Earth.
On 9/23/2024 3:32 PM, MitchAlsup1 wrote:
I got rid of all MemBars and still have a fairly relaxed memory model.
That is interesting to me! It's sort-of "out of the box" so to speak?
How can a programmer take advantage of the relaxed aspect of your model?
Think of allowing a rogue thread to pound a CAS with random data wrt the
comparand, trying to get it to fail... Of course this can be modifying a
reservation granule wrt the LL/SC side of things, right? Pessimistic (CAS)
vs Optimistic (LL/SC)?
Or methodological (ESM).
Still, how does livelock get avoided in your system? Think along the
lines of a "rogue" thread causing havoc? Banging on cache lines etc...
;^o
GPUs have lower clock speeds because that way they can operate at lower
voltage and do more work per joule.
High-end GPUs are power-bound beasts.
Chris M. Thomasson [2024-09-20 14:54:36] wrote:
I had this crazy idea of putting cpus right on the ram. So, if you add
more memory to your system you automatically get more cpu's... Think
NUMA for a moment... ;^)
To which Lawrence D'Oliveiro preemptively replied:
Yes, but that’s a lot more expensive.
On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:
Bell’s Theorem offered a way to test for those, and the tests
(there have been several of them so far, done in several different
ways) show that such “hidden variables” cannot exist.
Do not exist, there remains no evidence that they cannot exist.
The difference between MPP and cluster is not well-defined.
The difference between ccNUMA and MPP-or-cluster is crystal clear.
Laplace's demon and the whole Reductionist approach to natural science
sounds decent (although unproven) as a philosophy/program, but is very
rarely sufficient for solving complicated problems of chemistry, biology,
engineering, or even many branches of physics.
On Sun, 22 Sep 2024 18:45:54 +0000, MitchAlsup1 wrote:
On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:
Bell’s Theorem offered a way to test for those, and the tests
(there have been several of them so far, done in several different
ways) show that such “hidden variables” cannot exist.
Do not exist, there remains no evidence that they cannot exist.
The large collection of tests of Bell’s theorem is that evidence.
(Among others, he left out turbulence, where we have some understanding,
but do not yet understand the Navier-Stokes equations - one of the
Millennium Problems).
I thought QED only covered special relativity, not general relativity -
i.e., it describes particles travelling near the speed of light, but
does not handle gravity or the curvature of space-time.
That sounds correct, at least for Dirac's form of QED. Maybe it was
amended later.
Electric Universe
Einstein didn't like Copenhagen interpretation of Quantum Mechanics.
On 9/23/2024 5:26 PM, MitchAlsup1 wrote:
On Mon, 23 Sep 2024 22:46:47 +0000, Chris M. Thomasson wrote:
[...]
On 9/23/2024 3:32 PM, MitchAlsup1 wrote:
I got rid of all MemBars and still have a fairly relaxed memory model.
That is interesting to me! It's sort-of "out of the box" so to speak?
How can a programmer take advantage of the relaxed aspect of your model?
Touch a DRAM location and one gets causal order.
Touch a MM I/O location and one gets sequential consistency.
Touch a config space location and one gets strong ordering.
Touch ROM and one gets unordered access.
You see, the memory <ordering> model is not tied to a CPU state, but
to what LD and ST instructions touch.
What is the granularity of the "touch"? An L2 cache line?
On Sun, 22 Sep 2024 18:45:09 -0000 (UTC), Brett wrote:
Electric Universe
Electricity without magnetism? Not even taking into account Maxwell's
unification of the electric and magnetic fields?
On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:
On Mon, 23 Sep 2024 01:34:55 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
Speaking of memory models, remember when x86 didn't have
a formal memory model. They didn't put one in until
after itanium. Before that it was a sort of processor
consistency type 2 which was a real impedance mismatch
with what most concurrent software used as a memory model.
When only 1 x86 would fit on a die, it really did not matter
much. I was at AMD when they were designing their memory
model.
Joe Seigh
Why is the # of CPU cores on a die of particular importance?
Prior to multi-CPUs on a die; 99% of all x86 systems were
mono-CPU systems, and the necessity of having a well known
memory model was more vague. Although there were servers
with multiple CPUs in them they represented "an afternoon
in the FAB" compared to the PC oriented x86s.
On Sun, 22 Sep 2024 11:56:33 +0300, Michael S wrote:
Einstein didn't like Copenhagen interpretation of Quantum Mechanics.
He didn’t like quantum mechanics full stop. “God does not play dice”, he
famously said. And kept trying to come up with an alternative, though he never succeeded. And remember, his 1905 paper on the photoelectric effect (for which he won the Nobel Prize) was one of the foundation stones of
this horrible new theory.
“Interpretations” of quantum mechanics had nothing to do with this: the probabilistic behaviour that Einstein objected to is inherent in the equations themselves: wave function in → probability out.
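The "wave function in → probability out" point is just the Born rule; as a one-line sketch of what the equations actually deliver:

```latex
% Born rule: the probability density of finding the particle at x is
% the squared modulus of the wave function -- deterministic evolution
% of \psi in, probabilities out, regardless of interpretation.
P(x)\,dx = |\psi(x)|^2\,dx, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2\,dx = 1
```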
On Mon, 23 Sep 2024 10:38:40 -0000 (UTC), Thomas Koenig wrote:
(Among others, he left out turbulence, where we have some understanding,
but do not yet understand the Navier-Stokes equations - one of the
Millennium Problems).
I thought the problem with Navier-Stokes is that it assumes infinitesimally-small particles of fluid, whereas we know that real fluids are made up of atoms and molecules.
Remember how Max Planck solved the black-body problem? He knew all about
the previous approach of assuming that matter was made up of little oscillators, and then trying to work out the limiting behaviour as the
size of those oscillators approached zero -- that didn’t work. So his breakthrough was in assuming that the oscillators did *not* approach zero
in size, but had some minimum nonzero size. Et voilà ... he got a curve
that actually matched the known behaviour of radiating bodies. And laid
one of the foundation stones of quantum theory in the process.
Seems a similar thing could be done with Navier-Stokes ... ?
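For reference, the step described above shows up directly in the formulas: the classical (oscillator-energy-to-continuum) limit gives the Rayleigh-Jeans law, which diverges at high frequency, while Planck's quantization E = nhν gives a curve that is finite everywhere:

```latex
% Rayleigh-Jeans (classical limit): diverges as \nu \to \infty
% (the "ultraviolet catastrophe")
B_\nu^{\mathrm{RJ}}(T) = \frac{2\nu^2 k_B T}{c^2}
% Planck (energy exchanged only in quanta E = h\nu): finite everywhere,
% and reduces to Rayleigh-Jeans in the limit h\nu \ll k_B T
B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1}
```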
Michael S <already5chosen@yahoo.com> writes:
On Mon, 23 Sep 2024 21:10:00 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that
way.
Why? It comes down to RAM. Those addon processors will never
have access to the sheer quantity of RAM that is available to
the CPU. And motherboard-based CPU RAM is upgradeable, as
well, whereas addon cards tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity of
RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that supports
up to 4TB of DRAM with 16 high-end ARM64 V series cores.
At somewhere near 3× the latency to DRAM.
Where did you find this figure?
I have read both product brief and press release and didn't see any
latency numbers mentioned, not even an order of magnitude.
I suppose, in order to get real datasheet one would have to sign NDA.
Somehow I don't see how anything running over PCIe-like link can be
as fast as you suggest.
The round-trip latency in PCIe 6 can be circa 2ns. Add DRAM access
time to that and it's competitive with local memory.
On Mon, 23 Sep 2024 21:34:03 +0000, Michael S wrote:
On Mon, 23 Sep 2024 21:10:00 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines and
GPUs."
That particular Wheel of Reincarnation will never turn that way.
Why? It comes down to RAM. Those addon processors will never
have access to the sheer quantity of RAM that is available to
the CPU. And motherboard-based CPU RAM is upgradeable, as well,
whereas addon cards tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity of
RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that supports
up to 4TB of DRAM with 16 high-end ARM64 V series cores.
At somewhere near 3× the latency to DRAM.
Where did you find this figure?
I calculated it based on how messages get passed up and down PCIe
linkages and that that plug in memory has to be enough wire distance
to need a PCIe switch between CPU die and Plug. Then add on typical
memory controller and DRAM controller, and that is what you have.
I have read both product brief and press release and didn't see any
latency numbers mentioned, not even an order of magnitude.
I suppose, in order to get real datasheet one would have to sign
NDA.
Somehow I don't see how anything running over PCIe-like link can be
as fast as you suggest.
I was not suggesting it is fast, I was suggesting it is slow.
If the size works for your application--great !
If the latency does not work for you--less great.
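Mitch's estimate above can be sketched as back-of-envelope arithmetic. All component latencies below are illustrative assumptions on my part (not vendor numbers); the point is how the extra hops stack up, not the specific values:

```python
# Hypothetical latency components, in nanoseconds. None of these
# figures come from a datasheet; they are placeholders to illustrate
# how a CXL/PCIe-attached DRAM access accumulates delay.
dram_access = 50          # row activate + CAS, same on either side
local_controller = 30     # on-die memory controller path

pcie_link = 25            # serdes + flight time, per link traversal
pcie_switch = 15          # switch forwarding, per direction
remote_controller = 30    # memory controller on the add-on card

local_ns = local_controller + dram_access

# Request and response each cross: CPU->switch link, the switch
# itself, and the switch->card link.
remote_ns = 2 * (pcie_link + pcie_switch + pcie_link) \
            + remote_controller + dram_access

ratio = remote_ns / local_ns
print(local_ns, remote_ns, round(ratio, 2))
```

Under these assumed numbers the remote access lands in the roughly 2.5-3x neighborhood Mitch estimated; change the placeholders and the ratio moves, but the structure of the sum is the argument.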
MitchAlsup1 wrote:
On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:
On Mon, 23 Sep 2024 01:34:55 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
Speaking of memory models, remember when x86 didn't have
a formal memory model. They didn't put one in until
after itanium. Before that it was a sort of processor
consistency type 2 which was a real impedance mismatch
with what most concurrent software used as a memory model.
When only 1 x86 would fit on a die, it really did not matter
much. I was at AMD when they were designing their memory
model.
Joe Seigh
Why is the # of CPU cores on a die of particular importance?
Prior to multi-CPUs on a die; 99% of all x86 systems were
mono-CPU systems, and the necessity of having a well known
memory model was more vague. Although there were servers
with multiple CPUs in them they represented "an afternoon
in the FAB" compared to the PC oriented x86s.
When I started writing my first multi-threaded programs, I insisted
on getting a workstation with at least two sockets/cpus:
Somebody wiser than me had written something like "You cannot write/test/debug multithreaded programs without the ability for
multiple threads to actually run at the same time."
Pretty obvious really, but the quote was sufficient to get my boss to
sign off on a much more expensive PC model. :-)
Terje
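Terje's quote can be illustrated with the classic lost-update race: unsynchronized read-modify-write increments from two genuinely concurrent threads can interleave and drop counts, which a machine that never actually runs two threads at once may never show you. A minimal sketch (the lock is what makes the result deterministic):

```python
import threading

ITERS = 100_000
counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(ITERS):
        with lock:          # without this lock, the read-modify-write
            counter += 1    # of 'counter' can interleave and lose updates

threads = [threading.Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; often less without it
```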
On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:
On Mon, 23 Sep 2024 01:34:55 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:
On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
Speaking of memory models, remember when x86 didn't have
a formal memory model. They didn't put one in until
after itanium. Before that it was a sort of processor
consistency type 2 which was a real impedance mismatch
with what most concurrent software used as a memory model.
When only 1 x86 would fit on a die, it really did not matter
much. I was at AMD when they were designing their memory
model.
Joe Seigh
Why is the # of CPU cores on a die of particular importance?
Prior to multi-CPUs on a die; 99% of all x86 systems were
mono-CPU systems, and the necessity of having a well known
memory model was more vague.
Although there were servers
with multiple CPUs in them they represented "an afternoon
in the FAB" compared to the PC oriented x86s.
That is "we did not see the problem until it hit us in
the face." Once it did, we understood what we had to do:
presto memory model.
Also note: this was just after the execution pipeline went
Great Big Out of Order, and thus made the lack of order
problems much more visible to applications. {Pentium Pro}
According to my understanding, what matters is # of CPU cores with
coherent access to the same memory+IO.
For x86, 4 cores (CPUs) were relatively common since 1996. There
existed a few odd 8-core systems too, still back in the last century.
On Sun, 22 Sep 2024 10:34:16 +0300, Michael S wrote:
The difference between MPP and cluster is not well-defined.
The difference between ccNUMA and MPP-or-cluster is crystal clear.
If memory on other nodes were made directly addressable via hardware
that implemented something like a message-passing bus, suddenly the difference is not so clear.
Michael S <already5chosen@yahoo.com> writes:
On Mon, 23 Sep 2024 21:51:31 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Michael S <already5chosen@yahoo.com> writes:
On Mon, 23 Sep 2024 21:10:00 +0000
mitchalsup@aol.com (MitchAlsup1) wrote:
On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
"the CPUs are simply I/O managers to the Inference Engines
and GPUs."
That particular Wheel of Reincarnation will never turn that
way.
Why? It comes down to RAM. Those addon processors will never
have access to the sheer quantity of RAM that is available
to the CPU. And motherboard-based CPU RAM is upgradeable, as
well, whereas addon cards tend not to offer this option.
My guess would be that CPU RAM will decrease in upgradability.
LDO's statement "will never have access to the sheer quantity
of RAM that is available to the CPU" is flat out wrong.
Marvell already offers a CXL add-on processor card that
supports up to 4TB of DRAM with 16 high-end ARM64 V series
cores.
At somewhere near 3× the latency to DRAM.
Where did you find this figure?
I have read both product brief and press release and didn't see
any latency numbers mentioned, not even an order of magnitude.
I suppose, in order to get real datasheet one would have to sign
NDA.
Somehow I don't see how anything running over PCIe-like link can
be as fast as you suggest.
The round-trip latency in PCIe 6 can be circa 2ns. Add DRAM
access time to that and it's competitive with local memory.
Either you don't know what you are talking about or you have a very
special and practically useless definition of round-trip latency.
RC transmitter -> EP receiver/transmitter -> RC receiver at the
MAC level. Add in any logic delays to get to the memory controller,
and as noted, dram access time, will add to that. Making the
round trip delay comparable to a modern multi-socket numa
system.
Even if 99% is correct, there were still 6-7 figures worth of
dual-processor x86 systems sold each year and starting from 1997 at
least tens of thousands of quads.
Absence of ordering definitions should have been a problem for a lot of people. But somehow, it was not.
On 24/09/2024 03:00, Lawrence D'Oliveiro wrote:
On Mon, 23 Sep 2024 10:38:40 -0000 (UTC), Thomas Koenig wrote:
(Among others, he left out turbulence, where we have some understanding,
but do not yet understand the Navier-Stokes equations - one of the
Millennium Problems).
I thought the problem with Navier-Stokes is that it assumes
infinitesimally-small particles of fluid, whereas we know that real
fluids are made up of atoms and molecules.
Remember how Max Planck solved the black-body problem? He knew all about
the previous approach of assuming that matter was made up of little
oscillators, and then trying to work out the limiting behaviour as the
size of those oscillators approached zero -- that didn’t work. So his
breakthrough was in assuming that the oscillators did *not* approach zero
in size, but had some minimum nonzero size. Et voilà ... he got a curve
that actually matched the known behaviour of radiating bodies. And laid
one of the foundation stones of quantum theory in the process.
Seems a similar thing could be done with Navier-Stokes ... ?
Without knowing the history of work on Navier-Stokes, I am /reasonably/ confident that mathematicians have thought about this and tried it.
Quite a few decades ago, when I started my PhD, the group met
at a pub. Also present was one former PhD student, who had his
doctorate but, at the time, no job.
When asked what he was doing, he said he currently was a private
scholar. A colleague asked for details, and he said that he
was working on the general solution of the Navier-Stokes equation,
and that he had tried separation of variables, but it didn't work.
We took this as "shut up, I don't want to hear any more questions".
Some time later, I tried to explain that to a medical doctor.
I told her that it was like claiming he was searching for the cure for cancer, and that he had tried a saline solution, but it didn't work.
On 9/23/2024 8:17 PM, Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 18:45:09 -0000 (UTC), Brett wrote:
Electric Universe
Electricity without magnetism? Not even taking account of Maxwell’s
unification of the electric and magnetic fields?
The standard theory is that there is no electricity in the universe due to
the emptiness gaps. But light creates charge, and charge attraction, and
discharge creates magnetic fields, and all of this far better explains
galactic strands than mere gravity.
Go to the youtube Electric Universe and watch the most popular videos and
select play lists that interest you.
Most of this has been known since the 1920’s, but there be dragons down
this path so down the pointless gravity black hole science has gone.
Shit man... Are we in a black hole that resided in our parent universe? Humm... Always wondered about that type of shit.
I am on the black holes don’t exist list, at smaller than at the center
of a galaxy.
You hear physicists talk of microscopic black holes, but the force that
keeps atoms apart is so much more powerful than gravity that such talk
is just fools playing with math they don’t understand.
On 9/23/2024 5:45 PM, Lawrence D'Oliveiro wrote:
On Sun, 22 Sep 2024 10:34:16 +0300, Michael S wrote:
The difference between MPP and cluster is not well-defined.
The difference between ccNUMA and MPP-or-cluster is crystal clear.
If memory on other nodes were made directly addressable via hardware
that implemented something like a message-passing bus, suddenly the
difference is not so clear.
I guess the idea is to try to design things that try to minimize that
down to a bare minimum, if at all... Keep things as local as possible?
On 9/20/2024 4:33 PM, Lawrence D'Oliveiro wrote:
On Fri, 20 Sep 2024 09:55:53 +0200, Terje Mathisen wrote:
Lawrence D'Oliveiro wrote:
If you cannot swap the buffers with pointer updates ...
The way I understood to do flicker-free drawing was with just two
buffers -- “double buffering”. And rather than swap the buffer
contents, you just swapped the pointers to them.
Surely all the good hardware is/was designed that way, with special
registers pointing to “current buffer” and “back buffer”, with the
display coming from “current buffer” while writes typically go to “back
buffer”. Why would you do it otherwise?
VRAM isn't free, and the older graphics hardware (before the era of 3D acceleration and the like) tended to only have a single framebuffer
(except, ironically, for text modes).
On Tue, 24 Sep 2024 0:50:15 +0000, Lawrence D'Oliveiro wrote:
On Sun, 22 Sep 2024 18:45:54 +0000, MitchAlsup1 wrote:
On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:
Bell’s Theorem offered a way to test for those, and the tests (there
have been several of them so far, done in several different ways)
show that such “hidden variables” cannot exist.
Do not exist, there remains no evidence that they cannot exist.
The large collection of tests of Bell’s theorem is that evidence.
There is still that ~1:peta chance of some phenomena we have not yet
measured to upend the inequality.
I told her that it was like claiming he was searching for the cure for cancer, and that he had tried a saline solution, but it didn't work.
But light creates charge, and charge attraction,
and discharge creates magnetic fields ...
You hear physicists talk of microscopic black holes, but the force that
keeps atoms apart is so much more powerful than gravity that such talk
is just fools playing with math they don’t understand.
Neutron stars are collapsed forms of matter where gravity is
stronger than the electro-magnetic fields holding the electrons away
from each other and the protons.
You hear physicists talk of microscopic black holes,
What about the naked ones? ;^)
Or if quantum computing can give answers "better" than classical
computers using non-brute-force algorithms.
On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:
I am on the black holes don’t exist list, at smaller than at the center
of a galaxy.
You hear physicists talk of microscopic black holes, but the force that
keeps atoms apart is so much more powerful than gravity that such talk
is just fools playing with math they don’t understand.
Neutron stars are collapsed forms of matter where gravity is
stronger
than the electro-magnetic fields holding the electrons away from each
other and the protons.
It is possible that there is some kind of (as yet non-understood) force
that prevents a black hole's complete collapse into a point--erasing all visible aspects other than mass, charge, and spin.
It is just that our understanding of physics does not include such a
force.
Finally note: An electron can be modeled in QCD as if it were a black
hole with the mass, charge, and spin of an electron. ...
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:
I am on the black holes don't exist list, at smaller than at the
center of a galaxy.
You hear physicists talk of microscopic black holes, but the
force that keeps atoms apart is so much more powerful than
gravity that such talk is just fools playing with math they don't
understand.
Neutron stars are collapsed forms of matter where gravity is
stronger than the electro-magnetic fields holding the electrons
away from each other and the protons.
It is possible that there is some kind of (as yet non-understood)
force that prevents a black hole's complete collapse into a
point--erasing all visible aspects other than mass, charge, and
spin.
It is just that our understanding of physics does not include such
a force.
Finally note: An electron can be modeled in QCD as if it were a
black hole with the mass, charge, and spin of an electron. ...
mitchalsup@aol.com (MitchAlsup1) writes:
The "size" of a black hole might be identified as the radius of
the event horizon, since there is no way of looking inside the
event horizon. The radius of a black hole's event horizon is an
increasing function of the mass of the black hole. (My memory
tells me that the radius is a linear function of the mass, but
that should not be taken as reliable.)
Either the Earth or the
Sun would have (if it were a black hole) an event horizon radius
of more than one millimeter, but that statement too is a product
of my not-always-reliable memory.
Finally note: An electron can be modeled in QCD as if it were a
black hole with the mass, charge, and spin of an electron. ...
Electrons are color neutral. As far as QCD is concerned (since
QCD is only about the strong force, i.e. the color field),
electrons are invisible. (Disclaimer: to the best of my
understanding; I am not a physicist.)
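Both of those memories check out: the Schwarzschild radius is indeed linear in mass, r_s = 2GM/c², and plugging in standard constants (a quick sanity check, not from the thread) gives roughly 9 mm for the Earth and 3 km for the Sun:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    # r_s = 2GM/c^2 -- a linear function of the mass, as remembered above
    return 2 * G * mass_kg / c**2

r_earth = schwarzschild_radius(5.972e24)   # Earth's mass in kg
r_sun = schwarzschild_radius(1.989e30)     # Sun's mass in kg

print(f"Earth: {r_earth*1000:.1f} mm, Sun: {r_sun/1000:.2f} km")
```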
On 27/09/2024 20:43, Brett wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly
apart,
so we know the theory is wrong.
They are not flying apart - so we know /you/ are wrong.
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart,
so we know the theory is wrong.
“A pulsar (from pulsating radio source)[1][2] is a highly magnetized rotating neutron star that emits beams of electromagnetic radiation out of its magnetic poles.[3] This radiation can be observed only when a beam of emission is pointing toward Earth (similar to the way a lighthouse can be seen only when the light is pointed in the direction of an observer), and
is responsible for the pulsed appearance of emission. “
This sounds like an electric motor,
and if you think a galactic
civilization would not turn such into a gas station, I have news for you.
You can take advantage of the huge gravity to feed it oil barrel
projectiles full of liquid hydrogen to feed off of the explosions for more power generation and keep the generator alive. The resulting spectrum would be artificial, but we lack the theory to understand that.
A Dyson sphere compared to a pulsar looks like a comparison of a desk fan
to a modern wind mill.
mitchalsup@aol.com (MitchAlsup1) writes:
On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:
I am on the black holes don't exist list, at smaller than at the
center of a galaxy.
You hear physicists talk of microscopic black holes, but the
force that keeps atoms apart is so much more powerful than
gravity that such talk is just fools playing with math they don't
understand.
Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
George Volkoff, Subrahmanyan Chandresekhar, Richard Feynman,
Stephen Hawking (and many others whose names I don't know)?
Neutron stars are collapsed forms of matter where gravity is
stronger than the electro-magnetic fields holding the electrons
away from each other and the protons.
Not exactly. Electrons and protons attract each other. The gravity
is strong enough to get an electron and a proton close enough to each
other so they can combine and form a neutron. My model for this recombination is as follows: a proton is two up quarks and a down
quark; take an up quark (charge +2/3) and an electron (charge -1),
and maybe a neutrino, turn them all into energy and then turn the
energy back into a down quark (charge -1/3); so we have taken two up
quarks and a down quark (a proton) and an electron, and gotten out two
down quarks and an up quark (a neutron). Keep doing that until all
the electrons and protons are used up. Result: a neutron star,
consisting almost entirely of neutrons, and almost no protons or
electrons.
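The charge bookkeeping in that recombination can be checked with exact fractions: udd sums to 0, uud to +1, and electron capture (p + e⁻ → n + ν) conserves charge, since the neutrino is neutral:

```python
from fractions import Fraction

up = Fraction(2, 3)      # up quark charge
down = Fraction(-1, 3)   # down quark charge
electron = Fraction(-1)

proton = up + up + down      # uud
neutron = up + down + down   # udd

print(proton, neutron)                 # 1 0
# Electron capture: proton + electron -> neutron (+ neutrino, charge 0)
print(proton + electron == neutron)    # True
```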
On Tue, 24 Sep 2024 20:33:58 +0000, MitchAlsup1 wrote:
Neutron stars are collapsed forms of matter where gravity is
stronger than the electro-magnetic fields holding the electrons away
from each other and the protons.
Gravity here is stronger even than the Pauli exclusion principle, which
says that two matter particles (e.g. electrons, protons, neutrons) cannot occupy the same space at the same time.
On 2024-09-28 5:47, Brett wrote:
Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
On 2024-09-27 21:43, Brett wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart,
so we know the theory is wrong.
Which pulsars are spinning too fast? Reference please!
https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.
Spinning at 42,960 revolutions per minute.
The article says it is "the fastest-spinning pulsar known", but does not
say that it is spinning faster than neutron-star theories allow, so it
does not support your claim.
Took seconds for google to answer.
It is the wrong answer, at least for your claim.
Tim Rentsch wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:
I am on the black holes don't exist list, at smaller than at the
center of a galaxy.
You hear physicists talk of microscopic black holes, but the
force that keeps atoms apart is so much more powerful than
gravity that such talk is just fools playing with math they don't
understand.
Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
George Volkoff, Subrahmanyan Chandresekhar, Richard Feynman,
Stephen Hawking (and many others whose names I don't know)?
Neutron stars are collapsed forms of matter where gravity is
stronger than the electro-magnetic fields holding the electrons
away from each other and the protons.
Not exactly. Electrons and protons attract each other. The gravity
is strong enough to get an electron and a proton close enough to each
other so they can combine and form a neutron. My model for this
recombination is as follows: a proton is two up quarks and a down
quark; take an up quark (charge +2/3) and an electron (charge -1),
and maybe a neutrino, turn them all into energy and then turn the
energy back into a down quark (charge -1/3); so we have taken two up
quarks and a down quark (a proton) and an electron, and gotten out two
down quarks and an up quark (a neutron). Keep doing that until all
the electrons and protons are used up. Result: a neutron star,
consisting almost entirely of neutrons, and almost no protons or
electrons.
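The charge bookkeeping in that description is easy to check. A minimal sketch, using the quark charges quoted above (the standard description has the neutrino emitted rather than absorbed, but since the neutrino is neutral the electric charge balances either way):

```python
from fractions import Fraction

# Quark and lepton electric charges, in units of the elementary charge
UP, DOWN = Fraction(2, 3), Fraction(-1, 3)
ELECTRON, NEUTRINO = Fraction(-1), Fraction(0)

proton = UP + UP + DOWN       # two up quarks and a down quark: +1
neutron = UP + DOWN + DOWN    # one up quark and two down quarks: 0

# Electron capture: proton + electron -> neutron + neutrino
before = proton + ELECTRON
after = neutron + NEUTRINO
print(before, after, before == after)
```

Both sides come out to zero net charge, so turning a proton plus an electron into a neutron conserves charge.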
Neutron stars exist in the region where the mass is high enough to overcome
electron degeneracy pressure
(https://en.wikipedia.org/wiki/Electron_degeneracy_pressure),
which prevents the collapse of a white dwarf, and below the mass of the
Tolman–Oppenheimer–Volkoff limit
(https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_limit),
where neutron degeneracy pressure
(https://en.wikipedia.org/wiki/Degenerate_matter#Neutron_degeneracy)
prevents its collapse to a black hole.
What I see from a quick search, the maximum spin rate for a neutron star
is thought to be 760 Hz, above which magnetic coupling to surrounding
matter and/or relativistic effects radiate away angular momentum.
https://en.wikipedia.org/wiki/Neutron_star#Spin_down
The previously referenced PSR J1748−2446ad spins at 716 Hz and at that
spin rate the surface of the neutron star is moving at approx 25% of
the speed of light.
Also the center of the neutron star will not have the angular momentum
of the outer edge but will have the high gravity.
So just a guess but spinning doesn't look like it should stop
black hole collapse if the mass gets too high.
But that math is above my pay grade.
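The surface-speed arithmetic above is simple enough to check. A minimal sketch, assuming a neutron-star radius of about 16 km (a figure not given in the thread, roughly the upper bound quoted for this pulsar):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def equatorial_speed(radius_m: float, spin_hz: float) -> float:
    """Equatorial surface speed of a rigidly rotating sphere."""
    return 2.0 * math.pi * radius_m * spin_hz

# PSR J1748-2446ad spins at 716 Hz; radius assumed ~16 km
v = equatorial_speed(16_000.0, 716.0)
print(f"{v:.3e} m/s = {v / C:.0%} of c")
```

With a 16 km radius this gives roughly 24% of c, consistent with the ~25% figure above; the exact fraction depends on the assumed radius.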
Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
On 2024-09-28 5:47, Brett wrote:
Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
On 2024-09-27 21:43, Brett wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart,
so we know the theory is wrong.
Which pulsars are spinning too fast? Reference please!
https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.
Spinning at 42,960 revolutions per minute.
The article says it is "the fastest-spinning pulsar known", but does not
say that it is spinning faster than neutron-star theories allow, so it
does not support your claim.
Took seconds for google to answer.
It is the wrong answer, at least for your claim.
Our sun spinning at 42,960 revolutions per minute would exceed the speed of light at its surface, much less be able to hold together.
But it comes from government research, and our government is a bunch of habitual liars ...
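The claim about the Sun is straightforward to verify with the same circumference-times-frequency arithmetic, using the Sun's known radius (about 6.96e8 m, a figure not given in the thread):

```python
import math

C = 299_792_458.0  # speed of light, m/s
R_SUN = 6.957e8    # solar radius, m

rpm = 42_960.0     # the pulsar's spin rate applied to the Sun
f = rpm / 60.0     # 716 Hz

# Equatorial surface speed of a rigidly rotating sphere
v = 2.0 * math.pi * R_SUN * f
print(f"{v:.2e} m/s, about {v / C:,.0f} times the speed of light")
```

An ordinary star obviously cannot spin at that rate; only an object as compact as a neutron star (radius tens of km rather than hundreds of thousands) keeps its surface well below c at 716 Hz.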
My guess would be that CPU RAM will decrease in upgradability. More
tightly integrated memory facilitates higher bandwidth and lower latency
(and lower system power/energy).
On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Occam’s Razor applies: stick to the simplest explanation that fits the known facts.
Radio pulsars pulse at a very regular frequency (which is why they were originally thought to be created by some intelligence), but that frequency also gradually slows down with time. This is consistent with loss of
angular momentum (and loss of energy) from radiation emission from a
spinning neutron star.
Remember, this isn’t all just hand-waving: they have formulas, derived
from theory, into which they can plug in numbers, and the numbers agree
with actual measurements.
Can you come up with some other mechanism for a radio source that pulses extremely regularly, yet also slows down gradually over time?
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:
My guess would be that CPU RAM will decrease in upgradability. More
tightly integrated memory facilitates higher bandwidth and lower latency
(and lower system power/energy).
Yes, we know that is the path that Apple is following. That seems to be
the only way they can justify their move to ARM processors, in terms of
increasing performance. Doesn’t mean that others will follow. I think
Apple’s approach will turn out to be an evolutionary dead-end.
Intel's newest server CPU moves the DRAM onto the socket, getting rid of
DIMMs.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Occam’s Razor applies: stick to the simplest explanation that fits the
known facts.
Radio pulsars pulse at a very regular frequency (which is why they were
originally thought to be created by some intelligence), but that frequency
also gradually slows down with time. This is consistent with loss of
angular momentum (and loss of energy) from radiation emission from a
spinning neutron star.
Remember, this isn’t all just hand-waving: they have formulas, derived
from theory, into which they can plug in numbers, and the numbers agree
with actual measurements.
Theories are a dime a dozen, it is easy to back fit data to fit any number
of models.
Can you come up with some other mechanism for a radio source that pulses
extremely regularly, yet also slows down gradually over time?
Here is a nice alternative to the standard model, which follows Occam’s Razor:
https://youtu.be/bGygGius61I?si=6k0H1Bi70b4O9zgr
ThunderboltsProject posts a lot of interesting videos, but the quality
varies a lot, with some crackpot ideas thrown in on occasion, to make one
think I would suppose.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Tue, 24 Sep 2024 20:33:58 +0000, MitchAlsup1 wrote:
Neutron stars are collapsed forms of matter where gravity is
stronger than the electro-magnetic fields holding the electrons away
from each other and the protons.
Gravity here is stronger even than the Pauli exclusion principle, which
says that two matter particles (e.g. electrons, protons, neutrons) cannot
occupy the same space at the same time.
This statement of the Pauli exclusion principle is wrong. An
example is the two electrons of a helium atom, which occupy the same
"space" (the lowest orbital shell of the atom) as long as the helium
atom persists.
The Pauli exclusion principle doesn't apply to some matter particles
(meaning particles that have non-zero rest mass). An example is
carrier particles of the weak force, W (and I believe there are
several kinds of W but I haven't bothered to verify that).
Also, in some situations the Pauli exclusion principle doesn't apply
to the kinds of particles it normally does apply to. An example is
a pair of electrons in a Cooper pair, which since the electrons are
paired they act as a boson rather than a fermion and thus are not
subject to the Pauli exclusion principle (which is that two fermions
cannot occupy the same quantum state).
Note by the way that the Pauli exclusion principle is not an
independent principle but simply a consequence of the laws of
quantum mechanics as they apply to fermions.
Finally, the original statement about gravity in a neutron star
being stronger than the Pauli exclusion principle is wrong. It is
precisely because of Pauli exclusion operating between the neutrons
that make up the neutron star that stops it from collapsing into a
black hole. The "pressure" of Pauli exclusion is not infinite,
which means there is an upper bound on how much mass a neutron star
can have before it collapses into a black hole. This bound, called
the Tolman-Oppenheimer-Volkoff limit, is somewhere between 2 and 3
solar masses.
(Disclaimer: all the above is to the best of my understanding; I
am not a physicist.)
On 2024-09-28 18:46, David Brown wrote:
On 27/09/2024 20:43, Brett wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart,
so we know the theory is wrong.
They are not flying apart - so we know /you/ are wrong.
I think you made a logical error there, David, a rare one for you. As I
understand Brett, he is saying that "the theory" that pulsars are
neutron stars cannot be right, because some pulsars spin so rapidly that
a neutron star spinning like that would fly apart.
If there really were such pulsars -- pulsars spinning faster than a
neutron star can spin -- then I think Brett's argument would hold: those pulsars could not be neutron stars. Or the error could be in our understanding of how fast neutron stars can spin.
Of course it’s pushing string theory, which is far greater bull.
On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:
My guess would be that CPU RAM will decrease in upgradability.
More tightly integrated memory facilitates higher bandwidth and
lower latency (and lower system power/energy).
Yes, we know that is the path that Apple is following. That seems
to be the only way they can justify their move to ARM processors,
in terms of increasing performance. Doesn’t mean that others will
follow. I think Apple’s approach will turn out to be an
evolutionary dead-end.
Intels newest server cpu moves the dram onto the socket getting rid
of DIMMs.
And Intel is not exactly in the best of market health at the moment,
is it?
On 29/09/2024 04:08, Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Occam’s Razor applies: stick to the simplest explanation that fits the
known facts.
Radio pulsars pulse at a very regular frequency (which is why they were
originally thought to be created by some intelligence), but that frequency
also gradually slows down with time. This is consistent with loss of
angular momentum (and loss of energy) from radiation emission from a
spinning neutron star.
Remember, this isn’t all just hand-waving: they have formulas, derived
from theory, into which they can plug in numbers, and the numbers agree
with actual measurements.
Theories are a dime a dozen, it is easy to back fit data to fit any number
of models.
No, theories are not common. Wild ideas are common. Scientific
theories need a huge amount of work, evidence and support.
Can you come up with some other mechanism for a radio source that pulses
extremely regularly, yet also slows down gradually over time?
Here is a nice alternative to the standard model, which follows Occam’s
Razor:
https://youtu.be/bGygGius61I?si=6k0H1Bi70b4O9zgr
ThunderboltsProject posts a lot of interesting videos, but the quality
varies a lot with some crack pot ideas thrown in on occasion, to make one
think I would suppose.
These links you keep posting are not theories - they are, at best,
crackpot ideas with no justification and only a vague fit to some cherry-picked data.
Remember, there is a big difference between a "scientific theory" and a "conspiracy theory". They are not all just alternative theories!
On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:
My guess would be that CPU RAM will decrease in upgradability.
More tightly integrated memory facilitates higher bandwidth and
lower latency (and lower system power/energy).
Yes, we know that is the path that Apple is following. That seems
to be the only way they can justify their move to ARM processors,
in terms of increasing performance. Doesn’t mean that others will
follow. I think Apple’s approach will turn out to be an
evolutionary dead-end.
Intels newest server cpu moves the dram onto the socket getting rid
of DIMMs.
And Intel is not exactly in the best of market health at the moment,
is it?
It seems Brett is mistaking Intel's client CPUs (Lunar Lake) for
Intel's server CPUs (Sierra Forest and Granite Rapids).
Don't take everything he says at face value. As a source of
information Brett is no more reliable than yourself.
Also, note that even in client space Intel complements the more rigid
Lunar Lake series with the more traditional (likely at the cost of lower
performance per watt) Arrow Lake series.
Michael S <already5chosen@yahoo.com> wrote:
On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:
My guess would be that CPU RAM will decrease in upgradability.
More tightly integrated memory facilitates higher bandwidth and
lower latency (and lower system power/energy).
Yes, we know that is the path that Apple is following. That seems
to be the only way they can justify their move to ARM processors,
in terms of increasing performance. Doesn’t mean that others will
follow. I think Apple’s approach will turn out to be an
evolutionary dead-end.
Intels newest server cpu moves the dram onto the socket getting
rid of DIMMs.
And Intel is not exactly in the best of market health at the
moment, is it?
It seems, Brett is confusing Intel's client CPUs (Lunar Lake) for
Intel's server CPUs (Sierra Forest and Granite Rapids).
Don't take everything he says at face value. As a source of
information Brett is no more reliable than yourself.
Four times the dram bandwidth,
DIMMs are DOOMED.
Well then, Go take a look at the Structures Atom Model and tell me
what you think.
https://structuredatom.org
In article <vdc5po$1reaa$1@dont-email.me>, ggtgp@yahoo.com (Brett) wrote:
Well then, Go take a look at the Structures Atom Model and tell me
what you think.
https://structuredatom.org
It's a re-invention of the "nuclear electrons" idea that was current
through the 1920s, and seems to have the same problems.
<https://en.wikipedia.org/wiki/Discovery_of_the_neutron#Problems_of_the_nuclear_electrons_hypothesis>
John
On Sun, 29 Sep 2024 02:08:26 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Remember, this isn’t all just hand-waving: they have formulas, derived
from theory, into which they can plug in numbers, and the numbers agree
with actual measurements.
Theories are a dime a dozen, it is easy to back fit data to fit any
number of models.
Predicting results that haven’t been measured yet, and measuring them and showing they are correct, is the true mark of science.
Does your “alternative model” measure up to that? No it does not.
Based on Hubble research, thousands of theories were proposed to get a
Nobel prize; then the James Webb telescope launched and all those
theories went into the toilet.
Had one of those theories been in the ballpark you would have declared
success for predictive science. Ignoring the 999 failures, but “science”
completely failed.
On Sat, 28 Sep 2024 19:08:23 -0000 (UTC), Brett wrote:
But it comes from government research, and our government is a bunch of
habitual liars ...
Who is this “our” government? There are research labs all over the world (some even in private hands), keeping tabs on each other’s work. If somebody were lying about some result, it wouldn’t take long before the others discovered this.
That is just another cloud model. SAM has actual structure that
explains attachment angles and should explain hyperfine structure
properties better.
In article <vdd7rj$23l66$1@dont-email.me>, ggtgp@yahoo.com (Brett) wrote:
That is just another cloud model. SAM has actual structure that
explains attachment angles and should explain hyperfine structure
properties better.
And the Klein paradox? The electron confinement problem? There's no explanation of any of those problems.
Where are the peer-reviewed papers? All I see is posters and
presentations at Cold Fusion and Electric Universe conferences, plus
Tesla Tech, whose owner describes himself as "a publisher of extreme
science, alternative energy, health and medicine."
This appears to be pseudo-science, appealing to those who know a little
about physics, but incapable of explaining the difficult problems.
John
On Mon, 30 Sep 2024 4:11:18 +0000, Brett wrote:
Based off of Hubble research 1000’s of theories were proposed to get a
Nobel prize, then the James Web telescope launched and all those
theories went into the toilet.
Had one of those theories been in the ball park you would have declared
success for predictive science. Ignoring the 999 failures, but
“science” completely failed.
just because there were thousands of conjectures that fail to meet the
rigors of science does not mean that science has failed.
On Sun, 29 Sep 2024 18:26:39 -0000 (UTC)
Brett <ggtgp@yahoo.com> wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:
My guess would be that CPU RAM will decrease in upgradability.
More tightly integrated memory facilitates higher bandwidth and
lower latency (and lower system power/energy).
Yes, we know that is the path that Apple is following. That seems
to be the only way they can justify their move to ARM processors,
in terms of increasing performance. Doesn’t mean that others will
follow. I think Apple’s approach will turn out to be an
evolutionary dead-end.
Intels newest server cpu moves the dram onto the socket getting
rid of DIMMs.
And Intel is not exactly in the best of market health at the
moment, is it?
It seems, Brett is confusing Intel's client CPUs (Lunar Lake) for
Intel's server CPUs (Sierra Forest and Granite Rapids).
Don't take everything he says at face value. As a source of
information Brett is no more reliable than yourself.
Four times the dram bandwidth,
For the same # of IO pins LPDDR5x has only ~1.33x higher bandwidth
than DDR5 DIMMs at top standard speed. Somewhat more, if measured per
Watt rather than per pin, but even per Watt the factor is much less
than 2x.
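The ~1.33x per-pin figure is consistent with the top standard transfer rates of the time, e.g. LPDDR5X at 8533 MT/s versus DDR5 DIMMs at 6400 MT/s (the specific rates are an assumption here, not stated in the post):

```python
# Per-pin bandwidth scales with the per-pin transfer rate (MT/s),
# so the ratio of transfer rates is the ratio of per-pin bandwidths.
lpddr5x_mts = 8533  # LPDDR5X top standard speed (assumed), MT/s
ddr5_mts = 6400     # common top speed for DDR5 DIMMs (assumed), MT/s

ratio = lpddr5x_mts / ddr5_mts
print(f"per-pin bandwidth advantage: {ratio:.2f}x")
```

That works out to almost exactly 4/3, matching the ~1.33x claimed above; the gap narrows or widens as each standard's speed grades advance.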
DIMMs are DOOMED.
In the long run, I don't disagree. On client computers the run would
not even be particularly long. On servers it will take more than a
decade.
But even on clients it's not going to be completed overnight or over
1-1.5 years.
just because there were thousands of conjectures that fail to meet the
rigors of science does not mean that science has failed.
The false religion of “science” failed.
On Mon, 30 Sep 2024 19:58:46 -0000 (UTC), Brett wrote:
The false religion of “science” failed.
Science is not a religion. Science, unlike religion, works whether you believe in it or not.
You can, very literally, bet your life on it. Not something you can say
of any religion ...
MitchAlsup1 <mitchalsup@aol.com> wrote:
On Mon, 30 Sep 2024 4:11:18 +0000, Brett wrote:
Based off of Hubble research 1000’s of theories were proposed to get a
Nobel prize, then the James Web telescope launched and all those
theories went into the toilet.
Had one of those theories been in the ball park you would have declared
success for predictive science. Ignoring the 999 failures, but
“science” completely failed.
just because there were thousands of conjectures that fail to meet the
rigors of science does not mean that science has failed.
The false religion of “science” failed.
Yes, real science actually advances this way.
On Tue, 1 Oct 2024 2:40:55 +0000, Lawrence D'Oliveiro wrote:
Science is not a religion. Science, unlike religion, works whether you
believe in it or not.
Science, unlike religion, adjusts to the current set of facts--whatever
they may be.
On Tue, 1 Oct 2024 2:40:55 +0000, Lawrence D'Oliveiro wrote:
On Mon, 30 Sep 2024 19:58:46 -0000 (UTC), Brett wrote:
The false religion of “science” failed.
Science is not a religion. Science, unlike religion, works whether you
believe in it or not.
Science, unlike religion, adjusts to the current set of facts--whatever
they may be.
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
David Brown <david.brown@hesbynett.no> schrieb:
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
That is the ideal of science - scientific hypotheses are proposed.
They have to be falsifiable (i.e. you have to be able to do experiments
which could, in theory, prove the hypothesis wrong). You can never
_prove_ a hypothesis, you can only fail to disprove it, and then it
will gradually tend to become accepted. In other words, you try
to make predictions, and if those predictions fail, then the theory
is in trouble.
For example, Einstein's General Theory of Relativity was never
proven, it was found by a very large number of experiments by a
very large number of people that it could not be disproven, so
people generally accept it. But people still try to think of
experiments which might show a deviation, and keep trying for it.
Same for quantum mechanics. Whatever you think of it
philosophically, it has been shown to be remarkably accurate
at predicting actual behavior.
Mathematics is not a science under this definition, by the way.
The main problem is with people who try to sell something as
science which isn't, of which there are also many examples.
"Scientific Marxism" is one such example. It is sometimes hard
for an outsider to differentiate between actual scientific theories
which have been tested, and people just claiming that "the science
says so" when they have not been applying the scientific method
faithfully, either through ignorance or through bad intent.
There is also the problem of many people not knowing statistics well
enough and misapplying it, for example in social or medical science.
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
David Brown <david.brown@hesbynett.no> schrieb:
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
That is the ideal of science - scientific hypotheses are proposed.
They have to be falsifiable (i.e. you have to be able to do experiments
which could, in theory, prove the hypothesis wrong). You can never
_prove_ a hypothesis, you can only fail to disprove it, and then it
will gradually tend to become accepted. In other words, you try
to make predictions, and if those predictions fail, then the theory
is in trouble.
For example, Einstein's General Theory of Relativity was never
proven, it was found by a very large number of experiments by a
very large number of people that it could not be disproven, so
people generally accept it. But people still try to think of
experiments which might show a deviation, and keep trying for it.
Same for quantum mechanics. Whatever you think of it
philosophically, it has been shown to be remarkably accurate
at predicting actual behavior.
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
The main problem is with people who try to sell something as
science which isn't, of which there are also many examples.
The colloquial person thinks theory and conjecture are
essentially equal. As in: "I just invented this theory".
No, you just: "Invented a conjecture." You have to have
substantial evidence to go from conjecture to theory.
"Scientific Marxism" is one such example. It is sometimes hard
for an outsider to differentiate between actual scientific theories
which have been tested, and people just claiming that "the science
says so" when they have not been applying the scientific method
faithfully, either through ignorance or through bad intent.
There is also the problem of many people not knowing statistics well
enough and misapplying it, for example in social or medical science.
Or politics....
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
David Brown <david.brown@hesbynett.no> schrieb:
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
That is the ideal of science - scientific hypotheses are proposed.
They have to be falsifiable (i.e. you have to be able to do experiments
which could, in theory, prove the hypothesis wrong). You can never
_prove_ a hypothesis, you can only fail to disprove it, and then it
will gradually tend to become accepted. In other words, you try
to make predictions, and if those predictions fail, then the theory
is in trouble.
For example, Einstein's General Theory of Relativity was never
proven, it was found by a very large number of experiments by a
very large number of people that it could not be disproven, so
people generally accept it. But people still try to think of
experiments which might show a deviation, and keep trying for it.
Same for quantum mechanics. Whatever you think of it
philosophically, it has been shown to be remarkably accurate
at predicting actual behavior.
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
On 2024-10-01 21:20, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
David Brown <david.brown@hesbynett.no> schrieb:
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
That is the ideal of science - scientific hypotheses are proposed.
They have to be falsifiable (i.e. you have to be able to do experiments
which could, in theory, prove the hypothesis wrong). You can never
_prove_ a hypothesis, you can only fail to disprove it, and then it
will gradually tend to become accepted. In other words, you try
to make predictions, and if those predictions fail, then the theory
is in trouble.
For example, Einstein's General Theory of Relativity was never
proven, it was found by a very large number of experiments by a
very large number of people that it could not be disproven, so
people generally accept it. But people still try to think of
experiments which might show a deviation, and keep trying for it.
Same for quantum mechanics. Whatever you think of it
philosophically, it has been shown to be remarkably accurate
at predicting actual behavior.
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
Yes, in the end, but it is interesting that a lot of the progress in mathematics happens through the invention or intuition of /conjectures/, which may eventually be proven correct and true, or incorrect and
needing modification.
An open (neither proved nor disproved) conjecture often collects lots of "observed evidence", either by suggesting some interesting corollaries
or analogies that are then proved independently, or by surviving
energetic efforts to find counterexamples to the conjecture. In this
sense an open conjecture resembles a theory in physics.
A list of conjectures:
https://en.wikipedia.org/wiki/List_of_mathematical_conjectures
On 01/10/2024 20:20, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
David Brown <david.brown@hesbynett.no> schrieb:
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
That is the ideal of science - scientific hypotheses are proposed.
They have to be falsifiable (i.e. you have to be able to do experiments
which could, in theory, prove the hypothesis wrong). You can never
_prove_ a hypothesis, you can only fail to disprove it, and then it
will gradually tend to become accepted. In other words, you try
to make predictions, and if those predictions fail, then the theory
is in trouble.
For example, Einstein's General Theory of Relativity was never
proven, it was found by a very large number of experiments by a
very large number of people that it could not be disproven, so
people generally accept it. But people still try to think of
experiments which might show a deviation, and keep trying for it.
Same for quantum mechanics. Whatever you think of it
philosophically, it has been shown to be remarkably accurate
at predicting actual behavior.
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
It's worth remembering that mathematical proofs always start at a base -
a set of axioms. And these axioms are assumed, not proven.
The main problem is with people who try to sell something as
science which isn't, of which there are also many examples.
The colloquial person thinks theory and conjecture are
essentially equal. As in: "I just invented this theory".
No, you just: "Invented a conjecture." You have to have
substantial evidence to go from conjecture to theory.
I think you need evidence, justification, and a good basis for proposing something before it can even be called a "conjecture" in science. You
don't start off with a conjecture - you start with an idea, and have a
long way to go to reach a "scientific theory", passing through
"conjecture" and "hypothesis" on the way.
"Scientific Marxism" is one such example. It is sometimes hard
for an outsider to differentiate between actual scientific theories
which have been tested, and people just claiming that "the science
says so" when they have not been applying the scientific method
faithfully, either through ignorance or through bad intent.
There is also the problem of many people not knowing statistics well
enough and misapplying it, for example in social or medical science.
Or politics....
Or even in hard sciences - scientists are humans too, and some of them
get their statistics wildly wrong.
On 2024-10-01 21:20, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
David Brown <david.brown@hesbynett.no> schrieb:
Science is not a religion.
And as someone (whose name I have forgotten) once said, "Science is
about unanswered questions. Religion is about unquestioned answers."
That is the ideal of science - scientific hypotheses are proposed.
They have to be falsifiable (i.e. you have to be able to do experiments
which could, in theory, prove the hypothesis wrong). You can never
_prove_ a hypothesis, you can only fail to disprove it, and then it
will gradually tend to become accepted. In other words, you try
to make predictions, and if those predictions fail, then the theory
is in trouble.
For example, Einstein's General Theory of Relativity was never
proven, it was found by a very large number of experiments by a
very large number of people that it could not be disproven, so
people generally accept it. But people still try to think of
experiments which might show a deviation, and keep trying for it.
Same for quantum mechanics. Whatever you think of it
philosophically, it has been shown to be remarkably accurate
at predicting actual behavior.
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
Yes, in the end, but it is interesting that a lot of the progress in mathematics happens through the invention or intuition of /conjectures/, which may eventually be proven correct and true, or incorrect and
needing modification.
An open (neither proved nor disproved) conjecture often collects lots of "observed evidence", either by suggesting some interesting corollaries
or analogies that are then proved independently, or by surviving
energetic efforts to find counterexamples to the conjecture. In this
sense an open conjecture resembles a theory in physics.
A list of conjectures:
https://en.wikipedia.org/wiki/List_of_mathematical_conjectures
On Tue, 1 Oct 2024 18:56:46 +0000, David Brown wrote:
On 01/10/2024 20:20, MitchAlsup1 wrote:
The colloquial person thinks theory and conjecture are
essentially equal. As in: "I just invented this theory".
No, you just: "Invented a conjecture." You have to have
substantial evidence to go from conjecture to theory.
I think you need evidence, justification, and a good basis for proposing
something before it can even be called a "conjecture" in science. You
don't start off with a conjecture - you start with an idea, and have a
long way to go to reach a "scientific theory", passing through
"conjecture" and "hypothesis" on the way.
I do not disagree with that. Sorry if I implied anything else.
On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:
On 2024-10-01 21:20, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
Yes, in the end, but it is interesting that a lot of the progress in
mathematics happens through the invention or intuition of /conjectures/,
which may eventually be proven correct and true, or incorrect and
needing modification.
Mathematical conjectures have a spectrum of "solidity", often more
solid in one branch of math than in another.
An open (neither proved nor disproved) conjecture often collects lots of
"observed evidence", either by suggesting some interesting corollaries
or analogies that are then proved independently, or by surviving
energetic efforts to find counterexamples to the conjecture. In this
sense an open conjecture resembles a theory in physics.
The solution to Fermat's last theorem used a large series of
then conjectures in order to demonstrate that the solution
was correct.
On 01/10/2024 23:09, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:
On 2024-10-01 21:20, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
Yes, in the end, but it is interesting that a lot of the progress in
mathematics happens through the invention or intuition of /conjectures/, which may eventually be proven correct and true, or incorrect and
needing modification.
Mathematical conjectures have a spectrum of "solidity", often more
solid in one branch of math than in another.
I am not entirely sure what you mean by that.
A conjecture is a hypothesis that you have reasonable justification for believing is true, but which is not proven to be true (then it becomes a theorem). Some conjectures have been confirmed empirically to a large
degree (such as the Riemann hypothesis) which is not proof, but can be
seen as strengthening the conjecture. Others, such as the continuum hypothesis, not only have no empirical evidence but have been proven to
be independent of our usual ZF set theory axioms - no evidence either
way can be found.
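The contrast drawn above between empirical support and actual proof can be made concrete with a small sketch (my illustration, not code from the post), using Goldbach's conjecture: surviving a mechanical search for counterexamples is exactly the kind of "observed evidence" that strengthens a conjecture without proving it.

```python
# Sketch: "empirical evidence" for an open conjecture.  Goldbach's
# conjecture says every even number > 2 is the sum of two primes.
# Checking thousands of cases proves nothing, but a counterexample
# would disprove it instantly -- so each survived check is evidence.

def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n: int) -> bool:
    """True if even n > 2 can be written as p + q with p, q prime."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# No counterexample below 10,000 (the real verified limit is vastly higher).
assert all(goldbach_holds(n) for n in range(4, 10_000, 2))
```

The same pattern applies to the Riemann hypothesis mentioned above: billions of zeros have been checked numerically, which is evidence, not proof.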
There are also some mathematicians who have a philosophy of viewing some kinds of proofs as "better" than others. Some dislike "proof by
computer", and don't consider the four-colour theorem to be a proven
theorem yet.
Others are "constructivists" - they are not happy with
merely a proof that some solution must exist, they only consider the hypothesis properly proven when they have a construction for a solution.
In that sense, a given conjecture may have more "solidity" in one
/school/ of mathematics than in another.
But I don't quite see how a single conjecture could have more "solidity"
in one /branch/ of mathematics than another. An example or two might
help.
An open (neither proved nor disproved) conjecture often collects lots of >>> "observed evidence", either by suggesting some interesting corollaries
or analogies that are then proved independently, or by surviving
energetic efforts to find counterexamples to the conjecture. In this
sense an open conjecture resembles a theory in physics.
The solution to Fermat's last theorem used a large series of
then conjectures in order to demonstrate that the solution
was correct.
Yes - and then those supporting conjectures were proven and morphed into theorems, with the knock-on effect of making everything higher up a
proven theorem.
This is very common in mathematics - you develop conditional proofs
building on assuming a conjecture is true, and then you (or someone
else) goes back and proves that conjecture later, or perhaps finds
another path around that part. For many theorems in mathematics, the complete proof is a /very/ long and winding path.
You can never _prove_ a hypothesis ...
Somebody wiser than me had written something like "You cannot write/test/debug multithreaded programs without the ability for multiple threads to actually run at the same time."
Sky Scholar just posted his latest mockery of modern physics:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
On Wed, 2 Oct 2024 7:20:47 +0000, David Brown wrote:
On 01/10/2024 23:09, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:
On 2024-10-01 21:20, MitchAlsup1 wrote:
On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:
Mathematics is not a science under this definition, by the way.
Indeed, Units of forward progress in Math are done with formal
proofs.
Yes, in the end, but it is interesting that a lot of the progress in
mathematics happens through the invention or intuition of
/conjectures/,
which may eventually be proven correct and true, or incorrect and
needing modification.
Mathematical conjectures have a spectrum of "solidity", often more
solid in one branch of math than in another.
I am not entirely sure what you mean by that.
A conjecture is a hypothesis that you have reasonable justification for
believing is true, but which is not proven to be true (then it becomes a
theorem). Some conjectures have been confirmed empirically to a large
degree (such as the Riemann hypothesis) which is not proof, but can be
seen as strengthening the conjecture. Others, such as the continuum
hypothesis, not only have no empirical evidence but have been proven to
be independent of our usual ZF set theory axioms - no evidence either
way can be found.
Other conjectures had a century or more between being conjectured with several "things they got right" before finally drifting towards a proof
or drifting towards disproof. The width of the drift is exactly the
spectrum I stated.
There are also some mathematicians who have a philosophy of viewing some
kinds of proofs as "better" than others. Some dislike "proof by
computer", and don't consider the four-colour theorem to be a proven
theorem yet.
Over time proofs drift towards being an axiom (at least in their little branch of math--which might not be axiomatic in other branches). Others
start out proven and drift to the point where they are only proven in one
or several branches of math.
Others are "constructivists" - they are not happy with
merely a proof that some solution must exist, they only consider the
hypothesis properly proven when they have a construction for a solution.
In that sense, a given conjecture may have more "solidity" in one
/school/ of mathematics than in another.
that is what I am talking about--it is all a big multidimensional
spectrum of {proof or conjecture}
But I don't quite see how a single conjecture could have more "solidity"
in one /branch/ of mathematics than another. An example or two might
help.
A conjecture/proof in ring-sum math may not work at all in
Real-Numbers. They are different branches in the space of Math.
Some proofs only work in Cartesian Multi-D spaces and fail in
manifold spaces.
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.
On Tue, 24 Sep 2024 07:50:36 +0200, Terje Mathisen wrote:
Somebody wiser than me had written something like "You cannot
write/test/debug multithreaded programs without the ability for multiple
threads to actually run at the same time."
Some threading bugs are more likely to show up in multiple-CPU situations, others in single-CPU situations. You need to test every which way you can.
The 1990s were a “let’s use threads for everything” time. Thankfully, we
have become a bit more restrained since then ...
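The point made above can be illustrated with a small sketch (my illustration, not code from the thread): a classic lost-update race that may stay hidden under one scheduling regime and surface under another, next to the lock that removes it.

```python
import threading

# "counter += 1" is a read-modify-write: two threads can both read the
# same old value, and one increment is lost.  Whether that ever happens
# depends on scheduling and on how many CPUs actually run the threads
# at once -- which is why such bugs can pass testing on one machine and
# fail on another.

N_THREADS = 8
N_INCREMENTS = 10_000
counter = 0
lock = threading.Lock()

def unsafe_worker():
    global counter
    for _ in range(N_INCREMENTS):
        counter += 1          # racy: increments may be lost under contention

def safe_worker():
    global counter
    for _ in range(N_INCREMENTS):
        with lock:            # serialises the read-modify-write
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(safe_worker))    # always N_THREADS * N_INCREMENTS = 80000
print(run(unsafe_worker))  # 80000 on a lucky run; may be less in practice
```

Note that the unsafe version can pass thousands of single-CPU test runs and still be wrong, which is exactly the quoted point.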
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Read any book on Vitamin C and you will find that Linus Pauling was right.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.
Indeed.
Also note that the two guys who won the Nobel Prize for the development
of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a
page on Rational Wiki, which is never a good sign. (It seems he did
work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart,
so we know the theory is wrong.
Now go find the other missing billion rings Einstein predicted.
On Sun, 22 Sep 2024 16:42:58 -0000 (UTC), Brett wrote:
Now go find the other missing billion rings Einstein predicted.
Where did he predict that?
On 2024-09-27 21:43, Brett wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force
that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly
together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart, so we know the theory is wrong.
Which pulsars are spinning too fast? Reference please!
Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
On 2024-09-27 21:43, Brett wrote:
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:
You hear physicists talk of microscopic black holes, but the force that keeps atoms apart is so much more powerful than gravity that
such talk is just fools playing with math they don’t understand.
That would mean that neutron stars (all the atoms crushed so tightly together that individual subatomic particles lose their identity)
couldn’t exist either. But they do.
Radio pulsars exist.
The theory is that they are neutron stars. But theory can be wrong.
Some of the pulsars are spinning at such a rate that they would fly apart, so we know the theory is wrong.
Which pulsars are spinning too fast? Reference please!
https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.
Spinning at 42,960 revolutions per minute.
Took seconds for google to answer.
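Whether 716 Hz (42,960 rpm) is actually past breakup can be sanity-checked with a back-of-envelope calculation (mine, not from the thread); the mass and radius below are typical textbook neutron-star values I am assuming, not measured figures for this pulsar.

```python
import math

# Newtonian mass-shedding estimate: a rigidly rotating sphere sheds mass
# at its equator when centripetal acceleration matches surface gravity,
# i.e. omega^2 * R = G * M / R^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 2.0 * 1.989e30     # assumed mass: ~2 solar masses, in kg
R = 12e3               # assumed radius: ~12 km, in m

omega_max = math.sqrt(G * M / R**3)   # breakup angular velocity, rad/s
f_max = omega_max / (2 * math.pi)     # breakup spin frequency, Hz

f_observed = 42_960 / 60              # 42,960 rpm = 716 Hz

print(f"breakup ~ {f_max:.0f} Hz, observed {f_observed:.0f} Hz")
```

With these assumed parameters the observed 716 Hz comes out well below the Newtonian breakup estimate, so the spin rate alone does not contradict the neutron-star model.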
(Yes, I have seen a YouTube video from a Flat Earth fanatic making that argument :-( )
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 22 Sep 2024 16:42:58 -0000 (UTC), Brett wrote:
Now go find the other missing billion rings Einstein predicted.
Where did he predict that?
All galaxies that have another galaxy behind them at a reasonable range should show Einstein rings.
Billions.
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Indeed.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.
Also note that the two guys who won the Nobel Prize for the development
of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a
page on Rational Wiki, which is never a good sign. (It seems he did
work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
One day I will be on rational wiki. ;)
Watch his videos and try to debunk what he says.
Good luck with that. ;)
On 03/10/2024 21:10, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Indeed.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.
Also note that the two guys who won the Nobel Prize for the development
of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a page on Rational Wiki, which is never a good sign. (It seems he did
work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
One day I will be on rational wiki. ;)
Watch his videos and try to debunk what he says.
Good luck with that. ;)
There are more productive uses of my time which won't rot my brain as quickly, such as watching the grass grow.
A big challenge with the kind of shite that people like this produce is
that it is often unfalsifiable. They invoke magic, much like religions
do, and then any kind of disproof or debunking is washed away by magic.
When you make up some nonsense that has no basis in reality or no
evidence, you can just keep adding more nonsense no matter what anyone
else says.
So when nutjobs like that guy tell you the sun is powered by pixies
riding tricycles really fast, he can easily invent more rubbish to
explain away any evidence.
There's a term for this - what these cranks churn out is "not even
wrong". (You can look that up on Rational Wiki too.)
And while the claims of this kind of conspiracy theory cannot be
falsified, there is also no evidence for them. Claims made without
evidence can be dismissed without evidence - there is no need to debunk
them. The correct reaction is to laugh if they are funny, then move on
and forget them.
We are all human, and sometimes we get fooled by an idea that sounds
right. But you should be embarrassed at believing such a wide range of idiocy and then promoting it.
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 21:10, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
In science, we don’t go by “this guy has a legendary reputation and/or
sounds like a credible witness, let’s believe him”, we go by evidence.
Indeed.
Also note that the two guys who won the Nobel Prize for the development of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a page on Rational Wiki, which is never a good sign. (It seems he did
work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
One day I will be on rational wiki. ;)
Watch his videos and try to debunk what he says.
Good luck with that. ;)
There are more productive uses of my time which won't rot my brain as
quickly, such as watching the grass grow.
A big challenge with the kind of shite that people like this produce is
that it is often unfalsifiable. They invoke magic, much like religions
do, and then any kind of disproof or debunking is washed away by magic.
When you make up some nonsense that has no basis in reality or no
evidence, you can just keep adding more nonsense no matter what anyone
else says.
So when nutjobs like that guy tell you the sun is powered by pixies
riding tricycles really fast, he can easily invent more rubbish to
explain away any evidence.
There's a term for this - what these cranks churn out is "not even
wrong". (You can look that up on Rational Wiki too.)
And while the claims of this kind of conspiracy theory cannot be
falsified, there is also no evidence for them. Claims made without
evidence can be dismissed without evidence - there is no need to debunk
them. The correct reaction is to laugh if they are funny, then move on
and forget them.
We are all human, and sometimes we get fooled by an idea that sounds
right. But you should be embarrassed at believing such a wide range of
idiocy and then promoting it.
A gas cannot emit the spectrum we see from the sun; liquid metallic
hydrogen can.
Gases do not show the pond ripples from impacts that we see on the sun's surface.
And a long list of other basic facts Pierre-Marie Robitaille goes over in
his Sky Scholar videos.
Stellar science is a bad joke, such basic mistakes should have been
corrected 100 years ago.
On 04/10/2024 19:59, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 21:10, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
In science, we don’t go by “this guy has a legendary reputation and/or
sounds like a credible witness, let’s believe him”, we go by evidence.
Indeed.
Also note that the two guys who won the Nobel Prize for the development of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a page on Rational Wiki, which is never a good sign. (It seems he did work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
One day I will be on rational wiki. ;)
Watch his videos and try to debunk what he says.
Good luck with that. ;)
There are more productive uses of my time which won't rot my brain as
quickly, such as watching the grass grow.
A big challenge with the kind of shite that people like this produce is
that it is often unfalsifiable. They invoke magic, much like religions
do, and then any kind of disproof or debunking is washed away by magic.
When you make up some nonsense that has no basis in reality or no
evidence, you can just keep adding more nonsense no matter what anyone
else says.
So when nutjobs like that guy tell you the sun is powered by pixies
riding tricycles really fast, he can easily invent more rubbish to
explain away any evidence.
There's a term for this - what these cranks churn out is "not even
wrong". (You can look that up on Rational Wiki too.)
And while the claims of this kind of conspiracy theory cannot be
falsified, there is also no evidence for them. Claims made without
evidence can be dismissed without evidence - there is no need to debunk
them. The correct reaction is to laugh if they are funny, then move on
and forget them.
We are all human, and sometimes we get fooled by an idea that sounds
right. But you should be embarrassed at believing such a wide range of
idiocy and then promoting it.
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
You do realise that the sun is primarily plasma, rather than gas? And
that scientists - /real/ scientists - can heat up gases until they are
plasma and look at the spectrum, in actual experiments in labs? Has
your hero tested a ball of liquid metallic hydrogen in his lab?
Gases do not show the pond ripples from impacts that we see from the sun
surface.
And a long list of other basic facts Pierre-Marie Robitaille goes over in
his Sky Scholar videos.
Stellar science is a bad joke, such basic mistakes should have been
corrected 100 years ago.
You think one crackpot with no relevant education and no resources can
figure all this out in a couple of years, where tens of thousands of scientists have failed over a hundred years? Do you /really/ think that
is more likely than supposing that he doesn't understand what he is
talking about?
In real science, lab experiments, observation of reality (such as the
sun in this case), simulations, models, and hypotheses all go hand in
hand in collaboration between many scientists and experts in different
fields in order to push scientific knowledge further.
"Maverick" genius scientists who figure out the "real" answer on their
own don't exist outside the entertainment industry.
David Brown <david.brown@hesbynett.no> wrote:
On 04/10/2024 19:59, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 21:10, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
In science, we don’t go by “this guy has a legendary reputation and/or
sounds like a credible witness, let’s believe him”, we go by evidence.
Indeed.
Also note that the two guys who won the Nobel Prize for the development of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a page on Rational Wiki, which is never a good sign. (It seems he did work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
One day I will be on rational wiki. ;)
Watch his videos and try to debunk what he says.
Good luck with that. ;)
There are more productive uses of my time which won't rot my brain as
quickly, such as watching the grass grow.
A big challenge with the kind of shite that people like this produce is that it is often unfalsifiable. They invoke magic, much like religions do, and then any kind of disproof or debunking is washed away by magic.
When you make up some nonsense that has no basis in reality or no evidence, you can just keep adding more nonsense no matter what anyone else says.
So when nutjobs like that guy tell you the sun is powered by pixies
riding tricycles really fast, he can easily invent more rubbish to
explain away any evidence.
There's a term for this - what these cranks churn out is "not even
wrong". (You can look that up on Rational Wiki too.)
And while the claims of this kind of conspiracy theory cannot be
falsified, there is also no evidence for them. Claims made without
evidence can be dismissed without evidence - there is no need to debunk them. The correct reaction is to laugh if they are funny, then move on and forget them.
We are all human, and sometimes we get fooled by an idea that sounds
right. But you should be embarrassed at believing such a wide range of idiocy and then promoting it.
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
You do realise that the sun is primarily plasma, rather than gas? And
that scientists - /real/ scientists - can heat up gases until they are
plasma and look at the spectrum, in actual experiments in labs? Has
your hero tested a ball of liquid metallic hydrogen in his lab?
Gases do not show the pond ripples from impacts that we see from the sun surface.
And a long list of other basic facts Pierre-Marie Robitaille goes over in his Sky Scholar videos.
Stellar science is a bad joke, such basic mistakes should have been
corrected 100 years ago.
You think one crackpot with no relevant education and no resources can
figure all this out in a couple of years, where tens of thousands of
scientists have failed over a hundred years? Do you /really/ think that
is more likely than supposing that he doesn't understand what he is
talking about?
In real science, lab experiments, observation of reality (such as the
sun in this case), simulations, models, and hypotheses all go hand in
hand in collaboration between many scientists and experts in different
fields in order to push scientific knowledge further.
"Maverick" genius scientists who figure out the "real" answer on their
own don't exist outside the entertainment industry.
So science ended 100 years ago and we should close our eyes and ears and
not say anything that would counter our sacred flawless scientists of old.
Stop being a religious zealot and watch the videos.
If he is a crackpot you should be bright enough to figure it out and prove
it for the world to see. Crackpots cannot survive scientific rigor. A five-minute search crushes such fools with ease; I have done this a dozen times.
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
Gases do not show the pond ripples from impacts that we see from the sun surface.
Brett <ggtgp@yahoo.com> wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 04/10/2024 19:59, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 21:10, Brett wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:
Sky Scholar just posted his latest mockery of modern physics:
Is this a particularly believable and/or coherent mockery?
He invented the MRI machine and the Liquid Metallic model of the sun ...
And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
In science, we don’t go by “this guy has a legendary reputation and/or
sounds like a credible witness, let’s believe him”, we go by evidence.
Indeed.
Also note that the two guys who won the Nobel Prize for the development of MRI - the /real/ inventors of the MRI machine - are both long dead.
But this particular crank is mad enough and influential enough to have a page on Rational Wiki, which is never a good sign. (It seems he did work on improving MRI technology before he went bananas.)
<https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>
One day I will be on rational wiki. ;)
Watch his videos and try to debunk what he says.
Good luck with that. ;)
There are more productive uses of my time which won't rot my brain as quickly, such as watching the grass grow.
A big challenge with the kind of shite that people like this produce is that it is often unfalsifiable. They invoke magic, much like religions do, and then any kind of disproof or debunking is washed away by magic.
When you make up some nonsense that has no basis in reality or no evidence, you can just keep adding more nonsense no matter what anyone else says.
So when nutjobs like that guy tell you the sun is powered by pixies
riding tricycles really fast, he can easily invent more rubbish to
explain away any evidence.
There's a term for this - what these cranks churn out is "not even
wrong". (You can look that up on Rational Wiki too.)
And while the claims of this kind of conspiracy theory cannot be
falsified, there is also no evidence for them. Claims made without
evidence can be dismissed without evidence - there is no need to debunk them. The correct reaction is to laugh if they are funny, then move on and forget them.
We are all human, and sometimes we get fooled by an idea that sounds right. But you should be embarrassed at believing such a wide range of idiocy and then promoting it.
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
You do realise that the sun is primarily plasma, rather than gas? And
that scientists - /real/ scientists - can heat up gases until they are
plasma and look at the spectrum, in actual experiments in labs? Has
your hero tested a ball of liquid metallic hydrogen in his lab?
Gases do not show the pond ripples from impacts that we see from the sun surface.
And a long list of other basic facts Pierre-Marie Robitaille goes over in his Sky Scholar videos.
Stellar science is a bad joke, such basic mistakes should have been
corrected 100 years ago.
You think one crackpot with no relevant education and no resources can
figure all this out in a couple of years, where tens of thousands of
scientists have failed over a hundred years? Do you /really/ think that is more likely than supposing that he doesn't understand what he is
talking about?
In real science, lab experiments, observation of reality (such as the
sun in this case), simulations, models, and hypotheses all go hand in
hand in collaboration between many scientists and experts in different
fields in order to push scientific knowledge further.
"Maverick" genius scientists who figure out the "real" answer on their
own don't exist outside the entertainment industry.
So science ended 100 years ago and we should close our eyes and ears and
not say anything that would counter our sacred flawless scientists of old.
Stop being a religious zealot and watch the videos.
If he is a crackpot you should be bright enough to figure it out and prove it for the world to see. Crackpots cannot survive scientific rigor. A five-minute search crushes such fools with ease; I have done this a dozen times.
Here is what Sabine Hossenfelder thinks of modern physics, and she
makes money promoting physics to people on YouTube.
https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt
The comments are funny. ;)
My translation is that modern physics is a bullshit engine of unprovable gibberish like string theory.
On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
The spectrum of the Sun is primarily the continuous emissive one of a “black body” at a surface temperature of 6500K or thereabouts.
Superimposed on that are absorption lines corresponding to a range of elements, representing cooler substances in the surrounding “photosphere”,
I think it’s called.
Which of these lines do you think is characteristic of this mythical “liquid metallic hydrogen” of yours?
Fun fact: originally it was thought that those lines in the spectra of the Sun and other stars were characteristic of the entire makeup of the bodies concerned. In other words, they were full of elements much like those that make up the Earth and other planetary bodies.
A young doctoral student named Cecilia Payne, after some careful study,
came to the remarkable conclusion that stars were mostly hydrogen and
helium, and these spectral lines were due, in effect, to relatively small amounts of contaminants in among that bulk of hydrogen and helium.
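The black-body claim above is easy to check numerically with Wien's displacement law. A minimal sketch (the ~6500 K photosphere figure is the poster's; Wien's constant is the standard value, and the 5778 K effective temperature is the commonly cited one):

```python
# Wien's displacement law: a black body at temperature T peaks at
# lambda_max = b / T, regardless of what the body is made of.
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak emission wavelength in nanometres for a temperature in kelvin."""
    return WIEN_B / temperature_k * 1e9

print(round(peak_wavelength_nm(6500)))  # ~446 nm
print(round(peak_wavelength_nm(5778)))  # ~502 nm
```

Either way the peak lands in the visible band, consistent with the continuous spectrum described above; composition enters only through the superimposed absorption lines.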
Gases do not show the pond ripples from impacts that we see from the sun
surface.
What “impacts on the sun surface”?
On 05/10/2024 20:24, Brett wrote:
Brett <ggtgp@yahoo.com> wrote:
Here is what Sabine Hossenfelder thinks of modern physics, and she makes
money promoting physics to people on YouTube.
https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt
Sabine Hossenfelder is quite a good commentator, and I've seen many of
her videos before. Her points here are not new or contentious - there
is quite a lot of support in scientific communities for her argument here. We
have arguably reached a point in the science of cosmology and
fundamental physics where traditional scientific progress is unavoidably minimal. Basically, we cannot build big enough experiments to provide corroborating or falsifying evidence for current hypothetical models
that could explain quantum mechanics (known to be an extraordinarily
good model on small scales) and relativity (known to work well on large scales, and with many aspects confirmed in laboratory experiments). If gravity works like a quantum field mediated by a "graviton" boson, we'd
need a particle accelerator the size of the orbit of Jupiter to find it.
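The scale argument can be made concrete with the standard bending relation for a synchrotron, p [GeV/c] ~ 0.3 * B [T] * r [m]. The magnet field, radii, and Planck energy below are illustrative textbook ballparks, not figures from the thread:

```python
# Maximum beam momentum of a storage ring, in GeV/c, from the
# bending-magnet relation p ~ 0.3 * B[tesla] * r[metres].
def beam_energy_gev(b_tesla: float, radius_m: float) -> float:
    return 0.3 * b_tesla * radius_m

LHC_FIELD_T = 8.3           # LHC-class dipole field (assumed)
LHC_RADIUS_M = 4.3e3        # ~27 km circumference ring
JUPITER_ORBIT_M = 7.8e11    # radius of Jupiter's orbit
PLANCK_ENERGY_GEV = 1.2e19  # scale where quantum-gravity effects appear

print(beam_energy_gev(LHC_FIELD_T, LHC_RADIUS_M))     # ~1.1e4 GeV
print(beam_energy_gev(LHC_FIELD_T, JUPITER_ORBIT_M))  # ~1.9e12 GeV
```

On this crude scaling even a Jupiter-orbit ring with LHC-class magnets falls about seven orders of magnitude short of the Planck scale, which only strengthens the point about experimental reach.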
On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:
On 05/10/2024 20:24, Brett wrote:
Brett <ggtgp@yahoo.com> wrote:
Here is what Sabine Hossenfelder thinks of modern physics, and she makes money promoting physics to people on YouTube.
https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt
Sabine Hossenfelder is quite a good commentator, and I've seen many of
her videos before. Her points here are not new or contentious - there
is quite a lot of support in scientific communities for her argument here. We
have arguably reached a point in the science of cosmology and
fundamental physics where traditional scientific progress is unavoidably
minimal. Basically, we cannot build big enough experiments to provide
corroborating or falsifying evidence for current hypothetical models
Based on the success of Webb--we can, we just don't have access to
enough money to allow for building and shipping such a device up into
space. Optics-check, structure-check, rocket-check, where to put it-
check, telemetry and command-check.
mitchalsup@aol.com (MitchAlsup1) writes:
On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:
On 05/10/2024 20:24, Brett wrote:
Brett <ggtgp@yahoo.com> wrote:
Here is what Sabine Hossenfelder thinks of modern physics, and she makes money promoting physics to people on YouTube.
https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt
Sabine Hossenfelder is quite a good commentator, and I've seen many of
her videos before. Her points here are not new or contentious - there
is quite a lot of support in scientific communities for her argument here. We
have arguably reached a point in the science of cosmology and
fundamental physics where traditional scientific progress is unavoidably minimal. Basically, we cannot build big enough experiments to provide
corroborating or falsifying evidence for current hypothetical models
Based on the success of Webb--we can, we just don't have access to
enough money to allow for building and shipping such a device up into space. Optics-check, structure-check, rocket-check, where to put it-
check, telemetry and command-check.
An article in this week's Aviation Week and Space Technology noted
that Starship will be able to boost a payload that masses
thirty times the Webb for less cost than the Webb launch.
[a particle accelerator to find quantum gravitons would need to
be the size of the orbit of Jupiter]
I heard closer to Saturn, but you forgot that it would take 5% of
the sun's energy to power it.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
The spectrum of the Sun is primarily the continuous emissive one of a
“black body” at a surface temperature of 6500K or thereabouts.
Superimposed on that are absorption lines corresponding to a range of
elements, representing cooler substances in the surrounding “photosphere”,
I think it’s called.
Which of these lines do you think is characteristic of this mythical
“liquid metallic hydrogen” of yours?
Fun fact: originally it was thought that those lines in the spectra of the Sun and other stars were characteristic of the entire makeup of the bodies concerned. In other words, they were full of elements much like those that make up the Earth and other planetary bodies.
A young doctoral student named Cecilia Payne, after some careful study,
came to the remarkable conclusion that stars were mostly hydrogen and
helium, and these spectral lines were due, in effect, to relatively small
amounts of contaminants in among that bulk of hydrogen and helium.
Gases do not show the pond ripples from impacts that we see from the sun surface.
What “impacts on the sun surface”?
Watch the first few minutes of the first video in the playlist to see a
solar eruption and some of that mass crashing back down on the sun surface, causing pond ripples. The idea of a plasma gas sun dies right there.
On Mon, 7 Oct 2024 0:39:15 +0000, Scott Lurndal wrote:
mitchalsup@aol.com (MitchAlsup1) writes:
On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:
On 05/10/2024 20:24, Brett wrote:
Brett <ggtgp@yahoo.com> wrote:
Here is what Sabine Hossenfelder thinks of modern physics, and she makes money promoting physics to people on YouTube.
https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt
Sabine Hossenfelder is quite a good commentator, and I've seen many of her videos before. Her points here are not new or contentious - there is quite a lot of support in scientific communities for her argument here. We have arguably reached a point in the science of cosmology and
fundamental physics where traditional scientific progress is unavoidably minimal. Basically, we cannot build big enough experiments to provide corroborating or falsifying evidence for current hypothetical models
Based on the success of Webb--we can, we just don't have access to
enough money to allow for building and shipping such a device up into
space. Optics-check, structure-check, rocket-check, where to put it-
check, telemetry and command-check.
An article in this week's Aviation Week and Space Technology noted
that the starship will be able to boost a payload that masses
thirty times the Webb for less cost than the Webb launch.
I was counting on Starship in the above.
I was only complaining about the "can't" part.
Every piece of engineering is go--as long as someone will pay for it.
On 2024-10-07 1:08, Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:
A gas cannot emit the spectrum we see from the sun, liquid metallic
hydrogen can.
The spectrum of the Sun is primarily the continuous emissive one of a
“black body” at a surface temperature of 6500K or thereabouts.
Superimposed on that are absorption lines corresponding to a range of
elements, representing cooler substances in the surrounding
“photosphere”,
I think it’s called.
Which of these lines do you think is characteristic of this mythical
“liquid metallic hydrogen” of yours?
Fun fact: originally it was thought that those lines in the spectra of the Sun and other stars were characteristic of the entire makeup of the bodies concerned. In other words, they were full of elements much like those that make up the Earth and other planetary bodies.
A young doctoral student named Cecilia Payne, after some careful study, came to the remarkable conclusion that stars were mostly hydrogen and helium, and these spectral lines were due, in effect, to relatively small amounts of contaminants in among that bulk of hydrogen and helium.
Gases do not show the pond ripples from impacts that we see from the sun surface.
What “impacts on the sun surface”?
Watch the first few minutes of the first video in the playlist to see a
solar eruption and some of that mass crashing back down on the sun surface, causing pond ripples. The idea of a plasma gas sun dies right there.
Stratified fluid (non-plasma) atmospheres can support pond-like waves:
https://en.wikipedia.org/wiki/Gravity_wave#Atmosphere_dynamics_on_Earth.
Plasma can support /many/ kinds of waves because of the coupling of the charged particles to magnetic fields:
https://en.wikipedia.org/wiki/Waves_in_plasmas
I don't claim to know what kind of wave was shown in the video of the
solar eruption -- intuitively I would plump for gravity waves. But I
don't think liquid metallic hydrogen is needed to explain it.
kinds of proofs as "better" than others. Some dislike "proof by computer", and don't consider the four-colour theorem to be a proven theorem yet.
On 07/10/2024 03:34, MitchAlsup1 wrote:
Sabine Hossenfelder is quite a good commentator, and I've seen many of
her videos before. Her points here are not new or contentious - there
is quite a lot of support in scientific communities for her argument
here. We have arguably reached a point in the science of cosmology and
fundamental physics where traditional scientific progress is unavoidably
minimal. Basically, we cannot build big enough experiments to provide
corroborating or falsifying evidence for current hypothetical models.
Based on the success of Webb--we can, we just don't have access to
enough money to allow for building and shipping such a device up into
space. Optics-check, structure-check, rocket-check, where to put it-
check, telemetry and command-check.
An article in this week's Aviation Week and Space Technology noted
that the starship will be able to boost a payload that masses
thirty times the Webb for less cost than the Webb launch.
I was counting on Starship in the above.
I was only complaining about the "can't" part.
Every piece of engineering is go--as long as someone will pay for it.
No, the engineering is not remotely close to "go" for these things (the ridiculously large particle accelerators), even if there were an
unlimited supply of money.
There are, however, many other types of devices and experiments that
would be useful for physics research which /are/ possible from the engineering viewpoint, but lack the funding.
On 07/10/2024 08:29, Niklas Holsti wrote:
On 2024-10-07 1:08, Brett wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:
A gas cannot emit the spectrum we see from the sun, liquid metallic hydrogen can.
The spectrum of the Sun is primarily the continuous emissive one of a "black body" at a surface temperature of 6500K or thereabouts.
Superimposed on that are absorption lines corresponding to a range of elements, representing cooler substances in the surrounding "photosphere",
I think it's called.
Which of these lines do you think is characteristic of this mythical "liquid metallic hydrogen" of yours?
Fun fact: originally it was thought that those lines in the spectra
of the
Sun and other stars were characteristic of the entire makeup of the bodies
concerned. In other words, they were full of elements much like those that
make up the Earth and other planetary bodies.
A young doctoral student named Cecilia Payne, after some careful study,
came to the remarkable conclusion that stars were mostly hydrogen and helium, and these spectral lines were due, in effect, to relatively small
amounts of contaminants in among that bulk of hydrogen and helium.
Gases do not show the pond ripples from impacts that we see from the sun
surface.
What "impacts on the sun surface"?
Watch the first few minutes of the first video in the playlist to see a solar eruption and some of that mass crashing back down on the sun surface,
causing pond ripples. The idea of a plasma gas sun dies right there.
Stratified fluid (non-plasma) atmospheres can support pond-like waves:
https://en.wikipedia.org/wiki/Gravity_wave#Atmosphere_dynamics_on_Earth.
Note to Brett - gravity waves in a fluid are completely different from gravitational waves, such as those generated by black hole collisions
and detected by LIGO. You probably have some other magic snake oil
beliefs about those, but don't get them confused with gravity waves in a fluid.
On Mon, 7 Oct 2024 9:32:49 +0000, David Brown wrote:
On 07/10/2024 03:34, MitchAlsup1 wrote:
Sabine Hossenfelder is quite a good commentator, and I've seen many of
her videos before. Her points here are not new or contentious - there
is quite a lot of support in scientific communities for her argument
here. We have arguably reached a point in the science of cosmology and
fundamental physics where traditional scientific progress is unavoidably
minimal. Basically, we cannot build big enough experiments to provide
corroborating or falsifying evidence for current hypothetical models.
Based on the success of Webb--we can, we just don't have access to
enough money to allow for building and shipping such a device up into
space. Optics-check, structure-check, rocket-check, where to put it-
check, telemetry and command-check.
An article in this week's Aviation Week and Space Technology noted
that the starship will be able to boost a payload that masses
thirty times the Webb for less cost than the Webb launch.
I was counting on Starship in the above.
I was only complaining about the "can't" part.
Every piece of engineering is go--as long as someone will pay for it.
No, the engineering is not remotely close to "go" for these things (the
ridiculously large particle accelerators), even if there were an
unlimited supply of money.
We have all the technology we need to build a 2× Webb and to launch
it into space, or we will by the time it can be built.
kinds of proofs as "better" than others. Some dislike "proof by computer",
and don't consider the four-colour theorem to be a proven theorem yet.
"Proof by computer" can mean many different things. The 1976 proof by Appel&Haken failed to convince a number of mathematicians both because
of the use of a computer and because of the "inelegant", "brute
force" approach.
Regarding the use of a computer, it relied on ad-hoc code which used
brute force to check some large number of subproblems. For some mathematicians, it was basically some opaque piece of code saying "yes",
with no reason to be confident that the code actually did what the
authors intended it to do.
The 2005 proof by Gonthier also used a computer, but the program used
was a generic proof assistant. Arguably some "opaque brute force" code
was used as well, but it generated actual evidence of its claims, which
was then mechanically checked by the proof assistant.
That leaves a lot less room for arguing that it's not valid.
I haven't heard anyone express doubts about that proof yet.
Stefan
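[Editorial aside, not from the thread: the distinction Stefan draws can be
made concrete. In a generic proof assistant such as Lean 4, a "proof by
computer" is a proof term that a small trusted kernel re-checks, so trust
rests on the kernel rather than on ad-hoc search code. A minimal sketch:]

```lean
-- A machine-checked proof: the kernel verifies this term itself,
-- so there is no opaque program whose "yes" we must take on faith.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

[Gonthier's 2005 proof worked in Coq rather than Lean, but the principle
is the same: the brute-force enumeration emits evidence that the proof
assistant's kernel independently checks.]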
"Proof by computer" can mean many different things. The 1976 proof by Appel&Haken failed to convince a number of mathematicians both because
of the use of a computer and because of the "inelegant", "brute force" approach.
Even if 99% is correct, there were still 6-7 figures worth of
dual-processor x86 systems sold each year and starting from 1997 at
least tens of thousands of quads.
Absence of ordering definitions should have been a problem for a lot of
people. But somehow, it was not.
mitchalsup@aol.com (MitchAlsup1) wrote:
Also note: this was just after the execution pipeline went
Great Big Out of Order, and thus made the lack-of-ordering
problems much more visible to applications. {Pentium Pro}