• Is Intel exceptionally unsuccessful as an architecture designer?

    From John Dallman@21:1/5 to All on Fri Sep 13 20:51:00 2024
    The tribe of x86 architectures didn't originate as an Intel design. The
    8008 ISA originated at Datapoint, and grew through the 8080 and 8085.
    Intel recognised their limitations, and decided to make something better,
    but the iAPX 432 took time to mature and the 8086 was designed as an
    extended 8080 to keep the company going until the 432 succeeded.

The 432 was a total failure, but the x86 line kept the company going and
growing. Then they came up with the i960, which had some success as a
high-end embedded processor, but was cancelled when Intel acquired rights
to DEC's StrongARM cores. They produced XScale as an improved StrongARM,
then sold the line.

    The i860 was a pretty comprehensive failure, but the x86 line made them
    into a behemoth. Then they decided to phase that out and do Itanium. It
    was less of a failure than 432 or i860, but they had to adopt AMD's
    x86-64 ISA to avoid shrinking themselves into a subsidiary of HP.

    Not many computer companies survive three failed architectures: has that
    record been beaten?

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to All on Fri Sep 13 23:18:01 2024
    You forgot the dismal financial drain IA64 placed on Intel.

  • From Anton Ertl@21:1/5 to John Dallman on Sat Sep 14 07:29:02 2024
    jgd@cix.co.uk (John Dallman) writes:
    The tribe of x86 architectures didn't originate as an Intel design. The
    8008 ISA originated at Datapoint, and grew through the 8080 and 8085.
    Intel recognised their limitations, and decided to make something better,
    but the iAPX 432 took time to mature and the 8086 was designed as an
    extended 8080 to keep the company going until the 432 succeeded.

The 432 was a total failure, but the x86 line kept the company going and
growing. Then they came up with the i960, which had some success as a
high-end embedded processor, but was cancelled when Intel acquired rights
to DEC's StrongARM cores.

The i960 was the outcome of salvaging another project, BiiN, which had
similar goals to the 432 (judging from the Wikipedia article) and had
many 432 veterans. But apparently they learned from their mistakes:
the base architecture is a RISC, and it was competitive for a while.

    Reading the 386 oral history, Intel's idea at the start of the 386
    project was that BiiN was going to be the big thing, and 386 just a
    stopgap for those who had already invested in the 80286 (similar to
    432 and 8086). At that point the IBM PC existed, but there was no big
    PC market yet. Some time during the project, the PC market grew far
    beyond expectations, and the 386 became the main project of Intel.
And they then rode with 386 follow-ons until they produced the same
situation again with IA-64 against the Pentium Pro and its follow-ons.

    Anyway, the kind of market that BiiN was developed for did not appear
    for BiiN. Tandem and Stratus owned this market; my impression was
    that BiiN was intended to be an alternative to those instead of
    marketing the BiiN hardware to them. Tandem went with MIPS
    processors, and Stratus with the 68000, then, strangely, i860, then HP
    PA, and finally Intel Xeon.

    So when BiiN stopped as a project and company in 1989, the base
    architecture, the i960 was salvaged and marketed as an embedded
    processor (Intel management at the time did not want to market it for
    Unix systems, probably because they were already marketing the 486 for
    PCs and the i860 as super-chip).

    According to Wikipedia, the first i960 taped out in the same month as
    the 386, but that apparently was only for internal BiiN usage at the
    time, and it only made commercial appearance in 1989 as embedded
    processor with all the more advanced features disabled.

    One interesting i960 is the i960CA (announced in July 1989), which is
    the first single-chip superscalar: dual issue, one integer, one
    memory, and one branch instruction can be performed at the same time,
    at 33MHz (the R3000 came out in 1989 with single-issue and similar
    clock).

    Intel reassigned the development teams in 1990, with the now-former
    i960 team working on the Pentium Pro and a smaller team working on the
    i960, so the i960 fell back relative to the competition. Given the
    decision to market it only for embedded systems, that's probably understandable. The decision to replace it with StrongARM is in the
    same vein.

    If they had designed and marketed the i960 for the Unix market and if
    they had done that from the start, they probably would have been among
    the first commercial RISCs, maybe the first. My guess is that an MMU
    for Unix takes less development effort and less silicon than all the
    features they designed in for BiiN, and they would have taped out and
    released earlier. It's not clear how that would have turned out in
    the long run: Would IA-32/AMD64 still have taken over the Unix market?
    Would they have started IA-64?

    The i860 was a pretty comprehensive failure, but the x86 line made them
    into a behemoth.

    According to <https://web.archive.org/web/20220705003416/https://spectrum.ieee.org/intel-i860>,
this was started around the same time as the 486. Unlike the i960,
this was marketed as a high-performance general-purpose CPU, probably
to address the widespread belief at the time that RISCs were the future
and IA-32 was doomed. But apparently the i860 was designed for very
good performance from perfectly scheduled code, but was not so great
at usual compiler output (a mistake repeated in IA-64; so they did not
learn from their mistakes in this case). I remember the explicitly
pipelined FPU, but don't remember anything else where the i860 would
perform worse than other RISCs of its time. Anyway, the i860 did not
see mainstream general-purpose use.

    Then they decided to phase that out and do Itanium.

    There was never a successor to the i860, so IA-64 (which was not
    started at Intel until 1994) is unlikely to have anything to do with
it. Given that they wanted to market the i860 as a high-end CPU, I
expect that there was a follow-up project in the works when the i860
was released, but either that project failed (which would not surprise
me, as i860 features look like bad ideas along the lines of
branch-delay slots, only more of them), or Intel decided that the
market for the i860 was too small and canceled the project (or both).

    It
    was less of a failure than 432 or i860, but they had to adopt AMD's
    x86-64 ISA to avoid shrinking themselves into a subsidiary of HP.

    It seems to me that IA-64 was a bigger failure: More money invested,
    and more money lost (probably even relative to the size of the company
    at the time).

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From Michael S@21:1/5 to Anton Ertl on Sun Sep 15 00:06:39 2024
    On Sat, 14 Sep 2024 07:29:02 GMT
    anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:


    It seems to me that IA-64 was a bigger failure: More money invested,
    and more money lost (probably even relative to the size of the company
    at the time).

    - anton

    But more money made, too.
    I'd suppose, in its later days, when all ambitions evaporated, Itanium
    became a decent cache cow for Intel. Not spectacular, of course, just
    decent.
i860 didn't quite reach the state of cache cow. And i432 didn't reach
anything.

  • From MitchAlsup1@21:1/5 to Michael S on Sat Sep 14 22:49:21 2024
    On Sat, 14 Sep 2024 21:06:39 +0000, Michael S wrote:

    On Sat, 14 Sep 2024 07:29:02 GMT
    anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:


    It seems to me that IA-64 was a bigger failure: More money invested,
    and more money lost (probably even relative to the size of the company
    at the time).

    - anton

    But more money made, too.
    I'd suppose, in its later days, when all ambitions evaporated, Itanium
    became a decent cache cow for Intel. Not spectacular, of course, just
    decent.

    I do not believe that the sales revenue even met the engineering and manufacturing costs.

i860 didn't quite reach the state of cache cow. And i432 didn't reach anything.

  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sun Sep 15 00:42:20 2024
    On Sun, 15 Sep 2024 00:06:39 +0300, Michael S wrote:

    ... cache cow ...

    Freudian slip? ;)

  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sun Sep 15 03:51:09 2024
    On Sun, 15 Sep 2024 00:42:20 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 15 Sep 2024 00:06:39 +0300, Michael S wrote:

    ... cache cow ...

    Freudian slip? ;)

    It could be

  • From Michael S@21:1/5 to mitchalsup@aol.com on Sun Sep 15 11:22:16 2024
    On Sat, 14 Sep 2024 22:49:21 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Sat, 14 Sep 2024 21:06:39 +0000, Michael S wrote:

    On Sat, 14 Sep 2024 07:29:02 GMT
    anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:


    It seems to me that IA-64 was a bigger failure: More money
    invested, and more money lost (probably even relative to the size
    of the company at the time).

    - anton

    But more money made, too.
    I'd suppose, in its later days, when all ambitions evaporated,
    Itanium became a decent cache cow for Intel. Not spectacular, of
    course, just decent.

    I do not believe that the sales revenue even met the engineering and manufacturing costs.


ASP was certainly many times higher than manufacturing cost, esp. after migration to 90nm in 2006.
    Engineering cost was huge up until 2010, but significant part of what
    was spent in 2005-2010 (development of QPI) was reused by Xeons.
    In 2010-2012 engineering cost was probably quite moderate.
    From 2013 to EOL in 2022 engineering cost was very low.
So, even if the Itanium enterprise as a whole lost a lot of money, its
last 12-13 years taken in isolation were likely quite profitable.

  • From MitchAlsup1@21:1/5 to Michael S on Mon Sep 16 23:48:56 2024
    On Sun, 15 Sep 2024 8:22:16 +0000, Michael S wrote:

    On Sat, 14 Sep 2024 22:49:21 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:


    I do not believe that the sales revenue even met the engineering and
    manufacturing costs.


ASP was certainly many times higher than manufacturing cost, esp. after migration to 90nm in 2006.
    Engineering cost was huge up until 2010, but significant part of what
    was spent in 2005-2010 (development of QPI) was reused by Xeons.

Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.

    Did they sell $1B of these things ??
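That estimate works out as a quick back-of-envelope calculation; the headcount, duration, and per-engineer cost are the post's assumptions, not official Intel figures:

```python
# Back-of-envelope check of the engineering-cost estimate above.
# All figures are the post's assumptions, not official Intel numbers.
engineers = 200
years = 20
cost_per_engineer_year = 200_000  # USD: roughly half salary, half SW/HW/overhead

total_cost = engineers * years * cost_per_engineer_year
print(f"${total_cost / 1e9:.1f}B")  # → $0.8B
```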

    In 2010-2012 engineering cost was probably quite moderate.
    From 2013 to EOL in 2022 engineering cost was very low.
So, even if the Itanium enterprise as a whole lost a lot of money, its
last 12-13 years taken in isolation were likely quite profitable.

  • From Michael S@21:1/5 to mitchalsup@aol.com on Tue Sep 17 10:57:35 2024
    On Mon, 16 Sep 2024 23:48:56 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Sun, 15 Sep 2024 8:22:16 +0000, Michael S wrote:

    On Sat, 14 Sep 2024 22:49:21 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:


    I do not believe that the sales revenue even met the engineering
    and manufacturing costs.


ASP was certainly many times higher than manufacturing cost, esp.
after migration to 90nm in 2006.
    Engineering cost was huge up until 2010, but significant part of
    what was spent in 2005-2010 (development of QPI) was reused by
    Xeons.

Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.


    Why would they need 200 engineers before 1998?
    Or after 2010?
    Why would they need more than 2-3 engineers after 2012?

    Did they sell $1B of these things ??


    I don't know, but would think that the answer is yes.

    In the best years (2007-2008) HP sold approximately 75K Itanium boxen
    per year. Assuming an average of 3 CPUs per box and 3.5K USD per CPU
    that gives 0.79 B/y. For the rest of IPF life they were selling
    significantly less, but still selling something.
    And there were other vendors beyond HP, although nearly all of them
    jumped ship before 2008.
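The revenue estimate above can be sketched the same way; the box count, CPUs per box, and ASP are the poster's guesses, not audited data:

```python
# Rough peak-year Itanium revenue from the figures quoted above.
# Box count, CPUs per box, and ASP are the poster's assumptions.
boxes_per_year = 75_000
cpus_per_box = 3
asp_usd = 3_500

revenue = boxes_per_year * cpus_per_box * asp_usd
print(f"${revenue / 1e9:.2f}B/year")  # → $0.79B/year
```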

    In 2010-2012 engineering cost was probably quite moderate.
    From 2013 to EOL in 2022 engineering cost was very low.
So, even if the Itanium enterprise as a whole lost a lot of money, its
last 12-13 years taken in isolation were likely quite profitable.

  • From MitchAlsup1@21:1/5 to Michael S on Tue Sep 17 19:58:08 2024
    On Tue, 17 Sep 2024 7:57:35 +0000, Michael S wrote:

    On Mon, 16 Sep 2024 23:48:56 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.


    Why would they need 200 engineers before 1998?
    Or after 2010?
    Why would they need more than 2-3 engineers after 2012?

A friend of mine worked on ITanic in Longmont, CO and related the
size of the team. He worked there from about 1995-2019.

    And my numbers did not include the software engineers on the project.

    Did they sell $1B of these things ??


    I don't know, but would think that the answer is yes.

    In the best years (2007-2008) HP sold approximately 75K Itanium boxen
    per year. Assuming an average of 3 CPUs per box and 3.5K USD per CPU
    that gives 0.79 B/y. For the rest of IPF life they were selling
    significantly less, but still selling something.
    And there were other vendors beyond HP, although nearly all of them
    jumped ship before 2008.

    To make "real" money MFG costs have to be less than ¼ sales price.
    Somebody has to "Pay for * the FAB".

    And it always bothered me that companies spend $1B+ to make a
    FAB that produces $0.50 parts that go in $1.00 packages made
    in a factory costing $50M, with $0.25 test costs.


    (*) that part of the FAB capacity they occupy.
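The "less than ¼ of sales price" rule of thumb above can be written down directly; the example prices here are hypothetical:

```python
# The rule of thumb from the post: to make "real" money, manufacturing
# cost must stay under a quarter of the sale price (the rest covers the
# fab share, engineering, sales, and profit). Example prices are made up.
def meets_margin_rule(sale_price: float, mfg_cost: float) -> bool:
    return mfg_cost < sale_price / 4

print(meets_margin_rule(3_500, 700))    # → True  (MFG is 20% of price)
print(meets_margin_rule(3_500, 1_200))  # → False (MFG is ~34% of price)
```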

  • From Michael S@21:1/5 to mitchalsup@aol.com on Wed Sep 18 00:50:45 2024
    On Tue, 17 Sep 2024 19:58:08 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Tue, 17 Sep 2024 7:57:35 +0000, Michael S wrote:

    On Mon, 16 Sep 2024 23:48:56 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

Engineering costs were at least 200 engineers for 2 decades at
approximately $200K/engineer/year: ½ salary, ½ SW+HW+overhead.
This turns out to be $0.8B; sales costs would be extra.


    Why would they need 200 engineers before 1998?
    Or after 2010?
    Why would they need more than 2-3 engineers after 2012?

A friend of mine worked on ITanic in Longmont, CO and related the
size of the team. He worked there from about 1995-2019.


Can you ask him what exactly he did in 2013-2019?
    From the outside it looks like no Itanium-related HW development was
    done by Intel after 2012.

    And my numbers did not include the software engineers on the project.

    Did they sell $1B of these things ??


    I don't know, but would think that the answer is yes.

    In the best years (2007-2008) HP sold approximately 75K Itanium
    boxen per year. Assuming an average of 3 CPUs per box and 3.5K USD
    per CPU that gives 0.79 B/y. For the rest of IPF life they were
    selling significantly less, but still selling something.
    And there were other vendors beyond HP, although nearly all of them
    jumped ship before 2008.

To make "real" money MFG costs have to be less than ¼ sales price.
    Somebody has to "Pay for * the FAB".

    And it always bothered me that companies spend $1B+ to make a
    FAB that produces $0.50 parts that go in $1.00 packages made
    in a factory costing $50M, with $0.25 test costs.


    (*) that part of the FAB capacity they occupy.

For the majority of its life, Itanium CPUs were manufactured on
silicon nodes that were no longer usable for mass-market x86. For
    example:
    90nm - 2006-2009
    65nm - 2010-2012
    32nm - 2015 to 2021
    Periods when Itanium competed with x86 for fab capacity were relatively
    brief.

  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Tue Sep 17 23:45:50 2024
    On Tue, 17 Sep 2024 23:30:04 +0000, Lawrence D'Oliveiro wrote:

    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has that
    record been beaten?

I think it’s fair to say that both Intel and Microsoft were companies more
renowned for marketing prowess than actual technical brilliance.

And now that that marketing prowess is fading somewhat, both companies are
suffering a bit from a marketplace that is changing faster than they can
cope.

Just last night, I was in a conversation with someone trying to start up
a new company that wants to compete in the "server market". Direct quote:
"the CPUs are simply I/O managers to the Inference Engines and GPUs."

Back when I was doing GPUs*, the physics was still done on the CPUs with
the rendering done on the GPUs. That physics is now being done on the
GPUs. (*) 2012-2015

    Who here thinks that CPUs have become the CDC 6600 PPs for the GPUs and Inference Engines ??

  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 02:54:51 2024
    On Tue, 17 Sep 2024 23:30:04 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has
    that record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies
    more renowned for marketing prowess than actual technical brilliance.

    And now that that marketing prowess is fading somewhat, both
    companies are suffering a bit from a marketplace that is changing
    faster than they can cope.

    There are few things Intel would wish more than to "suffer"
    financially like Microsoft.

  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Tue Sep 17 23:30:04 2024
    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has that record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies more renowned for marketing prowess than actual technical brilliance.

    And now that that marketing prowess is fading somewhat, both companies are suffering a bit from a marketplace that is changing faster than they can
    cope.

  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Wed Sep 18 00:42:32 2024
    On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:

    There are few things Intel would wish more than to "suffer"
    financially like Microsoft.

    It is true that Microsoft is not (yet) losing money, but still the
    revenues from its Windows cash cow cannot be what they used to be, if you
    look at the declining level of investment Microsoft is putting back into
    its flagship OS.

    Its game division is also undergoing a bit of an upheaval at the moment.
    Its own games are moving away from being exclusives to its own console platform.

    And look at its ongoing unsuccessful attempts to port Windows to the ARM architecture.

  • From Lawrence D'Oliveiro@21:1/5 to All on Wed Sep 18 00:44:44 2024
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have access
    to the sheer quantity of RAM that is available to the CPU. And motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 00:57:59 2024
    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have access
    to the sheer quantity of RAM that is available to the CPU. And motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.

    Then there is PCIe-CXL providing 1TB <of some kind of flash> you can
    buy today. Plug and play. PCIe allows direct device to device transfers.
    {GPU = 1 device, IE = 1 device, CXL-RAM = 1 device} all the CPU has to
    do is program the I/O MMU to allow them to do their own thing, and deal
    with the keyboard and mouse activities.

  • From Lawrence D'Oliveiro@21:1/5 to All on Wed Sep 18 01:27:03 2024
    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.

    Was it upgradeable? Or was it soldered in?

    ... all the CPU has to
    do is program the I/O MMU to allow them to do their own thing, and deal
    with the keyboard and mouse activities.

    That’s not how interactive timesharing worked in the old days, and it certainly won’t be sufficient for interactive work today.

You *did* say “servers” though, didn’t you?

  • From Stephen Fuld@21:1/5 to Lawrence D'Oliveiro on Tue Sep 17 21:57:24 2024
    On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:
    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has that
    record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies more renowned for marketing prowess than actual technical brilliance.

For some years, Intel was known for its prowess in FAB technology. After
all, they managed to make a "difficult" architecture outperform CPUs
from better architectures.

Some time ago, they seem to have lost that FAB leadership, particularly
to TSMC.



    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)

  • From Anton Ertl@21:1/5 to mitchalsup@aol.com on Wed Sep 18 05:40:07 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    Just last night, I was in a conversation with someone trying to start up
    a new company that wants to compete in the "server market". Direct
    quote;
    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    The investor pitch of a startup that cannot compete on the CPU side.

Who here thinks that CPUs have become the CDC 6600 PPs for the GPUs and Inference Engines ??

The big difference is that, despite the CDC 6600 being a supercomputer,
the majority of its software (measured in lines of code) ran on the
CPU, not on the PPs.

    Later supercomputers had a general-purpose computer for doing all the
    menial stuff, so that the supercomputer could focus on the matrix
    multiplies that it was good at. I can believe that on those systems
    the software on the GP computer was bigger than the software on the supercomputer.

For some machines (e.g., high-end game PCs, game consoles and whatever
is done for training deep neural networks), I expect that the relation
between the CPU and its accelerators is similar to that between the
general-purpose computer and the supercomputer: most power spent in the
accelerator, most lines of code run on the CPU. Although, thinking
again, are accelerators used for training, or only for running DNNs?

    For servers running data bases, web shops, social networks, etc., and
    for, e.g., people doing C++ or Rust development, I expect that
    accelerators play little role.

    Just last night I talked to Jens Palsberg who works on quantum
    computing. He told me that those working on quantum computer
    simulators find it too hard to program GPUs, so optimizations for CPUs
are in demand. OTOH, earlier that day he gave a presentation on a
different topic, where he worked on an optimization that made it
possible to eliminate branches; and his collaborators, who work for
Meta, were very intent on eliminating all branches, because that would
allow them to run on specialized hardware, i.e., an accelerator.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

  • From Lawrence D'Oliveiro@21:1/5 to Stephen Fuld on Wed Sep 18 06:46:54 2024
    On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:

    On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:

    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has
    that record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies
    more renowned for marketing prowess than actual technical brilliance.

For some years, Intel was known for its prowess in FAB technology. After
all, they managed to make a "difficult" architecture outperform CPUs
from better architectures.

    Sure. By spending 10× on it what RISC-based competitors were able to.

    Some time ago, they seem to have lost that FAB leadership particularly
    to TSMC.

    And the reason? Those costs kept going up and up, while the profits from
    sales of x86 chips failed to keep pace.

  • From Lawrence D'Oliveiro@21:1/5 to Anton Ertl on Wed Sep 18 06:31:46 2024
    On Wed, 18 Sep 2024 05:40:07 GMT, Anton Ertl wrote:

    Just last night I talked to Jens Palsberg who works on quantum
    computing.

    What kind? Has he got Shor’s algorithm working yet?

  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 10:41:45 2024
    On 18/09/2024 02:42, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:

    There are few things Intel would wish more than to "suffer"
    financially like Microsoft.

    It is true that Microsoft is not (yet) losing money, but still the
    revenues from its Windows cash cow cannot be what they used to be, if you look at the declining level of investment Microsoft is putting back into
    its flagship OS.


    I think MS has long ago stopped viewing desktop Windows as a cash cow.
    But it still gets in a lot of money from server versions, as well as
    server software such as MS SQL server. (The client access licences for
    these cost far more than Windows desktop ever did.) Their main cash
    cow, I believe, is subscriptions to Office365 and associated software
    where they have a near-monopoly for business use. (I expect Azure and everything there also makes money, but it has to compete with other
    cloud companies.)

    Its game division is also undergoing a bit of an upheaval at the moment.
    Its own games are moving away from being exclusives to its own console platform.

    And look at its ongoing unsuccessful attempts to port Windows to the ARM architecture.

    I'd rather not look at that, thanks!

  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 13:34:16 2024
    On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Soldered in.

    ... all the CPU has to
    do is program the I/O MMU to allow them to do their own thing, and deal
    with the keyboard and mouse activities.

    That’s not how interactive timesharing worked in the old days, and it certainly won’t be sufficient for interactive work today.

    All the interaction is keyboard, mouse, display and internet.
    The server can be located anywhere in the world.

You *did* say “servers” though, didn’t you?


    Yes, server, not something within 10 feet of user.

  • From MitchAlsup1@21:1/5 to All on Wed Sep 18 14:37:12 2024
    On Wed, 18 Sep 2024 13:34:16 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

"the CPUs are simply I/O managers to the Inference Engines and GPUs."
    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Soldered in.

    Onto modules that are easily replaceable.

    ... all the CPU has to
    do is program the I/O MMU to allow them to do their own thing, and deal
    with the keyboard and mouse activities.

    That’s not how interactive timesharing worked in the old days, and it
    certainly won’t be sufficient for interactive work today.

    All the interaction is keyboard, mouse, display and internet.
    The server can be located anywhere in the world.

    You *did* say “servers” though, didn’t you?


    Yes, server, not something within 10 feet of user.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 15:40:51 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    And, your lack of knowledge strikes again. Such bespoke CPUs
    are more and more common every month, with many currently
    in tape-out or late-stage design by several fabless semiconductor
    companies.


    Why? It comes down to RAM. Those addon processors will never have access
    to the sheer quantity of RAM that is available to the CPU.

    That's also incorrect. There is nothing preventing them from
    accessing huge amounts of RAM when included on-die or via
    chiplets. Consider CXL-Cache, which provides high-bandwidth
    low-latency access to huge amounts of DRAM. Consider stacked
    HBM. Consider not drawing conclusions from insufficient knowledge.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 15:48:45 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Why does it matter?

    https://en.wikipedia.org/wiki/High_Bandwidth_Memory


    ... all the CPU has to
    do is program the I/O MMU to allow them to do their own thing, and deal
    with the keyboard and mouse activities.

    That’s not how interactive timesharing worked in the old days, and it certainly won’t be sufficient for interactive work today.

    Another incorrect generalization from LDO.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to mitchalsup@aol.com on Wed Sep 18 15:50:09 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."
    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and 2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Soldered in.

    Or in the case of HBM, directly stacked on the processor at the
    fab.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 15:51:15 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:

    On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:

    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has
    that record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies
    more renowned for marketing prowess than actual technical brilliance.

    For some years, Intel was known for its prowess in FAB technology. After
    all, they managed to make a "difficult" architecture outperform CPUs
    from better architectures.

    Sure. By spending 10× what RISC-based competitors were able to.

    Some time ago, they seem to have lost that FAB leadership particularly
    to TSMC.

    And the reason? Those costs kept going up and up, while the profits from sales of x86 chips failed to keep pace.

    No. Intel made several management missteps and was too late to the party. Had
    nothing to do with "keeping pace".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Wed Sep 18 19:04:14 2024
    On Wed, 18 Sep 2024 15:40:51 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    And, your lack of knowledge strikes again. Such bespoke CPUs
    are more and more common every month, with many currently
    in tape-out or late-stage design by several fabless semiconductor
    companies.


    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU.

    That's also incorrect. There is nothing preventing them from
    accessing huge amounts of RAM when included on-die or via
    chiplets. Consider CXL-Cache, which provides high-bandwidth
    low-latency access to huge amounts of DRAM. Consider stacked
    HBM. Consider not drawing conclusions from insufficient knowledge.


    Low latency?
    I'd think that the latency here is at the very least 5x higher than the 45-60ns
    figures typical for Intel/AMD/Apple/Qualcomm client CPUs.
    And I am afraid that 10x is more common than 5x.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Wed Sep 18 19:00:27 2024
    On Wed, 18 Sep 2024 15:50:09 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the
    CPU. And motherboard-based CPU RAM is upgradeable, as well,
    whereas addon cards tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and
    2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Soldered in.

    Or in the case of HBM, directly stacked on the processor at the
    fab.


    It's not easy to get 256GB via HBM.
    To give one example, Fujitsu A64Fx got only 32GB.
    It was 5 years ago and some progress was made since then, but density improvements nowadays are sloooooooow.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Michael S on Wed Sep 18 16:23:01 2024
    On Wed, 18 Sep 2024 16:04:14 +0000, Michael S wrote:

    On Wed, 18 Sep 2024 15:40:51 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    And, your lack of knowledge strikes again. Such bespoke CPUs
    are more and more common every month, with many currently
    in tape-out or late-stage design by several fabless semiconductor
    companies.


    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU.

    That's also incorrect. There is nothing preventing them from
    accessing huge amounts of RAM when included on-die or via
    chiplets. Consider CXL-Cache, which provides high-bandwidth
    low-latency access to huge amounts of DRAM. Consider stacked
    HBM. Consider not drawing conclusions from insufficient knowledge.


    Low latency?
    I'd think that the latency here is at the very least 5x higher than the 45-60ns
    figures typical for Intel/AMD/Apple/Qualcomm client CPUs.
    And I am afraid that 10x is more common than 5x.

    2 points:

    Yes: CXL-Cache and CXL memory have significantly more latency than
    direct-attach DRAM DIMMs. With PCIe 6.0 this should be an adder
    of 50-60 ns over the latency of the cache hierarchy on die. So,
    yes, it is slower than the kinds of caches we are used to, but
    when compared to the latency of SSD and spinning rust it is
    way better.
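    To put those numbers side by side, here is a rough sketch; all figures are
    illustrative order-of-magnitude assumptions, not measurements of any
    particular part:

```python
# Approximate access latencies (assumed figures, for scale only).
latencies_ns = {
    "direct DDR DIMM": 60,                  # typical client-CPU load-to-use
    "CXL.mem (PCIe 6.0 adder)": 60 + 55,    # ~50-60 ns adder over a DIMM
    "NVMe SSD read": 80_000,                # ~80 us
    "spinning-disk seek": 8_000_000,        # ~8 ms
}
base = latencies_ns["direct DDR DIMM"]
for name, ns in latencies_ns.items():
    # Show each latency and its ratio to a direct DIMM access.
    print(f"{name:26s} {ns:>10,} ns  ({ns / base:9.1f}x a DIMM access)")
```

    CXL roughly doubles DRAM latency; SSD and spinning rust are orders of
    magnitude further away, which is the point being made above.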

    -----

    On the other hand, and this is where the deprecation of the CPUs
    comes in: the engines consuming the data are bandwidth machines
    {GPUs and Inference engines}, which are quite insensitive to
    latency (they are not latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly
    degrade the overall GPU performance--while it would KILL any
    typical CPU architecture.
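    The latency-tolerance argument can be sketched with Little's law:
    sustained bandwidth equals bytes in flight divided by latency. The clock
    rate and in-flight request counts below are illustrative assumptions,
    not figures for any specific CPU or GPU:

```python
# Little's law sketch: sustained bandwidth = in-flight bytes / latency.
line_bytes = 64                       # one cache line per request
latency_cycles = 400
clock_ghz = 2.0                       # assumed clock
latency_s = latency_cycles / (clock_ghz * 1e9)   # 200 ns per access

def bandwidth_gbs(outstanding_requests):
    """GB/s sustained with this many memory requests in flight."""
    return outstanding_requests * line_bytes / latency_s / 1e9

# A CPU core tracking ~16 outstanding misses:
print(f"CPU-like (  16 in flight): {bandwidth_gbs(16):7.1f} GB/s")
# A GPU keeping thousands of requests in flight across its threads:
print(f"GPU-like (2048 in flight): {bandwidth_gbs(2048):7.1f} GB/s")
```

    With the same 400-cycle latency, the machine that can keep thousands of
    requests in flight sustains two orders of magnitude more bandwidth,
    which is why the latency hardly degrades GPU throughput.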

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Scott Lurndal on Wed Sep 18 16:28:08 2024
    Scott Lurndal <scott@slp53.sl.home> schrieb:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:

    On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:

    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has
    that record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies more renowned for marketing prowess than actual technical brilliance.

    For some years, Intel was known for its prowess in FAB technology. After all, they managed to make a "difficult" architecture outperform CPUs
    from better architectures.

    Sure. By spending 10× what RISC-based competitors were able to.

    Some time ago, they seem to have lost that FAB leadership particularly
    to TSMC.

    And the reason? Those costs kept going up and up, while the profits from sales of x86 chips failed to keep pace.

    No. Intel made several management mis-steps and was too late to the party.

    Which management missteps did you mean, and for which particular party
    were they late?

    Had
    nothing to do with "keeping pace".

    Not sure what you mean... could you explain a bit more?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Thomas Koenig on Wed Sep 18 17:00:01 2024
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Scott Lurndal <scott@slp53.sl.home> schrieb:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 17 Sep 2024 21:57:24 -0700, Stephen Fuld wrote:

    On 9/17/2024 4:30 PM, Lawrence D'Oliveiro wrote:

    On Fri, 13 Sep 2024 20:51 +0100 (BST), John Dallman wrote:

    Not many computer companies survive three failed architectures: has that record been beaten?

    I think it’s fair to say that both Intel and Microsoft were companies more renowned for marketing prowess than actual technical brilliance.
    For some years, Intel was known for its prowess in FAB technology. After all, they managed to make a "difficult" architecture outperform CPUs
    from better architectures.

    Sure. By spending 10× what RISC-based competitors were able to.

    Some time ago, they seem to have lost that FAB leadership particularly to TSMC.

    And the reason? Those costs kept going up and up, while the profits from sales of x86 chips failed to keep pace.

    No. Intel made several management missteps and was too late to the party.

    Which management missteps did you mean, and for which particular party
    were they late?

    Itanium and AMD64 respectively. The P7 was an interesting
    design. Merced et al., an epic failure.


    Had
    nothing to do with "keeping pace".

    Not sure what you mean... could you explain a bit more?

    They made some poor engineering choices and completely
    missed 10nm, and lost processor focus (graphics, wireless).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Wed Sep 18 19:01:48 2024
    On 18/09/2024 18:00, Michael S wrote:
    On Wed, 18 Sep 2024 15:50:09 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the
    CPU. And motherboard-based CPU RAM is upgradeable, as well,
    whereas addon cards tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and
    2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Soldered in.

    Or in the case of HBM, directly stacked on the processor at the
    fab.


    It's not easy to get 256GB via HBM.
    To give one example, Fujitsu A64Fx got only 32GB.
    It was 5 years ago and some progress was made since then, but density improvements nowadays are sloooooooow.


    <https://www.servethehome.com/micron-hbm3e-12-high-36gb-higher-capacity-ai-accelerators-shipping/>
    <https://www.servethehome.com/a-quick-introduction-to-the-nvidia-gh200-aka-grace-hopper-arm/>

    It's certainly not /cheap/ to have 256GB (or more) with HBM, but it is
    not unrealistic.
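    Back-of-envelope, using the 36 GB 12-high HBM3E stack figure from the
    linked Micron announcement:

```python
import math

# Stacks needed to reach 256 GB with 36 GB HBM3E stacks.
stack_gb = 36                 # Micron HBM3E 12-high stack capacity
target_gb = 256
stacks = math.ceil(target_gb / stack_gb)
print(f"{stacks} stacks -> {stacks * stack_gb} GB")
```

    Eight stacks around the die gets you past 256 GB, which is within reach
    of current accelerator packaging, if not cheap.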

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Wed Sep 18 18:48:44 2024
    On Wed, 18 Sep 2024 17:01:48 +0000, David Brown wrote:

    On 18/09/2024 18:00, Michael S wrote:
    On Wed, 18 Sep 2024 15:50:09 GMT

    It's not easy to get 256GB via HBM.
    To give one example, Fujitsu A64Fx got only 32GB.
    It was 5 years ago and some progress was made since then, but density
    improvements nowadays are sloooooooow.


    <https://www.servethehome.com/micron-hbm3e-12-high-36gb-higher-capacity-ai-accelerators-shipping/>
    <https://www.servethehome.com/a-quick-introduction-to-the-nvidia-gh200-aka-grace-hopper-arm/>

    It's certainly not /cheap/ to have 256GB (or more) with HBM, but it is
    not unrealistic.

    Consider the cost of the power it takes to feed a rack that consumes
    100 kW continuously for a year, and don't forget to add in the cooling
    costs to remove that 100 kW from the rack while computing the cost of
    the power. Using $0.15/kWh = $170,000 per year per rack (including
    cooling).

    The cost of 256GB of memory fades into insignificance.
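    A quick sanity check of that figure (the 1.3 cooling overhead, a typical
    datacenter PUE, is an assumption):

```python
# Annual energy cost of a 100 kW rack at $0.15/kWh, plus cooling.
rack_kw = 100
hours_per_year = 24 * 365            # 8760 hours
price_per_kwh = 0.15
it_cost = rack_kw * hours_per_year * price_per_kwh   # IT load alone
pue = 1.3                            # assumed cooling/distribution overhead
total_cost = it_cost * pue
print(f"IT power alone: ${it_cost:,.0f}/yr; with cooling: ${total_cost:,.0f}/yr")
```

    That lands right around the $170,000/year quoted above.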

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Lawrence D'Oliveiro on Wed Sep 18 20:09:53 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 18 Sep 2024 05:40:07 GMT, Anton Ertl wrote:

    Just last night I talked to Jens Palsberg who works on quantum
    computing.

    What kind?

    He is working on the software side, not the physics side.

    Has he got Shor’s algorithm working yet?

    My understanding is that he's about enabling software developers to
    develop more programs. He mentioned that several physics
    breakthroughs are needed for quantum computing to become useful.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to David Brown on Wed Sep 18 20:57:14 2024
    David Brown <david.brown@hesbynett.no> wrote:
    On 18/09/2024 02:42, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:

    There are few things Intel would wish more than to "suffer"
    financially like Microsoft.

    It is true that Microsoft is not (yet) losing money, but still the
    revenues from its Windows cash cow cannot be what they used to be, if you
    look at the declining level of investment Microsoft is putting back into
    its flagship OS.


    I think MS has long ago stopped viewing desktop Windows as a cash cow.
    But it still gets in a lot of money from server versions, as well as
    server software such as MS SQL server. (The client access licences for
    these cost far more than Windows desktop ever did.)

    There are lots of free SQL servers now, this has forced Microsoft to make
    MS SQL Express free for smaller than enterprise editions.

    https://www.microsoft.com/en-gb/download/details.aspx?id=101064

    https://josipmisko.com/posts/sql-express-limitations#

    Those limits dwarf our needs.

    Their main cash
    cow, I believe, is subscriptions to Office365 and associated software
    where they have a near-monopoly for business use. (I expect Azure and everything there also makes money, but it has to compete with other
    cloud companies.)

    Its game division is also undergoing a bit of an upheaval at the moment.
    Its own games are moving away from being exclusives to its own console
    platform.

    And look at its ongoing unsuccessful attempts to port Windows to the ARM
    architecture.

    I'd rather not look at that, thanks!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Wed Sep 18 22:54:33 2024
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are not
    latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly degrade
    the overall GPU performance--while it would KILL any typical CPU architecture.

    But if it’s supposed to be for “interactive” use, it’s still going to take
    those 400 memory-cycle times to return a response.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Wed Sep 18 23:45:27 2024
    On Wed, 18 Sep 2024 20:57:14 -0000 (UTC), Brett wrote:

    There are lots of free SQL servers now, this has forced Microsoft to
    make MS SQL Express free for smaller than enterprise editions.

    https://www.microsoft.com/en-gb/download/details.aspx?id=101064

    https://josipmisko.com/posts/sql-express-limitations#

    Those limits dwarf our needs.

    Still, they look pretty miserly compared to what’s available with open-source alternatives.

    Even humble SQLite allows for bigger databases than that!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Anton Ertl on Wed Sep 18 23:47:07 2024
    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that you can’t get something for nothing.

    The promise of an exponential increase in computing power for a linear
    increase in the number of processing elements sounds very much like “something for nothing” under another name, wouldn’t you say?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Wed Sep 18 23:51:54 2024
    On Wed, 18 Sep 2024 19:00:27 +0300, Michael S wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:

    On Wed, 18 Sep 2024 1:27:03 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 00:57:59 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 0:44:44 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU.
    And motherboard-based CPU RAM is upgradeable, as well, whereas
    addon cards tend not to offer this option.

    He showed a die figure with 256GB of DRAM stacked 8-deep and
    2×4-wide.

    Was it upgradeable? Or was it soldered in?

    Soldered in.

    It's not easy to get 256GB via HBM.
    To give one example, Fujitsu A64Fx got only 32GB.
    It was 5 years ago and some progress was made since then, but density improvements nowadays are sloooooooow.

    The basic issue is:

    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    If all these special-purpose processors could share RAM coming from the
    same pool, the addon coprocessors would become a lot more useful.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Wed Sep 18 23:48:33 2024
    On Wed, 18 Sep 2024 13:34:16 +0000, MitchAlsup1 wrote:

    All the interaction is keyboard, mouse, display and internet.
    The server can be located anywhere in the world.

    “Interactive” means “able to respond to every user action with low latency”. That involves not just mouse clicks and keystrokes, but also
    mouse movements. And possibly, in future, when it becomes practicable to
    detect that, gestures and the direction of the user’s gaze as well.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Thu Sep 19 00:29:09 2024
    On Wed, 18 Sep 2024 22:54:33 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are not
    latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly degrade
    the overall GPU performance--while it would KILL any typical CPU
    architecture.

    But if it’s supposed to be for “interactive” use, it’s still going to take
    those 400 memory-cycle times to return a response.

    That is why you use the CPU for human interactions and bandwidth
    engines for the muscle.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Sep 19 04:27:16 2024
    On Thu, 19 Sep 2024 00:29:09 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 22:54:33 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are
    not latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly
    degrade the overall GPU performance--while it would KILL any typical
    CPU architecture.

    But if it’s supposed to be for “interactive” use, it’s still going to
    take those 400 memory-cycle times to return a response.

    That is why you use the CPU for human interactions and bandwidth
    engines for the muscle.

    But then those bandwidth engines become the interactivity bottleneck,
    don’t they?

    Unless you use them only for precomputing stuff in some kind of batch mode
    for later use, rather than doing processing on-demand.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Thu Sep 19 08:52:26 2024
    On 18/09/2024 20:48, MitchAlsup1 wrote:
    On Wed, 18 Sep 2024 17:01:48 +0000, David Brown wrote:

    On 18/09/2024 18:00, Michael S wrote:
    On Wed, 18 Sep 2024 15:50:09 GMT

    It's not easy to get 256GB via HBM.
    To give one example, Fujitsu A64Fx got only 32GB.
    It was 5 years ago and some progress was made since then, but density
    improvements nowadays are sloooooooow.


    <https://www.servethehome.com/micron-hbm3e-12-high-36gb-higher-capacity-ai-accelerators-shipping/>
    <https://www.servethehome.com/a-quick-introduction-to-the-nvidia-gh200-aka-grace-hopper-arm/>

    It's certainly not /cheap/ to have 256GB (or more) with HBM, but it is
    not unrealistic.

    Consider the cost of the power it takes to feed a rack that consumes
    100 kW continuously for a year, and don't forget to add in the cooling
    costs to remove that 100 kW from that rack while computing the cost of
    the power. Using $0.15/kWh = $170,000 per year per rack (including
    cooling).

    The cost of 256GB of memory fades into insignificance.

    Sure - when other costs are big enough, some costs are insignificant.

    But to fill your rack with 100 kW worth of equipment, you will have a
    lot of processors or accelerators that have 256 GB of HBM - it's not
    just a single device taking 100 kW!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Thu Sep 19 09:01:13 2024
    On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are not
    latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly degrade
    the overall GPU performance--while it would KILL any typical CPU
    architecture.

    But if it’s supposed to be for “interactive” use, it’s still going to take
    those 400 memory-cycle times to return a response.

    In human terms, those 400 memory cycles are completely negligible. For
    most purposes, anything faster than 100 milliseconds is an instant
    response. For high-speed games played by experts, 10 milliseconds is a
    good target. For the most demanding tasks, such as making music, 1
    millisecond might be required.

    For anything interactive, an extra 400 memory cycles latency means
    nothing - even if it is relatively slow memory - as long as you can keep
    the throughput. Network latency is massively bigger than this extra
    memory latency would be.
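    To make the scale difference concrete (the 2 GHz clock is an assumed
    figure, just to convert cycles to time):

```python
# How much of a 100 ms interactive budget does a 400-cycle access consume?
cycles = 400
clock_hz = 2e9                       # assumed 2 GHz clock
access_s = cycles / clock_hz         # 200 ns per dependent access
budget_s = 0.100                     # the 100 ms "instant response" threshold
print(f"one access: {access_s * 1e9:.0f} ns")
print(f"serial dependent accesses fitting in the budget: {budget_s / access_s:,.0f}")
```

    Half a million fully serialized 400-cycle accesses fit inside a single
    100 ms human-perception budget, so the extra latency is invisible to an
    interactive user.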

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Thu Sep 19 09:27:58 2024
    On 18/09/2024 22:57, Brett wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 18/09/2024 02:42, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 02:54:51 +0300, Michael S wrote:

    There are few things Intel would wish more than to "suffer"
    financially like Microsoft.

    It is true that Microsoft is not (yet) losing money, but still the
    revenues from its Windows cash cow cannot be what they used to be, if you >>> look at the declining level of investment Microsoft is putting back into >>> its flagship OS.


    I think MS has long ago stopped viewing desktop Windows as a cash cow.
    But it still gets in a lot of money from server versions, as well as
    server software such as MS SQL server. (The client access licences for
    these cost far more than Windows desktop ever did.)

    There are lots of free SQL servers now, this has forced Microsoft to make
    MS SQL Express free for smaller than enterprise editions.

    https://www.microsoft.com/en-gb/download/details.aspx?id=101064

    https://josipmisko.com/posts/sql-express-limitations#

    Those limits dwarf our needs.

    We have just recently had to buy a Windows server and MS SQL server
    license, in order to run a third-party application that insists those
    are the requirements and they won't support the use of SQL Express. The
    server hardware cost about $1200 (my price estimates here are very
    rough) for a mini PC with 64 GB ram, running Proxmox. Windows server
    license was about $1000, and SQL Server was $1200, and there was the
    same again for the CALs needed. So something like 75% of the cost of
    the box is license fees to Microsoft - and that was as cheap as we could
    get within the requirements of the third-party application. That same application could have been written in a few thousand lines of Python
    and run on a Raspberry Pi with an external disk for storage. MS make a
    lot of profit from being the "industry standard" and persuading
    specialist software developers that Windows and MS SQL server are the
    server platforms of choice.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to David Brown on Thu Sep 19 09:26:51 2024
    David Brown wrote:
    On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are not
    latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly degrade
    the overall GPU performance--while it would KILL any typical CPU
    architecture.

    But if it’s supposed to be for “interactive” use, it’s still going to
    take those 400 memory-cycle times to return a response.

    In human terms, those 400 memory cycles are completely negligible.  For most purposes, anything faster than 100 milliseconds is an instant

    You actually need 20 Hz/50 ms even for joystick/mouse response when you
    are not in a hurry. (This was proven by the space station external arm
    joystick controller, which was initially specified to operate at 10 Hz,
    but that turned out to be far too laggy for the astronauts, so it was
    doubled to 20 Hz.)

    response.  For high-speed games played by experts, 10 milliseconds is a good target.  For the most demanding tasks, such as making music, 1 millisecond might be required.

    My cousin Nils has hearing loss after a lifetime spent in studios and
    playing music, he can't use the offered hearing aids because they add
    3-4 ms of latency. (Something which he noticed _immediately_ when first
    trying a pair.)

    For anything interactive, an extra 400 memory cycles latency means
    nothing - even if it is relatively slow memory - as long as you can keep
    the throughput.  Network latency is massively bigger than this extra
    memory latency would be.

    Early multiplayer games had to invent all sorts of tricks to try to hide
    away that latency, and well before that, around 1987 (?) I made a
    version of my terminal emulator which could do the same:

    I.e. give instant feedback for keystrokes while in reality buffering
    them so that I could send out a single packet (over pay-per-packet X.25 networks) when I got a keystroke that I could not handle locally.

    This one hack (designed and implemented overnight) saved Hydro and the
    Oseberg project NOK 2 Mill per year per remote location.

    The only noticeable (to the user) artifact was when they were entering
    data into an uppercase-only field: They would see the lowercase local
    response until they hit enter or tab, then the remote response would
    overwrite the field with uppercase instead. Normally I simply checked if
    the new remote data was reducing the offset between the local buffer and
    the official terminal view.
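    [The buffering trick described above can be sketched in a few lines. This is a minimal illustration of the idea, not Terje's actual implementation; the send_packet/echo callbacks are hypothetical stand-ins for the real terminal and X.25 I/O.]

```python
# Sketch of the local-echo trick: echo printable keys immediately and
# buffer them, sending a single packet only when a key arrives that
# cannot be handled locally (e.g. Enter or Tab).

class LocalEcho:
    def __init__(self, send_packet, echo):
        self.buffer = []              # keystrokes not yet sent to the host
        self.send_packet = send_packet  # one call = one X.25 packet
        self.echo = echo              # instant local feedback

    def keystroke(self, key):
        if key.isprintable():         # handled locally: echo and buffer
            self.echo(key)
            self.buffer.append(key)
        else:                         # flush everything as one packet
            self.send_packet("".join(self.buffer) + key)
            self.buffer.clear()

packets, echoes = [], []
term = LocalEcho(packets.append, echoes.append)
for key in "dir":
    term.keystroke(key)               # three instant local echoes
term.keystroke("\n")                  # one packet: "dir\n"
```

    Four keystrokes, one packet on the wire: exactly the saving that mattered on a pay-per-packet network.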

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Lawrence D'Oliveiro on Thu Sep 19 10:44:24 2024
    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that you can’t get something for nothing.


    Stupid argument. Look at the effort and tech it takes to make quantum computers... that is not "nothing".


    The promise of an exponential increase in computing power for a linear increase in the number of processing elements sounds very much like “something for nothing” under another name, wouldn’t you say?


    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of a
    linear number of elements. Medieval arguments about "nothing" vs
    "something" don't work there.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Sep 19 08:40:59 2024
    On Wed, 18 Sep 2024 18:48:44 +0000, MitchAlsup1 wrote:

    Consider the cost of the power it takes to feed a rack that consumes
    100 kW continuously for a year, and don't forget to add in the cooling
    costs to remove that 100 kW from the rack while computing the cost of
    the power. Using $0.15/kWh = $170,000 per year per rack (including cooling).
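    [The arithmetic behind that figure: raw energy alone comes to about $131,000/year, so the quoted $170,000 is consistent with roughly 30% overhead for cooling. The overhead factor here is an assumption chosen to match the quoted total.]

```python
# Yearly power cost for a rack drawing 100 kW continuously.
power_kw = 100
hours_per_year = 24 * 365
price_per_kwh = 0.15

it_cost = power_kw * hours_per_year * price_per_kwh
print(f"IT power alone: ${it_cost:,.0f}/year")        # $131,400

cooling_overhead = 1.3   # assumed PUE-style factor to match the quoted total
print(f"With cooling:   ${it_cost * cooling_overhead:,.0f}/year")
```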

    You know the old saying: “With great power comes great power bills”.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Niklas Holsti on Thu Sep 19 08:43:17 2024
    On Thu, 19 Sep 2024 10:44:24 +0300, Niklas Holsti wrote:

    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that
    you can’t get something for nothing.

    Stupid argument. Look at the effort and tech it takes to make quantum computers... that is not "nothing".

    Is there some ongoing “Nature’s Rentware” involved?

    The promise of an exponential increase in computing power for a linear
    increase in the number of processing elements sounds very much like
    “something for nothing” under another name, wouldn’t you say?

    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of a
    linear number of elements.

    That’s called the “many worlds interpretation” of quantum mechanics, and it is philosophical mumbo-jumbo nonsense.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Terje Mathisen on Thu Sep 19 11:10:53 2024
    On 19/09/2024 09:26, Terje Mathisen wrote:
    David Brown wrote:
    On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are not
    latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly degrade
    the overall GPU performance--while it would KILL any typical CPU
    architecture.

    But if it’s supposed to be for “interactive” use, it’s still
    going to take
    those 400 memory-cycle times to return a response.

    In human terms, those 400 memory cycles are completely negligible.
    For most purposes, anything within 100 milliseconds is an instant

    You actually need 20 Hz/50 ms even for joystick/mouse response when you
    are not in a hurry. (This was proven by the space station external arm
    joystick controller, which was initially specified to operate at 10 Hz,
    but that turned out to be far too laggy for the astronauts so it was
    doubled to 20 Hz.)


    For that kind of thing, the latency you can tolerate will depend on the physical lag of the system, what you are trying to control, and the
    experience of the person controlling it. It will therefore lie
    somewhere between the "100 ms feels instantaneous" that you see for many purposes, and the speed you need for gaming.

    response.  For high-speed games played by experts, 10 milliseconds is
    a good target.  For the most demanding tasks, such as making music, 1
    millisecond might be required.

    My cousin Nils has hearing loss after a lifetime spent in studios and
    playing music, he can't use the offered hearing aids because they add
    3-4 ms of latency. (Something which he noticed _immediately_ when first trying a pair.)

    Even a complete amateur can notice time mismatches of 10 ms in a musical context, so for a professional this does not surprise me. I don't know
    of any human endeavour that requires lower latency or more precise
    timing than music.


    For anything interactive, an extra 400 memory cycles latency means
    nothing - even if it is relatively slow memory - as long as you can
    keep the throughput.  Network latency is massively bigger than this
    extra memory latency would be.

    Early multiplayer games had to invent all sorts of tricks to try to hide
    away that latency, and well before that, around 1987 (?) I made a
    version of my terminal emulator which could do the same:

    I.e. give instant feedback for keystrokes while in reality buffering
    them so that I could send out a single packet (over pay-per-packet X.25 networks) when I got a keystroke that I could not handle locally.


    The first modem I used was, I believe, 300 baud and there was a definite
    lag between typing and the characters appearing on-screen.

    Fortunately, such slow speeds are quite rare these days. The slowest
    system I have seen in practice, however, had about 150 mbps in one
    direction and 70 mbps in the other, half-duplex. (Note that the "m" is lower-case here!)

    This one hack (designed and implemented overnight) saved Hydro and the Oseberg project NOK 2 Mill per year per remote location.

    The only noticeable (to the user) artifact was when they were entering
    data into an uppercase-only field: They would see the lowercase local response until they hit enter or tab, then the remote response would overwrite the field with uppercase instead. Normally I simply checked if
    the new remote data was reducing the offset between the local buffer and
    the official terminal view.


    I presume you called the case-change a feature, rather than an artefact,
    giving the user confirmation that the data was entered correctly?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Niklas Holsti on Thu Sep 19 11:35:41 2024
    On 19/09/2024 09:44, Niklas Holsti wrote:
    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that you
    can’t get something for nothing.


    Stupid argument. Look at the effort and tech it takes to make quantum computers... that is not "nothing".


    The promise of an exponential increase in computing power for a linear
    increase in the number of processing elements sounds very much like
    “something for nothing” under another name, wouldn’t you say?


    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of a
    linear number of elements. Medieval arguments about "nothing" vs
    "something" don't work there.


    Quantum computing certainly gives you some tricks that are hard to
    replicate with classical computers. (And of course some quantum effects
    are impossible to replicate classically, but those are not actually computations.)

    But it is still ultimately limited in many ways. Landauer's principle
    about the minimal energy costs of calculations applies equally to
    quantum calculations.
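    [The limit itself is simple to state: erasing one bit of information costs at least k_B·T·ln 2 of energy. A quick room-temperature calculation:]

```python
from math import log

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300                   # room temperature, K

landauer = k_B * T * log(2)   # minimum energy to erase one bit
print(f"Landauer limit at 300 K: {landauer:.2e} J per bit")  # ~2.87e-21 J
```

    Tiny in absolute terms, but it bounds quantum and classical computation alike.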

    The practical limitations for quantum computers are far more
    significant. Roughly speaking, when you entangle more states at once,
    you need tighter tolerances to maintain coherence, which translates to
    lower temperatures, higher energy costs, and lower times to do your calculations. And to be useful, you need large numbers of qubits, which
    again makes maintaining coherence increasingly difficult.

    I'm sure that there will be breakthroughs that improve some of this, but
    I am not holding my breath - I don't believe quantum computers will ever
    be cost-effective for anything but a few very niche problems. Currently
    they have only beat classical computers in tasks that involve simulating
    some quantum effects. That's a bit like noticing that soap bubble
    computers are really good at solving 2D minimal energy surface problems.

    Remember, the current record for Shor's algorithm is factorising 21 into
    3 x 7. Factorising 35 is still beyond current engineering levels.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to David Brown on Thu Sep 19 12:54:23 2024
    David Brown wrote:
    On 19/09/2024 09:26, Terje Mathisen wrote:
    David Brown wrote:
    On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are not
    latency-bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly degrade
    the overall GPU performance--while it would KILL any typical CPU
    architecture.

    But if it’s supposed to be for “interactive” use,
    it’s still going to take
    those 400 memory-cycle times to return a response.

    In human terms, those 400 memory cycles are completely negligible.
    For most purposes, anything within 100 milliseconds is an instant

    You actually need 20 Hz/50 ms even for joystick/mouse response when
    you are not in a hurry. (This was proven by the space station external arm
    joystick controller, which was initially specified to operate at 10 Hz,
    but that turned out to be far too laggy for the astronauts so it was
    doubled to 20 Hz.)


    For that kind of thing, the latency you can tolerate will depend on the physical lag of the system, what you are trying to control, and the experience of the person controlling it.  It will therefore lie
    somewhere between the "100 ms feels instantaneous" that you see for many purposes, and the speed you need for gaming.

    response.  For high-speed games played by experts, 10 milliseconds
    is a good target.  For the most demanding tasks, such as making
    music, 1 millisecond might be required.

    My cousin Nils has hearing loss after a lifetime spent in studios and
    playing music, he can't use the offered hearing aids because they add
    3-4 ms of latency. (Something which he noticed _immediately_ when
    first trying a pair.)

    Even a complete amateur can notice time mismatches of 10 ms in a musical context, so for a professional this does not surprise me.  I don't know
    of any human endeavour that requires lower latency or more precise
    timing than music.


    For anything interactive, an extra 400 memory cycles latency means
    nothing - even if it is relatively slow memory - as long as you can
    keep the throughput.  Network latency is massively bigger than this
    extra memory latency would be.

    Early multiplayer games had to invent all sorts of tricks to try to
    hide away that latency, and well before that, around 1987 (?) I made a
    version of my terminal emulator which could do the same:

    I.e. give instant feedback for keystrokes while in reality buffering
    them so that I could send out a single packet (over pay-per-packet
    X.25 networks) when I got a keystroke that I could not handle locally.


    The first modem I used was, I believe, 300 baud and there was a definite
    lag between typing and the characters appearing on-screen.

    Fortunately, such slow speeds are quite rare these days.  The slowest system I have seen in practice, however, had about 150 mbps in one
    direction and 70 mbps in the other, half-duplex.  (Note that the "m" is lower-case here!)

    This one hack (designed and implemented overnight) saved Hydro and the
    Oseberg project NOK 2 Mill per year per remote location.

    The only noticeable (to the user) artifact was when they were entering
    data into an uppercase-only field: They would see the lowercase local
    response until they hit enter or tab, then the remote response would
    overwrite the field with uppercase instead. Normally I simply checked
    if the new remote data was reducing the offset between the local
    buffer and the official terminal view.


    I presume you called the case-change a feature, rather than an artefact, giving the user confirmation that the data was entered correctly?

    Good idea!

    No, I simply didn't mention it and the users mostly didn't even notice.

    The way I implemented it was by updating the "official" back frame
    buffer, and compare the update with the visible front buffer. If at any
    time a write to the back buffer did not result in something that was
    already in the front buffer, I just copied the back buffer to the front
    and went on from there.
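    [That reconciliation scheme fits in a few lines. A minimal sketch of the idea, under the assumption that both buffers are simple character arrays; the real implementation obviously dealt with full screen geometry.]

```python
# Remote updates go to the "official" back buffer. If a remote write
# disagrees with the locally-echoed front buffer (e.g. the uppercase
# correction), the back buffer is copied over the visible front buffer.

def apply_remote(front, back, pos, ch):
    """Apply one remote character; return the (possibly resynced) front buffer."""
    back[pos] = ch
    if front[pos] != ch:          # local prediction was wrong
        front = back.copy()       # back buffer becomes the visible view
    return front

back = list("hello")
front = list("hello")             # locally echoed, lowercase
front = apply_remote(front, back, 0, "H")   # remote uppercases: resync
```

    When local echo and remote response agree (the common case), nothing visible happens; only a mismatch triggers the copy.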

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to David Brown on Thu Sep 19 12:59:42 2024
    David Brown wrote:
    On 19/09/2024 09:44, Niklas Holsti wrote:
    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that you
    can’t get something for nothing.


    Stupid argument. Look at the effort and tech it takes to make quantum
    computers... that is not "nothing".


    The promise of an exponential increase in computing power for a linear
    increase in the number of processing elements sounds very much like
    “something for nothing” under another name, wouldn’t you say?


    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of a
    linear number of elements. Medieval arguments about "nothing" vs
    "something" don't work there.


    Quantum computing certainly gives you some tricks that are hard to
    replicate with classical computers.  (And of course some quantum effects are impossible to replicate classically, but those are not actually computations.)

    But it is still ultimately limited in many ways.  Landauer's principle about the minimal energy costs of calculations applies equally to
    quantum calculations.

    The practical limitations for quantum computers are far more
    significant.  Roughly speaking, when you entangle more states at once,
    you need tighter tolerances to maintain coherence, which translates to
    lower temperatures, higher energy costs, and lower times to do your calculations.  And to be useful, you need large numbers of qubits, which again makes maintaining coherence increasingly difficult.

    I'm sure that there will be breakthroughs that improve some of this, but
    I am not holding my breath - I don't believe quantum computers will ever
    be cost-effective for anything but a few very niche problems.  Currently they have only beat classical computers in tasks that involve simulating some quantum effects.  That's a bit like noticing that soap bubble computers are really good at solving 2D minimal energy surface problems.

    Remember, the current record for Shor's algorithm is factorising 21 into
    3 x 7.  Factorising 35 is still beyond current engineering levels.


    From my recent reading, it seems like factoring 21 (5 bits) requires at
    least 5+10=15 bits all staying entangled, plus a number of additional
    bits for error correction. I'm guessing you also need some extra bits/redundancy in order to successfully read out the results?

    Getting to at the very least 3K entangled bits in order to speed up RSA
    1024 decryption will certainly be out of the question for the remainder
    of my professional career, and most probably also the rest of my life.


    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Lawrence D'Oliveiro on Thu Sep 19 16:25:05 2024
    On 2024-09-19 11:43, Lawrence D'Oliveiro wrote:
    On Thu, 19 Sep 2024 10:44:24 +0300, Niklas Holsti wrote:

    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that
    you can’t get something for nothing.

    Stupid argument. Look at the effort and tech it takes to make quantum
    computers... that is not "nothing".

    Is there some ongoing “Nature’s Rentware” involved?


    I have no idea what you mean by that.


    The promise of an exponential increase in computing power for a linear
    increase in the number of processing elements sounds very much like
    “something for nothing” under another name, wouldn’t you say?

    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of a
    linear number of elements.

    That’s called the “many worlds interpretation” of quantum mechanics, and
    it is philosophical mumbo-jumbo nonsense.


    The /fact/ that quantum mechanics describes how the world works,
    entanglement and all, does not depend on the various attempts to
    "interpret" or understand its foundations.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Terje Mathisen on Thu Sep 19 16:15:09 2024
    On 19/09/2024 12:59, Terje Mathisen wrote:
    David Brown wrote:
    On 19/09/2024 09:44, Niklas Holsti wrote:
    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that you
    can’t get something for nothing.


    Stupid argument. Look at the effort and tech it takes to make quantum
    computers... that is not "nothing".


    The promise of an exponential increase in computing power for a linear
    increase in the number of processing elements sounds very much like
    “something for nothing” under another name, wouldn’t you say?


    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of
    a linear number of elements. Medieval arguments about "nothing" vs
    "something" don't work there.


    Quantum computing certainly gives you some tricks that are hard to
    replicate with classical computers.  (And of course some quantum
    effects are impossible to replicate classically, but those are not
    actually computations.)

    But it is still ultimately limited in many ways.  Landauer's principle
    about the minimal energy costs of calculations applies equally to
    quantum calculations.

    The practical limitations for quantum computers are far more
    significant.  Roughly speaking, when you entangle more states at once,
    you need tighter tolerances to maintain coherence, which translates to
    lower temperatures, higher energy costs, and lower times to do your
    calculations.  And to be useful, you need large numbers of qubits,
    which again makes maintaining coherence increasingly difficult.

    I'm sure that there will be breakthroughs that improve some of this,
    but I am not holding my breath - I don't believe quantum computers
    will ever be cost-effective for anything but a few very niche
    problems.  Currently they have only beat classical computers in tasks
    that involve simulating some quantum effects.  That's a bit like
    noticing that soap bubble computers are really good at solving 2D
    minimal energy surface problems.

    Remember, the current record for Shor's algorithm is factorising 21
    into 3 x 7.  Factorising 35 is still beyond current engineering levels.


    From my recent reading, it seems like factoring 21 (5 bits) requires at least 5+10=15 bits all staying entangled, plus a number of additional
    bits for error correction. I'm guessing you also need some extra bits/redundancy in order to successfully read out the results?

    This was done with a quantum computer designed specifically for that one
    task, and simplified with the knowledge of the answer. Even then, these machines just give you a result that /might/ be the correct answer - you
    have to check it externally to be sure. (Of course for integer
    factorisation, checking a possible answer is a lot easier than finding plausible answers.)


    Getting to at the very least 3K entangled bits in order to speed up RSA
    1024 decryption will certainly be out of the question for the remainder
    of my professional career, and most probably also the rest of my life.


    According to someone on the internet (that ever-reliable source of
    information), an n-bit integer takes 2n + 2 fully entangled qubits and
    448·n³·log2(n) gates. For 1024-bit RSA, that's 2050 logical qubits and
    about 5×10^12 gates. For the common default size of 2048-bit RSA,
    it's 4098 logical qubits and 4.2×10^13 gates.
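    [Plugging the numbers in reproduces those figures; taking the log as base 2 is an assumption here, but it is the base that matches the quoted totals.]

```python
from math import log2

def shor_resources(n):
    """Logical qubits and gate count from the post's estimates:
    2n + 2 qubits and 448*n^3*log2(n) gates for an n-bit integer."""
    qubits = 2 * n + 2
    gates = 448 * n**3 * log2(n)
    return qubits, gates

for n in (1024, 2048):
    q, g = shor_resources(n)
    print(f"RSA-{n}: {q} logical qubits, ~{g:.1e} gates")
```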

    Then you need the quantum error correction in addition. I am not at all convinced that I understand the details here or if I am applying them correctly, but I think that for larger systems you need perhaps 1000
    physical qubits per logical qubit.

    So for your 1024-bit RSA, your need a 2 million qubit monster with all
    2000 logical qubits fully entangled (most quantum computers today with
    more than a few tens of qubits are not fully entangled - 51 fully
    entangled qubits was the biggest I read about). And you need to keep it coherent for 72n³ cycles - 72 gigacycles - for the algorithm. Top
    speeds today are 1.4 MHz, with perhaps 4 MHz being practically feasible, assuming only two-qubit gates are needed. That gives 5.4 hours, without considering the extra time delays of the quantum error correction (which
    I'm sure are very significant). Current coherence time records are
    measured in microseconds or perhaps milliseconds. (Some other types of
    quantum computers have longer coherence times, but correspondingly
    slower cycle speeds.)

    So using Shor's algorithm to break 1024-bit RSA requires a scaling of
    20,000 in qubit counts and 20,000,000 in coherence time and/or cycle
    speed. Moving to the common high-security size of 4096-bit adds another
    factor of 4 to the qubit count and 64 to the cycle count, and RSA easily
    scales much higher than that.

    I don't believe any of us need to worry about quantum computers breaking
    RSA for a while yet.

    (Of course someone might come up with a new algorithm, either classical
    or quantum, that changes the game.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Thu Sep 19 14:23:23 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 19 Sep 2024 00:29:09 +0000, MitchAlsup1 wrote:

    On Wed, 18 Sep 2024 22:54:33 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    On the other hand, and this is where the deprecation of the CPUs come
    in, The engines consuming the data are bandwidth machines {GPUs and
    Inference engines} which are quite insensitive to latency (they are
    not not latency bound machines like CPUs).

    When doing GPUs, a memory access taking 400 cycles would hardly
    degrade the overall GPU performance--while it would KILL any typical
    CPU architecture.

    But if it’s supposed to be for “interactive” use, it’s still going to
    take those 400 memory-cycle times to return a response.

    That is why you use the CPU for human interactions and bandwidth
    engines for the muscle.

    But then those bandwidth engines become the interactivity bottleneck,
    don’t they?

    No.


    Unless you use them only for precomputing stuff in some kind of batch mode
    for later use, rather than doing processing on-demand.

    They're used in pretty much every high-speed network router
    to accelerate packet processing, cryptography, compression and
    machine learning; the very definition of interactivity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Thu Sep 19 16:09:15 2024
    On Thu, 19 Sep 2024 7:01:13 +0000, David Brown wrote:

    On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    But if it’s supposed to be for “interactive” use, it’s still going to
    take
    those 400 memory-cycle times to return a response.

    In human terms, those 400 memory cycles are completely negligible. For
    most purposes, anything within 100 milliseconds is an instant
    response. For high-speed games played by experts, 10 milliseconds is a
    good target. For the most demanding tasks, such as making music, 1 millisecond might be required.

    400 cycles IS negligible.
    400 cycles for each LD is non-negligible.

    Remember LDs are 20%-22% of the instruction stream and with 400 cycles
    per LD you see an average of 80-cycles per instruction even if all
    other instructions take 1 cycle. This is 160× SLOWER than current
    CPUs. But GPUs with thousands of cores can use memory that slow and
    still deliver big gains in performance (6×-50×).
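    [The 80-cycle figure is a straightforward weighted average; here is the arithmetic, using the 20% load fraction from above.]

```python
# Average cycles per instruction when every load misses to slow memory.
load_fraction = 0.20     # LDs are ~20% of the instruction stream
load_latency = 400       # cycles per load
other_cpi = 1            # assume all other instructions take 1 cycle

avg_cpi = load_fraction * load_latency + (1 - load_fraction) * other_cpi
print(f"Average CPI: {avg_cpi:.1f}")   # ~80.8, i.e. ~80 cycles/instruction
```

    Against a modern CPU running at roughly 0.5 CPI, that is the ~160× slowdown mentioned.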

    For anything interactive, an extra 400 memory cycles latency means
    nothing - even if it is relatively slow memory - as long as you can keep
    the throughput. Network latency is massively bigger than this extra
    memory latency would be.

    Most CPUs can't even deliver control in 400 cycles to an interrupt
    or exception handler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Thu Sep 19 16:16:14 2024
    On Thu, 19 Sep 2024 9:10:53 +0000, David Brown wrote:

    On 19/09/2024 09:26, Terje Mathisen wrote:
    David Brown wrote:

    Even a complete amateur can notice time mismatches of 10 ms in a musical context, so for a professional this does not surprise me. I don't know
    of any human endeavour that requires lower latency or more precise
    timing than music.

    Modern music "production" time synchronizes individual notes with
    the precise time that note should have been played.

    I watched a video several months ago where a music producer
    demonstrated how, just moving the notes around in microsecond
    time intervals, destroys the "musicality" of the <ahem> music.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Thu Sep 19 16:18:08 2024
    On Thu, 19 Sep 2024 9:35:41 +0000, David Brown wrote:

    On 19/09/2024 09:44, Niklas Holsti wrote:

    The practical limitations for quantum computers are far more
    significant. Roughly speaking, when you entangle more states at once,
    you need tighter tolerances to maintain coherence, which translates to
    lower temperatures, higher energy costs, and lower times to do your calculations. And to be useful, you need large numbers of qubits, which again makes maintaining coherence increasingly difficult.

    One can say exactly the same about PAM4 signaling compared to NRZ or
    even RZ coding.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Thu Sep 19 16:23:09 2024
    On Thu, 19 Sep 2024 14:15:09 +0000, David Brown wrote:

    On 19/09/2024 12:59, Terje Mathisen wrote:

    According to someone on the internet (that ever-reliable source of information), an n-bit integer takes 2n + 2 fully entangled qubits and 448·n³·log2(n) gates. For 1024-bit RSA, that's 2050 logical qubits and
    about 5×10^12 gates. For the common default size of 2048-bit RSA,
    it's 4098 logical qubits and 4.2×10^13 gates.

    Then you need the quantum error correction in addition. I am not at all convinced that I understand the details here or if I am applying them correctly, but I think that for larger systems you need perhaps 1000
    physical qubits per logical qubit.

    I am convinced that quantum computers will eventually be good at
    some things that regular computers are not and cannot be.

    I am not convinced that any current application is one of those.

    And for the things that quantum computers may be great at
    {Deciphering without keys} they may do more harm than good.
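For what it's worth, the quoted rule of thumb is easy to reproduce with a few lines of arithmetic. The stated figures only come out if the unspecified log is taken as log base 2 (an assumption; the original poster did not say):

```python
import math

def shor_resources(n_bits):
    """Rule-of-thumb resource estimate quoted above for factoring an
    n-bit integer: 2n + 2 logical qubits and 448 * n^3 * log2(n) gates.
    (Assumes the unspecified log is base 2.)"""
    qubits = 2 * n_bits + 2
    gates = 448 * n_bits**3 * math.log2(n_bits)
    return qubits, gates

for n in (1024, 2048):
    q, g = shor_resources(n)
    print(f"{n}-bit RSA: {q} logical qubits, ~{g:.1e} gates")
```

With the roughly 1000 physical qubits per logical qubit mentioned above for error correction, the 2048-bit case lands in the millions of physical qubits.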

  • From David Brown@21:1/5 to All on Thu Sep 19 20:12:51 2024
    On 19/09/2024 18:23, MitchAlsup1 wrote:
    On Thu, 19 Sep 2024 14:15:09 +0000, David Brown wrote:

    On 19/09/2024 12:59, Terje Mathisen wrote:

    According to someone on the internet (that ever-reliable source of
    information), an n-bit integer takes 2n + 2 fully entangled qubits and
    448·n³·log(n) gates.  For 1024-bit RSA, that's 2050 logical qubits and
    about 5×10¹² gates.  For the common default size of 2048-bit RSA,
    it's 4098 logical qubits and 4.2×10¹³ gates.

    Then you need the quantum error correction in addition.  I am not at all
    convinced that I understand the details here or if I am applying them
    correctly, but I think that for larger systems you need perhaps 1000
    physical qubits per logical qubit.

    I am convinced that quantum computers will eventually be good at
    some things that regular computers are not and cannot be.


    OK.

    They are already good at a few tasks, but they are not particularly
    useful ones.

    I am not convinced that any current application is one of those.


    Agreed, at least as far as we have seen so far with quantum computing.

    And for the things that quantum computers may be great at
    {Deciphering without keys} they may do more harm than good.

    I also don't think breaking encryption would be a useful thing. There
    may be other good uses of integer factorisation, however.

    Still, I don't believe quantum computers will ever actually be any good
    for this, unless someone comes up with a far better algorithm. I think
    it will always be easier and cheaper to break encryptions using social engineering, tricks, malware, bribery, blackmail, or - if all else fails
    - rubber hose cryptoanalysis.

  • From David Brown@21:1/5 to All on Thu Sep 19 20:06:55 2024
    On 19/09/2024 18:09, MitchAlsup1 wrote:
    On Thu, 19 Sep 2024 7:01:13 +0000, David Brown wrote:

    On 19/09/2024 00:54, Lawrence D'Oliveiro wrote:
    On Wed, 18 Sep 2024 16:23:01 +0000, MitchAlsup1 wrote:

    But if it’s supposed to be for “interactive” use, it’s still going to
    take those 400 memory-cycle times to return a response.

    In human terms, those 400 memory cycles are completely negligible.  For
    most purposes, anything else than 100 milliseconds is an instant
    response.  For high-speed games played by experts, 10 milliseconds is a
    good target.  For the most demanding tasks, such as making music, 1
    millisecond might be required.

    400 cycles IS negligible.
    400 cycles for each LD is non-negligible.


    Sure.

    My understanding was that the extra cycles were latency on the handling
    of particular events or requests - after that, you had the data locally.
    If you had that kind of delay individually for each load, then I
    completely agree it is far from negligible.


    Remember LDs are 20%-22% of the instruction stream and with 400 cycles
    per LD you see an average of 80-cycles per instruction even if all
    other instructions take 1 cycle. This is 160× SLOWER than current
    CPUs. But GPUs with thousands of cores can use memory that slow and
    still deliver big gains in performance (6×-50×).

    For anything interactive, an extra 400 memory cycles latency means
    nothing - even if it is relatively slow memory - as long as you can keep
    the throughput.  Network latency is massively bigger than this extra
    memory latency would be.

    Most CPUs can't even deliver control in 400 cycles to an interrupt
    or exception handler.
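The 80-cycles-per-instruction figure quoted above follows directly from the stated mix (a sketch; the function and parameter names are just illustrative):

```python
def avg_cpi(load_frac, load_latency_cycles, other_cpi=1.0):
    """Average cycles per instruction if every load stalls for the full
    memory latency and all other instructions take one cycle."""
    return load_frac * load_latency_cycles + (1.0 - load_frac) * other_cpi

# 20% loads at 400 cycles each: about 80 cycles per instruction on average.
print(avg_cpi(0.20, 400))
```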

  • From David Brown@21:1/5 to All on Thu Sep 19 20:15:57 2024
    On 19/09/2024 18:18, MitchAlsup1 wrote:
    On Thu, 19 Sep 2024 9:35:41 +0000, David Brown wrote:

    On 19/09/2024 09:44, Niklas Holsti wrote:

    The practical limitations for quantum computers are far more
    significant.  Roughly speaking, when you entangle more states at once,
    you need tighter tolerances to maintain coherence, which translates to
    lower temperatures, higher energy costs, and lower times to do your
    calculations.  And to be useful, you need large numbers of qubits, which
    again makes maintaining coherence increasingly difficult.

    One can say exactly the same about PAM4 signaling compared to NRZ or
    even RZ coding.

    Yes. It gets harder to have more signal levels per baud, and you need
    tighter tolerances, shorter lengths, less electrical noise, etc.
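The parallel is easy to quantify: extra levels per symbol buy bits only logarithmically, while the voltage margin between adjacent levels shrinks linearly (a rough sketch; real links lose further margin to other impairments):

```python
import math

def bits_per_symbol(levels):
    """Bits carried per symbol for multi-level line coding."""
    return math.log2(levels)

def relative_eye_height(levels):
    """Rough sketch: with a fixed total voltage swing, the vertical eye
    between adjacent levels is about 1/(levels - 1) of the full swing."""
    return 1.0 / (levels - 1)

for name, levels in [("NRZ", 2), ("PAM4", 4), ("PAM8", 8)]:
    print(f"{name}: {bits_per_symbol(levels):.0f} bits/symbol, "
          f"eye ~ {relative_eye_height(levels):.2f} of swing")
```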

  • From Brett@21:1/5 to Niklas Holsti on Thu Sep 19 19:29:31 2024
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-19 11:43, Lawrence D'Oliveiro wrote:
    On Thu, 19 Sep 2024 10:44:24 +0300, Niklas Holsti wrote:

    On 2024-09-19 2:47, Lawrence D'Oliveiro wrote:

    On Wed, 18 Sep 2024 20:09:53 GMT, Anton Ertl wrote:

    He mentioned that several physics breakthroughs
    are needed for quantum computing to become useful.

    The biggest one would be getting around the fundamental problem that
    you can’t get something for nothing.

    Stupid argument. Look at the effort and tech it takes to make quantum
    computers... that is not "nothing".

    Is there some ongoing “Nature’s Rentware” involved?


    I have no idea what you mean by that.


    The promise of an exponential increase in computing power for a linear increase in the number of processing elements sounds very much like
    “something for nothing” under another name, wouldn’t you say?

    No, it is exploiting the very non-intuitive nature of quantum
    entanglement to create an exponential number of collective states of a
    linear number of elements.

    That’s called the “many worlds interpretation” of quantum mechanics, and
    it is philosophical mumbo-jumbo nonsense.

    The /fact/ that quantum mechanics describes how the world works,
    entanglement and all, does not depend on the various attempts to
    "interpret" or understand its foundations.


    Quantum mechanics is high IQ bullshit to make professors look important.

    Next you are going to use the magic word Einstein and declare that makes
    you win the argument.

  • From Thomas Koenig@21:1/5 to Brett on Thu Sep 19 19:31:19 2024
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Are you sure you are posting to the right newsgroup?

  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Sep 19 20:48:38 2024
    On Thu, 19 Sep 2024 16:23:09 +0000, MitchAlsup1 wrote:

    I am convinced that quantum computers will eventually be good at some
    things that regular computers are not and cannot be.

    They are currently having some success in physical-optimization problems,
    with precision limits. That means they are basically just a revival of the
    old analog computers: fast at solving physical-related problems, but with
    much less precision than digital computers.

    So far, the progress in making them handle number-theoretic calculations
    has been essentially zero.

  • From Lawrence D'Oliveiro@21:1/5 to Terje Mathisen on Thu Sep 19 20:53:44 2024
    On Thu, 19 Sep 2024 12:59:42 +0200, Terje Mathisen wrote:

    From my recent reading, it seems like factoring 21 (5 bits) requires at
    least 5+10=15 bits all staying entangled, plus a number of additional
    bits for error correction.

    The noise factor was something the original ideas about quantum computers
    had not taken into account.

    But it’s pretty obvious why it happens: “quantum” computing was something thought up by people who took the “many worlds” interpretation of quantum theory just a little too seriously: if you could take advantage of “superposition of states” to run your computation simultaneously across multiple alternate universes, you could access a whole lot more computing power!

    The reason why it doesn’t work is because of conservation of energy. Accessing those hypothetical “alternate universes” requires spreading the same amount of energy more thinly. And that’s where the noise comes from.
    So ultimately there will be no way to get rid of it.

    And that’s why I say “quantum” computing (at least for number-theoretic operations) is “trying to get something for nothing”. Ultimately that won’t work.

  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Thu Sep 19 21:35:41 2024
    On Thu, 19 Sep 2024 20:48:38 +0000, Lawrence D'Oliveiro wrote:

    On Thu, 19 Sep 2024 16:23:09 +0000, MitchAlsup1 wrote:

    I am convinced that quantum computers will eventually be good at some
    things that regular computers are not and cannot be.

    They are currently having some success in physical-optimization
    problems, with precision limits. That means they are basically just a
    revival of the old analog computers: fast at solving physical-related
    problems, but with much less precision than digital computers.

    They seem to be rather exceptional at protein folding compared to
    classical computing.

    So far, the progress in making them handle number-theoretic calculations
    has been essentially zero.

    Getting them to hold a single 10-bit value for 10,000 cycles is <let
    us say> difficult.

  • From Niklas Holsti@21:1/5 to Brett on Fri Sep 20 00:43:46 2024
    On 2024-09-19 22:29, Brett wrote:

    [...]

    Quantum mechanics is high IQ bullshit to make professors look important.


    Interesting that you wrote and submitted that anti-science comment using electronic devices that work in ways that cannot be understood without
    quantum theory.

  • From Niklas Holsti@21:1/5 to Lawrence D'Oliveiro on Fri Sep 20 01:08:23 2024
    On 2024-09-19 23:53, Lawrence D'Oliveiro wrote:
    On Thu, 19 Sep 2024 12:59:42 +0200, Terje Mathisen wrote:

    From my recent reading, it seems like factoring 21 (5 bits) requires at
    least 5+10=15 bits all staying entangled, plus a number of additional
    bits for error correction.

    The noise factor was something the original ideas about quantum computers
    had not taken into account.

    But it’s pretty obvious why it happens: “quantum” computing was something
    thought up by people who took the “many worlds” interpretation of quantum theory just a little too seriously: if you could take advantage of “superposition of states” to run your computation simultaneously across multiple alternate universes, you could access a whole lot more computing power!

    The reason why it doesn’t work is because of conservation of energy. Accessing those hypothetical “alternate universes” requires spreading the same amount of energy more thinly. And that’s where the noise comes from. So ultimately there will be no way to get rid of it.


    If you can back up that claim (that noise in quantum computing comes
    from "many worlds") with actual math, you will have proved that
    many-worlds is true. That would be Nobel prize or two, right there. Go
    at it!

  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Sep 19 23:38:33 2024
    On Thu, 19 Sep 2024 16:16:14 +0000, MitchAlsup1 wrote:

    I watched a video several months ago where a music producer demonstrated
    how, just moving the notes around in microsecond time intervals,
    destroys the "musicality" of the <ahem> music.

    Actually, it’s the opposite. Moving notes away from exact periodic time positions is called “humanizing” -- it makes the music sound more like it was created by humans (who find it impossible to play in exact time)
    rather than robots.

  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Sep 19 23:37:00 2024
    On Thu, 19 Sep 2024 16:09:15 +0000, MitchAlsup1 wrote:

    400 cycles IS negligible.
    400 cycles for each LD is non-negligible.

    Remember LDs are 20%-22% of the instruction stream and with 400 cycles
    per LD you see an average of 80-cycles per instruction even if all other instructions take 1 cycle. This is 160× SLOWER than current CPUs. But
    GPUs with thousands of cores can use memory that slow and still deliver
    big gains in performance (6×-50×).

    How can they do that? What proportion of their instruction stream is LDs?
    It seems to me they are accessing memory in 100% of their instructions,
    since they would have less sophisticated memory controllers than CPUs
    commonly have.

  • From Lawrence D'Oliveiro@21:1/5 to Terje Mathisen on Thu Sep 19 23:40:05 2024
    On Thu, 19 Sep 2024 12:54:23 +0200, Terje Mathisen wrote:

    The way I implemented it was by updating the "official" back frame
    buffer, and compare the update with the visible front buffer. If at any
    time a write to the back buffer did not result in something that was
    already in the front buffer, I just copied the back buffer to the front
    and went on from there.

    Is this where the need for “triple buffering” comes from -- the fact that you need to copy the entire contents of one buffer to another?

    The way I understood to do flicker-free drawing was with just two buffers
    -- “double buffering”. And rather than swap the buffer contents, you just swapped the pointers to them.
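The pointer-swap scheme can be sketched in a few lines (hypothetical names; real hardware does this with a display-base register, with the swap timed to vertical retrace):

```python
class DoubleBuffer:
    """Sketch of pointer-swap double buffering: the renderer draws into
    the back buffer while the display scans out the front buffer; a
    swap exchanges the two references without copying any pixels."""
    def __init__(self, width, height):
        self.front = bytearray(width * height)  # being displayed
        self.back = bytearray(width * height)   # being drawn into
    def swap(self):
        # O(1): only the references move, typically at vertical retrace.
        self.front, self.back = self.back, self.front

fb = DoubleBuffer(320, 200)
fb.back[0] = 0xFF   # draw off-screen
fb.swap()           # the completed frame becomes visible
```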

  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Sep 19 23:43:15 2024
    On Thu, 19 Sep 2024 21:35:41 +0000, MitchAlsup1 wrote:

    On Thu, 19 Sep 2024 20:48:38 +0000, Lawrence D'Oliveiro wrote:

    On Thu, 19 Sep 2024 16:23:09 +0000, MitchAlsup1 wrote:

    I am convinced that quantum computers will eventually be good at some
    things that regular computers are not and cannot be.

    They are currently having some success in physical-optimization
    problems, with precision limits. That means they are basically just a
    revival of the old analog computers: fast at solving physical-related
    problems, but with much less precision than digital computers.

    They seem to be rather exceptional at protein folding compared to
    classical computing.

    That’s an example of what I would call “physical optimization”: the
    system can “feel” its way down the energy gradient just due to random fluctuations in the state of the physical variables. The algorithms for
    doing this were called “simulated annealing”, back in the day.

    Actually nowadays some AI engines (digitally programmed, not quantum) may
    be beating the quantum computers on protein folding, too.
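For reference, the "simulated annealing" idea mentioned above, in a minimal sketch (toy objective and cooling schedule chosen purely for illustration):

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, seed=0):
    """Minimal simulated-annealing sketch: propose a random nearby point,
    always accept improvements, and accept worsening moves with
    probability exp(-dE/T) as the temperature T cools."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(1, steps + 1):
        t = t0 / i                         # simple 1/i cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)  # random local move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy objective: (x - 2)^2 has its minimum at x = 2.
print(simulated_annealing(lambda x: (x - 2) ** 2, x0=10.0))
```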

  • From Brett@21:1/5 to Thomas Koenig on Fri Sep 20 00:15:19 2024
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).


    Type “quantum mechanics criticism” and variants into Google and have at it.

  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Fri Sep 20 00:58:44 2024
    On Thu, 19 Sep 2024 23:37:00 +0000, Lawrence D'Oliveiro wrote:

    On Thu, 19 Sep 2024 16:09:15 +0000, MitchAlsup1 wrote:

    400 cycles IS negligible.
    400 cycles for each LD is non-negligible.

    Remember LDs are 20%-22% of the instruction stream and with 400 cycles
    per LD you see an average of 80-cycles per instruction even if all other
    instructions take 1 cycle. This is 160× SLOWER than current CPUs. But
    GPUs with thousands of cores can use memory that slow and still deliver
    big gains in performance (6×-50×).

    How can they do that? What proportion of their instruction stream is
    LDs?

    20%-22% (as stated above) another 10% STs.

    It seems to me they are accessing memory in 100% of their instructions,
    since they would have less sophisticated memory controllers than CPUs commonly have.

    Maybe less sophisticated, but with 20×-40× the number of 'miss buffers' of conventional CPUs.

    Hint:: They can context switch every instruction. So if an instruction
    does not complete in its cycle, they switch to a different set of
    threads; and they have lots of threads per core to work with.

    Also note: a single instruction causes 32-128 threads to make 1 step
    of forward progress.

    It is called SIMT for a reason.
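A rough way to see why the thread count matters: if each thread issues a load every fifth instruction (the 20% figure above) and loads take 400 cycles, a Little's-law sketch gives the number of round-robin threads needed to keep one instruction issuing per cycle (names illustrative; this ignores issue width, bank conflicts, and warp width):

```python
import math

def threads_to_hide_latency(load_latency_cycles, instructions_per_load):
    """Little's-law sketch: a thread that stalls for load_latency_cycles
    after every instructions_per_load instructions sustains roughly
    instructions_per_load / load_latency_cycles instructions per cycle,
    so about latency / instructions_per_load threads are needed to keep
    one instruction issuing every cycle."""
    return math.ceil(load_latency_cycles / instructions_per_load)

# 400-cycle loads, one load per 5 instructions.
print(threads_to_hide_latency(400, 5))
```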

  • From Lawrence D'Oliveiro@21:1/5 to All on Fri Sep 20 04:05:35 2024
    On Fri, 20 Sep 2024 00:58:44 +0000, MitchAlsup1 wrote:

    Hint:: They can context switch every instruction.

    How does that help?

    So if an instruction
    does not complete in its cycle, they switch to a different set of
    threads;

    That will need to do its own memory accesses. But the memory interface is
    still busy trying to complete the access for the previous thread.

    Also note: a single instruction causes 32-128 threads to make 1 step of forward progress.

    How many memory accesses does it take to complete that one step?

  • From Thomas Koenig@21:1/5 to Brett on Fri Sep 20 05:46:49 2024
    Brett <ggtgp@yahoo.com> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type “quantum mechanics criticism” and variants into Google and have at it.

    I've read enough crackpot theories already, thank you, I don't need
    any more.

  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Fri Sep 20 09:14:12 2024
    On 20/09/2024 01:40, Lawrence D'Oliveiro wrote:
    On Thu, 19 Sep 2024 12:54:23 +0200, Terje Mathisen wrote:

    The way I implemented it was by updating the "official" back frame
    buffer, and compare the update with the visible front buffer. If at any
    time a write to the back buffer did not result in something that was
    already in the front buffer, I just copied the back buffer to the front
    and went on from there.

    Is this where the need for “triple buffering” comes from -- the fact that you need to copy the entire contents of one buffer to another?

    The way I understood to do flicker-free drawing was with just two buffers
    -- “double buffering”. And rather than swap the buffer contents, you just swapped the pointers to them.

    You use triple buffering if the incoming data (or drawing commands, or whatever) and the outgoing data (such as to a screen) are asynchronous.
    If you have control of the timing and synchronisation on one side, you
    can get away with double buffering.

    You don't usually copy the data in such systems - you just swap pointers
    to the buffer used for the different purposes.
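A minimal sketch of the three-buffer pointer swap (hypothetical class; it assumes the display side only acquires after a frame has been published):

```python
import threading

class TripleBuffer:
    """Sketch of triple buffering for an asynchronous producer/consumer:
    the renderer always has a buffer to draw into, the display always
    has a complete frame, and handoff just swaps references under a
    lock -- no pixel data is copied."""
    def __init__(self, make_buffer):
        self.drawing = make_buffer()    # renderer writes here
        self.ready = make_buffer()      # newest complete frame
        self.displayed = make_buffer()  # frame being scanned out
        self._lock = threading.Lock()

    def publish(self):
        # Renderer finished a frame: it becomes the new "ready" frame.
        with self._lock:
            self.drawing, self.ready = self.ready, self.drawing

    def acquire(self):
        # Display side takes the newest complete frame.
        with self._lock:
            self.displayed, self.ready = self.ready, self.displayed
        return self.displayed
```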

  • From David Brown@21:1/5 to Thomas Koenig on Fri Sep 20 09:37:58 2024
    On 20/09/2024 07:46, Thomas Koenig wrote:
    Brett <ggtgp@yahoo.com> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.
    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type “quantum mechanics criticism” and variants into Google and have at it.

    I've read enough crackpot theories already, thank you, I don't need
    any more.

    Quantum mechanics describes the rules that give structure to atoms and molecules. On a larger scale, those structures build up to explain
    spherical planets. But we know the earth is flat. Therefore, quantum mechanics is bullshit. What more evidence could you want?

  • From Terje Mathisen@21:1/5 to Lawrence D'Oliveiro on Fri Sep 20 09:55:53 2024
    Lawrence D'Oliveiro wrote:
    On Thu, 19 Sep 2024 12:54:23 +0200, Terje Mathisen wrote:

    The way I implemented it was by updating the "official" back frame
    buffer, and compare the update with the visible front buffer. If at any
    time a write to the back buffer did not result in something that was
    already in the front buffer, I just copied the back buffer to the front
    and went on from there.

    Is this where the need for “triple buffering” comes from -- the fact that you need to copy the entire contents of one buffer to another?

    The way I understood to do flicker-free drawing was with just two buffers
    -- “double buffering”. And rather than swap the buffer contents, you just swapped the pointers to them.

    If you cannot swap the buffers with pointer updates, then you need to
    copy, and if that copy takes a long time, then you might need a third
    buffer which you actually updating all the time.

    This would be one HW frame buffer, with a slow write port (ancient IBM
    CGA needed a bunch of frame times in order to update all of it
    flicker-free, since you could only write during beam retrace intervals:
    a few char cells during each horizontal retrace, several lines during
    vertical retrace).

    In the back end you could flip between two buffers in RAM, but I never
    found a need to do so. I just measured that it was much faster to write
    a full screen from RAM to frame buffer than it was to do even a partial
    copy from one part of the frame buffer to another, i.e for doing
    scrolling within a window.

    When lots of stuff was happening in my terminal emulator, I would limit
    screen refreshes so that I didn't have to update the frame buffer more
    than maybe 20 times/second. For smaller updates, they would be instantly visible (during the next horizontal retrace).

    BTW, my back buffer was a list of lines, so that I could scroll by just unlinking the top line and adding a blank at the bottom; I tried to limit
    the number of bulk RAM accesses as much as possible.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

  • From Scott Lurndal@21:1/5 to Brett on Fri Sep 20 14:24:32 2024
    Brett <ggtgp@yahoo.com> writes:
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).


    Type “quantum mechanics criticism” and variants into Google and have at it.


    Why should one do that? Google indexes all kinds of cranks.

  • From Stefan Monnier@21:1/5 to All on Fri Sep 20 11:18:00 2024
    But we know the earth is flat.

    Don't be ridiculous. Just look at the shape of any old shoe's sole.
    It shows that the earth is evidently not flat, it's curved "upward".
    The only sensible explanation is that the earth is a sphere and we live *inside* of it.


    Stefan

  • From Brett@21:1/5 to David Brown on Fri Sep 20 15:21:12 2024
    David Brown <david.brown@hesbynett.no> wrote:
    On 20/09/2024 07:46, Thomas Koenig wrote:
    Brett <ggtgp@yahoo.com> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.
    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type “quantum mechanics criticism” and variants into Google and have at it.

    I've read enough crackpot theories already, thank you, I don't need
    any more.

    Quantum mechanics describes the rules that give structure to atoms and molecules. On a larger scale, those structures build up to explain
    spherical planets. But we know the earth is flat. Therefore, quantum mechanics is bullshit. What more evidence could you want?


    Yup, you just explained the Einstein argument, just like I said.

  • From Stefan Monnier@21:1/5 to All on Fri Sep 20 11:21:52 2024
    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
    addon board itself can be replaced with another one (one with more RAM).

    I love being able to upgrade/replace different components separately.
    But AFAICT, this is a minority concern. Most people treat computer
    systems as "atomic black boxes".


    Stefan

  • From David Brown@21:1/5 to Brett on Fri Sep 20 19:10:25 2024
    On 20/09/2024 17:21, Brett wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 20/09/2024 07:46, Thomas Koenig wrote:
    Brett <ggtgp@yahoo.com> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look important.
    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type “quantum mechanics criticism” and variants into Google and have at it.

    I've read enough crackpot theories already, thank you, I don't need
    any more.

    Quantum mechanics describes the rules that give structure to atoms and
    molecules. On a larger scale, those structures build up to explain
    spherical planets. But we know the earth is flat. Therefore, quantum
    mechanics is bullshit. What more evidence could you want?


    Yup, you just explained the Einstein argument, just like I said.


    I think you are somewhat confused.

    I can't claim to understand every argument Einstein made, but I am quite certain he was not a flat-earther. Nor did he think quantum mechanics
    was "bullshit" - his Nobel prize on the photoelectric effect was a stepping-stone in the development of the theory of quantum mechanics.

  • From John Dallman@21:1/5 to Brown on Fri Sep 20 21:06:00 2024
    In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no (David Brown) wrote:

    Even a complete amateur can notice time mismatches of 10 ms in a
    musical context, so for a professional this does not surprise me.
    I don't know of any human endeavour that requires lower latency or
    more precise timing than music.

    A friend used to work on set-top boxes, with fairly slow hardware. They
    had demonstrations of two different ways of handling inability to keep up
    with the data stream:

    - Keeping the picture on schedule, and dropping a few milliseconds
    of sound.
    - Dropping a frame of the picture, and keeping the sound on-track.

    Potential customers always thought they wanted the first approach, until
    they watched the demos. Human vision fakes a lot of what we "see" at the
    best of times, but hearing is more sensitive to glitches.

    John

  • From John Dallman@21:1/5 to D'Oliveiro on Fri Sep 20 21:06:00 2024
    In article <vcisaf$ulcv$1@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:
    On Fri, 20 Sep 2024 00:58:44 +0000, MitchAlsup1 wrote:
    Hint:: They can context switch every instruction.
    How does that help?

    All the threads are executing exactly the same instructions,on the same
    code path. If any of them start taking different branches, performance
    goes way down, because then they can't amortise the time for the
    instruction fetch across all the threads.

    The context switches don't involve any memory accesses. The GPU processor
    has a set of registers for each thread, and a context switch is just a
    change of which registers it's looking at. It's the same trick as the old
    TI 990 architecture. I was quite amused when I figured that out in the
    middle of the first presentation from Nvidia I ever sat through.

    <https://en.wikipedia.org/wiki/TI-990>
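The register-bank trick can be sketched in a few lines (hypothetical class; the TI 990's workspace pointer pointed into memory, while GPUs keep the per-thread banks in a large on-chip register file):

```python
class BankedRegisters:
    """Sketch of workspace-pointer context switching: each thread owns a
    bank of registers, and a context switch just repoints the active
    bank -- no save/restore traffic to memory."""
    def __init__(self, num_threads, regs_per_thread=16):
        self.banks = [[0] * regs_per_thread for _ in range(num_threads)]
        self.wp = 0  # workspace pointer: which bank is active

    def switch_to(self, thread_id):
        self.wp = thread_id  # O(1) context switch

    def read(self, reg):
        return self.banks[self.wp][reg]

    def write(self, reg, value):
        self.banks[self.wp][reg] = value
```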

    John

  • From MitchAlsup1@21:1/5 to John Dallman on Fri Sep 20 20:17:20 2024
    On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:

    In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
    (David Brown) wrote:

    Even a complete amateur can notice time mismatches of 10 ms in a
    musical context, so for a professional this does not surprise me.
    I don't know of any human endeavour that requires lower latency or
    more precise timing than music.

    A friend used to work on set-top boxes, with fairly slow hardware. They
    had demonstrations of two different ways of handling inability to keep
    up with the data stream:

    - Keeping the picture on schedule, and dropping a few milliseconds
    of sound.
    - Dropping a frame of the picture, and keeping the sound on-track.

    Potential customers always thought they wanted the first approach, until
    they watched the demos. Human vision fakes a lot of what we "see" at the
    best of times, bit hearing is more sensitive to glitches.

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    While our eyes have a time constant closer to 0.1 seconds.

    That is, I blame natural selection on the above.
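A quick sanity check on the millisecond claim: the largest interaural time difference is roughly the extra path around the head divided by the speed of sound (a rough sketch; 0.2 m is an assumed effective path difference):

```python
def max_interaural_delay(path_difference_m=0.2, speed_of_sound_m_s=343.0):
    """Worst-case arrival-time difference between the two ears for a
    sound coming directly from one side."""
    return path_difference_m / speed_of_sound_m_s

# Sub-millisecond, which is exactly the timescale hearing resolves.
print(f"{max_interaural_delay() * 1e3:.2f} ms")
```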


  • From Lawrence D'Oliveiro@21:1/5 to Stefan Monnier on Fri Sep 20 21:32:00 2024
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
    addon board itself can be replaced with another one (one with more RAM).

    Yes, but that’s a lot more expensive.

  • From Lawrence D'Oliveiro@21:1/5 to Terje Mathisen on Fri Sep 20 21:33:47 2024
    On Fri, 20 Sep 2024 09:55:53 +0200, Terje Mathisen wrote:

    Lawrence D'Oliveiro wrote:

    The way I understood to do flicker-free drawing was with just two
    buffers -- “double buffering”. And rather than swap the buffer
    contents, you just swapped the pointers to them.

    If you cannot swap the buffers with pointer updates ...

    Surely all the good hardware is/was designed that way, with special
    registers pointing to “current buffer” and “back buffer”, with the display
    coming from “current buffer” while writes typically go to “back buffer”.
    Why would you do it otherwise?
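    As a software analogy of the register-swap scheme described above, a
    minimal sketch (the buffer sizes and helper names are made up for
    illustration):

```python
# Minimal software sketch of double buffering: draw into the back
# buffer, then swap the two references -- the software analogue of
# rewriting the "current buffer" register. No pixel data is copied.
WIDTH, HEIGHT = 4, 3

front = [[0] * WIDTH for _ in range(HEIGHT)]  # being scanned out
back = [[0] * WIDTH for _ in range(HEIGHT)]   # being drawn into

def draw_frame(buf, value):
    # Stand-in for real rendering: fill the buffer with one value.
    for row in buf:
        for x in range(WIDTH):
            row[x] = value

def flip():
    # O(1) reference swap; hardware would do this at vertical blank.
    global front, back
    front, back = back, front

draw_frame(back, 1)
flip()  # the newly drawn frame is now the displayed one
```

    The point is that `flip()` is constant time regardless of resolution,
    which is exactly why swapping pointers beats copying buffer contents.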

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Fri Sep 20 21:39:52 2024
    On Fri, 20 Sep 2024 20:17:20 +0000, MitchAlsup1 wrote:

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    We tell the direction of sound at frequencies above about 700Hz, I think
    it was, by the change in timbre as the sound has to negotiate the shape of
    our ears and heads. This works better for complex sounds than for pure
    tones.

    This also works less well for lower frequencies, since the wavelength
    becomes long enough to diffract around our heads much more easily. We may
    be able to tell direction at these frequencies based on phase differences,
    or we may not. Certainly home-cinema designers don’t seem to consider this important, which is why we only have one subwoofer in a typical surround
    setup, instead of a stereo pair.
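    The ~700 Hz crossover lines up with simple arithmetic (the speed of
    sound is an assumed value for air at room temperature):

```python
# Wavelength at the ~700 Hz crossover mentioned above. Around and
# below this frequency the wavelength is comparable to a human head,
# so sound diffracts around it and timbre/shadowing cues weaken.
SPEED_OF_SOUND_M_S = 343.0  # assumption: air at ~20 C

wavelength_m = SPEED_OF_SOUND_M_S / 700.0
print(f"wavelength at 700 Hz ~ {wavelength_m:.2f} m")  # about half a metre
```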

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Niklas Holsti on Fri Sep 20 21:40:33 2024
    On Fri, 20 Sep 2024 01:08:23 +0300, Niklas Holsti wrote:

    If you can back up that claim (that noise in quantum computing comes
    from "many worlds") ...

    No, I’m saying the opposite: the noise comes from the fact that “many worlds” is nonsense.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Fri Sep 20 22:07:42 2024
    On Fri, 20 Sep 2024 21:40:33 +0000, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 01:08:23 +0300, Niklas Holsti wrote:

    If you can back up that claim (that noise in quantum computing comes
    from "many worlds") ...

    No, I’m saying the opposite: the noise comes from the fact that “many worlds” is nonsense.

    There are many kinds of noise and the presence of noise is part of
    our world with very little needing quantum mechanics to be visible.

    The Casimir effect measures quantum noise in a <hard> vacuum
    caused by virtual particles.

    Then there is a noise of amplification, a noise of sampling, a noise
    related to the movement of atoms (heat), Brownian motion, and on and on.

    All of these noise sources will remain even if the many-worlds
    theory collapses and dies (low probability).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Fri Sep 20 22:11:05 2024
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
    addon board itself can be replaced with another one (one with more RAM).

    Yes, but that’s a lot more expensive.

    I had this crazy idea of putting cpus right on the ram. So, if you add
    more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    Can software use the extra CPUs ?

    Also note: DRAMs are made on a P-Channel process (leakage) with only a few
    layers of metal while CPUs are based on an N-Channel process (speed) with
    many layers of metal.

    Bus interconnects are not set up to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Josh Vanderhoof@21:1/5 to Lawrence D'Oliveiro on Fri Sep 20 19:08:52 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Fri, 20 Sep 2024 01:08:23 +0300, Niklas Holsti wrote:

    If you can back up that claim (that noise in quantum computing comes
    from "many worlds") ...

    No, I’m saying the opposite: the noise comes from the fact that “many worlds” is nonsense.

    I've never been a fan of many worlds but this kind of blew my mind when
    it revealed how many worlds is like pilot wave theory without the
    corpuscles.

    https://www.youtube.com/watch?v=BUHW1zlstVk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to mitchalsup@aol.com on Sat Sep 21 01:12:38 2024
    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
    addon board itself can be replaced with another one (one with more RAM). >>>
    Yes, but that’s a lot more expensive.

    I had this crazy idea of putting cpus right on the ram. So, if you add
    more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    Can software use the extra CPUs ?

    Also note: DRAMs are made on a P-Channel process (leakage) with only a few layers of metal while CPUs are based on an N-Channel process (speed) with
    many layers of metal.

    Didn’t you work on the MC68000 which had one layer of metal?

    This could be fine if you are going for the AI market of slow AI cpu with
    huge memory and bandwidth.

    The AI market is bigger than the general server market as seen in NVidia's sales.

    Bus interconnects are not setup to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    Such a DRAM would be on the PCIe buses, and the main CPUs would barely touch that RAM, and the AI only searches locally.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Brett on Sat Sep 21 01:48:45 2024
    On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:

    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the >>>>> addon board itself can be replaced with another one (one with more RAM). >>>>
    Yes, but that’s a lot more expensive.

    I had this crazy idea of putting cpus right on the ram. So, if you add
    more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    Can software use the extra CPUs ?

    Also note: DRAMs are made on a P-Channel process (leakage) with only a few
    layers of metal while CPUs are based on an N-Channel process (speed) with
    many layers of metal.

    Didn’t you work on the MC68000 which had one layer of metal?

    Yes, but it was the 68020 and had polysilicide which we used as
    a second layer of metal.

    MC88100 had 2 layers of metal and silicide.

    The number of metal layers went about::
    1978: 1
    1980: 1+silicide
    1982: 2+silicide
    1988: 3+silicide
    1990: 4+silicide
    1995: 6
    ..

    This could be fine if you are going for the AI market of slow AI cpu
    with huge memory and bandwidth.

    The AI market is bigger than the general server market as seen in
    NVidea’s sales.

    Bus interconnects are not setup to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    Thus a problem with the CPU on DRAM approach.

    Such a dram would be on the PCIE busses, and the main CPU’s would barely touch that ram, and the AI only searches locally.

    Better make it PCIe+CXL so the downstream CPU is cache coherent.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Lawrence D'Oliveiro on Sat Sep 21 10:40:30 2024
    On 2024-09-21 0:40, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 01:08:23 +0300, Niklas Holsti wrote:

    If you can back up that claim (that noise in quantum computing comes
    from "many worlds") ...

    No, I’m saying the opposite: the noise comes from the fact that “many worlds” is nonsense.


    Strange view of the world, that. Noise in quantum computing is a fact,
    but your opinion about "many worlds" is just that: an opinion, not a fact.

    A view of the world in which opinions cause physical facts... no point
    in continuing the discussion. Bye.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Sat Sep 21 08:20:56 2024
    On Fri, 20 Sep 2024 22:07:42 +0000, MitchAlsup1 wrote:

    All of these noise sources will remain even if the many-world theory collapses and dies (low probability).

    Many-worlds is not a “theory” in any scientific sense: it is merely an (attempt at) “interpretation” of quantum theory. It tries to make things clearer, but in the process just replaces one set of mysterious terms with another.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sat Sep 21 08:22:48 2024
    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there
    anyway.

    Fun fact: there are actual physical systems with negative absolute
    temperatures (I studied a bit of this in undergrad physics), but whether starting from positive or negative, you can’t get to absolute zero from either side.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Sat Sep 21 08:25:42 2024
    On Fri, 20 Sep 2024 22:11:05 +0000, MitchAlsup1 wrote:

    Can software use the extra CPUs ?

    There were these attempts at massively parallel processing called
    “systolic arrays” ... were they ever useful?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sat Sep 21 08:24:43 2024
    On Fri, 20 Sep 2024 14:54:36 -0700, Chris M. Thomasson wrote:

    I had this crazy idea of putting cpus right on the ram.

    Not so crazy. I can remember things like this being discussed back in the 1980s. I think the Transputer had a similar idea.

    What seems to have happened since is that individual CPUs are being
    matched up with more and more RAM.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sat Sep 21 08:34:25 2024
    On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:

    Quantum mechanics is high IQ bullshit to make professors look important.

    Quantum mechanics is real. Quantum effects are real. Transistors only work because electrons can “tunnel” through a barrier with higher energy than they have, which should be classically impossible.

    Matter only hangs together because electrons don’t actually orbit nuclei
    like planets in a miniature solar system: if they did, they would emit radiation (“bremsstrahlung radiation”), thereby losing energy and spiralling into the nucleus until the atom collapses. And that would
    happen to every atom in the Universe. Clearly that is not the case.

    Even an old-style incandescent light bulb only works because of quantum effects: the shape of the radiation curve depends only on the temperature
    of the radiating body, once it gets sufficiently hot, with little or no dependence on what material the body is made of. This applies to your
    light bulb and also to our Sun and the other stars.

    It is true that quantum theory sounds completely crazy when you try to
    explain it. But it works, and gives the right answers, that have been
    verified repeatedly in countless tests. And in science, that counts for
    more than anything.
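    The temperature-only dependence of the radiation curve can be checked
    with Wien's displacement law (the two temperatures are assumed round
    values for a filament and the solar photosphere):

```python
# Wien's displacement law: the peak wavelength of a black body depends
# only on its temperature, illustrating the point above that the curve's
# shape is set by temperature, not by material.
WIEN_B_M_K = 2.898e-3  # Wien's displacement constant, metre-kelvin

def peak_wavelength_nm(temp_k):
    # Peak emission wavelength in nanometres for temperature in kelvin.
    return WIEN_B_M_K / temp_k * 1e9

bulb_nm = peak_wavelength_nm(2700.0)  # filament: peak in the infrared
sun_nm = peak_wavelength_nm(5800.0)   # sun: peak in the visible
print(f"bulb peak ~ {bulb_nm:.0f} nm, sun peak ~ {sun_nm:.0f} nm")
```

    The same formula covers a light bulb and a star, which is the sense in
    which the material of the radiating body drops out.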

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sat Sep 21 08:26:58 2024
    On Fri, 20 Sep 2024 19:28:51 -0700, Chris M. Thomasson wrote:

    Shit man, remember all of the slots in the old Apple IIgs's?

    In those days, RAM was slow enough that you could put RAM expansion on bus cards.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sat Sep 21 08:42:57 2024
    On Fri, 20 Sep 2024 00:15:19 -0000 (UTC), Brett wrote:

    Type “quantum mechanics criticism” and variants into Google and have at it.

    Do any of those “criticisms” offer a coherent alternative theory with some actual experimental evidence to show it works?

    .

    .

    .

    .

    .

    .

    .

    .

    .

    .

    .

    .

    .

    .

    (crickets)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to mitchalsup@aol.com on Sat Sep 21 13:54:03 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:

    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    This could be fine if you are going for the AI market of slow AI cpu
    with huge memory and bandwidth.

    The AI market is bigger than the general server market as seen in
    NVidea’s sales.

    Bus interconnects are not setup to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    Thus a problem with the CPU on DRAM approach.

    Such a dram would be on the PCIE busses, and the main CPU’s would barely >> touch that ram, and the AI only searches locally.

    Better make it PCIe+CXL so the downstream CPU is cache coherent.

    Exactly.

    https://www.marvell.com/products/cxl.html

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to All on Sat Sep 21 15:39:41 2024
    MitchAlsup1 wrote:
    On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:

    In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
    (David
    Brown) wrote:

    Even a complete amateur can notice time mismatches of 10 ms in a
    musical context, so for a professional this does not surprise me.
    I don't know of any human endeavour that requires lower latency or
    more precise timing than music.

    A friend used to work on set-top boxes, with fairly slow hardware. They
    had demonstrations of two different ways of handling inability to keep up
    with the data stream:

    - Keeping the picture on schedule, and dropping a few milliseconds
      of sound.
    - Dropping a frame of the picture, and keeping the sound on-track.

    Potential customers always thought they wanted the first approach, until
    they watched the demos. Human vision fakes a lot of what we "see" at the
    best of times, but hearing is more sensitive to glitches.

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    Not only that, but the slightly non-cylindrical shape of the ear opening and
    canal causes _really_ minute phase shifts, but they are what makes it
    possible for us to differentiate between a sound coming from directly
    behind vs directly ahead.

    While our eyes have a time constant closer to 0.1 seconds.

    That is, I blame natural selection on the above.

    Supposedly, we devote more of our brain to hearing than to vision?

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Chris M. Thomasson on Sat Sep 21 16:23:38 2024
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 9/20/2024 6:12 PM, Brett wrote:
    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the >>>>>> addon board itself can be replaced with another one (one with more RAM). >>>>>
    Yes, but that’s a lot more expensive.

    I had this crazy idea of putting cpus right on the ram. So, if you add >>>> more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    Can software use the extra CPUs ?

    Also note: DRAMs are made on a P-Channel process (leakage) with only a few layers of metal while CPUs are based on an N-Channel process (speed) with
    many layers of metal.

    Didn’t you work on the MC68000 which had one layer of metal?

    This could be fine if you are going for the AI market of slow AI cpu with
    huge memory and bandwidth.

    The AI market is bigger than the general server market as seen in NVidia's sales.

    Bus interconnects are not setup to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    Such a dram would be on the PCIE busses, and the main CPU’s would barely >> touch that ram, and the AI only searches locally.

    My crazy idea would be akin to a motherboard with a processor and a
    bunch of slots. One would be filled with a special memory with cpu's on
    it. If the user wants to add more memory they would gain extra cpu's. It would be a NUMA like scheme. Programs running on cpus with _very_ local
    ram would be happy. The main cpu's on the motherboard can be physically
    close to the ram slots as well. Adding more memory means we have access
    to more cpus that are very close to the memory. So, it might be
    interesting out there in the middle of fantasy land for a moment... ;^o
    Ouch!

    The newest Intel CPUs have the DRAM on the socket, like the iPhone, so you
    get four times the bandwidth. With no DIMM sockets, instead of dual-socket
    servers you get 4- and 8-socket servers.

    What you are asking for is coming today. ;)

    I have a bunch of dual socket servers that only have one socket populated, bottom price tier.

    The manual says if you don't need to share data, don't do it... Right on
    the cover! lol. ;^D


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Chris M. Thomasson on Sat Sep 21 16:39:39 2024
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 9/20/2024 6:48 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:

    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the >>>>>>> addon board itself can be replaced with another one (one with more >>>>>>> RAM).

    Yes, but that’s a lot more expensive.

    I had this crazy idea of putting cpus right on the ram. So, if you add >>>>> more memory to your system you automatically get more cpu's... Think >>>>> NUMA for a moment... ;^)

    Can software use the extra CPUs ?

    Also note: DRAMs are made on a P-Channel process (leakage) with only a few layers of metal while CPUs are based on an N-Channel process (speed) with many layers of metal.

    Didn’t you work on the MC68000 which had one layer of metal?

    Yes, but it was the 68020 and had polysilicide which we used as
    a second layer of metal.

    Mc88100 had 2 layers of metal and silicide.

    The number of metal layers went about::
    1978: 1
    1980: 1+silicide
    1982: 2+silicide
    1988: 3+silicide
    1990: 4+silicide
    1995: 6
    ..

    This could be fine if you are going for the AI market of slow AI cpu
    with huge memory and bandwidth.

    The AI market is bigger than the general server market as seen in
    NVidea’s sales.

    Bus interconnects are not setup to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    Thus a problem with the CPU on DRAM approach.

    It would be HIGHLY local wrt its processing units and its memory for
    they would all be one.

    The programming for it would not be all that easy... It would be like a
    NUMA where a program can divide itself up and run parts of itself on
    each slot (aka memory-cpu hybrid unit card if you will). If a program
    can be embarrassingly parallel, well that would be great! The Cell
    processor comes to mind. But it failed. Shit.

    Cell was in the PlayStation which Sony sold a huge number of and made
    billions of dollars, so successful, not failed.

    I programmed for Cell, it was actually a nice architecture for what it did.

    If you think programming for AI is easy, I have news for you…

    Those NVidia AI chips are at the brain damaged level for programming.

    10’s of billions of dollars are invested in this market.

    A system with a mother board that has slots for several GPUS (think crossfire) and slots for memory+CPU units. The kicker is that adding
    more memory gives you more cpus...

    How crazy is this? Well, on a scale from:

    Retarded to Moronic?

    Pretty bad? Shit...

    Shit man, remember all of the slots in the old Apple IIgs's?

    ;^o



    Such a dram would be on the PCIE busses, and the main CPU’s would barely >>> touch that ram, and the AI only searches locally.

    Better make it PCIe+CXL so the downstream CPU is cache coherent.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Sat Sep 21 17:40:21 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:

    Quantum mechanics is high IQ bullshit to make professors look important.

    Quantum mechanics is real. Quantum effects are real. Transistors only work because electrons can “tunnel” through a barrier with higher energy than they have, which should be classically impossible.

    I did not criticize quantum effects, I criticized quantum mechanics which
    is dumbshit SWAG that hides the truth of what is happening behind bullshit. With greater understanding we can come up with classical explanations, but those truths are too scary and could lead to the destruction of mankind.

    If you want to know what is really going on watch the Eric Weinstein The
    Portal videos.

    https://youtu.be/xBx5Y1YLfZY?si=0sVFmvOh-bot2ok7

    Eric is a scary bright physicist, he does not know the truth, but he knows it’s being hidden, and he shows you.

    “You can’t handle the truth.”

    https://youtu.be/9FnO3igOkOk?si=0xmQuxz6yaLnBkCC

    I don’t know the truth either, and if I did I would not speculate on it,
    too dangerous.

    https://www.amazon.com/Experts-Vs-Conspiracy-Theorists-T-Shirt/dp/B0CKLLCN9M/ref=asc_df_B0CKLLCN9M


    Matter only hangs together because electrons don’t actually orbit nuclei like planets in a miniature solar system: if they did, they would emit radiation (“bremsstrahlung radiation”), thereby losing energy and spiralling into the nucleus until the atom collapses. And that would
    happen to every atom in the Universe. Clearly that is not the case.

    Even an old-style incandescent light bulb only works because of quantum effects: the shape of the radiation curve depends only on the temperature
    of the radiating body, once it gets sufficiently hot, with little or no dependence on what material the body is made of. This applies to your
    light bulb and also to our Sun and the other stars.

    It is true that quantum theory sounds completely crazy when you try to explain it. But it works, and gives the right answers, that have been verified repeatedly in countless tests. And in science, that counts for
    more than anything.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to ldo@nz.invalid on Sat Sep 21 14:16:27 2024
    On Sat, 21 Sep 2024 08:26:58 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    On Fri, 20 Sep 2024 19:28:51 -0700, Chris M. Thomasson wrote:

    Shit man, remember all of the slots in the old Apple IIgs's?

    In those days, RAM was slow enough that you could put RAM expansion on bus >cards.

    The //gs had a dedicated memory slot separate from the peripheral bus.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to chris.m.thomasson.1@gmail.com on Sat Sep 21 14:15:28 2024
    On Fri, 20 Sep 2024 19:32:32 -0700, "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wrote:

    On 9/20/2024 7:28 PM, Chris M. Thomasson wrote:
    It would be HIGHLY local wrt its processing units and its memory for
    they would all be one.

    The programming for it would not be all that easy... It would be like a
    NUMA where a program can divide itself up and run parts of itself on
    each slot (aka memory-cpu hybrid unit card if you will). If a program
    can be embarrassingly parallel, well that would be great! The Cell
    processor comes to mind. But it failed. Shit.

    A system with a mother board that has slots for several GPUS (think
    crossfire) and slots for memory+CPU units. The kicker is that adding
    more memory gives you more cpus...

    How crazy is this? Well, on a scale from:

    Retarded to Moronic?

    Pretty bad? Shit...

    Shit man, remember all of the slots in the old Apple IIgs's?

    ;^o

    Think of the transwarp card for the apple iigs. I think it was called
    that. It sped up the system for sure.

    Had one. It was a CPU accelerator board - not memory.

    The Transwarp board provided a fast 65816 and cache. It plugged into a
    slot but took only power from the bus. It connected to the system
    board via a cable plugged into the original CPU socket.

    The original //gs CPU was 2.5MHz. Transwarp was one of 2 accelerators
    that could be purchased, increasing CPU to 6, 8 or 10 MHz.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Sat Sep 21 20:30:40 2024
    On 21/09/2024 19:40, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:

    Quantum mechanics is high IQ bullshit to make professors look important.

    Quantum mechanics is real. Quantum effects are real. Transistors only work >> because electrons can “tunnel” through a barrier with higher energy than >> they have, which should be classically impossible.

    I did not criticize quantum effects, I criticized quantum mechanics which
    is dumbshit SWAG that hides the truth of what is happening behind bullshit. With greater understanding we can come up with classical explanations, but those truths are too scary and could lead to the destruction of mankind.

    If you want to know what is really going on watch the Eric Weinstein The Portal videos.

    https://youtu.be/xBx5Y1YLfZY?si=0sVFmvOh-bot2ok7

    Eric is a scary bright physicist, he does not know the truth, but he knows it’s being hidden, and he shows you.

    “You can’t handle the truth.”

    https://youtu.be/9FnO3igOkOk?si=0xmQuxz6yaLnBkCC

    I don’t know the truth either, and if I did I would not speculate on it, too dangerous.

    https://www.amazon.com/Experts-Vs-Conspiracy-Theorists-T-Shirt/dp/B0CKLLCN9M/ref=asc_df_B0CKLLCN9M


    In case anyone wants a safe link to information about this particular
    muppet, without Google thinking they are interested in loony conspiracy theories on the level of "birds don't exist", you can read about him at <https://rationalwiki.org/wiki/Eric_Weinstein>.

    """
    Repressed genius

    For years as a mathematician he has said that he has some kind of theory
    of everything that will knock everyone out and overturn the field of
    physics, but he just can't publish it yet because the world isn't ready,
    and the information will only be suppressed.[6]

    In 2020, Weinstein published his much-hyped Oxford lecture on his Theory
    of Geometric Unity.[7] It was met with silence and indifference among theoretical physicists and the scientific community at large.

    On 1st April 2021, Weinstein released a draft of his paper online.[10]
    Given the 1st April release date and the author details on the cover
    page describing Weinstein as an "entertainer" and the paper itself as a
    "work of entertainment", it is unclear at this stage whether Geometrical
    Unity was an elaborate April Fools' Day Wikipedia prank all along.
    """

    Rather than publishing his theories in peer-reviewed journals, like real
    scary bright physicists, he promoted his views on Joe Rogan's podcast.
    And he described himself as "not a physicist" (he is a venture
    capitalist, not a scientist) and that the paper was "a work of
    entertainment". That should give you some idea of how seriously his
    ideas should be taken.



    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It is,
    like Newtonian gravity and general relativity, a simplification that
    gives an accurate model of reality within certain limitations, and
    hopefully it will one day be superseded by a new theory that models
    reality more accurately and over a wider range of circumstances. That
    is how science works.

    As things stand today, no such better theory has been developed. There
    are a number of ideas and hypotheses (still far from being classifiable
    as scientific theories) that show promise and have not yet been
    demonstrated to be wrong, but that's as far as we have got. Weinstein's "Geometric Unity" is not such a hypothesis - the little that has been
    published has been shown to be either wrong, or "not even wrong".

    It's fine to come up with strange new ideas about how the universe
    works. You then publish and discuss those ideas, and work with other scientists to weed out the clearly incorrect parts, try to expand and
    modify it to fit what we know about reality, and to think about how it
    could make predictions that could be tested. That's part of the process
    of science.

    It's not fine to believe half-baked ramblings from someone who doesn't understand what they are working with and won't listen to those who do.
    The alternative to "I don't understand quantum mechanics" is /not/ to
    believe whatever gobbledegook someone spouts on youtube.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Terje Mathisen on Sat Sep 21 22:58:38 2024
    On Sat, 21 Sep 2024 15:39:41 +0200
    Terje Mathisen <terje.mathisen@tmsw.no> wrote:

    MitchAlsup1 wrote:
    On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:

    In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
    (David
    Brown) wrote:

    Even a complete amateur can notice time mismatches of 10 ms in a
    musical context, so for a professional this does not surprise me.
    I don't know of any human endeavour that requires lower latency or
    more precise timing than music.

    A friend used to work on set-top boxes, with fairly slow hardware.
    They had demonstrations of two different ways of handling
    inability to keep up
    with the data stream:

    - Keeping the picture on schedule, and dropping a few milliseconds
    of sound.
    - Dropping a frame of the picture, and keeping the sound on-track.

    Potential customers always thought they wanted the first approach,
    until they watched the demos. Human vision fakes a lot of what we
    "see" at the best of times, bit hearing is more sensitive to
    glitches.

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    Not only that, but the slight non-cylindrical shape of the ear
    opening and canal causes _really_ minute phase shifts, but they are what
    makes it possible for us to differentiate between a sound coming from
    directly behind vs directly ahead.

    While our eyes have a time constant closer to 0.1 seconds.

    That is, I blame natural selection on the above.

    Supposedly, we devote more of our brain to hearing than to vision?

    Terje



    I think it's not even close - in favor of vision.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Michael S on Sat Sep 21 22:42:49 2024
    Michael S wrote:
    On Sat, 21 Sep 2024 15:39:41 +0200
    Terje Mathisen <terje.mathisen@tmsw.no> wrote:

    MitchAlsup1 wrote:
    On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:

    In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
    (David
    Brown) wrote:

    Even a complete amateur can notice time mismatches of 10 ms in a
    musical context, so for a professional this does not surprise me.
    I don't know of any human endeavour that requires lower latency or
    more precise timing than music.

    A friend used to work on set-top boxes, with fairly slow hardware.
    They had demonstrations of two different ways of handling
    inability to keep up
    with the data stream:

    - Keeping the picture on schedule, and dropping a few milliseconds
      of sound.
    - Dropping a frame of the picture, and keeping the sound on-track.

    Potential customers always thought they wanted the first approach,
    until they watched the demos. Human vision fakes a lot of what we
    "see" at the best of times, bit hearing is more sensitive to
    glitches.

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    Not only that, but the slight non-cylindrical shape of the ear
    opening and canal causes _really_ minute phase shifts, but they are what
    makes it possible for us to differentiate between a sound coming from
    directly behind vs directly ahead.

    While our eyes have a time constant closer to 0.1 seconds.

    That is, I blame natural selection on the above.

    Supposedly, we devote more of our brain to hearing than to vision?

    Terje



    I think it's not even close - in favor of vision.

    That would have been my guess as well, but as I wrote above, a few
    years ago I was told it was otherwise. Now I have actually read a few
    papers about how you can actually measure this, and it did make sense,
    i.e. at least an order of magnitude more vision than hearing.

    Having done both audio and video codec optimization I know that even the
    very highest levels of audio quality are near-DC compared to video
    signals. :-)

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Sat Sep 21 20:45:10 2024
    On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 6:54 AM, Scott Lurndal wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:
    https://www.marvell.com/products/cxl.html

    What about a weak coherency where a programmer has to use the correct
    membars to get the coherency required for their specific needs? Along
    the lines of UltraSPARC in RMO mode?

    In my case, I suffered through enough of these to implement a
    memory hierarchy free from the need of any MemBars yet provide
    the performance of <mostly> relaxed memory order, except when
    certain kinds of addresses are touched {MMI/O, configuration
    space, ATOMIC accesses,...} In these cases, the core becomes
    {sequentially consistent, or strongly ordered} depending on the
    touched address.

    As far as PCIe device to device data routing, this will all be
    based on the chosen virtual channel. Same channel=in order,
    different channel=who knows.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to All on Sat Sep 21 18:49:20 2024
    On 9/21/24 16:45, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 6:54 AM, Scott Lurndal wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:
    https://www.marvell.com/products/cxl.html

    What about a weak coherency where a programmer has to use the correct
    membars to get the coherency required for their specific needs? Along
    the lines of UltraSPARC in RMO mode?

    In my case, I suffered through enough of these to implement a
    memory hierarchy free from the need of any MemBars yet provide
    the performance of <mostly> relaxed memory order, except when
    certain kinds of addresses are touched {MMI/O, configuration
    space, ATOMIC accesses,...} In these cases, the core becomes
    {sequentially consistent, or strongly ordered} depending on the
    touched address.

    Well, we have asymmetric memory barriers now (membarrier() in linux)
    so we can get rid of memory barriers in some cases. Hazard
    pointers, which used to be a (load, store, mb, load), are now just
    a (load, store, load). Much faster: from 8.02 nsecs down to 0.79 nsecs.
    So much so that other things which have heretofore been considered
    to add negligible overhead no longer look negligible by comparison.
    Which can be a little annoying, because some people like using those a lot.
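
    As a minimal sketch of the deferred-reclamation scheme described above (a
    toy single-threaded model, not Seigh's implementation; the class and method
    names are illustrative): a reader publishes a hazard on a node before using
    it, writers retire nodes instead of freeing them, and a scan frees only
    nodes no reader has protected. The point where the old `mb` sat - and where
    the asymmetric membarrier() now does the work - is marked in comments.

```python
# Toy hazard-pointer domain. In a real multi-threaded C implementation the
# reader's sequence is: load pointer; store it into a hazard slot; <fence>;
# reload and verify. The asymmetric-barrier trick removes that per-read
# <fence> and has the reclaimer call membarrier() once per scan instead.

class HazardDomain:
    def __init__(self):
        self.hazards = set()   # ids of nodes currently published by readers
        self.retired = []      # nodes waiting to be reclaimed

    def protect(self, node):
        # Reader side: publish before use (the store of the old protocol).
        self.hazards.add(id(node))
        return node

    def release(self, node):
        self.hazards.discard(id(node))

    def retire(self, node):
        # Writer side: defer freeing until no reader holds a hazard on it.
        self.retired.append(node)

    def scan(self):
        # Reclaimer side: a real implementation issues membarrier() here so
        # that every reader's hazard store is visible before the set is read.
        freed = [n for n in self.retired if id(n) not in self.hazards]
        self.retired = [n for n in self.retired if id(n) in self.hazards]
        return freed

dom = HazardDomain()
a, b = object(), object()
dom.protect(a)         # a is in use by a "reader"
dom.retire(a)
dom.retire(b)
freed = dom.scan()     # b can be freed; a is still protected
assert freed == [b] and dom.retired == [a]
dom.release(a)
assert dom.scan() == [a]
```

    The per-read saving quoted above (8.02 ns down to 0.79 ns) comes entirely
    from deleting the fence on the reader's fast path; the scan, which runs
    rarely, absorbs the cost.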

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sat Sep 21 23:55:13 2024
    On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there
    anyway.

    How close can one get?

    Arbitrarily close. I heard of experiments already being done in the
    microkelvin range.

    Correction: just checked, and the Guinness World Record site reports a
    figure of 38pK.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sat Sep 21 23:34:40 2024
    On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:

    CPU's that are hyper close to the memory is good wrt locality. However,
    the programming for it might turn some programmers off. NUMA like for
    sure.

    I’m pretty sure those multi-million-node Linux supers that fill the top of the Top500 list have a NUMA-style memory-addressing model.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Sat Sep 21 23:50:46 2024
    On Fri, 20 Sep 2024 21:06 +0100 (BST), John Dallman wrote:

    All the threads are executing exactly the same instructions,on the same
    code path.

    Yes, but look at the things that GPUs, for example, typically do: large
    parts of their execution time is in pieces of code called “fragment shaders”. In OpenGL, a “fragment” means a “pixel before compositing” --
    one or more “fragments” get composited together to produce a final image pixel. They could have just called it a “pixel in an intermediate image buffer”, and avoided introducing yet another mysterious-sounding technical term.

    There are a lot of memory accesses involved in a typical fragment shader: reading from texture buffers, reading/writing other image buffers. Then
    you have things like stencil buffers and depth buffers, that play their
    part in the computation. And geometry buffers, though these tend to be
    smaller. Buffers coming out your ears, basically. So the proportion of instructions that access memory is much higher than a typical CPU workload
    -- probably not far short of 100%, certainly in execution time.

    As I recall, DRAM access involves specifying “row” and “column” addresses.
    As I further recall, if the “row” address does not change from one access to the next, then you can specify multiple successive “column”-only addresses and do faster sequential access to the memory (until you hit the
    end of the row). GPUs would take full advantage of this, and their
    patterns of memory usage should suit it quite well.
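
    The row/column behaviour described above can be checked with a toy
    open-page model (the 8 KiB row-buffer size and 64-byte access size here
    are assumptions for illustration; real parts vary):

```python
import random

# Toy open-page DRAM model: an access "hits" if it falls within the row
# that is already open in the row buffer; otherwise the controller must
# precharge and activate a new row, which is the slow path.
ROW_BYTES = 8192  # illustrative row-buffer size

def row_hit_rate(addresses):
    open_row = None
    hits = 0
    for addr in addresses:
        row = addr // ROW_BYTES
        if row == open_row:
            hits += 1
        else:
            open_row = row  # row miss: activate the new row
    return hits / len(addresses)

# Sequential streaming (GPU-like): one miss per row, then 127 hits.
seq = list(range(0, 1 << 20, 64))             # 64-byte accesses over 1 MiB

# Scattered accesses (pointer-chasing-like): almost every access misses.
rng = random.Random(1)
rand = [rng.randrange(0, 1 << 30) for _ in range(len(seq))]

print(row_hit_rate(seq))   # 0.9921875  (127/128)
print(row_hit_rate(rand))  # close to 0
```

    This is why a streaming workload gets near-peak bandwidth from the same
    DRAM that delivers dismal latency-bound throughput to a random-access one.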

    On the other hand, such heavily sequential access has poor caching
    behaviour.

    So you see the difference in memory behaviour between GPUs and CPUs: CPUs
    have (or allow) more complex patterns of memory access, necessitating
    elaborate memory controllers with multiple levels of caching to get the necessary performance, while GPUs can make do with much simpler memory interfaces that don’t benefit from caching.

    This also complicates any ability to share memory between GPUs and CPUs.

    Which brings us back to the point I made before: CPU RAM on the
    motherboard is typically upgradeable, while GPU RAM comes on the same card
    as the GPU, and is typically not upgradeable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 02:57:21 2024
    On Sat, 21 Sep 2024 23:34:40 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:

    CPU's that are hyper close to the memory is good wrt locality.
    However, the programming for it might turn some programmers off.
    NUMA like for sure.

    I’m pretty sure those multi-million-node Linux supers that fill the
    top of the Top500 list have a NUMA-style memory-addressing model.

    You are wrong.
    Last ccNuma was pushed out of top100 more than a decade ago.
    All top machines today are MPP or clusters.
    Not that a difference between the two is well-defined.

    After consulting Wikipedia, it's probably far more than a decade.
    The last one that I thought to be ccNuma, was in fact a cluster of
    10 ccNuma computers.
    https://en.wikipedia.org/wiki/Columbia_(supercomputer)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 01:28:42 2024
    On Sat, 21 Sep 2024 23:55:13 +0000, Lawrence D'Oliveiro wrote:

    On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there
    anyway.

    How close can one get?

    Arbitrarily close. I heard of experiments already being done in the microkelvin range.

    Correction: just checked, and the Guinness World Record site reports a
    figure of 38pK.

    Using lasers to slow the particles down !

    When a particle is vibrating towards the laser, a picosecond blast
    of energy slows it back down. Using heat to achieve cold.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Sun Sep 22 01:29:38 2024
    On Sun, 22 Sep 2024 0:13:38 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 4:55 PM, Lawrence D'Oliveiro wrote:
    On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there >>>> anyway.

    How close can one get?

    Arbitrarily close. I heard of experiments already being done in the
    microkelvin range.

    Odd. So absolute zero is the "limit" and we can get arbitrarily close to
    it? Kind of reminds me of the infinity in the unit fractions. Say they
    are signed for a moment... ;^)

    Temperature is an unsigned quantity.


    Correction: just checked, and the Guinness World Record site reports a
    figure of 38pK.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Paul A. Clayton on Sun Sep 22 02:13:42 2024
    On Fri, 20 Sep 2024 06:52:07 -0400, Paul A. Clayton wrote:

    From what I understand, GPUs also typically have
    memory controllers optimized for throughput rather than latency, with
    larger queue depth.

    Fine. If they aren’t designed for low latency, then you can’t call them “interactive”, can you? Since that requires quick response. They seem more oriented towards batch operation in the background.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sun Sep 22 02:12:31 2024
    On Sun, 22 Sep 2024 02:57:21 +0300, Michael S wrote:

    On Sat, 21 Sep 2024 23:34:40 -0000 (UTC)

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:

    CPU's that are hyper close to the memory is good wrt locality.
    However, the programming for it might turn some programmers off.
    NUMA like for sure.

    I’m pretty sure those multi-million-node Linux supers that fill the top
    of the Top500 list have a NUMA-style memory-addressing model.

    You are wrong.
    Last ccNuma was pushed out of top100 more than a decade ago.
    All top machines today are MPP or clusters.
    Not that a difference between the two is well-defined.

    If the difference is not “well-defined”, then how can I be “wrong”?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 02:21:54 2024
    On Sun, 22 Sep 2024 2:13:42 +0000, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 06:52:07 -0400, Paul A. Clayton wrote:

    From what I understand, GPUs also typically have
    memory controllers optimized for throughput rather than latency, with
    larger queue depth.

    Fine. If they aren’t designed for low latency, then you can’t call them “interactive”, can you? Since that requires quick response. They seem more oriented towards batch operation in the background.

    Think about it like this::

    A GPU can perform 1T-10T calculations per second running at 1GHz.

    Try doing that with a CPU.

    They are entirely different on the spectrum of design and architecture.
    Things that work to make CPUs faster do not make GPUs faster--and for
    the most part--vice versa.
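
    The arithmetic behind that throughput claim can be made explicit (the
    convention of counting an FMA as two operations is an assumption here):
    dividing ops per second by the clock gives the parallelism the GPU must
    sustain every single cycle.

```python
# Back-of-envelope: what 1T-10T calculations/s at 1 GHz implies.
clock_hz = 1e9
for ops_per_sec in (1e12, 10e12):
    ops_per_cycle = ops_per_sec / clock_hz
    fma_units = ops_per_cycle / 2  # one FMA = multiply + add = 2 ops
    print(f"{ops_per_sec:.0e} ops/s -> {ops_per_cycle:.0f} ops/cycle "
          f"(~{fma_units:.0f} FMA lanes busy every cycle)")
```

    Thousands of lanes kept busy every cycle is exactly the regime where CPU
    tricks (deep speculation, big per-core caches) stop paying off.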

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Chris M. Thomasson on Sun Sep 22 02:48:52 2024
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 9/21/2024 9:39 AM, Brett wrote:
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 9/20/2024 6:48 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 1:12:38 +0000, Brett wrote:

    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Fri, 20 Sep 2024 21:54:36 +0000, Chris M. Thomasson wrote:

    On 9/20/2024 2:32 PM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Sep 2024 11:21:52 -0400, Stefan Monnier wrote:

    The basic issue is:
    * CPU+motherboard RAM -- usually upgradeable
    * Addon coprocessor RAM -- usually not upgradeable

    Maybe the RAM of the "addon coprocessor" is not upgradeable, but the
    addon board itself can be replaced with another one (one with more
    RAM).

    Yes, but that’s a lot more expensive.

    I had this crazy idea of putting cpus right on the ram. So, if you add
    more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    Can software use the extra CPUs ?

    Also note: DRAMs are made on a P-Channel process (leakage) with only a few
    layers of metal, while CPUs are based on an N-Channel process (speed) with
    many layers of metal.

    Didn’t you work on the MC68000 which had one layer of metal?

    Yes, but it was the 68020 and had polysilicide which we used as
    a second layer of metal.

    Mc88100 had 2 layers of metal and silicide.

    The number of metal layers went about::
    1978: 1
    1980: 1+silicide
    1982: 2+silicide
    1988: 3+silicide
    1990: 4+silicide
    1995: 6
    ..

    This could be fine if you are going for the AI market of a slow AI cpu
    with huge memory and bandwidth.

    The AI market is bigger than the general server market as seen in
    NVidia’s sales.

    Bus interconnects are not setup to take a CPU cache miss from one
    DRAM to a different DRAM on behalf of its contained CPU(s).
    {Chicken and egg problem}

    Thus a problem with the CPU on DRAM approach.

    It would be HIGHLY local wrt its processing units and its memory for
    they would all be one.

    The programming for it would not be all that easy... It would be like a
    NUMA where a program can divide itself up and run parts of itself on
    each slot (aka memory-cpu hybrid unit card if you will). If a program
    can be embarrassingly parallel, well that would be great! The Cell
    processor comes to mind. But it failed. Shit.

    Cell was in the PlayStation which Sony sold a huge number of and made
    billions of dollars, so successful, not failed.

    Touche! :^)

    However, iirc, not all the games for it even used the SPE's. Instead
    they used the PPC. I guess that might have been due to the "complexity"
    of the programming? Not sure.

    ALL games used the SPE’s, the PPC was not fast enough for a AAA game.
    SPE is more powerful and flexible than a vertex shader on the graphics
    chip.

    I programmed for Cell, it was actually a nice architecture for what it did.

    Iirc, you had to use DMA to communicate with the SPE's?

    You have to build DMA lists for the graphics chip anyway, the SPE’s are
    just more of the same. Today the vertex shaders are on the graphics chip, instead of SPE, same difference.

    If you think programming for AI is easy, I have news for you…

    Those NVidia AI chips are at the brain damaged level for programming.

    No shit? I was thinking along the lines of compute shaders in the GPU?


    10’s of billions of dollars are invested in this market.

    A system with a mother board that has slots for several GPUS (think
    crossfire) and slots for memory+CPU units. The kicker is that adding
    more memory gives you more cpus...

    How crazy is this? Well, on a scale from:

    Retarded to Moronic?

    Pretty bad? Shit...

    Shit man, remember all of the slots in the old Apple IIgs's?

    ;^o



    Such a dram would be on the PCIE busses, and the main CPU’s would barely
    touch that ram, and the AI only searches locally.

    Better make it PCIe+CXL so the downstream CPU is cache coherent.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to David Brown on Sun Sep 22 03:20:59 2024
    David Brown <david.brown@hesbynett.no> wrote:
    On 21/09/2024 19:40, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 19 Sep 2024 19:29:31 -0000 (UTC), Brett wrote:

    Quantum mechanics is high IQ bullshit to make professors look important. >>>
    Quantum mechanics is real. Quantum effects are real. Transistors only work
    because electrons can “tunnel” through a barrier with higher energy than
    they have, which should be classically impossible.

    I did not criticize quantum effects, I criticized quantum mechanics which
    is dumbshit SWAG that hides the truth of what is happening behind bullshit.
    With greater understanding we can come up with classical explanations, but
    those truths are too scary and could lead to the destruction of mankind.

    If you want to know what is really going on watch the Eric Weinstein The
    Portal videos.

    https://youtu.be/xBx5Y1YLfZY?si=0sVFmvOh-bot2ok7

    Eric is a scary bright physicist, he does not know the truth, but he knows
    it’s being hidden, and he shows you.

    “You can’t handle the truth.”

    https://youtu.be/9FnO3igOkOk?si=0xmQuxz6yaLnBkCC

    I don’t know the truth either, and if I did I would not speculate on it,
    too dangerous.

    https://www.amazon.com/Experts-Vs-Conspiracy-Theorists-T-Shirt/dp/B0CKLLCN9M/ref=asc_df_B0CKLLCN9M


    In case anyone wants a safe link to information about this particular
    muppet, without Google thinking they are interested in loony conspiracy theories on the level of "birds don't exist", you can read about him at <https://rationalwiki.org/wiki/Eric_Weinstein>.

    """
    Repressed genius

    For years as a mathematician he has said that he has some kind of theory
    of everything that will knock everyone out and overturn the field of
    physics, but he just can't publish it yet because the world isn't ready,
    and the information will only be suppressed.[6]

    In 2020, Weinstein published his much-hyped Oxford lecture on his Theory
    of Geometric Unity.[7] It was met with silence and indifference among theoretical physicists and the scientific community at large.

    On 1st April 2021, Weinstein released a draft of his paper online.[10]
    Given the 1st April release date and the author details on the cover
    page describing Weinstein as an "entertainer" and the paper itself as a
    "work of entertainment", it is unclear at this stage whether Geometrical Unity was an elaborate April Fools' Day Wikipedia prank all along.
    """

    Rather than publishing his theories in peer-reviewed journals, like real scary bright physicists, he promoted his views on Joe Rogan's podcast.
    And he described himself as "not a physicist" (he is a venture
    capitalist, not a scientist) and that the paper was "a work of entertainment". That should give you some idea of how seriously his
    ideas should be taken.



    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It is,
    like Newtonian gravity and general relativity, a simplification that
    gives an accurate model of reality within certain limitations, and
    hopefully it will one day be superseded by a new theory that models
    reality more accurately and over a wider range of circumstances. That
    is how science works.

    As things stand today, no such better theory has been developed. There
    are a number of ideas and hypotheses (still far from being classifiable
    as scientific theories) that show promise and have not yet been
    demonstrated to be wrong, but that's as far as we have got. Weinstein's
    "Geometric Unity" is not such a hypothesis - the little that has been
    published has been shown to be either wrong, or "not even wrong".

    It's fine to come up with strange new ideas about how the universe
    works. You then publish and discuss those ideas, and work with other scientists to weed out the clearly incorrect parts, try to expand and
    modify it to fit what we know about reality, and to think about how it
    could make predictions that could be tested. That's part of the process
    of science.

    It's not fine to believe half-baked ramblings from someone who doesn't understand what they are working with and won't listen to those who do.
    The alternative to "I don't understand quantum mechanics" is /not/ to
    believe whatever gobbledegook someone spouts on youtube.

    https://tenor.com/view/dr-evil-right-riiiight-gif-7363102

    Astronomers have only found a dozen Einstein Rings, with today's telescopes
    we should be seeing billions of them, and the ones we find are so weak they
    can be explained by light spreading in a dusty stellar environment, an
    effect an order of magnitude smaller.

    Einstein is provably wrong about gravity and all you hear are crickets.

    Why? Because Einstein’s theories made designing a nuclear bomb 100 times harder, which bought humanity two decades of peace. Of course now you can
    do those calculations on your cell phone, but habits die hard. Atomic
    refining treaties now protect us.

    “You can’t handle the truth.”

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Terje Mathisen on Sun Sep 22 07:18:53 2024
    On Sat, 21 Sep 2024 22:42:49 +0200, Terje Mathisen wrote:

    i.e at least an order of magnitude more vision than hearing.

    Here’s another thing: the raw resolution of our retinas is on the order of gigapixels. But if you look at the bandwidth of the optic nerve, it’s
    nowhere near enough to transmit that amount of data a few dozen times a
    second.

    So much of what you see (or what you think you see) is thrown away before
    it even gets to your brain.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Sun Sep 22 07:16:53 2024
    On Sun, 22 Sep 2024 02:21:54 +0000, MitchAlsup1 wrote:

    A GPU can perform 1T-10T calculations per second running at 1GHz.

    Try doing that with a CPU.

    Why does the GPU have a lower clock speed? It’s because it’s locked to the RAM speed, in a way that the CPU is not. Once you allow for this, you
    realize that their performance limitations, doing like for like, are not
    that different after all.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sun Sep 22 07:21:36 2024
    On Sat, 21 Sep 2024 21:02:19 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 6:28 PM, MitchAlsup1 wrote:

    Using lasers to slow the particles down !

    When a particle is vibrating towards the laser, a picosecond blast of
    energy slows it back down. Using heat to achieve cold.

    Remember, heat comes from disorder. But a laser is a coherent beam, the opposite of disorder.

    Targeting a single particle without casting any effect on any other
    particle? Can that be done?

    I think the particle is caught in a peak or trough of the laser radiation
    wave, and bounced around that way.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sun Sep 22 07:23:59 2024
    On Sat, 21 Sep 2024 17:40:21 -0000 (UTC), Brett wrote:

    I did not criticize quantum effects, I criticized quantum mechanics
    which is dumbshit SWAG that hides the truth of what is happening behind bullshit. With greater understanding we can come up with classical explanations ...

    No we cannot. Some have hypothesized the existence of “hidden variables” which can be used to come up with classical explanations of quantum
    effects. Bell’s Theorem offered a way to test for those, and the tests
    (there have been several of them so far, done in several different ways)
    show that such local “hidden variables” cannot exist.
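
    The CHSH form of Bell's test can be stated numerically in a few lines
    (using the standard singlet-state correlation E(a,b) = -cos(a-b) and the
    textbook angle choices; this is an illustration of the bound, not of an
    actual experiment):

```python
import math
from itertools import product

# CHSH quantity: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Any local-hidden-variable model obeys |S| <= 2; quantum mechanics,
# with the singlet correlation E = -cos(a - b), reaches 2*sqrt(2).

def chsh(a, ap, b, bp):
    E = lambda x, y: -math.cos(x - y)  # singlet-state correlation
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Classical bound: with deterministic outcomes A, A', B, B' in {-1, +1},
# A*B - A*B' + A'*B + A'*B' = A*(B - B') + A'*(B + B') is always +/-2.
classical_max = max(
    A * B - A * Bp + Ap * B + Ap * Bp
    for A, Ap, B, Bp in product((-1, 1), repeat=4)
)

# Textbook angles: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
S = abs(chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4))

print(classical_max)   # 2
print(round(S, 4))     # 2.8284, i.e. 2*sqrt(2)
```

    The experiments measure S above 2, which no assignment of pre-existing
    local values can produce - that is the content of the theorem.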

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Sun Sep 22 07:31:21 2024
    On Sun, 22 Sep 2024 01:29:38 +0000, MitchAlsup1 wrote:

    Temperature is an unsigned quantity.

    Presumably you mean “absolute” temperature, otherwise ...

    Fun fact: there are actual physical systems with negative absolute temperatures.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sun Sep 22 07:30:08 2024
    On Sun, 22 Sep 2024 03:20:59 -0000 (UTC), Brett wrote:

    Astronomers have only found a dozen Einstein Rings ...

    It only took a minute to prove that false. From <https://en.wikipedia.org/wiki/Einstein_ring>: “Hundreds of
    gravitational lenses are currently known”. Also: “The degree of completeness needed for an image seen through a gravitational lens to
    qualify as an Einstein ring is yet to be defined.”

    And there are other, subtler kinds of gravitational lensing. A link
    from <https://en.wikipedia.org/wiki/Gravitational_lens> mentions a
    survey of older data that discovered 1210 new lenses, doubling the
    number known.

    Not that this really has anything to do with quantum theory ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 10:34:16 2024
    On Sun, 22 Sep 2024 02:12:31 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 22 Sep 2024 02:57:21 +0300, Michael S wrote:

    On Sat, 21 Sep 2024 23:34:40 -0000 (UTC)

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sat, 21 Sep 2024 13:29:31 -0700, Chris M. Thomasson wrote:

    CPU's that are hyper close to the memory is good wrt locality.
    However, the programming for it might turn some programmers off.
    NUMA like for sure.

    I’m pretty sure those multi-million-node Linux supers that fill
    the top of the Top500 list have a NUMA-style memory-addressing
    model.

    You are wrong.
    Last ccNuma was pushed out of top100 more than a decade ago.
    All top machines today are MPP or clusters.
    Not that a diffference between the two is well-defined.

    If the difference is not “well-defined”, then how can I be “wrong”?

    The difference between MPP and cluster is not well-defined.
    The difference between ccNUMA and MPP-or-cluster is crystal clear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Brett on Sun Sep 22 11:56:33 2024
    On Fri, 20 Sep 2024 15:21:12 -0000 (UTC)
    Brett <ggtgp@yahoo.com> wrote:

    David Brown <david.brown@hesbynett.no> wrote:
    On 20/09/2024 07:46, Thomas Koenig wrote:
    Brett <ggtgp@yahoo.com> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> wrote:
    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look
    important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type “quantum mechanics criticism” and variants into Google and
    have at it.

    I've read enough crackpot theories already, thank you, I don't need
    any more.

    Quantum mechanics describes the rules that give structure to atoms
    and molecules. On a larger scale, those structures build up to
    explain spherical planets. But we know the earth is flat.
    Therefore, quantum mechanics is bullshit. What more evidence could
    you want?


    Yup, you just explained the Einstein argument, just like I said.


    Einstein didn't like the Copenhagen interpretation of Quantum Mechanics.
    He didn't question the validity of the equations.
    It was all in a different era. Philosophical interpretations were
    considered important.
    The next generations of physicists, say, those who were born 100 or
    fewer years ago, were much less obsessed with this sort of stuff. Of
    course, there are exceptions, even to this day.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Sun Sep 22 11:48:08 2024
    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:


    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It
    is, like Newtonian gravity and general relativity, a simplification
    that gives an accurate model of reality within certain limitations,
    and hopefully it will one day be superseded by a new theory that
    models reality more accurately and over a wider range of
    circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated understanding) is
    that unlike the Schrodinger equation, approximate solutions for QED
    equations can't be calculated numerically by means of Green's function.
    Because of that, QED is rarely used outside of the field of high-energy
    particles and such.

    But then, I am almost 40 years out of date. Things could have changed.

    There are a number of ideas and hypotheses (still far from being
    classifiable as scientific theories) that show promise and have not
    yet been demonstrated to be wrong, but that's as far as we have got.
    Weinstein's "Geometric Unity" is not such a hypothesis - the little
    that has been published has been shown to be either wrong, or "not
    even wrong".


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Sun Sep 22 12:12:25 2024
    On Sun, 22 Sep 2024 02:21:54 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Sun, 22 Sep 2024 2:13:42 +0000, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 06:52:07 -0400, Paul A. Clayton wrote:

    From what I understand, GPUs also typically have
    memory controllers optimized for throughput rather than latency,
    with larger queue depth.

    Fine. If they aren’t designed for low latency, then you can’t call
    them “interactive”, can you? Since that requires quick response.
    They seem more oriented towards batch operation in the background.

    Think about it like this::

    A GPU can perform 1T-10T calculations per second running at 1GHz.

    Try doing that with a CPU.


    Any Xeon (except Sierra Forest) that has more than 31 cores can perform
    1T DP FP operations per second running at 1 GHz (I am counting FMA as 2 ops).
    The same applies to EPYC3 or better; you just need a few more cores
    to do it, but still fewer than the number available in top models.
    I would think that even the top model of Ampere Altra could achieve 1T FLOPs
    at 1 GHz, but I didn't check.

    Now 10T at 1 GHz on CPU would be tough. And quite pointless.
    But I'd guess that the pointlessness of it is exactly what you wanted
    to tell us.
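The arithmetic behind a claim like this can be sketched as peak FLOP/s = cores x FMA units x SIMD lanes x 2 x clock. The per-core unit counts below (2 FMA pipes, 8 DP lanes per AVX-512 unit) are illustrative assumptions for a typical AVX-512 part, not vendor-verified figures:

```python
def peak_dp_flops(cores, fma_units, dp_lanes, clock_hz):
    # Peak double-precision FLOP/s; an FMA counts as 2 ops (mul + add).
    return cores * fma_units * dp_lanes * 2 * clock_hz

# Hypothetical 32-core AVX-512 part at 1 GHz:
# 32 cores * 2 FMA pipes * 8 DP lanes * 2 ops = 1024 FLOP/cycle,
# i.e. just over 1e12 DP FLOP/s at 1 GHz.
print(peak_dp_flops(cores=32, fma_units=2, dp_lanes=8, clock_hz=1e9))
```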

    They are entirely different on the spectrum of design and
    architecture. Things that work to make CPUs faster do not make GPUs faster--and for the most part--vice versa.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Sun Sep 22 02:29:51 2024
    scott@slp53.sl.home (Scott Lurndal) writes:

    Brett <ggtgp@yahoo.com> writes:

    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look
    important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type quantum mechanics criticism and
    variants into Google and have at it.

    Why should one do that?

    To discover the truly brilliant explanations at crackpot-conspiracy-theories.com.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 12:18:01 2024
    On Sun, 22 Sep 2024 07:16:53 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 22 Sep 2024 02:21:54 +0000, MitchAlsup1 wrote:

    A GPU can perform 1T-10T calculations per second running at 1GHz.

    Try doing that with a CPU.

    Why does the GPU have a lower clock speed? It’s because it’s locked
    to the RAM speed, in a way that the CPU is not. Once you allow for
    this, you realize that their performance limitations, doing like for
    like, are not that different after all.

    Bullshite.
    GPUs have lower clock speed because this way they can operate at lower
    voltage and do more work per Joule.
    High-end GPUs are power-bound beasts.
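This point follows from the usual first-order model of dynamic CMOS power, P = C * V^2 * f: energy per operation scales with V^2, so running twice the logic at half the clock and reduced voltage does the same work for fewer Joules. The capacitance and voltage numbers below are invented purely for illustration:

```python
def dynamic_power(c_eff, volts, freq_hz):
    # First-order dynamic CMOS power model: P = C * V^2 * f
    return c_eff * volts ** 2 * freq_hz

# Hypothetical comparison at equal throughput; halving f is assumed to
# permit roughly 0.7x supply voltage.
fast = dynamic_power(c_eff=1e-9, volts=1.0, freq_hz=2e9)  # narrow, 2 GHz, 1.0 V
slow = dynamic_power(c_eff=2e-9, volts=0.7, freq_hz=1e9)  # 2x logic, 1 GHz, 0.7 V
print(slow / fast)  # ~0.49: same work rate at roughly half the power
```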

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Chris M. Thomasson on Sun Sep 22 12:41:37 2024
    On 2024-09-22 7:02, Chris M. Thomasson wrote:
    On 9/21/2024 6:28 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 23:55:13 +0000, Lawrence D'Oliveiro wrote:

    On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there >>>>> anyway.

    How close can one get?

    Arbitrarily close. I heard of experiments already being done in the
    microkelvin range.

    Correction: just checked, and the Guinness World Record site reports a
    figure of 38pK.

    Using lasers to slow the particles down !

    When a particle is vibrating towards the laser, a picosecond blast
    of energy slows it back down. Using heat to achieve cold.

    Targeting a single particle without casting any effect on any other
    particle? Can that be done?


    It's not done that way - the laser beams are continuous, but they are
    tuned and/or polarized to interact more with particles moving the "wrong
    way", slowing them down on the average, which means cooling them. The
    particles "self select" to interact with the beams, based on Doppler
    effects or other effects that depend on particle movements.

    https://en.wikipedia.org/wiki/Laser_cooling

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to mitchalsup@aol.com on Sun Sep 22 02:34:29 2024
    mitchalsup@aol.com (MitchAlsup1) writes:

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    Much smaller than that. More like 10 microseconds.
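The 10-microsecond figure is easy to sanity-check: at the speed of sound in air, a 10 us interaural time difference corresponds to a path difference of only a few millimetres, well inside the ~0.6 ms maximum delay set by the width of a human head. The 0.2 m ear spacing below is a round assumed number:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(path_difference_m):
    # Interaural time difference for a given extra path length to one ear
    return path_difference_m / SPEED_OF_SOUND

max_itd = itd_seconds(0.2)              # ~0.2 m assumed ear spacing
path_for_10us = 10e-6 * SPEED_OF_SOUND  # path difference a 10 us ITD implies
print(max_itd)        # ~5.8e-4 s, the largest delay head geometry allows
print(path_for_10us)  # ~3.4e-3 m: a few millimetres resolves 10 us
```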

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Terje Mathisen on Sun Sep 22 12:46:34 2024
    On 21/09/2024 22:42, Terje Mathisen wrote:
    Michael S wrote:
    On Sat, 21 Sep 2024 15:39:41 +0200
    Terje Mathisen <terje.mathisen@tmsw.no> wrote:

    MitchAlsup1 wrote:
    On Fri, 20 Sep 2024 20:06:00 +0000, John Dallman wrote:
    In article <vcgpqt$gndp$1@dont-email.me>, david.brown@hesbynett.no
    (David
    Brown) wrote:
    Even a complete amateur can notice time mismatches of 10 ms in a
    musical context, so for a professional this does not surprise me.
    I don't know of any human endeavour that requires lower latency or >>>>>> more precise timing than music.

    A friend used to work on set-top boxes, with fairly slow hardware.
    They had demonstrations of two different ways of handling
    inability to keep up
    with the data stream:

    - Keeping the picture on schedule, and dropping a few milliseconds
       of sound.
    - Dropping a frame of the picture, and keeping the sound on-track.

    Potential customers always thought they wanted the first approach,
    until they watched the demos. Human vision fakes a lot of what we
    "see" at the best of times, but hearing is more sensitive to
    glitches.

    Having the ears being able to hear millisecond differences in sound
    arrival times is key to our ability to hunt and evade predators.

    Not only that, but the slight non-cylindrical shape of the ear
    opening & canal causes _really_ minute phase shifts, and they are what
    makes it possible for us to differentiate between a sound coming from
    directly behind vs directly ahead.

    While our eyes have a time constant closer to 0.1 seconds.

    That is, I blame natural selection on the above.

    Supposedly, we devote more of our brain to hearing than to vision?

    Terje



    I think, it's not even close in favor of vision.

    That would have been my guess as well, but as I wrote above, a few
    years ago I was told it was otherwise. Now I have actually read a few
    papers about how you can actually measure this, and it did make sense,
    i.e. at least an order of magnitude more vision than hearing.

    Having done both audio and video codec optimization I know that even the
    very highest levels of audio quality is near-DC compared to video
    signals. :-)


    Part of this is the bandwidth of the information, as you see from signal handling. But biologically a main reason for the bigger brain volume
    devoted to visual processing compared to audio processing is that humans
    have excellent ears and terrible eyes.

    Mammal ears are, in general, very good. They gained significant
    fidelity and flexibility when early mammal ancestors evolved the ability
    to chew - to move their lower jaw sideways. This opened up space at the
    ear cavity that let it expand. Most mammals have external ears that aid directional detection (I think owls are the only non-mammals with
    external ears), and have a significantly more advanced internal
    structure than non-mammals. In humans there are more nerves leading
    from the brain to the ears, than from the ears to the brain - especially
    during the first two years of life, the ears are trained and fine-tuned
    to identify the sounds and language phonemes that we will use. So the information we get from the ears is already heavily processed and
    categorised, meaning much less brain volume needs to be devoted to
    processing it.

    Our eyes, on the other hand, are very poor. Part of that stretches
    /way/ back to when eyes evolved to see under water, not in air, and in a
    stroke of bad luck evolved backwards (so that the retinal nerve passes
    through the retina and connects on the front, leaving a hole in the
    retina, instead of being sensibly connected at the back). Early mammals
    got their eyes from the common ancestor of mammals, reptiles, dinosaurs
    and related groups, which were pretty good despite these "design flaws".
    But the early mammals were nocturnal and their eyesight waned as their
    hearing and sense of smell dominated - colour vision in particular
    deteriorated significantly: they lost one of the three types of colour
    cone cell and were left with just two colours. Primates got a mutated
    version of one of these, but the colour separation is not good.

    So now we have very limited colour separation in just the very central
    area of the retina, and weak vision for the rest of it. The brain has
    to do an enormous amount of processing to "invent" the rest of what we
    think we see.

    In comparison, many birds have much more distinct colour cones, with a
    layer of coloured filters on top to enhance this further, and can have
    significantly more precise vision with very little brain power needed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Sun Sep 22 12:58:36 2024
    On 22/09/2024 10:48, Michael S wrote:
    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:


    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It
    is, like Newtonian gravity and general relativity, a simplification
    that gives an accurate model of reality within certain limitations,
    and hopefully it will one day be superseded by a new theory that
    models reality more accurately and over a wider range of
    circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated understanding) is
    that unlike the Schrodinger equation, approximate solutions for QED
    equations can't be calculated numerically by means of Green's function.
    Because of that, QED is rarely used outside of the field of high-energy
    particles and such.

    But then, I am almost 40 years out of date. Things could have changed.


    I don't claim to be an expert on this field in any way, and could easily
    be muddled on the details.

    I thought QED only covered special relativity, not general relativity -
    i.e., it describes particles travelling near the speed of light, but
    does not handle gravity or the curvature of space-time.


    There are a number of ideas and hypotheses (still far from being
    classifiable as scientific theories) that show promise and have not
    yet been demonstrated to be wrong, but that's as far as we have got.
    Weinstein's "Geometric Unity" is not such a hypothesis - the little
    that has been published has been shown to be either wrong, or "not
    even wrong".



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Sun Sep 22 14:26:17 2024
    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:
    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:


    Actual physicists know that quantum mechanics is not complete - it
    is not a "theory of everything", and does not explain everything.
    It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded
    by a new theory that models reality more accurately and over a
    wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in
    the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike the Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that, QED is rarely used
    outside of the field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity
    - i.e., it describes particles travelling near the speed of light,
    but does not handle gravity or the curvature of space-time.


    That sounds correct, at least for Dirac's form of QED. Maybe it was
    amended later.
    But that was not my point.
    My point was that QED is well known to be a better approximation of
    reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
    of it. Despite that, in practice a "worse" approximation is used far
    more often.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Sun Sep 22 13:26:20 2024
    On 22/09/2024 03:29, MitchAlsup1 wrote:
    On Sun, 22 Sep 2024 0:13:38 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 4:55 PM, Lawrence D'Oliveiro wrote:
    On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there >>>>> anyway.

    How close can one get?

    Arbitrarily close. I heard of experiments already being done in the
    microkelvin range.

    Odd. So absolute zero is the "limit" and we can get arbitrarily close to
    it? Kind of reminds me of the infinity in the unit fractions. Say they
    are signed for a moment... ;^)

    Temperature is an unsigned quantity.


    Temperature can be defined in several ways. It is often defined as the
    average kinetic energy of particles, and with that definition it is an
    unsigned quantity and can never be negative.

    But it can also be defined as the ratio of heat change to entropy
    change. Normally, adding more heat energy to a system increases its
    entropy - temperature is positive. But there are systems where the
    energy is constrained, so adding more energy reduces the number of ways
    the energy can be distributed (since you are reducing the options for
    the distribution of the empty energy slots). Thus the temperature is
    negative.
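The heat-over-entropy definition can be made concrete with the textbook two-level (spin) system: entropy is the log of the number of ways to distribute the excitations, and once more than half the spins are excited, adding energy reduces entropy, so 1/T = dS/dE goes negative. A rough numerical sketch (units where the level spacing and Boltzmann's constant are 1):

```python
from math import lgamma

def entropy(n_up, n):
    # S/k = ln C(n, n_up): log of the number of microstates with n_up
    # excited two-level systems out of n (computed via log-gamma).
    return lgamma(n + 1) - lgamma(n_up + 1) - lgamma(n - n_up + 1)

def inv_temperature(n_up, n):
    # 1/T as the discrete derivative dS/dE: the entropy change from
    # adding one quantum of energy (exciting one more system).
    return entropy(n_up + 1, n) - entropy(n_up, n)

n = 1000
print(inv_temperature(100, n) > 0)  # sparsely excited: ordinary positive T
print(inv_temperature(900, n) < 0)  # population inverted: negative T
```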

    There is also a maximal temperature - or at least, a maximum temperature
    that can be discussed with current physical theories. At the Planck
    temperature of 1.4e32 K, the thermal radiation emitted has a wavelength
    of one Planck length.



    Correction: just checked, and the Guinness World Record site reports a
    figure of 38pK.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Niklas Holsti on Sun Sep 22 13:40:22 2024
    On 22/09/2024 11:41, Niklas Holsti wrote:
    On 2024-09-22 7:02, Chris M. Thomasson wrote:
    On 9/21/2024 6:28 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 23:55:13 +0000, Lawrence D'Oliveiro wrote:

    On Sat, 21 Sep 2024 13:43:55 -0700, Chris M. Thomasson wrote:

    On 9/21/2024 1:22 AM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 15:33:23 -0700, Chris M. Thomasson wrote:

    Is there any activity going on at absolute zero?

    No, because the Third Law of Thermodynamics says you can’t get there >>>>>> anyway.

    How close can one get?

    Arbitrarily close. I heard of experiments already being done in the
    microkelvin range.

    Correction: just checked, and the Guinness World Record site reports a >>>> figure of 38pK.

    Using lasers to slow the particles down !

    When a particle is vibrating towards the laser, a picosecond blast
    of energy slows it back down. Using heat to achieve cold.

    Targeting a single particle without casting any effect on any other
    particle? Can that be done?


    It's not done that way - the laser beams are continuous, but they are
    tuned and/or polarized to interact more with particles moving the "wrong way", slowing them down on the average, which means cooling them. The particles "self select" to interact with the beams, based on Doppler
    effects or other effects that depend on particle movements.

    https://en.wikipedia.org/wiki/Laser_cooling


    Yes. I only learned about that recently - previously I had some vague
    (and wrong) ideas that about lasers hitting above-average temperature
    particles that moved faster and further than the rest.

    To give Chris a little more detail, the atoms will absorb light at
    particular frequencies, where the photon energy matches energy levels
    for its electrons. The closer the frequency matches, the higher the probability of an absorption.

    You can imagine the atom vibrating back and forth, with a speed
    dependent on its kinetic energy (its temperature). If you shine a laser
    with a particular frequency at a stationary atom, it will "see" that
    exact frequency of light. But if it is moving towards the light source,
    it will "see" a higher frequency, while if it is moving away from the
    light source, it will "see" a lower frequency - that's the Doppler effect.

    So you pick a laser frequency so that when a hot atom is moving quickly
    towards the light, it has a high probability of being absorbed. When
    that happens, the light exerts a force on the atom against its direction
    of motion, slowing it down and thus reducing its kinetic energy and temperature. Of course some other atoms will be hit too - they have a
    lower but non-zero probability of absorbing the photons. Overall,
    however, you reduce the average kinetic energy.
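The mechanism described above can be caricatured in a few lines of Monte Carlo: two counter-propagating beams, where an atom is more likely to absorb (and be kicked) from the beam it moves toward, because that beam is Doppler-shifted toward resonance. All probabilities and kick sizes below are invented for illustration; a real treatment involves detuning choices, recoil from re-emission, and the recoil-limit floor, all of which this toy ignores:

```python
import random

def doppler_cool(velocities, opposing_prob=0.8, kick=0.01, steps=500, seed=42):
    # Each step, every atom absorbs one photon: with probability
    # opposing_prob from the beam it is moving toward (slowing it),
    # otherwise from the co-propagating beam (speeding it up).
    rng = random.Random(seed)
    v = list(velocities)
    for _ in range(steps):
        for i, vi in enumerate(v):
            opposing = rng.random() < opposing_prob
            v[i] = vi - kick if (vi > 0) == opposing else vi + kick
    return v

def mean_speed(vs):
    return sum(abs(x) for x in vs) / len(vs)

rng0 = random.Random(1)
hot = [rng0.uniform(-1.0, 1.0) for _ in range(100)]
cold = doppler_cool(hot)
print(mean_speed(hot), mean_speed(cold))  # average speed drops sharply
```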

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to jseigh on Sun Sep 22 07:44:43 2024
    On 9/21/24 18:49, jseigh wrote:

    Well, we have asymmetric memory barriers now (membarrier() in linux)
    so we can get rid of memory barriers in some cases.  For hazard
    pointers, what used to be a (load, store, mb, load) is now just
    a (load, store, load).  Much faster: from 8.02 nsecs to 0.79 nsecs.
    So much so that other things which have heretofore been considered
    to add negligible overhead are not so negligible by comparison.  Which
    can be a little annoying, because some people like using those a lot.


    I should correct those timings slightly. The measurements were for a
    hazard pointer load, a dummy dependent load, and a hazard pointer clear.
    If I measure w/o the dummy dependent load, the timings go from
    7.75 to 0.61 nsecs respectively.

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Sun Sep 22 06:10:34 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It
    is, like Newtonian gravity and general relativity, a simplification
    that gives an accurate model of reality within certain limitations,
    and hopefully it will one day be superseded by a new theory that
    models reality more accurately and over a wider range of
    circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated understanding) is
    that unlike the Schrodinger equation, approximate solutions for QED
    equations can't be calculated numerically by means of Green's function.
    Because of that, QED is rarely used outside of the field of high-energy
    particles and such.

    But then, I am almost 40 years out of date. Things could have changed.

    Quantum electrodynamics, aka QED, is a quantum field theory for the electromagnetic force. QED accounts for almost everything we can
    directly see in the world, not counting gravity.

    The original QED of Dirac, as expressed in the Dirac equation, has a
    problem: according to that formulation, the self-energy of the
    electron is infinite. To address this deficiency, for about 20
    years physicists applied a convenient approximation, namely, they
    treated the theoretically infinite quantity as zero. Surprisingly,
    that approximation gave results that agreed with all the experiments
    that were done up until about the mid 1940s.

    In the late 1940s, Richard Feynman, Julian Schwinger, and Shinichiro
    Tomonaga independently developed versions of QED that address the
    infinite self-energy problem. (Tomonaga's work was done somewhat
    earlier, but wasn't publicized until later because of the isolation
    of Japan during World War II.) It wasn't at all obvious that the
    QED of Feynman and the QED of Schwinger were equivalent. That they
    were equivalent was established and publicized by Freeman Dyson
    (while he was a graduate student, no less).

    The problem of the seeminginly infinite self-energy of the electron
    was addressed by a technique known as renormalization. We could say
    that renormalization is only an approximation: it is known to be mathematically unsound, breaking down after a mere 400 or so decimal
    places. Despite that, QED gives numerical results that are correct
    up to the limits of our ability to measure. A computation done
    using QED matched an experimental result to within the tolerance
    of the measurement, which was 13 decimal places. An analogy given
    by Feynman is that this is like measuring the distance from LA to
    New York to an accuracy of the width of one human hair.

    QED has implications that are visible in the "normal" world, by
    which I mean using ordinary equipment rather than things like
    synchrotrons and particle accelerators, and that leaves atoms
    intact. Basically all of chemistry depends on QED and not on
    anything more exotic.

    There are three fundamental forces other than the electromagnetic
    force, namely, gravity, the weak force, and the strong force. The
    strong force is what holds together the protons and neutrons in the
    nucleus of an atom; it has to be stronger than the electromagnetic
    force so that protons don't just fly away from each other. The weak
    force is related to radioactive decay; it works only over very
    short distances because the carrier particle of the weak force is
    fairly massive (about 80 times the mass of a proton IIRC). For
    comparison the carrier particle of the electromagnetic force is the
    photon, which is massless; that means the electromagnetic force
    operates over arbitrarily large distances (although of course with a
    strength that diminishes as the distance gets larger).

    The strong force (sometimes called the color force) is peculiar in
    that the strong force actually *increases* with distance. That
    happens because the carrier particle of the color force has a color
    charge. For comparison photons are electrically neutral. It's
    because of this property that we never see isolated quarks.
    Basically, trying to pull two quarks apart takes so much energy that
    new quarks come into existence out of nothing. Quarks come in three
    "colors" (having nothing to do with ordinary color), times three
    families of quarks, times two quarks in each family. The carrier
    particle of the strong force is called a gluon, and there are eight
    different kinds of gluons. (It seems like there should be nine, to
    allow each of the 3x3 possible combinations of colors, but there are
    only eight.) The corresponding theory to QED for the strong force
    is called QCD, for Quantum chromodynamics.

    A joke that I like to tell is because the carrier particle for the
    strong force can change a quark from one color to another, rather
    than calling it a gluon it should have been called a crayon.

    The field theories for electromagnetism, the strong force, and the
    weak force have been unified in the sense that there is a
    mathematically consistent framework that accommodates all three.
    That unification is only mathematical, by which I mean that there
    are no testable physical implications, only a kind of tautological
    consistency. We can see all three field theories through a common
    mathematical lens, but that doesn't say anything about how the three
    theories interact physically.

    The gravitational force is much weaker, by 42 orders of magnitude,
    than the other three fundamental forces. The General Theory of
    Relativity is not a quantized theory. There are ideas about how to
    unify gravity and the other three fundamental forces, but none of
    these "grand unified" theories have any hypotheses that we are able
    to test experimentally. It's unclear how gravity fits in to the
    overall picture.

    The foregoing represents my best understanding of QED and the other
    fundamental forces of physics. I've done a fair amount of reading
    on the subject but I wouldn't claim even to be a physicist, let
    alone an expert.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Sun Sep 22 14:39:53 2024
    On 22/09/2024 13:26, Michael S wrote:
    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:
    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:


    Actual physicists know that quantum mechanics is not complete - it
    is not a "theory of everything", and does not explain everything.
    It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded
    by a new theory that models reality more accurately and over a
    wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in
    the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike the Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that, QED is rarely used
    outside of the field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity
    - i.e., it describes particles travelling near the speed of light,
    but does not handle gravity or the curvature of space-time.


    That sounds correct, at least for Dirac's form of QED. Maybe it was
    amended later.
    But that was not my point.
    My point was that QED is well known to be a better approximation of
    reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
    of it. Despite that, in practice a "worse" approximation is used far
    more often.


    OK.

    Of course, that is entirely normal for science - you regularly use
    "worse" approximations when they are easier to handle and good enough
    for the task. Thus Newtonian gravity is used more than general
    relativity, because it is accurate enough in many circumstances while
    being a lot easier to understand and calculate. The same is presumably
    true with QED and other quantum mechanics calculations (not that I know
    the details of those calculations).

    Thanks for the extra information and corrections here.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 16:42:58 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 03:20:59 -0000 (UTC), Brett wrote:

    Astronomers have only found a dozen Einstein Rings ...

    It only took a minute to prove that false. From <https://en.wikipedia.org/wiki/Einstein_ring>: “Hundreds of
    gravitational lenses are currently known”. Also: “The degree of completeness needed for an image seen through a gravitational lens to
    qualify as an Einstein ring is yet to be defined.”

    I was talking full rings as predicted by Einstein.

    Partial rings are a dime a dozen.

    Now go find the other missing billion rings Einstein predicted.

    Never mind that the Einstein rings found can be explained by stacks
    of clustered galaxies doing normal light dispersion.

    And there are other, subtler kinds of gravitational lensing. A link
    from <https://en.wikipedia.org/wiki/Gravitational_lens> mentions a
    survey of older data that discovered 1210 new lenses, doubling the
    number known.

    Not that this really has anything to do with quantum theory ...

    “You can’t handle the truth.”

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Tim Rentsch on Sun Sep 22 18:45:09 2024
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:

    Brett <ggtgp@yahoo.com> writes:

    Thomas Koenig <tkoenig@netcologne.de> wrote:

    Brett <ggtgp@yahoo.com> schrieb:

    Quantum mechanics is high IQ bullshit to make professors look
    important.

    You need quantum mechanics to describe solid-state electronics
    (or all atoms, for that matter).

    Type quantum mechanics criticism and
    variants into Google and have at it.

    Why should one do that?

    To discover the truly brilliant explanations at crackpot-conspiracy-theories.com.

    Electric Universe

    https://youtu.be/UN3rwk4oD1M?si=_kREdQBQuquCat3w

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Sun Sep 22 18:45:54 2024
    On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:

    On Sat, 21 Sep 2024 17:40:21 -0000 (UTC), Brett wrote:

    I did not criticize quantum effects, I criticized quantum mechanics
    which is dumbshit SWAG that hides the truth of what is happening behind
    bullshit. With greater understanding we can come up with classical
    explanations ...

    No we cannot. Some have hypothesized the existence of “hidden variables” which can be used to come up with classical explanations of quantum
    effects. Bell’s Theorem offered a way to test for those, and the tests (there have been several of them so far, done in several different ways)
    show that such “hidden variables” cannot exist.

    Do not exist, there remains no evidence that they cannot exist.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Tim Rentsch on Sun Sep 22 18:59:51 2024
    On Sun, 22 Sep 2024 13:10:34 +0000, Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It
    is, like Newtonian gravity and general relativity, a simplification
    that gives an accurate model of reality within certain limitations,
    and hopefully it will one day be superseded by a new theory that
    models reality more accurately and over a wider range of
    circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in the 1920s and
    further developed by many other bright minds.
    The trouble with it (according to my not too educated understanding) is
    that unlike the Schrodinger equation, approximate solutions for QED
    equations can't be calculated numerically by means of Green's function.
    Because of that QED is rarely used outside the field of high-energy
    particles and such.

    But then, I am almost 40 years out of date. Things could have changed.

    Quantum electrodynamics, aka QED, is a quantum field theory for the electromagnetic force. QED accounts for almost everything we can
    directly see in the world, not counting gravity.

    The original QED of Dirac, as expressed in the Dirac equation, has a
    problem: according to that formulation, the self-energy of the
    electron is infinite. To address this deficiency, for about 20
    years physicists applied a convenient approximation, namely, they
    treated the theoretically infinite quantity as zero. Surprisingly,
    that approximation gave results that agreed with all the experiments
    that were done up until about the mid 1940s.

    In the late 1940s, Richard Feynman, Julian Schwinger, and Shinichiro
    Tomonaga independently developed versions of QED that address the
    infinite self-energy problem. (Tomonaga's work was done somewhat
    earlier, but wasn't publicized until later because of the isolation
    of Japan during World War II.) It wasn't at all obvious that the
    QED of Feynman and the QED of Schwinger were equivalent. That they
    were equivalent was established and publicized by Freeman Dyson
    (while he was a graduate student, no less).

    The problem of the seemingly infinite self-energy of the electron
    was addressed by a technique known as renormalization. We could say
    that renormalization is only an approximation: it is known to be
    mathematically unsound, breaking down after a mere 400 or so decimal
    places. Despite that, QED gives numerical results that are correct
    up to the limits of our ability to measure. A computation done
    using QED matched an experimental result to within the tolerance
    of the measurement, which was 13 decimal places. An analogy given
    by Feynman is that this is like measuring the distance from LA to
    New York to an accuracy of the width of one human hair.

    QED has implications that are visible in the "normal" world, by
    which I mean using ordinary equipment rather than things like
    synchrotrons and particle accelerators, and that leaves atoms
    intact. Basically all of chemistry depends on QED and not on
    anything more exotic.

    There are three fundamental forces other than the electromagnetic
    force, namely, gravity, the weak force, and the strong force. The
    strong force is what holds together the protons and neutrons in the
    nucleus of an atom; it has to be stronger than the electromagnetic
    force so that protons don't just fly away from each other. The weak
    force is related to radioactive decay; it works only over very
    short distances because the carrier particle of the weak force is
    fairly massive (about 80 times the mass of a proton IIRC). For
    comparison the carrier particle of the electromagnetic force is the
    photon, which is massless; that means the electromagnetic force
    operates over arbitrarily large distances (although of course with a
    strength that diminishes as the distance gets larger).

    The strong force (sometimes called the color force) is peculiar in
    that the strong force actually *increases* with distance. That
    happens because the carrier particle of the color force has a color
    charge. For comparison photons are electrically neutral. It's
    because of this property that we never see isolated quarks.
    Basically, trying to pull two quarks apart takes so much energy that
    new quarks come into existence out of nothing.

    It does not come out of nothing, it comes out of the energy being
    applied to pull the 2 quarks apart. Once the energy gets that big,
    it (the energy) condenses into a pair of quarks which then pair up
    to prevent the quarks from being seen in isolation.

    Quarks come in three
    "colors" (having nothing to do with ordinary color), times three
    families of quarks, times two quarks in each family. The carrier
    particle of the strong force is called a gluon, and there are eight
    different kinds of gluons. (It seems like there should be nine, to
    allow each of the 3x3 possible combinations of colors, but there are
    only eight.) The corresponding theory to QED for the strong force
    is called QCD, for Quantum chromodynamics.

    A joke that I like to tell is that, because the carrier particle for the
    strong force can change a quark from one color to another, rather
    than calling it a gluon it should have been called a crayon.

    The field theories for electromagnetism, the strong force, and the
    weak force have been unified in the sense that there is a
    mathematically consistent framework that accommodates all three.
    That unification is only mathematical, by which I mean that there
    are no testable physical implications, only a kind of tautological consistency. We can see all three field theories through a common mathematical lens, but that doesn't say anything about how the three
    theories interact physically.

    The gravitational force is much weaker, by 42 orders of magnitude,
    than the other three fundamental forces. The General Theory of
    Relativity is not a quantized theory. There are ideas about how to
    unify gravity and the other three fundamental forces, but none of
    these "grand unified" theories have any hypotheses that we are able
    to test experimentally. It's unclear how gravity fits in to the
    overall picture.

    Are you not amazed that everything physicists know about the universe
    can be written in 13 equations?

    The foregoing represents my best understanding of QED and the other fundamental forces of physics. I've done a fair amount of reading
    on the subject but I wouldn't claim even to be a physicist, let
    alone an expert.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Paul A. Clayton on Sun Sep 22 21:39:30 2024
    On Sun, 22 Sep 2024 19:37:02 +0000, Paul A. Clayton wrote:

    On 9/21/24 4:45 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 6:54 AM, Scott Lurndal wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:
    https://www.marvell.com/products/cxl.html

    What about a weak coherency where a programmer has to use the
    correct
    membars to get the coherency required for their specific needs?
    Along
    the lines of UltraSPARC in RMO mode?

    In my case, I suffered through enough of these to implement a
    memory hierarchy free from the need of any MemBars yet provide
    the performance of <mostly> relaxed memory order, except when
    certain kinds of addresses are touched {MMI/O, configuration
    space, ATOMIC accesses,...} In these cases, the core becomes
    {sequentially consistent, or strongly ordered} depending on the
    touched address.

    If I understand correctly, atomic accesses (Enhanced
    Synchronization Facility) effectively use a complete memory barrier;
    software could effectively provide a memory barrier "instruction"
    by performing an otherwise pointless atomic/ESF operation.

    Are there no cases where an atomic operation is desired but
    sequential consistency is not required?

    Probably--but in the realm of ATOMICs it is FAR better to be
    a bit slower than to ever allow the illusion of atomicity to
    be lost. This criterion is significantly harder when doing
    multi-location ATOMIC stuff than single location ATOMIC stuff.

    Or is this a tradeoff of frequency/criticality and the expected overhead of the implicit
    memory barrier? (Memory barriers may be similar to context
    switches, not needing to be as expensive as they are in most implementations.)

    The R in RISC stands for Reduced. An ISA devoid of MemBar is
    more reduced than one with MemBars. Programmers are rarely
    in a position to understand all the cases where MemBar are
    needed or not needed {{excepting our own Chris M. Thomasson}}

    As far as PCIe device to device data routing, this will all be
    based on the chosen virtual channel. Same channel=in order,
    different channel=who knows.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to All on Sun Sep 22 20:53:35 2024
    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
    On Sun, 22 Sep 2024 19:37:02 +0000, Paul A. Clayton wrote:

    On 9/21/24 4:45 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 6:54 AM, Scott Lurndal wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:
    https://www.marvell.com/products/cxl.html

    What about a weak coherency where a programmer has to use the
    correct
    membars to get the coherency required for their specific needs?
    Along
    the lines of UltraSPARC in RMO mode?

    In my case, I suffered through enough of these to implement a
    memory hierarchy free from the need of any MemBars yet provide
    the performance of <mostly> relaxed memory order, except when
    certain kinds of addresses are touched {MMI/O, configuration
    space, ATOMIC accesses,...} In these cases, the core becomes
    {sequentially consistent, or strongly ordered} depending on the
    touched address.

    If I understand correctly, atomic accesses (Enhanced
    Synchronization Facility) effectively use a complete memory barrier;
    software could effectively provide a memory barrier "instruction"
    by performing an otherwise pointless atomic/ESF operation.

    Are there no cases where an atomic operation is desired but
    sequential consistency is not required?

    Probably--but in the realm of ATOMICs it is FAR better to be
    a bit slower than to ever allow the illusion of atomicity to
    be lost. This criterion is significantly harder when doing
    multi-location ATOMIC stuff than single location ATOMIC stuff.

                                            Or is this a tradeoff of
    frequency/criticality and the expected overhead of the implicit
    memory barrier? (Memory barriers may be similar to context
    switches, not needing to be as expensive as they are in most
    implementations.)

    The R in RISC stands for Reduced. An ISA devoid of MemBar is
    more reduced than one with MemBars. Programmers are rarely
    in a position to understand all the cases where MemBar are
    needed or not needed {{excepting our own Chris M. Thomasson}}


    Not quite sure what we are talking about here but I won't
    let that stop me from commenting. :)

    As far as loads and stores go, if they are atomic then
    a load will not see a value that was not from some store.

    Regarding memory barriers, that depends on the hardware
    memory model and the program logic assuming one knows
    how to do concurrent algorithms.

    Speaking of memory models, remember when x86 didn't have
    a formal memory model. They didn't put one in until
    after Itanium. Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to jseigh on Mon Sep 23 01:34:55 2024
    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:
    On Sun, 22 Sep 2024 19:37:02 +0000, Paul A. Clayton wrote:

    On 9/21/24 4:45 PM, MitchAlsup1 wrote:
    On Sat, 21 Sep 2024 20:26:13 +0000, Chris M. Thomasson wrote:

    On 9/21/2024 6:54 AM, Scott Lurndal wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:
    https://www.marvell.com/products/cxl.html

    What about a weak coherency where a programmer has to use the
    correct
    membars to get the coherency required for their specific needs?
    Along
    the lines of UltraSPARC in RMO mode?

    In my case, I suffered through enough of these to implement a
    memory hierarchy free from the need of any MemBars yet provide
    the performance of <mostly> relaxed memory order, except when
    certain kinds of addresses are touched {MMI/O, configuration
    space, ATOMIC accesses,...} In these cases, the core becomes
    {sequentially consistent, or strongly ordered} depending on the
    touched address.

    If I understand correctly, atomic accesses (Enhanced
    Synchronization Facility) effectively use a complete memory barrier;
    software could effectively provide a memory barrier "instruction"
    by performing an otherwise pointless atomic/ESF operation.

    Are there no cases where an atomic operation is desired but
    sequential consistency is not required?

    Probably--but in the realm of ATOMICs it is FAR better to be
    a bit slower than to ever allow the illusion of atomicity to
    be lost. This criterion is significantly harder when doing
    multi-location ATOMIC stuff than single location ATOMIC stuff.

                                            Or is this a tradeoff of
    frequency/criticality and the expected overhead of the implicit
    memory barrier? (Memory barriers may be similar to context
    switches, not needing to be as expensive as they are in most
    implementations.)

    The R in RISC stands for Reduced. An ISA devoid of MemBar is
    more reduced than one with MemBars. Programmers are rarely
    in a position to understand all the cases where MemBar are
    needed or not needed {{excepting our own Chris M. Thomasson}}


    Not quite sure what we are talking about here but I won't
    let that stop me from commenting. :)

    It's a free forum

    As far as loads and stores go, if they are atomic then
    a load will not see a value that was not from some store.

    When you include stores from devices into memory, we agree.
    A LD should return the last written value.

    When you include device control registers; all bets are off.

    Regarding memory barriers, that depends on the hardware
    memory model and the program logic assuming one knows
    how to do concurrent algorithms.

    In particular, we are talking about a sequence of instructions
    with the properties:: a) an external observer can see only
    the previous or new values of a concurrent data structure
    and none of the intermediate changes, and b) should the
    event fail somewhere in the middle, no-one saw any of
    the intermediate state, either.

    The event is bigger than the memory reference instruction.

    And finally, getting the MemBarIzation correct seems to
    be beyond many (if not most) programmers, leading to error-prone
    applications.

    Speaking of memory models, remember when x86 didn't have
    a formal memory model. They didn't put one in until
    after Itanium. Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Mon Sep 23 10:53:36 2024
    On Mon, 23 Sep 2024 01:34:55 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:

    Speaking of memory models, remember when x86 didn't have
    a formal memory model. They didn't put one in until
    after Itanium. Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh


    Why is the # of CPU cores on a die of particular importance?
    According to my understanding, what matters is the # of CPU cores with
    coherent access to the same memory+IO.
    For x86, 4-core (4-CPU) systems were relatively common from 1996 on.
    A few odd 8-core systems existed too, still back in the last century.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to mitchalsup@aol.com on Mon Sep 23 10:38:40 2024
    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    Are you not amazed that everything physicists know about the universe
    can be written in 13 equations.

    Randall Munroe has some comment on that... https://xkcd.com/1867/

    (Among others, he left out turbulence, where we have some
    understanding, but do not yet understand the Navier-Stokes
    equations - one of the Millennium Problems).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Thomas Koenig on Mon Sep 23 13:59:24 2024
    On Mon, 23 Sep 2024 10:38:40 -0000 (UTC)
    Thomas Koenig <tkoenig@netcologne.de> wrote:

    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    Are you not amazed that everything physicists know about the
    universe can be written in 13 equations.

    Randall Munroe has some comment on that... https://xkcd.com/1867/


    Exactly!
    Laplace's demon and the whole Reductionist approach to natural science
    sound decent (although unproven) as philosophy/program, but are very rarely
    sufficient for solving complicated problems of chemistry, biology,
    engineering, or even of many branches of physics themselves.

    (Among others, he left out turbulence, where we have some
    understanding, but do not yet understand the Navier-Stokes
    equations - one of the Millennium Problems).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Michael S on Mon Sep 23 11:44:45 2024
    Michael S <already5chosen@yahoo.com> schrieb:

    Einstein didn't like Copenhagen interpretation of Quantum Mechanics.
    He didn't question the validity of equations.

    Or, as a physics teacher said some time ago, "Shut up and do the math"
    ("Halt's Maul und rechne" in the original German).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to David Brown on Mon Sep 23 12:38:24 2024
    David Brown <david.brown@hesbynett.no> schrieb:
    On 23/09/2024 12:38, Thomas Koenig wrote:
    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    Are you not amazed that everything physicists know about the universe
    can be written in 13 equations.

    Randall Munroe has some comment on that... https://xkcd.com/1867/

    (Among others, he left out turbulence, where we have some
    understanding, but do not yet understand the Navier-Stokes
    equations - one of the Millennium Problems).

    Are you suggesting that "Gifted" was not an accurate documentary?

    Hadn't heard about that one before, but it appears not :-)

    By the way, I personally have no particular objection if the
    incompressible Navier-Stokes equations turn out to have properties
    which make them unsolvable (I almost wrote insoluble) in the
    general case. There is no such thing as an incompressible fluid
    in nature, and if should turns out that compressiblity is needed
    to make them mathematically tractable, so be it.

    It wouldn't be the first time that a simplification turns out
    badly.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Mon Sep 23 05:44:30 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it
    is not a "theory of everything", and does not explain everything.
    It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded
    by a new theory that models reality more accurately and over a
    wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in
    the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike the Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that QED is rarely used
    outside the field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity
    - i.e., it describes particles travelling near the speed of light,
    but does not handle gravity or the curvature of space-time.

    That sounds correct, at least for Dirac's form of QED. Maybe it was
    amended later.

    No one does this because the gravitational effects are way beyond
    negligible. It would be like, when doing an experiment on a
    sunny day, wanting to take into account the effects of a star ten
    quadrillion light years away. To say the effects are down in the
    noise is a vast understatement. (The distance of ten quadrillion
    light years reflects the relative strength of gravity compared to
    the electromagnetic force.)

    But that was not my point.
    My point was that QED is well known to be a better approximation of
    reality than Heisenberg's matrix mechanics or Schrodinger's equivalent
    of it. Despite that, in practice the "worse" approximation is used far
    more often.

    I would say simpler approximation, and simpler approximations are
    usually used when they suffice. If for example we want to
    calculate how much speed is needed to pass a moving car, we don't
    need to take into account how distances change due to special
    relativity. When we want to set a timer to cook something on the
    stove, we don't worry about whether we are at sea level or up in
    the mountains, even though we know that the difference in gravity
    changes how fast the timer will run (and even can be measured).
    There are situations where QED is needed to get an accurate
    numerical result, as for example if we want to know the magnetic
    moment of the electron, and accurately enough to compare against
    very sensitive experiments. But until and unless we are
    confronted with circumstances where those tiny corrections are
    necessary, which is to say that the differences have measurable
    consequences, generally it's better to just ignore them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to mitchalsup@aol.com on Mon Sep 23 05:56:27 2024
    mitchalsup@aol.com (MitchAlsup1) writes:

    On Sun, 22 Sep 2024 13:10:34 +0000, Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it is
    not a "theory of everything", and does not explain everything. It
    is, like Newtonian gravity and general relativity, a simplification
    that gives an accurate model of reality within certain limitations,
    and hopefully it will one day be superseded by a new theory that
    models reality more accurately and over a wider range of
    circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such a theory (QED) was proposed by Paul Dirac back in the 1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated understanding) is
    that unlike the Schrodinger equation, approximate solutions for QED
    equations can't be calculated numerically by means of Green's function.
    Because of that QED is rarely used outside the field of high-energy
    particles and such.

    But then, I am almost 40 years out of date. Things could have changed.

    Quantum electrodynamics, aka QED, is a quantum field theory for the
    electromagnetic force. QED accounts for almost everything we can
    directly see in the world, not counting gravity.

    The original QED of Dirac, as expressed in the Dirac equation, has a
    problem: according to that formulation, the self-energy of the
    electron is infinite. To address this deficiency, for about 20
    years physicists applied a convenient approximation, namely, they
    treated the theoretically infinite quantity as zero. Surprisingly,
    that approximation gave results that agreed with all the experiments
    that were done up until about the mid 1940s.

    In the late 1940s, Richard Feynman, Julian Schwinger, and Shinichiro
    Tomonaga independently developed versions of QED that address the
    infinite self-energy problem. (Tomonaga's work was done somewhat
    earlier, but wasn't publicized until later because of the isolation
    of Japan during World War II.) It wasn't at all obvious that the
    QED of Feynman and the QED of Schwinger were equivalent. That they
    were equivalent was established and publicized by Freeman Dyson
    (while he was a graduate student, no less).

    The problem of the seemingly infinite self-energy of the electron
    was addressed by a technique known as renormalization. We could say
    that renormalization is only an approximation: it is known to be
    mathematically unsound, breaking down after a mere 400 or so decimal
    places. Despite that, QED gives numerical results that are correct
    up to the limits of our ability to measure. A computation done
    using QED matched an experimental result to within the tolerance
    of the measurement, which was 13 decimal places. An analogy given
    by Feynman is that this is like measuring the distance from LA to
    New York to an accuracy of the width of one human hair.

    QED has implications that are visible in the "normal" world, by
    which I mean using ordinary equipment rather than things like
    synchrotrons and particle accelerators, and that leaves atoms
    intact. Basically all of chemistry depends on QED and not on
    anything more exotic.

    There are three fundamental forces other than the electromagnetic
    force, namely, gravity, the weak force, and the strong force. The
    strong force is what holds together the protons and neutrons in the
    nucleus of an atom; it has to be stronger than the electromagnetic
    force so that protons don't just fly away from each other. The weak
    force is related to radioactive decay; it works only over very
    short distances because the carrier particle of the weak force is
    fairly massive (about 80 times the mass of a proton IIRC). For
    comparison the carrier particle of the electromagnetic force is the
    photon, which is massless; that means the electromagnetic force
    operates over arbitrarily large distances (although of course with a
    strength that diminishes as the distance gets larger).

    The strong force (sometimes called the color force) is peculiar in
    that the strong force actually *increases* with distance. That
    happens because the carrier particle of the color force has a color
    charge. For comparison photons are electrically neutral. It's
    because of this property that we never see isolated quarks.
    Basically, trying to pull two quarks apart takes so much energy that
    new quarks come into existence out of nothing.

    It does not come out of nothing, it comes out of the energy being
    applied to pull the 2 quarks apart. Once the energy gets that big,
    it (the energy) condenses into a pair of quarks which then pair up
    to prevent the quarks from being seen in isolation.

    Yes, when I said that the new quarks come into existence out of
    nothing I meant nothing other than the energy being put in to
    pull the old quarks apart.

    Quarks come in three
    "colors" (having nothing to do with ordinary color), times three
    families of quarks, times two quarks in each family. The carrier
    particle of the strong force is called a gluon, and there are eight
    different kinds of gluons. (It seems like there should be nine, to
    allow each of the 3x3 possible combinations of colors, but there are
    only eight.) The corresponding theory to QED for the strong force
    is called QCD, for quantum chromodynamics.

    A joke that I like to tell is because the carrier particle for the
    strong force can change a quark from one color to another, rather
    than calling it a gluon it should have been called a crayon.

    The field theories for electromagnetism, the strong force, and the
    weak force have been unified in the sense that there is a
    mathematically consistent framework that accommodates all three.
    That unification is only mathematical, by which I mean that there
    are no testable physical implications, only a kind of tautological
    consistency. We can see all three field theories through a common
    mathematical lens, but that doesn't say anything about how the three
    theories interact physically.

    The gravitational force is much weaker, by 42 orders of magnitude,
    than the other three fundamental forces. The General Theory of
    Relativity is not a quantized theory. There are ideas about how to
    unify gravity and the other three fundamental forces, but none of
    these "grand unified" theories have any hypotheses that we are able
    to test experimentally. It's unclear how gravity fits in to the
    overall picture.
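    The "42 orders of magnitude" figure can be checked with a
    back-of-envelope calculation for two electrons, where the r^2
    dependence cancels (a sketch only; comparing two protons instead
    gives roughly 10^36, since the ratio depends on the masses chosen):

```latex
\frac{F_{\mathrm{em}}}{F_{\mathrm{grav}}}
  = \frac{e^2/(4\pi\varepsilon_0 r^2)}{G m_e^2/r^2}
  = \frac{e^2}{4\pi\varepsilon_0\, G m_e^2}
  \approx \frac{2.3\times10^{-28}}
               {(6.7\times10^{-11})\,(9.1\times10^{-31})^2}
  \approx 4\times10^{42}
```

    Because the distance cancels, the ratio is the same at any
    separation.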

    Are you not amazed that everything physicists know about the
    universe can be written in 13 equations?

    Not really, no. Most of those equations are a lot more complicated
    than 1+1=2. It's worth remembering that when Maxwell originally
    wrote down the equations for electromagnetism there were sixteen
    equations, not four. It was only after the development of vector
    notation that the sixteen equations were expressed as only four
    equations.
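    For reference, the four equations in Heaviside's vector form (SI
    units); each curl equation packs three of the original scalar
    component equations:

```latex
\nabla\cdot\mathbf{E} = \rho/\varepsilon_0, \qquad
\nabla\cdot\mathbf{B} = 0, \qquad
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad
\nabla\times\mathbf{B} = \mu_0\mathbf{J}
  + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}
```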

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Thomas Koenig on Mon Sep 23 14:24:15 2024
    On 23/09/2024 12:38, Thomas Koenig wrote:
    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    Are you not amazed that everything physicists know about the universe
    can be written in 13 equations?

    Randall Munroe has some comment on that... https://xkcd.com/1867/

    (Among others, he left out turbulence, where we have some
    understanding, but do not yet understand the Navier-Stokes
    equations - one of the Millennium Problems).

    Are you suggesting that "Gifted" was not an accurate documentary?

    (Thanks to Terje for recommending that film, by the way.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Paul A. Clayton on Mon Sep 23 15:06:50 2024
    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have access
    to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to All on Mon Sep 23 11:33:01 2024
    On 9/22/24 21:34, MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:



    As far as loads and stores go, if they are atomic then
    a load will not see a value that was not from some store.

    When you include stores from devices into memory, we agree.
    A LD should return the last written value.

    When you include device control registers; all bets are off.

    Regarding memory barriers, that depends on the hardware
    memory model and the program logic assuming one knows
    how to do concurrent algorithms.

    In particular, we are talking about a sequence of instructions
    with the properties:: a) an external observer can see only
    the previous or new values of a concurrent data structure
    and none of the intermediate changes, and b) should the
    event fail somewhere in the middle, no-one saw any of
    the intermediate state, either.

    The event is bigger than the memory reference instruction.

    And finally, getting the MemBarIzation correct seems to
    be beyond many, if not most, programmers, leading to
    error-prone applications.


    I have a pretty good understanding of atomicity and memory
    models so all good there.

    Are you familiar with RCU and how it's used in the linux
    kernel? RCU locked reads of data structures are not
    atomic. Works fine if you know how to do lock-free
    data structures.

    RCU is a form of qsbr. I did a qsbr implementation in
    the 80's at IBM and we got a patent for it, 4,809,168.
    A lot of other stuff as well, so like I said, all good.

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Thomas Koenig on Mon Sep 23 19:08:55 2024
    Thomas Koenig wrote:
    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    Are you not amazed that everything physicists know about the universe
    can be written in 13 equations?

    Randall Munroe has some comment on that... https://xkcd.com/1867/

    (Among others, he left out turbulence, where we have some
    understanding, but do not yet understand the Navier-Stokes
    equations - one of the Millennium Problems).


    Spoiler alert:

    I watched "Gifted" on Netflix recently, seems it was solved by a lady
    who then prompty suicided instead of publishing, just to get revenge on
    her mother who had pressured her all her life?

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Tim Rentsch on Mon Sep 23 19:12:19 2024
    Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it
    is not a "theory of everything", and does not explain everything.
    It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded
    by a new theory that models reality more accurately and over a
    wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such theory (QED) was proposed by Paul Dirac back in
    1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that QED is rarely used
    outside of the field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity
    - i.e., it describes particles travelling near the speed of light,
    but does not handle gravity or the curvature of space-time.

    That sounds correct, at least for Dirac's form of QED. Maybe it was
    amended later.

    No one does this because the gravitational effects are way beyond
    negligible. It would be like, when doing an experiment on a
    sunny day, wanting to take into account the effects of a star ten
    quadrillion light years away. To say the effects are down in the
    noise is a vast understatement. (The distance of ten quadrillion
    light years reflects the relative strength of gravity compared to
    the electromagnetic force.)

    But that was not my point.
    My point was that the QED is well known to be better approximation of
    reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
    of it. Despite that in practice a "worse" approximation is used far
    more often.

    I would say simpler approximation, and simpler approximations are
    usually used then they suffice. If for example we want to
    calculate how much speed is needed to pass a moving car, we don't
    need to take into account how distances change due to special
    relativity. When we want to set a timer to cook something on the
    stove, we don't worry about whether we are at sea level or up in
    the mountains, even though we know that the difference in gravity
    changes how fast the timer will run (and even can be measured).

    No, no, no!

    The change in pressure directly impacts the cooking temperature, and
    therefore also the time needed.

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Terje Mathisen on Mon Sep 23 10:43:15 2024
    Terje Mathisen <terje.mathisen@tmsw.no> writes:

    Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it is not a "theory of everything", and does not explain everything.
    It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded
    by a new theory that models reality more accurately and over a
    wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such theory (QED) was proposed by Paul Dirac back in
    1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that QED is rarely used
    outside of the field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity
    - i.e., it describes particles travelling near the speed of light,
    but does not handle gravity or the curvature of space-time.

    That sounds correct, at least for Dirac's form of QED. Maybe it was
    amended later.

    No one does this because the gravitational effects are way beyond
    negligible. It would be like, when doing an experiment on a
    sunny day, wanting to take into account the effects of a star ten
    quadrillion light years away. To say the effects are down in the
    noise is a vast understatement. (The distance of ten quadrillion
    light years reflects the relative strength of gravity compared to
    the electromagnetic force.)

    But that was not my point.
    My point was that the QED is well known to be better approximation of
    reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
    of it. Despite that in practice a "worse" approximation is used far
    more often.

    I would say simpler approximation, and simpler approximations are
    usually used when they suffice. If for example we want to
    calculate how much speed is needed to pass a moving car, we don't
    need to take into account how distances change due to special
    relativity. When we want to set a timer to cook something on the
    stove, we don't worry about whether we are at sea level or up in
    the mountains, even though we know that the difference in gravity
    changes how fast the timer will run (and even can be measured).

    No, no, no!

    The change in pressure directly impacts the cooking temperature, and therefore also the time needed.

    I concede your point. My point was only about how the change
    in gravity affects the speed at which the timer runs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Tim Rentsch on Mon Sep 23 21:13:30 2024
    On 2024-09-23 20:43, Tim Rentsch wrote:
    Terje Mathisen <terje.mathisen@tmsw.no> writes:

    Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it is not a "theory of everything", and does not explain everything. It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded by a new theory that models reality more accurately and over a
    wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.

    Actually, such theory (QED) was proposed by Paul Dirac back in
    1920s and further developed by many other bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that QED is rarely used
    outside of the field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity - i.e., it describes particles travelling near the speed of light,
    but does not handle gravity or the curvature of space-time.

    That sounds correct, at least for Dirac's form of QED. Maybe it was
    amended later.

    No one does this because the gravitational effects are way beyond
    negligible. It would be like, when doing an experiment on a
    sunny day, wanting to take into account the effects of a star ten
    quadrillion light years away. To say the effects are down in the
    noise is a vast understatement. (The distance of ten quadrillion
    light years reflects the relative strength of gravity compared to
    the electromagnetic force.)

    But that was not my point.
    My point was that the QED is well known to be better approximation of
    reality than Heisenberg's Matrix Mechanics or Schrodinger's equivalent
    of it. Despite that in practice a "worse" approximation is used far
    more often.

    I would say simpler approximation, and simpler approximations are
    usually used when they suffice. If for example we want to
    calculate how much speed is needed to pass a moving car, we don't
    need to take into account how distances change due to special
    relativity. When we want to set a timer to cook something on the
    stove, we don't worry about whether we are at sea level or up in
    the mountains, even though we know that the difference in gravity
    changes how fast the timer will run (and even can be measured).

    No, no, no!

    The change in pressure directly impacts the cooking temperature, and
    therefore also the time needed.

    I concede your point. My point was only about how the change
    in gravity affects the speed at which the timer runs.


    If the timer and the stove are at the same altitude, as seems natural,
    you never have to consider gravity in timing the cooking - any gravity
    effect on the timer rate is exactly the same as the effect on the
    heating rate of the water in the pot and the cooking rate of its
    contents. If it takes 10 minutes by the timer at sea level, it will take
    10 minutes by the timer in any other gravity, all other things (such as
    the air pressure) being the same.

    However, if you compare two timers (or stoves) at different altitudes,
    that is where you can see the effect of gravity on time -- and it is of
    course negligible for practical cookery on Earth.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to To which Lawrence D'Oliveiro preemp on Mon Sep 23 16:30:49 2024
    Chris M. Thomasson [2024-09-20 14:54:36] wrote:
    I had this crazy idea of putting cpus right on the ram. So, if you add
    more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    To which Lawrence D'Oliveiro preemptively replied:
    Yes, but that’s a lot more expensive.

    🙂

    More seriously: the idea of putting the CPU closer to the RAM and
    bundling them has been with us since last century.
    Until recently it was limited to situations like embedded systems,
    because the lack of flexibility was a major downer. But it's now used
    in many more circumstances, typically by stacking N dies of RAM on top
    of a die of CPU, because with current RAM sizes and CPUs it can give a respectable amount of RAM, with the advantage of a comfortable
    memory bandwidth.

    I'm not following GPUs very closely, but I'd be surprised if there
    aren't any GPUs which have RAM in the same chip (via stacked dies).
    But maybe these are still "lower tier" GPUs, which don't pay much
    attention to working efficiently when N such chips are put into a single
    system (IOW you may still be unable to just "add more RAM(w/GPU)").


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Michael S on Mon Sep 23 20:59:42 2024
    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 01:34:55 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:

    Speaking of memory models, remember when x86 didn't have
    a formal memory model. They didn't put one in until
    after itanium. Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh


    Why # of CPU cores on die is of particular importance?

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague. Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.

    That is "we did not see the problem until it hit us in
    the face." Once it did, we understood what we had to do:
    presto memory model.

    Also note: this was just after the execution pipeline went
    Great Big Out of Order, and thus made the lack of order
    problems much more visible to applications. {Pentium Pro}

    According to my understanding, what matters is # of CPU cores with
    coherent access to the same memory+IO.
    For x86, 4 cores (CPUs) were relatively common since 1996. There
    existed few odd 8-core systems too, still back in the last century.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Scott Lurndal on Mon Sep 23 21:10:00 2024
    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:

    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have access to the sheer quantity of RAM that is available to the CPU. And
    motherboard-based CPU RAM is upgradeable, as well, whereas addon cards
    tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.

    If the size works for your application--great !
    If the latency does not work for you--less great.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Tue Sep 24 00:34:03 2024
    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:

    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU.
    And motherboard-based CPU RAM is upgradeable, as well, whereas
    addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.


    Where did you find this figure?
    I have read both product brief and press release and didn't see any
    latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get real datasheet one would have to sign NDA.

    Somehow I don't see how anything running over PCIe-like link can be
    as fast as you suggest.

    If the size works for your application--great !
    If the latency does not work for you--less great.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Michael S on Mon Sep 23 21:51:31 2024
    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:

    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU.
    And motherboard-based CPU RAM is upgradeable, as well, whereas
    addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.

    Where did you find this figure?
    I have read both product brief and press release and didn't see any
    latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get real datasheet one would have to sign NDA.

    Somehow I don't see how anything running over PCIe-like link can be
    as fast as you suggest.

    The round-trip latency in PCIe 6 can be circa 2ns. Add DRAM access
    time to that and it's competitive with local memory.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Mon Sep 23 21:58:20 2024
    On Mon, 23 Sep 2024 21:35:53 +0000, Chris M. Thomasson wrote:

    On 9/23/2024 1:59 PM, MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 01:34:55 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:

    Speaking of memory models, remember when x86 didn't have
    a formal memory model.  They didn't put one in until
    after itanium.  Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh


    Why # of CPU cores on die is of particular importance?

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague. Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.

    That is "we did not see the problem until it hit us in
    the face." Once it did, we understood what we had to do:
    presto memory model.

    Also note: this was just after the execution pipeline went
    Great Big Out of Order, and thus made the lack of order
    problems much more visible to applications. {Pentium Pro}

    Iirc, been a while, I think there was a problem on one of the Pentiums,
    might be the pro, where it had an issue with releasing a spinlock with a normal store. I am most likely misremembering, but it is sparking some strange memories. Way back on c.p.t, Alex Terekhov (hope I did not
    butcher the spelling of his name), anyway, wrote about it, I think...
    Way back. early 2000's I think.

    Many ATOMIC sequences start or end without any note on the memory
    reference that it bounds an ATOMIC event. CAS has this problem
    on the value to ultimately be compared (the start), T&S has this
    problem on ST that unlocks the lock (the end). It is like using
    indentation as the only means of signaling block structure in
    your language of choice.

    Both are bad practice in making HW that can perform these things
    efficiently. But notice that LL-SC does not have this problem.
    Neither does ESM.

    According to my understanding, what matters is # of CPU cores with
    coherent access to the same memory+IO.
    For x86, 4 cores (CPUs) were relatively common since 1996. There
    existed few odd 8-core systems too, still back in the last century.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Michael S on Mon Sep 23 22:05:27 2024
    On Mon, 23 Sep 2024 21:34:03 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:

    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never have
    access to the sheer quantity of RAM that is available to the CPU.
    And motherboard-based CPU RAM is upgradeable, as well, whereas
    addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.


    Where did you find this figure?

    I calculated it based on how messages get passed up and down PCIe
    linkages and that that plug in memory has to be enough wire distance
    to need a PCIe switch between CPU die and Plug. Then add on typical
    memory controller and DRAM controller, and that is what you have.

    I have read both product brief and press release and didn't see any
    latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get real datasheet one would have to sign NDA.

    Somehow I don't see how anything running over PCIe-like link can be
    as fast as you suggest.

    I was not suggesting it is fast, I was suggesting it is slow.

    If the size works for your application--great !
    If the latency does not work for you--less great.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Mon Sep 23 22:32:36 2024
    On Mon, 23 Sep 2024 22:19:37 +0000, Chris M. Thomasson wrote:

    On 9/23/2024 2:58 PM, MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 21:35:53 +0000, Chris M. Thomasson wrote:
    Iirc, been a while, I think there was a problem on one of the Pentiums,
    might be the pro, where it had an issue with releasing a spinlock with a normal store. I am most likely misremembering, but it is sparking some strange memories. Way back on c.p.t, Alex Terekhov (hope I did not
    strange memories. Way back on c.p.t, Alex Terekhov (hope I did not
    butcher the spelling of his name), anyway, wrote about it, I think...
    Way back. early 2000's I think.

    Many ATOMIC sequences start or end without any note on the memory
    reference that it bounds an ATOMIC event. CAS has this problem
    on the value to ultimately be compared (the start), T&S has this
    problem on ST that unlocks the lock (the end). It is like using
    indentation as the only means of signaling block structure in
    your language of choice.

    _Strong_ CAS in C++ terms, ala cmpxchg, will only fail if the comparands
    are different.

    And used improperly is subject to ABA problem.

    However, if the LD which obtained the value to be compared was
    KNOWN to be part of an ATOMIC sequence ending with CAS, one can
    eliminate the ABA problem for all <reasonable> CAS sequences.

    This can be implemented with LL/SC for sure.

    With the addition above.

    Scott
    mentioned something about a bus lock after a certain amount of
    failures... (side note) Weak CAS can fail even if the comparands are identical to each other, a la LL/SC. This reminds me of LL/SC. The ABA
    problem can be worked around and/or eliminated without using LL/SC. I
    remember reading papers about LL/SC getting around ABA, but then read
    about how they can have their own can of worms. Pessimistic vs
    optimistic sync... Wait/ Lock / Obstruction free things... ;^)

    You are the expert on that here.

    Fwiw, getting rid of the StoreLoad membar in algorithms like SMR is
    great. There is a way to do this in existing systems. So, no hardware
    changes required, and makes the system run fast.

    I got rid of all MemBars and still have a fairly relaxed memory model.

    Think of allowing a rogue thread to pound a CAS with random data wrt the comparand, trying to get it to fail... Of course this can be modifying a reservation granule wrt LL/SC side of things, right? Pessimistic (CAS)
    vs Optimistic (LL/SC)?

    Or methodological (ESM).

    Both are bad practice in making HW that can perform these things
    efficiently. But notice that LL-SC does not have this problem.
    Neither does ESM.

    According to my understanding, what matters is # of CPU cores with
    coherent access to the same memory+IO.
    For x86, 4 cores (CPUs) were relatively common since 1996. There
    existed a few odd 8-core systems too, still back in the last century.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Niklas Holsti on Mon Sep 23 15:53:32 2024
    Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

    On 2024-09-23 20:43, Tim Rentsch wrote:

    Terje Mathisen <terje.mathisen@tmsw.no> writes:

    Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Sun, 22 Sep 2024 12:58:36 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 22/09/2024 10:48, Michael S wrote:

    On Sat, 21 Sep 2024 20:30:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    Actual physicists know that quantum mechanics is not complete - it is not a "theory of everything", and does not explain everything. It is, like Newtonian gravity and general relativity, a
    simplification that gives an accurate model of reality within
    certain limitations, and hopefully it will one day be superseded by a new theory that models reality more accurately and over a wider range of circumstances. That is how science works.

    As things stand today, no such better theory has been developed.
    Actually, such theory (QED) was proposed by Paul Dirac back in
    1920s and further developed by many others bright minds.
    The trouble with it (according to my not too educated
    understanding) is that unlike Schrodinger equation, approximate
    solutions for QED equations can't be calculated numerically by
    means of Green's function. Because of that QED is rarely used
    outside of field of high-energy particles and such.

    But then, I am almost 40 years out of date. Things could have
    changed.

    I don't claim to be an expert on this field in any way, and could
    easily be muddled on the details.

    I thought QED only covered special relativity, not general relativity - i.e., it describes particles travelling near the speed of light, but does not handle gravity or the curvature of space-time.

    That sounds correct, at least for Dirac's form of QED. Maybe it was amended later.

    No one does this because the gravitational effects are way beyond
    negligible. It would be like, when doing an experiment on a
    sunny day, wanting to take into account the effects of a star ten
    quadrillion light years away. To say the effects are down in the
    noise is a vast understatement. (The distance of ten quadrillion
    light years reflects the relative strength of gravity compared to
    the electromagnetic force.)

    But that was not my point.
    My point was that the QED is well known to be better approximation of >>>>> reality than Heisenberg's Matrix Mechanic or Schrodinger's equivalent >>>>> of it. Despite that in practice a "worse" approximation is used far >>>>> more often.

    I would say simpler approximation, and simpler approximations are
    usually used when they suffice. If for example we want to
    calculate how much speed is needed to pass a moving car, we don't
    need to take into account how distances change due to special
    relativity. When we want to set a timer to cook something on the
    stove, we don't worry about whether we are at sea level or up in
    the mountains, even though we know that the difference in gravity
    changes how fast the timer will run (and even can be measured).

    No, no, no!

    The change in pressure directly impacts the cooking temperature, and
    therefore also the time needed.

    I concede your point. My point was only about how the change
    in gravity affects the speed at which the timer runs.

    If the timer and the stove are at the same altitude, as seems natural,
    you never have to consider gravity in timing the cooking - any gravity
    effect on the timer rate is exactly the same as the effect on the
    heating rate of the water in the pot and the cooking rate of its
    contents. If it takes 10 minutes by the timer at sea level, it will
    take 10 minutes by the timer in any other gravity, all other things
    (such as the air pressure) being the same.

    However, if you compare two timers (or stoves) at different altitudes,
    that is where you can see the effect of gravity on time -- and it is
    of course negligible for practical cookery on Earth.

    Yes, the example was poorly chosen. I hope that doesn't detract from
    the more important point that in many cases or maybe even most cases
    there are factors that are ignorable because the impacts of those
    factors are many orders of magnitude less than those of the primary
    factors. A lot of what doing science entails is knowing which
    approximations are appropriate under the circumstances in question.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Tue Sep 24 00:26:40 2024
    On Mon, 23 Sep 2024 22:46:47 +0000, Chris M. Thomasson wrote:

    On 9/23/2024 3:32 PM, MitchAlsup1 wrote:


    I got rid of all MemBars and still have a fairly relaxed memory model.

    That is interesting to me! It's sort-of "out of the box" so to speak?
    How can a programmer take advantage of the relaxed aspect of your model?

    Touch a DRAM location and one gets causal order.
    Touch a MM I/O location and one gets sequential consistency
    Touch a config space location and one gets strongly ordering
    Touch ROM and one gets unordered access.

    You see, the memory <ordering> model is not tied to a CPU state, but
    to what LD and ST instructions touch.

    Think of allowing a rogue thread to pound a CAS with random data wrt the comparand, trying to get it to fail... Of course this can be modifying a reservation granule wrt LL/SC side of things, right? Pessimistic (CAS)
    vs Optimistic (LL/SC)?

    Or methodological (ESM).

    Still, how does live lock get eluded in your system? Think along the
    lines of a "rogue" thread causing havoc? Banging on cache lines etc...
    ;^o

    ESM switches modes automagically. It starts out as Optimistic, and when
    that fails, it switches to methodological {note by the time the failure
    has been detected, the core is now sequentially consistent and will be performing the 'event' in program order and sequentially consistent}.
    Should the methodological event fail, the core will ask the ATOMIC
    request granter using all of the addresses. IF this succeeds, then the
    core is allowed to NAK interference so that it will succeed. Upon
    success, those addresses are removed from the granter.

    So, for example:: a timer goes off and every core tries to pick a
    thread off the wait queue. The first access will likely fail, the
    second access will fail but the granter counted the amount of
    interference. So the third request is to a thread nobody else
    is attempting to dequeue. First 2 requests were to the element
    at the front of the queue, the third request is to a thread indexed
    down the queue by the interference number.

    Thus BigO( n^3 ) becomes BigO( 3 ). In practice BigO(n^3) becomes
    BigO(ln2(n)) due to random arrival and departure.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Sep 24 00:48:04 2024
    On Sun, 22 Sep 2024 12:18:01 +0300, Michael S wrote:

    GPUs have a lower clock speed because this way they can operate at lower voltage and do more work per Joule.
    High-end GPUs are power-bound beasts.

    Trying to have it both ways? “Lower voltage” and “power-bound” at the same
    time?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Stefan Monnier on Tue Sep 24 00:46:40 2024
    On Mon, 23 Sep 2024 16:30:49 -0400, Stefan Monnier wrote:

    Chris M. Thomasson [2024-09-20 14:54:36] wrote:

    I had this crazy idea of putting cpus right on the ram. So, if you add
    more memory to your system you automatically get more cpu's... Think
    NUMA for a moment... ;^)

    To which Lawrence D'Oliveiro preemptively replied:
    Yes, but that’s a lot more expensive.

    No I didn’t. My reply said “not so crazy”.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Sep 24 00:50:15 2024
    On Sun, 22 Sep 2024 18:45:54 +0000, MitchAlsup1 wrote:

    On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:

    Bell’s Theorem offered a way to test for those, and the tests
    (there have been several of them so far, done in several different
    ways) show that such “hidden variables” cannot exist.

    Do not exist, there remains no evidence that they cannot exist.

    The large collection of tests of Bell’s theorem is that evidence.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Sep 24 00:45:22 2024
    On Sun, 22 Sep 2024 10:34:16 +0300, Michael S wrote:

    The difference between MPP and cluster is not well-defined.
    The difference between ccNUMA and MPP-or-cluster is crystal clear.

    If memory on other nodes were made directly addressable via hardware that implemented something like a message-passing bus, suddenly the difference
    is not so clear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Sep 24 00:56:47 2024
    On Mon, 23 Sep 2024 13:59:24 +0300, Michael S wrote:

    Laplace's demon and the whole Reductionist approach to natural science
    sounds decent (although unproven) as philosophy/program, but very rarely sufficient for solving complicated problems of chemistry, biology, engineering or even of many branches of physics themselves.

    But ... isn’t trying to explain all the limitations of science in terms of one factor (reductionism) itself ... reductionist?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 00:54:54 2024
    On Tue, 24 Sep 2024 0:50:15 +0000, Lawrence D'Oliveiro wrote:

    On Sun, 22 Sep 2024 18:45:54 +0000, MitchAlsup1 wrote:

    On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:

    Bell’s Theorem offered a way to test for those, and the tests
    (there have been several of them so far, done in several different
    ways) show that such “hidden variables” cannot exist.

    Do not exist, there remains no evidence that they cannot exist.

    The large collection of tests of Bell’s theorem is that evidence.

    There is still that ~1:peta chance of some phenomena we have not
    yet measured to upend the inequality.

    Yes, it is reliably proven (5-6 sigma) but there is still a
    very tiny chance.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Thomas Koenig on Tue Sep 24 01:00:45 2024
    On Mon, 23 Sep 2024 10:38:40 -0000 (UTC), Thomas Koenig wrote:

    (Among others, he left out turbulence, where we have some understanding,
    but do not yet understand the Navier-Stokes equations - one of the
    Millennium Problems).

    I thought the problem with Navier-Stokes is that it assumes infinitesimally-small particles of fluid, whereas we know that real fluids
    are made up of atoms and molecules.

    Remember how Max Planck solved the black-body problem? He knew all about
    the previous approach of assuming that matter was made up of little oscillators, and then trying to work out the limiting behaviour as the
    size of those oscillators approached zero -- that didn’t work. So his breakthrough was in assuming that the oscillators did *not* approach zero
    in size, but had some minimum nonzero size. Et voilà ... he got a curve
    that actually matched the known behaviour of radiating bodies. And laid
    one of the foundation stones of quantum theory in the process.

    Seems a similar thing could be done with Navier-Stokes ... ?
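
The Planck step described above can be written out; this is the textbook form, included for reference:

```latex
% Rayleigh-Jeans: oscillator energies treated as continuous
B_\nu(T) = \frac{2\nu^2 k T}{c^2}
\quad \text{(diverges as } \nu \to \infty\text{: the ultraviolet catastrophe)}

% Planck: energies quantized, E_n = n h \nu, giving a mean oscillator energy
\bar{E} = \frac{h\nu}{e^{h\nu/kT} - 1}

% and the spectral radiance that matches experiment:
B_\nu(T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu/kT} - 1}
% For h\nu \ll kT this reduces to the Rayleigh-Jeans form.
```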

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Sep 24 01:01:38 2024
    On Sun, 22 Sep 2024 14:26:17 +0300, Michael S wrote:

    I thought QED only covered special relativity, not general relativity -
    i.e., it describes particles travelling near the speed of light, but
    does not handle gravity or the curvature of space-time.

    That sounds correct, at least for Dirac's form of QED. May be it was
    amended later.

    Nothing in quantum theory is able to handle gravity. Nothing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Tue Sep 24 01:06:39 2024
    On Sun, 22 Sep 2024 18:45:09 -0000 (UTC), Brett wrote:

    Electric Universe

    Electricity without magnetism? Not even taking account of Maxwell’s unification of the electric and magnetic fields?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Sep 24 01:05:40 2024
    On Sun, 22 Sep 2024 11:56:33 +0300, Michael S wrote:

    Einstein didn't like Copenhagen interpretation of Quantum Mechanics.

    He didn’t like quantum mechanics full stop. “God does not play dice”, he famously said. And kept trying to come up with an alternative, though he
    never succeeded. And remember, his 1905 paper on the photoelectric effect
    (for which he won the Nobel Prize) was one of the foundation stones of
    this horrible new theory.

    “Interpretations” of quantum mechanics had nothing to do with this: the probabilistic behaviour that Einstein objected to is inherent in the
    equations themselves: wave function in → probability out.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Chris M. Thomasson on Tue Sep 24 03:03:16 2024
    On Tue, 24 Sep 2024 2:48:43 +0000, Chris M. Thomasson wrote:

    On 9/23/2024 5:26 PM, MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 22:46:47 +0000, Chris M. Thomasson wrote:

    On 9/23/2024 3:32 PM, MitchAlsup1 wrote:


    I got rid of all MemBars and still have a fairly relaxed memory model.

    That is interesting to me! It's sort-of "out of the box" so to speak?
    How can a programmer take advantage of the relaxed aspect of your model? >>>
    Touch a DRAM location and one gets causal order.
    Touch a MM I/O location and one gets sequential consistency
    Touch a config space location and one gets strongly ordering
    Touch ROM and one gets unordered access.

    You see, the memory <ordering> model is not tied to a CPU state, but
    to what LD and ST instructions touch.
    [...]

    What is the granularity of the "touch"? A L2 cache line?

    Yes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 03:17:48 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 18:45:09 -0000 (UTC), Brett wrote:

    Electric Universe

    Electricity without magnetism? Not even taking account of Maxwell’s unification of the electric and magnetic fields?


    The standard theory is that there is no electricity in the universe due to
    the emptiness gaps. But light creates charge, and charge attraction, and discharge creates magnetic fields, and all of this far better explains
    galactic strands than mere gravity.

    Go to the youtube Electric Universe and watch the most popular videos and select play lists that interest you.

    Most of this has been known since the 1920’s, but there be dragons down
    this path so down the pointless gravity black hole science has gone.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to All on Tue Sep 24 07:50:36 2024
    MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 01:34:55 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:

    Speaking of memory models, remember when x86 didn't have
    a formal memory model.  They didn't put one in until
    after itanium.  Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh


    Why is the # of CPU cores on a die of particular importance?

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague. Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.

    When I started writing my first multi-threaded programs, I insisted on
    getting a workstation with at least two sockets/cpus:

    Somebody wiser than me had written something like "You cannot
    write/test/debug multithreaded programs without the ability for multiple threads to actually run at the same time."

    Pretty obvious really, but the quote was sufficient to get my boss to
    sign off on a much more expensive PC model. :-)

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 08:14:33 2024
    On 24/09/2024 03:05, Lawrence D'Oliveiro wrote:
    On Sun, 22 Sep 2024 11:56:33 +0300, Michael S wrote:

    Einstein didn't like Copenhagen interpretation of Quantum Mechanics.

    He didn’t like quantum mechanics full stop. “God does not play dice”, he
    famously said. And kept trying to come up with an alternative, though he never succeeded. And remember, his 1905 paper on the photoelectric effect (for which he won the Nobel Prize) was one of the foundation stones of
    this horrible new theory.

    “Interpretations” of quantum mechanics had nothing to do with this: the probabilistic behaviour that Einstein objected to is inherent in the equations themselves: wave function in → probability out.

    A lot of physicists don't like quantum mechanics, and its implications,
    and very few (if any) of them can claim to have a good intuitive feel
    for it. It's a huge departure from the classical clockwork universe models.

    But they still accept that it has turned out to be an excellent model
    for a lot of physics, standing strong test after test, and covering a
    wide range of situations to an astounding accuracy. The same applies to Einstein - perhaps he thought it was a horrible theory, but he also
    thought it was correct (in the sense of being an accurate model of reality).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 08:17:30 2024
    On 24/09/2024 03:00, Lawrence D'Oliveiro wrote:
    On Mon, 23 Sep 2024 10:38:40 -0000 (UTC), Thomas Koenig wrote:

    (Among others, he left out turbulence, where we have some understanding,
    but do not yet understand the Navier-Stokes equations - one of the
    Millennium Problems).

    I thought the problem with Navier-Stokes is that it assumes infinitesimally-small particles of fluid, whereas we know that real fluids are made up of atoms and molecules.

    Remember how Max Planck solved the black-body problem? He knew all about
    the previous approach of assuming that matter was made up of little oscillators, and then trying to work out the limiting behaviour as the
    size of those oscillators approached zero -- that didn’t work. So his breakthrough was in assuming that the oscillators did *not* approach zero
    in size, but had some minimum nonzero size. Et voilà ... he got a curve
    that actually matched the known behaviour of radiating bodies. And laid
    one of the foundation stones of quantum theory in the process.

    Seems a similar thing could be done with Navier-Stokes ... ?

    Without knowing the history of work on Navier-Stokes, I am /reasonably/ confident that mathematicians have thought about this and tried it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 10:56:20 2024
    On 2024-09-24 4:05, Lawrence D'Oliveiro wrote:
    On Sun, 22 Sep 2024 11:56:33 +0300, Michael S wrote:

    Einstein didn't like Copenhagen interpretation of Quantum Mechanics.

    He didn’t like quantum mechanics full stop. “God does not play dice”, he
    famously said. And kept trying to come up with an alternative, though he never succeeded. And remember, his 1905 paper on the photoelectric effect (for which he won the Nobel Prize) was one of the foundation stones of
    this horrible new theory.

    “Interpretations” of quantum mechanics had nothing to do with this: the probabilistic behaviour that Einstein objected to is inherent in the equations themselves: wave function in → probability out.


    The "probability" interpretation of the wave function is basically the Copenhagen interpretation of quantum. It is removed in the many-worlds interpretation, where every possibility happens in some world, and only
    the "weight" of that world is affected by the wave function. It comes
    back, though, because the weight of a world direcly determines how
    likely we are to be in that world and so explains why repeating the same experiment gives the distribution of results expected from the
    probabilistic interpretation of the wave function. (As I imperfectly
    understand it.)

    It would be nice to hear Einstein's opinion about many-worlds...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Tue Sep 24 11:44:01 2024
    On Mon, 23 Sep 2024 21:51:31 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:
    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that
    way.

    Why? It comes down to RAM. Those addon processors will never
    have access to the sheer quantity of RAM that is available to
    the CPU. And motherboard-based CPU RAM is upgradeable, as
    well, whereas addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.

    Where did you find this figure?
    I have read both product brief and press release and didn't see any
    latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get real datasheet one would have to sign NDA.

    Somehow I don't see how anything running over PCIe-like link can be
    as fast as you suggest.

    The round-trip latency in PCIe 6 can be circa 2ns. Add DRAM access
    time to that and it's competitive with local memory.


    Either you don't know what you are talking about or you have a very
    special and practically useless definition of round-trip
    latency.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Tue Sep 24 11:51:13 2024
    On Mon, 23 Sep 2024 22:05:27 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 21:34:03 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:

    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that way.

    Why? It comes down to RAM. Those addon processors will never
    have access to the sheer quantity of RAM that is available to
    the CPU. And motherboard-based CPU RAM is upgradeable, as well,
    whereas addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.


    Where did you find this figure?

    I calculated it based on how messages get passed up and down PCIe
    linkages, and that plug-in memory sits at enough wire distance
    to need a PCIe switch between CPU die and Plug. Then add on typical
    memory controller and DRAM controller, and that is what you have.

    I have read both product brief and press release and didn't see any
    latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get real datasheet one would have to sign
    NDA.

    Somehow I don't see how anything running over PCIe-like link can be
    as fast as you suggest.

    I was not suggesting it is fast, I was suggesting it is slow.


    And I think that 3x, i.e. ~150 ns latency of cache miss measured at CPU
    core, is more than fast enough to be competitive. But I don't think that
    it is technically possible.

    If the size works for your application--great !
    If the latency does not work for you--less great.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Terje Mathisen on Tue Sep 24 11:56:16 2024
    On Tue, 24 Sep 2024 07:50:36 +0200
    Terje Mathisen <terje.mathisen@tmsw.no> wrote:

    MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 01:34:55 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:

    Speaking of memory models, remember when x86 didn't have
    a formal memory model.  They didn't put one in until
    after itanium.  Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh


    Why is the # of CPU cores on a die of particular importance?

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague. Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.

    When I started writing my first multi-threaded programs, I insisted
    on getting a workstation with at least two sockets/cpus:

    Somebody wiser than me had written something like "You cannot write/test/debug multithreaded programs without the ability for
    multiple threads to actually run at the same time."

    Pretty obvious really, but the quote was sufficient to get my boss to
    sign off on a much more expensive PC model. :-)

    Terje


    There are a few situations where the difference between SC and weaker
    memory ordering models does not manifest itself unless you have at
    least 3 cores. Hopefully, it only matters for people that are doing
    insane stuff.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 12:14:39 2024
    On Tue, 24 Sep 2024 01:00:45 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 23 Sep 2024 10:38:40 -0000 (UTC), Thomas Koenig wrote:

    (Among others, he left out turbulence, where we have some
    understanding, but do not yet understand the Navier-Stokes
    equations - one of the Millennium Problems).

    I thought the problem with Navier-Stokes is that it assumes infinitesimally-small particles of fluid, whereas we know that real
    fluids are made up of atoms and molecules.


    No, the problem (supposing that there is The Problem) is an assumption
    of incompressibility. For real liquids this assumption is very close to
    truth, but despite the closeness it sometimes leads to very wrong
    solutions.
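
For reference, the incompressible form being pointed at, with the constraint that is only approximately true for real fluids:

```latex
% Incompressibility constraint (the assumption in question):
\nabla \cdot \mathbf{u} = 0

% Incompressible Navier-Stokes momentum equation:
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
          + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f}

% The compressible continuity equation that the constraint replaces:
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
```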

    Remember how Max Planck solved the black-body problem? He knew all
    about the previous approach of assuming that matter was made up of
    little oscillators, and then trying to work out the limiting
    behaviour as the size of those oscillators approached zero -- that
    didn’t work. So his breakthrough was in assuming that the oscillators
    did *not* approach zero in size, but had some minimum nonzero size.
    Et voilà ... he got a curve that actually matched the known behaviour
    of radiating bodies. And laid one of the foundation stones of quantum
    theory in the process.

    Seems a similar thing could be done with Navier-Stokes ... ?

    Equations of aerodynamics that do not suffer from This Problem are
    as well-known as Navier-Stokes itself. But they are more difficult to
    solve.

    It's pretty similar to non-relativistic QM vs QED.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to mitchalsup@aol.com on Tue Sep 24 12:49:44 2024
    On Mon, 23 Sep 2024 20:59:42 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    On Mon, 23 Sep 2024 01:34:55 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 0:53:35 +0000, jseigh wrote:

    On 9/22/2024 5:39 PM, MitchAlsup1 wrote:

    Speaking of memory models, remember when x86 didn't have
    a formal memory model. They didn't put one in until
    after itanium. Before that it was a sort of processor
    consistency type 2 which was a real impedance mismatch
    with what most concurrent software used as a memory model.

    When only 1 x86 would fit on a die, it really did not matter
    much. I was at AMD when they were designing their memory
    model.

    Joe Seigh


    Why is the # of CPU cores on a die of particular importance?

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague.
    Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.


    Even if 99% is correct, there were still 6-7 figures worth of
    dual-processor x86 systems sold each year and starting from 1997 at
    least tens of thousands of quads.
    Absence of ordering definitions should have been a problem for a lot of
    people. But somehow, it was not.

    That is "we did not see the problem until it hit us in
    the face." Once it did, we understood what we had to do:
    presto memory model.

    Also note: this was just after the execution pipeline went
    Great Big Out of Order, and thus made the lack of order
    problems much more visible to applications. {Pentium Pro}


    And that happened almost 10 years before Intel published their first
    official x86 Memory Ordering paper. As for AMD, I think they kept it
    unpublished even longer.

    According to my understanding, what matters is # of CPU cores with
    coherent access to the same memory+IO.
    For x86, 4 cores (CPUs) were relatively common since 1996. There
    existed a few odd 8-core systems too, still back in the last century.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Tue Sep 24 12:37:58 2024
    On Tue, 24 Sep 2024 00:45:22 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 22 Sep 2024 10:34:16 +0300, Michael S wrote:

    The difference between MPP and cluster is not well-defined.
    The difference between ccNUMA and MPP-or-cluster is crystal clear.

    If memory on other nodes were made directly addressable via hardware
    that implemented something like a message-passing bus, suddenly the difference is not so clear.

    For as long as communication is not cache-coherent the difference vs
    ccNUMA is still crystal clear.

    I vaguely remember that in the late 1990s Cray made a computer of this sort,
    a predecessor of the later far more successful CRAY-XT3/4. The latter had to
    give up on direct addressability in order to scale to a vastly higher
    number of nodes and of total memory.
    May be, Cray T3E ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Terje Mathisen on Tue Sep 24 14:14:34 2024
    Terje Mathisen <terje.mathisen@tmsw.no> writes:
    MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague. Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.

    When I started writing my first multi-threaded programs, I insisted on
    getting a workstation with at least two sockets/cpus:

    Somebody wiser than me had written something like "You cannot
    write/test/debug multithreaded programs without the ability for multiple
    threads to actually run at the same time."

    Pretty obvious really, but the quote was sufficient to get my boss to
    sign off on a much more expensive PC model. :-)


    My experience has been that there are three operating modes that need
    to be tested. One processor, two processors and more than two
    processors.
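    A small harness along those lines might enumerate the three configurations and
    pin the test process accordingly. This is only a sketch: `affinity_configs` is a
    made-up helper name, and `os.sched_getaffinity`/`os.sched_setaffinity` are
    Linux-specific, hence the fallback:

```python
import os

def affinity_configs():
    """Build the three CPU sets described above: one processor,
    two processors, and more than two (all available).
    os.sched_getaffinity is Linux-only, so fall back to cpu_count."""
    if hasattr(os, "sched_getaffinity"):
        cpus = sorted(os.sched_getaffinity(0))
    else:
        cpus = list(range(os.cpu_count() or 1))
    configs = [set(cpus[:1])]           # one processor
    if len(cpus) >= 2:
        configs.append(set(cpus[:2]))   # two processors
    if len(cpus) > 2:
        configs.append(set(cpus))       # more than two
    return configs

# A test runner would then call os.sched_setaffinity(0, cfg) (where
# supported) and re-run the whole concurrency suite for each cfg.
```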

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Tue Sep 24 17:50:12 2024
    On Tue, 24 Sep 2024 14:18:54 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 23 Sep 2024 21:51:31 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines
    and GPUs."

    That particular Wheel of Reincarnation will never turn that
    way.

    Why? It comes down to RAM. Those addon processors will never
    have access to the sheer quantity of RAM that is available
    to the CPU. And motherboard-based CPU RAM is upgradeable, as
    well, whereas addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity
    of RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that
    supports up to 4TB of DRAM with 16 high-end ARM64 V series
    cores.

    At somewhere near 3× the latency to DRAM.

    Where did you find this figure?
    I have read both product brief and press release and didn't see
    any latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get a real datasheet one would have to sign
    an NDA.

    Somehow I don't see how anything running over PCIe-like link can
    be as fast as you suggest.

    The round-trip latency in PCIe 6 can be circa 2ns. Add dram
    access time to that and it's competitive with local memory.


    Either you don't know what you are talking about or you have very
    special and particularly practically useless definition of round-trip
    latency.

    RC transmitter -> EP receiver/transmitter -> RC receiver at the
    MAC level. Add in any logic delays to get to the memory controller,
    and as noted, dram access time, will add to that. Making the
    round trip delay comparable to a modern multi-socket numa
    system.


    The main components of additional delay are not related to signal
    propagation on PCBs/connectors/cable.

    The biggest delay is reception/decoding of response (==data) packet in
    CPU.

    The 2nd biggest delay is reception/decoding of request
    (==address/control) packet in extender chip.

    The 3rd biggest delay is accumulation of read data in extender chip.
    For various reasons, you can't start transmission of response packet
    until all data is available.
    However, for typical 64B responses the 3rd component is much smaller
    than 1st and 2nd.

    There are also few other delays, each of them small individually, but
    they add up.

    Your HW guys would certainly give you more precise and more detailed
    list of delays at various stages. Except, of course, of 1st and the
    biggest one on my list, which is outside of their control.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Michael S on Tue Sep 24 14:18:54 2024
    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 23 Sep 2024 21:51:31 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Mon, 23 Sep 2024 21:10:00 +0000
    mitchalsup@aol.com (MitchAlsup1) wrote:

    On Mon, 23 Sep 2024 15:06:50 +0000, Scott Lurndal wrote:
    "Paul A. Clayton" <paaronclayton@gmail.com> writes:
    On 9/17/24 8:44 PM, Lawrence D'Oliveiro wrote:
    On Tue, 17 Sep 2024 23:45:50 +0000, MitchAlsup1 wrote:

    "the CPUs are simply I/O managers to the Inference Engines and
    GPUs."

    That particular Wheel of Reincarnation will never turn that
    way.

    Why? It comes down to RAM. Those addon processors will never
    have access to the sheer quantity of RAM that is available to
    the CPU. And motherboard-based CPU RAM is upgradeable, as
    well, whereas addon cards tend not to offer this option.

    My guess would be that CPU RAM will decrease in upgradability.

    LDO's statement "will never have access to the sheer quantity of
    RAM that is available to the CPU" is flat out wrong.

    Marvell already offers a CXL add-on processor card that supports
    up to 4TB of DRAM with 16 high-end ARM64 V series cores.

    At somewhere near 3× the latency to DRAM.

    Where did you find this figure?
    I have read both product brief and press release and didn't see any
    latency numbers mentioned, not even an order of magnitude.

    I suppose, in order to get a real datasheet one would have to sign an NDA.

    Somehow I don't see how anything running over PCIe-like link can be
    as fast as you suggest.

    The round-trip latency in PCIe 6 can be circa 2ns. Add dram access
    time to that and it's competitive with local memory.


    Either you don't know what you are talking about or you have very
    special and particularly practically useless definition of round-trip
    latency.

    RC transmitter -> EP receiver/transmitter -> RC receiver at the
    MAC level. Add in any logic delays to get to the memory controller,
    and as noted, dram access time, will add to that. Making the
    round trip delay comparable to a modern multi-socket numa
    system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Tue Sep 24 11:45:14 2024
    Even if 99% is correct, there were still 6-7 figures worth of
    dual-processor x86 systems sold each year and starting from 1997 at
    least tens of thousands of quads.
    Absence of ordering definitions should have been a problem for a lot of people. But somehow, it was not.

    My guess:

    - Luck due to the relatively limited amount of reordering taking place
    in the CPUs of the time.
    - Limited software support, encouraging very coarse communication
    patterns (like parallel `make` or processes communicating via pipes)?
    - The remaining cases were sufficiently rare that the victims blamed it
    on themselves for pushing the boundaries (and added hacks to work
    around the problems instead of complaining to their CPU manufacturer
    about the insane semantics imposed by their hardware)?


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to David Brown on Tue Sep 24 17:28:07 2024
    David Brown <david.brown@hesbynett.no> schrieb:
    On 24/09/2024 03:00, Lawrence D'Oliveiro wrote:
    On Mon, 23 Sep 2024 10:38:40 -0000 (UTC), Thomas Koenig wrote:

    (Among others, he left out turbulence, where we have some understanding,
    but do not yet understand the Navier-Stokes equations - one of the
    Millennium Problems).

    I thought the problem with Navier-Stokes is that it assumes
    infinitesimally-small particles of fluid, whereas we know that real
    fluids are made up of atoms and molecules.

    Remember how Max Planck solved the black-body problem? He knew all about
    the previous approach of assuming that matter was made up of little
    oscillators, and then trying to work out the limiting behaviour as the
    size of those oscillators approached zero -- that didn’t work. So his
    breakthrough was in assuming that the oscillators did *not* approach zero
    in size, but had some minimum nonzero size. Et voilà ... he got a curve
    that actually matched the known behaviour of radiating bodies. And laid
    one of the foundation stones of quantum theory in the process.

    Seems a similar thing could be done with Navier-Stokes ... ?

    Without knowing the history of work on Navier-Stokes, I am /reasonably/ confident that mathematicians have thought about this and tried it.

    Quite a few decades ago, when I started my PhD, the group met
    at a pub. Also present was one former PhD student, who had his
    doctorate but, at the time, no job.

    When asked what he was doing, he said he currently was a private
    scholar. A colleague asked for details, and he said that he
    was working on the general solution of the Navier-Stokes equation,
    and that he had tried separation of variables, but it didn't work.
    We took this as "shut up, I don't want to hear any more questions".

    Some time later, I tried to explain that to a medical doctor.
    I told her that it was like claiming he was searching for the cure for
    cancer, and that he had tried a saline solution, but it didn't work.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Thomas Koenig on Tue Sep 24 18:46:48 2024
    On Tue, 24 Sep 2024 17:28:07 +0000, Thomas Koenig wrote:


    Quite a few decades ago, when I started my PhD, the group met
    at a pub. Also present was one former PhD student, who had his
    doctorate but, at the time, no job.

    When asked what he was doing, he said he currently was a privat
    scholar. A colleague asked for details, and he said that he
    was working on the general solution of the Navier-Stokes equation,
    and that he had tried separation of variables, but it didn't work.
    We took this as "shut up, I don't want to hear any more questions".

    Some time later, I tried to explain that to a medical doctor.
    I told her that it was like claiming he was searching for the cure for cancer, and that he had tried a saline solution, but it didn't work.

    Perhaps the saline was not salty enough.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to terje.mathisen@tmsw.no on Tue Sep 24 15:45:22 2024
    On Tue, 24 Sep 2024 07:50:36 +0200, Terje Mathisen
    <terje.mathisen@tmsw.no> wrote:

    MitchAlsup1 wrote:
    On Mon, 23 Sep 2024 7:53:36 +0000, Michael S wrote:

    Prior to multi-CPUs on a die; 99% of all x86 systems were
    mono-CPU systems, and the necessity of having a well known
    memory model was more vague. Although there were servers
    with multiple CPUs in them they represented "an afternoon
    in the FAB" compared to the PC oriented x86s.

    When I started writing my first multi-threaded programs, I insisted on
    getting a workstation with at least two sockets/cpus:

    Somebody wiser than me had written something like "You cannot
    write/test/debug multithreaded programs without the ability for multiple
    threads to actually run at the same time."

    Pretty obvious really, but the quote was sufficient to get my boss to
    sign off on a much more expensive PC model. :-)

    Terje

    Many moons ago, there existed people who actually understood the
    difference between "multi-programming" and "multi-processing".

    Such people today are few and far between.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Chris M. Thomasson on Tue Sep 24 20:21:53 2024
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 9/23/2024 8:17 PM, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 18:45:09 -0000 (UTC), Brett wrote:

    Electric Universe

    Electricity without magnetism? Not even taking account of Maxwell’s
    unification of the electric and magnetic fields?


    The standard theory is that there is no electricity in the universe due to the emptiness gaps. But light creates charge, and charge attraction, and
    discharge creates magnetic fields, and all of this far better explains
    galactic strands than mere gravity.

    Go to the youtube Electric Universe and watch the most popular videos and
    select play lists that interest you.

    Most of this has been known since the 1920’s, but there be dragons down
    this path so down the pointless gravity black hole science has gone.


    Shit man... Are we in a black hole that resided in our parent universe? Humm... Always wondered about that type of shit.


    I am on the black holes don’t exist list, at smaller than at the center of
    a galaxy.

    You hear physicists talk of microscopic black holes, but the force that
    keeps atoms apart is so much more powerful than gravity that such talk is
    just fools playing with math they don’t understand.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Brett on Tue Sep 24 20:33:58 2024
    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:


    I am on the black holes don’t exist list, at smaller than at the center
    of a galaxy.

    You hear physicists talk of microscopic black holes, but the force that
    keeps atoms apart is so much more powerful than gravity that such talk
    is just fools playing with math they don’t understand.

    Neutron stars are collapsed forms of matter where gravity is stronger
    than the electro-magnetic fields holding the electrons away from each
    other and the protons.

    It is possible that there is some kind of (as yet non-understood) force
    that prevents a black hole's complete collapse into a point--erasing all
    visible aspects other than mass, charge, and spin.

    It is just that our understanding of physics does not include such a
    force.

    Finally note: An electron can be modeled in QCD as if it were a black
    hole with the mass, charge, and spin of an electron. ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Tue Sep 24 23:43:32 2024
    On Mon, 23 Sep 2024 19:44:49 -0700, Chris M. Thomasson wrote:

    On 9/23/2024 5:45 PM, Lawrence D'Oliveiro wrote:

    On Sun, 22 Sep 2024 10:34:16 +0300, Michael S wrote:

    The difference between MPP and cluster is not well-defined.
    The difference between ccNUMA and MPP-or-cluster is crystal clear.

    If memory on other nodes were made directly addressable via hardware
    that implemented something like a message-passing bus, suddenly the
    difference is not so clear.

    I guess the idea is to try to design things that try to minimize that
    down to a bare minimum, if at all... Keep things as local as possible?

    Hence NUMA.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Tue Sep 24 23:46:35 2024
    On Tue, 24 Sep 2024 13:19:32 -0500, BGB wrote:

    On 9/20/2024 4:33 PM, Lawrence D'Oliveiro wrote:

    On Fri, 20 Sep 2024 09:55:53 +0200, Terje Mathisen wrote:

    Lawrence D'Oliveiro wrote:

    The way I understood to do flicker-free drawing was with just two
    buffers -- “double buffering”. And rather than swap the buffer
    contents, you just swapped the pointers to them.

    If you cannot swap the buffers with pointer updates ...

    Surely all the good hardware is/was designed that way, with special
    registers pointing to “current buffer” and “back buffer”, with the display coming from “current buffer” while writes typically go to “back
    buffer”. Why would you do it otherwise?

    VRAM isn't free, and the older graphics hardware (before the era of 3D acceleration and the like) tended to only have a single framebuffer
    (except, ironically, for text modes).

    But flicker-free updating requires at least two. And even the original 1984/1985 Macintosh could manage two, and the Amiga had its clever
    “copper” which split those buffers down to individual scan lines. With the base-register scheme, you don’t need more than two buffers. Yet I
    frequently hear of “triple-buffering” going on, which seems unnecessary to me.
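    The pointer-swap scheme described above can be sketched as follows (illustrative
    only; `DoubleBuffer` is a made-up name, and real hardware would latch the
    front-buffer base register at vertical blank rather than swap instantly):

```python
class DoubleBuffer:
    """Flicker-free drawing with two buffers: draw into the back
    buffer, then flip by swapping references -- no pixel copying."""

    def __init__(self, size):
        self.front = bytearray(size)  # what the display scans out
        self.back = bytearray(size)   # where new drawing goes

    def draw(self, offset, data):
        # All drawing goes to the (invisible) back buffer.
        self.back[offset:offset + len(data)] = data

    def flip(self):
        # Equivalent to swapping the hardware's base-register pointers.
        self.front, self.back = self.back, self.front
```

    Triple buffering adds a third buffer so the renderer can start the next frame
    without waiting for the pending flip to take effect at vertical blank, which is
    the case Lawrence finds unnecessary under the two-buffer base-register scheme.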

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Sep 24 23:49:36 2024
    On Tue, 24 Sep 2024 00:54:54 +0000, MitchAlsup1 wrote:

    On Tue, 24 Sep 2024 0:50:15 +0000, Lawrence D'Oliveiro wrote:

    On Sun, 22 Sep 2024 18:45:54 +0000, MitchAlsup1 wrote:

    On Sun, 22 Sep 2024 7:23:59 +0000, Lawrence D'Oliveiro wrote:

    Bell’s Theorem offered a way to test for those, and the tests (there >>>> have been several of them so far, done in several different ways)
    show that such “hidden variables” cannot exist.

    Do not exist, there remains no evidence that they cannot exist.

    The large collection of tests of Bell’s theorem is that evidence.

    There is still that ~1:peta chance of some phenomena we have not yet
    measured to upend the inequality.

    Sure. Every time somebody does one test, other scientists look at that and
    say “but what if...”. So someone else thinks up a new test that approaches things from a different direction. That’s how science is done.

    Let’s just say that so many tests have been done of Bell’s Theorem now, that nobody is likely to build a scientific career out of continuing to question it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Thomas Koenig on Tue Sep 24 23:51:45 2024
    On Tue, 24 Sep 2024 17:28:07 -0000 (UTC), Thomas Koenig wrote:

    I told here that it was like claiming he was searching for the cure for cancer, and that he had tried a saline solution, but it didn't work.

    Fun fact: lots of things kill cancer cells. Apparently even distilled
    water will work -- in a Petri dish in the lab.

    Trying to apply that in a human body ... not so much.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Tue Sep 24 23:54:28 2024
    On Tue, 24 Sep 2024 03:17:48 -0000 (UTC), Brett wrote:

    But light creates charge, and charge attraction,
    and discharge creates magnetic fields ...

    Fun fact: light is already a travelling electromagnetic field -- about as
    pure as you can get. So you can leave out the mumbo-jumbo to get from “light” to “field”.

    Further fun fact: it has long been known that, if matter were held
    together purely by electromagnetic fields, it would be unstable and
    collapse.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Tue Sep 24 23:55:50 2024
    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force that
    keeps atoms apart is so much more powerful than gravity that such talk
    is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity) couldn’t exist either. But they do.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Wed Sep 25 00:00:54 2024
    On Tue, 24 Sep 2024 20:33:58 +0000, MitchAlsup1 wrote:

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons away
    from each other and the protons.

    Gravity here is stronger even than the Pauli exclusion principle, which
    says that two matter particles (e.g. electrons, protons, neutrons) cannot occupy the same space at the same time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Tue Sep 24 23:59:26 2024
    On Tue, 24 Sep 2024 14:15:38 -0700, Chris M. Thomasson wrote:

    You hear physicists talk of microscopic black holes,

    What about the naked ones? ;^)

    The term is actually “naked singularity”. The (scientifically troublesome) thing about a black hole is that it has a singularity at its heart, where
    the solution to Einstein’s equations of General Relativity involve a
    division by zero. Some say that’s not so bad, because this singularity is effectively hidden from the rest of the Universe by the surrounding swirl
    of high energy and matter that makes up the mass of the black hole itself.

    The open question is: can a singularity exist “naked” -- that is, not hidden from interacting with the rest of the Universe by a surrounding
    black hole? Because that would take the scariness to a whole new level.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Wed Sep 25 00:08:08 2024
    On Tue, 24 Sep 2024 17:08:25 -0500, BGB wrote:

    Or if quantum computing can give answers "better" than classical
    computers using non-brute-force algorithms.

    This is why it’s worth distinguishing between “digital” and “analog” computers. Analog computers were quite popular in the earlier part of the
    20th century, back when digital computers were still quite slow. They
    could come up with quick answers to physical-simulation problems, albeit
    to limited accuracy.

    For example, the Apollo Saturn-V rocket was controlled by a hybrid digital/analog computer system created by IBM, housed in the ring that
    coupled the third stage to the upper part with the CSM and LEM. The
    digital part computed where the rocket was supposed to go, but it could
    only solve the equations about once a second or so; it fed these numbers
    to the analog part, which could adjust the direction and thrust of the
    engines much more quickly than that, to keep the whole vehicle functioning properly and on course from millisecond to millisecond.

    But anyway, the current “quantum” computers have shown some success
    solving “analog” style problems, but even the simplest “digital” type operation, namely something involving factorizing integers, has so far completely eluded them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Wed Sep 25 10:12:31 2024
    On 24/09/2024 22:33, MitchAlsup1 wrote:
    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:


    I am on the black holes don’t exist list, at smaller than at the center
    of a galaxy.

    You hear physicists talk of microscopic black holes, but the force that
    keeps atoms apart is so much more powerful than gravity that such talk
    is just fools playing with math they don’t understand.

    Neutron stars are collapsed forms of matter where gravity is stronger
    than the electro-magnetic fields holding the electrons away from each
    other and the protons.

    It is possible that there is some kind of (as yet non-understood) force
    that prevents a black hole's complete collapse into a point--erasing all visible aspects other than mass, charge, and spin.

    It is just that our understanding of physics does not include such a
    force.


    There are quite a lot of other forces and effects fighting against the collapse. These are often viewed as "pressure". Electron pressure
    prevents neutron stars from forming until you have at least 1.4 solar
    masses. Beyond that, there is the pressure from the strong force
    holding neutrons together, effects from the uncertainty principle, the
    Pauli exclusion principle, and various quark effects.

    These are not beyond our current understanding, but some of these
    degenerate matter states have not been observed. We have /some/ data
    about the inside of neutron stars, but it is limited. AFAIK we have not
    seen anything that is definitely more collapsed than a neutron star, but definitely not a black hole - quark stars, strange stars, and the like
    are hypothetical for now.

    But it is entirely plausible that there are other limits to compression
    that we don't as yet know about and that would prevent a singularity
    even for huge masses.

    However, AFAIK (and my knowledge here is amateur) it does not make an observable difference if there is such a force or pressure preventing singularities. Once you have reached the point where the object is
    smaller than its Schwarzschild radius (ignoring angular momentum for
    simplicity) then no information can escape from the object to the
    outside universe. From outside the event horizon, you see the mass,
    charge and angular momentum - nothing else, regardless of what things
    are like inside the event horizon.

    Finally note: An electron can be modeled in QCD as if it were a black
    hole with the mass, charge, and spin of an electron. ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Wed Sep 25 10:43:20 2024
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to mitchalsup@aol.com on Sat Sep 28 07:48:53 2024
    mitchalsup@aol.com (MitchAlsup1) writes:

    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:

    I am on the black holes don't exist list, at smaller than at the
    center of a galaxy.

    You hear physicists talk of microscopic black holes, but the
    force that keeps atoms apart is so much more powerful than
    gravity that such talk is just fools playing with math they don't
    understand.

    Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
    George Volkoff, Subrahmanyan Chandrasekhar, Richard Feynman,
    Stephen Hawking (and many others whose names I don't know)?

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons
    away from each other and the protons.

    Not exactly. Electrons and protons attract each other. The gravity
    is strong enough to get an electron and a proton close enough to each
    other so they can combine and form a neutron. My model for this
    recombination is as follows: a proton is two up quarks and a down
    quark; take an up quark (charge +2/3) and an electron (charge -1),
    and maybe a neutrino, turn them all into energy and then turn the
    energy back into a down quark (charge -1/3); so we have taken two up
    quarks and a down quark (a proton) and an electron, and gotten out two
    down quarks and an up quark (a neutron). Keep doing that until all
    the electrons and protons are used up. Result: a neutron star,
    consisting almost entirely of neutrons, and almost no protons or
    electrons.
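    Tim's recombination model corresponds to the standard electron-capture
    (inverse beta decay) reaction; one small correction to the sketch above is
    that in the usual formulation the neutrino is emitted rather than consumed:

```latex
p + e^{-} \;\longrightarrow\; n + \nu_{e}
\qquad \text{(at quark level: } u + e^{-} \to d + \nu_{e} \text{)}
```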

    It is possible that there is some kind of (as yet non-understood)
    force that prevents a black hole's complete collapse into a
    point--erasing all visible aspects other than mass, charge, and
    spin.

    It is just that our understanding of physics does not include such
    a force.

    Somewhere inside the event horizon of a black hole there is a
    discontinuity in space-time. It isn't clear what distance even
    means in the vicinity of such a discontinuity.

    The "size" of a black hole might be identified as the radius of
    the event horizon, since there is no way of looking inside the
    event horizon. The radius of a black hole's event horizon is an
    increasing function of the mass of the black hole. (My memory
    tells me that the radius is a linear function of the mass, but
    that should not be taken as reliable.) Either the Earth or the
    Sun would have (if it were a black hole) an event horizon radius
    of more than one millimeter, but that statement too is a product
    of my not-always-reliable memory.


    Finally note: An electron can be modeled in QCD as if it were a
    black hole with the mass, charge, and spin of an electron. ...

    Electrons are color neutral. As far as QCD is concerned (since
    QCD is only about the strong force, i.e. the color field),
    electrons are invisible. (Disclaimer: to the best of my
    understanding; I am not a physicist.)

  • From David Brown@21:1/5 to Tim Rentsch on Sat Sep 28 17:41:29 2024
    On 28/09/2024 16:48, Tim Rentsch wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:


    <snip stuff that seems fine to me>


    The "size" of a black hole might be identified as the radius of
    the event horizon, since there is no way of looking inside the
    event horizon. The radius of a black hole's event horizon is an
    increasing function of the mass of the black hole. (My memory
    tells me that the radius is a linear function of the mass, but
    that should not be taken as reliable.)

    Your memory is correct (at least when angular momentum and charge are
    ignored) - the Schwarzschild radius is 2Gm/c².

    Either the Earth or the
    Sun would have (if it were a black hole) an event horizon radius
    of more than one millimeter, but that statement too is a product
    of my not-always-reliable memory.


    For the Earth, it's about 9mm and for the Sun, around 3 km. So yes,
    both over 1 mm.
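    Those figures follow directly from the formula. A quick sanity check
    in Python (constants rounded to four significant figures):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2, for a non-rotating, uncharged body."""
    return 2 * G * mass_kg / c**2

M_earth = 5.972e24   # kg
M_sun = 1.989e30     # kg

print(f"Earth: {schwarzschild_radius(M_earth) * 1000:.1f} mm")  # ~8.9 mm
print(f"Sun:   {schwarzschild_radius(M_sun) / 1000:.2f} km")    # ~2.95 km
```

    Since the radius is linear in the mass, the Sun's value is simply the
    Earth's scaled by the ratio of the masses.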


    Finally note: An electron can be modeled in QCD as if it were a
    black hole with the mass, charge, and spin of an electron. ...

    Electrons are color neutral. As far as QCD is concerned (since
    QCD is only about the strong force, i.e. the color field),
    electrons are invisible. (Disclaimer: to the best of my
    understanding; I am not a physicist.)

    Perhaps he meant QED, rather than QCD.

  • From Niklas Holsti@21:1/5 to David Brown on Sat Sep 28 19:01:34 2024
    On 2024-09-28 18:46, David Brown wrote:
    On 27/09/2024 20:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly
    apart,
    so we know the theory is wrong.

    They are not flying apart - so we know /you/ are wrong.


    I think you made a logical error there, David, a rare one for you. As I
    understand Brett, he is saying that "the theory" that pulsars are
    neutron stars cannot be right, because some pulsars spin so rapidly that
    a neutron star spinning like that would fly apart.

    If there really were such pulsars -- pulsars spinning faster than a
    neutron star can spin -- then I think Brett's argument would hold: those pulsars could not be neutron stars. Or the error could be in our
    understanding of how fast neutron stars can spin.

  • From David Brown@21:1/5 to Brett on Sat Sep 28 17:46:41 2024
    On 27/09/2024 20:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.

    They are not flying apart - so we know /you/ are wrong.


    “A pulsar (from pulsating radio source)[1][2] is a highly magnetized rotating neutron star that emits beams of electromagnetic radiation out of its magnetic poles.[3] This radiation can be observed only when a beam of emission is pointing toward Earth (similar to the way a lighthouse can be seen only when the light is pointed in the direction of an observer), and
    is responsible for the pulsed appearance of emission. “

    This sounds like an electric motor,

    So basically your argument is that pulsars spin, electric motors spin, therefore pulsars are electric motors? Where did you learn about logic,
    from watching Monty Python films?

    and if you think a galactic
    civilization would not turn such into a gas station, I have news for you.
    You can take advantage of the huge gravity to feed it oil barrel
    projectiles full of liquid hydrogen to feed off of the explosions for more power generation and keep the generator alive. The resulting spectrum would be artificial, but we lack the theory to understand that.

    A Dyson sphere compared to a pulsar looks like a comparison of a desk fan
    to a modern wind mill.

    Ah, so the electric motor is powered by oil barrels filled with liquid hydrogen? That makes /much/ more sense.

  • From EricP@21:1/5 to Tim Rentsch on Sat Sep 28 13:16:08 2024
    Tim Rentsch wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:

    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:

    I am on the black holes don't exist list, at smaller than at the
    center of a galaxy.

    You hear physicists talk of microscopic black holes, but the
    force that keeps atoms apart is so much more powerful than
    gravity that such talk is just fools playing with math they don't
    understand.

    Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
    George Volkoff, Subrahmanyan Chandresekhar, Richard Feynman,
    Stephen Hawking (and many others whose names I don't know)?

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons
    away from each other and the protons.

    Not exactly. Electrons and protons attract each other. The gravity
    is strong enough to get an electron and a proton close enough to each
    other so they can combine and form a neutron. My model for this recombination is as follows: a proton is two up quarks and a down
    quark; take an up quark (charge +2/3) and an electron (charge -1),
    and maybe a neutrino, turn them all into energy and then turn the
    energy back into a down quark (charge -1/3); so we have taken two up
    quarks and a down quark (a proton) and an electron, and gotten out two
    down quarks and an up quark (a neutron). Keep doing that until all
    the electrons and protons are used up. Result: a neutron star,
    consisting almost entirely of neutrons, and almost no protons or
    electrons.

    Neutron stars exist in the region where the mass is high enough to overcome https://en.wikipedia.org/wiki/Electron_degeneracy_pressure
    that prevents the collapse of a white dwarf, and below the mass of https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_limit where
    https://en.wikipedia.org/wiki/Degenerate_matter#Neutron_degeneracy
    pressure prevents its collapse to a black hole.

    What I see from a quick search, the maximum spin rate for a neutron star
    is thought to be 760 Hz, above which magnetic coupling to surrounding
    matter and/or relativistic effects radiate away angular momentum.

    https://en.wikipedia.org/wiki/Neutron_star#Spin_down

    The previously referenced PSR J1748−2446ad spins at 716 Hz and at that
    spin rate the surface of the neutron star is moving at approx 25% of
    the speed of light.
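    That surface-speed figure is easy to reproduce. A short Python check,
    assuming an equatorial radius of 16 km (an upper estimate for this
    pulsar; the true radius is not known precisely):

```python
import math

c = 2.998e8   # speed of light, m/s
f = 716       # spin frequency of PSR J1748-2446ad, Hz
R = 16e3      # assumed equatorial radius, m (upper estimate)

# Equatorial surface speed of a rigidly rotating sphere
v = 2 * math.pi * R * f

# ~24% of c for this assumed radius; a smaller radius gives a smaller fraction
print(f"v = {v:.3g} m/s = {v / c:.0%} of c")
```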

    Also the center of the neutron star will not have the angular momentum
    of the outer edge but will have the high gravity.
    So just a guess but spinning doesn't look like it should stop
    black hole collapse if the mass gets too high.
    But that math is above my pay grade.

  • From Tim Rentsch@21:1/5 to Lawrence D'Oliveiro on Sat Sep 28 10:36:28 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Tue, 24 Sep 2024 20:33:58 +0000, MitchAlsup1 wrote:

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons away
    from each other and the protons.

    Gravity here is stronger even than the Pauli exclusion principle, which
    says that two matter particles (e.g. electrons, protons, neutrons) cannot occupy the same space at the same time.

    This statement of the Pauli exclusion principle is wrong. An
    example is the two electrons of a helium atom, which occupy the same
    "space" (the lowest orbital shell of the atom) as long as the helium
    atom persists.

    The Pauli exclusion principle doesn't apply to some matter particles
    (meaning particles that have non-zero rest mass). An example is
    carrier particles of the weak force, W (and I believe there are
    several kinds of W but I haven't bothered to verify that).

    Also, in some situations the Pauli exclusion principle doesn't apply
    to the kinds of particles it normally does apply to. An example is
    a pair of electrons in a Cooper pair, which since the electrons are
    paired they act as a boson rather than a fermion and thus are not
    subject to the Pauli exclusion principle (which is that two fermions
    cannot occupy the same quantum state).

    Note by the way that the Pauli exclusion principle is not an
    independent principle but simply a consequence of the laws of
    quantum mechanics as they apply to fermions.

    Finally, the original statement about gravity in a neutron star
    being stronger than the Pauli exclusion principle is wrong. It is
    precisely because of Pauli exclusion operating between the neutrons
    that make up the neutron star that stops it from collapsing into a
    black hole. The "pressure" of Pauli exclusion is not infinite,
    which means there is an upper bound on how much mass a neutron star
    can have before it collapses into a black hole. This bound, called
    the Tolman-Oppenheimer-Volkoff limit, is somewhere between 2 and 3
    solar masses.

    (Disclaimer: all the above is to the best of my understanding; I
    am not a physicist.)

  • From Brett@21:1/5 to Niklas Holsti on Sat Sep 28 18:05:04 2024
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-28 5:47, Brett wrote:
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-27 21:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.


    Which pulsars are spinning too fast? Reference please!


    https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.


    Spinning at 42,960 revolutions per minute.


    The article says it is "the fastest-spinning pulsar known", but does not
    say that it is spinning faster than neutron-star theories allow, so it
    does not support your claim.

    Took seconds for Google to answer.

    It is the wrong answer, at least for your claim.


    Our sun spinning at 42,960 revolutions per minute would exceed the speed
    of light at its surface, let alone hold together.

    The theory is wrong.

    A better theory is leakage from a Dyson generator swarm, but no scientist
    could ever say such a thing in public.

  • From Brett@21:1/5 to EricP on Sat Sep 28 19:08:23 2024
    EricP <ThatWouldBeTelling@thevillage.com> wrote:
    Tim Rentsch wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:

    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:

    I am on the black holes don't exist list, at smaller than at the
    center of a galaxy.

    You hear physicists talk of microscopic black holes, but the
    force that keeps atoms apart is so much more powerful than
    gravity that such talk is just fools playing with math they don't
    understand.

    Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
    George Volkoff, Subrahmanyan Chandresekhar, Richard Feynman,
    Stephen Hawking (and many others whose names I don't know)?

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons
    away from each other and the protons.

    Not exactly. Electrons and protons attract each other. The gravity
    is strong enough to get an electron and a proton close enough to each
    other so they can combine and form a neutron. My model for this
    recombination is as follows: a proton is two up quarks and a down
    quark; take an up quark (charge +2/3) and an electron (charge -1),
    and maybe a neutrino, turn them all into energy and then turn the
    energy back into a down quark (charge -1/3); so we have taken two up
    quarks and a down quark (a proton) and an electron, and gotten out two
    down quarks and an up quark (a neutron). Keep doing that until all
    the electrons and protons are used up. Result: a neutron star,
    consisting almost entirely of neutrons, and almost no protons or
    electrons.

    Neutron stars exist in the region where the mass is high enough to overcome https://en.wikipedia.org/wiki/Electron_degeneracy_pressure
    that prevents the collapse of a white dwarf, and below the mass of https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_limit
    where
    https://en.wikipedia.org/wiki/Degenerate_matter#Neutron_degeneracy
    pressure prevents its collapse to a black hole.

    What I see from a quick search, the maximum spin rate for a neutron star
    is thought to be 760 Hz, above which magnetic coupling to surrounding
    matter and/or relativistic effects radiate away angular momentum.

    https://en.wikipedia.org/wiki/Neutron_star#Spin_down

    The previously referenced PSR J1748−2446ad spins at 716 Hz and at that
    spin rate the surface of the neutron star is moving at approx 25% of
    the speed of light.

    Also the center of the neutron star will not have the angular momentum
    of the outer edge but will have the high gravity.
    So just a guess but spinning doesn't look like it should stop
    black hole collapse if the mass gets too high.
    But that math is above my pay grade.


    That is an excellent summary of the much improved wiki page on Neutron
    Stars, and I appreciate it. Thank you.

    I actually kind of like that theory now that I understand it. But it comes
    from government research, and our government is a bunch of habitual liars,
    and valid nuclear theory is too dangerous for peasants to know.

    Go take a look at the Structures Atom Model and tell me what you think.

    https://structuredatom.org

  • From Niklas Holsti@21:1/5 to Brett on Sat Sep 28 23:15:33 2024
    On 2024-09-28 21:05, Brett wrote:
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-28 5:47, Brett wrote:
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-27 21:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.
    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.


    Which pulsars are spinning too fast? Reference please!


    https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.


    Spinning at 42,960 revolutions per minute.


    The article says it is "the fastest-spinning pulsar known", but does not
    say that it is spinning faster than neutron-star theories allow, so it
    does not support your claim.

    Took seconds for Google to answer.

    It is the wrong answer, at least for your claim.


    Our sun spinning at 42,960 revolutions per minute would exceed the speed
    of light at its surface, let alone hold together.


    Our sun is not a neutron star, nor a pulsar, so that statement is
    irrelevant.
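    The disagreement comes down to radius: at a fixed spin rate, the
    equatorial surface speed scales linearly with radius, so 716 Hz is
    impossible for a Sun-sized body yet allowed for a neutron star only
    kilometres across. A quick comparison (the neutron-star radius here is
    an assumed round figure):

```python
import math

c = 2.998e8   # speed of light, m/s
f = 716       # Hz, spin rate of PSR J1748-2446ad

def surface_speed(radius_m):
    """Equatorial speed of a rigidly rotating sphere at frequency f."""
    return 2 * math.pi * radius_m * f

R_sun = 6.96e8   # m, solar radius
R_ns = 16e3      # m, assumed neutron-star radius

print(surface_speed(R_sun) / c)   # ~1e4 times c: a Sun-sized body cannot spin this fast
print(surface_speed(R_ns) / c)    # ~0.24: relativistic, but physically allowed
```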

  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sun Sep 29 01:24:22 2024
    On Sat, 28 Sep 2024 19:08:23 -0000 (UTC), Brett wrote:

    But it comes from government research, and our government is a bunch of habitual liars ...

    Who is this “our” government? There are research labs all over the world (some even in private hands), keeping tabs on each other’s work. If
    somebody were lying about some result, it wouldn’t take long before the others discovered this.

  • From Lawrence D'Oliveiro@21:1/5 to EricP on Sun Sep 29 01:22:49 2024
    On Sat, 28 Sep 2024 13:16:08 -0400, EricP wrote:

    Also the center of the neutron star will not have the angular momentum
    of the outer edge but will have the high gravity.

    Not quite. Gravity goes down as you approach the centre of mass. Gauss’ Theorem says that the attraction of all the mass that is further away from
    the centre than you, in all directions, tends to cancel out.

    Pressure still goes up as you approach the centre, yes.
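    For the idealized case of a sphere of uniform density (which a real
    neutron star is not), the shell-theorem argument gives
    g(r) = G M r / R³ inside the body, falling linearly to zero at the
    centre. A small illustration with toy neutron-star numbers:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def g_inside_uniform_sphere(r, R, M):
    """Gravitational acceleration at radius r inside a uniform sphere of
    total mass M and radius R: only the enclosed mass attracts."""
    assert 0 <= r <= R
    m_enclosed = M * (r / R) ** 3
    return G * m_enclosed / r**2 if r > 0 else 0.0

# 1.4 solar masses packed into a 12 km sphere (toy numbers)
M, R = 1.4 * 1.989e30, 12e3
for frac in (1.0, 0.5, 0.1, 0.0):
    g = g_inside_uniform_sphere(frac * R, R, M)
    print(f"r = {frac:>4.0%} of R: g = {g:.3g} m/s^2")
```

    Halfway down, gravity is half the surface value; at the centre it
    vanishes, even though the pressure there is at its maximum.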

  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sun Sep 29 01:31:02 2024
    On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:

    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Occam’s Razor applies: stick to the simplest explanation that fits the
    known facts.

    Radio pulsars pulse at a very regular frequency (which is why they were originally thought to be created by some intelligence), but that frequency
    also gradually slows down with time. This is consistent with loss of
    angular momentum (and loss of energy) from radiation emission from a
    spinning neutron star.

    Remember, this isn’t all just hand-waving: they have formulas, derived
    from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Can you come up with some other mechanism for a radio source that pulses extremely regularly, yet also slows down gradually over time?

  • From Lawrence D'Oliveiro@21:1/5 to Paul A. Clayton on Sun Sep 29 01:36:42 2024
    On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:

    My guess would be that CPU RAM will decrease in upgradability. More
    tightly integrated memory facilitates higher bandwidth and lower latency
    (and lower system power/energy).

    Yes, we know that is the path that Apple is following. That seems to be
    the only way they can justify their move to ARM processors, in terms of increasing performance. Doesn’t mean that others will follow. I think Apple’s approach will turn out to be an evolutionary dead-end.

  • From Brett@21:1/5 to Lawrence D'Oliveiro on Sun Sep 29 02:08:26 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:

    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Occam’s Razor applies: stick to the simplest explanation that fits the known facts.

    Radio pulsars pulse at a very regular frequency (which is why they were originally thought to be created by some intelligence), but that frequency also gradually slows down with time. This is consistent with loss of
    angular momentum (and loss of energy) from radiation emission from a
    spinning neutron star.

    Remember, this isn’t all just hand-waving: they have formulas, derived
    from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Theories are a dime a dozen, it is easy to back fit data to fit any number
    of models.

    Can you come up with some other mechanism for a radio source that pulses extremely regularly, yet also slows down gradually over time?

    Here is a nice alternative to the standard model, which follows Occam’s Razor:

    https://youtu.be/bGygGius61I?si=6k0H1Bi70b4O9zgr

    ThunderboltsProject posts a lot of interesting videos, but the quality
    varies a lot with some crack pot ideas thrown in on occasion, to make one
    think I would suppose.

  • From Tim Rentsch@21:1/5 to EricP on Sat Sep 28 19:24:15 2024
    EricP <ThatWouldBeTelling@thevillage.com> writes:

    Tim Rentsch wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:

    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:

    I am on the black holes don't exist list, at smaller than at the
    center of a galaxy.

    You hear physicists talk of microscopic black holes, but the
    force that keeps atoms apart is so much more powerful than
    gravity that such talk is just fools playing with math they don't
    understand.

    Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
    George Volkoff, Subrahmanyan Chandresekhar, Richard Feynman,
    Stephen Hawking (and many others whose names I don't know)?

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons
    away from each other and the protons.

    Not exactly. Electrons and protons attract each other. The gravity
    is strong enough to get an electron and a proton close enough to each
    other so they can combine and form a neutron. My model for this
    recombination is as follows: a proton is two up quarks and a down
    quark; take an up quark (charge +2/3) and an electron (charge -1),
    and maybe a neutrino, turn them all into energy and then turn the
    energy back into a down quark (charge -1/3); so we have taken two up
    quarks and a down quark (a proton) and an electron, and gotten out two
    down quarks and an up quark (a neutron). Keep doing that until all
    the electrons and protons are used up. Result: a neutron star,
    consisting almost entirely of neutrons, and almost no protons or
    electrons.

    Neutron stars exist in the region where the mass is high enough to
    overcome https://en.wikipedia.org/wiki/Electron_degeneracy_pressure
    that prevents the collapse of a white dwarf, and below the mass of https://en.wikipedia.org/wiki/
    Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_limit
    where
    https://en.wikipedia.org/wiki/Degenerate_matter#Neutron_degeneracy
    pressure prevents its collapse to a black hole.

    Right, but the question is how does a white dwarf transition to a
    neutron star when its gravity gets too large? That is what I
    was trying to explain above: white dwarfs do have electrons and
    protons, but neutron stars mostly don't - my understanding is
    that the electrons and protons in a white dwarf combine to form
    neutrons, to turn it into a neutron star, and I was trying to
    describe a way that could happen.

    What I see from a quick search, the maximum spin rate for a neutron
    star is thought to be 760 Hz, above which magnetic coupling to
    surrounding matter and/or relativistic effects radiate away angular
    momentum.

    https://en.wikipedia.org/wiki/Neutron_star#Spin_down

    The previously referenced PSR J1748−2446ad spins at 716 Hz and at
    that spin rate the surface of the neutron star is moving at approx
    25% of the speed of light.

    That's amusing. :)

    Also the center of the neutron star will not have the angular
    momentum of the outer edge but will have the high gravity.
    So just a guess but spinning doesn't look like it should stop
    black hole collapse if the mass gets too high.
    But that math is above my pay grade.

    As mass increases and gravity goes up eventually even just the
    tidal forces of the gravity will be enough to turn the neutrons
    into a quark-gluon plasma. So collapse seems inevitable. The
    question is only how much mass is needed to make the collapse
    happen.

  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sun Sep 29 03:41:07 2024
    On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:

    My guess would be that CPU RAM will decrease in upgradability. More
    tightly integrated memory facilitates higher bandwidth and lower latency
    (and lower system power/energy).

    Yes, we know that is the path that Apple is following. That seems to be
    the only way they can justify their move to ARM processors, in terms of
    increasing performance. Doesn’t mean that others will follow. I think
    Apple’s approach will turn out to be an evolutionary dead-end.

    Intel's newest server CPU moves the DRAM onto the socket, getting rid of
    DIMMs.

    And Intel is not exactly in the best of market health at the moment, is it?

  • From Brett@21:1/5 to Brett on Sun Sep 29 04:04:20 2024
    Brett <ggtgp@yahoo.com> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:

    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Occam’s Razor applies: stick to the simplest explanation that fits the
    known facts.

    Radio pulsars pulse at a very regular frequency (which is why they were
    originally thought to be created by some intelligence), but that frequency
    also gradually slows down with time. This is consistent with loss of
    angular momentum (and loss of energy) from radiation emission from a
    spinning neutron star.

    Remember, this isn’t all just hand-waving: they have formulas, derived
    from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Theories are a dime a dozen, it is easy to back fit data to fit any number
    of models.

    Can you come up with some other mechanism for a radio source that pulses
    extremely regularly, yet also slows down gradually over time?

    Here is a nice alternative to the standard model, which follows Occam’s Razor:

    https://youtu.be/bGygGius61I?si=6k0H1Bi70b4O9zgr

    ThunderboltsProject posts a lot of interesting videos, but the quality
    varies a lot with some crack pot ideas thrown in on occasion, to make one think I would suppose.


    Here is a nice 10 second link for the quantum fans from Neil deGrasse Tyson

    https://youtu.be/1f6nVUv6VRs?si=eiGcnN5eC1XRw1d6

    Of course it’s pushing string theory, which is far greater bull.

  • From David Brown@21:1/5 to EricP on Sun Sep 29 13:59:29 2024
    On 28/09/2024 19:16, EricP wrote:
    Tim Rentsch wrote:
    mitchalsup@aol.com (MitchAlsup1) writes:

    On Tue, 24 Sep 2024 20:21:53 +0000, Brett wrote:

    I am on the black holes don't exist list, at smaller than at the
    center of a galaxy.

    You hear physicists talk of microscopic black holes, but the
    force that keeps atoms apart is so much more powerful than
    gravity that such talk is just fools playing with math they don't
    understand.

    Fools like Lev Landau, J Robert Oppenheimer, Richard Chase Tolman,
    George Volkoff, Subrahmanyan Chandresekhar, Richard Feynman,
    Stephen Hawking (and many others whose names I don't know)?

    Neutron stars are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons
    away from each other and the protons.

    Not exactly.  Electrons and protons attract each other.  The gravity
    is strong enough to get an electron and a proton close enough to each
    other so they can combine and form a neutron.  My model for this
    recombination is as follows:  a proton is two up quarks and a down
    quark;  take an up quark (charge +2/3) and an electron (charge -1),
    and maybe a neutrino, turn them all into energy and then turn the
    energy back into a down quark (charge -1/3);  so we have taken two up
    quarks and a down quark (a proton) and an electron, and gotten out two
    down quarks and an up quark (a neutron).  Keep doing that until all
    the electrons and protons are used up.  Result:  a neutron star,
    consisting almost entirely of neutrons, and almost no protons or
    electrons.
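
    The bookkeeping in that recombination can be checked mechanically. A toy
    sketch (pure charge accounting, not physics; the names and layout are
    mine):

```python
from fractions import Fraction

# Quark charges, in units of the elementary charge e.
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

proton = [UP, UP, DOWN]        # uud
neutron = [UP, DOWN, DOWN]     # udd
electron = Fraction(-1)
neutrino = Fraction(0)         # the (anti)neutrino carries no charge

# Electron capture: p + e- -> n + nu
charge_in = sum(proton) + electron
charge_out = sum(neutron) + neutrino

print(charge_in, charge_out)   # both 0: charge is conserved
assert charge_in == charge_out == 0
```

    Swapping one up quark (+2/3) for a down quark (-1/3) absorbs exactly the
    electron's -1, which is why the books balance.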

    Neutron stars exist in the region where the mass is high enough to
    overcome the electron degeneracy pressure that prevents the collapse of
    a white dwarf
    (https://en.wikipedia.org/wiki/Electron_degeneracy_pressure), and below
    the mass of the Tolman-Oppenheimer-Volkoff limit
    (https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_limit),
    where neutron degeneracy pressure
    (https://en.wikipedia.org/wiki/Degenerate_matter#Neutron_degeneracy)
    prevents its collapse to a black hole.

    From what I see in a quick search, the maximum spin rate for a neutron
    star is thought to be 760 Hz, above which magnetic coupling to surrounding
    matter and/or relativistic effects radiate away angular momentum.

    https://en.wikipedia.org/wiki/Neutron_star#Spin_down

    The previously referenced PSR J1748−2446ad spins at 716 Hz and at that
    spin rate the surface of the neutron star is moving at approx 25% of
    the speed of light.
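
    A quick back-of-the-envelope check of that figure (assuming an equatorial
    radius of about 16 km, the published upper-bound estimate for this pulsar;
    the true radius is not precisely known):

```python
import math

C = 2.998e8          # speed of light, m/s
SPIN_HZ = 716        # PSR J1748-2446ad spin frequency
RADIUS_M = 16e3      # assumed equatorial radius (upper-bound estimate)

# Equatorial surface speed of a rigidly rotating sphere: v = 2*pi*R*f
v = 2 * math.pi * RADIUS_M * SPIN_HZ
print(f"v = {v:.3g} m/s = {v / C:.0%} of c")  # roughly 24% of c
```

    A smaller assumed radius gives a proportionally smaller fraction of c,
    so "approx 25%" corresponds to the large end of the radius estimates.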

    Also the center of the neutron star will not have the angular momentum
    of the outer edge but will have the high gravity.
    So just a guess but spinning doesn't look like it should stop
    black hole collapse if the mass gets too high.
    But that math is above my pay grade.


    It's also useful to remember that neutron stars are not structureless
    balls of tightly-packed neutrons, as is often imagined. They have
    layers of different types of degenerate matter, and it is not even
    really accurate to think of individual neutrons. Even the model of a
    neutron as one up quark and two down quarks is wrong - in some ways,
    it's more like a collection of a thousand or so quarks and anti-quarks
    with one up and two downs as valence quarks. And within a neutron star,
    you will get a soup as the neutron structures break down and merge.

    This does not make the maths any easier!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Tim Rentsch on Sun Sep 29 13:51:33 2024
    On 28/09/2024 19:36, Tim Rentsch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Tue, 24 Sep 2024 20:33:58 +0000, MitchAlsup1 wrote:

    Neutron stars are are collapsed forms of matter where gravity is
    stronger than the electro-magnetic fields holding the electrons away
    from each other and the protons.

    Gravity here is stronger even than the Pauli exclusion principle, which
    says that two matter particles (e.g. electrons, protons, neutrons) cannot
    occupy the same space at the same time.

    This statement of the Pauli exclusion principle is wrong. An
    example is the two electrons of a helium atom, which occupy the same
    "space" (the lowest orbital shell of the atom) as long as the helium
    atom persists.


    It was certainly over-simplifying the Pauli exclusion principle. The
    principle applies only to fermions (particles with half-integer spin),
    and says they cannot occupy the same quantum state, rather than the
    "same space". The two electrons in the innermost orbit of a helium atom
    must have opposite spins - then they are in different quantum states.
    If their spins were the same, they would in effect be pushed apart to
    different positions.

    The Pauli exclusion principle doesn't apply to some matter particles
    (meaning particles that have non-zero rest mass). An example is
    carrier particles of the weak force, W (and I believe there are
    several kinds of W but I haven't bothered to verify that).

    Those are bosons (Z, W+ and W-), with integer spin and so the Pauli
    exclusion principle does not apply.


    Also, in some situations the Pauli exclusion principle doesn't apply
    to the kinds of particles it normally does apply to. An example is
    a pair of electrons in a Cooper pair, which since the electrons are
    paired they act as a boson rather than a fermion and thus are not
    subject to the Pauli exclusion principle (which is that two fermions
    cannot occupy the same quantum state).

    Yes.


    Note by the way that the Pauli exclusion principle is not an
    independent principle but simply a consequence of the laws of
    quantum mechanics as they apply to fermions.


    Yes.

    Finally, the original statement about gravity in a neutron star
    being stronger than the Pauli exclusion principle is wrong. It is
    precisely because of Pauli exclusion operating between the neutrons
    that make up the neutron star that stops it from collapsing into a
    black hole. The "pressure" of Pauli exclusion is not infinite,
    which means there is an upper bound on how much mass a neutron star
    can have before it collapses into a black hole. This bound, called
    the Tolman-Oppenheimer-Volkoff limit, is somewhere between 2 and 3
    solar masses.


    If all you wrote above is just from memory, then that's pretty good for
    a non-physicist. But if you managed to remember and spell "Tolman-Oppenheimer-Volkoff limit" from memory, then I am /really/
    impressed!

    (Disclaimer: all the above is to the best of my understanding; I
    am not a physicist.)

    Me too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Niklas Holsti on Sun Sep 29 14:08:06 2024
    On 28/09/2024 18:01, Niklas Holsti wrote:
    On 2024-09-28 18:46, David Brown wrote:
    On 27/09/2024 20:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly
    apart,
    so we know the theory is wrong.

    They are not flying apart - so we know /you/ are wrong.


    I think you made a logical error there, David, a rare one for you. As I understand Brett, he is saying that "the theory" that pulsars are
    neutron stars cannot be right, because some pulsars spin so rapidly that
    a neutron star spinning like that would fly apart.

    If there really were such pulsars -- pulsars spinning faster than a
    neutron star can spin -- then I think Brett's argument would hold: those pulsars could not be neutron stars. Or the error could be in our understanding of how fast neutron stars can spin.


    You are right - /if/ there were pulsars spinning faster than neutron
    stars can spin, then those pulsars could not be neutron stars. And it
    would be fun to find such cases, because it could be evidence of
    hypothetical strange stars or quark stars.

    I had thought Brett was saying that pulsars spinning as fast as the one
    he referred to could not be neutron stars because they would fly apart -
    it's not always easy to understand what he is trying to say!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Sun Sep 29 14:13:54 2024
    On 29/09/2024 06:04, Brett wrote:

    Of course it’s pushing string theory which is far greater bull.


    Despite its name, "string theory" has a very long way to go before it
    can qualify as a scientific theory.

    That is - it might be a good model of reality, or it might not, but at
    the moment we have no idea of how to find evidence that it is a better
    model than existing ones, and no idea how to find evidence that would
    disprove it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Sun Sep 29 14:11:38 2024
    On 29/09/2024 04:08, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:

    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Occam’s Razor applies: stick to the simplest explanation that fits the
    known facts.

    Radio pulsars pulse at a very regular frequency (which is why they were
    originally thought to be created by some intelligence), but that frequency
    also gradually slows down with time. This is consistent with loss of
    angular momentum (and loss of energy) from radiation emission from a
    spinning neutron star.

    Remember, this isn’t all just hand-waving: they have formulas, derived
    from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Theories are a dime a dozen; it is easy to back-fit data to any number
    of models.

    No, theories are not common. Wild ideas are common. Scientific
    theories need a huge amount of work, evidence and support.


    Can you come up with some other mechanism for a radio source that pulses
    extremely regularly, yet also slows down gradually over time?

    Here is a nice alternative to the standard model, which follows Occam’s Razor:

    https://youtu.be/bGygGius61I?si=6k0H1Bi70b4O9zgr

    ThunderboltsProject posts a lot of interesting videos, but the quality
    varies a lot, with some crackpot ideas thrown in on occasion, to make one
    think, I suppose.


    These links you keep posting are not theories - they are, at best,
    crackpot ideas with no justification and only a vague fit to some
    cherry-picked data.

    Remember, there is a big difference between a "scientific theory" and a "conspiracy theory". They are not all just alternative theories!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sun Sep 29 15:21:59 2024
    On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:

    My guess would be that CPU RAM will decrease in upgradability.
    More tightly integrated memory facilitates higher bandwidth and
    lower latency (and lower system power/energy).

    Yes, we know that is the path that Apple is following. That seems
    to be the only way they can justify their move to ARM processors,
    in terms of increasing performance. Doesn’t mean that others will
    follow. I think Apple’s approach will turn out to be an
    evolutionary dead-end.

    Intel's newest server CPU moves the DRAM onto the socket, getting rid
    of DIMMs.

    And Intel is not exactly in the best of market health at the moment,
    is it?

    It seems Brett is confusing Intel's client CPUs (Lunar Lake) with
    Intel's server CPUs (Sierra Forest and Granite Rapids).
    Don't take everything he says at face value. As a source of
    information Brett is no more reliable than yourself.

    Also, note that even in the client space Intel complements the more
    rigid Lunar Lake series with the more traditional (likely at the cost of
    lower performance per watt) Arrow Lake series.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to David Brown on Sun Sep 29 18:20:40 2024
    David Brown <david.brown@hesbynett.no> wrote:
    On 29/09/2024 04:08, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 25 Sep 2024 10:43:20 +0300, Michael S wrote:

    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Occam’s Razor applies: stick to the simplest explanation that fits the
    known facts.

    Radio pulsars pulse at a very regular frequency (which is why they were
    originally thought to be created by some intelligence), but that frequency
    also gradually slows down with time. This is consistent with loss of
    angular momentum (and loss of energy) from radiation emission from a
    spinning neutron star.

    Remember, this isn’t all just hand-waving: they have formulas, derived
    from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Theories are a dime a dozen; it is easy to back-fit data to any number
    of models.

    No, theories are not common. Wild ideas are common. Scientific
    theories need a huge amount of work, evidence and support.


    Can you come up with some other mechanism for a radio source that pulses
    extremely regularly, yet also slows down gradually over time?

    Here is a nice alternative to the standard model, which follows Occam’s
    Razor:

    https://youtu.be/bGygGius61I?si=6k0H1Bi70b4O9zgr

    ThunderboltsProject posts a lot of interesting videos, but the quality
    varies a lot, with some crackpot ideas thrown in on occasion, to make one
    think, I suppose.


    These links you keep posting are not theories - they are, at best,
    crackpot ideas with no justification and only a vague fit to some cherry-picked data.

    Remember, there is a big difference between a "scientific theory" and a "conspiracy theory". They are not all just alternative theories!

    Well then, go take a look at the Structured Atom Model and tell me what you think.

    https://structuredatom.org

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Michael S on Sun Sep 29 18:26:39 2024
    Michael S <already5chosen@yahoo.com> wrote:
    On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:

    My guess would be that CPU RAM will decrease in upgradability.
    More tightly integrated memory facilitates higher bandwidth and
    lower latency (and lower system power/energy).

    Yes, we know that is the path that Apple is following. That seems
    to be the only way they can justify their move to ARM processors,
    in terms of increasing performance. Doesn’t mean that others will
    follow. I think Apple’s approach will turn out to be an
    evolutionary dead-end.

    Intels newest server cpu moves the dram onto the socket getting rid
    of DIMMs.

    And Intel is not exactly in the best of market health at the moment,
    is it?

    It seems, Brett is confusing Intel's client CPUs (Lunar Lake) for
    Intel's server CPUs (Sierra Forest and Granite Rapids).
    Don't take everything he says at face value. As a source of
    information Brett is no more reliable than yourself.

    Four times the DRAM bandwidth; DIMMs are DOOMED.

    Also, pay attention that even in client space Intel complements more
    rigid Lunar Lake series with more traditional (likely, at cost of lower performance per watt) Arrow Lake series.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Brett on Sun Sep 29 22:57:29 2024
    On Sun, 29 Sep 2024 18:26:39 -0000 (UTC)
    Brett <ggtgp@yahoo.com> wrote:

    Michael S <already5chosen@yahoo.com> wrote:
    On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:

    My guess would be that CPU RAM will decrease in upgradability.
    More tightly integrated memory facilitates higher bandwidth and
    lower latency (and lower system power/energy).

    Yes, we know that is the path that Apple is following. That seems
    to be the only way they can justify their move to ARM processors,
    in terms of increasing performance. Doesn’t mean that others will
    follow. I think Apple’s approach will turn out to be an
    evolutionary dead-end.

    Intels newest server cpu moves the dram onto the socket getting
    rid of DIMMs.

    And Intel is not exactly in the best of market health at the
    moment, is it?

    It seems, Brett is confusing Intel's client CPUs (Lunar Lake) for
    Intel's server CPUs (Sierra Forest and Granite Rapids).
    Don't take everything he says at face value. As a source of
    information Brett is no more reliable than yourself.

    Four times the dram bandwidth,


    For the same # of IO pins LPDDR5x has only ~1.33x higher bandwidth
    than DDR5 DIMMs at top standard speed. Somewhat more, if measured per
    Watt rather than per pin, but even per Watt the factor is much less
    than 2x.
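
    For reference, that ~1.33x per-pin figure follows from the transfer
    rates alone (assuming top standard speeds of DDR5-6400 for DIMMs and
    LPDDR5X-8533; both interfaces move one bit per data pin per transfer):

```python
# Per-pin bandwidth scales with transfer rate (MT/s), since each data pin
# carries one bit per transfer on both interfaces.
ddr5_mts = 6400      # assumed top standard DDR5 DIMM speed
lpddr5x_mts = 8533   # assumed top standard LPDDR5X speed

ratio = lpddr5x_mts / ddr5_mts
print(f"LPDDR5X per-pin advantage: ~{ratio:.2f}x")  # ~1.33x
```

    Higher (or lower) module speeds on either side shift the ratio, which is
    why the advantage gets quoted as "roughly" 1.33x.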

    DIMMs are DOOMED.

    In the long run, I don't disagree. On client computers, the run would
    not even be particularly long. On servers, it will take more than a
    decade.
    But even on clients it's not going to be completed overnight, or over
    1-1.5 years.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Brett on Sun Sep 29 22:09:00 2024
    In article <vdc5po$1reaa$1@dont-email.me>, ggtgp@yahoo.com (Brett) wrote:

    Well then, Go take a look at the Structures Atom Model and tell me
    what you think.

    https://structuredatom.org

    It's a re-invention of the "nuclear electrons" idea that was current
    through the 1920s, and seems to have the same problems.
    <https://en.wikipedia.org/wiki/Discovery_of_the_neutron#Problems_of_the_nuclear_electrons_hypothesis>

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Brett on Sun Sep 29 23:30:00 2024
    On Sun, 29 Sep 2024 18:26:39 +0000, Brett wrote:

    Four times the dram bandwidth, DIMMs are DOOMED.


    They had a good long run.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to John Dallman on Mon Sep 30 04:01:56 2024
    John Dallman <jgd@cix.co.uk> wrote:
    In article <vdc5po$1reaa$1@dont-email.me>, ggtgp@yahoo.com (Brett) wrote:

    Well then, Go take a look at the Structures Atom Model and tell me
    what you think.

    https://structuredatom.org

    It's a re-invention of the "nuclear electrons" idea that was current
    through the 1920s, and seems to have the same problems.
    <https://en.wikipedia.org/wiki/Discovery_of_the_neutron#Problems_of_the_nuclear_electrons_hypothesis>

    John


    That is just another cloud model. SAM has actual structure that explains attachment angles and should explain hyperfine structure properties better.

    The hyperfine structure looks like just a collection of SWAG formulas, so
    why should your atom model even matter?

    https://en.wikipedia.org/wiki/Hyperfine_structure

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Mon Sep 30 03:46:07 2024
    On Sun, 29 Sep 2024 02:08:26 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Remember, this isn’t all just hand-waving: they have formulas, derived
    from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Theories are a dime a dozen, it is easy to back fit data to fit any
    number of models.

    Predicting results that haven’t been measured yet, and measuring them and showing they are correct, is the true mark of science.

    Does your “alternative model” measure up to that? No it does not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Mon Sep 30 04:11:18 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 29 Sep 2024 02:08:26 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Remember, this isn’t all just hand-waving: they have formulas, derived >>> from theory, into which they can plug in numbers, and the numbers agree
    with actual measurements.

    Theories are a dime a dozen, it is easy to back fit data to fit any
    number of models.

    Predicting results that haven’t been measured yet, and measuring them and showing they are correct, is the true mark of science.

    Based on Hubble research, thousands of theories were proposed to get a
    Nobel prize; then the James Webb telescope launched and all those theories
    went into the toilet.

    Had one of those theories been in the ballpark you would have declared
    success for predictive science, ignoring the 999 failures; but “science”
    completely failed.

    These “scientists” are nothing more than monkeys at typewriters hoping to
    get lucky.

    Does your “alternative model” measure up to that? No it does not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Brett on Mon Sep 30 05:45:39 2024
    On Mon, 30 Sep 2024 4:11:18 +0000, Brett wrote:

    Based on Hubble research, thousands of theories were proposed to get a
    Nobel prize; then the James Webb telescope launched and all those
    theories went into the toilet.

    Had one of those theories been in the ballpark you would have declared
    success for predictive science, ignoring the 999 failures; but “science”
    completely failed.

    Just because there were thousands of conjectures that failed to meet the
    rigors of science does not mean that science has failed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Mon Sep 30 13:23:22 2024
    On 29/09/2024 03:24, Lawrence D'Oliveiro wrote:
    On Sat, 28 Sep 2024 19:08:23 -0000 (UTC), Brett wrote:

    But it comes from government research, and our government is a bunch of
    habitual liars ...

    Who is this “our” government? There are research labs all over the world (some even in private hands), keeping tabs on each other’s work. If somebody were lying about some result, it wouldn’t take long before the others discovered this.

    And even if Brett's government - wherever that is - really are "habitual
    liars", none of this kind of thing is "government research". Some of it
    is "government-funded research", which is entirely different. National
    governments often provide the funds (directly, or indirectly such as by
    funding universities) for pure science research, but the science is
    long-running and outlasts any particular government.

    It's pure paranoid delusion and conspiracy theory - "they" are hiding
    the "truth" from us.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Brett on Mon Sep 30 16:35:00 2024
    In article <vdd7rj$23l66$1@dont-email.me>, ggtgp@yahoo.com (Brett) wrote:

    That is just another cloud model. SAM has actual structure that
    explains attachment angles and should explain hyperfine structure
    properties better.

    And the Klein paradox? The electron confinement problem? There's no
    explanation of any of those problems.

    Where are the peer-reviewed papers? All I see is posters and
    presentations at Cold Fusion and Electric Universe conferences, plus
    Tesla Tech, whose owner describes himself as "a publisher of extreme
    science, alternative energy, health and medicine."

    This appears to be pseudo-science, appealing to those who know a little
    about physics, but incapable of explaining the difficult problems.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to John Dallman on Mon Sep 30 20:13:49 2024
    John Dallman <jgd@cix.co.uk> wrote:
    In article <vdd7rj$23l66$1@dont-email.me>, ggtgp@yahoo.com (Brett) wrote:

    That is just another cloud model. SAM has actual structure that
    explains attachment angles and should explain hyperfine structure
    properties better.

    And the Klein paradox? The electron confinement problem? There's no explanation of any of those problems.

    You are smart enough to know that the Klein paradox is a cat that can be skinned many ways. That the current solution is popular today, does not
    mean that it will last.

    Where are the peer-reviewed papers? All I see is posters and
    presentations at Cold Fusion and Electric Universe conferences, plus
    Tesla Tech, whose owner describes himself as "a publisher of extreme
    science, alternative energy, health and medicine."

    Your questions made me buy the book, so that I can answer some of them.

    And to find flaws in SAM to criticize myself; they are human after all, as
    am I.

    Everywhere I look I find flaws; it is in my nature. The emperor is not
    wearing any clothes, and it is obvious to those that look, though few do.

    This appears to be pseudo-science, appealing to those who know a little
    about physics, but incapable of explaining the difficult problems.

    John


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to mitchalsup@aol.com on Mon Sep 30 19:58:46 2024
    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Mon, 30 Sep 2024 4:11:18 +0000, Brett wrote:

    Based on Hubble research, thousands of theories were proposed to get a
    Nobel prize; then the James Webb telescope launched and all those
    theories went into the toilet.

    Had one of those theories been in the ballpark you would have declared
    success for predictive science, ignoring the 999 failures; but
    “science” completely failed.

    just because there were thousands of conjectures that fail to meet the
    rigors of science does not mean that science has failed.

    The false religion of “science” failed.

    Yes, real science actually advances this way.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Michael S on Mon Sep 30 20:20:29 2024
    Michael S <already5chosen@yahoo.com> wrote:
    On Sun, 29 Sep 2024 18:26:39 -0000 (UTC)
    Brett <ggtgp@yahoo.com> wrote:

    Michael S <already5chosen@yahoo.com> wrote:
    On Sun, 29 Sep 2024 03:41:07 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 29 Sep 2024 02:08:28 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:58:10 -0400, Paul A. Clayton wrote:

    My guess would be that CPU RAM will decrease in upgradability.
    More tightly integrated memory facilitates higher bandwidth and
    lower latency (and lower system power/energy).

    Yes, we know that is the path that Apple is following. That seems
    to be the only way they can justify their move to ARM processors,
    in terms of increasing performance. Doesn’t mean that others will >>>>>> follow. I think Apple’s approach will turn out to be an
    evolutionary dead-end.

    Intels newest server cpu moves the dram onto the socket getting
    rid of DIMMs.

    And Intel is not exactly in the best of market health at the
    moment, is it?

    It seems, Brett is confusing Intel's client CPUs (Lunar Lake) for
    Intel's server CPUs (Sierra Forest and Granite Rapids).
    Don't take everything he says at face value. As a source of
    information Brett is no more reliable than yourself.

    Four times the dram bandwidth,


    For the same # of IO pins LPDDR5x has only ~1.33x higher bandwidth
    than DDR5 DIMMs at top standard speed. Somewhat more, if measured per
    Watt rather than per pin, but even per Watt the factor is much less
    than 2x.

    On-socket LPDDR5x has four times the bandwidth because there are four
    times as many memory buses: you can have four times the pins on the
    carrier compared to the socket.

    DIMMs are DOOMED.

    In the long run, I don't disagree. On client computers - the run would
    not be even particularly long. On servers - it will take more than
    decade.
    But even on clients it's not going to be completed overnight or over
    1-1.5 years.

    ;) And over four years, which is two major product refreshes?

    You know that answer. ;)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Oct 1 02:39:58 2024
    On Mon, 30 Sep 2024 05:45:39 +0000, MitchAlsup1 wrote:

    just because there were thousands of conjectures that fail to meet the
    rigors of science does not mean that science has failed.

    Quite the opposite. The fact that science has such rigours is what keeps
    it grounded in reality.

    In science, you are free to start with whatever assumptions you wish. But
    you then have to explore the consequences of those assumptions, wherever
    they may lead. If those consequences agree with reality, that’s a sign you may be on the right track; if they don’t, then you’re not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Tue Oct 1 02:40:55 2024
    On Mon, 30 Sep 2024 19:58:46 -0000 (UTC), Brett wrote:

    The false religion of “science” failed.

    Science is not a religion. Science, unlike religion, works whether you
    believe in it or not.

    You can, very literally, bet your life on it. Not something you can say of
    any religion ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Lawrence D'Oliveiro on Tue Oct 1 03:48:53 2024
    On Tue, 1 Oct 2024 2:40:55 +0000, Lawrence D'Oliveiro wrote:

    On Mon, 30 Sep 2024 19:58:46 -0000 (UTC), Brett wrote:

    The false religion of “science” failed.

    Science is not a religion. Science, unlike religion, works whether you believe in it or not.

    Science, unlike religion, adjusts to the current set of facts--whatever
    they may be.

    You can, very literally, bet your life on it. Not something you can say
    of any religion ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Tue Oct 1 08:57:00 2024
    On 30/09/2024 21:58, Brett wrote:
    MitchAlsup1 <mitchalsup@aol.com> wrote:
    On Mon, 30 Sep 2024 4:11:18 +0000, Brett wrote:

    Based off of Hubble research 1000’s of theories were proposed to get a
    Nobel prize, then the James Webb telescope launched and all those
    theories went into the toilet.

    Had one of those theories been in the ball park you would have declared
    success for predictive science. Ignoring the 999 failures, but
    “science”completely failed.

    just because there were thousands of conjectures that fail to meet the
    rigors of science does not mean that science has failed.

    The false religion of “science” failed.


    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions. Religion is about unquestioned answers."

    "Science does not know everything. Science /knows/ it does not know
    everything - otherwise we'd stop doing it." (That was Dara Ó Briain.)

    Yes real science actually advances this way.


    It's apparent from your postings that you have no concept of what "real science" is, or how it advances.

    You look at modern science, and you see there are gaps - things that no
    one is explaining properly. The scientific approach is to look at these
    holes and see opportunities to learn more and fill them in. Perhaps
    someone will fulfil the dream of all scientists, and prove an existing
    theory wrong.

    But your anti-scientific approach is to see these gaps or flaws and
    think that means scientists are lying to us, or that it's /all/ wrong.
    Then you listen to the first crackpot trisector or conman that comes
    along, and happily give them your worship and your money just because
    they reinforce your paranoia.

    It is flat-earthers like you that make it very difficult for real
    scientists who do come up with unusual ideas - they are brought low by
    the weight of supporters like you who follow them simply because their
    ideas are different, not because they understand the science involved.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Oct 1 08:34:55 2024
    On Tue, 1 Oct 2024 03:48:53 +0000, MitchAlsup1 wrote:

    On Tue, 1 Oct 2024 2:40:55 +0000, Lawrence D'Oliveiro wrote:

    Science is not a religion. Science, unlike religion, works whether you
    believe in it or not.

    Science, unlike religion, adjusts to the current set of facts--whatever
    they may be.

    There is this assumption that the facts of the behaviour of electricity haven’t changed since yesterday -- that the way we calculate the voltages
    and currents and resistances still work exactly as they did before, so
    when you next reach for that power switch, it will activate the appliance
    you expected it to activate, and won’t suddenly burst out of the wall and kill you.

    That’s what I mean by “betting your life on the correctness of science”. Do you say a little incantation to the god(s) of your choice each time
    before touching that switch? Pour out a libation? Sacrifice a goat? No --
    you simply do it without thinking.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Tue Oct 1 10:48:11 2024
    On 01/10/2024 05:48, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 2:40:55 +0000, Lawrence D'Oliveiro wrote:

    On Mon, 30 Sep 2024 19:58:46 -0000 (UTC), Brett wrote:

    The false religion of “science” failed.

    Science is not a religion. Science, unlike religion, works whether you
    believe in it or not.

    Science, unlike religion, adjusts to the current set of facts--whatever
    they may be.


    It would be better to say that science adjusts to the current set of
    /evidence/ - our measurements of facts. The orbit of Mercury around the
    Sun is a fact, and has not changed over the centuries. Our measurements
    of it have changed as our experimental tools improved, and science
    changed accordingly (from Newtonian gravity to relativity).

    Sometimes new evidence directly disproves old theories or conjectures
    (such as the phlogiston theory of fire), other times it gives a more
    accurate model while leaving the old one as a reasonable approximation
    in many situations (like gravity).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to David Brown on Tue Oct 1 15:51:36 2024
    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions. Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong). You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted. In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it. But people still try to think of
    experiments which might show a deviation, and keep trying for it.

    Same for quantum mechanics. Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    The main problem is with people who try to sell something as
    science which isn't, of which there are also many examples.
    "Scientific Marxism" is one such example. It is sometimes hard
    for an outsider to differentiate between actual scientific theories
    which have been tested, and people just claiming that "the science
    says so" when they have not been applying the scientific method
    faithfully, either through ignorance or through bad intent.

    There is also the problem of many people not knowing statistics well
    enough and misapplying it, for example in social or medical science.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Thomas Koenig on Tue Oct 1 18:20:16 2024
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:

    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions. Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong). You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted. In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it. But people still try to think of
    experiments which might show a deviation, and keep trying for it.

    Same for quantum mechanics. Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.

    The main problem is with people who try to sell something as
    science which isn't, of which there are also many examples.

    The colloquial person thinks theory and conjecture are
    essentially equal. As in: "I just invented this theory".
    No, you just: "Invented a conjecture." You have to have
    substantial evidence to go from conjecture to theory.

    "Scientific Marxism" is one such example. It is sometimes hard
    for an outsider to differentiate between actual scientific theories
    which have been tested, and people just claiming that "the science
    says so" when they have not been applying the scientific method
    faithfully, either through ignorance or through bad intent.

    There is also the problem of many people not knowing statistics well
    enough and misapplying it, for example in social or medical science.

    Or politics....

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Tue Oct 1 20:56:46 2024
    On 01/10/2024 20:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:

    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions.  Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong).  You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted.  In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it.  But people still try to think of
    experiments which might show a deviation, and keep trying  for it.

    Same for quantum mechanics.  Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.

    It's worth remembering that mathematical proofs always start at a base -
    a set of axioms. And these axioms are assumed, not proven.


    The main problem is with people who try to sell something as
    science which isn't, of which there are also many examples.

    The colloquial person thinks theory and conjecture are
    essentially equal. As in: "I just invented this theory".
    No, you just: "Invented a conjecture." you have to have
    substantial evidence to go from conjecture to theory.


    I think you need evidence, justification, and a good basis for proposing something before it can even be called a "conjecture" in science. You
    don't start off with a conjecture - you start with an idea, and have a
    long way to go to reach a "scientific theory", passing through
    "conjecture" and "hypothesis" on the way.

    "Scientific Marxism" is one such example.  It is sometimes hard
    for an outsider to differentiate between actual scientific theories
    which have been tested, and people just claiming that "the science
    says so" when they have not been applying the scientific method
    faithfully, either through ignorance or through bad intent.

    There is also the problem of many people not knowing statistics well
    enough and misapplying it, for example in social or medical science.

    Or politics....

    Or even in hard sciences - scientists are humans too, and some of them
    get their statistics wildly wrong.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to All on Tue Oct 1 22:07:18 2024
    On 2024-10-01 21:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:

    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions.  Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong).  You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted.  In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it.  But people still try to think of
    experiments which might show a deviation, and keep trying  for it.

    Same for quantum mechanics.  Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.


    Yes, in the end, but it is interesting that a lot of the progress in mathematics happens through the invention or intuition of /conjectures/,
    which may eventually be proven correct and true, or incorrect and
    needing modification.

    An open (neither proved nor disproved) conjecture often collects lots of "observed evidence", either by suggesting some interesting corollaries
    or analogies that are then proved independently, or by surviving
    energetic efforts to find counterexamples to the conjecture. In this
    sense an open conjecture resembles a theory in physics.

    A list of conjectures:

    https://en.wikipedia.org/wiki/List_of_mathematical_conjectures
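    [Editor's note: a minimal sketch, not from the thread, of what "surviving
    energetic efforts to find counterexamples" looks like in practice. It
    brute-force checks Goldbach's conjecture (every even n > 2 is a sum of
    two primes) for small n; the function names are my own, and passing the
    check is evidence in the sense described above, not a proof.]

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return (p, q) with p + q == n and both prime, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexample below 10,000: the conjecture "survives" this effort.
assert all(goldbach_witness(n) is not None for n in range(4, 10_000, 2))
```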

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Niklas Holsti on Tue Oct 1 21:09:35 2024
    On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:

    On 2024-10-01 21:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:

    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions.  Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong).  You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted.  In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it.  But people still try to think of
    experiments which might show a deviation, and keep trying  for it.

    Same for quantum mechanics.  Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.


    Yes, in the end, but it is interesting that a lot of the progress in mathematics happens through the invention or intuition of /conjectures/, which may eventually be proven correct and true, or incorrect and
    needing modification.

    Mathematical conjectures have a spectrum of "solidity" often more
    solid in one branch of math than in another.

    An open (neither proved nor disproved) conjecture often collects lots of "observed evidence", either by suggesting some interesting corollaries
    or analogies that are then proved independently, or by surviving
    energetic efforts to find counterexamples to the conjecture. In this
    sense an open conjecture resembles a theory in physics.

    The solution to Fermat's last theorem used a large series of
    then conjectures in order to demonstrate that the solution
    was correct.

    A list of conjectures:

    https://en.wikipedia.org/wiki/List_of_mathematical_conjectures

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Tue Oct 1 21:11:12 2024
    On Tue, 1 Oct 2024 18:56:46 +0000, David Brown wrote:

    On 01/10/2024 20:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:

    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions.  Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong).  You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted.  In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it.  But people still try to think of
    experiments which might show a deviation, and keep trying  for it.

    Same for quantum mechanics.  Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.

    It's worth remembering that mathematical proofs always start at a base -
    a set of axioms. And these axioms are assumed, not proven.


    The main problem is with people who try to sell something as
    science which isn't, of which there are also many examples.

    The colloquial person thinks theory and conjecture are
    essentially equal. As in: "I just invented this theory".
    No, you just: "Invented a conjecture." you have to have
    substantial evidence to go from conjecture to theory.


    I think you need evidence, justification, and a good basis for proposing something before it can even be called a "conjecture" in science. You
    don't start off with a conjecture - you start with an idea, and have a
    long way to go to reach a "scientific theory", passing through
    "conjecture" and "hypothesis" on the way.

    I do not disagree with that. Sorry if I implied anything else.

    "Scientific Marxism" is one such example.  It is sometimes hard
    for an outsider to differentiate between actual scientific theories
    which have been tested, and people just claiming that "the science
    says so" when they have not been applying the scientific method
    faithfully, either through ignorance or through bad intent.

    There is also the problem of many people not knowing statistics well
    enough and misapplying it, for example in social or medical science.

    Or politics....

    Or even in hard sciences - scientists are humans too, and some of them
    get their statistics wildly wrong.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Niklas Holsti on Tue Oct 1 23:33:57 2024
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-10-01 21:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:

    David Brown <david.brown@hesbynett.no> schrieb:

    Science is not a religion.

    And as someone (whose name I have forgotten) once said, "Science is
    about unanswered questions.  Religion is about unquestioned answers."

    That is the ideal of science - scientific hypotheses are proposed.
    They have to be falsifiable (i.e. you have to be able to do experiments
    which could, in theory, prove the hypothesis wrong).  You can never
    _prove_ a hypothesis, you can only fail to disprove it, and then it
    will gradually tend to become accepted.  In other words, you try
    to make predictions, and if those predictions fail, then the theory
    is in trouble.

    For example, Einstein's General Theory of Relativity was never
    proven, it was found by a very large number of experiments by a
    very large number of people that it could not be disproven, so
    people generally accept it.  But people still try to think of
    experiments which might show a deviation, and keep trying  for it.

    Same for quantum mechanics.  Whatever you think of it
    philosophically, it has been shown to be remarkably accurate
    at predicting actual behavior.

    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.


    Yes, in the end, but it is interesting that a lot of the progress in mathematics happens through the invention or intuition of /conjectures/, which may eventually be proven correct and true, or incorrect and
    needing modification.

    An open (neither proved nor disproved) conjecture often collects lots of "observed evidence", either by suggesting some interesting corollaries
    or analogies that are then proved independently, or by surviving
    energetic efforts to find counterexamples to the conjecture. In this
    sense an open conjecture resembles a theory in physics.

    A list of conjectures:

    https://en.wikipedia.org/wiki/List_of_mathematical_conjectures

    Sky Scholar just posted his latest mockery of modern physics:

    https://youtu.be/LlUBDlSJp_A?si=p5HwVyqGEoReWJ0h

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Wed Oct 2 08:50:06 2024
    On 01/10/2024 23:11, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 18:56:46 +0000, David Brown wrote:

    On 01/10/2024 20:20, MitchAlsup1 wrote:


    The colloquial person thinks theory and conjecture are
    essentially equal. As in: "I just invented this theory".
    No, you just: "Invented a conjecture." you have to have
    substantial evidence to go from conjecture to theory.


    I think you need evidence, justification, and a good basis for proposing
    something before it can even be called a "conjecture" in science.  You
    don't start off with a conjecture - you start with an idea, and have a
    long way to go to reach a "scientific theory", passing through
    "conjecture" and "hypothesis" on the way.

    I do not disagree with that. Sorry if I implied anything else.


    I read your post as saying that if someone says "I have a theory that
    the moon is made of green cheese", it is actually a conjecture, not a
    theory. I fully agree that it is not a theory - at least, not a
    scientific theory. But I would also not even call it a conjecture since
    it has no justification or basis, and is easily disproved (if the
    moon is made of cheese, it is /grey/ cheese, not /green/ cheese!). It
    is no more than an idea or claim - to be a "conjecture", it needs to
    have a viable path towards a theory (though it may fail along that path).

    But we absolutely agree on "theory".

    The issue turns up regularly with Bible literalists who think the theory
    of evolution is "just a theory", and should have no more place in school curriculums than the so-called "theory of intelligent design". They
    are mixing up the scientific term "theory" with the colloquial
    non-scientific usage - and the same bad reasoning suggests that
    alongside the "Newtonian theory of gravity" we should be teaching the
    "theory of intelligent falling".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Wed Oct 2 09:20:47 2024
    On 01/10/2024 23:09, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:

    On 2024-10-01 21:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:


    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.


    Yes, in the end, but it is interesting that a lot of the progress in
    mathematics happens through the invention or intuition of /conjectures/,
    which may eventually be proven correct and true, or incorrect and
    needing modification.

    Mathematical conjectures have a spectrum of "solidity" often more
    solid in one branch of math than in another.


    I am not entirely sure what you mean by that.

    A conjecture is a hypothesis that you have reasonable justification for believing is true, but which is not proven to be true (then it becomes a theorem). Some conjectures have been confirmed empirically to a large
    degree (such as the Riemann hypothesis) which is not proof, but can be
    seen as strengthening the conjecture. Others, such as the continuum hypothesis, not only have no empirical evidence but have been proven to
    be independent of our usual ZF set theory axioms - no evidence either
    way can be found.

    There are also some mathematicians who have a philosophy of viewing some
    kinds of proofs as "better" than others. Some dislike "proof by
    computer", and don't consider the four-colour theorem to be a proven
    theorem yet. Others are "constructivists" - they are not happy with
    merely a proof that some solution must exist, they only consider the
    hypothesis properly proven when they have a construction for a solution.
    In that sense, a given conjecture may have more "solidity" in one
    /school/ of mathematics than in another.

    But I don't quite see how a single conjecture could have more "solidity"
    in one /branch/ of mathematics than another. An example or two might help.


    An open (neither proved nor disproved) conjecture often collects lots of
    "observed evidence", either by suggesting some interesting corollaries
    or analogies that are then proved independently, or by surviving
    energetic efforts to find counterexamples to the conjecture. In this
    sense an open conjecture resembles a theory in physics.

    The solution to Fermat's last theorem used a large series of
    then conjectures in order to demonstrate that the solution
    was correct.


    Yes - and then those supporting conjectures were proven and morphed into theorems, with the knock-on effect of making everything higher up a
    proven theorem.

    This is very common in mathematics - you develop conditional proofs
    building on assuming a conjecture is true, and then you (or someone
    else) goes back and proves that conjecture later, or perhaps finds
    another path around that part. For many theorems in mathematics, the
    complete proof is a /very/ long and winding path.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Wed Oct 2 21:45:38 2024
    On Wed, 2 Oct 2024 7:20:47 +0000, David Brown wrote:

    On 01/10/2024 23:09, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:

    On 2024-10-01 21:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:


    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.


    Yes, in the end, but it is interesting that a lot of the progress in
    mathematics happens through the invention or intuition of /conjectures/,
    which may eventually be proven correct and true, or incorrect and
    needing modification.

    Mathematical conjectures have a spectrum of "solidity" often more
    solid in one branch of math than in another.


    I am not entirely sure what you mean by that.

    A conjecture is a hypothesis that you have reasonable justification for believing is true, but which is not proven to be true (then it becomes a theorem). Some conjectures have been confirmed empirically to a large
    degree (such as the Riemann hypothesis) which is not proof, but can be
    seen as strengthening the conjecture. Others, such as the continuum hypothesis, not only have no empirical evidence but have been proven to
    be independent of our usual ZF set theory axioms - no evidence either
    way can be found.

    Other conjectures had a century or more between being conjectured with
    several "things they got right" before finally drifting towards a proof
    or drifting towards disproof. The width of the drift is exactly the
    spectrum I stated.

    There are also some mathematicians who have a philosophy of viewing some kinds of proofs as "better" than others. Some dislike "proof by
    computer", and don't consider the four-colour theorem to be a proven
    theorem yet.

    Over time proofs drift towards being an axiom (at least in their little
    branch of math--which might not be axiomatic in other branches). Others
    start out proven and drift to the point where they are only proven in
    one or several branches of math.

    Others are "constructivists" - they are not happy with
    merely a proof that some solution must exist, they only consider the hypothesis properly proven when they have a construction for a solution.
    In that sense, a given conjecture may have more "solidity" in one
    /school/ of mathematics than in another.

    that is what I am talking about--it is all a big multidimensional
    spectrum of {proof or conjecture}

    But I don't quite see how a single conjecture could have more "solidity"
    in one /branch/ of mathematics than another. An example or two might
    help.

    A conjecture/proof in ring-sum math may not work at all over the
    real numbers. They are different branches in the space of Math.
    Some proofs only work in Cartesian multi-D spaces and fail in
    manifold spaces.


    An open (neither proved nor disproved) conjecture often collects lots of >>> "observed evidence", either by suggesting some interesting corollaries
    or analogies that are then proved independently, or by surviving
    energetic efforts to find counterexamples to the conjecture. In this
    sense an open conjecture resembles a theory in physics.

    The solution to Fermat's last theorem used a large series of
    then conjectures in order to demonstrate that the solution
    was correct.


    Yes - and then those supporting conjectures were proven and morphed into theorems, with the knock-on effect of making everything higher up a
    proven theorem.

    This is very common in mathematics - you develop conditional proofs
    building on assuming a conjecture is true, and then you (or someone
    else) goes back and proves that conjecture later, or perhaps finds
    another path around that part. For many theorems in mathematics, the complete proof is a /very/ long and winding path.

    The long and winding road .. .

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Thomas Koenig on Thu Oct 3 00:36:53 2024
    On Tue, 1 Oct 2024 15:51:36 -0000 (UTC), Thomas Koenig wrote:

    You can never _prove_ a hypothesis ...

    Can you prove that?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Terje Mathisen on Thu Oct 3 00:34:54 2024
    On Tue, 24 Sep 2024 07:50:36 +0200, Terje Mathisen wrote:

    Somebody wiser than me had written something like "You cannot write/test/debug multithreaded programs without the ability for multiple threads to actually run at the same time."

    Some threading bugs will more likely show up in multiple-CPU situations,
    others will more likely show up in single-CPU situations. You need to test every which way you can.

    The 1990s were a “let’s use threads for everything” time. Thankfully, we have become a bit more restrained since then ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Thu Oct 3 00:38:08 2024
    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Thu Oct 3 01:45:36 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?


    He invented the MRI machine and the Liquid Metallic model of the sun, the
    sun is not a gas as taught in school.

    https://youtube.com/playlist?list=PLnU8XK0C8oTC-slk-xRn91pm05DT4XrVj&si=E7ZdtvUX4FHPkI70

    His YouTube play lists will keep you busy for a few days.

    You will quickly realize that what passes for stellar science is a load of bovine excess.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Thu Oct 3 03:58:12 2024
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...

    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.

    In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Thu Oct 3 10:02:42 2024
    On 02/10/2024 23:45, MitchAlsup1 wrote:
    On Wed, 2 Oct 2024 7:20:47 +0000, David Brown wrote:

    On 01/10/2024 23:09, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 19:07:18 +0000, Niklas Holsti wrote:

    On 2024-10-01 21:20, MitchAlsup1 wrote:
    On Tue, 1 Oct 2024 15:51:36 +0000, Thomas Koenig wrote:


    Mathematics is not a science under this definition, by the way.

    Indeed, Units of forward progress in Math are done with formal
    proofs.


    Yes, in the end, but it is interesting that a lot of the progress in
    mathematics happens through the invention or intuition of
    /conjectures/,
    which may eventually be proven correct and true, or incorrect and
    needing modification.

    Mathematical conjectures have a spectrum of "solidity" often more
    solid in one branch of math than in another.


    I am not entirely sure what you mean by that.

    A conjecture is a hypothesis that you have reasonable justification for
    believing is true, but which is not proven to be true (then it becomes a
    theorem).  Some conjectures have been confirmed empirically to a large
    degree (such as the Riemann hypothesis) which is not proof, but can be
    seen as strengthening the conjecture.  Others, such as the continuum
    hypothesis, not only have no empirical evidence but have been proven to
    be independent of our usual ZF set theory axioms - no evidence either
    way can be found.

    Other conjectures had a century or more between being conjectured with several "things they got right" before finally drifting towards a proof
    or drifting towards disproof. The width of the drift is exactly the
    spectrum I stated.

    Okay, so that was what you meant. Fair enough.

    Many conjectures have not "drifted" significantly one way or the other -
    and some have "drifted" towards an expectation (or even a proof) that
    they are unprovable one way or the other.


    There are also some mathematicians who have a philosophy of viewing some
    kinds of proofs as "better" than others.  Some dislike "proof by
    computer", and don't consider the four-colour theorem to be a proven
    theorem yet.

    Over time proofs drift towards being an axiom (at least in their little
    branch of math--which might not be axiomatic in other branches). Others
    start out proven and drift to the point where they are only proven in one
    or several branches of math.

    That paragraph, on the other hand, makes absolutely no sense to me.

    An "axiom" is something that you take as true without any kind of proof
    - it is how you bootstrap mathematics. "Two sets are equal if and only
    if they have the same elements" is an axiom of ZF set theory.  "Any
    two distinct points can be connected by a unique straight line" is an
    axiom of Euclidian geometry.

    Axioms are what you /start/ with - proven theorems do not become axioms
    over time!

    And if a theorem is proven, then it is proven based on a set of axioms.
    It is not dependent on any particular branch of mathematics. And it
    does not "drift" towards being unproven. It can happen that mistakes
    are found in what was previously thought to be correct proofs, but that
    is rare and there is no "drift". It is also certainly the case that if
    you change the axioms you used to prove something, then it is not (yet)
    proven with the new set of axioms. Some things can be proven if the
    continuum hypothesis is taken as an extra axiom - other things can be
    proven if you take as an axiom that the continuum hypothesis is false.
    But those are not different branches of mathematics, and again there is
    no "drift" here.


                Others are "constructivists" - they are not happy with
    merely a proof that some solution must exist, they only consider the
    hypothesis properly proven when they have a construction for a solution.
      In that sense, a given conjecture may have more "solidity" in one
    /school/ of mathematics than in another.

    that is what I am talking about--it is all a big multidimensional
    spectrum of {proof or conjecture}

    But I don't quite see how a single conjecture could have more "solidity"
    in one /branch/ of mathematics than another.  An example or two might
    help.

    A conjecture/proof in ring-sum math may not work at all in
    Real-Numbers. They are different branches in the space of Math.
    Some proofs only work in Cartesian Multi-D spaces and fail in
    manifold spaces.


    I think you are very confused here.

    If I use ZF set theory axioms to develop Peano arithmetic and prove the
    theorem that for all x, y in N, x.y = y.x (i.e., that multiplication of
    natural numbers is commutative) then that theorem is proven correct from
    those axioms. If I later define matrices and discover that matrix multiplication is non-commutative, that does not disprove my earlier
    theorem or show that it "works in some branches of maths but not others".
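    As an aside, the commutativity theorem mentioned above can be stated and
    checked mechanically; a minimal sketch in Lean 4 (using the standard
    library's `Nat.mul_comm`, purely as an illustration of the point about
    pre-conditions):

```lean
-- Commutativity of natural-number multiplication, as a checked theorem.
example (x y : Nat) : x * y = y * x := Nat.mul_comm x y

-- The pre-condition is explicit: x and y are natural numbers.  Matrix
-- multiplication being non-commutative elsewhere does not touch this
-- theorem, because matrices do not satisfy its pre-condition.
```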

    Mathematical conjectures and their proofs (if one exists) have two parts
    - a set of pre-conditions and a result. The pre-conditions generally
    contain an implied part (a standard set of axioms) and an explicit part
    (such as "for all x in R, x > 0" or whatever). The result is the bit
    that we conjecture is true, or have proven to be true, from the
    pre-conditions. The conjecture or theorem says absolutely nothing about
    any other situation than when the pre-conditions hold. It is not the
    case that the theorem fails in other situations - it simply has no
    relevance and it makes no sense to ask if it is true or not if the pre-conditions are not met. It may be that you could formulate a
    similar conjecture with other pre-conditions that apply elsewhere, and
    that this new conjecture may be true or false, but it is a /new/ conjecture.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Thu Oct 3 10:23:12 2024
    On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...

    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.

    In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.

    Indeed.

    Also note that the two guys who won the Nobel Prize for the development
    of MRI - the /real/ inventors of the MRI machine - are both long dead.

    But this particular crank is mad enough and influential enough to have a
    page on Rational Wiki, which is never a good sign. (It seems he did
    work on improving MRI technology before he went bananas.)

    <https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jseigh@21:1/5 to Lawrence D'Oliveiro on Thu Oct 3 10:25:26 2024
    On 10/2/24 20:34, Lawrence D'Oliveiro wrote:
    On Tue, 24 Sep 2024 07:50:36 +0200, Terje Mathisen wrote:

    Somebody wiser than me had written something like "You cannot
    write/test/debug multithreaded programs without the ability for multiple
    threads to actually run at the same time."

    Some threading bugs will more likely show up in multiple-CPU situations, others will more likely show up in single-CPU situations. You need to test every which way you can.

    The 1990s were a “let’s use threads for everything” time. Thankfully, we
    have become a bit more restrained since then ...

    Some things you can't test for.  For example, hazard pointers without
    the store/load memory barrier.  You need an asymmetric memory barrier
    for the memory reclamation thread to use.  But the timing window in
    which a race condition can happen, if you didn't use that asymmetric
    memory barrier, is too small for anything bad to actually happen.  You
    might think that preemption in the middle of a hazard pointer load
    would widen the timing window, but guess what, the interrupt that
    caused the preemption effected a memory barrier anyway.  I did get
    around to showing formally that a false negative - one that might make
    you think it was safe to reclaim the memory when it was in fact not
    safe - cannot occur.
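    To make the store/load ordering concrete, here is a minimal single-slot
    sketch in C11 atomics (the names and structure are mine, not Joe's
    actual code); the seq_cst fence between publishing the hazard pointer
    and re-reading the shared pointer is the barrier being discussed:

```c
#include <stdatomic.h>
#include <stddef.h>

/* One hazard-pointer slot for one reader thread (illustrative only). */
_Atomic(void *) hazard_slot = NULL;
_Atomic(void *) shared_ptr  = NULL;

/* Publish a hazard pointer for the node we are about to dereference.
   Without the seq_cst fence, the hazard store could be ordered after
   the re-read of shared_ptr, opening the tiny race window described
   above: the reclaimer might scan hazard_slot before our store lands. */
void *hazard_acquire(void)
{
    void *p;
    do {
        p = atomic_load_explicit(&shared_ptr, memory_order_acquire);
        atomic_store_explicit(&hazard_slot, p, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst); /* store/load barrier */
        /* Re-check: if shared_ptr changed meanwhile, the reclaimer may
           have missed our hazard; retry with the new value. */
    } while (atomic_load_explicit(&shared_ptr, memory_order_acquire) != p);
    return p;
}

/* Clear the slot once we are done with the node. */
void hazard_release(void)
{
    atomic_store_explicit(&hazard_slot, NULL, memory_order_release);
}
```

    An asymmetric scheme moves the cost to the reclaimer side (e.g. via an
    OS-level heavy barrier that interrupts all CPUs), letting the reader
    replace the per-access fence with a mere compiler barrier - which is
    exactly why the missing fence is so hard to catch by testing.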

    Joe Seigh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Thu Oct 3 19:10:36 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...

    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.
    Read any book on Vitamin C and you will find that Linus Pauling was right.
    Only the cancer industrial complex complains about Linus Pauling.

    We now know that Cancer correlates with high omega 6 vegetable oil use, countries with high processed fake food intake have four times the cancer rates.

    https://www.datapandas.org/ranking/cancer-rates-by-country

    Vegetable oil and IQ:

    https://youtu.be/Kb-VNW_WaVU?si=K1PRkjFQoTEdt6_S

    Have fun learning.

    In science, we don’t go by “this guy has a legendary reputation and/or sounds like a credible witness, let’s believe him”, we go by evidence.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to David Brown on Thu Oct 3 19:10:38 2024
    David Brown <david.brown@hesbynett.no> wrote:
    On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...

    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.

    In science, we don’t go by “this guy has a legendary reputation and/or
    sounds like a credible witness, let’s believe him”, we go by evidence.

    Indeed.

    Also note that the two guys who won the Nobel Prize for the development
    of MRI - the /real/ inventors of the MRI machine - are both long dead.

    But this particular crank is mad enough and influential enough to have a
    page on Rational Wiki, which is never a good sign. (It seems he did
    work on improving MRI technology before he went bananas.)

    <https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>

    One day I will be on rational wiki. ;)

    Watch his videos and try to debunk what he says.

    Good luck with that. ;)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Michael S on Fri Sep 27 18:43:35 2024
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.

    “A pulsar (from pulsating radio source)[1][2] is a highly magnetized
    rotating neutron star that emits beams of electromagnetic radiation out of
    its magnetic poles.[3] This radiation can be observed only when a beam of emission is pointing toward Earth (similar to the way a lighthouse can be
    seen only when the light is pointed in the direction of an observer), and
    is responsible for the pulsed appearance of emission. “

    This sounds like an electric motor, and if you think a galactic
    civilization would not turn such into a gas station, I have news for you.
    You can take advantage of the huge gravity to feed it oil barrel
    projectiles full of liquid hydrogen to feed off of the explosions for more power generation and keep the generator alive. The resulting spectrum would
    be artificial, but we lack the theory to understand that.

    A Dyson sphere compared to a pulsar looks like a comparison of a desk fan
    to a modern wind mill.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Brett on Fri Sep 27 23:00:43 2024
    On 2024-09-27 21:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.


    Which pulsars are spinning too fast? Reference please!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sat Sep 28 02:30:23 2024
    On Sun, 22 Sep 2024 16:42:58 -0000 (UTC), Brett wrote:

    Now go find the other missing billion rings Einstein predicted.

    Where did he predict that?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Sat Sep 28 02:44:44 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:42:58 -0000 (UTC), Brett wrote:

    Now go find the other missing billion rings Einstein predicted.

    Where did he predict that?

    All galaxies that have another galaxy behind at a reasonable range should
    show Einstein rings.

    Billions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Niklas Holsti on Sat Sep 28 02:47:01 2024
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-27 21:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.


    Which pulsars are spinning too fast? Reference please!


    https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.


    Spinning at 42,960 revolutions per minute.

    Took seconds for google to answer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Brett on Sat Sep 28 10:07:42 2024
    On 2024-09-28 5:47, Brett wrote:
    Niklas Holsti <niklas.holsti@tidorum.invalid> wrote:
    On 2024-09-27 21:43, Brett wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 24 Sep 2024 23:55:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 24 Sep 2024 20:21:53 -0000 (UTC), Brett wrote:

    You hear physicists talk of microscopic black holes, but the force
    that keeps atoms apart is so much more powerful than gravity that
    such talk is just fools playing with math they don’t understand.

    That would mean that neutron stars (all the atoms crushed so tightly
    together that individual subatomic particles lose their identity)
    couldn’t exist either. But they do.

    Radio pulsars exist.
    The theory is that they are neutron stars. But theory can be wrong.

    Some of the pulsars are spinning at such a rate that they would fly apart,
    so we know the theory is wrong.


    Which pulsars are spinning too fast? Reference please!


    https://en.wikipedia.org/wiki/PSR_J1748%E2%88%922446ad#:~:text=PSR%20J1748%E2%88%922446ad%20is%20the,was%20discovered%20by%20Jason%20W.%20T.


    Spinning at 42,960 revolutions per minute.


    The article says it is "the fastest-spinning pulsar known", but does not
    say that it is spinning faster than neutron-star theories allow, so it
    does not support your claim.


    Took seconds for google to answer.


    It is the wrong answer, at least for your claim.
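    A rough sanity check supports this.  Taking the 716 rev/s spin from the
    cited article, and commonly quoted neutron-star parameters (the 1.4
    solar masses and 12 km radius below are my assumed round numbers, not
    figures from the article), Newtonian surface gravity still exceeds the
    equatorial centripetal acceleration by roughly a factor of five, so the
    star is nowhere near flying apart:

```c
#include <stdio.h>

/* Newtonian surface gravity G*M/r^2, in m/s^2. */
double surface_gravity(double mass_kg, double radius_m)
{
    const double G = 6.674e-11; /* gravitational constant */
    return G * mass_kg / (radius_m * radius_m);
}

/* Equatorial centripetal acceleration omega^2 * r, in m/s^2. */
double centripetal_accel(double freq_hz, double radius_m)
{
    const double PI = 3.14159265358979323846;
    double omega = 2.0 * PI * freq_hz;
    return omega * omega * radius_m;
}

/* With M = 1.4 * 1.989e30 kg, r = 12e3 m, f = 716 Hz:
   gravity is ~1.3e12 m/s^2 versus ~2.4e11 m/s^2 centripetal,
   i.e. gravity wins by about 5x even for the fastest known pulsar. */
```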

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Niklas Holsti on Sat Sep 28 07:34:44 2024
    Niklas Holsti <niklas.holsti@tidorum.invalid> schrieb:

    (Yes, I have seen a Youtube video from a Flat Earth fanatic making that argument :-( )

    The Flat Earth Society has members all around the globe.

    (Yes, this was on their web site once :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Brett on Sat Sep 28 10:28:00 2024
    On 2024-09-28 5:44, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 22 Sep 2024 16:42:58 -0000 (UTC), Brett wrote:

    Now go find the other missing billion rings Einstein predicted.

    Where did he predict that?

    All galaxies that have another galaxy behind at a reasonable range should show Einstein rings.

    Billions.


    Whether such a ring is detected by our telescopes depends on the
    closeness of the alignment of the galaxies with the line of sight from
    us and on their brightness, size, and structure. It also depends on the properties of the telescopes that have looked at these galaxies and on
    how astronomers have analysed the images from those telescopes.

    Can you show a calculation of how many rings, of some defined quality (completeness, shape, signal-to-noise level) should have been seen and
    detected in all astronomical observations to date?

    Your arguments are like those of a Flat Earth fanatic who observes that
    the horizon looks like a straight line, even when compared to a
    meter-long ruler, and who then thinks this proves that the Earth is
    flat, but who does not calculate whether that "looks straight" test can
    detect the small curvature of the horizon as seen from eye height above
    a globe with a radius of over 6300 kilometers.

    (Yes, I have seen a Youtube video from a Flat Earth fanatic making that argument :-( )

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Fri Oct 4 11:10:32 2024
    On 03/10/2024 21:10, Brett wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...
    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.

    In science, we don’t go by “this guy has a legendary reputation and/or
    sounds like a credible witness, let’s believe him”, we go by evidence.
    Indeed.

    Also note that the two guys who won the Nobel Prize for the development
    of MRI - the /real/ inventors of the MRI machine - are both long dead.

    But this particular crank is mad enough and influential enough to have a
    page on Rational Wiki, which is never a good sign. (It seems he did
    work on improving MRI technology before he went bananas.)

    <https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>

    One day I will be on rational wiki. ;)

    Watch his videos and try to debunk what he says.

    Good luck with that. ;)


    There are more productive uses of my time which won't rot my brain as
    quickly, such as watching the grass grow.

    A big challenge with the kind of shite that people like this produce is
    that it is often unfalsifiable. They invoke magic, much like religions
    do, and then any kind of disproof or debunking is washed away by magic.
    When you make up some nonsense that has no basis in reality or no
    evidence, you can just keep adding more nonsense no matter what anyone
    else says.

    So when nutjobs like that guy tell you the sun is powered by pixies
    riding tricycles really fast, he can easily invent more rubbish to
    explain away any evidence.

    There's a term for this - what these cranks churn out is "not even
    wrong". (You can look that up on Rational Wiki too.)

    And while the claims of this kind of conspiracy theory cannot be
    falsified, there is also no evidence for them. Claims made without
    evidence can be dismissed without evidence - there is no need to debunk
    them. The correct reaction is to laugh if they are funny, then move on
    and forget them.

    We are all human, and sometimes we get fooled by an idea that sounds
    right. But you should be embarrassed at believing such a wide range of
    idiocy and then promoting it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to David Brown on Fri Oct 4 17:59:03 2024
    David Brown <david.brown@hesbynett.no> wrote:
    On 03/10/2024 21:10, Brett wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...
    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.

    In science, we don’t go by “this guy has a legendary reputation and/or
    sounds like a credible witness, let’s believe him”, we go by evidence.
    Indeed.

    Also note that the two guys who won the Nobel Prize for the development
    of MRI - the /real/ inventors of the MRI machine - are both long dead.

    But this particular crank is mad enough and influential enough to have a >>> page on Rational Wiki, which is never a good sign. (It seems he did
    work on improving MRI technology before he went bananas.)

    <https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>

    One day I will be on rational wiki. ;)

    Watch his videos and try to debunk what he says.

    Good luck with that. ;)


    There are more productive uses of my time which won't rot my brain as quickly, such as watching the grass grow.

    A big challenge with the kind of shite that people like this produce is
    that it is often unfalsifiable. They invoke magic, much like religions
    do, and then any kind of disproof or debunking is washed away by magic.
    When you make up some nonsense that has no basis in reality or no
    evidence, you can just keep adding more nonsense no matter what anyone
    else says.

    So when nutjobs like that guy tell you the sun is powered by pixies
    riding tricycles really fast, he can easily invent more rubbish to
    explain away any evidence.

    There's a term for this - what these cranks churn out is "not even
    wrong". (You can look that up on Rational Wiki too.)

    And while the claims of this kind of conspiracy theory cannot be
    falsified, there is also no evidence for them. Claims made without
    evidence can be dismissed without evidence - there is no need to debunk
    them. The correct reaction is to laugh if they are funny, then move on
    and forget them.

    We are all human, and sometimes we get fooled by an idea that sounds
    right. But you should be embarrassed at believing such a wide range of idiocy and then promoting it.


    A gas cannot emit the spectrum we see from the sun, liquid metallic
    hydrogen can.

    Gases do not show the pond ripples from impacts that we see from the sun surface.

    And a long list of other basic facts Pierre-Marie_Robitaille goes over in
    his Sky Scholar videos.

    Stellar science is a bad joke, such basic mistakes should have been
    corrected 100 years ago.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Brett on Sat Oct 5 11:08:31 2024
    On 04/10/2024 19:59, Brett wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 03/10/2024 21:10, Brett wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 03/10/2024 05:58, Lawrence D'Oliveiro wrote:
    On Thu, 3 Oct 2024 01:45:36 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 1 Oct 2024 23:33:57 -0000 (UTC), Brett wrote:

    Sky Scholar just posted his latest mockery of modern physics:

    Is this a particularly believable and/or coherent mockery?

    He invented the MRI machine and the Liquid Metallic model of the sun ...
    And Linus Pauling got the Nobel Prize and went nuts over Vitamin C.

    In science, we don’t go by “this guy has a legendary reputation and/or
    sounds like a credible witness, let’s believe him”, we go by evidence.

    Indeed.

    Also note that the two guys who won the Nobel Prize for the development
    of MRI - the /real/ inventors of the MRI machine - are both long dead.

    But this particular crank is mad enough and influential enough to have a
    page on Rational Wiki, which is never a good sign.  (It seems he did
    work on improving MRI technology before he went bananas.)

    <https://rationalwiki.org/wiki/Pierre-Marie_Robitaille>

    One day I will be on rational wiki. ;)

    Watch his videos and try to debunk what he says.

    Good luck with that. ;)


    There are more productive uses of my time which won't rot my brain as
    quickly, such as watching the grass grow.

    A big challenge with the kind of shite that people like this produce is
    that it is often unfalsifiable. They invoke magic, much like religions
    do, and then any kind of disproof or debunking is washed away by magic.
    When you make up some nonsense that has no basis in reality or no
    evidence, you can just keep adding more nonsense no matter what anyone
    else says.

    So when nutjobs like that guy tell you the sun is powered by pixies
    riding tricycles really fast, he can easily invent more rubbish to
    explain away any evidence.

    There's a term for this - what these cranks churn out is "not even
    wrong". (You can look that up on Rational Wiki too.)

    And while the claims of this kind of conspiracy theory cannot be
    falsified, there is also no evidence for them. Claims made without
    evidence can be dismissed without evidence - there is no need to debunk
    them. The correct reaction is to laugh if they are funny, then move on
    and forget them.

    We are all human, and sometimes we get fooled by an idea that sounds
    right. But you should be embarrassed at believing such a wide range of
    idiocy and then promoting it.


    A gas cannot emit the spectrum we see from the sun, liquid metallic
    hydrogen can.


    You do realise that the sun is primarily plasma, rather than gas? And
    that scientists - /real/ scientists - can heat up gases until they are
    plasma and look at the spectrum, in actual experiments in labs? Has
    your hero tested a ball of liquid metallic hydrogen in his lab?

    Gases do not show the pond ripples from impacts that we see from the sun surface.

    And a long list of other basic facts Pierre-Marie_Robitaille goes over in
    his Sky Scholar videos.

    Stellar science is a bad joke, such basic mistakes should have been
    corrected 100 years ago.


    You think one crackpot with no relevant education and no resources can
    figure all this out in a couple of years, where tens of thousands of
    scientists have failed over a hundred years? Do you /really/ think that
    is more likely than supposing that he doesn't understand what he is
    talking about?

    In real science, lab experiments, observation of reality (such as the
    sun in this case), simulations, models, and hypotheses all go hand in
    hand in collaboration between many scientists and experts in different
    fields in order to push scientific knowledge further.

    "Maverick" genius scientists who figure out the "real" answer on their
    own don't exist outside the entertainment industry.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to David Brown on Sat Oct 5 17:49:35 2024
    David Brown <david.brown@hesbynett.no> wrote:
    [...]
    "Maverick" genius scientists who figure out the "real" answer on their
    own don't exist outside the entertainment industry.
    So science ended 100 years ago, and we should close our eyes and ears and
    say nothing that would counter our sacred flawless scientists of old.

    Stop being a religious zealot and watch the videos.

    If he is a crackpot you should be bright enough to figure it out and prove
    it for the world to see. Crackpots cannot survive scientific rigor; a
    five-minute search crushes such fools with ease. I have done this a dozen times.

  • From Brett@21:1/5 to Brett on Sat Oct 5 18:24:07 2024
    Brett <ggtgp@yahoo.com> wrote:
    [...]
    If he is a crackpot you should be bright enough to figure it out and prove
    it for the world to see. Crackpots cannot survive scientific rigor. A five
    minute search crushes such fools with ease, I have done this a dozen times.


    Here is what Sabine Hossenfelder thinks of modern physics, and she makes
    money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt

    The comments are funny. ;)

    My translation is that modern physics is a bullshit engine of unprovable gibberish like string theory.

  • From Lawrence D'Oliveiro@21:1/5 to Brett on Sun Oct 6 01:12:04 2024
    On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:

    A gas cannot emit the spectrum we see from the sun, liquid metallic
    hydrogen can.

    The spectrum of the Sun is primarily the continuous emissive one of a
    “black body” at a surface temperature of 6500K or thereabouts.
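    [As an aside, not part of the original post: the peak of that continuous
    blackbody spectrum follows directly from the temperature via Wien's
    displacement law. A minimal sketch, using the ~6500K figure quoted above:]

    ```python
    # Wien's displacement law: a blackbody's emission peaks at a wavelength
    # inversely proportional to its temperature.
    WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

    def peak_wavelength_nm(temp_k: float) -> float:
        """Peak emission wavelength in nanometres for a blackbody at temp_k."""
        return WIEN_B / temp_k * 1e9

    # At the ~6500 K quoted above, the peak lands in the blue-green part of
    # the visible band, consistent with the Sun's broadly visible output.
    print(round(peak_wavelength_nm(6500)))  # 446 (nm)
    ```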

    Superimposed on that are absorption lines corresponding to a range of
    elements, representing cooler substances in the surrounding “photosphere”, I think it’s called.

    Which of these lines do you think is characteristic of this mythical
    “liquid metallic hydrogen” of yours?

    Fun fact: originally it was thought that those lines in the spectra of the
    Sun and other stars were characteristic of the entire makeup of the bodies concerned. In other words, they were full of elements much like those that
    make up the Earth and other planetary bodies.

    A young doctoral student named Cecilia Payne, after some careful study,
    came to the remarkable conclusion that stars were mostly hydrogen and
    helium, and these spectral lines were due, in effect, to relatively small amounts of contaminants in among that bulk of hydrogen and helium.

    Gases do not show the pond ripples from impacts that we see from the sun surface.

    What “impacts on the sun surface”?

  • From Niklas Holsti@21:1/5 to Brett on Sun Oct 6 10:41:10 2024
    On 2024-10-05 21:24, Brett wrote:
    [...]
    Here is what Sabine Hossenfelder thinks of modern physics, and she
    makes money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt

    The issues Hossenfelder discusses in that video are at the rugged
    frontiers of theoretical physics: whether or not Loop Quantum Gravity
    predicts that the speed of light depends on the frequency of the light,
    and whether or not it makes sense to work on mathematical models of
    reality, like string theory, that so far do not make testable predictions.

    It is natural that there are disagreements and even quarrels among
    physicists in such areas. The current methods for funding physics
    research may be contributing to such problems. And it is frustrating
    that no major, easily explainable advances have been made for quite a while.

    But I strongly doubt that Hossenfelder thinks the Sun consists of liquid metallic hydrogen.


    The comments are funny. ;)


    After a quick sampling it seems most comments are just praising
    Hossenfelder's aggressive style and ranting in this video, not talking
    about the physics.

    I wonder if Hossenfelder is in danger of becoming the DJ Trump of
    physics, perhaps soon calling for draining the academic swamp, and
    attracting a following of similarly disappointed and frustrated seekers
    for simple solutions.


    My translation is that modern physics is a bullshit engine of unprovable gibberish like string theory.


    Don't equate "modern physics" like string theory with all of physics.

    And YouTube videos making money does not mean that the videos present
    the truth... counter-examples are legion.

  • From David Brown@21:1/5 to Brett on Sun Oct 6 12:07:16 2024
    On 05/10/2024 19:49, Brett wrote:
    [...]
    So science ended 100 years ago, and we should close our eyes and ears and
    say nothing that would counter our sacred flawless scientists of old.

    No, that's not /remotely/ what I wrote. New science builds on previous
    science - mostly refining it, and only very occasionally throwing out
    old stuff entirely or coming up with something entirely new.


    Stop being a religious zealot and watch the videos.


    I have read enough about the people behind them - there is no need to
    waste time watching them.

    If he is a crackpot you should be bright enough to figure it out and prove
    it for the world to see. Crackpots cannot survive scientific rigor. A five minute search crushes such fools with ease, I have done this a dozen times.

    Again, did you read anything that I wrote?

  • From David Brown@21:1/5 to Brett on Sun Oct 6 12:47:08 2024
    On 05/10/2024 20:24, Brett wrote:
    [...]
    Here is what Sabine Hossenfelder thinks of modern physics, and she makes money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt


    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before. Her points here are not new or contentious - there
    is quite a lot of support in scientific communities for her argument. We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably minimal. Basically, we cannot build big enough experiments to provide corroborating or falsifying evidence for current hypothetical models
    that could explain quantum mechanics (known to be an extraordinarily
    good model on small scales) and relativity (known to work well on large
    scales, and with many aspects confirmed in laboratory experiments). If
    gravity works like a quantum field mediated by a "graviton" boson, we'd
    need a particle accelerator the size of the orbit of Jupiter to find it.
    If we want to use a particle accelerator to look for evidence of
    string theory (/not/ a scientific theory, despite the name), the size
    would be commensurate with the Milky Way.
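    [An aside, my own back-of-envelope rather than anything from the post: the
    scale argument comes from the fact that a circular collider's bending radius
    grows linearly with beam energy for a fixed magnet strength. A rough sketch,
    assuming LHC-strength dipoles and taking the Planck energy as the scale where
    quantum-gravity effects would show up:]

    ```python
    # For an ultrarelativistic charged particle, the bending radius of a
    # circular accelerator is roughly r [m] = p [GeV/c] / (0.3 * B [T]).
    LHC_DIPOLE_T = 8.33          # LHC dipole field strength, tesla
    PLANCK_ENERGY_GEV = 1.22e19  # Planck energy in GeV
    LIGHT_YEAR_M = 9.46e15       # metres per light-year

    def bending_radius_m(energy_gev: float, field_t: float = LHC_DIPOLE_T) -> float:
        """Approximate bending radius in metres for the given beam energy."""
        return energy_gev / (0.3 * field_t)

    # Sanity check: at the LHC's 7000 GeV this gives ~2800 m, close to the
    # real machine's bending radius. At the Planck energy, the same magnets
    # would need a ring hundreds of light-years across.
    print(f"{bending_radius_m(PLANCK_ENERGY_GEV) / LIGHT_YEAR_M:.0f} light-years")
    ```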

    Then there is the limit to the human mind. This stuff requires such a
    depth of knowledge and study that by the time anyone has learned enough
    of the current ideas and existing data and evidence to be able to push
    the boundaries, they are already well past their creative prime.

    Does that mean we (meaning the scientific community) should stop trying?
    No, of course not. But we should try to stop going round in circles.
    The emphasis should be on finding ways to split the problems up, so that
    more people can work on simpler parts, and perhaps making more use of AI
    to handle the details. There should also, IMHO, be more focus on
    testability of ideas, and less on mathematical philosophy.

    It also means that new sources of experimental data are important -
    that's why projects like the James Webb telescope are so vital.

    Does it mean that /all/ of physics research is going nowhere? No, of
    course not - it is only in a few certain areas that have somewhat
    stagnated, and where some of the ideas stretch credulity. Cosmology may
    be suffering, but studies of the nanoscopic world, materials, unusual
    matter phases, and countless other branches of physics are all
    progressing solidly.

    And does it mean that we can replace existing accurate (but not quite
    perfect) models with complete bollocks that doesn't fit the evidence and
    has no justification from reality? For most people, that would be a
    rhetorical question - but for your benefit, the answer is no.


    The comments are funny. ;)

    My translation is that modern physics is a bullshit engine of unprovable gibberish like string theory.


    Your translation is wrong.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brett@21:1/5 to Lawrence D'Oliveiro on Sun Oct 6 22:08:40 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:

    A gas cannot emit the spectrum we see from the sun, liquid metallic
    hydrogen can.

    The spectrum of the Sun is primarily the continuous emissive one of a “black body” at a surface temperature of 6500K or thereabouts.

    Superimposed on that are absorption lines corresponding to a range of elements, representing cooler substances in the surrounding “photosphere”,
    I think it’s called.
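For reference, the continuous component described here is Planck black-body radiation (the Sun's effective photospheric temperature is close to 5800 K):

```latex
% Spectral radiance of a black body at wavelength \lambda, temperature T:
B_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5}
    \cdot \frac{1}{\exp\!\left(\frac{h c}{\lambda k_B T}\right) - 1}
```

The Fraunhofer absorption lines are narrow dips superimposed on this smooth curve.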

    Which of these lines do you think is characteristic of this mythical “liquid metallic hydrogen” of yours?

    Fun fact: originally it was thought that those lines in the spectra of the Sun and other stars were characteristic of the entire makeup of the bodies concerned. In other words, they were full of elements much like those that make up the Earth and other planetary bodies.

    A young doctorate student named Cecilia Payne, after some careful study,
    came to the remarkable conclusion that stars were mostly hydrogen and
    helium, and these spectral lines were due, in effect, to relatively small amounts of contaminants in among that bulk of hydrogen and helium.

    Gases do not show the pond ripples from impacts that we see from the sun
    surface.

    What “impacts on the sun surface”?

    Watch the first few minutes of the first video in the playlist to see a
    solar eruption and some of that mass crashing back down on the sun surface, causing pond ripples. The idea of a plasma gas sun dies right there.

    https://youtube.com/playlist?list=PLnU8XK0C8oTC-slk-xRn91pm05DT4XrVj&si=E7ZdtvUX4FHPkI70

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Sun Oct 6 23:29:30 2024
    On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:

    On 05/10/2024 20:24, Brett wrote:
    Brett <ggtgp@yahoo.com> wrote:

    Here is what Sabine Hossenfelder thinks of modern physics, and she makes
    money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt


    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before. Her points here are not new or contentious - there
    is quite a support in scientific communities for her argument here. We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably minimal. Basically, we cannot build big enough experiments to provide corroborating or falsifying evidence for current hypothetical models

    Based on the success of Webb--we can, we just don't have access to
    enough money to allow for building and shipping such a device up into
    space. Optics-check, structure-check, rocket-check, where to put it-
    check, telemetry and command-check.

    that could explain quantum mechanics (known to be an extraordinarily
    good model on small scales) and relativity (known to work well on large scales, and with many aspects confirmed in laboratory experiments). If gravity works like a quantum field mediated by a "graviton" boson, we'd
    need a particle accelerator the size of the orbit of Jupiter to find it.

    I heard closer to Saturn, but you forgot that it would take 5% of the
    sun's energy to power it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to mitchalsup@aol.com on Mon Oct 7 00:39:15 2024
    mitchalsup@aol.com (MitchAlsup1) writes:
    On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:

    On 05/10/2024 20:24, Brett wrote:
    Brett <ggtgp@yahoo.com> wrote:

    Here is what Sabine Hossenfelder thinks of modern physics, and she makes
    money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt


    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before. Her points here are not new or contentious - there
    is quite a support in scientific communities for her argument here. We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably
    minimal. Basically, we cannot build big enough experiments to provide
    corroborating or falsifying evidence for current hypothetical models

    Based on the success of Webb--we can, we just don't have access to
    enough money to allow for building and shipping such a device up into
    space. Optics-check, structure-check, rocket-check, where to put it-
    check, telemetry and command-check.

    An article in this week's Aviation Week and Space Technology noted
    that the starship will be able to boost a payload that masses
    thirty times the Webb for less cost than the Webb launch.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to Scott Lurndal on Mon Oct 7 01:34:43 2024
    On Mon, 7 Oct 2024 0:39:15 +0000, Scott Lurndal wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:

    On 05/10/2024 20:24, Brett wrote:
    Brett <ggtgp@yahoo.com> wrote:

    Here is what Sabine Hossenfelder thinks of modern physics, and she makes
    money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt


    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before. Her points here are not new or contentious - there
    is quite a support in scientific communities for her argument here. We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably
    minimal. Basically, we cannot build big enough experiments to provide
    corroborating or falsifying evidence for current hypothetical models

    Based on the success of Webb--we can, we just don't have access to
    enough money to allow for building and shipping such a device up into
    space. Optics-check, structure-check, rocket-check, where to put it-
    check, telemetry and command-check.

    An article in this week's Aviation Week and Space Technology noted
    that the starship will be able to boost a payload that masses
    thirty times the Webb for less cost than the Webb launch.

    I was counting on Starship in the above.
    I was only complaining about the "can't" part.
    Every piece of engineering is go--as long as someone will pay for it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to mitchalsup@aol.com on Sun Oct 6 20:59:49 2024
    mitchalsup@aol.com (MitchAlsup1) writes:


    [a particle accelerator to find quantum gravitons would need to
    be the size of the orbit of Jupiter]

    I heard closer to Saturn, but you forgot that it would take 5% of
    the sun's energy to power it.

    That's great! We could just turn on the big accelerator every so
    often and counteract the effects of global warming.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Niklas Holsti@21:1/5 to Brett on Mon Oct 7 09:29:11 2024
    On 2024-10-07 1:08, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:

    A gas cannot emit the spectrum we see from the sun, liquid metallic
    hydrogen can.

    The spectrum of the Sun is primarily the continuous emissive one of a
    “black body” at a surface temperature of 6500K or thereabouts.

    Superimposed on that are absorption lines corresponding to a range of
    elements, representing cooler substances in the surrounding “photosphere”,
    I think it’s called.

    Which of these lines do you think is characteristic of this mythical
    “liquid metallic hydrogen” of yours?

    Fun fact: originally it was thought that those lines in the spectra of the
    Sun and other stars were characteristic of the entire makeup of the bodies
    concerned. In other words, they were full of elements much like those that
    make up the Earth and other planetary bodies.

    A young doctorate student named Cecilia Payne, after some careful study,
    came to the remarkable conclusion that stars were mostly hydrogen and
    helium, and these spectral lines were due, in effect, to relatively small
    amounts of contaminants in among that bulk of hydrogen and helium.

    Gases do not show the pond ripples from impacts that we see from the sun
    surface.

    What “impacts on the sun surface”?

    Watch the first few minutes of the first video in the playlist to see a
    solar eruption and some of that mass crashing back down on the sun surface, causing pond ripples. The idea of a plasma gas sun dies right there.


    Stratified fluid (non-plasma) atmospheres can support pond-like waves:

    https://en.wikipedia.org/wiki/Gravity_wave#Atmosphere_dynamics_on_Earth.

    Plasma can support /many/ kinds of waves because of the coupling of the
    charged particles to magnetic fields:

    https://en.wikipedia.org/wiki/Waves_in_plasmas

    I don't claim to know what kind of wave was shown in the video of the
    solar eruption -- intuitively I would plump for gravity waves. But I
    don't think liquid metallic hydrogen is needed to explain it.
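For reference, the surface (gravity) waves mentioned above obey a textbook dispersion relation that needs no assumption about the composition of the fluid:

```latex
% Surface gravity waves at an interface, fluid depth h:
\omega^2 = g k \tanh(k h)
% \omega: angular frequency, k: wavenumber, g: gravitational acceleration.
% Deep-water limit (k h \gg 1): \omega^2 \approx g k
```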

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Brett on Mon Oct 7 06:37:40 2024
    On Sun, 6 Oct 2024 22:08:40 -0000 (UTC), Brett wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:

    Gases do not show the pond ripples from impacts that we see from the
    sun surface.

    What “impacts on the sun surface”?

    Watch the first few minutes of the first video in the playlist to see a
    solar eruption and some of that mass crashing back down on the sun
    surface, causing pond ripples.

    Ripples can happen at any interface between fluids of sharply different densities. They happen even in our atmosphere (look for cloud types with
    words like “undulatus” in their names).

    Have you seen those toys you can buy for your home where they put two
    liquids of different densities, which don’t mix, in the same transparent tank? You rock it back and forth, and watch the slow-motion waves form at
    the boundary. Just like your “pond ripples” which you say cannot form.

    Why slow-motion? You figure it out.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Mon Oct 7 11:32:49 2024
    On 07/10/2024 03:34, MitchAlsup1 wrote:
    On Mon, 7 Oct 2024 0:39:15 +0000, Scott Lurndal wrote:

    mitchalsup@aol.com (MitchAlsup1) writes:
    On Sun, 6 Oct 2024 10:47:08 +0000, David Brown wrote:

    On 05/10/2024 20:24, Brett wrote:
    Brett <ggtgp@yahoo.com> wrote:

    Here is what Sabine Hossenfelder thinks of modern physics, and she
    makes
    money promoting physics to people on YouTube.

    https://youtu.be/cBIvSGLkwJY?si=USc2fHsaWTJMSDSt


    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before.  Her points here are not new or contentious - there
    is quite a support in scientific communities for her argument here.  We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably
    minimal.  Basically, we cannot build big enough experiments to provide
    corroborating or falsifying evidence for current hypothetical models

    Based on the success of Webb--we can, we just don't have access to
    enough money to allow for building and shipping such a device up into
    space. Optics-check, structure-check, rocket-check, where to put it-
    check, telemetry and command-check.

    An article in this week's Aviation Week and Space Technology noted
    that the starship will be able to boost a payload that masses
    thirty times the Webb for less cost than the Webb launch.

    I was counting on Starship in the above.
    I was only complaining about the "can't" part.
    Every piece of engineering is go--as long as someone will pay for it.

    No, the engineering is not remotely close to "go" for these things (the ridiculously large particle accelerators), even if there were an
    unlimited supply of money.

    There are, however, many other types of devices and experiments that
    would be useful for physics research which /are/ possible from the
    engineering viewpoint, but lack the funding.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Niklas Holsti on Mon Oct 7 12:45:34 2024
    On 07/10/2024 08:29, Niklas Holsti wrote:
    On 2024-10-07 1:08, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:

    A gas cannot emit the spectrum we see from the sun, liquid metallic
    hydrogen can.

    The spectrum of the Sun is primarily the continuous emissive one of a
    “black body” at a surface temperature of 6500K or thereabouts.

    Superimposed on that are absorption lines corresponding to a range of
    elements, representing cooler substances in the surrounding
    “photosphere”,
    I think it’s called.

    Which of these lines do you think is characteristic of this mythical
    “liquid metallic hydrogen” of yours?

    Fun fact: originally it was thought that those lines in the spectra of the
    Sun and other stars were characteristic of the entire makeup of the bodies
    concerned. In other words, they were full of elements much like those that
    make up the Earth and other planetary bodies.

    A young doctorate student named Cecilia Payne, after some careful study,
    came to the remarkable conclusion that stars were mostly hydrogen and
    helium, and these spectral lines were due, in effect, to relatively small
    amounts of contaminants in among that bulk of hydrogen and helium.

    Gases do not show the pond ripples from impacts that we see from the sun
    surface.

    What “impacts on the sun surface”?

    Watch the first few minutes of the first video in the playlist to see a
    solar eruption and some of that mass crashing back down on the sun surface,
    causing pond ripples. The idea of a plasma gas sun dies right there.


    Stratified fluid (non-plasma) atmospheres can support pond-like waves:

    https://en.wikipedia.org/wiki/Gravity_wave#Atmosphere_dynamics_on_Earth.


    Note to Brett - gravity waves in a fluid are completely different from gravitational waves, such as those generated by black hole collisions
    and detected by LIGO. You probably have some other magic snake oil
    beliefs about those, but don't get them confused with gravity waves in a
    fluid.

    Plasma can support /many/ kinds of waves because of the coupling of the charged particles to magnetic fields:

    https://en.wikipedia.org/wiki/Waves_in_plasmas

    I don't claim to know what kind of wave was shown in the video of the
    solar eruption -- intuitively I would plump for gravity waves. But I
    don't think liquid metallic hydrogen is needed to explain it.


    Without having seen the video, my gut instinct is that a liquid surface
    on the sun could not come close to explaining any easily discernible waves.

    Coronal mass eruptions send most of their mass outwards - little falls
    back, and it is spread over a large area. Since liquid metallic
    hydrogen requires incredibly high pressures and incredibly high
    densities, if it existed on the surface of the sun (and it certainly
    does /not/ exist there), any waves on it would have extremely small
    amplitude.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Monnier@21:1/5 to All on Mon Oct 7 12:59:11 2024
    kinds of proofs as "better" than others. Some dislike "proof by computer", and don't consider the four-colour theorem to be a proven theorem yet.

    "Proof by computer" can mean many different things. The 1976 proof by Appel&Haken failed to convince a number of mathematicians both because
    of the use of a computer and because of the "inelegant", "brute
    force" approach.

    Regarding the use of a computer, it relied on ad-hoc code which used
    brute force to check some large number of subproblems. For some mathematicians, it was basically some opaque piece of code saying "yes",
    with no reason to be confident that the code actually did what the
    authors intended it to do.

    The 2005 proof by Gonthier also used a computer, but the program used
    was a generic proof assistant. Arguably some "opaque brute force" code
    was used as well, but it generated actual evidence of its claims, which
    was then mechanically checked by the proof assistant.

    That leaves a lot less room for arguing that it's not valid.
    I haven't heard anyone express doubts about that proof yet.
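A minimal sketch of that workflow in Lean 4 (the finite claim checked here is arbitrary, chosen only to illustrate "compute the evidence, then let the kernel check it"):

```lean
-- Toy finite claim: for every n below 100, n*(n+1) is even.
-- `decide` evaluates the Boolean check, and the proof kernel verifies
-- the resulting term, rather than trusting ad-hoc checker code.
example : (List.range 100).all (fun n => n * (n + 1) % 2 == 0) = true := by
  decide
```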


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MitchAlsup1@21:1/5 to David Brown on Mon Oct 7 17:14:19 2024
    On Mon, 7 Oct 2024 9:32:49 +0000, David Brown wrote:

    On 07/10/2024 03:34, MitchAlsup1 wrote:

    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before.  Her points here are not new or contentious - there
    is quite a support in scientific communities for her argument here.  We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably
    minimal.  Basically, we cannot build big enough experiments to provide
    corroborating or falsifying evidence for current hypothetical models

    Based on the success of Webb--we can, we just don't have access to
    enough money to allow for building and shipping such a device up into
    space. Optics-check, structure-check, rocket-check, where to put it-
    check, telemetry and command-check.

    An article in this week's Aviation Week and Space Technology noted
    that the starship will be able to boost a payload that masses
    thirty times the Webb for less cost than the Webb launch.

    I was counting on Starship in the above.
    I was only complaining about the "can't" part.
    Every piece of engineering is go--as long as someone will pay for it.

    No, the engineering is not remotely close to "go" for these things (the ridiculously large particle accelerators), even if there were an
    unlimited supply of money.

    We have all the technology we need to build a 2× Webb and to launch
    it into space, or we will by the time it can be built.

    There are, however, many other types of devices and experiments that
    would be useful for physics research which /are/ possible from the engineering viewpoint, but lack the funding.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bill Findlay@21:1/5 to David Brown on Mon Oct 7 19:01:59 2024
    On 7 Oct 2024, David Brown wrote
    (in article <ve0e4f$1m7i1$1@dont-email.me>):

    On 07/10/2024 08:29, Niklas Holsti wrote:
    On 2024-10-07 1:08, Brett wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 4 Oct 2024 17:59:03 -0000 (UTC), Brett wrote:

    A gas cannot emit the spectrum we see from the sun, liquid metallic hydrogen can.

    The spectrum of the Sun is primarily the continuous emissive one of a "black body" at a surface temperature of 6500K or thereabouts.

    Superimposed on that are absorption lines corresponding to a range of elements, representing cooler substances in the surrounding "photosphere",
    I think it's called.

    Which of these lines do you think is characteristic of this mythical "liquid metallic hydrogen" of yours?

    Fun fact: originally it was thought that those lines in the spectra of the
    Sun and other stars were characteristic of the entire makeup of the bodies
    concerned. In other words, they were full of elements much like those that
    make up the Earth and other planetary bodies.

    A young doctorate student named Cecilia Payne, after some careful study,
    came to the remarkable conclusion that stars were mostly hydrogen and helium, and these spectral lines were due, in effect, to relatively small
    amounts of contaminants in among that bulk of hydrogen and helium.

    Gases do not show the pond ripples from impacts that we see from the sun
    surface.

    What "impacts on the sun surface"?

    Watch the first few minutes of the first video in the playlist to see a solar eruption and some of that mass crashing back down on the sun surface,
    causing pond ripples. The idea of a plasma gas sun dies right there.


    Stratified fluid (non-plasma) atmospheres can support pond-like waves:

    https://en.wikipedia.org/wiki/Gravity_wave#Atmosphere_dynamics_on_Earth.

    Note to Brett - gravity waves in a fluid are completely different from gravitational waves, such as those generated by black hole collisions
    and detected by LIGO. You probably have some other magic snake oil
    beliefs about those, but don't get them confused with gravity waves in a fluid.

    Brett and his guru do seem to be ignorant of the existence
    and properties of gravity. How else to explain the guru's
    "bottle of gas" nonsense?

    --
    Bill Findlay

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Tue Oct 8 09:17:26 2024
    On 07/10/2024 19:14, MitchAlsup1 wrote:
    On Mon, 7 Oct 2024 9:32:49 +0000, David Brown wrote:

    On 07/10/2024 03:34, MitchAlsup1 wrote:

    Sabine Hossenfelder is quite a good commentator, and I've seen many of
    her videos before.  Her points here are not new or contentious - there
    is quite a support in scientific communities for her argument here.  We
    have arguably reached a point in the science of cosmology and
    fundamental physics where traditional scientific progress is unavoidably
    minimal.  Basically, we cannot build big enough experiments to provide
    corroborating or falsifying evidence for current hypothetical models
    Based on the success of Webb--we can, we just don't have access to
    enough money to allow for building and shipping such a device up into
    space. Optics-check, structure-check, rocket-check, where to put it-
    check, telemetry and command-check.

    An article in this week's Aviation Week and Space Technology noted
    that the starship will be able to boost a payload that masses
    thirty times the Webb for less cost than the Webb launch.

    I was counting on Starship in the above.
    I was only complaining about the "can't" part.
    Every piece of engineering is go--as long as someone will pay for it.

    No, the engineering is not remotely close to "go" for these things (the
    ridiculously large particle accelerators), even if there were an
    unlimited supply of money.

    We have all the technology we need to build a 2× Webb and to launch
    it into space, or we will by the time it can be built.


    Sure. And given how much new and exciting results we've got from the
    current James Webb (and the Hubble before it), we can look forward to
    getting even more from the next generation of space telescopes that can
    perhaps help push cosmology further and answer big questions such as the
    nature of dark matter.

    But it won't get us any closer to disproving or corroborating string
    theory, loop quantum gravity, gravitons, or any other current
    conjectures for a "theory of everything". It won't even help provide justification for conjectures such as dark energy and inflation, though
    it might provide more data that fits the maths. (That is, it might not disprove these conjectures, but it won't help explaining what they are
    or why they, allegedly, exist - it could be something else entirely that
    gives the same measurable results.)

    For experimental evidence for or against current theories of everything,
    the engineering is as much an issue as the cost. Money is not the only
    hindrance to making a particle accelerator at the orbit of Jupiter (or
    Saturn, if that is what's needed).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Stefan Monnier on Tue Oct 8 09:23:08 2024
    On 07/10/2024 18:59, Stefan Monnier wrote:
    kinds of proofs as "better" than others. Some dislike "proof by computer", and don't consider the four-colour theorem to be a proven theorem yet.

    "Proof by computer" can mean many different things. The 1976 proof by Appel&Haken failed to convince a number of mathematicians both because
    of the use of a computer and because of the "inelegant", "brute
    force" approach.

    Regarding the use of a computer, it relied on ad-hoc code which used
    brute force to check some large number of subproblems. For some mathematicians, it was basically some opaque piece of code saying "yes",
    with no reason to be confident that the code actually did what the
    authors intended it to do.

    Certainly for a "proof by computer" to be acceptable, the software
    involved needs to be considered part of the proof. It needs to be
    something other mathematicians can read through and agree is correct,
    just like any other bit of the mathematical proof. Some programming
    languages are more suitable for that task than others - typically you'll
    want something that can handle arbitrary precision integers, automatic
    garbage collection (so that the code is not cluttered with stuff that is irrelevant to the real task), and probably a functional programming
    language or style (which is more mathematical in outlook, and easier to
    prove).

    And just like you want the "hand-written" maths to be checked by
    multiple mathematicians, computer-based proofs should be confirmed on
    different hardware (so your proof doesn't rely on the Pentium FDIV bug
    or similar), and ideally with the same algorithm re-implemented in more
    than one programming language. The more redundancy you can get there,
    the more confidence you can have in the results.


    The 2005 proof by Gonthier also used a computer, but the program used
    was a generic proof assistant. Arguably some "opaque brute force" code
    was used as well, but it generated actual evidence of its claims, which
    was then mechanically checked by the proof assistant.

    That leaves a lot less room for arguing that it's not valid.
    I haven't heard anyone express doubts about that proof yet.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Stefan Monnier on Sat Oct 12 07:48:31 2024
    On Mon, 07 Oct 2024 12:59:11 -0400, Stefan Monnier wrote:

    "Proof by computer" can mean many different things. The 1976 proof by Appel&Haken failed to convince a number of mathematicians both because
    of the use of a computer and because of the "inelegant", "brute force" approach.

    It does rather change the notion of mathematical proof to something more
    akin to a laboratory experiment, except the laboratory only exists inside
    a computer, don’t you think?

    In other words, mathematics looks like it is turning into an actual
    science.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Michael S on Sat Oct 12 08:27:37 2024
    Michael S <already5chosen@yahoo.com> writes:
    Even if 99% is correct, there were still 6-7 figures worth of
    dual-processor x86 systems sold each year and starting from 1997 at
    least tens of thousands of quads.
    Absence of ordering definitions should have been a problem for a lot of people. But somehow, it was not.

    I remember Andy Glew posting here about the strong ordering that Intel
    had at the time, and that it leads to superior performance compared to
    weak ordering.

    mitchalsup@aol.com (MitchAlsup1) wrote:
    Also note: this was just after the execution pipeline went
    Great Big Out of Order, and thus made the lack of order
    problems much more visible to applications. {Pentium Pro}

    Nonsense. Stores are published in architectural order, and loads have
    to be architecturally ordered wrt local stores already in a
    single-core system. And once you have that, why should the ordering
    wrt. remote stores be any worse than on an in-order machine?

    Note that the weak ordering advocacy (such as [adve&gharachorloo95])
    arose in companies with (at the time) only in-order CPUs.

    Actually OoO technology offers a way to make the ordering strong
    without having to pay for barriers and somesuch; we may not yet have
    enough buffers for implementing sequential consistency efficiently,
    though, but maybe if we ask for sequential consistency, hardware
    designers will find a way to provide enough buffers for that.

    @TechReport{adve&gharachorloo95,
    author = {Sarita V. Adve and Kourosh Gharachorloo},
    title = {Shared Memory Consistency Models: A Tutorial},
    institution = {Digital Western Research Lab},
    year = {1995},
    type = {WRL Research Report},
    number = {95/7},
    annote = {Gives an overview of architectural features of
    shared-memory computers such as independent memory
    banks and per-CPU caches, and how they make the (for
    programmers) most natural consistency model hard to
    implement, giving examples of programs that can fail
    with weaker consistency models. It then discusses
    several categories of weaker consistency models and
    actual consistency models in these categories, and
    which ``safety net'' (e.g., memory barrier
    instructions) programmers need to use to work around
    the deficiencies of these models. While the authors
    recognize that programmers find it difficult to use
    these safety nets correctly and efficiently, it
    still advocates weaker consistency models, claiming
    that sequential consistency is too inefficient, by
    outlining an inefficient implementation (which is of
    course no proof that no efficient implementation
    exists). Still the paper is a good introduction to
    the issues involved.}
    }

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)