• Re: fun with AI. rofl...

    From MitchAlsup1@21:1/5 to Chris M. Thomasson on Mon May 26 20:39:57 2025
    On Mon, 26 May 2025 20:09:29 +0000, Chris M. Thomasson wrote:

    On 5/25/2025 10:14 PM, Chris M. Thomasson wrote:
    This is from a little conversation I had with Gemini about a fractal
    HDD arm with many heads, where the cache size per head depends on
    how far down the fractal arm it sits. It evolved into this shit:
    ROFL. Might make for an interesting laugh? ;^)

    ____________________

    My ramble, just having fun:[...]

    Pipe dreams... Why chop up a wafer into individual chips? Use it as
    a whole. The entire wafer could spin like a HDD and could have
    etched CPUs, GPUs, DRAM, SRAM, etc... lol! ;^)

    Early on, only 20% of the chips are viable.
    Later, once the bleeding edge has moved on, up to 90% of the chips are
    viable. They are never all viable.

    See the failure of Gene Amdahl's Trilogy wafer-scale integration
    effort.
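
    A first-order way to see why they are never all viable is the
    classic Poisson defect-yield model, Y = exp(-D0 * A): yield falls
    exponentially with die area, so a small die improves a lot as the
    process matures, while a whole-wafer "die" essentially never comes
    out defect-free. A quick sketch in C (the defect densities and
    areas below are illustrative assumptions, not foundry data):

        #include <math.h>
        #include <stdio.h>

        /* Poisson defect-yield model: Y = exp(-D0 * A), with D0 the
           defect density (defects/mm^2) and A the die area (mm^2).
           All numbers below are illustrative assumptions. */
        static double die_yield(double d0, double area_mm2)
        {
            return exp(-d0 * area_mm2);
        }

        int main(void)
        {
            double early  = 0.02;    /* immature process: 1 defect / 50 mm^2  */
            double mature = 0.002;   /* mature process: 1 defect / 500 mm^2   */
            double die    = 80.0;    /* ordinary consumer die, mm^2           */
            double wafer  = 70000.0; /* usable area of a 300 mm wafer, mm^2   */

            printf("80 mm^2 die, early process:  %4.1f%%\n",
                   100 * die_yield(early, die));
            printf("80 mm^2 die, mature process: %4.1f%%\n",
                   100 * die_yield(mature, die));
            printf("defect-free whole wafer:     %g\n",
                   die_yield(mature, wafer));
            return 0;
        }

    With these made-up numbers the 80 mm^2 die yields about 20% early
    and about 85% on the mature process, while the chance of a
    defect-free wafer is around exp(-140): hence wafer-scale parts must
    tolerate defects rather than avoid them.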

  • From Stefan Monnier@21:1/5 to All on Mon May 26 17:35:28 2025
    Early on, only 20% of the chips are viable. Later, once the bleeding
    edge has moved on, up to 90% of the chips are viable. They are never
    all viable.

    See the failure of Gene Amdahl's Trilogy wafer-scale integration
    effort.

    Cerebras is still in business, AFAIK.


    Stefan

  • From David Brown@21:1/5 to Chris M. Thomasson on Tue May 27 09:32:07 2025
    On 26/05/2025 23:41, Chris M. Thomasson wrote:
    On 5/26/2025 1:39 PM, MitchAlsup1 wrote:
    On Mon, 26 May 2025 20:09:29 +0000, Chris M. Thomasson wrote:

    [...]

    Early on, only 20% of the chips are viable.
    Later, once the bleeding edge has moved on, up to 90% of the chips are
    viable. They are never all viable.

    See the failure of Gene Amdahl's Trilogy wafer-scale integration
    effort.

    There might be a way for a wafer to be deemed, say, 95% viable, so
    it culls the herd wrt non-working zones, damage? Similar to
    Cerebras, but instead of it all being AI compute, shaders, etc., it
    would be a shit load of, say, x64 cores, GPUs, DRAM, SRAM, etc.
    Perhaps even sectors on the wafer with, say, an old 286 on them, to
    play old games... ;^) Sorry to bother you with this dream. However,
    well, is it 100% crazy, or 99.999...% crazy? ;^)


    Yes, there are ways to increase the viability of a wafer. There are
    the obvious ones - as you produce more wafers of a given design, you
    can fine-tune the process and eliminate the riskiest parts. But you
    can also build in a certain amount of redundancy to deal with
    damaged areas. I don't know how much this is done in current
    processors, but certainly in the past, companies have produced
    designs with a number of cores and a large cache, and if any of the
    cores or cache blocks failed testing, the chips were simply sold as
    lower core count parts. The wafer won't give a full output of the
    high-cost parts, but at least damaged parts won't be entirely
    wasted.
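
    A sketch of how that post-test binning can be organized, for a
    hypothetical part with eight cores and eight cache slices (the SKU
    table and thresholds are invented for illustration; real parts blow
    e-fuses to permanently disable the failed units):

        #include <stdio.h>

        /* Hypothetical post-test binning: count the cores and cache
           slices that passed wafer test and pick the largest SKU the
           die can support. */
        typedef struct {
            int good_cores;    /* from per-core test results */
            int good_slices;   /* from per-slice cache tests */
        } test_result;

        static const char *bin_die(test_result t)
        {
            if (t.good_cores >= 8 && t.good_slices >= 8)
                return "8-core, full cache";
            if (t.good_cores >= 6 && t.good_slices >= 6)
                return "6-core, 3/4 cache";
            if (t.good_cores >= 4 && t.good_slices >= 4)
                return "4-core, half cache";
            return "scrap";
        }

        int main(void)
        {
            test_result perfect = { 8, 8 };
            test_result damaged = { 7, 6 };  /* 1 bad core, 2 bad slices */

            printf("perfect die -> %s\n", bin_die(perfect));
            printf("damaged die -> %s\n", bin_die(damaged)); /* 6-core */
            return 0;
        }

    The same die then ships under whichever SKU its test results
    support, which is exactly the "sold as lower core count parts"
    effect.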

  • From Terje Mathisen@21:1/5 to David Brown on Tue May 27 12:37:17 2025
    David Brown wrote:
    [...] certainly in the past, companies have produced designs with a
    number of cores and a large cache, and if any of the cores or cache
    blocks failed testing, the chips were simply sold as lower core
    count parts. [...]


    The (in)famous PlayStation 3 Cell processor shipped with only 7 of
    its 8 SPE cores enabled, even though the die was built with 8: this
    way Sony could use chips with one failing core.
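
    The yield win from one spare core is easy to put numbers on. A toy
    model, assuming (purely for illustration) that each of the 8 cores
    survives fabrication independently with probability p = 0.9:

        #include <math.h>
        #include <stdio.h>

        /* n choose k, computed as a running product */
        static double binom(int n, int k)
        {
            double r = 1.0;
            for (int i = 1; i <= k; i++)
                r = r * (n - k + i) / i;
            return r;
        }

        /* P(at least k of n cores good), each good w.p. p */
        static double at_least(int n, int k, double p)
        {
            double sum = 0.0;
            for (int i = k; i <= n; i++)
                sum += binom(n, i) * pow(p, i) * pow(1 - p, n - i);
            return sum;
        }

        int main(void)
        {
            double p = 0.9;   /* assumed per-core survival probability */
            printf("all 8 cores good:     %.1f%%\n", 100 * at_least(8, 8, p));
            printf("at least 7 of 8 good: %.1f%%\n", 100 * at_least(8, 7, p));
            return 0;
        }

    That takes the usable fraction from about 43% (all 8 must work) to
    about 81% (7 of 8 suffice), nearly doubling the number of sellable
    chips for the cost of one core.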

    Terje

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

  • From MitchAlsup1@21:1/5 to David Brown on Tue May 27 15:40:16 2025
    On Tue, 27 May 2025 7:32:07 +0000, David Brown wrote:

    [...] But you can also build in a certain amount of redundancy to
    deal with damaged areas. [...]

    For decades, single lines in a cache (any level) could be put in a
    "no allocate" state, so your 16 KB L1 cache would become a
    (16 KB - 64 B) cache.

    Memory can easily tolerate a few bad transistors, wires, or
    connections and still work OK-ish. Function units, busses, and
    sequencers:: not so much.
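
    A sketch of what that looks like to the replacement logic, as a toy
    4-way set-associative cache with one "dead" (no-allocate) bit per
    line; the structure and field names are invented for illustration,
    since the real mechanism lives in tag-array fuses or config bits:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* One way (slot) of a set.  The "dead" bit is the no-allocate
           state: set once at manufacturing test for lines with bad
           SRAM cells, after which replacement never victimizes them. */
        struct line {
            bool     dead;    /* fused off at test time */
            bool     valid;
            uint64_t tag;
        };

        #define WAYS 4

        /* Pick a victim way in one set, skipping dead lines.  Returns
           -1 if every way is dead (that set then always misses). */
        static int pick_victim(struct line set[WAYS])
        {
            for (int w = 0; w < WAYS; w++)   /* prefer invalid lines */
                if (!set[w].dead && !set[w].valid)
                    return w;
            for (int w = 0; w < WAYS; w++)   /* else any live line   */
                if (!set[w].dead)
                    return w;                /* real HW would use LRU */
            return -1;
        }

        int main(void)
        {
            struct line set[WAYS] = {
                { .dead = true  },                     /* fused off  */
                { .dead = false, .valid = true, .tag = 0x12 },
                { .dead = false, .valid = false },
                { .dead = false, .valid = true, .tag = 0x34 },
            };
            printf("victim way: %d\n", pick_victim(set)); /* prints 2 */
            return 0;
        }

    Each killed line costs exactly one line of capacity, which is where
    the 16 KB - 64 B figure comes from; in the worst case a fully dead
    set simply always misses.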

  • From David Brown@21:1/5 to All on Tue May 27 19:16:58 2025
    On 27/05/2025 17:40, MitchAlsup1 wrote:
    [...]

    For decades, single lines in a cache (any level) could be put in a
    "no allocate" state, so your 16 KB L1 cache would become a
    (16 KB - 64 B) cache.

    Memory can easily tolerate a few bad transistors, wires, or
    connections and still work OK-ish. Function units, busses, and
    sequencers:: not so much.

    Sure - unless you simply disable an entire CPU core as bad. (If the
    damage is in the shared logic, your whole chip is screwed.)
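
    That split is the crux: folding it into the same toy Poisson model
    as above, the non-redundant shared area puts a hard cap on
    salvageable yield no matter how many spare cores there are (all
    areas and the defect density below are assumptions for
    illustration):

        #include <math.h>
        #include <stdio.h>

        /* Toy salvage model: the shared logic (busses, sequencers,
           memory controller, ...) has no spare, so it must be
           defect-free; the cores are redundant, so up to `spares`
           of them may be bad. */
        static double chip_yield(double d0, double shared_mm2,
                                 double core_mm2, int cores, int spares)
        {
            double p_shared = exp(-d0 * shared_mm2); /* shared clean */
            double p_core   = exp(-d0 * core_mm2);   /* 1 core clean */

            /* P(at most `spares` bad cores), binomial sum */
            double p_cores = 0.0;
            for (int bad = 0; bad <= spares; bad++) {
                double c = 1.0;                      /* cores choose bad */
                for (int i = 1; i <= bad; i++)
                    c = c * (cores - bad + i) / i;
                p_cores += c * pow(1 - p_core, bad)
                             * pow(p_core, cores - bad);
            }
            return p_shared * p_cores;
        }

        int main(void)
        {
            double d0 = 0.005;   /* defects per mm^2, assumed */
            printf("no spare core:  %.1f%%\n",
                   100 * chip_yield(d0, 40, 20, 8, 0));
            printf("one spare core: %.1f%%\n",
                   100 * chip_yield(d0, 40, 20, 8, 1));
            return 0;
        }

    With these numbers one spare core lifts salvage from about 37% to
    about 68%, but a defect in the 40 mm^2 of shared logic can never be
    tolerated, capping yield at exp(-0.2), about 82%.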

  • From Thomas Koenig@21:1/5 to Terje Mathisen on Fri May 30 10:00:02 2025
    Terje Mathisen <terje.mathisen@tmsw.no> wrote:

    The (in)famous PlayStation 3 Cell processor shipped with only 7 of
    its 8 SPE cores enabled, even though the die was built with 8: this
    way Sony could use chips with one failing core.

    IBM sells some of their lower-yield POWER9 chips to RaptorCS, which
    then puts them into their Talos II workstations with either 4, 8,
    18, or 22 cores.
