On 5/25/2025 10:14 PM, Chris M. Thomasson wrote:
> This is from a little conversation I had with Gemini about a fractal HDD
> arm with many heads, with cache sizes per head dependent on how far they
> are down the fractal arm. It evolved into this shit: ROFL. Might be for
> an interesting laugh? ;^)
> ____________________
> My ramble, just having fun:[...]
>
> Pipe dreams... Why chop up a wafer into individual chips? Use it as a
> whole. The entire wafer could spin like an HDD and can have etched CPUs,
> GPUs, DRAM, SRAM, etc... lol! ;^)

Early on, only 20% of the chips are viable. Later, once the bleeding
edge has moved on, up to 90% of the chips are viable. They are never
all viable.

See the failure of Gene Amdahl's Trilogy wafer scale integration effort.
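Those 20%/90% numbers line up with a simple Poisson defect model,
yield = exp(-D0 * A). A minimal sketch in Python, with purely
illustrative defect densities and die area (not measured values):

import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

die_area = 1.0  # cm^2, illustrative die size

# Illustrative defect densities for a new vs. mature process node.
for label, d0 in [("early process", 1.6), ("mature process", 0.1)]:
    print(f"{label}: D0 = {d0}/cm^2 -> yield ~ {poisson_yield(d0, die_area):.0%}")

# early process: D0 = 1.6/cm^2 -> yield ~ 20%
# mature process: D0 = 0.1/cm^2 -> yield ~ 90%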
On 5/26/2025 1:39 PM, MitchAlsup1 wrote:
> Early on, only 20% of the chips are viable. Later, once the bleeding
> edge has moved on, up to 90% of the chips are viable. They are never
> all viable.
>
> See the failure of Gene Amdahl's Trilogy wafer scale integration effort.

There might be a way for a wafer to be deemed, say, 95% viable, so it
culls the herd wrt non-working zones, damage? Similar to Cerebras, but
instead of all AI compute, shaders, etc., it would be a shitload of,
say, x64 cores, GPUs, DRAM, SRAM, etc. Perhaps even sectors on the
wafer that had, say, an old 286 on it, to play old games... ;^) Sorry
to bother you with this dream. However, is it 100% crazy, or
99.999...% crazy? ;^)
On 26/05/2025 23:41, Chris M. Thomasson wrote:
> There might be a way for a wafer to be deemed, say, 95% viable, so it
> culls the herd wrt non-working zones, damage? Similar to Cerebras, but
> instead of all AI compute, shaders, etc., it would be a shitload of,
> say, x64 cores, GPUs, DRAM, SRAM, etc. Perhaps even sectors on the
> wafer that had, say, an old 286 on it, to play old games... ;^)

Yes, there are ways to increase the viability of a wafer. There are the
obvious ones - as you produce more of the wafer and design, you can
fine-tune the process and eliminate the riskiest parts. But you can
also have a certain amount of redundancy to deal with damaged areas. I
don't know how much this is done in current processors, but certainly
in the past companies have produced wafers where the design was a
number of cores and a size of cache, and if any of the cores or cache
blocks did not work during testing then the chips were simply sold as
smaller core count parts. The wafer won't give a full output of the
high-cost parts, but at least damaged parts won't be entirely wasted.
On Tue, 27 May 2025 7:32:07 +0000, David Brown wrote:
> Yes, there are ways to increase the viability of a wafer. There are the
> obvious ones - as you produce more of the wafer and design, you can
> fine-tune the process and eliminate the riskiest parts. But you can
> also have a certain amount of redundancy to deal with damaged areas. I
> don't know how much this is done in current processors, but certainly
> in the past companies have produced wafers where the design was a
> number of cores and a size of cache, and if any of the cores or cache
> blocks did not work during testing then the chips were simply sold as
> smaller core count parts. The wafer won't give a full output of the
> high-cost parts, but at least damaged parts won't be entirely wasted.
For decades, single lines in a cache (any level) could be put in a
"no allocate" state, so your 16KB L1 cache would become a 16KB-64B
cache.

Memory can easily tolerate a few bad transistors, wires, or connections
and still work OK-ish. Function units, busses, and sequencers: not so
much.
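That "no allocate" trick can be sketched in software terms: a toy
direct-mapped cache model where lines that failed wafer test are fused
off and simply never allocated (hypothetical code, not any real
hardware's mechanism):

class ToyCache:
    """Toy direct-mapped cache; lines marked bad at test are never allocated."""
    LINE_SIZE = 64  # bytes per cache line

    def __init__(self, num_lines: int, bad_lines: set[int]):
        self.bad_lines = bad_lines      # fused off at wafer test
        self.num_lines = num_lines
        self.tags = [None] * num_lines  # tag per line, None = empty

    def access(self, addr: int) -> str:
        index = (addr // self.LINE_SIZE) % self.num_lines
        tag = addr // (self.LINE_SIZE * self.num_lines)
        if index in self.bad_lines:
            return "bypass"             # no-allocate: go straight to memory
        if self.tags[index] == tag:
            return "hit"
        self.tags[index] = tag          # allocate on miss
        return "miss"

# A 16KB cache (256 x 64B lines) with one dead line still works as a
# (16KB - 64B) cache; addresses mapping to line 42 just always go to
# the next level.
cache = ToyCache(256, bad_lines={42})
print(cache.access(42 * 64))             # bypass
print(cache.access(0), cache.access(0))  # miss hit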
The (in)famous Cell processor in the PlayStation 3 shipped with 7
usable SPE cores even though it was designed with 8: this way they
could use chips with one failing core.
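Applying the same binomial reasoning as the sketch above: with an
illustrative 90% per-SPE yield (not an actual Cell figure), requiring
only 7-of-8 working cores nearly doubles the usable-die fraction.

p = 0.90                                 # illustrative per-SPE yield
all_8 = p**8                             # every SPE must work
at_least_7 = all_8 + 8 * p**7 * (1 - p)  # at most one SPE may fail
print(f"all 8 good: {all_8:.1%}, 7-of-8 good: {at_least_7:.1%}")

# all 8 good: 43.0%, 7-of-8 good: 81.3%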