Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
code-base is VERY risky. As said, most of those old COBOL
apps on those old computers were basically PERFECT - and
the fallout from being IMperfect is SEVERE - both politically
and per-individual affected. Extreme caution is advised.
How hard could SS be? Are the rules so complex? I know it's hundreds
of millions of people, but that doesn't seem like a huge challenge for
modern systems.
On 31/03/2025 04:08, pothead wrote:
Financial institutions are still running COBOL as well as other legacy
systems like TPF.
But probably on PCs running IBM's Linux
On Mon, 31 Mar 2025 07:11:16 -0500, chrisv wrote:
How hard could SS be? Are the rules so complex? I know it's hundreds
of millions of people, but that doesn't seem like a huge challenge for
modern systems.
Yes, the rules are quite complex and they are changing all the time.
Why should this be a problem? The rules are written by those without
a clue about digital methodologies, and then these rules must be somehow worked into a digital programming framework. This is not easy.
Consider a comparatively simple task like payroll processing.
How does
one calculate federal withholding tax? There is no straight formula.
The process is a labyrinth of crazy regulations. First, there are different tables for married, single, and a few other categories. Then there are
many different possibilities for what is actually taxable income. Are
401K deductions exempt? What about IRA or health insurance deductions?
Some deductions may be exempt from withholding but not SS/Medicare.
The ramifications go on and on and on.
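Just to give a feel for the shape of the problem, here's a minimal
sketch in Python - every bracket number and rule below is invented
for illustration, not taken from any real IRS table:

  # Hypothetical withholding sketch. Real tables differ by year,
  # filing status, pay period, and W-4 elections.
  BRACKETS = {
      "single":  [(0, 0.10), (1000, 0.12), (4000, 0.22)],
      "married": [(0, 0.10), (2000, 0.12), (8000, 0.22)],
  }

  def taxable_wages(gross, k401, health):
      # Here 401(k) and health premiums reduce income-tax wages,
      # but SS/Medicare wages would be computed differently;
      # that split is exactly the complexity at issue.
      return gross - k401 - health

  def withholding(gross, status, k401=0.0, health=0.0):
      wages = taxable_wages(gross, k401, health)
      tax, brackets = 0.0, BRACKETS[status]
      for i, (floor, rate) in enumerate(brackets):
          ceiling = brackets[i + 1][0] if i + 1 < len(brackets) else wages
          if wages > floor:
              tax += (min(wages, ceiling) - floor) * rate
      return tax

  print(withholding(5000, "married", k401=300, health=150))

And that is one employee, one jurisdiction, one revision year.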
The federal W4 forms have also been changed, thanks to Trump's first term
in office. The difference between the former and current forms is so drastic that all the code, and databases, that deal with them had to be completely rewritten.
Then there are quarterly and annual reporting requirements, etc. etc.
A company may have 50 employees but nearly every one of them is a special case unto itself, and the software has to deal with it all without any error.
The SSA software is perhaps several orders of magnitude above this.
Computer science is based on mathematics, which is an eminently logical
process. But government regulations are not logical. There is no rhyme
or reason to them. They emanate from disparate and disjoint sources, and
then someone throws the 100 or so volumes of compiled law at the programmer and says do it.
I've always said that payrolls should never have been computerized.
Computers are logical, and payrolls aren't. Unfortunately, the power
of computers has been exploited to support gratuitous complexity.
c186282 wrote:
Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
code-base is VERY risky. As said, most of those old COBOL
apps on those old computers were basically PERFECT - and
the fallout from being IMperfect is SEVERE - both politically
and per-individual affected. Extreme caution is advised.
Sorry for my below naive/stupid questions...
How hard could SS be?
Are the rules so complex?
I know it's hundreds
of millions of people, but that doesn't seem like a huge challenge for
modern systems. I don't know why it would be any harder than any "significant" piece of software, like spreadsheet or database
software.
I'm also wondering how large the code base could be, if it was written
fifty years ago when a megabyte was a huge amount of memory.
On 3/31/25 08:11, chrisv wrote:
c186282 wrote:
Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
code-base is VERY risky. As said, most of those old COBOL
apps on those old computers were basically PERFECT - and
the fallout from being IMperfect is SEVERE - both politically
and per-individual affected. Extreme caution is advised.
Sorry for my below naive/stupid questions...
How hard could SS be?
In a word, "very".
Are the rules so complex?
In snapshot form, not too terribly bad. Problem is that there's been
50+ years' worth of revisions, and the documentation of every change is
never 100.0000% perfect in every last detail.
As such, it's become a "black box" that no one really knows what all it
is doing, so it's a nightmare to try to document all the processes to try
to reproduce it.
This is why multiple Fortune 500 corporations have had projects over the
years to try to replace COBOL, which have repeatedly failed. For example, one that I was aware of was looking to use Smalltalk; I never
paid attention enough to know if that was a good choice or not.
I know it's hundreds
of millions of people, but that doesn't seem like a huge challenge for
modern systems. I don't know why it would be any harder than any "significant" piece of software, like spreadsheet or database
software.
It is "big iron" mainframe stuff. Think of a single data center having literally *rows* of IBM 360's/370's.
Granted, there's been huge growth for web-based centers that are running thousands of webservers/etc, but that's largely independent parallel capacity, not a single database, so that drives solution approaches too.
I'm also wondering how large the code base could be, if it was written fifty years ago when a megabyte was a huge amount of memory.
Yup. A system my wife worked on back in the 1990s for Y2K had literally
a couple of **petabytes** of data storage being managed by their COBOL
system. I doubt it has grown by all that much ... my guess is that
they're probably still under ~50 petabytes today.
-hh
It should be noted that GnuCOBOL actually translates COBOL to C, and then compiles the C code with GCC. In *theory* one could just run the whole code base through GnuCOBOL and create a C code base, but good luck making much sense of the generated C code...
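For the curious, a quick way to see this (assuming GnuCOBOL is
installed; the -C flag is its documented "translation only" mode, the
file name is made up):

  $ cobc -C hello.cob    # emits hello.c instead of an object/executable
  $ cobc -x hello.cob    # or just build an executable directly

Even a trivial program expands into a surprising amount of generated C.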
I had to listen to a PMP rant all weekend about how profoundly stupid of
a plan this is from DOGE, especially the "in 5 months" claim. Overall,
it sounds like another example of a "contract out to Elon" attempt,
where they'll of course fail to meet any milestones, plus they'll skip
any high quality testing, and dump whatever human overrides onto the
already slashed staff to try to deal with while they claim "Victory!".
Total bullshit.
On Tue, 1 Apr 2025 14:40:27 -0000 (UTC), Robert Heller wrote:
It should be noted that GnuCOBOL actually translates COBOL to C, and
then compiles the C code with GCC. In *theory* one could just run the whole code base through GnuCOBOL and create a C code base, but good luck making much sense of the generated C code...
I never did it with COBOL but I did use 'f2c' to generate C code from Fortran. When I saw the results I wrote the C code manually. It probably would have compiled and run but it was not maintainable. I've seen more readable output from disassemblers.
Happily I don't have to deal with IMARS any more but in recent years it
wasn't any smoother than it was in 2016 when the article was written.
NIBRS, the DOJ's attempt to update the UCR, hasn't gone well either. Both
of these have been in the works for a decade or more.
On Tue, 1 Apr 2025 15:09:13 -0400
c186282 <c186282@nnada.net> wrote:
So, I'll stick with the *carefully* warning - and don't take down
What Works until it's very reliably duplicated by the newer code.
But really, good sir, How Hard Could It Be!? (TM)
Yes, one of the things I did when I was working at UMass was to convert some FORTRAN code to C, and yes, automated tools commonly did a poor job or created code that was not maintainable or readable. (And some of the code needed to be made multi-threaded for parallelization.)
On 1 Apr 2025 18:08:11 GMT, rbowman wrote:
Happily I don't have to deal with IMARS any more but in recent years it
wasn't any smoother than it was in 2016 when the article was written.
NIBRS, the DOJ's attempt to update the UCR, hasn't gone well either. Both
of these have been in the works for a decade or more.
Ha, ha, ha, ha, ha, ha!
I was expecting that blowhard blowman to step in with his tales
of having done everything and anything.
Sure enough. He'll soon admit that he was consulted for the
transition but had to turn it down due to thousands of other
offers.
Before too long he'll even claim to be a confidant of Elon Musk.
Ha, ha, ha, ha, ha, ha!
On 4/1/25 11:08 AM, -hh wrote:
I had to listen to a PMP rant all weekend about how profoundly stupid
of a plan this is from DOGE,
Musk's "plan" isn't bad ... per-se. As noted in a
variety of news, some of those important federal
agencies are STILL using 60s hardware & programming.
The TRICK is getting any newer stuff RIGHT before it
goes mainline. That old COBOL code was GREAT, never
diss those narrow-tie Dilberts from the day.
BUT ... as those COBOLs were kinda tweaked for the
particular old boxes, no modern translation tool is
gonna work worth a damn. It'll have to be HAND DONE,
line by line, and done RIGHT. Forget Gen-Z/A2 and AI.
As I said to be hated - PYTHON ! :-)
Anyway, eventually it'll be all AI code. We
won't understand it - all magic thereafter.
Wave yer wand Harry !
On 02/04/2025 04:32, c186282 wrote:
Musk's "plan" isn't bad ... per se. As noted in a
variety of news, some of those important federal
agencies are STILL using 60s hardware & programming.
The TRICK is getting any newer stuff RIGHT before it
goes mainline. That old COBOL code was GREAT, never
diss those narrow-tie Dilberts from the day.
I think that people here mostly haven't been exposed to business coding
as much as technical coding.
There is nothing clever about business *coding*. The clever bit is the
business *analysis*, for which a very tight specification is written.
The problem today is that many people can code, but almost no one
designs the functionality any more. It's just 'thrown together'.
Anyway, eventually it'll be all AI code. We
won't understand it - all magic thereafter.
It will all fuck up.
The Natural Philosopher <tnp@invalid.invalid> writes:
On 31/03/2025 04:08, pothead wrote:
Financial institutions are still running COBOL as well as other legacy
systems like TPF.
But probably on PCs running IBM's Linux
Lloyds are still on zSeries (you can see the job ads...) though no
doubt with layers of more modern stuff around it.
Replacing legacy systems is welcome, but a failed modernization that
breaks your customers’ ability to use their bank accounts gets very expensive very fast.
So yea, I can perfectly believe they're keeping 360s
and such alive and working ...
My last contact with a System/360 was back in about 1995, when a company
On 2025-03-31, Richard Kettlewell <invalid@invalid.invalid> wrote:
The Natural Philosopher <tnp@invalid.invalid> writes:
On 31/03/2025 04:08, pothead wrote:
Financial institutions are still running COBOL as well as other legacy systems like TPF.
But probably on PCs running IBM's Linux
Lloyds are still on zSeries (you can see the job ads...) though no
doubt with layers of more modern stuff around it.
Replacing legacy systems is welcome, but a failed modernization that
breaks your customers’ ability to use their bank accounts gets very
expensive very fast.
Correct.
COBOL and other legacy software are being run on IBM zSeries.
So yea, I can perfectly believe they're keeping 360s and such alive
and working ...
On Thu, 3 Apr 2025 06:52:14 -0400, c186282 wrote:
So yea, I can perfectly believe they're keeping 360s and such alive
and working ...
Technically the System/360 was succeeded by the System/370 in 1970. They skipped 380 and went to System/390. However, there is a lot of backward compatibility, so the Zs can run a lot of the legacy software.
The 360/370 boxes WERE really popular, so I'm gonna GUESS there's at
least one or two still chugging away. Maint cost would be insane
these days ... but you can kinda bury that in the budget while new
hardware stands out more and in more places.
On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:
The 360/370 boxes WERE really popular, so I'm gonna GUESS there's at
least one or two still chugging away. Maint cost would be insane
these days ... but you can kinda bury that in the budget while new
hardware stands out more and in more places.
I'm not sure there are any little old ladies left to knit magnetic core.
The tape drives were fragile when new but they and the disk drives the
size of a washing machine may have been replaced over the years. My mother had a plastic thing to keep layer cakes fresh that the 2311's removable media always reminded me of.
https://d1yx3ys82bpsa0.cloudfront.net/groups/ibm-1311-2311.pdf
7.5 MB!!! What can you ever use that much storage for? I bought a microSD last week for another RPi project. It's getting hard to find them smaller
than 64 GB.
And look REALLY cool in sci-fi movies !
On 04/04/2025 01:16, rbowman wrote:
On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:
The 360/370 boxes WERE really popular, so I'm gonna GUESS there's at
least one or two still chugging away. Maint cost would be insane
these days ... but you can kinda bury that in the budget while new
hardware stands out more and in more places.
I'm not sure there are any little old ladies left to knit magnetic core.
Last time I looked they were little Asian ladies with teeny nimble fingers. Did all the coil winding in that factory.
On 4/4/25 4:38 AM, The Natural Philosopher wrote:
On 04/04/2025 01:16, rbowman wrote:
On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:
The 360/370 boxes WERE really popular, so I'm gonna GUESS
there's at least one or two still chugging away. Maint cost
would be insane these days ... but you can kinda bury that in
the budget while new hardware stands out more and in more
places.
I'm not sure there are any little old ladies left to knit magnetic
core.
Last time I looked they were little Asian ladies with teeny nimble
fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Look up "rope memory" :-)
Hey, it flew us to the moon ...
On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:
I'm not sure there are any little old ladies left to knit magnetic core.
Last time I looked they were little Asian ladies with teeny nimble fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Hey, it flew us to the moon ...
Who would ever give a flying fuck about this "Neolithic" technical
crap? It's the future that is of concern.
My question has always been: when are these memory engineers (or
whatever they are called) going to produce cheap RAM that
can actually keep pace with the CPU?
For decades we have had to use various levels of high speed, though minuscule, cache memory in order for our software to run, and from
a programming point of view cache management is a supreme bitch.
The world needs cheap RAM that can operate at CPU speeds. Then,
all programming would be a supreme breeze.
Cache memory is just another crutch, and its existence is indisputable testimony that modern PC hardware is crippled shit.
On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:
On 4/4/25 4:38 AM, The Natural Philosopher wrote:
On 04/04/2025 01:16, rbowman wrote:
On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:
The 360/370 boxes WERE really popular, so I'm gonna GUESS
there's at least one or two still chugging away. Maint cost
would be insane these days ... but you can kinda bury that in
the budget while new hardware stands out more and in more
places.
I'm not sure there are any little old ladies left to knit magnetic
core.
Last time I looked they were little asian ladies wit teeny nimble
fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Then came twistor memory which begat bubble memory...
On 4/4/25 2:53 PM, rbowman wrote:
On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:
On 4/4/25 4:38 AM, The Natural Philosopher wrote:
On 04/04/2025 01:16, rbowman wrote:
On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:
The 360/370 boxes WERE really popular, so I'm gonna GUESS
there's at least one or two still chugging away. Maint cost
would be insane these days ... but you can kinda bury that in
the budget while new hardware stands out more and in more
places.
I'm not sure there are any little old ladies left to knit magnetic
core.
Last time I looked they were little Asian ladies with teeny nimble
fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Then came twistor memory which begat bubble memory...
Everybody had a trick back in the day. "Bubble"
wasn't bad really ... just couldn't push up
performance or capacity easily enough. Have you
ever looked into 'FRAM' - ferroelectric - mem for
embedded ? Much faster than flash, almost infinite
re-write capability, BUT they can't get the density
up much beyond the current, rather low, levels. It
still has a useful place (and I've used it) but it
has no "greater future" so far as I can discern.
"Rope" was interesting because it was used in the
NASA lunar lander vehicle. Basically cores on loose
wire, and I think SPACING was important. Why the hell
did they use that in 1969 ? Because, the way things
work, the govt SPECS for the vehicle probably went
out the door before JFK even finished his moon speech.
That's where the e-tech was frozen for all intents.
On 4/4/25 3:15 PM, Farley Flud wrote:
On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:
I'm not sure there are any little old ladies left to knit magnetic
core.
Last time I looked they were little Asian ladies with teeny nimble
fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Hey, it flew us to the moon ...
Who would ever give a flying fuck about this "Neolithic" technical
crap? It's the future that is of concern.
No future without a past.
And past tricks/thinking/strategies CAN inspire
the new.
My question has always been: when are these memory engineers (or
whatever they are called) going to produce cheap RAM that
can actually keep pace with the CPU?
Never ...
OK, Moore's Law is getting close to stalling-out CPU
performance. Ergo, give it a few years, the memory
MAY finally catch up.
For decades we have had to use various levels of high speed, though
minuscule, cache memory in order for our software to run, and from
a programming point of view cache management is a supreme bitch.
The world needs cheap RAM that can operate at CPU speeds. Then,
all programming would be a supreme breeze.
Well, your 'future' isn't providing. MAYbe some odd
idea from the past, just on better hardware ???
Cache memory is just another crutch, and its existence is indisputable
testimony that modern PC hardware is crippled shit.
Well, I'd argue that on-chip cache is always gonna
outperform - if for no other reason than the short
circuit paths. These days, the speed of electricity
over wires is becoming increasingly annoying - it's
why they want photonics instead. Of course even
that will be too slow soon enough ... and you can
complain to Einstein .......
"Rope" was interesting because it was used in the NASA lunar lander
vehicle. Basically cores on loose wire, and I think SPACING was
important. Why the hell did they use that in 1969 ? Because, the way
things work, the govt SPECS for the vehicle probably went out the
door before JFK even finished his moon speech.
That's where the e-tech was frozen for all intents.
Project I worked on was undersea repeater for optical cables.
Probably the worst organised and specified project ever, but that's a by
the by. They said 'you are lucky we are allowed to use a silicon
processor; up to 5 years ago we had to use germanium'. 'Why?' 'Because
that was the only technology more than 15 years old that could be
guaranteed to last the 15 years.'
On 05/04/2025 09:50, c186282 wrote:
On 4/4/25 3:15 PM, Farley Flud wrote:
On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:
I'm not sure there are any little old ladies left to knit magnetic core.
Last time I looked they were little Asian ladies with teeny nimble
fingers.
Did all the coil winding in that factory.
Look up "rope memory" :-)
Hey, it flew us to the moon ...
Who would ever give a flying fuck about this "Neolithic" technical
crap? It's the future that is of concern.
No future without a past.
And past tricks/thinking/strategies CAN inspire
the new.
Indeed.
Many ideas that were infeasible, become feasible with new technology.
Many don't. Windmills being a prime example...
My question has always been: when are these memory engineers (or
whatever they are called) going to produce cheap RAM that
can actually keep pace with the CPU?
Never ...
The problem is distance between elements times the size of elements
divided by the speed of light.
It means that you need to start going 3D on memory to keep the
speed/capacity within bounds.
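To put rough numbers on that, a back-of-envelope sketch in Python
(assuming a 4GHz clock and signals moving at about half the speed of
light in copper interconnect, both just illustrative figures):

  # How far can a signal travel in one clock cycle?
  C = 3.0e8          # speed of light, m/s
  V = 0.5 * C        # rough effective signal speed in interconnect
  CLOCK_HZ = 4.0e9   # 4 GHz

  cycle_s = 1.0 / CLOCK_HZ
  reach_mm = V * cycle_s * 1000.0
  print(f"{cycle_s * 1e12:.0f} ps per cycle, ~{reach_mm:.0f} mm one way")
  # ~38 mm one way, call it ~19 mm there and back: DRAM a few cm away
  # cannot possibly answer within a cycle, hence caches and 3D stacking.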
It has its parallel in human political structures. Huge monolithic
empires like the USSR simply fail to keep up, because the information
from the bottom takes a long time to get to the top.
A far better solution is the old British Empire, with governors having
power over a local nation, and very few decisions being centralised.
The same laws govern both. What is happening is more local on-chip cache.
OK, Moore's Law is getting close to stalling-out CPU
performance. Ergo, give it a few years, the memory
MAY finally catch up.
IIRC the RP2040 has 264K of SRAM *on the chip itself*.
That's local. Hence as fast as the chip is.
For decades we have had to use various levels of high speed, though
minuscule, cache memory in order for our software to run, and from
a programming point of view cache management is a supreme bitch.
The world needs cheap RAM that can operate at CPU speeds. Then,
all programming would be a supreme breeze.
It already has it, just not in the sizes you want. Because of the
propagation delay inherent in large arrays.
We can clock the CPUs up to 4GHz ± mainly because we can make 'em down to 10nm element size.
Below that you start to get into quantum effects and low yields.
DDR5 RAM is pushing 3GHz speeds
Well, your 'future' isn't providing. MAYbe some odd
idea from the past, just on better hardware ???
The better idea is to look at what the actual problems are, and design massively parallel solutions to them that do not require a single
processor running blindingly fast.
Cache memory is just another crutch, and its existence is indisputable
testimony that modern PC hardware is crippled shit.
Well, I'd argue that on-chip cache is always gonna
outperform - if for no other reason than the short
circuit paths. These days, the speed of electricity
over wires is becoming increasingly annoying - it's
why they want photonics instead. Of course even
that will be too slow soon enough ... and you can
complain to Einstein .......
Can't yet beat the speed of light. Photonics is not much faster than
electronics.
Back in the day we measured delay on a reel of 50 ohm coax. It was about
.95 the speed of light IIRC.
So that isn't the way to go,
Look, you need to study the history of engineering.
Let's take a steam engine. Early engines crude, inefficient and very
heavy and large. Maybe 1% efficient.
Roll forward to the first locomotives and still heavy but now getting 5% efficiency..
Fiddle with that for a hundred years and the final efficiency of a steam piston engine approached the theoretical limits of the technology
without cooling or superheated steam, around 20%. Now use superheated steam in a steam turbine with a condenser strapped on the back -
suitable for ships or power stations - and you are getting up to 37%.
But that is fundamentally it. There is a law governing it:
Efficiency = (T_in - T_out) / T_in, with the temperatures in degrees absolute.
So for 400°C in and say 100°C out, T_in = 673 K and T_out = 373 K, which gives a max thermal efficiency of 45%.
You simply will never do better than that with water as the working
fluid unless you go to horrendous inlet temps of superheated steam
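As a quick sanity check of that formula (the plain Carnot limit, with
the temperatures converted to kelvin):

  # Carnot limit for a heat engine: eta = (T_in - T_out) / T_in
  def carnot_efficiency(t_in_c, t_out_c):
      t_in = t_in_c + 273.15    # Celsius to kelvin
      t_out = t_out_c + 273.15
      return (t_in - t_out) / t_in

  print(carnot_efficiency(400, 100))  # ~0.45, the superheated case
  print(carnot_efficiency(200, 100))  # ~0.21, without superheat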
The point is that every technology has a limit beyond which no amount of tinkering is going to get you. Engineers come to understand this; the
lay public do not. They are always whining 'why can't you power the
universe from one single bonfire'.
Digital computing has a little way to go, but it is already close to
the limits.
For some problems, precision analogue might be faster...
On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:
Project I worked on was undersea repeater for optical cables.
Probably the worst organised and specified project ever, but that's a by
the by. They said 'you are lucky we are allowed to use a silicon
processor; up to 5 years ago we had to use germanium'. 'Why?' 'Because
that was the only technology more than 15 years old that could be
guaranteed to last the 15 years.'
When we built sequential runway lighting controllers the wire harnesses
had to be laced because the nylon ties used in industry hadn't had a
couple of decades of use.
https://en.wikipedia.org/wiki/Cable_lacing
Luckily all our techs were women, most of whom had knitting or macrame skills. The heart of the system was an electro-mechanical stepper that was also mostly obsolete in industry.
The SSA probably has people saying 'Rust? Well maybe after it's proven
itself for 60 years like COBOL.'
Digital ... note that clock speeds haven't really risen in
a LONG time. They can, to a point, make them 'work smarter'
but how much more ? Not all tasks/problems lend themselves
to parallel processing methods either.
So, yea, we're pretty much there.
Analog ... it still may have certain uses, however for
chain operations the accumulated errors WILL getcha.
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors.
Might be a 'near' or 'finely-quantized' sort of
analog - 100,000 distinct, non-drifting, states that
for some practical purposes LOOKS like traditional
analog. So long as you don't need TOO many decimal
points ......
Finally ... non-binary computing, eight or ten states
per "bit". Fewer operations, fewer gates twiddling,
better efficiency anyhow, potentially better speed. But
doing it with anything like traditional semiconductors,
cannot see how.
Non-binary computing is essentially analogue computing.
On Sat, 5 Apr 2025 04:31:22 -0400, c186282 wrote:
"Rope" was interesting because it was used in the NASA lunar lander
vehicle. Basically cores on loose wire, and I think SPACING was
important. Why the hell did they use that in 1969 ? Because, the way
things work, the govt SPECS for the vehicle probably went out the
door before JFK even finished his moon speech.
That's where the e-tech was frozen for all intents.
The specification document for projects like that takes years to write
with fights over every paragraph, including whether it should be vii., 7., G., or g.. There is so much ego involvement and politics that when it is finalized that is what shall be done, even if problems are recognized as
the project actually gets underway.
That leads to the F-35, Zumwalt class destroyers, and both varieties of
the LCS.
Nylon ties are often tied very tight, so tight that they can damage the
cable insulation and perhaps the outer conductors.
Nylon ties cut your hand when you push it through the mesh of cables
trying to get one.
Maybe the #2 and #3 are due to not using a certified tool for tying
them, but I have never seen such a tool. It was mentioned, but nobody
had one.
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors
https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
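The classic trick is log-antilog: a BJT's collector current is
exponential in its base-emitter voltage, so summing two "log" outputs
and exponentiating multiplies. A sketch of just the math in Python
(this models the principle, not the ADL5391's actual topology; the
component values are arbitrary):

  import math

  VT = 0.02585    # thermal voltage at room temperature, volts
  IS = 1e-15      # transistor saturation current, arbitrary

  def log_stage(i):
      return VT * math.log(i / IS)     # input current -> voltage

  def antilog_stage(v):
      return IS * math.exp(v / VT)     # voltage -> output current

  a, b = 2e-3, 3e-3                    # two input currents, amps
  v_sum = log_stage(a) + log_stage(b)  # adding logs multiplies
  # antilog yields a*b/IS; the reference current sets the scale
  print(antilog_stage(v_sum) * IS, a * b)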
On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:
Digital ... note that clock speeds haven't really risen in
a LONG time. They can, to a point, make them 'work smarter'
but how much more ? Not all tasks/problems lend themselves
to parallel processing methods either.
So, yea, we're pretty much there.
The supercomputer people would disagree.
Supercomputers, based on Linux, just keep on getting faster.
The metric is matrix multiplication, a classic problem in cache
management.
I don't know about the architecture of supercomputers but
the limit seems to be still quite open.
On 05/04/2025 20:22, c186282 wrote:
Analog ... it still may have certain uses, however for
chain operations the accumulated errors WILL getcha.
Might be a 'near' or 'finely-quantized' sort of
analog - 100,000 distinct, non-drifting, states that
for some practical purposes LOOKS like traditional
analog. So long as you don't need TOO many decimal
points ......
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors
https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
Finally ... non-binary computing, eight or ten states
per "bit". Fewer operations, fewer gates twiddling,
better efficiency anyhow, potentially better speed. But
doing it with anything like traditional semiconductors,
cannot see how.
Non-binary computing is essentially analogue computing.
It is already done in Flash RAM, where more than two states of the memory capacitors are possible.
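For scale, each extra bit per cell doubles the number of voltage
levels the cell must hold and the read circuitry must distinguish
(standard cell names, simple arithmetic):

  # Multi-level flash: n bits per cell means 2**n distinct levels.
  for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
      print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} voltage levels")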
Massive arrays of non linear analogue circuits for modelling things like
the Navier Stokes equations would be possible: Probably make a better
stab at climate modelling than the existing shit.
Farley Flud <fflud@gnu.rocks> wrote:
Gentoo: The Fastest GNU/Linux Hands Down
You are retarded.
On 05/04/2025 20:20, rbowman wrote:
On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:
Project I worked on was undersea repeater for optical cables.
Probably the worst organised and specified project ever, but that's a by
the by. They said 'you are lucky we are allowed to use a silicon
processor; up to 5 years ago we had to use germanium'. 'Why?' 'Because
that was the only technology more than 15 years old that could be
guaranteed to last the 15 years.'
When we built sequential runway lighting controllers the wire harnesses
had to be laced because the nylon ties used in industry hadn't had a
couple of decades of use.
https://en.wikipedia.org/wiki/Cable_lacing
Luckily all our techs were women, most of whom had knitting or macrame
skills. The heart of the system was an electro-mechanical stepper that was
also mostly obsolete in industry.
I was taught wire lacing as an apprentice. Doesn't need women.
On 4/5/25 4:00 PM, Farley Flud wrote:
On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:
Digital ... note that clock speeds haven't really risen in
a LONG time. They can, to a point, make them 'work smarter'
but how much more ? Not all tasks/problems lend themselves
to parallel processing methods either.
So, yea, we're pretty much there.
The supercomputer people would disagree.
Supercomputers, based on Linux, just keep on getting faster.
The metric is matrix multiplication, a classic problem in cache
management.
I don't know about the architecture of supercomputers but
the limit seems to be still quite open.
Matrix mult is a kind of parallelization ... and we
still have some room there. But not every problem is
easily, or at all, suited for spreading over 1000
processors.
https://en.wikipedia.org/wiki/Amdahl%27s_law
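Amdahl's law makes the point concrete. A quick sketch (the 95%
parallel fraction is just an illustrative number):

  # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
  # parallelizable fraction and n is the processor count.
  def amdahl_speedup(p, n):
      return 1.0 / ((1.0 - p) + p / n)

  for n in (10, 100, 1000):
      print(n, round(amdahl_speedup(0.95, n), 1))
  # Even with 95% of the work parallel, 1000 processors deliver
  # only about 20x, because the serial 5% dominates.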
Super-computers can use exotic tech, at a super price,
if they want - including superconductors and quantum
elements. NOT coming to a desktop near you anytime
soon alas ......
The Natural Philosopher <tnp@invalid.invalid> writes:
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors
https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
The electronics there is far beyond me but how general (and how
reliable) can this be made?
The operations I’m currently interested in are modular multiplication, addition and subtraction with comparatively small moduli (12-bit and
23-bit, currently). It’s a well-understood problem with digital
computation, of course, but I’m curious about whether there’s another option here.
On 4/5/25 3:40 PM, The Natural Philosopher wrote:
On 05/04/2025 20:22, c186282 wrote:
Analog ... it still may have certain uses, however for
chain operations the accumulated errors WILL getcha.
Might be a 'near' or 'finely-quantized' sort of
analog - 100,000 distinct, non-drifting, states that
for some practical purposes LOOKS like traditional
analog. So long as you don't need TOO many decimal
points ......
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors
https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
Finally ... non-binary computing, eight or ten states
per "bit". Fewer operations, fewer gates twiddling,
better efficiency anyhow, potentially better speed. But
doing it with anything like traditional semiconductors,
cannot see how.
Non-binary computing is essentially analogue computing.
Ummmm ... not if you can enforce clear 'guard bands' around
each of the, say eight, distinct voltage levels. Alas, as
stated, those 'different voltage levels' mean transistors
aren't cleanly on or off and will burn power kind of like
little resistors. Some all new material and approach would
be needed. Meta-material science MIGHT someday be able to
produce something like that.
It is already done in Flash RAM where more than two states of the
memory capacitors are possible
Massive arrays of non linear analogue circuits for modelling things
like the Navier Stokes equations would be possible: Probably make a
better stab at climate modelling than the existing shit.
Again with analog, it's the sensitivity, especially to
temperature conditions, that adds errors in.
Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
Again, perhaps some meta-material that's NOT sensitive
to what typically throws-off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
Now discrete use of analog as, as you suggested, doing
multiplication/division/logs initiated and read by
digital ... ?
Oh well, we're out in sci-fi land with most of this ...
may as well talk about using giant evil brains in
jars as computers :-)
As some here have mentioned, we may be closer to the
limits of computer power than we'd like to think.
Today's big trick is parallelization, but only some
kinds of problems can be modeled that way.
Saw an article the other day about using some kind
of disulfide for de-facto transistors, but did not
get the impression that they'd be fast. I think
temperature resistance was the main thrust - industrial
apps, Venus landers and such.
On 05/04/2025 23:34, c186282 wrote:
Super-computers can use exotic tech, at a super price,
if they want - including superconductors and quantum
elements. NOT coming to a desktop near you anytime
soon alas ......
Well the main use of supercomputers is running vast mathematical models
to make sketchy assumptions and crude parametrisations look much more betterer than they actually are..
Real racing car and aircraft design uses wind tunnels. CFD can't do the
job.
On 05/04/2025 23:27, c186282 wrote:
Again with analog, it's the sensitivity, especially to
temperature conditions, that adds errors in.
Not really, That was mostly sorted years ago.
Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
So no different from floating point based current climate models, then...
Again, perhaps some meta-material that's NOT sensitive
to what typically throws-off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
An awful lot of op-amps.
The thing is that analogue computers were useful for system analysis
years before digital stuff came along. You could examine a dynamic
system and see if it was stable or not.
If not you did it another way. People who dribble on about 'climate
tipping points' have no clue really as to how real life complex analogue systems work.
Now discrete use of analog as, as you suggested, doing
multiplication/division/logs initiated and read by
digital ... ?
It's being thought about.
Oh well, we're out in sci-fi land with most of this ...
may as well talk about using giant evil brains in
jars as computers :-)
Well no, we are not.
Digital traded speed for precision.
As some here have mentioned, we may be closer to the
limits of computer power than we'd like to think.
Today's big trick is parallelization, but only some
kinds of problems can be modeled that way.
Saw an article the other day about using some kind
of disulfide for de-facto transistors, but did not
get the impression that they'd be fast. I think
temperature resistance was the main thrust - industrial
apps, Venus landers and such.
I think I saw that too...
Massive parallelisation will definitely do *some* things faster.
Think 4096 core GPU processors... I think that's the way it will
happen, decline of the general purpose CPU and emergence of specific chips
tailored to specific tasks. It's already happening to an extent with on-chip everything....
On 4/4/25 3:15 PM, Farley Flud wrote:
Who would ever give a flying fuck about this "Neolithic" technical
crap?
It's the future that is of concern.
No future without a past.
And past tricks/thinking/strategies CAN inspire
the new.
No, but we can move to quantum computing, which may become
a reality before too long.
On 4/5/25 4:16 PM, Joel wrote:
Farley Flud <fflud@gnu.rocks> wrote:
Gentoo: The Fastest GNU/Linux Hands Down
You are retarded.
Now now ... is he retarded,
or simply mistaken ?
On 05/04/2025 22:57, Richard Kettlewell wrote:
The Natural Philosopher <tnp@invalid.invalid> writes:
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors
https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
The electronics there is far beyond me but how general (and how
reliable) can this be made?
Well it isn't very general. It takes two voltages between zero and one
and multiplies them together.
That's what it does.
Going from digits to volts is relatively fast and easy, but the
reverse is not so true
The operations I’m currently interested in are modular multiplication,
addition and subtraction with comparatively small moduli (12-bit and
23-bit, currently). It’s a well-understood problem with digital
computation, of course, but I’m curious about whether there’s another
option here.
That flew over my head Richard. Advanced maths is not my forte ...
Le 05-04-2025, Farley Flud <ff@linux.rocks> a écrit :
No, but we can move to quantum computing, which may become
a reality before too long.
I heard about that before I was born.
On 4/5/25 9:07 PM, The Natural Philosopher wrote:
temperature conditions, that adds errors in.
Not really, That was mostly sorted years ago.
Ummm ... I'm gonna kinda have to disagree.
There are several factors that lead to errors in
analog electronics - simple temperature being
the worst.
Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
So no different from floating point based current climate models, then...
Digital FP *can* be done to almost arbitrary precision.
If you're running, say, a climate or 'dark energy' model
then you use a LOT of precision.
Again, perhaps some meta-material that's NOT sensitive
to what typically throws-off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
An awful lot of op-amps.
To say the least :-)
CAN be done, but is it WORTH it ???
But, I suppose, a whole-budget CAN be viewed
as an analog equation IF you try hard enough.
The thing is that analogue computers were useful for system analysis
years before digital stuff came along. You could examine a dynamic
system and see if it was stable or not.
Well, *how* stable it is ........
Digital is always right-on.
So what do you NEED most - speed or accuracy ?
If not you did it another way. People who dribble on about 'climate
tipping points'have no clue really as to how real life complex
analogue systems work.
I'm just gonna say that "climate" is beyond ANY kind
of models - analog OR digital. TOO many butterflies.
Now discrete use of analog as, as you suggested, doing
multiplication/division/logs initiated and read by
digital ... ?
It's being thought about.
And we shall see ... advantage, or not ?
Maybe, horrors, "depends" .....
The "real world" acts as a very complex analog
equation - until you get down to quantum levels.
HOW the hell to best DEAL with that ???
Oh well, we're out in sci-fi land with most of this ...
may as well talk about using giant evil brains in
jars as computers :-)
Well no, we are not.
Digital traded speed for precision.
I'd say digital traded precision for speed ...
Massive parallelisation will definitely do *some* things faster.
Agreed ... but not EVERYTHING.
Sometimes there's just no substitute for clock
speed and high-speed mem access.
Think 4096 core GPU processors... I think that's the way it will
happen, decline of the general purpose CPU and emergence of specific
chips tailored to specific tasks. It's already happening to an extent
with on-chip everything....
I kinda understand. However that whole chip chain
will likely need to be fully, by design, integrated.
This is NOT so easy with multiple manufacturers.
ANYway ... final observation ... it keeps looking
like we're far closer to the END of increasing
computer power than the beginning. WHAT we want
computed is kinda a dynamic equation, but OVERALL
we're kinda near The End.
THEN what ?
On 4/5/25 9:13 PM, The Natural Philosopher wrote:
Well the main use of supercomputers is running vast mathematical
models to make sketchy assumptions and crude parametrisations look
much more betterer than they actually are..
Even 80s super-computers made it unnecessary to TEST
nuclear weapon designs - the entire physics could be
calculated, a 'virtual' bomb, and RELIED on. Even Iran
can do all that and more now.
Real racing car and aircraft design uses wind tunnels. CFD can't do
the job.
Um, yea ... really COULD be entirely virtualized.
ACCESS to such calx capabilities still isn't there
or affordable to ALL however. Do you think that
AirBus/Boeing/Lockheed build a zillion wind-tunnel
models these days ? Likely NONE. The airflows, the
structural components ... all SIMS.
WHAT can be done with "quantum" is not entirely
clear. Again it's not best suited for EVERYTHING.
The #1 issue is still the ERROR rates. As per QM,
where things can randomly change Just Because,
these errors are gonna be HARD to get around.
STILL no great solutions. Got a design for a
"Heisenberg compensator" ??? If so, GET RICH !!!
There MAY be some pattern in the QM errors that
can be matched/negated by some parallel QM
process/equation. We shall see.
The Natural Philosopher <tnp@invalid.invalid> writes:
On 05/04/2025 22:57, Richard Kettlewell wrote:
The Natural Philosopher <tnp@invalid.invalid> writes:
Analogue multiplication is the holy grail and can be done using the
exponential characteristics of bipolar transistors
https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
The electronics there is far beyond me but how general (and how
reliable) can this be made?
Well it isn't very general. It takes two voltages between zero and one
and multiplies them together.
That's what it does.
Going from digits to volts is relatively fast and easy, but the
reverse is not so true
Right, probably not a fit for my application, then.
The operations I’m currently interested in are modular multiplication,
addition and subtraction with comparatively small moduli (12-bit and
23-bit, currently). It’s a well-understood problem with digital
computation, of course, but I’m curious about whether there’s another
option here.
That flew over my head Richard. Advanced maths is not my forte ...
You do modular arithmetic whenever you do times and dates - informally,
it’s the same thing as the way minutes and seconds wrap around from 59
to 0, hours from 23 to 0, etc.
Putting it together into something useful is a bit more involved than
that, though, and a persistent concern is how to do it faster, hence
keeping an eye out for additional technology options.
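A minimal illustration of that wrap-around idea in Python (the 12-bit
modulus size echoes the figure mentioned above; the modulus value
itself is arbitrary):

  M = 4093   # a prime that fits in 12 bits, purely illustrative

  def mod_add(a, b):
      return (a + b) % M

  def mod_mul(a, b):
      return (a * b) % M

  print((50 + 25) % 60)        # minutes wrap: 15 past the next hour
  print(mod_mul(3000, 4000))   # stays within 0..M-1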
On 06 Apr 2025 08:59:33 GMT, Stéphane CARPENTIER wrote:
Le 05-04-2025, Farley Flud <ff@linux.rocks> a écrit :
No, but we can move to quantum computing, which may become
a reality before too long.
I heard about that before I was born.
In the US, the NIST is already researching algorithms for "post-quantum cryptography:"
https://csrc.nist.gov/projects/post-quantum-cryptography
Quantum computing is definitely going to happen.
Le 05-04-2025, c186282 <c186282@nnada.net> a écrit :
On 4/5/25 4:16 PM, Joel wrote:
Farley Flud <fflud@gnu.rocks> wrote:
Gentoo: The Fastest GNU/Linux Hands Down
You are retarded.
Now now ... is he retarded,
Yes. That's a simple answer to a simple question.
or simply mistaken ?
Yes again.
On 4/5/25 3:40 PM, The Natural Philosopher wrote:
On 05/04/2025 20:22, c186282 wrote:
Analog ...
Massive arrays of non linear analogue circuits for modelling things
like the Navier Stokes equations would be possible: Probably make a
better stab at climate modelling then the existing shit.
Again with analog, it's the sensitivity to especially
temperature conditions that add errors in. Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
Again, perhaps some meta-material that's NOT sensitive
to what typically throws-off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
Now discrete use of analog as, as you suggested, doing
multiplication/division/logs initiated and read by
digital ... ?
Oh well, we're out in sci-fi land with most of this ...
may as well talk about using giant evil brains in
jars as computers :-)
As some here have mentioned, we may be closer to the
limits of computer power than we'd like to think.
Today's big trick is parallelization, but only some
kinds of problems can be modeled that way.
Saw an article the other day about using some kind
of disulfide for de-facto transistors, but did not
get the impression that they'd be fast. I think
temperature resistance was the main thrust - industrial
apps, Venus landers and such.
On 4/5/25 18:27, c186282 wrote:
On 4/5/25 3:40 PM, The Natural Philosopher wrote:
On 05/04/2025 20:22, c186282 wrote:
Analog ...
Massive arrays of non linear analogue circuits for modelling things
like the Navier Stokes equations would be possible: Probably make a
better stab at climate modelling than the existing shit.
Again with analog, it's the sensitivity to especially
temperature conditions that add errors in. Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
Again, perhaps some meta-material that's NOT sensitive
to what typically throws-off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
Woogh! That makes my brain hurt.
<snip>
Actually, one of the things that Analog's still good at is real world
control systems with feedback loops and the like.
I had one project some time 'way back in the 80s where we were
troubleshooting a line that had a 1960s-era analog control system, and
one of the conversations that came up was whether to replace it with digital.
It got looked into and was determined that digital process controls
weren't fast enough for the line.
Fast-forward to ~2005. While back visiting that department, I found out that that old analog beast was still running the line and they were
trolling eBay for parts to keep it running.
On another visit ~2015, the update: they finally found a new digitally based control system that was fast enough to finally replace it & did.
Indeed ! However ... probably COULD be done, it's
a bunch of shifting values - input to some accts,
calc ops, shift to other accts ....... lots and
lots of rheostats ........
On Mon, 07 Apr 2025 17:59:45 -0400, c186282 wrote:
Indeed ! However ... probably COULD be done, it's
a bunch of shifting values - input to some accts,
calc ops, shift to other accts ....... lots and
lots of rheostats ........
How is conditional branching (e.g. an if-then-else statement)
to be implemented with analog circuits? It cannot be
done.
Analog computers are good for modelling systems that are
described by differential equations. Adders, differentiators,
and integrators can all be easily implemented with electronic
circuits. But beyond differential equation systems analog
computers are useless.
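To make that concrete: what an analog machine does continuously with
op-amp integrators can be mimicked digitally by stepping the same
differential equation - a minimal C sketch for dx/dt = -kx, with
illustrative constants:

#include <stdio.h>

/* Solve dx/dt = -k*x by explicit Euler steps - a digital stand-in
 * for the op-amp integrator at the heart of an analog computer. */
int main(void) {
    double x = 1.0, k = 2.0, dt = 1e-4;
    for (int i = 0; i < 10000; i++)  /* integrate over one second */
        x += -k * x * dt;            /* integrator: accumulate dx */
    printf("x(1) = %f (exact e^-2 = 0.135335)\n", x);
    return 0;
}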
The Norden bombsight of WWII was an electro-mechanical
computer. Its job was to calculate the trajectory of
a bomb released by an aircraft, and the trajectory is described
by a differential equation.
One of my professors told a story about a common "analog"
practice among engineers of the past. To calculate an integral,
which can be described as the area under a curve, they would plot
the curve on well made paper and then cut out (with scissors)
the plotted area and weigh it (on a lab balance). The ratio
of the cut-out area with a unit area of paper would be the
value of the integral. (Multi-dimensional integrals would
require carving blocks of balsa wood or a similar material.)
Of course it worked, but today integration is easy to perform
to practically unlimited accuracy using digital means.
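A minimal C sketch of the digital replacement for cut-and-weigh -
composite Simpson's rule for the area under a curve; the function and
step count are illustrative:

#include <stdio.h>
#include <math.h>

/* Composite Simpson's rule: area under f on [a,b] with n (even)
 * subintervals; error shrinks as O(h^4). */
static double simpson(double (*f)(double), double a, double b, int n) {
    double h = (b - a) / n, s = f(a) + f(b);
    for (int i = 1; i < n; i++)
        s += f(a + i * h) * ((i % 2) ? 4.0 : 2.0);
    return s * h / 3.0;
}

int main(void) {
    /* integral of sin from 0 to pi is exactly 2 */
    printf("%.10f\n", simpson(sin, 0.0, M_PI, 1000));
    return 0;
}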
On 4/7/25 4:39 PM, -hh wrote:
...
I had one project some time 'way back in the 80s where we were
troubleshooting a line that had a 1960s era analog control system, and
one of the conversations that came up was if to replace it with
digital. It got looked into and was determined that digital process
controls weren't fast enough for the line.
Fast-forward to ~2005. While back visiting that department, I found
out that that old analog beast was still running the line and they
were trolling eBay for parts to keep it running.
Hey, so long as it works well !
On another visit ~2015, the update: they finally found a new
digitally based control system that was fast enough to finally replace
it & did.
What was the thing doing ?
Oh, on-theme, apparently Team Musk's nerd squad
managed to CRASH a fair segment of the SSA customer
web sites while trying to add some "anti-fraud"
feature :-)
PROBABLY no COBOL involved ... well, maybe ....
On 4/7/25 17:59, c186282 wrote:
On 4/7/25 4:39 PM, -hh wrote:
...
I had one project some time 'way back in the 80s where we were
troubleshooting a line that had a 1960s era analog control system,
and one of the conversations that came up was if to replace it with
digital. It got looked into and was determined that digital process
controls weren't fast enough for the line.
Fast-forward to ~2005. While back visiting that department, I found
out that that old analog beast was still running the line and they
were trolling eBay for parts to keep it running.
Hey, so long as it works well !
It did, so long as there were parts for it.
On another visit ~2015, the update: they finally found a new
digitally based control system that was fast enough to finally
replace it & did.
What was the thing doing ?
It was running a high speed manufacturing line. If memory serves,
roughly 1200ppm, so 20 parts per second.
For a digital system that's a budget of ~50 milliseconds of total
processing time per part. One can see how early digital gear
couldn't maintain that pace, but as PCs got faster, it wasn't
really clear why it remained "too hard".
That seemed to have come from the architecture. It's a series of linked
tooling station heads, each head with 22(?) sets of tools running
basically in parallel, but because everything was indexed, a part that
went through Station 1 on Head A then went through Station 1 on
Head B, Station 1 on C, 1 on D, 1 on E, etc ...
The process had interactive feedback loops all over the place between
multiple heads (& other stuff), such that if Head E started to report
its hydraulic psi running high, that was because of an insufficient
anneal back between B & C, so turn up the voltage on the annealing
station ... and if that was already running high, then turn up the
voltage on an earlier annealing station.
But that wasn't all: it would make similar on-the-fly adjustments for
each of the individual Stations too, so if Tool 18 on Head G was
complaining, they could adjust settings on Tool 18 on Heads ABCDEF
upstream of G ... and on HIJK downstream, if that was a fix as well.
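A toy sketch (in C) of that kind of cascading adjustment - entirely
hypothetical, since the real logic is unknown; station counts, limits,
and step sizes are invented:

#include <stdio.h>
#include <stdbool.h>

#define NUM_ANNEAL 4

struct anneal_station { double volts, max_volts; };

/* If a head reports high hydraulic pressure, raise the voltage on the
 * nearest upstream annealing station that still has headroom, walking
 * further upstream if needed. */
static bool raise_upstream_anneal(struct anneal_station *s, int nearest) {
    for (int i = nearest; i >= 0; i--) {        /* nearest upstream first */
        if (s[i].volts + 0.5 <= s[i].max_volts) {
            s[i].volts += 0.5;                  /* small corrective step */
            printf("raised anneal station %d to %.1f V\n", i, s[i].volts);
            return true;
        }
    }
    return false;                               /* no headroom anywhere */
}

int main(void) {
    struct anneal_station anneal[NUM_ANNEAL] =
        { {9.9, 10.0}, {9.9, 10.0}, {9.5, 10.0}, {9.9, 10.0} };
    double head_e_psi = 2150.0, psi_limit = 2000.0;
    if (head_e_psi > psi_limit && !raise_upstream_anneal(anneal, NUM_ANNEAL - 1))
        printf("alarm: no upstream headroom left\n");
    return 0;
}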
It must have been an incredible project back in the 1960s to get it
all figured out and so well balanced.
The modernization eventually came along because the base machines were
expensive - probably "lost art" IMO - but were known to be capable of
running much faster, and it was the push to run faster that finally got
digitization over the goal line. I think they ended up just a shade
over 2000ppm; I'll ask the next time I stop by.
On 4/8/25 10:28, c186282 wrote:
Oh, on-theme, apparently Team Musk's nerd squad
managed to CRASH a fair segment of the SSA customer
web sites while trying to add some "anti-fraud"
feature :-)
PROBABLY no COBOL involved ... well, maybe ....
Oh, it's worse than that.
"The network crashes appear to be caused by an expansion initiated by
the Trump team of an existing contract with a credit-reporting agency
that tracks names, addresses and other personal information to verify customers’ identities. The enhanced fraud checks are now done earlier in the claims process and have resulted in a boost to the volume of
customers who must pass the checks."
<https://gizmodo.com/social-security-website-crashes-blamed-on-doge-software-update-2000586092>
Translation:
They *moved* where an existing credit agency check is done, but didn't
load test it before going live ... and golly, they broke it!
But the more important question here is:
**WHY** did they move where this check is done?
This check already existed, so moving where it's done isn't going
to catch more fraud.
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes, that's a deliberate waste of taxpayer dollars.
The only motivation I can see is propaganda: this change will find more 'fraud' at the contractor's check ... but not more fraud in total.
Expect them to use the before/after contractor numbers only to falsely
claim that they've found 'more' fraud. No, they're committing fraud.
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes,
that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you
don't have to worry AS much about the inner layers
of the city.
On 2025-04-09, c186282 <c186282@nnada.net> wrote:
There's a fundamental political rule, esp in 'democracies',
that goes "ALWAYS be seen as *DOING SOMETHING*"
Something must be done. This is something.
Therefore, this must be done.
-- Yes, Prime Minister
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes,
that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
Hey ... humans are only JUST so smart, AI is
even more stupid, and govt agencies .........
Likely the expense of the earlier checks does NOT add
up to much.
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to worry
AS much about the inner layers of the city.
On 4/8/25 22:29, c186282 wrote:
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down.
Yes, that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
Hey ... humans are only JUST so smart, AI is
even more stupid, and govt agencies .........
Likely the expense of the earlier checks does NOT add
up to much.
It might not be, but in this case, the benefit of the change is
literally zero ... and the expenses are not only more money to the
contractor who gets paid per check request, but also the cost of
higher bandwidth demands which is what caused the site to crash.
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to
worry AS much about the inner layers
of the city.
I don't really buy that, because of symmetry: when the workflow is that
a request has to successfully pass three gates, it's functionally
equivalent to (A x B x C), and the sequence doesn't matter: one gets the
same outcome for (C x B x A), (A x C x B), etc.
The primary motivation for order selection comes from optimization
factors, such as the 'costs' of each gate: one puts the cheap gates
that knock out the most requests early, and puts the slow/expensive
gates late, after the dataset's size has already been minimized.
On 4/9/25 2:18 PM, -hh wrote:
On 4/8/25 22:29, c186282 wrote:
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
<snip>
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to
worry AS much about the inner layers
of the city.
I don't really buy that, because of symmetry: when the workflow is
that a request has to successfully pass three gates, it's functionally
equivalent to (A x B x C), and the sequence doesn't matter: one gets
the same outcome for (C x B x A), (A x C x B), etc.
The primary motivation for order selection comes from optimization
factors, such as the 'costs' of each gate: one puts the cheap gates
that knock out the most requests early, and puts the slow/expensive
gates late, after the dataset's size has already been minimized.
I understand your reasoning here.
The point I was trying to make is a bit different
however - less to do with people trying to
defraud the system than with those seeking to
corrupt/destroy it. I see every web page, every
bit of HTML/PHP/JS executed, every little database
opened, as a potential source of fatal FLAWS enemies
can find and exploit to do great damage.
In that context, the sooner you can lock out pretenders
the better - less of the system exposed to the state-
sponsored hacks to analyze and pound at relentlessly.
Now Musk's little group DID make a mistake in
not taking bandwidth into account (and we do
not know how ELSE they may have screwed up
jamming new code into something they didn't
write) but 'non-optimal' verification order
MIGHT be worth the extra $$$ in an expanded
'security' context.
On 4/9/25 16:51, c186282 wrote:
<snip>
I understand your reasoning here.
The point I was trying to make is a bit different
however - less to do with people trying to
defraud the system than with those seeking to
corrupt/destroy it. I see every web page, every
bit of HTML/PHP/JS executed, every little database
opened, as a potential source of fatal FLAWS enemies
can find and exploit to do great damage.
In that context, the sooner you can lock out pretenders
the better - less of the system exposed to the state-
sponsored hacks to analyze and pound at relentlessly.
Sure, but that's not relevant here, because from a threat vulnerability
perspective, it's just one big 'black box' process. Anyone attempting to
probe doesn't receive intermediate milestones/checkpoints to know if
they successfully passed/failed a gate.
Now Musk's little group DID make a mistake in
not taking bandwidth into account (and we do
not know how ELSE they may have screwed up
jamming new code into something they didn't
write) but 'non-optimal' verification order
MIGHT be worth the extra $$$ in an expanded
'security' context.
Might be worth it if it actually enhanced security. It failed to do so, because their change was just a "shuffling of the existing deck chairs".
On Fri, 30 May 2025 07:22:51 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
Comes as a *total* shock, lemmetellya.
Joel wrote:
John Ames <commodorejohn@gmail.com> wrote:
On Fri, 30 May 2025 07:22:51 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
Comes as a *total* shock, lemmetellya.
Let's say they had cut that much with this DOGE BS (not that it's 100%
a stupid idea, of course, but they were not approaching it very
rationally), wouldn't the proposed tax breaks offset it? Wouldn't we
still be spending a large fortune every year on the damn military?
no because there would be tariffs that go to trumps pocket
military spending is the usa's biggest bill it's close to 90%
On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
Eh ... 'close enough'.
At least somebody TRIED ... that's incredibly rare
in the higher echelons.
DOGE will continue, hopefully integrating more smoothly
into the whole process. SOMEBODY has to keep an eye
out for STUPID stuff.
Joel wrote:
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 30/05/2025 20:21, % wrote:
Joel wrote:
John Ames <commodorejohn@gmail.com> wrote:
On Fri, 30 May 2025 07:22:51 -0000 (UTC) Lawrence D'Oliveiro
<ldo@nz.invalid> wrote:
Well, I guess that’s over, now that Elon Musk has left the
building. That’s the end of DOGE, without “saving” anywhere near
the trillion dollars he originally promised.
Comes as a *total* shock, lemmetellya.
Let's say they had cut that much with this DOGE BS (not that it's
100% a stupid idea, of course, but they were not approaching it
very rationally), wouldn't the proposed tax breaks offset it?
Wouldn't we still be spending a large fortune every year on the
damn military?
no because there would be tariffs that go to trumps pocket
military spending is the usa's biggest bill it's close to 90%
Military spending is pretty low in reality.
Laughable.
And it is a *pragmatic* program, whereas so much is spent on purely
*moral* initiatives to employ people who think they can therefore tell
you how to run your life better than you can yourself.
Never even heard of such a thing, unless you mean the war on drugs
(which is yet another discarding of money).
On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
No it is not the end of DOGE because some of them are still doing their intrusive data gathering and others have been put in charge of
agencies or established in pertinent positions in those agencies.
These are the people who shut down USAID and tried to cut
the funds allocated by the Congress to many agencies.
bliss
On 31/05/2025 21:46, Bobbie Sellers wrote:
On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
No it is not the end of DOGE because some of them are still doing
their intrusive data gathering and others have been put in charge of
agencies or established in pertinent positions in those agencies.
These are the people who shut down USAID and tried to cut
the funds allocated by the Congress to many agencies.
bliss
It's typical really.
Someone says, quite rightly, 'there's a lot of waste in government'.
So The Big Fart appoints someone to shut down EVERYTHING on the basis
that it will probably become obvious which bits are needed once they
are gone.
Stupid, clumsy, ill-advised and destructive.
On 5/31/25 14:24, The Natural Philosopher wrote:
On 31/05/2025 21:46, Bobbie Sellers wrote:
On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
Well, I guess that’s over, now that Elon Musk has left the
building. That’s the end of DOGE, without “saving” anywhere
near the trillion dollars he originally promised.
No it is not the end of DOGE because some of them are still
doing their intrusive data gathering and others have been put in
charge of agencies or established in pertinent positions in those
agencies.
These are the people who shut down USAID and tried to cut the
funds allocated by the Congress to many agencies.
bliss
It's typical really.
Someone says, quite rightly, 'there's a lot of waste in
government'.
And all that means is the money is not being spent to my personal
advantage. Call me a cynic but at 87 I have seen a lot of politicians
come and go.
So The Big Fart appoints someone to shut down EVERYTHING on the
basis that it will probably become obvious which bits are needed
once they are gone.
Stupid, clumsy, ill-advised and destructive.
What do you expect from a man who thinks a chainsaw is a good
illustration of his method? Big drama, and as destructive as it has
proven to be - even thou and I agree on that.
c186282 wrote:
On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
Eh ... 'close enough'.
At least somebody TRIED ... that's incredibly rare
in the higher echelons.
DOGE will continue, hopefully more smoothly integrate
into the whole process. SOMEBODY has to keep an eye
out for STUPID stuff.
he maybe cost that much
On 5/30/25 11:38 PM, % wrote:
c186282 wrote:
On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
Well, I guess that’s over, now that Elon Musk has left the building.
That’s the end of DOGE, without “saving” anywhere near the trillion
dollars he originally promised.
Eh ... 'close enough'.
At least somebody TRIED ... that's incredibly rare
in the higher echelons.
DOGE will continue, hopefully more smoothly integrate
into the whole process. SOMEBODY has to keep an eye
out for STUPID stuff.
he maybe cost that much
THIS year.
However the downstream impact of "No Stupid Stuff"
will be more profitable.
Alas the next non-MAGA admin will instantly nuke DOGE
and everything like it and go on a hiring and Stupid
Stuff crusade ......
IF Trump wants to have any "legacy" then much of this
stuff needs to be enshrined as federal LAW, not just
presidential directives/EOs. Laws are a lot harder
to un-do.
He is imposing a trillion-dollar debt increase on the USA by
continuing the stupid tax cut for the most well-off and the
corporations which make them their money at the expense of the workers.
That tax cut is why many call it "Trump Inflation". Cutting taxes
on the well-off is based on the "Trickle Down" theory, which has
been proven to do the opposite: the wealth has trickled upward
to the wealthy as the value of the Dollar has decreased.