• Re: Rewriting SSA. Is This A Chance For GNU/Linux?

    From chrisv@21:1/5 to All on Mon Mar 31 07:11:16 2025
    XPost: comp.os.linux.misc

    c186282 wrote:

    Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
    code-base is VERY risky. As said, most of those old COBOL
    apps on those old computers were basically PERFECT - and
    the fallout from being IMperfect is SEVERE - both politically
    and per-individual affected. Extreme caution is advised.

    Sorry for my below naive/stupid questions...

    How hard could SS be? Are the rules so complex? I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems. I don't know why it would be any harder than any
    "significant" piece of software, like spreadsheet or database
    software.

    I'm also wondering how large the code base could be, if it was written
    fifty years ago when a megabyte was a huge amount of memory.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to chrisv on Mon Mar 31 13:59:52 2025
    XPost: comp.os.linux.misc

    On Mon, 31 Mar 2025 07:11:16 -0500, chrisv wrote:


    How hard could SS be? Are the rules so complex? I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems.


    Yes, the rules are quite complex and they are changing all the time.

    Why should this be a problem? The rules are written by those without
    a clue about digital methodologies, and then these rules must be somehow
    worked into a digital programming framework. This is not easy.

    Consider a comparatively simple task like payroll processing. How does
    one calculate federal withholding tax? There is no straight formula.
    The process is a labyrinth of crazy regulations. First, there are different tables for married, single, and a few other categories. Then there are
    many different possibilities for what is actually taxable income. Are
    401K deductions exempt? What about IRA or health insurance deductions?
    Some deductions may be exempt from withholding but not SS/Medicare.
    The ramifications go on and on and on.
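    To make the branching concrete, here is a minimal sketch of table-driven withholding in Python. The brackets, rates, and deduction rules below are invented for illustration (they are NOT real IRS tables); the point is how fast the special cases multiply: filing status selects a table, 401K deferrals reduce the withholding base but not the SS/Medicare base, and so on.

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical annual brackets: (upper bound, marginal rate).
# These numbers are invented for illustration, NOT real IRS tables.
BRACKETS = {
    "single":  [(Decimal("11000"), Decimal("0.10")),
                (Decimal("44725"), Decimal("0.12")),
                (None,             Decimal("0.22"))],
    "married": [(Decimal("22000"), Decimal("0.10")),
                (Decimal("89450"), Decimal("0.12")),
                (None,             Decimal("0.22"))],
}

def withholding(gross, status, pretax_401k=Decimal("0")):
    """Simplified model: 401(k) deferrals reduce the income-tax
    withholding base but not the SS/Medicare (FICA) base."""
    taxable = gross - pretax_401k      # exempt from withholding...
    fica_base = gross                  # ...but not from SS/Medicare
    tax = Decimal("0")
    lower = Decimal("0")
    for upper, rate in BRACKETS[status]:
        if upper is None or taxable <= upper:
            tax += (taxable - lower) * rate
            break
        tax += (upper - lower) * rate
        lower = upper
    fica = fica_base * Decimal("0.0765")   # combined SS + Medicare rate
    cents = Decimal("0.01")
    return (tax.quantize(cents, ROUND_HALF_UP),
            fica.quantize(cents, ROUND_HALF_UP))

print(withholding(Decimal("50000"), "single", pretax_401k=Decimal("5000")))
```

    Every additional rule - extra categories, new deduction types, caps that change annually - adds another branch or table, which is exactly how payroll code turns into a labyrinth.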

    The federal W4 forms have also been changed, thanks to Trump's first term
    in office. The difference between the former and current forms is so drastic that all the code, and databases, that deal with them had to be completely rewritten.

    Then there are quarterly and annual reporting requirements, etc. etc.

    A company may have 50 employees but nearly each and every one is a special
    case unto itself, and the software has to deal with it all without any error.

    The SSA software is perhaps several orders of magnitude above this.

    Computer science is based on mathematics, which is an eminently logical process. But government regulations are not logical. There is no rhyme or reason to them.
    They emanate from disparate and disjoint sources, and then someone throws the 100 or so volumes of compiled law at the programmer and says do it.



    --
    Hail Linux! Hail FOSS! Hail Stallman!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Mon Mar 31 17:36:23 2025
    XPost: comp.os.linux.misc

    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 31/03/2025 04:08, pothead wrote:
    Financial institutions are still running COBOL as well as other legacy
    systems like TPF.

    But probably on PCs running IBM's Linux

    Lloyds are still on zSeries (you can see the job ads...) though no
    doubt with layers of more modern stuff around it.

    Replacing legacy systems is welcome, but a failed modernization that
    breaks your customers’ ability to use their bank accounts gets very expensive very fast.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Farley Flud on Mon Mar 31 17:38:26 2025
    XPost: comp.os.linux.misc

    On 2025-03-31, Farley Flud <fsquared@fsquared.linux> wrote:

    On Mon, 31 Mar 2025 07:11:16 -0500, chrisv wrote:

    How hard could SS be? Are the rules so complex? I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems.

    Have you ever looked at a payroll system, or bureaucracy in general?

    Yes, the rules are quite complex and they are changing all the time.

    Why should this be a problem? The rules are written by those without
    a clue about digital methodologies, and then these rules must be somehow worked into a digital programming framework. This is not easy.

    Consider a comparatively simple task like payroll processing.

    <snort> Simple? Hah!

    How does
    one calculate federal withholding tax? There is no straight formula.
    The process is a labyrinth of crazy regulations. First, there are different tables for married, single, and a few other categories. Then there are
    many different possibilities for what is actually taxable income. Are
    401K deductions exempt? What about IRA or health insurance deductions?
    Some deductions may be exempt from withholding but not SS/Medicare.
    The ramifications go on and on and on.

    Hear, hear.

    The federal W4 forms have also been changed, thanks to Trump's first term
    in office. The difference between the former and current forms is so drastic that all the code, and databases, that deal with them had to be completely rewritten.

    One year the Canadian government moved one box on the T4 form (our equivalent of the W4) one print position sideways. That meant that every other company
    in the country that had their own payroll system - and there were many at the time - had to change their system to adapt, even though the calculations hadn't changed. Utterly foolish.

    Then there are quarterly and annual reporting requirements, etc. etc.

    A company may have 50 employees but nearly each and every one is a special case unto itself, and the software has to deal with it all without any error.

    The SSA software is perhaps several orders of magnitude above this.

    Computer science is based on mathematics, which is an eminently logical process. But government regulations are not logical. There is no rhyme
    or reason to them. They emanate from disparate and disjoint sources and
    then someone throws the 100 or so volumes of compiled law at the programmer and says do it.

    One of my definitions of hell is having to be that programmer forever.

    I've always said that payrolls should never have been computerized.
    Computers are logical, and payrolls aren't. Unfortunately, the power
    of computers has been exploited to support gratuitous complexity.
    I bet DOGE will never get around to addressing that.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to Charlie Gibbs on Mon Mar 31 18:55:09 2025
    XPost: comp.os.linux.misc

    On Mon, 31 Mar 2025 17:38:26 GMT, Charlie Gibbs wrote:


    I've always said that payrolls should never have been computerized.
    Computers are logical, and payrolls aren't. Unfortunately, the power
    of computers has been exploited to support gratuitous complexity.


    That's why digital Unicode implementations are difficult and fraught
    with uncertainty and error.

    Language, and alphabets, are human concoctions and attempting to
    get computers to represent this human chaos is a continuing challenge
    even after almost 50 years.

    ASCII was easy: one character, one code.

    But Unicode is a completely different animal. For example, in order
    to meaningfully sort Unicode expressions one must resort to "normalization"
    of which there are several types. Consequently, the software that
    deals with all of this is highly complex.
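    A tiny Python illustration of why normalization exists: two strings that render identically can be different code point sequences, so naive comparison (and sorting) misbehaves until both sides are normalized to the same form (NFC here).

```python
import unicodedata

# "café" two ways: U+00E9 precomposed vs. "e" + U+0301 combining accent
precomposed = "caf\u00e9"
decomposed = "cafe\u0301"

print(precomposed == decomposed)           # False: different code points
print(len(precomposed), len(decomposed))   # 4 5

# Normalize both to NFC before comparing or sorting:
print(unicodedata.normalize("NFC", decomposed) == precomposed)   # True
```

    And NFC is only one of four forms (NFD, NFKC, NFKD); locale-aware collation adds yet more rules on top of this.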

    Digital computers were originally developed to, guess what, compute.
    That is, the digital computer was a critical adjunct to scientific
    and engineering calculations. They replaced the legions of human "calculators," usually women, who pounded away at desktop adding machines.

    However, once applications began to stray beyond science and engineering,
    the problems began.



    --
    Systemd: solving all the problems that you never knew you had.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to chrisv on Tue Apr 1 10:16:55 2025
    XPost: comp.os.linux.misc

    On 3/31/25 08:11, chrisv wrote:
    c186282 wrote:

    Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
    code-base is VERY risky. As said, most of those old COBOL
    apps on those old computers were basically PERFECT - and
    the fallout from being IMperfect is SEVERE - both politically
    and per-individual affected. Extreme caution is advised.

    Sorry for my below naive/stupid questions...

    How hard could SS be?

    In a word, "very".

    Are the rules so complex?

    In snapshot form, not too terribly bad. Problem is that there's been
    50+ years worth of revisions, and the documentation of every change is
    never 100.0000% perfect in every last detail.

    As such, it's become a "black box" that no one really knows what all it
    is doing, so it's a nightmare to try to document all the processes to try
    to reproduce it.

    This is why multiple Fortune 500 corporations have had projects over the
    years to try to replace COBOL, but they have repeatedly failed. For
    example, one that I was aware of was looking to use Smalltalk; I never
    paid attention enough to know if that was a good choice or not.


    I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems. I don't know why it would be any harder than any "significant" piece of software, like spreadsheet or database
    software.

    It is "big iron" mainframe stuff. Think of a single data center having literally *rows* of IBM 360's/370's.

    Granted, there's been huge growth for web-based centers that are running thousands of webservers/etc, but that's largely independent parallel
    capacity, not a single database, so that drives solution approaches too.

    I'm also wondering how large the code base could be, if it was written
    fifty years ago when a megabyte was a huge amount of memory.

    Yup. A system my wife worked on back in the 1990s for Y2K had literally
    a couple of **petabytes** of data storage being managed by their COBOL
    system. I doubt it has grown by all that much ... my guess is that
    they're probably still under ~50 petabytes today.


    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Heller@21:1/5 to recscuba_google@huntzinger.com on Tue Apr 1 14:40:27 2025
    XPost: comp.os.linux.misc

    It should be noted that GnuCOBOL actually translates COBOL to C, and then compiles the C code with GnuC. In *theory* one could just run the whole code base through GnuCOBOL and create a C code base, but good luck making much
    sense of the generated C code...

    At Tue, 1 Apr 2025 10:16:55 -0400 -hh <recscuba_google@huntzinger.com> wrote:


    On 3/31/25 08:11, chrisv wrote:
    c186282 wrote:

    Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
    code-base is VERY risky. As said, most of those old COBOL
    apps on those old computers were basically PERFECT - and
    the fallout from being IMperfect is SEVERE - both politically
    and per-individual affected. Extreme caution is advised.

    Sorry for my below naive/stupid questions...

    How hard could SS be?

    In a word, "very".

    Are the rules so complex?

    In snapshot form, not too terribly bad. Problem is that there's been
    50+ years worth of revisions, and the documentation of every change is
    never 100.0000% perfect in every last detail.

    As such, it's become a "black box" that no one really knows what all it
    is doing, so it's a nightmare to try to document all the processes to try
    to reproduce it.

    This is why multiple Fortune 500 corporations have had projects over the
    years to try to replace COBOL, but they have repeatedly failed. For example, one that I was aware of was looking to use Smalltalk; I never
    paid attention enough to know if that was a good choice or not.


    I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems. I don't know why it would be any harder than any "significant" piece of software, like spreadsheet or database
    software.

    It is "big iron" mainframe stuff. Think of a single data center having literally *rows* of IBM 360's/370's.

    Granted, there's been huge growth for web-based centers that are running thousands of webservers/etc, but that's largely independent parallel capacity, not a single database, so that drives solution approaches too.

    I'm also wondering how large the code base could be, if it was written fifty years ago when a megabyte was a huge amount of memory.

    Yup. A system my wife worked on back in the 1990s for Y2K had literally
    a couple of **petabytes** of data storage being managed by their COBOL system. I doubt it has grown by all that much ... my guess is that
    they're probably still under ~50 petabytes today.


    -hh



    --
    Robert Heller -- Cell: 413-658-7953 GV: 978-633-5364
    Deepwoods Software -- Custom Software Services
    http://www.deepsoft.com/ -- Linux Administration Services
    heller@deepsoft.com -- Webhosting Services

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to Robert Heller on Tue Apr 1 11:08:18 2025
    XPost: comp.os.linux.misc

    On 4/1/25 10:40, Robert Heller wrote:
    It should be noted that GnuCOBOL actually translates COBOL to C, and then compiles the C code with GnuC. In *theory* one could just run the whole code base through GnuCOBOL and create a C code base, but good luck making much sense of the generated C code...

    I had to listen to a PMP rant all weekend about how profoundly stupid of
    a plan this is from DOGE, especially the "in 5 months" claim. Overall,
    it sounds like another example of a "contract out to Elon" attempt,
    where they'll of course fail to meet any milestones, plus they'll skip
    any high quality testing, and dump whatever human overrides onto the
    already slashed staff to try to deal with while they claim "Victory!".

    Total bullshit.

    But in putting on a technologist's cap to contemplate what could be
    done, something like what you're referring to ... and/or other AI's
    being applied to comb & document what the code is doing ... should in
    theory be able to deliver an 80% solution (or better).

    The challenge then becomes building the test deck to verify that the functions were transcribed correctly, and that's going to have to be
    extensive to get into all of the currently-solved corner cases, so
    testing alone will take the better part of a year just to verify the functional definitions.
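    The shape of that test deck is essentially back-to-back (differential) testing: run the legacy system and the rewrite over the same case corpus and flag any divergence. A minimal Python sketch, with both benefit functions as hypothetical stand-ins:

```python
# Back-to-back (differential) testing sketch. Both functions below are
# hypothetical stand-ins: in practice one side would be the legacy
# COBOL system's recorded outputs, the other the rewrite.

def legacy_benefit(earnings):
    """Stand-in for the legacy system's computed benefit."""
    return round(min(earnings, 160200) * 0.32, 2)

def rewritten_benefit(earnings):
    """Stand-in for the rewritten system's computed benefit."""
    return round(min(earnings, 160200) * 0.32, 2)

def diff_test(cases):
    """Return every case where the two implementations disagree."""
    return [(c, legacy_benefit(c), rewritten_benefit(c))
            for c in cases
            if legacy_benefit(c) != rewritten_benefit(c)]

# Corner cases matter most: zeros, caps, and boundary values.
cases = [0, 1, 160199, 160200, 160201, 1_000_000]
print(diff_test(cases))   # [] means no divergence found on this corpus
```

    The hard part isn't this harness; it's assembling a corpus that actually reaches fifty years of accumulated corner cases.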

    And that's before any performance specifications or regulations are
    applied for V&V. For example, if the software safety standards of
    MIL-STD-882 are applied, it's a "Level 5" (highest) system, so there are
    huge time/cost implications for passing the Levels 1-5 requirements.
    The irony of this is that while DOGE claims it wants to eliminate
    all waste/fraud, Level 5's criterion is no more than $1M, not a 'reduce
    it to zero' propaganda goalpost.

    -hh

    > At Tue, 1 Apr 2025 10:16:55 -0400 -hh
    <recscuba_google@huntzinger.com> wrote:


    On 3/31/25 08:11, chrisv wrote:
    c186282 wrote:

    Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
    code-base is VERY risky. As said, most of those old COBOL
    apps on those old computers were basically PERFECT - and
    the fallout from being IMperfect is SEVERE - both politically
    and per-individual affected. Extreme caution is advised.

    Sorry for my below naive/stupid questions...

    How hard could SS be?

    In a word, "very".

    Are the rules so complex?

    In snapshot form, not too terribly bad. Problem is that there's been
    50+ years worth of revisions, and the documentation of every change is
    never 100.0000% perfect in every last detail.

    As such, it's become a "black box" that no one really knows what all it
    is doing, so it's a nightmare to try to document all the processes to try
    to reproduce it.

    This is why multiple Fortune 500 corporations have had projects over the
    years to try to replace COBOL, but they have repeatedly failed. For
    example, one that I was aware of was looking to use Smalltalk; I never
    paid attention enough to know if that was a good choice or not.


    I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems. I don't know why it would be any harder than any
    "significant" piece of software, like spreadsheet or database
    software.

    It is "big iron" mainframe stuff. Think of a single data center having
    literally *rows* of IBM 360's/370's.

    Granted, there's been huge growth for web-based centers that are running
    thousands of webservers/etc, but that's largely independent parallel
    capacity, not a single database, so that drives solution approaches too.

    I'm also wondering how large the code base could be, if it was written
    fifty years ago when a megabyte was a huge amount of memory.

    Yup. A system my wife worked on back in the 1990s for Y2K had literally
    a couple of **petabytes** of data storage being managed by their COBOL
    system. I doubt it has grown by all that much ... my guess is that
    they're probably still under ~50 petabytes today.


    -hh




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Robert Heller on Tue Apr 1 17:35:11 2025
    XPost: comp.os.linux.misc

    On Tue, 1 Apr 2025 14:40:27 -0000 (UTC), Robert Heller wrote:

    It should be noted that GnuCOBOL actually translates COBOL to C, and
    then compiles the C code with GnuC. In *theory* one could just run the
    whole code base through GnuCOBOL and create a C code base, but good luck making much sense of the generated C code...

    I never did it with COBOL but I did use 'f2c' to generate C code from
    Fortran. When I saw the results I wrote the C code manually. It probably
    would have compiled and run, but it was not maintainable. I've seen more readable output from disassemblers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to -hh on Tue Apr 1 18:08:11 2025
    XPost: comp.os.linux.misc

    On Tue, 1 Apr 2025 11:08:18 -0400, -hh wrote:

    I had to listen to a PMP rant all weekend about how profoundly stupid of
    a plan this is from DOGE, especially the "in 5 months" claim. Overall,
    it sounds like another example of a "contract out to Elon" attempt,
    where they'll of course fail to meet any milestones, plus they'll skip
    any high quality testing, and dump whatever human overrides onto the
    already slashed staff to try to deal with while they claim "Victory!".

    Total bullshit.

    Agreed on the five months. However I believe they started a project to
    update the systems in 2016 or 2017 and abandoned it to chase other
    squirrels in 2020. Bureaucracies are pain avoidant. How do you get them to
    do what has to be done over a span of administrations?

    https://dailycaller.com/2016/03/17/interior-dept-spent-15-million-on-a-crime-database-that-doesnt-work/

    Happily I don't have to deal with IMARS any more, but in recent years it
    wasn't any smoother than it was in 2016 when the article was written.
    NIBRS, the DOJ's attempt to update the UCR, hasn't gone well either. Both
    of these have been in the works for a decade or more.

    Part of the problem is giving the contracts to the usual suspects who are
    too big to fail.

    https://news.lockheedmartin.com/2014-08-04-Lockheed-Martin-Wins-90-Million-Metropolitan-Police-Service-Contract

    https://www.computerweekly.com/news/4500278605/Met-Police-cancels-command-and-control-system-contract-with-Northrop-Grumman

    It's not only the US. Northrop Grumman, Lockheed Martin, Unisys, and
    others spread their joy worldwide.

    After the F-35 debacle Boeing got the F-47 contract. I can't wait to see
    how that one goes. Boeing seems to have problems producing planes that can
    fly lately but they are not without other costly fails.

    https://en.wikipedia.org/wiki/SBInet

    Year after year, decade after decade, the US government pisses away money
    on failures by the same companies.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to chrisv on Tue Apr 1 15:09:13 2025
    XPost: comp.os.linux.misc

    On 3/31/25 8:11 AM, chrisv wrote:
    c186282 wrote:

    Oh, I agree ... trying to "rapidly rebuild" the "Just Works"
    code-base is VERY risky. As said, most of those old COBOL
    apps on those old computers were basically PERFECT - and
    the fallout from being IMperfect is SEVERE - both politically
    and per-individual affected. Extreme caution is advised.

    Sorry for my below naive/stupid questions...

    How hard could SS be? Are the rules so complex?


    Oh gawd YES !!! Big old govt bureaucracy with
    a zillion little tweaks and cheats thrown in to
    please some or another pol or pressure group.


    I know it's hundreds
    of millions of people, but that doesn't seem a huge challenge for
    modern systems. I don't know why it would be any harder than any "significant" piece of software, like spreadsheet or database
    software.

    I'm also wondering how large the code base could be, if it was written
    fifty years ago when a megabyte was a huge amount of memory.

    There were tricks to swap stuff to and from mass
    storage, also creative use of batch jobs.

    So, I'll stick with the *carefully* warning - and
    don't take down What Works until it's very reliably
    duplicated by the newer code.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Heller@21:1/5 to bowman@montana.com on Tue Apr 1 19:30:51 2025
    XPost: comp.os.linux.misc

    Yes, one of the things I did when I was working at UMass was to convert some FORTRAN code to C, and yes, automated tools commonly either did a poor job or created code that was not maintainable or readable. (And some of the code needed to be made multi-threaded for parallelization.)

    At 1 Apr 2025 17:35:11 GMT rbowman <bowman@montana.com> wrote:


    On Tue, 1 Apr 2025 14:40:27 -0000 (UTC), Robert Heller wrote:

    It should be noted that GnuCOBOL actually translates COBOL to C, and
    then compiles the C code with GnuC. In *theory* one could just run the whole code base through GnuCOBOL and create a C code base, but good luck making much sense of the generated C code...

    I never did it with COBOL but I did use 'f2c' to generate C code from Fortran. When I saw the results I wrote the C code manually. It probably would have compiled and run, but it was not maintainable. I've seen more readable output from disassemblers.



    --
    Robert Heller -- Cell: 413-658-7953 GV: 978-633-5364
    Deepwoods Software -- Custom Software Services
    http://www.deepsoft.com/ -- Linux Administration Services
    heller@deepsoft.com -- Webhosting Services

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to rbowman on Tue Apr 1 21:22:49 2025
    XPost: comp.os.linux.misc

    On 1 Apr 2025 18:08:11 GMT, rbowman wrote:


    Happily I don't have to deal with IMARS any more, but in recent years it
    wasn't any smoother than it was in 2016 when the article was written.
    NIBRS, the DOJ's attempt to update the UCR, hasn't gone well either. Both
    of these have been in the works for a decade or more.


    Ha, ha, ha, ha, ha, ha!

    I was expecting that blowhard blowman to step in with his tales
    of having done everything and anything.

    Sure enough. He'll soon admit that he was consulted for the
    transition but had to turn it down due to thousands of other
    offers.

    Before too long he'll even claim to be a confidant of Elon Musk.

    Ha, ha, ha, ha, ha, ha!


    --
    Systemd: solving all the problems that you never knew you had.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to John Ames on Tue Apr 1 22:07:13 2025
    XPost: comp.os.linux.misc

    On 4/1/25 3:16 PM, John Ames wrote:
    On Tue, 1 Apr 2025 15:09:13 -0400
    c186282 <c186282@nnada.net> wrote:

    So, I'll stick with the *carefully* warning - and don't take down
    What Works until it's very reliably duplicated by the newer code.

    But really, good sir, How Hard Could It Be!? (TM)

    If it was remotely easy they'd have done it a
    long time ago. Even a crappy PC has more oomph
    than those 60s minis and mainframes. Even RUST
    is more readable these days than that old COBOL.

    Bureaucrats DREAD changes - esp in IT equipment.
    They WILL drag their feet, stick with what they
    KNOW works, to the very last. Mistakes = horrible
    career collapses - maybe even no pension & perks !

    Alas, those 60s boxes ... yea, kinda TIME now.
    Almost nobody does five lines of COBOL anymore.
    The annoyance is that I've NEVER seen a utility
    that turns good COBOL into good anything else.
    It all kinda has to be re-done in a 'more modern'
    lang BY HAND, line by line with lots of comments.

    And no, GNU 'COBOL' ain't what most of that old
    code was writ in ....... those were COBOL distros
    unique to the antique boxes in question.

    Now, the contentious part - WHAT to re-write in ?
    IMHO ... PYTHON. It's readable and totally capable
    and widely known. Good for the next 25+ years.

    After that, it's all 'AI' and NOBODY will grok
    the code and methods ... all 'magic' thereafter,
    maybe evil magic ..........

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Robert Heller on Tue Apr 1 22:14:12 2025
    XPost: comp.os.linux.misc

    On 4/1/25 3:30 PM, Robert Heller wrote:
    Yes, one of the things I did when I was working at UMass was to convert some FORTRAN code to C, and yes, automated tools commonly either did a poor job or created code that was not maintainable or readable. (And some of the code needed to be made multi-threaded for parallelization.)

    You're pretty much right.

    Note also that the 'COBOL' all those old apps
    were written in were often kinda custom for
    ONE kind of antique box. NOT GNU "COBOL".

    Any re-writes ... use PYTHON. Yep, a lot of
    people will hate me for saying that but there
    are several good reasons.

    Alas it will have to be done line by line,
    by competent HUMANS.

    Note that old COBOL code has probably been
    rudely patched and tweaked over and over and
    over since 1965 ...... MESS !!!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Farley Flud on Tue Apr 1 22:20:19 2025
    XPost: comp.os.linux.misc

    On 4/1/25 5:22 PM, Farley Flud wrote:
    On 1 Apr 2025 18:08:11 GMT, rbowman wrote:


    Happily I don't have to deal with IMARS any more, but in recent years it
    wasn't any smoother than it was in 2016 when the article was written.
    NIBRS, the DOJ's attempt to update the UCR, hasn't gone well either. Both
    of these have been in the works for a decade or more.


    Ha, ha, ha, ha, ha, ha!

    I was expecting that blowhard blowman to step in with his tales
    of having done everything and anything.


    Awwww ... he's not THAT bad ! :-)

    HAS been involved in a lot of stuff, so it's
    at least worth paying some attention eh ? Has
    a few years even on me - remembers pre PDP-11.


    Sure enough. He'll soon admit that he was consulted for the
    transition but had to turn it down due to thousands of other
    offers.

    Before too long he'll even claim to be a confidant of Elon Musk.

    Ha, ha, ha, ha, ha, ha!

    Take it you don't like rbowman ...

    Try to move beyond it. NOT keen on
    flame-wars and personal trashing on
    the groups here - NEGATIVE gains.
    Do better.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to -hh on Tue Apr 1 23:32:30 2025
    XPost: comp.os.linux.misc

    On 4/1/25 11:08 AM, -hh wrote:
    On 4/1/25 10:40, Robert Heller wrote:
    It should be noted that GnuCOBOL actually translates COBOL to C, and then
    compiles the C code with GnuC.  In *theory* one could just run the
    whole code
    base through GnuCOBOL and create a C code base, but good luck making much
    sense of the generated C code...

    I had to listen to a PMP rant all weekend about how profoundly stupid of
    a plan this is from DOGE,

    Musk's "plan" isn't bad ... per se. As noted in a
    variety of news, some of those important federal
    agencies are STILL using 60s hardware & programming.

    The TRICK is getting any newer stuff RIGHT before it
    goes mainline. That old COBOL code was GREAT, never
    diss those narrow-tie Dilberts from the day.

    BUT ... as those COBOLs were kinda tweaked for the
    particular old boxes, no modern translation tool is
    gonna work worth a damn. It'll have to be HAND DONE,
    line by line, and done RIGHT. Forget Gen-Z/A2 and AI.

    As I said to be hated - PYTHON ! :-)

    Anyway, eventually it'll be all AI code. We
    won't understand it - all magic thereafter.

    Wave yer wand Harry !

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Wed Apr 2 11:40:21 2025
    XPost: comp.os.linux.misc

    On 02/04/2025 04:32, c186282 wrote:
    On 4/1/25 11:08 AM, -hh wrote:
    On 4/1/25 10:40, Robert Heller wrote:
    It should be noted that GnuCOBOL actually translates COBOL to C, and
    then
    compiles the C code with GnuC.  In *theory* one could just run the
    whole code
    base through GnuCOBOL and create a C code base, but good luck making
    much
    sense of the generated C code...

    I had to listen to a PMP rant all weekend about how profoundly stupid
    of a plan this is from DOGE,

      Musk's "plan" isn't bad ... per se. As noted in a
      variety of news, some of those important federal
      agencies are STILL using 60s hardware & programming.

      The TRICK is getting any newer stuff RIGHT before it
      goes mainline. That old COBOL code was GREAT, never
      diss those narrow-tie Dilberts from the day.

    I think that people here mostly haven't been exposed to business coding
    as much as technical coding.

    There is nothing clever about business *coding*. The clever bit is the
    business *analysis*, for which a very tight specification is written,

    The problem today is that many people can code, but almost no one
    designs the functionality any more. It's just 'thrown together'.


      BUT ... as those COBOLs were kinda tweaked for the
      particular old boxes, no modern translation tool is
      gonna work worth a damn. It'll have to be HAND DONE,
      line by line, and done RIGHT. Forget Gen-Z/A2 and AI.

    I am not sure how tweaked the stuff actually was.
    There are native COBOL compilers out there - Micro focus etc.

      As I said to be hated - PYTHON !  :-)

      Anyway, eventually it'll be all AI code. We
      won't understand it - all magic thereafter.

    It will all fuck up.

      Wave yer wand Harry !

    Wave yer willy, Elon.

    --
    Any fool can believe in principles - and most of them do!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to The Natural Philosopher on Wed Apr 2 16:04:07 2025
    XPost: comp.os.linux.misc

    On 4/2/25 06:40, The Natural Philosopher wrote:
    On 02/04/2025 04:32, c186282 wrote:
    On 4/1/25 11:08 AM, -hh wrote:
    On 4/1/25 10:40, Robert Heller wrote:
    It should be noted that GnuCOBOL actually translates COBOL to C, and
    then compiles the C code with GnuC.  In *theory* one could just run
    the whole code base through GnuCOBOL and create a C code base, but
    good luck making much sense of the generated C code...

    I had to listen to a PMP rant all weekend about how profoundly stupid
    of a plan this is from DOGE,

       Musk's "plan" isn't bad ... per-se. As noted in a
       variety of news, some of those important federal
       agencies are STILL using 60s hardware & programming.

       The TRICK is getting any newer stuff RIGHT before it
       goes mainline. That old COBOL code was GREAT, never
       diss those narrow-tie Dilberts from the day.

    I think that people here mostly haven't been exposed to business coding
    as much as technical coding.

    There is nothing clever about business *coding*. The clever bit is the business *analysis*, for which a very tight specification is written,

    The problem today is that many people can code, but almost no one
    designs the functionality any more. It's just 'thrown together'.

    Good point. This is effectively why my spouse's company had decided to
    stop hiring programmers and teaching them their business (especially as
    COBOL became rare): they inverted the script, hiring people to learn
    the core business, and once they'd developed a competency there, they
    were asked if they wanted to learn programming (eg, COBOL).


       Anyway, eventually it'll be all AI code. We
       won't understand it - all magic thereafter.

    It will all fuck up.

    Indeed. It is the 1990's "performance specification" but on steroids.


    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pothead@21:1/5 to Richard Kettlewell on Thu Apr 3 02:38:30 2025
    XPost: comp.os.linux.misc

    On 2025-03-31, Richard Kettlewell <invalid@invalid.invalid> wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 31/03/2025 04:08, pothead wrote:
    Financial institutions are still running COBOL as well as other legacy
    systems like TPF.

    But probably on PCs running IBMS linux

    Lloyds are still on zSeries (you can see the job ads...) though no
    doubt with layers of more modern stuff around it.

    Replacing legacy systems is welcome, but a failed modernization that
    breaks your customers’ ability to use their bank accounts gets very
    expensive very fast.


    Correct.
    COBOL and other legacy software are being run on IBM zSeries.


    --
    pothead
    Liberalism Is A Mental Disease
    Treat it accordingly <https://www.dailymail.co.uk/health/article-14512427/Doctors-reveal-symptoms-Trump-Derangement-Syndrome-tell-youve-got-it.html>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to pothead on Thu Apr 3 10:59:51 2025
    XPost: comp.os.linux.misc

    On 03/04/2025 03:38, pothead wrote:
    IBM zSeries

    Yeah, but those are massive beasts.
    System 360s were no more powerful than a PC.

    Smaller companies can run off those.

    --
    "The most difficult subjects can be explained to the most slow witted
    man if he has not formed any idea of them already; but the simplest
    thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid
    before him."

    - Leo Tolstoy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Thu Apr 3 12:10:18 2025
    XPost: comp.os.linux.misc

    On 03/04/2025 11:52, c186282 wrote:
    So yea, I can perfectly believe they're keeping 360s
      and such alive and working ...
    My last contact with a system 360 was back in about 1995 when a company
    making leather bound 'coffee table' books was finally retiring their
    glue covered 21 year old 360 and replacing it with an IBM PC, running
    IBMs version of Unix (AIX).

    They wanted email and at that time a modem and UUCP was the only way to
    do it.

    COBOL source could be recompiled to run on AIX.

    I don't think 360s are supported any more than is a DEC PDP11.

    --
    How fortunate for governments that the people they administer don't think.

    Adolf Hitler

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to pothead on Thu Apr 3 06:52:14 2025
    XPost: comp.os.linux.misc

    On 4/2/25 10:38 PM, pothead wrote:
    On 2025-03-31, Richard Kettlewell <invalid@invalid.invalid> wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 31/03/2025 04:08, pothead wrote:
    Financial institutions are still running COBOL as well as other legacy
    systems like TPF.

    But probably on PCs running IBMS linux

    Lloyds are still on zSeries (you can see the job ads...) though no
    doubt with layers of more modern stuff around it.

    Replacing legacy systems is welcome, but a failed modernization that
    breaks your customers’ ability to use their bank accounts gets very
    expensive very fast.


    Correct.
    COBOL and other legacy software are being run on IBM zSeries.

    BUT, if you're a career bcrat with a cushy job and
    fat pension at risk ... you STICK WITH WHAT YOU KNOW
    ACTUALLY *WORKS*.

    So yea, I can perfectly believe they're keeping 360s
    and such alive and working ...

    Let the NEXT guy rock the boat.

    I *do* understand and kinda agree with such an
    ultra-conservative approach.

    Also, as noted by various, the COBOL and ASM extensions
    and such were oft kinda custom to the PARTICULAR mini
    or mainframe ... so it's NOT just a matter of using
    some emulator/translator and expecting the best.

    For NOW ... 'parallel' systems are required. Give both
    the same data, same tasks. IF the new boxes/ware give
    the same results for a year or two THEN you quietly
    switch over. Get ahead of yourself and there WILL be
    hell to pay - want the entire AARP all over yer ass,
    threatening yer enabling pols ??? Bye-bye pension ...
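    That "parallel run" check sketches naturally as a record-by-record
    diff. A minimal toy version in Python (legacy_calc and new_calc are
    hypothetical stand-ins for the old and new systems, not anything real):

```python
# Shadow-run comparison: feed both systems the same cases, diff the results.
# legacy_calc / new_calc are hypothetical stand-ins for the two systems.

def legacy_calc(case: dict) -> int:
    return case["base"] + case["adj"]

def new_calc(case: dict) -> int:
    return case["base"] + case["adj"]

def parallel_run(cases):
    """Return every case where the two implementations disagree."""
    mismatches = []
    for case in cases:
        old, new = legacy_calc(case), new_calc(case)
        if old != new:
            mismatches.append((case, old, new))
    return mismatches

cases = [{"base": 1200, "adj": -50}, {"base": 980, "adj": 20}]
assert parallel_run(cases) == []   # no disagreements over the run -> cut over
```

    Run it against a year or two of real inputs; only an empty mismatch
    list earns the quiet switchover.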

    I remember a LITTLE TI-990 assembler ... but how
    many OTHERS ? They were fairly popular mini's and
    there's likely some at the SSA/IRS/etc still in
    use. 360s/370s even more likely. Would not even be
    surprised if one or two of the old discrete-transistor
    boxes from the late 50s aren't chugging in the
    basement somewhere. LOVE those spinny tapes :-)

    Hmmm ... 'conservators' should be let in soon, to
    collect/document/preserve that old hardware !

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Thu Apr 3 18:12:24 2025
    XPost: comp.os.linux.misc

    On Thu, 3 Apr 2025 06:52:14 -0400, c186282 wrote:

    So yea, I can perfectly believe they're keeping 360s and such alive
    and working ...

    Technically the System/360 was succeeded by the System/370 in 1970. They skipped 380 and went to System/390. However there is a lot of backward compatibility so the Zs can run a lot of the legacy software.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Thu Apr 3 16:38:57 2025
    XPost: comp.os.linux.misc

    On 4/3/25 2:12 PM, rbowman wrote:
    On Thu, 3 Apr 2025 06:52:14 -0400, c186282 wrote:

    So yea, I can perfectly believe they're keeping 360s and such alive
    and working ...

    Technically the System/360 was succeeded by the System/370 in 1970. They skipped 380 and went to System/390. However there is a lot of backward compatibility so the Zs can run a lot of the legacy software.

    But WHAT are the SSA and IRS still running in
    the basement ? :-)

    The 360/370 boxes WERE really popular, so I'm
    gonna GUESS there's at least one or two still
    chugging away. Maint cost would be insane these
    days ... but you can kinda bury that in the
    budget while new hardware stands out more and
    in more places.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Fri Apr 4 00:16:57 2025
    XPost: comp.os.linux.misc

    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

    The 360/370 boxes WERE really popular, so I'm gonna GUESS there's a
    least one or two still chugging away. Maint cost would be insane
    these days ... but you can kinda bury that in the budget while new
    hardware stands out more and in more places.

    I'm not sure there are any little old ladies left to knit magnetic core.
    The tape drives were fragile when new but they and the disk drives the
    size of a washing machine may have been replaced over the years. My mother
    had a plastic thing to keep layer cakes fresh that I was always reminded
    of by the 2311's removable media.

    https://d1yx3ys82bpsa0.cloudfront.net/groups/ibm-1311-2311.pdf

    7.5 MB!!! What can you ever use that much storage for? I bought a microSD
    last week for another RPi project. It's getting hard to find them less
    than 64 GB.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Thu Apr 3 23:03:09 2025
    XPost: comp.os.linux.misc

    On 4/3/25 8:16 PM, rbowman wrote:
    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

    The 360/370 boxes WERE really popular, so I'm gonna GUESS there's a
    least one or two still chugging away. Maint cost would be insane
    these days ... but you can kinda bury that in the budget while new
    hardware stands out more and in more places.

    I'm not sure there are any little old ladies left to knit magnetic core.


    Outsource to Vietnam :-)


    The tape drives were fragile when new but they and the disk drives the
    size of a washing machine may have been replaced over the years. My mother had a plastic thing to keep layer cakes fresh that I was always reminded
    of by the 2311's removable media.

    They DID kinda look like that !

    But, like in the old movies, I *do* remember the
    banks of tape units, the reels spinning back and
    forth. Hideous lookup/xfer rates but, for a time,
    they WERE "leading edge".

    And look REALLY cool in sci-fi movies !

    Write yer step by step by step batch jobs ...

    https://d1yx3ys82bpsa0.cloudfront.net/groups/ibm-1311-2311.pdf

    7.5 MB!!! What can you ever use that much storage for? I bought a microSD last week for another RPi project. It's getting hard to find them less
    than 64 GB.

    Actually it's getting hard to find anything under 128 GB lately - IF
    you can get 'em they may actually be MORE expensive than
    the larger units.

    The tariffs may freeze things for awhile ...

    My first 'PC' hard drive was 10 MB ! What INCREDIBLE
    capacity and speed !!! Full-height Rodime - you could
    kill someone if you hit 'em in the head with the
    heavy thing .......

    Today's money ... about $6000us.

    MFM ... or just 'FM' - can't remember.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Fri Apr 4 05:54:17 2025
    XPost: comp.os.linux.misc

    On Thu, 3 Apr 2025 23:03:09 -0400, c186282 wrote:

    And look REALLY cool in sci-fi movies !

    If you want kool:

    https://www.hackster.io/news/science-art-and-nostalgia-combined-hands-on-with-the-rc2014-mini-ii-picasso-8686eb339d59?mc_cid=fbe60fc614

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to rbowman on Fri Apr 4 09:38:36 2025
    XPost: comp.os.linux.misc

    On 04/04/2025 01:16, rbowman wrote:
    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

    The 360/370 boxes WERE really popular, so I'm gonna GUESS there's a
    least one or two still chugging away. Maint cost would be insane
    these days ... but you can kinda bury that in the budget while new
    hardware stands out more and in more places.

    I'm not sure there are any little old ladies left to knit magnetic core.

    Last time I looked they were little Asian ladies with teeny nimble fingers.
    Did all the coil winding in that factory.

    The tape drives were fragile when new but they and the disk drives the
    size of a washing machine may have been replaced over the years. My mother had a plastic thing to keep layer cakes fresh that I was always reminded
    of by the 2311's removable media.

    https://d1yx3ys82bpsa0.cloudfront.net/groups/ibm-1311-2311.pdf

    7.5 MB!!! What can you ever use that much storage for? I bought a microSD last week for another RPi project. It's getting hard to find them less
    than 64 GB.

    Sad but true. You would think they would give 8GB ones away in a box of cornflakes...

    --
    “Puritanism: The haunting fear that someone, somewhere, may be happy.”

    H.L. Mencken, A Mencken Chrestomathy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Fri Apr 4 08:30:23 2025
    XPost: comp.os.linux.misc

    On 4/4/25 4:38 AM, The Natural Philosopher wrote:
    On 04/04/2025 01:16, rbowman wrote:
    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

    The 360/370 boxes WERE really popular, so I'm gonna GUESS there's a
    least one or two still chugging away. Maint cost would be insane
    these days ... but you can kinda bury that in the budget while new
    hardware stands out more and in more places.

    I'm not sure there are any little old ladies left to knit magnetic core.

    Last time I looked they were little asian ladies wit teeny nimble fingers. Did all the coil winding in that factory.

    Look up "rope memory" :-)

    Hey, it flew us to the moon ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Fri Apr 4 18:53:21 2025
    XPost: comp.os.linux.misc

    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:

    On 4/4/25 4:38 AM, The Natural Philosopher wrote:
    On 04/04/2025 01:16, rbowman wrote:
    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

        The 360/370 boxes WERE really popular, so I'm gonna GUESS
        there's a least one or two still chugging away. Maint cost
        would be insane these days ... but you can kinda bury that in
        the budget while new hardware stands out more and in more
        places.

    I'm not sure there are any little old ladies left to knit magnetic
    core.

    Last time I looked they were little asian ladies wit teeny nimble
    fingers.
    Did all the coil winding in that factory.

    Look up "rope memory" :-)

    Then came twistor memory which begat bubble memory...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to All on Fri Apr 4 19:15:20 2025
    XPost: comp.os.linux.misc

    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:


    I'm not sure there are any little old ladies left to knit magnetic core.

    Last time I looked they were little asian ladies wit teeny nimble fingers.
    Did all the coil winding in that factory.

    Look up "rope memory" :-)

    Hey, it flew us to the moon ...


    Who would ever give a flying fuck about this "Neolithic" technical
    crap? It's the future that is of concern.

    My question has always been: when are these memory engineers (or
    whatever they are called) going to produce cheap RAM memory that
    can actually keep pace with the CPU?

    For decades we have had to use various levels of high speed, though
    minuscule, cache memory in order for our software to run, and from
    a programming point of view cache management is a supreme bitch.

    The world needs cheap RAM that can operate at CPU speeds. Then,
    all programming would be a supreme breeze.

    Cache memory is just another crutch, and its existence is indisputable testimony that modern PC hardware is crippled shit.


    --
    Systemd: solving all the problems that you never knew you had.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Farley Flud on Sat Apr 5 04:50:09 2025
    XPost: comp.os.linux.misc

    On 4/4/25 3:15 PM, Farley Flud wrote:
    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:


    I'm not sure there are any little old ladies left to knit magnetic core.

    Last time I looked they were little asian ladies wit teeny nimble fingers.
    Did all the coil winding in that factory.

    Look up "rope memory" :-)

    Hey, it flew us to the moon ...


    Who would ever give a flying fuck about this "Neolithic" technical
    crap? It's the future that is of concern.

    No future without a past.

    And past tricks/thinking/strategies CAN inspire
    the new.


    My question has always been: when are these memory engineers (or
    whatever they are called) going to produce cheap RAM memory that
    can actually keep pace with the CPU?

    Never ...

    OK, Moore's Law is getting close to stalling-out CPU
    performance. Ergo, give it a few years, the memory
    MAY finally catch up.

    For decades we have had to use various levels of high speed, though minuscule, cache memory in order for our software to run, and from
    a programming point of view cache management is a supreme bitch.

    The world needs cheap RAM that can operate at CPU speeds. Then,
    all programming would be a supreme breeze.

    Well, your 'future' isn't providing. MAYbe some odd
    idea from the past, just on better hardware ???

    Cache memory is just another crutch, and its existence is indisputable testimony that modern PC hardware is crippled shit.

    Well, I'd argue that on-chip cache is always gonna
    outperform - if for no other reason than the short
    circuit paths. These days, the speed of electricity
    over wires is becoming increasingly annoying - it's
    why they want photonics instead. Of course soon even
    that will be too slow soon enough ... and you can
    complain to Einstein .......

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Sat Apr 5 04:31:22 2025
    XPost: comp.os.linux.misc

    On 4/4/25 2:53 PM, rbowman wrote:
    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:

    On 4/4/25 4:38 AM, The Natural Philosopher wrote:
    On 04/04/2025 01:16, rbowman wrote:
    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

        The 360/370 boxes WERE really popular, so I'm gonna GUESS
        there's a least one or two still chugging away. Maint cost
        would be insane these days ... but you can kinda bury that in
        the budget while new hardware stands out more and in more
        places.

    I'm not sure there are any little old ladies left to knit magnetic
    core.

    Last time I looked they were little asian ladies wit teeny nimble
    fingers.
    Did all the coil winding in that factory.

    Look up "rope memory" :-)

    Then came twistor memory which begat bubble memory...

    Everybody had a trick back in the day. "Bubble"
    wasn't bad really ... just couldn't push up
    performance or capacity easily enough. Have you
    ever looked into 'FRAM' - ferroelectric - mem for
    embedded ? Much faster than flash, almost infinite
    re-write capability, BUT they can't get the density
    up much beyond the current, rather low, levels. It
    still has a useful place (and I've used it) but it
    has no "greater future" so far as I can discern.

    "Rope" was interesting because it was used in the
    NASA lunar lander vehicle. Basically cores on loose
    wire, and I think SPACING was important. Why the hell
    did they use that in 1969 ? Because, the way things
    work, the govt SPECS for the vehicle probably went
    out the door before JFK even finished his moon speech.
    That's where the e-tech was frozen for all intents.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to All on Sat Apr 5 10:21:02 2025
    XPost: comp.os.linux.misc

    On Sat, 5 Apr 2025 04:50:09 -0400, c186282 wrote:


    Well, I'd argue that on-chip cache is always gonna
    outperform - if for no other reason than the short
    circuit paths. These days, the speed of electricity
    over wires is becoming increasingly annoying - it's
    why they want photonics instead. Of course soon even
    that will be too slow soon enough ... and you can
    complain to Einstein .......


    No, but we can move to quantum computing, which may become
    a reality before too long.

    Will FOSS be ready?



    --
    Systemd: solving all the problems that you never knew you had.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sat Apr 5 11:41:11 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 09:31, c186282 wrote:
    On 4/4/25 2:53 PM, rbowman wrote:
    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:

    On 4/4/25 4:38 AM, The Natural Philosopher wrote:
    On 04/04/2025 01:16, rbowman wrote:
    On Thu, 3 Apr 2025 16:38:57 -0400, c186282 wrote:

         The 360/370 boxes WERE really popular, so I'm gonna GUESS
         there's a least one or two still chugging away. Maint cost
         would be insane these days ... but you can kinda bury that in
         the budget while new hardware stands out more and in more
         places.

    I'm not sure there are any little old ladies left to knit magnetic
    core.

    Last time I looked they were little asian ladies wit teeny nimble
    fingers.
    Did all the coil winding in that factory.

        Look up "rope memory"  :-)

    Then came twistor memory which begat bubble memory...

      Everybody had a trick back in the day. "Bubble"
      wasn't bad really ... just couldn't push up
      performance or capacity easily enough. Have you
      ever looked into 'FRAM' - ferroelectric - mem for
      embedded ? Much faster than flash, almost infinite
      re-write capability, BUT they can't get the density
      up much beyond the current, rather low, levels. It
      still has a useful place (and I've used it) but it
      has no "greater future" so far as I can discern.

      "Rope" was interesting because it was used in the
      NASA lunar lander vehicle. Basically cores on loose
      wire, and I think SPACING was important. Why the hell
      did they use that in 1969 ? Because, the way things
      work, the govt SPECS for the vehicle probably went
      out the door before JFK even finished his moon speech.
      That's where the e-tech was frozen for all intents.

    Project I worked on was undersea repeater for optical cables.
    Probably the worst organised and specified project ever, but that's a by
    the by. They said 'you are lucky we are allowed to use a silicon
    processor, up to 5 years ago we had to use germanium.' 'Why?' 'Because
    that was the only technology more than 15 years old that could be
    guaranteed to last the 15 years.'


    --
    The lifetime of any political organisation is about three years before
    its been subverted by the people it tried to warn you about.

    Anon.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sat Apr 5 12:13:06 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 09:50, c186282 wrote:
    On 4/4/25 3:15 PM, Farley Flud wrote:
    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:


    I'm not sure there are any little old ladies left to knit magnetic
    core.

    Last time I looked they were little asian ladies wit teeny nimble
    fingers.
    Did all the coil winding in that factory.

        Look up "rope memory"  :-)

        Hey, it flew us to the moon ...


    Who would ever give a flying fuck about this "Neolithic" technical
    crap?  It's the future that is of concern.

      No future without a past.

      And past tricks/thinking/strategies CAN inspire
      the new.

    Indeed.
    Many ideas that were infeasible, become feasible with new technology.
    Many dont. Windmills being a prime example...


    My question has always been: when are these memory engineers (or
    whatever they are called) going to produce cheap RAM memory that
    can actually keep pace with the CPU?

      Never ...

    The problem is distance between elements times the size of elements
    divided by the speed of light.

    It means that you need to start going 3D on memory to keep the
    speed/capacity within bounds.
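    To put rough numbers on that distance limit (the 0.5c velocity factor
    below is just an assumed figure for real interconnect; actual values
    vary with the medium):

```python
# Back-of-the-envelope: how far can a signal travel in one CPU clock cycle?

C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.5    # assumed fraction of c for on-chip/board signals

def reach_per_cycle_mm(clock_hz: float) -> float:
    """Distance in mm a signal covers during one clock period."""
    period_s = 1.0 / clock_hz
    return C * VELOCITY_FACTOR * period_s * 1000.0

for ghz in (1, 3, 4):
    print(f"{ghz} GHz: {reach_per_cycle_mm(ghz * 1e9):.1f} mm per cycle")
```

    At 4 GHz the signal budget is a few centimetres per cycle, one way,
    before you even touch a memory cell - hence on-chip cache and the
    push towards stacking memory in 3D.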

    It has its parallel in human political structures. Huge monolithic
    empires like the USSR simply fail to keep up, because the information
    from the bottom takes a long time to get to the top.

    A far better solution is the old British Empire, with governors having
    power over a local nation, and very few decisions being centralised.


      OK, Moore's Law is getting close to stalling-out CPU
      performance. Ergo, give it a few years, the memory
      MAY finally catch up.

    Same laws are governing both. What is happening is more local on chip
    cache.
    IIRC the RP2040 has 256K of memory *on the chip itself*.

    That's local. Hence as fast as the chip is.


    For decades we have had to use various levels of high speed, though
    minuscule, cache memory in order for our software to run, and from
    a programming point of view cache management is a supreme bitch.

    The world needs cheap RAM that can operate at CPU speeds.  Then,
    all programming would be a supreme breeze.

    It already has it, just not in the sizes you want, because of the
    propagation delay inherent in large arrays.

    We can clock the CPUs up to 4GHz+ mainly because we can make 'em down
    to 10nm element size.

    Below that you start to get into quantum effects and low yields.

    DDR5 RAM is pushing 3GHz speeds

      Well, your 'future' isn't providing. MAYbe some odd
      idea from the past, just on better hardware ???

    The better idea is to look at what the actual problems are, and design massively parallel solutions to them that do not require a single
    processor running blindingly fast.

    Cache memory is just another crutch, and its existence is indisputable
    testimony that modern PC hardware is crippled shit.

      Well, I'd argue that on-chip cache is always gonna
      outperform - if for no other reason than the short
      circuit paths. These days, the speed of electricity
      over wires is becoming increasingly annoying - it's
      why they want photonics instead. Of course soon even
      that will be too slow soon enough ... and you can
      complain to Einstein .......

    Can't yet beat the speed of light. Photonics is not much faster than
    electronics. Back in the day we measured delay on a reel of 50 ohm
    coax. It was about 0.95 the speed of light IIRC.

    So that isn't the way to go.

    Look, you need to study the history of engineering.

    Let's take a steam engine. Early engines crude, inefficient and very
    heavy and large. Maybe 1% efficient.
    Roll forward to the first locomotives and still heavy but now getting
    5% efficiency.

    Fiddle with that for a hundred years and the final efficiency of a steam
    piston engine approached the theoretical limits of the technology
    without cooling or superheated steam of around 20%. Now use superheated
    steam in a steam turbine with a condenser strapped on the back -
    suitable for ships or power stations, - and you are getting up to 37%.

    But that is fundamentally it. There is a law governing it:

    Efficiency = (Steam temperature IN - Steam temperature OUT) / (Steam
    temperature IN), with temperatures in degrees absolute.

    so for 400°C in and say 100°C out, TIN = 673 K, TOUT = 373 K, which
    gives a max thermal efficiency of around 45%

    You simply will never do better than that with water as the working
    fluid unless you go to horrendous inlet temps of superheated steam
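    That limit is easy to sanity-check. A minimal sketch using the
    absolute temperatures quoted above (673 K in, 373 K out):

```python
# Carnot limit: eta = (T_in - T_out) / T_in, temperatures in kelvin.

def carnot_efficiency(t_in_k: float, t_out_k: float) -> float:
    """Maximum thermal efficiency between a hot and a cold reservoir."""
    return (t_in_k - t_out_k) / t_in_k

# 673 K in, 373 K out -> prints 44.6%, i.e. 'around 45%'
print(f"{carnot_efficiency(673.0, 373.0):.1%}")
```

    No amount of clever valve gear gets past that number; only raising
    the inlet temperature (or lowering the exhaust temperature) does.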


    The point is that every technology has a limit beyond which no amount
    of tinkering is going to get you. Engineers come to understand this;
    the lay public do not. They are always whining 'why can't you power
    the universe from one single bonfire'

    Digital computing has a little ways to go, but it is already close to
    the limits

    For some problems, precision analogue might be faster...

    --
    Climate Change: Socialism wearing a lab coat.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sat Apr 5 19:11:42 2025
    XPost: comp.os.linux.misc

    On Sat, 5 Apr 2025 04:31:22 -0400, c186282 wrote:


    "Rope" was interesting because it was used in the NASA lunar lander
    vehicle. Basically cores on loose wire, and I think SPACING was
    important. Why the hell did they use that in 1969 ? Because, the way
    things work, the govt SPECS for the vehicle probably went out the
    door before JFK even finished his moon speech.
    That's where the e-tech was frozen for all intents.

    The specification document for projects like that takes years to write
    with fights over every paragraph, including whether it should be vii., 7.,
    G., or g.. There is so much ego involvement and politics that when it is finalized that is what shall be done, even if problems are recognized as
    the project actually gets underway.

    That leads to the F-35, Zumwalt class destroyers, and both varieties of
    the LCS.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to The Natural Philosopher on Sat Apr 5 19:20:25 2025
    XPost: comp.os.linux.misc

    On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:

    Project I worked on was undersea repeater for optical cables.
    Probably the worst organised and specified project ever, but that's a by
    the by. They said 'you are lucky we are allowed to use a silicon
    processor, up to 5 years ago we had to use germanium' 'Why?' Because
    that was the only technology more than 15 years old that could be
    guaranteed to last the 15 years'

    When we built sequential runway lighting controllers the wire harnesses
    had to be laced because the nylon ties used in industry hadn't had a
    couple of decades of use.

    https://en.wikipedia.org/wiki/Cable_lacing

    Luckily all our techs were women, most of whom had knitting or macrame
    skills. The heart of the system was an electro-mechanical stepper that was
    also mostly obsolete in industry.

    The SSA probably has people saying 'Rust? Well maybe after it's proven
    itself for 60 years like COBOL.'

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sat Apr 5 15:22:23 2025
    XPost: comp.os.linux.misc

    On 4/5/25 7:13 AM, The Natural Philosopher wrote:
    On 05/04/2025 09:50, c186282 wrote:
    On 4/4/25 3:15 PM, Farley Flud wrote:
    On Fri, 4 Apr 2025 08:30:23 -0400, c186282 wrote:


    I'm not sure there are any little old ladies left to knit magnetic
    core.

    Last time I looked they were little Asian ladies with teeny nimble
    fingers.
    Did all the coil winding in that factory.

        Look up "rope memory"  :-)

        Hey, it flew us to the moon ...


    Who would ever give a flying fuck about this "Neolithic" technical
    crap?  It's the future that is of concern.

       No future without a past.

       And past tricks/thinking/strategies CAN inspire
       the new.

    Indeed.
    Many ideas that were infeasible, become feasible with new technology.
    Many don't. Windmills being a prime example...


    My question has always been: when are these memory engineers (or
    whatever they are called) going to produce cheap RAM memory that
    can actually keep pace with the CPU?

       Never ...

    The problem is distance between elements times the  size of elements
    divided by the speed of light.

    It means that you need to start going 3D on memory to keep the
    speed/capacity within bounds.

    It has its parallel in human political structures. Huge monolithic
    empires like the USSR simply fail to keep up, because the information
    from the bottom takes a long time to get to the top.

    A far better solution is the old British Empire, with governors having
    power over a local nation, and very few decisions being centralised.


       OK, Moore's Law is getting close to stalling-out CPU
       performance. Ergo, give it a few years, the memory
       MAY finally catch up.

    Same laws are governing both. What is happening is more local on-chip cache.
    IIRC the RP2040 has 256K of memory *on the chip itself*.

    That's local. Hence as fast as the chip is.
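    The propagation-delay argument above is easy to put numbers on. A rough back-of-envelope sketch (assuming vacuum-speed propagation, which is generous — on-chip and PCB signals travel well below c):

    ```python
    # Upper bound on how far a signal can travel in one clock cycle.
    # Real signals in silicon and copper are slower, so the true reach is shorter.

    C = 299_792_458  # speed of light in vacuum, m/s

    def distance_per_cycle_cm(clock_hz: float) -> float:
        """Maximum one-way distance (cm) a signal could cover in one clock cycle."""
        return C / clock_hz * 100

    # At 4 GHz a signal covers at most ~7.5 cm per cycle, one way --
    # a round trip out to off-chip DRAM and back cannot fit in one tick,
    # which is why on-chip cache wins and why memory must go 3D.
    for ghz in (1, 4):
        print(f"{ghz} GHz: {distance_per_cycle_cm(ghz * 1e9):.1f} cm per cycle")
    ```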


    For decades we have had to use various levels of high speed, though
    minuscule, cache memory in order for our software to run, and from
    a programming point of view cache management is a supreme bitch.

    The world needs cheap RAM that can operate at CPU speeds.  Then,
    all programming would be a supreme breeze.

    It already has it, just not in the sizes you want. Because of the
    propagation delay inherent in large arrays.

    We can clock the CPUs up to 4 GHz ± mainly because we can make 'em down to 10 nm element size.

    Below that you start to get into quantum effects and low yields.

    DDR5 RAM is pushing 3GHz speeds

       Well, your 'future' isn't providing. MAYbe some odd
       idea from the past, just on better hardware ???

    The better idea is to look at what the actual problems are, and design massively parallel solutions to them that do not require a single
    processor running blindingly fast.

    Cache memory is just another crutch, and its existence is indisputable
    testimony that modern PC hardware is crippled shit.

       Well, I'd argue that on-chip cache is always gonna
       outperform - if for no other reason than the short
       circuit paths. These days, the speed of electricity
       over wires is becoming increasingly annoying - it's
    why they want photonics instead. Of course even
    that will be too slow soon enough ... and you can
    complain to Einstein .......

    Can't yet beat the speed of light. Photonics is not much faster than
    electronics.
    Back in the day we measured delay on a reel of 50 ohm coax. It was about
    .95 the speed of light IIRC.

    So that isn't the way to go,

    Look, you need to study the history of engineering.

    Let's take a steam engine. Early engines crude, inefficient and very
    heavy and large. Maybe 1% efficient.
    Roll forward to the first locomotives and still heavy but now getting 5% efficiency.

    Fiddle with that for a hundred years and the final efficiency of a steam piston engine approached the theoretical limits of the technology
    without cooling or superheated  steam of around 20%. Now use superheated steam in a steam turbine with a condenser strapped on the back -
    suitable for ships or power stations - and you are getting up to 37%.

    But that is fundamentally it.  There is a law governing it:
    Efficiency = (Steam temperature IN - Steam temperature OUT)/(Steam temperature IN), temperatures absolute (kelvin)

    so for 400°C in and say 100°C out, TIN = 673 K, TOUT = 373 K gives a max thermal efficiency of 45%

    You simply will never do better than that with water as the working
    fluid unless you go to horrendous inlet temps of superheated steam


    The point is that every technology has a limit beyond which no amount of tinkering is going to get you. Engineers come to understand this, the
    lay public do not.  They are always whining 'why cant you power the
    universe form one single bonfire'

    Digital computing has a little ways to go, but it is already close to
    the limits.

    For some problems, precision analogue might be faster...

    The greenies were playing a scheme that involved continually
    demanding higher fuel efficiency from IC engines, KNOWING
    that practical/theoretical limits would soon be reached and
    then they could ban manufacture of anything that could not
    meet their 'one easy step better' standard. Physics was their
    hidden dagger. (combustion chemistry too)

    Digital ... note that clock speeds haven't really risen in
    a LONG time. They can, to a point, make them 'work smarter'
    but how much more ? Not all tasks/problems lend themselves
    to parallel processing methods either.

    So, yea, we're pretty much there.

    Quantum ... not great for all kinds of problems AND the
    error issue is still TOO severe.

    Analog ... it still may have certain uses, however for
    chain operations the accumulated errors WILL getcha.
    Might be a 'near' or 'finely-quantized' sort of
    analog - 100,000 distinct, non-drifting, states that
    for some practical purposes LOOKS like traditional
    analog. So long as you don't need TOO many decimal
    points ......

    Finally ... non-binary computing, eight or ten states
    per "bit". Fewer operations, fewer gates twiddling,
    better efficiency anyhow, potentially better speed. But
    doing it with anything like traditional semiconductors,
    cannot see how.

    The point is that not all kinds of "computing" can become
    faster and faster with little limit. Material and plain
    old physical issues - speed of light/electricity as we
    noted. Amazing we've "hit the wall" THIS soon.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to rbowman on Sat Apr 5 20:23:24 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 20:20, rbowman wrote:
    On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:

    Project I worked on was undersea repeater for optical cables.
    Probably the worst organised and specified project ever, but that's a by
    the by. They said 'you are lucky we are allowed to use a silicon
    processor, up to 5 years ago we had to use germanium' 'Why?' Because
    that was the only technology more than 15 years old that could be
    guaranteed to last the 15 years'

    When we built sequential runway lighting controllers the wire harnesses
    had to be laced because the nylon ties used in industry hadn't had a
    couple of decades of use.

    https://en.wikipedia.org/wiki/Cable_lacing

    Luckily all our techs were women, most of whom had knitting or macrame skills. The heart of the system was an electro-mechanical stepper that was also mostly obsolete in industry.

    I was taught wire lacing as an apprentice. Doesn't need women.

    The SSA probably has people saying 'Rust? Well maybe after it's proven
    itself for 60 years like COBOL.'

    --
    “Some people like to travel by train because it combines the slowness of
    a car with the cramped public exposure of an airplane.”

    Dennis Miller

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to All on Sat Apr 5 20:00:53 2025
    XPost: comp.os.linux.misc

    On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:


    Digital ... note that clock speeds haven't really risen in
    a LONG time. They can, to a point, make them 'work smarter'
    but how much more ? Not all tasks/problems lend themselves
    to parallel processing methods either.

    So, yea, we're pretty much there.


    The supercomputer people would disagree.

    Supercomputers, based on Linux, just keep on getting faster.

    The metric is matrix multiplication, a classic problem in cache
    management.

    I don't know about the architecture of supercomputers but
    the limit seems to be still quite open.



    --
    Gentoo: The Fastest GNU/Linux Hands Down

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sat Apr 5 20:40:56 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 20:22, c186282 wrote:
    Analog ... it still may have certain uses, however for
      chain operations the accumulated errors WILL getcha.
      Might be a 'near' or 'finely-quantitized' sort of
      analog - 100,000 distinct, non-drifting, states that
      for some practical purposes LOOKS like traditional
      analog. So long as you don't need TOO many decimal
      points ......
    Analogue multiplication is the holy grail and can be done using the exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
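    The translinear trick behind chips like that multiplier is to sum logarithms and then exponentiate: since exp(ln a + ln b) = a·b, a pair of log stages, a summer, and an antilog stage form a multiplier. A toy numeric sketch of the principle (pure arithmetic, not a circuit model):

    ```python
    import math

    def log_domain_multiply(a: float, b: float) -> float:
        """Multiply two positive values via log / sum / antilog, mimicking
        how bipolar translinear multipliers exploit the exponential I-V law."""
        return math.exp(math.log(a) + math.log(b))

    print(log_domain_multiply(3.0, 4.0))  # ~12.0
    ```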

      Finally ... non-binary computing, eight or ten states
      per "bit". Fewer operations, fewer gates twiddling,
      better efficiency anyhow, potentially better speed. But
      doing it with anything like traditional semiconductors,
      cannot see how.
    Non binary computing is essentially analogue computing

    It is already done in Flash RAM where more than two states of the memory capacitors are possible

    Massive arrays of non linear analogue circuits for modelling things like
    the Navier Stokes equations would be possible: Probably make a better
    stab at climate modelling than the existing shite.

    --
    "First, find out who are the people you can not criticise. They are your oppressors."
    - George Orwell

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E.R.@21:1/5 to rbowman on Sat Apr 5 22:10:57 2025
    XPost: comp.os.linux.misc

    On 2025-04-05 21:20, rbowman wrote:
    On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:

    Project I worked on was undersea repeater for optical cables.
    Probably the worst organised and specified project ever, but that's a by
    the by. They said 'you are lucky we are allowed to use a silicon
    processor, up to 5 years ago we had to use germanium' 'Why?' Because
    that was the only technology more than 15 years old that could be
    guaranteed to last the 15 years'

    When we built sequential runway lighting controllers the wire harnesses
    had to be laced because the nylon ties used in industry hadn't had a
    couple of decades of use.

    https://en.wikipedia.org/wiki/Cable_lacing

    Luckily all our techs were women, most of whom had knitting or macrame skills. The heart of the system was an electro-mechanical stepper that was also mostly obsolete in industry.

    The SSA probably has people saying 'Rust? Well maybe after it's proven
    itself for 60 years like COBOL.'

    I prefer cable lacing to the nylon ties.

    I have seen nylon ties degrade and break, especially under the sun.

    Nylon ties are often tied very tight, so tight that they can damage the
    cable insulation and perhaps the outer conductors.

    Nylon ties cut your hand when you push it through the mesh of cables
    trying to get one.

    Maybe #2 and #3 are due to not using a certified tool for tying
    them, but I have never seen such a tool. It was mentioned, but nobody
    had one.


    Oh, I have seen steel ties, Chinese made.

    --
    Cheers, Carlos.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to rbowman on Sat Apr 5 16:24:32 2025
    XPost: comp.os.linux.misc

    rbowman wrote this post while blinking in Morse code:

    On Sat, 5 Apr 2025 04:31:22 -0400, c186282 wrote:

    "Rope" was interesting because it was used in the NASA lunar lander
    vehicle. Basically cores on loose wire, and I think SPACING was
    important. Why the hell did they use that in 1969 ? Because, the way
    things work, the govt SPECS for the vehicle probably went out the
    door before JFK even finished his moon speech.
    That's where the e-tech was frozen for all intents.

    The specification document for projects like that takes years to write
    with fights over every paragraph, including whether it should be vii., 7., G., or g.. There is so much ego involvement and politics that when it is finalized that is what shall be done, even if problems are recognized as
    the project actually gets underway.

    That leads to the F-35, Zumwalt class destroyers, and both varieties of
    the LCS.

    Endless online meetings
    Turf wars
    Requirements creep
    Scope creep
    Going in a different direction

    And then there's the crufty online tool for filling in modeling information and generating a bigass document from it. Mercifully I have forgotten the name.

    --
    People

    The sage does not distinguish between himself and the world;
    The needs of other people are as his own.
    He is good to those who are good;
    He is also good to those who are not good,
    Thereby he is good.
    He trusts those who are trustworthy;
    He also trusts those who are not trustworthy,
    Thereby he is trustworthy.
    The sage lives in harmony with the world,
    And his mind is the world's mind.
    So he nurtures the worlds of others
    As a mother does her children.
    -- Lao Tse, "Tao Te Ching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Carlos E.R. on Sat Apr 5 20:27:59 2025
    XPost: comp.os.linux.misc

    On Sat, 5 Apr 2025 22:10:57 +0200, Carlos E.R. wrote:

    Nylon ties are often tied very tight, so tight that they can damage the
    cable insulation and perhaps the outer conductors.

    Nylon ties cut your hand when you push it through the mesh of cables
    trying to get one.

    Maybe the #2 and #3 are due to not using a certified tool for tying
    them, but I have never seen such a tool. It was mentioned, but nobody
    had one.

    https://www.grainger.com/product/TY-RAP-Cable-Tie-Tension-Tool-For-1TBD8

    I have a very old version, probably from the early '70s. You set the
    desired tension and when it is reached the tail is cut off flush to the
    lock. When you're manufacturing harnesses you're not tightening the ties
    with needle nose pliers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Sat Apr 5 22:57:17 2025
    XPost: comp.os.linux.misc

    The Natural Philosopher <tnp@invalid.invalid> writes:
    Analogue multiplication is the holy grail and can be done using the
    exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf

    The electronics there is far beyond me but how general (and how
    reliable) can this be made?

    The operations I’m currently interested in are modular multiplication, addition and subtraction with comparatively small moduli (12-bit and
    23-bit, currently). It’s a well-understood problem with digital
    computation, of course, but I’m curious about whether there’s another option here.
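    For reference, the digital version of those operations is trivially exact, which is precisely the bar any analogue realisation would have to clear. A minimal sketch, using an illustrative 12-bit prime modulus (my choice, not necessarily Richard's):

    ```python
    # Modular add/sub/mul with a small modulus: the exact digital baseline
    # that an analogue scheme would need to reproduce without drift.
    M = 4093  # illustrative 12-bit prime modulus (an assumption for the example)

    def mod_add(a: int, b: int, m: int = M) -> int:
        return (a + b) % m

    def mod_sub(a: int, b: int, m: int = M) -> int:
        return (a - b) % m  # Python's % already yields a result in [0, m)

    def mod_mul(a: int, b: int, m: int = M) -> int:
        return (a * b) % m

    print(mod_add(4000, 200), mod_sub(5, 10), mod_mul(4000, 4000))
    ```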

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Farley Flud on Sat Apr 5 18:34:17 2025
    XPost: comp.os.linux.misc

    On 4/5/25 4:00 PM, Farley Flud wrote:
    On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:


    Digital ... note that clock speeds haven't really risen in
    a LONG time. They can, to a point, make them 'work smarter'
    but how much more ? Not all tasks/problems lend themselves
    to parallel processing methods either.

    So, yea, we're pretty much there.


    The supercomputer people would disagree.

    Supercomputers, based on Linux, just keep on getting faster.

    The metric is matrix multiplication, a classic problem in cache
    management.

    I don't know about the architecture of supercomputers but
    the limit seems to be still quite open.

    Matrix mult is a kind of parallelization ... and we
    still have some room there. But not every problem is
    easily, or at all, suited for spreading over 1000
    processors.

    https://en.wikipedia.org/wiki/Amdahl%27s_law

    Super-computers can use exotic tech, at a super price,
    if they want - including superconductors and quantum
    elements. NOT coming to a desktop near you anytime
    soon alas ......
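    Amdahl's law, linked above, quantifies the point: the serial fraction of a job caps the speedup no matter how many processors you throw at it. A quick sketch:

    ```python
    def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
        """Overall speedup when only (1 - serial_fraction) of the work parallelises."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    # With just 5% serial work, even 1000 processors buy less than a 20x speedup.
    for n in (10, 100, 1000):
        print(n, round(amdahl_speedup(0.05, n), 1))
    ```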

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sat Apr 5 18:27:55 2025
    XPost: comp.os.linux.misc

    On 4/5/25 3:40 PM, The Natural Philosopher wrote:
    On 05/04/2025 20:22, c186282 wrote:
    Analog ... it still may have certain uses, however for
       chain operations the accumulated errors WILL getcha.
       Might be a 'near' or 'finely-quantitized' sort of
       analog - 100,000 distinct, non-drifting, states that
       for some practical purposes LOOKS like traditional
       analog. So long as you don't need TOO many decimal
       points ......
     Analogue multiplication is the holy grail and can be done using the exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf


       Finally ... non-binary computing, eight or ten states
       per "bit". Fewer operations, fewer gates twiddling,
       better efficiency anyhow, potentially better speed. But
       doing it with anything like traditional semiconductors,
       cannot see how.

    Non binary computing is essentially analogue computing

    Ummmm ... not if you can enforce clear 'guard bands' around
    each of the, say eight, distinct voltage levels. Alas, as
    stated, those 'different voltage levels' mean transistors
    aren't cleanly on or off and will burn power kind of like
    little resistors. Some all new material and approach would
    be needed. Meta-material science MIGHT someday be able to
    produce something like that.

    It is already done in Flash RAM where more than two states of the memory capacitors are possible

    Massive arrays of non linear analogue circuits for modelling things like
    the Navier Stokes equations would be possible: Probably make a better
    stab at climate modelling then the existing shit.

    Again with analog, it's the sensitivity to especially
    temperature conditions that add errors in. Keep
    carrying those errors through several stages and soon
    all you have is error, pretending to be The Solution.
    Again, perhaps some meta-material that's NOT sensitive
    to what typically throws-off analog electronics MIGHT
    be made.

    I'm trying to visualize what it would take to make
    an all-analog version of, say, a payroll spreadsheet :-)

    Now discrete use of analog as, as you suggested, doing
    multiplication/division/logs initiated and read by
    digital ... ?

    Oh well, we're out in sci-fi land with most of this ...
    may as well talk about using giant evil brains in
    jars as computers :-)

    As some here have mentioned, we may be closer to the
    limits of computer power than we'd like to think.
    Today's big trick is parallelization, but only some
    kinds of problems can be modeled that way.

    Saw an article the other day about using some kind
    of disulfide for de-facto transistors, but did not
    get the impression that they'd be fast. I think
    temperature resistance was the main thrust - industrial
    apps, Venus landers and such.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Joel on Sat Apr 5 18:35:50 2025
    XPost: comp.os.linux.misc

    On 4/5/25 4:16 PM, Joel wrote:
    Farley Flud <fflud@gnu.rocks> wrote:

    Gentoo: The Fastest GNU/Linux Hands Down


    You are retarded.

    Now now ... is he retarded, or simply mistaken ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pothead@21:1/5 to The Natural Philosopher on Sat Apr 5 22:58:48 2025
    XPost: comp.os.linux.misc

    On 2025-04-05, The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 05/04/2025 20:20, rbowman wrote:
    On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:

    Project I worked on was undersea repeater for optical cables.
    Probably the worst organised and specified project ever, but that's a by
    the by. They said 'you are lucky we are allowed to use a silicon
    processor, up to 5 years ago we had to use germanium' 'Why?' Because
    that was the only technology more than 15 years old that could be
    guaranteed to last the 15 years'

    When we built sequential runway lighting controllers the wire harnesses
    had to be laced because the nylon ties used in industry hadn't had a
    couple of decades of use.

    https://en.wikipedia.org/wiki/Cable_lacing

    Luckily all our techs were women, most of whom had knitting or macrame
    skills. The heart of the system was an electro-mechanical stepper that was also mostly obsolete in industry.

    I was taught wire lacing as an apprentice. Doesnt need women.

    I once asked an IBM technician I knew what is the biggest mistake a technician can make and his reply was something like "we don't have to label the locations of the connectors on the cable we are replacing because the cable is laced."

    So technically a wire cable with 10 connectors that plugged into other components
    in the machine SHOULD be laced so that the break out points of the connectors aligned physically with their location in the machine.

    It works, until the lacing is incorrect and then the smoke escapes from the machine.

    This was back in the 70's with yellow and black colored wire cables bound together
    with that oily cord used back then.




    --
    pothead
    Liberalism Is A Mental Disease
    Treat it accordingly <https://www.dailymail.co.uk/health/article-14512427/Doctors-reveal-symptoms-Trump-Derangement-Syndrome-tell-youve-got-it.html>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Sat Apr 5 20:14:48 2025
    XPost: comp.os.linux.misc

    On 4/5/25 3:20 PM, rbowman wrote:
    On Sat, 5 Apr 2025 11:41:11 +0100, The Natural Philosopher wrote:

    Project I worked on was undersea repeater for optical cables.
    Probably the worst organised and specified project ever, but that's a by
    the by. They said 'you are lucky we are allowed to use a silicon
    processor, up to 5 years ago we had to use germanium' 'Why?' Because
    that was the only technology more than 15 years old that could be
    guaranteed to last the 15 years'

    When we built sequential runway lighting controllers the wire harnesses
    had to be laced because the nylon ties used in industry hadn't had a
    couple of decades of use.

    https://en.wikipedia.org/wiki/Cable_lacing

    Luckily all our techs were women, most of whom had knitting or macrame skills. The heart of the system was an electro-mechanical stepper that was also mostly obsolete in industry.

    The SSA probably has people saying 'Rust? Well maybe after it's proven
    itself for 60 years like COBOL.'

    HA ! I can sorta believe it !!!

    Oh wait, they can re-do it all in Ada ! That
    ought to take 25 years and 25 billion $$$ ...
    though the programmer suicide rate may be a
    bit embarrassing ..........

    Lots of 'govt projects' are NEVER meant to
    get done - just use up as much money as
    possible for as long as possible so a few
    people can keep dipping into the cash stream.

    Oh, old tested computer lang - BASIC :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sun Apr 6 02:13:15 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 23:34, c186282 wrote:
    On 4/5/25 4:00 PM, Farley Flud wrote:
    On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:


        Digital ... note that clock speeds haven't really risen in
        a LONG time. They can, to a point, make them 'work smarter'
        but how much more ? Not all tasks/problems lend themselves
        to parallel processing methods either.

        So, yea, we're pretty much there.


    The supercomputer people would disagree.

    Supercomputers, based on Linux, just keep on getting faster.

    The metric is matrix multiplication, a classic problem in cache
    management.

    I don't know about the architecture of supercomputers but
    the limit seems to be still quite open.

      Matrix mult is a kind of parallelization ... and we
      still have some room there. But not every problem is
      easily, or at all, suited for spreading over 1000
      processors.

      https://en.wikipedia.org/wiki/Amdahl%27s_law

      Super-computers can use exotic tech, at a super price,
      if they want - including superconductors and quantum
      elements. NOT coming to a desktop near you anytime
      soon alas ......

    Well the main use of supercomputers is running vast mathematical models
    to make sketchy assumptions and crude parametrisations look much more
    betterer than they actually are..

    Real racing car and aircraft design uses wind tunnels. CFD can't do the job.

    --
    You can get much farther with a kind word and a gun than you can with a
    kind word alone.

    Al Capone

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Sun Apr 6 01:57:44 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 22:57, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    Analogue multiplication is the holy grail and can be dome using the
    exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf

    The electronics there is far beyond me but how general (and how
    reliable) can this be made?

    Well it isn't very general. It takes two voltages between zero and one and multiplies them together.
    That's what it does.

    Going from digits to volts is relatively fast and easy, but the reverse
    is not so true


    The operations I’m currently interested in are modular multiplication, addition and subtraction with comparatively small moduli (12-bit and
    23-bit, currently). It’s a well-understood problem with digital
    computation, of course, but I’m curious about whether there’s another option here.

    That flew over my head Richard. Advanced maths is not my forte ...

    --
    "In our post-modern world, climate science is not powerful because it is
    true: it is true because it is powerful."

    Lucas Bergkamp

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sun Apr 6 02:07:39 2025
    XPost: comp.os.linux.misc

    On 05/04/2025 23:27, c186282 wrote:
    On 4/5/25 3:40 PM, The Natural Philosopher wrote:
    On 05/04/2025 20:22, c186282 wrote:
    Analog ... it still may have certain uses, however for
       chain operations the accumulated errors WILL getcha.
       Might be a 'near' or 'finely-quantitized' sort of
       analog - 100,000 distinct, non-drifting, states that
       for some practical purposes LOOKS like traditional
       analog. So long as you don't need TOO many decimal
       points ......
      Analogue multiplication is the holy grail and can be done using the
    exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf

       Finally ... non-binary computing, eight or ten states
       per "bit". Fewer operations, fewer gates twiddling,
       better efficiency anyhow, potentially better speed. But
       doing it with anything like traditional semiconductors,
       cannot see how.

    Non binary computing is essentially analogue computing

      Ummmm ... not if you can enforce clear 'guard bands' around
      each of the, say eight, distinct voltage levels. Alas, as
      stated, those 'different voltage levels' mean transistors
      aren't cleanly on or off and will burn power kind of like
      little resistors. Some all new material and approach would
      be needed. Meta-material science MIGHT someday be able to
      produce something like that.

    It is already done in Flash RAM where more than two states of the
    memory capacitors are possible

    Massive arrays of non linear analogue circuits for modelling things
    like the Navier Stokes equations would be possible: Probably make a
    better stab at climate modelling then the existing shit.

      Again with analog, it's the sensitivity to especially
      temperature conditions that add errors in.

    Not really, That was mostly sorted years ago.

    Keep
      carrying those errors through several stages and soon
      all you have is error, pretending to be The Solution.

    So no different from floating point based current climate models, then...

      Again, perhaps some meta-material that's NOT sensitive
      to what typically throws-off analog electronics MIGHT
      be made.

      I'm trying to visualize what it would take to make
      an all-analog version of, say, a payroll spreadsheet :-)

    An awful lot of op-amps.
    The thing is that analogue computers were useful for system analysis
    years before digital stuff came along. You could examine a dynamic
    system and see if it was stable or not.

    If not you did it another way. People who dribble on about 'climate
    tipping points'have no clue really as to how real life complex analogue
    systems work.


      Now discrete use of analog as, as you suggested, doing
      multiplication/division/logs initiated and read by
      digital ... ?

    Its being thought about.


      Oh well, we're out in sci-fi land with most of this ...
      may as well talk about using giant evil brains in
      jars as computers  :-)

    Well no, we are not.
    Digital traded speed for precision.

      As some here have mentioned, we may be closer to the
      limits of computer power than we'd like to think.
      Today's big trick is parallelization, but only some
      kinds of problems can be modeled that way.

      Saw an article the other day about using some kind
      of disulfide for de-facto transistors, but did not
      get the impression that they'd be fast. I think
      temperature resistance was the main thrust - industrial
      apps, Venus landers and such.

    I think I saw that too..

    Massive parallelisation will definitely do *some* things faster.

    Think 4096-core GPU processors... I think that's the way it will happen:
    decline of the general-purpose CPU and emergence of specific chips
    tailored to specific tasks. It's already happening to an extent with
    on-chip everything....


    --
    Climate Change: Socialism wearing a lab coat.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sun Apr 6 00:48:39 2025
    XPost: comp.os.linux.misc

    On 4/5/25 9:13 PM, The Natural Philosopher wrote:
    On 05/04/2025 23:34, c186282 wrote:
    On 4/5/25 4:00 PM, Farley Flud wrote:
    On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:


        Digital ... note that clock speeds haven't really risen in
        a LONG time. They can, to a point, make them 'work smarter'
        but how much more ? Not all tasks/problems lend themselves
        to parallel processing methods either.

        So, yea, we're pretty much there.


    The supercomputer people would disagree.

    Supercomputers, based on Linux, just keep on getting faster.

    The metric is matrix multiplication, a classic problem in cache
    management.

    I don't know about the architecture of supercomputers but
    the limit seems to be still quite open.

       Matrix mult is a kind of parallelization ... and we
       still have some room there. But not every problem is
       easily, or at all, suited for spreading over 1000
       processors.

       https://en.wikipedia.org/wiki/Amdahl%27s_law
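    The bound behind that link is easy to compute; a minimal sketch in
    Python (the 5% serial fraction is an invented illustration, not a
    measurement of any real workload):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Maximum speedup per Amdahl's law for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Even with only 5% inherently serial work, 1000 cores top out near 20x:
print(round(amdahl_speedup(0.05, 1000), 1))   # ~19.6
print(round(amdahl_speedup(0.05, 10**9), 1))  # asymptote: 1/0.05 = 20.0
```

    The serial fraction, not the core count, sets the ceiling - which is
    the point about problems that don't spread over 1000 processors.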

       Super-computers can use exotic tech, at a super price,
       if they want - including superconductors and quantum
       elements. NOT coming to a desktop near you anytime
       soon alas ......

    Well the main use of supercomputers is running vast mathematical models
    to make sketchy assumptions and crude parametrisations look much more betterer than they actually are..

    Even 80s super-computers made it unnecessary to TEST
    nuclear weapon designs - the entire physics could be
    calculated, a 'virtual' bomb, and RELIED on. Even Iran
    can do all that and more now.

    Real racing car and aircraft design uses wind tunnels. CFD can't do the
    job.

    Um, yea ... really COULD be entirely virtualized.
    ACCESS to such calc capabilities still isn't there
    or affordable to ALL however. Do you think that
    AirBus/Boeing/Lockheed build a zillion wind-tunnel
    models these days ? Likely NONE. The airflows, the
    structural components ... all SIMS.

    WHAT can be done with "quantum" is not entirely
    clear. Again it's not best suited for EVERYTHING.
    The #1 issue is still the ERROR rates. As per QM,
    where things can randomly change Just Because,
    these errors are gonna be HARD to get around.
    STILL no great solutions. Got a design for a
    "Heisenberg compensator" ??? If so, GET RICH !!!

    There MAY be some pattern in the QM errors that
    can be matched/negated by some parallel QM
    process/equation. We shall see.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sun Apr 6 00:26:47 2025
    XPost: comp.os.linux.misc

    On 4/5/25 9:07 PM, The Natural Philosopher wrote:
    On 05/04/2025 23:27, c186282 wrote:
    On 4/5/25 3:40 PM, The Natural Philosopher wrote:
    On 05/04/2025 20:22, c186282 wrote:
    Analog ... it still may have certain uses, however for
       chain operations the accumulated errors WILL getcha.
       Might be a 'near' or 'finely-quantitized' sort of
       analog - 100,000 distinct, non-drifting, states that
       for some practical purposes LOOKS like traditional
       analog. So long as you don't need TOO many decimal
       points ......
      Analogue multiplication is the holy grail and can be done using the
    exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf


       Finally ... non-binary computing, eight or ten states
       per "bit". Fewer operations, fewer gates twiddling,
       better efficiency anyhow, potentially better speed. But
       doing it with anything like traditional semiconductors,
       cannot see how.

    Non binary computing is essentially analogue computing

       Ummmm ... not if you can enforce clear 'guard bands' around
       each of the, say eight, distinct voltage levels. Alas, as
       stated, those 'different voltage levels' mean transistors
       aren't cleanly on or off and will burn power kind of like
       little resistors. Some all new material and approach would
       be needed. Meta-material science MIGHT someday be able to
       produce something like that.

    It is already done in Flash RAM where more than two states of the
    memory capacitors are possible
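    The multi-level-cell idea - distinct levels separated by guard bands -
    can be sketched like this (the level spacing and guard width are
    invented for illustration, not real flash thresholds):

```python
# Decoding one 8-level cell value from a voltage, with guard bands
# between the nominal levels. Levels at 0.0, 0.5, ... 3.5 V; a reading
# counts only if it lands within +/-0.15 V of a nominal level.
LEVELS = [i * 0.5 for i in range(8)]
GUARD = 0.15

def decode(voltage):
    """Return the 3-bit symbol for a voltage, or None if it falls in a
    guard band (too far from every nominal level to trust)."""
    for symbol, level in enumerate(LEVELS):
        if abs(voltage - level) <= GUARD:
            return symbol
    return None

print(decode(1.55))  # within 0.15 of 1.5 V -> symbol 3
print(decode(1.75))  # mid-way between levels -> None (guard band)
```

    Drift that pushes a reading into the guard band is detected rather
    than silently misread - the price is fewer usable levels per volt.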

    Massive arrays of non linear analogue circuits for modelling things
    like the Navier Stokes equations would be possible: Probably make a
    better stab at climate modelling then the existing shit.

       Again with analog, it's the sensitivity to especially
       temperature conditions that add errors in.

    Not really. That was mostly sorted years ago.


    Ummm ... I'm gonna kinda have to disagree.

    There are several factors that lead to errors in
    analog electronics - simple temperature being
    the worst.


       Keep
       carrying those errors through several stages and soon
       all you have is error, pretending to be The Solution.

    So no different from floating point based current climate models, then...


    Digital FP *can* be done to almost arbitrary precision.
    If you're running, say, a climate or 'dark energy' model
    then you use a LOT of precision.


       Again, perhaps some meta-material that's NOT sensitive
       to what typically throws-off analog electronics MIGHT
       be made.

       I'm trying to visualize what it would take to make
       an all-analog version of, say, a payroll spreadsheet :-)

    An awful lot of op-amps.


    To say the least :-)

    CAN be done, but is it WORTH it ???

    But, I suppose, a whole-budget CAN be viewed
    as an analog equation IF you try hard enough.


    The thing is that analogue computers were useful for system analysis
    years before digital stuff came along. You could examine a dynamic
    system and see if it was stable or not.

    Well, *how* stable it is ........

    Digital is always right-on.

    So what do you NEED most - speed or accuracy ?

    If not you did it another way. People who dribble on about 'climate
    tipping points' have no clue really as to how real life complex
    analogue systems work.

    I'm just gonna say that "climate" is beyond ANY kind
    of models - analog OR digital. TOO many butterflies.

       Now discrete use of analog as, as you suggested, doing
       multiplication/division/logs initiated and read by
       digital ... ?

    Its being thought about.

    And we shall see ... advantage, or not ?

    Maybe, horrors, "depends" .....

    The "real world" acts as a very complex analog
    equation - until you get down to quantum levels.
    HOW the hell to best DEAL with that ???

       Oh well, we're out in sci-fi land with most of this ...
       may as well talk about using giant evil brains in
       jars as computers  :-)

    Well no, we are not.
    Digital traded speed for precision.


    I'd say digital traded precision for speed ...


       As some here have mentioned, we may be closer to the
       limits of computer power that we'd like to think.
       Today's big trick is parallelization, but only some
       kinds of problems can be modeled that way.

       Saw an article the other day about using some kind
       of disulfide for de-facto transistors, but did not
       get the impression that they'd be fast. I think
       temperature resistance was the main thrust - industrial
       apps, Venus landers and such.

    I think I saw that too..

    Massive parallelisation will definitely do *some* things faster.

    Agreed ... but not EVERYTHING.

    Sometimes there's just no substitute for clock
    speed and high-speed mem access.

    Think 4096 core GPU processors...I think that's the way it will happen, decline of the general purpose CPU and emergence of specific chips
    tailored to specific tasks. It's already happening to an extent with
    on-chip everything....

    I kinda understand. However that whole chip chain
    will likely need to be fully, by design, integrated.
    This is NOT so easy with multiple manufacturers.

    I've avoided investments in higher-tech/specific-
    tech stuff. NVIDIA seemed good - but the current
    trade war is gonna put a BIG strain on them. It's
    all too volatile, too vulnerable to even small
    politics - govt or industry.

    DO wish I'd bought RHEL back near the IPO, but,
    well ..... ya never know unless you're an
    "insider".

    ANYway ... final observation ... it keeps looking
    like we're far closer to the END of increasing
    computer power than the beginning. WHAT we want
    computed is kinda a dynamic equation, but OVERALL
    we're kinda near The End.

    THEN what ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?Q?St=C3=A9phane?= CARPENTIE@21:1/5 to All on Sun Apr 6 08:58:18 2025
    XPost: comp.os.linux.misc

    Le 05-04-2025, c186282 <c186282@nnada.net> a écrit :
    On 4/4/25 3:15 PM, Farley Flud wrote:

    Who would ever give a flying fuck about this "Neolithic" technical
    crap?

    In fact? You. You are stuck in the past, refusing everything that's new
    by default.

    It's the future that is of concern.

    Yes, and you are afraid of it. But others move forward without your
    blessing.

    No future without a past.

    And past tricks/thinking/strategies CAN inspire
    the new.

    At least, it can teach you some errors you need to avoid.

    --
    Si vous avez du temps à perdre :
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?Q?St=C3=A9phane?= CARPENTIE@21:1/5 to All on Sun Apr 6 08:59:33 2025
    XPost: comp.os.linux.misc

    Le 05-04-2025, Farley Flud <ff@linux.rocks> a écrit :

    No, but we can move to quantum computing, which may become
    a reality before too long.

    I heard about that before I was born.

    --
    Si vous avez du temps à perdre :
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?Q?St=C3=A9phane?= CARPENTIE@21:1/5 to All on Sun Apr 6 09:01:35 2025
    XPost: comp.os.linux.misc

    Le 05-04-2025, c186282 <c186282@nnada.net> a écrit :
    On 4/5/25 4:16 PM, Joel wrote:
    Farley Flud <fflud@gnu.rocks> wrote:

    Gentoo: The Fastest GNU/Linux Hands Down


    You are retarded.

    Now now ... is he retarded,

    Yes. That's a simple answer to a simple question.

    or simply mistaken ?

    Yes again.

    --
    Si vous avez du temps à perdre :
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Sun Apr 6 10:05:34 2025
    XPost: comp.os.linux.misc

    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 05/04/2025 22:57, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    Analogue multiplication is the holy grail and can be done using the
    exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
    The electronics there is far beyond me but how general (and how
    reliable) can this be made?

    Well it isn't very general. It takes two voltages between nought and
    one and multiplies them together.
    That's what it does.

    Going from digits to volts is relatively fast and easy, but the
    reverse is not so true

    Right, probably not a fit for my application, then.

    The operations I’m currently interested in are modular multiplication,
    addition and subtraction with comparatively small moduli (12-bit and
    23-bit, currently). It’s a well-understood problem with digital
    computation, of course, but I’m curious about whether there’s another
    option here.

    That flew over my head Richard. Advanced maths is not my forte ...

    You do modular arithmetic whenever you do times and dates - informally,
    it’s the same thing as the way minutes and seconds wrap around from 59
    to 0, hours from 23 to 0, etc.
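    That clock analogy maps directly onto code; a minimal sketch (the
    example times are arbitrary):

```python
# Minutes wrap modulo 60, hours modulo 24 -- ordinary modular addition,
# exactly the wrap-around a clock face performs.
def add_minutes(h, m, delta):
    """Add delta minutes to a 24-hour clock time, wrapping as clocks do."""
    total = (h * 60 + m + delta) % (24 * 60)
    return divmod(total, 60)

print(add_minutes(23, 50, 25))  # 25 minutes past 23:50 -> (0, 15)
```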

    Putting it together into something useful is a bit more than that,
    though, indeed, and a persistent concern is how to do it faster, hence
    keeping an eye out for additional technology options.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to All on Sun Apr 6 11:27:05 2025
    XPost: comp.os.linux.misc

    On 06 Apr 2025 08:59:33 GMT, Stéphane CARPENTIER wrote:

    Le 05-04-2025, Farley Flud <ff@linux.rocks> a écrit :

    No, but we can move to quantum computing, which may become
    a reality before too long.

    I heard about that before I was born.


    In the US, the NIST is already researching algorithms for "post-quantum cryptography:"

    https://csrc.nist.gov/projects/post-quantum-cryptography

    Quantum computing is definitely going to happen.





    --
    Systemd: solving all the problems that you never knew you had.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sun Apr 6 12:55:48 2025
    XPost: comp.os.linux.misc

    On 06/04/2025 05:26, c186282 wrote:
    On 4/5/25 9:07 PM, The Natural Philosopher wrote:
       temperature conditions that add errors in.

    Not really. That was mostly sorted years ago.


      Ummm ... I'm gonna kinda have to disagree.

      There are several factors that lead to errors in
      analog electronics - simple temperature being
      the worst.


    Not really. If you use close-to-zero-coefficient resistors and enough
    feedback the circuits are insensitive to temperature.
    In terms of using junctions to do multiplication, there are ways of compensating for temperature in one device using the same effect in
    another, and on chip they are thermally coupled and doped the same. So
    it works very well indeed

       Keep
       carrying those errors through several stages and soon
       all you have is error, pretending to be The Solution.

    So no different from floating point based current climate models, then...


      Digital FP *can* be done to almost arbitrary precision.
      If you're running, say, a climate or 'dark energy' model
      then you use a LOT of precision.


    And get a very accurate 'wrong answer'.

    The problem is the time it takes to do it.



       Again, perhaps some meta-material that's NOT sensitive
       to what typically throws-off analog electronics MIGHT
       be made.

       I'm trying to visualize what it would take to make
       an all-analog version of, say, a payroll spreadsheet :-)

    An awful lot of op-amps.


      To say the least :-)

      CAN be done, but is it WORTH it ???

      But, I suppose, a whole-budget CAN be viewed
      as an analog equation IF you try hard enough.


    Most complex dynamic systems are 'analogue' and anyone who has modelled
    them using analogue electronics can tell you that if they do not have
    the right negative feedback they become unstable, and that's how we
    engineers know that 'positive feedback' in the climate is bullshit.
    What you can do with a multitude of opamps resistors and capacitors is
    to model extremely complex dynamic systems very quickly, and look for instability or unexpected results.

    No one does it any more, because everyone fell in love with digital,
    but there are times when it works better.


    The thing is that analogue computers were useful for system analysis
    years before digital stuff came along. You could examine a dynamic
    system and see if it was stable or not.

      Well, *how* stable it is ........

      Digital is always right-on.

    No, it isn't.

      So what do you NEED most - speed or accuracy ?

    If not you did it another way. People who dribble on about 'climate
    tipping points' have no clue really as to how real life complex
    analogue systems work.

      I'm just gonna say that "climate" is beyond ANY kind
      of models - analog OR digital. TOO many butterflies.

    That is the point.
    There are chaotic elements but it's *bounded* chaos. I looked everywhere
    for mathematical papers to try and determine the criteria for a chaotic
    system to become bounded, or even to identify how many strange
    attractors there were, or where they were located.

    No one has done the work.

    And yet digital climate modelling cannot even represent the past, let
    alone the future.



       Now discrete use of analog as, as you suggested, doing
       multiplication/division/logs initiated and read by
       digital ... ?

    Its being thought about.

      And we shall see ... advantage, or not ?


    In certain cases it is a better solution. I can envisage a chip
    comprised of many many linear amplifiers whose gain, frequency response
    and interconnections were programmable by digital logic, to allow one to
    model an enormously interconnected system very quickly, to at least see
    what its sensitive areas in fact were....

    I.e. identify the butterflies...


      Maybe, horrors, "depends" .....

    Well that means nothing. That's just anti-tech speak for 'I don't know
    what I am talking about, and I am as good as you, so therefore you don't either'...

      The "real world" acts as a very complex analog
      equation - until you get down to quantum levels.
      HOW the hell to best DEAL with that ???

    The point is you don't. If your system is so unstable that one atomic
    decay renders the cat dead, it doesn't last long in the 'real world'.

    The real world is *conservative*. It consists of systems that have
    *temporal persistence*. They have 'stood the test of time', if you like.

    If your model - be it analogue or digital - doesn't have the right sort
    of feedback to do that, it's clearly not an accurate model, is it?


       Oh well, we're out in sci-fi land with most of this ...
       may as well talk about using giant evil brains in
       jars as computers  :-)

    Well no, we are not.
    Digital traded speed for precision.


      I'd say digital traded precision for speed ...

    No. Digital is SLOW. Many hundreds of cycles to do what analogue can
    do *approximately* in one or two.

    Massive parallelisation will definitely do *some* things faster.

      Agreed ... but not EVERYTHING.

      Sometimes there's just no substitute for clock
      speed and high-speed mem access.


    Well tough, because you ain't gonna get that. The speed of light is the
    speed of light.

    Think 4096 core GPU processors...I think that's the way it will
    happen, decline of the general purpose CPU and emergence of specific
    chips tailored to specific tasks. It's already happening to an extent
    with on-chip everything....

      I kinda understand. However that whole chip chain
      will likely need to be fully, by design, integrated.
      This is NOT so easy with multiple manufacturers.

    Oh of course.

      ANYway ... final observation ... it keeps looking
      like we're far closer to the END of increasing
      computer power than the beginning. WHAT we want
      computed is kinda a dynamic equation, but OVERALL
      we're kinda near The End.

      THEN what ?

    I think we have taken in around 75 years, Turing's basic concept of a
    Turing machine to the end of the line, more or less. There is still room
    for improvement, but not at the level of general purpose computing cores.

    The lowest fruit left to pluck is massive parallelisation, which will
    suit the tasks that benefit from it - like matrix multiplications,
    where the same job is being done to enormous pieces of data and the
    answer in one core does not affect the result in another.

    As primarily an engineer who cut his teeth on analogue systems and built analogue computer systems at university, I am extremely interested in
    the potential use of hybrid chipsets for modelling of all sorts of real
    word complex dynamics systems, from the economy to the climate, from
    faster race cars with more downforce, to better aircraft and sailboats.

    What all these systems are is a massively interconnected series of
    'amplifiers' with varying gain response, and phase delay, whose transfer function may indeed not be linear.

    We know the partial differentials - e.g. Navier Stokes equations - but
    that doesn't help in digital terms. It's simply too complex to model the interactions accurately at any scale. But a chip with 50 million opamps
    and multipliers on it, all individually controllable that could be
    linked together in different ways could really start to do the job
    digits cannot.
    In short a new sort of computer for the problems that digital computers
    simply do not work on.

    Think about it. Division is slow in digital. But in analogue a
    logarithmic transfer function, followed by a subtraction and an
    exponential, does a rough division in nanoseconds.
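    The log-subtract-exp signal path can be mimicked numerically; a
    sketch of the same idea (rough, just as the analogue circuit is):

```python
import math

def rough_divide(a, b):
    """Divide by taking logs, subtracting, then exponentiating --
    the same signal path an analogue log/antilog circuit uses."""
    return math.exp(math.log(a) - math.log(b))

print(rough_divide(10.0, 4.0))  # ~2.5, within floating-point rounding
```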

    Any mathematical transfer function takes the same amount of time,
    because instead of calculating it, it is precompiled and built into the
    silicon. Better still, it may be possible to describe a non-linear
    transfer function digitally and make the amplifiers in the chip obey it,
    using the inherent non-linearity of a transistor operated near its
    cutoff. Yes, your curve might well be a series of 256 short straight
    lines or so, but it will still be FAST.

    No one has done this. They have stuck with the very mathematical
    approach of Turing and the applied maths of summing of series etc, for
    which a digital computer is admirable.
    What I am describing, however, is the equivalent of a book of log
    tables - remember those? - baked in silicon, and being applied to
    opamps as transfer functions.

    Why it isn't already done in terms of floating point maths on digital
    computers I do not know. A simple look up table for a log value, adding
    and subtracting exponents, plus an exponential table and you have a two
    or three operation multiplier or divider. Sine, tan, and cosine tables
    likewise... It's no good for operations on integer matrices, but it's a
    bloody fast way of getting a reasonably accurate engineering answer -
    like using a slide rule, really.
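    The 'book of log tables baked in silicon' idea can be sketched in
    software: a 256-entry table of logarithms with linear interpolation,
    used to multiply slide-rule style (table size and interval here are
    illustrative choices, not anything from real hardware):

```python
import math

# A coarse "book of log tables" baked into an array: 257 samples of
# ln(x) on [1, 2], plus exponent handling -- a piecewise-linear slide rule.
TABLE = [math.log(1.0 + i / 256.0) for i in range(257)]

def table_log(x):
    """Approximate ln(x) for x > 0 by table lookup with linear interpolation."""
    m, e = math.frexp(x)          # x = m * 2**e with m in [0.5, 1)
    m *= 2.0; e -= 1              # renormalise to m in [1, 2)
    t = (m - 1.0) * 256.0
    i = int(t)
    frac = t - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]) + e * math.log(2.0)

def table_multiply(a, b):
    """Multiply via the table: add the logs, then exponentiate."""
    return math.exp(table_log(a) + table_log(b))

print(table_multiply(3.0, 7.0))   # close to 21, slide-rule accuracy
```

    The 256 segments are exactly the 'short straight lines' mentioned
    above: accuracy is modest, but every operation is a lookup and an add.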

    My point is that real world analogue problems - i.e. not counting how
    many cents the bank has in its deposit box - but calculating at what
    point a car tyre is going to start to slide, and what happens next, even
    if subsequent runs give a different answer due to noise, at least tells
    you that the result OF that tyre sliding is unpredictable and chaotic,
    so try not to do it...is still giving a very valuable piece of information.

    The day that a computer beats a wind tunnel is the day I look forward
    to. That is a possible way to do it.



    --
    I would rather have questions that cannot be answered...
    ...than to have answers that cannot be questioned

    Richard Feynman

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sun Apr 6 12:58:03 2025
    XPost: comp.os.linux.misc

    On 06/04/2025 05:48, c186282 wrote:
    On 4/5/25 9:13 PM, The Natural Philosopher wrote:
    On 05/04/2025 23:34, c186282 wrote:
    On 4/5/25 4:00 PM, Farley Flud wrote:
    On Sat, 5 Apr 2025 15:22:23 -0400, c186282 wrote:


        Digital ... note that clock speeds haven't really risen in
        a LONG time. They can, to a point, make them 'work smarter'
        but how much more ? Not all tasks/problems lend themselves
        to parallel processing methods either.

        So, yea, we're pretty much there.


    The supercomputer people would disagree.

    Supercomputers, based on Linux, just keep on getting faster.

    The metric is matrix multiplication, a classic problem in cache
    management.

    I don't know about the architecture of supercomputers but
    the limit seems to be still quite open.

       Matrix mult is a kind of parallelization ... and we
       still have some room there. But not every problem is
       easily, or at all, suited for spreading over 1000
       processors.

       https://en.wikipedia.org/wiki/Amdahl%27s_law

       Super-computers can use exotic tech, at a super price,
       if they want - including superconductors and quantum
       elements. NOT coming to a desktop near you anytime
       soon alas ......

    Well the main use of supercomputers is running vast mathematical
    models to make sketchy assumptions and crude parametrisations look
    much more betterer than they actually are..

      Even 80s super-computers made it unnecessary to TEST
      nuclear weapon designs - the entire physics could be
      calculated, a 'virtual' bomb, and RELIED on. Even Iran
      can do all that and more now.

    Real racing car and aircraft design uses wind tunnels. CFD can't do
    the job.

      Um, yea ... really COULD be entirely virtualized.
      ACCESS to such calc capabilities still isn't there
      or affordable to ALL however. Do you think that
      AirBus/Boeing/Lockheed build a zillion wind-tunnel
      models these days ? Likely NONE. The airflows, the
      structural components ... all SIMS.

    You are completely wrong
    https://www.youtube.com/watch?v=4PabZAx-4Yw

      WHAT can be done with "quantum" is not entirely
      clear. Again it's not best suited for EVERYTHING.
      The #1 issue is still the ERROR rates. As per QM,
      where things can randomly change Just Because,
      these errors are gonna be HARD to get around.
      STILL no great solutions. Got a design for a
      "Heisenberg compensator" ??? If so, GET RICH !!!

    You are degenerating to word salad...

      There MAY be some pattern in the QM errors that
      can be matched/negated by some parallel QM
      process/equation. We shall see.

    --
    “I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most
    obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which
    they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

    ― Leo Tolstoy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Sun Apr 6 13:02:49 2025
    XPost: comp.os.linux.misc

    On 06/04/2025 10:05, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 05/04/2025 22:57, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    Analogue multiplication is the holy grail and can be done using the
    exponential characteristics of bipolar transistors

    https://www.analog.com/media/en/technical-documentation/data-sheets/ADL5391.pdf
    The electronics there is far beyond me but how general (and how
    reliable) can this be made?

    Well it isn't very general. It takes two voltages between nought and
    one and multiplies them together.
    That's what it does.

    Going from digits to volts is relatively fast and easy, but the
    reverse is not so true

    Right, probably not a fit for my application, then.

    The operations I’m currently interested in are modular multiplication,
    addition and subtraction with comparatively small moduli (12-bit and
    23-bit, currently). It’s a well-understood problem with digital
    computation, of course, but I’m curious about whether there’s another
    option here.

    That flew over my head Richard. Advanced maths is not my forte ...

    You do modular arithmetic whenever you do times and dates - informally,
    it’s the same thing as the way minutes and seconds wrap around from 59
    to 0, hours from 23 to 0, etc.

    Putting it together into something useful is a bit more than that,
    though, indeed, and a persistent concern is how to do it faster, hence keeping an eye out for additional technology options.

    Ah OK then. No. That is very 'integer' based stuff.
    Analogue computing works to model real world analogue systems

    It's amazing how much code you have to write to simulate e.g. an audio
    tone control...because digital signal processing works in the time
    domain, but analogue works in the frequency domain. One resistor and one capacitor do more than a bunch of floating point computation...
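    For comparison, here is roughly what the digital stand-in for that one
    resistor and capacitor looks like; a sketch (the component values and
    sample rate are invented for illustration):

```python
def rc_lowpass(samples, r_ohms, c_farads, fs_hz):
    """Simulate a one-pole RC low-pass filter on a sampled signal --
    the digital stand-in for a single resistor and capacitor."""
    alpha = 1.0 / (1.0 + r_ohms * c_farads * fs_hz)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # first-order IIR update, once per sample
        out.append(y)
    return out

# A step input settles toward 1.0 with the familiar RC time constant:
step = [1.0] * 1000
y = rc_lowpass(step, 10_000, 100e-9, 48_000)  # 10 kOhm, 100 nF, 48 kHz
print(round(y[-1], 3))
```

    One multiply-add per sample, forever, to reproduce what the passive
    parts do continuously for free - which is the point being made.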

    --
    “I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most
    obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which
    they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

    ― Leo Tolstoy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?Q?St=C3=A9phane?= CARPENTIE@21:1/5 to All on Sun Apr 6 12:17:46 2025
    XPost: comp.os.linux.misc

    ["Followup-To:" header set to comp.os.linux.misc.]
    Le 06-04-2025, Farley Flud <ff@linux.rocks> a écrit :
    On 06 Apr 2025 08:59:33 GMT, Stéphane CARPENTIER wrote:

    Le 05-04-2025, Farley Flud <ff@linux.rocks> a écrit :

    No, but we can move to quantum computing, which may become
    a reality before too long.

    I heard about that before I was born.

    In the US, the NIST is already researching algorithms for "post-quantum cryptography:"

    https://csrc.nist.gov/projects/post-quantum-cryptography

    Yes, the algorithms are farther away from the computers. Doesn't that
    ring a bell?

    Quantum computing is definitely going to happen.

    Yes, I know. Soon. Very soon. It's almost there. I heard that before I was born.

    --
    Si vous avez du temps à perdre :
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to All on Sun Apr 6 15:48:10 2025
    XPost: comp.os.linux.misc

    On 06 Apr 2025 09:01:35 GMT, Stéphane CARPENTIER wrote:

    Le 05-04-2025, c186282 <c186282@nnada.net> a écrit :
    On 4/5/25 4:16 PM, Joel wrote:
    Farley Flud <fflud@gnu.rocks> wrote:

    Gentoo: The Fastest GNU/Linux Hands Down


    You are retarded.

    Now now ... is he retarded,

    Yes. That's a simple answer to a simple question.

    or simply mistaken ?

    Yes again.


    What about the French predilection for pissing in the street?

    https://www.thelocal.fr/20191024/why-the-stench-of-pee-may-never-leave-paris

    https://medium.com/illuminations-mirror/the-worlds-most-romantic-city-reeks-of-pee-b88c5cebcc0b

    Everyone knows that Paris stinks of human urine.

    Is that "retarded" or what?

    Ha, ha, ha, ha, ha, ha, ha, ha, ha, ha!





    --
    Systemd: solving all the problems that you never knew you had.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sun Apr 6 16:51:48 2025
    XPost: comp.os.linux.misc

    On 06 Apr 2025 08:59:33 GMT, Stéphane CARPENTIER wrote:

    Le 05-04-2025, Farley Flud <ff@linux.rocks> a écrit :

    No, but we can move to quantum computing, which may become a reality
    before too long.

    I heard about that before I was born.

    The Zen of quantum computing?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to All on Mon Apr 7 16:39:40 2025
    XPost: comp.os.linux.misc

    On 4/5/25 18:27, c186282 wrote:
    On 4/5/25 3:40 PM, The Natural Philosopher wrote:
    On 05/04/2025 20:22, c186282 wrote:
    Analog ...

    Massive arrays of non-linear analogue circuits for modelling things
    like the Navier-Stokes equations would be possible: Probably make a
    better stab at climate modelling than the existing shit.

      Again with analog, it's the sensitivity to especially
      temperature conditions that add errors in. Keep
      carrying those errors through several stages and soon
      all you have is error, pretending to be The Solution.
      Again, perhaps some meta-material that's NOT sensitive
      to what typically throws-off analog electronics MIGHT
      be made.

      I'm trying to visualize what it would take to make
      an all-analog version of, say, a payroll spreadsheet :-)

    Woogh! That makes my brain hurt.


      Now discrete use of analog as, as you suggested, doing
      multiplication/division/logs initiated and read by
      digital ... ?

      Oh well, we're out in sci-fi land with most of this ...
      may as well talk about using giant evil brains in
      jars as computers  :-)

      As some here have mentioned, we may be closer to the
      limits of computer power than we'd like to think.
      Today's big trick is parallelization, but only some
      kinds of problems can be modeled that way.

      Saw an article the other day about using some kind
      of disulfide for de-facto transistors, but did not
      get the impression that they'd be fast. I think
      temperature resistance was the main thrust - industrial
      apps, Venus landers and such.

    Actually, one of the things that analog's still good at is real-world
    control systems with feedback loops and the like.

    I had one project some time 'way back in the 80s where we were
    troubleshooting a line that had a 1960s era analog control system, and
    one of the conversations that came up was whether to replace it with digital.
    It got looked into and was determined that digital process controls
    weren't fast enough for the line.

    Fast-forward to ~2005. While back visiting that department, I found out
    that that old analog beast was still running the line and they were
    trolling eBay for parts to keep it running.

    On another visit ~2015, the update: they finally found a new digitally
    based control system that was fast enough to finally replace it & did.


    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to -hh on Mon Apr 7 17:59:45 2025
    XPost: comp.os.linux.misc

    On 4/7/25 4:39 PM, -hh wrote:
    On 4/5/25 18:27, c186282 wrote:
    On 4/5/25 3:40 PM, The Natural Philosopher wrote:
    On 05/04/2025 20:22, c186282 wrote:
    Analog ...

    Massive arrays of non-linear analogue circuits for modelling things
    like the Navier-Stokes equations would be possible: Probably make a
    better stab at climate modelling than the existing shit.

       Again with analog, it's the sensitivity to especially
       temperature conditions that add errors in. Keep
       carrying those errors through several stages and soon
       all you have is error, pretending to be The Solution.
       Again, perhaps some meta-material that's NOT sensitive
       to what typically throws-off analog electronics MIGHT
       be made.

       I'm trying to visualize what it would take to make
       an all-analog version of, say, a payroll spreadsheet :-)

    Woogh!  That makes my brain hurt.


    Indeed ! However ... probably COULD be done, it's
    a bunch of shifting values - input to some accts,
    calx ops, shift to other accts ....... lots and
    lots of rheostats ........

    I'm not gonna try it ! :-)


       Now discrete use of analog as, as you suggested, doing
       multiplication/division/logs initiated and read by
       digital ... ?

       Oh well, we're out in sci-fi land with most of this ...
       may as well talk about using giant evil brains in
       jars as computers  :-)

       As some here have mentioned, we may be closer to the
       limits of computer power than we'd like to think.
       Today's big trick is parallelization, but only some
       kinds of problems can be modeled that way.

       Saw an article the other day about using some kind
       of disulfide for de-facto transistors, but did not
       get the impression that they'd be fast. I think
       temperature resistance was the main thrust - industrial
       apps, Venus landers and such.

    Actually, one of the things that analog's still good at is real-world
    control systems with feedback loops and the like.

    As long as it's pretty straightforward, analog can
    sometimes do it quicker and simpler. I oft wonder
    whether the problem of a self-balancing android
    might be handled better with analog feedback schemes.

    Of course nerves are, ultimately, 'digital' - pulses
    of varying rate/spacing but always the same strength.
    Some of the sensory stuff even gets 'compressed'/encoded
    before going to the brain. Every little leg hair does
    not have its own direct nerve to the brain.

    I had one project some time 'way back in the 80s where we were troubleshooting a line that had a 1960s era analog control system, and
    one of the conversations that came up was if to replace it with digital.
    It got looked into and was determined that digital process controls
    weren't fast enough for the line.

    Fast-forward to ~2005.  While back visiting that department, I found out that that old analog beast was still running the line and they were
    trolling eBay for parts to keep it running.

    Hey, so long as it works well !

    On another visit ~2015, the update:  they finally found a new digitally based control system that was fast enough to finally replace it & did.

    What was the thing doing ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Farley Flud@21:1/5 to All on Tue Apr 8 11:59:33 2025
    XPost: comp.os.linux.misc

    On Mon, 07 Apr 2025 17:59:45 -0400, c186282 wrote:


    Indeed ! However ... probably COULD be done, it's
    a bunch of shifting values - input to some accts,
    calx ops, shift to other accts ....... lots and
    lots of rheostats ........


    How is conditional branching (e.g. an if-then-else statement)
    to be implemented with analog circuits? It cannot be
    done.

    Analog computers are good for modelling systems that are
    described by differential equations. Adders, differentiators,
    and integrators can all be easily implemented with electronic
    circuits. But beyond differential equation systems analog
    computers are useless.

    The Norden bomb sight of WWII was an electro-mechanical
    computer. Its job was to calculate the trajectory of
    a bomb released by an aircraft, and the trajectory is described
    by a differential equation.

    One of my professors told a story about a common "analog"
    practice among engineers of the past. To calculate an integral,
    which can be described as the area under a curve, they would plot
    the curve on well made paper and then cut out (with scissors)
    the plotted area and weigh it (on a lab balance). The ratio
    of the cut-out area with a unit area of paper would be the
    value of the integral. (Multi-dimensional integrals would
    require carving blocks of balsa wood or a similar material.)

    Of course it worked but today integration is easy to perform
    to unlimited accuracy using digital means.
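    The cut-and-weigh trick is, in spirit, Monte Carlo integration: the
    mass ratio estimates an area ratio. Here is a minimal Python sketch of
    the same idea, with a random "rain" of points standing in for the lab
    balance (the function name and sample count are illustrative only):

```python
# The cut-and-weigh trick described above is, in spirit, Monte Carlo
# integration: the paper's mass ratio estimates the area ratio.
# Here a random "rain" of points stands in for the lab balance.
import random

def weigh_integral(f, a, b, y_max, n=200_000, seed=1):
    """Estimate the area under f on [a, b] as the fraction of a
    bounding rectangle that falls below the curve."""
    random.seed(seed)
    hits = sum(f(random.uniform(a, b)) > random.uniform(0, y_max)
               for _ in range(n))
    return (b - a) * y_max * hits / n

# Area under y = x**2 on [0, 1] is exactly 1/3.
estimate = weigh_integral(lambda x: x * x, 0.0, 1.0, y_max=1.0)
print(estimate)   # close to 0.333
```

    Like the paper method, the accuracy is limited by the "weighing"
    resolution - here, the number of sample points.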


    --
    Hail Linux! Hail FOSS! Hail Stallman!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Farley Flud on Tue Apr 8 13:17:42 2025
    XPost: comp.os.linux.misc

    On 08/04/2025 12:59, Farley Flud wrote:
    On Mon, 07 Apr 2025 17:59:45 -0400, c186282 wrote:


    Indeed ! However ... probably COULD be done, it's
    a bunch of shifting values - input to some accts,
    calx ops, shift to other accts ....... lots and
    lots of rheostats ........


    How is conditional branching (e.g. an if-then-else statement)
    to be implemented with analog circuits? It cannot be
    done.

    You can do better than boolean logic.
    You can use an adder and a comparator:
    if the sum of all the inputs is greater than X, then Y; else not-Y.
    You can use Y to switch another set of analogue circuits off or on.

    Or even use the sum of all the inputs to modify the gain of an amplifier.

    If you have a digital problem, use a digital computer; if not, think again.
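    A minimal numerical sketch of that adder + comparator "conditional"
    (a toy Python model, not a circuit design; all names and values are
    illustrative):

```python
# Toy model of the adder + comparator "conditional" described above:
# if the sum of the inputs exceeds a threshold X, the comparator
# output Y switches a second analogue stage on.

def adder(inputs):
    """An op-amp summing stage: output is the sum of its inputs."""
    return sum(inputs)

def comparator(v, threshold):
    """Ideal comparator: True ('Y') when v exceeds the threshold."""
    return v > threshold

def gated_amplifier(signal, gate, gain=10.0):
    """A stage that the comparator output switches on or off."""
    return signal * gain if gate else 0.0

# 'if-then-else' without digital logic: the comparator output
# selects which analogue path carries the signal.
inputs = [0.4, 0.7, 0.2]
y = comparator(adder(inputs), threshold=1.0)   # sum 1.3 > 1.0 -> True
out = gated_amplifier(0.5, y)                  # the 'then' branch
alt = gated_amplifier(0.5, not y)              # the 'else' branch
print(out, alt)   # 5.0 0.0
```

    A real circuit would of course carry continuous voltages rather
    than Python floats, but the topology is the same.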


    Analog computers are good for modelling systems that are
    described by differential equations. Adders, differentiators,
    and integrators can all be easily implemented with electronic
    circuits. But beyond differential equation systems analog
    computers are useless.

    Exactly

    The Norden bomb sight of WWII was an electro-mechanical
    computer. Its job was to calculate the trajectory of
    a bomb released by an aircraft, and the trajectory is described
    by a differential equation.

    And it was the biggest heap of shit ever missold to the USAAF

    One of my professors told a story about a common "analog"
    practice among engineers of the past. To calculate an integral,
    which can be described as the area under a curve, they would plot
    the curve on well made paper and then cut out (with scissors)
    the plotted area and weigh it (on a lab balance). The ratio
    of the cut-out area with a unit area of paper would be the
    value of the integral. (Multi-dimensional integrals would
    require carving blocks of balsa wood or a similar material.)

    Of course it worked but today integration is easy to perform
    to unlimited accuracy using digital means.

    Not unlimited either in precision or in compute time




    --
    For in reason, all government without the consent of the governed is the
    very definition of slavery.

    Jonathan Swift

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to All on Tue Apr 8 10:28:54 2025
    XPost: comp.os.linux.misc

    Oh, on-theme, apparently Team Musk's nerd squad
    managed to CRASH a fair segment of the SSA customer
    web sites while trying to add some "anti-fraud"
    feature :-)

    PROBABLY no COBOL involved ... well, maybe ....

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Farley Flud on Tue Apr 8 10:24:22 2025
    XPost: comp.os.linux.misc

    On 4/8/25 7:59 AM, Farley Flud wrote:
    On Mon, 07 Apr 2025 17:59:45 -0400, c186282 wrote:


    Indeed ! However ... probably COULD be done, it's
    a bunch of shifting values - input to some accts,
    calx ops, shift to other accts ....... lots and
    lots of rheostats ........


    How is conditional branching (e.g. an if-then-else statement)
    to be implemented with analog circuits? It cannot be
    done.

    Ummmmmm ... a lightly-latching op amp maybe, "IF (v1) > (v2)
    THEN amp = ON" ?

    Analog computers are good for modelling systems that are
    described by differential equations. Adders, differentiators,
    and integrators can all be easily implemented with electronic
    circuits. But beyond differential equation systems analog
    computers are useless.

    The Norden bomb sight of WWII was an electro-mechanical
    computer. Its job was to calculate the trajectory of
    a bomb released by an aircraft, and the trajectory is described
    by a differential equation.

    Note those kinds of systems require the use of
    something called 'dithering' ... essentially a
    small mechanical vibrating device. This would
    help overcome the natural 'stickiness' of parts
    on parts, making finer actions possible by
    adding just a little chaos. Some pure-electronics
    systems use a form of 'dithering' too.
    https://www.planetanalog.com/can-adding-noise-actually-improve-system-performance/

    One of my professors told a story about a common "analog"
    practice among engineers of the past. To calculate an integral,
    which can be described as the area under a curve, they would plot
    the curve on well made paper and then cut out (with scissors)
    the plotted area and weigh it (on a lab balance). The ratio
    of the cut-out area with a unit area of paper would be the
    value of the integral. (Multi-dimensional integrals would
    require carving blocks of balsa wood or a similar material.)

    That's impressively clever !

    Of course it worked but today integration is easy to perform
    to unlimited accuracy using digital means.

    Most things are ... but at the cost of great complexity
    and often power consumption. IF you can find ways to make
    use of 'natural calculations' they may be worth using,
    or at least including in the mostly-digital machine. If
    Ohm's law can do near-instant floating-point calx it
    MAY be easier to have a tiny circuit of a few resistors
    and then read it with an A/D converter than to do all
    the calx step by dozens/hundreds/thousands of digital
    instruction steps.
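    That divider-plus-ADC idea can be sketched in a few lines. This is an
    idealized model with made-up component values - real parts would add
    exactly the drift and noise discussed earlier in the thread:

```python
# Idealized sketch of the idea above: a resistor divider "computes"
# a ratio instantly via Ohm's law, and the digital side reads the
# result through an ADC. Values are illustrative; real parts would
# add the drift and noise discussed elsewhere in the thread.

def divider_output(v_in, r1, r2):
    """Voltage divider: V_out = V_in * R2 / (R1 + R2)."""
    return v_in * r2 / (r1 + r2)

def adc_read(voltage, v_ref=5.0, bits=10):
    """Ideal ADC: quantize 0..v_ref into 2**bits codes."""
    code = round(voltage / v_ref * (2**bits - 1))
    return max(0, min(2**bits - 1, code))

v_out = divider_output(v_in=5.0, r1=3000.0, r2=2000.0)  # 2.0 V
code = adc_read(v_out)      # the digital side sees a code near 409
print(v_out, code)
```

    The "calculation" happens in the resistors; the digital machine
    only pays for one conversion, at the cost of ADC resolution.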

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to -hh on Tue Apr 8 11:51:18 2025
    XPost: comp.os.linux.misc

    -hh wrote this post while blinking in Morse code:

    <snip>

    Fast-forward to ~2005. While back visiting that department, I found out
    that that old analog beast was still running the line and they were
    trolling eBay for parts to keep it running.

    I was on a project where the manager(s) ended up buying the box on eBay, causing prices to rise.

    On another visit ~2015, the update: they finally found a new digitally
    based control system that was fast enough to finally replace it & did.

    --
    Dear Miss Manners:
    Please list some tactful ways of removing a man's saliva from your face.

    Gentle Reader:
    Please list some decent ways of acquiring a man's saliva on your face. If
    the gentleman sprayed you inadvertently to accompany enthusiastic
    discourse, you may step back two paces, bring out your handkerchief, and
    go through the motions of wiping your nose, while trailing the cloth along
    your face to pick up whatever needs mopping along the route. If, however,
    the substance was acquired as a result of enthusiasm of a more intimate
    nature, you may delicately retrieve it with a flick of your pink tongue.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to All on Tue Apr 8 16:17:44 2025
    XPost: comp.os.linux.misc

    On 4/7/25 17:59, c186282 wrote:
    On 4/7/25 4:39 PM, -hh wrote:
    ...
    I had one project some time 'way back in the 80s where we were
    troubleshooting a line that had a 1960s era analog control system, and
    one of the conversations that came up was if to replace it with
    digital. It got looked into and was determined that digital process
    controls weren't fast enough for the line.

    Fast-forward to ~2005.  While back visiting that department, I found
    out that that old analog beast was still running the line and they
    were trolling eBay for parts to keep it running.

      Hey, so long as it works well !

    It did, so long as there were parts for it.


    On another visit ~2015, the update:  they finally found a new
    digitally based control system that was fast enough to finally replace
    it & did.

      What was the thing doing ?

    It was running a high speed manufacturing line. If memory serves,
    roughly 1200ppm, so 20 parts per second.

    For a digital system that's a budget of ~50 milliseconds total
    processing time per part, which one can see how early digital stuff
    couldn't maintain that pace, but as PCs got faster, it wasn't really
    clear why it remained a "too hard".

    That seemed to have come from the architecture. It's a series of linked
    tooling station heads, each head having 22(?) sets of tools running
    basically in parallel, but because everything was indexed, a part that
    went through Station 1 on Head A then went through Station 1 on
    Head B, Station 1 on C, 1 on D, 1 on E, etc ...

    The process had interactive feedback loops all over the place between
    multiple heads (& other stuff), such that if Head E started to report
    its hydraulic psi was running high, that was because of an insufficient
    anneal back between B & C, so turn up the voltage on the annealing
    station ... and if that was already running high, then turn up the
    voltage on an earlier annealing station.

    But that wasn't all: it would make similar on-the-fly adjustments for
    each of the individual Stations too, so if Tool 18 on Head G was
    complaining, they could adjust settings on Tools 18 on Heads ABCDEF
    upstream of G .. and HIJK downstream too if that was a fix too.

    It must have been an incredible project back in the 1960s to get it all
    so incredibly figured out and well balanced.

    The modernization eventually came along because the base machines were
    expensive - probably "lost art" IMO - but were known to be capable of
    running much faster, and it was finally a modernization to have it run
    faster that got over the goal line for digitization. I think they ended
    up just a shade over 2000ppm; I'll ask the next time I stop by.


    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to All on Tue Apr 8 16:26:43 2025
    XPost: comp.os.linux.misc

    On 4/8/25 10:28, c186282 wrote:
    Oh, on-theme, apparently Team Musk's nerd squad
    managed to CRASH a fair segment of the SSA customer
    web sites while trying to add some "anti-fraud"
    feature  :-)

    PROBABLY no COBOL involved ... well, maybe ....


    Oh, it's worse than that.

    "The network crashes appear to be caused by an expansion initiated by
    the Trump team of an existing contract with a credit-reporting agency
    that tracks names, addresses and other personal information to verify customers’ identities. The enhanced fraud checks are now done earlier in
    the claims process and have resulted in a boost to the volume of
    customers who must pass the checks."

    <https://gizmodo.com/social-security-website-crashes-blamed-on-doge-software-update-2000586092>


    Translation:

    They *moved* where an existing credit agency check is done, but didn't
    load test it before going live ... and golly, they broke it!

    But the more important question here is:

    **WHY** did they move where this check is done?

    Because this check already existed, so moving where it's done isn't going
    to catch more fraud.

    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down. Yes,
    that's a deliberate waste of taxpayer dollars.

    The only motivation I can see is propaganda: this change will find more 'fraud' at the contractor's check ... but not more fraud in total.

    Expect them to use the before/after contractor numbers only to falsely
    claim that they've found 'more' fraud. No, they're committing fraud.


    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to -hh on Tue Apr 8 19:03:00 2025
    XPost: comp.os.linux.misc

    On 4/8/25 4:17 PM, -hh wrote:
    On 4/7/25 17:59, c186282 wrote:
    On 4/7/25 4:39 PM, -hh wrote:
    ...
    I had one project some time 'way back in the 80s where we were
    troubleshooting a line that had a 1960s era analog control system,
    and one of the conversations that came up was if to replace it with
    digital. It got looked into and was determined that digital process
    controls weren't fast enough for the line.

    Fast-forward to ~2005.  While back visiting that department, I found
    out that that old analog beast was still running the line and they
    were trolling eBay for parts to keep it running.

       Hey, so long as it works well !

    It did, so long as there were parts for it.


    On another visit ~2015, the update:  they finally found a new
    digitally based control system that was fast enough to finally
    replace it & did.

       What was the thing doing ?

    It was running a high speed manufacturing line.  If memory serves,
    roughly 1200ppm, so 20 parts per second.

    For a digital system that's a budget of ~50 milliseconds total
    processing time per part, which one can see how early digital stuff
    couldn't maintain that pace, but as PCs got faster, it wasn't really
    clear why it remained a "too hard".

    That seemed to have come from the architecture.  Its a series of linked tooling station heads, with each head has 22? sets of tools running
    basically in parallel, but because everything was indexed, a part that
    went through Station 1 on Head A, then went through Station 1 too on
    Heads B, and Station 1 on C, 1 on D, 1 on E, etc ...

    The process had interactive feedback loops all over the place between multiple heads (& other stuff), such that if head E started to report
    its hydraulic psi was running high, that was because of an insufficient anneal back between B & C, so turn up the voltage on the annealing station...and if that was already running high, then turn up the voltage
    on an earlier annealing station.

    But that wasn't all:  it would make similar on-the-fly adjustments for
    each of the individual Stations too, so if Tool 18 on Head G was
    complaining, they could adjust settings on Tools 18 on Heads ABCDEF
    upstream of G .. and HIJK downstream too if that was a fix too.

    It must have been an incredible project back in the 1960s to get it all
    so incredibly figured out and well balanced.

    The modernization eventually came along because the base machines were expensive - probably "lost art" IMO - but were known to be capable of
    running much faster, and it was finally a modernization to have it run
    faster that got over the goal line for digitization.  I think they ended
    up just a shade over 2000ppm; I'll ask the next time I stop by.

    These days it's difficult to even imagine such a complex
    equation being handled by anything BUT digital and lots
    of rule tables - but they had what they had back then and
    made do anyway.

    MY wonder ... who initially DESIGNED all that ? Real Genius
    at work from the good old Can-Do days. Those are the kind of
    people who are rarely remembered in the histories.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to -hh on Tue Apr 8 23:18:35 2025
    XPost: comp.os.linux.misc

    On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:

    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down. Yes, that's a deliberate waste of taxpayer dollars.

    You'd think someone would want to try to reduce that waste.
    Maybe set up a Department Of Government Efficiency or something...

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to -hh on Tue Apr 8 20:04:24 2025
    XPost: comp.os.linux.misc

    On 4/8/25 4:26 PM, -hh wrote:
    On 4/8/25 10:28, c186282 wrote:
    Oh, on-theme, apparently Team Musk's nerd squad
    managed to CRASH a fair segment of the SSA customer
    web sites while trying to add some "anti-fraud"
    feature  :-)

    PROBABLY no COBOL involved ... well, maybe ....


    Oh, its worse than that.

    "The network crashes appear to be caused by an expansion initiated by
    the Trump team of an existing contract with a credit-reporting agency
    that tracks names, addresses and other personal information to verify customers’ identities. The enhanced fraud checks are now done earlier in the claims process and have resulted in a boost to the volume of
    customers who must pass the checks."

    <https://gizmodo.com/social-security-website-crashes-blamed-on-doge-software-update-2000586092>



    Translation:

    They *moved* where an existing credit agency check is done, but didn't
    load test it before going live ... and golly, they broke it!

    But the more important question here is:

    **WHY** did they move where this check is done?

    Because this check already existed, so moving where its done isn't going
    to catch more fraud.


    "Well ... just jam the new code in ... *somewhere* ..."

    Oh, DOUBT many/any even knew the checks WERE done,
    just somewhere ELSE.


    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down.  Yes, that's a deliberate waste of taxpayer dollars.

    The only motivation I can see is propaganda:  this change will find more 'fraud' at the contractor's check ... but not more fraud in total.


    There's a fundamental political rule, esp in 'democracies',
    that goes "ALWAYS be seen as *DOING SOMETHING*"

    Spin it however needed.

    ONLY possible maybe perhaps reason to move the checks
    is to not let fraudsters/Putin deeper into the system/
    process where there may be more little flaws to exploit.

    We know ALL code has those little flaws, logic/field/
    buffer issues. Even M$ can't clean all that junk out of
    its products despite decades of work and 'AI' debugging
    and such. Check their security notes - there's still the
    dreaded "buffer-overflow vulnerability of the week".
    SO ... block perps earlier = less for them to attack.

    Maybe ....

    Expect them to use the before/after contractor numbers only to falsely
    claim that they've found 'more' fraud.  No, they're committing fraud.


    Nah ! They're *doing something* !!! :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Charlie Gibbs on Tue Apr 8 22:29:19 2025
    XPost: comp.os.linux.misc

    On 4/8/25 7:18 PM, Charlie Gibbs wrote:
    On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:

    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down. Yes,
    that's a deliberate waste of taxpayer dollars.

    You'd think someone would want to try to reduce that waste.
    Maybe set up a Department Of Government Efficiency or something...


    Hey ... humans are only JUST so smart, AI is
    even more stupid, and govt agencies .........

    Likely the expense of the earlier checks does NOT add
    up to much.

    I did mention one possible gain in doing the ID checks
    earlier - giving Vlad and friends less access to the
    deeper pages/system, places where more exploitable
    flaws live.

    In short, put up a big high city wall - then you
    don't have to worry AS much about the inner layers
    of the city.

    Hmmmmm ... wonder what kind of code they were
    screwing with ... lots of JS ? No WONDER it all
    blew up :-)

    Lucky it wasn't the old COBOL stuff ..... I only
    know ONE guy who is still a competent COBOL
    programmer. I did a little, but ......

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to c186282@nnada.net on Wed Apr 9 16:49:39 2025
    XPost: comp.os.linux.misc

    On 2025-04-09, c186282 <c186282@nnada.net> wrote:

    I did mention one possible gain in doing the ID checks
    earlier - giving Vlad and friends less access to the
    deeper pages/system, places where more exploitable
    flaws live.

    In short, put up a big high city wall - then you
    don't have to worry AS much about the inner layers
    of the city.

    A friend once told me about an interesting concept raised
    by a science fiction story he read. By assuming that entry
    controls are perfect, the presence of someone in a restricted
    area automatically means that he is authorized to be there.

    If the entry controls fail, at least you have a convenient
    scapegoat - which is of prime importance in politics.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Charlie Gibbs on Wed Apr 9 13:09:32 2025
    XPost: comp.os.linux.misc

    On 4/9/25 12:49 PM, Charlie Gibbs wrote:
    On 2025-04-09, c186282 <c186282@nnada.net> wrote:

    There's a fundamental political rule, esp in 'democracies',
    that goes "ALWAYS be seen as *DOING SOMETHING*"

    Something must be done. This is something.
    Therefore, this must be done.
    -- Yes, Prime Minister


    Heh !

    But often truer that we'd like to think and hear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to All on Wed Apr 9 14:18:10 2025
    XPost: comp.os.linux.misc

    On 4/8/25 22:29, c186282 wrote:
    On 4/8/25 7:18 PM, Charlie Gibbs wrote:
    On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:

    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down.  Yes,
    that's a deliberate waste of taxpayer dollars.

    You'd think someone would want to try to reduce that waste.
    Maybe set up a Department Of Government Efficiency or something...


      Hey ... humans are only JUST so smart, AI is
      even more stupid, and govt agencies .........

      Likely the expense of the earlier checks do NOT add
      up to much.

    It might not be, but in this case, the benefit of the change is
    literally zero ... and the expenses are not only more money to the
    contractor who gets paid by the check request, but also the cost of
    higher bandwidth demands which is what caused the site to crash.


      I did mention one possible gain in doing the ID checks
      earlier - giving Vlad and friends less access to the
      deeper pages/system, places where more exploitable
      flaws live.

      In short, put up a big high city wall - then you
      don't have to worry AS much about the inner layers
      of the city.

    I don't really buy that, because of symmetry: when the workflow is that
    a request has to successfully pass three gates, it's functionally
    equivalent to (A x B x C) and the sequence doesn't matter: one gets the
    same outcome for (C x B x A), and (A x C x B), etc.

    The primary motivation for order selection comes from optimization
    factors, such as the 'costs' of each gate: one puts the cheap gates
    which knock down the most early, and puts the slow/expensive gates late,
    after the dataset's size has already been minimized.
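    That ordering argument can be made concrete with a toy cost model. The
    pass rates and per-check costs below are invented; the point is only
    that every order admits the same requests, while the expected cost per
    request differs:

```python
# Toy model of the point above: three independent checks admit the
# same requests in any order, but the expected cost per request
# depends on the order. All rates and costs here are invented.
from itertools import permutations

# (name, pass_rate, cost_per_check) -- hypothetical values
gates = [("in_house_a", 0.50, 0.01),
         ("in_house_b", 0.80, 0.02),
         ("contractor", 0.95, 1.00)]

def expected_cost(order):
    """Expected per-request cost: each gate is only run on the
    fraction of requests that survived the earlier gates."""
    cost, surviving = 0.0, 1.0
    for _, p, c in order:
        cost += surviving * c
        surviving *= p
    return cost

for order in permutations(gates):
    names = " -> ".join(g[0] for g in order)
    print(f"{names}: {expected_cost(order):.3f}")

# The fraction admitted is 0.50 * 0.80 * 0.95 in every order, but
# running the expensive contractor check first costs the most.
```

    Running the cheap, highly-selective in-house gates first minimizes
    how often the expensive contractor check fires - which is exactly
    the opposite of what the reported change did.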

    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to -hh on Wed Apr 9 16:51:26 2025
    XPost: comp.os.linux.misc

    On 4/9/25 2:18 PM, -hh wrote:
    On 4/8/25 22:29, c186282 wrote:
    On 4/8/25 7:18 PM, Charlie Gibbs wrote:
    On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:

    Plus front-loading it before you've run your in-house checks means that >>>> your operating expenses to this contractor service go UP not down.
    Yes,
    that's a deliberate waste of taxpayer dollars.

    You'd think someone would want to try to reduce that waste.
    Maybe set up a Department Of Government Efficiency or something...


       Hey ... humans are only JUST so smart, AI is
       even more stupid, and govt agencies .........

       Likely the expense of the earlier checks do NOT add
       up to much.

    It might not be, but in this case, the benefit of the change is
    literally zero ... and the expenses are not only more money to the
    contractor who gets paid by the check request, but also the cost of
    higher bandwidth demands which is what caused the site to crash.


       I did mention one possible gain in doing the ID checks
       earlier - giving Vlad and friends less access to the
       deeper pages/system, places where more exploitable
       flaws live.
       In short, put up a big high city wall - then you
       don't have to worry AS much about the inner layers
       of the city.

    I don't really buy that, because of symmetry: when the workflow is that
    a request has to successfully pass three gates, it's functionally
    equivalent to (A x B x C) and the sequence doesn't matter: one gets the
    same outcome for (C x B x A), and (A x C x B), etc.

    The primary motivation for order selection comes from optimization
    factors, such as the 'cost' of each gate: one puts the cheap gates
    that knock out the most requests early, and the slow/expensive gates
    late, after the dataset's size has already been minimized.


    I understand your reasoning here.

    The point I was trying to make is a bit different
    however - less to really do with people trying to
    defraud the system but with those seeking to
    corrupt/destroy it. I see every web page, every
    bit of HTML/PHP/JS executed, every little database
    opened, as a potential source of fatal FLAWS enemies
    can find and exploit to do great damage.

    In that context, the sooner you can lock out pretenders
    the better - less of the system exposed to the state-
    sponsored hacks to analyze and pound at relentlessly.

    Now Musk's little group DID make a mistake in
    not taking bandwidth into account (and we do
    not know how ELSE they may have screwed up
    jamming new code into something they didn't
    write) but 'non-optimal' verification order
    MIGHT be worth the extra $$$ in an expanded
    'security' context.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From -hh@21:1/5 to All on Wed Apr 9 22:33:54 2025
    XPost: comp.os.linux.misc

    On 4/9/25 16:51, c186282 wrote:
    On 4/9/25 2:18 PM, -hh wrote:
    On 4/8/25 22:29, c186282 wrote:
    On 4/8/25 7:18 PM, Charlie Gibbs wrote:
    On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:

    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down. Yes,
    that's a deliberate waste of taxpayer dollars.

    You'd think someone would want to try to reduce that waste.
    Maybe set up a Department Of Government Efficiency or something...


       Hey ... humans are only JUST so smart, AI is
       even more stupid, and govt agencies .........

       Likely the expense of the earlier checks do NOT add
       up to much.

    It might not be, but in this case, the benefit of the change is
    literally zero ... and the expenses are not only more money to the
    contractor who gets paid by the check request, but also the cost of
    higher bandwidth demands which is what caused the site to crash.


       I did mention one possible gain in doing the ID checks
       earlier - giving Vlad and friends less access to the
       deeper pages/system, places where more exploitable
       flaws live.
       In short, put up a big high city wall - then you
       don't have to worry AS much about the inner layers
       of the city.

    I don't really buy that, because of symmetry: when the workflow is
    that a request has to successfully pass three gates, it's functionally
    equivalent to (A x B x C) and the sequence doesn't matter: one gets
    the same outcome for (C x B x A), and (A x C x B), etc.

    The primary motivation for order selection comes from optimization
    factors, such as the 'cost' of each gate: one puts the cheap gates
    that knock out the most requests early, and the slow/expensive gates
    late, after the dataset's size has already been minimized.


      I understand your reasoning here.

      The point I was trying to make is a bit different
      however - less to really do with people trying to
      defraud the system but with those seeking to
      corrupt/destroy it. I see every web page, every
      bit of HTML/PHP/JS executed, every little database
      opened, as a potential source of fatal FLAWS enemies
      can find and exploit to do great damage.

      In that context, the sooner you can lock out pretenders
      the better - less of the system exposed to the state-
      sponsored hacks to analyze and pound at relentlessly.

    Sure, but that's not relevant here, because from a threat vulnerability
    perspective, it's just one big 'black box' process. Anyone attempting to
    probe doesn't receive intermediary milestones/checkpoints to know if
    they successfully passed/failed a gate.


      Now Musk's little group DID make a mistake in
      not taking bandwidth into account (and we do
      not know how ELSE they may have screwed up
      jamming new code into something they didn't
      write) but 'non-optimal' verification order
      MIGHT be worth the extra $$$ in an expanded
      'security' context.

    Might be worth it if it actually enhanced security. It failed to do so, because their change was just a "shuffling of the existing deck chairs".


    -hh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to -hh on Wed Apr 9 23:11:06 2025
    XPost: comp.os.linux.misc

    On 4/9/25 10:33 PM, -hh wrote:
    On 4/9/25 16:51, c186282 wrote:
    On 4/9/25 2:18 PM, -hh wrote:
    On 4/8/25 22:29, c186282 wrote:
    On 4/8/25 7:18 PM, Charlie Gibbs wrote:
    On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:

    Plus front-loading it before you've run your in-house checks means that
    your operating expenses to this contractor service go UP not down. Yes,
    that's a deliberate waste of taxpayer dollars.

    You'd think someone would want to try to reduce that waste.
    Maybe set up a Department Of Government Efficiency or something...


       Hey ... humans are only JUST so smart, AI is
       even more stupid, and govt agencies .........

       Likely the expense of the earlier checks do NOT add
       up to much.

    It might not be, but in this case, the benefit of the change is
    literally zero ... and the expenses are not only more money to the
    contractor who gets paid by the check request, but also the cost of
    higher bandwidth demands which is what caused the site to crash.


       I did mention one possible gain in doing the ID checks
       earlier - giving Vlad and friends less access to the
       deeper pages/system, places where more exploitable
       flaws live.
       In short, put up a big high city wall - then you
       don't have to worry AS much about the inner layers
       of the city.

    I don't really buy that, because of symmetry: when the workflow is
    that a request has to successfully pass three gates, it's functionally
    equivalent to (A x B x C) and the sequence doesn't matter: one gets
    the same outcome for (C x B x A), and (A x C x B), etc.

    The primary motivation for order selection comes from optimization
    factors, such as the 'cost' of each gate: one puts the cheap gates
    that knock out the most requests early, and the slow/expensive gates
    late, after the dataset's size has already been minimized.


       I understand your reasoning here.

       The point I was trying to make is a bit different
       however - less to really do with people trying to
       defraud the system but with those seeking to
       corrupt/destroy it. I see every web page, every
       bit of HTML/PHP/JS executed, every little database
       opened, as a potential source of fatal FLAWS enemies
       can find and exploit to do great damage.

       In that context, the sooner you can lock out pretenders
       the better - less of the system exposed to the state-
       sponsored hacks to analyze and pound at relentlessly.

    Sure, but that's not relevant here, because from a threat vulnerability
    perspective, it's just one big 'black box' process. Anyone attempting to
    probe doesn't receive intermediary milestones/checkpoints to know if
    they successfully passed/failed a gate.

    Alas, the box is only "black" to OUR people.

    Remember a few months ago when China got into
    several major US phone/net carriers - and DID
    mess with them ?

    They got partway into the systems, then probed
    everything they found and found FLAWS they could
    exploit. THEY put 100 times more effort into it
    than the corp people spent looking for flaws.

    EVERY page is likely to have one or two tiny
    flaws - so the FEWER pages "They" can get into
    the system the BETTER.


       Now Musk's little group DID make a mistake in
       not taking bandwidth into account (and we do
       not know how ELSE they may have screwed up
       jamming new code into something they didn't
       write) but 'non-optimal' verification order
       MIGHT be worth the extra $$$ in an expanded
       'security' context.

    Might be worth it if it actually enhanced security.  It failed to do so, because their change was just a "shuffling of the existing deck chairs".

    THIS time, maybe. But some deck chairs are better
    than others.

    Alas, as long experience shows, there's likely NO
    way to solidly secure any public-facing system -
    esp with interactive web pages and such. Instead
    it becomes a statistical exercise. Can the damage
    be kept SMALL/RARE ?

    The cyber-access paradigm DOES seem to be the
    harbinger of doom. "They" have proven they CAN
    get into ANYTHING net-connected - govt, banks,
    utilities, nuke plants ... anything. There are
    ALWAYS sneaky back-doors, ALWAYS flaws that can
    be exploited.

    I'll go back awhile to when the USA exploited
    flaws in Siemens industrial-process units to
    trash all the Iranian uranium centrifuges.
    Alas, now, "They" can instruct yer local nuke
    reactor to pull all its control rods, or just
    shut down a cooling pump, or distort sensor
    readings, or .... the more access the more
    "in"s ..........

    2FA ? 3FA ? Required biometrics ? It can ALL
    be faked these days. "Security" is more a
    collective illusion/delusion.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Fri May 30 07:22:51 2025
    XPost: comp.os.linux.misc

    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion dollars he originally promised.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to John Ames on Fri May 30 19:38:09 2025
    XPost: comp.os.linux.misc

    On 30/05/2025 18:59, John Ames wrote:
    On Fri, 30 May 2025 07:22:51 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

    Comes as a *total* shock, lemmetellya.

    Jumps into the pilot's seat and starts pulling ALL the levers.
    I hope someone teaches them how to fly before they crash America.

    I am no fan of Democrats, but this is bullshit.


    --
    "I guess a rattlesnake ain't risponsible fer bein' a rattlesnake, but ah
    puts mah heel on um jess the same if'n I catches him around mah chillun".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sat May 31 00:33:16 2025
    XPost: comp.os.linux.misc

    On 30/05/2025 20:21, % wrote:
    Joel wrote:
    John Ames <commodorejohn@gmail.com> wrote:
    On Fri, 30 May 2025 07:22:51 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

    Comes as a *total* shock, lemmetellya.


    Let's say they had cut that much with this DOGE BS (not that it's 100%
    a stupid idea, of course, but they were not approaching very
    rationally), wouldn't the proposed tax breaks offset it?  Wouldn't we
    still be spending a large fortune every year on the damn military?

    no because there would be tariffs that go to trumps pocket

    Military spending is pretty low in reality.

    And it is a *pragmatic* program, whereas so much is spent on purely
    *moral* initiatives to employ people who think they can therefore tell
    how to run your life better than you can yourself.

    --
    "Anyone who believes that the laws of physics are mere social
    conventions is invited to try transgressing those conventions from the
    windows of my apartment. (I live on the twenty-first floor.) "

    Alan Sokal

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Fri May 30 23:18:48 2025
    XPost: comp.os.linux.misc

    On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building. That’s the end of DOGE, without “saving” anywhere near the trillion dollars he originally promised.

    Eh ... 'close enough'.

    At least somebody TRIED ... that's incredibly rare
    in the higher echelons.

    DOGE will continue, hopefully more smoothly integrate
    into the whole process. SOMEBODY has to keep an eye
    out for STUPID stuff.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sat May 31 07:23:01 2025
    XPost: comp.os.linux.misc

    military spending is the usa's biggest bill it's close to 90%

    90% of what?

    It's 16% of the total tax take and around 3% of GDP.


    --
    Any fool can believe in principles - and most of them do!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sat May 31 07:21:01 2025
    XPost: comp.os.linux.misc

    On 31/05/2025 04:18, c186282 wrote:
    On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

      Eh ... 'close enough'.

      At least somebody TRIED ... that's incredibly rare
      in the higher echelons.

      DOGE will continue, hopefully more smoothly integrate
      into the whole process. SOMEBODY has to keep an eye
      out for STUPID stuff.

    The natural tendency for any organization on a budget is to create more
    work for itself and request more budget and manpower to fix a problem
    that it had no intention of fixing in the first place.

    Remember fixing the problem you were set up to fix calls into question
    whether you are worth employing thereafter.

    You see the same in software. Imagine someone said 'that's it: the code
    is just fine - there is no need to develop it further' - what would happen
    to the Poetterings of this world?
    Tent city for you my boy!


    --
    “It is not the truth of Marxism that explains the willingness of intellectuals to believe it, but the power that it confers on
    intellectuals, in their attempts to control the world. And since...it is
    futile to reason someone out of a thing that he was not reasoned into,
    we can conclude that Marxism owes its remarkable power to survive every criticism to the fact that it is not a truth-directed but a
    power-directed system of thought.”
    Sir Roger Scruton

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sat May 31 06:29:48 2025
    XPost: comp.os.linux.misc

    On Fri, 30 May 2025 20:37:27 -0700, % wrote:

    Joel wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 30/05/2025 20:21, % wrote:
    Joel wrote:
    John Ames <commodorejohn@gmail.com> wrote:
    On Fri, 30 May 2025 07:22:51 -0000 (UTC) Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    Well, I guess that’s over, now that Elon Musk has left the
    building. That’s the end of DOGE, without “saving” anywhere near
    the trillion dollars he originally promised.

    Comes as a *total* shock, lemmetellya.

    Let's say they had cut that much with this DOGE BS (not that it's
    100%
    a stupid idea, of course, but they were not approaching very
    rationally), wouldn't the proposed tax breaks offset it?  Wouldn't
    we still be spending a large fortune every year on the damn
    military?

    no because there would be tariffs that go to trumps pocket

    Military spending is pretty low in reality.


    Laughable.


    And it is a *pragmatic* program, whereas so much is spent on purely
    *moral* initiatives to employ people who think they can therefore tell
    how to run your life better than you can yourself.


    Never even heard of such a thing, unless you mean the war on drugs
    (which is yet another discarding of money).

    military spending is the usa's biggest bill it's close to 90%

    There is mandatory spending including Social Security, Medicare, and other expenditures required by law.

    Then there is discretionary spending, which is less than a third of the
    mandatory budget. Of that, defense spending is a little less than half.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bobbie Sellers@21:1/5 to Joel on Sat May 31 13:51:33 2025
    XPost: comp.os.linux.misc

    On 5/30/25 12:19, Joel wrote:
    John Ames <commodorejohn@gmail.com> wrote:
    On Fri, 30 May 2025 07:22:51 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

    Comes as a *total* shock, lemmetellya.


    Let's say they had cut that much with this DOGE BS (not that it's 100%
    a stupid idea, of course, but they were not approaching very
    rationally), wouldn't the proposed tax breaks offset it?

    Yes the tax cuts are adding trillions to the national debt.

    Wouldn't we
    still be spending a large fortune every year on the damn military?

    Of course we would, as Defense against the un-Godly enemy is ongoing.
    Threats from the Asian continent and the EurAsian land mass.
    Threats are seen from other directions as well.

    The environment is ignored. We are FA and your children will FO.

    bliss

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bobbie Sellers@21:1/5 to Lawrence D'Oliveiro on Sat May 31 13:46:19 2025
    XPost: comp.os.linux.misc

    On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building. That’s the end of DOGE, without “saving” anywhere near the trillion dollars he originally promised.

    No it is not the end of DOGE because some of them are still doing
    their intrusive data gathering and others have been put in charge of
    agencies or established in pertinent positions in those agencies.

    These are the people who shut down USAID and tried to cut
    the funds allocated by the Congress to many agencies.

    bliss

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Bobbie Sellers on Sat May 31 22:24:56 2025
    XPost: comp.os.linux.misc

    On 31/05/2025 21:46, Bobbie Sellers wrote:


    On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

        No it is not the end of DOGE because some of them are still doing their intrusive data gathering and others have been put in charge of
    agencies or established in pertinent positions in those agencies.

        These are the people who shut down USAID and tried to cut
    the funds allocated by the Congress to many agencies.

        bliss

    Its typical really.

    Someone says, quite rightly , 'there's a lot of waste in government'

    So The Big Fart appoints someone to shut down EVERYTHING on the basis
    that it will probably become obvious which bits are needed once they
    are gone.

    Stupid clumsy, ill advised and destructive.



    --
    The New Left are the people they warned you about.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bobbie Sellers@21:1/5 to The Natural Philosopher on Sat May 31 15:02:46 2025
    XPost: comp.os.linux.misc

    On 5/31/25 14:24, The Natural Philosopher wrote:
    On 31/05/2025 21:46, Bobbie Sellers wrote:


    On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

         No it is not the end of DOGE because some of them are still doing
    their intrusive data gathering and others have been put in charge of
    agencies or established in pertinent positions in those agencies.

         These are the people who shut down USAID and tried to cut
    the funds allocated by the Congress to many agencies.

         bliss

    Its typical really.

    Someone says, quite rightly , 'there's a lot of waste in government'

    And all that means is the money is not being spent to my personal
    advantage. Call me a cynic, but at 87 I have seen a lot of politicians
    come and go.


    So The Big Fart appoints someone to shut down EVERYTHING on the basis
    that it will probably become obvious which bits are needed  once they
    are gone.

    Stupid clumsy, ill advised and destructive.

    What do you expect from a man who thinks a chainsaw is a good
    illustration of his method? Big drama, but as destructive as it has
    proven to be - and even thou and I agree on that.

    bliss

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Bobbie Sellers on Sat May 31 23:29:16 2025
    XPost: comp.os.linux.misc

    On 31/05/2025 23:02, Bobbie Sellers wrote:


    On 5/31/25 14:24, The Natural Philosopher wrote:
    On 31/05/2025 21:46, Bobbie Sellers wrote:


    On 5/30/25 00:22, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the
    building. That’s the end of DOGE, without “saving” anywhere
    near the trillion dollars he originally promised.

    No it is not the end of DOGE because some of them are still
    doing their intrusive data gathering and others have been put in
    charge of agencies or established in pertinent positions in those
    agencies.

    These are the people who shut down USAID and tried to cut the
    funds allocated by the Congress to many agencies.

    bliss

    Its typical really.

    Someone says, quite rightly , 'there's a lot of waste in
    government'

    And all that means is the money is not being spent to my personal
    advantage. Call me a cynic but at 87 I have seen a lot of politicians
    come and go.


    Well it means that it's not being spent to anyone's advantage other than
    the people who are employed to waste it.


    So The Big Fart appoints someone to shut down EVERYTHING on the
    basis that it will probably become obvious which bits are needed
    once they are gone.

    Stupid clumsy, ill advised and destructive.

    What do you expect from a man who thinks a chainsaw is a good
    illustration of his method? Big drama, but as destructive as it has
    proven to be - and even thou and I agree on that.

    I never expected anything different.

    It was time for a change, but that was really not the change that people thought it would be.



    --
    “when things get difficult you just have to lie”

    ― Jean-Claude Juncker

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to All on Sun Jun 1 01:20:12 2025
    XPost: comp.os.linux.misc

    On 5/30/25 11:38 PM, % wrote:
    c186282 wrote:
    On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

       Eh ... 'close enough'.

       At least somebody TRIED ... that's incredibly rare
       in the higher echelons.

       DOGE will continue, hopefully more smoothly integrate
       into the whole process. SOMEBODY has to keep an eye
       out for STUPID stuff.


    he maybe cost that much

    THIS year.

    However the downstream impact of "No Stupid Stuff"
    will be more profitable.

    Alas the next non-MAGA admin will instantly nuke DOGE
    and everything like it and go on a hiring and Stupid
    Stuff crusade ......

    IF Trump wants to have any "legacy" then much of this
    stuff needs to be enshrined as federal LAW, not just
    presidential directives/XOs. Laws are a lot harder
    to un-do.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bobbie Sellers@21:1/5 to All on Sat May 31 23:03:05 2025
    XPost: comp.os.linux.misc

    On 5/31/25 22:20, c186282 wrote:
    On 5/30/25 11:38 PM, % wrote:
    c186282 wrote:
    On 5/30/25 3:22 AM, Lawrence D'Oliveiro wrote:
    Well, I guess that’s over, now that Elon Musk has left the building.
    That’s the end of DOGE, without “saving” anywhere near the trillion
    dollars he originally promised.

       Eh ... 'close enough'.

       At least somebody TRIED ... that's incredibly rare
       in the higher echelons.

    Not the end of DOGE even though they copied out as much
    information as they wanted for Musk to use. They have embedded
    themselves in the various agencies where no MAGAn or TV personality
    could be found to fill the empty places they made.


       DOGE will continue, hopefully more smoothly integrate
       into the whole process. SOMEBODY has to keep an eye
       out for STUPID stuff.


    he maybe cost that much

      THIS year.

      However the downstream impact of "No Stupid Stuff"
      will be more profitable.

      Alas the next non-MAGA admin will instantly nuke DOGE
      and everything like it and go on a hiring and Stupid
      Stuff crusade ......

      IF Trump wants to have any "legacy" then much of this
      stuff needs to be enshrined as federal LAW, not just
      presidential directives/XOs. Laws are a lot harder
      to un-do.

    Trump is not concerned with Legacy beyond how large he can
    make his estate before death overtakes him. He is acting this way
    reflexively, not thinking much about the consequences of his actions
    for the average citizen.

    He is imposing a Trillion dollar debt increase on the USA by
    continuing the stupid tax cut for the most well off and the corporations
    which make them their money at the expense of the workers.
    That tax cut is why many call it "Trump Inflation". Cutting taxes
    on the well-off is based on the "Trickle Down" theory which has
    been proven to be the opposite. The wealth has trickled upward
    to the wealthy as the value of the Dollar has decreased.

    Trump overturns laws about which he knows nothing. His
    agencies resign functions like food inspection, which were
    enacted around 100 years ago, when in New York City water
    diluted with chalk was sold as cow's milk. Food was mishandled
    and adulterated. Likely those practices will return.

    "Stupid Stuff", do you mean subverting the Judiciary with the
    part of the BBB that keeps judges from punishing federal officials
    who offend and are in contempt of court?
    It subverts the Separation of Powers, a subversion which
    is against the Constitution of the USA, which Trump and
    Congress members have sworn to uphold, just as
    military members swear to Protect and Defend the Constitution.
    Trump and many members of his Administration, including
    the MAGA congressmen and senators, are Oath Breakers, as
    were many of the participants in January 2021 who tried to keep
    the Congress from counting votes.

    Do you mean the cuts to food aid for the desperate and to
    the children of the working poor?

    Do you mean the cuts to Medicaid that will leave millions
    of Americans without health care?

    Do you mean closing the USAID which was soft-power and
    which will likely result in myriads of deaths in African nations due
    to loss of food and medication support? Generals in the past have
    said USAID was worth more than war machines which could be
    had for the same modest amount of money.

    What do you mean by "Stupid Stuff"? Maybe you mean
    Social Security? You would have to bring back the Poor Houses
    of the past, with the Work Houses which were dormitories where
    the poor and aged were secured. They were miserable places, as
    were most of the public institutions of the past. But with all the
    empty office buildings maybe the space could be found. Won't
    be free though.

    bliss - neither Democrat nor Republican but lover of the
    Constitution of the United States, flawed though it is.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Bobbie Sellers on Sun Jun 1 15:13:20 2025
    XPost: comp.os.linux.misc

    On 2025-06-01, Bobbie Sellers <bliss-sf4ever@dslextreme.com> wrote:

    He is imposing a Trillion dollar debt increase on the USA by
    continuing the stupid tax cut for the most well off and the corporations which make them their money at the expense of the workers.
    That tax cut is why many call it "Trump Inflation". Cutting taxes
    on the well-off is based on the "Trickle Down" theory which has
    been proven to be the opposite. The wealth has trickled upward
    to the wealthy as the value of the Dollar has decreased.

    The problem with "trickle-down" economics is that much of what
    trickles down is yellow. That's why the people at the bottom
    are known as "peons".

    Sometimes, though, what trickles down is brown. This is known
    as the "shit flows downhill" theory.

    Eventually, though, such trickling will be recognized by the
    Powers that Be as a leak, which will then be patched so they
    can keep 100% of everything.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)