• Are Floating Point Numbers still a Can of Worms?

    From Mostowski Collapse@21:1/5 to All on Fri Oct 14 07:42:54 2022
    On Windows Platform:

    Python 3.11.0rc1 (main, Aug 8 2022, 11:30:54)
    >>> 396 ** -1
    0.002525252525252525
    >>> 1/396
    0.0025252525252525255
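
    Which result is actually nearer to 1/396? A minimal sketch of a check
    with Python's fractions module; note that 396 ** -1 goes through the
    platform's pow, so the outcome may differ from platform to platform:

    from fractions import Fraction

    exact = Fraction(1, 396)  # the true rational value
    for cand in (396 ** -1, 1 / 396):
        # Fraction(float) converts the double exactly, so the error is exact
        print(repr(cand), float(abs(Fraction(cand) - exact)))

    IEEE 754 division is correctly rounded, so the 1/396 error is at most
    half an ULP; the pow result above is apparently one ULP off.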

  • From Julio Di Egidio@21:1/5 to Mostowski Collapse on Mon Oct 17 01:59:42 2022
    On Friday, 14 October 2022 at 16:43:05 UTC+2, Mostowski Collapse wrote:
    [...]

    Numerics: another thing you have never known shit
    about and still manage to write bullshit across all
    groups ad nauseam.

    You really don't understand the damage you are doing,
    here as elsewhere, do you, you piece of retarded shit,
    or you too just working for the nazi monster??

    *Troll-spammer-crank alert*

    Julio

  • From Mostowski Collapse@21:1/5 to ju...@diegidio.name on Tue Oct 18 07:41:36 2022
    Nazi Monster. Do you mean Putin?

    LoL

    ju...@diegidio.name wrote on Monday, 17 October 2022 at 10:59:52 UTC+2:
    [...]

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Sat Oct 22 14:33:21 2022
    I also get:

    Python 3.11.0rc1 (main, Aug 8 2022, 11:30:54)
    >>> 2.718281828459045**0.8618974796837966
    2.367649

    Nice try, but isn't this one the more correct?

    ?- X is 2.718281828459045**0.8618974796837966.
    X = 2.3676489999999997.
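
    One way to decide, sketched with Python's decimal module as a
    high-precision reference (assuming decimal's power is accurate at
    30 digits, which it almost always is):

    from decimal import Decimal, getcontext

    getcontext().prec = 30
    ref = Decimal('2.718281828459045') ** Decimal('0.8618974796837966')
    print(ref)  # high-precision reference for the same pow

    for cand in (2.367649, 2.3676489999999997):
        # Decimal(float) is exact, so this measures the true distance
        print(cand, abs(Decimal(cand) - ref))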

    Mostowski Collapse wrote on Friday, 14 October 2022 at 16:43:05 UTC+2:
    [...]

  • From Pieter van Oostrum@21:1/5 to Mostowski Collapse on Sun Oct 23 16:24:58 2022
    Mostowski Collapse <bursejan@gmail.com> writes:

    [...]


    That's probably the accuracy of the underlying C implementation of the exp function.

    In [25]: exp(0.8618974796837966)
    Out[25]: 2.367649

    But even your answer can be improved:

    Maxima:

    (%i1) fpprec:30$

    (%i2) bfloat(2.718281828459045b0)^bfloat(.8618974796837966b0);
    (%o2) 2.36764899999999983187397393143b0

    but:

    (%i7) bfloat(%e)^bfloat(.8618974796837966b0);
    (%o7) 2.3676490000000000085638369695b0

    Surprisingly, this is closer to Python's answer.

    But 2.718281828459045 isn't e. Close, but no cigar.

    (%i10) bfloat(2.718281828459045b0) - bfloat(%e);
    (%o10) - 2.35360287471352802147785151603b-16

    Fricas:

    (1) -> 2.718281828459045^0.8618974796837966

    (1) 2.3676489999_999998319

    (2) -> exp(0.8618974796837966)

    (2) 2.3676490000_000000086
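
    The same comparison can be reproduced in Python with the decimal
    module; a sketch at 30 digits of working precision, which should
    match the Maxima and Fricas values above:

    from decimal import Decimal, getcontext

    getcontext().prec = 30
    x = Decimal('2.718281828459045')  # the decimal literal, not e
    y = Decimal('0.8618974796837966')

    print(x ** y)                # the pow value, 2.36764899999...
    print(y.exp())               # e**y, 2.36764900000...
    print(x - Decimal(1).exp())  # literal minus e, about -2.35e-16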

    --
    Pieter van Oostrum <pieter@vanoostrum.org>
    www: http://pieter.vanoostrum.org/
    PGP key: [8DAE142BE17999C4]

  • From Mostowski Collapse@21:1/5 to Pieter van Oostrum on Mon Oct 24 02:00:37 2022
    I was using the Microsoft calculator accessory. Maybe not extremely
    correct in the last digits. But we can check which one is the nearer
    float. The problem is that the decimal representations of the floats
    themselves are not that accurate. So let's first show them a little
    more accurately in decimal, using yet another tool:

    ?- X is decimal(2.367649).
    X = 0d2.367649000000000114596332423388957977294921875.

    ?- X is decimal(2.3676489999999997).
    X = 0d2.367648999999999670507122573326341807842254638671875.

    Now let's see what the Microsoft calculator accessory gives:

    2.718281828459045 ^ 0.8618974796837966 =
    2.3676489999999998318739739314273

    Which float is closer? Let's use the Microsoft calculator accessory again:

    2.3676490000000001145963324233890 -
    2.3676489999999998318739739314273 =
    0.0000000000000002827223584919617

    2.3676489999999998318739739314273 -
    2.3676489999999996705071225733263 =
    0.000000000000000161366851358101

    So I guess the second float is the round-to-nearest result.
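
    For what it's worth, Python needs no extra tool for that first step:
    decimal.Decimal converts a float exactly, giving the same expansions
    as the decimal/1 queries above:

    from decimal import Decimal

    print(Decimal(2.367649))
    # 2.367649000000000114596332423388957977294921875
    print(Decimal(2.3676489999999997))
    # 2.367648999999999670507122573326341807842254638671875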

    Pieter van Oostrum wrote on Sunday, 23 October 2022 at 16:25:21 UTC+2:
    [...]

  • From Schachner, Joseph (US)@21:1/5 to Mostowski Collapse on Mon Oct 24 14:52:28 2022
    Floating point will always be a can of worms, as long as people expect
    it to represent real numbers with more precision than float has. Usually
    this is not an issue, but sometimes it is. And although this example does
    not exhibit subtractive cancellation, that is the surest way to end up
    with less precision than the two values you subtracted. And if you try to
    add up lots of values, once your sum grows large enough, tiny values will
    not change it anymore, even if there are many of them; there are simple
    algorithms to avoid this effect. But all of this is because float has
    limited precision.
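
    One of those simple algorithms is compensated (Kahan) summation; a
    minimal sketch in Python (in practice math.fsum already gives an
    accurately rounded sum out of the box):

    def kahan_sum(values):
        """Compensated (Kahan) summation: carry the rounding error of
        each addition in c, so tiny terms are not silently dropped."""
        total = 0.0
        c = 0.0  # running compensation
        for x in values:
            y = x - c
            t = total + y
            c = (t - total) - y
            total = t
        return total

    # 1.0 followed by many tiny terms: the naive sum loses them all,
    # because each 1e-16 is below half an ULP of 1.0.
    data = [1.0] + [1e-16] * 1000
    print(sum(data))        # 1.0
    print(kahan_sum(data))  # about 1.0000000000001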

    --- Joseph S.



  • From Dennis Lee Bieber@21:1/5 to All on Mon Oct 24 14:02:33 2022
    On Mon, 24 Oct 2022 14:52:28 +0000, "Schachner, Joseph (US)" <Joseph.Schachner@Teledyne.com> declaimed the following:

    [...]


    Might I suggest this to those affected... https://www.amazon.com/Real-Computing-Made-Engineering-Calculations/dp/0486442217/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1666634371&sr=8-1

    (Wow -- they want a fortune for the original hard-cover, which I own)


    --
    Wulfraed Dennis Lee Bieber AF6VN
    wlfraed@ix.netcom.com http://wlfraed.microdiversity.freeddns.org/

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Tue Oct 25 04:49:57 2022
    For the stalker's information, maybe the more correct word is not
    precision, but accuracy. The result below has the expected precision,
    i.e. the pow() function produces a 53-bit mantissa for the floating
    point value. The decimal representation might not show that, but I
    guess a C double precision floating point function computes 53 bits
    of mantissa, and the Python floats we see below are the same as the
    C double precision floating point values:

    Python 3.11.0rc1 (main, Aug 8 2022, 11:30:54)
    >>> 2.718281828459045**0.8618974796837966
    2.367649

    Unfortunately it's not accurate to within 0.5 ULP. Let's compute the
    error in terms of ULP via the Microsoft calculator. What errors do we
    have for the two floating point numbers?

    0.0000000000000002827223584919617 /
    2.3676489999999998318739739314273 * 2^52 =
    0.53777747814549647686988825277067

    0.000000000000000161366851358101 /
    2.3676489999999998318739739314273 * 2^52 =
    0.30694232618360826777706134718012

    So the second floating point value, not the number returned by Python,
    has a relative error less than 0.5 ULP, and the first floating point
    value has a relative error above 0.5 ULP. If the error is more than
    0.5 ULP, it is no longer correctly rounded, only nearly rounded.
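
    Measured in actual ULPs (math.ulp, Python 3.9+) rather than as relative
    error times 2^52, the numbers come out somewhat differently, but the
    conclusion is the same: only the second value is within 0.5 ULP. A
    sketch, using the calculator value above as reference:

    import math
    from fractions import Fraction

    ref = Fraction('2.3676489999999998318739739314273')

    for cand in (2.367649, 2.3676489999999997):
        # exact distance to the reference, in units of ulp(cand)
        err_ulps = abs(Fraction(cand) - ref) / Fraction(math.ulp(cand))
        print(cand, float(err_ulps))  # roughly 0.64 and 0.36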

    Mostowski Collapse wrote on Tuesday, 25 October 2022 at 13:30:52 UTC+2:
    [...]

  • From Mostowski Collapse@21:1/5 to All on Tue Oct 25 04:30:40 2022
    Is this the same Schachner, Joseph who posted:

    Subject: ANN: Dogelog Runtime, Prolog to the Moon (2021)
    Message-ID: <BN8PR14MB28516E1052F60F2A924B55B7F5DA9@BN8PR14MB2851.namprd14.prod.outlook.com>
    Opinion: Anyone who is counting on Python for truly fast
    compute speed is probably using Python for the wrong purpose.
    Here, we use Python to control Test Equipment, to set up the
    equipment and ask for a measurement, get it, and proceed to
    the next measurement; and at the end produce a nice formatted
    report. If we wrote the test script in C or Rust or whatever it could
    not run substantially faster because it is communicating with
    the test equipment, setting it up and waiting for responses, and
    that is where the vast majority of the time goes. Especially
    if the measurement result requires averaging it can take a while.
    In my opinion this is an ideal use for Python, not just because
    the speed of Python is not important, but also because we can
    easily find people who know Python, who like coding in Python,
    and will join the company to program in Python ... and stay with us.

    --- Joseph S.

    Congratulations, you already communicated in 2021 that speed is not
    necessary. So what's your opinion now in 2022, precision is not
    necessary either? Well, well, you are surely an expert at lowering the bar.

    LMAO!

    Schachner, Joseph (US) wrote on Monday, 24 October 2022 at 16:54:04 UTC+2:
    [...]

  • From Mostowski Collapse@21:1/5 to All on Fri Oct 28 17:31:52 2022
    Doing a little failure sweep for X**Y, where the bits of X and Y total
    20 bits, gives a slightly different picture; everything tested on Windows:

    % sweep1, JDK 19: 1756
    % sweep2, JDK 19: 666

    % sweep1, PyPy: 2930
    % sweep2, PyPy: 2174

    % sweep1, nodejs: 78264
    % sweep2, nodejs: 98698

    sweep1: 15 bits X plus sign bit, 3 bits Y plus sign bit
    sweep2: 12 bits X plus sign bit, 6 bits Y plus sign bit
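
    The sweep code itself isn't shown; a hypothetical miniature version in
    Python might look like this, with decimal at high precision as the
    reference and restricted to positive bases for simplicity (ref_pow is
    an illustrative helper, not from the original post):

    from decimal import Decimal, getcontext

    getcontext().prec = 40  # generous reference precision for a double

    def ref_pow(x, y):
        # high-precision x**y, rounded back to the nearest double
        return float(Decimal(x) ** Decimal(y))

    failures = 0
    for xi in range(1, 1 << 7):       # 7 bits of X
        x = xi / 8.0
        for yi in range(1, 1 << 5):   # 5 bits of Y
            y = yi / 4.0
            if x ** y != ref_pow(x, y):
                failures += 1
    print(failures)  # count of pow results that are not correctly rounded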

  • From Mostowski Collapse@21:1/5 to All on Mon Nov 21 09:07:12 2022
    Ha ha, these Machin-like formulas are undershooting and overshooting:

    Python 3.11.0rc1
    >>> import math
    >>> 28*math.atan(1/9)+4*math.atan(4765/441284)
    3.1415926535897927
    >>> 20*math.atan(1/7)+8*math.atan(3/79)
    3.141592653589793
    >>> 48*math.atan(1/16)+4*math.atan(14818029403841/407217467325761)
    3.1415926535897936

    Credits: Machin's Merit
    https://www.mathpages.com/home/kmath373/kmath373.htm
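
    To check that the spread comes from double-precision atan rather than
    from the formulas themselves, one can re-evaluate at higher precision;
    a sketch with the third-party mpmath package (if the identities are
    exact, each difference prints as roughly zero):

    from mpmath import mp, atan, mpf, pi

    mp.dps = 30  # 30 decimal digits of working precision

    formulas = [
        28*atan(mpf(1)/9) + 4*atan(mpf(4765)/441284),
        20*atan(mpf(1)/7) + 8*atan(mpf(3)/79),
        48*atan(mpf(1)/16) + 4*atan(mpf(14818029403841)/407217467325761),
    ]
    for f in formulas:
        print(f - pi)  # deviation from pi at 30 digits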
