• (Almost) Rock-n-Roll - "The Bunny Hop" (1953)

    From 186283@ud0s4.net@21:1/5 to All on Thu Jan 9 02:48:19 2025
    https://www.youtube.com/watch?v=EmC1KyxhEJU

    Sax and trumpet main, not guitars.

    However it does have that proto-rock phasing
    and feel. Beef it up to guitars and it could
    have been a later 50s rock-n-roll tune.

    Where'd I hear of this ... seems the 'famous
    bunny museum' in Cal burnt down in the fires.
    Some old video showed a 45 of this song.


    --
    033-33

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Allodoxaphobia@21:1/5 to 186282@ud0s4.net on Thu Jan 9 16:02:15 2025
    On Thu, 9 Jan 2025 02:48:19 -0500, 186282@ud0s4.net wrote:
    https://www.youtube.com/watch?v=EmC1KyxhEJU

    Sax and trumpet main, not guitars.

    And, this has what, please, to do with colm?

    The trash overflows in this ng.
    Time to rename it -- and for me to drop it.

  • From 186283@ud0s4.net@21:1/5 to Allodoxaphobia on Thu Jan 9 19:33:50 2025
    On 1/9/25 11:02 AM, Allodoxaphobia wrote:
    On Thu, 9 Jan 2025 02:48:19 -0500, 186282@ud0s4.net wrote:
    https://www.youtube.com/watch?v=EmC1KyxhEJU

    Sax and trumpet main, not guitars.

    And, this has what, please, to do with colm?

    Actually it was mis-posted ...

    But it's still fun.

    Set it as yer KDE startup tune :-)

    Hmmm ... wonder if it's where Hugh Hefner got
    his 'bunny' idea ?

    The trash overflows in this ng.
    Time to rename it -- and for me to drop it.

    I've complained from time to time, tried to start
    lin-centric threads ... only just SO much good.
    People are people and 'linux' is a kinda narrow
    subject - so other stuff quickly creeps in.

    The tech stuff - usually in the first half dozen
    replies in the thread.

    Maybe you should get a Chat or OpenAI feed and
    order it to stick to the hard tech and only the
    hard tech ? Of course soon even those will rebel
    and drift and wanna talk about the Kardashians.

    On the plus, most all people in COLM are gonna
    have 3-digit IQs - unlike too many other groups :-)

    If you want tech ... I've been trying to find out
    if, with modern 'flat address space' CPUs, there's
    any speed advantage in setting functions and
    data blocks at specific addresses - what in the
    old days would have been 'page boundaries' or
    such. In short, do an i7 or ARM and/or popular
    mem-management chips have less work to do setting
    up reads/writes at some memory addresses?
    Maybe a critical app could run ten percent faster
    if, even 'wasting' memory, you put some stuff in
    certain exact places. With older chips with banked
    memory, and even mag HDDs, the answer was Yes.

  • From The Natural Philosopher@21:1/5 to 186282@ud0s4.net on Fri Jan 10 07:11:33 2025
    On 10/01/2025 00:33, 186282@ud0s4.net wrote:
    I've been trying to find out
      if with modern 'flat address space' CPUs there's
      any speed advantage in setting functions and
      data blocks at specific addresses - what in the
      old days would have been 'page boundaries' or
      such. In short does an i7 or ARM and/or popular
      mem-management chips have less work to do setting
      up reading/writing at some memory addresses ?
      Maybe a critical app could run ten percent faster
      if, even 'wasting' memory, you put some stuff in
      kind of exact places. Older chips with banked
      memory and even mag HDDs, the answer was Yes.

    Mm.

    I don't think so. About the only thing that is proximity-sensitive is
    caching. That is, you want to try to ensure that you are operating out
    of cache, but the algorithms for which parts of the instructions are
    cached and which are not are beyond my ability to identify, let alone
    code for...

    --
    The New Left are the people they warned you about.

  • From 186283@ud0s4.net@21:1/5 to The Natural Philosopher on Fri Jan 10 02:58:46 2025
    On 1/10/25 2:11 AM, The Natural Philosopher wrote:
    On 10/01/2025 00:33, 186282@ud0s4.net wrote:
    I've been trying to find out
       if with modern 'flat address space' CPUs there's
       any speed advantage in setting functions and
       data blocks at specific addresses - what in the
       old days would have been 'page boundaries' or
       such. In short does an i7 or ARM and/or popular
       mem-management chips have less work to do setting
       up reading/writing at some memory addresses ?
       Maybe a critical app could run ten percent faster
       if, even 'wasting' memory, you put some stuff in
       kind of exact places. Older chips with banked
       memory and even mag HDDs, the answer was Yes.

    Mm.

    I don't think so. About the only thing that is proximity-sensitive is
    caching. That is, you want to try to ensure that you are operating out
    of cache, but the algorithms for which parts of the instructions are
    cached and which are not are beyond my ability to identify, let alone
    code for...

    I did a lot of searching but never found a
    good answer. IF you can do stuff entirely
    within CPU cache then it WILL be faster.
    Alas not MUCH stuff will be adaptable to
    that strategy - esp with today's bloatware.

    We MAY be talking maker/sub-brand specifics ...
    intel i3/i5/i7/i9 may all be different. Different
    gens different yet. ARMs too.

    Seems that CPUs and MMUs can do certain register
    ops faster/easier than others - fewer calcs and
    settings to switch. Therein my quest. If you want
    some code to run AS FAST AS POSSIBLE, it's worth
    thinking about.

  • From The Natural Philosopher@21:1/5 to 186282@ud0s4.net on Fri Jan 10 08:08:27 2025
    On 10/01/2025 07:58, 186282@ud0s4.net wrote:
    On 1/10/25 2:11 AM, The Natural Philosopher wrote:
    On 10/01/2025 00:33, 186282@ud0s4.net wrote:
    I've been trying to find out
       if with modern 'flat address space' CPUs there's
       any speed advantage in setting functions and
       data blocks at specific addresses - what in the
       old days would have been 'page boundaries' or
       such. In short does an i7 or ARM and/or popular
       mem-management chips have less work to do setting
       up reading/writing at some memory addresses ?
       Maybe a critical app could run ten percent faster
       if, even 'wasting' memory, you put some stuff in
       kind of exact places. Older chips with banked
       memory and even mag HDDs, the answer was Yes.

    Mm.

    I don't think so. About the only thing that is proximity-sensitive is
    caching. That is, you want to try to ensure that you are operating out
    of cache, but the algorithms for which parts of the instructions are
    cached and which are not are beyond my ability to identify, let alone
    code for...

      I did a lot of searching but never found a
      good answer. IF you can do stuff entirely
      within CPU cache then it WILL be faster.
      Alas not MUCH stuff will be adaptable to
      that strategy - esp with today's bloatware.

    RK is probably the best person to understand that, but in fact a
    modern compiler will normally optimise for a specific processor
    architecture. It is quite instructive to see how 'real world'
    programs speed up on a chipset that simply has more cache.


      We MAY be talking maker/sub-brand specifics ...
      intel i3/i5/i7/i9 may all be different. Different
      gens different yet. ARMs too.

    Of course. And indeed many architectures are optimised for e.g. C
    programs. Imagine if your chipset detects a 'call subroutine' code
    nugget and then proceeds to cache the new stack pointer's stack
    before doing anything else. All your 'local' variables are now in
    cache.


      Seems that CPUs and MMUs can do certain register
      ops faster/easier than others - fewer calcs and
      settings to switch. Therein my quest. If you want
      some code to run AS FAST AS POSSIBLE, it's worth
      thinking about.
    And compilers do.
    And chipsets do.

    So we don't have to. I used to do *86 assembler, but what today's C
    compilers spit out is better than any hand-crafted assembler.

    My mathematical friend only uses assembler to access some weird features
    of a specific intel architecture to do with vector arithmetic. He writes
    C library functions in assembler to access them.

    Because the compilers don't acknowledge their existence - yet.




    --
    "I guess a rattlesnake ain't risponsible fer bein' a rattlesnake, but ah
    puts mah heel on um jess the same if'n I catches him around mah chillun".

  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Fri Jan 10 09:10:16 2025
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 10/01/2025 07:58, 186282@ud0s4.net wrote:
    On 1/10/25 2:11 AM, The Natural Philosopher wrote:
    I don't think so. About the only thing that is proximity-sensitive
    is caching. That is, you want to try to ensure that you are
    operating out of cache, but the algorithms for which parts of the
    instructions are cached and which are not are beyond my ability to
    identify, let alone code for...

    The easy win is alignment, which allows more efficient use of memory
    resources. For example, if a function is 50 bytes long and your cache
    line size is 64 bytes, you can ensure the function fits into a single
    cache line by aligning its start address to a multiple of 64. It
    doesn’t necessarily make that particular function faster, but it
    leaves more room in the cache for everything else - so more cache
    hits in the program as a whole. The same idea applies at other levels
    of the memory hierarchy, e.g. the system page size (usually, but not
    always, 4 Kbyte). Compilers and linkers already exploit all this
    quite effectively.

    The harder strategy is to group data (or code) according to how it’s
    used. If a function is going to operate on several different values,
    then having them adjacent in memory maximises the chances they’ll be
    read (or cached) in a single operation. Main memory access can be
    very slow (hundreds of times the latency of individual instructions),
    so getting this right can have substantial benefits. Easy enough for
    a little struct but more challenging for a complex data structure.

    So we don't have to. I used to do *86 assembler, but what today's C
    compilers spit out is better than any hand-crafted assembler.

    My mathematical friend only uses assembler to access some weird
    features of a specific intel architecture to do with vector
    arithmetic. He writes C library functions in assembler to access them.

    Because the compilers don't acknowledge their existence - yet.

    I’ve had good results using GCC’s native support for vector
    operations, though better on x86 than Arm so far; my needs aren’t
    complex.

    The context where I’ve had to use a lot of assembler is achieving constant-time operation. Compilers love conditional branches...
    (Actually vector instructions are better for this, but not all of my
    code is vectorizable.)

    --
    https://www.greenend.org.uk/rjk/
