• Why pitch-corrected vocals sound so mechanical

    From Adam H. Kerman@21:1/5 to All on Fri Apr 11 08:13:40 2025
    https://www.youtube.com/watch?v=BDJF4lR3_eg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From shawn@21:1/5 to ahk@chinet.com on Fri Apr 11 05:05:52 2025
    On Fri, 11 Apr 2025 08:13:40 -0000 (UTC), "Adam H. Kerman"
    <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

He's done a number of those analyses that show just how much modern music
gets modified. Either pitch correction or compression of
the audio happens all the time.

  • From Rhino@21:1/5 to Adam H. Kerman on Fri Apr 11 06:38:33 2025
    On 2025-04-11 4:13 AM, Adam H. Kerman wrote:
    https://www.youtube.com/watch?v=BDJF4lR3_eg

    Just as pitch-correction alters the pitch of notes to make them
    "perfect", quantization alters the placement of notes to make them
    rhythmically "perfect". For instance, if the drummer's high hat hits are
    a little bit off the beat, quantization can shift them to be exactly on
    the beat. This too makes the playing sound mechanical. I've heard
    several musicians bemoan the (over)use of quantization just as "Fil"
    bemoans the use of pitch-correction in this video.
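In DAW terms, quantization is just snapping each recorded hit to the nearest point on a timing grid. A toy sketch in Python (names and numbers are mine, not from any actual DAW):

```python
# Toy illustration of rhythmic quantization: event times (in beats)
# are snapped to the nearest 16th-note grid position.

def quantize(hits, grid=0.25):
    """Snap each hit time to the nearest multiple of `grid` (in beats)."""
    return [round(t / grid) * grid for t in hits]

# A hi-hat pattern played slightly off the grid:
played = [0.02, 0.26, 0.49, 0.77]
print(quantize(played))  # [0.0, 0.25, 0.5, 0.75]
```

The off-grid "feel" in the input is exactly what gets thrown away.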

    This video is a brief explanation of quantization without much of the philosophizing about whether it is good or bad. I'm sure there are other
    videos that examine the issue more thoroughly.

    https://www.youtube.com/watch?v=68LtY2aATl0 [7 minutes]

Note: He uses the term "DAW" (spelled out as individual letters) several times without explaining it. DAW stands for Digital Audio
    Workstation, essentially the software you use to do the recording, pitch-correction, quantization etc.

For what it's worth, I've often heard analysts note that bands like The Beatles, the Doors, and Led Zeppelin definitely speed up and slow down perceptibly in some of their well-known recordings, even though they had
    fine drummers. If quantization had existed and been used when those
    recordings were made, we might well have found those songs somehow less impressive....

    This short by Rick Beato gives an example of a quantized Led Zeppelin
    groove versus the original:

    https://www.youtube.com/shorts/a7dTRgc0Mn4

    The short is an excerpt from this longer video:

    https://www.youtube.com/watch?v=hT4fFolyZYU [10 minutes]

    When Beato says that the original tempo is 170 BPM, he means 170 beats
    per minute.

    --
    Rhino

  • From Rhino@21:1/5 to shawn on Fri Apr 11 06:44:04 2025
    On 2025-04-11 5:05 AM, shawn wrote:
    On Fri, 11 Apr 2025 08:13:40 -0000 (UTC), "Adam H. Kerman"
    <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

He's done a number of those analyses that show just how much modern music
gets modified. Either pitch correction or compression of
the audio happens all the time.

Electric guitarists are particularly famous/notorious for all the
gadgets they use to alter their sound. Nowadays, it's a rare guitarist
(or bassist, for that matter) who doesn't have an extensive pedal board
with effects like distortion, delay, compression, phaser, flanger, echo,
overdrive, and so on. You really don't hear clean electric guitar
(or bass) very often anymore.

    --
    Rhino

  • From BTR1701@21:1/5 to All on Fri Apr 11 19:47:05 2025
    On Apr 11, 2025 at 2:05:52 AM PDT, "shawn" <nanoflower@notforg.m.a.i.l.com> wrote:

    On Fri, 11 Apr 2025 08:13:40 -0000 (UTC), "Adam H. Kerman"
    <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

He's done a number of those analyses that show just how much modern music
gets modified. Either pitch correction or compression of
the audio happens all the time.

I've noticed this on recordings going back to the 80s. When I play along
with film soundtracks, I often have to push the tuning slide in and make
my trumpet almost an inch shorter than it normally is, because the
recording is so sharp, indicating it's been sped up from how it actually
sounded in the studio when it was recorded.
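For a sense of scale: speeding a recording up raises its pitch logarithmically, and the resulting shift in cents (hundredths of a semitone) can be sketched like this (my own illustration, not from the post):

```python
import math

def cents_sharp(speedup):
    """Pitch shift in cents caused by playing a recording back faster."""
    return 1200 * math.log2(speedup)

# A 2% speedup leaves the whole track about a third of a semitone sharp:
print(round(cents_sharp(1.02), 1))  # 34.3
```

Even a few percent of speedup is enough to force a brass player to shorten the horn noticeably.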

  • From BTR1701@21:1/5 to All on Fri Apr 11 19:50:29 2025
    On Apr 11, 2025 at 3:38:33 AM PDT, "Rhino" <no_offline_contact@example.com> wrote:

    On 2025-04-11 4:13 AM, Adam H. Kerman wrote:
    https://www.youtube.com/watch?v=BDJF4lR3_eg

    Just as pitch-correction alters the pitch of notes to make them
    "perfect", quantization alters the placement of notes to make them rhythmically "perfect".

Rhythmic perfection is not desirable. It makes the music sound computerized, like Commander Data is playing it.

Humans are not perfect, so music that is perfect sounds inhuman. That's why my music software has a feature called "Human Playback", which purposely introduces minor inaccuracies in both rhythm and pitch, making it sound a lot less digital and cold.
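A "humanize" feature like that can be as simple as adding small random offsets back to quantized event times. A minimal sketch (purely illustrative; real software is more sophisticated):

```python
import random

def humanize(hits, jitter=0.01, seed=42):
    """Nudge each event time by a small random amount (in beats)."""
    rng = random.Random(seed)  # seeded here only so the sketch is repeatable
    return [t + rng.uniform(-jitter, jitter) for t in hits]

grid = [0.0, 0.25, 0.5, 0.75]
loosened = humanize(grid)
# Each hit now sits within +/- jitter of its grid slot instead of exactly on it.
```

Whether the offsets are purely random or modeled on real players is the interesting design question.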


  • From Rhino@21:1/5 to All on Fri Apr 11 22:14:10 2025
    On 2025-04-11 3:50 PM, BTR1701 wrote:
    On Apr 11, 2025 at 3:38:33 AM PDT, "Rhino" <no_offline_contact@example.com> wrote:

    On 2025-04-11 4:13 AM, Adam H. Kerman wrote:
    https://www.youtube.com/watch?v=BDJF4lR3_eg

    Just as pitch-correction alters the pitch of notes to make them
    "perfect", quantization alters the placement of notes to make them
    rhythmically "perfect".

    Rhythmically perfect is not desirable. It makes the music sound computerized, like Commander Data is playing it.

    Exactly. Beato illustrates that in the cited videos.

    Humans are not perfect so music that is perfect sounds inhuman. That's why my music software has a feature called "Human Playback", which purposely introduces minor inaccuracies in both rhythm and pitch, making it sound a lot less digital and cold.

Good on the developers for adding that ability. I wonder if the bits
which are "de-quantized" are chosen at random or if some other method is employed.

    --
    Rhino

  • From shawn@21:1/5 to no_offline_contact@example.com on Sat Apr 12 00:13:02 2025
    On Fri, 11 Apr 2025 22:14:10 -0400, Rhino
    <no_offline_contact@example.com> wrote:

    On 2025-04-11 3:50 PM, BTR1701 wrote:
On Apr 11, 2025 at 3:38:33 AM PDT, "Rhino" <no_offline_contact@example.com> wrote:

    On 2025-04-11 4:13 AM, Adam H. Kerman wrote:
    https://www.youtube.com/watch?v=BDJF4lR3_eg

    Just as pitch-correction alters the pitch of notes to make them
    "perfect", quantization alters the placement of notes to make them
    rhythmically "perfect".

    Rhythmically perfect is not desirable. It makes the music sound computerized,
    like Commander Data is playing it.

    Exactly. Beato illustrates that in the cited videos.

    Humans are not perfect so music that is perfect sounds inhuman. That's why my
    music software has a feature called "Human Playback", which purposely
    introduces minor inaccuracies in both rhythm and pitch, making it sound a lot
    less digital and cold.

    Good on the developers for adding that ability. I wonder if the bits
    which are "de-quantized" are chosen at random or if some other method is >employed?

They appear to be chosen at random, but it's not enough to duplicate
humans. It just makes the sound less mechanical until you look at it
with audio analysis, as you can see in some of the other analysis
videos that the Wings of Pegasus singer did. He showed how the vocals
are still being forced to fit a perfect note, with just a little
variation added in, which isn't what a human voice sounds like.
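What those analysis graphs show is essentially this: the sung frequency is forced onto the nearest equal-tempered pitch, and only a tiny wobble is allowed around it. A rough toy version of that hard snap (my own illustration):

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq):
    """Hard-quantize a frequency to the nearest equal-tempered note."""
    semitones = round(12 * math.log2(freq / A4))
    return A4 * 2 ** (semitones / 12)

# A voice drifting around 445 Hz gets pinned straight back to A4:
print(round(snap_to_semitone(445.0), 1))  # 440.0
```

A real singer's pitch glides and scoops between notes; a hard snap like this produces the flat-lined pitch traces Fil keeps pointing out.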

  • From shawn@21:1/5 to ahk@chinet.com on Sun Apr 13 08:47:04 2025
    On Fri, 11 Apr 2025 08:13:40 -0000 (UTC), "Adam H. Kerman"
    <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

Here is another perfect example of the mechanical sound of
pitch-corrected vocals. In this case, they have a trio singing, but
apparently they were recorded separately, then pitch-corrected, and then
put together to sound like they were singing together. It sounded off to
me from the very start, and then Fil starts pointing out the mistakes.

    The video is supposedly of three people singing together in a
    bathroom.
    https://www.youtube.com/watch?v=mMslQam1jRA

  • From BTR1701@21:1/5 to Adam H. Kerman on Mon Apr 14 19:07:00 2025
    On Apr 11, 2025 at 1:13:40 AM PDT, ""Adam H. Kerman"" <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

On Sunday, Chris Siddall demo'd a new AI system called Cantai that can read lyrics in a score and sing them during playback. The realism was stunning.

Up till now, you'd enter the lyrics in the score and just have to accept that while the software can mimic a human voice with reasonable accuracy, it couldn't actually read and sing the lyrics, so all you'd get would be an "ah" or "oh" on each note rather than the actual word. Now it can sing the actual words and do it not only with an amazing degree of accuracy but with actual musicality. It doesn't sound at all like a computer. Sounds like an actual person singing.

    https://www.youtube.com/watch?v=J-mTezdPLGI&t=2408s

  • From Adam H. Kerman@21:1/5 to atropos@mac.com on Mon Apr 14 21:04:54 2025
    BTR1701 <atropos@mac.com> wrote:
    On Apr 11, 2025 at 1:13:40 AM PDT, ""Adam H. Kerman"" <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

On Sunday, Chris Siddall demo'd a new AI system called Cantai that can read lyrics in a score and sing them during playback. The realism was stunning. Up till now, you'd enter the lyrics in the score and just have to accept that while the software can mimic a human voice with reasonable accuracy, it couldn't actually read and sing the lyrics, so all you'd get would be an "ah" or "oh" on each note rather than the actual word. Now it can sing the actual words and do it with not only an amazing degree of accuracy but with actual musicality. Doesn't sound at all like a computer. Sounds like an actual person singing.

    https://www.youtube.com/watch?v=J-mTezdPLGI&t=2408s

    Great! Swifties can experience a live concert and Taylor Swift won't
    have to get out of bed.

  • From moviePig@21:1/5 to All on Mon Apr 14 18:13:26 2025
    On 4/14/2025 3:07 PM, BTR1701 wrote:
    On Apr 11, 2025 at 1:13:40 AM PDT, ""Adam H. Kerman"" <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

    On Sunday, Chris Siddall demo'd a new AI system called Cantai that can read lyrics in a score and sing them during playback. The realism was stunning. Up till now, you'd enter the lyrics in the score and just have to accept that while the software can mimic a human voice with reasonable accuracy, it couldn't actually read and sing the lyrics, so all you'd get would be an "ah" or "oh" on each note rather than the actual word. Now it can sing the actual words and do it with not only an amazing degree of accuracy but with actual musicality. Doesn't sound at all like a computer. Sounds like an actual person
    singing.

    https://www.youtube.com/watch?v=J-mTezdPLGI&t=2408s

    Here's a short story from '53. Fairly quick read...

    https://elateachers.weebly.com/uploads/2/7/0/1/27012625/virtuoso.pdf

  • From anim8rfsk@21:1/5 to moviePig on Tue Apr 15 08:10:56 2025
    moviePig <nobody@nowhere.com> wrote:
    On 4/14/2025 3:07 PM, BTR1701 wrote:
    On Apr 11, 2025 at 1:13:40 AM PDT, ""Adam H. Kerman"" <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

On Sunday, Chris Siddall demo'd a new AI system called Cantai that can read lyrics in a score and sing them during playback. The realism was stunning. Up till now, you'd enter the lyrics in the score and just have to accept that while the software can mimic a human voice with reasonable accuracy, it couldn't actually read and sing the lyrics, so all you'd get would be an "ah" or "oh" on each note rather than the actual word. Now it can sing the actual words and do it with not only an amazing degree of accuracy but with actual musicality. Doesn't sound at all like a computer. Sounds like an actual person singing.

    https://www.youtube.com/watch?v=J-mTezdPLGI&t=2408s

    Here's a short story from '53. Fairly quick read...

    https://elateachers.weebly.com/uploads/2/7/0/1/27012625/virtuoso.pdf


    Sounds like Eartha Kitt.

    With a head cold.



    --
    The last thing I want to do is hurt you, but it is still on my list.

  • From anim8rfsk@21:1/5 to shawn on Tue Apr 15 08:10:57 2025
    shawn <nanoflower@notforg.m.a.i.l.com> wrote:
    On Fri, 11 Apr 2025 08:13:40 -0000 (UTC), "Adam H. Kerman"
    <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

    Here is another perfect example of the mechanical sound of pitch
    corrected vocals. In this case they have a trio singing but apparently
    they were recorded separately, then pitch corrected and then put
    together to sound like they were singing together. It sounded off to
    me from the very start and then Fil starts pointing out the mistakes.

    The video is supposedly of three people singing together in a
    bathroom.
    https://www.youtube.com/watch?v=mMslQam1jRA


    They sound like they are singing:
“the real deal with Bill McNeil!”

    --
    The last thing I want to do is hurt you, but it is still on my list.

  • From Arthur Lipscomb@21:1/5 to All on Sat Apr 19 07:57:58 2025
    On 4/11/2025 12:47 PM, BTR1701 wrote:
    On Apr 11, 2025 at 2:05:52 AM PDT, "shawn" <nanoflower@notforg.m.a.i.l.com> wrote:

    On Fri, 11 Apr 2025 08:13:40 -0000 (UTC), "Adam H. Kerman"
    <ahk@chinet.com> wrote:

    https://www.youtube.com/watch?v=BDJF4lR3_eg

He's done a number of those analyses that show just how much modern music
gets modified. Either pitch correction or compression of
the audio happens all the time.

I've noticed this on recordings going back to the 80s. When I play along
with film soundtracks, I often have to push the tuning slide in and make
my trumpet almost an inch shorter than it normally is because the
recording is so sharp, indicating it's been sped up from how it actually
sounded in the studio when it was recorded.



I grew up watching a 1982 musical called "The Pirate Movie" starring
Kristy McNichol. I think I saw an interview with her once where she
commented that she couldn't sing and it was all done with computers. I
never noticed it at the time, but now I can kind of tell there's some
auto-tune happening. But until I heard her say that, it never crossed my
mind that they had auto-tune in the 80s.

Another favorite musical is "Josie and the Pussycats." I love the movie
and I love the soundtrack. I very recently got to meet Rachael Leigh
Cook at a convention, and I asked her about her singing in the movie.
She said she can't sing at all and that it was all done with computers.
That one sort of surprised me, since I never noticed the auto-tune in
her voice.

  • From Pluted Pup@21:1/5 to shawn on Mon Apr 21 14:42:40 2025
    On 4/11/25 9:13 PM, shawn wrote:
    On Fri, 11 Apr 2025 22:14:10 -0400, Rhino
    <no_offline_contact@example.com> wrote:

    On 2025-04-11 3:50 PM, BTR1701 wrote:
    On Apr 11, 2025 at 3:38:33 AM PDT, "Rhino" <no_offline_contact@example.com>
    wrote:

    On 2025-04-11 4:13 AM, Adam H. Kerman wrote:
    https://www.youtube.com/watch?v=BDJF4lR3_eg

    Just as pitch-correction alters the pitch of notes to make them
    "perfect", quantization alters the placement of notes to make them
    rhythmically "perfect".

    Rhythmically perfect is not desirable. It makes the music sound computerized,
    like Commander Data is playing it.

    Exactly. Beato illustrates that in the cited videos.

Humans are not perfect so music that is perfect sounds inhuman. That's
why my music software has a feature called "Human Playback", which
purposely introduces minor inaccuracies in both rhythm and pitch, making
it sound a lot less digital and cold.


That is the mentality of post-production: first ruin a
recording by processing, then "fix" that broken recording
with more processing, in an attempt to conceal the bad
work already done.


    Good on the developers for adding that ability. I wonder if the bits
    which are "de-quantized" are chosen at random or if some other method is
    employed?

They appear to be chosen at random, but it's not enough to duplicate
humans. It just makes the sound less mechanical until you look at it
with audio analysis, as you can see in some of the other analysis
videos that the Wings of Pegasus singer did. He showed how the vocals
are still being forced to fit a perfect note, with just a little
variation added in, which isn't what a human voice sounds like.


    It seems that autotune / pitch correction turns the voice
    into a keyboard, which is not what singing sounds like.

Greta Van Fleet, for instance, on their first two albums
(which is all I've heard), has some songs with awful
keyboard-like vocals, entirely caused by post-production,
not by the singing as recorded, which I can only guess
sounded better.

Another song that is impossible to sing with pitch
correction is the Beatles song Yesterday: the first
vocal note does not comply with the sheet music but is
the "same note" as the succeeding note, only at a
slightly higher pitch, so pitch correction would flatten
it, making it wrong by repeating a note at the same pitch.
The sheet music, and covers based on it, wrongly sing the
first note a semitone higher, as in the movie Yesterday (2019).

As for related routine post-production like dynamic
compression: that is the muffling of the loud notes,
so the song no longer has loud notes, drums, or vocals.
Then they proudly say that the result is loud!

    In noise reduction, it is the quiet sounds that are
    hurt the most, especially with stuff like fade-outs.

    So to sum up:

    Compression muffles the loud notes,
    Noise Reduction muffles the quiet notes,
    Beat Detector muffles the rhythm,
    Pitch Correction/Autotune muffles the singer.
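The "compression muffles the loud notes" point can be seen in a bare-bones hard-knee compressor sketch (my own illustration; real compressors also smooth the gain with attack and release times):

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Attenuate whatever part of each sample rises above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # Only 1/ratio of the overshoot survives: loud peaks get squashed.
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

print(compress([0.3, 0.9]))  # quiet sample untouched, loud peak pulled down
```

After the peaks are squashed, the whole track can be turned up, which is how a "compressed" master ends up sounding louder while having less dynamic range.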


  • From Pluted Pup@21:1/5 to Arthur Lipscomb on Mon Apr 21 14:57:28 2025
    On 4/19/25 7:57 AM, Arthur Lipscomb wrote:

I grew up watching a 1982 musical called "The Pirate Movie" starring
Kristy McNichol. I think I saw an interview with her once where she
commented that she couldn't sing and it was all done with computers. I
never noticed it at the time, but now I can kind of tell there's some
auto-tune happening. But until I heard her say that, it never crossed my
mind that they had auto-tune in the 80s.

Another favorite musical is "Josie and the Pussycats." I love the movie
and I love the soundtrack. I very recently got to meet Rachael Leigh
Cook at a convention, and I asked her about her singing in the movie.
She said she can't sing at all and that it was all done with computers.
That one sort of surprised me, since I never noticed the auto-tune in
her voice.


That could just mean they didn't sing the song; someone else
sang it.
