• Re: Should road rules depend on the car?

    From Jon Ribbens@21:1/5 to J Newman on Tue Jan 28 23:04:16 2025
    On 2025-01-28, J Newman <jenniferkatenewman@gmail.com> wrote:
    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart road
    that can inform them of possible oncoming traffic around a blind corner. Should the double white line on the road be an impediment to them
    overtaking around a blind corner, if technology reveals that there is no oncoming car and the AI is smart/responsive enough to negotiate a sharp
    turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested on
    AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially if
    the driver qualified and holds a license that precludes them from manual control of the car?

    Perhaps some questions worth thinking about given that AI systems are surpassing humans in more and more ways.

    Driving cars, however, is certainly not one of them at the current time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Scott@21:1/5 to J Newman on Wed Jan 29 08:27:56 2025
    On 28/01/2025 22:29, J Newman wrote:
    Who is liable if an AI driven car gets into an accident

    May I suggest new offences: programming without due care and attention
    and dangerous programming?

    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to J Newman on Wed Jan 29 10:12:05 2025
    On Wed, 29 Jan 2025 00:29:15 +0200
    J Newman <jenniferkatenewman@gmail.com> wrote:

    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart
    road that can inform them of possible oncoming traffic around a blind
    corner. Should the double white line on the road be an impediment to
    them overtaking around a blind corner, if technology reveals that
    there is no oncoming car and the AI is smart/responsive enough to
    negotiate a sharp turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested
    on AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially
    if the driver qualified and holds a license that precludes them from
    manual control of the car?

    Perhaps some questions worth thinking about given that AI systems are surpassing humans in more and more ways.


    One of the things that helps road safety is the knowledge of what rules
    apply to both you and all other vehicles, and the fact that they are
    the same. Any variation would mean that everyone had to learn what
    rules applied to all other vehicles, and that is an unsupportable burden
    to put on everybody.
    So far, I am not convinced that AI systems should be let loose on the
    public roads in charge of driving. Little deliveries around town, yes,
    but not on the wider road network.

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Davey on Wed Jan 29 11:56:45 2025
    On Wed, 29 Jan 2025 10:12:05 +0000, Davey wrote:

    So far, I am not convinced that AI systems should be let loose on the
    public roads in charge of driving. Little deliveries around town, yes,
    but not on the wider road network.

    We will end up building the environment to the AI. Basically what we have
    done with trains.

    I can easily see roads where AI is not allowed (or conversely roads where
    only AI is allowed).

    Motorways would be the logical starting point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to jethro_uk@hotmailbin.com on Wed Jan 29 12:01:02 2025
    On Wed, 29 Jan 2025 11:56:45 -0000 (UTC)
    Jethro_uk <jethro_uk@hotmailbin.com> wrote:

    On Wed, 29 Jan 2025 10:12:05 +0000, Davey wrote:

    So far, I am not convinced that AI systems should be let loose on
    the public roads in charge of driving. Little deliveries around
    town, yes, but not on the wider road network.

    We will end up building the environment to the AI. Basically what we
    have done with trains.

    I can easily see roads where AI is not allowed (or conversely roads
    where only AI is allowed).

    Motorways would be the logical starting point.


    I am reminded of the scene in I, Robot, where the driver is almost
    sanctioned for driving a car manually.

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Martin Harran on Wed Jan 29 13:50:52 2025
    On 2025-01-29, Martin Harran <martinharran@gmail.com> wrote:
    On Tue, 28 Jan 2025 23:04:16 -0000 (UTC), Jon Ribbens
    <jon+usenet@unequivocal.eu> wrote:
    On 2025-01-28, J Newman <jenniferkatenewman@gmail.com> wrote:
    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart road
    that can inform them of possible oncoming traffic around a blind corner. >>> Should the double white line on the road be an impediment to them
    overtaking around a blind corner, if technology reveals that there is no >>> oncoming car and the AI is smart/responsive enough to negotiate a sharp
    turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested on
    AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially if
    the driver qualified and holds a license that precludes them from manual >>> control of the car?

    Perhaps some questions worth thinking about given that AI systems are
    surpassing humans in more and more ways.

    Driving cars, however, is certainly not one of them at the current time.

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    I wouldn't call any of those things "AI". And even those relatively
    simple things they frequently get wrong. I would be surprised if
    "full self-drive" arrives as an everyday general-purpose consumer
    feature within the next 20 years.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Davey on Wed Jan 29 14:14:13 2025
    On 29/01/2025 10:12 AM, Davey wrote:
    On Wed, 29 Jan 2025 00:29:15 +0200
    J Newman <jenniferkatenewman@gmail.com> wrote:

    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart
    road that can inform them of possible oncoming traffic around a blind
    corner. Should the double white line on the road be an impediment to
    them overtaking around a blind corner, if technology reveals that
    there is no oncoming car and the AI is smart/responsive enough to
    negotiate a sharp turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested
    on AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially
    if the driver qualified and holds a license that precludes them from
    manual control of the car?

    Perhaps some questions worth thinking about given that AI systems are
    surpassing humans in more and more ways.


    One of the things that helps road safety is the knowledge of what rules
    apply to both you and all other vehicles, and the fact that they are
    the same. Any variation would mean that everyone had to learn what
    rules applied to all other vehicles, and that is an unsupportable burden
    to put on everybody.
    So far, I am not convinced that AI systems should be let loose on the
    public roads in charge of driving. Little deliveries around town, yes,
    but not on the wider road network.

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to J Newman on Wed Jan 29 14:10:48 2025
    On 28/01/2025 10:29 PM, J Newman wrote:

    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart road
    that can inform them of possible oncoming traffic around a blind corner. Should the double white line on the road be an impediment to them
    overtaking around a blind corner, if technology reveals that there is no oncoming car and the AI is smart/responsive enough to negotiate a sharp
    turn at speed without skidding?

    That sounds very like the way that railways are operated, in particular,
    with no requirement for the driver (human or otherwise) to be able to
    stop within the distance which can be *seen* to be clear.

    [ ... ]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to JNugent on Wed Jan 29 20:06:40 2025
    JNugent wrote:

    Recently in the USA, I was startled to see what I can only describe as several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    Some launched in the UK a few years ago, are they still going?

    <https://www.starship.xyz>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From GB@21:1/5 to JNugent on Wed Jan 29 20:23:37 2025
    On 29/01/2025 14:14, JNugent wrote:

    Recently in the USA, I was startled to see what I can only describe as several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    This reminds me of Bluestone 42, where the mine clearance team deploy
    their robot, and some Afghans turn up on a motorbike and steal it!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From GB@21:1/5 to Jon Ribbens on Wed Jan 29 20:18:14 2025
    On 29/01/2025 13:50, Jon Ribbens wrote:
    On 2025-01-29, Martin Harran <martinharran@gmail.com> wrote:

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    I wouldn't call any of those things "AI". And even those relatively
    simple things they frequently get wrong. I would be surprised if
    "full self-drive" arrives as an everyday general-purpose consumer
    feature within the next 20 years.


    Exactly so. Tesla has been offering "Full Self Driving" for years, but
    it's just Musk hype. It's nowhere near at the moment.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Les. Hayward@21:1/5 to Martin Harran on Wed Jan 29 19:56:51 2025
    On 29/01/2025 13:09, Martin Harran wrote:

    Driving cars, however, is certainly not one of them at the current time.

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    It seems that even the most mundane bit of software these days is
    described as AI! It must have been back in the 80's that a friend of
    mine was working on software for automatic landing and take-off for
    airliners. Nothing either artificial or intelligent about it!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to Les. Hayward on Wed Jan 29 23:49:38 2025
    On Wed, 29 Jan 2025 19:56:51 +0000
    "Les. Hayward" <les@nospam.invalid> wrote:

    On 29/01/2025 13:09, Martin Harran wrote:

    Driving cars, however, is certainly not one of them at the current
    time.

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too
    close to a vehicle in front, lane-keep assist.

    It seems that even the most mundane bit of software these days is
    described as AI! It must have been back in the 80's that a friend of
    mine was working on software for automatic landing and take-off for airliners. Nothing either artificial or intelligent about it!


    Indeed. There was then a battle between the US and the UK over whose
    system of automatic aircraft landing would be adopted by the industry.
    This was in the mid '70s. I happened to take a flight from London to
    Stuttgart at the time with BA, we took off in fog, we flew in fog, and
    landed in fog. While we were taxiing to the terminal, the pilot
    apologised for the 'sharp' landing, saying that the automatic system
    tended to go for the secure landing rather than the comfortable
    version. Nobody complained.
    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to J Newman on Thu Jan 30 08:59:06 2025
    On 28/01/2025 22:29, J Newman wrote:

    For AI driven cars, should a higher speed limit apply to them?

    What would happen if they exceeded a speed limit ?
    (Or crossed a double white line, etc.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to JNugent on Thu Jan 30 09:41:30 2025
    On 29/01/2025 14:14, JNugent wrote:
    On 29/01/2025 10:12 AM, Davey wrote:
    On Wed, 29 Jan 2025 00:29:15 +0200
    J Newman <jenniferkatenewman@gmail.com> wrote:

    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart
    road that can inform them of possible oncoming traffic around a blind
    corner. Should the double white line on the road be an impediment to
    them overtaking around a blind corner, if technology reveals that
    there is no oncoming car and the AI is smart/responsive enough to
    negotiate a sharp turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested
    on AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially
    if the driver qualified and holds a license that precludes them from
    manual control of the car?

    Perhaps some questions worth thinking about given that AI systems are
    surpassing humans in more and more ways.


    One of the things that helps road safety is the knowledge of what rules
    apply to both you and all other vehicles, and the fact that they are
    the same. Any variation would mean that everyone had to learn what
    rules applied to all other vehicles, and that is an unsupportable burden
    to put on everybody.
    So far, I am not convinced that AI systems should be let loose on the
    public roads in charge of driving. Little deliveries around town, yes,
    but not on the wider road network.

    Recently in the USA, I was startled to see what I can only describe as several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    "In April 2018, Starship launched its autonomous delivery service in
    Milton Keynes in partnership with Co-op and Tesco. In March 2020,
    Starship became the first robot delivery service to operate in a British
    town center with the rollout of its service in Central Milton Keynes

    https://en.wikipedia.org/wiki/Starship_Technologies#:~:text=In%20April%202018%2C%20Starship%20launched,service%20in%20Central%20Milton%20Keynes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Nick Finnigan on Thu Jan 30 12:20:55 2025
    On 30/01/2025 08:59, Nick Finnigan wrote:
    On 28/01/2025 22:29, J Newman wrote:

    For AI driven cars, should a higher speed limit apply to them?

     What would happen if they exceeded a speed limit ?
     (Or crossed a double white line, etc.)

    You'd make sure the signs and the data-base were accurate, and repaint
    the lines.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roland Perry@21:1/5 to All on Thu Jan 30 12:41:17 2025
    In message <slrnvpiojg.4rm.jon+usenet@raven.unequivocal.eu>, at 23:04:16
    on Tue, 28 Jan 2025, Jon Ribbens <jon+usenet@unequivocal.eu> remarked:
    On 2025-01-28, J Newman <jenniferkatenewman@gmail.com> wrote:
    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart road
    that can inform them of possible oncoming traffic around a blind corner.
    Should the double white line on the road be an impediment to them
    overtaking around a blind corner, if technology reveals that there is no
    oncoming car and the AI is smart/responsive enough to negotiate a sharp
    turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested on
    AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially if
    the driver qualified and holds a license that precludes them from manual
    control of the car?

    Perhaps some questions worth thinking about given that AI systems are
    surpassing humans in more and more ways.

    Driving cars, however, is certainly not one of them at the current time.

    I agree. They can't even follow white lines most of the time, because
    they are either worn out, covered in mud/snow, or only present on one
    side of the car.
    --
    Roland Perry

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roland Perry@21:1/5 to All on Thu Jan 30 12:44:23 2025
    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as >several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    Another unrelated problem is that they ping the buyer from the pavement
    outside what might be quite a big apartment block, and if they aren't
    attended to with (iirc) ten minutes, take the shopping back *and still
    charge for it*.
    --
    Roland Perry

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Thu Jan 30 13:04:29 2025
    On Thu, 30 Jan 2025 12:20:55 +0000, Norman Wells wrote:

    On 30/01/2025 08:59, Nick Finnigan wrote:
    On 28/01/2025 22:29, J Newman wrote:

    For AI driven cars, should a higher speed limit apply to them?

     What would happen if they exceeded a speed limit ?
     (Or crossed a double white line, etc.)

    You'd make sure the signs and the data-base were accurate,

    Mathematically impossible.

    and repaint
    the lines.

    Much better idea.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Roland Perry on Thu Jan 30 13:04:59 2025
    On 30/01/2025 12:41, Roland Perry wrote:
    In message <slrnvpiojg.4rm.jon+usenet@raven.unequivocal.eu>, at 23:04:16
    on Tue, 28 Jan 2025, Jon Ribbens <jon+usenet@unequivocal.eu> remarked:
    On 2025-01-28, J Newman <jenniferkatenewman@gmail.com> wrote:
    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart road
    that can inform them of possible oncoming traffic around a blind corner. >>> Should the double white line on the road be an impediment to them
    overtaking around a blind corner, if technology reveals that there is no >>> oncoming car and the AI is smart/responsive enough to negotiate a sharp
    turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested on
    AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially if
    the driver qualified and holds a license that precludes them from manual >>> control of the car?

    Perhaps some questions worth thinking about given that AI systems are
    surpassing humans in more and more ways.

    Driving cars, however, is certainly not one of them at the current time.

    I agree. They can't even follow white lines most of the time, because
    they are either worn out, covered in mud/snow, or only present on one
    side of the car.

    Humans have those problems too, I think.

    The problem isn't with intelligence, whether human or artificial, but
    with sensors and the data-base. Those in autonomous vehicles already
    surpass any human, and have the obvious capacity to improve even
    further, which humans don't. They don't get tired either.

    https://www.newscientist.com/article/2435896-driverless-cars-are-mostly-safer-than-humans-but-worse-at-turns/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Andy Burns on Thu Jan 30 14:37:24 2025
    On 29/01/2025 08:06 PM, Andy Burns wrote:
    JNugent wrote:

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x
    10") on wheels, carrying goods for delivery (at moderate speed) along
    the footway in a university area.

    Some launched in the UK a few years ago, are they still going?

    <https://www.starship.xyz>

    I haven't heard of those before!

    The ones I saw in Texas weren't as nicely-designed as that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to All on Thu Jan 30 14:38:21 2025
    On 29/01/2025 08:23 PM, GB wrote:
    On 29/01/2025 14:14, JNugent wrote:

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x
    10") on wheels, carrying goods for delivery (at moderate speed) along
    the footway in a university area.

    This reminds me of Bluestone 42, where the mine clearance team deploy
    their robot, and some Afghans turn up on a motorbike and steal it!

    :-)

    ISTR a scene in "Breaking Bad" where something a bit similar happened.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Roland Perry on Thu Jan 30 14:40:25 2025
    On 30/01/2025 12:44 PM, Roland Perry wrote:

    JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x
    10") on wheels, carrying goods for delivery (at moderate speed) along
    the footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    Another unrelated problem is that they ping the buyer from the pavement outside what might be quite a big apartment block, and if they aren't attended to with (iirc) ten minutes, take the shopping back *and still
    charge for it*.

    "First world problems", as some might say!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Norman Wells on Thu Jan 30 14:39:16 2025
    On 30/01/2025 09:41 AM, Norman Wells wrote:
    On 29/01/2025 14:14, JNugent wrote:
    On 29/01/2025 10:12 AM, Davey wrote:
    On Wed, 29 Jan 2025 00:29:15 +0200
    J Newman <jenniferkatenewman@gmail.com> wrote:

    For example, imagine an AI driven car with near instant reflexes,
    steering control, automatic emergency brakes, perhaps even a smart
    road that can inform them of possible oncoming traffic around a blind
    corner. Should the double white line on the road be an impediment to
    them overtaking around a blind corner, if technology reveals that
    there is no oncoming car and the AI is smart/responsive enough to
    negotiate a sharp turn at speed without skidding?

    For AI driven cars, should a higher speed limit apply to them?

    What about if someone passes their driving test, and was only tested
    on AI driven cars and their license is only valid for AI driven cars.
    Should more lax road conditions apply to them, than people who hold
    licenses that would allow them to drive non-autonomous vehicles?

    Who is liable if an AI driven car gets into an accident, especially
    if the driver qualified and holds a license that precludes them from
    manual control of the car?

    Perhaps some questions worth thinking about given that AI systems are
    surpassing humans in more and more ways.


    One of the things that helps road safety is the knowledge of what rules
    apply to both you and all other vehicles, and the fact that they are
    the same. Any variation would mean that everyone had to learn what
    rules applied to all other vehicles, and that is an unsupportable burden >>> to put on everybody.
    So far, I am not convinced that AI systems should be let loose on the
    public roads in charge of driving. Little deliveries around town, yes,
    but not on the wider road network.

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x
    10") on wheels, carrying goods for delivery (at moderate speed) along
    the footway in a university area.

    "In April 2018, Starship launched its autonomous delivery service in
    Milton Keynes in partnership with Co-op and Tesco. In March 2020,
    Starship became the first robot delivery service to operate in a British
    town center with the rollout of its service in Central Milton Keynes

    https://en.wikipedia.org/wiki/Starship_Technologies#:~:text=In%20April%202018%2C%20Starship%20launched,service%20in%20Central%20Milton%20Keynes.

    Yes - GB posted something to the same effect.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Thu Jan 30 14:39:48 2025
    On 30/01/2025 13:04, Jethro_uk wrote:
    On Thu, 30 Jan 2025 12:20:55 +0000, Norman Wells wrote:
    On 30/01/2025 08:59, Nick Finnigan wrote:
    On 28/01/2025 22:29, J Newman wrote:

    For AI driven cars, should a higher speed limit apply to them?

     What would happen if they exceeded a speed limit ?
     (Or crossed a double white line, etc.)

    You'd make sure the signs and the data-base were accurate,

    Mathematically impossible.

    Perfect is the enemy of good.

    and repaint the lines.

    Much better idea.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to jethro_uk@hotmailbin.com on Thu Jan 30 14:35:09 2025
    On Thu, 30 Jan 2025 13:04:29 -0000 (UTC), Jethro_uk
    <jethro_uk@hotmailbin.com> wrote:

    On Thu, 30 Jan 2025 12:20:55 +0000, Norman Wells wrote:

    On 30/01/2025 08:59, Nick Finnigan wrote:
    On 28/01/2025 22:29, J Newman wrote:

    For AI driven cars, should a higher speed limit apply to them?

    What would happen if they exceeded a speed limit ?
    (Or crossed a double white line, etc.)

    You'd make sure the signs and the data-base were accurate,

    Mathematically impossible.

    The word "mathematically" is somewhat superflous here.

    and repaint
    the lines.

    Much better idea.

    Unless there were no lines to begin with.

    I'd be interested in knowing how autonomous vehicles cope with situations
    like this:

    https://thumbs.dreamstime.com/b/waiting-red-light-roadworks-controlled-traffic-lights-somerset-england-no-traffic-approaching-opposite-250969607.jpg?w=360
    or https://tinyurl.com/4ywavjue

    There's a temporary traffic light there, which I'd expect an autonomous
    vehicle to be capable of recognising and obeying. But would it understand
    the meaning of the red sign? And proceeding when the light is green means crossing a double white line, which an autonomous vehicle would know is prohibited. Would it correctly interpret the signage to indicate that, in
    this case, that prohibition is suspended?

    The reality is that humans don't just slavishly follow signs and road
    markings. We use context to tell us when we can, under certain
    circumstances, infer the presence of markings that we can't see (for
    example, a faded stop line at a normal light-controlled junction), or infer that a visible marking can be ignored (eg, the double white lines in that photo). A competant autonomous vehicle has to be able to do the same.

    It would not be acceptable for an autonomous vehicle to jump a red light because it can't see the stop line. Nor would it be acceptable for it to
    refuse to proceed in circumstances where normal rules have been temporarily suspended. And it doesn't just have to be able to obey signs and signals,
    both permanent and temporary, it also has to be able to obey instructions
    given by a police officer. If it can't do any of those without human input, then it isn't fully autonomous.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Mark Goodge on Thu Jan 30 17:59:33 2025
    On Thu, 30 Jan 2025 14:35:09 +0000, Mark Goodge wrote:

    The reality is that humans don't just slavishly follow signs and road markings.

    The regular parade of drivers shaving their vehicles under low bridges
    whilst driving into rivers because the sat nav said does make me suggest
    that is a bold claim.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Thu Jan 30 20:00:59 2025
    On 30/01/2025 14:35, Mark Goodge wrote:
    On Thu, 30 Jan 2025 13:04:29 -0000 (UTC), Jethro_uk <jethro_uk@hotmailbin.com> wrote:

    On Thu, 30 Jan 2025 12:20:55 +0000, Norman Wells wrote:

    On 30/01/2025 08:59, Nick Finnigan wrote:
    On 28/01/2025 22:29, J Newman wrote:

    For AI driven cars, should a higher speed limit apply to them?

     What would happen if they exceeded a speed limit ?
     (Or crossed a double white line, etc.)

    You'd make sure the signs and the data-base were accurate,

    Mathematically impossible.

    The word "mathematically" is somewhat superflous here.

    As is the word 'somewhat' here.

    and repaint
    the lines.

    Much better idea.

    Unless there were no lines to begin with.

    I'd be interested in knowing how autonomous vehicles cope with situations like this:

    https://thumbs.dreamstime.com/b/waiting-red-light-roadworks-controlled-traffic-lights-somerset-england-no-traffic-approaching-opposite-250969607.jpg?w=360
    or https://tinyurl.com/4ywavjue

    There's a temporary traffic light there, which I'd expect an autonomous vehicle to be capable of recognising and obeying. But would it understand
    the meaning of the red sign? And proceeding when the light is green means crossing a double white line, which an autonomous vehicle would know is prohibited. Would it correctly interpret the signage to indicate that, in this case, that prohibition is suspended?

    It's just a matter of programming. If a human can work it out
    logically, then so too can a computer. Control of an autonomous vehicle
    is not just a 10-line program in Basic.

    The reality is that humans don't just slavishly follow signs and road markings. We use context to tell us when we can, under certain
    circumstances, infer the presence of markings that we can't see (for
    example, a faded stop line at a normal light-controlled junction), or infer that a visible marking can be ignored (eg, the double white lines in that photo). A competant autonomous vehicle has to be able to do the same.

    Why do you think an autonomous vehicle is not capable of making the same inference?

    It would not be acceptable for an autonomous vehicle to jump a red light because it can't see the stop line. Nor would it be acceptable for it to refuse to proceed in circumstances where normal rules have been temporarily suspended.

    Nor is it for a human. But they do.

    And it doesn't just have to be able to obey signs and signals,
    both permanent and temporary, it also has to be able to obey instructions given by a police officer. If it can't do any of those without human input, then it isn't fully autonomous.

    It seems you're only now considering obvious potential pitfalls that I'm
    sure were considered several years ago, and have been resolved but unfortunately without telling you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Thu Jan 30 21:38:50 2025
    On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 30/01/2025 14:35, Mark Goodge wrote:

    The reality is that humans don't just slavishly follow signs and road
    markings. We use context to tell us when we can, under certain
    circumstances, infer the presence of markings that we can't see (for
    example, a faded stop line at a normal light-controlled junction), or infer >> that a visible marking can be ignored (eg, the double white lines in that
    photo). A competant autonomous vehicle has to be able to do the same.

    Why do you think an autonomous vehicle is not capable of making the same >inference?

    At the moment, none of them are. I'm not disputing that they will,
    eventually, be capable of it. The question is how long it will take to get there.

    And it doesn't just have to be able to obey signs and signals,
    both permanent and temporary, it also has to be able to obey instructions
    given by a police officer. If it can't do any of those without human input, >> then it isn't fully autonomous.

    It seems you're only now considering obvious potential pitfalls that I'm
    sure were considered several years ago, and have been resolved but >unfortunately without telling you.

    They haven't been resolved yet. Current trials of autonomous vehicles are
    still based on tightly defined scenarios on routes already known to the vehicle. Being able to go anywhere, any time is still some way off.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Thu Jan 30 23:04:39 2025
    On 30/01/2025 21:38, Mark Goodge wrote:
    On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 30/01/2025 14:35, Mark Goodge wrote:

    The reality is that humans don't just slavishly follow signs and road
    markings. We use context to tell us when we can, under certain
    circumstances, infer the presence of markings that we can't see (for
    example, a faded stop line at a normal light-controlled junction), or infer >>> that a visible marking can be ignored (eg, the double white lines in that >>> photo). A competant autonomous vehicle has to be able to do the same.

    Why do you think an autonomous vehicle is not capable of making the same
    inference?

    At the moment, none of them are.

    Really? What have you got that supports that apart from your pre-formed opinion?

    I'm not disputing that they will,
    eventually, be capable of it. The question is how long it will take to get there.

    And it doesn't just have to be able to obey signs and signals,
    both permanent and temporary, it also has to be able to obey instructions >>> given by a police officer. If it can't do any of those without human input, >>> then it isn't fully autonomous.

    It seems you're only now considering obvious potential pitfalls that I'm
    sure were considered several years ago, and have been resolved but
    unfortunately without telling you.

    They haven't been resolved yet. Current trials of autonomous vehicles are still based on tightly defined scenarios on routes already known to the vehicle. Being able to go anywhere, any time is still some way off.

    I think you are way behind the times on that.

    "New research shows that autonomous vehicles outperform human-driven ones.

    Self-driving cars are generally safer than those that are driven by
    humans, according to a new study."

    https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From RJH@21:1/5 to Norman Wells on Fri Jan 31 06:59:40 2025
    On 30 Jan 2025 at 23:04:39 GMT, Norman Wells wrote:

    <div id="editor" contenteditable="false">> They haven't been resolved yet. Current trials of autonomous vehicles are
    still based on tightly defined scenarios on routes already known to the
    vehicle. Being able to go anywhere, any time is still some way off.

    I think you are way behind the times on that.

    Not really. Even from your own link:

    'The research found that autonomous or self-driving vehicles had lower
    accident rates than humans in work zones, traffic events, and “pre-accident movements” like slowing down and proceeding straight.

    Where the self-driving cars struggled, however, was if they drove at sunrise
    or sunset on roads with many turns'


    "New research shows that autonomous vehicles outperform human-driven ones.

    Self-driving cars are generally safer than those that are driven by
    humans, according to a new study."

    https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows

    That said, I tend to agree with the headline - self-driving cars are safer. It's just that we seem to accept and live with human frailties such as tiredness, being drunk, stupidity and recklessness (for example) far more readily than computer error.


    --
    Cheers, Rob, Sheffield UK

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to RJH on Fri Jan 31 08:23:21 2025
    On 31/01/2025 06:59, RJH wrote:
    On 30 Jan 2025 at 23:04:39 GMT, Norman Wells wrote:

    <div id="editor" contenteditable="false">> They haven't been resolved yet. >> Current trials of autonomous vehicles are
    still based on tightly defined scenarios on routes already known to the
    vehicle. Being able to go anywhere, any time is still some way off.

    I think you are way behind the times on that.

    Not really. Even from your own link:

    'The research found that autonomous or self-driving vehicles had lower accident rates than humans in work zones, traffic events, and “pre-accident movements” like slowing down and proceeding straight.

    Where the self-driving cars struggled, however, was if they drove at sunrise or sunset on roads with many turns'

    So too do humans.

    But autonomous vehicle systems at least have the capacity to improve.
    And will.

    "New research shows that autonomous vehicles outperform human-driven ones. >>
    Self-driving cars are generally safer than those that are driven by
    humans, according to a new study."

    https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows

    That said, I tend to agree with the headline - self-driving cars are safer. It's just that we seem to accept and live with human frailties such as tiredness, being drunk, stupidity and recklessness (for example) far more readily than computer error.

    How very odd, luddite and irrational.

    We rely absolutely on computers in so many aspects of life. It's very
    strange if we exclude driving with all the innate human frailties you list.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Fri Jan 31 09:12:31 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m02es8Fu9dcU2@mid.individual.net...

    On 30/01/2025 21:38, Mark Goodge wrote:

    On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>

    snip

    It seems you're only now considering obvious potential pitfalls that I'm >>> sure were considered several years ago, and have been resolved but
    unfortunately without telling you.

    They haven't been resolved yet. Current trials of autonomous vehicles are
    still based on tightly defined scenarios on routes already known to the
    vehicle. Being able to go anywhere, any time is still some way off.

    I think you are way behind the times on that.

    "New research shows that autonomous vehicles outperform human-driven ones.

    Self-driving cars are generally safer than those that are driven by humans, according
    to a new study."

    quote:

    Where the self-driving cars struggled, however, was if they drove at sunrise
    or sunset on roads with many turns, the report, published in the journal Nature,
    found. That's because low light conditions confused the cars' sensors as did the " complicated conditions" that come when a car is turning.

    unquote


    https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows

    So they're OK driving in straght ines (as I believe has already been alluded
    to )but they have a bit of a problem with "more complicated" stuff; if for instance they have to say to turn corners. i.e change their minds.

    So here we have a robot which is supposedly behaving just like a human
    but which is actually incapable of changing its mind.

    That's sounds strangely familiar, somehow,

    So it's maybe no wonder that you yourself are so enthusiastic about
    such projects.



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to billy bookcase on Fri Jan 31 09:54:35 2025
    On Fri, 31 Jan 2025 09:12:31 -0000, "billy bookcase" <billy@anon.com> wrote:

    So they're OK driving in straght ines (as I believe has already been alluded >to )but they have a bit of a problem with "more complicated" stuff; if for >instance they have to say to turn corners. i.e change their minds.

    So here we have a robot which is supposedly behaving just like a human
    but which is actually incapable of changing its mind.

    That's sounds strangely familiar, somehow,

    So it's maybe no wonder that you yourself are so enthusiastic about
    such projects.

    That would explain a lot.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Fri Jan 31 09:38:49 2025
    On 31/01/2025 09:12, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m02es8Fu9dcU2@mid.individual.net...

    On 30/01/2025 21:38, Mark Goodge wrote:

    On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>

    snip

    It seems you're only now considering obvious potential pitfalls that I'm >>>> sure were considered several years ago, and have been resolved but
    unfortunately without telling you.

    They haven't been resolved yet. Current trials of autonomous vehicles are >>> still based on tightly defined scenarios on routes already known to the
    vehicle. Being able to go anywhere, any time is still some way off.

    I think you are way behind the times on that.

    "New research shows that autonomous vehicles outperform human-driven ones. >>
    Self-driving cars are generally safer than those that are driven by humans, according
    to a new study."

    quote:

    Where the self-driving cars struggled, however, was if they drove at sunrise or sunset on roads with many turns, the report, published in the journal Nature,
    found.

    Don't you too?

    The question that isn't answered is whether they were more or less safe
    than human drivers in such conditions.

    That's because low light conditions confused the cars' sensors as did
    the " complicated conditions" that come when a car is turning.

    unquote

    Indeed. But electronic sensors are capable of improvement and
    refinement. You and I aren't.

    https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows

    So they're OK driving in straght ines (as I believe has already been alluded to )but they have a bit of a problem with "more complicated" stuff; if for instance they have to say to turn corners. i.e change their minds.

    I think that's a bit of an exaggeration from what was actually reported,
    namely 'if they drove at sunrise or sunset on roads with many turns'.

    Where you and I would probably struggle too.

    Especially if tired, inebriated, distracted, inattentive or blinded by
    the light.

    So here we have a robot which is supposedly behaving just like a human
    but which is actually incapable of changing its mind.

    That's not actually how they work. They monitor and assess all inputs
    from the sensors many hundreds of times a second, thus 'changing their
    minds' constantly, and adapting far quicker than any human to whatever transpires.

    That's sounds strangely familiar, somehow,

    So it's maybe no wonder that you yourself are so enthusiastic about
    such projects.

    Just giving you the facts.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Martin Harran on Fri Jan 31 11:05:24 2025
    "Martin Harran" <martinharran@gmail.com> wrote in message news:v77ppjp4vfj9un6rhuceoq8i9m4fr07ne2@4ax.com...
    On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
    wrote:

    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as >>>several small self-powered and self-directed boxes (c. 18" x 12" x 10") >>>on wheels, carrying goods for delivery (at moderate speed) along the >>>footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    I would not have thought that to be an insurmountable problem.


    Presumably they won't be able to deliver any higher than the fourth
    floor in most blocks of flats either.



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Fri Jan 31 10:56:46 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m03k1aF4mnkU3@mid.individual.net...

    That's not actually how they work. They monitor and assess all
    inputs from the sensor many hundreds of times a second, thus
    'changing their minds' constantly, and adapting far quicker than
    any human to whatever transpires.

    Apologies for the complete change of direction Norman, (it just happened
    to be you as the thought just occurred to me) but assuming these cars can
    in fact be made safe to drive anywhere -

    What is to stop pedestrians just stepping out in front of these cars and bringing them to complete halt;either to actually cross the road or just because hey feel like it; and what are the likely consequences for
    journey times and traffic speeds.

    I've already researched theft from these Amazon things which is apparently already on the rise in California

    https://www.autoweek.com/news/a44839987/delivery-robots-being-robbed/

    Despite their being fitted with sirens!

    First there were car alarms and now there are Amazon sirens !

    Something else to look forward to.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Les. Hayward@21:1/5 to Norman Wells on Fri Jan 31 11:58:10 2025
    On 31/01/2025 08:23, Norman Wells wrote:


    How very odd, luddite and irrational.

    We rely absolutely on computers in so many aspects of life.  It's very strange if we exclude driving with all the innate human frailties you list.


    As someone who has made a good living with computers large & small,I
    have a liking for them. I do NOT though like systems which are
    unnecessary, which seek to control me, or are added to make weight to
    products. For example, I prefer to decide when to switch on my lights
    manually!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to billy bookcase on Fri Jan 31 12:12:03 2025
    On Fri, 31 Jan 2025 10:56:46 +0000, billy bookcase wrote:

    What is to stop pedestrians just stepping out in front of these cars and bringing them to complete halt;either to actually cross the road or just because hey feel like it;

    Or to rob/hijack it ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Fri Jan 31 12:10:51 2025
    On 31/01/2025 10:56, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m03k1aF4mnkU3@mid.individual.net...

    That's not actually how they work. They monitor and assess all
    inputs from the sensor many hundreds of times a second, thus
    'changing their minds' constantly, and adapting far quicker than
    any human to whatever transpires.

    Apologies for the complete change of direction Norman, (it just happened
    to be you as the thought just occurred to me) but assuming these cars can
    in fact be made safe to drive anywhere -

    What is to stop pedestrians just stepping out in front of these cars and bringing them to complete halt;either to actually cross the road or just because hey feel like it;

    Well, nothing. Just as there isn't with a manually driven car. It's
    not generally allowed simply to plough into them however much they
    deserve it.

    But the same laws of physics apply. The reaction time of an autonomous
    vehicle may be a bit shorter but, if the pedestrians don't leave the
    required stopping distance, they'll be run over just the same. And it
    will demonstrably be their own silly fault.

    and what are the likely consequences for
    journey times and traffic speeds.

    If it were up to me, I'd program the things just to carry on after any
    such collisions. After all, why should I be delayed by others' proven stupidity? But, even though nothing could have been done, I accept some
    silly law or other may forbid that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Norman Wells on Fri Jan 31 12:22:55 2025
    Norman Wells wrote:

    billy bookcase wrote:

    What is to stop pedestrians just stepping out in front of these cars and
    bringing them to complete halt;either to actually cross the road or just
    because  hey  feel like it;

    Well, nothing.  Just as there isn't with a manually driven car.

    It generally takes teenage boy levels of bravado to run across
    fast-moving traffic, once OAPs learn that autonomous cars will always
    stop, wrinkly-chicken might become their new pastime ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to billy bookcase on Fri Jan 31 12:16:06 2025
    billy bookcase wrote:

    assuming these cars can
    in fact be made safe to drive anywhere -

    What is to stop pedestrians just stepping out in front of these cars and bringing them to complete halt;either to actually cross the road or just because hey feel like it;

    In San Francisco it's been seen as a 'jolly jape' to place traffic cones
    on the bonnet of Waymo and other types of self-driving cars for the past
    couple of years.

    Also there are videos showing disorganised herds of autonomous cars,
    creeping ultra-cautiously around parking lots, some trying to park,
    others trying to get out, gently honking at each other and generally
    getting nowhere.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Martin Harran on Fri Jan 31 12:42:46 2025
    On 31/01/2025 09:56 AM, Martin Harran wrote:
    On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
    wrote:

    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    I would not have thought that to be an insurmountable problem.


    Another unrelated problem is that they ping the buyer from the pavement
    outside what might be quite a big apartment block, and if they aren't
    attended to with (iirc) ten minutes, take the shopping back *and still
    charge for it*.

    Genuine question:

    I live in a detached house with front and back garden. When nobody is
    at home, couriers usually drop parcels around to the back porch which
    is not visible from the main road. What do they do at apartment blocks
    with shared landings when nobody is at home? Do they just leave the
    parcel at the front door, exposed to public view, even in a high-crime
    area?

    AFAICT, the answer is "Yes".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Fri Jan 31 12:52:59 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m03sudF6h2iU1@mid.individual.net...
    On 31/01/2025 10:56, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m03k1aF4mnkU3@mid.individual.net...

    That's not actually how they work. They monitor and assess all
    inputs from the sensor many hundreds of times a second, thus
    'changing their minds' constantly, and adapting far quicker than
    any human to whatever transpires.

    Apologies for the complete change of direction Norman, (it just happened
    to be you as the thought just occurred to me) but assuming these cars can
    in fact be made safe to drive anywhere -

    What is to stop pedestrians just stepping out in front of these cars and
    bringing them to complete halt;either to actually cross the road or just
    because hey feel like it;

    Well, nothing. Just as there isn't with a manually driven car. It's not generally
    allowed simply to plough into them however much they deserve it.

    Except that people * wouldn't do that* with a manual car. Would they ?

    *for all the very reasons you've given yourself*

    The human driver might be drunk, not paying attention etc etc

    But with a robot car , according to you at least and your friends*, they can be *guaranteed* that no accidents can possibly happen

    So why wouldn't people just walk out in front of a robot car and bring it to a halt ?

    For instance if what you say is correct, correct, correct, robot cars are going to put millions of human drivers out of a job. Taxi drivers for one.

    So what's to stop all these redundant taxi drivers stepping out in front of robot
    cars and refusing to move ?

    Obviously the robots and their human "friends" have never even considered
    such a possibility. As the possibility of robots being put out of work
    as a result of decisions taken by their AI friends is virtually
    zero.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to billy bookcase on Fri Jan 31 17:24:03 2025
    On 31/01/2025 11:05, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message news:v77ppjp4vfj9un6rhuceoq8i9m4fr07ne2@4ax.com...
    On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
    wrote:

    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as >>>> several small self-powered and self-directed boxes (c. 18" x 12" x 10") >>>> on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    I would not have thought that to be an insurmountable problem.


    Presumably they won't be able to deliver any higher than the fourth
    floor in most blocks of flats either.

    Would they try to deliver directly to the balcony (if present)? Do human deliverers climb up the outside of blocks of flats?

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Max Demian on Fri Jan 31 17:36:02 2025
    "Max Demian" <max_demian@bigfoot.com> wrote in message news:vnj0vj$3jhm7$2@dont-email.me...
    On 31/01/2025 11:05, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:v77ppjp4vfj9un6rhuceoq8i9m4fr07ne2@4ax.com...
    On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
    wrote:

    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as >>>>> several small self-powered and self-directed boxes (c. 18" x 12" x 10") >>>>> on wheels, carrying goods for delivery (at moderate speed) along the >>>>> footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    I would not have thought that to be an insurmountable problem.


    Presumably they won't be able to deliver any higher than the fourth
    floor in most blocks of flats either.

    Would they try to deliver directly to the balcony (if present)? Do human deliverers
    climb up the outside of blocks of flats?

    Robots not drones.

    Drones don't have problems crossing roads; least of all at Pelican Crossings.


    bb






    --
    Max Demian


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Andy Burns on Fri Jan 31 12:29:47 2025
    On 31/01/2025 12:16, Andy Burns wrote:
    billy bookcase wrote:

    assuming these cars can
    in fact be made safe to drive anywhere -

    What is to stop pedestrians just stepping out in front of these cars and
    bringing them to complete halt;either to actually cross the road or just
    because  hey  feel like it;

    In San Francisco it's been seen as a 'jolly jape' to place traffic cones
    on the bonnet of Waymo and other types of self-driving cars for the past couple of years.

    What fun, eh? Just as it used to be to knock on peoples' doors then run
    away. One can only hope they'll grow out of it or, more likely, the programming will take care of such immaturity.

    Also there are videos showing disorganised herds of autonomous cars,
    creeping ultra-cautiously around parking lots, some trying to park,
    others trying to get out, gently honking at each other and generally
    getting nowhere.

    Just as with humans then.

    In computer terms, it's a standard process known simply as debugging.

    With humans, there's no such hope.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Martin Harran on Fri Jan 31 12:14:45 2025
    On 31/01/2025 09:56, Martin Harran wrote:
    On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
    wrote:

    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as
    several small self-powered and self-directed boxes (c. 18" x 12" x 10")
    on wheels, carrying goods for delivery (at moderate speed) along the
    footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    I would not have thought that to be an insurmountable problem.


    Another unrelated problem is that they ping the buyer from the pavement
    outside what might be quite a big apartment block, and if they aren't
    attended to with (iirc) ten minutes, take the shopping back *and still
    charge for it*.

    Genuine question:

    I live in a detached house with front and back garden. When nobody is
    at home, couriers usually drop parcels around to the back porch which
    is not visible from the main road. What do they do at apartment blocks
    with shared landings when nobody is at home? Do they just leave the
    parcel at the front door, exposed to public view, even in a high-crime
    area?

    As I understand it, they stop where they're told to, and message you
    with a code to unlock them, so you have to attend in person.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Andy Burns on Fri Jan 31 12:37:39 2025
    On 31/01/2025 12:22, Andy Burns wrote:
    Norman Wells wrote:

    billy bookcase wrote:

    What is to stop pedestrians just stepping out in front of these cars and >>> bringing them to complete halt;either to actually cross the road or just >>> because  hey  feel like it;

    Well, nothing.  Just as there isn't with a manually driven car.

    It generally takes teenage boy levels of bravado to run across fast-
    moving traffic, once OAPs learn that autonomous cars will always stop, wrinkly-chicken might become their new pastime ...

    But they can't and won't always stop. The laws of physics have not been repealed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Les. Hayward on Fri Jan 31 12:35:49 2025
    On 31/01/2025 11:58, Les. Hayward wrote:
    On 31/01/2025 08:23, Norman Wells wrote:


    How very odd, luddite and irrational.

    We rely absolutely on computers in so many aspects of life.  It's very
    strange if we exclude driving with all the innate human frailties you
    list.

    As someone who has made a good living with computers large & small,I
    have a liking for them. I do NOT though like systems which are
    unnecessary, which seek to control me, or are added to make weight to products. For example, I prefer to decide when to switch on my lights manually!

    Fair enough. You probably also like the tedium of driving (or
    alcohol-free, risk-taking excitement, who knows?) to the relaxation of
    being driven or on a train or plane. But many don't. And no-one will
    be forcing you to do anything you don't want to.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Fri Jan 31 14:48:52 2025
    On 31/01/2025 12:52, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m03sudF6h2iU1@mid.individual.net...
    On 31/01/2025 10:56, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m03k1aF4mnkU3@mid.individual.net...

    That's not actually how they work. They monitor and assess all
    inputs from the sensor many hundreds of times a second, thus
    'changing their minds' constantly, and adapting far quicker than
    any human to whatever transpires.

    Apologies for the complete change of direction Norman, (it just happened >>> to be you as the thought just occurred to me) but assuming these cars can >>> in fact be made safe to drive anywhere -

    What is to stop pedestrians just stepping out in front of these cars and >>> bringing them to complete halt;either to actually cross the road or just >>> because hey feel like it;

    Well, nothing. Just as there isn't with a manually driven car. It's not generally
    allowed simply to plough into them however much they deserve it.

    Except that people * wouldn't do that* with a manual car. Would they ?

    *for all the very reasons you've given yourself*

    The human driver might be drunk, not paying attention etc etc

    But with a robot car , according to you at least and your friends*, they can be *guaranteed* that no accidents can possibly happen

    No they can't. Even an autonomous vehicle has kinetic energy that must
    be dissipated by braking before it will come to a halt. And the amount
    of braking that can be applied is limited by the coefficient of friction between its tyres and the road surface. Otherwise, all you get is a
    skid and not very much slowing at all. Step out in front of one within
    the braking distance and you will inevitably be run over whether it's an autonomous vehicle or not.

    So why wouldn't people just walk out in front of a robot car and bring it to a halt ?

    Because they may die. That's usually a pretty good disincentive.

    For instance if what you say is correct, correct, correct, robot cars are going
    to put millions of human drivers out of a job. Taxi drivers for one.

    In time, maybe. But why work if machines can do the same task just as
    well but cheaper and safer? It's a principle most of us seem to have
    little problem with in other areas.

    So what's to stop all these redundant taxi drivers stepping out in front of robot
    cars and refusing to move ?

    Maybe death if they have no appreciation of necessary stopping
    distances. Maybe the police if they're just being obstructive.

    Obviously the robots and their human "friends" have never even considered such a possibility.

    Actually, they have, as I've just demonstrated.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Fri Jan 31 12:21:00 2025
    On 31/01/2025 12:12, Jethro_uk wrote:
    On Fri, 31 Jan 2025 10:56:46 +0000, billy bookcase wrote:

    What is to stop pedestrians just stepping out in front of these cars and
    bringing them to complete halt;either to actually cross the road or just
    because hey feel like it;

    Or to rob/hijack it ?

    Maybe the people inside who are being transported? Er, just like any
    other car.

    These things don't just joyride round empty for the hell of it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Les. Hayward@21:1/5 to Norman Wells on Fri Jan 31 18:15:44 2025
    On 31/01/2025 12:35, Norman Wells wrote:
    On 31/01/2025 11:58, Les. Hayward wrote:
    On 31/01/2025 08:23, Norman Wells wrote:


    How very odd, luddite and irrational.

    We rely absolutely on computers in so many aspects of life.  It's
    very strange if we exclude driving with all the innate human
    frailties you list.

    As someone who has made a good living with computers large & small,I
    have a liking for them. I do NOT though like systems which are
    unnecessary, which seek to control me, or are added to make weight to
    products. For example, I prefer to decide when to switch on my lights
    manually!

    Fair enough.  You probably also like the tedium of driving (or
    alcohol-free, risk-taking excitement, who knows?) to the relaxation of
    being driven or on a train or plane.  But many don't.  And no-one will
    be forcing you to do anything you don't want to.



    Not so, I am afraid. Taking cars - it is getting more difficult to turn
    off all the annoyances. Soon it will be impossible. Take bill paying &
    parking - you are almost an outcast if you dare enter the world without
    a mobile phone. As to trains - As a child of a railway family, I'd love
    to. The trouble is that these days you need a second mortgage to afford
    the fair - then there is the ruddy computerised and over complicated
    booking system...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Les. Hayward on Fri Jan 31 19:46:27 2025
    "Les. Hayward" <les@nospam.invalid> wrote in message news:vnj41a$oq6b$1@solani.org...

    Not so, I am afraid. Taking cars - it is getting more difficult to turn
    off all the annoyances. Soon it will be impossible. Take bill paying & parking - you are almost an outcast if you dare enter the world without
    a mobile phone. As to trains - As a child of a railway family, I'd love to. The trouble is that these days you need a second mortgage to afford the fair -
    then there is the ruddy computerised and over complicated booking system...

    To say nothing of the regular announcements

    "This is a South Western Train to NoTown. Stopping at East Blogwich, Bolgwich, West Blogswich, North Eastwich, Easrwitch, West Eastwich and No Town. The
    next stop will be West Blogswich "

    Before every single stop ! And at full volume ! "I can't turn it down Sir, I'm sorry its set up like that"

    Despite the digital displays in every single carriage, carrying exactly the same
    information. Correct of not.

    "And all because they can"


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Sat Feb 1 08:52:53 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0466mF86flU1@mid.individual.net...

    But with a robot car , according to you at least and your friends*,
    they can be *guaranteed* that no accidents can possibly happen

    No they can't. Even an autonomous vehicle has kinetic energy that
    must be dissipated by braking before it will come to a halt. And the
    amount of braking that can be applied is limited by the coefficient
    of friction between its tyres and the road surface. Otherwise, all
    you get is a skid and not very much slowing at all. Step out in front
    of one within the braking distance and you will inevitably be run over whether
    it's an autonomous vehicle or not.


    Now you're simply obfuscating.

    You've already admitted these cars are fitted with sensors, no doubt directed towards
    all possible directions the car can travel. So that if something comes into its range
    of" vision", say a small child walking across the pavement possibly to run out into the
    road' the car will slow down sufficiently, no doubt having taken into account the
    coefficient of friction between its tyres and the road surface and any other technical
    objections you want to come up with, so as to be able to come to a stop.

    In fact just imagine the sorts of speeds that are going to be achievable by these
    robot cars of yours driving past pavements choc full of people

    You really can't have it both ways.

    Either these cars are "safe" in which case they will come to a stop if a child runs out
    into the road, or they're not

    As has been seen with all bike sharing schemes - one main result of these is to demonstrate just how many really stupid thoughtless people there are around who think nothing of blocking the pavements with the things, instead of parking them
    responsibly on;y a few feet away.

    And it will be much the same with robot cars. With thoughtless people now able to walk out in front of them; along with angry taxi drivers and small children

    Presumably to be mown down morons on bikes instead

    As has been suggested elsewhere these things would be fine in dedicated lanes on motorways. Except people don't simply want to drive up and down motorways
    do they? They need to get on and off of the motorway for a start.



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Sat Feb 1 10:21:10 2025
    On 01/02/2025 08:52, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0466mF86flU1@mid.individual.net...

    But with a robot car , according to you at least and your friends*,
    they can be *guaranteed* that no accidents can possibly happen

    No they can't. Even an autonomous vehicle has kinetic energy that
    must be dissipated by braking before it will come to a halt. And the
    amount of braking that can be applied is limited by the coefficient
    of friction between its tyres and the road surface. Otherwise, all
    you get is a skid and not very much slowing at all. Step out in front
    of one within the braking distance and you will inevitably be run over whether
    it's an autonomous vehicle or not.


    Now you're simply obfuscating.

    You've already admitted these cars are fitted with sensors, no doubt directed towards
    all possible directions the car can travel. So that if something comes into its range
    of" vision", say a small child walking across the pavement possibly to run out into the
    road' the car will slow down sufficiently, no doubt having taken into account the
    coefficient of friction between its tyres and the road surface and any other technical
    objections you want to come up with, so as to be able to come to a stop.

    Yes, dependent on the degree of risk as deemed acceptable by the control program.

    In fact just imagine the sorts of speeds that are going to be achievable by these
    robot cars of yours driving past pavements choc full of people

    They'll be the same as, or even greater than, those achievable by manual drivers in the same conditions. The 'thinking distance' is rather
    shorter, you see, so the speed can be somewhat higher for the same
    degree of risk.

    At the same speed, the risk of being hit by an autonomous vehicle will
    be less than one being manually driven.

    You really can't have it both ways.

    Either these cars are "safe" in which case they will come to a stop if a child runs out
    into the road, or they're not

    No-one claims that they will be absolutely, 100% safe. There's a
    balance to be struck between doing anything and the risk of doing it.

    They can and will injure or kill if someone steps out in front of one
    within its braking distance. Just as with human driven vehicles. It
    can't be avoided.

    The relevant comparison is which is the safer of the two.

    If you want perfection, that's the enemy of good.

    As has been seen with all bike sharing schemes - one main result of these is to
    demonstrate just how many really stupid thoughtless people there are around who
    think nothing of blocking the pavements with the things, instead of parking them
    responsibly on;y a few feet away.

    Sorry, I don't see the relevance.

    And it will be much the same with robot cars. With thoughtless people now able
    to walk out in front of them; along with angry taxi drivers and small children

    Presumably to be mown down morons on bikes instead

    As has been suggested elsewhere these things would be fine in dedicated lanes on motorways. Except people don't simply want to drive up and down motorways do they? They need to get on and off of the motorway for a start.

    That's just your very outdated view of what's possible and achievable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Sat Feb 1 13:12:20 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m06asnFin8sU1@mid.individual.net.

    On 01/02/2025 08:52, billy bookcase wrote
    ...

    snippage

    Either these cars are "safe" in which case they will come to a stop if a child
    runs out into the road, or they're not

    No-one claims that they will be absolutely, 100% safe. There's a balance to be struck
    between doing anything and the risk of doing it.

    They can and will injure or kill if someone steps out in front of one within its
    braking distance. Just as with human driven vehicles. It can't be avoided.

    The relevant comparison is which is the safer of the two.

    At the moment if a human driver, one of these drunks or inattentive drivers
    of yours, knocks over and kills a pedestrian, they will be held responsible initially at least, and will be required to account for themselves in a
    Court of Law.

    So who will be held responsible if one of these robots of yours knocks over
    and kills a pedestrian ?

    Anyone at all ?

    Or will the expensive KC doubtless hired by your robot friends, simply stand up in the inquest and explain that it was "just one of those things" as you're seeking
    to argue here, and that overall robots are definitely safer than humans.?



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to billy bookcase on Sat Feb 1 13:23:57 2025
    billy bookcase wrote:

    So who will be held responsible if one of these robots of yours knocks over and kills a pedestrian ?

    <https://www.legislation.gov.uk/ukpga/2024/10/contents>

    Sections 24 to 27 (except they are not in force yet, it shows their intentions).

    Anyone at all ?

    A nominated manager?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Sat Feb 1 13:50:22 2025
    On 01/02/2025 13:12, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m06asnFin8sU1@mid.individual.net.

    On 01/02/2025 08:52, billy bookcase wrote
    ...

    snippage

    Either these cars are "safe" in which case they will come to a stop if a child
    runs out into the road, or they're not

    No-one claims that they will be absolutely, 100% safe. There's a balance to be struck
    between doing anything and the risk of doing it.

    They can and will injure or kill if someone steps out in front of one within its
    braking distance. Just as with human driven vehicles. It can't be avoided. >>
    The relevant comparison is which is the safer of the two.

    At the moment if a human driver, one of these drunks or inattentive drivers of yours, knocks over and kills a pedestrian, they will be held responsible initially at least, and will be required to account for themselves in a
    Court of Law.

    So who will be held responsible if one of these robots of yours knocks over and kills a pedestrian ?

    Anyone at all ?

    Or will the expensive KC doubtless hired by your robot friends, simply stand up
    in the inquest and explain that it was "just one of those things" as you're seeking
    to argue here, and that overall robots are definitely safer than humans.?

    As always, it depends.

    However, with the virtually instantaneous reaction time of an autonomous vehicle, it will almost certainly be the pedestrian's fault unless a malfunction can be established. Human error on the part of a driver can obviously be eliminated from the outset.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roland Perry@21:1/5 to All on Sun Feb 2 16:28:30 2025
    In message <v77ppjp4vfj9un6rhuceoq8i9m4fr07ne2@4ax.com>, at 09:56:31 on
    Fri, 31 Jan 2025, Martin Harran <martinharran@gmail.com> remarked:
    On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
    wrote:

    In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
    Jan 2025, JNugent <JNugent73@mail.com> remarked:

    Recently in the USA, I was startled to see what I can only describe as >>>several small self-powered and self-directed boxes (c. 18" x 12" x 10") >>>on wheels, carrying goods for delivery (at moderate speed) along the >>>footway in a university area.

    There's a trial of those in Cambridge. A major problem is crossing
    roads, because they can't reach the buttons on the Pelican Crossings.

    I would not have thought that to be an insurmountable problem.

    I have yet to see one of the robots with a suitable prosthetic arm
    fitted.

    Another unrelated problem is that they ping the buyer from the pavement >>outside what might be quite a big apartment block, and if they aren't >>attended to with (iirc) ten minutes, take the shopping back *and still >>charge for it*.

    Genuine question:

    I live in a detached house with front and back garden. When nobody is
    at home, couriers usually drop parcels around to the back porch which
    is not visible from the main road. What do they do at apartment blocks
    with shared landings when nobody is at home? Do they just leave the
    parcel at the front door, exposed to public view, even in a high-crime
    area?

    Apparently they text you to come to the kerb to pick up your groceries.
    If you don't arrive pronto, they take them back to the shop (and still
    charge you).
    --
    Roland Perry

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roland Perry@21:1/5 to All on Sun Feb 2 16:30:39 2025
    In message <m03uq6F76arU2@mid.individual.net>, at 12:42:46 on Fri, 31
    Jan 2025, JNugent <JNugent73@mail.com> remarked:
    Genuine question:

    I live in a detached house with front and back garden. When nobody is
    at home, couriers usually drop parcels around to the back porch which
    is not visible from the main road. What do they do at apartment blocks
    with shared landings when nobody is at home? Do they just leave the
    parcel at the front door, exposed to public view, even in a high-crime
    area?

    AFAICT, the answer is "Yes".

    The robot can't "leave" the groceries, the customer has to unlock them
    and take them out.
    --
    Roland Perry

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to billy bookcase on Sat Feb 1 14:47:24 2025
    On Sat, 01 Feb 2025 13:12:20 +0000, billy bookcase wrote:

    Or will the expensive KC doubtless hired by your robot friends, simply
    stand up in the inquest and explain that it was "just one of those
    things"
    as you're seeking to argue here, and that overall robots are definitely
    safer than humans.?

    Sounds fair to me.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Scott@21:1/5 to Norman Wells on Sat Feb 1 12:33:41 2025
    On 01/02/2025 10:21, Norman Wells wrote:
    They can and will injure or kill if someone steps out in front of one
    within its braking distance.  Just as with human driven vehicles.  It
    can't be avoided.

    Yet a /good/ human driver will be continuously assessing the likelihood
    of someone stepping out(*), and take pro-active steps to prevent an
    accident. Will (can?) a robot car do the same?

    (*) and indeed a myriad of other improbable things.

    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Andy Burns on Sat Feb 1 19:03:17 2025
    "Andy Burns" <usenet@andyburns.uk> wrote in message news:m06ljhFjh00U3@mid.individual.net...
    billy bookcase wrote:

    So who will be held responsible if one of these robots of yours knocks over >> and kills a pedestrian ?

    <https://www.legislation.gov.uk/ukpga/2024/10/contents>

    Sections 24 to 27 (except they are not in force yet, it shows their intentions).

    Thanks.

    Anyone at all ?

    A nominated manager?

    So no problems there, then.

    Presumably people will be queuing up to be nominated



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to Norman Wells on Sat Feb 1 16:23:49 2025
    On Sat, 1 Feb 2025 13:50:22 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 01/02/2025 13:12, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m06asnFin8sU1@mid.individual.net.

    On 01/02/2025 08:52, billy bookcase wrote
    ...

    snippage

    Either these cars are "safe" in which case they will come to a
    stop if a child runs out into the road, or they're not

    No-one claims that they will be absolutely, 100% safe. There's a
    balance to be struck between doing anything and the risk of doing
    it.

    They can and will injure or kill if someone steps out in front of
    one within its braking distance. Just as with human driven
    vehicles. It can't be avoided.

    The relevant comparison is which is the safer of the two.

    At the moment if a human driver, one of these drunks or inattentive
    drivers of yours, knocks over and kills a pedestrian, they will be
    held responsible initially at least, and will be required to
    account for themselves in a Court of Law.

    So who will be held responsible if one of these robots of yours
    knocks over and kills a pedestrian ?

    Anyone at all ?

    Or will the expensive KC doubtless hired by your robot friends,
    simply stand up in the inquest and explain that it was "just one of
    those things" as you're seeking to argue here, and that overall
    robots are definitely safer than humans.?

    As always, it depends.

    However, with the virtually instantaneous reaction time of an
    autonomous vehicle, it will almost certainly be the pedestrian's
    fault unless a malfunction can be established. Human error on the
    part of a driver can obviously be eliminated from the outset.



    Reading these messages, I wonder how these self-driving cars react to
    signs, such as those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?
    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to billy bookcase on Sun Feb 2 13:37:09 2025
    On Fri, 31 Jan 2025 09:12:31 -0000
    "billy bookcase" <billy@anon.com> wrote:

    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m02es8Fu9dcU2@mid.individual.net...

    On 30/01/2025 21:38, Mark Goodge wrote:

    On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells
    <hex@unseen.ac.am> wrote:


    snip

    It seems you're only now considering obvious potential pitfalls
    that I'm sure were considered several years ago, and have been
    resolved but unfortunately without telling you.

    They haven't been resolved yet. Current trials of autonomous
    vehicles are still based on tightly defined scenarios on routes
    already known to the vehicle. Being able to go anywhere, any time
    is still some way off.

    I think you are way behind the times on that.

    "New research shows that autonomous vehicles outperform
    human-driven ones.

    Self-driving cars are generally safer than those that are driven by
    humans, according to a new study."

    quote:

    Where the self-driving cars struggled, however, was if they drove at
    sunrise or sunset on roads with many turns, the report, published in
    the journal Nature, found. That's because low light conditions
    confused the cars' sensors as did the " complicated conditions" that
    come when a car is turning.

    unquote


    https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows

    So they're OK driving in straght ines (as I believe has already been
    alluded to )but they have a bit of a problem with "more complicated"
    stuff; if for instance they have to say to turn corners. i.e change
    their minds.

    So here we have a robot which is supposedly behaving just like a human
    but which is actually incapable of changing its mind.

    That's sounds strangely familiar, somehow,

    So it's maybe no wonder that you yourself are so enthusiastic about
    such projects.



    bb



    I wonder how these self-driving cars react to signs, such as those not
    uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?
    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Davey on Mon Feb 3 08:09:14 2025
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?

    Just as they do with any legally imposed speed limits. With a proper
    data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fredxx@21:1/5 to billy bookcase on Sat Feb 1 13:43:20 2025
    On 01/02/2025 13:12, billy bookcase wrote:

    <snip>

    So who will be held responsible if one of these robots of yours knocks over and kills a pedestrian ?

    Anyone at all ?

    Unless negligence can be proven then I don't see how any one will be
    liable. Of course the pedestrian can take out personal insurance in the
    event of an accident.

    The situation can arise currently. Lets suppose Gary Hart's Landrover
    Defender had suffered a mechanical breakdown leading to the disaster
    and he was able to provide evidence that his vehicle was properly
    maintained.

    The victims of this crash would be in the same position as per your
    concerns where no one would be liable through negligence. I would hope
    they had some form of life and injury insurance.

    BICBW

    If anything self drive cars would likely be safer than human driver and
    perhaps the occasional death should be accepted as a consequence.
    Perhaps there could be a similar fund as distributed by the MIB for
    uninsured drivers?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mike Scott on Mon Feb 3 08:10:43 2025
    On 01/02/2025 12:33, Mike Scott wrote:
    On 01/02/2025 10:21, Norman Wells wrote:

    They can and will injure or kill if someone steps out in front of one
    within its braking distance.  Just as with human driven vehicles.  It
    can't be avoided.

    Yet a /good/ human driver will be continuously assessing the likelihood
    of someone stepping out(*), and take pro-active steps to prevent an
    accident. Will (can?) a robot car do the same?

    Yes, and quicker, in fact virtually instantaneously.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to Andy Burns on Sat Feb 1 16:21:46 2025
    On 01/02/2025 13:23, Andy Burns wrote:
    billy bookcase wrote:

    So who will be held responsible if one of these robots of yours knocks over >> and kills a pedestrian ?

    <https://www.legislation.gov.uk/ukpga/2024/10/contents>

    Sections 24 to 27 (except they are not in force yet, it shows their intentions).

    That seems to relate to failing to give information to the Secretary of
    State etc., rather than what happens if an autonomous vehicle injures a pedestrian, exceeds a speed limit, crosses a double white line ...


    Anyone at all ?

    A nominated manager?


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to Norman Wells on Mon Feb 3 09:21:07 2025
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?



    No, mine doesn't, and it is in a BMW. I will investigate in case it
    is a feature that is available but not activated. But you are assuming
    that all States and small 'burgs' in the US follow the same speed limit reporting processes that we do. Do they?

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Davey on Mon Feb 3 09:36:45 2025
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a proper
    data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    I will investigate in case it
    is a feature that is available but not activated. But you are assuming
    that all States and small 'burgs' in the US follow the same speed limit reporting processes that we do. Do they?

    I have no idea. But this is a UK group and all the necessary speed
    limit information here is certainly available and in current use.

    It's trivial programming to ensure that limits are not exceeded by any autonomous vehicle, which is certainly not the case with human drivers,
    even those who try to obey them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to Norman Wells on Mon Feb 3 09:58:48 2025
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    From: Norman Wells <hex@unseen.ac.am>
    Subject: Re: Should road rules depend on the car?
    Date: Mon, 3 Feb 2025 09:36:45 +0000
    User-Agent: Mozilla Thunderbird
    Newsgroups: uk.legal.moderated
    Organization: SGO

    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated again.
    All other software up to date.

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Mon Feb 3 10:03:37 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by any autonomous
    vehicle, which is certainly not the case with human drivers, even those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Martin Harran on Mon Feb 3 10:07:22 2025
    "Martin Harran" <martinharran@gmail.com> wrote in message news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go along.
    And so the process by which they actually arrive at particular decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,


    bb




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to billy bookcase on Mon Feb 3 10:21:38 2025
    billy bookcase wrote:

    Martin Harran wrote:

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.
    With the sheer volume of inputs into an autopilot (location, speed,
    multiple cameras, bugs on the windscreen, lidar, clouds, weather,
    lighting, pedestrians, cyclists, other vehicles) do you think you could
    *ever* duplicate the inputs?

    Butterfly wings anyone?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Davey on Mon Feb 3 10:59:25 2025
    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated again.
    All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to Jon Ribbens on Mon Feb 3 11:20:03 2025
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?
    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Davey on Mon Feb 3 11:22:59 2025
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fredxx@21:1/5 to Davey on Mon Feb 3 11:05:16 2025
    On 03/02/2025 09:58, Davey wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    From: Norman Wells <hex@unseen.ac.am>
    Subject: Re: Should road rules depend on the car?
    Date: Mon, 3 Feb 2025 09:36:45 +0000
    User-Agent: Mozilla Thunderbird
    Newsgroups: uk.legal.moderated
    Organization: SGO

    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated again.
    All other software up to date.

    The basic Android SatNavs with traffic information include things like roadworks and speed limits.

    I'm genuinely surprised that BMW doesn't provide the same information
    with live traffic. I rely on mine for speed limits and avoiding jams.

    If your console can run Android then I would ask questions about
    alternatives with live updates and traffic information. Otherwise I
    would consider replacing it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Jon Ribbens on Mon Feb 3 12:26:08 2025
    On 03/02/2025 10:59, Jon Ribbens wrote:
    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those >>>>>> not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated again.
    All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.

    The standalone Garmin satnav I have seems remarkably accurate as regards
    speed limits, far more so in fact than I am. The number of times it has correctly identified that I have been exceeding one is vastly greater
    than the number where it has been wrong, which is usually only when
    there are temporary speed restrictions on motorways.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Davey on Mon Feb 3 12:19:18 2025
    On 03/02/2025 09:58, Davey wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated again.
    All other software up to date.

    Perhaps then it's because, as is well-known, BMW drivers are exempt from
    speed limits and don't need to be bothered with them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Andy Burns on Mon Feb 3 10:54:25 2025
    "Andy Burns" <usenet@andyburns.uk> wrote in message news:m0bjloFdgcfU3@mid.individual.net...
    billy bookcase wrote:

    Martin Harran wrote:

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    With the sheer volume of inputs into an autopilot (location, speed, multiple cameras, bugs on the windscreen, lidar, clouds, weather, lighting, pedestrians,
    cyclists, other vehicles) do you think you could *ever* duplicate the inputs?

    It's purely hypothetical.

    If two sets of inputs were identical then the outputs would be identical
    as all the data is numerical and subject to exactly the same set of calculations.

    Whereas if an AI autopilot was also fed passenger information it might
    decide they would all be better off flying to a different destination
    providing there was enough fuel.


    As to actual autopilots themselves.

    quote:

    a.. Control Inputs (Mandatory for vehicle operation):

    a.. Radio Control Receivers

    b.. MAVLink Data Streams, ie ground control stations or companion computers

    a.. Sensor Inputs:

    a.. GPS (mandatory item)

    b.. Compass (mandatory except for certain Plane uses)

    c.. Airspeed

    d.. Rangefinders

    e.. Redundant barometers, IMUs, etc.

    a.. Power Management Unit Inputs (Mandatory to supply power to autopilot)

    b.. Analog Inputs:

    a.. Received Signal Strength Input (RSSI)

    b.. Analog Airspeed Sensor

    :unquote



    https://ardupilot.org/plane/docs/common-flight-controller-io.html



    Butterfly wings anyone?

    Bugs on the screen, anyons ?

    As it happens, I much prefer the "Airplane" version myself.


    bb





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to JNugent on Mon Feb 3 12:29:22 2025
    On 03/02/2025 11:22, JNugent wrote:
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief!  How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Mon Feb 3 12:32:24 2025
    On 03/02/2025 10:03, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by any autonomous
    vehicle, which is certainly not the case with human drivers, even those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?

    If there are no signs there's nothing you need to obey. If the
    autonomous vehicle thinks there are restrictions in place, it will abide
    by them whether or not there are signs in the interests of safety.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Mon Feb 3 12:35:57 2025
    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go along. And so the process by which they actually arrive at particular decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    AI is not part of autonomous vehicle control, and includes none of the
    things Mr Harran identified above. It is simply automation according to pre-determined rules.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to Davey on Mon Feb 3 12:58:18 2025
    On 03/02/2025 11:20, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?


    I doubt if any BMWs have built-in satnavs that show the speed limit
    unless things have changed in the last 3 years or so.

    My son always tells me not to rely on the BMW satnav anyway because it
    is always too optimistic about arrival time and does not always divert
    from roads that are temporarily congested.

    The WAZE satnav is very good. The speed limit is displayed but too small
    to be easily visible between the driver and the dashboard. One drawback
    seems to be, in my experience, that with WAZE there is a short lag
    between where you are and what is displayed. So the satnav might tell
    you that you should have made your right turn a few seconds ago.

    A new Honda Jazz has a built-in Garmin satnav that does show the speed
    limit. And separately the car has a camera that looks for speed limit
    signs in the road and displays them separately on the dashboard - the
    two figures not always agreeing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to The Todal on Mon Feb 3 13:09:04 2025
    On 2025-02-03, The Todal <the_todal@icloud.com> wrote:
    On 03/02/2025 11:20, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.

    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    I doubt if any BMWs have built-in satnavs that show the speed limit
    unless things have changed in the last 3 years or so.

    Things have indeed changed in the last 3 years or so, not least because
    the EU has passed a law saying that all cars sold in the EU (and also
    Northern Ireland) must know the speed limit and prevent the car from
    exceeding it without alerting the driver.

    My son always tells me not to rely on the BMW satnav anyway because it
    is always too optimistic about arrival time and does not always divert
    from roads that are temporarily congested.

    The WAZE satnav is very good. The speed limit is displayed but too small
    to be easily visible between the driver and the dashboard. One drawback
    seems to be, in my experience, that with WAZE there is a short lag
    between where you are and what is displayed. So the satnav might tell
    you that you should have made your right turn a few seconds ago.

    A new Honda Jazz has a built-in Garmin satnav that does show the speed
    limit. And separately the car has a camera that looks for speed limit
    signs in the road and displays them separately on the dashboard - the
    two figures not always agreeing.

    Exactly my point. The technology for cars to reliably know the speed
    limit is not there yet. Thus making the above-mentioned EU law an
    interesting one.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Norman Wells on Mon Feb 3 13:16:09 2025
    On 3 Feb 2025 at 12:29:22 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 03/02/2025 11:22, JNugent wrote:
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as >>>>>>>>> those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a >>>>>>>> proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    On my phone Google maps has totally insufficient map data for a long journey without an Internet connection.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to billy bookcase on Mon Feb 3 13:39:46 2025
    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    There can be situations where these fancy gadgets intended to make
    things safer can cause trouble.

    My car can sometimes applies the brakes hard spontaneously if it thinks
    there is an obstruction in its path. I expect it would do the same if
    you were about to run into the back of a stationary traffic queue too...

    Such obstructions can include deer, larger low flying birds like a
    pheasant or a car in a middle lane waiting to turn right or very rarely
    for no obvious reason at all (I suspect sensor glitch on a rough road).

    It is quite disconcerting when it happens out of the blue.

    AI systems on the other hand, learn and make up new rules as they go along. And so the process by which they actually arrive at particular decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    AI is potentially more reliable than the average driver these days, and
    it will get better with time. However, when it goes wrong things get
    ugly and there will be some very interesting legal cases arising.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Davey@21:1/5 to All on Mon Feb 3 13:41:40 2025
    snipped

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a lower-than-normal
    speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    The basic Android SatNavs with traffic information include things
    like roadworks and speed limits.

    I'm genuinely surprised that BMW doesn't provide the same information
    with live traffic. I rely on mine for speed limits and avoiding jams.

    It provides live traffic information, such as road construction and
    accident hold-ups.

    If your console can run Android then I would ask questions about
    alternatives with live updates and traffic information. Otherwise I
    would consider replacing it.


    I have no idea what it runs, although I see lots of Linux terms in
    the software listings. I ask it to do things, and it usually does them.
    I certainly have no intention of replacing it, either the Satnav or the
    car, I have managed so far without my car telling me what the speed
    limit is, as I can read road signs myself.

    --
    Davey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Davey on Mon Feb 3 13:53:59 2025
    Davey wrote:

    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My 2016 car's built-in satnav displays speeds from a database (several
    years out of date)

    I tend to use google maps on my phone "projected" onto the car's screen
    by android auto, which gets speed limits from google's (mostly) up to
    date database.

    I have driven a recent BMW which got the speed limit from reading the signs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Roger Hayter on Mon Feb 3 14:03:39 2025
    On 2025-02-03, Roger Hayter <roger@hayter.org> wrote:
    On 3 Feb 2025 at 12:29:22 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:
    On 03/02/2025 11:22, JNugent wrote:
    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    On my phone Google maps has totally insufficient map data for a long
    journey without an Internet connection.

    With a location selected, pull up the bottom panel that gives
    information about it, scroll the buttons (that start "Directions",
    "Start", etc) to the left, select "Download offline map".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to Norman Wells on Mon Feb 3 14:13:55 2025
    On 03/02/2025 09:36, Norman Wells wrote:

    It's trivial programming to ensure that limits are not exceeded by any autonomous vehicle,

    Please give complete details of your trivial program to handle that...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Roger Hayter on Mon Feb 3 14:23:55 2025
    On 03/02/2025 13:16, Roger Hayter wrote:
    On 3 Feb 2025 at 12:29:22 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 03/02/2025 11:22, JNugent wrote:
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as >>>>>>>>>> those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a >>>>>>>>> proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it >>>>> *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything >>>>> dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    On my phone Google maps has totally insufficient map data for a long journey without an Internet connection.

    That's why you need a proper satnav. Something fit for purpose.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Davey on Mon Feb 3 14:32:31 2025
    On 03/02/2025 13:41, Davey wrote:

    I have no idea what it runs, although I see lots of Linux terms in
    the software listings. I ask it to do things, and it usually does them.
    I certainly have no intention of replacing it, either the Satnav or the
    car, I have managed so far without my car telling me what the speed
    limit is, as I can read road signs myself.

    Of course. Sometimes.

    My experience is that my satnav knows rather better than I do what the
    limit is at any one location. And that's not for the want of trying.

    If you say you're fully aware of the correct speed limit everywhere you
    drive all of the time, I will frankly tend not to believe you. Maybe a
    journey with a satnav that does could be revelatory.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Davey on Mon Feb 3 13:12:44 2025
    On 3 Feb 2025 at 11:20:03 GMT, "Davey" <davey@example.invalid> wrote:

    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as
    those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My car reads speed limit signs and also interprets street lamp spacing; the latter is often fooled by telegraph poles, and of course by default 20mph limits. The satnav has no speed limit database.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Norman Wells on Mon Feb 3 14:39:52 2025
    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those not
    uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?

    Just as they do with any legally imposed speed limits.  With a proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is under
    speed restrictions due to roadworks, crash recovery or resurfacing.

    30mph local roads crossing motorway bridges where derestricted applies.

    Mostly it gets it right but sometimes it has the car on a road parallel
    with the road it is actually on. Can be an issue with motorway parallel
    and close to old road as happens at RAF Dishforth with A1(M) and A168.
    Civilian GPS is only good to ~10m and it sometimes guesses wrong.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to All on Mon Feb 3 14:46:55 2025
    On 3 Feb 2025 at 14:03:39 GMT, "Jon Ribbens" <jon+usenet@unequivocal.eu>
    wrote:

    On 2025-02-03, Roger Hayter <roger@hayter.org> wrote:
    On 3 Feb 2025 at 12:29:22 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:
    On 03/02/2025 11:22, JNugent wrote:
    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    On my phone Google maps has totally insufficient map data for a long
    journey without an Internet connection.

    With a location selected, pull up the bottom panel that gives
    information about it, scroll the buttons (that start "Directions",
    "Start", etc) to the left, select "Download offline map".

    My phone has insufficient memory

    --
    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Scott@21:1/5 to billy bookcase on Mon Feb 3 14:59:46 2025
    On 03/02/2025 10:54, billy bookcase wrote:
    With the sheer volume of inputs into an autopilot (location, speed, multiple >> cameras, bugs on the windscreen, lidar, clouds, weather, lighting, pedestrians,
    cyclists, other vehicles) do you think you could*ever* duplicate the inputs?
    It's purely hypothetical.

    If two sets of inputs were identical then the outputs would be identical
    as all the data is numerical and subject to exactly the same set of calculations.

    Sorry, that doesn't work in the real world.

    The whole point about "butterfly wings" is that many (all??) non-linear calculations are thoroughly chaotic in their results. If you can't
    specify inputs precisely, outputs can vary unpredictably. And any
    "numerical" input from the real world cannot be *precisely* specified.
    Output therefore can depend critically on an imprecise input.

    (It's not that the butterfly /causes/ the hurricane. It's just that you
    cannot put the full initial conditions into the highly non-linear
    weather calculations. Include the butterfly, predict a hurricane;
    exclude it, predict a sunny day. But you don't even know it's there....)

    I reckon a driver AI is decidedly non-linear system.


    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Andy Burns on Mon Feb 3 15:10:10 2025
    On Mon, 03 Feb 2025 13:53:59 +0000, Andy Burns wrote:

    Davey wrote:

    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My 2016 car's built-in satnav displays speeds from a database (several
    years out of date)

    I have never managed to use my in-car sat nav for anything. I can't begin
    to describe how useless it is. Why manufacturers continue is beyond me

    I tend to use google maps on my phone "projected" onto the car's screen
    by android auto, which gets speed limits from google's (mostly) up to
    date database.

    Same here. Saves me the £xxx the dealer wanted for map updates. Also lets
    me use my phones infotainment, rather than the ghastly in car version
    (see above).

    I have driven a recent BMW which got the speed limit from reading the
    signs.

    I had a loan car in 2019 that did that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Jon Ribbens on Mon Feb 3 15:07:01 2025
    On Mon, 03 Feb 2025 10:59:25 +0000, Jon Ribbens wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000 Norman Wells <hex@unseen.ac.am>
    wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000 Norman Wells <hex@unseen.ac.am>
    wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated again.
    All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it *usually* shows the speed limit. Sometimes the speed limit it shows is
    the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.

    When I worked in insurance, I had a task to trawl through a years worth
    of black box data. (Which I did at a cost of about £5,000 versus the
    £100,000 a bunch of cowboys were trying to charge).

    One very common problem with the data was that because it was simply one
    data point per analysis, and the GPS had no idea of altitude, it was
    possible for a car on a motorway (70mph limit) to briefly appear as if it
    had been on a road with a 30mph limit that went over or under it.

    Of course that triggered a "naughty boy" black mark for the driver and
    affected their premiums.

    The answer was to have had a little bit more sophistication in the
    logging and more critically analysis. However that would have cost money
    and skills that would have made the whole project unprofitable.

    I left before they decided what to do.

    Big Data has now become Big AI, from what I can see.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Scott@21:1/5 to Norman Wells on Mon Feb 3 15:12:29 2025
    On 03/02/2025 12:26, Norman Wells wrote:
    The standalone Garmin satnav I have seems remarkably accurate as regards speed limits, far more so in fact than I am.  The number of times it has correctly identified that I have been exceeding one is vastly greater
    than the number where it has been wrong, which is usually only when
    there are temporary speed restrictions on motorways.

    Interesting. Mine makes mistakes on some stretches of road, not
    altogether consistently I sometimes feel.

    All this assumes the satnav has you on the right road. Mine has very occasionally shown me on a different road (or, once, decided I was
    driving through a nearby wooded area) some distance from where I really
    was. With an obvious knock-on effect on the rest of the display. I can't imagine what a self-drive would make of that.

    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Roger Hayter on Mon Feb 3 14:53:02 2025
    On 03/02/2025 14:46, Roger Hayter wrote:
    On 3 Feb 2025 at 14:03:39 GMT, "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Roger Hayter <roger@hayter.org> wrote:
    On 3 Feb 2025 at 12:29:22 GMT, "Norman Wells" <hex@unseen.ac.am> wrote: >>>> On 03/02/2025 11:22, JNugent wrote:
    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    On my phone Google maps has totally insufficient map data for a long
    journey without an Internet connection.

    With a location selected, pull up the bottom panel that gives
    information about it, scroll the buttons (that start "Directions",
    "Start", etc) to the left, select "Download offline map".

    My phone has insufficient memory

    Time to ditch the burner phone and get a proper one then.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Martin Brown on Mon Feb 3 14:51:05 2025
    On 03/02/2025 14:39, Martin Brown wrote:
    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those not
    uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?

    Just as they do with any legally imposed speed limits.  With a proper
    data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is under speed restrictions due to roadworks, crash recovery or resurfacing.

    30mph local roads crossing motorway bridges where derestricted applies.

    Mostly it gets it right but sometimes it has the car on a road parallel
    with the road it is actually on. Can be an issue with motorway parallel
    and close to old road as happens at RAF Dishforth with A1(M) and A168. Civilian GPS is only good to ~10m and it sometimes guesses wrong.

    Actually:

    "Most consumer-grade GPS devices, when given an unobstructed view of the
    sky, can pinpoint your location with an accuracy of about 3 to 5 meters
    (10 to 16 feet)".

    https://mapscaping.com/how-accurate-is-gps/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Roger Hayter on Mon Feb 3 14:46:54 2025
    On 03/02/2025 13:12, Roger Hayter wrote:
    On 3 Feb 2025 at 11:20:03 GMT, "Davey" <davey@example.invalid> wrote:

    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My car reads speed limit signs and also interprets street lamp spacing; the latter is often fooled by telegraph poles, and of course by default 20mph limits.

    Obviously not a very sensible way of doing things then. Perhaps you
    need an upgrade to something that actually works?

    The satnav has no speed limit database.

    Well, my Garmin obviously does, and it wasn't expensive, having been
    bought for £65 in 2016 with free map updates since. Reliable technology
    is out there and is easily affordable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to Norman Wells on Mon Feb 3 15:32:13 2025
    On 03/02/2025 14:32, Norman Wells wrote:
    On 03/02/2025 13:41, Davey wrote:

    I have no idea what it runs, although I see lots of Linux terms in
    the software listings. I ask it to do things, and it usually does them.
    I certainly have no intention of replacing it, either the Satnav or the
    car, I have managed so far without my car telling me what the speed
    limit is, as I can read road signs myself.

    Of course.  Sometimes.

    My experience is that my satnav knows rather better than I do what the
    limit is at any one location.  And that's not for the want of trying.

    If you say you're fully aware of the correct speed limit everywhere you
    drive all of the time, I will frankly tend not to believe you.  Maybe a journey with a satnav that does could be revelatory.


    Maybe every driver should be obliged to take a Speed Awareness Course periodically. To refresh their memory of which limits apply to which
    types of road.

    Instead, you might be offered such a course instead of points on your
    licence.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Mon Feb 3 15:43:28 2025
    On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go along. >> And so the process by which they actually arrive at particular decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    AI is not part of autonomous vehicle control, and includes none of the
    things Mr Harran identified above. It is simply automation according to >pre-determined rules.

    But that's precisely the problem. It's impossible to have a predetermined
    rule for every possible scenario. And dealing with scenarios that you don't have a rule for requires intelligence.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Mon Feb 3 16:15:48 2025
    On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:

    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those not
    uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?

    Just as they do with any legally imposed speed limits. With a proper
    data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is under >speed restrictions due to roadworks, crash recovery or resurfacing.

    30mph local roads crossing motorway bridges where derestricted applies.

    Junctions with roads that have different limits can cause problems, too.

    Mostly it gets it right but sometimes it has the car on a road parallel
    with the road it is actually on. Can be an issue with motorway parallel
    and close to old road as happens at RAF Dishforth with A1(M) and A168. >Civilian GPS is only good to ~10m and it sometimes guesses wrong.

    There's also the problem that most minor roads only exist in the database as lines and nodes rather than areas. So what we, on the ground, see as this:

    ---------------------------------------------------------

    -----------------------+ +-----------------------------
    | |
    | |
    | |
    | |

    (that is, a road with a side-road meeting it at a t-junction)

    will be in the database as simply this:

    -------------------------+----------------------------
    |
    |
    |

    That's not a general problem for navigation, or, indeed, speed limits. But
    if the side road has a different limit to the main road, then a car using sat-nav to determine the speed limit as it proceeds along the main road will encounter a node where two different limits intersect. How sat-navs
    interprets that is not consistent.

    That's observable by some drivers in my town. If you're drving along this
    road, which has normal urban 30mph limit:

    https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8

    then as you can see in the image, there's a side road with a 20mph limit.
    Some drivers report that, even if they are proceeding along the main road,
    and have no intention of turning, ther sat-nav nonetheless tells them to
    slow down to 20mph as they pass the junction.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Mon Feb 3 15:59:55 2025
    On Mon, 3 Feb 2025 12:29:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 11:22, JNugent wrote:

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    You need up-to-the-minute traffic information in order to reliably know temporary or variable speed limits, for example those on "smart" motorways.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to jethro_uk@hotmailbin.com on Mon Feb 3 16:20:10 2025
    On 2025-02-03, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
    On Mon, 03 Feb 2025 13:53:59 +0000, Andy Burns wrote:
    Davey wrote:
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My 2016 car's built-in satnav displays speeds from a database (several
    years out of date)

    I have never managed to use my in-car sat nav for anything. I can't begin
    to describe how useless it is. Why manufacturers continue is beyond me

    I would imagine it's because it adds a tick to their feature list, and
    it costs them very little or nothing to provide (the screen and the
    computer behind it would be there anyway), and a small percentage of
    customers presumably do pay for the exorbitant subscriptions for map
    updates thus providing a bit of extra income.

    Actually I had no idea of the range of things you can subscribe to
    for extra money with cars these days... cruise control with radar is
    apparently £35/month, automatic parking is £19/month, automatic
    headlight dipping is £10/month, etc. You can even pay £99 to play
    "engine sounds" inside the cabin!? Satnav map upates cost £79/year.
    And all of these features I think are actually built into the car,
    but disabled unless you pay the extra ongoing monthly money. I think
    capitalism is broken.

    https://www.bmw.co.uk/en/shop/ls/cp/connected-drive

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to Mike Scott on Mon Feb 3 15:47:20 2025
    On 03/02/2025 15:12, Mike Scott wrote:
    On 03/02/2025 12:26, Norman Wells wrote:
    The standalone Garmin satnav I have seems remarkably accurate as regards
    speed limits, far more so in fact than I am.  The number of times it has
    correctly identified that I have been exceeding one is vastly greater
    than the number where it has been wrong, which is usually only when there
    are temporary speed restrictions on motorways.

    Interesting. Mine makes mistakes on some stretches of road, not altogether consistently I sometimes feel.

    All this assumes the satnav has you on the right road. Mine has very occasionally shown me on a different road (or, once, decided I was driving through a nearby wooded area) some distance from where I really was. With
    an obvious knock-on effect on the rest of the display. I can't imagine what
    a self-drive would make of that.

    An autonomous vehicle would have additional (varyingly imprecise and out
    of date) inputs which indicate location and speeds of various parts of the vehicle; and might record recent direction changes it had output, and how
    well the vehicle had responded to those suggested outputs.

    Someone might give us details of a trivial program to handle all that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to The Todal on Mon Feb 3 16:41:39 2025
    On Mon, 03 Feb 2025 15:32:13 +0000, The Todal wrote:

    On 03/02/2025 14:32, Norman Wells wrote:
    On 03/02/2025 13:41, Davey wrote:

    I have no idea what it runs, although I see lots of Linux terms in the
    software listings. I ask it to do things, and it usually does them.
    I certainly have no intention of replacing it, either the Satnav or
    the car, I have managed so far without my car telling me what the
    speed limit is, as I can read road signs myself.

    Of course.  Sometimes.

    My experience is that my satnav knows rather better than I do what the
    limit is at any one location.  And that's not for the want of trying.

    If you say you're fully aware of the correct speed limit everywhere you
    drive all of the time, I will frankly tend not to believe you.  Maybe a
    journey with a satnav that does could be revelatory.


    Maybe every driver should be obliged to take a Speed Awareness Course periodically. To refresh their memory of which limits apply to which
    types of road

    And a general catch up with the highway code.

    If a course is £100 then £10/year is a snip to keep your licence.

    Having thoroughly benefited from one myself after being *extremely*
    cynical, I can't recommend them enough.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Jon Ribbens on Mon Feb 3 16:43:39 2025
    On Mon, 03 Feb 2025 16:20:10 +0000, Jon Ribbens wrote:

    On 2025-02-03, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
    On Mon, 03 Feb 2025 13:53:59 +0000, Andy Burns wrote:
    Davey wrote:
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My 2016 car's built-in satnav displays speeds from a database (several
    years out of date)

    I have never managed to use my in-car sat nav for anything. I can't
    begin to describe how useless it is. Why manufacturers continue is
    beyond me

    I would imagine it's because it adds a tick to their feature list, and
    it costs them very little or nothing to provide (the screen and the
    computer behind it would be there anyway), and a small percentage of customers presumably do pay for the exorbitant subscriptions for map
    updates thus providing a bit of extra income.

    Actually I had no idea of the range of things you can subscribe to for
    extra money with cars these days... cruise control with radar is
    apparently £35/month, automatic parking is £19/month, automatic
    headlight dipping is £10/month, etc. You can even pay £99 to play
    "engine sounds" inside the cabin!? Satnav map upates cost £79/year. And
    all of these features I think are actually built into the car, but
    disabled unless you pay the extra ongoing monthly money. I think
    capitalism is broken.

    https://www.bmw.co.uk/en/shop/ls/cp/connected-drive

    That's if you want a BMW.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Norman Wells on Mon Feb 3 16:53:21 2025
    Norman Wells wrote:

    If there are no signs there's nothing you need to obey.  If the
    autonomous vehicle thinks there are restrictions in place, it will abide
    by them whether or not there are signs in the interests of safety.

    I realise that what Tesla describes as "Full self driving" isn't really autonomous driving, but from videos I've watched you can tell them how
    much to exceed the limit by ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to The Todal on Mon Feb 3 16:54:00 2025
    On 03/02/2025 15:32, The Todal wrote:
    On 03/02/2025 14:32, Norman Wells wrote:
    On 03/02/2025 13:41, Davey wrote:

    I have no idea what it runs, although I see lots of Linux terms in
    the software listings. I ask it to do things, and it usually does them.
    I certainly have no intention of replacing it, either the Satnav or the
    car, I have managed so far without my car telling me what the speed
    limit is, as I can read road signs myself.

    Of course.  Sometimes.

    My experience is that my satnav knows rather better than I do what the
    limit is at any one location.  And that's not for the want of trying.

    If you say you're fully aware of the correct speed limit everywhere
    you drive all of the time, I will frankly tend not to believe you.
    Maybe a journey with a satnav that does could be revelatory.

    Maybe every driver should be obliged to take a Speed Awareness Course periodically. To refresh their memory of which limits apply to which
    types of road.

    But it doesn't depend just on the type of road, so that would be
    inaccurate and misleading..

    A half decent satnav will know and tell you, wherever you are.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Mon Feb 3 17:06:32 2025
    On 03/02/2025 15:43, Mark Goodge wrote:
    On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close >>>> to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go along. >>> And so the process by which they actually arrive at particular decisions >>> may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    AI is not part of autonomous vehicle control, and includes none of the
    things Mr Harran identified above. It is simply automation according to
    pre-determined rules.

    But that's precisely the problem. It's impossible to have a predetermined rule for every possible scenario.

    By and large, I think it is. It's why we have the Highway Code for example.

    And dealing with scenarios that you don't
    have a rule for requires intelligence.

    If you have enough rules, it will look like intelligence, which is after
    all just the application of logic, ie rules. But it isn't true
    intelligence unless some surprising new rule seems to have been
    developed and be in play. The times when that may be necessary,
    however, are few and far between.

    And even then it's arguable whether it is for example 'intelligent' or
    actually unintelligent to pass through a failed traffic light stuck on red.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to jethro_uk@hotmailbin.com on Mon Feb 3 16:50:58 2025
    On 2025-02-03, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
    On Mon, 03 Feb 2025 16:20:10 +0000, Jon Ribbens wrote:

    On 2025-02-03, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
    On Mon, 03 Feb 2025 13:53:59 +0000, Andy Burns wrote:
    Davey wrote:
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    My 2016 car's built-in satnav displays speeds from a database (several >>>> years out of date)

    I have never managed to use my in-car sat nav for anything. I can't
    begin to describe how useless it is. Why manufacturers continue is
    beyond me

    I would imagine it's because it adds a tick to their feature list, and
    it costs them very little or nothing to provide (the screen and the
    computer behind it would be there anyway), and a small percentage of
    customers presumably do pay for the exorbitant subscriptions for map
    updates thus providing a bit of extra income.

    Actually I had no idea of the range of things you can subscribe to for
    extra money with cars these days... cruise control with radar is
    apparently £35/month, automatic parking is £19/month, automatic
    headlight dipping is £10/month, etc. You can even pay £99 to play
    "engine sounds" inside the cabin!? Satnav map upates cost £79/year. And
    all of these features I think are actually built into the car, but
    disabled unless you pay the extra ongoing monthly money. I think
    capitalism is broken.

    https://www.bmw.co.uk/en/shop/ls/cp/connected-drive

    That's if you want a BMW.

    Other automobile manufacturers are available.

    https://www.volkswagen.co.uk/en/connected-services/we-connect/all-services.html https://services-store.citroen.co.uk/ ("integrated ChatGPT" ffs!)
    etc

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Roger Hayter on Mon Feb 3 17:19:48 2025
    Roger Hayter wrote:

    On my phone Google maps has totally insufficient map data for a long journey without an Internet connection.

    I have the largest rectangle that Google Maps will allow and which is
    centred on my home, downloaded for offline use; it covers roughly
    Whitehaven to Middlesborough to Hastings to Exeter, consumes 1.5GB of
    storage, you can have multiple areas if you need them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Norman Wells on Mon Feb 3 17:26:44 2025
    On 03/02/2025 14:51, Norman Wells wrote:
    On 03/02/2025 14:39, Martin Brown wrote:
    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those not >>>> uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?

    Just as they do with any legally imposed speed limits.  With a proper
    data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is
    under speed restrictions due to roadworks, crash recovery or resurfacing.

    30mph local roads crossing motorway bridges where derestricted applies.

    Mostly it gets it right but sometimes it has the car on a road
    parallel with the road it is actually on. Can be an issue with
    motorway parallel and close to old road as happens at RAF Dishforth
    with A1(M) and A168. Civilian GPS is only good to ~10m and it
    sometimes guesses wrong.

    Actually:

    "Most consumer-grade GPS devices, when given an unobstructed view of the
    sky, can pinpoint your location with an accuracy of about 3 to 5 meters
    (10 to 16 feet)".

    https://mapscaping.com/how-accurate-is-gps/

    That is with a good constellation of satellites in the sky. For some
    unknown reason (possibly due to the road building work going on) our car
    satnav almost never puts us on the Great North Road at the bend going
    into Team Valley it invariably has us on the old road running parallel.

    It does eventually snap back to the right road but only when the other
    one separates. Earlier satnavs would quite often show us driving in a
    field parallel to the road on bad weather days with thick cloud cover.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Mon Feb 3 17:43:15 2025
    On Mon, 3 Feb 2025 17:06:32 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 15:43, Mark Goodge wrote:
    On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    AI is not part of autonomous vehicle control, and includes none of the
    things Mr Harran identified above. It is simply automation according to >>> pre-determined rules.

    But that's precisely the problem. It's impossible to have a predetermined
    rule for every possible scenario.

    By and large, I think it is. It's why we have the Highway Code for example.

    But even the Highway Code distinguishes between mandatory rules, which you
    are not permitted to break, and recommended rules, which can be disregarded
    if circumstances are such that doing so is both safe and beneficial. So the Highway Code, alone, won't be enough; you also need a set of rules for when
    you can break the non-mandatory rules.

    And dealing with scenarios that you don't
    have a rule for requires intelligence.

    If you have enough rules, it will look like intelligence, which is after
    all just the application of logic, ie rules. But it isn't true
    intelligence unless some surprising new rule seems to have been
    developed and be in play. The times when that may be necessary,
    however, are few and far between.

    And even then it's arguable whether it is for example 'intelligent' or >actually unintelligent to pass through a failed traffic light stuck on red.

    I was thinking of cases where there's no need to break the law, merely a
    need to make decisions based on information which cannot be preset. Let me
    give a couple of examples from my real life experience.

    The first happened when I was a passenger, not the driver. We'd been to a
    gig at the Etihad Stadium in Manchester, and my friend was following his sat-nav back to my house in Stoke-on-Trent so that he could drop me off and then continue home. However, as we arrived at J19 of the M6 from the A565,
    the slip road onto the M6 southbound (which his sat-nav was, correctly,
    telling him to take) was blocked by a police car with flashing blue and red lights. There was no indication of why it was blocked, and no signs giving
    any diversionary route. My assumption was that there had been an accident of some sort, but that was pure speculation.

    After driving round the entire roundabout twice (I have no idea why, maybe
    he was hoping the police car would magically disappear the second time
    around), he eventually asked me what to do. I directed him via a different
    exit onto the A50, and once we were on that heading south I told him to just ignore the sat-nav until it stopped telling him to turn around (we
    eventually joined the M6 at the next junction down).

    Had I not been in the car, I have no idea how he would have got out of that.
    If I was ever in similar circumstances, and didn't have the benefit of a passenger with local kowledge, I think I'd either find somewhere to pull
    over and look at an actual map, or just take what looks like the next best option and keep going in vaguely the right direction until my sat-nav picked
    up a new route.

    How do you think an autonomous vehicle would cope with that, when the route given by its navigation software is blocked and there's no data available to suggest an alternative?

    The second example also involves the police, but this time on a minor road.
    I was driving home late one night from the evening shift at work, and as i approached a junction there was a police car parked at it. It wasn't
    blocking the road, this time, but as I reached the junction a policeman
    stood in front of me and gave the official symbol for "stop". So I stopped,
    and then the policeman walked over to my car and gave the entirely
    unofficial but yet universally understood symbol for "wind your window
    down". Which, again, I did.

    I'd assumed I'd been stopped for a random breath test (this was
    mid-December), although given that I'd been at work rather than a party or
    the pub that didn't worry me. But he asked me where I was going, and, when I told him, replied that "You won't get through that way, there's been an accident and the road's blocked".

    He advised me to go back to the motorway, go one junction further down and reach my town via a different approach road. In reality, he'd given me
    enough information in the course of the conversation to make a reasonable
    guess where the accident was, and therefore that I could get past it by a series of back roads rather than going the long way round. So I did, and got home quicker than I would have done had I taken his well-meaning, but unnecessary, advice.

    How do you think an autonomous vehicle would cope with being stopped by the police, and given routing information by the police which conflicted with
    its navigational data?

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to jethro_uk@hotmailbin.com on Mon Feb 3 17:37:13 2025
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vnqrk3$1sc4r$46@dont-email.me...

    On Mon, 03 Feb 2025 15:32:13 +0000, The Todal wrote:

    Maybe every driver should be obliged to take a Speed Awareness Course
    periodically. To refresh their memory of which limits apply to which
    types of road

    And a general catch up with the highway code.

    If a course is 100 then 10/year is a snip to keep your licence.

    Having thoroughly benefited from one myself after being *extremely*
    cynical, I can't recommend them enough.

    The best tip I learned from mine, was when waiting to turn right, don't
    set your wheels to turn right until actually moving, but stay straight
    ahead. As any shunt from behind would otherwise propel you across the
    oncoming traffic.

    Before the start, the bloke sitting next to me starting complaining about
    how unfair it all was. I explained to him that you just needed to go through the motions, it was only half a day, it saved the points on your licence
    and that most of it was probably a load of old b*llocks anyway.

    A bloke standing at the side walked over whispered something in my ear.
    "I'm sorry I didn't hear that you'll have to speak up". (genuine)
    And again. "I'm sorry I didn't hear that either" (genuine)
    It was only on the third attempt I realised that he was an instructor
    telling me that swearing wasn't allowed.


    bb



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to All on Mon Feb 3 18:33:15 2025
    Jethro_uk wrote:

    The last
    piece of the jigsaw will be average speed cameras over the 20 mph zones.

    In 20mph zones I've seen, I've noticed they barely seem able to keep the
    signs on the lamp posts, let alone any cameras.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to billy bookcase on Mon Feb 3 18:25:30 2025
    On Mon, 03 Feb 2025 17:37:13 +0000, billy bookcase wrote:

    Before the start, the bloke sitting next to me starting complaining
    about how unfair it all was. I explained to him that you just needed to
    go through the motions, it was only half a day, it saved the points on
    your licence and that most of it was probably a load of old b*llocks
    anyway.

    On mine, an attendee started getting vexed when the instructors warned us
    all that there was a general push to reduce speed limits on some roads.

    He started going on that the road he'd been caught on used to be a 60 and
    was now a 40 but it could easily be a 70 and the law was wrong. The
    instructors had to step in and remind him that attending the course was predicated on accepting your fault, and if you refused to acknowledge
    you'd done wrong you were welcome to leave the course and have your
    conviction progressed to tell it to a judge.

    He shut up.

    Everything the instructors warned us about has come true. Lower speed
    limits. Average speed cameras on main roads and 20mph zones. The last
    piece of the jigsaw will be average speed cameras over the 20 mph zones.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Norman Wells on Mon Feb 3 20:08:50 2025
    On 03/02/2025 12:29 PM, Norman Wells wrote:
    On 03/02/2025 11:22, JNugent wrote:
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as >>>>>>>>> those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a >>>>>>>> proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it
    *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything
    dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    The Garmin doesn't have European maps on it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to JNugent on Mon Feb 3 21:05:05 2025
    On 03/02/2025 20:08, JNugent wrote:
    On 03/02/2025 12:29 PM, Norman Wells wrote:
    On 03/02/2025 11:22, JNugent wrote:
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as >>>>>>>>>> those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a >>>>>>>>> proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you
    are?

    No, mine doesn't, and it is in a BMW.

    Good grief!  How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it >>>>> *usually* shows the speed limit. Sometimes the speed limit it shows
    is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the
    current time, even for simple fixed speed limits - let alone anything >>>>> dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite
    Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to.  Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    The Garmin doesn't have European maps on it.

    So what?

    But it can if you pay.

    https://activegps.co.uk/garmin-city-navigator-nt-europe-sd.htm#compatible

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Mon Feb 3 19:05:16 2025
    On 03/02/2025 17:43, Mark Goodge wrote:
    On Mon, 3 Feb 2025 17:06:32 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 15:43, Mark Goodge wrote:
    On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    AI is not part of autonomous vehicle control, and includes none of the >>>> things Mr Harran identified above. It is simply automation according to >>>> pre-determined rules.

    But that's precisely the problem. It's impossible to have a predetermined >>> rule for every possible scenario.

    By and large, I think it is. It's why we have the Highway Code for example.

    But even the Highway Code distinguishes between mandatory rules, which you are not permitted to break, and recommended rules, which can be disregarded if circumstances are such that doing so is both safe and beneficial. So the Highway Code, alone, won't be enough; you also need a set of rules for when you can break the non-mandatory rules.

    If you can rationally decide, I see no reason why a computer shouldn't
    in principle also be able to. You're just following an if-then
    algorithm, which is just what computer programs do.

    And dealing with scenarios that you don't
    have a rule for requires intelligence.

    If you have enough rules, it will look like intelligence, which is after
    all just the application of logic, ie rules. But it isn't true
    intelligence unless some surprising new rule seems to have been
    developed and be in play. The times when that may be necessary,
    however, are few and far between.

    And even then it's arguable whether it is for example 'intelligent' or
    actually unintelligent to pass through a failed traffic light stuck on red.

    I was thinking of cases where there's no need to break the law, merely a
    need to make decisions based on information which cannot be preset. Let me give a couple of examples from my real life experience.

    The first happened when I was a passenger, not the driver. We'd been to a
    gig at the Etihad Stadium in Manchester, and my friend was following his sat-nav back to my house in Stoke-on-Trent so that he could drop me off and then continue home. However, as we arrived at J19 of the M6 from the A565, the slip road onto the M6 southbound (which his sat-nav was, correctly, telling him to take) was blocked by a police car with flashing blue and red lights. There was no indication of why it was blocked, and no signs giving any diversionary route. My assumption was that there had been an accident of some sort, but that was pure speculation.

    After driving round the entire roundabout twice (I have no idea why, maybe
    he was hoping the police car would magically disappear the second time around), he eventually asked me what to do. I directed him via a different exit onto the A50, and once we were on that heading south I told him to just ignore the sat-nav until it stopped telling him to turn around (we
    eventually joined the M6 at the next junction down).

    Had I not been in the car, I have no idea how he would have got out of that. If I was ever in similar circumstances, and didn't have the benefit of a passenger with local kowledge, I think I'd either find somewhere to pull
    over and look at an actual map, or just take what looks like the next best option and keep going in vaguely the right direction until my sat-nav picked up a new route.

    How do you think an autonomous vehicle would cope with that, when the route given by its navigation software is blocked and there's no data available to suggest an alternative?

    Provided it is working on a live real-time mapping system in which road closures are clearly indicated, which is not beyond the wit of man, the
    satnav will simply re-route to avoid them. They're very good at finding alternative routes when you miss a turning for example, so it's not a
    new or insurmountable problem.

    The second example also involves the police, but this time on a minor road.
    I was driving home late one night from the evening shift at work, and as i approached a junction there was a police car parked at it. It wasn't
    blocking the road, this time, but as I reached the junction a policeman
    stood in front of me and gave the official symbol for "stop". So I stopped, and then the policeman walked over to my car and gave the entirely
    unofficial but yet universally understood symbol for "wind your window
    down". Which, again, I did.

    I'd assumed I'd been stopped for a random breath test (this was mid-December), although given that I'd been at work rather than a party or the pub that didn't worry me. But he asked me where I was going, and, when I told him, replied that "You won't get through that way, there's been an accident and the road's blocked".

    He advised me to go back to the motorway, go one junction further down and reach my town via a different approach road. In reality, he'd given me
    enough information in the course of the conversation to make a reasonable guess where the accident was, and therefore that I could get past it by a series of back roads rather than going the long way round. So I did, and got home quicker than I would have done had I taken his well-meaning, but unnecessary, advice.

    How do you think an autonomous vehicle would cope with being stopped by the police, and given routing information by the police which conflicted with
    its navigational data?

    It's the same problem as above. If the vehicle knows from an update to
    the live map that there is a road blockage, as should be a priority when autonomous vehicles become common, it will simply re-route to avoid it.

    Otherwise, but not too fancifully, it could be programmed to recognise
    the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the
    problem using what three words or whatever.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Mon Feb 3 17:12:06 2025
    On 03/02/2025 16:15, Mark Goodge wrote:
    On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:

    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those not >>>> uncommon in the US, which mandate a lower-than-normal speed limit 'on
    school days only'?

    Just as they do with any legally imposed speed limits.  With a proper
    data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is under
    speed restrictions due to roadworks, crash recovery or resurfacing.

    30mph local roads crossing motorway bridges where derestricted applies.

    Junctions with roads that have different limits can cause problems, too.

    Mostly it gets it right but sometimes it has the car on a road parallel
    with the road it is actually on. Can be an issue with motorway parallel
    and close to old road as happens at RAF Dishforth with A1(M) and A168.
    Civilian GPS is only good to ~10m and it sometimes guesses wrong.

    There's also the problem that most minor roads only exist in the database as lines and nodes rather than areas. So what we, on the ground, see as this:

    ---------------------------------------------------------

    -----------------------+ +-----------------------------
    | |
    | |
    | |
    | |

    (that is, a road with a side-road meeting it at a t-junction)

    will be in the database as simply this:

    -------------------------+----------------------------
    |
    |
    |

    That's not a general problem for navigation, or, indeed, speed limits. But
    if the side road has a different limit to the main road, then a car using sat-nav to determine the speed limit as it proceeds along the main road will encounter a node where two different limits intersect. How sat-navs interprets that is not consistent.

    That's observable by some drivers in my town. If you're drving along this road, which has normal urban 30mph limit:

    https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8

    then as you can see in the image, there's a side road with a 20mph limit. Some drivers report that, even if they are proceeding along the main road, and have no intention of turning, ther sat-nav nonetheless tells them to
    slow down to 20mph as they pass the junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved. Satnavs are
    perfectly capable of determining which road you're on and the applicable
    speed limit.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Norman Wells on Mon Feb 3 23:24:27 2025
    On 03/02/2025 09:05 PM, Norman Wells wrote:
    On 03/02/2025 20:08, JNugent wrote:
    On 03/02/2025 12:29 PM, Norman Wells wrote:
    On 03/02/2025 11:22, JNugent wrote:
    On 03/02/2025 11:20 AM, Davey wrote:
    On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
    Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-03, Davey <davey@example.invalid> wrote:
    On Mon, 3 Feb 2025 09:36:45 +0000
    Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as >>>>>>>>>>> those not uncommon in the US, which mandate a
    lower-than-normal speed limit 'on school days only'?

    Just as they do with any legally imposed speed limits. With a >>>>>>>>>> proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you >>>>>>>>>> are?

    No, mine doesn't, and it is in a BMW.

    Good grief! How ancient is it?

    Built in 2017, SatNav map updated last year, soon to be updated
    again. All other software up to date.

    I use Google Maps from my phone in the car via Apple AirPlay, and it >>>>>> *usually* shows the speed limit. Sometimes the speed limit it shows >>>>>> is the one that's actually in force.

    "The car knowing the speed limit" demonstrably does not work at the >>>>>> current time, even for simple fixed speed limits - let alone anything >>>>>> dynamic or complex.


    I have asked BMW the question, but I think I know the answer, despite >>>>> Norman's amazement. I will report their reply.
    How many folks here have cars whose built-in Satnav displays speed
    limits, and when were they built?

    Mine is just too old for that (January 2014).

    I use my iPhone with Apple Maps, so that I can make use of spoken
    commands to Siri.

    I also have a Garmin I bought in the USA, though I only use it if I
    don't want to waste my phone data package.

    You don't have to. Unless you want up-to-the-minute traffic
    information, satnavs works perfectly well on their own for routing
    without any internet connection.

    The Garmin doesn't have European maps on it.

    So what?

    But it can if you pay.

    https://activegps.co.uk/garmin-city-navigator-nt-europe-sd.htm#compatible

    But I don't have to if I use my iPhone (my provider still allows Europe roaming).


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Tue Feb 4 09:44:56 2025
    On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:

    On 03/02/2025 16:15, Mark Goodge wrote:
    On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a proper >>>> data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is
    under speed restrictions due to roadworks, crash recovery or
    resurfacing.

    30mph local roads crossing motorway bridges where derestricted
    applies.

    Junctions with roads that have different limits can cause problems,
    too.

    Mostly it gets it right but sometimes it has the car on a road
    parallel with the road it is actually on. Can be an issue with
    motorway parallel and close to old road as happens at RAF Dishforth
    with A1(M) and A168. Civilian GPS is only good to ~10m and it
    sometimes guesses wrong.

    There's also the problem that most minor roads only exist in the
    database as lines and nodes rather than areas. So what we, on the
    ground, see as this:

    ---------------------------------------------------------

    -----------------------+ +-----------------------------
    | |
    | |
    | |
    | |

    (that is, a road with a side-road meeting it at a t-junction)

    will be in the database as simply this:

    -------------------------+----------------------------
    |
    |
    |

    That's not a general problem for navigation, or, indeed, speed limits.
    But if the side road has a different limit to the main road, then a car
    using sat-nav to determine the speed limit as it proceeds along the
    main road will encounter a node where two different limits intersect.
    How sat-navs interprets that is not consistent.

    That's observable by some drivers in my town. If you're drving along
    this road, which has normal urban 30mph limit:

    https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8

    then as you can see in the image, there's a side road with a 20mph
    limit. Some drivers report that, even if they are proceeding along the
    main road, and have no intention of turning, ther sat-nav nonetheless
    tells them to slow down to 20mph as they pass the junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved. Satnavs are
    perfectly capable of determining which road you're on and the applicable speed limit.

    Are they ?

    I have already said there are cases where roads cross in space (over/
    under) and lacking the ability to determine altitude (plus the required
    history to also determine route) means GPS based speed limit
    determination is flawed.

    One solution would be that a journey is tracked as a series of linked
    points, rather than just an individual spot reading in time. However that introduces a level of complexity that you are not going to get for the
    sum you paid for your sat nav. Dedicated or not.

    A poor mans solution would be given a choice of two possible speed limits
    for a given lat/long (because you haven't specified altitude) then the algorithm should select the one [closest] that matches the last one.
    However in an age where everything is frameworks and apps and AI written, sensible code is a luxury we can't afford.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Tue Feb 4 11:51:21 2025
    On Tue, 04 Feb 2025 10:12:08 +0000, Norman Wells wrote:

    On 04/02/2025 09:44, Jethro_uk wrote:
    On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:

    On 03/02/2025 16:15, Mark Goodge wrote:
    On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those >>>>>>> not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is
    under speed restrictions due to roadworks, crash recovery or
    resurfacing.

    30mph local roads crossing motorway bridges where derestricted
    applies.

    Junctions with roads that have different limits can cause problems,
    too.

    Mostly it gets it right but sometimes it has the car on a road
    parallel with the road it is actually on. Can be an issue with
    motorway parallel and close to old road as happens at RAF Dishforth
    with A1(M) and A168. Civilian GPS is only good to ~10m and it
    sometimes guesses wrong.

    There's also the problem that most minor roads only exist in the
    database as lines and nodes rather than areas. So what we, on the
    ground, see as this:

    ---------------------------------------------------------

    -----------------------+ +-----------------------------
    | |
    | |
    | |
    | |

    (that is, a road with a side-road meeting it at a t-junction)

    will be in the database as simply this:

    -------------------------+----------------------------
    |
    |
    |

    That's not a general problem for navigation, or, indeed, speed
    limits. But if the side road has a different limit to the main road,
    then a car using sat-nav to determine the speed limit as it proceeds
    along the main road will encounter a node where two different limits
    intersect. How sat-navs interprets that is not consistent.

    That's observable by some drivers in my town. If you're drving along
    this road, which has normal urban 30mph limit:

    https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8

    then as you can see in the image, there's a side road with a 20mph
    limit. Some drivers report that, even if they are proceeding along
    the main road, and have no intention of turning, ther sat-nav
    nonetheless tells them to slow down to 20mph as they pass the
    junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved. Satnavs are
    perfectly capable of determining which road you're on and the
    applicable speed limit.

    Are they ?

    I have already said there are cases where roads cross in space (over/
    under) and lacking the ability to determine altitude (plus the required
    history to also determine route) means GPS based speed limit
    determination is flawed.

    One solution would be that a journey is tracked as a series of linked
    points, rather than just an individual spot reading in time. However
    that introduces a level of complexity that you are not going to get for
    the sum you paid for your sat nav. Dedicated or not.

    A poor mans solution would be given a choice of two possible speed
    limits for a given lat/long (because you haven't specified altitude)
    then the algorithm should select the one [closest] that matches the
    last one. However in an age where everything is frameworks and apps and
    AI written,
    sensible code is a luxury we can't afford.

    I think all you're demonstrating is that with the array of sensors such
    as autonomous vehicles have, presumably including one telling it which direction it's facing, it's quite simple to determine which road it's on
    and what the speed limit is going the way it is.

    Indeed.

    However long personal experience has informed me that just because there
    is a solution to a problem, doesn't mean the problem will be solved.
    Especially when non technical issues are introduced.

    However you have very subtly shifted the discussion - and nearly got away
    with it. I was referring to "sat navs" - portable devices that can be
    moved from car to car. Either dedicated devices, or a modern smartphone
    running sat nav software.

    You slid over to autonomous cars. Admittedly bringing the thread back to
    the headline title. But also detracting from my point. Which is that
    generally a sat nav (or phone) isn't smart enough to distinguish the not uncommon situation where roads cross in 3D. A problem which was common
    enough to cause many problems with telematic based insurance and crap
    software.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Tue Feb 4 10:12:08 2025
    On 04/02/2025 09:44, Jethro_uk wrote:
    On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:

    On 03/02/2025 16:15, Mark Goodge wrote:
    On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those >>>>>> not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a proper >>>>> data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is
    under speed restrictions due to roadworks, crash recovery or
    resurfacing.

    30mph local roads crossing motorway bridges where derestricted
    applies.

    Junctions with roads that have different limits can cause problems,
    too.

    Mostly it gets it right but sometimes it has the car on a road
    parallel with the road it is actually on. Can be an issue with
    motorway parallel and close to old road as happens at RAF Dishforth
    with A1(M) and A168. Civilian GPS is only good to ~10m and it
    sometimes guesses wrong.

    There's also the problem that most minor roads only exist in the
    database as lines and nodes rather than areas. So what we, on the
    ground, see as this:

    ---------------------------------------------------------

    -----------------------+ +-----------------------------
    | |
    | |
    | |
    | |

    (that is, a road with a side-road meeting it at a t-junction)

    will be in the database as simply this:

    -------------------------+----------------------------
    |
    |
    |

    That's not a general problem for navigation, or, indeed, speed limits.
    But if the side road has a different limit to the main road, then a car
    using sat-nav to determine the speed limit as it proceeds along the
    main road will encounter a node where two different limits intersect.
    How sat-navs interprets that is not consistent.

    That's observable by some drivers in my town. If you're drving along
    this road, which has normal urban 30mph limit:

    https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8

    then as you can see in the image, there's a side road with a 20mph
    limit. Some drivers report that, even if they are proceeding along the
    main road, and have no intention of turning, ther sat-nav nonetheless
    tells them to slow down to 20mph as they pass the junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved. Satnavs are
    perfectly capable of determining which road you're on and the applicable
    speed limit.

    Are they ?

    I have already said there are cases where roads cross in space (over/
    under) and lacking the ability to determine altitude (plus the required history to also determine route) means GPS based speed limit
    determination is flawed.

    One solution would be that a journey is tracked as a series of linked
    points, rather than just an individual spot reading in time. However that introduces a level of complexity that you are not going to get for the
    sum you paid for your sat nav. Dedicated or not.

    A poor mans solution would be given a choice of two possible speed limits
    for a given lat/long (because you haven't specified altitude) then the algorithm should select the one [closest] that matches the last one.
    However in an age where everything is frameworks and apps and AI written, sensible code is a luxury we can't afford.

    I think all you're demonstrating is that with the array of sensors such
    as autonomous vehicles have, presumably including one telling it which direction it's facing, it's quite simple to determine which road it's on
    and what the speed limit is going the way it is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 12:19:07 2025
    On Mon, 3 Feb 2025 17:12:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 16:15, Mark Goodge wrote:

    then as you can see in the image, there's a side road with a 20mph limit.
    Some drivers report that, even if they are proceeding along the main road, >> and have no intention of turning, ther sat-nav nonetheless tells them to
    slow down to 20mph as they pass the junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved.

    Ah, so your experience determines what you believe exists. That, Norman, is precisely why your opinion has such little practical value.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 12:16:04 2025
    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 17:43, Mark Goodge wrote:

    How do you think an autonomous vehicle would cope with that, when the route >> given by its navigation software is blocked and there's no data available to >> suggest an alternative?

    Provided it is working on a live real-time mapping system in which road >closures are clearly indicated, which is not beyond the wit of man, the >satnav will simply re-route to avoid them.

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to navigation systems in real time.

    How do you think an autonomous vehicle would cope with being stopped by the >> police, and given routing information by the police which conflicted with
    its navigational data?

    It's the same problem as above. If the vehicle knows from an update to
    the live map that there is a road blockage, as should be a priority when >autonomous vehicles become common, it will simply re-route to avoid it.

    But, again, that's the point. This information is not available in real time
    to the navigation systems. A human can cope with route-finding if necessary without resorting to them.

    Otherwise, but not too fancifully, it could be programmed to recognise
    the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the >problem using what three words or whatever.

    Well, yes. But that's a lot more complex than merely following a route. It requires at least reasonable voice recognition, for example, and then being able to follow the instructions given.

    I don't doubt that, given sufficiently advanced AI, this will be a solvable problem. But it's not a problem you can solve just with a set of preset
    rules. So until autonomous vehicles have AI which is as robust at problem solving as human intelligence, it isn't really autonomous.

    Going on about live routing data is a red herring. As well as ongoing situations where the data simply isn't available to the system (eg, where
    the police decide to block a road), you also have to be able to cope with situations where the data exists but isn't available to the device (eg, in mobile notspots). To be truly autonomous, a self-driving vehicle has to be
    able to operate solely on the basis of road signs, markings, offline maps
    where available and, if necessary, verbal instructions from a human. Just
    like humans can.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to jethro_uk@hotmailbin.com on Tue Feb 4 12:32:14 2025
    On Tue, 4 Feb 2025 11:51:21 -0000 (UTC), Jethro_uk
    <jethro_uk@hotmailbin.com> wrote:

    You slid over to autonomous cars. Admittedly bringing the thread back to
    the headline title. But also detracting from my point. Which is that >generally a sat nav (or phone) isn't smart enough to distinguish the not >uncommon situation where roads cross in 3D. A problem which was common
    enough to cause many problems with telematic based insurance and crap >software.

    And that also takes us back to my point, which is that a truly autonomous vehicle has to be capable of deciding when to disregard the sat-nav, and be able to operate effectively in situations where the sat-nav data is missing
    or incomplete.

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI, and that all of these problems have already been solved. Now he's merely saying that, in his
    opinion, they can be solved and that solving them is trivial.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to Norman Wells on Tue Feb 4 12:19:39 2025
    On 03/02/2025 19:05, Norman Wells wrote:
    On 03/02/2025 17:43, Mark Goodge wrote:

    I was thinking of cases where there's no need to break the law, merely a
    need to make decisions based on information which cannot be preset.
    Let me
    give a couple of examples from my real life experience.

    The first happened when I was a passenger, not the driver. We'd been to a
    gig at the Etihad Stadium in Manchester, and my friend was following his
    sat-nav back to my house in Stoke-on-Trent so that he could drop me
    off and
    then continue home. However, as we arrived at J19 of the M6 from the
    A565,
    the slip road onto the M6 southbound (which his sat-nav was, correctly,
    telling him to take) was blocked by a police car with flashing blue
    and red
    lights. There was no indication of why it was blocked, and no signs
    giving
    any diversionary route. My assumption was that there had been an
    accident of
    some sort, but that was pure speculation.

    After driving round the entire roundabout twice (I have no idea why,
    maybe
    he was hoping the police car would magically disappear the second time
    around), he eventually asked me what to do. I directed him via a
    different
    exit onto the A50, and once we were on that heading south I told him
    to just
    ignore the sat-nav until it stopped telling him to turn around (we
    eventually joined the M6 at the next junction down).

    Had I not been in the car, I have no idea how he would have got out of
    that.
    If I was ever in similar circumstances, and didn't have the benefit of a
    passenger with local kowledge, I think I'd either find somewhere to pull
    over and look at an actual map, or just take what looks like the next
    best
    option and keep going in vaguely the right direction until my sat-nav
    picked
    up a new route.

    How do you think an autonomous vehicle would cope with that, when the
    route
    given by its navigation software is blocked and there's no data
    available to
    suggest an alternative?

    Provided it is working on a live real-time mapping system in which road closures are clearly indicated, which is not beyond the wit of man, the satnav will simply re-route to avoid them.  They're very good at finding alternative routes when you miss a turning for example, so it's not a
    new or insurmountable problem.

    Won't the satnav keep telling you to take the original route? Is there a
    way to tell it that there is something in the way? The same problem
    would occur in the case of an autonomous vehicle.

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Mark Goodge on Tue Feb 4 14:15:51 2025
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and
    *could* quickly determine that a sudden influx of changed journeys is indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    The reason I am doubtful is that this is the same product which will
    happily send you down a labyrinthine route of side roads and alleys in
    the name of it's preferred "green" routing. Doing more damage to the environment than it could ever prevent.

    (sat Nav re-routing is another exemplar of machines being incapable of intelligence :) )

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Max Demian on Tue Feb 4 14:19:03 2025
    On Tue, 04 Feb 2025 12:19:39 +0000, Max Demian wrote:

    On 03/02/2025 19:05, Norman Wells wrote:
    On 03/02/2025 17:43, Mark Goodge wrote:

    I was thinking of cases where there's no need to break the law, merely
    a need to make decisions based on information which cannot be preset.
    Let me give a couple of examples from my real life experience.

    The first happened when I was a passenger, not the driver. We'd been
    to a gig at the Etihad Stadium in Manchester, and my friend was
    following his sat-nav back to my house in Stoke-on-Trent so that he
    could drop me off and then continue home. However, as we arrived at
    J19 of the M6 from the A565,
    the slip road onto the M6 southbound (which his sat-nav was,
    correctly, telling him to take) was blocked by a police car with
    flashing blue and red lights. There was no indication of why it was
    blocked, and no signs giving any diversionary route. My assumption was
    that there had been an accident of some sort, but that was pure
    speculation.

    After driving round the entire roundabout twice (I have no idea why,
    maybe he was hoping the police car would magically disappear the
    second time around), he eventually asked me what to do. I directed him
    via a different exit onto the A50, and once we were on that heading
    south I told him to just ignore the sat-nav until it stopped telling
    him to turn around (we eventually joined the M6 at the next junction
    down).

    Had I not been in the car, I have no idea how he would have got out of
    that.
    If I was ever in similar circumstances, and didn't have the benefit of
    a passenger with local kowledge, I think I'd either find somewhere to
    pull over and look at an actual map, or just take what looks like the
    next best option and keep going in vaguely the right direction until
    my sat-nav picked up a new route.

    How do you think an autonomous vehicle would cope with that, when the
    route given by its navigation software is blocked and there's no data
    available to suggest an alternative?

    Provided it is working on a live real-time mapping system in which road
    closures are clearly indicated, which is not beyond the wit of man, the
    satnav will simply re-route to avoid them.  They're very good at
    finding alternative routes when you miss a turning for example, so it's
    not a new or insurmountable problem.

    Won't the satnav keep telling you to take the original route?

    Oh yes.

    A few years back, using Google Maps (with live data) I saw the motorway
    exit I needed was tailing back as there had been an accident on the road
    it joined. This was on the roadside signs.

    I turned off the junction before and GM spent about 10 miles desperately
    trying to guide me back onto the blocked route.

    It was then I invented in my mind a button (now voice command) that would instruct GM to abandon the current route and go third-best.

    As usual, Google did reply to me saying that - like speed limits on their
    maps - it was an interesting idea.


    Is there a
    way to tell it that there is something in the way? The same problem
    would occur in the case of an autonomous vehicle.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Simon Parker on Tue Feb 4 15:04:08 2025
    On Tue, 04 Feb 2025 14:49:23 +0000, Simon Parker wrote:

    On 03/02/2025 09:36, Norman Wells wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000 Norman Wells <hex@unseen.ac.am>
    wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a proper >>>> data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief!  How ancient is it?

    I will investigate in case it is a feature that is available but not
    activated. But you are assuming that all States and small 'burgs' in
    the US follow the same speed limit reporting processes that we do. Do
    they?

    I have no idea.  But this is a UK group and all the necessary speed
    limit information here is certainly available and in current use.

    It's trivial programming to ensure that limits are not exceeded by any
    autonomous vehicle, which is certainly not the case with human drivers,
    even those who try to obey them.

    Anecdata: A young(ish) family member had a black box fitted to their new
    car as a condition of their insurance policy. It was a physical black
    box connected directly to the vehicle electronics and mounted within the engine bay by a professional appointed by the insurers. Said family
    member has since changed their vehicle and the new vehicle didn't need a physical black box fitting but instead required an app installing on
    their mobile phone which must be with them in their car when they're
    driving as a condition of their insurance.

    Both systems have resulted in letters from the insurance company
    detailing incidents where the systems indicated that they were driving
    at significantly above the speed limit, which triggers penalties in the policy up to and including cancellation of the policy.

    Every single incident reported has been down to GPS issues where the car
    is on a dual carriageway, NSL road or similar, yet the black box
    detected the vehicle as being on a 30mph road adjacent to its current position but there is no reason to believe that the reverse situation
    cannot also exist, without triggering a letter from the insurance
    company.

    As I noted, I worked on such systems. There was no cheap^H^H^H^H^ easy
    way to fix the problem.

    Please detail how your autonomous vehicle has overcome these problems evidently inherent in the current navigational systems in use, (both stand-alone and embedded).

    Worth noting that speed limits are generally not related to the speed
    possible. Which is an additional wrinkle.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fredxx@21:1/5 to All on Tue Feb 4 14:49:28 2025
    On 04/02/2025 14:15, Jethro_uk wrote:
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to
    navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and *could* quickly determine that a sudden influx of changed journeys is indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    The reason I am doubtful is that this is the same product which will
    happily send you down a labyrinthine route of side roads and alleys in
    the name of it's preferred "green" routing. Doing more damage to the environment than it could ever prevent.

    I don't use, or even come across 'green routing' but what I can say is
    when I have ignored a deviation from the standard route it doesn't
    normally work out well in term of traffic, roadworks and jams.

    (sat Nav re-routing is another exemplar of machines being incapable of intelligence :) )

    I might agree, but that is when you use your own intelligence to work
    out if the new route is a sensible one. The satnav should be treated
    like a guide or tool you have autonomy over.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 4 15:05:02 2025
    On 04/02/2025 12:32, Mark Goodge wrote:
    On Tue, 4 Feb 2025 11:51:21 -0000 (UTC), Jethro_uk
    <jethro_uk@hotmailbin.com> wrote:

    You slid over to autonomous cars. Admittedly bringing the thread back to
    the headline title. But also detracting from my point. Which is that
    generally a sat nav (or phone) isn't smart enough to distinguish the not
    uncommon situation where roads cross in 3D. A problem which was common
    enough to cause many problems with telematic based insurance and crap
    software.

    And that also takes us back to my point, which is that a truly autonomous vehicle has to be capable of deciding when to disregard the sat-nav, and be able to operate effectively in situations where the sat-nav data is missing or incomplete.

    That's just a matter of programming to do what a sensible person would
    do in the same circumstances.

    Norman's responses to your comments also contradict his own earlier assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.

    and that all of these
    problems have already been solved. Now he's merely saying that, in his opinion, they can be solved and that solving them is trivial.

    I'm sure you're reinventing the wheel bringing up what you think are outstanding problems without knowing what has been solved already. The
    answers to them, however, are so trivially easy that it is inconceivable
    that they haven't all been addressed years ago.

    Where do you think the estimated $75 billion spent by the automotive
    industry on the development of autonomous vehicles has gone? And do you
    really think they haven't already thought of all the issues you've raised?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Tue Feb 4 14:55:38 2025
    On 04/02/2025 11:51, Jethro_uk wrote:
    On Tue, 04 Feb 2025 10:12:08 +0000, Norman Wells wrote:

    On 04/02/2025 09:44, Jethro_uk wrote:
    On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:

    On 03/02/2025 16:15, Mark Goodge wrote:
    On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
    <'''newspam'''@nonad.co.uk> wrote:

    On 03/02/2025 08:09, Norman Wells wrote:
    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those >>>>>>>> not uncommon in the US, which mandate a lower-than-normal speed >>>>>>>> limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a
    proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes, but it doesn't always get it right in tricky edge cases:

    Asymmetric speed limits on dual carriageways when one direction is >>>>>> under speed restrictions due to roadworks, crash recovery or
    resurfacing.

    30mph local roads crossing motorway bridges where derestricted
    applies.

    Junctions with roads that have different limits can cause problems,
    too.

    Mostly it gets it right but sometimes it has the car on a road
    parallel with the road it is actually on. Can be an issue with
    motorway parallel and close to old road as happens at RAF Dishforth >>>>>> with A1(M) and A168. Civilian GPS is only good to ~10m and it
    sometimes guesses wrong.

    There's also the problem that most minor roads only exist in the
    database as lines and nodes rather than areas. So what we, on the
    ground, see as this:

    ---------------------------------------------------------

    -----------------------+ +-----------------------------
    | |
    | |
    | |
    | |

    (that is, a road with a side-road meeting it at a t-junction)

    will be in the database as simply this:

    -------------------------+----------------------------
    |
    |
    |

    That's not a general problem for navigation, or, indeed, speed
    limits. But if the side road has a different limit to the main road, >>>>> then a car using sat-nav to determine the speed limit as it proceeds >>>>> along the main road will encounter a node where two different limits >>>>> intersect. How sat-navs interprets that is not consistent.

    That's observable by some drivers in my town. If you're drving along >>>>> this road, which has normal urban 30mph limit:

    https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8

    then as you can see in the image, there's a side road with a 20mph
    limit. Some drivers report that, even if they are proceeding along
    the main road, and have no intention of turning, ther sat-nav
    nonetheless tells them to slow down to 20mph as they pass the
    junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved. Satnavs are
    perfectly capable of determining which road you're on and the
    applicable speed limit.

    Are they ?

    I have already said there are cases where roads cross in space (over/
    under) and lacking the ability to determine altitude (plus the required
    history to also determine route) means GPS based speed limit
    determination is flawed.

    One solution would be that a journey is tracked as a series of linked
    points, rather than just an individual spot reading in time. However
    that introduces a level of complexity that you are not going to get for
    the sum you paid for your sat nav. Dedicated or not.

    A poor mans solution would be given a choice of two possible speed
    limits for a given lat/long (because you haven't specified altitude)
    then the algorithm should select the one [closest] that matches the
    last one. However in an age where everything is frameworks and apps and
    AI written,
    sensible code is a luxury we can't afford.

    I think all you're demonstrating is that with the array of sensors such
    as autonomous vehicles have, presumably including one telling it which
    direction it's facing, it's quite simple to determine which road it's on
    and what the speed limit is going the way it is.

    Indeed.

    However long personal experience has informed me that just because there
    is a solution to a problem, doesn't mean the problem will be solved. Especially when non technical issues are introduced.

    Perhaps there hasn't been the necessary incentive so far. Temporary
    blips are insignificant when you know they're just going to be ignored
    anyway. They become more important when vehicles are autonomous, but
    they can either be programmed out or made not to exist by simply
    assuming the speed limit on the minor road ends, say, 15 feet before the junction, which would make no practical difference to anyone.

    It's not my job to say how it should be done, but merely to point out
    that there are simple solutions.

    However you have very subtly shifted the discussion - and nearly got away with it. I was referring to "sat navs" - portable devices that can be
    moved from car to car. Either dedicated devices, or a modern smartphone running sat nav software.

    You slid over to autonomous cars. Admittedly bringing the thread back to
    the headline title. But also detracting from my point. Which is that generally a sat nav (or phone) isn't smart enough to distinguish the not uncommon situation where roads cross in 3D. A problem which was common
    enough to cause many problems with telematic based insurance and crap software.

    If you're talking just about manually driven cars, you'll do what anyone
    would, and just ignore them for the transient period they manifest
    themselves.

    It's not my brief to support crap software.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 4 14:42:52 2025
    On 04/02/2025 12:19, Mark Goodge wrote:
    On Mon, 3 Feb 2025 17:12:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 03/02/2025 16:15, Mark Goodge wrote:

    then as you can see in the image, there's a side road with a 20mph limit. >>> Some drivers report that, even if they are proceeding along the main road, >>> and have no intention of turning, ther sat-nav nonetheless tells them to >>> slow down to 20mph as they pass the junction.

    If what you say is true, and it's not something I've experienced so I
    have my doubts, it's a minor problem easily solved.

    Ah, so your experience determines what you believe exists. That, Norman, is precisely why your opinion has such little practical value.

    I said 'if what you say is true'. It's still a minor problem easily
    solved by accurate sensors of speed and direction. Satnav is only one
    such input with an autonomous vehicle. Brief anomalies can be
    programmed out, just as human drivers can ignore them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fredxx@21:1/5 to Simon Parker on Tue Feb 4 15:00:44 2025
    On 04/02/2025 14:49, Simon Parker wrote:
    On 03/02/2025 09:36, Norman Wells wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a proper >>>> data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief!  How ancient is it?

    I will investigate in case it
    is a feature that is available but not activated. But you are assuming
    that all States and small 'burgs' in the US follow the same speed limit
    reporting processes that we do. Do they?

    I have no idea.  But this is a UK group and all the necessary speed
    limit information here is certainly available and in current use.

    It's trivial programming to ensure that limits are not exceeded by any
    autonomous vehicle, which is certainly not the case with human
    drivers, even those who try to obey them.

    Anecdata: A young(ish) family member had a black box fitted to their new
    car as a condition of their insurance policy.  It was a physical black
    box connected directly to the vehicle electronics and mounted within the engine bay by a professional appointed by the insurers.  Said family
    member has since changed their vehicle and the new vehicle didn't need a physical black box fitting but instead required an app installing on
    their mobile phone which must be with them in their car when they're
    driving as a condition of their insurance.

    Both systems have resulted in letters from the insurance company
    detailing incidents where the systems indicated that they were driving
    at significantly above the speed limit, which triggers penalties in the policy up to and including cancellation of the policy.

    Every single incident reported has been down to GPS issues where the car
    is on a dual carriageway, NSL road or similar, yet the black box
    detected the vehicle as being on a 30mph road adjacent to its current position but there is no reason to believe that the reverse situation
    cannot also exist, without triggering a letter from the insurance company.

    Please detail how your autonomous vehicle has overcome these problems evidently inherent in the current navigational systems in use, (both stand-alone and embedded).

    Your example involves a simple software application where the only input
    is GPS data [1]. In reality there would be more inputs and more data to
    discern where the vehicle actually is. It would require many more orders
    of magnitude of software engineers hours and extensive testing.

    [1] jammers are commonplace too, where an employee or other driver
    doesn't want anyone to know where they actually are.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to All on Tue Feb 4 14:53:34 2025
    On 04/02/2025 14:15, Jethro_uk wrote:
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to
    navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and *could* quickly determine that a sudden influx of changed journeys is indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    It spots that a stretch of road has developed a very slow speed, and so prefers a different route based on travel times, without knowing why. Nor knowing whether the alternative roads can cope with the volume of traffic
    now moving along an unfamiliar route.

    Google Maps also shows forked arrows at bottom right to allow me to chose
    a different route at any time (data connection permitting).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Fredxx on Tue Feb 4 15:30:06 2025
    On Tue, 04 Feb 2025 14:49:28 +0000, Fredxx wrote:

    On 04/02/2025 14:15, Jethro_uk wrote:
    [quoted text muted]

    I don't use, or even come across 'green routing'

    Last time I used GM, it only gave you one route - the ability to chose "shortest" and "quickest" alongside "most economical had gone. When I
    briefly researched the matter, I read that Google were only supplying the "green" route now, for the planet. Not sure which planet though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Nick Finnigan on Tue Feb 4 15:35:27 2025
    On Tue, 04 Feb 2025 14:53:34 +0000, Nick Finnigan wrote:

    On 04/02/2025 14:15, Jethro_uk wrote:
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide
    to block a road and turn people away, that information isn't available
    to navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and
    *could* quickly determine that a sudden influx of changed journeys is
    indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    It spots that a stretch of road has developed a very slow speed, and
    so
    prefers a different route based on travel times, without knowing why.
    Nor knowing whether the alternative roads can cope with the volume of
    traffic now moving along an unfamiliar route.

    Odd. 25 years ago I worked on routing software that was aware of road characteristics. But not surprising as Googles idea of innovation and
    mine rarely intersect.


    Google Maps also shows forked arrows at bottom right to allow me to
    chose
    a different route at any time (data connection permitting).

    The arrows aren't always there IME (but the caveat with anything from the chocolate factory is it can change dramatically without notice). I only
    saw them when I was in a position to select one of the routes. I found
    that invariably they were suggesting a "quicker" route that would have
    taken me out of my way and not uncommonly down a side road.

    Countless trips with Uber drivers locally has shown me how pisspoor the
    routing is. Forget speed humps and chicanes. I would suggest the biggest
    cause of urban emission pollution is sat navigation software directing
    cars down tiny side roads to shave a millisecond off the journey.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to All on Tue Feb 4 15:46:11 2025
    Jethro_uk wrote:

    Google Maps also shows forked arrows at bottom right to allow me to
    chose
    a different route at any time (data connection permitting).
    The arrows aren't always there IME (but the caveat with anything from the
    chocolate factory is it can change dramatically without notice). I only
    saw them when I was in a position to select one of the routes. I found
    that invariably they were suggesting a "quicker" route that would have
    taken me out of my way and not uncommonly down a side road.

    I just tried it on a route I know well, it offered me a choice of three
    routes, the default one was the one with the green leaf, the other two
    were longer distance or took more time, coincidentally its preferred one
    is generally *my* preferred one, but there have been days when it picks
    a different route.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Simon Parker on Tue Feb 4 15:41:09 2025
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0enkeFqoikU2@mid.individual.net...
    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are exmples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go along. >> And so the process by which they actually arrive at particular decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    I give you the details of Lufthansa LH1829 below. Numerous other examples are
    available, (some of which may include the grounding of the entire Boeing 737 MAX
    fleet).

    Whilst some knowledge of aviation terms will help in reading what follows, it is by no
    means mandatory.

    Please detail, as simply as possible, the likely outcome for flight LH1829 had the
    commander not been present and had he not had access to flight technicians via Telex.

    Then please detail which parties will be fulfilling the roles of the commander and the
    flight technicians in fully automated vehicles.

    I fail to see any possible relevance in any of of your questions.

    Unless you can detail where I have ever claimed that AI has any possible
    role to play in either "autonomous" vehicles, or in autopilots.

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than
    the optimum outcome either permissible, or desirable, in real world
    situations,

    As it happens Norman is entirely correct in claiming AI has no role to play
    in autonomous vehicles. But entirely wrong in imagining all that all potential real life situations and problems which might arise can be anticipated, never mind modelled in experimental set-ups Such as would allow autonomous vehicles safely onto our roads


    bb



    Regards

    S.P.


    Lufthansa flight LH1829 took off from Bilbao for scheduled services at 0748 hrs with
    109 persons on board. The co-pilot was Pilot Flying (PF) and the commander monitored
    the conduct of the flight. It rained during take-off. The airplane climbed through
    the cloud tops at approximately FL 200.

    From 0756 hrs on, the FDR recorded a constant angle of attack value (AOA) of 4.2 for
    the AOA sensor #1.

    From 0757 hrs on, the recorded value for sensor #2 remained constant (4.6) also. At
    0803 hrs the crew receive the clearance to fly directly to way point LATEK and the
    airplane turned to the new heading.

    The commander of Lufthansa flight LH1829 stated he noticed the indications of the Alpha
    Protection Band in the Primary Flight Display (PFD) increasing unusually rapidly.

    The autopilot was in climb mode.
    The co-pilot of Lufthansa flight LH1829 stated he then reduced the rate of climb from
    about 800 ft/min to 500 ft/min using the Vertical Speed button of the autopilot giving
    the airplane a chance to accelerate.

    Shortly afterwards the co-pilot disengaged the autopilot and gave a short sidestick
    control input in order to reduce the pitch angle of the airplane. The nose of the
    airplane dropped further and the co-pilot counteracted this movement with the sidestick. The co-pilot stated the airplane did not responded as expected and continued
    to descend.

    Approximately 45 seconds after the autopilot was disengaged the co-pilot of Lufthansa
    flight LH1829 verbalised the airplane's limited reaction to his control inputs and
    therefore the commander took over the controls. At that time it had a rate of descent
    of about 4,000 ft/min and a pitch angle of -3.5. The commander gave maximum backward
    sidestick input (pull) and the airplane's nose began to rise, the rate of descent
    decreased, and the airplane entered horizontal flight attitude once more.

    The flight was continued in FL 270. The commander gave continuously backward sidestick
    input (pull) of more than 50% of the possible input; altitude and attitude remained
    constant. The autopilots could not be engaged again and the trim was limited in nose up
    movements. The crew stated they made sure that the indicated airspeed was correct by
    checking the given values for pitch and power in the Quick Reference Handbook (QRH).
    Searching the QRH revealed that no procedure was described which would suit this
    situation.

    The Flight Augmentation Computers (FAC) 1 and 2 were reset by the crew one after the
    other. The FDR showed that this did not change the situation.
    At 0811:12 hrs the Aircraft Communications Addressing and Reporting System (ACARS) sent
    an automatically generated message to the technicians of the operator's maintenance
    organisation. This message included the AOA values for the three Air Data Inertial
    Reference Units (ADIRUs).

    At 0824 hrs the crew sent a telex to the technicians with a short description of the
    situation and the question whether an in-flight simultaneous reset of the FACs was
    possible. Three minutes later this was answered in the affirmative with the note that
    then the airplane would be in Alternate Law.

    At 0831 hrs the crew wrote that a constant pull on the sidestick was necessary, that
    the trim was not available and that the AlphaProt Band moved very rapidly. In addition,
    the Centralised Fault Display System (CFDS) showed the failure message "PH6 AOA3". The
    technicians suggested in a reply telex to retrieve the angle of attack values via the
    Multifunctional Control and Display Unit (MCDU) and, if necessary, to turn off the Air
    Data Reference unit (ADR) 3.

    At 0836 hrs the crew turned off ADR 3. This did not change the situation and therefore
    the unit was turned on again. At 0852 hrs the crew received the following message:
    ".NACH SICHTUNG DER AOA WERTE, KOENNTE ES SEIN, DASS AOA1 UND AOA2 EINGEFROREN SIND UND
    EINEN ZU HOHEN ANSTROEMWINKEL MELDEN (After reviewing the AOA values it is possible
    that AOA1 and AOA2 are frozen and report a too high angle of attack). FALLS DAS PROBLEM
    WEITERHIN BESTEHT, ADR 1 UND ADR 2 AUSSCHALTEN, WAS ABER ZU ALTENATE LAW FUEHRT" (If
    the problem continues turn off ADR 1 and ADR 2 but that leads to alternate law). And at
    0857 hrs: ".VIELLEICHT REICHT ES AUCH NUR DEN ADR 2 AUSZUSCHALTEN (Maybe it suffices to
    just turn off ADR 2). [.]"

    At 0859 hrs the crew turned off ADR 2.

    The LUFTHANSA flight LH1829 control system changed to Alternate Law. The continuous
    sidestick input was no longer necessary in order to keep the airplane in horizontal
    flight attitude.

    Autotrim and autopilot functioned again.

    The crew stated they decided to fly to the destination aerodrome and use the remaining
    flight time of about one hour to become familiar with the systems' functions and
    prepare the landing.

    At 0949 hrs the airplane landed safely at the destination aerodrome.

    (From https://www.aviation-accidents.net/lufthansa-airbus-a321-231-d-aidp-flight-lh1829/)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Andy Burns on Tue Feb 4 16:04:44 2025
    On Tue, 04 Feb 2025 15:46:11 +0000, Andy Burns wrote:

    Jethro_uk wrote:

    Google Maps also shows forked arrows at bottom right to allow me to
    chose
    a different route at any time (data connection permitting).
    The arrows aren't always there IME (but the caveat with anything from
    the
    chocolate factory is it can change dramatically without notice). I only
    saw them when I was in a position to select one of the routes. I found
    that invariably they were suggesting a "quicker" route that would have
    taken me out of my way and not uncommonly down a side road.

    I just tried it on a route I know well, it offered me a choice of three routes, the default one was the one with the green leaf, the other two
    were longer distance or took more time, coincidentally its preferred one
    is generally *my* preferred one, but there have been days when it picks
    a different route.

    As I noted, anything Googly can change without notice. There was a sneeze
    of time when it would display speed limits without needing a route
    programmed (same as Waze "driving mode"). Then it stopped.

    As a testament to that instability, I won a bet with a US friend who was visiting the UK as it turned out the US "version" did provide speed
    limits without a route. They were quite perplexed that mine didn't.

    All moot anyway, as last time I looked that feature had gone.

    Whilst Google is (very) useful, I would never rely on it for anything
    serious. And that is across the board.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Tue Feb 4 16:06:25 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0eokvFtck2U4@mid.individual.net...
    On 04/02/2025 12:32, Mark Goodge wrote:
    On Tue, 4 Feb 2025 11:51:21 -0000 (UTC), Jethro_uk
    <jethro_uk@hotmailbin.com> wrote:

    You slid over to autonomous cars. Admittedly bringing the thread back to >>> the headline title. But also detracting from my point. Which is that
    generally a sat nav (or phone) isn't smart enough to distinguish the not >>> uncommon situation where roads cross in 3D. A problem which was common
    enough to cause many problems with telematic based insurance and crap
    software.

    And that also takes us back to my point, which is that a truly autonomous
    vehicle has to be capable of deciding when to disregard the sat-nav, and be >> able to operate effectively in situations where the sat-nav data is missing >> or incomplete.

    That's just a matter of programming to do what a sensible person would do in the same
    circumstances.

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.

    and that all of these
    problems have already been solved. Now he's merely saying that, in his
    opinion, they can be solved and that solving them is trivial.

    I'm sure you're reinventing the wheel bringing up what you think are outstanding
    problems without knowing what has been solved already. The answers to them, however,
    are so trivially easy that it is inconceivable that they haven't all been addressed
    years ago.

    Where do you think the estimated $75 billion spent by the automotive industry on the
    development of autonomous vehicles has gone? And do you really think they haven't
    already thought of all the issues you've raised?

    Right. You're an autonomous car. You've got your two passengers in the
    back ( you know humans, remember them ? ) a husband and wife. Originally
    they told you to go to Tesco, But then half way there the wife said she'd changed her mind and wanted to go to Asda, So she said "go to Asda". But
    her husband objected and said he wanted to go to "Morrisons" instead. So
    you're driving along supposedly headed for Tesco but with shouts of "Asda"
    "No Morrisons",."No Asda", "Morrisons" coming out of the back for the
    next ten minutes. So where do you go, and why ?


    bb










    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to All on Tue Feb 4 16:18:54 2025
    Jethro_uk wrote:

    There was a sneeze
    of time when it would display speed limits without needing a route
    programmed (same as Waze "driving mode"). Then it stopped.

    GM also called it "driving mode", I'm fairly sure that when using GM in
    the car it still displays the current speed limit, I'll try to remember
    to check when I next venture out ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Andy Burns on Tue Feb 4 17:00:01 2025
    On Tue, 04 Feb 2025 16:18:54 +0000, Andy Burns wrote:

    Jethro_uk wrote:

    There was a sneeze of time when it would display speed limits without
    needing a route programmed (same as Waze "driving mode"). Then it
    stopped.

    GM also called it "driving mode", I'm fairly sure that when using GM in
    the car it still displays the current speed limit, I'll try to remember
    to check when I next venture out ...

    In which case they have restored it (again).

    Personally I found it incredibly useful - even in areas I knew. But also
    for journeys that didn't need the sat nav.

    Funnily enough it became less of a thing when I started using Google
    Assistant on my phone, as just asking to set a route is so trivial.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to usenet@listmail.good-stuff.co.uk on Tue Feb 4 16:39:02 2025
    On 4 Feb 2025 at 12:16:04 GMT, "Mark Goodge"
    <usenet@listmail.good-stuff.co.uk> wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 17:43, Mark Goodge wrote:

    How do you think an autonomous vehicle would cope with that, when the route >>> given by its navigation software is blocked and there's no data available to
    suggest an alternative?

    Provided it is working on a live real-time mapping system in which road
    closures are clearly indicated, which is not beyond the wit of man, the
    satnav will simply re-route to avoid them.

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to navigation systems in real time.

    How do you think an autonomous vehicle would cope with being stopped by the >>> police, and given routing information by the police which conflicted with >>> its navigational data?

    It's the same problem as above. If the vehicle knows from an update to
    the live map that there is a road blockage, as should be a priority when
    autonomous vehicles become common, it will simply re-route to avoid it.

    But, again, that's the point. This information is not available in real time to the navigation systems. A human can cope with route-finding if necessary without resorting to them.

    Otherwise, but not too fancifully, it could be programmed to recognise
    the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the
    problem using what three words or whatever.

    Well, yes. But that's a lot more complex than merely following a route. It requires at least reasonable voice recognition, for example, and then being able to follow the instructions given.

    I don't doubt that, given sufficiently advanced AI, this will be a solvable problem. But it's not a problem you can solve just with a set of preset rules. So until autonomous vehicles have AI which is as robust at problem solving as human intelligence, it isn't really autonomous.

    Going on about live routing data is a red herring. As well as ongoing situations where the data simply isn't available to the system (eg, where
    the police decide to block a road), you also have to be able to cope with situations where the data exists but isn't available to the device (eg, in mobile notspots). To be truly autonomous, a self-driving vehicle has to be able to operate solely on the basis of road signs, markings, offline maps where available and, if necessary, verbal instructions from a human. Just like humans can.

    Mark

    Sat navs already deal with crowd-sourced information about road closures, and driver indicated road closures, and reroute accordingly.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Roger Hayter on Tue Feb 4 17:03:17 2025
    On Tue, 04 Feb 2025 16:39:02 +0000, Roger Hayter wrote:

    On 4 Feb 2025 at 12:16:04 GMT, "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 03/02/2025 17:43, Mark Goodge wrote:

    How do you think an autonomous vehicle would cope with that, when the
    route given by its navigation software is blocked and there's no data
    available to suggest an alternative?

    Provided it is working on a live real-time mapping system in which
    road closures are clearly indicated, which is not beyond the wit of
    man, the satnav will simply re-route to avoid them.

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to
    navigation systems in real time.

    How do you think an autonomous vehicle would cope with being stopped
    by the police, and given routing information by the police which
    conflicted with its navigational data?

    It's the same problem as above. If the vehicle knows from an update
    to the live map that there is a road blockage, as should be a priority
    when autonomous vehicles become common, it will simply re-route to
    avoid it.

    But, again, that's the point. This information is not available in real
    time to the navigation systems. A human can cope with route-finding if
    necessary without resorting to them.

    Otherwise, but not too fancifully, it could be programmed to recognise
    the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the
    problem using what three words or whatever.

    Well, yes. But that's a lot more complex than merely following a route.
    It requires at least reasonable voice recognition, for example, and
    then being able to follow the instructions given.

    I don't doubt that, given sufficiently advanced AI, this will be a
    solvable problem. But it's not a problem you can solve just with a set
    of preset rules. So until autonomous vehicles have AI which is as
    robust at problem solving as human intelligence, it isn't really
    autonomous.

    Going on about live routing data is a red herring. As well as ongoing
    situations where the data simply isn't available to the system (eg,
    where the police decide to block a road), you also have to be able to
    cope with situations where the data exists but isn't available to the
    device (eg, in mobile notspots). To be truly autonomous, a self-driving
    vehicle has to be able to operate solely on the basis of road signs,
    markings, offline maps where available and, if necessary, verbal
    instructions from a human. Just like humans can.

    Mark

    Sat navs already deal with crowd-sourced information about road
    closures, and driver indicated road closures, and reroute accordingly.

    *some* sat navs ...

    However the problem then can be all that diverted traffic itself causes congestion. Leading to another re-route. And so on.

    Ideally all that data would be picked up by a central function that could
    amend traffic control (e.g. lights) accordingly. However that would run
    counter to the first commandment of UK transport since 1986 (when I
    observed it) that nothing should be done to improve car journeys or
    encourage private car ownership over public transport.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 4 15:20:03 2025
    On 04/02/2025 12:16, Mark Goodge wrote:
    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 03/02/2025 17:43, Mark Goodge wrote:

    How do you think an autonomous vehicle would cope with that, when the route >>> given by its navigation software is blocked and there's no data available to
    suggest an alternative?

    Provided it is working on a live real-time mapping system in which road
    closures are clearly indicated, which is not beyond the wit of man, the
    satnav will simply re-route to avoid them.

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to navigation systems in real time.

    It would be trivially easy to do though. Police alert HQ, HQ tells
    mapping, mapping puts a block where the accident is, autonomous vehicles re-route to avoid it.

    How do you think an autonomous vehicle would cope with being stopped by the >>> police, and given routing information by the police which conflicted with >>> its navigational data?

    It's the same problem as above. If the vehicle knows from an update to
    the live map that there is a road blockage, as should be a priority when
    autonomous vehicles become common, it will simply re-route to avoid it.

    But, again, that's the point. This information is not available in real time to the navigation systems. A human can cope with route-finding if necessary without resorting to them.

    Except that a human won't do anything either until and unless he knows
    where a blockage is.

    As autonomous vehicles become used there will have to be a requirement
    for accurate and rapid information on blockages.

    Information on congestion is readily available already in real time.
    There's no technical reason why the same could not be done for blockages.

    Otherwise, but not too fancifully, it could be programmed to recognise
    the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the
    problem using what three words or whatever.

    Well, yes. But that's a lot more complex than merely following a route. It requires at least reasonable voice recognition, for example, and then being able to follow the instructions given.

    These things exist already.

    I don't doubt that, given sufficiently advanced AI, this will be a solvable problem. But it's not a problem you can solve just with a set of preset rules. So until autonomous vehicles have AI which is as robust at problem solving as human intelligence, it isn't really autonomous.

    It's not AI, which is something entirely different.

    Going on about live routing data is a red herring. As well as ongoing situations where the data simply isn't available to the system (eg, where
    the police decide to block a road), you also have to be able to cope with situations where the data exists but isn't available to the device (eg, in mobile notspots). To be truly autonomous, a self-driving vehicle has to be able to operate solely on the basis of road signs, markings, offline maps where available and, if necessary, verbal instructions from a human. Just like humans can.

    And just like autonomous vehicles can already, but safer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Tue Feb 4 16:25:08 2025
    On 04/02/2025 15:41, billy bookcase wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0enkeFqoikU2@mid.individual.net...
    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close >>>> to a vehicle in front, lane-keep assist.

    Those are exmples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go along. >>> And so the process by which they actually arrive at particular decisions >>> may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    I give you the details of Lufthansa LH1829 below. Numerous other examples are
    available, (some of which may include the grounding of the entire Boeing 737 MAX
    fleet).

    Whilst some knowledge of aviation terms will help in reading what follows, it is by no
    means mandatory.

    Please detail, as simply as possible, the likely outcome for flight LH1829 had the
    commander not been present and had he not had access to flight technicians via Telex.

    Then please detail which parties will be fulfilling the roles of the commander and the
    flight technicians in fully automated vehicles.

    I fail to see any possible relevance in any of of your questions.

    Unless you can detail where I have ever claimed that AI has any possible
    role to play in either "autonomous" vehicles, or in autopilots.

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than
    the optimum outcome either permissible, or desirable, in real world situations,

    As it happens Norman is entirely correct in claiming AI has no role to play in autonomous vehicles. But entirely wrong in imagining all that all potential
    real life situations and problems which might arise can be anticipated, never mind modelled in experimental set-ups Such as would allow autonomous vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to
    studies showing that autonomous vehicles are in fact already safer than
    human driven ones.

    They are currently allowed already in parts of the USA, and in France
    and Germany.

    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    In the UK, they could be allowed by 2026.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Tue Feb 4 16:37:23 2025
    On 04/02/2025 14:49, Simon Parker wrote:
    On 03/02/2025 09:36, Norman Wells wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those
    not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a proper >>>> data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief!  How ancient is it?

    I will investigate in case it
    is a feature that is available but not activated. But you are assuming
    that all States and small 'burgs' in the US follow the same speed limit
    reporting processes that we do. Do they?

    I have no idea.  But this is a UK group and all the necessary speed
    limit information here is certainly available and in current use.

    It's trivial programming to ensure that limits are not exceeded by any
    autonomous vehicle, which is certainly not the case with human
    drivers, even those who try to obey them.

    Anecdata: A young(ish) family member had a black box fitted to their new
    car as a condition of their insurance policy.  It was a physical black
    box connected directly to the vehicle electronics and mounted within the engine bay by a professional appointed by the insurers.  Said family
    member has since changed their vehicle and the new vehicle didn't need a physical black box fitting but instead required an app installing on
    their mobile phone which must be with them in their car when they're
    driving as a condition of their insurance.

    Both systems have resulted in letters from the insurance company
    detailing incidents where the systems indicated that they were driving
    at significantly above the speed limit, which triggers penalties in the policy up to and including cancellation of the policy.

    Every single incident reported has been down to GPS issues where the car
    is on a dual carriageway, NSL road or similar, yet the black box
    detected the vehicle as being on a 30mph road adjacent to its current position but there is no reason to believe that the reverse situation
    cannot also exist, without triggering a letter from the insurance company.

    Please detail how your autonomous vehicle has overcome these problems evidently inherent in the current navigational systems in use, (both stand-alone and embedded).

    $75 billion has been spent by the automotive industry on developing
    autonomous vehicle technology, including the software. How much do you
    think the black box company spent developing theirs?

    You get what you pay for.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Max Demian on Tue Feb 4 15:25:24 2025
    On 04/02/2025 12:19, Max Demian wrote:
    On 03/02/2025 19:05, Norman Wells wrote:

    Provided it is working on a live real-time mapping system in which
    road closures are clearly indicated, which is not beyond the wit of
    man, the satnav will simply re-route to avoid them.  They're very good
    at finding alternative routes when you miss a turning for example, so
    it's not a new or insurmountable problem.

    Won't the satnav keep telling you to take the original route? Is there a
    way to tell it that there is something in the way? The same problem
    would occur in the case of an autonomous vehicle.

    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Tue Feb 4 16:32:42 2025
    On 04/02/2025 14:51, Simon Parker wrote:
    On 03/02/2025 10:03, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by
    any autonomous
    vehicle, which is certainly not the case with human drivers, even
    those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?

    Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
    TRO with no corresponding signage installed?

    Do NormCars (TM) (a Division of NormTech (TM)) have access to the
    underlying TRO database and know to ignore a sign if it has no TRO
    legally underpinning it?

    Will NormCars (TM) know the legally enforceable speed limit from
    accessing the TRO database and therefore drive at the correct speed even
    if some ne'er do well has spray painted, nicked or otherwise interfered
    with the signage?

    Yes, of course. It's only sensible to know what speed limits actually
    apply where after all.

    Will NormTech (TM) be selling this software to other major automobile companies or are they keeping their "trivial" coding to themselves?

    It's not software but access to relevant data that needs to be sorted
    out. Given the data, it's simplicity itself to write a control program
    along the lines of get location, look up speed limit, if speed exceeds
    limit slow down.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Tue Feb 4 16:14:59 2025
    On 04/02/2025 14:47, Simon Parker wrote:
    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close
    to a vehicle in front, lane-keep assist.

    Those are exmples of automation, as with autpilots. Where the each set
    of inputs, no matter how complex, will produce exactly the same result
    every time.

    AI systems on the other hand, learn and make up new rules as they go
    along.
    And so the process by which they actually arrive at particular decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about,

    I give you the details of Lufthansa LH1829 below.  Numerous other
    examples are available, (some of which may include the grounding of the entire Boeing 737 MAX fleet).

    Whilst some knowledge of aviation terms will help in reading what
    follows, it is by no means mandatory.

    Please detail, as simply as possible, the likely outcome for flight
    LH1829 had the commander not been present and had he not had access to
    flight technicians via Telex.

    Obviously, it would have crashed, killing all on board.

    Then please detail which parties will be fulfilling the roles of the commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    But a failsafe default of come to a halt and turn everything off, which
    would doubtless be in any autonomous vehicle, isn't really an option
    doing 500mph at 30,000 ft.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Simon Parker on Tue Feb 4 15:42:24 2025
    On Tue, 04 Feb 2025 14:54:39 +0000, Simon Parker wrote:

    And, despite what some posters to this thread may believe, regardless of
    how sincere they are in their beliefs, we are a significant distance
    away from a vehicle even approaching level 5.

    Indeed. And while the distraction into the realms of the legal are
    admirable as an excellent attempt to "be prepared" it should not be
    confused with the considerable technical challenges. Which are not aided
    by a useless philosophical rehashing of the trolley bus problem.

    I still believe that autonomous cars are awaiting an intersection with a sensible approach to EVs whereby "your" car quietly slips off between 6pm
    to 6am to charge up at the most convenient high speed (no humans allowed) charging point. And if it happens to take the odd paying passenger on
    it's way - well what's there not to like.

    However that future needs a few hundred thousand, if not million things
    to happen. Admittedly they are truly inevitable. But they may take a
    while.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Tue Feb 4 16:49:48 2025
    On 04/02/2025 16:06, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0eokvFtck2U4@mid.individual.net...

    I'm sure you're reinventing the wheel bringing up what you think are outstanding
    problems without knowing what has been solved already. The answers to them, however,
    are so trivially easy that it is inconceivable that they haven't all been addressed
    years ago.

    Where do you think the estimated $75 billion spent by the automotive industry on the
    development of autonomous vehicles has gone? And do you really think they haven't
    already thought of all the issues you've raised?

    Right. You're an autonomous car. You've got your two passengers in the
    back ( you know humans, remember them ? ) a husband and wife. Originally
    they told you to go to Tesco, But then half way there the wife said she'd changed her mind and wanted to go to Asda, So she said "go to Asda". But
    her husband objected and said he wanted to go to "Morrisons" instead. So you're driving along supposedly headed for Tesco but with shouts of "Asda" "No Morrisons",."No Asda", "Morrisons" coming out of the back for the
    next ten minutes. So where do you go, and why ?

    Where would a human driver go? It's no different. You either follow
    the last instruction all the time until they get fed up bickering, or
    you park up, lock the doors, turn off the lights, and wait until you
    only have one unambiguous instruction to follow. If conflict continues
    beyond a certain limit, I'd program the thing to ignore all instructions
    and drive direct to Relate.

    I don't know how it's done in practice but resolution is surely
    possible, maybe by ensuring only one person has authority.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Les. Hayward@21:1/5 to All on Tue Feb 4 17:20:31 2025
    I think that all of this vexatious text can be replaced by the old
    statement:

    Rules were made for the guidance of wise men and the obedience of fools".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Tue Feb 4 18:49:37 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0etb5Fu84aU2@mid.individual.net...

    On 04/02/2025 15:41, billy bookcase wrote:

    snip

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than
    the optimum outcome either permissible, or desirable, in real world
    situations,

    As it happens Norman is entirely correct in claiming AI has no role to play >> in autonomous vehicles. But entirely wrong in imagining all that all potential
    real life situations and problems which might arise can be anticipated, never
    mind modelled in experimental set-ups Such as would allow autonomous vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to studies showing that
    autonomous vehicles are in fact already safer than human driven ones.

    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    As to the former, like one two others on this thread I can speak with
    *direct personal experience* on this one. As well as the latter.

    As having watched every single episode of "Curb Your Enthusiasm" I can confirm that *all* the streets in the 'burbs of LA where Larry lives, are wide enough to accommodate, should they so wish, two lanes for autonomous cars down
    the middle; leaving the other two lanes for human cars.

    While pedestrians are almost unheard of in the 'burbs themselves.

    Essentially where street widths are concerned, nothing has changed in suburban LA since the days of Harold Lloyd, Laurel and Hardy or later, Columbo.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to Norman Wells on Tue Feb 4 19:08:54 2025
    On 03/02/2025 16:54, Norman Wells wrote:
    On 03/02/2025 15:32, The Todal wrote:
    On 03/02/2025 14:32, Norman Wells wrote:
    On 03/02/2025 13:41, Davey wrote:

    I have no idea what it runs, although I see lots of Linux terms in
    the software listings. I ask it to do things, and it usually does them. >>>> I certainly have no intention of replacing it, either the Satnav or the >>>> car, I have managed so far without my car telling me what the speed
    limit is, as I can read road signs myself.

    Of course.  Sometimes.

    My experience is that my satnav knows rather better than I do what
    the limit is at any one location.  And that's not for the want of
    trying.

    If you say you're fully aware of the correct speed limit everywhere
    you drive all of the time, I will frankly tend not to believe you.
    Maybe a journey with a satnav that does could be revelatory.

    Maybe every driver should be obliged to take a Speed Awareness Course
    periodically. To refresh their memory of which limits apply to which
    types of road.

    But it doesn't depend just on the type of road, so that would be
    inaccurate and misleading..

    A half decent satnav will know and tell you, wherever you are.



    I don't think any satnav is capable of telling you the speed limit on
    every road you happen to be on.

    Needless to say, it is no defence in any road traffic prosecution to say
    "but I was misled by my satnav".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fredxx@21:1/5 to All on Tue Feb 4 18:09:38 2025
    On 04/02/2025 15:30, Jethro_uk wrote:
    On Tue, 04 Feb 2025 14:49:28 +0000, Fredxx wrote:

    On 04/02/2025 14:15, Jethro_uk wrote:
    [quoted text muted]

    I don't use, or even come across 'green routing'

    Last time I used GM, it only gave you one route - the ability to chose "shortest" and "quickest" alongside "most economical had gone. When I
    briefly researched the matter, I read that Google were only supplying the "green" route now, for the planet. Not sure which planet though.

    There are alternatives such as Waze (Google owned) and HereWeGo.

    Unless it's so obvious that there is just the one route I find Waze
    gives me at least one alternative.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to jethro_uk@hotmailbin.com on Tue Feb 4 19:14:10 2025
    On 4 Feb 2025 at 17:03:17 GMT, "Jethro_uk" <jethro_uk@hotmailbin.com> wrote:

    On Tue, 04 Feb 2025 16:39:02 +0000, Roger Hayter wrote:

    On 4 Feb 2025 at 12:16:04 GMT, "Mark Goodge"
    <usenet@listmail.good-stuff.co.uk> wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 03/02/2025 17:43, Mark Goodge wrote:

    How do you think an autonomous vehicle would cope with that, when the >>>>> route given by its navigation software is blocked and there's no data >>>>> available to suggest an alternative?

    Provided it is working on a live real-time mapping system in which
    road closures are clearly indicated, which is not beyond the wit of
    man, the satnav will simply re-route to avoid them.

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to
    navigation systems in real time.

    How do you think an autonomous vehicle would cope with being stopped >>>>> by the police, and given routing information by the police which
    conflicted with its navigational data?

    It's the same problem as above. If the vehicle knows from an update
    to the live map that there is a road blockage, as should be a priority >>>> when autonomous vehicles become common, it will simply re-route to
    avoid it.

    But, again, that's the point. This information is not available in real
    time to the navigation systems. A human can cope with route-finding if
    necessary without resorting to them.

    Otherwise, but not too fancifully, it could be programmed to recognise >>>> the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the >>>> problem using what three words or whatever.

    Well, yes. But that's a lot more complex than merely following a route.
    It requires at least reasonable voice recognition, for example, and
    then being able to follow the instructions given.

    I don't doubt that, given sufficiently advanced AI, this will be a
    solvable problem. But it's not a problem you can solve just with a set
    of preset rules. So until autonomous vehicles have AI which is as
    robust at problem solving as human intelligence, it isn't really
    autonomous.

    Going on about live routing data is a red herring. As well as ongoing
    situations where the data simply isn't available to the system (eg,
    where the police decide to block a road), you also have to be able to
    cope with situations where the data exists but isn't available to the
    device (eg, in mobile notspots). To be truly autonomous, a self-driving
    vehicle has to be able to operate solely on the basis of road signs,
    markings, offline maps where available and, if necessary, verbal
    instructions from a human. Just like humans can.

    Mark

    Sat navs already deal with crowd-sourced information about road
    closures, and driver indicated road closures, and reroute accordingly.

    *some* sat navs ...

    However the problem then can be all that diverted traffic itself causes congestion. Leading to another re-route. And so on.

    Ideally all that data would be picked up by a central function that could amend traffic control (e.g. lights) accordingly. However that would run counter to the first commandment of UK transport since 1986 (when I
    observed it) that nothing should be done to improve car journeys or
    encourage private car ownership over public transport.

    Not in real time, but Google maps has apparently deleted a local minor road round us that could be used to circumvent a rather long trunk road diversion that is going to last three months.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to Fredxx on Tue Feb 4 19:30:07 2025
    On 04/02/2025 14:49, Fredxx wrote:
    On 04/02/2025 14:15, Jethro_uk wrote:
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to
    navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and
    *could* quickly determine that a sudden influx of changed journeys is
    indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    The reason I am doubtful is that this is the same product which will
    happily send you down a labyrinthine route of side roads and alleys in
    the name of it's preferred "green" routing. Doing more damage to the
    environment than it could ever prevent.

    I don't use, or even come across 'green routing' but what I can say is
    when I have ignored a deviation from the standard route it doesn't
    normally work out well in term of traffic, roadworks and jams.

    (sat Nav re-routing is another exemplar of machines being incapable of
    intelligence :) )

    I might agree, but that is when you use your own intelligence to work
    out if the new route is a sensible one. The satnav should be treated
    like a guide or tool you have autonomy over.

    That is very difficult to do if you are partway through your journey
    when a new route/diversion is offered/required, and you do not have the
    luxury of pulling over in order to study this new 'serving suggestion'.

    Examining the real life details of a route on the titchy little satnav
    screen is hard to do at the best of times.

    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Tue Feb 4 19:15:29 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0eupdFu84aU5@mid.individual.net...
    On 04/02/2025 16:06, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0eokvFtck2U4@mid.individual.net...

    I'm sure you're reinventing the wheel bringing up what you think are outstanding
    problems without knowing what has been solved already. The answers to them, however,
    are so trivially easy that it is inconceivable that they haven't all been addressed
    years ago.

    Where do you think the estimated $75 billion spent by the automotive industry on the
    development of autonomous vehicles has gone? And do you really think they haven't
    already thought of all the issues you've raised?

    Right. You're an autonomous car. You've got your two passengers in the
    back ( you know humans, remember them ? ) a husband and wife. Originally
    they told you to go to Tesco, But then half way there the wife said she'd
    changed her mind and wanted to go to Asda, So she said "go to Asda". But
    her husband objected and said he wanted to go to "Morrisons" instead. So
    you're driving along supposedly headed for Tesco but with shouts of "Asda" >> "No Morrisons",."No Asda", "Morrisons" coming out of the back for the
    next ten minutes. So where do you go, and why ?

    Where would a human driver go? It's no different.

    Both the husband and wife used to drive to the supermarket. But without
    any discernible pattern such as alternating. One reason may have been
    because one wanted to buy something which was on special offer.
    At other times one may simply have not been in the mood for
    driving.

    You either follow the last instruction all the time until they get fed up bickering,

    But if either one of them knows this, then they'll still be shouting
    while you're entering Tesco's car park

    or you park up, lock the doors,

    At which point, anticipating this, one of them will feign a heart attack or similar life threatening condition. Then what do you do ?

    turn off the lights, and wait until you only have one unambiguous instruction to
    follow. If conflict continues beyond a certain limit, I'd program the thing to ignore
    all instructions and drive direct to Relate.

    Following which *you* will be required to attend regular counselling
    sessions at Relate, on your own, for the next six months.

    I don't know how it's done in practice but resolution is surely possible, maybe by
    ensuring only one person has authority.

    Quite clearly, you haven't yet completed your counselling sessions


    bb




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Tue Feb 4 19:21:53 2025
    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0etb5Fu84aU2@mid.individual.net...

    On 04/02/2025 15:41, billy bookcase wrote:

    snip

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than
    the optimum outcome either permissible, or desirable, in real world
    situations,

    As it happens Norman is entirely correct in claiming AI has no role to play >>> in autonomous vehicles. But entirely wrong in imagining all that all potential
    real life situations and problems which might arise can be anticipated, never
    mind modelled in experimental set-ups Such as would allow autonomous vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to studies showing that
    autonomous vehicles are in fact already safer than human driven ones.

    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the world’s first fully
    sized self-driving bus from the spring"

    Does that indicate there is any substantial doubt, even on British urban streets?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to All on Tue Feb 4 19:33:10 2025
    On 04/02/2025 15:35, Jethro_uk wrote:
    On Tue, 04 Feb 2025 14:53:34 +0000, Nick Finnigan wrote:

    On 04/02/2025 14:15, Jethro_uk wrote:
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide
    to block a road and turn people away, that information isn't available >>>> to navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and
    *could* quickly determine that a sudden influx of changed journeys is
    indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    It spots that a stretch of road has developed a very slow speed, and
    so
    prefers a different route based on travel times, without knowing why.
    Nor knowing whether the alternative roads can cope with the volume of
    traffic now moving along an unfamiliar route.

    Odd. 25 years ago I worked on routing software that was aware of road characteristics. But not surprising as Googles idea of innovation and
    mine rarely intersect.


    Google Maps also shows forked arrows at bottom right to allow me to
    chose
    a different route at any time (data connection permitting).

    The arrows aren't always there IME (but the caveat with anything from the chocolate factory is it can change dramatically without notice). I only
    saw them when I was in a position to select one of the routes. I found
    that invariably they were suggesting a "quicker" route that would have
    taken me out of my way and not uncommonly down a side road.

    Countless trips with Uber drivers locally has shown me how pisspoor the routing is. Forget speed humps and chicanes. I would suggest the biggest cause of urban emission pollution is sat navigation software directing
    cars down tiny side roads to shave a millisecond off the journey.

    Where that millisecond is calculated as though it were possible and
    practical to travel at the posted speed limit throughout.

    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to The Todal on Tue Feb 4 19:23:56 2025
    On 04/02/2025 19:08, The Todal wrote:
    On 03/02/2025 16:54, Norman Wells wrote:
    On 03/02/2025 15:32, The Todal wrote:
    On 03/02/2025 14:32, Norman Wells wrote:
    On 03/02/2025 13:41, Davey wrote:

    I have no idea what it runs, although I see lots of Linux terms in
    the software listings. I ask it to do things, and it usually does
    them.
    I certainly have no intention of replacing it, either the Satnav or
    the
    car, I have managed so far without my car telling me what the speed
    limit is, as I can read road signs myself.

    Of course.  Sometimes.

    My experience is that my satnav knows rather better than I do what
    the limit is at any one location.  And that's not for the want of
    trying.

    If you say you're fully aware of the correct speed limit everywhere
    you drive all of the time, I will frankly tend not to believe you.
    Maybe a journey with a satnav that does could be revelatory.

    Maybe every driver should be obliged to take a Speed Awareness Course
    periodically. To refresh their memory of which limits apply to which
    types of road.

    But it doesn't depend just on the type of road, so that would be
    inaccurate and misleading..

    A half decent satnav will know and tell you, wherever you are.

    I don't think any satnav is capable of telling you the speed limit on
    every road you happen to be on.

    Why not?

    As far as I'm aware, my cheap old Garmin does.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 20:45:12 2025
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never worked as a programmer...."

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to jethro_uk@hotmailbin.com on Tue Feb 4 20:42:05 2025
    On Tue, 4 Feb 2025 15:30:06 -0000 (UTC), Jethro_uk
    <jethro_uk@hotmailbin.com> wrote:

    On Tue, 04 Feb 2025 14:49:28 +0000, Fredxx wrote:

    On 04/02/2025 14:15, Jethro_uk wrote:
    [quoted text muted]

    I don't use, or even come across 'green routing'

    Last time I used GM, it only gave you one route - the ability to chose >"shortest" and "quickest" alongside "most economical had gone. When I
    briefly researched the matter, I read that Google were only supplying the >"green" route now, for the planet. Not sure which planet though.

    I've just asked Google Maps on my phone to give me a route to a randomly
    chosen location. It gave me three options, and recommended the quickest.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 21:29:15 2025
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train Captain) on board who is able to take over and drive it manually if
    necessary. And the trains themselves don't need any routing capability,
    because they run on rails. It's a very far cry from a fully autonomous road vehicle.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Mark Goodge on Tue Feb 4 21:32:38 2025
    On 2025-02-04, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've
    never worked as a programmer...."

    Indeed. A programmer would have called it a "simple matter
    of programming" :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Les. Hayward on Tue Feb 4 21:34:37 2025
    On 29/01/2025 19:56, Les. Hayward wrote:

    It seems that even the most mundane bit of software these days is
    described as AI! It must have been back in the 80's that a friend of
    mine was working on software for automatic landing and take-off for airliners. Nothing either artificial or intelligent about it!


    Bit late wasn't he?

    https://en.wikipedia.org/wiki/Hawker_Siddeley_Trident#avionics

    will tell you that "it enabled the Trident to perform the first
    automatic landing by a civil airliner in scheduled passenger service on
    10 June 1965"

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Norman Wells on Tue Feb 4 21:47:27 2025
    On 03/02/2025 08:09, Norman Wells wrote:

    Just as they do with any legally imposed speed limits.  With a proper data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    Yes. It's even right most of the time.

    The anecdote I heard was from a friend of mine whose car spotted the 50
    sign on the adjacent road while he was on a motorway, and braked.

    For reasons unknown it thought it meant 50kph, even though he was in
    England.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 21:57:54 2025
    On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:51, Simon Parker wrote:
    On 03/02/2025 10:03, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by
    any autonomous
    vehicle, which is certainly not the case with human drivers, even
    those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?

    Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
    TRO with no corresponding signage installed?

    Do NormCars (TM) (a Division of NormTech (TM)) have access to the
    underlying TRO database and know to ignore a sign if it has no TRO
    legally underpinning it?

    Will NormCars (TM) know the legally enforceable speed limit from
    accessing the TRO database and therefore drive at the correct speed even
    if some ne'er do well has spray painted, nicked or otherwise interfered
    with the signage?

    Yes, of course. It's only sensible to know what speed limits actually
    apply where after all.

    How long do you think it will take for a national TRO database to be created and fully populated, and API access to it made available for sat-nav
    providers? Because it doesn't, as yet, exist.

    Local highway authorities maintain their own repositories of TROs, but
    there's no requirement for them to be in a standard format. And, in
    particular, TROs don't usually contain coordinates. They describe affected stretches of road using phrases like "From the junction with Foo Street for
    a distance of 86m in a northerly direction". Converting that into something
    you can store in a geographic database is not, despite your optimism,
    trivial.

    After all, even Google doesn't use TROs to map speed limits. Their systems
    are intelligent enough to know what the default limits are in the UK for different types of road. But, for any departure from the default, it does exactly what a human would do - look at the signs:

    https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 22:00:29 2025
    On Tue, 4 Feb 2025 16:37:23 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:49, Simon Parker wrote:

    Please detail how your autonomous vehicle has overcome these problems
    evidently inherent in the current navigational systems in use, (both
    stand-alone and embedded).

    $75 billion has been spent by the automotive industry on developing >autonomous vehicle technology, including the software.

    $75 billion is a lot to spend on something you keep telling us is trivial.
    What a waste of money, eh.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 22:05:44 2025
    On Tue, 4 Feb 2025 15:05:02 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 12:32, Mark Goodge wrote:

    And that also takes us back to my point, which is that a truly autonomous
    vehicle has to be capable of deciding when to disregard the sat-nav, and be >> able to operate effectively in situations where the sat-nav data is missing >> or incomplete.

    That's just a matter of programming to do what a sensible person would
    do in the same circumstances.

    The word "just" is doing a lot of heavy lifting there.

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.

    On the contrary, I think everyone who knows anything at all about the
    prospects for autonomous vehicles is of the opinion that AI is the long-term solution.

    and that all of these
    problems have already been solved. Now he's merely saying that, in his
    opinion, they can be solved and that solving them is trivial.

    I'm sure you're reinventing the wheel bringing up what you think are >outstanding problems without knowing what has been solved already. The >answers to them, however, are so trivially easy that it is inconceivable
    that they haven't all been addressed years ago.

    Where do you think the estimated $75 billion spent by the automotive
    industry on the development of autonomous vehicles has gone? And do you >really think they haven't already thought of all the issues you've raised?

    I am absolutely certain that they've considered all of these issues, and
    many more besides. That's precisely why, despite spending $75 billion on it,
    we still don't have fully autonomous vehicles on the road. The automotive industry is putting a lot of time and money into getting it right. I think
    they would be slightly amused to hear you so blithely dismiss their efforts
    as trivial.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 07:56:31 2025
    On 04/02/2025 21:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:51, Simon Parker wrote:
    On 03/02/2025 10:03, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by
    any autonomous
    vehicle, which is certainly not the case with human drivers, even
    those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?

    Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
    TRO with no corresponding signage installed?

    Do NormCars (TM) (a Division of NormTech (TM)) have access to the
    underlying TRO database and know to ignore a sign if it has no TRO
    legally underpinning it?

    Will NormCars (TM) know the legally enforceable speed limit from
    accessing the TRO database and therefore drive at the correct speed even >>> if some ne'er do well has spray painted, nicked or otherwise interfered
    with the signage?

    Yes, of course. It's only sensible to know what speed limits actually
    apply where after all.

    How long do you think it will take for a national TRO database to be created and fully populated, and API access to it made available for sat-nav providers? Because it doesn't, as yet, exist.

    So you say. However, my little Garmin clearly has access to a data-base
    of speed limits already, so one clearly exists and it seems to me to be
    pretty comprehensive. Whose it is I have no idea. But it's a sine qua
    non that autonomous vehicles will need to know, and it's obviously not
    an insuperable problem.

    Local highway authorities maintain their own repositories of TROs, but there's no requirement for them to be in a standard format. And, in particular, TROs don't usually contain coordinates. They describe affected stretches of road using phrases like "From the junction with Foo Street for
    a distance of 86m in a northerly direction". Converting that into something you can store in a geographic database is not, despite your optimism, trivial.

    One wonders how my little Garmin copes then.

    After all, even Google doesn't use TROs to map speed limits. Their systems are intelligent enough to know what the default limits are in the UK for different types of road. But, for any departure from the default, it does exactly what a human would do - look at the signs:

    https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/

    My Garmin doesn't. It doesn't have eyes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to All on Tue Feb 4 21:14:21 2025
    On 04/02/2025 15:35, Jethro_uk wrote:
    On Tue, 04 Feb 2025 14:53:34 +0000, Nick Finnigan wrote:

    On 04/02/2025 14:15, Jethro_uk wrote:
    On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:

    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    But that's the point. It wasn't. It can't be. When the police decide
    to block a road and turn people away, that information isn't available >>>> to navigation systems in real time.

    Well Google Maps gets feedback on the progress of journeys using it and
    *could* quickly determine that a sudden influx of changed journeys is
    indicative of some sort of problem proceeding.

    Whether it does or not is another matter.

    It spots that a stretch of road has developed a very slow speed, and
    so
    prefers a different route based on travel times, without knowing why.
    Nor knowing whether the alternative roads can cope with the volume of
    traffic now moving along an unfamiliar route.

    Odd. 25 years ago I worked on routing software that was aware of road characteristics. But not surprising as Googles idea of innovation and
    mine rarely intersect.

    I'm sure Google has some road characteristics, and uses them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 4 21:38:14 2025
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I
    have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Jon Ribbens on Tue Feb 4 21:39:17 2025
    On 04/02/2025 21:32, Jon Ribbens wrote:
    On 2025-02-04, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've
    never worked as a programmer...."

    Indeed. A programmer would have called it a "simple matter
    of programming" :-)

    Which it in fact is. You just use the word 'not'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 07:48:49 2025
    On 04/02/2025 22:00, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:37:23 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:49, Simon Parker wrote:

    Please detail how your autonomous vehicle has overcome these problems
    evidently inherent in the current navigational systems in use, (both
    stand-alone and embedded).

    $75 billion has been spent by the automotive industry on developing
    autonomous vehicle technology, including the software.

    $75 billion is a lot to spend on something you keep telling us is trivial. What a waste of money, eh.

    Solving the naive problems you have brought up was trivial and cost very little. The others cost rather more.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to Norman Wells on Tue Feb 4 21:54:14 2025
    On 04/02/2025 15:05, Norman Wells wrote:
    On 04/02/2025 12:32, Mark Goodge wrote:

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.


    Which autonomous vehicles don't use AI for object recogntion?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 08:04:22 2025
    On 04/02/2025 22:05, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:05:02 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 12:32, Mark Goodge wrote:

    And that also takes us back to my point, which is that a truly autonomous >>> vehicle has to be capable of deciding when to disregard the sat-nav, and be >>> able to operate effectively in situations where the sat-nav data is missing >>> or incomplete.

    That's just a matter of programming to do what a sensible person would
    do in the same circumstances.

    The word "just" is doing a lot of heavy lifting there.

    No, that's routine. It's the word 'sensible' that's the heavy one.

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.

    On the contrary, I think everyone who knows anything at all about the prospects for autonomous vehicles is of the opinion that AI is the long-term solution.

    Then they too don't understand what AI is, but have adopted it as the
    word of the moment.

    and that all of these
    problems have already been solved. Now he's merely saying that, in his
    opinion, they can be solved and that solving them is trivial.

    I'm sure you're reinventing the wheel bringing up what you think are
    outstanding problems without knowing what has been solved already. The
    answers to them, however, are so trivially easy that it is inconceivable
    that they haven't all been addressed years ago.

    Where do you think the estimated $75 billion spent by the automotive
    industry on the development of autonomous vehicles has gone? And do you
    really think they haven't already thought of all the issues you've raised?

    I am absolutely certain that they've considered all of these issues, and
    many more besides. That's precisely why, despite spending $75 billion on it, we still don't have fully autonomous vehicles on the road. The automotive industry is putting a lot of time and money into getting it right. I think they would be slightly amused to hear you so blithely dismiss their efforts as trivial.

    If we're talking about words doing heavy lifting, the word 'fully' there
    is a prime example, and is subject to all sorts of weaselly interpretation.

    The USA is considerably ahead of us, but this is where we're at in Europe:

    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 4 21:45:09 2025
    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train Captain) on board who is able to take over and drive it manually if necessary. And the trains themselves don't need any routing capability, because they run on rails. It's a very far cry from a fully autonomous road vehicle.

    And how many times have they been called into action to drive a train
    manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
    to employ people to do nothing, as is their wont.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 4 21:22:46 2025
    On Tue, 4 Feb 2025 19:21:53 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0etb5Fu84aU2@mid.individual.net...

    You must then have missed the earlier references in this thread to studies showing that
    autonomous vehicles are in fact already safer than human driven ones.

    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the worlds first fully
    sized self-driving bus from the spring"

    It's a shuttle bus that runs back and forth between just two locations. The entirety of its route on the public highway is on dual carriageways. And it still needs someone to sit in the driver's seat to "monitor the technology". Oh, and it's not even autonomous end to end.

    Their driverless buses have sensors enabling them to travel on
    pre-selected roads at up to 50mph.

    On each trip they switch between autonomous and manual driving, being
    controlled by computer across the Forth Road Bridge, and from the M90 and
    M8 to the A720 in Edinburgh.

    https://www.bbc.co.uk/news/articles/c0q0lg3j1exo

    Does that indicate there is any substantial doubt, even on British urban >streets?

    It indicates that there's still considerable doubt, as it happens.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Tue Feb 4 20:43:30 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0f7miF2crrU1@mid.individual.net...
    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0etb5Fu84aU2@mid.individual.net...

    On 04/02/2025 15:41, billy bookcase wrote:

    snip

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than
    the optimum outcome either permissible, or desirable, in real world
    situations,

    As it happens Norman is entirely correct in claiming AI has no role to play
    in autonomous vehicles. But entirely wrong in imagining all that all potential
    real life situations and problems which might arise can be anticipated, never
    mind modelled in experimental set-ups Such as would allow autonomous vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to studies showing
    that
    autonomous vehicles are in fact already safer than human driven ones.

    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the world's first fully sized self-driving bus from the spring"

    Does that indicate there is any substantial doubt, even on British urban streets?

    This one ?

    quote

    The UK's first full-sized driverless buses have started operating for passengers in Edinburgh.

    The autonomous vehicles, operated by Stagecoach, run across the Forth
    Road Bridge from Ferrytoll park-and-ride in Fife to Edinburgh Park station.

    Despite the "driverless" name, the buses operate with two staff on board.

    One will sit in the driver's seat to monitor the technology, and a
    so-called bus captain will help passengers with boarding and buying tickets

    unquote

    https://www.bbc.co.uk/news/uk-scotland-edinburgh-east-fife-65589913

    To repeat ---

    "One will sit in the driver's seat to *monitor the technology "

    Doubless with a big handle or button conveniently placed nearby.

    Just in case the "monitoring" gets a bit "stressful"


    bb













    .






    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Les. Hayward@21:1/5 to Vir Campestris on Wed Feb 5 09:35:10 2025
    On 04/02/2025 21:34, Vir Campestris wrote:
    On 29/01/2025 19:56, Les. Hayward wrote:

    It seems that even the most mundane bit of software these days is
    described as AI! It must have been back in the 80's that a friend of
    mine was working on software for automatic landing and take-off for
    airliners. Nothing either artificial or intelligent about it!


    Bit late wasn't he?

    https://en.wikipedia.org/wiki/Hawker_Siddeley_Trident#avionics

    will tell you that "it enabled the Trident to perform the first
    automatic landing by a civil airliner in scheduled passenger service on
    10 June 1965"

    Andy

    It may have been the 70's rather than the 80's. In any case, there is a difference between an initial test case and getting software which is
    compliant with the rest of the world.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 09:57:08 2025
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never >> worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I
    have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    Please name three different programming languages which you have used that include the word "not" as an operator.

    But, anyway, it's not the syntax I was querying. It's how to ensure that the existence of a blockage, and its precise location, is passed to the software promptly and in a format which can easily be interpreted by the software.
    For example, when the notification comes in the form of a policeman saying
    "you can't go that way, mate, it's blocked".

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 09:58:54 2025
    On Tue, 4 Feb 2025 21:45:09 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train
    Captain) on board who is able to take over and drive it manually if
    necessary. And the trains themselves don't need any routing capability,
    because they run on rails. It's a very far cry from a fully autonomous road >> vehicle.

    And how many times have they been called into action to drive a train >manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
    to employ people to do nothing, as is their wont.

    I've seen one do it, once. So, not very often. But the point is that the
    system isn't, yet, considered reliable enough to do without the facility for manual driving, even if it's good enough for enough of the time in day to
    day use.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Wed Feb 5 10:30:18 2025
    On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a
    Train Captain) on board who is able to take over and drive it manually
    if necessary. And the trains themselves don't need any routing
    capability, because they run on rails. It's a very far cry from a fully
    autonomous road vehicle.

    And how many times have they been called into action to drive a train manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
    to employ people to do nothing, as is their wont.

    It's not just driving the train. It's ensuring there's nothing fouling
    the doors as it departs. Something which - despite being sensored up the
    wazzoo - still occurs. With potentially fatal results.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 10:33:46 2025
    On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 21:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:51, Simon Parker wrote:
    On 03/02/2025 10:03, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by >>>>>> any autonomous
    vehicle, which is certainly not the case with human drivers, even
    those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?

    Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
    TRO with no corresponding signage installed?

    Do NormCars (TM) (a Division of NormTech (TM)) have access to the
    underlying TRO database and know to ignore a sign if it has no TRO
    legally underpinning it?

    Will NormCars (TM) know the legally enforceable speed limit from
    accessing the TRO database and therefore drive at the correct speed even >>>> if some ne'er do well has spray painted, nicked or otherwise interfered >>>> with the signage?

    Yes, of course. It's only sensible to know what speed limits actually
    apply where after all.

    How long do you think it will take for a national TRO database to be created >> and fully populated, and API access to it made available for sat-nav
    providers? Because it doesn't, as yet, exist.

    So you say.

    So I know.

    I am aware that arguing from self-authority is frowned on in Usenet, but in this particular context I do think it's worth pointing out that I am two
    things which you are not. Firstly I am a programmer, and, although I have
    never written routing software, I have worked with it, and I have seen its complexity. And, secondly, possibly more pertinently in this context, I am a local councillor, on a council which is the local highway authority. Both of those facts are easily verifiable, should you care to Google me. I don't
    hide my identity on Usenet.

    As a councillor, I have a login on my council's intranet which, among other things, gives me access to the council's road network database, including
    TROs. And some of those TROs go back a long way. Far enough back that their only digital existence is as a scanned copy of a typewritten document. The
    idea that there's a national database easily accessible by third parties is, quite simply, fantasy. There may be, one day. But we are very far from it at the moment.

    However, my little Garmin clearly has access to a data-base
    of speed limits already, so one clearly exists and it seems to me to be >pretty comprehensive. Whose it is I have no idea. But it's a sine qua
    non that autonomous vehicles will need to know, and it's obviously not
    an insuperable problem.

    Garmin, like most sat-nav providers, uses a variety of sources for its database.

    Local highway authorities maintain their own repositories of TROs, but
    there's no requirement for them to be in a standard format. And, in
    particular, TROs don't usually contain coordinates. They describe affected >> stretches of road using phrases like "From the junction with Foo Street for >> a distance of 86m in a northerly direction". Converting that into something >> you can store in a geographic database is not, despite your optimism,
    trivial.

    One wonders how my little Garmin copes then.

    It copes because most speed limits don't change very often, and when they do change, they usually only affect a short stretch of road. Provided its data sources include sources of speed limits, it can usually update that data
    before most people notice. But if you happen to drive along a stretch of
    road where there has been a recent speed limit change, then you will
    probably observe that your sat-nav doesn't immediately reflect that. Again,,
    a few minutes with Google (or search engine of your choice) will find plenty
    of instances of people complaning that their Garmin sat-nav has got a speed limit wrong, or doesn't update when the limit changes.

    After all, even Google doesn't use TROs to map speed limits. Their systems >> are intelligent enough to know what the default limits are in the UK for
    different types of road. But, for any departure from the default, it does
    exactly what a human would do - look at the signs:

    https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/

    My Garmin doesn't. It doesn't have eyes.

    No, but Garmin probably buys data from Google, and other providers of the
    data.

    One of the leading suppliers of speed limit data is Here (that is, the
    company called "Here", not here as in present at this location). Given the requirement for all cars sold in the EU from July 2024 onwards to have Intelligent Speed Assistance (ISA), this is clearly a growing market. Here
    has published information which gives some background to both ISA and the supply of data.

    Interestingly, ISA providers are only required to meet a minimum level of
    90% accuracy. And temporary speed limits are explicitly excluded from ISA requirements. So there's a clear recognition that there will be times when
    the software doesn't know, or can't know, the speed limit.

    For human drivers that's not a major issue, there's an expectation that
    human drivers will still be able to read signs even where the software
    doesn't. But an autonomous vehicle only has the software, it doesn't have a human able to give it information.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 10:36:10 2025
    On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    The USA is considerably ahead of us, but this is where we're at in Europe:

    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    Yes, we're at the point where, as you conveniently highlighted for us "most
    of them travel at low speed, on fixed routes and still with safety drivers".

    That does not, to me, suggest that all of the problems inherent with
    autonomous vehicles have been solved, nor that solving those problems will
    be trivial.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Mark Goodge on Wed Feb 5 10:38:35 2025
    On Tue, 04 Feb 2025 21:57:54 +0000, Mark Goodge wrote:

    On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    [quoted text muted]

    How long do you think it will take for a national TRO database to be
    created and fully populated, and API access to it made available for
    sat-nav providers? Because it doesn't, as yet, exist.

    When I worked in logistics, the Ordnance Survey upon which our maps were
    based refused to guarantee their accuracy. In particular the existence
    and exact height of bridges over the road.

    This in turn meant we were unable to guarantee 100% that our maps could
    be used to follow routes that avoided said bridges.

    For truly autonomous control of a vehicle, the systems will need to be
    prepared for reality to contradict virtuality and have a reliable way of interpolating the two.

    As an aside, it has been noted that Google Maps will "create" a road when
    it sees people who are using Google Maps taking a certain route. This was discussed in The Register a while back about a story of a car being
    directed 100km into a desert because GM thought there was a road there.
    Turned out it had followed some hikers and assumed there was a
    (presumably new) road there.

    Speaking for myself, I noticed that GM quite happily would direct me to
    come off a motorway services and sneak out the back way as it's suggested route. I imagine that was because other locals (like me) did that anyway
    and it created that route despite it being unlawful.

    Which does raise the question of whether that provides a defence, if it
    ever came to court (as a civil case).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 10:22:59 2025
    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    Satnavs should, and autonomous vehicle controls will have to, have some >>>> way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never
    worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I
    have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    Please name three different programming languages which you have used that include the word "not" as an operator.

    But, anyway, it's not the syntax I was querying. It's how to ensure that the existence of a blockage, and its precise location, is passed to the software promptly and in a format which can easily be interpreted by the software.
    For example, when the notification comes in the form of a policeman saying "you can't go that way, mate, it's blocked".

    What's the problem? It's only a matter of information being passed to
    whoever can amend the live mapping, and road closed notification being
    applied on the maps.

    If it's not done, we're no worse off than we are now. Huge delays
    whether of manually driven vehicles or autonomous ones.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 10:40:21 2025
    On 05/02/2025 09:58, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:45:09 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the >>>>> commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train >>> Captain) on board who is able to take over and drive it manually if
    necessary. And the trains themselves don't need any routing capability,
    because they run on rails. It's a very far cry from a fully autonomous road >>> vehicle.

    And how many times have they been called into action to drive a train
    manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
    to employ people to do nothing, as is their wont.

    I've seen one do it, once. So, not very often. But the point is that the system isn't, yet, considered reliable enough to do without the facility for manual driving, even if it's good enough for enough of the time in day to
    day use.

    "Trains on the Docklands Light Railway (DLR) don't have drivers even in
    the ATO kind of way. Instead, they have "train attendants" or "captains"
    who travel on the train but move around inside it rather than sitting at
    the front. These people do, however, look after the doors just like
    their ATO Tube counterparts. They too are expected to operate the train manually if something goes wrong with the system. And they too are
    members of trade unions."

    https://www.theguardian.com/uk/davehillblog/2012/feb/29/boris-johnson-driverless-underground-trains#:~:text=Trains%20on%20the%20Docklands%20Light,like%20their%20ATO%20Tube%20counterparts.

    I suspect the last sentence of the above has more to do with having door openers on board than actual issues of reliability.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 11:18:30 2025
    On 05/02/2025 10:36, Mark Goodge wrote:
    On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    The USA is considerably ahead of us, but this is where we're at in Europe: >>
    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    Yes, we're at the point where, as you conveniently highlighted for us "most of them travel at low speed, on fixed routes and still with safety drivers".

    It's worth noting that report is nearly two years old in what is a very fast-moving field.

    That does not, to me, suggest that all of the problems inherent with autonomous vehicles have been solved, nor that solving those problems will
    be trivial.

    Of course they haven't all been solved, but the evidence is that AVs are already safer than manually driven vehicles which is a very significant
    marker point in their development. The quest now is to approach
    perfection with, of course, the law of diminishing returns applying.

    Knowing that perfection is impossible to achieve, when do we say they
    are good enough? What metrics apply?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 11:07:54 2025
    On 05/02/2025 10:33, Mark Goodge wrote:
    On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 21:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 04/02/2025 14:51, Simon Parker wrote:
    On 03/02/2025 10:03, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0bh1eFdablU2@mid.individual.net...

    It's trivial programming to ensure that limits are not exceeded by >>>>>>> any autonomous
    vehicle, which is certainly not the case with human drivers, even >>>>>>> those who try to obey
    them.

    How about the humans who were supposed to put up the signs but
    somehow forgot ?

    What do you, and your robot friends, intend to do about them ?

    Or, as this is a legal newsgroup, a sign with no underlying TRO, or a >>>>> TRO with no corresponding signage installed?

    Do NormCars (TM) (a Division of NormTech (TM)) have access to the
    underlying TRO database and know to ignore a sign if it has no TRO
    legally underpinning it?

    Will NormCars (TM) know the legally enforceable speed limit from
    accessing the TRO database and therefore drive at the correct speed even >>>>> if some ne'er do well has spray painted, nicked or otherwise interfered >>>>> with the signage?

    Yes, of course. It's only sensible to know what speed limits actually >>>> apply where after all.

    How long do you think it will take for a national TRO database to be created
    and fully populated, and API access to it made available for sat-nav
    providers? Because it doesn't, as yet, exist.

    So you say.

    So I know.

    I am aware that arguing from self-authority is frowned on in Usenet, but in this particular context I do think it's worth pointing out that I am two things which you are not. Firstly I am a programmer, and, although I have never written routing software, I have worked with it, and I have seen its complexity. And, secondly, possibly more pertinently in this context, I am a local councillor, on a council which is the local highway authority. Both of those facts are easily verifiable, should you care to Google me. I don't
    hide my identity on Usenet.

    As a councillor, I have a login on my council's intranet which, among other things, gives me access to the council's road network database, including TROs. And some of those TROs go back a long way. Far enough back that their only digital existence is as a scanned copy of a typewritten document. The idea that there's a national database easily accessible by third parties is, quite simply, fantasy. There may be, one day. But we are very far from it at the moment.

    However, my little Garmin clearly has access to a data-base
    of speed limits already, so one clearly exists and it seems to me to be
    pretty comprehensive. Whose it is I have no idea. But it's a sine qua
    non that autonomous vehicles will need to know, and it's obviously not
    an insuperable problem.

    Garmin, like most sat-nav providers, uses a variety of sources for its database.

    Excellent news, and just as it should be. Progress should not be
    inhibited because local councils are still in the dark ages, and it
    seems there are good ways round.

    Local highway authorities maintain their own repositories of TROs, but
    there's no requirement for them to be in a standard format. And, in
    particular, TROs don't usually contain coordinates. They describe affected >>> stretches of road using phrases like "From the junction with Foo Street for >>> a distance of 86m in a northerly direction". Converting that into something >>> you can store in a geographic database is not, despite your optimism,
    trivial.

    One wonders how my little Garmin copes then.

    It copes because most speed limits don't change very often, and when they do change, they usually only affect a short stretch of road. Provided its data sources include sources of speed limits, it can usually update that data before most people notice. But if you happen to drive along a stretch of
    road where there has been a recent speed limit change, then you will
    probably observe that your sat-nav doesn't immediately reflect that. Again,, a few minutes with Google (or search engine of your choice) will find plenty of instances of people complaning that their Garmin sat-nav has got a speed limit wrong, or doesn't update when the limit changes.

    That may be because they haven't updated their maps for some time. But
    there's no reason why live mapping should have any speed limit wrong for
    long.

    The thing is, though, that there obviously are data-bases of speed
    limits out there that are rather more comprehensive, up-to-date and
    accessible than you think.

    My Garmin doesn't consult scanned copies of ancient TROs in council vaults.

    After all, even Google doesn't use TROs to map speed limits. Their systems >>> are intelligent enough to know what the default limits are in the UK for >>> different types of road. But, for any departure from the default, it does >>> exactly what a human would do - look at the signs:

    https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/

    My Garmin doesn't. It doesn't have eyes.

    No, but Garmin probably buys data from Google, and other providers of the data.

    One of the leading suppliers of speed limit data is Here (that is, the company called "Here", not here as in present at this location). Given the requirement for all cars sold in the EU from July 2024 onwards to have Intelligent Speed Assistance (ISA), this is clearly a growing market. Here has published information which gives some background to both ISA and the supply of data.

    Interestingly, ISA providers are only required to meet a minimum level of
    90% accuracy. And temporary speed limits are explicitly excluded from ISA requirements. So there's a clear recognition that there will be times when the software doesn't know, or can't know, the speed limit.

    For human drivers that's not a major issue, there's an expectation that
    human drivers will still be able to read signs even where the software doesn't. But an autonomous vehicle only has the software, it doesn't have a human able to give it information.

    No, they will also have an array of sensors that vastly exceed any
    human. The satnav speed limit data-base is only part of their toolkit.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Wed Feb 5 10:50:20 2025
    On 05/02/2025 10:30, Jethro_uk wrote:
    On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the >>>>> commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a
    Train Captain) on board who is able to take over and drive it manually
    if necessary. And the trains themselves don't need any routing
    capability, because they run on rails. It's a very far cry from a fully
    autonomous road vehicle.

    And how many times have they been called into action to drive a train
    manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
    to employ people to do nothing, as is their wont.

    It's not just driving the train. It's ensuring there's nothing fouling
    the doors as it departs. Something which - despite being sensored up the wazzoo - still occurs. With potentially fatal results.

    Seems like we'll need an on-board captain then, presumably with
    government accreditation, for any autonomous vehicle journey, even a
    private car.

    That's a bit of a downer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Wed Feb 5 10:26:11 2025
    On 04/02/2025 20:43, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0f7miF2crrU1@mid.individual.net...
    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0etb5Fu84aU2@mid.individual.net...

    On 04/02/2025 15:41, billy bookcase wrote:

    snip

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than >>>>> the optimum outcome either permissible, or desirable, in real world
    situations,

    As it happens Norman is entirely correct in claiming AI has no role to play
    in autonomous vehicles. But entirely wrong in imagining all that all potential
    real life situations and problems which might arise can be anticipated, never
    mind modelled in experimental set-ups Such as would allow autonomous vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to studies showing
    that
    autonomous vehicles are in fact already safer than human driven ones.

    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the world's first fully sized >> self-driving bus from the spring"

    Does that indicate there is any substantial doubt, even on British urban streets?

    This one ?

    quote

    The UK's first full-sized driverless buses have started operating for passengers in Edinburgh.

    The autonomous vehicles, operated by Stagecoach, run across the Forth
    Road Bridge from Ferrytoll park-and-ride in Fife to Edinburgh Park station.

    Despite the "driverless" name, the buses operate with two staff on board.

    One will sit in the driver's seat to monitor the technology, and a
    so-called bus captain will help passengers with boarding and buying tickets

    Unions again, eh?

    unquote

    https://www.bbc.co.uk/news/uk-scotland-edinburgh-east-fife-65589913

    To repeat ---

    "One will sit in the driver's seat to *monitor the technology"

    Doubless with a big handle or button conveniently placed nearby.

    Just in case the "monitoring" gets a bit "stressful"

    Got any stats on how many times it has gone wrong and he's been needed?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 10:31:37 2025
    On 04/02/2025 21:22, Mark Goodge wrote:
    On Tue, 4 Feb 2025 19:21:53 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0etb5Fu84aU2@mid.individual.net...

    You must then have missed the earlier references in this thread to studies showing that
    autonomous vehicles are in fact already safer than human driven ones.

    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the world’s first fully
    sized self-driving bus from the spring"

    It's a shuttle bus that runs back and forth between just two locations. The entirety of its route on the public highway is on dual carriageways. And it still needs someone to sit in the driver's seat to "monitor the technology". Oh, and it's not even autonomous end to end.

    Their driverless buses have sensors enabling them to travel on
    pre-selected roads at up to 50mph.

    On each trip they switch between autonomous and manual driving, being
    controlled by computer across the Forth Road Bridge, and from the M90 and
    M8 to the A720 in Edinburgh.

    https://www.bbc.co.uk/news/articles/c0q0lg3j1exo

    Does that indicate there is any substantial doubt, even on British urban
    streets?

    It indicates that there's still considerable doubt, as it happens.

    What it indicates to me, in the absence of any statistics of failure
    rates, is that the law which requires such precautions may be rather out-of-date.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Wed Feb 5 12:50:04 2025
    On Wed, 05 Feb 2025 10:50:20 +0000, Norman Wells wrote:

    On 05/02/2025 10:30, Jethro_uk wrote:
    On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of
    the commander and the flight technicians in fully automated
    vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a
    Train Captain) on board who is able to take over and drive it
    manually if necessary. And the trains themselves don't need any
    routing capability, because they run on rails. It's a very far cry
    from a fully autonomous road vehicle.

    And how many times have they been called into action to drive a train
    manually? Perhaps you'd tell us. Otherwise it seems just a union
    ruse to employ people to do nothing, as is their wont.

    It's not just driving the train. It's ensuring there's nothing fouling
    the doors as it departs. Something which - despite being sensored up
    the wazzoo - still occurs. With potentially fatal results.

    Seems like we'll need an on-board captain then, presumably with
    government accreditation, for any autonomous vehicle journey, even a
    private car.

    That's a bit of a downer.

    Sorry.

    What the blazes are you blithering on about ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Wed Feb 5 12:51:33 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0fg35F4497U3@mid.individual.net...
    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train
    Captain) on board who is able to take over and drive it manually if
    necessary. And the trains themselves don't need any routing capability,
    because they run on rails. It's a very far cry from a fully autonomous road >> vehicle.

    And how many times have they been called into action to drive a train manually?
    Perhaps you'd tell us. Otherwise it seems just a union ruse to employ people to do
    nothing, as is their wont.


    Their presence reassures passengers who appreciate the human touch
    in the event of a breakdown. And can also act as a useful deterrent against vandalism and anti-social behaviour in the event of any such breakdown
    taking place.

    Much the same as on the Victoria Line; which has been fully automatic
    using ATO ever since its inception in 1968; but which nevertheless has
    a "driver" in the cab.

    Not of course that you might be expected to understand much
    of this, for all too obvious reasons.




    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Nick Finnigan on Wed Feb 5 12:55:07 2025
    "Nick Finnigan" <nix@genie.co.uk> wrote in message news:vnu2a7$20ufs$1@dont-email.me...
    On 04/02/2025 15:05, Norman Wells wrote:
    On 04/02/2025 12:32, Mark Goodge wrote:

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.


    Which autonomous vehicles don't use AI for object recogntion?

    All of them.

    An object is an object; and is simply something to be avoided surely ?

    Why would there be any need to discriminate ?


    bb




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Wed Feb 5 13:02:23 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0gsm3FapjhU2@mid.individual.net...
    On 04/02/2025 20:43, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0f7miF2crrU1@mid.individual.net...
    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0etb5Fu84aU2@mid.individual.net...

    On 04/02/2025 15:41, billy bookcase wrote:

    snip

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than >>>>>> the optimum outcome either permissible, or desirable, in real world >>>>>> situations,

    As it happens Norman is entirely correct in claiming AI has no role to play
    in autonomous vehicles. But entirely wrong in imagining all that all potential
    real life situations and problems which might arise can be anticipated, never
    mind modelled in experimental set-ups Such as would allow autonomous vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to studies showing
    that
    autonomous vehicles are in fact already safer than human driven ones. >>>>
    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the world's first fully sized
    self-driving bus from the spring"

    Does that indicate there is any substantial doubt, even on British urban streets?

    This one ?

    quote

    The UK's first full-sized driverless buses have started operating for
    passengers in Edinburgh.

    The autonomous vehicles, operated by Stagecoach, run across the Forth
    Road Bridge from Ferrytoll park-and-ride in Fife to Edinburgh Park station. >>
    Despite the "driverless" name, the buses operate with two staff on board.

    One will sit in the driver's seat to monitor the technology, and a
    so-called bus captain will help passengers with boarding and buying tickets

    Unions again, eh?

    unquote

    https://www.bbc.co.uk/news/uk-scotland-edinburgh-east-fife-65589913

    To repeat ---

    "One will sit in the driver's seat to *monitor the technology"

    Doubless with a big handle or button conveniently placed nearby.

    Just in case the "monitoring" gets a bit "stressful"

    Got any stats on how many times it has gone wrong and he's been needed?

    This is Endinburgh we're talking about..

    Just remind me again.

    Where was "Trainspotting" located, exactly ?

    That's something else to add to your database, once you've
    looked it up.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Simon Parker on Wed Feb 5 13:50:18 2025
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0h3kgFqoilU4@mid.individual.net...



    snippage

    ( Autopilots can malfunction but then who ever claimed otherwise ? )

    That AI is used in AVs for object recognition is a given.

    Complete and utter nonsense.

    If a transmitter sends out signals which are reflected back within a
    specified period, then that is all the evidence which is required of the presence of an object in that direction, at that distance; which therefore constitutes something which needs be avoided,.

    Even robot vacuum cleaners are fitted with that capability.

    And in the latter case, whether its an armchair or the family Labrador makes absolutely no difference whatsoever. And its exactly the same with AV's

    And will remain the case until such time as they, AV's decide for themselves
    to deliberately run humans down. As opposed to say running into trees
    etc. And that will be the point, and only then, where AI has finally kicked in

    Anyone claiming that there is no AI in AVs can be dismissed as talking
    from a position of ignorance. As can anyone that agrees with them. :-)

    You do realise of course that that particular observation adds nothing whatsoever
    to your argument ? Which in itself is so specularly wrong as to be almost embarrassing ?

    snip


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Mark Goodge on Wed Feb 5 14:33:10 2025
    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    Satnavs should, and autonomous vehicle controls will have to, have some >>>> way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never
    worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I
    have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    Please name three different programming languages which you have used that include the word "not" as an operator.

    FORTRAN, Lisp, Modula2, Pascal (I can't count)

    And a number of others that use ! or some other cryptic symbol as not.

    But, anyway, it's not the syntax I was querying. It's how to ensure that the existence of a blockage, and its precise location, is passed to the software promptly and in a format which can easily be interpreted by the software.
    For example, when the notification comes in the form of a policeman saying "you can't go that way, mate, it's blocked".

    They can't even manage to do that reliably on the motorway networks
    where "smart" motorways frequently have the obstruction ahead signs
    completely out of whack with the actual situation on the ground. In
    several cases recently I have seen a stranded vehicle stuck just after
    the "derestricted" all clear sign (after several gantries on 50mph).

    The way it works at present on the Satnavs that have realtime live
    traffic updates is that the system predicts increased journey time and
    flags sections of road ahead red or yellow where the traffic speed from
    other users is unreasonably slow. I have ours set to only deviate if the
    (long) journey time increases by more than 15 minutes.

    It seems to work well enough and you can sometimes tell who else is
    using the same nav system by where they come off the normal route.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to billy bookcase on Wed Feb 5 15:35:35 2025
    On Wed, 05 Feb 2025 12:51:33 +0000, billy bookcase wrote:

    Much the same as on the Victoria Line; which has been fully automatic
    using ATO ever since its inception in 1968; but which nevertheless has
    a "driver" in the cab.

    For interest, the Moorgate crash was in 1975.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Wed Feb 5 15:39:01 2025
    On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:

    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>
    Satnavs should, and autonomous vehicle controls will have to, have some >>>>> way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never
    worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I >>> have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    Please name three different programming languages which you have used that >> include the word "not" as an operator.

    FORTRAN, Lisp, Modula2, Pascal (I can't count)

    Shhh! I'd set that as a test for Norman!

    And a number of others that use ! or some other cryptic symbol as not.

    Pretty much every language has a negation operator. It would be almost impossible to write useful code if it didn't. But, these days, the operator
    is, as you say, typically a symbol rather than the word "not". That's why I
    was curious as to how recent Norman's programming knowledge, if any, is.

    The way it works at present on the Satnavs that have realtime live
    traffic updates is that the system predicts increased journey time and
    flags sections of road ahead red or yellow where the traffic speed from
    other users is unreasonably slow. I have ours set to only deviate if the >(long) journey time increases by more than 15 minutes.

    That's how Google Maps and Waze do it. It does obviously rely on the system having continual two-way communication with the central server. A standalone sat-nav can't do that.

    It seems to work well enough and you can sometimes tell who else is
    using the same nav system by where they come off the normal route.

    One scenario where it doesn't work is late at night or on lightly used roads where there isn't enough traffic to give a reliable statistical sample.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 16:21:48 2025
    On Wed, 5 Feb 2025 11:07:54 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 10:33, Mark Goodge wrote:
    On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    One wonders how my little Garmin copes then.

    It copes because most speed limits don't change very often, and when they do >> change, they usually only affect a short stretch of road. Provided its data >> sources include sources of speed limits, it can usually update that data
    before most people notice. But if you happen to drive along a stretch of
    road where there has been a recent speed limit change, then you will
    probably observe that your sat-nav doesn't immediately reflect that. Again,, >> a few minutes with Google (or search engine of your choice) will find plenty >> of instances of people complaning that their Garmin sat-nav has got a speed >> limit wrong, or doesn't update when the limit changes.

    That may be because they haven't updated their maps for some time.

    No; if you do as I suggest you will find that the people complaining have up
    to date information. In fact, a common commplaint is "I updated my maps, but the speed limit is still wrong".

    Although, of course, if it needs the user to perform the update anyway then it's not real time data.

    But
    there's no reason why live mapping should have any speed limit wrong for >long.

    How long is "for long"? And how long would be too long?

    The thing is, though, that there obviously are data-bases of speed
    limits out there that are rather more comprehensive, up-to-date and >accessible than you think.

    There are plenty of databases of speed limits out there, compiled from a variety of sources including crowd-sourcing. Waze is often regarded as
    having the most up to date data because it provides an easy means for users
    to give it that information. Here Maps uses pooled vehicle data, which is
    very reliable in situations where there is sufficient traffic to generate a statistically significant sample. Google, as I've previously stated, uses optical data from Streetview cars as well as pooled vehicle data.

    But my point was simply that there is no official, canonical, comprehensive database of speed limits, and nor is there likely to be any time soon. The reason why Here, Google and Waze need to create their own databases using secondary sources is because there is no primary source. Oddly enough,
    Google, Here and Waze all boast that their system is the best. They wouldn't need to do that if they were simply drawing on official data.

    For human drivers that's not a major issue, there's an expectation that
    human drivers will still be able to read signs even where the software
    doesn't. But an autonomous vehicle only has the software, it doesn't have a >> human able to give it information.

    No, they will also have an array of sensors that vastly exceed any
    human. The satnav speed limit data-base is only part of their toolkit.

    Again, that is my point. To be fully autonomous, a vehicle will need to be
    able to derive the necessary data from its surroundings, context and memory without relying solely on information transmitted from a central system.
    That is, it will need to be able to behave like a human driver. Which, as
    yet, they can't do. They'll get there eventually. But getting there isn't a trivial problem to solve.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to billy bookcase on Wed Feb 5 16:17:29 2025
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0h3kgFqoilU4@mid.individual.net...

    snippage

    ( Autopilots can malfunction but then who ever claimed otherwise ? )

    That AI is used in AVs for object recognition is a given.

    Complete and utter nonsense.

    If a transmitter sends out signals which are reflected back within a specified period, then that is all the evidence which is required of the presence of an object in that direction, at that distance; which therefore constitutes something which needs be avoided,.

    Even robot vacuum cleaners are fitted with that capability.

    And in the latter case, whether its an armchair or the family Labrador
    makes absolutely no difference whatsoever. And its exactly the same
    with AV's

    And will remain the case until such time as they, AV's decide for
    themselves to deliberately run humans down. As opposed to say running
    into trees etc. And that will be the point, and only then, where AI
    has finally kicked in

    Anyone claiming that there is no AI in AVs can be dismissed as talking
    from a position of ignorance. As can anyone that agrees with them. :-)

    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly
    wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Wed Feb 5 16:31:01 2025
    On Wed, 5 Feb 2025 12:36:57 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    Having worked extensively with the local council on a related matter,
    [^1], I can assure you that TROs are created individually by each LA.
    They do not even create their own searchable repository of TROs, never
    mind anything even approaching a database of all TROs nationwide which
    third parties can access in real time.

    Actually, my local highway authority does provide a searchable repository of recent TROs, provided you know how to use Google's advanced search tools:

    https://www.google.co.uk/search?q=%22speed+limit%22+inurl%3Ahttps%3A%2F%2Fwww.worcestershire.gov.uk%2Fcouncil-services%2Fplanning-and-developments%2Fplanning-applications%2Fpublic-notices&sca_esv=df054736d75a2025&sxsrf=AHTn8zo5eX_VoDyUDF9lBIH4odn0sWyKEw%
    3A1738772754354&source=hp&ei=EpGjZ6qXEO6khbIP3IjHiAE&iflsig=ACkRmUkAAAAAZ6OfIl-BqseInoxRKkenvn3r1LuXIAC2&ved=0ahUKEwiq6POi-ayLAxVuUkEAHVzEEREQ4dUDCBk&uact=5&oq=%22speed+limit%22+inurl%3Ahttps%3A%2F%2Fwww.worcestershire.gov.uk%2Fcouncil-services%
    2Fplanning-and-developments%2Fplanning-applications%2Fpublic-notices

    Maybe they've done a deal with NormTech for the funding.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Todal@21:1/5 to All on Wed Feb 5 16:32:48 2025
    On 05/02/2025 15:35, Jethro_uk wrote:
    On Wed, 05 Feb 2025 12:51:33 +0000, billy bookcase wrote:

    Much the same as on the Victoria Line; which has been fully automatic
    using ATO ever since its inception in 1968; but which nevertheless has
    a "driver" in the cab.

    For interest, the Moorgate crash was in 1975.



    If a tube train is driverless and has no member of staff onboard, what
    happens if a passenger is taken ill? Or if there is a knife-fight? Or if
    there is a bomb? Or if the train collides with a person on the track? Or
    if the doors have come open during the journey?

    It would presumably up to the passengers themselves to find a way of
    summoning help or evacuating the train. Or wait for someone else to take
    the initiative.

    I know, I know. Just unions trying to preserve their domain and impede
    the profitability of the service.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 5 15:40:05 2025
    On 05/02/2025 12:33, Simon Parker wrote:
    On 04/02/2025 16:14, Norman Wells wrote:
    On 04/02/2025 14:47, Simon Parker wrote:
    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too close >>>>> to a vehicle in front, lane-keep assist.

    Those are exmples of automation, as with autpilots. Where the each set >>>> of inputs, no matter how complex, will produce exactly the same result >>>> every time.

    AI systems on the other hand, learn and make up new rules as they go
    along.
    And so the process by which they actually arrive at particular
    decisions
    may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried about, >>>
    I give you the details of Lufthansa LH1829 below.  Numerous other
    examples are available, (some of which may include the grounding of
    the entire Boeing 737 MAX fleet).

    Whilst some knowledge of aviation terms will help in reading what
    follows, it is by no means mandatory.

    Please detail, as simply as possible, the likely outcome for flight
    LH1829 had the commander not been present and had he not had access
    to flight technicians via Telex.

    Obviously, it would have crashed, killing all on board.

    Most people, hopefully you included, would consider that to be a "sub- optimal outcome".

    You just asked me what would happen, not what I thought of it.

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one.  Just as there isn't on the Docklands Light Railway.

    <SFX: Buzzer>

    I recommend researching the role and responsibilities of the "Passenger Service Agent" on the DLR.

    And navigating the DLR is light years away from a Level 5 AV.

    Of course the DLR is easier to manage autonomously than road vehicles;
    no-one claims otherwise. But how often has human intervention been
    required? If only very rarely, the Passenger Service Agent is
    unnecessary and redundant, except of course from the union's point of
    view, which is wholly unrelated to safety.

    But level 5 just represents perfection which is the enemy of good and
    not necessarily required anyway. Level 4 may be quite sufficient, meaning:

    "The vehicle operates completely autonomously under certain conditions.
    The human being no longer has to be ready to intervene. He can work,
    watch movies and even sleep. And the vehicle may also drive alone, i.e.
    without occupants. It must be able to reach a safe state without the intervention of a human driver, i.e. to come to a standstill in a
    parking lot for example. However, the autonomy of the vehicle at Level 4
    is still linked to certain conditions, such as a defined route, driving
    on the highway or in the parking garage. Numerous companies are already
    testing level 4 vehicles in road traffic".

    https://www.zf.com/mobile/en/technologies/automated_driving/stories/6_levels_of_automated_driving.html#:~:text=Numerous%20companies%20are%20already%20testing,in%20road%20traffic%2C%20including%20ZF.&text=Germany%20has%20created%20the%20framework,as%20the%
    20first%20country%20worldwide.

    It's far closer to realisation than many here seem to credit, and such
    vehicles may be on the roads here next year:

    https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-roads-by-2026-as-automated-vehicles-act-becomes-law

    But a failsafe default of come to a halt and turn everything off,
    which would doubtless be in any autonomous vehicle, isn't really an
    option doing 500mph at 30,000 ft.
    Remind me not to travel in a NormCar (TM) at any time now or in the
    future.  I do not really want to be in a vehicle that in the event of an incident is designed and programmed to come to a halt in the right-most
    lane of a motorway and turn off both its headlights and hazard lights
    despite it being night time.

    Well, of course it wouldn't. It may in other brands during the
    debugging phase but not in Normcars.

    You claim to have programming experience so you ought to be familiar
    with the Pareto Principle.  The first 80% is relatively easy.  That last 20% is the killer.  80% of the bugs are in 20% of the code.  80% of the complexity comes from 20% of the code.  80% of the R&D is spent on 20%
    of the application.  AVs don't even have the initial 80% mastered yet.

    Call it by whatever fancy name you like, it's the same old 80/20 rule.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Wed Feb 5 16:26:01 2025
    On 05/02/2025 15:35, Jethro_uk wrote:
    On Wed, 05 Feb 2025 12:51:33 +0000, billy bookcase wrote:

    Much the same as on the Victoria Line; which has been fully automatic
    using ATO ever since its inception in 1968; but which nevertheless has
    a "driver" in the cab.

    For interest, the Moorgate crash was in 1975.

    And on the Northern Line.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 5 15:58:36 2025
    On 05/02/2025 12:15, Simon Parker wrote:
    On 04/02/2025 15:25, Norman Wells wrote:
    On 04/02/2025 12:19, Max Demian wrote:
    On 03/02/2025 19:05, Norman Wells wrote:

    Provided it is working on a live real-time mapping system in which
    road closures are clearly indicated, which is not beyond the wit of
    man, the satnav will simply re-route to avoid them.  They're very
    good at finding alternative routes when you miss a turning for
    example, so it's not a new or insurmountable problem.

    Won't the satnav keep telling you to take the original route? Is
    there a way to tell it that there is something in the way? The same
    problem would occur in the case of an autonomous vehicle.

    Satnavs should, and autonomous vehicle controls will have to, have
    some way programmed in to discard any routes where there is a notified
    blockage.  That's a trivial programming exercise.

    Is it?

    Of course it is.

    Can you post a copy of your program please along with relevant
    certification for the design, coding, debugging, testing and deployment.

    What certification do you consider necessary? And who certifies it?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Jon Ribbens on Wed Feb 5 16:33:51 2025
    On 05/02/2025 16:17, Jon Ribbens wrote:
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    snippage

    ( Autopilots can malfunction but then who ever claimed otherwise ? )

    That AI is used in AVs for object recognition is a given.

    Complete and utter nonsense.

    If a transmitter sends out signals which are reflected back within a
    specified period, then that is all the evidence which is required of the
    presence of an object in that direction, at that distance; which therefore >> constitutes something which needs be avoided,.

    Even robot vacuum cleaners are fitted with that capability.

    And in the latter case, whether its an armchair or the family Labrador
    makes absolutely no difference whatsoever. And its exactly the same
    with AV's

    And will remain the case until such time as they, AV's decide for
    themselves to deliberately run humans down. As opposed to say running
    into trees etc. And that will be the point, and only then, where AI
    has finally kicked in

    Anyone claiming that there is no AI in AVs can be dismissed as talking
    from a position of ignorance. As can anyone that agrees with them. :-)

    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly
    wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    I think you may be mistaken there. AI involves crunching huge
    quantities of data to come up with conclusions. That is assuredly *not* happening in the tiny confines of a satnav chip which merely uses
    earlier and elsewhere-formed AI conclusions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 16:28:42 2025
    On 05/02/2025 15:39, Mark Goodge wrote:
    On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:

    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>>
    Satnavs should, and autonomous vehicle controls will have to, have some >>>>>> way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never
    worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I >>>> have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    Please name three different programming languages which you have used that >>> include the word "not" as an operator.

    FORTRAN, Lisp, Modula2, Pascal (I can't count)

    Shhh! I'd set that as a test for Norman!

    And a number of others that use ! or some other cryptic symbol as not.

    Pretty much every language has a negation operator. It would be almost impossible to write useful code if it didn't. But, these days, the operator is, as you say, typically a symbol rather than the word "not". That's why I was curious as to how recent Norman's programming knowledge, if any, is.

    The operator is still 'not', whatever symbols are used to invoke it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to billy bookcase on Wed Feb 5 16:24:38 2025
    On 05/02/2025 12:51, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0fg35F4497U3@mid.individual.net...
    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the >>>>> commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train >>> Captain) on board who is able to take over and drive it manually if
    necessary. And the trains themselves don't need any routing capability,
    because they run on rails. It's a very far cry from a fully autonomous road >>> vehicle.

    And how many times have they been called into action to drive a train manually?
    Perhaps you'd tell us. Otherwise it seems just a union ruse to employ people to do
    nothing, as is their wont.


    Their presence reassures passengers who appreciate the human touch
    in the event of a breakdown. And can also act as a useful deterrent against vandalism and anti-social behaviour in the event of any such breakdown
    taking place.

    Much the same as on the Victoria Line; which has been fully automatic
    using ATO ever since its inception in 1968; but which nevertheless has
    a "driver" in the cab.

    Not of course that you might be expected to understand much
    of this, for all too obvious reasons.

    Can you understand that none of this soft soap has anything at all to do
    with autonomous operation?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to All on Wed Feb 5 16:35:33 2025
    On 05/02/2025 12:50, Jethro_uk wrote:
    On Wed, 05 Feb 2025 10:50:20 +0000, Norman Wells wrote:

    On 05/02/2025 10:30, Jethro_uk wrote:
    On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of >>>>>>> the commander and the flight technicians in fully automated
    vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a
    Train Captain) on board who is able to take over and drive it
    manually if necessary. And the trains themselves don't need any
    routing capability, because they run on rails. It's a very far cry
    from a fully autonomous road vehicle.

    And how many times have they been called into action to drive a train
    manually? Perhaps you'd tell us. Otherwise it seems just a union
    ruse to employ people to do nothing, as is their wont.

    It's not just driving the train. It's ensuring there's nothing fouling
    the doors as it departs. Something which - despite being sensored up
    the wazzoo - still occurs. With potentially fatal results.

    Seems like we'll need an on-board captain then, presumably with
    government accreditation, for any autonomous vehicle journey, even a
    private car.

    That's a bit of a downer.

    Sorry.

    What the blazes are you blithering on about ?

    Well, someone has to ensure the doors aren't being fouled with
    potentially fatal results.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 5 15:56:31 2025
    On 05/02/2025 12:14, Simon Parker wrote:
    On 04/02/2025 15:20, Norman Wells wrote:
    On 04/02/2025 12:16, Mark Goodge wrote:
    On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    Provided it is working on a live real-time mapping system in which road >>>> closures are clearly indicated, which is not beyond the wit of man, the >>>> satnav will simply re-route to avoid them.

    But that's the point. It wasn't. It can't be. When the police decide to
    block a road and turn people away, that information isn't available to
    navigation systems in real time.

    It would be trivially easy to do though.  Police alert HQ, HQ tells
    mapping, mapping puts a block where the accident is, autonomous
    vehicles re-route to avoid it.

    Is the police officer placing a general call with HQ or to a specific
    desk at HQ?  What happens if the request cannot be connected to HQ immediately?  How do HQ alert mapping? Is HQ alerting Google Maps, Apple Maps, Garmin, et al individually are are they alerting a central
    resource which subsequently pushes this data out to various service providers?

    Not my province. I just show how it can easily be done. The details
    can be left to minions.

    Not forgetting, of course, and quite topical at the moment: who is
    paying for all this?

    I do believe electronic communications are to all intents and purposes
    free these days. There is no reason why most of it cannot be automated
    and cost very little.

    At present, Google, Apple and Garmin get their data, mostly, without
    charge.

    Do they? Where from?

    Why would they want to move to a model that involves them
    paying for what they currently get for free?

    It's in the national interest that such information should be freely
    available and used.

    It's the same problem as above.  If the vehicle knows from an update to >>>> the live map that there is a road blockage, as should be a priority
    when
    autonomous vehicles become common, it will simply re-route to avoid it. >>>
    But, again, that's the point. This information is not available in
    real time
    to the navigation systems. A human can cope with route-finding if
    necessary
    without resorting to them.

    Except that a human won't do anything either until and unless he knows
    where a blockage is.

    As autonomous vehicles become used there will have to be a requirement
    for accurate and rapid information on blockages.

    Information on congestion is readily available already in real time.
    There's no technical reason why the same could not be done for blockages.

    It is clear that you do not understand how the current system works
    because, if you did, you would know why this will not work for blockages
    such as those being described.

    But it could, and should. There is no technological reason why it can't.
    >>>> Otherwise, but not too fancifully, it could be programmed to
    recognise
    the uniform (just as you do), raise an antenna with a microphone, and
    tell him to talk clearly to Siri identifying the exact location of the >>>> problem using what three words or whatever.

    Well, yes. But that's a lot more complex than merely following a
    route. It
    requires at least reasonable voice recognition, for example, and then
    being
    able to follow the instructions given.

    These things exist already.

    If the policeman has a Scottish accent?  Or Brummie?  (I recommend searching for "Amy101" if the second reference doesn't elicit a "light
    bulb" moment.)

    Then it will need to go through Google Translate, as usual.

    I don't doubt that, given sufficiently advanced AI, this will be a
    solvable
    problem. But it's not a problem you can solve just with a set of preset
    rules. So until autonomous vehicles have AI which is as robust at
    problem
    solving as human intelligence, it isn't really autonomous.

    It's not AI, which is something entirely different.

    Modern, (c.f. "Current"), voice recognition, (i.e. the very systems you
    are referencing above that "already exist") is "not AI"?  Are you sure?
    I recommend phoning a friend, or asking the audience before you lock in
    your answer. :-)

    It is not AI in itself, though it is the product of previous AI which
    mined the data.

    Also, please detail which object recognition system you are using in
    NormCars (TM) that can recognise a police officer by their uniform alone
    but which does not employ AI.

    Again it's not AI in itself, but just the ability earlier AI has given
    the vehicle software.

    Going on about live routing data is a red herring. As well as ongoing
    situations where the data simply isn't available to the system (eg,
    where
    the police decide to block a road), you also have to be able to cope
    with
    situations where the data exists but isn't available to the device
    (eg, in
    mobile notspots). To be truly autonomous, a self-driving vehicle has
    to be
    able to operate solely on the basis of road signs, markings, offline
    maps
    where available and, if necessary, verbal instructions from a human.
    Just
    like humans can.

    And just like autonomous vehicles can already, but safer.

    There are currently no SAE Level 5 automated cars deployed anywhere in
    the world.  There are some SAE Level 4 automated vehicles operating in strictly controlled areas but they are in strictly controlled areas for precisely the reasons Mark mentioned.

    And can move seamlessly into the mainstream when properly debugged and
    proven, provided the law has moved with the times and doesn't contain
    absurd backward-looking restrictions. Which I'm glad to say the
    Automated Vehicles Act 2024 doesn't.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 15:59:04 2025
    On Wed, 5 Feb 2025 10:31:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 21:22, Mark Goodge wrote:

    It's a shuttle bus that runs back and forth between just two locations. The >> entirety of its route on the public highway is on dual carriageways. And it >> still needs someone to sit in the driver's seat to "monitor the technology". >> Oh, and it's not even autonomous end to end.

    Their driverless buses have sensors enabling them to travel on
    pre-selected roads at up to 50mph.

    On each trip they switch between autonomous and manual driving, being
    controlled by computer across the Forth Road Bridge, and from the M90 and >> M8 to the A720 in Edinburgh.

    https://www.bbc.co.uk/news/articles/c0q0lg3j1exo

    Does that indicate there is any substantial doubt, even on British urban >>> streets?

    It indicates that there's still considerable doubt, as it happens.

    What it indicates to me, in the absence of any statistics of failure
    rates, is that the law which requires such precautions may be rather >out-of-date.

    Did you actually read the quote I cited? It's not autonomous end to end.
    Only over a certain section of the route. So it has to have a human driver
    on board, for the bits that aren't autonomous.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 5 16:03:33 2025
    On 05/02/2025 12:31, Simon Parker wrote:
    On 04/02/2025 21:22, Mark Goodge wrote:
    On Tue, 4 Feb 2025 19:21:53 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 04/02/2025 18:49, billy bookcase wrote:
    "Norman Wells" <hex@unseen.ac.am> wrote in message
    news:m0etb5Fu84aU2@mid.individual.net...

    You must then have missed the earlier references in this thread to
    studies showing that
    autonomous vehicles are in fact already safer than human driven ones. >>>>
    The studies were carried out in California.

    Not on the Hanger Lane roundabout.

    Nevertheless, from a document I cited earlier:

    "Passengers in Edinburgh will be able to enjoy the world’s first fully >>> sized self-driving bus from the spring"

    It's a shuttle bus that runs back and forth between just two
    locations. The
    entirety of its route on the public highway is on dual carriageways.
    And it
    still needs someone to sit in the driver's seat to "monitor the
    technology".
    Oh, and it's not even autonomous end to end.

       Their driverless buses have sensors enabling them to travel on
       pre-selected roads at up to 50mph.

       On each trip they switch between autonomous and manual driving, being >>    controlled by computer across the Forth Road Bridge, and from the
    M90 and
       M8 to the A720 in Edinburgh.

    https://www.bbc.co.uk/news/articles/c0q0lg3j1exo

    Does that indicate there is any substantial doubt, even on British urban >>> streets?

    It indicates that there's still considerable doubt, as it happens.

    It is worse than that:

    The service has been withdrawn as there were insufficient passenger
    numbers to make it cost-effective.  (Not surprising when the cost of all
    the tech, plus a "not a driver" and "conductor" were factored in. I.e.
    all the costs of the original bus, plus all the AV tech plus two members
    of staff on board rather than the usual one made the project unfeasible
    on cost grounds.)

    There were insufficient passenger numbers to make running the route cost-effective on an ongoing basis. There was never any question of the passengers on that particular route alone having to meet all the startup
    costs of research and development, which would have made the fares prohibitively expensive.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Wed Feb 5 16:43:27 2025
    On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:


    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    If all goes well, I should be retired and living a life of leisure by then.
    So, if I'm rich enough, I might be able to say "Alexa, take me to the pub".

    (Actually, I've always said that I won't get a "smart" device until I can choose what to call it. I'm not going to be asking Google, Siri, Alexa or Copilot to take me to the pub; what I really want is to be able to say "Zen, take me to the pub". Or possibly "Holly, take me to the pub". Or even just plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take
    me to the pub" in case it refused. Or worse, if it took me there, but
    refused to take me home).

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 5 16:21:58 2025
    On 05/02/2025 12:30, Simon Parker wrote:
    On 04/02/2025 16:25, Norman Wells wrote:
    On 04/02/2025 15:41, billy bookcase wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0enkeFqoikU2@mid.individual.net...

    Whilst some knowledge of aviation terms will help in reading what
    follows, it is by no
    means mandatory.

    Please detail, as simply as possible, the likely outcome for flight
    LH1829 had the
    commander not been present and had he not had access to flight
    technicians via Telex.

    Then please detail which parties will be fulfilling the roles of the
    commander and the
    flight technicians in fully automated vehicles.

    I fail to see any possible relevance in any of of your questions.

    Unless you can detail where I have ever claimed that AI has any possible >>> role to play in either "autonomous" vehicles, or in autopilots.

    AI relies on measuring outcomes, in order to "learn".

    In neither "autonomous" vehicles nor autopilots is anything less than
    the optimum outcome either permissible, or desirable, in real world
    situations,

    As it happens Norman is entirely correct in claiming AI has no role
    to play
    in autonomous vehicles. But entirely wrong in imagining all that all
    potential
    real life situations and problems which might arise can be
    anticipated, never
    mind modelled in experimental set-ups  Such as would allow autonomous
    vehicles
    safely onto our roads

    You must then have missed the earlier references in this thread to
    studies showing that autonomous vehicles are in fact already safer
    than human driven ones.

    They are currently allowed already in parts of the USA, and in France
    and Germany.

    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-
    regional-and-local-transport/
    #:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    In the UK, they could be allowed by 2026.

    It is my sad duty to inform you that you have, again, engaged in your
    usual habit of speaking with great authority on a matter whilst relying
    on Google searches and quick skims of articles in the background.

    According to the Department for Transport, which is a bit horse's mouthy:

    "Self-driving vehicles set to be on roads by 2026 as Automated Vehicles
    Act becomes law"

    https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-roads-by-2026-as-automated-vehicles-act-becomes-law

    To much fanfare, Germany did indeed announce approval for autonomous
    vehicles in July of last year.  But those familiar with the system, (in contrast to those relying on Google to prop up their arguments), will be aware of the following key points:

    (1) This is a test / trial / whatever you want to call it of just six autonomous vehicles (AVs), not a mass roll-out.

    Did anyone claim it was?

    (2) It is tightly restricted to the Rhine-Main region, more specifically
    the city of Darmstadt in the Offenbach district, and even more
    specifically restricted only to defined routes within that city.

    (3) As can be gathered from point (2), but worth stating for the
    avoidance of doubt, the vehicles involved are Level 4 AVs, NOT Level 5.
    They have a "safety monitor" behind the wheel and are also supervised remotely at a control centre.

    Well, I suppose they may do, but according to the official definition of
    Level 4:

    "The vehicle operates completely autonomously under certain conditions.
    The human being no longer has to be ready to intervene. He can work,
    watch movies and even sleep."

    (4) In the initial part of the trial, the AVs will not have passengers
    on board and will instead focus on collecting data.

    (5) In time, project staff members will ride as passengers.  (This
    mirrors the Waymo roll-out in San Francisco.)

    (6) If the trials are successful, project staff members will be followed
    by testing using "Approved Users" (again, mirroring Waymo's "Trusted Testers").

    If they are Level 4 as you say, they already comply with the above.

    So, just to be clear: In Germany, on specific pre-defined routes in a
    single city, there are trials involving six cars that have a "not a
    driver" at the wheel and a remote operator back in a control room
    shuttling staff members, and carefully selected members of the public
    along those specific pre-defined routes.

    It is hardly The Jetsons, is it?

    JJ Ricks documents each of his journeys using Waymo.  I invite you to
    watch this video:

    https://www.youtube.com/watch?v=zdKCQKBvH-A

    Note that at around 6:00 (Chapter "Right Turn Detour") the Waymo takes a right turn earlier than expected to make an easier left turn thereby
    avoiding a complicated junction (an difficult "unprotected left" which
    Waymo has a known habit of avoiding).  It does the same at around the
    ten minute mark (see previous comment on "known habits" of current AVs).

    How very sensible.

    Things get really messy from the 12:35 mark onward.  I actively invite
    you to watch it for yourself, but in the meantime, here's Waymo's
    official explanation of what happened:

    "While driving fully autonomously through an extended work zone, the
    Waymo Driver detected an unusual situation and requested the attention
    of a remote Fleet Response specialist to provide additional information.
     During that interaction, the Fleet Response team provided incorrect guidance, which made it challenging for the Waymo Driver to resume its intended route, and required Waymo's Roadside Assistance team to
    complete the trip.  While the situation was not ideal, the Waymo Driver operated the vehicle safely until Roadside Assistance arrived.
    Throughout, Waymo's team was in touch with the rider, who provided
    thoughtful and helpful feedback that allows us to continue learning and improving the Waymo Driver.  Our team has already assessed the event and improved our operational process."

    Do you have anything a bit more recent than 3 years ago? Things haven't
    stood still in the interim.

    My explanation:

    The AV encountered a temporary no right turn because of roadworks marked
    by cones, (Ed: something we definitely do not have in the UK!), and
    could not complete the right turn it expected to make and (to use Waymo terminology), "paused" as it was incapable of re-routing in real time. A remote operator spoke to the passenger and advised him that "Roadside Assistance" (i.e. a human driver) was on the way.  Whilst waiting for a human driver to arrive, the Waymo Driver, decided it was going to make
    the right turn after all and having done so, then completely blocked the road.  However, as it was no longer "paused", "Roadside Assistance" was cancelled.  Except for the fact that the car was now sat in an active
    lane "paused" waiting for a driver to come and rescue it, but it had
    just cancelled the human, as it unpaused.  There were several "false
    starts" and the AV eventually gave up and put the hazard lights on in
    the "fast-lane" of the active carriageway.  Roadside Assistance, (aka
    the human driver), arrived but the car took off again.  This happened several times before they finally accessed the vehicle at around 29
    minutes at which point the human drove the rest of the way, (or in Waymo speak, "the ride was completed in manual").

    And remember, this is on controlled, specific pre-defined routes.
    Thankfully, Waymo keep "Roadside Assistance" (i.e. humans) around 5-6
    miles from each car.

    "Ah, but that video was three years ago!", I hear some cry.

    Indeed, so why go on about it in such depth?

    OK then, here's a more recent one where the Waymo gets stuck in a loop:

    https://www.youtube.com/watch?v=esLzeHKhGzg

    Or another where it tries to drive through a temporary construction zone (from December 2024):

    https://www.youtube.com/watch?v=2ihN0IkIMPg

    Despite claims to the contrary, this technology is not anywhere close to
    a mass roll-out yet.

    It's a process called debugging.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 16:47:38 2025
    On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 10:36, Mark Goodge wrote:
    On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    The USA is considerably ahead of us, but this is where we're at in Europe: >>>
    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    Yes, we're at the point where, as you conveniently highlighted for us "most >> of them travel at low speed, on fixed routes and still with safety drivers".

    It's worth noting that report is nearly two years old in what is a very >fast-moving field.

    So why didn't you provide me with a more up to date link?

    After all, you keep confidently asserting that we've already got fully autonomous vehicles up and running, so you must have seen this mentioned somewhere on the web, surely?

    That does not, to me, suggest that all of the problems inherent with
    autonomous vehicles have been solved, nor that solving those problems will >> be trivial.

    Of course they haven't all been solved, but the evidence is that AVs are >already safer than manually driven vehicles which is a very significant >marker point in their development. The quest now is to approach
    perfection with, of course, the law of diminishing returns applying.

    They're safer than human drivers under certain, carefully controlled and restricted, conditions. But they still make mistakes that human drivers are very unlikely to make, even if those mistakes are not directly dangerous.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 16:44:36 2025
    On 05/02/2025 16:21, Mark Goodge wrote:
    On Wed, 5 Feb 2025 11:07:54 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 10:33, Mark Goodge wrote:
    On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    One wonders how my little Garmin copes then.

    It copes because most speed limits don't change very often, and when they do
    change, they usually only affect a short stretch of road. Provided its data >>> sources include sources of speed limits, it can usually update that data >>> before most people notice. But if you happen to drive along a stretch of >>> road where there has been a recent speed limit change, then you will
    probably observe that your sat-nav doesn't immediately reflect that. Again,,
    a few minutes with Google (or search engine of your choice) will find plenty
    of instances of people complaning that their Garmin sat-nav has got a speed >>> limit wrong, or doesn't update when the limit changes.

    That may be because they haven't updated their maps for some time.

    No; if you do as I suggest you will find that the people complaining have up to date information. In fact, a common commplaint is "I updated my maps, but the speed limit is still wrong".

    Although, of course, if it needs the user to perform the update anyway then it's not real time data.

    But
    there's no reason why live mapping should have any speed limit wrong for
    long.

    How long is "for long"? And how long would be too long?

    Obviously, it should be as short as possible.

    The thing is, though, that there obviously are data-bases of speed
    limits out there that are rather more comprehensive, up-to-date and
    accessible than you think.

    There are plenty of databases of speed limits out there, compiled from a variety of sources including crowd-sourcing. Waze is often regarded as
    having the most up to date data because it provides an easy means for users to give it that information. Here Maps uses pooled vehicle data, which is very reliable in situations where there is sufficient traffic to generate a statistically significant sample. Google, as I've previously stated, uses optical data from Streetview cars as well as pooled vehicle data.

    But my point was simply that there is no official, canonical, comprehensive database of speed limits, and nor is there likely to be any time soon. The reason why Here, Google and Waze need to create their own databases using secondary sources is because there is no primary source. Oddly enough, Google, Here and Waze all boast that their system is the best. They wouldn't need to do that if they were simply drawing on official data.

    At least competition seems to be spurring on progress which, according
    to your description, local authorities don't seem to care about.

    If the 'secondary sources', whatever they are, do what is required, then
    the primary sources such as scanned copies of ancient TROs in Council
    vaults can stay where they are as far as I'm concerned. I wouldn't like
    to see us all dragged down to that sort of level.

    For human drivers that's not a major issue, there's an expectation that
    human drivers will still be able to read signs even where the software
    doesn't. But an autonomous vehicle only has the software, it doesn't have a >>> human able to give it information.

    No, they will also have an array of sensors that vastly exceed any
    human. The satnav speed limit data-base is only part of their toolkit.

    Again, that is my point. To be fully autonomous, a vehicle will need to be able to derive the necessary data from its surroundings, context and memory without relying solely on information transmitted from a central system.
    That is, it will need to be able to behave like a human driver. Which, as yet, they can't do. They'll get there eventually. But getting there isn't a trivial problem to solve.

    They are much closer already than you think.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 17:32:09 2025
    On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:21, Mark Goodge wrote:

    How long is "for long"? And how long would be too long?

    Obviously, it should be as short as possible.

    Well, obviously. But if the shortest possible was a week, would that be OK?

    Again, that is my point. To be fully autonomous, a vehicle will need to be >> able to derive the necessary data from its surroundings, context and memory >> without relying solely on information transmitted from a central system.
    That is, it will need to be able to behave like a human driver. Which, as
    yet, they can't do. They'll get there eventually. But getting there isn't a >> trivial problem to solve.

    They are much closer already than you think.

    I have no overwhelming reason to disagree with the industry consensus (as
    cited earlier by Simon Parker) that we're probably around ten years away
    from the first fully autonomous (level 5) vehicles on the roads in real life use (as opposed to mere testing). I'd be happy enough if we get there a bit quicker. I suspect that DeepSeek may be giving some developers grounds for optimism, given that it appears to be technically on a par with the best Western AI but at a much lower cost.

    Obviously, programming an autonomous vehicle is more of a challenge than programming a chatbot. But the underlying trade-offs of computing power v
    cost are essentially the same. So it would not surprise me at all if the
    first genuine level 5 vehicles come from China. Which would raise a whole
    lot of other ethical and legal issues.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to Norman Wells on Wed Feb 5 17:21:27 2025
    Norman Wells <hex@unseen.ac.am> wrote:
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Satnavs should, and autonomous vehicle controls will have to, have some
    way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never >> worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I
    have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    !

    <insert what Mark said here>

    Today I am working with a language that understands the word ‘not’ and a language that does not. I may need to use another language that does not understand ‘not’ tomorrow.

    Today’s exercise is also something well known to those who have done non-trivial software development. Writing the code is usually the smallest
    part of what needs to be done.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to The Todal on Wed Feb 5 17:33:46 2025
    On Wed, 05 Feb 2025 16:32:48 +0000, The Todal wrote:

    On 05/02/2025 15:35, Jethro_uk wrote:
    [quoted text muted]


    If a tube train is driverless and has no member of staff onboard, what happens if a passenger is taken ill?

    Presumably they would need to be removed from the train ASAP. Not sure
    that requires a dedicated operative *on the train*

    Or if there is a knife-fight?

    What would the dedicated operative be able to do in that situation ? The
    train would still need to be halted for law enforcement to board.

    Or if
    there is a bomb?

    If no one knows about it, *bang*. Again what difference does the presence
    of an operative make ?

    Or if the train collides with a person on the track? Or
    if the doors have come open during the journey?

    You are falling into the trap of inventing more and more remote
    possibilities in the hope of a "gotcha" moment. A rather Luddite tactic.

    I could list a book of "what ifs" around the implications of flying
    machines that should ensure no one ever flies. And yet we do.

    At some point, you simply have to accept the odds. The only real issue
    being that acceptance is clearly communicated and arrived at by a
    transparent process.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Wed Feb 5 17:35:35 2025
    On Wed, 05 Feb 2025 16:35:33 +0000, Norman Wells wrote:

    On 05/02/2025 12:50, Jethro_uk wrote:
    On Wed, 05 Feb 2025 10:50:20 +0000, Norman Wells wrote:

    On 05/02/2025 10:30, Jethro_uk wrote:
    On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> >>>>>> wrote:

    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of >>>>>>>> the commander and the flight technicians in fully automated
    vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a >>>>>> Train Captain) on board who is able to take over and drive it
    manually if necessary. And the trains themselves don't need any
    routing capability, because they run on rails. It's a very far cry >>>>>> from a fully autonomous road vehicle.

    And how many times have they been called into action to drive a
    train manually? Perhaps you'd tell us. Otherwise it seems just a
    union ruse to employ people to do nothing, as is their wont.

    It's not just driving the train. It's ensuring there's nothing
    fouling the doors as it departs. Something which - despite being
    sensored up the wazzoo - still occurs. With potentially fatal
    results.

    Seems like we'll need an on-board captain then, presumably with
    government accreditation, for any autonomous vehicle journey, even a
    private car.

    That's a bit of a downer.

    Sorry.

    What the blazes are you blithering on about ?

    Well, someone has to ensure the doors aren't being fouled with
    potentially fatal results.

    Transposing situations that can occur on railways onto car journeys is a
    tad akin to comparing apples and oranges.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Jon Ribbens on Wed Feb 5 17:42:20 2025
    "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote in message news:slrnvq73op.56ka.jon+usenet@raven.unequivocal.eu...
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    snippage

    ( Autopilots can malfunction but then who ever claimed otherwise ? )

    That AI is used in AVs for object recognition is a given.

    Complete and utter nonsense.

    If a transmitter sends out signals which are reflected back within a
    specified period, then that is all the evidence which is required of the
    presence of an object in that direction, at that distance; which therefore >> constitutes something which needs be avoided,.

    Even robot vacuum cleaners are fitted with that capability.

    And in the latter case, whether its an armchair or the family Labrador
    makes absolutely no difference whatsoever. And its exactly the same
    with AV's

    And will remain the case until such time as they, AV's decide for
    themselves to deliberately run humans down. As opposed to say running
    into trees etc. And that will be the point, and only then, where AI
    has finally kicked in

    Anyone claiming that there is no AI in AVs can be dismissed as talking
    from a position of ignorance. As can anyone that agrees with them. :-)

    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly
    wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    So no actual argument then ?

    Thought so.

    So what else is new ?


    bb


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Wed Feb 5 17:47:41 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0hi7fFddamU9@mid.individual.net...
    On 05/02/2025 16:17, Jon Ribbens wrote:
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    snippage

    ( Autopilots can malfunction but then who ever claimed otherwise ? )

    That AI is used in AVs for object recognition is a given.

    Complete and utter nonsense.

    If a transmitter sends out signals which are reflected back within a
    specified period, then that is all the evidence which is required of the >>> presence of an object in that direction, at that distance; which therefore >>> constitutes something which needs be avoided,.

    Even robot vacuum cleaners are fitted with that capability.

    And in the latter case, whether its an armchair or the family Labrador
    makes absolutely no difference whatsoever. And its exactly the same
    with AV's

    And will remain the case until such time as they, AV's decide for
    themselves to deliberately run humans down. As opposed to say running
    into trees etc. And that will be the point, and only then, where AI
    has finally kicked in

    Anyone claiming that there is no AI in AVs can be dismissed as talking >>>> from a position of ignorance. As can anyone that agrees with them. :-) >>>
    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly
    wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    I think you may be mistaken there. AI involves crunching huge quantities of data to
    come up with conclusions. That is assuredly *not* happening in the tiny confines of a
    satnav chip which merely uses earlier and elsewhere-formed AI conclusions.

    Jon and Simon aren't talking about sat nav but "object recognition"

    As they both believe that when an AV is about to collide with "an object"
    its important for some reason that it "recognises" exactly what that object is


    bb






    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to billy bookcase on Wed Feb 5 18:05:33 2025
    On Wed, 5 Feb 2025 17:47:41 -0000, "billy bookcase" <billy@onon.com> wrote:

    Jon and Simon aren't talking about sat nav but "object recognition"

    As they both believe that when an AV is about to collide with "an object"
    its important for some reason that it "recognises" exactly what that object is

    Of course it matters what the object is. Some objects will do more damage to the car if the car strikes them, and some objects will suffer more injury of the car strikes them. There's a huge difference between, say, a plastic bag,
    a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course, you'd avoid all of them. But you wouldn't necessarily take abrupt evasive action to avoid a plastic bag. On the other hand, you'd hit anything - including a bollard - if necessary to avoid hitting a child.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to billy bookcase on Wed Feb 5 17:51:44 2025
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:

    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m0hi7fFddamU9@mid.individual.net...
    On 05/02/2025 16:17, Jon Ribbens wrote:
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    snippage

    ( Autopilots can malfunction but then who ever claimed otherwise ? )

    That AI is used in AVs for object recognition is a given.

    Complete and utter nonsense.

    If a transmitter sends out signals which are reflected back within a
    specified period, then that is all the evidence which is required of the >>>> presence of an object in that direction, at that distance; which therefore >>>> constitutes something which needs be avoided,.

    Even robot vacuum cleaners are fitted with that capability.

    And in the latter case, whether its an armchair or the family Labrador >>>> makes absolutely no difference whatsoever. And its exactly the same
    with AV's

    And will remain the case until such time as they, AV's decide for
    themselves to deliberately run humans down. As opposed to say running >>>> into trees etc. And that will be the point, and only then, where AI
    has finally kicked in

    Anyone claiming that there is no AI in AVs can be dismissed as talking >>>>> from a position of ignorance. As can anyone that agrees with them. :-) >>>>
    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly >>>> wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    I think you may be mistaken there. AI involves crunching huge
    quantities of data to come up with conclusions. That is assuredly
    *not* happening in the tiny confines of a satnav chip which merely
    uses earlier and elsewhere-formed AI conclusions.

    Jon and Simon aren't talking about sat nav but "object recognition"

    Indeed.

    As they both believe that when an AV is about to collide with "an
    object" its important for some reason that it "recognises" exactly
    what that object is

    No, you've just made that up.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Norman Wells on Wed Feb 5 18:44:19 2025
    On 05/02/2025 15:58, Norman Wells wrote:
    On 05/02/2025 12:15, Simon Parker wrote:
    On 04/02/2025 15:25, Norman Wells wrote:
    On 04/02/2025 12:19, Max Demian wrote:
    On 03/02/2025 19:05, Norman Wells wrote:

    Provided it is working on a live real-time mapping system in which
    road closures are clearly indicated, which is not beyond the wit of
    man, the satnav will simply re-route to avoid them.  They're very
    good at finding alternative routes when you miss a turning for
    example, so it's not a new or insurmountable problem.

    Won't the satnav keep telling you to take the original route? Is
    there a way to tell it that there is something in the way? The same
    problem would occur in the case of an autonomous vehicle.

    Satnavs should, and autonomous vehicle controls will have to, have
    some way programmed in to discard any routes where there is a
    notified blockage.  That's a trivial programming exercise.

    Is it?

    Of course it is.

    Can you post a copy of your program please along with relevant
    certification for the design, coding, debugging, testing and deployment.

    What certification do you consider necessary?  And who certifies it?

    In theory a Chartered Engineer should sign off detailed design and
    subsequent stages but in practice apart from 6 9's software and danger
    to life or mission critical stuff hardly anything is even close to that development process standard today.

    Shrink wrap code of the ship it and be damned variety can be
    particularly hairy. You can always download a few GB of updates. It was
    much harder back when modems maxed out at 56k baud.

    Curiously gaming software is surprisingly reliable when compared to say
    MS Excel, Outlook or Word (which are riddled with bugs and race conditions).

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Mark Goodge on Wed Feb 5 18:27:25 2025
    "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote in message news:tl97qjp5df6pi53p05gdorvigpahd2nao3@4ax.com...
    On Wed, 5 Feb 2025 17:47:41 -0000, "billy bookcase" <billy@onon.com> wrote:

    Jon and Simon aren't talking about sat nav but "object recognition"

    As they both believe that when an AV is about to collide with "an object" >>its important for some reason that it "recognises" exactly what that object is

    Of course it matters what the object is. Some objects will do more damage to the car if the car strikes them, and some objects will suffer more injury of the car strikes them. There's a huge difference between, say, a plastic bag, a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course, you'd avoid all of them. But you wouldn't necessarily take abrupt evasive action to avoid a plastic bag. On the other hand, you'd hit anything - including a bollard - if necessary to avoid hitting a child.

    The clear implication is that AI enabled AV's are actually doing this at the present time.

    Not that in 5,10 or 20 years time ideally, AI enabled AV's will be able to recognise every conceivable type and shape of plastic bag and distinguish
    them from similarly shaped blocks of concrete.

    And is it really the case that an AV would chsose to hit a bollard
    in order to avoid hitting a child ? If say there were "two" children in
    the back of the AV along with their pregnant mother possibly on
    their way to hospital ?


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 5 17:02:25 2025
    On 05/02/2025 12:42, Simon Parker wrote:
    On 04/02/2025 16:37, Norman Wells wrote:
    On 04/02/2025 14:49, Simon Parker wrote:
    On 03/02/2025 09:36, Norman Wells wrote:
    On 03/02/2025 09:21, Davey wrote:
    On Mon, 3 Feb 2025 08:09:14 +0000
    Norman Wells <hex@unseen.ac.am> wrote:

    On 02/02/2025 13:37, Davey wrote:

    I wonder how these self-driving cars react to signs, such as those >>>>>>> not uncommon in the US, which mandate a lower-than-normal speed
    limit 'on school days only'?

    Just as they do with any legally imposed speed limits.  With a proper >>>>>> data-base of numbers and times.

    Does your satnav not tell you the speed limit wherever you are?

    No, mine doesn't, and it is in a BMW.

    Good grief!  How ancient is it?

    I will investigate in case it
    is a feature that is available but not activated. But you are assuming >>>>> that all States and small 'burgs' in the US follow the same speed
    limit
    reporting processes that we do. Do they?

    I have no idea.  But this is a UK group and all the necessary speed
    limit information here is certainly available and in current use.

    It's trivial programming to ensure that limits are not exceeded by
    any autonomous vehicle, which is certainly not the case with human
    drivers, even those who try to obey them.

    Anecdata: A young(ish) family member had a black box fitted to their
    new car as a condition of their insurance policy.  It was a physical
    black box connected directly to the vehicle electronics and mounted
    within the engine bay by a professional appointed by the insurers.
    Said family member has since changed their vehicle and the new
    vehicle didn't need a physical black box fitting but instead required
    an app installing on their mobile phone which must be with them in
    their car when they're driving as a condition of their insurance.

    Both systems have resulted in letters from the insurance company
    detailing incidents where the systems indicated that they were
    driving at significantly above the speed limit, which triggers
    penalties in the policy up to and including cancellation of the policy.

    Every single incident reported has been down to GPS issues where the
    car is on a dual carriageway, NSL road or similar, yet the black box
    detected the vehicle as being on a 30mph road adjacent to its current
    position but there is no reason to believe that the reverse situation
    cannot also exist, without triggering a letter from the insurance
    company.

    Please detail how your autonomous vehicle has overcome these problems
    evidently inherent in the current navigational systems in use, (both
    stand-alone and embedded).

    $75 billion has been spent by the automotive industry on developing
    autonomous vehicle technology, including the software.  How much do
    you think the black box company spent developing theirs?

    You get what you pay for.

    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    Meanwhile, Level 4 AVs, which will meet the requirements of the new Act,
    are being tested and proved *now*.

    For context, Tesla "Autopilot" is classified as Level 2.

    Your numerous posts to this thread seem to demonstrate both that you
    fail to understand where we actually are with the current technology and
    are grossly underestimating how much further we have to go before we get
    to the point where AV is ready for mass deployment.

    And I think you're hiding your head in the sand or, to mix metaphors,
    trying to stem the tide of progress.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 16:54:50 2025
    On 05/02/2025 16:43, Mark Goodge wrote:
    On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:


    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    If all goes well, I should be retired and living a life of leisure by then. So, if I'm rich enough, I might be able to say "Alexa, take me to the pub".

    (Actually, I've always said that I won't get a "smart" device until I can choose what to call it. I'm not going to be asking Google, Siri, Alexa or Copilot to take me to the pub; what I really want is to be able to say "Zen, take me to the pub". Or possibly "Holly, take me to the pub". Or even just plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take me to the pub" in case it refused. Or worse, if it took me there, but
    refused to take me home).

    I think it would probably just refuse to open the pod bay doors.

    Anyway, I believe the names Siri and Alexa were carefully chosen because
    of their uniqueness and low likelihood of confusion with ordinary words
    used in ordinary speech.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 20:06:11 2025
    On 05/02/2025 17:32, Mark Goodge wrote:
    On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:21, Mark Goodge wrote:

    How long is "for long"? And how long would be too long?

    Obviously, it should be as short as possible.

    Well, obviously. But if the shortest possible was a week, would that be OK?

    Again, that is my point. To be fully autonomous, a vehicle will need to be >>> able to derive the necessary data from its surroundings, context and memory >>> without relying solely on information transmitted from a central system. >>> That is, it will need to be able to behave like a human driver. Which, as >>> yet, they can't do. They'll get there eventually. But getting there isn't a >>> trivial problem to solve.

    They are much closer already than you think.

    I have no overwhelming reason to disagree with the industry consensus (as cited earlier by Simon Parker) that we're probably around ten years away
    from the first fully autonomous (level 5) vehicles on the roads in real life use (as opposed to mere testing).

    We don't actually need Level 5. Level 4 will do. And they're being
    tested and proved right now.

    I'd be happy enough if we get there a bit
    quicker. I suspect that DeepSeek may be giving some developers grounds for optimism, given that it appears to be technically on a par with the best Western AI but at a much lower cost.

    Obviously, programming an autonomous vehicle is more of a challenge than programming a chatbot. But the underlying trade-offs of computing power v cost are essentially the same. So it would not surprise me at all if the first genuine level 5 vehicles come from China. Which would raise a whole
    lot of other ethical and legal issues.

    Would it? Why?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Mark Goodge on Wed Feb 5 17:25:24 2025
    On 2025-02-05, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
    On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    Of course they haven't all been solved, but the evidence is that AVs are >>already safer than manually driven vehicles which is a very significant >>marker point in their development. The quest now is to approach
    perfection with, of course, the law of diminishing returns applying.

    They're safer than human drivers under certain, carefully controlled and restricted, conditions. But they still make mistakes that human drivers are very unlikely to make, even if those mistakes are not directly dangerous.

    The "fail 'safe' by stopping immobile in the middle of a lane of
    fast-moving traffic" shown in that Waymo video earlier seemed pretty
    "directly dangerous" to me.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 5 17:18:15 2025
    On Wed, 5 Feb 2025 16:33:51 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:17, Jon Ribbens wrote:
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    Anyone claiming that there is no AI in AVs can be dismissed as talking >>>> from a position of ignorance. As can anyone that agrees with them. :-) >>>
    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly
    wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    I think you may be mistaken there. AI involves crunching huge
    quantities of data to come up with conclusions. That is assuredly *not* >happening in the tiny confines of a satnav chip which merely uses
    earlier and elsewhere-formed AI conclusions.

    The sat-nav chip isn't using AI. But the sat-nav is only one source of data
    to an autonomous vehicle's control system. The control system has to be able
    to reconcile multiple sources of data, including sensory and optical information, and decide in real time how to use all that data and act accordingly. And that's where the AI is.

    If you don't believe me, here's a research paper from last year:

    https://www.mdpi.com/2504-2289/8/4/42

    Which includes such gems as

    Artificial intelligence (AI) currently plays a crucial role in the
    development and operation of autonomous vehicles. The integration of AI
    algorithms enables autonomous vehicles to navigate, perceive, and adapt to
    dynamic environments, making them safer and more efficient.

    and

    Sharper perception and decision-making: AI algorithms are more adept at
    understanding environments with advanced sensors and robust machine
    learning.

    And that's just one document. Here are a few more:

    https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer https://arxiv.org/pdf/2402.17690 https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
    https://www.techtarget.com/searchenterpriseai/definition/driverless-car

    TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is either stunningly ignorant or deliberately misleading you.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 5 20:18:09 2025
    On 05/02/2025 16:47, Mark Goodge wrote:
    On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 10:36, Mark Goodge wrote:
    On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    The USA is considerably ahead of us, but this is where we're at in Europe: >>>>
    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.

    Yes, we're at the point where, as you conveniently highlighted for us "most >>> of them travel at low speed, on fixed routes and still with safety drivers".

    It's worth noting that report is nearly two years old in what is a very
    fast-moving field.

    So why didn't you provide me with a more up to date link?

    It wasn't my point to prove.

    After all, you keep confidently asserting that we've already got fully autonomous vehicles up and running, so you must have seen this mentioned somewhere on the web, surely?

    Before I do any more of that, you need to clarify what exactly you mean
    by 'fully' autonomous, which is an expression I've not used at all so
    haven't confidently asserted actually.

    Is Level 4 'fully' autonomous?

    If so, they are up and running and are being tested and proved already.

    That does not, to me, suggest that all of the problems inherent with
    autonomous vehicles have been solved, nor that solving those problems will >>> be trivial.

    Of course they haven't all been solved, but the evidence is that AVs are
    already safer than manually driven vehicles which is a very significant
    marker point in their development. The quest now is to approach
    perfection with, of course, the law of diminishing returns applying.

    They're safer than human drivers under certain, carefully controlled and restricted, conditions. But they still make mistakes that human drivers are very unlikely to make, even if those mistakes are not directly dangerous.

    The purpose of testing is to iron out any problems in a process known as debugging. The vehicles can improve even further, and will. Humans
    can't and won't.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to Norman Wells on Thu Feb 6 00:29:21 2025
    Norman Wells <hex@unseen.ac.am> wrote:
    On 05/02/2025 15:39, Mark Goodge wrote:
    On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> >> wrote:

    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Satnavs should, and autonomous vehicle controls will have to, have some >>>>>>> way programmed in to discard any routes where there is a notified >>>>>>> blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never
    worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I >>>>> have programmed in more than one, understand the word 'not'. It's a >>>>> very basic operator.

    Please name three different programming languages which you have used that >>>> include the word "not" as an operator.

    FORTRAN, Lisp, Modula2, Pascal (I can't count)

    IIRC in FORTRAN it is .NOT. and the dots are critically important. It may
    have changed since I last used FORTRAN which was late 1970s or early 1980s
    but that would imply a radical change in the nature of the language.


    Shhh! I'd set that as a test for Norman!

    And a number of others that use ! or some other cryptic symbol as not.

    Pretty much every language has a negation operator. It would be almost
    impossible to write useful code if it didn't. But, these days, the operator >> is, as you say, typically a symbol rather than the word "not". That's why I >> was curious as to how recent Norman's programming knowledge, if any, is.

    The operator is still 'not', whatever symbols are used to invoke it.

    In your post above you said “the word ‘not’”. That sort of sloppiness is
    unfortunately too common in software and a rich source of bugs.

    I am not inclined to accept your estimates of the effort required for a
    task unless you can demonstrate a better understanding of software. For example, can you explain what can go wrong when using a programming
    language that has both polymorphic comparison operators and implicit
    coercions?

    Preventing people from buying from your online shop - the real life case on which my question is based - is not harmful. In other contexts it could do
    real harm.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Norman Wells on Thu Feb 6 09:41:56 2025
    On Wed, 05 Feb 2025 16:54:50 +0000, Norman Wells wrote:

    On 05/02/2025 16:43, Mark Goodge wrote:
    [quoted text muted]

    I think it would probably just refuse to open the pod bay doors.

    Anyway, I believe the names Siri and Alexa were carefully chosen because
    of their uniqueness and low likelihood of confusion with ordinary words
    used in ordinary speech.

    You mean there was never a name "Alexa" in the Anglophone world before
    Amazon ?

    Truly everyday is a learning day.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Mark Goodge on Thu Feb 6 09:51:07 2025
    "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote in message news:9l67qjp7t21ft4f3vmne1kbsqtqfa2dt03@4ax.com...

    snip

    TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is either stunningly ignorant or deliberately misleading you.

    AFAIAA neither in this thread, nor anywhere else, has anyone ever denied
    that AI has had, and will have, a big role to play in the development
    of AV software. In the actual "training" of AV's. While continuously
    refining the training programme/program in the process

    But that situation is completely different to what I myself, along with
    I believe Norman is suggesting. Which is that after having finally been "trained" and certificated, whether next week, next year, or in 10 years
    time AV's are *not* going to be rolled out into the real world to
    carry on their with their AI training.

    Norman has however, perhaps totally inadvertently, put his finger on
    the real issue. With his claim, whether correct ot not, that all that
    matters is that * AV's can/will be able *to perform better than humans*

    Which is, in reality the whole crux of the matter.

    AV's are a total blind.

    "They" already know how interested people are in cars. If only by
    sampling social media and the number of interminable threads devoted to discussing the vagaries of different brands of sat nav. Or the problems
    people are experiencing with their top of the range Beemers. And that's
    not even to mention the former popularity of "classic" "Top Gear"

    So when they claim they're going to spend X billion pound developing
    AV's, people are inclined to concentrate on the AV's, without
    looking at the bigger picture.

    Which is that research pounds/dollars/remnimbi supposedly spent solely
    on developing AV's in the very complex task of "performing better than
    humans" when driving a car around, will also enable them in the
    process to develop software enabling machines of all descriptions
    to "perform better than humans" in far less complex or onerous tasks.

    Basically the ultimate goal of all AI, is not simply to enable the
    development of AV's which will be able to perform better than humans ,
    but to *totally replace humans*. Both in physical and knowledge
    based tasks.

    Quite what humans are supposedly going to do, once this has finally
    been achieved, has not yet been satisfactorily explained.

    Either by visionaries such as our great leader, Keir Starmer, or by
    anyone else.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Owen Rees on Thu Feb 6 08:07:32 2025
    On 06/02/2025 00:29, Owen Rees wrote:
    Norman Wells <hex@unseen.ac.am> wrote:

    The operator is still 'not', whatever symbols are used to invoke it.

    In your post above you said “the word ‘not’”. That sort of sloppiness is
    unfortunately too common in software and a rich source of bugs.

    I am not inclined to accept your estimates of the effort required for a
    task unless you can demonstrate a better understanding of software. For example, can you explain what can go wrong when using a programming
    language that has both polymorphic comparison operators and implicit coercions?

    No, but nerds probably can.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Thu Feb 6 08:14:08 2025
    On 05/02/2025 17:18, Mark Goodge wrote:
    On Wed, 5 Feb 2025 16:33:51 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:17, Jon Ribbens wrote:
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    Anyone claiming that there is no AI in AVs can be dismissed as talking >>>>> from a position of ignorance. As can anyone that agrees with them. :-) >>>>
    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly >>>> wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    I think you may be mistaken there. AI involves crunching huge
    quantities of data to come up with conclusions. That is assuredly *not*
    happening in the tiny confines of a satnav chip which merely uses
    earlier and elsewhere-formed AI conclusions.

    The sat-nav chip isn't using AI. But the sat-nav is only one source of data to an autonomous vehicle's control system. The control system has to be able to reconcile multiple sources of data, including sensory and optical information, and decide in real time how to use all that data and act accordingly. And that's where the AI is.

    If you don't believe me, here's a research paper from last year:

    https://www.mdpi.com/2504-2289/8/4/42

    Which includes such gems as

    Artificial intelligence (AI) currently plays a crucial role in the
    development and operation of autonomous vehicles. The integration of AI
    algorithms enables autonomous vehicles to navigate, perceive, and adapt to
    dynamic environments, making them safer and more efficient.

    and

    Sharper perception and decision-making: AI algorithms are more adept at
    understanding environments with advanced sensors and robust machine
    learning.

    And that's just one document. Here are a few more:

    https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer https://arxiv.org/pdf/2402.17690 https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
    https://www.techtarget.com/searchenterpriseai/definition/driverless-car

    TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is either stunningly ignorant or deliberately misleading you.

    AI is a buzzword which is generally used by the pretentious to make out
    they have a better understanding of anything complex than they actually
    do. But AI involves the crunching of huge amounts of data to develop principles and insights that can be used more simply elsewhere without
    having to reinvent the wheel every time. Autonomous car controls don't
    have vast amounts of data, and you don't want to sit around for a week
    waiting for it to decide exactly what it is it has just crashed into.
    They are not employing AI, though AI may have played a part in
    developing the algorithms they do actually use.

    It's all a matter of proper definition of terms which is too commonly
    lacking.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Jon Ribbens on Thu Feb 6 08:33:52 2025
    On 05/02/2025 17:25, Jon Ribbens wrote:
    On 2025-02-05, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
    On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>> Of course they haven't all been solved, but the evidence is that AVs are >>> already safer than manually driven vehicles which is a very significant
    marker point in their development. The quest now is to approach
    perfection with, of course, the law of diminishing returns applying.

    They're safer than human drivers under certain, carefully controlled and
    restricted, conditions. But they still make mistakes that human drivers are >> very unlikely to make, even if those mistakes are not directly dangerous.

    The "fail 'safe' by stopping immobile in the middle of a lane of
    fast-moving traffic" shown in that Waymo video earlier seemed pretty "directly dangerous" to me.

    Do you really think that won't have been addressed?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Mark Goodge on Thu Feb 6 10:11:39 2025
    On 05/02/2025 15:39, Mark Goodge wrote:
    On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:

    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>>
    Satnavs should, and autonomous vehicle controls will have to, have some >>>>>> way programmed in to discard any routes where there is a notified
    blockage. That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me you've never
    worked as a programmer...."

    What a cheap remark! All computer languages I've ever dealt with, and I >>>> have programmed in more than one, understand the word 'not'. It's a
    very basic operator.

    Please name three different programming languages which you have used that >>> include the word "not" as an operator.

    FORTRAN, Lisp, Modula2, Pascal (I can't count)

    Shhh! I'd set that as a test for Norman!

    You didn't seriously expect him to answer did you?

    And a number of others that use ! or some other cryptic symbol as not.

    Pretty much every language has a negation operator. It would be almost impossible to write useful code if it didn't. But, these days, the operator is, as you say, typically a symbol rather than the word "not". That's why I was curious as to how recent Norman's programming knowledge, if any, is.

    Two input `nand` is sufficient to build any logic system.
    `Not` on it's own isn't.

    The way it works at present on the Satnavs that have realtime live
    traffic updates is that the system predicts increased journey time and
    flags sections of road ahead red or yellow where the traffic speed from
    other users is unreasonably slow. I have ours set to only deviate if the
    (long) journey time increases by more than 15 minutes.

    That's how Google Maps and Waze do it. It does obviously rely on the system having continual two-way communication with the central server. A standalone sat-nav can't do that.

    Indeed it is only available to those with the right kit and when there
    is a major SNAFU some distance away you can tell who has it installed.

    It seems to work well enough and you can sometimes tell who else is
    using the same nav system by where they come off the normal route.

    One scenario where it doesn't work is late at night or on lightly used roads where there isn't enough traffic to give a reliable statistical sample.

    That is a limitation OTOH when the roads are quiet the traffic jams tend
    not to be too bothersome either. Police will let the odd car go through.

    Mostly we see roads closed for repair late at night. The satnav can find
    some amazing alternative routes home (and seems to be fully aware of all scheduled road closures). Last time that happened A19 down it picked a
    tiny road through a being built housing estate as a fantastic short cut.
    (it will be a right rat run once more people know about it)

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Jon Ribbens on Thu Feb 6 10:00:23 2025
    "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote in message news:slrnvq799g.56ka.jon+usenet@raven.unequivocal.eu...

    On 2025-02-05, billy bookcase <billy@onon.com> wrote:


    Jon and Simon aren't talking about sat nav but "object recognition"

    Indeed.

    As they both believe that when an AV is about to collide with "an
    object" its important for some reason that it "recognises" exactly
    what that object is

    No, you've just made that up.

    Unfortunately, given your evident unwilingness to ever explain what
    you actually mean by these somewhat inscutable remarks, people are
    left with very little option but to engage in speculation as to what
    you might actually mean.


    bb


    One cannot help but be reminded of Katherine Tate's "annoying woman"
    character.

    Ryanair baggage clerk; "Have you weighed all your baggage madam"

    AW:" Guess !"

    Ryanair baggage clerk " I asked you if you weighed it"

    AW:" Go on, Guess ! It can only be a yes or no can't it ".

    Ryanair baggage clerk; " Madam for the last time have you or have
    you not weighed your baggage"

    AW:"There's really no need to adopt that attitude you know. Yes I
    have."












    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to All on Thu Feb 6 10:15:33 2025
    On Wed, 05 Feb 2025 16:54:50 +0000, Norman Wells wrote:

    On 05/02/2025 16:43, Mark Goodge wrote:
    [quoted text muted]

    I think it would probably just refuse to open the pod bay doors.

    Anyway, I believe the names Siri and Alexa were carefully chosen because
    of their uniqueness and low likelihood of confusion with ordinary words
    used in ordinary speech.

    More lilely they were chosed as "girly" names which would sit quite
    happily in the living room alongside the air freshener.

    Which presumably represnted a larger potential market than would "Thor"
    or "Wotan"; sitting on a shelf in the garage or man cave.



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Norman Wells on Thu Feb 6 10:10:27 2025
    On 6 Feb 2025 at 08:14:08 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 05/02/2025 17:18, Mark Goodge wrote:
    On Wed, 5 Feb 2025 16:33:51 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:17, Jon Ribbens wrote:
    On 2025-02-05, billy bookcase <billy@onon.com> wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0h3kgFqoilU4@mid.individual.net...

    Anyone claiming that there is no AI in AVs can be dismissed as talking >>>>>> from a position of ignorance. As can anyone that agrees with them. :-) >>>>>
    You do realise of course that that particular observation adds
    nothing whatsoever to your argument ? Which in itself is so specularly >>>>> wrong as to be almost embarrassing ?

    Well, *someone* out of you and Simon is "so specularly wrong as to be
    almost embarrassing"... but it isn't Simon.

    I think you may be mistaken there. AI involves crunching huge
    quantities of data to come up with conclusions. That is assuredly *not* >>> happening in the tiny confines of a satnav chip which merely uses
    earlier and elsewhere-formed AI conclusions.

    The sat-nav chip isn't using AI. But the sat-nav is only one source of data >> to an autonomous vehicle's control system. The control system has to be able >> to reconcile multiple sources of data, including sensory and optical
    information, and decide in real time how to use all that data and act
    accordingly. And that's where the AI is.

    If you don't believe me, here's a research paper from last year:

    https://www.mdpi.com/2504-2289/8/4/42

    Which includes such gems as

    Artificial intelligence (AI) currently plays a crucial role in the
    development and operation of autonomous vehicles. The integration of AI >> algorithms enables autonomous vehicles to navigate, perceive, and adapt to
    dynamic environments, making them safer and more efficient.

    and

    Sharper perception and decision-making: AI algorithms are more adept at >> understanding environments with advanced sensors and robust machine
    learning.

    And that's just one document. Here are a few more:

    https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
    https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
    https://arxiv.org/pdf/2402.17690
    https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
    https://www.techtarget.com/searchenterpriseai/definition/driverless-car

    TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is >> either stunningly ignorant or deliberately misleading you.

    AI is a buzzword which is generally used by the pretentious to make out
    they have a better understanding of anything complex than they actually
    do. But AI involves the crunching of huge amounts of data to develop principles and insights that can be used more simply elsewhere without
    having to reinvent the wheel every time. Autonomous car controls don't
    have vast amounts of data, and you don't want to sit around for a week waiting for it to decide exactly what it is it has just crashed into.
    They are not employing AI, though AI may have played a part in
    developing the algorithms they do actually use.

    It's all a matter of proper definition of terms which is too commonly lacking.

    ISTR that AI was involved in pattern recognition long before "large language models" or whatever. I don't think explicit algorithm-based face recognition
    is a thing?

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nick Finnigan@21:1/5 to billy bookcase on Thu Feb 6 10:43:56 2025
    On 05/02/2025 12:55, billy bookcase wrote:
    "Nick Finnigan" <nix@genie.co.uk> wrote in message news:vnu2a7$20ufs$1@dont-email.me...
    On 04/02/2025 15:05, Norman Wells wrote:
    On 04/02/2025 12:32, Mark Goodge wrote:

    Norman's responses to your comments also contradict his own earlier
    assertions that autonomous vehicles don't need AI,

    AI doesn't come into it.


    Which autonomous vehicles don't use AI for object recogntion?

    All of them.

    https://www.teslaoracle.com/2025/02/03/tesla-pushes-cybertruck-fsd-v13-2-6-with-new-improvements-and-teases-upcoming-ones-release-notes/
    "In the upcoming updates, Tesla will be increasing the AI ..."


    An object is an object; and is simply something to be avoided surely ?

    Why would there be any need to discriminate ?

    https://www.teslaoracle.com/2020/06/06/testing-autopilot-object-detection/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to Mark Goodge on Thu Feb 6 12:43:02 2025
    On 05/02/2025 18:05, Mark Goodge wrote:
    On Wed, 5 Feb 2025 17:47:41 -0000, "billy bookcase" <billy@onon.com> wrote:

    Jon and Simon aren't talking about sat nav but "object recognition"

    As they both believe that when an AV is about to collide with "an object"
    its important for some reason that it "recognises" exactly what that object is

    Of course it matters what the object is. Some objects will do more damage to the car if the car strikes them, and some objects will suffer more injury of the car strikes them. There's a huge difference between, say, a plastic bag, a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course, you'd avoid all of them. But you wouldn't necessarily take abrupt evasive action to avoid a plastic bag. On the other hand, you'd hit anything - including a bollard - if necessary to avoid hitting a child.

    I'm reminded of those creepy bollards that look like children. I suppose
    the idea is that the driver will avoid hitting them because he likes
    children; more likely he will avoid hitting them because they are made
    of metal.

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to Mark Goodge on Thu Feb 6 12:46:05 2025
    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train Captain) on board who is able to take over and drive it manually if necessary. And the trains themselves don't need any routing capability, because they run on rails. It's a very far cry from a fully autonomous road vehicle.

    Are these "Passenger Service Agents" fully qualified drivers (and paid
    as much)? If so, what's the point?

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to The Todal on Thu Feb 6 12:49:38 2025
    On 05/02/2025 16:32, The Todal wrote:
    On 05/02/2025 15:35, Jethro_uk wrote:
    On Wed, 05 Feb 2025 12:51:33 +0000, billy bookcase wrote:

    Much the same as on the Victoria Line; which has been fully automatic
    using ATO ever since its inception in  1968; but which nevertheless has >>> a "driver" in the cab.

    For interest, the Moorgate crash was in 1975.

    If a tube train is driverless and has no member of staff onboard, what happens if a passenger is taken ill? Or if there is a knife-fight? Or if there is a bomb? Or if the train collides with a person on the track? Or
    if the doors have come open during the journey?

    There could be a means for passengers to communicate remotely with a
    control room, and the train brought to a halt at the next station
    remotely, and the appropriate emergency services alerted.

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to Mark Goodge on Thu Feb 6 12:59:46 2025
    On 05/02/2025 16:43, Mark Goodge wrote:
    On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    If all goes well, I should be retired and living a life of leisure by then. So, if I'm rich enough, I might be able to say "Alexa, take me to the pub".

    (Actually, I've always said that I won't get a "smart" device until I can choose what to call it. I'm not going to be asking Google, Siri, Alexa or Copilot to take me to the pub; what I really want is to be able to say "Zen, take me to the pub". Or possibly "Holly, take me to the pub". Or even just plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take me to the pub" in case it refused. Or worse, if it took me there, but
    refused to take me home).

    I understand that Alexa's "wake word" can be changed to anything you
    like (probably using the app that won't run on my phone).

    Personally I prefer chat devices where you have to tap or press
    something to invoke them.

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Max Demian on Thu Feb 6 13:15:49 2025
    On 6 Feb 2025 at 12:59:46 GMT, "Max Demian" <max_demian@bigfoot.com> wrote:

    On 05/02/2025 16:43, Mark Goodge wrote:
    On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> >> wrote:

    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    If all goes well, I should be retired and living a life of leisure by then. >> So, if I'm rich enough, I might be able to say "Alexa, take me to the pub". >>
    (Actually, I've always said that I won't get a "smart" device until I can
    choose what to call it. I'm not going to be asking Google, Siri, Alexa or
    Copilot to take me to the pub; what I really want is to be able to say "Zen, >> take me to the pub". Or possibly "Holly, take me to the pub". Or even just >> plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take >> me to the pub" in case it refused. Or worse, if it took me there, but
    refused to take me home).

    I understand that Alexa's "wake word" can be changed to anything you
    like (probably using the app that won't run on my phone).

    Personally I prefer chat devices where you have to tap or press
    something to invoke them.

    Indeed. I would find it somewhat odd to deliberately put a machine in my
    living room designed to listen to every word said and potentially interpret it and relay it to a server on the Internet.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to All on Thu Feb 6 15:19:03 2025
    Jethro_uk wrote:

    When I worked in insurance, I had a task to trawl through a years worth
    of black box data. (Which I did at a cost of about £5,000 versus the £100,000 a bunch of cowboys were trying to charge).

    One very common problem with the data was that because it was simply one
    data point per analysis, and the GPS had no idea of altitude, it was
    possible for a car on a motorway (70mph limit) to briefly appear as if it
    had been on a road with a 30mph limit that went over or under it.

    Of course that triggered a "naughty boy" black mark for the driver and affected their premiums.

    The answer was to have had a little bit more sophistication in the
    logging and more critically analysis. However that would have cost money
    and skills that would have made the whole project unprofitable.

    So InsCo has choice to spend money and avoid putting up premiums for a
    few unfortunate customers, or not spend money and raise a few premiums?

    I left before they decided what to do.

    I wonder ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fredxx@21:1/5 to Roger Hayter on Thu Feb 6 15:21:24 2025
    On 06/02/2025 13:15, Roger Hayter wrote:
    On 6 Feb 2025 at 12:59:46 GMT, "Max Demian" <max_demian@bigfoot.com> wrote:

    On 05/02/2025 16:43, Mark Goodge wrote:
    On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> >>> wrote:

    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    If all goes well, I should be retired and living a life of leisure by then. >>> So, if I'm rich enough, I might be able to say "Alexa, take me to the pub". >>>
    (Actually, I've always said that I won't get a "smart" device until I can >>> choose what to call it. I'm not going to be asking Google, Siri, Alexa or >>> Copilot to take me to the pub; what I really want is to be able to say "Zen,
    take me to the pub". Or possibly "Holly, take me to the pub". Or even just >>> plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take >>> me to the pub" in case it refused. Or worse, if it took me there, but
    refused to take me home).

    I understand that Alexa's "wake word" can be changed to anything you
    like (probably using the app that won't run on my phone).

    Personally I prefer chat devices where you have to tap or press
    something to invoke them.

    Indeed. I would find it somewhat odd to deliberately put a machine in my living room designed to listen to every word said and potentially interpret it
    and relay it to a server on the Internet.

    I've come to think my life must be so uninteresting to all but a few I
    doubt anything would come of disjointed conversations held by the big
    internet players. If I was of interest I might even be impressed!

    Is your life so wild and on a criminal or immoral spectrum you think a
    server will dote on your every word?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to billy bookcase on Thu Feb 6 11:08:48 2025
    On 06/02/2025 10:15, billy bookcase wrote:
    On Wed, 05 Feb 2025 16:54:50 +0000, Norman Wells wrote:

    On 05/02/2025 16:43, Mark Goodge wrote:
    [quoted text muted]

    I think it would probably just refuse to open the pod bay doors.

    Anyway, I believe the names Siri and Alexa were carefully chosen because >>> of their uniqueness and low likelihood of confusion with ordinary words
    used in ordinary speech.

    More lilely they were chosed as "girly" names which would sit quite
    happily in the living room alongside the air freshener.

    "Siri" quite often responds to "don't be silly" and other such "silly"
    phrases.
    "Alexa" can respond to phrases like "excellent", "a hexadecimal" etc.

    Sometimes they respond spontaneously to something random going past on
    Radio 4 that bears no obvious resemblance to the true activation name.
    That is one of the hazards of AI pattern matching.

    Various designer JPEGs exist of cats and dogs that certain AI image
    classifiers get so totally wrong that it is funny.

    Which presumably represnted a larger potential market than would "Thor"
    or "Wotan"; sitting on a shelf in the garage or man cave.

    You can alter its wakeup activation word if you choose.

    https://www.amazon.co.uk/gp/help/customer/display.html?nodeId=GSH36WER6NKGTFXS

    So you can rename it "Thor", "Wotan", "Voldemort" or "Scumspawn" if you
    wish.

    Alexa is generally referred to here as "she who must not be named".

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 15:32:06 2025
    On 06/02/2025 14:51, Simon Parker wrote:
    On 05/02/2025 10:22, Norman Wells wrote:
    On 05/02/2025 09:57, Mark Goodge wrote:
    On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:
    On 04/02/2025 20:45, Mark Goodge wrote:
    On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    Satnavs should, and autonomous vehicle controls will have to, have >>>>>> some
    way programmed in to discard any routes where there is a notified
    blockage.  That's a trivial programming exercise.

    "Tell me you've never worked as a programmer without telling me
    you've never
    worked as a programmer...."

    What a cheap remark!  All computer languages I've ever dealt with,
    and I
    have programmed in more than one, understand the word 'not'.  It's a
    very basic operator.

    Please name three different programming languages which you have used
    that
    include the word "not" as an operator.

    But, anyway, it's not the syntax I was querying. It's how to ensure
    that the
    existence of a blockage, and its precise location, is passed to the
    software
    promptly and in a format which can easily be interpreted by the
    software.
    For example, when the notification comes in the form of a policeman
    saying
    "you can't go that way, mate, it's blocked".

    What's the problem?  It's only a matter of information being passed to
    whoever can amend the live mapping, and road closed notification being
    applied on the maps..

    The "problem" is that you are unable or unwilling to detail the process
    by which "the information" is "passed to whoever can amend the live
    mapping, and road closure notification being applied on the maps."

    For one thing, (as detailed in a parallel post), there are numerous
    systems at play, rather than a single all-encompassing system.

    Then they all should be notified, and those who wish to remain
    competitive will act.

    Additionally, there isn't some entity within each of those systems with responsibility to "amend the live mapping".

    Then perhaps, if they want to remain relevant and commercially viable in
    the coming age of autonomous vehicles, they will see it as commercially important to develop such a facility if they don't have it already. It
    is surely not beyond the wit of man despite all the pettyfogging issues
    you mention.

    I recommend that you spend time learning how Google, Apple and Garmin
    (to name but three) deal with congestion and routing around it and you
    will be well on the well to enlightenment.

    They act on information received. Information about road blockages is
    just more information that should be easy to integrate.

    Until then, you see this as a simple solution to a simple problem
    because you do not understand the problem and therefore have no idea of
    the complexities involved and how your proposed solution maps onto these.

    No, it's because I don't get bogged down by trivia and 'the way it's
    always been done'.

    If it's not done, we're no worse off than we are now.  Huge delays
    whether of manually driven vehicles or autonomous ones.
    I invite you to watch the in-car videos of Waymo Driver in action when
    it goes wrong.  If you think that leaves us "no worse off then we are
    now" then I must disagree with you in the strongest possible terms.

    And these are Level 4 AVs with a baby-sitter back at the control centre
    and a human driver on standby no more than 5-6 miles away to intervene
    when things go wrong, which they do all too frequently.

    I was talking, as was very clear, about what happens if mapping concerns
    do not act on road closure information received, which has nothing to do
    with what you say.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Simon Parker on Thu Feb 6 18:52:27 2025
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0k1ivFqoikU22@mid.individual.net...


    gross snippage

    AVs utilise AI every second of every trip. They are trained to deal with situations
    they have never previously encountered because it is impossible to program every
    possible scenario into the system.

    One single example of what you mean will suffice; and will actually
    demonstrate your understanding of the topic.

    That is, one example of A! in action. Such as to distinguish it from
    the actions of say a simple robot vacuum cleaner with certain added
    features.

    Every one of which, robot vacuum cleaners that is, are capable of
    cleaning rooms and avoiding objects which they have never
    encountered before in totally novel scenarios..

    So that's just one or two sentences please, * in your own words*.

    Of some action or actions - so not very long sentences or very
    long words - currently being performed by certificated AV's which
    could not have been accomplished by using sophisticated
    automation.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Thu Feb 6 18:57:44 2025
    On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 06/02/2025 14:51, Simon Parker wrote:
    On 05/02/2025 10:22, Norman Wells wrote:

    What's the problem? It's only a matter of information being passed to
    whoever can amend the live mapping, and road closed notification being
    applied on the maps..

    The "problem" is that you are unable or unwilling to detail the process
    by which "the information" is "passed to whoever can amend the live
    mapping, and road closure notification being applied on the maps."

    For one thing, (as detailed in a parallel post), there are numerous
    systems at play, rather than a single all-encompassing system.

    Then they all should be notified,

    How?

    Please can you detail the process by which the fact that a police officer
    has just closed a road due to an accident is notified to all of the various systems which might want that information.

    This is, actually, a solvable problem, given certain basic assumptions, and
    I have what I think would be an effective solution in mind, but I'd be interested to hear how you would do it if you were desiging the system.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Simon Parker on Thu Feb 6 19:00:58 2025
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0k1lpFqoikU23@mid.individual.net...

    AVs *DO* receive huge amounts of data and crunch that data in real time to make routing
    decisions. (And in context, "routing" here does not relate merely to, "Shall I turn
    left in 250metres?" but includes, "Shall I move 30cm further away from the footway as
    there's a pedestrian there?" or "There's a cyclist crossing the junction so even though
    the light is green, I must wait for them to clear the junction before I proceed.")

    These being exampes of your "situations they have never previously encountered " cyclists riding along the road" and bright red and green
    traffic lights ?

    And on public roads ?

    Who'd have thought ?

    Just how long did it take them to work that one out, I wonder ?

    Was it in the first year ?



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Fredxx on Thu Feb 6 19:12:38 2025
    "Fredxx" <fredxx@spam.invalid> wrote in message news:vo2k1k$30l5k$1@dont-email.me...

    On 06/02/2025 13:15, Roger Hayter wrote:


    snip


    Indeed. I would find it somewhat odd to deliberately put a machine in my
    living room designed to listen to every word said and potentially interpret it
    and relay it to a server on the Internet.

    I've come to think my life must be so uninteresting to all but a few I doubt anything
    would come of disjointed conversations held by the big internet players. If I was of
    interest I might even be impressed!

    Is your life so wild and on a criminal or immoral spectrum you think a server will dote
    on your every word?

    No. But it might possibly go on to suggest a succession of products which could make your life that much easier. And that they're all there in stock, and available
    for next day delivery. All you'll probably need to do is say the magic words "Oh Yes Please" or similar, and maybe flash your smartphone at it.

    And then six months later, bald headed men in black tee shirts come knocking
    on your door, waving pieces of paper.


    bb



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Scott@21:1/5 to Simon Parker on Thu Feb 6 19:19:02 2025
    On 06/02/2025 14:49, Simon Parker wrote:
    Whilst on the page, please scroll down to the bottom and watch the video under the section "Watch our technology in action" and then tell me that object recognition employed in Level 4 cars is "not AI in itself, but
    just the ability earlier AI has given the vehicle software".

    I couldn't help but wonder how it would perform on some of our single-track-passing-place-if-you're-lucky roads. Or where the road
    markings are faded into obscurity. Or large potholes. That video shows
    what to me look ideal road conditions - good road surface and markings,
    few obstacles, no hiding places for people to emerge from, no crowded
    junctions with red-light jumpers,... etc.



    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Thu Feb 6 19:24:38 2025
    On 06/02/2025 18:57, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 06/02/2025 14:51, Simon Parker wrote:
    On 05/02/2025 10:22, Norman Wells wrote:

    What's the problem?  It's only a matter of information being passed to >>>> whoever can amend the live mapping, and road closed notification being >>>> applied on the maps..

    The "problem" is that you are unable or unwilling to detail the process
    by which "the information" is "passed to whoever can amend the live
    mapping, and road closure notification being applied on the maps."

    For one thing, (as detailed in a parallel post), there are numerous
    systems at play, rather than a single all-encompassing system.

    Then they all should be notified,

    How?

    Please can you detail the process by which the fact that a police officer
    has just closed a road due to an accident is notified to all of the various systems which might want that information.

    This is, actually, a solvable problem, given certain basic assumptions, and
    I have what I think would be an effective solution in mind, but I'd be interested to hear how you would do it if you were desiging the system.

    It's a solvable problem according to you, just as I said.

    That's quite enough.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 19:08:18 2025
    On 06/02/2025 15:27, Simon Parker wrote:

    Anyway, I believe the names Siri and Alexa were carefully chosen
    because of their uniqueness and low likelihood of confusion with
    ordinary words used in ordinary speech.

    It is not often you're right, but you're wrong again.

    Bezos specifically chose the name "Alexa" after the ancient library of Alexandria.  The developers liked it because of the hard "X" consonant
    which helped them increase the precision of recognition in the early versions.

    The wife of a good friend is named "Alexandra", (or "Alex" for short).

    For fairly obvious reasons, they cannot and will not have Amazon smart devices in their home.

    Similarly, Siri is named after Sigrid, a Norwegian co-worker of Dag
    Kittlaus, founder of Siri, prior to its acquisition by Apple.

    I'm rather surprised you did not add, as you did before:

    "But then I have the advantage of actually knowing what I'm talking
    about rather than relying on Google to prop up my specious claims"

    Yes, sounds like it.

    Strange that you did not also find:

    "when Apple bought the personal assistant [Siri], they needed a name to
    match the female voice which is easy to remember yet distinguishable
    from other names"

    and:

    "Amazon's voice assistant, Alexa, reportedly is inspired by the computer
    in Star Trek. Its makers wanted it, too, to be a library of information.
    And what better library of information is there than the famed,
    historical library of Alexandria? So the name kind of works. However,
    that was only a minor factor in choosing the name."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 19:21:12 2025
    On 06/02/2025 15:21, Simon Parker wrote:
    On 05/02/2025 15:40, Norman Wells wrote:
    On 05/02/2025 12:33, Simon Parker wrote:
    On 04/02/2025 16:14, Norman Wells wrote:

    Obviously, it would have crashed, killing all on board.

    Most people, hopefully you included, would consider that to be a
    "sub- optimal outcome".

    You just asked me what would happen, not what I thought of it.

    Actually, I asked "billy bookcase" but that's beside the point.

    Consider this a request for your thoughts on the subject rather than a
    simple request of what would happen.


    No-one.  Just as there isn't on the Docklands Light Railway.

    <SFX: Buzzer>

    I recommend researching the role and responsibilities of the
    "Passenger Service Agent" on the DLR.

    And navigating the DLR is light years away from a Level 5 AV.

    Of course the DLR is easier to manage autonomously than road vehicles;
    no-one claims otherwise.

    So why did you introduce it as an example?


    But how often has human intervention been required?

    Not my example, not my data to produce.

    Or if you prefer that expressed a different way, I believe someone once
    said: "If someone has a point to make, it's for them to make it, not for
    me to go searching on their behalf."

    But you were the one saying how necessary the Public Service Agent is.
    I just thought you mighty like to support that with a fact or two rather
    than just let it fall.

    If only very rarely, the Passenger Service Agent is unnecessary and
    redundant, except of course from the union's point of view, which is
    wholly unrelated to safety.

    Two posts ago, the Passenger Service Agent didn't even exist.  (You
    stated "No-one" fulfilled the role of the pilot on a plane under
    autopilot, not "A Passenger Service Agent that is rarely needed.")

    Now you dismiss them as "unnecessary and redundant".

    If they are, they are. It's all a question of the numbers you seem very reluctant to provide.

    Do you know how often human intervention has been required on the DLR
    and can you cite a verifiable source to support this or do your
    arguments rest upon unsupported claims?

    No, it's a question I asked you, the information I requested being
    necessary to support your position.

    But level 5 just represents perfection which is the enemy of good and
    not necessarily required anyway.

    I notice you've been using this quote a lot recently.  Did it appear on
    your "Quote-a-Day" calendar or some such?

    And Level 5 is not "perfection" - it is a fully autonomous vehicle - the subject of the majority of this thread.

    Level 4 may be quite sufficient, meaning:

    "The vehicle operates completely autonomously under certain
    conditions. The human being no longer has to be ready to intervene. He
    can work, watch movies and even sleep. And the vehicle may also drive
    alone, i.e. without occupants. It must be able to reach a safe state
    without the intervention of a human driver, i.e. to come to a
    standstill in a parking lot for example. However, the autonomy of the
    vehicle at Level 4 is still linked to certain conditions, such as a
    defined route, driving on the highway or in the parking garage.
    Numerous companies are already testing level 4 vehicles in road traffic".

    https://www.zf.com/mobile/en/technologies/automated_driving/
    stories/6_levels_of_automated_driving.html#:~:text=Numerous%20companies%20are%20already%20testing,in%20road%20traffic%2C%20including%20ZF.&text=Germany%20has%20created%20the%20framework,as%20the%20first%20country%20worldwide.

    "the vehicle may also drive alone, i.e. without occupants", particularly during the initial "data collection" phase with typically lasts around
    six months or so, depending on the precise circumstances.

    And "the autonomy of the vehicle at Level 4 is still linked to certain conditions, such as a defined route, driving on the highway or in the
    parking garage."

    I do not consider "pre-defined routes" to be "quite sufficient" given
    that this often involves short trips within a geographically restricted
    area with certain junctions and routes therein avoided.

    In short, I cannot get the AV from anywhere to anywhere using any
    available route.

    It may go from where I am to where I need to be using a particular
    route, but equally it may not.


    It's far closer to realisation than many here seem to credit,

    I know precisely where we are with the technology.  If my contributions
    to the thread have demonstrated anything, it should be that this is a
    subject in which I am keenly interested and therefore well versed.

    Of course. As always.

    and such vehicles may be on the roads here next year:

    https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-
    roads-by-2026-as-automated-vehicles-act-becomes-law

    For very limited values of "on the roads" and "here".


    But a failsafe default of come to a halt and turn everything off,
    which would doubtless be in any autonomous vehicle, isn't really an
    option doing 500mph at 30,000 ft.
    Remind me not to travel in a NormCar (TM) at any time now or in the
    future.  I do not really want to be in a vehicle that in the event of
    an incident is designed and programmed to come to a halt in the
    right- most lane of a motorway and turn off both its headlights and
    hazard lights despite it being night time.

    Well, of course it wouldn't.  It may in other brands during the
    debugging phase but not in Normcars.

    No, you said "a failsafe default of 'come to a halt and turn everything
    off'" would "doubtless be in any autonomous vehicle".  NormCars (TM) are included within the dataset {any autonomous vehicle}.

    You made no mention of first finding a safe place before 'coming to a
    halt' nor of keeping the headlights on, if relevant, and activating the hazard lights.  You said, quite definitely "and turn *EVERYTHING*
    off" (emphasis mine).  Headlights and hazard lights would be included
    within the dataset {everything}.

    Some things are absolutely implicit, and shouldn't need to be spelt out
    to everyone regardless of mental capacity.

    I made the point because it highlights the erroneous nature of your
    thinking.

    You claim things are simple and trivial because you dismiss as
    irrelevant the very things that make them complex.  Everything is simple when one ignores its inherent complications.

    Other things actually are simple and trivial if minor or non-existent
    problems are seen as such.

    You claim to have programming experience so you ought to be familiar
    with the Pareto Principle.  The first 80% is relatively easy.  That
    last 20% is the killer.  80% of the bugs are in 20% of the code.  80%
    of the complexity comes from 20% of the code.  80% of the R&D is
    spent on 20% of the application.  AVs don't even have the initial 80%
    mastered yet.

    Call it by whatever fancy name you like, it's the same old 80/20 rule.

    It is not a "fancy name", nor is it 'what I like to call it'.  That is
    the correct name for it as it is named after Italian polymath Vilfredo
    Pareto [^1] who first wrote about it while teaching at the University of Lausanne.

    [^1] https://en.wikipedia.org/wiki/Vilfredo_Pareto

    Good old Google.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 15:41:24 2025
    On 06/02/2025 14:53, Simon Parker wrote:
    On 05/02/2025 15:58, Norman Wells wrote:
    On 05/02/2025 12:15, Simon Parker wrote:
    On 04/02/2025 15:25, Norman Wells wrote:
    On 04/02/2025 12:19, Max Demian wrote:
    On 03/02/2025 19:05, Norman Wells wrote:

    Provided it is working on a live real-time mapping system in which >>>>>> road closures are clearly indicated, which is not beyond the wit
    of man, the satnav will simply re-route to avoid them.  They're
    very good at finding alternative routes when you miss a turning
    for example, so it's not a new or insurmountable problem.

    Won't the satnav keep telling you to take the original route? Is
    there a way to tell it that there is something in the way? The same
    problem would occur in the case of an autonomous vehicle.

    Satnavs should, and autonomous vehicle controls will have to, have
    some way programmed in to discard any routes where there is a
    notified blockage.  That's a trivial programming exercise.

    Is it?

    Of course it is.

    So you'll be posting a copy of your code, presently.  Please feel free
    to use the language in which you are most comfortable.

    I don't write code any more; I leave that to minions. I look at the
    bigger picture. With my knowledge of actual coding, I do know that it
    would be a trivial exercise to ignore any calculated route where a known blockage exists and find an alternative.

    Can you post a copy of your program please along with relevant
    certification for the design, coding, debugging, testing and deployment.

    What certification do you consider necessary?  And who certifies it?

    Your lack of knowledge and understanding of the matter is writ large,
    despite your bullish claims and confidence.  It is not what
    certification *I* consider necessary, but what is required by the
    various bodies, including but not limited to ISO.

    This is all very nerdy stuff which is not a deal-breaker.

    As this is a legal newsgroup, perhaps now would be a good time to
    mention Jean Bookout v Toyota Motor Corporation and the expert evidence
    of Michael Barr (BSEE/MSEE) presented therein.

    Cutting to the chase, he demonstrated to the satisfaction of the jury
    that a single bit-flip in Toyota's Electronic Throttle Control System
    (ETCS) resulted in the system failing catastrophically causing
    "unintended acceleration" (UA) which in turn led to numerous deaths, (including that of Jean Bookout).

    Prior to the case, the motor industry paid lip service to OSEK and MISRA
    for automated systems.  Where AVs are concerned, compliance, with the
    above or similar systems, needs to be certified.

    And of course the bit flip would not have occurred if it had been? It eliminates solar flares, for example, does it?

    Additionally, since the Bookout case, international standards have been developed so your code is also going to need to be compliant with both
    ISO 26262 (ASIL) and ISO 34503:2023 but I am sure you knew all this
    already as you are such an authority on the subject and issue
    pronouncements with such confidence.

    As above, I look forward to receiving your fully compliant code by
    return as it is, after all, "a trivial programming exercise".

    Then you'll be in a state of high anticipation for some considerable time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 15:57:24 2025
    On 06/02/2025 14:54, Simon Parker wrote:
    On 05/02/2025 16:03, Norman Wells wrote:
    On 05/02/2025 12:31, Simon Parker wrote:
    On 04/02/2025 21:22, Mark Goodge wrote:

    It's a shuttle bus that runs back and forth between just two
    locations. The
    entirety of its route on the public highway is on dual carriageways.
    And it
    still needs someone to sit in the driver's seat to "monitor the
    technology".
    Oh, and it's not even autonomous end to end.

       Their driverless buses have sensors enabling them to travel on
       pre-selected roads at up to 50mph.

       On each trip they switch between autonomous and manual driving,
    being
       controlled by computer across the Forth Road Bridge, and from the >>>> M90 and
       M8 to the A720 in Edinburgh.

    https://www.bbc.co.uk/news/articles/c0q0lg3j1exo

    Does that indicate there is any substantial doubt, even on British
    urban
    streets?

    It indicates that there's still considerable doubt, as it happens.

    It is worse than that:

    The service has been withdrawn as there were insufficient passenger
    numbers to make it cost-effective.  (Not surprising when the cost of
    all the tech, plus a "not a driver" and "conductor" were factored in.
    I.e. all the costs of the original bus, plus all the AV tech plus two
    members of staff on board rather than the usual one made the project
    unfeasible on cost grounds.)

    There were insufficient passenger numbers to make running the route
    cost-effective on an ongoing basis.  There was never any question of
    the passengers on that particular route alone having to meet all the
    startup costs of research and development, which would have made the
    fares prohibitively expensive.

    Do you accept that your example does not advance your argument because:

    (1) It was not a fully autonomous service.

    It was fully autonomous for part of it.

    (2) It required more staff than a traditional bus.

    Not for reasons of safety.

    (3) It has been withdrawn from service.

    Of course. It didn't attract enough passengers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 16:16:54 2025
    On 06/02/2025 15:28, Simon Parker wrote:
    On 05/02/2025 17:02, Norman Wells wrote:
    On 05/02/2025 12:42, Simon Parker wrote:
    On 04/02/2025 16:37, Norman Wells wrote:

    $75 billion has been spent by the automotive industry on developing
    autonomous vehicle technology, including the software.  How much do
    you think the black box company spent developing theirs?

    You get what you pay for.

    S&P Global Mobility (Google them) predict that Level 5 passenger cars
    will not appear before 2035 at the earliest.

    Meanwhile, Level 4 AVs, which will meet the requirements of the new
    Act, are being tested and proved *now*.

    Indeed they are.  But they are a significant way short of being a "fully autonomous vehicle", which is what was being discussed for the majority
    of this thread.

    Then you're using the term in a very specific and narrow way which you previously did not define.

    If you consider only Level 5 AVs to be 'fully autonomous' then, yes,
    they are still some way off.

    But that's a bit of a narrow definition, in that it excludes Level 4 AVs
    where:

    "The vehicle operates completely autonomously under certain conditions.
    The human being no longer has to be ready to intervene. He can work,
    watch movies and even sleep. And the vehicle may also drive alone, i.e.
    without occupants."

    All the criteria for being allowed on UK roads and operate autonomously
    seem to be met by Level 4.

    For context, Tesla "Autopilot" is classified as Level 2.

    Your numerous posts to this thread seem to demonstrate both that you
    fail to understand where we actually are with the current technology
    and are grossly underestimating how much further we have to go before
    we get to the point where AV is ready for mass deployment.

    And I think you're hiding your head in the sand or, to mix metaphors,
    trying to stem the tide of progress.

    My new(ish) car has / is currently being certified as a Level 4 AV in
    certain jurisdictions (but not the UK, and in a spirit of full
    disclosure, this was not taken into consideration when getting it).

    Is yours?

    Certainly not! With all the horror stories you tell, anyone would be a
    fool to buy one.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to Norman Wells on Thu Feb 6 21:13:00 2025
    On Thu, 6 Feb 2025 08:07:32 +0000, Norman Wells <hex@unseen.ac.am> wrote
    in <m0j8u4Fmj8iU1@mid.individual.net>:

    On 06/02/2025 00:29, Owen Rees wrote:
    Norman Wells <hex@unseen.ac.am> wrote:

    The operator is still 'not', whatever symbols are used to invoke it.

    In your post above you said the word not. That sort of sloppiness is
    unfortunately too common in software and a rich source of bugs.

    I am not inclined to accept your estimates of the effort required for a
    task unless you can demonstrate a better understanding of software. For
    example, can you explain what can go wrong when using a programming
    language that has both polymorphic comparison operators and implicit
    coercions?

    No, but nerds probably can.


    With that belittling and derogatory remark you demonstrate that you
    should never be entrusted with any role related to safety critical
    systems.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 6 16:53:25 2025
    On 06/02/2025 15:00, Simon Parker wrote:
    On 05/02/2025 16:21, Norman Wells wrote:
    On 05/02/2025 12:30, Simon Parker wrote:
    On 04/02/2025 16:25, Norman Wells wrote:

    You must then have missed the earlier references in this thread to
    studies showing that autonomous vehicles are in fact already safer
    than human driven ones.

    They are currently allowed already in parts of the USA, and in
    France and Germany.

    https://ioki.com/en/autonomous-vehicles-turning-point-in-european-
    regional-and-local-transport/
    In the UK, they could be allowed by 2026.

    It is my sad duty to inform you that you have, again, engaged in your
    usual habit of speaking with great authority on a matter whilst
    relying on Google searches and quick skims of articles in the
    background.

    According to the Department for Transport, which is a bit horse's mouthy:

    "Self-driving vehicles set to be on roads by 2026 as Automated
    Vehicles Act becomes law"

    https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-
    roads-by-2026-as-automated-vehicles-act-becomes-law

    Meaning I'll be able to ride in the UK equivalent of a Waymo from my
    house to any destination to which I'd normally get a taxi / Uber?

    In time. Not necessarily immediately.

    To much fanfare, Germany did indeed announce approval for autonomous
    vehicles in July of last year.  But those familiar with the system,
    (in contrast to those relying on Google to prop up their arguments),
    will be aware of the following key points:

    (1) This is a test / trial / whatever you want to call it of just six
    autonomous vehicles (AVs), not a mass roll-out.

    Did anyone claim it was?

    Has anybody used the words "test / trial / whatever you want to call it"
    when describing the proposed introduction of AVs in the UK throughout
    this thread?

    If not, why not?

    Because it's obvious. Anything to do with debugging is by definition a
    trial.

    Surely it would have been prudent to have made it clear that the
    proposed introduction of AVs to the UK is going to be limited in size,
    (in terms of the number of vehicles deployed), scope (the area in which
    the vehicles will be able to operate) and availability (all trials
    elsewhere, (and it is logical to assume the UK trials will be the same),
    have involved an initial period of cars with a "standby driver" on board being used to gather data prior to members of the project team taking
    rides in the vehicles before they are finally used by carefully selected participants in the project.

    Has anyone said anything different?

    Whilst the AVs may be introduced in 2026 when do you think ordinary
    members of the public will be able to ride in them?

    As soon as the Secretary of State authorises their use under Section 3
    of the Automated Vehicles Act 2024 having been satisfied that any
    particular model is capable of travelling autonomously and safely, which essentially means a level of safety equivalent to, or higher than, that
    of careful and competent human drivers.

    I've referred here earlier to a study showing autonomous vehicles are
    already meeting that standard.

    (2) It is tightly restricted to the Rhine-Main region, more
    specifically the city of Darmstadt in the Offenbach district, and
    even more specifically restricted only to defined routes within that
    city.
    K?
    (3) As can be gathered from point (2), but worth stating for the
    avoidance of doubt, the vehicles involved are Level 4 AVs, NOT Level
    5. They have a "safety monitor" behind the wheel and are also
    supervised remotely at a control centre.

    Well, I suppose they may do, but according to the official definition
    of Level 4:

    "The vehicle operates completely autonomously under certain
    conditions. The human being no longer has to be ready to intervene. He
    can work, watch movies and even sleep."

    Please outline what a Level 4 system is required to do should it fail or cannot proceed.

    Then, please outline what must happen immediately thereafter.

    Your point to make. You've got one, so you say, so you tell us.


    JJ Ricks documents each of his journeys using Waymo.  I invite you to
    watch this video:

    https://www.youtube.com/watch?v=zdKCQKBvH-A

    Note that at around 6:00 (Chapter "Right Turn Detour") the Waymo
    takes a right turn earlier than expected to make an easier left turn
    thereby avoiding a complicated junction (a difficult "unprotected
    left" which Waymo has a known habit of avoiding).  It does the same
    at around the ten minute mark (see previous comment on "known habits"
    of current AVs).

    How very sensible.

    I would not describe an increased journey time, increased fuel usage and taking rat-runs through residential estates rather than sticking to main roads as "very sensible".

    But then I have the advantage of actually knowing what I'm talking about rather than relying on Google to prop up my specious claims.

    Of course you do, as always.

    But you do seem to have an ability to import previously unmentioned
    'increased journey time', and 'increased fuel usage' into your previous scenario without a by-your-leave or any substantiation.

    Things get really messy from the 12:35 mark onward.  I actively
    invite you to watch it for yourself, but in the meantime, here's
    Waymo's official explanation of what happened:

    "While driving fully autonomously through an extended work zone, the
    Waymo Driver detected an unusual situation and requested the
    attention of a remote Fleet Response specialist to provide additional
    information.   During that interaction, the Fleet Response team
    provided incorrect guidance, which made it challenging for the Waymo
    Driver to resume its intended route, and required Waymo's Roadside
    Assistance team to complete the trip.  While the situation was not
    ideal, the Waymo Driver operated the vehicle safely until Roadside
    Assistance arrived. Throughout, Waymo's team was in touch with the
    rider, who provided thoughtful and helpful feedback that allows us to
    continue learning and improving the Waymo Driver.  Our team has
    already assessed the event and improved our operational process."

    Do you have anything a bit more recent than 3 years ago?  Things
    haven't stood still in the interim.

    If only I had not addressed that in the very post to which you are replying!  And, to answer your question, yes I do - I gave two further
    links the most recent of which was from December 2024.  Recent enough
    for you?

    BTW, do you have any evidence that the newer Waymo Driver systems have addressed these problems or are you merely claiming "Time has passed,
    things have moved on, this has probably been solved", without actually knowing much less having evidence to prove that to be the case?

    The whole purpose of trials is to note and address any issues, not just
    sit back for three years hoping they'll magically cure themselves. It
    is inconceivable given the commercial importance of getting it right
    that they have not been addressed.

    My explanation:

    The AV encountered a temporary no right turn because of roadworks
    marked by cones, (Ed: something we definitely do not have in the
    UK!), and could not complete the right turn it expected to make and
    (to use Waymo terminology), "paused" as it was incapable of re-
    routing in real time. A remote operator spoke to the passenger and
    advised him that "Roadside Assistance" (i.e. a human driver) was on
    the way.  Whilst waiting for a human driver to arrive, the Waymo
    Driver, decided it was going to make the right turn after all and
    having done so, then completely blocked the road.  However, as it was
    no longer "paused", "Roadside Assistance" was cancelled.  Except for
    the fact that the car was now sat in an active lane "paused" waiting
    for a driver to come and rescue it, but it had just cancelled the
    human, as it unpaused. There were several "false starts" and the AV
    eventually gave up and put the hazard lights on in the "fast-lane" of
    the active carriageway.  Roadside Assistance, (aka the human driver),
    arrived but the car took off again.  This happened several times
    before they finally accessed the vehicle at around 29 minutes at
    which point the human drove the rest of the way, (or in Waymo speak,
    "the ride was completed in manual").

    And remember, this is on controlled, specific pre-defined routes.
    Thankfully, Waymo keep "Roadside Assistance" (i.e. humans) around 5-6
    miles from each car.

    "Ah, but that video was three years ago!", I hear some cry.

    Indeed, so why go on about it in such depth?

    Because the problems highlighted in the video still exist.

    Proof please.

    OK then, here's a more recent one where the Waymo gets stuck in a loop:

    https://www.youtube.com/watch?v=esLzeHKhGzg

    No comment on this one?

    Only that trials exist in order to find errors and act to put them right.

    Or another where it tries to drive through a temporary construction
    zone (from December 2024):

    https://www.youtube.com/watch?v=2ihN0IkIMPg

    Despite claims to the contrary, this technology is not anywhere close
    to a mass roll-out yet.

    It's a process called debugging.

    Actually, it isn't.  Debugging is identifying and removing errors. These systems are still being actively developed and are therefore still
    firmly in the "Research and Development" phase.  For example, the latest version of the Waymo Driver utilises fewer sensors thereby reducing the cost.  Reducing the number of sensors is definitely R&D, not "debugging".

    Semantics. Debugging is part of development.

    Tangentially, I assume that you're aware that Waymo is under
    investigation by the National Highway Traffic Safety Administration
    (NHTSA) in America for 22 incidents in which the vehicles behaved
    erratically and sometimes disobeyed traffic safety rules or were
    involved in crashes? [^1]

    Any pronouncement to make on that particular development in the field of
    AVs?

    [^1] https://static.nhtsa.gov/odi/inv/2024/INOA-PE24016-12382.pdf

    Without knowing over how many vehicle-miles the incidents occurred, it's impossible to comment sensibly. In the same distance, how many similar incidents do human drivers cause, and how many are actually reported?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to Norman Wells on Thu Feb 6 21:51:28 2025
    On Thu, 6 Feb 2025 19:21:12 +0000, Norman Wells <hex@unseen.ac.am> wrote
    in <m0kgd9FseqvU2@mid.individual.net>:

    Some things are absolutely implicit, and shouldn't need to be spelt out
    to everyone regardless of mental capacity.

    Anyone with that attitude should never be allowed anywhere near the specification, design, implementation or operation of any system that
    requires high assurance.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Simon Parker on Thu Feb 6 22:30:13 2025
    On 2025-02-06, Simon Parker <simonparkerulm@gmail.com> wrote:
    On 05/02/2025 20:06, Norman Wells wrote:
    On 05/02/2025 17:32, Mark Goodge wrote:
    I have no overwhelming reason to disagree with the industry consensus (as >>> cited earlier by Simon Parker) that we're probably around ten years away >>> from the first fully autonomous (level 5) vehicles on the roads in
    real life use (as opposed to mere testing).

    We don't actually need Level 5.  Level 4 will do.  And they're being
    tested and proved right now.

    When you say "Level 4 will do", please define precisely what you mean by
    the phrase "will do".

    Given that you've only made posts in the last 24 hours referencing the various levels of autonomy in AVs, I'm not convinced you are entirely
    aware of the precise limitations inherent in a Level 4 AV.

    I'll consider cars to be "full self drive" if they are legally allowed
    to be used on all or very nearly all public roads with nobody inside
    who is capable of driving (i.e. no licence, or asleep or drunk, etc).
    And that they actually safely get you to the destination with high
    reliability, of course :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Owen Rees on Thu Feb 6 23:02:17 2025
    On 6 Feb 2025 at 21:13:00 GMT, "Owen Rees" <orees@hotmail.com> wrote:

    On Thu, 6 Feb 2025 08:07:32 +0000, Norman Wells <hex@unseen.ac.am> wrote
    in <m0j8u4Fmj8iU1@mid.individual.net>:

    On 06/02/2025 00:29, Owen Rees wrote:
    Norman Wells <hex@unseen.ac.am> wrote:

    The operator is still 'not', whatever symbols are used to invoke it.

    In your post above you said “the word ‘not’”. That sort of sloppiness is
    unfortunately too common in software and a rich source of bugs.

    I am not inclined to accept your estimates of the effort required for a
    task unless you can demonstrate a better understanding of software. For
    example, can you explain what can go wrong when using a programming
    language that has both polymorphic comparison operators and implicit
    coercions?

    No, but nerds probably can.


    With that belittling and derogatory remark you demonstrate that you
    should never be entrusted with any role related to safety critical
    systems.

    There may be appropriate openings in the Post Office Board??

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Fri Feb 7 09:10:40 2025
    On Thu, 6 Feb 2025 10:11:39 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:

    On 05/02/2025 15:39, Mark Goodge wrote:

    One scenario where it doesn't work is late at night or on lightly used roads >> where there isn't enough traffic to give a reliable statistical sample.

    That is a limitation OTOH when the roads are quiet the traffic jams tend
    not to be too bothersome either. Police will let the odd car go through.

    That depends on the reason why the road is closed. If the road is completely impassable due to being blocked by an accident or something like a fallen
    tree (which happened not far from me in the recent storms) then there would
    be no point in the police letting the odd car through, as even the odd car
    will need to turn back when they reach the obstruction.

    Although that does, of course, introduce a further complexity into the
    system that autonomous vehicles will need to be able to handle. There's a difference between a "hard" closure, where the road is simply impassable and all the police can do is tell you that, and a "soft" closure where it is passable with care but the police won't let you try unless you can give them
    a good reason.

    Mostly we see roads closed for repair late at night. The satnav can find
    some amazing alternative routes home (and seems to be fully aware of all >scheduled road closures). Last time that happened A19 down it picked a
    tiny road through a being built housing estate as a fantastic short cut.
    (it will be a right rat run once more people know about it)

    Roadworks are, generally, planned in advance (unless it's an emergency), and there is a mechanism for publishing that data which the sat-nav providers
    can tap into. If anything, they're more likely to be over-conservative in
    such situations because they will go strictly by the stated closure times
    which are likely to be longer than the actual closure. So you can find your sat-nav re-routing you to avoid a location where the construction workers
    took the barriers down and packed up an hour ago.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to J Newman on Fri Feb 7 08:54:03 2025
    J Newman <jenniferkatenewman@gmail.com> wrote:
    On 03/02/2025 16:59, Mike Scott wrote:
    On 03/02/2025 10:54, billy bookcase wrote:
    With the sheer volume of inputs into an autopilot (location, speed,
    multiple
    cameras,  bugs on the windscreen, lidar, clouds, weather, lighting,
    pedestrians,
    cyclists, other vehicles) do you think you could*ever* duplicate the
    inputs?
    It's purely hypothetical.

    If two sets of inputs were identical then the outputs would be identical >>> as all the data is numerical and subject to exactly the same set of
    calculations.

    Sorry, that doesn't work in the real world.

    The whole point about "butterfly wings" is that many (all??) non-linear
    calculations are thoroughly chaotic in their results. If you can't
    specify inputs precisely, outputs can vary unpredictably. And any
    "numerical" input from the real world cannot be *precisely* specified.
    Output therefore can depend critically on an imprecise input.

    (It's not that the butterfly /causes/ the hurricane. It's just that you
    cannot put the full initial conditions into the highly non-linear
    weather calculations. Include the butterfly, predict a hurricane;
    exclude it, predict a sunny day. But you don't even know it's there....)

    I reckon a driver AI is decidedly non-linear system.


    I don't think some people here give AI enough credit. Ask OpenAI for
    example exactly the same questions, and you will get slightly different,
    but usually essentially the same, answers.

    That depends on the system not learning and not using inputs other than the question.

    If I ask whether or not I should take an umbrella when I walk to the local shops I would hope to get different answers depending on where I am and the local weather forecast for the day.


    It's human-like in this regard. If I ask a human two identical
    questions, I would get slightly different answers in terms of prose (but hopefully identical in terms of substance).

    Have I asked you this question before?

    Traditionally, unknown inputs that affect the result are referred to as “phase of the moon”. If you look it up in the jargon file you will find that there was a surprising result at CERN where phase of the moon was the input that made the difference.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to J Newman on Fri Feb 7 09:11:12 2025
    "J Newman" <jenniferkatenewman@gmail.com> wrote in message news:vo49ms$3d0ms$1@dont-email.me...

    It's human-like in this regard. If I ask a human two identical questions, I would get
    slightly different answers in terms of prose (but hopefully identical in terms of
    substance).

    You're lucky then.

    Some humans can ask another human an identical question
    on two successive days and get a completely different
    answer.each time.

    And on the next day as well


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 09:24:00 2025
    On Thu, 6 Feb 2025 15:41:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 06/02/2025 14:53, Simon Parker wrote:

    So you'll be posting a copy of your code, presently. Please feel free
    to use the language in which you are most comfortable.

    I don't write code any more; I leave that to minions.

    I'm glad to see that your experience of working for Fujitsu on the Horizon project has come in useful.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 09:20:23 2025
    On Thu, 6 Feb 2025 19:24:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 06/02/2025 18:57, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Then they all should be notified,

    How?

    Please can you detail the process by which the fact that a police officer
    has just closed a road due to an accident is notified to all of the various >> systems which might want that information.

    This is, actually, a solvable problem, given certain basic assumptions, and >> I have what I think would be an effective solution in mind, but I'd be
    interested to hear how you would do it if you were desiging the system.

    It's a solvable problem according to you, just as I said.

    That's quite enough.

    Ah, but my solution rests on assumptions which are not yet reality. I wonder
    if you can correctly identify what they are. And, as I've pointed out, I am
    a programmer in my day job. Given your repeated insistance that you
    understand all the principles as well as, or even better than, people who actually work in this field it ought to be a trivial exercise for you to at least describe the basic architecture of the necessary system. You don't
    even need to write any actual code samples for that. Just a high-level
    overview will do.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Mike Scott on Fri Feb 7 09:02:50 2025
    "Mike Scott" <usenet.16@scottsonline.org.uk.invalid> wrote in message news:vnqll2$1agia$1@dont-email.me...
    On 03/02/2025 10:54, billy bookcase wrote:
    With the sheer volume of inputs into an autopilot (location, speed, multiple
    cameras, bugs on the windscreen, lidar, clouds, weather, lighting, pedestrians,
    cyclists, other vehicles) do you think you could*ever* duplicate the inputs?
    It's purely hypothetical.

    If two sets of inputs were identical then the outputs would be identical
    as all the data is numerical and subject to exactly the same set of
    calculations.

    Sorry, that doesn't work in the real world.

    The whole point about "butterfly wings" is that many (all??) non-linear calculations
    are thoroughly chaotic in their results. If you can't specify inputs precisely, outputs
    can vary unpredictably.

    You can specify inputs as precisely as you wish.

    The question is whether they're accurate or not.

    Garbage to nine decimal places in - garbage to nine decimal places out
    as they say.

    And any "numerical" input from the real world cannot be *precisely* specified. Output
    therefore can depend critically on an imprecise input.

    In this instance the input will be data from sensors indicating the
    successive positions of all moving objects at say 1/10 second
    intervals from which can be discerned with a sufficient level of
    accuracy their direction, speed, state of acceleration or
    deceleration along with projections of all possible paths (so
    different for cyclists and Ferraris)

    A lot of linear calculations there but sufficiently accurate I'd have
    imagined to avoid any possibility of collisions.


    (It's not that the butterfly /causes/ the hurricane. It's just that you cannot put the
    full initial conditions into the highly non-linear weather calculations. Include the
    butterfly, predict a hurricane; exclude it, predict a sunny day. But you don't even
    know it's there....)

    I reckon a driver AI is decidedly non-linear system.

    For a moment there, I was even envisaging sensors to prompt the
    AV to switch on its windscreen wipers, so it could see out better.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 09:31:59 2025
    On Thu, 6 Feb 2025 08:14:08 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 17:18, Mark Goodge wrote:

    And that's just one document. Here are a few more:

    https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
    https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
    https://arxiv.org/pdf/2402.17690
    https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
    https://www.techtarget.com/searchenterpriseai/definition/driverless-car

    TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is >> either stunningly ignorant or deliberately misleading you.

    AI is a buzzword which is generally used by the pretentious to make out
    they have a better understanding of anything complex than they actually
    do. But AI involves the crunching of huge amounts of data to develop >principles and insights that can be used more simply elsewhere without
    having to reinvent the wheel every time. Autonomous car controls don't
    have vast amounts of data, and you don't want to sit around for a week >waiting for it to decide exactly what it is it has just crashed into.
    They are not employing AI, though AI may have played a part in
    developing the algorithms they do actually use.

    Norman, I have just posted links to documents by researchers and autonomous vehicle developers which clearly state that they are using AI. If you
    disagree with them, then maybe you could address those claims and explain
    why you think they are wrong. Reference to your experience working on
    similar projects would be invaluable.

    It's all a matter of proper definition of terms which is too commonly >lacking.

    So tell us about the AI you've worked on, and how that differs from the AI
    used by autonomous vehicle developers.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to J Newman on Fri Feb 7 09:54:58 2025
    On Fri, 07 Feb 2025 08:37:12 +0200, J Newman wrote:

    On 03/02/2025 16:59, Mike Scott wrote:
    [quoted text muted]
    I don't think some people here give AI enough credit. Ask OpenAI for
    example exactly the same questions, and you will get slightly different,
    but usually essentially the same, answers.

    Doesn't make them right though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Fri Feb 7 09:58:41 2025
    On Thu, 6 Feb 2025 12:43:02 +0000, Max Demian <max_demian@bigfoot.com>
    wrote:

    On 05/02/2025 18:05, Mark Goodge wrote:

    Of course it matters what the object is. Some objects will do more damage to >> the car if the car strikes them, and some objects will suffer more injury of >> the car strikes them. There's a huge difference between, say, a plastic bag, >> a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course, >> you'd avoid all of them. But you wouldn't necessarily take abrupt evasive
    action to avoid a plastic bag. On the other hand, you'd hit anything -
    including a bollard - if necessary to avoid hitting a child.

    I'm reminded of those creepy bollards that look like children. I suppose
    the idea is that the driver will avoid hitting them because he likes >children; more likely he will avoid hitting them because they are made
    of metal.

    The point is more that humans are, subconsciously, more likely to notice children than bollards. It's something that's hardwired into the human
    psyche. So people are less likely to hit bollards that look like children,
    not because they like children, but because they are more likely to notice bollards that look like children. And, having noticed them, they are then
    much less likely to hit them.

    This is actually an interesting point as far as autonomous vehicles are concerned. It's well known in highway design that "sign blindness" can be an issue; if you have too many signs at one location, even if individually they each have something important to say, then drivers are less likely to take account of them. Information overload reduces compliance and reduces safety. But robot cars won't have that issue. On the contrary, as far as AI is concerned, you can never have too much data.

    Equally, there are a lot of visual aspects of road design which act directly
    on human psychology. Things like "dragon's teeth" markings, village boundary gateways, etc all have no physical effect on a vehicle (unlike, say, speed bumps) and have no legal significance (unlike a 30pmh sign). And yet they
    have a statistically significant effect in reducing accidents - not because people see them and think "oh, I must drive more safely", but simply because they cannot help being affected by them subconsciously. But AI doesn't have
    a subconscious. So "nudge theory" won't work on autonomous vehicles,
    everything they neeed to know will need to be explicitly programmed, communicated or observed.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Jon Ribbens on Fri Feb 7 10:01:09 2025
    On Thu, 06 Feb 2025 22:30:13 +0000, Jon Ribbens wrote:

    On 2025-02-06, Simon Parker <simonparkerulm@gmail.com> wrote:
    On 05/02/2025 20:06, Norman Wells wrote:
    On 05/02/2025 17:32, Mark Goodge wrote:
    I have no overwhelming reason to disagree with the industry consensus
    (as cited earlier by Simon Parker) that we're probably around ten
    years away from the first fully autonomous (level 5) vehicles on the
    roads in real life use (as opposed to mere testing).

    We don't actually need Level 5.  Level 4 will do.  And they're being
    tested and proved right now.

    When you say "Level 4 will do", please define precisely what you mean
    by the phrase "will do".

    Given that you've only made posts in the last 24 hours referencing the
    various levels of autonomy in AVs, I'm not convinced you are entirely
    aware of the precise limitations inherent in a Level 4 AV.

    I'll consider cars to be "full self drive" if they are legally allowed
    to be used on all or very nearly all public roads with nobody inside who
    is capable of driving (i.e. no licence, or asleep or drunk, etc). And
    that they actually safely get you to the destination with high
    reliability, of course :-)

    If all cars were autonomous, we could have that now ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Mark Goodge on Fri Feb 7 10:09:41 2025
    "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote in message news:eskbqj5crd0m2cs8nhljpvq3bub310avl0@4ax.com...
    On Thu, 6 Feb 2025 12:43:02 +0000, Max Demian <max_demian@bigfoot.com>
    wrote:

    On 05/02/2025 18:05, Mark Goodge wrote:

    Of course it matters what the object is. Some objects will do more damage to
    the car if the car strikes them, and some objects will suffer more injury of
    the car strikes them. There's a huge difference between, say, a plastic bag,
    a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course,
    you'd avoid all of them. But you wouldn't necessarily take abrupt evasive >>> action to avoid a plastic bag. On the other hand, you'd hit anything -
    including a bollard - if necessary to avoid hitting a child.

    I'm reminded of those creepy bollards that look like children. I suppose >>the idea is that the driver will avoid hitting them because he likes >>children; more likely he will avoid hitting them because they are made
    of metal.

    The point is more that humans are, subconsciously, more likely to notice children than bollards. It's something that's hardwired into the human psyche. So people are less likely to hit bollards that look like children, not because they like children, but because they are more likely to notice bollards that look like children. And, having noticed them, they are then much less likely to hit them.

    This is actually an interesting point as far as autonomous vehicles are concerned. It's well known in highway design that "sign blindness" can be an issue; if you have too many signs at one location, even if individually they each have something important to say, then drivers are less likely to take account of them. Information overload reduces compliance and reduces safety. But robot cars won't have that issue. On the contrary, as far as AI is concerned, you can never have too much data.

    Equally, there are a lot of visual aspects of road design which act directly on human psychology. Things like "dragon's teeth" markings, village boundary gateways, etc all have no physical effect on a vehicle (unlike, say, speed bumps) and have no legal significance (unlike a 30pmh sign). And yet they have a statistically significant effect in reducing accidents - not because people see them and think "oh, I must drive more safely", but simply because they cannot help being affected by them subconsciously. But AI doesn't have
    a subconscious. So "nudge theory" won't work on autonomous vehicles, everything they neeed to know will need to be explicitly programmed, communicated or observed.

    They are also more likely to spot any "Invisible Gorillas"

    quote

    To demonstrate this effect they created a video where students [Again !}
    pass a basketball between themselves. Viewers asked to count the number
    of times the players with the white shirts pass the ball often fail to
    notice a person in a gorilla suit who appears in the center of the image

    :unquote

    https://en.wikipedia.org/wiki/The_Invisible_Gorilla


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Fri Feb 7 10:23:55 2025
    On Thu, 6 Feb 2025 12:46:05 +0000, Max Demian <max_demian@bigfoot.com>
    wrote:

    On 04/02/2025 21:29, Mark Goodge wrote:
    On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>> On 04/02/2025 14:47, Simon Parker wrote:

    Then please detail which parties will be fulfilling the roles of the
    commander and the flight technicians in fully automated vehicles.

    No-one. Just as there isn't on the Docklands Light Railway.

    DLR trains have a "Passenger Service Agent" (previously known as a Train
    Captain) on board who is able to take over and drive it manually if
    necessary. And the trains themselves don't need any routing capability,
    because they run on rails. It's a very far cry from a fully autonomous road >> vehicle.

    Are these "Passenger Service Agents" fully qualified drivers (and paid
    as much)? If so, what's the point?

    Yes, they are.

    https://www.mylondon.news/news/east-london-news/dlr-hiring-customer-service-role-20925766

    As for why don't they just have them manually driven all the time, the
    answer is that letting the automation do the majority of the driving frees
    up the PSA to interact with passengers, including things like helping
    disabled users, provide information and deter crime. Lanual driving would
    mean either having double the onboard staff, or not having onboard passenger-facing staff. The former would be too expensive, the latter would
    be unpopular with passengers and may not necessarily save that much money anyway as it would increase staffing needs at stations.

    One of the disadvantages of the automated tube lines (such as the Victoria Line) is that there's no onboard staff in direct contact with passengers
    (there is still a backup driver, but they sit in the cab). That means that
    all passenger assistance (eg, helping a wheelchair user) has to be provided
    by station staff, and there's litle opportunity to anticipate and prevent things like "steamer" theft while a train is between stations. The DLR, by contrast, has most of its front-line staff, other than at major
    interchanges, on the trains and in direct contact with passengers
    throughout. That means they can provide assistance to passengers who need
    help getting on and off, and if they see someone do anything illegal (such
    as theft, or assault) while a train is between stations they can immediately report it and have BTP waiting at the next station to deal with the perpetrator(s).

    So the DLR is a more user-friendly service than the tube, and one of the
    ways it manages that is to have as much as possible of non-passenger facing roles (such as driving) automated, leaving the majority of its staff to work
    in passenger facing roles.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Owen Rees on Fri Feb 7 09:48:25 2025
    On 06/02/2025 21:51, Owen Rees wrote:
    On Thu, 6 Feb 2025 19:21:12 +0000, Norman Wells <hex@unseen.ac.am> wrote
    in <m0kgd9FseqvU2@mid.individual.net>:

    Some things are absolutely implicit, and shouldn't need to be spelt out
    to everyone regardless of mental capacity.

    Anyone with that attitude should never be allowed anywhere near the specification, design, implementation or operation of any system that requires high assurance.

    How does that follow?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Owen Rees on Fri Feb 7 09:50:03 2025
    On 06/02/2025 21:13, Owen Rees wrote:
    On Thu, 6 Feb 2025 08:07:32 +0000, Norman Wells <hex@unseen.ac.am> wrote
    in <m0j8u4Fmj8iU1@mid.individual.net>:
    On 06/02/2025 00:29, Owen Rees wrote:
    Norman Wells <hex@unseen.ac.am> wrote:

    The operator is still 'not', whatever symbols are used to invoke it.

    In your post above you said “the word ‘not’”. That sort of sloppiness is
    unfortunately too common in software and a rich source of bugs.

    I am not inclined to accept your estimates of the effort required for a
    task unless you can demonstrate a better understanding of software. For
    example, can you explain what can go wrong when using a programming
    language that has both polymorphic comparison operators and implicit
    coercions?

    No, but nerds probably can.

    With that belittling and derogatory remark you demonstrate that you
    should never be entrusted with any role related to safety critical
    systems.

    It's unlikely that I will be. But I don't see how your conclusion
    follows from what I said anyway.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Fri Feb 7 09:47:46 2025
    On 06/02/2025 14:49, Simon Parker wrote:
    On 05/02/2025 15:56, Norman Wells wrote:
    On 05/02/2025 12:14, Simon Parker wrote:
    On 04/02/2025 15:20, Norman Wells wrote:
    On 04/02/2025 12:16, Mark Goodge wrote:

    It would be trivially easy to do though.  Police alert HQ, HQ tells
    mapping, mapping puts a block where the accident is, autonomous
    vehicles re-route to avoid it.

    Is the police officer placing a general call with HQ or to a specific
    desk at HQ?  What happens if the request cannot be connected to HQ
    immediately?  How do HQ alert mapping? Is HQ alerting Google Maps,
    Apple Maps, Garmin, et al individually are are they alerting a
    central resource which subsequently pushes this data out to various
    service providers?t

    Not my province.  I just show how it can easily be done.  The details
    can be left to minions.

    I do not consider it reasonable to state that a particular task is
    "trivially easy to do" and then when asked for further details to
    dismiss the question with a "Not my province." statement.

    Additionally, the questions were asked to demonstrate that what you
    claim is not, in fact, "trivially easy" and could, or more likely would,
    be rather complex.

    It may sound simple in theory, but as the quote reminds us, "In theory, theory and practice are the same.  In practice they are not."

    Similarly, "The Devil is in the details."

    Details you are unable or unwilling to provide resulting in your claim
    that it is "trivially easy to do" failing as unsubstantiated and
    unsupported.

    It's nuts and bolts stuff. Ask a builder. I deal with the bigger picture.

    Not forgetting, of course, and quite topical at the moment: who is
    paying for all this?

    I do believe electronic communications are to all intents and purposes
    free these days.  There is no reason why most of it cannot be
    automated and cost very little.

    Again, your bold claims fail through lack of evidence when challenged.

    For example, how are your automated electronic communications coping
    with "Not Spots"?

    Not very well of course. But not spots concern just about 7% of the
    UK's land mass, generally in very remote areas. 93% of the nations'
    land mass is not a not spot, where of course electronic communications
    work very well and essentially for free. What's not to like?

    Your glass half empty attitude, even when it's 93% full, is yet another
    example of perfect being the enemy of good.

    At present, Google, Apple and Garmin get their data, mostly, without
    charge.

    Do they?  Where from?

    Surely someone making the bold claims you have made throughout the
    thread is fully aware of how the current systems work and would not need
    to ask such a question?

    You made the claim. I just queried it. And I wonder, since you didn't
    say, what point you were trying to make.

    (Note the use of the word "systems" as there are multiple systems in play.)

    If you do not know how the current systems work, how can you hope to
    support the claims you are repeatedly making throughout the thread?

    Or are you hoping your claims will go unchallenged just because you made
    them and that ought to be enough for everyone to accept they are true regardless?

    I don't know what claims you're only vaguely alluding to.

    Why would they want to move to a model that involves them paying for
    what they currently get for free?

    It's in the national interest that such information should be freely
    available and used.

    Please provide examples where large tech companies have prioritised
    acting in the national interest over acting in their own interest unless
    and until compelled to do so by legislation.

    I'm not following. Why do you think companies will have to move to a
    model that involves them paying for what they currently get for free?

    Additionally, the current systems compete with each other rather than
    feeding into a central system you envisage.  By what means are you consolidating the competing systems?

    There's no reason why they should have to consolidate. Competition
    drives progress. That's why, for example, there are commercial
    data-bases of speed limits rather than just piles of paper TRO in
    Council archives.

    Except that a human won't do anything either until and unless he
    knows where a blockage is.

    As autonomous vehicles become used there will have to be a
    requirement for accurate and rapid information on blockages.

    Information on congestion is readily available already in real time.
    There's no technical reason why the same could not be done for
    blockages.

    It is clear that you do not understand how the current system works
    because, if you did, you would know why this will not work for
    blockages such as those being described.

    But it could, and should.  There is no technological reason why it can't.

    There is no technological reason why every member of ULM cannot take a
    flight into space.  On what date is your flight scheduled?

    Sadly, but clearly, there are considerations other than technological
    reasons that must be factored in to the real world consideration of a
    matter.

    What I suggest is perfectly feasible. As others here have confirmed.
    Even if it requires a bit of effort and a positive attitude.

    These things exist already.

    If the policeman has a Scottish accent?  Or Brummie?  (I recommend
    searching for "Amy101" if the second reference doesn't elicit a
    "light bulb" moment.)

    Then it will need to go through Google Translate, as usual.

    Oh dear!  I recommend re-reading what I wrote.  The problem is *accents* not *language*.  AI is notoriously poor at dealing with certain accents.

    You do like edge cases. And you sadly seem to think that any minor
    hitch is a fatal flaw, when it's anything but.

    For the second time of asking, I recommend searching for "Amy101" as
    this is clearly another area on which you are happy to issue
    pronouncements whilst making it abundantly clear that you do not
    understand the subject.


    It's not AI, which is something entirely different.

    Modern, (c.f. "Current"), voice recognition, (i.e. the very systems
    you are referencing above that "already exist") is "not AI"?  Are you
    sure? I recommend phoning a friend, or asking the audience before you
    lock in your answer. :-)

    It is not AI in itself, though it is the product of previous AI which
    mined the data.

    Google Assistant (or whatever they are calling it this week), Siri and
    Alexa all use AI technologies like machine learning, natural language processing and voice recognition to interact with users.

    These technologies are often referred to as types of Narrow AI.

    I know of nobody, with one notable exception, that claims that Narrow AI
    is not AI.

    Why it has a particular designation is that it is not general AI as it
    has been used and assumed by others in this thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Fri Feb 7 10:15:23 2025
    On 07/02/2025 09:20, Mark Goodge wrote:
    On Thu, 6 Feb 2025 19:24:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 06/02/2025 18:57, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    Then they all should be notified,

    How?

    Please can you detail the process by which the fact that a police officer >>> has just closed a road due to an accident is notified to all of the various >>> systems which might want that information.

    This is, actually, a solvable problem, given certain basic assumptions, and >>> I have what I think would be an effective solution in mind, but I'd be
    interested to hear how you would do it if you were desiging the system.

    It's a solvable problem according to you, just as I said.

    That's quite enough.

    Ah, but my solution rests on assumptions which are not yet reality. I wonder if you can correctly identify what they are.

    I'm sure you'll tell us if it's important. Guessing games really aren't
    my thing.

    And, as I've pointed out, I am
    a programmer in my day job. Given your repeated insistance that you understand all the principles as well as, or even better than, people who actually work in this field it ought to be a trivial exercise for you to at least describe the basic architecture of the necessary system. You don't
    even need to write any actual code samples for that. Just a high-level overview will do.

    That's what I gave you. Officer calls HQ, HQ alerts live mapping
    providers, they put blocks on maps, AVs re-route to avoid blocks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to J Newman on Fri Feb 7 10:00:07 2025
    On Fri, 07 Feb 2025 08:40:34 +0200, J Newman wrote:

    On 03/02/2025 17:43, Mark Goodge wrote:
    On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    On 03/02/2025 10:07, billy bookcase wrote:
    "Martin Harran" <martinharran@gmail.com> wrote in message
    news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...

    "At the current time" for complete control but probably not too far
    away. There is quite a bit of AI already in modern cars with things
    like picking up speed limits, automatic speed reduction when too
    close to a vehicle in front, lane-keep assist.

    Those are examples of automation, as with autpilots. Where the each
    set of inputs, no matter how complex, will produce exactly the same
    result every time.

    AI systems on the other hand, learn and make up new rules as they go
    along.
    And so the process by which they actually arrive at particular
    decisions may not be readily available to humans.

    I think this is possibly what a lot of peopel are really worried
    about,

    AI is not part of autonomous vehicle control, and includes none of the
    things Mr Harran identified above. It is simply automation according
    to pre-determined rules.

    But that's precisely the problem. It's impossible to have a
    predetermined rule for every possible scenario. And dealing with
    scenarios that you don't have a rule for requires intelligence.

    Mark


    So here's how ChatGPT responded to such a scenario. Is AI intelligent
    enough to drive cars?

    Q: Can you come up with a driving condition for which there are no rules
    for, and can you say what you would do?

    A:
    Scenario:
    You're driving at night in a rural area with no streetlights, and
    suddenly, a massive swarm of insects (e.g., locusts or mayflies)
    completely covers your windshield, making visibility nearly zero. Your
    wipers and washer fluid are ineffective because the sheer number of
    insects is too great.

    What I Would Do:
    Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a rear-end collision if there's a vehicle behind me.
    Reduce Speed Gradually – I’d ease off the accelerator and slowly decelerate to maintain control.
    Turn on Hazard Lights – This alerts other drivers to my reduced speed
    and possible erratic movement.
    Find a Safe Stopping Point – If I can, I’d try to pull over to a safe location like the shoulder, a rest area, or a side road.
    Use Alternate Visibility Methods – If my windshield is completely
    covered, I might open my side window and stick my head out carefully (if safe) to get some visibility.
    Assess and Clear the Windshield – Once stopped, I’d use a squeegee, rag, or even a piece of clothing to remove the insects.
    Report the Hazard if Needed – If it's a massive swarm affecting multiple drivers, I might call local authorities to warn others.
    This is a situation that isn’t explicitly covered by standard driving
    laws, but the best approach follows general defensive driving principles—stay calm, maintain control, and prioritize visibility and safety.

    Would you have done something differently?

    How did it face with the trolleybus problem ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 10:34:52 2025
    On Wed, 5 Feb 2025 20:06:11 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 17:32, Mark Goodge wrote:
    On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:21, Mark Goodge wrote:

    Again, that is my point. To be fully autonomous, a vehicle will need to be >>>> able to derive the necessary data from its surroundings, context and memory
    without relying solely on information transmitted from a central system. >>>> That is, it will need to be able to behave like a human driver. Which, as >>>> yet, they can't do. They'll get there eventually. But getting there isn't a
    trivial problem to solve.

    They are much closer already than you think.

    I have no overwhelming reason to disagree with the industry consensus (as
    cited earlier by Simon Parker) that we're probably around ten years away
    from the first fully autonomous (level 5) vehicles on the roads in real life >> use (as opposed to mere testing).

    We don't actually need Level 5. Level 4 will do. And they're being
    tested and proved right now.

    So you don't actually believe that fully autonomous vehicles are closer than
    I think?

    Obviously, programming an autonomous vehicle is more of a challenge than
    programming a chatbot. But the underlying trade-offs of computing power v
    cost are essentially the same. So it would not surprise me at all if the
    first genuine level 5 vehicles come from China. Which would raise a whole
    lot of other ethical and legal issues.

    Would it? Why?

    Because of China's known propensity to engage in data extraction and censorship. What if a Chinese-made automated taxi was carrying a couple of politicians having a conversation about Taiwan?

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Fri Feb 7 10:23:02 2025
    On 07/02/2025 09:31, Mark Goodge wrote:
    On Thu, 6 Feb 2025 08:14:08 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 17:18, Mark Goodge wrote:

    And that's just one document. Here are a few more:

    https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
    https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
    https://arxiv.org/pdf/2402.17690
    https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
    https://www.techtarget.com/searchenterpriseai/definition/driverless-car

    TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is >>> either stunningly ignorant or deliberately misleading you.

    AI is a buzzword which is generally used by the pretentious to make out
    they have a better understanding of anything complex than they actually
    do. But AI involves the crunching of huge amounts of data to develop
    principles and insights that can be used more simply elsewhere without
    having to reinvent the wheel every time. Autonomous car controls don't
    have vast amounts of data, and you don't want to sit around for a week
    waiting for it to decide exactly what it is it has just crashed into.
    They are not employing AI, though AI may have played a part in
    developing the algorithms they do actually use.

    Norman, I have just posted links to documents by researchers and autonomous vehicle developers which clearly state that they are using AI.

    Well, they would, wouldn't they? They want to sound cutting edge.

    What they're actually using is what Mr Parker has identified as 'Narrow
    AI' but that doesn't sound as good. However, it does mean their cars
    don't have to carry round with them their own huge data-processing
    centres, electrical sub-stations and massive water-cooling facilities,
    which broader or proper AI requires.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Norman Wells on Fri Feb 7 11:04:28 2025
    On 7 Feb 2025 at 09:47:46 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 06/02/2025 14:49, Simon Parker wrote:
    On 05/02/2025 15:56, Norman Wells wrote:
    On 05/02/2025 12:14, Simon Parker wrote:
    On 04/02/2025 15:20, Norman Wells wrote:
    On 04/02/2025 12:16, Mark Goodge wrote:

    It would be trivially easy to do though. Police alert HQ, HQ tells
    mapping, mapping puts a block where the accident is, autonomous
    vehicles re-route to avoid it.

    Is the police officer placing a general call with HQ or to a specific
    desk at HQ? What happens if the request cannot be connected to HQ
    immediately? How do HQ alert mapping? Is HQ alerting Google Maps,
    Apple Maps, Garmin, et al individually are are they alerting a
    central resource which subsequently pushes this data out to various
    service providers?t

    Not my province. I just show how it can easily be done. The details
    can be left to minions.

    I do not consider it reasonable to state that a particular task is
    "trivially easy to do" and then when asked for further details to
    dismiss the question with a "Not my province." statement.

    Additionally, the questions were asked to demonstrate that what you
    claim is not, in fact, "trivially easy" and could, or more likely would,
    be rather complex.

    It may sound simple in theory, but as the quote reminds us, "In theory,
    theory and practice are the same. In practice they are not."

    Similarly, "The Devil is in the details."

    Details you are unable or unwilling to provide resulting in your claim
    that it is "trivially easy to do" failing as unsubstantiated and
    unsupported.

    It's nuts and bolts stuff. Ask a builder. I deal with the bigger picture.

    Not forgetting, of course, and quite topical at the moment: who is
    paying for all this?

    I do believe electronic communications are to all intents and purposes
    free these days. There is no reason why most of it cannot be
    automated and cost very little.

    Again, your bold claims fail through lack of evidence when challenged.

    For example, how are your automated electronic communications coping
    with "Not Spots"?

    Not very well of course. But not spots concern just about 7% of the
    UK's land mass, generally in very remote areas. 93% of the nations'
    land mass is not a not spot, where of course electronic communications
    work very well and essentially for free. What's not to like?

    Your glass half empty attitude, even when it's 93% full, is yet another example of perfect being the enemy of good.

    At present, Google, Apple and Garmin get their data, mostly, without
    charge.

    Do they? Where from?

    Surely someone making the bold claims you have made throughout the
    thread is fully aware of how the current systems work and would not need
    to ask such a question?

    You made the claim. I just queried it. And I wonder, since you didn't
    say, what point you were trying to make.

    (Note the use of the word "systems" as there are multiple systems in play.) >>
    If you do not know how the current systems work, how can you hope to
    support the claims you are repeatedly making throughout the thread?

    Or are you hoping your claims will go unchallenged just because you made
    them and that ought to be enough for everyone to accept they are true
    regardless?

    I don't know what claims you're only vaguely alluding to.

    Why would they want to move to a model that involves them paying for
    what they currently get for free?

    It's in the national interest that such information should be freely
    available and used.

    Please provide examples where large tech companies have prioritised
    acting in the national interest over acting in their own interest unless
    and until compelled to do so by legislation.

    I'm not following. Why do you think companies will have to move to a
    model that involves them paying for what they currently get for free?

    Additionally, the current systems compete with each other rather than
    feeding into a central system you envisage. By what means are you
    consolidating the competing systems?

    There's no reason why they should have to consolidate. Competition
    drives progress. That's why, for example, there are commercial
    data-bases of speed limits rather than just piles of paper TRO in
    Council archives.

    Except that a human won't do anything either until and unless he
    knows where a blockage is.

    As autonomous vehicles become used there will have to be a
    requirement for accurate and rapid information on blockages.

    Information on congestion is readily available already in real time. >>>>> There's no technical reason why the same could not be done for
    blockages.

    It is clear that you do not understand how the current system works
    because, if you did, you would know why this will not work for
    blockages such as those being described.

    But it could, and should. There is no technological reason why it can't. >>
    There is no technological reason why every member of ULM cannot take a
    flight into space. On what date is your flight scheduled?

    Sadly, but clearly, there are considerations other than technological
    reasons that must be factored in to the real world consideration of a
    matter.

    What I suggest is perfectly feasible. As others here have confirmed.
    Even if it requires a bit of effort and a positive attitude.

    These things exist already.

    If the policeman has a Scottish accent? Or Brummie? (I recommend
    searching for "Amy101" if the second reference doesn't elicit a
    "light bulb" moment.)

    Then it will need to go through Google Translate, as usual.

    Oh dear! I recommend re-reading what I wrote. The problem is *accents*
    not *language*. AI is notoriously poor at dealing with certain accents.

    You do like edge cases. And you sadly seem to think that any minor
    hitch is a fatal flaw, when it's anything but.

    For the second time of asking, I recommend searching for "Amy101" as
    this is clearly another area on which you are happy to issue
    pronouncements whilst making it abundantly clear that you do not
    understand the subject.


    It's not AI, which is something entirely different.

    Modern, (c.f. "Current"), voice recognition, (i.e. the very systems
    you are referencing above that "already exist") is "not AI"? Are you
    sure? I recommend phoning a friend, or asking the audience before you
    lock in your answer. :-)

    It is not AI in itself, though it is the product of previous AI which
    mined the data.

    Google Assistant (or whatever they are calling it this week), Siri and
    Alexa all use AI technologies like machine learning, natural language
    processing and voice recognition to interact with users.

    These technologies are often referred to as types of Narrow AI.

    I know of nobody, with one notable exception, that claims that Narrow AI
    is not AI.

    Why it has a particular designation is that it is not general AI as it
    has been used and assumed by others in this thread.

    No one but you has even for a moment assumed that ChatGPT-style AI was any way involved; if that was what you were denying, then you got the wrong end of the stick. What is heavily involved are learning systems for, e.g., image analysis and decision making.


    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to jethro_uk@hotmailbin.com on Fri Feb 7 11:12:58 2025
    On 2025-02-07, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
    On Thu, 06 Feb 2025 22:30:13 +0000, Jon Ribbens wrote:

    On 2025-02-06, Simon Parker <simonparkerulm@gmail.com> wrote:
    On 05/02/2025 20:06, Norman Wells wrote:
    On 05/02/2025 17:32, Mark Goodge wrote:
    I have no overwhelming reason to disagree with the industry consensus >>>>> (as cited earlier by Simon Parker) that we're probably around ten
    years away from the first fully autonomous (level 5) vehicles on the >>>>> roads in real life use (as opposed to mere testing).

    We don't actually need Level 5.  Level 4 will do.  And they're being >>>> tested and proved right now.

    When you say "Level 4 will do", please define precisely what you mean
    by the phrase "will do".

    Given that you've only made posts in the last 24 hours referencing the
    various levels of autonomy in AVs, I'm not convinced you are entirely
    aware of the precise limitations inherent in a Level 4 AV.

    I'll consider cars to be "full self drive" if they are legally allowed
    to be used on all or very nearly all public roads with nobody inside who
    is capable of driving (i.e. no licence, or asleep or drunk, etc). And
    that they actually safely get you to the destination with high
    reliability, of course :-)

    If all cars were autonomous, we could have that now ...

    Well, no, because other cars aren't the only thing that cars
    have to worry about.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Fri Feb 7 10:44:11 2025
    On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    On 05/02/2025 16:31, Mark Goodge wrote:
    On Wed, 5 Feb 2025 12:36:57 +0000, Simon Parker <simonparkerulm@gmail.com> >> wrote:

    Having worked extensively with the local council on a related matter,
    [^1], I can assure you that TROs are created individually by each LA.
    They do not even create their own searchable repository of TROs, never
    mind anything even approaching a database of all TROs nationwide which
    third parties can access in real time.

    Actually, my local highway authority does provide a searchable repository of >> recent TROs, provided you know how to use Google's advanced search tools:

    https://www.google.co.uk/search?q=%22speed+limit%22+inurl%3Ahttps%3A%2F%2Fwww.worcestershire.gov.uk%2Fcouncil-services%2Fplanning-and-developments%2Fplanning-applications%2Fpublic-notices&sca_esv=df054736d75a2025&sxsrf=AHTn8zo5eX_
    VoDyUDF9lBIH4odn0sWyKEw%3A1738772754354&source=hp&ei=EpGjZ6qXEO6khbIP3IjHiAE&iflsig=ACkRmUkAAAAAZ6OfIl-BqseInoxRKkenvn3r1LuXIAC2&ved=0ahUKEwiq6POi-ayLAxVuUkEAHVzEEREQ4dUDCBk&uact=5&oq=%22speed+limit%22+inurl%3Ahttps%3A%2F%2Fwww.worcestershire.gov.uk%
    2Fcouncil-services%2Fplanning-and-developments%2Fplanning-applications%2Fpublic-notices

    Point of Order: I do not consider a Google Advanced Search, (using
    "inurl" to restrict the results to searching a single web-site), to meet
    the brief of a LA creating their own searchable repository of TROs.
    YMMV. :-)

    From a programming perspective, it's actually easier. It is (genuinely!) relatively trivial to screenscrape a known URL and pull out all the
    documents linked from it that match a certain pattern or contain certain keywords (such as "speed limit"). The difficulty is not automating that,
    it's knowing all the URLs on every highway authority website and keeping up
    to date with every time they change.

    If I understand correctly, you're merely publishing each individual TRO
    and Google is creating a SERP based on a query against those TROs. Is
    every TRO ever produced available at that site or only more recent ones?

    Only recent ones. I can get at older ones via the council Intranet, but
    they're not published on the public-facing website.

    Bear in mind that for NormTech's system to work, he needs to be able to >submit a GPS location to the system which then retrieves all TROs that
    apply to that location, sorts them into chronological order and finally >analyses those TROs to determine the speed limit currently in effect at
    that point.

    Yes, and that's the programming challenge. Which would be determinedly non-trivial.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to J Newman on Fri Feb 7 11:36:18 2025
    On 07/02/2025 06:37, J Newman wrote:
    On 03/02/2025 16:59, Mike Scott wrote:
    On 03/02/2025 10:54, billy bookcase wrote:
    With the sheer volume of inputs into an autopilot (location, speed,
    multiple
    cameras,  bugs on the windscreen, lidar, clouds, weather, lighting,
    pedestrians,
    cyclists, other vehicles) do you think you could*ever* duplicate the
    inputs?
    It's purely hypothetical.

    If two sets of inputs were identical then the outputs would be identical >>> as all the data is numerical and subject to exactly the same set of
    calculations.

    Sorry, that doesn't work in the real world.

    The whole point about "butterfly wings" is that many (all??) non-
    linear calculations are thoroughly chaotic in their results. If you
    can't specify inputs precisely, outputs can vary unpredictably. And
    any "numerical" input from the real world cannot be *precisely*
    specified. Output therefore can depend critically on an imprecise input.

    (It's not that the butterfly /causes/ the hurricane. It's just that
    you cannot put the full initial conditions into the highly non-linear
    weather calculations. Include the butterfly, predict a hurricane;
    exclude it, predict a sunny day. But you don't even know it's there....)

    I reckon a driver AI is decidedly non-linear system.


    I don't think some people here give AI enough credit. Ask OpenAI for
    example exactly the same questions, and you will get slightly different,
    but usually essentially the same, answers.

    It's human-like in this regard. If I ask a human two identical
    questions, I would get slightly different answers in terms of prose (but hopefully identical in terms of substance).

    The reply is seeded with a (usually different) pseudo-random number each
    time.

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Max Demian@21:1/5 to Mark Goodge on Fri Feb 7 12:03:21 2025
    On 07/02/2025 09:58, Mark Goodge wrote:
    On Thu, 6 Feb 2025 12:43:02 +0000, Max Demian <max_demian@bigfoot.com>
    wrote:
    On 05/02/2025 18:05, Mark Goodge wrote:

    Of course it matters what the object is. Some objects will do more damage to
    the car if the car strikes them, and some objects will suffer more injury of
    the car strikes them. There's a huge difference between, say, a plastic bag,
    a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course,
    you'd avoid all of them. But you wouldn't necessarily take abrupt evasive >>> action to avoid a plastic bag. On the other hand, you'd hit anything -
    including a bollard - if necessary to avoid hitting a child.

    I'm reminded of those creepy bollards that look like children. I suppose
    the idea is that the driver will avoid hitting them because he likes
    children; more likely he will avoid hitting them because they are made
    of metal.

    The point is more that humans are, subconsciously, more likely to notice children than bollards. It's something that's hardwired into the human psyche. So people are less likely to hit bollards that look like children, not because they like children, but because they are more likely to notice bollards that look like children. And, having noticed them, they are then much less likely to hit them.

    Is there any research on this? The bollards don't look like actual
    children. I suppose their creepy unfamiliarity might make them more
    noticeable until people get used to them.

    --
    Max Demian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 12:31:42 2025
    On Wed, 5 Feb 2025 20:18:09 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 16:47, Mark Goodge wrote:

    After all, you keep confidently asserting that we've already got fully
    autonomous vehicles up and running, so you must have seen this mentioned
    somewhere on the web, surely?

    Before I do any more of that, you need to clarify what exactly you mean
    by 'fully' autonomous, which is an expression I've not used at all so
    haven't confidently asserted actually.

    Level 5, of course. Otherwise I wouldn't say "fully".

    Is Level 4 'fully' autonomous?

    No, of course not. It's explicitly defined as not being fully autonomous.
    Here, for example, is a good summary of the different levels:

    https://www.synopsys.com/blogs/chip-design/autonomous-driving-levels.html

    As described there, the different levels are:

    Level 0 (No Driving Automation)
    Level 1 (Driver Assistance)
    Level 2 (Partial Driving Automation)
    Level 3 (Conditional Driving Automation)
    Level 4 (High Driving Automation)
    Level 5 (Full Driving Automation)

    Fully autonomous means being able to go anywhere a human driver can go, but with no requirement for a human driver even as backup. It doesn't need geofencing, it doesn't need a "user in charge", it doesn't need data that is not currently available to a human driver.

    Waymo is trialling level 4 technology in its automated taxis, although in
    some cases they still fall back to level 3. The Edinburgh bus you mentioned earlier is an example of level 3. There are no real world, proven level 5 systems in existence yet, although lots of manufacturers are working hard to develop them.

    It's also worth noting that even the level 4 systems which currently exist
    only work under what, compared to the human experience, are very limited conditions. Waymo, for example, currently operates in just three US cities, with three more to come in the foreseeable future. And that's the problem
    with level 4, and why it's not a long-term solution. Level 4 requires
    detailed local setup snd sdministration, which simply doesn't scale. You
    can't take a Waymo car from San Fransisco and put it in New York, let alone London, Paris or Brisbane. You can't even take it from San Francisco and put
    it in Sausalito (just the other side of the Golden Gate Bridge). Waymo
    doesn't operate anywhere that's even remotely rural.

    I, on the other hand, as a human driver, have driven safely and (as far as
    I'm aware) legally in both urban and rural settings in the UK, France,
    Germany, Switzerland, the USA, Ireland, Belgium, the Netherlands, Denmark
    and Luxembourg. I'm reasonably confident I could drive anywhere that has
    road signs I can read (I might struggle a bit in countries which don't use
    the latin alphabet). And I'm not even a professional driver.

    Until autonomous vehicles reach that level of versatility there's no way
    they can be called "fully" autonomous. Level 4 can be useful in the interim, particularly for public transport purposes - automated buses that have a DLR style "passenger services agent" on board who can take over the driving if necessary but otherwise is free to concentrate on passenger-facing duties
    would be a big improvement - but it's never going to be a replacement for personal travel. I'm not even convinced that Waymo is ever going to be
    better, as long as it's still at level 4, than Uber or a black cab[1]. In
    all but a few specific circumstances, level 4 is only ever going to be a staging point, a testbed for technology that will eventually evolve into
    level 5. And until we get to level 5, it's not fully autonomous.

    [1] Another real life example of a situation which automation might struggle with. I was in a taxi which was taking me home after work (paid for by my employer because it was the evening shift and I'd worked past the time of
    the last train). As we approached my flat, the road was closed by overnight roadworks. The taxi driver could have still got to my flat by going the long way round, but, instead, I turned to him and said "It's OK mate, you can
    drop me on the corner here, my place is just the other side of the park and it's only a hundred yards walk". Being able to give that kind of verbal instruction to an automated vehicle requires good voice recognition[2],
    better than we've got at the moment.

    [2] I'd be interested in knowing how Waymo handles that - how does a
    passenger instruct it to cease the journey short of what it had originally
    been given as the destination?

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 13:00:53 2025
    On Fri, 7 Feb 2025 10:15:23 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 07/02/2025 09:20, Mark Goodge wrote:

    And, as I've pointed out, I am
    a programmer in my day job. Given your repeated insistance that you
    understand all the principles as well as, or even better than, people who
    actually work in this field it ought to be a trivial exercise for you to at >> least describe the basic architecture of the necessary system. You don't
    even need to write any actual code samples for that. Just a high-level
    overview will do.

    That's what I gave you. Officer calls HQ, HQ alerts live mapping
    providers, they put blocks on maps, AVs re-route to avoid blocks.

    Ah, so your solution is all done manually. The officer makes a voice call,
    and people at HQ who take the call then manually inform the mapping
    providers, who then manually update their data.

    That does have the advantage of being robust, and less reliant on complex programming. But it's quite labour-intensive, and unlikely to be reliable.
    One error by anyone in the chain will mean that sat-navs will get incorrect information. And if you think those errors won't happen, then you've clearly never looked at any government database[1].

    [1] One of my earliest achievements as a local councillor was correcting an error in the NSG that had been there since the 1990s. In my day job, I'm currently working on a project that involves school data, and one of the
    things I've come up against is that there is no validation or verification
    of information provided by individual schools to the system. Even where it's obviously, glaringly incorrect.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 13:03:55 2025
    On Fri, 7 Feb 2025 10:23:02 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 07/02/2025 09:31, Mark Goodge wrote:

    Norman, I have just posted links to documents by researchers and autonomous >> vehicle developers which clearly state that they are using AI.

    Well, they would, wouldn't they? They want to sound cutting edge.

    Throughout this entire thread, you've been confidently telling us that the *are* cutting edge!

    What they're actually using is what Mr Parker has identified as 'Narrow
    AI' but that doesn't sound as good. However, it does mean their cars
    don't have to carry round with them their own huge data-processing
    centres, electrical sub-stations and massive water-cooling facilities,
    which broader or proper AI requires.

    Ah, you've learned a new phrase, and are now eager to try it out. Although
    you still don't really seem to have learned what it means.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Fri Feb 7 13:08:18 2025
    On Fri, 7 Feb 2025 12:03:21 +0000, Max Demian <max_demian@bigfoot.com>
    wrote:

    On 07/02/2025 09:58, Mark Goodge wrote:
    On Thu, 6 Feb 2025 12:43:02 +0000, Max Demian <max_demian@bigfoot.com>
    wrote:
    On 05/02/2025 18:05, Mark Goodge wrote:

    Of course it matters what the object is. Some objects will do more damage to
    the car if the car strikes them, and some objects will suffer more injury of
    the car strikes them. There's a huge difference between, say, a plastic bag,
    a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course,
    you'd avoid all of them. But you wouldn't necessarily take abrupt evasive >>>> action to avoid a plastic bag. On the other hand, you'd hit anything - >>>> including a bollard - if necessary to avoid hitting a child.

    I'm reminded of those creepy bollards that look like children. I suppose >>> the idea is that the driver will avoid hitting them because he likes
    children; more likely he will avoid hitting them because they are made
    of metal.

    The point is more that humans are, subconsciously, more likely to notice
    children than bollards. It's something that's hardwired into the human
    psyche. So people are less likely to hit bollards that look like children, >> not because they like children, but because they are more likely to notice >> bollards that look like children. And, having noticed them, they are then
    much less likely to hit them.

    Is there any research on this? The bollards don't look like actual
    children. I suppose their creepy unfamiliarity might make them more >noticeable until people get used to them.

    There is research, but I don't have it to hand - I've seen it in a
    presentation by highway engineers; I may have the slides somewhere if I can
    dig them out. But, AIUI, they don't need to look particularly realistic,
    merely close enough to actual children to trigger the subconscious pattern recognition that we all have. Once you've seen them, and realised they're
    not real children, the design has already done its job.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Mark Goodge on Fri Feb 7 13:29:30 2025
    "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote in message news:h8obqj9shmlgjk6ut7ub7talu3v2erh09d@4ax.com...
    Because of China's known propensity to engage in data extraction and censorship. What if a Chinese-made automated taxi was carrying a
    couple of politicians having a conversation about Taiwan?


    Indeed.

    I'll bet it would have the Chinese quaking in their boots

    Especially if we decided to send our other ship, as well.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to billy bookcase on Fri Feb 7 13:54:02 2025
    On 2025-02-07, billy bookcase <billy@anon.com> wrote:
    "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote in message news:h8obqj9shmlgjk6ut7ub7talu3v2erh09d@4ax.com...
    Because of China's known propensity to engage in data extraction and
    censorship. What if a Chinese-made automated taxi was carrying a
    couple of politicians having a conversation about Taiwan?

    Indeed.

    I'll bet it would have the Chinese quaking in their boots

    Especially if we decided to send our other ship, as well.

    He didn't say they were British politicians.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Norman Wells on Fri Feb 7 14:55:46 2025
    On 07/02/2025 09:48, Norman Wells wrote:
    On 06/02/2025 21:51, Owen Rees wrote:
    On Thu, 6 Feb 2025 19:21:12 +0000, Norman Wells <hex@unseen.ac.am> wrote
    in <m0kgd9FseqvU2@mid.individual.net>:

    Some things are absolutely implicit, and shouldn't need to be spelt out
    to everyone regardless of mental capacity.

    Anyone with that attitude should never be allowed anywhere near the
    specification, design, implementation or operation of any system that
    requires high assurance.

    How does that follow?

    The law of unintended consequences means that anything however trivial
    that is missing from a requirements specification that could lead to
    undefined behaviour will at some later stage bite you in the backside.

    Classic examples being:

    High end fighter jets that would allow you to retract the under carriage
    when stationary on the ground. *VERY* expensive mistake...

    Several also rolled over to fly upside down when crossing the equator.

    Gunnery Coriolis force correction tables that are applied in exactly the
    wrong direction in the Southern Hemisphere - Boer War *and* Falklands War! Twice as inaccurate as doing nothing at all.
    (the latter software failing was *very* quickly corrected)

    Several banks that got 2000 *not* being a leap year wrong.

    Patriot Missile batteries that overflowed their system clock time if
    left running continuously for too long so stopped working at all.

    Calibration failure of medical radiation sources killing patients.

    The list of such serious software MFUs is endless.

    comp.risks is full of other examples where something that should be
    "blindingly obvious" took out some very sophisticated piece of kit.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jon Ribbens@21:1/5 to Martin Brown on Fri Feb 7 17:15:18 2025
    On 2025-02-07, Martin Brown <'''newspam'''@nonad.co.uk> wrote:
    Several banks that got 2000 *not* being a leap year wrong.

    Except it was a leap year. But we don't need to worry about that
    rule again until 2400, so I'm sure we don't need to include it in
    any programs we write today.

    (I wrote at least one program in the 1990s which needed to know
    if it was a leap year, and was written in assembler on a processor
    with no 'division' instruction, so implementing the full rules
    would have been quite hard, whereas implementing a rule which works
    fine from 1901-2099 inclusive is one instruction: TST R0, #3.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Walker@21:1/5 to Martin Brown on Fri Feb 7 16:25:04 2025
    On 07/02/2025 14:55, Martin Brown wrote:
    [...]
    The law of unintended consequences means that anything however
    trivial that is missing from a requirements specification that could
    lead to undefined behaviour will at some later stage bite you in the backside.
    Classic examples being:
    [...]
    Gunnery Coriolis force correction tables that are applied in exactly> the wrong direction in the Southern Hemisphere - Boer War *and*
    Falklands War!
    Twice as inaccurate as doing nothing at all.
    (the latter software failing was *very* quickly corrected)

    This example seems remarkably unlikely. Neither the Boer War(s)
    nor the Falklands War were notable for battleships lobbing shells at each
    other over a range of tens of miles, and for anything less the Coriolis
    force has negligible effect [a few metres]. There have been persistent
    myths that the WW1 Battle of the Falklands was thusly affected, and it's superficially plausible as the Dreadnoughts that took part had only just arrived from the UK and could well have been ill-prepared. But this seems
    to be debunked by

    http://www.dreadnoughtproject.org/tfs/index.php/Battle_of_the_Falkland_Islands

    [near the bottom], which describes it as "an annoying urban legend". The [apparently authoritative] claim there is that no contemporary RN gunnery
    took the Coriolis effect into account at all, and in any case the effect
    was irrelevant under battle conditions.

    It's a pity, as I've used the example myself in mechanics lectures.
    If I ever unretire, I'll have to rewrite my lecture notes.

    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Pridham

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Andy Burns on Fri Feb 7 19:11:02 2025
    Andy Burns wrote:

    Jethro_uk wrote:

    There was a sneeze of time when it would display speed limits
    without needing a route programmed (same as Waze "driving mode").
    Then it stopped.

    GM also called it "driving mode", I'm fairly sure that when using GM in
    the car it still displays the current speed limit, I'll try to remember
    to check when I next venture out ...
    You are correct, it only displays the speed limit when it's on route
    guidance, I can't think why they'd decide that's best?

    In other news, if your car only has wired android Auto, this device JustWorks™

    <https://amazon.co.uk/dp/B0DBVMG9Y6>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to jon+usenet@unequivocal.eu on Fri Feb 7 11:34:39 2025
    On Wed, 5 Feb 2025 17:25:24 -0000 (UTC), Jon Ribbens <jon+usenet@unequivocal.eu> wrote:

    On 2025-02-05, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
    On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>Of course they haven't all been solved, but the evidence is that AVs are >>>already safer than manually driven vehicles which is a very significant >>>marker point in their development. The quest now is to approach >>>perfection with, of course, the law of diminishing returns applying.

    They're safer than human drivers under certain, carefully controlled and
    restricted, conditions. But they still make mistakes that human drivers are >> very unlikely to make, even if those mistakes are not directly dangerous.

    The "fail 'safe' by stopping immobile in the middle of a lane of
    fast-moving traffic" shown in that Waymo video earlier seemed pretty >"directly dangerous" to me.

    Indeed. Those are conditions that they haven't yet, controlled for.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Fri Feb 7 11:33:42 2025
    On Thu, 6 Feb 2025 15:27:49 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    Bezos specifically chose the name "Alexa" after the ancient library of >Alexandria. The developers liked it because of the hard "X" consonant
    which helped them increase the precision of recognition in the early >versions.

    Alexa Internet (a now defunct web data company) was named after the Library
    of Alexandria before Amazon bought it and later used the name for its voice assistant. So it's a bit of a stretch for Bezos to say he chose the name because of the library; at the time that Amazon was developing their voice assistant they already owned Alexa Internet. The origin of the name may have been a factor in Amazon using it for that purpose, as was the hard X sound,
    but another commonly cited reason is that their purchase of Alexa internet meant that they already owned the trademark.

    The wife of a good friend is named "Alexandra", (or "Alex" for short).

    For fairly obvious reasons, they cannot and will not have Amazon smart >devices in their home.

    Similarly, Siri is named after Sigrid, a Norwegian co-worker of Dag
    Kittlaus, founder of Siri, prior to its acquisition by Apple.

    Wikipedia, with cited references, says it's a backformation from SRI, aka Stanford Research Institute, which did the initial development work. Other sources say that the similarity to Norwegian names is an urban legend.
    Although it's entirely possible that Kittlaus saw the similarity between SRI and Norwegian names, and hence formalised the name Siri for that reason.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 11:07:53 2025
    On Wed, 5 Feb 2025 16:54:50 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    Anyway, I believe the names Siri and Alexa were carefully chosen because
    of their uniqueness and low likelihood of confusion with ordinary words
    used in ordinary speech.

    Alexa existed as a service before it became the name of Amazon's smart
    speaker. In fact, it existed as a service before Amazon bought it. One of
    the reasons why Amazon used Alexa as the name was because, having purchased Alexa Internet Inc, they already owned the trademark. Alexa Internet Inc
    chose the name in homage to the Library of Alexandria. Siri is a
    backformation from the abbreviation of Stanford Research Institute (SRI)
    which originally developed the speech recongition system later
    commercialised by Nuance Communications and purchased by Apple.

    As for a low likelihood of confusion with other words, Alexa is a genuine, though relatively uncommon, given name. And Siri's similarity to the name of the country Syria was the cause of one particularly amusing intervention in
    the House of Commons:

    https://www.bbc.co.uk/news/av/uk-politics-44701007

    Alexa does at least have the advantage that, apart from the name, it's dissimilar to most other English words. Siri, on the other hand, is similar
    to the first two syllables of several words, inclyding Syria, serious and serial.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to billy bookcase on Fri Feb 7 11:14:36 2025
    On Thu, 6 Feb 2025 10:15:33 -0000, "billy bookcase" <billy@onon.com> wrote:


    On Wed, 05 Feb 2025 16:54:50 +0000, Norman Wells wrote:

    On 05/02/2025 16:43, Mark Goodge wrote:
    [quoted text muted]

    I think it would probably just refuse to open the pod bay doors.

    Anyway, I believe the names Siri and Alexa were carefully chosen because >>> of their uniqueness and low likelihood of confusion with ordinary words
    used in ordinary speech.

    More lilely they were chosed as "girly" names which would sit quite
    happily in the living room alongside the air freshener.

    Alexa was used by Amazon because they already owned the trademark from their purchase of Alexa Internet Inc. Siri was called Siri by its developers at Stanford Research Institute (SRI) as a backformation from the abbreviation - originally the SRI voice recognition system, it became Siri voice and then
    just Siri - before it was commercialised by Nuance Communications who sold
    it to Apple. Apple didn't name it, they just bought it.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Fri Feb 7 11:28:49 2025
    On 07/02/2025 10:34, Mark Goodge wrote:
    On Wed, 5 Feb 2025 20:06:11 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 05/02/2025 17:32, Mark Goodge wrote:
    On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 05/02/2025 16:21, Mark Goodge wrote:

    Again, that is my point. To be fully autonomous, a vehicle will need to be
    able to derive the necessary data from its surroundings, context and memory
    without relying solely on information transmitted from a central system. >>>>> That is, it will need to be able to behave like a human driver. Which, as >>>>> yet, they can't do. They'll get there eventually. But getting there isn't a
    trivial problem to solve.

    They are much closer already than you think.

    I have no overwhelming reason to disagree with the industry consensus (as >>> cited earlier by Simon Parker) that we're probably around ten years away >>> from the first fully autonomous (level 5) vehicles on the roads in real life
    use (as opposed to mere testing).

    We don't actually need Level 5. Level 4 will do. And they're being
    tested and proved right now.

    So you don't actually believe that fully autonomous vehicles are closer than I think?

    Level 5 is as far away as Level 5 is. And if that's what you regard as
    'fully autonomous' that's where they are.

    But Level 5 is not the criterion for being allowed on UK roads under the Automated Vehicles Act, which is what probably matters more to most.
    >>> Obviously, programming an autonomous vehicle is more of a
    challenge than
    programming a chatbot. But the underlying trade-offs of computing power v >>> cost are essentially the same. So it would not surprise me at all if the >>> first genuine level 5 vehicles come from China. Which would raise a whole >>> lot of other ethical and legal issues.

    Would it? Why?

    Because of China's known propensity to engage in data extraction and censorship. What if a Chinese-made automated taxi was carrying a couple of politicians having a conversation about Taiwan?

    You mean, apart from dropping them off safely where they wanted it to go?

    Tell us your fears. Will the doors be locked and the occupants gassed
    do you think?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Fri Feb 7 11:21:38 2025
    On 07/02/2025 10:44, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    Bear in mind that for NormTech's system to work, he needs to be able to
    submit a GPS location to the system which then retrieves all TROs that
    apply to that location, sorts them into chronological order and finally
    analyses those TROs to determine the speed limit currently in effect at
    that point.

    Yes, and that's the programming challenge. Which would be determinedly non-trivial.

    How is it then that my little Garmin seems to know the speed limits
    already? Surely all the work you say is a determinedly non-trivial
    programming challenge has already been done?

    There's no need to mine the vaults of dinosaur Councils.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Fri Feb 7 20:38:34 2025
    On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 07/02/2025 10:44, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> >> wrote:

    Bear in mind that for NormTech's system to work, he needs to be able to
    submit a GPS location to the system which then retrieves all TROs that
    apply to that location, sorts them into chronological order and finally
    analyses those TROs to determine the speed limit currently in effect at
    that point.

    Yes, and that's the programming challenge. Which would be determinedly
    non-trivial.

    How is it then that my little Garmin seems to know the speed limits
    already? Surely all the work you say is a determinedly non-trivial >programming challenge has already been done?

    It doesn't know all of them, everywhere, all the time. All you are doing by asserting that it does is to demonstrate that you don't drive enough in
    enough different places to discover the edge cases.

    If you disagree, then sign up for the various Garmin forums on the web and
    tell all the people reporting that their Garmin gets the speed limit wrong
    that they must be mistaken. I'm sure they'll be thrilled to get the benefit
    of your expertise.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to Mark Goodge on Fri Feb 7 23:39:26 2025
    Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:


    This is actually an interesting point as far as autonomous vehicles are concerned. It's well known in highway design that "sign blindness" can be an issue; if you have too many signs at one location, even if individually they each have something important to say, then drivers are less likely to take account of them. Information overload reduces compliance and reduces safety. But robot cars won't have that issue. On the contrary, as far as AI is concerned, you can never have too much data.

    I was reading about Margaret Hamilton recently. The part I found most interesting was the description of how the Apollo 11 LEM software had been written to handle unexpected events. There was an error in the checklist
    the astronauts were following that caused them to switch on a system that
    sent too much data, navigation data not relevant during the approach to landing. The particular event had not been anticipated but the possibility
    of an overload had. The essential functions were still able to operate and
    the landing went ahead.

    Modern systems can handle a lot more data but modern sensors can also
    deliver a lot more data. In a system of the kind I heard described as “hard real time“ back in the day I would expect information overload to still be
    an issue.

    I have never worked on a system where the deadline for getting the
    calculations done is literally a “you are dead if you miss it” line but I have worked with people who did and the stories are fascinating and
    disturbing.

    One was of a missile - sea-wolf IIRC - launching and destroying a shell
    fired from the ship’s own gun. The trajectory of the shell intersected the ship and the control system had not been programmed to distinguish between incoming and outgoing.

    Another was fly by wire which worked perfectly in the test rig. Fortunately
    the test pilot was able to work around the reversal of up and down which
    was a bug in both the flight software and the test rig software.

    I hope that the people doing the real work on autonomous vehicles already
    know the history and have heard similar stories. One concern is that they
    may have people without that background breathing down their necks asking
    for cost cuts and demanding early launches.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Andy Burns on Sat Feb 8 00:31:09 2025
    On 7 Feb 2025 at 19:11:02 GMT, "Andy Burns" <usenet@andyburns.uk> wrote:

    Andy Burns wrote:

    Jethro_uk wrote:

    There was a sneeze of time when it would display speed limits
    without needing a route programmed (same as Waze "driving mode").
    Then it stopped.

    GM also called it "driving mode", I'm fairly sure that when using GM in
    the car it still displays the current speed limit, I'll try to remember
    to check when I next venture out ...
    You are correct, it only displays the speed limit when it's on route guidance, I can't think why they'd decide that's best?

    In other news, if your car only has wired android Auto, this device JustWorks™

    <https://amazon.co.uk/dp/B0DBVMG9Y6>

    Ordered! But I shan't sue you if it doesn't work for me! I'm a pessimist, and
    I need a push to buy a new phone.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Les. Hayward@21:1/5 to Owen Rees on Sat Feb 8 10:12:53 2025
    On 07/02/2025 23:39, Owen Rees wrote:
    Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:


    Modern systems can handle a lot more data but modern sensors can also
    deliver a lot more data. In a system of the kind I heard described as “hard real time“ back in the day I would expect information overload to still be an issue.

    Of course any system is only as good as the data delivered by the
    sensors. When these pack up it can lead to interesting results as I
    discovered to my cost when a sensor in a car I once had would falsely
    decide that the engine was over-revving and sometimes halt the car in an embarrassing manner.

    The fault was only cleared by a process of guesswork & elimination,
    since it was intermittent and routine testing reported nothing wrong.

    One hopes that a similar fault does not occur on one of those wonderful self-drive horrors.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Owen Rees on Sat Feb 8 10:20:15 2025
    On Fri, 07 Feb 2025 23:39:26 +0000, Owen Rees wrote:

    Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:


    [quoted text muted]

    I was reading about Margaret Hamilton recently. The part I found most interesting was the description of how the Apollo 11 LEM software had
    been written to handle unexpected events. There was an error in the
    checklist the astronauts were following that caused them to switch on a system that sent too much data, navigation data not relevant during the approach to landing. The particular event had not been anticipated but
    the possibility of an overload had. The essential functions were still
    able to operate and the landing went ahead.

    A unknown known as it were. something which makes perfect sense to my engineering mind. Who cares what causes <x> to happen ? Much more
    important is what to do if <x> happens.

    This is how I smuggled "alien invasion" into a BCP/DR plan for a former employer. Whilst highly unlikely, it would definitely make travel to the
    office impossible, so the plan was for that. Amazingly it's be activated
    twice. Once when there was severe flooding in Hampshire, around the
    office. And once for Covid.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Sat Feb 8 08:38:13 2025
    On 07/02/2025 20:38, Mark Goodge wrote:
    On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 07/02/2025 10:44, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> >>> wrote:

    Bear in mind that for NormTech's system to work, he needs to be able to >>>> submit a GPS location to the system which then retrieves all TROs that >>>> apply to that location, sorts them into chronological order and finally >>>> analyses those TROs to determine the speed limit currently in effect at >>>> that point.

    Yes, and that's the programming challenge. Which would be determinedly
    non-trivial.

    How is it then that my little Garmin seems to know the speed limits
    already? Surely all the work you say is a determinedly non-trivial
    programming challenge has already been done?

    It doesn't know all of them, everywhere, all the time.

    I can only speak from my own experience, which is that is does seem to
    know the speed limit wherever I go, except of course for very temporary
    ones.

    I don't doubt that 'It doesn't know all of them, everywhere, all the
    time' but if it works, say, 95% of the time, that's hardly a glass half
    empty situation, is it?

    What percentage do you think it is, and why? Do you have any numbers at
    all to support your position, or even similar personal experience?

    All you are doing by
    asserting that it does is to demonstrate that you don't drive enough in enough different places to discover the edge cases.

    I think you mean not spots rather than edge cases. But, anyway, there
    were several people here who originally didn't believe that any satnavs
    told you any speed limits at all.

    If you disagree, then sign up for the various Garmin forums on the web and tell all the people reporting that their Garmin gets the speed limit wrong that they must be mistaken. I'm sure they'll be thrilled to get the benefit of your expertise.

    Are you going to give any links?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to jethro_uk@hotmailbin.com on Sat Feb 8 11:13:08 2025
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vo7b4v$1sc4r$75@dont-email.me...

    This is how I smuggled "alien invasion" into a BCP/DR plan for a former employer.

    1. Ask for volunteers for abduction.

    2. Select those employees considered most suitable for abduction
    so as to fulfil the quota.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Sat Feb 8 15:00:04 2025
    On Sat, 8 Feb 2025 08:38:13 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 07/02/2025 20:38, Mark Goodge wrote:
    On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>> On 07/02/2025 10:44, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> >>>> wrote:

    Bear in mind that for NormTech's system to work, he needs to be able to >>>>> submit a GPS location to the system which then retrieves all TROs that >>>>> apply to that location, sorts them into chronological order and finally >>>>> analyses those TROs to determine the speed limit currently in effect at >>>>> that point.

    Yes, and that's the programming challenge. Which would be determinedly >>>> non-trivial.

    How is it then that my little Garmin seems to know the speed limits
    already? Surely all the work you say is a determinedly non-trivial
    programming challenge has already been done?

    It doesn't know all of them, everywhere, all the time.

    I can only speak from my own experience, which is that is does seem to
    know the speed limit wherever I go, except of course for very temporary
    ones.

    So not all the time, then.

    I don't doubt that 'It doesn't know all of them, everywhere, all the
    time' but if it works, say, 95% of the time, that's hardly a glass half
    empty situation, is it?

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    The actual percentage isn't really relevant. The point is that as long as
    it's not 100%, then an autonomous vehicle has to be able to cope with situations where its sat-nav data gives the wrong speed limit. And that, of course, includes temporary limits.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to billy bookcase on Sat Feb 8 18:49:59 2025
    On 07/02/2025 09:11, billy bookcase wrote:
    "J Newman" <jenniferkatenewman@gmail.com> wrote in message news:vo49ms$3d0ms$1@dont-email.me...

    It's human-like in this regard. If I ask a human two identical questions, I would get
    slightly different answers in terms of prose (but hopefully identical in terms of
    substance).

    You're lucky then.

    Some humans can ask another human an identical question
    on two successive days and get a completely different
    answer.each time.

    And on the next day as well

    Stop spending time with economists.


    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to billy bookcase on Sat Feb 8 14:29:43 2025
    On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vo7b4v$1sc4r$75@dont-email.me...

    This is how I smuggled "alien invasion" into a BCP/DR plan for a former
    employer.

    1. Ask for volunteers for abduction.

    2. Select those employees considered most suitable for abduction so as
    to fulfil the quota.

    Sadly that wasn't in my remit :)

    I just had to scope out the IT systems ability to cope.

    There was quite a heated "discussion" where a senior executive really
    tried to discount the possibility that there was a realistic possibility
    of both the office and it's alternate location (which was in Gloucester)
    would be inaccessible at the same time.

    Come March 2020 ....

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Sat Feb 8 15:16:37 2025
    On 08/02/2025 15:00, Mark Goodge wrote:
    On Sat, 8 Feb 2025 08:38:13 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 07/02/2025 20:38, Mark Goodge wrote:
    On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>> On 07/02/2025 10:44, Mark Goodge wrote:
    On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com>
    wrote:

    Bear in mind that for NormTech's system to work, he needs to be able to >>>>>> submit a GPS location to the system which then retrieves all TROs that >>>>>> apply to that location, sorts them into chronological order and finally >>>>>> analyses those TROs to determine the speed limit currently in effect at >>>>>> that point.

    Yes, and that's the programming challenge. Which would be determinedly >>>>> non-trivial.

    How is it then that my little Garmin seems to know the speed limits
    already? Surely all the work you say is a determinedly non-trivial
    programming challenge has already been done?

    It doesn't know all of them, everywhere, all the time.

    I can only speak from my own experience, which is that is does seem to
    know the speed limit wherever I go, except of course for very temporary
    ones.

    So not all the time, then.

    I don't doubt that 'It doesn't know all of them, everywhere, all the
    time' but if it works, say, 95% of the time, that's hardly a glass half
    empty situation, is it?

    What would you expect an autonomous vehicle to do in locations that are part of the 5%?

    Default to some programmed safe level or take inputs from other sensors
    and data sources of course.

    The actual percentage isn't really relevant. The point is that as long as it's not 100%, then an autonomous vehicle has to be able to cope with situations where its sat-nav data gives the wrong speed limit. And that, of course, includes temporary limits.

    Might as well give up all attempts at progress then. These are clearly insuperable problems and deal-breakers, at least for you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JNugent@21:1/5 to Sam Plusnet on Sun Feb 9 14:39:35 2025
    On 08/02/2025 06:49 PM, Sam Plusnet wrote:

    billy bookcase wrote:
    "J Newman" <jenniferkatenewman@gmail.com> wrote:

    It's human-like in this regard. If I ask a human two identical
    questions, I would get slightly different answers in terms of
    prose (but hopefully identical in terms of substance).

    You're lucky then.
    Some humans can ask another human an identical question
    on two successive days and get a completely different
    answer.each time.
    And on the next day as well

    Stop spending time with economists.

    ...especially the ones who were only in Customer Services anyway?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to jethro_uk@hotmailbin.com on Sun Feb 9 10:42:01 2025
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vo7pom$1sc4r$77@dont-email.me...
    On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
    news:vo7b4v$1sc4r$75@dont-email.me...

    This is how I smuggled "alien invasion" into a BCP/DR plan for a former
    employer.

    1. Ask for volunteers for abduction.

    2. Select those employees considered most suitable for abduction so as
    to fulfil the quota.

    Sadly that wasn't in my remit :)

    A lost opportunity there I'm afraid. Did Alexander the Great say it wasn't in his
    remit ? Did Napoleon ?

    Other people's problems are your opportunity !

    There is nothing impresses a potential customer more than a supplier who has clearly done their homework

    And if we do know one thing about aliens, its that they're fond of abducting people.

    And so they were bound to be impressed with a ready made supply of pre-selected abductees : with some of them just rarin' to go

    Your firm could have been the go-to source for the aliens for all kinds of unique
    traditional British products. Pork Pies, Cornish Pasties, Worcester Sauce, Marmite which they'll never even have heard of. Along with maybe a bit of Harris Tweed thrown in.

    While they in return could have offered you access to some of their advanced technology. Having travelled thousands of light years to get here, its fairly certain
    for instance that they'll have cracked not only AV's but AI drones. But one step
    at a time will be sufficient to destroy any earthbound competition

    And with all that knowledge and software contained in a tiny "thumb" drive
    no bigger than a match head , Although obviously being so far advanced
    they will already be up to USB 526 B by now. So UK hardware will
    obviously need to be adapted

    Having exclusive access to this advanced technological knowledge would
    not only turn your firm into a world leader overnight - *and with you at
    its head* but would again put the West Midlands back on the map, where it
    truly belongs

    What more could anyone possibly ask ?.,


    I just had to scope out the IT systems ability to cope.

    There was quite a heated "discussion" where a senior executive really
    tried to discount the possibility that there was a realistic possibility
    of both the office and it's alternate location (which was in Gloucester) would be inaccessible at the same time.

    Come March 2020 ....

    As I said, a lost opportunity


    bb






    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to billy bookcase on Sun Feb 9 19:03:19 2025
    On 09/02/2025 10:42, billy bookcase wrote:
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vo7pom$1sc4r$77@dont-email.me...
    On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
    news:vo7b4v$1sc4r$75@dont-email.me...

    This is how I smuggled "alien invasion" into a BCP/DR plan for a former >>>> employer.

    1. Ask for volunteers for abduction.

    2. Select those employees considered most suitable for abduction so as
    to fulfil the quota.

    Sadly that wasn't in my remit :)

    A lost opportunity there I'm afraid. Did Alexander the Great say it wasn't in his
    remit ? Did Napoleon ?

    Those gentlemen could (and did) freely employ means of persuasion that
    tend to be frowned upon today.

    Having the absolute command of a large army does tend to impress people.

    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Sun Feb 9 21:42:32 2025
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part >> of the 5%?

    Default to some programmed safe level or take inputs from other sensors
    and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply.

    The actual percentage isn't really relevant. The point is that as long as
    it's not 100%, then an autonomous vehicle has to be able to cope with
    situations where its sat-nav data gives the wrong speed limit. And that, of >> course, includes temporary limits.

    Might as well give up all attempts at progress then. These are clearly >insuperable problems and deal-breakers, at least for you.

    No; in order to progress we just need to stop pretending that it's a trivial problem to solve. Maybe we need to rely on the expertise of those who are actually working on it, rather than the inexpert opinions of random people
    on the Internet.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Sun Feb 9 22:13:41 2025
    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors
    and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that an autonomous vehicle will always be able to trust its data supply.

    As of course it should. It's best if it can. What I said only applies
    to very few in remote areas where it may unfortunately be necessary.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    The actual percentage isn't really relevant. The point is that as long as >>> it's not 100%, then an autonomous vehicle has to be able to cope with
    situations where its sat-nav data gives the wrong speed limit. And that, of >>> course, includes temporary limits.

    Might as well give up all attempts at progress then. These are clearly
    insuperable problems and deal-breakers, at least for you.

    No; in order to progress we just need to stop pretending that it's a trivial problem to solve. Maybe we need to rely on the expertise of those who are actually working on it, rather than the inexpert opinions of random people
    on the Internet.

    It has largely been solved already. The rest will follow. In the
    meantime,we have a glass perhaps 95% full, which is no reason to be Eeyore.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Sam Plusnet on Sun Feb 9 21:27:11 2025
    "Sam Plusnet" <not@home.com> wrote in message news:YZ6qP.80988$YsRf.70788@fx18.iad...
    On 09/02/2025 10:42, billy bookcase wrote:
    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
    news:vo7pom$1sc4r$77@dont-email.me...
    On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:

    "Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
    news:vo7b4v$1sc4r$75@dont-email.me...

    This is how I smuggled "alien invasion" into a BCP/DR plan for a former >>>>> employer.

    1. Ask for volunteers for abduction.

    2. Select those employees considered most suitable for abduction so as >>>> to fulfil the quota.

    Sadly that wasn't in my remit :)

    A lost opportunity there I'm afraid. Did Alexander the Great say it wasn't in his
    remit ? Did Napoleon ?

    Those gentlemen could (and did) freely employ means of persuasion that tend to be
    frowned upon today.

    But they had "vision".

    They were able to "inspire".

    Napoleon was easily able to "raise" another Army on his return from Elba, never mind lead it at Waterloo. Even after the Russian debacle.

    While the two, and only two war crimes ever attributed to Napoleon were possibly
    solely the product of English propaganda,

    At least according to Napoleon lookalike, and biographer, Andrew Roberts.
    Among others.

    Having the absolute command of a large army does tend to impress people.

    Alexander the Great actually got as far as India,

    Now "that" degree of ambition, really still does impress people.

    As would Britain's new relationship with our "Alien Friends", which
    would really have Trump, the EU, and even the Chinese, come begging.



    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Mon Feb 10 16:28:24 2025
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>> On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors
    and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that >> an autonomous vehicle will always be able to trust its data supply.

    As of course it should. It's best if it can. What I said only applies
    to very few in remote areas where it may unfortunately be necessary.

    What should an autonomous vehicle do if it has a data supply (ie, it's not
    in a mobile notspot) but its data disagrees with signage on the ground?

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But,
    of course, you can't have level 5 autonomy until you can cope with 100% of
    the roads, even those which are outwith the 95% mobile data coverage.

    The actual percentage isn't really relevant. The point is that as long as >>>> it's not 100%, then an autonomous vehicle has to be able to cope with
    situations where its sat-nav data gives the wrong speed limit. And that, of
    course, includes temporary limits.

    Might as well give up all attempts at progress then. These are clearly
    insuperable problems and deal-breakers, at least for you.

    No; in order to progress we just need to stop pretending that it's a trivial >> problem to solve. Maybe we need to rely on the expertise of those who are
    actually working on it, rather than the inexpert opinions of random people >> on the Internet.

    It has largely been solved already. The rest will follow. In the >meantime,we have a glass perhaps 95% full, which is no reason to be Eeyore.

    I'm sure the rest will follow. I've never disputed that. All I'm pointing
    out is that solving it will take considerably more computing complexity than you have shown even the slightest ability to comprehend.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Mark Goodge on Mon Feb 10 18:07:14 2025
    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>> On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors >>>> and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply.

    As of course it should. It's best if it can. What I said only applies
    to very few in remote areas where it may unfortunately be necessary.

    What should an autonomous vehicle do if it has a data supply (ie, it's not
    in a mobile notspot) but its data disagrees with signage on the ground?

    Where it gets really tricky is when some sensor or other gives entirely misleading readings leading to the vehicle doing something off the wall.
    One of the real crazy acceleration glitches was traced to a single bit
    flip (probably alpha particle induced) - though most were user error.

    Aircraft have fallen out of the sky for similar sensor glitches on the
    angle of attack anti-stall system which is unforgivable.

    https://www.heraldnet.com/nation-world/not-just-the-737-angle-of-attack-sensors-have-had-problems/

    My previous car had a parking light sensor error. The parking light
    worked but the sensor to check if it was working had failed. The only
    viable fix would have been a very expensive replacement wiring harness.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But, of course, you can't have level 5 autonomy until you can cope with 100% of the roads, even those which are outwith the 95% mobile data coverage.

    It is a complex issue as to how good an automated driving system has to
    be before it is safer than the average driver on the roads. They can't
    be far off that stage now but the problem is that even the best ones at
    present can be fooled by some plain patterns or sky like colours on very
    big trucks. Not much fun for a car picking a fight with an HGV.

    You also get issues where manually driven aggressive drivers will
    exploit the automated system's avoid a collision rule to bully their way
    into a lane making it less safe for everyone else. I've had someone cut
    in front of me close enough to trigger an automated ABS partial stop.
    Luckily there was nothing close behind me but it rearranged the contents
    of the car quite comprehensively.

    The actual percentage isn't really relevant. The point is that as long as >>>>> it's not 100%, then an autonomous vehicle has to be able to cope with >>>>> situations where its sat-nav data gives the wrong speed limit. And that, of
    course, includes temporary limits.

    Might as well give up all attempts at progress then. These are clearly >>>> insuperable problems and deal-breakers, at least for you.

    No; in order to progress we just need to stop pretending that it's a trivial
    problem to solve. Maybe we need to rely on the expertise of those who are >>> actually working on it, rather than the inexpert opinions of random people >>> on the Internet.

    It has largely been solved already. The rest will follow. In the
    meantime,we have a glass perhaps 95% full, which is no reason to be Eeyore.

    I'm sure the rest will follow. I've never disputed that. All I'm pointing
    out is that solving it will take considerably more computing complexity than you have shown even the slightest ability to comprehend.

    Real time video image assessment is one of those things that seems easy
    but turns out to really really difficult. I never look anything like my passport biometric profile when I have just run the length of some huge
    airport to reach a Gate that is out of the back of beyond. YMMV

    Combined with Lidar or some other method AI probably will at some stage
    soon be good enough for all practical purposes. They do seem to still
    have some quite annoying glitches though where they are in use.

    Mission critical vehicle software has to meet a very high quality
    standard to be acceptable. BSODs have no place in automotive displays.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 00:03:34 2025
    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>> On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors >>>> and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply.

    As of course it should. It's best if it can. What I said only applies
    to very few in remote areas where it may unfortunately be necessary.

    What should an autonomous vehicle do if it has a data supply (ie, it's not
    in a mobile notspot) but its data disagrees with signage on the ground?

    It will presumably default to the least dangerous.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But, of course, you can't have level 5 autonomy until you can cope with 100% of the roads, even those which are outwith the 95% mobile data coverage.

    You can still have total autonomy in areas that are covered.

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people? If there are not spots, AVs can be
    told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    The actual percentage isn't really relevant. The point is that as long as >>>>> it's not 100%, then an autonomous vehicle has to be able to cope with >>>>> situations where its sat-nav data gives the wrong speed limit. And that, of
    course, includes temporary limits.

    Might as well give up all attempts at progress then. These are clearly >>>> insuperable problems and deal-breakers, at least for you.

    No; in order to progress we just need to stop pretending that it's a trivial
    problem to solve. Maybe we need to rely on the expertise of those who are >>> actually working on it, rather than the inexpert opinions of random people >>> on the Internet.

    It has largely been solved already. The rest will follow. In the
    meantime,we have a glass perhaps 95% full, which is no reason to be Eeyore.

    I'm sure the rest will follow. I've never disputed that. All I'm pointing
    out is that solving it will take considerably more computing complexity than you have shown even the slightest ability to comprehend.

    There is a law of diminishing returns. There is also a point at which
    you decide enough is enough.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Mon Feb 10 23:54:15 2025
    On 10/02/2025 19:21, Simon Parker wrote:
    On 06/02/2025 15:32, Norman Wells wrote:
    On 06/02/2025 14:51, Simon Parker wrote:
    On 05/02/2025 10:22, Norman Wells wrote:

    What's the problem?  It's only a matter of information being passed
    to whoever can amend the live mapping, and road closed notification
    being applied on the maps..

    The "problem" is that you are unable or unwilling to detail the
    process by which "the information" is "passed to whoever can amend
    the live mapping, and road closure notification being applied on the
    maps."

    For one thing, (as detailed in a parallel post), there are numerous
    systems at play, rather than a single all-encompassing system.

    Then they all should be notified, and those who wish to remain
    competitive will act.

    I've explained in a post elsewhere in the thread how Google, Apple and
    Garmin aggregate user data to determine delays in the proposed routing.

    Your task, should you choose to accept it, it to map onto the existing
    model precisely how "they should all be notified" and how they then
    action said "road closure notification being applied on the maps."

    Over to you, Norman...

    If you're not claiming it's an impossible ask, then it's possible.

    I have the vision of what is possible. It's for minions to implement it.

    Additionally, there isn't some entity within each of those systems
    with responsibility to "amend the live mapping".

    Then perhaps, if they want to remain relevant and commercially viable
    in the coming age of autonomous vehicles, they will see it as
    commercially important to develop such a facility if they don't have
    it already.  It is surely not beyond the wit of man despite all the
    pettyfogging issues you mention.

    How many notifications are there likely to be in a 24 hour period?  Is
    one person per receiving company on a shift basis sufficient or does it
    need multiple people?  If multiple people, how many?  What happens if HQ try to contact them and they're away from their desk?

    If people are going to be such a problem, they can be replaced with
    automated systems, which will make the whole process even cheaper than virtually free.

    How many people are required at "HQ" to receive the notifications from officers "in the field", as it were?  Similarly, what happens if HQ do
    not answer the call in a timely manner (for all values of "timely manner")?

    Are the same people in HQ that receive the notifications passing them on
    to the live mapping companies or does that need a separate team?  If so, same questions for this team.

    For all of the above, what happens in the event of sickness, holiday, bereavement, maternity absence, etc.?

    All these people problems. No wonder computers will replace them.

    I recommend that you spend time learning how Google, Apple and Garmin
    (to name but three) deal with congestion and routing around it and
    you will be well on the well to enlightenment.

    They act on information received.  Information about road blockages is
    just more information that should be easy to integrate.

    It *should* be easy to integrate, but you've got no idea how the current system works, no detailed specification for your proposed system and no
    idea how the two systems will relate to each other.

    But you can state with absolute certainty and confidence that it "should
    be easy to integrate".


    Until then, you see this as a simple solution to a simple problem
    because you do not understand the problem and therefore have no idea
    of the complexities involved and how your proposed solution maps onto
    these.

    No, it's because I don't get bogged down by trivia and 'the way it's
    always been done'.

    Blinding oneself, deliberately or otherwise, to the failings in one's arguments is how one loses one's proverbial shirt when the project
    commences.


    If it's not done, we're no worse off than we are now.  Huge delays
    whether of manually driven vehicles or autonomous ones.
    I invite you to watch the in-car videos of Waymo Driver in action
    when it goes wrong.  If you think that leaves us "no worse off then
    we are now" then I must disagree with you in the strongest possible
    terms.

    And these are Level 4 AVs with a baby-sitter back at the control
    centre and a human driver on standby no more than 5-6 miles away to
    intervene when things go wrong, which they do all too frequently.

    I was talking, as was very clear, about what happens if mapping
    concerns do not act on road closure information received, which has
    nothing to do with what you say.
    NormCars (TM) have been designed without AI.  Therefore they cannot read "Road Closed" signs or observe blockages in real time.  What are they to
    do in the time between the blockage occurring and receiving notification
    of it assuming there's a delay of no more than sixty seconds at each
    stage of the process?

    (Officer arrives on scene and erects roadblock (T), Officer notifies HQ
    of blockage (T+1 minute), HQ notifies mapping companies in alphabetic
    order of blockage so first is Apple (T+2 minutes), HQ notifies Garmin of blockage (T+3 minutes), HQ notifies Google of blockage (T+4 minutes)...

    Google updates blockage data onto system (T+5 minutes).

    That's still not bad in absolute terms, but with instantaneous and
    simultaneous communication already being possible and widely-used,
    there's no need for any of this nonsense. The whole process could be automated, and would be better if it were.

    It could even be possible for the officer on the spot to put a block on
    all the maps instantaneously by a simple key press.

    You just have to have the vision to see it.

    Assume NormCars (TM) have TomTom navigation technology.  How long after
    the roadblock being erected does the car receive notification of the
    blockage and what is it doing in the time between the roadblock being
    erected and being notified of it?

    According to you, it will be coming to an emergency stop in the fast
    lane of a motorway somewhere.

    But it won't have TomTom navigation technology if other competitive
    systems are better.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 09:19:07 2025
    On Mon, 10 Feb 2025 23:54:15 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 10/02/2025 19:21, Simon Parker wrote:

    How many notifications are there likely to be in a 24 hour period? Is
    one person per receiving company on a shift basis sufficient or does it
    need multiple people? If multiple people, how many? What happens if HQ
    try to contact them and they're away from their desk?

    If people are going to be such a problem, they can be replaced with
    automated systems, which will make the whole process even cheaper than >virtually free.

    When I asked you, in Message-ID <mrjbqjhn7vfpkmn7mvodnaenrslr6s23la@4ax.com> for your specification for a notification system, you responded, in
    Message-ID <m0m4prF61neU3@mid.individual.net>, with one that is essentially manual in nature. Given that you now seem to have agreed with Simon that an automated system would be better, maybe you could give us your specification for an automated system.

    Again, I'm not asking for code samples. Just the system architecture.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 09:22:10 2025
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 09/02/2025 21:42, Mark Goodge wrote:

    What should an autonomous vehicle do if it has a data supply (ie, it's not >> in a mobile notspot) but its data disagrees with signage on the ground?

    It will presumably default to the least dangerous.

    How will it do that? How will it know which is the least dangerous?

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people?

    Then you're telling us that you don't, actually, think that level 5 autonomy will ever be achieved. That's quite a turnaround from your previous argument that getting there is trivial.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 09:53:12 2025
    On 11/02/2025 09:19, Mark Goodge wrote:
    On Mon, 10 Feb 2025 23:54:15 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 10/02/2025 19:21, Simon Parker wrote:

    How many notifications are there likely to be in a 24 hour period?  Is
    one person per receiving company on a shift basis sufficient or does it
    need multiple people?  If multiple people, how many?  What happens if HQ >>> try to contact them and they're away from their desk?

    If people are going to be such a problem, they can be replaced with
    automated systems, which will make the whole process even cheaper than
    virtually free.

    When I asked you, in Message-ID <mrjbqjhn7vfpkmn7mvodnaenrslr6s23la@4ax.com> for your specification for a notification system, you responded, in Message-ID <m0m4prF61neU3@mid.individual.net>, with one that is essentially manual in nature. Given that you now seem to have agreed with Simon that an automated system would be better, maybe you could give us your specification for an automated system.

    I don't think he ever postulated an automated system at all. I did that
    in response to all the human errors, delays and bungling he imagined.

    Again, I'm not asking for code samples. Just the system architecture.

    Ideally, police officer at the scene presses big red button on computer
    marked 'Accident blockage'. That sends a message giving the location to
    the mapping companies and updates their maps instantaneously and
    automatically.

    Less ideally, it involves people on the way to the same end result.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 09:43:31 2025
    On 11/02/2025 09:22, Mark Goodge wrote:
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 09/02/2025 21:42, Mark Goodge wrote:

    What should an autonomous vehicle do if it has a data supply (ie, it's not >>> in a mobile notspot) but its data disagrees with signage on the ground?

    It will presumably default to the least dangerous.

    How will it do that? How will it know which is the least dangerous?

    Obviously by taking the lower of two speed limits if it has conflicting information on such.

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people?

    Then you're telling us that you don't, actually, think that level 5 autonomy will ever be achieved. That's quite a turnaround from your previous argument that getting there is trivial.

    Level 5, if I understand it correctly, means that the vehicles are fully automated and can drive anywhere and in any conditions without human input.

    Until that includes Chad and the moon, of course that can't be achieved.
    It is centuries away, and is a rather stupid and pointless definition
    if taken literally. But I don't see why you can't have fully automated vehicles that can drive anywhere within an area where there is
    sufficient data available to ensure its and others' safety. And such
    areas need not be worldwide or even nationwide.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 11:03:17 2025
    On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 09:19, Mark Goodge wrote:

    Again, I'm not asking for code samples. Just the system architecture.

    Ideally, police officer at the scene presses big red button on computer >marked 'Accident blockage'. That sends a message giving the location to
    the mapping companies and updates their maps instantaneously and >automatically.

    How does the big red button know where the blockage is? How is that data transmitted to the central server, and then to the mapping databases?

    That's what I mean by system architecture. Just saying "It sends a message"
    is like saying "Wave a magic wand". You must, surely, have some idea of how
    it would work at a technical level. Unless you can explain how that would
    work, your opinion is not particularly useful.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Norman Wells on Tue Feb 11 11:04:26 2025
    On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>> On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors >>>>> and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply.

    As of course it should. It's best if it can. What I said only applies
    to very few in remote areas where it may unfortunately be necessary.

    What should an autonomous vehicle do if it has a data supply (ie, it's not >> in a mobile notspot) but its data disagrees with signage on the ground?

    It will presumably default to the least dangerous.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But, >> of course, you can't have level 5 autonomy until you can cope with 100% of >> the roads, even those which are outwith the 95% mobile data coverage.

    You can still have total autonomy in areas that are covered.

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people? If there are not spots, AVs can be
    told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    What about a sudden failure of a base station?

    snip


    --
    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Roger Hayter on Tue Feb 11 11:28:42 2025
    On 11/02/2025 11:04, Roger Hayter wrote:
    On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>>> On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors >>>>>> and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply.

    As of course it should. It's best if it can. What I said only applies >>>> to very few in remote areas where it may unfortunately be necessary.

    What should an autonomous vehicle do if it has a data supply (ie, it's not >>> in a mobile notspot) but its data disagrees with signage on the ground?

    It will presumably default to the least dangerous.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But, >>> of course, you can't have level 5 autonomy until you can cope with 100% of >>> the roads, even those which are outwith the 95% mobile data coverage.

    You can still have total autonomy in areas that are covered.

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people? If there are not spots, AVs can be
    told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    What about a sudden failure of a base station?

    Or a tsunami, volcanic extinction event, nuclear attack, the sun
    exploding, or even condom failure?

    I don't know. But then again, I doubt if manual drivers do either.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 11:25:14 2025
    On 11/02/2025 11:03, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 09:19, Mark Goodge wrote:

    Again, I'm not asking for code samples. Just the system architecture.

    Ideally, police officer at the scene presses big red button on computer
    marked 'Accident blockage'. That sends a message giving the location to
    the mapping companies and updates their maps instantaneously and
    automatically.

    How does the big red button know where the blockage is?

    I believe as a twinkle in someone's eye, if not yet in reality, there is
    a thing provisionally called a global positioning system based, I think,
    on satellites or some such preposterous idea.

    How is that data
    transmitted to the central server, and then to the mapping databases?

    Over the ether by 'communicators' a bit like they have on Star Trek.
    Again, I know it sounds far-fetched, but a chap down the pub assured me
    it was theoretically possible, and he was very plausible.

    That's what I mean by system architecture. Just saying "It sends a message" is like saying "Wave a magic wand". You must, surely, have some idea of how it would work at a technical level. Unless you can explain how that would work, your opinion is not particularly useful.

    It certainly isn't impossible. That being the case, I have the vision,
    others just need to assemble the nuts and bolts.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 11:24:06 2025
    On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 09:22, Mark Goodge wrote:
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people?

    Then you're telling us that you don't, actually, think that level 5 autonomy >> will ever be achieved. That's quite a turnaround from your previous argument >> that getting there is trivial.

    Level 5, if I understand it correctly, means that the vehicles are fully >automated and can drive anywhere and in any conditions without human input.

    Yes, it does. Or, to be more precise, it means being able to go anywhere a human driver in an otherwise unmodified street legal vehicle would be able
    to go. It doesn't require the autonomous vehicle to be able to do things
    that a normal car driven by a human wouldn't. So it doesn't need to be able
    to climb Ben Nevis, or participate in a Formula 1 race. But it does need to
    be able to handle New York, London, Paris and Wyre Piddle. And Cig
    Peighinnean Nis.

    Until that includes Chad and the moon, of course that can't be achieved.

    It deosn't need to include the moon.

    It is centuries away, and is a rather stupid and pointless definition
    if taken literally.

    According to autonomous vehicle developers, we should be looking at seeing level 5 in testing, at least, in about a decade's time. Obviously, getting
    from testing to production is a different matter, and there may well be
    several issues which need to be ironed out along the way. But level 5 most definitely is the goal.

    But I don't see why you can't have fully automated
    vehicles that can drive anywhere within an area where there is
    sufficient data available to ensure its and others' safety. And such
    areas need not be worldwide or even nationwide.

    It's reasonable to accept that even level 5 will require national
    differences, and hence won't necessarily be available in all territories simultaneously. What makes a car street legal in the UK and EU is not the
    same as street legal in the US and, say, Chad. Unless we go down the route
    of standalone humanoid robots, as depicted in science fiction, which can get into any car and drive it just as a human driver can, then the vehicles themselves will need to be designed differently for different territories.
    And given that there are likely to be legislative differences around
    regulating the behaviour of autonomous vehicles, it's likely that a level 5
    car which is street legal in the UK may not be in the US, and vice versa. So the cars themselves may not be transferable between different regulatory regimes. But the technology itself will be. If you can build a car that's
    level 5 street legal in the UK, then you can - provided Chad's road traffic legislation permits it - build one which is street legal in Chad.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Roger Hayter@21:1/5 to Norman Wells on Tue Feb 11 12:32:05 2025
    On 11 Feb 2025 at 11:28:42 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 11/02/2025 11:04, Roger Hayter wrote:
    On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>
    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors >>>>>>> and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply. >>>>>
    As of course it should. It's best if it can. What I said only applies >>>>> to very few in remote areas where it may unfortunately be necessary.

    What should an autonomous vehicle do if it has a data supply (ie, it's not >>>> in a mobile notspot) but its data disagrees with signage on the ground? >>>
    It will presumably default to the least dangerous.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But,
    of course, you can't have level 5 autonomy until you can cope with 100% of >>>> the roads, even those which are outwith the 95% mobile data coverage.

    You can still have total autonomy in areas that are covered.

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people? If there are not spots, AVs can be
    told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    What about a sudden failure of a base station?

    Or a tsunami, volcanic extinction event, nuclear attack, the sun
    exploding, or even condom failure?

    I don't know. But then again, I doubt if manual drivers do either.

    From my POV, the major difference is that I lost network service at home last week for a day because of a base station failure, but none of the others happened to me, nor are likely to.

    --

    Roger Hayter

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 12:50:08 2025
    On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 11:03, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>
    On 11/02/2025 09:19, Mark Goodge wrote:

    Again, I'm not asking for code samples. Just the system architecture.

    Ideally, police officer at the scene presses big red button on computer
    marked 'Accident blockage'. That sends a message giving the location to >>> the mapping companies and updates their maps instantaneously and
    automatically.

    How does the big red button know where the blockage is?

    I believe as a twinkle in someone's eye, if not yet in reality, there is
    a thing provisionally called a global positioning system based, I think,
    on satellites or some such preposterous idea.

    So the police officer has to be physically present at the blockage when they press the button. What about if they need to report it from a different location?

    How is that data
    transmitted to the central server, and then to the mapping databases?

    Over the ether by 'communicators' a bit like they have on Star Trek.
    Again, I know it sounds far-fetched, but a chap down the pub assured me
    it was theoretically possible, and he was very plausible.

    What technology do those communicators use? Does the central database do something like make an automated phone call to all the mapping databases? Or something different? If so, what?

    That's what I mean by system architecture. Just saying "It sends a message" >> is like saying "Wave a magic wand". You must, surely, have some idea of how >> it would work at a technical level. Unless you can explain how that would
    work, your opinion is not particularly useful.

    It certainly isn't impossible. That being the case, I have the vision, >others just need to assemble the nuts and bolts.

    Vision isn't knowing that something can be done. Vision is knowing how it
    can be done.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Roger Hayter on Tue Feb 11 13:07:23 2025
    On 11 Feb 2025 11:04:26 GMT, Roger Hayter <roger@hayter.org> wrote:

    On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people? If there are not spots, AVs can be
    told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    What about a sudden failure of a base station?

    A level 5 autonomous vehicle would cope in much the same way that a human
    would under the same or similar circumstances (eg, loss of mobile signal, failure of a sat-nav unit). That is, infer a "best effort" route from
    context (eg, road signs and markings as well as the physical condition of
    the road), offline mapping, and, where possible and appropriate, alternative data sources (such as asking for directions).

    Of course, an autonomous vehicle could end up hopelessly lost under such circumstances. But then, so could a human. And an autonomous vehicle would,
    at least, not suffer from inability to read a map. Provided its offline map data was updated reasonably often it could probably make the majority of its journeys without actually needing any live mapping data, although that would
    be suboptimal as it would mean it wouldn't be able to avoid congestion and pre-emptively avoid closures by taking a different route from much earlier.

    Norman's problem, though, is that he doesn't think level 5 autonomy is
    possible at all, at least not for centuries, but he does think that level 4
    can be good enough for that not to matter in practice. I think he is wrong
    on both of those counts.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Norman Wells on Tue Feb 11 13:32:40 2025
    On 11/02/2025 11:25, Norman Wells wrote:

    It certainly isn't impossible.  That being the case, I have the vision, others just need to assemble the nuts and bolts.

    That is true of any difficult engineering problem if you put enough
    money, resources and expertise behind it.

    However, in this instance it would be much simpler to declare your
    claims of how easy it all is to be 'Norman True' and move on.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Roger Hayter on Tue Feb 11 13:19:35 2025
    On 11/02/2025 12:32, Roger Hayter wrote:
    On 11 Feb 2025 at 11:28:42 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    On 11/02/2025 11:04, Roger Hayter wrote:
    On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote: >>>
    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>>
    On 09/02/2025 21:42, Mark Goodge wrote:
    On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 08/02/2025 15:00, Mark Goodge wrote:

    What would you expect an autonomous vehicle to do in locations that are part
    of the 5%?

    Default to some programmed safe level or take inputs from other sensors
    and data sources of course.

    As, indeed, I have been telling you, all the time you've been insisting that
    an autonomous vehicle will always be able to trust its data supply. >>>>>>
    As of course it should. It's best if it can. What I said only applies >>>>>> to very few in remote areas where it may unfortunately be necessary. >>>>>
    What should an autonomous vehicle do if it has a data supply (ie, it's not
    in a mobile notspot) but its data disagrees with signage on the ground? >>>>
    It will presumably default to the least dangerous.

    I never claimed 95% or so was a glass completely full.

    But it's certainly not half empty.

    It's actually around 91% at the moment, apparently. The target is 95%. But,
    of course, you can't have level 5 autonomy until you can cope with 100% of
    the roads, even those which are outwith the 95% mobile data coverage. >>>>
    You can still have total autonomy in areas that are covered.

    You wouldn't expect an autonomous vehicle to work anywhere at all, like >>>> in Chad for example, so why everywhere even in Britain where there may >>>> not in fact be any roads or people? If there are not spots, AVs can be >>>> told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    What about a sudden failure of a base station?

    Or a tsunami, volcanic extinction event, nuclear attack, the sun
    exploding, or even condom failure?

    I don't know. But then again, I doubt if manual drivers do either.

    From my POV, the major difference is that I lost network service at home last
    week for a day because of a base station failure, but none of the others happened to me, nor are likely to.

    You never know.

    Anyway, other networks are available. If none of them work, and other
    sensors on the vehicle don't give adequate information, as with a
    malfunction of the vehicle itself it will have to come to a graceful
    halt and not continue until normal service is resumed. Just like you
    would if your brakes or headlamps fail when you are driving manually.

    Nothing works perfectly all of the time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 13:33:42 2025
    On 11/02/2025 11:24, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 09:22, Mark Goodge wrote:
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>

    You wouldn't expect an autonomous vehicle to work anywhere at all, like >>>> in Chad for example, so why everywhere even in Britain where there may >>>> not in fact be any roads or people?

    Then you're telling us that you don't, actually, think that level 5 autonomy
    will ever be achieved. That's quite a turnaround from your previous argument
    that getting there is trivial.

    Level 5, if I understand it correctly, means that the vehicles are fully
    automated and can drive anywhere and in any conditions without human input.

    Yes, it does. Or, to be more precise, it means being able to go anywhere a human driver in an otherwise unmodified street legal vehicle would be able
    to go. It doesn't require the autonomous vehicle to be able to do things
    that a normal car driven by a human wouldn't. So it doesn't need to be able to climb Ben Nevis, or participate in a Formula 1 race. But it does need to be able to handle New York, London, Paris and Wyre Piddle. And Còig Peighinnean Nis.

    Until that includes Chad and the moon, of course that can't be achieved.

    It deosn't need to include the moon.

    But it does need to include Chad?

    If so, it's a wholly unrealistic definition.

    If not, you're accepting that there are geographical areas in which you
    can have Level 5 AVs, and some where you can't.

    If there are geographical areas where you can have them, it's just a
    matter of defining those areas. And there's no reason why they should
    coincide with national borders. They could be larger than countries or smaller. A single country could have many different areas within it.

    As long as an AV has the information, it can avoid or refuse areas where
    it can't operate.

    It is centuries away, and is a rather stupid and pointless definition
    if taken literally.

    According to autonomous vehicle developers, we should be looking at seeing level 5 in testing, at least, in about a decade's time. Obviously, getting from testing to production is a different matter, and there may well be several issues which need to be ironed out along the way. But level 5 most definitely is the goal.

    But I don't see why you can't have fully automated
    vehicles that can drive anywhere within an area where there is
    sufficient data available to ensure its and others' safety. And such
    areas need not be worldwide or even nationwide.

    It's reasonable to accept that even level 5 will require national differences, and hence won't necessarily be available in all territories simultaneously.

    Why then mention New York, London and Paris in the same breath, as you
    did above?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 14:15:48 2025
    On Tue, 11 Feb 2025 13:33:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 11:24, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>
    On 11/02/2025 09:22, Mark Goodge wrote:
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>

    You wouldn't expect an autonomous vehicle to work anywhere at all, like >>>>> in Chad for example, so why everywhere even in Britain where there may >>>>> not in fact be any roads or people?

    Then you're telling us that you don't, actually, think that level 5 autonomy
    will ever be achieved. That's quite a turnaround from your previous argument
    that getting there is trivial.

    Level 5, if I understand it correctly, means that the vehicles are fully >>> automated and can drive anywhere and in any conditions without human input. >>
    Yes, it does. Or, to be more precise, it means being able to go anywhere a >> human driver in an otherwise unmodified street legal vehicle would be able >> to go. It doesn't require the autonomous vehicle to be able to do things
    that a normal car driven by a human wouldn't. So it doesn't need to be able >> to climb Ben Nevis, or participate in a Formula 1 race. But it does need to >> be able to handle New York, London, Paris and Wyre Piddle. And Cig
    Peighinnean Nis.

    Until that includes Chad and the moon, of course that can't be achieved.

    It deosn't need to include the moon.

    But it does need to include Chad?

    It needs to include the areas where humans can drive street-legal vehicles
    in Chad, yes. I don't see why (other than barely disguised racism), you seem
    to think that's difficult. Chad has a road network just like any other
    country. Humans use those roads without any difficulties. A level 5
    autonomous vehicle will be able to do so as well, so long as it meets Chad's legal requirements.

    I'm not suggesting that level 5 autonomy needs to be able to handle
    challenging off-road conditions. Most human drivers - and most street-legal normal cars - would struggle with that, too. So you're not going to see them crossing the Sahara off-road, and you won't find them in Antarctica either.
    But anywhere that has a public highway network and a defined legal framework for the vehicles which use it will be amenable to level 5 autonomy.

    It's reasonable to accept that even level 5 will require national
    differences, and hence won't necessarily be available in all territories
    simultaneously.

    Why then mention New York, London and Paris in the same breath, as you
    did above?

    Because the basic technology will be the same everywhere. It's the legal framework which will be different. Not every country will, initially, be willing to allow level 5 autonomous vehicles to use its roads. Not because
    they are technically incapable of it, but because that government has, for whatever reason, decided not to permit them. And even once it is permitted, there may be incompatibilities between the various equivalents of AI construction and use regulations which make it impractical to have
    cross-border travel with autonomous vehicles. And in some countries, particularly those with protectionist regulatory regimes, autonomous
    vehicles from certain sources may be prohibited. I suspect you're not going
    to see American autonomous vehicles on the streets of North Korea, even once they're common on the streets of North Dakota. But that won't be because they're technically incapable of working there. It will simply be because they're not allowed there.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 13:42:43 2025
    On 11/02/2025 12:50, Mark Goodge wrote:
    On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 11:03, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 11/02/2025 09:19, Mark Goodge wrote:

    Again, I'm not asking for code samples. Just the system architecture. >>>>
    Ideally, police officer at the scene presses big red button on computer >>>> marked 'Accident blockage'. That sends a message giving the location to >>>> the mapping companies and updates their maps instantaneously and
    automatically.

    How does the big red button know where the blockage is?

    I believe as a twinkle in someone's eye, if not yet in reality, there is
    a thing provisionally called a global positioning system based, I think,
    on satellites or some such preposterous idea.

    So the police officer has to be physically present at the blockage when they press the button. What about if they need to report it from a different location?

    Then he doesn't press the first red button, but the second one that says
    press here if the blockage is somewhere else, whereupon he will be
    required to input where exactly it is.

    Goodness, it's not rocket science.

    How is that data
    transmitted to the central server, and then to the mapping databases?

    Over the ether by 'communicators' a bit like they have on Star Trek.
    Again, I know it sounds far-fetched, but a chap down the pub assured me
    it was theoretically possible, and he was very plausible.

    What technology do those communicators use?

    Either the police communications network or the ordinary mobile phone
    network of course.

    Does the central database do
    something like make an automated phone call to all the mapping databases? Or something different? If so, what?

    Something like that.

    That's what I mean by system architecture. Just saying "It sends a message" >>> is like saying "Wave a magic wand". You must, surely, have some idea of how >>> it would work at a technical level. Unless you can explain how that would >>> work, your opinion is not particularly useful.

    It certainly isn't impossible. That being the case, I have the vision,
    others just need to assemble the nuts and bolts.

    Vision isn't knowing that something can be done. Vision is knowing how it
    can be done.

    No. There are planners with an overall vision, there are architects who
    design the realisation of that vision, and there are builders who put
    the bricks in place.

    Planners do not need to be either architects or builders.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 16:08:09 2025
    On 11/02/2025 13:07, Mark Goodge wrote:
    On 11 Feb 2025 11:04:26 GMT, Roger Hayter <roger@hayter.org> wrote:

    On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:

    You wouldn't expect an autonomous vehicle to work anywhere at all, like
    in Chad for example, so why everywhere even in Britain where there may
    not in fact be any roads or people? If there are not spots, AVs can be
    told where they are and programmed to avoid them or to refuse any
    journey that necessarily requires driving through one.

    What about a sudden failure of a base station?

    A level 5 autonomous vehicle would cope in much the same way that a human would under the same or similar circumstances (eg, loss of mobile signal, failure of a sat-nav unit). That is, infer a "best effort" route from
    context (eg, road signs and markings as well as the physical condition of
    the road), offline mapping, and, where possible and appropriate, alternative data sources (such as asking for directions).

    Of course, an autonomous vehicle could end up hopelessly lost under such circumstances. But then, so could a human. And an autonomous vehicle would, at least, not suffer from inability to read a map. Provided its offline map data was updated reasonably often it could probably make the majority of its journeys without actually needing any live mapping data, although that would be suboptimal as it would mean it wouldn't be able to avoid congestion and pre-emptively avoid closures by taking a different route from much earlier.

    Norman's problem, though, is that he doesn't think level 5 autonomy is possible at all, at least not for centuries

    Only if it has to include places like Chad, which the definition I have
    read does strictly require, namely 'a car that can travel anywhere there
    are roads, in all conditions'.

    but he does think that level 4
    can be good enough for that not to matter in practice. I think he is wrong
    on both of those counts.

    There's actually a continuum between Levels 4 and 5, not a step change.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 16:04:14 2025
    On 11/02/2025 14:15, Mark Goodge wrote:
    On Tue, 11 Feb 2025 13:33:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 11:24, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 11/02/2025 09:22, Mark Goodge wrote:
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:


    You wouldn't expect an autonomous vehicle to work anywhere at all, like >>>>>> in Chad for example, so why everywhere even in Britain where there may >>>>>> not in fact be any roads or people?

    Then you're telling us that you don't, actually, think that level 5 autonomy
    will ever be achieved. That's quite a turnaround from your previous argument
    that getting there is trivial.

    Level 5, if I understand it correctly, means that the vehicles are fully >>>> automated and can drive anywhere and in any conditions without human input.

    Yes, it does. Or, to be more precise, it means being able to go anywhere a >>> human driver in an otherwise unmodified street legal vehicle would be able >>> to go. It doesn't require the autonomous vehicle to be able to do things >>> that a normal car driven by a human wouldn't. So it doesn't need to be able >>> to climb Ben Nevis, or participate in a Formula 1 race. But it does need to >>> be able to handle New York, London, Paris and Wyre Piddle. And Còig
    Peighinnean Nis.

    Until that includes Chad and the moon, of course that can't be achieved. >>>
    It deosn't need to include the moon.

    But it does need to include Chad?

    It needs to include the areas where humans can drive street-legal vehicles
    in Chad, yes. I don't see why (other than barely disguised racism), you seem to think that's difficult. Chad has a road network just like any other country.

    Yes, but I doubt if it has the mapping, or any speed limit data-bases,
    quite possibly not even any signage, lane markings, junction priorities
    etc etc etc. It does makes life a little harder.

    Humans use those roads without any difficulties. A level 5
    autonomous vehicle will be able to do so as well, so long as it meets Chad's legal requirements.

    So, a vehicle can be level 5 in London or Chad, but needn't be in both?

    I'm not suggesting that level 5 autonomy needs to be able to handle challenging off-road conditions. Most human drivers - and most street-legal normal cars - would struggle with that, too. So you're not going to see them crossing the Sahara off-road, and you won't find them in Antarctica either. But anywhere that has a public highway network and a defined legal framework for the vehicles which use it will be amenable to level 5 autonomy.

    Probably not Chad at all then.

    It's reasonable to accept that even level 5 will require national
    differences, and hence won't necessarily be available in all territories >>> simultaneously.

    Why then mention New York, London and Paris in the same breath, as you
    did above?

    Because the basic technology will be the same everywhere. It's the legal framework which will be different.

    Well, no, because a lot depends on the data available to AVs, and that
    will be in very short supply in places like Chad.

    Not every country will, initially, be
    willing to allow level 5 autonomous vehicles to use its roads. Not because they are technically incapable of it, but because that government has, for whatever reason, decided not to permit them. And even once it is permitted, there may be incompatibilities between the various equivalents of AI construction and use regulations which make it impractical to have cross-border travel with autonomous vehicles. And in some countries, particularly those with protectionist regulatory regimes, autonomous
    vehicles from certain sources may be prohibited. I suspect you're not going to see American autonomous vehicles on the streets of North Korea, even once they're common on the streets of North Dakota. But that won't be because they're technically incapable of working there. It will simply be because they're not allowed there.

    And won't have an awful lot of data to work with.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to Mark Goodge on Tue Feb 11 19:21:51 2025
    On 11/02/2025 09:22, Mark Goodge wrote:
    On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 10/02/2025 16:28, Mark Goodge wrote:
    On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 09/02/2025 21:42, Mark Goodge wrote:

    What should an autonomous vehicle do if it has a data supply (ie, it's not >>> in a mobile notspot) but its data disagrees with signage on the ground?

    It will presumably default to the least dangerous.

    How will it do that? How will it know which is the least dangerous?

    Perhaps it will be able to hand-wave away any such difficulty - your interlocutor may be able to train it in this art.


    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Owen Rees@21:1/5 to All on Tue Feb 11 21:28:11 2025
    On Mon, 10 Feb 2025 23:54:15 +0000, Norman Wells <hex@unseen.ac.am>
    wrote in <m0vht9Fm2ueU3@mid.individual.net>:

    On 10/02/2025 19:21, Simon Parker wrote:
    NormCars (TM) have been designed without AI. Therefore they cannot read
    "Road Closed" signs or observe blockages in real time. What are they to
    do in the time between the blockage occurring and receiving notification
    of it assuming there's a delay of no more than sixty seconds at each
    stage of the process?

    (Officer arrives on scene and erects roadblock (T), Officer notifies HQ
    of blockage (T+1 minute), HQ notifies mapping companies in alphabetic
    order of blockage so first is Apple (T+2 minutes), HQ notifies Garmin of
    blockage (T+3 minutes), HQ notifies Google of blockage (T+4 minutes)...

    Google updates blockage data onto system (T+5 minutes).

    That's still not bad in absolute terms, but with instantaneous and >simultaneous communication already being possible and widely-used,
    there's no need for any of this nonsense. The whole process could be >automated, and would be better if it were.

    It could even be possible for the officer on the spot to put a block on
    all the maps instantaneously by a simple key press.

    Could you point to something that has instantaneous and simultaneous communucation please. If not an actual device then the breakthrough in
    physics that shows that instantaneous communication is possible. Last I
    heard there was an upper limit of 3*10^8 metre/sec for communication.


    You just have to have the vision to see it.

    You need to look at the bigger picture.

    The scenario where an officer has arrived to report the blockage with
    his FTL communicator is a small and relatively unimportant part of the
    possible situations that need to be handled. Simon Parker has already
    asked what happens in part of that larger picture - the time between the blockage occurring and the officer arriving. You narrow view apparently
    does not include that part of the overall situation.


    Assume NormCars (TM) have TomTom navigation technology. How long after
    the roadblock being erected does the car receive notification of the
    blockage and what is it doing in the time between the roadblock being
    erected and being notified of it?

    According to you, it will be coming to an emergency stop in the fast
    lane of a motorway somewhere.

    I have not see that suggested as the response to not yet having received notification of a blockage.

    It might be an appropriate response if there is a collision that blocks
    the motorway immediately ahead of the AV.


    But it won't have TomTom navigation technology if other competitive
    systems are better.


    If you look at the bigger picture, rerouting when notified of an
    obstacle sufficiently far ahead is not one of the more significant
    scenarios. No doubt it is a useful feature but it is not safety
    critical, unlike other things an AV will have to do.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 21:27:26 2025
    On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 12:50, Mark Goodge wrote:
    On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>
    On 11/02/2025 11:03, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>>
    On 11/02/2025 09:19, Mark Goodge wrote:

    Again, I'm not asking for code samples. Just the system architecture. >>>>>
    Ideally, police officer at the scene presses big red button on computer >>>>> marked 'Accident blockage'. That sends a message giving the location to >>>>> the mapping companies and updates their maps instantaneously and
    automatically.

    How does the big red button know where the blockage is?

    I believe as a twinkle in someone's eye, if not yet in reality, there is >>> a thing provisionally called a global positioning system based, I think, >>> on satellites or some such preposterous idea.

    So the police officer has to be physically present at the blockage when they >> press the button. What about if they need to report it from a different
    location?

    Then he doesn't press the first red button, but the second one that says >press here if the blockage is somewhere else, whereupon he will be
    required to input where exactly it is.

    How does he input exactly where it is? How does he know the precise
    coordinates of where it is if he isn't at the actual location to get a GPS
    fix?


    Over the ether by 'communicators' a bit like they have on Star Trek.
    Again, I know it sounds far-fetched, but a chap down the pub assured me
    it was theoretically possible, and he was very plausible.

    What technology do those communicators use?

    Either the police communications network or the ordinary mobile phone
    network of course.

    So, not data links, then?

    It certainly isn't impossible. That being the case, I have the vision,
    others just need to assemble the nuts and bolts.

    Vision isn't knowing that something can be done. Vision is knowing how it
    can be done.

    No. There are planners with an overall vision, there are architects who >design the realisation of that vision, and there are builders who put
    the bricks in place.

    Planners do not need to be either architects or builders.

    Tell me you know nothing about planning without telling me you know nothing about planning.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Tue Feb 11 21:30:33 2025
    On Tue, 11 Feb 2025 16:04:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 14:15, Mark Goodge wrote:

    It needs to include the areas where humans can drive street-legal vehicles >> in Chad, yes. I don't see why (other than barely disguised racism), you seem >> to think that's difficult. Chad has a road network just like any other
    country.

    Yes, but I doubt if it has the mapping, or any speed limit data-bases,
    quite possibly not even any signage, lane markings, junction priorities
    etc etc etc.

    But that's precisely the point. Humans manage in those conditions. So level
    5 autonomy will, too.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 21:47:03 2025
    On 11/02/2025 21:27, Mark Goodge wrote:
    On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 12:50, Mark Goodge wrote:
    On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>
    On 11/02/2025 11:03, Mark Goodge wrote:
    On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 09:19, Mark Goodge wrote:

    Again, I'm not asking for code samples. Just the system architecture. >>>>>>
    Ideally, police officer at the scene presses big red button on computer >>>>>> marked 'Accident blockage'. That sends a message giving the location to >>>>>> the mapping companies and updates their maps instantaneously and
    automatically.

    How does the big red button know where the blockage is?

    I believe as a twinkle in someone's eye, if not yet in reality, there is >>>> a thing provisionally called a global positioning system based, I think, >>>> on satellites or some such preposterous idea.

    So the police officer has to be physically present at the blockage when they
    press the button. What about if they need to report it from a different
    location?

    Then he doesn't press the first red button, but the second one that says
    press here if the blockage is somewhere else, whereupon he will be
    required to input where exactly it is.

    How does he input exactly where it is? How does he know the precise coordinates of where it is if he isn't at the actual location to get a GPS fix?

    He knows because he's on the scene. If he isn't and doesn't know he
    doesn't press either red button. It's very simple.

    Over the ether by 'communicators' a bit like they have on Star Trek.
    Again, I know it sounds far-fetched, but a chap down the pub assured me >>>> it was theoretically possible, and he was very plausible.

    What technology do those communicators use?

    Either the police communications network or the ordinary mobile phone
    network of course.

    So, not data links, then?

    Why not, if they work? I don't exclude any possibilities.

    It certainly isn't impossible. That being the case, I have the vision, >>>> others just need to assemble the nuts and bolts.

    Vision isn't knowing that something can be done. Vision is knowing how it >>> can be done.

    No. There are planners with an overall vision, there are architects who
    design the realisation of that vision, and there are builders who put
    the bricks in place.

    Planners do not need to be either architects or builders.

    Tell me you know nothing about planning without telling me you know nothing about planning.

    No, I can't do that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Owen Rees on Tue Feb 11 21:43:47 2025
    On 11/02/2025 21:28, Owen Rees wrote:
    On Mon, 10 Feb 2025 23:54:15 +0000, Norman Wells <hex@unseen.ac.am>
    wrote in <m0vht9Fm2ueU3@mid.individual.net>:

    On 10/02/2025 19:21, Simon Parker wrote:
    NormCars (TM) have been designed without AI.  Therefore they cannot read >>> "Road Closed" signs or observe blockages in real time.  What are they to >>> do in the time between the blockage occurring and receiving notification >>> of it assuming there's a delay of no more than sixty seconds at each
    stage of the process?

    (Officer arrives on scene and erects roadblock (T), Officer notifies HQ
    of blockage (T+1 minute), HQ notifies mapping companies in alphabetic
    order of blockage so first is Apple (T+2 minutes), HQ notifies Garmin of >>> blockage (T+3 minutes), HQ notifies Google of blockage (T+4 minutes)...

    Google updates blockage data onto system (T+5 minutes).

    That's still not bad in absolute terms, but with instantaneous and
    simultaneous communication already being possible and widely-used,
    there's no need for any of this nonsense. The whole process could be
    automated, and would be better if it were.

    It could even be possible for the officer on the spot to put a block on
    all the maps instantaneously by a simple key press.

    Could you point to something that has instantaneous and simultaneous communucation please. If not an actual device then the breakthrough in physics that shows that instantaneous communication is possible. Last I
    heard there was an upper limit of 3*10^8 metre/sec for communication.

    Oh, for goodness sake.

    You just have to have the vision to see it.

    You need to look at the bigger picture.

    The scenario where an officer has arrived to report the blockage with
    his FTL communicator is a small and relatively unimportant part of the possible situations that need to be handled. Simon Parker has already
    asked what happens in part of that larger picture - the time between the blockage occurring and the officer arriving. You narrow view apparently
    does not include that part of the overall situation.

    Can't be helped or avoided, can it?

    Assume NormCars (TM) have TomTom navigation technology.  How long after >>> the roadblock being erected does the car receive notification of the
    blockage and what is it doing in the time between the roadblock being
    erected and being notified of it?

    According to you, it will be coming to an emergency stop in the fast
    lane of a motorway somewhere.

    I have not see that suggested as the response to not yet having received notification of a blockage.

    No, but if you believe some here it's what happens all the time regardless.

    It might be an appropriate response if there is a collision that blocks
    the motorway immediately ahead of the AV.

    But it won't have TomTom navigation technology if other competitive
    systems are better.

    If you look at the bigger picture, rerouting when notified of an
    obstacle sufficiently far ahead is not one of the more significant
    scenarios. No doubt it is a useful feature but it is not safety
    critical, unlike other things an AV will have to do.

    So? That's the scenario we were discussing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Tue Feb 11 21:48:20 2025
    On 11/02/2025 21:30, Mark Goodge wrote:
    On Tue, 11 Feb 2025 16:04:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 14:15, Mark Goodge wrote:

    It needs to include the areas where humans can drive street-legal vehicles >>> in Chad, yes. I don't see why (other than barely disguised racism), you seem
    to think that's difficult. Chad has a road network just like any other
    country.

    Yes, but I doubt if it has the mapping, or any speed limit data-bases,
    quite possibly not even any signage, lane markings, junction priorities
    etc etc etc.

    But that's precisely the point. Humans manage in those conditions. So level
    5 autonomy will, too.

    My point is that Level 5 autonomy is simply not possible in such
    data-deprived places.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 12 09:09:46 2025
    On Tue, 11 Feb 2025 21:48:20 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 21:30, Mark Goodge wrote:
    On Tue, 11 Feb 2025 16:04:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>
    On 11/02/2025 14:15, Mark Goodge wrote:

    It needs to include the areas where humans can drive street-legal vehicles >>>> in Chad, yes. I don't see why (other than barely disguised racism), you seem
    to think that's difficult. Chad has a road network just like any other >>>> country.

    Yes, but I doubt if it has the mapping, or any speed limit data-bases,
    quite possibly not even any signage, lane markings, junction priorities
    etc etc etc.

    But that's precisely the point. Humans manage in those conditions. So level >> 5 autonomy will, too.

    My point is that Level 5 autonomy is simply not possible in such >data-deprived places.

    In which case, you're saying that level 5 autonomy is simply not possible.
    The autonomous vehicle manufacturers disagree with you.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 12 09:08:14 2025
    On Tue, 11 Feb 2025 21:47:03 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 11/02/2025 21:27, Mark Goodge wrote:
    On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>
    On 11/02/2025 12:50, Mark Goodge wrote:

    So the police officer has to be physically present at the blockage when they
    press the button. What about if they need to report it from a different >>>> location?

    Then he doesn't press the first red button, but the second one that says >>> press here if the blockage is somewhere else, whereupon he will be
    required to input where exactly it is.

    How does he input exactly where it is? How does he know the precise
    coordinates of where it is if he isn't at the actual location to get a GPS >> fix?

    He knows because he's on the scene. If he isn't and doesn't know he
    doesn't press either red button. It's very simple.

    OK, so your system doesn't account for any need for someone to ever report a blockage without being physically at the blockage at the time they report
    it. That's a bit of a weakness, wouldn't you say?

    What technology do those communicators use?

    Either the police communications network or the ordinary mobile phone
    network of course.

    So, not data links, then?

    Why not, if they work? I don't exclude any possibilities.

    I was just wondering why they didn't seem to have occurred to you
    previously.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Mark Goodge on Wed Feb 12 09:42:34 2025
    On 12/02/2025 09:08, Mark Goodge wrote:
    On Tue, 11 Feb 2025 21:47:03 +0000, Norman Wells <hex@unseen.ac.am> wrote:
    On 11/02/2025 21:27, Mark Goodge wrote:
    On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>>> On 11/02/2025 12:50, Mark Goodge wrote:

    So the police officer has to be physically present at the blockage when they
    press the button. What about if they need to report it from a different >>>>> location?

    Then he doesn't press the first red button, but the second one that says >>>> press here if the blockage is somewhere else, whereupon he will be
    required to input where exactly it is.

    How does he input exactly where it is? How does he know the precise
    coordinates of where it is if he isn't at the actual location to get a GPS >>> fix?

    He knows because he's on the scene. If he isn't and doesn't know he
    doesn't press either red button. It's very simple.

    OK, so your system doesn't account for any need for someone to ever report a blockage without being physically at the blockage at the time they report
    it. That's a bit of a weakness, wouldn't you say?

    No, as I've said elsewhere, if he's not actually at the scene, he
    obviously has to know where the blockage is and communicate that.

    I really don't know why you're making such heavy weather of this.

    What technology do those communicators use?

    Either the police communications network or the ordinary mobile phone
    network of course.

    So, not data links, then?

    Why not, if they work? I don't exclude any possibilities.

    I was just wondering why they didn't seem to have occurred to you
    previously.

    Because it's nuts and bolts stuff. It's all possible.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Norman Wells on Wed Feb 12 10:14:01 2025
    "Norman Wells" <hex@unseen.ac.am> wrote in message news:m11ut4F3is1U3@mid.individual.net...
    On 11/02/2025 21:30, Mark Goodge wrote:
    On Tue, 11 Feb 2025 16:04:14 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>
    On 11/02/2025 14:15, Mark Goodge wrote:

    It needs to include the areas where humans can drive street-legal vehicles >>>> in Chad, yes. I don't see why (other than barely disguised racism), you seem
    to think that's difficult. Chad has a road network just like any other >>>> country.

    Yes, but I doubt if it has the mapping, or any speed limit data-bases,
    quite possibly not even any signage, lane markings, junction priorities
    etc etc etc.

    But that's precisely the point. Humans manage in those conditions. So level >> 5 autonomy will, too.

    My point is that Level 5 autonomy is simply not possible in such data-deprived places.

    How do you think people managed to drive around in Chad before the
    invention of the Internet ?

    They would have used maps.

    Just as AV's in the UK would have come preloaded with "the AA Road
    Map of Great Britain," as part of some sponsorship deal.

    While AV's headed for Chad, would have done what Michael Palin did at
    the start of "80 Days". Popped down to Stanford's Map Shop in Covent
    Garden, and bought all the latest maps. These would then have been fed
    into the in-built scanner, using the slot in the dashboard right next
    to the cigarette lighter.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Simon Parker on Wed Feb 12 13:39:19 2025
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m13l8pFqoikU43@mid.individual.net...
    On 06/02/2025 19:00, billy bookcase wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0k1lpFqoikU23@mid.individual.net...

    AVs *DO* receive huge amounts of data and crunch that data in real time to make
    routing
    decisions. (And in context, "routing" here does not relate merely to, "Shall I turn
    left in 250metres?" but includes, "Shall I move 30cm further away from the footway as
    there's a pedestrian there?" or "There's a cyclist crossing the junction so even
    though
    the light is green, I must wait for them to clear the junction before I proceed.")

    These being exampes of your "situations they have never previously
    encountered " cyclists riding along the road" and bright red and green
    traffic lights ?

    And on public roads ?

    Who'd have thought ?

    Just how long did it take them to work that one out, I wonder ?

    Was it in the first year ?

    I can but recommend you watch the video previously linked.

    As somebody claiming that current Level 4 AVs do not possess any AI and are similar to
    robot vacuum cleaners

    Where did I ever claim that ?

    And robot vacuum cleaners in case you hadn't noticed don';t
    have to deal with moving objects. Which is a whole different
    class of object than are coffee tables and sleeping labradors

    Which I will happily admit.

    I've never once referred to any Levels at all.

    As to actual Level 4, By the time they actually come out all humans will
    have been reduced to unemployable layabouts whose sole remaining
    purpose in life, will be to try and win back all the money they've lost
    on the online gaming, which now fills the whole of their waking hours.


    bb

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From billy bookcase@21:1/5 to Simon Parker on Wed Feb 12 13:47:46 2025
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m13ldmFqoilU28@mid.individual.net...
    On 06/02/2025 18:52, billy bookcase wrote:
    "Simon Parker" <simonparkerulm@gmail.com> wrote in message
    news:m0k1ivFqoikU22@mid.individual.net...


    gross snippage

    AVs utilise AI every second of every trip. They are trained to deal with situations
    they have never previously encountered because it is impossible to program every
    possible scenario into the system.

    One single example of what you mean will suffice; and will actually
    demonstrate your understanding of the topic.

    That is, one example of A! in action. Such as to distinguish it from
    the actions of say a simple robot vacuum cleaner with certain added
    features.

    Every one of which, robot vacuum cleaners that is, are capable of
    cleaning rooms and avoiding objects which they have never
    encountered before in totally novel scenarios..

    So that's just one or two sentences please, * in your own words*.

    Of some action or actions - so not very long sentences or very
    long words - currently being performed by certificated AV's which
    could not have been accomplished by using sophisticated
    automation.

    I shall politely, but firmly, state that I am disinclined to acquiesce to your request
    to provide said information.

    Mainly because I've posted a quote from a senior official at one of the main developers
    of Level 4 AVs that contradicts what you're claiming and additionally because I've
    posted a link to a video from the same source demonstrating precisely what you're
    asking for here.

    Unfortunately [Parker], unless you're capable of putting it into your own words
    whatever it is, then there's no real indication that you yourself actually understand
    it, any of it, at all, is there ?

    Haven't we experienced this problem before with your work ?

    So its another 0/10 I'm afraid


    bb

    < Sick Note snipped >


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to Norman Wells on Wed Feb 12 14:42:47 2025
    On Wed, 12 Feb 2025 09:42:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:

    On 12/02/2025 09:08, Mark Goodge wrote:
    On Tue, 11 Feb 2025 21:47:03 +0000, Norman Wells <hex@unseen.ac.am> wrote: >>> On 11/02/2025 21:27, Mark Goodge wrote:

    How does he input exactly where it is? How does he know the precise
    coordinates of where it is if he isn't at the actual location to get a GPS >>>> fix?

    He knows because he's on the scene. If he isn't and doesn't know he
    doesn't press either red button. It's very simple.

    OK, so your system doesn't account for any need for someone to ever report a >> blockage without being physically at the blockage at the time they report
    it. That's a bit of a weakness, wouldn't you say?

    No, as I've said elsewhere, if he's not actually at the scene, he
    obviously has to know where the blockage is and communicate that.

    How does he communicate that, if he's not actually at the location?

    I really don't know why you're making such heavy weather of this.

    I'm not making heavy weather of it. You are. Your system is full of obvious weaknesses, and yet whenever I point one out you simply resort to bluster.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 12 14:51:28 2025
    On 12/02/2025 13:07, Simon Parker wrote:
    On 10/02/2025 23:54, Norman Wells wrote:
    On 10/02/2025 19:21, Simon Parker wrote:
    On 06/02/2025 15:32, Norman Wells wrote:

    Then they all should be notified, and those who wish to remain
    competitive will act.

    I've explained in a post elsewhere in the thread how Google, Apple
    and Garmin aggregate user data to determine delays in the proposed
    routing.

    Your task, should you choose to accept it, it to map onto the
    existing model precisely how "they should all be notified" and how
    they then action said "road closure notification being applied on the
    maps."

    Over to you, Norman...

    If you're not claiming it's an impossible ask, then it's possible.

    Not all things that are possible are also profitable.

    Nor are they even commercially viable.

    Same thing. But it's not for me to decide what may be profitable.

    I have the vision of what is possible.  It's for minions to implement it.

    To the best of my knowledge, you are retired.  You have no minions.

    You have absolutely no knowledge of either actually.

    And I do not believe the long-suffering Mrs Wells would appreciate being referred to in such terms.

    Can you quote where I did please?

    Then perhaps, if they want to remain relevant and commercially
    viable in the coming age of autonomous vehicles, they will see it as
    commercially important to develop such a facility if they don't have
    it already.  It is surely not beyond the wit of man despite all the
    pettyfogging issues you mention.

    How many notifications are there likely to be in a 24 hour period?
    Is one person per receiving company on a shift basis sufficient or
    does it need multiple people?  If multiple people, how many?  What
    happens if HQ try to contact them and they're away from their desk?

    If people are going to be such a problem, they can be replaced with
    automated systems, which will make the whole process even cheaper than
    virtually free.

    You have still failed to identify the source of the commercial contract
    you have in mind that permits communication for "virtually free".

    And to be clear, the system you propose where the police officer
    contacts HQ to advise of the blockage and HQ in turn then contacts the relevant map and routing data companies is instead being fully automated?

    How does this automated system operate?  How does it deal with "hard"
    and "soft" blockages?

    If you mean by 'soft blockage' that the road is still passable, it is
    not such a blockage.

    How many people are required at "HQ" to receive the notifications
    from officers "in the field", as it were?  Similarly, what happens if
    HQ do not answer the call in a timely manner (for all values of
    "timely manner")?

    Are the same people in HQ that receive the notifications passing them
    on to the live mapping companies or does that need a separate team?
    If so, same questions for this team.

    For all of the above, what happens in the event of sickness, holiday,
    bereavement, maternity absence, etc.?

    All these people problems.  No wonder computers will replace them.

    Will they?  Do you have a cite for that too?

    Of course they will. It would be daft not to. You've just listed all
    the advantages of computers.

    Or this.

    Google updates blockage data onto system (T+5 minutes).

    That's still not bad in absolute terms,

    You might describe it as "not bad in absolute terms" but that does not
    help the NormCars (TM) that are unable to navigate in that 5 plus minute window, and that's assuming absolute optimal operating conditions.

    You haven't said what the NormCar will be doing for that time.  Could
    you clarify please?

    Had you read on before leaping in, you'd have seen:

    but with instantaneous and simultaneous communication already being
    possible and widely-used, there's no need for any of this nonsense.
    The whole process could be automated, and would be better if it were.

    So now you're proposing the construction of a central, presumably
    government, database into which the police officer is feeding the road
    closed data with an API allowing the relevant mapping companies to
    extract data from the database in real time?

    Are you still describing this as "easy to integrate" and "virtually
    free" because, if so, I may have some very bad news for you?

    It's hardly beyond the wit of man.

    It could even be possible for the officer on the spot to put a block
    on all the maps instantaneously by a simple key press.No it vdfvfdvdfvdfv

    No it couldn't.  He'd need at least two buttons (one for "soft" block
    and one for "hard block") and also a means of adding a block at which he isn't present.  (For example he received notification from the
    electricity company that a wire has come down in a storm and is blocking
    the road.)  Unless you are planning for the officer first to drive to
    the scene so that he can add the details of the block to the database.

    What's the problem? It doesn't make it impossible.

    You just have to have the vision to see it.

    Blinkers to ignore all the inherent problems also seem to be mandatory too.

    I would suggest that in the event of a NormCar encountering an
    unexpected roadblock on its journey, it will have little alternative but
    to resort to its "failsafe default" until further and better information
    is available.

    Of course, just like a manual driver. Where did I say anything otherwise?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Wed Feb 12 15:16:26 2025
    On 12/02/2025 13:22, Simon Parker wrote:
    On 07/02/2025 11:28, Norman Wells wrote:

    But Level 5 is not the criterion for being allowed on UK roads under
    the Automated Vehicles Act, which is what probably matters more to most.

    No, being able to get into an AV and using it as one would a taxi "is
    what probably matters more to most".

    Well, that's pretty limited. Have you ever tried getting a taxi to go
    south of the river?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Goodge@21:1/5 to All on Wed Feb 12 15:11:48 2025
    On Wed, 12 Feb 2025 13:10:21 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    On 07/02/2025 06:40, J Newman wrote:

    Q: Can you come up with a driving condition for which there are no rules
    for, and can you say what you would do?

    A:
    Scenario:
    You're driving at night in a rural area with no streetlights, and
    suddenly, a massive swarm of insects (e.g., locusts or mayflies)
    completely covers your windshield, making visibility nearly zero. Your
    wipers and washer fluid are ineffective because the sheer number of
    insects is too great.

    What I Would Do:
    Stay Calm and Avoid Sudden Braking Slamming the brakes could cause a
    rear-end collision if there's a vehicle behind me.

    Interesting that the LLM mentions this. There are two cases being >investigated in America where AVs have done precisely this causing
    serious rear-end collisions.

    Thus demonstrating that ChatGPT, at least, is a little more advanced :-)

    Reduce Speed Gradually Id ease off the accelerator and slowly
    decelerate to maintain control.
    Turn on Hazard Lights This alerts other drivers to my reduced speed
    and possible erratic movement.
    Find a Safe Stopping Point If I can, Id try to pull over to a safe
    location like the shoulder, a rest area, or a side road.
    Use Alternate Visibility Methods If my windshield is completely
    covered, I might open my side window and stick my head out carefully (if
    safe) to get some visibility.

    I love that the LLM speaks as if it is a human. It doesn't have a head
    to stick out of the window, carefully or otherwise - nor do the AVs for
    that matter.

    I think the AI is assuming that it's a physical robot, as is commonly
    depicted in science fiction. That's not entirely unreasonable given the
    premise of the question, the implication is that the AI is "driving" the car rather than *being* the car. And that would also be consistent with much of
    the material that it's been trained on. ChatGPT is familiar with the work of Isaac Asimov.

    Alternatively, maybe it's trying a bit too hard to pass the Turing test. Although ChatGPT is, of course, familiar with the Turing test as well.

    Assess and Clear the Windshield Once stopped, Id use a squeegee, rag,
    or even a piece of clothing to remove the insects.
    Report the Hazard if Needed If it's a massive swarm affecting multiple
    drivers, I might call local authorities to warn others.
    This is a situation that isnt explicitly covered by standard driving
    laws, but the best approach follows general defensive driving principles
    stay calm, maintain control, and prioritize visibility and safety.

    Would you have done something differently?

    Outputting the text and putting it into action in an AV are completely >separate issues and I do not accept that it would be "trivially easy" to >program all of the above into an AV.

    I don't think it's actually necessary to, I think the point here is that, at least in theory, AI is capable of dealing with unforeseen circumstances, and
    an automomous vehicle with sufficiently advanced AI would be able to as well (contrary to NormTech's designs which hardcode everything and rely on there always being a source of external data to use).

    Obviously, standalone or embedded AI isn't at that point yet - it would not only need to be able to make that kind of decision, but also to make it in a matter of milliseconds. Which even ChatGPT can't do, at the moment, its reaction time simply isn't good enough to drive a car even if it could, in theory, make the right decisions when doing so. But there's a kind of
    Moore's Law progression to AI at the moment, so what's impossible now may
    well be possible within a surprisingly short time.

    That won't be because solving those problems is trivially easy, though. It
    will be because AI has advanced to the point where it can reliably solve incredibly complicated problems incredibly rapidly.

    Mark

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sam Plusnet@21:1/5 to Mark Goodge on Wed Feb 12 20:47:45 2025
    On 12/02/2025 15:11, Mark Goodge wrote:
    On Wed, 12 Feb 2025 13:10:21 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:

    On 07/02/2025 06:40, J Newman wrote:

    Q: Can you come up with a driving condition for which there are no rules >>> for, and can you say what you would do?

    A:
    Scenario:
    You're driving at night in a rural area with no streetlights, and
    suddenly, a massive swarm of insects (e.g., locusts or mayflies)
    completely covers your windshield, making visibility nearly zero. Your
    wipers and washer fluid are ineffective because the sheer number of
    insects is too great.

    What I Would Do:
    Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a >>> rear-end collision if there's a vehicle behind me.

    Interesting that the LLM mentions this. There are two cases being
    investigated in America where AVs have done precisely this causing
    serious rear-end collisions.

    Thus demonstrating that ChatGPT, at least, is a little more advanced :-)

    Reduce Speed Gradually – I’d ease off the accelerator and slowly
    decelerate to maintain control.
    Turn on Hazard Lights – This alerts other drivers to my reduced speed
    and possible erratic movement.
    Find a Safe Stopping Point – If I can, I’d try to pull over to a safe >>> location like the shoulder, a rest area, or a side road.
    Use Alternate Visibility Methods – If my windshield is completely
    covered, I might open my side window and stick my head out carefully (if >>> safe) to get some visibility.

    I love that the LLM speaks as if it is a human. It doesn't have a head
    to stick out of the window, carefully or otherwise - nor do the AVs for
    that matter.

    I think the AI is assuming that it's a physical robot, as is commonly depicted in science fiction. That's not entirely unreasonable given the premise of the question, the implication is that the AI is "driving" the car rather than *being* the car. And that would also be consistent with much of the material that it's been trained on. ChatGPT is familiar with the work of Isaac Asimov.

    Evidently the AI (all AIs?) is being trained to be deceitful.
    By pretending that this is the work of a person, its owners hope that it
    can "pass" for human.

    I cannot see this ending well (for us).


    --
    Sam Plusnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jethro_uk@21:1/5 to Sam Plusnet on Thu Feb 13 09:18:00 2025
    On Wed, 12 Feb 2025 20:47:45 +0000, Sam Plusnet wrote:

    On 12/02/2025 15:11, Mark Goodge wrote:
    On Wed, 12 Feb 2025 13:10:21 +0000, Simon Parker
    <simonparkerulm@gmail.com> wrote:

    On 07/02/2025 06:40, J Newman wrote:

    Q: Can you come up with a driving condition for which there are no
    rules for, and can you say what you would do?

    A:
    Scenario:
    You're driving at night in a rural area with no streetlights, and
    suddenly, a massive swarm of insects (e.g., locusts or mayflies)
    completely covers your windshield, making visibility nearly zero.
    Your wipers and washer fluid are ineffective because the sheer number
    of insects is too great.

    What I Would Do:
    Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause >>>> a rear-end collision if there's a vehicle behind me.

    Interesting that the LLM mentions this. There are two cases being
    investigated in America where AVs have done precisely this causing
    serious rear-end collisions.

    Thus demonstrating that ChatGPT, at least, is a little more advanced
    :-)

    Reduce Speed Gradually – I’d ease off the accelerator and slowly
    decelerate to maintain control.
    Turn on Hazard Lights – This alerts other drivers to my reduced speed >>>> and possible erratic movement.
    Find a Safe Stopping Point – If I can, I’d try to pull over to a safe >>>> location like the shoulder, a rest area, or a side road.
    Use Alternate Visibility Methods – If my windshield is completely
    covered, I might open my side window and stick my head out carefully
    (if safe) to get some visibility.

    I love that the LLM speaks as if it is a human. It doesn't have a
    head to stick out of the window, carefully or otherwise - nor do the
    AVs for that matter.

    I think the AI is assuming that it's a physical robot, as is commonly
    depicted in science fiction. That's not entirely unreasonable given the
    premise of the question, the implication is that the AI is "driving"
    the car rather than *being* the car. And that would also be consistent
    with much of the material that it's been trained on. ChatGPT is
    familiar with the work of Isaac Asimov.

    Evidently the AI (all AIs?) is being trained to be deceitful.
    By pretending that this is the work of a person, its owners hope that it
    can "pass" for human.

    I cannot see this ending well (for us).

    If "AI" is to become "I" then it *needs* to be able to lie.

    "If it can't lie, it ain't AI".

    Even the decades old Turing test requires the system under test to be
    able to lie. Otherwise asking it "What can you see ?" and getting the
    answer "Nothing, I am a machine" would lead to a very short film when
    it's dramatised.

    In order to pass the (ludicrously basic) Turing test, any candidate has
    to be able to invent and maintain a totally fictitious back story. (Interestingly enough, for contrast, the human interlocutor is allowed to pretend to be a machine. Which is an added wrinkle less discussed).

    Of course now there is "AI" to flog in the spadeful, the salesmen are
    telling us that the Turing test - far from being beautifully elegant, and simple like all classic experiments - is somehow "inappropriate". Mainly because applying it to any of the current crop of "AI" pretenders reveals instantly they aren't. Not even close. Not even in the same universe.

    In fact I am suspecting the real reason we have the "AI" craze right now
    is that there is a large %age of the population who aren't smart enough
    to spot "AI" because they themselves are short in the "I" department.
    However you aren't allowed to say that these days.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Brown@21:1/5 to Simon Parker on Thu Feb 13 11:38:50 2025
    On 12/02/2025 13:10, Simon Parker wrote:
    On 07/02/2025 06:40, J Newman wrote:
    On 03/02/2025 17:43, Mark Goodge wrote:
    On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am>
    wrote:

    AI is not part of autonomous vehicle control, and includes none of the >>>> things Mr Harran identified above.  It is simply automation
    according to
    pre-determined rules.

    But that's precisely the problem. It's impossible to have a
    predetermined
    rule for every possible scenario. And dealing with scenarios that you
    don't have a rule for requires intelligence.

    So here's how ChatGPT responded to such a scenario. Is AI intelligent
    enough to drive cars?

    Q: Can you come up with a driving condition for which there are no
    rules for, and can you say what you would do?

    A:
    Scenario:
    You're driving at night in a rural area with no streetlights, and
    suddenly, a massive swarm of insects (e.g., locusts or mayflies)
    completely covers your windshield, making visibility nearly zero. Your
    wipers and washer fluid are ineffective because the sheer number of
    insects is too great.

    The wipers *will* displace bulk insects enough to see something
    immediately behind where the wipers have just passed over. Not ideal but
    better than being entirely blind.

    What I Would Do:
    Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a
    rear-end collision if there's a vehicle behind me.

    Interesting that the LLM mentions this.  There are two cases being investigated in America where AVs have done precisely this causing
    serious rear-end collisions.

    My car which isn't a full AV will do a half emergency stop if someone
    cuts in front of me close enough to nearly take the front wing off. It re-arranged the contents of the car fairly comprehensively. Once there
    is a satisfactory gap it takes its foot off the brake. The first time it happened was truly alarming also there was an acoustic panic warning and
    a big red car outline flashed up on the dash. Most disconcerting.

    If there had been a car close in behind me I doubt if they would have
    had time to react.

    Legal question in a car with this level of autonomous functionality is
    the driver legally responsible for the autonomous actions of the car in response to such external stimuli? I did not hit the brakes hard - the
    car did it for me - a full ABS stop until it decided the gap was OK.

    I'm assuming here that dashcam evidence and the black box from braking decisions made by the AI and acceleration sensor logs are available.

    It sometimes does the same when passing inside a car turning right in a
    ghost island on our rather narrow country roads. Or sometimes because of
    a low flying pheasant or water droplet has run across the sensor.

    It has happened 3 times in about 10k miles travelled. So not a common occurrence but a bit too frequently for my liking.

    Reduce Speed Gradually – I’d ease off the accelerator and slowly
    decelerate to maintain control.
    Turn on Hazard Lights – This alerts other drivers to my reduced speed
    and possible erratic movement.
    Find a Safe Stopping Point – If I can, I’d try to pull over to a safe
    location like the shoulder, a rest area, or a side road.
    Use Alternate Visibility Methods – If my windshield is completely
    covered, I might open my side window and stick my head out carefully
    (if safe) to get some visibility.

    I love that the LLM speaks as if it is a human.  It doesn't have a head
    to stick out of the window, carefully or otherwise - nor do the AVs for
    that matter.

    It's advice is poor.

    The first thing you should do it ease off the accelerator *and* touch
    the brakes gently - enough to warm traffic behind that you are having to
    slow. They will find out soon enough why you are doing it when they hit
    the swarm (or go into thick smoke).

    It is reminiscent of the motorway problem in thick fog where morons are
    content to zoom along at a speed where they have absolutely no chance whatsoever of stopping if a stationary traffic queue suddenly appears
    out of the gloom. High multiplicity pile ups result from that scenario.

    Some people pile into queues at speed even with perfect visibility :(

    Assess and Clear the Windshield – Once stopped, I’d use a squeegee,
    rag, or even a piece of clothing to remove the insects.
    Report the Hazard if Needed – If it's a massive swarm affecting
    multiple drivers, I might call local authorities to warn others.
    This is a situation that isn’t explicitly covered by standard driving
    laws, but the best approach follows general defensive driving
    principles —stay calm, maintain control, and prioritize visibility and
    safety.

    Would you have done something differently?

    Outputting the text and putting it into action in an AV are completely separate issues and I do not accept that it would be "trivially easy" to program all of the above into an AV.

    Mine the fancy video camera can quite easily be blinded at night by inconsiderate drivers coming the other way with main beam up and not
    dipping. Much worse when it is raining. I don't find it that easy to
    drive against it either and put my main beam up for a couple of seconds
    to make the point. Most drivers on rural roads are polite (but not all).

    The flashing warning about the lost functionality is itself distracting
    to the driver. Someone hasn't been on the right HCI courses.

    The radar/ultrasound sensors (not sure of the exact method) are unlikely
    to be blinded by such optical interference and will see through fog.
    Lidar might or might not depending on the choice of wavelength.

    --
    Martin Brown

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Norman Wells@21:1/5 to Simon Parker on Thu Feb 13 14:04:54 2025
    On 13/02/2025 12:34, Simon Parker wrote:
    On 12/02/2025 14:51, Norman Wells wrote:
    On 12/02/2025 13:07, Simon Parker wrote:
    On 10/02/2025 23:54, Norman Wells wrote:

    If you're not claiming it's an impossible ask, then it's possible.

    Not all things that are possible are also profitable.

    Nor are they even commercially viable.

    Same thing.

    No they're not.  Profitability is the here and now, commercial viability
    is over the lifecycle of the product / service.

    YouTube was not profitable for years but remained commercially viable.


    But it's not for me to decide what may be profitable.

    NormTech (TM) has backers with deep pockets ready, willing and able to
    soak up nine to ten figure losses?  No?  Thought not.

    Your disingenuous snipping has removed what we were actually talking
    about, which wasn't that.

    The fictional Normtech never was a mapping company.

    I have the vision of what is possible.  It's for minions to
    implement it.

    To the best of my knowledge, you are retired.  You have no minions.

    You have absolutely no knowledge of either actually.

    Are you denying that you are retired?  Do you deny that, as a retired person, at present you have no, (and I'm going to use the business
    term), "direct reports", (as I find your use of the term minions disparaging)?

    The fictional Normtech has many fictional employees, all of whom are
    minions to a greater or lesser extent, as are all employees everywhere,
    whether they think it disparaging or not.

    And I do not believe the long-suffering Mrs Wells would appreciate
    being referred to in such terms.

    Can you quote where I did please?

    You are retired.

    No quote then?

    As such you have no minions.  The only person over
    whom you hold a modicum of sway is Mrs Wells.  If she is not your
    minion, perhaps you could outline who the minions are that you envisage realising your dream.

    Those in the employ of the fictional Normtech of course.

    Sorry if your fishing expedition has resulted in no addition to the
    total lack of knowledge you have about me in real life which is no
    concern of yours anyway.

    If people are going to be such a problem, they can be replaced with
    automated systems, which will make the whole process even cheaper
    than virtually free.

    You have still failed to identify the source of the commercial
    contract you have in mind that permits communication for "virtually
    free".

    Still no mention of a commercial contract that permits communication for "virtually free".  Never mind one that will scale to the necessary levels.

    I can only assume that the best part of the 21st century has passed you
    by. Electronic communications *are* now virtually free.

    And to be clear, the system you propose where the police officer
    contacts HQ to advise of the blockage and HQ in turn then contacts
    the relevant map and routing data companies is instead being fully
    automated?

    How does this automated system operate?  How does it deal with "hard"
    and "soft" blockages?

    If you mean by 'soft blockage' that the road is still passable, it is
    not such a blockage.

    An example of a "soft blockage" may be a tree fallen in the road.

    Then it's an actual blockage, not something you vaguely say is 'soft'.

    It may surprise you to learn that the police / HA do not typically station
    a vehicle immediately either side of the downed tree permitting their officers to wave to each other across the trunk.  Rather they position
    them at the respective ends of the road to prevent vehicles entering the road.  However, properties may be accessible between the road block and
    the physical blockage.  Vehicles aiming to visit those properties may
    still be permitted to pass the road block despite being unable to pass
    the physical blockage.  The majority of vehicles, but not all, will be turned away.  Hence, it is a soft blockage.  Tell me, in these circumstances, which button does the officer press to indicate where the physical blockage is located, where the roadblocks are at the respective
    ends of the road and which button or buttons allow him to detail to
    which properties AVs may still travel between the road block and
    physical blockage?  How is all of this "instead being fully automated"?

    Are you saying it's impossible?

    If not, it's nerdy nuts and bolts stuff and just needs to be implemented whether you have the vision to see how or not.

    And that's without even getting into "passable with care" 'soft
    blockages' where limited local traffic may be permitted, but non-local
    is not.

    Your "fully automated" system is still "trivially easy" is it?


    All these people problems.  No wonder computers will replace them.

    Will they?  Do you have a cite for that too?

    Of course they will.  It would be daft not to.  You've just listed all
    the advantages of computers.

    Thank you for confirming that you do not have a cite or any evidence to support your claim.

    There are some things that are so self-apparent and obvious there is no
    doubt about them. And replacing people that cause problems with
    computers that don't is just one of them.

    [some unacknowledged, (and careless), snipping from Norman which can
    only be taken as an admission that he has no idea how to integrate his proposed system with those that already exist]

    You seem to be rather blinkered by the term 'integrate' as if it
    requires no changes whatsoever.

    Or this.

    Google updates blockage data onto system (T+5 minutes).

    That's still not bad in absolute terms,

    You might describe it as "not bad in absolute terms" but that does
    not help the NormCars (TM) that are unable to navigate in that 5 plus
    minute window, and that's assuming absolute optimal operating
    conditions.

    You haven't said what the NormCar will be doing for that time.  Could
    you clarify please?

    Had you read on before leaping in, you'd have seen:

    but with instantaneous and simultaneous communication already being
    possible and widely-used, there's no need for any of this nonsense.
    The whole process could be automated, and would be better if it were.

    Yes, about that "fully automated" system you have now pivoted to
    proposing, I look forward to your solutions to the "trivially simple"
    problem of a downed tree above and how it will be relayed to the various mapping systems without any human involvement at all.

    This is just micro-managing, which is not my style. It's perfectly
    possible if only you have the vision to see it.

    I would suggest that in the event of a NormCar encountering an
    unexpected roadblock on its journey, it will have little alternative
    but to resort to its "failsafe default" until further and better
    information is available.

    Of course, just like a manual driver.  Where did I say anything
    otherwise?

    Manual drivers do not and cannot "come to a halt and turn everything
    off" when encountering a roadblock.

    If there's no way through, that's exactly what they do.

    Not least because the officers
    present at the roadblock will tell them to keep moving.

    With real-time mapping information showing a road that is impassable, so
    too will an AV. It will have re-routed before even reaching it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)