For example, imagine an AI driven car with near instant reflexes, steering control, automatic emergency brakes, perhaps even a smart road that can inform them of possible oncoming traffic around a blind corner. Should the double white line on the road be an impediment to them overtaking around a blind corner, if technology reveals that there is no oncoming car and the AI is smart/responsive enough to negotiate a sharp turn at speed without skidding?
For AI driven cars, should a higher speed limit apply to them?
What if someone passes their driving test having only been tested on AI driven cars, and their license is only valid for AI driven cars? Should more lax road conditions apply to them than to people who hold licenses that would allow them to drive non-autonomous vehicles?
Who is liable if an AI driven car gets into an accident, especially if
the driver qualified and holds a license that precludes them from manual control of the car?
Perhaps some questions worth thinking about given that AI systems are surpassing humans in more and more ways.
So far, I am not convinced that AI systems should be let loose on the
public roads in charge of driving. Little deliveries around town, yes,
but not on the wider road network.
On Wed, 29 Jan 2025 10:12:05 +0000, Davey wrote:
So far, I am not convinced that AI systems should be let loose on
the public roads in charge of driving. Little deliveries around
town, yes, but not on the wider road network.
We will end up building the environment to the AI. Basically what we
have done with trains.
I can easily see roads where AI is not allowed (or conversely roads
where only AI is allowed).
Motorways would be the logical starting point.
On Tue, 28 Jan 2025 23:04:16 -0000 (UTC), Jon Ribbens
<jon+usenet@unequivocal.eu> wrote:
On 2025-01-28, J Newman <jenniferkatenewman@gmail.com> wrote:
For example, imagine an AI driven car with near instant reflexes, steering control, automatic emergency brakes, perhaps even a smart road that can inform them of possible oncoming traffic around a blind corner. Should the double white line on the road be an impediment to them overtaking around a blind corner, if technology reveals that there is no oncoming car and the AI is smart/responsive enough to negotiate a sharp turn at speed without skidding?
For AI driven cars, should a higher speed limit apply to them?
What about if someone passes their driving test, and was only tested on
AI driven cars and their license is only valid for AI driven cars.
Should more lax road conditions apply to them, than people who hold
licenses that would allow them to drive non-autonomous vehicles?
Who is liable if an AI driven car gets into an accident, especially if the driver qualified and holds a license that precludes them from manual control of the car?
Perhaps some questions worth thinking about given that AI systems are
surpassing humans in more and more ways.
Driving cars, however, is certainly not one of them at the current time.
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
On Wed, 29 Jan 2025 00:29:15 +0200
J Newman <jenniferkatenewman@gmail.com> wrote:
For example, imagine an AI driven car with near instant reflexes,
steering control, automatic emergency brakes, perhaps even a smart
road that can inform them of possible oncoming traffic around a blind
corner. Should the double white line on the road be an impediment to
them overtaking around a blind corner, if technology reveals that
there is no oncoming car and the AI is smart/responsive enough to
negotiate a sharp turn at speed without skidding?
For AI driven cars, should a higher speed limit apply to them?
What about if someone passes their driving test, and was only tested
on AI driven cars and their license is only valid for AI driven cars.
Should more lax road conditions apply to them, than people who hold
licenses that would allow them to drive non-autonomous vehicles?
Who is liable if an AI driven car gets into an accident, especially
if the driver qualified and holds a license that precludes them from
manual control of the car?
Perhaps some questions worth thinking about given that AI systems are
surpassing humans in more and more ways.
One of the things that helps road safety is the knowledge of what rules
apply to both you and all other vehicles, and the fact that they are
the same. Any variation would mean that everyone had to learn what
rules applied to all other vehicles, and that is an unsupportable burden
to put on everybody.
So far, I am not convinced that AI systems should be let loose on the
public roads in charge of driving. Little deliveries around town, yes,
but not on the wider road network.
Recently in the USA, I was startled to see what I can only describe as several small self-powered and self-directed boxes (c. 18" x 12" x 10")
on wheels, carrying goods for delivery (at moderate speed) along the
footway in a university area.
On 2025-01-29, Martin Harran <martinharran@gmail.com> wrote:
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
I wouldn't call any of those things "AI". And even those relatively
simple things they frequently get wrong. I would be surprised if
"full self-drive" arrives as an everyday general-purpose consumer
feature within the next 20 years.
On 29/01/2025 13:09, Martin Harran wrote:
Driving cars, however, is certainly not one of them at the current
time.
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too
close to a vehicle in front, lane-keep assist.
It seems that even the most mundane bit of software these days is described as AI! It must have been back in the 80s that a friend of mine was working on software for automatic landing and take-off for airliners. Nothing either artificial or intelligent about it!
On 29/01/2025 10:12 AM, Davey wrote:
On Wed, 29 Jan 2025 00:29:15 +0200
J Newman <jenniferkatenewman@gmail.com> wrote:
For example, imagine an AI driven car with near instant reflexes,
steering control, automatic emergency brakes, perhaps even a smart
road that can inform them of possible oncoming traffic around a blind
corner. Should the double white line on the road be an impediment to
them overtaking around a blind corner, if technology reveals that
there is no oncoming car and the AI is smart/responsive enough to
negotiate a sharp turn at speed without skidding?
For AI driven cars, should a higher speed limit apply to them?
What about if someone passes their driving test, and was only tested
on AI driven cars and their license is only valid for AI driven cars.
Should more lax road conditions apply to them, than people who hold
licenses that would allow them to drive non-autonomous vehicles?
Who is liable if an AI driven car gets into an accident, especially
if the driver qualified and holds a license that precludes them from
manual control of the car?
Perhaps some questions worth thinking about given that AI systems are
surpassing humans in more and more ways.
One of the things that helps road safety is the knowledge of what rules
apply to both you and all other vehicles, and the fact that they are
the same. Any variation would mean that everyone had to learn what
rules applied to all other vehicles, and that is an unsupportable burden
to put on everybody.
So far, I am not convinced that AI systems should be let loose on the
public roads in charge of driving. Little deliveries around town, yes,
but not on the wider road network.
Recently in the USA, I was startled to see what I can only describe as several small self-powered and self-directed boxes (c. 18" x 12" x 10")
on wheels, carrying goods for delivery (at moderate speed) along the
footway in a university area.
On 28/01/2025 22:29, J Newman wrote:
For AI driven cars, should a higher speed limit apply to them?
What would happen if they exceeded a speed limit ?
(Or crossed a double white line, etc.)
On 30/01/2025 08:59, Nick Finnigan wrote:
On 28/01/2025 22:29, J Newman wrote:
For AI driven cars, should a higher speed limit apply to them?
What would happen if they exceeded a speed limit ?
(Or crossed a double white line, etc.)
You'd make sure the signs and the data-base were accurate, and repaint the lines.
In message <slrnvpiojg.4rm.jon+usenet@raven.unequivocal.eu>, at 23:04:16
on Tue, 28 Jan 2025, Jon Ribbens <jon+usenet@unequivocal.eu> remarked:
On 2025-01-28, J Newman <jenniferkatenewman@gmail.com> wrote:
For example, imagine an AI driven car with near instant reflexes, steering control, automatic emergency brakes, perhaps even a smart road that can inform them of possible oncoming traffic around a blind corner. Should the double white line on the road be an impediment to them overtaking around a blind corner, if technology reveals that there is no oncoming car and the AI is smart/responsive enough to negotiate a sharp turn at speed without skidding?
For AI driven cars, should a higher speed limit apply to them?
What about if someone passes their driving test, and was only tested on
AI driven cars and their license is only valid for AI driven cars.
Should more lax road conditions apply to them, than people who hold
licenses that would allow them to drive non-autonomous vehicles?
Who is liable if an AI driven car gets into an accident, especially if the driver qualified and holds a license that precludes them from manual control of the car?
Perhaps some questions worth thinking about given that AI systems are
surpassing humans in more and more ways.
Driving cars, however, is certainly not one of them at the current time.
I agree. They can't even follow white lines most of the time, because
they are either worn out, covered in mud/snow, or only present on one
side of the car.
JNugent wrote:
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x
10") on wheels, carrying goods for delivery (at moderate speed) along
the footway in a university area.
Some launched in the UK a few years ago, are they still going?
<https://www.starship.xyz>
On 29/01/2025 14:14, JNugent wrote:
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x
10") on wheels, carrying goods for delivery (at moderate speed) along
the footway in a university area.
This reminds me of Bluestone 42, where the mine clearance team deploy
their robot, and some Afghans turn up on a motorbike and steal it!
JNugent <JNugent73@mail.com> remarked:
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x
10") on wheels, carrying goods for delivery (at moderate speed) along
the footway in a university area.
There's a trial of those in Cambridge. A major problem is crossing
roads, because they can't reach the buttons on the Pelican Crossings.
Another unrelated problem is that they ping the buyer from the pavement outside what might be quite a big apartment block, and if they aren't attended to within (iirc) ten minutes, take the shopping back *and still charge for it*.
On 29/01/2025 14:14, JNugent wrote:
On 29/01/2025 10:12 AM, Davey wrote:
On Wed, 29 Jan 2025 00:29:15 +0200
J Newman <jenniferkatenewman@gmail.com> wrote:
For example, imagine an AI driven car with near instant reflexes,
steering control, automatic emergency brakes, perhaps even a smart
road that can inform them of possible oncoming traffic around a blind
corner. Should the double white line on the road be an impediment to
them overtaking around a blind corner, if technology reveals that
there is no oncoming car and the AI is smart/responsive enough to
negotiate a sharp turn at speed without skidding?
For AI driven cars, should a higher speed limit apply to them?
What about if someone passes their driving test, and was only tested
on AI driven cars and their license is only valid for AI driven cars.
Should more lax road conditions apply to them, than people who hold
licenses that would allow them to drive non-autonomous vehicles?
Who is liable if an AI driven car gets into an accident, especially
if the driver qualified and holds a license that precludes them from
manual control of the car?
Perhaps some questions worth thinking about given that AI systems are
surpassing humans in more and more ways.
One of the things that helps road safety is the knowledge of what rules
apply to both you and all other vehicles, and the fact that they are
the same. Any variation would mean that everyone had to learn what
rules applied to all other vehicles, and that is an unsupportable burden to put on everybody.
So far, I am not convinced that AI systems should be let loose on the
public roads in charge of driving. Little deliveries around town, yes,
but not on the wider road network.
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x
10") on wheels, carrying goods for delivery (at moderate speed) along
the footway in a university area.
"In April 2018, Starship launched its autonomous delivery service in Milton Keynes in partnership with Co-op and Tesco. In March 2020, Starship became the first robot delivery service to operate in a British town center with the rollout of its service in Central Milton Keynes."
https://en.wikipedia.org/wiki/Starship_Technologies#:~:text=In%20April%202018%2C%20Starship%20launched,service%20in%20Central%20Milton%20Keynes.
On Thu, 30 Jan 2025 12:20:55 +0000, Norman Wells wrote:
On 30/01/2025 08:59, Nick Finnigan wrote:
On 28/01/2025 22:29, J Newman wrote:
For AI driven cars, should a higher speed limit apply to them?
What would happen if they exceeded a speed limit ?
(Or crossed a double white line, etc.)
You'd make sure the signs and the data-base were accurate,
Mathematically impossible.
and repaint the lines.
Much better idea.
On Thu, 30 Jan 2025 13:04:29 -0000 (UTC), Jethro_uk <jethro_uk@hotmailbin.com> wrote:
On Thu, 30 Jan 2025 12:20:55 +0000, Norman Wells wrote:
On 30/01/2025 08:59, Nick Finnigan wrote:
On 28/01/2025 22:29, J Newman wrote:
For AI driven cars, should a higher speed limit apply to them?
What would happen if they exceeded a speed limit ?
(Or crossed a double white line, etc.)
You'd make sure the signs and the data-base were accurate,
Mathematically impossible.
The word "mathematically" is somewhat superfluous here.
and repaint
the lines.
Much better idea.
Unless there were no lines to begin with.
I'd be interested in knowing how autonomous vehicles cope with situations like this:
https://thumbs.dreamstime.com/b/waiting-red-light-roadworks-controlled-traffic-lights-somerset-england-no-traffic-approaching-opposite-250969607.jpg?w=360
or https://tinyurl.com/4ywavjue
There's a temporary traffic light there, which I'd expect an autonomous vehicle to be capable of recognising and obeying. But would it understand the meaning of the red sign? And proceeding when the light is green means crossing a double white line, which an autonomous vehicle would know is prohibited. Would it correctly interpret the signage to indicate that, in this case, that prohibition is suspended?
The reality is that humans don't just slavishly follow signs and road markings. We use context to tell us when we can, under certain circumstances, infer the presence of markings that we can't see (for example, a faded stop line at a normal light-controlled junction), or infer that a visible marking can be ignored (eg, the double white lines in that photo). A competent autonomous vehicle has to be able to do the same.
It would not be acceptable for an autonomous vehicle to jump a red light because it can't see the stop line. Nor would it be acceptable for it to refuse to proceed in circumstances where normal rules have been temporarily suspended.
And it doesn't just have to be able to obey signs and signals,
both permanent and temporary, it also has to be able to obey instructions given by a police officer. If it can't do any of those without human input, then it isn't fully autonomous.
On 30/01/2025 14:35, Mark Goodge wrote:
The reality is that humans don't just slavishly follow signs and road markings. We use context to tell us when we can, under certain circumstances, infer the presence of markings that we can't see (for example, a faded stop line at a normal light-controlled junction), or infer that a visible marking can be ignored (eg, the double white lines in that photo). A competent autonomous vehicle has to be able to do the same.
Why do you think an autonomous vehicle is not capable of making the same inference?
And it doesn't just have to be able to obey signs and signals, both permanent and temporary, it also has to be able to obey instructions given by a police officer. If it can't do any of those without human input, then it isn't fully autonomous.
It seems you're only now considering obvious potential pitfalls that I'm sure were considered several years ago, and have been resolved but unfortunately without telling you.
On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 30/01/2025 14:35, Mark Goodge wrote:
The reality is that humans don't just slavishly follow signs and road markings. We use context to tell us when we can, under certain circumstances, infer the presence of markings that we can't see (for example, a faded stop line at a normal light-controlled junction), or infer that a visible marking can be ignored (eg, the double white lines in that photo). A competent autonomous vehicle has to be able to do the same.
Why do you think an autonomous vehicle is not capable of making the same inference?
At the moment, none of them are.
I'm not disputing that they will, eventually, be capable of it. The question is how long it will take to get there.
And it doesn't just have to be able to obey signs and signals, both permanent and temporary, it also has to be able to obey instructions given by a police officer. If it can't do any of those without human input, then it isn't fully autonomous.
It seems you're only now considering obvious potential pitfalls that I'm
sure were considered several years ago, and have been resolved but
unfortunately without telling you.
They haven't been resolved yet. Current trials of autonomous vehicles are still based on tightly defined scenarios on routes already known to the vehicle. Being able to go anywhere, any time is still some way off.
They haven't been resolved yet. Current trials of autonomous vehicles are still based on tightly defined scenarios on routes already known to the vehicle. Being able to go anywhere, any time is still some way off.
I think you are way behind the times on that.
"New research shows that autonomous vehicles outperform human-driven ones.
Self-driving cars are generally safer than those that are driven by
humans, according to a new study."
https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows
On 30 Jan 2025 at 23:04:39 GMT, Norman Wells wrote:
They haven't been resolved yet. Current trials of autonomous vehicles are still based on tightly defined scenarios on routes already known to the vehicle. Being able to go anywhere, any time is still some way off.
I think you are way behind the times on that.
Not really. Even from your own link:
'The research found that autonomous or self-driving vehicles had lower accident rates than humans in work zones, traffic events, and “pre-accident movements” like slowing down and proceeding straight.
Where the self-driving cars struggled, however, was if they drove at sunrise or sunset on roads with many turns.'
"New research shows that autonomous vehicles outperform human-driven ones.
Self-driving cars are generally safer than those that are driven by
humans, according to a new study."
https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows
That said, I tend to agree with the headline - self-driving cars are safer. It's just that we seem to accept and live with human frailties such as tiredness, being drunk, stupidity and recklessness (for example) far more readily than computer error.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m02es8Fu9dcU2@mid.individual.net...
On 30/01/2025 21:38, Mark Goodge wrote:
On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
snip
It seems you're only now considering obvious potential pitfalls that I'm sure were considered several years ago, and have been resolved but unfortunately without telling you.
They haven't been resolved yet. Current trials of autonomous vehicles are still based on tightly defined scenarios on routes already known to the vehicle. Being able to go anywhere, any time is still some way off.
I think you are way behind the times on that.
"New research shows that autonomous vehicles outperform human-driven ones.
Self-driving cars are generally safer than those that are driven by humans, according
to a new study."
quote:
Where the self-driving cars struggled, however, was if they drove at sunrise or sunset on roads with many turns, the report, published in the journal Nature,
found.
That's because low light conditions confused the cars' sensors as did
the " complicated conditions" that come when a car is turning.
unquote
https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows
So they're OK driving in straight lines (as I believe has already been alluded to), but they have a bit of a problem with "more complicated" stuff; if, for instance, they have to turn corners, i.e. change their minds.
So here we have a robot which is supposedly behaving just like a human
but which is actually incapable of changing its mind.
That sounds strangely familiar, somehow.
So it's maybe no wonder that you yourself are so enthusiastic about
such projects.
That's not actually how they work. They monitor and assess all inputs from the sensors many hundreds of times a second, thus 'changing their minds' constantly, and adapting far quicker than any human to whatever transpires.
How very odd, luddite and irrational.
We rely absolutely on computers in so many aspects of life. It's very strange if we exclude driving with all the innate human frailties you list.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m03k1aF4mnkU3@mid.individual.net...
That's not actually how they work. They monitor and assess all inputs from the sensors many hundreds of times a second, thus 'changing their minds' constantly, and adapting far quicker than any human to whatever transpires.
Apologies for the complete change of direction Norman (it just happened to be you, as the thought just occurred to me), but assuming these cars can in fact be made safe to drive anywhere: what is to stop pedestrians just stepping out in front of these cars and bringing them to a complete halt, either to actually cross the road or just because they feel like it, and what are the likely consequences for journey times and traffic speeds?
billy bookcase wrote:
What is to stop pedestrians just stepping out in front of these cars and
bringing them to complete halt;either to actually cross the road or just
because they feel like it;
Well, nothing. Just as there isn't with a manually driven car.
On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
wrote:
In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
Jan 2025, JNugent <JNugent73@mail.com> remarked:
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x 10")
on wheels, carrying goods for delivery (at moderate speed) along the
footway in a university area.
There's a trial of those in Cambridge. A major problem is crossing
roads, because they can't reach the buttons on the Pelican Crossings.
I would not have thought that to be an insurmountable problem.
Another unrelated problem is that they ping the buyer from the pavement
outside what might be quite a big apartment block, and if they aren't
attended to within (iirc) ten minutes, take the shopping back *and still
charge for it*.
Genuine question:
I live in a detached house with front and back garden. When nobody is
at home, couriers usually drop parcels around to the back porch which
is not visible from the main road. What do they do at apartment blocks
with shared landings when nobody is at home? Do they just leave the
parcel at the front door, exposed to public view, even in a high-crime
area?
On 31/01/2025 10:56, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m03k1aF4mnkU3@mid.individual.net...
That's not actually how they work. They monitor and assess all inputs from the sensors many hundreds of times a second, thus 'changing their minds' constantly, and adapting far quicker than any human to whatever transpires.
Apologies for the complete change of direction Norman, (it just happened
to be you as the thought just occurred to me) but assuming these cars can
in fact be made safe to drive anywhere -
What is to stop pedestrians just stepping out in front of these cars and
bringing them to a complete halt, either to actually cross the road or just
because they feel like it?
Well, nothing. Just as there isn't with a manually driven car. It's not generally
allowed simply to plough into them however much they deserve it.
"Martin Harran" <martinharran@gmail.com> wrote in message news:v77ppjp4vfj9un6rhuceoq8i9m4fr07ne2@4ax.com...
On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
wrote:
In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
Jan 2025, JNugent <JNugent73@mail.com> remarked:
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x 10")
on wheels, carrying goods for delivery (at moderate speed) along the
footway in a university area.
There's a trial of those in Cambridge. A major problem is crossing
roads, because they can't reach the buttons on the Pelican Crossings.
I would not have thought that to be an insurmountable problem.
Presumably they won't be able to deliver any higher than the fourth
floor in most blocks of flats either.
On 31/01/2025 11:05, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:v77ppjp4vfj9un6rhuceoq8i9m4fr07ne2@4ax.com...
On Thu, 30 Jan 2025 12:44:23 +0000, Roland Perry <roland@perry.uk>
wrote:
In message <lvurdmFcfn8U3@mid.individual.net>, at 14:14:13 on Wed, 29
Jan 2025, JNugent <JNugent73@mail.com> remarked:
Recently in the USA, I was startled to see what I can only describe as
several small self-powered and self-directed boxes (c. 18" x 12" x 10")
on wheels, carrying goods for delivery (at moderate speed) along the
footway in a university area.
There's a trial of those in Cambridge. A major problem is crossing
roads, because they can't reach the buttons on the Pelican Crossings.
I would not have thought that to be an insurmountable problem.
Presumably they won't be able to deliver any higher than the fourth
floor in most blocks of flats either.
Would they try to deliver directly to the balcony (if present)? Do human deliverers
climb up the outside of blocks of flats?
--
Max Demian
billy bookcase wrote:
assuming these cars can
in fact be made safe to drive anywhere -
What is to stop pedestrians just stepping out in front of these cars and
bringing them to a complete halt, either to actually cross the road or just
because they feel like it?
In San Francisco it's been seen as a 'jolly jape' to place traffic cones
on the bonnet of Waymo and other types of self-driving cars for the past couple of years.
Also there are videos showing disorganised herds of autonomous cars,
creeping ultra-cautiously around parking lots, some trying to park,
others trying to get out, gently honking at each other and generally
getting nowhere.
Norman Wells wrote:
billy bookcase wrote:
What is to stop pedestrians just stepping out in front of these cars and
bringing them to a complete halt, either to actually cross the road or just
because they feel like it?
Well, nothing. Just as there isn't with a manually driven car.
It generally takes teenage-boy levels of bravado to run across fast-
moving traffic; once OAPs learn that autonomous cars will always stop,
wrinkly-chicken might become their new pastime ...
On 31/01/2025 08:23, Norman Wells wrote:
How very odd, luddite and irrational.
We rely absolutely on computers in so many aspects of life. It's very
strange if we exclude driving with all the innate human frailties you
list.
As someone who has made a good living with computers large & small, I
have a liking for them. I do NOT though like systems which are
unnecessary, which seek to control me, or are added as makeweights to
products. For example, I prefer to decide when to switch on my lights manually!
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m03sudF6h2iU1@mid.individual.net...
On 31/01/2025 10:56, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m03k1aF4mnkU3@mid.individual.net...
any human to whatever transpires.
That's not actually how they work. They monitor and assess all
inputs from the sensor many hundreds of times a second, thus
'changing their minds' constantly, and adapting far quicker than
Apologies for the complete change of direction Norman, (it just happened
to be you as the thought just occurred to me) but assuming these cars can
in fact be made safe to drive anywhere -
What is to stop pedestrians just stepping out in front of these cars and
bringing them to a complete halt, either to actually cross the road or just
because they feel like it?
Well, nothing. Just as there isn't with a manually driven car. It's not generally
allowed simply to plough into them however much they deserve it.
Except that people *wouldn't do that* with a manual car. Would they ?
*For all the very reasons you've given yourself.*
The human driver might be drunk, not paying attention, etc etc.
But with a robot car, according to you at least and your friends, they can be
*guaranteed* that no accidents can possibly happen.
So why wouldn't people just walk out in front of a robot car and bring it to a halt ?
For instance, if what you say is correct, robot cars are going
to put millions of human drivers out of a job. Taxi drivers for one.
So what's to stop all these redundant taxi drivers stepping out in front of robot
cars and refusing to move ?
Obviously the robots and their human "friends" have never even considered such a possibility.
On Fri, 31 Jan 2025 10:56:46 +0000, billy bookcase wrote:
What is to stop pedestrians just stepping out in front of these cars and
bringing them to a complete halt, either to actually cross the road or just
because they feel like it?
Or to rob/hijack it ?
On 31/01/2025 11:58, Les. Hayward wrote:
On 31/01/2025 08:23, Norman Wells wrote:
How very odd, luddite and irrational.
We rely absolutely on computers in so many aspects of life. It's
very strange if we exclude driving with all the innate human
frailties you list.
As someone who has made a good living with computers large & small, I
have a liking for them. I do NOT though like systems which are
unnecessary, which seek to control me, or are added as makeweights to
products. For example, I prefer to decide when to switch on my lights
manually!
Fair enough. You probably also like the tedium of driving (or
alcohol-free, risk-taking excitement, who knows?) to the relaxation of
being driven or on a train or plane. But many don't. And no-one will
be forcing you to do anything you don't want to.
Not so, I am afraid. Taking cars - it is getting more difficult to turn
off all the annoyances. Soon it will be impossible. Take bill paying &
parking - you are almost an outcast if you dare enter the world without
a mobile phone. As to trains - as a child of a railway family, I'd love
to. The trouble is that these days you need a second mortgage to afford
the fare - then there is the ruddy computerised and over-complicated
booking system...
But with a robot car , according to you at least and your friends*,
they can be *guaranteed* that no accidents can possibly happen
No they can't. Even an autonomous vehicle has kinetic energy that
must be dissipated by braking before it will come to a halt. And the
amount of braking that can be applied is limited by the coefficient
of friction between its tyres and the road surface. Otherwise, all
you get is a skid and not very much slowing at all. Step out in front
of one within the braking distance and you will inevitably be run over whether
it's an autonomous vehicle or not.
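[The friction-limited braking point above can be put into rough numbers: with deceleration capped at mu*g, the stopping distance is v^2 / (2*mu*g). A minimal sketch, assuming a typical dry-road friction coefficient of about 0.7; the function name and figures are illustrative, not from any real vehicle system.]

```python
def stopping_distance_m(speed_kmh: float, mu: float = 0.7, g: float = 9.81) -> float:
    """Friction-limited stopping distance d = v^2 / (2*mu*g), ignoring reaction time."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * v / (2 * mu * g)

# Even with zero reaction time, a car needs real road to stop:
for speed in (30, 50, 70):
    print(f"{speed} km/h: {stopping_distance_m(speed):.1f} m")
```

[Autonomy removes the human reaction-time component, but not this physical minimum; step out inside it and no controller can help.]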
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0466mF86flU1@mid.individual.net...
But with a robot car , according to you at least and your friends*,
they can be *guaranteed* that no accidents can possibly happen
No they can't. Even an autonomous vehicle has kinetic energy that
must be dissipated by braking before it will come to a halt. And the
amount of braking that can be applied is limited by the coefficient
of friction between its tyres and the road surface. Otherwise, all
you get is a skid and not very much slowing at all. Step out in front
of one within the braking distance and you will inevitably be run over whether
it's an autonomous vehicle or not.
Now you're simply obfuscating.
You've already admitted these cars are fitted with sensors, no doubt directed towards
all possible directions the car can travel. So that if something comes into its range
of "vision", say a small child walking across the pavement possibly about to run out
into the road, the car will slow down sufficiently, no doubt having taken into account
the coefficient of friction between its tyres and the road surface and any other
technical objections you want to come up with, so as to be able to come to a stop.
In fact just imagine the sorts of speeds that are going to be achievable by these
robot cars of yours driving past pavements chock-full of people.
You really can't have it both ways.
Either these cars are "safe", in which case they will come to a stop if a child runs
out into the road, or they're not.
As has been seen with all bike-sharing schemes - one main result of these is to
demonstrate just how many really stupid, thoughtless people there are around who
think nothing of blocking the pavements with the things, instead of parking them
responsibly only a few feet away.
And it will be much the same with robot cars, with thoughtless people now able
to walk out in front of them, along with angry taxi drivers and small children.
Presumably to be mown down by morons on bikes instead.
As has been suggested elsewhere, these things would be fine in dedicated lanes on motorways. Except people don't simply want to drive up and down motorways, do they? They need to get on and off the motorway for a start.
On 01/02/2025 08:52, billy bookcase wrote...
Either these cars are "safe" in which case they will come to a stop if a child
runs out into the road, or they're not
No-one claims that they will be absolutely, 100% safe. There's a balance to be struck
between doing anything and the risk of doing it.
They can and will injure or kill if someone steps out in front of one within its
braking distance. Just as with human driven vehicles. It can't be avoided.
The relevant comparison is which is the safer of the two.
So who will be held responsible if one of these robots of yours knocks over and kills a pedestrian ?
Anyone at all ?
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m06asnFin8sU1@mid.individual.net.
On 01/02/2025 08:52, billy bookcase wrote...
snippage
Either these cars are "safe" in which case they will come to a stop if a child
runs out into the road, or they're not
No-one claims that they will be absolutely, 100% safe. There's a balance to be struck
between doing anything and the risk of doing it.
They can and will injure or kill if someone steps out in front of one within its
braking distance. Just as with human driven vehicles. It can't be avoided.
The relevant comparison is which is the safer of the two.
At the moment if a human driver, one of these drunks or inattentive drivers of yours, knocks over and kills a pedestrian, they will be held responsible initially at least, and will be required to account for themselves in a
Court of Law.
So who will be held responsible if one of these robots of yours knocks over and kills a pedestrian ?
Anyone at all ?
Or will the expensive KC doubtless hired by your robot friends, simply stand up
in the inquest and explain that it was "just one of those things" as you're seeking
to argue here, and that overall robots are definitely safer than humans?
Genuine question:
I live in a detached house with front and back garden. When nobody is
at home, couriers usually drop parcels around to the back porch which
is not visible from the main road. What do they do at apartment blocks
with shared landings when nobody is at home? Do they just leave the
parcel at the front door, exposed to public view, even in a high-crime
area?
AFAICT, the answer is "Yes".
billy bookcase wrote:
So who will be held responsible if one of these robots of yours knocks over
and kills a pedestrian ?
<https://www.legislation.gov.uk/ukpga/2024/10/contents>
Sections 24 to 27 (they are not yet in force, but they show the intention).
Anyone at all ?
A nominated manager?
On 01/02/2025 13:12, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m06asnFin8sU1@mid.individual.net.
On 01/02/2025 08:52, billy bookcase wrote...
snippage
Either these cars are "safe" in which case they will come to a
stop if a child runs out into the road, or they're not
No-one claims that they will be absolutely, 100% safe. There's a
balance to be struck between doing anything and the risk of doing
it.
They can and will injure or kill if someone steps out in front of
one within its braking distance. Just as with human driven
vehicles. It can't be avoided.
The relevant comparison is which is the safer of the two.
At the moment if a human driver, one of these drunks or inattentive
drivers of yours, knocks over and kills a pedestrian, they will be
held responsible initially at least, and will be required to
account for themselves in a Court of Law.
So who will be held responsible if one of these robots of yours
knocks over and kills a pedestrian ?
Anyone at all ?
Or will the expensive KC doubtless hired by your robot friends,
simply stand up in the inquest and explain that it was "just one of
those things" as you're seeking to argue here, and that overall
robots are definitely safer than humans?
As always, it depends.
However, with the virtually instantaneous reaction time of an
autonomous vehicle, it will almost certainly be the pedestrian's
fault unless a malfunction can be established. Human error on the
part of a driver can obviously be eliminated from the outset.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m02es8Fu9dcU2@mid.individual.net...
On 30/01/2025 21:38, Mark Goodge wrote:
On Thu, 30 Jan 2025 20:00:59 +0000, Norman Wells
<hex@unseen.ac.am> wrote:
snip
It seems you're only now considering obvious potential pitfalls
that I'm sure were considered several years ago, and have been
resolved but unfortunately without telling you.
They haven't been resolved yet. Current trials of autonomous
vehicles are still based on tightly defined scenarios on routes
already known to the vehicle. Being able to go anywhere, any time
is still some way off.
I think you are way behind the times on that.
"New research shows that autonomous vehicles outperform
human-driven ones.
Self-driving cars are generally safer than those that are driven by
humans, according to a new study."
quote:
Where the self-driving cars struggled, however, was if they drove at
sunrise or sunset on roads with many turns, the report, published in
the journal Nature, found. That's because low light conditions
confused the cars' sensors as did the " complicated conditions" that
come when a car is turning.
unquote
https://www.euronews.com/next/2024/06/20/self-driving-cars-are-generally-safer-than-human-driven-ones-research-shows
So they're OK driving in straight lines (as I believe has already been
alluded to) but they have a bit of a problem with "more complicated"
stuff; if for instance they have, say, to turn corners, i.e. change
their minds.
So here we have a robot which is supposedly behaving just like a human
but which is actually incapable of changing its mind.
That sounds strangely familiar, somehow.
So it's maybe no wonder that you yourself are so enthusiastic about
such projects.
bb
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed limit 'on
school days only'?
On 01/02/2025 10:21, Norman Wells wrote:
They can and will injure or kill if someone steps out in front of one
within its braking distance. Just as with human driven vehicles. It
can't be avoided.
Yet a /good/ human driver will be continuously assessing the likelihood
of someone stepping out(*), and taking pro-active steps to prevent an
accident. Will (can?) a robot car do the same?
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper database of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper
data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
I will investigate in case it
is a feature that is available but not activated. But you are assuming
that all States and small 'burgs' in the US follow the same speed limit reporting processes that we do. Do they?
From: Norman Wells <hex@unseen.ac.am>
Subject: Re: Should road rules depend on the car?
Date: Mon, 3 Feb 2025 09:36:45 +0000
User-Agent: Mozilla Thunderbird
Newsgroups: uk.legal.moderated
Organization: SGO
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
It's trivial programming to ensure that limits are not exceeded by any autonomous
vehicle, which is certainly not the case with human drivers, even those who try to obey
them.
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated again.
All other software up to date.
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as
those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you
are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple AirPlay, and it *usually* shows the speed limit. Sometimes the speed limit it shows
is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the
current time, even for simple fixed speed limits - let alone anything
dynamic or complex.
On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as
those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you
are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple AirPlay, and it
*usually* shows the speed limit. Sometimes the speed limit it shows
is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the
current time, even for simple fixed speed limits - let alone anything
dynamic or complex.
I have asked BMW the question, but I think I know the answer, despite Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
billy bookcase wrote:
Martin Harran wrote:
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autopilots, where each set
of inputs, no matter how complex, will produce exactly the same result
every time.
With the sheer volume of inputs into an autopilot (location, speed, multiple cameras, bugs on the windscreen, lidar, clouds, weather, lighting, pedestrians,
cyclists, other vehicles) do you think you could *ever* duplicate the inputs?
Butterfly wings anyone?
On 03/02/2025 11:20 AM, Davey wrote:
On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as
those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you
are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple AirPlay, and it
*usually* shows the speed limit. Sometimes the speed limit it shows
is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the
current time, even for simple fixed speed limits - let alone anything
dynamic or complex.
I have asked BMW the question, but I think I know the answer, despite
Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
Mine is just too old for that (January 2014).
I use my iPhone with Apple Maps, so that I can make use of spoken
commands to Siri.
I also have a Garmin I bought in the USA, though I only use it if I
don't want to waste my phone data package.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0bh1eFdablU2@mid.individual.net...
It's trivial programming to ensure that limits are not exceeded by any autonomous
vehicle, which is certainly not the case with human drivers, even those who try to obey
them.
How about the humans who were supposed to put up the signs but
somehow forgot ?
What do you, and your robot friends, intend to do about them ?
"Martin Harran" <martinharran@gmail.com> wrote in message news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autopilots, where each set
of inputs, no matter how complex, will produce exactly the same result
every time.
AI systems on the other hand learn and make up new rules as they go along,
and so the process by which they actually arrive at particular decisions
may not be readily available to humans.
I think this is possibly what a lot of people are really worried about.
On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as
those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you
are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple CarPlay, and it
*usually* shows the speed limit. Sometimes the speed limit it shows
is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the
current time, even for simple fixed speed limits - let alone anything
dynamic or complex.
I have asked BMW the question, but I think I know the answer, despite Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
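The "proper data-base of numbers and times" for conditional limits like "school days only" could be sketched as a per-road rule table plus a calendar check. Everything here is invented for illustration (road names, schema, and the crude weekday test standing in for a real term calendar):

```python
from datetime import datetime

# Hypothetical rules: normal limit (mph), school-day limit, school hours.
RULES = {
    "Main St": {"normal": 30, "school": 20, "hours": (8, 16)},
    "A168":    {"normal": 60, "school": None, "hours": None},
}

def is_school_day(when: datetime) -> bool:
    # Placeholder: weekdays only. Real data would need the term calendar.
    return when.weekday() < 5

def limit_for(road: str, when: datetime) -> int:
    """Return the limit in force on `road` at the given moment."""
    rule = RULES[road]
    if rule["school"] is not None and is_school_day(when):
        start, end = rule["hours"]
        if start <= when.hour < end:
            return rule["school"]
    return rule["normal"]

print(limit_for("Main St", datetime(2025, 2, 3, 9)))  # Monday 09:00 -> 20
print(limit_for("Main St", datetime(2025, 2, 2, 9)))  # Sunday -> 30
```

The lookup itself is trivial; keeping the table correct and current is the part the rest of the thread casts doubt on.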
On 03/02/2025 11:20, Davey wrote:
I have asked BMW the question, but I think I know the answer, despite
Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
I doubt if any BMWs have built-in satnavs that show the speed limit
unless things have changed in the last 3 years or so.
My son always tells me not to rely on the BMW satnav anyway because it
is always too optimistic about arrival time and does not always divert
from roads that are temporarily congested.
The WAZE satnav is very good. The speed limit is displayed but too small
to be easily visible between the driver and the dashboard. One drawback
seems to be, in my experience, that with WAZE there is a short lag
between where you are and what is displayed. So the satnav might tell
you that you should have made your right turn a few seconds ago.
A new Honda Jazz has a built-in Garmin satnav that does show the speed
limit. And separately the car has a camera that looks for speed limit
signs in the road and displays them separately on the dashboard - the
two figures not always agreeing.
On 03/02/2025 11:22, JNugent wrote:
I also have a Garmin I bought in the USA, though I only use it if I
don't want to waste my phone data package.
You don't have to. Unless you want up-to-the-minute traffic
information, satnavs work perfectly well on their own for routing
without any internet connection.
"Martin Harran" <martinharran@gmail.com> wrote in message news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autpilots. Where the each set
of inputs, no matter how complex, will produce exactly the same result
every time.
AI systems on the other hand, learn and make up new rules as they go along. And so the process by which they actually arrive at particular decisions
may not be readily available to humans.
I think this is possibly what a lot of peopel are really worried about,
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
The basic Android SatNavs with traffic information include things
like roadworks and speed limits.
I'm genuinely surprised that BMW doesn't provide the same information
with live traffic. I rely on mine for speed limits and avoiding jams.
If your console can run Android then I would ask questions about
alternatives with live updates and traffic information. Otherwise I
would consider replacing it.
On 3 Feb 2025 at 12:29:22 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:
On 03/02/2025 11:22, JNugent wrote:
I also have a Garmin I bought in the USA, though I only use it if I
don't want to waste my phone data package.
You don't have to. Unless you want up-to-the-minute traffic
information, satnavs work perfectly well on their own for routing
without any internet connection.
On my phone Google maps has totally insufficient map data for a long
journey without an Internet connection.
I have no idea what it runs, although I see lots of Linux terms in
the software listings. I ask it to do things, and it usually does them.
I certainly have no intention of replacing it, either the Satnav or the
car, I have managed so far without my car telling me what the speed
limit is, as I can read road signs myself.
On 2025-02-03, Roger Hayter <roger@hayter.org> wrote:
On my phone Google maps has totally insufficient map data for a long
journey without an Internet connection.
With a location selected, pull up the bottom panel that gives
information about it, scroll the buttons (that start "Directions",
"Start", etc) to the left, select "Download offline map".
With the sheer volume of inputs into an autopilot (location, speed,
multiple cameras, bugs on the windscreen, lidar, clouds, weather,
lighting, pedestrians, cyclists, other vehicles) do you think you
could *ever* duplicate the inputs?
It's purely hypothetical. If two sets of inputs were identical then
the outputs would be identical, as all the data is numerical and
subject to exactly the same set of calculations.
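The determinism claim is easy to illustrate: a conventional control law is a pure function of its numerical inputs, so byte-identical inputs necessarily yield identical outputs every time. A toy sketch (the braking rule itself is invented, not from any real system):

```python
def brake_command(gap_m: float, closing_mps: float) -> float:
    """Toy automatic-braking rule: brake harder as time-to-collision shrinks.

    Returns a brake demand between 0.0 (none) and 1.0 (full).
    """
    if closing_mps <= 0:          # not closing on the vehicle ahead
        return 0.0
    ttc = gap_m / closing_mps     # time to collision, in seconds
    return max(0.0, min(1.0, (3.0 - ttc) / 3.0))

inputs = (12.5, 5.0)
# Identical inputs produce identical outputs, every single run:
assert brake_command(*inputs) == brake_command(*inputs)
```

Which is precisely why the point above is hypothetical: the determinism is real, but in live traffic you could never feed the system the same inputs twice.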
Davey wrote:
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
My 2016 car's built-in satnav displays speeds from a database (several
years out of date)
I tend to use google maps on my phone "projected" onto the car's screen
by android auto, which gets speed limits from google's (mostly) up to
date database.
I have driven a recent BMW which got the speed limit from reading the
signs.
The standalone Garmin satnav I have seems remarkably accurate as regards speed limits, far more so in fact than I am. The number of times it has correctly identified that I have been exceeding one is vastly greater
than the number where it has been wrong, which is usually only when
there are temporary speed restrictions on motorways.
On 3 Feb 2025 at 14:03:39 GMT, "Jon Ribbens" <jon+usenet@unequivocal.eu> wrote:
With a location selected, pull up the bottom panel that gives
information about it, scroll the buttons (that start "Directions",
"Start", etc) to the left, select "Download offline map".
My phone has insufficient memory.
On 03/02/2025 08:09, Norman Wells wrote:
Does your satnav not tell you the speed limit wherever you are?
Yes, but it doesn't always get it right in tricky edge cases:
Asymmetric speed limits on dual carriageways when one direction is under speed restrictions due to roadworks, crash recovery or resurfacing.
30mph local roads crossing motorway bridges where derestricted applies.
Mostly it gets it right, but sometimes it has the car on a road parallel
with the road it is actually on. This can be an issue where a motorway
runs parallel and close to the old road, as happens at RAF Dishforth
with the A1(M) and A168. Civilian GPS is only good to ~10m and it
sometimes guesses wrong.
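The parallel-road problem described above is a map-matching failure: with ~10 m of GPS error, snapping a fix to the nearest road centre-line can pick the wrong one of two adjacent carriageways. A one-dimensional toy sketch (the 15 m spacing and offsets are invented for illustration):

```python
# Two parallel carriageways modelled by their lateral offsets in metres,
# e.g. a motorway running 15 m from the old road it superseded.
ROADS = {"A1(M)": 0.0, "A168": 15.0}

def snap(fix_offset_m: float) -> str:
    """Map-match a GPS fix to whichever road centre-line is nearest."""
    return min(ROADS, key=lambda road: abs(ROADS[road] - fix_offset_m))

# True position: on the A1(M). A 10 m GPS error moves the fix to 10.0,
# which is nearer the A168's centre-line, so the satnav draws the car
# on the wrong road -- and reports that road's speed limit.
print(snap(0.0))   # -> A1(M)
print(snap(10.0))  # -> A168 (wrong)
```

Real map-matchers also weigh heading and route continuity, but the underlying ambiguity is exactly this one.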
On 3 Feb 2025 at 11:20:03 GMT, "Davey" <davey@example.invalid> wrote:
My car reads speed limit signs and also interprets street lamp spacing; the latter is often fooled by telegraph poles, and of course by default 20mph limits.
The satnav has no speed limit database.
On 03/02/2025 13:41, Davey wrote:
I have no idea what it runs, although I see lots of Linux terms in
the software listings. I ask it to do things, and it usually does them.
I certainly have no intention of replacing it, either the Satnav or the
car, I have managed so far without my car telling me what the speed
limit is, as I can read road signs myself.
Of course. Sometimes.
My experience is that my satnav knows rather better than I do what the
limit is at any one location. And that's not for the want of trying.
If you say you're fully aware of the correct speed limit everywhere you
drive all of the time, I will frankly tend not to believe you. Maybe a
journey with a satnav that does know could be revelatory.
On 03/02/2025 10:07, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autpilots. Where the each set
of inputs, no matter how complex, will produce exactly the same result
every time.
AI systems on the other hand, learn and make up new rules as they go along. >> And so the process by which they actually arrive at particular decisions
may not be readily available to humans.
I think this is possibly what a lot of peopel are really worried about,
AI is not part of autonomous vehicle control, and includes none of the
things Mr Harran identified above. It is simply automation according to
pre-determined rules.
On Mon, 03 Feb 2025 13:53:59 +0000, Andy Burns wrote:
Davey wrote:
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
My 2016 car's built-in satnav displays speeds from a database (several
years out of date)
I have never managed to use my in-car sat nav for anything. I can't begin
to describe how useless it is. Why manufacturers continue is beyond me.
On 03/02/2025 12:26, Norman Wells wrote:
Interesting. Mine makes mistakes on some stretches of road, not
altogether consistently, I sometimes feel.
All this assumes the satnav has you on the right road. Mine has very
occasionally shown me on a different road (or, once, decided I was
driving through a nearby wooded area) some distance from where I really
was, with an obvious knock-on effect on the rest of the display. I can't
imagine what a self-drive car would make of that.
On 03/02/2025 14:32, Norman Wells wrote:
If you say you're fully aware of the correct speed limit everywhere you
drive all of the time, I will frankly tend not to believe you.
Maybe every driver should be obliged to take a Speed Awareness Course
periodically, to refresh their memory of which limits apply to which
types of road.
On 2025-02-03, Jethro_uk <jethro_uk@hotmailbin.com> wrote:
On Mon, 03 Feb 2025 13:53:59 +0000, Andy Burns wrote:
I have never managed to use my in-car sat nav for anything. I can't
begin to describe how useless it is. Why manufacturers continue is
beyond me
I would imagine it's because it adds a tick to their feature list, and
it costs them very little or nothing to provide (the screen and the
computer behind it would be there anyway), and a small percentage of customers presumably do pay for the exorbitant subscriptions for map
updates thus providing a bit of extra income.
Actually I had no idea of the range of things you can subscribe to for
extra money with cars these days... cruise control with radar is
apparently £35/month, automatic parking is £19/month, automatic
headlight dipping is £10/month, etc. You can even pay £99 to play
"engine sounds" inside the cabin!? Satnav map upates cost £79/year. And
all of these features I think are actually built into the car, but
disabled unless you pay the extra ongoing monthly money. I think
capitalism is broken.
https://www.bmw.co.uk/en/shop/ls/cp/connected-drive
If there are no signs there's nothing you need to obey. If the
autonomous vehicle thinks there are restrictions in place, it will abide
by them whether or not there are signs in the interests of safety.
On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote:
AI is not part of autonomous vehicle control, and includes none of the
things Mr Harran identified above. It is simply automation according to
pre-determined rules.
But that's precisely the problem. It's impossible to have a predetermined rule for every possible scenario.
And dealing with scenarios that you don't
have a rule for requires intelligence.
On Mon, 03 Feb 2025 16:20:10 +0000, Jon Ribbens wrote:
That's if you want a BMW.
On 03/02/2025 14:39, Martin Brown wrote:
Mostly it gets it right but sometimes it has the car on a road
parallel with the road it is actually on. Can be an issue with
motorway parallel and close to old road as happens at RAF Dishforth
with A1(M) and A168. Civilian GPS is only good to ~10m and it
sometimes guesses wrong.
Actually:
"Most consumer-grade GPS devices, when given an unobstructed view of the
sky, can pinpoint your location with an accuracy of about 3 to 5 meters
(10 to 16 feet)".
https://mapscaping.com/how-accurate-is-gps/
On 03/02/2025 15:43, Mark Goodge wrote:
On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote:
AI is not part of autonomous vehicle control, and includes none of the
things Mr Harran identified above. It is simply automation according to
pre-determined rules.
But that's precisely the problem. It's impossible to have a predetermined
rule for every possible scenario.
By and large, I think it is. It's why we have the Highway Code for example.
And dealing with scenarios that you don't
have a rule for requires intelligence.
If you have enough rules, it will look like intelligence, which is
after all just the application of logic, i.e. rules. But it isn't true
intelligence unless some surprising new rule seems to have been
developed and be in play. The times when that may be necessary,
however, are few and far between.
And even then it's arguable whether it is for example 'intelligent' or
actually unintelligent to pass through a failed traffic light stuck on
red.
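The point that a large enough rule set looks like intelligence until an uncovered case turns up is essentially the dispatch-with-fallback pattern. A sketch (the scenario names and responses are invented for illustration):

```python
# Pre-determined responses for every anticipated scenario.
RULES = {
    "red_light": "stop",
    "green_light": "proceed",
    "pedestrian_in_road": "stop",
    "speed_limit_sign": "adjust_speed",
}

def decide(scenario: str) -> str:
    # Anything not covered falls through to a conservative default
    # rather than a newly invented rule -- which is where the failed
    # light stuck on red becomes awkward, since the "safe" default
    # here means waiting at it forever.
    return RULES.get(scenario, "stop_and_wait")

print(decide("red_light"))           # -> stop
print(decide("light_stuck_on_red"))  # -> stop_and_wait (no rule covers it)
```

The table can be made arbitrarily large, but the fallback line is where rule-following and genuine judgement part company.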
On Mon, 03 Feb 2025 15:32:13 +0000, The Todal wrote:
Maybe every driver should be obliged to take a Speed Awareness Course
periodically. To refresh their memory of which limits apply to which
types of road
And a general catch up with the highway code.
If a course is £100 then £10/year is a snip to keep your licence.
Having thoroughly benefited from one myself after being *extremely*
cynical, I can't recommend them enough.
The last
piece of the jigsaw will be average speed cameras over the 20 mph zones.
Before the start, the bloke sitting next to me started complaining
about how unfair it all was. I explained to him that you just needed to
go through the motions, it was only half a day, it saved the points on
your licence and that most of it was probably a load of old b*llocks
anyway.
On 03/02/2025 11:22, JNugent wrote:
On 03/02/2025 11:20 AM, Davey wrote:
On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as >>>>>>>>> those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a >>>>>>>> proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you
are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple AirPlay, and it
*usually* shows the speed limit. Sometimes the speed limit it shows
is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the
current time, even for simple fixed speed limits - let alone anything
dynamic or complex.
I have asked BMW the question, but I think I know the answer, despite
Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
Mine is just too old for that (January 2014).
I use my iPhone with Apple Maps, so that I can make use of spoken
commands to Siri.
I also have a Garmin I bought in the USA, though I only use it if I
don't want to waste my phone data package.
You don't have to. Unless you want up-to-the-minute traffic
information, satnavs work perfectly well on their own for routing
without any internet connection.
On 03/02/2025 12:29 PM, Norman Wells wrote:
On 03/02/2025 11:22, JNugent wrote:
On 03/02/2025 11:20 AM, Davey wrote:
On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you
are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple AirPlay, and it *usually* shows the speed limit. Sometimes the speed limit it shows
is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the
current time, even for simple fixed speed limits - let alone anything dynamic or complex.
I have asked BMW the question, but I think I know the answer, despite
Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
Mine is just too old for that (January 2014).
I use my iPhone with Apple Maps, so that I can make use of spoken
commands to Siri.
I also have a Garmin I bought in the USA, though I only use it if I
don't want to waste my phone data package.
You don't have to. Unless you want up-to-the-minute traffic
information, satnavs work perfectly well on their own for routing
without any internet connection.
The Garmin doesn't have European maps on it.
On Mon, 3 Feb 2025 17:06:32 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 15:43, Mark Goodge wrote:
On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am> wrote:
AI is not part of autonomous vehicle control, and includes none of the things Mr Harran identified above. It is simply automation according to pre-determined rules.
But that's precisely the problem. It's impossible to have a predetermined rule for every possible scenario.
By and large, I think it is. It's why we have the Highway Code for example.
But even the Highway Code distinguishes between mandatory rules, which you are not permitted to break, and recommended rules, which can be disregarded if circumstances are such that doing so is both safe and beneficial. So the Highway Code, alone, won't be enough; you also need a set of rules for when you can break the non-mandatory rules.
And dealing with scenarios that you don't
have a rule for requires intelligence.
If you have enough rules, it will look like intelligence, which is after
all just the application of logic, ie rules. But it isn't true
intelligence unless some surprising new rule seems to have been
developed and be in play. The times when that may be necessary,
however, are few and far between.
And even then it's arguable whether it is for example 'intelligent' or
actually unintelligent to pass through a failed traffic light stuck on red.
I was thinking of cases where there's no need to break the law, merely a
need to make decisions based on information which cannot be preset. Let me give a couple of examples from my real life experience.
The first happened when I was a passenger, not the driver. We'd been to a
gig at the Etihad Stadium in Manchester, and my friend was following his sat-nav back to my house in Stoke-on-Trent so that he could drop me off and then continue home. However, as we arrived at J19 of the M6 from the A565, the slip road onto the M6 southbound (which his sat-nav was, correctly, telling him to take) was blocked by a police car with flashing blue and red lights. There was no indication of why it was blocked, and no signs giving any diversionary route. My assumption was that there had been an accident of some sort, but that was pure speculation.
After driving round the entire roundabout twice (I have no idea why, maybe
he was hoping the police car would magically disappear the second time around), he eventually asked me what to do. I directed him via a different exit onto the A50, and once we were on that heading south I told him to just ignore the sat-nav until it stopped telling him to turn around (we
eventually joined the M6 at the next junction down).
Had I not been in the car, I have no idea how he would have got out of that. If I was ever in similar circumstances, and didn't have the benefit of a passenger with local knowledge, I think I'd either find somewhere to pull
over and look at an actual map, or just take what looks like the next best option and keep going in vaguely the right direction until my sat-nav picked up a new route.
How do you think an autonomous vehicle would cope with that, when the route given by its navigation software is blocked and there's no data available to suggest an alternative?
The second example also involves the police, but this time on a minor road.
I was driving home late one night from the evening shift at work, and as i approached a junction there was a police car parked at it. It wasn't
blocking the road, this time, but as I reached the junction a policeman
stood in front of me and gave the official symbol for "stop". So I stopped, and then the policeman walked over to my car and gave the entirely
unofficial but yet universally understood symbol for "wind your window
down". Which, again, I did.
I'd assumed I'd been stopped for a random breath test (this was mid-December), although given that I'd been at work rather than a party or the pub that didn't worry me. But he asked me where I was going, and, when I told him, replied that "You won't get through that way, there's been an accident and the road's blocked".
He advised me to go back to the motorway, go one junction further down and reach my town via a different approach road. In reality, he'd given me
enough information in the course of the conversation to make a reasonable guess where the accident was, and therefore that I could get past it by a series of back roads rather than going the long way round. So I did, and got home quicker than I would have done had I taken his well-meaning, but unnecessary, advice.
How do you think an autonomous vehicle would cope with being stopped by the police, and given routing information by the police which conflicted with
its navigational data?
On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 03/02/2025 08:09, Norman Wells wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed limit 'on
school days only'?
Just as they do with any legally imposed speed limits. With a proper
data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
Yes, but it doesn't always get it right in tricky edge cases:
Asymmetric speed limits on dual carriageways when one direction is under
speed restrictions due to roadworks, crash recovery or resurfacing.
30mph local roads crossing motorway bridges where derestricted applies.
Junctions with roads that have different limits can cause problems, too.
Mostly it gets it right but sometimes it has the car on a road parallel
with the road it is actually on. Can be an issue with motorway parallel
and close to old road as happens at RAF Dishforth with A1(M) and A168.
Civilian GPS is only good to ~10m and it sometimes guesses wrong.
There's also the problem that most minor roads only exist in the database as lines and nodes rather than areas. So what we, on the ground, see as this:
---------------------------------------------------------
-----------------------+ +-----------------------------
| |
| |
| |
| |
(that is, a road with a side-road meeting it at a t-junction)
will be in the database as simply this:
-------------------------+----------------------------
|
|
|
That's not a general problem for navigation, or, indeed, speed limits. But
if the side road has a different limit to the main road, then a car using sat-nav to determine the speed limit as it proceeds along the main road will encounter a node where two different limits intersect. How sat-navs interpret that is not consistent.
That's observable by some drivers in my town. If you're driving along this road, which has the normal urban 30mph limit:
https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8
then as you can see in the image, there's a side road with a 20mph limit. Some drivers report that, even if they are proceeding along the main road, and have no intention of turning, their sat-nav nonetheless tells them to
slow down to 20mph as they pass the junction.
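The node-and-edge ambiguity described above can be sketched in a few lines. This is purely illustrative: the edge dictionary, node names and limits are invented for the example, not taken from any real map format.

```python
# Roads stored as edges between nodes, as in the diagram above.
# The speed limit belongs to the edge, but a naive lookup keyed on
# the nearest node cannot tell the through-road from the side road.

# Edges: (node_a, node_b) -> limit in mph (illustrative data only)
EDGES = {
    ("W", "J"): 30,   # main road, west of the junction node J
    ("J", "E"): 30,   # main road, east of the junction node J
    ("J", "S"): 20,   # side road joining at node J
}

def limits_at_node(node):
    """All limits on edges touching a node - ambiguous at junctions."""
    return sorted({v for (a, b), v in EDGES.items() if node in (a, b)})

def limit_on_edge(prev_node, next_node):
    """Unambiguous: look up the edge actually being travelled."""
    return EDGES.get((prev_node, next_node)) or EDGES.get((next_node, prev_node))

print(limits_at_node("J"))      # [20, 30] - two candidate limits at the node
print(limit_on_edge("W", "J"))  # 30 - edge-based lookup resolves it
```

A sat-nav that matches the fix to the nearest node sees both limits; one that tracks which edge the vehicle is traversing does not, which may be why behaviour differs between devices.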
On 03/02/2025 20:08, JNugent wrote:
On 03/02/2025 12:29 PM, Norman Wells wrote:
On 03/02/2025 11:22, JNugent wrote:
On 03/02/2025 11:20 AM, Davey wrote:
On Mon, 3 Feb 2025 10:59:25 -0000 (UTC)
Jon Ribbens <jon+usenet@unequivocal.eu> wrote:
On 2025-02-03, Davey <davey@example.invalid> wrote:
On Mon, 3 Feb 2025 09:36:45 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a
lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
Built in 2017, SatNav map updated last year, soon to be updated
again. All other software up to date.
I use Google Maps from my phone in the car via Apple AirPlay, and it *usually* shows the speed limit. Sometimes the speed limit it shows is the one that's actually in force.
"The car knowing the speed limit" demonstrably does not work at the current time, even for simple fixed speed limits - let alone anything dynamic or complex.
I have asked BMW the question, but I think I know the answer, despite Norman's amazement. I will report their reply.
How many folks here have cars whose built-in Satnav displays speed
limits, and when were they built?
Mine is just too old for that (January 2014).
I use my iPhone with Apple Maps, so that I can make use of spoken
commands to Siri.
I also have a Garmin I bought in the USA, though I only use it if I
don't want to waste my phone data package.
You don't have to. Unless you want up-to-the-minute traffic
information, satnavs work perfectly well on their own for routing
without any internet connection.
The Garmin doesn't have European maps on it.
So what?
But it can if you pay.
https://activegps.co.uk/garmin-city-navigator-nt-europe-sd.htm#compatible
On 03/02/2025 16:15, Mark Goodge wrote:
On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 03/02/2025 08:09, Norman Wells wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
Yes, but it doesn't always get it right in tricky edge cases:
Asymmetric speed limits on dual carriageways when one direction is
under speed restrictions due to roadworks, crash recovery or
resurfacing.
30mph local roads crossing motorway bridges where derestricted
applies.
Junctions with roads that have different limits can cause problems,
too.
Mostly it gets it right but sometimes it has the car on a road
parallel with the road it is actually on. Can be an issue with
motorway parallel and close to old road as happens at RAF Dishforth
with A1(M) and A168. Civilian GPS is only good to ~10m and it
sometimes guesses wrong.
There's also the problem that most minor roads only exist in the
database as lines and nodes rather than areas. So what we, on the
ground, see as this:
---------------------------------------------------------
-----------------------+ +-----------------------------
| |
| |
| |
| |
(that is, a road with a side-road meeting it at a t-junction)
will be in the database as simply this:
-------------------------+----------------------------
|
|
|
That's not a general problem for navigation, or, indeed, speed limits.
But if the side road has a different limit to the main road, then a car
using sat-nav to determine the speed limit as it proceeds along the
main road will encounter a node where two different limits intersect.
How sat-navs interpret that is not consistent.
That's observable by some drivers in my town. If you're driving along
this road, which has the normal urban 30mph limit:
https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8
then as you can see in the image, there's a side road with a 20mph
limit. Some drivers report that, even if they are proceeding along the
main road, and have no intention of turning, their sat-nav nonetheless
tells them to slow down to 20mph as they pass the junction.
If what you say is true, and it's not something I've experienced so I
have my doubts, it's a minor problem easily solved. Satnavs are
perfectly capable of determining which road you're on and the applicable speed limit.
On 04/02/2025 09:44, Jethro_uk wrote:
On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:
On 03/02/2025 16:15, Mark Goodge wrote:
On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 03/02/2025 08:09, Norman Wells wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
Yes, but it doesn't always get it right in tricky edge cases:
Asymmetric speed limits on dual carriageways when one direction is
under speed restrictions due to roadworks, crash recovery or
resurfacing.
30mph local roads crossing motorway bridges where derestricted
applies.
Junctions with roads that have different limits can cause problems,
too.
Mostly it gets it right but sometimes it has the car on a road
parallel with the road it is actually on. Can be an issue with
motorway parallel and close to old road as happens at RAF Dishforth
with A1(M) and A168. Civilian GPS is only good to ~10m and it
sometimes guesses wrong.
There's also the problem that most minor roads only exist in the
database as lines and nodes rather than areas. So what we, on the
ground, see as this:
---------------------------------------------------------
-----------------------+ +-----------------------------
| |
| |
| |
| |
(that is, a road with a side-road meeting it at a t-junction)
will be in the database as simply this:
-------------------------+----------------------------
|
|
|
That's not a general problem for navigation, or, indeed, speed
limits. But if the side road has a different limit to the main road,
then a car using sat-nav to determine the speed limit as it proceeds
along the main road will encounter a node where two different limits
intersect. How sat-navs interpret that is not consistent.
That's observable by some drivers in my town. If you're driving along
this road, which has the normal urban 30mph limit:
https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8
then as you can see in the image, there's a side road with a 20mph
limit. Some drivers report that, even if they are proceeding along
the main road, and have no intention of turning, their sat-nav
nonetheless tells them to slow down to 20mph as they pass the
junction.
If what you say is true, and it's not something I've experienced so I
have my doubts, it's a minor problem easily solved. Satnavs are
perfectly capable of determining which road you're on and the
applicable speed limit.
Are they ?
I have already said there are cases where roads cross in space (over/
under) and lacking the ability to determine altitude (plus the required
history to also determine route) means GPS based speed limit
determination is flawed.
One solution would be that a journey is tracked as a series of linked
points, rather than just an individual spot reading in time. However
that introduces a level of complexity that you are not going to get for
the sum you paid for your sat nav. Dedicated or not.
A poor man's solution would be: given a choice of two possible speed
limits for a given lat/long (because you haven't specified altitude)
then the algorithm should select the one [closest] that matches the
last one. However in an age where everything is frameworks and apps and
AI written,
sensible code is a luxury we can't afford.
I think all you're demonstrating is that with the array of sensors such
as autonomous vehicles have, presumably including one telling it which direction it's facing, it's quite simple to determine which road it's on
and what the speed limit is going the way it is.
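The heading-based disambiguation being claimed here can be sketched as follows. Everything in it (road names, bearings, the matching rule) is invented for illustration; real map-matching is considerably more involved.

```python
# Toy heading-based map matching: given candidate roads near a GPS fix,
# prefer the one whose bearing best matches the vehicle's compass heading.

def angle_diff(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def match_road(heading, candidates):
    """candidates: list of (name, bearing_degrees, limit_mph) tuples."""
    return min(candidates, key=lambda road: angle_diff(heading, road[1]))

# A 30mph local road crossing a motorway bridge: position alone is
# ambiguous, but the roads run at right angles, so heading decides.
candidates = [
    ("motorway", 0, 70),      # runs roughly north-south
    ("bridge road", 90, 30),  # crosses it roughly east-west
]
print(match_road(85, candidates))  # ('bridge road', 90, 30)
```

The caveat is the one Jethro_uk raised: for near-parallel roads like the A1(M)/A168 case, the bearings are almost identical and heading alone cannot separate them.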
On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:
On 03/02/2025 16:15, Mark Goodge wrote:
On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 03/02/2025 08:09, Norman Wells wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
Yes, but it doesn't always get it right in tricky edge cases:
Asymmetric speed limits on dual carriageways when one direction is
under speed restrictions due to roadworks, crash recovery or
resurfacing.
30mph local roads crossing motorway bridges where derestricted
applies.
Junctions with roads that have different limits can cause problems,
too.
Mostly it gets it right but sometimes it has the car on a road
parallel with the road it is actually on. Can be an issue with
motorway parallel and close to old road as happens at RAF Dishforth
with A1(M) and A168. Civilian GPS is only good to ~10m and it
sometimes guesses wrong.
There's also the problem that most minor roads only exist in the
database as lines and nodes rather than areas. So what we, on the
ground, see as this:
---------------------------------------------------------
-----------------------+ +-----------------------------
| |
| |
| |
| |
(that is, a road with a side-road meeting it at a t-junction)
will be in the database as simply this:
-------------------------+----------------------------
|
|
|
That's not a general problem for navigation, or, indeed, speed limits.
But if the side road has a different limit to the main road, then a car
using sat-nav to determine the speed limit as it proceeds along the
main road will encounter a node where two different limits intersect.
How sat-navs interpret that is not consistent.
That's observable by some drivers in my town. If you're driving along
this road, which has the normal urban 30mph limit:
https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8
then as you can see in the image, there's a side road with a 20mph
limit. Some drivers report that, even if they are proceeding along the
main road, and have no intention of turning, their sat-nav nonetheless
tells them to slow down to 20mph as they pass the junction.
If what you say is true, and it's not something I've experienced so I
have my doubts, it's a minor problem easily solved. Satnavs are
perfectly capable of determining which road you're on and the applicable
speed limit.
Are they ?
I have already said there are cases where roads cross in space (over/
under) and lacking the ability to determine altitude (plus the required history to also determine route) means GPS based speed limit
determination is flawed.
One solution would be that a journey is tracked as a series of linked
points, rather than just an individual spot reading in time. However that introduces a level of complexity that you are not going to get for the
sum you paid for your sat nav. Dedicated or not.
A poor man's solution would be: given a choice of two possible speed limits
for a given lat/long (because you haven't specified altitude) then the algorithm should select the one [closest] that matches the last one.
However in an age where everything is frameworks and apps and AI written, sensible code is a luxury we can't afford.
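The "poor man's" heuristic above can be sketched directly. A minimal illustration, assuming the candidate limits for the current lat/long fix are already known:

```python
# When a lat/long fix yields several candidate speed limits (crossing
# roads, no altitude), keep the candidate closest to the limit we last
# believed. All figures are illustrative.

def pick_limit(candidates, last_limit):
    """Choose the candidate limit nearest the previous reading."""
    return min(candidates, key=lambda limit: abs(limit - last_limit))

# Over/under crossing: a 30mph road bridging a 70mph motorway.
print(pick_limit([30, 70], last_limit=70))  # 70 - we were on the motorway
print(pick_limit([30, 70], last_limit=30))  # 30 - we were on the local road
```

This is cheap because it needs only one remembered value rather than a full tracked journey, though it plainly fails at the moment the vehicle genuinely changes from one road to the other.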
On 03/02/2025 16:15, Mark Goodge wrote:
then as you can see in the image, there's a side road with a 20mph limit.
Some drivers report that, even if they are proceeding along the main road, and have no intention of turning, their sat-nav nonetheless tells them to
slow down to 20mph as they pass the junction.
If what you say is true, and it's not something I've experienced so I
have my doubts, it's a minor problem easily solved.
On 03/02/2025 17:43, Mark Goodge wrote:
How do you think an autonomous vehicle would cope with that, when the route given by its navigation software is blocked and there's no data available to suggest an alternative?
Provided it is working on a live real-time mapping system in which road closures are clearly indicated, which is not beyond the wit of man, the satnav will simply re-route to avoid them.
How do you think an autonomous vehicle would cope with being stopped by the police, and given routing information by the police which conflicted with
its navigational data?
It's the same problem as above. If the vehicle knows from an update to the live map that there is a road blockage, as should be a priority when autonomous vehicles become common, it will simply re-route to avoid it.
Otherwise, but not too fancifully, it could be programmed to recognise the uniform (just as you do), raise an antenna with a microphone, and tell him to talk clearly to Siri identifying the exact location of the problem using what3words or whatever.
You slid over to autonomous cars. Admittedly bringing the thread back to the headline title. But also detracting from my point. Which is that generally a sat nav (or phone) isn't smart enough to distinguish the not uncommon situation where roads cross in 3D. A problem which was common enough to cause many problems with telematic based insurance and crap software.
On 03/02/2025 17:43, Mark Goodge wrote:
I was thinking of cases where there's no need to break the law, merely a
need to make decisions based on information which cannot be preset.
Let me
give a couple of examples from my real life experience.
The first happened when I was a passenger, not the driver. We'd been to a
gig at the Etihad Stadium in Manchester, and my friend was following his
sat-nav back to my house in Stoke-on-Trent so that he could drop me
off and
then continue home. However, as we arrived at J19 of the M6 from the
A565,
the slip road onto the M6 southbound (which his sat-nav was, correctly,
telling him to take) was blocked by a police car with flashing blue
and red
lights. There was no indication of why it was blocked, and no signs
giving
any diversionary route. My assumption was that there had been an
accident of
some sort, but that was pure speculation.
After driving round the entire roundabout twice (I have no idea why,
maybe
he was hoping the police car would magically disappear the second time
around), he eventually asked me what to do. I directed him via a
different
exit onto the A50, and once we were on that heading south I told him
to just
ignore the sat-nav until it stopped telling him to turn around (we
eventually joined the M6 at the next junction down).
Had I not been in the car, I have no idea how he would have got out of
that.
If I was ever in similar circumstances, and didn't have the benefit of a
passenger with local knowledge, I think I'd either find somewhere to pull
over and look at an actual map, or just take what looks like the next
best
option and keep going in vaguely the right direction until my sat-nav
picked
up a new route.
How do you think an autonomous vehicle would cope with that, when the
route
given by its navigation software is blocked and there's no data
available to
suggest an alternative?
Provided it is working on a live real-time mapping system in which road closures are clearly indicated, which is not beyond the wit of man, the satnav will simply re-route to avoid them. They're very good at finding alternative routes when you miss a turning for example, so it's not a
new or insurmountable problem.
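Re-routing around a *known* closure is indeed a solved graph problem, as a sketch shows. The junction names and weights below are invented; and this assumes the closure has actually reached the map data, which is exactly the point in dispute.

```python
# Shortest-path re-routing: if the live map marks a road closed, skip
# that edge and re-run Dijkstra. Junction names are invented.
import heapq

def shortest_path(graph, start, goal, closed=frozenset()):
    """Dijkstra over an adjacency dict, skipping closed (a, b) edges."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if (node, nbr) not in closed and (nbr, node) not in closed:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None  # no route at all

graph = {
    "J19": {"M6S": 1, "A50": 2},
    "M6S": {"J18": 1},
    "A50": {"J18": 3},
    "J18": {"Home": 1},
}
print(shortest_path(graph, "J19", "Home"))                           # via M6S
print(shortest_path(graph, "J19", "Home", closed={("J19", "M6S")}))  # via A50
```

The algorithm is the easy half; the hard half is getting the `closed` set populated in real time when the only signal on the ground is a parked police car.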
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
[quoted text muted]
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to navigation systems in real time.
On 03/02/2025 19:05, Norman Wells wrote:
On 03/02/2025 17:43, Mark Goodge wrote:
I was thinking of cases where there's no need to break the law, merely
a need to make decisions based on information which cannot be preset.
Let me give a couple of examples from my real life experience.
The first happened when I was a passenger, not the driver. We'd been
to a gig at the Etihad Stadium in Manchester, and my friend was
following his sat-nav back to my house in Stoke-on-Trent so that he
could drop me off and then continue home. However, as we arrived at
J19 of the M6 from the A565,
the slip road onto the M6 southbound (which his sat-nav was,
correctly, telling him to take) was blocked by a police car with
flashing blue and red lights. There was no indication of why it was
blocked, and no signs giving any diversionary route. My assumption was
that there had been an accident of some sort, but that was pure
speculation.
After driving round the entire roundabout twice (I have no idea why,
maybe he was hoping the police car would magically disappear the
second time around), he eventually asked me what to do. I directed him
via a different exit onto the A50, and once we were on that heading
south I told him to just ignore the sat-nav until it stopped telling
him to turn around (we eventually joined the M6 at the next junction
down).
Had I not been in the car, I have no idea how he would have got out of
that.
If I was ever in similar circumstances, and didn't have the benefit of
a passenger with local knowledge, I think I'd either find somewhere to
pull over and look at an actual map, or just take what looks like the
next best option and keep going in vaguely the right direction until
my sat-nav picked up a new route.
How do you think an autonomous vehicle would cope with that, when the
route given by its navigation software is blocked and there's no data
available to suggest an alternative?
Provided it is working on a live real-time mapping system in which road
closures are clearly indicated, which is not beyond the wit of man, the
satnav will simply re-route to avoid them. They're very good at
finding alternative routes when you miss a turning for example, so it's
not a new or insurmountable problem.
Won't the satnav keep telling you to take the original route? Is there a way to tell it that there is something in the way? The same problem would occur in the case of an autonomous vehicle.
On 03/02/2025 09:36, Norman Wells wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000 Norman Wells <hex@unseen.ac.am>
wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
I will investigate in case it is a feature that is available but not
activated. But you are assuming that all States and small 'burgs' in
the US follow the same speed limit reporting processes that we do. Do
they?
I have no idea. But this is a UK group and all the necessary speed
limit information here is certainly available and in current use.
It's trivial programming to ensure that limits are not exceeded by any
autonomous vehicle, which is certainly not the case with human drivers,
even those who try to obey them.
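The clamp itself is indeed trivial to sketch; the hard part, as the rest of the thread shows, is knowing the true limit in the first place. A minimal illustration (the 2mph safety margin is an invented figure):

```python
# Cap the controller's commanded speed at the current mapped limit,
# minus a small margin for speedometer/GPS error.

def target_speed(desired_mph, limit_mph, margin_mph=2):
    """Never command a speed above the known limit minus a safety margin."""
    return min(desired_mph, max(limit_mph - margin_mph, 0))

print(target_speed(70, 30))  # 28 - clamped in a 30 zone
print(target_speed(25, 30))  # 25 - desired speed already below the cap
```

Note that if `limit_mph` is wrong (the telematics misattribution described below), the clamp is faithfully wrong too.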
Anecdata: A young(ish) family member had a black box fitted to their new
car as a condition of their insurance policy. It was a physical black
box connected directly to the vehicle electronics and mounted within the engine bay by a professional appointed by the insurers. Said family
member has since changed their vehicle and the new vehicle didn't need a physical black box fitting but instead required an app installing on
their mobile phone which must be with them in their car when they're
driving as a condition of their insurance.
Both systems have resulted in letters from the insurance company
detailing incidents where the systems indicated that they were driving
at significantly above the speed limit, which triggers penalties in the policy up to and including cancellation of the policy.
Every single incident reported has been down to GPS issues where the car
is on a dual carriageway, NSL road or similar, yet the black box
detected the vehicle as being on a 30mph road adjacent to its current position but there is no reason to believe that the reverse situation
cannot also exist, without triggering a letter from the insurance
company.
Please detail how your autonomous vehicle has overcome these problems evidently inherent in the current navigational systems in use, (both stand-alone and embedded).
On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
[quoted text muted]
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to
navigation systems in real time.
Well Google Maps gets feedback on the progress of journeys using it and *could* quickly determine that a sudden influx of changed journeys is indicative of some sort of problem proceeding.
Whether it does or not is another matter.
The reason I am doubtful is that this is the same product which will
happily send you down a labyrinthine route of side roads and alleys in
the name of its preferred "green" routing. Doing more damage to the environment than it could ever prevent.
(sat Nav re-routing is another exemplar of machines being incapable of intelligence :) )
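The "sudden influx of changed journeys" idea can be sketched as a simple rolling-window monitor. The window size and threshold below are invented figures, not anything Google actually uses:

```python
# Flag a road segment as probably blocked when an unusual share of
# recent vehicles deviate from the route they were given at that point.
from collections import deque

class SegmentMonitor:
    def __init__(self, window=50, threshold=0.6):
        self.recent = deque(maxlen=window)  # True = driver deviated here
        self.threshold = threshold

    def report(self, deviated):
        self.recent.append(deviated)

    def probably_blocked(self):
        if len(self.recent) < 10:  # not enough data to judge yet
            return False
        return sum(self.recent) / len(self.recent) >= self.threshold

m = SegmentMonitor()
for _ in range(20):
    m.report(True)  # 20 consecutive drivers turn away from the slip road
print(m.probably_blocked())  # True
```

Whether a provider actually closes the loop this quickly is, as the post says, another matter; on a quiet road at night there may simply be too few journeys to trigger anything.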
On Tue, 4 Feb 2025 11:51:21 -0000 (UTC), Jethro_uk
<jethro_uk@hotmailbin.com> wrote:
You slid over to autonomous cars. Admittedly bringing the thread back to
the headline title. But also detracting from my point. Which is that
generally a sat nav (or phone) isn't smart enough to distinguish the not
uncommon situation where roads cross in 3D. A problem which was common
enough to cause many problems with telematic based insurance and crap
software.
And that also takes us back to my point, which is that a truly autonomous vehicle has to be capable of deciding when to disregard the sat-nav, and be able to operate effectively in situations where the sat-nav data is missing or incomplete.
Norman's responses to your comments also contradict his own earlier assertions that autonomous vehicles don't need AI,
and that all of these
problems have already been solved. Now he's merely saying that, in his opinion, they can be solved and that solving them is trivial.
On Tue, 04 Feb 2025 10:12:08 +0000, Norman Wells wrote:
On 04/02/2025 09:44, Jethro_uk wrote:
On Mon, 03 Feb 2025 17:12:06 +0000, Norman Wells wrote:
On 03/02/2025 16:15, Mark Goodge wrote:
On Mon, 3 Feb 2025 14:39:52 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 03/02/2025 08:09, Norman Wells wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a
proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
Yes, but it doesn't always get it right in tricky edge cases:
Asymmetric speed limits on dual carriageways when one direction is under speed restrictions due to roadworks, crash recovery or
resurfacing.
30mph local roads crossing motorway bridges where derestricted
applies.
Junctions with roads that have different limits can cause problems,
too.
Mostly it gets it right but sometimes it has the car on a road
parallel with the road it is actually on. Can be an issue with
motorway parallel and close to old road as happens at RAF Dishforth with A1(M) and A168. Civilian GPS is only good to ~10m and it
sometimes guesses wrong.
There's also the problem that most minor roads only exist in the
database as lines and nodes rather than areas. So what we, on the
ground, see as this:
---------------------------------------------------------
-----------------------+ +-----------------------------
| |
| |
| |
| |
(that is, a road with a side-road meeting it at a t-junction)
will be in the database as simply this:
-------------------------+----------------------------
|
|
|
That's not a general problem for navigation, or, indeed, speed
limits. But if the side road has a different limit to the main road, then a car using sat-nav to determine the speed limit as it proceeds along the main road will encounter a node where two different limits intersect. How sat-navs interpret that is not consistent.
That's observable by some drivers in my town. If you're driving along this road, which has the normal urban 30mph limit:
https://maps.app.goo.gl/CKZTxGDMVNuXvfsU8
then as you can see in the image, there's a side road with a 20mph
limit. Some drivers report that, even if they are proceeding along
the main road, and have no intention of turning, their sat-nav
nonetheless tells them to slow down to 20mph as they pass the
junction.
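The junction ambiguity described above can be made concrete: when roads are stored as edges meeting at nodes, a purely point-based lookup at the shared node sees both limits. A toy Python sketch (the data model is illustrative, not any vendor's schema):

```python
# Road database as lines and nodes: each edge carries its own limit (mph).
edges = {
    "main_road_w": {"nodes": ("A", "J"), "limit": 30},
    "main_road_e": {"nodes": ("J", "B"), "limit": 30},
    "side_road":   {"nodes": ("J", "C"), "limit": 20},
}

def limits_at_node(node):
    """All limits attached to a node - more than one means ambiguity."""
    return sorted({e["limit"] for e in edges.values() if node in e["nodes"]})

# At junction J a naive lookup sees both 20 and 30; which one a sat-nav
# announces is whatever tie-break its vendor happened to choose.
print(limits_at_node("J"))  # -> [20, 30]
print(limits_at_node("A"))  # -> [30]
```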
If what you say is true, and it's not something I've experienced so I
have my doubts, it's a minor problem easily solved. Satnavs are
perfectly capable of determining which road you're on and the
applicable speed limit.
Are they?
I have already said there are cases where roads cross in space (over/
under) and lacking the ability to determine altitude (plus the required
history to also determine route) means GPS based speed limit
determination is flawed.
One solution would be that a journey is tracked as a series of linked
points, rather than just an individual spot reading in time. However
that introduces a level of complexity that you are not going to get for
the sum you paid for your sat nav. Dedicated or not.
A poor man's solution would be: given a choice of two possible speed
limits for a given lat/long (because you haven't specified altitude)
then the algorithm should select the one [closest] that matches the
last one. However, in an age where everything is frameworks and apps and AI-written, sensible code is a luxury we can't afford.
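That "poor man's solution" can be sketched in a few lines (illustrative only; the function name and the fallback rule are my own, not any sat-nav's code):

```python
def pick_limit(candidates, last_limit):
    """Given several plausible limits for one GPS fix (e.g. roads that
    cross in 3D), prefer the one that matches the previous reading."""
    if last_limit in candidates:
        return last_limit
    # Otherwise fall back to the candidate closest to the last known limit.
    return min(candidates, key=lambda c: abs(c - last_limit))

# Motorway bridge over a 30mph road: a bare lat/long matches both limits.
print(pick_limit([70, 30], last_limit=70))  # -> 70
print(pick_limit([70, 30], last_limit=60))  # -> 70
```

Note this still needs a short history of fixes to seed `last_limit`, which is exactly the "series of linked points" complexity the post says cheap devices omit.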
I think all you're demonstrating is that with the array of sensors such
as autonomous vehicles have, presumably including one telling it which
direction it's facing, it's quite simple to determine which road it's on
and what the speed limit is going the way it is.
Indeed.
However long personal experience has informed me that just because there
is a solution to a problem, doesn't mean the problem will be solved, especially when non-technical issues are introduced.
However you have very subtly shifted the discussion - and nearly got away with it. I was referring to "sat navs" - portable devices that can be
moved from car to car. Either dedicated devices, or a modern smartphone running sat nav software.
You slid over to autonomous cars. Admittedly bringing the thread back to
the headline title. But also detracting from my point. Which is that generally a sat nav (or phone) isn't smart enough to distinguish the not uncommon situation where roads cross in 3D. A problem which was common
enough to cause many problems with telematic based insurance and crap software.
On Mon, 3 Feb 2025 17:12:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 16:15, Mark Goodge wrote:
then as you can see in the image, there's a side road with a 20mph limit. Some drivers report that, even if they are proceeding along the main road, and have no intention of turning, their sat-nav nonetheless tells them to slow down to 20mph as they pass the junction.
If what you say is true, and it's not something I've experienced so I
have my doubts, it's a minor problem easily solved.
Ah, so your experience determines what you believe exists. That, Norman, is precisely why your opinion has such little practical value.
On 03/02/2025 09:36, Norman Wells wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those
not uncommon in the US, which mandate a lower-than-normal speed
limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper data-base of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
I will investigate in case it
is a feature that is available but not activated. But you are assuming
that all States and small 'burgs' in the US follow the same speed limit
reporting processes that we do. Do they?
I have no idea. But this is a UK group and all the necessary speed
limit information here is certainly available and in current use.
It's trivial programming to ensure that limits are not exceeded by any
autonomous vehicle, which is certainly not the case with human
drivers, even those who try to obey them.
Anecdata: A young(ish) family member had a black box fitted to their new
car as a condition of their insurance policy. It was a physical black
box connected directly to the vehicle electronics and mounted within the engine bay by a professional appointed by the insurers. Said family
member has since changed their vehicle and the new vehicle didn't need a physical black box fitting but instead required an app installing on
their mobile phone which must be with them in their car when they're
driving as a condition of their insurance.
Both systems have resulted in letters from the insurance company
detailing incidents where the systems indicated that they were driving
at significantly above the speed limit, which triggers penalties up to and including cancellation of the policy.
Every single incident reported has been down to GPS issues where the car
is on a dual carriageway, NSL road or similar, yet the black box
detected the vehicle as being on a 30mph road adjacent to its current position. But there is no reason to believe that the reverse situation
cannot also exist, without triggering a letter from the insurance company.
Please detail how your autonomous vehicle has overcome these problems evidently inherent in the current navigational systems in use (both stand-alone and embedded).
On 04/02/2025 14:15, Jethro_uk wrote:
[quoted text muted]
I don't use, or even come across, 'green routing'.
On 04/02/2025 14:15, Jethro_uk wrote:
On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
[quoted text muted]
But that's the point. It wasn't. It can't be. When the police decide
to block a road and turn people away, that information isn't available
to navigation systems in real time.
Well Google Maps gets feedback on the progress of journeys using it and
*could* quickly determine that a sudden influx of changed journeys is
indicative of some sort of problem proceeding.
Whether it does or not is another matter.
It spots that a stretch of road has developed a very slow speed, and so prefers a different route based on travel times, without knowing why.
Nor knowing whether the alternative roads can cope with the volume of
traffic now moving along an unfamiliar route.
Google Maps also shows forked arrows at bottom right to allow me to choose a different route at any time (data connection permitting).
Google Maps also shows forked arrows at bottom right to allow me to
choose a different route at any time (data connection permitting).
The arrows aren't always there IME (but the caveat with anything from the
chocolate factory is it can change dramatically without notice). I only
saw them when I was in a position to select one of the routes. I found
that invariably they were suggesting a "quicker" route that would have
taken me out of my way and not uncommonly down a side road.
On 03/02/2025 10:07, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autopilots, where each set
of inputs, no matter how complex, will produce exactly the same result
every time.
AI systems, on the other hand, learn and make up new rules as they go along.
And so the process by which they actually arrive at particular decisions
may not be readily available to humans.
I think this is possibly what a lot of people are really worried about.
I give you the details of Lufthansa LH1829 below. Numerous other examples are
available, (some of which may include the grounding of the entire Boeing 737 MAX
fleet).
Whilst some knowledge of aviation terms will help in reading what follows, it is by no
means mandatory.
Please detail, as simply as possible, the likely outcome for flight LH1829 had the
commander not been present and had he not had access to flight technicians via Telex.
Then please detail which parties will be fulfilling the roles of the commander and the
flight technicians in fully automated vehicles.
Regards
S.P.
Lufthansa flight LH1829 took off from Bilbao for scheduled services at 0748 hrs with
109 persons on board. The co-pilot was Pilot Flying (PF) and the commander monitored
the conduct of the flight. It rained during take-off. The airplane climbed through
the cloud tops at approximately FL 200.
From 0756 hrs on, the FDR recorded a constant angle of attack value (AOA) of 4.2 for
the AOA sensor #1.
From 0757 hrs on, the recorded value for sensor #2 remained constant (4.6) also. At
0803 hrs the crew received the clearance to fly directly to way point LATEK and the
airplane turned to the new heading.
The commander of Lufthansa flight LH1829 stated he noticed the indications of the Alpha
Protection Band in the Primary Flight Display (PFD) increasing unusually rapidly.
The autopilot was in climb mode.
The co-pilot of Lufthansa flight LH1829 stated he then reduced the rate of climb from
about 800 ft/min to 500 ft/min using the Vertical Speed button of the autopilot giving
the airplane a chance to accelerate.
Shortly afterwards the co-pilot disengaged the autopilot and gave a short sidestick
control input in order to reduce the pitch angle of the airplane. The nose of the
airplane dropped further and the co-pilot counteracted this movement with the sidestick. The co-pilot stated the airplane did not respond as expected and continued
to descend.
Approximately 45 seconds after the autopilot was disengaged the co-pilot of Lufthansa
flight LH1829 verbalised the airplane's limited reaction to his control inputs and
therefore the commander took over the controls. At that time it had a rate of descent
of about 4,000 ft/min and a pitch angle of -3.5. The commander gave maximum backward
sidestick input (pull) and the airplane's nose began to rise, the rate of descent
decreased, and the airplane entered horizontal flight attitude once more.
The flight was continued in FL 270. The commander gave continuously backward sidestick
input (pull) of more than 50% of the possible input; altitude and attitude remained
constant. The autopilots could not be engaged again and the trim was limited in nose up
movements. The crew stated they made sure that the indicated airspeed was correct by
checking the given values for pitch and power in the Quick Reference Handbook (QRH).
Searching the QRH revealed that no procedure was described which would suit this
situation.
The Flight Augmentation Computers (FAC) 1 and 2 were reset by the crew one after the
other. The FDR showed that this did not change the situation.
At 0811:12 hrs the Aircraft Communications Addressing and Reporting System (ACARS) sent
an automatically generated message to the technicians of the operator's maintenance
organisation. This message included the AOA values for the three Air Data Inertial
Reference Units (ADIRUs).
At 0824 hrs the crew sent a telex to the technicians with a short description of the
situation and the question whether an in-flight simultaneous reset of the FACs was
possible. Three minutes later this was answered in the affirmative with the note that
then the airplane would be in Alternate Law.
At 0831 hrs the crew wrote that a constant pull on the sidestick was necessary, that
the trim was not available and that the AlphaProt Band moved very rapidly. In addition,
the Centralised Fault Display System (CFDS) showed the failure message "PH6 AOA3". The
technicians suggested in a reply telex to retrieve the angle of attack values via the
Multifunctional Control and Display Unit (MCDU) and, if necessary, to turn off the Air
Data Reference unit (ADR) 3.
At 0836 hrs the crew turned off ADR 3. This did not change the situation and therefore
the unit was turned on again. At 0852 hrs the crew received the following message:
".NACH SICHTUNG DER AOA WERTE, KOENNTE ES SEIN, DASS AOA1 UND AOA2 EINGEFROREN SIND UND
EINEN ZU HOHEN ANSTROEMWINKEL MELDEN (After reviewing the AOA values it is possible
that AOA1 and AOA2 are frozen and report a too high angle of attack). FALLS DAS PROBLEM
WEITERHIN BESTEHT, ADR 1 UND ADR 2 AUSSCHALTEN, WAS ABER ZU ALTENATE LAW FUEHRT" (If
the problem continues turn off ADR 1 and ADR 2 but that leads to alternate law). And at
0857 hrs: ".VIELLEICHT REICHT ES AUCH NUR DEN ADR 2 AUSZUSCHALTEN (Maybe it suffices to
just turn off ADR 2). [.]"
At 0859 hrs the crew turned off ADR 2.
The LUFTHANSA flight LH1829 control system changed to Alternate Law. The continuous
sidestick input was no longer necessary in order to keep the airplane in horizontal
flight attitude.
Autotrim and autopilot functioned again.
The crew stated they decided to fly to the destination aerodrome and use the remaining
flight time of about one hour to become familiar with the systems' functions and
prepare the landing.
At 0949 hrs the airplane landed safely at the destination aerodrome.
(From https://www.aviation-accidents.net/lufthansa-airbus-a321-231-d-aidp-flight-lh1829/)
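The frozen-AOA diagnosis in the report above amounts to a simple consistency check: a sensor whose reading has stopped moving is suspect. A toy sketch in Python (this is not the actual Airbus FAC logic; names and the threshold are illustrative):

```python
def frozen_sensors(history, eps=0.01):
    """Flag angle-of-attack channels whose readings have stopped moving.
    `history` maps sensor name -> recent readings (degrees)."""
    return [name for name, vals in history.items()
            if max(vals) - min(vals) < eps]

# In the LH1829 scenario two of the three channels froze, so simple
# majority voting between sensors would favour the two *wrong* ones -
# which is why a stuck-value check matters as well as a vote.
readings = {
    "AOA1": [4.2, 4.2, 4.2, 4.2],
    "AOA2": [4.6, 4.6, 4.6, 4.6],
    "AOA3": [1.1, 0.9, 1.4, 1.2],
}
print(frozen_sensors(readings))  # -> ['AOA1', 'AOA2']
```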
Jethro_uk wrote:
Google Maps also shows forked arrows at bottom right to allow me to
choose a different route at any time (data connection permitting).
The arrows aren't always there IME (but the caveat with anything from the
chocolate factory is it can change dramatically without notice). I only
saw them when I was in a position to select one of the routes. I found
that invariably they were suggesting a "quicker" route that would have
taken me out of my way and not uncommonly down a side road.
I just tried it on a route I know well; it offered me a choice of three routes. The default was the one with the green leaf, and the other two were longer in distance or took more time. Coincidentally its preferred one is generally *my* preferred one, but there have been days when it picks a different route.
On 04/02/2025 12:32, Mark Goodge wrote:
On Tue, 4 Feb 2025 11:51:21 -0000 (UTC), Jethro_uk
<jethro_uk@hotmailbin.com> wrote:
You slid over to autonomous cars. Admittedly bringing the thread back to the headline title. But also detracting from my point. Which is that generally a sat nav (or phone) isn't smart enough to distinguish the not uncommon situation where roads cross in 3D. A problem which was common
enough to cause many problems with telematic based insurance and crap
software.
And that also takes us back to my point, which is that a truly autonomous
vehicle has to be capable of deciding when to disregard the sat-nav, and be able to operate effectively in situations where the sat-nav data is missing or incomplete.
That's just a matter of programming to do what a sensible person would do in the same
circumstances.
Norman's responses to your comments also contradict his own earlier
assertions that autonomous vehicles don't need AI,
AI doesn't come into it.
and that all of these
problems have already been solved. Now he's merely saying that, in his
opinion, they can be solved and that solving them is trivial.
I'm sure you're reinventing the wheel bringing up what you think are outstanding
problems without knowing what has been solved already. The answers to them, however,
are so trivially easy that it is inconceivable that they haven't all been addressed
years ago.
Where do you think the estimated $75 billion spent by the automotive industry on the
development of autonomous vehicles has gone? And do you really think they haven't
already thought of all the issues you've raised?
There was a sneeze
of time when it would display speed limits without needing a route
programmed (same as Waze "driving mode"). Then it stopped.
Jethro_uk wrote:
There was a sneeze of time when it would display speed limits without
needing a route programmed (same as Waze "driving mode"). Then it
stopped.
GM also called it "driving mode", I'm fairly sure that when using GM in
the car it still displays the current speed limit, I'll try to remember
to check when I next venture out ...
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 03/02/2025 17:43, Mark Goodge wrote:
How do you think an autonomous vehicle would cope with that, when the route given by its navigation software is blocked and there's no data available to
suggest an alternative?
Provided it is working on a live real-time mapping system in which road
closures are clearly indicated, which is not beyond the wit of man, the
satnav will simply re-route to avoid them.
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to navigation systems in real time.
How do you think an autonomous vehicle would cope with being stopped by the police, and given routing information by the police which conflicted with its navigational data?
It's the same problem as above. If the vehicle knows from an update to
the live map that there is a road blockage, as should be a priority when
autonomous vehicles become common, it will simply re-route to avoid it.
But, again, that's the point. This information is not available in real time to the navigation systems. A human can cope with route-finding if necessary without resorting to them.
Otherwise, but not too fancifully, it could be programmed to recognise
the uniform (just as you do), raise an antenna with a microphone, and
tell him to talk clearly to Siri identifying the exact location of the
problem using what three words or whatever.
Well, yes. But that's a lot more complex than merely following a route. It requires at least reasonable voice recognition, for example, and then being able to follow the instructions given.
I don't doubt that, given sufficiently advanced AI, this will be a solvable problem. But it's not a problem you can solve just with a set of preset rules. So until autonomous vehicles have AI which is as robust at problem solving as human intelligence, it isn't really autonomous.
Going on about live routing data is a red herring. As well as ongoing situations where the data simply isn't available to the system (eg, where
the police decide to block a road), you also have to be able to cope with situations where the data exists but isn't available to the device (eg, in mobile notspots). To be truly autonomous, a self-driving vehicle has to be able to operate solely on the basis of road signs, markings, offline maps where available and, if necessary, verbal instructions from a human. Just like humans can.
Mark
On 4 Feb 2025 at 12:16:04 GMT, "Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 03/02/2025 17:43, Mark Goodge wrote:
How do you think an autonomous vehicle would cope with that, when the
route given by its navigation software is blocked and there's no data
available to suggest an alternative?
Provided it is working on a live real-time mapping system in which
road closures are clearly indicated, which is not beyond the wit of
man, the satnav will simply re-route to avoid them.
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to
navigation systems in real time.
How do you think an autonomous vehicle would cope with being stopped
by the police, and given routing information by the police which
conflicted with its navigational data?
It's the same problem as above. If the vehicle knows from an update
to the live map that there is a road blockage, as should be a priority
when autonomous vehicles become common, it will simply re-route to
avoid it.
But, again, that's the point. This information is not available in real
time to the navigation systems. A human can cope with route-finding if
necessary without resorting to them.
Otherwise, but not too fancifully, it could be programmed to recognise
the uniform (just as you do), raise an antenna with a microphone, and
tell him to talk clearly to Siri identifying the exact location of the
problem using what three words or whatever.
Well, yes. But that's a lot more complex than merely following a route.
It requires at least reasonable voice recognition, for example, and
then being able to follow the instructions given.
I don't doubt that, given sufficiently advanced AI, this will be a
solvable problem. But it's not a problem you can solve just with a set
of preset rules. So until autonomous vehicles have AI which is as
robust at problem solving as human intelligence, it isn't really
autonomous.
Going on about live routing data is a red herring. As well as ongoing
situations where the data simply isn't available to the system (eg,
where the police decide to block a road), you also have to be able to
cope with situations where the data exists but isn't available to the
device (eg, in mobile notspots). To be truly autonomous, a self-driving
vehicle has to be able to operate solely on the basis of road signs,
markings, offline maps where available and, if necessary, verbal
instructions from a human. Just like humans can.
Mark
Sat navs already deal with crowd-sourced information about road
closures, and driver indicated road closures, and reroute accordingly.
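Rerouting around a reported closure is, at heart, a shortest-path search that skips closed edges. A minimal sketch (toy graph with travel times in minutes; real systems weight edges with live data):

```python
import heapq

def shortest_route(graph, start, goal, closed=frozenset()):
    """Dijkstra over a road graph, skipping edges reported closed.
    `graph` maps node -> {neighbour: travel_time}."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph.get(node, {}).items():
            if (node, nxt) not in closed and nxt not in seen:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return None  # no route at all

roads = {"A": {"B": 5, "C": 8}, "B": {"D": 5}, "C": {"D": 4}, "D": {}}
print(shortest_route(roads, "A", "D"))                       # -> (10, ['A', 'B', 'D'])
print(shortest_route(roads, "A", "D", closed={("A", "B")}))  # -> (12, ['A', 'C', 'D'])
```

The hard part, as the thread keeps pointing out, is not the search but getting the closure into `closed` in real time.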
"Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0enkeFqoikU2@mid.individual.net...
On 03/02/2025 10:07, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autopilots, where each set
of inputs, no matter how complex, will produce exactly the same result
every time.
AI systems, on the other hand, learn and make up new rules as they go along.
And so the process by which they actually arrive at particular decisions
may not be readily available to humans.
I think this is possibly what a lot of people are really worried about.
I give you the details of Lufthansa LH1829 below. Numerous other examples are
available, (some of which may include the grounding of the entire Boeing 737 MAX
fleet).
Whilst some knowledge of aviation terms will help in reading what follows, it is by no
means mandatory.
Please detail, as simply as possible, the likely outcome for flight LH1829 had the
commander not been present and had he not had access to flight technicians via Telex.
Then please detail which parties will be fulfilling the roles of the commander and the
flight technicians in fully automated vehicles.
I fail to see any possible relevance in any of your questions.
Unless you can detail where I have ever claimed that AI has any possible
role to play in either "autonomous" vehicles, or in autopilots.
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than
the optimum outcome either permissible or desirable in real world situations.
As it happens, Norman is entirely correct in claiming AI has no role to play
in autonomous vehicles. But he is entirely wrong in imagining that all
potential real life situations and problems which might arise can be
anticipated, never mind modelled in experimental set-ups such as would allow
autonomous vehicles safely onto our roads.
On 03/02/2025 19:05, Norman Wells wrote:
Provided it is working on a live real-time mapping system in which
road closures are clearly indicated, which is not beyond the wit of
man, the satnav will simply re-route to avoid them. They're very good
at finding alternative routes when you miss a turning for example, so
it's not a new or insurmountable problem.
Won't the satnav keep telling you to take the original route? Is there a
way to tell it that there is something in the way? The same problem
would occur in the case of an autonomous vehicle.
On 03/02/2025 10:03, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0bh1eFdablU2@mid.individual.net...
It's trivial programming to ensure that limits are not exceeded by
any autonomous
vehicle, which is certainly not the case with human drivers, even
those who try to obey
them.
How about the humans who were supposed to put up the signs but
somehow forgot ?
What do you, and your robot friends, intend to do about them ?
Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
TRO with no corresponding signage installed?
Do NormCars (TM) (a Division of NormTech (TM)) have access to the
underlying TRO database and know to ignore a sign if it has no TRO
legally underpinning it?
Will NormCars (TM) know the legally enforceable speed limit from
accessing the TRO database and therefore drive at the correct speed even
if some ne'er do well has spray painted, nicked or otherwise interfered
with the signage?
Will NormTech (TM) be selling this software to other major automobile companies or are they keeping their "trivial" coding to themselves?
On 03/02/2025 10:07, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close
to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autopilots, where each set
of inputs, no matter how complex, will produce exactly the same result
every time.
AI systems on the other hand, learn and make up new rules as they go
along.
And so the process by which they actually arrive at particular decisions
may not be readily available to humans.
I think this is possibly what a lot of people are really worried about.
I give you the details of Lufthansa LH1829 below. Numerous other
examples are available, (some of which may include the grounding of the entire Boeing 737 MAX fleet).
Whilst some knowledge of aviation terms will help in reading what
follows, it is by no means mandatory.
Please detail, as simply as possible, the likely outcome for flight
LH1829 had the commander not been present and had he not had access to
flight technicians via Telex.
Then please detail which parties will be fulfilling the roles of the commander and the flight technicians in fully automated vehicles.
And, despite what some posters to this thread may believe, regardless of
how sincere they are in their beliefs, we are a significant distance
away from a vehicle even approaching level 5.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0eokvFtck2U4@mid.individual.net...
I'm sure you're reinventing the wheel bringing up what you think are outstanding
problems without knowing what has been solved already. The answers to them, however,
are so trivially easy that it is inconceivable that they haven't all been addressed
years ago.
Where do you think the estimated $75 billion spent by the automotive industry on the
development of autonomous vehicles has gone? And do you really think they haven't
already thought of all the issues you've raised?
Right. You're an autonomous car. You've got your two passengers in the
back ( you know humans, remember them ? ) a husband and wife. Originally
they told you to go to Tesco. But then half way there the wife said she'd
changed her mind and wanted to go to Asda. So she said "go to Asda". But
her husband objected and said he wanted to go to "Morrisons" instead. So
you're driving along supposedly headed for Tesco but with shouts of "Asda",
"No Morrisons", "No Asda", "Morrisons" coming out of the back for the
next ten minutes. So where do you go, and why?
On 04/02/2025 15:41, billy bookcase wrote:
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than
the optimum outcome either permissible, or desirable, in real world
situations,
As it happens Norman is entirely correct in claiming AI has no role to play
in autonomous vehicles. But entirely wrong in imagining that all potential
real life situations and problems which might arise can be anticipated, never
mind modelled in experimental set-ups such as would allow autonomous vehicles
safely onto our roads.
You must then have missed the earlier references in this thread to studies showing that
autonomous vehicles are in fact already safer than human driven ones.
On 03/02/2025 15:32, The Todal wrote:
On 03/02/2025 14:32, Norman Wells wrote:
On 03/02/2025 13:41, Davey wrote:
Maybe every driver should be obliged to take a Speed Awareness Course
I have no idea what it runs, although I see lots of Linux terms in
the software listings. I ask it to do things, and it usually does them.
I certainly have no intention of replacing it, either the Satnav or the
car, I have managed so far without my car telling me what the speed
limit is, as I can read road signs myself.
Of course. Sometimes.
My experience is that my satnav knows rather better than I do what
the limit is at any one location. And that's not for the want of
trying.
If you say you're fully aware of the correct speed limit everywhere
you drive all of the time, I will frankly tend not to believe you.
Maybe a journey with a satnav that does could be revelatory.
periodically. To refresh their memory of which limits apply to which
types of road.
But it doesn't depend just on the type of road, so that would be
inaccurate and misleading.
A half decent satnav will know and tell you, wherever you are.
On Tue, 04 Feb 2025 14:49:28 +0000, Fredxx wrote:
On 04/02/2025 14:15, Jethro_uk wrote:
[quoted text muted]
I don't use, or even come across 'green routing'
Last time I used GM, it only gave you one route - the ability to choose
"shortest" and "quickest" alongside "most economical" had gone. When I
briefly researched the matter, I read that Google were only supplying the
"green" route now, for the planet. Not sure which planet though.
On Tue, 04 Feb 2025 16:39:02 +0000, Roger Hayter wrote:
On 4 Feb 2025 at 12:16:04 GMT, "Mark Goodge"
<usenet@listmail.good-stuff.co.uk> wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 03/02/2025 17:43, Mark Goodge wrote:
How do you think an autonomous vehicle would cope with that, when the
route given by its navigation software is blocked and there's no data
available to suggest an alternative?
Provided it is working on a live real-time mapping system in which
road closures are clearly indicated, which is not beyond the wit of
man, the satnav will simply re-route to avoid them.
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to
navigation systems in real time.
How do you think an autonomous vehicle would cope with being stopped
by the police, and given routing information by the police which
conflicted with its navigational data?
It's the same problem as above. If the vehicle knows from an update
to the live map that there is a road blockage, as should be a priority
when autonomous vehicles become common, it will simply re-route to
avoid it.
But, again, that's the point. This information is not available in real
time to the navigation systems. A human can cope with route-finding if
necessary without resorting to them.
Otherwise, but not too fancifully, it could be programmed to recognise
the uniform (just as you do), raise an antenna with a microphone, and
tell him to talk clearly to Siri identifying the exact location of the
problem using what three words or whatever.
Well, yes. But that's a lot more complex than merely following a route.
It requires at least reasonable voice recognition, for example, and
then being able to follow the instructions given.
I don't doubt that, given sufficiently advanced AI, this will be a
solvable problem. But it's not a problem you can solve just with a set
of preset rules. So until autonomous vehicles have AI which is as
robust at problem solving as human intelligence, it isn't really
autonomous.
Going on about live routing data is a red herring. As well as ongoing
situations where the data simply isn't available to the system (eg,
where the police decide to block a road), you also have to be able to
cope with situations where the data exists but isn't available to the
device (eg, in mobile notspots). To be truly autonomous, a self-driving
vehicle has to be able to operate solely on the basis of road signs,
markings, offline maps where available and, if necessary, verbal
instructions from a human. Just like humans can.
Mark
Sat navs already deal with crowd-sourced information about road
closures, and driver indicated road closures, and reroute accordingly.
*some* sat navs ...
However the problem then can be that all the diverted traffic itself causes congestion. Leading to another re-route. And so on.
Ideally all that data would be picked up by a central function that could amend traffic control (e.g. lights) accordingly. However that would run counter to the first commandment of UK transport since 1986 (when I
observed it) that nothing should be done to improve car journeys or
encourage private car ownership over public transport.
On 04/02/2025 14:15, Jethro_uk wrote:
On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
[quoted text muted]
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to
navigation systems in real time.
Well Google Maps gets feedback on the progress of journeys using it and
*could* quickly determine that a sudden influx of changed journeys is
indicative of some sort of problem proceeding.
Whether it does or not is another matter.
The reason I am doubtful is that this is the same product which will
happily send you down a labyrinthine route of side roads and alleys in
the name of its preferred "green" routing. Doing more damage to the
environment than it could ever prevent.
I don't use, or even come across 'green routing' but what I can say is
when I have ignored a deviation from the standard route it doesn't
normally work out well in terms of traffic, roadworks and jams.
(sat Nav re-routing is another exemplar of machines being incapable of
intelligence :) )
I might agree, but that is when you use your own intelligence to work
out if the new route is a sensible one. The satnav should be treated
like a guide or tool you have autonomy over.
On 04/02/2025 16:06, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0eokvFtck2U4@mid.individual.net...
I'm sure you're reinventing the wheel bringing up what you think are outstanding
problems without knowing what has been solved already. The answers to them, however,
are so trivially easy that it is inconceivable that they haven't all been addressed
years ago.
Where do you think the estimated $75 billion spent by the automotive industry on the
development of autonomous vehicles has gone? And do you really think they haven't
already thought of all the issues you've raised?
Right. You're an autonomous car. You've got your two passengers in the
back ( you know humans, remember them ? ) a husband and wife. Originally
they told you to go to Tesco. But then half way there the wife said she'd
changed her mind and wanted to go to Asda. So she said "go to Asda". But
her husband objected and said he wanted to go to "Morrisons" instead. So
you're driving along supposedly headed for Tesco but with shouts of "Asda",
"No Morrisons", "No Asda", "Morrisons" coming out of the back for the
next ten minutes. So where do you go, and why?
Where would a human driver go? It's no different.
You either follow the last instruction all the time until they get fed up bickering,
or you park up, lock the doors,
turn off the lights, and wait until you only have one unambiguous instruction to
follow. If conflict continues beyond a certain limit, I'd program the thing to ignore
all instructions and drive direct to Relate.
I don't know how it's done in practice but resolution is surely possible, maybe by
ensuring only one person has authority.
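The "only one person has authority" suggestion can be made concrete as a sketch. The speaker names and the idea of a single authorised passenger are illustrative assumptions, not any vendor's actual behaviour:

```python
# One possible resolution of the Tesco/Asda/Morrisons scenario: a single
# designated passenger holds authority, everyone else is ignored, and
# the last authorised instruction wins.

def resolve_destination(initial, instructions, authorised):
    """instructions: list of (speaker, destination); last authorised wins."""
    destination = initial
    for speaker, dest in instructions:
        if speaker == authorised:
            destination = dest
    return destination
```

So with the wife authorised, ten minutes of competing shouts still resolve deterministically: the husband's "Morrisons" is simply discarded.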
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0etb5Fu84aU2@mid.individual.net...
On 04/02/2025 15:41, billy bookcase wrote:
snip
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than
the optimum outcome either permissible, or desirable, in real world
situations,
As it happens Norman is entirely correct in claiming AI has no role to play
in autonomous vehicles. But entirely wrong in imagining that all potential
real life situations and problems which might arise can be anticipated, never
mind modelled in experimental set-ups such as would allow autonomous vehicles
safely onto our roads.
You must then have missed the earlier references in this thread to studies showing that
autonomous vehicles are in fact already safer than human driven ones.
The studies were carried out in California.
Not on the Hanger Lane roundabout.
On Tue, 04 Feb 2025 14:53:34 +0000, Nick Finnigan wrote:
On 04/02/2025 14:15, Jethro_uk wrote:
On Tue, 04 Feb 2025 12:16:04 +0000, Mark Goodge wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
[quoted text muted]
But that's the point. It wasn't. It can't be. When the police decide
to block a road and turn people away, that information isn't available
to navigation systems in real time.
Well Google Maps gets feedback on the progress of journeys using it and
*could* quickly determine that a sudden influx of changed journeys is
indicative of some sort of problem proceeding.
Whether it does or not is another matter.
It spots that a stretch of road has developed a very slow speed, and so
prefers a different route based on travel times, without knowing why.
Nor knowing whether the alternative roads can cope with the volume of
traffic now moving along an unfamiliar route.
Odd. 25 years ago I worked on routing software that was aware of road
characteristics. But not surprising as Google's idea of innovation and
mine rarely intersect.
Google Maps also shows forked arrows at bottom right to allow me to
choose a different route at any time (data connection permitting).
The arrows aren't always there IME (but the caveat with anything from the chocolate factory is it can change dramatically without notice). I only
saw them when I was in a position to select one of the routes. I found
that invariably they were suggesting a "quicker" route that would have
taken me out of my way and not uncommonly down a side road.
Countless trips with Uber drivers locally have shown me how pisspoor the
routing is. Forget speed humps and chicanes. I would suggest the biggest
cause of urban emission pollution is sat navigation software directing
cars down tiny side roads to shave a millisecond off the journey.
On 03/02/2025 16:54, Norman Wells wrote:
On 03/02/2025 15:32, The Todal wrote:
On 03/02/2025 14:32, Norman Wells wrote:
On 03/02/2025 13:41, Davey wrote:
Maybe every driver should be obliged to take a Speed Awareness Course
I have no idea what it runs, although I see lots of Linux terms in
the software listings. I ask it to do things, and it usually does
them.
I certainly have no intention of replacing it, either the Satnav or
the
car, I have managed so far without my car telling me what the speed
limit is, as I can read road signs myself.
Of course. Sometimes.
My experience is that my satnav knows rather better than I do what
the limit is at any one location. And that's not for the want of
trying.
If you say you're fully aware of the correct speed limit everywhere
you drive all of the time, I will frankly tend not to believe you.
Maybe a journey with a satnav that does could be revelatory.
periodically. To refresh their memory of which limits apply to which
types of road.
But it doesn't depend just on the type of road, so that would be
inaccurate and misleading.
A half decent satnav will know and tell you, wherever you are.
I don't think any satnav is capable of telling you the speed limit on
every road you happen to be on.
Satnavs should, and autonomous vehicle controls will have to, have some
way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
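For what it's worth, the "discard blocked routes" step can be sketched: if the blockage is in the data, it is just an edge filter on a shortest-path search. The sketch below (a standard Dijkstra-style search, not any vendor's code) also shows the case the thread keeps circling around: a blockage the system was never told about cannot be discarded, and sometimes no route is left at all.

```python
import heapq

# Shortest path over a road graph, skipping edges on a notified-blockage
# list. Returns None when every route is blocked -- the case a real
# vehicle still has to handle somehow.

def shortest_route(graph, start, goal, blocked=frozenset()):
    """graph: {node: [(neighbour, cost), ...]}; blocked: set of (a, b) edges."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if (node, nxt) not in blocked:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None  # no unblocked route exists
```

The filter itself really is one `if`; the engineering lives in keeping the `blocked` set complete and current, which is exactly the live-data problem raised earlier in the thread.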
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some
way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've
never worked as a programmer...."
It seems that even the most mundane bit of software these days is
described as AI! It must have been back in the 80's that a friend of
mine was working on software for automatic landing and take-off for airliners. Nothing either artificial or intelligent about it!
Just as they do with any legally imposed speed limits. With a proper database of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
On 04/02/2025 14:51, Simon Parker wrote:
On 03/02/2025 10:03, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0bh1eFdablU2@mid.individual.net...
It's trivial programming to ensure that limits are not exceeded by
any autonomous
vehicle, which is certainly not the case with human drivers, even
those who try to obey
them.
How about the humans who were supposed to put up the signs but
somehow forgot ?
What do you, and your robot friends, intend to do about them ?
Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
TRO with no corresponding signage installed?
Do NormCars (TM) (a Division of NormTech (TM)) have access to the
underlying TRO database and know to ignore a sign if it has no TRO
legally underpinning it?
Will NormCars (TM) know the legally enforceable speed limit from
accessing the TRO database and therefore drive at the correct speed even
if some ne'er do well has spray painted, nicked or otherwise interfered
with the signage?
Yes, of course. It's only sensible to know what speed limits actually
apply where after all.
On 04/02/2025 14:49, Simon Parker wrote:
Please detail how your autonomous vehicle has overcome these problems
evidently inherent in the current navigational systems in use, (both
stand-alone and embedded).
$75 billion has been spent by the automotive industry on developing
autonomous vehicle technology, including the software.
On 04/02/2025 12:32, Mark Goodge wrote:
And that also takes us back to my point, which is that a truly autonomous
vehicle has to be capable of deciding when to disregard the sat-nav, and be >> able to operate effectively in situations where the sat-nav data is missing >> or incomplete.
That's just a matter of programming to do what a sensible person would
do in the same circumstances.
Norman's responses to your comments also contradict his own earlier
assertions that autonomous vehicles don't need AI,
AI doesn't come into it.
and that all of these
problems have already been solved. Now he's merely saying that, in his
opinion, they can be solved and that solving them is trivial.
I'm sure you're reinventing the wheel bringing up what you think are
outstanding problems without knowing what has been solved already. The
answers to them, however, are so trivially easy that it is inconceivable
that they haven't all been addressed years ago.
Where do you think the estimated $75 billion spent by the automotive
industry on the development of autonomous vehicles has gone? And do you
really think they haven't already thought of all the issues you've raised?
On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:51, Simon Parker wrote:
On 03/02/2025 10:03, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0bh1eFdablU2@mid.individual.net...
It's trivial programming to ensure that limits are not exceeded by
any autonomous
vehicle, which is certainly not the case with human drivers, even
those who try to obey
them.
How about the humans who were supposed to put up the signs but
somehow forgot ?
What do you, and your robot friends, intend to do about them ?
Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
TRO with no corresponding signage installed?
Do NormCars (TM) (a Division of NormTech (TM)) have access to the
underlying TRO database and know to ignore a sign if it has no TRO
legally underpinning it?
Will NormCars (TM) know the legally enforceable speed limit from
accessing the TRO database and therefore drive at the correct speed even
if some ne'er do well has spray painted, nicked or otherwise interfered
with the signage?
Yes, of course. It's only sensible to know what speed limits actually
apply where after all.
How long do you think it will take for a national TRO database to be created and fully populated, and API access to it made available for sat-nav providers? Because it doesn't, as yet, exist.
Local highway authorities maintain their own repositories of TROs, but there's no requirement for them to be in a standard format. And, in particular, TROs don't usually contain coordinates. They describe affected stretches of road using phrases like "From the junction with Foo Street for
a distance of 86m in a northerly direction". Converting that into something you can store in a geographic database is not, despite your optimism, trivial.
After all, even Google doesn't use TROs to map speed limits. Their systems are intelligent enough to know what the default limits are in the UK for different types of road. But, for any departure from the default, it does exactly what a human would do - look at the signs:
https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/
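Mark's point about TRO wording can be made concrete. Even granted the road geometry (the polyline below is invented), resolving "for a distance of 86m" means measuring along the road's polyline; and locating "the junction with Foo Street" in the first place is a separate geocoding problem the sketch simply assumes away.

```python
import math

# Resolving a TRO-style linear reference ("from the junction with Foo
# Street for a distance of 86m") against known road geometry. Real TROs
# contain no coordinates at all, so the start point and the polyline
# here are assumptions for illustration.

def point_along(polyline, distance_m):
    """Walk distance_m along a polyline of (x, y) points in metres."""
    remaining = distance_m
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if remaining <= seg:
            t = remaining / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        remaining -= seg
    return polyline[-1]  # reference runs past the end of the geometry
```

Even this toy version has to decide what to do when the stated distance overruns the available geometry, which hints at why populating a national TRO database is not the trivial exercise it is claimed to be.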
On 2025-02-04, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some
way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've
never worked as a programmer...."
Indeed. A programmer would have called it a "simple matter
of programming" :-)
On Tue, 4 Feb 2025 16:37:23 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:49, Simon Parker wrote:
Please detail how your autonomous vehicle has overcome these problems
evidently inherent in the current navigational systems in use, (both
stand-alone and embedded).
$75 billion has been spent by the automotive industry on developing
autonomous vehicle technology, including the software.
$75 billion is a lot to spend on something you keep telling us is trivial. What a waste of money, eh.
On 04/02/2025 12:32, Mark Goodge wrote:
Norman's responses to your comments also contradict his own earlier
assertions that autonomous vehicles don't need AI,
AI doesn't come into it.
On Tue, 4 Feb 2025 15:05:02 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 12:32, Mark Goodge wrote:
And that also takes us back to my point, which is that a truly autonomous >>> vehicle has to be capable of deciding when to disregard the sat-nav, and be >>> able to operate effectively in situations where the sat-nav data is missing >>> or incomplete.
That's just a matter of programming to do what a sensible person would
do in the same circumstances.
The word "just" is doing a lot of heavy lifting there.
Norman's responses to your comments also contradict his own earlier
assertions that autonomous vehicles don't need AI,
AI doesn't come into it.
On the contrary, I think everyone who knows anything at all about the prospects for autonomous vehicles is of the opinion that AI is the long-term solution.
and that all of these
problems have already been solved. Now he's merely saying that, in his
opinion, they can be solved and that solving them is trivial.
I'm sure you're reinventing the wheel bringing up what you think are
outstanding problems without knowing what has been solved already. The
answers to them, however, are so trivially easy that it is inconceivable
that they haven't all been addressed years ago.
Where do you think the estimated $75 billion spent by the automotive
industry on the development of autonomous vehicles has gone? And do you
really think they haven't already thought of all the issues you've raised?
I am absolutely certain that they've considered all of these issues, and
many more besides. That's precisely why, despite spending $75 billion on it, we still don't have fully autonomous vehicles on the road. The automotive industry is putting a lot of time and money into getting it right. I think they would be slightly amused to hear you so blithely dismiss their efforts as trivial.
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a Train Captain) on board who is able to take over and drive it manually if necessary. And the trains themselves don't need any routing capability, because they run on rails. It's a very far cry from a fully autonomous road vehicle.
On 04/02/2025 18:49, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0etb5Fu84aU2@mid.individual.net...
You must then have missed the earlier references in this thread to studies showing that
autonomous vehicles are in fact already safer than human driven ones.
The studies were carried out in California.
Not on the Hanger Lane roundabout.
Nevertheless, from a document I cited earlier:
"Passengers in Edinburgh will be able to enjoy the worlds first fully
sized self-driving bus from the spring"
Does that indicate there is any substantial doubt, even on British urban >streets?
On 04/02/2025 18:49, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0etb5Fu84aU2@mid.individual.net...
On 04/02/2025 15:41, billy bookcase wrote:
snip
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than
the optimum outcome either permissible, or desirable, in real world
situations,
As it happens Norman is entirely correct in claiming AI has no role to play
in autonomous vehicles. But entirely wrong in imagining that all potential
real life situations and problems which might arise can be anticipated, never
mind modelled in experimental set-ups such as would allow autonomous vehicles
safely onto our roads.
You must then have missed the earlier references in this thread to studies showing
that
autonomous vehicles are in fact already safer than human driven ones.
The studies were carried out in California.
Not on the Hanger Lane roundabout.
Nevertheless, from a document I cited earlier:
"Passengers in Edinburgh will be able to enjoy the world's first fully sized self-driving bus from the spring"
Does that indicate there is any substantial doubt, even on British urban streets?
On 29/01/2025 19:56, Les. Hayward wrote:
It seems that even the most mundane bit of software these days is
described as AI! It must have been back in the 80's that a friend of
mine was working on software for automatic landing and take-off for
airliners. Nothing either artificial or intelligent about it!
Bit late wasn't he?
https://en.wikipedia.org/wiki/Hawker_Siddeley_Trident#avionics
will tell you that "it enabled the Trident to perform the first
automatic landing by a civil airliner in scheduled passenger service on
10 June 1965"
Andy
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some
way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've never >> worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I
have programmed in more than one, understand the word 'not'. It's a
very basic operator.
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a Train
Captain) on board who is able to take over and drive it manually if
necessary. And the trains themselves don't need any routing capability,
because they run on rails. It's a very far cry from a fully autonomous
road vehicle.
And how many times have they been called into action to drive a train >manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
to employ people to do nothing, as is their wont.
On 04/02/2025 21:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:51, Simon Parker wrote:
On 03/02/2025 10:03, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0bh1eFdablU2@mid.individual.net...
It's trivial programming to ensure that limits are not exceeded by any autonomous vehicle, which is certainly not the case with human drivers, even those who try to obey them.
How about the humans who were supposed to put up the signs but
somehow forgot ?
What do you, and your robot friends, intend to do about them ?
Or, as this is a legal newsgroup, a sign with no underlying TRO, or a
TRO with no corresponding signage installed?
Do NormCars (TM) (a Division of NormTech (TM)) have access to the
underlying TRO database and know to ignore a sign if it has no TRO
legally underpinning it?
Will NormCars (TM) know the legally enforceable speed limit from accessing the TRO database and therefore drive at the correct speed even if some ne'er do well has spray painted, nicked or otherwise interfered with the signage?
Yes, of course. It's only sensible to know what speed limits actually
apply where after all.
How long do you think it will take for a national TRO database to be created and fully populated, and API access to it made available for sat-nav providers? Because it doesn't, as yet, exist.
So you say.
However, my little Garmin clearly has access to a data-base of speed limits already, so one clearly exists and it seems to me to be pretty comprehensive. Whose it is I have no idea. But it's a sine qua non that autonomous vehicles will need to know, and it's obviously not an insuperable problem.
Local highway authorities maintain their own repositories of TROs, but there's no requirement for them to be in a standard format. And, in particular, TROs don't usually contain coordinates. They describe affected stretches of road using phrases like "From the junction with Foo Street for a distance of 86m in a northerly direction". Converting that into something you can store in a geographic database is not, despite your optimism, trivial.
One wonders how my little Garmin copes then.
After all, even Google doesn't use TROs to map speed limits. Their systems are intelligent enough to know what the default limits are in the UK for different types of road. But, for any departure from the default, it does exactly what a human would do - look at the signs:
https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/
My Garmin doesn't. It doesn't have eyes.
The USA is considerably ahead of us, but this is where we're at in Europe:
https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.
On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some way programmed in to discard any routes where there is a notified blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've never
worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I
have programmed in more than one, understand the word 'not'. It's a
very basic operator.
Please name three different programming languages which you have used that include the word "not" as an operator.
But, anyway, it's not the syntax I was querying. It's how to ensure that the existence of a blockage, and its precise location, is passed to the software promptly and in a format which can easily be interpreted by the software.
For example, when the notification comes in the form of a policeman saying "you can't go that way, mate, it's blocked".
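A minimal sketch of the two halves of this dispute: the route-discarding step really is close to a one-liner, but it only works once a blockage has somehow been turned into structured data first. The class names and segment IDs below are illustrative assumptions, not any satnav vendor's actual API.

```python
# Hypothetical sketch of "discard any routes where there is a notified
# blockage". The filter itself is trivial; the hard part raised above --
# turning "you can't go that way, mate" into a machine-readable record --
# has to happen before this function is ever called.
from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    name: str
    segments: frozenset  # road segment IDs making up the route

def viable_routes(routes, blocked_segments):
    # Keep only routes that use no segment with a notified blockage.
    return [r for r in routes if not (r.segments & set(blocked_segments))]

routes = [
    Route("via A14", frozenset({"A14/21", "A14/22"})),
    Route("via A428", frozenset({"A428/3", "A428/4"})),
]
# One notified blockage on the A14:
print([r.name for r in viable_routes(routes, {"A14/22"})])  # ['via A428']
```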
On Tue, 4 Feb 2025 21:45:09 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a Train Captain) on board who is able to take over and drive it manually if necessary. And the trains themselves don't need any routing capability, because they run on rails. It's a very far cry from a fully autonomous road vehicle.
And how many times have they been called into action to drive a train
manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
to employ people to do nothing, as is their wont.
I've seen one do it, once. So, not very often. But the point is that the system isn't, yet, considered reliable enough to do without the facility for manual driving, even if it's good enough for enough of the time in day to
day use.
On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:
The USA is considerably ahead of us, but this is where we're at in Europe:
https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.
Yes, we're at the point where, as you conveniently highlighted for us "most of them travel at low speed, on fixed routes and still with safety drivers".
That does not, to me, suggest that all of the problems inherent with autonomous vehicles have been solved, nor that solving those problems will
be trivial.
On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 21:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:32:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:51, Simon Parker wrote:
On 03/02/2025 10:03, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0bh1eFdablU2@mid.individual.net...
It's trivial programming to ensure that limits are not exceeded by any autonomous vehicle, which is certainly not the case with human drivers, even those who try to obey them.
How about the humans who were supposed to put up the signs but
somehow forgot ?
What do you, and your robot friends, intend to do about them ?
Or, as this is a legal newsgroup, a sign with no underlying TRO, or a TRO with no corresponding signage installed?
Do NormCars (TM) (a Division of NormTech (TM)) have access to the
underlying TRO database and know to ignore a sign if it has no TRO
legally underpinning it?
Will NormCars (TM) know the legally enforceable speed limit from accessing the TRO database and therefore drive at the correct speed even if some ne'er do well has spray painted, nicked or otherwise interfered with the signage?
Yes, of course. It's only sensible to know what speed limits actually apply where after all.
How long do you think it will take for a national TRO database to be created
and fully populated, and API access to it made available for sat-nav
providers? Because it doesn't, as yet, exist.
So you say.
So I know.
I am aware that arguing from self-authority is frowned on in Usenet, but in this particular context I do think it's worth pointing out that I am two things which you are not. Firstly I am a programmer, and, although I have never written routing software, I have worked with it, and I have seen its complexity. And, secondly, possibly more pertinently in this context, I am a local councillor, on a council which is the local highway authority. Both of those facts are easily verifiable, should you care to Google me. I don't
hide my identity on Usenet.
As a councillor, I have a login on my council's intranet which, among other things, gives me access to the council's road network database, including TROs. And some of those TROs go back a long way. Far enough back that their only digital existence is as a scanned copy of a typewritten document. The idea that there's a national database easily accessible by third parties is, quite simply, fantasy. There may be, one day. But we are very far from it at the moment.
However, my little Garmin clearly has access to a data-base
of speed limits already, so one clearly exists and it seems to me to be
pretty comprehensive. Whose it is I have no idea. But it's a sine qua
non that autonomous vehicles will need to know, and it's obviously not
an insuperable problem.
Garmin, like most sat-nav providers, uses a variety of sources for its database.
Local highway authorities maintain their own repositories of TROs, but there's no requirement for them to be in a standard format. And, in particular, TROs don't usually contain coordinates. They describe affected stretches of road using phrases like "From the junction with Foo Street for a distance of 86m in a northerly direction". Converting that into something you can store in a geographic database is not, despite your optimism, trivial.
One wonders how my little Garmin copes then.
It copes because most speed limits don't change very often, and when they do change, they usually only affect a short stretch of road. Provided its data sources include sources of speed limits, it can usually update that data before most people notice. But if you happen to drive along a stretch of road where there has been a recent speed limit change, then you will probably observe that your sat-nav doesn't immediately reflect that. Again, a few minutes with Google (or search engine of your choice) will find plenty of instances of people complaining that their Garmin sat-nav has got a speed limit wrong, or doesn't update when the limit changes.
After all, even Google doesn't use TROs to map speed limits. Their systems are intelligent enough to know what the default limits are in the UK for different types of road. But, for any departure from the default, it does exactly what a human would do - look at the signs:
https://blog.google/products/maps/how-ai-and-imagery-keep-speed-limits-on-google-maps-updated/
My Garmin doesn't. It doesn't have eyes.
No, but Garmin probably buys data from Google, and other providers of the data.
One of the leading suppliers of speed limit data is Here (that is, the company called "Here", not here as in present at this location). Given the requirement for all cars sold in the EU from July 2024 onwards to have Intelligent Speed Assistance (ISA), this is clearly a growing market. Here has published information which gives some background to both ISA and the supply of data.
Interestingly, ISA providers are only required to meet a minimum level of
90% accuracy. And temporary speed limits are explicitly excluded from ISA requirements. So there's a clear recognition that there will be times when the software doesn't know, or can't know, the speed limit.
For human drivers that's not a major issue, there's an expectation that
human drivers will still be able to read signs even where the software doesn't. But an autonomous vehicle only has the software, it doesn't have a human able to give it information.
On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a
Train Captain) on board who is able to take over and drive it manually
if necessary. And the trains themselves don't need any routing
capability, because they run on rails. It's a very far cry from a fully
autonomous road vehicle.
And how many times have they been called into action to drive a train
manually? Perhaps you'd tell us. Otherwise it seems just a union ruse
to employ people to do nothing, as is their wont.
It's not just driving the train. It's ensuring there's nothing fouling
the doors as it departs. Something which - despite being sensored up the wazzoo - still occurs. With potentially fatal results.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0f7miF2crrU1@mid.individual.net...
On 04/02/2025 18:49, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0etb5Fu84aU2@mid.individual.net...
On 04/02/2025 15:41, billy bookcase wrote:
snip
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than the optimum outcome either permissible, or desirable, in real world situations,
As it happens Norman is entirely correct in claiming AI has no role to play in autonomous vehicles. But entirely wrong in imagining that all potential real life situations and problems which might arise can be anticipated, never mind modelled in experimental set-ups such as would allow autonomous vehicles safely onto our roads.
You must then have missed the earlier references in this thread to studies showing that autonomous vehicles are in fact already safer than human driven ones.
The studies were carried out in California.
Not on the Hanger Lane roundabout.
Nevertheless, from a document I cited earlier:
"Passengers in Edinburgh will be able to enjoy the world's first fully sized self-driving bus from the spring"
Does that indicate there is any substantial doubt, even on British urban streets?
This one ?
quote
The UK's first full-sized driverless buses have started operating for passengers in Edinburgh.
The autonomous vehicles, operated by Stagecoach, run across the Forth
Road Bridge from Ferrytoll park-and-ride in Fife to Edinburgh Park station.
Despite the "driverless" name, the buses operate with two staff on board.
One will sit in the driver's seat to monitor the technology, and a
so-called bus captain will help passengers with boarding and buying tickets
unquote
https://www.bbc.co.uk/news/uk-scotland-edinburgh-east-fife-65589913
To repeat ---
"One will sit in the driver's seat to *monitor the technology*"
Doubtless with a big handle or button conveniently placed nearby.
Just in case the "monitoring" gets a bit "stressful"
On Tue, 4 Feb 2025 19:21:53 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 18:49, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0etb5Fu84aU2@mid.individual.net...
You must then have missed the earlier references in this thread to studies showing that
autonomous vehicles are in fact already safer than human driven ones.
The studies were carried out in California.
Not on the Hanger Lane roundabout.
Nevertheless, from a document I cited earlier:
"Passengers in Edinburgh will be able to enjoy the world’s first fully
sized self-driving bus from the spring"
It's a shuttle bus that runs back and forth between just two locations. The entirety of its route on the public highway is on dual carriageways. And it still needs someone to sit in the driver's seat to "monitor the technology". Oh, and it's not even autonomous end to end.
Their driverless buses have sensors enabling them to travel on
pre-selected roads at up to 50mph.
On each trip they switch between autonomous and manual driving, being
controlled by computer across the Forth Road Bridge, and from the M90 and
M8 to the A720 in Edinburgh.
https://www.bbc.co.uk/news/articles/c0q0lg3j1exo
Does that indicate there is any substantial doubt, even on British urban
streets?
It indicates that there's still considerable doubt, as it happens.
On 05/02/2025 10:30, Jethro_uk wrote:
On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of
the commander and the flight technicians in fully automated
vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a
Train Captain) on board who is able to take over and drive it
manually if necessary. And the trains themselves don't need any
routing capability, because they run on rails. It's a very far cry
from a fully autonomous road vehicle.
And how many times have they been called into action to drive a train
manually? Perhaps you'd tell us. Otherwise it seems just a union
ruse to employ people to do nothing, as is their wont.
It's not just driving the train. It's ensuring there's nothing fouling
the doors as it departs. Something which - despite being sensored up
the wazzoo - still occurs. With potentially fatal results.
Seems like we'll need an on-board captain then, presumably with
government accreditation, for any autonomous vehicle journey, even a
private car.
That's a bit of a downer.
On 04/02/2025 15:05, Norman Wells wrote:
On 04/02/2025 12:32, Mark Goodge wrote:
Norman's responses to your comments also contradict his own earlier
assertions that autonomous vehicles don't need AI,
AI doesn't come into it.
Which autonomous vehicles don't use AI for object recognition?
On 04/02/2025 20:43, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0f7miF2crrU1@mid.individual.net...
On 04/02/2025 18:49, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0etb5Fu84aU2@mid.individual.net...
On 04/02/2025 15:41, billy bookcase wrote:
snip
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than the optimum outcome either permissible, or desirable, in real world situations,
As it happens Norman is entirely correct in claiming AI has no role to play in autonomous vehicles. But entirely wrong in imagining that all potential real life situations and problems which might arise can be anticipated, never mind modelled in experimental set-ups such as would allow autonomous vehicles safely onto our roads.
You must then have missed the earlier references in this thread to studies showing that autonomous vehicles are in fact already safer than human driven ones.
The studies were carried out in California.
Not on the Hanger Lane roundabout.
Nevertheless, from a document I cited earlier:
"Passengers in Edinburgh will be able to enjoy the world's first fully sized
self-driving bus from the spring"
Does that indicate there is any substantial doubt, even on British urban streets?
This one ?
quote
The UK's first full-sized driverless buses have started operating for
passengers in Edinburgh.
The autonomous vehicles, operated by Stagecoach, run across the Forth
Road Bridge from Ferrytoll park-and-ride in Fife to Edinburgh Park station.
Despite the "driverless" name, the buses operate with two staff on board.
One will sit in the driver's seat to monitor the technology, and a
so-called bus captain will help passengers with boarding and buying tickets
Unions again, eh?
unquote
https://www.bbc.co.uk/news/uk-scotland-edinburgh-east-fife-65589913
To repeat ---
"One will sit in the driver's seat to *monitor the technology*"
Doubtless with a big handle or button conveniently placed nearby.
Just in case the "monitoring" gets a bit "stressful"
Got any stats on how many times it has gone wrong and he's been needed?
That AI is used in AVs for object recognition is a given.
Anyone claiming that there is no AI in AVs can be dismissed as talking
from a position of ignorance. As can anyone that agrees with them. :-)
Much the same as on the Victoria Line; which has been fully automatic
using ATO ever since its inception in 1968; but which nevertheless has
a "driver" in the cab.
On 05/02/2025 09:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some way programmed in to discard any routes where there is a notified blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've never worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I have programmed in more than one, understand the word 'not'. It's a very basic operator.
Please name three different programming languages which you have used that include the word "not" as an operator.
FORTRAN, Lisp, Modula2, Pascal (I can't count)
And a number of others that use ! or some other cryptic symbol as not.
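For illustration, a current mainstream language where 'not' is a spelled-out keyword operator (Python), alongside the .NOT. / NOT forms of the languages listed above; C-family languages spell the same operator '!'. The road names are purely illustrative.

```python
# 'not' is a keyword operator in Python, like 'and' and 'or'.
road_blocked = True
print(not road_blocked)            # False

# It also combines with 'in' for membership tests:
closed_roads = {"A14", "M6"}
print("A428" not in closed_roads)  # True
```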
The way it works at present on the Satnavs that have realtime live traffic updates is that the system predicts increased journey time and flags sections of road ahead red or yellow where the traffic speed from other users is unreasonably slow. I have ours set to only deviate if the (long) journey time increases by more than 15 minutes.
It seems to work well enough and you can sometimes tell who else is using the same nav system by where they come off the normal route.
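That heuristic can be sketched in a few lines; the 15-minute threshold mirrors the setting described above, and the function name and figures are purely illustrative.

```python
# Illustrative sketch of the live-traffic deviation rule described above:
# only leave the planned route when traffic adds more than a user-set
# number of minutes to the predicted journey time.
def should_reroute(predicted_min, original_min, threshold_min=15):
    return predicted_min - original_min > threshold_min

print(should_reroute(200, 190))  # 10 min added -> False, stay on route
print(should_reroute(210, 190))  # 20 min added -> True, deviate
```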
On 05/02/2025 10:33, Mark Goodge wrote:
On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:
One wonders how my little Garmin copes then.
It copes because most speed limits don't change very often, and when they do change, they usually only affect a short stretch of road. Provided its data sources include sources of speed limits, it can usually update that data before most people notice. But if you happen to drive along a stretch of road where there has been a recent speed limit change, then you will probably observe that your sat-nav doesn't immediately reflect that. Again, a few minutes with Google (or search engine of your choice) will find plenty of instances of people complaining that their Garmin sat-nav has got a speed limit wrong, or doesn't update when the limit changes.
That may be because they haven't updated their maps for some time.
But there's no reason why live mapping should have any speed limit wrong for long.
The thing is, though, that there obviously are data-bases of speed limits out there that are rather more comprehensive, up-to-date and accessible than you think.
For human drivers that's not a major issue, there's an expectation that human drivers will still be able to read signs even where the software doesn't. But an autonomous vehicle only has the software, it doesn't have a human able to give it information.
No, they will also have an array of sensors that vastly exceed any
human. The satnav speed limit data-base is only part of their toolkit.
"Simon Parker" <simonparkerulm@gmail.com> wrote in message news:m0h3kgFqoilU4@mid.individual.net...
snippage
( Autopilots can malfunction but then who ever claimed otherwise ? )
That AI is used in AVs for object recognition is a given.
Complete and utter nonsense.
If a transmitter sends out signals which are reflected back within a specified period, then that is all the evidence which is required of the presence of an object in that direction, at that distance; which therefore constitutes something which needs to be avoided.
Even robot vacuum cleaners are fitted with that capability.
And in the latter case, whether it's an armchair or the family Labrador makes absolutely no difference whatsoever. And it's exactly the same with AVs.
And will remain the case until such time as they, the AVs, decide for themselves to deliberately run humans down, as opposed to, say, running into trees etc. And that will be the point, and only then, where AI has finally kicked in.
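The time-of-flight claim above can be made concrete: an echo returning within a set window implies an object at a known range, with no need to classify it. A minimal sketch using the speed of sound, as for an ultrasonic parking sensor; a radar or lidar version would use the speed of light, and the 2 m threshold is an illustrative assumption.

```python
# Time-of-flight ranging: an echo within a set window implies an object
# at a known distance, with no classification (armchair vs Labrador) needed.
SPEED_OF_SOUND_M_S = 343.0

def echo_distance_m(round_trip_s):
    # The pulse travels out and back, so halve the total path length.
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

def obstacle_within(round_trip_s, limit_m=2.0):
    return echo_distance_m(round_trip_s) <= limit_m

print(echo_distance_m(0.01))   # 0.01 s round trip -> 1.715 m
print(obstacle_within(0.01))   # True: within 2 m, so avoid
```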
Anyone claiming that there is no AI in AVs can be dismissed as talking
from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds nothing whatsoever to your argument ? Which in itself is so spectacularly wrong as to be almost embarrassing ?
Having worked extensively with the local council on a related matter,
[^1], I can assure you that TROs are created individually by each LA.
They do not even create their own searchable repository of TROs, never
mind anything even approaching a database of all TROs nationwide which
third parties can access in real time.
On Wed, 05 Feb 2025 12:51:33 +0000, billy bookcase wrote:
Much the same as on the Victoria Line; which has been fully automatic
using ATO ever since its inception in 1968; but which nevertheless has
a "driver" in the cab.
For interest, the Moorgate crash was in 1975.
On 04/02/2025 16:14, Norman Wells wrote:
On 04/02/2025 14:47, Simon Parker wrote:
On 03/02/2025 10:07, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too close >>>>> to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autopilots, where each set of inputs, no matter how complex, will produce exactly the same result every time.
AI systems on the other hand, learn and make up new rules as they go
along.
And so the process by which they actually arrive at particular decisions may not be readily available to humans.
I think this is possibly what a lot of people are really worried about.
I give you the details of Lufthansa LH1829 below. Numerous other examples are available, (some of which may include the grounding of the entire Boeing 737 MAX fleet).
Whilst some knowledge of aviation terms will help in reading what
follows, it is by no means mandatory.
Please detail, as simply as possible, the likely outcome for flight
LH1829 had the commander not been present and had he not had access
to flight technicians via Telex.
Obviously, it would have crashed, killing all on board.
Most people, hopefully you included, would consider that to be a "sub-optimal outcome".
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
<SFX: Buzzer>
I recommend researching the role and responsibilities of the "Passenger Service Agent" on the DLR.
And navigating the DLR is light years away from a Level 5 AV.
But a failsafe default of come to a halt and turn everything off, which would doubtless be in any autonomous vehicle, isn't really an option doing 500mph at 30,000 ft.
Remind me not to travel in a NormCar (TM) at any time now or in the future. I do not really want to be in a vehicle that in the event of an incident is designed and programmed to come to a halt in the right-most lane of a motorway and turn off both its headlights and hazard lights despite it being night time.
You claim to have programming experience so you ought to be familiar
with the Pareto Principle. The first 80% is relatively easy. That last 20% is the killer. 80% of the bugs are in 20% of the code. 80% of the complexity comes from 20% of the code. 80% of the R&D is spent on 20%
of the application. AVs don't even have the initial 80% mastered yet.
On 04/02/2025 15:25, Norman Wells wrote:
On 04/02/2025 12:19, Max Demian wrote:
On 03/02/2025 19:05, Norman Wells wrote:
Provided it is working on a live real-time mapping system in which
road closures are clearly indicated, which is not beyond the wit of
man, the satnav will simply re-route to avoid them. They're very
good at finding alternative routes when you miss a turning for
example, so it's not a new or insurmountable problem.
Won't the satnav keep telling you to take the original route? Is
there a way to tell it that there is something in the way? The same
problem would occur in the case of an autonomous vehicle.
Satnavs should, and autonomous vehicle controls will have to, have
some way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
Is it?
Can you post a copy of your program please along with relevant
certification for the design, coding, debugging, testing and deployment.
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0h3kgFqoilU4@mid.individual.net...
snippage
( Autopilots can malfunction but then who ever claimed otherwise ? )
That AI is used in AVs for object recognition is a given.
Complete and utter nonsense.
If a transmitter sends out signals which are reflected back within a
specified period, then that is all the evidence which is required of the
presence of an object in that direction, at that distance; which
therefore constitutes something which needs to be avoided.
Even robot vacuum cleaners are fitted with that capability.
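[Ed: the physics behind this claim is indeed simple: an echo received after round-trip time t places an object at roughly v*t/2. A minimal sketch, with illustrative numbers not taken from the thread; note it shows detection only, not the classification question argued over below.]

```python
# Time-of-flight ranging: a reflection returning after `round_trip_s`
# seconds implies an object at distance v * t / 2 (the signal travels
# out and back).
def echo_distance(round_trip_s, wave_speed_m_s):
    return wave_speed_m_s * round_trip_s / 2.0

SPEED_OF_SOUND = 343.0        # m/s: ultrasonic sensor, e.g. robot vacuum
SPEED_OF_LIGHT = 299_792_458  # m/s: lidar/radar

print(echo_distance(0.01, SPEED_OF_SOUND))  # ~1.7 m away
```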
And in the latter case, whether it's an armchair or the family Labrador
makes absolutely no difference whatsoever. And it's exactly the same
with AVs.
And will remain the case until such time as they, the AVs, decide for
themselves to deliberately run humans down. As opposed to, say, running
into trees etc. And that will be the point, and only then, where AI
has finally kicked in.
Anyone claiming that there is no AI in AVs can be dismissed as talking
from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds
nothing whatsoever to your argument ? Which in itself is so specularly
wrong as to be almost embarrassing ?
Well, *someone* out of you and Simon is "so specularly wrong as to be
almost embarrassing"... but it isn't Simon.
On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 05/02/2025 09:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some
way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've
never worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I
have programmed in more than one, understand the word 'not'. It's a
very basic operator.
Please name three different programming languages which you have used
that include the word "not" as an operator.
FORTRAN, Lisp, Modula2, Pascal (I can't count)
Shhh! I'd set that as a test for Norman!
And a number of others that use ! or some other cryptic symbol as not.
Pretty much every language has a negation operator. It would be almost impossible to write useful code if it didn't. But, these days, the operator is, as you say, typically a symbol rather than the word "not". That's why I was curious as to how recent Norman's programming knowledge, if any, is.
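[Ed: both sides are half right here. Pascal and Python spell logical negation as the word `not`, and Fortran writes `.NOT.`, while the C family uses the symbol `!`. A trivial illustration in Python:]

```python
# Python spells logical negation with the keyword "not",
# where C, Java and friends would write "!".
route_blocked = True
if not route_blocked:
    action = "proceed"
else:
    action = "re-route"
print(action)  # re-route

# The bitwise complement, by contrast, is a symbol even in Python:
print(~0)  # -1
```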
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0fg35F4497U3@mid.individual.net...
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a Train
Captain) on board who is able to take over and drive it manually if
necessary. And the trains themselves don't need any routing capability,
because they run on rails. It's a very far cry from a fully autonomous
road vehicle.
And how many times have they been called into action to drive a train manually?
Perhaps you'd tell us. Otherwise it seems just a union ruse to employ people to do
nothing, as is their wont.
Their presence reassures passengers who appreciate the human touch
in the event of a breakdown. And can also act as a useful deterrent against vandalism and anti-social behaviour in the event of any such breakdown
taking place.
Much the same as on the Victoria Line; which has been fully automatic
using ATO ever since its inception in 1968; but which nevertheless has
a "driver" in the cab.
Not of course that you might be expected to understand much
of this, for all too obvious reasons.
On Wed, 05 Feb 2025 10:50:20 +0000, Norman Wells wrote:
On 05/02/2025 10:30, Jethro_uk wrote:
On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of
the commander and the flight technicians in fully automated
vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a
Train Captain) on board who is able to take over and drive it
manually if necessary. And the trains themselves don't need any
routing capability, because they run on rails. It's a very far cry
from a fully autonomous road vehicle.
And how many times have they been called into action to drive a train
manually? Perhaps you'd tell us. Otherwise it seems just a union
ruse to employ people to do nothing, as is their wont.
It's not just driving the train. It's ensuring there's nothing fouling
the doors as it departs. Something which - despite being sensored up
the wazzoo - still occurs. With potentially fatal results.
Seems like we'll need an on-board captain then, presumably with
government accreditation, for any autonomous vehicle journey, even a
private car.
That's a bit of a downer.
Sorry.
What the blazes are you blithering on about ?
On 04/02/2025 15:20, Norman Wells wrote:
On 04/02/2025 12:16, Mark Goodge wrote:
On Mon, 3 Feb 2025 19:05:16 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
Provided it is working on a live real-time mapping system in which road
closures are clearly indicated, which is not beyond the wit of man, the
satnav will simply re-route to avoid them.
But that's the point. It wasn't. It can't be. When the police decide to
block a road and turn people away, that information isn't available to
navigation systems in real time.
It would be trivially easy to do though. Police alert HQ, HQ tells
mapping, mapping puts a block where the accident is, autonomous
vehicles re-route to avoid it.
Is the police officer placing a general call with HQ or to a specific
desk at HQ? What happens if the request cannot be connected to HQ
immediately? How do HQ alert mapping? Is HQ alerting Google Maps, Apple
Maps, Garmin, et al individually, or are they alerting a central
resource which subsequently pushes this data out to various service
providers?
Not forgetting, of course, and quite topical at the moment: who is
paying for all this?
At present, Google, Apple and Garmin get their data, mostly, without
charge.
Why would they want to move to a model that involves them
paying for what they currently get for free?
It's the same problem as above. If the vehicle knows from an update to
the live map that there is a road blockage, as should be a priority when
autonomous vehicles become common, it will simply re-route to avoid it.
But, again, that's the point. This information is not available in real
time to the navigation systems. A human can cope with route-finding if
necessary without resorting to them.
Except that a human won't do anything either until and unless he knows
where a blockage is.
As autonomous vehicles become used there will have to be a requirement
for accurate and rapid information on blockages.
Information on congestion is readily available already in real time.
There's no technical reason why the same could not be done for blockages.
It is clear that you do not understand how the current system works
because, if you did, you would know why this will not work for blockages
such as those being described.
the uniform (just as you do), raise an antenna with a microphone, and
tell him to talk clearly to Siri identifying the exact location of the
problem using what three words or whatever.
Well, yes. But that's a lot more complex than merely following a route.
It requires at least reasonable voice recognition, for example, and then
being able to follow the instructions given.
These things exist already.
If the policeman has a Scottish accent? Or Brummie? (I recommend searching for "Amy101" if the second reference doesn't elicit a "light
bulb" moment.)
I don't doubt that, given sufficiently advanced AI, this will be a
solvable problem. But it's not a problem you can solve just with a set
of preset rules. So until autonomous vehicles have AI which is as robust
at problem solving as human intelligence, it isn't really autonomous.
It's not AI, which is something entirely different.
Modern, (c.f. "Current"), voice recognition, (i.e. the very systems you
are referencing above that "already exist") is "not AI"? Are you sure?
I recommend phoning a friend, or asking the audience before you lock in
your answer. :-)
Also, please detail which object recognition system you are using in
NormCars (TM) that can recognise a police officer by their uniform alone
but which does not employ AI.
Going on about live routing data is a red herring. As well as ongoing
situations where the data simply isn't available to the system (eg,
where the police decide to block a road), you also have to be able to
cope with situations where the data exists but isn't available to the
device (eg, in mobile notspots). To be truly autonomous, a self-driving
vehicle has to be able to operate solely on the basis of road signs,
markings, offline maps where available and, if necessary, verbal
instructions from a human. Just like humans can.
And just like autonomous vehicles can already, but safer.
There are currently no SAE Level 5 automated cars deployed anywhere in
the world. There are some SAE Level 4 automated vehicles operating in strictly controlled areas but they are in strictly controlled areas for precisely the reasons Mark mentioned.
On 04/02/2025 21:22, Mark Goodge wrote:
It's a shuttle bus that runs back and forth between just two locations.
The entirety of its route on the public highway is on dual carriageways.
And it still needs someone to sit in the driver's seat to "monitor the
technology". Oh, and it's not even autonomous end to end.
Their driverless buses have sensors enabling them to travel on
pre-selected roads at up to 50mph.
On each trip they switch between autonomous and manual driving, being
controlled by computer across the Forth Road Bridge, and from the M90
and M8 to the A720 in Edinburgh.
https://www.bbc.co.uk/news/articles/c0q0lg3j1exo
Does that indicate there is any substantial doubt, even on British urban
streets?
It indicates that there's still considerable doubt, as it happens.
What it indicates to me, in the absence of any statistics of failure
rates, is that the law which requires such precautions may be rather
out-of-date.
On 04/02/2025 21:22, Mark Goodge wrote:
On Tue, 4 Feb 2025 19:21:53 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 18:49, billy bookcase wrote:
"Norman Wells" <hex@unseen.ac.am> wrote in message
news:m0etb5Fu84aU2@mid.individual.net...
You must then have missed the earlier references in this thread to
studies showing that autonomous vehicles are in fact already safer than
human driven ones.
The studies were carried out in California. Not on the Hanger Lane
roundabout.
Nevertheless, from a document I cited earlier:
"Passengers in Edinburgh will be able to enjoy the world’s first full
sized self-driving bus from the spring"
It's a shuttle bus that runs back and forth between just two locations.
The entirety of its route on the public highway is on dual carriageways.
And it still needs someone to sit in the driver's seat to "monitor the
technology". Oh, and it's not even autonomous end to end.
Their driverless buses have sensors enabling them to travel on
pre-selected roads at up to 50mph.
On each trip they switch between autonomous and manual driving, being
controlled by computer across the Forth Road Bridge, and from the M90
and M8 to the A720 in Edinburgh.
https://www.bbc.co.uk/news/articles/c0q0lg3j1exo
Does that indicate there is any substantial doubt, even on British urban
streets?
It indicates that there's still considerable doubt, as it happens.
It is worse than that:
The service has been withdrawn as there were insufficient passenger
numbers to make it cost-effective. (Not surprising when the cost of all
the tech, plus a "not a driver" and "conductor" were factored in. I.e.
all the costs of the original bus, plus all the AV tech plus two members
of staff on board rather than the usual one made the project unfeasible
on cost grounds.)
S&P Global Mobility (Google them) predict that Level 5 passenger cars
will not appear before 2035 at the earliest.
On 04/02/2025 16:25, Norman Wells wrote:
On 04/02/2025 15:41, billy bookcase wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0enkeFqoikU2@mid.individual.net...
Whilst some knowledge of aviation terms will help in reading what
follows, it is by no
means mandatory.
Please detail, as simply as possible, the likely outcome for flight
LH1829 had the
commander not been present and had he not had access to flight
technicians via Telex.
Then please detail which parties will be fulfilling the roles of the
commander and the
flight technicians in fully automated vehicles.
I fail to see any possible relevance in any of your questions.
Unless you can detail where I have ever claimed that AI has any possible
role to play in either "autonomous" vehicles, or in autopilots.
AI relies on measuring outcomes, in order to "learn".
In neither "autonomous" vehicles nor autopilots is anything less than
the optimum outcome either permissible, or desirable, in real world
situations.
As it happens Norman is entirely correct in claiming AI has no role to
play in autonomous vehicles. But entirely wrong in imagining that all
potential real life situations and problems which might arise can be
anticipated, never mind modelled in experimental set-ups such as would
allow autonomous vehicles safely onto our roads.
You must then have missed the earlier references in this thread to
studies showing that autonomous vehicles are in fact already safer
than human driven ones.
They are currently allowed already in parts of the USA, and in France
and Germany.
https://ioki.com/en/autonomous-vehicles-turning-point-in-european-
regional-and-local-transport/
#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.
In the UK, they could be allowed by 2026.
It is my sad duty to inform you that you have, again, engaged in your
usual habit of speaking with great authority on a matter whilst relying
on Google searches and quick skims of articles in the background.
To much fanfare, Germany did indeed announce approval for autonomous
vehicles in July of last year. But those familiar with the system, (in contrast to those relying on Google to prop up their arguments), will be aware of the following key points:
(1) This is a test / trial / whatever you want to call it of just six autonomous vehicles (AVs), not a mass roll-out.
(2) It is tightly restricted to the Rhine-Main region, more specifically
the city of Darmstadt in the Offenbach district, and even more
specifically restricted only to defined routes within that city.
(3) As can be gathered from point (2), but worth stating for the
avoidance of doubt, the vehicles involved are Level 4 AVs, NOT Level 5.
They have a "safety monitor" behind the wheel and are also supervised remotely at a control centre.
(4) In the initial part of the trial, the AVs will not have passengers
on board and will instead focus on collecting data.
(5) In time, project staff members will ride as passengers. (This
mirrors the Waymo roll-out in San Francisco.)
(6) If the trials are successful, project staff members will be followed
by testing using "Approved Users" (again, mirroring Waymo's "Trusted Testers").
So, just to be clear: In Germany, on specific pre-defined routes in a
single city, there are trials involving six cars that have a "not a
driver" at the wheel and a remote operator back in a control room
shuttling staff members, and carefully selected members of the public
along those specific pre-defined routes.
It is hardly The Jetsons, is it?
JJ Ricks documents each of his journeys using Waymo. I invite you to
watch this video:
https://www.youtube.com/watch?v=zdKCQKBvH-A
Note that at around 6:00 (Chapter "Right Turn Detour") the Waymo takes a
right turn earlier than expected to make an easier left turn thereby
avoiding a complicated junction (a difficult "unprotected left" which
Waymo has a known habit of avoiding). It does the same at around the
ten minute mark (see previous comment on "known habits" of current AVs).
Things get really messy from the 12:35 mark onward. I actively invite
you to watch it for yourself, but in the meantime, here's Waymo's
official explanation of what happened:
"While driving fully autonomously through an extended work zone, the
Waymo Driver detected an unusual situation and requested the attention
of a remote Fleet Response specialist to provide additional information.
During that interaction, the Fleet Response team provided incorrect guidance, which made it challenging for the Waymo Driver to resume its intended route, and required Waymo's Roadside Assistance team to
complete the trip. While the situation was not ideal, the Waymo Driver operated the vehicle safely until Roadside Assistance arrived.
Throughout, Waymo's team was in touch with the rider, who provided
thoughtful and helpful feedback that allows us to continue learning and improving the Waymo Driver. Our team has already assessed the event and improved our operational process."
My explanation:
The AV encountered a temporary no right turn because of roadworks marked
by cones, (Ed: something we definitely do not have in the UK!), and
could not complete the right turn it expected to make and (to use Waymo
terminology) "paused", as it was incapable of re-routing in real time. A
remote operator spoke to the passenger and advised him that "Roadside
Assistance" (i.e. a human driver) was on the way. Whilst waiting for a
human driver to arrive, the Waymo Driver decided it was going to make
the right turn after all and having done so, then completely blocked the road. However, as it was no longer "paused", "Roadside Assistance" was cancelled. Except for the fact that the car was now sat in an active
lane "paused" waiting for a driver to come and rescue it, but it had
just cancelled the human, as it unpaused. There were several "false
starts" and the AV eventually gave up and put the hazard lights on in
the "fast-lane" of the active carriageway. Roadside Assistance, (aka
the human driver), arrived but the car took off again. This happened several times before they finally accessed the vehicle at around 29
minutes at which point the human drove the rest of the way, (or in Waymo speak, "the ride was completed in manual").
And remember, this is on controlled, specific pre-defined routes.
Thankfully, Waymo keep "Roadside Assistance" (i.e. humans) around 5-6
miles from each car.
"Ah, but that video was three years ago!", I hear some cry.
OK then, here's a more recent one where the Waymo gets stuck in a loop:
https://www.youtube.com/watch?v=esLzeHKhGzg
Or another where it tries to drive through a temporary construction zone (from December 2024):
https://www.youtube.com/watch?v=2ihN0IkIMPg
Despite claims to the contrary, this technology is not anywhere close to
a mass roll-out yet.
On 05/02/2025 10:36, Mark Goodge wrote:
On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:
The USA is considerably ahead of us, but this is where we're at in
Europe:
https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.
Yes, we're at the point where, as you conveniently highlighted for us,
"most of them travel at low speed, on fixed routes and still with safety
drivers".
It's worth noting that the report is nearly two years old in what is a
very fast-moving field.
That does not, to me, suggest that all of the problems inherent with
autonomous vehicles have been solved, nor that solving those problems
will be trivial.
Of course they haven't all been solved, but the evidence is that AVs are
already safer than manually driven vehicles, which is a very significant
marker point in their development. The quest now is to approach
perfection with, of course, the law of diminishing returns applying.
On Wed, 5 Feb 2025 11:07:54 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 10:33, Mark Goodge wrote:
On Wed, 5 Feb 2025 07:56:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:
One wonders how my little Garmin copes then.
It copes because most speed limits don't change very often, and when
they do change, they usually only affect a short stretch of road.
Provided its data sources include sources of speed limits, it can
usually update that data before most people notice. But if you happen to
drive along a stretch of road where there has been a recent speed limit
change, then you will probably observe that your sat-nav doesn't
immediately reflect that. Again, a few minutes with Google (or search
engine of your choice) will find plenty of instances of people
complaining that their Garmin sat-nav has got a speed limit wrong, or
doesn't update when the limit changes.
That may be because they haven't updated their maps for some time.
No; if you do as I suggest you will find that the people complaining
have up to date information. In fact, a common complaint is "I updated
my maps, but the speed limit is still wrong".
Although, of course, if it needs the user to perform the update anyway then it's not real time data.
But
there's no reason why live mapping should have any speed limit wrong for
long.
How long is "for long"? And how long would be too long?
The thing is, though, that there obviously are data-bases of speed
limits out there that are rather more comprehensive, up-to-date and
accessible than you think.
There are plenty of databases of speed limits out there, compiled from a variety of sources including crowd-sourcing. Waze is often regarded as
having the most up to date data because it provides an easy means for users to give it that information. Here Maps uses pooled vehicle data, which is very reliable in situations where there is sufficient traffic to generate a statistically significant sample. Google, as I've previously stated, uses optical data from Streetview cars as well as pooled vehicle data.
But my point was simply that there is no official, canonical, comprehensive database of speed limits, and nor is there likely to be any time soon. The reason why Here, Google and Waze need to create their own databases using secondary sources is because there is no primary source. Oddly enough, Google, Here and Waze all boast that their system is the best. They wouldn't need to do that if they were simply drawing on official data.
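[Ed: the crowd-sourcing approach described above can be sketched in a few lines. The data and the simple majority-vote rule here are invented for illustration; real providers also weight reports by recency and source reliability.]

```python
# With no canonical speed-limit database, providers aggregate many noisy
# per-segment reports (crowd-sourced or from pooled vehicle data).
# Toy version: take the most commonly reported limit for each segment.
from collections import Counter

def consensus_limits(reports):
    """reports: iterable of (segment_id, reported_limit_mph) pairs."""
    by_segment = {}
    for segment, limit in reports:
        by_segment.setdefault(segment, Counter())[limit] += 1
    return {seg: counts.most_common(1)[0][0]
            for seg, counts in by_segment.items()}

reports = [("A40-1", 30), ("A40-1", 30), ("A40-1", 40),  # one misread sign
           ("A40-2", 50), ("A40-2", 50)]
print(consensus_limits(reports))  # {'A40-1': 30, 'A40-2': 50}
```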
For human drivers that's not a major issue; there's an expectation that
human drivers will still be able to read signs even where the software
doesn't. But an autonomous vehicle only has the software; it doesn't
have a human able to give it information.
No, they will also have an array of sensors that vastly exceed any
human. The satnav speed limit data-base is only part of their toolkit.
Again, that is my point. To be fully autonomous, a vehicle will need to be able to derive the necessary data from its surroundings, context and memory without relying solely on information transmitted from a central system.
That is, it will need to be able to behave like a human driver. Which, as yet, they can't do. They'll get there eventually. But getting there isn't a trivial problem to solve.
On 05/02/2025 16:21, Mark Goodge wrote:
How long is "for long"? And how long would be too long?
Obviously, it should be as short as possible.
Again, that is my point. To be fully autonomous, a vehicle will need to
be able to derive the necessary data from its surroundings, context and
memory without relying solely on information transmitted from a central
system. That is, it will need to be able to behave like a human driver.
Which, as yet, they can't do. They'll get there eventually. But getting
there isn't a trivial problem to solve.
They are much closer already than you think.
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some
way programmed in to discard any routes where there is a notified
blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've
never worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I
have programmed in more than one, understand the word 'not'. It's a
very basic operator.
On 05/02/2025 15:35, Jethro_uk wrote:
[quoted text muted]
If a tube train is driverless and has no member of staff onboard, what
happens if a passenger is taken ill? Or if there is a knife-fight? Or if
there is a bomb? Or if the train collides with a person on the track? Or
if the doors have come open during the journey?
On 05/02/2025 12:50, Jethro_uk wrote:
On Wed, 05 Feb 2025 10:50:20 +0000, Norman Wells wrote:
On 05/02/2025 10:30, Jethro_uk wrote:
On Tue, 04 Feb 2025 21:45:09 +0000, Norman Wells wrote:
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of
the commander and the flight technicians in fully automated
vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a
Train Captain) on board who is able to take over and drive it
manually if necessary. And the trains themselves don't need any
routing capability, because they run on rails. It's a very far cry
from a fully autonomous road vehicle.
And how many times have they been called into action to drive a
train manually? Perhaps you'd tell us. Otherwise it seems just a
union ruse to employ people to do nothing, as is their wont.
It's not just driving the train. It's ensuring there's nothing
fouling the doors as it departs. Something which - despite being
sensored up the wazzoo - still occurs. With potentially fatal
results.
Seems like we'll need an on-board captain then, presumably with
government accreditation, for any autonomous vehicle journey, even a
private car.
That's a bit of a downer.
Sorry.
What the blazes are you blithering on about ?
Well, someone has to ensure the doors aren't being fouled with
potentially fatal results.
On 05/02/2025 16:17, Jon Ribbens wrote:
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0h3kgFqoilU4@mid.individual.net...
snippage
( Autopilots can malfunction but then who ever claimed otherwise ? )
That AI is used in AVs for object recognition is a given.
Complete and utter nonsense.
If a transmitter sends out signals which are reflected back within a
specified period, then that is all the evidence which is required of the
presence of an object in that direction, at that distance; which
therefore constitutes something which needs to be avoided.
Even robot vacuum cleaners are fitted with that capability.
And in the latter case, whether it's an armchair or the family Labrador
makes absolutely no difference whatsoever. And it's exactly the same
with AVs.
And will remain the case until such time as they, the AVs, decide for
themselves to deliberately run humans down. As opposed to, say, running
into trees etc. And that will be the point, and only then, where AI
has finally kicked in.
Anyone claiming that there is no AI in AVs can be dismissed as talking
from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds
nothing whatsoever to your argument ? Which in itself is so specularly
wrong as to be almost embarrassing ?
Well, *someone* out of you and Simon is "so specularly wrong as to be
almost embarrassing"... but it isn't Simon.
I think you may be mistaken there. AI involves crunching huge quantities of data to
come up with conclusions. That is assuredly *not* happening in the tiny confines of a
satnav chip which merely uses earlier and elsewhere-formed AI conclusions.
"Norman Wells" <hex@unseen.ac.am> wrote in message news:m0hi7fFddamU9@mid.individual.net...
On 05/02/2025 16:17, Jon Ribbens wrote:
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0h3kgFqoilU4@mid.individual.net...
snippage
( Autopilots can malfunction but then who ever claimed otherwise ? )
That AI is used in AVs for object recognition is a given.
Complete and utter nonsense.
If a transmitter sends out signals which are reflected back within a
specified period, then that is all the evidence which is required of the
presence of an object in that direction, at that distance; which
therefore constitutes something which needs to be avoided.
Even robot vacuum cleaners are fitted with that capability.
And in the latter case, whether it's an armchair or the family Labrador
makes absolutely no difference whatsoever. And it's exactly the same
with AVs.
And will remain the case until such time as they, the AVs, decide for
themselves to deliberately run humans down. As opposed to, say, running
into trees etc. And that will be the point, and only then, where AI
has finally kicked in.
Anyone claiming that there is no AI in AVs can be dismissed as talking
from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds
nothing whatsoever to your argument ? Which in itself is so specularly
wrong as to be almost embarrassing ?
Well, *someone* out of you and Simon is "so spectacularly wrong as to be
almost embarrassing"... but it isn't Simon.
I think you may be mistaken there. AI involves crunching huge
quantities of data to come up with conclusions. That is assuredly
*not* happening in the tiny confines of a satnav chip which merely
uses earlier and elsewhere-formed AI conclusions.
Jon and Simon aren't talking about sat nav but "object recognition"
As they both believe that when an AV is about to collide with "an
object" it's important for some reason that it "recognises" exactly
what that object is
On 05/02/2025 12:15, Simon Parker wrote:
On 04/02/2025 15:25, Norman Wells wrote:
On 04/02/2025 12:19, Max Demian wrote:
On 03/02/2025 19:05, Norman Wells wrote:
Provided it is working on a live real-time mapping system in which
road closures are clearly indicated, which is not beyond the wit of
man, the satnav will simply re-route to avoid them. They're very
good at finding alternative routes when you miss a turning for
example, so it's not a new or insurmountable problem.
Won't the satnav keep telling you to take the original route? Is
there a way to tell it that there is something in the way? The same
problem would occur in the case of an autonomous vehicle.
Satnavs should, and autonomous vehicle controls will have to, have
some way programmed in to discard any routes where there is a
notified blockage. That's a trivial programming exercise.
Is it?
Of course it is.
Can you post a copy of your program please along with relevant
certification for the design, coding, debugging, testing and deployment.
What certification do you consider necessary? And who certifies it?
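Whatever the answer on certification, the discard step Norman describes really is small when stated as pure filtering; a minimal sketch, assuming closures already arrive as a clean set of road-segment IDs (which is precisely the part later posts dispute). All IDs and routes below are invented for illustration:

```python
# Hypothetical sketch: drop any candidate route that uses a segment
# notified as closed.  Segment IDs and routes are invented; a real system
# must first solve how closures get into this set reliably at all.

def viable_routes(routes, closed_segments):
    """Keep only routes that share no segment with the closed set."""
    return [r for r in routes if not set(r) & closed_segments]

routes = [
    ["A12", "B404", "C9"],   # uses the closed B404
    ["A12", "A14", "C9"],    # avoids it
]
print(viable_routes(routes, {"B404"}))  # [['A12', 'A14', 'C9']]
```

The filter is trivial; populating `closed_segments` promptly and accurately is the engineering problem the rest of the thread argues about.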
On Wed, 5 Feb 2025 17:47:41 -0000, "billy bookcase" <billy@onon.com> wrote:
Jon and Simon aren't talking about sat nav but "object recognition"
As they both believe that when an AV is about to collide with "an object" it's important for some reason that it "recognises" exactly what that object is
Of course it matters what the object is. Some objects will do more damage to the car if the car strikes them, and some objects will suffer more injury if the car strikes them. There's a huge difference between, say, a plastic bag, a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course, you'd avoid all of them. But you wouldn't necessarily take abrupt evasive action to avoid a plastic bag. On the other hand, you'd hit anything - including a bollard - if necessary to avoid hitting a child.
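Mark's point can be sketched as a cost comparison: classification matters because the harm assigned to striking each class of object determines whether abrupt evasive action is justified. The classes and severity scores below are invented for illustration, not anyone's real control logic:

```python
# Invented severity scores per object class; a real system would calibrate
# these, but the structure of the decision is the same.
SEVERITY = {"plastic bag": 0, "cardboard box": 1, "lump of wood": 3,
            "bollard": 5, "child": 100}

def action(obstacle, swerve_risk):
    """Swerve only when hitting the obstacle is worse than the swerve itself."""
    if SEVERITY[obstacle] > swerve_risk:
        return "swerve/brake hard"
    return "continue (brake gently if safe)"

print(action("plastic bag", swerve_risk=5))  # continue (brake gently if safe)
print(action("child", swerve_risk=5))        # swerve/brake hard
```

Note that without classification, the plastic bag and the child are indistinguishable radar returns, which is exactly the point being argued.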
On 04/02/2025 16:37, Norman Wells wrote:
On 04/02/2025 14:49, Simon Parker wrote:
On 03/02/2025 09:36, Norman Wells wrote:
On 03/02/2025 09:21, Davey wrote:
On Mon, 3 Feb 2025 08:09:14 +0000
Norman Wells <hex@unseen.ac.am> wrote:
On 02/02/2025 13:37, Davey wrote:
I wonder how these self-driving cars react to signs, such as those not uncommon in the US, which mandate a lower-than-normal speed limit 'on school days only'?
Just as they do with any legally imposed speed limits. With a proper database of numbers and times.
Does your satnav not tell you the speed limit wherever you are?
No, mine doesn't, and it is in a BMW.
Good grief! How ancient is it?
I will investigate in case it is a feature that is available but not activated. But you are assuming that all States and small 'burgs' in the US follow the same speed limit reporting processes that we do. Do they?
I have no idea. But this is a UK group and all the necessary speed
limit information here is certainly available and in current use.
It's trivial programming to ensure that limits are not exceeded by
any autonomous vehicle, which is certainly not the case with human
drivers, even those who try to obey them.
Anecdata: A young(ish) family member had a black box fitted to their
new car as a condition of their insurance policy. It was a physical
black box connected directly to the vehicle electronics and mounted
within the engine bay by a professional appointed by the insurers.
Said family member has since changed their vehicle and the new
vehicle didn't need a physical black box fitting but instead required
an app installing on their mobile phone which must be with them in
their car when they're driving as a condition of their insurance.
Both systems have resulted in letters from the insurance company
detailing incidents where the systems indicated that they were
driving at significantly above the speed limit, which triggers
penalties in the policy up to and including cancellation of the policy.
Every single incident reported has been down to GPS issues where the
car is on a dual carriageway, NSL road or similar, yet the black box
detected the vehicle as being on a 30mph road adjacent to its current
position but there is no reason to believe that the reverse situation
cannot also exist, without triggering a letter from the insurance
company.
Please detail how your autonomous vehicle has overcome these problems
evidently inherent in the current navigational systems in use, (both
stand-alone and embedded).
$75 billion has been spent by the automotive industry on developing
autonomous vehicle technology, including the software. How much do
you think the black box company spent developing theirs?
You get what you pay for.
S&P Global Mobility (Google them) predict that Level 5 passenger cars
will not appear before 2035 at the earliest.
For context, Tesla "Autopilot" is classified as Level 2.
Your numerous posts to this thread seem to demonstrate both that you
fail to understand where we actually are with the current technology and
are grossly underestimating how much further we have to go before we get
to the point where AV is ready for mass deployment.
On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
S&P Global Mobility (Google them) predict that Level 5 passenger cars
will not appear before 2035 at the earliest.
If all goes well, I should be retired and living a life of leisure by then. So, if I'm rich enough, I might be able to say "Alexa, take me to the pub".
(Actually, I've always said that I won't get a "smart" device until I can choose what to call it. I'm not going to be asking Google, Siri, Alexa or Copilot to take me to the pub; what I really want is to be able to say "Zen, take me to the pub". Or possibly "Holly, take me to the pub". Or even just plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take me to the pub" in case it refused. Or worse, if it took me there, but
refused to take me home).
On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 16:21, Mark Goodge wrote:
How long is "for long"? And how long would be too long?
Obviously, it should be as short as possible.
Well, obviously. But if the shortest possible was a week, would that be OK?
Again, that is my point. To be fully autonomous, a vehicle will need to be able to derive the necessary data from its surroundings, context and memory without relying solely on information transmitted from a central system. That is, it will need to be able to behave like a human driver. Which, as yet, they can't do. They'll get there eventually. But getting there isn't a trivial problem to solve.
They are much closer already than you think.
I have no overwhelming reason to disagree with the industry consensus (as cited earlier by Simon Parker) that we're probably around ten years away
from the first fully autonomous (level 5) vehicles on the roads in real life use (as opposed to mere testing).
I'd be happy enough if we get there a bit
quicker. I suspect that DeepSeek may be giving some developers grounds for optimism, given that it appears to be technically on a par with the best Western AI but at a much lower cost.
Obviously, programming an autonomous vehicle is more of a challenge than programming a chatbot. But the underlying trade-offs of computing power v cost are essentially the same. So it would not surprise me at all if the first genuine level 5 vehicles come from China. Which would raise a whole
lot of other ethical and legal issues.
On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Of course they haven't all been solved, but the evidence is that AVs are already safer than manually driven vehicles which is a very significant marker point in their development. The quest now is to approach perfection with, of course, the law of diminishing returns applying.
They're safer than human drivers under certain, carefully controlled and restricted, conditions. But they still make mistakes that human drivers are very unlikely to make, even if those mistakes are not directly dangerous.
On 05/02/2025 16:17, Jon Ribbens wrote:
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0h3kgFqoilU4@mid.individual.net...
Anyone claiming that there is no AI in AVs can be dismissed as talking from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds nothing whatsoever to your argument? Which in itself is so spectacularly wrong as to be almost embarrassing?
Well, *someone* out of you and Simon is "so spectacularly wrong as to be
almost embarrassing"... but it isn't Simon.
I think you may be mistaken there. AI involves crunching huge quantities of data to come up with conclusions. That is assuredly *not* happening in the tiny confines of a satnav chip which merely uses earlier and elsewhere-formed AI conclusions.
On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 10:36, Mark Goodge wrote:
On Wed, 5 Feb 2025 08:04:22 +0000, Norman Wells <hex@unseen.ac.am> wrote:
The USA is considerably ahead of us, but this is where we're at in Europe:
https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/#:~:text=Last%20year%2C%20Germany%20became%20the,and%20still%20with%20safety%20drivers.
Yes, we're at the point where, as you conveniently highlighted for us, "most of them travel at low speed, on fixed routes and still with safety drivers".
It's worth noting that report is nearly two years old in what is a very
fast-moving field.
So why didn't you provide me with a more up to date link?
After all, you keep confidently asserting that we've already got fully autonomous vehicles up and running, so you must have seen this mentioned somewhere on the web, surely?
That does not, to me, suggest that all of the problems inherent with autonomous vehicles have been solved, nor that solving those problems will be trivial.
Of course they haven't all been solved, but the evidence is that AVs are
already safer than manually driven vehicles which is a very significant
marker point in their development. The quest now is to approach
perfection with, of course, the law of diminishing returns applying.
They're safer than human drivers under certain, carefully controlled and restricted, conditions. But they still make mistakes that human drivers are very unlikely to make, even if those mistakes are not directly dangerous.
On 05/02/2025 15:39, Mark Goodge wrote:
On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 05/02/2025 09:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some way programmed in to discard any routes where there is a notified blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've never worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I have programmed in more than one, understand the word 'not'. It's a very basic operator.
Please name three different programming languages which you have used that include the word "not" as an operator.
FORTRAN, Lisp, Modula2, Pascal (I can't count)
Shhh! I'd set that as a test for Norman!
And a number of others that use ! or some other cryptic symbol as not.
Pretty much every language has a negation operator. It would be almost impossible to write useful code if it didn't. But, these days, the operator is, as you say, typically a symbol rather than the word "not". That's why I was curious as to how recent Norman's programming knowledge, if any, is.
The operator is still 'not', whatever symbols are used to invoke it.
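For the record, Python is one mainstream current language where logical negation really is spelled as the word; the C family spells the same operator as a symbol. A trivial illustration:

```python
# Python spells logical negation with the keyword "not".
# C, Java and JavaScript spell the same operator "!".
blocked = True
route_ok = not blocked
print(route_ok)  # False

# The equivalent test in C would read: if (!blocked) { ... }
```

Both posters are right in part: the operator exists everywhere, but the *word* "not" survives only in some languages.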
On 05/02/2025 16:43, Mark Goodge wrote:
[quoted text muted]
I think it would probably just refuse to open the pod bay doors.
Anyway, I believe the names Siri and Alexa were carefully chosen because
of their uniqueness and low likelihood of confusion with ordinary words
used in ordinary speech.
Norman Wells <hex@unseen.ac.am> wrote:
The operator is still 'not', whatever symbols are used to invoke it.
In your post above you said “the word ‘not’”. That sort of sloppiness is
unfortunately too common in software and a rich source of bugs.
I am not inclined to accept your estimates of the effort required for a
task unless you can demonstrate a better understanding of software. For example, can you explain what can go wrong when using a programming
language that has both polymorphic comparison operators and implicit coercions?
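One concrete answer to that closing question, sketched in Python: because bool coerces to int and int to float inside comparisons, polymorphic equality can silently conflate values a programmer meant to keep distinct. (JavaScript's `==`, with looser coercions, makes `'2' == 2` true, a still richer source of bugs.)

```python
# Implicit coercions inside polymorphic comparisons:
assert True == 1 and False == 0   # bool/int
assert 1 == 1.0                   # int/float

# Consequence: True and 1 are the *same* dict key, so the second
# assignment silently overwrites the first.
d = {1: "one", True: "overwritten"}
print(len(d), d[1])  # 1 overwritten
```

No error is raised anywhere, which is exactly what makes this class of bug expensive to find.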
On Wed, 5 Feb 2025 16:33:51 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 16:17, Jon Ribbens wrote:
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0h3kgFqoilU4@mid.individual.net...
Anyone claiming that there is no AI in AVs can be dismissed as talking from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds nothing whatsoever to your argument? Which in itself is so spectacularly wrong as to be almost embarrassing?
Well, *someone* out of you and Simon is "so spectacularly wrong as to be
almost embarrassing"... but it isn't Simon.
I think you may be mistaken there. AI involves crunching huge
quantities of data to come up with conclusions. That is assuredly *not*
happening in the tiny confines of a satnav chip which merely uses
earlier and elsewhere-formed AI conclusions.
The sat-nav chip isn't using AI. But the sat-nav is only one source of data to an autonomous vehicle's control system. The control system has to be able to reconcile multiple sources of data, including sensory and optical information, and decide in real time how to use all that data and act accordingly. And that's where the AI is.
If you don't believe me, here's a research paper from last year:
https://www.mdpi.com/2504-2289/8/4/42
Which includes such gems as
Artificial intelligence (AI) currently plays a crucial role in the
development and operation of autonomous vehicles. The integration of AI
algorithms enables autonomous vehicles to navigate, perceive, and adapt to
dynamic environments, making them safer and more efficient.
and
Sharper perception and decision-making: AI algorithms are more adept at
understanding environments with advanced sensors and robust machine
learning.
And that's just one document. Here are a few more:
https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
https://arxiv.org/pdf/2402.17690
https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
https://www.techtarget.com/searchenterpriseai/definition/driverless-car
TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is either stunningly ignorant or deliberately misleading you.
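The "reconcile multiple sources of data" step Mark describes has a standard minimal form: inverse-variance weighting of independent noisy estimates (the static core of a Kalman update). This is a textbook sketch, not any vendor's actual stack; the sensor names and variances are invented:

```python
def fuse(estimates):
    """Fuse (value, variance) pairs: weight each estimate by 1/variance."""
    weights = [1.0 / var for _, var in estimates]
    values = [val for val, _ in estimates]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

readings = [
    (103.0, 4.0),   # GPS position along the road, metres: noisy
    (100.5, 1.0),   # lidar odometry: tighter
    (101.0, 2.0),   # camera-based estimate
]
print(fuse(readings))  # 101.0 -- pulled towards the most trusted sensor
```

The real difficulty, and where the machine learning lives, is deciding what the values and variances should be in the first place from raw sensor data.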
On 2025-02-05, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Of course they haven't all been solved, but the evidence is that AVs are already safer than manually driven vehicles which is a very significant marker point in their development. The quest now is to approach perfection with, of course, the law of diminishing returns applying.
They're safer than human drivers under certain, carefully controlled and restricted, conditions. But they still make mistakes that human drivers are very unlikely to make, even if those mistakes are not directly dangerous.
The "fail 'safe' by stopping immobile in the middle of a lane of
fast-moving traffic" shown in that Waymo video earlier seemed pretty "directly dangerous" to me.
On Wed, 5 Feb 2025 14:33:10 +0000, Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 05/02/2025 09:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Satnavs should, and autonomous vehicle controls will have to, have some way programmed in to discard any routes where there is a notified blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've never worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I have programmed in more than one, understand the word 'not'. It's a very basic operator.
Please name three different programming languages which you have used that include the word "not" as an operator.
FORTRAN, Lisp, Modula2, Pascal (I can't count)
Shhh! I'd set that as a test for Norman!
And a number of others that use ! or some other cryptic symbol as not.
Pretty much every language has a negation operator. It would be almost impossible to write useful code if it didn't. But, these days, the operator is, as you say, typically a symbol rather than the word "not". That's why I was curious as to how recent Norman's programming knowledge, if any, is.
The way it works at present on the Satnavs that have realtime live
traffic updates is that the system predicts increased journey time and
flags sections of road ahead red or yellow where the traffic speed from
other users is unreasonably slow. I have ours set to only deviate if the
(long) journey time increases by more than 15 minutes.
That's how Google Maps and Waze do it. It does obviously rely on the system having continual two-way communication with the central server. A standalone sat-nav can't do that.
It seems to work well enough and you can sometimes tell who else is
using the same nav system by where they come off the normal route.
One scenario where it doesn't work is late at night or on lightly used roads where there isn't enough traffic to give a reliable statistical sample.
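Martin's "only deviate if the journey time increases by more than 15 minutes" setting reduces to a simple comparison. ETAs are in minutes and the figures are invented:

```python
def choose_route(planned_eta, alternative_etas, threshold=15):
    """Stay on the planned route unless an alternative saves > threshold minutes."""
    if not alternative_etas:
        return planned_eta
    best = min(alternative_etas)
    return best if planned_eta - best > threshold else planned_eta

print(choose_route(190, [180, 185]))  # 190: a 10-minute saving, not worth deviating
print(choose_route(210, [180]))       # 180: a 30-minute saving triggers the reroute
```

The threshold is what stops the system bouncing drivers onto back roads for marginal gains, at the cost of sometimes sitting in avoidable queues.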
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
Jon and Simon aren't talking about sat nav but "object recognition"
Indeed.
As they both believe that when an AV is about to collide with "an
object" it's important for some reason that it "recognises" exactly
what that object is
No, you've just made that up.
On 05/02/2025 17:18, Mark Goodge wrote:
On Wed, 5 Feb 2025 16:33:51 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 16:17, Jon Ribbens wrote:
On 2025-02-05, billy bookcase <billy@onon.com> wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0h3kgFqoilU4@mid.individual.net...
Anyone claiming that there is no AI in AVs can be dismissed as talking from a position of ignorance. As can anyone that agrees with them. :-)
You do realise of course that that particular observation adds nothing whatsoever to your argument? Which in itself is so spectacularly wrong as to be almost embarrassing?
Well, *someone* out of you and Simon is "so spectacularly wrong as to be
almost embarrassing"... but it isn't Simon.
I think you may be mistaken there. AI involves crunching huge quantities of data to come up with conclusions. That is assuredly *not* happening in the tiny confines of a satnav chip which merely uses earlier and elsewhere-formed AI conclusions.
The sat-nav chip isn't using AI. But the sat-nav is only one source of data to an autonomous vehicle's control system. The control system has to be able to reconcile multiple sources of data, including sensory and optical information, and decide in real time how to use all that data and act accordingly. And that's where the AI is.
If you don't believe me, here's a research paper from last year:
https://www.mdpi.com/2504-2289/8/4/42
Which includes such gems as
Artificial intelligence (AI) currently plays a crucial role in the development and operation of autonomous vehicles. The integration of AI algorithms enables autonomous vehicles to navigate, perceive, and adapt to dynamic environments, making them safer and more efficient.
and
Sharper perception and decision-making: AI algorithms are more adept at understanding environments with advanced sensors and robust machine learning.
And that's just one document. Here are a few more:
https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
https://arxiv.org/pdf/2402.17690
https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
https://www.techtarget.com/searchenterpriseai/definition/driverless-car
TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is either stunningly ignorant or deliberately misleading you.
AI is a buzzword which is generally used by the pretentious to make out
they have a better understanding of anything complex than they actually
do. But AI involves the crunching of huge amounts of data to develop principles and insights that can be used more simply elsewhere without
having to reinvent the wheel every time. Autonomous car controls don't
have vast amounts of data, and you don't want to sit around for a week waiting for it to decide exactly what it is it has just crashed into.
They are not employing AI, though AI may have played a part in
developing the algorithms they do actually use.
It's all a matter of proper definition of terms which is too commonly lacking.
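The distinction Norman is drawing (heavy data-crunching when the system is developed, a cheap fitted function when it runs) can be made concrete with a toy example; whether the run-time step still counts as "employing AI" is exactly what the thread is arguing about. The data here is invented:

```python
# "Offline training": least-squares fit of stopping distance against v^2.
speeds = [10, 20, 30, 40]
dists = [6.0, 24.0, 54.0, 96.0]          # toy measurements, ~0.06 * v^2
xs = [v * v for v in speeds]
w = sum(x * d for x, d in zip(xs, dists)) / sum(x * x for x in xs)

# "Onboard inference": one multiplication, no data crunching.
def stopping_distance(v):
    return w * v * v

print(round(stopping_distance(50), 1))  # 150.0
```

The training step needed all the data; applying the fitted coefficient needs almost nothing, which is why both sides of the argument can point at the same system and draw opposite conclusions.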
"Nick Finnigan" <nix@genie.co.uk> wrote in message news:vnu2a7$20ufs$1@dont-email.me...
On 04/02/2025 15:05, Norman Wells wrote:
On 04/02/2025 12:32, Mark Goodge wrote:
Norman's responses to your comments also contradict his own earlier
assertions that autonomous vehicles don't need AI,
AI doesn't come into it.
Which autonomous vehicles don't use AI for object recognition?
All of them.
An object is an object; and is simply something to be avoided, surely?
Why would there be any need to discriminate?
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a Train Captain) on board who is able to take over and drive it manually if necessary. And the trains themselves don't need any routing capability, because they run on rails. It's a very far cry from a fully autonomous road vehicle.
On 05/02/2025 15:35, Jethro_uk wrote:
On Wed, 05 Feb 2025 12:51:33 +0000, billy bookcase wrote:
Much the same as on the Victoria Line; which has been fully automatic using ATO ever since its inception in 1968; but which nevertheless has a "driver" in the cab.
For interest, the Moorgate crash was in 1975.
If a tube train is driverless and has no member of staff onboard, what happens if a passenger is taken ill? Or if there is a knife-fight? Or if there is a bomb? Or if the train collides with a person on the track? Or
if the doors have come open during the journey?
On 05/02/2025 16:43, Mark Goodge wrote:
On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
S&P Global Mobility (Google them) predict that Level 5 passenger cars will not appear before 2035 at the earliest.
If all goes well, I should be retired and living a life of leisure by then. So, if I'm rich enough, I might be able to say "Alexa, take me to the pub".
(Actually, I've always said that I won't get a "smart" device until I can choose what to call it. I'm not going to be asking Google, Siri, Alexa or Copilot to take me to the pub; what I really want is to be able to say "Zen, take me to the pub". Or possibly "Holly, take me to the pub". Or even just plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take me to the pub" in case it refused. Or worse, if it took me there, but refused to take me home).
I understand that Alexa's "wake word" can be changed to anything you
like (probably using the app that won't run on my phone).
Personally I prefer chat devices where you have to tap or press
something to invoke them.
When I worked in insurance, I had a task to trawl through a year's worth of black box data. (Which I did at a cost of about £5,000 versus the £100,000 a bunch of cowboys were trying to charge).
One very common problem with the data was that because it was simply one
data point per analysis, and the GPS had no idea of altitude, it was
possible for a car on a motorway (70mph limit) to briefly appear as if it
had been on a road with a 30mph limit that went over or under it.
Of course that triggered a "naughty boy" black mark for the driver and affected their premiums.
The answer was to have had a little bit more sophistication in the
logging and more critically analysis. However that would have cost money
and skills that would have made the whole project unprofitable.
I left before they decided what to do.
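The missing "sophistication" described above can be sketched: rather than snapping each GPS fix to the nearest road, compare the vehicle's heading with each candidate road's bearing, which separates a motorway from the 30mph road crossing over or under it. Road names, limits and bearings are invented for illustration:

```python
def match_road(vehicle_heading, candidates):
    """candidates: (name, limit_mph, bearing_deg). Pick the best heading match."""
    def heading_diff(bearing):
        d = abs(vehicle_heading - bearing) % 360
        return min(d, 360 - d)   # handle wraparound at 0/360
    return min(candidates, key=lambda c: heading_diff(c[2]))

# A motorway and a near-perpendicular 30mph road at the same location:
roads = [("M6", 70, 10), ("B5012", 30, 95)]
print(match_road(12, roads))  # ('M6', 70, 10), not the 30mph road
```

Speed consistency over several fixes would tighten this further; the point of the anecdote is that even this much filtering was judged too expensive to add.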
On 6 Feb 2025 at 12:59:46 GMT, "Max Demian" <max_demian@bigfoot.com> wrote:
On 05/02/2025 16:43, Mark Goodge wrote:
On Wed, 5 Feb 2025 12:42:14 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
S&P Global Mobility (Google them) predict that Level 5 passenger cars will not appear before 2035 at the earliest.
If all goes well, I should be retired and living a life of leisure by then. So, if I'm rich enough, I might be able to say "Alexa, take me to the pub".
(Actually, I've always said that I won't get a "smart" device until I can choose what to call it. I'm not going to be asking Google, Siri, Alexa or Copilot to take me to the pub; what I really want is to be able to say "Zen, take me to the pub". Or possibly "Holly, take me to the pub". Or even just plain "Computer, take me to the pub". But I wouldn't want to say "HAL, take me to the pub" in case it refused. Or worse, if it took me there, but refused to take me home).
I understand that Alexa's "wake word" can be changed to anything you
like (probably using the app that won't run on my phone).
Personally I prefer chat devices where you have to tap or press
something to invoke them.
Indeed. I would find it somewhat odd to deliberately put a machine in my living room designed to listen to every word said and potentially interpret it
and relay it to a server on the Internet.
On Wed, 05 Feb 2025 16:54:50 +0000, Norman Wells wrote:
On 05/02/2025 16:43, Mark Goodge wrote:
[quoted text muted]
I think it would probably just refuse to open the pod bay doors.
Anyway, I believe the names Siri and Alexa were carefully chosen because of their uniqueness and low likelihood of confusion with ordinary words
used in ordinary speech.
More likely they were chosen as "girly" names which would sit quite happily in the living room alongside the air freshener.
Which presumably represented a larger potential market than would "Thor" or "Wotan"; sitting on a shelf in the garage or man cave.
On 05/02/2025 10:22, Norman Wells wrote:
On 05/02/2025 09:57, Mark Goodge wrote:
On Tue, 4 Feb 2025 21:38:14 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 04/02/2025 20:45, Mark Goodge wrote:
On Tue, 4 Feb 2025 15:25:24 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
Satnavs should, and autonomous vehicle controls will have to, have some way programmed in to discard any routes where there is a notified blockage. That's a trivial programming exercise.
"Tell me you've never worked as a programmer without telling me you've never worked as a programmer...."
What a cheap remark! All computer languages I've ever dealt with, and I have programmed in more than one, understand the word 'not'. It's a very basic operator.
Please name three different programming languages which you have used that include the word "not" as an operator.
But, anyway, it's not the syntax I was querying. It's how to ensure that the existence of a blockage, and its precise location, is passed to the software promptly and in a format which can easily be interpreted by the software. For example, when the notification comes in the form of a policeman saying "you can't go that way, mate, it's blocked".
What's the problem? It's only a matter of information being passed to whoever can amend the live mapping, and a road closed notification being applied on the maps.
The "problem" is that you are unable or unwilling to detail the process
by which "the information" is "passed to whoever can amend the live
mapping, and road closure notification being applied on the maps."
For one thing, (as detailed in a parallel post), there are numerous
systems at play, rather than a single all-encompassing system.
Additionally, there isn't some entity within each of those systems with responsibility to "amend the live mapping".
I recommend that you spend time learning how Google, Apple and Garmin (to name but three) deal with congestion and routing around it and you will be well on the way to enlightenment.
Until then, you see this as a simple solution to a simple problem
because you do not understand the problem and therefore have no idea of
the complexities involved and how your proposed solution maps onto these.
If it's not done, we're no worse off than we are now. Huge delays,
whether of manually driven vehicles or autonomous ones.
I invite you to watch the in-car videos of Waymo Driver in action when
it goes wrong. If you think that leaves us "no worse off than we are
now" then I must disagree with you in the strongest possible terms.
And these are Level 4 AVs with a baby-sitter back at the control centre
and a human driver on standby no more than 5-6 miles away to intervene
when things go wrong, which they do all too frequently.
AVs utilise AI every second of every trip. They are trained to deal with situations
they have never previously encountered because it is impossible to program every
possible scenario into the system.
On 06/02/2025 14:51, Simon Parker wrote:
On 05/02/2025 10:22, Norman Wells wrote:
What's the problem? It's only a matter of information being passed to
whoever can amend the live mapping, and road closed notification being
applied on the maps.
The "problem" is that you are unable or unwilling to detail the process
by which "the information" is "passed to whoever can amend the live
mapping, and road closure notification being applied on the maps."
For one thing, (as detailed in a parallel post), there are numerous
systems at play, rather than a single all-encompassing system.
Then they all should be notified,
AVs *DO* receive huge amounts of data and crunch that data in real time to make routing
decisions. (And in context, "routing" here does not relate merely to
"Shall I turn left in 250 metres?" but includes "Shall I move 30cm
further away from the footway as
there's a pedestrian there?" or "There's a cyclist crossing the junction so even though
the light is green, I must wait for them to clear the junction before I proceed.")
On 06/02/2025 13:15, Roger Hayter wrote:
Indeed. I would find it somewhat odd to deliberately put a machine in my
living room designed to listen to every word said and potentially interpret it
and relay it to a server on the Internet.
I've come to think my life must be so uninteresting to all but a few I doubt anything
would come of disjointed conversations held by the big internet players. If I was of
interest I might even be impressed!
Is your life so wild and on a criminal or immoral spectrum you think a server will dote
on your every word?
Whilst on the page, please scroll down to the bottom and watch the video under the section "Watch our technology in action" and then tell me that object recognition employed in Level 4 cars is "not AI in itself, but
just the ability earlier AI has given the vehicle software".
On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 06/02/2025 14:51, Simon Parker wrote:
On 05/02/2025 10:22, Norman Wells wrote:
What's the problem? It's only a matter of information being passed to
whoever can amend the live mapping, and road closed notification being
applied on the maps.
The "problem" is that you are unable or unwilling to detail the process
by which "the information" is "passed to whoever can amend the live
mapping, and road closure notification being applied on the maps."
For one thing, (as detailed in a parallel post), there are numerous
systems at play, rather than a single all-encompassing system.
Then they all should be notified,
How?
Please can you detail the process by which the fact that a police officer
has just closed a road due to an accident is notified to all of the various systems which might want that information.
This is, actually, a solvable problem, given certain basic assumptions,
and I have what I think would be an effective solution in mind, but I'd
be interested to hear how you would do it if you were designing the system.
Anyway, I believe the names Siri and Alexa were carefully chosen
because of their uniqueness and low likelihood of confusion with
ordinary words used in ordinary speech.
It is not often you're right, but you're wrong again.
Bezos specifically chose the name "Alexa" after the ancient library of Alexandria. The developers liked it because of the hard "X" consonant
which helped them increase the precision of recognition in the early versions.
The wife of a good friend is named "Alexandra", (or "Alex" for short).
For fairly obvious reasons, they cannot and will not have Amazon smart devices in their home.
Similarly, Siri is named after Sigrid, a Norwegian co-worker of Dag
Kittlaus, founder of Siri, prior to its acquisition by Apple.
On 05/02/2025 15:40, Norman Wells wrote:
On 05/02/2025 12:33, Simon Parker wrote:
On 04/02/2025 16:14, Norman Wells wrote:
Obviously, it would have crashed, killing all on board.
Most people, hopefully you included, would consider that to be a
"sub- optimal outcome".
You just asked me what would happen, not what I thought of it.
Actually, I asked "billy bookcase" but that's beside the point.
Consider this a request for your thoughts on the subject rather than a
simple request of what would happen.
No-one. Just as there isn't on the Docklands Light Railway.
<SFX: Buzzer>
I recommend researching the role and responsibilities of the
"Passenger Service Agent" on the DLR.
And navigating the DLR is light years away from a Level 5 AV.
Of course the DLR is easier to manage autonomously than road vehicles;
no-one claims otherwise.
So why did you introduce it as an example?
But how often has human intervention been required?
Not my example, not my data to produce.
Or if you prefer that expressed a different way, I believe someone once
said: "If someone has a point to make, it's for them to make it, not for
me to go searching on their behalf."
If only very rarely, the Passenger Service Agent is unnecessary and
redundant, except of course from the union's point of view, which is
wholly unrelated to safety.
Two posts ago, the Passenger Service Agent didn't even exist. (You
stated "No-one" fulfilled the role of the pilot on a plane under
autopilot, not "A Passenger Service Agent that is rarely needed.")
Now you dismiss them as "unnecessary and redundant".
Do you know how often human intervention has been required on the DLR
and can you cite a verifiable source to support this or do your
arguments rest upon unsupported claims?
But level 5 just represents perfection which is the enemy of good and
not necessarily required anyway.
I notice you've been using this quote a lot recently. Did it appear on
your "Quote-a-Day" calendar or some such?
And Level 5 is not "perfection" - it is a fully autonomous vehicle - the subject of the majority of this thread.
Level 4 may be quite sufficient, meaning:
"The vehicle operates completely autonomously under certain
conditions. The human being no longer has to be ready to intervene. He
can work, watch movies and even sleep. And the vehicle may also drive
alone, i.e. without occupants. It must be able to reach a safe state
without the intervention of a human driver, i.e. to come to a
standstill in a parking lot for example. However, the autonomy of the
vehicle at Level 4 is still linked to certain conditions, such as a
defined route, driving on the highway or in the parking garage.
Numerous companies are already testing level 4 vehicles in road traffic".
https://www.zf.com/mobile/en/technologies/automated_driving/stories/6_levels_of_automated_driving.html#:~:text=Numerous%20companies%20are%20already%20testing,in%20road%20traffic%2C%20including%20ZF.&text=Germany%20has%20created%20the%20framework,as%20the%20first%20country%20worldwide.
"the vehicle may also drive alone, i.e. without occupants", particularly
during the initial "data collection" phase which typically lasts around
six months or so, depending on the precise circumstances.
And "the autonomy of the vehicle at Level 4 is still linked to certain conditions, such as a defined route, driving on the highway or in the
parking garage."
I do not consider "pre-defined routes" to be "quite sufficient" given
that this often involves short trips within a geographically restricted
area with certain junctions and routes therein avoided.
In short, I cannot get the AV from anywhere to anywhere using any
available route.
It may go from where I am to where I need to be using a particular
route, but equally it may not.
It's far closer to realisation than many here seem to credit,
I know precisely where we are with the technology. If my contributions
to the thread have demonstrated anything, it should be that this is a
subject in which I am keenly interested and therefore well versed.
and such vehicles may be on the roads here next year:
https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-roads-by-2026-as-automated-vehicles-act-becomes-law
For very limited values of "on the roads" and "here".
But a failsafe default of come to a halt and turn everything off,
which would doubtless be in any autonomous vehicle, isn't really an
option doing 500mph at 30,000 ft.
Remind me not to travel in a NormCar (TM) at any time now or in the
future. I do not really want to be in a vehicle that in the event of
an incident is designed and programmed to come to a halt in the
right-most lane of a motorway and turn off both its headlights and
hazard lights despite it being night time.
Well, of course it wouldn't. It may in other brands during the
debugging phase but not in Normcars.
No, you said "a failsafe default of 'come to a halt and turn everything
off'" would "doubtless be in any autonomous vehicle". NormCars (TM) are included within the dataset {any autonomous vehicle}.
You made no mention of first finding a safe place before 'coming to a
halt' nor of keeping the headlights on, if relevant, and activating the hazard lights. You said, quite definitely "and turn *EVERYTHING*
off" (emphasis mine). Headlights and hazard lights would be included
within the dataset {everything}.
I made the point because it highlights the erroneous nature of your
thinking.
You claim things are simple and trivial because you dismiss as
irrelevant the very things that make them complex. Everything is simple when one ignores its inherent complications.
You claim to have programming experience so you ought to be familiar
with the Pareto Principle. The first 80% is relatively easy. That
last 20% is the killer. 80% of the bugs are in 20% of the code. 80%
of the complexity comes from 20% of the code. 80% of the R&D is
spent on 20% of the application. AVs don't even have the initial 80%
mastered yet.
Call it by whatever fancy name you like, it's the same old 80/20 rule.
It is not a "fancy name", nor is it 'what I like to call it'. That is
the correct name for it as it is named after Italian polymath Vilfredo
Pareto [^1] who first wrote about it while teaching at the University of Lausanne.
[^1] https://en.wikipedia.org/wiki/Vilfredo_Pareto
On 05/02/2025 15:58, Norman Wells wrote:
On 05/02/2025 12:15, Simon Parker wrote:
On 04/02/2025 15:25, Norman Wells wrote:
On 04/02/2025 12:19, Max Demian wrote:
On 03/02/2025 19:05, Norman Wells wrote:
Provided it is working on a live real-time mapping system in which
road closures are clearly indicated, which is not beyond the wit
of man, the satnav will simply re-route to avoid them. They're
very good at finding alternative routes when you miss a turning,
for example, so it's not a new or insurmountable problem.
Won't the satnav keep telling you to take the original route? Is
there a way to tell it that there is something in the way? The same
problem would occur in the case of an autonomous vehicle.
Satnavs should, and autonomous vehicle controls will have to, have
some way programmed in to discard any routes where there is a
notified blockage. That's a trivial programming exercise.
Is it?
Of course it is.
So you'll be posting a copy of your code, presently. Please feel free
to use the language in which you are most comfortable.
Can you post a copy of your program please along with relevant
certification for the design, coding, debugging, testing and deployment.
What certification do you consider necessary? And who certifies it?
Your lack of knowledge and understanding of the matter is writ large,
despite your bullish claims and confidence. It is not what
certification *I* consider necessary, but what is required by the
various bodies, including but not limited to ISO.
As this is a legal newsgroup, perhaps now would be a good time to
mention Jean Bookout v Toyota Motor Corporation and the expert evidence
of Michael Barr (BSEE/MSEE) presented therein.
Cutting to the chase, he demonstrated to the satisfaction of the jury
that a single bit-flip in Toyota's Electronic Throttle Control System
(ETCS) resulted in the system failing catastrophically causing
"unintended acceleration" (UA) which in turn led to numerous deaths, (including that of Jean Bookout).
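(To illustrate the scale of that failure mode, with invented values
rather than Toyota's actual data: a single flipped bit in a 16-bit
sensor word can turn a near-idle demand into a near-full one.)

```python
# Invented values for illustration only: one bit-flip in a 16-bit
# throttle reading changes its magnitude by three orders of magnitude.
idle_demand = 0x0040                  # 64 out of 65535: pedal barely pressed
corrupted = idle_demand ^ (1 << 15)   # a single bit-flip in the top bit
print(idle_demand, corrupted)         # 64 32832
```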
Prior to the case, the motor industry paid lip service to OSEK and MISRA
for automated systems. Where AVs are concerned, compliance with the
above or similar systems needs to be certified.
Additionally, since the Bookout case, international standards have been developed so your code is also going to need to be compliant with both
ISO 26262 (ASIL) and ISO 34503:2023 but I am sure you knew all this
already as you are such an authority on the subject and issue
pronouncements with such confidence.
As above, I look forward to receiving your fully compliant code by
return as it is, after all, "a trivial programming exercise".
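(The route-discarding logic itself really is the easy part; here is a
toy sketch, with the graph, road names and costs all invented, of a
shortest-path search that simply skips edges on roads notified as
closed. It says nothing about certification, nor about how reliable
closure data reaches the `closed` set, which is the genuinely hard
problem being debated above.)

```python
import heapq

def shortest_route(graph, start, goal, closed):
    """Toy shortest-path search that discards edges on closed roads.

    graph:  {node: [(neighbour, cost, road_id), ...]}
    closed: set of road_ids notified as blocked
    """
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c, road in graph[node]:
            if road in closed:        # the "trivial" part: skip blocked roads
                continue
            heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return None                       # no open route exists

graph = {
    "A": [("B", 1, "A40"), ("C", 5, "B123")],
    "B": [("D", 1, "A40")],
    "C": [("D", 1, "B123")],
    "D": [],
}
print(shortest_route(graph, "A", "D", closed=set()))    # (2, ['A', 'B', 'D'])
print(shortest_route(graph, "A", "D", closed={"A40"}))  # (6, ['A', 'C', 'D'])
```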
On 05/02/2025 16:03, Norman Wells wrote:
On 05/02/2025 12:31, Simon Parker wrote:
On 04/02/2025 21:22, Mark Goodge wrote:
It's a shuttle bus that runs back and forth between just two
locations. The
entirety of its route on the public highway is on dual carriageways.
And it
still needs someone to sit in the driver's seat to "monitor the
technology".
Oh, and it's not even autonomous end to end.
Their driverless buses have sensors enabling them to travel on
pre-selected roads at up to 50mph.
On each trip they switch between autonomous and manual driving,
being controlled by computer across the Forth Road Bridge, and from
the M90 and M8 to the A720 in Edinburgh.
https://www.bbc.co.uk/news/articles/c0q0lg3j1exo
Does that indicate there is any substantial doubt, even on British
urban streets?
It indicates that there's still considerable doubt, as it happens.
It is worse than that:
The service has been withdrawn as there were insufficient passenger
numbers to make it cost-effective. (Not surprising when the cost of
all the tech, plus a "not a driver" and "conductor" were factored in.
I.e. all the costs of the original bus, plus all the AV tech plus two
members of staff on board rather than the usual one made the project
unfeasible on cost grounds.)
There were insufficient passenger numbers to make running the route
cost-effective on an ongoing basis. There was never any question of
the passengers on that particular route alone having to meet all the
startup costs of research and development, which would have made the
fares prohibitively expensive.
Do you accept that your example does not advance your argument because:
(1) It was not a fully autonomous service.
(2) It required more staff than a traditional bus.
(3) It has been withdrawn from service.
On 05/02/2025 17:02, Norman Wells wrote:
On 05/02/2025 12:42, Simon Parker wrote:
On 04/02/2025 16:37, Norman Wells wrote:
$75 billion has been spent by the automotive industry on developing
autonomous vehicle technology, including the software. How much do
you think the black box company spent developing theirs?
You get what you pay for.
S&P Global Mobility (Google them) predict that Level 5 passenger cars
will not appear before 2035 at the earliest.
Meanwhile, Level 4 AVs, which will meet the requirements of the new
Act, are being tested and proved *now*.
Indeed they are. But they are a significant way short of being a "fully autonomous vehicle", which is what was being discussed for the majority
of this thread.
For context, Tesla "Autopilot" is classified as Level 2.
Your numerous posts to this thread seem to demonstrate both that you
fail to understand where we actually are with the current technology
and are grossly underestimating how much further we have to go before
we get to the point where AV is ready for mass deployment.
And I think you're hiding your head in the sand or, to mix metaphors,
trying to stem the tide of progress.
My new(ish) car has / is currently being certified as a Level 4 AV in
certain jurisdictions (but not the UK, and in a spirit of full
disclosure, this was not taken into consideration when getting it).
Is yours?
On 06/02/2025 00:29, Owen Rees wrote:
Norman Wells <hex@unseen.ac.am> wrote:
The operator is still 'not', whatever symbols are used to invoke it.
In your post above you said the word not. That sort of sloppiness is
unfortunately too common in software and a rich source of bugs.
I am not inclined to accept your estimates of the effort required for a
task unless you can demonstrate a better understanding of software. For
example, can you explain what can go wrong when using a programming
language that has both polymorphic comparison operators and implicit
coercions?
No, but nerds probably can.
On 05/02/2025 16:21, Norman Wells wrote:
On 05/02/2025 12:30, Simon Parker wrote:
On 04/02/2025 16:25, Norman Wells wrote:
You must then have missed the earlier references in this thread to
studies showing that autonomous vehicles are in fact already safer
than human driven ones.
They are currently allowed already in parts of the USA, and in
France and Germany.
https://ioki.com/en/autonomous-vehicles-turning-point-in-european-regional-and-local-transport/
In the UK, they could be allowed by 2026.
It is my sad duty to inform you that you have, again, engaged in your
usual habit of speaking with great authority on a matter whilst
relying on Google searches and quick skims of articles in the
background.
According to the Department for Transport, which is a bit horse's mouthy:
"Self-driving vehicles set to be on roads by 2026 as Automated
Vehicles Act becomes law"
https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-roads-by-2026-as-automated-vehicles-act-becomes-law
Meaning I'll be able to ride in the UK equivalent of a Waymo from my
house to any destination to which I'd normally get a taxi / Uber?
To much fanfare, Germany did indeed announce approval for autonomous
vehicles in July of last year. But those familiar with the system,
(in contrast to those relying on Google to prop up their arguments),
will be aware of the following key points:
(1) This is a test / trial / whatever you want to call it of just six
autonomous vehicles (AVs), not a mass roll-out.
Did anyone claim it was?
Has anybody used the words "test / trial / whatever you want to call it"
when describing the proposed introduction of AVs in the UK throughout
this thread?
If not, why not?
Surely it would have been prudent to have made it clear that the
proposed introduction of AVs to the UK is going to be limited in size
(in terms of the number of vehicles deployed), scope (the area in which
the vehicles will be able to operate) and availability (all trials
elsewhere, and it is logical to assume the UK trials will be the same,
have involved an initial period of cars with a "standby driver" on board
being used to gather data, prior to members of the project team taking
rides in the vehicles, before they are finally used by carefully
selected participants in the project).
Whilst the AVs may be introduced in 2026 when do you think ordinary
members of the public will be able to ride in them?
(2) It is tightly restricted to the Rhine-Main region, more
specifically the city of Darmstadt in the Offenbach district, and
even more specifically restricted only to defined routes within that
city.
OK?
(3) As can be gathered from point (2), but worth stating for the
avoidance of doubt, the vehicles involved are Level 4 AVs, NOT Level
5. They have a "safety monitor" behind the wheel and are also
supervised remotely at a control centre.
Well, I suppose they may do, but according to the official definition
of Level 4:
"The vehicle operates completely autonomously under certain
conditions. The human being no longer has to be ready to intervene. He
can work, watch movies and even sleep."
Please outline what a Level 4 system is required to do should it fail or
be unable to proceed.
Then, please outline what must happen immediately thereafter.
JJ Ricks documents each of his journeys using Waymo. I invite you to
watch this video:
https://www.youtube.com/watch?v=zdKCQKBvH-A
Note that at around 6:00 (Chapter "Right Turn Detour") the Waymo
takes a right turn earlier than expected to make an easier left turn
thereby avoiding a complicated junction (a difficult "unprotected
left" which Waymo has a known habit of avoiding). It does the same
at around the ten minute mark (see previous comment on "known habits"
of current AVs).
How very sensible.
I would not describe an increased journey time, increased fuel usage and taking rat-runs through residential estates rather than sticking to main roads as "very sensible".
But then I have the advantage of actually knowing what I'm talking about rather than relying on Google to prop up my specious claims.
Things get really messy from the 12:35 mark onward. I actively
invite you to watch it for yourself, but in the meantime, here's
Waymo's official explanation of what happened:
"While driving fully autonomously through an extended work zone, the
Waymo Driver detected an unusual situation and requested the
attention of a remote Fleet Response specialist to provide additional
information. During that interaction, the Fleet Response team
provided incorrect guidance, which made it challenging for the Waymo
Driver to resume its intended route, and required Waymo's Roadside
Assistance team to complete the trip. While the situation was not
ideal, the Waymo Driver operated the vehicle safely until Roadside
Assistance arrived. Throughout, Waymo's team was in touch with the
rider, who provided thoughtful and helpful feedback that allows us to
continue learning and improving the Waymo Driver. Our team has
already assessed the event and improved our operational process."
Do you have anything a bit more recent than 3 years ago? Things
haven't stood still in the interim.
If only I had not addressed that in the very post to which you are replying! And, to answer your question, yes I do - I gave two further
links, the most recent of which was from December 2024. Recent enough
for you?
BTW, do you have any evidence that the newer Waymo Driver systems have addressed these problems or are you merely claiming "Time has passed,
things have moved on, this has probably been solved", without actually knowing much less having evidence to prove that to be the case?
My explanation:
The AV encountered a temporary no right turn because of roadworks
marked by cones, (Ed: something we definitely do not have in the
UK!), and could not complete the right turn it expected to make and
(to use Waymo terminology), "paused" as it was incapable of re-
routing in real time. A remote operator spoke to the passenger and
advised him that "Roadside Assistance" (i.e. a human driver) was on
the way. Whilst waiting for a human driver to arrive, the Waymo
Driver decided it was going to make the right turn after all and
having done so, then completely blocked the road. However, as it was
no longer "paused", "Roadside Assistance" was cancelled. Except for
the fact that the car was now sat in an active lane "paused" waiting
for a driver to come and rescue it, but it had just cancelled the
human, as it unpaused. There were several "false starts" and the AV
eventually gave up and put the hazard lights on in the "fast-lane" of
the active carriageway. Roadside Assistance, (aka the human driver),
arrived but the car took off again. This happened several times
before they finally accessed the vehicle at around 29 minutes at
which point the human drove the rest of the way, (or in Waymo speak,
"the ride was completed in manual").
And remember, this is on controlled, specific pre-defined routes.
Thankfully, Waymo keep "Roadside Assistance" (i.e. humans) around 5-6
miles from each car.
"Ah, but that video was three years ago!", I hear some cry.
Indeed, so why go on about it in such depth?
Because the problems highlighted in the video still exist.
OK then, here's a more recent one where the Waymo gets stuck in a loop:
https://www.youtube.com/watch?v=esLzeHKhGzg
No comment on this one?
Or another where it tries to drive through a temporary construction
zone (from December 2024):
https://www.youtube.com/watch?v=2ihN0IkIMPg
Despite claims to the contrary, this technology is not anywhere close
to a mass roll-out yet.
It's a process called debugging.
Actually, it isn't. Debugging is identifying and removing errors. These systems are still being actively developed and are therefore still
firmly in the "Research and Development" phase. For example, the latest version of the Waymo Driver utilises fewer sensors thereby reducing the cost. Reducing the number of sensors is definitely R&D, not "debugging".
Tangentially, I assume that you're aware that Waymo is under
investigation by the National Highway Traffic Safety Administration
(NHTSA) in America for 22 incidents in which the vehicles behaved
erratically and sometimes disobeyed traffic safety rules or were
involved in crashes? [^1]
Any pronouncement to make on that particular development in the field of
AVs?
[^1] https://static.nhtsa.gov/odi/inv/2024/INOA-PE24016-12382.pdf
Some things are absolutely implicit, and shouldn't need to be spelt out
to everyone regardless of mental capacity.
On 05/02/2025 20:06, Norman Wells wrote:
On 05/02/2025 17:32, Mark Goodge wrote:
I have no overwhelming reason to disagree with the industry consensus
(as cited earlier by Simon Parker) that we're probably around ten years
away from the first fully autonomous (Level 5) vehicles on the roads in
real-life use (as opposed to mere testing).
We don't actually need Level 5. Level 4 will do. And they're being
tested and proved right now.
When you say "Level 4 will do", please define precisely what you mean by
the phrase "will do".
Given that you've only made posts in the last 24 hours referencing the various levels of autonomy in AVs, I'm not convinced you are entirely
aware of the precise limitations inherent in a Level 4 AV.
On Thu, 6 Feb 2025 08:07:32 +0000, Norman Wells <hex@unseen.ac.am> wrote
in <m0j8u4Fmj8iU1@mid.individual.net>:
On 06/02/2025 00:29, Owen Rees wrote:
Norman Wells <hex@unseen.ac.am> wrote:
The operator is still 'not', whatever symbols are used to invoke it.
In your post above you said the word not. That sort of sloppiness is
unfortunately too common in software and a rich source of bugs.
I am not inclined to accept your estimates of the effort required for a
task unless you can demonstrate a better understanding of software. For
example, can you explain what can go wrong when using a programming
language that has both polymorphic comparison operators and implicit
coercions?
No, but nerds probably can.
With that belittling and derogatory remark you demonstrate that you
should never be entrusted with any role related to safety critical
systems.
On 05/02/2025 15:39, Mark Goodge wrote:
One scenario where it doesn't work is late at night or on lightly used
roads where there isn't enough traffic to give a reliable statistical
sample.
That is a limitation. OTOH, when the roads are quiet the traffic jams
tend not to be too bothersome either. Police will let the odd car go
through. Mostly we see roads closed for repair late at night. The satnav
can find some amazing alternative routes home (and seems to be fully
aware of all scheduled road closures). Last time that happened on the
A19 down, it picked a tiny road through a housing estate still being
built as a fantastic short cut.
(It will be a right rat run once more people know about it.)
On 03/02/2025 16:59, Mike Scott wrote:
On 03/02/2025 10:54, billy bookcase wrote:
With the sheer volume of inputs into an autopilot (location, speed,
multiple cameras, bugs on the windscreen, lidar, clouds, weather,
lighting, pedestrians, cyclists, other vehicles) do you think you
could *ever* duplicate the inputs?
It's purely hypothetical. If two sets of inputs were identical then the
outputs would be identical as all the data is numerical and subject to
exactly the same set of calculations.
Sorry, that doesn't work in the real world.
The whole point about "butterfly wings" is that many (all??) non-linear
calculations are thoroughly chaotic in their results. If you can't
specify inputs precisely, outputs can vary unpredictably. And any
"numerical" input from the real world cannot be *precisely* specified.
Output therefore can depend critically on an imprecise input.
(It's not that the butterfly /causes/ the hurricane. It's just that you
cannot put the full initial conditions into the highly non-linear
weather calculations. Include the butterfly, predict a hurricane;
exclude it, predict a sunny day. But you don't even know it's there....)
I reckon a driver AI is a decidedly non-linear system.
I don't think some people here give AI enough credit. Ask OpenAI for
example exactly the same questions, and you will get slightly different,
but usually essentially the same, answers.
It's human-like in this regard. If I ask a human two identical
questions, I would get slightly different answers in terms of prose (but hopefully identical in terms of substance).
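(The sensitivity Mike Scott describes is easy to demonstrate
numerically; a minimal sketch using the logistic map, a standard toy
example of a chaotic non-linear system, with the starting value and
perturbation chosen arbitrarily:)

```python
# Sensitivity to initial conditions in the logistic map with r = 4,
# a standard toy chaotic system.
def iterate(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = iterate(0.2, 50)          # one starting input
b = iterate(0.2 + 1e-9, 50)   # the "same" input, off by one part in a billion
print(abs(a - b))             # a macroscopic difference, nothing like 1e-9
```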
On 06/02/2025 14:53, Simon Parker wrote:
So you'll be posting a copy of your code, presently. Please feel free
to use the language in which you are most comfortable.
I don't write code any more; I leave that to minions.
On 06/02/2025 18:57, Mark Goodge wrote:
On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Then they all should be notified,
How?
Please can you detail the process by which the fact that a police
officer has just closed a road due to an accident is notified to all of
the various systems which might want that information.
This is, actually, a solvable problem, given certain basic assumptions,
and I have what I think would be an effective solution in mind, but I'd
be interested to hear how you would do it if you were designing the
system.
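(One hypothetical answer, sketched as a toy publish/subscribe feed; all
names are invented, and a real design would also need authentication,
delivery guarantees and an agreed location format for the closure:)

```python
# Entirely hypothetical design sketch: a single closure feed to which
# each mapping provider registers a callback, so one police report
# fans out to every subscribed system.
from dataclasses import dataclass, field

@dataclass
class ClosureFeed:
    subscribers: list = field(default_factory=list)

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, road_id, reason):
        # fan the notification out to every registered system
        for notify in self.subscribers:
            notify(road_id, reason)

received = []
feed = ClosureFeed()
feed.subscribe(lambda road, why: received.append(("provider-a", road, why)))
feed.subscribe(lambda road, why: received.append(("provider-b", road, why)))
feed.publish("A40", "closed by police after an accident")
```

In practice each provider would subscribe over the network rather than
in-process, and the open question remains who operates the feed and how
a roadside officer gets the closure into it promptly.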
It's a solvable problem according to you, just as I said.
That's quite enough.
On 05/02/2025 17:18, Mark Goodge wrote:
And that's just one document. Here are a few more:
https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
https://arxiv.org/pdf/2402.17690
https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
https://www.techtarget.com/searchenterpriseai/definition/driverless-car
TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is
either stunningly ignorant or deliberately misleading you.
AI is a buzzword which is generally used by the pretentious to make out
they have a better understanding of anything complex than they actually
do. But AI involves the crunching of huge amounts of data to develop
principles and insights that can be used more simply elsewhere without
having to reinvent the wheel every time. Autonomous car controls don't
have vast amounts of data, and you don't want to sit around for a week
waiting for it to decide exactly what it is it has just crashed into.
They are not employing AI, though AI may have played a part in
developing the algorithms they do actually use.
It's all a matter of proper definition of terms which is too commonly
lacking.
On 03/02/2025 16:59, Mike Scott wrote:
[quoted text muted]
I don't think some people here give AI enough credit. Ask OpenAI for
example exactly the same questions, and you will get slightly different,
but usually essentially the same, answers.
On 05/02/2025 18:05, Mark Goodge wrote:
Of course it matters what the object is. Some objects will do more damage to
the car if the car strikes them, and some objects will suffer more injury if
the car strikes them. There's a huge difference between, say, a plastic bag,
a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course,
you'd avoid all of them. But you wouldn't necessarily take abrupt evasive
action to avoid a plastic bag. On the other hand, you'd hit anything -
including a bollard - if necessary to avoid hitting a child.
I'm reminded of those creepy bollards that look like children. I suppose
the idea is that the driver will avoid hitting them because he likes
children; more likely he will avoid hitting them because they are made
of metal.
On 2025-02-06, Simon Parker <simonparkerulm@gmail.com> wrote:
On 05/02/2025 20:06, Norman Wells wrote:
On 05/02/2025 17:32, Mark Goodge wrote:
I have no overwhelming reason to disagree with the industry consensus
(as cited earlier by Simon Parker) that we're probably around ten
years away from the first fully autonomous (level 5) vehicles on the
roads in real life use (as opposed to mere testing).
We don't actually need Level 5. Level 4 will do. And they're being
tested and proved right now.
When you say "Level 4 will do", please define precisely what you mean
by the phrase "will do".
Given that you've only made posts in the last 24 hours referencing the
various levels of autonomy in AVs, I'm not convinced you are entirely
aware of the precise limitations inherent in a Level 4 AV.
I'll consider cars to be "full self drive" if they are legally allowed
to be used on all or very nearly all public roads with nobody inside who
is capable of driving (i.e. no licence, or asleep or drunk, etc). And
that they actually safely get you to the destination with high
reliability, of course :-)
On Thu, 6 Feb 2025 12:43:02 +0000, Max Demian <max_demian@bigfoot.com>
wrote:
On 05/02/2025 18:05, Mark Goodge wrote:
Of course it matters what the object is. Some objects will do more damage to
the car if the car strikes them, and some objects will suffer more injury if
the car strikes them. There's a huge difference between, say, a plastic bag,
a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course,
you'd avoid all of them. But you wouldn't necessarily take abrupt evasive
action to avoid a plastic bag. On the other hand, you'd hit anything -
including a bollard - if necessary to avoid hitting a child.
I'm reminded of those creepy bollards that look like children. I suppose
the idea is that the driver will avoid hitting them because he likes
children; more likely he will avoid hitting them because they are made
of metal.
The point is more that humans are, subconsciously, more likely to notice children than bollards. It's something that's hardwired into the human psyche. So people are less likely to hit bollards that look like children, not because they like children, but because they are more likely to notice bollards that look like children. And, having noticed them, they are then much less likely to hit them.
This is actually an interesting point as far as autonomous vehicles are concerned. It's well known in highway design that "sign blindness" can be an issue; if you have too many signs at one location, even if individually they each have something important to say, then drivers are less likely to take account of them. Information overload reduces compliance and reduces safety. But robot cars won't have that issue. On the contrary, as far as AI is concerned, you can never have too much data.
Equally, there are a lot of visual aspects of road design which act directly on human psychology. Things like "dragon's teeth" markings, village boundary gateways, etc all have no physical effect on a vehicle (unlike, say, speed bumps) and have no legal significance (unlike a 30mph sign). And yet they have a statistically significant effect in reducing accidents - not because people see them and think "oh, I must drive more safely", but simply because they cannot help being affected by them subconsciously. But AI doesn't have
a subconscious. So "nudge theory" won't work on autonomous vehicles; everything they need to know will need to be explicitly programmed, communicated or observed.
On 04/02/2025 21:29, Mark Goodge wrote:
On Tue, 4 Feb 2025 16:14:59 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 04/02/2025 14:47, Simon Parker wrote:
Then please detail which parties will be fulfilling the roles of the
commander and the flight technicians in fully automated vehicles.
No-one. Just as there isn't on the Docklands Light Railway.
DLR trains have a "Passenger Service Agent" (previously known as a Train
Captain) on board who is able to take over and drive it manually if
necessary. And the trains themselves don't need any routing capability,
because they run on rails. It's a very far cry from a fully autonomous road
vehicle.
Are these "Passenger Service Agents" fully qualified drivers (and paid
as much)? If so, what's the point?
On Thu, 6 Feb 2025 19:21:12 +0000, Norman Wells <hex@unseen.ac.am> wrote
in <m0kgd9FseqvU2@mid.individual.net>:
Some things are absolutely implicit, and shouldn't need to be spelt out
to everyone regardless of mental capacity.
Anyone with that attitude should never be allowed anywhere near the specification, design, implementation or operation of any system that requires high assurance.
On Thu, 6 Feb 2025 08:07:32 +0000, Norman Wells <hex@unseen.ac.am> wrote
in <m0j8u4Fmj8iU1@mid.individual.net>:
On 06/02/2025 00:29, Owen Rees wrote:
Norman Wells <hex@unseen.ac.am> wrote:
The operator is still 'not', whatever symbols are used to invoke it.
In your post above you said “the word ‘not’”. That sort of sloppiness is
unfortunately too common in software and a rich source of bugs.
I am not inclined to accept your estimates of the effort required for a
task unless you can demonstrate a better understanding of software. For
example, can you explain what can go wrong when using a programming
language that has both polymorphic comparison operators and implicit
coercions?
No, but nerds probably can.
With that belittling and derogatory remark you demonstrate that you
should never be entrusted with any role related to safety critical
systems.
On 05/02/2025 15:56, Norman Wells wrote:
On 05/02/2025 12:14, Simon Parker wrote:
On 04/02/2025 15:20, Norman Wells wrote:
On 04/02/2025 12:16, Mark Goodge wrote:
It would be trivially easy to do though. Police alert HQ, HQ tells
mapping, mapping puts a block where the accident is, autonomous
vehicles re-route to avoid it.
Is the police officer placing a general call with HQ or to a specific
desk at HQ? What happens if the request cannot be connected to HQ
immediately? How do HQ alert mapping? Is HQ alerting Google Maps,
Apple Maps, Garmin, et al individually or are they alerting a
central resource which subsequently pushes this data out to various
service providers?
Not my province. I just show how it can easily be done. The details
can be left to minions.
I do not consider it reasonable to state that a particular task is
"trivially easy to do" and then when asked for further details to
dismiss the question with a "Not my province." statement.
Additionally, the questions were asked to demonstrate that what you
claim is not, in fact, "trivially easy" and could, or more likely would,
be rather complex.
It may sound simple in theory, but as the quote reminds us, "In theory, theory and practice are the same. In practice they are not."
Similarly, "The Devil is in the details."
Details you are unable or unwilling to provide resulting in your claim
that it is "trivially easy to do" failing as unsubstantiated and
unsupported.
Not forgetting, of course, and quite topical at the moment: who is
paying for all this?
I do believe electronic communications are to all intents and purposes
free these days. There is no reason why most of it cannot be
automated and cost very little.
Again, your bold claims fail through lack of evidence when challenged.
For example, how are your automated electronic communications coping
with "Not Spots"?
At present, Google, Apple and Garmin get their data, mostly, without
charge.
Do they? Where from?
Surely someone making the bold claims you have made throughout the
thread is fully aware of how the current systems work and would not need
to ask such a question?
(Note the use of the word "systems" as there are multiple systems in play.)
If you do not know how the current systems work, how can you hope to
support the claims you are repeatedly making throughout the thread?
Or are you hoping your claims will go unchallenged just because you made
them and that ought to be enough for everyone to accept they are true regardless?
Why would they want to move to a model that involves them paying for
what they currently get for free?
It's in the national interest that such information should be freely
available and used.
Please provide examples where large tech companies have prioritised
acting in the national interest over acting in their own interest unless
and until compelled to do so by legislation.
Additionally, the current systems compete with each other rather than
feeding into a central system you envisage. By what means are you consolidating the competing systems?
Except that a human won't do anything either until and unless he
knows where a blockage is.
As autonomous vehicles become used there will have to be a
requirement for accurate and rapid information on blockages.
Information on congestion is readily available already in real time.
There's no technical reason why the same could not be done for
blockages.
It is clear that you do not understand how the current system works
because, if you did, you would know why this will not work for
blockages such as those being described.
But it could, and should. There is no technological reason why it can't.
There is no technological reason why every member of ULM cannot take a
flight into space. On what date is your flight scheduled?
Sadly, but clearly, there are considerations other than technological
reasons that must be factored in to the real world consideration of a
matter.
These things exist already.
If the policeman has a Scottish accent? Or Brummie? (I recommend
searching for "Amy101" if the second reference doesn't elicit a
"light bulb" moment.)
Then it will need to go through Google Translate, as usual.
Oh dear! I recommend re-reading what I wrote. The problem is *accents* not *language*. AI is notoriously poor at dealing with certain accents.
For the second time of asking, I recommend searching for "Amy101" as
this is clearly another area on which you are happy to issue
pronouncements whilst making it abundantly clear that you do not
understand the subject.
It's not AI, which is something entirely different.
Modern, (c.f. "Current"), voice recognition, (i.e. the very systems
you are referencing above that "already exist") is "not AI"? Are you
sure? I recommend phoning a friend, or asking the audience before you
lock in your answer. :-)
It is not AI in itself, though it is the product of previous AI which
mined the data.
Google Assistant (or whatever they are calling it this week), Siri and
Alexa all use AI technologies like machine learning, natural language processing and voice recognition to interact with users.
These technologies are often referred to as types of Narrow AI.
I know of nobody, with one notable exception, that claims that Narrow AI
is not AI.
On Thu, 6 Feb 2025 19:24:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 06/02/2025 18:57, Mark Goodge wrote:
On Thu, 6 Feb 2025 15:32:06 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Then they all should be notified,
How?
Please can you detail the process by which the fact that a police officer
has just closed a road due to an accident is notified to all of the various
systems which might want that information.
This is, actually, a solvable problem, given certain basic assumptions, and
I have what I think would be an effective solution in mind, but I'd be
interested to hear how you would do it if you were designing the system.
It's a solvable problem according to you, just as I said.
That's quite enough.
Ah, but my solution rests on assumptions which are not yet reality. I wonder
if you can correctly identify what they are. And, as I've pointed out, I am
a programmer in my day job. Given your repeated insistence that you
understand all the principles as well as, or even better than, people who
actually work in this field, it ought to be a trivial exercise for you to at
least describe the basic architecture of the necessary system. You don't
even need to write any actual code samples for that. Just a high-level
overview will do.
On 03/02/2025 17:43, Mark Goodge wrote:
On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
On 03/02/2025 10:07, billy bookcase wrote:
"Martin Harran" <martinharran@gmail.com> wrote in message
news:jm9kpj1e571uothotnblkqfgnjictvden3@4ax.com...
"At the current time" for complete control but probably not too far
away. There is quite a bit of AI already in modern cars with things
like picking up speed limits, automatic speed reduction when too
close to a vehicle in front, lane-keep assist.
Those are examples of automation, as with autpilots. Where the each
set of inputs, no matter how complex, will produce exactly the same
result every time.
AI systems on the other hand, learn and make up new rules as they go
along.
And so the process by which they actually arrive at particular
decisions may not be readily available to humans.
I think this is possibly what a lot of peopel are really worried
about,
AI is not part of autonomous vehicle control, and includes none of the
things Mr Harran identified above. It is simply automation according
to pre-determined rules.
But that's precisely the problem. It's impossible to have a
predetermined rule for every possible scenario. And dealing with
scenarios that you don't have a rule for requires intelligence.
Mark
So here's how ChatGPT responded to such a scenario. Is AI intelligent
enough to drive cars?
Q: Can you come up with a driving condition for which there are no rules
for, and can you say what you would do?
A:
Scenario:
You're driving at night in a rural area with no streetlights, and
suddenly, a massive swarm of insects (e.g., locusts or mayflies)
completely covers your windshield, making visibility nearly zero. Your
wipers and washer fluid are ineffective because the sheer number of
insects is too great.
What I Would Do:
Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a rear-end collision if there's a vehicle behind me.
Reduce Speed Gradually – I’d ease off the accelerator and slowly decelerate to maintain control.
Turn on Hazard Lights – This alerts other drivers to my reduced speed
and possible erratic movement.
Find a Safe Stopping Point – If I can, I’d try to pull over to a safe location like the shoulder, a rest area, or a side road.
Use Alternate Visibility Methods – If my windshield is completely
covered, I might open my side window and stick my head out carefully (if safe) to get some visibility.
Assess and Clear the Windshield – Once stopped, I’d use a squeegee, rag, or even a piece of clothing to remove the insects.
Report the Hazard if Needed – If it's a massive swarm affecting multiple drivers, I might call local authorities to warn others.
This is a situation that isn’t explicitly covered by standard driving
laws, but the best approach follows general defensive driving principles—stay calm, maintain control, and prioritize visibility and safety.
Would you have done something differently?
On 05/02/2025 17:32, Mark Goodge wrote:
On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 16:21, Mark Goodge wrote:
Again, that is my point. To be fully autonomous, a vehicle will need to be
able to derive the necessary data from its surroundings, context and memory
without relying solely on information transmitted from a central system.
That is, it will need to be able to behave like a human driver. Which, as
yet, they can't do. They'll get there eventually. But getting there isn't a
trivial problem to solve.
They are much closer already than you think.
I have no overwhelming reason to disagree with the industry consensus (as
cited earlier by Simon Parker) that we're probably around ten years away
from the first fully autonomous (level 5) vehicles on the roads in real life
use (as opposed to mere testing).
We don't actually need Level 5. Level 4 will do. And they're being
tested and proved right now.
Obviously, programming an autonomous vehicle is more of a challenge than
programming a chatbot. But the underlying trade-offs of computing power v
cost are essentially the same. So it would not surprise me at all if the
first genuine level 5 vehicles come from China. Which would raise a whole
lot of other ethical and legal issues.
Would it? Why?
On Thu, 6 Feb 2025 08:14:08 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 17:18, Mark Goodge wrote:
And that's just one document. Here are a few more:
https://www.xenonstack.com/blog/autonomous-vehicles-driver-assistance
https://hai.stanford.edu/news/how-ai-making-autonomous-vehicles-safer
https://arxiv.org/pdf/2402.17690
https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry
https://www.techtarget.com/searchenterpriseai/definition/driverless-car
TL;DR: anyone who tells you that AI plays no role in autonomous vehicles is
either stunningly ignorant or deliberately misleading you.
AI is a buzzword which is generally used by the pretentious to make out
they have a better understanding of anything complex than they actually
do. But AI involves the crunching of huge amounts of data to develop
principles and insights that can be used more simply elsewhere without
having to reinvent the wheel every time. Autonomous car controls don't
have vast amounts of data, and you don't want to sit around for a week
waiting for it to decide exactly what it is it has just crashed into.
They are not employing AI, though AI may have played a part in
developing the algorithms they do actually use.
Norman, I have just posted links to documents by researchers and autonomous vehicle developers which clearly state that they are using AI.
On 06/02/2025 14:49, Simon Parker wrote:
On 05/02/2025 15:56, Norman Wells wrote:
On 05/02/2025 12:14, Simon Parker wrote:
On 04/02/2025 15:20, Norman Wells wrote:
On 04/02/2025 12:16, Mark Goodge wrote:
It would be trivially easy to do though. Police alert HQ, HQ tells
mapping, mapping puts a block where the accident is, autonomous
vehicles re-route to avoid it.
Is the police officer placing a general call with HQ or to a specific
desk at HQ? What happens if the request cannot be connected to HQ
immediately? How do HQ alert mapping? Is HQ alerting Google Maps,
Apple Maps, Garmin, et al individually or are they alerting a
central resource which subsequently pushes this data out to various
service providers?
Not my province. I just show how it can easily be done. The details
can be left to minions.
I do not consider it reasonable to state that a particular task is
"trivially easy to do" and then when asked for further details to
dismiss the question with a "Not my province." statement.
Additionally, the questions were asked to demonstrate that what you
claim is not, in fact, "trivially easy" and could, or more likely would,
be rather complex.
It may sound simple in theory, but as the quote reminds us, "In theory,
theory and practice are the same. In practice they are not."
Similarly, "The Devil is in the details."
Details you are unable or unwilling to provide resulting in your claim
that it is "trivially easy to do" failing as unsubstantiated and
unsupported.
It's nuts and bolts stuff. Ask a builder. I deal with the bigger picture.
Not forgetting, of course, and quite topical at the moment: who is
paying for all this?
I do believe electronic communications are to all intents and purposes
free these days. There is no reason why most of it cannot be
automated and cost very little.
Again, your bold claims fail through lack of evidence when challenged.
For example, how are your automated electronic communications coping
with "Not Spots"?
Not very well of course. But not spots concern just about 7% of the
UK's land mass, generally in very remote areas. 93% of the nation's
land mass is not a not spot, where of course electronic communications
work very well and essentially for free. What's not to like?
Your glass half empty attitude, even when it's 93% full, is yet another example of perfect being the enemy of good.
At present, Google, Apple and Garmin get their data, mostly, without
charge.
Do they? Where from?
Surely someone making the bold claims you have made throughout the
thread is fully aware of how the current systems work and would not need
to ask such a question?
You made the claim. I just queried it. And I wonder, since you didn't
say, what point you were trying to make.
(Note the use of the word "systems" as there are multiple systems in play.)
If you do not know how the current systems work, how can you hope to
support the claims you are repeatedly making throughout the thread?
Or are you hoping your claims will go unchallenged just because you made
them and that ought to be enough for everyone to accept they are true
regardless?
I don't know what claims you're only vaguely alluding to.
Why would they want to move to a model that involves them paying for
what they currently get for free?
It's in the national interest that such information should be freely
available and used.
Please provide examples where large tech companies have prioritised
acting in the national interest over acting in their own interest unless
and until compelled to do so by legislation.
I'm not following. Why do you think companies will have to move to a
model that involves them paying for what they currently get for free?
Additionally, the current systems compete with each other rather than
feeding into a central system you envisage. By what means are you
consolidating the competing systems?
There's no reason why they should have to consolidate. Competition
drives progress. That's why, for example, there are commercial
data-bases of speed limits rather than just piles of paper TRO in
Council archives.
Except that a human won't do anything either until and unless he
knows where a blockage is.
As autonomous vehicles become used there will have to be a
requirement for accurate and rapid information on blockages.
Information on congestion is readily available already in real time.
There's no technical reason why the same could not be done for
blockages.
It is clear that you do not understand how the current system works
because, if you did, you would know why this will not work for
blockages such as those being described.
But it could, and should. There is no technological reason why it can't.
There is no technological reason why every member of ULM cannot take a
flight into space. On what date is your flight scheduled?
Sadly, but clearly, there are considerations other than technological
reasons that must be factored in to the real world consideration of a
matter.
What I suggest is perfectly feasible. As others here have confirmed.
Even if it requires a bit of effort and a positive attitude.
These things exist already.
If the policeman has a Scottish accent? Or Brummie? (I recommend
searching for "Amy101" if the second reference doesn't elicit a
"light bulb" moment.)
Then it will need to go through Google Translate, as usual.
Oh dear! I recommend re-reading what I wrote. The problem is *accents*
not *language*. AI is notoriously poor at dealing with certain accents.
You do like edge cases. And you sadly seem to think that any minor
hitch is a fatal flaw, when it's anything but.
For the second time of asking, I recommend searching for "Amy101" as
this is clearly another area on which you are happy to issue
pronouncements whilst making it abundantly clear that you do not
understand the subject.
It's not AI, which is something entirely different.
Modern, (c.f. "Current"), voice recognition, (i.e. the very systems
you are referencing above that "already exist") is "not AI"? Are you
sure? I recommend phoning a friend, or asking the audience before you
lock in your answer. :-)
It is not AI in itself, though it is the product of previous AI which
mined the data.
Google Assistant (or whatever they are calling it this week), Siri and
Alexa all use AI technologies like machine learning, natural language
processing and voice recognition to interact with users.
These technologies are often referred to as types of Narrow AI.
I know of nobody, with one notable exception, that claims that Narrow AI
is not AI.
The reason it has a particular designation is that it is not the general AI
that has been used and assumed by others in this thread.
On Thu, 06 Feb 2025 22:30:13 +0000, Jon Ribbens wrote:
On 2025-02-06, Simon Parker <simonparkerulm@gmail.com> wrote:
On 05/02/2025 20:06, Norman Wells wrote:
On 05/02/2025 17:32, Mark Goodge wrote:
I have no overwhelming reason to disagree with the industry consensus
(as cited earlier by Simon Parker) that we're probably around ten
years away from the first fully autonomous (level 5) vehicles on the
roads in real life use (as opposed to mere testing).
We don't actually need Level 5. Level 4 will do. And they're being
tested and proved right now.
When you say "Level 4 will do", please define precisely what you mean
by the phrase "will do".
Given that you've only made posts in the last 24 hours referencing the
various levels of autonomy in AVs, I'm not convinced you are entirely
aware of the precise limitations inherent in a Level 4 AV.
I'll consider cars to be "full self drive" if they are legally allowed
to be used on all or very nearly all public roads with nobody inside who
is capable of driving (i.e. no licence, or asleep or drunk, etc). And
that they actually safely get you to the destination with high
reliability, of course :-)
If all cars were autonomous, we could have that now ...
On 05/02/2025 16:31, Mark Goodge wrote:
On Wed, 5 Feb 2025 12:36:57 +0000, Simon Parker <simonparkerulm@gmail.com> >> wrote:
Having worked extensively with the local council on a related matter,
[^1], I can assure you that TROs are created individually by each LA.
They do not even create their own searchable repository of TROs, never
mind anything even approaching a database of all TROs nationwide which
third parties can access in real time.
Actually, my local highway authority does provide a searchable repository of
recent TROs, provided you know how to use Google's advanced search tools:
https://www.google.co.uk/search?q=%22speed+limit%22+inurl%3Ahttps%3A%2F%2Fwww.worcestershire.gov.uk%2Fcouncil-services%2Fplanning-and-developments%2Fplanning-applications%2Fpublic-notices&sca_esv=df054736d75a2025&sxsrf=AHTn8zo5eX_VoDyUDF9lBIH4odn0sWyKEw%3A1738772754354&source=hp&ei=EpGjZ6qXEO6khbIP3IjHiAE&iflsig=ACkRmUkAAAAAZ6OfIl-BqseInoxRKkenvn3r1LuXIAC2&ved=0ahUKEwiq6POi-ayLAxVuUkEAHVzEEREQ4dUDCBk&uact=5&oq=%22speed+limit%22+inurl%3Ahttps%3A%2F%2Fwww.worcestershire.gov.uk%
Point of Order: I do not consider a Google Advanced Search, (using
"inurl" to restrict the results to searching a single web-site), to meet
the brief of a LA creating their own searchable repository of TROs.
YMMV. :-)
From a programming perspective, it's actually easier. It is (genuinely!)
relatively trivial to screenscrape a known URL and pull out all the
documents linked from it that match a certain pattern or contain certain
keywords (such as "speed limit"). The difficulty is not automating that,
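The kind of scrape described could look roughly like this; the page HTML and link paths below are invented for illustration, and a real script would fetch the live page (with polite rate-limiting and error handling) rather than use an inline string:

```python
# Pull out of an HTML page every linked document whose link text contains
# a keyword such as "speed limit". The HTML is a made-up stand-in for a
# council public-notices page.
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (url, text) pairs for <a> links whose text matches a keyword."""
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword.lower()
        self.links = []
        self._href = None
        self._text = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            if self.keyword in text.lower():
                self.links.append((self._href, text))
            self._href = None

page = """
<ul>
 <li><a href="/tro/2024-017.pdf">Speed limit order: A44 Worcester Road</a></li>
 <li><a href="/tro/2024-018.pdf">Temporary road closure: Mill Lane</a></li>
</ul>
"""
scraper = LinkScraper("speed limit")
scraper.feed(page)
print(scraper.links)   # only the speed-limit notice matches
```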
If I understand correctly, you're merely publishing each individual TRO
and Google is creating a SERP based on a query against those TROs. Is
every TRO ever produced available at that site or only more recent ones?
Bear in mind that for NormTech's system to work, he needs to be able to
submit a GPS location to the system which then retrieves all TROs that
apply to that location, sorts them into chronological order and finally
analyses those TROs to determine the speed limit currently in effect at
that point.
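That lookup can be sketched as below. Everything here is a hypothetical simplification: real TROs define complex geographic extents, whereas this sketch reduces "applies to that location" to a crude radius check purely to keep the illustration self-contained.

```python
# Hedged sketch of the GPS -> current-speed-limit lookup described above:
# gather every TRO covering a location, sort chronologically, and let the
# most recent order win, falling back to a default limit otherwise.
import math
from dataclasses import dataclass
from datetime import date

@dataclass
class TRO:
    made: date          # date the order was made
    lat: float
    lon: float
    radius_m: float     # crude stand-in for the order's geographic extent
    limit_mph: int

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def current_limit(tros, lat, lon, default=30):
    """Latest TRO covering (lat, lon) wins; fall back to a default limit."""
    applicable = [t for t in tros
                  if haversine_m(t.lat, t.lon, lat, lon) <= t.radius_m]
    if not applicable:
        return default
    return max(applicable, key=lambda t: t.made).limit_mph

# Hypothetical data: a 40mph order later superseded by a 30mph one.
tros = [
    TRO(date(2018, 5, 1), 52.19, -2.22, 500, 40),
    TRO(date(2023, 9, 1), 52.19, -2.22, 500, 30),
]
print(current_limit(tros, 52.19, -2.22))   # latest order applies: 30
```

The hard part, as the thread notes, is not this arithmetic but getting every TRO into machine-readable form with an accurate extent in the first place.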
On 03/02/2025 16:59, Mike Scott wrote:
On 03/02/2025 10:54, billy bookcase wrote:
With the sheer volume of inputs into an autopilot (location, speed,
multiple cameras, bugs on the windscreen, lidar, clouds, weather,
lighting, pedestrians, cyclists, other vehicles) do you think you could
*ever* duplicate the inputs?
It's purely hypothetical. If two sets of inputs were identical then the
outputs would be identical as all the data is numerical and subject to
exactly the same set of calculations.
Sorry, that doesn't work in the real world.
The whole point about "butterfly wings" is that many (all??) non-
linear calculations are thoroughly chaotic in their results. If you
can't specify inputs precisely, outputs can vary unpredictably. And
any "numerical" input from the real world cannot be *precisely*
specified. Output therefore can depend critically on an imprecise input.
(It's not that the butterfly /causes/ the hurricane. It's just that
you cannot put the full initial conditions into the highly non-linear
weather calculations. Include the butterfly, predict a hurricane;
exclude it, predict a sunny day. But you don't even know it's there....)
I reckon a driver AI is a decidedly non-linear system.
I don't think some people here give AI enough credit. Ask OpenAI for
example exactly the same questions, and you will get slightly different,
but usually essentially the same, answers.
It's human-like in this regard. If I ask a human two identical
questions, I would get slightly different answers in terms of prose (but hopefully identical in terms of substance).
On 05/02/2025 16:47, Mark Goodge wrote:
After all, you keep confidently asserting that we've already got fully
autonomous vehicles up and running, so you must have seen this mentioned
somewhere on the web, surely?
Before I do any more of that, you need to clarify what exactly you mean
by 'fully' autonomous, which is an expression I've not used at all so
haven't confidently asserted actually.
Is Level 4 'fully' autonomous?
On 07/02/2025 09:20, Mark Goodge wrote:
And, as I've pointed out, I am
a programmer in my day job. Given your repeated insistence that you
understand all the principles as well as, or even better than, people who
actually work in this field, it ought to be a trivial exercise for you to at
least describe the basic architecture of the necessary system. You don't
even need to write any actual code samples for that. Just a high-level
overview will do.
That's what I gave you. Officer calls HQ, HQ alerts live mapping
providers, they put blocks on maps, AVs re-route to avoid blocks.
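That overview can be written down in a few lines; a toy sketch only (all the names here are invented, and it glosses over exactly the coordination questions raised elsewhere in the thread):

```python
# Toy model of the chain: officer -> HQ -> live-mapping providers -> AVs.
class Provider:
    """A live-mapping provider holding a set of closed road segments."""
    def __init__(self):
        self.closed = set()

def hq_notify(road_id, providers):
    """HQ pushes one closure notification to every mapping provider."""
    for p in providers:
        p.closed.add(road_id)

def reroute(route, provider):
    """An AV drops any segment its provider now marks as closed."""
    return [seg for seg in route if seg not in provider.closed]
```

Every hard question in the thread (latency, who staffs HQ, providers that never get the call) lives in the gaps between these three functions.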
On 07/02/2025 09:31, Mark Goodge wrote:
Norman, I have just posted links to documents by researchers and autonomous
vehicle developers which clearly state that they are using AI.
Well, they would, wouldn't they? They want to sound cutting edge.
What they're actually using is what Mr Parker has identified as 'Narrow
AI' but that doesn't sound as good. However, it does mean their cars
don't have to carry round with them their own huge data-processing
centres, electrical sub-stations and massive water-cooling facilities,
which broader or proper AI requires.
On 07/02/2025 09:58, Mark Goodge wrote:
On Thu, 6 Feb 2025 12:43:02 +0000, Max Demian <max_demian@bigfoot.com>
wrote:
On 05/02/2025 18:05, Mark Goodge wrote:
Of course it matters what the object is. Some objects will do more damage to
the car if the car strikes them, and some objects will suffer more injury if
the car strikes them. There's a huge difference between, say, a plastic bag,
a cardboard box, a lump of wood, a bollard, and a child. Ideally, of course,
you'd avoid all of them. But you wouldn't necessarily take abrupt evasive
action to avoid a plastic bag. On the other hand, you'd hit anything -
including a bollard - if necessary to avoid hitting a child.
I'm reminded of those creepy bollards that look like children. I suppose
the idea is that the driver will avoid hitting them because he likes
children; more likely he will avoid hitting them because they are made
of metal.
The point is more that humans are, subconsciously, more likely to notice
children than bollards. It's something that's hardwired into the human
psyche. So people are less likely to hit bollards that look like children,
not because they like children, but because they are more likely to notice
bollards that look like children. And, having noticed them, they are then
much less likely to hit them.
Is there any research on this? The bollards don't look like actual
children. I suppose their creepy unfamiliarity might make them more
noticeable until people get used to them.
Because of China's known propensity to engage in data extraction and censorship. What if a Chinese-made automated taxi was carrying a
couple of politicians having a conversation about Taiwan?
"Mark Goodge" <usenet@listmail.good-stuff.co.uk> wrote in message news:h8obqj9shmlgjk6ut7ub7talu3v2erh09d@4ax.com...
Because of China's known propensity to engage in data extraction and
censorship. What if a Chinese-made automated taxi was carrying a
couple of politicians having a conversation about Taiwan?
Indeed.
I'll bet it would have the Chinese quaking in their boots
Especially if we decided to send our other ship, as well.
On 06/02/2025 21:51, Owen Rees wrote:
On Thu, 6 Feb 2025 19:21:12 +0000, Norman Wells <hex@unseen.ac.am> wrote
in <m0kgd9FseqvU2@mid.individual.net>:
Some things are absolutely implicit, and shouldn't need to be spelt out
to everyone regardless of mental capacity.
Anyone with that attitude should never be allowed anywhere near the
specification, design, implementation or operation of any system that
requires high assurance.
How does that follow?
Several banks got 2000 *not* being a leap year wrong.
The law of unintended consequences means that anything however[...]
trivial that is missing from a requirements specification that could
lead to undefined behaviour will at some later stage bite you in the backside.
Classic examples being:
Gunnery Coriolis force correction tables that are applied in exactly
the wrong direction in the Southern Hemisphere - Boer War *and*
Falklands War!
Twice as inaccurate as doing nothing at all.
(the latter software failing was *very* quickly corrected)
Jethro_uk wrote:
You are correct, it only displays the speed limit when it's on route
There was a sneeze of time when it would display speed limits
without needing a route programmed (same as Waze "driving mode").
Then it stopped.
GM also called it "driving mode", I'm fairly sure that when using GM in
the car it still displays the current speed limit, I'll try to remember
to check when I next venture out ...
On 2025-02-05, Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
On Wed, 5 Feb 2025 11:18:30 +0000, Norman Wells <hex@unseen.ac.am> wrote:
Of course they haven't all been solved, but the evidence is that AVs are
already safer than manually driven vehicles which is a very significant
marker point in their development. The quest now is to approach
perfection with, of course, the law of diminishing returns applying.
They're safer than human drivers under certain, carefully controlled and
restricted, conditions. But they still make mistakes that human drivers are
very unlikely to make, even if those mistakes are not directly dangerous.
The "fail 'safe' by stopping immobile in the middle of a lane of
fast-moving traffic" shown in that Waymo video earlier seemed pretty
"directly dangerous" to me.
Bezos specifically chose the name "Alexa" after the ancient library of
Alexandria. The developers liked it because of the hard "X" consonant
which helped them increase the precision of recognition in the early
versions.
The wife of a good friend is named "Alexandra", (or "Alex" for short).
For fairly obvious reasons, they cannot and will not have Amazon smart
devices in their home.
Similarly, Siri is named after Sigrid, a Norwegian co-worker of Dag
Kittlaus, founder of Siri, prior to its acquisition by Apple.
Anyway, I believe the names Siri and Alexa were carefully chosen because
of their uniqueness and low likelihood of confusion with ordinary words
used in ordinary speech.
On Wed, 05 Feb 2025 16:54:50 +0000, Norman Wells wrote:
On 05/02/2025 16:43, Mark Goodge wrote:
[quoted text muted]
I think it would probably just refuse to open the pod bay doors.
Anyway, I believe the names Siri and Alexa were carefully chosen because
of their uniqueness and low likelihood of confusion with ordinary words
used in ordinary speech.
More likely they were chosen as "girly" names which would sit quite
happily in the living room alongside the air freshener.
On Wed, 5 Feb 2025 20:06:11 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 17:32, Mark Goodge wrote:
On Wed, 5 Feb 2025 16:44:36 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 05/02/2025 16:21, Mark Goodge wrote:
Again, that is my point. To be fully autonomous, a vehicle will need to be
able to derive the necessary data from its surroundings, context and memory
without relying solely on information transmitted from a central system.
That is, it will need to be able to behave like a human driver. Which, as
yet, they can't do. They'll get there eventually. But getting there isn't a
trivial problem to solve.
They are much closer already than you think.
I have no overwhelming reason to disagree with the industry consensus (as
cited earlier by Simon Parker) that we're probably around ten years away
from the first fully autonomous (level 5) vehicles on the roads in real life
use (as opposed to mere testing).
We don't actually need Level 5. Level 4 will do. And they're being
tested and proved right now.
So you don't actually believe that fully autonomous vehicles are closer than I think?
programming a chatbot. But the underlying trade-offs of computing power v
cost are essentially the same. So it would not surprise me at all if the
first genuine level 5 vehicles come from China. Which would raise a whole
lot of other ethical and legal issues.
Would it? Why?
Because of China's known propensity to engage in data extraction and censorship. What if a Chinese-made automated taxi was carrying a couple of politicians having a conversation about Taiwan?
On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
Bear in mind that for NormTech's system to work, he needs to be able to
submit a GPS location to the system which then retrieves all TROs that
apply to that location, sorts them into chronological order and finally
analyses those TROs to determine the speed limit currently in effect at
that point.
Yes, and that's the programming challenge. Which would be determinedly non-trivial.
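For what it's worth, the core of the lookup Simon describes fits in a few lines. A sketch under my own assumptions (invented names, and it ducks the genuinely hard parts: digitising decades of paper TROs, and deciding what "applies to this location" means):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TRO:
    made: date        # date the order was made
    limit_mph: int    # the limit it imposes
    applies: object   # predicate (lat, lon) -> bool: does it cover this point?

def limit_in_effect(lat, lon, orders, default_mph=30):
    """Gather the TROs covering this point in chronological order;
    the most recent applicable order wins. Fall back to a default
    limit when no order covers the location."""
    applicable = sorted((o for o in orders if o.applies(lat, lon)),
                        key=lambda o: o.made)
    return applicable[-1].limit_mph if applicable else default_mph
```

The sorting and selection really are trivial; the non-trivial part is populating `orders` correctly and keeping it current, which is the whole argument.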
On 07/02/2025 10:44, Mark Goodge wrote:
On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
Bear in mind that for NormTech's system to work, he needs to be able to
submit a GPS location to the system which then retrieves all TROs that
apply to that location, sorts them into chronological order and finally
analyses those TROs to determine the speed limit currently in effect at
that point.
Yes, and that's the programming challenge. Which would be determinedly
non-trivial.
How is it then that my little Garmin seems to know the speed limits
already? Surely all the work you say is a determinedly non-trivial
programming challenge has already been done?
This is actually an interesting point as far as autonomous vehicles are concerned. It's well known in highway design that "sign blindness" can be an issue; if you have too many signs at one location, even if individually they each have something important to say, then drivers are less likely to take account of them. Information overload reduces compliance and reduces safety. But robot cars won't have that issue. On the contrary, as far as AI is concerned, you can never have too much data.
Andy Burns wrote:
Jethro_uk wrote:
You are correct, it only displays the speed limit when it's on route guidance, I can't think why they'd decide that's best?
There was a sneeze of time when it would display speed limits
without needing a route programmed (same as Waze "driving mode").
Then it stopped.
GM also called it "driving mode", I'm fairly sure that when using GM in
the car it still displays the current speed limit, I'll try to remember
to check when I next venture out ...
In other news, if your car only has wired android Auto, this device JustWorks™
<https://amazon.co.uk/dp/B0DBVMG9Y6>
Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
Modern systems can handle a lot more data but modern sensors can also
deliver a lot more data. In a system of the kind I heard described as
"hard real time" back in the day I would expect information overload to
still be an issue.
Mark Goodge <usenet@listmail.good-stuff.co.uk> wrote:
[quoted text muted]
I was reading about Margaret Hamilton recently. The part I found most interesting was the description of how the Apollo 11 LEM software had
been written to handle unexpected events. There was an error in the
checklist the astronauts were following that caused them to switch on a system that sent too much data, navigation data not relevant during the approach to landing. The particular event had not been anticipated but
the possibility of an overload had. The essential functions were still
able to operate and the landing went ahead.
On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 07/02/2025 10:44, Mark Goodge wrote:
On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
Bear in mind that for NormTech's system to work, he needs to be able to
submit a GPS location to the system which then retrieves all TROs that
apply to that location, sorts them into chronological order and finally
analyses those TROs to determine the speed limit currently in effect at
that point.
Yes, and that's the programming challenge. Which would be determinedly
non-trivial.
How is it then that my little Garmin seems to know the speed limits
already? Surely all the work you say is a determinedly non-trivial
programming challenge has already been done?
It doesn't know all of them, everywhere, all the time.
All you are doing by
asserting that it does is to demonstrate that you don't drive enough in enough different places to discover the edge cases.
If you disagree, then sign up for the various Garmin forums on the web and tell all the people reporting that their Garmin gets the speed limit wrong that they must be mistaken. I'm sure they'll be thrilled to get the benefit of your expertise.
This is how I smuggled "alien invasion" into a BCP/DR plan for a former employer.
On 07/02/2025 20:38, Mark Goodge wrote:
On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 07/02/2025 10:44, Mark Goodge wrote:
On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
Bear in mind that for NormTech's system to work, he needs to be able to
submit a GPS location to the system which then retrieves all TROs that
apply to that location, sorts them into chronological order and finally
analyses those TROs to determine the speed limit currently in effect at
that point.
Yes, and that's the programming challenge. Which would be determinedly
non-trivial.
How is it then that my little Garmin seems to know the speed limits
already? Surely all the work you say is a determinedly non-trivial
programming challenge has already been done?
It doesn't know all of them, everywhere, all the time.
I can only speak from my own experience, which is that it does seem to
know the speed limit wherever I go, except of course for very temporary
ones.
I don't doubt that 'It doesn't know all of them, everywhere, all the
time' but if it works, say, 95% of the time, that's hardly a glass half
empty situation, is it?
"J Newman" <jenniferkatenewman@gmail.com> wrote in message news:vo49ms$3d0ms$1@dont-email.me...
It's human-like in this regard. If I ask a human two identical questions, I would get
slightly different answers in terms of prose (but hopefully identical in terms of
substance).
You're lucky then.
Some humans can ask another human an identical question
on two successive days and get a completely different
answer each time.
And on the next day as well
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vo7b4v$1sc4r$75@dont-email.me...
This is how I smuggled "alien invasion" into a BCP/DR plan for a former
employer.
1. Ask for volunteers for abduction.
2. Select those employees considered most suitable for abduction so as
to fulfil the quota.
On Sat, 8 Feb 2025 08:38:13 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 07/02/2025 20:38, Mark Goodge wrote:
On Fri, 7 Feb 2025 11:21:38 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 07/02/2025 10:44, Mark Goodge wrote:
On Thu, 6 Feb 2025 15:26:46 +0000, Simon Parker <simonparkerulm@gmail.com>
wrote:
Bear in mind that for NormTech's system to work, he needs to be able to
submit a GPS location to the system which then retrieves all TROs that
apply to that location, sorts them into chronological order and finally
analyses those TROs to determine the speed limit currently in effect at
that point.
Yes, and that's the programming challenge. Which would be determinedly
non-trivial.
How is it then that my little Garmin seems to know the speed limits
already? Surely all the work you say is a determinedly non-trivial
programming challenge has already been done?
It doesn't know all of them, everywhere, all the time.
I can only speak from my own experience, which is that it does seem to
know the speed limit wherever I go, except of course for very temporary
ones.
So not all the time, then.
I don't doubt that 'It doesn't know all of them, everywhere, all the
time' but if it works, say, 95% of the time, that's hardly a glass half
empty situation, is it?
What would you expect an autonomous vehicle to do in locations that are part of the 5%?
The actual percentage isn't really relevant. The point is that as long as it's not 100%, then an autonomous vehicle has to be able to cope with situations where its sat-nav data gives the wrong speed limit. And that, of course, includes temporary limits.
billy bookcase wrote:
"J Newman" <jenniferkatenewman@gmail.com> wrote:
It's human-like in this regard. If I ask a human two identical
questions, I would get slightly different answers in terms of
prose (but hopefully identical in terms of substance).
You're lucky then.
Some humans can ask another human an identical question
on two successive days and get a completely different
answer each time.
And on the next day as well
Stop spending time with economists.
On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
news:vo7b4v$1sc4r$75@dont-email.me...
This is how I smuggled "alien invasion" into a BCP/DR plan for a former
employer.
1. Ask for volunteers for abduction.
2. Select those employees considered most suitable for abduction so as
to fulfil the quota.
Sadly that wasn't in my remit :)
I just had to scope out the IT systems ability to cope.
There was quite a heated "discussion" where a senior executive really
tried to discount the possibility that both the office and its alternate
location (which was in Gloucester) would be inaccessible at the same time.
Come March 2020 ....
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message news:vo7pom$1sc4r$77@dont-email.me...
On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
news:vo7b4v$1sc4r$75@dont-email.me...
This is how I smuggled "alien invasion" into a BCP/DR plan for a former
employer.
1. Ask for volunteers for abduction.
2. Select those employees considered most suitable for abduction so as
to fulfil the quota.
Sadly that wasn't in my remit :)
A lost opportunity there I'm afraid. Did Alexander the Great say it wasn't in his
remit? Did Napoleon?
On 08/02/2025 15:00, Mark Goodge wrote:
What would you expect an autonomous vehicle to do in locations that are part
of the 5%?
Default to some programmed safe level or take inputs from other sensors
and data sources of course.
The actual percentage isn't really relevant. The point is that as long as
it's not 100%, then an autonomous vehicle has to be able to cope with
situations where its sat-nav data gives the wrong speed limit. And that, of
course, includes temporary limits.
Might as well give up all attempts at progress then. These are clearly
insuperable problems and deal-breakers, at least for you.
On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 08/02/2025 15:00, Mark Goodge wrote:
What would you expect an autonomous vehicle to do in locations that are part
of the 5%?
Default to some programmed safe level or take inputs from other sensors
and data sources of course.
As, indeed, I have been telling you, all the time you've been insisting that an autonomous vehicle will always be able to trust its data supply.
The actual percentage isn't really relevant. The point is that as long as
it's not 100%, then an autonomous vehicle has to be able to cope with
situations where its sat-nav data gives the wrong speed limit. And that, of
course, includes temporary limits.
Might as well give up all attempts at progress then. These are clearly
insuperable problems and deal-breakers, at least for you.
No; in order to progress we just need to stop pretending that it's a trivial problem to solve. Maybe we need to rely on the expertise of those who are actually working on it, rather than the inexpert opinions of random people
on the Internet.
On 09/02/2025 10:42, billy bookcase wrote:
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
news:vo7pom$1sc4r$77@dont-email.me...
On Sat, 08 Feb 2025 11:13:08 +0000, billy bookcase wrote:
"Jethro_uk" <jethro_uk@hotmailbin.com> wrote in message
news:vo7b4v$1sc4r$75@dont-email.me...
This is how I smuggled "alien invasion" into a BCP/DR plan for a former
employer.
1. Ask for volunteers for abduction.
2. Select those employees considered most suitable for abduction so as
to fulfil the quota.
Sadly that wasn't in my remit :)
A lost opportunity there I'm afraid. Did Alexander the Great say it wasn't in his
remit? Did Napoleon?
Those gentlemen could (and did) freely employ means of persuasion that tend to be
frowned upon today.
Having the absolute command of a large army does tend to impress people.
On 09/02/2025 21:42, Mark Goodge wrote:
On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 08/02/2025 15:00, Mark Goodge wrote:
What would you expect an autonomous vehicle to do in locations that are part
of the 5%?
Default to some programmed safe level or take inputs from other sensors
and data sources of course.
As, indeed, I have been telling you, all the time you've been insisting that
an autonomous vehicle will always be able to trust its data supply.
As of course it should. It's best if it can. What I said only applies
to very few in remote areas where it may unfortunately be necessary.
I never claimed 95% or so was a glass completely full.
But it's certainly not half empty.
The actual percentage isn't really relevant. The point is that as long as
it's not 100%, then an autonomous vehicle has to be able to cope with
situations where its sat-nav data gives the wrong speed limit. And that, of
course, includes temporary limits.
Might as well give up all attempts at progress then. These are clearly
insuperable problems and deal-breakers, at least for you.
No; in order to progress we just need to stop pretending that it's a trivial
problem to solve. Maybe we need to rely on the expertise of those who are
actually working on it, rather than the inexpert opinions of random people
on the Internet.
It has largely been solved already. The rest will follow. In the
meantime, we have a glass perhaps 95% full, which is no reason to be Eeyore.
On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 09/02/2025 21:42, Mark Goodge wrote:
On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 08/02/2025 15:00, Mark Goodge wrote:
What would you expect an autonomous vehicle to do in locations that are part
of the 5%?
Default to some programmed safe level or take inputs from other sensors
and data sources of course.
As, indeed, I have been telling you, all the time you've been insisting that
an autonomous vehicle will always be able to trust its data supply.
As of course it should. It's best if it can. What I said only applies
to very few in remote areas where it may unfortunately be necessary.
What should an autonomous vehicle do if it has a data supply (ie, it's not
in a mobile notspot) but its data disagrees with signage on the ground?
I never claimed 95% or so was a glass completely full.
But it's certainly not half empty.
It's actually around 91% at the moment, apparently. The target is 95%. But, of course, you can't have level 5 autonomy until you can cope with 100% of the roads, even those which are outwith the 95% mobile data coverage.
The actual percentage isn't really relevant. The point is that as long as
it's not 100%, then an autonomous vehicle has to be able to cope with
situations where its sat-nav data gives the wrong speed limit. And that, of
course, includes temporary limits.
Might as well give up all attempts at progress then. These are clearly
insuperable problems and deal-breakers, at least for you.
No; in order to progress we just need to stop pretending that it's a trivial
problem to solve. Maybe we need to rely on the expertise of those who are
actually working on it, rather than the inexpert opinions of random people
on the Internet.
It has largely been solved already. The rest will follow. In the
meantime, we have a glass perhaps 95% full, which is no reason to be Eeyore.
I'm sure the rest will follow. I've never disputed that. All I'm pointing
out is that solving it will take considerably more computing complexity than you have shown even the slightest ability to comprehend.
On 06/02/2025 15:32, Norman Wells wrote:
On 06/02/2025 14:51, Simon Parker wrote:
On 05/02/2025 10:22, Norman Wells wrote:
What's the problem? It's only a matter of information being passed
to whoever can amend the live mapping, and road closed notification
being applied on the maps..
The "problem" is that you are unable or unwilling to detail the
process by which "the information" is "passed to whoever can amend
the live mapping, and road closure notification being applied on the
maps."
For one thing, (as detailed in a parallel post), there are numerous
systems at play, rather than a single all-encompassing system.
Then they all should be notified, and those who wish to remain
competitive will act.
I've explained in a post elsewhere in the thread how Google, Apple and
Garmin aggregate user data to determine delays in the proposed routing.
Your task, should you choose to accept it, is to map onto the existing
model precisely how "they should all be notified" and how they then
action said "road closure notification being applied on the maps."
Over to you, Norman...
Additionally, there isn't some entity within each of those systems
with responsibility to "amend the live mapping".
Then perhaps, if they want to remain relevant and commercially viable
in the coming age of autonomous vehicles, they will see it as
commercially important to develop such a facility if they don't have
it already. It is surely not beyond the wit of man despite all the
pettifogging issues you mention.
How many notifications are there likely to be in a 24 hour period? Is
one person per receiving company on a shift basis sufficient or does it
need multiple people? If multiple people, how many? What happens if HQ try to contact them and they're away from their desk?
How many people are required at "HQ" to receive the notifications from officers "in the field", as it were? Similarly, what happens if HQ do
not answer the call in a timely manner (for all values of "timely manner")?
Are the same people in HQ that receive the notifications passing them on
to the live mapping companies or does that need a separate team? If so, same questions for this team.
For all of the above, what happens in the event of sickness, holiday, bereavement, maternity absence, etc.?
I recommend that you spend time learning how Google, Apple and Garmin
(to name but three) deal with congestion and routing around it and
you will be well on the way to enlightenment.
They act on information received. Information about road blockages is
just more information that should be easy to integrate.
It *should* be easy to integrate, but you've got no idea how the current system works, no detailed specification for your proposed system and no
idea how the two systems will relate to each other.
But you can state with absolute certainty and confidence that it "should
be easy to integrate".
Until then, you see this as a simple solution to a simple problem
because you do not understand the problem and therefore have no idea
of the complexities involved and how your proposed solution maps onto
these.
No, it's because I don't get bogged down by trivia and 'the way it's
always been done'.
Blinding oneself, deliberately or otherwise, to the failings in one's arguments is how one loses one's proverbial shirt when the project
commences.
If it's not done, we're no worse off than we are now. Huge delays,
whether of manually driven vehicles or autonomous ones.
I invite you to watch the in-car videos of Waymo Driver in action
when it goes wrong. If you think that leaves us "no worse off than
we are now" then I must disagree with you in the strongest possible
terms.
And these are Level 4 AVs with a baby-sitter back at the control
centre and a human driver on standby no more than 5-6 miles away to
intervene when things go wrong, which they do all too frequently.
I was talking, as was very clear, about what happens if mapping
concerns do not act on road closure information received, which has
nothing to do with what you say.
NormCars (TM) have been designed without AI. Therefore they cannot read
"Road Closed" signs or observe blockages in real time. What are they to
do in the time between the blockage occurring and receiving notification
of it, assuming there's a delay of no more than sixty seconds at each
stage of the process?
(Officer arrives on scene and erects roadblock (T), Officer notifies HQ
of blockage (T+1 minute), HQ notifies mapping companies in alphabetic
order of blockage so first is Apple (T+2 minutes), HQ notifies Garmin of blockage (T+3 minutes), HQ notifies Google of blockage (T+4 minutes)...
Google updates blockage data onto system (T+5 minutes).
Assume NormCars (TM) have TomTom navigation technology. How long after
the roadblock being erected does the car receive notification of the
blockage and what is it doing in the time between the roadblock being
erected and being notified of it?
On 10/02/2025 19:21, Simon Parker wrote:
How many notifications are there likely to be in a 24 hour period? Is
one person per receiving company on a shift basis sufficient or does it
need multiple people? If multiple people, how many? What happens if HQ
try to contact them and they're away from their desk?
If people are going to be such a problem, they can be replaced with
automated systems, which will make the whole process even cheaper than
virtually free.
On 10/02/2025 16:28, Mark Goodge wrote:
On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 09/02/2025 21:42, Mark Goodge wrote:
What should an autonomous vehicle do if it has a data supply (ie, it's not
in a mobile notspot) but its data disagrees with signage on the ground?
It will presumably default to the least dangerous.
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people?
On Mon, 10 Feb 2025 23:54:15 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 10/02/2025 19:21, Simon Parker wrote:
How many notifications are there likely to be in a 24 hour period? Is
one person per receiving company on a shift basis sufficient or does it
need multiple people? If multiple people, how many? What happens if HQ
try to contact them and they're away from their desk?
If people are going to be such a problem, they can be replaced with
automated systems, which will make the whole process even cheaper than
virtually free.
When I asked you, in Message-ID <mrjbqjhn7vfpkmn7mvodnaenrslr6s23la@4ax.com> for your specification for a notification system, you responded, in Message-ID <m0m4prF61neU3@mid.individual.net>, with one that is essentially manual in nature. Given that you now seem to have agreed with Simon that an automated system would be better, maybe you could give us your specification for an automated system.
Again, I'm not asking for code samples. Just the system architecture.
On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 10/02/2025 16:28, Mark Goodge wrote:
On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 09/02/2025 21:42, Mark Goodge wrote:
What should an autonomous vehicle do if it has a data supply (ie, it's not
in a mobile notspot) but its data disagrees with signage on the ground?
It will presumably default to the least dangerous.
How will it do that? How will it know which is the least dangerous?
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people?
Then you're telling us that you don't, actually, think that level 5 autonomy will ever be achieved. That's quite a turnaround from your previous argument that getting there is trivial.
On 11/02/2025 09:19, Mark Goodge wrote:
Again, I'm not asking for code samples. Just the system architecture.
Ideally, police officer at the scene presses big red button on computer
marked 'Accident blockage'. That sends a message giving the location to
the mapping companies and updates their maps instantaneously and
automatically.
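A rough sketch of that flow, to make concrete what is being proposed: the officer's device takes a GPS fix, builds one closure event, and fans the same message out to every mapping company at once. All of the names and fields below are hypothetical placeholders, not any real agency's or mapping provider's API:

```python
import json
import time

# Hypothetical providers to notify; a real system would hold an
# authenticated endpoint and an agreed schema for each of them.
PROVIDERS = ["apple", "garmin", "google", "tomtom"]

def closure_event(lat, lon, reason="accident"):
    """Build one road-closure event from the device's GPS fix."""
    return {"type": "road_closed", "lat": lat, "lon": lon,
            "reason": reason, "reported_at": time.time()}

def broadcast(event, send):
    """Send the same event to every provider in one pass, rather than
    ringing each company in turn.  `send` stands in for whatever
    transport is used (HTTPS POST, police data network, message queue)."""
    payload = json.dumps(event)
    return {p: send(p, payload) for p in PROVIDERS}

# Dummy transport, just to show the shape of the exchange.
sent = {}
def fake_send(provider, payload):
    sent[provider] = payload
    return "ok"

results = broadcast(closure_event(52.2053, 0.1218), fake_send)
```

Whether the delivery is "instantaneous" in practice of course depends on the networks involved, which is exactly the point under dispute.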
On 10/02/2025 16:28, Mark Goodge wrote:
On Sun, 9 Feb 2025 22:13:41 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 09/02/2025 21:42, Mark Goodge wrote:
On Sat, 8 Feb 2025 15:16:37 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 08/02/2025 15:00, Mark Goodge wrote:
What would you expect an autonomous vehicle to do in locations that are part
of the 5%?
Default to some programmed safe level or take inputs from other sensors
and data sources of course.
As, indeed, I have been telling you, all the time you've been insisting that
an autonomous vehicle will always be able to trust its data supply.
As of course it should. It's best if it can. What I said only applies
to very few in remote areas where it may unfortunately be necessary.
What should an autonomous vehicle do if it has a data supply (ie, it's not
in a mobile notspot) but its data disagrees with signage on the ground?
It will presumably default to the least dangerous.
I never claimed 95% or so was a glass completely full.
But it's certainly not half empty.
It's actually around 91% at the moment, apparently. The target is 95%. But,
of course, you can't have level 5 autonomy until you can cope with 100% of
the roads, even those which are outwith the 95% mobile data coverage.
You can still have total autonomy in areas that are covered.
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people? If there are notspots, AVs can be
told where they are and programmed to avoid them or to refuse any
journey that necessarily requires driving through one.
On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people? If there are notspots, AVs can be
told where they are and programmed to avoid them or to refuse any
journey that necessarily requires driving through one.
What about a sudden failure of a base station?
On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:19, Mark Goodge wrote:
Again, I'm not asking for code samples. Just the system architecture.
Ideally, police officer at the scene presses big red button on computer
marked 'Accident blockage'. That sends a message giving the location to
the mapping companies and updates their maps instantaneously and
automatically.
How does the big red button know where the blockage is?
How is that data
transmitted to the central server, and then to the mapping databases?
That's what I mean by system architecture. Just saying "It sends a message" is like saying "Wave a magic wand". You must, surely, have some idea of how it would work at a technical level. Unless you can explain how that would work, your opinion is not particularly useful.
On 11/02/2025 09:22, Mark Goodge wrote:
On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people?
Then you're telling us that you don't, actually, think that level 5 autonomy
will ever be achieved. That's quite a turnaround from your previous argument
that getting there is trivial.
Level 5, if I understand it correctly, means that the vehicles are fully
automated and can drive anywhere and in any conditions without human input.
Until that includes Chad and the moon, of course that can't be achieved.
It is centuries away, and is a rather stupid and pointless definition
if taken literally.
But I don't see why you can't have fully automated
vehicles that can drive anywhere within an area where there is
sufficient data available to ensure its and others' safety. And such
areas need not be worldwide or even nationwide.
On 11/02/2025 11:04, Roger Hayter wrote:
On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:
What about a sudden failure of a base station?
Or a tsunami, volcanic extinction event, nuclear attack, the sun
exploding, or even condom failure?
I don't know. But then again, I doubt if manual drivers do either.
From my POV, the major difference is that I lost network service at home
last week for a day because of a base station failure, but none of the
others happened to me, nor are they likely to.
On 11/02/2025 11:03, Mark Goodge wrote:
On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:19, Mark Goodge wrote:
Again, I'm not asking for code samples. Just the system architecture.
Ideally, police officer at the scene presses big red button on computer
marked 'Accident blockage'. That sends a message giving the location to
the mapping companies and updates their maps instantaneously and
automatically.
How does the big red button know where the blockage is?
I believe as a twinkle in someone's eye, if not yet in reality, there is
a thing provisionally called a global positioning system based, I think,
on satellites or some such preposterous idea.
How is that data
transmitted to the central server, and then to the mapping databases?
Over the ether by 'communicators' a bit like they have on Star Trek.
Again, I know it sounds far-fetched, but a chap down the pub assured me
it was theoretically possible, and he was very plausible.
That's what I mean by system architecture. Just saying "It sends a message"
is like saying "Wave a magic wand". You must, surely, have some idea of how
it would work at a technical level. Unless you can explain how that would
work, your opinion is not particularly useful.
It certainly isn't impossible. That being the case, I have the vision,
others just need to assemble the nuts and bolts.
On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:22, Mark Goodge wrote:
On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people?
Then you're telling us that you don't, actually, think that level 5 autonomy
will ever be achieved. That's quite a turnaround from your previous argument
that getting there is trivial.
Level 5, if I understand it correctly, means that the vehicles are fully
automated and can drive anywhere and in any conditions without human input.
Yes, it does. Or, to be more precise, it means being able to go anywhere a human driver in an otherwise unmodified street legal vehicle would be able
to go. It doesn't require the autonomous vehicle to be able to do things
that a normal car driven by a human wouldn't. So it doesn't need to be able to climb Ben Nevis, or participate in a Formula 1 race. But it does need to be able to handle New York, London, Paris and Wyre Piddle. And Còig Peighinnean Nis.
Until that includes Chad and the moon, of course that can't be achieved.
It doesn't need to include the moon.
It is centuries away, and is a rather stupid and pointless definition
if taken literally.
According to autonomous vehicle developers, we should be looking at seeing level 5 in testing, at least, in about a decade's time. Obviously, getting from testing to production is a different matter, and there may well be several issues which need to be ironed out along the way. But level 5 most definitely is the goal.
But I don't see why you can't have fully automated
vehicles that can drive anywhere within an area where there is
sufficient data available to ensure its and others' safety. And such
areas need not be worldwide or even nationwide.
It's reasonable to accept that even level 5 will require national differences, and hence won't necessarily be available in all territories simultaneously.
On 11/02/2025 11:24, Mark Goodge wrote:
On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:22, Mark Goodge wrote:
On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people?
Then you're telling us that you don't, actually, think that level 5 autonomy
will ever be achieved. That's quite a turnaround from your previous argument
that getting there is trivial.
Level 5, if I understand it correctly, means that the vehicles are fully
automated and can drive anywhere and in any conditions without human input.
Yes, it does. Or, to be more precise, it means being able to go anywhere a
human driver in an otherwise unmodified street legal vehicle would be able
to go. It doesn't require the autonomous vehicle to be able to do things
that a normal car driven by a human wouldn't. So it doesn't need to be able
to climb Ben Nevis, or participate in a Formula 1 race. But it does need to
be able to handle New York, London, Paris and Wyre Piddle. And Còig
Peighinnean Nis.
Until that includes Chad and the moon, of course that can't be achieved.
It doesn't need to include the moon.
But it does need to include Chad?
It's reasonable to accept that even level 5 will require national
differences, and hence won't necessarily be available in all territories
simultaneously.
Why then mention New York, London and Paris in the same breath, as you
did above?
On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 11:03, Mark Goodge wrote:
On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:19, Mark Goodge wrote:
Again, I'm not asking for code samples. Just the system architecture.
Ideally, police officer at the scene presses big red button on computer
marked 'Accident blockage'. That sends a message giving the location to
the mapping companies and updates their maps instantaneously and
automatically.
How does the big red button know where the blockage is?
I believe as a twinkle in someone's eye, if not yet in reality, there is
a thing provisionally called a global positioning system based, I think,
on satellites or some such preposterous idea.
So the police officer has to be physically present at the blockage when they press the button. What about if they need to report it from a different location?
How is that data
transmitted to the central server, and then to the mapping databases?
Over the ether by 'communicators' a bit like they have on Star Trek.
Again, I know it sounds far-fetched, but a chap down the pub assured me
it was theoretically possible, and he was very plausible.
What technology do those communicators use?
Does the central database do
something like make an automated phone call to all the mapping databases? Or something different? If so, what?
That's what I mean by system architecture. Just saying "It sends a message"
is like saying "Wave a magic wand". You must, surely, have some idea of how
it would work at a technical level. Unless you can explain how that would
work, your opinion is not particularly useful.
It certainly isn't impossible. That being the case, I have the vision,
others just need to assemble the nuts and bolts.
Vision isn't knowing that something can be done. Vision is knowing how it
can be done.
On 11 Feb 2025 11:04:26 GMT, Roger Hayter <roger@hayter.org> wrote:
On 11 Feb 2025 at 00:03:34 GMT, "Norman Wells" <hex@unseen.ac.am> wrote:
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people? If there are notspots, AVs can be
told where they are and programmed to avoid them or to refuse any
journey that necessarily requires driving through one.
What about a sudden failure of a base station?
A level 5 autonomous vehicle would cope in much the same way that a human would under the same or similar circumstances (eg, loss of mobile signal, failure of a sat-nav unit). That is, infer a "best effort" route from
context (eg, road signs and markings as well as the physical condition of
the road), offline mapping, and, where possible and appropriate, alternative data sources (such as asking for directions).
Of course, an autonomous vehicle could end up hopelessly lost under such circumstances. But then, so could a human. And an autonomous vehicle would, at least, not suffer from inability to read a map. Provided its offline map data was updated reasonably often it could probably make the majority of its journeys without actually needing any live mapping data, although that would be suboptimal as it would mean it wouldn't be able to avoid congestion and pre-emptively avoid closures by taking a different route from much earlier.
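That fallback ordering can be sketched in a few lines. The source names and the staleness threshold here are purely illustrative, not any actual AV stack's logic:

```python
def route_sources(live_data_ok, offline_map_age_days, max_stale_days=30):
    """Order of data sources a vehicle might consult for routing.
    Live mapping is preferred (congestion, pre-emptive closure
    avoidance); a reasonably fresh offline map comes next; road signs
    and markings read by the vehicle's own sensors are the last resort."""
    sources = []
    if live_data_ok:
        sources.append("live_mapping")
    if offline_map_age_days <= max_stale_days:
        sources.append("offline_map")
    sources.append("signs_and_markings")  # always available on-board
    return sources

# With no data supply and a badly stale map, only perception remains.
print(route_sources(live_data_ok=False, offline_map_age_days=45))
# -> ['signs_and_markings']
```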
Norman's problem, though, is that he doesn't think level 5 autonomy is possible at all, at least not for centuries
but he does think that level 4
can be good enough for that not to matter in practice. I think he is wrong
on both of those counts.
On Tue, 11 Feb 2025 13:33:42 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 11:24, Mark Goodge wrote:
On Tue, 11 Feb 2025 09:43:31 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:22, Mark Goodge wrote:
On Tue, 11 Feb 2025 00:03:34 +0000, Norman Wells <hex@unseen.ac.am> wrote:
You wouldn't expect an autonomous vehicle to work anywhere at all, like
in Chad for example, so why everywhere even in Britain where there may
not in fact be any roads or people?
Then you're telling us that you don't, actually, think that level 5 autonomy
will ever be achieved. That's quite a turnaround from your previous argument
that getting there is trivial.
Level 5, if I understand it correctly, means that the vehicles are fully
automated and can drive anywhere and in any conditions without human input.
Yes, it does. Or, to be more precise, it means being able to go anywhere a
human driver in an otherwise unmodified street legal vehicle would be able
to go. It doesn't require the autonomous vehicle to be able to do things
that a normal car driven by a human wouldn't. So it doesn't need to be able
to climb Ben Nevis, or participate in a Formula 1 race. But it does need to
be able to handle New York, London, Paris and Wyre Piddle. And Còig
Peighinnean Nis.
Until that includes Chad and the moon, of course that can't be achieved.
It doesn't need to include the moon.
But it does need to include Chad?
It needs to include the areas where humans can drive street-legal vehicles
in Chad, yes. I don't see why (other than barely disguised racism) you seem
to think that's difficult. Chad has a road network just like any other
country.
Humans use those roads without any difficulties. A level 5
autonomous vehicle will be able to do so as well, so long as it meets Chad's legal requirements.
I'm not suggesting that level 5 autonomy needs to be able to handle challenging off-road conditions. Most human drivers - and most street-legal normal cars - would struggle with that, too. So you're not going to see them crossing the Sahara off-road, and you won't find them in Antarctica either. But anywhere that has a public highway network and a defined legal framework for the vehicles which use it will be amenable to level 5 autonomy.
It's reasonable to accept that even level 5 will require national
differences, and hence won't necessarily be available in all territories
simultaneously.
Why then mention New York, London and Paris in the same breath, as you
did above?
Because the basic technology will be the same everywhere. It's the legal framework which will be different.
Not every country will, initially, be
willing to allow level 5 autonomous vehicles to use its roads. Not because they are technically incapable of it, but because that government has, for whatever reason, decided not to permit them. And even once it is permitted, there may be incompatibilities between the various equivalents of AI construction and use regulations which make it impractical to have cross-border travel with autonomous vehicles. And in some countries, particularly those with protectionist regulatory regimes, autonomous
vehicles from certain sources may be prohibited. I suspect you're not going to see American autonomous vehicles on the streets of North Korea, even once they're common on the streets of North Dakota. But that won't be because they're technically incapable of working there. It will simply be because they're not allowed there.
On 10/02/2025 19:21, Simon Parker wrote:
NormCars (TM) have been designed without AI. Therefore they cannot read
"Road Closed" signs or observe blockages in real time. What are they to
do in the time between the blockage occurring and receiving notification
of it assuming there's a delay of no more than sixty seconds at each
stage of the process?
(Officer arrives on scene and erects roadblock (T), Officer notifies HQ
of blockage (T+1 minute), HQ notifies mapping companies in alphabetic
order of blockage so first is Apple (T+2 minutes), HQ notifies Garmin of
blockage (T+3 minutes), HQ notifies Google of blockage (T+4 minutes)...
Google updates blockage data onto system (T+5 minutes).
That's still not bad in absolute terms, but with instantaneous and
simultaneous communication already being possible and widely-used,
there's no need for any of this nonsense. The whole process could be
automated, and would be better if it were.
It could even be possible for the officer on the spot to put a block on
all the maps instantaneously by a simple key press.
You just have to have the vision to see it.
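The latency difference between the sequential phone-tree worked through above and a single automated broadcast is easy to put numbers on. The per-step figures below simply mirror the sixty-second-per-stage example; they are illustrative, not measurements:

```python
COMPANIES = ["Apple", "Garmin", "Google", "TomTom"]

def sequential_delay(company, officer_to_hq=60, per_company=60):
    """Seconds until `company` is told, if HQ notifies one company
    per minute in alphabetical order (as in the worked example)."""
    position = sorted(COMPANIES).index(company) + 1
    return officer_to_hq + position * per_company

def broadcast_delay(network_latency=1):
    """Seconds until *every* company is told by one automated message
    sent the moment the officer presses the button."""
    return network_latency

print(sequential_delay("Apple"))   # 120 seconds, i.e. T+2 minutes
print(sequential_delay("Google"))  # 240 seconds, i.e. T+4 minutes
print(broadcast_delay())           # ~1 second, for all companies at once
```

The broadcast figure assumes the message actually gets through first time, which is the contested part of the argument.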
Assume NormCars (TM) have TomTom navigation technology. How long after
the roadblock being erected does the car receive notification of the
blockage and what is it doing in the time between the roadblock being
erected and being notified of it?
According to you, it will be coming to an emergency stop in the fast
lane of a motorway somewhere.
But it won't have TomTom navigation technology if other competitive
systems are better.
On 11/02/2025 12:50, Mark Goodge wrote:
On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 11:03, Mark Goodge wrote:
On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:19, Mark Goodge wrote:
Again, I'm not asking for code samples. Just the system architecture.
Ideally, police officer at the scene presses big red button on computer
marked 'Accident blockage'. That sends a message giving the location to
the mapping companies and updates their maps instantaneously and
automatically.
How does the big red button know where the blockage is?
I believe as a twinkle in someone's eye, if not yet in reality, there is
a thing provisionally called a global positioning system based, I think,
on satellites or some such preposterous idea.
So the police officer has to be physically present at the blockage when they
press the button. What about if they need to report it from a different
location?
Then he doesn't press the first red button, but the second one that says
press here if the blockage is somewhere else, whereupon he will be
required to input where exactly it is.
Over the ether by 'communicators' a bit like they have on Star Trek.
Again, I know it sounds far-fetched, but a chap down the pub assured me
it was theoretically possible, and he was very plausible.
What technology do those communicators use?
Either the police communications network or the ordinary mobile phone
network of course.
It certainly isn't impossible. That being the case, I have the vision,
others just need to assemble the nuts and bolts.
Vision isn't knowing that something can be done. Vision is knowing how it
can be done.
No. There are planners with an overall vision, there are architects who
design the realisation of that vision, and there are builders who put
the bricks in place.
Planners do not need to be either architects or builders.
On 11/02/2025 14:15, Mark Goodge wrote:
It needs to include the areas where humans can drive street-legal vehicles
in Chad, yes. I don't see why (other than barely disguised racism) you seem
to think that's difficult. Chad has a road network just like any other
country.
Yes, but I doubt if it has the mapping, or any speed limit databases,
quite possibly not even any signage, lane markings, junction priorities
etc etc etc.
On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 12:50, Mark Goodge wrote:
On Tue, 11 Feb 2025 11:25:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 11:03, Mark Goodge wrote:
On Tue, 11 Feb 2025 09:53:12 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 09:19, Mark Goodge wrote:
Again, I'm not asking for code samples. Just the system architecture.
Ideally, police officer at the scene presses big red button on computer
marked 'Accident blockage'. That sends a message giving the location to
the mapping companies and updates their maps instantaneously and
automatically.
How does the big red button know where the blockage is?
I believe as a twinkle in someone's eye, if not yet in reality, there is
a thing provisionally called a global positioning system based, I think,
on satellites or some such preposterous idea.
So the police officer has to be physically present at the blockage when they
press the button. What about if they need to report it from a different
location?
Then he doesn't press the first red button, but the second one that says
press here if the blockage is somewhere else, whereupon he will be
required to input where exactly it is.
How does he input exactly where it is? How does he know the precise coordinates of where it is if he isn't at the actual location to get a GPS fix?
Over the ether by 'communicators' a bit like they have on Star Trek.
Again, I know it sounds far-fetched, but a chap down the pub assured me it was theoretically possible, and he was very plausible.
What technology do those communicators use?
Either the police communications network or the ordinary mobile phone
network of course.
So, not data links, then?
It certainly isn't impossible. That being the case, I have the vision, others just need to assemble the nuts and bolts.
Vision isn't knowing that something can be done. Vision is knowing how it can be done.
No. There are planners with an overall vision, there are architects who
design the realisation of that vision, and there are builders who put
the bricks in place.
Planners do not need to be either architects or builders.
Tell me you know nothing about planning without telling me you know nothing about planning.
On Mon, 10 Feb 2025 23:54:15 +0000, Norman Wells <hex@unseen.ac.am>
wrote in <m0vht9Fm2ueU3@mid.individual.net>:
On 10/02/2025 19:21, Simon Parker wrote:
NormCars (TM) have been designed without AI. Therefore they cannot read "Road Closed" signs or observe blockages in real time. What are they to do in the time between the blockage occurring and receiving notification of it assuming there's a delay of no more than sixty seconds at each
stage of the process?
(Officer arrives on scene and erects roadblock (T), Officer notifies HQ
of blockage (T+1 minute), HQ notifies mapping companies in alphabetic
order of blockage so first is Apple (T+2 minutes), HQ notifies Garmin of blockage (T+3 minutes), HQ notifies Google of blockage (T+4 minutes)...
Google updates blockage data onto system (T+5 minutes).
That's still not bad in absolute terms, but with instantaneous and
simultaneous communication already being possible and widely-used,
there's no need for any of this nonsense. The whole process could be
automated, and would be better if it were.
It could even be possible for the officer on the spot to put a block on
all the maps instantaneously by a simple key press.
Could you point to something that has instantaneous and simultaneous communication please. If not an actual device then the breakthrough in physics that shows that instantaneous communication is possible. Last I
heard there was an upper limit of 3*10^8 metre/sec for communication.
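The physics point is fair: nothing propagates faster than c, so strictly "instantaneous" is impossible. But a back-of-the-envelope calculation shows that over UK-scale distances the propagation delay is milliseconds; the minutes in the T+5 scenario come from the human relay steps, not the medium. The figures below are rough assumptions, not measurements:

```python
# Propagation delay is not the bottleneck in the "T+5 minutes" scenario.
C = 3e8      # speed of light in vacuum, m/s
FIBRE = 2e8  # rough signal speed in optical fibre, m/s

distance_m = 1_000_000  # ~1000 km, generous for a UK-wide link

light_delay_s = distance_m / C
fibre_delay_s = distance_m / FIBRE

print(f"vacuum: {light_delay_s * 1000:.1f} ms, fibre: {fibre_delay_s * 1000:.1f} ms")
# Milliseconds either way - five orders of magnitude below five minutes.
```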
You just have to have the vision to see it.
You need to look at the bigger picture.
The scenario where an officer has arrived to report the blockage with
his FTL communicator is a small and relatively unimportant part of the possible situations that need to be handled. Simon Parker has already
asked what happens in part of that larger picture - the time between the blockage occurring and the officer arriving. Your narrow view apparently
does not include that part of the overall situation.
Assume NormCars (TM) have TomTom navigation technology. How long after the roadblock being erected does the car receive notification of the
blockage and what is it doing in the time between the roadblock being
erected and being notified of it?
According to you, it will be coming to an emergency stop in the fast
lane of a motorway somewhere.
I have not seen that suggested as the response to not yet having received notification of a blockage.
It might be an appropriate response if there is a collision that blocks
the motorway immediately ahead of the AV.
But it won't have TomTom navigation technology if other competitive
systems are better.
If you look at the bigger picture, rerouting when notified of an
obstacle sufficiently far ahead is not one of the more significant
scenarios. No doubt it is a useful feature but it is not safety
critical, unlike other things an AV will have to do.
On Tue, 11 Feb 2025 16:04:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 14:15, Mark Goodge wrote:
It needs to include the areas where humans can drive street-legal vehicles in Chad, yes. I don't see why (other than barely disguised racism), you seem
to think that's difficult. Chad has a road network just like any other
country.
Yes, but I doubt if it has the mapping, or any speed limit data-bases,
quite possibly not even any signage, lane markings, junction priorities
etc etc etc.
But that's precisely the point. Humans manage in those conditions. So level
5 autonomy will, too.
On 11/02/2025 21:30, Mark Goodge wrote:
On Tue, 11 Feb 2025 16:04:14 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 14:15, Mark Goodge wrote:
It needs to include the areas where humans can drive street-legal vehicles in Chad, yes. I don't see why (other than barely disguised racism), you seem to think that's difficult. Chad has a road network just like any other country.
Yes, but I doubt if it has the mapping, or any speed limit data-bases,
quite possibly not even any signage, lane markings, junction priorities
etc etc etc.
But that's precisely the point. Humans manage in those conditions. So level 5 autonomy will, too.
My point is that Level 5 autonomy is simply not possible in such data-deprived places.
On 11/02/2025 21:27, Mark Goodge wrote:
On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 12:50, Mark Goodge wrote:
So the police officer has to be physically present at the blockage when they
press the button. What about if they need to report it from a different location?
Then he doesn't press the first red button, but the second one that says press here if the blockage is somewhere else, whereupon he will be
required to input where exactly it is.
How does he input exactly where it is? How does he know the precise
coordinates of where it is if he isn't at the actual location to get a GPS fix?
He knows because he's on the scene. If he isn't and doesn't know he
doesn't press either red button. It's very simple.
What technology do those communicators use?
Either the police communications network or the ordinary mobile phone
network of course.
So, not data links, then?
Why not, if they work? I don't exclude any possibilities.
On Tue, 11 Feb 2025 21:47:03 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 21:27, Mark Goodge wrote:
On Tue, 11 Feb 2025 13:42:43 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 12:50, Mark Goodge wrote:
So the police officer has to be physically present at the blockage when they
press the button. What about if they need to report it from a different location?
Then he doesn't press the first red button, but the second one that says press here if the blockage is somewhere else, whereupon he will be
required to input where exactly it is.
How does he input exactly where it is? How does he know the precise
coordinates of where it is if he isn't at the actual location to get a GPS fix?
He knows because he's on the scene. If he isn't and doesn't know he
doesn't press either red button. It's very simple.
OK, so your system doesn't account for any need for someone to ever report a blockage without being physically at the blockage at the time they report
it. That's a bit of a weakness, wouldn't you say?
What technology do those communicators use?
Either the police communications network or the ordinary mobile phone
network of course.
So, not data links, then?
Why not, if they work? I don't exclude any possibilities.
I was just wondering why they didn't seem to have occurred to you
previously.
On 06/02/2025 19:00, billy bookcase wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0k1lpFqoikU23@mid.individual.net...
AVs *DO* receive huge amounts of data and crunch that data in real time to make routing decisions. (And in context, "routing" here does not relate merely to, "Shall I turn left in 250 metres?" but includes, "Shall I move 30cm further away from the footway as there's a pedestrian there?" or "There's a cyclist crossing the junction so even though the light is green, I must wait for them to clear the junction before I proceed.")
These being examples of your "situations they have never previously
encountered " cyclists riding along the road" and bright red and green
traffic lights ?
And on public roads ?
Who'd have thought ?
Just how long did it take them to work that one out, I wonder ?
Was it in the first year ?
I can but recommend you watch the video previously linked.
As somebody claiming that current Level 4 AVs do not possess any AI and are similar to
robot vacuum cleaners
On 06/02/2025 18:52, billy bookcase wrote:
"Simon Parker" <simonparkerulm@gmail.com> wrote in message
news:m0k1ivFqoikU22@mid.individual.net...
gross snippage
AVs utilise AI every second of every trip. They are trained to deal with situations
they have never previously encountered because it is impossible to program every
possible scenario into the system.
One single example of what you mean will suffice; and will actually
demonstrate your understanding of the topic.
That is, one example of AI in action. Such as to distinguish it from
the actions of say a simple robot vacuum cleaner with certain added
features.
Every one of which, robot vacuum cleaners that is, are capable of
cleaning rooms and avoiding objects which they have never
encountered before in totally novel scenarios.
So that's just one or two sentences please, * in your own words*.
Of some action or actions - so not very long sentences or very
long words - currently being performed by certificated AV's which
could not have been accomplished by using sophisticated
automation.
I shall politely, but firmly, state that I am disinclined to acquiesce to your request
to provide said information.
Mainly because I've posted a quote from a senior official at one of the main developers
of Level 4 AVs that contradicts what you're claiming and additionally because I've
posted a link to a video from the same source demonstrating precisely what you're
asking for here.
On 12/02/2025 09:08, Mark Goodge wrote:
On Tue, 11 Feb 2025 21:47:03 +0000, Norman Wells <hex@unseen.ac.am> wrote:
On 11/02/2025 21:27, Mark Goodge wrote:
How does he input exactly where it is? How does he know the precise
coordinates of where it is if he isn't at the actual location to get a GPS fix?
He knows because he's on the scene. If he isn't and doesn't know he
doesn't press either red button. It's very simple.
OK, so your system doesn't account for any need for someone to ever report a blockage without being physically at the blockage at the time they report
it. That's a bit of a weakness, wouldn't you say?
No, as I've said elsewhere, if he's not actually at the scene, he
obviously has to know where the blockage is and communicate that.
I really don't know why you're making such heavy weather of this.
On 10/02/2025 23:54, Norman Wells wrote:
On 10/02/2025 19:21, Simon Parker wrote:
On 06/02/2025 15:32, Norman Wells wrote:
Then they all should be notified, and those who wish to remain
competitive will act.
I've explained in a post elsewhere in the thread how Google, Apple
and Garmin aggregate user data to determine delays in the proposed
routing.
Your task, should you choose to accept it, it to map onto the
existing model precisely how "they should all be notified" and how
they then action said "road closure notification being applied on the
maps."
Over to you, Norman...
If you're not claiming it's an impossible ask, then it's possible.
Not all things that are possible are also profitable.
Nor are they even commercially viable.
I have the vision of what is possible. It's for minions to implement it.
To the best of my knowledge, you are retired. You have no minions.
And I do not believe the long-suffering Mrs Wells would appreciate being referred to in such terms.
Then perhaps, if they want to remain relevant and commercially
viable in the coming age of autonomous vehicles, they will see it as
commercially important to develop such a facility if they don't have
it already. It is surely not beyond the wit of man despite all the
pettifogging issues you mention.
How many notifications are there likely to be in a 24 hour period?
Is one person per receiving company on a shift basis sufficient or
does it need multiple people? If multiple people, how many? What
happens if HQ try to contact them and they're away from their desk?
If people are going to be such a problem, they can be replaced with
automated systems, which will make the whole process even cheaper than
virtually free.
You have still failed to identify the source of the commercial contract
you have in mind that permits communication for "virtually free".
And to be clear, the system you propose where the police officer
contacts HQ to advise of the blockage and HQ in turn then contacts the relevant map and routing data companies is instead being fully automated?
How does this automated system operate? How does it deal with "hard"
and "soft" blockages?
How many people are required at "HQ" to receive the notifications
from officers "in the field", as it were? Similarly, what happens if
HQ do not answer the call in a timely manner (for all values of
"timely manner")?
Are the same people in HQ that receive the notifications passing them
on to the live mapping companies or does that need a separate team?
If so, same questions for this team.
For all of the above, what happens in the event of sickness, holiday,
bereavement, maternity absence, etc.?
All these people problems. No wonder computers will replace them.
Will they? Do you have a cite for that too?
Or this.
Google updates blockage data onto system (T+5 minutes).
That's still not bad in absolute terms,
You might describe it as "not bad in absolute terms" but that does not
help the NormCars (TM) that are unable to navigate in that 5 plus minute window, and that's assuming absolute optimal operating conditions.
You haven't said what the NormCar will be doing for that time. Could
you clarify please?
but with instantaneous and simultaneous communication already being
possible and widely-used, there's no need for any of this nonsense.
The whole process could be automated, and would be better if it were.
So now you're proposing the construction of a central, presumably
government, database into which the police officer is feeding the road
closed data with an API allowing the relevant mapping companies to
extract data from the database in real time?
Are you still describing this as "easy to integrate" and "virtually
free" because, if so, I may have some very bad news for you?
It could even be possible for the officer on the spot to put a block
on all the maps instantaneously by a simple key press.
No it couldn't. He'd need at least two buttons (one for "soft" block
and one for "hard block") and also a means of adding a block at which he isn't present. (For example he received notification from the
electricity company that a wire has come down in a storm and is blocking
the road.) Unless you are planning for the officer first to drive to
the scene so that he can add the details of the block to the database.
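The objection here is essentially that a single "big red button" under-specifies the event. A sketch of the fields such a closure report would plausibly need to capture the hard/soft distinction, remote reporting, and access exceptions discussed in this thread; every name is illustrative, not a real system's schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Severity(Enum):
    HARD = "hard"  # road impassable to all traffic
    SOFT = "soft"  # passable for some traffic, e.g. local access only

@dataclass
class ClosureReport:
    # Where the physical obstruction is (may differ from the reporter's
    # own position, e.g. a downed wire reported by the electricity company).
    blockage_lat: float
    blockage_lon: float
    severity: Severity
    # Where traffic is actually turned back; for a fallen tree these are
    # typically the ends of the road, not the trunk itself.
    roadblock_points: list = field(default_factory=list)
    # Properties still reachable between a roadblock and the obstruction.
    access_exceptions: list = field(default_factory=list)
    # Set when the report is filed away from the scene (no GPS fix).
    reported_remotely: bool = False
    source: Optional[str] = None  # e.g. "officer on scene"
```

Even this simplified record already needs more than two buttons' worth of input, which is the point being made about "trivially easy" automation.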
You just have to have the vision to see it.
Blinkers to ignore all the inherent problems also seem to be mandatory too.
I would suggest that in the event of a NormCar encountering an
unexpected roadblock on its journey, it will have little alternative but
to resort to its "failsafe default" until further and better information
is available.
On 07/02/2025 11:28, Norman Wells wrote:
But Level 5 is not the criterion for being allowed on UK roads under
the Automated Vehicles Act, which is what probably matters more to most.
No, being able to get into an AV and using it as one would a taxi "is
what probably matters more to most".
On 07/02/2025 06:40, J Newman wrote:
Q: Can you come up with a driving condition for which there are no rules
for, and can you say what you would do?
A:
Scenario:
You're driving at night in a rural area with no streetlights, and
suddenly, a massive swarm of insects (e.g., locusts or mayflies)
completely covers your windshield, making visibility nearly zero. Your
wipers and washer fluid are ineffective because the sheer number of
insects is too great.
What I Would Do:
Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a
rear-end collision if there's a vehicle behind me.
Interesting that the LLM mentions this. There are two cases being investigated in America where AVs have done precisely this causing
serious rear-end collisions.
Reduce Speed Gradually – I'd ease off the accelerator and slowly
decelerate to maintain control.
Turn on Hazard Lights – This alerts other drivers to my reduced speed
and possible erratic movement.
Find a Safe Stopping Point – If I can, I'd try to pull over to a safe
location like the shoulder, a rest area, or a side road.
Use Alternate Visibility Methods – If my windshield is completely
covered, I might open my side window and stick my head out carefully (if
safe) to get some visibility.
I love that the LLM speaks as if it is a human. It doesn't have a head
to stick out of the window, carefully or otherwise - nor do the AVs for
that matter.
Assess and Clear the Windshield – Once stopped, I'd use a squeegee, rag,
or even a piece of clothing to remove the insects.
Report the Hazard if Needed – If it's a massive swarm affecting multiple
drivers, I might call local authorities to warn others.
This is a situation that isn't explicitly covered by standard driving laws, but the best approach follows general defensive driving principles – stay calm, maintain control, and prioritize visibility and safety.
Would you have done something differently?
Outputting the text and putting it into action in an AV are completely separate issues and I do not accept that it would be "trivially easy" to program all of the above into an AV.
On Wed, 12 Feb 2025 13:10:21 +0000, Simon Parker <simonparkerulm@gmail.com> wrote:
On 07/02/2025 06:40, J Newman wrote:
Q: Can you come up with a driving condition for which there are no rules for, and can you say what you would do?
A:
Scenario:
You're driving at night in a rural area with no streetlights, and
suddenly, a massive swarm of insects (e.g., locusts or mayflies)
completely covers your windshield, making visibility nearly zero. Your
wipers and washer fluid are ineffective because the sheer number of
insects is too great.
What I Would Do:
Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a rear-end collision if there's a vehicle behind me.
Interesting that the LLM mentions this. There are two cases being
investigated in America where AVs have done precisely this causing
serious rear-end collisions.
Thus demonstrating that ChatGPT, at least, is a little more advanced :-)
Reduce Speed Gradually – I’d ease off the accelerator and slowly
decelerate to maintain control.
Turn on Hazard Lights – This alerts other drivers to my reduced speed
and possible erratic movement.
Find a Safe Stopping Point – If I can, I'd try to pull over to a safe location like the shoulder, a rest area, or a side road.
Use Alternate Visibility Methods – If my windshield is completely
covered, I might open my side window and stick my head out carefully (if safe) to get some visibility.
I love that the LLM speaks as if it is a human. It doesn't have a head
to stick out of the window, carefully or otherwise - nor do the AVs for
that matter.
I think the AI is assuming that it's a physical robot, as is commonly depicted in science fiction. That's not entirely unreasonable given the premise of the question, the implication is that the AI is "driving" the car rather than *being* the car. And that would also be consistent with much of the material that it's been trained on. ChatGPT is familiar with the work of Isaac Asimov.
On 12/02/2025 15:11, Mark Goodge wrote:
On Wed, 12 Feb 2025 13:10:21 +0000, Simon Parker
<simonparkerulm@gmail.com> wrote:
On 07/02/2025 06:40, J Newman wrote:
Q: Can you come up with a driving condition for which there are no
rules for, and can you say what you would do?
A:
Scenario:
You're driving at night in a rural area with no streetlights, and
suddenly, a massive swarm of insects (e.g., locusts or mayflies)
completely covers your windshield, making visibility nearly zero.
Your wipers and washer fluid are ineffective because the sheer number
of insects is too great.
What I Would Do:
Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a rear-end collision if there's a vehicle behind me.
Interesting that the LLM mentions this. There are two cases being
investigated in America where AVs have done precisely this causing
serious rear-end collisions.
Thus demonstrating that ChatGPT, at least, is a little more advanced
:-)
Reduce Speed Gradually – I’d ease off the accelerator and slowly
decelerate to maintain control.
Turn on Hazard Lights – This alerts other drivers to my reduced speed and possible erratic movement.
Find a Safe Stopping Point – If I can, I'd try to pull over to a safe location like the shoulder, a rest area, or a side road.
Use Alternate Visibility Methods – If my windshield is completely
covered, I might open my side window and stick my head out carefully
(if safe) to get some visibility.
I love that the LLM speaks as if it is a human. It doesn't have a
head to stick out of the window, carefully or otherwise - nor do the
AVs for that matter.
I think the AI is assuming that it's a physical robot, as is commonly
depicted in science fiction. That's not entirely unreasonable given the
premise of the question, the implication is that the AI is "driving"
the car rather than *being* the car. And that would also be consistent
with much of the material that it's been trained on. ChatGPT is
familiar with the work of Isaac Asimov.
Evidently the AI (all AIs?) is being trained to be deceitful.
By pretending that this is the work of a person, its owners hope that it
can "pass" for human.
I cannot see this ending well (for us).
On 07/02/2025 06:40, J Newman wrote:
On 03/02/2025 17:43, Mark Goodge wrote:
On Mon, 3 Feb 2025 12:35:57 +0000, Norman Wells <hex@unseen.ac.am>
wrote:
AI is not part of autonomous vehicle control, and includes none of the >>>> things Mr Harran identified above. It is simply automation
according to
pre-determined rules.
But that's precisely the problem. It's impossible to have a
predetermined
rule for every possible scenario. And dealing with scenarios that you
don't have a rule for requires intelligence.
So here's how ChatGPT responded to such a scenario. Is AI intelligent
enough to drive cars?
Q: Can you come up with a driving condition for which there are no
rules for, and can you say what you would do?
A:
Scenario:
You're driving at night in a rural area with no streetlights, and
suddenly, a massive swarm of insects (e.g., locusts or mayflies)
completely covers your windshield, making visibility nearly zero. Your
wipers and washer fluid are ineffective because the sheer number of
insects is too great.
What I Would Do:
Stay Calm and Avoid Sudden Braking – Slamming the brakes could cause a
rear-end collision if there's a vehicle behind me.
Interesting that the LLM mentions this. There are two cases being investigated in America where AVs have done precisely this causing
serious rear-end collisions.
Reduce Speed Gradually – I’d ease off the accelerator and slowly
decelerate to maintain control.
Turn on Hazard Lights – This alerts other drivers to my reduced speed
and possible erratic movement.
Find a Safe Stopping Point – If I can, I’d try to pull over to a safe
location like the shoulder, a rest area, or a side road.
Use Alternate Visibility Methods – If my windshield is completely
covered, I might open my side window and stick my head out carefully
(if safe) to get some visibility.
I love that the LLM speaks as if it is a human. It doesn't have a head
to stick out of the window, carefully or otherwise - nor do the AVs for
that matter.
Assess and Clear the Windshield – Once stopped, I’d use a squeegee,
rag, or even a piece of clothing to remove the insects.
Report the Hazard if Needed – If it's a massive swarm affecting
multiple drivers, I might call local authorities to warn others.
This is a situation that isn’t explicitly covered by standard driving
laws, but the best approach follows general defensive driving
principles – stay calm, maintain control, and prioritize visibility and
safety.
Would you have done something differently?
Outputting the text and putting it into action in an AV are completely separate issues and I do not accept that it would be "trivially easy" to program all of the above into an AV.
On 12/02/2025 14:51, Norman Wells wrote:
On 12/02/2025 13:07, Simon Parker wrote:
On 10/02/2025 23:54, Norman Wells wrote:
If you're not claiming it's an impossible ask, then it's possible.
Not all things that are possible are also profitable.
Nor are they even commercially viable.
Same thing.
No they're not. Profitability is the here and now, commercial viability
is over the lifecycle of the product / service.
YouTube was not profitable for years but remained commercially viable.
But it's not for me to decide what may be profitable.
NormTech (TM) has backers with deep pockets ready, willing and able to
soak up nine to ten figure losses? No? Thought not.
I have the vision of what is possible. It's for minions to
implement it.
To the best of my knowledge, you are retired. You have no minions.
You have absolutely no knowledge of either actually.
Are you denying that you are retired? Do you deny that, as a retired person, at present you have no, (and I'm going to use the business
term), "direct reports", (as I find your use of the term minions disparaging)?
And I do not believe the long-suffering Mrs Wells would appreciate
being referred to in such terms.
Can you quote where I did please?
You are retired.
As such you have no minions. The only person over
whom you hold a modicum of sway is Mrs Wells. If she is not your
minion, perhaps you could outline who the minions are that you envisage realising your dream.
If people are going to be such a problem, they can be replaced with
automated systems, which will make the whole process even cheaper
than virtually free.
You have still failed to identify the source of the commercial
contract you have in mind that permits communication for "virtually
free".
Still no mention of a commercial contract that permits communication for "virtually free". Never mind one that will scale to the necessary levels.
And to be clear, the system you propose where the police officer
contacts HQ to advise of the blockage and HQ in turn then contacts
the relevant map and routing data companies is instead being fully
automated?
How does this automated system operate? How does it deal with "hard"
and "soft" blockages?
If you mean by 'soft blockage' that the road is still passable, it is
not such a blockage.
An example of a "soft blockage" may be a tree fallen in the road.
It may surprise you to learn that the police / HA do not typically station
a vehicle immediately either side of the downed tree permitting their officers to wave to each other across the trunk. Rather they position
them at the respective ends of the road to prevent vehicles entering the road. However, properties may be accessible between the road block and
the physical blockage. Vehicles aiming to visit those properties may
still be permitted to pass the road block despite being unable to pass
the physical blockage. The majority of vehicles, but not all, will be turned away. Hence, it is a soft blockage. Tell me, in these circumstances, which button does the officer press to indicate where the physical blockage is located, where the roadblocks are at the respective
ends of the road and which button or buttons allow him to detail to
which properties AVs may still travel between the road block and
physical blockage? How is all of this "instead being fully automated"?
And that's without even getting into "passable with care" 'soft
blockages' where limited local traffic may be permitted, but non-local
is not.
Your "fully automated" system is still "trivially easy" is it?
All these people problems. No wonder computers will replace them.
Will they? Do you have a cite for that too?
Of course they will. It would be daft not to. You've just listed all
the advantages of computers.
Thank you for confirming that you do not have a cite or any evidence to support your claim.
[some unacknowledged, (and careless), snipping from Norman which can
only be taken as an admission that he has no idea how to integrate his proposed system with those that already exist]
Or this.
Google updates blockage data onto system (T+5 minutes).
That's still not bad in absolute terms,
You might describe it as "not bad in absolute terms" but that does
not help the NormCars (TM) that are unable to navigate in that 5 plus
minute window, and that's assuming absolute optimal operating
conditions.
You haven't said what the NormCar will be doing for that time. Could
you clarify please?
Had you read on before leaping in, you'd have seen:
but with instantaneous and simultaneous communication already being
possible and widely-used, there's no need for any of this nonsense.
The whole process could be automated, and would be better if it were.
Yes, about that "fully automated" system you have now pivoted to
proposing, I look forward to your solutions to the "trivially simple"
problem of a downed tree above and how it will be relayed to the various mapping systems without any human involvement at all.
I would suggest that in the event of a NormCar encountering an
unexpected roadblock on its journey, it will have little alternative
but to resort to its "failsafe default" until further and better
information is available.
Of course, just like a manual driver. Where did I say anything
otherwise?
Manual drivers do not and cannot "come to a halt and turn everything
off" when encountering a roadblock.
Not least because the officers
present at the roadblock will tell them to keep moving.