The experiment from Hernan Badino was redone. You can see it here...
<https://www.youtube.com/watch?v=fqWdSfN9FiA> Source
The main point of interest is that the video loops, and the result
nearly loops too:
<https://www.youtube.com/watch?v=0ZPJmnBh03M> Reworked
Hernan Badino moves his head while walking, so the reconstructed
trajectory does not loop perfectly at the end. Still, we can
reconstruct the movement almost exactly. We use OpenCV for the image
processing, and POV-Ray for the 3D representation. We determine the
projective dominant motion in the video against a reference image,
and change that reference when the correlation drops below 80%. We
have a 3D inertial model of motion, which is why POV-Ray helps =)
The principle of dominant 2D motion originated at INRIA; it is here:
<https://www.irisa.fr/vista/Themes/Logiciel/Motion-2D/Motion-2D.html>
In our case, the dominant motion estimated from the approximation of
the optical flow (DIS, Dense Inverse Search, OpenCV) is 3D and
projective.
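
As a rough illustration of that reference-image scheme, here is a
minimal Python/OpenCV sketch, reconstructed from the description
above (not the author's code; the video file name and the sampling
grid step are my assumptions):

    import cv2
    import numpy as np

    # Dense Inverse Search optical flow, as named in the post
    dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)

    def correlation(a, b):
        # normalized cross-correlation of two same-size grayscale images
        return float(cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0])

    cap = cv2.VideoCapture("badino_walk.mp4")   # hypothetical file name
    ok, ref = cap.read()
    ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = dis.calc(ref, cur, None)
        # sample the dense flow on a grid and fit one dominant homography
        ys, xs = np.mgrid[0:cur.shape[0]:8, 0:cur.shape[1]:8]
        p_ref = np.stack([xs.ravel(), ys.ravel()], 1).astype(np.float32)
        p_cur = p_ref + flow[ys.ravel(), xs.ravel()]
        H, _ = cv2.findHomography(p_ref, p_cur, cv2.RANSAC, 3.0)
        # warp the reference onto the current frame, check the correlation
        warped = cv2.warpPerspective(ref, H, (cur.shape[1], cur.shape[0]))
        if correlation(warped, cur) < 0.8:      # the 80% threshold
            ref = cur                           # take a new reference image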
Three drones fly among the trees of a forest. Thanks to the optical
flow (DIS, OpenCV) measured on successive images, the "temporal
disparity" reveals the forest of trees (the 3rd dimension)...
<https://www.youtube.com/watch?v=QP75EeFVyOI> 1st drone
<https://www.youtube.com/watch?v=fp5Z1Nu4Hko> 2nd drone
<https://www.youtube.com/watch?v=fLxE8iS7fPI> 3rd drone
What makes the forest interesting is that the trajectories are
curved, in order to avoid obstacles. The motion is measured with a
projective transform, and represented with <Ry,Rz,Tx,Tz> thanks to
POV-Ray. The drone's motion is shown in a front view, alongside its
camera image.
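
For the <Ry,Rz,Tx,Tz> representation, one standard route (not
necessarily the exact pipeline used here) is to decompose the
dominant homography with OpenCV, given an assumed pinhole intrinsic
matrix K for the drone camera:

    import cv2
    import numpy as np

    K = np.array([[700., 0., 320.],    # hypothetical focal length and
                  [0., 700., 240.],    # principal point
                  [0., 0., 1.]])

    def ry_rz_tx_tz(H):
        # up to 4 (R, T) solutions come back; a real pipeline would
        # disambiguate with cheirality / continuity, here we take the first
        n, Rs, Ts, Ns = cv2.decomposeHomographyMat(H, K)
        R, T = Rs[0], Ts[0]
        ry = np.degrees(np.arcsin(-R[2, 0]))           # rotation about y
        rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # rotation about z
        return ry, rz, float(T[0]), float(T[2])        # <Ry,Rz,Tx,Tz>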
It is possible to perceive the relief (the depth) of a scene when we
have at least two different viewpoints of it. Here is a new example,
with a drone flying in the middle of a forest; we process the video
stream from its onboard camera...
<https://www.youtube.com/watch?v=WJ20EBM3PTc>
When the two views of the same scene are distant in space, we speak
of "spatial disparity". In the present case, the two viewpoints are
distant in time, and we then speak of "temporal disparity". The
distinction is whether the two images of the same scene are acquired
simultaneously or delayed in time. In the latter case, we can
perceive the relief in depth with a single camera and its continuous
video stream.
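
A minimal sketch of that idea, assuming a mostly translating camera
(the frame file names are hypothetical): the magnitude of the optical
flow between two time-shifted frames plays the role of a disparity
map, nearby trees moving more than distant ones:

    import cv2
    import numpy as np

    a = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
    dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
    flow = dis.calc(a, b, None)
    disp = np.linalg.norm(flow, axis=2)   # "temporal disparity", in pixels
    # large disparity = close object, so invert for a depth-like image
    depth_vis = cv2.normalize(1.0 / (disp + 1e-3), None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("monocular_depth.png", depth_vis)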
Let us recall the starting point of this thread... We redid the
experiment from Hernan Badino, who walks with a camera on his head:
<https://www.youtube.com/watch?v=GeVJMamDFXE>
Hernan determines his 2D ego-motion in the x-y plane from
corresponding interest points that persist in the video stream. That
is, he computes the projection matrix of the movement to deduce the
translations in the ground plane. Integrated over time, this gives
him the trajectory.
We do almost the same, but I work with OpenCV's optical flow rather
than interest points, and my motion model is 3D, giving 8 parameters
in rotation and translation that I can use in Persistence Of Vision.
I reconstruct the 3D movement, and I find that it yields "temporal
disparity", that is, depth from motion.
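
As an illustration of feeding such parameters to POV-Ray, here is a
hypothetical bridge of my own (not necessarily the author's): dump
the per-frame <Ry,Rz,Tx,Tz> estimates into an include file that a
scene can read back with #include:

    def write_povray_trajectory(samples, path="trajectory.inc"):
        # samples: list of (ry, rz, tx, tz), in degrees and scene units
        rows = ",\n".join("  <%f, %f, %f, %f>" % s for s in samples)
        with open(path, "w") as f:
            f.write("#declare Traj = array[%d] {\n%s\n}\n"
                    % (len(samples), rows))

    # tiny usage example with made-up values
    write_povray_trajectory([(0.1, -0.2, 0.0, 0.5), (0.2, -0.3, 0.1, 1.0)])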
A web page was made to illustrate monocular depth...
<https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/monocular_depth.html>
A drone flies between the trees of a forest. Thanks to the optical
flow measured on successive images, the monocular depth reveals the
forest of trees... We take a reference image, and the optical flow is
measured between two rectified images. We then change the reference
when the inter-correlation drops below 60%. We can perceive the
relief in depth with a single camera, over time.
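
As I read it, "rectified" here means warping one image by the
estimated dominant homography so the two views align before the
comparison. A minimal sketch of that check (the 60% threshold comes
from the text; the function names are mine):

    import cv2

    def rectify(ref, frame, H):
        # map the reference image into the current frame's geometry
        return cv2.warpPerspective(ref, H, (frame.shape[1], frame.shape[0]))

    def need_new_reference(ref_rect, frame, thresh=0.6):
        # inter-correlation between rectified reference and current frame
        c = float(cv2.matchTemplate(ref_rect, frame,
                                    cv2.TM_CCOEFF_NORMED)[0, 0])
        return c < thresh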
In fact, when we watch images captured by a drone, although there is
only one camera, we often see the relief. This is particularly marked
for trees in a forest. The goal here is to evaluate this relief with
a measurement of the optical flow, which allows one image to be
matched with another when they are close (we say they are
"correlated").
We have two eyes, and the methods for measuring visible relief by
stereoscopy are very well developed. Since the beginning of
photography, there have been devices like the "stereoscope", which
lets you see the relief in two pictures naturally. It is possible to
measure relief thanks to epipolar geometry and well-known
mathematics. There are many measurement methods, very effective and
based on human vision.
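
For contrast with the monocular case, here is the classical stereo
route in OpenCV (file names and matcher settings are assumptions),
where depth follows Z = f*B/d for focal length f, baseline B and
disparity d:

    import cv2

    L = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    R = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities with 4 fractional bits
    disp = stereo.compute(L, R).astype(float) / 16.0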
When it comes to measuring relief with a single camera, knowledge is
less established. There are 3D cameras, called "RGBD" with a "D" for
"depth". But how do they work? Is it possible to improve on them?
What we are showing here does not require any "artificial neural
network". It is a physical measurement, with a classic algorithm,
which comes neither from A.I. nor from a big computer :-)
This is about measuring monocular depth, just as stereoscopic
disparity is measured. It means quantifying the depth with images
from a single camera. We can see this relief naturally; the point is
to measure it with the optical flow.
Until now, drone images came from forests in France. The first images
were obtained in the French Vosges.
<https://www.youtube.com/watch?v=245yJJrwMQ0> Drone in the forest
We are now seeing more and more drones in forests outside of France.
The available image sources are diversifying...
Here is a sequence of images of a walk in the forest. The scene is
observed by a tracking drone...
<https://www.youtube.com/watch?v=46VWJ6-YqtY>
The camera's movement is estimated in the images using a projective
dominant-motion measurement. The presence of a man in the image
sequence does not interfere with the trajectory estimation, because
the character occupies a part of the field of view that is not
dominant. The dominant motion corresponds to the scrolling of the
scenery, that is, the movement of the forest relative to the
observing camera.
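
A small synthetic sketch of why the walking man does not corrupt the
estimate (my own illustration): fitting the homography with RANSAC
keeps only the consensus set, so flow vectors on the person fall out
as outliers:

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    # 500 flow samples: the scenery follows one dominant translation...
    p_ref = rng.uniform(0, 640, (500, 2)).astype(np.float32)
    p_cur = p_ref + np.float32([4, 1])
    # ...while 10% of the samples (the "person") move independently
    p_cur[:50] += rng.uniform(-30, 30, (50, 2)).astype(np.float32)
    H, mask = cv2.findHomography(p_ref, p_cur, cv2.RANSAC, 3.0)
    print("inlier ratio:", mask.mean())   # ~0.9: the scenery wins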
Here is a drone in the forest...
<https://www.youtube.com/watch?v=h3vhlRBB9tg> Forest
We also obtain the trajectory in space: <https://skfb.ly/pxGqL>
It is interesting to note that this trajectory loops: the drone
passes over the professional pilot's location, and ends up in the
same place at the end of the video. This is a good check on the
quality of the trajectory estimation in space. :-)
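
One simple way to put a number on that loop closure (my own metric,
not from the posts): after integrating the per-frame motions, the
accumulated transform should be near the identity:

    import numpy as np

    def loop_closure_error(H_total):
        # deviation of the accumulated homography from the identity;
        # a value near zero means the estimated trajectory really closes
        Hn = H_total / H_total[2, 2]
        return float(np.linalg.norm(Hn - np.eye(3)))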