• Optical Inertia

    From Francois LE COAT@21:1/5 to All on Wed Feb 28 14:45:02 2024
    Hi,

    The experiment of Hernan Badino was redone. You can see it here...

    <https://www.youtube.com/watch?v=fqWdSfN9FiA> Source

    The main point of interest is that the video loops, and the result
    almost loops too:

    <https://www.youtube.com/watch?v=0ZPJmnBh03M> Reworked

    Well, Hernan Badino moves his head while walking, so the
    reconstructed trajectory does not loop perfectly at the end. But
    we can reconstruct the movement almost perfectly. We use OpenCV
    for the image processing, and POV-Ray for the 3D representation.
    We determine the dominant projective motion in the video with
    respect to a reference image, and change that reference when the
    correlation drops below 80%.

    We have a 3D inertial model of motion, that's why POV-Ray helps =)
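
    For the curious, here is a minimal sketch of that loop, in Python
    with OpenCV. The file name, the sampling stride and the RANSAC
    tolerance are my own assumptions, not taken from the actual
    program; only DIS, the projective fit and the 80% correlation
    reset come from the description above.

        import cv2
        import numpy as np

        # Sketch: dominant projective motion w.r.t. a reference image,
        # resetting the reference when correlation drops below 80%.
        cap = cv2.VideoCapture("walk.mp4")       # hypothetical file name
        dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)

        ok, frame = cap.read()
        ref = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = ref.shape
        ys, xs = np.mgrid[0:h:8, 0:w:8]          # coarse sampling grid
        grid = np.stack([xs, ys], -1).reshape(-1, 2).astype(np.float32)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = dis.calc(ref, gray, None)     # dense flow, ref -> gray
            moved = grid + flow[ys, xs].reshape(-1, 2)
            H, _ = cv2.findHomography(grid, moved, cv2.RANSAC, 3.0)
            # correlate the reference with the frame warped back onto it
            back = cv2.warpPerspective(gray, np.linalg.inv(H), (w, h))
            corr = cv2.matchTemplate(back, ref, cv2.TM_CCOEFF_NORMED)[0, 0]
            if corr < 0.8:                       # the 80% threshold
                ref = gray                       # change the reference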

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Thu Mar 28 16:15:09 2024
    Hi,

    The principle of dominant 2D motion estimation originated at
    INRIA; it is described here:

    <https://www.irisa.fr/vista/Themes/Logiciel/Motion-2D/Motion-2D.html>

    In our case, the dominant motion estimated from the approximation
    of the optical flow (DIS, Dense Inverse Search, in OpenCV) is 3D
    and projective.
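
    To make "dominant projective motion" concrete, here is a minimal
    sketch of fitting the 8 parameters of a homography to the dense
    DIS flow field by linear least squares, globally over all pixels.
    The formulation is my own (it is not the Motion-2D code); 'flow'
    is the (H, W, 2) array returned by DIS.

        import numpy as np

        # Fit x' = (p0*x + p1*y + p2) / (p6*x + p7*y + 1)
        #     y' = (p3*x + p4*y + p5) / (p6*x + p7*y + 1)
        # to the displaced positions (x', y') = (x, y) + flow(x, y).
        def dominant_projective(flow):
            h, w = flow.shape[:2]
            y, x = np.mgrid[0:h, 0:w].astype(np.float64)
            xp, yp = x + flow[..., 0], y + flow[..., 1]
            x, y, xp, yp = (a.ravel() for a in (x, y, xp, yp))
            one, zero = np.ones_like(x), np.zeros_like(x)
            # two linearized equations per pixel (DLT style)
            A = np.concatenate([
                np.stack([x, y, one, zero, zero, zero, -xp * x, -xp * y], 1),
                np.stack([zero, zero, zero, x, y, one, -yp * x, -yp * y], 1)])
            b = np.concatenate([xp, yp])
            p, *_ = np.linalg.lstsq(A, b, rcond=None)   # the 8 parameters
            return np.append(p, 1.0).reshape(3, 3)      # as a homography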

    Francois LE COAT writes:
    > The experiment of Hernan Badino was redone. [...] We have a 3D
    > inertial model of motion, that's why POV-Ray helps =)

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Wed Apr 3 18:45:04 2024
    Hi,

    Francois LE COAT writes:
    > The principle of dominant 2D motion estimation originated at
    > INRIA. [...] We have a 3D inertial model of motion, that's why
    > POV-Ray helps =)

    Three drones fly among the trees of a forest. Thanks to the
    optical flow (DIS, OpenCV) measured on successive images, the
    "temporal disparity" reveals the forest of trees (the 3rd
    dimension)...

    <https://www.youtube.com/watch?v=QP75EeFVyOI> 1st drone
    <https://www.youtube.com/watch?v=fp5Z1Nu4Hko> 2nd drone
    <https://www.youtube.com/watch?v=fLxE8iS7fPI> 3rd drone

    The interest of the forest is that the trajectories are curved,
    in order to avoid obstacles. The motion is measured thanks to a
    projective transform, and represented with <Ry,Rz,Tx,Tz> thanks
    to POV-Ray. The drone's evolution is shown in front view together
    with its camera.
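
    As a hedged sketch of how <Ry,Rz,Tx,Tz> might be extracted from
    the dominant homography H: OpenCV can decompose H given the camera
    intrinsics. The calibration matrix K, the choice among the
    returned solutions and the Euler conventions below are all my
    assumptions; the post does not specify them.

        import cv2
        import numpy as np

        K = np.array([[700.,   0., 320.],        # assumed calibration
                      [  0., 700., 240.],
                      [  0.,   0.,   1.]])
        n, Rs, Ts, _ = cv2.decomposeHomographyMat(H, K)
        R, T = Rs[0], Ts[0]   # one of up to 4 solutions; a real program
                              # would disambiguate with visibility tests
        Ry = np.degrees(np.arcsin(-R[2, 0]))            # yaw (guessed signs)
        Rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # roll (ZYX Euler)
        Tx, Tz = float(T[0, 0]), float(T[2, 0])         # up to scale
        print(f"Ry={Ry:+.1f} Rz={Rz:+.1f} Tx={Tx:+.2f} Tz={Tz:+.2f}")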

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Thu Oct 17 15:15:21 2024
    Hi,

    Francois LE COAT writes:
    > Three drones fly among the trees of a forest. [...] The drone's
    > evolution is shown in front view together with its camera.

    It is possible to perceive the relief (the depth) of a scene when
    we have at least two different viewpoints of it. Here is a new
    example, with a drone flying in the middle of a forest of trees,
    whose onboard camera's video stream we process...

    <https://www.youtube.com/watch?v=WJ20EBM3PTc>

    When the two views of the same scene are distant in space, we
    speak of "spatial disparity". In the present case, the two
    viewpoints are distant in time, and we then speak of "temporal
    disparity". The distinction is whether the two images of the same
    scene are acquired simultaneously or separated in time. In this
    case we can perceive the relief in depth with a single camera and
    its continuous video stream.
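
    To illustrate, a minimal sketch of such a "temporal disparity"
    map, reusing ref, gray, H and the DIS instance dis from the first
    sketch in this thread: once the dominant motion is warped away,
    the residual flow magnitude plays the role of a stereo disparity.

        import cv2
        import numpy as np

        def temporal_disparity(ref, gray, H, dis):
            # ref, gray: two grayscale views of the scene separated in
            # time; H: dominant homography from ref to gray.
            h, w = ref.shape
            # remove the dominant motion: only the parallax of nearby
            # objects remains to be matched
            stabilized = cv2.warpPerspective(gray, np.linalg.inv(H), (w, h))
            residual = dis.calc(ref, stabilized, None)
            return np.hypot(residual[..., 0], residual[..., 1])

        # bright = large parallax = near (the depth is known up to scale)
        d = temporal_disparity(ref, gray, H, dis)
        d8 = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imwrite("temporal_disparity.png", d8)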

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Tue Oct 29 19:00:02 2024
    Hi,

    Francois LE COAT writes:
    > When the two views of the same scene are distant in space, we
    > speak of "spatial disparity". In the present case, the two
    > viewpoints are distant in time, and we then speak of "temporal
    > disparity". [...]

    Let us recall the starting point of this thread... We redid the
    experiment of Hernan Badino, who walks with a camera on his head:
    <https://www.youtube.com/watch?v=GeVJMamDFXE>

    Hernan determines his 2D ego-motion in the x-y plane from
    corresponding interest points that persist in the video stream.
    That means he computes the projection matrix of the movement to
    deduce the translations in the ground plane. Integrated over
    time, this gives him the trajectory, as sketched below.
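
    A minimal sketch of that time integration (the variable names are
    mine, not Hernan's): each frame pair contributes a yaw increment
    and a planar step expressed in the current camera frame, which are
    accumulated in the world frame.

        import numpy as np

        def integrate(steps):
            # steps: iterable of (dRy, dTx, dTz) per frame pair, with
            # dRy in radians and the translations in arbitrary units
            x = z = yaw = 0.0
            path = [(x, z)]
            for dRy, dTx, dTz in steps:
                yaw += dRy
                # rotate the body-frame step into the world frame
                x += dTx * np.cos(yaw) - dTz * np.sin(yaw)
                z += dTx * np.sin(yaw) + dTz * np.cos(yaw)
                path.append((x, z))
            return path                          # the 2D trajectory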

    We do almost the same, but I work with OpenCV's optical flow
    rather than with interest points. And my motion model is 3D,
    yielding 8 parameters in rotation and translation that I can use
    in Persistence Of Vision (POV-Ray).

    I reconstruct the 3D movement, and I find that it yields
    "temporal disparity", that is, depth from motion.

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Thu Nov 21 16:35:01 2024
    Hi,

    Here is another result...

    Francois LE COAT writes:
    > I reconstruct the 3D movement, and I find that it yields
    > "temporal disparity", that is, depth from motion.

    An instrumented motorcycle rides on the track of a speed circuit.
    By approximating the optical flow (DIS, OpenCV) with the dominant
    projective motion, we determine the translations in the ground
    plane, plus roll and yaw; that is to say, the trajectory by the
    projective parameters (Tx,Tz,Ry,Rz).

    <https://www.youtube.com/watch?v=-QLJ2ke9mN8>

    Image data comes from the publication:

    Bastien Vincke, Pauline Michel, Abdelhafid El Ouardi, Bruno
    Larnaudie, Flavien Delgehier, Rabah Sadoun, Samir Bouaziz,
    Stéphane Espié, Sergio Rodriguez, Abderrahmane Boubezoul
    (Dec. 2024). "Real Track Experiment Dataset for Motorcycle Rider
    Behavior and Trajectory Reconstruction". Data in Brief, Vol. 57,
    111026.

    The instrumented motorcycle makes a complete lap of the track.
    The correlation threshold between successive images is set at
    90%; below it, the calculation of the projective dynamic model
    is reset.

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to All on Mon Feb 24 16:30:02 2025
    Hi,

    A WEB page was made to illustrate Monocular Depth...

    <https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/monocular_depth.html>

    A drone flies between the trees of a forest. Thanks to the
    optical flow measured on successive images, the monocular depth
    reveals the forest of trees... We take a reference image and
    measure the optical flow on two rectified images; we then change
    the reference when the inter-correlation drops below 60%. We can
    perceive the relief in depth with a single camera, over time.
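
    The rectification step, sketched under the same assumptions as the
    80% example earlier in the thread (ref, gray and the dominant
    homography H as before); only the 60% threshold comes from the
    text above:

        import cv2
        import numpy as np

        # warp the current frame onto the reference with the dominant
        # homography, so the pair is rectified before the flow (hence
        # the depth) is measured between the two images
        rectified = cv2.warpPerspective(gray, np.linalg.inv(H),
                                        ref.shape[::-1])
        corr = cv2.matchTemplate(rectified, ref, cv2.TM_CCOEFF_NORMED)[0, 0]
        if corr < 0.6:                   # inter-correlation threshold
            ref, H = gray, np.eye(3)     # take a new reference image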

    In fact, when we watch images captured by a drone, although there
    is only one camera, we often see the relief. This is particularly
    marked for the trees of a forest. The goal here is to evaluate
    this relief with a measurement of optical flow, which allows one
    image to be matched with another when they are close enough (we
    say they are "correlated").

    We have two eyes, and the methods for measuring visible relief by
    stereoscopy are well developed. Since the beginning of photography
    there have been devices like the "stereoscope", which lets you see
    the relief of two pictures naturally. It is possible to measure
    relief thanks to epipolar geometry and well-known mathematics.
    There are many measurement methods, very effective and based on
    human vision.

    When it comes to measuring relief with a single camera, the
    knowledge is less established. There are 3D cameras, called
    "RGBD", with "D" for "depth". But how do they work? Is it
    possible to improve them? What we show here does not require any
    "artificial neural network". It is a physical measurement, with a
    classic algorithm, which comes neither from A.I. nor from a big
    computer :-)

    This is about measuring monocular depth just as stereoscopic
    disparity is measured: quantifying the depth with images from a
    single camera. We see this relief naturally; the point is to
    measure it with the optical flow.

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Tue Apr 1 19:15:01 2025
    Hi,

    Francois LE COAT writes:
    > A WEB page was made to illustrate Monocular Depth...
    >
    > <https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/monocular_depth.html>
    > [...]

    Until now, drone images came from forests in France. The first images
    were obtained in the French Vosges.

    <https://www.youtube.com/watch?v=245yJJrwMQ0> Drone in the forest

    We are now seeing more and more drones in forests outside of France.
    The available image sources are diversifying...

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Thu May 15 15:00:01 2025
    Hi,

    Francois LE COAT writes:
    > We are now seeing more and more drones in forests outside of
    > France. The available image sources are diversifying...

    Here is an image sequence of a walk in the forest. The scene is
    observed by a tracking drone...

    <https://www.youtube.com/watch?v=46VWJ6-YqtY>

    The camera's movement is estimated in the images by a measurement
    of the projective dominant motion. The presence of a man in the
    image sequence does not interfere with the trajectory estimation,
    because the character occupies a non-dominant part of the field
    of view. The dominant motion corresponds to the scrolling of the
    scenery, that is, the movement of the forest relative to the
    observing camera.
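
    A sketch of why this works, with the grid/flow correspondences of
    the first example (grid and moved are as defined there): fitted
    with RANSAC, the homography follows the consensus of the flow
    field, and the pixels on the character are voted out as outliers.

        import cv2

        H, inliers = cv2.findHomography(grid, moved, cv2.RANSAC, 3.0)
        ratio = inliers.mean()   # fraction of the field explained by H
        print(f"dominant motion explains {100 * ratio:.0f}% of the flow")
        # the outlier mask (inliers == 0) localizes the moving character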

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Fri May 30 15:51:06 2025
    Hi,

    Francois LE COAT writes:
    > Here is an image sequence of a walk in the forest. The scene is
    > observed by a tracking drone... [...] The dominant motion
    > corresponds to the scrolling of the scenery.

    Here is a drone in the forest...

    <https://www.youtube.com/watch?v=h3vhlRBB9tg> Forest

    We also obtain the trajectory in space: <https://skfb.ly/pxGqL>

    It is interesting to note that this trajectory loops: the drone
    passes over the professional pilot's location, and ends up at the
    same place at the end of the video.

    This demonstrates the quality of the trajectory estimation in
    space. :-)

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Wed Jun 18 15:15:03 2025
    Hi,

    Francois LE COAT writes:
    > Here is a drone in the forest...
    >
    > <https://www.youtube.com/watch?v=h3vhlRBB9tg> Forest
    >
    > We also obtain the trajectory in space: <https://skfb.ly/pxGqL>
    > [...]

    Here's a long drone flight through a Swedish forest...

    <https://www.youtube.com/watch?v=ppW5BbDPFHc> Swedish forest

    We obtain the estimated trajectory in space: <https://skfb.ly/pxZHy>

    The image-matching algorithm does not incorporate any prior
    knowledge about what the camera is observing. This might look
    like SLAM (Simultaneous Localization And Mapping), which is
    typically a sparse method, but what is presented here is a
    global, dense method based on the optical-flow measurement
    (Dense Inverse Search, DIS).

    This is an algorithm, i.e. a numerical recipe, that uses no
    artificial neural networks. We obtain a measurement of physical
    data, filtered by a Kalman filter.
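
    As an illustration, a sketch of one such filter: a constant-
    velocity Kalman filter smoothing a single measured parameter
    (e.g. Tx), one filter per parameter. The state model and the
    noise levels are my placeholders, not the values actually used.

        import cv2
        import numpy as np

        kf = cv2.KalmanFilter(2, 1)              # state: [value, rate]
        kf.transitionMatrix = np.array([[1., 1.], [0., 1.]], np.float32)
        kf.measurementMatrix = np.array([[1., 0.]], np.float32)
        kf.processNoiseCov = 1e-4 * np.eye(2, dtype=np.float32)
        kf.measurementNoiseCov = np.array([[1e-2]], np.float32)

        def filtered(measurements):
            out = []
            for m in measurements:
                kf.predict()
                est = kf.correct(np.array([[m]], np.float32))
                out.append(float(est[0, 0]))     # the smoothed value
            return out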

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>
