
[Question] Implementing scale constraints for non-georeferenced datasets #737

Open
pierotofy opened this issue Apr 23, 2021 · 1 comment


@pierotofy
Contributor

Hey all ✋

I've been looking at the possibility of implementing scale constraints for datasets that have no georeferencing information (no GPS and no GCPs). The use case is that people sometimes work with images that carry no spatial information (for example, datasets such as https://github.com/pierotofy/dataset_banana) but still want to perform measurements on the results. A straightforward approach is to let OpenSfM run the reconstruction, then manually choose two points on the point cloud and apply a linear transformation based on a scaling factor.
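For reference, that manual post-scaling step could look roughly like this (a minimal numpy sketch with made-up names; `point_a` and `point_b` are the two positions picked on the point cloud and `known_distance` is the measured length between them):

```python
import numpy as np

def scale_point_cloud(points, point_a, point_b, known_distance):
    """Uniformly scale a point cloud so that the distance between two
    manually chosen points matches a known real-world distance.

    points: (N, 3) array of reconstructed point positions
    point_a, point_b: (3,) positions picked on the point cloud
    known_distance: measured distance between A and B, in meters
    """
    current = np.linalg.norm(np.asarray(point_b) - np.asarray(point_a))
    if current == 0:
        raise ValueError("Selected points coincide; cannot derive a scale")
    factor = known_distance / current
    # Scaling about the origin is enough for measurements; translate first
    # if a particular anchor point needs to stay fixed.
    return np.asarray(points) * factor
```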

I was wondering whether a different approach would be feasible: allowing a user to specify a sort of ground_control_line with two points A and B and the desired distance:


[image] [pixel_a_x] [pixel_a_y] [pixel_b_x] [pixel_b_y] [distance (meters)]

This mostly comes down to allowing a user to specify constraints in 2D rather than 3D, which can make the operation simpler.
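As a sketch of what reading such a file might look like (the `GroundControlLine` class and the parser below are hypothetical, simply mirroring the one-constraint-per-line format proposed above):

```python
from dataclasses import dataclass

@dataclass
class GroundControlLine:
    image: str        # filename of the image the pixel coordinates refer to
    pixel_a: tuple    # (x, y) of the first endpoint, in pixels
    pixel_b: tuple    # (x, y) of the second endpoint, in pixels
    distance: float   # desired distance between A and B, in meters

def parse_ground_control_lines(path):
    """Parse the proposed one-constraint-per-line text format:
    [image] [pixel_a_x] [pixel_a_y] [pixel_b_x] [pixel_b_y] [distance (meters)]
    """
    constraints = []
    with open(path) as fin:
        for raw in fin:
            raw = raw.strip()
            if not raw or raw.startswith("#"):
                continue
            image, ax, ay, bx, by, dist = raw.split()
            constraints.append(GroundControlLine(
                image=image,
                pixel_a=(float(ax), float(ay)),
                pixel_b=(float(bx), float(by)),
                distance=float(dist),
            ))
    return constraints
```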

I have the itch that this might best be handled as a post-processing operation on the dense result (raycast points A and B, find the points closest to each ray, then scale), but I'm wondering whether it could instead be solved as an additional bundle adjustment constraint or as part of the alignment_constraints function, and thus integrated into OpenSfM?
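A rough sketch of the post-processing variant, assuming the two annotated pixels have already been converted into ray directions in the reconstruction's frame (e.g. from the camera model's pixel bearings) and that `points` is the dense cloud; all names below are hypothetical:

```python
import numpy as np

def nearest_point_to_ray(points, origin, direction):
    """Return the dense point with the smallest perpendicular distance
    to a ray defined by an origin and a direction vector."""
    points = np.asarray(points)
    d = np.asarray(direction) / np.linalg.norm(direction)
    vecs = points - origin
    along = vecs @ d                      # signed distance along the ray
    perp = vecs - np.outer(along, d)      # perpendicular component
    dists = np.linalg.norm(perp, axis=1)
    dists[along < 0] = np.inf             # ignore points behind the camera
    return points[int(np.argmin(dists))]

def scale_from_rays(points, origin, dir_a, dir_b, known_distance):
    """Derive a scale factor by raycasting both annotated pixels into the
    dense cloud and comparing the recovered distance to the known one."""
    pa = nearest_point_to_ray(points, origin, dir_a)
    pb = nearest_point_to_ray(points, origin, dir_b)
    return known_distance / np.linalg.norm(pb - pa)
```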

@AfaqSaeed
Contributor

AfaqSaeed commented Aug 30, 2022

Regarding the linear scaling factor method, @pierotofy, you could implement something that lets the user specify the points in 2D, but the problem is that there is no guarantee the two marked points are even present in the reconstruction. I think a better idea would be to display all the tracked points on an image and let the user select two points from the tracks. We can then measure the distance between those specific points and apply it to the point cloud.
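A minimal sketch of that track-based idea, written against plain dicts rather than OpenSfM's own tracks manager API so no particular accessors are assumed (function and argument names are hypothetical):

```python
import numpy as np

def scale_from_tracks(observations, points_3d, click_a, click_b, known_distance):
    """Pick the two tracks whose 2D observations in an image are closest to
    the user's selections, then derive a scale factor from their 3D distance.

    observations: dict mapping track_id -> (x, y) observation in the image
    points_3d: dict mapping track_id -> (X, Y, Z) reconstructed position
    click_a, click_b: the two pixel locations the user selected
    known_distance: measured distance between the two features, in meters
    """
    def closest_track(click):
        # Only consider tracks that actually made it into the reconstruction.
        return min(
            (tid for tid in observations if tid in points_3d),
            key=lambda tid: np.linalg.norm(np.asarray(observations[tid]) - click),
        )

    tid_a = closest_track(np.asarray(click_a, dtype=float))
    tid_b = closest_track(np.asarray(click_b, dtype=float))
    current = np.linalg.norm(
        np.asarray(points_3d[tid_b]) - np.asarray(points_3d[tid_a])
    )
    return known_distance / current
```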
