Category:Photogrammetry


Photogrammetry historically referred to a set of techniques for measuring distances from photographs. Currently the term more commonly refers to a process of using dedicated software to reconstruct three-dimensional computer models of objects from sets of photographs.

Predecessors

Photogrammetry in the Nineteenth Century

Photogrammetric techniques were used by Egyptologists to work out the dimensions of monuments. Rather than physically measure the height of an obelisk, it was easier to assume that the obelisk was effectively vertical, measure the length of its shadow and the angle of the sun, add half the thickness of the pillar to the shadow length, and then use simple trigonometry to work out the height.
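
As a rough illustration of that calculation, here is a minimal sketch in Python; the numbers and variable names are invented for the example, not taken from any historical survey:

 import math

 # Invented example values -- not real survey measurements.
 shadow_length_m = 40.0    # measured length of the obelisk's shadow
 pillar_width_m = 2.4      # thickness of the obelisk at its base
 sun_elevation_deg = 52.0  # angle of the sun above the horizon

 # The shadow starts at the pillar's edge, so add half the pillar's
 # thickness to get the full horizontal distance from the centre line.
 horizontal_distance_m = shadow_length_m + pillar_width_m / 2

 # Height = horizontal distance x tangent of the sun's elevation angle.
 height_m = horizontal_distance_m * math.tan(math.radians(sun_elevation_deg))
 print(f"Estimated height: {height_m:.1f} m")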

Photogrammetry in WW1 and WW2

During the First and Second World Wars, photogrammetric techniques were applied to aerial photographs to produce maps and height maps of enemy terrain.

Photogrammetry during the Cold War

After WW2, similar techniques were used with satellite images to produce distortion-corrected maps and height maps. Accurate three-dimensional data became more important with the advent of cruise missile technology: sending a missile on a ground-hugging path below enemy radar required knowing the terrain that the missile would be hugging, and if the heights were wrong, the missile would either be too visible or would be likely to plough into the ground (or trees, or telegraph poles).

Development of photogrammetry software

Photogrammetry as an automated way of creating 3D models from images appeared with the advent of cheap, powerful computing and with research on artificial intelligence and computer vision. Image-analysis routines allowed software to be fed an image and pick out apparent details of interest, and stereo-analysis software allowed a robot with stereo cameras to have depth perception, by comparing two images and working out the distance from the cameras of details visible in both views. University research on a range of open-source projects then allowed the creation of software modules that could identify huge numbers of potential details and their shapes in images, cross-reference them, and identify and reject "outliers".
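
A minimal sketch of those steps, assuming the OpenCV library as one possible implementation (the text above doesn't name any specific software, and the image filenames here are placeholders): detect details of interest in two views, cross-reference them, and reject the outliers with a robust (RANSAC) fit.

 import cv2
 import numpy as np

 img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
 img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

 # Pick out "details of interest" (keypoints) and descriptors in each image.
 orb = cv2.ORB_create(nfeatures=5000)
 kp1, des1 = orb.detectAndCompute(img1, None)
 kp2, des2 = orb.detectAndCompute(img2, None)

 # Cross-reference the details found in the two views.
 matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
 matches = matcher.match(des1, des2)

 # Identify and reject "outliers": keep only the matches consistent with a
 # single camera geometry, estimated robustly with RANSAC.
 pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
 pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
 F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
 inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]

 print(f"{len(matches)} raw matches, {len(inliers)} survive outlier rejection")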

With university research feeding back into community-owned open-source software, the software modules needed for automated shape reconstruction were steadily improved by different teams, as serious computer power became more easily available. A project to speed up photogrammetry by using video, and assuming that every frame would have almost the same physical location and direction as the previous one, turned out to also be useful if one deliberately took a series of still photographs forming a continuous sequence: rather than trying to manually identify the closest pairs of images, the software could assume that they'd probably have adjacent filename numbers.
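
A minimal sketch of the difference between the two pair-selection strategies (the filenames and the window size are invented for the example):

 from itertools import combinations

 # 100 photographs taken as a continuous sweep around an object.
 images = [f"IMG_{i:04d}.jpg" for i in range(1, 101)]

 # Exhaustive approach: compare every image with every other image.
 exhaustive_pairs = list(combinations(images, 2))

 # Sequential approach: assume neighbouring filename numbers were taken from
 # almost the same position, so only compare each image with the next few.
 window = 3
 sequential_pairs = [
     (images[i], images[j])
     for i in range(len(images))
     for j in range(i + 1, min(i + 1 + window, len(images)))
 ]

 print(len(exhaustive_pairs), "pairs to match exhaustively")   # 4950
 print(len(sequential_pairs), "pairs to match sequentially")   # 294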

GPU technology

Photogrammetry got a further boost with the appearance of Graphics Processing Units ("GPUs") designed for the graphics cards of computer gaming PCs. The graphics of multi-player games were highly calculation-intensive, and needed to be able to apply repetitive calculations in three dimensions to large datasets. Rather than being a single large multi-purpose processor with huge numbers of functions, a GPU had an array of extremely dumb processors that could only really perform basic arithmetic and trigonometric calculations, but could carry them out extremely quickly: one could dump an array of data into a GPU, give it a short piece of simple code that had to be carried out on every piece of data in the array, and have perhaps 64 or more identical processors chew through the data in parallel and then flag an alert when the calculations were finished. Although primarily aimed at the gaming market, GPUs gave the scientific community a way of achieving supercomputer-level computing power at much lower cost, as long as code was rewritten to be able to run in parallel on a GPU.
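
A minimal sketch of that "same small calculation over every element of an array" idea, assuming Python with the Numba library's CUDA support as one possible route (the text doesn't name any particular toolkit, and this needs a CUDA-capable graphics card and Numba installed to actually run):

 import math
 import numpy as np
 from numba import cuda

 @cuda.jit
 def angle_kernel(xs, ys, out):
     # Each GPU thread handles one element of the array.
     i = cuda.grid(1)
     if i < xs.size:
         out[i] = math.atan2(ys[i], xs[i])  # a basic trigonometric step

 n = 1_000_000
 xs = np.random.rand(n).astype(np.float32)
 ys = np.random.rand(n).astype(np.float32)
 out = np.zeros_like(xs)

 # Launch enough thread blocks to cover the whole array; the identical
 # "dumb" processors chew through it in parallel.
 threads_per_block = 256
 blocks = (n + threads_per_block - 1) // threads_per_block
 angle_kernel[blocks, threads_per_block](xs, ys, out)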

While having a faster graphics card with more GPU cores doesn't speed up everything to do with photogrammetry, it will speed up stages in the calculations that have been GPU-optimised.

Free vs commercial photogrammetry software

Although many of the core code modules that run photogrammetry software are open-source and freely available at zero cost, along with the software frameworks that link everything together and make it run (which the researchers needed in order to develop the modules in the first place), the user interfaces of these free programs are usually not really commercial-grade. Some are only available for Linux/Unix operating systems, some only run on Windows ... and if your version of Linux doesn't have all the same installed software as the developer's machine, or if you have a different version of Windows to the one used by the developer, finding out how to fix the problem and get the thing to run can be rather demanding.

While commercial photogrammetry software may have the same "core", it will have been "tweaked" to give the best results for most situations by default, and a lot of commercial developer time will have been spent on making sure that the software installs effortlessly and is as friendly to use as possible ... which can be really really useful if one is not an IT enthusiast and is more interested in photogrammetry than in operating system quirks.

Good commercial photogrammetry software usually recognises that there are two distinct markets for this sort of product: large-scale professional archaeology and mapping (where one can charge appropriate prices), and the hobbyist and casual professional market, where buyers might like to get involved in photogrammetry, but if the price is too high, they'll forget about it and apply themselves to something else instead.

Product prototyping isn't normally considered a "high-value" application for photogrammetry, as people interested in reverse-engineering hardware and modelling existing mechanisms to produce spare parts will normally have the budget to buy a laser-scanner instead.

Photogrammetry vs. laser-scanning

Photogrammetry is excellent at recreating models of terracotta pottery in a controlled environment, but it's weak at modelling blank, reflective, badly-lit or transparent surfaces. Laser-scanning will let you map surfaces without worrying about illumination or inadequate levels of detail. So photogrammetry may let you produce a nice model of the painted-detail parts of the Sistine Chapel, but it may not be as accurate when trying to model a white interior with white sculpted pillars and blank white walls. Laser-scanning is also a fairly dumb process that doesn't get overwhelmed by trying to cross-compare vast amounts of data. If you want to map a cave system, or check the dimensional accuracy of a plain plastic or metal part, you use laser-scanning. If you want to capture a landscape from a drone, where the camera isn't fixed and you don't need millimetre accuracy, or you want to model a painted building, you may use photogrammetry. If you have the budget, you buy a top-end system that integrates photographic images for colour data and laser-scanning for precise dimensional data.
