By Mark Dahmke
When Autodesk announced their 123D Catch app for the iPad, I tried it, but quickly discovered that I couldn't capture images of moving subjects, namely my cats. In mid-2012 I put together a test with three cameras: two DSLRs and an HP point-and-shoot. I used a remote trigger to fire the DSLRs simultaneously but had to trigger the HP manually. I also discovered that I needed more consistent lighting, which meant using a flash and a very short exposure to improve depth of field and get more consistent results, especially with subjects that move.
After those experiments I set the project aside for a while, then in mid-2013 decided to try again. This time I used Canon PowerShot cameras with custom firmware from the CHDK project; the version I used was Stereo Data Maker, because it allowed very precise synchronization of multiple cameras. My first proof-of-concept tests started with three and later six cameras. I also ran tests on static subjects to determine the minimum number of shots the 3D capture software needed to produce a good model. I then scaled up by building a support rig from PVC pipe, adding flash units, and experimenting with lighting and camera exposure settings.
The goal of this effort was to capture all the images with an accuracy of at least 1/125 second, to freeze motion. The flash fires in about 1/1000 second, and the cameras must trigger within a few thousandths of a second of the flash units firing.
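That timing requirement can be expressed as a simple tolerance check. The sketch below is illustrative only: the timestamps, the 4 ms tolerance, and the function name are assumptions for the example, not measurements or code from the actual rig.

```python
# Hypothetical sync check: given trigger timestamps (in milliseconds)
# for the flash and each camera, verify that every camera fired within
# a small tolerance window around the flash. All values here are
# illustrative, not from the actual rig.

TOLERANCE_MS = 4.0  # "plus or minus a few thousandths of a second"

def cameras_in_sync(flash_t, camera_ts, tolerance=TOLERANCE_MS):
    """Return True if every camera fired within +/- tolerance of the flash."""
    return all(abs(t - flash_t) <= tolerance for t in camera_ts)

# Example: flash at t=100.0 ms, six cameras triggered by a common pulse
flash_time = 100.0
camera_times = [99.2, 100.5, 101.8, 98.7, 100.1, 102.3]
print(cameras_in_sync(flash_time, camera_times))  # True: all within 4 ms
```

Any camera that drifts outside the window would blur relative to the flash-frozen frame, so a check like this is a useful way to think about how tight the synchronization budget is.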
The 24 images are then imported into Agisoft's PhotoScan, which generates a 3D mesh and surface texture. After much experimentation, I found that PhotoScan produces much more accurate models and is more feature-rich than the web-based 123D Catch. PhotoScan also supports masking, which significantly improves the accuracy of the 3D capture.
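The idea behind masking is that a binary mask marks which pixels belong to the subject, so background pixels never enter feature matching and cannot confuse the reconstruction. The toy sketch below shows only the concept; it is not PhotoScan's actual API, and the pixel labels are made up for illustration.

```python
# Conceptual illustration of image masking: a binary mask flags subject
# pixels (1) versus background pixels (0), and only subject pixels are
# kept for downstream feature matching. PhotoScan applies masks
# internally; this is just the idea, not its implementation.

def apply_mask(pixels, mask):
    """Keep only pixels whose mask bit is 1; drop background pixels."""
    return [p for p, keep in zip(pixels, mask) if keep]

pixels = ["fur", "whisker", "wall", "ear", "floor"]
mask = [1, 1, 0, 1, 0]
print(apply_mask(pixels, mask))  # ['fur', 'whisker', 'ear']
```

With backgrounds masked out, the matcher only sees points on the subject itself, which is why masking pays off most on cluttered scenes.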