I’ve previously posted about creating a 3D model of the statue of the bunyip from The bunyip of Berkeley’s Creek. Now it’s time to talk about the sister statue:
For this work I used a combination of static images taken with a mobile phone:
… and still images automatically (and selectively) extracted from a video recording of the bronze statue. You can see that the coverage from the video frames is far denser:
You can also tell what a terrible cameraman I am. What was I even doing? I kept forgetting to focus on the statue in front of me (as clear as DVD on digital TV screens).
The 3D model from the mobile phone images didn’t satisfy my appetite for something spectacular either:
The unedited model from the video wasn’t much better; its game is kinda weak:
With some TLC in MeshLab it could come good. But my goal isn’t a fully scrubbed-up model; I’m playing with finding multiple models in a diverse image collection, spatially expanding my horizons. That leaves this model in the class with scrubs, never rising. I don’t find it surprising. What I’ve been trying to get at this entire blog post is that I don’t want no scrubs:
I used VLC to convert the video into still images using a video filter. I want to develop this workflow further to automatically discard frames that are out of focus or motion-blurred. I feel this will improve the results, especially as my camera work has no chance of improving.
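I haven’t built that filter yet, but here’s a minimal sketch of the kind of thing I have in mind: score each frame by the variance of its Laplacian (a common focus measure, where low variance suggests blur) and keep only frames above a threshold. For simplicity the “frames” here are plain lists of lists of grayscale values; real frames would be loaded with a library such as OpenCV or Pillow, and the threshold would need tuning against actual footage.

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian; low values suggest blur."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def keep_sharp(frames, threshold=100.0):
    """Keep only frames whose sharpness score clears the threshold."""
    return [f for f in frames if laplacian_variance(f) >= threshold]

# A frame with a hard edge scores high; a flat (blur-like) frame scores 0.
sharp = [[0, 0, 255, 255]] * 4
flat = [[128] * 4] * 4
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In practice the variance-of-Laplacian trick is sensitive to scene content (a sharp photo of a flat surface also scores low), so a per-video threshold is probably safer than a global one.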
The good news is that processing the bunyip pictures, gumnut baby video, and gumnut baby pictures at the same time gave me preliminary outputs that clearly indicated either two objects (bunyip + gumnut baby) or three objects (bunyip, gumnut baby video, gumnut baby pictures) could be created with further (more intensive) processing.
No, I don’t want no scrub.