Blowing up across the 3D printing Internet last week was the following video, which demonstrated software with the ability to convert 2D photographs into complete 3D models, possibly opening up a whole new method for creating 3D-printable objects:

Developed by researchers at Tel Aviv University, Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu, and Daniel Cohen-Or, the software, called 3-Sweep, is meant to make 3D modeling accessible to anyone, regardless of skill level. By relying on the intuitive capacities of human perception in tandem with the algorithms of a computer program, users can identify objects and their relative depths, while the software detects edges and generates a texture map.

The software performs its operations in three steps. The first two are used to outline the contours of an object, while the last step is meant to determine its depth. A writer at Wired rightly drew analogies to the magic lasso and content-aware fill tools in Photoshop, saying that "the software intelligently [snaps] to the object along the way, like a more sophisticated version of Photoshop's magic lasso." The background is then separated from the object and filled in with something like Photoshop's content-aware fill, allowing the object to be turned and repositioned anywhere in its environment.

The software will make its debut at SIGGRAPH Asia in November of this year, at which point the accompanying paper and, if we're lucky, the source code will be released to the public.
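To give a rough sense of what the three-sweep idea recovers, here is a minimal sketch in plain Python. It is not 3-Sweep's actual algorithm (which fits generalized cylinders and cuboids to detected image edges); it only illustrates the simplest case, where the two contour strokes trace an object's profile and the third sweep is modeled as revolving that profile around a vertical axis to produce depth. The function name and data layout are my own assumptions for illustration.

```python
import math

def surface_of_revolution(profile, segments=24):
    """Illustrative stand-in for a single 'sweep': revolve a 2D
    profile (list of (radius, height) pairs, as if traced from the
    object's outline) around the vertical axis.

    Returns (vertices, faces): vertices as (x, y, z) tuples, faces as
    quads of vertex indices, one ring of vertices per profile point.
    """
    verts = []
    for r, h in profile:
        for s in range(segments):
            a = 2 * math.pi * s / segments
            verts.append((r * math.cos(a), h, r * math.sin(a)))

    faces = []
    rings = len(profile)
    for i in range(rings - 1):
        for s in range(segments):
            # Connect ring i to ring i + 1 with a quad, wrapping
            # around at the last segment.
            a = i * segments + s
            b = i * segments + (s + 1) % segments
            faces.append((a, b, b + segments, a + segments))
    return verts, faces

# A vase-like profile: narrow base, bulging middle, narrow neck.
profile = [(0.5, 0.0), (1.0, 0.5), (1.0, 1.5), (0.4, 2.0)]
verts, faces = surface_of_revolution(profile, segments=8)
```

The real system generalizes this considerably (arbitrary sweep axes, non-circular cross-sections, snapping to image edges), but the core output is the same kind of swept mesh.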