There was a lot to love at Apple's WWDC (Worldwide Developers Conference) event, but one new tool in particular caught our eye as a photo and video site: the new Object Capture API. It lets creators build a 3D model for augmented reality from a collection of 2D images. It may be a niche feature for now, but it is undeniably awesome.
It’s being rolled out as part of RealityKit 2. RealityKit is a suite of technologies that Apple-platform developers can use to get started with augmented reality (AR). This second revision adds finer visual, audio, and animation control to AR experiences, but the Object Capture technology is the most exciting part. Apple says it came about because 3D models are so difficult and time-consuming to create by hand – a deterrent for new developers who want to get into the AR world but get stuck at the modeling stage.
How Does Object Capture Work?
Object Capture is about as simple as it gets. Developers take a series of pictures from different angles, which can be done not only with an iPhone but also with an iPad, DSLR, or drone – really, any sort of camera will work. Taking the photos on a compatible Apple device also captures stereo depth data, which lets the API recover the object’s actual size, and gravity-vector data so that the 3D object turns out right-side up. You’ll need shots from various angles, including the bottom of the object if you’d like it included in the model. Then, according to Apple, a few lines of code using the Object Capture API on macOS will generate a 3D model.
On a more technical level, developers start a photogrammetry session in RealityKit and point it at a folder containing the images. They then call a process function to generate the model, selecting the level of detail they would like. Developers can also generate files optimized for AR Quick Look, a technology that lets developers add 3D models to apps or websites on Apple mobile devices.
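As a rough sketch of what those "few lines of code" look like, the flow described above maps onto RealityKit's `PhotogrammetrySession` API on macOS. The folder and output paths below are placeholders, and the exact output-handling is simplified from Apple's session material, so treat this as an illustration rather than production code:

```swift
import Foundation
import RealityKit

// Folder of source photos and the destination for the finished model
// (placeholder paths – substitute your own).
let inputFolder = URL(fileURLWithPath: "/path/to/photos", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

// Start a photogrammetry session pointed at the image folder.
let session = try PhotogrammetrySession(input: inputFolder)

// Ask for a .usdz model, selecting a detail level – options range from
// .preview up to .raw; .reduced is a good fit for AR Quick Look on the web.
try session.process(requests: [
    .modelFile(url: outputFile, detail: .reduced)
])

// Progress and results arrive asynchronously on the session's output stream.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url.path)")
        case .processingComplete:
            print("All requests finished")
        default:
            break
        }
    }
}
```

Because the heavy lifting happens inside `process`, the same session can accept multiple requests at once – for example, a quick `.preview` model to check coverage alongside a slower `.full`-detail export.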
How Will Object Capture Be Used?
Augmented reality is poised to make waves in online shopping. Apple has said that brands like Wayfair and Etsy are using Object Capture to create 3D models, implying that interactive models are coming to online stores. This also means customers will get far more features that let them preview furniture and other merchandise in their own homes.
Beyond shopping, Apple added that Maxon and Unity are using Object Capture for software like Cinema 4D and Unity MARS, meaning this technology can be used for things like video games and movies.
Limitations and Ideal Conditions
Naturally, the technology isn’t perfect, but the limitations are what you would expect. Objects with transparent or textureless components will suffer: 2D photos can’t convey clear glass, nor tell the software how to interpret the objects seen through it. Reflective surfaces fare poorly for a similar reason – the finished 3D model will bake in those unwanted reflections. If you must photograph a reflective object, diffusing the light around it gives the best results.
If you’ll be flipping the object to capture its underside, this only really works with a rigid object that won’t change shape when flipped (please don’t flip your freshly baked cake upside down!). Needless to say, an object with fine detail will look best when the reference photos are taken close-up on a high-resolution camera. For the best results, place the object against a simple background so the software can identify it accurately, and keep the lighting consistent while you take your photos.
“Turning real-world objects into 3D models has never been easier,” says Apple’s VP of Sensor Software, Myra Haggerty. And we can’t help but agree!
Watch the full conference session on Apple’s developer website here: