How Kiri Solved Gaussian Splatting's Biggest Limitation
Turn your splats into high-quality meshes
In this interview with Jack Wang, CEO of Kiri Engine, we explore how 3D Gaussian Splatting is transforming the 3D scanning landscape. Jack shares practical insights, future trends, and surprising use cases that are reshaping how you think about capturing the world in 3D.
Before getting into the interview, I wanted to quickly introduce you to today’s sponsor, Gracia AI. Gracia AI is the only app that allows you to experience Gaussian Splatting volumetric videos on a standalone headset, either in VR or MR.
It is a truly impressive experience, and I recommend trying it out right now on your Quest or PC-powered headset.
Interview with Jack Wang
What makes Kiri Engine different from other 3D scanning apps?
Jack Wang: Absolutely, they’re not all the same. Most apps rely on a couple of scanning technologies, but Kiri Engine integrates nearly all of them: photogrammetry, LiDAR, and now 3D Gaussian Splatting. This means our users get the best tool for each job. For example, photogrammetry has been around forever and is great for high-detail textures, but it fails on low-texture objects. That’s where newer methods like 3DGS come in. We’re really trying to let people scan anything they want, not just what photogrammetry can handle.
Why is photogrammetry not enough for e-commerce?
Jack Wang: The issue is photogrammetry only works well on high-texture objects like shoes or cloth bags. If you try scanning something like a water bottle or perfume with a smooth surface, it fails. Back in 2018, we tried pushing photogrammetry into e-commerce, and it was a hard no from retailers because the range of scannable objects was so limited. That’s why 3D Gaussian Splatting is a game changer. All of a sudden, you can scan pretty much anything static, even smooth or shiny surfaces, which opens up new product visualisation opportunities.
What are the limitations of LiDAR scanning today?
Jack Wang: LiDAR was super hyped when it first came to iPhone Pros, but in practice, it’s disappointing for detailed scans. It’s great for surveying, like capturing the layout of rooms, but not for detailed, photorealistic meshes. You’ll notice right away the scans are often inaccurate and don’t have the fine details that photogrammetry or 3DGS provide. If you're expecting that level of detail from LiDAR, you’ll be let down eight out of ten times.
Since this interview was recorded, Kiri has announced new features for their app that let users get higher-fidelity LiDAR scans using AI-powered cloud processing.
Is 3D Gaussian Splatting ready for mainstream production pipelines?
Jack Wang: Not really, at least not yet. While 3DGS is hot right now, 90% of mainstream 3D pipelines like games, VFX, and 3D printing still revolve around meshes. Splatting doesn’t produce meshes natively, which makes it hard to slot into those workflows. Right now, it’s more of a tool for experimental visuals or creative projects like music videos. But that’s changing as tools like ours allow people to convert 3DGS into meshes.
How is Kiri Engine enabling mesh workflows from 3DGS?
Jack Wang: We have a feature called “3DGS to Mesh” that’s super popular. It lets users convert a 3D Gaussian Splat into a mesh, which you can then use in Blender, Unreal, or wherever. That’s been huge for making splats compatible with traditional workflows. We even provide a Blender add-on so you can do things like relighting, background removal, and focus shifting right inside Blender. It’s about bridging the gap between experimental and production.
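Kiri hasn’t published the internals of its “3DGS to Mesh” feature, but if you want a feel for how splat-to-mesh conversion can work in general, here is a minimal sketch using Open3D: treat the Gaussian centres stored in a splat .ply as a point cloud and run Poisson surface reconstruction over it. The file names and parameters are illustrative assumptions, not Kiri’s pipeline.

```python
# A minimal sketch of one open-source route from a splat to a mesh
# (NOT Kiri's actual pipeline): read the Gaussian centres from the
# .ply, estimate normals, and run Poisson surface reconstruction.
import numpy as np
import open3d as o3d

# 3DGS .ply files store each Gaussian's centre as x/y/z, so Open3D
# can load them as an ordinary point cloud ("scene.ply" is a placeholder).
pcd = o3d.io.read_point_cloud("scene.ply")

# Poisson reconstruction needs oriented normals; estimate them locally.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# depth controls the detail/size trade-off of the output mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Trim the low-density fringe that Poisson tends to hallucinate.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("scene_mesh.obj", mesh)
```

A crude approach like this ignores the Gaussians’ scale, opacity, and colour, so a dedicated product pipeline will do better, but it shows why meshing splats is tractable at all: the centres already form a dense point cloud.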
Is 3DGS really lighter than meshes for the web?
Jack Wang: In theory, yes. But when we started, 3DGS files could still be 200 to 250MB, which was too heavy for web. So we focused on compression. In version 3.13 of Kiri Engine, we introduced a method that cuts that in half. Now we average 80 to 100MB, which is more manageable. It’s not as small as images yet, but we’re seeing constant progress through research papers and implementation. I’m confident we’ll keep pushing the size down.
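For readers curious where savings like that can come from: Kiri hasn’t detailed its compression method, but a well-known lever in 3DGS research is that the view-dependent spherical-harmonic colour coefficients make up most of a splat file. The hedged sketch below, using the plyfile library, drops those higher-order coefficients (the f_rest_* properties in the reference 3DGS format) while keeping everything else; the file names are placeholders.

```python
# A minimal sketch of one common 3DGS size reduction (not necessarily
# Kiri's method): drop the higher-order spherical-harmonic colour
# coefficients (f_rest_*), which dominate a standard splat .ply.
import numpy as np
from plyfile import PlyData, PlyElement

ply = PlyData.read("scene.ply")          # placeholder input file
vertex = ply["vertex"].data              # structured array, one row per Gaussian

# Keep position, opacity, scale, rotation, and base colour (f_dc_*);
# discard only the 45 view-dependent SH terms per Gaussian.
kept = [n for n in vertex.dtype.names if not n.startswith("f_rest_")]
slim = np.empty(len(vertex), dtype=[(n, vertex.dtype[n]) for n in kept])
for n in kept:
    slim[n] = vertex[n]

PlyData([PlyElement.describe(slim, "vertex")]).write("scene_slim.ply")
```

Deleting the SH terms outright costs view-dependent shading (reflections flatten out), which is why more careful compressors quantise or cluster those coefficients instead of removing them.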
How are people using both splats and meshes together?
Jack Wang: Great question. In workflows like Blender or Unreal, people import both versions. They use the mesh to handle physics, collisions, and animations, and overlay the splat for visual fidelity. That way you get the best of both worlds, physical behavior and stunning visuals. We’ve seen this used in architectural visualization, simulations, and even AR experiences where realism and interactivity need to coexist.
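To make that concrete, here’s a hedged Blender-Python sketch of the dual-asset pattern Jack describes: the mesh drives the rigid-body simulation while the splat, parented to it, provides the visuals. The object names are hypothetical, and your splat importer (Kiri’s Blender add-on, for instance) may structure things differently.

```python
# A minimal Blender-Python sketch of the mesh-plus-splat pattern
# (object names "scan_mesh"/"scan_splat" are hypothetical).
import bpy

mesh = bpy.data.objects["scan_mesh"]    # collision proxy from a 3DGS-to-mesh step
splat = bpy.data.objects["scan_splat"]  # imported splat for visual fidelity

# The mesh takes part in the rigid-body simulation...
bpy.context.view_layer.objects.active = mesh
bpy.ops.rigidbody.object_add(type="ACTIVE")

# ...but stays invisible at render time; only the splat is seen.
mesh.hide_render = True

# Parent the splat to the mesh so it inherits the simulated motion,
# preserving its current world transform.
splat.parent = mesh
splat.matrix_parent_inverse = mesh.matrix_world.inverted()
```

The same division of labour applies in engines like Unreal or Unity: a simplified collision mesh answers physics queries while a splat-rendering plugin draws the visuals.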
What editing capabilities does Kiri Engine offer for splats?
Jack Wang: We’ve added a lot of post-capture tools. You can brush out unwanted areas, crop to focus on the main object, and even toggle background removal, which uses AI to isolate your subject. It works really well. I’ve had people surprised by how precisely it cuts around their object. These tools make the workflow feel like working with meshes, even though it’s all splats. It’s a big part of why people stick with us after trying it.
What are people actually scanning every day and why?
Jack Wang: Funny thing, many users don’t have a practical reason. They’re scanning just because it’s cool. And that’s okay. These hobbyist users are some of our most loyal. They’re capturing statues, dresses, even entire vacations in Japan. We’ve seen a viral case in China where students scanned their dorms before graduation. That emotional connection to a space is something AI can’t fake. That’s where splatting shines. It preserves moments, not just objects.
Can splats be explored in VR effectively?
Jack Wang: Oh yeah. When we put splats into Apple Vision Pro, it was honestly emotional. Seeing a scanned memory like your childhood room in 3D VR is nothing like photos. We demoed this at GDC last year. It didn’t get massive downloads because Vision Pro adoption is low, but it showed us just how powerful VR plus 3DGS could be. It’s a total memory time capsule. Way more visceral than traditional media.
What does the future look like for animated splats?
Jack Wang: We’re already working on it. We’ve got a working prototype that uses monocular video, just a regular video clip, to generate animated splats. It’s still in early research stages and super GPU-heavy, but it proves it can be done without fancy volumetric cameras. Once we refine it, the next big step is letting everyday users animate scenes straight from a phone video.
What’s next for Kiri Engine in the near and long term?
Jack Wang: Short term, we’re refining the experience, especially the learning curve. We want people to pick up their phone and scan like they’re taking a selfie. Long term, we’re developing a consumer-facing product built on Kiri Engine. It’ll be focused on storytelling, memory capture, and letting regular users own their 3D experiences. Also, we’re exploring sparse view splats, where you only need a few images instead of a full video to get great results. That’s going to be huge.
Check out the full interview on your favourite platform 👇
That’s it for today
See you next week