In this interview, we sit down with Knut Nesheim, head of engineering at Varjo’s Teleport. Teleport is a Gaussian Splatting tool built by Varjo, the Finnish company renowned for its top-tier VR headsets geared toward businesses and enterprises. We dive into the reasoning behind investing in 3DGS, the technology that makes it possible to view these high-fidelity reconstructions in VR, and use cases from real-world enterprises leveraging this technology.
Before getting into the interview, I wanted to quickly introduce you to today’s sponsor, Gracia AI. Gracia AI is the only app that allows you to experience Gaussian Splatting volumetric videos on a standalone headset, either in VR or MR.
It is a truly impressive experience and I recommend trying it out right now on your Quest or PC-powered headset.
Interview with Knut Nesheim
So Varjo is known for having the top-class VR headsets currently available, and you’re investing in a technology to create, present, and share 3DGS. Can you explain why?
Knut Nesheim: Yeah, so Teleport actually has quite a long history. We started working on Teleport four years ago, coming at it from a different angle: create amazing worlds you could view in VR, with our headset playing a central role. Back then, NeRF was becoming the best approach, but real-time rendering was computationally out of reach. When G-splatting showed up two years ago, we saw an opportunity: real-time rendering on a high-end VR headset was finally possible.
What led you to invest in a software like Teleport? Wasn’t hardware enough?
Knut Nesheim: Varjo’s business strategy is to be more of a foundational technology layer in virtual, mixed, and augmented reality. Our headset is agnostic to applications, and Teleport is also agnostic to end goals: you could inspect a factory or put a botanical garden online. We saw that with one of these headsets, the first thing people want is to capture something real and view it with others as realistically as possible. G-splatting lets you do that, so branching into immersive software made perfect sense.
Can you share some of the magic happening behind the scenes, like the pipeline, profiles, iterations, or compression you use?
Knut Nesheim: Sure. We’ve built an automated end-to-end pipeline that removes many manual photogrammetry steps like image selection and editing. Our secret sauce is partly in cloud scalability, partly in proprietary magical ingredients, and partly in productizing well-understood tech. For example, rendering two 4K displays at 45fps in VR requires an extremely performant renderer. We balance splat count versus frame rate, and we’ve invested heavily in adaptive level-of-detail hierarchies to switch between 250,000-splat low-fidelity and up to 5 million-splat high-definition models based on view and motion.
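As a rough illustration of how such a view- and motion-based scheme could work (a hypothetical sketch, not Teleport’s actual heuristic; the function name and thresholds are mine), a renderer might pick a splat tier each frame based on head motion and whether it is hitting its frame budget:

```python
# Minimal sketch of view/motion-based LOD selection for a two-tier model:
# ~250k splats (low fidelity) vs. up to ~5M splats (high definition).
# Thresholds are illustrative, not Teleport's.

def select_lod(head_speed_m_s: float, frame_time_ms: float,
               budget_ms: float = 22.2) -> str:
    """Drop detail while the viewer moves fast or while the renderer
    is missing its frame budget (45 fps -> ~22.2 ms per frame)."""
    if head_speed_m_s > 0.5 or frame_time_ms > budget_ms:
        return "low"   # ~250k splats: fast motion hides the lost detail
    return "high"      # up to ~5M splats: stable view, render the full model
```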
What inputs are accepted by Teleport today?
Knut Nesheim: We now accept plain color images in various formats, videos, and captures from our iOS app. In the app, we take about two frames per second and leverage Apple’s AI magic to combine the ultra-wide and wide-lens sensors. That gives beautifully exposed, high-quality images. Videos offer broader compatibility, covering drones and Android phones, but bring more motion blur and exposure issues. So in our iOS app we focus on getting the best possible photos to make high-quality results easier and more consistent.
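To make the motion-blur tradeoff concrete, here is a minimal sketch (my illustration, not Teleport’s pipeline) of sampling roughly two frames per second from a video and discarding blurry ones with a Laplacian-variance check, a common sharpness heuristic:

```python
import cv2  # pip install opencv-python

def extract_sharp_frames(video_path: str, target_fps: float = 2.0,
                         blur_threshold: float = 100.0) -> list:
    """Sample ~target_fps frames from a video and keep only the sharp ones."""
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, round(src_fps / target_fps))  # source frames per sample
    kept, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Variance of the Laplacian: low values indicate motion blur.
            if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
                kept.append(frame)
        idx += 1
    cap.release()
    return kept
```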
If I have both a DSLR and an iPhone, which should I pick for capturing a space?
Knut Nesheim: It depends. If you have limited time or are on the move, the iPhone is great: anybody can walk slowly and get amazing results. Most street-level scans on our site were made with an iPhone. But if you have a DSLR or mirrorless and can carefully shoot with a tripod and proper lighting, you’ll get even better results. So for quick, reliable captures use the iPhone; for highest fidelity, use the DSLR workflow.
Besides the “keep phone steady” advice, what tips improve capture and final results?
Knut Nesheim: The biggest thing is to stand with your back toward the far wall and look across the whole space to get overview shots first, not just close-ups. Pretend there’s an object in the middle and loop around it. Ultra-wide lenses on phones really help for that. Once you’ve mastered the overview and the rotation around the scene, then consider lighting: avoid over-exposed windows or harsh shadows. Most people skip our tutorial, but if you follow those two steps you’ll see dramatically better results.
Once I have the splat in Teleport, what can I do within the viewer?
Knut Nesheim: A few things: you can export the splat as a PLY file and use it in Blender or Unity. Within Teleport’s own platform we offer virtual tours (add text annotations to highlight parts of a museum, for example), portals to move between scenes seamlessly (like walking from one room or store to another), and video export. With video export you define a camera trajectory, FOV, and reveal animation to create a compelling clip for social media or presentations. And of course, you can still jump into VR and explore every angle in high fidelity.
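For readers who want to poke at an exported splat, here is a minimal sketch (assuming the common 3DGS PLY layout most tools emit; Teleport’s exact export may differ) that loads positions and opacities with the plyfile library:

```python
import numpy as np
from plyfile import PlyData  # pip install plyfile

def load_splats(path: str):
    """Read a 3DGS-style PLY, where each vertex is one Gaussian splat."""
    verts = PlyData.read(path)["vertex"]
    positions = np.stack([verts["x"], verts["y"], verts["z"]], axis=-1)
    # Opacity is typically stored pre-sigmoid (as a logit) in 3DGS exports.
    opacity = 1.0 / (1.0 + np.exp(-np.asarray(verts["opacity"])))
    return positions, opacity

positions, opacity = load_splats("scene.ply")
print(f"{len(positions):,} splats, mean opacity {opacity.mean():.2f}")
```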
What computer hardware do I need to view these scenes in VR besides the headset?
Knut Nesheim: For the best experience on the Varjo headset, we recommend a high-end Nvidia RTX GPU: a 40-series card, or even a three-year-old 3090, is still great. That powers two 4K displays at 45fps or higher. That said, you don’t need a Varjo; you can use a Quest, which has lower-res displays and thus lower GPU requirements. Our adaptive level-of-detail engine adjusts splat count on the fly. We’re working to bring these high-quality scenes down-market so more users can run them on less powerful hardware.
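For a sense of why the GPU bar is high (my back-of-the-envelope numbers, not Varjo’s), a standard uncompressed 3DGS splat carries 59 float parameters: 3 for position, 48 for spherical-harmonics color, 1 for opacity, 3 for scale, and 4 for rotation. At fp32 that adds up quickly:

```python
# Back-of-the-envelope VRAM estimate for raw (uncompressed) 3DGS splats.
FLOATS_PER_SPLAT = 3 + 48 + 1 + 3 + 4   # position, SH color, opacity, scale, rotation
BYTES_PER_SPLAT = FLOATS_PER_SPLAT * 4  # fp32 -> 236 bytes per splat

for n in (250_000, 5_000_000):
    print(f"{n:>9,} splats ~ {n * BYTES_PER_SPLAT / 2**30:.2f} GiB")
# ~0.05 GiB for the low tier, ~1.10 GiB for the high tier,
# before the framebuffers for two 4K eye views.
```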
Could cloud streaming solve sharing heavy splat files for VR on lower-end devices?
Knut Nesheim: We explored streaming: back in the NeRF days we had fleets of servers rendering scenes, and we even built a VR cloud-streaming product. But streaming two 4K streams over the internet faces connectivity and latency challenges, such as Wi-Fi fluctuations and artifacts, plus the economics of renting powerful GPUs. We found that local rendering on a PC with a strong GPU actually gives a smoother, artifact-free experience. We’re eager to run more natively on Quest one day, but for now we prioritize quality and reliability over cloud streaming.
Have you worked on any commercial projects using this technology? Can you share examples?
Knut Nesheim: I can’t mention names, but one real example is an automated manufacturing line. Engineers need to know exactly where robot arms and attachments sit. Distances between welding arms and conveyors matter. They scan factory areas with iPhones or GoPros, upload to Teleport, and HQ can inspect layouts in VR for compliance or redesign. We also see construction sites: daily drone or handheld scans track progress for managers and clients, reducing delays and miscommunication without hiring 3D specialists.
Scale accuracy can be tricky in 3D reconstructions. How do you handle that?
Knut Nesheim: To get metric-accurate outputs, you need a reference in the input (like a known one-meter distance) or a manual scale-setting step. In Teleport we’re exploring ways to detect scale automatically, perhaps using depth sensors or ARKit data from mobile. But often workflows include a quick calibration: capture an object of known size first, and the rest of the scene scales accordingly. It’s an ongoing area of focus for robustness.
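The calibration Knut describes reduces to a single scale factor. A minimal sketch (illustrative helper, not Teleport’s API):

```python
import numpy as np

def apply_metric_scale(points: np.ndarray, ref_a: np.ndarray, ref_b: np.ndarray,
                       true_distance_m: float) -> np.ndarray:
    """Rescale a reconstruction so a known reference distance becomes metric.

    ref_a, ref_b: the two reference points as they appear in the
    arbitrary-scale reconstruction; true_distance_m: their real-world
    separation in meters.
    """
    measured = np.linalg.norm(ref_b - ref_a)
    return points * (true_distance_m / measured)

# Example: a 1 m ruler spans 0.42 units in the raw reconstruction,
# so every coordinate gets multiplied by 1 / 0.42 ~ 2.38.
```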
How do you balance pushing technical innovation with serving top-tier enterprise customers?
Knut Nesheim: We focus on solving hard problems that deliver real value: enterprises need precise, photorealistic reconstructions. Our secret sauce blends research innovation with scalable cloud services and product polish. We constantly iterate: experimenting with depth supervision, MCMC-based densification, visible-splat optimizers, and memory-robust implementations. Yet we also streamline the user experience with automated pipelines, mobile apps, and integrated VR so that professionals can onboard quickly and see ROI without wrestling with manual 3D workflows.
What do you see as the current outstanding challenges to make Splats a sustainable ecosystem and file format?
Knut Nesheim: If I could wish for one thing, it’d be that more traditional software (AutoCAD, SolidWorks, Revit, etc.) supported G-splatting natively or via plugins. That would unlock massive additional usage. We debate building Unity or Unreal SDKs ourselves, but it’s a lot of work without clear demand. Beyond that, expanding beyond pure reconstruction, like AI-driven scene completion, day-to-night transformations, or exporting to mesh formats, could broaden adoption. Ultimately, it’s less about technology and more about convincing large companies to integrate 3DGS into their platforms.
Do not forget to check out the full interview on your favourite platform 👇
That’s it for today
See you next week