In this interview, French interactive designer Lovis Odin breaks down how ComfyUI empowers artists to create complex visuals with near-total control. From hardware requirements to client work with Coca-Cola, Lovis shares his personal workflow, ethical insights, and practical advice for creatives looking to harness AI tools professionally.
Before getting into the interview, I wanted to quickly introduce you to today’s sponsor, Gracia AI. Gracia AI is the only app that lets you experience Gaussian Splatting volumetric videos on a standalone headset, either in VR or MR.
It is a truly impressive experience, and I recommend trying it out right now on your Quest or PC-powered headset.
Interview with Lovis Odin
What made ComfyUI such a compelling tool for you to start exploring creatively?
Lovis Odin: For me, the moment I started creating little workflows inside ComfyUI, especially with my own GPU, my own prompts, and sometimes my own LoRA models, it felt personal. It's very different from using Midjourney or other platforms. Here, I feel like I have real ownership. That kind of control and personal touch is super satisfying, especially as a creative. You’re not just generating something with AI, you’re building your own little engine that reflects your style and process.
What would you say is ComfyUI's biggest advantage over more mainstream tools like Runway or Midjourney?
Lovis Odin: ComfyUI is open source, and that's huge. Unlike tools like Runway or Midjourney, you can use any open-source model for images or video directly inside it. Sure, it's a bit more complex at first, but the control it gives you is on another level. You're not just prompting, you’re designing the full flow. And you’re not stuck paying subscription fees either. You can run it on your own machine, so the only cost is your electricity. That’s pretty empowering.
How simple is it for someone with limited tech skills to get ComfyUI up and running?
Lovis Odin: Honestly, it's a lot easier now than it used to be. You just go on GitHub, find the ComfyUI repo by Comfyanonymous, and download the package for Mac or PC. Even if you're not a developer, it's just a click-to-install thing now. You download the app, launch it, and you're good to go. They've really simplified the setup recently. It feels just like installing any regular app now, which is great for creatives who aren't super technical.
What kind of hardware do you need to get started with ComfyUI?
Lovis Odin: It depends on what you want to create. For images, you can get by with something like an NVIDIA RTX 2060, which is pretty entry-level. For video, though, it’s more demanding: you’ll want at least an RTX 4090. Macs can work too, especially newer ones with M2 or M3 chips, but video gen is tougher there since you can’t use NVIDIA GPUs. Most heavy users stick to PC for that reason. But again, image gen? Totally doable on much more modest setups.
Are there good options for people who don’t have a strong local machine but still want to use ComfyUI?
Lovis Odin: Definitely. There are services like Google Colab, RunPod, and Vast.ai that let you rent GPUs at really low cost, like 20 cents to a dollar per hour. I use them when I want to go heavy on a workflow. You can build and test locally, then scale up in the cloud. That’s actually a great way to work because you stay flexible. You don’t need to own a supercomputer to create complex things anymore.
Where does 3D generation fall in terms of hardware demands and readiness inside ComfyUI?
Lovis Odin: 3D is still a newer frontier with ComfyUI, and yeah, it’s demanding. You need decent GPU power like with video. But honestly, what’s really cool is how fast the community is evolving. There are models now that can generate stylised 3D assets or textures, and they’re getting better. It’s still early days, but I’ve seen people do incredible stuff already. Just expect to experiment a bit and be patient with render times.
Let’s talk workflows. How does ComfyUI integrate with other creative tools like Photoshop or Blender?
Lovis Odin: So for Blender, there are already add-ons that let you integrate ComfyUI for things like texture generation. But mostly, you build workflows in ComfyUI itself using nodes. It’s a bit like Unreal Engine’s blueprint system. You can do full editing inside, color, contrast, and even adding captions. For final polish, yeah, sometimes I still open Photoshop, but most of the pipeline can now happen directly in ComfyUI.
Can you give an example of a typical editing workflow that happens fully in ComfyUI?
Lovis Odin: Let’s say I generate an image, and I want to tweak the lighting, adjust colors, and maybe add some elements. I don’t need to export anything. I can add nodes to adjust exposure, do color grading, and even change parts of the background. It’s not as intuitive as dragging sliders in Photoshop, but once you understand it, it’s super powerful. And the best part is, you can replicate that same process across dozens of images with one click.
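The "replicate across dozens of images with one click" idea maps naturally onto the way ComfyUI represents a node graph as JSON. As a rough sketch (the node IDs, the `ColorCorrect` class type, and the field names here are simplified placeholders, not the exact ComfyUI schema), a script can clone one workflow template per input image:

```python
import copy

def build_batch(workflow: dict, image_files: list[str]) -> list[dict]:
    """Clone a ComfyUI-style workflow once per input image.

    Assumes node "1" is an image-loading node whose "image" input
    names the file to process (a simplification of the real schema).
    """
    jobs = []
    for name in image_files:
        job = copy.deepcopy(workflow)           # keep the template untouched
        job["1"]["inputs"]["image"] = name      # point the load node at this file
        jobs.append(job)
    return jobs

# A minimal hypothetical template: load an image, then adjust exposure.
template = {
    "1": {"class_type": "LoadImage", "inputs": {"image": ""}},
    "2": {"class_type": "ColorCorrect",
          "inputs": {"exposure": 0.3, "image": ["1", 0]}},
}

jobs = build_batch(template, ["shot_01.png", "shot_02.png", "shot_03.png"])
print(len(jobs))                        # 3
print(jobs[0]["1"]["inputs"]["image"])  # shot_01.png
```

Each resulting dict is the kind of payload you would queue against a running ComfyUI instance, so the same grade gets applied to every frame without touching the UI.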
Who do you think ComfyUI is really for?
Lovis Odin: That’s a great question. I think ComfyUI is for anyone who wants to have control. If you're just looking to prompt and get a quick image, sure, other tools are easier. But if you care about customization, if you're an illustrator, a designer, or a motion artist, ComfyUI lets you build exactly what you want. You can use your own sketches, your own videos, and remix them creatively. It's more technical, sure, but that also means more freedom.
You mentioned a big project you did with Coca-Cola. Can you walk us through that workflow?
Lovis Odin: That was a wild one. We started with a storyboard, generated mood boards with AI, and then worked with 2D animators to sketch out scenes. The sketches didn’t need to be super detailed; we used AI to interpolate between frames. Then we used ComfyUI to apply style, lighting, and effects. Some scenes had 3D elements, like a Coke can modeled in Blender, but ComfyUI handled the visual look. It helped us bring it all together with this sort of claymation vibe.
How did you find that project, and what advice would you give to others trying to get commercial work using AI?
Lovis Odin: Honestly, I was lucky to be in an agency that was open to experimenting. But I also spent a lot of personal time building workflows and sharing my results. That caught the attention of producers. If you're trying to get in, my advice is: start small, share your work, and don’t wait for permission. Try automating something mundane like removing backgrounds, and show how AI can save time. People start to notice when they see the value.
What would you recommend for someone who isn’t working in an agency but wants to freelance using ComfyUI?
Lovis Odin: So many people in this space started just experimenting on their own. My advice is: make stuff you like and post it. Share it on X, on LinkedIn, wherever. You’d be surprised how many clients reach out just because they saw a cool animation or workflow you posted. Join Discords like Banodoco. They’re full of artists and developers sharing workflows. Collaborate, learn, and your first client might just find you there.
You’re also part of Kartel AI. What’s the mission behind that?
Lovis Odin: Yeah, we started Kartel AI because we saw a gap: tons of amazing AI artists, and clients who had no idea how to find or work with them. It’s basically a curated marketplace where vetted artists can showcase their work and get hired by brands or studios. We review every profile, tag them by style and capability, and handle client communication. It’s a way to connect serious talent with serious projects, and hopefully, help people turn their AI passion into paid work.
Do not forget to check out the full interview on your favourite platform 👇
That’s it for today
See you next week