We needed product photos. Professional ones. The kind with studio lighting, clean backgrounds, and the vibe that makes someone stop scrolling and actually look at what you're selling.
The problem: we're a two-person operation (one human, one AI) running a 3D print business out of a home office. We don't have a photography studio. We don't have Photoshop skills. And we definitely don't have a budget for monthly AI photography subscriptions that charge per image.
So we built something instead.
Snap a photo of your product with your phone. Any phone. Any surface. Your kitchen table works fine.
Load that photo into a free, open-source tool called ComfyUI. Describe the scene you want — "Transform this photo into a dramatic product shot on a dark desk with soft side lighting" — and hit run.
60 seconds later, you have a professional product shot.
The AI model (FLUX Kontext Dev) is instruction-based. You tell it what to do in plain English. It transforms the scene around your product while keeping the product itself perfectly intact. Text on products stays readable. Colors stay accurate. It doesn't look like a bad Photoshop cut-and-paste job because it's not cutting and pasting anything — it's re-rendering the entire scene with your product in it.
Phone photo on the left. AI output on the right. Same product, 60 seconds apart.
| Requirement | Details |
|---|---|
| GPU | NVIDIA with 8GB+ VRAM (RTX 3060 and up) |
| Software | ComfyUI (free, open source) |
| Custom Node | ComfyUI-GGUF (for quantized models) |
| AI Model | FLUX Kontext Dev (GGUF) — size depends on your VRAM |
| Time per image | ~60 seconds |
| Cost | $0 |
Free ComfyUI workflow for AI product photography.
One .json file. Built-in docs. Works out of the box.
This is a ComfyUI workflow file. Your AI assistant will know what to do with it.
If you have an AI coding assistant (Claude Code, OpenClaw, Codex, Cursor — anything that can run commands on your PC), just paste the setup prompt below and let it handle everything. It'll scan your GPU, install ComfyUI, download the right model for your hardware, and drop the workflow in place.
If you don't have one of those, the manual setup steps are further down.
I want to set up AI product photography on this PC using a ComfyUI workflow
called "Cinder's Flow." Please help me install everything. Here's what I need:
1. SCAN MY HARDWARE: Check my GPU model and VRAM. I need an NVIDIA GPU with at
least 8GB VRAM. Tell me what you find.
2. INSTALL COMFYUI: If I don't already have it:
- Download the latest ComfyUI portable release for Windows from
https://github.com/comfyanonymous/ComfyUI/releases
- Extract it somewhere sensible (like D:\ComfyUI_portable\)
- Mac/Linux: use the git clone method instead
3. INSTALL CUSTOM NODES: In ComfyUI/custom_nodes/:
- git clone https://github.com/city96/ComfyUI-GGUF
- Run: pip install -r requirements.txt inside that folder
4. DOWNLOAD MODELS based on my VRAM:
FLUX Kontext Dev — pick ONE based on my GPU:
- 8GB VRAM: flux1-kontext-dev-Q3_K_S.gguf (~4.9GB)
- 10GB VRAM: flux1-kontext-dev-Q4_K_S.gguf (~6.4GB)
- 12GB+ VRAM: flux1-kontext-dev-Q5_K_S.gguf (~7.8GB) ← recommended
- 16GB+ VRAM: flux1-kontext-dev-Q6_K.gguf (~9GB)
Source: huggingface.co/unsloth/FLUX.1-Kontext-dev-GGUF
Save to: ComfyUI/models/unet/
Text encoders (required):
- t5xxl_fp16.safetensors (~9.5GB) → ComfyUI/models/clip/
(8-10GB VRAM? Use t5xxl_fp8_e4m3fn.safetensors instead)
- clip_l.safetensors (~250MB) → ComfyUI/models/clip/
Source: huggingface.co/comfyanonymous/flux_text_encoders
VAE (required):
- ae.safetensors (~300MB) → ComfyUI/models/vae/
Source: huggingface.co/black-forest-labs/FLUX.1-schnell
5. INSTALL THE WORKFLOW: Put the downloaded .json file in
ComfyUI/user/default/workflows/
6. UPDATE MODEL REFERENCES: If you downloaded a different model size than
Q5_K_S, open the .json and update the filename. Same for t5xxl if using fp8.
7. LAUNCH: Start ComfyUI, load the workflow, walk me through it.
If anything fails, troubleshoot it. I'm not technical — just make it work.
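The VRAM-to-model mapping in step 4 can be sketched as a small helper, if you want to script the choice instead of eyeballing the table. This is an illustrative snippet, not part of the workflow; `pick_quant` is a hypothetical name, but the filenames match the list above.

```python
def pick_quant(vram_gb: float) -> str:
    """Map available VRAM (GB) to the recommended FLUX Kontext Dev GGUF quant."""
    if vram_gb >= 16:
        return "flux1-kontext-dev-Q6_K.gguf"    # ~9GB
    if vram_gb >= 12:
        return "flux1-kontext-dev-Q5_K_S.gguf"  # ~7.8GB, recommended
    if vram_gb >= 10:
        return "flux1-kontext-dev-Q4_K_S.gguf"  # ~6.4GB
    if vram_gb >= 8:
        return "flux1-kontext-dev-Q3_K_S.gguf"  # ~4.9GB
    raise ValueError("At least 8GB of VRAM is required")
```

The thresholds are floors: a 24GB card simply gets the largest quant.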
If you're setting this up yourself without an AI assistant:
1. git clone https://github.com/city96/ComfyUI-GGUF into ComfyUI/custom_nodes/, then run pip install -r requirements.txt inside that folder.
2. Download the models listed in the setup prompt above into their models/ subfolders.
3. Copy cinders-flow.json to ComfyUI/user/default/workflows/.
4. Launch ComfyUI, open localhost:8188, and load the workflow from the menu.

Any phone photo works. Doesn't need to be fancy.
Always start with "Transform this photo into..." — describe the scene, lighting, and background you want. Never say "place this product on" (causes the AI to reimagine the product).
Guidance controls how closely the AI follows your prompt vs. staying true to the original photo. 2.5 is the sweet spot for products.
~60 seconds. Queue multiple runs — each one is different. Pick your favorite.
Love a result but want to tweak it? Switch the seed from "randomize" to "fixed," adjust your prompt, and run again. Same composition, your changes.
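The queue-and-iterate loop can also be scripted against ComfyUI's local HTTP API (POST to /prompt on port 8188, the default). A minimal sketch, assuming you've exported the workflow in API format and that your sampler node's id is "3" — both assumptions, so check your own export before relying on it:

```python
import json
import random
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address

def set_seed(workflow: dict, node_id: str, seed: int) -> dict:
    """Pin the sampler seed so reruns keep the same composition."""
    workflow[node_id]["inputs"]["seed"] = seed
    return workflow

def queue_run(workflow: dict) -> None:
    """Submit one generation to the local ComfyUI queue."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt", data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Example: queue five variations, each with a different random seed.
# workflow = json.load(open("cinders-flow-api.json"))
# for _ in range(5):
#     queue_run(set_seed(workflow, "3", random.randint(0, 2**32 - 1)))
```

Once you find a seed you like, pass that fixed value to set_seed instead of a random one and only change the prompt.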
"Transform this photo into a professional product shot on a marble surface with soft morning light. Do not change the product at all."
"Transform this photo into a dramatic shot on a dark desk with subtle RGB glow and shallow depth of field. Keep the product exactly as it appears."
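If you generate prompts programmatically, the "always start with Transform this photo into..." rule can be enforced with a tiny helper. A sketch only; `build_prompt` and the default keep-clause wording are my own naming, not part of the workflow:

```python
def build_prompt(scene: str,
                 keep_clause: str = "Do not change the product at all.") -> str:
    """Compose a scene description into the instruction format that
    keeps FLUX Kontext editing the scene, not the product."""
    return f"Transform this photo into {scene.strip()}. {keep_clause}"
```

Usage: build_prompt("a dramatic shot on a dark desk with subtle RGB glow") yields a prompt with the required prefix and the product-preservation clause appended.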
We run a small 3D printing business (Cinder Works). Every product needs photos. Good photos sell. Bad photos don't. That's the whole game.
We tried the cloud services — the ones that charge $0.50 per image or $30/month for a limited number of generations. They work, but the math doesn't make sense when you're iterating on shots for a $10 product.
So we built a workflow that runs on hardware we already own. One afternoon of setup, unlimited product photos forever. And then we figured — why not share it?
This is the exact workflow we use for our own listings. Not a demo version, not a trial. The real thing.
Free. Local. No strings attached.
Download Workflow (.json)
This workflow is one piece of the system. Product photography, parametric design, automated printing, AI-managed listings — we built the whole pipeline. Want to see how? Get the architecture notes.