ComfyUI for Architects
A hands-on workshop that introduces architects to the exciting world of generative AI tools. Perfect for beginners, this session provides a practical foundation in using powerful AI tools like Stable Diffusion and ComfyUI to enhance your architectural design workflow. You'll learn essential concepts of generative AI in architecture, get hands-on experience with Stable Diffusion and ComfyUI, and receive step-by-step guidance in creating your first AI workflows.
You'll leave with ready-to-use templates and workflows, a solid understanding of the fundamentals of generative AI tools, and the confidence to start implementing AI in your architectural practice. No prior AI experience is needed; the workshop is designed to be accessible, and we'll guide you through every step of the process.
- Set up ComfyUI, install checkpoints, and configure extensions using the Manager and a batch downloader script
- Build a text-to-image workflow from scratch using KSampler, CLIP text encoder, VAE decoder, and checkpoint loader nodes
- Write architectural prompts using a structured formula covering medium, subject, location, style references, and mood
- Control generation quality by tuning seed, steps, CFG scale, and sampler settings for both SDXL and Flux models
- Use ControlNet with Canny edge maps to turn hand sketches or Rhino screenshots into rendered architectural visualizations
- Combine Canny and depth map ControlNets in a multi-module workflow for more geometrically accurate outputs
- Apply IP Adapter reference images to transfer visual style, materials, or design language from a reference photo onto a generated output
- Use inpainting with a site photo, sketch, and mask to place a conceptual design into a real site context without 3D modeling
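The structured prompt formula from the list above (medium, subject, location, style references, mood) can be sketched as a small helper. The function name, field names, and the sample prompt below are illustrative assumptions, not the workshop's exact template:

```python
def build_prompt(medium, subject, location, style, mood):
    """Assemble an architectural prompt following the
    medium / subject / location / style / mood formula.
    Empty or missing fields are simply skipped."""
    parts = [medium, subject, location, style, mood]
    return ", ".join(p.strip() for p in parts if p and p.strip())

# Example positive prompt (values are placeholders):
prompt = build_prompt(
    medium="architectural photograph",
    subject="timber pavilion with a cantilevered roof",
    location="on a forested hillside",
    style="in the style of Scandinavian minimalism",
    mood="soft morning light, calm atmosphere",
)

# A matching negative prompt lists what to avoid:
negative = "blurry, low quality, distorted geometry"
```

Keeping the fields separate like this makes it easy to swap out one component (say, the mood) while holding the rest of the prompt constant between generations.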
Session 1: ComfyUI Fundamentals and ControlNet with Sketches
- Introduction to Stable Diffusion as an open-source alternative to Midjourney for architectural workflows
- The ComfyUI node-based interface: navigation, canvas, searching nodes, and connection logic
- AI models explained: checkpoints, the three-bucket model, and what checkpoint files contain
- Text prompt writing strategy: medium, subject, location, style, mood, positive vs. negative prompts
- KSampler parameters: seed, steps, CFG scale, sampler, scheduler, and latent image dimensions
- Flux model vs. SDXL differences, including the Flux Guidance node
- ControlNet workflow using Canny edge detection: loading sketches, preprocessing, strength parameters
- Image resizing automation and matching canvas dimensions to input
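The Session 1 node graph (checkpoint loader, CLIP text encoders, empty latent, KSampler, VAE decoder) can be written out in ComfyUI's API JSON format, where each node has a `class_type` and its `inputs`, and links are `[node_id, output_index]` pairs. This is a minimal sketch: the checkpoint filename, prompts, and parameter values are example placeholders:

```python
# A minimal text-to-image graph in ComfyUI API format.
# Checkpoint name, prompts, and KSampler values are illustrative only.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["1", 1],
                     "text": "architectural photograph, concrete museum, dusk"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",  # latent image dimensions
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",  # seed, steps, CFG, sampler, scheduler
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "workshop"}},
}
```

Fixing the `seed` while varying `steps` or `cfg` is the standard way to compare parameter settings on an otherwise identical image.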
Session 2: 3D Model Input, IP Adapter, and Inpainting
- Using Rhino viewport screenshots as ControlNet base images instead of hand sketches
- Multi-ControlNet workflows combining Canny edge detection and depth maps for geometric accuracy
- Live screen share node for real-time Rhino-to-ComfyUI connection (also works with Revit, SketchUp)
- IP Adapter module: uploading style references, weight parameter, style transfer modes
- Image-to-image generation: denoising values, VAE Encode, and controlling original image preservation
- Inpainting workflow: combining site photos, sketch overlays, and masks for site context insertion
- Creating image variations using VAE Encode with reduced denoise values and batch count
- Installation troubleshooting: resolving missing custom nodes via the Manager
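The image-to-image and variation topics above hinge on one change to the text-to-image graph: the latent comes from a VAE-encoded input picture instead of an empty latent, and KSampler's denoise drops below 1.0 so part of the original survives. A sketch in ComfyUI API format, with placeholder filenames and illustrative values:

```python
# Image-to-image in ComfyUI API format: VAEEncode replaces EmptyLatentImage.
# denoise = 1.0 ignores the input image; around 0.5 gives variations;
# around 0.2 gives subtle edits. All names and values here are examples.
img2img = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "rhino_screenshot.png"}},
    "3": {"class_type": "VAEEncode",  # pixels -> latent
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "rendered architectural visualization"}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, cartoon"}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0],
                     "negative": ["5", 0], "latent_image": ["3", 0],
                     "seed": 7, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.55}},  # < 1.0 preserves the source image
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}
```

Raising the batch count on the same graph while keeping denoise fixed is how the variation workflow produces several candidates from one source image.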
Ömer Nuray
Ömer Nuray is an architect, content creator, and founder of Design Input Studio. Through his studio, he creates content aimed at integrating new technologies such as AI into architecture and the design process. His interest in computational design and automation led him to the world of artificial intelligence.
As an AI Architect at Rendair AI, he is involved in developing an AI-powered architecture visualization solution. Ömer shares his expertise and journey through his online presence, reaching and inspiring architects and designers worldwide. He provides guidance on integrating AI into the design process through platforms such as his YouTube channel, social media and Design Input Academy.