How to use ControlNet models with Stable Diffusion

What is happening under the hood of the generative models that let us import an image and get results back? Some of them use Stable Diffusion with ControlNet, a neural network structure that controls diffusion models by adding extra conditions.
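
If you want to try this workflow yourself, the sketch below shows how such a pipeline is typically assembled with the open-source diffusers library and a public Canny ControlNet checkpoint. This is a minimal sketch, not the exact setup used in the video, and the checkpoint names are the commonly used public ones.

```python
# Minimal sketch: a Stable Diffusion pipeline conditioned by a ControlNet.
# Assumes the Hugging Face `diffusers` library and the public ControlNet 1.0
# Canny checkpoint; the video's own tooling may differ.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```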

  1. A guide through the image generation report
  • First example
  • Base Generative AI images used
  • ControlNet models examples and results
  • The future: More customized ControlNet models
  2. Conclusions
  3. Alternatives to Stable Diffusion and ControlNet

Watch the full video by FofrAI on YouTube

A guide through the image generation report

FofrAI prepared an excellent guide to the models currently available for ControlNet, along with information about their pre-processors and examples of their outputs.

First example

In this series, you can see the three basic steps to go from an image to an abstraction to a new image.

From an image to an abstraction to a new image by FofrAI
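
In code, those same three steps look roughly like the sketch below, continuing with the pipeline loaded in the earlier snippet: load a source image, reduce it to an abstraction (here a Canny edge map), and generate a new image from it. The file name and prompt are placeholders.

```python
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

# Step 1: the source image (placeholder path).
source = load_image("source_render.png")

# Step 2: the abstraction, here a Canny edge map of the source.
edges = cv2.Canny(np.array(source), 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Step 3: a new image generated from that abstraction.
# `pipe` is the StableDiffusionControlNetPipeline built in the earlier sketch.
result = pipe(
    "a bright, minimalist Japanese interior, photorealistic",
    image=edge_image,
    num_inference_steps=30,
).images[0]
result.save("new_image.png")
```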

Base Generative AI images used

Starting from two AI-generated renderings (a Japanese architectural space and a picture of a couple), the video guides us through different ControlNet usages and models, with great examples of the process and the results.

ControlNet usages and models by FofrAI

ControlNet models examples and results

The video also gives more information about specific models. The models currently showcased are listed below; each one can be swapped into the same pipeline, as sketched after the list:

  1. Canny
  2. Depth
  3. OpenPose
  4. Fake scribble
  5. Segmentation
  6. Normal map
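
For reference, each of these has a publicly released checkpoint on the Hugging Face Hub (the ControlNet 1.0 family shown below; newer 1.1 equivalents also exist), so swapping models is just a matter of loading a different conditioning network. A rough sketch:

```python
from diffusers import ControlNetModel

# Public ControlNet 1.0 checkpoints matching the models showcased above.
CONTROLNET_CHECKPOINTS = {
    "canny": "lllyasviel/sd-controlnet-canny",
    "depth": "lllyasviel/sd-controlnet-depth",
    "openpose": "lllyasviel/sd-controlnet-openpose",
    "scribble": "lllyasviel/sd-controlnet-scribble",  # also used with fake-scribble inputs
    "segmentation": "lllyasviel/sd-controlnet-seg",
    "normal_map": "lllyasviel/sd-controlnet-normal",
}

def load_controlnet(name: str) -> ControlNetModel:
    """Load a conditioning model by name; the rest of the pipeline stays the same."""
    return ControlNetModel.from_pretrained(CONTROLNET_CHECKPOINTS[name])
```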

ControlNet models examples by FofrAI

If you are using Stable Diffusion directly, without an external tool like RenderAI, make sure you understand how each model differs.

Take the comparison between Canny and Scribble as an example. The Canny model keeps most of the straight lines, while Scribble models turn the original image into a sketch. That sketch is then used as the base for the new image, which produces a less realistic space.

Another case is camera distortion with Segmentation. Because segmentation reduces the image to a map of colored regions, you can sometimes end up with a fish-eye result even though the source image had straight perspective.

Comparison between Canny and Scribble by FofrAI
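
To see that difference for yourself, you can run the pre-processors side by side. The sketch below assumes the community controlnet_aux package (class and checkpoint names may vary between versions); HED is the soft edge detector behind the fake-scribble input, and the image path is a placeholder.

```python
from controlnet_aux import CannyDetector, HEDdetector
from diffusers.utils import load_image

source = load_image("source_render.png")  # placeholder path

# Canny keeps hard edges, so straight architectural lines survive.
canny_map = CannyDetector()(source)

# HED produces the softer, sketch-like boundaries used for (fake) scribble
# conditioning, which is why the regenerated space looks less rigid.
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
scribble_map = hed(source)
```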

The future: More customized ControlNet models

Anyone can train their own ControlNet models, so more stunning tools are expected to be available in the future.

Conclusions

In this article, we went through a detailed guide to ControlNet models, with examples. If you want to learn more about the video's author and the images, take a look at FofrAI's Twitter account or YouTube channel.

Alternatives to Stable Diffusion and ControlNet

There are several tools that make Generative AI creation accessible. One of them is RenderAI.app

RenderAI.app is an innovative tool that provides users with the opportunity to engage in Generative AI creation. With its user-friendly interface and accessible features, it enables individuals from various backgrounds to explore their creativity and generate visually striking and imaginative content.
