Learn how to use Inpainting and Control Net techniques to reimagine a photo, and use ComputeText to describe the results with a language model.


The four parts of the guide cover:
- Inpainting: Use StableDiffusionXLInpaint to generate variations of a photo of a room in different styles.
- Control Net – Edge Detection: Use StableDiffusionXLControlNet with the edge method to generate variations structured by the edges of the original image.
- Control Net – Depth Detection: Use StableDiffusionXLControlNet with the depth method to generate variations structured by a depth map of the original image.
- Describing images: Use ComputeText to describe the generated images.
First, initialize Substrate:
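A minimal initialization sketch, assuming the substrate Python package is installed and your API key is available in the SUBSTRATE_API_KEY environment variable:

```python
import os

from substrate import Substrate

# Reads the API key from the environment; you can also pass it directly.
substrate = Substrate(api_key=os.environ["SUBSTRATE_API_KEY"])
```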
1. Inpainting
Let's try generating variations of the room using StableDiffusionXLInpaint.
- This node can also be used to inpaint the masked part of an image if a mask_image_uri is provided. Here, we'll inpaint the entire image.
- The strength parameter controls the strength of the generation process over the original image. Higher strength values produce images that are further from the original.
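The steps above can be sketched as follows. This is a minimal example based on the SDK call shape shown earlier; the room photo URL and style prompts are placeholders, not part of the original guide:

```python
import os

from substrate import Substrate, StableDiffusionXLInpaint

substrate = Substrate(api_key=os.environ["SUBSTRATE_API_KEY"])

styles = ["sunlit scandinavian interior", "moody art deco interior"]
inpaints = [
    StableDiffusionXLInpaint(
        image_uri="https://example.com/room.jpg",  # placeholder: your room photo
        prompt=f"{style}, high resolution photograph",
        # No mask_image_uri, so the entire image is regenerated.
        strength=0.75,  # higher values stray further from the original
        num_images=2,
        store="hosted",
    )
    for style in styles
]
res = substrate.run(*inpaints)
```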


At this strength value, some of the character of the original is preserved in the variations, but they're quite different.
We could also use the high-level InpaintImage node, which edits an image using image generation inside part of the image or the full image. You should use high-level nodes if you want your node to automatically update to the latest, best model.
2. Control Net – Edge Detection
Let's try using StableDiffusionXLControlNet with the edge method, which processes the original image with an edge detection algorithm and uses the edges to structure generation.
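A hedged sketch of this step, following the call shape from the overview; the image URL and prompt are placeholders:

```python
import os

from substrate import Substrate, StableDiffusionXLControlNet

substrate = Substrate(api_key=os.environ["SUBSTRATE_API_KEY"])

# control_method="edge" runs edge detection on the input image and
# uses the detected edges to structure the generated variations.
edge = StableDiffusionXLControlNet(
    image_uri="https://example.com/room.jpg",  # placeholder: your room photo
    control_method="edge",
    prompt="a bright bohemian living room, high resolution photograph",
    conditioning_scale=1.0,
    num_images=2,
    store="hosted",
)
res = substrate.run(edge)
for out in res.get(edge).outputs:
    print(out.image_uri)
```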


3. Control Net – Depth Detection
Let's try using StableDiffusionXLControlNet with the depth method, which processes the original image with a depth detection algorithm and uses the depth map to structure generation.
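The depth variant only changes the control_method; as before, the image URL and prompt here are placeholders, not values from the original guide:

```python
import os

from substrate import Substrate, StableDiffusionXLControlNet

substrate = Substrate(api_key=os.environ["SUBSTRATE_API_KEY"])

# control_method="depth" estimates a depth map of the input image and
# uses it to preserve the room's spatial layout in the variations.
depth = StableDiffusionXLControlNet(
    image_uri="https://example.com/room.jpg",  # placeholder: your room photo
    control_method="depth",
    prompt="a warm mid-century modern living room, high resolution photograph",
    conditioning_scale=1.0,
    num_images=2,
    store="hosted",
)
res = substrate.run(depth)
for out in res.get(depth).outputs:
    print(out.image_uri)
```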
4. Describing images
We can describe the content of the images using ComputeText. We run the pipeline by calling substrate.run with the terminal nodes, summaries.
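A sketch of wiring a description step onto a generation node. This assumes ComputeText accepts an image_uris parameter for vision-capable models and that future references (node.future) pipe one node's output into another — check the API reference; the image URL and prompt are placeholders:

```python
import os

from substrate import ComputeText, StableDiffusionXLControlNet, Substrate

substrate = Substrate(api_key=os.environ["SUBSTRATE_API_KEY"])

variation = StableDiffusionXLControlNet(
    image_uri="https://example.com/room.jpg",  # placeholder: your room photo
    control_method="edge",
    prompt="a bright bohemian living room, high resolution photograph",
    num_images=1,
    store="hosted",
)

# Assumption: image_uris routes the generated image to a vision model.
summary = ComputeText(
    prompt="Describe the interior design of this room in two sentences.",
    image_uris=[variation.future.outputs[0].image_uri],
    max_tokens=120,
)

# Running the terminal node also runs its upstream dependencies.
res = substrate.run(summary)
print(res.get(summary).text)
```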

The living room boasts a spacious design with a large window allowing natural light and a cozy couch. Unique is the open-concept office area, separated by a partial wall, offering a functional workspace while maintaining unity, enhanced by plants, lamps, and a rug in the modern, minimalist decor.

The image features a contemporary office with a captivating pink and purple color scheme: vibrant pink walls instill energy, while elegant purple furniture adds sophistication.