Using M.H. to control a text-to-image AI

Postby green_tomato » Sat Jul 08, 2023 10:44 am

(Attachment: doe.jpg)
It's possible to use MakeHuman to control Stable Diffusion through ControlNet's "depth" mode: pose the character, then render out the depth buffer (the "Z buffer", as the depth information is often called) using Blender or a similar tool.
This produces a grayscale image where close objects are white and far-away areas are dark, which is the convention ControlNet's depth mode expects.
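For the Stable Diffusion side, a minimal sketch using Hugging Face's diffusers library could look like the following. The model IDs, file name and prompt are only examples of my own, not anything prescribed by this workflow:

import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# The depth map rendered from the MakeHuman/Blender scene
# (white = near, black = far).
depth_map = Image.open("makehuman_depth.png").convert("RGB")

# Publicly available SD 1.5 + depth-ControlNet checkpoints; swap in your own.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompt controls the style; the depth map pins down pose and framing.
result = pipe(
    "portrait of a woman, semirealistic, studio lighting",
    image=depth_map,
    num_inference_steps=30,
).images[0]
result.save("semirealistic.png")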
Then you can use the AI's usual tools to control the style. Here are some examples:
Attachments
semirealistic_3.jpg
semirealistic_2.jpg
semirealistic_1.jpg
anime_rough.jpg
anime_ghibliesque.jpg
anime_retro.jpg
green_tomato
 
Posts: 28
Joined: Sat Oct 29, 2022 10:43 am

Re: Using M.H. to control a text-to-image AI

Postby joepal » Sat Jul 08, 2023 10:52 am

That's actually pretty interesting.

I hadn't thought of using MH as input data for AI.
Joel Palmius (LinkedIn)
MakeHuman Infrastructure Manager
http://www.palmius.com/joel
joepal
 
Posts: 4474
Joined: Wed Jun 04, 2008 11:20 am

Re: Using M.H. to control a text-to-image AI

Postby joepal » Sat Jul 08, 2023 11:40 am

An approximation of the same idea directly in Blender: create a "camera distance material" that shades every surface by its distance from the camera.

camera_distance_1.png

camera_distance_2.png
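
In script form, such a material could be set up roughly like this (a sketch only: node and socket names follow recent Blender versions, and the 10.0-unit range plus the assign-to-every-mesh loop are arbitrary choices of mine):

import bpy

mat = bpy.data.materials.new("camera_distance")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

cam_data = nodes.new("ShaderNodeCameraData")    # per-pixel camera info
map_range = nodes.new("ShaderNodeMapRange")     # distance -> 0..1 grey
emission = nodes.new("ShaderNodeEmission")      # unlit, flat output
output = nodes.new("ShaderNodeOutputMaterial")

# Map camera distances of 0..10 units to white..black (adjust 10.0 to the
# depth of your scene). The range is flipped so that near = white, matching
# the depth convention described above.
map_range.inputs["From Min"].default_value = 0.0
map_range.inputs["From Max"].default_value = 10.0
map_range.inputs["To Min"].default_value = 1.0
map_range.inputs["To Max"].default_value = 0.0

links.new(cam_data.outputs["View Distance"], map_range.inputs["Value"])
links.new(map_range.outputs["Result"], emission.inputs["Color"])
links.new(emission.outputs["Emission"], output.inputs["Surface"])

# Assign the material to every mesh in the scene.
for obj in bpy.data.objects:
    if obj.type == "MESH":
        obj.data.materials.clear()
        obj.data.materials.append(mat)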
Joel Palmius (LinkedIn)
MakeHuman Infrastructure Manager
http://www.palmius.com/joel
joepal
 
Posts: 4474
Joined: Wed Jun 04, 2008 11:20 am

Re: Using M.H. to control a text-to-image AI

Postby green_tomato » Sat Jul 08, 2023 12:02 pm

Oh, neat trick. I just used the Compositor to output the "Depth" render pass, like this:
Attachments
render_depth_buffer.png
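
In script form, that compositor setup would be roughly the following (an approximation of the screenshot above; the Normalize and Invert nodes squash the raw Z values into a 0..1 white-near/black-far image):

import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_z = True   # enable the "Depth" pass
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rlayers = tree.nodes.new("CompositorNodeRLayers")
normalize = tree.nodes.new("CompositorNodeNormalize")  # raw Z -> 0..1
invert = tree.nodes.new("CompositorNodeInvert")        # near = white
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rlayers.outputs["Depth"], normalize.inputs["Value"])
tree.links.new(normalize.outputs["Value"], invert.inputs["Color"])
tree.links.new(invert.outputs["Color"], composite.inputs["Image"])

Rendering as usual then writes the inverted depth map to the normal output path.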
green_tomato
 
Posts: 28
Joined: Sat Oct 29, 2022 10:43 am

