Recruiting for MakeHuman's Future!

Locked forum where the devs and staff post news about the MakeHuman software

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Tue Jan 07, 2025 9:05 pm

Hi

Happy New Year to all :D Mine already started successfully with a nasty cold, which reduces brain capacity when programming complicated stuff :shock:

Yes, the channel order in my version is not yet flexible; it still uses the Blender standard, while the output in old MakeHuman is XYZ for both location and rotation.
You get a message when reading a file with a different order.
I am aware of it and will fix it once other, more important things are ready to go.

Meanwhile I managed to get animations back into Blender using the BVH lines from my MakeHuman version. So the Blender exporter is nearly complete (still no face expressions, but I use a different concept for those).

BVH: One needs additional information for that. At least the rest pose must be given. Then the so-called rest matrix of each bone must be decomposed (we are in global space) and used as a correction matrix.
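A minimal sketch of how such a correction matrix could be collected (assuming the Blender Python API; the object name is made up, and this is not the actual importer code):

Code: Select all
import bpy

# one correction matrix per bone, taken from the rest pose
arm = bpy.data.objects["Armature"]          # hypothetical armature object
bonecorr = {}
for bone in arm.data.bones:
    # rest-pose rotation in armature (global) space; the translation part is dropped
    bonecorr[bone.name] = bone.matrix_local.to_3x3()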

I have to admit that I needed to study the mhx2 importer by Thomas Larsson for that. He first calculates a correction matrix.
This correction matrix is then used to compute the animation in Blender for each bone. Here is the code for the bone rotations. I convert to Euler angles and do not use quaternions, because I wanted to allow 360° rotations like a pirouette. (Maybe I will offer quaternions again as an option in the importer, but NOT for the root bone.)

Code: Select all
from math import radians
from mathutils import Euler

# m[6..8]: rotation angles in degrees from the BVH motion line (ZYX order)
euler = Euler((radians(m[6]), radians(m[7]), radians(m[8])), 'ZYX')
if name in self.bonecorr:
    cmat = self.bonecorr[name]
    # change of basis: map the global-space rotation into the bone's local space
    mat = cmat.inverted() @ euler.to_matrix() @ cmat
    rot = mat.to_euler()
    pbone.rotation_mode = 'XYZ'
    pbone.rotation_euler = (rot.x, rot.y, rot.z)


glTF is next. There the animation uses buffers per bone (translation and scale as VEC3, rotation as quaternions in VEC4). One or more input buffers hold the time frames, and the values per bone ("channel") go into an output buffer; such a buffer pair is called a sampler. For a classical "step animation" (or baked animation) the input buffer holds equidistant time values at 1/24 s, which is what Blender creates. This might become configurable, but game engines usually allow changing the speed anyway. The output is then a lot of small binary buffers. I will not add scaling in the beginning, but it might be supported later; at the moment we have no jiggle bones, and even dislocation (translation) is not used except for the position of the root bone.
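For illustration, a minimal sketch of such a sampler/channel pair as it would appear in the glTF JSON, written here as a Python dict (the accessor and node indices are made up):

Code: Select all
# one rotation channel for one bone; indices point into the accessor/node arrays
animation = {
    "samplers": [
        {"input": 10,             # accessor: SCALAR float times (1/24 s steps)
         "output": 11,            # accessor: VEC4 quaternions, one per keyframe
         "interpolation": "STEP"}
    ],
    "channels": [
        {"sampler": 0,
         "target": {"node": 5, "path": "rotation"}}   # the bone's node index
    ]
}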

As usual I will struggle with the correct values for the quaternions ... :?

In between I wrote/improved a glb-file (binary glTF) analyzer. Maybe I will also add this tool to our repo, or contact the person who wrote the original; so far it had no support for skeletons and animations. Maybe it helps other people as well.

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Sun Jan 26, 2025 7:51 pm

socketcom.png
first steps with socket communicator


Meanwhile I created a socket communication. Internally it works exactly the same way as the mh2b format; there is no difference between reading a file and reading a byte stream ;)
The character is transported using two calls: one for a JSON description and one for the binary buffers (the concept glTF uses as well).

Here is the code, which works for both binary connections and files:
Code: Select all
if isinstance(inputp, bytearray):
    # byte stream received over the socket: slice the next chunk out of it
    buf = inputp[self.bufferoffset:self.bufferoffset+length]
else:
    # regular file object: just read the next chunk
    buf = inputp.read(length)
self.bufferoffset += length
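On the receiving side something like this is needed, because a single recv() may return less than was requested. This is only a hypothetical sketch; the function name and error handling are not from the actual code:

Code: Select all
import socket

def recv_exact(sock, length):
    """Read exactly `length` bytes from a socket."""
    chunks = bytearray()
    while len(chunks) < length:
        data = sock.recv(length - len(chunks))
        if not data:
            raise ConnectionError("socket closed before all data arrived")
        chunks += data
    return chunks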


The transport for animations is still buggy; I just learnt that it does not work for hidden vertices ... and Noelia jumps up and down. Bad floor calculation. But that is typical in the beginning :roll:

Greetings
Punkduck
User avatar
punkduck
 
Posts: 1255
Joined: Mon Oct 17, 2016 7:24 pm
Location: Nuremberg, Germany

Re: Recruiting for MakeHuman's Future!

Postby RobBaer » Mon Jan 27, 2025 11:35 pm

Very nice. Your image also gives us some sense of how your environmental lighting is working, I guess. It didn't seem to move into Blender. Intentional?

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Sun Feb 02, 2025 8:57 pm

Hi

At the moment the socket is only a second way to import the character. There will be a few more features, but not everything makes sense. For example, one can import different characters or replace an existing one, so a "keep in sync" would not always be what is wanted. There is also the problem of defining where the textures should end up. When one saves something from Blender, paths should be relative, etc.
Blender Eevee should give similar results, yes.

In between I did the measurement targets, which create a visual representation inside MakeHuman. It is an OpenGL overlay of lines, always drawn in the foreground. The vertices are now defined in the target description and are no longer inside the program code. I also found some unused code to calculate cup sizes, so I decided to generate a character info panel with a bunch of values, as shown in the image below. As long as you have a female older than 8 (that is what I found on the internet, although usually it starts at 11 to 14), cup sizes are calculated. Sizes range from AA to O; the last one would be boobs-on-a-stick, because of the bust/underbust ratio calculation (a sketch of such a mapping follows after the image). Today I started with randomization.

measurement.png
measurement and information
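As a rough illustration of how a bust/underbust difference could map to a cup letter, here is a hypothetical sketch using the common 2 cm-per-cup convention; the thresholds and the cup table are assumptions, not the actual code:

Code: Select all
# hypothetical mapping from bust/underbust difference (in cm) to a cup letter
CUPS = ["AA", "A", "B", "C", "D", "DD", "E", "F", "G", "H",
        "I", "J", "K", "L", "M", "N", "O"]

def cup_size(bust_cm, underbust_cm):
    diff = bust_cm - underbust_cm
    index = int((diff - 10.0) // 2.0)         # AA assumed to start near 10 cm
    index = max(0, min(index, len(CUPS) - 1))
    return CUPS[index]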


After randomization, reading BVH files with different channel orders will be tested. At the moment brain capacity is limited to the easier stuff. :?

Greetings, Punkduck

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Sun Feb 09, 2025 8:14 pm

Hi

Meanwhile I discovered that realistic cup sizes are only calculated when the breast-distance slider is around -50 ... otherwise mostly DD-F models appear :mrgreen:

Anyway, I meanwhile experimented with the random generator. One can really create creepy characters, like the one on the left side. With symmetry some look rather cute. And when I change them only a little, set the ideal factor (which controls proportions) to a higher value, start from a character (here 100% female, breast size +50%) and do NO reset to standard, the results seem quite okay (a sketch of this blending follows after the image).

weirdos1.png
from weirdos to non-weirdos
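The blending could look roughly like this; a hypothetical sketch, since the parameter names and the exact formula are my own illustration, not the actual code:

Code: Select all
import numpy as np

rng = np.random.default_rng()

# a small random offset, damped toward the "ideal" (proportions) value
def randomize_value(current, ideal, weirdness=0.15, idealfactor=0.75):
    offset = rng.uniform(-1.0, 1.0) * weirdness
    return (1.0 - idealfactor) * (current + offset) + idealfactor * ideal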


So without a reset one can now start from an existing character. I loaded Leska and dressed her in elvs' nice undies.
I changed her by approx. 15%, with 75% ideal, 90% symmetry and "female only". Similar characters appear.

weirdos2.png
changing an existent character


The last test uses the same values, face only.

weirdos3.png
change the face only


The idea would be to call these functions via socket from Blender as well. I think this is more helpful than the old randomizer.

(Most of the rules (like pregnancy only for females) and which groups are changed are configured in the base.json file, to allow similar features with other names on different characters/meshes in the future.)

Punkduck

Re: Recruiting for MakeHuman's Future!

Postby tomcat » Mon Feb 10, 2025 2:04 pm

Excellent work!
punkduck wrote: Anyway, I meanwhile experimented with the random generator.

Is it possible to optionally enable random generation using a Gaussian distribution instead of a uniform one?

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Tue Feb 11, 2025 11:10 pm

Hi,

The idea is pretty good; I could add that as an option. At the moment I use the numpy random generator. It also has a function called random.normal, which is the Gaussian one.

https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html

The random() function itself creates numbers between 0 and 1. How to get the same range with Gauss? Well, the middle (called loc in the numpy docs) would be at 0.5.

But: it is a function providing values from minus infinity to plus infinity, although large negative and positive values are nearly impossible. Now it is more or less trial and error which standard deviation (the second parameter, called scale) works best. With 0.25 I got a value outside the interval after e.g. 10 tries; with 0.2 I got one after maybe 100. So the probabilistic question is: how many values do I have to draw until I cross one of the borders, no matter what the scale is? ;)

So with the standard deviation I can set the width of the curve, although the word width itself is of course mathematical nonsense, because the support is infinite. The solution is to enforce [0 <= x <= 1] with an if statement. This sounds like "is there no better idea?", but no, this approach is quite common in stochastics, and there is even a name for it:

https://en.wikipedia.org/wiki/Truncated_normal_distribution
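A minimal sketch of that rejection approach, with the loc and scale values discussed above (the function name is made up):

Code: Select all
import numpy as np

rng = np.random.default_rng()

# draw from a normal distribution, redraw until the value falls inside [0, 1]
def truncated_normal(loc=0.5, scale=0.2, low=0.0, high=1.0):
    while True:
        x = rng.normal(loc, scale)
        if low <= x <= high:
            return x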

It took me some time ... my exam at university was in the last century, but I remembered that the Gauss function is defined everywhere, although mostly nearly zero (but never exactly zero) :ugeek:

So with a normal distribution one could produce even bigger weirdos than before, at least in theory. I will try that with truncation, and the standard deviation could be calculated from my "weirdo factor" ... ;)
I guess that might work.

Re: Recruiting for MakeHuman's Future!

Postby RobBaer » Tue Feb 11, 2025 11:59 pm

So, there are an infinite number of normal distributions that are ALL bell-shaped, each with a mean and a standard deviation. However, there is one normal distribution, called the "standard normal distribution", that is defined as having a mean of zero and a standard deviation of 1.0. The x-axis of a standard normal distribution is usually called a z-score, and the y-axis is probability density. That is, one standard deviation below the mean is z = -1 and one standard deviation above the mean is z = +1. As you say, a z-score can vary between minus infinity and plus infinity, but the most common values are near the mean of zero. The area under the whole curve, from z equals minus infinity to z equals plus infinity, is 1.0. The area between any two z-scores can be interpreted as the probability of getting a z-value within that range.

If you draw randomly from the standard normal, 68% of your z-scores will be between -1 and +1. Interestingly, 95% of your z-scores will be between -1.96 and +1.96, and 99% of your z-values will be between -2.576 and +2.576. The bell shape means that most of the time a deviation based on a random z-value will not move a slider much (assuming appropriate clipping and normalization), but on occasion it could move a slider much farther, with the lower probability of getting a z-score that extreme. We could use our knowledge of the probability of getting any given z-score to define a reasonable clipping range. If a value falls beyond the clipping range, you would just recursively generate a new number inside the clipping range to return as the random z-score driving the slider movement. Small slider movements would be common, but more extreme movements would be rare. Of course, these random z-score numbers could be rescaled to a [0, 1] or [-1, 1] interval, as appropriate for a given slider.
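A minimal sketch of that idea, clipping at the 99% bound and rescaling to [-1, 1] (the function name and defaults are illustrative):

Code: Select all
import numpy as np

rng = np.random.default_rng()

def random_slider_offset(clip=2.576):
    z = rng.normal(0.0, 1.0)
    while abs(z) > clip:       # redraw the rare values outside the 99% range
        z = rng.normal(0.0, 1.0)
    return z / clip            # rescale to [-1, 1]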

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Mon Feb 24, 2025 6:27 pm

Hi

Looks like I am not the only one dealing with math. :mrgreen:

It is not yet implemented, although it should not be a big deal. Meanwhile I tried a few other things, e.g. using Blender as a remote tool to change values; I guess this was already done, judging from my look at MPFB 1. So, as an example, I added a function to randomize the character from Blender. I don't know how much I should implement, but so far this was done as a test of driving MakeHuman via the API by "remote control".

Now I am trying to implement a normal map algorithm with a geometry shader for the tangents. Since I demand at least OpenGL 3.2, it should work. Old MakeHuman calculated that in Python beforehand. Of course it is not yet running, and the combination of all the shaders in the end will not be that simple. Next could be reflections of the environment map, maybe shadow buffers for self-shadowing, and materials with light. It is a lot of work: trying to understand pyrender and other tools where these methods are used, the OpenGL book, etc.

I also wondered why the metal of my PBR shader never worked. While testing the normal map algorithm, I checked where I used the "camera position" mentioned in that algorithm.

Well, my viewpos is the camera position, so I have the value available. And so I found out that I always initialized (bound, found the location of) that uniform, which was correct ... and then simply forgot to set the value. That means the camera position was at (0,0,0) for the light calculation.
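In PyOpenGL terms the fix boils down to something like this (a sketch; the program handle, the uniform name "viewPos" and the helper function are assumptions):

Code: Select all
from OpenGL.GL import glGetUniformLocation, glUniform3f, glUseProgram

def set_view_pos(program, camera_position):
    # finding the uniform location alone is not enough ...
    loc = glGetUniformLocation(program, "viewPos")
    glUseProgram(program)
    # ... the value must actually be uploaded, otherwise it stays at (0,0,0)
    glUniform3f(loc, *camera_position)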

(0,0,0) is inside the character approx. where the butt is :lol:

So it looks much better now:

metaleffects.png
metal rulez

Re: Recruiting for MakeHuman's Future!

Postby punkduck » Tue Mar 11, 2025 10:35 pm

Hi
At the moment I am creating an asset downloader for single assets for the new version (there is an asset pack option as well, but it is partly disabled so as not to ruin existing assets).

The downloader itself tries to work with only the file assets.json ... and since that file is loaded into a download folder of MakeHuman itself, it is easy to understand what it does. This is more or less the same as in the old version.
I try to avoid too much internet traffic, so no images are loaded without demand: one can press the camera button, which loads the thumbnail only when needed (a sketch of that follows below). One can even copy the URL or title into the name field for the download, because when I browse the www page while using MakeHuman and become interested in an asset, that is more or less the simplest way, without searching in assets.json. Well, there are a few exceptions where that does NOT work, but mostly it does.
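The on-demand thumbnail loading could look roughly like this; a hypothetical sketch, since the field names in assets.json and the function are my own illustration:

Code: Select all
import os
import urllib.request

def load_thumbnail(asset, cache_dir):
    """Download an asset's thumbnail only when the user asks for it."""
    dest = os.path.join(cache_dir, asset["name"] + ".png")   # field names assumed
    if not os.path.exists(dest):
        urllib.request.urlretrieve(asset["thumbnail"], dest)
    return dest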

At least one can always select the asset from the list; that already seems to work for most assets.

There are still 2 exceptions:
  • Targets go to the user target folder correctly, and a subfolder for the category is created or used, but the category and target JSON files are not yet created (after a restart it works). Deleting a user target also creates problems. This I need to add anyway.
  • For materials it is simply not yet implemented. I hope the program will find the destination folder. The repository is in an SQLite database, so it could be easy if the names/titles match.

At the moment it is limited to specific file types like "diffuse" etc. Maybe I have to add some others. Maybe I will also create metallic-roughness maps from existing single maps (a sketch of that follows below) ...
I also need to avoid "huge thumbnails" (maybe resize them before putting them into the file system, to shorten the time needed to build the asset page). For the worst case I need a manual way of choosing where the files go. I guess I will also add search patterns on top of the columns. These are the next ideas.
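Packing such a map is simple with Pillow; a minimal sketch following the glTF convention (roughness in the green channel, metallic in the blue channel), with the function name being my own:

Code: Select all
from PIL import Image

def pack_metal_rough(metal_path, rough_path, out_path):
    """Combine separate metallic and roughness maps into one glTF-style map."""
    metal = Image.open(metal_path).convert("L")
    rough = Image.open(rough_path).convert("L").resize(metal.size)
    unused = Image.new("L", metal.size, 0)                  # red channel stays empty
    Image.merge("RGB", (unused, rough, metal)).save(out_path)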

assetdownloader.png
asset downloader in progress


There are still bugs; I just learnt from Wolgade that my disabling of the buttons when assets.json is missing did not change after the download ... only after a restart. Fixed that just 20 minutes ago :?
