Using MakeHuman models in VR environments


Using MakeHuman models in VR environments

Postby blindsaypatten » Sat Nov 04, 2017 4:59 pm

Yesterday I incorporated a MakeHuman model in a Samsung Gear VR environment using Unity. I'm just wondering if anyone else is working with MakeHuman models in VR? If so, for which VR hardware and using what software?
blindsaypatten
 
Posts: 586
Joined: Tue Mar 14, 2017 11:16 pm

Re: Using MakeHuman models in VR environments

Postby brkurt » Sat Nov 04, 2017 7:50 pm

brkurt
 
Posts: 1100
Joined: Sun Feb 17, 2008 8:49 pm

Re: Using MakeHuman models in VR environments

Postby joepal » Mon Nov 06, 2017 12:29 pm

MH models work more or less out of the box with Unity, via FBX export. See http://www.makehumancommunity.org/wiki/ ... them_there. Unity has a one checkbox solution for targeting VR headsets ("virtual reality supported", on or off), see https://unity3d.com/learn/tutorials/top ... r-overview.

Personally, I've run Unity projects on the Oculus Rift so far. It's been a pretty smooth experience.
Joel Palmius (LinkedIn)
MakeHuman Infrastructure Manager
http://www.palmius.com/joel
joepal
 
Posts: 4465
Joined: Wed Jun 04, 2008 11:20 am

Re: Using MakeHuman models in VR environments

Postby jorgeo » Thu Dec 21, 2017 5:35 pm

I'm also using MakeHuman with Unity, currently targeting the HTC Vive and Oculus.

https://youtu.be/V7_Qi-rMjRM
jorgeo
 
Posts: 24
Joined: Fri Aug 05, 2016 7:35 am
Location: Austin, TX

Re: Using MakeHuman models in VR environments

Postby jujube » Sat Apr 21, 2018 7:45 pm

I recently got a Gear VR, and I love it, but I'm annoyed by the fact that there's no generic 3D model viewer for it. So now I have to 1. learn Unity, 2. learn how to get Unity games onto my phone... so much learning /whine
jujube
 
Posts: 404
Joined: Fri Aug 14, 2015 10:46 pm

Re: Using MakeHuman models in VR environments

Postby CallHarvey3d » Tue Jun 12, 2018 1:49 pm

There were some cheap/free apps that allowed importing textured OBJs, which I used a few years ago. I haven't looked in a while, but I imagine there are more now, some with gesture support.
CallHarvey3d
 
Posts: 247
Joined: Wed Mar 09, 2016 3:33 pm

Re: Using MakeHuman models in VR environments

Postby ecke101 » Thu May 09, 2019 7:03 pm

Is there any chance MakeHuman could support VR in the future? It shouldn't be too hard to implement.
I would try to modify the code myself, but I think it's too advanced for me.
There are OpenVR bindings for Python, along with a simple example of how to get the headset position and rotation:
https://github.com/cmbruns/pyopenvr

It would be amazing to preview characters in VR quickly, to get a better feel for how they really look.
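
For anyone curious, getting the headset pose out of pyopenvr only takes a few lines. A minimal sketch (untested here without a headset; `matrix34_to_position` and `read_hmd_pose` are illustrative helper names of mine, and `read_hmd_pose` assumes SteamVR plus an OpenVR-compatible headset are running):

```python
def matrix34_to_position(m):
    """Extract the translation (x, y, z) from a row-major 3x4 pose matrix."""
    return (m[0][3], m[1][3], m[2][3])

def read_hmd_pose():
    import openvr  # pip install openvr
    vr_system = openvr.init(openvr.VRApplication_Other)
    try:
        poses = vr_system.getDeviceToAbsoluteTrackingPose(
            openvr.TrackingUniverseStanding, 0,
            openvr.k_unMaxTrackedDeviceCount)
        hmd = poses[openvr.k_unTrackedDeviceIndex_Hmd]
        if hmd.bPoseIsValid:
            # mDeviceToAbsoluteTracking is a 3x4 matrix holding both
            # the rotation (left 3x3) and the translation (last column)
            return matrix34_to_position(hmd.mDeviceToAbsoluteTracking)
        return None
    finally:
        openvr.shutdown()
```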
ecke101
 
Posts: 21
Joined: Thu May 09, 2019 6:59 pm

Re: Using MakeHuman models in VR environments

Postby joepal » Fri May 10, 2019 10:59 am

Without having looked, I'm guessing that our stone age OpenGL core would put in a few hurdles. It would probably have to be modernized first.
Joel Palmius (LinkedIn)
MakeHuman Infrastructure Manager
http://www.palmius.com/joel
joepal
 
Posts: 4465
Joined: Wed Jun 04, 2008 11:20 am

Re: Using MakeHuman models in VR environments

Postby ecke101 » Thu Aug 08, 2019 9:33 pm

I'm a total Python newbie but somehow managed to hack together a "working" VR viewer plugin for MakeHuman. There are some OpenGL errors in the log, and it crashes if you close the window, so if there is someone out there with an OpenVR-compatible headset who can help me, that would be nice.

(2019-08-10) edit: Updated the code. Now it draws the body mesh colored by the normals.

You need to "pip install openvr" (or install pyopenvr manually) and have SteamVR installed.

7_VR.py (in the "/plugins"-folder):
Code: Select all
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import gui3d
import mh
import gui
import log
import numpy
import ctypes  # for the buffer offset pointers passed to glVertexAttribPointer

from core import G

from openvr.glframework.qt5_app import MyGlWidget
from openvr.gl_renderer import OpenVrGlRenderer

from PyQt5.QtWidgets import QMainWindow
from PyQt5.QtOpenGL import QGLFormat

from textwrap import dedent

from OpenGL.GL import *  # @UnusedWildImport # this comment squelches an IDE warning
from OpenGL.GL.shaders import compileShader, compileProgram

from openvr.glframework import shader_string


class MHActor(object):
    def __init__(self):
        self.shader = 0
        self.vao = None
        self.humanchanged = False

    def updateVertices(self):
        self.vertices = []
        self.faces = []
        self.normals = []

        yOffset = -1 * gui3d.app.selectedHuman.getJointPosition('ground')[1]

        for obj in sorted(G.world, key = (lambda obj: obj.priority)):
            # log.message(obj.isTextured)
            # log.message(obj.solid)
            # log.message(len(obj.verts))
            # log.message(len(obj.primitives))
            # log.message(obj.primitives)
            # log.message('---')

            if obj.vertsPerPrimitive == 4 and len(obj.verts)>4:
                for vert in obj.verts:
                    self.vertices.append([vert[0],vert[1]+yOffset,vert[2]])
                for n in obj.norms:
                    self.normals.append(n)
                for fv in obj.primitives:
                    self.faces.append(fv[0])
                    self.faces.append(fv[1])
                    self.faces.append(fv[2])
                    self.faces.append(fv[2])
                    self.faces.append(fv[3])
                    self.faces.append(fv[0])

        self.faces = numpy.array(self.faces, dtype=numpy.uint32)
        self.vertices = numpy.array(self.vertices, dtype=numpy.float32)
        self.vertices = self.vertices * 0.1
        self.normals = numpy.array(self.normals, dtype=numpy.float32)

    def init_gl(self):
        vertex_shader = compileShader(
            shader_string("""
            layout(location = 0) uniform mat4 Projection = mat4(1);
            layout(location = 4) uniform mat4 ModelView = mat4(1);
            layout(location = 8) uniform float Size = 0.3;

            in vec4 position;
            in vec3 normal;

            out vec3 fnormal;

            void main() {
              gl_Position = Projection * ModelView * Size * position;
              fnormal = normal;
            }
            """),
            GL_VERTEX_SHADER)
        fragment_shader = compileShader(
            shader_string("""
            out vec4 FragColor;
            in vec3 fnormal;

            void main() {
              FragColor = vec4(normalize(fnormal), 1.0);
            }
            """),
            GL_FRAGMENT_SHADER)

        self.shader = compileProgram(vertex_shader, fragment_shader)
        glEnable(GL_DEPTH_TEST)

        self.updateVertices()

        self.vao = glGenVertexArrays(1)
        glBindVertexArray(self.vao)

        self.vertex_buffer = glGenBuffers(1)
        self.normal_buffer = glGenBuffers(1)
        self.element_buffer = glGenBuffers(1)

        glBindBuffer(GL_ARRAY_BUFFER, self.vertex_buffer)
        position = glGetAttribLocation(self.shader, 'position')
        glVertexAttribPointer(position, 3, GL_FLOAT, False, 0, ctypes.c_void_p(0))
        glEnableVertexAttribArray(position)
        glBufferData(GL_ARRAY_BUFFER, self.vertices.size*4, self.vertices, GL_STATIC_DRAW)

        glBindBuffer(GL_ARRAY_BUFFER, self.normal_buffer)
        normal = glGetAttribLocation(self.shader, 'normal')
        glVertexAttribPointer(normal, 3, GL_FLOAT, False, 0, ctypes.c_void_p(0))
        glEnableVertexAttribArray(normal)
        glBufferData(GL_ARRAY_BUFFER, self.normals.nbytes, self.normals, GL_STATIC_DRAW)

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.element_buffer)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, self.faces.nbytes, self.faces, GL_STATIC_DRAW)
       
        # glBufferData(GL_ARRAY_BUFFER, self.vertices.size*4, self.vertices, GL_STATIC_DRAW)
        # glBindVertexArray( 0 )
        # glBindBuffer(GL_ARRAY_BUFFER, 0)
        # glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)

    def display_gl(self, modelview, projection):
        glUseProgram(self.shader)
        glUniformMatrix4fv(0, 1, False, projection)
        glUniformMatrix4fv(4, 1, False, modelview)
        glBindVertexArray(self.vao)

        if self.humanchanged:
            # log.message("human changed")
            self.updateVertices()
            glBindBuffer(GL_ARRAY_BUFFER, self.vertex_buffer)
            glBufferData(GL_ARRAY_BUFFER, self.vertices.size*4, self.vertices, GL_STATIC_DRAW)
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.element_buffer)
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, self.faces.nbytes, self.faces, GL_STATIC_DRAW)
            self.humanchanged = False

        glDrawElements(GL_TRIANGLES, len(self.faces), GL_UNSIGNED_INT, None)
        # glDisableVertexAttribArray(0)
        # glDisableVertexAttribArray(1)

    def dispose_gl(self):
        glDeleteProgram(self.shader)
        self.shader = 0
        if self.vao:
            glDeleteVertexArrays(1, (self.vao,))
        self.vao = 0

class VRWindow(QMainWindow):
    def __init__(self):
        super().__init__()

        self.setWindowTitle("VR test window")
        self.resize(800,600)
        self.mhactor = MHActor()

        renderer = OpenVrGlRenderer(multisample=2)
        renderer.append(self.mhactor)

        glformat = QGLFormat()
        glformat.setVersion(4, 1)
        glformat.setProfile(QGLFormat.CoreProfile)
        glformat.setDoubleBuffer(False)
        self.glwidget = MyGlWidget(renderer, glformat, self)
        self.setCentralWidget(self.glwidget)
        self.show()
       
    def closeEvent(self, event):
        # Workaround: dispose the GL resources but ignore the close event,
        # since actually closing the window currently crashes.
        self.glwidget.disposeGL()
        event.ignore()
        # QMainWindow.closeEvent(self, event)

class VRTaskView(gui3d.TaskView):

    def __init__(self, category):
        gui3d.TaskView.__init__(self, category, 'VR')

        box = self.addLeftWidget(gui.GroupBox('VR'))
       
        self.aButton = box.addWidget(gui.Button('Open VR window'))
        self.vrwindow = 0

        @self.aButton.mhEvent
        def onClicked(event):
            self.vrwindow = VRWindow()
            self.vrwindow.show()

    def onShow(self, event):
        gui3d.app.statusPersist('VR plugin')

    def onHide(self, event):
        gui3d.app.statusPersist('')

    def onHumanChanged(self, event):
        if self.vrwindow != 0:
            self.vrwindow.mhactor.humanchanged = True
       

category = None
taskview = None

# This method is called when the plugin is loaded into makehuman
# The app reference is passed so that a plugin can attach a new category, task, or other GUI elements


def load(app):
    global category, taskview
    category = app.getCategory('Utilities')
    taskview = category.addTask(VRTaskView(category))

# This method is called when the plugin is unloaded from makehuman
# At the moment this is not used, but in the future it will remove the added GUI elements


def unload(app):
    pass
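
A side note on the quad handling in updateVertices above: the per-quad index loop can be vectorized with NumPy. A small sketch (`quads_to_triangles` is just an illustrative helper name; the column order 0,1,2 / 2,3,0 matches the loop in the plugin):

```python
import numpy

def quads_to_triangles(quads):
    """Split an (n, 4) array of quad vertex indices into an (n*2, 3) array
    of triangle indices, using the same diagonal (0-1-2 and 2-3-0) as the
    per-face loop in updateVertices."""
    quads = numpy.asarray(quads, dtype=numpy.uint32)
    return quads[:, [0, 1, 2, 2, 3, 0]].reshape(-1, 3)
```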

ecke101
 
Posts: 21
Joined: Thu May 09, 2019 6:59 pm

Re: Using MakeHuman models in VR environments

Postby AnimationPrepStudios » Fri Sep 27, 2019 7:54 pm

I have recently developed a free SteamVR mocap tool that supports custom MakeHuman avatars (as well as Reallusion CC3 models). Using its automated Custom Avatar Builder, users can add custom avatars, fit into them, and begin creating recordings/playback with them. It also includes the ability to export all recordings directly to a SceneLoader.blend file for easy Cycles/EEVEE rendering.

I have also added many helpful features, including: a story-boarder, full-body tracking, microphone lip syncing, a facial expression controller/editor, body physics, props, firearms, Vive Pro Eye gaze/blink tracking support, and so on.

The official release can be found on blenderartists.org: Download Here.


Here's an example showing how to add custom Makehuman avatars:
[embedded video]


And finally here's a short film I made using this tool:
[embedded video]
Note, this was the first short film I ever directed, so it's far from great, but I created the entire thing in ~3 hours.
AnimationPrepStudios
 
Posts: 11
Joined: Sat Jul 27, 2019 8:42 pm

