Hi guys,

Just to sum up: I am moving away from the bundled examples in order to use Qt5, for various reasons. First, the bundled OpenGL sample app is based on the very old fixed-function pipeline, which was deprecated a while ago. It also uses a library (SDL2) that I have no experience with. For this experiment, I am relying heavily on a tutorial blog set up by Trent Reed (http://www.trentreed.net/blog). With the last example, I wanted to look at the connections between the ClientKit and a home-made app. I previously managed to display a rotating cube in my headset, but the movements of the headset were not processed and the aspect ratio was not satisfactory. This time we look at how to fix that.

Objective: fetch the head position from the ClientKit and process the information adequately with OpenGL.

**0/ Prerequisites**

First and foremost, read this OpenGL tutorial eight times, as recommended.

Secondly, follow Trent’s tutorials up to Part 3b. We are not looking at the key-binding stuff, but it doesn’t hurt to understand what he does with it. I have only briefly looked at the callback system of OSVR, and it looks like similar stuff.

What we are going to look at is the very basic linear algebra stuff (matrices and transforms).

As mentioned in the last example, Trent has a git repository where you can find the code for Part 3b.

To start with, make sure that you get the objective of the tutorial without any OSVR stuff yet. The general idea is to be able to move a camera around FPS-style (left click + WASD + mouse) and look at a rather colorful cube that rotates on a black background. Exciting for one minute at best, but kind of cool.

Next we OSVRize the code: the basic modifications to the .pro file, window.h, and window.cpp are assumed to have been carried out in exactly the same way as in the previous example.

Now we are going to look a lot deeper at paintGL() in window.cpp.

**1/ Code**

It’s probably better to post the code of the new paintGL() and then go through the details.

```cpp
void Window::paintGL()
{
  // Clear
  glClear(GL_COLOR_BUFFER_BIT);
  ctx.update();

  /// For each viewer, eye combination...
  display.forEachEye([&](osvr::clientkit::Eye eye) { // Insert & to avoid lambda error

    /// Try retrieving the view matrix (based on eye pose) from OSVR
    double viewMat[OSVR_MATRIX_SIZE];
    eye.getViewMatrix(OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS, viewMat);

    /// NOTE: OSVR's viewMat is equivalent to m_camera in our example
    /// Adapt double array to QMatrix4x4
    QMatrix4x4 m_tempViewMat = QMatrix4x4(viewMat[0], viewMat[4], viewMat[8],  viewMat[12],
                                          viewMat[1], viewMat[5], viewMat[9],  viewMat[13],
                                          viewMat[2], viewMat[6], viewMat[10], viewMat[14],
                                          viewMat[3], viewMat[7], viewMat[11], viewMat[15]);
    m_tempViewMat = m_camera.toMatrix() * m_tempViewMat;

    /// For each display surface seen by the given eye of the given viewer...
    eye.forEachSurface([&](osvr::clientkit::Surface surface) { // Insert & to avoid lambda error
      // The following call is what actually splits the window into two viewports
      auto viewport = surface.getRelativeViewport();
      glViewport(static_cast<GLint>(viewport.left),
                 static_cast<GLint>(viewport.bottom),
                 static_cast<GLsizei>(viewport.width),
                 static_cast<GLsizei>(viewport.height));

      /// Set the OpenGL projection matrix based on the one we computed
      double zNear = 0.1;
      double zFar = 100;
      double projMat[OSVR_MATRIX_SIZE];
      surface.getProjectionMatrix(zNear, zFar,
                                  OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS |
                                  OSVR_MATRIX_SIGNEDZ | OSVR_MATRIX_RHINPUT,
                                  projMat);

      /// OSVR provides a projection matrix as an array of double, which we turn into a QMatrix4x4
      QMatrix4x4 m_tempProjMat = QMatrix4x4(projMat[0], projMat[4], projMat[8],  projMat[12],
                                            projMat[1], projMat[5], projMat[9],  projMat[13],
                                            projMat[2], projMat[6], projMat[10], projMat[14],
                                            projMat[3], projMat[7], projMat[11], projMat[15]);

      // Render using our shader
      m_program->bind();
      //m_program->setUniformValue(u_worldToCamera, m_camera.toMatrix());
      m_program->setUniformValue(u_worldToCamera, m_tempViewMat);
      //m_program->setUniformValue(u_cameraToView, m_projection);
      m_program->setUniformValue(u_cameraToView, m_tempProjMat);
      {
        m_object.bind();
        m_program->setUniformValue(u_modelToWorld, m_transform.toMatrix());
        glDrawArrays(GL_TRIANGLES, 0, sizeof(sg_vertexes) / sizeof(sg_vertexes[0]));
        m_object.release();
      }
      m_program->release();
    });
  });
}
```

**2/ Details**

This part clears the buffer and updates the OSVR context. Not much to say about it.

```cpp
// Clear
glClear(GL_COLOR_BUFFER_BIT);
ctx.update();
```

Next, we ask the OSVR ClientKit for the eyes to render. We usually have two, so the code following the bracket will run twice.

```cpp
/// For each viewer, eye combination...
display.forEachEye([&](osvr::clientkit::Eye eye) { // Insert & to avoid lambda error
```

Then we retrieve the view matrix from the OSVR ClientKit. Briefly, this is a 4×4 matrix representing the transformation from the world reference frame to the camera location. Refer to the OpenGL tutorial for a full explanation.

Using the Qt Creator IDE, you can select OSVR_MATRIX_SIZE and press F2; the IDE will take you to the definition of the constant. It’s 16.

Basically, the ClientKit returns the matrix as an array of 16 doubles.

```cpp
/// Try retrieving the view matrix (based on eye pose) from OSVR
double viewMat[OSVR_MATRIX_SIZE];
eye.getViewMatrix(OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS, viewMat);
```

Looking at the parameters above, the matrix is ordered column-major. The hint is the OSVR_MATRIX_COLMAJOR flag, and a bit of F2 on getViewMatrix helps you figure out the other flags.

The problem is that we want the Qt-specific 4×4 matrix type, QMatrix4x4, because there are already loads of helpful functions pre-implemented in Qt. And also because we can.

The easiest way to do that is to create a temporary QMatrix4x4 and assign each element directly to the right place. The QMatrix4x4 constructor takes its values row by row, so we read the column-major array accordingly.

```cpp
/// NOTE: OSVR's viewMat is equivalent to m_camera in our example
/// Adapt double array to QMatrix4x4
QMatrix4x4 m_tempViewMat = QMatrix4x4(viewMat[0], viewMat[4], viewMat[8],  viewMat[12],
                                      viewMat[1], viewMat[5], viewMat[9],  viewMat[13],
                                      viewMat[2], viewMat[6], viewMat[10], viewMat[14],
                                      viewMat[3], viewMat[7], viewMat[11], viewMat[15]);
```

In his tutorial, Trent implemented the camera movements and uses a function that returns a QMatrix4x4, m_camera.toMatrix(). We can combine the WASD+mouse transformations with the transformations brought by OSVR by multiplying them together, with a gentle reminder that matrix multiplication is not commutative.

```cpp
m_tempViewMat = m_camera.toMatrix() * m_tempViewMat;
```

Then we fetch the geometry of the viewport, as in the first example. Remember: one viewport per eye. Despite a misleading forEach, there is only one element in the surface set of each eye.

```cpp
/// For each display surface seen by the given eye of the given viewer...
// The loop is what actually splits the window into two different viewports
eye.forEachSurface([&](osvr::clientkit::Surface surface) { // Insert & to avoid lambda error
    auto viewport = surface.getRelativeViewport();
    glViewport(static_cast<GLint>(viewport.left),
               static_cast<GLint>(viewport.bottom),
               static_cast<GLsizei>(viewport.width),
               static_cast<GLsizei>(viewport.height));
```

Now we fetch a projection matrix from the ClientKit. With reference to the OpenGL tutorial, this matrix sets how the camera views the world, as in the optical properties of a camera lens.

zNear and zFar are coefficients that dictate those “optical” properties: anything closer than zNear or farther than zFar gets clipped away.

```cpp
/// Set the OpenGL projection matrix based on the one we computed
double zNear = 0.1;
double zFar = 100;
double projMat[OSVR_MATRIX_SIZE];
surface.getProjectionMatrix(zNear, zFar,
                            OSVR_MATRIX_COLMAJOR | OSVR_MATRIX_COLVECTORS |
                            OSVR_MATRIX_SIGNEDZ | OSVR_MATRIX_RHINPUT,
                            projMat);
```

As with the view matrix, getProjectionMatrix returns an array of 16 doubles.

So we adapt it to Qt usage:

```cpp
/// OSVR provides a projection matrix as an array of double, which we turn into a QMatrix4x4
QMatrix4x4 m_tempProjMat = QMatrix4x4(projMat[0], projMat[4], projMat[8],  projMat[12],
                                      projMat[1], projMat[5], projMat[9],  projMat[13],
                                      projMat[2], projMat[6], projMat[10], projMat[14],
                                      projMat[3], projMat[7], projMat[11], projMat[15]);
```

And finally we assign the two matrices to the shader uniforms, in a modern OpenGL fashion:

```cpp
// Render using our shader
m_program->bind();
//m_program->setUniformValue(u_worldToCamera, m_camera.toMatrix());
m_program->setUniformValue(u_worldToCamera, m_tempViewMat);
//m_program->setUniformValue(u_cameraToView, m_projection);
m_program->setUniformValue(u_cameraToView, m_tempProjMat);
{
    m_object.bind();
    m_program->setUniformValue(u_modelToWorld, m_transform.toMatrix());
    glDrawArrays(GL_TRIANGLES, 0, sizeof(sg_vertexes) / sizeof(sg_vertexes[0]));
    m_object.release();
}
m_program->release();
```

**3/ Result**

One Ctrl+R later, we get this happening:

First conclusion: the cube now looks like a cube. No issue with the aspect ratio. That’s the benefit of using the projection matrix.

Also, we can look around to try and find the cube, as it may not be in front of you at initialization time. I thought the thing didn’t work until I realized it was behind me.

Note: It may not hurt to use the bundled app osvr_reset_yaw to make sure that the orientation is initialized.

If you hold right click and use the mouse, you can turn the “body” around and use your “head” to inspect the cube. That’s the result of the combination of the two rotation matrices.

Keep holding right click and use WASD to walk around and inspect your cube even further. That’s the translation matrix in action.

Last conclusion, it’s bloody hard to take a screenshot with a headset on.

Life is good, isn’t it?

**4/ Going further**

Well, I’m bored with cubes. I have inspected mine for a few minutes, and once the wow factor is gone, a cube remains a cube.

Now I want to play with Qt3D and/or render a 2D application on the face of a surface (think OSVR multimedia player).

Please feel free to comment/question/criticise.

Until next time.

Cheers.