QT5 and OSVR – Very basic example

Hi guys,

There is huge potential behind the Qt5 framework, so I keep going with my experiments… I have no intention of developing games myself, but there are some augmented/remote-reality applications that can benefit from having a serious general-purpose framework under the hood.

I read a series of particularly interesting tutorials made by game developer Trent Reed (http://www.trentreed.net/blog).
The whole idea was to combine his examples with the OSVR bundled examples.

In his OpenGL tutorials, Part 0 and Part 1, Trent explains very well the process of setting up an OpenGL window with the Qt5 framework. Quite a lot of it is still a bit esoteric for me (so I may not be in a position to answer in-depth OpenGL questions), but I’d recommend going through these tutorials before setting up for OSVR.

In any case, the following code is heavily based (99% of it) on the code of Part 2, which is available on his git. The other reference is the sample code, OpenGLSample.cpp, from the OSVR documentation.

As already mentioned before, I am using Ubuntu 15.10, and I have no Windows or Mac machines to verify whether it is portable.

Objective: move away from SDL2, as presented in the OSVR docs, and use the Qt5 framework instead.

Prerequisites

Osvr-core is located in ~/osvr/…
Just like in my previous post, it is assumed that osvr-server is built and running in the background.
The HDK is connected and set up as landscape on an extended display to the right of the primary screen.
The app (in my case) is being developed in ~/OSVR-Projects/QT5-OSVR-Link/
The easiest way to get started is to copy the project from Trent’s github archive and rename the files adequately.

.pro file

Nothing has changed since my last post.
A couple of interesting points:
– c++11 needs to be enabled, otherwise the ClientKit screams
– no need to enable opengl, as it is already part of the QGuiApplication.
– the library include and dependency paths need to be set as for any other Qt app with third-party libs.
– we are just creating a minimalist QWindow from a console app template, so we can get rid of app_bundle for the time being (check Trent’s tutorials to understand why).

QT       += core gui

TARGET   = QT-OSVR-Test-Link
CONFIG   += console c++11
CONFIG   -= app_bundle

TEMPLATE = app

SOURCES += \
    main.cpp \
    window.cpp \
    transform3d.cpp

HEADERS += \
    window.h \
    vertex.h \
    transform3d.h

RESOURCES += \
    resources.qrc

INCLUDEPATH += /usr/include
LIBS += -L$$PWD/../../osvr/lib/ -losvrClientKit -losvrClient -losvrCommon -losvrUtil -ljsoncpp
INCLUDEPATH += $$PWD/../../osvr/include/
INCLUDEPATH += $$PWD/../../json/include/
DEPENDPATH += $$PWD/../../osvr/lib/

main.cpp

From the original project, there are only a few modifications to perform on main.cpp, and these are principally cosmetic.
There are a couple of includes to add to get some of the display meta-information, but nothing out of the ordinary.
Once the format is set, the idea is to move the window a little bit beyond (5 pixels) the boundary of the primary display (laptop), and trigger fullscreen once the window is located on the HDK display. That’s ugly, but the Qt setScreen functions are not working to my satisfaction.
It’s also ugly because the programming style is not defensive and does no error checking. However, it can be pointed out that the program would stop anyway if the OSVR server is not running upon creation of the window.

#include <QGuiApplication>
#include <QScreen>
#include "window.h"
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Set OpenGL Version information
    // Note: This format must be set before show() is called.
    QSurfaceFormat format;
    format.setRenderableType(QSurfaceFormat::OpenGL);
    format.setProfile(QSurfaceFormat::CoreProfile);
    format.setVersion(3, 3);

    // Set the window up
    Window window;
    window.setFormat(format);

    // Display the desktop properties
    qDebug() << endl << "[APP] Screens available:" << QGuiApplication::screens().size();
    qDebug() << "[APP] Primary screen:" << QGuiApplication::primaryScreen()->name();
    qDebug() << "[APP] Primary screen coordinates:" << QGuiApplication::primaryScreen()->availableGeometry();
    qDebug() << "[APP] OSVR screen:" << QGuiApplication::screens().last()->name();
    qDebug() << "[APP] OSVR screen coordinates:" << QGuiApplication::screens().last()->availableGeometry() << endl;

    // Move the window beyond the primary screen, show it and apply the fullscreen property
    window.setPosition(QGuiApplication::primaryScreen()->geometry().width() + 5, 5);
    window.show();
    window.setWindowState(Qt::WindowFullScreen);

    return app.exec();
}

window.h

Two changes to bring to the base code:
– add the includes for OSVR
– declare the client context and the display config as private members
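A minimal sketch of those two changes, assuming Trent’s base Window class from the tutorial (only the OSVR-related additions are shown; the member names ctx and display match the constructor in window.cpp):

<pre><code>// window.h (sketch: only the OSVR-related additions)
#include &lt;osvr/ClientKit/Context.h&gt;
#include &lt;osvr/ClientKit/Display.h&gt;

class Window : public QOpenGLWindow, protected QOpenGLFunctions
{
    // ... existing declarations from the base project ...

private:
    // OSVR client context and display config, with object-wide scope
    osvr::clientkit::ClientContext ctx;
    osvr::clientkit::DisplayConfig display;
};</code></pre>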

window.cpp

Things are getting a bit more involved from here.
They are not strictly necessary, as they are already in the header, but a couple of OSVR includes don’t hurt.

Now, we want the OSVR context and display config to have object-wide scope, so we place the initializers in the constructor:

// Start OSVR and get the OSVR display config directly from the constructor
Window::Window()
    : ctx("com.osvr.example.QtOpenGL"),
      display(ctx)
{
    m_transform.translate(0.0f, 0.0f, -5.0f);
}

Qt will call a few event handlers in relation with OpenGL. Refer to the tutorial for the reasons why; I would not be able to explain it better than Trent. The bottom line is that we have several locations in which to position some extracts of the code from OpenGLSample.cpp. The display config checks and the initial context update are positioned in the initializeGL event handler:

void Window::initializeGL()
{
    // Initialize OpenGL Backend
    initializeOpenGLFunctions();
    connect(context(), SIGNAL(aboutToBeDestroyed()), this, SLOT(teardownGL()), Qt::DirectConnection);
    connect(this, SIGNAL(frameSwapped()), this, SLOT(update()));
    printVersionInformation();

    // OSVR checks
    if (!display.valid()) {
        qDebug() << "\nCould not get display config (server probably not "
                    "running or not behaving), exiting."
                 << endl;
        return;
    }

    qDebug() << "Waiting for the display to fully start up, including "
                "receiving initial pose update..."
             << endl;
    while (!display.checkStartup()) {
        ctx.update();
    }
    qDebug() << "OK, display startup status is good!" << endl;

    // Set global information
    glEnable(GL_CULL_FACE);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

    (...)
}

Now here is the interesting part of the process. Qt will call an event handler, paintGL, which is equivalent to the render(osvr::clientkit::DisplayConfig &disp) function in the original example. It is called regularly (I need to dig into the doc to be more specific about how often), and triggers the rendering of a frame for each eye. For the moment, we just want to see how our video is affected by the client kit: clearly, what we want is one rendered object for each eye.

To do that, we have two nested forEach loops. The first loop iterates over the eyes; it is going to run twice, because… we have two eyes. The second loop runs once for each surface seen by the given eye and sets up a viewport: the application fetches the geometry of the surface and assigns a viewport accordingly.

Note: just to avoid a “was not captured for this lambda” error at compile time, I had to change [] to [&] in the forEachEye call. I found the solution on Stack Overflow, but did not dig into the details as to why precisely. The rendering part is explained in Trent’s blog; probably no need to dig too much into that.
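The capture issue itself has nothing to do with OSVR; here is a stand-alone sketch of the same compiler error with hypothetical names:

```cpp
#include <cassert>

// Minimal illustration of the "was not captured for this lambda" error:
// with an empty capture list [], the lambda body cannot touch local
// variables such as frameCount; [&] captures them by reference.
int countWithLambda() {
    int frameCount = 0;

    // auto bad = []() { ++frameCount; };  // error: 'frameCount' is not captured

    auto tick = [&]() { ++frameCount; };   // [&] fixes the capture error
    tick();
    tick();
    return frameCount;
}
```

Because the OSVR callbacks use locals of paintGL (the shader program, the transform), the same by-reference capture is needed there.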

 

void Window::paintGL()
{
    // Clear
    glClear(GL_COLOR_BUFFER_BIT);

    // OSVR Context update
    ctx.update();

    /// For each viewer, eye combination...
    display.forEachEye([&](osvr::clientkit::Eye eye) { // insert & to avoid lambda error

        /// For each display surface seen by the given eye of the given
        /// viewer...
        eye.forEachSurface([&](osvr::clientkit::Surface surface) { // insert & to avoid lambda error
            auto viewport = surface.getRelativeViewport();

            glViewport(static_cast<GLint>(viewport.left),
                       static_cast<GLint>(viewport.bottom),
                       static_cast<GLsizei>(viewport.width),
                       static_cast<GLsizei>(viewport.height));

            // Render using our shader
            m_program->bind();
            m_program->setUniformValue(u_worldToView, m_projection);
            {
                m_object.bind();
                m_program->setUniformValue(u_modelToWorld, m_transform.toMatrix());
                glDrawArrays(GL_TRIANGLES, 0, sizeof(sg_vertexes) / sizeof(sg_vertexes[0]));
                m_object.release();
            }
            m_program->release();
        });
    });
}

5/ Result
One Ctrl+R later, we get this happening:

[Screenshot from 2016-02-15 16-32-45]

Voilà!

6/ Going further

Well, my cube is rotating, and does not look too bad with the headset, but the head tracking does not affect the camera, because I culled all the matrix-related functions from the original example. Also, the aspect ratio of the cube is particularly elongated.

Will go into detail of the whys in another example.

Now, if you ask me why I want to use Qt5 so badly, please check the Qt example rendercontrol. There is some pretty wicked stuff to build on this principle, including for instance rendering a video feed onto a 3D OpenGL-rendered “screen”.

Please don’t hesitate to comment/suggest/criticise …

Cheers.

Originally posted on OSVR forums.
