After releasing TNG Engineering for the DK2 (which is now up on Oculus Share!), I received some requests to make a post about optimizing this to run so smoothly for so many.
To be quite honest, this experience isn’t anywhere near as optimized as it could be, but I did place a focus on avoiding hiccups / framerate hitches / frame freezes and generally getting the FPS up as high as possible with a minimum amount of work involved. Just as an FYI, I used Unity Pro 4.5.5p3, and some of this may no longer apply in Unity 5.
Static Batching:

TNG Engineering is your standard static scene, where nothing moves. That alone opens the door to static batching, which is a big part of getting that “greased lightning” feel. In my demos I tend to have a single Unity scene that fades in and out of different areas, such as the “face forward and press space” prompt, the title screen, and the main environment itself - in this case, the Engineering Bay. Only one area is active in the scene hierarchy at a time, and the camera is moved from one to the next. The catch is that for static batching to actually work, geometry marked static has to be active in the scene hierarchy when Unity generates a build; it cannot be inactive at startup, activated later at runtime, and still participate in static batching. That little caveat tripped me up for a while, but now I’ve got a script to work around it, with great results.
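The workaround script could look something like this minimal sketch: leave every area active in the scene (so the build-time batching pass sees it all), then deactivate everything but the starting area after the first frame. The class and field names here are mine for illustration; the actual script in TNG Engineering may differ.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical workaround: areas stay active in the saved scene so that
// Unity's build process includes them in static batching, then this
// script deactivates them once the scene has loaded.
public class StartupDeactivator : MonoBehaviour
{
    // Assumption: assigned in the Inspector to every area that should
    // start hidden (title screen, prompt area, etc.).
    public GameObject[] areasToDeactivate;

    IEnumerator Start()
    {
        // Wait one frame so everything exists as "active" at startup,
        // preserving its membership in the static batches.
        yield return null;

        foreach (GameObject area in areasToDeactivate)
            area.SetActive(false);
    }
}
```

Deactivating a batched object this way just skips drawing it; reactivating it later lets it rejoin its batch, which is the behavior the inactive-at-startup path loses.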
Also, since dynamic batching isn’t providing any benefit in this scene (either to draw calls or to the framerate), I disable it so that Unity doesn’t bother trying.
Loading:

Fading to black is a pretty straightforward way to cover up any heavy lifting being done to load things up. The key is to make sure that everything is loaded right then and there, and not to leave anything out that will cause a hiccup later. Larger / more complex programs may instead need to carefully manage memory usage and load resources incrementally, but this post is about simple scenes.
The player first starts out in a short hallway leading to the Engineering Bay, so when the player rounds the corner to see the rest of it, there can be some pretty serious frame freezes as things are loaded and rendered for the first time. Unity seemingly provides a lot of tools to address this, such as preloading all the texture resources, warming up all the shaders, and uploading meshes to the GPU, but the approach I took in TNG Engineering to make sure everything is loaded into memory and onto the GPU is to simply render it normally.
So in TNG Engineering, while the screen is faded out, the camera will actually move to 3 different positions and orientations and render one frame each, before ending up at the hallway where the player starts. This gets all the heavy loading out of the way while the player can’t see anything. This approach may not work for every type of program. For Titans of Space, instead of moving the player around, the program instead renders every planet and moon ahead of time in front of the player just before the tour starts, and then hides almost all of them and puts them back where they belong.
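The warmup described above could be sketched roughly like this. Everything here is assumed for illustration - the field names, the number of vantage points, and the idea that the fade is already fully dark when `Start` runs - but it captures the "render once from a few spots, then put the camera where it belongs" approach.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical warmup routine: while the screen is faded out, render one
// frame from each vantage point so textures, shaders, and meshes get
// loaded and uploaded to the GPU before the player can see anything.
public class RenderWarmup : MonoBehaviour
{
    // Assumption: vantage points chosen so that, between them, every
    // piece of geometry in the scene is rendered at least once.
    public Transform[] warmupPoints;

    // Where the player actually begins (the hallway).
    public Transform startPoint;

    IEnumerator Start()
    {
        foreach (Transform point in warmupPoints)
        {
            transform.position = point.position;
            transform.rotation = point.rotation;

            // Let the camera render one full frame from here.
            yield return new WaitForEndOfFrame();
        }

        // Move to the real starting spot; the fade-in can begin now.
        transform.position = startPoint.position;
        transform.rotation = startPoint.rotation;
    }
}
```

The Titans of Space variant mentioned above is the same trick with objects instead of camera moves: bring everything in front of the camera for a frame, then hide it and restore its real transform.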
Lighting:

Direct lighting has a pretty significant impact on performance, and realtime shadows even more so. In TNG Engineering, no realtime lights are used (and thus no realtime shadows). Instead, all lighting and shadows are baked beforehand. This can increase the size of your project with lightmap textures, but it’s a non-issue next to the boost in performance and visual quality.
Collision Detection for Player Movement:
Since I am letting the player move around, much of the geometry has colliders to prevent movement through the walls. In cases involving detailed geometry, it is usually overkill to create a mesh collider for exact collisions, so these are instead loosely represented by primitive shapes like cubes. There are still plenty of mesh colliders in use, though.
Collision Detection for Positional Head-Tracking:
And finally, since the DK2 has positional head-tracking, I like to fade the screen out to dark blue as a function of distance from the player’s head to a wall or surface. My approach to this is to use the CheckSphere physics function once a frame, with a radius that is equal to where fading out just barely starts. If that returns true, then I do up to 2 or 3 more CheckSphere calls to narrow down how far away the player’s head is from a surface. I end up with a granularity of 8 steps to fade out completely, which looks smooth enough in practice, and yet most of the time there is only going to be the one CheckSphere call to keep things performing well.
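That narrowing step is essentially a short binary search over the sphere radius. Here is a minimal sketch of the idea; the class name, field names, and radii values are all assumptions of mine, not the actual implementation.

```csharp
using UnityEngine;

// Hypothetical proximity-fade check: one CheckSphere per frame at the
// radius where fading just barely starts, then up to 3 more calls that
// binary-search the head-to-surface distance into one of 8 fade steps.
public class ProximityFade : MonoBehaviour
{
    public float maxRadius = 0.35f;  // fading just barely starts here
    public float minRadius = 0.05f;  // fully faded out at this distance
    public LayerMask wallLayers;     // walls and surfaces to test against

    // Returns 0 (no fade) .. 1 (fully dark), quantized to 8 steps.
    public float GetFadeAmount(Vector3 headPosition)
    {
        // Most frames, this is the only physics call that runs.
        if (!Physics.CheckSphere(headPosition, maxRadius, wallLayers))
            return 0f;

        // Binary search between minRadius and maxRadius: 3 extra
        // CheckSphere calls narrow the distance to 1 of 8 intervals.
        float lo = minRadius, hi = maxRadius;
        for (int i = 0; i < 3; i++)
        {
            float mid = (lo + hi) * 0.5f;
            if (Physics.CheckSphere(headPosition, mid, wallLayers))
                hi = mid;   // a surface is closer than mid
            else
                lo = mid;   // the nearest surface is farther than mid
        }

        // Map the narrowed distance back to a 0..1 fade amount.
        return Mathf.InverseLerp(maxRadius, minRadius, (lo + hi) * 0.5f);
    }
}
```

Three halvings of the search range yield 2^3 = 8 possible intervals, which matches the 8-step fade granularity, and the early-out means the common case (head nowhere near a wall) pays for only a single CheckSphere.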
That’s about it!