WaveVR_Render Manual

WaveVR_Render integrates the rendering flow of Unity with the Wave Native SDK and provides some tools for developers.

Cameras Introduction


WaveVR_Render needs multiple cameras for different rendering purposes. Each of them must have a WaveVR_Camera component on its GameObject. WaveVR_Render can generate them for you, or you can create them yourself. Take a quick look at these cameras in Expand the Cameras.

Render timing

In the WaveVR Unity plugin, the render timing does not follow Unity’s Camera depth order. The eye cameras render within a custom render-loop coroutine, triggered after WaitForEndOfFrame. See Unity’s ExecutionOrder documentation for the timing details.

Cameras not controlled by WaveVR_Render render in Unity’s normal camera order and timing. After all those cameras have rendered, WaveVR_Render does its own rendering.

Render result

When WaveVR_Render renders in multipass mode, the left eye renders first, then the right eye. Each eye’s camera renders the contents within its FOV into that eye’s RenderTexture, so two RenderTextures in total are rendered per frame.

In singlepass mode, both eyes render at once. All contents are rendered into one RenderTexture per frame. The RenderTexture is not a double-width texture but a texture 2D array. See Unity’s documentation for more detail.

At runtime, all the RenderTextures are submitted to the native SDK for lens distortion correction and the timewarp effect, and are then shown on the HMD display.

Main Camera

You will get a null result when you try to get Camera.main. Unity’s main camera must be enabled when you access it, but in WaveVR there is no enabled camera.

WaveVR does not have a main camera. Even if you set the “MainCamera” tag on a WaveVR_Render-controlled camera, it doesn’t work. WaveVR_Render disables all cameras until the render timing mentioned above, so no camera is enabled in Awake(), Start(), OnEnable(), Update(), LateUpdate(), etc. Therefore there is no main camera.

If you need a main camera, create one and put it under the head. However, a camera not controlled by WaveVR_Render will not contribute anything to the HMD display, and an enabled camera may waste power and performance. If you still need one, setting a Culling Mask of Nothing, a narrow Field of View, and a smaller Viewport can help reduce the waste.
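The advice above can be sketched as a small component placed on a child of the head. The component name is illustrative, not part of the WaveVR API:

```csharp
using UnityEngine;

// Hypothetical stand-in so Camera.main returns a valid camera,
// configured to render (almost) nothing, per the advice above.
public class DummyMainCamera : MonoBehaviour
{
    void Start()
    {
        gameObject.tag = "MainCamera";             // lets Camera.main find it
        var cam = gameObject.AddComponent<Camera>();
        cam.cullingMask = 0;                       // Culling Mask: Nothing
        cam.fieldOfView = 1f;                      // narrow Field of View
        cam.rect = new Rect(0f, 0f, 0.01f, 0.01f); // small Viewport
        cam.clearFlags = CameraClearFlags.Nothing;
    }
}
```

Even configured this way, the enabled camera still costs a little each frame, so prefer removing the Camera.main dependency if you can.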


In Editor play mode, the distortion camera is used for multipass; it shows the two RenderTextures on the GameView. The singlepass result uses Unity’s own ability to show on the GameView. However, in Editor play mode no kind of preview has correct FOV settings. You still need to check the real result on an HMD device.

Camera usage

After expanding, the cameras below will be generated. Most of their parameters are modified once, when WaveVR_Render creates them. After that you can modify the cameras’ parameters in the Editor inspector; most of them will remain unchanged even if another expanding event is triggered.


The center eye can be used to preview on the GameView in the Editor. It is enabled in the Editor and disabled at runtime. You can manually add WaveVR_Camera to your original camera and then assign it to WaveVR_Render’s CenterWvrCamera field.

If WaveVR_Render does not have a CenterWvrCamera when expanding, it generates one. If there is an attached camera on WaveVR_Render’s GameObject, the generated CenterWvrCamera copies its settings from that camera; if there is no attached camera, WaveVR_Render still generates one.

The Eye Center camera’s parameters are copied to the other stereo cameras when those cameras are expanded, for example Clear Flags, Background, Culling Mask, Clipping Planes, Occlusion Culling, and Allow MSAA. The copy is performed only when a new camera is created. If you specify the cameras yourself, we will not modify them at runtime.

The center camera’s transform.localPosition is the center position of the two eyes, and it is set when expanding. This position may not always be Vector3.zero because its value depends on the HMD device design. If you need a gaze feature, you should use the center eye to complete your design instead of the camera within the head.
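As a sketch of that advice, a gaze ray can be cast from the center camera (the centerCamera property documented later); the component name here is illustrative:

```csharp
using UnityEngine;

// Hypothetical gaze raycaster that starts the ray at the eye center,
// not at the head GameObject's origin.
public class GazeRaycaster : MonoBehaviour
{
    void Update()
    {
        var render = WaveVR_Render.Instance;
        if (render == null || render.centerCamera == null)
            return;

        var t = render.centerCamera.transform;
        RaycastHit hit;
        if (Physics.Raycast(t.position, t.forward, out hit, 10f))
            Debug.Log("Gazing at " + hit.collider.name);
    }
}
```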

lefteye and righteye

Each eye camera’s transform.localPosition is set on every expand. The position value is related to the IPD and the HMD device settings, and it is updated according to the IPD at runtime.


botheyes

The camera’s transform.localPosition is set on every expand. Its position value is like the center camera’s. The IPD takes effect when rendering.

Single pass (Experimental)

SinglePass can help reduce draw calls and dynamic occlusion cost, which can improve performance significantly. If your project has a performance bottleneck on the CPU side, you can try it.

To enable the singlepass feature, go to Player Settings and do the following steps:

  • Enable VR support
  • Add ‘MockHMD - Vive’ or ‘Split screen stereo’
  • Choose the SinglePass stereo rendering method

Before you enable it, read Unity’s singlepass documentation for a basic understanding.

If you want to use the singlepass feature, you should check the compatibility of your graphics design. Not all custom shaders or effects can be used in singlepass. If you meet a problem when using WaveVR’s singlepass, check whether the problem also happens in a pure Unity VR environment.

We have tested the singlepass feature from Unity 5.6 to Unity 2018.1. Support for newer Unity versions may be released in a later WaveVR version.

In our tests, post-process effects never work with WaveVR in singlepass mode. If you use post-processing, it will cost GPU and CPU time but have no effect on the result: in OnRenderImage(), a correct RenderTexture is never retrieved when VR is supported. This problem is not fixable at the plugin level.

Singlepass is an experimental feature of WaveVR. We know it has problems that we cannot fix at the plugin level, and we cannot guarantee this feature will be kept in the future.

Use Prefab

You can use WaveVR prefab in Assets/WaveVR/Prefabs to replace your original camera.


The prefab has a game object named head. It includes three components:

  • Camera

    This is the attached camera, which represents your original camera in the scene. We don’t actually need this camera, but it helps to show a basic game view for the developer.

    When playing, or after expanding, this camera is disabled and all of its parameters are left untouched by WaveVR_Render. This camera’s parameters are copied to the center camera and both eye cameras; here you can decide Allow MSAA, occlusion, culling mask, clear flags, clipping planes, and background.

  • WaveVR_Render

    The main script for the render lifecycle. It creates both eyes and an ear. In play mode, it controls both eyes to render and submits the binocular vision to the display. The details are provided later.


    Binocular vision

  • WaveVR_PoseTracker

    The type is set to WVR_DeviceType_HMD. It receives the HMD’s pose events and updates the game object’s transform. Developers should not try to modify the transform, because this script overrides it every frame.

Expand the Cameras

In play mode, the main cameras are expanded at runtime. Even if you have already expanded them once in the inspector, the expand is invoked again at runtime. On this second expand, no new cameras are created; only the necessary parameters are modified.

The following components and game objects are created and added to the head game object.


Expanded cameras


Inspector of expanded head

You can also expand the cameras by clicking the expand button. After expanding, the created game objects can be modified.


Expand Button

In the hierarchy, the WaveVR game object takes a position like a body or a playground origin. You can place your head in a scene by moving WaveVR. Do not change the transform of the head, because it will be overwritten with the HMD pose.

These are the game objects added as children of the head after expanding:

  • Eye Center
  • Eye Both
  • Eye Right
  • Eye Left
  • Distortion
  • Ear
  • Loading

The two eyes, represented by “Eye Right” and “Eye Left”, adjust their positions to the left or right based on the IPD, so each eye sees from a different place. “Eye Both” sits at the center position of the two eyes and simply sets different matrices into the shader for each eye when rendering. Every eye camera’s position is set at runtime according to the device, so you don’t have to modify the eye transforms yourself. In the Editor, we give the transforms a preset position.

If the GameObject of WaveVR_Render has a Camera, it will be copied to Eye Center and then disabled.

Each eye has a camera. Its near and far clip plane values are set according to the Eye Center camera’s values, and its projection and field of view are controlled by a projection matrix taken from the SDK. The plug-in sets a target texture when rendering, and the default viewport should be the full texture. The other values you can set on the camera are: clear flags, background, culling mask, clipping planes, Allow MSAA, and occlusion. All these values are copied from the main camera when the camera is created during expanding; after this first set, WaveVR will not modify them again. We don’t support Dynamic Resolution, and we support only the Forward rendering path.

Distortion distorts both rendered eye textures and presents them to the display. This only works in Unity Editor play mode for preview; it is disabled when the project is built as an app, where the WaveVR SDK, which works only on the target device, is used instead.

Ear has an audio listener.

Loading is a mask that blocks other cameras’ output on the screen before WaveVR’s graphics are initialized. It is disabled as soon as WaveVR’s graphics are initialized.

New WaveVR_Render features in 3.0

Since Wave 3.0, the Unity plugin introduces an experimental feature: support for Unity SinglePass stereo rendering. It utilizes Unity’s natively supported feature, so developers do not need to change their projects much. Performance can improve because draw calls and dynamic occlusion are half the amount of multipass. However, post-processing cannot work in singlepass; it only works on Unity’s natively supported VR devices. See Unity’s documentation for detailed singlepass usage.

We also made some architectural changes for this. All the changes are already in the WaveVR_Render prefab; you just need to import the new SDK. It takes zero effort to get the new architecture if the prefab is used, and developers can also keep their original design.

To apply the singlepass feature, all you need to do is click “Use recommended” on the VR support item of the Preference dialog (WaveVR_Settings). This dialog pops up after you import the new SDK unitypackage, or you can open it from the WaveVR menu. See more detail below.

The new WaveVR_Render is more powerful in camera handling. Developers can take more control by using our designed delegates. Assigning a customized camera to WaveVR_Render is possible, and runtime assignment is also available. See more detail below and in the source code.

However, these experimental features may be modified or removed in a future release. Be careful when applying them.

  1. New Inspector GUI Layout

    In the new layout, all available variables are shown in the inspector.

    You can trigger an Expand with each click of Configuration Changed in play mode. This helps you test your game logic around expanding. Other new features are introduced below.

  2. New eye camera

    An attached camera in the same GameObject as WaveVR_Render is no longer necessary. The CenterWVRCamera is a copy of the attached camera if one exists. The attached camera is disabled when playing.

    All the cameras used by WaveVR_Render can be customized. You need to add a WaveVR_Camera component to a camera; see Camera Expand Callback for the details of customization.

    All the cameras are expanded and created at runtime and chosen to be enabled according to the actual stereo rendering path. BothEyes is a single-pass rendering camera, used only when the single-pass conditions are met. The Left and Right eyes are used for multi-pass.

    You can provide customized center, left, right, and both cameras by assigning them in the inspector before playing; the setting will be serialized. Or you can do it through the expand callbacks. A customized camera GameObject must have a WaveVR_Camera component on it.

  3. New stereo rendering path (experimental)

    SinglePass is an experimental feature of WaveVR; use it at your own risk. Choose a preferred stereo rendering path setting according to your scene. The actual rendering path still depends on your project PlayerSettings and the VR device, and it falls back to multipass if the conditions are not met. Changing it at runtime has no effect. The default is Auto, which means “Allow SinglePass when available”.

    If you don’t want singlepass rendering, you can simply disable it in the XRSettings of PlayerSettings.

  4. Camera Expand Callback (experimental)

    When cameras are expanded by WaveVR_Render, these delegates are invoked. If you need to put components on a certain camera’s GameObject, you can choose the right moment to do so.

    For example, you can set a customized camera for Eye_Both in render.botheyes in the BeforeEyeExpand callback, and modify it in AfterEyeExpand as necessary.

  5. Preference

    The Preference dialog will notify you if your project does not enable VRSupport and set the SinglePass stereo rendering method. Click the “Use recommended” button to complete all settings. After that, you can check it in XRSettings (or in Other Settings for older Unity versions).

  6. WaveVR_RenderMask

See RenderMask.

  7. Foveated Rendering

See WaveVR_FoveatedRendering page.


Some areas of the RenderTexture can never be seen in the HMD while the HMD is held still. We can skip rendering in those areas to save power and increase performance. To skip them, we use the RenderMask feature. See the red part in the following image; the red color is only used to indicate the mask area, and the normal RenderMask is black.


RenderMask renders a mask over those areas before the camera renders everything. The mask writes the nearest distance to the depth buffer; after that, any content whose material does depth testing will not be rendered by the GPU in the masked areas.

This feature depends on the device; each device decides its own mask areas. If the device provides a RenderMask mesh at runtime, the device supports the early-Z test and RenderMask will work. If the device does not provide a mesh, RenderMask is disabled in Unity.

To enable this feature, put the RenderMask prefab into your scene. If you are using the WaveVR prefab, it already has a RenderMask inside. This prefab works anywhere in your scene hierarchy, and one in the scene is enough. At runtime, RenderMask tries to find an active WaveVR_Render by itself; if none is found, it won’t work.

You don’t need to supply any shader, material, or mesh to the RenderMask component. These public fields are filled at runtime.


RenderMask Prefab


RenderMask in WaveVR prefab

Class WaveVR_Render


class RenderThreadSynchronizer

A tool to force synchronization of the render thread and game thread. Call sync() to flush all render-thread commands. For internal usage.

Public Types

enum StereoRenderingPath {
    Auto = SinglePass,
    …
}

See acturalStereoRenderingPath.

Public Member Functions

delegate void RenderCallback(WaveVR_Render render)

A generic type of WaveVR_Render delegate.

delegate void RenderCallbackWithEye(WaveVR_Render render, WVR_Eye eye)

A generic type of WaveVR_Render delegate.

delegate void RenderCallbackWithEyeAndCamera(WaveVR_Render render, WVR_Eye eye, WaveVR_Camera wvrCamera)

A generic type of WaveVR_Render delegate.

T GetComponentFromChildren<T>(string name)

For internal usage.

void OnIpdChanged(params object[] args)

For internal usage. Receive the event callback of IpdChange.

int SetQualityLevel(int level, bool applyExpensiveChanges=true)

If you need to change the quality level at runtime, invoke this function. It changes the quality and then reloads the scene to take effect. Please don’t use UnityEngine’s QualitySettings.SetQualityLevel() directly.
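A minimal usage sketch, assuming WaveVR_Render.Instance is available as documented below; the component name is illustrative:

```csharp
using UnityEngine;

public class QualitySwitcher : MonoBehaviour
{
    // Lower the quality by one level; WaveVR reloads the scene to apply it.
    public void LowerQuality()
    {
        var render = WaveVR_Render.Instance;
        int current = QualitySettings.GetQualityLevel();
        if (render != null && current > 0)
            render.SetQualityLevel(current - 1);
    }
}
```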

Static Public Member Functions

static void InitializeGraphic(RenderThreadSynchronizer synchronizer=null)

Call to native API WVR_RenderInit in RenderThread. For internal usage.

static bool IsVRSinglePassBuildTimeSupported()

The StereoRenderingPath in XRSettings is an Editor-only setting, and there is no way to get its value at runtime. When you build your application, WaveVR_RenderEditor checks whether the stereo rendering path is actually set to SinglePass and, if so, adds the define symbol WAVEVR_SINGLEPASS_ENABLED to PlayerSettings. IsVRSinglePassBuildTimeSupported() can therefore return the result via the preprocessor. Use it at runtime to check whether your application enabled the SinglePass stereo rendering path in XRSettings.
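A usage sketch:

```csharp
// Branch at runtime on whether SinglePass was enabled at build time.
if (WaveVR_Render.IsVRSinglePassBuildTimeSupported())
{
    // The app was built with SinglePass set in XRSettings; singlepass
    // may actually be used if the device also qualifies (see IsSinglePass).
}
else
{
    // Multipass only.
}
```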

static void signalSurfaceState(string msg)

On Android, graphics initialization must happen after the surface is ready. This function is invoked by the Activity in Java when the surface is ready. Only for internal usage.

static void Expand(WaveVR_Render head)

Creates the eye cameras by copying data from the CenterCamera or the camera attached to head, and initializes each camera according to the role of each eye.

The common settings include the projection matrix and the eye position relative to the HMD position; a WaveVR_Camera component is also added to each eye’s GameObject.

It is invoked in Start(), or when the developer clicks the button in WaveVR_Render’s inspector. If the hierarchy is already expanded, the cameras still perform the necessary reset according to the new configuration. If the configuration changes, for example the IPD, Expand() is invoked again.

If you want to override the settings, you can listen to these delegates: beforeRenderExpand, afterRenderExpand, beforeEyeExpand, afterEyeExpand.
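A sketch of listening to these delegates; the component name, the FlareLayer addition, and the WVR_Eye_Left value are illustrative assumptions:

```csharp
using UnityEngine;

public class ExpandHook : MonoBehaviour
{
    void OnEnable()
    {
        var render = GetComponent<WaveVR_Render>();
        if (render != null)
            render.afterEyeExpand += OnAfterEyeExpand;
    }

    void OnDisable()
    {
        var render = GetComponent<WaveVR_Render>();
        if (render != null)
            render.afterEyeExpand -= OnAfterEyeExpand;
    }

    // Invoked once per eye after Expand() created or modified its camera.
    void OnAfterEyeExpand(WaveVR_Render render, WVR_Eye eye, WaveVR_Camera wvrCamera)
    {
        if (eye == WVR_Eye.WVR_Eye_Left)
            wvrCamera.gameObject.AddComponent<FlareLayer>();
    }
}
```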

static void Collapse(WaveVR_Render head)

This function should never be invoked at runtime. Only for Editor usage.

static Matrix4x4 MakeProjection(float l, float r, float t, float b, float n, float f)

A helper function to make projection matrix.

Public Attributes

float ipd = 0.063f

Shows the current IPD.

bool configurationChanged = false

If configurationChanged is set to true, Expand() is triggered before eye rendering. After the change takes effect, the flag is set back to false.
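A sketch of driving a re-expand from script:

```csharp
// Request a re-expand; Expand() will run before the next eye rendering,
// and WaveVR_Render resets the flag to false afterwards.
var render = WaveVR_Render.Instance;
if (render != null)
    render.configurationChanged = true;
```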

RenderCallback beforeRenderExpand

The callback will be invoked before WaveVR_Render is going to create or modify all cameras in Expand().

RenderCallbackWithEye beforeEyeExpand

The callback will be invoked before one eye’s camera is going to be created or modified in Expand().

RenderCallbackWithEyeAndCamera afterEyeExpand

The callback will be invoked after one eye’s camera is created or modified in Expand().

RenderCallback afterRenderExpand

The callback will be invoked after all cameras are created or modified in Expand().

RenderCallback onConfigurationChanged

The callback will be invoked after configuration changes are applied.

RenderCallback onSDKGraphicReady

The callback will be invoked after the SDK’s graphics are initialized. You can then safely invoke the SDK’s graphics-related APIs.

RenderCallback onFirstFrame

The callback will be invoked the first time WaveVR_Render renders a frame.

RenderCallbackWithEyeAndCamera beforeRenderEye

The callback will be invoked before WaveVR_Render renders one eye.

RenderCallbackWithEyeAndCamera afterRenderEye

The callback will be invoked after WaveVR_Render renders one eye.

WaveVR_Camera centerWVRCamera = null

A shortcut to get center camera’s WaveVR_Camera object.

WaveVR_Camera lefteye = null

A shortcut to get left eye camera’s WaveVR_Camera object.

WaveVR_Camera righteye = null

A shortcut to get right eye camera’s WaveVR_Camera object.

WaveVR_Camera botheyes = null

A shortcut to get both eye camera’s WaveVR_Camera object.

WaveVR_Distortion distortion = null

A shortcut to get the WaveVR_Distortion object. WaveVR_Distortion is used to show the offline-rendered image in UnityEditor’s GameView. Only used in the Editor.

GameObject loadingCanvas = null

For internal usage.

GameObject ear = null

A shortcut to get the ear’s gameobject.

WVR_PoseOriginModel _origin = WVR_PoseOriginModel.WVR_PoseOriginModel_OriginOnGround

The _origin is used to get the HMD pose. See more in origin

bool needTimeControl = false

Set it to true if you need WaveVR_Render to help stop the game time when paused, or when input focus is lost because a system overlay popped up. See UnityEngine.Time.timeScale.

Static Public Attributes

static int globalOrigin = -1

This global setting affects every scene’s WaveVR_Render origin. Set -1 to disable the global effect. See origin.

static int globalPreferredStereoRenderingPath = -1

This global setting affects every scene’s WaveVR_Render preferredStereoRenderingPath. Set -1 to disable it. See New stereo rendering path for more information.


static WaveVR_Render Instance [get]

Used to get the currently active Instance of WaveVR_Render.

bool IsGraphicReady [get]

Used to check whether the graphics of the native SDK are initialized.

float sceneWidth [get]

The width in pixels of one eye’s render texture. It is not doubled in singlepass.

float sceneHeight [get]

The height in pixels of one eye’s render texture.

float [] projRawL [get]

The raw projection of the left eye, in the form of four floats: left, right, top, and bottom. The values are tangents, with the near-plane distance assumed to be 1. The left and bottom values should be negative.

float [] projRawR [get]

The raw projection of the right eye, in the form of four floats: left, right, top, and bottom. The values are tangents, with the near-plane distance assumed to be 1. The left and bottom values should be negative.
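These raw values can be combined with MakeProjection() (documented above) to rebuild an eye’s projection matrix; the clip-plane values in this sketch are illustrative:

```csharp
using UnityEngine;

// projRawL = { left, right, top, bottom } as tangents at near distance 1.
var render = WaveVR_Render.Instance;
float[] p = render.projRawL;
float near = 0.1f, far = 1000f;   // illustrative clip planes
Matrix4x4 projLeft =
    WaveVR_Render.MakeProjection(p[0], p[1], p[2], p[3], near, far);
```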

WaveVR_Utils.RigidTransform [] eyes [get]

Stores both eyes’ local positions and rotations relative to the head. The values are obtained from the native API as a matrix and transformed into a Vector3 and a Quaternion. The rotation Quaternion may always be identity in this case. eyes is updated when the IPD changes.

StereoRenderingPath acturalStereoRenderingPath [get]

This property returns the actually working stereo rendering path as an enum. See also IsSinglePass.

bool IsSinglePass [get]

This property will return true only if all the conditions are qualified for SinglePass.

bool isExpanded [get]

A check of the cameras’ expanded status. Also used to initialize internal variables.

Camera centerCamera [get]

A shortcut to get the Camera component of the center camera.

TextureManager textureManager [get]

The texture queue container. Only used internally by WaveVR_Render.

ColorSpace QSColorSpace [get]

The current ColorSpace, obtained from UnityEngine.QualitySettings. Only used internally by WaveVR_Render.

WVR_PoseOriginModel origin [get, set]

This variable decides how the HMD pose works in this scene. You can change it at runtime; doing so triggers a configuration-change event to take effect. See more in WVR_GetSyncPose or in:

enum WVR_PoseOriginModel

The style of tracking origin.

Identifies which style of tracking origin the application wants to use for the poses it requests.


WVR_PoseOriginModel_OriginOnHead = 0

The origin of 6 DoF pose is on head.

WVR_PoseOriginModel_OriginOnGround = 1

The origin of 6 DoF pose is on ground.

WVR_PoseOriginModel_OriginOnTrackingObserver = 2

The raw pose from tracking system.

WVR_PoseOriginModel_OriginOnHead_3DoF = 3

The origin of 3 DoF pose is on head.
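A sketch of changing the origin from script, using one of the values listed above:

```csharp
// Switch to a 3-DoF head origin at runtime; the origin setter triggers a
// configuration-change event so the new origin takes effect.
var render = WaveVR_Render.Instance;
if (render != null)
    render.origin = WVR_PoseOriginModel.WVR_PoseOriginModel_OriginOnHead_3DoF;
```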