Render Runtime Tutorials

The following tutorials show you how to use the render runtime.

Note

Most of the code in the following sections is pseudocode. It may not exactly match what you see in the native sample app hellovr.

What is Render Runtime?

Render runtime is a subsystem of the WVR runtime and is primarily used to control the render thread and update the render target on the display. The render thread handles graphics context management, buffer/surface binding, and rendering synchronization. Render runtime is responsible for updating the render target on the display device; it also handles visual experience improvements, distortion correction, and prediction after the content is generated by the game logic.

Initializing the Render Runtime

Note

Before initializing the render runtime, refer to VRActivity with Native Code to make sure the prerequisites are ready.

To invoke the render runtime interfaces, the header file wvr_render.h must be included. Before invoking the initializing interface shown in the sample code below, the render runtime can be configured on some platforms through the initializing parameters WVR_RenderInitParams_t, which specify the graphics library to use and the runtime configuration.

Note that only the OpenGL graphics library is currently supported.

The first member of the initializing parameters can be filled with WVR_GraphicsApiType_OpenGL to specify this library. Specifying an unsupported library type causes the initializing interface to return a "not supported" error.

The second member of the initializing parameters specifies a combination of runtime configurations as a bit mask (even though its data type is uint64_t). The available flags are listed in the enumeration WVR_RenderConfig, and they control which runtime features are switched on or off.

typedef enum {
    WVR_RenderConfig_Default                    = 0,             /**< **WVR_RenderConfig_Default**: Runtime initialization reflects certain properties in the device service. For features such as single buffer mode and the reprojection mechanism, the default settings are determined by the device service or a runtime config file on the specific platform. The default color space is the linear domain. */
    WVR_RenderConfig_Disable_SingleBuffer       = ( 1 << 0 ),    /**< **WVR_RenderConfig_Disable_SingleBuffer**: Disable single buffer mode in runtime. */
    WVR_RenderConfig_Disable_Reprojection       = ( 1 << 1 ),    /**< **WVR_RenderConfig_Disable_Reprojection**: Disable reprojection mechanism in runtime. */
    WVR_RenderConfig_sRGB                       = ( 1 << 2 ),    /**< **WVR_RenderConfig_sRGB**: Determine whether the color space is set as sRGB domain. */
} WVR_RenderConfig;

The flag WVR_RenderConfig_Default means that some of these features, such as single buffer mode and the reprojection mechanism, are determined by device properties or a runtime config file on specific platforms.

  • Single buffer mode: Uses strip rendering on the front buffer to update the display.
  • Reprojection mechanism: Also referred to as Timewarp, a VR technique that warps the rendered scene before updating it to the display.

For more details regarding HMD external device properties, refer to the device properties documentation. For the runtime config file, refer to the documents from the platform vendor.

For example, assume that single buffer mode and the reprojection mechanism are both enabled via the device properties or the runtime config file, and that the bit mask of the initializing parameters is WVR_RenderConfig_Default. The render runtime then activates both features.

With double buffering, the front buffer is shown onscreen while the graphics library renders into the back buffer in the background. If the bit mask of the initializing parameters includes the flag WVR_RenderConfig_Disable_SingleBuffer, the render runtime switches to this double buffering method.

In some special cases, there is no need to apply the reprojection mechanism even though Timewarp helps fill in missed frames and reduce scene judder. If the bit mask of the initializing parameters includes WVR_RenderConfig_Disable_Reprojection, the render runtime stops warping the rendered scene.

The render runtime can also create a surface with the sRGB color space (intended especially for developers using Unity). If the color space in the build settings is set to linear in the Wave Unity SDK plugins, the texture has no gamma correction applied. In this situation, WVR_RenderConfig_sRGB needs to be used in the bit-mask parameter when initializing the render runtime. This informs the render runtime to prepare a surface for the uncorrected texture that will be used for post-processing in the following steps.

#include <wvr/wvr_render.h>

WVR_RenderInitParams_t param = {WVR_GraphicsApiType_OpenGL, WVR_RenderConfig_Default};
WVR_RenderError pError = WVR_RenderInit(&param);
if (pError != WVR_RenderError_None) {
    LOGE("Render init failed - Error[%d]", pError);
}
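
As a hedged illustration, the configuration flags can be combined with a bitwise OR; the combination below (disabling reprojection and requesting an sRGB surface) is illustrative only:

// Illustrative flag combination: disable reprojection and request an sRGB surface.
uint64_t config = WVR_RenderConfig_Disable_Reprojection | WVR_RenderConfig_sRGB;
WVR_RenderInitParams_t param = {WVR_GraphicsApiType_OpenGL, config};
WVR_RenderError err = WVR_RenderInit(&param);
if (err != WVR_RenderError_None) {
    LOGE("Render init failed - Error[%d]", err);
}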

The third member of the initializing parameters, WVR_GraphicsParams, is an optional argument. Its purpose is to provide an interface for developers to supply the rendering context and surface of the native application. The native application packs its own rendering context into this argument, as sketched after the note below. After the rendering context and the surface are provided, the render runtime binds the rendering context with the surface to show the content on the VR device.

Note:
  • The type of rendering context is EGLContext (defined as an alias of void*).
  • Use OpenGL ES 3.0 or later to get the desired visual effect.
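
A minimal sketch of packing an existing EGL context into the graphics parameters is shown below. The member names used here (graphicsApi, renderConfig, graphicsParams, renderContext) are assumptions based on the Wave SDK headers; verify them against wvr_render.h and wvr_types.h in your SDK version.

#include <EGL/egl.h>
#include <wvr/wvr_render.h>

// Member names below are assumptions; check the Wave SDK headers.
WVR_RenderInitParams_t param = {};
param.graphicsApi = WVR_GraphicsApiType_OpenGL;
param.renderConfig = WVR_RenderConfig_Default;
param.graphicsParams.renderContext = eglGetCurrentContext();  // EGLContext is an alias of void*

WVR_RenderError err = WVR_RenderInit(&param);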

If the context is provided via the argument mentioned above, the activity of the native application, which inherits from VRActivity, can also provide its own SurfaceView. Invoking the hookNativeSurface interface of VRActivity in the onCreate stage of the activity lifecycle replaces the original SurfaceView created by the render runtime, so the rendering surface of the native application can be used on the WaveVR device.

public class AppActivity extends VRActivity {
    protected SurfaceView mSurfaceView;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        Log.i(TAG, "onCreate");

        setContentView(R.layout.widgets);

        mSurfaceView = (SurfaceView) findViewById(R.id.surfaceview);
        if (mSurfaceView != null) {
            Log.i(TAG, "mSurfaceView" + mSurfaceView);
            hookNativeSurface(mSurfaceView);
        }
        super.onCreate(savedInstanceState);
    }
    // omission for simplification
}

WVR_RenderError specifies the corresponding error of render runtime initialization for debugging.

Loading Page

A cold launch may take several seconds on some platforms. Consider adding a loading page or welcome scene to avoid a black screen and to provide a good user experience.

Setting Up the View and Projection Matrices

All VR experiences are rendered from a first person perspective. The content shown on the display depends on the translation/rotation of the HMD, which means that the user’s pose in VR should be provided to the render runtime before refreshing the display. The model positions in VR relative to the user’s position in the VR space constitute the viewer space. Mathematically, the easiest way to transform the center of the world to the viewer’s position is to apply the inverse matrix of the viewer’s pose to everything that needs to be seen. Invoking WVR_GetPoseState or WVR_GetSyncPose obtains WVR_PoseState_t, which is an aggregation of the positional information. One of its members, poseMatrix, is the position matrix with the 4x4 matrix type WVR_Matrix4f.

typedef struct WVR_PoseState {
    WVR_Matrix4f_t poseMatrix;       // 4x4 position matrix of the device
    WVR_Vector3f_t velocity;         // linear velocity of the device
    WVR_Vector3f_t angularVelocity;  // angular velocity of the device
    bool isValidPose;                // whether this pose is valid and usable
    int64_t timestamp;               // time at which the pose was captured
} WVR_PoseState_t;

WVR_Matrix4f is not in a form that OpenGL can use directly: it stores the pose in row-major order, whereas OpenGL expects column-major arrays. In this case, a row-major to column-major conversion is necessary. The following is an array-element mapping comparison between the two forms. In the WVR_Matrix4f layout, the elements at array indexes 0~2, 4~6, and 8~10 form the rotation matrix, while indexes 3, 7, and 11 hold the x, y, z translation vector.

WVR_Matrix4f::m[4][4]    OpenGL matrix 1 x 16 array, Matrix4
    0  1  2  3                0  4  8 12
    4  5  6  7                1  5  9 13
    8  9 10 11                2  6 10 14
   12 13 14 15                3  7 11 15

Based on the description above, a matrix transpose should be made after polling poses. The transpose should look similar to the following:

Matrix4 wvrmatrixConverter(const WVR_Matrix4f_t& mat) {
    // Convert the HMD's pose to OpenGL matrix array.
    return Matrix4(
        mat.m[0][0], mat.m[1][0], mat.m[2][0], mat.m[3][0],
        mat.m[0][1], mat.m[1][1], mat.m[2][1], mat.m[3][1],
        mat.m[0][2], mat.m[1][2], mat.m[2][2], mat.m[3][2],
        mat.m[0][3], mat.m[1][3], mat.m[2][3], mat.m[3][3]
    );
}

The following example demonstrates polling the pose states via WVR_GetSyncPose and transposing the position matrices into the OpenGL matrix format.

WVR_DevicePosePair_t mVRDevicePairs[WVR_DEVICE_COUNT_LEVEL_0];
Matrix4 DevicePoseArray[WVR_DEVICE_COUNT_LEVEL_0];
WVR_GetSyncPose(WVR_PoseOriginModel_OriginOnHead, mVRDevicePairs, WVR_DEVICE_COUNT_LEVEL_0);
for (int nDevice = 0; nDevice < WVR_DEVICE_COUNT_LEVEL_0; ++nDevice) {
    if (mVRDevicePairs[nDevice].pose.isValidPose) {
        DevicePoseArray[nDevice] = wvrmatrixConverter(mVRDevicePairs[nDevice].pose.poseMatrix);
    }
}
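
Alternatively, WVR_GetPoseState can poll the pose of a single device. The sketch below assumes the four-argument form (device type, origin model, predicted time in milliseconds, output pose) and uses a prediction time of 0 to request the most recent pose:

// Poll only the HMD pose; a predicted time of 0 ms requests the most recent pose.
WVR_PoseState_t hmdPose;
WVR_GetPoseState(WVR_DeviceType_HMD, WVR_PoseOriginModel_OriginOnHead, 0, &hmdPose);
if (hmdPose.isValidPose) {
    Matrix4 hmdMatrix = wvrmatrixConverter(hmdPose.poseMatrix);
}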

As mentioned above, inverting the position matrix yields the view transform matrix for the models in the scene. For example, when the viewer becomes the center of the view space:

  • The head moving left is equivalent to the object moving right.
  • The head rotating clockwise is equivalent to the object rotating counter-clockwise.

Matrix4 hmd = DevicePoseArray[WVR_DEVICE_HMD];
mHMDPose = hmd.invert();

The situation above applies when the world space is stationary. What if the world space rotates and translates on its own? In that case, the view transform matrix must incorporate an additional translation and rotation instead of being a simple inversion.

The first step is to separate the original position matrix into rotation and translation matrices.

Matrix4 hmd = DevicePoseArray[WVR_DEVICE_HMD];

Matrix4 hmdRotation = hmd;
hmdRotation.setColumn(3, Vector4(0,0,0,1));

Matrix4 hmdTranslation;
hmdTranslation.setColumn(3, Vector4(hmd[12], hmd[13], hmd[14], 1));

Then, update the world rotation and translation.

// Update world rotation.
mWorldRotation += -mDriveAngle * mTimeDiff;
Matrix4 mat4WorldRotation;
mat4WorldRotation.rotate(mWorldRotation, 0, 1, 0);

// Update WorldTranslation
Vector4 direction = (mat4WorldRotation * hmdRotation) * Vector4(0, 0, 1, 0);  // The translation of the HMD pose is not applied
direction *= -mDriveSpeed * mTimeDiff;
direction.w = 1;

// Move toward -z
Matrix4 update;
update.setColumn(3, direction);
mWorldTranslation *= update;

Lastly, compose the corresponding rotation and translation matrix together and invert the matrix multiplying product to complete the modified view transform matrix.

mHMDPose = (mWorldTranslation * hmdTranslation * mat4WorldRotation * hmdRotation).invert();

After applying the view matrix to the models in the scene, the viewer space is assumed to be established. To provide stereo disparity, a pair of per-eye transform matrices should be used to correct the viewer space into two eye spaces. The “eyes” are in fact a pair of sensors on the HMD, and the position of each sensor relative to the head differs between 3DoF and 6DoF tracking.

Specify the left/right eye and the DoF type to the WVR_GetTransformFromEyeToHead interface. It returns a transform from the eye space to the viewer space with a 4x4 matrix type WVR_Matrix4f.

Matrix4 EyePosLeft = wvrmatrixConverter(
    WVR_GetTransformFromEyeToHead(WVR_Eye_Left)).invert();
Matrix4 EyePosRight = wvrmatrixConverter(
    WVR_GetTransformFromEyeToHead(WVR_Eye_Right)).invert();

Now the eye space of the scene is ready, but it is still in Cartesian coordinates, which do not match how we actually see. The valid field of view is usually represented as a frustum: everything inside it can be seen on the screen/display. A perspective projection should therefore be applied to the scene. Mapping all objects in the eye space to homogeneous clip coordinates is achieved by multiplying by the matrix returned from the corresponding WVR_GetProjection call, which returns a projection matrix for each eye with the type WVR_Matrix4f.

Matrix4 ProjectionLeft = wvrmatrixConverter(
    WVR_GetProjection(WVR_Eye_Left, dNearClip, dFarClip));
Matrix4 ProjectionRight = wvrmatrixConverter(
    WVR_GetProjection(WVR_Eye_Right, dNearClip, dFarClip));

After the perspective projection, objects near the camera appear bigger and distant objects appear smaller. The render interface of the graphics API needs the MVP (Model-View-Projection) matrix to draw the models in the scene.
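
For example, a per-eye MVP matrix can be composed from the matrices prepared above; the model matrix here is illustrative:

// Compose the per-eye MVP matrices (model is an illustrative object-to-world transform).
Matrix4 model;
Matrix4 mvpLeft  = ProjectionLeft  * EyePosLeft  * mHMDPose * model;
Matrix4 mvpRight = ProjectionRight * EyePosRight * mHMDPose * model;
// Pass mvpLeft / mvpRight to the shader when drawing the scene for each eye.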

Updating the Scene Texture to the Display

In each frame, the stereoscopic rendering result should be updated to the display via a texture reference. The texture resources are maintained by the render runtime for optimized resource management.

The runtime also maintains a texture queue. To create a texture queue, specify the texture properties to the WVR_ObtainTextureQueue interface. The recommended texture size and associated buffers are suitable for the current display resolution; the size can be obtained via WVR_GetRenderTargetSize.

uint32_t RenderWidth = 0, RenderHeight = 0;
WVR_GetRenderTargetSize(&RenderWidth, &RenderHeight);

//Get the texture queue handler
void* mLeftEyeQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D, WVR_TextureFormat_RGBA, WVR_TextureType_UnsignedByte, RenderWidth, RenderHeight, 0);
void* mRightEyeQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D, WVR_TextureFormat_RGBA, WVR_TextureType_UnsignedByte, RenderWidth, RenderHeight, 0);

The recommended texture size can also be scaled up by a ratio through WVR_SetOverfillRatio so that the rendered scene overfills the screen; however, this also lowers performance. WVR_SetOverfillRatio should be invoked after VR runtime initialization and before render runtime initialization, that is, after invoking WVR_Init and before invoking WVR_RenderInit. This sequence ensures the prerequisite data is prepared in a suitable environment.

WVR_InitError eError = WVR_Init(WVR_AppType_VRContent);
float ratio = 1.2f;
WVR_SetOverfillRatio(ratio, ratio);
WVR_RenderError pError = WVR_RenderInit(&param);

If the multiview extension is used in the vertex shader, the texture target has to be specified as WVR_TextureTarget_2D_ARRAY. The texture queue only needs to be created once: a single draw call renders the scene to multiple layers of an array texture. This method helps reduce CPU load and rendering latency.

uint32_t RenderWidth = 0, RenderHeight = 0;
WVR_GetRenderTargetSize(&RenderWidth, &RenderHeight);

//Get the texture queue handler
void* mMultiviewQ = WVR_ObtainTextureQueue(WVR_TextureTarget_2D_ARRAY, WVR_TextureFormat_RGBA, WVR_TextureType_UnsignedByte, RenderWidth, RenderHeight, 0);

WVR_GetTexture is the interface to get a texture from the texture queue; it requires the texture queue handle and an index. In the beginning, all of the textures in the queue are available, so the queue can be iterated to wrap each texture in a frame buffer object. The length of the texture queue can be obtained from the WVR_GetTextureQueueLength interface.

//Create frame buffer objects
for (int i = 0; i < WVR_GetTextureQueueLength(mLeftEyeQ); i++) {
    fbo = new FrameBufferObject((int)WVR_GetTexture(mLeftEyeQ, i).id, RenderWidth, RenderHeight);
    LeftEyeFBO.push_back(fbo);
}
for (int j = 0; j < WVR_GetTextureQueueLength(mRightEyeQ); j++) {
    fbo = new FrameBufferObject((int)WVR_GetTexture(mRightEyeQ, j).id, RenderWidth, RenderHeight);
    RightEyeFBO.push_back(fbo);
}

Compared to normal stereoscopic scene rendering, multiview rendering only needs one set of frame buffer objects per frame instead of one per eye.

//Create frame buffer objects in the frame level
for (int i = 0; i < WVR_GetTextureQueueLength(mMultiviewQ); i++) {
    fbo = new FrameBufferObject((int)WVR_GetTexture(mMultiviewQ, i).id, RenderWidth, RenderHeight);
    MultiviewFBO.push_back(fbo);
}

The frame buffer object generates the depth render buffer and attaches the scene texture to the frame buffer.

FrameBufferObject::FrameBufferObject(int textureId, int width, int height) {
    glGenFramebuffers(1, &mFrameBufferId);
    glBindFramebuffer(GL_FRAMEBUFFER, mFrameBufferId);

    glGenRenderbuffers(1, &mDepthBufferId);
    glBindRenderbuffer(GL_RENDERBUFFER, mDepthBufferId);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mDepthBufferId);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
}

To generate the frame buffer and attach the texture for multiview extension, use this:

FrameBufferObject::FrameBufferObject(int textureId, int width, int height) {
    // Create a 2-layer depth texture array, one layer per eye.
    glGenTextures(1, &mDepthBufferId);
    glBindTexture(GL_TEXTURE_2D_ARRAY, mDepthBufferId);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_DEPTH_COMPONENT24, width, height, 2);

    glGenFramebuffers(1, &mFrameBufferId);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mFrameBufferId);

    // Attach both layers of the color and depth textures for multiview rendering.
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, textureId, 0, 0, 2);
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, mDepthBufferId, 0, 0, 2);
}

Before rendering objects into the scene, query the index of an available texture via WVR_GetAvailableTextureIndex; using an occupied texture will cause a failure when submitting it. Then, by specifying the eye side and the texture to the WVR_PreRenderEye interface, the render runtime performs extra setup on the texture. The third parameter of WVR_PreRenderEye is optional; refer to WVR_RenderFoveationMode for details.

//Get the available texture indexes in the texture queues for both eyes.
int32_t IndexLeft = WVR_GetAvailableTextureIndex(mLeftEyeQ);
int32_t IndexRight = WVR_GetAvailableTextureIndex(mRightEyeQ);

//Specify the texture in texture queue.
WVR_TextureParams_t leftEyeTexture = WVR_GetTexture(mLeftEyeQ, IndexLeft);
WVR_TextureParams_t rightEyeTexture = WVR_GetTexture(mRightEyeQ, IndexRight);

//Let the runtime perform extra setup on the textures.
WVR_PreRenderEye(WVR_Eye_Left, &leftEyeTexture);
WVR_PreRenderEye(WVR_Eye_Right, &rightEyeTexture);

Any object can now be rendered into the scene texture. After rendering is finished, specify the side of the eye and submit the rendered texture to the WVR_SubmitFrame interface. The render runtime will then update this texture to the display.

// Submit left eye and then right eye
WVR_SubmitError e;
e = WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture);
e = WVR_SubmitFrame(WVR_Eye_Right, &rightEyeTexture);

If the texture for rendering is established with the multiview extension, call the WVR_GetAvailableTextureIndex, WVR_GetTexture, and WVR_PreRenderEye interfaces once. Note that the first argument of WVR_PreRenderEye is arbitrary.

In this case, the render runtime does not account for the eye side and processes the texture only once per frame; the multiview texture is handled automatically by the render runtime internally.

//Get the available texture index in texture queue.
int32_t IndexMultiview = WVR_GetAvailableTextureIndex(mMultiviewQ);

//Specify the texture in texture queue.
WVR_TextureParams_t multiviewEyeTexture = WVR_GetTexture(mMultiviewQ, IndexMultiview);

//Let the runtime perform extra setup on the texture.
WVR_PreRenderEye(WVR_Eye_Left, &multiviewEyeTexture);

If the rendered texture is established with the multiview extension, call the WVR_SubmitFrame interface once. Note that the first argument of WVR_SubmitFrame is arbitrary. The render runtime then takes over and updates the display without regard to the eye side.

//Submit the multiview texture once; the eye side argument is arbitrary.
WVR_SubmitError e;
e = WVR_SubmitFrame(WVR_Eye_Left, &multiviewEyeTexture);

The third argument of WVR_SubmitFrame is a reference pose state. The render runtime polls poses periodically; polling the pose as close as possible to the frame submission helps reduce judder. This optional argument defaults to null and can be skipped.

Invoking WVR_GetSyncPose or WVR_GetPoseState multiple times with different predicted times before calling WVR_SubmitFrame is allowed; in this case, passing the newest rendered pose as the third parameter of WVR_SubmitFrame is necessary, as sketched below. Never call these two pose-fetching APIs more than once per frame without passing the referenced pose to WVR_SubmitFrame.
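
A minimal sketch of passing the newest rendered pose, reusing the HMD pose polled by WVR_GetSyncPose earlier:

// Submit each eye with the pose state that was used to render this frame.
WVR_PoseState_t renderPose = mVRDevicePairs[WVR_DEVICE_HMD].pose;
WVR_SubmitError e;
e = WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture, &renderPose);
e = WVR_SubmitFrame(WVR_Eye_Right, &rightEyeTexture, &renderPose);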

The fourth argument of WVR_SubmitFrame is a combination of bit-mask flags. If the bit mask equals the enumeration flag WVR_SubmitExtend_Default, the generic frame submission process is invoked in the render runtime; skipping this argument has the same effect as the default.

If the bit mask of the fourth argument includes the flag WVR_SubmitExtend_DisableDistortion, the render runtime disables distortion correction for this submission. This is suitable for submitting textures that have already undergone distortion correction, as sketched after the note below.

Note:
This distortion disabling feature is in the experimental stage. There is an obvious distortion around the edge of the scene when the performance is low.
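
The following sketch illustrates the fourth argument; the texture variables reuse the earlier snippets:

// Default submission (same effect as omitting the fourth argument).
WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture, NULL, WVR_SubmitExtend_Default);

// Submit a texture that has already been distortion-corrected.
WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture, NULL, WVR_SubmitExtend_DisableDistortion);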

If the bit mask of the fourth argument includes the flag WVR_SubmitExtend_PartialTexture, the render runtime uses the clipping texture UV coordinates (WVR_TextureLayout_t) from the texture parameter structure WVR_TextureParams_t passed as the second argument of WVR_SubmitFrame. This feature presents the specified region of the texture on the display and can also be used to achieve a dynamic resolution change. Go to Dynamic resolution change for more details.

WVR_SubmitError specifies the corresponding error of the frame submission for debugging.

Using Stereo Renderer Mode

Use the WVR_StereoRenderer coding style to customize callback functions; the render runtime invokes these callbacks at specific timings.

Note:
WVR_StereoRenderer is only supported in the native application.

Using MSAA technique

Earlier versions of the render runtime provided the MSAA technique only on certain platforms. To support more platforms, the sample code introduces a generic way to create the frame buffer object with a multi-sampled rendering extension. A global flag gMsaa is declared in the sample code hellovr.cpp to control the activation of MSAA.
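
A hedged sketch of such a frame buffer object follows. It assumes the GL_EXT_multisampled_render_to_texture extension is available (its entry points may need to be fetched via eglGetProcAddress); the sample count of 4 and the use of the gMsaa flag are illustrative.

// Illustrative multi-sampled FBO using GL_EXT_multisampled_render_to_texture.
FrameBufferObject::FrameBufferObject(int textureId, int width, int height) {
    glGenFramebuffers(1, &mFrameBufferId);
    glBindFramebuffer(GL_FRAMEBUFFER, mFrameBufferId);

    glGenRenderbuffers(1, &mDepthBufferId);
    glBindRenderbuffer(GL_RENDERBUFFER, mDepthBufferId);
    if (gMsaa) {
        glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mDepthBufferId);
        // Render into the runtime-provided texture with 4 samples; the resolve is implicit.
        glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0, 4);
    } else {
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mDepthBufferId);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
    }
}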

Dynamic resolution change

The fourth argument of WVR_SubmitFrame can activate partial-texture presentation with the flag WVR_SubmitExtend_PartialTexture. The effect looks like presenting the clipping texture in full screen.

The feature can also achieve the dynamic resolution change when the viewport is properly set to specify the affine transformation of rendered scenes from normalized device coordinates to clipping texture coordinates.

//Get the scale factor via the difference of UV coordinates of lower left and top right corner.
float widthFactor = topRightUV[0] - lowerLeftUV[0];
float heightFactor = topRightUV[1] - lowerLeftUV[1];

ViewportWidth = (unsigned int) (OriginWidth * widthFactor);
ViewportHeight = (unsigned int) (OriginHeight * heightFactor);
// For simplicity, it is assumed that the scale factors of width and height are equal.
mScale = widthFactor;
glViewport(0, 0, ViewportWidth, ViewportHeight);

This means that the scene can shrink to fit the size of the clipping texture and be presented in full screen to achieve the resolution change. There are other ways to change resolution; however, this implementation has the least impact because it allocates the texture memory only once and invokes the minimum number of GL commands.
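
A minimal sketch of such a partial-texture submission follows. The WVR_TextureLayout_t field names (leftLowUVs, rightUpUVs) are assumptions based on the Wave SDK headers; verify them against wvr_render.h.

// Present only the lower-left region of the texture; field names are assumptions.
WVR_TextureParams_t leftEyeTexture = WVR_GetTexture(mLeftEyeQ, IndexLeft);
leftEyeTexture.layout.leftLowUVs.v[0] = 0.0f;    // lower-left corner U
leftEyeTexture.layout.leftLowUVs.v[1] = 0.0f;    // lower-left corner V
leftEyeTexture.layout.rightUpUVs.v[0] = mScale;  // upper-right corner U
leftEyeTexture.layout.rightUpUVs.v[1] = mScale;  // upper-right corner V
WVR_SubmitFrame(WVR_Eye_Left, &leftEyeTexture, NULL, WVR_SubmitExtend_PartialTexture);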

To improve performance when the resolution changes from high to low, remember to resize the depth buffer as well. Depending on the texture target, the frame buffer generates and binds the depth buffer in different ways, so there are also different ways to resize it. If the depth buffer was created as a render buffer object, simply set the render buffer storage again with the new dimensions and bind it to the original frame buffer again.

ScaledWidth = (unsigned int) (OriginWidth * mScale);
ScaledHeight = (unsigned int) (OriginHeight * mScale);
glBindRenderbuffer(GL_RENDERBUFFER, DepthBufferId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, ScaledWidth, ScaledHeight);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

If the depth buffer is generated via a texture object (as in the multiview scenario), its format and dimensions are immutable. To change the depth buffer size, delete the existing depth texture and frame buffer and generate them again.

ScaledWidth = (unsigned int) (OriginWidth * mScale);
ScaledHeight = (unsigned int) (OriginHeight * mScale);
if(DepthBufferId != 0) {
    glDeleteTextures(1,&DepthBufferId);
    DepthBufferId = 0;
}

glGenTextures(1, &DepthBufferId);
glBindTexture(GL_TEXTURE_2D_ARRAY, DepthBufferId);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_DEPTH_COMPONENT24, ScaledWidth, ScaledHeight, 2);
glBindTexture(GL_TEXTURE_2D_ARRAY, 0);
if (FrameBufferId != 0) {
    glDeleteFramebuffers(1, &FrameBufferId);
    FrameBufferId = 0;
}
glGenFramebuffers(1, &FrameBufferId);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, FrameBufferId);
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, DepthBufferId, 0, 0, 2);
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, TextureId, 0, 0, 2);

The lower left corner of a resized viewport and cropped texture must align with the original to fit the effective scope of the resized depth buffer. This prevents unexpected visual effects from happening.

Another technique related to resolution is foveated rendering, which keeps only the center region sharp and blurs the outer edges. Applying a dynamic resolution change breaks this behavior, so combining dynamic resolution change with foveation is not recommended.