Please complete the steps in the Setup section before running your application.

The main interfaces are provided in the singleton class GestureProvider. Please ensure that exactly one instance of this script is attached in the scene. Detection results are stored in the GestureResult class.


Vive Hand Tracking SDK requires a real camera device to function, and therefore does not support running in the Unity Editor on platforms other than Windows.

Add Script to the Scene

To use Vive Hand Tracking SDK in your Unity project, add the GestureProvider script to your VR render camera and set its options in the Inspector.

  • For most scenes, this is normally the camera object with the MainCamera tag.
  • If you are using another VR plugin, this may be the head object in its camera prefab.


The GestureProvider script must be attached to your VR render camera, or to an object that has the same position and rotation as your VR render camera.

The transform of GestureProvider is used to make sure detected hand positions are in the correct global coordinate system. Most games add an extra position/rotation offset to the rendering camera; please make sure the same offset is applied to the object the GestureProvider script is attached to.
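If you cannot attach GestureProvider to the camera directly, one option is a small helper script that copies the camera's pose every frame, so the attached GestureProvider shares the camera's transform including any offsets. This is a sketch under stated assumptions; `HandTrackingAnchor` is a hypothetical name, not part of the SDK:

```csharp
using UnityEngine;

// Hypothetical helper: keeps this object's pose in sync with the VR render
// camera, so a GestureProvider attached to this object reports hand
// positions in the correct global coordinate system.
public class HandTrackingAnchor : MonoBehaviour {
  public Transform renderCamera;  // assign your VR render camera in the Inspector

  void LateUpdate() {
    // Copy position and rotation (including any extra offsets applied to
    // the camera) after all camera updates for this frame.
    transform.SetPositionAndRotation(renderCamera.position, renderCamera.rotation);
  }
}
```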

See Sample and Test_WaveVR scenes for how to add GestureProvider script.

Start/Stop detection

GestureProvider handles detection start and stop based on its lifecycle. Detection starts when the script is enabled and stops when the script is disabled or destroyed; the results are cleared when detection stops.

To manually control start/stop of the detection, simply enable/disable the script:

GestureProvider.Current.enabled = true;  // enable the script to start the detection
GestureProvider.Current.enabled = false; // disable the script to stop the detection

Android Camera Permission

On the Android/WaveVR platform, camera permission must be granted before starting the detection. Vive Hand Tracking SDK automatically handles camera permission for WaveVR and Daydream. On most Android phones, however, Unity does not support runtime permission requests before Unity 2018.3. In such cases, you need to add your own logic (or use another Unity plugin) to request camera permission. Make sure GestureProvider is disabled at startup, request camera permission, then enable GestureProvider.
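On Unity 2018.3 or newer, this flow might look like the sketch below, using Unity's `UnityEngine.Android.Permission` API. The coroutine-based polling and the `CameraPermissionGate` class name are assumptions for illustration, not part of the SDK:

```csharp
using System.Collections;
using UnityEngine;
#if UNITY_ANDROID
using UnityEngine.Android;
#endif

public class CameraPermissionGate : MonoBehaviour {
  // Assign the GestureProvider component here and leave it disabled at startup.
  public MonoBehaviour gestureProvider;

  IEnumerator Start() {
#if UNITY_ANDROID
    if (!Permission.HasUserAuthorizedPermission(Permission.Camera)) {
      Permission.RequestUserPermission(Permission.Camera);
      // Wait until the user grants the permission. A real app should also
      // handle the case where the user denies it (this loop would not exit).
      while (!Permission.HasUserAuthorizedPermission(Permission.Camera))
        yield return null;
    }
#endif
    // Start detection only after permission is granted.
    gestureProvider.enabled = true;
  }
}
```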

Getting Detection Result

When detection is running, detected left and right hand results are available as GestureProvider.LeftHand and GestureProvider.RightHand. Each hand contains a pre-defined gesture and the position of the hand. Position has a different meaning for each selected mode. See the Samples section and the GestureResult class for details.

To query current detection mode and status, use GestureProvider.Mode and GestureProvider.Status.

To query if hand results are updated in this frame, use GestureProvider.UpdatedInThisFrame.
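Putting these together, a per-frame query might look like the sketch below. Only `GestureProvider.LeftHand`, `GestureProvider.RightHand`, and `GestureProvider.UpdatedInThisFrame` are described on this page; the `ViveHandTracking` namespace, the `gesture`/`position` member names, and hands being null when not detected are assumptions to be checked against the GestureResult class:

```csharp
using UnityEngine;
using ViveHandTracking;  // assumed plugin namespace

public class HandLogger : MonoBehaviour {
  void Update() {
    // Skip frames where the detection result did not change.
    if (!GestureProvider.UpdatedInThisFrame)
      return;

    GestureResult left = GestureProvider.LeftHand;   // assumed null if not detected
    if (left != null)
      Debug.Log("Left hand gesture: " + left.gesture + " at " + left.position);

    GestureResult right = GestureProvider.RightHand;
    if (right != null)
      Debug.Log("Right hand gesture: " + right.gesture + " at " + right.position);
  }
}
```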

Draw Detected Hands As Skeletons

The plugin contains LeftHandRenderer.prefab and RightHandRenderer.prefab for rendering hands as skeletons (or single points in 2D/3D modes). The prefabs are located in the Assets\ViveHandTracking\Prefab folder. These prefabs utilize the HandRenderer.cs script for rendering detected hands. The script allows customization of display colors and hand colliders.


The displayed hand is shown in the images below. The left image has Show Gesture Color off, while the right image has it on.

[Images: hand1.png (Show Gesture Color off), hand2.png (Show Gesture Color on)]

Draw Detected Hands As 3D Model

The plugin contains LeftHandModel.prefab and RightHandModel.prefab for rendering hands as 3D models in skeleton mode. The prefabs are located in the Assets\ViveHandTracking\Prefab folder. These prefabs utilize the ModelRenderer.cs script for applying skeleton transforms to a 3D model with skinned meshes.

The HandModel.unity scene in the sample folder can be used to try out this feature. The plugin provides a pre-made 3D model for use with the ModelRenderer script. More 3D hand models will be provided in future releases.

Model hands are also available in the sample scene. Make a Like gesture with both hands to switch between the skeleton and 3D model display.