Note: The WebVR API has been replaced by the WebXR API. WebVR was never ratified as a standard, was implemented and enabled by default in only a few browsers, and supported a small number of devices.
The WebVR API is a fantastic addition to the web developer's toolkit, allowing WebGL scenes to be presented in virtual reality displays such as the Oculus Rift and HTC Vive. But how do you get started with developing VR apps for the Web? This article will guide you through the basics.
To get started, you need:
Supporting VR hardware.
A computer powerful enough to handle rendering/displaying of VR scenes using your dedicated VR hardware, if required. To give you an idea of what you need, look at the relevant guide for the hardware you are purchasing (e.g., VIVE READY Computers).
A supporting browser installed — the latest Firefox Nightly or Chrome are your best options right now, on desktop or mobile.
Once you have everything assembled, you can test to see whether your setup works with WebVR by going to our simple A-Frame demo, and seeing whether the scene renders and whether you can enter VR display mode by pressing the button at the bottom right.
A-Frame is by far the best option if you want to create a WebVR-compatible 3D scene quickly, without needing to understand a bunch of new JavaScript code. However, it doesn't teach you how the raw WebVR API works, and that is what we'll move on to next.
To illustrate how the WebVR API works, we'll study our raw-webgl-example.
Note: You can find the source code of our demo on GitHub, and view it live also.
Note: If WebVR isn't working in your browser, you might need to make sure it is running through your graphics card. For example, for NVIDIA cards, if you've got the NVIDIA control panel set up successfully, there will be a context menu option available — right-click on Firefox, then choose Run with graphics processor > High-performance NVIDIA processor.
Our demo features the holy grail of WebGL demos — a rotating 3D cube. We've implemented this using raw WebGL API code. We won't be teaching any basic JavaScript or WebGL, just the WebVR parts.
Our demo also features:

A button to start (and stop) presenting our scene to the VR display.
A button to display (and hide) the VR pose data, i.e., the position and orientation of the headset, updated in real time.
When you look through the source code of our demo's main JavaScript file, you can easily find the WebVR-specific parts by searching for the string "WebVR" in preceding comments.
Note: To find out more about basic JavaScript and WebGL, consult our JavaScript learning material, and our WebGL Tutorial.
At this point, let's look at how the WebVR parts of the code work.
A typical (simple) WebVR app works like this:

Navigator.getVRDisplays() is used to get a reference to your VR display.
VRDisplay.requestPresent() is used to start presenting to the VR display.
WebVR's dedicated VRDisplay.requestAnimationFrame() method is used to run the app's rendering loop at the display's correct refresh rate.
Inside the rendering loop, you grab the data required to display the current frame (VRDisplay.getFrameData()), draw the displayed scene twice (once for the view in each eye), then submit the rendered view to the display (VRDisplay.submitFrame()).
In the sections below we'll look at our raw-webgl-demo in detail, and see exactly where the above features are used.
The first WebVR-related code you'll meet is this following block:
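A hedged reconstruction of that opening block might look like the following. The selector strings, variable names, and the idea of bundling everything into one state object are assumptions for illustration, not the demo's exact code:

```javascript
// Hypothetical reconstruction (names/selectors assumed) of the opening block:
// grab the elements and declare the state the rest of the code relies on.
function initState(doc) {
  const canvas = doc.querySelector("canvas");
  return {
    canvas,
    btn: doc.querySelector("button"), // start/stop presentation button
    gl: canvas.getContext("webgl"), // WebGL context used for all drawing
    vrDisplay: null, // filled in once getVRDisplays() resolves
    frameData: null, // a VRFrameData, refreshed on every VR frame
    normalSceneFrame: null, // handle for the normal rAF loop
    vrSceneFrame: null, // handle for the VRDisplay rAF loop
  };
}
```

The explanations below refer to these pieces of state: the WebGL context, the start/stop button, the current VR display, and the frame-data object.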
Let's briefly explain these:
To begin with, we retrieve a WebGL context to use to render 3D graphics into the <canvas> element in our HTML. We then check whether the gl context is available — if so, we run a number of functions to set up the scene for display.
Next, we start the process of actually rendering the scene onto the canvas, by setting the canvas to fill the entire browser viewport, and running the rendering loop (drawScene()) for the first time. This is the non-WebVR — normal — rendering loop.
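Sketched as a helper, that setup step might look like this (a simplification: the real demo sizes the canvas and calls drawScene() at the top level, not inside a function):

```javascript
// Hypothetical sketch of the non-VR setup described above.
// Sizes the canvas to the browser viewport, then starts the normal loop.
function startNormalLoop(win, canvasEl, drawScene) {
  canvasEl.width = win.innerWidth;
  canvasEl.height = win.innerHeight;
  // drawScene() re-queues itself via requestAnimationFrame internally;
  // here we just kick off the first frame and keep the handle.
  return win.requestAnimationFrame(drawScene);
}
```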
Now onto our first WebVR-specific code. First of all, we check to see if Navigator.getVRDisplays exists — this is the entry point into the API, and therefore good basic feature detection for WebVR. If this doesn't exist, we log a message to indicate that WebVR 1.1 isn't supported by the browser.
The rest of the code goes inside the if (navigator.getVRDisplays) { } block, so that it only runs if WebVR is supported.
We first run the Navigator.getVRDisplays() function. This returns a promise, which is fulfilled with an array containing all the VR display devices connected to the computer. If none are connected, the array will be empty.
Inside the promise then() block, we check whether the array length is more than 0; if so, we set the value of our vrDisplay variable to the 0 index item inside the array. vrDisplay now contains a VRDisplay object representing our connected display!
The rest of the code goes inside the if (displays.length > 0) { } block, so that it only runs if there's at least one VR display available.
Note: It is unlikely that you'll have multiple VR displays connected to your computer, and this is just a simple demo, so this will do for now.
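The detection flow described above might be sketched like this (a simplified approximation of the logic, not the demo's exact code):

```javascript
// Pick the first connected display, or null if none are available.
function pickDisplay(displays) {
  return displays.length > 0 ? displays[0] : null;
}

// Entry-point feature detection plus display retrieval, as described above.
function detectVRDisplay(nav) {
  if (!nav.getVRDisplays) {
    console.log("WebVR 1.1 is not supported by this browser.");
    return Promise.resolve(null);
  }
  return nav.getVRDisplays().then(pickDisplay);
}
```

In the browser you'd call detectVRDisplay(navigator) and stash the result in the vrDisplay variable.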
Now that we have a VRDisplay object, we can use it to do a number of things. The next thing we want to do is wire up functionality to start and stop presentation of the WebGL content to the display.
Continuing on with the previous code block, we now add an event listener to our start/stop button (btn) — when this button is clicked we want to check whether we are presenting to the display already (we do this in a fairly dumb fashion, by checking what the button textContent contains).
If the display is not already presenting, we use the VRDisplay.requestPresent() method to request that the browser start presenting content to the display. This takes as a parameter an array of the VRLayerInit objects representing the layers you want to present in the display.
Since the maximum number of layers you can display is currently 1, and the only required object member is the VRLayerInit.source property (a reference to the <canvas> you want to present in that layer; the other parameters, such as leftBounds and rightBounds, are given sensible defaults), the parameter is [{ source: canvas }].
requestPresent() returns a promise that is fulfilled when the presentation begins successfully.
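A minimal sketch of this start branch, assuming the button labels and function names used here:

```javascript
// Build the layer list passed to VRDisplay.requestPresent().
// Only one layer is currently allowed, and only `source` is required.
function makeVRLayers(canvasEl) {
  return [{ source: canvasEl }];
}

// Hypothetical "start" branch of the button's click handler.
function startPresenting(display, canvasEl, button) {
  return display.requestPresent(makeVRLayers(canvasEl)).then(() => {
    // Runs once presentation has successfully begun.
    button.textContent = "Stop VR display"; // assumed label
  });
}
```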
With our presentation request successful, we now want to start setting up to render content to present to the VRDisplay. First of all we set the canvas to the same size as the VR display area. We do this by getting the VREyeParameters for both eyes using VRDisplay.getEyeParameters().
We then do some simple math to calculate the total size of the VRDisplay rendering area based on each eye's VREyeParameters.renderWidth and VREyeParameters.renderHeight.
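The sizing math might look like this, assuming the common side-by-side layout in which each eye gets half of the canvas width:

```javascript
// Total drawing-buffer size for side-by-side stereo rendering:
// each eye gets its own renderWidth, stacked horizontally.
function vrCanvasSize(leftEye, rightEye) {
  return {
    width: Math.max(leftEye.renderWidth, rightEye.renderWidth) * 2,
    height: Math.max(leftEye.renderHeight, rightEye.renderHeight),
  };
}
```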
Next, we cancel the animation loop previously set in motion by the Window.requestAnimationFrame() call inside the drawScene() function, and instead invoke drawVRScene(). This function renders the same scene as before, but with some special WebVR magic going on. The loop inside here is maintained by WebVR's special VRDisplay.requestAnimationFrame method.
Finally, we update the button text so that the next time it is pressed, it will stop presentation to the VR display.
To stop the VR presentation when the button is subsequently pressed, we call VRDisplay.exitPresent(). We also reverse the button's text content, and swap over the requestAnimationFrame calls. You can see here that we are using VRDisplay.cancelAnimationFrame to stop the VR rendering loop, and starting the normal rendering loop off again by calling drawScene().
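A hedged sketch of this stop branch; the function shape and button label are assumptions:

```javascript
// Hypothetical "stop" branch: leave VR presentation, halt the VR loop,
// and resume the normal rendering loop.
function stopPresenting(display, win, vrFrameHandle, drawScene, button) {
  display.exitPresent();
  display.cancelAnimationFrame(vrFrameHandle); // stop the VR loop
  button.textContent = "Start VR display"; // assumed label
  return win.requestAnimationFrame(drawScene); // restart the normal loop
}
```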
Once the presentation starts, you'll be able to see the stereoscopic view displayed in the browser.
You'll learn below how the stereoscopic view is actually produced.
This is a good question. The reason is that, for smooth rendering inside the VR display, you need to render the content at the display's native refresh rate, not that of the computer. VR display refresh rates are greater than typical PC refresh rates, up to 90fps, and will often differ from the computer's refresh rate.
Note that when the VR display is not presenting, VRDisplay.requestAnimationFrame runs identically to Window.requestAnimationFrame, so if you wanted, you could just use a single rendering loop, rather than the two we are using in our app. We have used two because we wanted to do slightly different things depending on whether the VR display is presenting or not, and keep things separated for ease of comprehension.
At this point, we've seen all the code required to access the VR hardware, request that we present our scene to the hardware, and start running the rendering loop. Let's now look at the code for the rendering loop, and explain how the WebVR-specific parts of it work.
First of all, we begin the definition of our rendering loop function — drawVRScene(). The first thing we do inside here is make a call to VRDisplay.requestAnimationFrame() to keep the loop running after it has been called once (this occurred earlier on in our code when we started presenting to the VR display). This call is set as the value of the global vrSceneFrame variable, so we can cancel the loop with a call to VRDisplay.cancelAnimationFrame() once we exit VR presenting.
Next, we call VRDisplay.getFrameData(), passing it the name of the variable that we want to use to contain the frame data. We initialized this earlier on — frameData. After the call completes, this variable will contain the data needed to render the next frame to the VR device, packaged up as a VRFrameData object. This contains things like projection and view matrices for rendering the scene correctly for the left and right eye views, and the current VRPose object, which contains data on the VR display such as orientation and position.
This has to be called on every frame so the rendered view is always up-to-date.
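Putting the loop's skeleton together (a simplified sketch: renderBothEyes() stands in for the eye-rendering code discussed below, and the demo itself uses globals rather than parameters):

```javascript
// Skeleton of one VR frame, following the steps described above.
function vrFrameStep(display, frameData, loopFn, renderBothEyes) {
  // Re-queue the loop at the display's native refresh rate.
  const handle = display.requestAnimationFrame(loopFn);
  // Must run every frame: refreshes the matrices and pose in frameData.
  display.getFrameData(frameData);
  renderBothEyes(frameData);
  // Hand the finished frame to the headset (see submitFrame later on).
  display.submitFrame();
  return handle;
}
```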
Now we retrieve the current VRPose from the VRFrameData.pose property, store the position and orientation for use later on, and send the current pose to the pose stats box for display, if the poseStatsDisplayed variable is set to true.
We now clear the canvas before we start drawing on it, so that the next frame is clearly seen, and we don't also see previous rendered frames:
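The clearing step is plain WebGL, roughly:

```javascript
// Clear the colour and depth buffers so previous frames don't show through.
function clearCanvas(gl) {
  gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
}
```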
We now render the view for both the left and right eyes. First of all we need to create projection and view locations for use in the rendering. These are WebGLUniformLocation objects, created using the WebGLRenderingContext.getUniformLocation() method, passing it the shader program's identifier and an identifying name as parameters.
The next rendering step involves:

Specifying the viewport size for the left eye, using WebGLRenderingContext.viewport(). This covers the left half of the canvas width, and the full canvas height.
Applying the projection and view matrix values for the left eye. This is done using WebGLRenderingContext.uniformMatrix4fv(), which passes VRFrameData.leftProjectionMatrix and VRFrameData.leftViewMatrix to the two uniform locations we created above.
Running the drawGeometry() function, which renders the actual scene.
We now do exactly the same thing, but for the right eye:
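Both eye passes can be sketched as one hypothetical helper. Here, locs holds the WebGLUniformLocation objects retrieved earlier with getUniformLocation(), and drawGeometry() is the demo's cube-drawing routine; the parameter names are assumptions:

```javascript
// Render one eye's view into its half of the canvas.
function renderEye(gl, eye, width, height, frameData, locs, drawGeometry) {
  const isLeft = eye === "left";
  // Left eye draws into the left half, right eye into the right half.
  gl.viewport(isLeft ? 0 : width * 0.5, 0, width * 0.5, height);
  gl.uniformMatrix4fv(
    locs.projection,
    false,
    isLeft ? frameData.leftProjectionMatrix : frameData.rightProjectionMatrix,
  );
  gl.uniformMatrix4fv(
    locs.view,
    false,
    isLeft ? frameData.leftViewMatrix : frameData.rightViewMatrix,
  );
  drawGeometry();
}
```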
Next we define our drawGeometry() function. Most of this is just general WebGL code required to draw our 3D cube. You'll see some WebVR-specific parts in the mvTranslate() and mvRotate() function calls — these pass matrices into the WebGL program that define the translation and rotation of the cube for the current frame.
You'll see that we are modifying these values by the position (curPos) and orientation (curOrient) of the VR display, which we got from the VRPose object. The result is that, for example, as you move or rotate your head left, the x position value (curPos[0]) and y rotation value (curOrient[1]) are added to the x translation value, meaning that the cube will move to the right, as you'd expect when you are looking at something and then move/turn your head left.
This is a quick and dirty way to use VR pose data, but it illustrates the basic principle.
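The adjustment amounts to arithmetic along these lines; the scale factor and the exact signs are assumptions, chosen so that leftward head movement shifts the cube to the right, as described above:

```javascript
// Offset a base translation by the headset's position and orientation.
// Quick and dirty, as noted above: not a mathematically correct camera.
function poseAdjustedTranslation(base, curPos, curOrient, scale) {
  return [
    base[0] - curPos[0] * scale + curOrient[1] * scale, // x: position + yaw
    base[1] - curPos[1] * scale + curOrient[0] * scale, // y: position + pitch
    base[2] - curPos[2] * scale, // z: position only
  ];
}
```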
The next bit of the code has nothing to do with WebVR — it just updates the rotation of the cube on each frame:
The last part of the rendering loop involves calling VRDisplay.submitFrame(). Now that all the work has been done and we've rendered the scene on the <canvas>, this method submits the frame to the VR display so it is displayed there as well.
In this section we'll discuss the displayPoseStats() function, which displays our updated pose data on each frame. The function is fairly simple.
First of all, we store the six different property values obtainable from the VRPose object in their own variables — each one is a Float32Array.
We then write out the data into the information box, updating it on every frame. We've clamped each value to three decimal places using toFixed(), as the values are hard to read otherwise.
You should note that we've used a conditional expression to check whether the linear acceleration and angular acceleration arrays are successfully returned before we display the data. Most VR hardware does not report these values yet, so the code would throw an error if we did not do this (the properties are null when the values are not reported).
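That null check can be sketched with a hypothetical formatting helper like this:

```javascript
// Format one pose vector for the stats box; acceleration vectors may be null.
function formatPoseVec(label, vec) {
  if (!vec) {
    return `${label}: not reported`;
  }
  // Clamp to three decimal places for readability, as described above.
  return `${label}: ${Array.from(vec, (v) => v.toFixed(3)).join(" ")}`;
}
```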
The WebVR spec features a number of events that are fired, allowing our app code to react to changes in the state of the VR display (see Window events). For example:

vrdisplaypresentchange: fires when the presenting state of a VR display changes, i.e., goes from presenting to not presenting, or vice versa.
vrdisplayconnect: fires when a compatible VR display has been connected to the computer.
vrdisplaydisconnect: fires when a compatible VR display has been disconnected from the computer.
To demonstrate how they work, our simple demo includes the following example:
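A hedged reconstruction of such a listener (the message text is an assumption):

```javascript
// React to presentation starting or stopping on a display.
function watchPresentChange(target, log) {
  target.addEventListener("vrdisplaypresentchange", (e) => {
    log(`Presenting to display ${e.display.displayId} changed: ${e.reason}`);
  });
}
```

In the demo you'd call watchPresentChange(window, console.log), since these events fire on the Window object.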
As you can see, the VRDisplayEvent object provides two useful properties — VRDisplayEvent.display, which contains a reference to the VRDisplay the event was fired in response to, and VRDisplayEvent.reason, which contains a human-readable reason why the event was fired.
This is a very useful event; you could use it to handle cases where the display gets disconnected unexpectedly, stopping errors from being thrown and making sure the user is aware of the situation. In Google's webvr.info presentation demo, the event is used to run an onVRPresentChange() function, which updates the UI controls as appropriate and resizes the canvas.
This article has given you the very basics of how to create a simple WebVR 1.1 app, to help you get started.
This page was last modified on Jul 18, 2025 by MDN contributors.