WebVR: How to Create Virtual Worlds in JavaScript

Introduction

The era of Virtual Reality computing is about to begin. Analysts forecast that VR will become a $30 billion industry by 2020. It’s also a lot of fun. In this talk, Peter O’Shaughnessy teaches us how to build virtual reality web applications for Samsung Gear VR and Google Cardboard, using the new WebVR API along with Mozilla’s A-Frame and Three.js.

Watch the video on YouTube below and enjoy:

[00:00] I’d like to talk today about creating virtual reality Web applications using WebVR. It’s quite an interesting and cool time for this at the moment, because we could be just at the start of a new era.

[00:24] As our friend, Mark Zuckerberg, said, “Virtual reality is the next major computing platform after mobile.” Of course, in 2016 we’ve had a lot of these devices reach consumers. From the top: the HTC Vive, the Oculus Rift, the Gear VR in the middle. We’ve got Google Cardboard, and Sony’s virtual reality headset coming soon as well.

[00:54] Hopefully, most of you have tried virtual reality by now. Actually, could I just get a show of hands: who’s tried one of these devices out at least once? Most of you. That’s good.

[01:04] You’ve seen this, but the really interesting thing is that it gives you a sense of presence of being inside this environment, this virtual environment that you can create. It’s quite a powerful feeling.

[01:17] [laughter]

[01:19] As Josh Carpenter, who was previously working on VR at Mozilla and is now at Google, said: “I’ve made a lot of desktop and mobile applications,” but nothing he ever created was going to make someone laugh, cry, recoil or scream, or get the kind of reaction that virtual reality can.

[01:39] We’re going from creating what are effectively 2D planes to now branching out into full 3D virtual worlds. When you think about what that offers to us as web designers and web developers, it’s really powerful.

[01:59] We can use this new API, which is now a W3C Community Group spec, so it’s on the way to standardization. It’s been worked on for a while. What it gives us, first of all, is the ability to discover what headsets are attached to either our desktop computer or our mobile phone, which could be slotted into something like the Gear VR.

[02:27] With that display, we can say, “launch full‑screen on that virtual reality display” and launch our content in virtual reality. It also gives us the integration into the sensors on those virtual reality devices, so we can tap into the orientation and the position in order to render the next frame of the scene appropriately for the user, depending on the way they’re looking.

[02:50] It lets us handle rendering for the different kinds of hardware, the different kinds of lenses, and the different distortion that we need to apply to get it looking right on those devices. Essentially, it opens up this world of all these different devices. We don’t have to just code natively for one. We can use WebVR and tap into them all.

[03:14] What we have to do, basically, is just render our scene twice, once for each eye, making a stereoscopic scene. Using the values that WebVR can supply to us, we can apply the right distortion to those images, which means that when you look through the lenses, the scene is blown up in the right way to fill your vision.

[03:40] These are the browsers that are known to be working on WebVR: they either have a virtual reality browser out there already or have announced that they’re working on it and bringing it soon. Chrome and Firefox have been working on this for quite a while. We’ve got our browser there on the right‑hand side; that’s the Samsung Internet logo. Just earlier this month, Microsoft Edge announced that they were working on this as well.

[04:10] It’s not yet available by default in the stable versions of the browsers, but it’s a simple process to get one of these downloaded and set up. With Chrome on the desktop, there’s a special build that you can grab if you go to webvr.info; there’s the link there. You just download that, switch on the flag in the Chrome flags, and you have WebVR enabled.

[04:35] Firefox is similar, but you can use the Nightly build. There’s a WebVR Enabler add‑on that you can install, and that sets up WebVR for you.

[04:47] With Samsung Internet on Android, you can slot your phone into your Gear VR headset. By default, whatever page you’re looking at on your phone will then launch in a window in virtual reality; I’ll show you in a minute. You can then expand that up to fill your whole view. You just need to go to internet://webvrenable once to enable it. As mentioned, Microsoft Edge support is in development.

[05:20] This, hopefully you can see it at the back, and I will read it out. This is basically showing the building blocks that we either need, or can use, to build virtual reality web applications. On the bottom left, we have a virtual reality headset with sensors: something like the Oculus Rift, the HTC Vive, or the Gear VR comes under that [inaudible] . In that case, we just need a WebVR‑capable web browser.

[05:53] Another thing I should probably mention is that you may need the right operating system, because Chrome and Firefox are still Windows‑only, since Oculus Rift moved away from the Mac a while ago.

[06:07] If you get a WebVR browser, then you can use it with WebVR headsets. If you have one of those simple smartphone enclosures like a Google Cardboard, which doesn’t have its own sensors but does have its own lenses, you can use the WebVR polyfill, which was created by Boris from Google. You can then use another mobile web browser, say just the standard Chrome on Android, run the same demos and everything, and it works on Google Cardboard, too.

[06:45] Those things are kind of the minimum requirements for virtual reality on the Web. On top of that, generally speaking, you’re probably going to be rendering [inaudible] with WebGL, which is the 3D graphics API for the web. On top of that, you may well wish to use Three.js, which is a really popular 3D graphics library that a lot of people use with this, and it certainly makes things a lot easier than writing WebGL API code directly.

[07:15] I’m also going to talk about A‑Frame, which sits on top of Three.js and uses it underneath, and makes things even easier to develop. It includes the WebVR polyfill for us and sets up all the WebVR stuff for us. It makes it really easy.

[07:34] First, just to give you a bit of an idea about the actual WebVR API itself: just this month we had a new version of it come out, version 1.1, though probably they’re going to just call it WebVR. It has a few tweaks and modifications. The original version, version zero, dates back to about 2013 now, and it’s been through a few alterations and been updated a bit.

[08:06] As I mentioned, the API can give you the set of headsets available, which in reality is probably going to be one. You can call navigator.getVRDisplays(). Using the VRDisplay object that we get back, you can call requestPresent(). This is a bit like the Fullscreen API: you can say “present this” and give it a source, the canvas element where you’re rendering your 3D scene. That will launch it full screen on the headset.
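
For reference, that discovery and presenting flow looks roughly like this (a sketch against the WebVR 1.1 API; the `canvas` and `enterVRButton` elements here are stand-ins for your own page’s elements):

```js
// Discover attached headsets and present our canvas to the first one.
navigator.getVRDisplays().then(function (displays) {
  if (displays.length === 0) { return; }   // no headset available
  var vrDisplay = displays[0];             // in practice, usually just one

  // Like the Fullscreen API, requestPresent must be triggered by a user gesture.
  enterVRButton.addEventListener('click', function () {
    vrDisplay.requestPresent([{ source: canvas }]).then(function () {
      // Now presenting: the canvas is being shown on the headset.
    });
  });
});
```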

[08:40] You can also use requestAnimationFrame on the VR display. This is like window.requestAnimationFrame, but it uses the right refresh rate for the headset, which is typically more than the 60Hz we normally get in a web browser. You want that high refresh rate in virtual reality to make sure that lag is as minimal as possible as people move their heads around and see the scene update. It can be 90Hz, it could even be 120Hz.

[09:13] When we’re ready, we can call submitFrame(), which says “take what’s on my canvas and actually render it”. We would also want to use getEyeParameters(), which gives us data like the eye offset, the distance between your pupil and the center point between your eyes, and the recommended render target width and height, to help us construct the scene in the right way and create two viewports, one for each eye.

[09:47] Similarly, using getFrameData(), we can get at a VRPose. From that we can take the position, orientation, acceleration and velocity, which help us predict what we should render for the next frame.
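
Put together, the per-frame loop looks something like this (a sketch only; `vrDisplay` is the display obtained in the previous snippet, and the actual WebGL drawing is omitted):

```js
var frameData = new VRFrameData();   // reusable container for the per-frame pose and matrices

function onVRFrame() {
  // Runs at the headset's refresh rate (90Hz or more), not the browser's usual 60Hz.
  vrDisplay.requestAnimationFrame(onVRFrame);

  vrDisplay.getFrameData(frameData);   // fills in the pose plus view/projection matrices
  var pose = frameData.pose;           // position, orientation, velocity, acceleration

  var leftEye = vrDisplay.getEyeParameters('left');
  var rightEye = vrDisplay.getEyeParameters('right');
  // ...use the eye offsets and recommended render sizes to draw two viewports,
  // one half of the canvas per eye, from the point of view described by `pose`...

  vrDisplay.submitFrame();             // hand the finished canvas frame to the headset
}

vrDisplay.requestAnimationFrame(onVRFrame);
```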

[10:06] As I mentioned, that’s how we use WebVR to display things properly, but we do actually need to generate our 3D scene. For that, generally, we’d use WebGL. I won’t go into WebGL details in this talk; I’m going to skip straight on to Three.js, which is a really good library that you can use on top of WebGL and which makes it nice and easy.

[10:33] Most of you have seen Three.js demos by now; there’s lots of really cool stuff on the website. It’s worth doing just a quick Three.js demo. We’re going to make a spinning cube. We can create a renderer. Three.js doesn’t just render to WebGL, but that’s generally what it’s used for, so we create a WebGLRenderer. You can set the size of that, and then we append the DOM element that it gives us into our document body.

[11:06] We have a Three.js Scene object, which encompasses all of the objects that we want to display in this 3D scene. We create a camera, which defines the viewing angle for the user. It has a field of view and an aspect ratio, and you set near and far clipping values, which means that things within that near-to-far range will be displayed.

[11:32] You can set the position of the camera. The z axis comes out towards us in the positive direction, so if we set the camera’s z position to a hundred, we’re moving it out this way so that we can see things that are positioned at z zero. Then we add our camera to the scene, and then we can create our actual cube.

[11:52] In Three.js, our 3D objects are called meshes. They encompass a geometry, which defines the actual shape (in this case, a box geometry with width, height and depth of 50), and a material, which defines what’s actually painted onto that 3D shape. In this case, we’re just going with the basic material, which means it’s going to show up whether or not there’s any lighting, and we just set it to the color red. Then we can add that cube to the scene, too.
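
A minimal sketch of that setup, using the classic script-tag Three.js API of the time (the specific sizes and field of view here are just illustrative):

```js
// Renderer: Three.js is generally used with WebGL, so we create a WebGLRenderer,
// set its size, and drop the canvas element it gives us into the document body.
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Scene: encompasses all of the objects we want to display.
var scene = new THREE.Scene();

// Camera: field of view, aspect ratio, and near/far clipping values.
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 1, 10000);
camera.position.z = 100;   // pull the camera back along z so we can see objects at z = 0
scene.add(camera);

// Mesh = geometry (the shape) + material (what's painted onto it).
var geometry = new THREE.BoxGeometry(50, 50, 50);
var material = new THREE.MeshBasicMaterial({ color: 'red' });   // shows up without any lighting
var cube = new THREE.Mesh(geometry, material);
scene.add(cube);
```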

[12:24] Finally, we just need to animate our cube. We create an animate function, and in it we can update the rotation value of the cube, which is in radians. This animate function is the one that’s going to get called every time we’re ready to render the next frame of the scene.

[12:43] We use requestAnimationFrame for this. That says, “hey browser, call this back whenever you’re ready for the next frame.” Then we can say, “renderer, render our scene from the perspective of the camera.” We just call animate once to kick that off.
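
Continuing the sketch above, the render loop is just a few lines:

```js
function animate() {
  requestAnimationFrame(animate);   // ask the browser to call us back for the next frame
  cube.rotation.y += 0.01;          // rotation values are in radians
  renderer.render(scene, camera);   // draw the scene from the camera's perspective
}

animate();   // call once to kick the loop off
```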

[13:03] That gives us this, which may not seem like much, but we’ve got a 3D scene now. We can go from there and build up to our virtual reality worlds.

[13:16] If we want to now take this and turn it into a virtual reality demo, we can use a couple of things that Three.js also provides. It gives us VRControls, which sets things up so that it’s using the headset’s orientation and automatically updates the camera according to that.

[13:37] Also VREffect, so we don’t need to worry ourselves about defining two cameras, one for each eye, and updating them all appropriately. We can just render via the VREffect: it can take our one camera and apply it to the two eyes.
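
Wiring those into the cube demo looks roughly like this (a sketch, assuming the VRControls.js and VREffect.js files from the Three.js examples folder of the time are included alongside Three.js):

```js
// VRControls reads the headset's orientation/position and applies it to our camera;
// VREffect wraps the renderer so the scene is drawn once per eye with the right distortion.
var controls = new THREE.VRControls(camera);
var effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

function animateVR() {
  requestAnimationFrame(animateVR);   // while presenting, the VR display's own rAF is preferable
  cube.rotation.y += 0.01;
  controls.update();                  // update the camera from the headset sensors
  effect.render(scene, camera);       // render two viewports, one for each eye
}

animateVR();
```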

[13:59] It’s pretty easy to do virtual reality web demos with Three.js. Before I get onto that, this is now taking that idea of the cube and stepping it up a little bit. There’s an example on the Three.js website, WebVR Cubes, which is probably a good one to get started with. The source is up there, and you can see in it what I was describing.

[14:25] If you put your phone running Samsung Internet into a Gear VR, it will pop up in this window here. You might be able to see this little button that says “Enter VR.” If you gaze at that and tap the side, it will launch into your full virtual reality display and look like this, but filling your whole vision.

[14:54] Even easier than using Three.js is to use a fairly new library, first released last year, from Mozilla, called A‑Frame. A‑Frame, as I briefly mentioned before, includes the WebVR polyfill for us and handles all of the virtual reality setup for us, so we really can just concentrate on our 3D scenes.

[15:21] All we have to do to start using it is import the A‑Frame JavaScript, probably in the head of the page. In our body, we can define a scene like this. It’s using the custom elements API; these are basically just HTML elements. We have an a-scene which, as you can imagine, maps to a Three.js scene underneath, and that sets up various things for us.

[15:47] It contains the rest of our objects. Inside that, we can include things like our sphere, and our box, and our cylinder, the things that make up the scene here.

[16:01] And we have a plane for the floor. On each of these elements, we can define attributes that set its properties: things like the position, the radius, the color, the rotation.

[16:16] Finally, we just add a sky, which determines the background for the whole of the 3D scene. That gives us this. Without having to do anything else, this is a 3D scene that we can look around; here, we’re just using the trackpad. If you had a virtual reality headset on, you could just move your head around. By default it also has WASD controls, so it gives you keyboard controls to move around as well.
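
The markup being described is along these lines (a sketch based on the standard A‑Frame “hello world” scene of the 0.x era; the exact positions, sizes and colors here are just illustrative):

```html
<html>
  <head>
    <!-- Import the A-Frame JavaScript in the head of the page -->
    <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Attributes on each element set its properties: position, radius, color, rotation... -->
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
      <!-- A plane for the floor -->
      <a-plane rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <!-- The sky determines the background for the whole scene -->
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```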

[16:50] If this was in a mobile browser and we had a Google Cardboard to hand, we could press that little Cardboard icon. That would then render the scene twice, once for each eye, and we could just slot the phone into the Cardboard display.

[17:11] To make a slightly more interesting demo with A‑Frame, I wanted to make a Flickr carousel to view llamas on a mountaintop. Every day is made a better day if you view a few pictures of llamas.

[17:27] We’re going to use a couple of different components. I mentioned that you can set properties of elements by just defining an attribute on those elements. Those attributes come through from components: A‑Frame has this system of entities and components, with components being the actual attributes on those elements.

[17:51] We can include this one called the layout component, which isn’t in A‑Frame by default but is a component from one of the core contributors; it’s just on GitHub, and you can find it there.

[18:03] Then, on any entity, we can define this layout attribute, and that means anything that we put inside this entity will be laid out in whichever way we define. In this case, as we’re making our circular carousel, we can say, “make it a circle of radius three,” and then anything we chuck inside it will just be laid out in that circle for us.

[18:38] Now we actually want to generate our images and put them inside that container element. We can dynamically create a set of a-image elements, which are images in the A‑Frame world. To these we can apply a source attribute using the URLs that the Flickr API gives us back.

[19:02] On each image we can set the width and the height. There’s one other component that we want to use as well: a look‑at component, again from Kevin, the same guy who created the layout component.

[19:16] This is the easiest way to say, “I want these images to face me at 0,0,0 in 3D space.” We can just define the look‑at attribute on them and they will all be rotated around towards us. Then we can just append our image elements into our container. From this, you can see that you can interact with A‑Frame just through normal DOM elements and through normal DOM manipulation, which is pretty neat.
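
A rough sketch of that approach (the `#carousel` container, its layout attribute, and the `photoUrls` array are hypothetical stand-ins here, not the demo’s actual code):

```js
// The container entity is declared in the scene markup with the layout component, e.g.
//   <a-entity id="carousel" layout="type: circle; radius: 3"></a-entity>
var container = document.querySelector('#carousel');

// photoUrls stands in for the list of image URLs returned by the Flickr API.
photoUrls.forEach(function (url) {
  var image = document.createElement('a-image');   // an image in the A-Frame world
  image.setAttribute('src', url);
  image.setAttribute('width', 1.5);
  image.setAttribute('height', 1);
  image.setAttribute('look-at', '0 0 0');           // the look-at component turns it to face us
  container.appendChild(image);                     // plain DOM manipulation
});
```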

[19:48] It also means that A‑Frame plays nicely with other UI frameworks and libraries, like React, for example. The other thing we need to do is register for clicks on our elements, because it’d be nice with our carousel if we could just click. That click could be a tap on the side of the Gear VR headset, or there’s the button on the new Cardboards now that just…

[20:19] I can show it to you all afterwards, because I’ve got them here, if you’ve got some time. You can press a little button on the Google Cardboard and it just presses on the screen, so it can see when I click.

[20:30] It’d be nice if we could do that and add some interactivity, so this carousel updates its position. To do that in A‑Frame, we want to have a cursor. The cursor is… if I just go back, you might just be able to see that in the middle there’s a little black circle. That’s our cursor, and it’s showing where we’re focusing in the scene.

[20:52] By default, A‑Frame includes the camera for us. If we don’t define one, there will be one there already and it will be set up with the WASD controls and the look controls, as I mentioned, but we can define our own camera and we can put whatever controls we want on it.
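
In markup, a custom camera with a gaze cursor inside it looks something like this (a sketch of the usual A‑Frame 0.x pattern rather than the demo’s exact markup; the choice of controls is explained next):

```html
<!-- Our own camera: look controls only, no WASD controls, with a gaze cursor as a child -->
<a-entity camera look-controls>
  <a-cursor></a-cursor>
</a-entity>
```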

[21:08] In this case, I’ve taken off the WASD controls, because we don’t want anyone moving outside of their llama carousel; they can just look around. It just defines the look controls. If we just had a pre‑canned animation, this would be very easy in A‑Frame, because on any element that you want to animate, you can define an a-animation.

[21:30] Just place it inside that element, and you can even say, for example, begin on click. That means as soon as you click on that element, this animation will happen; it will be triggered. If we just wanted one animation to, say, rotate 360 degrees, we could easily do that by adding an a-animation and saying, “change the rotation of this element to 360 degrees.”

[21:55] One thing just to mention there: whereas underneath, Three.js uses radians for angles, A‑Frame uses degrees, which maybe is slightly friendlier, and it translates them into radians for us underneath. You can set a duration, and fill just means that when the animation finishes it stays at the end point, if you set it to forwards.
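
That pre-canned version would look something like this (a sketch of standard a-animation usage from A‑Frame 0.x, attached to a box purely for illustration; the actual demo uses the dynamic approach described next):

```html
<a-box position="0 1 -3" color="#4CC3D9">
  <!-- Rotate this element 360 degrees (A-Frame angles are in degrees) when it is clicked,
       over a 2-second duration, staying at the end point when finished -->
  <a-animation attribute="rotation" to="0 360 0" dur="2000" fill="forwards" begin="click"></a-animation>
</a-box>
```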

[22:19] In our case, we can’t just use a pre‑canned animation, because we want to increment the rotation each time that you click. Thankfully, we can hook into Tween.js, which is another library.

[22:34] As well as depending on Three.js, A‑Frame also depends on and includes Tween.js, so we can hook into that, take control of animations, and do whatever we want with JavaScript to make them dynamic as well.

[22:47] The other thing worth mentioning here is that the DOM element, which is el here, has the Three.js object defined on it under object3D. If you’re writing some JavaScript to make an A‑Frame scene and you want to hook into the underlying Three.js objects, you can do that using el.object3D.

[23:13] In this case, we want to update the rotation of that Three.js object to whatever our target rotation value is, and you can set a duration, which we’ll see in a second. Then you set that tween to start, and that will start the animation.
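
A rough sketch of that idea, assuming the Tween.js bundled with A‑Frame 0.x is reachable as a global `TWEEN` (in some builds it is exposed as `AFRAME.TWEEN`, and you can always include tween.js yourself); the `#carousel` element is the same hypothetical container as before:

```js
var carouselEl = document.querySelector('#carousel');

// Increment the target by 45 degrees' worth of radians on each click,
// then tween the underlying Three.js object's rotation towards it.
var targetRotation = carouselEl.object3D.rotation.y + Math.PI / 4;

new TWEEN.Tween(carouselEl.object3D.rotation)
  .to({ y: targetRotation }, 500)   // animate the y rotation over 500ms
  .start();                         // start it (if tweens aren't updated by the render loop,
                                    // call TWEEN.update() yourself each frame)
```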

[23:32] One more thing, which is just slightly cut off: we need to define our sky for the scene around us, because it would be a bit dull to just be in some blank space with the llamas in front of us. It would be much nicer to be on a nice, serene mountaintop. We can just define an image, and if we put it inside a-assets, this will use A‑Frame’s asset management system, which will help it to perform well.

[24:02] Things will be kicked off when these assets are actually loaded up. We define an image with the ID mountainsky, and then inside our a-sky, we can reference that image from the assets and say, “use this source image.”
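
In markup that looks something like this (a sketch; the image path is a placeholder):

```html
<a-scene>
  <a-assets>
    <!-- Preload the 360-degree background image through A-Frame's asset management system -->
    <img id="mountainsky" src="mountain-top.jpg">
  </a-assets>

  <!-- Reference the preloaded asset by ID as the sky's source image -->
  <a-sky src="#mountainsky"></a-sky>

  <!-- ...carousel and camera entities go here... -->
</a-scene>
```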

[24:24] Then, we have something like this, which is our A‑Frame scene. Not too many llamas there but if I click, you can see it spinning round and we can see some llamas here. There’s a good one.

[24:44] This is, again, just a scene that we can use on the desktop with our trackpad or mouse: we can spin around and we can click, so it works on desktop. But it works better still if you have a virtual reality headset. You can pop the phone in there and experience this filling your whole vision, feeling like you’re on the mountaintop with llama images flying around you.

[25:13] Another demo that I wanted to show. This one was created by my colleague Aida and I just need to click inside it, I think, to use it. This one is a racer demo, which I can’t seem to do left and right on because then it’s controlling the slide.

[25:30] Normally you could spin it around left and right and, in fact, when you have the headset on, it uses the orientation. You can just tilt your head and it turns one way or another and you can tap the button or press the button to accelerate forwards.

[25:47] This was built using A‑Frame. Incredibly, for what it is, it didn’t take that much code or that much time for Aida to do, but it’s a really cool scene. We’ve got both of those demos and some more on our GitHub if anyone wants to see all the source code for them.

[26:10] I’ll put these slides up online for you all afterwards as well. Where next? We’ve been able to build some cool virtual reality experiences already. Some of the things that I’m hoping to try next are using the touchpad of the Gear VR, so we should be able to hook into swipes as well.

[26:32] It’d be nice to be able to swipe forwards and backwards to control things. If you’ve got an HTC Vive, you can use A‑Frame with that, and you can use the controllers that come with the HTC Vive. A‑Frame have just released a really cool demo.

[26:48] It’s like a version of Tilt Brush, which you might’ve seen, and it uses A‑Frame. It would be really neat to try and do some things with that. I haven’t gone into adding in things like physics and game engines, which of course you could get into to build up some really nice experiences.

[27:06] Web Audio, adding some 3D audio so that you can hear things in the scene coming from the right place, is something that you could do, and that would be really neat, too.

[27:17] What worlds will you create? I hope that if you get any ideas and you create something cool, please share them with us. Thank you.

[27:27] [applause]

[27:27] [music]