Create your first HoloLens application with Mixed Reality Toolkit – Part 2

In the previous part of this tutorial, we learned how to create a new HoloLens application by using Unity with the Mixed Reality Toolkit. Now we are going to explore some of the most important features of this development framework, with the help of an example application.

Before starting, you have to follow the environment setup procedure explained in this section of the previous part of this tutorial (install the 2019.1.5f1 version of Unity, including the Universal Windows Platform support SDK). Also, you need to check out this ready-to-use code repository, containing the example application that we are going to analyse. Once cloned, open Unity Hub and select the main folder of the repository by clicking on the “Add” button, then open the added project. Unity will ask you to apply some changes to the project build settings: click on the “Ignore” button. Now, click on the “File > Open Scene” button and select the SampleScene.unity file from the Assets/Scenes folder of the project. At this point, you will be able to see the main scene, as in the image below:

Click on the “File > Build Settings” button and make sure that Universal Windows Platform is the current platform; otherwise, select it and click on the “Switch Platform” button. Now you can build the project by clicking on the “Build” button and then choosing a target build directory. Once the project is built, plug the HoloLens device into your computer and open the generated UWP solution in Visual Studio. You have to choose the right architecture configuration to build the solution, depending on the generation of your device: select x86 if you have a first-generation device, or ARM64 if you have a second-generation one, then click on the “Run on Device” button.

While you are waiting for the build to finish, you can take a look at the HoloLens input gestures, which are used by the example application:

We are ready to play!

On application startup, a permission pop-up will appear on the display, asking you for the authorisation to use the microphone. Once allowed, some virtual objects will appear in front of you, as in the image below:

Let’s start playing. If you turn your head around, you will notice that the “Say Keyboard” panel is following you and always stays oriented towards your eyes. You can test the speech commands by saying the word “Keyboard” aloud: the system keyboard will then appear on the screen.

Close the keyboard and hold the cube on your right by using the single-hand tap-and-hold gesture: you will then be able to move it around the space. If you hold the cube by using the double-hand tap gesture, you will be able to resize and rotate it. The same manipulation gestures are available with the sphere on your left, with the difference that, in this case, the single-hand tap gesture triggers the click and drag events instead of the hold one.

Both the cube and the sphere can interact with each other and with the surrounding space. You can test the collision detection by placing one of the objects on the floor or by throwing it towards the other one. Finally, you can see how the spatial occlusion works by putting one of the objects behind a wall.

You can also run the example application directly in Unity by clicking on the “Play” button. In this case, use the following keyboard shortcuts to simulate the input gestures:

  • Move the camera: W, A, S, D keys.
  • Look around: Right mouse button hold + mouse movement.
  • Bring up the simulated hands: Space or Left Shift keys.
  • Keep the simulated hands: T or Y keys.
  • Rotate the simulated hands: Q or E (horizontal) keys / R or F (vertical) keys.

Let’s take a look at the code behind it

Below, you can see how the example application implements the features described above:

Radial view

The RadialView.cs component makes an object follow you. It’s part of the Solver family of scripts, which facilitates the calculation of an object’s position and orientation. To use this component, you just have to add the related script to the target GameObject, as in the image below. Once added, the SolverHandler.cs component is automatically attached. Check out the Inspector for the “Canvas” GameObject to get more details.

If you only need to make an object face you, use the Billboard.cs component instead: the object will always be oriented towards you, regardless of your position.
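If you prefer to set this up from code rather than in the Inspector, a minimal sketch could look like the one below. It assumes the MRTK 2.x namespaces and property names (Microsoft.MixedReality.Toolkit.Utilities.Solvers, MinDistance, MaxDistance), which may vary slightly between versions; targetObject is a hypothetical reference to the object that should follow you, such as the “Canvas” GameObject of the example scene.

using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

// Minimal sketch: makes targetObject follow the user at runtime.
public class FollowMeSetup : MonoBehaviour
{
    // Hypothetical reference, e.g. the "Canvas" GameObject of the example scene.
    [SerializeField] private GameObject targetObject;

    private void Start()
    {
        // Adding RadialView also attaches a SolverHandler automatically,
        // which tracks the head (camera) by default.
        RadialView radialView = targetObject.AddComponent<RadialView>();

        // Keep the object between 0.5 and 1 metre away from the user.
        radialView.MinDistance = 0.5f;
        radialView.MaxDistance = 1.0f;
    }
}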

Speech commands

Before using a speech command, you have to declare it inside the profile configuration. Click on the “MixedRealityToolkit” GameObject and check out the Inspector on your right. Inside the MixedRealityToolkit.cs component, select the Input tab and click on the “Clone” button, then set a new profile name and confirm. You have to repeat the cloning process for the Speech section. Once cloned, you can finally declare a new custom speech command by clicking on the “Add a New Speech Command” button, filling in the input fields as in the image below:

To react to a previously declared speech command, add the SpeechInputHandler.cs component to a target GameObject. You also need to add the speech command to the keyword list, associating it with a callback function, as in the image below:

Speech commands require the Microphone capability. To enable it, click on the “File > Build Settings > Player Settings” button and check the “Microphone” checkbox in the Capabilities list.
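The callback referenced in the keyword list is just a public method on one of your scripts. As a rough idea of what a handler for the “Keyboard” command could look like (the class and method names below are illustrative, not the ones used in the sample repository), you can open the system keyboard through Unity’s TouchScreenKeyboard API:

using UnityEngine;

public class KeyboardSpeechHandler : MonoBehaviour
{
    private TouchScreenKeyboard keyboard;

    // Public method to be wired to the "Keyboard" keyword
    // in the SpeechInputHandler response list via the Inspector.
    public void ShowSystemKeyboard()
    {
        // Opens the HoloLens system keyboard with an empty text field.
        keyboard = TouchScreenKeyboard.Open(string.Empty);
    }
}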

Manipulation handler

The ManipulationHandler.cs component allows you to manipulate a GameObject with the input gestures. By adding it to a target GameObject, as in the image below, you can move, resize and rotate the object.
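For reference, the same setup can also be done from code. The sketch below assumes the MRTK 2.x namespaces (Microsoft.MixedReality.Toolkit.UI and Microsoft.MixedReality.Toolkit.Input); the NearInteractionGrabbable component is an extra piece typically needed for direct hand grabbing on HoloLens 2.

using UnityEngine;
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;

// Minimal sketch: makes the GameObject this script is attached to manipulable at runtime.
public class MakeManipulable : MonoBehaviour
{
    private void Start()
    {
        // A collider is needed so that pointers can hit the object.
        if (GetComponent<Collider>() == null)
        {
            gameObject.AddComponent<BoxCollider>();
        }

        // Enables the move (one hand) and resize/rotate (two hands) gestures.
        gameObject.AddComponent<ManipulationHandler>();

        // Lets articulated hands (HoloLens 2) grab the object directly.
        gameObject.AddComponent<NearInteractionGrabbable>();
    }
}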

Pointer handler

To detect the click event on a target GameObject, create a new MonoBehaviour script that implements the IMixedRealityPointerHandler interface. This interface requires you to implement the following methods:

void OnPointerDown(MixedRealityPointerEventData eventData);
void OnPointerUp(MixedRealityPointerEventData eventData);
void OnPointerClicked(MixedRealityPointerEventData eventData);
void OnPointerDragged(MixedRealityPointerEventData eventData);

Take a look at the SpherePointerHandler.cs and the CubePointerHandler.cs components to get more details.
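As an illustration of how such a handler could look (a simplified sketch, not the actual code of those components), the following script changes the object’s colour on click and logs drag events:

using UnityEngine;
using Microsoft.MixedReality.Toolkit.Input;

// Simplified sketch of a pointer handler, attached to the target GameObject.
public class SimplePointerHandler : MonoBehaviour, IMixedRealityPointerHandler
{
    private Renderer objectRenderer;

    private void Start()
    {
        objectRenderer = GetComponent<Renderer>();
    }

    public void OnPointerDown(MixedRealityPointerEventData eventData) { }

    public void OnPointerUp(MixedRealityPointerEventData eventData) { }

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        // Assign a random colour on every click.
        objectRenderer.material.color = Random.ColorHSV();
    }

    public void OnPointerDragged(MixedRealityPointerEventData eventData)
    {
        Debug.Log("Dragging " + gameObject.name);
    }
}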

Collision detection

Collision detection is obtained by adding a Collider component to each object of interest. By also adding a Rigidbody component to an object, you put its motion under the control of Unity’s physics engine. Both the Sphere and the Cube have these components.
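For completeness, this is what the equivalent setup looks like in code; it only uses the plain Unity API, nothing MRTK-specific:

using UnityEngine;

// Adds physics behaviour to the GameObject this script is attached to.
public class EnablePhysics : MonoBehaviour
{
    private void Start()
    {
        // The collider defines the shape used for collision detection.
        if (GetComponent<Collider>() == null)
        {
            gameObject.AddComponent<SphereCollider>();
        }

        // The rigidbody puts the object's motion under the control of
        // Unity's physics engine, so gravity and collisions affect it.
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.useGravity = true;
        body.mass = 1.0f;
    }
}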

Spatial occlusion

The Spatial Awareness system allows you to implement spatial occlusion, making the virtual objects behind real-world obstacles invisible. It also enables collision detection between the virtual objects and the surroundings. To activate this system, click on the “MixedRealityToolkit” GameObject and select the Spatial Awareness tab in the Inspector, then set the configuration options as in the image below:

Spatial Awareness requires the Spatial Perception capability. To enable it, click on the “File > Build Settings > Player Settings” button and check the “Spatial Perception” checkbox in the Capabilities list.
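Once the system is enabled in the profile, you can also pause and resume the spatial mesh observers at runtime. The minimal sketch below is based on the MRTK 2.x CoreServices API (exact names may differ between versions) and is not part of the example application:

using UnityEngine;
using Microsoft.MixedReality.Toolkit;

// Minimal sketch: toggles spatial mesh scanning at runtime.
public class SpatialScanToggle : MonoBehaviour
{
    private bool observersRunning = true;

    public void ToggleScanning()
    {
        var spatialAwareness = CoreServices.SpatialAwarenessSystem;
        if (spatialAwareness == null)
        {
            return; // Spatial Awareness is not enabled in the active profile.
        }

        if (observersRunning)
        {
            spatialAwareness.SuspendObservers();  // stop updating the spatial mesh
        }
        else
        {
            spatialAwareness.ResumeObservers();   // resume scanning the surroundings
        }

        observersRunning = !observersRunning;
    }
}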

And that’s it!
If you need more examples, you can get the latest available .Examples Unity package of the Mixed Reality Toolkit from the official GitHub repository, which contains a lot of interesting Unity scenes.

Happy coding!
