
How to Use Multiple Arrays

25/07/2018 · Nicci

Applications can use more than one array to create multiple, separate interaction zones or extend a single one. The Ultrahaptics API allows for communication with multiple arrays simultaneously. In this way, with careful application design, multiple arrays can expand the interaction space.

NOTE: Due to the integrated Leap Motion® device, this article does not apply to the Ultrahaptics STRATOS Inspire.

A few points should be considered when building applications with multiple Ultrahaptics arrays:

  • All Ultrahaptics development kits include a Leap Motion® camera module. While it is possible to use other tracking systems, this tutorial assumes a single Leap Motion® device.
  • Do not use multiple arrays to create coincident control points: currently, array outputs are limited to keep within OSHA guidelines and, while simultaneous control points can be created from more than one array, doing so can exceed this limit.
  • If emitting simultaneously to create non-coincident control points, phase misalignment may result in interference of the acoustic fields and degradation of the control points.
  • Multi Device mode allows an enlarged interaction zone to be created, using the standard input methods.
  • All the devices must be of the same model type. You cannot use a TOUCH Development Kit and a STRATOS Development Kit together.

Adding Multiple Devices

When using more than one Ultrahaptics array, we must define the location of each in our environment. We refer to our environment as global space. Each array has its own device space.

Construct an emitter instance with a device identifier and an optional Ultrahaptics::Transform. The device identifier is the “model:serial number” string.

Emitter (const char *idevice_identifier, Ultrahaptics::Transform transform)

Tip: Use DeviceInfo::getDeviceIdentifier() to find the device’s identifier.

Using Device Transforms

The Ultrahaptics::Transform translates from device space to global space. It holds a 3×3 basis matrix and an origin coordinate. Here’s the constructor:

Ultrahaptics::Transform (Matrix3x3 basis=Matrix3x3::identity(), Vector3 origin={})

The default transform is an identity basis with a zero origin, which aligns the device space with global space.

In the illustration, our first device is at {0,0,0}. Our second is at {-0.2, 0.3, -0.1} metres and rotated 90º in the xy plane.

Tip: Read more about coordinates and alignment.

The basis matrix and origin translation are shown on the left:

Transform and devices

We create our first device with a default transform object:

Ultrahaptics::AmplitudeModulation::Emitter emitter("USX:USX00000001",
                                                   Ultrahaptics::Transform{});

Add the second device with the Ultrahaptics::Emitter::addDevice() method:

emitter.addDevice("USX:USX00000002", my_transform);

where my_transform is constructed using the basis and origin shown above. More devices can be added in the same way.

The device transforms may be queried and changed using the Emitter’s getDeviceTransform() and
setDeviceTransform() methods respectively.

Device Selection Modes

Control points sent to an emitter instance will emit from one device only. There are two modes to control active device selection.

Distance-based selection is the most straightforward: the emitting device is the one whose origin is closest to the control point. You can see this in the picture below. The hand is closest to the right-hand device, which remains active until the hand crosses into the other zone.

Multiple Array Interaction Zones

As the hand crosses between zones the corresponding emitter takes over. The user feels no difference in haptics, just a smooth transition as the “handover” takes place. If a second hand appears in the scene the same rules apply, so that both devices may be active at once.
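The distance rule itself is simple to sketch. The helper below is our own illustration of the selection logic, not an SDK call; the SDK applies this rule internally when Distance Selection mode is active:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Point3 { double x, y, z; };

// Distance-based selection, sketched: return the index of the device whose
// origin (in global space) is closest to the control point.
inline std::size_t closest_device(const std::vector<Point3>& origins,
                                  const Point3& cp) {
    std::size_t best = 0;
    double best_d2 = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < origins.size(); ++i) {
        const double dx = origins[i].x - cp.x;
        const double dy = origins[i].y - cp.y;
        const double dz = origins[i].z - cp.z;
        const double d2 = dx*dx + dy*dy + dz*dz;   // squared distance suffices
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}
```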

Distance selection mode is not suitable for all cases. Suppose two devices are facing each other. With distance selection, the active device depends on the hand’s proximity to each array, so the control point can emit onto the back of the hand. Since the palm and the front of the fingers are the most sensitive parts of the hand, the haptic sensation will not be effective.

Directional Selection mode solves this problem: the active device is selected using the palm direction. The device facing the palm is always selected, rather than the closest device.
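The directional rule can be sketched the same way. Again, this is our own illustration of the idea, not the SDK’s implementation: each device is represented by its emitting direction, and the device whose direction most directly opposes the palm normal (i.e. the device the palm is facing) wins:

```cpp
#include <cstddef>
#include <vector>

struct Dir3 { double x, y, z; };

inline double dot(const Dir3& a, const Dir3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Directional selection, sketched: pick the device whose emitting direction
// most directly opposes the palm normal. The most negative dot product wins.
inline std::size_t facing_device(const std::vector<Dir3>& emit_dirs,
                                 const Dir3& palm_normal) {
    std::size_t best = 0;
    double best_score = dot(emit_dirs[0], palm_normal);
    for (std::size_t i = 1; i < emit_dirs.size(); ++i) {
        const double s = dot(emit_dirs[i], palm_normal);
        if (s < best_score) { best_score = s; best = i; }
    }
    return best;
}
```

With two devices facing each other (one emitting up, one emitting down) and the palm facing downwards, this rule selects the upward-emitting device below the palm, regardless of which array the hand happens to be nearer.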

The emitter uses Distance Selection mode by default. Change the mode using Ultrahaptics::Emitter::setDeviceSelectionMode, and query the selected mode using Ultrahaptics::Emitter::getDeviceSelectionMode.

Tip: To move the camera module to a different position, you will need to modify the Ultrahaptics::Alignment instance. Read our tutorial on alignment.
