Today we announced that Echo Spot is now shipping to customers in India. Echo Spot combines the power of voice with a visual display in a compact design to deliver magical voice experiences for customers. A custom skill for Echo Spot can include an interactive touch display in its response, in addition to standard voice interactions.
For skill developers, voice-enabled devices with a screen create unique opportunities to enrich voice interactions with visual content. Here we show how you can build engaging voice-first skills for Echo Spot.
How to Detect a Device Display
Customers respond to a skill with different responses and actions depending on whether or not they see a screen while using it. To support both types of interaction, your skill service code first needs to detect whether the device has a display.
Here’s an example where we detect whether a device has a display and then generate the graphical user interface (GUI) using one of the body templates in the Alexa Skills Kit. First, for your skill to serve display devices, you need to enable the Display interface for your skill in the developer console.
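If you manage your skill with the ASK CLI instead of the console, the same setting lives in the skill manifest. Here’s a minimal sketch of the relevant portion of skill.json, assuming the standard manifest layout (only the interfaces entry is shown):
{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          { "type": "RENDER_TEMPLATE" }
        ]
      }
    }
  }
}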
The JSON request that your skill receives includes all the information you need to determine whether the device has a screen display and whether it supports other interfaces, like AudioPlayer and VideoApp. Let’s look closely at the JSON requests received from two Alexa devices: Echo (no display screen) and Echo Spot (display screen).
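For reference, here are abridged, illustrative versions of the device node from those requests (most fields are omitted, and the exact contents can vary by device and request):
Echo (no display screen):
"device": {
  "deviceId": "...",
  "supportedInterfaces": {
    "AudioPlayer": {}
  }
}
Echo Spot (display screen):
"device": {
  "deviceId": "...",
  "supportedInterfaces": {
    "AudioPlayer": {},
    "Display": {
      "templateVersion": "1.0",
      "markupVersion": "1.0"
    }
  }
}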
Step 1: Include this helper function in your skill code to detect whether the device has a display. As you can see from the JSON above, to determine whether the device supports display, we need to check whether the “Display” node exists within the “supportedInterfaces” node of the request we receive. Here’s the helper function that can do that for you:
// returns a truthy value if the skill is running on a device with a display
function supportsDisplay() {
  var hasDisplay =
    this.event.context &&
    this.event.context.System &&
    this.event.context.System.device &&
    this.event.context.System.device.supportedInterfaces &&
    this.event.context.System.device.supportedInterfaces.Display;
  return hasDisplay;
}
Step 2: Call the helper function from within your intent handler to check whether the device has a display.
suggestPizza: function () {
  // check if the device has a display by calling our supportsDisplay helper,
  // invoked with .call(this) so the helper can read the request context
  if (supportsDisplay.call(this)) {
    // device has a display
  } else {
    // device does not have a display
  }
}
Step 3: Respond differently (display vs. no-display)
Generally speaking, customers respond to a skill with different responses and actions depending on whether or not they see a screen while using it. Now that your skill can detect whether a device has a display, your skill service code should reflect this difference and support both types of interactions.
Here’s an example where, after detecting whether a device has a display, we generate the GUI using one of the body templates provided by the Alexa Skills Kit.
const Alexa = require('alexa-sdk');
const makePlainText = Alexa.utils.TextUtils.makePlainText;
const makeRichText = Alexa.utils.TextUtils.makeRichText;
const makeImage = Alexa.utils.ImageUtils.makeImage;
'suggestPizza': function () {
  var speechOutput;
  var title = 'Veggie Delite';
  var toppings = 'Golden Corn, Black Olives, Capsicum and a lot of cheese';
  var description = 'We suggest the Veggie Delite pizza which has ' + toppings + '. Yum!';
  var imageURL = 'https://i.imgur.com/rpcYKDD.jpg';
  // check if the device has a display by calling our supportsDisplay helper,
  // invoked with .call(this) so the helper can read the request context
  if (supportsDisplay.call(this)) {
    // device has a display: render a body template in addition to the speech output
    speechOutput = description;
    // building display directive
    const builder = new Alexa.templateBuilders.BodyTemplate1Builder();
    const template = builder.setTitle(title)
      .setBackgroundImage(makeImage(imageURL))
      .setTextContent(makeRichText(description), null, null)
      .build();
    this.response.renderTemplate(template);
  } else {
    // device does not have a display: respond with speech only
    speechOutput = "Here's your " + title + " pizza, which contains " + toppings + '.';
  }
  this.response.speak(speechOutput);
  this.emit(':responseReady');
}
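BodyTemplate1 is only one of the available body templates. As a rough sketch (not taken from the original example), the display branch could instead use BodyTemplate2Builder from the SDK’s templateBuilders API to place a foreground image next to the text, which tends to suit Echo Spot’s small round screen; it reuses the title, description, and imageURL variables defined above.
// minimal sketch: render BodyTemplate2 (text with a foreground image)
const bt2Builder = new Alexa.templateBuilders.BodyTemplate2Builder();
const bt2Template = bt2Builder.setTitle(title)
  .setImage(makeImage(imageURL))
  .setTextContent(makePlainText(description), null, null)
  .build();
this.response.renderTemplate(bt2Template);
See the “Choose the Right Template on Echo Spot” resource below for guidance on picking between templates.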
Testing Your Skill on Echo Spot
You can test your skill on your Echo Spot device (provided the device is registered to the same Amazon account as your Amazon Developer account), or you can use the new Echo Spot simulator on the Test page of the Alexa Skills Kit developer console.
More Resources
Check out some additional resources for designing voice-first skills for devices with screens.
- Designing Skills for Echo Show: Choosing the Right Display Template
- Best Practices for Designing Skills for Echo Devices With a Screen
- Display Interface and Template Reference
- Test for Screen-Based Interaction Issues in Your Alexa Skill
- Choose the Right Template on Echo Spot
- Alexa Skills Kit SDK for Node.js – Using Display Interface
Webinar: Designing Multimodal Skills for Alexa
Learn to design skills that shine across all Alexa-enabled devices including Echo Spot. Join our upcoming webinar to learn how to add imagery, video, and formatted text content. Register now to reserve your spot.
Source: Alexa Developer Blog