Creating an Alexa skill is a lot like cooking a delicious meal. There are several ingredients, and recipes are based on your taste and preferences. The Alexa Skill-Building Cookbook on GitHub gives you the ingredients and recipes you need to build engaging Alexa skills, using short code samples and guidance for adding features to your skill. With each installment of the Alexa skill recipe series, we’ll introduce you to a new recipe that can help you improve your skill design and customer engagement. You’re the chef. Now, let’s get cooking.
When it comes to Alexa skill building, designing a customer experience that feels quick and efficient is important. In a previous post on responsiveness, we talked about improving the perceived latency of the customer experience using the Progressive Response API. We shared how you can add content as a sort of auditory loading indicator while Alexa computes a complex outcome. It's a lot like asking the person you're conversing with to pause while you divert your attention to complete a task. Setting the expectation that a customer will have to wait momentarily is a good idea, since pauses are a natural, fundamental part of dialog. It also confirms that Alexa has understood the request and is in the process of responding to it.
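As a refresher, a progressive response is just a directive your skill enqueues while the handler is still working. Here is a minimal sketch, assuming the ASK SDK v2 for Node.js with an API client configured on the skill builder; the function names and speech text are illustrative, not part of the SDK:

```javascript
// Build the VoicePlayer.Speak directive used by the Progressive
// Response API. The shape below follows the documented directive
// structure; the speech string is just a placeholder.
function buildProgressiveResponse(requestId, speech) {
  return {
    header: { requestId },
    directive: { type: 'VoicePlayer.Speak', speech },
  };
}

// Inside an intent handler (ask-sdk-core), before kicking off the
// slow work, enqueue the directive via the directive service client.
async function sendProgressiveResponse(handlerInput) {
  const { requestId } = handlerInput.requestEnvelope.request;
  const directive = buildProgressiveResponse(
    requestId,
    'Let me look that up for you.'
  );
  await handlerInput.serviceClientFactory
    .getDirectiveServiceClient()
    .enqueue(directive);
}
```

Note that `serviceClientFactory` is only available when the skill builder is configured with an API client, so treat this as a sketch to adapt to your own handler setup.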
Setting the expectation that a customer may have to wait a moment is good, but wouldn't it be even better if we could actually improve the underlying responsiveness of an HTTP request from an Alexa skill? There's a simple strategy we can employ to speed up how content is fetched from APIs. While it comes from performance engineering (a field in and of itself), it's a quick, low-cost, and easy-to-deploy solution: enable a caching mechanism. Caches store data so that future requests for the same data can be served faster. In most cases, the cached data is the result of an earlier request or computation.
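To make the idea concrete before we move it out to API Gateway, here is a simple in-memory sketch of a TTL cache in Node.js. All names here are illustrative; API Gateway implements the same pattern for you at the HTTP layer:

```javascript
// A minimal in-memory cache with a time-to-live (TTL).
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > this.ttlMs) {
      this.store.delete(key); // entry went stale: evict and report a miss
      return undefined;
    }
    return entry.value;
  }
  set(key, value) {
    this.store.set(key, { value, storedAt: Date.now() });
  }
}

// Serve repeated requests for the same query from the cache instead
// of hitting the backend every time.
const cache = new TtlCache(5 * 60 * 1000); // 5-minute TTL
function fetchWithCache(query, fetchFn) {
  const cached = cache.get(query);
  if (cached !== undefined) return Promise.resolve(cached);
  return fetchFn(query).then((result) => {
    cache.set(query, result);
    return result;
  });
}
```

The first call for a given query pays the full round-trip cost; every later call within the TTL window is answered from memory.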
Amazon API Gateway is a fully managed AWS service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Whether or not we own the REST service we're fetching from, we can implement a robust caching mechanism with Amazon API Gateway that should help reduce the latency of subsequent HTTP requests.
Recipe: Improving Latency by Employing a Passthrough Caching Mechanism with API Gateway
API Gateway can be set up as a passthrough API server. This means that any third-party API you connect to from your Lambda function simply passes through API Gateway. It is important to note that caching is charged by the hour and is not eligible for the AWS Free Tier. You should also check the license of your third-party API to make sure that caching its responses is within the terms of use.
Yes, this introduces an extra hop between your customer and their data, though it will be worth it! API Gateway will be configured in such a way that any subsequent requests for the same data (or the same query) from any customer will hit your API Gateway cache. This means the time taken in the journey to do a full round trip to the third-party API endpoint will be greatly reduced.
What's great is that this takes only configuration; no changes to your skill code are required. First, we'll set up the passthrough API by building a new API with an HTTP proxy integration to a third-party endpoint, using a proxy resource.
- Sign in to the API Gateway console at https://console.aws.amazon.com/apigateway.
- To create an API, choose Create new API (for creating the first API) or Create API (for creating any subsequent API). Next, do the following:
- Choose New API.
- Type a name in API Name.
- Optionally, add a brief description in Description.
- Choose Create API.
- You should see a Resources tree to the right of your APIs panel; for a new API it will be largely empty, since it holds your child resources. Select the root of the Resources tree, then choose Create Resource from the Actions drop-down menu and do the following in the New Child Resource pane:
- Select the Configure as proxy resource option to create a proxy resource.
- Type a name in the Resource Name* input text field. Or you can leave this as the default proxy.
- Type a new name or use the default name in the Resource Path* input text field.
- Choose Create Resource.
- Select Enable API Gateway CORS, if required.
- To set up the ANY method for integration with the HTTP backend, do the following:
- The resource just created should be selected in your main viewing panel.
- Choose HTTP Proxy for Integration type.
- Type the HTTP backend resource URL in Endpoint URL. This should be the base URL of your third-party API followed by /{proxy}, for example https://api.example.com/{proxy}. The {proxy} placeholder is the greedy path variable that forwards the rest of the request path (and its query string) to the backend.
- Use default settings for other fields.
- Choose Save to finish configuring the ANY method.
- Choose your proxy resource in the Resources tree, then choose Deploy API from the Actions drop-down menu.
- Fill out the deployment stage information as appropriate and choose Deploy.
Now your Passthrough API is live! Instead of going directly to your third-party URL, you can simply pass your query parameters through your own API.
And now all we have to do is enable caching! When you enable caching, you must choose a cache capacity. In general, a larger capacity gives better performance but also costs more.
In the API Gateway console, you configure caching in the Settings tab of a named Stage Editor.
To configure API caching for a given stage:
- Go to the API Gateway console.
- Navigate to the Stage Editor (Stages) for the stage for which you want to enable caching.
- Choose the Settings tab from the main panel.
- Select Enable API cache.
- Wait for the cache creation to complete.
To verify that caching is functioning as expected, you have a few options:
- Inspect the CacheHitCount and CacheMissCount CloudWatch metrics for your API and stage.
- Put a timestamp in the response.
- Benchmark an API call before and after caching has been enabled.
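The benchmark option can be as simple as timing the same request twice. A rough sketch, where `fetchOnce` stands in for whatever HTTP call your skill makes (the function names here are illustrative):

```javascript
// Time a single call in milliseconds.
async function timeCall(fetchOnce) {
  const start = Date.now();
  await fetchOnce();
  return Date.now() - start;
}

// Issue the same request twice and compare. After caching is enabled,
// the second (warm) call should be noticeably faster, since it is
// likely served from the API Gateway cache.
async function compareColdVsCached(fetchOnce) {
  const cold = await timeCall(fetchOnce); // likely a cache miss
  const warm = await timeCall(fetchOnce); // likely a cache hit
  console.log(`cold: ${cold} ms, warm: ${warm} ms`);
  return { cold, warm };
}
```

Run it a few times rather than once; network jitter can mask the difference on any single pair of calls.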
API Gateway also has plenty of features that make it fast and scalable (e.g., availability in multiple zones, redundancy mechanisms, and staged environments for testing). Take a look at the full developer guide to learn more.
There are also trade-offs to consider with this approach. Introducing a cache means that your data will not always be fresh. Most caching strategies are based on a TTL (time-to-live) mechanism, which means the data inside your cache goes stale after a set time period. For example, with a TTL of 24 hours, your cache will serve the same data for a full day before flushing it and accepting new data. That's why caches don't always work for edge cases that require absolute real-time data. But for most practical purposes, even a brief TTL of a few minutes can do wonders for backend/API load as well as the customer experience.
As I mentioned, you can use the caching mechanism and the Progressive Response API recipes together or individually to improve the responsiveness of your Alexa skill. Start by analyzing your customers' needs and the purpose of your skill to arrive at the right balance.
For more recipes, visit the Alexa Skill-Building Cookbook on GitHub.
More Resources
- Publishing Your Skill Code to Lambda via the Command Line Interface
- Quick Start Alexa Skills Kit Command Line Interface (ASK CLI)
- Alexa Skills Kit SDK for Node.js
- AWS SDK for JavaScript in Node.js
- Request Simplified HTTP client
- Send the User a Progressive Response
Make Money by Creating Engaging Skills Customers Love
You can make money through Alexa skills using in-skill purchasing or Amazon Pay for Alexa Skills. You can also make money for eligible skills that drive some of the highest customer engagement with Alexa Developer Rewards. Download our guide to learn which product best meets your needs.
Build a New Skill, Earn Developer Perks
Bring your big idea to life with Alexa and earn perks through our tiered rewards system. Publish a skill in May and receive an Alexa backpack. If 1,000 customers use your skill in its first 30 days in the Alexa Skills Store, you can also earn a free Echo Plus. Learn more about our promotion and start building today.
Source: Alexa Developer Blog