# Alexa Skills Kit SDK for Node.js
<!-- TOC -->

- [Alexa Skills Kit SDK for Node.js](#alexa-skills-kit-sdk-for-nodejs)
    - [Overview](#overview)
    - [Setup Guide](#setup-guide)
    - [Getting Started: Writing a Hello World Skill](#getting-started-writing-a-hello-world-skill)
        - [Basic Project Structure](#basic-project-structure)
        - [Set Entry Point](#set-entry-point)
        - [Implement Handler Functions](#implement-handler-functions)
    - [Response vs ResponseBuilder](#response-vs-responsebuilder)
        - [Tips](#tips)
    - [Standard Request and Response](#standard-request-and-response)
    - [Interfaces](#interfaces)
        - [AudioPlayer Interface](#audioplayer-interface)
        - [Dialog Interface](#dialog-interface)
            - [Delegate Directive](#delegate-directive)
            - [Elicit Slot Directive](#elicit-slot-directive)
            - [Confirm Slot Directive](#confirm-slot-directive)
            - [Confirm Intent Directive](#confirm-intent-directive)
        - [Display Interface](#display-interface)
        - [Playback Controller Interface](#playback-controller-interface)
        - [VideoApp Interface](#videoapp-interface)
        - [Skill and List Events](#skill-and-list-events)
    - [Services](#services)
        - [Device Address Service](#device-address-service)
        - [List Management Service](#list-management-service)
        - [Directive Service](#directive-service)
    - [Extend Features](#extend-features)
        - [Skill State Management](#skill-state-management)
        - [Persisting Skill Attributes through DynamoDB](#persisting-skill-attributes-through-dynamodb)
        - [Adding Multi-Language Support for Skill](#adding-multi-language-support-for-skill)
        - [Device ID Support](#device-id-support)
        - [Speechcons (Interjections)](#speechcons-interjections)
    - [Setting up your development environment](#setting-up-your-development-environment)

<!-- /TOC -->

## Overview

The Alexa SDK team is proud to present the new **Alexa Node.js SDK** -- the open-source Alexa Skill Development Kit built by developers for developers.

Creating an Alexa skill using the Alexa Skills Kit, Node.js and AWS Lambda has become one of the most popular ways we see skills created today. The event-driven, non-blocking I/O model of Node.js is well-suited for an Alexa skill, and Node.js has one of the largest ecosystems of open source libraries in the world. Plus, AWS Lambda is free for the first one million calls per month, which is sufficient for most Alexa skill developers. Also, when using AWS Lambda you don't need to manage any SSL certificates, since the Alexa Skills Kit is a trusted trigger.

Setting up an Alexa skill using AWS Lambda, Node.js and the Alexa Skills Kit has always been a simple process; the amount of code you have had to write, however, has not been so small. The Alexa SDK team has now built an Alexa Skills Kit SDK specifically for Node.js that will help you avoid common hang-ups and focus on your skill's logic instead of boilerplate code.

With the new alexa-sdk, our goal is to help you build skills faster while allowing you to avoid unneeded complexity. Today, we are launching the SDK with the following capabilities:

- Hosted as an NPM package, allowing simple deployment to any Node.js environment
- Ability to build Alexa responses using built-in events
- Helper events for new sessions and unhandled requests that can act as 'catch-all' handlers
- Helper functions to build state-machine based intent handling
    - This makes it possible to define different event handlers based on the current state of the skill
- Simple configuration to enable attribute persistence with DynamoDB
- All speech output is automatically wrapped as SSML
- Lambda event and context objects are fully available via `this.event` and `this.context`
- Ability to override built-in functions, giving you more flexibility in how you manage state or build responses (for example, saving state attributes to AWS S3)

## Setup Guide
The alexa-sdk is available on [Github](https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs) and can be installed as a Node.js package using the following command from your Node.js environment:
```
npm install --save alexa-sdk
```
## Getting Started: Writing a Hello World Skill
### Basic Project Structure
Your HelloWorld skill needs to have:
- an entry point to your skill, where you import all packages needed by the skill, receive the events, set the appId, set the DynamoDB table, register handlers, and so on;
- handler functions that handle each request.

### Set Entry Point
To do this within your own project, simply create a file named index.js and add the following to it:
```javascript
const Alexa = require('alexa-sdk');

exports.handler = function(event, context, callback) {
    const alexa = Alexa.handler(event, context, callback);
    alexa.appId = APP_ID; // APP_ID is your skill id, which can be found in the Amazon developer console where you create the skill.
    alexa.execute();
};
```
This will import alexa-sdk and set up an Alexa object for us to work with.

### Implement Handler Functions
Next, we need to handle the events and intents for our skill. Alexa-sdk makes it simple to have a function fire on an intent. You can implement the handler functions in the index.js file you just created, or write them in separate files and import them later. For example, to create a handler for 'HelloWorldIntent', we can do it in two ways:
```javascript
const handlers = {
    'HelloWorldIntent' : function() {
        // emit response directly
        this.emit(':tell', 'Hello World!');
    }
};
```
Or
```javascript
const handlers = {
    'HelloWorldIntent' : function() {
        // build response first using responseBuilder and then emit
        this.response.speak('Hello World!');
        this.emit(':responseReady');
    }
};
```

Alexa-sdk follows a tell/ask response methodology for generating the outputSpeech response objects, corresponding to speak/listen in responseBuilder.
```javascript
this.emit(':tell', 'Hello World!');
this.emit(':ask', 'What would you like to do?', 'Please say that again?');
```
which is equivalent to:
```javascript
this.response.speak('Hello World!');
this.emit(':responseReady');

this.response.speak('What would you like to do?')
            .listen('Please say that again?');
this.emit(':responseReady');
```
The difference between :ask/listen and :tell/speak is that after a :tell/speak action, the session is ended without waiting for the user to provide more input. We will compare the two ways of creating the response object -- using response or using responseBuilder -- in the next section.

The handlers can forward requests to each other, making it possible to chain handlers together for better user flow. Here is an example where our LaunchRequest and IntentRequest (of HelloWorldIntent) both return the same 'Hello World' message.
```javascript
const handlers = {
    'LaunchRequest': function () {
        this.emit('HelloWorldIntent');
    },

    'HelloWorldIntent': function () {
        this.emit(':tell', 'Hello World!');
    }
};
```

Once we have set up our event handlers, we need to register them using the registerHandlers function of the alexa object we just created. So in the index.js file, add the following:
135
136```javascript
137const Alexa = require('alexa-sdk');
138
139exports.handler = function(event, context, callback) {
140 const alexa = Alexa.handler(event, context, callback);
141 alexa.registerHandlers(handlers);
142 alexa.execute();
143};
144```
145You can also register multiple handler objects at once:
146```javascript
147alexa.registerHandlers(handlers, handlers2, handlers3, ...);
148```
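For example, a larger skill might keep its handlers in separate modules and register them together. A minimal sketch, assuming hypothetical files `coreHandlers.js` and `audioHandlers.js` that each export a plain handler object:
```javascript
// index.js -- the file and module names here are only for illustration
const Alexa = require('alexa-sdk');
const coreHandlers = require('./coreHandlers');   // e.g. LaunchRequest, HelloWorldIntent
const audioHandlers = require('./audioHandlers'); // e.g. AudioPlayer.* handlers

exports.handler = function(event, context, callback) {
    const alexa = Alexa.handler(event, context, callback);
    alexa.registerHandlers(coreHandlers, audioHandlers);
    alexa.execute();
};
```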
149Once you finish the above steps, your skill should work properly on the device.
150
151
## Response vs ResponseBuilder

Currently, there are two ways to generate the [response objects](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#Response%20Format) in the Node.js SDK. The first way uses syntax that follows the format this.emit(`:${action}`, 'responseContent').
Here is a full list of examples for common skill responses:
156
157|Response Syntax | Description |
158|----------------|-----------|
159| this.emit(':tell',speechOutput);|Tell with [speechOutput](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#outputspeech-object)|
160|this.emit(':ask', speechOutput, repromptSpeech);|Ask with [speechOutput](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#outputspeech-object) and [repromptSpeech](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#reprompt-object)|
161|this.emit(':tellWithCard', speechOutput, cardTitle, cardContent, imageObj);| Tell with [speechOutput](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#outputspeech-object) and [standard card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object)|
162|this.emit(':askWithCard', speechOutput, repromptSpeech, cardTitle, cardContent, imageObj);| Ask with [speechOutput](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#outputspeech-object), [repromptSpeech](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#reprompt-object) and [standard card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object)|
163|this.emit(':tellWithLinkAccountCard', speechOutput);| Tell with [linkAccount card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object), for more information, click [here](https://developer.amazon.com/docs/custom-skills/link-an-alexa-user-with-a-user-in-your-system.html)|
164|this.emit(':askWithLinkAccountCard', speechOutput);| Ask with [linkAccount card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object), for more information, click [here](https://developer.amazon.com/docs/custom-skills/link-an-alexa-user-with-a-user-in-your-system.html)|
165|this.emit(':tellWithPermissionCard', speechOutput, permissionArray);| Tell with [permission card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#session-object), for more information, click [here](https://developer.amazon.com/docs/custom-skills/configure-permissions-for-customer-information-in-your-skill.html)|
166|this.emit(':askWithPermissionCard', speechOutput, repromptSpeech, permissionArray)| Ask with [permission card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#session-object), for more information, click [here](https://developer.amazon.com/docs/custom-skills/configure-permissions-for-customer-information-in-your-skill.html)|
167|this.emit(':delegate', updatedIntent);|Response with [delegate directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#delegate) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
168|this.emit(':elicitSlot', slotToElicit, speechOutput, repromptSpeech, updatedIntent);|Response with [elicitSlot directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#elicitslot) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
169|this.emit(':elicitSlotWithCard', slotToElicit, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj);| Response with [card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object) and [elicitSlot directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#elicitslot) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
170|this.emit(':confirmSlot', slotToConfirm, speechOutput, repromptSpeech, updatedIntent);|Response with [confirmSlot directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#confirmslot) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
171|this.emit(':confirmSlotWithCard', slotToConfirm, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj);| Response with [card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object) and [confirmSlot directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#confirmslot) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
172|this.emit(':confirmIntent', speechOutput, repromptSpeech, updatedIntent);|Response with [confirmIntent directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#confirmintent) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
|this.emit(':confirmIntentWithCard', speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj);| Response with [card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object) and [confirmIntent directive](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#confirmintent) in [dialog model](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#dialog-model-required)|
|this.emit(':responseReady');|Called after the response is built but before it is returned to the Alexa service. Calls :saveState. Can be overridden.|
175|this.emit(':saveState', false);|Handles saving the contents of this.attributes and the current handler state to DynamoDB and then sends the previously built response to the Alexa service. Override if you wish to use a different persistence provider. The second attribute is optional and can be set to 'true' to force saving.|
176|this.emit(':saveStateError'); |Called if there is an error while saving state. Override to handle any errors yourself.|
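
As an illustration of the card variants in the table above, here is a hedged sketch of a handler that emits `:tellWithCard` and one that emits `:askWithCard`; the intent names, card text and image URLs are placeholders:
```javascript
const handlers = {
    'HelloWorldIntent': function () {
        const speechOutput = 'Hello World!';
        const cardTitle = 'Hello World';
        const cardContent = 'This text is shown on the card in the Alexa app.';
        const imageObj = {
            smallImageUrl: 'https://url/to/smallImage.png',
            largeImageUrl: 'https://url/to/largeImage.png'
        };
        // Speaks, renders a standard card, and ends the session
        this.emit(':tellWithCard', speechOutput, cardTitle, cardContent, imageObj);
    },
    'MenuIntent': function () {
        // Speaks, renders a card, keeps the session open and reprompts on silence
        this.emit(':askWithCard', 'What would you like to do?', 'Please say that again?',
            'Menu', 'Say one of the options shown on this card.');
    }
};
```
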
177
178
If you want to manually create your own responses, you can use `this.response` to help. `this.response` contains a series of functions that you can use to set the different properties of the response. This allows you to take advantage of the Alexa Skills Kit's built-in audio and video player support. Once you've set up your response, you can just call `this.emit(':responseReady')` to send your response to Alexa. The functions within `this.response` are also chainable, so you can use as many as you want in a row.
Here is the full list of responseBuilder functions for creating a response:
181
182|Response Syntax | Description |
183|----------------|-----------|
184|this.response.speak(speechOutput);| Set the first speech output to [speechOutput](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#outputspeech-object)|
185|this.response.listen(repromptSpeech);| Set the reprompt speech output to [repromptSpeech](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#reprompt-object), shouldEndSession to false. Unless this function is called, this.response will set shouldEndSession to true.|
186|this.response.cardRenderer(cardTitle, cardContent, cardImage);| Add a [standard card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object) with cardTitle, cardContent and cardImage in response|
187|this.response.linkAccountCard();| Add a [linkAccount card](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#card-object) in response, for more information, click [here](https://developer.amazon.com/docs/custom-skills/link-an-alexa-user-with-a-user-in-your-system.html)|
|this.response.askForPermissionsConsentCard(permissions);| Add a card to ask for [permission](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#session-object) in response, for more information, click [here](https://developer.amazon.com/docs/custom-skills/configure-permissions-for-customer-information-in-your-skill.html)|
189|this.response.audioPlayer(directiveType, behavior, url, token, expectedPreviousToken, offsetInMilliseconds);(Deprecated) | Add an [AudioPlayer directive](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html) with provided parameters in response.|
190|this.response.audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds);| Add an [AudioPlayer directive](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html) using the provided parameters, and set [`AudioPlayer.Play`](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#play) as the directive type.|
191|this.response.audioPlayerStop();| Add an [AudioPlayer.Stop directive](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#stop)|
192|this.response.audioPlayerClearQueue(clearBehavior);|Add an [AudioPlayer.ClearQueue directive](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#clearqueue) and set the clear behaviour of the directive.|
193|this.response.renderTemplate(template);| Add a [Display.RenderTemplate directive](https://developer.amazon.com/docs/custom-skills/display-interface-reference.html) in response|
194|this.response.hint(hintText, hintType);| Add a [Hint directive](https://developer.amazon.com/docs/custom-skills/display-interface-reference.html#hint-directive) in response|
195|this.response.playVideo(videoSource, metadata);|Add a [VideoApp.Play directive](https://developer.amazon.com/docs/custom-skills/videoapp-interface-reference.html#videoapp-directives) in response|
196|this.response.shouldEndSession(bool);| Set shouldEndSession manually|
197
When you have finished setting up your response, simply call `this.emit(':responseReady')` to send your response off.
Below are two examples that build a response with several response objects:
```javascript
// Example 1
this.response.speak(speechOutput)
            .listen(repromptSpeech);
this.emit(':responseReady');

// Example 2
this.response.speak(speechOutput)
            .cardRenderer(cardTitle, cardContent, cardImage)
            .renderTemplate(template)
            .hint(hintText, hintType);
this.emit(':responseReady');
```
Since responseBuilder is more flexible for building rich response objects, we prefer this method for building responses.
213
### Tips
- When any of the response events (`:ask`, `:tell`, `:askWithCard`, etc.) is emitted, the Lambda context.succeed() method is called if the developer doesn't pass in a `callback` function, which immediately stops processing of any further background tasks. Any asynchronous jobs that are still incomplete will not be completed, and any lines of code below the response emit statement will not be executed. This is not the case for non-responding events like `:saveState`.
- To "transfer" a request from one state handler to another (known as intent forwarding), `this.handler.state` needs to be set to the name of the target state. If the target state is "", then `this.emit("TargetHandlerName")` should be called. For any other state, `this.emitWithState("TargetHandlerName")` must be called instead.
- The contents of the prompt and reprompt values get wrapped in SSML tags. This means that any special XML characters within the value need to be escaped. For example, this.emit(":ask", "I like M&M's") will cause a failure because the `&` character needs to be encoded as `&amp;`. Other characters that need to be encoded include: `<` -> `&lt;`, and `>` -> `&gt;`. A small escaping helper is sketched after this list.
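
As a minimal sketch of that last tip, you could run any user- or data-derived text through a small escaping helper before emitting it. This helper is not part of alexa-sdk; it is shown here only for illustration:
```javascript
// Hypothetical helper -- not an alexa-sdk API
function escapeSsmlCharacters(text) {
    return text
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
}

// Usage inside a handler:
// this.emit(':tell', 'I like ' + escapeSsmlCharacters("M&M's"));
```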
218
## Standard Request and Response
Alexa communicates with the skill service via a request-response mechanism using HTTP over SSL/TLS. When a user interacts with an Alexa skill, your service receives a POST request containing a JSON body. The request body contains the parameters necessary for the service to perform its logic and generate a JSON-formatted response. Since Node.js can handle JSON natively, the Alexa Node.js SDK doesn't need to do JSON serialization and deserialization. Developers are only responsible for providing a proper response object in order for Alexa to respond to a customer request. The documentation on the JSON structure of the request body can be found [here](https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html#request-format).
221
222A SpeechletResponse may contain the following attributes:
223- OutputSpeech
224- Reprompt
225- Card
226- List of Directives
227- shouldEndSession
228
229As an example, a simple response containing both speech and a card can be constructed as follows:
230
231```javascript
232const speechOutput = 'Hello world!';
233const repromptSpeech = 'Hello again!';
234const cardTitle = 'Hello World Card';
235const cardContent = 'This text will be displayed in the companion app card.';
236const imageObj = {
237 smallImageUrl: 'https://imgs.xkcd.com/comics/standards.png',
238 largeImageUrl: 'https://imgs.xkcd.com/comics/standards.png'
239};
240this.response.speak(speechOutput)
241 .listen(repromptSpeech)
242 .cardRenderer(cardTitle, cardContent, imageObj);
243this.emit(':responseReady');
244```
245
246
## Interfaces

### AudioPlayer Interface
Developers can include the following directives in their skill responses:
251- `PlayDirective`
252- `StopDirective`
253- `ClearQueueDirective`
254
255Here is an example of using `PlayDirective` to stream audio:
256```javascript
257const handlers = {
258 'LaunchRequest' : function() {
259 const speechOutput = 'Hello world!';
260 const behavior = 'REPLACE_ALL';
261 const url = 'https://url/to/audiosource';
262 const token = 'myMusic';
263 const expectedPreviousToken = 'expectedPreviousStream';
264 const offsetInMilliseconds = 10000;
265 this.response.speak(speechOutput)
266 .audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds);
267 this.emit(':responseReady');
268 }
269};
270```
271In the above example, Alexa will speak the `speechOutput` first and then try to play audio.
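
The Stop and ClearQueue directives can be added in the same way. A hedged sketch, assuming your skill handles the corresponding built-in intents:
```javascript
const handlers = {
    'AMAZON.PauseIntent' : function() {
        // Stops the current stream; you could store the offset to support resuming later
        this.response.audioPlayerStop();
        this.emit(':responseReady');
    },
    'AMAZON.CancelIntent' : function() {
        // 'CLEAR_ALL' clears the queue and stops the current stream; 'CLEAR_ENQUEUED' keeps it playing
        this.response.audioPlayerClearQueue('CLEAR_ALL');
        this.emit(':responseReady');
    }
};
```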
272
When building skills that leverage the [AudioPlayer](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html) interface, `playback` requests will be sent to notify the skill about changes to the playback state. You can implement handler functions for their respective events.
274```javascript
275const handlers = {
276 'AudioPlayer.PlaybackStarted' : function() {
277 console.log('Alexa begins playing the audio stream');
278 },
279 'AudioPlayer.PlaybackFinished' : function() {
280 console.log('The stream comes to an end');
281 },
282 'AudioPlayer.PlaybackStopped' : function() {
283 console.log('Alexa stops playing the audio stream');
284 },
285 'AudioPlayer.PlaybackNearlyFinished' : function() {
        console.log('The currently playing stream is nearly complete and the device is ready to receive a new stream');
287 },
288 'AudioPlayer.PlaybackFailed' : function() {
289 console.log('Alexa encounters an error when attempting to play a stream');
290 }
291};
292```
293
294Additional documentation about `AudioPlayer` interface can be found [here](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html).
295
296Note: for specifications regarding the `imgObj` please see [here](https://developer.amazon.com/docs/custom-skills/include-a-card-in-your-skills-response.html#creating-a-home-card-to-display-text-and-an-image)
297### Dialog Interface
298The `Dialog` interface provides directives for managing a multi-turn conversation between your skill and the user. You can use the directives to ask the user for the information you need to fulfill their request. See the [Dialog Interface](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/dialog-interface-reference) and [Skill Editor](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/ask-define-the-vui-with-gui) documentation for more information on how to use dialog directives.
299
300You can use `this.event.request.dialogState` to access current `dialogState`.
301
302#### Delegate Directive
303Sends Alexa a command to handle the next turn in the dialog with the user. You can use this directive if the skill has a dialog model and the current status of the dialog (`dialogState`) is either `STARTED` or `IN_PROGRESS`. You cannot emit this directive if the `dialogState` is `COMPLETED`.
304
305You can use `this.emit(':delegate')` to send delegate directive response.
306```javascript
307const handlers = {
308 'BookFlightIntent': function () {
309 if (this.event.request.dialogState === 'STARTED') {
310 let updatedIntent = this.event.request.intent;
311 // Pre-fill slots: update the intent object with slot values for which
312 // you have defaults, then emit :delegate with this updated intent.
313 updatedIntent.slots.SlotName.value = 'DefaultValue';
314 this.emit(':delegate', updatedIntent);
315 } else if (this.event.request.dialogState !== 'COMPLETED'){
316 this.emit(':delegate');
317 } else {
318 // All the slots are filled (And confirmed if you choose to confirm slot/intent)
319 handlePlanMyTripIntent();
320 }
321 }
322};
323```
324
325#### Elicit Slot Directive
326Sends Alexa a command to ask the user for the value of a specific slot. Specify the name of the slot to elicit in the `slotToElicit`. Provide a prompt to ask the user for the slot value in `speechOutput`.
327
328You can use `this.emit(':elicitSlot', slotToElicit, speechOutput, repromptSpeech, updatedIntent)` or `this.emit(':elicitSlotWithCard', slotToElicit, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj)` to send elicit slot directive response.
329
330When using `this.emit(':elicitSlotWithCard', slotToElicit, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj)`, `updatedIntent` and `imageObj` are optional parameters. You can set them to `null` or not pass them.
331```javascript
332const handlers = {
333 'BookFlightIntent': function () {
334 const intentObj = this.event.request.intent;
335 if (!intentObj.slots.Source.value) {
336 const slotToElicit = 'Source';
337 const speechOutput = 'Where would you like to fly from?';
338 const repromptSpeech = speechOutput;
339 this.emit(':elicitSlot', slotToElicit, speechOutput, repromptSpeech);
340 } else if (!intentObj.slots.Destination.value) {
341 const slotToElicit = 'Destination';
342 const speechOutput = 'Where would you like to fly to?';
343 const repromptSpeech = speechOutput;
344 const cardContent = 'What is the destination?';
345 const cardTitle = 'Destination';
346 const updatedIntent = intentObj;
347 // An intent object representing the intent sent to your skill.
        // You can use this property to set or change slot values and confirmation status if necessary.
349 const imageObj = {
350 smallImageUrl: 'https://imgs.xkcd.com/comics/standards.png',
351 largeImageUrl: 'https://imgs.xkcd.com/comics/standards.png'
352 };
353 this.emit(':elicitSlotWithCard', slotToElicit, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj);
354 } else {
355 handlePlanMyTripIntentAllSlotsAreFilled();
356 }
357 }
358};
359```
360
361#### Confirm Slot Directive
362Sends Alexa a command to confirm the value of a specific slot before continuing with the dialog. Specify the name of the slot to confirm in the `slotToConfirm`. Provide a prompt to ask the user for confirmation in `speechOutput`.
363
364You can use `this.emit(':confirmSlot', slotToConfirm, speechOutput, repromptSpeech, updatedIntent)` or `this.emit(':confirmSlotWithCard', slotToConfirm, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj)` to send confirm slot directive response.
365
366When using `this.emit(':confirmSlotWithCard', slotToConfirm, speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj)`, `updatedIntent` and `imageObj` are optional parameters. You can set them to `null` or not pass them.
367```javascript
368const handlers = {
369 'BookFlightIntent': function () {
370 const intentObj = this.event.request.intent;
371 if (intentObj.slots.Source.confirmationStatus !== 'CONFIRMED') {
372 if (intentObj.slots.Source.confirmationStatus !== 'DENIED') {
373 // Slot value is not confirmed
374 const slotToConfirm = 'Source';
375 const speechOutput = 'You want to fly from ' + intentObj.slots.Source.value + ', is that correct?';
376 const repromptSpeech = speechOutput;
377 this.emit(':confirmSlot', slotToConfirm, speechOutput, repromptSpeech);
378 } else {
                // The user denied the confirmation of the slot value
                const slotToElicit = 'Source';
                const speechOutput = 'Okay, where would you like to fly from?';
382 this.emit(':elicitSlot', slotToElicit, speechOutput, speechOutput);
383 }
384 } else if (intentObj.slots.Destination.confirmationStatus !== 'CONFIRMED') {
385 if (intentObj.slots.Destination.confirmationStatus !== 'DENIED') {
386 const slotToConfirm = 'Destination';
387 const speechOutput = 'You would like to fly to ' + intentObj.slots.Destination.value + ', is that correct?';
388 const repromptSpeech = speechOutput;
389 const cardContent = speechOutput;
390 const cardTitle = 'Confirm Destination';
391 this.emit(':confirmSlotWithCard', slotToConfirm, speechOutput, repromptSpeech, cardTitle, cardContent);
392 } else {
393 const slotToElicit = 'Destination';
                const speechOutput = 'Okay, where would you like to fly to?';
395 const repromptSpeech = speechOutput;
396 this.emit(':elicitSlot', slotToElicit, speechOutput, repromptSpeech);
397 }
398 } else {
399 handlePlanMyTripIntentAllSlotsAreConfirmed();
400 }
401 }
402};
403```
404
405#### Confirm Intent Directive
Sends Alexa a command to confirm all the information the user has provided for the intent before the skill takes action. Provide a prompt to ask the user for confirmation in `speechOutput`. Be sure to repeat back all the values the user needs to confirm in the prompt.
407
408You can use `this.emit(':confirmIntent', speechOutput, repromptSpeech, updatedIntent)` or `this.emit(':confirmIntentWithCard', speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj)` to send confirm intent directive response.
409
410When using `this.emit(':confirmIntentWithCard', speechOutput, repromptSpeech, cardTitle, cardContent, updatedIntent, imageObj)`, `updatedIntent` and `imageObj` are optional parameters. You can set them to `null` or not pass them.
411```javascript
412const handlers = {
413 'BookFlightIntent': function () {
414 const intentObj = this.event.request.intent;
415 if (intentObj.confirmationStatus !== 'CONFIRMED') {
416 if (intentObj.confirmationStatus !== 'DENIED') {
417 // Intent is not confirmed
418 const speechOutput = 'You would like to book flight from ' + intentObj.slots.Source.value + ' to ' +
419 intentObj.slots.Destination.value + ', is that correct?';
420 const cardTitle = 'Booking Summary';
421 const repromptSpeech = speechOutput;
422 const cardContent = speechOutput;
423 this.emit(':confirmIntentWithCard', speechOutput, repromptSpeech, cardTitle, cardContent);
424 } else {
            // The user denied the confirmation of the intent. Maybe the slot values are not correct.
            handleIntentConfirmationDenial();
427 }
428 } else {
429 handlePlanMyTripIntentAllSlotsAndIntentAreConfirmed();
430 }
431 }
432};
433```
434Additional documentation about `Dialog` interface can be found [here](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html).
435
436### Display Interface
Alexa provides several `Display templates` to support a wide range of presentations. Currently, there are two categories of `Display templates`:
- `BodyTemplate` displays text and images which cannot be made selectable. Currently has five options:
    + `BodyTemplate1`
    + `BodyTemplate2`
    + `BodyTemplate3`
    + `BodyTemplate6`
    + `BodyTemplate7`
- `ListTemplate` displays a scrollable list of items, each with associated text and optional images. These images can be made selectable. Currently has two options:
    + `ListTemplate1`
    + `ListTemplate2`
447
Developers must include the `Display.RenderTemplate` directive in their skill responses.
449Template Builders are now included in alexa-sdk in the templateBuilders namespace. These provide a set of helper methods to build the JSON template for the `Display.RenderTemplate` directive. In the example below we use the `BodyTemplate1Builder` to build the `Body template`.
450
451```javascript
452const Alexa = require('alexa-sdk');
453// utility methods for creating Image and TextField objects
454const makePlainText = Alexa.utils.TextUtils.makePlainText;
455const makeImage = Alexa.utils.ImageUtils.makeImage;
456
457// ...
458'LaunchRequest' : function() {
459 const builder = new Alexa.templateBuilders.BodyTemplate1Builder();
460
461 const template = builder.setTitle('My BodyTemplate1')
462 .setBackgroundImage(makeImage('http://url/to/my/img.png'))
463 .setTextContent(makePlainText('Text content'))
464 .build();
465
466 this.response.speak('Rendering a body template!')
467 .renderTemplate(template);
468 this.emit(':responseReady');
469}
470```
471
472We've added helper utility methods to build Image and TextField objects. They are located in the `Alexa.utils` namespace.
473
474```javascript
475const ImageUtils = require('alexa-sdk').utils.ImageUtils;
476
477// Outputs an image with a single source
478ImageUtils.makeImage(url, widthPixels, heightPixels, size, description);
479/**
480Outputs {
481 contentDescription : '<description>'
482 sources : [
483 {
484 url : '<url>',
485 widthPixels : '<widthPixels>',
486 heightPixels : '<heightPixels>',
487 size : '<size>'
488 }
489 ]
490}
491*/
492
493ImageUtils.makeImages(imgArr, description);
494/**
495Outputs {
496 contentDescription : '<description>'
497 sources : <imgArr> // array of {url, size, widthPixels, heightPixels}
498}
499*/
500
501
502const TextUtils = require('alexa-sdk').utils.TextUtils;
503
504TextUtils.makePlainText('my plain text field');
505/**
506Outputs {
507 text : 'my plain text field',
508 type : 'PlainText'
509}
510*/
511
512TextUtils.makeRichText('my rich text field');
513/**
514Outputs {
515 text : 'my rich text field',
516 type : 'RichText'
517}
518*/
519
520```
521In the next example, we will use ListTemplate1Builder and ListItemBuilder to build ListTemplate1.
522```javascript
523const Alexa = require('alexa-sdk');
524const makePlainText = Alexa.utils.TextUtils.makePlainText;
525const makeImage = Alexa.utils.ImageUtils.makeImage;
526// ...
527'LaunchRequest' : function() {
528 const itemImage = makeImage('https://url/to/imageResource', imageWidth, imageHeight);
529 const listItemBuilder = new Alexa.templateBuilders.ListItemBuilder();
530 const listTemplateBuilder = new Alexa.templateBuilders.ListTemplate1Builder();
531 listItemBuilder.addItem(itemImage, 'listItemToken1', makePlainText('List Item 1'));
532 listItemBuilder.addItem(itemImage, 'listItemToken2', makePlainText('List Item 2'));
533 listItemBuilder.addItem(itemImage, 'listItemToken3', makePlainText('List Item 3'));
534 listItemBuilder.addItem(itemImage, 'listItemToken4', makePlainText('List Item 4'));
535 const listItems = listItemBuilder.build();
536 const listTemplate = listTemplateBuilder.setToken('listToken')
537 .setTitle('listTemplate1')
538 .setListItems(listItems)
539 .build();
540 this.response.speak('Rendering a list template!')
541 .renderTemplate(listTemplate);
542 this.emit(':responseReady');
543}
544```
545
Sending a `Display.RenderTemplate` directive to a headless device (like an Echo) will result in an invalid directive error being thrown. To check whether a device supports a particular directive, you can check the device's supportedInterfaces property.
547
548```javascript
549const handler = {
550 'LaunchRequest' : function() {
551
552 this.response.speak('Hello there');
553
554 // Display.RenderTemplate directives can be added to the response
555 if (this.event.context.System.device.supportedInterfaces.Display) {
556 //... build mytemplate using TemplateBuilder
557 this.response.renderTemplate(myTemplate);
558 }
559
560 this.emit(':responseReady');
561 }
562};
563```
564
Similarly, for video, check whether `VideoApp` is a supported interface of the device:
566
567```javascript
568const handler = {
569 'PlayVideoIntent' : function() {
570
571 // VideoApp.Play directives can be added to the response
572 if (this.event.context.System.device.supportedInterfaces.VideoApp) {
573 this.response.playVideo('http://path/to/my/video.mp4');
574 } else {
575 this.response.speak("The video cannot be played on your device. " +
576 "To watch this video, try launching the skill from your echo show device.");
577 }
578
579 this.emit(':responseReady');
580 }
581};
582```
583Additional documentation on `Display` interface can be found [here](https://developer.amazon.com/docs/custom-skills/display-interface-reference.html).
584
### Playback Controller Interface
The `PlaybackController` interface enables skills to handle requests sent when a customer interacts with player controls such as buttons on a device or a remote control. Those requests are different from normal voice requests such as "Alexa, next song", which are standard intent requests. In order to enable a skill to handle `PlaybackController` requests, developers must implement the `PlaybackController` interface in the Alexa Node.js SDK.
587```javascript
588const handlers = {
589 'PlaybackController.NextCommandIssued' : function() {
590 //Your skill can respond to NextCommandIssued with any AudioPlayer directive.
591 },
592 'PlaybackController.PauseCommandIssued' : function() {
593 //Your skill can respond to PauseCommandIssued with any AudioPlayer directive.
594 },
595 'PlaybackController.PlayCommandIssued' : function() {
596 //Your skill can respond to PlayCommandIssued with any AudioPlayer directive.
597 },
598 'PlaybackController.PreviousCommandIssued' : function() {
599 //Your skill can respond to PreviousCommandIssued with any AudioPlayer directive.
600 },
601 'System.ExceptionEncountered' : function() {
602 //Your skill cannot return a response to System.ExceptionEncountered.
603 }
604};
605```
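For example, a skill might resume playback when the customer presses the play button on a device or remote, and stop it on pause. A sketch; the stream URL, token and offset are placeholders, and only AudioPlayer directives (no speech) should be returned for these requests:
```javascript
const handlers = {
    'PlaybackController.PlayCommandIssued' : function() {
        // Placeholder stream details -- in a real skill these would come from your stored playback state
        this.response.audioPlayerPlay('REPLACE_ALL', 'https://url/to/audiosource', 'myMusic', null, 0);
        this.emit(':responseReady');
    },
    'PlaybackController.PauseCommandIssued' : function() {
        this.response.audioPlayerStop();
        this.emit(':responseReady');
    }
};
```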
606Additional documentation about `PlaybackController` interface can be found [here](https://developer.amazon.com/docs/custom-skills/playback-controller-interface-reference.html).
607
608
### VideoApp Interface
To stream native video files on Echo Show, developers must send the `VideoApp.Launch` directive. The Alexa Node.js SDK provides a function in responseBuilder to help build the JSON response object.
Here is an example of streaming video:
```javascript
//...
'LaunchRequest' : function() {
    const videoSource = 'https://url/to/videosource';
    const metadata = {
        'title': 'Title for Sample Video',
        'subtitle': 'Secondary Title for Sample Video'
    };
    this.response.playVideo(videoSource, metadata);
    this.emit(':responseReady');
}
```
624Additional documentation on `VideoApp` interface can be found [here](https://developer.amazon.com/docs/custom-skills/videoapp-interface-reference.html).
625
626### Skill and List Events
627Skill developers have the capability to integrate with Alexa skill events directly. If the skill is subscribed to these events, the skill is notified when an event occurs.
628
629In order to use events in your skill service, you must set up access to the Alexa Skill Management API (SMAPI) as described in [Add Events to Your Skill With SMAPI](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/add-events-to-your-skill-with-smapi).
630
Skill and list events come out of session. Once your skill has been set up to receive these events, you can specify behaviour by adding the event names to your default event handler.
632
633```javascript
634const handlers = {
635 'AlexaSkillEvent.SkillEnabled' : function() {
636 const userId = this.event.context.System.user.userId;
637 console.log(`skill was enabled for user: ${userId}`);
638 },
639 'AlexaHouseholdListEvent.ItemsCreated' : function() {
640 const listId = this.event.request.body.listId;
641 const listItemIds = this.event.request.body.listItemIds;
642 console.log(`The items: ${JSON.stringify(listItemIds)} were added to list ${listId}`);
643 },
644 'AlexaHouseholdListEvent.ListCreated' : function() {
645 const listId = this.event.request.body.listId;
646 console.log(`The new list: ${JSON.stringify(listId)} was created`);
647 }
648 //...
649};
650
651exports.handler = function(event, context, callback) {
652 const alexa = Alexa.handler(event, context, callback);
653 alexa.registerHandlers(handlers);
654 alexa.execute();
655};
656```
657
We've created a [sample skill and walk-through](https://github.com/Alexa/alexa-cookbook/tree/master/context/skill-events) to guide you through the process of subscribing to skill events.
659
## Services

### Device Address Service

The Alexa Node.js SDK provides a `DeviceAddressService` helper class that utilizes the Device Address API to retrieve customer device address information. Currently the following methods are provided:
665
666```javascript
667getFullAddress(deviceId, apiEndpoint, token)
668getCountryAndPostalCode(deviceId, apiEndpoint, token)
669```
``apiEndpoint`` and ``token`` can be retrieved from the request at ``this.event.context.System.apiEndpoint`` and ``this.event.context.System.user.permissions.consentToken``.

``deviceId`` can also be retrieved from the request at ``this.event.context.System.device.deviceId``.
673
674```javascript
675const Alexa = require('alexa-sdk');
676
677'DeviceAddressIntent': function () {
678 if (this.event.context.System.user.permissions) {
679 const token = this.event.context.System.user.permissions.consentToken;
680 const apiEndpoint = this.event.context.System.apiEndpoint;
681 const deviceId = this.event.context.System.device.deviceId;
682
683 const das = new Alexa.services.DeviceAddressService();
684 das.getFullAddress(deviceId, apiEndpoint, token)
685 .then((data) => {
686 this.response.speak('<address information>');
687 console.log('Address get: ' + JSON.stringify(data));
688 this.emit(':responseReady');
689 })
690 .catch((error) => {
691 this.response.speak('I\'m sorry. Something went wrong.');
692 this.emit(':responseReady');
693 console.log(error.message);
694 });
695 } else {
696 this.response.speak('Please grant skill permissions to access your device address.');
697 this.emit(':responseReady');
698 }
699}
700```
701
702
### List Management Service

Alexa customers have access to two default lists: Alexa to-do and Alexa shopping. In addition, Alexa customers can create and manage custom lists in skills that support them.

The Alexa Node.js SDK provides a `ListManagementService` helper class to help developers create skills that manage default and custom Alexa lists more easily. Currently the following methods are provided:
708
709````javascript
710getListsMetadata(token)
711createList(listObject, token)
712getList(listId, itemStatus, token)
713updateList(listId, listObject, token)
714deleteList(listId, token)
715createListItem(listId, listItemObject, token)
716getListItem(listId, itemId, token)
717updateListItem(listId, itemId, listItemObject, token)
718deleteListItem(listId, itemId, token)
719````
720
``token`` can be retrieved from the request at ``this.event.context.System.user.permissions.consentToken``.

``listId`` can be retrieved from a ``GetListsMetadata`` call.
``itemId`` can be retrieved from a ``GetList`` call.
725
726````javascript
727const Alexa = require('alexa-sdk');
728
729function getListsMetadata(token) {
730 const lms = new Alexa.services.ListManagementService();
731 lms.getListsMetadata(token)
732 .then((data) => {
733 console.log('List retrieved: ' + JSON.stringify(data));
734 this.context.succeed();
735 })
736 .catch((error) => {
737 console.log(error.message);
738 });
739};
740````
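As another hedged sketch, adding an item to one of the customer's lists might look like the following. The `{ value, status }` item shape follows the List Management API documentation, and `listId` is assumed to have come from an earlier ``getListsMetadata`` call:
```javascript
const Alexa = require('alexa-sdk');

function addItemToList(listId, itemValue, token) {
    const lms = new Alexa.services.ListManagementService();
    const listItem = {
        value: itemValue, // e.g. 'apples'
        status: 'active'  // 'active' or 'completed'
    };
    return lms.createListItem(listId, listItem, token)
        .then((data) => {
            console.log('Item created: ' + JSON.stringify(data));
        })
        .catch((error) => {
            console.log(error.message);
        });
}
```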
741
742### Directive Service
743
744`enqueue(directive, endpoint, token)`
745
746Returns a directive to an Alexa device asynchronously during skill execution. It currently accepts speak directives only, with both SSML (inclusive of MP3 audio) and plain text output formats being supported. Directives can only be returned to the originating device when the skill is active. `apiEndpoint` and `token` parameters can be retrieved from the request at `this.event.context.System.apiEndpoint` and `this.event.context.System.apiAccessToken` respectively.
747- The response speech should be limited to 600 characters.
748- Any audio snippets referenced in SSML should be limited to 30 seconds.
749- There is no limit on the number of directives that a skill can send through the directive service. If necessary, skills can send multiple requests for each execution.
750- The directive service does not contain any deduplication processing, so we do not recommend any form of retry processing as it may result in users receiving the same directive multiple times.
751
752```javascript
753const Alexa = require('alexa-sdk');
754
755const handlers = {
756 'SearchIntent' : function() {
757 const requestId = this.event.request.requestId;
758 const token = this.event.context.System.apiAccessToken;
759 const endpoint = this.event.context.System.apiEndpoint;
760 const ds = new Alexa.services.DirectiveService();
761
762 const directive = new Alexa.directives.VoicePlayerSpeakDirective(requestId, "Please wait...");
763 const progressiveResponse = ds.enqueue(directive, endpoint, token)
764 .catch((err) => {
            // catch API errors so skill processing can continue
766 });
767 const serviceCall = callMyService();
768
769 Promise.all([progressiveResponse, serviceCall])
770 .then(() => {
771 this.response.speak('I found the following results');
772 this.emit(':responseReady');
773 });
774 }
775};
776
777```
778
## Extend Features

### Skill State Management

Alexa-sdk uses a state manager to route incoming intents to the correct function handler. State is stored as a string in the session attributes, indicating the current state of the skill. You can emulate the built-in intent routing by appending the state string to the intent name when defining your intent handlers, but alexa-sdk helps do that for you.
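
Conceptually, a handler registered through a state handler is routed by intent name plus the state string. A rough sketch with hypothetical names (`_MYSTATE`, `MyIntent`):
```javascript
// Registering a handler through a state handler...
const myStateHandlers = Alexa.CreateStateHandler('_MYSTATE', {
    'MyIntent': function () { /* runs only while this.handler.state === '_MYSTATE' */ }
});

// ...routes the same way as a handler keyed by intent name + state string:
const equivalentHandlers = {
    'MyIntent_MYSTATE': function () { /* what the state-suffixed routing looks like */ }
};
```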
784
Let's take the sample skill [highlowgame](https://github.com/alexa/skill-sample-nodejs-highlowgame/blob/master/lambda/custom/index.js) as an example to explain how state management works in the SDK. In this skill, the customer guesses a number and Alexa tells them whether the number is higher or lower. It also tells the customer how many times they have played. The skill has two states, 'start' and 'guess':
786```javascript
787const states = {
788 GUESSMODE: '_GUESSMODE', // User is trying to guess the number.
789 STARTMODE: '_STARTMODE' // Prompt the user to start or restart the game.
790};
791```
792The NewSession handler in newSessionHandlers will short-cut any incoming intent or launch requests and route them to this handler.
793```javascript
794const newSessionHandlers = {
795 'NewSession': function() {
796 if(Object.keys(this.attributes).length === 0) { // Check if it's the first time the skill has been invoked
797 this.attributes['endedSessionCount'] = 0;
798 this.attributes['gamesPlayed'] = 0;
799 }
800 this.handler.state = states.STARTMODE;
801 this.response.speak('Welcome to High Low guessing game. You have played '
802 + this.attributes['gamesPlayed'].toString() + ' times. Would you like to play?')
803 .listen('Say yes to start the game or no to quit.');
804 this.emit(':responseReady');
805 }
806};
807```
Notice that when a new session is created we simply set the state of our skill to `STARTMODE` using `this.handler.state`. The skill's state will automatically be persisted in your skill's session attributes, and will optionally be persisted across sessions if you set a DynamoDB table.
809
810It is also important to point out that `NewSession` is a great catch-all behavior and a good entry point but it is not required. `NewSession` will only be invoked if a handler with that name is defined. Each state you define can have its own `NewSession` handler which will be invoked if you are using the built-in persistence. In the above example we could define different `NewSession` behavior for both `states.STARTMODE` and `states.GUESSMODE` giving us added flexibility.
811
812In order to define intents that will respond to the different states of our skill, we need to use the `Alexa.CreateStateHandler` function. Any intent handlers defined here will only work when the skill is in a specific state, giving us even greater flexibility!
813
814For example, if we are in the `GUESSMODE` state we defined above we want to handle a user responding to a question. This can be done using StateHandlers like this:
815```javascript
816const guessModeHandlers = Alexa.CreateStateHandler(states.GUESSMODE, {
817
818'NewSession': function () {
819 this.handler.state = '';
820 this.emitWithState('NewSession'); // Equivalent to the Start Mode NewSession handler
821},
822
823'NumberGuessIntent': function() {
824 const guessNum = parseInt(this.event.request.intent.slots.number.value);
825 const targetNum = this.attributes['guessNumber'];
826
827 console.log('user guessed: ' + guessNum);
828
829 if(guessNum > targetNum){
830 this.emit('TooHigh', guessNum);
831 } else if( guessNum < targetNum){
832 this.emit('TooLow', guessNum);
833 } else if (guessNum === targetNum){
834 // With a callback, use the arrow function to preserve the correct 'this' context
835 this.emit('JustRight', () => {
            this.response.speak(guessNum.toString() + ' is correct! Would you like to play a new game?')
837 .listen('Say yes to start a new game, or no to end the game.');
838 this.emit(':responseReady');
839 });
840 } else {
841 this.emit('NotANum');
842 }
843},
844
845'AMAZON.HelpIntent': function() {
846 this.response.speak('I am thinking of a number between zero and one hundred, try to guess and I will tell you' +
847 ' if it is higher or lower.')
848 .listen('Try saying a number.');
849 this.emit(':responseReady');
850},
851
852'SessionEndedRequest': function () {
853 console.log('session ended!');
854 this.attributes['endedSessionCount'] += 1;
855 this.emit(':saveState', true); // Be sure to call :saveState to persist your session attributes in DynamoDB
856},
857
858'Unhandled': function() {
859 this.response.speak('Sorry, I didn\'t get that. Try saying a number.')
860 .listen('Try saying a number.');
861 this.emit(':responseReady');
862}
863});
864```
865On the flip side, if I am in `STARTMODE` I can define my `StateHandlers` to be the following:
866
867```javascript
868const startGameHandlers = Alexa.CreateStateHandler(states.STARTMODE, {
869
870 'NewSession': function () {
871 this.emit('NewSession'); // Uses the handler in newSessionHandlers
872 },
873
874 'AMAZON.HelpIntent': function() {
875 const message = 'I will think of a number between zero and one hundred, try to guess and I will tell you if it' +
876 ' is higher or lower. Do you want to start the game?';
877 this.response.speak(message)
878 .listen(message);
879 this.emit(':responseReady');
880 },
881
882 'AMAZON.YesIntent': function() {
883 this.attributes['guessNumber'] = Math.floor(Math.random() * 100);
884 this.handler.state = states.GUESSMODE;
885 this.response.speak('Great! ' + 'Try saying a number to start the game.')
886 .listen('Try saying a number.');
887 this.emit(':responseReady');
888 },
889
890 'AMAZON.NoIntent': function() {
891 this.response.speak('Ok, see you next time!');
892 this.emit(':responseReady');
893 },
894
895 'SessionEndedRequest': function () {
896 console.log('session ended!');
897 this.attributes['endedSessionCount'] += 1;
898 this.emit(':saveState', true);
899 },
900
901 'Unhandled': function() {
902 const message = 'Say yes to continue, or no to end the game.';
903 this.response.speak(message)
904 .listen(message);
905 this.emit(':responseReady');
906 }
907});
908```
909Take a look at how `AMAZON.YesIntent` and `AMAZON.NoIntent` are not defined in the `guessModeHandlers` object, since it doesn't make sense for a 'yes' or 'no' response in this state. Those intents will be caught by the `Unhandled` handler.
910
911Also, notice the different behavior for `NewSession` and `Unhandled` across both states? In this game, we 'reset' the state by calling a `NewSession` handler defined in the `newSessionHandlers` object. You can also skip defining it and alexa-sdk will call the intent handler for the current state. Just remember to register your State Handlers before you call `alexa.execute()` or they will not be found.
912
913Your attributes will be automatically saved when you end the session, but if the user ends the session you have to emit the `:saveState` event (`this.emit(':saveState', true)`) to force a save. You should do this in your `SessionEndedRequest` handler which is called when the user ends the session by saying 'quit' or timing out. Take a look at the example above.
914
915If you want to explicitly reset the state, the following code should work:
916```javascript
this.handler.state = ''; // deleting this.handler.state might cause reference errors
918delete this.attributes['STATE'];
919```
920
921### Persisting Skill Attributes through DynamoDB
922
923Many of you would like to persist your session attribute values into storage for further use. Alexa-sdk integrates directly with [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) (a NoSQL database service) to enable you to do this with a single line of code.
924
925Simply set the name of the DynamoDB table on your alexa object before you call alexa.execute.
926```javascript
927exports.handler = function (event, context, callback) {
928 const alexa = Alexa.handler(event, context, callback);
929 alexa.appId = appId;
930 alexa.dynamoDBTableName = 'YourTableName'; // That's it!
931 alexa.registerHandlers(State1Handlers, State2Handlers);
932 alexa.execute();
933};
934```
935
936Then later on to set a value you simply need to call into the attributes property of the alexa object. No more separate `put` and `get` functions!
937```javascript
938this.attributes['yourAttribute'] = 'value';
939```
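For example, an attribute written in one session can be read back on the next launch; alexa-sdk loads the stored record into `this.attributes` before your handlers run. A sketch using a hypothetical `visitCount` attribute:
```javascript
const handlers = {
    'LaunchRequest': function () {
        // 'visitCount' is undefined on the very first visit, so default it to zero
        const visits = this.attributes['visitCount'] || 0;
        this.attributes['visitCount'] = visits + 1;
        this.emit(':tell', 'You have opened this skill ' + this.attributes['visitCount'] + ' times.');
    }
};
```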
940
941You can [create the table manually](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SampleData.CreateTables.html) beforehand or simply give your Lambda function DynamoDB [create table permissions](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) and it will happen automatically. Just remember it can take a minute or so for the table to be created on the first invocation. If you create the table manually, the Primary Key must be a string value called "userId".
942
Note: If you host your skill on Lambda and choose to persist skill attributes through DynamoDB, please make sure the execution role of the Lambda function includes access to DynamoDB.
944
945### Adding Multi-Language Support for Skill
946Let's take the Hello World example here. Define all user-facing language strings in the following format.
947```javascript
948const languageStrings = {
949 'en-GB': {
950 'translation': {
951 'SAY_HELLO_MESSAGE' : 'Hello World!'
952 }
953 },
954 'en-US': {
955 'translation': {
956 'SAY_HELLO_MESSAGE' : 'Hello World!'
957 }
958 },
959 'de-DE': {
960 'translation': {
961 'SAY_HELLO_MESSAGE' : 'Hallo Welt!'
962 }
963 }
964};
965```
966
967To enable string internationalization features in Alexa-sdk, set resources to the object we created above.
968```javascript
969exports.handler = function(event, context, callback) {
970 const alexa = Alexa.handler(event, context);
971 alexa.appId = appId;
972 // To enable string internationalization (i18n) features, set a resources object.
973 alexa.resources = languageStrings;
974 alexa.registerHandlers(handlers);
975 alexa.execute();
976};
977```
978
979Once you are done defining and enabling language strings, you can access these strings using the this.t() function. Strings will be rendered in the language that matches the locale of the incoming request.
980```javascript
981const handlers = {
982 'LaunchRequest': function () {
983 this.emit('SayHello');
984 },
985 'HelloWorldIntent': function () {
986 this.emit('SayHello');
987 },
988 'SayHello': function () {
989 this.response.speak(this.t('SAY_HELLO_MESSAGE'));
990 this.emit(':responseReady');
991 }
992};
993```
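alexa-sdk's string internationalization is built on i18next with a sprintf post-processor, so (as in the official samples) you can also interpolate values into your strings. A hedged sketch, assuming a hypothetical `WELCOME_MESSAGE` key:
```javascript
const languageStrings = {
    'en-US': {
        'translation': {
            // '%s' placeholders are filled by the extra arguments passed to this.t()
            'WELCOME_MESSAGE': 'Hello %s, welcome to %s!'
        }
    }
};

// Inside a handler:
// this.response.speak(this.t('WELCOME_MESSAGE', 'Alice', 'Hello World'));
// this.emit(':responseReady');
```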
For more information about developing and deploying skills in multiple languages, please go [here](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-skills-in-multiple-languages).
995
996### Device ID Support
997When a customer enables your Alexa skill, your skill can obtain the customer’s permission to use address data associated with the customer’s Alexa device. You can then use this address data to provide key functionality for the skill, or to enhance the customer experience.
998
999The `deviceId` is now exposed through the context object in each request and can be accessed in any intent handler through `this.event.context.System.device.deviceId`. See the [Address API sample skill](https://github.com/alexa/skill-sample-node-device-address-api) to see how we leveraged the deviceId and the Address API to use a user's device address in a skill.
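
For instance, a handler can log or branch on the device ID directly. A minimal sketch with a hypothetical intent name:
```javascript
const handlers = {
    'WhichDeviceIntent': function () {
        // deviceId uniquely identifies the customer's device for this skill
        const deviceId = this.event.context.System.device.deviceId;
        console.log('Request came from device: ' + deviceId);
        this.emit(':tell', 'I have noted which device you are using.');
    }
};
```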
1000
1001### Speechcons (Interjections)
1002
1003[Speechcons](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/speechcon-reference) are special words and phrases that Alexa pronounces more expressively. In order to use them you can just include the SSML markup in the text to emit.
1004
1005* `this.emit(':tell', 'Sometimes when I look at the Alexa skills you have all taught me, I just have to say, <say-as interpret-as="interjection">Bazinga.</say-as> ');`
1006* `this.emit(':tell', '<say-as interpret-as="interjection">Oh boy</say-as><break time="1s"/> this is just an example.');`
1007
1008_Speechcons are supported for English (US), English (UK), English (India), and German._
1009
## Setting up your development environment

- Requirements
    - Gulp & mocha: ```npm install -g gulp mocha```
- Run `npm install` to pull down dependencies
- Run `gulp` to run the tests/linter
1016
1017For more information about getting started with the Alexa Skills Kit, check out the following additional assets:
1018
1019[Alexa Dev Chat Podcast](http://bit.ly/alexadevchat)
1020
1021[Alexa Training with Big Nerd Ranch](https://developer.amazon.com/public/community/blog/tag/Big+Nerd+Ranch)
1022
1023[Alexa Skills Kit (ASK)](https://developer.amazon.com/ask)
1024
1025[Alexa Developer Forums](https://forums.developer.amazon.com/forums/category.jspa?categoryID=48)
1026
1027[Training for the Alexa Skills Kit](https://developer.amazon.com/alexa-skills-kit/alexa-skills-developer-training)
1028
1029-Dave ( [@TheDaveDev](http://twitter.com/thedavedev))
1030