Skill Flow Builder Lambda Function reset the DynamoDB

Apologies for the title of this question; it's hard to put it any other way. I have built an Alexa skill using Amazon's new dev tool, Skill Flow Builder. This tool has a feature that deploys the skill and creates the Lambda function you need to run it. That Lambda function uses DynamoDB to store variable values and the scene names that represent your current position in the skill as you progress through it.
I have edited the skill, and have been testing it thoroughly, but I have now removed all of the old scenes and replaced them with new ones that have new names.
Now, when I deploy and try to run the skill, it throws an error because it is trying to find a scene that no longer exists. It does this because it wants to resume the skill at that point; the old scene name is stored in the DB.
Here is the error message thrown by the Lambda function:
{"errorMessage":"Cannot find the scene not interesting.","errorType":"Error","stackTrace":["StoryAccessor.getSceneByID (/var/task/node_modules/#alexa-games/sfb-f/dist/storyEntities/StoryAccessor.js:28:19)","ACEDriver.processScene (/var/task/node_modules/#alexa-games/sfb-f/dist/driver.js:435:47)","ACEDriver.resumeStory (/var/task/node_modules/#alexa-games/sfb-f/dist/driver.js:188:41)","<anonymous>","process._tickDomainCallback (internal/process/next_tick.js:228:7)"]}
It is the scene that was called "not interesting" that it can no longer find.
The question is, how can I reset the skill so it is not using the DB to resume the skill at the last point?

The answer, of course, is to spin up a new DynamoDB table when you make significant changes to your skill: in my case, a complete change of the scene names.
In the Skill Flow Builder abcConfig.json file, edit the dynamo-db-session-table-name string, which sits in the default object alongside the other settings for your skill. Give it a new name and then re-deploy; the deployment will create a new table.
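For illustration, the relevant part of abcConfig.json ends up looking roughly like this (the table name is a placeholder; only the dynamo-db-session-table-name key comes from the answer above):

    {
      "default": {
        "dynamo-db-session-table-name": "my-skill-sessions-v2"
      }
    }

On the next deploy, the Lambda creates and uses the new table, so the stale scene names stored in the old table are simply never read again.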

Related

How to create a sharded-queue tube the right way?

Let's say we have a Tarantool Cartridge based service that stores posts made by users. When a user makes a new post, it is inserted into the corresponding space. At the same time, a task is added to the sharded-queue tube notify_friends to notify the user's friends about the new post.
The question is about the creation of the notify_friends tube. Initially I planned to do that in the init() method of the service role, but it causes an error, because tube creation modifies the clusterwide config, which is already being changed while init() runs.
I could try creating the tube on the first task-add request, but I'm not sure that's the best approach.
You can put it into the "default config" of your app.
Check it here:
How to implement default config section for a custom Tarantool Cartridge role?
There are 2 ways I'd go with it:
Create the tube on the first request as you propose. Nothing bad will happen.
If you want to do it in advance, create a fiber in the init function that tries to create the tube 10 seconds after initialization, if the tube doesn't already exist. You can figure out which instances run the sharded_queue storage role and start the fiber only on the first one (sorted alphabetically by instance URI); a sketch of this follows.
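A minimal sketch of that second option in Lua, for a Cartridge role. tube_exists() and create_tube() are placeholders for whatever your sharded-queue version actually exposes, and the role name is illustrative:

    local fiber = require('fiber')

    -- Placeholders: wire these to the real calls your sharded-queue version
    -- provides for checking and creating a tube.
    local function tube_exists(name)
        return false -- replace with the real existence check
    end

    local function create_tube(name)
        -- replace with the real create call for the tube
    end

    local function init(opts)
        -- ... the rest of the role's init() ...

        fiber.create(function()
            -- Let init() return and the clusterwide config settle first.
            fiber.sleep(10)
            if not tube_exists('notify_friends') then
                create_tube('notify_friends')
            end
        end)

        return true
    end

    return {
        role_name = 'app.roles.posts',
        init = init,
    }

Remember to start this fiber only on the instance chosen as described above, for example the first sharded_queue storage instance when sorted by URI.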

Listening for response in Alexa skills kit

I am currently using the Alexa Skills Kit and AWS Lambda to create a custom skill. I am stuck on trying to get Alexa to ask a question back to the user and wait for a response.
For example, I want Alexa to present a list of, let's say, books (which I have successfully done) and then I want Alexa to ask me to pick a book from that list and then listen for a response. Do you have any tips, or can you point me in the right direction, for the part about asking and then listening for a response?
P.S. My AWS Lambda function is currently in Python, so help in Python would be preferable, but I can also manage other languages as well.
Sounds like you want to ask the question at the end of the list. Tack it on.
When you present the list of books and ask the question, Alexa will automatically listen for a response unless you explicitly end the session.
On the "Build" tab in the developer console, go to the slot types. You can create a custom type with just your list of titles, or you can add the AMAZON.Book slot type to your skill and use it for the slot.
Then create an intent, maybe named "BookChoice," whose sample utterances contain {book}... "I want {book}," "tell me about {book}," etc. In the configuration for the intent, it will have a "book" slot which you can set to AMAZON.Book or the custom slot type you created.
Add a handler for the "BookChoice" intent to your Lambda. Creating slots can be tricky, so that is the part I'm answering here. Handlers and getting slot values are intro tutorial material, so I won't go into detail beyond the sketch below.
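A minimal sketch of the raw request/response handling in a Python Lambda, without the ASK SDK; the speech text and book titles are made up, and the intent and slot names match the ones suggested above:

    def build_response(speech, reprompt, end_session=False):
        # Bare Alexa Skills Kit response envelope.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "reprompt": {
                    "outputSpeech": {"type": "PlainText", "text": reprompt}
                },
                # Keeping the session open is what makes Alexa listen for an answer.
                "shouldEndSession": end_session,
            },
        }


    def lambda_handler(event, context):
        request = event["request"]

        if request["type"] == "LaunchRequest":
            # Present the list and ask the follow-up question in the same response.
            return build_response(
                "I found Dune, Neuromancer and Hyperion. Which book would you like?",
                "Which book would you like? You can say any title from the list.",
            )

        if (request["type"] == "IntentRequest"
                and request["intent"]["name"] == "BookChoice"):
            # Read whatever Alexa captured in the {book} slot.
            slot = request["intent"]["slots"].get("book", {})
            book = slot.get("value", "that book")
            return build_response(
                "You picked " + book + ". Want to hear about another one?",
                "Would you like to hear about another book?",
            )

        return build_response(
            "Sorry, I didn't get that.", "Which book would you like?"
        )

The key point is shouldEndSession set to False together with a reprompt: that is what keeps the microphone open and routes the user's answer back to your BookChoice handler.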

Bot greeting stops working when adding QnA Maker resource

Using the awesome resources provided by Microsoft.
Following the documentation at https://learn.microsoft.com/en-us/composer/
Create Folder somewhere
Perform https://learn.microsoft.com/en-us/composer/setup-yarn in that folder
Perform https://learn.microsoft.com/en-us/composer/tutorial/tutorial-create-bot
Test in Emulator: Pressed Restart Conversation - New User ID: Works fine, responds with: Hi! I’m a friendly bot that can help with the weather. Try saying WEATHER or FORECAST.
Perform https://learn.microsoft.com/en-us/composer/tutorial/tutorial-add-dialog
Test in Emulator: presents "Let's check the weather" as the response to the user input "weather". Works fine.
Then create a new Trigger of type "Dialog event" with the "Dialog started" event, and continue with https://learn.microsoft.com/en-us/composer/how-to-add-qna-to-bot, entering the following in the settings:
Please note that in order to use the Settings values, an extra "=" has to precede the id, e.g. "=settings.qna.knowledgebaseid".
Please also note that in order to make this work in Europe, with our "," instead of "." as the decimal marker, the Threshold has to be set to "float('0.3')" in order to be evaluated as a float.
Make sure that the settings are accurate according to your QnA Base.
Please note that at this point the LUIS fields are left mostly empty, except for the values prefilled as described in https://learn.microsoft.com/en-us/composer/how-to-add-qna-to-bot.
No LUIS added at this point.
Restart bot
Click Test in Emulator
Press Restart Conversation - New User ID
Now there are three problems:
A. There is no longer any greeting phrase.
B. The first response from QnA Maker results in "The given key 'stepIndex' was not present in the dictionary." After this, the QnA Maker part works, but issues A and C remain.
C. The weather regex only triggers if "weather" is the very first input; on a second attempt, or after entering something else, it fails to trigger.
Expected behavior:
When pressing Restart Conversation - New User ID, the bot should greet the user.
When the weather regex is the best choice, it should trigger.
The text "The given key 'stepIndex' was not present in the dictionary" should not be presented as the first response; instead, the right reply should be presented based on the intent provided.
I'm a bit late to the game on this, but I hit the exact same issue in Composer and found the same problem. The suggested approach in the MS docs of using the Unknown intent trigger does not work well. It's really just a tutorial to get you up and running as quickly as possible, with no real thought beyond that, and as you point out, it easily gets stuck in an internal loop that prevents other intents from firing.
Assuming you are using LUIS.ai, a "QnA Intent recognised" trigger should be added, along with a "Duplicate intent recognised" trigger. This will make sure that automatic cross-training is implemented, so that QnA Maker knows about the LUIS questions and vice versa: each will not only understand its own questions but know to exclude the questions handled by the other. This will make for better training. However, depending on how similar the questions are in both, they may still both return matches of varying confidence; this is what the "Duplicate intent recognised" trigger is for. It catches both before they execute their intents, checks the confidence of each, and re-raises the event that wins out, thus ensuring only one of the two is recognised and executed.

Comment filter design for closed topics

In my app, multiple people can chat on a topic. However once the topic has been closed by its owner, chat should also be disabled on that topic.
My Tables -
ChatComment - a new comment is stored here as a record; it contains a pointer to Topic
Topic - details related to the topic, e.g. body, owner, status (open/closed)
I'm using cloud functions to create a new comment made by a person on a topic. So every time I call the cloud function to write a new comment, it first queries the 'Topic' class to check whether the topic is still open; if it is open, it goes ahead and creates the new comment in the comment class, otherwise it throws an error. A sketch of this flow is below.
My problem is that people chat on the topic so frequently, in real time, that the first query (the one that checks whether the topic is still open) runs for every comment and adds a delay. It really kills the user experience.
Can we write a filter to meet the above conditions? Please advise me on how to deal with this in any other way if possible.
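For reference, a rough sketch of that two-step cloud function, assuming Parse-style Cloud Code (which the class and pointer terminology suggests); the function name createComment and the parameter names are illustrative:

    Parse.Cloud.define("createComment", function(request, response) {
      var topicQuery = new Parse.Query("Topic");
      // First round trip: check whether the topic is still open.
      topicQuery.get(request.params.topicId, {
        success: function(topic) {
          if (topic.get("status") !== "open") {
            return response.error("Topic is closed.");
          }
          // Second round trip: actually store the comment.
          var comment = new Parse.Object("ChatComment");
          comment.set("topic", topic);
          comment.set("text", request.params.text);
          comment.save(null, {
            success: function(saved) { response.success(saved); },
            error: function(error) { response.error(error); }
          });
        },
        error: function(error) { response.error(error); }
      });
    });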
A common pattern is to fake it; the idea works like this:
For the user making the comment, as soon as they enter a comment, show it in the topic as if it had been added normally. Then start the async call to your cloud function and update the comment's status based on the result.
You might choose to do nothing with the confirmation, or do something like iOS Messages app that shows a "Delivered" tag.
If the cloud function comes back with an error because the topic was closed, update the message to highlight that it was rejected (strikethrough is appropriate here) and disable the ability to add more comments.
This gives the illusion of speed in a delayed system.
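A sketch of that flow from the client, again assuming the Parse JavaScript SDK; createComment is the cloud function sketched above, and renderComment, setStatus and disableCommentInput are placeholders for whatever your UI layer provides:

    function postComment(topicId, text) {
      // 1. Show the comment immediately, as if it had already been saved.
      var pending = renderComment({ text: text, status: "pending" });

      // 2. Do the real save in the background.
      Parse.Cloud.run("createComment", { topicId: topicId, text: text }).then(
        function(saved) {
          pending.setStatus("delivered");   // e.g. show a "Delivered" tag
        },
        function(error) {
          pending.setStatus("rejected");    // e.g. strike the comment through
          disableCommentInput("This topic has been closed.");
        }
      );
    }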

Parse.com. Execute backend code before response

I need to know the relative position of an object in a list. Let's say I need to know the position of a certain wine among all the wines added to the database, based on the votes received from users. The app should be able to receive the ranking position as an object property when retrieving a "wine" class object.
This should be easy to do on the backend side, but from what I've seen of Cloud Code it is only able to execute code before or after saving or deleting, not before reading and returning a response.
Any way to do this task? Any workaround?
Thanks.
I think you would have to write a Cloud function to perform this calculation for a particular wine.
https://www.parse.com/docs/cloud_code_guide#functions
This would be a function you would call manually. You would have to provide the "wine" object or objectId as a parameter and then have your cloud function return the value you need. Keep in mind there are limitations on cloud functions; read the documentation about time limits. You also don't want to make too many API calls every time you run this. It sounds like your computation could be fairly heavy if your dataset is large and you aren't caching at least some of the information. A sketch of such a function follows.
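A minimal sketch in (legacy, parse.com-era) Cloud Code, assuming a "Wine" class with a numeric "votes" column; the function name wineRank is illustrative. The rank is one plus the number of wines with strictly more votes, which costs two API calls per invocation:

    Parse.Cloud.define("wineRank", function(request, response) {
      var wineQuery = new Parse.Query("Wine");
      wineQuery.get(request.params.wineId, {
        success: function(wine) {
          var rankQuery = new Parse.Query("Wine");
          rankQuery.greaterThan("votes", wine.get("votes"));
          rankQuery.count({
            success: function(count) {
              // Position in the list ordered by votes (ties share a rank).
              response.success(count + 1);
            },
            error: function(error) { response.error(error); }
          });
        },
        error: function(error) { response.error(error); }
      });
    });

The client would call it with Parse.Cloud.run("wineRank", { wineId: wine.id }) and attach the returned position to the wine object it already has.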
