Single AWS Lambda function to respond to Alexa skill requests, and return a JSON object, depending on how it is called

I am trying to use the same AWS Lambda function to do two things with the same DynamoDB dataset.
(a) Provide Alexa Skill Responses
I have already implemented this, and the Skill is operating correctly. I am using NodeJS and the Alexa Skills Kit SDK version 2. The last few lines of my index.js are as follows:
const skillBuilder = Alexa.SkillBuilders.standard();

exports.handler = skillBuilder
  .addRequestHandlers(
    LaunchRequest,
    HelpIntent,
    // .... various other intents
    UnhandledIntent
  )
  .addErrorHandlers(ErrorHandler)
  .lambda();
(b) Provide a JSON object summarising some database contents
The NodeJS code I have set up for the Alexa skill processes DynamoDB data in order to provide the skill responses. I want to use the same codebase (i.e. the same Lambda function) to produce summaries for my own benefit. I don't want to copy-paste pieces of the Lambda function into a separate function; I would much rather have the same code do both tasks, in order to keep everything in step.
My problem is that the structure of the Lambda code for returning this JSON response to my request is as follows, from https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html:
'use strict';
console.log('Loading hello world function');

exports.handler = function(event, context, callback) {
  let name = "you";
  let city = 'World';
  // etc ... more code ...
  callback(null, response);
};
Both pieces of code assign a function to `exports.handler`.
I think I want to achieve this effect:
if ( /* test whether being called by Alexa or by API Gateway */ )
{ /* Make the Alexa Response */ }
else
{ /* Construct the JSON data summary response */ }
From what I can make out, each Lambda function (in the sense of each Amazon Resource Name for a Lambda function) has to have only one entry file, i.e. I can't make Lambda start index_Alexa.js versus index_JSON.js.
I am open to any suggestion on how to get the same Lambda function (in the sense of the same JS file or package of files) do both things.

I question the usefulness of this approach somewhat, and the following still has room for optimization, but one way of accomplishing it is to declare exports.handler as a simple wrapper that invokes the correct previously-declared handler function, based on a condition you can test in the incoming request.
// set up handler for Alexa
const skillBuilder = Alexa.SkillBuilders.standard();

const alexa_handler = skillBuilder
  .addRequestHandlers(
    LaunchRequest,
    HelpIntent,
    // .... various other intents
    UnhandledIntent
  )
  .addErrorHandlers(ErrorHandler)
  .lambda();

// set up handler for API Gateway
const api_gateway_handler = function(event, context, callback) {
  let name = "you";
  let city = 'World';
  // etc ... more code ...
  callback(null, response);
};

// for each invocation, choose which of the above to invoke
exports.handler = function(event, context, callback) {
  if (/* some test that is true only for API Gateway */) {
    api_gateway_handler(event, context, callback);
  } else {
    alexa_handler(event, context, callback);
  }
};
The test if(/* some test that is true only for API Gateway */) is something you'll need to work out, but I suspect something like this might work:
if (event.requestContext && event.requestContext.apiId)
The API Gateway docs suggest that this value would always be present on Lambda proxy integration events.
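Alternatively, here is a minimal sketch of a dispatch that keys off the Alexa request envelope instead; this assumes the standard Alexa Skills Kit event shape, in which every skill request carries a request object with a type such as LaunchRequest or IntentRequest, which an API Gateway proxy event does not have:
// Sketch only: route on the Alexa request envelope rather than on API Gateway fields
exports.handler = function(event, context, callback) {
  if (event.request && event.request.type) {
    // Looks like an Alexa Skills Kit request (LaunchRequest, IntentRequest, ...)
    alexa_handler(event, context, callback);
  } else {
    // Anything else is assumed to come from API Gateway
    api_gateway_handler(event, context, callback);
  }
};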

Related

How to know which environment/network a NEAR smart-contract is deployed to (AssemblyScript)?

I'm doing some cross contract calls using NEAR and AssemblyScript. I would like to call different accounts based on the environment my smart-contract is deployed to. If the contract is deployed to testnet, I want to call a testnet cross-contract call. If the contract is deployed to mainnet, I want to call a mainnet cross-contract call.
export function callMetaNear(accountId: string): void {
  // how to get correct contract name based on where the contract is deployed?
  let otherContract: string = 'test.testnet';
  if (contractIsDeployedToMainnet) {
    otherContract = 'test.near';
  }

  // cross-contract call example
  const itemArgs: AddItemArgs = {
    accountId,
    itemId: "Sword +9000",
  };
  const promise = ContractPromise.create(
    otherContract,
    "addItem",
    itemArgs.encode(),
    0,
  );
  promise.returnAsResult();
}
I will answer my own question, but I'm not sure if it's the best solution. Better answers are welcome.
I figured we can assume the contract is deployed to mainnet if Context.contractName ends with ".near".
import { Context } from 'near-sdk-core';
...
let otherContract: string = 'test.testnet';
if (Context.contractName.endsWith(".near")) {
  otherContract = 'test.near';
}

Testing error for function call to a non-contract account

While testing our VRF getRandomNumber(s) with test-helpers, we keep on getting Error: Transaction reverted: function call to a non-contract account at:
require(LINK.balanceOf(address(this)) > fee, "Not enough LINK to initiate function call");
LINK seems to be used correctly here. What's the meaning/issue with the non-contract account?
Other tests on the same RandomNumberConsumer object are successful.
contract RandomNumberConsumer is VRFConsumerBase {
  [...]
  function getRandomNumber(uint256 userProvidedSeed) public returns (bytes32 requestId) {
    require(LINK.balanceOf(address(this)) >= fee, "Not enough LINK - fill contract with faucet");
    return requestRandomness(keyHash, fee, userProvidedSeed);
  }
}

describe("getRandomNumber()", function() {
  it("Should return a requestID", async function() {
    const requestId = await randomNumberConsumer.getRandomNumber(12);
    // checks on requestId
  });
});
Any LINK.xxx() call refers to the external LINK token contract, which does not exist in your code. It's a contract already deployed on the network - that's why you're most probably passing the LINK address to the constructor of your contract.
To make it work in the test, you need to mock something so that your test doesn't end up calling the real LINK interface. One of the ways would be to mock your getRandomNumber function. Since it's public, that should be easily doable with Waffle's mocking utils: https://ethereum-waffle.readthedocs.io/en/latest/mock-contract.html.
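A rough sketch of that first option, assuming a Hardhat project with the hardhat-waffle plugin and that the compiled RandomNumberConsumer artifact sits at the path shown (the path and the stubbed return value are illustrative):
// Sketch: deploy a Waffle mock from the consumer's own ABI and stub getRandomNumber,
// so tests that exercise it never reach the real LINK token contract.
import { waffle } from 'hardhat'
import { abi as consumerAbi } from '../artifacts/contracts/RandomNumberConsumer.sol/RandomNumberConsumer.json'

const deployMockedConsumer = async () => {
  const [deployer] = waffle.provider.getWallets()
  const mockConsumer = await waffle.deployMockContract(deployer, consumerAbi)
  // Any call to getRandomNumber on the mock now resolves to this fixed requestId
  await mockConsumer.mock.getRandomNumber.returns(
    '0x0000000000000000000000000000000000000000000000000000000000000001'
  )
  return mockConsumer
}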
Alternatively (probably more legit, but longer) you can mock the entire LINK contract:
Have some Mocks.sol contract:
pragma solidity ^0.8.7;

import "@chainlink/contracts/src/v0.8/interfaces/LinkTokenInterface.sol";

abstract contract LinkMock is LinkTokenInterface {}
Initialize it as a mock in your test and pass its address into your contract's constructor as the LINK token address (the second constructor argument in the snippet below), as per the VRF documentation:
import { ethers, waffle } from 'hardhat'
import { abi as linkMockAbi } from '../artifacts/contracts/Mocks.sol/LinkMock.json'

const [deployer, vrfCoordinatorMock, ...actors] = waffle.provider.getWallets()

const getContract = async ({ mockedLinkBalance }: { mockedLinkBalance: string }) => {
  const linkMockContract = await waffle.deployMockContract(deployer, linkMockAbi)
  // Mocks the external LINK contract that we don't have access to during tests
  await linkMockContract.mock.balanceOf.returns(ethers.utils.parseEther(mockedLinkBalance))
  await linkMockContract.mock.transferAndCall.returns(true)

  return waffle.deployContract(deployer, ContractJson, [
    vrfCoordinatorMock.address,
    linkMockContract.address,
    '0x0000000000000000000000000000000000000000000000000000000000000000',
    '100000000000000000',
  ])
}
Call rawFulfillRandomness(), which the VRF coordinator itself calls, whenever you want to mock the VRF generating the randomness:
const contract = await getContract({ mockedLinkBalance: '1' }) // any mocked balance at or above the fee works here
await contract.connect(vrfCoordinatorMock).rawFulfillRandomness('0x0000000000000000000000000000000000000000000000000000000000000000', mockedRandomnessValue)
Note I hardcoded requestId for brevity in the above example. You'll have to come up with a way of stubbing it if you rely on its value in your contract.

Using fetch, why do I need so many "await" statements?

I have a fetch instruction in one function that grabs an API key from a server and it's used by a few other objects to deliver that API key to whatever service needs it.
export default async function getAPIKey(key) {
  return await (await fetch('http://localhost:8000/' + key)).json();
}
And in my weather object:
export default {
  URI: 'https://api.openweathermap.org',
  getLocalWeather: async function(city = null, countryCode = null) {
    try {
      // fetch the API key from environment
      const API_KEY = await getAPIKey('wx');
      //... rest of code
The code as it is works, but I don't understand why I need 3 await statements. Wouldn't I only need two? I need one for the fetch() in getAPIKey(). Then .json() returns a promise because it has to wait for the response body, so I'd need an await where I call the function in getLocalWeather(). But if I don't have two awaits in getAPIKey() it just returns [object Response]?
Essentially I'm wondering why the following is wrong:
export default async function getAPIKey(key) {
  return (await fetch('http://localhost:8000/' + key)).json();
}
And in my weather object:
export default {
  URI: 'https://api.openweathermap.org',
  getLocalWeather: async function(city = null, countryCode = null) {
    try {
      // fetch the API key from environment
      const API_KEY = await getAPIKey('wx');
      //... rest of code
Am I miscounting? Because I only see two Promises. I know async/await functions return promises under the hood, so getAPIKey() returns a promise, but wouldn't that promise be the .json() promise? And if so, why isn't the await where I call the function sufficient?
I'm not sure what I'm failing to understand.
You don't need any of those await statements inside of getAPIKey(), and your function doesn't even need to be async. You can just do this:
export default function getAPIKey(key) {
  return fetch('http://localhost:8000/' + key).then(response => response.json());
}
You just want to return the promise chain from fetch(...).then(response => response.json()).
The code as it is works, but I don't understand why I need 3 await statements. Wouldn't I only need two?
Actually, you only need one when you do await getAPIKey(). The others inside of getAPIKey() are not needed at all.
When you do something like:
export default async function getAPIKey(key) {
  return await fetch('http://localhost:8000/' + key).then(response => response.json());
}
You're just adding a superfluous await that has no benefit. The function returns a promise (all async functions return a promise), and that promise resolves to the same value whether or not you await before returning, so it is exactly the same as just doing return fetch('http://localhost:8000/' + key).then(response => response.json()); in the first place. Counting the await in getLocalWeather(), one await at the call site is all you actually need.
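To make the counting concrete, here is a sketch (against the same hypothetical endpoint) of two equivalent ways to write the helper; in both variants the single await at the call site in getLocalWeather is all the caller needs:
// Variant 1: no async/await in the helper at all; just return the promise chain
export function getAPIKeyChained(key) {
  return fetch('http://localhost:8000/' + key).then(response => response.json());
}

// Variant 2: async/await inside the helper; the promise returned by an async
// function is flattened, so the caller still receives the parsed JSON
export async function getAPIKeyAwaited(key) {
  const response = await fetch('http://localhost:8000/' + key);
  return response.json();
}

// Either way, one await at the call site is enough:
async function demo() {
  const API_KEY = await getAPIKeyChained('wx');
  console.log(API_KEY);
}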

Alexa lambda function defaulting to unhandled?

Below is my code for my Alexa lambda function, with all my data and other intents removed. The problem I'm having is that my lambda function doesn't seem to be launching, and I'm constantly getting the output "Sorry, I don't know what to do", meaning that it's going to the Unhandled function. Could anyone please advise?
var Alexa = require('alexa-sdk');
const APP_ID = 'amzn1.ask.skill.353021cb-577e-4cfc-9edd-b440e6f095fe';
var handlers = {
  'LaunchRequest': function() {
    this.emit(':tell', 'I can help you pick your tie. Tell me the color of your outfit, pattern of your shirt, or pattern of your tie.', 'Tell me the color of your outfit, pattern of your shirt, or pattern of your tie.');
  },
  'Unhandled': function() {
    this.emit(':tell', 'Sorry, I don\'t know what to do');
  },
};

exports.handler = function(event, context) {
  var alexa = Alexa.handler(event, context);
  alexa.registerHandlers(handlers);
  alexa.execute();
};
How did you test your Skill? If you tested it using the ('old') Service Simulator, you don't get a request of the LaunchRequest type, but an IntentRequest with the best-matching intent of your interaction model, just as when you invoke your Skill with an utterance such as 'Alexa, ask tie picker to pick a tie'.
If you want such 'deep' invocations to trigger your first handler, you could replace LaunchRequest with NewSession, as sketched below.
Hope that helps!
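As a rough sketch of that change (reusing the prompt text from the question and switching to ':ask' so the reprompt is actually spoken; treat this as illustrative rather than a drop-in fix):
var handlers = {
  // NewSession fires for LaunchRequests and for 'deep' invocations that start a new session
  'NewSession': function() {
    this.emit(':ask',
      'I can help you pick your tie. Tell me the color of your outfit, pattern of your shirt, or pattern of your tie.',
      'Tell me the color of your outfit, pattern of your shirt, or pattern of your tie.');
  },
  'Unhandled': function() {
    this.emit(':tell', 'Sorry, I don\'t know what to do');
  },
};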

Multiple AJAX requests with React Redux

I want to make requests to two different APIs. I then need to organize that data. I'm using redux-promise.
Currently, I have a function that calls two other functions, each of which makes an AJAX request:
export function fetchEstimates(input) {
  let firstRequest = fetchFirstRequest(input);
  let secondRequest = fetchSecondRequest(input);

  return {
    type: FETCH_DATA,
    payload: {
      firstRequest: firstRequest,
      secondRequest: secondRequest
    }
  };
}
Unfortunately, by putting both requests in an object, I can't seem to access the results.
export default function(state = [], action) {
  switch (action.type) {
    case FETCH_DATA:
      // console.log(action.firstRequest);
      // console.log(action.secondRequest);
      return result;
  }
  return state;
}
As I toggle the object in dev tools, I come to this:
[[PromiseStatus]]:"resolved"
[[PromiseValue]]:Object
I can continue to toggle the options, but I can't seem to access them in my code.
If in my payload I just return this:
payload: firstRequest
I don't have issues. But of course, I need both requests. Any ideas? What is a good approach to handle this?
If you look at the source for redux-promise, you'll see that it assumes you've provided a promise as the "payload" field in your action. You're instead providing an object with two promises as sub-fields under "payload". I'm assuming that you're really interested in having both promises resolve, and then passing both results to the reducer. You'd want to use Promise.all to create a new promise that receives the results of both original promises as an argument, then use that promise as your payload. The reducer would then receive something like: {type : "DATA_RECEIVED", payload : [response1, response2]}.
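A minimal sketch of what that might look like, assuming fetchFirstRequest and fetchSecondRequest each return a promise:
// Action creator: hand redux-promise a single promise that resolves to both results
export function fetchEstimates(input) {
  return {
    type: FETCH_DATA,
    payload: Promise.all([fetchFirstRequest(input), fetchSecondRequest(input)])
  };
}

// Reducer: redux-promise dispatches the resolved value, an array of the two responses
export default function(state = [], action) {
  switch (action.type) {
    case FETCH_DATA: {
      const [firstResponse, secondResponse] = action.payload;
      return { firstResponse, secondResponse };
    }
    default:
      return state;
  }
}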
You need some sort of middleware to deal with Promises (like redux-thunk), and wrap the promises in Promise.all to wait until they're both resolved. Let me know if you need a code example.
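And a comparable sketch of the redux-thunk variant, assuming the thunk middleware is installed and the same two request helpers return promises:
// Thunk action creator: wait for both requests, then dispatch a plain action with the results
export function fetchEstimates(input) {
  return async function(dispatch) {
    const [firstResponse, secondResponse] = await Promise.all([
      fetchFirstRequest(input),
      fetchSecondRequest(input)
    ]);
    dispatch({
      type: FETCH_DATA,
      payload: { firstResponse, secondResponse }
    });
  };
}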
