How do I include runtime arguments when executing a Google Cloud Workflow in Node.js?

I'm trying to include runtime variables when executing a Google Cloud workflow. I can't find documentation for doing this except with the REST API.
Here's my code, which is mostly from their documentation; I just get null for the arguments. I think it could be something with the second parameter createExecution expects, named execution, but I can't figure it out.
const { ExecutionsClient } = require('@google-cloud/workflows');
const client = new ExecutionsClient();
const execute = () => {
  return client.createExecution(
    {
      parent: client.workflowPath('project_id', 'location', 'name'),
    },
    {
      argument: {
        users: ['info here'],
      },
    },
  );
};
module.exports = execute;
Thanks for the help!

In case anyone else has this problem: you pass the parameter execution to createExecution() along with parent. It's just an object, and you can specify argument there, which takes a string. Stringify your object and you're good to go!
const { ExecutionsClient } = require('@google-cloud/workflows');
const client = new ExecutionsClient();
const execute = () => {
  return client.createExecution({
    parent: client.workflowPath('', '', ''),
    execution: {
      argument: JSON.stringify({
        users: [],
      }),
    },
  });
};
module.exports = execute;
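For what it's worth, here's a minimal usage sketch of the module above (the file path is hypothetical); createExecution resolves to an array whose first element is the created execution object:
// Hypothetical caller for the execute() module above.
const execute = require('./execute');

execute()
  .then(([execution]) => {
    // The execution name can later be used to poll for the workflow's result.
    console.log(`Created execution: ${execution.name}`);
  })
  .catch(console.error);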

Related

Strapi: generic filtering from REST request to EntityService API

I've been reading the docs on the EntityService API and I understand you can build filters, populates, etc. However, I'm not sure how to pass the filters down from the request without parsing the URL manually and constructing an object.
If I have a GET request that looks like http://localhost:1337/my-content-types?filters[id][$eq]=1, which is how it looks in the filtering example here: https://docs.strapi.io/developer-docs/latest/developer-resources/database-apis-reference/rest/filtering-locale-publication.html#deep-filtering
how do I pass the filters down to the EntityService API?
Request: http://localhost:1337/my-content-types?filters[id][$eq]=1
I have a core service that looks like this:
module.exports = createCoreService('plugin::my-plugin.my-content-type', ({ strapi }) => ({
  find(ctx) {
    // console.log("Params:");
    return super.find(ctx);
  },
}));
which is called from the controller:
module.exports = createCoreController('plugin::my-plugin.my-content-type', ({ strapi }) => ({
  async find(ctx) {
    return strapi
      .plugin(_pluginName)
      .service(_serviceName)
      .find(ctx);
  },
}));
and my routing:
module.exports = {
  type: 'admin',
  routes: [
    {
      method: 'GET',
      path: '/',
      handler: 'my-content-type.find',
      config: {
        policies: [],
        auth: false,
      },
    },
  ],
};
EDIT:
I've got something working by writing my own very crude pagination, but I'm not happy with it; I'd really like a cleaner solution:
find(ctx) {
  const urlSearchParams = new URLSearchParams(ctx.req._parsedUrl.search);
  const params = {
    start: urlSearchParams.get('start') || 0,
    limit: urlSearchParams.get('limit') || 25,
    populate: '*',
  };
  const results = strapi.entityService.findMany('plugin::my-plugin.my-content-type', params);
  return results;
}
You can get the params from ctx.query.
Example:
async find(ctx) {
  const limit = ctx.query?.limit ?? 20;
  const offset = ctx.query?.offset ?? 0;
  return await strapi.db.query('api::example.example').find({ ...ctx.query, limit, offset, populate: ['someRelation'] });
}
I think normally it's done by wrapping the id under where and extracting the known params. I'll run a test when I'm near a PC, but if the variant above doesn't work, you can do the same with:
const { offset, limit, populate, ...where } = ctx.query;
await strapi.db.query('...').find({ offset, limit, populate, where });
You can check the pagination example in this thread: Strapi custom service overwrite find method
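Putting these answers together, here's a minimal sketch for the original question, assuming Strapi v4 (where ctx.query already contains the qs-parsed filters object) and the content-type UID from the question:
// Sketch: forward the parsed query straight to the Entity Service.
// Strapi parses ?filters[id][$eq]=1 into a nested object on ctx.query,
// so no manual URL parsing is needed.
module.exports = createCoreService('plugin::my-plugin.my-content-type', ({ strapi }) => ({
  async find(ctx) {
    const { filters, sort, start, limit, populate } = ctx.query;
    return strapi.entityService.findMany('plugin::my-plugin.my-content-type', {
      filters,
      sort,
      start,
      limit,
      populate,
    });
  },
}));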

Trying to write to AWS DynamoDB via API

I am new to AWS and I've slowly been trying to perform different actions. I recently set up an API that allows me to query a DynamoDB table, and now I am trying to set up an API that will allow me to update a value in the table with the current temperature. This data will come from a script running on a Raspberry Pi.
I've been wading through so many tutorials but I haven't quite gotten this locked down. I am able to write to the db using a hard-coded Python script, so I know my db and roles are set up correctly. I am now trying to create a Node-based Lambda function that will accept params from the URL and put the values into the table. I am missing something.
First, do I need to map the values in the API? Some guides do it, others do not. Like I said, ideally I want to pass them in as URL params.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
exports.handler = (event, context, callback) => {
  dynamodb.putItem({
    TableName: "temperature",
    Item: {
      "tempid": {
        S: event.queryStringParameters["tempid"]
      }
    }
  }, function (err, data) {
    if (err) {
      console.log(err, err.stack);
      callback(null, {
        statusCode: '500',
        body: err
      });
    } else {
      callback(null, {
        statusCode: '200',
        body: 'Result from ' + event.queryStringParameters["tempid"] + '!'
      });
    }
  });
};
When I test it in the API using "tempid=hottub1" in the query string, I get this error:
START RequestId: 1beb4572-65bf-4ab8-81a0-c217677c3acc Version: $LATEST
2020-07-09T14:02:05.773Z 1beb4572-65bf-4ab8-81a0-c217677c3acc INFO { tempid: 'hottub1' }
2020-07-09T14:02:05.774Z 1beb4572-65bf-4ab8-81a0-c217677c3acc ERROR Invoke Error {"errorType":"TypeError","errorMessage":"Cannot read property 'tempid' of undefined","stack":["TypeError: Cannot read property 'tempid' of undefined"," at Runtime.exports.handler (/var/task/index.js:11:47)"," at Runtime.handleOnce (/var/runtime/Runtime.js:66:25)"]}
EDIT
If I print out event, I can see that the value is coming in and I am apparently referencing it wrong. Still looking.
{
  "tempid": "hottub1"
}
It needed to be in this format:
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
exports.handler = (event, context, callback) => {
  console.info("EVENT\n" + JSON.stringify(event.tempid, null, 2));
  var temperatureid = JSON.stringify(event.tempid, null, 2);
  dynamodb.putItem({
    TableName: "temperature",
    Item: {
      "tempid": {
        S: temperatureid
      }
    }
  }, function (err, data) {
    // Same callback shape as the first handler: report failure or success.
    if (err) {
      console.log(err, err.stack);
      callback(null, { statusCode: '500', body: err });
    } else {
      callback(null, { statusCode: '200', body: 'Result from ' + temperatureid + '!' });
    }
  });
};
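For context on why the first version failed: with a Lambda proxy integration, API Gateway puts the raw query string on event.queryStringParameters, while a non-proxy integration with a mapping template delivers the mapped values directly on event, which is what the printed event above shows. A small hedged sketch that tolerates both shapes:
// Read tempid whether the integration is proxy-style (queryStringParameters)
// or a mapping template that puts the value directly on the event.
const tempid =
  (event.queryStringParameters && event.queryStringParameters.tempid) ||
  event.tempid;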

Parallel requests in Cypress

I want to make parallel requests in Cypress. I define a command for that:
const resetDb = () => {
  const apiUrl = `${Cypress.config().baseUrl}/api`;
  Cypress.Promise.all([
    cy.request(`${apiUrl}/group/seed/resetDb`),
    cy.request(`${apiUrl}/auth/seed/resetDb`),
    cy.request(`${apiUrl}/email/seed/resetDb`),
  ]);
};
Cypress.Commands.add('resetDb', resetDb);
However, it is still making those requests in sequence. What am I doing wrong?
I was able to solve this problem using task in Cypress, which allows you to use the Node.js API. (Cypress commands like cy.request are enqueued and always run serially by design, so wrapping them in Cypress.Promise.all does not make them parallel.)
In the plugins index file, I define a task as follows:
const fetch = require('isomorphic-unfetch');
module.exports = on => {
  on('task', {
    resetDb() {
      const apiUrl = `http://my.com/api`;
      return Promise.all([
        fetch(`${apiUrl}/group/seed/resetDb`),
        fetch(`${apiUrl}/auth/seed/resetDb`),
        fetch(`${apiUrl}/email/seed/resetDb`),
      ]);
    },
  });
};
Then it can be used as follows:
before(() => {
  return cy.task('resetDb');
});
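If you want to keep the original cy.resetDb() call site, a one-line wrapper should be enough (a sketch; Cypress.Commands.add and cy.task are standard Cypress APIs):
// Delegate the custom command to the parallel task defined in the plugins file.
Cypress.Commands.add('resetDb', () => cy.task('resetDb'));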

How to ignore URL query strings in cached URLs when using Workbox?

Is there a way to ignore the query string "?screenSize=" in the route registered below using Workbox? If I can use a regex, how would I write it in the scenario below? Basically, I am looking to match the cache no matter what the screenSize query string is.
workboxSW.router.registerRoute('https://example.com/data/image?screenSize=980',
  workboxSW.strategies.cacheFirst({
    cacheName: 'mycache',
    cacheExpiration: {
      maxEntries: 50
    },
    cacheableResponse: { statuses: [0, 200] }
  })
);
After trying the cachedResponseWillBeUsed plugin, I do not see that the plugin is applied.
Update: As of Workbox v4.2.0, the new cacheKeyWillBeUsed lifecycle callback can help override the default cache key for both read and write operations: https://github.com/GoogleChrome/workbox/releases/tag/v4.2.0
Original response:
You should be able to do this by writing a cachedResponseWillBeUsed plugin that you pass in when you configure the strategy:
// See https://workboxjs.org/reference-docs/latest/module-workbox-runtime-caching.RequestWrapper.html#.cachedResponseWillBeUsed
const cachedResponseWillBeUsed = ({ cache, request, cachedResponse }) => {
  // If there's already a match against the request URL, return it.
  if (cachedResponse) {
    return cachedResponse;
  }
  // Otherwise, return a match for a specific URL:
  const urlToMatch = 'https://example.com/data/generic/image.jpg';
  return caches.match(urlToMatch);
};
const imageCachingStrategy = workboxSW.strategies.cacheFirst({
  cacheName: 'mycache',
  cacheExpiration: {
    maxEntries: 50
  },
  cacheableResponse: { statuses: [0, 200] },
  plugins: [{ cachedResponseWillBeUsed }]
});
workboxSW.router.registerRoute(
  new RegExp('^https://example\\.com/data/'),
  imageCachingStrategy
);
To build on the other answer, caches.match has an option ignoreSearch, so we can simply try again with the same URL:
const cachedResponseWillBeUsed = ({ cache, request, cachedResponse }) => {
  if (cachedResponse) {
    return cachedResponse;
  }
  // This will match the same URL with a different query string where the original failed.
  return caches.match(request.url, { ignoreSearch: true });
};
As of v5, building on aw04's answer, the code should read as follows:
const ignoreQueryStringPlugin = {
  cachedResponseWillBeUsed: async ({ cacheName, request, matchOptions, cachedResponse, event }) => {
    console.log(request.url);
    if (cachedResponse) {
      return cachedResponse;
    }
    // This will match the same URL with a different query string where the original failed.
    return caches.match(request.url, { ignoreSearch: true });
  }
};
registerRoute(
  new RegExp('...'),
  new NetworkFirst({
    cacheName: 'cache',
    plugins: [
      ignoreQueryStringPlugin
    ],
  })
);
You can simply use cacheKeyWillBeUsed, modifying the saved cache key to ignore the query entirely, so every request to that URL matches regardless of its query.
const ignoreQueryStringPlugin = {
  cacheKeyWillBeUsed: async ({ request, mode, params, event, state }) => {
    // Here you can extract the fixed part of the URL you want to cache, without the query.
    const curl = new URL(request.url);
    return curl.pathname;
  }
};
and add it to the strategy:
workbox.routing.registerRoute(/\/(\?.+)?/,
  new workbox.strategies.StaleWhileRevalidate({
    matchOptions: {
      ignoreSearch: true,
    },
    plugins: [
      ignoreQueryStringPlugin
    ],
  }));
The ignoreURLParametersMatching parameter worked for me:
https://developers.google.com/web/tools/workbox/modules/workbox-precaching#ignore_url_parameters
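For precached URLs specifically, here's a minimal sketch, assuming workbox-precaching v5+ and that screenSize is the only parameter to ignore:
import { precacheAndRoute } from 'workbox-precaching';

// Treat /data/image?screenSize=980 and /data/image?screenSize=1200 as the
// same precache entry by ignoring the screenSize URL parameter.
precacheAndRoute(self.__WB_MANIFEST, {
  ignoreURLParametersMatching: [/^screenSize$/],
});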

Using graphql-tools to mock a GraphQL server seems broken

I've followed the documentation about using graphql-tools to mock a GraphQL server; however, this throws an error for custom types, such as:
Expected a value of type "JSON" but received: [object Object]
The graphql-tools documentation about mocking explicitly states that they support custom types, and even provide an example of using the GraphQLJSON custom type from the graphql-type-json project.
I've provided a demo of a solution on GitHub which uses graphql-tools to successfully mock a GraphQL server, but this relies on monkey-patching the built schema:
// Here we monkey-patch the schema, as otherwise it will fall back
// to the default serialize, which simply returns null.
schema._typeMap.JSON._scalarConfig.serialize = () => {
  return { result: 'mocking JSON monkey-patched' };
};
schema._typeMap.MyCustomScalar._scalarConfig.serialize = () => {
  return mocks.MyCustomScalar();
};
Possibly I'm doing something wrong in my demo, but without the monkey-patched code above I get the error regarding custom types mentioned above.
Does anyone have a better solution than my demo, or any clues as to what I might be doing wrong, and how I can change the code so that the demo works without monkey-patching the schema?
The relevant code in the demo index.js is as follows:
/*
** As per:
** http://dev.apollodata.com/tools/graphql-tools/mocking.html
** Note that there are references on the web to graphql-tools.mockServer,
** but these seem to be out of date.
*/
const { graphql, GraphQLScalarType } = require('graphql');
const { makeExecutableSchema, addMockFunctionsToSchema } = require('graphql-tools');
const GraphQLJSON = require('graphql-type-json');
const myCustomScalarType = new GraphQLScalarType({
  name: 'MyCustomScalar',
  description: 'Description of my custom scalar type',
  serialize(value) {
    let result;
    // Implement your own behavior here by setting the 'result' variable
    result = value || "I am the results of myCustomScalarType.serialize";
    return result;
  },
  parseValue(value) {
    let result;
    // Implement your own behavior here by setting the 'result' variable
    result = value || "I am the results of myCustomScalarType.parseValue";
    return result;
  },
  parseLiteral(ast) {
    switch (ast.kind) {
      // Implement your own behavior here by returning what suits your needs
      // depending on ast.kind
    }
  }
});
const schemaString = `
  scalar MyCustomScalar
  scalar JSON
  type Foo {
    aField: MyCustomScalar
    bField: JSON
    cField: String
  }
  type Query {
    foo: Foo
  }
`;
const resolverFunctions = {
  Query: {
    foo: {
      aField: () => {
        return 'I am the result of resolverFunctions.Query.foo.aField';
      },
      bField: () => ({ result: 'of resolverFunctions.Query.foo.bField' }),
      cField: () => {
        return 'I am the result of resolverFunctions.Query.foo.cField';
      }
    },
  },
};
const mocks = {
  Foo: () => ({
    // aField: () => mocks.MyCustomScalar(),
    // bField: () => ({ result: 'of mocks.foo.bField' }),
    cField: () => {
      return 'I am the result of mocks.foo.cField';
    }
  }),
  cField: () => {
    return 'mocking cField';
  },
  MyCustomScalar: () => {
    return 'mocking MyCustomScalar';
  },
  JSON: () => {
    return { result: 'mocking JSON' };
  }
};
const query = `
  {
    foo {
      aField
      bField
      cField
    }
  }
`;
const schema = makeExecutableSchema({
  typeDefs: schemaString,
  resolvers: resolverFunctions
});
addMockFunctionsToSchema({
  schema,
  mocks
});
// Here we monkey-patch the schema, as otherwise it will fall back
// to the default serialize, which simply returns null.
schema._typeMap.JSON._scalarConfig.serialize = () => {
  return { result: 'mocking JSON monkey-patched' };
};
schema._typeMap.MyCustomScalar._scalarConfig.serialize = () => {
  return mocks.MyCustomScalar();
};
graphql(schema, query).then((result) => console.log('Got result', JSON.stringify(result, null, 4)));
I and a few others are seeing a similar issue with live data sources (in my case MongoDB/Mongoose). I suspect it is something internal to graphql-tools' makeExecutableSchema and the way it ingests text-based schemas with custom types.
Here's another post on the issue: How to use graphql-type-json package with GraphQl
I haven't tried the suggestion to build the schema in code, so can't confirm whether it works or not.
My current workaround is to stringify the JSON fields (in the connector) when serving them to the client (and parsing on the client side) and vice versa. It's a little clunky, but I'm not really using GraphQL to query and/or selectively extract the properties within the JSON object. I suspect this wouldn't be optimal for large JSON objects.
If anyone else comes here from Google, the solution for me was to add the JSON resolver as a parameter to the makeExecutableSchema call. It's described here:
https://github.com/apollographql/apollo-test-utils/issues/28#issuecomment-377794825
That made the mocking work for me.
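Based on that linked comment, the fix presumably amounts to registering the scalar implementations as resolvers, so the schema no longer falls back to the default serialize that returns null. A sketch against the demo code above (spreading resolverFunctions in is my assumption about how the demo would be wired up):
// Sketch: pass the custom scalars as resolvers to makeExecutableSchema.
const schema = makeExecutableSchema({
  typeDefs: schemaString,
  resolvers: {
    ...resolverFunctions,
    JSON: GraphQLJSON,
    MyCustomScalar: myCustomScalarType,
  },
});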
