How to display Mongoose validation errors in Backbone views

So, I have a Backbone view where I am trying to save a user:
this.model.save(user_details, { // this is a Backbone model
  error: function (model, errors) {
  },
  success: function (model, response) {
  }
});
The Backbone model's urlRoot points to a backend function where:
// here user is a Mongoose model instance (its schema defines the validation)
user.save(function (err) {
  if (err) {
    res.send(err.errors);
  }
});
I am running some validation in the Mongoose schema.
If the validation fails, how can I display these "err.errors" in my Backbone view?
I can see the errors in the terminal if I console.log them, but I am not able to send them back to the view.
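One thing worth noting: Backbone only invokes the error callback when the server responds with a non-2xx status, so the backend has to send an error status along with the validation errors (the solution below does this with res.send(500, ...)). A minimal sketch of the backend handler, assuming Express:
user.save(function (err) {
  if (err) {
    // a non-2xx status is what makes Backbone call the error callback
    return res.status(500).send(err.errors); // Express 4 style; res.send(500, err.errors) in Express 3
  }
  res.send(user);
});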

Found out the solution after looking through the "errors" object.
All errors are returned in "errors.responseText", which has a format like:
{
  "key name": {
    "message": "",
    "name": "",
    "path": "",
    "type": "",
    "value": ""
  }
}
this.model.save(user_details, { // this is a Backbone model
  error: function (model, errors) {
    var err = JSON.parse(errors.responseText);
    $.each(err, function (name, error) {
      // do something with the error
      console.log(name + ': ' + error.message);
    });
  },
  success: function (model, response) {
  }
});
NOTE: Errors from MongoDB, such as unique/duplicate key errors, are not returned in this format, so it's up to us to convert them to this JSON shape and wrap them in err.errors before sending the response.
In case of an error on a unique key:
user.save(function (err) {
  if (err) {
    if (typeof err.code !== 'undefined' && err.code === 11000) {
      err.errors = { 'email': { 'message': 'This unique value is already in db' } };
    }
    res.send(500, err.errors);
  }
});
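To actually surface these messages in the UI rather than just logging them, the error callback can render them into the view. A minimal sketch, assuming this runs inside a Backbone view method and that the template has a hypothetical .form-errors container (neither comes from the original post):
var self = this; // the Backbone view
this.model.save(user_details, {
  error: function (model, xhr) {
    var errors = JSON.parse(xhr.responseText);
    var $container = self.$('.form-errors').empty(); // hypothetical container element
    $.each(errors, function (name, err) {
      $container.append('<p class="error">' + name + ': ' + err.message + '</p>');
    });
  },
  success: function (model, response) {
    // e.g. navigate away or show a confirmation message
  }
});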

Related

Slack Bolt App: Options body view state is not updated like actions body view state

I am trying to implement dependent external selects inside a modal, but I am having problems passing the state of the first dropdown to the second. I can see the state I need inside the app.action listener, but I am not getting the same state inside the app.options listener.
Here is body.view.state inside app.action("case_types"); I specifically need the case_create_case_type_block state:
"state": {
"values": {
"case_create_user_select_block": {
"case_create_selected_user": {
"type": "users_select",
"selected_user": "U01R3AE65GE"
}
},
"case_create_case_type_block": {
"case_types": {
"type": "external_select",
"selected_option": {
"text": { "type": "plain_text", "text": "Incident", "emoji": true },
"value": "Incident"
}
}
},
"case_create_case_subtype_block": {
"case_subtypes": { "type": "external_select", "selected_option": null }
},
"case_create_case_owner_block": {
"case_owners": { "type": "external_select", "selected_option": null }
},
"case_create_subject_block": {
"case_create_case_subject": {
"type": "plain_text_input",
"value": null
}
},
"case_create_description_block": {
"case_create_case_description": {
"type": "plain_text_input",
"value": null
}
}
}
},
And here is body.view.state inside app.options("case_subtypes"):
"state": {
"values": {
"case_create_user_select_block": {
"case_create_selected_user": {
"type": "users_select",
"selected_user": "U01R3AE65GE"
}
}
}
},
I also tried to update the view myself, hoping it would update the state variables inside app.action({ action_id: "case_types" }):
// need to update view with new values
try {
  // Call views.update with the built-in client
  const result = await client.views.update({
    // Pass the view_id
    view_id: body.view.id,
    // Pass the current hash to avoid race conditions
    hash: body.view.hash,
  });
  console.log("Case Type View Update result:");
  console.log(JSON.stringify(result));
  //await ack();
} catch (error) {
  console.error(error);
  //await ack();
}
I ended up posting this on the GitHub issues page for Slack Bolt. This was a bug that will be fixed in a future release. Below is the workaround, which uses private metadata to hold the state needed by future dependent dropdowns.
// Action handler for case type
app.action('case_type_action_id', async ({ ack, body, client }) => {
  try {
    // Create a copy of the modal view template and update the private metadata
    // with the selected case type from the first external select
    const viewTemplate = JSON.parse(JSON.stringify(modalViewTemplate));
    viewTemplate.private_metadata = JSON.stringify({
      case_type: body.view.state.values['case_type_block_id']['case_type_action_id'].selected_option.value,
    });
    // Call views.update with the built-in client
    const result = await client.views.update({
      // Pass the view_id
      view_id: body.view.id,
      // Pass the current hash to avoid race conditions
      hash: body.view.hash,
      // Pass the updated view
      view: viewTemplate,
    });
    console.log(JSON.stringify(result, 0, 2));
  } catch (error) {
    console.error(error);
  }
  await ack();
});
// Options handler for case subtype
app.options('case_subtype_action_id', async ({ body, options, ack }) => {
  try {
    // Get the private metadata that stores the selected case type
    const privateMetadata = JSON.parse(body.view.private_metadata);
    // Continue to render the case subtype options based on the case type
    // ...
  } catch (error) {
    console.error(error);
  }
});
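For reference, the elided part of that options handler would typically map the stored case type to a list of subtype options and pass them to ack(). The SUBTYPES mapping below is invented for illustration; only the ack({ options }) shape is Bolt's actual API:
// Hypothetical mapping from case type to subtypes (not part of the original post)
const SUBTYPES = {
  Incident: ['Outage', 'Degradation'],
  Request: ['Access', 'Hardware'],
};
// ...inside the app.options('case_subtype_action_id') handler above:
const { case_type } = JSON.parse(body.view.private_metadata || '{}');
await ack({
  options: (SUBTYPES[case_type] || []).map((subtype) => ({
    text: { type: 'plain_text', text: subtype },
    value: subtype,
  })),
});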
See the full explanation here: https://github.com/slackapi/bolt-js/issues/1146

Supabase third party OAuth providers are returning null?

I'm trying to implement Facebook, Google and Twitter authentication. So far, I've set up the apps within the respective developer platforms, added those keys/secrets to my Supabase console, and created this GraphQL resolver:
/* eslint-disable @typescript-eslint/explicit-module-boundary-types */
import camelcaseKeys from 'camelcase-keys';
import { supabase } from 'lib/supabaseClient';
import { LoginInput, Provider } from 'generated/types';
import { Provider as SupabaseProvider } from '@supabase/supabase-js';
import Context from '../../context';
import { User } from '@supabase/supabase-js';

export default async function login(
  _: any,
  { input }: { input: LoginInput },
  { res, req }: Context
): Promise<any> {
  const { provider } = input;

  // base level error object
  const errorObject = {
    __typename: 'AuthError',
  };

  // return error object if no provider is given
  if (!provider) {
    return {
      ...errorObject,
      message: 'Must include provider',
    };
  }

  try {
    const { user, session, error } = await supabase.auth.signIn({
      // provider can be 'github', 'google', 'gitlab', or 'bitbucket'
      provider: 'facebook',
    });

    console.log({ user });
    console.log({ session });
    console.log({ error });

    if (error) {
      return {
        ...errorObject,
        message: error.message,
      };
    }

    const response = camelcaseKeys(user as User, { deep: true });

    return {
      __typename: 'LoginSuccess',
      accessToken: session?.access_token,
      refreshToken: session?.refresh_token,
      ...response,
    };
  } catch (error) {
    return {
      ...errorObject,
      message: error.message,
    };
  }
}
I have three console logs set up directly underneath the signIn() function, all of which are returning null.
I can also go directly to https://<your-ref>.supabase.co/auth/v1/authorize?provider=<provider> and auth works correctly, so it appears to have been narrowed down specifically to the signIn() function. What would cause the response to return null values?
This is happening because these values are not populated until after the redirect from the OAuth server takes place. If you look at the internal code of supabase/gotrue-js you'll see null being returned explicitly.
private _handleProviderSignIn(
  provider: Provider,
  options: {
    redirectTo?: string
    scopes?: string
  } = {}
) {
  const url: string = this.api.getUrlForProvider(provider, {
    redirectTo: options.redirectTo,
    scopes: options.scopes,
  })

  try {
    // try to open on the browser
    if (isBrowser()) {
      window.location.href = url
    }
    return { provider, url, data: null, session: null, user: null, error: null }
  } catch (error) {
    // fallback to returning the URL
    if (!!url) return { provider, url, data: null, session: null, user: null, error: null }
    return { data: null, user: null, session: null, error }
  }
}
The flow is something like this:
1. Call supabase.auth.signIn({ provider: 'github' })
2. The user is sent to github.com, where they are prompted to allow or deny your app access to their data
3. If they allow your app access, github.com redirects back to your app
4. Now, through some Supabase magic, you have access to the session, user, etc. data
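A minimal sketch of step 4, assuming supabase-js v1 (the same version whose signIn call is shown above): once the redirect back to your app has been processed, the session is available from the client, either synchronously or via the auth state listener.
// runs in the browser after the OAuth provider redirects back to your app
import { supabase } from 'lib/supabaseClient';

// the session is available once the client has handled the redirect
const session = supabase.auth.session();
console.log(session?.user);

// or subscribe and react when it arrives
supabase.auth.onAuthStateChange((event, session) => {
  if (event === 'SIGNED_IN') {
    console.log(session.user);
  }
});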

Is there any cost advantage of Parse.Object.saveAll vs. saving individually?

The Parse JS SDK provides a Parse.Object.saveAll() method to save many objects with one command.
From looking at ParseServerRESTController.js it seems that each object is saved individually:
if (path === '/batch') {
  let initialPromise = Promise.resolve();
  if (data.transaction === true) {
    initialPromise = config.database.createTransactionalSession();
  }
  return initialPromise.then(() => {
    const promises = data.requests.map(request => {
      return handleRequest(
        request.method,
        request.path,
        request.body,
        options,
        config
      ).then(
        response => {
          return {
            success: response
          };
        },
        error => {
          return {
            error: {
              code: error.code,
              error: error.message
            },
          };
        }
      );
    });
    return Promise.all(promises).then(result => {
      if (data.transaction === true) {
        if (
          result.find(resultItem => typeof resultItem.error === 'object')
        ) {
          return config.database.abortTransactionalSession().then(() => {
            return Promise.reject(result);
          });
        } else {
          return config.database.commitTransactionalSession().then(() => {
            return result;
          });
        }
      } else {
        return result;
      }
    });
  });
}
It seems that saveAll is merely a convenience wrapper around saving each object individually, so it still seems to make n database requests for n objects.
Is it correct that saveAll has no cost advantage (performance, network traffic, etc.) vs. saving each object individually in Cloud Code?
I can tell you that the answer is that Parse.Object.saveAll and Parse.Object.destroyAll batch requests by default in batches of 20 objects. But why take my word for it? Let's test it out!
Turn verbose logging on and then run the following:
const run = async function run() {
  const objects = [...Array(10).keys()].map(i => new Parse.Object('Test').set({ i }));
  await Parse.Object.saveAll(objects);
  const promises = objects.map(o => o.increment('i').save());
  return Promise.all(promises);
};

run()
  .then(console.log)
  .catch(console.error);
And here's the output from the parse-server logs (I've truncated it, but it should be enough to make it apparent what is going on):
verbose: REQUEST for [POST] /parse/batch: { // <--- note the path
  "requests": [ // <--- an array of requests!!!
    {
      "method": "POST",
      "body": {
        "i": 0
      },
      "path": "/parse/classes/Test"
    },
    ... skip the next 7, you get the idea
    {
      "method": "POST",
      "body": {
        "i": 9
      },
      "path": "/parse/classes/Test"
    }
  ]
}
.... // <-- some irrelevant output removed for brevity.
verbose: RESPONSE from [POST] /parse/batch: {
  "response": [
    {
      "success": {
        "objectId": "szVkuqURVq",
        "createdAt": "2020-03-05T21:25:44.487Z"
      }
    },
    ...
    {
      "success": {
        "objectId": "D18WB4Nsra",
        "createdAt": "2020-03-05T21:25:44.491Z"
      }
    }
  ]
}
...
// now we iterate through and there's a request per object.
verbose: REQUEST for [PUT] /parse/classes/Test/szVkuqURVq: {
  "i": {
    "__op": "Increment",
    "amount": 1
  }
}
...
verbose: REQUEST for [PUT] /parse/classes/Test/HtIqDIsrX3: {
  "i": {
    "__op": "Increment",
    "amount": 1
  }
}
// and the responses...
verbose: RESPONSE from [PUT] /parse/classes/Test/szVkuqURVq: {
  "response": {
    "i": 1,
    "updatedAt": "2020-03-05T21:25:44.714Z"
  }
}
...
In the core manager code, you do correctly identify that we make a request to the data store (i.e. MongoDB) for each object. This is necessary because an object may have relations or pointers that have to be handled, and those may require additional calls to the data store.
BUT! Calls between the Parse Server and the data store are usually over very fast networks using a binary format, whereas calls between the client and the Parse Server are JSON and travel over longer distances with ordinarily much slower connections.
There is one other potential advantage you can see in the core manager code: the batch can be performed in a transaction.
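As a usage note, the client-side batch size is configurable, so you can trade fewer round trips for larger request bodies. A minimal sketch, assuming a Parse JS SDK version where saveAll accepts a batchSize option (check your SDK's documentation):
const runBatched = async () => {
  // 100 objects saved in two /parse/batch requests of 50 each, instead of 100 individual saves
  const objects = [...Array(100).keys()].map(i => new Parse.Object('Test').set({ i }));
  await Parse.Object.saveAll(objects, { batchSize: 50 }); // batchSize is assumed here; the default is 20
};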

GraphQL returning "Cannot return null for non-nullable field Query.getDate" - as I am new to GraphQL, is my approach wrong or my code?

I have created a resolver, schema and handler which fetch some records from DynamoDB. When I perform the query, I get the "Cannot return null for non-nullable field Query.getDate" error. I would like to know whether my approach is wrong or a change is required in the code.
My code : https://gist.github.com/vivek-chavan/95e7450ff73c8382a48fb5e6a5b96025
Input to the lambda:
{
  "query": "query getDate {\r\n getDate(id: \"0f92fa40-8036-11e8-b106-952d7c9eb822#eu-west-1:ba1c96e7-92ff-4d63-879a-93d5e397b18a\") {\r\n id\r\n transaction_date\r\n }\r\n }"
}
Response:
{
  "errors": [
    {
      "message": "Cannot return null for non-nullable field Query.getDate.",
      "locations": [
        {
          "line": 2,
          "column": 7
        }
      ],
      "path": [
        "getDate"
      ]
    }
  ],
  "data": null
}
Logs of the lambda function:
[ { Error: Cannot return null for non-nullable field Query.getDate.
at completeValue (/var/task/node_modules/graphql/execution/execute.js:568:13)
at completeValueCatchingError (/var/task/node_modules/graphql/execution/execute.js:503:19)
at resolveField (/var/task/node_modules/graphql/execution/execute.js:447:10)
at executeFields (/var/task/node_modules/graphql/execution/execute.js:293:18)
at executeOperation (/var/task/node_modules/graphql/execution/execute.js:237:122)
at executeImpl (/var/task/node_modules/graphql/execution/execute.js:85:14)
at execute (/var/task/node_modules/graphql/execution/execute.js:62:229)
at graphqlImpl (/var/task/node_modules/graphql/graphql.js:86:31)
at /var/task/node_modules/graphql/graphql.js:32:223
at graphql (/var/task/node_modules/graphql/graphql.js:30:10)
message: 'Cannot return null for non-nullable field Query.getDate.',
locations: [Object],
path: [Object] } ],
data: null }
2019-02-25T10:07:16.340Z 9f75d1ea-2659-490b-ba59-5289a5d18d73 { Item:
{ model: 'g5',
transaction_date: '2018-07-05T09:30:31.391Z',
id: '0f92fa40-8036-11e8-b106-952d7c9eb822#eu-west-1:ba1c96e7-92ff-4d63-879a-93d5e397b18a',
make: 'moto' } }
Thanks in advance!
This is your code:
const data = {
  getDate(args) {
    var params = {
      TableName: 'delete_this',
      Key: {
        "id": args.id
      }
    };
    client.get(params, function (err, data) {
      if (err) {
        console.log('error occured ' + err)
      } else {
        console.log(data)
      }
    });
  },
};

const resolvers = {
  Query: {
    getDate: (root, args) => data.getDate(args),
  },
};
You're seeing that error because getDate is a Non-Null field in your schema, but it is resolving to null. Your resolver needs to return either a value of the appropriate type, or a Promise that will resolve to that value. If you change data like this:
const data = {
  getDate(args) {
    return {
      id: 'someString',
      transaction_date: 'someString',
    }
  }
}
you'll see the error go away. Of course, your goal is to return data from your database, so we need to add that code back in. However, your existing code uses a callback. Anything you do inside the callback is irrelevant, because it runs after your resolver function has already returned. So we need to use a Promise instead.
While you can wrap a callback in a Promise, that shouldn't be necessary with aws-sdk, since newer versions support Promises. Something like this should be sufficient:
const data = {
  getDate(args) {
    const params = // ...
    // must return the resulting Promise here
    return client.get(params).promise().then(result => {
      return {
        // id and transaction_date based on result
      }
    })
  }
}
Or using async/await syntax:
const data = {
  async getDate(args) {
    const params = // ...
    const result = await client.get(params).promise()
    return {
      // id and transaction_date based on result
    }
  }
}
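Putting it together with the params from the question, a complete resolver might look like the sketch below; the AWS.DynamoDB.DocumentClient setup is assumed, and the field mapping follows the Item shape shown in the lambda logs above.
const AWS = require('aws-sdk');
const client = new AWS.DynamoDB.DocumentClient(); // assumes region/credentials come from the lambda environment

const data = {
  async getDate(args) {
    const params = {
      TableName: 'delete_this',
      Key: { id: args.id },
    };
    const result = await client.get(params).promise();
    // result.Item is undefined when no record matches, so guard before reading fields
    if (!result.Item) {
      throw new Error('No item found for id ' + args.id);
    }
    return {
      id: result.Item.id,
      transaction_date: result.Item.transaction_date,
    };
  },
};

const resolvers = {
  Query: {
    getDate: (root, args) => data.getDate(args),
  },
};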

How to wait until all bulk writes are completed in elastic search api

Using the Node.js Elasticsearch client, I'm trying to write a data importer to bulk import documents from MongoDB. The problem I'm having is that the index refresh doesn't seem to wait until all documents are written to Elasticsearch before checking the counts.
I'm using the streams API in Node to read the records into a batch, then using the Elasticsearch bulk API to write the records, as shown below:
function rebuildIndex(modelName, queryStream, openStream, done) {
  logger.debug('Rebuilding %s index', modelName);
  async.series([
    function (next) {
      deleteType(modelName, function (err, result) {
        next(err, result);
      });
    },
    function (next) {
      var Model;
      var i = 0;
      var batchSize = settings.indexBatchSize;
      var batch = [];
      var stream;
      if (queryStream && !openStream) {
        stream = queryStream.stream();
      } else if (queryStream && openStream) {
        stream = queryStream;
      } else {
        Model = mongoose.model(modelName);
        stream = Model.find({}).stream();
      }
      stream.on("data", function (doc) {
        logger.debug('indexing %s', doc.userType);
        batch.push({
          index: {
            "_index": settings.index,
            "_type": modelName.toLowerCase(),
            "_id": doc._id.toString()
          }
        });
        var obj;
        if (doc.toObject) {
          obj = doc.toObject();
        } else {
          obj = doc;
        }
        obj = _.clone(obj);
        delete obj._id;
        batch.push(obj);
        i++;
        if (i % batchSize == 0) {
          console.log(chalk.green('Loaded %s records'), i);
          client().bulk({
            body: batch
          }, function (err, resp) {
            if (err) {
              next(err);
            } else if (resp.errors) {
              next(resp);
            }
          });
          batch = [];
        }
      });
      // When the stream ends write the remaining records
      stream.on("end", function () {
        if (batch.length > 0) {
          console.log(chalk.green('Loaded %s records'), batch.length / 2);
          client().bulk({
            body: batch
          }, function (err, resp) {
            if (err) {
              logger.error(err, 'Failed to rebuild index');
              next(err);
            } else if (resp.errors) {
              logger.error(resp.errors, 'Failed to rebuild index');
              next(resp);
            } else {
              logger.debug('Completed rebuild of %s index', modelName);
              next();
            }
          });
        } else {
          next();
        }
        batch = [];
      });
    }
  ],
  function (err) {
    if (err)
      logger.error(err);
    done(err);
  });
}
I use this helper to check the document counts in the index. Without the timeout, the counts in the index are wrong, but with the timeout they're okay.
/**
 * A helper function to count the number of documents in the search index for a particular type.
 * @param type The type, e.g. User, Customer etc.
 * @param done A callback to report the count.
 */
function checkCount(type, done) {
  async.series([
    function (next) {
      setTimeout(next, 1500);
    },
    function (next) {
      refreshIndex(next);
    },
    function (next) {
      client().count({
        "index": settings.index,
        "type": type.toLowerCase(),
        "ignore": [404]
      }, function (error, count) {
        if (error) {
          next(error);
        } else {
          next(error, count.count);
        }
      });
    }
  ], function (err, count) {
    if (err)
      logger.error({"err": err}, "Could not check index counts.");
    done(err, count[2]);
  });
}
And this helper is supposed to refresh the index after the update completes:
// required to get results to show up immediately in tests. Otherwise there's a 1 second delay
// between adding an entry and it showing up in a search.
function refreshIndex(done) {
  client().indices.refresh({
    "index": settings.index,
    "ignore": [404]
  }, function (error, response) {
    if (error) {
      done(error);
    } else {
      logger.debug("deleted index");
      done();
    }
  });
}
The loader works okay, except this test fails because of timing between the bulk load and the count check:
it('should be able to rebuild and reindex customer data', function (done) {
  this.timeout(0); // otherwise the stream reports a timeout error
  logger.debug("Testing the customer reindexing process");
  // pass null to use the generic find all query
  searchUtils.rebuildIndex("Customer", queryStream, false, function () {
    searchUtils.checkCount("Customer", function (err, count) {
      th.checkSystemErrors(err, count);
      count.should.equal(volume.totalCustomers);
      done();
    });
  });
});
I observe random results in the counts from the tests. With the artificial delay (the setTimeout in the checkCount function) the counts match, so I conclude that the documents are eventually written to Elasticsearch and the test would then pass. I thought indices.refresh would essentially force a wait until all documents were written to the index, but it doesn't seem to work with this approach.
The setTimeout hack is not really sustainable when the volume goes to actual production levels, so how can I ensure the bulk calls have completely written to the Elasticsearch index before checking the document count?
Take a look at the "refresh" parameter (see the Elasticsearch documentation).
For example:
let bulkUpdatesBody = [ /* bulk actions / docs to index go here */ ];
client.bulk({
  refresh: "wait_for",
  body: bulkUpdatesBody
});
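As a sketch of how that fits the importer above, assuming the legacy elasticsearch client from the question (it returns a promise when no callback is passed): awaiting each bulk call with refresh: "wait_for" means a subsequent count sees the documents without any setTimeout.
// Hypothetical helper; client, settings and batch are the names used in the question's code.
async function flushBatch(batch) {
  const resp = await client().bulk({
    refresh: "wait_for", // don't resolve until the indexed docs are searchable
    body: batch
  });
  if (resp.errors) {
    throw new Error('Bulk indexing reported per-item errors');
  }
}
// After every flushBatch() promise has resolved, the count is reliable:
// const result = await client().count({ index: settings.index });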
I'm not sure if this is the answer or not, but I flushed the index prior to checking the count. It "appears" to work, but I don't know if it's just because of the timing between the calls. Perhaps someone from the Elastic team knows whether flushing the index will really solve the issue?
function checkCount(type, done) {
  async.series([
    function (next) {
      client().indices.flush({
        "index": settings.index,
        "ignore": [404]
      }, function (error, count) {
        if (error) {
          next(error);
        } else {
          next(error, count.count);
        }
      });
    },
    function (next) {
      refreshIndex(type, next);
    },
    function (next) {
      client().count({
        "index": settings.index,
        "type": type.toLowerCase(),
        "ignore": [404]
      }, function (error, count) {
        if (error) {
          next(error);
        } else {
          next(error, count.count);
        }
      });
    }
  ], function (err, count) {
    if (err)
      logger.error({"err": err}, "Could not check index counts.");
    done(err, count[2]);
  });
}
