TypeORM Insert Returning - Getting Correct ID Back But As 3 Different Names For Same Value

I'm inserting into a MS SQL table with an identity column, and need the newly inserted ID returned.
This code does the correct insert, and correctly returns the newly created ID. I can't figure out why I get three different names for the same returned value.
This isn't blocking anything - just curious if there's a cleaner way to only get one returned reference:
import "reflect-metadata";
import {createConnection} from "typeorm";
import {getConnection} from "typeorm";
import {Organizations} from "./entity/Organizations"
createConnection().then(async connection => {
    // insert new organization - return organizationId
    const newOrganizationId = await getConnection()
        .createQueryBuilder()
        .insert()
        .into(Organizations)
        .values([
            { organizationName: "Test Organization", organizationId: 2, orgShortName: "TestOrg" }
        ])
        .returning(["organizationId"])
        .execute();
        // .getSql();
    console.log("Organization Created: ", newOrganizationId);
}).catch(error => console.log(error));
Here's what's getting returned:
Organization Created: InsertResult {
  identifiers: [ { organizationId: 8 } ],
  generatedMaps: [ { organizationId: 8 } ],
  raw: [ { organizationId: 8 } ]
}
Why all three references to the same organizationId value?

Figured it out:
console.log(newOrganizationId.raw);
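For what it's worth, the InsertResult returned by execute() always carries all three views: identifiers (the primary key columns), generatedMaps (database-generated values mapped back onto entity property names), and raw (the driver's own output, here the MSSQL OUTPUT rows). There is no option to get only one of them back, so the cleanest approach is simply to pick a single value off the result your existing code already returns. A minimal sketch:
// `newOrganizationId` above is the whole InsertResult; with MSSQL the
// OUTPUT rows land in `raw`, so the single value can be pulled out like this:
const insertedId: number = newOrganizationId.raw[0].organizationId;
// ...or, equivalently, from the identifiers view:
// const insertedId = newOrganizationId.identifiers[0].organizationId;
console.log("Organization Created: ", insertedId);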

Related

How to modify just a property from a dexie store without deleting the rest?

I have the Dexie stores shown in the screenshot below:
Dexie stores print screen
My goal is to update a dexie field row from a store without losing the rest of the data.
For example: when I edit and save the field "com_name" in the second row (key={2}), I want to update "com_name" only and not lose the rest of the properties; see the first and third rows.
I already tried collection.modify and table.update, but both deleted the rest of the properties when using the code below:
dexieDB.table('company').where('dexieKey').equals('{1}')
    // USING table.update
    // .update(dexieRecord.dexiekey, {
    //     company: {
    //         com_name: "TOP SERVE 2"
    //     }
    // })
    .modify({
        company: {
            com_name: "TOP SERVE 2"
        }
    })
    .then(function (updated) {
        if (updated)
            console.log("Success.");
        else
            console.log("Nothing was updated.");
    })
    .catch(function (err) { console.log(err); });
Any idea how I can accomplish that?
Thanks
Alex
You were right to use Table.update or Collection.modify. They should never delete properties other than the ones specified. Can you paste a jsitor.com or jsfiddle repro of it, and someone may help you pinpoint why the code doesn't work as expected.
Now that you mention it, I realised that the company and contact stores are created dynamically, while the editedRecords store has its indexes explicitly declared. So when updating the company or contact store, since Dexie doesn't see the indexes, it will overwrite. I haven't tested it yet, but I suspect this is the behaviour.
See the print screen below:
Dexie stores overview
Basically I have raw JSON data from the db, and in the browser I create the stores and their data based on it; see the code below:
function createDexieTables(jsonData) { // jsonData - array, is the json from db
    const stores = {};
    const editedRecordsTable = 'editedRecords';
    jsonData.forEach((jsonPackage) => {
        for (const table in jsonPackage) {
            if (_.find(dexieDB.tables, { 'name': table }) == undefined) {
                stores[table] = 'dexieKey';
            }
        }
    });
    stores[editedRecordsTable] = 'dexieKey, table';
    addDataToDexie(stores, jsonData);
}
function addDataToDexie(stores, jsonData) {
    const dbv1 = dexieDB.version(1);
    if (jsonData.length > 0) {
        dbv1.stores(stores);
        jsonData.forEach((jsonPackage) => {
            for (const table in jsonPackage) {
                jsonPackage[table].forEach((tableRow) => {
                    dexieDB.table(table).add(tableRow)
                        .then(function () {
                            console.log(tableRow, ' added to dexie db.');
                        })
                        .catch(function () {
                            console.log(tableRow, ' already exists.');
                        });
                });
            }
        });
    }
}
This is the JSON, which I convert to an object and save to Dexie in the value column; the key is "dexieKey":
[
    {
        "company": [
            {
                "dexieKey": "{1}",
                "company": {
                    "com_pk": 1,
                    "com_name": "CloudFire",
                    "com_city": "Round Rock",
                    "serverLastEdit": [
                        { "com_pk": "2021-06-02T11:30:24.774Z" },
                        { "com_name": "2021-06-02T11:30:24.774Z" },
                        { "com_city": "2021-06-02T11:30:24.774Z" }
                    ],
                    "userLastEdit": []
                }
            }
        ]
    }
]
Any idea why indexes were not populated when generating them dynamically?
Given the JSON data, I understand what's going wrong.
Instead of passing the following to update():
{
    company: {
        com_name: "TOP SERVE 2"
    }
}
You probably meant to pass this:
{
    "company.com_name": "TOP SERVE 2"
}
Another hint is to do the adds within an 'rw' transaction, or even better, use bulkAdd() instead to optimize performance.
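A minimal sketch of both hints, assuming the dexieDB instance and store layout from the question (tableRows is a placeholder for an array of row objects):
// A "company.com_name" keyPath patches just that leaf, leaving the
// sibling properties of `company` untouched:
await dexieDB.table('company')
    .where('dexieKey').equals('{1}')
    .modify({ 'company.com_name': 'TOP SERVE 2' });

// Adds batched inside an explicit readwrite ('rw') transaction:
await dexieDB.transaction('rw', dexieDB.table('company'), async () => {
    await dexieDB.table('company').bulkAdd(tableRows);
});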

How to paste array of objects into GraphQL Apollo cache?

I have an array of countries received from Apollo backend without an ID field.
export const QUERY_GET_DELIVERY_COUNTRIES = gql`
    query getDeliveryCountries {
        deliveryCountries {
            order
            name
            daysToDelivery
            zoneId
            iso
            customsInfo
        }
    }
`
Schema of these objects:
{
    customsInfo: null
    daysToDelivery: 6
    iso: "UA"
    name: "Ukraine"
    order: 70
    zoneId: 8
    __typename: "DeliveryCountry"
}
In nested components I read these objects from client.readQuery.
What I want is to store it in localStorage, read it on startup, and write the data into the Apollo Client cache.
What I've already tried to do:
useEffect(() => {
    const deliveryCountries = JSON.parse(localStorage.getItem('deliveryCountries') || '[]')
    if (!deliveryCountries || !deliveryCountries.length) {
        getCountriesLazy()
    } else {
        deliveryCountries.map((c: DeliveryCountry) => {
            client.writeQuery({
                query: QUERY_GET_DELIVERY_COUNTRIES,
                data: {
                    deliveryCountries: {
                        __typename: "DeliveryCountry",
                        order: c.order,
                        name: c.name,
                        daysToDelivery: c.daysToDelivery,
                        zoneId: c.zoneId,
                        iso: c.iso,
                        customsInfo: c.customsInfo
                    }
                }
            })
        })
    }
}, [])
But after executing the code above, I have only one object in the countries cache. How can I write all the objects without having an explicit ID? Or maybe I'm doing something wrong?
Lol. I just had to put the array into the necessary field without iterating. writeQuery replaces all the data rather than appending "to the end".
client.writeQuery({
    query: QUERY_GET_DELIVERY_COUNTRIES,
    data: {
        deliveryCountries: deliveryCountries
    }
})
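Putting it together, the effect from the question collapses to a single writeQuery call (a sketch; client, getCountriesLazy and DeliveryCountry are assumed from the original setup):
useEffect(() => {
    const deliveryCountries: DeliveryCountry[] =
        JSON.parse(localStorage.getItem('deliveryCountries') || '[]')
    if (!deliveryCountries.length) {
        getCountriesLazy()
    } else {
        // One write with the whole array: writeQuery replaces the cached
        // field, so iterating would leave only the last country in place.
        client.writeQuery({
            query: QUERY_GET_DELIVERY_COUNTRIES,
            data: { deliveryCountries },
        })
    }
}, [])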

Prisma Not Returning Created Related Records

I want to create a new GraphQL API and I have an issue that I am struggling to fix.
The code is open source and can be found at: https://github.com/glitr-io/glitr-api
I want to create a mutation that creates a record with relations. The record seems to be created correctly with all the expected relations (when checking directly in the database), but the value returned by the create<YourTableName> method is missing all the relations.
So I get an error on the API because "Cannot return null for non-nullable field Meme.author.". I am unable to figure out what could be wrong in my code.
The resolver looks like the following:
...
const newMeme = await ctx.prisma.createMeme({
    author: {
        connect: { id: userId },
    },
    memeItems: {
        create: memeItems.map(({
            type,
            meta,
            value,
            style,
            tags = []
        }) => ({
            type,
            meta,
            value,
            style,
            tags: {
                create: tags.map(({ name = '' }) => ({
                    name
                }))
            }
        }))
    },
    tags: {
        create: tags.map(({ name = '' }) => ({
            name
        }))
    }
});
console.log('newMeme', newMeme);
console.log('newMeme', newMeme);
...
The value of newMeme in the console.log here (which is what is returned from this resolver) is:
newMeme {
    id: 'ck351j0f9pqa90919f52fx67w',
    createdAt: '2019-11-18T23:08:46.437Z',
    updatedAt: '2019-11-18T23:08:46.437Z',
}
Those returned fields are just the auto-generated ones, so I get an error for the following mutation because I tried to query the author:
mutation {
    meme(
        memeItems: [{
            type: TEXT
            meta: "test1-meta"
            value: "test1-value"
            style: "test1-style"
        }, {
            type: TEXT
            meta: "test2-meta"
            value: "test2-value"
            style: "test2-style"
        }]
    ) {
        id,
        author {
            displayName
        }
    }
}
Can anyone see what issue could be causing this?
(As previously mentioned, the record is created successfully with all relationships as expected when checking directly in the database.)
As described in the Prisma docs, the promise returned by the Prisma client's write functions, e.g. the createMeme function, only resolves with the scalar fields of the object:
When creating new records in the database, the create-method takes one input object which wraps all the scalar fields of the record to be created. It also provides a way to create relational data for the model; this can be supplied using nested object writes.
Each method call returns a Promise for an object that contains all the scalar fields of the model that was just created.
See: https://www.prisma.io/docs/prisma-client/basic-data-access/writing-data-JAVASCRIPT-rsc6/#creating-records
To also return the relations of the object you need to read the object again using an info fragment or the fluent api, see: https://www.prisma.io/docs/prisma-client/basic-data-access/reading-data-JAVASCRIPT-rsc2/#relations
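With the Prisma 1 client from the question, that second read could use the fluent API roughly like this (a sketch; only newMeme.id is needed from the create result):
const newMeme = await ctx.prisma.createMeme({
    /* ...nested create exactly as above... */
});
// The create only returns scalars, so fetch the relation separately
// and merge it into the object the resolver returns:
const author = await ctx.prisma.meme({ id: newMeme.id }).author();
return { ...newMeme, author };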

Error while trying to run a GraphQL query recursively, along with queried results

This is closely related to my last question here. In short, I have 2 schemas, dbPosts and dbAuthors. They look somewhat like this (I've omitted some fields here for the sake of brevity):
dbPosts
id: mongoose.Schema.Types.ObjectId,
title: { type: String },
content: { type: String },
excerpt: { type: String },
slug: { type: String },
author: {
    id: { type: String },
    fname: { type: String },
    lname: { type: String },
}
dbAuthors
id: mongoose.Schema.Types.ObjectId,
fname: { type: String },
lname: { type: String },
posts: [
    {
        id: { type: String },
        title: { type: String }
    }
]
I'm resolving my post queries like this:
const mongoose = require('mongoose');
const graphqlFields = require('graphql-fields');
const fawn = require('fawn');
const dbPost = require('../../../models/dbPost');
const dbUser = require('../../../models/dbUser');

fawn.init(mongoose);

module.exports = {
    // Queries
    Query: {
        posts: (root, args, context) => {
            return dbPost.find({});
        },
        post: (root, args, context) => {
            return dbPost.findById(args.id);
        },
    },
    Post: {
        author: (parent, args, context, ast) => {
            // Retrieve fields being queried
            const queriedFields = Object.keys(graphqlFields(ast));
            console.log('-------------------------------------------------------------');
            console.log('from Post:author resolver');
            console.log('queriedFields', queriedFields);
            // Retrieve fields returned by parent, if any
            const fieldsInParent = Object.keys(parent.author);
            console.log('fieldsInParent', fieldsInParent);
            // Check if queried fields already exist in parent
            const available = queriedFields.every((field) => fieldsInParent.includes(field));
            console.log('available', available);
            if (parent.author && available) {
                return parent.author;
            } else {
                return dbUser.findOne({ 'posts.id': parent.id });
            }
        },
    },
};
And I'm resolving all author queries like this:
const mongoose = require('mongoose');
const graphqlFields = require('graphql-fields');
const dbUser = require('../../../models/dbUser');
const dbPost = require('../../../models/dbPost');

module.exports = {
    // Queries
    Query: {
        authors: (root, args, context) => {
            return dbUser.find({});
        },
        author: (root, args, context) => {
            return dbUser.findById(args.id);
        },
    },
    Author: {
        posts: (parent, args, context, ast) => {
            // Retrieve fields being queried
            const queriedFields = Object.keys(graphqlFields(ast));
            console.log('-------------------------------------------------------------');
            console.log('from Author:posts resolver');
            console.log('queriedFields', queriedFields);
            // Retrieve fields returned by parent, if any
            const fieldsInParent = Object.keys(parent.posts[0]._doc);
            console.log('fieldsInParent', fieldsInParent);
            // Check if queried fields already exist in parent
            const available = queriedFields.every((field) => fieldsInParent.includes(field));
            console.log('available', available);
            if (parent.posts && available) {
                // If parent data is available and includes queried fields, no need to query db
                return parent.posts;
            } else {
                // Otherwise, query db and retrieve data
                return dbPost.find({ 'author.id': parent.id, 'published': true });
            }
        },
    },
};
Again, I've left out bits not relevant to this question, such as mutations, in the interest of brevity. My objective is to make all queries work recursively while also optimizing database lookups. But somehow I'm unable to accomplish this. Here's one query I'm running, for instance:
{
    posts {
        id
        title
        author {
            first_name
            last_name
            id
            posts {
                id
                title
            }
        }
    }
}
And it returns this:
{
    "errors": [
        {
            "message": "Cannot return null for non-nullable field Post.author.",
            "locations": [
                {
                    "line": 5,
                    "column": 5
                }
            ],
            "path": [
                "posts",
                1,
                "author"
            ]
        }
    ],
    "data": {
        "posts": [
            {
                "id": "5ba1f3e7cc546723422e62a4",
                "title": "A Title!",
                "author": {
                    "first_name": "Bill",
                    "last_name": "Erby",
                    "id": "5ba130271c9d440000ac8fc4",
                    "posts": [
                        {
                            "id": "5ba1f3e7cc546723422e62a4",
                            "title": "A Title!"
                        }
                    ]
                }
            },
            null
        ]
    }
}
If you notice, this query does return all values requested, but also adds an error message against the post.author query! What could be causing this?
I haven't included the entire codebase so as not to make things confusing, but should you wish to take a look, it's up on Github and a GraphiQL interface is up at https://graph.schandillia.com should you wish to see the results for yourself.
Thank you so much for your time, if you've come this far. Would really appreciate any pointer in the right direction!
P.S.: If you notice, I'm logging the values of 3 variables in each resolver for debugging purposes:
queriedFields: An array of all fields being queried
fieldsInParent: An array of all fields being returned in the resolver's parent property
available: A boolean showing if all queriedFields members exist in fieldsInParent
And when I run a simple query like this:
{
    posts {
        id
        author {
            id
            posts {
                id
            }
        }
    }
}
This is what gets logged:
-------------------------------------------------------------
from Post:author resolver
queriedFields [ 'id', 'posts' ]
fieldsInParent [ '$init', 'id', 'first_name', 'last_name' ]
available false
-------------------------------------------------------------
from Post:author resolver
queriedFields [ 'id', 'posts' ]
fieldsInParent [ '$init', 'id', 'first_name', 'last_name' ]
available false
-------------------------------------------------------------
from Author:posts resolver
queriedFields [ 'id' ]
fieldsInParent [ 'id', 'title' ]
available true
Shouldn't the post:author resolver execute only once? Also, it's funny how in the first 2 logs, fieldsInParent is missing the posts field even when the schema for author includes such a field.
Your query result does not in fact include all the requested data. The posts query resolves to an array that includes one Post object and a null. The null is there because GraphQL tried to fully resolve the other Post object and could not -- it encountered a validation error, namely that the post's author resolved to null.
You can change your schema to make the author field nullable, which would get rid of the error but would still leave you with the null post. Presumably, if a post exists, it should have an author (although with MongoDB I guess it's very possible you just have some bad data). If you look inside your resolver, there's two return statements -- one of them (probably the db call) is returning null for that second post.
As an aside, as a client, you probably don't want to deal with nulls inside the array and want an empty array instead of a null for the whole field. When using lists (arrays), you may want to make them both non-nullable and make each item in that list non-nullable as well. You do so like this:
posts: [Post!]!
You still need to ensure your resolver logic prevents those nulls from happening, but adding the validation can help you catch that sort of behavior more easily.
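As a sketch of that resolver-side guard (dbUser and the Post.author resolver are from the question; the error message is illustrative), failing loudly when the lookup misses makes the bad data visible instead of tripping the non-null check:
Post: {
    author: async (parent) => {
        // Reuse the parent's embedded author when present; otherwise
        // fall back to the db lookup as before.
        if (parent.author) return parent.author;
        const user = await dbUser.findOne({ 'posts.id': parent.id });
        if (!user) {
            // A descriptive error beats a bare "Cannot return null" violation.
            throw new Error(`No author found for post ${parent.id}`);
        }
        return user;
    },
},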

GraphQL: Filtering, sorting and paging on nested entities from separate data sources?

I'm attempting to use graphql to tie together a number of rest endpoints, and I'm stuck on how to filter, sort and page the resulting data. Specifically, I need to filter and/or sort by nested values.
I cannot do the filtering on the rest endpoints in all cases because they are separate microservices with separate databases. (i.e. I could filter on title in the rest endpoint for articles, but not on author.name). Likewise with sorting. And without filtering and sorting, pagination cannot be done on the rest endpoints either.
To illustrate the problem, and as an attempt at a solution, I've come up with the following using formatResponse in apollo-server, but am wondering if there is a better way.
I've boiled down the solution to the most minimal set of files that i could think of:
data.js represents what would be returned by 2 fictional rest endpoints:
export const Authors = [{ id: 1, name: 'Sam' }, { id: 2, name: 'Pat' }];
export const Articles = [
    { id: 1, title: 'Aardvarks', author: 1 },
    { id: 2, title: 'Emus', author: 2 },
    { id: 3, title: 'Tapir', author: 1 },
];
The schema is defined as:
import _ from 'lodash';
import {
    GraphQLSchema,
    GraphQLObjectType,
    GraphQLList,
    GraphQLString,
    GraphQLInt,
} from 'graphql';
import {
    Articles,
    Authors,
} from './data';

const AuthorType = new GraphQLObjectType({
    name: 'Author',
    fields: {
        id: {
            type: GraphQLInt,
        },
        name: {
            type: GraphQLString,
        },
    },
});

const ArticleType = new GraphQLObjectType({
    name: 'Article',
    fields: {
        id: {
            type: GraphQLInt,
        },
        title: {
            type: GraphQLString,
        },
        author: {
            type: AuthorType,
            resolve(article) {
                return _.find(Authors, { id: article.author });
            },
        },
    },
});

const RootType = new GraphQLObjectType({
    name: 'Root',
    fields: {
        articles: {
            type: new GraphQLList(ArticleType),
            resolve() {
                return Articles;
            },
        },
    },
});

export default new GraphQLSchema({
    query: RootType,
});
And the main index.js is:
import express from 'express';
import { apolloExpress, graphiqlExpress } from 'apollo-server';
import _ from 'lodash';
import rql from 'rql/query';
import rqlJS from 'rql/js-array';
import schema from './schema';
var bodyParser = require('body-parser');

const PORT = 8888;
var app = express();

function formatResponse(response, { variables }) {
    let data = response.data.articles;
    // Filter
    if ({}.hasOwnProperty.call(variables, 'q')) {
        // As an example, use a resource query lib like https://github.com/persvr/rql
        // to do easy filtering; in production this would have to be tightened up a lot
        data = rqlJS.query(rql.Query(variables.q), {}, data);
    }
    // Sort
    if ({}.hasOwnProperty.call(variables, 'sort')) {
        const sortKey = _.trimStart(variables.sort, '-');
        data = _.sortBy(data, (element) => _.at(element, sortKey));
        if (variables.sort.charAt(0) === '-') _.reverse(data);
    }
    // Pagination
    if ({}.hasOwnProperty.call(variables, 'offset') && variables.offset > 0) {
        data = _.slice(data, variables.offset);
    }
    if ({}.hasOwnProperty.call(variables, 'limit') && variables.limit > 0) {
        data = _.slice(data, 0, variables.limit);
    }
    return _.assign({}, response, { data: { articles: data } });
}

app.use('/graphql', bodyParser.json(), apolloExpress((req) => {
    return {
        schema,
        formatResponse,
    };
}));

app.use('/graphiql', graphiqlExpress({
    endpointURL: '/graphql',
}));

app.listen(
    PORT,
    () => console.log(`GraphQL Server running at http://localhost:${PORT}`)
);
For ease of reference, these files are available at this gist.
With this setup, I can send this query:
{
    articles {
        id
        title
        author {
            id
            name
        }
    }
}
Along with these variables (It seems like this is not the intended use for the variables, but it was the only way I could get the post processing parameters into the formatResponse function.):
{ "q": "author/name=Sam", "sort": "-id", "offset": 1, "limit": 1 }
and get this response, filtered to where Sam is the author, sorted by id descending, and getting the second page where the page size is 1.
{
    "data": {
        "articles": [
            {
                "id": 1,
                "title": "Aardvarks",
                "author": {
                    "id": 1,
                    "name": "Sam"
                }
            }
        ]
    }
}
Or these variables:
{ "sort": "-author.name", "offset": 1 }
For this response, sorted by author name descending and getting all articles except the first.
{
    "data": {
        "articles": [
            {
                "id": 1,
                "title": "Aardvarks",
                "author": {
                    "id": 1,
                    "name": "Sam"
                }
            },
            {
                "id": 2,
                "title": "Emus",
                "author": {
                    "id": 2,
                    "name": "Pat"
                }
            }
        ]
    }
}
So, as you can see, I am using the formatResponse function for post-processing to do the filtering/paging/sorting.
So, my questions are:
Is this a valid use case?
Is there a more canonical way to do filtering on deeply nested properties, along with sorting and paging?
Is this a valid use case? Is there a more canonical way to do filtering on deeply nested properties, along with sorting and paging?
A major part of the original question lies in the segregation of collections into different databases on separate microservices. It is necessary to perform a collection join and subsequent filtering on some key, but that is impossible to do directly, since the original collection has no field to filter, sort or paginate on.
The straightforward solution is to perform full or filtered queries against the original collections, then join and filter the resulting dataset on the application server, e.g. with lodash, as in your solution. This is workable for small collections, but in the general case it causes large data transfers and inefficient sorting, since there is no index structure (a real RB-tree or skip list), so with quadratic complexity it's not very good.
Depending on the resources available on the application server, special cache and index tables can be built there. If the collection structure is fixed, relations between collection entries and their fields can be reflected in a special search table and updated on demand. It's like building a find & search index, but on the application server rather than in the database. Of course it will consume resources, but it will be faster than direct lodash-like sorting.
The task can also be approached from the other side, if you have access to the structure of the original databases. The key is denormalization. Counter to the classical relational approach, collections can carry duplicate information to avoid a later join operation. E.g., the Articles collection can hold some information from the Authors collection that is necessary to perform filtering, sorting and pagination in later operations.
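As a small sketch of that denormalization (shapes borrowed from the data.js example above), each article keeps a local copy of the author fields it needs:
// Each article embeds the author's name, so the articles service can
// filter, sort and paginate on author.name without a cross-service join.
export const Articles = [
    { id: 1, title: 'Aardvarks', author: { id: 1, name: 'Sam' } },
    { id: 2, title: 'Emus', author: { id: 2, name: 'Pat' } },
    { id: 3, title: 'Tapir', author: { id: 1, name: 'Sam' } },
];
// Trade-off: renaming an author now requires fanning the change out
// to every article that embeds it.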
