How can I define and use a UUID field? - strapi

I need a UUID field in a content type. Following the documentation linked below, I have modified the file "MyType.settings.json":
https://strapi.io/documentation/3.x.x/guides/models.html#define-the-attributes
"uid": {
  "default": "",
  "type": "uuid"
},
I thought a UUID would be generated and saved automatically, but nothing happens.
How can I define and use a UUID field? Can someone give me a hint?
Should I also modify the file \api\MyType\controllers\MyType.js?
Thanks in advance!
Benjamin

You will have to use the uuid node module.
So keep your attribute, and in the lifecycle functions set your uuid with the lib:
'use strict';
const { v4: uuid } = require('uuid');

module.exports = {
  // Runs before a new entry is persisted
  beforeCreate: async (model) => {
    model.set('uid', uuid());
  }
};

You will probably also want a v4 UUID. It is the fastest kind to generate by a very large margin (often 10x or more), and, all other things being equal, it gives you the lowest chance of collision. That covers what 99% of people want UUIDs for:
work within the global UUID community, remain as unique as possible, and collide with nothing else.
In addition, v4 is more secure than v1: a v1 UUID embeds the time it was made and the hardware it was created on, and in practice it has a far higher collision rate, because today nobody respects the node field and just fills it with entropy, which defeats the whole purpose of v1.
The case for v4 over v1 is in fact so stark that it deserves to be far more widely known. It is night and day. All major frameworks that only need entropy now use v4.
import { v4 as uuid } from "uuid";
// Then just...
const myUUID = uuid();
UUID v1 was created because people, even the genius computer scientists who came up with UUID, didn't fully grasp that a 128-bit address space alone was much better than fancy timing and node tracking. UUID v1 is a great case of over-engineering.

In order to auto-generate a uuid when an entity is created, whether via the API or the admin UI, use the lifecycle method of your model, not the controller. The file /api/myType/models/myType.js may thus look like this:
'use strict';
const { v4: uuid } = require('uuid');

module.exports = {
  lifecycles: {
    beforeCreate: async (data) => {
      if (!data.uuid) {
        data.uuid = uuid();
      }
    },
  }
};

In Strapi v4, create the file ./src/api/[api-name]/content-types/[api-name]/lifecycles.js with the following content:
"use strict";
const { v4: uuid } = require("uuid");
module.exports = {
beforeCreate: async (data) => {
if (!data.params.data.uuid) {
data.params.data.uuid = uuid();
}
},
};
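Note that the attribute itself still has to be declared on the content type. In Strapi v4 that lives in the content type's schema.json; a minimal sketch of the relevant excerpt (assuming a plain string attribute named uuid, since there is no built-in uuid type):
./src/api/[api-name]/content-types/[api-name]/schema.json:
"attributes": {
  "uuid": {
    "type": "string",
    "unique": true
  }
}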

Related

Use RTK Query with GraphQL

So far I understand I need to build my own baseQuery. I could write GraphQL queries and mutations like in the example here: https://rtk-query-docs.netlify.app/examples/react-with-graphql. Will I get full type safety for queries and mutations if I add types to the query builder, like builder.query<Device, void>, or must I use something like https://www.graphql-code-generator.com/docs/plugins/typescript-graphql-request#simple-request-middleware? In the latter case, how should my baseQuery look if I use a generated hook for the graphql-request library?
Here is the example hook from the second link:
import { GraphQLClient } from 'graphql-request';
import { getSdk } from './sdk'; // THIS FILE IS THE GENERATED FILE

async function main() {
  const client = new GraphQLClient('https://countries.trevorblades.com/');
  const sdk = getSdk(client);
  const { continents } = await sdk.continents(); // This is fully typed, based on the query
  console.log(`GraphQL data:`, continents);
}
I am thinking of something like:
import { getSdk } from './sdk';

const client = new GraphQLClient('https://countries.trevorblades.com/');

const graphqlBaseQuery = async (someGeneratedQueryOrMutation, client) => {
  const something = someGeneratedQueryOrMutation(client);
  const { continents } = await something.continents();
  return { data: continents };
};
The code does not really make sense, but I hope you see where I am going with this. Thanks :)
Edit: By now there is a GraphQL Codegen plugin available at https://www.graphql-code-generator.com/docs/plugins/typescript-rtk-query
Actually I started writing a plugin for the code generator a few days ago.
You can see the generated result here:
https://github.com/phryneas/graphql-code-generator/blob/5f9a2eefd81538782b791e0cc5df633935164a89/dev-test/githunt/types.rtk-query.ts#L406-L427
This would require you to create an api with a baseQuery using a graphql library of your choice, like the sketch below.
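Since the original link may not survive, here is a rough sketch of such a baseQuery built on graphql-request (the endpoint URL is just the one from the example above, and the error mapping is one reasonable choice, not the only one):

import { createApi } from '@reduxjs/toolkit/query/react';
import { request, ClientError } from 'graphql-request';

// Minimal graphql baseQuery: hand the query document and variables to graphql-request
const graphqlBaseQuery = ({ baseUrl }) =>
  async ({ document, variables }) => {
    try {
      const result = await request(baseUrl, document, variables);
      return { data: result };
    } catch (error) {
      if (error instanceof ClientError) {
        return { error: { status: error.response.status, data: error } };
      }
      return { error: { status: 500, data: error } };
    }
  };

export const api = createApi({
  baseQuery: graphqlBaseQuery({ baseUrl: 'https://countries.trevorblades.com/' }),
  endpoints: () => ({}),
});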
A configuration would look like this:
./dev-test/githunt/types.rtk-query.ts:
  schema: ./dev-test/githunt/schema.json
  documents: ./dev-test/githunt/**/*.graphql
  plugins:
    - typescript
    - typescript-operations
    - typescript-rtk-query
  config:
    importBaseApiFrom: '../../packages/plugins/typescript/rtk-query/tests/baseApi'
    exportHooks: true
And I think for bundle-splitting purposes it would also work with the near-operation-file preset.
All that is not upstream yet - I will try to get that ready this weekend but don't know how much time it would take to actually get it in.
You could check the repo out, do a local build and install it with something like yalc though.
For a more basic approach without code generation you could look at this example, or for a somewhat more advanced setup (but also without full code generation, more integrated with existing tooling) you could look at this PR.

Unable to add remote file node to parent node Gatsby

I'm trying to add File nodes to a Gatsby source that doesn't create them automatically (gatsby-source-tumblr in this case). Every node has an associated array of images that I want to download and later convert using Gatsby-Image.
Here's my code so far:
const { createRemoteFileNode } = require("gatsby-source-filesystem")

exports.onCreateNode = async ({
  node,
  actions: { createNode, createNodeField },
  store,
  cache,
  getCache,
  createNodeId,
}) => {
  if (node.internal.type === `TumblrPost`) {
    const pathValue = "/posts/" + node.slug
    createNodeField({
      node,
      name: `path`,
      value: pathValue,
    })
  }
  if (
    node.internal.type === `TumblrPost` &&
    node.photos &&
    node.photos.length
  ) {
    await node.photos.map(async photo => {
      let fileNode
      try {
        fileNode = await createRemoteFileNode({
          url: photo.original_size.url,
          parentNodeId: node.id,
          getCache,
          createNode,
          cache,
          createNodeId,
          store,
        })
        // Adds a field `localFile` to the node
        // ___NODE appendix tells Gatsby that this field will link to another node
        if (fileNode) {
          photo.localFile___NODE = fileNode.id
        }
      } catch (err) {
        console.warn(err)
      }
    })
  }
}
As suggested in multiple tutorials I've found, I'm using createRemoteFileNode to download the images and add them as file nodes. This part works, I can see the images being downloaded, and there's a number of items visible in GraphiQL under the allImageSharp category.
However I can't seem to get the association with the parent node to work. Doing something like photo.localFile___NODE = fileNode.id does not seem to have any effect. Experimenting with adding even one image to the parent node (post) doesn't seem to work: When looking at the post node in the GraphiQL inspector, there is no trace of the File node.
I have the feeling I might be missing something obvious, but I've been fiddling with this for hours now and can't seem to figure it out.
Any help would be really appreciated!
I found this answer that helped me fix it. I think the problem was with how I was handling the async loops. I copied the code that sets product.images (in that other user's codebase) and adapted it to mine, and it worked. I'm guessing the await Promise.all was the key.
Which is weird to me, because at some point I brought in the fantastic async library (something I'm more familiar with than await etc.) to make sure there weren't any issues with the looping, and that didn't work. Ah well, such is life. I'm glad it works now.
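For future readers, the relevant change is roughly this (a sketch of the looping part of onCreateNode from the question, with the promises collected and awaited together):

const { createRemoteFileNode } = require("gatsby-source-filesystem")

exports.onCreateNode = async ({
  node,
  actions: { createNode },
  store,
  cache,
  getCache,
  createNodeId,
}) => {
  if (node.internal.type === `TumblrPost` && node.photos && node.photos.length) {
    // map() alone returns an array of pending promises; Promise.all makes
    // onCreateNode actually wait for every file node before it resolves
    await Promise.all(
      node.photos.map(async photo => {
        const fileNode = await createRemoteFileNode({
          url: photo.original_size.url,
          parentNodeId: node.id,
          getCache,
          createNode,
          cache,
          createNodeId,
          store,
        })
        if (fileNode) {
          photo.localFile___NODE = fileNode.id
        }
      })
    )
  }
}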

Access JSON chunk exported from Gatsby Static Query

I have a React Component in a Gatsby app that is using the useStaticQuery hook to pull in data from the GraphQL layer. This component gets used in my application, but it also gets used as part of a JavaScript embed/widget that is created in a separate Webpack configuration.
I don't want the widget to depend on Gatsby, so I've shimmed the relevant bits of Gatsby, but I still need to pass in data to the shim I've created for useStaticQuery. I found that my Gatsby app is generating a file at public/static/d/2250905522.json that contains a perfect representation of the query data, and I'd like to use it like so:
// This file gets substituted when importing from `gatsby`
import queryResult from "../public/static/d/2250905522.json"
export const useStaticQuery = () => queryResult.data
export const graphql = () => {}
This works, but I haven't figured out where this is coming from or how to determine the file name in a way that is deterministic/stable. How is Gatsby determining this file name, and what internals might I use to do the same?
Edit: I found this routine in the Gatsby codebase that appears to be using staticQueryComponent.hash to determine the number. staticQueryComponent is being destructured from store.getState() where store is associated with Redux, but I'm still not sure where the hash is being determined yet.
Edit 2: Found another mention of this in the documentation here. It sounds like hash is a hash of the query itself, so this will change over time if the query changes (which is likely), so I'm still looking for the routine used to compute the hash.
Due to changes in the babel-plugin-remove-graphql-queries, coreyward's (awesome) answer should be updated to:
const { stripIgnoredCharacters } = require('graphql/utilities/stripIgnoredCharacters');
const murmurModule = require('babel-plugin-remove-graphql-queries/murmur');
const murmurhash = typeof murmurModule === 'function' ? murmurModule : murmurModule.murmurhash;
const GATSBY_HASH_SEED = 'abc';

function hashQuery(query) {
  const result = murmurhash(stripIgnoredCharacters(query), GATSBY_HASH_SEED).toString();
  return result;
}

module.exports = hashQuery;
The changes are:
Fix the way murmurhash is imported. Credit to GitHub user veloce, see: https://github.com/birkir/gatsby-source-graphql-universal/pull/16/files
Change to using stripIgnoredCharacters in order to match the updated way that Gatsby internally hashes queries, by first stripping whitespace and comment lines for efficiency.
Gatsby is using murmurhash with a seed of "abc" to calculate the hash of the full text of the query (including whitespace). This occurs in babel-plugin-remove-graphql-queries.
Since the reused components are isolated from Gatsby, the graphql tagged template literal can be shimmed in order to get the original query for hashing:
// webpack.config.js
const path = require("path")

module.exports = {
  resolve: {
    alias: {
      gatsby: path.resolve(__dirname, "gatsby-shim.js"),
    },
  },
}

// gatsby-shim.js
import { murmurhash } from "babel-plugin-remove-graphql-queries/murmur"
import {
  stripIgnoredCharacters,
} from "graphql/utilities/stripIgnoredCharacters"

const GATSBY_HASH_SEED = "abc"

const hashQuery = (query) =>
  murmurhash(
    stripIgnoredCharacters(query),
    GATSBY_HASH_SEED
  ).toString()

export const graphql = query => hashQuery(query.raw[0])
This results in the query hash being passed into useStaticQuery, which can be shimmed similarly to retrieve the cached query from disk.
Also worth noting, newer versions of Gatsby store the StaticQuery result data in public/page-data/sq/d/[query hash].json.
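With that, the useStaticQuery side of the shim can resolve the cached result from disk by hash; a minimal sketch (assuming the newer page-data layout just mentioned, and that webpack can bundle the JSON via a context require):

// gatsby-shim.js (continued)
export const useStaticQuery = (hash) => {
  // `hash` is the value returned by the shimmed `graphql` tag above
  const { data } = require(`./public/page-data/sq/d/${hash}.json`)
  return data
}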
If you're looking to do something similar, I've written up a much longer blog post about the details of this process and the solution I arrived at here.

Apollo Server Slow Performance when resolving large data

When resolving large data, I notice very slow performance from the moment my resolver returns the result to the moment the client receives it.
I assume apollo-server iterates over my result and checks the types... either way, the operation takes too long.
In my product I have to return a large amount of data all at once, since it is all used at once to draw a chart in the UI. There is no pagination option where I could slice the data.
I suspect the slowness comes from apollo-server and not from my resolver's object creation.
Note that I log the time the resolver takes to create the object; it is fast and not the bottleneck.
The later operations performed by apollo-server, which I don't know how to measure, take a lot of time.
Now, I have a version where I return a custom scalar type JSON, and the response is much, much faster. But I really prefer to return my Series type.
I measure the difference between the two types (Series and JSON) by looking at the network panel:
when AMOUNT is set to 500 and the type is Series, it takes ~1.5s (that is, seconds)
when AMOUNT is set to 500 and the type is JSON, it takes ~150ms (fast!)
when AMOUNT is set to 1000 and the type is Series, it's very slow...
when AMOUNT is set to 10000 and the type is Series, I'm getting "JavaScript heap out of memory" (which is unfortunately what we experience in our product)
I've also compared apollo-server performance to express-graphql; the latter works faster, yet still not as fast as returning the custom scalar JSON:
when AMOUNT is set to 500, apollo-server, the network takes 1.5s
when AMOUNT is set to 500, express-graphql, the network takes 800ms
when AMOUNT is set to 1000, apollo-server, the network takes 5.4s
when AMOUNT is set to 1000, express-graphql, the network takes 3.4s
The Stack:
"dependencies": {
"apollo-server": "^2.6.1",
"graphql": "^14.3.1",
"graphql-type-json": "^0.3.0",
"lodash": "^4.17.11"
}
The Code:
const _ = require("lodash");
const { performance } = require("perf_hooks");
const { ApolloServer, gql } = require("apollo-server");
const GraphQLJSON = require('graphql-type-json');

// The GraphQL schema
const typeDefs = gql`
  scalar JSON

  type Unit {
    name: String!
    value: String!
  }

  type Group {
    name: String!
    values: [Unit!]!
  }

  type Series {
    data: [Group!]!
    keys: [Unit!]!
    hack: String
  }

  type Query {
    complex: Series
  }
`;

const AMOUNT = 500;

// A map of functions which return data for the schema.
const resolvers = {
  Query: {
    complex: () => {
      let before = performance.now();
      const result = {
        data: _.times(AMOUNT, () => ({
          name: "a",
          values: _.times(AMOUNT, () => (
            {
              name: "a",
              value: "a"
            }
          )),
        })),
        keys: _.times(AMOUNT, () => ({
          name: "a",
          value: "a"
        }))
      };
      let after = performance.now() - before;
      console.log("resolver took: ", after);
      return result
    }
  }
};

const server = new ApolloServer({
  typeDefs,
  resolvers: _.assign({ JSON: GraphQLJSON }, resolvers),
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
The gql Query for the Playground (for type Series):
query {
  complex {
    data {
      name
      values {
        name
        value
      }
    }
    keys {
      name
      value
    }
  }
}
The gql Query for the Playground (for custom scalar type JSON):
query {
  complex
}
Here is a working example:
https://codesandbox.io/s/apollo-server-performance-issue-i7fk7
Any leads/ideas would be highly appreciated!
There's a related open issue here. Lee Byron summed it up pretty well:
I think the TL;DR of this issue is that GraphQL has some overhead and that reducing that overhead is non-trivial and removing it completely may not be an option. Ultimately GraphQL.js is still responsible for making API boundary guarantees about the shape and type of the returned data and by design does not trust the underlying systems. In other words GraphQL.js does runtime type checking and sub-selection and this has some cost.
The benefits that GraphQL offers (validation, sub-selection, etc.) inevitably incur some overhead as they require additional processing of the data you're returning. And unfortunately, this overhead scales with the size of the data. I imagine if you were to implement a REST endpoint that supported partial responses and did response validation using something like Swagger or Joi, you'd encounter a similar issue.
The "heap out of memory" error means exactly what it says -- you're running out of memory on the heap. You can try to alleviate this by manually increasing the limit.
Typically, large datasets like this should be broken up by implementing pagination. If that's not an option, utilizing a custom scalar will be the next best approach. The biggest downside to this approach is that clients consuming your API will not be able to request specific fields inside the JSON object you return. Outside of patching GraphQL.js, there's really no other alternative to speed up the responses and reduce your memory usage.
Comment summary
This data structure/types:
are not individual entities;
are just a series of [grouped] data;
don't need normalization;
won't be normalized properly in the Apollo cache (no id fields).
So this dataset is not what GraphQL was designed for. Of course GraphQL can still be used to fetch this data, but type parsing/matching should be disabled.
Using custom scalar types (graphql-type-json) can be a solution. If you need a hybrid solution, you can type Group.values as JSON (instead of the entire Series). Groups should still have an id field if you want to use normalized cache access.
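A sketch of that hybrid schema, with the per-unit types collapsed into the JSON scalar (the id field is added here for cache normalization; it is not in the original schema):

scalar JSON

type Group {
  id: ID!
  name: String!
  values: JSON!  # opaque to GraphQL: no per-unit validation or sub-selection
}

type Series {
  data: [Group!]!
}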
Alternative
You can use apollo-link-rest to fetch the 'pure' json data (file), leaving type parsing/matching to the client side only.
More advanced alternative
If you want to use one graphql endpoint...
Write your own link - use directives - 'ask for json, get typed' - a mix of the two above. Something like the de-/serializers in the rest link.
With both alternatives, though: why do you really need this? Just for drawing? It's not worth the effort. There is no pagination, but hopefully streaming (live updates?)... no cursors... "load more" (subscriptions/polling) keyed on the last update time? Doable, but it 'doesn't feel right'.

Get room/rooms of client [duplicate]

I can get a room's client list with this code in socket.io 0.9:
io.sockets.clients(roomName)
How can I do this in socket.io 1.0?
Consider this rather more complete answer linked in a comment above on the question: https://stackoverflow.com/a/24425207/1449799
The clients in a room can be found at
io.nsps[yourNamespace].adapter.rooms[roomName]
This is an associative array with keys that are socket ids. In our case, we wanted to know the number of clients in a room, so we did Object.keys(io.nsps[yourNamespace].adapter.rooms[roomName]).length
In case you haven't seen/used namespaces (like this guy [me]), you can learn about them here: http://socket.io/docs/rooms-and-namespaces/ (importantly: the default namespace is '/').
Updated (esp. for @Zettam):
Check out this repo to see this working: https://github.com/thegreatmichael/socket-io-clients
Using @ryan_Hdot's link, I made a small temporary function in my code, which avoids maintaining a patch. Here it is:
function getClient(roomId) {
  var res = [],
      room = io.sockets.adapter.rooms[roomId];
  if (room) {
    for (var id in room) {
      res.push(io.sockets.adapter.nsp.connected[id]);
    }
  }
  return res;
}
If using a namespace:
function getClient(ns, id) {
  return io.nsps[ns].adapter.rooms[id];
}
I use this as a temporary fix for io.sockets.clients(roomId), which becomes findClientsSocketByRoomId(roomId).
EDIT:
Most of the time it is worth avoiding this method if possible.
What I do now is usually put each client in its own room (i.e. a room whose name is its client ID). I found the code more readable that way, and I don't have to rely on this workaround anymore (see the sketch after this note).
Also, I haven't tested this with a Redis adapter.
If you have to do it anyway, also see this related question if you are using namespaces.
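For reference, the room-per-client pattern leans on the fact that socket.io >= 1.0 automatically puts every socket in a room named after its own id, so targeting one client is just an emit to that room; a minimal sketch:

io.on('connection', function (socket) {
  // every socket already sits in a room named socket.id
  socket.on('private message', function (targetId, msg) {
    io.to(targetId).emit('private message', msg);
  });
});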
For those of you using namespaces, I made a function too that can handle different namespaces. It's much the same as nha's answer.
function get_users_by_room(nsp, room) {
  var users = [];
  for (var id in io.of(nsp).adapter.rooms[room]) {
    users.push(io.of(nsp).adapter.nsp.connected[id]);
  }
  return users;
}
As of at least 1.4.5, nha's method doesn't work anymore either, and there is still no public API for getting the clients in a room. Here is what works for me.
io.sockets.adapter.rooms[roomId] returns an object that has two properties, sockets and length. The first is another object that has socket IDs for keys and booleans as the values:
Room {
  sockets:
   { '/#vQh0q0gVKgtLGIQGAAAB': true,
     '/#p9Z7l6UeYwhBQkdoAAAD': true },
  length: 2 }
So my code to get clients looks like this:
var sioRoom = io.sockets.adapter.rooms[roomId];
if (sioRoom) {
  Object.keys(sioRoom.sockets).forEach(function (socketId) {
    console.log("sioRoom client socket Id: " + socketId);
  });
}
You can see this GitHub pull request for discussion on the topic; however, it seems as though that functionality has been stripped from the 1.0 pre-release candidate for Socket.IO.
