Multiple databases in Strapi

Is it possible to use two different databases, e.g. MongoDB and Redis, simultaneously in Strapi?
I want to keep track of my refresh tokens in Redis, while keeping all other documents in MongoDB.

Yes, you can define multiple connections, but Strapi does not support Redis as a database connector. Here is an example with two connections in config/database.js:
module.exports = ({ env }) => ({
  defaultConnection: 'default', // this defines the default connection
  connections: {
    default: { // the default connection
      connector: 'bookshelf',
      settings: {
        client: 'sqlite',
        filename: env('DATABASE_FILENAME', '.tmp/main.db'), // uses the main db
      },
      options: {
        useNullAsDefault: true,
      },
    },
    messages: { // the connection for the messages db
      connector: 'bookshelf',
      settings: {
        client: 'sqlite',
        filename: env('DATABASE_FILENAME', '.tmp/messages.db'), // uses another db
      },
      options: {
        useNullAsDefault: true,
      },
    },
  },
});
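To route a specific content type to the second connection, Strapi v3 lets a model opt in via the `connection` key in its model settings file. A minimal sketch, assuming a hypothetical `message` model at api/message/models/message.settings.json:

```json
{
  "kind": "collectionType",
  "connection": "messages",
  "collectionName": "messages",
  "info": {
    "name": "message"
  },
  "attributes": {
    "text": {
      "type": "string"
    }
  }
}
```

Any model without an explicit `connection` falls back to defaultConnection.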

Related

NestJS GraphQL inconsistent autoSchemaFile and service sql

I'm using NestJS with ApolloFederationDriver. I also add a prefix with the transformSchema option. The issue is that the prefix does not appear in the SDL, but it is present in the autoSchemaFile. This is a problem because I can't create the connection between this microservice and the Gateway.
@Module({
  imports: [
    GraphQLModule.forRoot<ApolloDriverConfig>({
      autoSchemaFile: 'src/graphql/schema.gql',
      driver: ApolloFederationDriver,
      plugins: [
        ApolloServerPluginLandingPageLocalDefault(),
        ApolloServerPluginInlineTraceDisabled(),
      ],
      sortSchema: true,
      transformSchema: <some_transforms>,
      transformAutoSchemaFile: true,
    }),
  ],
})
Example from the schema.gql file:
  type Query {
    prefix_getSomeData(filter: prefix_FilterInput)
  }
And the same query in the SDL (query ApolloGetServiceDefinition { _service { sdl } }):
  type Query {
    getSomeData(filter: FilterInput)
  }
How can I add the prefix to the SDL?

How to specify environment to point Cypress tests at the correct DB credentials?

Below is the index.js code I am using to connect to a MySQL DB in my cypress test:
const mysql = require('mysql')

function queryTestDb(query, config) {
  const connection = mysql.createConnection(config.env.db)
  connection.connect()
  return new Promise((resolve, reject) => {
    connection.query(query, (error, results) => {
      if (error) reject(error)
      else {
        connection.end()
        return resolve(results)
      }
    })
  })
}

module.exports = (on, config) => {
  on('task', { queryDb: query => { return queryTestDb(query, config) } })
  require('cypress-grep/src/plugin')(config)
  return config
}
Currently, my tests use the DB credentials provided in cypress.json on this line:
const connection = mysql.createConnection(config.env.db)
But I want the framework to run in different environments, as the database name is different.
I have already created qa.json & staging.json config files that store the DB credentials like so:
qa.json:
{
  "extends": "./cypress.json",
  "baseUrl": "myUrl",
  "env": {
    "db": {
      "host": "myHost",
      "user": "myUser",
      "password": "myPassword",
      "database": "taltektc_qa"
    }
  }
}
staging.json:
{
  "extends": "./cypress.json",
  "baseUrl": "myUrl",
  "env": {
    "db": {
      "host": "myUrl",
      "user": "myUser",
      "password": "myPassword",
      "database": "taltektc_stage"
    }
  }
}
Here is the command I am currently using to run the tests:
npx cypress open --config-file staging.json
I tried to update my index.js as below, but I get a "Cypress is not defined" error message:
  module.exports = (on, config) => {
    on('task', { queryDb: query => { return queryTestDb(query, Cypress.config()) } })
Can someone please tell me what changes are required in my index.js so that I can specify which config file to use when making the DB connection?
In a Node plugin task, the config parameter is the equivalent of Cypress.config() in browser-side spec code. You should already be getting the correctly resolved config after --config-file staging.json is applied, so the original code is all you need:
  module.exports = (on, config) => {
    on('task', { queryDb: query => { return queryTestDb(query, config) } })
You can check what has been resolved after opening the runner, under Settings → Configuration.
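If you want to sanity-check the task logic without a live MySQL instance, the callback-to-Promise pattern used in queryTestDb can be exercised with a stub. A minimal sketch; the stub object is hypothetical and only mimics the query/end shape of a mysql connection:

```javascript
// Promisify a callback-style query, mirroring queryTestDb's structure.
function queryDb(connection, query) {
  return new Promise((resolve, reject) => {
    connection.query(query, (error, results) => {
      if (error) return reject(error)
      connection.end()
      resolve(results)
    })
  })
}

// Hypothetical stub standing in for mysql.createConnection(config.env.db).
const stubConnection = {
  query: (sql, cb) => cb(null, [{ id: 1, name: 'alice' }]),
  end: () => {},
}

queryDb(stubConnection, 'SELECT * FROM users')
  .then(rows => console.log(rows.length)) // prints 1
```

The same function works unchanged against a real connection, since it only relies on query and end.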

Apollo Federation Gateway: include local schemas when composing supergraph

When composing a supergraph for Apollo Federation's gateway, you would create a .yaml config file with the routing urls to the subgraphs.
Ex: https://github.com/apollographql/supergraph-demo/blob/main/subgraphs/inventory/inventory.js
# supergraph.yaml
subgraphs:
  inventory:
    routing_url: http://inventory:4000/graphql
    schema:
      file: ./subgraphs/inventory/inventory.graphql
  products:
    routing_url: http://products:4000/graphql
    schema:
      file: ./subgraphs/products/products.graphql
  users:
    routing_url: http://users:4000/graphql
    schema:
      file: ./subgraphs/users/users.graphql
In the example above, they are starting an Apollo server for each subgraph and composing a supergraph.
Is it possible to compose a supergraph without starting Apollo servers and just including the local schemas?
You can, following this tutorial: https://www.apollographql.com/blog/backend/using-apollo-federation-with-local-schemas/
Instead of composing super- and subgraphs from a config file, use serviceList and conditionally build the data source:
const { ApolloGateway, RemoteGraphQLDataSource, LocalGraphQLDataSource } = require("@apollo/gateway");

const gateway = new ApolloGateway({
  serviceList: [
    { name: "products", url: "http://localhost:4002" },
    { name: "countries", url: "http://countries" },
  ],
  buildService: ({ url }) => {
    // Serve the countries subgraph from a locally built schema
    // instead of a running server. getCountriesSchema() is expected
    // to return the locally constructed federated schema.
    if (url === "http://countries") {
      return new LocalGraphQLDataSource(getCountriesSchema());
    } else {
      return new RemoteGraphQLDataSource({
        url,
      });
    }
  },
});

Gatsby-source-graphql requires option `fieldName` to be specified

I am trying to get a demo app that uses Hasura and Gatsby started (https://github.com/praveenweb/dynamic-jamstack-gatsby-hasura/tree/master/dynamic-auth-client).
I edited the gatsby-config.js file with my Hasura endpoint URL, but I get the following error.
ERROR
UNHANDLED REJECTION Type HASURA must define one or more fields.
Error: Type HASURA must define one or more fields.
gatsby-config.js
module.exports = {
  siteMetadata: {
    title: "projectname",
    siteUrl: `https://www.myurlhere.com`,
  },
  plugins: [
    `gatsby-plugin-react-helmet`,
    `gatsby-plugin-sitemap`,
    {
      resolve: `gatsby-plugin-nprogress`,
      options: {
        // Setting a color is optional.
        color: `tomato`,
        // Disable the loading spinner.
        showSpinner: false,
      },
    },
    {
      resolve: "gatsby-source-graphql",
      options: {
        typeName: "HASURA",
        fieldName: "hasura",
        url: "https://myurlhere.com/v1/graphql",
      },
    },
  ],
}
I found I only needed to set typeName: "Query" and fieldName: "blah". Leaving typeName out produces a different error:
Error: Invariant Violation: gatsby-source-graphql requires option `typeName` to be specified
  {
    resolve: "gatsby-source-graphql",
    options: {
      // This type will contain the remote schema's Query type
      typeName: "Query",
      // This is the field under which it's accessible
      fieldName: "blah",
      // URL to query from (a locally exposed service in this case)
      url: "http://10.113.34.59:4000/graphql",
    },
  },
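With this setup, the remote schema is mounted under the configured fieldName in Gatsby's GraphQL layer. A hypothetical page query (the users field and its columns depend entirely on the remote schema):

```graphql
query {
  blah {
    users {
      id
      name
    }
  }
}
```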

Apollo: passing root to resolver with info.mergeInfo.delegateToSchema

I have a stitched GraphQL schema. Some type fields are resolved with info.mergeInfo.delegateToSchema.
Here's an example (from the Apollo docs):
const mergedSchema = mergeSchemas({
  schemas: [
    transformedChirpSchema,
    authorSchema,
    linkTypeDefs,
  ],
  resolvers: {
    User: {
      chirps: {
        fragment: `... on User { id }`,
        resolve(user, args, context, info) {
          return info.mergeInfo.delegateToSchema({
            schema: chirpSchema,
            operation: 'query',
            fieldName: 'chirpsByAuthorId',
            args: {
              authorId: user.id,
            },
            context,
            info,
          });
        },
      },
    },
  },
});
Is it possible to access root in the chirps resolver, so that root contains all the parent fields? Another way is, of course, to use context for this purpose, but using root would, I think, be cleaner from a code perspective, as I'm already using the root value in some cases.
Under the hood, info.mergeInfo.delegateToSchema can call a remote GraphQL application.
So, by design, the remote resolver doesn't have access to the local root/context/info/args; you need to send all required data in the arguments for the remote field. For example:
const mergedSchema = mergeSchemas({
  schemas: [
    transformedChirpSchema,
    authorSchema,
    linkTypeDefs,
  ],
  resolvers: {
    User: {
      chirps: {
        fragment: `... on User { id }`,
        resolve(user, args, context, info) {
          return info.mergeInfo.delegateToSchema({
            schema: chirpSchema,
            operation: 'query',
            fieldName: 'chirpsByAuthorId',
            args: {
              // author is an input type in the remote schema
              // with a structure similar to user
              author: user,
            },
            context,
            info,
          });
        },
      },
    },
  },
});
I don't know your exact case, but don't forget about schema transforms when working with remote schemas.