How to set up ArangoDB replication via the ArangoDB Go driver

I need to set up a simple replication schema with a secondary database. I figured out that using arangosh I can set it up with the following commands:
db._useDatabase("myDB");
require("@arangodb/replication").setupReplication({
  endpoint: "tcp://main-server:8529",
  username: "user",
  password: "pass",
  verbose: false,
  includeSystem: false,
  incremental: true,
  autoResync: false,
  autoStart: true,
  restrictType: "include",
  restrictCollections: [ "Products" ]
});
This setup, however, does not seem to persist: if the connection goes down or the server restarts, it disappears.
So, I would like to set up some monitoring and re-establishment of the replication in my Go program.
I searched both the ArangoDB manual pages and the Go driver documentation, but I could not find anything that would let me run the above setup in Go using the driver.
Additionally, I couldn't find a way to interface with arangosh to run the JS code above and get the results. Is that possible somehow using the Go driver?

I accidentally found a solution to this.
The Go driver does not provide this functionality, but ArangoDB has a fairly simple HTTP-based API that gives access to all functions and features of the database engine.
Here's the link to the documentation I used: https://www.arangodb.com/docs/3.8/http/index.html
(I'm using version 3.8 because after that the type of replication I needed was no longer part of the community edition).
Setting up replication requires just two steps:
A PUT request to /_db/yourDBname/_api/replication/applier-config with a JSON payload:
{
  "endpoint": "tcp://serverIP:8529",
  "database": "yourDBname",
  "username": "root",
  "password": "password",
  "autoResync": true,
  "autoStart": true
}
And another PUT request, to /_db/yourDBname/_api/replication/applier-start, to get the replication actually started. This one doesn't need any payload.
To see how things are going, you can do a GET request to /_db/yourDBname/_api/replication/applier-state.
All these requests need a JWT token that you can get with a POST request to /_open/auth with a payload of:
{
  "username": "user",
  "password": "passwd"
}
The token you receive will need to be included in the HTTP header as a bearer token. Pretty simple.
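For completeness, here is a minimal, untested Go sketch of that whole flow (auth, applier-config, applier-start, applier-state) using only the standard library, so the Go driver isn't needed at all. The host names, database name, and credentials are placeholders taken from this thread; adjust them to your deployment, and note that error handling is kept to a bare minimum.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// Placeholder: the server that should apply the changes (the replica).
const replicaURL = "http://replica-server:8529"

// getJWT fetches a bearer token from /_open/auth.
func getJWT(username, password string) (string, error) {
	payload, _ := json.Marshal(map[string]string{"username": username, "password": password})
	resp, err := http.Post(replicaURL+"/_open/auth", "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		JWT string `json:"jwt"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.JWT, nil
}

// call sends an authenticated request to the HTTP API and returns the raw body.
func call(method, path, token string, payload any) ([]byte, error) {
	var body io.Reader
	if payload != nil {
		b, err := json.Marshal(payload)
		if err != nil {
			return nil, err
		}
		body = bytes.NewReader(b)
	}
	req, err := http.NewRequest(method, replicaURL+path, body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	token, err := getJWT("user", "passwd")
	if err != nil {
		panic(err)
	}

	// 1. Configure the applier (same payload as the PUT request above).
	cfg := map[string]any{
		"endpoint":   "tcp://main-server:8529",
		"database":   "myDB",
		"username":   "root",
		"password":   "password",
		"autoResync": true,
		"autoStart":  true,
	}
	if _, err := call(http.MethodPut, "/_db/myDB/_api/replication/applier-config", token, cfg); err != nil {
		panic(err)
	}

	// 2. Start the applier. No payload needed.
	if _, err := call(http.MethodPut, "/_db/myDB/_api/replication/applier-start", token, nil); err != nil {
		panic(err)
	}

	// 3. Check the applier state, e.g. from a periodic monitoring goroutine.
	state, err := call(http.MethodGet, "/_db/myDB/_api/replication/applier-state", token, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(state))
}

The same call helper can be run on a timer to monitor the applier and re-issue the config/start requests if the state shows the applier has stopped.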

Related

Exporting / finding the GraphQL schema with Strapi and the GraphQL plugin

I’m new to Strapi and to GraphQL.
I successfully created a website that uses Apollo to query data from my Strapi website.
So functionally I have everything I need.
For my DX I'm wondering:
1. Since I installed the GraphQL IntelliJ plugin: where do I find the schemas for it? I read something about remote schema detection; is that supported with the Strapi GraphQL plugin? Where can I read about it? Otherwise, how can I export GraphQL schema files from Strapi?
2. If I get 1) to work: will TypeScript types work out of the box? Would I use one of the GraphQL-schema-to-TS converters out there? It feels like there might be something working automatically, but I can't tell until I get 1) to work.
First, you asked two separate questions and should therefore split them into two separate threads.
To answer your first question, here is how you can utilise the GraphQL IntelliJ plugin:
You need to create a .graphqlconfig file. In WebStorm, select your project folder and go to 'File' -> 'New' -> 'GraphQL Configuration File'.
Change the endpoint URL to your Strapi endpoint.
Visit the GraphQL tool window, double-click your endpoint and select 'Get GraphQL Schema from Endpoint (introspection)'. This will retrieve the schema from Strapi and save it to schema.graphql.
Now you can run queries against your endpoint, e.g. create a new scratch file scratch.graphql and run queries from there, or try to figure out how to solve your second question ;)
Thank you for the answer! This was helpful!
Further to this, one query: typically, is .graphqlconfig committed to the git repo and scratch.graphql ignored?
In addition, for others looking for a similar solution: you can use values from .env using the format below:
{
  "name": "Strapi GraphQL Schema",
  "schemaPath": "schema.graphql",
  "extensions": {
    "endpoints": {
      "Default GraphQL Endpoint": {
        "url": "${env:GRAPHQL_HOST}/graphql",
        "headers": {
          "Authorization": "Bearer ${env:GRAPHQL_TOKEN}",
          "user-agent": "JS GraphQL"
        },
        "introspect": false
      }
    }
  }
}

Using Spring Boot, Node.js and Vue CLI - session not saved

I have used Spring Boot for web development before, which handles sessions, the database, HTML, and other functions.
Recently I decided to use Vue, replacing the HTML part of Spring Boot.
Now I use the following two approaches for development, but I am troubled by problems such as cross-domain requests and sessions not being kept.
1. Use Node.js as the entry point of the website and use vue.config.js to configure the proxy. This way the URLs resolve correctly, but the login status cannot be saved. For example, if I submit a login form, the backend returns a success message, but if I refresh the page it shows 'not logged in'. The devServer configuration is as follows:
devServer: {
  open: false,
  host: 'localhost',
  port: 8080,
  proxy: {
    '/api': {
      target: 'http://192.168.9.211/',
      changeOrigin: true,
      ws: true,
      pathRewrite: {
        '^/api': ''
      },
      cookieDomainRewrite: 'localhost',
    }
  }
}
2. Use Nginx as a gateway, forwarding HTML requests to Node.js and data requests (such as JSON) to Spring Boot. I haven't had time to experiment with this idea because it may still have session-saving issues. Since Nginx forwards HTTP requests to both Node.js and Spring Boot, would the saved session come from Node.js or from Spring Boot?
So I'm asking whether anyone has done a similar project and can share their experience.

Solution for caching an API with specific behaviour

I am developing a REST API that serves a game. Every three minutes a job runs on the server updating an important piece of information for this game. After this job runs, I need to invalidate the cache and create a new one with the recent information.
I implemented Memcached in my application, but a senior developer said it would be important to have another cache. He suggested Varnish, but I really don't know if it would fit my logic.
Do you have any suggestions of how I could achieve this?
Varnish will work just fine in your case. Of course, Memcached is used for caching transient data whereas Varnish is a full page cache, so it's great for reducing the load on your backend application (whichever language it's powered with, PHP or anything).
You will not need to make any change to your application to cache things with Varnish properly (however you could go that route as well, and adjust your app to send the proper caching headers). Simply develop the VCL (Varnish Configuration Language) file with instructions on your cache policy.
Do not copy and paste complete VCL files you find online. Add the smallest snippets possible, understand how things work, and Varnish will not disappoint you. The important points are:
Ensure that your cache varies by the API token (if you use one for API authentication). You would implement this in the vcl_hash subroutine.
Integrate cache clearing into the job that updates the information: the Varnish cache can be cleared with a PURGE HTTP request (again, you'd need to develop the necessary VCL code for it, inside the vcl_recv subroutine); issuing such a request from the job is sketched below.
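The language of the job that updates the game data isn't stated in the question, so purely as an illustration, here is a minimal Go sketch of issuing a PURGE request right after the update. The Varnish host and URL are placeholders, and this only works if vcl_recv has been written to accept PURGE.

package main

import (
	"fmt"
	"net/http"
)

// purge sends a PURGE request for the given URL. PURGE is not a standard
// HTTP method; Varnish only honours it if vcl_recv handles it.
func purge(url string) error {
	req, err := http.NewRequest("PURGE", url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("purge status:", resp.Status)
	return nil
}

func main() {
	// Called right after the three-minute job has written fresh data.
	if err := purge("http://varnish-host/game/state"); err != nil {
		fmt.Println("purge failed:", err)
	}
}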
You can use Mcrouter, a Memcached protocol router, to replicate your Memcached.
This config can handle your requests:
{
  "pools": {
    "A": {
      "servers": [
        // First Memcached address:
        "memcache_1_ip:11211",
        // Second Memcached address:
        "memcache_2_ip:11211"
      ]
    }
  },
  "route": {
    "type": "OperationSelectorRoute",
    "operation_policies": {
      "add": "AllSyncRoute|Pool|A",
      "delete": "AllSyncRoute|Pool|A",
      "get": "LatestRoute|Pool|A",
      "set": "AllSyncRoute|Pool|A"
    }
  }
}
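Because Mcrouter speaks the plain Memcached protocol, the application doesn't need a special client: it just connects to the port Mcrouter listens on instead of an individual Memcached server. The question doesn't say which language the API is written in, so purely as an illustration, here is a sketch in Go using the github.com/bradfitz/gomemcache client, assuming Mcrouter listens on 127.0.0.1:5000 (both the port and the key name are made up for the example).

package main

import (
	"fmt"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	// Point the client at Mcrouter rather than a single Memcached server;
	// Mcrouter routes operations to pool A per the config above.
	mc := memcache.New("127.0.0.1:5000")

	// A set is written to all servers in pool A (AllSyncRoute).
	// Expiration of 180s matches the three-minute refresh job; adjust as needed.
	item := &memcache.Item{Key: "game:state", Value: []byte("latest"), Expiration: 180}
	if err := mc.Set(item); err != nil {
		fmt.Println("set failed:", err)
		return
	}

	// A get is routed to one of the replicas (LatestRoute).
	it, err := mc.Get("game:state")
	if err != nil {
		fmt.Println("get failed:", err)
		return
	}
	fmt.Println(string(it.Value))
}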

Cache internal routes with sw-precache

I'm creating a SPA using vanilla JavaScript and currently setting up sw-precache to handle the caching of resources. The service worker is generated as part of a gulp build and installed successfully. When I navigate to the root url (http://127.0.0.1:8080/) whilst offline the app shell displays, illustrating that resources are indeed cached.
I'm now attempting to get the SW to handle internal routing without failing. When navigating to http://127.0.0.1:8080/dashboard_index whilst offline I get the message 'Site can't be reached'.
The app handles this routing on the client side via a series of event listeners on the users actions or, in the case of using the back button, the url. When accessing one of these urls, no calls to the server should be made. As such, the service worker should allow these links to 'fall through' to the client side code.
I've tried a few things and expected this Q/A to solve the problem. I've included the current state of the generate-service-worker gulp task, and with this setup I'd expect to be able to access /dashboard_index offline. Once this is working I can adapt the solution to cover other routes.
Any help much appreciated.
gulp.task('generate-service-worker', function(callback) {
  var rootDir = './public';
  swPrecache.write(path.join(rootDir, 'sw.js'), {
    staticFileGlobs: [rootDir + '/*/*.{js,html,png,jpg,gif,svg}',
                      rootDir + '/*.{js,html,png,jpg,gif,json}'],
    stripPrefix: rootDir,
    navigateFallback: '/',
    navigateFallbackWhitelist: [/\/dashboard_index/],
    runtimeCaching: [{
      urlPattern: /^http:\/\/127\.0\.0\.1:8080\/getAllData/, // Req returns all data the app needs
      handler: 'networkFirst'
    }],
    verbose: true
  }, callback);
});
Update
The code for the application can be found here.
Removing the option navigateFallbackWhitelist does not change the result.
Navigating to /dashboard_index whilst offline prints the following to the console.
GET http://127.0.0.1:8080/dashboard_index net::ERR_CONNECTION_REFUSED
sw.js:1 An unknown error occurred when fetching the script.
http://127.0.0.1:8080/sw.js Failed to load resource: net::ERR_CONNECTION_REFUSED
The same An unknown error occurred when fetching the script. is also duplicated in the 'application > service workers' tab of chrome debug tools.
It's also noted that the runtimeCaching option is not caching the JSON response returned from that route.
For the record, in case anyone else runs into this, I believe this answer from the comments should address the issue:
Can you switch from navigateFallback: '/' to navigateFallback: '/index.html'? You don't have an entry for '/' in your list of precached resources, but you do have an entry for '/index.html'. There's some logic in place to automatically treat '/' and '/index.html' as being equivalent, but that doesn't apply to what navigateFallback is doing...

Raven-js errors not getting "site" property in Sentry, while Python Raven errors are?

I have three Python clients and a JavaScript client (all Raven) connecting to a single Sentry server. I have a unique site set for each client. However, while errors generated by the three Python clients have the site properly set in the Sentry interface, errors generated by the JavaScript client have no site set.
My raven-js setup (pardon my Django):
require(['lib/raven-1.0.7'], function(Raven){
  Raven.config('{% sentry_public_dsn %}', {
    // escapere is a custom tag, simply wraps python's re.escape
    includePaths: [new RegExp('{{ request.build_absolute_uri|escapere }}')],
    site: 'AJAX'
  }).install();
  Raven.setUser({
    email: "{{ user.email|escapejs }}",
    id: "{{ user.id|escapejs }}"
  });
});
I did a little bit of digging in the sentry code (using the highly scientific scatter-some-logging-statements-around method), and I'm convinced that the "site" parameter is, indeed, being sent to the sentry API, but for some reason it's getting lost between there and creating the actual event Group.
It seems Sentry is moving away from the site parameter in favor of tags. Upgrading to the latest master from the raven-js repo and changing
site: 'AJAX'
to
tags: {site: 'AJAX'}
makes things behave as expected.
