Currently we have a single Aurora PostgreSQL DB instance that we are interacting with, but I would like to add one more DB instance so that we can read from one database and write to the other (existing) one. We are using Hanami v1.3 and the project is in Ruby.
I am trying to find documentation/resources on how to implement this. Is it even possible?
This is not possible in Hanami v1.3; check out the following link:
https://github.com/hanami/hanami/issues/1028
In Hanami 2 (already in beta 🥳) it is possible through rom-rb (you will love working with it). I would not recommend starting a new project with Hanami 1.3; I would definitely go with 2.0.
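Until you can move to Hanami 2 / rom-rb, the read/write split itself is a small pattern you can implement around whatever adapter you already use. This is a minimal sketch in plain Ruby, not Hanami or rom-rb API; the class and connection names are made up for illustration:

```ruby
# Minimal sketch of read/write splitting: route reads to the replica
# connection and writes to the primary. The connection objects here are
# stand-ins for real adapter handles (e.g. Sequel/pg connections).
class SplitDatabase
  def initialize(primary:, replica:)
    @primary = primary   # the existing writable instance
    @replica = replica   # the new read-only instance
  end

  # Yield the read-only connection for queries.
  def read
    yield @replica
  end

  # Yield the primary connection for inserts/updates/deletes.
  def write
    yield @primary
  end
end

# Usage with stand-in connections:
db = SplitDatabase.new(primary: :writer_conn, replica: :reader_conn)
db.read  { |conn| conn }   # => :reader_conn
db.write { |conn| conn }   # => :writer_conn
```

In rom-rb proper you would register the two instances as separate gateways and point read-side and write-side relations at them, which removes the need for hand-rolled routing like this.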
Related
I've got a local Strapi project set up with SQLite. Sadly, I didn't think ahead and realize that I would need to use Postgres to deploy to Heroku later.
After struggling to deploy the project using SQLite, I decided to create a new project using Postgres and successfully deployed it to Heroku. Now, in the local project, I've already set up content types, pages and everything. Instead of having to recreate what I have done locally, how do I copy it over to the new project on Heroku, including the database (SQLite --> Postgres)?
Has anyone done this before or maybe could point me to the right direction?
thank you in advance!
According to this:
https://github.com/strapi/strapi/issues/205#issuecomment-490813115
Database migration (content types and relations) is no longer an issue, but moving existing data entries from one database to another is.
To change the database provider, I suppose you just need to edit config/environments/**/database.json according to your Postgres setup.
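For reference, a Strapi v3-style database.json pointing at Postgres looks roughly like this (check the docs for your exact version; every value below is a placeholder):

```json
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "bookshelf",
      "settings": {
        "client": "postgres",
        "host": "127.0.0.1",
        "port": 5432,
        "database": "strapi",
        "username": "strapi",
        "password": "strapi"
      },
      "options": {}
    }
  }
}
```

On Heroku you would read these settings from the DATABASE_URL environment variable rather than hard-coding them.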
I faced the same issue; my solution was to use a new project to generate a core for PostgreSQL and then run the existing codebase against the freshly created PostgreSQL database:
Run npx create-strapi-app my-project and choose custom -> PostgreSQL (Link)
Then manually create the collections that exist in SQLite, without fields
Run your old codebase with the new database config pointing to PostgreSQL (that will create the fields you have in your data models)
It requires a little bit of manual work, but it worked for me. Good luck!
I am fairly new to Ruby on Rails. I am using it to create a web API application and was wondering: instead of creating a schema based on my model, can I do the reverse? E.g. is it possible to create models that fit an already existing schema? Something like that would be fairly easy in the Java world using JPA, but I am not so sure about Rails and its DSL for databases.
Do I have to manually change the migration files in this case? If yes, is there an easy/recommended way to do this?
Thanks
The only thing you have to do is add ActiveRecord models named after your tables.
https://guides.rubyonrails.org/active_record_basics.html
And yes, there is a way to reverse engineer it.
There is a pretty good article about that: https://codeburst.io/how-to-build-a-rails-app-on-top-of-an-existing-database-baa3fe6384a0
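When the legacy table names don't match Rails conventions, you point a model at an existing table by overriding those conventions. The class and column names below are hypothetical, and the small function is only a simplified stand-in for ActiveRecord's real naming logic (which also handles irregular plurals):

```ruby
# In a Rails app you'd point a model at an existing (legacy) table like:
#
#   class LegacyUser < ApplicationRecord
#     self.table_name  = "tbl_users"    # hypothetical legacy names
#     self.primary_key = "user_id"
#   end
#
# The override is needed because ActiveRecord derives the table name
# from the class name. A tiny stand-in for that convention:
def tableize(class_name)
  snake = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  snake.end_with?("s") ? snake : "#{snake}s"
end

tableize("UserAccount")  # => "user_accounts"
tableize("Order")        # => "orders"
```

So a table literally named user_accounts needs no override at all; anything else needs self.table_name.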
I just started with Elasticsearch and I wish to automate migrations between code versions.
For RDBMS I use tools like phinx that apply changes to the DB.
For example:
Create a migration file with up() & down() methods.
Write the commands to apply (for example, adding an index).
After tests etc., run ./phinx migrate.
Is there a migration tool like this?
If not, is there another acceptable approach to handle changes to the cluster?
I have never heard of a tool like that specifically for ES indexes.
If your goal is to update the internal representation of your data, I think the best approach is to just create a script that will:
Find the affected documents
Read the contents
Modify them
Reindex them in a new doc
Then you can delete the old document.
Updating a doc won't be more efficient than reindexing: since documents are immutable, an update is just a get + reindex (https://www.elastic.co/guide/en/elasticsearch/guide/current/update-doc.html).
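The read → modify → reindex loop above is mostly plumbing around the bulk API. Here is a hedged sketch of the transform step in Ruby, using only the standard library; it assumes hits shaped like the "hits" array of a search/scroll response, and the index and field names are made up:

```ruby
require "json"

# Sketch: turn search hits into newline-delimited bulk-API lines that
# reindex each transformed document into a new index. The block receives
# each document's _source and returns the modified document.
def bulk_reindex_lines(hits, new_index)
  hits.flat_map { |hit|
    doc = yield(hit["_source"])            # modify the document
    [JSON.generate(index: { _index: new_index, _id: hit["_id"] }),
     JSON.generate(doc)]
  }.join("\n") + "\n"                      # a bulk body must end in \n
end

# Usage: rename a field while reindexing into a hypothetical "items_v2".
hits = [{ "_id" => "1", "_source" => { "title" => "old" } }]
body = bulk_reindex_lines(hits, "items_v2") { |src| { "name" => src["title"] } }
# body can then be POSTed to /_bulk with Content-Type: application/x-ndjson
```

Deleting the old index (or the old documents) is a separate step you run only after verifying the new index.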
Flyway with code-based (e.g. Java) migrations can be used to work with any data store. Similar to migrating relational DB, but requires a bit more work since you need to implement calls to ElasticSearch with the relevant commands (e.g. create index).
https://flywaydb.org/documentation/concepts/migrations.html#java-based-migrations
Coming from an RDBMS background, a migration tool is very handy when you are working on a big project with a lot of migration files. I was also facing the same issue with Elasticsearch: there is currently no stable migration tool in the community.
I have created a migration tool, which will be handy if you come from a Python background: https://pypi.org/project/chalan/. The core idea is taken from the Alembic migration tool for SQLAlchemy.
Usage is simple
pip install chalan
Then for upgrade you have to use
chalan upgrade
And for downgrade you have to use
chalan downgrade
Please let me know if you face any issues with this tool and feel free to suggest some improvements if any.
For the source code, please refer to the GitHub link - https://github.com/anandtripathi5/chalan
I just started to discover the world of Neo4j and stumbled right into an issue I have trouble grasping.
I installed Neo4j and started it via bin/neo4j start.
Next, after installing JRuby and the neo4j gem, I wrote a Ruby script that creates new nodes. Everything was fine up to this point.
How to get started is described here:
http://wiki.neo4j.org/content/Getting_Started_With_Ruby
My problem: when the server is started and I try to create nodes, Neo4j responds that the database is locked. When I stop the server, the nodes get created.
I am used to relational databases, so I don't understand this behaviour.
When I check the Server Info via the Neo4j Webadmin Tool (http://localhost:7474/webadmin) the ReadOnly flag is set to false.
It seems to me that the Neo4j approach may be different from a relational DB, meaning the server could have a slightly different purpose than a DB server.
Thanks for any advices,
Tobias
The JRuby bindings will start their own embedded Neo4j instance, meaning that you will end up having two database instances trying to use the same files.
The approach is somewhat different, but relational databases use it as well, for example Apache Derby. As with Neo4j, you can either embed it in your application (that is what the JRuby bindings are doing in your case) or run it as a standalone server.
So just don't start a server yourself, that should solve the problem.
Is it possible to create a new user in sonar without using the web interface?
I need to write a script that inserts the same users for some tools, including sonar.
There are three ways you can do this:
Write directly to the database (there is a simple table called users).
Use the LDAP plugin, if you specify sonar.authenticator.createUsers: true in sonar.properties, it will create the users in the sonar database automatically the first time they authenticate.
Write a Java application that depends on the Sonar plugin API; you can then use constructor injection to get a Sonar Hibernate session and persist the user you want. See here.
Since SonarQube version 3.6, there is support for user management in the web service API:
https://sonarqube.com/web_api/api/users
http://docs.sonarqube.org/display/DEV/Web+API
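Since the question mentions scripting, here is a hedged sketch of calling that endpoint (POST api/users/create) from Ruby with only the standard library. The host, credentials and user details are placeholders, and the request is left commented out so nothing is sent without a running server:

```ruby
require "net/http"
require "uri"

# Sketch: create a user via SonarQube's web service API (>= 3.6).
uri = URI("http://localhost:9000/api/users/create")
req = Net::HTTP::Post.new(uri)
req.basic_auth("admin", "admin")      # or an admin auth token
req.set_form_data(
  "login"    => "jdoe",
  "name"     => "John Doe",
  "password" => "secret"
)

# Uncomment to actually send the request to a running server:
# res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
# puts res.code
```

The same call works from curl or any HTTP client, so it is easy to fold into the provisioning script that creates users in your other tools.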
The web service API does not seem to support user management. Anything's possible, but it doesn't look like this is offered directly via Sonar.
You could probably use some web automation library (webbrowser, webunit, watir, twill) to do it through the running server; it might even be possible to just use something like 'curl' by looking carefully at the page source for the users/create form.
Or, if you want to go straight to the database, you could try to pull out the user creation functionality from the code and mess with the sonar.users table directly.
There is the LDAP Plugin, which would take care of authentication, but it still requires you to create the users in Sonar, so that wouldn't solve your problem.