How do I connect to a different db depending on the request.host value?
Using Sinatra and MongoDB with Mongoid.
I need to read a Sinatra application's menu, data ... from different databases. I want to deploy it in only one place and serve the specific pages depending on the request.host (subdomain) value.
You're probably better off storing all your data in one database, marking/tagging/categorizing it depending on the subdomain you're on.
If you already set up your Mongoid connection manually, you could do something like this:
connection = Mongo::Connection.new
Mongoid.database = connection.db(request.host)
But still, I think you're better off with one database.
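If you do want a database per host, a minimal sketch of the idea in Sinatra (assuming the same old Mongoid 2.x / mongo 1.x API as above; the Menu model is hypothetical):

require 'sinatra'
require 'mongoid'

connection = Mongo::Connection.new

# Switch the active database on every request, keyed on the host/subdomain.
before do
  Mongoid.database = connection.db(request.host)
end

get '/menu' do
  # All Mongoid queries in this request now hit the per-host database.
  Menu.all.to_json
end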
Related
For a multi-tenant application I need to create, I want to evaluate how convenient Slick is for creating queries against different Postgres schemas (not to be confused with schema tables).
I'm having a hard time finding out how to configure TableQuery to dynamically use the schema provided by the user. TableQuery[Users].result should return different datasets depending on whether I'm querying tenant A or tenant B.
Is it possible with current Slick versions?
TableQuery itself does not need to be configured, as its methods only return queries and actions. Actions are run by a DatabaseDef instance, and that is what needs to be configured to access different schemas/databases/etc. The official Slick documentation describes a simple way to create a DatabaseDef instance, which by default uses the Typesafe Config library:
val db = Database.forConfig("mydb")
where "mydb" specifies a key in a property file Typesafe Config is looking at. You can create and manipulate Config instances programmatically as well, and create db instances from those. I suspect you will have to do something along the lines of creating a new Config instance (there is the convenient withValue() method to copy a Config and replace a config value at the specified key) and use that to create a new db instance for each new schema you are interested in querying.
I'm planning to distribute load from my database by making a copy on several servers (each server will have the same tables but with different company data).
In order to do this, I will need to programmatically change the Datastore associated with my Data Views. For other tables I'm using the "Before Connect" property.
Is it possible to handle this in Genexus?
Thanks,
Yes, you can use the dbConnection Data Type.
Just create a variable based on this data type, and use its methods and properties to set it up when you need it to be changed...
I have a simple web app UI which stores certain dataset parameters (for simplicity, assume they are all data tables in a single Redshift database, though the schema/table names can vary; Redshift is in AWS). Tableau is installed on an EC2 instance in the same AWS account.
I am trying to determine an automated way of passing 'parameters' as a data source (i.e. within the connection string inside Tableau on EC2/AWS) rather than manually creating data source connections and inputting the various customer requests.
The flow would be: say 50 users select various parameters in the UI (for simplicity, suppose the parameters are stored as a JSON file in AWS) -> the parameters are sent to Tableau and data sources are created -> a connection is established within Tableau without the customer 'seeing' anything on the back end -> the customer can play with the data in Tableau and create tables and charts accordingly.
How may I do this at least through a batch job or cloud formation setup? A "hacky" solution is fine.
Bonus: if the above is doable in real-time across multiple users that would be awesome.
** I am open to using other dashboard UI tools which solve this problem e.g. QuickSight **
After installing Tableau on EC2, I am having trouble finding an article/documentation on how to pass parameters into the connection string itself, or even how to parameterise it manually.
An example could be: customer1 selects "public_schema.dataset_currentdata" and "public_schema.dataset_yesterday", and customer2 selects "other_schema.dataset_currentdata", all of which exist in a single database.
3 data sources should be generated (one for each above), but only the data sources a customer selected should be open to that customer, i.e. customer2 should only see the connection for other_schema.dataset_currentdata.
One hack I was considering is to spin up a CloudFormation template with Tableau installed for a customer when they make a request, create the connection accordingly, and delete the stack when they are done. I am mainly unsure how I would get the connection established, i.e. how to pass in the parameters. I am also not sure that spinning up 50 EC2 instances is wise. :D
An issue I have seen so far is that creating a manual extract limits the number of rows, so I think I need a live connection per customer request. Hence I am trying to get around this issue.
You can do this with a combination of a basic embed and applying filters. This would load the Tableau workbook. Then you would apply a filter based on whatever values your user selects from the JSON.
The final missing part is that you would use a parameter instead of a filter and pass those values to the database via initial sql.
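A rough sketch with the Tableau JavaScript API v2 (the server URL, view path, and the "Schema" parameter name are all assumptions; the parameter would then feed your initial SQL):

// Assumes the Tableau JS API v2 script is already loaded on the page.
declare const tableau: any;

const container = document.getElementById("vizContainer");
const url = "https://my-tableau-server/views/MyWorkbook/MyView";

const viz = new tableau.Viz(container, url, {
  hideTabs: true,
  onFirstInteractive: () => {
    // Push the user's JSON selection into a workbook parameter; the
    // initial SQL picks the schema up from that parameter.
    viz.getWorkbook().changeParameterValueAsync("Schema", "other_schema");
  },
});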
I have integrated Jackrabbit with an Oracle database and I am storing the data using Jackrabbit. If I don't want to retrieve the data using Jackrabbit, how can I get at the data? In the database the data is stored as a blob type.
The way Jackrabbit stores the data in the DB is an implementation detail, and it does not magically map into a "nice" DB schema, if that's what you mean. (The hierarchical nature and all the JCR features make this impossible.) It's a bit like having a Unix file system and then asking how to read the low-level inodes etc. from the file system implementation - you really should not.
Last but not least, note that while Jackrabbit is running, nothing else (except for a Jackrabbit cluster setup) must write to the DB (the tables used by Jackrabbit), as this will easily lead to data corruption.
As @TedTrippin already mentioned above, an ORM framework would make things much easier. But if you really want to do it manually in Oracle, the approach would be:
Study the code of the OCM (http://jackrabbit.apache.org/jcr/object-content-mapping.html), then get the content from Oracle according to the logic of associations and relations, probably not in one but in multiple queries per document; possibly with user-defined functions, which are supported in Oracle and might make things easier.
It would be interesting to know the background of your question. You tagged it with "Spring" and "CMS". I don't see any reason why you would want to access the data directly from Oracle; it's tedious. In case you want to provide an API for the content to an external system, or in case you have lost a CMS that was once in front of the Jackrabbit repo and was just using it as a content store, you could still use such an ORM/OCM framework standalone to make it easier to access the data.
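For comparison, reading the content through the JCR API itself is simple; a minimal sketch (the credentials and the /content/mydocument nt:file path are assumptions):

import javax.jcr.*;
import org.apache.jackrabbit.core.TransientRepository;

public class ReadContent {
    public static void main(String[] args) throws Exception {
        Repository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // Navigate the hierarchy instead of decoding blobs in Oracle.
            Node file = session.getNode("/content/mydocument");
            Node content = file.getNode("jcr:content");
            System.out.println(content.getProperty("jcr:data").getString());
        } finally {
            session.logout();
        }
    }
}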
We have been running one server for the past few months; it contains all the files and SQL data and is running as our server. We recently bought 2 more servers to use for replication, because our database load was so high.
We are going to use a simple master-slave setup using transactional replication in MSSQL; however, the methods we use to access LINQ entities must be changed.
All functions that update need to connect to the master, but all the ones that select need to query the slave.
How can we edit the connection string based on the function that needs to be done?
Any help would be appreciated.
Thanks
The simplest approach would be:
Create two connection strings in the web.config <connectionStrings> section, one for read and one for write.
When querying data, pass the read connection string name to the context's constructor; when updating, pass the write connection string name.
If you are using LINQ to entities, you can pass the connection string to the instance of the context i.e ModelContext ctx = new ModelContext("[edmx format connectionstring]");
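For example (a sketch; the ModelContext name, the Users entity set, and the "ReadEntities"/"WriteEntities" connection string names are assumptions):

// SELECTs go to the slave via the read connection string.
using (var readCtx = new ModelContext("name=ReadEntities"))
{
    var users = readCtx.Users.ToList();
}

// Inserts/updates/deletes go to the master via the write connection string.
using (var writeCtx = new ModelContext("name=WriteEntities"))
{
    var user = writeCtx.Users.First(u => u.Id == 42);
    user.Name = "Updated";
    writeCtx.SaveChanges();
}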