0xDBE: How to disable schema scan on startup - DataGrip

I'm trying to use 0xDBE to connect to a huge database.
The problem is that on startup it scans the DB schema, locking the database and preventing access from outside. A full scan takes a long time (more than an hour), so it's absolutely impossible to do on the production database.
I managed to connect to the dev database (at night, when there was no load); after that it caches the data somewhere and works really fast.
Is there any option to disable this scan, or to make it less aggressive?
Where is this data stored, and how frequently is it updated?
Is it possible to scan everything once, write it to a file, and import it on other developers' machines?

I got a reply to my question on the dev forum from Andrey Dernov; I'll summarize it here:
Regarding the slowness of synchronization, there is a related issue in YouTrack, and the IntelliJ team said they have implemented a new DB introspection that will improve performance in this regard.
All caches are located in the .idea folder under ~/.0xDBE10/config/projects/<your_project_name>.
It is possible to share the dataSources.ids and dataSources.xml files from there to speed up the process for other developers on the team.
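For what it's worth, "sharing" those files just means copying them to the same location on each developer's machine. A minimal sketch, assuming a hypothetical project name and a network share to stage them on:

import java.nio.file.*;

public class ShareDataSourceCaches {
    public static void main(String[] args) throws Exception {
        // Hypothetical paths: adjust the IDE version, project name and share location for your setup.
        Path projectConfig = Paths.get(System.getProperty("user.home"),
                ".0xDBE10", "config", "projects", "my_project", ".idea");
        Path teamShare = Paths.get("/mnt/team-share/0xdbe-caches");

        for (String name : new String[] {"dataSources.ids", "dataSources.xml"}) {
            // Copy the cached introspection data so teammates can drop it into their own project folder.
            Files.copy(projectConfig.resolve(name), teamShare.resolve(name),
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
}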

Related

Realtime one-way mirroring of a SQLite database

I am dealing with a 3rd-party application that runs a SQLite 3 database with WAL (Write-Ahead Logging) on a local computer, and I'm looking to mirror that database (read-only, one-way mirroring) to another system. The challenge is that I'm running in a separate process, which seems to complicate things somewhat.
The database is being created and opened with the normal locking mode, so there's no problem reading it from another process, but I'm trying to either find an existing implementation or get some pointers on where to get started. My understanding, based on other posts, is that the standard SQLite update hooks (such as sqlite3_update_hook) will not work out of process.
A key issue is speed: ideally I'd like to detect each update as soon as it happens and begin transmitting it. This rules out most polling options, but even if polling were acceptable, how would you detect the most recent changes?
I see two files that look promising: the actual WAL file (foo.db-wal) and the memory-mapped index file (foo.db-shm). I'm hoping those two contain the information I need to: A. detect when changes occur in the database, and B. grab just the incremental changes since the last update.
But a pointer to some existing solution would be much preferred... :-)
SymmetricDS might be the solution for you.
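If a full replication tool like SymmetricDS is more than you need, here is a rough sketch of the file-watching idea from the question, using Java's WatchService. The database directory and file name are placeholders, and the "ship the delta" step is only a comment; you would still need your own high-water mark (a rowid or timestamp column, say) to extract just the incremental changes.

import java.nio.file.*;

public class WalWatcher {
    public static void main(String[] args) throws Exception {
        Path dbDir = Paths.get("/path/to/db");   // directory containing foo.db (hypothetical)
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dbDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take();        // blocks until something in the directory changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                if (changed.toString().equals("foo.db-wal")) {
                    // The WAL grew: open the database read-only here, re-read everything past
                    // your last high-water mark (rowid, timestamp, ...), and transmit the delta.
                    System.out.println("WAL modified, new frames to mirror");
                }
            }
            key.reset();                          // re-arm the key or we stop receiving events
        }
    }
}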

How can Couchbase be used as a caching layer on top of Oracle?

I have Oracle as my main RDBMS for reads and writes, but I want to use Couchbase as a caching layer, since it has map-reduce and can be used like memcached. Any idea how I can implement that, and how to transfer and update data in the caching layer when rows in Oracle are inserted or updated?
You haven't said anything about your current performance issues.
I have seen too many applications which do not really take advantage of RDBMS/SQL features, especially if an ORM sits in between.
The proposed cure is to put another cache on top of the database and synchronize it across the cluster manually, using IP multicast (SwarmCache, for example), message queues (JMS), or nightly import jobs. That can create more problems in the end, and it increases system complexity.
So my answer to your question is: I would not do it, as long as there is room for improvement regarding your data model and/or queries.
I believe your question is about database synchronization. This can be done through a combination of DB dependencies and "read-through" features, though I am not sure whether Couchbase offers them. With a DB dependency, cached items depend on DB items, and if the DB items are updated or deleted, the corresponding dependent items are removed from the cache. At the same time, you can write a read-through handler that is executed at the server level; its main purpose is to load fresh copies of the removed items into the cache. So, basically, you write the handler once, register it with the cache server, and the cache server executes it when needed to sync new items from the DB into the cache. This reading on DB synchronization can be useful; it's based on a product called NCache.
So your question is not directly related to Couchbase; as others have stated, it's more about how you can be alerted when data changes in your Oracle instance.
One thing that is not well known is the Oracle Database Change Notification feature, which is quite cool for this:
http://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm
So you can create an application that listens for these changes and pushes the data into Couchbase.
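To make that concrete, here is a minimal sketch against the Oracle JDBC driver's Database Change Notification API. The connection details and table name are placeholders, the Couchbase upsert is left as a comment, and on the database side the connecting user needs the CHANGE NOTIFICATION privilege.

import java.sql.*;
import java.util.Properties;

import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

public class OracleToCouchbaseBridge {
    public static void main(String[] args) throws SQLException {
        OracleConnection conn = (OracleConnection) DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret");  // hypothetical connection details

        Properties props = new Properties();
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");

        DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(props);
        dcr.addListener(new DatabaseChangeListener() {
            @Override
            public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
                // Re-read the changed rows from Oracle here and upsert the fresh copies into Couchbase.
                System.out.println("Change notification: " + event);
            }
        });

        // Tie the tables we care about to the registration by running a query through it.
        try (Statement stmt = conn.createStatement()) {
            ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
            try (ResultSet rs = stmt.executeQuery("SELECT id FROM customers")) {  // hypothetical table
                while (rs.next()) { /* rows touched here are now registered */ }
            }
        }
        // Keep the process alive; notifications arrive on a driver-managed background thread.
        // Call conn.unregisterDatabaseChangeNotification(dcr) when shutting down.
    }
}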

Can/Should I disable the cache expiry when backing data store is unavailable?

I've just started out with Ehcache, and it seems pretty good so far. I'm using it in a simplistic fashion to speed up reads against a database, but I wonder whether I can also use it to keep the application up if the database is unavailable for short periods. (Update: my context is an application with high-availability modules that only read from the database.)
It seems like I could do that by disabling expiry in the event of a database read problem, and re-enabling it when a read works again.
What do you think? Is that a reasonable approach, or have I missed something? If it's a fair approach, any tips on how best to implement it would be appreciated.
Update: Ehcache supports a dynamically configurable option to set or unset the cache as 'eternal'. This seems to do what I need.
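For reference, that toggle looks roughly like this with Ehcache 2.x, assuming dynamic configuration is enabled (the default) and a cache named "readCache", which is hypothetical. A scheduled job that pings the database could drive the two calls.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class CacheExpiryToggle {
    private final Cache readCache;

    public CacheExpiryToggle(CacheManager manager) {
        this.readCache = manager.getCache("readCache");   // hypothetical cache name from ehcache.xml
    }

    // Called when a database read fails: stop evicting so stale-but-usable entries survive.
    public void onDatabaseDown() {
        readCache.getCacheConfiguration().setEternal(true);
    }

    // Called when reads work again: restore normal expiry.
    public void onDatabaseUp() {
        readCache.getCacheConfiguration().setEternal(false);
        readCache.getCacheConfiguration().setTimeToLiveSeconds(300);
    }
}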
Interesting question - usually, the answer would be "it depends".
Firstly, if you have database reliability problems, I'd invest time and energy in fixing them, rather than applying a bandaid solution.
Secondly, most applications need both reading and writing to work - it doesn't seem to make sense to keep your app up for reads only.
However, if your app has a genuine "read only" function, and there's a known and controlled reason for database down time (e.g. backups), then yes, you can use your cache to keep the application up and running while the database is down. I would do this by extending the cache periods, rather than trying to code specific edge cases. For instance, you might have a background process which checks whether the database is available and swaps in a different configuration file when there's trouble.

How do I update an expensive in-memory cache across a SharePoint farm?

We have 3 front-end servers each running multiple web applications. Each web application has an in memory cache.
Recreating the cache is very expensive (>1 min). Therefore we repopulate it using a web service call to each web application on each front-end server every 5 minutes.
The main problem with this setup is maintaining the target list for updating and the cost of creating the cache several times every few minutes.
We are considering using AppFabric or something similar but I am unsure how time consuming it is to get up and running. Also we really need the easiest solution.
How would you update an expensive in memory cache across multiple front-end servers?
The problem with memory caching is that it's unique to the server. I'm going with the idea that this is why you want to use AppFabric. I'm also assuming that you're re-creating the cache every few minutes to keep the in memory caches in sync across all servers. With all this work, I can well appreciate that caching is expensive for you.
It sounds like you're doing a lot of work that probably isn't necessary. This article has some detail about the caching mechanisms available within SharePoint. You may be interested in the output cache discussed near the top of the article. You may also want to read the linked TechNet article and the linked article called "Custom Caching Overview".
The only SharePoint way to do that is to use the Service Application infrastructure. The only problem is that it takes some time to understand how it works, and it is too complicated to build from scratch. You might consider downloading one of the existing sample applications and renaming the classes/GUIDs to match your naming conventions. I used this one: http://www.parago.de/2011/09/paragoservices-a-sharepoint-2010-service-application-sample/. That way you can have a single cache shared by the N front-end servers.

Speeding up integration tests that rely on an Oracle DB

We have an Oracle database server specifically for our unit tests to run against. Is there a way to tune Oracle for this kind of purpose, given that the data is constantly being thrown away (it's just test data)? I wonder if there is a way to run an Oracle database in memory, and perhaps connect without the TCP/IP stack, to speed up these tests.
Any suggestions?
The answer is likely yes, but changing the database environment from the production configuration to an integration configuration during testing introduces the risk that the testing will give false results.
If the hangup is the database cleanup/reset stage, and you have Enterprise Edition, look into FLASHBACK DATABASE as (potentially) a quicker way to reset the database to a fixed point.
At worst, you don't need to waste time building the cleanup/reset scripts.
The TCP/IP stack is unlikely to be adding much to your overhead. You could, however, run the Oracle instance on the same server as your test cases, and access via ORACLE_SID (which I believe uses OS-level inter-process communication).
Before examining changes to Oracle, however, I'd look at what tests are getting run on your continuous integration server. If you haven't done it already, this means splitting the integration tests (which require a back end) from the unit tests (which don't), and running them on different schedules. There's rarely a reason to run a full suite of integration tests for every change.
Next: are you using any sort of object-relational mapper to access your database? If yes, and you're not relying on any particular Oracle quirks, you could replace Oracle with an in-memory database (you don't say what language you're using, so this may or may not be an option).
And finally, consider using the Oracle import/export facility to completely rebuild your database for each integration test run. It's probably quicker, and definitely more stable than trying to delete whatever rows you created (this assumes that your integration tests start with pre-populated data; if not, just drop and rebuild the schema).
There are a lot of things you could do to the Oracle instance for the scenario you mention, like using the correct locking strategy/isolation level, disabling all kinds of undo logs, etc. You should consult a good Oracle tuning book for that (I like the one by Mark Gurry, but I'm not sure how up to date it is).
There is one other thing that might be important: if you constantly add and delete data from your db (I mean "totally empty the db"), make sure you set up the storage parameters for each table correctly. If you have the space, consider assigning an initial extent equal to the maximum size for your test cases (either in the db creation script, or define it once and then just truncate the tables with the REUSE STORAGE option). Then, when you run the test cases, the db doesn't have to allocate additional storage space.
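To illustrate the truncate-with-reuse-storage idea, here is a small JDBC helper that empties the test tables between runs while keeping their allocated extents. The table list is hypothetical, and the truncation order matters if foreign keys are involved.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class TestDbReset {
    // Hypothetical set of tables the integration tests populate; order child tables before parents.
    private static final List<String> TEST_TABLES = List.of("ORDER_LINES", "ORDERS", "CUSTOMERS");

    // Empties the test tables but keeps their already-allocated extents.
    public static void reset(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            for (String table : TEST_TABLES) {
                // REUSE STORAGE means the next run does not pay for re-extending the segments.
                stmt.executeUpdate("TRUNCATE TABLE " + table + " REUSE STORAGE");
            }
        }
    }
}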
I faced a similar issue and was able to speed up the unit tests by moving the redo logs and the undo and users tablespaces to a RAM disk. There is a free version of RAM disk software available for you to try out. Commercial versions that back up the files periodically are also very cheap.
In my case the unit tests only verify data integrity, so this strategy is low risk even though it does not replicate the production setup. We have a separate scale and performance test strategy.
You can incorporate rebuilding of your table indexes as part of the test run. Choose to take the time hit of rebuilding your indexes either before the run or after it. You'll eat up the same total amount of time, but you'll "feel" it less if you rebuild them after the test run.
ALTER INDEX index_name REBUILD
will rebuild an index without dropping and re-creating it.

Resources