Speed up PostgreSQL createdb? - performance

Is there a way to speed up PostgreSQL's createdb command?
Normally I wouldn't care, but doing unit testing in Django creates a database every time, and it takes about 5 seconds.
I'm using openSUSE 11.2 64-bit, PostgreSQL 8.4.2

It won't help you now, but there has been some work done on this in PostgreSQL 9.0.
What you can try as a workaround is to run with fsync=off. Of course, don't even think about doing this if you have actual data in your database, but if it runs on a test system only, that will make your CREATE DATABASE run a lot faster.
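A minimal sketch of that workaround, assuming a throwaway, test-only instance:

    -- in postgresql.conf on the test instance only:
    --   fsync = off
    -- then, after restarting the server, verify from psql:
    SHOW fsync;   -- should report "off"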

If Django supported Postgres schemas, you could simply drop the schema in question and recreate it instead of dropping the entire database.
You can still use DROP OWNED BY ... CASCADE to drop all objects created by whichever user is configured in Django, bringing the database back to an essentially pristine condition. See how much faster this is.
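A minimal sketch of that approach, assuming the role Django connects as is named django (substitute your configured user):

    -- run inside the test database, as a superuser or the owning role
    DROP OWNED BY django CASCADE;  -- drops every table, sequence and view owned by the role
    -- the database itself remains, so the next run skips CREATE DATABASE entirely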
You can shut down Postgres, then untar an existing database cold backup instead of running initdb. See how much faster this is.

Related

How to check that an H2 database is not corrupted?

The H2 database is not very stable (but very fast, which is great for development), especially during the development process; I hope the corruption is due to the immediate shutdown of the server (during debugging).
How can I ensure that an H2 database is not corrupted, in order to guarantee that a backup is good?
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
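For example, something like the following (H2 SQL; the output path is illustrative):

    -- dump the whole database as SQL; if this completes without error, the data is readable
    SCRIPT TO '/tmp/h2_integrity_check.sql';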
Another option is to always backup the data in the form of a SQL script. This will make a separate check unnecessary; but backup is a bit slower and can't be done online (while updates are happening).
By the way: if a database file gets corrupt, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), due to hardware failure, or due to a bug in the database engine itself.

NHibernate NUnit - clear database between test cases

We have a rather extensive test suite that takes forever to execute.
After each test has completed, the database (MSSQL) needs to be emptied so it is fresh for the next test case.
The way we do this is by temporarily removing all foreign keys, TRUNCATE'ing all tables, and re-adding the FKs.
This step takes somewhere between 2 and 3 seconds, according to NHProfiler, and seemingly all of the time is spent on the FK operations.
Our current method is clearly not optimal, but which way should we go to improve performance? The number of rows actually deleted from the DB is completely insignificant compared to the number of operations needed to remove and re-add the FKs.
Using an in-memory SQLite database is not an option, as the code under test uses MSSQL specific operations.
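Since the number of rows actually deleted is tiny, one cheaper variant of the current approach is to disable the existing foreign keys rather than drop and recreate them; a hedged T-SQL sketch (sp_MSforeachtable is an undocumented but widely used helper, and DELETE is used because TRUNCATE cannot run against tables referenced by foreign keys, even disabled ones):

    -- disable all constraints, clear the rows, then re-enable with validation
    EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
    EXEC sp_MSforeachtable 'DELETE FROM ?';
    EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';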
You could wrap everything in a transaction and just roll it all back at the end. That's how I do it, and it also allows running tests in parallel.
What about using SQL Server Compact? Create the database from the mapping files using NHibernate's schema create and load the data for each test, if you are talking about a trivial amount of data.
Have a look at this blog post - Using SQL Server Compact Edition for Unit testing
Alternatively, you could use Fluent Migrator to create the database schema and load the data for each test.
Why are you even using a DB in your tests? Surely you should be mocking the persistence mechanism? Unless you're actually trying to test that part of the functionality, you're wasting time and resources inserting/updating/deleting data.
The fact that your tests rely on MS SQL specifics and returned data hints at the possibility that your architecture needs looking at.
I'm not meaning to sound rude here - I'm just surprised no one else has picked you up on this.
w://
There are a couple of things that I've done in the past to help speed up database integration tests. First, I ended up with a SQL script that creates the entire database from scratch. This can be easily accomplished using a tool like Red Gate SQL Compare against a blank database.
Second, I created a script that removed all of the database objects from an existing database.
Then I needed a script that populated the database with test data. Again, simple to create using Red-Gate tools. You don't need/want a ton of data here, just enough to cover your test cases.
With those items in place, I created one test class with all of my read-only operations in it. In the init of that class, I cleared a local SQL Server Express instance, ran the create script, and then ran the populate script. This ensured the database was initialized correctly for all of the read-only tests.
For tests that actually manipulate the database, we did the same routine as above, except on test init as opposed to class init.
Obviously the more database manipulation tests you have, the longer it will take to run all of your tests. If it becomes unruly, you should look at categorizing your tests and only running what is necessary locally and running the full suite on a continuous integration server.

Migrating and Backing up Schemas (complex database structures)

Hey guys,
I need to figure out a way to back up and also migrate our Oracle database from our production schema to the dev schema and the other way around.
We have a bunch of config tables that drive how systems on our platform run, and when setting up new systems or doing maintenance, we need to update those config tables. We want to be able to work in the dev schemas, and after setting up a system/feature, migrate all of those configs over to the production schemas.
I thought of writing a procedure where we give it the ID of the system (from the main table), and it would go through all the tables doing a select nvl(..); if the row doesn't exist it would insert it, and if it does exist it would just run an update on that row.
This code will get very messy and complicated, especially since the whole config schema is very complex and it might be hard to handle all the keys properly.
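Much of that per-table insert-or-update branching can usually be collapsed into a single MERGE per config table; a hedged sketch with made-up schema, table, and column names:

    MERGE INTO prod_cfg.system_config dst
    USING (SELECT system_id, config_key, config_value
             FROM dev_cfg.system_config
            WHERE system_id = :p_system_id) src
       ON (dst.system_id = src.system_id AND dst.config_key = src.config_key)
     WHEN MATCHED THEN
       UPDATE SET dst.config_value = src.config_value
     WHEN NOT MATCHED THEN
       INSERT (system_id, config_key, config_value)
       VALUES (src.system_id, src.config_key, src.config_value);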
Another option I was looking at was triggers: when setting up a new system, a log of all the statements we ran while setting up/editing the system would be captured, and then we would run that log against our production schema.
I'm on a co-op term and have only been working with databases for 6 months, so I don't know that much; any information/advice would be greatly appreciated.
(We use PL/SQL.)
What about using export/import (or Data Pump) to bring over the config tables?
Check out data comparison tools like this.
I think TOAD has one built in. I'm sure there are others out there too.
It is common to have tables in a schema that are what we call "static data", i.e. the users don't change it because it controls how the application works.
Each change to config data should not be run ad-hoc in the target environment. Instead, you design and code your DML carefully in one or more scripts, which get tested in a dev environment, checked into change control, and can be re-run in any environment when required.

Installer package for program that uses JDBC to connect to MySQL

I have an installer wizard thing called 'install creator'. I want to include my MySQL database in the installer, or find another way that the user, upon installation, can just use my database. Problem is, not everyone has MySQL installed on their computer, and even then the user doesn't know the name of the database or my password. Somehow the database must be created automatically upon install and, for my purposes, some of the tables created as well. How can one do this? Thanks.
If you are just using MySQL as a local storage engine, as it seems you are, then you should consider using SQLite with JDBC instead of MySQL. MySQL is really intended to be used on a server, where information from multiple users is stored and where the database is accessed only indirectly through the programs you create that run on the server. You could, in theory, package up MySQL and MySQL Connector/J, which lets JDBC talk to MySQL; however, MySQL is a pretty big beast, and I don't think it's nice to do that to your users (also, don't forget that they might already have MySQL installed, and if you were to install MySQL for the first time, you would effectively be forcing them to use your root password). Unlike MySQL, SQLite is intended to provide the structure of SQL for lightweight, local file storage.
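With an embedded engine like SQLite, the database file is simply created the first time the application connects, so the installer doesn't need to set anything up; the application can create its own schema at startup. A hedged sketch (the table and columns are illustrative):

    -- executed by the application on every startup; a no-op once the table exists
    CREATE TABLE IF NOT EXISTS app_settings (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        value TEXT
    );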

Is it safe to delete the 3 default databases created during a PostgreSQL install?

I installed a default installation of PostgreSQL 8.4 on Windows 2003 Server, using the one-click installer provided. Running psql -l for the first time, I noticed there are three databases installed by default: postgres, template0, and template1.
Being security-minded, my initial reaction is to delete or change default configurations. However, I also know I haven't a clue regarding databases (as this install is my first step in self-learning about databases), so I thought I would ask first.
Is it safe to delete these?
Basically - no.
The postgres database is there as a non-template database with a reasonable guarantee that it exists, so any script that doesn't know where to connect can connect there.
If you remove template1, you will lose the ability to create new databases (at least easily).
template0 is there as a backup, in case your template1 gets damaged.
While I can theoretically imagine a working cluster with no template* and no postgres database, the thing that bugs me is that I have no idea what (security-wise) you want to achieve by removing them.
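For context, this is what the templates are used for; a minimal sketch:

    -- every new database is cloned from a template, template1 by default
    CREATE DATABASE mydb;                      -- implicitly copies template1
    CREATE DATABASE mydb2 TEMPLATE template0;  -- fall back to the pristine copy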
You can delete postgres, but do not touch template0 or template1. The postgres database is there for convenience.
