Cocoa SQLite: when to close the database?

I am writing my first macOS application that uses SQLite (https://github.com/ccgus/fmdb).
I could either open/close the database connection for each transaction (CRUD), or on init/dealloc. What is the best way?

I'm not sure I have the definitive answer, but having looked into this a bit myself, I've seen numerous people who say it's ok to leave the database open.
Also, if you look at the SQLite site you'll see they've done a lot of work on ensuring a database will not get corrupted by crashes, power failures, etc.:
http://www.sqlite.org/testing.html
http://www.sqlite.org/atomiccommit.html
My experience using SQLite and FMDB is that it seems to be fine to open a connection and just leave it open. Remember, this is a "connection" to a file on a local file system, on flash memory. That's a very different situation from a connection over the network. I think the chances of failure are extremely slim, as SQLite is clearly designed to handle crashes, power failures, etc. even if they occur during an actual database operation; outside of a database operation they are not an issue.
You could of course argue that it's bad practice to keep a database connection open when not in use, and I wouldn't recommend it in a typical client-server setup, but on the iPhone/iPad I think it's a non-issue. Keeping it open seems to work fine and is one less thing to worry about.
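To make the shape of this approach concrete: the question is about FMDB/Objective-C, but here is a minimal sketch of the keep-it-open pattern using Python's sqlite3 module purely as a stand-in, so the idea is self-contained and runnable. The `Store` class and `notes` table are hypothetical names for illustration.

```python
import sqlite3

class Store:
    """Sketch of the keep-it-open approach: one connection created up
    front (init) and reused for every operation until shutdown."""

    def __init__(self, path):
        # Opened once; SQLite's journaling is what protects the file
        # if the process dies while the connection is still open.
        self.con = sqlite3.connect(path)
        self.con.execute("CREATE TABLE IF NOT EXISTS notes (text)")

    def add_note(self, text):
        with self.con:  # each write is still its own transaction
            self.con.execute("INSERT INTO notes (text) VALUES (?)", (text,))

    def notes(self):
        return [row[0] for row in self.con.execute("SELECT text FROM notes")]

    def close(self):
        # Called once, at dealloc/app-quit time.
        self.con.close()
```

Note that leaving the connection open doesn't mean leaving transactions open; each write above is still committed immediately.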

You don't want your app to keep the DB open from start to finish, unless all it does is start, do DB stuff, then quit. The reason is that on rare occasions the app may be terminated by a system problem, loss of power, etc.; since SQLite is file-based, this may result in an unclosed file or some other out-of-sync condition. Open the DB when you need it open, do your thing, and close it when you no longer need it. You can't protect against a crash while you're actually doing DB ops, but you can at least ensure the DB was stable and closed when your last set of DB ops ran. As an aside, SQLite opens and closes very quickly. Well, let me amend that: the SQLite3 I have compiled into my app does. I don't actually know about other versions.
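For contrast with the previous answer, here is the open-per-operation pattern this answer describes, again sketched in Python's sqlite3 as a stand-in for FMDB (the function and table names are made up for illustration):

```python
import sqlite3

def add_note(path, text):
    """Open-per-operation sketch: the file is only open for the
    duration of one unit of work, then closed again."""
    con = sqlite3.connect(path)
    try:
        with con:  # commit on success, roll back on error
            con.execute("CREATE TABLE IF NOT EXISTS notes (text)")
            con.execute("INSERT INTO notes (text) VALUES (?)", (text,))
    finally:
        con.close()  # the DB is closed whenever we're not using it

def all_notes(path):
    con = sqlite3.connect(path)
    try:
        return [row[0] for row in con.execute("SELECT text FROM notes")]
    finally:
        con.close()
```

The try/finally guarantees the close happens even when the operation itself fails, which is the whole point of this approach.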


the simplest test to check ODP connectivity

I want to install a new version of ODP into a production environment and I'm looking for the simplest test that the drivers have actually gone on ok, and the bespoke apps on the server can still connect to the database.
Sounds easy, but there are some caveats...
First, one thing I need to do over-and-above the Oracle setup is to manually introduce a key into the registry, TNS_ADMIN. This is critical to the environment I'm installing to and when this key is missing, or the path is incorrect, this is the normal cause of problems. Effectively, this is what I'm actually looking to test.
Next, since these are production servers, there are no tools installed on them, so I can't just run up a copy of Toad, for example. The only truly safe assumption for the software present will be the operating system (Windows 2003) and the Oracle drivers (ODP 11.2 R3 which at the time of writing is Oracle's current production version).
Next, the bespoke apps on there are generally service-oriented, so simply saying "just run up one of the apps" might be easier said than done. Also on this point, it won't actually be me who's rolling these drivers in; it will be an operator with limited knowledge of what they're doing (sad but true). So whatever test I settle on, it's got to be easy enough for the guy to follow, and easy enough for him to interpret the results.
Next, I'm fully aware I could write a 5-line test rig just to open and close a connection. This has the advantage of making life easy for the operator, and is definitely a fallback option, but can't help wondering if there is an easier approach.
I guess I'm just wondering whether anyone knows of some kind of utility, more than likely shipping with ODP, which will effect a connection test. Even if I end up giving the operator a .bat file to execute, it'll be simpler (and less error-prone?) than writing my own app.
Points for the best suggestion,
Pete
I don't think there is one in just ODP.net, no. At least, I don't see anything in the Entity Framework beta version of it (which I have installed).
In the larger driver packages you could use SQL*Plus, which is a command line tool. But for your purposes the simplest answer is likely to write a very small app that just connects and does a SELECT * FROM DUAL;
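In case it helps, here is a sketch of the shape such a "5-line test rig" could take. It's written in Python with sqlite3 purely as a stand-in for the real Oracle connection (which in this environment would go through ODP.NET and `SELECT * FROM DUAL`); the point is the fail-fast structure and an unambiguous PASS/FAIL the operator can read, plus an exit code a .bat wrapper can check.

```python
import sqlite3  # stand-in for the real Oracle driver in this sketch

def connection_test(connect):
    """Open a connection, run a trivial query, print PASS or FAIL.

    `connect` is any zero-argument callable returning a DB-API style
    connection; in the real rig it would open the Oracle database and
    the trivial query would be the classic SELECT * FROM DUAL."""
    try:
        con = connect()
        try:
            con.execute("SELECT 1").fetchone()
        finally:
            con.close()
        print("CONNECTION TEST: PASS")
        return 0  # exit code a batch file can test with ERRORLEVEL
    except Exception as exc:
        print("CONNECTION TEST: FAIL -", exc)
        return 1
```

A bad TNS_ADMIN path would surface here as a connect-time exception, which is exactly the condition you're trying to detect.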
I got the operator to create an ODBC connection and to test it. I can do this using just Windows, no additional software required; they just need to make sure they use the right driver and have a valid database login to hand.

Why would a long Rake task just stop, then start again?

I have a complex legacy data migration problem: MS Access data going into MySQL. I'm using a Rake task. There's a lot of data and it requires a lot of transforming and examining. The Rake task is hundreds of lines across about 12 files. The whole thing takes about two hours to run. It has to run on Windows (I'm using an XP VMware VM hosted on an OS X Leopard system) because the Ruby libraries that can talk to MS Access only work on Windows.
I'm finding that sometimes, not every time, I'll start the task and come back later and it will be stalled. No error message. I put numerous print statements in it, so you should see lots of reporting going by, but the last thing it printed is just sitting there: "11% done" or whatever.
I hit Ctrl-C and, instead of going back to the command prompt, the task starts up again where it left off; reported output starts going by again.
I'm sorry for the abstract question, but I'm hoping someone might have an idea or notion of what's happening. Maybe suggestions for troubleshooting.
Well, if the Access side seems to be freezing, consider shoving the data into MySQL first, and see if that eliminates the problem. In other words, the data has to go over eventually; you might as well move it into that system from the get-go. There are a number of utilities around that let you move the data into MySQL (or just export the Access data to CSV files).
That way you're not doing data transformations during the transfer itself (so there's no labor or programming cost at that step; you just transfer the data).
Once the data is in MySQL, your code does the migration (transformation) from one MySQL database (or table) to another. And you're out of a VM environment and running things natively (faster performance, likely more stable).
So I would vote to get your data over into MySQL; then you're down to a single platform.
The fewer systems involved, the less chance of problems.
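The transfer-first-then-transform split described above can be sketched like this. The sketch uses Python with sqlite3 standing in for MySQL (so it stays self-contained); the table and column names are invented for illustration, and in the real setup the inserts would go through a MySQL driver.

```python
import csv
import io
import sqlite3  # stand-in for MySQL, purely to keep the sketch runnable

def load_csv(con, table, csv_text):
    """Step 1: load the Access CSV export verbatim, no transformation.
    Table/column names come from the trusted CSV header in this sketch."""
    reader = csv.reader(io.StringIO(csv_text))
    headers = next(reader)
    cols = ", ".join(headers)
    marks = ", ".join("?" for _ in headers)
    con.execute(f"CREATE TABLE IF NOT EXISTS {table} ({cols})")
    with con:
        con.executemany(f"INSERT INTO {table} ({cols}) VALUES ({marks})", reader)

def migrate(con):
    """Step 2: transformation happens afterwards, database-to-database,
    where it's fast and easy to re-run if it stalls."""
    con.execute("CREATE TABLE IF NOT EXISTS clean_people (name)")
    with con:
        con.execute("INSERT INTO clean_people SELECT upper(name) FROM raw_people")
```

The practical benefit is that a stall in step 2 can be retried cheaply without touching Access or the VM again.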

Fast restart technique instead of keeping the good state (availability and consistency)

How often do you solve your problems by restarting a computer, router, program, browser? Or even by reinstalling the operating system or software component?
This seems to be a common pattern: when you suspect that a software component is not keeping its state correctly, you get back to a known initial state by restarting the component.
I've heard that Amazon/Google have clusters of very many nodes, and one important property of each node is that it can restart in seconds. So if one of them fails, returning it to its initial state is just a matter of restarting it.
Are there any languages/frameworks/design patterns out there that treat this technique as a first-class citizen?
EDIT The link that describes some principles behind Amazon as well as overall principles of availability and consistency:
http://www.infoq.com/presentations/availability-consistency
This is actually very rare in the Unix/Linux world. Those OSes were designed (and so was Windows) to protect themselves from badly behaved processes. I am sure Google is not relying on hard restarts to correct misbehaving software. I would say this technique should not be employed, and if someone says it's the fastest route to recovery for their software, you should look for something else!
This is common in the embedded systems world, and in telecommunications. It's much less common in the server based world.
There's a research group you might be interested in. They've been working on Recovery-Oriented Computing or "ROC". The key principle in ROC is that the cleanest, best, most reliable state that any program can be in is right after starting up. Therefore, on detecting a fault, they prefer to restart the software rather than attempt to recover from the fault.
Sounds simple enough, right? Well, most of the research has gone into implementing that idea. The reason is exactly what you and other commenters have pointed out: OS restarts are too slow to be a viable recovery method.
ROC relies on three major parts:
A method to detect faults as early as possible.
A means of isolating the faulty component while preserving the rest of the system.
Component-level restarts.
The real key difference between ROC and the typical "nightly restart" approach is that in ROC the restarts are a reaction to a detected fault, not a scheduled precaution. Most software is written with some degree of error handling and recovery (throw-and-catch, logging, retry loops, etc.); a ROC program would instead detect the fault (exception) and immediately exit. Mixing up the two paradigms just leaves you with the worst of both worlds: low reliability and errors.
Microcontrollers typically have a watchdog timer, which must be reset (by a line of code) every so often or else the microcontroller will reset. This keeps the firmware from getting stuck in an endless loop, stuck waiting for input, etc.
Unused memory is sometimes set to an instruction which causes a reset, or a jump to the same location the microcontroller starts at when it is reset. This will reset the microcontroller if it somehow jumps to a location outside the program memory.
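The watchdog idea translates directly to application software. Below is a minimal software analogue in Python (threads standing in for the hardware timer; on a real microcontroller the "expire" action is a chip reset, here it's just a callback). The class and names are invented for illustration.

```python
import threading
import time

class Watchdog:
    """Software analogue of a microcontroller watchdog timer: if the
    main loop stops kicking it within `timeout` seconds, the expire
    callback fires (on real hardware, the chip would reboot)."""

    def __init__(self, timeout, on_expire):
        self.timeout = timeout
        self.on_expire = on_expire
        self._last_kick = time.monotonic()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watch, daemon=True)
        self._thread.start()

    def kick(self):
        # The "reset the watchdog" line of code the firmware must run.
        self._last_kick = time.monotonic()

    def _watch(self):
        # Poll at a fraction of the timeout so expiry is detected promptly.
        while not self._stop.wait(self.timeout / 10):
            if time.monotonic() - self._last_kick > self.timeout:
                self.on_expire()
                return

    def stop(self):
        self._stop.set()
        self._thread.join()
```

A loop that hangs (stuck waiting for input, endless loop) simply stops calling `kick()`, and the watchdog fires.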
Embedded systems may have a checkpoint feature where every n ms, the current stack is saved.
The memory is non-volatile across a power restart (i.e., battery-backed), so on power-up a test is made to see if the code needs to jump back to an old checkpoint, or if it's a fresh start.
I'm going to guess that a similar technique(but more sophisticated) is used for Amazon/Google.
Though I can't think of a design pattern per se, in my experience, it's a result of "select is broken" from developers.
I've seen a 50-user site cripple both SQL Server Enterprise Edition (with a 750 MB database) and a Novell server because of poor connection management coupled with excessive calls and no caching. Novell was always the culprit according to developers until we found a missing "CloseConnection" call in a core library. By then, thousands were spent, unsuccessfully, on upgrades to address that one missing line of code.
(Why they had Enterprise Edition was beyond me so don't ask!!)
If you look at scripting languages like PHP running on Apache, each invocation starts a new process. In the basic case there is no shared state between processes, and once the invocation has finished the process is terminated.
The advantages are less onus on resource management as they will be released when the process finishes and less need for error handling as the process is designed to fail-fast and it cannot be left in an inconsistent state.
I've seen it a few places at the application level (an app restarting itself if it bombs).
I've implemented the pattern at an application level, where a service reading from dBase files starts getting errors after a certain number of reads. It looks for a particular error being thrown, and if it sees that error, the service calls a console app that kills the process and restarts the service. It's kludgey, and I hate it, but for this particular situation I could find no better answer.
And bear in mind that IIS has a built-in feature that restarts the application pool under certain conditions.
For that matter, restarting a service is an option for any service on Windows as one of the actions to take when the service fails.
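The restart-on-failure pattern the last few answers describe (app restarting itself, IIS recycling a pool, Windows service recovery actions) boils down to a small supervisor loop. A generic sketch, with made-up names and a Python function standing in for the supervised process:

```python
def supervise(task, max_restarts=3, should_restart=lambda exc: True):
    """Minimal restart-on-failure supervisor: run `task`; if it raises
    an error that `should_restart` recognizes, run it again from a
    clean initial state, up to `max_restarts` times, then give up."""
    restarts = 0
    while True:
        try:
            return task()
        except Exception as exc:
            if restarts >= max_restarts or not should_restart(exc):
                raise  # unrecognized error, or restarting isn't helping
            restarts += 1  # the "kill the process and restart" step
```

The `should_restart` hook matters: as the ROC answer notes, you restart only on the faults you've decided are restart-recoverable, rather than swallowing everything.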

What kind of safeguards do you use to avoid accidentally making unintended changes to your production environment?

Because we don't have a good staging environment we often have to debug issues on our production systems. We have web, application, and database servers.
What kind of safeguards do you use to avoid accidentally making unintended changes to your production environment when doing this?
EDIT:
The application is a very complex B2B vertical web application. There is a lot of data involved. Some tables have close to 100 million records.
EDIT:
The staging environment we have in place does not have the capacity to mirror production. There are also hundreds of gigabytes of data files involved besides the actual database data.
EDIT:
We do use source control for the code, but not for the stored procedures. There are some old stored procedures in source control, but nobody keeps them updated anymore.
The main concerns are the database and data on the file system.
BTW, I am a consultant at this company, not an actual employee.
The most direct answer is: "Don't do that."
Source control. There's nothing like a rollback when things go irreparably wrong. Also, a diff can help you replicate the changes to other production systems.
New production releases go via our systems guys; the programmers and developers can only request that their new system go live. Approval is needed as well, and we show that each change made has been tested (by including a snapshot of everything tested in this release in the production request).
We keep the previous production releases for fallback in case of issues.
If things do break (which shouldn't happen often with a proper testing procedure and managed releases), then we can either roll back or hotfix. Often when things are broken in live and the fix is small, we can hotfix, then move the fix to test to do a proper test.
Regardless, sometimes things get by...
Only allow certain accounts write access, so you have to log in differently to make a change.
On the web server, have two directory structures that mirror each other: one where only one ID can write, the other a staging dir where everyone can write.
On the database server, have one production DB where only one ID can write, and a staging DB where everyone can write. The staging DB can have the nightly backup restored to it.
HOWEVER, if you have a bad query or some resource hog in your staging system, resources will be pulled from production, and the machine could hang.
For web and application servers, I would try to copy the environment to a new location on the same infrastructure and have the affected people reproduce the behavior on the copy. This at least gives you a level of separation from accidentally screwing with 100% of your clients.
For database servers, I would configure user accounts on the production system to give them read-only rights.
Read-Only/Guest accounts. Seriously. It's the same reason you don't always login as root or Administrator.
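To make the read-only idea concrete: here is a minimal illustration using SQLite's URI read-only mode as a stand-in for a real read-only database account (the file names are throwaway; on SQL Server/Oracle the equivalent is an account granted only SELECT). The engine, not developer discipline, rejects the write.

```python
import os
import sqlite3
import tempfile

# Set up a throwaway "production" database file.
path = os.path.join(tempfile.mkdtemp(), "prod.db")
rw = sqlite3.connect(path)
with rw:
    rw.execute("CREATE TABLE accounts (balance)")
    rw.execute("INSERT INTO accounts VALUES (100)")
rw.close()

# The connection debuggers get: read-only at the engine level.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(ro.execute("SELECT balance FROM accounts").fetchone())  # reads work
try:
    ro.execute("UPDATE accounts SET balance = 0")  # writes are rejected
except sqlite3.OperationalError as exc:
    print("blocked:", exc)
ro.close()
```

The point is that an accidental UPDATE during a debugging session fails loudly instead of silently changing production data.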
This is a tough thing, and it goes with the territory of "no staging environment."
For many reasons, it's best to have a dedicated duplicate of PROD you can use to stage deploys to... and to debug on. But I know that sometimes, when you're starting out, that doesn't work out as quickly or thoroughly as we'd want.
One thing I've seen work is the use of VMs: aside from the debug environment, you can create a mini-PROD in a VM and use that to debug. This may not be practical given the type of app you're developing, so additional detail in that area would be helpful.
As for avoiding changes to PROD during debugging: is there a reason you'd need to change anything to facilitate debugging? If so, that might be worth looking into solving another way.
Version control is immensely helpful for controlling changes to production environments - just make your production environment a working copy of the appropriate directory or directories from the repository. When you roll out an update, your source control system makes sure that ALL the changed files get copied. When an update breaks something, you can roll the production working copy back to the last revision which wasn't broken. Also, you can check your production WC out from a tag instead of from the trunk; that way you can decide which repository revisions to apply to the production environment by adjusting the tag.
If you're not familiar with the concepts of version control systems, I'd advise you to do some research. They're conceptually complex but incredibly useful and powerful. The Wikipedia article is a good place to start:
http://en.wikipedia.org/wiki/Revision_control
I'm sorry, but you have to have a staging environment. There's no getting around this. If it means you have to cull the size of your datasets, then that's what you have to do. Use VMware and VMware Converter to import the production systems during down periods, if you have them (this is a many-hour process, so maybe not practical).
There are a certain class of problems you can't solve without having full access to production DBs (or a copy), performance is one of these. But you really should build a staging environment, even if it's on someone's desktop machine with a stripped down dataset.
That aside, I've had to live with a few of these in the past, and really, there's nothing you can do except lots of backups. Every change you make should be preceded by an incremental backup. That way, if you fubar'd something, the amount you've lost is not substantial. SQL Server can take differential backups that limit the amount of disk space used for backups; Oracle can as well.
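The backup-before-every-change discipline can be wrapped in a small helper. A sketch using SQLite's online backup API as a stand-in for SQL Server/Oracle differential backups (all names here are invented for illustration):

```python
import os
import sqlite3
import time

def backup_then_change(db_path, backup_dir, change):
    """Sketch of 'back up before every risky change': snapshot the
    database with SQLite's online backup API, then apply `change`.
    Returns the path of the restore point that was taken."""
    os.makedirs(backup_dir, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest_path = os.path.join(backup_dir, f"backup-{stamp}.db")
    src = sqlite3.connect(db_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)   # consistent snapshot, even while in use
    dest.close()
    try:
        with src:
            change(src)    # the risky production edit, as one transaction
    finally:
        src.close()
    return dest_path       # restore point if the change was a fubar
```

If the change turns out to be wrong, the snapshot taken immediately before it bounds your loss to that one operation.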
In case you really have no other choice, and it is likely to be a chronic situation, consider adding some way in the application data (files or database) to flag a set of data as "please, do not actually change production state with this data". Combined with data dumps at critical points in a process when this flag is active, you may be able to exercise most of the production logic without the data actually being acted upon.

How big can a Sourcesafe DB be before "problems" arise?

We use SourceSafe 6.0d and have a DB that is about 1.6GB. We haven't had any problems yet, and there is no plan to change source control programs right now, but how big can the SourceSafe database be before it becomes an issue?
Thanks
I've had VSS problems start as low as 1.5-2.0 gigs.
The meta-answer is, don't use it. VSS is far inferior to a half-dozen alternatives that you have at your fingertips. Part of source control is supposed to be ensuring the integrity of your repository. If one of the fundamental assumptions of your source control tool is that you never know when it will start degrading data integrity, then you have a tool that invalidates its own purpose.
I have not seen a professional software house using VSS in almost a decade.
1 byte!
:-)
Sorry, dude, you set me up.
Do you run the built-in ssarchive utility to make backups? If so, 2GB is the maximum size that can be restored. (http://social.msdn.microsoft.com/Forums/en-US/vssourcecontrol/thread/6e01e116-06fe-4621-abd9-ceb8e349f884/)
NOTE: the ssarchive program won't tell you this; it's just that if you try to restore a DB over 2 GB, it will fail. Beware! All these guys telling you they're running fine with larger DBs are either using another archive program, or they haven't tested the restore feature.
I've actually run a VSS DB that was around 40 GB. I don't recommend it, but it is possible. Really, the larger you let it get, the more you're playing with fire. I've heard of instances where the DB gets corrupted and the items in source control were unrecoverable. I would definitely back it up on a daily basis and start looking to change source control systems. Having been in the position of the guy they call when it fails, I can tell you that it really starts to get stressful when you realize it could just go down and never come back.
Considering the amount of problems SourceSafe can generate on its own, I would say the size has to be in the category "Present on disk" for it to develop problems.
I've administered a VSS DB over twice that size. As long as you are vigilant about running Analyze, you should be OK.
SourceSafe recommends 3-5 GB, with a "don't ever go over 13 GB".
In practice, however, ours is over 20 GB and seems to be running fine.
The larger you get, the more problems Analyze will find, including lost files, etc.
EDIT: Here is the official word: http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx
I have found that Analyze/Fix starts getting annoyingly slow at around 2G on a reasonably powerful server. We run Analyze once per month on databases that are used by 20 or so developers. The utility finds occasional fixes to perform, but actual use has been basically problem free for years at my workplace.
The main thing, according to Microsoft, is to make sure you never run out of disk space, whatever the size of the database.
http://msdn.microsoft.com/en-us/library/bb509342(VS.80).aspx
quote:
Do not allow Visual SourceSafe or the Analyze tool to run out of disk space while running. Running out of disk space in the middle of a complex operation can create serious database corruption.
