Is it safe to clean out the contents of the C:\ProgramData\firebird
folder, i.e. wipe it, when the Firebird service (superserver, v3.0) is not
running?
I understand that it contains lock tables etc., so it should not be touched
while FB is running. But it's not clear to me whether it can be wiped safely
when FB is not running, or whether it contains data that is vital when FB
starts up again.
My situation is that I'm migrating a VM with an FB installation.
The migration has been done like this, for practical reasons (uptime vs.
file transfer & VM conversion time):
1. A snapshot of the source VM (i.e. the nightly backup) is copied to the new location. The source stays up and running. The copy process takes about a day. (We have the databases locked with nbackup while the nightly snapshot is taken; see the sketch after this list.)
2. The snapshot is unpacked at the target location, converted from VMware to Hyper-V, and brought online for additional reconfiguration and system testing.
3. A few days pass.
4. Both the source and target Firebird services are stopped, so no database activity is going on anywhere.
5. Files are synced from source to target, including the database files. This file transfer is much smaller than in step 1, so it can be done during the offline time.
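For reference, a minimal sketch (Python via subprocess) of how the nbackup lock/unlock around the step 1 snapshot might be scripted; the nbackup path, database path and credentials are placeholders, and the snapshot/copy step itself is site-specific:

import subprocess

# Placeholder paths and credentials - adjust to the actual installation.
NBACKUP = r"C:\Program Files\Firebird\Firebird_3_0\nbackup.exe"
DATABASE = r"C:\Databases\mydb.fdb"
CREDENTIALS = ["-U", "SYSDBA", "-P", "masterkey"]  # user/password switches - verify against your nbackup version

def lock_database():
    # nbackup -L puts the database into a copy-safe "locked" state;
    # changes are diverted to a delta file while the main file is copied.
    subprocess.run([NBACKUP, "-L", DATABASE] + CREDENTIALS, check=True)

def unlock_database():
    # nbackup -N ends the locked state and merges the delta back in.
    subprocess.run([NBACKUP, "-N", DATABASE] + CREDENTIALS, check=True)

lock_database()
try:
    pass  # take the VM snapshot / copy the database file here
finally:
    unlock_database()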
In step 5 I find diffs in the C:\ProgramData\firebird folder, and I'm
wondering what would be the best approach:
A) Wipe the folder at target.
B) Sync so target has the same content as source.
C) Leave target as is.
Please note that when the FB service is started again at the target, the
database files will be identical to those at the source at the time of
FB shutdown, and probably won't "match" the contents of
C:\ProgramData\firebird at the target. I would assume that this rules
out option C).
The files in C:\ProgramData\firebird are only used during runtime of the Firebird server and contain transient data. It should be safe to delete these files when Firebird is not running.
In other words, when migrating from one server to another, you do not need to migrate the contents of C:\ProgramData\Firebird.
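For what it's worth, a minimal sketch (Python) of wiping the folder only after verifying that the service is stopped; the service name FirebirdServerDefaultInstance is the usual default for a v3.0 install, but verify yours with sc query:

import shutil
import subprocess
from pathlib import Path

SERVICE = "FirebirdServerDefaultInstance"  # assumed default service name - check yours
FOLDER = Path(r"C:\ProgramData\firebird")

def service_is_stopped(name):
    # "sc query <service>" prints a STATE line containing STOPPED when the service is down.
    result = subprocess.run(["sc", "query", name], capture_output=True, text=True)
    return "STOPPED" in result.stdout

if not service_is_stopped(SERVICE):
    raise SystemExit("Firebird service is still running - not touching the folder.")
for entry in FOLDER.iterdir():
    # remove the transient files but keep the folder itself
    if entry.is_dir():
        shutil.rmtree(entry)
    else:
        entry.unlink()
print("Cleared", FOLDER)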
Related
A delete . was executed on the folder containing a HSQLDB. The only file which was locked by the system (and thus not deleted) was the database.data file. Is it possible to recover the database from this file alone?
If the delete was done within the BuildServer directory itself, and not specifically within the BuildServer/system directory, you are out of luck, since all the builds and their build step configurations are stored within BuildServer/config/projects.
The database only stores build logs, changes, users, etc., but not the actual config. The configs are all XML-based files on the file system.
If the delete was done within BuildServer/system, you may be able to start up a clean TC instance to rebuild the BuildServer/system directory and then shut it down. Once it's down, switch out the buildserver.data files and bring it up again. (Trying to do this now but it's taking forever to start up. If I find out more I'll edit.)
I just recently installed Worklight in Eclipse in order to work on developing an iPad app, but I noticed it takes me significantly longer to build and deploy compared to the other developers. The others take roughly 5-7 minutes per build while mine takes about 25-30 minutes. I am not sure what could be the reason and was hoping for some suggestions on what it may be.
I was told that during the build process Worklight copies the contents of your project to another directory on your machine, and I think the location of that directory might be the issue, but I am not sure how to check where this is happening.
Edit: To give more details as requested:
Both my machine and my coworkers' machines are running Windows 7 Enterprise, with an Intel dual core CPU and 8 GB of RAM.
The workspace containing the project is located locally in the base of the C: drive, but user profile files/folders such as My Documents are stored on a shared network drive. The project itself is 143 MB.
To the best of my knowledge there are a few factors that influence build time:
Size of the project (e.g. 100 MB)
Number of files in the project (e.g. 1200 files)
Your environment got into a strange state.
Someone reported performance issues with adding new Java code.
Hardware
You can try:
Lower the size of your project by removing unnecessary files, compressing images using lossy compression, etc.
Concatenate resources like JS and CSS files (a minimal sketch follows this list).
Try to use resources hosted on other servers, at least for development, for example:
<script data-dojo-config="async: 1"
src="http://ajax.googleapis.com/ajax/libs/dojo/1.8.1/dojo/dojo.js"></script>
<script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
Try creating a new workspace and importing your project, or removing (back up first!) the project's metadata directories and files (Workspace/WorklightServerHome, bin/). You may have some success removing and re-creating the native environment folders. There's also a -clean flag you can pass to Eclipse.
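As for the concatenation suggestion, a minimal sketch (Python; the file names are made up) of bundling several JS files into one so the builder has fewer resources to copy:

from pathlib import Path

# Hypothetical inputs/output - adjust to your project's layout.
SOURCES = [Path("js/utils.js"), Path("js/views.js"), Path("js/app.js")]
BUNDLE = Path("js/bundle.js")

with BUNDLE.open("w", encoding="utf-8") as out:
    for src in SOURCES:
        out.write(f"/* --- {src.name} --- */\n")  # marker comment for debugging
        out.write(src.read_text(encoding="utf-8"))
        out.write("\n;\n")  # guard against missing trailing semicolons

print("Wrote", BUNDLE, "from", len(SOURCES), "files")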
I was able to fix my own problem: Worklight was using a .wlapp file that was stored on my shared network drive. By changing the TEMP and TMP environment variables to a folder that is definitely local, such as C:\TEMP, Worklight then accesses only local files, greatly speeding up the build process.
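A quick way to check whether you are in the same situation - a small sketch (Python) that flags TEMP/TMP values pointing at a UNC network path; repointing them to something like C:\TEMP (e.g. with setx) is what fixed it here:

import os

for var in ("TEMP", "TMP"):
    value = os.environ.get(var, "<not set>")
    # UNC paths (\\server\share\...) indicate the slow case; mapped drive letters need a manual check.
    if value.startswith("\\\\"):
        print(f"{var} = {value}  <-- network path, consider pointing this at C:\\TEMP")
    else:
        print(f"{var} = {value}")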
How do I move a .sdf file into my isolated storage, and after I have moved it, is there a way to delete it, as it is of no use? I have added my .sdf file as content in my project.
Your question is not very clear, but let me see if I get this. You created a database and added it as content to your project so that you can have all the data present when the user installs your app. Then you are copying the data from the read-only .sdf file into a database that you are creating on first run, so that you can read/write to it. Correct?
If so, I do not believe there is a way to delete the read-only file that you included with the install.
If your database is large enough that you are concerned about the space it will take by having two copies of it on the phone, I would suggest placing your data on a server, creating a web service, and accessing that web service on first run. Place a notice on the screen that lets your user know that it is downloading information that will only be downloaded once, and that subsequent launches will not take as long. Be sure you include code to prevent a problem should the download be interrupted by a phone call, text message, back key press, start button, or other event, and make it able to continue the download if it was interrupted in a prior run.
To answer your question, .SDF is the format of Microsoft SQL Server Compact (SQL CE) databases. The link you pasted talks about SQLite databases.
This is the way to download the entire Isolated Storage of your app onto your machine.
Open cmd and go to the following directory:
C:\Program Files\Microsoft SDKs\Windows Phone\v7.1\Tools\IsolatedStorageExplorerTool
Then use isetool.exe to download the Isolated Storage, along with the .sdf file, onto your machine:
isetool.exe ts xd [Product_id_here_see_WMAppManifest.xml] "D:\Sandbox"
You should get a message like "download successful into D:\Sandbox".
You can also upload the .sdf by replacing the argument ts with rs.
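If you run this often, a small sketch (Python) wrapping the two isetool.exe calls above; the ISETool path and the way the ProductID is read from WMAppManifest.xml are assumptions to verify against your SDK and project:

import subprocess
import xml.etree.ElementTree as ET

ISETOOL = r"C:\Program Files\Microsoft SDKs\Windows Phone\v7.1\Tools\IsolatedStorageExplorerTool\isetool.exe"
SNAPSHOT_DIR = r"D:\Sandbox"

def product_id(manifest_path):
    # WMAppManifest.xml carries the ProductID on its App element (assumption - check your manifest).
    root = ET.parse(manifest_path).getroot()
    return root.find(".//App").attrib["ProductID"]

def take_snapshot(pid):
    # "ts xd" = take snapshot from the emulator; "de" targets a tethered device instead.
    subprocess.run([ISETOOL, "ts", "xd", pid, SNAPSHOT_DIR], check=True)

def restore_snapshot(pid):
    # "rs xd" = restore the snapshot back to the emulator.
    subprocess.run([ISETOOL, "rs", "xd", pid, SNAPSHOT_DIR], check=True)

pid = product_id(r"Properties\WMAppManifest.xml")
take_snapshot(pid)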
Can ClickOnce be configured to delete old published directories?
Or
Has anyone written some code that will delete these publish directories (maybe keeping the last 10)?
Currently, every time a ClickOnce publish is done, a new directory is created on the IIS server. This new directory contains a copy of the whole application, which is downloaded. The old directories do not seem to be used anymore and are just taking up a lot of space.
Here is a sample of the directory names being created. As you can see the application version number is being used in the name.
EduBenesysNET_1_0_1_0
EduBenesysNET_1_0_1_1
….
EduBenesysNET_1_0_1_192
EduBenesysNET_1_0_1_193
We have had 194 (zero-based) builds, with each directory staying out there. With the size of one build being about 50 MB, you can see how keeping the old directories out there will start to eat away at the disk space.
The way our application works is you always have to download the latest version. You do not have an option to skip the download, so I am hoping that deleting the old directories should not be a problem.
Good question (+1) - one would think that this should be possible somehow ...
Looking a bit closer, though, reveals that the observed publishing behavior is not actually a feature of the ClickOnce technology, but rather of the Visual Studio Publish Wizard - see, for example, the section ClickOnce publish folder structure in ClickOnce Publishing Process:
If you manually generate or update a ClickOnce application publication using either Mage or a custom tool, you are not constrained to this folder and file structure. For any particular ClickOnce publication, the chain of dependencies includes the following: [...] [emphasis mine]
The Walkthrough: Manually Deploying a ClickOnce Application yields the same conclusion, i.e. the folder structure in use by VS is simply a (reasonable) convention/approach.
Unfortunately the VS Publish Wizard doesn't seem to offer deleting older versions; at least it is neither visible nor documented anywhere. However, given that the resulting folder structure is just an artifact of the build process, you might as well add a custom build step doing just that - figuring out the details (i.e. accessing the VS automation properties to derive the last published version etc.) is outside the scope of your question though ;)
Regarding your sub question:
I am hoping that deleting off the old directories should not be a problem.
Definitely not a problem; it just depends on how many of these you eventually want to keep for rollback operations - see e.g. Can I delete previous old versions from Publishing Location created by ClickOnce?
The short answer is that this is not something that is built into Visual Studio or ClickOnce deployment, and you will have to find another way to do this, perhaps through a script that you run on your server.
You can delete all of the versions except the current one if you push updates as required updates. If you don't do that, you'll want to keep two versions in case the user reverts to the previous version.
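In the absence of built-in support, here is a sketch of the kind of server-side cleanup script mentioned above (Python); it assumes the publish folders follow the AppName_a_b_c_d pattern shown in the question, and the publish path and keep count are placeholders:

import re
import shutil
from pathlib import Path

PUBLISH_ROOT = Path(r"C:\inetpub\wwwroot\EduBenesysNET")  # assumed IIS publish location
PREFIX = "EduBenesysNET_"
KEEP = 10  # keep the last 10 versions, as suggested in the question

def version_of(folder):
    # Folder names look like EduBenesysNET_1_0_1_193 -> (1, 0, 1, 193)
    match = re.fullmatch(re.escape(PREFIX) + r"(\d+)_(\d+)_(\d+)_(\d+)", folder.name)
    return tuple(int(part) for part in match.groups()) if match else None

versioned = []
for entry in PUBLISH_ROOT.iterdir():
    version = version_of(entry)
    if entry.is_dir() and version is not None:
        versioned.append((version, entry))
versioned.sort()  # oldest version first

for _, folder in versioned[:-KEEP]:  # everything except the newest KEEP versions
    print("Removing", folder)
    shutil.rmtree(folder)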
I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a Windows machine, while the files are hosted on a Debian file server. I connect via a mapped drive on Windows.
Despite having a 1 Gbit Ethernet connection, doing an Eclipse project refresh is quite slow, up to 5 minutes, and I am blocked from working while this happens.
This normally wouldn't be such a problem, since Eclipse theoretically shouldn't have to do a full refresh very often. However, I use the Subclipse plugin as well, which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is Eclipse checking the 10k files one by one for changes over Samba.
There is a large number of files in the codebase that I would never need to access from eclipse, so I don't need it to check them at all. However I can't figure out how to prevent it from doing so. I have tried marking them 'derived'. This prevents them from being included in the build process etc. But it doesn't seem to speed up the refresh process at all. It seems that Eclipse still checks their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' process but again it doesn't speed up the actual refresh that precedes building (and which is what takes the most time).
Thanks all for your suggestions. Basically, JW was on the right track. Work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
Given it's subversioned, why not have the files locally, and use a post-commit hook to update to the latest version on the dev server after every commit? (Or have a specific string in the commit log (e.g. '##DEPLOY##') when you want to update dev, and only run the update when the post-commit hook sees this string.)
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in Eclipse, and the dev server is still OK (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
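A rough sketch of that post-commit hook (Python), assuming svn and svnlook are on the server's PATH and that /var/www/dev is the working copy to update - both are placeholders:

#!/usr/bin/env python
# Subversion invokes post-commit hooks as: post-commit <repository-path> <revision>
import subprocess
import sys

DEV_CHECKOUT = "/var/www/dev"  # hypothetical working copy on the dev server
TRIGGER = "##DEPLOY##"

repo, revision = sys.argv[1], sys.argv[2]

# Read the commit message for this revision.
log = subprocess.run(["svnlook", "log", "-r", revision, repo],
                     capture_output=True, text=True, check=True).stdout

# Only update the dev checkout when the commit message asks for it.
if TRIGGER in log:
    subprocess.run(["svn", "update", "--non-interactive", DEV_CHECKOUT], check=True)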
I solved this problem by changing the "File Transfer Buffer Size" at:
Window -> Preferences -> Remote Systems -> Files
and changing the "File transfer buffer size" Download (KB) and Upload (KB) values to something higher. I set them to 1000 KB; by default they are 40 KB.
Use the offline folder feature in Windows by right-clicking the share and selecting "Make available offline".
It can save a lot of time and round-trip delay in the file sharing protocol.
The use of svn externals with the revision flag for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest you do some simple testing before doing it for real.