Need documentation for Progress application upgrade to 11.3 - installation

We are upgrading our Progress application from 9.1D to 11.3. Is there any sample document we should look at for our migration?
Currently we have built a new server where we are installing OpenEdge Enterprise RDBMS 11.3.
Can we back up the current database and load it into the new version?
Any suggestions/documents?

Generally Progress is very "kind" when upgrading, but you have to keep in mind that moving from 9.1D to 11.3 (11.4 is out soon, by the way) is moving from 2002 to 2013. A lot has changed since then.
If you have program logic that relies on disk layout or OS utilities (for example via UNIX, DOS or OS-COMMAND), that might need to change as well. So an upgrade might break even if the files compile without errors. You need to test everything!
You cannot directly back up and restore from 9.1D to 11.3; you need to dump & load.
What you need to do (a rough command-line sketch of these steps follows after the list):
Back everything up! Don't skip this, and make sure you save a copy of the backup. Back up the database, scripts, program files (.p, .i, .r, .cls, etc.). Everything! This is vital! Make sure you always have an untouched version of the backup left so you can restart if things go bad. Progress has built-in utilities for backing up the database. OS utilities can also be used, but be aware that OS utilities cannot be used to create online backups; the backed-up database will most likely be corrupt. Shut down the database before backing up when using OS utilities.
Dump your current database, data as well as schema. Don't forget to check for sequences etc.
Build a new database on your new server with the schema from the old DB.
If possible, move to Type 2 Storage Areas when doing an upgrade like this. It will increase performance. Check the documentation and KnowledgeBase for the required settings.
Load the dumped data.
Copy the program files from the old server to the new one.
Recompile.
Create startup scripts etc. for starting databases as well as clients. Old parameters might not fit your new server; you most likely have more memory, a faster CPU, larger disks, etc.
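Here is a rough command-line sketch of the core steps, assuming a UNIX-style server; all database, table and path names (olddb, newdb, mytable, /backup) are placeholders, and the exact options for your releases should be checked against the documentation and KnowledgeBase:

    # 1. Back up the old 9.1D database (probkup is the built-in utility;
    #    shut the database down first if you copy files with OS tools instead)
    probkup olddb /backup/olddb.bck

    # 2. Dump the schema (.df) and sequence values from the Data Dictionary /
    #    Data Administration tool, then binary-dump each table
    proutil olddb -C dump mytable /backup/dump

    # 3. Create the new 11.3 database from a structure file that defines
    #    Type 2 storage areas, and copy in an empty schema
    prostrct create newdb newdb.st
    procopy $DLC/empty newdb

    # 4. Load the .df into newdb (via the Data Dictionary), then binary-load
    #    the table dumps and rebuild the indexes
    proutil newdb -C load /backup/dump/mytable.bd
    proutil newdb -C idxbuild all

    # 5. Copy the source over and recompile everything against newdb

Binary dump and load is usually the fastest route; the classic ASCII dump/load from the Data Dictionary works as well for smaller databases.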
All steps have several substeps. I suggest you dive into the documentation found at community.progress.com. You can also search the KnowledgeBase (knowledgebase.progress.com).
Also, if you run into problems, you can ask more specific questions here (but tag accordingly, for example with openedge).
11.3 Documentation
9.1D Documentation
KnowledgeBase

Related

In NixOS, if a new config requires rebuilding the kernel, will old configurations still work?

The title really says it all, but just in case, here's some context:
Each time you change your configuration in NixOS, you need to run nixos-rebuild to create a new boot image, which will be listed in GRUB when you start the computer. A new configuration might require a new kernel. If it does, and you build it, will your old configurations continue to work?
In Ubuntu it appears that one can indeed host multiple kernels on the same machine. And I read somewhere that the Linux kernel can be pretty small, like 60 MB. Those two facts lead me to expect that NixOS will retain the old kernels. But I haven't found anything online that really makes that explicit.
I am currently building a configuration that uses Musnix. If you ask for it, Musnix will build you a realtime kernel. I'm currently building such a configuration, and hoping I'll still be able to boot my computer afterwards. I worry because GitHub user magnetophon, who is involved in Musnix's development, said the Musnix realtime kernel is broken.
This is one of the cool features of NixOS. When you run nixos-rebuild boot (or nixos-rebuild switch too for that matter), it will create new boot entries alongside the old ones. These entries have the right kernel and system configuration in them. So if your experimental kernel doesn't work, you can just reboot and start a previous version of your system, knowing that it will work, even if your kernel also came with userland changes.
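As a concrete sketch, the flow looks roughly like this (the generation numbers you'll see are of course specific to your machine):

    # Build the new configuration (e.g. with the Musnix realtime kernel) but
    # only activate it on the next boot, leaving the running system untouched
    sudo nixos-rebuild boot

    # Each system generation is kept as its own GRUB entry; you can list them
    sudo nix-env --list-generations -p /nix/var/nix/profiles/system

    # If the new kernel fails, reboot and pick an older "NixOS - Configuration N"
    # entry in GRUB, or roll back from a shell once you're in a working system
    sudo nixos-rebuild switch --rollback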
The nixos-rebuild command is documented here in the NixOS manual: https://nixos.org/nixos/manual/#sec-changing-config

File not written to by System.Data.SQLite on windows share after network failure

I have the following scenario:
A WPF application using an SQLite file via System.Data.SQLite on a Windows share as the backend. The client is Windows 7 Home Edition.
A user reported that she was using the software and saving the new data she entered, but when she revisited the software after a while, some of her data was gone.
She said the session lasted until about 11:50am - the timestamp on the SQLite file in question was 10:55am. I have a logfile in the same network share, it is written to using a filewriter with autoflush.
The logfile looks normal until 10:55am, then there is one line with rubbish (looks like several lines written together in one), then goes on as normal.
Clearly the network (or maybe the drive) must have had a hiccup at that time. Miraculously, my application just continued as normal. No exceptions were thrown, and according to the logfile all transactions were "completed". No journal file remains to tell the tale. This behaviour - although wondrous in itself - is not appreciated by the user :)
My question: what happened here and how can I prevent it in the future? Did Windows 7 cache the file in the standby list and not care at all about committing the changes to the actual disk? Why is my application or the SQLite provider unaware of this?
I will now build a confirmation mechanism that checks the timestamp on the backend file to see if it was actually changed at all after the last transaction, but that seems a bit silly. Just as silly as being very pedantic with catching exceptions when they are not being thrown :)
Thank you
TL;DR
The SQLite file shows no changes after a momentary hardware (network or drive) failure, but the application seemingly continued to function and no exceptions were thrown by System.Data.SQLite.
See section 3 of How To Corrupt An SQLite Database File; it appears that Windows did not sync the network share when SQLite told it to.
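As a hedged aside, you can inspect the file on the share by hand with the sqlite3 command-line shell (the path is a placeholder). Note that the SQLite documentation generally advises against keeping a database on a network filesystem at all, because file locking and sync requests are often not honoured there:

    REM Check whether the database on the share is structurally intact
    sqlite3 \\server\share\app.db "PRAGMA integrity_check;"

    REM Inspect the journal mode; with the default rollback journal, a lost
    REM -journal file after a failure means the last transactions may be gone
    sqlite3 \\server\share\app.db "PRAGMA journal_mode;"

Setting PRAGMA synchronous=FULL on each connection (I believe System.Data.SQLite exposes this as a connection-string setting) requests a flush on every commit, but that only helps if the OS and the share actually honour the flush, which is exactly what appears to have failed here.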

How to take a snapshot of the entire system of a MacBook Pro, OS X 10.8 Mountain Lion

I want to be able to take a snapshot of the current system and go back to it whenever I mess up files. I looked at the Time Machine solution, but realized that it's only a good solution when I know which file I'm looking for. But sometimes an installation process creates binary files in multiple system paths, which are very hard to locate and identify. Say I installed a package, but then I felt like I shouldn't have done that. Uninstallation might still leave files around. So what might be some graceful solutions to go back to a state of the machine when everything was nice and clean?
Use Disk Utility (Applications | Utilities).
Click on the HDD and then click on New Image. You can choose whether or not to have a compressed image. If you don't have much stuff on the drive it shouldn't be more than 30-40 GB. Once you have the .dmg file, stick it somewhere for backup purposes.
Also, create a recovery disk/stick with the recovery tool.
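For reference, a rough command-line equivalent of the "New Image" step (the paths here are just examples; imaging the live startup volume is best done while booted from the recovery disk you just created):

    # Create a compressed read-only image of a folder or volume
    hdiutil create -srcfolder "/Volumes/Macintosh HD" -format UDZO /Volumes/Backup/system-snapshot.dmg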
I dunno about "graceful" but Carbon Copy Cloner is definitely an easy solution for rolling back to a previous state. You can make an exact clone of your drive, then restore it back if something goes horribly wrong. I use CCC to make periodic backups of my Macs, as a sort of secondary backup to time-machine, which is easy to use but which I don't have total confidence in.
You can restore an entire system from a Time Machine snapshot, but it requires booting from the Recovery Partition or a Recovery disk. Basically, once you've rebooted in recovery mode, you can choose Restore From Time Machine Backup and then you'll be asked to locate the drive. Once you've done that, a list of Time Machine snapshots will be presented for restoring.
I haven't done this recently, but there are indications that the time of the backups may always be in PST, so be careful when looking at the times.
While OS X comes with Time Machine, it also has the well-known (in the Linux community!) command-line tool rsync.
With Google, I'm sure you can find many articles on how to use it, though here's an interesting blog post on why its author uses rsync alongside Time Machine.
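A minimal sketch of that kind of rsync backup (paths are examples; check man rsync on your Mac for the options that preserve extended attributes and resource forks):

    # Mirror the home folder to an external drive; -a preserves permissions and
    # timestamps, --delete makes it an exact mirror (so keep more than one copy)
    rsync -a --delete ~/ /Volumes/BackupDisk/home-mirror/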

Windows VSS service

I am a newbie and I am working on a driver that tracks creation/write/modification of files. Now I have been told to work on volume snapshots. I have seen the VSS code that comes with the Windows SDK.
But I have been asked to work with VSS at the kernel level, meaning I have to find out how I can use or communicate with the Windows Volume Snapshot Service from my driver. Can someone please give some input on this and help me, because I googled a lot for volume snapshots but did not get much help there. Should I implement a VSS writer at the kernel level, or something else, to use the Windows VSS service? Thanks in advance.
I think you should implement a VSS hardware provider.
Get the development documentation:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa381601(v=vs.85).aspx
Get the sample code:
You need to install the Microsoft SDK, for example version 7.1.
Assuming the SDK is installed under the default path, go to C:\Program Files\Microsoft SDKs\Windows\v7.1\Samples\winbase\vss\vsssampleprovider
There you can find the sample code.
Good luck!
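Once the sample provider (or your own) is built and registered, you can sanity-check it from an elevated command prompt; the provider name you see will depend on how the sample registers itself:

    REM List the VSS providers registered on this machine
    vssadmin list providers

    REM List the writers and their current state (handy when debugging snapshots)
    vssadmin list writers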
Should I develop a VSS writer or VSS provider? Neither. Incremental block-level backup of files would require a file-system minifilter driver approach which, for the incremental time range Tn to Tn+1, should track block-level writes happening on the live file. At time Tn+1, when the VSS snapshot is taken, this minifilter should additionally track writes happening on the file's view sitting on the snapshot block device.
The snapshot is not always read-only from birth. There is a brief time window in the VSS state machine during which the snapshot is actually writable, so that various writers can do their thing (writes, updates, rollbacks, etc.). You could, in principle, also delete files from the snapshot while executing the OnPostSnapshot callback (if you have a custom writer, that is). The exact point in time when you'd need to stop live-file tracking and start snapshot-view tracking can be managed based on the completion of the flush-and-hold-writes IOCTL.
So basically, at the end of the snapshot you'd have two change bitmaps: one that describes the writes on the live file and the other describing the writes on the snapshot view of the file. Merge these two bitmaps and then back up the changed blocks (based on the merged bitmap) off the snapshot block device. A more or less similar scheme can also be applied for taking incremental block-level backups of volumes.

How do I improve Windows Subversion client update performance?

How do I improve Subversion client update performance? It appears to be disk bound on the client.
Details:
CollabNet Windows client version 1.6.2 (r37639)
Windows XP SP2
3 GB RAM with PF Usage around 1 GB and System Cache of 1.1 GB.
Disk has write caching enabled
Update takes 7-15 minutes (when very little to update).
Checkout has 36,083 directories/files (from svn list)
Repository has 58,750 revisions.
Checkout takes about 2.7 GB
Perf monitor shows % Disk Write time stays near 90% during update.
Max Disk Read Bytes/sec got up to 12.8M and write got up to 5.2M
CPU, paging file usage, and network usage are all low.
Watching the server performance seems to show that it isn't a bottleneck.
I'm especially interested in answers besides getting a faster disk (especially configuration changes).
Updates from some of the suggestions:
I need the whole thing so sparse directories won't work.
Another client (TortoiseSVN) takes 7 minutes also
TortoiseSVN icon overlays have been configured so they don't cause the problem.
Anti-virus is configured to skip that directory, so it isn't causing the problem.
I experience exactly the same thing. We recently replaced Perforce with svn, but if we cannot overcome the performance problems on Windows we must consider another tool.
Using svn 1.6.6, Win XP and Vista clients. RedHat server.
My observations matches yours:
Huge disk-write activity.
Antivirus not a bottleneck.
No matter which svn clients are used.
No server or network bottleneck.
Complementary info
More than 3 times faster operations on:
Linux (Ubuntu).
Linux (Ubuntu) running on VirtualBox at Win Vista host.
Win XP running on VMWare at RedHat host.
Do you need every bit of the repository on your working copy? If you truly only care about particular portions of the tree, look into Subversion's Sparse Directories (a.k.a. "Sparse Checkouts") feature. It allows you to manipulate your working copy so it only contains those directories of interest.
Just as an example, you might use this to prune documentation, installer-related files, etc. Depending on what you truly need on your local machine, embracing this approach could make a serious dent in your wait times.
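For example (the repository URL and paths are placeholders), a sparse working copy that starts empty and then pulls in only what you need might look like:

    REM Check out only the top-level directory, with no contents
    svn checkout --depth empty http://svn.example.com/repo/trunk myproject
    cd myproject

    REM Then pull in just the directories you actually work on
    svn update --set-depth infinity src
    svn update --set-depth files docs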
Try svn client version 1.5.x. It helped me on my Vista laptop. Versions 1.6.x are extremely slow.
This is more likely to be your network and the amount of data moved as well as your client. Are you using Tortoise? I find it to be a bit slow myself when moving that much data!
Are you using TortoiseSVN? If so, the Icon Overlays do slow down operations. If you go to TortoiseSVN Settings/Icon Overlays there are several settings you can tweak to control the level to which you want to use the Overlays, including turning them off completely. See if that affects your performance.
Do you run a virus checker that uses on-access scanning? That can really make it crawl. If so, turn it off and see if that helps. Most scanners will have a way to exclude specific directories if that helps.
Nobody seems to be pointing out the one reason that I often consider a design flaw. Subversion creates a second "pristine" copy of the checkout for offline operations. If you're checking out 4G of files, it's actually writing 8G to disk.
Compare a checkout to an export. That will show you the massive difference when writing those second copies.
There's nothing you can do about that.
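A quick way to see the checkout-vs-export difference for yourself (the URL is a placeholder):

    REM A checkout writes the working files plus a pristine copy under .svn
    svn checkout http://svn.example.com/repo/trunk wc-with-pristines

    REM An export writes the working files only - roughly half the disk writes
    svn export http://svn.example.com/repo/trunk plain-files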
Upgrade to svn 1.7
From Discussion of Slow Performance of SVN Update:
The update process in svn 1.6 goes something like this:
search the entire working copy to see what's there at the moment, and lock it so no one changes the answer during the next steps
tell that to the server
receive from the server whatever new stuff you need, applying the changes to the files as you go
recurse over the entire working copy again, unlocking it
If there are many directories and files, steps 1 and 4 can take up a lot of time. This would be consistent with your observation of long delays with no network traffic.
The working copy format was changed in svn 1.7. Now all meta information is stored in an SQLite database in the root folder of the working copy, and there is no longer any need to perform steps 1 and 4, which consumed most of the time during svn update.
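After installing a 1.7 client, an existing working copy has to be converted once before it benefits (the path is an example); a fresh checkout works as well:

    REM Convert an existing working copy to the new single-.svn (SQLite) format
    cd C:\work\myproject
    svn upgrade

    REM Updates then no longer have to crawl a per-directory .svn area
    svn update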
