What to look for in estimating a PowerBuilder Conversion Project?

I've been trying to do a spec for a PowerBuilder 9 to 11.5
migration of a relatively complex application. Granted, PowerBuilder
is not really my specialty, so I'm having trouble justifying an
estimate for this part of the project (and the PowerBuilder people
I've been talking with have had some personal issues lately and are out of
communication). These are some of the metrics that we have seen and can evaluate:
- PBL files
- Main windows
- DataWindows
- Functions
(No, we don't have the source available on this project.)
What metrics in particular are helpful, and how long would any given "unit", such as a DataWindow, take?

Most PowerBuilder migrations are rather smooth. The biggest things that might trip you up moving from 9.0 to 11.5 are (a) the change in the Rich Text Edit control (if used) and (b) Unicode versus ANSI. The latter will primarily be an issue if you have external function calls that pass strings, and those usually only require adding an ";ANSI" suffix or migrating to the Unicode version of the call.
So, look to see if the Rich Text Edit control is used, and look at how many external function calls are declared. If you have neither, it should be as simple as opening the project in 11.5 (after making a backup, of course) and letting 11.5 do the migration.

It's been a while, and I don't remember the specifics, but our upgrade from 9 to 11.5 went very smoothly.

Related

Best Alternative for Converting VB6 to run under Windows 8

I originally (many years ago) wrote my applications using VB6 on the assumption that it was a strategic Microsoft product and so I would be able to run them as long as I wanted. However, with Windows 8 it looks like VB6 will no longer be able to run.
What do people think is a good strategic alternative to VB6 as a simple development environment for applications not needing Internet access (they just run on the PC)? I really do not want to have to convert the applications again in the future.
Go for .NET; it runs on a "virtual machine" and it seems it will be around for a long time to come. The development cycle is similar to VB6.
You can look at PowerBasic Compiler for Windows + PowerBasic Forms 2.0...
Delphi is an option and depending on how complex your programs are there are some conversion tools available.
Alternatives abound, and choosing a good one is hard enough. Forget finding a "best" one.
Some people like Jabaco, which has the advantage of targeting multiple platforms.
VB6 SP6 runs perfectly on the full version of Windows 8. Some KB updates will stop the System Info and RTF tools from loading, so be careful if your app uses them; be careful when applying KB updates and reimage your system regularly.
WARNING: IF YOU HAVE NOT UPGRADED FROM 8.0 TO 8.1 YET, PLEASE DON'T, BECAUSE ABOUT 90% OF THE TOOL DOESN'T WORK THERE. IT MAY BE SUPPORTED LATER, SO WAIT.

Can Delphi 5 generate a .PDB file that VS can use?

We've got this large application written in Delphi 5, and development is ongoing to this day. There is research going on into migrating to newer versions, but so far there is no success, as some 3rd party components have not been updated in ages and do not work on later versions.
In the meantime, however, people need to continue working on it. Now, the Delphi 5 IDE is no real treat. It's pretty bug-ridden and lacks a lot of features of contemporary IDEs, which makes it difficult to use, especially when it comes to debugging.
So I was wondering: would it be possible to use Visual Studio in the process? As far as I know, the .PDB file format is pretty old and is well documented. Would it be possible to make the Delphi compiler somehow generate .PDB files for its compiled results? Then the program could be debugged with Visual Studio, possibly to a much greater extent than in the original IDE.
Well, the absolute Holy Grail would be to move all development to VS, just keeping the compiler from Delphi, but I imagine that would be pretty impossible.
No, and neither can any other version of Delphi. You can use map2dbg to turn a detailed map file into a .dbg file, though, and you can use that in WinDbg.
I'm curious what debugging features you're expecting to use in Visual Studio that aren't in Delphi 5 and that also don't rely on the IDE understanding the Delphi language. I was always rather pleased with Delphi 5.
BTW, you can vote for this feature here.
Note that VS-compatible debug info will be useful not only for debugging the application (I agree: it's better to use Delphi for that), but also for tools like Process Explorer. For example, Process Explorer may be able to show a human-readable call stack instead of raw numbers.
I've tried tds2pdb and it works great for me.
Apparently you can't. It seems that PDB is, after all, a proprietary Microsoft format without documentation, and as such there are no other tools generating it. Pity. :(
I would recommend moving to a later version of Delphi. We have done this with various applications for clients. Moving to a newer version of Delphi is normally straightforward, but there were issues moving from D5 to D6 due to changes in the way components were handled (design-time code being separated from run-time code), and the change to Unicode in D2009 was a bigger one.
The main thing is to sort out the third party components. We only ever use third party components that come with source so if the worst happens and the vendor disappears, we can still work on the components ourselves.
Which components are causing the issues?

Have you ever been the victim of a bug in a programming language or technology?

Bugs can be difficult enough to resolve when they're your (or a coworker's) fault. However, we all know that the technology we use to implement our programs is written by infallible people such as ourselves. So it stands to reason that some people have been affected by bugs in the implementation of the tools they used.
So, have you found a bug in your program that was caused by a widespread underlying technology, such as a programming language or framework? If so, did it fail with some indication, or did it silently overwrite some data? How difficult was it to debug? Did it cause a potential security vulnerability? Were you able to contact the provider and confirm that it was fixed (or fix it yourself)?
Here are some of the worst (in my opinion) technologies to have a bug in (especially one that fails silently):
Programming language
Concurrency framework
Remote API
Database
I deal with one on a daily basis called Internet Explorer.
To be fair though, there are lots of bugs in all of the browsers. I have filed several bugs for Firefox as well, and just the other day I found a strange case where the border doesn't take padding into consideration.
This is a good argument for writing lots of unit tests. If you upgrade your platform to a newer version that has some new bug, hopefully you have a test that reveals it.
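For example, here's a minimal sketch (in Java with JUnit 4; the scenario and class name are hypothetical) of a characterization test that pins down platform behaviour your code relies on, so a platform upgrade that breaks it fails in the test suite rather than in production:
import static org.junit.Assert.assertArrayEquals;

import java.util.Arrays;
import org.junit.Test;

public class PlatformCharacterizationTest {

    // We rely on Arrays.sort being stable for object arrays (a documented
    // guarantee). If an upgraded runtime ever broke that, this test would
    // reveal the bug before any production code did.
    @Test
    public void objectSortIsStable() {
        String[] words = {"bb", "aa", "ab", "ba"};
        Arrays.sort(words, (a, b) -> Character.compare(a.charAt(0), b.charAt(0)));
        // Equal keys ("aa"/"ab" and "bb"/"ba") must keep their original relative order.
        assertArrayEquals(new String[]{"aa", "ab", "bb", "ba"}, words);
    }
}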
In one case I had, the vendor was working on a brand new API. They were not ready to release the new API, but they were not very keen to fix the bug in the old one either, as they considered it dead from a $$ perspective.
A colleague once stumbled across a bug in the Jikes Java compiler. He had something like this:
if (condition)
{
}
else
{
    System.out.println("Code that does stuff.");
}
He hadn't intended to leave the top block empty permanently, but just had it that way during development. He discovered that the condition was ignored unless he put a comment in that block so that it was no longer empty.
During my time developing (mostly) with Java I've run into bugs in the following components:
Java compiler
This actually happened several times. Usually we found that ecj (the Eclipse compiler) and javac (the Sun compiler) disagreed about the validity of some Java code. I would enter bug reports for both systems; one of them would get accepted and the other closed as invalid.
Database engine
Those are very rare and very, very nasty, because no one expects the DB itself to have a bug. In our case it was an old product (the bug was already fixed in a newer release) that accepted values in a field that were not within the defined range (similar to having a NOT NULL field containing NULL).
JDBC driver
There were several bug fixes to a JDBC driver due to a project I've been working on. The bug fixes ranged from trivial ("why is there debug output in the production release?") to might-not-even-be-a-real-bug ("you can easily save one roundtrip per statement by doing that-and-that").
JVM implementations
Those are hard to diagnose and often present themselves as effectively random crashes on one JVM and stable operation on another. We temporarily switched JVM vendors several times because of things like this.
Each time it took quite some time (and usually the involvement of the vendor of the component) before I actually believed it was a bug in that component.
And yes: the cases of false positives (i.e. the bug was actually in my/our code) were orders of magnitude more common.
The only place where bugs in a third-party component are kind of expected seems to be web browsers. Almost no one questions you when you say "that's a bug in <insert buggy browser of the week>, we need to work around it like this ...".
I guess almost anyone who has programmed JavaScript with Internet Explorer has found a bug in their program which was caused by a widespread underlying technology.
The indication of failure is the blue "e" on your Windows desktop.
The first that comes to mind was with version 1 of the .NET Framework; for some reason, the Random.NextDouble() method never produced a value greater than 0.5. I was completely baffled, and having run a test app that called the method thousands and thousands of times, I had to presume it was a bug and work around it.
Never did find out what the cause was...
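The kind of check described above is easy to sketch; here is a minimal, purely illustrative version in Java against java.util.Random (the original was .NET 1.0's Random.NextDouble()):
import java.util.Random;

public class NextDoubleCheck {
    public static void main(String[] args) {
        Random random = new Random();
        double max = 0.0;
        // Call the method many times and record the largest value seen.
        for (int i = 0; i < 1_000_000; i++) {
            max = Math.max(max, random.nextDouble());
        }
        // A healthy uniform generator over [0, 1) should come very close to 1.0;
        // topping out near 0.5 would point at the kind of bug described above.
        System.out.println("Largest value seen: " + max);
    }
}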
I've run into something with gcc 4.4.0, but as the product I'm currently working on is still pre-alpha, it was fairly easy to repair locally. Hopefully they'll fix it soon.
I found a very strange bug in gcc on MIPSel (OpenWrt). We were testing a small app (about 3K SLOC) that gave me a SIGSEGV even though the code was correct in theory.
I don't know the details of the bug (and I don't have that code anymore), but changing the gcc version from 4.1 to 3.6 solved the problem.

What's the diff between VSS 6.0 and VSS 2005?

We've been using VSS 6.0 since time began, but yesterday I nabbed VSS 2005 off of our MSDN subscription. It wouldn't let me install it off the ISO through Daemon Tools (not sure why, but I submitted an error report to MS...). I noticed it had a Program Files directory right on the ISO, so I just copied the folder onto my hard drive. Well, I opened up the client and behold: a glamorous version of VSS 6.0 connected to the exact same DB.
Anyone know if I'm going to destroy everything by using it?
We moved from VSS 6 to VSS 2005 just over a year ago. The database structure is identical. The only caveat we found was when some people still used VSS 6 on a database where others were using VSS 2005: VSS 2005 treats Unicode text files as text files, whereas VSS 6 does not, which means that when VSS 2005 adds a Unicode text file, VSS 6 sees it as binary (this affects csproj files among others).
Other than that, VSS2005 supports proper HTTP access to the database (provided server extensions are installed), improved LAN performance (again, with server extensions), and better file system dialogs (the nasty old ones are gone). However, the new file add dialog shows ALL files, not just the ones that aren't included.
Also, VSS2005 allows the provision of custom editors and differencing tools by file extension, which is very useful. For example, some of our XML files are encrypted, so we run a decryption tool before the difference tool by using this system, which has increased the efficiency of our review processes substantially.
There are also other tweaks here and there, mostly good but occasionally annoying.
Finally, nothing has been destroyed. In fact, there appears to have been less additional corruption in the database since the transition - but I wouldn't put this down to the new VSS as it wasn't a comprehensive test.
I'm pretty sure that there is no more danger of destroying anything than when using VSS 6.0.
It's been quite a long time since I last used VSS, but we also updated from version 6 to version 2005. As far as I remember, there were only some cosmetic changes in the client (VSS Explorer), but the format of the database and also the available features were exactly the same as in VSS 6.
You should be fine.
Since VSS just uses a file share for everything, and there's nothing that is really server based, you're fine. Not much has changed in the format of the database, mostly client side stuff.

What successful conversion/rewrite of software have you done? [closed]

What successful conversion/rewrite have you done of software you were involved with? What were the languages and frameworks involved in the process? How large was the software in question? Finally, what are the top one or two things you learned from being involved with the process?
This is related to this question
I'm going for "most abstruse" here:
Ported an 8080 simulator written in FORTRAN 77 from a DECSystem-10 running TOPS-10 to an IBM 4381 mainframe running VM/CMS.
I rewrote 20,000 lines of Perl to use "use strict" in every file. I had to add "my" everywhere it was needed and I had to fix the bugs that were uncovered during the process.
The biggest thing I learned from doing this is, "It always takes longer than you think."
I had to get it done all at once overnight so that the other coders would not be writing new, unfixed code at the same time. I thought it would go quickly, but it didn't, and I was still hacking on it at 6 AM the next morning.
I did get it complete and checked in before everyone else started work though!
I rewrote a large java web application to an ASP.Net application for a realty company for various reasons.
The biggest thing I learned is that, no matter how trivial the feature the original system had, if it's not in the second system, the client thinks the rewrite is a failure. Expectation management is everything when writing the new system.
This is the biggest reason rewrites are so hard: it seems so easy to the client ("Just re-do what I already have and add a few things.").
The coolest one for me, I think, was the port of MAME to the iPod. It was a great learning experience with embedded hardware, and I got to work with a lot of great people. Official site.
I am doing a rewrite of an in-house project management system to a more standard MVC model. It's on the LAMP stack (PHP) and I am close to the first milestone.
The main thing I have learned so far is how simple the program feels at the beginning, and I try not to add complexity until I have to.
For example, I programmed all the functionality first (as if I were an admin user) and then, once that was sorted out, added the complexity of having restrictions (user levels, etc.).
I ported/redesigned/rewrote a 30,000-line MS-DOS C++ program into a similar-length but much more fully-featured and usable Java Swing program.
I learned never to take another job involving C++ or Java.
I ported a client-server PowerBuilder app, a couple of hundred screens' worth, into an ASP.NET app (C#).
Due to performance and maintainability issues, I had over the previous year moved a ton of embedded SQL out of PowerBuilder scripts and into stored procedures.
Although having a lot of business logic in the database would make a lot of you wince, it meant the PowerBuilder app was relatively "light", and when we built the .NET front end, it could take advantage of the SQL codebase and have a lot of functionality already built and tested.
Not saying I'd recommend building apps that way, but it certainly worked to our advantage in this instance.
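As a rough illustration of what a "light" front end over a stored-procedure codebase looks like, here is a minimal sketch in JDBC/Java (the real front end was ASP.NET/C#, and the connection, procedure name and parameters here are hypothetical):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class InvoiceTotals {
    // All business logic lives in the stored procedure; the front end just calls it.
    public static double totalForCustomer(Connection conn, int customerId) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{? = call usp_customer_total(?)}")) {
            cs.registerOutParameter(1, Types.DOUBLE);
            cs.setInt(2, customerId);
            cs.execute();
            return cs.getDouble(1);
        }
    }
}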
We had a code generation tool in our application framework that was used to read in text-based data files. About 20 other applications made use of it.
We wanted to use XML data files instead of structured text-based files. The original code was quite outdated and difficult to maintain. We replaced this tool with a combination of XSLT scripts and a utility library. For the utility library we could reuse some code from the old tool.
The result was that all 20 applications could now use either the obsolete text-based file format or the new XML-based format. We also delivered a conversion tool that converted old data files to new XML data files.
After bringing out one or two releases, we have now decided that we will no longer support the old text-based format, and everybody has been able to convert their data to XML.
We hardly had to do any manual conversions.
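For what it's worth, the plumbing for such an XSLT-driven conversion is small; here is a minimal sketch in Java (the file names and stylesheet are hypothetical, and the real tool wrapped this in a utility library):
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class LegacyToXmlConverter {
    public static void main(String[] args) throws TransformerException {
        // Hypothetical file names: an XSLT stylesheet describing the mapping,
        // an input data file, and the converted output.
        File stylesheet = new File("legacy-to-new.xsl");
        File input = new File("old-data.xml");
        File output = new File("new-data.xml");

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(stylesheet));
        transformer.transform(new StreamSource(input), new StreamResult(output));
    }
}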
Converted the main company app from pre-standard C++ to standard C++. We had a multimillion dollar sale contingent on making it work on AIX, and after looking at it we decided that converting to standard C++ was going to be just as easy as converting to IBM's traditional C++.
I don't know the line count, but the source code ran to hundreds of megabytes.
We used standard Unix tools to do this, including vi and the assorted compilers.
It took a few months. Most of the fixes were simple ones, caught by the compiler and almost mechanically fixed. Some of them were much more complicated.
I think my main takeaway was: Don't get too awfully clever with code in a language that hasn't been standardized yet, or is likely to have things change in unexpected ways. We had to do a lot of digging in some of the ingenious adaptations/abuses of C++ streams.
Ten years ago I managed a team that converted a CAD system from DOS to Windows. The DOS version used home-brew libraries for graphics drawing; the Windows version used MFC. The software was about 70,000 lines of C code at the time of the conversion. The most important thing we learned in the process is the power of abstraction. All device-specific, non-portable routines were isolated in a few files. It was therefore relatively easy to replace the calls to the DOS-based library (which drew by directly accessing the frame buffer) with Windows API calls. Similarly, for input we just substituted the event loop that checked for keyboard and mouse events with the corresponding Windows event loop. We continued our policy of isolating the non-portable (this time Windows) code from the rest of the system, but we have not yet found this particularly useful. Perhaps one day we will port the system to Mac OS X and be thankful again.
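The isolation pattern described above looks roughly like this; a minimal sketch in Java with hypothetical names (the original was C, with the device-specific code confined to a few files):
// Only the driver implementations know anything about the platform.
interface DisplayDriver {
    void drawLine(int x1, int y1, int x2, int y2);
}

// One implementation per platform; porting means swapping this class.
class FrameBufferDriver implements DisplayDriver {
    public void drawLine(int x1, int y1, int x2, int y2) {
        // ... write pixels directly into the frame buffer (the DOS-era approach) ...
    }
}

class WindowsGdiDriver implements DisplayDriver {
    public void drawLine(int x1, int y1, int x2, int y2) {
        // ... delegate to the windowing API ...
    }
}

// The rest of the CAD code only ever sees the interface.
class Renderer {
    private final DisplayDriver driver;

    Renderer(DisplayDriver driver) { this.driver = driver; }

    void drawBox(int x, int y, int w, int h) {
        driver.drawLine(x, y, x + w, y);
        driver.drawLine(x + w, y, x + w, y + h);
        driver.drawLine(x + w, y + h, x, y + h);
        driver.drawLine(x, y + h, x, y);
    }
}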
Several, but I'll mention one.
It was a performance modeling tool, part Delphi 1, part Turbo Pascal. It needed a rewrite or it was not going to survive. We started as a team of 2, but only I survived to the end. And I was ready before the deadline ;-).
Several things we did:
- Made it multi-model. The original had lots of globals; I removed them all, and multi-model support became easy to add (see the sketch after this list).
- Extended error messages. Click on a message and get the relevant help.
- Lots of graphs and diagrams, all clickable to drill down.
- Simulation. Change parameters over time and see how long the current configuration would remain sufficient.
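A rough sketch of the globals-to-objects change that made multi-model support easy, in Java with hypothetical names (the original was Delphi/Turbo Pascal):
import java.util.ArrayList;
import java.util.List;

// Before the rewrite, values like these lived in globals, so only one
// model could exist at a time.
class Model {
    final String name;
    final double arrivalRate;
    final double serviceTime;

    Model(String name, double arrivalRate, double serviceTime) {
        this.name = name;
        this.arrivalRate = arrivalRate;
        this.serviceTime = serviceTime;
    }

    // Utilization of a single-server queue, purely as an example computation.
    double utilization() {
        return arrivalRate * serviceTime;
    }
}

public class Workspace {
    public static void main(String[] args) {
        // With the state held in objects instead of globals, keeping several
        // models side by side is trivial.
        List<Model> models = new ArrayList<>();
        models.add(new Model("current", 10.0, 0.05));
        models.add(new Model("proposed", 10.0, 0.03));
        for (Model m : models) {
            System.out.printf("%s: utilization %.0f%%%n", m.name, m.utilization() * 100);
        }
    }
}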
We really made this one clean and it paid back heavily in the end. Such a big learning experience.
Re-wrote a system for a company that processes legal invoices - the original system was a VB monstrosity that had no idea of good OO principles - everything was mixed together. The HTML did SQL, and the SQL wrote HTML. A large part of it was a custom rules engine that used something like XML for the rules.
Two teams did the re-write, which took about 9 months. One team did the web front end and the backend workflow, while the other team (that I was on) re-wrote the rules engine. The new system was written in C#, and was done test-first. Adding new rules to the system when we were done was dirt simple, and it was all testable. Along the way we did things like convert the company from VSS to SVN, implement continuous integration, automate the deployment, and teach the other developers how to do TDD and other Scrum/XP practices.
Managing expectations was crucial through the project. Having a customer that was savvy about software was very helpful.
Having a mix of large scale (end-to-end) tests along with comprehensive unit and integration tests helped tons.
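A minimal sketch (in Java with hypothetical names; the real engine was C#) of the kind of rule interface that makes adding a rule simple and testing it in isolation easy:
import java.math.BigDecimal;
import java.util.List;

// Each rule inspects an invoice and reports zero or more violations.
interface InvoiceRule {
    List<String> check(Invoice invoice);
}

class Invoice {
    final BigDecimal total;

    Invoice(BigDecimal total) {
        this.total = total;
    }
}

// A new rule is just a new class; each one can be unit tested on its own.
class MaximumTotalRule implements InvoiceRule {
    private final BigDecimal limit;

    MaximumTotalRule(BigDecimal limit) { this.limit = limit; }

    public List<String> check(Invoice invoice) {
        if (invoice.total.compareTo(limit) > 0) {
            return List.of("Invoice total exceeds the approved limit of " + limit);
        }
        return List.of();
    }
}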
Converted vBulletin, which is written in PHP, into C#/ASP.NET. I'm pretty familiar with both languages, but PHP is hands down the winner for building that software. The biggest pain in the rear was needing to do a C# equivalent of PHP's eval() for calling the templates.
It was my first challenge in trying to do a conversion. I learned that I need more experience with C#, and that writing it from scratch is sometimes just the easier route.
I converted a dynamic build process written completely in Perl to a C#/.NET solution using a workflow engine a co-worker had developed (which was still in beta, so I had to do some refinements). That gave me the opportunity to add fail-safe and fail-over functionality to the build process.
Before you ask: no, Microsoft's Workflow Foundation could not be used, since you cannot dynamically change a process during its runtime.
What I learned:
- to hate the Perl developer
- process optimization using a workflow engine
- fail-safe and fail-over strategies
- some C# tweaks ;)
In the end it covered about 5k-6k LoC (including the workflow engine), originating from 3,200 LoC of Perl files. But it was fun, and far better in the end ;)
Converting theoretically portable C code into theoretically portable C code across architectures to support a hardware change that saves the company X dollars per unit.
The size varies - this is a common need, and I've done small and large projects.
I learned to write more portable C code. Elegance is great, but when it comes right down to it the compiler takes care of performance, and the code should be as simple and portable as possible.
Ported a simulation written in Fortran 77 (despite being written in the 90s) to C/Java because the original only worked on small data sets. I learned to love big-O notation after explaining several times why just moving the entire data table into memory at the start of the program was not going to scale.
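The point about scaling comes down to something like the following sketch in Java (the line-per-value file format is hypothetical): the first method holds the whole table in memory, the second streams it a row at a time and keeps memory use flat as the data set grows:
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class TableScan {
    // O(n) memory: fine on small data sets, falls over as the table grows.
    static double sumAllInMemory(Path table) throws IOException {
        List<String> rows = Files.readAllLines(table);
        double sum = 0;
        for (String row : rows) {
            sum += Double.parseDouble(row);
        }
        return sum;
    }

    // O(1) memory: one row at a time, scales with the size of the data set.
    static double sumStreaming(Path table) throws IOException {
        double sum = 0;
        try (BufferedReader reader = Files.newBufferedReader(table)) {
            String row;
            while ((row = reader.readLine()) != null) {
                sum += Double.parseDouble(row);
            }
        }
        return sum;
    }
}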
Migrating the B-2 Stealth Bomber mission software from JOVIAL to C. 100% fully automated conversion. Seriously!
Main lesson: using configurable automated conversion tools is a huge win.
See DMS Software Reengineering Toolkit.

Resources