Most suitable language for cheque/check printing on Windows Platform

I need to create a simple module/executable that can print checks (fill out the details). The details need to be retrieved from an existing Oracle 9i DB on Windows (XP or later).
Obviously, I shall need to define the positions (in pixels or otherwise) where the details (name, amount, etc.) are to be filled in.
The major constraint is that the client needs / strongly prefers an executable, not code that is either interpreted or uses a VM. This is so that installation is extremely simple. This requirement really cannot be changed.
Now, the question is: how do I do it?
(.NET, Java and Python are out of the question, unless there is a way around the VMs.)
I have never worked with MFC or other native Windows APIs. I am also unfamiliar with GDI.
Do I have any other option? Is there any language that can abstract the complexities and can be packed into an x86 binary?
Also, if not, any code help with GDI would be appreciated.

The most obvious possibilities would probably be C, C++, and Delphi. There are a few others such as Ada (e.g., GNAT), but offhand I don't see a lot of reason to favor them (especially for a job this small).
At least the way I'd write this, the language would be almost irrelevant. I'd have it driven almost entirely by an external configuration file that gave the name of each field and the location where it should be printed. I'd probably use something like the MM_LOMETRIC mapping mode, so Windows will handle most of the translation to real-world coordinates (and use tenths of a millimeter in the configuration file, so you can use the coordinates without any translation).
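Since the question asked for GDI code help, here is a minimal, untested sketch of the drawing side along those lines. The printer name, field text and coordinates are placeholders (in the real program they would come from the configuration file), and note that in MM_LOMETRIC the y axis points up, so positions measured down from the top of the page are negated. Link with gdi32.lib.

#include <windows.h>

// Print one field at a position given in tenths of a millimetre.
void PrintField(HDC hdcPrinter, const wchar_t* text, int xTenthMM, int yTenthMM)
{
    // MM_LOMETRIC: 1 logical unit = 0.1 mm; positive y is up, hence the minus sign.
    SetMapMode(hdcPrinter, MM_LOMETRIC);
    TextOutW(hdcPrinter, xTenthMM, -yTenthMM, text, lstrlenW(text));
}

int main()
{
    // "Microsoft Print to PDF" is just a stand-in; real code would select the
    // cheque printer by name or through a print dialog.
    HDC hdc = CreateDCW(L"WINSPOOL", L"Microsoft Print to PDF", nullptr, nullptr);
    if (!hdc) return 1;

    DOCINFOW di = {};
    di.cbSize = sizeof(di);
    di.lpszDocName = L"Cheque";

    StartDocW(hdc, &di);
    StartPage(hdc);

    PrintField(hdc, L"One thousand two hundred dollars", 300, 450);  // 30 mm across, 45 mm down
    PrintField(hdc, L"1,200.00", 1500, 450);                         // 150 mm across, 45 mm down

    EndPage(hdc);
    EndDoc(hdc);
    DeleteDC(hdc);
    return 0;
}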
Probably the more difficult part of this will be the database connectivity. There are various libraries around to help out with that, so it won't be terribly difficult, but it's still not (quite) as trivial as the drawing part.
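For the database side, one common route is plain ODBC through the Oracle ODBC driver. A rough, untested sketch follows; the DSN, credentials, table and column names are all placeholders, and any other client library (OCI, OTL, etc.) would do just as well. Link with odbc32.lib.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main()
{
    SQLHENV env = SQL_NULL_HANDLE;
    SQLHDBC dbc = SQL_NULL_HANDLE;
    SQLHSTMT stmt = SQL_NULL_HANDLE;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // "ChequeDSN" would be a DSN configured against the Oracle ODBC driver.
    SQLCHAR connStr[] = "DSN=ChequeDSN;UID=scott;PWD=tiger;";
    if (!SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                                        NULL, 0, NULL, SQL_DRIVER_NOPROMPT)))
    {
        std::puts("could not connect");
        return 1;
    }

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt,
        (SQLCHAR*)"SELECT payee, amount FROM cheques WHERE printed = 0",
        SQL_NTS);

    SQLCHAR payee[128];
    SQLDOUBLE amount = 0;
    SQLLEN lenPayee = 0, lenAmount = 0;
    SQLBindCol(stmt, 1, SQL_C_CHAR, payee, sizeof(payee), &lenPayee);
    SQLBindCol(stmt, 2, SQL_C_DOUBLE, &amount, 0, &lenAmount);

    // Each fetched row would be handed to the printing routine.
    while (SQL_SUCCEEDED(SQLFetch(stmt)))
        std::printf("%s  %.2f\n", (char*)payee, amount);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}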

Related

Popup window in Turbo Pascal 7

In Turbo Pascal 7 for DOS you can use the Crt unit to define a window. If you define a second window on top of the first one, like a popup, I don’t see a way to get rid of the second one except for redrawing the first one on top again.
Is there a window close technique I’m overlooking?
I’m considering keeping an array of screens in memory to make it work, but the TP IDE does the kind of popups I want, so maybe it’s easy and I’m just looking in the wrong place?
I don't think there's a window-closing technique you're missing, if you mean one provided by the CRT unit.
The library Borland used for the TP7 IDE was called TurboVision (see https://en.wikipedia.org/wiki/Turbo_Vision), and it was eventually released to the public domain. Well before that, though, a number of third-party screen-handling/windowing libraries had become available, and these were much more powerful than what could be achieved with the CRT unit. Probably the best known was TurboPower Software's Object Professional (aka OPro).
AFAIK, these libraries (and, fairly obviously, TurboVision) were all based on an in-memory representation of a framed window which could be rapidly copied in and out of the PC's video memory, and, as in Windows with a capital W, they were treated as a stack with a z-order. So the process of closing/erasing the top-level window was one of getting the window(s) it had been covering to redraw themselves. OTOH, CRT had basically evolved from very primitive origins similar to, if not based on, the old DEC VT100 display protocol and wasn't really up to the job of supporting independent, stackable window objects.
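To illustrate that save-the-covered-area idea concretely, here is a small sketch that uses the Win32 console API rather than TP7/Crt (purely because it is easy to run today): copy the character cells the popup will cover into a buffer before drawing it, then copy them back to "close" it. In TP7 the equivalent is copying the corresponding rectangle of text-mode video memory (typically at segment $B800) in and out, which is essentially what TurboVision and the libraries mentioned below did.

#include <windows.h>
#include <vector>

int main()
{
    HANDLE con = GetStdHandle(STD_OUTPUT_HANDLE);

    // The rectangle the popup will cover (30 columns x 8 rows).
    SMALL_RECT area = { 10, 5, 39, 12 };
    COORD size = { 30, 8 };
    COORD origin = { 0, 0 };
    std::vector<CHAR_INFO> saved(size.X * size.Y);

    // Save whatever is currently under the popup.
    ReadConsoleOutput(con, saved.data(), size, origin, &area);

    // ... draw the popup here and let the user interact with it ...
    Sleep(2000);

    // "Close" the popup by restoring the saved cells.
    SMALL_RECT target = { 10, 5, 39, 12 };
    WriteConsoleOutput(con, saved.data(), size, origin, &target);
    return 0;
}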
Although you may still be able to track down the PD release of TurboVision, it never really caught on as a library for developers. In an ideal world, a better place to start would be with OPro. It was apparently on SourceForge for a while, but seems to have been taken down sometime since about 2007, and these days, even if you could get hold of a copy, there is a bit of a question mark over licensing. However ...
There was also a very popular freeware library available for TP by the name of the "Technojock's Toolkit", which had a large functionality overlap (including screen handling) with OPro, and it is still available on GitHub - see https://github.com/lallousx86/TurboPascal/tree/master/TotLib/TOTSRC11. Unlike OPro, I never used TechnoJocks myself, but devotees swore by it. Take a look.

How to create custom single byte character set for Windows?

Windows uses an encoding table for non-Unicode applications to map characters from the Unicode table to a one-byte table. There are many predefined character sets, and the user can choose one in Windows settings. I need to create a custom character set. Where can I find some information about that process? I tried to Google it, but didn't have any luck; I guess few people are doing that.
AFAIK, you can't do that. I don't think there's even a way to write some kernel-mode "driver" for it, but I haven't looked into these things for a while; maybe there is some way (now).
In any case, you might be better off using a library you can change/update, such as libiconv.
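To make the libiconv suggestion concrete, here is a minimal, untested sketch that converts text from a single-byte code page to UTF-8 inside the application. "CP1251" merely stands in for the custom encoding, which you would have to add to the conversion library yourself; the byte values are just illustrative.

#include <iconv.h>
#include <cstdio>
#include <cstring>

int main()
{
    // Convert from a single-byte code page (here CP1251) to UTF-8.
    iconv_t cd = iconv_open("UTF-8", "CP1251");
    if (cd == (iconv_t)-1) { std::perror("iconv_open"); return 1; }

    char in[] = "\xC0\xC1\xC2";            // CP1251 bytes for the Cyrillic letters А Б В
    char out[64] = {0};
    char* inp = in;
    char* outp = out;
    size_t inLeft = std::strlen(in);
    size_t outLeft = sizeof(out) - 1;

    if (iconv(cd, &inp, &inLeft, &outp, &outLeft) == (size_t)-1)
        std::perror("iconv");
    else
        std::printf("converted: %s\n", out);   // prints the UTF-8 text

    iconv_close(cd);
    return 0;
}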
UPDATE:
Since you don't have the source code, you're in a very unfortunate position.
For all string resources (in the EXE, any DLLs or, though unlikely, some other file(s)), you can read them out, figure out which code page they use, and change it (and the strings themselves), tweaking things so that the right glyphs appear (yes, you might actually see different glyphs in Notepad, but who cares as long as your application shows the right ones - FWIW, for such hacks it's best to use a hex editor). Then, of course, put the (changed) resources back into the EXE/DLL. But it's quite possible not all strings are in resources, and that's when the "real" problems start.
There are any number of hacks that could have been done here. Your best option is to use a good debugger (WinDbg or better) and figure out what's going on and how character sets are handled - since you don't have the source code, it's going to be quite painful. You want to find out:
Are the default charsets used (OEM/ANSI), or a specific one (via the NLS APIs)?
Whatever charset is used, is it a standard one or not? The charset here is the "code" Windows assigns to it; look at the Windows lists of available charsets.
Is the application installing fonts? If it is, use a font tool to examine them - maybe they support a specific (non-standard?) code page.
Is the application installing drivers? If it is, the only way to gain more insight is to use a kernel debugger (which is very tricky and annoying, but, as already said, you're in an unfortunate situation).
It appears that those tables are located at C:\Windows\system32\*.nls. I'm not sure whether there's proper documentation for their structure. There's some information in Russian here. Also, you might want to tinker with the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls.

How do I reverse-engineer the "import file" feature of an abandoned pascal application?

This is the first question I've asked, and I'm not sure how to ask it clearly, or whether there will be an answer that I want to hear ;)
tl;dr: "I want to import a file into my application at work but I don't know the input format. How can I discover it?"
Forgive any pending wordiness and/or redaction.
In my work I depend on an unsupported (and proprietary) application written in Pascal. I have no experience with Pascal (yet...) and naturally have no source code access. It is an excellent (and very secret, NDA sort of deal, I think) application that allows us to deal with inventory and financial issues in my employer's organization. It is quite feature-comprehensive, reasonably stable and robust, and kind of foisted on us by a higher power.
One excellent feature that it has is the ability to load up "schedules" into our corporate system. This feature should be saving us hundreds of hours in data entry.
But it isn't.
The problem is, the schedules we receive are written in a legacy format intended for human eyes. The "new" system can't interpret them.
Our current information (which I have to read and then re-enter into the database by hand) is sent in a sort of rich-text flat-file format, which would be easy to parse with the string library of probably any mainstream language.
So I want to write a converter to convert our data into a format that the new software can interpret.
By feeding certain assorted files into the system, I have learned a little bit about what kind of file it expects:
I "import" a zero-byte file. Nothing happens (same as printing a report with no data)
I "import" an XML file that I guess might look like the system expects. It responds with an exception dialog and a stacktrace. Apparently the string <?xml contains illegal characters or something
I "import" a jpeg image -- similar result to #2.
So I think that my target wants a flat-file itself. The file would need to contain a "document number" along with {entries with "incident IDs" and descriptions and numeric values}.
But I don't know this for certain.
Nobody is able to tell me exactly what these files should look like. Someone in the know said that they have seen the feature demonstrated -- somewhere out there is a utility that creates my importable schedules. But for now, the utility is lost and I am on my own.
What methods can I use to figure out the input file format? I know nothing about debugging pascal, but I assume that that is probably my best bet. Or do I have to keep on with brute force until I can afford a million monkey-operated typewriters? Do I have to decompile the target application? I don't know if I can get away with that, let alone read the decompiled source.
My google-fu has failed me.
Has anyone done something like this before or could they point me in the right direction? Are there any guides on this subject?
Thanks in advance.
PS: I am certain that I am not breaking any laws at this point, although I will have to check to find out if decompilation would get me into trouble or not, and that might be outside of my technical competence anyway.
If you have an example file, you can take a hexdump utility and try to see if there are things you can identify. Any additional info you have (what should be in the file) helps with that. Better yet, if you know a program that can edit the file, you can use the editor to make minimal changes and then compare the file before and after.
IOW, the standard tricks of binary file-format reverse engineering.
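A rough sketch of that change-one-thing-and-compare approach: dump the byte-level differences between two saved versions of a file as offset plus hex values. The file names here are placeholders.

#include <cstdio>

int main()
{
    std::FILE* a = std::fopen("schedule_before.dat", "rb");
    std::FILE* b = std::fopen("schedule_after.dat", "rb");
    if (!a || !b) { std::puts("could not open input files"); return 1; }

    long offset = 0;
    for (;;)
    {
        int ca = std::fgetc(a);
        int cb = std::fgetc(b);
        if (ca == EOF && cb == EOF)
            break;                                    // both files exhausted
        if (ca == EOF || cb == EOF)
        {
            std::printf("files differ in length from offset 0x%06lX\n", offset);
            break;
        }
        if (ca != cb)
            std::printf("offset 0x%06lX: %02X -> %02X\n", offset, ca, cb);
        ++offset;
    }
    std::fclose(a);
    std::fclose(b);
    return 0;
}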
...If you have no existing files whatsoever, then reverse engineering the binary is your only option, and that is not pretty. Decompilation of native binaries is a black art that requires considerable time and skill. Read the various decompilation FAQs on the net.
First and foremost, I would try to contact the authors of the program. Source code is options 1, 2 and 3, and you only go with the other options if there is really, really, really no hope whatsoever of obtaining source or getting normal support.

HTML entities in file names: possible pitfalls?

When I thought about resizing images and saving the resized copies alongside the originals on the server, I came to the following question:
// Original size
DSC_18342.jpg
// New size: Use an "x" for "times"
DSC_18342_640x480px.jpg
// New size: Use the real "×" for "times"
DSC_18342_640×480px.jpg
The point is that it's slightly easier to read with a real × instead of an x in the file name, as the unit px already contains an x, which makes the plain-x version a little harder to read.
Question: What problems could I run into when using the HTML entity (i.e. the real × character) in the file name?
Side notes: I'm writing an open-source, publicly available script, so the target server can be anything - therefore I'm also interested in (and will vote up) edge cases that I'm not aware of.
Thank you all!
You may have noticed that I'm aware I could simply avoid it (which I'll do anyway), but I'm interested in this issue and in learning about it, so please just take the above example as a possible case.
There are file systems that simply don't support Unicode. This may be less of a problem if you make Unicode support a requirement of your application.
Some considerations about different Unicode file systems are given in File Systems, Unicode, and Normalization.
A concluding remark (from the viewpoint of Solaris file systems) is:
Complete compatibility and seamless interoperability with all other existing Unicode file systems appears not 100% possible due to inherent differences.
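For concreteness, the two candidate names differ at the byte level: a plain x is one ASCII byte, whereas × (U+00D7) becomes the two bytes C3 97 in UTF-8 and something different again in each legacy single-byte code page, which is where the portability and normalization trouble comes from. A tiny illustration (names taken from the question):

#include <cstdio>
#include <string>

int main()
{
    std::string plain = "DSC_18342_640x480px.jpg";
    std::string times = "DSC_18342_640\xC3\x97" "480px.jpg";   // UTF-8 encoded ×

    // Print the raw bytes of the ×-variant; the 0xC3 0x97 pair is the ×.
    for (unsigned char c : times)
        std::printf("%02X ", c);
    std::printf("\n(%zu bytes vs %zu bytes)\n", times.size(), plain.size());
    return 0;
}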
I can imagine that there will be problems, especially when migrating the application. Just storing files is probably not a problem, but if their names are stored in a database there might be a mismatch after migration.

How important is portability?

I was just writing a procedure that looks for a newline, and I was contemplating using Environment.NewLine vs. '\n'.
Syntactically: Is Environment.NewLine clearer than '\n'?
And how important is portability really?
Depends on how likely you are to run your program on another platform, doesn't it?
Any built-in API that abstracts platform-specific semantics/syntax is generally better to use: it gives you portability with little complexity overhead and easy gains.
Writing portable C, on the other hand, can be more complex and require a stronger business case for the effort. When dealing with languages like C#, Python, Java and others, use the abstractions they provide for these cross-platform annoyances; in many cases that is exactly what such differences are reduced to.
It is not really important if the program is written for a specific known target audience/platform and you are certain its scope will not extend beyond that. But that's where the problem lies: often you cannot be certain about these things. You cannot look into the future.
Often writing portable code is not harder than writing a non-portable alternative. So, always strive to write portable code.
I would go with Environment.NewLine, because its definition adapts to the environment the code runs in. If we go with '\n', each compiler/language will have its own understanding and interpretation.
So it would be preferable to go with Environment.NewLine.
Having portable code is a kind of a business opportunity. Say you only sell software for Windows now. Then the government of your country decides that it doesn't want to pay licensing fees to Microsoft and migrates all government institutions to Linux. If you can't quickly port your software you are no longer able to sell it to the government and that's big money.
Environment.NewLine works well; I've used it a lot in the past. However, if the app is a web app and you insert an Environment.NewLine into the rendered HTML, it will have no effect in the browser window - it will, however, affect your source layout.
If I remember rightly, Environment.NewLine will also add a carriage return if the system expects it, where \n won't.
I forgot to answer the portability aspect. I would always make my code more portable: as someone working in a consultancy, I don't want to have to redevelop code, so by using Environment.NewLine (for example) I reduce the amount of work I would have to do should the code need to be reused in future.
Portability aside, wouldn't one always go with Environment.NewLine (or whatever the equivalent is on your platform), as it's simply more human-readable?
Two years down the line, when a random maintenance programmer comes along who doesn't understand the nuances of \n, Environment.NewLine is also more bulletproof.
As they have different meanings, you should use the one that is correct for the data you are handling.
Environment.NewLine means the newline combination for the current system.
A char/string literal like '\n' or "\r\n" means a specific newline combination regardless of the current system.
If the data is for example a text file that is produced by a regular text editor in the system, you would use Environment.NewLine to match the newlines. If the data is some data format where the newlines are defined as a specific character combination regardless of what system they are used on, you would use that specific literal.
For things like this: \n, the newline character, is a fixed character in the ASCII set, so it's portable to almost anything. It's up to you to decide how important it is for your code to be portable across platforms...
To make this decision, figure out what the chances are of your code ever being ported to another platform. Then think about what the investment would be to make it portable now vs. porting it later, when the time comes. Choose the one that's cheapest, or most convenient...
There are two aspects to your question: how important is portability, and how do I represent a newline in a portable way.
The need for portability is, as others said before me, a business requirement: your own private command-line tool needn't be portable, while a commercial library probably should be. Based on this need you can choose the platform you're working on.
The newline character has to be recognized by your parser. If you're working in Python, C++, ... the parser will always recognize the '\n' sequence. If you are writing regular expressions, '$' will be recognized as end-of-line.
If the audience of your code is acquainted with '\n', I would use that, since it jumps out as a character. If you want to emphasize the meaning of "end-of-line", go with the symbolic name.
Unless you're working on some tiny project, portability is probably very important. Even if you do program for Windows only, you will probably want your program to run on future versions of Windows. Quite a lot of things break in new versions of Windows; the most common thing I see is copy protection that depends on obscure, undocumented internal Windows runtime structures. Similarly, on a Unix-like OS you'd want the program to work on the latest kernel, which is why you should avoid raw system calls and whatnot. The thing is, if your program is very non-portable across OSes or architectures, it's likely not even future-proof. Heh, this reminds me of the Windows registry/filesystem organization.
