Delphi 7 Line Indentation option?

I have received 10 PAS files. All the lines in these 10 files start in column 1 - no indentation at all.
Biggest file is 2548 lines.
Now I wonder: is there an option in my Delphi 7 Enterprise (running on XP SP3) to automate that indentation and increase readability (at least for me)?
If not, I plan to scan through the files and, wherever BEGIN, THEN, ELSE, CASE, END, etc. appear (where indentation is natural), indent manually as necessary. (But it would be smarter to let Delphi do it.)

You can use CnPack:
http://www.cnpack.org/showlist.php?id=39&lang=en
It has (among a lot of other features) a Code Formatting Wizard and some other nifty features to enhance the IDE.

Related

Is dual mode executable possible?

A bit of history... I have 3 systems that I spend time on: a DOS 6.22 system, a Windows 95 system, and a modern Windows 7 (64-bit) system. When I upgraded to Win7-64, some of my favorite command line utilities stopped working, so I decided to re-write them myself. The only 2 compilers I have are Borland Turbo C++ 3.0 and Visual Studio 2008, and they worked fine for building 2 versions: a DOS 16-bit one, and a Windows 7 32-bit one (could have built 64-bit too, I guess). The problem came with my Win95 system. The DOS version works fine there, but since I spent the time to support LFNs in the Win7 build, I wanted that with my Win95 system. So, after a lot of research, I found and purchased Visual Studio 6 (the last one with Win95 support, according to what I researched), copied the code over (had to rewrite sections, of course), and it compiled just fine, and works :)
The problem occurred the next time I had to boot my Win95 system in DOS mode. The program stopped working (of course), because Win95 wasn't loaded. I don't really want to have 2 copies of the program installed (needing 2 different file names), so I was hoping there was a way to link the 2 versions together into one file. If I execute it in DOS, instead of it saying it requires Windows, it would just jump to the DOS section of the program. That way, it would be a single program, with LFN support if Win95 is loaded, and without if Win95 isn't loaded. Since the Win95 version also works fine in Win7-64, it would probably also produce a single version that works on all 3 systems (which would be an added bonus).
I did some web searches and couldn't find anything germane to what I'm looking for, so I have no idea if it is even possible. I may have to get yet another compiler, but considering how old it would have to be, I could probably afford it. My web searches did turn up information that leads me to believe that it "should" be possible, though. It would just require a different EXE header than the one Windows compilers put in. It may require that I re-write the DOS version for 32-bit and use a DOS extender (for protected mode, assuming I can't find a way to include it in the file itself). That would be acceptable (though not ideal). I would much rather have 16-bit code in the DOS section and 32-bit code in the Windows section (for the most compatibility).
Does anyone have any information about something like this? If you could just point me in the right direction it would be greatly appreciated.
I don't know if it has been continued in Windows 7 executables, but back in Win95 the executable (EXE) actually had two entry points -- one "normal" one that DOS would find, and a second one that Windows would use. The DOS entry point was usually a very simple default that would just print "This is a Windows program" and exit. You can actually override this default, and have the linker use your own code, however it is very limited.
What I'd recommend doing is adding logic to your DOS 6.22 version (e.g. "sed") that checks the OS level and, if it meets the right criteria, passes the parameters along to a second executable (e.g. "sedx") that uses features from the "newer" OS.
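A hedged C sketch of that launcher idea, for a 16-bit DOS compiler such as Turbo C++ 3.0 ("sedx.exe" is a placeholder name, and the winbootdir test is my assumption - a rough check, since Windows 95 sets that environment variable while bare DOS does not):

#include <stdio.h>
#include <stdlib.h>
#include <process.h>

int main(int argc, char *argv[])
{
    /* winbootdir is set by Windows 95, so its presence is a cheap
       (if rough) "is Windows loaded?" test */
    if (getenv("winbootdir") != NULL) {
        /* Windows is up: hand all arguments to the LFN-aware build */
        return spawnvp(P_WAIT, "sedx.exe", argv);
    }
    puts("Plain DOS detected - running the 16-bit code path.");
    /* ... 16-bit implementation goes here ... */
    return 0;
}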
The documentation for Visual Studio 6 describes the linker's /STUB option; simply point it at the DOS version of your program.
I don't have VS6 handy, so I can't be too specific, but in the project settings GUI, there should be an "additional options" setting in the linker section.
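For reference, the invocation might look something like this (a sketch with placeholder names; dosver.exe is the DOS build used as the stub):

C:\>LINK /STUB:dosver.exe /OUT:myutil.exe myutil.obj

The /STUB option embeds the named DOS executable as the MZ portion of the resulting Windows EXE, so DOS runs your code instead of the default stub message.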
Well, the answer is the /STUB option in the linker you are using for your Windows code. Some additional information for anyone who finds this question later... several days of web searches suggest there doesn't appear to be any other answer to my particular problem.
/STUB requires that the DOS-mode executable have a header of at least 40 bytes. After fighting with multiple compilers to find ones that DO give you a header of the right size (Borland Turbo C++ won't), and not being able to convert my code, I had to get sneaky/fancy. BTW, Visual C 1.52c (the last Visual C that supports DOS) will make a correct header, as will Open Watcom.
If you are faced with the same issue I was - the compiler you used won't make the correct-size header, and your code is too compiler-specific to convert easily - you can do what I ended up doing. I used Open Watcom to write a tiny ("Hello World") Windows program, using my EXE with the short (Borland-created) header as the stub. Open Watcom will adjust the header automatically. I then used a hex editor to read the header information and get the ending address of the stub, and a partial file copier to copy only that part of the program to a file I named "stub.exe" (stripping off the Windows code). Using the same hex editor, I zeroed out the PE pointer in the header. I now had a working DOS EXE that would also work as a stub. Took my stub to my Windows compiler, and linked it in. It works great, all features fully realized :)
FYI - the information needed to strip the Windows portion and zero the PE pointer:
The first byte is offset 0 (of course, but some people may not realize that and think it's byte 1). Also remember that most hex editors (by their very name) give you numbers in hexadecimal format.
Offsets 2-3 hold the number of bytes in the last block of the DOS portion of the file, in low byte/high byte format. That is, offset 2 is low, 3 is high. So take them, reverse them, and you get a number from 0 - 511 (0 - 1FF in hex). 0 means the entire block of 512 (200 hex) bytes is used.
Offsets 4-5 (again in low/high format) hold the number of 512 (200 hex) byte blocks in the DOS portion. Remember to reverse the number, and that the last block may be only a partial block. So subtract one, multiply by 512 (200 hex), and add the number from offsets 2-3, and you have how many bytes are in the DOS portion. For example, 3 blocks with 100 bytes in the last block gives 2 x 512 + 100 = 1124 bytes. Since you are counting from 0, subtract 1, and you now know to copy only bytes 0 - "whatever the total is" to your stub EXE.
Offsets 60-63 (hex 3C-3F) hold the pointer to the start of the PE (Portable Executable) portion of the code (the part that Windows jumps to). It should be just past the end of the DOS portion of the code (mine was padded with a few zeroes). This isn't important at this point, as we are just turning those bytes into 0's anyway (the PE portion has been stripped), but you can use it as confirmation that you have the correct "end of DOS" offset selected.
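To make the arithmetic concrete, here is a small C sketch (my own illustration, not the tool used above) that reads those header fields and reports how many bytes belong to the DOS portion and where the PE pointer sits:

#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned char h[64];
    unsigned long lastBlock, numBlocks, dosBytes, pePtr;
    FILE *f;

    if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    f = fopen(argv[1], "rb");
    if (f == NULL || fread(h, 1, 64, f) != 64) { perror("read failed"); return 1; }
    fclose(f);

    lastBlock = h[2] | (h[3] << 8);            /* offsets 2-3: bytes used in the last block */
    numBlocks = h[4] | (h[5] << 8);            /* offsets 4-5: 512-byte blocks in the DOS part */
    dosBytes  = (numBlocks - 1) * 512 + (lastBlock ? lastBlock : 512);
    pePtr     = h[60] | (h[61] << 8)           /* offsets 60-63: pointer to the PE header */
              | ((unsigned long)h[62] << 16) | ((unsigned long)h[63] << 24);

    printf("DOS portion: %lu bytes (copy offsets 0 - %lu)\n", dosBytes, dosBytes - 1);
    printf("PE pointer (offset 60): %lu\n", pePtr);
    return 0;
}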
The tools I used are:
Open Watcom at http://www.openwatcom.org/index.php/Main_Page
and
Part Copy at http://www.virtualobjectives.com.au/utilitiesprogs/partcopy.htm
I have no idea where to find the hex editor I used. I used CEdit, a DOS program I really like but have been unable to find on the net. I have to use DOSBox with it, though, as Win7 won't run it. There are probably other compilers that do the same thing, and probably tons of partial file copiers available; these are just the tools I used.

How do line endings affect coding?

Why do line endings differ from platform to platform? Why is there even a term like "line ending" in programming?
I prefer saving my code in Unix/Linux format, even when I'm on Windows. Am I missing anything by not saving it in Windows or MacOS format? How do line endings affect coding?
In the early days, when teletypewriters were nearly the only way of getting output from a computer, CR and LF did different things: CR returned the print head to the left margin, and LF advanced the paper one line. Unix started the tradition of using a single character to mark the end of a line, probably because it made pipelining easier; its drivers could easily convert a single LF to CR/LF if need be. Linux is mostly a Unix clone, so it keeps that convention. The others hold on to the CR/LF convention for historical reasons, even though it's not strictly necessary.
Some languages, such as C, C++, and Python, let you specify the type of file when you open it: either binary or text. For text files, a translation is performed so that a single LF is translated into the line-ending convention required by the OS.
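A minimal C sketch of the difference (the filenames are arbitrary):

#include <stdio.h>

int main(void)
{
    FILE *t = fopen("text.txt", "w");   /* text mode: '\n' becomes CR/LF on Windows */
    FILE *b = fopen("bin.txt", "wb");   /* binary mode: '\n' is written as a single LF */
    if (t == NULL || b == NULL) return 1;
    fputs("one line\n", t);
    fputs("one line\n", b);
    fclose(t);
    fclose(b);
    return 0;   /* on Windows, text.txt ends up 10 bytes and bin.txt 9 */
}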
Basically, everyone wanted to be different when creating OSes: Unixes started with LF, then VMS and DOS wanted CR/LF (like a typewriter), and of course Mac wanted to be different, so they went for CR only.
They just wanted to make it harder to transfer between OSes, so that you 'bought' into one.
Added because of a comment:
It's up to the programmer: if you need to support different line endings, then you must code for them. For example, you could create a #define for the line ending and have it change depending on compile options.
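A minimal C sketch of that idea (LINE_ENDING is a made-up name; _WIN32 is predefined by Windows compilers):

#ifdef _WIN32
#define LINE_ENDING "\r\n"
#else
#define LINE_ENDING "\n"
#endif

#include <stdio.h>

int main(void)
{
    /* open in binary mode so the runtime's text-mode translation
       doesn't second-guess our explicit choice */
    FILE *f = fopen("out.txt", "wb");
    if (f == NULL) return 1;
    fprintf(f, "first line%s", LINE_ENDING);
    fclose(f);
    return 0;
}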

Windows 7 file problem

I am using VB6 SP6
This code has worked correctly for years, but I am now having a problem on a Win7-to-Win7 network. It also works correctly on an XP-to-Win7 network.
Open file For Random As ChannelNum Len = 90
'the file is on the other computer on the network
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
'(MyAcFile is a UDT that is less than 90 bytes long)
'... other code that does not reference the file or RecNum ... then:
RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum
The second record overwrites the first.
We had a similar problem in the past with opportunistic locking, so we turn that off at install time - along with some other keys that cause data errors on Windows networks.
However, we have had no problems like this for years, so I think MS has some new "better" option that they think will "improve" networking.
Thanks for your help
I doubt there is any "bug" here except in your approach. The file metadata that LOF() interrogates is not meant to be updated immediately by simple writes. A delay seems like a silly idea: it is prone to occasional failure unless a very long delay is used, and it saps performance at best. Even close/reopen can be iffy: VB6's Close statement is an async operation. That's why the Reset statement exists.
This is also why things like FlushFileBuffers() and SetEndOfFile() exist at the API level. They are also relatively expensive operations from a performance standpoint.
Track your records yourself. Only rely on LOF(), if necessary, right after you first open the file.
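To illustrate the API-level calls named above, here is a hedged Win32 C sketch (records.dat and the 90-byte record mirror the VB6 example, but the names are mine):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char rec[90] = "record payload";
    DWORD written, size;
    HANDLE h;

    h = CreateFileA("records.dat", GENERIC_READ | GENERIC_WRITE,
                    0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    SetFilePointer(h, 0, NULL, FILE_END);          /* append a fixed-size record */
    WriteFile(h, rec, sizeof rec, &written, NULL);

    /* Force data and size metadata to the device; until this completes,
       a size query (VB6's LOF()) may still see the old length. */
    FlushFileBuffers(h);

    size = GetFileSize(h, NULL);
    printf("size after flush: %lu bytes\n", (unsigned long)size);

    CloseHandle(h);
    return 0;
}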
Hmmm... is the file (as per the Open statement at the top of the code sample) a UNC filename, or something like x:\ where x is a mapped drive? Are you not incrementing RecNum? Judging by the code, RecNum is recomputed the same way both times and hence appears to overwrite the first record... Sorry for sounding, ummm, no pun intended, basic... It would help to see some more code here...
Hope this helps,
Best regards,
Tom.
It can be just a timing issue: in some runs your LOF() function returns more up-to-date information than in others. The file system API is asynchronous; for example, when a write function is called, the increased size will not be reflected immediately.
In short: your code has exposed an old bug, one which is just easier to reproduce on Windows 7.
The cheapest way to fix the bug: you may decide to add a delay (it may need to be a significant delay, say 5 seconds).
A more elaborate fix is to force the size update by closing and reopening the file.

Is there a good reason to limit Windows filename extensions to three characters?

I am creating a utility that will store data on flat file in a specific binary format.
I want the filename extension to be specific to my application. Is there any reason, other than the old 8.3 filename limit, to restrict the extension to 3 characters? And if not, what is the limit? Can I have myfilename.MyExtensionSoHandsOffEverybodyElse?
This is a holdover from the old Windows 3.x/MS-DOS days. Today, there are plenty of file names that have extensions longer than 3 characters.
Keep in mind that Windows limits the total path length (including the file name) to MAX_PATH, which is 260 characters.
In my experience, having seen a few non-3-character extensions I'd say that it's a matter of tradition, and you're perfectly welcome to use myfilename.MyExtensionSoHandsOffEverybodyElse.
The only good reason for doing this is if you plan to support Windows 9x. If you're only targeting XP and later, as with most projects nowadays, the 8.3 thing is irrelevant.
In fact, Windows itself stores things in long-extension filenames in Vista and later, for example, .search-ms for saved searches.
No, there isn't a good reason to limit the extension to 3 characters. However, a shorter, descriptive name is better if a user has to remember it. For example, most people know what a .html or .doc file would contain.
As long as you make a reasonable attempt to avoid naming collisions with major software, there shouldn't be an issue. The corollary is that unless you create some insanely long extension that will only ever be unique to your software (and even then, it's not guaranteed), the extension you choose will always be subject to name collision when other people choose their program's data-file extension, just as you are doing here.

Text editor to open big (giant, huge, large) text files [closed]

I mean 100+ MB big; such text files can push the envelope of editors.
I need to look through a large XML file, but cannot if the editor is buggy.
Any suggestions?
Free read-only viewers:
Large Text File Viewer (Windows) – Fully customizable theming (colors, fonts, word wrap, tab size). Supports horizontal and vertical split views. Also supports file following and regex search. Very fast and simple, with a small executable size.
klogg (Windows, macOS, Linux) – A maintained fork of glogg. Its main feature is regular expression search. It supports monitoring file changes (like tail), bookmarks, highlighting patterns using different colors, and has serious optimizations built in. But from a UI standpoint, it's rather minimal.
LogExpert (Windows) – "A GUI replacement for tail." It's really a log file analyzer, not a large file viewer, and in one test it required 10 seconds and 700 MB of RAM to load a 250 MB file. But its killer features are the columnizer (parse logs that are in CSV, JSONL, etc. and display them in a spreadsheet format) and the highlighter (show lines with certain words in certain colors). Also supports file following, tabs, multiple files, bookmarks, search, plugins, and external tools.
Lister (Windows) – Very small and minimalist. It's one executable, barely 500 KB, but it still supports searching (with regexes), printing, a hex editor mode, and settings.
Free editors:
Your regular editor or IDE. Modern editors can handle surprisingly large files. In particular, Vim (Windows, macOS, Linux), Emacs (Windows, macOS, Linux), Notepad++ (Windows), Sublime Text (Windows, macOS, Linux), and VS Code (Windows, macOS, Linux) support large (~4 GB) files, assuming you have the RAM.
Large File Editor (Windows) – Opens and edits TB+ files, supports Unicode, uses little memory, has XML-specific features, and includes a binary mode.
GigaEdit (Windows) – Supports searching, character statistics, and font customization. But it's buggy – with large files, it only allows overwriting characters, not inserting them; it doesn't respect LF as a line terminator, only CRLF; and it's slow.
Builtin programs (no installation required):
less (macOS, Linux) – The traditional Unix command-line pager tool. Lets you view text files of practically any size. Can be installed on Windows, too.
Notepad (Windows) – Decent with large files, especially with word wrap turned off.
MORE (Windows) – This refers to the Windows MORE, not the Unix more. A console program that allows you to view a file, one screen at a time.
Web viewers:
readfileonline.com – Another HTML5 large file viewer. Supports search.
Paid editors/viewers:
010 Editor (Windows, macOS, Linux) – Opens giant (as large as 50 GB) files.
SlickEdit (Windows, macOS, Linux) – Opens large files.
UltraEdit (Windows, macOS, Linux) – Opens files of more than 6 GB, but the configuration must be changed for this to be practical: Menu » Advanced » Configuration » File Handling » Temporary Files » Open file without temp file...
EmEditor (Windows) – Handles very large text files nicely (officially up to 248 GB, but as much as 900 GB according to one report).
BssEditor (Windows) – Handles large files and very long lines. Doesn't require an installation. Free for non-commercial use.
loxx (Windows) – Supports file following, highlighting, line numbers, huge files, regex, multiple files and views, and much more. The free version cannot process regexes, filter files, synchronize timestamps, or save changed files.
Tips and tricks
less
Why are you using editors just to look at a (large) file?
Under *nix or Cygwin, just use less. (There is a famous saying – "less is more, more or less" – because "less" replaced the earlier Unix command "more", with the addition that you could scroll back up.) Searching and navigating under less is very similar to Vim, but there is no swap file and little RAM is used.
There is a Win32 port of GNU less. See the "less" section of the answer above.
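A typical session might look like this (the filename is just an example):

$ less huge.log

Inside less, / searches forward, ? searches backward, G jumps to the end of the file, and F follows the file as it grows (like tail -f).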
Perl
Perl is good for quick scripts, and its .. (range flip-flop) operator makes for a nice selection mechanism to limit the crud you have to wade through.
For example:
$ perl -n -e 'print if ( 1000000 .. 2000000)' humongo.txt | less
This will extract everything from line 1 million to line 2 million, and allow you to sift the output manually in less.
Another example:
$ perl -n -e 'print if ( /regex one/ .. /regex two/)' humongo.txt | less
This starts printing when the first regular expression finds something, and stops printing when the second regular expression finds the end of an interesting block. It may find multiple blocks. Sift the output...
logparser
This is another useful tool you can use. To quote the Wikipedia article:
logparser is a flexible command line utility that was initially written by Gabriele Giuseppini, a Microsoft employee, to automate tests for IIS logging. It was intended for use with the Windows operating system, and was included with the IIS 6.0 Resource Kit Tools. The default behavior of logparser works like a "data processing pipeline", by taking an SQL expression on the command line, and outputting the lines containing matches for the SQL expression.
Microsoft describes Logparser as a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. The results of the input query can be custom-formatted in text based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart.
Example usage:
C:\>logparser.exe -i:textline -o:tsv "select Index, Text from 'c:\path\to\file.log' where line > 1000 and line < 2000"
C:\>logparser.exe -i:textline -o:tsv "select Index, Text from 'c:\path\to\file.log' where line like '%pattern%'"
The relativity of sizes
100 MB isn't too big. 3 GB is getting kind of big. I used to work at a print & mail facility that created about 2% of U.S. first class mail. One of the systems for which I was the tech lead accounted for 15+% of those pieces of mail. We had some big files to debug here and there.
And more...
Feel free to add more tools and information here. This answer is community wiki for a reason! We all need more advice on dealing with large amounts of data...
