How important is portability?

I was just writing a procedure that looks for a newline, and I was contemplating using Environment.NewLine vs. '\n'.
Syntactically: Is Environment.NewLine clearer than '\n'?
And how important is portability really?
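For concreteness, the kind of choice I'm weighing looks roughly like this (hypothetical snippet):

    using System;

    string text = "first line\r\nsecond line";          // sample input

    // Option 1: look for the platform's newline sequence
    int byNewLine = text.IndexOf(Environment.NewLine);

    // Option 2: look for the line-feed character only
    int byLineFeed = text.IndexOf('\n');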

Depends on how likely you are to run your program on another platform, doesn't it?
Any built-in API that abstracts platform-specific semantics/syntax is generally better to use: it gives you portability with little complexity overhead and easy gains.
Writing portable C, on the other hand, can be more complex and may require a stronger business case for the effort. When dealing with things like C#, Python, Java and others, use the provided abstractions for these cross-platform annoyances; in many cases that is exactly what the annoyances are reduced to.

It is not really important if the program is written for a specific known target audience/platform and you are certain its scope will not extend beyond that. But that's where the problem lies: often you cannot be certain about these things. You cannot look into the future.
Often writing portable code is not harder than writing a non-portable alternative. So, always strive to write portable code.

I would go with Environment.NewLine. This is because, depending on the environment in use, its definition can change. If we go with '\n', each compiler/language will have its own understanding and interpretation.
So it would be preferable to go with Environment.NewLine.
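To illustrate the difference (a minimal C# sketch):

    using System;

    // Environment.NewLine is resolved for the current platform at run time:
    // "\r\n" on Windows, "\n" on Unix-like systems.
    Console.WriteLine(Environment.NewLine == "\r\n");   // true on Windows
    Console.WriteLine(Environment.NewLine == "\n");     // true on Linux/macOS

    // '\n' is always the single line-feed character (ASCII 10), on every platform.
    Console.WriteLine((int)'\n');                       // prints 10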

Having portable code is a kind of a business opportunity. Say you only sell software for Windows now. Then the government of your country decides that it doesn't want to pay licensing fees to Microsoft and migrates all government institutions to Linux. If you can't quickly port your software you are no longer able to sell it to the government and that's big money.

Environment.NewLine works well; I've used it a lot in the past. However, if the app is a web app and you insert an Environment.NewLine into the rendered HTML, it will have no effect in the browser window, though it will affect your source layout.
If I remember rightly, Environment.NewLine will also add a carriage return if the system expects it, where '\n' won't.
I forgot to answer the portability aspect. I would always make my code more portable. As someone working in a consultancy, I don't want to have to redevelop code, so by using Environment.NewLine (for example) I reduce the amount of work I would have to do should the code need to be reused in the future.

Portability aside, wouldn't one always go with Environment.NewLine (or whatever the equivalent is on your platform), as it's simply more human readable?
Two years down the line, when a random maintenance programmer comes along who doesn't understand the nuances of '\n', Environment.NewLine is also more bulletproof.

As they have different meaning, you should use the one that is correct for the data that you are handling.
Environment.NewLine means the newline combination for the current system.
A char/string literal like '\n' or "\r\n" means a specific newline combination regardless of the current system.
If the data is, for example, a text file produced by a regular text editor on the same system, you would use Environment.NewLine to match its newlines. If the data is in a format where the newlines are defined as a specific character combination regardless of what system it is used on, you would use that specific literal.
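A small sketch of that distinction (the file name is hypothetical; HTTP is used as an example of a format that mandates CRLF):

    using System;
    using System.IO;

    // Data produced on the current system (e.g. a text file written by a local editor):
    // split on the platform's newline sequence.
    string localText = File.ReadAllText("notes.txt");   // hypothetical file name
    string[] lines = localText.Split(new[] { Environment.NewLine }, StringSplitOptions.None);

    // Data in a format that defines its own newline: HTTP headers must be
    // terminated with CRLF no matter what system the code runs on.
    string request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";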

For those things: '\n', the newline character, is a fixed character in the ASCII set, so that's portable to almost anything. It's up to you to decide how important it is for your code to be portable across platforms...
To make this decision, figure out what the chances are of your code ever being ported to another platform. Then think what the investment will be to make it portable now vs. porting it later, when the time comes. Choose the one that's cheapest, or most convenient...

There are two aspects to your question: how important is portability, and how do I represent a newline in a portable way.
The need for portability is, as others said before me, a business requirement: your own private command-line tool needn't be portable, while a commercial library had better be. Based on this need you can choose the platform you're working on.
The newline character has to be recognized by your parser. If you're working in Python, C++, ..., the parser will always recognize the '\n' sequence. If you are writing regular expressions, '$' will be recognized as end-of-line.
If the audience of your code is acquainted with '\n', I would use that one, since it jumps out as a character. If you want to emphasize the meaning of "end-of-line", go with the symbolic name.
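For instance, with .NET regular expressions (a small sketch):

    using System.Text.RegularExpressions;

    string text = "first\nsecond\nthird";

    // With RegexOptions.Multiline, '$' matches at the end of every line,
    // not only at the end of the whole input.
    MatchCollection lineEnds = Regex.Matches(text, @"\w+$", RegexOptions.Multiline);
    // lineEnds now contains "first", "second" and "third".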

Unless you're working on some tiny project, portability is probably very important. Even if you program for Windows only, you will probably want your program to run on future versions of Windows. Quite a lot of things break in new versions of Windows; the most common thing I see is copy protection that depends on obscure, undocumented internal runtime structures of Windows. Similarly, on a Unix-like OS you'd want the program to work on the latest kernel, which is why you should avoid relying on raw system calls and the like. The thing is, if your program is very non-portable across operating systems or architectures, it's likely not even future-proof. Heh, this reminds me of Windows registry/filesystem organization.

Related

How to create custom single byte character set for Windows?

Windows uses an encoding table for non-Unicode applications to map characters from the Unicode table to a 1-byte table. There are many predefined character sets, and a user can choose one in Windows settings. I need to create a custom character set. Where can I find some information about that process? I tried to Google it but didn't have any luck; I guess few people are doing that.
AFAIK, you can't do that. I don't think there's even a way to write some kernel-mode "driver" for it, but I haven't looked into these things for a while; maybe there is some way (now).
In any case, you might be better off using a library you can change/update, such as libiconv.
UPDATE:
Since you don't have the source code, you're in a very unfortunate position.
For all string resources (in the EXE, in any DLLs or, though unlikely, in some other files), you can read them out, figure out which code page they use, and change it (and the strings themselves), tweaking them so that the right glyphs appear. Yes, you might actually see different glyphs in Notepad, but who cares, as long as your application shows the right ones (FWIW, for such hacks it's best to use a hex editor). Then, of course, put the (changed) resources back into the EXE/DLL. But it's quite possible not all strings are in resources, and that's when the "real" problems start.
There's any number of hacks that could have been done here. Your best option is to use a good debugger (WinDbg or better) and figure out what's going on and how character sets are handled; since you don't have the source code, it's going to be quite painful. You want to find out:
Are the default charset(s) used (OEM/ANSI), or some specific one (via NLS APIs)?
Whatever charset is used, is it a standard one or not? The charset here is the "code" Windows assigns to it. Look at Windows lists of available charsets.
Is the application installing fonts? If it is, use a font tool to examine them - maybe it has a specific (non-standard?) code-page supported in it.
Is the application installing drivers? If it is, the only way to gain more insight is to use a kernel debugger (which is very tricky and annoying, but, as already said, you're in an unfortunate situation).
It appears that those tables are located at C:\Windows\system32\*.nls. I'm not sure whether there's proper documentation for their structure. There's some information in Russian here. Also, you might want to tinker with the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls.
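If you only want to inspect what your own system has configured, you can read that registry key from code; a hedged C# sketch (the "CodePage" subkey and value names are from memory, so verify them on your machine):

    using System;
    using Microsoft.Win32;   // Windows only

    // The registry key mentioned above; on machines I have checked, its "CodePage"
    // subkey holds the ANSI/OEM defaults (ACP/OEMCP) and maps code-page numbers
    // to *.nls files. Treat the exact value names as an assumption.
    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
               @"SYSTEM\CurrentControlSet\Control\Nls\CodePage"))
    {
        Console.WriteLine("ANSI code page (ACP): " + key?.GetValue("ACP"));
        Console.WriteLine("OEM code page (OEMCP): " + key?.GetValue("OEMCP"));
    }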

Most suitable language for cheque/check printing on Windows Platform

I need to create a simple module/executable that can print checks (fill out the details). The details need to be retrieved from an existing Oracle 9i DB on Windows (XP or later).
Obviously, I shall need to define the layout, i.e. the exact positions where the details (name, amount, etc.) are to be filled in.
The major constraint is that the client needs / strongly prefers an executable, not code that is either interpreted or uses a VM. This is so that installation is extremely simple. This requirement really cannot be changed.
Now, the question is, how do I do it.
(.NET, java and python are out of the question, unless there is a way around the VMs)
I have never worked with MFC or other native windows APIs. I am also unfamiliar with GDI.
Do I have any other option? Any language that can abstract the complexities and can be packed into an x86 binary?
Also, if not then any code help with GDI would be appreciated.
The most obvious possibilities would probably be C, C++, and Delphi. There are a few others such as Ada (e.g., Gnat), but offhand I don't see a lot of reason to favor them (especially for a job this small).
At least the way I'd write this, the language would be almost irrelevant. I'd have it run almost entirely by an external configuration file that gave the name of each field, and the location where it should be printed. I'd probably use something like MM_LOMETRIC mapping mode, so Windows will handle most of the translation to real-world coordinates (and use tenths of a millimeter in the configuration file, so you can use the coordinates without any translation).
Probably the more difficult part of this would/will be the database connectivity. There are various libraries around to help out with that, so this won't be terribly difficult, but it's still not (quite) as trivial as the drawing part.
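To make the configuration-file idea above concrete, it could look something like this (purely illustrative; the field names and coordinates are invented, with positions given in tenths of a millimetre as suggested):

    ; cheque-layout.cfg (hypothetical format)
    ; field          x       y      (tenths of a millimetre)
    payee            250     300
    amount_words     250     450
    amount_figures   1600    450
    date             1500    150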

Why can't the compiler just compile my code as I type it?

Why can't the compiler just compile my code as I type it?
From the user's point of view, it could work as smoothly as syntax colouring does today. If you stop typing for long enough (maybe a couple of seconds) the compilation (not linking) would finish, and code errors would be identified using something like syntax colouring.
It's not like my 3GHz quad core monster computer was really busy doing something else. Why not let it compile all the time?
That's exactly what the VB.NET code editor in Visual Studio does.
The advantage is much more accurate IntelliSense than C#. The disadvantage is that it wastes truly vast amounts of processor time and memory. :-(
It can. Or, to be more useful, the answer to this question depends on
What language
What degree of optimization you require
How annoyed you will be if you temporarily type something dumb, and the compiler compiles it and injects the result into the binary you are debugging before you can fix it.
Some really strong optimizations would be very messy to redo on the fly. On the other hand, a basic compilation, if there's no need to worry about assigning offsets for x86 instructions? Sure.
Some IDEs do compile (or at least check syntax and some semantics) code as it is typed. For example, I think Eclipse does it. I think Visual Basic 6 (and maybe earlier versions) did this.
Not sure what IDE you're using, but that's how VB.NET works.
I'm not well-versed in compilers or the methods by which code is converted to IL and machine language, etc. But even so I can see how altering my program by one flow control statement can completely invalidate the work a compiler has done up to that point. By adding or changing a single line of code, entire portions of a program may become obsolete, unused, or in some other way require re-evaluation.
I think I'd rather save those CPU cycles for distributed.net or SETI@home instead of constantly recompiling my code as I alter it.
That totally depends on the language.
Languages that have context-independent syntaxes "could" pre-compile expressions once typed. However, compiling a project in such languages is always fast anyway, so why use the CPU continuously when you can quickly batch the work once the code is ready?
Other languages, like (infamously) C++, are context-dependent. In most cases, the compiler can't understand an expression without having already read all the code that comes before it. It's really, really hard to parse, and that's why we only now have error checking before compilation (in VS2010 and other recent IDEs). In this case it looks nearly impossible to implement the feature you're asking for.
That said, I'm not a specialist at all. That's all I know about it.
Even interpreted languages like PHP have support for this in the Komodo editor. I'm sure there are many more editors out there that support this for almost any language.

Should you wrap 3rd party libraries that you adopt into your project? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
A discussion I had with a colleague today.
He claims that whenever you use a 3rd party library, you should always write a wrapper for it, so you can always change things later and accommodate things for your specific use.
I disagree with the word always; the discussion arose regarding log4j, and I claimed that log4j has a well-tested and time-proven API and implementation, that everything thinkable can be configured a posteriori, and that there is nothing you should wrap. Even if you wanted to wrap, there are proven wrappers like commons-logging and log5j.
Another example that we touched on in our discussion is Hibernate. I claimed that it has a very big API to be wrapped. Furthermore, it has a layered API which lets you tweak its insides if you need to. My friend claimed that he still believes it should be wrapped, but he didn't do it because of the size of the API (this co-worker is much more of a veteran than me on our current project).
I claimed this, and also that wrapping should only be done in specific cases:
you are not sure how the library will fit your needs
you will only use a small portion of a library (in which case you may only expose a part of its API).
you are not sure of the quality of the library's API or implementation.
I also maintained that sometimes you can wrap your code instead of the library. For example, putting your database-related code in a DAO layer, instead of preemptively wrapping all of Hibernate.
Well, in the end this is not really a question, but your insights, experiences and opinions are highly appreciated.
It's a perfect example for YAGNI:
it is more work
it inflates your project
it may complicate your design
it has no immediate benefit
the scenario you write it for may never manifest
when it does, your wrapper most likely needs to be re-written completely because it is tied too closely to the concrete library you were using and the new one's API simply doesn't match yours.
Well, the obvious benefit is for switching technologies. If you have a library that becomes deprecated, and you want to switch, you may end up rewriting a lot of code to accommodate the change, whereas if it were wrapped, you'd have an easier time writing a new wrapper for the new lib, than changing all your code.
On the other hand, it would mean that you have to write a wrapper for every trivial library that you include, which is probably an unacceptable amount of overhead.
My industry is all about speed, so the only time I'd be able to justify writing a wrapper is if it was around some critical library that was likely to change dramatically on a regular basis. Or, more commonly, if I need to take a new library and shoehorn it into old code, which is an unfortunate reality.
It's definitely not an "always" situation. It's something that may be desirable. But the time isn't always going to be there, and, in the end, if writing a wrapper takes hours and the long-term library changes are going to be few and trivial... why bother?
No. Java architects/wannabes are too busy designing against imaginary changes.
With a modern IDE, it's a piece of cake when you do need a change. Until then, keep it simple.
I agree with everything that's been said pretty much.
The only time wrapping third party code is useful (bar violating YAGNI) is for unit testing.
Mocking statics and so forth requires you to wrap the code; this is a valid reason to write wrappers for third-party code.
In the case of logging code, it's not needed, though.
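The thread is about Java and log4j, but the idea is language-agnostic; here is a minimal C# sketch of wrapping a static dependency behind an interface purely so it can be faked in tests (names invented for illustration):

    using System;

    // Production code depends on this interface instead of the static API.
    public interface IClock
    {
        DateTime Now { get; }
    }

    // Thin wrapper around the static call, used in production.
    public class SystemClock : IClock
    {
        public DateTime Now => DateTime.Now;
    }

    // Hand-rolled fake for unit tests: the "mocked static".
    public class FixedClock : IClock
    {
        private readonly DateTime _now;
        public FixedClock(DateTime now) { _now = now; }
        public DateTime Now => _now;
    }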
The problem here is partially the word 'wrapper', partially a false dichotomy, and partially a false distinction between the JDK and everything else.
The word 'wrapper'
Wrapping all of Hibernate, as you say, is a completely impractical enterprise.
Restricting the Hibernate dependencies to an identified, controlled, set of source files, on the other hand, may well be practical and achieve the same results.
The false dichotomy
The false dichotomy is the failure to recognize a third option: standards. If you use, say, JPA annotations, you can swap Hibernate for other things. If you are writing a web service and use JAX-WS annotations and JAX-B, you can swap between the JDK, CXF, Glassfish, or whatever.
The false distinction
Sure, the JDK changes slowly and is unlikely to die. But major open source packages also change slowly and are unlikely to die. Untold thousands of developers and projects use Hibernate. There's really no more risk of Hibernate disappearing or making radical incompatible API changes than there is of Java itself.
If the library you are planning to wrap is unique in its "access principles, metaphors and idioms" compared to other offerings in the same domain, then your wrapper is pretty much going to mirror that library and won't do you any good if you one day switch to a different library, since you will need a new wrapper anyway.
If the library is accessed in a similar way to other libraries and the same wrapper can apply to these libraries, then they are probably written based on some existing standard and there is some common layer that already exists to access both of them.
I would only go with wrappers if I knew for sure that I would have to support multiple and substantially different libraries in production.
The main factor in deciding whether to wrap a library is the impact a library change will have on the code. When a library is only called from one class, the impact of changing the library will be minimal. If, on the other hand, a library is called from all classes, a wrapper is much more likely to be worthwhile.
Any uncertainty around the choice of 3rd party library should be flushed out at the beginning of the project using prototypes to test the scalability/suitability/whatever of the 3rd party library.
If you decide to go ahead and provide full de-coupling/abstraction support it should be costed up and ultimately approved by the project sponsor - ultimately it's a commercial decision as someone has to pay for it and the work required to do it (unless it's absolutely trivial, in which case the api is probably low risk anyway).
Generally an experienced architect will choose a technology that they can be reasonably confident with, have experience of, and are confident will last the lifetime of the app, OR else they will eliminate any risk in the decision early on in the project, thus removing any need to do this, most of the time.
I'd tend to agree with most of your points. Using absolutes often gets you into trouble and saying you should "always" do something limits your flexibility. I'd add some more points to your list.
When you use wrapping code around a very common API, like Hibernate or log4j you make it more difficult to bring on new developers. New developers now have to learn a whole new API, where if you hadn't wrapped the code they would have been very familiar right away.
On the flip side of that, you also limit your developers' view into the API. Using an advanced feature of the API takes more time because you have to make sure that your wrapper is implemented in a way that can handle it.
Many of the wrapping layers I've seen also are very specific to the underlying implementation. So, if you write a log wrapper around log4j, you are thinking in log4j terms. If some new cool framework comes out, it may change the whole paradigm, so your wrapping code doesn't migrate as well as you had thought.
I'm definitely not saying wrapping code is always bad, but as you stated, there are a lot of factors you have to consider.
The purpose of wrapping even a well-tested and time-proven 3rd-party library is that you might decide to switch libraries at some point in the future. Wrapping it makes it easier to switch without changing any code in your core application. Only the wrapper needs to change.
If you're absolutely sure that you'll never (another absolute) use a different logging framework in your project, go ahead and skip the wrapper. Even having said that, I'd probably hold off on writing the wrapper until I knew I needed it, like the first time I need to switch.
This is kind of a funny question.
I've worked in systems where we've found showstopper bugs in libraries we were using, and which upstream was either no longer maintaining, or not interested in fixing. In a language like Java, you usually can't fix internal bugs from a wrapper. (Fortunately, if they're open-source, you can at least fix them yourself.) So it's no help here.
But I'm often working in a language where you can easily modify libraries at any time, without seeing or even having their source code -- I commonly add new methods to existing classes, for example. So in this case, there's no point in wrapping: just make the change you want.
Also, does your colleague draw the line at things called "libraries"? What about Java itself? Does he wrap built-in classes? Does he wrap the filesystem? The thread scheduler? The kernel? (That is, with his own wrappers -- in a sense, everything is a wrapper around the CPU, but it sounds like he's talking about wrappers in your source repo that are completely under your control.) I've had built-in functionality change or disappear when new versions of it appear. Java is not immune from this.
So the idea to always write a wrapper comes down to a bet. Assuming he's only wrapping third-party libraries, he seems to be implicitly betting that:
"first-party" functionality (like Java itself, the kernel, etc.) will never change
when "third-party" functionality changes, it will always be done in a way that can be fixed in a wrapper
Is that true in your case? I don't know. Of the medium-large Java projects I've done, it's rarely true for me. I wouldn't spend effort wrapping all third-party libraries, because it seems like a poor bet, but your situation is certainly different from mine.
There is one situation where you can wrap with good reason, namely if you need to test stuff and the default third-party object is heavyweight. Then having an interface can really make a difference.
Note, this is not to replace the library, but to make it manageable where it doesn't matter much.
Wrapping a whole library is boilerplate, ineffective, and wrong in most cases. It can be done in a much cleverer way. I'd say that wrapping a library is appropriate mostly in the case of UI component libraries, and again, you have to be adding some additional core functionality of yours to all the components for this to be needed.
If too many modifications and additions are needed, this is most likely not the library you are looking for.
If there is a moderate amount of additions and modifications, there are design patterns that come in handy in those cases. The Decorator pattern (which allows new/additional behaviour to be added to an existing object dynamically), for example, is suitable for most cases; see the sketch after this answer.
IDE search/replace and refactoring capabilities offer an easy way to change your code in all required places if some important change is needed and a wrapping object appears. (Of course, unit tests would be helpful here ;) )
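A bare-bones decorator, sketched in C# (names invented for illustration):

    using System;

    public interface IMessageSender
    {
        void Send(string message);
    }

    // Decorator: adds behaviour dynamically without touching the wrapped object.
    public class LoggingSender : IMessageSender
    {
        private readonly IMessageSender _inner;

        public LoggingSender(IMessageSender inner) { _inner = inner; }

        public void Send(string message)
        {
            Console.WriteLine("sending: " + message);   // the added behaviour
            _inner.Send(message);                       // delegate to the wrapped sender
        }
    }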
In my experience the question becomes fairly moot if you're using abstractions sufficiently. Coupling to a library is just like coupling to any other interface. Thus you want to reduce accidental coupling and the scope of rewrite necessary if you need to swap out the implementation. Don't bind your application logic to some construct, but don't just form a bunch of stupid (literally) wrappers around something and expect to gain any benefit.
A wrapper doesn't usually gain you anything unless it's answering a specific purpose (such as polymorphizing a non-polymorphic construct). They often show up in refactoring, but I wouldn't recommend forming an architecture on them. There's a few exceptions of course, but there is with any principle.
This doesn't speak toward adapters. An adapter can be a pretty important component for when you want to actually alter the interface of a library and its use to be in line with architecture, code, or domain concepts in your project.
You should do it always, often, sometimes, rarely, or never. Not even your colleague does it always, but the instructive cases are always and never. Suppose that it is sometimes necessary. If you never wrapped a library, the worst consequence is that one day you discover it was necessary for a library that you had used all over the place. It would take you some time to wrap that library and to perform shotgun surgery on the clients. The question is whether that eventuality would take more time than habitually providing wrappers that are rarely necessary, but never having to perform the shotgun surgery.
My instinct is to appeal to the YAGNI (you ain't gonna need it) principle and opt for "rarely".
I would not wrap it as a one-to-one thing, but I would layer the app so that each part is replaceable as much as possible. The ISO OSI model works well for all types of software :-)

Inheriting applications at a new job

When inheriting applications at a new job, do you tend to stick to the original developers' coding practices or do you start applying your own?
I work in a small shop with no guidelines and have always wondered what the rule is here. Some applications are written very well but do not follow the standards I use (variable names etc...) and I do not want to "dirty" them up. I find myself taking a little extra time to stay consistent.
Others are written very poorly and it looks like the developer was changing his mind every keystroke...
ADDITIONAL THOUGHT
What about when I start my own projects? So now I have introduced a new coding standard to the mix:
The good code - but not my style
The bad code with bad practices and lack of standards
My own standards
If there are standards evident in the code, you should stick to them. If there aren't, start introducing your own.
If there are multiple developers who work on the same module, don't change the style.
If you will hand it off to another developer in the near future (this role is temporary), don't change the style.
If you are taking complete, exclusive, permanent ownership of the module, change it, but follow these rules:
One change at a time.
Fix all indentation to your liking at once, and commit that change.
Fix all brace placement to your liking at once, and commit that change.
Fix all other formatting to your liking at once, and commit that change.
Fix all naming to your liking at once, and commit that change.
Don't spend a lot of time on it.
If it takes more than an hour or two, then cut back.
Make the commit description clear.
So you can quickly ignore these changes when analyzing change history.
Use automated tools
to make sure the result is consistent and complete, so you don't have to mess with it again.
Run your tests
Just because your changes shouldn't affect behavior doesn't mean they won't. (Triple negative, ouch!)
Make sure everyone knows what you're doing
Someone might have a change hanging around that they want to commit now, and it'll be painful to merge with your changes. Also, you don't want anyone to get surprised and go tell your boss before you do.
Don't do it again
This is a one-time thing.
Publish a style guide that follows best practices, and build consensus around following it. Refactor old code as you need to maintain it.
I'm in the same boat as you: a lone developer who inherited some apps from the last guy.
I've been sticking to what appear to be his standards for existing projects for consistency, and using my own preferences for new stuff.
I've noticed that most people think whoever came before them had no idea how to write code. Then whoever comes after them thinks the same thing. Some things are common sense, but most things are just personal preference.
For major problems, i.e. using comments vs. not using comments, updating the code will probably make it easier to work with, and easier for anyone else to work with. Even then, your time is probably best spent updating the code as you come across it, instead of embarking on a huge project to refactor everything (introducing new problems in the process).
For things like indentation, line spacing, variable names, one-line ifs v.s. multi-line ifs, the reality is that your coding style is likely just as bad as you think theirs is.
I think it depends on what you mean by "coding practices". If you mean things like code formatting and naming conventions and things that I would personally consider "cosmetic", then stick with what's already there. If you mean things like coding best practices and writing code correctly in the first place, then go back and fix the problems if possible, but at the very least make your new code follow best practices.
Given that most of the applications I've inherited have been hacked together by "cowboy coders" who didn't apply even the most basic of coding practices, my opinion is a little biased.
I say introduce coding standards if there are none or the ones that exist are blatantly wrong and/or stupid (e.g. "All variables must be no more than 4 characters in length", "Every database column is varchar(255) null", etc.). Obviously if you have a team then you'll need to come to an agreement as to what practices to implement, but if you're a solo dev then you have free reign and IMO you should introduce order to the chaos.
If the code works and seems to have a clean format, don't waste time changing the style.
If the code is badly written, by all means change it when you have some downtime, or the next time you work on the project.
For new projects do them your way, since there is no standard. As with the other well written programs yours should be easy enough to maintain.
composition is often preferred over inheritance
:-P
If it's just you, go for it. If it's a team, especially if any of the original developers are still around (or likely to be called in for consulting), keep with the existing style and practices as much as you can. Don't follow them down a rat hole - if you think they're doing something stupid, change it, but if it's just a stylistic thing, keep to their style as much as you can.
On several jobs I've been on, we had no rule on coding style other than "if you're making changes to an existing file/class, use the existing style, even for new code."
I follow company standards if there are any.
If there aren't any and the changes are small, I adapt to the existing style of coding.
If there are larger changes to be made and I don't like the coder's style, I will use my own.
And if the existing code is bad I will change that too.
Will you ever have a better opportunity to update existing code with a standard style? Probably not. When you are new to the code you are going to have the best chance of taking some extra time to make non-new-feature and non-bug-fix changes. The lack of standards may be discouraging but you are unlikely to have a better chance to standardize than when you first inherit the code.
It sounds like we're talking about a situation with no official style guides / best practices. In that case, as Sean said, I'd take the lead on establishing some. But... if at all possible, pick an existing, widely-used standard. It's more likely to be accepted, all the arguments are done with, and the odds of out-of-the-box tool support (editors, code review tools, etc.) greatly increase.
Getting others to adopt it will often work best from the bottom up -- write new code to the new standards, mention to others that you've done so, ask for feedback. Much easier than trying to get approval and buy-in in advance.
Within the existing, ugly project, avoid wholesale changes to existing modules. For one thing, diffs and version control will get quite confusing if a file is suddenly reindented.
If the chunk you're working on is so bad as to be unreadable, I'd do an initial checkin just to reformat it; follow that up with actual code changes.
I would apply the same refactoring standards to the code as I would if it DID match my style standards. That is, I'd ignore the style and just go on about my business.
If it's not terribly difficult to follow the style that is in the code - with regards to naming conventions, I'd go ahead and use those for new code.
However, I wouldn't bother trying to follow stuff like 'tabs should not be used', 'every line should be indented 2 spaces', etc. There are plenty of editors out there where you can 'pretty' the code whenever you need it these days.
G-Man
I think it depends highly on the specific case.
If you are a consultant on a project for a short time, you should stick to the way things are.
If you are on for a long time, try to refactor bad code into your own scheme.
If you are on for a short time but you are working on an isolated module, then use your own scheme.
Short answer is, "It depends." Here are a few factors that I'd consider important in determining whether to keep the old style or not:
1) Scope of changes. If it is close to a total re-write of the application, then it may make more sense to put in a new standard if you have one that you feel works well for you.
2) Likelihood of future changes. Will this be changed over and over again? If so, then taking some time early on may well be worth it in the end. This does require a bit of judgement and predicting the future, but it may be easy in some cases to see that there will be changes over and over again for some systems that are fairly complex.
3) How much of the code is a customization on top of a 3rd party codebase (e.g. a company's specific customizations of Oracle products for their business processes) compared to a completely home-grown application. The impact here is how much pain there may be when new versions are released and an upgrade is requested, because so much was customized.
When starting your own projects, put in the best standard that you know.
If I inherit code that has obviously never been refactored, I would take that as an opportunity to impose some of my own structure.
If people expect me to make time and cost estimates for adding functionality to the code, I'll need to be intimately familiar it, and make sure it lives up to my standards.
If the code is already well-written, that would be a blessing that I would not mess with. But in my experience, this hasn't happened very often.
