I have a build process for a large enterprise system comprising several dozen separate EXEs and DLLs, written in multiple languages: C, C++, Fortran, Python, Awk, and a couple more. The build scripts are 4DOS batch processes that have evolved over four decades. They are large and unwieldy and need constant care and feeding.
I must keep the Visual Studio solution and project files as the basic compile/link entities. What's the best tool for wrapping these disparate languages together? 4DOS is very old and cumbersome.
EDIT:
Thanks, gang. I think I'll try SCons first because it's Python. We have plenty of people well versed in Python to be able to update and maintain it. I'm 61 now and it's not going to be me supporting this in the long term. I don't like anything requiring Java or XML, because those are not languages already in our product mix and we have enough in play.
Those blog posts were good. He concluded that SCons was best but simply too slow for his purposes. I'm not looking for speed in nightly builds. It's got until 7 AM. I want readability and maintainability.
For example, Apache Ant.
Ant is a good choice. I would also be tempted to try Rake.
I think the best choice is NAnt and MSBuild.
SCons, perhaps?
These may be a little outdated - the build systems might have evolved quite a bit since - but they should at least give you a better idea of what to expect:
The Quest for the Perfect Build System
The Quest for the Perfect Build System (Part 2)
Personally, I never needed anything special that couldn't be achieved with VS project/solution files, makefiles, and batch files, so I won't be recommending anything in particular.
SCons, definitely. It handles Fortran and C natively, and it is Python-based, so it shouldn't have any problem with Python either (I've never used it for Python, though, so I can't speak from experience). It is also much more readable than the majority of build tools out there.
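To give a flavor of that readability, here's a minimal SConstruct sketch - the file and solution names are hypothetical, and the devenv wrapping is just one possible way to keep your existing Visual Studio solutions as the compile/link entities:

    # SConstruct -- a minimal sketch; file and solution names are hypothetical.
    env = Environment()  # SCons auto-detects the C, C++ and Fortran toolchains

    # Mixed-language program: SCons picks the compiler per file suffix and
    # works out dependency scanning and the link line itself.
    solver = env.Program('solver', ['main.c', 'kernel.f90'])

    # Keep an existing Visual Studio solution as the compile/link entity by
    # wrapping it in a Command builder that shells out to devenv.
    vs_build = env.Command(
        'build/vs.stamp',            # pseudo-target so SCons can track the step
        'Enterprise.sln',            # hypothetical solution name
        'devenv $SOURCE /build Release && echo done > $TARGET')

    env.Depends(vs_build, solver)    # make cross-step ordering explicit if needed

Since the whole file is ordinary Python, your Python-literate staff can refactor it with functions and modules like any other script.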
I know Maven isn't known to focus on anything but Java, but perhaps it might at least be worth mentioning. There has been some work towards enabling at least C/C++. Compared to Ant, it's pluggable in a similar fashion, but it's declarative rather than imperative, with standardized dependency management and a build-result repository which may even be distributed.
Ant + terp for the C++ portions. terp plays nicely with Visual Studio as well as with many other C++ compilers on many platforms. Ant requires Java, though, if only as the hosting technology. I don't know whether that is a no-no with your requirements or whether you just don't want to start writing Java code.
What is the best way to unpack PE files? I've seen some tools from 7 years ago, like Quick Unpack. Is there anything more recent? Or is it better to run different tools for different packers since individual unpackers are likely more up-to-date?
There is no one-size-fits-all solution.
The latest "all in one" unpackers are Quick Unpack and GUnpacker but they are both getting rather old.
There are PE-sieve and mal_unpack from hasherezade; mal_unpack can work in an automated fashion and is basically just an automated version of PE-sieve. PE-bear, another tool from hasherezade, helps you re-align sections after doing a dump; it requires that you know how to unpack manually, but it makes the process much easier.
There is a new online tool for unpacking malware, unpac.me, from the OpenAnalysis team, which can unpack just about anything.
Successful malware distributors are not using public crypters as often, since heuristic detection of public packers is too easy, which is why we have seen a reduction in all-in-one unpacker tools. In addition, Themida and VMProtect are the standard now, and as they continue to add more features they become more difficult to unpack every day. With the new virtualization features, automated unpacking is becoming almost impossible.
Even though Quick Unpack is old, I would not underestimate its power even today. As long as you find the right setup, this tool will produce launchable dumps of EXEs/DLLs packed by dozens of known packers, and even unknown ones!
Make sure you are running an old OS (Windows XP through 7), either natively or in a VM, and don't forget that Quick Unpack is a dynamic unpacker (i.e. it can hardly be used for malware analysis). Old does not mean bad :)
This concerns packers in the classical sense of the term.
If you are looking for a "generic unpacker" for modern protectors (such as VMProtect, Themida, Enigma, Obsidium, etc.), then I do not think one will ever be made. There are some specific tools (both private and public) which can help you partially automate the "unpacking", but the majority of the work still needs to be done by hand to remove these kinds of protectors. But again, it depends on what you want to see in the end (analysable code, a de-obfuscated dump, a fully launchable reconstruction, ...).
I'm looking into multithreading, and GCD seems like a much better option than manually writing a solution using pthread.h and pthreads-win32. However, although it looks like libdispatch is either working, or soon going to be working, on most newer POSIX-compatible systems... I have to ask, what about Windows? What are the chances of libdispatch being ported to Windows? What are the barriers preventing that from happening?
If it came down to it, what would I need to do to perform that port?
Edit: Some things I already know, to get the discussion started:
We need a blocks-compatible compiler that will compile on Windows, no? Will PLBlocks handle that?
Can we use the LLVM blocks runtime?
Can’t we replace all the pthread.h dependencies in userspace libdispatch with APR calls, for portability? Or, alternatively, use pthreads-win32 I suppose…
Edit 1: I am hearing that this is completely and totally impossible, ever, because libdispatch depends (somehow) on kqueue, which can’t be made available on Windows… does anybody know if this is true?
Take a look at: http://opensource.mlba-team.de/xdispatch/
This project (and other third-party libs) brings libdispatch to platforms other than Mac OS X (Windows, Linux).
https://github.com/DrPizza/libdispatch
The Windows equivalent of libdispatch, from my basic understanding of it, is the Concurrency Runtime for unmanaged code and a collection of technologies collectively known as Parallel Extensions for managed code. It appears to me that GCD maps pretty well to both of these, since they both abstract work units (or "tasks") in a similar way.
From a bit of research, it appears that there's already a fair bit of interest in a port, but that port would be a fairly drastic undertaking and might end up being basically just another implementation of the API, not actually sharing significant code with the original libdispatch. I did see some proposals to base libdispatch on the Apache Portable Runtime instead of POSIX, which would make it easier to bring it cross-platform to Windows, but even this would not be an easy change.
In any case, this would be by no means a small undertaking.
I think that rather than libdispatch-on-pthreads and pthreads-on-Win32, or libdispatch-on-APR and APR-on-Win32, it might be better to implement libdispatch directly on the Win32 Thread Pool API. The good news is that the two APIs are similar enough that you could probably do the port yourself. The bad news is that there would probably be lots of corner cases where there are small semantic mismatches that make exact behavior hard to achieve.
What forces are at work keeping crufty old Make (with or without makefile generator tools) prominent as a build tool? Is it deficiencies in alternatives that keep them from being widely adopted, or insufficient publicity, or does something about Make keep it in place?
Despite Make's many weaknesses and difficulties dealing with large projects (e.g. see http://freshmeat.net/articles/what-is-wrong-with-make), it appears to still be more widely used than newer, improved alternatives such as SCons, Jam, Rake, Cook, and others.
Are there measurable benefits to the alternatives, or are the "market shares" due mostly to opinion and experience of team leaders?
Ubiquity: I like Make because I can trust it will be available where I need it, i.e. installed or easily installable on the target machine.
It's widely available, well documented, concise, and powerful - and best of all, no XML!
I've been using it for close to 15 years and still haven't found something better. The coolest thing I've done with it is to have a master makefile generate makefiles for sub-projects on the fly.
Regarding your question of which forces are keeping Make alive... it's the force of habit.
simplicity - easy to do simple things
ubiquity - some version is on your system
speed - fast enough for most things
expressive - pretty good match to the job
non-obvious complexity - its problems mainly surface in large projects
Its availability on a large number of platforms probably helps. If you're writing a product for multiple platforms, knowing it will always be there is a plus point. It's a pain to have to port your build tool to a new platform before you can build your own project.
Hm, I've never used make as a build system. Used for other things, it's a unique dataflow-programming language: you describe a set of nodes, each serving a specific purpose, describe their behavior, and let the manager handle and control the data flow between them.
We used SCons on a relatively large project to replace make, and found that it was a reasonably flexible system that allowed us to do some very necessary (but very unfortunate) hacking to get things to build the way we needed them to. Also, make is -strange-.
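For what it's worth, here's a minimal sketch of how SCons usually scales to a larger tree - a top-level SConstruct delegating to per-directory SConscript files (the directory and target names here are hypothetical):

    # SConstruct (top level) -- directory names below are hypothetical.
    env = Environment()
    Export('env')                    # share one build environment downward
    SConscript(['libcore/SConscript', 'app/SConscript'])

    # libcore/SConscript -- builds a static library from its own sources.
    Import('env')
    lib = env.StaticLibrary('core', Glob('*.c'))
    Export('lib')

    # app/SConscript -- links the application against that library.
    Import('env', 'lib')
    env.Program('app', ['main.c'], LIBS=[lib])

Each SConscript runs relative to its own directory, but the dependency graph stays global, which avoids the classic recursive-make problem of incomplete dependency information.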
I think that for a big shift to another tool to occur, first the tool would have to be created... one that is significantly better. And to effect change, either one of the Linux distros or one of the major packages would have to switch to it, probably keeping the old one around for compatibility. I would envision that the new build tool would be capable of generating the legacy makefiles. Linus already demonstrated how well he can solve source code control with git; I have a pretty good hunch he could come up with something pretty cool here too and tie it in with git.
Currently, as I'm sure most of you are aware, the Flex (EDIT: Flex 3) compiler is extraordinarily slow. It does however have an API. My question is: are there alternative (possibly C/C++ based) compilers that are faster than the current Adobe one?
I realize compilers aren't something you can pump out in a few days, but if no alternative is available, do you think it would be worth the time to implement a faster Flex compiler?
The compiler is supposed to be much faster in Flex 4. But I haven't verified this with actual real-world use cases yet. If you try it then let me know what you find.
You should definitely check out HFCD (http://bytecode-workshop.com/). It supports both Flex 3 and 4, and it's faster than either because it compiles multiple applications at the same time on a multi-core computer. HFCD is also TCP/IP-enabled, which means you can run the HFCD compiler on a second machine (possibly with more CPU and memory).
You may want to have a look at HFCD, which analyzes your project structure and spawns multiple compiler tasks in parallel. This, however, only improves performance if your project consists of multiple smaller modules. An Eclipse plug-in for HFCD exists as well.
I suspect that it would be worthwhile for someone to implement a complete alternative compiler and dev infrastructure (Flex Builder isn't that strong to begin with). Having said that, I know of no such project for the AS3 language.
If you are willing to go to a language that is only marginally different (and from the looks of it, just plain better), then I'd suggest taking a look at Haxe. From what I understand, the Haxe compiler is quite a bit faster than the Flex compiler.
There is a nice plugin for Aptana for developing AIR applications.
I did some asking around and someone else told me about this:
http://www.deitte.com/archives/2008/10/a_faster_flex_3.htm
This is related to what @James Ward said, that the Flex 4 compiler is supposed to be faster. This guy back-ported some of the changes from Flex 4 to the Flex 3.0/3.1/3.2 SDKs and claims a 25% increase in speed.
I've never tried it; the person I talked to said he had and it was giving him some problems, but it could have been something he was doing wrong.
If someone uses this, please do post your experiences with it.
I'm creating several NSIS installers, and as my expertise grows I'm no longer happy with just making things work; I would like to see if there are some best practices or coding standards around this language, like how to write conditionals, variable names, uninstallers, etc.
As far as I know, there is no specific coding standard for NSIS available -- but there are a lot of tutorials and examples to learn from. As with every other language you're trying to master, I think reading others' code helps a lot and inspires you to think in different directions.
From my own experience with NSIS, I can also suggest tidying up your installer scripts regularly. As you learn new things, old workarounds become obsolete and can be replaced by proper solutions. Also watch out for new developments: before we were able to use nsDialogs, InstallOptions was the way to go for user-defined dialogs -- and now it's so much easier to do with less code.
Since you're aiming at creating several installers, I'd also try to reuse as much code as possible across the different installers. Modularising shared functionality is possible with .nsh files and fosters a good and clean code base.