What forces are at work keeping crufty old Make (with or without makefile generator tools) prominent as a build tool? Is it deficiencies in alternatives that keep them from being widely adopted, or insufficient publicity, or does something about Make keep it in place?
Despite Make's many weaknesses and difficulties dealing with large projects
(e.g. see http://freshmeat.net/articles/what-is-wrong-with-make), it still appears to be more widely used than newer, improved alternatives such as SCons, Jam, Rake, Cook, and others.
Are there measurable benefits to the alternatives, or are the "market shares" due mostly to opinion and experience of team leaders?
Ubiquity: I like Make because I can trust it will be available where I need it, i.e. installed or easily installable on the target machine.
It's widely available, well documented, concise, and powerful. Best of all: no XML!
I've been using it for close to 15 years and still haven't found anything better. The coolest thing I've done with it is to have a master makefile generate makefiles for sub-projects on the fly.
Regarding your question of which forces are keeping Make alive: it's the force of habit.
Simplicity - easy to do simple things
Ubiquity - some version is on your system
Speed - fast enough for most things
Expressiveness - a pretty good match to the job
Non-obvious complexity - problems mainly show up in large projects
Its availability on a large number of platforms probably helps. If you're writing a product for multiple platforms, knowing it will always be there is a plus. It's a pain to have to port your build tool to a new platform before you can build your own project.
Hm, I've never used make as a build system.
Other than that, it's a unique dataflow programming language: you describe a set of nodes, each serving a specific purpose, describe their behavior, and let the manager handle and control the data flow between them.
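To make that concrete, here is a toy sketch in Python (purely illustrative; the file names and rules are made up, and this is nothing like make's actual implementation) of that dataflow idea: nodes declare their inputs, and the manager re-runs a node only when an input is newer than its output:

```python
import os

# Toy model of make's dataflow: each node (target) declares its inputs
# and an action; the manager walks the graph depth-first and re-runs a
# node only when an input is newer than the node's output.
RULES = {
    "app":    (["main.o", "util.o"], lambda: print("link    app")),
    "main.o": (["main.c"],           lambda: print("compile main.c")),
    "util.o": (["util.c"],           lambda: print("compile util.c")),
}

def mtime(path):
    # Missing files get a sentinel so they always count as out of date.
    return os.path.getmtime(path) if os.path.exists(path) else -1.0

def build(target):
    deps, action = RULES.get(target, ([], None))
    for dep in deps:
        build(dep)  # bring every input up to date first
    out = mtime(target)
    if action and (out < 0 or any(mtime(d) > out for d in deps)):
        action()    # node is stale: run it

build("app")
```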
We used SCons on a relatively large project to replace make, and found that it was a reasonably flexible system that allowed us to do some very necessary (but very unfortunate) hacking to get things to build the way we needed them to. Also, make is *strange*.
I think that for a big shift to another tool to occur, that tool would first have to be created... and be significantly better. Then, to effect change, either one of the Linux distros or one of the major packages would have to switch to it, probably keeping the old one around for compatibility. I would envision the new build tool being capable of generating legacy makefiles. Linus already demonstrated how well he can solve source code control with Git; I have a pretty good hunch he could come up with something pretty cool that ties in with Git.
What is the best way to unpack PE files? I've seen some tools from 7 years ago, like Quick Unpack. Is there anything more recent? Or is it better to run different tools for different packers since individual unpackers are likely more up-to-date?
There is no one-size-fits-all solution.
The latest "all in one" unpackers are Quick Unpack and GUnpacker, but they are both getting rather old.
There are pe-sieve and mal_unpack from hasherezade; mal_unpack is essentially an automated version of pe-sieve and can run unattended. PE-bear, another tool from hasherezade, helps you re-align sections after doing a dump; it requires that you know how to unpack manually, but it makes the process much easier.
There is a new online tool for unpacking malware called unpac.me, from the OpenAnalysis team, which can unpack just about anything.
Successful malware distributors are not using public crypters as often, since heuristics for detecting public packers are too easy, which is why we have seen a reduction in all-in-one unpacker tools. In addition, Themida and VMProtect are the standard now, and as they continue to add more features they become more difficult to unpack every day. With the new virtualization features, automated unpacking is becoming almost impossible.
Even though Quick Unpack is old, I would not underestimate its power today. As long as you find the right setup, this tool will produce launchable dumps of EXEs/DLLs packed by dozens of known packers, and even unknown ones!
Make sure you are running an old OS (Windows XP through 7), either natively or virtually, and don't forget that Quick Unpack is a dynamic unpacker (i.e. it can hardly be used for malware analysis). Old does not mean bad :)
This concerns packers in their classical definition.
If you are looking for a "generic unpacker" for modern protectors (such as VMProtect, Themida, Enigma, Obsidium, etc.), then I do not think one will ever be made. There are some specific tools (both private and public) which can help you partially automate the "unpacking", but the majority of the work still needs to be done by hand to remove these kinds of protectors. But again, it depends on what you want to see in the end (analysable code, de-obfuscated dump, fully launchable reconstruction, ...).
Our dev team uses VS.NET for app development and TortoiseSVN/VisualSVN for version control. It seems that almost every day issues arise with the working copy or the repository getting screwed up, and folks just throw up their hands and call me when it happens. There are definitely human factors at work (SVN works as it should) but I'm tired of playing SVN helpdesk to the dev team. Can anyone recommend a better/more intuitive setup for version control?
Agent SVN works well for me. It integrates nicely with Visual Studio.
SVN is about as simple as version control systems get. Problems should only arise when dealing with merge operations... those can be tricky.
If you don't address the "human factors", it won't matter which version control system you use; you will always be the helpdesk. To address these kinds of problems, you typically need to:
Set up a wiki with common "recipes" for version control tasks.
Include a workflow diagram for how changes are made to your code (for those who don't like to read).
Host a training session that is specifically designed for your users (use the wiki material).
When helping someone with a problem, be sure to make them perform the actual fix. Don't just do it for them; talk them through it instead.
Make a point of directing users to product documentation when helping them.
Introducing a new version control system into any organization should include the items I listed. I realize it is extra work for those who get it done, but it does save you from long "support" hours down the road.
"Can anyone recommend a better/more intuitive setup for version control?"
Better? Yes. More intuitive? That's debatable. Look into distributed version control software, namely Mercurial or Git. Both have freely available plugins to integrate with Visual Studio. And if you can manage spending a little money, I've heard very good things about Fog Creek's Kiln.
As for your issues with SVN, I have a couple of tips. The first is to make sure you keep everyone synced on the same version of the product. It tends to update frequently, and this can be tricky, as you also don't want to fall too far behind the current version. The second is that we used to have big problems with Tortoise trying to cache icon overlays on mapped network drives. There is an option you can turn off somewhere that suddenly made things way more stable, but that was at my last job, and I don't remember the exact setting any more.
I think you already gave the answer in your question - sort out the "human factors" by providing appropriate training. Version control for software development doesn't get much simpler than SVN, so from the way your question is phrased, my guess would be that said human factors are just going to find other ways of making your life interesting.
If you have issues with your repository getting screwed up (committing on tags, wrong commit messages, ...), one of the easiest ways is to play it the hard way: put hooks on the server to enforce policies. You can have a look at the official documentation.
Basically, this is an easy way to enforce naming and formatting rules and avoid a lot of human issues (committing on tags, messing with externals, ...).
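For example, a minimal pre-commit hook might look like the sketch below (written here in Python; the hook name and the svnlook commands are standard SVN, but the two policy checks and the tags/ path are just illustrative assumptions about your repository layout):

```python
#!/usr/bin/env python3
# pre-commit: Subversion invokes this with <repos-path> <txn-name>.
# A non-zero exit rejects the commit; stderr is shown to the user.
import subprocess
import sys

def svnlook(subcmd, repo, txn):
    # Query the in-flight transaction via the standard svnlook tool.
    return subprocess.check_output(
        ["svnlook", subcmd, "-t", txn, repo], text=True)

def main():
    repo, txn = sys.argv[1], sys.argv[2]

    # Example policy 1: refuse empty log messages.
    if not svnlook("log", repo, txn).strip():
        sys.stderr.write("Commit rejected: empty log message.\n")
        return 1

    # Example policy 2: treat everything under tags/ as read-only.
    for line in svnlook("changed", repo, txn).splitlines():
        action, path = line.split(None, 1)
        if path.startswith("tags/") and action[0] in ("U", "D"):
            sys.stderr.write("Commit rejected: tags are read-only.\n")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```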
I’m looking into multithreading, and GCD seems like a much better option than manually writing a solution using pthread.h and pthreads-win32. However, although it looks like libdispatch either works, or soon will work, on most newer POSIX-compatible systems… I have to ask: what about Windows? What are the chances of libdispatch being ported to Windows? What are the barriers preventing that from happening?
If it came down to it, what would I need to do to perform that port?
Edit: Some things I already know, to get the discussion started:
We need a blocks-compatible compiler that will compile on Windows, no? Will PLBlocks handle that?
Can we use the LLVM blocks runtime?
Can’t we replace all the pthread.h dependencies in userspace libdispatch with APR calls, for portability? Or, alternatively, use pthreads-win32, I suppose…
Edit 1: I am hearing that this is completely and totally impossible, ever, because libdispatch depends (somehow) on kqueue, which can’t be made available on Windows… does anybody know if this is true?
Take a look at http://opensource.mlba-team.de/xdispatch/
This project (and other third-party libs) brings libdispatch to platforms other than Mac OS X (Windows, Linux).
https://github.com/DrPizza/libdispatch
The Windows equivalent of libdispatch, from my basic understanding of it, is the Concurrency Runtime for unmanaged code and a collection of technologies collectively known as Parallel Extensions for managed code. It appears to me that GCD maps pretty well to both of these, since they both abstract work units (or "tasks") in a similar way.
From a bit of research, it appears that there's already a fair bit of interest in a port, but that a port would be a fairly drastic undertaking and might end up being basically just another implementation of the API, not actually sharing significant code with the original libdispatch. I did see some proposals for porting libdispatch to be based on the Apache Portable Runtime instead of POSIX, which would make it easier to bring to Windows, but even that would not be an easy change.
In short, this would be by no means a small undertaking.
I think that rather than libdispatch-on-pthreads and pthreads-on-Win32, or libdispatch-on-APR and APR-on-Win32, it might be better to implement libdispatch directly on the Win32 Thread Pool API. The good news is that the two APIs are similar enough that you could probably do the port yourself. The bad news is that there would probably be lots of corner cases where there are small semantic mismatches that make exact behavior hard to achieve.
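To illustrate the shape of the problem (not any real port), here is a toy sketch in Python of a GCD-style serial queue layered on a generic thread pool; a Win32 port would do the same dance against the native thread pool API, and this is exactly where the semantic corner cases (FIFO ordering, one-at-a-time execution on arbitrary pool threads) live:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor
import threading

# Toy GCD-style serial queue on top of a generic thread pool: items run
# one at a time in FIFO order, but on whatever pool thread is free.
class SerialQueue:
    def __init__(self, pool):
        self._pool = pool
        self._pending = deque()
        self._lock = threading.Lock()
        self._draining = False

    def dispatch_async(self, fn):
        with self._lock:
            self._pending.append(fn)
            if not self._draining:      # only one drainer at a time
                self._draining = True
                self._pool.submit(self._drain)

    def _drain(self):
        while True:
            with self._lock:
                if not self._pending:
                    self._draining = False
                    return
                fn = self._pending.popleft()
            fn()  # run outside the lock, strictly one item at a time

pool = ThreadPoolExecutor(max_workers=4)
q = SerialQueue(pool)
for i in range(3):
    q.dispatch_async(lambda i=i: print("item", i))
pool.shutdown(wait=True)
```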
At work I use ClearCase and SourceSafe, but I have found some time to code for myself en route, thanks to a disposable laptop.
However, I wish I had a lightweight VCS on my system with which I could make changes to my code during the commute and then push them to, or grab them from, my Linux systems.
I use git on my home system, but I can't really get it working on Windows. I don't want all that Cygwin hackery.
If it does not run natively on Windows, it just won't do.
What have you guys tried on your Windows system? Something that YOU use.
The big player at the moment seems to be Mercurial?
What would be best for a one (or maybe two) man team?
I just need to maintain:
Versioned copies of source code.
Checking in and out should be as unobtrusive as possible.
I am looking for a multiple-undo kind of feature (like that in an Emacs buffer), but persistent.
I really like the way git keeps track of lines moving between files in a source code set.
I should be able to move parts/sub-trees of the source tree (each sub-tree corresponds to a module/plugin of the main software I am building) to an archival system, either completely or partially, and restore them from the archive as and when required, and the system should track any changes to this tree as well.
I actually want to experiment with my code as much as possible without manually keeping track of what I modified and what I need to undo once I try out some idea, so that I can get back to the point I want to continue from.
Note: a similar topic came up a year ago: DVCS Choices - What's good for Windows?
I hope things have changed, and I really want people to share their own real-life experiences, not something they recommend without having used it, or merely think will work.
Bazaar and Mercurial both work very well on Windows. I posted in the question you linked, and since then, both have improved their Windows support even more. Using them is easy and flawless, and they even have GUIs if you swing that way.
I, for one, have switched from Bazaar to Git, and I've been pleased.
If you have a ClearCase background, why don't you take a look at Plastic SCM? Check this link; it shows how it works in a distributed setup (and of course all the basic operations): http://codicesoftware.blogspot.com/2010/03/distributed-development-for-windows.html
You won't miss any of the "good" ClearCase features, but all the shortcomings are simply gone: faster, installs in 45 seconds, no cumbersome setup for a mixed Win/Linux scenario, built-in ACLs, excellent branching and merging, a much better common-ancestor algorithm, visualizations, a better GUI. And you still have "selectors" in case you miss config_specs, though they're not mandatory.
I have a build process for a large enterprise system comprising several dozen separate EXEs and DLLs. These use multiple languages: C, C++, Fortran, Python, Awk, and a couple more. The build scripts are 4DOS batch processes which have evolved over four decades. They are large and unwieldy and need constant care and feeding.
I must keep the Visual Studio solution and project files as the basic compile/link entities. What's the best tool for wrapping these disparate languages all together? 4DOS is very old and cumbersome.
EDIT:
Thanks, gang. I think I'll try SCons first because it's Python. We have plenty of people well versed in Python to be able to update and maintain it. I'm 61 now, and it's not going to be me supporting this in the long term. I don't like anything requiring Java or XML, because those are not languages already in our product mix and we have enough in play.
Those blog posts were good. He concluded that SCons was best, but simply too slow for his purposes. I'm not looking for speed in nightly builds; the build has until 7 AM. I want readability and maintainability.
For example, Apache Ant.
Ant is a good choice. I would also be tempted to try Rake.
I think the best choices are NAnt and MSBuild.
SCons perhaps?
These may be a little outdated - the build systems might have evolved quite a bit - but they should at least give you a better idea of what to expect:
The Quest for the Perfect Build System
The Quest for the Perfect Build System (Part 2)
Personally, I never needed anything special that couldn't be achieved with VS project/solution files, makefiles, and batch files, so I won't be recommending anything in particular.
SCons, definitely. It handles Fortran and C naturally, and it is Python-based, so it shouldn't have any problem with Python either (I've never used it for Python, though, so I can't tell from experience). Also, it is much more readable than the majority of build tools out there.
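For a flavor of what that looks like, here is a minimal SConstruct sketch (SConstruct files are plain Python; all file names and flags here are hypothetical placeholders):

```python
# SConstruct - SCons picks the right compiler per source suffix.
env = Environment(CCFLAGS=['-O2'])

# A C library and a Fortran library built with the same environment.
libutil = env.StaticLibrary('util', ['util.c', 'helpers.c'])
libnum = env.StaticLibrary('numerics', ['solver.f90'])

# A C++ executable linked against both.
env.Program('app', ['main.cpp'], LIBS=[libutil, libnum], LIBPATH=['.'])
```

Running `scons` at the top of the tree builds everything; `scons -c` cleans up.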
I know Maven isn't known for focusing on anything but Java, but perhaps it is at least worth mentioning. There has been some work toward enabling at least C/C++. Compared to Ant, it's pluggable in a similar fashion, but it's declarative rather than imperative, with standardized dependency management and a build-result repository, which may even be distributed.
ANT + terp for the C++ portions. terp plays nicely with Visual Studio as well as with many other C++ compilers on many platforms. Ant requires Java, though, if only as the hosting technology. I don't know whether that is a no-no given your requirements, or whether you just don't want to start writing Java code.