How to write a "bidirectional" makefile that keeps two files synchronized? - makefile

I'm working with a set of binary files that can be "decompiled" to, or "compiled" from, a set of INI files. Since both the binary and the INI files are checked into my repository, I use a small script to (de)compile all of them.
Our workflow usually involves editing the binary files directly and decompiling the modified binaries to INI format. However, occasionally we need to edit the INI files and compile the changes back to binaries.
The question: can I write a single makefile that detects which set was modified more recently and automatically issues the (de)compile commands in either direction to keep both sets of files in sync? I'd prefer to use common (GNU?) make features, but if there is a more specialized tool that works, I'm all ears.
(I could make two separate directives, "decompile-all" and "compile-all". I want to know if there's a single-command option.)

I don't see how that can work. Suppose it could be done in make; now you have two files foo.exe and foo.ini (you don't say what your actual filename patterns are). You run make and it sees that foo.exe is newer than foo.ini, so it decompiles the binary to build a new foo.ini. Now, you run make again and this time it sees that foo.ini is newer than foo.exe, because you just built the former, so it compiles foo.ini into foo.exe.
And so on: every time you run make it will perform an operation on every pair of files, because one or the other will always be out of date.
The only way this could work would be if you (a) tested whether the two files' last-modified times were exactly identical, and (b) had a way to reset the timestamp on the compiled/decompiled file so that it matched the file it was built from, rather than "now", which is of course the default.
The answer is that make cannot be used for this. You could, of course, write yourself a small shell script that went through every file pair, tested whether the last-modified times were identical, and, if not, ran the appropriate (de)compile command and then touch -m -r origin on the result, where origin is the file with the newer modification time, so that both end up with the same timestamp.
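A rough sketch of such a script, assuming the files pair up as NAME.ini / NAME.exe and that hypothetical compile/decompile commands do the actual conversion:

#!/bin/bash
# Sync each foo.ini / foo.exe pair in whichever direction is newer.
# "compile" and "decompile" are placeholders for the real tools.
for ini in *.ini; do
    exe="${ini%.ini}.exe"
    if [ "$exe" -nt "$ini" ]; then
        decompile "$exe" "$ini"       # binary is newer: regenerate the INI
        touch -m -r "$exe" "$ini"     # give both files the same mtime
    elif [ "$ini" -nt "$exe" ]; then
        compile "$ini" "$exe"         # INI is newer: rebuild the binary
        touch -m -r "$ini" "$exe"
    fi                                # identical mtimes: already in sync
done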

How to rebuild when the recipe has changed

I apologize if this question has already been asked; it's not easy to search for.
make has been designed with the assumption that the Makefile is kinda god-like. It is all-knowing about the future of your project and will never need any modification beside adding new source files. Which is obviously not true.
I used to make all my targets in a Makefile depend on the Makefile itself, so that if I change anything in the Makefile, the whole project is rebuilt.
This has two main limitations:
It rebuilds too often. Adding a linker option or a new source file rebuilds everything.
It won't rebuild if I pass a variable on the command line, like make CFLAGS=-O3.
I see a few ways of doing it correctly, but none of them seems satisfactory at first glance.
Make every target depend on a file that contains the content of the recipe.
Generate the whole rule with its recipe into a file destined to be included from the Makefile.
Conditionally add a dependency to the targets to force them to be rebuilt whenever necessary.
Use the eval function to generate the rules.
But all these solutions require an uncommon way of writing the recipes: either putting the whole rule as a string in a variable, or wrapping the recipes in a function that does some magic.
What I'm looking for is a way to write the rules that is as straightforward as possible, with as little additional junk as possible. How do people usually do this?
I have projects that compile for multiple platforms. When building a single project that had previously been compiled for a different architecture, one can force a rebuild manually. However, when compiling all projects for OpenWRT, manual cleanup is unmanageable.
My solution was to create a marker file identifying the platform. If it is missing, everything recompiles.
ARCH ?= $(shell uname -m)
CROSS ?= $(shell uname -s).$(ARCH)
# marker for the last built architecture
BUILT_MARKER := out/$(CROSS).built
$(BUILT_MARKER):
	-rm -f out/*.built
	touch $(BUILT_MARKER)
build: $(BUILT_MARKER)
	# TODO: add your build commands here
If your flags are too long, you may reduce them to a checksum.
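For example, a sketch building on the marker idea above (assuming GNU make and md5sum; FLAG_SUM is just an illustrative name): fold a checksum of the flags into the marker's file name, so that changing the flags invalidates the marker.

# name the marker after a checksum of the relevant flags (illustrative variable names)
FLAG_SUM := $(shell echo '$(CC) $(CFLAGS) $(LDFLAGS)' | md5sum | cut -d' ' -f1)
BUILT_MARKER := out/$(CROSS).$(FLAG_SUM).built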
"make has been designed with the assumption that the Makefile is kinda god-like. It is all-knowing about the future of your project and will never need any modification beside adding new source files."
I disagree. make was designed at a time when having your source tree sitting in a hierarchical file system was about all you needed to know about software configuration management, and it took this idea to its logical conclusion, namely that everything that exists is a file (with a timestamp). So, having linker options, locator tables, compiler flags and everything else but the kitchen sink in a file, and putting the dependencies thereof also in a file, will yield a consistent, complete and error-free build environment as far as make is concerned.
This means that passing data to a process (which is nothing other than saying that this process depends on that data) has to be done via a file; command-line arguments passed as make variables are an abuse of make's capabilities and lead to erroneous results. make clean is the technical remedy for a systemic misbehaviour. It wouldn't be necessary had the software engineer designed the make process properly and correctly.
The problem is that a clean build process is hard to design and maintain. BUT: in a modern software process, transient/volatile build parameters such as make all CFLAGS=-O3 have no place anyway, as they wreck all the good foundations of configuration management.
The only thing that can be criticised about make may be that it isn't the be-all and end-all solution to software building. I doubt a program that attempted that task would have reached even one percent of make's popularity.
TL;DR
Place your compiler/linker/locator options into separate files (at a central, prominent, logical location that is easy to maintain and understand), decide on the level of control through the granularity of that information (e.g. compiler flags in one file, linker flags in another), and write down the true dependencies for all files. Voilà: you will get exactly the necessary amount of recompilation and a correct build.
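A minimal sketch of that approach (the file names compiler.flags and linker.flags, and the source list, are only illustrative):

OBJS := main.o util.o
CFLAGS := $(shell cat compiler.flags)
LDFLAGS := $(shell cat linker.flags)

# every object depends on the compiler-options file; the link step depends on the linker-options file
%.o: %.c compiler.flags
	$(CC) $(CFLAGS) -c $< -o $@

app: $(OBJS) linker.flags
	$(CC) -o $@ $(OBJS) $(LDFLAGS)

Touch compiler.flags and only the compile steps re-run; touch linker.flags and only the link re-runs.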

strategies for backing up packages on macosx

I am writing a program that synchronizes files across file systems, much like rsync, but I'm stuck when it comes to handling packages. These are folders that the system identifies as containing a coherent set of files; Pages and Numbers can use packages rather than monolithic files, and applications, for example, are actually packages. My problem is that I want to keep the most recent version and also keep a backup copy. As far as I can see, I have two options:
I can just treat the whole thing as a regular folder and handle the contents entry by entry.
I can look at all the modification dates of all the contents and keep the complete folder tree for the one that has the most recently modified contents.
I was going to go with (2), and then I found that the iPhoto library is actually stored as a package, which would mean copying the whole library (tens or even hundreds of gigabytes) even if only one photograph was altered.
My worry with (1) is that handling the content files individually might break things. I haven't really come up with a good solution that will guarantee that the package will still work and won't involve unnecessarily huge backups in some cases. If it is just iPhoto, then I can probably put in a special case, or perhaps change strategy if the package is bigger than some user-specified limit.
Packages are surprisingly mysterious, and what the system treats as a package does not seem to be just a matter of setting an extended attribute on a folder.
It depends on how you treat the "backup" version. Do you keep two versions of each file (the current and first previous), or two versions of the sync snapshot (i.e. if a file hasn't changed between the last two syncs, you only store one version)?
If it's two versions of the sync, packages shouldn't be a big problem: just provide a way to restore the "backup" version, which, if necessary, splices together the changed files from the "backup" with the unchanged files from the current sync. There are some things to watch out for, though: make sure you correctly handle files that are deleted or added between the two snapshots.
If you're storing two versions of each file, things are much more complicated -- you need some way to record which versions of the files within the package "go together". I think in this case I'd be tempted to only store backup versions of files within the package from the last time something within the package changed. So, for example, say you sync a package called preso.key. On the second sync, preso.key/index.apxl.gz and preso.key/splash.png are modified, so the old version of those two files get stored in the backup. On the third sync, preso.key/index.apxl.gz is modified again, so you store a new backup version of it and remove the backup version of preso.key/splash.png.
BTW, another way to save space would be hard linking. If you want to store two "full" versions of a big package without wasting space, just store one copy of each unchanged file and hard-link it into both backups.
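rsync can do that hard linking for you with its --link-dest option: unchanged files in the new backup become hard links to the copies already stored in the previous one. A sketch, with placeholder paths:

# files unchanged since backup.1 are hard-linked rather than copied again
rsync -a --link-dest=/backups/backup.1 /source/ /backups/backup.0/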

command line wisdom for 2 panel file manager user

I want to upgrade my file-management productivity by replacing a two-panel file manager with the command line (bash or Cygwin). Can the command line give the same speed? Please advise on the guru way to do, for example, a copy of some file in directory A to directory B. Is it heavy use of pushd/popd? Or creating links to the most often used directories? What are the best practices and the day-to-day routine of a command-line master for managing files?
Can the command line give the same speed?
My experience is that command-line copying is significantly faster (especially in the Windows environment). Of course the basic laws of physics still apply: a file that is 1000 times bigger than one that copies in 1 second will still take about 1000 seconds to copy.
...(how to) copy some file in directory A to directory B.
Because I often have 5-10 projects that use similar directory structures, I set up variables for each subdirectory using a naming convention:
project=NewMatch
NM_scripts=${project}/scripts
NM_data=${project}/data
NM_logs=${project}/logs
NM_cfg=${project}/cfg
proj2=AlternateMatch
altM_scripts=${proj2}/scripts
altM_data=${proj2}/data
altM_logs=${proj2}/logs
altM_cfg=${proj2}/cfg
You can make this sort of thing as spartan or baroque as needed to match your theory of living/programming.
Then you can easily copy the cfg files from one project to another:
cp -p $NM_cfg/*.cfg ${altM_cfg}
Is it heavy use of pushd/popd?
Some people seem to really like that. You can try it and see what you think.
Or creating links to the most often used directories?
Links to directories are, in my experience, used more in software development, where source code expects a certain set of directory names and your installation uses different ones; making links to supply the expected paths is helpful there. For production data, a link is just one more thing that can get messed up or blow up. That's not always true; maybe you'll have a really good reason to use links, but I wouldn't start out that way just because it is possible.
What are the best practices and the day-to-day routine of a command-line master for managing files?
(Per the above, use a standardized directory structure for all projects. Have scripts save any small files to a directory your department keeps under /tmp, e.g. /tmp/MyDeptsTmpFile, named to fit your local conventions.)
It depends. If you're talking about data and logfiles, dated file names can save you a lot of time. I recommend date formats like YYYYMMDD, with _HHMMSS appended if you need the extra resolution.
Dated logfiles are very handy: when a current process seems to be taking a long time, you can look at the log file from a week, a month, or six months ago (as far back as you can afford the space) and quantify exactly how long the process took then. Logfiles should also capture all STDERR messages, so you never have to re-run a bombed program just to see what the error message was.
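For example (a sketch; myJob.sh and the logs directory are placeholders):

logfile="logs/myJob.$(date +%Y%m%d_%H%M%S).log"
./myJob.sh > "$logfile" 2>&1    # STDOUT and STDERR both end up in the dated logfile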
This is Linux/Unix you're using, right? Read the man page for the cp command installed on your machine. I recommend using an alias like alias CP='/bin/cp -pi' so you always copy a file with the same permissions and the original file's time stamp. Then it is easy to use /bin/ls -ltr to see a sorted list of files with the most recent ones showing up at the bottom (no need to scroll back to the top when you sort by time, reversed). Also, the '-i' option will warn you before you overwrite a file, and that has saved me more than a couple of times.
I hope this helps.
P.S. As you appear to be a new user, if you get an answer that helps you, please remember to mark it as accepted and/or give it a + (or -) as a useful answer.

Diff-ing windows in vim

I am working on a script that has become fairly convoluted. I suspect there are several sections that have nearly identical code. Can I (and how can I) open the file in vim, with two (or more) windows on the same buffer, and diff the contents of those windows? vimdiff seems to work only on two files. If I make a copy of the file and try to vimdiff the two versions, the diff origin remains locked on the beginning of the file. Although I can turn off the scroll locking and move the windows to the parts of the file I want to compare, the diffs do not show up. Any hints or tips? I could cut and paste the sections I want to compare into different files and then apply vimdiff, but then I risk getting lost in what section came from where when I try to patch the separate files back together, and I feel sure there must be a more straightforward, easier way.
What I usually do is diff against a copy:
:%w %.alt
:vert diffsplit %.alt
And then happily rearrange the 'alt' version so that the pseudo-matching bits get aligned.
Note that (presumably) git contains spiffy merge/diff cow-powers that should be able to detect sub-file moved block changes.
Although I haven't (yet) actually put this into practice, I have a hunch that the very nice git plugin fugitive for vim might be able to leverage some of this horsepower to make this easier. Note: I fully expect this to require scripting before being usable, but I still thought it would be nice to share the idea (perhaps you can share a script if you get to it first!).
An alternative solution that I've been using occasionally, and which works very nicely in my opinion, is linediff.vim.
It allows you to use visual mode to select two bodies of text from arbitrary buffers (or the same one, for that matter) and run vimdiff on them. The beauty of it is that when you edit and save the temporary diff buffers, you update the original buffers with the changes, without saving them.
One of my use-cases is when I'm resolving merge issues related to script refactoring and reordering, where a function has been moved and perhaps also modified. In order to make sure you do not lose any of the modifications coming in from either ancestor, you diff the two versions of the function alone by visually selecting them and running the linediff command.
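If I remember the plugin's interface correctly, the workflow is roughly this (treat it as a sketch rather than gospel):

" visually select the first version of the function, then:
:'<,'>Linediff
" visually select the second version (same buffer or another), then:
:'<,'>Linediff
" the two selections open in a diff view; writing those buffers updates the originals
" when you are finished:
:LinediffReset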

How to programmatically find the difference between two directories

First off: I am not necessarily looking for Delphi code, so spit it out any way you want.
I've been searching around (especially here) and found a bit about people looking for ways to compare two directories (including subdirectories), though they were using byte-by-byte methods. Second off, I am not looking for a difftool; I am "just" looking for a way to find files which do not match and, just as important, files which are in one directory but not the other, and vice versa.
To be more specific: I have one directory (the backup folder) which I constantly update using FindFirstChangeNotification. However, the first time I need to copy all the files, and I also need to check the backup directory against the original when the application starts (in case something happened while the application wasn't running, or FindFirstChangeNotification didn't catch a file change). To solve this I am thinking of creating a CRC list for the backed-up files, then running through the original directory computing the CRC for every file, and finally comparing the two CRC lists. Then somehow look for files which are in one directory and not the other (again, vice versa).
Here's the question: Is this the fastest way? If so, how would one (roughly) get the job done?
You don't necessarily need CRCs for each file; for most normal purposes you can just compare the "last modified" date of every file. It's WAY faster. If you need additional safety, you can also compare the lengths. You get both of these metrics for free with the find functions.
And in your change notification, you should probably add the files to a queue and use a timer object to copy the new queued files every ~30sec or something, so you don't bog down the system with frequent updates/checks.
For additional speed, use the Win32 functions wherever possible and avoid any Delphi find/copy/getfileinfo functions. I'm not familiar with the Delphi framework, but, for example, the C# equivalents are WAY WAY WAY slower than the Win32 functions.
Regardless of your "not looking for a difftool", are you opposed to using Cygwin with its diff command in the shell? If you are open to this, it's quite easy, particularly using diff with the -r "recursive" option.
The following generates the differences between two Rails installs on my machine, and greps out not only information about differences between files but also, specifically by grepping for 'Only', files that are in one directory but not the other:
$ diff -r pgnindex pgnonrails | egrep '^Only|diff'
Only in pgnindex/app/controllers: openings_controller.rb
Only in pgnindex/app/helpers: openings_helper.rb
Only in pgnindex/app/views: openings
diff -r pgnindex/config/environment.rb pgnonrails/config/environment.rb
diff -r pgnindex/config/initializers/session_store.rb pgnonrails/config/initializers/session_store.rb
diff -r pgnindex/log/development.log pgnonrails/log/development.log
Only in pgnindex/test/functional: openings_controller_test.rb
Only in pgnindex/test/unit: helpers
The fastest way to compare one directory on the local machine to a directory on another machine thousands of miles away is exactly as you propose:
generate a CRC/checksum for every file
send the name, path, and CRC/checksum for each file over the internet to the other machine
compare
Perhaps the easiest way to do that is to use rsync with the "--dry-run" or "--list-only" option.
(Or use one of the many applications that use the rsync algorithm, or compile the rsync algorithm into your application.)
cd some_backup_directory
rsync --dry-run myname@remote_host:latest_version_directory .
For speed, the default rsync assumes, as Blindy suggested, that two files with the same name and the same path and the same length and the same modification time are the same.
For extra safety, you can give rsync the "--checksum" option to ignore the length and modification time and force it to compare (the checksum of) the actual contents of the file.
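For example, extending the command above (same placeholder host and directory; --itemize-changes just makes the differences easier to read):

rsync --dry-run --recursive --checksum --itemize-changes myname@remote_host:latest_version_directory .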
