Version control of a digital image and the file that produced it

I'm mostly working on my own with Blender. I create a blend file and use it to render and save an image. Then I fiddle some more, render another image, hopefully save the Blender file, and then repeat. I try to keep the version numbers and file names for images and blend files in sync, but I often drift out of sync. What I need to be able to do is browse all the image versions and reliably find the blend file that produced each one.
This seems like a case where I could use a version control system such as git, making a commit after each render.
Is there an image browser that can view images in different commits or a git viewer that can show images?
Or is there a better way to do this?

While git keeps the information of all versions around, it doesn't keep all versions in the working tree simultaneously.
And its interface is meant to show differences between revisions.
The problem is that, unlike text files, there is no good and common definition of what a "diff" between two images should mean. You could try to devise a textconv option for jpg files, but I wonder whether that would be useful.
If you want to be able to see different version side-by-side, you should probably use some kind of numbering scheme and a script to help you with that.
Suppose you have a blender file named foo.blend that you are working on. This is your working copy. You should write a script that does two things:
Copy foo.blend to e.g. foo-027.blend, if the last existing version in the same directory is foo-026.blend, unless there is no difference between foo.blend and foo-026.blend.
Render foo-027.blend to e.g. foo-027.jpg using blender in batch rendering mode.
Call this script whenever you want to save a certain version. You also might want to make previous revisions read-only.
Python would be a good candidate to write this script in, since blender supports Python scripting. You might even be able to call it from a menu or shortcut.
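For example, here is a minimal Python sketch of such a script. It assumes the working file is called foo.blend, that saved versions are numbered foo-001.blend, foo-002.blend and so on in the same directory, and that the blender executable is on your PATH; the function names are just for illustration.

import filecmp
import re
import shutil
import subprocess
from pathlib import Path

WORK_FILE = Path("foo.blend")   # the working copy you keep editing

def next_snapshot():
    # Find the highest existing foo-NNN.blend and return the path of the next one.
    pattern = re.compile(r"foo-(\d{3})\.blend$")
    numbers = [int(m.group(1))
               for p in WORK_FILE.parent.glob("foo-*.blend")
               if (m := pattern.match(p.name))]
    return WORK_FILE.with_name("foo-%03d.blend" % (max(numbers, default=0) + 1))

def snapshot_and_render():
    versions = sorted(WORK_FILE.parent.glob("foo-[0-9][0-9][0-9].blend"))
    if versions and filecmp.cmp(WORK_FILE, versions[-1], shallow=False):
        print("No change since", versions[-1].name)
        return
    snapshot = next_snapshot()
    shutil.copy2(WORK_FILE, snapshot)
    snapshot.chmod(0o444)   # keep saved revisions read-only
    # Batch-render frame 1 of the snapshot; Blender appends the frame number,
    # so the image comes out as e.g. foo-027-0001.jpg next to the blend files.
    subprocess.run(["blender", "-b", str(snapshot),
                    "-o", "//" + snapshot.stem + "-", "-F", "JPEG", "-x", "1",
                    "-f", "1"], check=True)

snapshot_and_render()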

With enough storage space, Git is okay with binary files. But Git doesn't store files in their native form; instead they are packed into "blobs" in an unreadable format. So you will have to check out older versions to have them at your disposal.
You can have a separate directory to "retrieve" files from older revisions. Say, you've got a repo in your workspace directory.
cd /path/to/my/workspace/directory
git --work-tree=/path/to/another/directory checkout <revision> <filenames separated by spaces>
Or, from another/directory:
cd /path/to/another/directory
git --git-dir=/path/to/my/workspace/directory/.git checkout <revision>
Now your workspace directory stays untouched, while another/directory has the older version. You can now compare and do whatever you want.
Of course, you can also just check out an older revision in the same directory:
git checkout <revision>
# and then come back:
git checkout master
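If you want to script this for the original image-browsing use case, here is a rough Python sketch that pulls files from a given revision into a separate compare directory while leaving the working tree alone; the paths and the fetch_old_version helper are placeholders, not an existing API.

import subprocess
from pathlib import Path

REPO = Path("/path/to/my/workspace/directory")   # the repo containing .git
COMPARE = Path("/path/to/another/directory")     # where old versions are placed

def fetch_old_version(revision, *files):
    # Check out the named files from an older revision into COMPARE,
    # leaving the working tree inside REPO untouched.
    COMPARE.mkdir(parents=True, exist_ok=True)
    subprocess.run(["git", "--git-dir", str(REPO / ".git"),
                    "--work-tree", str(COMPARE),
                    "checkout", revision, "--", *files], check=True)

# e.g. fetch_old_version("HEAD~3", "foo-024.jpg", "foo-024.blend")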
This answer was inspired by:
git pull while not in a git directory
How do I reconnect my project to my existing GitHub repository

Related

Using SVN, how do I selectively create a patch file?

I have a codebase with several changes in it that are best split up into several commits.
In git, I would use git add -p to select the changes I wanted from each file and create a commit and pull request based on those.
I'm new to SVN and I'm wondering about the best way to achieve this? It looks like I can do file-level selection, but not changes within those files?
I'm using TortoiseSVN as my local version control tool, but I'm happy to use another tool (has to run on Windows) if there's one that will do what I want.
This is something you won't get from SVN. Splitting the changes in your working copy into multiple commits is only possible at file granularity: you can't split changes from the same file across several commits.
So I'd say you should instead take a look at how git-svn works. It allows you to use Git on top of an SVN repository, with some limitations. You'll use git dcommit to push to the SVN repository, for example, and you must use a rebasing strategy instead of merging. But otherwise you get colored diffs, stash, rebasing, proper handling of multiple branches, proper formatting of patches by default, etc.
If you already know git, this will give you more, for less annoyance.
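As a rough sketch of that round trip (the repository URL and directory name are placeholders, and git add -p and git commit are interactive, so run this from a terminal):

import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# One-time setup: mirror the SVN repository into a local Git repo
# (placeholder URL; --stdlayout assumes the usual trunk/branches/tags layout).
# run("git", "svn", "clone", "--stdlayout", "https://svn.example.com/repo", "work")

def publish():
    run("git", "add", "-p")       # interactively stage only the hunks you want
    run("git", "commit")          # commit the staged hunks locally
    run("git", "svn", "rebase")   # replay local commits on top of the latest SVN state
    run("git", "svn", "dcommit")  # push each local commit to SVN as its own revision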
It looks like I can do file-level selection, but not changes within those files?
This is easily possible with the VisualSVN plug-in for Visual Studio 2017. The feature is called QuickCommit and it lets you partially commit selected changes in a file.
Use the Commit this Block and Commit Selection context menu commands in the Visual Studio editor.

Git + LaTeX + BitBucket: Sharing image files

If I am version-controlling my LaTeX docs and have a repo on Bitbucket which I share with other contributors, how do I share the png/jpg etc. files without having git track them?
Because every contributor should be able to compile it without LaTeX's draft check and visualize the complete paper with images, but it makes no sense to track such images with git (my .gitignore has an img/ line in it).
Check out the "Downloads" section of your Bitbucket repo. It is made for "adding any file that you would like to make available to your users, such as app binaries", which sounds pretty much like what you need. But your collaborators still have to download and unpack the files manually.
Also, you can actually store binaries in Git repos. The problem is that they cannot be deleted effectively due to Git internals, and each modification of a binary file duplicates all of its bytes, even if you changed only one. So if you don't change them frequently, it's pretty much OK. Bitbucket has a limit on the maximum repo size, so you'll get a warning when it is full.
Another approach is to use Git Large File Storage, which was created specifically to handle binaries in Git repos. Unfortunately, it is not available on Bitbucket yet. If you can move your repo to GitHub, consider this possibility.

Anyone have a method for moving files around keeping Xcode project and git in sync?

I have a large (hundreds of files), horribly disorganized (on the file system that is) Xcode project. I want to move a bunch of files into different folders. I want git to track the move operations, and I want my Xcode project to track the moves as well (just keeping the references intact is enough; I don't need Xcode to rearrange its internal group structure, etc.)
If I drag things around in the Finder, both Xcode and git are in the dark. I have faith that git will figure things out by content when the time comes, but I also notice that there's a difference in the output of git status between doing a git mv and moving the file in the Finder, then adding the deletion and addition operations separately, so I'm assuming there's some difference (even if that difference doesn't end up getting encoded into the content of the commit.) Xcode, on the other hand, is hopeless in the face of this. (You have to manually re-find every single file.)
If I use git mv from the command line, git tracks the move, but I still have to manually reconnect each reference in Xcode (or tear them all out and reimport everything, which is a pain in the ass because many of these files have custom build flags associated with them.)
It appears that there simply isn't a way to cause a file system move from within Xcode.
I've found zerg-xcode and a plug-in for it that claims to sync the file system to mirror the Xcode group structure, but I've not been able to google up anything that goes the other way. I just want a way to move files on the file system and have the two other things (git and Xcode) keep track of the files across the moves. Is this too much to ask? So far it seems the answer is "yes".
Yes, I've seen Moving Files into a Real Folder in Xcode; I'm asking whether someone's written a script or something that makes this less painful.
Actually, by design, Git doesn't track moves. Git is only about content. If any Git tool tells you there was a move (like git log --follow does), it's something that was inferred from content, not from metadata.
So you won't lose information if you move files around with another tool and then git add the whole folder.
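For instance, here is a minimal Python sketch of that workflow; the old-to-new path mapping is purely hypothetical, and it only covers the git side (Xcode references still need to be fixed by hand):

import subprocess
from pathlib import Path

# Hypothetical mapping from the current messy locations to the tidier layout.
MOVES = {
    "Classes/FooViewController.h": "Sources/Controllers/FooViewController.h",
    "Classes/FooViewController.m": "Sources/Controllers/FooViewController.m",
}

for old, new in MOVES.items():
    Path(new).parent.mkdir(parents=True, exist_ok=True)
    Path(old).rename(new)          # a plain file-system move; no git mv involved

# Stage deletions and additions together; git infers the renames from content.
subprocess.run(["git", "add", "-A"], check=True)
subprocess.run(["git", "status"], check=True)   # shows the moves as "renamed:"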

Make a SVN working folder identical to repository version

I basically want to do an SVN export as part of a scripted build process, but without having to get the entire repo from scratch every time, which is slow and eats bandwidth... not to mention it will make testing the script a pain in the backside if it does this every time we tweak something or spot a typo in the scripts.
Is there an obvious way to do an export into an existing directory, so only files that are different are fetched, and non-repo files are deleted, basically giving a clean export but done in a smart way?
Windows is preferred, but I guess Cygwin is an option.
I think the only way to get this done is to check out a working copy, and update & revert that. Updating a WC only gets the changes.
svn export doesn't know what files are changed, and to compare files, you first have to fetch all of them. Also it would be hard to get files that were deleted or renamed out of your 'export' directory.
Check out a working copy, then export from your working copy.
SVN update on the working copy will then be quick and light on bandwidth.
Then you can delete the original export and re-export from the working copy.
All the bandwidth-hungry operations are optimized. The heavy-handed delete and re-create is the same as it was before, but now it's all local, so it should be much faster.
Also, you have the option to make changes in the exported working copy, but you might want to be careful with that and consider the impact of having conflicts occur during your svn update.
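For example, a rough Python sketch of that scripted step, with the two directory paths purely as placeholders:

import shutil
import subprocess
from pathlib import Path

WC = Path("C:/build/wc")          # long-lived working copy (checked out once)
EXPORT = Path("C:/build/export")  # clean tree handed to the build

def refresh_export():
    # Cheap, incremental network operation: only changed files are fetched.
    subprocess.run(["svn", "update", str(WC)], check=True)
    # Heavy-handed but purely local: throw away the old export and remake it.
    if EXPORT.exists():
        shutil.rmtree(EXPORT)
    subprocess.run(["svn", "export", str(WC), str(EXPORT)], check=True)

refresh_export()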
I am not sure if I understand your question right, so to rephrase it: I think you want the local copy of the repo updated on a regular basis, but you want the build copy pristine so that the resulting build is clean. Assuming that is your question, below is what I would suggest.
To my knowledge, svn export might not be the best option for this, because the purpose of svn export is to obtain an unversioned copy of the svn repo. As it is unversioned, the svn client would not know where to start the update from.
The best option I can think of is this: check out a copy of the repo (local copy, LC) in one location and update it during the build process. Then make a copy of the LC in a different location and use that copy to perform the build. Below are the commands you would need:
1. svn update <arbitrary path>   (in the working copy)
2. copy <arbitrary path> <build path>
3. find <build path> -type d -name '.svn' -prune -exec rm -rf {} +   (if you would like to remove the hidden .svn folders, though they are not really going to hurt the build process)
Some options for keeping the copy time from factoring into the build time
If you would like to save the copy time during the build process, you can do this copy operation after each build and svn update the copy just before building (assuming the .svn folders are retained).
On Linux, two folders can be kept in sync using rsync, so the build copy can be made to reflect the updates in the pristine copy.
On Windows, there are a few tools to achieve the kind of sync suggested above. I have not used them, but here are a couple of links you can try yourself:
http://lifehacker.com/326199/synchronize-folders-with-synctoy-20
http://www.techsupportalert.com/best-free-folder-synchronization-utility.htm
Another option is to use checkout and revert / update, but also use something like the SharpSvn library to write a script that deletes non-source-controlled files. This way build artifacts like compiled code will be removed, and the versioned files will be returned to their base state by the revert / update.
If you have a lot of directories and files this scanning could be slow, but if you are confident about which directories will contain build artifacts, you can just scan those.
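SharpSvn is a .NET library; as a rough alternative sketch, the same cleanup can be scripted against the svn command-line client by parsing svn status output. The working-copy path here is just an example.

import shutil
import subprocess
from pathlib import Path

WC = Path("C:/build/wc")   # the working copy used for builds

def delete_unversioned():
    # 'svn status --no-ignore' marks unversioned files with '?' and ignored ones with 'I'.
    out = subprocess.run(["svn", "status", "--no-ignore", str(WC)],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line[:1] in ("?", "I"):
            target = Path(line[1:].strip())
            if target.is_dir():
                shutil.rmtree(target)
            else:
                target.unlink()

def clean_build_tree():
    delete_unversioned()
    # Return versioned files to their pristine, base state.
    subprocess.run(["svn", "revert", "-R", str(WC)], check=True)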

Best approaches to versioning Mac "bundle" files

So you know a lot of Mac apps use "bundles": It looks like a single file to your application, but it's actually a folder with many files inside.
For a version control system to handle this, it needs to:
check out all the files in a directory, so the app can modify them as necessary
at checkin,
commit files which have been modified
add new files which the application has created
mark as deleted files which are no longer there (since the app deleted them)
manage this as one atomic change
Any ideas on the best way to handle this with existing version control systems? Are any of the versioning systems more adept in this area?
Mercurial in particular versions based on files, not directory structure. Therefore your working tree, which is a fully-fledged repository, doesn't spit out .svn folders at each level.
It also means that a directory that is replaced, like an Application or other Bundle, will still find its contents with particular file names under revision control. File names are monitored, not inodes or anything fancy like that!
Obviously, if a new file is added to the Bundle, you'll need to explicitly add this to your repository. Similarly, removing a file from a Bundle should be done with an 'hg rm'.
There aren't any decent Mercurial GUIs for OS X yet, but if all you do is add/commit/merge, it isn't that hard to use a command line.
For distributed SCM systems like git and mercurial, this shouldn't be a problem, as Matthew mentioned.
If you need to use a centralized SCM like Subversion or CVS, then you can zip up (archive) your bundles before checking them into source control. This can be painful and takes an extra step. There is a good blog post about this at Tapestry Central:
Mac OS X bundles vs. Subversion
This article demonstrates a ruby script that manages the archiving for you.
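The linked article uses a Ruby script; purely as an illustration, here is a rough Python equivalent of the zip-before-commit idea. The bundle name is an example, and the zip archive is assumed to already be under version control.

import shutil
import subprocess
from pathlib import Path

BUNDLE = Path("Report.pages")          # the bundle the app edits (kept out of SVN)
ARCHIVE = Path(str(BUNDLE) + ".zip")   # the single file that is versioned

def archive_and_commit(message):
    # Flatten the bundle into one zip file so Subversion sees a single binary.
    # (ARCHIVE is assumed to have been 'svn add'-ed once already.)
    shutil.make_archive(str(BUNDLE), "zip", root_dir=BUNDLE)
    subprocess.run(["svn", "commit", str(ARCHIVE), "-m", message], check=True)

def unarchive():
    # Restore the editable bundle from the versioned archive.
    shutil.unpack_archive(str(ARCHIVE), extract_dir=BUNDLE)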
An update from the future:
If I recall, the problem with managing bundles in SVN was all the .svn folders getting cleared each time you made a bundle. This shouldn't be a problem any more, now that SVN stores everything in a single .svn folder at the root.
Bringing this thread back to daylight, since the October 2013 iWork (Pages 5.0 etc.) no longer allows storing documents as a 'flat file' (zipped), but only as bundles.
The problem is not the creation of version-control hidden folders inside such structures (well, for svn it is), but, as Mark says in the question, getting automatic, atomic updates of files added or removed (by the application, in this case iWork), so that I don't need to do that manually.
Clearly, iWork and Apple are only concerned with iCloud usability. Yet I have a genuine case for storing .pages, .numbers and .keynote files in a Mercurial repo. After the update, it blows everything apart. What to do?
Addendum:
Found 'hg addremove' that does the trick for me.
$ hg help addremove
hg addremove [OPTION]... [FILE]...
add all new files, delete all missing files
