With macOS High Sierra, a new file system is available: APFS.
This file system supports clone operations for files: no data is duplicated on storage.
The cp command has a flag (-c) that enables cloning from Terminal (shell).
But I haven't found a way to identify these cloned files afterwards.
Does anybody know how to identify cloned files with a shell command, or with a flag on an existing command like ls?
After 3 years and 2 months... I have received a lot of points because of this question here on Stack Overflow.
So yesterday I decided to revisit this topic :).
Using fcntl and F_LOG2PHYS, it is possible to check whether two files are using the same physical blocks or not.
So I made a utility using this idea and put it on GitHub (https://github.com/dyorgio/apfs-clone-checker).
It is only the first release, guys, but I hope the community can improve it.
Now maybe a good tool to remove duplicated files using the APFS clone feature can be born. >:)
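For anyone curious about the F_LOG2PHYS idea, here is a rough, minimal sketch (not the linked utility, just an illustration of the approach): it compares the physical device offset of the first block of two files. A real checker would have to walk and compare every block and handle sparse or empty files, so treat this only as a starting point.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Returns the physical device offset of the block at logical offset 0,
 * or -1 on error (e.g. empty file, unsupported file system). */
static off_t first_block_offset(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct log2phys l2p = {0};
    off_t result = (fcntl(fd, F_LOG2PHYS, &l2p) == -1) ? -1 : l2p.l2p_devoffset;
    close(fd);
    return result;
}

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
        return 2;
    }
    off_t a = first_block_offset(argv[1]);
    off_t b = first_block_offset(argv[2]);
    if (a < 0 || b < 0) {
        perror("open/fcntl");
        return 2;
    }
    /* APFS clones share data blocks, so a matching offset suggests the files are clones. */
    printf(a == b ? "first block is shared (likely clones)\n"
                  : "first block differs\n");
    return 0;
}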
The command you have used is not a feature of the APFS file system itself. The cp -c command calls a function named "clonefile", which has been part of BSD since 2015 (see the man page):
http://www.manpagez.com/man/2/clonefile/
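For illustration, here is a minimal sketch of calling clonefile(2) directly (the filenames are hypothetical; this assumes macOS 10.12+ and an APFS volume, and fails with ENOTSUP on file systems that cannot clone):

#include <sys/clonefile.h>
#include <stdio.h>

int main(void) {
    /* Creates copy.bin as a clone of original.bin; both files share their
     * data blocks on disk until one of them is modified (copy-on-write). */
    if (clonefile("original.bin", "copy.bin", 0) != 0) {
        perror("clonefile");   /* e.g. destination already exists, or not APFS */
        return 1;
    }
    return 0;
}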
So if you clone a file, for example, you can change the attributes of the original, and the clone can have different attributes.
I think the feature you are searching for is built on copy-on-write. You can see the difference if you make a clone with Time Machine.
I have not found a Terminal command today that shows these differences, but clonefile is not the right function for that anyway.
The only way I know of today to show changed attributes in clones is Apple's Time Machine backup solution.
It's a snapshot solution. There is something about this in this Apple developer forum thread:
https://forums.developer.apple.com/thread/81171
I think this is meant to be an internal proprietary feature of APFS that you are not supposed to be playing with. It strikes me as a relatively useless feature. If you want to have two files that are the same and use standard APIs, try either hard or soft links, or else Apple aliases.
I am trying to create a parallel application in C++, and I chose to use the Intel Cilk Plus libraries for this purpose.
My problem is just that I am still trying to download the extension for g++ and compile it on my machine, but this thing is taking ages. As explained in the Intel Cilk Library installation doc, I am trying to get the compiler sources to compile them. I am actually trying to go for the GCC 4.9 release.
Trying SVN...
I did try using svn, but it is very slow, and it often fails and forces me to restart from where it broke.
Trying GIT...
When I try git it is even worse. The command executes, but then it fails, telling me that there is some broken stuff on the server side... I guess their git repository is not well-formed.
Brute force: WGET
So I decided to cut the chicken's head off and go for a direct recursive download using wget:
wget -r -l 0 -np -e robots=off http://gcc.gnu.org/svn/gcc/branches/cilkplus/
It has been downloading since yesterday... Is there a damn tarball to download, or do I really need to download everything with wget, without any progress info? Thank you.
As asked by the OP, I'll post my comment as an answer.
To get the cilk-plus sources as a tarball:
In the install docs, there is a section labelled a. cilkplus. Under it there is a subsection labelled iii. Using a snapshot.
In that subsection there is a link to the gcc git repository. Click on it.
When the webpage is loaded, search at the top for the first link. At the time of writing this it was named [gcc/]. However, the name may change.
The link we need is the one with colored tags next to it.
Once it is located, search on the same line for a link named snapshot at the right corner.
Click on it and wait while the tarball is generated.
NOTE: The link provided in the docs sometimes points to a rather old revision of the source code. To get the most recent one (1 or 2 days old at most), click on the summary section in the menu bar. An entry with the colored tags master and trunk next to it should appear.
Possible Duplicate:
Git GUI.. stage everything
My employer is finally looking into setting up some source control after much pleading by all of the developers. Unfortunately, none of our developers, myself included, have ever done much with source control. I've looked into SVN and thought it would be fine, but another of the developers didn't like it. I've moved on to looking at Git as an option. I've downloaded the Git GUI from http://git-scm.com and started tinkering with it, which brings me to my question/problem.
The web application we're trying to add to source control (Git) has 7,386 files and 712 folders. When doing the initial commit, from what I understand, I've got to click on each file I want to commit to move it from the Unstaged Changes pane to the Staged Changes pane. Obviously, I am hesitant to sit and click 7,386 times (once for each file to commit). Is there a faster way to do this?
I'm currently using this page as my reference for learning the Git GUI: http://nathanj.github.com/gitguide/tour.html. If anyone has a better tutorial/reference for using the Git GUI, I'd much appreciate a link to it.
Thanks
To add every file (that is not ignored), use
cd /path/to/workspaceRoot
git add .
I don't use any GUI, but I think this should be possible by adding a directory (the root directory in this case).
Remember to create an appropriate .gitignore file first, so no unwanted files get added. You can check what is (un)staged with
git status
Also, there should be a context menu option (or something like that) in your GUI that shows the status.
Extra: an additional git resource (quite good, IMO): http://progit.org/
From Git GUI, select all the files in the "Unstaged Changes" list and select "Commit -> Stage To Commit" from the menu.
If you're not afraid of the command line, try this tutorial instead. It's a detailed walkthrough of beginning git in 10 parts and ends with a Git reference card to help you go further. The free Pro Git book is also much applauded.
Background
I work on a lot of small projects (prototypes, demos, one-offs, etc.). They are mostly coded in Visual Studio (WPF or ASP.NET with code written in C#). Usually, I am the only coder. Occasionally, I work with one other person. The projects come and go, usually in a matter of months, but I have a constantly evolving set of common code libraries that I reuse.
The problem
I've tried to use source control software before (SourceGear Vault), but it seemed like a lot of overhead when working on a small project, especially when I was the only programmer. Still, I would like some of the features that version control offers.
Here's a list of features I'd like to have:
Let me look at any file in an older version of my project instantly. Please don't force me through the rigmarole of (1) checking in my current work, (2) reverting my local copy to the old version, and (3) checking the current version back out so I can once again work on it.
In fact, if I'm the only one on the project, I don't ever want to check out. The only thing I want to be able to do is say, "Please save what I have now as version 2.5."
Store my data efficiently. If I have 100 MB of media in my project, I don't want that to get copied with every new version I release. Only copy what changes.
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library. I don't want to have to keep copying my library to other projects every time I make a change.
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
Please don't make me store a special database server on my machine that makes my computer take longer to start up and/or uses resources when I'm not even programming.
Does this exist?
If not, how close can I get?
Edit 1: TortoiseSVN impressions
I did some experimenting with Subversion. A couple observations:
Once you check something in to a repository, it does stuff to your files. It puts hidden .svn folders inside your project folders. It messes with folder icons. I have yet to get my project back to "normal". "Unversion a working copy" got me part of the way there, but I still have folders with blue question mark icons. This makes me grumpy :-/ Update: I finally got rid of the folder icons by manually creating new folders and copying the folders over. (Not good.)
I installed the open source plugin for Visual Studio (AnkhSVN). After creating a fresh repository on my hard drive, I attempted to check in a solution from Visual Studio. It did exactly what I was afraid it would do: it checked in only the folders and files that are physically (from the point of view of the file system) inside my solution folder. In order to accomplish item #5 above, I need all source code used by the solution to be checked in. I attempted to do this by hand, but it wasn't a user-friendly process (for one thing, when I selected multiple library projects at once and attempted to check them in, it only appeared to check in the first one). Then I started getting error dialogs when I tried to check in subsequent projects.
So, I'm a little frustrated with SVN (and its supporting software) at this point.
Edit 2: TortoiseHG impressions
I'm trying out Mercurial now (TortoiseHG). It was a little bit difficult to figure out at first, no better or worse than TortoiseSVN I'd say. I noticed an RPC Server on startup (relates to item 6). I figure it should be possible to turn this off if I'm not sharing anything with anyone, but it wasn't something I could figure out just by looking at the options (will check out the help later).
I do appreciate having my local repository as just a single .hg folder. And, simply throwing the folder in the Recycle Bin seemed to be all I needed to do to return everything back to normal (i.e., unversion my project). When I check in (commit), it seems to offer a simple comment window only. I thought maybe there would be a place to put version numbers.
My (probably not very clever) attempt to add a Windows shortcut (a folder aliasing my library projects) failed, not that I really thought it would work :) I thought maybe this would be a sneaky way to get my library projects (currently located elsewhere) included in the repository. But no. Maybe I'll try out "subrepos", but that feature is under construction. So, iffy that I'll be able to do items 4 and 5 without some manual syncing.
Any of the distributed source control solutions seem to match your requirements. Take a look at bazaar, git or mercurial (already mentioned above). Personally I have been using bazaar since v0.92 and have no complaints.
Edit: Heck, after looking at it again, I'm pretty sure any of those 3 solutions handles all 6 of your requested features.
Distributed Version Control Systems (Mercurial, Bazaar, Git) are nice in that they can be completely self-contained in a single directory (.hg, .bzr, .git) in the top of the working copy, where Subversion uses a separate repository directory, in addition to .svn directories in every directory of your working copy.
Mercurial and Subversion are probably the easiest to use on Windows, with TortoiseHG and TortoiseSVN; the Bazaar GUIs have also been improving. Apparently there is also TortoiseGit, though I haven't tried it. If you like the command line, Easy Git seems to be a bit nicer to use than the standard git commands.
I'd like to address point 4, common libraries, in more detail. Unfortunately I don't think any of them will be too easy to use, since I don't think they're directly supported by GUIs (I could be wrong). The only one of these I've actually used in practice is Subversion Externals.
Subversion is reasonably good at this job; you can use Externals (see the chapter in the SVN book), but to associate versions of a project with versions of a library you need to "pin" the library revision in the externals definition (which is itself versioned, as a property of the directory).
Mercurial supports something similar, but both solutions seem a bit immature: the subrepository support built into the latest version, and the "Forest Extension".
Git has "submodule" support.
I haven't seen anything like sub-repositories or sub-modules for Bazaar, unfortunately.
I think Fog Creek's new product, Kiln, will get you pretty close. In response to your specific points:
This is easily done through the web interface -- you don't need to touch your local copy or update. Just find the file you want, click the revision you want to see, and your code will be in front of you.
I'm not sure you can do things exactly like "Please save this as version 2.5", but you can add unique tags to changesets that allow you to identify a special revision (where "special" can mean whatever it wants to you).
Mercurial does a great job of this already (which Kiln uses in the back end), so there shouldn't be any problems in this regard.
By creating different repositories, you can easily have one central 'core' section which is consistent across various projects (though I'm not entirely sure if this is what you're talking about).
I think most version control systems allow you to do this...
Kiln is hosted, so there's no hit on performance to your local machine. The code you commit to the system is kept safe and secure.
Best of all, Kiln is free for up to two licenses by way of their Student and Startup Edition (which also gets you a free copy of FogBugz).
Kiln is in public beta right now -- you can request your account at my first link -- and users are being let in as more and more problems are resolved. (For some idea of what current beta users are saying, take a look at the Kiln Knowledge Exchange site that's dedicated to feedback.)
(Full Disclosure: I am an intern currently working at Fog Creek)
For your requirements I would recommend subversion.
Let me look at any file in an older version of my project instantly. Please don't force me through the rigmarole of (1) checking in my current work, (2) reverting my local copy to the old version, and (3) checking the current version back out so I can once again work on it.
You can use the repository browser of TortoiseSVN to navigate to any existing version easily.
In fact, if I'm the only one on the project, I don't ever want to check out. The only thing I want to be able to do is say, "Please save what I have now as version 2.5."
This is done by svn copy . svn://localhost/tags/2.5.
Store my data efficiently. If I have 100 Mb of media in my project, I don't want that to get copied with every new version I release. Only copy what changes.
Subversion provides this out of the box.
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library. I don't want to have to keep copying my library to other projects every time I make a change.
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
Put your libraries into the same svn repository as your remaining code and you'll have global revision numbers to switch back all to a common state.
Please don't make me store a special database server on my machine that makes my computer take longer to start up and/or uses resources when I'm not even programming.
You only have to start svnserve to start a local server. If you only work on one machine you can even do without this and use your repository directly.
I'd say that Mercurial along with TortoiseHg will do what you want. Of course, since you don't seem to be requiring much, subversion with TortoiseSvn should serve equally well, if you only ever work alone, though I think mercurial is nicer for collaboration.
Mercurial:
hg cat --rev 2.5 filename (or "Annotate Files" in TortoiseHg)
hg commit ; hg tag 2.5
Mercurial stores (compressed) diffs (and "keyframes" to avoid having to apply ten thousand diffs in a row to find a version of a file). It's very efficient unless you're working with large binary files.
Symlink the library into all the projects?
OK, now that I read this point I'm thinking Mercurial's Subrepos are closer to what you want. Make your library a repository, then add it as a subrepository in each of your projects. When your library updates you'll need to hg pull in the subrepos to update it, unfortunately. But then when you commit in a project Mercurial will record the state of the library repo, so that when you check out this version later to see what it looked like you'll get the correct version of the library code.
Mercurial doesn't do that; it stores its data in files.
Take a look at Fossil; it's a single exe file.
http://www.fossil-scm.org
As people have pointed out, nearly any DVCS will probably serve you quite well for this. I thought I would mention Monotone, since it hasn't been mentioned in the thread yet. It uses a single binary (mtn.exe) and stores everything as a SQLite database file, with nothing at all in your actual workspace except a _MTN directory at the top level (and .mtn-ignore, if you want to ignore files). To give you a quick taste, here are the mtn commands showing how you would carry out your wishlist:
Let me look at any file in an older version of my project instantly.
mtn cat -r t:1.8.0 readme.txt
Please save what I have now as version 2.5
mtn tag $(mtn automate heads) 2.5
Store my data efficiently.
Monotone uses xdelta to only save the diffs, and zlib to compress the deltas (and the first version of each file, for which of course there is no delta).
Let me keep my common library code files in a single location on my hard drive so that all my current projects can benefit from any bug fixes or improvements I make to my library.
Monotone has explicit support for this; quoting the manual: "The purpose of merge_into_dir is to permit a project to contain another project in such a way that propagate can be used to keep the contained project up-to-date. It is meant to replace the use of nested checkouts in many circumstances."
However, do let me go back in time to any version of any project and see what the source code (including the library code) looked like at the time that version was released.
mtn up -r t:1.8.0
Please don't make me store a special database server on my machine
SQLite can be, as far as you're concerned, a single file on your disk that Monotone stores things in. There is no extra process or startup craziness (SQLite is embedded, and runs directly in the same process as the rest of Monotone), and you can feel free to ignore the fact that you can query and manipulate your Monotone repository using standard tools like the sqlite command line program or via Python or Ruby scripts.
Try Git. There are lots of positive comments about it on the Web.
So, you know how a lot of Mac apps use "bundles": it looks like a single file to your application, but it's actually a folder with many files inside.
For a version control system to handle this, it needs to:
check out all the files in a directory, so the app can modify them as necessary
at checkin,
commit files which have been modified
add new files which the application has created
mark as deleted files which are no longer there (since the app deleted them)
manage this as one atomic change
Any ideas on the best way to handle this with existing version control systems? Are any of the versioning systems more adept in this area?
Mercurial in particular versions based on files, not directory structure. Therefore your working tree, which is a fully fledged repository, doesn't spit out .svn folders at every level.
It also means that a directory that is replaced, like an application or other bundle, will still have its contents, under their particular file names, under revision control. File names are monitored, not inodes or anything fancy like that!
Obviously, if a new file is added to the Bundle, you'll need to explicitly add this to your repository. Similarly, removing a file from a Bundle should be done with an 'hg rm'.
There aren't any decent Mercurial GUIs for OS X yet, but if all you do is add/commit/merge, it isn't that hard to use a command line.
For distributed SCM systems like git and mercurial, this shouldn't be a problem, as Matthew mentioned.
If you need to use a centralized SCM like Subversion or CVS, then you can zip up (archive) your bundles before checking them into source control. This can be painful and takes an extra step. There is a good blog post about this at Tapestry Central:
Mac OS X bundles vs. Subversion
This article demonstrates a ruby script that manages the archiving for you.
An update from the future:
If I recall, the problem with managing bundles in SVN was all the .svn folders getting cleared each time you made a bundle. This shouldn't be a problem any more, now that SVN stores everything in a single .svn folder at the root.
Bringing this thread back to daylight, since the October 2013 iWork (Pages 5.0, etc.) no longer allows saving as a 'flat file' (zipped), but only as a bundle.
The problem is not the creation of version control hidden folders inside such structures (well, for svn it is), but as Mark says in the question: getting automatic, atomic update of files added or removed (by the application, in this case iWork) so I wouldn't need to do that manually.
Clearly, iWork and Apple are only bothered by iCloud usability. Yet I have a genuine case for storing .pages, .numbers and .keynote in a Mercurial repo. After the update, it blows everything apart. What to do?
Addendum:
I found 'hg addremove', which does the trick for me.
$ hg help addremove
hg addremove [OPTION]... [FILE]...
add all new files, delete all missing files