This is a two-part question...
I have created a Master VM and a Full Clone of the master for Dev work. Now I want to try some new code that will have implications for the OS and for the MySQL DB that is currently installed. I don't want to mess up the dev clone, so I will spawn a new clone from it.
My question is: if I create a linked clone, run the code I put on it, and then choose to delete the linked clone, will it (like a snapshot) merge the code back into the VM that I linked from, or will the diff discs just be removed?
The second part to this is: if the linked clone is just deleted and not merged back into the originating VM, what happens to the snapshot that the originating VM takes? Is it safe to delete/merge it, or should I just keep spawning linked clones off of it?
I have the hard drive space; I just don't feel like waiting 30 minutes to spawn a full clone for each new potential feature.
This is partly a comment and partly an answer.
The comment: Is there a reason not to use linked clones? This is an archetypal use case for a linked clone.
The Answer:
Rather than creating a full clone, use a linked clone. Creation takes only a few minutes, involves a very small amount of data, and the result can easily be moved. One of the great advantages of a linked clone is that a common base VM can easily be shared among several developers.
Each use case you have - whether installing new versions of software, testing new versions of code, etc. - can easily be handled in a customized linked-clone VM. This makes it easy to run multiple different environments.
Another benefit of the linked clone is sharing a common base VM. For instance, I created a VM with the development tools that are common amongst all of us and then shared it as a linked clone with all the others. This ensures uniformity of tools/configuration.
Hope this helps with your (past) question.
-daniel
I work mainly on a desktop Mac but also have a laptop Mac that I use when away from the office.
I want to access and work on my latest html, css, php and python files from either computer.
I thought GitHub was the way to do this, but I am having a problem understanding the "flow", and I have RTFM! I don't understand whether I should create a repository on GitHub first, why, when I try to "clone" something, it doesn't magically end up on my local computer... or where the nice big red button that says "sync" is...
... or whether I should just use the commandline ONLY...
So, if I start on my desktop and create new files, what are the correct steps, using git or GitHub (?), to put those files where they can then be accessed from my laptop, and then have the files on my laptop merged back into the GitHub repository so I can then access those files from my desktop?
Thank you all for your replies and answers! The git workflow, for my needs, is now clear.
The workflow presented by wadesworld is concise and was the overview I needed.
However, Michael Durrant's commandline steps filled in that workflow specifically with commandline directives - and I needed that also.
steelclaw's and uDaY's answers and responses were important because I did not understand that it did not matter which repo I created first, and that adding and committing locally were essential first steps in my workflow.
Specifically, steelclaw's response to one of my response questions provided the closure I needed, so I could learn more:
"After initializing the repository, be sure to use 'add' and 'commit.' These will make the files an official version of the repository. After that, you need to use 'push' to upload it to the remote repository."
ilollar's resource, "Git for Ages 4 and Up" is also worthy of the click, especially for folks like me who are visual!
Thank you all so very much!!
Do you want to version control your files or just have access to the same files in both places?
It is a good idea to use version control as a developer, whether you're writing code or designing websites. But, to do so, you have to have a commitment to learning how version control systems work, since they all have some learning curve.
But, if you're not interested in that complexity and simply want to be sure you have access to the latest version of your files, then you're looking at a file-syncing operation, which can be much simpler.
So, which one do you want?
Edit: Based on the response, here's the model (a command-line sketch follows the list):
1) Create repository on work computer.
2) Create repository with same name on github.
3) Push to repository on github
4) At home, do a git clone to pull down the changes you pushed.
5) Now that the repository exists in both locations, you can simply do a git push before you leave work, and git pull when you get home, and vice-versa when going the other direction.
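As a rough command-line sketch of the above (the repository name and URL are placeholders, not taken from the question):

# On the work computer: create the local repository and make the first commit
git init
git add .
git commit -m "Initial commit"

# Create an empty repository named "myproject" on github.com, then connect and push to it
git remote add origin https://github.com/yourname/myproject.git
git push -u origin master

# At home, the first time only: clone it
git clone https://github.com/yourname/myproject.git

# After that, push before you leave one machine and pull when you sit down at the other
git push
git pull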
To answer the detail of your question: I'd go with Dropbox.
UbuntuOne is also good even for non Ubuntu users and of course Google drive is the (big) new player on the block.
They compare as follows:
Service     Free*1   NextLevel*1   NextLevel($)*2   Features
Dropbox     2        50            $2.50            One folder, best GUI sync tools.
UbuntuOne   5        20            $4.00            Multiple directories anywhere.
GDrive      5        25            $2.50            It's Google.
*1 GB
*2 Cost per month
To answer the title of your question:
If you wanted something that's more suited to programmers, I'd use git:
First, install gitx (Linux readers, that's gitg), as that is by far the most popular GUI for git.
For the "flow" I can also refer you to my write-up of various features at:
What are the core concepts of git, github, fork & branch. How does git compare to SVN?
Using gitx or gitg, the specific flow is as follows:
1) Make some changes to files.
2) Use the tool's "commit" tab to see what's changed ("unstaged").
3) Add a file by dragging it from "unstaged" to "staged".
4) Give a commit message.
5) Commit the file.
6) I then push it to the remote at the command line with $ git push <remote>, or I use the GUI by right-clicking and selecting the 2nd master.
If I'm sharing with others, I'll often need to do a git pull (to get and merge in others' changes) before being able to do a git push.
The GitHub part is doing init, push, and clone, but I'd say just read up on those in the tutorials rather than in an SO question. Basically, though, I do:
Set up the repository locally in git:
git init
git add .
git commit "Initial commit"
Set up github:
Create a GitHub repository using github.com (https://help.github.com/articles/create-a-repo) and then push your local repository to it, as in:
git remote add origin <repository-url>   # the URL of the repository you just created on GitHub
git push origin master
If the repository already exists on GitHub but not on your local PC, that's when you click the remote link and then, in a terminal, type git clone [paste here, e.g. ctrl-v]
If you're "starting" with github:
Make code changes
git pull       # get the latest version into your repository and merge in any changes
git add .      # add all modified files
git commit -m "message"
git push       # origin master is the default.
If, at the end of the day, you decide to go with something simple like Dropbox, you can use my referral link - http://db.tt/pZrz4t3k - to get a little more than the standard 2 GB (using it, we both get an extra 0.5 GB). Which of all these routes to go is up to you and your needs. I use all these services (git, GitHub, UbuntuOne, Dropbox and Google Drive), so I am not recommending one over the others - it depends on your needs.
I would recommend using Dropbox or Google Drive. They will let you do EXACTLY what you are trying to achieve, they are very user friendly, and they are free (5 GB, I think).
They update automatically (as long as you have an internet connection, obviously).
Just make a folder, put some files in it, and you are away.
Since explaining how to use an entire VCS in one answer is an overwhelming task, I can instead point you in the direction of some very helpful resources to get you to understanding and using Git:
Pro Git - a free online book (written with Git!) with easy language on all things Git.
GitHub Help - GitHub's own help section walks you through setting up and using Git, and not just with their own apps. Very useful.
Get Started with Git - A good tutorial getting you up and running with Git.
Git For Ages 4 and Up - Fantastic video explaining the inner workings of Git with Tinker Toys. Not the best introduction to Git, but a great video to watch once you feel a bit more comfortable.
Git may feel complicated or strange at first, but if what you are looking for is a good version control system, it is excellent.
However, if all you're looking for is a cloud-like service to sync some files across multiple computers, like the others have mentioned, Dropbox would be the way to go.
I use GitHub as a "hub" for git, to share finished code (and Git itself for version control).
And Dropbox to sync files between different computers and mobile/tablet, to manage files.
http://db.tt/EuXOgGQ
They serve different purposes for me. Both are good!
Git is an advanced and rather difficult tool to use for version control. If you're feeling brave, you can try to install the command-line tool; however, I recommend using a graphical client, specifically SourceTree.
http://www.atlassian.com/software/sourcetree/overview
You'll need to clone your repository, or else initialize a new one. To connect to your repository, you'll need to know the URL, and possibly a username and password for your repository. You also need to provide a valid name for the repository.
To update files there are several steps: First, you need to add (stage) the changes. SourceTree might do this automatically. Then you need to commit the changes; this is basically confirming the changes and signing them with a comment. To upload them, you need to use push and select the correct remote repository. When you want to update your local repository, you'll need to use pull and, again, select the correct remote repository.
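For reference, the command-line equivalents of those steps look roughly like this (the URL is just a placeholder):

git clone https://example.com/your/repository.git   # or git init to start a new one
git add .                                            # stage the changed files
git commit -m "Describe the change"                  # confirm the changes with a comment
git push origin master                               # upload to the remote repository
git pull origin master                               # update your local repository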
For your purposes, however, it seems like dropbox might be better, because it automatically updates and is very simple. If you don't need the advanced version control that git provides (e.g. branching, merging from many users), then it seems like it would be a better option for you.
https://www.dropbox.com/
First, a confession. In about 10 years of professional development, I've never used a source control system. There, that feels better. My historical setup (as recommended by management, ha!) has been to create dated folders on a server and copy-and-paste my data into it each evening.
I've known for a long time that a much better, more manageable solution would be to use git or Mercurial to manage my source, but I've never taken the time to learn any of these new tools because my old system has always worked well enough for my needs as the lone developer on every project I've ever worked on.
I have finally changed this setup. I've installed Mercurial on my Mac, which, after a bit of reading, I prefer over git. As a GUI front-end, I have installed SourceTree, which appears to be easy to use and quite friendly. The problem I am having is that I can't find a very simple, straightforward walkthrough for setting up a server repository that I can push changes to each evening. I'm sure it's out there; I just can't find it.
I've honestly tried to Google this, but there is something about the term "SourceTree": I can't find anything useful, because half of the information I find is about using git, and it tends to involve pushing a project to a site like github.com, which is not pertinent in my case.
Additionally, I have skimmed the Mercurial documentation, and I still may not be entirely clear on the full commit/update/push/pull/branch/merge concept. I just want to get something set up rather quickly that will back up and track the changes to my projects, without having to be a source control guru.
How do I set up a simple repository on a Windows network server and push and pull changes each evening? My company wants me to store my data in a personal folder, on a network share that is backed up to tape and then stored off site.
I'm sure it has to be simple. I just want to be sure that I'm doing it correctly, so that in case I need to access a backup, it is there and can easily be pulled... or branched... or whatever.
Well, it depends on the kind of the server you are going to use.
Let's assume it's not a Windows server (just a guess, as you're a Mac user). Let's also assume that right now you only need it for yourself, not for a bunch of users.
Then the simplest way is to use SSH. Suppose the server is called server, and you have an account rlh there. You'll need a public/private key pair for seamless access (no need to enter the password on each pull/push). You'll need to install Mercurial on the server as well, obviously.
On the server, create a repo (in your home dir, for example):
rlh@mac$ ssh server
rlh@server$ mkdir myproject
rlh@server$ cd myproject
rlh@server$ hg init
On your machine, clone the repo:
rlh@mac$ hg clone ssh://rlh@server/myproject myproject
The default target will be set automatically, and you should be able to pull/push with no additional configuration.
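From there, a minimal end-of-evening routine would look something like this (a sketch; the file name is a placeholder):

rlh@mac$ cd myproject
rlh@mac$ hg add newfile.py        # only needed for files Mercurial isn't tracking yet
rlh@mac$ hg commit -m "what changed today"
rlh@mac$ hg push                  # send the changes to the repo on the server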
Feel free to ask if you have a question regarding this.
When searching for hosting solutions, best not to include the term SourceTree in your query — SourceTree is just a front-end tool that is in principle unrelated to Mercurial hosting. That might explain the lack of useful information.
Here is an overview of ways to set up Mercurial servers:
https://www.mercurial-scm.org/wiki/PublishingRepositories
Personally I’m using plain hgweb and that has served me well.
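For a quick ad-hoc test, Mercurial's built-in server (which uses the same hgweb machinery) can be started with a single command; the path, host name, and port below are placeholders:

hg serve -R /path/to/myproject -p 8000
hg clone http://server:8000/ myproject   # from another machine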
Also, I would recommend considering a hosting service such as BitBucket or Google Code. It requires much less effort to set up and maintain. Here is an overview of Mercurial hosting services:
https://www.mercurial-scm.org/wiki/MercurialHosting
Personally I’m also considering moving my self-hosted Mercurial repositories over to BitBucket, because of the reduced maintenance overhead, and also because it has functionality like a bug tracker, wiki, etc.
Intro
I've used SVN before, back when I was working as a solo programmer, just to keep an offsite record of what I was doing, so I kind of know about ideas like "repositories" and "commits" and the like, though not much more than that. "Branches", "merges" and "checking out" are, sadly, a mystery to me.
I want to start using Git because we've got a couple of guys who work away from the office and they have complained that they sometimes can't get through to some other version control systems because their IDE integration causes them to sulk and fall over when they get out of contact. Git's idea of "Every working directory is a repository" seems like it should go some way towards solving that.
Anyway, I've downloaded the "Git Extensions" to add The Shiny to the Windows context menus, etc. and I've found that I really have no concept of how I'm supposed to use this to control my versioning. Not finding anything obvious after a google search, I present the following theoretical scenario to Stack Overflow in the hopes that someone will tell me what to do, in small words:
Scenario
I have three projects. One project, ProjectReuse is used by the two other projects (ProjectA and ProjectB). Various people in the organisation will need to edit the code for each project, using Visual Studio 2010.
I have three folders on my desktop, labelled "ProjectReuse", "ProjectA" and "ProjectB". I've got the Git Extensions window open. A cow, wearing a Santa Claus hat, is staring at me.
Questions
What do I do now to create the repositories in such a way that several people (including those pesky not-always-on-site guys) can access a repository when they need to, on-site or off, with or without a permanent connection to our servers?
When the first guy needs to edit a file, what does he need to do? Check out? Branch? I have to explain this to the other team members and I'm a bit wobbly on these concepts, myself. I've only used version control for my solo projects before.
Wheedling and excuses
The first "how do I set this up?" question is what I'm most interested in, but I figure if I'm going to ask for the idiot's guide, I might as well ask for it to be as useful as possible for the next idiot who stumbles onto this question. I'm not looking for particularly in-depth answers, here; I just haven't got any clear picture in my mind of how a multi-user version control system works. Once I've got that in mind I should be able to put the rest together by myself.
Ok, let's get the first thing about git straight. Git is distributed. Repeat this to yourself as many times as necessary: there is no central server, central repository or indeed anything central. Completely in contrast to SVN, you are not all accessing one repository (nothing central).
What you do is create a repository somewhere. Then everybody else clones (copies) it. Now everyone has their own copy of the repository and can do with it as they wish. Branch names don't even need to be consistent across repositories although many people do this as it helps.
So, how do you set it up? You init a repository then everyone who wants it clones it. In your case, I'd recommend each project have its own git repository rather than lumping all three together.
But but but how on earth am I supposed to manage a development team, I hear you say? Never fear. I said there was no central repository, but that doesn't stop you designating one of the repositories to be the release repository, and another to be the experimental or whatever.
What you need then is a workflow. Typically, you create said release repository and then say: "everybody pull from the master here". This represents your latest "stable" release. Now you tell everyone to branch from there and develop their new features. They do this. As these get done, you ask people to merge their changes back into their own master (or wherever), and then you, the release manager, pull from them - probably into a development branch, though you could well pull into your release master, because it is trivial to reverse commits. The next person then updates their master and does their merge. They resolve their own merge conflicts, and so on, until you have all the features you want. You do your testing, etc., and then the release repository really does pull all of that into its master.
In practice you might do many different things. For example, if you have a lot of people you're going to need quite a lot of communication going on, because if lots of people make incompatible changes to your shared project there's going to be a few merge commits. To avoid this, users can pull/merge from each other or use a local "shared" repository.
It does work.
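To make that concrete, here is a minimal sketch of the setup (the repository names and the share path are placeholders, not something Git Extensions requires):

# Somewhere everyone can reach (a server or network share): one bare repository per project
git init --bare //server/git/ProjectReuse.git
git init --bare //server/git/ProjectA.git
git init --bare //server/git/ProjectB.git

# Each developer clones the projects they work on
git clone //server/git/ProjectA.git

# A typical edit cycle
git checkout -b my-feature       # branch for the change
git add .
git commit -m "Describe the change"
git push origin my-feature       # publish the branch so others can pull/merge it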
As said by Ninefingers, just init a new repository and let others clone it. Keep in mind that when you want a public (or central) repository, you should create a shared repository, not a private one. The easiest way is to create a personal repository first and then clone it to a place (e.g. a network share) that everyone has access to, as a shared repository.
I created a few video tutorials for Git Extensions here:
http://code.google.com/p/gitextensions/
The video tutorials are a bit old and not very good, since I do not have a mic or any video editing skills, but they should be enough to get you started.
Good luck!
Like many programmers, I'm prone to periodic fits of "inspiration" wherein I will suddenly See The Light and perform major surgery on my code. Typically, this works out well, but there are times when I discover later that — due to lack of sleep/caffeine or simply an imperfect understanding of the problem — I've done something very foolish.
When this happens, the next step is to reverse the damage. Most easily, this means the undo stack in my editor… unless I closed the file at some point. Version control is next, but if I made changes between my most recent commit (I habitually don't commit code which breaks the build) and the moment of inspiration, they are lost. It wasn't in the repository, so the code never existed.
I'd like to set up my work environment in such a way that I needn't worry about this, but I've never come up with a completely satisfactory solution. Ideally:
A new, recoverable version would be created every time I save a file.
Those "auto-saved" versions won't clutter the main repository. (The vast majority of them would be completely useless; I hit Ctrl-S several times a minute.)
The "auto-saved" versions must reside locally so that I can browse through them very quickly. A repository with a 3-second turnaround simply won't do when trying to scan quickly through hundreds of revisions.
Options I've considered:
Just commit to the main repository before making a big change, even if the code may be broken. Cons: when "inspired", I generally don't have the presence of mind for this; breaks the build.
A locally-hosted Subversion repository with auto-versioning enabled, mounted as a "Web Folder". Cons: doesn't play well with working copies of other repositories; mounting proper WebDAV folders in Windows is painful at best.
As with the previous method, but using a branch in the main repository instead and merging to trunk whenever I would normally manually commit. Cons: not all hosted repositories can have auto-versioning enabled; doesn't meet points 2 and 3 above; can't safely reverse-merge from trunk to branch.
Switch to a DVCS and "combine" all my little commits when pushing. Cons: I don't know the first thing about DVCSes; sometimes Subversion is the only tool available; I don't know how to meet point 1 above.
Store working copy on a versioned file system. Cons: do these exist for Windows? If so, Google has failed to show me the way.
Does anyone know of a tool or combination of tools that will let me get what I want? Or have I set myself up with contradictory requirements? (Which I rather strongly suspect.)
Update: After more closely examining the tools I already use (sigh), it turns out that my text editor has a very nice multi-backup feature which meets my needs almost perfectly. It not only has an option for storing all backups in a "hidden" folder (which can then be added to global ignores for VCSes), but allows browsing and even diffing against backups right in the editor.
Problem solved. Thanks for the advice, folks!
Distributed Version Control. (mercurial, git, etc...)
The gist of the story is that there are no checkouts, only clones of a repository.
Your commits are visible only to you until you push them back into the main branch.
Want to make a radical experimental change? Clone the repository and do tons of commits on your computer. If it works out, push it back; if not, just roll back or trash the repo.
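A rough sketch of that experiment cycle with git (the paths are placeholders):

git clone /path/to/main-repo experiment   # a private copy of the repository
cd experiment
# ...hack away, committing as often as you like...
git add -A
git commit -m "wild idea, step 1"

# If the idea works out, push the commits back:
git push origin master
# If it doesn't, simply delete the clone:
cd .. && rm -rf experiment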
Most editors store the last version of your file before the save to a backup file. You could customize that process to append a revision number instead of the normal tilde. You'd then have a copy of the file every time you saved. If that would eat up too much disk space, you could opt for creating diffs for each change and customizing your editor to sequentially apply patches until you get to the revision you want.
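As an illustration of the numbered-backup idea, a tiny wrapper along these lines (entirely hypothetical; the real hook depends on your editor) keeps a copy on every save:

#!/bin/sh
# save-backup.sh FILE: copy FILE to FILE.~N~ using the next free revision number
f="$1"
n=1
while [ -e "$f.~$n~" ]; do
  n=$((n + 1))
done
cp -p "$f" "$f.~$n~"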
If you use Windows Vista, 7, or Windows Server 2003 or newer, you can use Shadow Copy. Basically, the Properties window for your files will have a new tab, "Previous Versions", that keeps track of the previous versions of the file.
The service should generate the snapshots automatically, but just to be safe you can run the following command right after your moment of "inspiration":
vssadmin create shadow /for=C:\
(replace C:\ with the volume that holds your project; vssadmin snapshots whole volumes rather than individual folders)
It has definitely saved my ass quite a few times.
Shadow Copy
I think it is time to switch editors. Emacs has a variable version-control, which determines whether Emacs will automatically create multiple backups for a file when saving it, naming them foo.~1~, foo.~2~ etc. Additional variables determine how many backup copies to keep.
I want to have a bare git repository stored on a (Windows) network share. I use Linux, and have said network share mounted with CIFS. My colleague uses Windows XP, and has the network share automounted (from Active Directory, somehow) as a network drive.
I wonder if I can use the repo from both computers, without concurrency problems.
I've already tested, and on my end I can clone ok, but I'm afraid of what might happen if we both access the same repo (push/pull), at the same time.
In the git FAQ there is a reference about using network file systems (and some problems with SMBFS), but I am not sure if there is any file locking done by the network/server/Windows/Linux - I'm quite sure there isn't.
So, has anyone used a git repo on a network share, without a server, and without problems?
Thank you,
Alex
PS: I want to avoid using an http server (or the git-daemon), because I do not have access to the server with the shares. Also, I know we can just push/pull from one to another, but we are required to have the code/repo on the share for back-up reasons.
Update:
My worries are not about the possibility of a network failure. Even so, we would have the required branches locally, and we would be able to compile our sources.
But, we usually commit quite often, and need to rebase/merge often. From my point of view, the best option would be to have a central repo on the share (so the backups are assured), and we would both clone from that one, and use it to rebase.
But, due to the fact we are doing this often, I am afraid about file/repo corruption, if it happens that we both push/pull at the same time. Normally, we could yell at each other each time we access the remote repo :), but it would be better to have it secured by the computers/network.
And, it is possible that GIT has an internal mechanism to do this (since someone can push to one of your repos, while you work on it), but I haven't found anything conclusive yet.
Update 2:
The repo on the share drive would be a bare repo, not containing a working copy.
Git requires minimal file locking, and locking is, I believe, the main cause of problems when using this kind of shared resource over a network file system. The reason it can get away with this is that most of the files in a Git repo - all the ones that form the object database - are named as a digest of their content and are immutable once created. So the problem of two clients trying to use the same file for different content doesn't come up.
The other part of the repository is trickier: the refs are stored in files under the "refs" directory (or in "packed-refs"), and these do change - although the refs/* files are small and are always rewritten rather than edited. In this case, Git writes the new ref to a temporary ".lock" file and then renames it over the target file. If the filesystem respects O_EXCL semantics, that's safe. Even if not, the worst that could happen would be a race overwriting a ref file. Although this would be annoying to encounter, it should not cause corruption as such: it just might be the case that you push to the shared repo, and that push looks like it succeeded whereas in fact someone else's did. But this could be sorted out simply by pulling (merging in the other guy's commits) and pushing again.
In summary, I don't think that repo corruption is too much of a problem here--- it's true that things can go a bit wrong due to locking problems, but the design of the Git repo will minimise the damage.
(Disclaimer: this all sounds good in theory, but I've not done any concurrent hammering of a repo to test it out, and I only share mine over NFS, not CIFS.)
Why bother? Git is designed to be distributed. Just have a repository on each machine and use the publish and pull mechanism to propagate your changes between them.
For backup purposes, run a nightly task to copy your repository to the share.
Or, create one repository each on the share and do your work from those, using them as distributed repositories from which you can pull changesets from each other. If you use this method, the performance of doing builds and so on will decrease, since you will constantly be accessing files over the network.
Or, have distributed repositories on your own computers, and run a periodic task to push your commits to the repositories on the share.
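A sketch of that last option, assuming the share is mounted at /mnt/share (the paths and names are placeholders):

# One-time: create a bare repository on the share and register it as a remote
git init --bare /mnt/share/backup/myproject.git
git remote add backup /mnt/share/backup/myproject.git

# Periodically (e.g. from a cron job or scheduled task): push all branches to it
git push backup --all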
Sounds just as if you'd rather like to use a centralized versioning system, so the backup requirement is satisfied.
Perhaps with xxx2git in between for you to work locally.