I have made some changes. I cannot use those changes now. I need to discard them for now and go back to them later when the star alignment is more favorable (e.g. when our Cobol guy has enough time to get to his half of the work).
Short of using Eclipse → Synchronize with team and manually copy pasting the contents to a scratch directory so I can do the merging later, is there any way to "stash" changes for later?
There is no git stash equivalent in Serena Dimensions. The poor man's way is to store your changes temporarily in a different folder, or in files with different names, without including them in the source-controlled solution, and switch back and forth as needed.
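Something like this, as a rough sketch (the paths and file names here are made up):

    # "stash": copy the modified files somewhere outside the work area
    mkdir -p ~/stash/feature-x
    cp src/Foo.java src/Bar.java ~/stash/feature-x/
    # then undo the checkout / restore the clean versions from Dimensions
    # later, when the time is right, bring the changes back:
    cp ~/stash/feature-x/*.java src/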
Another alternative is to use streams in order to have your changes source-controlled without affecting production code; a typical scenario is to have Integration and Main streams. But this depends on your access level to the Dimensions database you are using and on your project's needs.
A git repo can be maintained locally to get this and other git functionality on your own computer (or even for a small team with shared folders or a git server), since it does not interfere with Dimensions as long as you don't store the git metadata in the Dimensions-managed code and vice versa. This is not a straightforward solution and requires that you know how to set up a git repo, plus some care on your side when delivering to the Dimensions server, but it works and is really helpful if you are familiar with the git workflow.
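A minimal sketch of that setup, assuming your Dimensions work area lives at ~/work/project (the paths and the metadata folder name are placeholders; use whatever Dimensions actually creates, and make sure the .git folder is excluded from anything you deliver):

    cd ~/work/project
    git init                      # local repo on top of the Dimensions work area
    echo ".dm/" >> .gitignore     # keep Dimensions metadata out of git (adjust the name)
    git add . && git commit -m "baseline matching the Dimensions server"
    # from here on, git stash, local branches, etc. are available:
    git stash                     # set the half-done work aside
    git stash pop                 # bring it back later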
Dimensions is not as friendly as git for this kind of usage, but it is far more robust for larger and more tightly controlled projects.
Git and Dimensions work on different methodologies. After checking out a file, Dimensions only allows you to either commit a new version or discard it. As indicated above, you can still use streams or individual branches for your development work and merge/deliver the changes at a later point in time without affecting others' work.
We have an existing repository on the network, accessed via HTTP.
Should I first import these files to my local machine? I tried importing directories, files, etc., but everything is empty in my local folders. It says "success", but nothing ever shows up!
It doesn't make sense to create a repository on my side, yet all the tutorials seem to say to do exactly that; I think they're assuming you're starting from nothing.
My experience with Tortoise SVN has mostly been negative. Typically whatever I think I should do turns out to be incorrect, and I end up having to undo, and redo, or lose my work. Once I even managed to corrupt the main repository and it had to be restored from backup.
I absolutely cannot damage this existing repository!
If you're used to CVS or some older version control systems, note that SVN uses the same terms differently. In those systems, "checkout" often means locking the file in exclusive mode.
In SVN, checkout makes a working copy, automatically manages the revisions, and helps you merge changes from multiple sources. You don't need to lock a file unless it's graphical or some other binary format where merging doesn't make sense.
So in TortoiseSVN, you can checkout, and edit the files. The icons on the files will change to indicate their status.
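If you prefer the command line, the equivalent of that workflow is roughly this (the repository URL and file names are placeholders):

    svn checkout http://server/svn/repo/trunk myproject   # working copy, nothing is locked
    cd myproject
    # ...edit files...
    svn status                       # see what you have changed
    svn commit -m "describe the change"
    svn lock images/logo.psd         # only needed for binaries that can't be merged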
SVN is easy in comparison to git, where the same terms are again redefined and significantly augmented!
The title may not be very clear, but the issue I am facing is this:
Our designers are working on large Photoshop files across the network; this causes a number of network traffic and file corruption issues which I am trying to overcome.
The way I want to do this is to have the designers copy the files to their machines (Mac OS X) and work on them locally. But the problem then stands that they may forget to copy them back up, or that another designer may start work on the version stored on the network.
What I need is a system where the designer checks out the files or folders from the server which locks those files so no other user can copy them until they are checked back in. We do not need to store revisions for the files.
My initial idea was to use SVN, or preferably Git, and somehow force a lock on checkout. Does this sound feasible, or is there a better system?
How big are the files on average? I'm not sure about Git (haven't used it), but SVN should be OK. If you do go with SVN, I would trial checking out over HTTP/HTTPS vs. a network path to the repo, as you may get a speed advantage out of one or the other. When we VPN to our repo at work it is literally 100 times faster over HTTP than checking out using a network \\path to the repo.
SVN is a good option, but you will have revisions (this is the whole point of SVN). SVN doesn't lock files by default, but you may configure it so that it does. See http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.advanced.locking
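As a rough illustration of that locking setup (file names are just examples), you mark the binary files as needing a lock and then lock them before editing:

    svn propset svn:needs-lock yes design/mockup.psd
    svn commit -m "require a lock on the PSD" design/mockup.psd
    svn lock design/mockup.psd -m "editing the mockup"   # other users now get a read-only copy
    # ...edit the file...
    svn commit -m "updated mockup" design/mockup.psd     # the commit releases the lock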
I don't know git very well, but since it's not a centralized VCS, I'm pretty sure it isn't the right tool for your situation.
I am working on a project where the branches folder contains at least 300 different branches (copies of the trunk) which will no longer be used. Since SVN is running more and more slowly, I wonder whether deleting those branches will make Subversion operate faster.
Other people on my team say that since the source code will still be on the server, it won't change anything (so the branches stay undeleted).
But I read something about Subversion before (I don't remember where) saying that HEAD is managed a little differently than previous revisions, which could affect the speed of the repository.
Which one of these holds true?
Subversion performance is more related to the load on your server than the size of the repository. Check on disk space and CPU performance, as well as looking into the web server performance (or svnserve on Windows).
If you remove branches, there will still be a repository version that has those branches in it, so they will not be removed. The only way to actually remove content is to dump the repository (svnadmin dump) and then use svndumpfilter to remove the branches in question from the dumped content. The resulting content can be loaded into a new repository without the removed content, and even the revision numbers can be updated.
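A sketch of that dump/filter/load cycle (repository paths and branch names are placeholders):

    svnadmin dump /var/svn/repo > repo.dump
    svndumpfilter exclude branches/old-experiment \
        --drop-empty-revs --renumber-revs < repo.dump > filtered.dump
    svnadmin create /var/svn/repo-trimmed
    svnadmin load /var/svn/repo-trimmed < filtered.dump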
I am not aware of the HEAD being handled differently in terms of performance. However, copies of the HEAD (or anything else) are cheap, lightweight copies, and should not affect performance.
Can you provide any additional information on which specific operations are slowing down?
Currently I am working on a project that uses TFS as source control. I am in the middle of implementing a piece of functionality, but am blocked by work that needs to be done by outside resources. Since the functionality is not fully complete, I can't check in the changes without breaking the build. So instead of waiting a couple days while the blocking work is finished, I want to work on some defects.
To do this work in isolation from my other changes, I am working the defects in a second workspace I just created.
After I started using a second workspace to isolate my changes, a coworker asked me why I didn't just shelve them. After doing some reading on shelving, it looks like that is the preferred solution for situations like mine. My question is: in what situations, if any, would you create multiple workspaces, and in what situations should you use shelving? There are some posts about shelving, but I don't see very much on the subject of workspaces.
By the way, I got the idea for creating a second workspace here.
A new branch would probably be the best way to go. But, to answer your question, one of the key differences between shelving and just using a different workspace is that when you shelve, you push your code back to TFS, so it is backed up. Whatever is in your workspace is just what you have on your machine -- if you lose it, it's gone.
We use branching a lot in my shop, and as a result, I haven't seen many uses for shelving.
However, I have found one case where it has been very useful to me:
I often bounce between 2 different development machines (one at the office, one at home, connected via VPN). If I am working on something, and I want to transfer it from home to work, or vice-versa, I often use shelving. I can shelve it from one machine and un-shelve it from the other. I do this when I am in the middle of a change, and checking in would break the build or otherwise interrupt other developers.
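A rough sketch of that with the tf.exe command line (the shelveset name is made up):

    rem on the machine where the half-finished work lives
    tf shelve WIP-HomeToOffice /comment:"moving in-progress work"

    rem on the other machine, in a workspace mapped to the same branch
    tf unshelve WIP-HomeToOffice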
You are talking about two completely different concepts here. When you shelve code, you are saving it to TFS, but not checking it in to any particular branch. Creating a different workspace just sets up a new local folder on your development machine and saves the files from your branch there. When you do a check-in, you could still have conflicts.
Why not create a new branch of your code? You can work on that branch and check in without stepping on anyone else's changes, because you are checking in to your own branch of the code. Then, when you have completed your changes, and others have completed theirs on the main branch, you can merge your changes into the main branch.
Shelving is the ideal option. Shelving allows you to set aside a batch of changes in TFS, outside the regular build, and retrieve them later by name. Multiple workspaces are not a solution for what you're doing. Multiple workspaces are good if you're maintaining different versions of a product and need to work on them, e.g. say you have a 4.0 and a 5.0 product and need to apply a security fix to both versions. Shelving is great when you want to make changes but not commit them immediately.
Like many programmers, I'm prone to periodic fits of "inspiration" wherein I will suddenly See The Light and perform major surgery on my code. Typically, this works out well, but there are times when I discover later that — due to lack of sleep/caffeine or simply an imperfect understanding of the problem — I've done something very foolish.
When this happens, the next step is to reverse the damage. Most easily, this means the undo stack in my editor… unless I closed the file at some point. Version control is next, but if I made changes between my most recent commit (I habitually don't commit code which breaks the build) and the moment of inspiration, they are lost. The changes weren't in the repository, so the code never existed.
I'd like set up my work environment in such a way that I needn't worry about this, but I've never come up with a completely satisfactory solution. Ideally:
A new, recoverable version would be created every time I save a file.
Those "auto-saved" versions won't clutter the main repository. (The vast majority of them would be completely useless; I hit Ctrl-S several times a minute.)
The "auto-saved" versions must reside locally so that I can browse through them very quickly. A repository with a 3-second turnaround simply won't do when trying to scan quickly through hundreds of revisions.
Options I've considered:
Just commit to the main repository before making a big change, even if the code may be broken. Cons: when "inspired", I generally don't have the presence of mind for this; breaks the build.
A locally-hosted Subversion repository with auto-versioning enabled, mounted as a "Web Folder". Cons: doesn't play well with working copies of other repositories; mounting proper WebDAV folders in Windows is painful at best.
As with the previous method, but using a branch in the main repository instead and merging to trunk whenever I would normally manually commit. Cons: not all hosted repositories can have auto-versioning enabled; doesn't meet points 2 and 3 above; can't safely reverse-merge from trunk to branch.
Switch to a DVCS and "combine" all my little commits when pushing. Cons: I don't know the first thing about DVCSes; sometimes Subversion is the only tool available; I don't know how to meet point 1 above.
Store working copy on a versioned file system. Cons: do these exist for Windows? If so, Google has failed to show me the way.
Does anyone know of a tool or combination of tools that will let me get what I want? Or have I set myself up with contradictory requirements? (Which I rather strongly suspect.)
Update: After more closely examining the tools I already use (sigh), it turns out that my text editor has a very nice multi-backup feature which meets my needs almost perfectly. It not only has an option for storing all backups in a "hidden" folder (which can then be added to global ignores for VCSes), but allows browsing and even diffing against backups right in the editor.
Problem solved. Thanks for the advice, folks!
Distributed Version Control. (mercurial, git, etc...)
The gist of the story is that there are no checkouts, only clones of a repository.
Your commits are visible only to you until you push them back into the main branch.
Want to make a radical experimental change? Clone the repository and do tons of commits on your computer. If it works out, push it back; if not, just roll back or trash the clone.
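A minimal sketch of that flow with git (the repository path is a placeholder):

    git clone /path/to/main-repo experiment     # private copy to hack on
    cd experiment
    # ...tons of small commits while experimenting...
    git commit -am "radical idea, step 3"
    # if it worked out, publish it:
    git push origin master
    # if not, just delete the clone (or reset it to where you started)
    cd .. && rm -rf experiment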
Most editors store the previous version of your file in a backup file when you save. You could customize that process to append a revision number instead of the normal tilde. You'd then have a copy of the file every time you saved. If that would eat up too much disk space, you could opt for creating diffs for each change and customizing your editor to sequentially apply patches until you get to the revision you want.
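A bare-bones sketch of the diff-based variant (file names are illustrative; it assumes you kept one full copy of the oldest version):

    # after each save, keep only the delta instead of another full copy
    diff -u foo.c.bak foo.c > backups/foo.c.$(date +%s).patch
    # to reconstruct an old state, apply the patches in order onto the oldest copy
    cp backups/foo.c.orig /tmp/foo.c
    for p in backups/foo.c.*.patch; do patch /tmp/foo.c < "$p"; done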
If you use Windows Vista, 7, or Windows Server 2003 or newer, you could use Shadow Copy. Basically, the Properties window for your files will have a new 'Previous Versions' tab that keeps track of previous versions of the file.
The service should automatically generate snapshots, but just to be safe you can run the following command right after your moment of "inspiration" (note that vssadmin takes a volume rather than a folder, so this snapshots the whole drive your project lives on):
'vssadmin create shadow /for=C:'
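You can then confirm that the snapshot exists with 'vssadmin list shadows'.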
It has definitely saved my ass quite a few times.
Shadow Copy
I think it is time to switch editors. Emacs has a variable version-control, which determines whether Emacs will automatically create multiple backups for a file when saving it, naming them foo.~1~, foo.~2~ etc. Additional variables determine how many backup copies to keep.