I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a Windows machine, while the files are hosted on a Debian fileserver. I connect via a mapped drive on Windows.
Despite having a 1 Gbit Ethernet connection, doing an Eclipse project refresh is quite slow, taking up to 5 minutes, and I am blocked from working while this happens.
This normally wouldn't be such a problem, since Eclipse theoretically shouldn't have to do a full refresh very often. However, I use the Subclipse plugin, which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is Eclipse checking the 10k files one by one for changes over Samba.
There is a large number of files in the codebase that I would never need to access from Eclipse, so I don't need it to check them at all. However, I can't figure out how to prevent it from doing so. I have tried marking them 'derived', which prevents them from being included in the build process etc., but it doesn't seem to speed up the refresh at all; Eclipse still checks their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' process, but it doesn't speed up the actual refresh that precedes building (which is what takes the most time).
Thanks all for your suggestions. Basically, JW was on the right track. Work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
Given that it's subversioned, why not have the files locally and use a post-commit hook to update to the latest version on the dev server after every commit? (Or include a specific string in the commit log (e.g. '##DEPLOY##') when you want to update dev, and only run the update when the post-commit hook sees this string.)
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in Eclipse, and the dev server is still OK (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
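A rough sketch of the conditional variant, as a Unix post-commit hook (hooks/post-commit inside the repository; the server-side working-copy path is hypothetical):

    #!/bin/sh
    # post-commit hook: Subversion passes the repository path and the new revision.
    REPOS="$1"
    REV="$2"
    # Deploy only when the commit message contains the magic string.
    if svnlook log -r "$REV" "$REPOS" | grep -q '##DEPLOY##'; then
        svn update /var/www/dev-site   # hypothetical working copy on the dev server
    fi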
I solved this problem by changing the "File Transfer Buffer Size" at:
Window -> Preferences -> Remote Systems -> Files
Change the "File transfer buffer size" Download (KB) and Upload (KB) values to something high. I set them to 1000 KB; by default they are 40 KB.
Use the offline folders feature in Windows: right-click the share and select "Make available offline".
It could save a lot of time and round-trip delays in the file-sharing protocol.
The use of svn externals with the revision flag for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest you do some simple testing before doing it for real.
I recently switched jobs, and with that switched source control from TFS to SVN, which is new to me.
In TFS there was an option to disable automatic checkout of files when you started typing in them. It's enabled by default and a lot of users like this behaviour, but I prefer to know for certain what's being changed before committing. A personal thing.
VisualSVN auto-checkouts by default. Is there a similar option to turn it off? I can't seem to find it in the settings.
"Automatic checkout" term in SVN and in TFS worlds has different meanings, as far as I see.
In Subversion, checkout relates to svn checkout operation which gets a working copy from a repository. In TFS it looks like the term somehow relates to automatic locking mechanism.
If you want a file to be locked automatically when you start modifying it in Visual Studio (with VisualSVN extension installed), see the KB article "Lock-Modify-Unlock Model with VisualSVN". I also suggest reading the SVNBook chapter "Locking".
Generally speaking, you can set the svn:needs-lock property on files. The property instructs the client which files must be locked before editing. After applying svn:needs-lock to a file, the file gets a read-only attribute, and it must be explicitly locked by the user before editing. After committing, the lock is released by default.
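As a minimal command-line illustration (the file name is hypothetical):

    # Mark the file as requiring a lock; working copies get it read-only.
    svn propset svn:needs-lock '*' Billing.cs
    svn commit -m "Require a lock before editing Billing.cs"

    # Before editing, acquire the lock (this fails if someone else holds it).
    svn lock Billing.cs -m "reworking the invoice logic"

    # Committing the edited file releases the lock by default.
    svn commit -m "Rework invoice logic"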
Short answer: I don't think you can do this without becoming very unpopular.
I think you should read up on the SVN book's description of how SVN works, especially the versioning models.
In your environment, everyone wants to be able to modify any file locally and then send their changes to the server, merging them with colleagues' changes if necessary. This approach works well as long as two people aren't changing the same files all the time, which is typical of most dev shops.
The old TFS/VSS model of checking out a file to work on it is pretty obsolete today; the more 'optimistic' approach, where you assume you have exclusive access, is much more productive. (As usual, it's easier to ask forgiveness if it goes wrong than to ask permission every time.)
Your main problem is that you cannot mix these models - if your colleagues are using the merge model, then you have to as well. You cannot lock a file and expect them to still be able to change any file anytime.
Now, there are tricks you can use to prevent yourself from modifying files you never meant to. I'm not sure about VisualSVN, but TortoiseSVN (awesome tool) can run client hooks, i.e. you can write a program that runs on every checkout, and that program can be as simple as setting every file's read-only flag. Whether this is good enough for you is another matter.
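As a rough sketch, such a TortoiseSVN client-side hook (registered under Settings -> Hook Scripts, e.g. as a post-update hook) could be a one-line batch file; the working-copy path is hypothetical:

    rem readonly-all.bat: after every update, mark the whole working copy
    rem read-only so accidental edits fail loudly. Clear the flag deliberately
    rem (attrib -r <file>) on files you really mean to change.
    attrib +r "C:\work\trunk\*" /s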
Personally, I would get used to the idea of changing whatever you like, whenever. If you accidentally edit a file, you can see the change indicator (AnkhSVN turns the file icon orange for changed files), and it's easy to 'svn revert' changes you didn't want to make. SVN also lets you see diffs really easily, especially on commit (double-click the files in the commit dialog). The productivity gains from being able to work without the tools getting in your way (as I found with TFS continually pinging at me as I tried to edit a file) are huge. The SVN tools are really good at letting you "ask forgiveness", so you don't need to run in the crappy old TFS way now that you've upgraded to something better.
The other advantage is that this applies to files that are not in a Visual Studio project. If you've ever had a project file that was edited outside VS (e.g. a generated WCF client stub), then you will appreciate how SVN works: never again will you do a full commit and find that TFS has conveniently decided that your changed file wasn't changed and so didn't need to be committed!
Actually, I have faced this issue many times while working with SVN. Most of the time I work with VSS for source control, but for the last couple of months I have been working with SVN.
We are using TortoiseSVN and AnkhSVN with VS 2010.
Our team has 5-6 people, and some of them work on the same file at the same time. When somebody commits, we have seen other developers' changes vanish, and sometimes we get lines marked with version numbers (conflict markers). This consumes a lot of time, as we have to resolve the conflicts.
Please provide information so we can avoid such issues.
If two developers are working on the same file and make changes to the same area of code, then you have to resolve the conflict manually. There is no way to avoid it, no matter which version control system you use.
The version control system cannot know what the correct code is, so human intervention is required.
There is no way around this, other than preventing users from working on the same code. This is done in SVN by locking the file.
Each developer must svn update before svn commit. Between the update and the commit, the developer must do a full, clean build and run all tests to make sure their code still works after merging all the other developers' changes into their copy.
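A minimal sketch of that rhythm from the command line (the build and test commands are placeholders for whatever your solution actually uses):

    svn update                          # merge everyone else's changes into your copy
    # ...resolve any conflicts svn reports before going further...
    msbuild MySolution.sln /t:Rebuild   # hypothetical: full, clean build
    mstest /testcontainer:MyTests.dll   # hypothetical: run the test suite
    svn commit -m "describe the change"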
You can set svn:needs-lock on files or folders that need to be locked before changes are made. When you try to edit such a file, you will be required to lock it first, and if it is already locked by someone else, you get an error message preventing you from making any changes. This can be done in TortoiseSVN under Properties -> Advanced.
The title may not be so clear, but the issue I am facing is this:
Our designers are working on large Photoshop files across the network. This causes a number of network-traffic and file-corruption issues that I am trying to overcome.
The way I want to do this is to have the designers copy the files to their machines (Mac OS X) and work on them locally. But the problem remains that they may forget to copy them back up, or that another designer may start work on the version stored on the network.
What I need is a system where a designer checks out files or folders from the server, which locks those files so no other user can copy them until they are checked back in. We do not need to store revisions of the files.
My initial idea was to use SVN, or preferably Git, and somehow force a lock on checkout. Does this sound feasible, or is there a better system?
How big are the files on average? I'm not sure about Git, as I haven't used it, but SVN should be OK. If you go with SVN, I would trial checking out over HTTP/HTTPS vs. a network path to the repo, as you may get a speed advantage from one or the other. When we VPN to our repo at work, it is literally 100 times faster over HTTP than checking out using a network \\path to the repo.
SVN is a good option, but you will have revisions (that is the whole point of SVN). SVN doesn't lock files by default, but you may configure it so that it does. See http://svnbook.red-bean.com/nightly/en/svn-book.html#svn.advanced.locking
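One hedged way to set that up: mark every Photoshop file with svn:needs-lock so checked-out copies arrive read-only until explicitly locked, and let the client add the property to new files automatically via auto-props. The paths and patterns below are illustrative:

    # Require a lock on all existing PSD files (run from the working-copy root):
    find . -name '*.psd' -exec svn propset svn:needs-lock '*' {} +
    svn commit -m "Require locks on Photoshop files"

    # In each designer's ~/.subversion/config, so new .psd files get the
    # property automatically on 'svn add':
    [miscellany]
    enable-auto-props = yes

    [auto-props]
    *.psd = svn:needs-lock=*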
I don't know git very well, but since it's not a centralized VCS, I'm pretty sure it isn't the right tool for your situation.
Like many programmers, I'm prone to periodic fits of "inspiration" wherein I will suddenly See The Light and perform major surgery on my code. Typically, this works out well, but there are times when I discover later that — due to lack of sleep/caffeine or simply an imperfect understanding of the problem — I've done something very foolish.
When this happens, the next step is to reverse the damage. Most easily, this means the undo stack in my editor… unless I closed the file at some point. Version control is next, but if I made changes between my most recent commit (I habitually don't commit code which breaks the build) and the moment of inspiration, they are lost. It wasn't in the repository, so the code never existed.
I'd like set up my work environment in such a way that I needn't worry about this, but I've never come up with a completely satisfactory solution. Ideally:
A new, recoverable version would be created every time I save a file.
Those "auto-saved" versions won't clutter the main repository. (The vast majority of them would be completely useless; I hit Ctrl-S several times a minute.)
The "auto-saved" versions must reside locally so that I can browse through them very quickly. A repository with a 3-second turnaround simply won't do when trying to scan quickly through hundreds of revisions.
Options I've considered:
Just commit to the main repository before making a big change, even if the code may be broken. Cons: when "inspired", I generally don't have the presence of mind for this; breaks the build.
A locally-hosted Subversion repository with auto-versioning enabled, mounted as a "Web Folder". Cons: doesn't play well with working copies of other repositories; mounting proper WebDAV folders in Windows is painful at best.
As with the previous method, but using a branch in the main repository instead and merging to trunk whenever I would normally manually commit. Cons: not all hosted repositories can have auto-versioning enabled; doesn't meet points 2 and 3 above; can't safely reverse-merge from trunk to branch.
Switch to a DVCS and "combine" all my little commits when pushing. Cons: I don't know the first thing about DVCSes; sometimes Subversion is the only tool available; I don't know how to meet point 1 above.
Store working copy on a versioned file system. Cons: do these exist for Windows? If so, Google has failed to show me the way.
Does anyone know of a tool or combination of tools that will let me get what I want? Or have I set myself up with contradictory requirements? (Which I rather strongly suspect.)
Update: After more closely examining the tools I already use (sigh), it turns out that my text editor has a very nice multi-backup feature which meets my needs almost perfectly. It not only has an option for storing all backups in a "hidden" folder (which can then be added to global ignores for VCSes), but allows browsing and even diffing against backups right in the editor.
Problem solved. Thanks for the advice, folks!
Distributed version control (Mercurial, Git, etc.).
The gist of the story is that there are no checkouts, only clones of a repository.
Your commits are visible only to you until you push them back to the main repository.
Want to make a radical, experimental change? Clone the repository and do tons of commits on your computer. If it works out, push it back; if not, just roll back or trash the repo.
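A minimal sketch of that flow with Git (repository paths are illustrative):

    git clone /path/to/main-repo experiment    # a private clone to play in
    cd experiment
    # ...hack away, committing as often as you like...
    git commit -am "wild idea, step 1"
    git commit -am "wild idea, step 2"

    git push origin master       # it worked out: publish the commits
    cd .. && rm -rf experiment   # or it didn't: just throw the clone away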
Most editors store the last version of your file before the save in a backup file. You could customize that process to append a revision number instead of the usual tilde, giving you a copy of the file every time you saved. If that would eat up too much disk space, you could opt for creating diffs for each change and customizing your editor to sequentially apply patches until you get to the revision you want.
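A rough sketch of the numbered-copy variant, as a small script an editor could call on every save (the script name and snapshot folder are hypothetical):

    #!/bin/sh
    # snapshot.sh FILE - keep a numbered copy of FILE each time it is saved.
    f="$1"
    dir=".snapshots"
    mkdir -p "$dir"
    # Count existing snapshots of this file and write the next one.
    n=$(ls "$dir/$(basename "$f")".* 2>/dev/null | wc -l)
    cp "$f" "$dir/$(basename "$f").$((n + 1))"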
If you use Windows Vista, 7, or Windows Server 2003 or newer, you could use Shadow Copy. Basically, the Properties window for your files will have a new 'Previous Versions' tab that keeps track of the previous versions of the file.
The service should automatically generate snapshots, but just to be safe you can run the following command right after your moment of "inspiration" (note that shadow copies are created per volume, not per folder):
vssadmin create shadow /for=C:
It has definitely saved my ass quite a few times.
I think it is time to switch editors. Emacs has a variable, version-control, which determines whether Emacs will automatically create numbered backups of a file when saving it, naming them foo.~1~, foo.~2~, etc. Additional variables (kept-new-versions, kept-old-versions, delete-old-versions) determine how many backup copies to keep.
I'll try to make this as straightforward as possible.
Currently our team has a VSS database where our projects are stored.
Developers grab the code, place it on their local machines, and develop locally.
A designated developer grabs the latest version and pushes it to the development server.
The problem is, when a file is removed from the project (by deleting it in VS2008) then the next time another developer (not the one who deleted it) checks in, it prompts them to check in those deleted files because they still have a copy on their local machine.
Is there a way around this? To have VSS instruct the client machine to remove these files and not prompt them to check back in? What is the preferred approach for this?
Edit Note(s):
I agree SVN is better than VSS
I agree Web Application project is better than Web Site project
Problem: This same thing happens with files which are removed from class libraries.
Your number one way around this is to stop using Web Site projects. Web Site projects cause Visual Studio to automatically add anything it finds in the project path to the project.
Instead, move to Web Application projects, which don't have this problem.
Web Site projects are only good for single-person development.
UPDATE:
VB shops from days gone by had similar issues, in that whatever they had installed affected the build process. You might take a page from their playbook and have a "clean" build machine: prior to doing a deployment, you would delete all of the project folders, then do a Get Latest. This way you could be sure that the only thing deployed is what you have in source control.
Incidentally, this is also how the TFS Build server works. It deletes the workspace, then creates a new one and downloads the necessary project files.
Further, you might consider using something like CruiseControl to handle builds.
Maybe the developers should take care to only check in or add things that they have been working on. It's kind of sloppy if they are adding things that they weren't even using.
Your best solution would be to switch to a better version control system, like SVN.
At my job we recently acquired a project from an outsourcing company that used VSS for version control. We were able to import all of the change history from VSS into SVN and get up and running pretty quickly with SVN at that point.
And with SVN, you can set up ignores for files and folders, so the unwanted files in your web projects don't get put into SVN, and the ignore attributes are checked out onto each developer's machine.
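A quick illustration of the mechanism (the folder name is hypothetical):

    # Ignore the 'bin' build-output folder in this directory (multiple patterns
    # go in the same property value, one per line). The property is versioned,
    # so it reaches every developer on their next update.
    svn propset svn:ignore 'bin' .
    svn commit -m "Ignore build output"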
I believe we used VSSMigrate to do the migration to SVN http://www.poweradmin.com/sourcecode/vssmigrate.aspx
VSS is an awful versioning system and you should switch to SVN, but that's got nothing to do with the crux of the problem. The project file contains references to the files that are actually part of the project. If the Visual Studio project file isn't checked in along with the changes to it, there's no way for any other developer to be fully updated; hence the prompts to delete files when they grab the latest from VSS. From there you've got multiple choices...
Make the vbproj part of the repository. Any project-level changes will be part of the commit, and other developers can be notified. The problem here is that it's also going to be on the dev server. Ideally you could use nearly the same process to deploy to dev as you would to deploy a release. This leads into the other way...
SVN gives you hooks for almost all major events, where a hook is literally just a properly named batch file or exe. For your purposes, you could use a post-commit hook to push the appropriate files, say via FTP, to the server on every commit. File problem solved, and, more importantly, you move closer to the concept of continuous integration.
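A hedged sketch of such a hook (hooks/post-commit in the repository; paths are hypothetical, and a plain svn export to the web root is shown in place of FTP for brevity):

    #!/bin/sh
    # Subversion passes the repository path and the committed revision.
    REPOS="$1"
    REV="$2"
    # Export a clean copy of the committed tree over the dev site.
    svn export --force "file://$REPOS/trunk" /var/www/dev-site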
Something you may want to consider doing:
1. Get Latest (Recursive)
2. Check In ...
It's a manual process, but it may give you the desired result; plus, if VS talks about deleted files, you know they should be deleted from the local machine in step 1.