I'll try to make this as straightforward as possible.
Currently our team has a VSS database where our projects are stored.
Developers grab the code, place it on their local machine, and develop locally.
A designated developer grabs the latest version and pushes it to the development server.
The problem is, when a file is removed from the project (by deleting it in VS2008), the next time another developer (not the one who deleted it) checks in, they are prompted to check in those deleted files because they still have a copy on their local machine.
Is there a way around this? To have VSS instruct the client machine to remove these files and not prompt them to check back in? What is the preferred approach for this?
Edit Note(s):
I agree SVN is better than VSS
I agree Web Application project is better than Web Site project
Problem: This same thing happens with files which are removed from class libraries.
Your number one way around this is to stop using Web Site Projects. Web Site Projects cause Visual Studio to automatically add anything it finds in the project path to the project.
Instead, move to Web Application Projects which don't have this behavior problem.
Web Site Projects are good for single-person development.
UPDATE:
VB shops from days gone by had similar issues, in that whatever they had installed affected the build process. You might take a page from their playbook and have a "clean" build machine: prior to doing a deployment you would delete all of the project folders, then do a get latest. This way you can be sure that the only thing deployed is what you have in source control.
Incidentally, this is also how the TFS Build server works. It deletes the workspace, then creates a new one and downloads the necessary project files.
Further, you might consider using something like Cruise Control to handle builds.
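To make the "clean build machine" step above concrete, here is a minimal sketch in Python, assuming the VSS command-line client (ss.exe) is on the PATH and SSDIR points at your srcsafe.ini; the project path and local folder are hypothetical, and the exact ss.exe switches may vary by VSS version:

# clean_build.py - wipe the workspace, then get latest from VSS
import os
import shutil
import subprocess

WORKSPACE = r"C:\Builds\MyWebApp"    # local folder that gets wiped (hypothetical)
VSS_PROJECT = "$/MyWebApp"           # VSS project path (hypothetical)

def clean_get_latest():
    # 1. Delete the whole local workspace so stale/deleted files cannot survive.
    if os.path.isdir(WORKSPACE):
        shutil.rmtree(WORKSPACE)
    os.makedirs(WORKSPACE)

    # 2. Get latest recursively into the clean folder.
    #    -R = recursive, -GL = override the working folder, -I- = no prompts.
    #    (Check the VSS command-line documentation for your version.)
    subprocess.run(
        ["ss", "Get", VSS_PROJECT, "-R", f"-GL{WORKSPACE}", "-I-"],
        check=True,
    )

if __name__ == "__main__":
    clean_get_latest()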
Maybe the devs should take care to only check in or add things that they have been working on. It's kind of sloppy if they are adding things that they were not even using.
Your best solution would be to switch to a better version control system, like SVN.
At my job we recently acquired a project from an outsourcing company who did use VSS as their version control. We were able to import all of the change history into SVN from VSS, and get up and running pretty quickly with SVN at that point.
And with SVN you can set up ignores for files and folders, so the files in your web projects don't get put into SVN, and the ignore attributes are checked out onto each developer's machine.
I believe we used VSSMigrate to do the migration to SVN http://www.poweradmin.com/sourcecode/vssmigrate.aspx
VSS is an awful versioning system and you should switch to SVN, but that's got nothing to do with the crux of the problem. The project file contains references to the files that are actually part of the project. If the Visual Studio project file isn't checked in along with the changes to it, there's no way for any other developer to be fully updated - hence the prompts about deleted files when they grab the latest from VSS. From there you've got multiple choices...
Make the vbproj part of the repository. Any project-level changes will be part of the commit and other developers can be notified. The problem here is that it's also going to be on the dev server. Ideally you could use nearly the same process to deploy to dev as you would to deploy a release. This leads into the other way...
SVN gives you hooks for almost all major events, where a hook is literally just a properly named batch file / exe. For your purposes, you could use a post-commit hook to push the appropriate files, say via FTP, to the server on every commit. File problem solved, and more importantly, you're a step closer to continuous integration.
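As a rough illustration of that post-commit idea (not a drop-in solution): SVN invokes the post-commit hook with the repository path and revision number, so a small script can ask svnlook what changed and push those files via FTP. The host, credentials and layout below are hypothetical placeholders; on a Windows SVN server you would call this from post-commit.bat.

#!/usr/bin/env python
# post-commit hook sketch: push committed files to the dev server via FTP.
import subprocess
import sys
from ftplib import FTP
from io import BytesIO

REPOS, REV = sys.argv[1], sys.argv[2]
DEV_SERVER, FTP_USER, FTP_PASS = "devserver.example.com", "deploy", "secret"

# List what this commit touched, e.g. "U   trunk/web/Default.aspx".
changed = subprocess.run(
    ["svnlook", "changed", "-r", REV, REPOS],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

ftp = FTP(DEV_SERVER)
ftp.login(FTP_USER, FTP_PASS)
for line in changed:
    action, path = line.split(None, 1)
    if action in ("A", "U") and not path.endswith("/"):
        # Read the committed content straight from the repository and upload it.
        # (Creating missing remote directories is omitted for brevity.)
        content = subprocess.run(
            ["svnlook", "cat", "-r", REV, REPOS, path],
            capture_output=True, check=True,
        ).stdout
        ftp.storbinary("STOR " + path, BytesIO(content))
ftp.quit()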
Something you may want to consider doing:
Get Latest (Recursive)
Check In ...
It's a manual process, but it may give you the desired result; plus, if VS mentions deleted files, you know they should be deleted from the local machine in step 1.
Related
We have multiple developers on our team. This works for everyone except one developer, but we cannot seem to find the reason it does not work for this individual. We all have VS Premium or higher, with the TFS 2012 Power Tools installed.
We have a branch. We get latest version from branch. Go to windows explorer and delete all files in folder "sdk" (there exist no subdirectories in sdk/). Then we copy into it a bunch of files. (This effectively leaves some files as new files, updated files, identical files or removed files when compared with what was deleted.)
When we go to pending changes, these changes show up under "Excluded Changes - Add(s) 51, Deletes(3)".
Except for one developer. His system does not recognize these changes. What might cause this to not work for him?
If it helps troubleshoot: he is also the only developer whose .dll files get locked if he deletes these files via the Power Tools delete option in Windows Explorer. This does not happen for anyone else.
This is what we've checked so far:
EDIT: Solution Found - Thank you all for the responses! It was indeed the local vs. server workspace option. Setting his workspace to local solved these and a few other issues he was apparently having.
Make sure that the developer is using a "Local Workspace" as opposed to the "Server Workspace".
This is a concept introduced in TFS 2012 to help developers work offline, as opposed to the server workspaces in earlier versions, which did not allow that. TFS 2012 changes up the workspace options: server workspaces are still available and work exactly as they have in previous versions. However, TFS 2012 now contains a new type of workspace, called a local workspace. Again, this is an oversimplification, but in a local workspace all the files are read/write, not read-only. The metadata about the files is stored in a hidden folder in the root of the workspace, which allows edits, renames and deletes to be done locally without any communication to the server.
This improves the offline story with TFS significantly, as you no longer encounter issues with editing read-only files. It also makes it easier to work with other tools (such as Notepad) to edit code files. Making a change to a code file using Notepad will still mark that file as edited, which will be picked up by TFS the next time you connect.
This only ever happens when a user tampers with a local view of source control (be it a local workspace, or not). If all you ever did was get latest from TFS this would never occur, instead, the local view of what is in TFS would always be properly managed.
Also sounds like a bad merge, e.g. getting latest (where the files no longer exist) and then copying in old content (introducing untracked files). One thing you might try to correct the issue is doing a forced fetch from TFS after deleting the local workspace contents, BEFORE attempting a merge. This will ensure that the local workspace is up to date and accurate with respect to what the TFS server believes is truth; if the problem still occurs after merging in content, then it is almost certainly within the merge process the user is going through (i.e. PEBKAC, or a knowledge gap about what they are doing).
If you unshelve old content (pre-deletion) into the local workspace (where the deletions have already been performed, according to the SCC, and thus locally because of a sync/get-latest) then the unshelved files will effectively become untracked and it's up to the user to clean up the mess. This is identical to a user having copied loose files into their workspace that TFS never had any knowledge of. TFS isn't going to prune untracked files for you, I believe some other source control tools might do this as a configurable default, TFS does not.
That this is only happening to one developer in the team suggests that the other developers, one at a time, should sit with this developer and drive using "their process" to see if it still occurs for them. More often than not this comes down to a bad process a user has adopted, and putting a different person in the chair can help highlight why it has been occurring and help end it. A disciplined build/source manager and/or developer should not experience this problem.
Very interested in knowing what the problem turns out to be.
To start with some background, I am a member of a small team developing an ASP.NET application. In addition to us, there are 2 other teams working on it, all from different countries. Source code is hosted on a shared SVN server but there is no central testing environment. Each developer runs the app on their own machine and data services are set up per team.
Unfortunately our SVN workflow has some gaps in it: annoyances arise when there is time for an SVN update.
It is mainly because each developer and team has a slightly different environment in terms of disk directory structure and configuration (both IIS and the app itself). Hence conflicts arise in configuration files and elsewhere that in essence are not conflicts at all - in runtime configuration (XML) and in *.suo files.
How should we handle this if our objective is to keep checkout, app setup and update as painless as possible?
One option would obviously be master copies. Another would be establishing uniformity in developer environments and keeping it that way. But what about a third alternative?
One thing to do is to not put the .suo files into SVN, there's no reason to do that.
For IIS configuration there should be no argument - uniform environment across the build team.
For app.config files and the like, I tend to keep them in a separate "cfg" directory in the root of the project and use pre-build events to copy in the relevant ones I need depending on the project and environment I'm working on.
You could have a separate build task to copy user-specific config into your output directory. Add a new directory in your root project called "user.config" or something, and leave it empty. Then configure your project build to check this directory for entries and copy them to the output directory (a sketch of such a copy step is below). This is easy to do, and then each dev can have their own config without affecting the master copies. Just make sure you have an ignore pattern on that folder so you don't commit user-specific configuration. If you have svnadmin access to your source code repo, you could set up a hook to prevent it from ever happening.
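A minimal sketch of that copy step, assuming it is wired up as a pre- or post-build event (the directory name "user.config" is just the convention suggested above, nothing Visual Studio defines):

# copy_user_config.py - copy per-developer config files into the build output.
# Example build event: python copy_user_config.py "$(ProjectDir)" "$(TargetDir)"
import shutil
import sys
from pathlib import Path

project_dir, output_dir = Path(sys.argv[1]), Path(sys.argv[2])
user_cfg = project_dir / "user.config"   # per-developer overrides, svn-ignored

if user_cfg.is_dir():
    for f in user_cfg.glob("*.config"):
        # Overwrite whatever the default build put in the output directory.
        shutil.copy2(f, output_dir / f.name)
        print(f"Copied developer config {f.name} into {output_dir}")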
Also set ignore patterns on your root directory (recursively) for .suo, .user, _ReSharper or any other extensions you think are pertinent. There are some SO questions already on exactly this topic:
Best general SVN Ignore Pattern?
Ignore *.suo and *.user files in SVN - it is easy. After that, create two types of config files in Subversion: Development and Server (add Test as well if you use it). See the example below.
ConnectionStringDevelopment.config
ConnectionStringServer.config
AppSettingsDevelopment.config
AppSettingsServer.config
The Server files would contain the server information. The Development files are not stored in SVN and are ignored there. Every new developer starts by copying the Server files and changing them to match his environment.
Look at the following example site:
http://code.google.com/p/karkas/source/browse/trunk/Karkas.Ornek/WebSite/web.config
The following lines are of interest:
<appSettings configSource="appSettingsDevelopment.config"/>
<connectionStrings configSource="ConnectionStringsDevelopment.config" />
configSource can be used almost everywhere in web.config, so you can vary every piece of config per developer. Just follow the naming convention above and ignore *Development.config in Subversion; this way no developer-specific config will be added to Subversion.
It's not a perfect solution (and should only be used if there are not many of those special files), but what I do is add fake files for each case and switch the real file locally to one of them.
In detail: I have a file foo that creates the problem. I also create foo_1 and foo_2 and then locally switch foo to foo_1 (I use TortoiseSVN, so I can't really give you the command line to do that). Then I am working on foo on my machine, but actually committing to foo_1. Other parties could then switch to foo_2...
(I admit this is basically a variant of the master-file approach you suggested yourself, but if there are not many actual changes to those files it at least reduces the number of conflicts you have to think about.)
I'm using Publish/Web Deploy to deploy an ASP.NET application from Visual Studio 2010. It works perfectly, but there is a problem. If the new release is not working as expected, the old version has already been replaced by the new one and there is no easy way to roll back to the working version. How is this best solved? I wish it were possible to keep the old version on the server so I could just switch back if needed.
With WebDeploy there is no built in rollback feature, so once you've deployed that's it.
There are a number of hand-rolled strategies you could put in place, for example:
Limited Access e.g. Shared Hosting:
Where you don't have full access to the machine -
Back up the live site beforehand by downloading it.
Keep copies of what you deployed so you can push the previous version should something break.
Full Access:
Maintain two sets of folders for the application and map your site to one or the other of them. When you come to deploy, switch the IIS site's physical path to the other folder, then deploy. If the new release fails, just point the site back at the original folder. Each successful deploy alternates between these two folders (see the sketch below).
For stuff like user uploaded content you'd need to map virtual directories to a place on the file system that's always the same place because you don't want to be copying these around each time.
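Here is a rough sketch of that folder-swap idea, assuming IIS 7+ with appcmd available; the site name and folder paths are hypothetical, and you should verify the appcmd syntax against your IIS version before relying on it:

# switch_slot.py - flip an IIS site between two deployment folders.
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"
SITE = "Default Web Site/"                        # hypothetical site name
SLOT_A, SLOT_B = r"C:\Sites\MyApp_A", r"C:\Sites\MyApp_B"

def current_path():
    # Ask IIS which folder the site currently points at.
    out = subprocess.run(
        [APPCMD, "list", "vdir", SITE, "/text:physicalPath"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def flip():
    # The idle folder is the one the site is NOT currently using; deploy the
    # new build there first (e.g. with Web Deploy), then run this to switch.
    idle = SLOT_B if current_path() == SLOT_A else SLOT_A
    subprocess.run(
        [APPCMD, "set", "vdir", SITE, f"/physicalPath:{idle}"],
        check=True,
    )
    print(f"Site now serves from {idle}; point it back at the other folder to roll back.")

if __name__ == "__main__":
    flip()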
You're not the only one who has encountered these issues. Have a look at this article by Rob Conery and his observations about the state of affairs regarding ASP.NET deployment.
ASP.NET Deployment Needs To Be Fixed
Getting Constructive On ASP.NET Deployment
Using some form of Source Control would be another alternative. We use subversion, so if the publish goes bad, we can just update back to the last-good revision, and publish that. Even if you're the only developer, using source control can be very useful.
Can ClickOnce be configured to delete off old published directories?
Or
Has anyone written some code that will delete off these publish directories (maybe keeping the last 10)?
Currently, every time a ClickOnce publish is done, a new directory is created on the IIS server. This new directory contains a copy of the whole application, which is downloaded. The old directories do not seem to be used anymore and are just taking up a lot of space.
Here is a sample of the directory names being created. As you can see the application version number is being used in the name.
EduBenesysNET_1_0_1_0
EduBenesysNET_1_0_1_1
….
EduBenesysNET_1_0_1_192
EduBenesysNET_1_0_1_193
We have had 194 (zero-based) builds, with each directory staying out there. With one build being about 50 MB, you can see how keeping the old directories will start to eat away at the disk space.
The way our application works is you always have to download the latest version. You do not have an option to skip the download so I am hoping that deleting off the old directories should not be a problem.
Good question (+1) - one would think that this should be possible somehow ...
Looking a bit closer, though, reveals that the observed publishing behavior is not actually a feature of the ClickOnce technology, but rather of the Visual Studio Publish Wizard - see for example the section ClickOnce publish folder structure in ClickOnce Publishing Process:
If you manually generate or update a ClickOnce application publication using either Mage or a custom tool, you are not constrained to this folder and file structure. For any particular ClickOnce publication, the chain of dependencies includes the following: [...] [emphasis mine]
The Walkthrough: Manually Deploying a ClickOnce Application yields the same conclusion, i.e. the folder structure in use by VS is simply a (reasonable) convention/approach.
Unfortunately the VS Publish Wizard doesn't seem to offer deleting older versions; at least it is neither visible nor documented anywhere. However, given that the resulting folder structure is just an artifact of the build process, you might as well add a custom build step doing just that - figuring out the details (i.e. accessing the VS automation properties to derive the last published version etc.) is outside the scope of your question though ;)
Regarding your sub question:
I am hoping that deleting off the old directories should not be a problem.
Definitely not a problem, it just depends on how many of these you want to keep for rollback operations eventually, see e.g. Can I delete previous old versions from Publishing Location created by ClickOnce?
The short answer is that this is not something that is built into Visual Studio or ClickOnce deployment, and you will have to find another way to do this, perhaps through a script that you run on your server.
You can delete all of the versions except the current one if you push updates as required updates. If you don't do that, you'll want to keep two versions in case the user reverts back a version.
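For what it's worth, the kind of server-side cleanup script mentioned above could look roughly like this; the publish root is a hypothetical path, the folder prefix matches the directory names from the question, and KEEP is set to 2 so one previous version stays available for rollback:

# cleanup_clickonce.py - delete old ClickOnce publish folders, keep the newest few.
import re
import shutil
from pathlib import Path

PUBLISH_ROOT = Path(r"C:\inetpub\wwwroot\EduBenesysNET")   # hypothetical path
PREFIX = "EduBenesysNET_"
KEEP = 2   # current version plus one for rollback

def version_key(d):
    # "EduBenesysNET_1_0_1_193" -> (1, 0, 1, 193) so the sort is numeric.
    return tuple(int(n) for n in re.findall(r"\d+", d.name[len(PREFIX):]))

dirs = sorted(
    (d for d in PUBLISH_ROOT.iterdir() if d.is_dir() and d.name.startswith(PREFIX)),
    key=version_key,
)
for old in dirs[:-KEEP]:
    print(f"Removing old ClickOnce publish folder {old}")
    shutil.rmtree(old)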
I am working on a large source base (approx. 15K files) decomposed into about 25 projects. I want to keep the source in Perforce (and am evaluating Perforce to that end), but due to complications in the setup it isn't possible for me to keep the Visual Studio projects in source control. I know that in theory the answer is to check the projects in, but that isn't feasible (we would end up with projects for several versions of VS checked in, and additionally several variants of each of these; instead the projects are generated automatically, and this setup works very well).
Is there a way to get VS to check out files for editing as it goes without adding the project to Perforce, to avoid the user having to go to the Perforce client and manually check out each file as they go? Alternatively (and even better), is there a way to get VS to recognize that the files in a project are under source control without having to add the project to source control as well?
I know we could also take the tack of having every user check out for editing all files they might potentially want to edit ahead of time, then revert unmodified files before submitting their changes. Is there a performance penalty in Perforce in taking this approach?
In your case, I'd suggest not using the visual studio integration for Perforce.
You can either add Perforce commands to the Tools Menu, or try Nifty Perforce from Google:
http://code.google.com/p/niftyplugins/
One option is to use Perforce as if you were disconnected from the server and reconcile your changes later, rather than telling Perforce everything you do before you do it. (This is roughly equivalent to the workflow in CVS or Subversion.) You would synchronize your working copy, go off and develop, and then ask Perforce to figure out what you did while it wasn't watching.
Perforce has a nice document describing the process: Working Disconnected From The Perforce Server
One thing the document doesn't mention is the allwrite clientspec attribute, which marks all files in your working directory as writable instead of only the files you have checked out.
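For reference, the reconcile step that document describes can be scripted. This is a rough sketch using the classic p4 diff -se / -sd commands (newer Perforce servers can do the same with a single p4 reconcile); it assumes allwrite is set so modified files are writable:

# reconcile_offline.py - tell Perforce what changed while you worked disconnected.
import subprocess

def p4_pipe(list_cmd, action_cmd):
    # Feed the file list produced by one p4 command into another via "-x -".
    files = subprocess.run(list_cmd, capture_output=True, text=True, check=True).stdout
    if files.strip():
        subprocess.run(action_cmd, input=files, text=True, check=True)

# Open locally modified (but unopened) files for edit.
p4_pipe(["p4", "diff", "-se", "//..."], ["p4", "-x", "-", "edit"])
# Mark files that were deleted locally for delete.
p4_pipe(["p4", "diff", "-sd", "//..."], ["p4", "-x", "-", "delete"])
# Brand-new files still need a separate "p4 add" pass, omitted here.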
For the sake of completeness: there is a new tool for your wish called P4VS. I like it better than P4SCC, which never worked for me the way I wanted.