I am working with a team and we are trying to restructure our approach to managing our Perforce depot. Our current solution is to maintain a separate "work" folder structure. Each person is limited to their own work folder as well as a discipline folder. We have people coming in and out of the project constantly, so this way no one in art can mistakenly screw something up in programming. Once assets are done (this is a game), they are copied by one of the team leaders into the actual build. This keeps things clean and organized, and the build itself does not get cluttered with people's temporary files/solutions/code/etc. The issue I have with this approach is that we already have a copy of the file in our work structure, so there is no reason to do a deep copy into the game folder. Is there a way to shadow copy the file into the game build from the asset that exists in the personal folder of a user/group? We are using the visual (P4V) client.
On the depot side Perforce does lazy copying, so you only have one copy internally. That is, Perforce uses metadata and internal logic to fetch the file when users browse and sync. Only when someone changes the file does the depot store the additional information about those changes. What this means is that you can branch very large file trees without requiring massive amounts of storage for your depot.
As a side note (and for the sake of completeness), on the client side, when you branch into a new location, Perforce creates a local copy as a convenience to you. The assumption is simply that creating a new branch means you want to work on it right away. If this is not the case, or if you're branching a very large tree that would take up a great deal of storage on your hard drive, you can branch using the -v option (v stands for virtual), as follows:
p4 integ -v //depot/game/... //depot/workspace/...
You can still retrieve the files by syncing to them afterwards.
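For completeness, a sketch of the full round trip; the depot paths are the ones from the example above, and the changelist description is arbitrary:

p4 integ -v //depot/game/... //depot/workspace/...
p4 submit -d "Virtual copy of game assets"
p4 sync //depot/workspace/...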
Our development team is currently working on a project in Visual Studio Professional 2015 that involves forms, etc. As it stands, the master code is stored on an individual's desktop, but a copy can be found on a shared drive as well as on my personal hard drive.
We just encountered a very serious issue. I made changes to my copy on my personal hard drive, and when I arrived at work I was made aware that my change had in fact changed the master code. This will prevent anyone from being able to experiment with the code at all, anywhere.
Has anyone ever experienced this and if so, do you know of any way to make it stop??
From the responses, no one else has heard of this happening or even knew it was a thing, so creating backups of the code should have been sufficient; using a VCS is not the only way. Without a VCS, we are and have been creating backup copies of all the code. The problem is that if I attempt to make an update to test in my copy of the code, it updates all the copies. Never in the history of any coding/application have I seen this.
The copies I am referring to are literal copies: I copy the project, then put it in another folder on my hard drive. It makes no logical sense to me or the team how or why a copy is updating the master, and vice versa.
I guess the only solution at this point is to use a VCS. I would still like to know why and how this is happening, because we were only lucky that it was a simple line of code that was changed and not a massive overhaul!
I have made some changes. I cannot use those changes now. I need to discard them for now and go back to them later when the star alignment is more favorable (e.g. when our Cobol guy has enough time to get to his half of the work).
Short of using Eclipse → Synchronize with team and manually copy pasting the contents to a scratch directory so I can do the merging later, is there any way to "stash" changes for later?
There is no git stash equivalent in Serena Dimensions. The poor man's way is to store your changes temporarily in a different folder, or in a file with a different name that is not included in the source-controlled solution, and switch back and forth as needed.
Another alternative is to use streams in order to keep your changes source-controlled without affecting production code; a typical scenario is to have Integration and Main streams. But it depends on your access level to the Dimensions database you are using, and on your project's needs.
A git repo can be maintained locally to get this and other git functionality on your local computer (or even a small team with shared folders or a git server), since it does not interfere with Dimensions as long as you don't store the git metadata in the Dimensions-managed code and vice versa. This is not a straightforward solution: it requires that you know how to set up a git repo and take care when delivering to the Dimensions server. But it works, and it is really helpful if you are familiar with the git workflow.
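A minimal sketch of that setup, assuming the Dimensions work area lives at ~/work/project (the path, and the name of the metadata folder to ignore, are assumptions; check what your Dimensions client actually creates):

cd ~/work/project
git init
echo ".dm/" >> .gitignore        # keep Dimensions metadata out of git; exact folder name varies
git add . && git commit -m "baseline from Dimensions"
# ...edit files...
git stash                        # set the work-in-progress aside
git stash pop                    # bring it back when the stars align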
Dimensions is not as friendly as git for this kind of usage, but it is far more robust for larger and more controlled projects.
Git and Dimensions work on different methodologies. After checking out a file, Dimensions only allows you to either commit a new version or discard it. As indicated above, one can still use streams or individual branches for development work and merge/deliver the changes at a later point in time, without affecting others' work.
I have read FAR too many posts on SO and I am now in analysis paralysis!
I work with Visual Studio 2010 and I have many small projects, many of which reference library/shared projects.
I don't really mind having to check/re-build dependent projects if I make changes to shared code... I'll be putting TeamCity in place ASAP to assist with this, but for the moment I just amend the code the next time I work on a project. Many projects are "write once and forget", so they'll never need updating.
The team is very small at the moment (ME!) but new devs are expected early this year. It will still be a very small in-house team with fast project cycles, if that makes any difference.
At the moment I have a very flat folder structure on disk, so ALL of my sln files are in a "development" folder on disk. Then there is a folder per VS project. This makes sharing pretty simple, and also leaves me with a single packages folder for nuget.
I am about to import everything into SVN (VisualSVN) and I'd like to start adding things like database scripts, docs, UAT tests, etc. etc.
Do I keep my flat structure and have a single trunk/branches/tags at root level?
Do I expand the structure to an SVN folder-per-solution, and then have trunk/src, trunk/docs, and manage things like NuGet packages with svn:externals?
Do I hybrid this and have an SVN folder-per-solution, but with docs in the VS solution?
NOTE: I am putting this in SVN so I can bring in some Java development but keep source code managed in a single way. We will also share with a DB team, who want to put docs/SQL scripts etc. in there. I intend a separate repository each for DB and Java, but would like a "similar" folder structure for each of them.
NOTE2: I have some SVN user experience, but no admin experience. The new devs have no experience at all (they are coming from an AS/400 background), so the simpler the solution the better! I've looked at repo-per-project and svn:externals, and whilst it is a great solution, it would require me to manage and maintain it all the time (as well as do my own work! lol)
ANY advice from people who have "Been there, done that-GTTS" is very gratefully received.
OK, I now have the following local solution structure:
ALL my sln/suo files are in the same folder.
ALL of my project folders/files are subfolders
This makes sharing projects easy enough... but it looks very messy and it's hard to find anything :(
Should I be using svn:externals to manage "reference" projects, so I can branch/tag them?
Should I only reference built DLLs - and accept all the management that comes with doing that?
Should I let VS2010 manage my folders, and not care that I have lots of "nuget" folders etc.?
VERY VERY confused now...any decent answers? :(
NOTE: Will be adding TeamCity (or something similar) to the mix ASAP to provide CI capabilities. Any serious (and FREE) recommendations for CI also appreciated.
Here is a structure I use at work and for personal projects:
SVN structure:
root
  shared_code
  productA
    trunk
      branch_of_shared_code
      productA projects
      productA solution
    branches
      branch1
        branch_of_shared_code
        productA projects
        productA solution
    tags
      ...
  productB
    ...
Periodically (exactly when depends on your needs) all changes from the main branch of the shared code are merged into each product's branch of it. Changes to the shared code are either made in a product's branch and then merged back, or made in the main branch and then merged out to the products.
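A sketch of that periodic sync using plain svn commands; the repository URL is a placeholder and the paths follow the layout above:

# pull the latest main shared_code into productA's branch of it
cd productA/trunk/branch_of_shared_code
svn merge https://svn.example.com/repo/shared_code .
svn commit -m "Merge shared_code main branch into productA"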
Product sources content:
Everything needed to build the complete package is considered source. E.g. if you have DB scripts, they are part of the sources; tests, too. For documentation I usually add a separate project to the solution which contains all the sources for building the documentation and produces its result in the output directory. A project creating the installer will then include it in the generated package.
Planning:
This may be debatable, but I prefer to store the task list next to the sources and branch/merge them together. If a task is completed in a branch, it's not completed in trunk until the merge. More general planning may or may not be appropriate to store next to the sources.
On disk:
First of all, I believe in working with the repository in such a way that it's OK not to keep working copies of every product, but to check them out on demand. Of course, checking out and deleting a working copy for every change is impractical, so I have a directory for every product I'm frequently working on at this time, and inside it I check out the branches I work on (trunk and some others). The rest of the products need not be checked out if you don't expect to develop them soon.
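In practice this is just checkout-on-demand; the URL is again a placeholder:

# grab only the branch you actually need right now
svn checkout https://svn.example.com/repo/productA/trunk productA-trunk
# when you're done with it for a while, just delete the working copy
rm -rf productA-trunk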
Like many programmers, I'm prone to periodic fits of "inspiration" wherein I will suddenly See The Light and perform major surgery on my code. Typically, this works out well, but there are times when I discover later that — due to lack of sleep/caffeine or simply an imperfect understanding of the problem — I've done something very foolish.
When this happens, the next step is to reverse the damage. Most easily, this means the undo stack in my editor… unless I closed the file at some point. Version control is next, but if I made changes between my most recent commit (I habitually don't commit code which breaks the build) and the moment of inspiration, they are lost. It wasn't in the repository, so the code never existed.
I'd like set up my work environment in such a way that I needn't worry about this, but I've never come up with a completely satisfactory solution. Ideally:
A new, recoverable version would be created every time I save a file.
Those "auto-saved" versions won't clutter the main repository. (The vast majority of them would be completely useless; I hit Ctrl-S several times a minute.)
The "auto-saved" versions must reside locally so that I can browse through them very quickly. A repository with a 3-second turnaround simply won't do when trying to scan quickly through hundreds of revisions.
Options I've considered:
Just commit to the main repository before making a big change, even if the code may be broken. Cons: when "inspired", I generally don't have the presence of mind for this; breaks the build.
A locally-hosted Subversion repository with auto-versioning enabled, mounted as a "Web Folder". Cons: doesn't play well with working copies of other repositories; mounting proper WebDAV folders in Windows is painful at best.
As with the previous method, but using a branch in the main repository instead and merging to trunk whenever I would normally manually commit. Cons: not all hosted repositories can have auto-versioning enabled; doesn't meet points 2 and 3 above; can't safely reverse-merge from trunk to branch.
Switch to a DVCS and "combine" all my little commits when pushing (see the sketch after this list). Cons: I don't know the first thing about DVCSes; sometimes Subversion is the only tool available; I don't know how to meet point 1 above.
Store working copy on a versioned file system. Cons: do these exist for Windows? If so, Google has failed to show me the way.
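For reference, the "combine my little commits" step of the DVCS option looks roughly like this in git (branch names and messages are just examples):

git checkout -b inspiration        # scratch branch for the risky work
git commit -am "wild idea, step 1"
git commit -am "wild idea, step 2"
git checkout master
git merge --squash inspiration     # stage all the little commits as one change
git commit -m "one clean, build-safe commit"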
Does anyone know of a tool or combination of tools that will let me get what I want? Or have I set myself up with contradictory requirements? (Which I rather strongly suspect.)
Update: After more closely examining the tools I already use (sigh), it turns out that my text editor has a very nice multi-backup feature which meets my needs almost perfectly. It not only has an option for storing all backups in a "hidden" folder (which can then be added to global ignores for VCSes), but allows browsing and even diffing against backups right in the editor.
Problem solved. Thanks for the advice, folks!
Distributed Version Control. (mercurial, git, etc...)
The gist of the story is that there are no checkouts, only clones of a repository.
Your commits are visible only to you until you push them back into the main branch.
Want to make a radical, experimental change? Clone the repository and do tons of commits on your computer. If it works out, push it back; if not, just roll back or trash the repo.
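A rough sketch of that cycle with git (the repository URL is a placeholder):

git clone https://example.com/project.git experiment
cd experiment
# hack away; commit as often as you like, it's all local
git commit -am "radical change, part 1"
git commit -am "radical change, part 2"
git push origin master             # worked out? share it
cd .. && rm -rf experiment         # didn't? just trash the clone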
Most editors store the last version of your file before the save to a backup file. You could customize that process to append a revision number instead of the normal tilde. You'd then have a copy of the file every time you saved. If that would eat up too much disk space, you could opt for creating diffs for each change and customizing your editor to sequentially apply patches until you get to the revision you want.
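A crude sketch of the numbered-backup idea as a shell script an editor could call on every save (the script name and the backups/ directory are made up):

#!/bin/sh
# save-backup.sh <file>: copy <file> to backups/<file>.N, with N increasing per save
f="$1"
mkdir -p backups
n=$(ls "backups/$f".* 2>/dev/null | wc -l)
cp "$f" "backups/$f.$((n + 1))"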
If you use Windows Vista, 7, or Windows Server 2003 or newer, you could use Shadow Copy. Basically, the Properties window for your files will have a new 'Previous Versions' tab that keeps track of previous versions of the file.
The service should generate snapshots automatically, but just to be safe you can run the following command right after your moment of "inspiration" (note that it snapshots a whole volume, not a single folder):
vssadmin create shadow /for=C:
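You can confirm the snapshot was created with:

vssadmin list shadows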
It has definitely saved my ass quite a few times.
I think it is time to switch editors. Emacs has a variable, version-control, which determines whether Emacs will automatically create multiple backups of a file when saving it, naming them foo.~1~, foo.~2~, etc. Additional variables determine how many backup copies to keep.
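Those variables go in your init file; a minimal example with arbitrary values, written as a shell snippet that appends the Elisp:

cat >> ~/.emacs <<'EOF'
(setq version-control t        ; always make numbered backups
      kept-new-versions 6      ; keep the 6 newest backups
      kept-old-versions 2      ; and the 2 oldest
      delete-old-versions t)   ; silently delete the ones in between
EOF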
I hope this qualifies as programming related since it involves how to structure a project.
Because I've always used the web site model with VS.net I never had solution and project files and putting everything into source control worked great. I knew that everything I had in my web site directory was all I needed for the web site.
Now I'm using ASP.NET MVC and it only has a project model, so now I have these solution and project files. If I work on it alone it's fine, but once other people start to add/delete files from the project, our solution file gets messed up and people end up having to grab the latest solution file, see what was changed, add back/remove their files, and check in the solution file again. It's become something of a problem: sometimes people don't realize the solution file was changed, they make other changes, and when they check everything in and other people do an update, those people find that their files are gone from the project (although still physically on disk).
Is this normal? Is there a way to structure a project so that we don't need to check in solution and project files?
Your developers are not using TFS correctly. You should have multiple check-outs turned on, and everyone needs to be careful to merge their changes correctly when checking in. TFS will prompt you to do this, and accepting the defaults is nearly always the right thing to do.
It's not uncommon to have one or two developers who never get it, and you might have to help them now and then. But every programmer who works on a team needs to learn how to use source control tools correctly. If they can't manage that, they shouldn't be writing software.
[edit] It occurs to me that you might run into these problems if you check in the *.sln file directly, rather than choosing to "Add Solution to Source Control".
I don't think it's normal - what are you using for source control? It sounds like developers aren't respecting changes that others are making - checking in without merging first.
I know that early on in a project, when lots of files are being added & deleted, it can be a problem to keep up - you need to check out the project file, add your files, then check in the new file & project so other developers can also update it. You'll probably have multiple project files in a solution - perhaps one interim solution would be to have one "holding" project for each developer, then clean them up periodically - though these types of temporary fixes do have a tendency to become permanent.
I don't know of a way to set up a project file that's not in source control, though I suppose you could create a script that would generate them.
Having been through this, the key is respect & good communication between the developers.
This tends to happen with TFS multiple check outs. It can be hard to grasp coming from VSS to TFS as VSS allowed one person to check a file out at one time. Auto-merge should work most of the time for you but a couple of rules should ease the pain:
Check in early and often (if you add, remove, or rename a file, check it in straight away, even if it is a blank placeholder)
Before you check in, do a get-latest; this will ask you to resolve conflicts locally (see the example commands below)
Try to get continuous integration set up so that developers always know the state of the build and whether it is OK to check in/out.
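Assuming TFS, the first two rules map onto the tf command line roughly like this (the file name and comment are examples):

tf get . /recursive                              # get latest and resolve conflicts locally
tf add NewForm.cs                                # pend the new placeholder file
tf checkin /comment:"Add NewForm placeholder"    # check in early and often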
We had a bit of pain at the start of our current project, but it soon settled down once we followed the rules above.
Personally, I think making changes to project and solution files requires discipline and clear (well understood) rules throughout your development team. These files (.sln, .*proj) are the bottlenecks of your project, and any errors or inconsistencies can cost you in team downtime. Changes need to be well thought out, planned and then executed.
They must be secured by source control (which you're already using, excellent) and your team members should work on the basis of only making the changes they need, and not leaving project or solution files checked out for an extended period.
If you are allowing multiple (shared) checkouts, this could become problematic in terms of overwriting another user's changes. Depending on your source control mechanism, people may be required to manually merge changes. Personally, I'd ask people to negotiate their project/solution changes with each other over merging (this can't always be achieved).
A third option if you are using TFS is the shelve feature. If someone needs to make changes locally, they can shelve the changes and merge later.
Lastly, another strategy is to try to architect your solution to be as modularized as possible - so people are distributed, working on separate projects and do not (ideally) have to overlap on too many common areas.
I'm not sure if you are using TFS, as people have mentioned, but if you are (or if you are using source control with similar capabilities) you can set it such that sln and csproj files are exclusive lockouts and are not able to be merged.
We have done this with quite large teams and while it causes some initial issues as people get used to it in the long run it has resolved many issues that were previously causing problems. Essentially you trade longer term merge issues/complexity for short term compile/checkin issues which we have found to be a good trade off.
Once you have set it to forced exclusive checkout and no merge, you then get your dev teams used to the fact that they should keep locks on the sln and proj files for as short a time as possible.
Always check them in.
Always check out latest (merge if possible), make sure your change is there, before checking in a new version.
If your source control doesn't require a special action to check in from an old version, GET A DIFFERENT SOURCE CONTROL.