I have a few objects that need to be Committed to GXServer but for some reason I am getting a failure. The message is something like:
'Commit failed: At least Environment 'Java Environment' was modified in GeneXus Server since your last update. Please Update Knowledge Base and retry.'
When I go to the Update Tab in GXS for this KB it is empty - I am in sync. It seems like I am stuck - the Commit side says Update but the Update side says I am in sync.
I have been able to Commit all of the objects except the one listed in this error. In my case, I am sure I have the latest version of the object, so I need to know how to force this object to GXS.
I wanted to share with the Community the answer I received from Support as it solved the problem. If you run into this and have questions you can ask Support and they can give you more official details. This is my take on the situation.
If you know the object in the error should be Committed, you can make a change to a file in your running GX version to 'force' the Commit. Here are the steps I followed:
You should be able to Commit all objects except the one in question. In my opinion it is best to commit all the other objects normally first, so that only the object in error remains.
Close GX
In Windows Explorer find the GeneXus installation folder (something like C:\Program Files (x86)\GeneXus\GeneXus16U5)
Open the file GeneXus.exe.config for editing (you may need 'Run as Administrator')
Search for </appSettings>
Above this line, add a line with this information (see the example after these steps)
<add key="ForceCommit" value="true"/>
Save your changes
Open KB and Commit the object in question
Once the object is committed, close GX and remove the line from the config file so you are not forcing future objects up.
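For reference, while the force is in effect, the relevant section of GeneXus.exe.config should look something like this (the surrounding keys will vary by installation):
<appSettings>
  <!-- ...existing keys... -->
  <add key="ForceCommit" value="true"/>
</appSettings>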
This force process should NOT be used unless you are in this situation, and it should not be a normal occurrence.
Also, make sure you change the correct file. One time I edited a file with a similar name by mistake (even the contents looked similar), and it did not work.
The last time I performed this was in GX 16 U5. I do not know what the original issue was that caused the conflict, but this was the way I was given to force the Commit.
I am using Git for Windows (version 2.15, but the same issue occurs in 2.14 and I think older versions as well) and I noticed a rather annoying behavior: when I perform some basic git operations, the modification date of the .git/objects/pack/pack-*.pack file changes. The file itself remains unchanged, but the last-modification date field gets updated, which causes my backup software to think the file was changed and needs to be added to my differential backup. Because my .pack files are rather large, this increases the size of my daily backups significantly. Is there a way to prevent this behavior? That is, to keep the pack file completely unchanged, including its metadata, until I perform a git gc or git repack?
Unfortunately, I wasn't able to pinpoint which operation causes this behavior. When it happened today, I had only used git status, git log, git add, git mv and git commit and nothing else, and the date/time got changed; but when I tried to replicate the behavior on yesterday's backup, the date change didn't occur. I guess next time I will run Process Monitor and watch accesses to the file, but in the meanwhile, does anyone have an idea of what might be causing this problem? Thanks.
Instead of referencing your Git repo itself for your backup program to process (with the date issue), you could have:
a task which does a git bundle of your repo (that generates only one file)
your backup program would back up only that one file.
That way, you bypass entirely the modification date issue for those pack files.
You can either save and keep only one copy of a full bundle of the repo.
Or make incremental bundles.
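For illustration, a minimal sketch of that approach (paths, tag and branch names are hypothetical):
# full bundle of every ref in the repository
git bundle create /backups/myrepo.bundle --all
# incremental bundle: only commits made since a known point, e.g. a tag
git bundle create /backups/myrepo-incr.bundle lastbackup..master
# restoring is just a clone from the bundle file
git clone /backups/myrepo.bundle restored-repo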
In the end it turns out that Edward Thomson's answer explains why no "real" solution is possible. However, to meet my needs, I wrote a simple Windows command-line application which scans through a tree of directories, locates possible Git repositories, locates their packfiles, and changes the date/time of each .pack file to that of the respective .idx file. So far it seems to run OK; I have not encountered any garbage-collection issues yet, anyway. I have not released the tool yet, because I rather suspect no one else cares, but if someone is interested, I can upload it somewhere.
Apparently, someone is interested. So the program is released as of now. Not on GitHub, but still as open source, under the 3-clause BSD license. Download the binaries here: https://www.pepak.net/files/git/gitpacksync-0.01.zip
and the source code here: https://www.pepak.net/files/git/gitpacksync-0.01-source.zip
If you try to disable this, you are likely to see subtle bugs where objects that are still in use disappear from your repository.
You had trouble pinpointing the exact operation because every operation that adds files will do it.
This is very much intentional - Git refreshes the timestamps of objects in the database (updating the timestamp on either loose objects or packfiles) to know when an object was last written. Whenever you create a new commit, it will update the timestamp on all the files that contain objects that were referenced.
This is important as it helps the tools that remove data (like prune) avoid race conditions: an object may be dereferenced and then re-referenced. Prune also looks at the timestamp, so by touching the file, Git ensures that recently written objects are not eligible for garbage collection.
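As a hedged illustration of that grace period (standard Git commands, not something prescribed by this answer):
# gc only prunes unreachable objects older than this window; 2.weeks.ago is the default
git config gc.pruneExpire "2.weeks.ago"
# list what prune would delete right now, without deleting anything
git prune --dry-run --expire=now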
I have created a database project in Visual Studio 2013 from an existing database. Then I made a lot of changes in the database project: modified stored procedures, the post-deployment script, table structures, etc. Now the database project is ready to deploy. But I am worried: if any script fails, how can I get back to the original state, even though the project builds properly?
Please suggest how, if any query fails, I can ROLLBACK all the changes that I have made in the database project.
Firstly you need to trust your tools and either believe they will work or find other tools.
While you are building that trust, I would add a backup step to the pre-deployment script, or run a backup before you deploy; then, if anything goes wrong, you can restore and figure out what went wrong.
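A minimal sketch of such a pre-deployment backup, assuming hypothetical server, database, and path names:
sqlcmd -S MyServer -Q "BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb_predeploy.bak' WITH INIT"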
As David said, to roll back you would get the previously deployed dacpac and generate a new deployment script from it; but fixing forward is almost always the correct thing to do rather than rolling back to a previous version.
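For reference, a hedged sketch of generating a deployment script from a previously deployed dacpac with SqlPackage (file, server, and database names are hypothetical):
SqlPackage.exe /Action:Script /SourceFile:"Previous.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MyDb" /OutputPath:"Rollback.sql"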
Have you been checking your changes into version control? If so, all you need to do is to revert back to the last known good version.
Or... simply work out why it's failing now and fix the root cause?
I used DB projects some time ago, and as far as I remember the deploy script was wrapped in a transaction. It is possible to generate the SQL script without executing it; that setting is somewhere in the DB project settings. You can take a look inside that script and make sure that it will roll back on error.
Doing a backup would still be a recommended practice especially when you deploy to production.
When working on important scripts, I developed a habit of always starting a transaction and commenting out the commit.
If you accidentally run the script, it won't take effect. The commented-out commit only comes back in when the work is done.
While this answer indicates that you CAN revert in source control (assuming SSDT at this point), it would be nice to get a pointer to the exact process for doing so. On a file-by-file basis the history works the same, but how to revert the entire database at once isn't immediately obvious.
Actually, I have faced this issue many times while working with SVN. Most of the time I work with VSS for source control, but for the last couple of months I have been working with SVN.
We are using tortoise and AnkhSVN with VS 2010.
Our team has 5-6 people, and some of them work on the same file at the same time. When somebody commits, we have seen other developers' changes vanish, and sometimes we get lines marked with version numbers. This consumes a lot of time, and we have to resolve the conflicts.
Please provide information so we can avoid such issues.
If two developers are working on the same file and make changes to the same area of code, then you have to manually resolve this conflict. There is no way to avoid it, no matter which version control you use.
The version control cannot know what the correct code is, so it requires a human intervention.
There is no way around this, other than preventing users from working on the same code. This is done in SVN by locking the file.
Each developer must svn update before svn commit. Between the update and commit, the developer must do a full, clean build and run all tests to make sure their code still works after merging all the other developers' changes into their copy.
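In command-line terms, that workflow is simply:
svn update
# resolve any conflicts, do a full clean build, and run all tests
svn commit -m "Describe your change"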
You can set svn:needs-lock on files or folders that must be locked before making changes; users are then forced to check for locks. When you try to edit a file, you will be required to lock it first, and when it is already locked by someone else, you get an error message preventing you from making any changes. This can be done in Tortoise SVN under Properties -> Advanced.
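A sketch of the same thing from the command line (the file name is hypothetical; the property value just needs to be non-empty):
# mark the file so it is checked out read-only until locked
svn propset svn:needs-lock yes SomeFile.dat
svn commit -m "Require a lock before editing SomeFile.dat"
# take the lock before editing; this fails if someone else holds it
svn lock SomeFile.dat -m "editing"
# the lock is released when you commit, or explicitly with:
svn unlock SomeFile.dat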
I have been using Xcode with Subversion for some time now; it caused no problems while I was the only developer (I used only two commands: commit and add).
But now I have to share the code with another developer (who has never used any kind of version control), and integrating/merging the code has become a nightmare. No problems occur when we integrate/merge .h/.m files, but as soon as it comes to .nib, .xcodeproj and .xcdatamodeld files, we really don't know what to do.
Whenever we try to merge .xcodeproj, the project gets corrupted, and merging .xcdatamodeld was kind of impossible for us.
So I was wondering if someone could share his/her experience of how to effectively use Subversion/Git/Mercurial with Xcode 4.0 in a multiuser environment, or share a link that explains how to use Subversion effectively in a multiuser environment.
Thanks.
Are you doing this using Subversion? For 90% to 99% of the files in your repository, the standard Subversion workflow of checkout, edit, commit works well. However, some types of files, such as JPEGs and GIFs, simply don't merge well. In this case, you'll have to do it the way we used to in the old SCCS and RCS days: before you can edit and commit a file, you must lock it.
Locking a file prevents others from editing the same file and committing changes while you're doing your work on the file. It's crude, but it works. In Subversion, you can always lock any file you're editing, but if the file has the property svn:needs-lock on it, it will be checked out as read-only. You have to lock the file before editing it to make it writable, and you're not allowed to commit the file unless it is locked.
So, for those files, set the svn:needs-lock property on it.
You can automatically set this property on all newly added files (depending upon suffix) by setting the auto-properties in your Subversion client configuration.
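For example, in the Subversion client configuration (~/.subversion/config on Unix-like systems); a sketch, with the exact file patterns up to you:
[miscellany]
enable-auto-props = yes

[auto-props]
*.nib = svn:needs-lock=*
*.pbxproj = svn:needs-lock=*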
And, if you really, really want to make sure that all .nibs and .xcodeproj files and all of the other files of these types have svn:needs-lock set on them, you can use my pre-commit hook which will prevent these files from being committed unless this property is set.
There is no failsafe way to merge these kinds of files that I am aware of. So you will have to try to ensure that only one person is changing these files at a time. That won't always work, so log what you changed in the file in the commit message. Then, if there is a conflict, you can resolve it manually by taking the version that changed more of the file and redoing by hand what the other person did.
That's normally not a big deal, like adding a new source file to an .xcodeproj, or changing the alignment of an element in a .nib. It becomes a problem if your project is huge or your nib contains the whole interface. For this to work well (which in practice it does), you need to split up your projects into sub-projects if they grow too large.
I had the same problem with two other developers using Xcode with Git. Unfortunately, the Xcode project file is an XML file that tracks the files included in the project as well as settings. I'm not certain, but I think .nib files are XML files as well; someone can correct me on that.
Git did a great job at merging the Xcode project file, and we never really had any problems with our *.nib files either. The only time we did have a problem was when we both added/removed files with the same names, or when someone did a lot of heavy removing and adding of files.
The only way we solved this was to have each other push and pull as soon as we added/removed files. That way the other person had the latest files and didn't add them in their own repository and then pull the latest commit which had the same file in it, or end up adding changes to a file that had been removed or renamed.
That is the best solution we found: as soon as we added or removed a file, have everyone else on the team pull. Not a great solution, btw. However, you should be committing often anyway.
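Not from the answers above, but a commonly used mitigation worth naming: you can ask Git to union-merge the project file, which keeps added lines from both sides instead of conflicting (it can still produce duplicates, so review the result):
# .gitattributes at the repository root
*.pbxproj merge=union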
I'm using Visual Studio 2010 Pro against Team Server 2010 and I had my project opened (apparently) as a solution from the repo, but I should've opened it as "web site". I found this out during compile, so I went to shelve my new changes and deleted the project from my local disk, then opened the project again from source (this time as web site) and now I can't unshelve my files.
Is there any way to work around this? Did I blow something up? Do I need to do maintenance at the server?
I found this question on SO #2332685 but I don't know what cache files he's talking about (I'm on XP :\ )
EDIT: Found this link after posting the question, sorry for the delay in researching, still didn't fix my problem
Of course I can't find an error code for TF203015 anywhere, so no resolution either (hence my inclusion of the number in the title, yeah?)
EDIT:
I should probably mention that these files were never checked in in the first place. Does that matter? Can you shelve an item that was never checked in? Is that what I did wrong?
EDIT:
WHAP - FOUND IT!!! Use "Undo" on the items that don't exist because they show up in pending changes as checkins.
I had deleted the files while trying to reload the workspace, even though I had shelved the changes. Then VS2010 thought those files were still pending to be saved. I didn't need that, so I had to figure out how to "undo" the changes in Pending Changes.
Then I could unshelve.
It thought I had two ops (unshelve, commit-for-add) going simultaneously, and I thought I had only one op (unshelve).
This is a slight aside to the OP's question
You can get a TF203015 when you try to batch-merge multiple changesets from one branch to the other without due care.
Consider a situation where you have a MAIN trunk and a DEV branch. You branched DEV from MAIN and have diligently worked away at a feature in DEV; checking work back into DEV as you progressed. Now fast forward a week or two. You are now feature complete and want to merge back into MAIN.
This is where one of our devs hit this error.
He had been working on one solution for weeks, checking changesets back into DEV periodically, so he wanted to merge a non-contiguous series of changesets back into MAIN.
So he picked the merge option, selected the first changeset, and merged without issue; then he immediately went to merge the next changeset, and bang: TF203015, and its very unhelpful text in the output window: incompatible pending changes.
After a little fiddling around, we now realize what is going on here: the first merge created a pending change in MAIN for the developer's solution. The next merge attempt also contained changes to the same solution, which would require TFS to "queue up" a second set of pending changes to the same files. It can't do this.
So in this scenario TF203015 means; "The destination branch already has pending changes on some files that are changed in this changeset. Please resolve and commit the destination branch changes before performing this merge operation"
The solution: after each merge operation, our developer tests the workspace for MAIN and commits the pending change caused by the merge, then goes back to DEV and repeats.
Actually sensible and simple, but masked by a very obtuse error message.
You can use the Team Foundation Server Power Tools March 2011 (http://msdn.microsoft.com/en-us/vstudio/bb980963.aspx) that includes the command tfpt unshelve.
Once the Power Tools are installed, open a Visual Studio command prompt, change to the directory that contains the project of interest, and execute the tfpt unshelve command. It will unshelve and display the merge dialog so you can resolve the conflicts.
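A hedged sketch of that sequence (the local path and shelveset name are hypothetical):
cd C:\src\MyWebSite
tfpt unshelve "MyShelvedChanges"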
I credit this blog post with helping me find this solution: http://fluentbytes.com/the-how-and-why-behind-tf203015-file-has-an-incompatible-change-while-unshelving-a-shelve-set
I had what appeared to be the same issue but I had created a branch after shelving my changes and I wanted to unshelve those changes to the new branch.
TFS cannot unshelve to a different path than the path upon which the shelf was created.
Solution: I unshelved back to the original branch, then used Beyond Compare to merge the changes from my original branch into the new branch, and checked in.
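For what it's worth, the tfpt unshelve command mentioned in another answer also has a /migrate option intended for exactly this branch-to-branch case (server paths and shelveset name are hypothetical):
tfpt unshelve "MyShelvedChanges" /migrate /source:$/Proj/OriginalBranch /target:$/Proj/NewBranch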
It could also be that after you create a folder in, say, a Test branch, and you want to merge from Dev to Test, you do not have that newly created folder structure checked into TFS; you will/can also get this error message then.
Thus, for others coming from Google and finding this page: this error message CAN occur without having anything to do with shelvesets.
This might be the same as jcolebrand's answer, but I'm afraid I found the phrasing there a bit abstruse. Sincere apologies if I'm just repeating.
In my scenario, the incompatible pending change message was presented because I was trying to roll back multiple changesets, and the same file was affected by more than one of those changesets.
In my case I did not want to commit until all the changes had been rolled back. I believe if I had been able to commit after rolling back each changeset, the error would not have happened.
The method which worked for me was as follows:
I opted to roll back one changeset at a time. I found using the command line was actually a more informative way of doing this because it lists all the conflicts, whereas I think the VS UI rollback just lists the first.
While rolling back a changeset, if there was an incompatible pending change, I had to undo my workspace's pending changes for the affected files.
When all the changesets had been rolled back, I had to manually revert the files which had experienced incompatible pending change. Mostly this could be achieved simply by getting a specific version of the file (the "last-known-good" version before all the bad checkins started). But for some files where there had been both desired changes and undesired changes, I got the "last-known-good" and manually applied the good changes to it.
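A hedged sketch of those steps with the tf command line (changeset numbers and server paths are hypothetical):
# roll back one changeset at a time; the CLI lists all conflicts
tf rollback /changeset:12345 /recursive $/MyProject/Main
# if an incompatible pending change blocks the rollback, undo it
tf undo $/MyProject/Main/Affected.cs
# afterwards, fetch the last-known-good version of a file explicitly
tf get /version:C12300 $/MyProject/Main/Affected.cs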
This link resolved my issue:
https://blogs.infosupport.com/the-how-and-why-behind-tf203015-lt-file-gt-has-an-incompatible-change-while-unshelving-a-shelve-set/
The reason was that a pending change in the same workspace created an incompatible change. So undo the pending changes and try to unshelve again. This should resolve the issue.
If you have two branches, MAIN (target) and DEV (source), and you want to merge DEV into MAIN, then none of the files you want to merge from the source may be older than the corresponding files in the target branch.
For example: you have a changed file test.cs in your DEV branch, changed on 14.03.2016. In your MAIN branch you have test.cs changed on 15.03.2016. So the target is newer than the source file, and you get TF203015.
Solution: navigate in TFS Explorer to the conflicting file and merge it explicitly. TFS will open the conflict manager and you can merge the conflicts by hand. Afterwards you can merge the selected changeset.
Remarks: if you have more conflicts, you must navigate to each conflicting file and merge it explicitly, so that TFS opens the conflict manager and you can merge each one by hand.