Why does my Xcode bot trigger twice?

I've been working on using Xcode Server to build my app, and have been running into some snags. The most recent involves Bots running over-zealously. I'll commit and push one change to one file, and two builds get triggered, separated by a minute or two. This also happens if I click the "Integrate Now" button, or if I make changes to the bot with "Integrate immediately" unchecked.
Since my build takes a while to run, this is a pretty big problem, especially when I'm trying to iterate on Bot configuration.
Is anyone aware of what process triggers builds, or how I can troubleshoot this type of failure in general? It seems like there are multiple daemons listening for the signal to trigger the build, or something like that.
Since it may be a bug in the Xcode beta, I submitted a radar (rdar://20456212)

I had the same problem. I changed the bot so that it does not do a clean for each integration, and now it only does one build per commit. My guess is that the clean process and the download of the code were taking so long that the bot was being triggered again before they completed. So now I clean once a day, and I only get a double build on the first build of the day. Hope this helps.

Related

Heroku rebuild and redeploy with scheduler

I would like to have my app rebuilt and redeployed once per day. The reason is that I am using Jekyll and I display 'upcoming events'. I use Liquid to determine which events are 'upcoming', which ones are already in the past, and which ones are too far in the future. But without redeploying, nothing on the site changes, even if I rebuild it. That's why I would like a solution to this, ideally one that can be automated with Heroku Scheduler.

Getting current background transfer tasks returns an empty list on UWP

I use background transfer tasks to download files in my UWP C# application. When the app is closed and reopened, we should normally be able to discover the pending tasks with GetCurrentDownloadsAsync.
Unfortunately, it always returns an empty list, even if the task is not completed.
I did not manage to compile the https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/BackgroundTransfer sample to see whether the behavior is the same.
Does someone have a solution?
"Unfortunately, it always returns an empty list, even if the task is not completed"
Testing on my side, I cannot reproduce your issue. I tested with the official sample: after closing the app with one task still in progress and then reopening it, BackgroundDownloader.GetCurrentDownloadsAsync() correctly returns a count of one.
My testing environment is OS build 15063. Please also try testing the official sample on your side, and make sure you actually close and reopen the app rather than uninstalling it, since when an app is uninstalled any current or persisted Background Transfer operations associated with it are cleaned up. For more details, please refer to this article.
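For reference, the re-attachment logic in the official sample looks roughly like the sketch below. This is only a minimal outline, not a drop-in fix: the helper class name and the progress handler are placeholders, while BackgroundDownloader.GetCurrentDownloadsAsync and DownloadOperation.AttachAsync are the Background Transfer APIs involved.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Windows.Networking.BackgroundTransfer;

public sealed class DownloadReattacher // hypothetical helper class
{
    // Re-attach to any downloads started in a previous app session so that
    // progress reporting resumes after the app is closed and reopened.
    public async Task ReattachAsync()
    {
        IReadOnlyList<DownloadOperation> downloads =
            await BackgroundDownloader.GetCurrentDownloadsAsync();

        foreach (DownloadOperation download in downloads)
        {
            // Placeholder progress callback; hook this up to your UI instead.
            var progress = new Progress<DownloadOperation>(op =>
                System.Diagnostics.Debug.WriteLine(
                    $"{op.Guid}: {op.Progress.BytesReceived} bytes received"));

            // AttachAsync resumes the existing operation instead of starting a new one.
            await download.AttachAsync().AsTask(CancellationToken.None, progress);
        }
    }
}

If GetCurrentDownloadsAsync still returns an empty list with code along these lines, the downloads were most likely never persisted (for example, the app was uninstalled or the operations were cancelled) rather than lost on restart.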
If you still have issues, please upload a minimal reproduced project.

Notify developer that the file got overwritten

We are using Microsoft Team Foundation Server for version control, with multiple developers working on a branch and checking code in and out.
How can developer A be notified via email or SMS that his code got overwritten during a check-in by developer B?
Developer A needs to know this ASAP, because his changes will not work when they are deployed to QA.
We are trying to save time in a fast-paced development environment and to avoid code-overwrite issues.
The easiest way to allow continuous parallel development and prevent a check-in from one person breaking the code of others is to use a CI server. TFS supports this through Team Build.
Though it's preferable to run Team Build on a dedicated build server, it can be installed side by side on your main TFS server, and it's also possible to install the Controller component centrally and use your developers' workstations as agents.
There are two types of build triggers that can help you out here:
Continuous Integration - this triggers a build of all code directly after every checkin. It will tell you quickly that something did not compile. If you are doing unit tests it can even run these and tell you that a test is failing.
Gated - this will force a developer to shelve his code and will only check in the code when the build of the latest version plus the changes in the shelveset succeeds. This may seem even better, as the code in source control will never be in a broken state, but in reality I prefer the CI trigger. The main reason is that gated builds can't happen in parallel (due to their nature) and can actually delay the notification that the code is broken.
You can easily configure email alerts on specific build outcomes through Web Access. You can also configure alerts on source changes, but there is no option to warn only the people who have edited these specific files before.
You can also run the Build Notification tool from the task tray to show a notification in Windows.
Though this will not tell the person whose code has just been overwritten that it's no longer working, it will tell the person overwriting that code that he should pay more attention when checking in ;).
Of course you can configure a team alert that notifies everyone when the build breaks (as that's generally called), and there are funny ways to show the build status through small apps like "Siren of Shame", which provides a build monitor service that can be connected to a USB alarm light that lights up and makes noise whenever someone does something stupid.
If you need to avoid this problem during check-in and merge, then I would recommend disabling multiple check-out. This allows a file to be checked out by only one person at a time and can prevent confusion on team projects.
If you need to catch this further down the line, you can create TFS alerts that are sent to a distribution list whenever any code is checked in, but they will not tell you when a specific developer's contribution has been altered; you only get a list of the files changed in the check-in.
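As an illustration of that missing piece (not something TFS alerts give you out of the box), a small custom tool built on the TFS client object model could look up who has recently edited a given file, so a check-in notification could be targeted at just those people. The sketch below is an assumption-laden example: the collection URL and server path are placeholders, and wiring it up to actual check-in events and email is left out; TfsTeamProjectCollection and VersionControlServer.QueryHistory are the client object model entry points used.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class WhoEditedThisFile // hypothetical helper
{
    static void Main()
    {
        // Placeholder collection URL and server path; replace with your own.
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var versionControl = collection.GetService<VersionControlServer>();

        // Query the last few changesets that touched a single file; the
        // committers returned here are the people worth notifying when a
        // new check-in overwrites changes in the same file.
        var history = versionControl.QueryHistory(
            "$/MyProject/Main/Some/File.cs", // placeholder server path
            VersionSpec.Latest,
            0,                   // deletion id
            RecursionType.None,  // a single file, no recursion
            null,                // changes by any user
            null, null,          // no version range restriction
            10,                  // last 10 changesets
            false,               // individual change details not needed
            false);              // slotMode

        foreach (Changeset changeset in history)
        {
            Console.WriteLine(
                $"{changeset.CreationDate:d} {changeset.Committer}: {changeset.Comment}");
        }
    }
}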

IBM WebSphere Application Server 7 publishing issue

My application is a combination of Spring/Hibernate/JPA. Recently my development environment was migrated to RAD 7 with WAS7. Previously I was using v6 for both RAD and WAS.
The problem is,
when I make a Java change, the server publishes for a long time; sometimes it takes up to 10 minutes for a single line of change to take effect. Even JSP changes alone take a long time to publish!
This was not the case in WAS6. Publishing Java changes was not even a concern there: changes took effect almost immediately, as the publish process finished within a few seconds.
The publishing process keeps running over and over as I make changes to my code, and I have to wait (for long stretches during working hours) until it completes before I can verify/test my changes at runtime. This is horrible!
Is there a way to make WAS7 publish JSP/Java changes in a few seconds, as WAS6 did? Is there a fix pack or refresh pack for this?
Can someone help me with this?
Thanks in advance.
This problem can be overcome by taking control of when publishing happens rather than letting it run automatically: you can wait until you have made all your changes and then publish once.
To do that:
In the Servers view, double-click the server you are working on and, under Publishing, check the option "Never publish automatically".
Also, if you can use the option "Run server with resources within the workspace", that will reduce the time spent copying files from your workspace to the server's space while publishing.

Lab Management build definition restores to a snapshot but the VM is stopped

I'm trying to work out an automated Build-Deploy-Test workflow using Lab Management/VS 2010 and everything is working ok... except for one small thing.
In my build definition, under Process > Environment, I set a snapshot that I want the environment to be reverted to before deploying. This works fine, but when the snapshot is reverted to, the environment is left stopped and not restarted.
The result is that the build waits indefinitely for the workflow capability to become available on the environment.
A temporary fix is to watch and wait for the environment to be stopped, then start it manually, after which everything proceeds as expected... but this is hardly ideal.
This is happening to everyone on my team, and none of us have come across a solution. Has anyone else seen this and solved it?
You'll want to take the "Initial Snapshot" when the VM is actually turned on. You can do that, or you can customize the Lab default build process template and include a StartEnvironment workflow activity after the restore snapshot phase.
My suggestion would be to just take the snapshot when the VMs are turned on though. That's the way I have always ended up doing it.
