Refresh vars for all projects using a Variable Set in Octopus

We have a variable set in Octopus which is used by a large number of projects. Among other things, this variable set contains an api key which we need to update.
Since we use a lot of triggered deploys with immutable infrastructure, we have a lot of "redeploys" of existing projects, so we can't simply wait for new releases to pick up the new variable values.
Is there any way I can trigger a "variable update" on all "currently deployed" releases in all projects that use a certain variable set? I'm dreading the thought of having to spend hours clicking around in Octopus to get stuff updated, that's nearly an impossible task given our number of projects.

You can do this by writing a script that composes the underlying API requests into one action.
You can find some documentation on the API at https://demo.octopus.com/swaggerui/index.html
The Dashboard endpoint will give you the current release that is deployed in each environment for each project.
It may be easier to keep your own list of the projects you want to refresh, but you can also query the projects endpoint for project details, including which variable sets are included.
From there you can use the releases/{id}/snapshot-variables endpoint to refresh the variables for that release.
There are examples of using the API at https://github.com/OctopusDeploy/OctopusDeploy-Api
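To make that concrete, here is a rough sketch in C# of what that composition could look like. Treat it as illustrative only: the Items/ReleaseId property names on the dashboard response and the use of an empty POST to snapshot-variables are assumptions from memory, so verify them against the Swagger documentation before pointing it at a real server.

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class RefreshSnapshotVariables
{
    static async Task Main()
    {
        var server = "https://your-octopus-server";   // assumption: your Octopus server URL
        var apiKey = "API-XXXXXXXX";                   // assumption: an API key with sufficient permissions

        using var client = new HttpClient { BaseAddress = new Uri(server) };
        client.DefaultRequestHeaders.Add("X-Octopus-ApiKey", apiKey);

        // The dashboard lists the currently deployed release for each project/environment pair.
        using var dashboard = JsonDocument.Parse(await client.GetStringAsync("/api/dashboard"));

        foreach (var item in dashboard.RootElement.GetProperty("Items").EnumerateArray())
        {
            var releaseId = item.GetProperty("ReleaseId").GetString();

            // Ask Octopus to refresh the variable snapshot for that release.
            // In practice you would de-duplicate release ids and filter to projects
            // that actually include the variable set.
            var response = await client.PostAsync($"/api/releases/{releaseId}/snapshot-variables", null);
            Console.WriteLine($"{releaseId}: {(int)response.StatusCode}");
        }
    }
}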

Related

How to reduce staging tasks with CI in Kentico 13

I'm looking for advice on how to deal with Kentico's staging tasks as they relate to Kentico 13's continuous integration development model.
Here's our challenge:
Each developer has their own Kentico database, developing in Visual Studio, with Git source control through Azure DevOps and the CI switch turned on in Kentico.
As the developer makes a change to a Kentico object, for example, adds a new property to a custom page type, the CI process in Kentico serializes the page type onto the file system for that developer. They create a pull request and the new XML file that represents the serialized page type is now in source control... along with basically every other Kentico object.
When the DevOps release process kicks in, our shared build server is updated through the CIRestore process with the new page type property. All good - everything working as expected at this point.
However, at some stage we need to get this new page type property from the shared build server into testing, and later production. Traditionally we'd use Kentico Staging to do this. The problem we're facing is that during the CIRestore process on our build server every single Kentico object is updated regardless of whether an actual change was made... and in that list of hundreds and hundreds of items in the Staging task list is our updated page type with the new property.
The issue is that we have no way of identifying what's actually changed and subsequently what needs to be staged from our build server through to the test instance of Kentico. We don't want to stage everything as there are hundreds and hundreds of items.
We've reviewed the repository.config file and have made some changes to exclude many object types. And we initially thought that we could use this approach to just include the page types (and other objects) that we want to monitor in the CI process, however this config works in an exclude manner rather than an inclusive manner. So we'd have to add an entry to exclude every object by name which seems a bit error-prone and redundant.
I'm hoping someone's been through this pain and I'm looking for suggestions on how we might handle this challenge.
Check out this thread on devnet. You can actually write a global event handler to tell the system to not generate tasks in the staging module based on certain conditions.
You could also try excluding all objects and then using the IncludedObjectTypes as a whitelist for just the ones that you want. Check out this documentation.
In general, CI does take some time to set up and get right, in our experience. This Kentico CI cheat sheet can be helpful as well.
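For the global event handler approach mentioned above, the general shape is a custom module that cancels staging task logging when a condition matches. Treat the following C# as a heavily hedged sketch: the names used here (StagingEvents.LogTask, the Task property, TaskObjectType) are from memory, so check the devnet thread and the Kentico API reference for your exact version before relying on it.

using CMS;
using CMS.DataEngine;
using CMS.Synchronization;

[assembly: RegisterModule(typeof(SuppressStagingTasksModule))]

public class SuppressStagingTasksModule : Module
{
    public SuppressStagingTasksModule() : base("SuppressStagingTasks")
    {
    }

    protected override void OnInit()
    {
        base.OnInit();

        // Assumption: LogTask.Before fires before a staging task is recorded and can be cancelled.
        StagingEvents.LogTask.Before += (sender, e) =>
        {
            // Example condition only: skip tasks for object types you never want to stage.
            if (e.Task != null && e.Task.TaskObjectType == "cms.settingskey")
            {
                e.Cancel();
            }
        };
    }
}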

How do I change an attribute for a policy group without having to re-compile all of the policyfiles?

I am trying to shift from environment files in Chef to using Policyfiles. https://docs.chef.io/policy.html. I really like the concept, especially since you can include one policy in another, but I am trying to understand how to do a simple attribute change.
For instance, if I want to change a globally-used attribute that may be an error message for a problem that is happening now. ("The system will be down for 10 minutes. Thanks for your patience"). Or perhaps I want to turn off some AB testing with an attribute working as a flag. From what I can tell, the only way I can do this is to change an attribute in the policyfile, and then I need to create a new version of the policy file.
And if the policyfile is included in many other policyfiles, as in the case of a base policyfile, then I have a lot of work to do for a simple change.
default['production']['maintenance_message'] = 'We will be down for the next 15 minutes!'
default['production']['start_new_feature'] = true
How do I make a simple change to an attribute that affects an entire policy group? Is there a simple way to change an attribute, or do I have to move all my environment properties to a data bag?
OK, I used Chef Support and got an answer: Nope.
This is their response:
"You've called out one of the main reasons we recommend that people use something like Jenkins pipelines to deliver cookbooks to their infrastructure. All that work can be kicked off by a build system recognizing a change in a dependency and initiating new builds for all the downstream consumer jobs.
For what it's worth, I don't really like putting application configurations like that maintenance message example you used in configuration management, I think something like Habitat is a better system for that kind of rapid-change configuration delivery, although you could also go down the route of storing application configuration like that in a system like Hashicorp Vault, Consul, or etcd, and ensuring that whatever apps need to ingest those changes are able to do so without configuration management fighting with the key-value config store.
If that was just an example to illustrate things, ignore the previous comment and refer only to my recommendation to use pipelines to deliver cookbooks, attributes, etc to your infrastructure (and I would generally recommend against data bags these days, but that's mostly a preference thing)."

SonarQube 6.7.5 - Api to retrieve all rules applied to a project

I need to compile every rule applied to each of SonarQube's projects.
I could not find a direct way to do this so I am doing the following:
Retrieve the projects. We have over 1000 projects. (Don't ask me why ¯\_(ツ)_/¯)
http://host/api/components/search?qualifiers=TRK
Then I retrieve the languages because if I try to retrieve the Quality Profiles directly without setting the language, I get every profile, even for different languages.
http://host/api/measures/component?componentKey=compKey&metricKeys=ncloc_language_distribution
After that, I retrieve the Quality profiles
http://host/api/qualityprofiles/search?project=projectKey&language=lang
Finally, I retrieve the rules
http://host/api/rules/search?activation=true&qprofile=profile
Now, given the sheer amount of projects and http requests, this process takes a LONG time.
Am I missing an easier way to do this?
To know the rules that were applied you need the
project's languages
profile used for each language
rules in each profile
There are a couple different ways to go at this. You've found one. Let's look at others.
(I'm looking at a more recent copy of SonarQube but I believe the APIs in question should still be the same.)
I know that the profiles used in a project's most recent analysis are listed on the homepage. Using the developer console, I see that that data comes in response to a call to api/navigation/component. That web service is marked internal, meaning you probably shouldn't use it. So we can't go from project directly to Quality Profile.
However, api/measures/component is not internal, and with it the front page requests the ncloc_language_distribution among other things. That's a DISTRIB metric and it gives the project's lines of code per language. For the project I'm looking at, the returned value is "css=11473;java=235467;js=2549;ts=95516". As you see, from there we can get to the languages in the project. Now, let's set that aside for a moment.
Let's say you start by getting the list of Quality Profiles, (api/qualityprofiles/search). From that you know which profile is the default for each language. Now use api/qualityprofiles/projects to get the list of projects explicitly assigned to each profile. This is probably also a good point to retrieve and store locally the rules per profile.
Now you can iterate your projects, look at the languages for each one and figure out which profiles are used for each project. (Is there an explicit profile assignment for this language? No? Then the default!) Then you can pull your rules-per-profile out of memory and you've got your list for the project.
*dusts hands*
Simple!
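To make that flow concrete, here is a rough C# sketch of the profile-first approach. It is hedged: the query parameter on api/qualityprofiles/projects and the exact JSON field names are assumptions from memory for 6.7.5, and paging is omitted, so check them against your server's web API documentation.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class ProjectRules
{
    static readonly HttpClient Http = new HttpClient { BaseAddress = new Uri("http://host") };

    static async Task<JsonDocument> GetJson(string url) =>
        JsonDocument.Parse(await Http.GetStringAsync(url));

    static async Task Main()
    {
        // 1. All quality profiles: which language each covers, which one is the default.
        var profiles = await GetJson("/api/qualityprofiles/search");

        var defaultProfileByLanguage = new Dictionary<string, string>();
        var explicitProfile = new Dictionary<(string project, string lang), string>();
        var rulesByProfile = new Dictionary<string, List<string>>();

        foreach (var p in profiles.RootElement.GetProperty("profiles").EnumerateArray())
        {
            var key = p.GetProperty("key").GetString();
            var lang = p.GetProperty("language").GetString();
            if (p.GetProperty("isDefault").GetBoolean())
                defaultProfileByLanguage[lang] = key;

            // 2. Projects explicitly assigned to this profile.
            var assigned = await GetJson($"/api/qualityprofiles/projects?key={key}");
            foreach (var proj in assigned.RootElement.GetProperty("results").EnumerateArray())
                explicitProfile[(proj.GetProperty("key").GetString(), lang)] = key;

            // 3. Active rules per profile, fetched once and cached locally.
            var rules = await GetJson($"/api/rules/search?activation=true&qprofile={key}&ps=500");
            var list = new List<string>();
            foreach (var r in rules.RootElement.GetProperty("rules").EnumerateArray())
                list.Add(r.GetProperty("key").GetString());
            rulesByProfile[key] = list;
        }

        // 4. Per project: languages from ncloc_language_distribution, then explicit or default profile.
        foreach (var projectKey in new[] { "my:project" })   // assumption: fill from api/components/search?qualifiers=TRK
        {
            var measures = await GetJson(
                $"/api/measures/component?componentKey={projectKey}&metricKeys=ncloc_language_distribution");
            var distribution = measures.RootElement.GetProperty("component")
                .GetProperty("measures")[0].GetProperty("value").GetString();   // e.g. "cs=1234;js=567"

            foreach (var part in distribution.Split(';'))
            {
                var lang = part.Split('=')[0];
                var profile = explicitProfile.TryGetValue((projectKey, lang), out var pk)
                    ? pk : defaultProfileByLanguage.GetValueOrDefault(lang);
                if (profile != null)
                    Console.WriteLine($"{projectKey} [{lang}] -> {rulesByProfile[profile].Count} rules");
            }
        }
    }
}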

Cloud based CI server that can handle concurrency blocking based on external resources

I've been researching cloud based CI systems for a while now and cannot seem to find any systems that can address a major need of mine.
I'm building CI processes for development on Salesforce, but this question is more generally about builds which rely on an external resource. In our builds, we deploy code into a cloud hosted Salesforce instance and then run the tests in that instance. During a build, the external resource is effectively locked and build failures will occur if two builds target the same external resource at the same time. This means that the normal concurrency model of cloud based CI systems would start tripping over the Salesforce instance (external resource) with a concurrency greater than 1.
To complicate things a bit more, we actually have 5 different external resources for each project (feature, master, packaging, beta, and release) and need to control the concurrency of any builds relying on an external resource to 1. For example, all our feature branches build against the feature external resource. We can identify these builds by the branch name which uses the pattern feature/* and need to ensure that only one feature build runs at a time. However, the feature build doesn't tie up the other 4 external resources so ideally any builds that would need those resources should still be able to run concurrently.
I currently accomplish this in Jenkins using the Throttle Concurrent Builds plugin and assign a throttle group to each build identifying the external resource it relies on. This has been successful at preventing concurrent builds from tripping over external resources.
A few clarifications:
I'm not asking how to reduce concurrency to 1 at the repo level. I know every cloud CI system can do that. I should be able to set repo concurrency to N external resources (in my case, 5).
Ideally, I'd like to be able to use a regex pattern on branch name as the "group" with which to block concurrence. So, a setting like: If branch name matches 'feature/.*' then limit concurrency to 1. I want to avoid having to manually configure new feature branches in the build system and instead match on pattern.
I have to say, it's been nearly impossible to find a restrictive Google search term that would help me answer this question. Hopefully someone out there has faced this problem before and can shed some light for me :)
With the Jenkins Pipeline plugin you can set a stage's concurrency to 1, and only one build will pass through that stage at a time. Stages were designed to be able to represent things like this.
https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
stage "build"
node {
sh './test-the-awesome'
}
stage name: "environment test", concurrency: 1
node {
sh 'tests that lock the environment'
}
You can put the build pipeline in a Jenkinsfile in a repo too: https://documentation.cloudbees.com/docs/cookbook/pipeline-as-code.html (so any branches that build, also obey that lock).
As pointed out by @Jesse Glick in the comments, perhaps a more general solution (not yet compatible with Pipeline) is to use the Lockable Resources Plugin, which will then work across jobs of any type.
I accomplish this with a Drone.io setup.
Essentially, I use a grunt plugin to access a Redis db hosted externally. It provides semaphore locking on any param you'd like.
Determine if the lock is free for that Env.
If so, set that Env's key with a reasonable timeout.
Run the tests.
Clear the lock.
If the lock is held, get its expiration time and sleep until then.
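That answer uses a grunt plugin against an external Redis, but the semaphore pattern itself is portable. Purely as an illustration (this is not their actual setup), the same acquire/sleep/release loop could look like this in C# with StackExchange.Redis:

using System;
using System.Threading;
using StackExchange.Redis;

class EnvironmentLock
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("my-redis-host:6379"); // assumption: your Redis endpoint
        var db = redis.GetDatabase();

        var lockKey = "ci-lock:feature";                 // one key per external resource/environment
        var owner = Environment.MachineName + ":" + Guid.NewGuid();
        var timeout = TimeSpan.FromMinutes(30);          // generous upper bound on a build's duration

        // Wait until the lock is free, then take it with an expiry so a crashed build can't hold it forever.
        while (!db.LockTake(lockKey, owner, timeout))
        {
            var ttl = db.KeyTimeToLive(lockKey) ?? TimeSpan.FromSeconds(30);
            Console.WriteLine($"Lock held, sleeping {ttl.TotalSeconds:F0}s...");
            Thread.Sleep(ttl);
        }

        try
        {
            Console.WriteLine("Lock acquired - run the environment tests here.");
            // RunTests();  // placeholder for the actual build/test step
        }
        finally
        {
            // Clear the lock so the next queued build can proceed.
            db.LockRelease(lockKey, owner);
        }
    }
}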
I am not aware of any cloud based CI tools that can manage external resources the way you want to, unless you include the logic as part of the build script, which you've already said you'd prefer not to do. If you decide you want to do that you could do it with Snap CI or Drone or any of the other cloud tools I imagine.
In this sort of situation, I would usually recommend an agent-based system such as Go.cd

Do Different CRM Orgs Running On The Same Box Share The Same App Domain?

I'm doing some in memory Caching for some Plugins in Microsoft CRM. I'm attempting to figure out if I need to be concerned about different orgs populating the same cache:
// In Some Plugin
var settings = Singleton.GetCache["MyOrgSpecificSetting"];
// Use Org specific cached Setting:
or do I need to do something like this to be sure I don't cross contaminate settings:
// In Some Plugin
var settings = Singleton.GetCache[GetOrgId() + "MyOrgSpecificSetting"];
// Use Org specific cached Setting:
I'm guessing this would also need to be factored in for Custom Activities in the AsyncWorkflowService as well?
Great question. As far as I understand, you would run into the issue you describe when storing static data if your assemblies were not registered in Sandbox Mode, so you would have to create some way to uniquely qualify the reference (as your second example does).
However, this goes against Microsoft's best practices in plugin/workflow activity development. A plugin should not rely on state outside of the state that is passed into it. Here is what the MSDN documentation says:
"The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. Also, multiple system threads could execute the plug-in at the same time. All per invocation state information is stored in the context, so you should not use global variables or attempt to store any data in member variables for use during the next plug-in invocation unless that data was obtained from the configuration parameter provided to the constructor."
So the ideal way to manage caching would be to either use one or more CRM records (likely custom) or use a different service to cache this data.
Synchronous plugins of all organizations within the CRM front end run in the same AppDomain, so your second approach will work. Unfortunately, async services run in a separate process, from which it would not be possible to access your in-proc cache.
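For illustration, a minimal sketch of that org-qualified, thread-safe cache (hedged: the loader delegate and QuerySettingFromCrm below are hypothetical placeholders; the OrganizationId comes from the plugin execution context):

using System;
using System.Collections.Concurrent;
using Microsoft.Xrm.Sdk;

public static class SettingsCache
{
    // Static state is shared by every plugin instance loaded into the same AppDomain,
    // so the key is prefixed with the organization id to keep orgs apart.
    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    public static string Get(IPluginExecutionContext context, string settingName,
                             Func<string> loadFromCrm)
    {
        var key = context.OrganizationId + "|" + settingName;

        // GetOrAdd is thread-safe, so concurrent plugin executions won't corrupt the cache
        // (the loader may run more than once under contention, which is usually acceptable).
        return Cache.GetOrAdd(key, _ => loadFromCrm());
    }
}

// Usage inside Execute(IServiceProvider serviceProvider):
//   var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
//   var setting = SettingsCache.Get(context, "MyOrgSpecificSetting",
//       () => QuerySettingFromCrm(context));   // QuerySettingFromCrm is a hypothetical helper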
I think it's technically impossible for Microsoft NOT to implement each CRM organization in at least its own AppDomain, let alone an AppDomain per loaded assembly. I'm trying to imagine how multiple versions of a plugin-assembly are deployed to multiple organizations and loaded and executed in the same AppDomain and I can't think of a realistic way. But that may be my lack of imagination.
I think your problem lies more in the concurrency (multi-threading) than in sharing of the same plugin across organizations. @BlueSam quotes Microsoft where they seem to be saying that multiple instances of the same plugin can live in one AppDomain. Make sure multiple threads can concurrently read/write to your in-mem cache and you'll be fine. And if you really, really want to be sure, prepend the cache key with the OrgId, like in your second example.
I figure you'll be able to implement a concurrent cache, so I won't go into detail there.
