I would like to use Azure DevOps to create folders and shares on servers and shared drives. The intent is to move away from having developers or IAM create folders, and to make the folder/share structure part of the release itself. This needs to include empty folders, shares on servers and on shared drives, and granting specific permissions to service accounts by environment.
As an example, a new data movement application requires a source folder, a destination folder and an archive folder. Each of these folders is in a different location (system-wide share vs. server share) and has different permissions. I would like the prod release of the application to create these folders (only if needed) and grant the correct read/modify permissions to the service accounts and IT support accounts.
All of my searches have yielded results on how to create/set folders in the Azure DevOps workspace, not in the greater IT environment.
I suggest setting up your DevOps pipeline on a self-hosted agent.
After that, every pipeline that runs on that agent performs its operations under the agent's service account, so if the agent runs as an administrator of the OS, the pipeline can act as one too.
For example, I set up a self-hosted agent on my server, added it to an agent pool named VMAS, and then wrote a YAML pipeline like this:
trigger:
- none

pool: VMAS

steps:
- script: |
    echo %username%
    mkdir "C:\xxxyyyzzz"
  displayName: 'Run a one-line script'
The folder is created on the server itself (outside the pipeline workspace) with no problem.
So if you can achieve a non-interactive operation directly from the command line, it is also possible to do the same thing via a DevOps pipeline.
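Taking that a step further toward what you asked (shares and per-environment permissions), here is a minimal sketch of such a step. The folder path, share name, domain and account names are all placeholders, and it assumes the agent's account has the rights to create shares and set ACLs on the target server:

steps:
- script: |
    rem Create the folder only if it does not already exist
    if not exist "D:\Apps\DataMover\Archive" mkdir "D:\Apps\DataMover\Archive"
    rem Share the folder and grant the (hypothetical) prod service account change access
    rem (net share fails if the share already exists, so add a check in real use)
    net share DataMoverArchive=D:\Apps\DataMover\Archive /grant:CONTOSO\svc-datamover-prod,CHANGE
    rem NTFS side: grant Modify, inherited by files (OI) and subfolders (CI)
    icacls "D:\Apps\DataMover\Archive" /grant "CONTOSO\svc-datamover-prod:(OI)(CI)M"
  displayName: 'Create folder, share and ACLs (prod)'

A per-environment variable group (or pipeline parameters) can then swap in the right accounts and paths for dev, test and prod.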
Related
We maintain dozens of developer accounts on AWS, and for maintenance purposes it would be great to have a common set of scripts available in all CloudShell environments.
It is possible to upload files to the CloudShell environment manually using the Actions -> Upload File feature in the web console, but that is not feasible across dozens of environments.
Is there any Ansible module or other way to upload files to CloudShell? Probably via an S3 bucket, but we're missing the last mile into the CloudShell environment.
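For context, the per-environment bootstrap we would like to avoid running by hand looks roughly like this (the bucket name is made up); CloudShell persists $HOME between sessions, so a one-time sync would stick:

# One-time bootstrap inside each CloudShell; $HOME persists across sessions
mkdir -p ~/bin
aws s3 sync s3://example-admin-scripts/cloudshell/ ~/bin/   # hypothetical bucket
chmod +x ~/bin/*
grep -q 'HOME/bin' ~/.bashrc || echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc

What we're after is a way to run something like this across all environments without logging into each one.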
I have 4 Windows workstations that I need to back up to Google Cloud Storage, and I would like this to happen automatically. Is it possible?
You can set up, on each workstation, a scheduled task that regularly runs a gsutil rsync from the relevant local folder to a dedicated folder on Google Cloud Storage.
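As a minimal sketch (the local path, bucket name and schedule are assumptions), the scheduled task could be registered like this once the Cloud SDK is installed and authenticated on the workstation:

rem Daily 02:00 task mirroring C:\Data to a per-workstation bucket folder (names are hypothetical)
schtasks /create /tn "GCS Backup" /sc daily /st 02:00 ^
  /tr "gsutil -m rsync -r C:\Data gs://example-backup-bucket/workstation1"

The -m flag parallelizes uploads and -r recurses into subfolders.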
The Ansible playbook I'm running via AWS CodeBuild only deploys to the same account. Instead of using a separate build for each account, I'd like to use only one build and manage multi-account deployment via the Ansible inventory. How can I set up a static inventory with a .yml file for every other AWS account or environment it will be deploying to? That is, the inventory classifies those accounts into dev, stg and prod environments.
I know a bit about how this should be structured: create a .yml file in the inventory folder named after the account, and also create a matching file in the group_vars subfolder without the .yml extension. But I do not know the details of the file contents. Can you please explain this to me?
On the CodeBuild side, environment variables provide a few account names, the environment, and the role it should assume in those accounts to deploy. My question is: how should the inventory structure and file contents be set up for this to work?
If you want to act on resources in a different account, the general idea in AWS is to "assume" a role in that account and then run API calls as normal. Ansible has a module, sts_assume_role, which helps to assume a role. I found the following blog article that may give you some pointers. Whether you run the ansible command on your laptop or in CodeBuild, the idea is the same:
http://www.drivenbydevops.io/aws-ansible-and-assumed-roles/
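As a hedged sketch of the layout the question describes (account IDs, group names and the role name are all made up), one static inventory file per environment plus a matching group_vars file could look like this:

inventory/
  dev.yml
  stg.yml
  prod.yml
  group_vars/
    dev          (no .yml extension, as described in the question)
    stg
    prod

# inventory/dev.yml - one group per environment, one entry per AWS account
dev:
  hosts:
    dev-account-1:
      aws_account_id: "111111111111"
  vars:
    ansible_connection: local   # AWS modules talk to the API, not to a remote host

# group_vars/dev - environment-wide variables
env: dev
deploy_role_arn: "arn:aws:iam::{{ aws_account_id }}:role/DeployRole"

# In the playbook, assume the per-account role before any AWS module calls
- community.aws.sts_assume_role:
    role_arn: "{{ deploy_role_arn }}"
    role_session_name: "deploy-{{ env }}"
  register: assumed

The temporary credentials returned in assumed.sts_creds can then be passed to subsequent AWS modules, and running with -i inventory/prod.yml (or --limit prod) selects the environment.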
In my Jenkins home dir I see these configs for all my users:
"c:\Program Files (x86)\Jenkins\users\someuser\config.xml"
What is this config? It looks like it's caching their sessions or something. Do I need to back up these files? What would the impact be if the users folder got deleted?
I should add that I'm using Active Directory for auth, so these aren't internal Jenkins users, they are AD users, which is why I'm wondering what config Jenkins is keeping for them.
You should indeed back those up. These folders contain user-specific configuration data, such as public keys or API access tokens (see the user's "configure" menu in the web interface).
I've been watching some videos from the Build conference (Inside Windows Azure, etc.).
My takeaway from one of them was that unless I loaded a preconfigured VHD into a virtual machine role, I would lose any system settings I might have made should the instance be brought down or recycled.
So for instance, I have a single account with 2 web roles running multiple (small) websites. To make that happen I had to adjust the settings in the hosts file. I know my websites will be carried over in the event of a failure because they are defined in ServiceConfiguration.cscfg, but will my hosts file settings also carry over to a fresh instance in the event of a failure?
i.e. how deep/comprehensive is my "template" with a web role?
The hosts file will be reconstructed on any full redeployment or reimage.
In general, you should avoid relying on changes to any file that is created by the operating system. If your application is migrated to another server it will be running on a new virtual machine with its own new copy of Windows, and so the changes will suddenly appear to have vanished.
The same will happen if you perform a deployment to the Azure "staging" environment and then perform a "swap VIP": the "staging" environment will not have the changes made to the operating system file.
Microsoft intentionally doesn't publish the inner details of what Azure images look like, as they will most likely change in the future, but currently:
drive C: holds the boot partition, logs, temporary data and is small
drive D: holds a Windows image
drive E: or F: holds your application
On a full deployment, or a re-image, you receive a new virtual machine so all three drives are re-created. On an upgrade, the virtual machine continues to run but the load balancer migrates traffic away while the new version of the application is deployed to drive F:. Drive E: is then removed.
So, answering your question directly, the "template" is for drive E: -- anything else is subject to change without your knowledge, and can't be relied on.
Azure provides Startup Scripts so that you can make configuration changes on instance startup. Often these are used to install additional OS components or make IIS-configuration changes (like disabling idle timeouts).
See http://blogs.msdn.com/b/lucascan/archive/2011/09/30/using-a-windows-azure-startup-script-to-prevent-your-site-from-being-shutdown.aspx for an example.
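For illustration, a startup task is declared in ServiceDefinition.csdef along these lines (the script name is a placeholder); the .cmd file is packaged with the role and runs each time the instance starts, so the configuration is reapplied on fresh VMs:

<!-- Inside the WebRole element of ServiceDefinition.csdef -->
<Startup>
  <!-- executionContext="elevated" lets the script change machine-wide settings;
       startup.cmd is a placeholder for your own script -->
  <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>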
The existing answers are technically correct and answer the question, but hosting multiple web sites in a single web role doesn't require editing the hosts file at all. Just define multiple web sites (with different host headers) in your ServiceDefinition.csdef. See http://msdn.microsoft.com/en-us/library/gg433110.aspx
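As a rough sketch (site names, directories and host headers below are made up), the relevant part of ServiceDefinition.csdef looks like this:

<Sites>
  <Site name="SiteA" physicalDirectory="..\SiteA">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.example-a.com" />
    </Bindings>
  </Site>
  <Site name="SiteB" physicalDirectory="..\SiteB">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.example-b.com" />
    </Bindings>
  </Site>
</Sites>

Both sites share the single HttpIn endpoint on port 80; IIS routes requests by host header, so no hosts file changes are needed.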