I have written a Terraform template that creates an Azure Windows VM. I need to configure the VM to enable PowerShell Remoting so that the release pipeline can execute PowerShell scripts. After the VM is created I can RDP to the VM and do everything I need to do to enable PowerShell Remoting; however, it would be ideal if I could script all of that so it could be executed in a release pipeline. There are two things that prevent that.
The first, and the topic of this question, is that I have to run "WinRM quickconfig". With the template as it stands, when I RDP to the VM after creation and run "WinRM quickconfig", I receive the following response:
WinRM service is already running on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:
Configure LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.
Make these changes [y/n]?
I want to configure the VM in Terraform so that LocalAccountTokenFilterPolicy is set and it becomes unnecessary to RDP to the VM to run "WinRM quickconfig". After some research it appeared I might be able to do that using the resource azurerm_virtual_machine_extension. I added this to my template:
resource "azurerm_virtual_machine_extension" "vmx" {
name = "hostname"
location = "${var.location}"
resource_group_name = "${var.vm-resource-group-name}"
virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
# "commandToExecute": "powershell Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System' -Name 'LocalAccountTokenFilterPolicy' -Value 1 -Force"
}
SETTINGS
}
When I apply this, I get the error:
Error: compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I couldn't find any Terraform documentation that addresses how to set the allowExtensionOperations property to true. On a whim, I tried adding the property "allow_extension_operations" to the os_profile block in the azurerm_virtual_machine resource, but it was rejected as an invalid property. I also tried adding it to the os_profile_windows_config block and it isn't valid there either.
I found a statement on Microsoft's documentation regarding the osProfile.allowExtensionOperations property that says:
"This may only be set to False when no extensions are present on the virtual machine."
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations?view=azure-dotnet
This implies to me that the property is True by default, but it doesn't actually say that and it certainly isn't acting like that. Is there a way in Terraform to set osProfile.allowExtensionOperations to true?
Running into the same issue adding extensions using Terraform. I created a Windows 2016 custom image, with:
provider "azurerm" version = "2.0.0"
Terraform 0.12.24
Terraform apply error:
compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0
-- Original Error: autorest/azure: Service returned an error.
Status=<nil>
Code="OperationNotAllowed"
Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I ran into the same error; the possible solution depends on two things here.
You have to use provider "azurerm" version "2.5.0", and you have to set the os_profile_windows_config block (see below) in the virtual machine resource as well, so that Terraform will provision the VM agent and apply the extensions you are passing. This fixed my errors.
os_profile_windows_config {
  provision_vm_agent = true
}
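For context, a minimal sketch of how those two pieces might fit together; the resource layout and the Windows CustomScriptExtension handler (Microsoft.Compute / CustomScriptExtension) are assumptions rather than something taken from the question:

resource "azurerm_virtual_machine" "vm" {
  # ... name, location, network interface, storage and os_profile blocks ...

  os_profile_windows_config {
    # Provision the VM agent so extension operations are allowed (per the error message)
    provision_vm_agent = true
  }
}

resource "azurerm_virtual_machine_extension" "vmx" {
  name                 = "enable-remoting"
  location             = "${var.location}"
  resource_group_name  = "${var.vm-resource-group-name}"
  virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
  # Windows VMs use the Microsoft.Compute CustomScriptExtension handler
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "powershell Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System' -Name 'LocalAccountTokenFilterPolicy' -Value 1 -Force"
}
SETTINGS
}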
Related
I'm using Ansible to provision a Windows Server 2016. This is the task I'm running:
- name: Ensure 'Audit System Extension' is set to 'Success and Failure'
  win_audit_policy_system:
    subcategory: Security System Extension
    audit_type: success, failure
output:
changed: [10.8.20.177] => {
    "changed": true,
    "current_audit_policy": {
        "security system extension": "success and failure"
    }
}
When I check on the machine whether the change was really applied, I find that it was not. I tried restarting the machine and it still didn't apply.
The Windows Server 2016 system audit policies screen still shows the setting unchanged (screenshot not reproduced here).
Any ideas what's going on?
This command might give the expected result.
auditpol.exe /get /category:*
Microsoft mentioned something about this in the article below.
https://support.microsoft.com/en-us/help/2573113/auditpol-and-local-security-policy-results-may-differ
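If it helps, you can also narrow that down to the one subcategory the Ansible task targets (this queries the effective policy, which may differ from what the Local Security Policy snap-in displays):

auditpol /get /subcategory:"Security System Extension"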
I've got this infrastructure description
variable "HEROKU_API_KEY" {}
provider "heroku" {
email = "sebastrident#gmail.com"
api_key = "${var.HEROKU_API_KEY}"
}
resource "heroku_app" "default" {
name = "judge-re"
region = "us"
}
Originally I forgot to specify a buildpack, and the apply created the application on Heroku. I then decided to add the buildpack to the resource entry:
  buildpacks = [
    "heroku/java"
  ]
But when I try to apply the plan in Terraform, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* heroku_app.default: 1 error(s) occurred:
* heroku_app.default: Post https://api.heroku.com/apps: Name is already taken
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
The Terraform plan looks like this:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  + heroku_app.judge_re
      id:                <computed>
      all_config_vars.%: <computed>
      buildpacks.#:      "1"
      buildpacks.0:      "heroku/java"
      config_vars.#:     <computed>
      git_url:           <computed>
      heroku_hostname:   <computed>
      name:              "judge-re"
      region:            "us"
      stack:             <computed>
      web_url:           <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
As a workaround I tried adding a destroy to my deploy.sh script:
terraform init
terraform plan
terraform destroy -force
terraform apply -auto-approve
But it does not destroy the resource, as I get the message "Destroy complete! Resources: 0 destroyed."
What is the problem?
Link to build
It looks like you also changed the name of the resource. Your original example has the resource name heroku_app.default while your plan has heroku_app.judge_re.
To point your state to the remote resource, so Terraform knows you are editing and not trying to recreate the resource, use terraform import:
terraform import heroku_app.judge_re judge-re
In Terraform you normally needn't destroy the whole stack when you just want to rebuild one or several resources in it.
terraform taint does this trick. The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
terraform taint heroku_app.default
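Applied to the deploy.sh from the question, that would look roughly like this (a sketch, assuming heroku_app.default is the resource you want rebuilt):

terraform init
terraform taint heroku_app.default   # mark only the app for recreation
terraform plan
terraform apply -auto-approve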
Second, when troubleshooting why the resource isn't listed among the resources to destroy, make sure you are pointing at the right Terraform tfstate file.
When you run terraform plan, do you see any resources that were already created?
I placed the Plugins Manager in the "lib\ext" folder, and when I tried to open it, it showed this error:
java.io.IOException: Repository responded with wrong status code: 407
JMeter version: 3.3
Plugin version: 0.16
JMeter is invoked from the command line using the following parameters:
C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat -H Proxyserver -P 1234 -u princen -a ***
I modified the parameters as suggested here:
JVM_ARGS="-Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***" C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat
That attempt gives the following error message:
Windows cannot find "JVM_ARGS="-Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***
When I changed the command to the following:
C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat -Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***
I received an error:
java.io.IOException: Repository responded with wrong status code: 407
Can someone please provide the correct parameters required to load the Plugins Manager?
Ensure you use the latest version of the jmeter-plugins Plugins Manager.
Regarding your parameters, you're mixing different configurations; just set (for both http and https):
JVM_ARGS="-Dhttps.proxyHost=myproxy.com -Dhttps.proxyPort=8080 -Dhttps.proxyUser=john -Dhttps.proxyPass=password -Dhttp.proxyHost=myproxy.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=john -Dhttp.proxyPass=password"
Where password is your real password.
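On Windows, the VAR=value prefix syntax used above isn't understood by cmd (which is why you got "Windows cannot find JVM_ARGS=..."). A rough equivalent is to set the environment variable first and then launch JMeter, for example from a command prompt (host, port and credentials are the placeholders from the question):

set JVM_ARGS=-Dhttps.proxyHost=Proxyserver -Dhttps.proxyPort=1234 -Dhttps.proxyUser=princen -Dhttps.proxyPass=*** -Dhttp.proxyHost=Proxyserver -Dhttp.proxyPort=1234 -Dhttp.proxyUser=princen -Dhttp.proxyPass=***
"C:\Users\princen\Performance Testing\Software\apache-jmeter-3.3\bin\jmeter.bat"

jmeter.bat includes %JVM_ARGS% on the Java command line, so the proxy properties should reach the Plugins Manager.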
None of the above methods worked for me. It's really tough to work with Java (due to a LoadRunner background). I added the Ultimate Thread Group alone and it's working fine.
Thank you all for your input.
JMeter is using the official proxy configuration from Oracle (like here: https://memorynotfound.com/configure-http-proxy-settings-java/)
The problem is that the jmeter documentation is wrong about the password parameter: it should be http.proxyPassword not http.proxyPass.
Also, you must use the https.* properties for secured URLs you want to access through the proxy, and the http.* properties for non-secured ones.
I've got an OctopusDeploy process for deploying to a database server.
The steps include copying db backups from a location to the local server, restoring them into SQL Server, then running a dacpac against them to upgrade them to a specific version.
This all works fine, but we've now added a new environment and I can't work out how to configure the deployment process for it.
Initially, the server was to be a Windows clustered environment, with the tentacle running as a clustered service (which meant a single deployment target).
However, the company setting up our servers couldn't get clustering to work for whatever reason, and have now given us something in between:
We have two servers, each with the tentacle installed, configured and running on it.
Each tentacle has a unique thumbprint, and both are always running and accessible.
On the Windows servers, SQL Server has been installed and configured as "Always On", with one server being the primary and the other being the secondary.
The idea being that if the primary dies, the secondary picks up the pieces and runs fine.
Conceptually, this works for us, as we have a "clustered" ip for the SQL server connection and our web app won't notice the difference.
(It's important to note, I CANNOT change this setup - it's a case of work with what we're given....)
Now, in Octopus, I need to deploy to ONLY one of the servers in this environment: if I were to deploy to both, I'd either be duplicating the task (if run as a rolling deployment) or, worse, have conflicting deployments (if run asynchronously).
I initially tried adding a secondary role to each server ("PrimaryNode", "SecondaryNode"), but I then discovered Octopus treats roles as an "or" rather than an "and", so this wouldn't work for us out of the box.
I then looked at writing a PowerShell script that checked whether the machine with the roles "dbserver" AND "primarynode" had a status of "Online" and a health of "Healthy", then set an output variable based on the status:
##CONFIG##
$APIKey = "API-OBSCURED"
$MainRole = "DBServer"
$SecondaryRole = "PrimaryNode"

$roles = $OctopusParameters['Octopus.Machine.Roles'] -split ","
$enableFailoverDeployment = $false

foreach ($role in $roles)
{
    if ($role -eq "FailoverNode")
    {
        # This is the failover node - check if the primary node is up and running
        Write-Host "This is the failover database node. Checking if primary node is available before attempting deployment."

        $OctopusURL = "https://myOctourl" ## $OctopusParameters['Octopus.Web.BaseUrl']
        $EnvironmentID = $OctopusParameters['Octopus.Environment.Id']
        $header = @{ "X-Octopus-ApiKey" = $APIKey }

        $environment = (Invoke-WebRequest -UseBasicParsing "$OctopusURL/api/environments/$EnvironmentID" -Headers $header).content | ConvertFrom-Json
        $machines = ((Invoke-WebRequest -UseBasicParsing ($OctopusURL + $environment.Links.Machines) -Headers $header).content | ConvertFrom-Json).items

        # Machines that have both the main role and the primary-node role
        $MachinesInRole = $machines | ?{ $MainRole -in $_.Roles }
        $MachinesInRole = $MachinesInRole | ?{ $SecondaryRole -in $_.Roles }

        $measure = $MachinesInRole | measure
        $total = $measure.Count

        if ($total -gt 0)
        {
            $currentMachine = $MachinesInRole[0]
            $machineUri = $currentMachine.URI

            if ($currentMachine.Status -eq "Online")
            {
                if ($currentMachine.HealthStatus -eq "Healthy")
                {
                    Write-Host "Primary node is online and healthy."
                    Write-Host "Setting flag to disable failover deployment."
                    $enableFailoverDeployment = $false
                }
                else
                {
                    Write-Host "Primary node has a health status of $($currentMachine.HealthStatus)."
                    Write-Host "Setting flag to enable failover deployment."
                    $enableFailoverDeployment = $true
                }
            }
            else
            {
                Write-Host "Primary node has a status of $($currentMachine.Status)."
                Write-Host "Setting flag to enable failover deployment."
                $enableFailoverDeployment = $true
            }
        }

        break;
    }
}

Set-OctopusVariable -name "EnableFailoverDeployment" -value $enableFailoverDeployment
This seemingly works - I can tell if I should deploy to the primary OR the secondary.
However, I'm now stuck on how to get the deployment process to use this.
Obviously, if the primary node is offline, then the deployment won't happen on it anyway.
Yet, if BOTH tentacles are online and healthy, then Octopus will just attempt to deploy to both of them.
The deployment process contains about 12 unique steps, and is successfully used in several other environments (all single-server configurations), but as mentioned, now needs to ALSO deploy to a weird active/warm environment.
Any ideas how I might achieve this?
(If only you could specify "AND" in roles..)
UPDATE 1
I've now found that you can update a specific machine's "IsDisabled" flag via the web API, so I added code to the end of the above to enable/disable the secondary node depending on the outcome, instead of setting an output variable.
Whilst this does indeed update the machine's status, it doesn't actually affect the ongoing deployment process.
If I stop and restart the whole process, the machine is correctly picked up as enabled/disabled accordingly, but again, if its status changes DURING the deployment, Octopus doesn't appear to be "smart" enough to recognise this, ruling this option out.
(I did try adding a health check step before and after this script to see if that made a difference, but whilst the health check realised the machine was disabled, it still made no difference to the rest of the steps.)
UPDATE 2
I've now also found the "ExcludedMachineIds" property of the "Deployment" in the API, but I get a 405 (not allowed) error when trying to update it once a deployment is in progress.
gah.. why can't this be easy?
OK, so the route we took with this was to have a script run against the clustered Always On SQL instance, which identified the primary and secondary nodes, as follows:
SELECT TOP 1 hags.primary_replica
FROM sys.dm_hadr_availability_group_states hags
INNER JOIN sys.availability_groups ag
ON ag.group_id = hags.group_id
WHERE ag.name = '$alwaysOnClusterInstance';
This allowed me to get the hostname of the primary server.
I then took the decision to include the hostname in the actual display name of the machine within OctopusDeploy.
I then do a simple "like" comparison with PowerShell between the result of the above SQL and the current machine display name ($OctopusParameters['Octopus.Machine.Name']).
If there's a match, then I set an output variable from this step equal to the internal ID of the OctopusDeploy machine ($OctopusParameters['Octopus.Machine.Id'])
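A rough sketch of that comparison step (how $primaryReplica is retrieved and the output variable name are assumptions on my part):

# $primaryReplica holds the hostname returned by the availability group query above (SQL call not shown)
$machineName = $OctopusParameters['Octopus.Machine.Name']

if ($machineName -like "*$primaryReplica*")
{
    # Publish this machine's id so later steps know which tentacle is the primary node
    Set-OctopusVariable -name "PrimaryMachineId" -value $OctopusParameters['Octopus.Machine.Id']
}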
Finally, at the start of each step, I simply compare the current machine id against the above mentioned output variable to determine if I am on the primary node or a secondary node, and act accordingly (usually by exiting the step immediately if it's a secondary node)
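The guard at the top of each subsequent step then looks roughly like this (the step name "IdentifyPrimary" and the variable name match the sketch above and are assumptions):

$primaryMachineId = $OctopusParameters['Octopus.Action[IdentifyPrimary].Output.PrimaryMachineId']
$currentMachineId = $OctopusParameters['Octopus.Machine.Id']

if ($currentMachineId -ne $primaryMachineId)
{
    Write-Host "Not the primary SQL node - skipping this step."
    exit 0
}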
The last thing to note is that every single step where I care which machine the step is being run on has to be run as a "rolling step", with a window size of 1.
Luckily, as we are just usually exiting if we're not on the primary node, this doesn't add any real time to our deployment process.
I am trying to set up a network in the container (using Docker's libnetwork and libcontainer), but I keep running into this issue. As far as I can tell, it's looking into some_app to get some sandbox information?
INFO[3808] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers : [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[3808] IPv6 enabled; Adding default IPv6 external servers : [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
Error: unknown command "/var/run/docker/netns/582bd184e561" for "some_app"
Run 'some_app --help' for usage.
ERRO[3808] Resolver Setup/Start failed for container 6b81802576bd4f16aa117061f81b5c3e, "setup not done yet"
ERRO[3808] failed to add interface vethef0a693 to sandbox: failed in prefunc: failed to set namespace on link "vethef0a693": invalid argument
ERRO[3808] failed to add interface vethef0a693 to sandbox: failed in prefunc: failed to set namespace on link "vethef0a693": invalid argument
I was wondering if anyone could help me make sense of this and perhaps prevent it. Are these two separate errors?
Thank you
Here is the library I am trying to use
It took me a while to figure this out, but here goes:
Just like in Docker, libnetwork creates a veth interface pair. It then moves one end of the veth pair into the container namespace. During this process libnetwork tries to execute commands registered at runtime on the current instance of the binary (some_app in this case).
These commands do not exist on the external interface of some_app however. They are injected later using a library called reexec. For this to work, reexec needs to be initialized like this:
if reexec.Init() {
return
}
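In context, that means something like this at the very top of your program (a sketch; the import path is the one used by Docker's codebase):

package main

import (
	"github.com/docker/docker/pkg/reexec"
)

func main() {
	// Must run first so commands registered via reexec (such as the netns setup helper)
	// are handled when the binary re-executes itself.
	if reexec.Init() {
		return
	}

	// ... the rest of your libnetwork/libcontainer setup ...
}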
Also note that according to this thread libnetwork is currently not supported for applications outside of Docker.
NB: I discovered this by reading the source code, so I might be wrong but my issue went away after this.