I would like to send the ARM template project, along with the related PowerShell DSC scripts, to a third party. They would probably use the deploy option in VS to do the deployment. Is it possible to attach the DSC script as part of the ARM project so that, on deploy, it picks up the DSC script from the local disk? Under settings we have "ModulesUrl"; is it possible to replace this with another parameter that points to the local disk, something like c:\myproject\IISInstall.ps1.zip?
{
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('webSrvVmName'))]"
  ],
  "location": "[resourceGroup().location]",
  "name": "qawebsrv/iisinstall",
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "ModulesUrl": "https://dscscript.blob.core.windows.net/dscscripts/IISInstall.ps1.zip",
      "ConfigurationFunction": "[variables('configurationFunction')]",
      "Properties": {},
      "SasToken": "",
      "wmfVersion": "4.0"
    },
    "protectedSettings": {}
  },
  "tags": {
    "displayName": "VM Extensions"
  },
  "type": "Microsoft.Compute/virtualMachines/extensions"
}
If you're planning to use VS to do the deployment, then VS can stage the DSC package for you - the deployment script in VS does this. It can actually build the DSC package as well, but that has some limitations.
There's nothing magic about the VS script - this repo has a DSC sample that uses the same script used by VS; see https://github.com/bmoore-msft/AzureRM-Samples/tree/master/VMDSCInstallFile for a "Hello World" example.
No, this is not possible. The closest you can get is to upload your script to a publicly accessible location, and the VM will pull it from there.
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.20",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": {
"url": "https://github.com/xxx.zip",
"script": "scriptname.ps1",
"function": "main"
},
"configurationArguments": {}
},
"protectedSettings": {}
}
I am trying to change the location where Serilog files get written.
It will write to a directory at the root of the project or below (for example, StructuredLogs/file-here.txt), but any time I attempt to use an environment variable, an absolute file path, or even a relative file path, it only writes to the root directory and replaces / or \ (properly escaped) with :
The settings are in appsettings.json as follows:
"Serilog": {
"Using": [],
"Properties": {
"ApplicationName": "Test API"
},
"MinimumLevel": {
"Default": "Debug",
"Override": {
"Microsoft": "Debug",
"Microsoft.TestApi": "Debug",
"System": "Debug"
}
},
"Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId", "WithProcessName", "WithThreadId" ],
"WriteTo": [
{ "Name": "Console" },
{
"Name": "File",
"Args": {
"path": "D:\\repo\\micro-services\\Apis\\structured-logs\\logs.txt",
"rollingInterval": "Day",
"outputTemplate": "{Timestamp:G} {Message}{NewLine:1}{Exception:1}"
}
},
{
"Name": "File",
"Args": {
"path": "D:\\repo\\micro-services\\Apis\\structured-logs\\logs.json",
"rollingInterval": "Day",
"formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog"
}
},
...
I have tried:
Moving the solution from a portable drive to my local computer: C:/ vs D:/
Ensuring Visual Studio 2022 was running with elevated permissions
Changing the various ways I point to a different directory (relative, absolute, environment variable, etc.)
Forward slash vs backslash
None of these have worked.
It looks vaguely like a character-set encoding issue of some sort, but I'm not sure.
Any thoughts? Ideally I would use a relative path.
I use this in Program.cs; the log file is saved in the parent directory:
Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.File("../log-.txt", rollingInterval: RollingInterval.Day)
    .CreateLogger();
So I've run into this issue with a web app I've made:
It gets a file path as input.
If the file exists on a bucket, it uses a Python client API to create a Compute Engine instance.
It passes the file path to the instance in the startup script.
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it used my personal account. I switched to the service account on the command line with 'gcloud config set account', and it ran fine once more.
When I simply go to the source code directory and deploy it as is, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error:
{
  "protoPayload": {
    "#type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 3,
      "message": "INVALID_PARAMETER"
    },
    "authenticationInfo": {
      "principalEmail": "####"
    },
    "requestMetadata": {
      "callerIp": "#####",
      "callerSuppliedUserAgent": "(gzip),gzip(gfe)"
    },
    "serviceName": "compute.googleapis.com",
    "methodName": "v1.compute.instances.insert",
    "resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
    "request": {
      "#type": "type.googleapis.com/compute.instances.insert"
    }
  },
  "insertId": "######",
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "#####",
      "project_id": "someproject",
      "zone": "somezone"
    }
  },
  "timestamp": "2021-06-16T12:18:21.253551Z",
  "severity": "ERROR",
  "logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operation-#####",
    "producer": "compute.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled why it only does this for Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
The Cloud Run service uses a different service account, but it was given that exact same custom role.
Here is the method I use to call the API.
import googleapiclient.discovery
from datetime import date


def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")

    # Startup script: copy the input from the bucket, run the analysis,
    # then delete the instance itself when finished.
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)

    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg#my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }

    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit.
return compute.instances().insert(
    project="my-project",
    zone="northamerica-northeast1",
    body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before the new image can be used from Cloud Run.
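One reason the web app itself never shows an error is that instances().insert() is asynchronous: it only returns an operation, and failures like the INVALID_PARAMETER above only appear once the operation completes. Below is a minimal sketch of polling the operation so such failures surface immediately; wait_for_insert is a hypothetical helper name, and it assumes the same project, zone, and client as in the code above (zoneOperations().wait() is part of the Compute v1 API):

# Hypothetical helper: block until the insert operation finishes and
# raise if the Compute Engine API reported any errors.
def wait_for_insert(compute, project, zone, operation):
    result = compute.zoneOperations().wait(
        project=project,
        zone=zone,
        operation=operation['name']).execute()
    if 'error' in result:
        raise RuntimeError(result['error'])
    return result

# Usage sketch (build a client the same way create_instance does):
# compute = googleapiclient.discovery.build('compute', 'v1')
# op = create_instance("some/bucket/path")
# wait_for_insert(compute, "my-project", "northamerica-northeast1-a", op)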
When pointing Satis at a GitLab repo, it chooses a dist URL that mirrors the source URL instead of finding the GitLab Release and using the dist zip artifact.
Let's say your satis.json looks like this:
{
  "repositories": [
    { "type": "vcs", "url": "git@gitlab.com:group-name/project-name.git" }
  ],
  "require-all": true
}
When you run satis build satis.json, satis will create a packages.json that looks like this:
{
  "packages": {
    "group-name/project-name": {
      "v1.0.1": {
        "name": "group-name/project-name",
        "version": "v1.0.1",
        "version_normalized": "1.0.1.0",
        "source": {
          "type": "git",
          "url": "git@gitlab.com:group-name/project-name.git",
          "reference": "68da091ec3d6891e8519095a8066b28eb2261c20"
        },
        "dist": {
          "type": "zip",
          "url": "https://gitlab.com/api/v4/projects/group-name%2Fproject-name/repository/archive.zip?sha=68da091ec3d6891e8519095a8066b28eb2261c20",
          "reference": "68da091ec3d6891e8519095a8066b28eb2261c20",
          "shasum": ""
        },
        "require": {
          "composer/installers": "v1.0.6"
        },
        "require-dev": {
          "wp-coding-standards/wpcs": "^2.2"
        },
        "time": "2020-01-13T04:39:57+00:00",
        "type": "wordpress-plugin"
      }
    }
  }
}
The problem
The dist.url is simply an API call to GitLab that generates a zip of the files as they appear in the git repo.
But I'm carefully constructing a zip distributable as part of my CI builds that contains minified JavaScript, generated CSS, etc. This zip distributable is then attached to a GitLab Release as an artifact.
I want Satis to find my GitLab Release and use the zip artifact in the dist.url. In my case, it would look something like this:
"dist": {
"type": "zip",
"url": "https://gitlab.com/api/v4/projects/11301246/jobs/400678589/artifacts/project-name-1.0.1.zip",
},
Following the advice at https://superuser.com/questions/1210215/how-to-bootstrap-windows-hosts-with-remote-powershell-for-use-with-ansible, I am trying to add a Custom Script extension to an existing VM.
Below is my playbook:
- name: Create VM playbook
  hosts: localhost
  connection: local
  tasks:
    - name: Custom Script Extension
      azure_rm_deployment:
        state: present
        location: 'uk west'
        resource_group_name: 'AnsibleRG'
        template: "{{ lookup('file', '/etc/ansible/playbooks/extension.json') | from_json }}"
        deployment_mode: incremental
This is extension.json
{
  "publisher": "Microsoft.Compute",
  "type": "CustomScriptExtension",
  "typeHandlerVersion": "1.4",
  "settings": {
    "fileUris": [
      "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
    ],
    "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File ConfigureRemotingForAnsible.ps1"
  }
}
When I run the playbook I get the following error from Azure:
The request content was invalid and could not be deserialized: 'Could
not find member 'publisher' on object of type 'Template'. Path
'properties.template.publisher', line 1, position 64.'.
Can anyone please point me in the right direction?
Thanks
You still need to provide a valid template.
You need to provide a proper type for the resource; "extensions" on its own isn't a valid type.
Your name has to include the VM name, as this is how the template figures out which VM to apply the extension to.
Example:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": {
      "type": "string"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('vmName'),'/ConfigureRemotingForAnsible')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.8",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [
            "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
          ],
          "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File ConfigureRemotingForAnsible.ps1"
        }
      }
    }
  ]
}
The application works fine locally. However, deploying it on Heroku gives the following error without any further clue:
com.github.dandelion.core.DandelionException: The file
'WEB-INF/classes/dandelion/my-bundle.json' is wrongly formatted.
Please help me figure out how to resolve this.
Below is the JSON file used in the deployment, for reference.
{
  "bundle": "my-bundle",
  "assets": [
    {
      "name": "jquery",
      "version": "2.1.1",
      "type": "js",
      "locations": {
        "webapp": "/webjars/jquery/2.1.1/jquery.min.js"
      }
    },
    {
      "name": "datatables",
      "version": "1.10.5",
      "type": "js",
      "locations": {
        "webapp": "/webjars/datatables/1.10.5/js/jquery.dataTables.js"
      }
    },
    {
      "name": "datatables",
      "version": "1.10.5",
      "type": "css",
      "locations": {
        "webapp": "/webjars/datatables/1.10.5/css/jquery.dataTables.css"
      }
    }
  ]
}
Thanks.