What does it mean if I reimage an Azure VM scale set?

I have a vm scale set deployed using a custom image.
What will happen if I re-image it? The docs aren't very helpful here.
How does it differ from deleting and redeploying via ARM?

You can update the custom image used by your VM scale set.
Specifically, in the ARM template (or a CLI/PowerShell script), you can specify an image ID like this:
"storageProfile": {
  "imageReference": {
    "id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/{existing-custom-image-name}"
  }
}
Refer to the thread here: https://github.com/MicrosoftDocs/azure-docs/issues/16899
Let me know if you are looking for something else.
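For illustration (not from the linked thread), the same idea can be expressed against the plain REST API. Below is a rough Python sketch; the subscription, resource group, scale set and image names, bearer token, and api-version are placeholders. It first patches the scale set model to point at the new custom image, then calls the reimage operation so existing instances are rebuilt from that image (reimaging resets each instance's OS disk back to the scale set's current image, so changes made inside the instances are lost).
# Rough sketch only: update a VM scale set's imageReference via the ARM REST
# API, then reimage the instances so they are rebuilt from that image.
# All names, the token and the api-version below are placeholders.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "myResourceGroup"
VMSS_NAME = "myScaleSet"
TOKEN = "<bearer-token>"        # e.g. obtained with `az account get-access-token`
API_VERSION = "2021-07-01"      # adjust to the compute API version you target

base = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
        f"/resourceGroups/{RESOURCE_GROUP}"
        f"/providers/Microsoft.Compute/virtualMachineScaleSets/{VMSS_NAME}")
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1) Point the scale set model at the (new) custom image.
image_id = (f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
            "/providers/Microsoft.Compute/images/my-custom-image")  # placeholder
patch_body = {"properties": {"virtualMachineProfile": {"storageProfile": {
    "imageReference": {"id": image_id}}}}}
requests.patch(f"{base}?api-version={API_VERSION}",
               json=patch_body, headers=headers).raise_for_status()

# 2) Reimage: wipes the OS disks and re-provisions instances from the current image.
requests.post(f"{base}/reimage?api-version={API_VERSION}",
              headers=headers).raise_for_status()
If you prefer the CLI, `az vmss update` and `az vmss reimage` cover the same two steps.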

Related

Databricks and Delta cache setting

I am trying to follow the instructions on the MSFT website to use delta cache and hoping someone would help me understand it a little better:
https://learn.microsoft.com/en-us/azure/databricks/delta/optimizations/delta-cache
The guide mentions that I should use Standard E or L series VMs. Our workload currently runs on F series machines, and when I tried using only E or L series it seemed that the job ran longer and used more DBUs.
I did however notice that the Dv3 series also allows delta caching (e.g. Standard_D16s_v3 VMs). I tried running some of our workloads on those machine types and noticed that the storage tab now shows a screen similar to the one in the MSFT docs.
The problem is that I am not sure this is the right way to go about it. The reason I wanted to try Dv3 VMs is that they are relatively comparable to the F series but also seem to allow delta caching.
I am also wondering if the MSFT recommendation of using the following settings is correct or if they can be different:
spark.databricks.io.cache.maxDiskUsage 50g
spark.databricks.io.cache.maxMetaDataCache 1g
spark.databricks.io.cache.compression.enabled false
If anyone else has played with this and can share what they did, it would be much appreciated.
As background, our Databricks clusters are spun up through our Databricks linked service (from ADF), and in that linked service we put the Spark config settings shown above.
This is what sends the config settings to the automated clusters that are spun up when we execute Databricks notebooks through ADF.
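For reference, a minimal sketch of a check that can be run in a Python notebook on the job cluster (where `spark` is predefined) to confirm which of these cache settings actually reached the cluster:
# Minimal sketch: print the disk/delta cache settings the running cluster ended
# up with, to confirm the values passed from the ADF linked service arrived.
# Runs in a Databricks Python notebook, where `spark` is predefined.
for key in (
    "spark.databricks.io.cache.enabled",
    "spark.databricks.io.cache.maxDiskUsage",
    "spark.databricks.io.cache.maxMetaDataCache",
    "spark.databricks.io.cache.compression.enabled",
):
    print(key, "=", spark.conf.get(key, "<not set>"))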
Thank you

Disabling specific context path in azure app insights

I have a spring-boot application that I have used alongside azure app insights and it works as expected.
Had a question on this: is it possible to exclude specific context paths? I have just dropped in the starter and configured the instrumentation key.
Are there any specific settings on application.properties that can fix this? Or do I need to create my own instance of TelemetryClient?
I am assuming that you don't want to store some events in the Application Insights instance. If so, you can use the sampling feature to suppress some of the telemetry.
You can set up sampling in the Java SDK in the ApplicationInsights.xml file.
You can include or exclude specific types of telemetry from sampling using the following tags inside the "FixedRateSamplingTelemetryProcessor" processor tag:
<ExcludedTypes>
<ExcludedType>Request</ExcludedType>
</ExcludedTypes>
<IncludedTypes>
<IncludedType>Exception</IncludedType>
</IncludedTypes>
You can read more about sampling here.
Hope it helps.

find corresponding docker images of docker layers

We are using nexus oss 3.13 as a private docker registry.
During development, due to misconfiguration, some images/layers can get extremely big.
Currently we have a Nexus groovy script which generates a report of the biggest files (i.e. layers), but there's no way to find out the corresponding images.
For production this is a show-stopper: we cannot delete the images that use the big layers, because we do not know which images are affected.
We are surprised, that such basic functionality is not provided.
Did we miss something in the documentation?
How are others tackling this problem?
Does someone have a good approach/workaround (maybe a groovy script) to match the docker layers to the docker images in order to solve this issue?
You can copy the non-truncated ID (SHA256) of the layer and grep for it in the folder /var/lib/docker/image.
This will find a file that has a SourceRepository JSON field:
/var/lib/docker/image# find . -name '*aae63f31dee9107165b24afa0a5e9ef9c9fbd079ff8a2bdd966f8c5d8736cc98*'
./overlay2/distribution/v2metadata-by-diffid/sha256/aae63f31dee9107165b24afa0a5e9ef9c9fbd079ff8a2bdd966f8c5d8736cc98
Then when we cat that file, we can see the SourceRepository field I referred to above:
/var/lib/docker/image# cat ./overlay2/distribution/v2metadata-by-diffid/sha256/aae63f31dee9107165b24afa0a5e9ef9c9fbd079ff8a2bdd966f8c5d8736cc98
[{"Digest":"sha256:9931fdda3586a52049081bc78fa9793476662310356127cc8baa52e38bb34a8d","SourceRepository":"docker.io/library/mysql","HMAC":""}]
In the above we can see that the source image is "mysql", from which I picked a layer at random.
As of the moment I don't believe there's a built-in way to accomplish this, maybe it's worth submitting a feature request.
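A possible workaround on the registry side, rather than on a Docker host, is to walk the standard Docker Registry v2 HTTP API that Nexus exposes and report every repository:tag whose manifest references the large layer. A rough Python sketch follows; the registry URL, credentials and target digest are placeholders, and note that manifests list compressed blob digests (the "Digest" field above), not the uncompressed diff IDs.
# Rough sketch: find every repository:tag whose manifest references a given
# layer digest, using the Docker Registry v2 HTTP API exposed by Nexus.
# Registry URL, credentials and the target digest are placeholders.
import requests

REGISTRY = "https://nexus.example.com:5000"   # placeholder registry endpoint
AUTH = ("user", "password")                   # placeholder credentials
TARGET = "sha256:9931fdda3586a52049081bc78fa9793476662310356127cc8baa52e38bb34a8d"
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def get(path, **kwargs):
    resp = requests.get(f"{REGISTRY}{path}", auth=AUTH, **kwargs)
    resp.raise_for_status()
    return resp.json()

# Note: /v2/_catalog may be paginated (?n=...&last=...) on large registries.
for repo in get("/v2/_catalog").get("repositories", []):
    for tag in get(f"/v2/{repo}/tags/list").get("tags") or []:
        manifest = get(f"/v2/{repo}/manifests/{tag}", headers={"Accept": MANIFEST_V2})
        if any(layer["digest"] == TARGET for layer in manifest.get("layers", [])):
            print(f"{repo}:{tag} uses layer {TARGET}")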

Modifying the Top Right Information panel on a Windows 2016 EC2 Windows Instance

I am having an issue with my Amazon EC2 instance. I want to modify the information panel that appears in the top right of the desktop when you access the instance (as displayed in the image below). Is there a way to add lines of information from sources like one of the instance tags?
If someone has a solution to this that would be excellent. Thank you for taking the time to read this.
Example of where the information panel described above appears
I don't think you can modify it as such; it's a feature of the AWS agent preinstalled on your instance. You could disable the feature in EC2Config/EC2Launch and then use a script to recreate the functionality with your custom data. The data displayed is accessible via an HTTP request to the instance metadata service.
You could use a simple script, for example with ImageMagick or PowerShell, to create an image and set a registry key for the new background.
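One possible shape for such a script, sketched here in Python rather than ImageMagick/PowerShell (assuming Pillow is installed on the instance; the metadata fields, image size and file path are illustrative):
# Rough sketch: render selected instance metadata onto an image and set it as
# the desktop wallpaper. Assumes Pillow is installed and the script runs on the
# Windows instance itself; fields and paths are illustrative.
import ctypes
import requests
from PIL import Image, ImageDraw

METADATA = "http://169.254.169.254/latest/meta-data"
FIELDS = ["instance-id", "instance-type", "local-ipv4", "public-ipv4"]
# If IMDSv2 is enforced you would first need to request a session token.
# Instance tags only appear under .../meta-data/tags/instance when "allow tags
# in instance metadata" is enabled; otherwise query the EC2 API for them.

lines = []
for field in FIELDS:
    resp = requests.get(f"{METADATA}/{field}", timeout=2)
    lines.append(f"{field}: {resp.text if resp.ok else 'n/a'}")

# Draw the text block on a plain background (or load your existing wallpaper
# as the base image instead).
img = Image.new("RGB", (1920, 1080), color=(20, 20, 40))
ImageDraw.Draw(img).multiline_text((1450, 40), "\n".join(lines), fill=(255, 255, 255))

wallpaper = r"C:\ProgramData\custom-wallpaper.bmp"   # placeholder path
img.save(wallpaper)

# SPI_SETDESKWALLPAPER = 20; 3 = update the user profile and broadcast the change.
ctypes.windll.user32.SystemParametersInfoW(20, 0, wallpaper, 3)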

How to create a VM Image from an already uploaded VHD in Azure using API

I've created a VHD using Packer and uploaded it to Azure, so it is now available in a storage account. Now I want to create a VM Image out of it, which I can publish in the marketplace, using an API. I have searched the docs and seen PowerShell's Add-AzureVMImage, but I need the same via an API (well, a Ruby library would be perfect).
I've created a VHD using packer and it is available in a storage account
I need to create an image from that VHD. The blog post titled "VM Image" speaks about running a VM and taking a snapshot of that VM as an image, while I don't want that VM creation process.
To be more clear, I need something similar to step 3 in https://learn.microsoft.com/en-us/azure/virtual-machines/linux/classic/create-upload-vhd, but without requiring a local VHD file.
Since you have already uploaded the .vhd file into your storage account, you can run this PowerShell command to create an image out of it:
Add-AzureVmImage -ImageName 'xyz' -Label 'xyz' -MediaLocation 'location of the VHD' -OS Windows
It seems that the REST API "Create a virtual machine image", with the JSON request body from the "Create a virtual machine image from a blob" example shown below, is what you want.
{
  "location": "West US",
  "properties": {
    "storageProfile": {
      "osDisk": {
        "osType": "Windows",
        "blobUri": "https://mystorageaccount.blob.core.windows.net/osimages/osimage.vhd",
        "osState": "generalized"
      }
    }
  }
}
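Calling that REST operation from code is just an authenticated PUT of this body to the ARM images endpoint. A rough Python sketch (the subscription, resource group, image name, bearer token and api-version are placeholders):
# Rough sketch: create a managed VM image from an uploaded VHD blob by PUTting
# the request body above to the ARM images endpoint. All identifiers, the
# bearer token and the api-version are placeholders.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "myResourceGroup"
IMAGE_NAME = "myImage"
TOKEN = "<bearer-token>"        # e.g. obtained with `az account get-access-token`
API_VERSION = "2017-03-30"      # adjust to the compute API version you target

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
       f"/resourceGroups/{RESOURCE_GROUP}"
       f"/providers/Microsoft.Compute/images/{IMAGE_NAME}"
       f"?api-version={API_VERSION}")

body = {
    "location": "West US",
    "properties": {
        "storageProfile": {
            "osDisk": {
                "osType": "Windows",
                "blobUri": "https://mystorageaccount.blob.core.windows.net/osimages/osimage.vhd",
                "osState": "generalized",
            }
        }
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())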
As for the Azure Ruby SDK, I found the create_or_update method of the Azure::ARM::Compute model, but there does not appear to be any sample code.
Hope it helps.
