I'm trying to deploy a large number of Linux machines with the Azure CLI (v2.0.7), using this bash script:
#!/usr/bin/env bash
number_of_servers=12
for i in `seq 1 1 ${number_of_servers}`;
do
az vm create --resource-group Automationsystem --name VM${i} --image BaseImage --admin-username azureuser --size Standard_F4S --ssh-key-value ~/.ssh/mykey.pub &
done
The machines are created from a custom image.
When I run it, I get the following error:
The resource operation completed with terminal provisioning state 'Canceled'.The operation has been preempted by a more recent operation
I tried creating fewer machines, but the error persists.
I looked at this question, but it did not solve my problem.
Can I create the machines from a custom image?
Yes, we can use a custom image to create an Azure VM scale set (VMSS).
We can use this template to deploy a VMSS with a custom image:
"sourceImageVhdUri": {
"type": "string",
"metadata": {
"description": "The source of the blob containing the custom image, must be in the same region of the deployment."
}
},
We should store the image in an Azure storage account first.
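For example, here is a minimal sketch of that flow with the CLI, assuming a hypothetical storage account mystorageacct, a vhds container, and a template file that declares the sourceImageVhdUri parameter shown above:
# Sketch only: upload the custom image VHD to a storage account in the same region as the deployment
az storage blob upload --account-name mystorageacct --container-name vhds \
    --type page --name baseimage.vhd --file ./baseimage.vhd
# Deploy the VMSS template, pointing sourceImageVhdUri at that blob
az group deployment create --resource-group Automationsystem \
    --template-file vmss-from-custom-image.json \
    --parameters sourceImageVhdUri=https://mystorageacct.blob.core.windows.net/vhds/baseimage.vhd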
I set up some AWS EC2 instances with docker-machine on my previous laptop, using commands like this:
docker-machine create --driver amazonec2 --amazonec2-instance-type "t2.micro" --amazonec2-security-group MY_SECURITY_GROUP container-1
On the old laptop, I can still view and control them:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
container-1 - amazonec2 Stopped Unknown
container-2 - amazonec2 Running tcp://xx.xx.xx.xxx:yyyy v20.10.7
container-3 - amazonec2 Stopped Unknown
But on my new laptop, I'm not able to see them:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
I have the AWS environment variables (key/secret) on the new laptop. I've looked at the hidden files on the old laptop to see if there's something that docker-machine uses to store a list of created machines, but I don't see anything.
Is there a command to add these to the new laptop, so I can see and start/stop them?
I found the solution to this. You need to manually copy each machine's directory (from ~/.docker/machine/machines/) on the old laptop to the new one. In the example above, that would be ~/.docker/machine/machines/container-1, ~/.docker/machine/machines/container-2, etc.
In addition, each machine has a config.json that contains absolute paths to the certificates. That config file looks something like this:
{
  "ConfigVersion": 3,
  "Driver": {
    "IPAddress": "4.94.173.4",
    "MachineName": "container-1",
    "SSHUser": "ubuntu",
    "SSHPort": 22,
    "SSHKeyPath": "/Users/USERNAME/.docker/machine/machines/container-1/id_rsa",
    "StorePath": "/Users/USERNAME/.docker/machine",
...
... where USERNAME is your system username. If this username is different between old and new laptops, you'll need to update all the references to the new paths.
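As a rough sketch, assuming macOS paths like those in the example above and hypothetical usernames OLDUSER/NEWUSER, the migration for one machine could look like this:
# Copy the machine's directory from the old laptop (certs, SSH key, config.json)
scp -r OLDUSER@old-laptop:~/.docker/machine/machines/container-1 ~/.docker/machine/machines/
# Rewrite the absolute paths in config.json to point at the new home directory (BSD/macOS sed shown)
sed -i '' 's|/Users/OLDUSER|/Users/NEWUSER|g' ~/.docker/machine/machines/container-1/config.json
# The machine should now show up
docker-machine ls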
I create my Azure Machine Learning workspace using the Azure CLI:
$env="prd"
$instance="001"
$location="uksouth"
$suffix="predict-$env-$location-$instance"
$rg="rg-$suffix"
$ws="mlw-$suffix"
$computeinstance="vm-$suffix".Replace('-','')
$computeinstance
az group create --name $rg --location $location
az configure --defaults group=$rg
az ml workspace create --name $ws
az configure --defaults workspace=$ws
az ml compute create --name $computeinstance --size Standard_DS11_v2 --type ComputeInstance
I run the above code manually in Visual Studio Code, and everything works properly.
However, when I integrate the above into an Azure DevOps pipeline via the YAML:
steps:
- bash: az extension add -n ml
  displayName: 'Install Azure ml extension'
- task: AzureCLI@2
  inputs:
    azureSubscription: "$(AZURE_RM_SVC_CONNECTION)"
    scriptType: 'ps'
    scriptLocation: 'scriptPath'
    scriptPath: './environment_setup/aml-cli.ps1'
The pipeline creates the Azure Machine Learning workspace as expected.
The pipeline also creates the compute instance, which shows a green "Running" status.
However, the compute instance has all applications greyed out. This means I cannot connect to the compute instance via a terminal, notebook, or otherwise, essentially making it useless. The application links in the following screenshot are not clickable:
I attempted:
Specifying brand new resource names.
Creating the workspace and compute in separate pipelines in case of a timing issue.
Deleting the resource group first using:
az group delete -n rg-predict-prd-uksouth-001 --force-deletion-types Microsoft.Compute/virtualMachines --yes
All to no avail.
How do I create a useable Azure Machine Learning compute instance using Azure CLI and Azure DevOps pipelines?
Earlier, only the creator of the compute instance was allowed to run Jupyter, JupyterLab, etc. (check the comments on this issue), but there is now a preview feature that lets you create a compute instance "on behalf of" someone else.
So try passing the object ID of the Azure AD user who will access the compute instance's development tools. The same can be done with ARM templates, using the 'personalComputeInstanceSettings' property.
az ml compute create --name $computeinstance --size Standard_DS11_v2 `
    --type ComputeInstance --user-object-id <aaduserobjectid> --user-tenant-id <aaduserobjecttenantid>
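For completeness, a small sketch of looking those two values up with the CLI before the call above; the user name is a placeholder, and on older CLI versions the user property is objectId rather than id:
# Sketch: resolve the AAD user who will actually use the compute instance (placeholder UPN)
$userObjectId = az ad user show --id someone@contoso.com --query id -o tsv
$userTenantId = az account show --query tenantId -o tsv
az ml compute create --name $computeinstance --size Standard_DS11_v2 `
    --type ComputeInstance --user-object-id $userObjectId --user-tenant-id $userTenantId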
I have .NET Framework and .NET Core containers and I would like to run them in Kubernetes. I have Docker Desktop for Windows installed, with Kubernetes enabled. How can I run these Windows containers in Kubernetes?
This documentation describes how to create a Windows node on Kubernetes, but it is very confusing: I am on a Windows machine, yet I see Linux-based commands in there (and no mention of which OS you need to run them on). I am on a Windows 10 Pro machine. Is there a way to run these containers on Kubernetes?
When I try to create a Pod with Windows Containers, it fails with the following error message "Failed to pull image 'imagename'; rpc error: code = Unknown desc = image operating system 'windows' cannot be used on this platform"
Welcome to Stack Overflow, Srinath.
To my knowledge, you can't run Windows containers on a local Kubernetes cluster at the moment. When you enable the Kubernetes option in your Docker Desktop for Windows installation, the Kubernetes cluster simply runs inside a Linux VM (with its own Docker runtime, for Linux containers only) on the Hyper-V hypervisor.
The other solution is to use a managed Kubernetes offering with Windows nodes from one of the popular cloud providers. Azure is relatively easy to start with (if you don't have an Azure subscription, create a free trial account, valid for 12 months).
I would suggest using the older way of running Kubernetes on Azure, a service called Azure Container Service (ACS), for one reason: I have verified that it works well with Windows containers, especially for testing purposes (I could not achieve the same with its successor, AKS).
Run the following commands in Azure Cloud Shell, and your cluster will be ready to use in a few minutes.
az group create --name azEvalRG --location eastus
az acs create -g azEvalRG -n k8s-based-win -d k8s-based-win --windows --agent-count 1 -u azureuser --admin-password 'YourSecretPwd1234$' -t kubernetes --location eastus
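Once the cluster is up, here is a sketch of connecting to it and scheduling a test Windows container; the get-credentials subcommand is from the classic az acs module (adjust if your CLI version differs), and the IIS image is just an example:
# Fetch the kubeconfig for the new cluster and check that the Windows agent node registered
az acs kubernetes get-credentials -g azEvalRG -n k8s-based-win
kubectl get nodes -o wide
# Run a sample Windows Server Core container (IIS) as a quick smoke test
kubectl run iis-demo --image=microsoft/iis:windowsservercore --port=80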
Here is my JSON file:
# cat s6.json
{
"ImageId": "ami-0b33d91d",
"InstanceType": "i2.xlarge",
"KeyName": "xxx"
}
And I can use this command...
# aws ec2 request-spot-instances --spot-price "1.03" --instance-count 1 --type "one-time" --launch-specification file://s6.json
The above command works as expected. But if I change the image ID to the Windows AMI ami-ab33d3bd, I get this error...
An error occurred (InvalidInput) when calling the RequestSpotInstances operation: Unsupported product.
I can, however, request a regular on-demand instance without any problem. So this command works...
# aws ec2 run-instances --image-id ami-ab33d3bd --count 1 --instance-type i2.xlarge --key-name xxx
Does it mean that Windows instances are not available on spot?
From the EC2 Spot FAQs:
Q. Which operating systems are available as Spot instances?
Linux/UNIX and Windows Server are available. Windows Server with SQL Server is not currently available.
The AMI ami-ab33d3bd is a Windows Server 2008 image with SQL Enterprise, which is not supported for Spot.
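You can confirm this by inspecting the AMI yourself; a quick sketch (output fields vary slightly by CLI version):
# Show the AMI's name, platform and description to spot the bundled SQL Server edition
aws ec2 describe-images --image-ids ami-ab33d3bd \
    --query 'Images[0].[Name,Platform,Description]' --output table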
I have to do some quick benchmarking.
I am unable to log into my VMs since Neutron is not set up properly.
I can create a CentOS VM, but I cannot log into it.
I tried adding a keypair, and I tried changing the root password with cloud-init:
#cloud-config
chpasswd:
  list: |
    root:stackops
    centos:stackops
  expire: False
It does not work. It did not give any errors on the console log, but I am not able to log in with the credentials I set.
So my question is: where can I find an OpenStack CentOS 7 image whose password is already set? (I guess it would be a custom one.)
If Neutron isn't set up correctly, you're not going to be able to do much with your OpenStack environment. However, even with broken networking, you can pass your user-data script to the instance using the --config-drive option, e.g.:
nova boot --user-data /path/to/config.yaml --config-drive=true ...
There is a checkbox in the Horizon GUI to use this feature as well. This attaches your configuration as a virtual CD-ROM device, which cloud-init will use instead of the network metadata service.
If I put your cloud-config into a file called user-data.yaml, and then run:
nova boot --image centos-7-cloud --user-data user-data.yaml centos
Then I can log in as the centos user using the password stackops.
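If you also want to log in over SSH with that password (not just on the console), adding ssh_pwauth to the same cloud-config is usually enough; a sketch, assuming the image uses the usual cloud-init defaults that disable password SSH:
#cloud-config
ssh_pwauth: True    # allow password authentication over SSH (assumption: the image's default disables it)
chpasswd:
  list: |
    root:stackops
    centos:stackops
  expire: False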