How to get detailed VM Size information - azure-sdk-python

I would like to use the Python Azure SDK to find the VM sizes that support Enhanced Networking as well as AVX-512. The method I've seen so far for querying information about VM sizes is ComputeManagementClient.virtual_machine_sizes.list(region), but the information it returns doesn't say whether Enhanced Networking or AVX-512 is supported by each VM size.
This is an example of what one entry of virtual_machine_sizes.list provides:
{'name': 'Standard_M208ms_v2', 'numberOfCores': 208, 'osDiskSizeInMB': 1047552, 'resourceDiskSizeInMB': 4194304, 'memoryInMB': 5836800, 'maxDataDiskCount': 64}
From https://learn.microsoft.com/en-us/rest/api/compute/resourceskus/list it looks like the Resource SKUs list operation might provide the information I'm looking for, but I don't see a way to call that operation from the Python SDK.
I am using version 4.0.0 of Python's azure library. Installed it via:
pip3 install -Iv azure==4.0.0
Thank you in advance for any help you can provide!

If you want to list Azure VM resource SKUs with Python, please refer to the following steps:
Create a service principal and assign the Contributor role to it:
az login
# create a service principal and assign the Contributor role at subscription scope
az ad sp create-for-rbac -n "your service principal name"
Code:
import json

from azure.mgmt.compute import ComputeManagementClient
from azure.common.credentials import ServicePrincipalCredentials

client_id = "sp appId"
secret = "sp password"
tenant = "sp tenant"
credentials = ServicePrincipalCredentials(
    client_id=client_id,
    secret=secret,
    tenant=tenant
)

subscription_id = ''
compute_client = ComputeManagementClient(credentials, subscription_id)

results = compute_client.resource_skus.list()
resource_skus_list = [result.as_dict() for result in results]
print(json.dumps(resource_skus_list))
For more details, please refer to here.
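To tie this back to the original question: each SKU returned by resource_skus.list() carries a capabilities list of name/value pairs, and accelerated (enhanced) networking shows up there as a capability named AcceleratedNetworkingEnabled; as far as I can tell there is no capability for AVX-512, so that part still has to be inferred from the VM family/CPU generation. A rough, untested sketch that reuses compute_client from above (verify the capability names against your own listing):
# Sketch: list VM sizes in a region that report accelerated networking.
# Capability names ('AcceleratedNetworkingEnabled', 'vCPUs', 'MemoryGB') are
# assumptions based on the resource SKUs output; check your own listing.
region = 'eastus'
for sku in compute_client.resource_skus.list():
    if sku.resource_type != 'virtualMachines':
        continue
    if region not in [loc.lower() for loc in (sku.locations or [])]:
        continue
    capabilities = {c.name: c.value for c in (sku.capabilities or [])}
    if capabilities.get('AcceleratedNetworkingEnabled') == 'True':
        print(sku.name, capabilities.get('vCPUs'), capabilities.get('MemoryGB'))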

Related

Delete "Other Contact" using Python with Google People API

I used Google People API v1.otherContacts.copyOtherContactToMyContactsGroup (reference) to copy a contact from "Other Contacts" to "myContacts" contact group. I now want to delete the original contact from "Other Contacts" using the same API.
REST Resource v1.otherContacts (reference) does not list a DELETE action.
I tried using v1.people.deleteContact (reference) passing the resource name of my "Other Contact":
import pickle
from googleapiclient.discovery import build
with open('token.pickle', 'rb') as token:
creds = pickle.load(token)
people_api = build('people', 'v1', credentials=creds)
people_service = people_api.people()
response = people_service.deleteContact(resourceName='otherContacts/c1971897568350947161').execute()
But I got an error saying:
TypeError: Parameter "resourceName" value "otherContacts/c1971897568350947161" does not match the pattern "^people/[^/]+$"
Looks like v1.people.deleteContact does not work for deleting a contact in "Other Contacts".
How can I programmatically delete a contact from "Other Contacts"?
EDIT: Based on @DaImTo's suggestion below, I tried replacing otherContacts/ in the resource name with people/ and invoking the v1.people.deleteContact API, but I got an error saying:
googleapiclient.errors.HttpError: <HttpError 404 when requesting https://people.googleapis.com/v1/people/c1971897568350947161:deleteContact?alt=json returned " generic::NOT_FOUND: Contact person resources are not found.". Details: "[{'#type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'resourceNames[0]', 'description': 'Resource not found.'}]}]">
I advise consulting the documentation for people.deleteContact:
Required. The resource name of the contact to delete.
DELETE https://people.googleapis.com/v1/{resourceName=people/*}:deleteContact
That means it should be people/c1971897568350947161, assuming c1971897568350947161 is the ID of the contact you want to delete.
Looks like Other Contacts are read only, according to this announcement from Google: https://developers.google.com/contacts/v3/announcement
The new People API has the same functionality as the legacy Contacts API for all features, with the following exceptions for “Other Contacts”:
Administrators have read-only permissions for “Other Contacts” through the new scope. As sending mutate/write signals back to “Other Contacts” is not supported, your users will have to add the Other Contact as a My Contact if they wish to update its data fields.

How can I get the list of EC2 instances for an app using the tag method

I am trying to get all the instance (server) IDs based on the app. Let's say I have an app running on a server. How do I know which apps belong to which servers? I want my code to find all the instances (servers) that belong to each app. Is there a way to look up an app in the EC2 console and figure out which servers are associated with it, preferably using tags?
import boto3
client = boto3.client('ec2')
my_instance = 'i-xxxxxxxx'
(Disclaimer: I work for AWS Resource Groups)
Seeing from your comments that you use tags for all apps, you can use AWS Resource Groups to create a group. The example below assumes you tagged your instances with App:Something; it first creates a Resource Group and then lists all the members of that group.
Using this group, you can, for example, automatically get a CloudWatch dashboard for those resources, or use the group as a target in Run Command.
import json

import boto3

RG = boto3.client('resource-groups')

RG.create_group(
    Name='Something-App-Instances',
    Description='EC2 Instances for Something App',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::EC2::Instance'],
            'TagFilters': [{
                'Key': 'App',
                'Values': ['Something']
            }]
        })
    },
    Tags={
        'App': 'Something'
    }
)

# List all resources in a group using a paginator
paginator = RG.get_paginator('list_group_resources')
resource_pages = paginator.paginate(GroupName='Something-App-Instances')
for page in resource_pages:
    for resource in page['ResourceIdentifiers']:
        print(resource['ResourceType'] + ': ' + resource['ResourceArn'])
Another option, if you just want the list without saving it as a group, is to use the Resource Groups Tagging API directly.
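A rough sketch of that approach, assuming the same App=Something tag as above (treat the tag key/value and resource type as placeholders for your own setup):
import boto3

# Resource Groups Tagging API: query resources by tag without creating a group
tagging = boto3.client('resourcegroupstaggingapi')

paginator = tagging.get_paginator('get_resources')
pages = paginator.paginate(
    TagFilters=[{'Key': 'App', 'Values': ['Something']}],
    ResourceTypeFilters=['ec2:instance']
)
for page in pages:
    for mapping in page['ResourceTagMappingList']:
        print(mapping['ResourceARN'])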
What you install on an Amazon EC2 instance is totally up to you. You do this by running code on the instance itself. AWS is not involved in the decision of what you install on the instance, nor does it know what you installed on an instance.
Therefore, you will need to keep track of "what apps are installed on what server" yourself.
You might choose to take advantage of Tags on instances to add some metadata, such as the purpose of the server. You could also use AWS Systems Manager to run commands on instances (eg to install software) or even use AWS CodeDeploy to roll-out software to fleets of servers.
However, even with all of these deployment options, AWS cannot track what you have put on each individual server. You will need to do that yourself.
Update: You can use AWS Resource Groups to view/manage resources by tag.
Here's some sample Python code to list tags by instance:
import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')
instances = ec2_resource.instances.all()
for instance in instances:
    for tag in instance.tags or []:  # instance.tags is None when an instance has no tags
        print(instance.instance_id, tag['Key'], tag['Value'])
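If you want to go straight from a tag to the matching instance IDs, which is what the question asks for, you can also filter on the tag itself. A short sketch, assuming a tag key of App with value Something:
import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')

# Only instances tagged App=Something are returned by the filter
tagged = ec2_resource.instances.filter(
    Filters=[{'Name': 'tag:App', 'Values': ['Something']}]
)
for instance in tagged:
    print(instance.instance_id)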

Fetch boto3 credentials only from EC2 instance profile

The boto3 documentation lists the order in which credentials are searched, and the EC2 instance metadata service is consulted only at the very end.
How do I force boto3 to fetch the credentials only from the EC2 instance profile or the instance metadata service?
I came across this approach, which lets me get the temporary credentials from the metadata service and then pass them on to create a boto3 session.
However, my question is whether there is a better way to do this. Is it possible to create a boto3 session by specifying the provider to use, i.e. InstanceMetadataProvider? I searched the docs a lot but couldn't figure it out.
The reason: the context in which this script runs also has environment variables with AWS keys set, which would obviously take precedence, but I need the script to run only with the IAM role assigned to the EC2 instance.
So I ended up doing this; it works as expected and always uses the temporary credentials from the instance role. The script is short-lived, so the validity of the credentials is not an issue.
import boto3
from botocore.credentials import InstanceMetadataProvider, InstanceMetadataFetcher

provider = InstanceMetadataProvider(iam_role_fetcher=InstanceMetadataFetcher(timeout=1000, num_attempts=2))
creds = provider.load().get_frozen_credentials()
client = boto3.client('ssm', region_name='us-east-1', aws_access_key_id=creds.access_key, aws_secret_access_key=creds.secret_key, aws_session_token=creds.token)
If there is a better way to do this, please feel free to post.
You could also use boto3.
>>> session = boto3.Session(region_name='foo_region')
>>> credentials = session.get_credentials()
>>> credentials = credentials.get_frozen_credentials()
>>> credentials.access_key
u'ABC...'
>>> credentials.secret_key
u'DEF...'
>>> credentials.token
u'ZXC...'
>>> access_key = credentials.access_key
>>> secret_key = credentials.secret_key
It's a similar idea, but I find it returns much faster:
import boto3
import botocore.session

# Move the instance-metadata ('iam-role') provider ahead of the environment
# provider so the instance profile wins even when AWS_* env vars are set.
botocore_session = botocore.session.get_session()
credential_provider = botocore_session.get_component('credential_provider')
instance_metadata_provider = credential_provider.get_provider('iam-role')
credential_provider.insert_before('env', instance_metadata_provider)

boto3_session = boto3.Session(botocore_session=botocore_session)
client = boto3_session.client(...)
resource = boto3_session.resource(...)
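If you want to confirm that the instance profile actually won, you can check which provider the resolved credentials came from; a small sanity check along these lines (the 'iam-role' method name is what the instance-metadata provider reports, as far as I recall):
# Sanity check: see which provider supplied the credentials
creds = botocore_session.get_credentials()
print(creds.method)      # expected to be 'iam-role' when the instance profile is used
print(creds.access_key)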

"Insufficient permissions" on google calendar api's acl.list

I'm getting "Insufficient Permission" when trying to call the acl.list method of the Google Calendar API via Python.
service.acl().list(calendarId='primary').execute();
*** HttpError: <HttpError 403 when requesting https://www.googleapis.com/calendar/v3/calendars/primary/acl?alt=json returned "Insufficient Permission">
I'm using the scope 'https://www.googleapis.com/auth/calendar' as recommended in the documentation. Additionally, other API methods do work, for example service.calendarList:
service.calendarList().list(pageToken=page_token).execute()
What am I missing?
Here is the code I'm using based almost entirely on the sample they provide:
import sys

from oauth2client import client
from googleapiclient import sample_tools


def main(argv):
    # Authenticate and construct service.
    # import pdb;pdb.set_trace()
    service, flags = sample_tools.init(
        argv, 'calendar', 'v3', __doc__, __file__,
        # scope='https://www.googleapis.com/auth/calendar.readonly')
        scope='https://www.googleapis.com/auth/calendar')
    try:
        page_token = None
        while True:
            calendar_list = service.calendarList().list(pageToken=page_token).execute()
            for calendar_list_entry in calendar_list['items']:
                print calendar_list_entry['summary']
            page_token = calendar_list.get('nextPageToken')
            service.acl().list(calendarId='primary').execute();
            if not page_token:
                break
    except client.AccessTokenRefreshError:
        print ('The credentials have been revoked or expired, please re-run'
               'the application to re-authorize.')


if __name__ == '__main__':
    main(sys.argv)
You might have to delete existing credentials, in the form of .json files. I had a similar "Insufficient permissions" problem, and I had to delete stored credentials. I had the additional problem that because of trying out some of Google's scripts in their tutorials, unknowingly I had credentials stored in a hidden .credentials folder in my home directory (users/home). Since they were hidden, I had to look for them through Terminal (on Mac), and delete them there. Once deleted, the problem was solved, since I could create new and proper credentials, suitable for the scope of my new script.
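For example, with the sample_tools-based script above, the stored token usually ends up next to the script as calendar.dat (the exact file name and location depend on how you obtained your credentials, so treat this as an assumption); deleting it forces a fresh authorization with the new scope:
import os

# Assumed token file created by oauth2client's sample_tools; adjust the path
# if your credentials are stored elsewhere (e.g. in ~/.credentials/*.json).
token_file = 'calendar.dat'
if os.path.exists(token_file):
    os.remove(token_file)
    print('Deleted %s - re-run the script to re-authorize with the new scope.' % token_file)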
Something is wrong with your authentication. Insufficient permissions means that you don't have access.
I can verify that the scope https://www.googleapis.com/auth/calendar is enough to display ACL.list on the primary calendar.
You have to find the location of the "calendar-dotnet-quickstart.json" file and delete it. I used the .NET example and had to debug the following code to find the exact location.
string credPath = System.Environment.GetFolderPath(
    System.Environment.SpecialFolder.Personal);
credPath = Path.Combine(credPath, ".credentials/calendar-dotnet-quickstart.json");
Then change the scope as below and rebuild the solution.
string[] scopes = { CalendarService.Scope.Calendar};
You will notice that Google asks you to confirm access again.

Using script to fire Xcode bot

Is there a way to manually fire existing Xcode bots using shell scripts? I have a manual bot and I'd like to fire it based on certain custom logic criteria.
Yes.
You'll need to do a couple of things:
Firstly, I'm going to call your Xcode Server's IP address XCS_IP, usually localhost if you're on the machine where Xcode Server's running.
Find out the ID of the bot: in Terminal, run curl -k "https://XCS_IP:20343/api/bots". Copy the output into an editor and find the value of the key _id for your bot; it will be something like 6b3de48352a8126ce7e08ecf85093613. Let's call it BOT_ID.
Trigger an integration by running curl -k -X POST -u "username:password" "https://XCS_IP:20343/api/bots/BOT_ID/integrations" -i
where username and password are the credentials of a user who is allowed to create bots on the server; an admin will do.
If you're interested in more details, I have an app in Swift that uses that API and many more: https://github.com/czechboy0/Buildasaur/blob/master/BuildaCIServer/XcodeServer.swift#L324
And checkout my article on how to find Xcode Server's API "documentation": http://honzadvorsky.com/blog/2015/5/4/under-the-hood-of-xcode-server.
TL;DR? On your Mac, look at /Applications/Xcode.app/Contents/Developer/usr/share/xcs/xcsd/routes/routes.js, where you can find the available APIs.
Hope this helped.
Apple has added documentation for the Xcode server API that you can use to trigger bots.
https://developer.apple.com/library/tvos/documentation/Xcode/Conceptual/XcodeServerAPIReference/index.html#//apple_ref/doc/uid/TP40016472-CH1-SW1
Below is an example of a Python script that triggers a bot.
import requests

# Base URL of your Xcode Server; requests needs the scheme, and the API listens on port 20343
xcodeIP = 'https://1.2.3.4.5:20343'

def main():
    botName = "name of bot"
    runBot(botName)

def runBot(botName):
    requests.post(xcodeIP + '/api/bots/' + getBot(botName)["_id"] + '/integrations',
                  auth=('username', 'password'), verify=False)

def getBot(botName):
    botIDRequest = requests.get(xcodeIP + '/api/bots', auth=('username', 'password'), verify=False)
    bots = botIDRequest.json()["results"]
    for bot in bots:
        if bot["name"] == botName:
            return bot

if __name__ == "__main__":
    main()
