Programmatically setting instance name with the OpenStack Nova API

I have resigned myself to the fact that many of the features that EC2 users are accustomed to (in particular, tagging) do not exist in OpenStack. There is, however, one piece of functionality whose absence is driving me crazy.
Although OpenStack doesn't have full support for instance tags (like EC2 does), it does have the notion of an instance name. This name is exposed in the web UI, which even allows you to set it.
This name is also exposed through the nova list command line utility.
However (and this is my problem), this field is not exposed through the nova-ec2 API layer. The cleanest way for them to integrate it with existing EC2 platform tools would be to simulate an instance tag named "Name", but they don't do this. What's more, I can't figure out which Nova API endpoint to use to read and write the name (it doesn't seem to be documented in the API reference); but it must be possible somehow, since both the web client and novaclient manage to do it.
At the moment, I'm forced to change it manually from the website every time I launch a new instance. (I can't do it during instance creation because I use the nova-ec2 API, not the nova command line client).
My question is:
Is there a way to read/write the instance name through the EC2 API layer?
Failing that, what is the REST endpoint to set it programmatically?
(BONUS): What is Nova's progress on supporting general instance tagging?

The Python novaclient.v1_1 package has a method on the server object:
def update(self, server, name=None):
    """
    Update the name or the password for a server.

    :param server: The :class:`Server` (or its ID) to update.
    :param name: Update the server's name.
    """
    if name is None:
        return

    body = {
        "server": {
            "name": name,
        },
    }

    self._update("/servers/%s" % base.getid(server), body)
This indicates that you can update the name of a server by sending a PUT request (which is what novaclient's _update helper issues) with the following JSON to http://nova-api:port/v2.0/servers/{server-id}:
{
  "server": {
    "name": "new_name"
  }
}
Of course, all of the usual authentication headers (namely X-Auth-Token
from your Keystone server) are still required, so it is probably easier to
use a client library for whatever language you prefer to manage all that.
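If you would rather issue the request directly, here is a minimal TypeScript sketch using fetch; the host, the port (8774 is the conventional nova-api port), and the token are placeholders for your own deployment:

// Rename a Nova server via the compute REST API.
// Host, port, and token are placeholders for your deployment.
async function renameServer(serverId: string, newName: string, token: string): Promise<void> {
  const response = await fetch(`http://nova-api:8774/v2.0/servers/${serverId}`, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      "X-Auth-Token": token, // issued by Keystone beforehand
    },
    body: JSON.stringify({ server: { name: newName } }),
  });
  if (!response.ok) {
    throw new Error(`Rename failed with HTTP ${response.status}`);
  }
}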

Related

AWS SAM APIGatewayProxyRequestEvent HttpMethod not set when starting API locally

I created an application with an AWS::Serverless::Function resource, which has an HttpApi event attached to it. I thought it would be a good idea to create one Lambda per resource, so that e.g. POST, GET, and PUT on /customer are all handled by a single Lambda, which decides which action to take using
switch (input.getHttpMethod()) {
    case "GET": ...
    case "POST": ...
}
So now coming to my problem: when starting the application using sam local start-api, my lambda gets called correctly, but neither input.getHttpMethod() nor input.getRequestContext().getHttpMethod() is set.
Given that SAM supports multiple HttpApi events, failing to provide the HTTP method when running the application locally all but rules out local development. Am I doing something wrong, or is this really not working? I'm using Java, by the way; I can't tell whether this problem exists with other languages as well.
Just in Case:
Is my "one lambda per resource" approach wrong? Should every single action have its own lambda?
I found the problem. HttpApi uses "PayloadFormatVersion: 2.0" as the default. My input was of type com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent, but that class represents format 1.0; I had to use com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent to get it to work.
This fails silently because event instances are simply constructed from the JSON input, and several fields have the same names in both classes.
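To illustrate why deserialization succeeds while getHttpMethod() stays null, here is a sketch of the same request in both payload formats (TypeScript notation, shapes abridged; the field names reflect my reading of the API Gateway payload documentation, so treat them as assumptions):

// Format 1.0: the method is a top-level httpMethod field, which is what
// APIGatewayProxyRequestEvent maps.
const v1Payload = {
  httpMethod: "GET",
  path: "/customer",
  requestContext: { httpMethod: "GET" },
};

// Format 2.0: there is no top-level httpMethod; the method moves to
// requestContext.http.method, so the 1.0 event class deserializes without
// error but its httpMethod field is never populated.
const v2Payload = {
  version: "2.0",
  routeKey: "GET /customer",
  requestContext: { http: { method: "GET", path: "/customer" } },
};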

MIP SDK: fail to create FileHandler with error "Content protected by on prem servers is unsupported"

We are developing an application to open and edit protected PDF files using the MIP SDK (we're currently using version 1.6.103).
So far, we were able to open files protected with different versions of Microsoft protection, including MicrosoftIRMServices version 1.
We are now hitting a problem with one of our customers. They keep their files in a SharePoint 2016 directory, which is configured to automatically add protection to all uploaded files. Their whole environment is on-premises, and the AD RMS service is used for protection; they do not have Azure Information Protection on the server side.
When we download the resulting file and try to open, we create a mipns::FileEngine and then invoke CreateFileHandlerAsync() to create a mipns::FileHandler. This call fails with the following mipns::NetworkError:
NetworkError : Content protected by on prem servers is unsupported., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://api.aadrm.com/my/v2/enduserlicenses,
As the error suggests, I suspect the issue is the use of on-premises protection.
I thought it might be resolved following the instructions at
https://learn.microsoft.com/en-us/information-protection/develop/quick-app-adrms#configuring-protection-api-in-c-to-use-ad-rms
so, following those instructions, I created the FileEngine with
ProtectionEngine::Settings engineSettings("", authDelegate, "");
engineSettings.SetProtectionCloudEndpointBaseUrl("http://<my server>/_wmcs/licensing");
but so far no success, although the error has changed and is now
NetworkError : The protection service is unavailable., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://<my server>/my/v1/enduserlicenses,
(where of course <my server> is replaced with a local service)
Am I going in the wrong direction? If not, perhaps I am using the wrong endpoint? How can I find the endpoint URL to be passed to SetProtectionCloudEndpointBaseUrl as suggested in the linked page?
Thanks
This is likely caused by a missing MDE install or MDE SRV record. You'll need to validate that the Mobile Device Extension for AD RMS has been deployed and configured. If it has, you'll also need to validate that an SRV record is in place for any mail suffixes your customer is using. For example, if the RMS service is at RMS.FABRIKAM.COM but your customer email addresses are @contoso.com, you'd need an SRV record that looks like _rmsdisco._http._tcp.contoso.com, which would then point to the server at RMS.FABRIKAM.COM.
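In DNS zone-file terms, such a record might look like the following sketch (the TTL and port are illustrative assumptions; adjust them for your environment):

_rmsdisco._http._tcp.contoso.com. 3600 IN SRV 0 0 443 rms.fabrikam.com.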
The base URL isn't used in consumption scenarios; it's only for publishing. That said, it looks like you've set the _wmcs endpoint, but we expect only the base URL for AD RMS:
ProtectionCloudEndpointBaseUrl = "https://rms.contoso.com"
That's only required when you don't provide a mip::Identity object when creating the file engine. If you do provide the identity, we'll use the domain suffix to look up the DNS record and chase that referral.

Extract Graphql queries sent by a browser application with Apollo client

I am trying to simplify the process of exporting GraphQL queries sent by my application for documentation purposes. For the record, I want to be able to paste those queries into Postman collections.
Here are my different approaches:
Relying on .graphql files: first, it's still very difficult to set up with a full-fledged TypeScript + Webpack + Babel stack (using Next.js). In any case, it does not provide the variables, so you only have half the query.
Relying on the network tab. From there, we can copy the query content and import it into Postman. Combined with Cypress, it could provide an awesome workflow. It works OK, but Apollo Client sends queries as JSON objects, which are difficult to interpret.
I tried to use the "application/graphql" content type. It's way more readable and available in Postman. But it is non-standard, and thus not available in Apollo Client.
So my question is rather open: what are the possibilities for extracting the GraphQL queries (and variables) sent by my browser and injecting them into Postman?
The most promising solutions are enabling "application/graphql" client-side, or converting the JSON representation back into a string representation. But I could explore other possibilities (e.g. using Apollo Engine as an intermediary).
A way to do this is to use the apollo CLI tool. It includes a client:extract command that extracts all of the GraphQL operations/documents in your application into a file. You run the tool on your JS(X)/TS(X) files and it extracts the GraphQL documents into a file that looks like this (this output is the result of pointing the tool at a single file containing a single query):
{
  "version": 2,
  "operations": [
    {
      "signature": "b4f318e6aebcc3631bc88eedef09c6001bb8c310917e97ee6df4a99e17c3c056",
      "document": "query BootstrapQuery{user:viewer{__typename delivery_time_1 delivery_time_2 devices{__typename fcm_token id notification{__typename enabled}}has_password id label location name next_delivery_string oauths{__typename email id name picture provider}plan plan_billing_service plan_expires plan_since plan_will_renew profile_picture recovery_email timezone username}}",
      "metadata": {
        "engineSignature": ""
      }
    }
  ]
}
You can then use that file however you want.
In my case, I use this tool to publish an allow-list of operations to Hasura. I'm not sure what you mean by injecting queries into Postman, but I think this tool may provide you with an automated start that would be an improvement over manual copy/pasting.
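As for the network-tab approach the question mentions, converting Apollo Client's JSON POST body back into a readable string is also scriptable. Here is a minimal TypeScript sketch; it assumes Apollo Client's standard { operationName, query, variables } request shape:

// Convert an Apollo Client POST body (copied from the network tab)
// back into a readable query plus variables, ready to paste into Postman.
interface ApolloPostBody {
  operationName?: string;
  query: string;
  variables?: Record<string, unknown>;
}

function toReadable(body: ApolloPostBody): string {
  const variables = JSON.stringify(body.variables ?? {}, null, 2);
  return `${body.query}\n\n# variables\n${variables}`;
}

// Example usage with a copied payload:
console.log(toReadable({
  operationName: "BootstrapQuery",
  query: "query BootstrapQuery { user: viewer { id name } }",
  variables: {},
}));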

How to use Polkadot-js API with Substrate Node Template?

In the Substrate ecosystem, it is common to begin writing a new blockchain node by forking the Substrate Node Template. There are a few options for user interfaces (such as Apps and front-end-template), both of which are based on the same underlying Polkadot-JS API.
Some versions of the API work with some versions of the node template without any custom configuration, but in general the API must be supplied with information about which types the node uses. The process of supplying types is documented at https://polkadot.js.org/api/start/types.extend.html#impact-on-extrinsics, but which types do I need to supply?
A type-incompatible change was introduced to the Substrate node template on March 10th, 2020. I'll use the terms "old" and "new" to refer to before and after this date.
Using the API Directly
If you're using a new node template with the Polkadot-JS API, you will need to use the following types, as documented:
{
  "Address": "AccountId",
  "LookupSource": "AccountId"
}
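For instance, here is a minimal sketch of supplying these types when constructing the API with @polkadot/api (the WebSocket address assumes a node template running locally on its default port):

import { ApiPromise, WsProvider } from "@polkadot/api";

// Connect to a local node template and inject the custom types
// the new runtime expects.
async function connect(): Promise<ApiPromise> {
  const provider = new WsProvider("ws://127.0.0.1:9944");
  return ApiPromise.create({
    provider,
    types: {
      Address: "AccountId",
      LookupSource: "AccountId",
    },
  });
}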
Using a Front End Package
The front-ends mentioned in the question (the Apps UI and the front-end-template) have both been updated in an attempt to ease the life of users. However, if you're trying to use an old node template with a new front-end, or vice versa, you will need to do some custom type injection.
Old node template, Old front end
No custom types necessary
New node template, New front end
No custom types necessary
Old node template, New front end
{
  "Address": "GenericAddress",
  "LookupSource": "Address"
}
New node template, Old front end
{
  "Address": "AccountId",
  "LookupSource": "AccountId"
}
How to Inject Types
In Apps
Go to the Settings tab on the left and the Developer tab on the top. Paste the types in.
In Front End Template
Modify this file https://github.com/substrate-developer-hub/substrate-front-end-template/blob/dff9783e29123f49a19cbc43f5df7ae010c92775/src/config/common.json#L4

How to get the entity edit URL from within a plug-in in MS Dynamics CRM 4.0

I would like to have a workflow create a task, then email the assigned user that they have a new task, including a link to the newly created task in the body of the email. I have client-side code that correctly creates the edit URL, using the entity's GUID, and stores it in a custom attribute. However, when the task is created from within a workflow, the client script isn't run.
So, I think a plug-in should work, but I can't figure out how to determine the URL of the CRM installation. I'm authoring this in a test environment and definitely don't want to have to change things when I move to production. I'm sure I could use a config file, but it seems like the plug-in should be able to figure this out at runtime.
Anyone have any ideas how to access the URL of the crm service from within a plug-in? Any other ideas?
There is no simple way to do this, but there is a way.
MSCRM_CONFIG is the deployment database that holds physical deployment properties, such as the URL from which users access the CRM deployment. The URL you want is the one stored under "ADWebApplicationRootDomain" in the MSCRM_CONFIG.dbo.DeploymentProperties table. You may need extra permissions to access this database.
Note that this doesn't work in an Internet-Facing Deployment.
Another way could be to query the discovery service to retrieve the same information (in case you are on the Online edition of MSCRM 4).
What do you mean by "change things"?
If you create a custom workflow assembly, you can give it a server URL input. Once you register it with CRM, you can simply type in the server URL when you configure the workflow. You'll have to update the URL for any workflows that use the custom workflow assembly once you move to production, but you'll only have to do that once.
My apologies if this is what you meant you wanted to avoid.
Edit: It sounds like you may be able to use the CustomConfiguration attribute when you register the plug-in. Here's some more info:
http://blogs.msdn.com/crm/archive/2008/10/24/storing-configuration-data-for-microsoft-dynamics-crm-plug-ins.aspx
Another option is to read the server URL from the registry on the CRM server:
// Reads the CRM base URL from the registry (requires Microsoft.Win32 and
// only works when the plug-in runs on the CRM server itself).
string url = ((string)Registry.LocalMachine
    .OpenSubKey("Software\\Microsoft\\MSCRM")
    .GetValue("ServerUrl"))
    .Replace("MSCRMServices", "");
