Starting an AWS Instance with API, AUTHPARAMS - amazon-ec2

I am trying to start an AMI using
https://ec2.amazonaws.com/
?Action=StartInstances
&InstanceId.1=i-10a64379
&AUTHPARAMS
as the documentation describes here, but I am unable to find what AUTHPARAMS refers to.
Thanks

As Steffen notes, the SDKs are much easier to use than the direct REST calls (especially the reasonably new Command Line Interface, which is more lightweight and, thanks to its JSON integration, arguably easier to use than the original Command Line Tools).
...but if you are determined:
It's somewhat buried in the documentation, but the following links seem to lead us toward an answer:
1) the high-level description of the "AUTHPARAMS" (as referenced frequently in the API documentation):
AuthParams
The parameters that are required to authenticate a request. Contains:
AWSAccessKeyID
SignatureVersion
Timestamp
Signature
Default: None
Required: Conditional
2) a step by step outline of the parameters needed for a REST request:
3) the detailed outline of the method to derive the "signature" for the "AUTHPARAMS"
This is the example in the documentation (I've added newlines to make it easier to read):
https://elasticmapreduce.amazonaws.com?
AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&
Action=DescribeJobFlows&
SignatureMethod=HmacSHA256&
SignatureVersion=2&
Timestamp=2011-10-03T15%3A19%3A30&
Version=2009-03-31&
Signature=i91nKc4PWAt0JJIdXwz9HxZCJDdiy6cf%2FMj6vPxyYIs%3D
4) Additionally, there is some general information about signatures here.
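To make step 3) concrete, here is a minimal sketch in Go of the Signature Version 2 calculation (the endpoint, API version, and credentials are placeholders; note that real code needs strict RFC 3986 percent-encoding, which url.QueryEscape only approximates):

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "fmt"
    "net/url"
    "sort"
    "strings"
)

// signV2 assembles the canonical string described in the Signature Version 2
// signing process and computes the base64-encoded HMAC-SHA256 over it.
func signV2(secretKey, method, host, path string, params url.Values) string {
    // Sort parameters by byte order and percent-encode each key/value.
    // Caveat: url.QueryEscape encodes spaces as '+', whereas SigV2 requires
    // %20, so a production implementation needs a stricter encoder.
    keys := make([]string, 0, len(params))
    for k := range params {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    pairs := make([]string, 0, len(keys))
    for _, k := range keys {
        pairs = append(pairs, url.QueryEscape(k)+"="+url.QueryEscape(params.Get(k)))
    }

    // String to sign: HTTP verb, host, path, and canonical query string,
    // joined by newlines.
    stringToSign := strings.Join([]string{method, host, path, strings.Join(pairs, "&")}, "\n")

    mac := hmac.New(sha256.New, []byte(secretKey))
    mac.Write([]byte(stringToSign))
    return base64.StdEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
    params := url.Values{}
    params.Set("Action", "StartInstances")
    params.Set("InstanceId.1", "i-10a64379")
    params.Set("AWSAccessKeyId", "AKIAIOSFODNN7EXAMPLE") // placeholder
    params.Set("SignatureMethod", "HmacSHA256")
    params.Set("SignatureVersion", "2")
    params.Set("Timestamp", "2011-10-03T15:19:30Z")
    params.Set("Version", "2011-05-15") // assumed EC2 API version

    sig := signV2("YOUR_SECRET_KEY", "GET", "ec2.amazonaws.com", "/", params)
    fmt.Println("Signature=" + url.QueryEscape(sig))
}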

First and foremost, to interact with the Amazon EC2 API, I highly recommend using one of the available SDKs if possible - this will make your life much simpler, especially once you interact with other AWS products and solutions over time, insofar as the SDKs relieve you of tedious boilerplate code, harmonize API usage across services in general, and handle the authentication process you are asking about in particular.
Now, if you really want/need to handle authentication yourself, you'll find the required information in Query API Authentication, which in turn links to Signature Version 2 Signing Process (the signature version changes over time, which is one of the things the SDKs abstract away, for example).
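For comparison, here is a minimal sketch of the same StartInstances call made through an SDK (aws-sdk-go v1 is assumed; credentials and region come from the usual environment/config chain, so no manual signing is involved):

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    // The session resolves credentials from the environment, shared config,
    // or instance profile - the SDK then signs every request for you.
    sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
    svc := ec2.New(sess)

    out, err := svc.StartInstances(&ec2.StartInstancesInput{
        InstanceIds: []*string{aws.String("i-10a64379")},
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(out)
}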

Related

Fuchsia: how to use a built-in capability in a component

I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
As I understand, using the network (in my case I'd like to utilize fuchsia.net.http.Loader) is a capability, which has to be granted to a running component. Makes sense, that's pretty much the core of the OS.
I also understand that the initiating component, the one that runs my component, needs to grant this capability to my component. That's fair.
What I don't understand, and I'd very much appreciate any additional information (pretty please!), is how I can grant this capability to my component.
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
What am I missing? Thanks in advance!
I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
Thanks for your interest in Fuchsia! First of all, if you haven't already gone through Fuchsia Fundamentals I would strongly suggest that as a starting point for many of the foundational concepts.
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
This is primarily because there isn't necessarily a concept of any set of components or capabilities being "built in" to the system. The capabilities available to components in the system are entirely dependent on the rest of the components in a particular product build and how they are organized (this is called the component topology).
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
The answer has a few sharp edges to it at the moment, as Fuchsia is a rapidly evolving open source project. Hopefully some of the details below will help you move forward.
Determine the capability routes
So you'll have to do a bit of work to figure out where the capability you need is provided and routed. In fact, one of the components exercises shows you how to do this for the fuchsia.net.http.Loader capability. Knowing where a capability is offered/used allows you to determine where your component would need to be instantiated to obtain the necessary capability.
You might also find some of the content in the Connect components developer guide useful in accessing the capability.
Run the component
Knowing where a capability is routed allows you to determine how to run your component. The most straightforward way of instantiating a component in the topology is to do so dynamically using ffx component. However, this requires a collection somewhere on the system with the capabilities you need. The ffx-laboratory realm where most examples are run has a very limited set of capabilities that does not include fuchsia.net.http.Loader.
You'll likely need to add your component statically to the topology using a core realm shard so that the necessary routes can be declared explicitly between the components that offer fuchsia.net.http.Loader and your component. With the component included statically in your product build, you can execute it using ffx component commands.
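As a rough illustration only (the component name, package URL, and the #network source are assumptions, not taken from any real product build), such a core realm shard could look something like this:

// http_client.core_shard.cml - hypothetical core realm shard
{
    children: [
        {
            name: "http_client",
            url: "fuchsia-pkg://fuchsia.com/http_client#meta/http_client.cm",
        },
    ],
    offer: [
        {
            // Route the HTTP loader capability from whichever component
            // actually provides it in your build (assumed here: #network).
            protocol: "fuchsia.net.http.Loader",
            from: "#network",
            to: "#http_client",
        },
    ],
}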
For more details on component execution, check out the Run components developer guide as well.
Run a CLI binary
Since this is a learning exercise, another option is to build your code as a binary that runs within the context of a component that already has the capabilities you need, instead of creating and running an entirely new component. This approach is commonly used for CLI tools. With the ffx component explore command and its --tools argument, you can run your code as a binary inside an existing component that provides the HTTP capability you are looking for, without working through all the capability routing pieces described above.
For more details on ffx component explore, see Explore components.

Supporting multiple versions of Kubernetes APIs in Go program

Kubernetes has a rapidly evolving API and I am trying to find best practices, recommendations, or really any kind of guidance about how to write Go software that gracefully handles supporting its evolving API and supports multiple versions simultaneously. I am sure I am not the first person to attempt this, but so far I have not found any guidance about Kubernetes specifically, and what I have read about polymorphism in Go has not inspired a great solution yet.
Kubernetes is written in Go and provides Go packages like k8s.io/api/extensions/v1beta1 and k8s.io/api/networking/v1beta1. Kubernetes resources, for example Ingress, are first released in one API group (extensions) and as they become more mature, get moved to another API group (networking) and can also change versions (e.g. go from v1beta1 to plain v1). Kubernetes also provides k8s.io/client-go for interacting with a Kubernetes cluster.
I am an experienced object-oriented (and other types of) programmer, but fairly new to Go and completely new to the Kubernetes packages. What I want to accomplish is a program architecture that allows me to write code once and have it work on any version of the Kubernetes resource, at least as long as the resource contains all the features I care about. In a typical object-oriented environment, I would create a base Ingress class and have all these various versions derive from it, and package up operations so that I could just work on Ingress everywhere. My sense is that Go intends for people to take a different approach, and in any case there are complications because of the client/server aspect.
Client/server and APIs
My Go program is a client of the Kubernetes server. Various versions of the server will support various versions of the Kubernetes API, and therefore various versions of the Ingress resource. So my first problem is that I have to do something like this to get a list of all the Ingresses:
ingressesExt, err := il.kubeClient.ExtensionsV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
ingressesNet, err := il.kubeClient.NetworkingV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
I have to gracefully handle errors about the API not being supported. Because the return types are different, AFAIK there is no unified interface where I can just make one call and get the results in a single list. It seems like this is the sort of thing someone should have solved and provided a solution for, but so far I have not found anything.
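As a sketch of what graceful handling could look like (pre-1.18 client-go signatures without context.Context are assumed, matching the calls above; listIngresses is a hypothetical helper), you can treat a NotFound error from an unserved API group as a signal to fall back rather than fail:

package k8singress

import (
    extv1beta1 "k8s.io/api/extensions/v1beta1"
    netv1beta1 "k8s.io/api/networking/v1beta1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listIngresses prefers the networking group and falls back to the legacy
// extensions group when the server does not serve the newer one. Merging
// the two result types is sketched in the next section.
func listIngresses(kubeClient kubernetes.Interface, namespace string) (*netv1beta1.IngressList, *extv1beta1.IngressList, error) {
    netList, err := kubeClient.NetworkingV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
    if err == nil {
        return netList, nil, nil
    }
    // A NotFound error here typically means the group/version is not
    // served by this cluster; anything else is a real failure.
    if !apierrors.IsNotFound(err) {
        return nil, nil, err
    }
    extList, err := kubeClient.ExtensionsV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
    if err != nil {
        return nil, nil, err
    }
    return nil, extList, nil
}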
Type conversion
I also have to find some way to merge ingressesExt and ingressesNet into a single usable list, with an eye toward maintainability/extensibility now that Ingress has graduated to NetworkingV1.
Kubernetes utilities
I see that Kubernetes provides a lot of auto-generated code and utilities, but I have not found a lot of documentation about how to use them. For example, Ingress has functions like
DeepCopy
Marshal
XXX_DiscardUnknown
XXX_Merge
XXX_Unmarshal
Maybe I can use these to do the type conversion? Combine marshal, unmarshal, discard, and merge somehow to take the data from one version and import it into another?
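The XXX_* functions are protobuf plumbing, so they are probably not the tool you want. But because the two Ingress structs are field-for-field compatible, a plain JSON round trip over their shared struct tags can move data between them. A hypothetical sketch (convertIngress is an illustrative helper; the TypeMeta fix-up at the end is an assumption about how you want the result labeled):

package k8singress

import (
    "encoding/json"

    extv1beta1 "k8s.io/api/extensions/v1beta1"
    netv1beta1 "k8s.io/api/networking/v1beta1"
)

// convertIngress copies an extensions/v1beta1 Ingress into the structurally
// identical networking/v1beta1 type via a JSON round trip.
func convertIngress(in *extv1beta1.Ingress) (*netv1beta1.Ingress, error) {
    raw, err := json.Marshal(in)
    if err != nil {
        return nil, err
    }
    out := &netv1beta1.Ingress{}
    if err := json.Unmarshal(raw, out); err != nil {
        return nil, err
    }
    // Relabel the object so downstream code sees the group it expects.
    out.APIVersion = "networking.k8s.io/v1beta1"
    out.Kind = "Ingress"
    return out, nil
}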
Questions
Hopefully you see the issue and understand what I am trying to achieve.
Are there packages from Kubernetes or other open source authors that make some progress in unifying the APIs like I need?
Are any of the Kubernetes auto-generated functions meant for general use (as opposed to internal use) and helpful to my challenge? I have not found documentation for any but DeepCopy.
What is the "Go way" of abstracting out the differences between the various versions of the Ingress object such that I can write the rest of the code to work on any version? Keep in mind that I may need to make another API call for further processing, in which case I would need to know the concrete type of the object and select the right API call. It is not obvious to me that client-go provides any support for such auto-selection of API calls.

Spring HATEOAS: Practicable for a microservice architecture?

I know this question was already asked but I could not find a satisfying answer.
I started to dive deeper into building a real RESTful API and I like its constraint of using links for decoupling. So I built my first service (with Java / Spring) and it works well (although I struggled a bit with finding the right format, but that's another question). After this first step I thought about my real-world use case: microservices. Highly decoupled individual services. So I revisited my previous scenario and came upon some problems and doubts.
SCENARIO:
My setup consists of a reverse proxy (Traefik, which acts as service discovery and API gateway) and two microservices. In addition, there is an OpenID Connect security layer. My services are a Player service and a Team service.
So after auth I have an access token with the userId and I am able to call player/userId to get the player information and teams?playerId=userId to get all the teams of the player.
In my opinion, both responses should link to the opposite service: the player/userId response would link to teams?playerId=userId and vice versa.
QUESTION:
I haven't found a solution besides linking via a hardcoded URL. But this comes with so many drawbacks that I can't imagine it is a solution used in real-world applications. I mean, just imagine your API is a bit more advanced and you have to link to 10 resources. If something changes, you have to refactor and redeploy them all.
Besides the synchronization problem, how do you handle state in such a case? I mean, REST is all about state transfer. So I won't offer the link from the player to the teams service if the player is in no team. Of course I can add the team IDs as an attribute of the player to decide whether to include the link or not. But this again increases coupling between the services.
The more I dive in, the more obstacles I find, and I'm about to just stay with my Spring REST Docs and neglect the core of REST, which is a pity to me.
Practicable for a microservice architecture?
Fielding, 2000
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
Fielding, 2008
REST is intended for long-lived network-based applications that span multiple organizations.
It is not immediately clear to me that "microservices" are going to fall into the sweet spot of "the web". We're not, as a rule, trying to communicate with a microservice that is controlled by another company, and we often don't get a lot of benefit out of caching, or code on demand, or the other REST architectural constraints. How important is it to us that we can use general-purpose components to exchange information between different microservices within our solution? And so on.
If something changes, you have to refactor and redeploy them all.
Yes; and if that's going to be a problem for us, then we need to invest more work up front to define a stable interface between the two. (The fact that we are using "links" isn't special in that regard - if these two things are going to talk to each other, then they are going to need to speak a common language; if that common language needs to evolve over time (likely) then you need to build those capabilities into it).
If you want change over time, then you have to plan for it.
If you want backwards/forwards compatibility, then you have to plan for it.
Your identifiers don't need to be static - there are lots of possible ways of deferring the definition of an identifier; the most obvious being that you can use another identifier to look up the identifier you want, or the formula for calculating it, or whatever.
Think about how Google works - the links they use change all the time, but it doesn't matter, because the protocol (refresh your bookmarked search form, enter your text in "the" one field, click the button) hasn't changed in 20 years. The interface is stable (even though the underlying spellings of the identifiers are not) and that's enough.

System API in MuleSoft

I have a requirement to persist some data in a table (a single table). The data is coming from the UI. Do I need to write just the System API and persist the data, or do I need to write both a Process API and a System API? I don't see a use for a Process API in this case. Please suggest. Is it always necessary to access a System API through a Process API, or can a System API be invoked without a Process API as well?
I would recommend a fine-grained approach to this. We should be following it through the Experience layer even though we do not have much customization to the data.
In short: an Experience layer API directly calling the System layer API (if there is no orchestration / data conversion / formatting needed).
Why do we need a System API and an Experience API? A couple of points.
The System API should be closely attached to the underlying system, so if that system changes in the future, the change should not impact any of the clients.
Secondly, an upper layer gives us the flexibility to apply different SLAs, policies, logging, and lots more to different clients. Even if you have a single client right now, it's better to architect for the future. Reuse is the key advantage of these APIs.
Please check Pattern 2 in this document.
That is a question for the enterprise architect in your organisation. In this case, the process API would probably be a simple proxy for the system API, but that might not always be the case in future. Also, it is sometimes useful to follow a standard architectural pattern even if it creates some spurious complexity in the implementation. As always, there are design trade-offs and the answer will depend on factors that cannot be known by people outside of your organisation.

Test endpoints compliance against openapi contract in Spring Boot Rest

I am looking for a nice way to write tests that make sure the endpoints in a Spring Boot REST (ver. 2.1.9) application follow the contract in an OpenAPI specification.
In the project I recently moved to, the workflow is as follows: architects write the openapi.yml contract, and developers have to implement the endpoints to comply with the contract. Unfortunately, a lot of differences creep in; these tests have to catch such situations, and it is not possible to change this workflow :(
I was thinking about a solution that generates openapi.yml from the current endpoints and compares it somehow, but wonder if there is some out-of-the-box solution.
I was thinking about a solution that generates openapi.yml from the current endpoints and compares it somehow, but wonder if there is some out-of-the-box solution.
In a general case, even the generated spec may not match the actual app behavior because some things can't be expressed with Open API. However, it still could be helpful as a starting point.
Open API provides a way to specify examples that could be used to verify the contract. But the actual schemas might be a better source of expectations.
I want to note two tools that can generate and execute test cases based only on the input Open API spec:
Schemathesis uses both examples and schemas and doesn't require configuration by default. It utilizes property-based testing and verifies properties defined in the tested schema - response codes, schemas, and headers. It supports Open API 2 & 3.
Dredd focuses more on examples and provides several automatic expectations. It supports only Open API 2; support for the third version is experimental.
Both provide a CLI and could be extended with various hooks to fit the desired workflow.
I'd suggest passing the contract (the spec you mentioned) to Schemathesis, and it will verify whether all schemas and examples are handled correctly by your app.
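For example, assuming your app runs locally on port 8080 and the contract lives in openapi.yml, the invocation could look something like this (check schemathesis run --help for the flags in your installed version):

schemathesis run --base-url=http://localhost:8080 openapi.yml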
