Is there any way to see all the functions provided by a contract that has been deployed on NEAR Protocol?

I am trying to make function calls against a deployed contract on NEAR. Documentation is non-existent, as is so often the case.
Is there any way to see all the functions provided by a contract that has been deployed on NEAR Protocol? The contract is on zhiwong5.testnet.

Yes, Stats Gallery lets you see all the methods of a deployed contract on NEAR Protocol, and you can even execute the functions from there.
Here is the link to the methods for your contract: https://stats.gallery/testnet/zhiwong5.testnet/contract?t=week
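If you would rather not depend on a third-party site, the method names can also be recovered from the chain itself: fetch the contract's WASM over RPC and list its exported functions. Below is a minimal sketch, assuming the public testnet RPC endpoint and omitting error handling; it hand-parses just the WASM export section.

import base64, json, urllib.request

RPC = "https://rpc.testnet.near.org"

def view_code(account_id):
    # Ask the RPC node for the deployed contract code (base64-encoded WASM).
    payload = {
        "jsonrpc": "2.0", "id": "1", "method": "query",
        "params": {"request_type": "view_code",
                   "finality": "final", "account_id": account_id},
    }
    req = urllib.request.Request(RPC, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    body = json.loads(urllib.request.urlopen(req).read())
    return base64.b64decode(body["result"]["code_base64"])

def read_uleb128(buf, pos):
    # WASM encodes sizes and counts as unsigned LEB128 varints.
    result = shift = 0
    while True:
        b = buf[pos]; pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def exported_functions(wasm):
    assert wasm[:4] == b"\0asm"
    pos, names = 8, []  # skip the 4-byte magic and 4-byte version
    while pos < len(wasm):
        section_id = wasm[pos]; pos += 1
        size, pos = read_uleb128(wasm, pos)
        end = pos + size
        if section_id == 7:  # export section
            count, pos = read_uleb128(wasm, pos)
            for _ in range(count):
                name_len, pos = read_uleb128(wasm, pos)
                name = wasm[pos:pos + name_len].decode(); pos += name_len
                kind = wasm[pos]; pos += 1
                _, pos = read_uleb128(wasm, pos)  # export index, unused here
                if kind == 0:  # 0x00 marks a function export
                    names.append(name)
        pos = end
    return names

print(exported_functions(view_code("zhiwong5.testnet")))

This lists every exported function, which for NEAR contracts corresponds to the callable methods. It does not tell you whether a method is view or change, nor what arguments it takes; for that you still need the contract source or docs.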

Related

Is it possible to have a multi-endpoint REST API on Google Cloud Functions? (AWS Lambda migration to GCF)

My company has been using AWS Lambda for many years to run our Spring Boot REST API. We are migrating to GCP, and they want me to deploy our code to GCF the same way we did with AWS Lambda, but I am not sure that GCF works that way.
According to Google, Cloud Functions is only good for single endpoints and can only work as a web server using the Functions Framework.
Spring has a document that uses the GcfJarLauncher, but that is still in alpha and I can only get it to work for a single endpoint. Any additional functions I put into the code are ignored and every endpoint triggers the same function.
There were some posts here on SO that talked about using Functional Beans to map to multiple functions, but I couldn't fully get it working and my boss isn't interested in that.
I've also read of people putting the endpoint in the request payload and then mapping to the proper function, but we are not interested in doing that either.
TL;DR / Conclusion:
Is it even possible to deploy our app to GCF or do we need to use Cloud Run (as Google suggests in my first link)?
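For reference, the path-dispatch workaround mentioned above (one deployed function that routes internally) looks roughly like this with the Python Functions Framework; the routes and handlers are hypothetical:

import functions_framework

def get_users(request):
    return {"users": []}

def get_orders(request):
    return {"orders": []}

# One deployed entry point; we dispatch on the request path ourselves.
ROUTES = {"/users": get_users, "/orders": get_orders}

@functions_framework.http
def api(request):
    handler = ROUTES.get(request.path)
    if handler is None:
        return ("Not found", 404)
    return handler(request)

If that pattern is off the table, Cloud Run (which the question already mentions) runs the whole web server as-is and is the closer fit for a multi-endpoint Spring Boot app.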

How can I make authorized requests to a secured API endpoint from cronjobs?

I have a Golang application that uses API authorization via JWT tokens.
I am using Kubernetes, so this Golang app runs in a Pod.
Now, I want to create another application for cronjobs to hit the Golang endpoint once a week.
What I need:
How do I handle, or skip, the authorization?
Skip: an Ingress is not required here, as I can simply call the service internally. Can that help in this case?
What I Tried:
I tried keeping the cronjobs and the API in the same application so I can simply call the service instead of the endpoint, but that also has a drawback:
I am not able to create replicas, as they would also replicate the cronjobs, and the same endpoint would be hit once per replica.
I want to call the "abc.com" endpoint once a week. It requires a token, and I cannot simply pass a token.
I hope there is some way around this.
If you just have to call them internally without exposing them, it can certainly help.
Provided both Pods (and therefore Deployments) are running in the same cluster, you can use Kubernetes' internal DNS.
K8s automatically creates DNS records for the Services you create, which can be used for internal communication, following this specific format: <service-name>.<service-namespace>.svc.cluster.local
More information in the official docs: DNS for Services and Pods
If it sounds weird, or to help understand the gist of it, think of the "endpoint" as a rule you add to your system's hosts file: it boils down to basically adding a rule where <service-name>.<service-namespace>.svc.cluster.local points to your Pod's IP address, except it's done automatically.
E.g.
Your golang app is running inside a Pod.
You created a Service pointing to it, named go-api and under the namespace go-apps.
If your cron-job worker is running in a Pod inside the same cluster, you can use go-api.go-apps.svc.cluster.local[:<port>] to reach your app without using an Ingress
The authorization is up to you, since you're usually handling it either directly or by using specific frameworks.
You could, for example, add a custom endpoint path inside your app where you make sure that the only accepted clients come from the same private IP subnet as your cluster, either without a token (not recommended) or with a specific semi-fixed one that you generate and control. Your crons would then send a request to something like: go-api.go-apps.svc.cluster.local:8080/api/v1/callWithNoAuth
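As a minimal sketch of what the weekly CronJob could actually run (the service name, namespace, port, and path reuse the hypothetical ones above):

import urllib.request

# Internal DNS name of the hypothetical go-api Service in the go-apps namespace.
INTERNAL_URL = "http://go-api.go-apps.svc.cluster.local:8080/api/v1/callWithNoAuth"

def main():
    req = urllib.request.Request(INTERNAL_URL, method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        print("status:", resp.status)

if __name__ == "__main__":
    main()

A Kubernetes CronJob with schedule "0 0 * * 0" running this script in a Pod inside the cluster would hit the endpoint once a week without any Ingress.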
I am creating a new answer as I feel I have more points to contribute to this.
So what I eventually ended up doing is creating a middleware for internal API calls. Inside this middleware, I authorize the caller, i.e. I check whether the request comes from "xyz-service:port".
I found that bringing in a third-party authenticator was overkill, and keeping a token in code was riskier.
Since all the services except for the Ingress are of type ClusterIP, no one can access them from outside directly.
Thanks to @LeoD's answer; it gave me the confidence to use this method.
P.S. A simple service-name:port can suffice, e.g. http://myservice:4000/api
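For illustration, here is a rough version of that middleware idea in Python/Flask (the asker's app is in Go, but the check has the same shape; the service name, namespace, and path prefix are hypothetical):

import socket
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical internal caller. Note: this assumes xyz-service is a headless
# Service, so its DNS name resolves to the backing Pods' IPs; with a plain
# ClusterIP Service the caller's source IP would be its Pod IP instead.
ALLOWED_CALLERS = ["xyz-service.default.svc.cluster.local"]

def allowed_ips():
    ips = set()
    for host in ALLOWED_CALLERS:
        try:
            ips.update(info[4][0] for info in socket.getaddrinfo(host, None))
        except socket.gaierror:
            pass  # not resolvable right now; treat as not allowed
    return ips

@app.before_request
def internal_only():
    # Gate only the internal API surface; public routes pass through untouched.
    if request.path.startswith("/internal/") and request.remote_addr not in allowed_ips():
        abort(403)

@app.route("/internal/weekly-task", methods=["POST"])
def weekly_task():
    return {"ok": True}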

Create a GCS V4 signed URL via Google Cloud Workflows

Before I conclude that I can't do this with google cloud workflows alone, I just wanted to check with the community that I'm not missing anything...
I have a Google Cloud Workflows program which exports data from BigQuery to GCS and then sends an email to a user with a URL in the body of the email. I want this URL to be signed.
The gcloud CLI and the language-specific libraries all come with nice helpers to do this, but I can't access any of them directly from Google Cloud Workflows. I considered implementing my own sub-workflow to perform the logic described in the "signing URLs manually" documentation, but I don't think I can do this with Workflows alone. (I could easily create some Cloud Function which I call, and in that case I could just use the helper from the Python SDK, for example, but I'm trying to avoid that.) The following operations from the Python example are blockers: logic that I believe I can't do from Google Cloud Workflows alone, unless anyone knows of public web services that I can call to get around this.
canonical_request_hash = hashlib.sha256(canonical_request.encode()).hexdigest()
signature = binascii.hexlify(google_credentials.signer.sign(string_to_sign)).decode()
Everything else I could just about do in a fairly long and drawn out sub-workflow... but it would be possible.
Cloud Workflows does not natively support hashing and RSA-signing libraries in its standard library, and those are core requirements of the GCS URL-signing algorithm.
As also advised in the public docs, Cloud Workflows and sub-workflows should primarily be used as an orchestration layer: invoking services, parsing responses, and constructing inputs for other connected services. Services (like Cloud Functions, Cloud Run, etc.) should be created to perform any work that is too complex for Workflows or any operations that are not natively supported by Workflows expressions and its standard library.
The solution for the above use case is to either:
a) create a service (triggered from the workflow), such as a Cloud Function, to generate the signed GCS URLs;
or b) generate the GCS signed URL as an independent task, outside and after the execution of the core workflow operation, as shown in this sample.
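A minimal sketch of option (a): an HTTP Cloud Function that the workflow can call with http.post, using the Python client's built-in V4 signing helper. The bucket and object names are hypothetical, and note that the function's service account needs either a private key or the iam.serviceAccounts.signBlob permission to sign.

import datetime
import functions_framework
from google.cloud import storage

@functions_framework.http
def sign_gcs_url(request):
    params = request.get_json(silent=True) or {}
    bucket_name = params["bucket"]  # e.g. "my-export-bucket" (hypothetical)
    object_name = params["object"]  # e.g. "exports/report.csv" (hypothetical)
    blob = storage.Client().bucket(bucket_name).blob(object_name)
    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(hours=24),
        method="GET",
    )
    return {"signed_url": url}

The workflow then just does an http.post to this function's URL with the bucket and object in the body, and drops the returned signed_url into the email.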

What are the Istio alternatives for Authentication Policy, and what is the Istio flow for development?

At this URL you can have a look at my project, just to have some context:
https://github.com/Deviad/clarity/tree/feature/hyperledger
Long story short, I am building an open-source framework for building escrows that can take advantage of the latest PSD2: https://www.openbankingtracker.com/
It supports cryptocurrency payments and implements a sort of side chain in order to have proof that a contract was signed.
Basically, of all the things that Istio does, what I really need is the Authentication Policy using JWT.
This is in order to avoid writing this part in every microservice that I am creating.
Of course, the gateway is also something important.
The main issue is that, while developing in my IDE (IntelliJ IDEA), I have no idea how to avoid having to stop, rebuild, and restart containers every single time I need to rebuild. Once I use Istio, I will need to use Istio in development too; otherwise I would have to write some dummy services that fake the authorization from Istio whenever I want to check whether a certain user has permission to access a resource.
What possibilities do I have for a lean workflow with Istio, and what alternatives to Istio are there?
As for the workflow part of my question, I have found a possible solution:
https://garden.io
There is a nice workshop available here:
https://www.youtube.com/watch?v=Xfi9XqcZ76M

East/West communication in an AWS serverless microservice architecture

I am well aware of the fact that east/west, or synchronous service-to-service, communication is not the gold standard and should only be used sparingly in a microservice architecture. However, every real-world implementation of a microservice architecture I have seen has some use cases which require it. For example, the user service often needs to be called by other services to get up-to-the-millisecond details on the user (I'm aware that event-based sharing of that data is also a possibility, but in some cases that isn't the right approach).
My question is, what is the best way to do function to function, service to service communication in a Lambda + API Gateway style architecture?
My guess is that making an HTTP request back out to the domain name is not ideal, since it will require going back out over the internet to resolve DNS.
Is the answer to use the SDK to invoke the downstream function directly? Will this cause issues if the downstream function depends on an API Gateway proxy event structure?
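One common approach (not confirmed by an accepted answer here) is exactly that direct invoke, with the caller synthesizing the proxy-event shape the downstream handler expects. A rough boto3 sketch; the function name and event fields are hypothetical:

import json
import boto3

lambda_client = boto3.client("lambda")

def call_user_service(user_id):
    # Minimal API Gateway (REST, v1) proxy-style event; include whichever
    # fields the downstream handler actually reads.
    event = {
        "httpMethod": "GET",
        "path": f"/users/{user_id}",
        "headers": {"Content-Type": "application/json"},
        "queryStringParameters": None,
        "body": None,
        "isBase64Encoded": False,
    }
    resp = lambda_client.invoke(
        FunctionName="user-service",       # hypothetical downstream function
        InvocationType="RequestResponse",  # synchronous call, no API Gateway hop
        Payload=json.dumps(event).encode(),
    )
    result = json.loads(resp["Payload"].read())
    return json.loads(result["body"]) if result.get("body") else None

This skips DNS and the public internet entirely, but it couples the caller to the downstream event shape, which is the trade-off the question hints at.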
