Consul configuration to automatically call back a REST API on service events

Is it possible to configure Consul to send a "POST" callback to a REST API every time a service status is updated?
I've found the "Watch" feature in the documentation (https://www.consul.io/docs/dynamic-app-config/watches#http-endpoint), but it wasn't clear to me whether this feature makes Consul call an API automatically when a service event occurs.
If someone knows how to do this, I will be very thankful!

I’ve found a solution:
In agent command (to start Consul):
sudo consul agent --dev --client 0.0.0.0 --config-file ./path/consul-agent.json
In consul-agent.json file:
{
  "watches": [
    {
      "type": "services",
      "handler_type": "http",
      "http_handler_config": {
        "path": "https://localhost/routetocall",
        "method": "POST",
        "header": { "Authentication": ["something"] },
        "timeout": "10s",
        "tls_skip_verify": true
      }
    }
  ]
}
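With this watch in place, Consul sends a POST to the configured path whose body is the current result of the services query, i.e. a JSON map of service names to their tags. As a rough illustration only (the service names and tags below are made up, your catalog will differ), the body delivered to the endpoint looks like:
  {
    "consul": [],
    "web": ["primary", "v1"]
  }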

Cause of "Browser needs to be launched with the global proxy" Playwright error

I'm running a Playwright test that makes a request to http://localhost:3000/somePage and wanted to run the request through a proxy (the Fiddler proxy, so I can inspect the traffic, but that's beside the point).
In my playwright.config.ts I have:
projects: [
  {
    name: 'chromium',
    use: {
      ...devices['Desktop Chrome'],
      proxy: {
        server: 'http://127.0.0.1:8888'
      }
    },
  },
]
The proxy key is what I added to what was already in the config file generated by Playwright when I set up the project.
When I run my test, I get the following error and the test fails to run:
browser.newContext: Browser needs to be launched with the global proxy. If all contexts override the proxy, global proxy will be never used and can be any string, for example "launch({ proxy: { server: 'http://per-context' } })"
A search online turns up little other than a couple of GitHub issues that were closed a long time ago. It seems like it's complaining that it should use the proxy, but only... when I tell it to use the proxy.
When I remove the proxy from the config, the test runs just fine. What am I missing?
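One thing worth trying, suggested by the wording of the error itself: give the browser a launch-level proxy so that the per-context proxy is allowed to override it. A sketch of that workaround in playwright.config.ts (untested here; the placeholder server string comes straight from the error message):
  use: {
    ...devices['Desktop Chrome'],
    // context-level proxy that the test traffic should actually go through
    proxy: { server: 'http://127.0.0.1:8888' },
    // launch-level placeholder proxy, as hinted at by the error text
    launchOptions: {
      proxy: { server: 'http://per-context' }
    }
  },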

How to debug quarkus lambda locally

I am a beginner with Quarkus Lambda, and when I look for how to debug a Quarkus Lambda, everyone shows it with REST API endpoints. Is there any way to debug the Quarkus app using the Lambda handler?
I know how to start the app in dev mode, but I am struggling with invoking the handler method.
You can use the SAM CLI for local debugging and testing. Here is the official documentation from Quarkus.
It's really important that you follow the sequence.
Step-1:
sam local start-api --template target/sam.jvm.yaml -d 5005
Step-2:
Hit your API using your favourite REST client (see the example after Step-3).
Step-3:
Add a Remote JVM debug configuration in your IDE, set your breakpoints, and start debugging.
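For example, assuming the function is exposed at a /hello path (the path here is illustrative, not taken from the question), the Step-2 call can be as simple as the following, since sam local start-api serves the API on 127.0.0.1:3000 by default:
  curl http://127.0.0.1:3000/hello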
You can actually just add a Main class and set up a usual Run Configuration.
import io.quarkus.runtime.annotations.QuarkusMain;
import io.quarkus.runtime.Quarkus;

@QuarkusMain
public class Main {
    public static void main(String... args) {
        System.out.println("Running main method");
        Quarkus.run(args);
    }
}
After that, just use curl or Postman to invoke the endpoint.
By default, the lambda handler starts on port 8080.
You can override it by passing
-Dquarkus.lambda.mock-event-server.dev-port=9999
So the curl will look like:
curl -XGET "localhost:9999/hello"
if the definition of the resource class looks like:
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello jaxrs";
    }
}
Add a breakpoint in the Resource class and start the Main class in Debug mode. Execution will actually pause on the breakpoint while debugging.
You can just run mvn quarkus:dev and connect a remote debugger to it on port 5005.
Once Quarkus is started in dev mode and the remote debugger is attached, you can use Postman to send a request, and your breakpoints will be hit.
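A minimal sketch of that flow on the command line (the 5006 value is only an illustration of overriding the default debug port via the debug system property):
  # start dev mode; Quarkus listens for a remote debugger on port 5005 by default
  mvn quarkus:dev
  # optionally pick a different debug port
  mvn quarkus:dev -Ddebug=5006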

Bad request when deploying smart contract

So I'm currently trying to deploy a router smart contract. I've been building it through erdpy contract build, which has been successful (I'm on the Rust nightly toolchain, as the smart contract needs it). I am now trying to deploy it, but I can't manage to do it: I keep getting a 400 Bad Request from https://devnet-api.elrond.com/transaction/send.
Here are the logs from the deployment:
erdpy contract deploy
INFO:accounts:Account.sync_nonce()
INFO:accounts:Account.sync_nonce() done: 0
INFO:cli.contracts:Contract address: erd1qqqqqqqqqqqqqpgqzqv7kk893c3ftwgaekvvv9whpqcfn4kazqxq3mud36
INFO:transactions:Transaction.send: nonce=0
CRITICAL:cli:Proxy request error for url [https://devnet-api.elrond.com/transaction/send]: {'statusCode': 400, 'message': 'Bad Request'}
And here is erdpy.json used to configure the command:
{
  "configurations": {
    "default": {
      "proxy": "https://devnet-api.elrond.com",
      "chainID": "D"
    }
  },
  "contract": {
    "deploy": {
      "verbose": true,
      "bytecode": "output/router.wasm",
      "recall-nonce": true,
      "nonce": 1,
      "pem": "../../../wallets/owner/wallet-owner.pem",
      "gas-limit": 600000000,
      "send": true,
      "outfile": "deploy-testnet.interaction.json"
    }
  }
}
The contract I'm trying to deploy is the following. I've also been through the OpenAPI spec and the documentation searching for an answer, but there is nothing about it. This route normally returns an error message, but in this specific case it does not.
Other contracts like ping-pong work properly with the same erdpy.json config.
After talking with someone who was interested in this issue, I ended up with the following command:
erdpy --verbose contract deploy --project=$PROJECT_NAME --pem="wallet-owner.pem" --gas-limit=600000000 --proxy="https://devnet-gateway.elrond.com" --outfile="elrond.workspace.json" --recall-nonce --send --chain="D"
Replace $PROJECT_NAME with the folder of your contract (you need to be one level above your smart contract folder).
It won't use the elrond.json file, but I guess you can move the file up to make the command use it.
Have you tried to deploy with the --verbose argument?
It should be something like this (not sure of the syntax because I am on my phone):
erdpy --verbose contract deploy
I was getting the "bad request" error too, and I worked out that for me this was because my wallet was empty. To add xEGLD to your devnet wallet:
Go to https://devnet-wallet.elrond.com/faucet
Log in using your pem file / whatever you normally use to log in
Click the "Faucet" option from the left hand menu
This should pop up a modal to add 10 xEGLD to your wallet (You can request 10 xEGLD every 24 hours)
Now you can return to the terminal and run erdpy contract deploy
This worked for me, and now I'm getting the correct output.
In the suggested erdpy.json from the Elrond docs there is a "chainID": "D" variable inside the configurations.default object.
Delete this and add "chain": "D" inside contract.deploy instead.
Example
{
  "configurations": {
    "default": {
      "proxy": "https://devnet-api.elrond.com",
      "chainID": "D"          <----- Delete this
    }
  },
  "contract": {
    "deploy": {
      <Other fields>
      "chain": "D"            <----- Add this
    }
  }
}
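Putting the two changes together, the corrected erdpy.json from the question would look roughly like this (same values as above, with chainID moved into contract.deploy and renamed to chain):
  {
    "configurations": {
      "default": {
        "proxy": "https://devnet-api.elrond.com"
      }
    },
    "contract": {
      "deploy": {
        "verbose": true,
        "bytecode": "output/router.wasm",
        "recall-nonce": true,
        "nonce": 1,
        "pem": "../../../wallets/owner/wallet-owner.pem",
        "gas-limit": 600000000,
        "send": true,
        "outfile": "deploy-testnet.interaction.json",
        "chain": "D"
      }
    }
  }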

Execution failed due to configuration error: Invalid permissions on Lambda function

I am building a serverless application using AWS Lambda and API Gateway via Visual Studio. I am working in C#, and using the serverless application model (SAM) in order to deploy my API. I build the code in Visual Studio, then deploy via publish to Lambda. This is working, except every time I do a new build, and try to execute an API call, I get this error:
Execution failed due to configuration error: Invalid permissions on Lambda function
Doing some research, I found this fix mentioned elsewhere (to be done via the AWS Console):
Fix: went to API Gateway > API name > Resources > Resource name > Method > Integration Request > Lambda Function and reselected my existing function, before "saving" it with the little checkmark.
Now this works for me, but it breaks the automation of using the serverless.template (JSON) to build out my API. Does anyone know how to fix this within the serverless.template file, so that I don't need to take action in the console? Here's a sample of one of my methods from the serverless.template file:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "An AWS Serverless Application.",
  "Resources" : {
    "Get" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "VpcConfig": {
          "SecurityGroupIds" : ["sg-111a1476"],
          "SubnetIds" : [ "subnet-3029a769", "subnet-5ec0b928" ]
        },
        "Handler": "AWSServerlessInSiteDataGw::AWSServerlessInSiteDataGw.Functions::Get",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaBasicExecutionRole", "AWSLambdaVPCAccessExecutionRole", "AmazonSSMFullAccess" ],
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/",
              "Method": "GET"
            }
          }
        }
      }
    },
You may have an issue in your permission config; that's why API Gateway couldn't call your Lambda. Try explicitly adding to your template.yaml file an invoke permission on your Lambda with API Gateway as the principal. Here's a sample below:
ConfigLambdaPermission:
  Type: "AWS::Lambda::Permission"
  DependsOn:
    - MyApiName
    - MyLambdaFunctionName
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref MyLambdaFunctionName
    Principal: apigateway.amazonaws.com
Here's the issue that was reported in the SAM GitHub repo for complete reference, and here is an example hello SAM project.
If you would like to add the permission via the AWS CLI for testing things out, you may want to use aws lambda add-permission. Please visit the official documentation website for more details.
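As a rough sketch of that CLI route (the function name, statement id, and source ARN below are placeholders, not values from the question):
  aws lambda add-permission \
    --function-name MyLambdaFunctionName \
    --statement-id apigateway-invoke \
    --action lambda:InvokeFunction \
    --principal apigateway.amazonaws.com \
    --source-arn "arn:aws:execute-api:us-east-1:123456789012:abcde12345/*/GET/"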
I had a similar issue - I deleted then re-installed a lambda function. My API Gateway was still pointing at the old one, so I had to go into the API Gateway and change my Resource Methods to alter the Integration Request setting to point to the new one (it may look like it's pointing to the correct one but wasn't in my case)
I was having the same issue but I was deploying through Terraform. After a suggestion from another user, I reselected my Lambda function in the Integration part of the API Gateway, and then checked what changed in my Lambda permissions. Turns out I needed to add a "*" where I was putting the stage name in the source_arn section of the API Gateway trigger in my Lambda resource. Not sure how SAM compares to Terraform but perhaps you can change the stage name or just try this troubleshooting technique that I tried.
My SO posting: AWS API Gateway and Lambda function deployed through terraform -- Execution failed due to configuration error: Invalid permissions on Lambda function
Same error, and the solution was simple: clearing and applying the "Lambda Function" mapping again in the integration setting of the API Gateway.
My mapping looks like this: MyFunction-894AR653OJX:test where "test" is the alias to point to the right version of my lambda
The problem was caused by removing the ALIAS "test" on the lambda, and recreating it on another version (after publishing). It seems that the API gateway internally still links to the 'old' ALIAS instance.
You would expect that the match is purely done on name...
Bonus: so, via the AWS console you cannot move that ALIAS, but you can do this via the AWS CLI, using the following command:
aws lambda --profile <YOUR_PROFILE> update-alias --function-name <FUNCTION_NAME> --name <ALIAS_NAME> --function-version <VERSION_NUMBER>
I had the same issue. I changed the integration to mock first, i.e. unsetting the integration type from Lambda, and then after one deployment, set the integration type to Lambda again. It worked flawlessly thereafter.
I hope it helps.
Facing the same issue, I figured out the problem: API Gateway was not able to invoke the Lambda function, as I couldn't see any CloudWatch logs for the Lambda function.
So firstly I went to the API Gateway console and, under the Integration Request, gave the full ARN for the Lambda function, and it started working.
Secondly, through CloudFormation:
x-amazon-apigateway-integration:
  credentials:
    Fn::Sub: "${ApiGatewayLambdaRole.Arn}"
  type: "aws"
  uri:
    Fn::Sub: "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${lambda_function.Arn}/invocations"
I had the same problem so I deleted then created the stack and it worked.
Looks like "Execution failed due to configuration error: Invalid permissions on Lambda function" is a catch all for multiple things :D
I deployed a stack with CloudFormation templates and hit this issue.
I was using the stage name in the SourceArn of the AWS::Lambda::Permission segment.
When I changed that to a *, AWS was a bit more explicit about the cause, which in my case happened to be an invalid Handler reference (I was using Java and the handler had moved package) in the AWS::Lambda::Function section.
Also, when I hit my API GW I got this message:
{
  "message": "Internal server error"
}
It was only when I was at the console and sent through the payload as a test for the resource that I got the permissions error.
If I check the CloudWatch logs for the API GW (once I had configured them), they do indeed mention the true cause, even when the stage name is explicit:
Lambda execution failed with status 200 due to customer function error: No public method named ...
In my case, I got the error because the Lambda function had been renamed. Double-check your configuration just in case.
Technically, the error message was correct—there was no function, and therefore no permissions. A helpful message would, of course, have been useful.
I had a similar problem and was using Terraform. It needed the policy with the "POST" in it; for some reason the /*/ (wildcard) policy didn't work.
Here are the policy and the example Terraform I used to solve the issue.
Many thanks to all above.
Here is what my Lambda function policy JSON looked like, followed by the Terraform:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "AllowAPIGatewayInvoke",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:999999999999:function:MY-APP",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:execute-api:us-east-1:999999999999:d85kyq3jx3/test/*/MY-APP"
        }
      }
    },
    {
      "Sid": "e841fc76-c755-43b5-bd2c-53edf052cb3e",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:999999999999:function:MY-APP",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:execute-api:us-east-1:999999999999:d85kyq3jx3/*/POST/MY-APP"
        }
      }
    }
  ]
}
Add it in Terraform like this:
//************************************************
// Allows you to read in the ARN and parse out needed info, like region and account.
//************************************************
data "aws_arn" "api_gw_deployment_arn" {
  arn = aws_api_gateway_deployment.MY-APP_deployment.execution_arn
}

//************************************************
// Add this in to support API GW testing in the AWS Console.
//************************************************
resource "aws_lambda_permission" "apigw-post" {
  statement_id  = "AllowAPIGatewayInvokePOST"
  action        = "lambda:InvokeFunction"
  //function_name = aws_lambda_function.lambda-MY-APP.arn
  function_name = module.lambda.function_name
  principal     = "apigateway.amazonaws.com"
  // "arn:aws:execute-api:us-east-1:473097069755:708lig5xuc/dev/POST1/cloudability-church-ws"
  source_arn    = "arn:aws:execute-api:${data.aws_arn.api_gw_deployment_arn.region}:${data.aws_arn.api_gw_deployment_arn.account}:${aws_api_gateway_deployment.MY-APP_deployment.rest_api_id}/*/POST/${var.api_gateway_root_path}"
}
The documentation for AWS Lambda resource permissions shows 3 levels of access you can filter or wildcard, /*/*/*, which is documented as $stage/$method/$path. However, their example and most examples online only use 2 levels, and I was bashing my head against the wall using 3, only to get Access Denied. I changed down to 2 levels and Lambda then created the trigger. Hopefully this will save someone from throwing their computer against the wall.
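For illustration only (the execution_arn reference below is a hypothetical placeholder, not taken from the answers above), the difference is just the number of wildcarded segments in source_arn:
  // three segments after the API id: $stage/$method/$path
  source_arn = "${aws_api_gateway_rest_api.example.execution_arn}/*/*/*"
  // two segments, which is what ended up working in the answer above
  source_arn = "${aws_api_gateway_rest_api.example.execution_arn}/*/*"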
In my case I used a Lambda path that doesn't start with a '/', like Path: "example/path", in my template.yaml.
As a result, AWS generated an incorrect permission for this Lambda:
{
  "ArnLike": {
    "AWS:SourceArn": "arn:aws:execute-api:{Region}:{AccountId}:{ApiId}/*/GETexample/path/*"
  }
}
So I fixed it by adding '/' to my lambda path in the template.
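In template.yaml terms, the fix is just the leading slash on the API event path (the event name below is a placeholder):
  Events:
    GetExample:
      Type: Api
      Properties:
        Path: /example/path   # leading slash, instead of "example/path"
        Method: GET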

run lambda function with input parameter in serverless framework command line

I have started an AWS project with the help of the Serverless Framework, but I have one question about running a Lambda function.
How can I run a Lambda function with input parameters? I can do it via the Amazon console (Lambda test configuration -> test event), but I cannot find a corresponding function in Serverless. Does anyone know?
Thanks
For the lambda part
You can use event.json file:
{
  "principalId": "1234",
  "inputVar": "foo"
}
and then run sls function run.
According to the docs, if you don't specify any stage, the function runs locally; if you do specify a stage, the function runs the deployed code in the corresponding stage. BUT the docs seem outdated: you also need to pass the -d flag, like:
sls function run myFunction -s dev -d
This command will invoke your deployed lambda function, with parameters from your local event.json file.
Here is the source code for function run options.
For APIG integration
There are some samples in the documentation.
If you don't want to use templates, you can just insert the related code in your s-function.json, inside the endpoint description:
"endpoints": [
...
"requestTemplates": {
"application/json": {
"principalId": "$context.authorizer.principalId",
"apiKey": "$context.identity.apiKey",
"inputVar": "$input.json('inputVar')"
}
}
...
]
Syntax is as described in API Gateway Accessing the $input Variable doc.
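With that request template in place, the event your Lambda handler receives would look roughly like this (illustrative values only, assuming an authorized caller and an inputVar field in the request body):
  {
    "principalId": "1234",
    "apiKey": "someApiKeyValue",
    "inputVar": "foo"
  }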
