I have a Spring Boot app that on some occasions sends automated emails. The app is hosted on AWS ECS (Fargate) and uses AWS SES for sending emails. I added a role to my Fargate task with all the permissions needed for AWS SES. Most of the time the app is able to authenticate and send emails correctly, but on some occasions authentication fails and as a result the email(s) are not sent. The error I am receiving is:
> Unable to load AWS credentials from any provider in the chain:
> EnvironmentVariableCredentialsProvider: Unable to load AWS credentials
> from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and
> AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)),
> SystemPropertiesCredentialsProvider: Unable to load AWS credentials
> from Java system properties (aws.accessKeyId and aws.secretKey),
> WebIdentityTokenCredentialsProvider: To use assume role profiles the
> aws-java-sdk-sts module must be on the class path.,
> com.amazonaws.auth.profile.ProfileCredentialsProvider#6fa01606:
> profile file cannot be null,
> com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#4ca2c1d:
> Failed to connect to service endpoint
Now, if this happened every time, I would conclude that I had configured something incorrectly. However, since authentication fails only sometimes, I am not sure what the problem is.
I am using the following aws-sdk version: 1.12.197
I am initializing the client in the following way:
AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();
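If the intermittent failures come from the default chain probing other providers, one option (just a sketch, not a confirmed fix for this issue) is to pin the builder to the ECS container credentials provider, so any failure points directly at the container metadata endpoint:

import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;

// On ECS/Fargate this wrapper reads the task-role credentials from the
// container metadata endpoint instead of walking the whole default chain.
AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
        .withCredentials(new EC2ContainerCredentialsProviderWrapper())
        .withRegion(Regions.US_EAST_1)
        .build();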
Does anyone have any idea why authentication would fail only sometimes?
Thank you for your help.
It turned out that one of my ECS tasks was using an older revision of the task definition, which did not have the correct permissions set. After I updated it, everything seems to work fine now.
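For anyone checking for the same thing: the AWS CLI can show which task definition revision each running task actually uses (the cluster and task IDs below are placeholders):

aws ecs describe-tasks --cluster <cluster-name> --tasks <task-id> --query 'tasks[].taskDefinitionArn'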
I am trying to move my backend API app (a Node.js Express server) from Heroku to AWS Elastic Beanstalk. I had not realized how many features Heroku was providing automatically that I now have to set up manually in AWS.
So here is the list of features I discovered were missing in AWS, along with the solutions I have implemented.
Could you please let me know if I am missing something in order to run my APIs smoothly in AWS and get the equivalent of what I had on Heroku?
auto-restart server on crash: I am using PM2 to automatically restart my server in case of a critical error
SSL certificate: I am using an AWS ACM certificate
logging: I have inserted the Datadog agent in order to receive logs in Datadog
logging response time: I have added the "morgan-body" package to get each request's duration and response code (I had to manually filter out the AWS health checks and search-engine bots, because AWS gave me an IP address that was constantly visited by Baidu bots)
server timeout: I have implemented a 1200000 ms (20-minute) timeout on the whole app (any better option? a sketch of how I set it up is below this list)
auto deploy from GitHub: I have set up a GitHub automation to deploy code automatically (better options?)
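For reference, the server timeout from the list above is applied to the underlying HTTP server rather than per request. A minimal sketch of the setup (entry-point details and port are placeholders):

// server.js -- apply the global timeout to the HTTP server Express creates
const express = require('express');
const app = express();

const server = app.listen(process.env.PORT || 3000, () => {
  console.log('API listening');
});

// 1200000 ms = 20 minutes; connections idle beyond this are closed
server.setTimeout(1200000);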
Am I missing something? This app is already live, so I do not want to put my customers at risk when I move from Heroku to AWS...
Thanks for your help!
I believe you are covered:
Heroku Dynos restart after crashing or raising an error (Heroku Restarting Policy)
SSL certificates are provided for free
logging: Heroku supports various plugins, including Datadog
response time (in milliseconds) is logged automatically
HTTP timeout is 30 sec (it cannot be changed)
deploying from GitHub is possible (by connecting the accounts), and Docker deployment is also supported. Better options? Use GitHub Actions to deploy a new version after a code push or tagging.
If you are migrating a production environment, I strongly suggest first setting up a (free) Heroku dyno to test and verify that all your needs are satisfied.
I have configured WSO2 API Manager 4.0.0 on an AWS EC2 instance running Amazon Linux 2. I am following this WSO2 documentation to set up my first API. I am accessing the API Manager's Dev Portal from my local machine. I am on Step 3: Invoking my API.
When I click the Execute button under Try Out for GET requests, I get a 200 OK response, but with an error saying TypeError: Failed to fetch. I have attached a screenshot here.
I feel that the request URL mentioned here (https://localhost:8243/hello/1.0.0) should have the EC2 server's IP address instead of localhost, but I cannot find a way to modify it. What am I doing wrong here?
(Screenshots attached: the Try Out output and the browser's Inspect console tab.)
Swagger was not able to make the invocation because the connection is being refused. Try updating the API Gateway Environment configurations in the deployment.toml to the hostname / IP address (publicly accessible) of the EC2 instance.
The following is a sample TOML configuration for the API Gateway Environments. Replace <change-this> with the appropriate hostnames.
[[apim.gateway.environment]]
...
ws_endpoint = "ws://<change-this>:9099"
wss_endpoint = "wss://<change-this>:8099"
http_endpoint = "http://<change-this>:${http.nio.port}"
https_endpoint = "https://<change-this>:${https.nio.port}"
websub_event_receiver_http_endpoint = "http://<change-this>:9021"
websub_event_receiver_https_endpoint = "https://<change-this>:8021"
Once the configurations are done, restart the server and invoke the API from the Dev Portal Swagger UI.
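Once the endpoints point at the public hostname, the API can also be verified outside Swagger, for example with curl (the hostname and access token below are placeholders; the resource path is the one from the question):

curl -k "https://<public-hostname>:8243/hello/1.0.0" -H "Authorization: Bearer <access-token>"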
I'm trying to connect to an S/4HANA system using the S/4 SDK. While executing calls via the .execute() method in the Cloud Foundry environment, I see the error logs below:
Caused by: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to get authentication headers. Destination service returned error: Missing private and public key for subaccount ******-****-****-***-*******.
Note: I have already configured trust between the subaccount and the S/4HANA system and created the respective communication and business users. The authentication method used in the destination is OAuth2SamlBearerAssertion. The call executes fine in both the local and Cloud Foundry environments with basic authentication.
Can someone please suggest what is wrong here?
As correctly pointed out by @Dennis H, there was a problem in the trust configuration between my subaccount and the S/4HANA system. What was wrong in my case:
-> The certificate I downloaded for trust was from this URL:
https://.authentication.eu10.hana.ondemand.com/saml/metadata
This is incorrect; the certificate needs to come from the Download Trust button in the Destinations tab at the subaccount level.
-> The provider name was incorrect in the communication system.
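For anyone verifying the fix: a minimal sketch using the SAP Cloud SDK for Java (the destination name is a placeholder) that forces the authentication headers to be resolved, which is where a broken trust setup throws the DestinationAccessException:

import com.sap.cloud.sdk.cloudplatform.connectivity.Destination;
import com.sap.cloud.sdk.cloudplatform.connectivity.DestinationAccessor;

// Resolving the headers triggers the OAuth2SamlBearerAssertion exchange.
Destination destination = DestinationAccessor.getDestination("MyS4Destination");
destination.asHttp().getHeaders();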
We are developing a side-by-side extension app and deploying it to CF. Our app is trying to connect to an S/4HANA Cloud system using OAuth2SamlBearerAssertion, but we are facing issues while doing so. We are getting the error below in the logs. Please note, we are able to connect to S/4HANA Cloud using basic auth.
com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to access the configuration of destination
Our destination parameters are shown in the attached screenshot.
Thank you.
I recently upgraded to run the Firebase 3 SDK in the client, in e2e tests, and on the server.
Previously, when using the Firebase 2.x SDK, you could connect to Firebase in the same way as a client, using signInWithCustomToken. This meant I could generate a token with the {debug: true} flag and use it for my Mocha tests, giving me verbose output from Firebase in the event of a security rejection.
Firebase 3 does not allow you to use client types of auth when running the SDK from Node (i.e. Mocha); you must use service accounts. I have created the service account and have serviceaccount.json. I can connect and spoof the UID by using databaseAuthVariableOverride, and everything runs fine, but I cannot figure out how to get Firebase to emit verbose database output so I can debug new Firebase rules from my tests.
I have tried things like adding the "Log Viewer" permission to my service account. I have also tried (in vain) adding debug: true to serviceaccount.json.
Any help appreciated.
Have you tried the following (in Node.js):
firebase.database.enableLogging(true);
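Logging should be enabled before any database instance is created, so put it at the top of your test setup. A sketch of a Mocha setup file (the file name, databaseURL, and uid are placeholders; serviceAccount and databaseAuthVariableOverride are the ones described in the question):

// test/setup.js -- enable verbose Realtime Database logging first
var firebase = require('firebase');

firebase.database.enableLogging(true);

firebase.initializeApp({
  serviceAccount: 'serviceaccount.json',             // service account file from the question
  databaseURL: 'https://<your-db>.firebaseio.com',   // placeholder
  databaseAuthVariableOverride: { uid: 'test-user' } // spoofed uid for rules testing
});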
I am trying to learn REST app development with the Spring Boot framework on the AWS Elastic Beanstalk infrastructure. I am using the IntelliJ IDE to develop and test the app on my local box before deploying it to the AWS Elastic Beanstalk server. The app talks to an AWS RDS instance. With the following code snippets, the app can talk to the RDS instance when run on my local box, but it gives me an HTTP 404 when deployed on the AWS server, which I guess is because the deploy failed due to a failure to connect to the RDS instance from AWS.
Project POM file
Application Properties file
User Repository file
I am looking for the correct way to configure these secrets so that they are not present in Git. Ideally they would come from AWS environment variables defined for the instance, but I am not able to figure out how the Spring Boot application properties file can access AWS Elastic Beanstalk environment configuration variables.
I have read some documents and tutorials, like Spring Cloud SDK and a Sample Spring Boot AWS App, but have not been able to figure this out exactly.
[Edit 1] To provide more information, I was able to SSH into the box and look at the logs. The point of interest is:
Caused by: com.amazonaws.AmazonServiceException: User: arn:aws:sts::486695215273:assumed-role/aws-elasticbeanstalk-ec2-role/i-dc86381f is not authorized to perform: cloudformation:DescribeStackResources (Service: AmazonCloudFormation; Status Code: 403; Error Code: AccessDenied; Request ID: 1ee8c03b-ecd4-11e5-9fe1-378ce4cb26d3)
[Edit 2] After adding the AWSCloudFormationReadOnlyAccess security policy to the required role, I get:
Stack for i-dc86381f does not exist (Service: AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request ID: f579cc15-ecd4-11e5-a20b-114992e25084)
My template file as mentioned in AWSCloudFormation is My Template File
Configuring Elastic Beanstalk "secrets", or environment variables, can be done via the CLI or via the GUI. For the CLI, use:
eb setenv ExampleVar=ExampleValue
This is pretty straightforward. Docs here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-setenv.html
To do it via the GUI, navigate to your application and the desired environment, then click Configuration in the left-hand menu. Click the gear icon on the "Software Configuration" panel, and you will be taken to the configuration page where you can set "Environment Properties", which are key/value pairs. Set a property name and value; when you click "apply", they are applied to your environment, and your application can access them however it would normally access environment variables in production.
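On the Spring Boot side, application.properties can then reference those environment properties with standard ${...} placeholders, so no secret values live in Git (the property keys and variable names below are examples, not required names):

# application.properties -- values resolved from the EB environment at startup
spring.datasource.url=${RDS_URL}
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}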