WSO2 API Manager - Tokens become inactive - amazon-ec2

I'm currently deploying the WSO2 API Manager solution on Amazon EC2.
After each restart of my instance, I'm facing the following issue: all my access tokens become inactive.
<ams:code>900904</ams:code><ams:message>Access Token Inactive</ams:message>
I have already changed the "ApplicationAccessTokenDefaultValidityPeriod" value to 0 in the identity.xml configuration file (/repository/conf/identity.xml), but it did not prevent my tokens from becoming inactive.
Is there a way to keep all my generated tokens active after each instance restart?
PS: this error does not occur when I restart my WSO2 application without restarting my EC2 instance.
Error log:
ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /test, version: 1.0.3 with key: bLhh7pDxZ8NYwXz5k09nGO_Udcga
at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:135)
at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:88)
at org.apache.synapse.rest.API.process(API.java:252)
at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:76)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:63)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:191)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.http.util.RESTUtil.invokeAxisEngine(RESTUtil.java:144)
at org.apache.axis2.transport.http.util.RESTUtil.processURLRequest(RESTUtil.java:139)
at org.apache.synapse.transport.nhttp.util.RESTUtil.processGetAndDeleteRequest(RESTUtil.java:146)
at org.apache.synapse.transport.nhttp.DefaultHttpGetProcessor.processGetAndDelete(DefaultHttpGetProcessor.java:464)
at org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.process(NHttpGetProcessor.java:296)
at org.apache.synapse.transport.nhttp.ServerWorker.run(ServerWorker.java:272)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

AccessTokenDefaultValidityPeriod defines how long the server keeps an access token alive. By default this is 1 hour (3600 s), which means you have to generate a new access token after an hour. Setting this value to 0 seconds is therefore wrong; to make the token never expire, set it to -1:
<!-- Default validity period for Access Token in seconds -->
<AccessTokenDefaultValidityPeriod>-1</AccessTokenDefaultValidityPeriod>
You can refer to the WSO2 API Manager documentation for more details.
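As a quick check after the change (a sketch only, assuming the default gateway HTTPS port 8243 and reusing the API path and token from the question; substitute your own host and token), you can call the API after a restart to see whether the token stayed active:

```shell
# Hypothetical host and token -- replace with your own values.
# Port 8243 is the default WSO2 API Manager gateway HTTPS port.
curl -k -v -H "Authorization: Bearer bLhh7pDxZ8NYwXz5k09nGO_Udcga" \
  "https://my-ec2-host:8243/test/1.0.3/"
# An active token returns the API response; an inactive one returns the
# 900904 "Access Token Inactive" fault shown above.
```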

Related

Use gMSA for Hashicorp Vault mssql credential rotation

I want to start using Vault to rotate credentials for mssql databases, and I need to be able to use a gMSA in my mssql connection string. My organization currently only uses Windows servers and will only provide gMSAs for service accounts.
Specifying the gMSA as the user id in the connection string returns a 400 error: "error creating database object: error verifying connection: InitialBytes InitializeSecurityContext failed 8009030c".
I also tried transitioning my vault services to use the gMSA as their log on user, but this made nodes unable to become a leader node even though they were able to join the cluster and forward requests.
My setup:
I have a Vault cluster running across a few Windows servers. I use nssm to run them as a Windows service since there is no native Windows service support.
nssm is configured to run vault server -config="C:\vault\config.hcl" and runs under the Local System account.
When I change the user, the node is able to start up and join the raft cluster as a follower, but cannot obtain leader status, which causes my cluster to become unresponsive once the Local System nodes are off.
The servers are running on Windows Server 2022 and Vault is at v1.10.3, using integrated raft storage. I have 5 vault nodes in my cluster.
I tried running the following command to configure my database secret engine:
vault write database/config/testdb \
connection_url='server=myserver\testdb;user id=domain\gmsaUser;database=mydb;app name=vault;' \
allowed_roles="my-role"
which caused the error message I mentioned above.
I then tried to change the log on user for the service. I followed these steps to rotate the user:
Updated the directory permissions for everywhere vault is touching (configs, certificates, storage) to include my gMSA user. I gave it read permissions for the config and certificate files and read/write for storage.
Stopped the service
Removed the node as a peer from the cluster using vault operator raft remove-peer instanceName.
Deleted the old storage files
Changed the service user by running sc.exe --% config "vault" obj="domain\gmsaUser" type= own.
Started the service back up and waited for replication
When I completed the last step, I could see the node reappear as a voter in the Vault UI. I was able to directly hit the node using the cli and ui and get a response. This is not an enterprise cluster, so this should have just forwarded the request to the leader, confirming that the clustering portion was working.
Before I got to the last node, I tried running vault operator step-down and was never able to get the leader to rotate. Turning off the last node made the cluster unresponsive.
I did not expect changing the log on user to cause any issue with the node's ability to operate. I reviewed the logs, but there was nothing out of the ordinary, even with the log level set to trace. They do show a successful unseal, standby mode, and joining the raft cluster.
Most of the documentation I have found for the mssql secret engine includes creating a user/pass at the sql server for Vault to use, which is not an option for me. Is there any way I can use the gMSA in my mssql config?
When you put a user id into the SQL connection string, the driver attempts SQL authentication and no longer tries Windows authentication (and a gMSA is Windows-authentication based).
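Following that logic, one thing to try (a sketch under the assumption that the Vault service itself runs as the gMSA, reusing the server and database names from the question) is to drop "user id" from the connection string so the driver falls back to Windows/integrated authentication as the service's log-on account:

```shell
# Hypothetical variant of the command from the question: no "user id",
# so the SQL driver attempts Windows (integrated) authentication as the
# account the Vault service runs under, i.e. the gMSA.
vault write database/config/testdb \
  connection_url='server=myserver\testdb;database=mydb;app name=vault;' \
  allowed_roles="my-role"
```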
When setting up the gMSA account, did you specify the correct parameter for who is allowed to retrieve the password? (Correct: PrincipalsAllowedToRetrieveManagedPassword; incorrect, but the first suggestion offered by tab completion: PrincipalsAllowedToDelegateToAccount.)
You may also need to run Install-ADServiceAccount ... on the machine you're running Vault on.
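A minimal sketch of that setup, using hypothetical names (a gMSA called gmsaUser and a Vault host called vaultHost01) and run from a domain-joined admin session with the ActiveDirectory module available:

```powershell
# Hypothetical names throughout; adjust to your domain.
# Allow the Vault host to retrieve the managed password -- note the
# parameter is PrincipalsAllowedToRetrieveManagedPassword, not the
# similarly named PrincipalsAllowedToDelegateToAccount.
Set-ADServiceAccount gmsaUser `
  -PrincipalsAllowedToRetrieveManagedPassword "vaultHost01$"

# On the machine running Vault, install and verify the account:
Install-ADServiceAccount -Identity gmsaUser
Test-ADServiceAccount -Identity gmsaUser   # should return True
```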

Spring Boot issue with AWS SES authentication

I have a Spring Boot app that on some occasions sends automated emails. My app is hosted on AWS ECS (Fargate) and uses AWS SES for sending emails. I added a role with all needed SES permissions to my Fargate task. Most of the time the app is able to authenticate and send emails correctly; however, on some occasions authentication fails and, because of that, emails are not sent. The error I am receiving is:
> Unable to load AWS credentials from any provider in the chain:
> EnvironmentVariableCredentialsProvider: Unable to load AWS credentials
> from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and
> AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)),
> SystemPropertiesCredentialsProvider: Unable to load AWS credentials
> from Java system properties (aws.accessKeyId and aws.secretKey),
> WebIdentityTokenCredentialsProvider: To use assume role profiles the
> aws-java-sdk-sts module must be on the class path.,
> com.amazonaws.auth.profile.ProfileCredentialsProvider#6fa01606:
> profile file cannot be null,
> com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#4ca2c1d:
> Failed to connect to service endpoint
Now, if this happened every time, I would conclude that I had configured something incorrectly. However, since authentication fails only sometimes, I am not sure what the problem is.
I am using the following aws-sdk version: 1.12.197
When I am initializing a client, I am doing it on the following way:
AmazonSimpleEmailService client = AmazonSimpleEmailServiceClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();
Does anyone have any idea why authentication would fail only sometimes?
Thank you for your help.
It appears that one of my ECS tasks used an older version of the task definition, which didn't have the correct permissions set. After I updated it, it seems to work fine now.
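One way to confirm this kind of per-task discrepancy is to query the ECS container credentials endpoint from inside a running task; this is the same endpoint the SDK's EC2ContainerCredentialsProviderWrapper (from the error log above) reads from:

```shell
# Run inside the container of a suspect task. 169.254.170.2 is the fixed
# ECS credentials endpoint; ECS injects the relative URI as an env var.
curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
# A task with a role attached returns a JSON document containing
# AccessKeyId, SecretAccessKey, Token, and Expiration; a task without
# one fails, matching the "Failed to connect to service endpoint" error.
```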

Cloud Foundry dataflow server basic auth

We are using SCDF on PCF with file-based authentication. It works fine on a single instance; however, when we scale to 2 or more instances, login fails with "Not Logged in", and there is no error message on the server.
Does SCDF store user info in the session? I'm not sure why login stops working when scaled up.
SCDF - 1.5.1.RELEASE
(Apparently it was working in 1.3.0.RELEASE)
File-based authentication is not a recommended approach for cloud platforms like PCF.
In PCF in particular, you'd want to take advantage of the single-sign-on solution provided by the platform. With OAuth and SSO backed by UAA, it'd be a consistent security experience regardless of the number of instances. Please refer to the write-up on authentication options available for SCDF on PCF.
With this, you can also centrally renew an expired OAuth token or even revoke it as needed.
Also, as an FYI, when using the SCDF Tile, all this is automatically configured for you. You'd create an instance of SCDF service from the marketplace and the space-developer can gain access to the Dashboard, REST-APIs, and Shell - all of it works on an SSO model by default.
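For a manually deployed server, the UAA wiring might look roughly like this. This is a sketch only: the property names follow the standard Spring Boot 1.x security.oauth2.* namespace and the app name, client registration, and UAA endpoints are placeholders, so check the SCDF security documentation for your exact version:

```shell
# Placeholder app name, client credentials, and UAA endpoints throughout.
cf set-env dataflow-server SPRING_APPLICATION_JSON '{
  "security.oauth2.client.client-id": "dataflow",
  "security.oauth2.client.client-secret": "secret",
  "security.oauth2.client.access-token-uri": "https://uaa.example.com/oauth/token",
  "security.oauth2.client.user-authorization-uri": "https://uaa.example.com/oauth/authorize",
  "security.oauth2.resource.user-info-uri": "https://uaa.example.com/userinfo"
}'
cf restage dataflow-server
```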

Unable to execute odata calls using S4Hana SDK in cloud foundry environment with oAuth2SAMLBearerAssertion authentication

I'm trying to connect to an S4Hana system using the S4 SDK. While executing calls via the .execute() method in the Cloud Foundry environment, I see the error log below:
Caused by: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to get authentication headers. Destination service returned error: Missing private and public key for subaccount ******-****-****-***-*******.
Note: I've already configured trust between the subaccount and the S4Hana system and created the respective communication and business users. The authentication method used in the destination is oAuth2SamlBearerAssertion. The call executes fine in both the local and Cloud Foundry environments with basic authentication.
Can someone please suggest what is wrong here?
As correctly pointed out by Dennis H, there was a problem in the trust configuration between my subaccount and the S4Hana system. What was wrong in my case:
-> The certificate I downloaded for trust was using this URL:
https://.authentication.eu10.hana.ondemand.com/saml/metadata
This is incorrect; we need to get the certificate from the "Download Trust" button on the destination tab at the subaccount level.
-> The provider name was incorrect in the communication system.
We are developing a side-by-side extension app and deploying it to CF. Our app is trying to connect to an S4HANA Cloud system using oAuth2SAMLBearerAssertion, but we are facing issues while doing it. We are getting the error below in the logs. Please note, we are able to connect to S4HANA Cloud using basic auth.
com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to access the configuration of destination
Our destination parameters are shown in the attached screenshot.
Thank you.

OpenAM : Failed to get the valid sessions from the specified server

I have an issue retrieving current sessions in OpenAM.
When I connect with the amAdmin user on the first server and go to the Sessions item on the administration page, I cannot see the sessions on the second server.
I get the following error:
Failed to get the valid sessions from the specified server.
But sometimes I can see the sessions on the second server.
And when I connect with the amAdmin user on the second server and go to the Sessions item, I can only see the open sessions on the second server (only the current sessions on the second server are displayed instead of the open sessions of the first server).
I have restarted the web container after configuring both servers, and I have also checked keystore.jk (it is the same on both servers).
Session failover is configured as recommended in the OpenAM documentation.
After checking /sso/debug -> Session, I get the following message:
ERROR: Session:getValidSession :
com.iplanet.dpro.session.SessionException: AQIC5wM2LY4Sfcx_fLoDaTo7RYYE1qLOq3Q4WtoQQ1k7_jk.*AAJTSQACMDIAAlMxAAIwMQ..* Invalid session ID.AQIC5wM2LY4Sfcx_fLoDaTo7RYYE1qLOq3Q4WtoQQ1k7_jk.*AAJTSQACMDIAAlMxAAIwMQ..*
at com.iplanet.dpro.session.Session.getSessionResponseWithoutRetry(Session.java:1583)
at com.iplanet.dpro.session.Session.getValidSessions(Session.java:1340)
at com.iplanet.dpro.session.Session.getValidSessions(Session.java:1201)
at com.sun.identity.console.session.model.SMProfileModelImpl.initSessionsList(SMProfileModelImpl.java:111)
at com.sun.identity.console.session.model.SMProfileModelImpl.getSessionCache(SMProfileModelImpl.java:307)
at com.sun.identity.console.session.SMProfileViewBean.beginDisplay(SMProfileViewBean.java:190)
at com.iplanet.jato.taglib.UseViewBeanTag.doStartTag(UseViewBeanTag.java:149)
Do you have any ideas to fix this issue?
Best regards
OpenAM uses an HTTP URL connection to the other instance's URL (listed under 'Servers & Sites') to retrieve the session information.
If the OpenAM server instance URLs have the scheme 'https', make sure the deployment container trusts the issuer of the certificate; that's plain JSSE (http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html), not OpenAM related.
Session failover means 'failover', not session replication.
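Establishing that trust typically means importing the peer's issuer certificate into the JVM truststore used by the deployment container. A sketch, assuming a hypothetical certificate file and the default JVM truststore location and password (both vary by installation):

```shell
# Hypothetical alias and certificate path; the cacerts location and the
# default password "changeit" depend on your JVM installation.
keytool -importcert -alias openam2-ca \
  -file /tmp/openam2-ca.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit
```

After importing, restart the web container so the JVM picks up the updated truststore.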
The issue was resolved after modifying settings in the OpenAM 'bootstrap' config file.
Some settings were not correctly saved in this file.
