I have authentication auditing support in my application; I followed this article to implement it:
https://www.baeldung.com/spring-boot-authentication-audit
The last time it worked I had the following versions:
<spring.version>5.1.9.RELEASE</spring.version>
<spring.boot.version>2.1.7.RELEASE</spring.boot.version>
Then at some point we upgraded Spring and Spring Boot:
<spring.version>5.2.0.RELEASE</spring.version>
<spring.boot.version>2.2.0.RELEASE</spring.boot.version>
and now the auditEventHappened method is no longer triggered when a user logs in to the application.
@Component
public class LoginAttemptsLogger {

    @EventListener
    public void auditEventHappened(AuditApplicationEvent auditApplicationEvent) {
        AuditEvent auditEvent = auditApplicationEvent.getAuditEvent();
        System.out.println("Principal " + auditEvent.getPrincipal()
                + " - " + auditEvent.getType());

        WebAuthenticationDetails details =
                (WebAuthenticationDetails) auditEvent.getData().get("details");
        System.out.println("Remote IP address: " + details.getRemoteAddress());
        System.out.println("  Session Id: " + details.getSessionId());
    }
}
Hoping that someone has had the same issue and was able to solve it. Downgrading the libraries is not the way I want to solve it.
In case you are stuck like me, this is how I was able to make it work.
This is the tricky part: I was expecting the bean definition to be something that returns new AuditEventRepository(), but no. AuditEventRepository is an interface, and you need to implement the behavior yourself. If you are happy with the default implementation, use InMemoryAuditEventRepository:
@Bean
public InMemoryAuditEventRepository auditEventRepository() throws Exception {
    return new InMemoryAuditEventRepository();
}
You will also need to turn auditing on in the application.properties file:
management.auditevents.enabled = true
I was reading "Spring Microservices In Action (2021)" because I wanted to brush up on Microservices.
Now with Spring Boot 3, a few things have changed. The book presents a simple example of how to publish messages to a topic and how to consume messages from a topic.
The problem is: the examples presented just do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source){
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this Spring Boot 3? Config-Wise and "Code-Wise"?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
Whenever a message is published to orgChangeTopic, we want the loggerSink method to fire.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used if you are on Boot 3), a few things were removed, such as EnableBinding, StreamListener, etc. We deprecated them in 3.x and finally removed them in 4.0.0. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as a java.util.function.Function, Consumer, or Supplier for a processor, sink, or source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
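Outside of Spring, the three functional shapes map to the messaging roles like this (a plain-Java illustration of the source/processor/sink idea; the names and payloads are made up, and in a real application Spring Cloud Stream would invoke these beans for you):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class FunctionalStyleDemo {
    public static void main(String[] args) {
        // Source: produces messages (Spring Cloud Stream polls a Supplier bean)
        Supplier<String> source = () -> "orgChange";

        // Processor: transforms an incoming message into an outgoing one
        Function<String, String> processor = payload -> payload.toUpperCase();

        // Sink: consumes an incoming message and returns nothing
        Consumer<String> sink = payload -> System.out.println("Received " + payload);

        // Wired together by hand here; the framework does this via bindings
        sink.accept(processor.apply(source.get())); // prints "Received ORGCHANGE"
    }
}
```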
Your example #1 can be re-written like this:
@Component
public class SimpleSourceBean {

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Note that you no longer need the zkNodes property, nor the content-type property, since the framework handles the conversion for you.
StreamBridge send takes a binding name and the payload. The binding name can be anything - but for consistency reasons, we used output-out-0 here. Please read the reference docs for more context around the reasoning for this binding name.
If you have a simple source that runs on a timer, you can express it simply as a supplier, as below (instead of using a StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
Example #2
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return model -> {
        log.info("Received an {} event for organization id {}",
                model.getAction(), model.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than with in-0, out-0 etc., there are ways to make that happen. Details for this are in the reference docs.
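One such approach (a sketch using the binding-alias property; please verify the exact property name against the reference docs for your Spring Cloud Stream version) maps the generated binding name to an alias of your choosing, and the rest of the configuration then uses that alias:

```properties
# Alias the generated binding name loggerSink-in-0 to "input"
spring.cloud.stream.function.bindings.loggerSink-in-0=input

# Configure the binding under its alias
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup
```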
I have very strange behavior since I updated to Spring Boot v2.2.6.RELEASE.
My Spring Data REST base path (/api/v1) is losing its @RepositoryRestResource links. Custom links are still available.
After a server restart I got:
After an unknown period of time (2-3 days) I got:
My base path is customized this way:
@Bean
public RepresentationModelProcessor<RepositoryLinksResource> globalLinkProcessor() {
    // do not replace with lambda!!!
    return new RepresentationModelProcessor<RepositoryLinksResource>() {
        @Override
        public RepositoryLinksResource process(final RepositoryLinksResource repositoryLinksResource) {
            repositoryLinksResource.add(linkHelper.newLinkFromMethodInvocation(
                    WebMvcLinkBuilder.methodOn(FileProcessorController.class).status(), "fileProcessor"));
            repositoryLinksResource.add(linkHelper.newLinkFromMethodInvocation(
                    WebMvcLinkBuilder.methodOn(CurrentUserController.class).whoAmI(), "whoAmI"));
            repositoryLinksResource.add(linkHelper.newLinkFromMethodInvocation(
                    WebMvcLinkBuilder.methodOn(UserController.class).listOperations(), "users"));
            repositoryLinksResource.add(linkHelper.newLinkFromMethodInvocation(
                    WebMvcLinkBuilder.methodOn(StatisticController.class).listOperations(), "statistics"));
            return repositoryLinksResource;
        }
    };
}
There is NO exception in any log output. When I debug from my local machine, everything is fine. I cannot get a handle on it. Can anyone help here?
Thanks for reading,
Christian
An update to Spring Boot v2.3.0.RELEASE solved the problem (so far).
This is my first question here, so please bear with me. A recent release, Spring 5.2, added some extremely helpful components to Spring Integration, as described here: https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-server-events
Apache MINA was integrated via a new listener, ApacheMinaSftpEventListener, which
listens for certain Apache Mina SFTP server events and publishes them as ApplicationEvents
So far my application can capture the application events as noted in the documentation from the link provided, but I can't seem to figure out when the event finishes, if that makes sense (probably not).
The process flow: the application starts up and acts as an SFTP server on a specified port. I can use a user name and password to connect and "put" a file on the system, which initiates the transfer.
When I sign on, I capture the SessionOpenedEvent.
When I transfer a file, I capture the FileWrittenEvent.
When I sign off or break the connection, I capture the SessionClosedEvent.
When the file is larger, I capture ALL of the FileWrittenEvents, which tells me the transfer occurs as a stream with a predetermined or calculated buffer size.
What I'm trying to determine is: how can I find out when that stream is finished? This will help me answer: as an SFTP server accepting a file, when can I access the completed file?
My listener bean (which is attached to Apache MINA on startup via the SftpSubsystemFactory):
@Configuration
public class SftpConfiguration {

    @Bean
    public ApacheMinaSftpEventListener apacheMinaSftpEventListener() {
        return new ApacheMinaSftpEventListener();
    }
}
SftpSubsystemFactory subSystem = new SftpSubsystemFactory();
subSystem.addSftpEventListener(listener);
My event listener: this is here so I can see some output in a logger, which is when I realized that, for a file of a few GB, the FileWrittenEvent fires many times.
@Async
@EventListener
public void sftpEventListener(ApacheMinaSftpEvent sftpEvent) {
    log.info("Capturing Event: {}", sftpEvent.getClass().getSimpleName());
    log.info("Event Details: {}", sftpEvent.toString());
}
These few pieces were all I really needed to start capturing the events.
I was thinking that I would need to override a method to help me capture when the stream finishes so I can move on with my business logic, but I'm not sure which one.
I seem to be able to access the file (read/write) before the stream is done, so I can't rely on logic that attempts to "move" the file and waits for it to throw an error; that approach seemed like bad practice to me anyway.
Any guidance would be greatly appreciated, thank you.
Versioning Information
Spring 5.2.3
Spring Boot 2.2.3
Apache Mina 2.1.3
Java 1.8
This may not be helpful for others, but I've found a way around my initial problem by integrating a related solution, combined with the new Apache MINA classes, found in this answer: https://stackoverflow.com/a/45513680/12806809
My solution:
Create a class that extends the new ApacheMinaSftpEventListener while overriding the open and close methods, so that my SFTP server business logic knows when a file is done writing.
public class WatcherSftpEventListener extends ApacheMinaSftpEventListener {
    ...
    ...
    @Override
    public void open(ServerSession session, String remoteHandle, Handle localHandle) throws IOException {
        File file = localHandle.getFile().toFile();
        if (file.isFile() && file.exists()) {
            log.debug("File Open: {}", file.toString());
        }
        // Keep around the super call for now
        super.open(session, remoteHandle, localHandle);
    }

    @Override
    public void close(ServerSession session, String remoteHandle, Handle localHandle) {
        File file = localHandle.getFile().toFile();
        if (file.isFile() && file.exists()) {
            log.debug("RemoteHandle: {}", remoteHandle);
            log.debug("File Closed: {}", file.toString());
            for (SftpFileUploadCompleteListener listener : fileReadyListeners) {
                try {
                    listener.onFileReady(file);
                } catch (Exception e) {
                    String msg = String.format("File '%s' caused an error in processing '%s'", file.getName(), e.getMessage());
                    log.error(msg);
                    try {
                        session.disconnect(0, msg);
                    } catch (IOException io) {
                        log.error("Could not properly disconnect from session {}; closing future state", session);
                        session.close(false);
                    }
                }
            }
        }
        // Keep around the super call for now
        super.close(session, remoteHandle, localHandle);
    }
}
When I start the SSHD server, I add my new listener bean to the SftpSubsystemFactory, which uses a customized event handler class to apply my business logic to the incoming files.
watcherSftpEventListener.addFileReadyListener(new SftpFileUploadCompleteListener() {
    @Override
    public void onFileReady(File file) throws Exception {
        new WatcherSftpEventHandler(file, properties.getSftphost());
    }
});
subSystem.addSftpEventListener(watcherSftpEventListener);
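For context, SftpFileUploadCompleteListener is a custom callback interface of this solution, not part of Apache MINA. Its definition is not shown above; a minimal sketch of what it might look like, inferred from the addFileReadyListener/onFileReady usage:

```java
import java.io.File;

// Hypothetical callback interface, inferred from its usage above.
// Implementations are invoked from the overridden close() method once the
// SFTP client has closed its write handle, i.e. when the file is complete.
public interface SftpFileUploadCompleteListener {
    void onFileReady(File file) throws Exception;
}
```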
There was a bit more to this solution, but since this question isn't getting much traffic and it's now more for my own reference and learning than anything else, I won't provide more unless asked.
I've been trying to override Stormpath's RequestEventListenerAdapter methods to populate an account's Custom Data when the user logs in or creates an account.
I created a class that extends RequestEventListenerAdapter and am trying to override the on(SuccessfulAuthenticationRequestEvent) and on(LogoutRequestEvent) methods to make some simple output to the console to test whether they are working (a simple "Hello world!", for example). But when I perform any of these actions in the application, none of these events are triggered. So I was wondering if anyone here could help me out; I'm not sure whether the bean I'm supposed to declare is in the right place or whether I'm missing some kind of configuration for the events to trigger. Thanks for any help, and let me know if more information is needed.
This is my custom class:
import com.stormpath.sdk.servlet.authc.LogoutRequestEvent;
import com.stormpath.sdk.servlet.authc.SuccessfulAuthenticationRequestEvent;
import com.stormpath.sdk.servlet.event.RequestEventListenerAdapter;
public class CustomRequestEventListener extends RequestEventListenerAdapter {

    @Override
    public void on(SuccessfulAuthenticationRequestEvent e) {
        System.out.println("Received successful authentication request event: " + e);
    }

    @Override
    public void on(LogoutRequestEvent e) {
        System.out.println("Received logout request event: " + e);
    }
}
This is the bean that I'm not sure where to place:
@Bean
public RequestEventListener stormpathRequestEventListener() {
    return new CustomRequestEventListener();
}
What you are doing looks exactly right. I have created a sample project demonstrating how to get things working. You could take a look at it (it is very simple) and compare it with what you have.
I also added instructions on how to get it running so you can see that it does indeed work.
I have the Grails plugin spring-security-core-1.2.1.
I registered security event listener as a spring bean in grails-app/conf/spring/resources.groovy:
securityEventListener(LoggingSecurityEventListener)
and made two additions to grails-app/conf/Config.groovy:
grails.plugins.springsecurity.useSecurityEventListener = true
grails.plugins.springsecurity.logout.handlerNames =
['rememberMeServices',
'securityContextLogoutHandler',
'securityEventListener']
My logging/logout listener:
class LoggingSecurityEventListener implements ApplicationListener<AbstractAuthenticationEvent>, LogoutHandler {

    void onApplicationEvent(AbstractAuthenticationEvent event) {
        System.out.println('appEvent')
    }

    void logout(HttpServletRequest request, HttpServletResponse response,
                Authentication authentication) {
        System.out.println('logout')
    }
}
onApplicationEvent works fine, but logout is not working.
What could be the problem?
Alternatively, can you tell me how to get all logged-in users?
When you set
grails.plugins.springsecurity.useSecurityEventListener = true
the Spring Security plugin registers its own event listener called securityEventListener. The handler looked up from handlerNames is probably the plugin-registered one instead of yours. Try renaming your bean to something like:
loggingSecurityEventListener(LoggingSecurityEventListener)
and replacing the handlerNames with
grails.plugins.springsecurity.logout.handlerNames =
['rememberMeServices',
'securityContextLogoutHandler',
'loggingSecurityEventListener']
NOTE: the configuration property names have changed (plugins -> plugin). If you're using the Grails spring-security plugin version 2.0 or later, use this:
grails.plugin.springsecurity.logout.handlerNames