How to enable Spring Boot to display a list of files under a directory

I have a folder structure /data/reports on the file system, which contains all reports.
How can I configure a Spring Boot application to serve the contents of this file system?
Currently I have tried a few options, but none are working:
@Configuration
@EnableWebMvc
public class AppConfig implements WebMvcConfigurer {

    @Value(value = "${spring.resources.static-locations:#{null}}")
    private String fileSystem;

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
            .addResourceHandler("/data/reports/**")
            .addResourceLocations(fileSystem)
            .setCachePeriod(3600)
            .resourceChain(true)
            .addResolver(new PathResourceResolver());
    }
}
and in application.properties I have defined
spring.resources.static-locations=file:///data/reports
server.servlet.jsp.init-parameters.listings=true
But in both cases, when I try
http://host:port/application/data/reports
I'm getting 404
What am I missing?
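One thing worth noting for the title question: a resource handler only serves individual files and never renders a directory index, so the listing itself needs its own endpoint. A minimal, hypothetical sketch for the /data/reports directory from the question:

@RestController
public class ReportListingController {

    // Hypothetical endpoint: returns the names of the files under /data/reports
    @GetMapping("/data/reports")
    public List<String> listReports() throws IOException {
        try (Stream<Path> files = Files.list(Paths.get("/data/reports"))) {
            return files.map(path -> path.getFileName().toString())
                        .collect(Collectors.toList());
        }
    }
}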
Based on the suggestions given, I realized that one mistake I was making was accessing the reports via
http://host:port/application/data/reports
instead of
http://host:port/data/reports
If I use application in the request, the call goes through the RequestDispatcher and tries to find a matching RequestMapping, which does not exist. I'm convinced so far.
But the problem I'm seeing now is that I'm getting a SocketTimeoutException while trying to read from the resource listed in the URL. I put some breakpoints in the Spring source, ResourceHttpMessageConverter.java:
protected void writeContent(Resource resource, HttpOutputMessage outputMessage)
        throws IOException, HttpMessageNotWritableException {
    try {
        InputStream in = resource.getInputStream(); // It is timing out here
        try {
            StreamUtils.copy(in, outputMessage.getBody());
        }
        catch (NullPointerException ex) {
            // ignore, see SPR-13620
        }
The resource is a small text file with one line, "Hello World". Yet it is timing out.
The resource in the above class is a FileUrlResource opened on file:///c:/data/reports/sample.txt.
On the other hand, I tried to read that resource as
File file = new File("c:/data/reports/sample.txt");
System.out.println(file.exists());
URL url = file.toURI().toURL();
URLConnection con = url.openConnection();
InputStream is = con.getInputStream(); //This works
Thanks

Related

How to upgrade spring boot admin from 1.5 to 2.0

Is there any reference guide for upgrading Spring Boot Admin?
I have a legacy app that I need to upgrade from 1.5 to 2.0, but the entire API has changed and there is no info in the official reference guide: https://codecentric.github.io/spring-boot-admin/current/
For example, the main domain class now seems to be InstanceEvent, whereas it used to be Application, but they hold completely different info.
Same with the class AbstractStatusChangeNotifier, which now seems to use InstanceEvent and Spring WebFlux...
My more specific question is:
How can I get application info from Spring Boot Admin 2.0?
I used to be able to do the following, which no longer exists in the API:
public class XXXMailNotifier extends AbstractStatusChangeNotifier {

    @Override
    protected void doNotify(ClientApplicationEvent event) {
        try {
            helper.setText(mailContentGenerator.statusChange(event), true);
        } catch (IOException | MessagingException e) {
            logger.error(e.getMessage());
        }
    }

    String statusChange(ClientApplicationEvent event) throws IOException {
        ImmutableMap.Builder<String, Object> content = ImmutableMap.<String, Object>builder()
                .put("name", event.getApplication().getName())
                .put("id", event.getApplication().getId())
                .put("healthUrl", event.getApplication().getHealthUrl())
                .put("managementUrl", event.getApplication().getManagementUrl())
                .put("serviceUrl", event.getApplication().getServiceUrl())
                .put("timestamp", DATE_TIME_FORMATTER.print(new LocalDateTime(event.getApplication().getInfo().getTimestamp())));
Well, if it might help anyone...
I looked in the code and found that I can get the info from the instance's Registration object.
You can change the above into the following:
@Override
protected Mono<Void> doNotify(InstanceEvent event, Instance instance) {
    try {
        MimeMessage message = sender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message, true);
        helper.setSubject(format(subject, environment, pool, instance.getRegistration().getName(), event.getInstance().getValue()));
        helper.setText(mailContentGenerator.statusChange(event, instance, getLastStatus(event.getInstance())), true);

public String statusChange(InstanceEvent event, Instance instance, String lastStatus) throws IOException {
    Registration registration = instance.getRegistration();
    ImmutableMap.Builder<String, Object> content = ImmutableMap.<String, Object>builder()
            .put("name", registration.getName())
            .put("id", instance.getId().getValue())
            .put("healthUrl", registration.getHealthUrl())
            .put("managementUrl", registration.getManagementUrl())
            .put("serviceUrl", registration.getServiceUrl())
            .put("timestamp", DATE_TIME_FORMATTER.print(new LocalDateTime(instance.getStatusTimestamp())));

Olingo with Spring Boot

I am using this tutorial and it works for a simple Java web application. Now I want to convert it to Spring Boot. I removed the web.xml and added the following two annotations to DemoServlet:
@RestController
public class DemoServlet extends DispatcherServlet {

    private static final long serialVersionUID = 1L;
    private static final Logger LOG = LoggerFactory.getLogger(DemoServlet.class);

    @RequestMapping("/DemoService.svc/*")
    protected void service(final HttpServletRequest req, final HttpServletResponse resp) throws ServletException, IOException {
        try {
            // create odata handler and configure it with CsdlEdmProvider and Processor
            OData odata = OData.newInstance();
            ServiceMetadata edm = odata.createServiceMetadata(new DemoEdmProvider(), new ArrayList<EdmxReference>());
            ODataHttpHandler handler = odata.createHandler(edm);
            handler.register(new DemoEntityCollectionProcessor());
            // let the handler do the work
            handler.process(req, resp);
        } catch (RuntimeException e) {
            LOG.error("Server Error occurred in ExampleServlet", e);
            throw new ServletException(e);
        }
    }
}
I also changed the HttpServlet to DispatcherServlet.
Now I am only able to access one endpoint, i.e.
http://localhost:8080/DemoService.svc/
The metadata endpoint is not working; it returns the service document instead of XML content:
http://localhost:8080/DemoService.svc/$metadata
Can somebody explain what is going on here?
Use the below code for the process method:
handler.process(new HttpServletRequestWrapper(request) {
    // Spring MVC matches the whole path as the servlet path.
    // Olingo wants just the prefix, i.e. up to /DemoService.svc, so that it
    // can parse the rest of it as an OData path. So we need to override
    // getServletPath().
    @Override
    public String getServletPath() {
        return "/DemoService.svc";
    }
}, response);
You can create a @Configuration class and map your servlet in it like the following:
@Bean
public ServletRegistrationBean odataServlet() {
    ServletRegistrationBean odataServRegstration = new ServletRegistrationBean(new CXFNonSpringJaxrsServlet(),
            "/DemoService.svc/*");
    Map<String, String> initParameters = new HashMap<>();
    initParameters.put("javax.ws.rs.Application", "org.apache.olingo.odata2.core.rest.app.ODataApplication");
    initParameters.put("org.apache.olingo.odata2.service.factory",
            "com.metalop.code.samples.olingo.springbootolingo2sampleproject.utils.JPAServiceFactory");
    odataServRegstration.setInitParameters(initParameters);
    return odataServRegstration;
}
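For the Olingo 4 setup in the question, the same registration idea works with the servlet itself, assuming DemoServlet is reverted to a plain HttpServlet so the container, not Spring MVC, computes the servlet path. A sketch reusing the question's names:

@Bean
public ServletRegistrationBean demoServlet() {
    // Registered directly with the container, getServletPath() returns
    // /DemoService.svc on its own, so no request wrapper is needed
    return new ServletRegistrationBean(new DemoServlet(), "/DemoService.svc/*");
}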
Add the following after the handler.register call:
req.setAttribute("requestMapping", "/DemoService.svc");
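In the question's service method, that placement would look like this (a sketch):

handler.register(new DemoEntityCollectionProcessor());
// Tells Olingo where the service root ends, so $metadata and other
// OData paths are resolved relative to /DemoService.svc
req.setAttribute("requestMapping", "/DemoService.svc");
handler.process(req, resp);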
The best implementation of Olingo 2 with Spring Boot can be found here. I would suggest taking a look at this repository; it is very straightforward and easy.

Multipart File to file error

I want to upload a multipart file to AWS S3, so I have to convert it.
But the new File constructor needs a local location to find the file.
I am able to do it locally, but running this code on every machine seems like an issue.
Please find both scenarios below.
Working
private File convertMultiPartToFile(MultipartFile multipartFile) throws IOException {
    File convFile = new File("C:\\Users\\" + multipartFile.getOriginalFilename());
    multipartFile.transferTo(convFile);
    return convFile;
}
Not working
private File convertMultiPartToFile(MultipartFile multipartFile) throws IOException {
    File convFile = new File(multipartFile.getOriginalFilename());
    multipartFile.transferTo(convFile);
    return convFile;
}
Error received:
java.io.FileNotFoundException: newbusiness.jpg (Access is denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
You could use Spring Content S3. This will hide the implementation details so you don't need to worry about them.
There are Spring Boot starter alternatives, but as you are not using Spring Boot, add the following dependency to your pom.xml:
pom.xml
<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>spring-content-s3</artifactId>
    <version>0.0.11</version>
</dependency>
Add the following configuration, which creates a SimpleStorageResourceLoader bean:
@Configuration
@EnableS3Stores
public class S3Config {

    @Autowired
    private Environment env;

    public Region region() {
        return Region.getRegion(Regions.fromName(env.getProperty("AWS_REGION")));
    }

    @Bean
    public BasicAWSCredentials basicAWSCredentials() {
        return new BasicAWSCredentials(env.getProperty("AWS_ACCESS_KEY_ID"), env.getProperty("AWS_SECRET_KEY"));
    }

    @Bean
    public AmazonS3 client(AWSCredentials awsCredentials) {
        AmazonS3Client amazonS3Client = new AmazonS3Client(awsCredentials);
        amazonS3Client.setRegion(region());
        return amazonS3Client;
    }

    @Bean
    public SimpleStorageResourceLoader simpleStorageResourceLoader(AmazonS3 client) {
        return new SimpleStorageResourceLoader(client);
    }
}
Create a "Store":
S3Store.java
public interface S3Store extends Store<String> {
}
Autowire this store into where you need to upload resources:
@Autowired
private S3Store store;

WritableResource r = (WritableResource) store.getResource(getId());
InputStream is = // plug your input stream in here
OutputStream os = r.getOutputStream();
IOUtils.copy(is, os);
is.close();
os.close();
When your application starts, it will see the dependency on spring-content-s3 and your S3Store interface, and it will inject an implementation for you; therefore, you don't need to worry about implementing it yourself.
If you are writing some sort of web application or microservice and you need a REST API, then you can also add this dependency:
<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>spring-content-rest</artifactId>
    <version>0.0.11</version>
</dependency>
Update your S3Config.java as follows:
@Configuration
@EnableS3Stores
@Import(RestConfiguration.class)
public class S3Config {
    ...
Update your store as follows:
S3Store.java
@StoreRestResource(path="s3docs")
public interface S3Store extends Store<String> {
}
Now when your application starts, it will see your Store interface and also inject an @Controller implementation that forwards REST requests onto your store. This replaces the autowiring code above, obviously.
Then:
curl -X POST /s3docs/example-doc
with a multipart/form-data request will store the image in S3.
curl /s3docs/example-doc
will fetch it again, and so on. This controller supports full CRUD and video streaming, by the way.
If you want to associate this "content" with a JPA entity or something like that, you can have your S3Store extend AssociativeStore or ContentStore, and additional methods become available that provide for associations.
There are a couple of getting-started guides here. The S3 reference guide is here. And there is a tutorial video here. The coding bit starts about halfway through.
HTH
Since it needs a temporary location to place files, the below code worked after deploying the war on AWS:
private File convertMultiPartToFile(MultipartFile multipartFile) throws IOException {
    File convFile = new File(System.getProperty("java.io.tmpdir") + System.getProperty("file.separator") +
            multipartFile.getOriginalFilename());
    multipartFile.transferTo(convFile);
    return convFile;
}
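A variation on the same idea, if name collisions between concurrent uploads are a concern. This is a sketch using the JDK's temp-file API; the "upload-" prefix is arbitrary:

private File convertMultiPartToFile(MultipartFile multipartFile) throws IOException {
    // createTempFile generates a unique name in java.io.tmpdir, so two users
    // uploading "newbusiness.jpg" at the same time cannot clobber each other
    File convFile = File.createTempFile("upload-", "-" + multipartFile.getOriginalFilename());
    multipartFile.transferTo(convFile);
    return convFile;
}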
You have problems with relative paths.
You can do this:
public class UploadStackoverflow {

    private String location = "upload-dir";
    private Path rootLocation;

    public File convertFile(MultipartFile file) throws IOException {
        rootLocation = Paths.get(location);
        Files.createDirectories(rootLocation);
        String filename = StringUtils.cleanPath(file.getOriginalFilename());
        InputStream inputStream = file.getInputStream();
        Files.copy(inputStream, this.rootLocation.resolve(filename),
                StandardCopyOption.REPLACE_EXISTING);
        return new File(this.rootLocation.resolve(filename).toAbsolutePath().toString());
    }
}

Spring Batch reader file by file

I'm developing a Spring webapp, using the Spring Boot and Spring Batch frameworks.
We have a set of complex and different JSON files, and we need to:
read each file
slightly modify its content
finally, store them in MongoDB.
The question: does it make sense to use Spring Batch for this task? As far as I can see in tutorials, examples, etc., Spring Batch is the right tool for line-by-line processing, but what about file-by-file?
I don't have problems with the writer (MongoItemWriter) and the processor, but I do not see how to implement the reader.
Thanks!
Yes, you can definitely use Spring Batch.
The item for your reader can be a File.
public class CustomItemReader implements ItemReader<File>, InitializingBean {

    private List<File> yourFiles = null;

    @Override
    public File read() {
        if ((yourFiles != null) && (yourFiles.size() != 0)) {
            return yourFiles.remove(0);
        }
        return null;
    }

    // Reading items from a service
    private void reloadItems() {
        this.yourFiles = new ArrayList<File>();
        // populate the items
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        reloadItems();
    }
}
A custom processor:
public class MyProcessor implements ItemProcessor<File, File> {

    @Override
    public File process(File file) throws Exception {
        // Apply any logic to your File before transferring it to the writer
        return file;
    }
}
And a custom writer:
public class MyWriter implements ItemWriter<File> {

    @Override
    public void write(List<? extends File> files) throws Exception {
        // store each file's (possibly modified) content
    }
}
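For completeness, a sketch of how these three could be wired into a chunk-oriented step, following the pre-5.x builder-factory style; the bean names and the chunk size of 1 are illustrative:

@Configuration
@EnableBatchProcessing
public class FileJobConfig {

    // As a Spring bean, CustomItemReader's afterPropertiesSet() runs automatically
    @Bean
    public CustomItemReader reader() {
        return new CustomItemReader();
    }

    // Chunk size 1: each File flows through the processor and writer one at a time
    @Bean
    public Step fileStep(StepBuilderFactory steps, CustomItemReader reader) {
        return steps.get("fileStep")
                .<File, File>chunk(1)
                .reader(reader)
                .processor(new MyProcessor())
                .writer(new MyWriter())
                .build();
    }

    @Bean
    public Job fileJob(JobBuilderFactory jobs, Step fileStep) {
        return jobs.get("fileJob").start(fileStep).build();
    }
}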

ApacheConnector does not process request headers that were set in a WriterInterceptor

I am experiencing problems when configuring my Jersey client with the ApacheConnector. It seems to ignore all request headers that I define in a WriterInterceptor. I can tell that the WriterInterceptor is called when I set a breakpoint within WriterInterceptor#aroundWriteTo(WriterInterceptorContext). Contrary to that, I can observe that the modification of the entity stream is preserved.
Here is a runnable example demonstrating my problem:
public class ApacheConnectorProblemDemonstration extends JerseyTest {

    private static final Logger LOGGER = Logger.getLogger(JerseyTest.class.getName());
    private static final String QUESTION = "baz", ANSWER = "qux";
    private static final String REQUEST_HEADER_NAME_CLIENT = "foo-cl", REQUEST_HEADER_VALUE_CLIENT = "bar-cl";
    private static final String REQUEST_HEADER_NAME_INTERCEPTOR = "foo-ic", REQUEST_HEADER_VALUE_INTERCEPTOR = "bar-ic";
    private static final int MAX_CONNECTIONS = 100;
    private static final String PATH = "/";

    @Path(PATH)
    public static class TestResource {

        @POST
        public String handle(InputStream questionStream,
                             @HeaderParam(REQUEST_HEADER_NAME_CLIENT) String client,
                             @HeaderParam(REQUEST_HEADER_NAME_INTERCEPTOR) String interceptor)
                throws IOException {
            assertEquals(REQUEST_HEADER_VALUE_CLIENT, client);
            // Here, the header that was set in the client's writer interceptor is lost.
            assertEquals(REQUEST_HEADER_VALUE_INTERCEPTOR, interceptor);
            // However, the input stream got gzipped so the WriterInterceptor has been partly applied.
            assertEquals(QUESTION, new Scanner(new GZIPInputStream(questionStream)).nextLine());
            return ANSWER;
        }
    }

    @Provider
    @Priority(Priorities.ENTITY_CODER)
    public static class ClientInterceptor implements WriterInterceptor {

        @Override
        public void aroundWriteTo(WriterInterceptorContext context)
                throws IOException, WebApplicationException {
            context.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
            context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
            context.proceed();
        }
    }

    @Override
    protected Application configure() {
        enable(TestProperties.LOG_TRAFFIC);
        enable(TestProperties.DUMP_ENTITY);
        return new ResourceConfig(TestResource.class);
    }

    @Override
    protected Client getClient(TestContainer tc, ApplicationHandler applicationHandler) {
        ClientConfig clientConfig = tc.getClientConfig() == null ? new ClientConfig() : tc.getClientConfig();
        clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, makeConnectionManager(MAX_CONNECTIONS));
        clientConfig.register(ClientInterceptor.class);
        // If I do not use the Apache connector, I avoid this problem.
        clientConfig.connector(new ApacheConnector(clientConfig));
        if (isEnabled(TestProperties.LOG_TRAFFIC)) {
            clientConfig.register(new LoggingFilter(LOGGER, isEnabled(TestProperties.DUMP_ENTITY)));
        }
        configureClient(clientConfig);
        return ClientBuilder.newClient(clientConfig);
    }

    private static ClientConnectionManager makeConnectionManager(int maxConnections) {
        PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
        connectionManager.setMaxTotal(maxConnections);
        connectionManager.setDefaultMaxPerRoute(maxConnections);
        return connectionManager;
    }

    @Test
    public void testInterceptors() throws Exception {
        Response response = target(PATH)
                .request()
                .header(REQUEST_HEADER_NAME_CLIENT, REQUEST_HEADER_VALUE_CLIENT)
                .post(Entity.text(QUESTION));
        assertEquals(200, response.getStatus());
        assertEquals(ANSWER, response.readEntity(String.class));
    }
}
I want to use the ApacheConnector in order to optimize for concurrent requests via the PoolingClientConnectionManager. Did I mess up the configuration?
PS: The exact same problem occurs when using the GrizzlyConnector.
After further research, I assume that this is rather a misbehavior in the default Connector that uses an HttpURLConnection. As I explained in this other self-answered question of mine, the documentation states:
Whereas filters are primarily intended to manipulate request and response parameters like HTTP headers, URIs and/or HTTP methods, interceptors are intended to manipulate entities, via manipulating entity input/output streams.
A WriterInterceptor is not supposed to manipulate the header values, while a {Client,Server}RequestFilter is not supposed to manipulate the entity stream. If you need to do both, the two components should be bundled within a javax.ws.rs.core.Feature, or within the same class that implements both interfaces. (This can be problematic if you need to set two different Priority values, though.)
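To illustrate the second option, here is a sketch of a single class implementing both contracts; the class and header names are made up, only the division of labor matters:

public class GzipWithHeader implements ClientRequestFilter, WriterInterceptor {

    @Override
    public void filter(ClientRequestContext requestContext) {
        // Headers belong in the filter, which every connector honors...
        requestContext.getHeaders().add("foo-ic", "bar-ic");
    }

    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        // ...while the interceptor only wraps the entity stream.
        context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
        context.proceed();
    }
}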
All this is very unfortunate though, since JerseyTest uses the default Connector with its HttpURLConnection, such that all my unit tests succeeded while the real-life application misbehaved, since it was configured with an ApacheConnector. Also, rather than silently suppressing changes, I wish Jersey would throw me some exceptions. (This is a general issue I have with Jersey. When I, for example, used a too-new version of the ClientConnectionManager, where the interface was renamed to HttpClientConnectionManager, I was simply informed in a one-line log statement that all my configuration efforts were ignored. I did not discover this log statement until very late in development.)
