I am new to Spring Boot and need help uploading Excel data to an Oracle database table.
I have set up the Spring Boot environment, the database is connected, and I am able to fetch table data.
Upload it as a multipart file in the API:

@PostMapping(value = "/upload-excel")
@ResponseStatus(value = HttpStatus.CREATED)
public boolean uploadDocument(@RequestParam("file") MultipartFile file) {
    return uploadService.uploadFile(file);
}
In your service class, parse the Excel file and invoke the repository to save its rows in the database.
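A minimal sketch of such a service, assuming Apache POI 4.x is on the classpath for reading the Excel file; UploadService, MyRow, and MyRowRepository are hypothetical names standing in for your own service, entity, and repository, and the column layout is an assumption:

import java.io.IOException;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class UploadService {

    private final MyRowRepository repository;

    public UploadService(MyRowRepository repository) {
        this.repository = repository;
    }

    public boolean uploadFile(MultipartFile file) {
        // WorkbookFactory detects .xls vs .xlsx automatically
        try (Workbook workbook = WorkbookFactory.create(file.getInputStream())) {
            Sheet sheet = workbook.getSheetAt(0);
            for (Row row : sheet) {
                if (row.getRowNum() == 0) continue; // skip the header row
                MyRow entity = new MyRow();
                entity.setName(row.getCell(0).getStringCellValue());   // assumed column 0
                entity.setAmount(row.getCell(1).getNumericCellValue()); // assumed column 1
                repository.save(entity); // JPA repository persists to the Oracle table
            }
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}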
I have a requirement to write a simple application that writes some values to a DB.
Basically this is a repetitive task which has to be done quite often, and I want to build a simple Spring Boot app with a UI exposed so that it can be done automatically.
I have an entity class, a simple POJO MyClient, and I have written controller and service classes; I am able to GET from and POST to the DB.
My application.properties looks like this:

spring.datasource.url=jdbc:oracle:thin:@db-host-1:1521/xxx.xx.intern
spring.datasource.username=root
spring.datasource.password=root
// Controller class
@GetMapping("/clients")
public List<MyClient> retrieveAllClientVersions() {
    return myClientService.listAllClientVersions();
}

@RequestMapping(value = "/client/add", method = RequestMethod.POST)
@ResponseBody
void addNewClientVersion(@RequestBody MyClient myClient) {
    myClientService.addNewClientVersion(myClient);
}
// Service class
private MyClientRepository myClientRepository;

@Autowired
public MyClientService(MyClientRepository myClientRepository) {
    this.myClientRepository = myClientRepository;
}

public List<MyClient> listAllClientVersions() {
    List<MyClient> myClients = new ArrayList<>();
    myClientRepository.findAll().forEach(myClients::add);
    return myClients;
}

public void addNewClientVersion(MyClient myClient) {
    myClient.setReleaseKeyVersion(RELEASE_KEY_VERSION);
    myClient.setClientVersion(myClient.getClientVersion()); // kept from the request as-is
    myClient.setDescription(DESCRIPTION);
    myClient.setReleaseCertDn(DGV_RELEASE_CERT_DN);
    myClient.setStatus(STATUS);
    myClient.setClientSecurityProfileDbId(CLIENT_SECURITY_PROFILE_DB_ID);
    myClient.setIssuerDbId(ISSUER_DB_ID);
    myClientRepository.save(myClient);
}
We have around 50 test environments where I need to run the same query. I want to create a UI with checkboxes for all environments and buttons like GET and POST.
Whatever environments the user selects via the checkboxes, pressing POST should run the "Insert" on all of them.
How can this be handled? Is there a way that, based on a query parameter in the POST request, the insert can be run against a different DB? How do we connect to different DBs at runtime? What would be the best way to do this?
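One common way to switch databases per request is Spring's AbstractRoutingDataSource, which picks a target DataSource by a lookup key that you set, e.g., from the query parameter. A sketch, not from the original post; environment names, URLs, and credentials below are illustrative:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class EnvRoutingDataSource extends AbstractRoutingDataSource {

    // Holds the environment chosen for the current request/thread
    private static final ThreadLocal<String> CURRENT_ENV = new ThreadLocal<>();

    public static void setEnv(String env) { CURRENT_ENV.set(env); }
    public static void clearEnv() { CURRENT_ENV.remove(); }

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT_ENV.get();
    }
}

@Configuration
class RoutingConfig {

    @Bean
    public DataSource dataSource() {
        Map<Object, Object> targets = new HashMap<>();
        targets.put("env1", oracle("jdbc:oracle:thin:@db-host-1:1521/svc1"));
        targets.put("env2", oracle("jdbc:oracle:thin:@db-host-2:1521/svc2"));
        // ... one entry per test environment

        EnvRoutingDataSource routing = new EnvRoutingDataSource();
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(targets.get("env1"));
        return routing;
    }

    private DataSource oracle(String url) {
        return new DriverManagerDataSource(url, "root", "root");
    }
}

The controller can then loop over the selected environments, calling EnvRoutingDataSource.setEnv(env) before each insert and clearEnv() afterwards. For real use you would want a pooled DataSource (e.g., HikariCP) per target instead of DriverManagerDataSource.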
I'm building an app to serve data from a PostgreSQL database via a REST API (with Spring MVC) and a PWA (with Vaadin).
The PostgreSQL database stores files up to 2GB using Large Objects (I'm not in control of that); the JDBC driver provides streamed access to their binary content via Blob#getBinaryStream, so data does not need to be read entirely into memory.
The only requirement is that the stream from the blob must be consumed in the same transaction, otherwise the JDBC driver will throw.
The problem is that even if I retrieve the stream in a transactional repository method, both Spring MVC and Vaadin's StreamResource will consume it outside the transaction, so the JDBC driver throws.
For example, given
public interface SomeRepository extends JpaRepository<SomeEntity, Long> {

    @Transactional(readOnly = true)
    default InputStream getStream() {
        try {
            return findById(1L).orElseThrow().getBlob().getBinaryStream();
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }
}
this Spring MVC method will fail
@RestController
public class SomeController {

    private final SomeRepository repository;

    public SomeController(SomeRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public ResponseEntity<InputStreamResource> getStream() {
        var stream = repository.getStream();
        var resource = new InputStreamResource(stream);
        return new ResponseEntity<>(resource, HttpStatus.OK);
    }
}
and the same for this Vaadin StreamResource
public class SomeView extends VerticalLayout {
public SomeView(SomeRepository repository) {
var resource = new StreamResource("x", repository::getStream);
var anchor = new Anchor(resource, "Download");
add(anchor);
}
}
with the same exception:
org.postgresql.util.PSQLException: ERROR: invalid large-object descriptor: 0
which means the transaction is already closed when the stream is read.
I see two possible solutions to this:
keep the transaction open during the download;
write the stream to disk during transaction and then serve the file from disk during download.
Solution 1 is an anti-pattern and a security risk: the transaction duration is left in the hands of the client, and either a slow reader or an attacker could block data access.
Solution 2 creates a huge delay between the client request and the server response, since the stream is first read from the database and written to disk.
One idea might be to start reading from the disk while the file is still being written with data from the database, so that the transfer starts immediately while the transaction duration is decoupled from the client download; but I don't know what side effects this might have.
How can I achieve the goal of serving PostgreSQL large objects in a secure and performant way?
We solved this problem in Spring Content by using threads + piped streams and a special InputStream wrapper, ClosingInputStream, that delays closing the connection/transaction until the consumer closes the input stream. Maybe something like this would help you too?
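Roughly, the piped-stream idea looks like this (an illustrative sketch only; the actual Spring Content classes differ): a worker thread copies the blob to a PipedOutputStream inside the transaction, while the HTTP thread reads from the connected PipedInputStream, so the download starts immediately and the transaction closes when the copy finishes.

import java.io.IOException;
import java.io.InputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class BlobStreamingService {

    private final SomeRepository repository;
    private final TransactionTemplate txTemplate;
    private final ExecutorService executor = Executors.newCachedThreadPool();

    public BlobStreamingService(SomeRepository repository, PlatformTransactionManager txManager) {
        this.repository = repository;
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public InputStream openStream() throws IOException {
        PipedInputStream in = new PipedInputStream(64 * 1024);
        PipedOutputStream out = new PipedOutputStream(in);
        executor.submit(() -> txTemplate.execute(status -> {
            // the copy runs inside a transaction on the worker thread
            try (out; InputStream blob = repository.getStream()) {
                blob.transferTo(out);
            } catch (IOException e) {
                // the reader side will observe a broken pipe
            }
            return null;
        }));
        return in;
    }
}

Note that the transaction still stays open until the consumer drains the pipe, so the slow-reader concern from the question is softened by buffering, not eliminated.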
Just as an FYI. We have found using Postgres's OIDs and the Large Object API to be extremely slow when compared with similar databases.
Perhaps you might also be able to just retrofit Spring Content JPA to your solution and use its HTTP endpoints (and the solution I just outlined) instead of creating your own? Something like this:
pom.xml
<!-- Java API -->
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>spring-content-jpa-boot-starter</artifactId>
<version>0.4.0</version>
</dependency>
<!-- REST API -->
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>spring-content-rest-boot-starter</artifactId>
<version>0.4.0</version>
</dependency>
SomeEntity.java
@Entity
public class SomeEntity {

    @Id
    @GeneratedValue
    private long id;

    @ContentId
    private String contentId;

    @ContentLength
    private long contentLength = 0L;

    @MimeType
    private String mimeType = "text/plain";

    ...
}
SomeEntityContentStore.java
@StoreRestResource(path = "someEntityContent")
public interface SomeEntityContentStore extends ContentStore<SomeEntity, String> {
}
That is all you need to get REST endpoints that allow you to associate content with your entity SomeEntity. There is a working example in our examples repo here.
One option is to decouple reading from the database and writing the response to the client, as you mentioned. The downside is the complexity of the solution: you would need to synchronize between the reader and the writer.
Another option is to first get the large object id in the main transaction and then read data in chunks, each chunk in the separate transaction.
byte[] getBlobChunk(Connection connection, long lobId, long start, long chunkSize)
        throws SQLException, IOException {
    // PgBlob is internal to the PostgreSQL JDBC driver; it wraps an existing large object by its id
    Blob blob = new PgBlob(connection.unwrap(BaseConnection.class), lobId);
    InputStream is = blob.getBinaryStream(start, chunkSize);
    return IOUtils.toByteArray(is); // IOUtils from Apache Commons IO
}
This solution is much simpler but has an overhead of establishing a new connection which shouldn't be a big deal if you use connection pooling.
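For illustration, a hypothetical caller (not part of the original answer) that streams the large object to the client chunk by chunk, with one short transaction per chunk:

void copyBlob(DataSource dataSource, long lobId, long length, OutputStream target)
        throws SQLException, IOException {
    long chunkSize = 1024 * 1024; // 1 MiB per round trip
    for (long start = 1; start <= length; start += chunkSize) { // Blob positions are 1-based
        try (Connection connection = dataSource.getConnection()) {
            connection.setAutoCommit(false); // large objects must be read inside a transaction
            target.write(getBlobChunk(connection, lobId, start, chunkSize));
            connection.commit();
        }
    }
}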
I read that I should not store videos in the database; I should store them on the file system and save their names in the database.
I read some topics like this: https://softwareengineering.stackexchange.com/a/345736
So I have my video files in a folder in my app called videoDirs, and I would like to somehow return them to the client.
My idea is to return multiple video files for a user using an array of MultipartFile and receive them in AngularJS on the frontend.
Should I do something like this?
#RequestMapping(value = "/download-video", method = RequestMethod.GET)
public MultipartFile[] handleDownloadVideoRequest(#RequestParam String username) {
return null;
}
Can you provide me some examples?
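For what it's worth, MultipartFile is meant for uploads, not downloads. A common pattern is one endpoint that lists a user's video names and another that streams a single file as a Spring Resource. A sketch under assumed names: the per-user subfolder layout inside videoDirs and the mp4 content type are assumptions, not from the question.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VideoController {

    private final Path videoDir = Paths.get("videoDirs");

    // List the video file names stored for one user
    @GetMapping("/videos")
    public List<String> listVideos(@RequestParam String username) throws IOException {
        try (Stream<Path> files = Files.list(videoDir.resolve(username))) {
            return files.map(p -> p.getFileName().toString()).collect(Collectors.toList());
        }
    }

    // Stream one video file back to the client
    @GetMapping("/download-video")
    public ResponseEntity<Resource> downloadVideo(@RequestParam String username,
                                                  @RequestParam String name) {
        Path file = videoDir.resolve(username).resolve(name);
        if (!Files.exists(file)) {
            return ResponseEntity.notFound().build();
        }
        return ResponseEntity.ok()
                .contentType(MediaType.parseMediaType("video/mp4")) // assumption: mp4 files
                .body(new FileSystemResource(file.toFile()));
    }
}

The AngularJS client would then call /videos to get the names and request each video individually, instead of receiving an array of files in one response.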
I am having trouble uploading files via AJAX from my web-client to my Server. I am using the following jQuery library in the client-side to do the file upload: https://github.com/hayageek/jquery-upload-file
In the server-side, I'm using Spring Framework and I have followed the following Spring Tutorial to build my Service: https://spring.io/guides/gs/uploading-files/
At first, my server method looked like this (file was defined as @RequestParam):
#RequestMapping(value="/upload", method=RequestMethod.POST)
public #ResponseBody String handleFileUpload(#RequestParam("file") MultipartFile file){
//functionality here
}
but every time I submitted the Upload form I got a Bad Request message from the Server, and my handleFileUpload() method was never called.
After that, I realized the file was not being sent as a request parameter, so I defined file as @RequestBody, and now my method looks like this:
#RequestMapping(value="/upload", method=RequestMethod.POST)
public #ResponseBody String handleFileUpload(#RequestBody("file") MultipartFile file){
//functionality here
}
Now handleFileUpload() is called every time the Upload form is submitted, but I am getting a NullPointerException every time I want to manipulate file.
I want to avoid submitting the form by default; I just want to do it through AJAX straight to the server. Does anybody know what could be happening here?
You may try changing the signature of the method to:

@RequestMapping(value="/upload", method=RequestMethod.POST)
public @ResponseBody String handleFileUpload(MultipartHttpServletRequest request){
    Iterator<String> iterator = request.getFileNames();
    while (iterator.hasNext()) {
        String fileName = iterator.next();
        MultipartFile multipartFile = request.getFile(fileName);
        byte[] file = multipartFile.getBytes();
        ...
    }
    ...
}
This works with jQuery File Upload in our webapp. If for some reason it does not work for you, you may try to isolate the problem by inspecting the HTTP request issued by jQuery File Upload (for example, with Fiddler) and debugging the response starting from Spring's DispatcherServlet.
I recently discovered GridFS which I'd like to use for file storage with metadata. I just wondered if it's possible to use a MongoRepository to query GridFS? If yes, can someone give me an example?
I'd also take a solution using Hibernate, if there is one.
The reason: my metadata contains a lot of different fields, and it would be much easier to query a repository than to write a new Query(Criteria.where(...)) for each scenario. And hopefully I could also simply take a Java object and provide it via a REST API without the file itself.
EDIT: I'm using
Spring 4 Beta
Spring Data Mongo 1.3.1
Hibernate 4.3 Beta
There is a way to solve this:
#Document(collection="fs.files")
public class MyGridFsFile {
#Id
private ObjectId id;
public ObjectId getId() { return id; }
private String filename;
public String getFilename() { return filename; }
private long length;
public long getLength() { return length; }
...
}
You can write a normal Spring Mongo Repo for that. Now you can at least query the fs.files collection using a Spring Data Repo. But: You cannot access the file contents this way.
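For instance, a minimal repository of this kind might look like the following; the query method is an illustrative example following the usual Spring Data naming conventions, not something prescribed by GridFS:

public interface MyGridFsFileRepository extends MongoRepository<MyGridFsFile, ObjectId> {

    List<MyGridFsFile> findByFilename(String filename);
}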
For getting the file contents itself, you've got (at least) 2 options:
Use GridFsOperations: GridFSDBFile file = gridOperations.findOne(Query.query(Criteria.where("_id").is(id))); InputStream is = file.getInputStream();
Have a look at the source code of GridFSDBFile. There you can see how it internally queries the fs.chunks collection and fills the InputStream.
(Option 2 is really low level; Option 1 is a lot easier, and its code is maintained by the MongoDB Java driver devs, so Option 1 would be my choice.)
Updating GridFS entries:
GridFS is not designed to update file content!
Updating only the metadata field can still be useful, though; the rest of the fields are essentially static.
You should be able to simply use your custom MyGridFsFileRepo's update method. I suggest creating a setter only for the metadata field.
Different metadata for different files:
I solved this using an abstract MyGridFsFile class with generic metadata, i.e.:
#Document(collection="fs.files")
public abstract class AbstractMyGridFsFile<M extends AbstractMetadata> {
...
private M metadata;
public M getMetadata() { return metadata; }
void setMetadata(M metadata) { this.metadata = metadata; }
}
And of course each impl has its own AbstractMetadata impl associated with it. What have I done? AbstractMetadata always has a field called type; this way I can find the right AbstractMyGridFsFile impl. I also have a generic abstract repository.
Btw: in the meantime I switched from using a Spring repo to plain access via MongoTemplate, like:
protected List<A> findAll(Collection<ObjectId> ids) {
    List<A> files = mongoTemplate.find(Query.query(Criteria
            .where("_id").in(ids)
            .and("metadata.type").is(type) // hardcoded for each repo impl
    ), typeClass); // the corresponding impl of AbstractMyGridFsFile
    return files;
}
Hope this helps. I can write more, if you need more information about this. Just tell me.
You can create a GridFS object with the database from your MongoTemplate, and then interact with that:
MongoTemplate mongoTemplate = new MongoTemplate(new Mongo(), "GetTheTemplateFromSomewhere");
GridFS gridFS = new GridFS(mongoTemplate.getDb());
The GridFS object lets you create, delete, find files, and so on.
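For illustration, with the legacy com.mongodb.gridfs API used above (file names are made up; IOException handling omitted for brevity):

// Store a file
GridFSInputFile inputFile = gridFS.createFile(new File("video.mp4"));
inputFile.setFilename("video.mp4");
inputFile.save();

// Find it again and write it back out
GridFSDBFile storedFile = gridFS.findOne("video.mp4");
storedFile.writeTo("copy.mp4");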