Transfer file among microservices - Spring

I have a chain of microservices (Spring Boot/Cloud).
The UI lets the user download a file from file storage, but the response travels back through every microservice in the chain. I don't want each microservice to download the file and re-upload it to the next one on the way back, and I don't want to hold it in memory, since that would cause an OutOfMemoryError.
Is it possible to return some kind of stream?
Thanks

I would pass back only a file reference (like a URL) and retrieve the actual file only when you need it.
So if the client UI requires the actual file from Microservice 1, I would pass the reference back to Microservice 1 and let that service fetch the file content and send it to the client.
If the client can resolve a URL/reference itself, you could even just return that to the client and let the client retrieve the file directly.
Either way, you want to minimize the moving/loading of the file and do it at the last possible moment.
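To make "the last possible moment" concrete: the service that actually faces the client can open a stream against the storage and hand it straight to the HTTP response, so the file is never buffered in memory and the intermediate services only ever see the reference. Here is a minimal Spring MVC sketch; StorageClient and its openStream method are hypothetical stand-ins for whatever file-storage client you actually use.

import java.io.InputStream;
import org.springframework.core.io.InputStreamResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FileDownloadController {

    private final StorageClient storageClient; // hypothetical client for your file storage

    public FileDownloadController(StorageClient storageClient) {
        this.storageClient = storageClient;
    }

    @GetMapping("/files/{fileRef}")
    public ResponseEntity<InputStreamResource> download(@PathVariable String fileRef) {
        // Open a stream directly against the storage; nothing is buffered in memory here.
        InputStream in = storageClient.openStream(fileRef);
        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + fileRef + "\"")
                .contentType(MediaType.APPLICATION_OCTET_STREAM)
                .body(new InputStreamResource(in));
    }
}

Spring copies the InputStream to the servlet response in small chunks, so memory use stays flat regardless of file size; an alternative with the same effect is returning a StreamingResponseBody.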

Related

Download different content types with Spring

I need to expose a POST endpoint where the user uploads an Excel file and, based on some validations, I either send back the file with some information added to it along with a JSON response, OR just send a 200 OK status (no data).
I am trying to do this in Spring Boot. I tried the following link:
https://javadigest.wordpress.com/2012/02/13/downloading-multiple-files-using-multipart-response/
This works but requires adding the boundary manually. Is there any other way to do it so that I can send both the file and the data?
You should use @Produces, as described here: https://docs.oracle.com/cd/E19776-01/820-4867/ghrpv/index.html
You can define the MIME type of your payload.
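@Produces is the JAX-RS way; in Spring Boot the equivalent is simply setting the Content-Type on the ResponseEntity you return, which also lets you pick a different type per branch. A minimal sketch, where isValid and addValidationInfo are hypothetical placeholders for your own validation logic:

import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class ExcelUploadController {

    private static final MediaType XLSX = MediaType.parseMediaType(
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");

    @PostMapping("/upload")
    public ResponseEntity<?> upload(@RequestParam("file") MultipartFile file) throws Exception {
        byte[] workbook = file.getBytes();

        if (isValid(workbook)) {
            return ResponseEntity.ok().build();             // plain 200 OK, no body
        }

        byte[] annotated = addValidationInfo(workbook);      // hypothetical: workbook with remarks added
        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"report.xlsx\"")
                .contentType(XLSX)
                .body(annotated);
    }

    private boolean isValid(byte[] workbook) { return true; }               // placeholder
    private byte[] addValidationInfo(byte[] workbook) { return workbook; }   // placeholder
}

If the file and a JSON summary really have to travel in the same response, you are back to a multipart body (as in the linked post) or to putting the JSON in a custom response header; one content type per response is the simpler path if you can choose.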

Trouble using Websockets with Julia

I am trying to connect to an API that uses websockets. I need to do the following:
Connect to the websocket using a given URI
Send a login request
Send a request for the required data stream(s)
Store the returned streamed data in an array for immediate processing (the array will be continually updated while data is streamed)
When finished collecting data, send a logout request
I have a general understanding of WebSockets but have never tried to connect to one. I have read through the “documentation” for the HTTP package (which I have used before), WebSockets, and DandelionWebSockets. Each has left me scratching my head trying to understand how to implement the above tasks.
Would someone please help by showing me, line-by-line, how to set up the above tasks and also explain why each line or function is used? (Assume I have the correct URI, login, data, and logout request formats.)

How to deal with the 6 MB limit in AWS Lambda services?

I have a three-tier app running in AWS. The middleware was written in Python Flask and runs on a Linux machine.
However, I have been asked to move it to AWS Lambda. There is a 6 MB limit on the data a function can return, and since I'm dealing with GeoJSON, sometimes it's necessary to return up to 15 MB.
Although AWS Lambda is stateless, I could return the data partitioned, but that seems problematic: I think it would be necessary to generate the whole map again and again until all the data has been delivered.
Is there a better way to deal with this? I'm programming in Python.
I'd handle this by sending the data to S3, and issuing a redirect or JSON response that points to the URL on S3 (with a temporary, expiring URL if the data should be secure). If the data's long-lived, you can just leave it there; if not, you could use S3's lifecycle rules to have the files automatically delete after 24 hours or so.
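A rough sketch of that flow, written in Java only to stay consistent with the other snippets on this page (the boto3 version in Python is analogous); the bucket name and buildGeoJson are made up, and in a real API Gateway proxy integration you would wrap the URL in a proper response object:

import java.util.Date;
import java.util.UUID;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeoJsonHandler implements RequestHandler<Object, String> {

    private static final String BUCKET = "my-geojson-results";   // hypothetical bucket
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public String handleRequest(Object input, Context context) {
        String geoJson = buildGeoJson(input);                     // hypothetical: the 15 MB payload
        String key = "results/" + UUID.randomUUID() + ".geojson";

        // Store the full result in S3 instead of returning it through Lambda/API Gateway.
        s3.putObject(BUCKET, key, geoJson);

        // Hand back a short-lived URL the client can fetch the result from.
        Date expiry = new Date(System.currentTimeMillis() + 15 * 60 * 1000); // 15 minutes
        GeneratePresignedUrlRequest presign = new GeneratePresignedUrlRequest(BUCKET, key)
                .withMethod(HttpMethod.GET)
                .withExpiration(expiry);
        return s3.generatePresignedUrl(presign).toString();       // small response, well under 6 MB
    }

    private String buildGeoJson(Object input) { return "{}"; }    // placeholder
}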
If you also control the client that receives the data, you can send a compressed result and decompress it client-side (see the sketch after this answer). That way you can send the 15 MB response too, since it can become really small when compressed.
Or you can send a fragment of the whole response along with a token indicating to the client that the response is not complete. The client then makes another request with that token to get the next fragment, and so on until there are no more fragments, at which point it can join all the fragments to get the full response.
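For the compression option, here is a minimal server-side sketch (again in Java to match the other snippets; Python's gzip module does the same job). How you flag the encoding to API Gateway (Content-Encoding header, isBase64Encoded) depends on your integration, so treat those parts as assumptions to verify, and remember that base64 adds roughly a third back on top of the compressed size.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.GZIPOutputStream;

public class GzipBody {

    // Gzips the GeoJSON and base64-encodes it so it can travel as a Lambda/API Gateway body.
    public static String compress(String geoJson) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(geoJson.getBytes(StandardCharsets.UTF_8));
        }
        // The client needs "Content-Encoding: gzip" (and isBase64Encoded = true for a
        // proxy integration) to know how to undo this.
        return Base64.getEncoder().encodeToString(buffer.toByteArray());
    }
}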
Speaking of the 6 MB limit, I hope that at some point we will have the ability to set the maximum payload size, since 6 MB is fine for most cases, but not all of them.
You can use a presigned S3 URL for the upload; that way you are not bound by the payload size limit.
The client sends an HTTP GET request to API Gateway, the Lambda function generates a presigned S3 URL and returns it, and the client then uploads the content directly to S3 using that presigned URL.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
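A sketch of the Lambda side of that flow, kept in Java for consistency with the rest of the page (bucket and key are placeholders): the function only generates and returns the presigned PUT URL, so the large body never passes through API Gateway or Lambda.

import java.util.Date;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUploadUrl {

    public static String create(String bucket, String key) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket, key)
                .withMethod(HttpMethod.PUT)                                             // the client will PUT the object
                .withExpiration(new Date(System.currentTimeMillis() + 10 * 60 * 1000)); // valid for 10 minutes
        return s3.generatePresignedUrl(request).toString();
    }
}

The client then uploads straight to S3 with something like curl --upload-file data.geojson "<presigned-url>".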

Client-side checking before resource/URL mapping

I have created an API which is used to upload files to AWS S3.
I want to restrict the file size to 10 MB.
The following is my API:
@POST
@Path("/upload")
@Produces("application/json")
public Response kafkaBulkProducer(InputStream a_fileInputStream) throws IOException {
    // uploading logic: stream a_fileInputStream to S3
}
As far as I understand, when a request is made to my API, the data/InputStream is loaded onto my server.
This consumes resources (connections, etc.).
Is there any way I can identify the file size before the URL is mapped or resource mapping is done, so that if the file size is greater than 10 MB I can stop it from reaching my server?
I think I can work with a pre-matching filter, but my biggest concern and question is: when the API is called, will the stream data be stored on my server first?
Will the pre-matching filter help, so that the data is not stored on my server if its size is greater than 10 MB?
Basically, I don't want to store the data on my server, then check the size, and then upload to S3.
I want a solution where I can check the file size before it is loaded onto my server, and then upload it to S3.
I will be using this API with curl.
How can I do this?
I hope I am clear about my question.
Thank you
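On the pre-matching filter idea: a JAX-RS @PreMatching ContainerRequestFilter runs before resource matching and can reject the request based on the declared Content-Length header without reading the body. A minimal sketch (class name and message are illustrative); keep in mind it only checks what the client declares, so a client that lies, or a chunked upload with no Content-Length, still has to be caught by a container or proxy limit. curl normally does send a Content-Length for plain file uploads.

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class UploadSizeFilter implements ContainerRequestFilter {

    private static final long MAX_BYTES = 10L * 1024 * 1024; // 10 MB

    @Override
    public void filter(ContainerRequestContext request) throws IOException {
        // getLength() returns the declared Content-Length, or -1 if the header is absent.
        // It does not read the request body.
        int declaredLength = request.getLength();
        if (declaredLength > MAX_BYTES) {
            request.abortWith(Response.status(Response.Status.REQUEST_ENTITY_TOO_LARGE)
                    .entity("File exceeds the 10 MB limit")
                    .build());
        }
    }
}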

CodeIgniter send multiple responses to client

Is there any way, when the client sends a request to the CodeIgniter server, for it to reply with multiple responses?
For example, this could be a file upload, where the responses show how the file is being processed in the backend.
Can this be done with an ajax request or do I need sockets for this?
EDIT:
Example situation here
I send a data object to the CI backend to be saved. Let's say the following functions have to run on this received data on the backend:
Inspect the data to make sure it is secure
Search for any similar data
Save the data
Create a page based on the data
Add some default images to this new page
As you can see, these functions may take more than 5 seconds to execute.
Do I need a response from the server every time a task from the list above is completed? Does that make sense?
For example, when task 1 is complete, the server would send a response saying that task 1 has completed, so the user knows that something is going on.
