How can I download/get multiple objects from MinIO using a single call of the MinIO Java SDK?
In this case it is not possible to achieve this with a single call; you'll need to do the following:
First, list all the objects in the selected bucket using the listObjects call. Once you have this list, you can filter it to check whether the file(s) you want are present in the bucket.
Once you have the list of valid files, use the getObject call to retrieve each file's data. At this point you can use the returned data stream to create a zip file with all the files you require.
With that done, you can stream the zip file to start the download.
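For illustration, a minimal sketch of those steps, assuming MinIO Java SDK 8.x, placeholder endpoint/credentials/bucket names, and an arbitrary ".csv" filter, might look like this:
import io.minio.GetObjectArgs;
import io.minio.ListObjectsArgs;
import io.minio.MinioClient;
import io.minio.Result;
import io.minio.messages.Item;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class DownloadAsZip {
    public static void main(String[] args) throws Exception {
        MinioClient client = MinioClient.builder()
                .endpoint("https://minio.example.com")   // placeholder endpoint
                .credentials("ACCESS_KEY", "SECRET_KEY") // placeholder credentials
                .build();

        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream("objects.zip"))) {
            // 1. List all objects in the bucket.
            Iterable<Result<Item>> objects = client.listObjects(
                    ListObjectsArgs.builder().bucket("my-bucket").recursive(true).build());

            for (Result<Item> result : objects) {
                String name = result.get().objectName();

                // 2. Filter to the objects you actually want (placeholder rule: only .csv files).
                if (!name.endsWith(".csv")) {
                    continue;
                }

                // 3. Fetch each object's stream and add it as an entry in the zip.
                zip.putNextEntry(new ZipEntry(name));
                try (InputStream in = client.getObject(
                        GetObjectArgs.builder().bucket("my-bucket").object(name).build())) {
                    in.transferTo(zip);
                }
                zip.closeEntry();
            }
        }
        // 4. Stream objects.zip back to the caller to start the download.
    }
}
Instead of writing objects.zip to disk, you could wrap your HTTP response's output stream in the ZipOutputStream so the archive is streamed to the client directly.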
Please remember that the MinIO team is available on their public slack channel or by email to answer questions 24/7/365.
Related
I want to implement the following use case with Apache Camel FTP:
At a remote location, 0 to n files are stored.
When I receive a command, I want to download one file (which one does not matter) as a byte array using FTP, if any files are available.
When the file is downloaded, I want to save it in a database as a blob.
Then I want to delete the stored/processed file from the remote location.
Wait for the next download command and, once received, go back to step 1.
The files have to be downloaded through the same FTP session.
My problem is that if I use a normal FTP route, it downloads all available files.
When I tell the route to only download one, I have to create a new route for the other files and I cannot reuse the FTP session.
Is there a way to implement this use case with Apache Camel FTP?
Camel-ftp doesn't consume all available files at once; it consumes them individually, one after another, meaning each file gets processed separately. If you need to process them in a specific order, you can try sorting by file name or modified date with the sortBy option.
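For example (a sketch; host, port, and directory name are placeholders):
// Consume files one by one, oldest first, by sorting on the last-modified date.
from("ftp://host:port/directoryName?sortBy=file:modified")
    .to("log:loggerName");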
If you want to control when a file gets downloaded, i.e. when the command is received, you can call the FTP consumer endpoint using pollEnrich.
Example:
// 1. Loads one file from the FTP server, with a timeout of 3 seconds.
// 2. Logs the body and headers.
from("direct:example")
    .pollEnrich("ftp://host:port/directoryName", 3000)
    .to("log:loggerName?showBody=true&showHeaders=true");
You can trigger the direct endpoint with a ProducerTemplate obtained from the CamelContext, or change it to whatever consumer endpoint fits your use case.
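For example (a sketch, assuming you already hold a reference to the CamelContext):
// Trigger the route above whenever the download command arrives.
ProducerTemplate template = camelContext.createProducerTemplate();
byte[] file = template.requestBody("direct:example", null, byte[].class);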
If you need a dynamic URI, you can use simple to provide the URI for pollEnrich and then provide the timeout afterwards.
from("direct:example")
.pollEnrich()
.simple("ftp:host:port/directoryName?fileName=${headers.targetFile}")
.timeout(3000)
.to("log:loggerName?showBody=true&showHeaders=true");
I have a third-party executable that takes a directory path as an argument and in turn looks there for a collection of .db files. I have said collection of files stored in a Google Cloud Storage bucket and would like to stream the content of those files into some local named pipes that can be used as input to the executable.
I'm writing an application to perform the above in Go and am using the "cloud.google.com/go/storage" package to work with cloud storage objects.
As a note, I need all pipes/files to be available for reading at the time I run the executable.
What is the best way to go about this? I'm essentially looking to use the named pipes as a proxy of sorts to make the remote files look local to this executable. Is that possible?
I want to write a file to GCS from a stream object, but I've only found the "create_file" function, which creates a new file object given the path of a local file to upload and the path to store it under in the bucket.
Is there any function to create a file in gcs from a stream?
Fuse over GCS
You could try gcsfuse, which layers a user-space filesystem over a bucket, but it is only beta software at present. There's a useful section on its limitations which you should read first.
I use fog to access GCS but that is a thin layer which doesn't try to impose any additional semantics into the bucket/object model.
A warning: if your problem really requires a standard file system underneath any possible solution, then GCS is not a good fit.
Providing an IO object instead of a File object has only recently become possible. It was added in PR 1335 and will be included in the next release.
Until then, the quickest way is to write the stream to a tempfile and upload that. For more, see Issue 305.
I would like to create a utility to automatically copy files from one folder to another folder.
• The source and destination folders will be specified as parameters
• The file type (i.e. “.csv” or “.txt”) will also be specified as a parameter
How do I pass parameters to a Windows service? Please reply.
Set your service up to also act as a web service on some obscure port. Use a REST API (or whatever you like) to POST JSON/XML (or use GET query strings if you prefer) to the service with the directory, the extension, and whatever else you might need.
The service then has the data and can do its thing. You can also define other APIs the web server responds to, for example to stop the copy process or to report the status of ongoing copies (10 of 15 files copied, etc.). You can enhance and augment it however you like.
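As a rough sketch of that idea in Java (the port, paths, and parameter names are made up; a .NET Windows service would follow the same pattern):
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;

public class CopyService {
    public static void main(String[] args) throws Exception {
        // Listen on an obscure port; a copy job is triggered by an HTTP call such as
        // GET /copy?source=C:/in&target=C:/out&ext=.csv (values assumed URL-safe here).
        HttpServer server = HttpServer.create(new InetSocketAddress(8931), 0);
        server.createContext("/copy", exchange -> {
            // Parse the query string into a simple parameter map.
            Map<String, String> params = new HashMap<>();
            String query = exchange.getRequestURI().getQuery();
            if (query != null) {
                for (String pair : query.split("&")) {
                    String[] kv = pair.split("=", 2);
                    params.put(kv[0], kv.length > 1 ? kv[1] : "");
                }
            }
            int copied = 0;
            Path target = Paths.get(params.get("target"));
            // Copy every file in the source folder that matches the requested extension.
            try (DirectoryStream<Path> files = Files.newDirectoryStream(
                    Paths.get(params.get("source")), "*" + params.getOrDefault("ext", ""))) {
                for (Path file : files) {
                    Files.copy(file, target.resolve(file.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                    copied++;
                }
            }
            byte[] response = ("Copied " + copied + " file(s)").getBytes();
            exchange.sendResponseHeaders(200, response.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(response);
            }
        });
        server.start();
    }
}
Additional contexts (for example a status or cancel endpoint) can be registered on the same server in the same way.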
We are new to Windows Azure and have used Windows Azure Storage for blob objects while developing a Sitefinity application. However, the blob files uploaded to this storage when publishing to Azure from Visual Studio end up with only the file names; the folder-name prefix and slash are not preserved. Hence, we have to rename all the files manually on the Windows Azure management portal, putting the folder name and slash at the beginning of each file name, so that the page accessing these images can show them properly; otherwise, the images are not shown due to the incorrect path.
In the Sitefinity admin panel, when we upload these images/blob files for those pages, we upload them inside a folder, and we have configured Sitefinity to use Azure Storage instead of the database.
Please check the file attached to see the screenshot.
Please help me to solve this.
A few things I would like to mention first:
• Windows Azure does not support rename functionality. Renaming a blob = copying the blob followed by deleting it.
• The copy blob operation is asynchronous, so you must wait for the copy to finish before deleting the source blob.
• Blob storage does not support a folder hierarchy natively. As you may have already discovered, you create the illusion of a folder by prepending a blob name (say logo.png) with the name of the folder you want (say images), separated by a slash (/), so the blob name becomes images/logo.png.
Now, coming to your problem: needless to say, manually renaming the blobs would be a cumbersome exercise. I would recommend using a storage management tool to do it. One such example is Azure Management Studio from Cerebrata. With that tool, you can essentially create an empty folder in the container and then move the files into that folder. That, to me, would be the fastest way to achieve your objective.
If you wish to write some code to do that, here are the steps you will take:
• First, list all the blobs in the blob container.
• Next, loop over this list.
• For each blob (let's call it the source blob), take its name, prepend the folder name you want, and create a CloudBlockBlob instance for that new name.
• Next, initiate a copy operation on that new blob using StartCopyFromBlob, with your source blob as the copy source.
• Wait for the copy operation to finish. Once it has finished, you can safely delete the source blob.
P.S. I would have written some code but unfortunately I'm stuck with something else. I might write something later on (but please don't hold your breath for that :)).
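For reference, a rough sketch of those steps, here in Java with the classic azure-storage SDK (where startCopy is the equivalent of StartCopyFromBlob; the connection string, container name, and "images/" prefix are placeholders), could look like this:
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.*;

public class MoveBlobsIntoFolder {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account = CloudStorageAccount.parse("<connection-string>");
        CloudBlobContainer container = account.createCloudBlobClient()
                .getContainerReference("my-container");

        // 1. List all blobs in the container (flat listing, no virtual folders).
        for (ListBlobItem item : container.listBlobs(null, true)) {
            if (!(item instanceof CloudBlockBlob)) {
                continue;
            }
            CloudBlockBlob source = (CloudBlockBlob) item;
            // Skip blobs that already carry the folder prefix.
            if (source.getName().startsWith("images/")) {
                continue;
            }

            // 2. Build the new name by prepending the desired "folder" name.
            CloudBlockBlob target = container.getBlockBlobReference("images/" + source.getName());

            // 3. Copying is asynchronous: start it, then poll until it is no longer pending.
            target.startCopy(source);
            do {
                Thread.sleep(500);
                target.downloadAttributes();
            } while (target.getCopyState().getStatus() == CopyStatus.PENDING);

            // 4. Delete the original blob only if the copy succeeded.
            if (target.getCopyState().getStatus() == CopyStatus.SUCCESS) {
                source.deleteIfExists();
            }
        }
    }
}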