Is it possible to save a group of files with MinIO client in one transaction? - spring-boot

I have a Spring Boot application which stores files on a MinIO server. My application receives groups of files and must either save every file in a group or save nothing if the group has a problem. I use io.minio.MinioClient#putObject for each file in a group. My code currently looks like this:
fun saveFile(folderName: String, fileName: String, file: ByteArray) {
    file.inputStream().use {
        minioClient.putObject(folderName, fileName, it, PutObjectOptions(file.size.toLong(), -1))
    }
}

fun saveFiles(folderName: String, files: Map<String, ByteArray>) {
    try {
        files.forEach { (fileName, file) -> saveFile(folderName, fileName, file) }
    } catch (e: Exception) {
        files.forEach { (fileName, _) -> minioClient.removeObject(folderName, fileName) }
        throw e
    }
}
I wonder how I could refactor my saveFiles method to make it more transactional.
N.B. There are no rules about reading files by groups - each file could be read individually.

You can try this S3 feature; MinIO supports it as well.
Create a .tar or .zip archive and send it to S3 with the metadata option snowball-auto-extract=true (header: X-Amz-Meta-Snowball-Auto-Extract); the archive will be automatically extracted on the server side.
This is not a transaction, but it looks very similar to me.
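For illustration, here is a minimal Kotlin sketch in the spirit of the code from the question: it packs the whole group into one in-memory .tar and uploads it in a single putObject call. The use of Apache Commons Compress for the tar, the saveFilesAsArchive/groupName names, and the setHeaders call on PutObjectOptions are my assumptions; check them against the MinIO client version you actually use.
import io.minio.PutObjectOptions
import org.apache.commons.compress.archivers.tar.TarArchiveEntry
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream
import java.io.ByteArrayOutputStream

// minioClient is the same io.minio.MinioClient instance used in the question.
fun saveFilesAsArchive(folderName: String, groupName: String, files: Map<String, ByteArray>) {
    // Pack the whole group into a single .tar archive in memory.
    val archive = ByteArrayOutputStream().use { bytes ->
        TarArchiveOutputStream(bytes).use { tar ->
            files.forEach { (fileName, content) ->
                val entry = TarArchiveEntry(fileName)
                entry.size = content.size.toLong()
                tar.putArchiveEntry(entry)
                tar.write(content)
                tar.closeArchiveEntry()
            }
        }
        bytes.toByteArray()
    }

    // One putObject call for the whole group; the snowball-auto-extract header
    // asks the server to unpack the archive into individual objects.
    // Assumption: this PutObjectOptions version exposes setHeaders(Map).
    val options = PutObjectOptions(archive.size.toLong(), -1).apply {
        setHeaders(mapOf("X-Amz-Meta-Snowball-Auto-Extract" to "true"))
    }
    archive.inputStream().use {
        minioClient.putObject(folderName, "$groupName.tar", it, options)
    }
}
With this approach the group either arrives as one upload and is extracted, or the single upload fails as a whole, which is the "looks like a transaction" behaviour described above.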

Related

Azure Data Factory Blob Event Trigger not working

We see the error message below for an ADF Blob Event Trigger, although there was no code change to the Blob trigger container or folder path. We see this error for a Web Activity when it is included in the pipeline.
ErrorCode=InvalidTemplate, ErrorMessage=Unable to parse expression '*sanitized*'
I faced the same problem and fixed it. Here is the solution.
The problem is that I parameterised some input for the linkedService and datasets. For example, here is one of my blob storage dataset Bicep files:
resource stagingBlobDataset 'Microsoft.DataFactory/factories/datasets@2018-06-01' = {
  // ... Create a JSON file dataset in a blob storage linkedService
    parameters: {
      tableName: {
        type: 'string'
      }
    }
    typeProperties: {
      location: {
        type: 'AzureBlobStorageLocation'
        // fileName: '@concat(dataset().tableName,\'.json\')' // WRONG LINE
        // new line
        fileName: {
          value: '@concat(dataset().tableName,\'.json\')'
          type: 'Expression'
        }
      }
    }
  }
}
I wish Microsoft had provided more info in the error message. Anyway, I found the issue in my Data Factory code.

Getting 403 while downloading files from a certain folder in an Amazon S3 bucket in Spring boot application

I am using one S3 bucket for my Spring Boot application.
I created a folder in the S3 bucket and uploaded files into it from my Spring Boot application with the help of the upload function below. When I list the files within the folder, I can see them. But I cannot download them; I always get a 403.
Code snippet for uploading, listing the objects and downloading thereafter:
// Download is failing
public File downloadObject(String filePath) {
    File file = null;
    log.info("Downloading object {} from s3 bucket {}", filePath, bucketName);
    try {
        file = File.createTempFile(filePath, "");
        file.deleteOnExit();
        amazonS3.getObject(new GetObjectRequest(bucketName, filePath), file);
    } catch (Exception exception) {
        exception.printStackTrace();
    }
    return file;
}

// Following function is working perfectly fine
public List<String> listObjects(String pathPrefix) {
    final ListObjectsV2Result listingResponse = amazonS3.listObjectsV2(new ListObjectsV2Request()
            .withPrefix(pathPrefix)
            .withBucketName(bucketName));
    if (Objects.nonNull(listingResponse)) {
        List<String> result = listingResponse.getObjectSummaries().stream()
                .map(S3ObjectSummary::getKey)
                .collect(Collectors.toList());
        result.remove(pathPrefix);
        return result;
    }
    return Collections.emptyList();
}

// Uploading is also working fine
public void uploadFile(InputStream inputStream, String filePath) {
    try {
        amazonS3.putObject(new PutObjectRequest(bucketName, filePath, inputStream, null));
    } catch (SdkClientException exception) {
        exception.printStackTrace();
    }
}
S3 bucket permission is as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAReadWriteAccessToBucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456:role/abcd"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
As you can see, I have given every permission in the bucket policy. Even so, I cannot figure out why the download is failing. Please help.
The first thing I notice is that you are using the old V1 S3 API. Amazon strongly recommends moving to the AWS SDK for Java V2.
The AWS SDK for Java 2.x is a major rewrite of the version 1.x code base. It’s built on top of Java 8+ and adds several frequently requested features. These include support for non-blocking I/O and the ability to plug in a different HTTP implementation at run time.
The Amazon S3 V2 Java API works nicely in a Spring application. There is a multi-service example that shows use of the S3 V2 Java API within a Spring Boot app. In that use case, we get a byte[] to pass to the Amazon Rekognition service.
To get a byte[] from an object in an Amazon S3 bucket (which is what I assume you mean by "download"), you can use V2 code like this:
public byte[] getObjectBytes(String bucketName, String keyName) {
    s3 = getClient();
    try {
        // Create a GetObjectRequest instance
        GetObjectRequest objectRequest = GetObjectRequest.builder()
                .key(keyName)
                .bucket(bucketName)
                .build();

        // Get the byte[] from this S3 object
        ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
        byte[] data = objectBytes.asByteArray();
        return data;
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
    return null;
}
Refer to this end-to-end example, which shows how to perform this use case in a Spring app. Look at the code in the S3Service class.
Creating an example AWS photo analyzer application using the AWS SDK for Java
I just ran this app and it works perfectly...

Issues in JSON to XML and Upload to FTP in Ballerina Integrator

I am trying the samples given in the Ballerina Integrator tutorials. While running the "JSON to XML and upload to FTP" sample, I am facing this issue:
error org.wso2.ei.b7a.ftp.core.util.BallerinaFTPException
I know the reason for this issue but don't know where I have to put the setting. Please help me sort out the issue.
The reason for the issue: the FTP credentials are given in a conf file. I put the conf file under the root directory, but it is not picked up. I need to give
b7a.config.file=src/upload_to_ftp/resources/ballerina.conf
but I don't know where I have to pass this.
Thanks in advance.
You can add -b7a.config.file when running the generated jar file.
Official documentation :
https://ei.docs.wso2.com/en/latest/ballerina-integrator/develop/running-on-jvm/
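For example, assuming the build produced target/bin/upload_to_ftp.jar (the jar path here is just a guess, use whatever your build actually generated, and check the exact flag syntax against the documentation above), the invocation would look roughly like:
java -jar target/bin/upload_to_ftp.jar --b7a.config.file=src/upload_to_ftp/resources/ballerina.conf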
However, keeping the ballerina.conf file in the root directory should work. Ballerina looks for the conf file automatically when running. Make sure the conf file is outside the src directory.
For the error that you have mentioned, could you add logs to see whether the JSON has been converted to XML properly? Since the code is structured to check whether the conversion has occurred, it should print an XML value:
if (employee is xml) {
    var ftpResult = ftp->put(remoteLocation, employee);
    if (ftpResult is error) {
        log:printError("Error", ftpResult);
        response.setJsonPayload({Message: "Error occurred uploading file to FTP.", Reason: ftpResult.reason()});
    } else {
        response.setJsonPayload({Message: "Employee records uploaded successfully."});
    }
} else {
    response.setJsonPayload({Message: "Error occurred transforming json to xml.", Reason: employee.reason()});
}
The if (employee is xml) part checks whether the conversion was successful.
The same applies after the file is sent to the server: if the file hasn't been sent, then ftpResult will be an error. Basically, if you got the message { Message: "Employee records uploaded successfully." } then all the checks should have passed.
I passed the credentials directly to ftpConfig and then it worked fine. The conversion happened and the converted file was uploaded to the FTP location successfully.
ftp:ClientEndpointConfig ftpConfig = {
    protocol: ftp:SFTP,
    host: "corpsftp.dfaDFDA.com",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "DDFDS",
            password: "FADFHYFGJ"
        }
    }
};
Output:
{
    "Message": "Employee records uploaded successfully."
}

WebTestClient with multipart file upload

I'm building a microservice using Spring Boot + WebFlux, and I have an endpoint that accepts a multipart file upload, which works fine when I test with curl and Postman:
@PostMapping("/upload", consumes = [MULTIPART_FORM_DATA_VALUE])
fun uploadVideo(@RequestPart("video") filePart: Mono<FilePart>): Mono<UploadResult> {
    log.info("Video upload request received")
    return filePart.flatMap { video ->
        val fileName = video.filename()
        log.info("Saving video to tmp directory: $fileName")
        val file = temporaryFilePath(fileName).toFile()
        video.transferTo(file)
            .thenReturn(UploadResult(true))
            .doOnError { error ->
                log.error("Failed to save video to temporary directory", error)
            }
            .onErrorMap {
                VideoUploadException("Failed to save video to temporary directory")
            }
    }
}
I'm now trying to test using WebTestClient:
@Test
fun shouldSuccessfullyUploadVideo() {
    client.post()
        .uri("/video/upload")
        .contentType(MULTIPART_FORM_DATA)
        .syncBody(generateBody())
        .exchange()
        .expectStatus()
        .is2xxSuccessful
}

private fun generateBody(): MultiValueMap<String, HttpEntity<*>> {
    val builder = MultipartBodyBuilder()
    builder.part("video", ClassPathResource("/videos/sunset.mp4"))
    return builder.build()
}
The endpoint is returning a 500 because I haven't created the temp directory location to write the files to. However, the test passes even though I'm checking for is2xxSuccessful. If I debug into the assertion that is2xxSuccessful performs, I can see it fails because of the 500, yet I still get a green test.
Not sure what I am doing wrong here. The VideoUploadException that I map to simply extends ResponseStatusException:
class VideoUploadException(reason: String) : ResponseStatusException(HttpStatus.INTERNAL_SERVER_ERROR, reason)

how to create log files in Gradle

I am using Gradle 2.11 and I am unable to find a way to create log files that record debug-level information. I don't want to do it on the command line by redirecting the logs to a file. I want Gradle code, just like Apache Ant's record task, so that I can put that code in my build.gradle file wherever I want to create logs.
For example: if I want to convert this Ant task to Gradle, what would the code be?
<record name="${BuildLogPath}/${BuildLogFile}" append="no" loglevel="verbose" action="start"/>
Gradle integrates really nicely with Ant (https://docs.gradle.org/2.11/userguide/ant.html).
It doesn't automatically record each step; I didn't realize that is what you were asking. The updated example below will produce the output, and you can log manually:
ant.record(name: "${BuildLogPath}/${BuildLogFile}", append:false, loglevel: "verbose", action: "start")
ant.echo("start logging")
//... do stuff here
ant.echo(message: "end logging")
ant.record(name: "${BuildLogPath}/${BuildLogFile}", append:false, loglevel: "verbose", action: "stop")
This may do more of what you are asking. Note: this is something I adapted slightly from this excellent example:
http://themrsion.blogspot.com/2013/10/gradle-logging-writing-to-log-to-file.html
import org.gradle.logging.internal.*

String currentDate = new Date().format('yyyy-MMM-dd_HH-mm-ss-S')
String loggingDirectory = "${rootDir}/build/logs"
mkdir("${loggingDirectory}")
File gradleBuildLog = new File("${loggingDirectory}/${currentDate}_gradleBuild.log")

gradle.services.get(LoggingOutputInternal).addStandardOutputListener(new StandardOutputListener() {
    void onOutput(CharSequence output) {
        gradleBuildLog << output
    }
})

gradle.services.get(LoggingOutputInternal).addStandardErrorListener(new StandardOutputListener() {
    void onOutput(CharSequence output) {
        gradleBuildLog << output
    }
})