aws s3 delete object not working - spring

I'm trying to upload/delete an image to/from an AWS S3 bucket using Spring Boot.
public class AmazonClient {

    private AmazonS3 s3client;

    // accessKey, secretKey, region and bucketName are injected from configuration
    private String accessKey;
    private String secretKey;
    private String region;
    private String bucketName;

    private void initializeAmazon() {
        AWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
        this.s3client = AmazonS3ClientBuilder.standard()
                .withRegion(region)
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .build();
    }

    private void uploadFileTos3bucket(String fileName, File file) {
        s3client.putObject(new PutObjectRequest(bucketName, fileName, file)
                .withCannedAcl(CannedAccessControlList.PublicRead));
    }

    public void deleteFileFromS3Bucket(String fileUrl) {
        String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
        s3client.deleteObject(new DeleteObjectRequest(bucketName + "/", fileName));
    }
}
The upload function works well; I can see the file has been uploaded to the S3 bucket. But the delete function seems to malfunction: I get a success message, but the file is still in the bucket.
Thanks in advance if anyone can help me figure out the problem.

From the javadoc of deleteObject (emphasis mine)
Deletes the specified object in the specified bucket. Once deleted, the object can only be restored if versioning was enabled when the object was deleted.
If attempting to delete an object that does not exist, Amazon S3 will return a success message instead of an error message.
So, most probably the path (fileName) you construct in deleteFileFromS3Bucket does not point to an S3 object.
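To confirm that, you can ask the SDK whether the computed key resolves to an object at all. A minimal sketch reusing the question's fields (doesObjectExist is part of the AmazonS3 interface in recent v1 SDKs):
String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
// deleteObject reports success even for missing keys, so check existence explicitly
if (!s3client.doesObjectExist(bucketName, fileName)) {
    System.out.println("No object with key '" + fileName + "' in bucket " + bucketName);
}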
EDIT
I'm updating my answer based on the comments:
The file name used has special characters (: in the provided example), which get URL encoded (percent encoded). This encoded URL cannot be used to retrieve or delete the S3 object, because the percent sign in the URL would get encoded again (% gets encoded to %25).
The encoded URL has to be decoded. One way is to use java.net.URLDecoder
URLDecoder.decode(encodedPath, "UTF-8")
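Applied to the question's delete method, the decode step might look like this (a sketch keeping the original field names; URLDecoder is java.net.URLDecoder and the decode call throws java.io.UnsupportedEncodingException):
public void deleteFileFromS3Bucket(String fileUrl) throws UnsupportedEncodingException {
    String encodedName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
    // Undo the percent encoding so the key matches the object name stored in S3
    String fileName = URLDecoder.decode(encodedName, "UTF-8");
    s3client.deleteObject(new DeleteObjectRequest(bucketName, fileName));
}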

public boolean deleteFileFromS3Bucket(String fileUrl) {
    String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
    try {
        DeleteObjectsRequest delObjReq = new DeleteObjectsRequest(bucketName).withKeys(fileName);
        s3client.deleteObjects(delObjReq);
        return true;
    } catch (SdkClientException e) {
        return false;
    }
}
For me, this is the option that works.

Just found out that I had added an extra slash in new DeleteObjectRequest.
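For reference, the difference in the question's deleteFileFromS3Bucket is just the trailing slash on the bucket name:
// wrong: the trailing slash makes the bucket name point at a bucket that does not exist
s3client.deleteObject(new DeleteObjectRequest(bucketName + "/", fileName));
// right: pass the bare bucket name
s3client.deleteObject(new DeleteObjectRequest(bucketName, fileName));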

The only thing that worked for me was deleting it through Cyberduck (I neither work for nor am promoting Cyberduck; I genuinely used it and it worked). Here are the steps of what I did:
Download and install Cyberduck.
Click on Open Connection
Select Amazon S3 from the dropdown (default would be FTP)
Enter your access key ID and secret access key (if you don't have one, you need to create one through IAM on AWS).
You will see a list of your S3 buckets. Select the file, folder, or bucket you want to delete, right-click, and delete. Even files of 0 KB show up here and can be deleted.

Related

Tomcat Performance with Spring Boot API for File Upload

I have a Spring Boot API, and one of the endpoints allows users to upload videos. My controller takes the file as a MultipartFile and stores it in a temp folder accessible to Tomcat. Once I have it stored on disk, I then push the video to an S3 bucket.
To me, this seems less than optimal: if I wanted to have 100 or 1000 users upload at once, it seems really non-performant to write the files to disk first.
As a little background, I'm storing it on disk with the intention that if there is an issue pushing to S3, I can retry.
The code below might show what I'm doing better than the above:
public Video addVideo(@RequestParam("title") String title,
        @RequestParam("description") String description,
        @RequestParam(value = "file", required = true) MultipartFile file) {
    this.amazonS3ClientService.uploadFileToS3Bucket(file, title, description);
    return null; // placeholder: the original post omits building and returning the Video
}
Method for storing Video file:
String fileNameWithExtension = awsS3FileName + "." + FilenameUtils.getExtension(multipartFile.getOriginalFilename());
// creating the file on the server (temporarily)
File file = new File(tomcatTempDir + fileNameWithExtension);
FileOutputStream fos = new FileOutputStream(file);
fos.write(multipartFile.getBytes());
fos.close();
PutObjectRequest putObjectRequest = new PutObjectRequest(this.awsS3Bucket,
        awsS3BucketFolder + uniqueId + "/" + fileNameWithExtension, file);
if (enablePublicReadAccess) {
    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
}
// Upload the file as a new object with ContentType and title specified.
amazonS3.putObject(putObjectRequest);
// removing the file created on the server
file.delete();
So my question is: is there a better way in Tomcat to:
A) Take in a file via a controller
B) Push it to S3
There is no other way to do it with multipart. The problem with multipart is that, to properly segment the parts of the request, parts sometimes need to be skipped or need to be repeatable. That is impossible in memory without blowing memory up. Therefore, Commons FileUpload caches them on disk after a certain threshold is reached.
Multipart requests are the worst way to do this. I highly recommend using either PUT or POST with content type application/octet-stream. You can take the bare request input stream and pass it to HttpClient to stream to your backend server. I did this 5 years ago and it works for gigabytes. I posted the solution on the Apache HttpClient mailing list.
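As a rough sketch of that raw application/octet-stream approach (the endpoint path, bucket name, and injected AmazonS3 client here are assumptions, not part of the original answer):
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

@RestController
public class RawVideoUploadController {

    private final AmazonS3 amazonS3;

    public RawVideoUploadController(AmazonS3 amazonS3) {
        this.amazonS3 = amazonS3;
    }

    // Client sends: PUT /videos/{title} with Content-Type: application/octet-stream
    @PutMapping("/videos/{title}")
    public ResponseEntity<Void> upload(@PathVariable String title,
                                       @RequestHeader("Content-Length") long contentLength,
                                       HttpServletRequest request) throws IOException {
        ObjectMetadata metadata = new ObjectMetadata();
        // Passing the length up front lets the SDK stream the body without buffering it
        metadata.setContentLength(contentLength);
        // The request body is streamed straight through to S3; nothing touches the disk
        amazonS3.putObject("my-video-bucket", title, request.getInputStream(), metadata);
        return ResponseEntity.ok().build();
    }
}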
There is one way multipart could still work, under specific conditions:
All parts are in the correct physical order you want to read them in
Your write to the backend is fast enough to sustain the read from the front
Consume the root part, then go over to the next physical one, processing the request body lazily. JAX-WS RI (Metro) has very nice handling of multipart requests for XOP/MTOM. Learn from that, because you won't be able to make it any better.
Perhaps you can try to stream the input stream from your MultipartFile directly to S3.
Consider the following uploadFileToS3Bucket method:
public PutObjectResult uploadFileToS3Bucket(InputStream input, long size, String title, String description) {
    // Indicate the length of the content to avoid the need for the AWS SDK to compute it
    // See: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/PutObjectRequest.html#PutObjectRequest-java.lang.String-java.lang.String-java.io.InputStream-com.amazonaws.services.s3.model.ObjectMetadata-
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(size); // rely on the Spring implementation; maybe you can also use input.available()
    // compute the object key as appropriate
    String key = "...";
    PutObjectRequest putObjectRequest = new PutObjectRequest(
            this.awsS3Bucket, key, input, objectMetadata
    );
    // The rest of your code
    if (enablePublicReadAccess) {
        putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
    }
    // Upload the file as a new object with ContentType and title specified.
    return amazonS3.putObject(putObjectRequest);
}
Of course, you need to provide the service with the input stream obtained from the client request associated with the MultipartFile object:
public Video addVideo(
        @RequestParam("title") String title,
        @RequestParam("description") String description,
        @RequestParam(value = "file", required = true) MultipartFile file) throws IOException {
    try (InputStream input = file.getInputStream()) {
        this.amazonS3ClientService.uploadFileToS3Bucket(input, file.getSize(), title, description);
    }
    return null; // placeholder: the original post omits building and returning the Video
}
You can probably also play with the getBytes method of MultipartFile and create a ByteArrayInputStream to perform the operation.
In addVideo:
byte[] bytes = file.getBytes();
In uploadFileToS3Bucket:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(bytes.length);
PutObjectRequest putObjectRequest = new PutObjectRequest(
        this.awsS3Bucket, key, new ByteArrayInputStream(bytes), objectMetadata
);
I would prefer the first solution, but try to determine which option offers you the best performance.

Can anyone tell me the Java utility to download documents to your local PC from Content Engine in FileNet?

Hello guys, I am trying to write a Java utility to download documents to the local PC from Content Engine in FileNet. Can anyone help me out?
You should read about the FileNet P8 CE API; you can start here:
You have to know that the FileNet Content Engine has two types of interface that can be used to connect to it: RMI and SOAP. A command-line app like the one you are planning to write can connect only by SOAP (I am not sure this is true for the newest versions, but what is definitely true is that it is much easier to set up a SOAP connection than EJB), so you have to read the part of the documentation describing how to establish a connection to your Content Engine this way.
On the link above, you can see that first of all you have to collect the required jars for the SOAP connection: please check the "Required for a Content Engine Java API CEWS transport client" section for the file names.
After you collect them, you will need the SOAP WSDL URL and a proper user and password; the user has to have read-properties and read-content rights to the documents you would like to download. You also need to know the ObjectStore name and the identifier or the location of your documents.
Now we have to continue using this Setting Up a Thick Client Development Environment link (I opened it from the page above).
Here you have to scroll down to the "CEWS transport protocol (non-application-server dependent)" section.
Here you can see that you have to create a jaas.conf file with the following content:
FileNetP8WSI {
com.filenet.api.util.WSILoginModule required;
};
This file must be passed via the following JVM argument when you run the class we will create:
java -cp %CREATE_PROPER_CLASSPATH% -Djava.security.auth.login.config=jaas.conf DownloadClient
Now, in the top-right corner of the page, you can see links that describe what to do in order to get a connection, like "Getting Connection", "Retrieving an EntireNetwork Object", etc. I used those snippets to create the class below for you.
import java.io.InputStream;
import java.util.Iterator;

import javax.security.auth.Subject;

import com.filenet.api.collection.ContentElementList;
import com.filenet.api.constants.ClassNames;
import com.filenet.api.core.Connection;
import com.filenet.api.core.ContentTransfer;
import com.filenet.api.core.Document;
import com.filenet.api.core.Domain;
import com.filenet.api.core.Factory;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.util.Id;
import com.filenet.api.util.UserContext;

public class DownloadClient {
    public static void main(String[] args) throws Exception {
        String uri = "http://filenetcehost:9080/wsi/FNCEWS40MTOM";
        String userId = "ceadmin";
        String password = "password";
        String osName = "Test";
        UserContext uc = UserContext.get();
        try {
            // Get the connection
            Connection conn = Factory.Connection.getConnection(uri);
            // The last value (the JAAS stanza name) must match the name of the login module in jaas.conf
            Subject subject = UserContext.createSubject(conn, userId, password, "FileNetP8WSI");
            // Set the subject on the local thread via a ThreadLocal
            uc.pushSubject(subject);
            // Get the default domain and the object store
            Domain domain = Factory.Domain.getInstance(conn, null);
            ObjectStore os = Factory.ObjectStore.fetchInstance(domain, osName, null);
            // From now on, we are connected to FileNet CE and object store "Test"
            // https://www.ibm.com/support/knowledgecenter/en/SSNW2F_5.2.0/com.ibm.p8.ce.dev.ce.doc/document_procedures.htm
            Document doc = Factory.Document.getInstance(os, ClassNames.DOCUMENT,
                    new Id("{F4DD983C-B845-4255-AC7A-257202B557EC}"));
            // Because in FileNet a document can have more than one associated content element
            // (e.g. it stores single-page TIFFs and handles them as one multi-paged document),
            // we have to get the content elements and iterate over the list.
            ContentElementList docContentList = doc.get_ContentElements();
            Iterator iter = docContentList.iterator();
            while (iter.hasNext()) {
                ContentTransfer ct = (ContentTransfer) iter.next();
                // Get the content of the element.
                InputStream stream = ct.accessContentStream();
                // Now you have an InputStream to the document content; you can save it to a local
                // file or do whatever you want with it, just do not forget to close the stream.
                stream.close();
            }
        } finally {
            uc.popSubject();
        }
    }
}
This code just shows how you can implement such a thick client; I created it now from the documentation, so it is not production code. But after adding the required jars and handling the exceptions, it will probably work.
You have to specify the right URL, user, password and docId of course, and you have to implement the copy from the TransferInputStream to a FileOutputStream, e.g. by using commons-io or Java NIO, etc.
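For the missing copy step, here is a minimal Java NIO sketch. It assumes ct.get_RetrievalName() returns the stored file name of the content element, and the target directory is a placeholder:
// Inside the while loop, instead of closing the stream right away:
try (InputStream stream = ct.accessContentStream()) {
    java.nio.file.Path target = java.nio.file.Paths.get("C:/temp", ct.get_RetrievalName());
    // Streams the content to the local file, overwriting any previous download
    java.nio.file.Files.copy(stream, target, java.nio.file.StandardCopyOption.REPLACE_EXISTING);
}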

Amazon S3 secure URL at the bucket level

I want to be able to serve URLs to clients that are "signed" and so are only valid for 24 hours (for example).
However, I don't want to call S3 for every URL generated:
AWS::S3::S3Object.new(bucket, name).url_for(:read, :secure => true, :expires => expires_in).to_s
Instead, I want to generate the URL by myself (I have the file name and the bucket link; I can build it myself).
However, I want to sign the URL at the bucket level (say, once a day for all the files in a given bucket). Is this possible?
When you create a pre-signed URL, that is done completely locally. You could do it "by yourself", but it is much easier to use the SDK, and there would be no practical difference. Note that there is no "sign" action in the S3 API.
However, you cannot sign at the "bucket level", because the signature is checked per object. Signing a whole bucket would not be feasible.
Sorry, I do not have Ruby code for this, only Java...
But you will not be able to get a presigned URL for the whole bucket, only for each file.
Here is the function I created. It will print a presigned URL for every object in the bucket. Does the process make sense?
private static URL getUrl(AmazonS3Client amazonS3Client, S3ObjectSummary s3ObjectSummary) {
    return amazonS3Client.generatePresignedUrl(
            new GeneratePresignedUrlRequest(s3ObjectSummary.getBucketName(), s3ObjectSummary.getKey())
                    .withMethod(HttpMethod.GET)
                    .withExpiration(getExpiration()));
}

// Assumed helper (the original post called GetExperation() without showing it):
// a 24-hour expiry, matching the question's example.
private static java.util.Date getExpiration() {
    return new java.util.Date(System.currentTimeMillis() + 24L * 60 * 60 * 1000);
}

public static void run(String accessKey, String secretKey, String bucketName) {
    AmazonS3Client amazonS3Client = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
    amazonS3Client.listObjects(bucketName)
            .getObjectSummaries()
            .stream()
            .forEach(s3ObjectSummary ->
                    System.out.println(getUrl(amazonS3Client, s3ObjectSummary)));
}

not a valid virtual path - when trying to return a file from a url

We download a file from our CDN and then return a URL to that downloaded file to the user. I'm trying to get this implemented so that when a user clicks the download button, the code tests the URL to the downloaded file and then forces a save prompt based on that local URL.
So, for example, if there is a download button on the page for a specific .pdf, we have code in our controller that goes to the CDN, downloads the file, zips it, and then returns a URL such as: http://www.ourLocalAssetServer.com/assets/20120331002728.zip
I'm not sure if you can use the File() method to return the resource to the user so as to cause a save prompt, when you have a URL to the file rather than a virtual path to a system directory.
So how can I get this working with the URL? I need the download button to force a save prompt on the user's end, given a URL such as the one generated in the example above. Note that I am using a POST, not a GET, so I'm not sure which I should use in this case either. It is hitting my GetFileDownloadUrl action, but it ultimately errors, saying it's not a virtual path.
Here's my code:
@foreach (CarFileContent fileContent in ModelCarFiles)
{
    using (Html.BeginForm("GetFileDownloadUrl", "Car", FormMethod.Get, new { carId = Model.CarId, userId = Model.UserId, fileCdnUrl = fileContent.CdnUrl }))
    {
        @Html.Hidden("userId", Model.UserId);
        @Html.Hidden("carId", Model.CarId);
        @Html.Hidden("fileCdnUrl", fileContent.CdnUrl);
        <p><input type="submit" name="SubmitCommand" value="download" /> @fileContent.Name</p>
    }
}
public ActionResult GetFileDownloadUrl(string fileCdnUrl, int carId, int userId)
{
    string downloadUrl = string.Empty;
    // take the passed CDN URL and download that file to one of our other servers so the user can download the .zip file
    downloadUrl = GetFileZipDownloadUrl(carId, userId, fileCdnUrl);
    // now we have the URL to the downloaded zip file e.g. http://www.ourLocalAssetServer.com/assets/20120331002728.zip
    int i = downloadUrl.LastIndexOf("/");
    string fileName = downloadUrl.Substring(i + 1);
    return File(downloadUrl, "application/zip", fileName);
}
error: not a valid virtual path
This won't work unless the zip file is within your virtual path.
The File method you have used here, File(string, string, string), expects a file path, which will be used to create a FilePathResult.
Another option would be to download it (using the WebClient.DownloadData or DownloadFile methods) and pass either the byte array or the file path (depending on which you choose).
var webClient = new WebClient();
byte[] fileData = webClient.DownloadData(downloadUrl);
return File(fileData, "application/zip", fileName);
And the lines where you get the index of "/" just to get the filename are unnecessary; you could have used:
string fileName = System.IO.Path.GetFileName(downloadUrl);

Returning an MVC FileContentResult to download .pdf file from

Hi, I've searched around for this quite a bit, but I didn't find a situation that really resembled mine... hope I didn't miss a duplicate somewhere.
The Goal: Return a file from a UNC share to the client as a download/open option.
Info: The share is located on a different server than the one hosting the web site. When a corresponding folder name on the menu is clicked, I am able to successfully read from the share (I return the files as a JSON result), and in jQuery I then append list items for each file found in the folder and make the list item IDs the filenames. This works great.
When these appended list items are clicked, I pass their IDs (which are the filenames, like "thefile.pdf") to the following controller, which returns a FileContentResult.
files[0].ToString() below is similar to "\\server\folder\"
public ActionResult OpenTheFile(string id)
{
    List<string> files = new List<string>();
    files.AddRange(Directory.GetFiles(LNFiles.ThePath, id, SearchOption.AllDirectories));
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + id + ";");
    return File(files[0].ToString(), System.Net.Mime.MediaTypeNames.Application.Pdf, id);
}
And yes, the obligatory "it works on my local machine". When it is deployed to the IIS 7.5 server and I click on a list item, I get this YSOD error:
The handle is invalid. (Exception from HRESULT: 0x80070006 (E_HANDLE))
I'm impersonating a user with rights to the file share... I'm at a loss; I was thinking it might be something with encoding or screwed-up rights. I've tried using a virtual directory instead, but alas, same issue.
In my case, changing this:
public ActionResult Download(int id)
{
    var item = ItemRepo.GetItemById(id);
    string path = Path.Combine(Server.MapPath("~/App_Data/Items"), item.Path);
    return File(path, "application/octetstream", item.Path);
}
to this:
public ActionResult Download(int id)
{
    var item = ItemRepo.GetItemById(id);
    string path = Path.Combine(Server.MapPath("~/App_Data/Items"), item.Path);
    return File(new FileStream(path, FileMode.Open), "application/octetstream", item.Path);
}
worked. I am putting this here just in case anyone needs it.
Check out this for a workaround
You may want to try a packet capture to see if you are receiving the same issue as documented here:
http://forums.asp.net/t/1473379.aspx/1
For your UNC path: are you directly referencing \\servername\share, or are you using a network-mapped drive letter?
God bless you, ProgRockCode. And since that involved an ActionResult, I wrote a custom ActionResult that uses the "WriteFile" method.
public override void ExecuteResult(ControllerContext context)
{
    // FilePath is a property on the custom ActionResult holding the file to serve;
    // the second argument to WriteFile streams the file rather than buffering it in memory.
    context.HttpContext.Response.WriteFile(FilePath, true);
    context.HttpContext.Response.End();
}
