I want to be able to serve URLs to clients that are "signed" and so are only valid for 24 hours (for example).
However, I don't want to call S3 for every URL generated:
AWS::S3::S3Object.new(bucket, name).url_for(:read, :secure => true, :expires => expires_in).to_s
Instead, I want to generate the URL by myself (I have the file name and the bucket link, I can build it myself).
However, I want to sign the URL at the bucket level (say, once a day for all the files in a given bucket). Is this possible?
When you create a pre-signed URL, that is done completely locally. You could do it "by yourself", but it is much easier to use the SDK, and there would be no practical differences. Note that there is no "sign" action in the S3 API.
However, you cannot sign at the "bucket level", as the signature is checked per object. I believe signing a whole bucket would not be feasible.
Sorry, I do not have Ruby code for this, only Java...
But you will not be able to get a presigned URL for the whole bucket, only for each file.
Here is the function I created. It will print a presigned URL for every object in the bucket. Does the process make sense?
private static URL GetURL(AmazonS3Client amazonS3Client, S3ObjectSummary s3ObjectSummary) {
    // generatePresignedUrl signs locally; no request is sent to S3 here
    return amazonS3Client.generatePresignedUrl(
            new GeneratePresignedUrlRequest(s3ObjectSummary.getBucketName(), s3ObjectSummary.getKey())
                    .withMethod(HttpMethod.GET)
                    .withExpiration(GetExpiration()));
}

public static void run(String accessKey, String secretKey, String bucketName) {
    AmazonS3Client amazonS3Client = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
    // listObjects does call S3 once to enumerate the keys; the signing itself stays local
    amazonS3Client.listObjects(bucketName)
            .getObjectSummaries()
            .stream()
            .forEach(s3ObjectSummary
                    -> System.out.println(GetURL(amazonS3Client, s3ObjectSummary).toString()));
}
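The snippet references a GetExpiration() helper that is not shown; here is a minimal sketch of it, using the 24-hour window from the question (the exact window is an assumption, adjust as needed):
// Hypothetical helper assumed by the snippet above: presigned URLs expire 24 hours from now,
// matching the 24-hour window mentioned in the question.
private static java.util.Date GetExpiration() {
    return new java.util.Date(System.currentTimeMillis() + 24L * 60 * 60 * 1000);
}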
I have a Spring Boot API and one of the endpoints allows users to upload videos. My controller basically takes the file as a MultipartFile, and then I store it in a temp folder accessible to Tomcat. Once I have it stored on disk, I then push the video to an S3 bucket.
To me, this seems less than optimal: if I wanted to have 100 or 1,000 users uploading at once, it seems really inefficient to write the files to disk first.
As a little background, I'm storing it on disk with the intention that if there is an issue pushing to S3 I can retry.
The code below might show what I'm doing better than the description above:
public Video addVideo(@RequestParam("title") String title,
                      @RequestParam("Description") String description,
                      @RequestParam(value = "file", required = true) MultipartFile file) {
    this.amazonS3ClientService.uploadFileToS3Bucket(file, title, description);
}
Method for storing the video file:
String fileNameWithExtension = awsS3FileName + "." + FilenameUtils.getExtension(multipartFile.getOriginalFilename());

// creating the file on the server (temporarily)
File file = new File(tomcatTempDir + fileNameWithExtension);
FileOutputStream fos = new FileOutputStream(file);
fos.write(multipartFile.getBytes());
fos.close();

PutObjectRequest putObjectRequest = new PutObjectRequest(this.awsS3Bucket,
        awsS3BucketFolder + uniqueId + "/" + fileNameWithExtension, file);
if (enablePublicReadAccess) {
    putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
}
// Upload the file as a new object with ContentType and title specified.
amazonS3.putObject(putObjectRequest);

// removing the file created on the server
file.delete();
So my question is: is there a better way in Tomcat to:
A) Take in a file via a controller
B) Push it to S3
There is no other way to do it with multipart. The problem with multipart is that, to properly segment the parts from the request, they sometimes need to be skipped or need to be repeatable. That is impossible purely in memory without memory exploding. Therefore, Commons FileUpload caches them on disk after a certain threshold is reached.
Multipart requests are the worst way to do this. I highly recommend using either PUT or POST with content type application/octet-stream. You can take the bare request input stream and pass it to HttpClient to stream to your backend server (a sketch of that idea, adapted to this question, follows after the list below). I did this 5 years ago already and it works for gigabytes. I posted the solution on the Apache HttpClient mailing list.
There is one possibility for how this could work, under specific conditions:
All parts are in the correct physical order you want to read
Your write to a backend is fast enough to sustain the read from the front
Consume the root part and then move on to the next physical one, processing the request body lazily. JAX-WS RI (Metro) handles multipart requests for XOP/MTOM very nicely. Learn from that, because you won't be able to make it any better.
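To make the PUT/application/octet-stream suggestion concrete for this question, here is a minimal sketch (not the poster's code) of a Spring controller that hands the raw servlet input stream straight to the AWS SDK without multipart parsing or a temp file. The endpoint path, class name, and bean wiring are assumptions; it also assumes the javax-based servlet API that goes with AWS SDK v1.

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RestController;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

@RestController
public class RawVideoUploadController {

    private final AmazonS3 amazonS3;   // assumed to be configured elsewhere, as in the question
    private final String awsS3Bucket;  // assumed bucket-name property

    public RawVideoUploadController(AmazonS3 amazonS3, String awsS3Bucket) {
        this.amazonS3 = amazonS3;
        this.awsS3Bucket = awsS3Bucket;
    }

    // Client sends: PUT /videos/{name} with Content-Type: application/octet-stream and the raw bytes as the body.
    // No multipart parsing and no temp file: the servlet input stream is streamed directly to S3.
    @PutMapping("/videos/{name}")
    public void upload(@PathVariable("name") String name, HttpServletRequest request) throws IOException {
        ObjectMetadata metadata = new ObjectMetadata();
        // Passing the Content-Length lets the SDK stream without buffering the whole body in memory.
        metadata.setContentLength(request.getContentLengthLong());
        amazonS3.putObject(new PutObjectRequest(awsS3Bucket, name, request.getInputStream(), metadata));
    }
}

The trade-off is that the client must send the bare bytes (with the file name in the URL or a header) instead of a multipart form.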
Perhaps you can try to stream the input stream from your MultipartFile directly to S3.
Consider the following uploadFileToS3Bucket method:
public PutObjectResult uploadFileToS3Bucket(InputStream input, long size, String title, String description) {
    // Indicate the length of the content to avoid having the AWS SDK compute (and buffer) it
    // See: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/PutObjectRequest.html#PutObjectRequest-java.lang.String-java.lang.String-java.io.InputStream-com.amazonaws.services.s3.model.ObjectMetadata-
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.setContentLength(size); // rely on the size reported by Spring; input.available() is not a reliable substitute

    // compute the object name as appropriate
    String key = "...";
    PutObjectRequest putObjectRequest = new PutObjectRequest(
            this.awsS3Bucket, key, input, objectMetadata
    );

    // The rest of your code
    if (enablePublicReadAccess) {
        putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
    }

    // Upload the file as a new object with ContentType and title specified.
    return amazonS3.putObject(putObjectRequest);
}
Of course, you need to provide the service with the input stream obtained from the client request via the MultipartFile object:
public Video addVideo(
        @RequestParam("title") String title,
        @RequestParam("Description") String description,
        @RequestParam(value = "file", required = true) MultipartFile file) throws IOException {
    try (InputStream input = file.getInputStream()) {
        this.amazonS3ClientService.uploadFileToS3Bucket(input, file.getSize(), title, description);
    }
}
Probably you can also play with the getBytes method of MultipartFile and create a ByteArrayInputStream to perform the operation.
In addVideo:
byte[] bytes = file.getBytes();
In uploadFileToS3Bucket:
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(bytes.length);
PutObjectRequest putObjectRequest = new PutObjectRequest(
this.awsS3Bucket, key, new ByteArrayInputStream(bytes), objectMetadata
);
I would prefer the first solution, but try to determine which option offers you the best performance.
Hello guys, I am trying to write a Java utility to download documents from the Content Engine in FileNet to a local PC. Can anyone help me out?
You should read about the FileNet P8 CE API; you can start here:
You have to know that the FileNet Content Engine has two types of interface that can be used to connect to it: RMI and SOAP. A command-line app like the one you are planning to write can connect only via SOAP (I am not sure this is still true for the newest versions, but it is definitely much easier to set up a SOAP connection than an EJB one), so you have to read the part of the documentation that explains how to establish a connection to your Content Engine this way.
On the page linked above, you can see that first of all you have to collect the required JARs for the SOAP connection: please check the "Required for a Content Engine Java API CEWS transport client" section for the file names.
After you collect them, you will need the SOAP WSDL URL and a proper user and password; the user has to have the read-properties and read-content rights for the documents you would like to download. You also need to know the object store name and the identifier or location of your documents.
Now we have to continue with the "Setting Up a Thick Client Development Environment" link (I opened it from the page above).
Here you have to scroll down to the "CEWS transport protocol (non-application-server dependent)" section.
Here you can see that you have to create a jaas.conf file with the following content:
FileNetP8WSI {
com.filenet.api.util.WSILoginModule required;
};
This file must be referenced in the following JVM argument when you run the class we will create:
java -cp %CREATE_PROPER_CLASSPATH% -Djava.security.auth.login.config=jaas.conf DownloadClient
Now, in the top-right corner of the page, you can see links that describe what to do in order to get a connection, like "Getting Connection", "Retrieving an EntireNetwork Object", etc. I used those snippets to create the class below for you.
public class DownloadClient {
public static void main(String[] args) throws Exception{
String uri = "http://filenetcehost:9080/wsi/FNCEWS40MTOM";
String userId = "ceadmin";
String password = "password";
String osName = "Test";
UserContext uc = UserContext.get();
try {
//Get the connection and default domain
Connection conn = Factory.Connection.getConnection(uri);
Domain domain = Factory.Domain.getInstance(conn, null);
ObjectStore os = Factory.ObjectStore.fetchInstance(domain, osName, null);
// the last value (the JAAS stanza name) must match the name of the login module in jaas.conf
Subject subject = UserContext.createSubject(conn, userId, password, "FileNetP8WSI");
// set the subject to the local thread via threadlocal
uc.pushSubject(subject);
// from now, we are connected to FileNet CE, and objectStore "Test"
//https://www.ibm.com/support/knowledgecenter/en/SSNW2F_5.2.0/com.ibm.p8.ce.dev.ce.doc/document_procedures.htm
Document doc = Factory.Document.getInstance(os, ClassNames.DOCUMENT, new Id("{F4DD983C-B845-4255-AC7A-257202B557EC}") );
// because in FileNet a document can have more than one associated content element
// (e.g. stores single page tifs and handle it as a multipaged document), we have to
// get the content elements and iterate list.
ContentElementList docContentList = doc.get_ContentElements();
Iterator iter = docContentList.iterator();
while (iter.hasNext() )
{
ContentTransfer ct = (ContentTransfer) iter.next();
// Print element sequence number and content type of the element.
// Get and print the content of the element.
InputStream stream = ct.accessContentStream();
// now you have an InputStream to the document content; you can save it to a local file
// or do whatever you want with it, just do not forget to close the stream at the end.
stream.close();
}
} finally {
uc.popSubject();
}
}
}
This code just shows how you can implement such a thick client; I created it now from the documentation, and it is not production code. But after adding the required imports, and perhaps handling the exceptions, it will probably work.
You have to specify the right URL, user, password and document ID of course, and you have to implement the copy from the content InputStream to a FileOutputStream, e.g. by using Commons IO or Java NIO, etc.
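For that copy step, here is a minimal Java NIO sketch; the class name, method name, and target directory are mine, not part of the original answer, and it assumes ContentTransfer.get_RetrievalName() holds a usable file name. You would call it from inside the while loop instead of closing the stream immediately.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import com.filenet.api.core.ContentTransfer;

final class ContentElementSaver {

    // Saves one content element to the given directory; targetDir is a placeholder you must supply.
    static void saveToDisk(ContentTransfer ct, String targetDir) throws IOException {
        // get_RetrievalName() normally carries the original file name of the content element
        Path target = Paths.get(targetDir, ct.get_RetrievalName());
        try (InputStream stream = ct.accessContentStream()) {
            Files.copy(stream, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}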
I'm trying to upload/delete an image to/from an AWS S3 bucket using Spring Boot.
public class AmazonClient {
private AmazonS3 s3client;
private void initializeAmazon() {
AWSCredentials credentials = new BasicAWSCredentials(this.accessKey, this.secretKey);
this.s3client = AmazonS3ClientBuilder.standard().withRegion(region).withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
}
private void uploadFileTos3bucket(String fileName, File file) {
s3client.putObject(new PutObjectRequest(bucketName, fileName, file)
.withCannedAcl(CannedAccessControlList.PublicRead));
}
public void deleteFileFromS3Bucket(String fileUrl) {
String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
s3client.deleteObject(new DeleteObjectRequest(bucketName + "/", fileName));
}
}
The upload function works well; I can see the file has been uploaded to the S3 bucket. But the delete function seems to be malfunctioning: I get a success message but the file is still in the bucket.
Thanks in advance if anyone can help me figure out the problem.
From the javadoc of deleteObject (emphasis mine)
Deletes the specified object in the specified bucket. Once deleted, the object can only be restored if versioning was enabled when the object was deleted.
If attempting to delete an object that does not exist, Amazon S3 will return a success message instead of an error message.
So, most probably the path (fileName) you construct in deleteFileFromS3Bucket does not point to an S3 object.
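A quick way to confirm this diagnosis is to check whether the key you built actually exists before deleting. A small sketch, reusing the s3client, bucketName, and fileName names from the question:

// If this prints false, deleteObject will still "succeed" silently, which matches the observed behaviour.
boolean exists = s3client.doesObjectExist(bucketName, fileName);
System.out.println("Key '" + fileName + "' exists in bucket: " + exists);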
EDIT
I'm updating my answer based on the comments:
The file name used has special characters (: in the provided example), which get URL encoded (percent-encoded). This encoded URL cannot be used to retrieve or delete the S3 object, as the percent sign in the URL would get encoded again (% gets encoded to %25).
The encoded URL has to be decoded. One way is to use java.net.URLDecoder:
URLDecoder.decode(encodedPath, "UTF-8")
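Putting the decoding together with the delete, here is a minimal sketch; it assumes the usual virtual-hosted-style URL where everything after the last slash is the encoded key, and it reuses the field names from the question:

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;

public void deleteFileFromS3Bucket(String fileUrl) throws UnsupportedEncodingException {
    // take the encoded key after the last slash and decode it back to the original object key
    String encodedKey = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
    String key = URLDecoder.decode(encodedKey, "UTF-8");
    // note: bucket name only, with no trailing slash
    s3client.deleteObject(new DeleteObjectRequest(bucketName, key));
}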
public boolean deleteFileFromS3Bucket(String fileUrl) {
String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
try {
DeleteObjectsRequest delObjReq = new DeleteObjectsRequest(bucketName).withKeys(fileName);
s3client.deleteObjects(delObjReq);
return true;
} catch (SdkClientException s) {
return false;
}
}
This works for me; posting it here as an option.
Just found out that I had added an additional slash to the bucket name in new DeleteObjectRequest.
The only thing that worked for me is deleting it through Cyberduck (I neither work for nor am promoting Cyberduck, I genuinely used it and it worked). Here are the steps of what I did:
Download and install Cyberduck.
Click on Open Connection
Select Amazon S3 from the dropdown (default would be FTP)
Enter your access key ID and secret access key (if you don't have them, you need to create them through IAM on AWS).
You will see a list of your S3 buckets. Select the file, folder, or bucket you want to delete, right-click, and delete. Even files of 0 KB show up here and can be deleted.
Note: I'm using an experimental pre-release of Microsoft's latest ADAL.
I'm trying to get my identity providers to work in the mobile applications. So far I've been able to load my identity providers and managed to get the login page to show (except for Facebook).
The problem is that whenever I actually try to log in, I get some error in the form of "invalid redirect URI".
Google, for instance, will say: "The redirect URI in the request: https://login.microsoftonline.com/... did not match a registered redirect URI."
Facebook will show: "Given URL is not allowed by the application configuration: One or more of the given URLs is not allowed by the App's settings. It must match the website URL or Canvas URL, or the domain must be a subdomain of one of the App's domains."
As far as I understand you don't actually need to register the mobile application anymore with the different identity providers because Azure sits in between you and them. Azure handles the connection, gets your token and uses it to identify you. It should then return a set of "azure tokens" to you.
To my knowledge the redirect URI being used is registered on the portal, since I'm able to load the identity providers in the first place?
Not to mention it seems to be a default URL that's used by many applications: urn:ietf:wg:oauth:2.0:oob, which simply tells it to return the response to some non-browser-based application?
This is the code I'm using to actually do the login/signup:
private static String AUTHORITY_URL = "https://login.microsoftonline.com/<directory>/oauth2/authorize/";
private static String CLIENT_ID = "my_client_id";
private static String[] SCOPES = { "my_client_id" };
private static String[] ADDITIONAL_SCOPES = { "" };
private static String REDIRECT_URL = "urn:ietf:wg:oauth:2.0:oob";
private static String CORRELATION_ID = "";
private static String USER_HINT = "";
private static String EXTRA_QP = "nux=1";
private static String FB_POLICY = "B2C_1_<your policy>";
private static String EMAIL_SIGNIN_POLICY = "B2C_1_SignIn";
private static String EMAIL_SIGNUP_POLICY = "B2C_1_SignUp";
public async Task<AuthenticationResult> Login(IPlatformParameters parameters, bool isSignIn)
{
var authContext = new AuthenticationContext(AUTHORITY_URL, new TokenCache());
if (CORRELATION_ID != null &&
CORRELATION_ID.Trim().Length != 0)
{
authContext.CorrelationId = Guid.Parse(CORRELATION_ID);
}
String policy = "";
if (isSignIn)
policy = EMAIL_SIGNIN_POLICY;
else
policy = EMAIL_SIGNUP_POLICY;
return await authContext.AcquireTokenAsync(SCOPES, ADDITIONAL_SCOPES, CLIENT_ID, new Uri(REDIRECT_URL), parameters, UserIdentifier.AnyUser, EXTRA_QP, policy);
}
Microsoft's documentation isn't really helping, because most pages are either empty (they're literally not yet written) or they're help topics from over a year ago. This stuff is pretty new, so documentation seems to be hard to come by.
So, dear people of Stack Overflow, what am I missing? Why is it saying that the redirect URI is invalid when it's been registered on the Azure web portal? And if the redirect URI is invalid, why can I retrieve the identity providers in the first place?
Why is it that I can't seem to find solutions after hours of searching, yet when I post a question here I somehow find the answer within minutes...
It was quite a stupid mistake at that: one of my colleagues had sent me the wrong authority URL.
The funny thing is that it was correct "enough" to load the identity providers we had installed on the portal but not correct enough to handle actually signing in or up.
I initially used:
https://login.microsoftonline.com/<tenant_id>/oauth2/authorize/
where it should have been:
https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/authorize
You see that little "v2.0"? Yeah, that little bastard is what caused all the pain...
I'm developing a WebAPI service in which you can upload a file. The Action looks something like this:
[HttpPost]
public async Task<IHttpActionResult> PostAsync(byte[] content)
{
var now = DateTime.UtcNow;
}
The clients using the WebAPI also provide a timestamp as a header, which is used together with some HMAC stuff to authenticate. One part of the auth check is to validate the timestamp: we parse the timestamp and check whether it is within +/- 5 minutes of now. If not, the auth fails.
It works great for all our API calls except this upload API (in some cases). The problem is that sometimes a user uploads a large file over a slow connection, so it takes more than 5 minutes to upload the file, and the point in time where we check is AFTER the whole file has been uploaded.
Therefore:
Can we somehow do the HMAC check BEFORE the whole file is uploaded? (the file itself (HTTP Content) is not used in the HMAC check). Today we are using an ActionFilter.
Can I get the "time of request" (first byte arrived or whatever) in my Action code?
Thanks!
So, after some investigation I came up with a much better solution:
Use an HTTP module to do the actual HMAC authentication.
After reading this blog post (http://blogs.msdn.com/b/tmarq/archive/2007/08/30/iis-7-0-asp-net-pipelines-modules-handlers-and-preconditions.aspx) I got a much better understanding of how IIS works.
I decided to use an HTTP module, which is invoked before the MVC action.
The code ended up like this:
public class HmacModule : IHttpModule
{
public void Init(HttpApplication context)
{
EventHandlerTaskAsyncHelper taskAsyncHelper = new EventHandlerTaskAsyncHelper(Authenticate);
context.AddOnBeginRequestAsync(taskAsyncHelper.BeginEventHandler, taskAsyncHelper.EndEventHandler);
}
private async Task Authenticate(object sender, EventArgs e)
{
var context = ((HttpApplication)sender).Context;
var request = context.Request;
var authResponse = await CheckAuthentication(request);
if (!authResponse.HasAccess)
{
context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;
context.Response.StatusDescription = authResponse.ErrorMessage;
if (authResponse.Details != null)
context.Response.Write(authResponse.Details);
context.Response.End();
}
}
}
I hope this helps others in the same situation...