FineUploader.js - How to access S3 policy document signature?

Purpose: To confirm with the server that the image was uploaded to the S3 bucket successfully, I would like to send the policy document signature back to the server. My policy document includes a key randomly generated server-side (this way a malicious user can't overwrite another user's uploads by deliberately reusing a key), and a corresponding row is temporarily created in the database marked as "pending". If the correct policy document signature does not come back to the server before the expiry, the upload may have been abandoned, so the server can try to delete any file with that key from the bucket and then remove the temporary database row.
Question: I intend to include the policy document signature in uploadSuccess.params. To accomplish this, how can the policy document signature be accessed?
Possibly relevant snippet of source code, but I don't know how to "reach" responseJson and use it (assuming this is where the signature is contained):
function handleSignatureReceived(id, xhrOrXdr, isError) {
    var responseJson = xhrOrXdr.responseText,
        pendingSignatureData = pendingSignatures[id],
        promise = pendingSignatureData.promise,
        signatureConstructor = pendingSignatureData.signatureConstructor,
        errorMessage, response;

The response passed to your onComplete callback handler is the response from S3 when the upload completes, as expected. There is no good reason to track the exact signature. If you want to know whether the upload failed or succeeded, check the success property in that response.
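For illustration, a minimal sketch of that approach, assuming Fine Uploader's S3 mode with an uploadSuccess endpoint (the endpoints, bucket and access key below are placeholders). The uploadSuccess POST typically carries identifying parameters such as the object key and uuid, so the server can mark the matching "pending" row as confirmed without needing the signature itself:

var uploader = new qq.s3.FineUploader({
    element: document.getElementById('uploader'),
    request: {
        endpoint: 'https://my-bucket.s3.amazonaws.com',   // placeholder bucket endpoint
        accessKey: 'AKIA...'                              // placeholder public access key
    },
    signature: { endpoint: '/s3/signature' },             // placeholder signing endpoint
    uploadSuccess: { endpoint: '/s3/success' },           // placeholder "upload finished" endpoint
    callbacks: {
        onComplete: function (id, name, responseJSON, xhr) {
            // As the answer suggests, the success flag is what tells you whether
            // the upload (including the uploadSuccess round-trip) worked.
            if (responseJSON.success) {
                console.log('Upload of ' + name + ' confirmed');
            } else {
                console.log('Upload of ' + name + ' failed');
            }
        }
    }
});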

Related

Which HTTP status codes to use when processing an HTTP POST?

I have an HTML form, which I submit via HTTP POST.
There are two cases:
Case 1: The data is valid and data on the server will be updated accordingly.
Case 2: The data is invalid and the HTTP response contains an error message for the user.
Which HTTP status codes should be used for each case?
I use htmx to submit the form. This means I don't need to use the POST/Redirect/GET pattern.
This question is not about JSON APIs.
The complete list of HTTP response codes published by Mozilla (MDN) is pretty comprehensive and easy to read, so I'd recommend consulting it as a guide. For the generic use cases you mention, there are a couple of different codes you can return, depending on what happens with the data on the server and what you want to happen in the user's browser.
CASE 1: data is valid, data on server is updated
Based on your short description, the different status codes that might be applicable are:
200 (OK): you are updating an existing record on your own server - e.g., the user is submitting a form which updates their existing contact information on your website - the information was received and the record was updated successfully. The response would usually contain a copy of the updated record.
201 (Created): you are not updating an existing record, but rather creating a new record on your server - e.g., your user is adding a new phone number to their contact details, which is saved in your database as a separate 'phone' record. The response should contain a copy of the newly created record.
205 (Reset Content): the same as 200, but implies that the browser view needs to be refreshed. This is useful when the record being updated has values that are dynamically calculated by the server and might change automatically depending on the values you're submitting. For example, a user adding extra information to their online profile might be granted special status, badges and privileges on the website. This means that if the user is viewing their profile information, that view will need to be updated with the new 'status' automatically granted by the server. The 205 response body will normally be empty, which means that to update the browser view your response-handling code will need to do one of the following:
- do further AJAX requests and update the relevant part(s) of your interface with new information from the server, or
- redirect the user to a new URL, or
- reload the entire page.
If working with htmx, a 200 or 201 response would include the actual HTML snippet of the record that you want updated on the page - and htmx will swap it in automatically for you when it receives the response. With a 205 response, you could send an HX-Trigger response header that fires a custom event on the interface elements that need to update themselves - see the examples in the docs.
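As a sketch of that 205 + HX-Trigger approach on the client (the event name, URL and selector are placeholders; the server would set the response header HX-Trigger: profileUpdated):

// Listen for the custom event carried by the HX-Trigger response header.
// htmx dispatches it on the element that made the request and it bubbles up to the body.
document.body.addEventListener('profileUpdated', function () {
    // Re-fetch just the part of the page the server may have changed,
    // e.g. the user's status/badges panel.
    htmx.ajax('GET', '/profile/status-panel', '#status-panel');
});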
CASE 2: data is invalid, data on server is not updated
The status code that needs to be returned in case of an error varies depending on what caused the error. Errors that the server believes are the responsibility of the client - such as 'sending invalid data' - have a status code in the 4XX range. Some of the common errors in that range are 404 ('Not Found'), 403 ('Forbidden'), and 401 ('Unauthorized').
In the case of a client sending data that the server cannot process because it is 'not valid' - either because the request itself is malformed, or because the data doesn't pass some business validation logic - the current advice is to return status 400 (Bad Request).
Many years ago, some people believed that status code 400 should only be used to indicate a malformed request (a syntactic error), not a failure of business validation logic (a semantic error). There was a lot of debate, and a separate status code (422) was introduced that was supposed to cover semantic errors exclusively. In 2014, however, the official definition of status 400 was changed to allow both syntactic and semantic errors - which rendered status 422 essentially unnecessary.
You can find lots of discussions and explanations online about the differences between 400 and 422, and some people still argue passionately about this to this day. In practice, however, the 400 code is all you'll need - and you can include a response body that explains the cause of the error in detail, if needed.
Note that when working with HTMX, a response with a 400 code should trigger an htmx:responseError event automatically. You can trap that event, for example, to update your form interface elements, in case of data validation errors caught by the server.
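For example, a minimal sketch of trapping that event (this assumes the 400 response body is an HTML fragment containing the validation messages; the selector is a placeholder):

document.body.addEventListener('htmx:responseError', function (evt) {
    if (evt.detail.xhr.status === 400) {
        // Show the server-side validation messages returned in the response body
        document.querySelector('#form-errors').innerHTML = evt.detail.xhr.responseText;
    }
});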
Well, 200 OK and 201 Created are the best choices for a successful result.
For invalid data I would return 422 Unprocessable Entity, because the headers are correct but the body is not (even though the server can parse it). The caveat is that some HTTP clients won't handle 422 properly, in which case you have to use 400 Bad Request; however, most modern clients will be fine.
You have said this is not about JSON APIs, but how will you meet the following type of requirement? It is not clear whether this is relevant for your scenario.
SERVER DRIVEN BEHAVIOUR
I cannot see how a client could ever decide an HTTP status code based on input data. How would the client deal with these examples?
The call is not authenticated (cookie or token) - an API would return 401 - this tells the UI to perform a retry action.
The call is not authorized - an API would return 403 or 404 and the UI would present an error display.
The data is malformed or invalid according to domain specific checks - an API would return 400 and tell the UI what is wrong so that it can perform actions.
Something went wrong in server processing, e.g. data cannot be saved because the database is down - an API would return 500 and the UI would present an error display.
MY THOUGHTS
htmx looks interesting, but a key requirement before using it would be ensuring that htmx can read server-side error responses and use the values returned. Maybe there is an elegant way to do this ...
Maybe I am just paranoid :). But it is worth checking, when choosing technologies, that there are no blocking issues. Lack of control over error handling would be a blocking issue in most systems.
I'm using htmx 1.8 with asp.net core 6.0.
This works for me.
controller:
//server side validation failed
Response.StatusCode = 422;
return PartialView("Core", product);
client side javascript:
document.body.addEventListener('htmx:beforeOnLoad', function (evt) {
    if (evt.detail.xhr.status === 422) {
        // allow 422 responses to swap, as we are using this as a signal that
        // a form was submitted with bad data and we want to re-render it with
        // the error messages
        evt.detail.shouldSwap = true;
        evt.detail.isError = false;
    }
});
200 OK or 201 Created are the best choices for a successful POST request.
However, for invalid data, you can return 415 Unsupported Media Type.

How to get list of failed parts on s3 multipart upload

Sometimes, after completing the multipart upload request with the S3 client, I get this error:
InvalidPart: One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.\n\tstatus code: 400, request id: 15BACD159D8498B7
There is no error on any of the s3client.UploadPart(partInput) calls, so the parts seem to upload fine, and my completedParts slice is complete, with all of the s3.CompletedPart entries in it.
How can I identify those "One or more" parts that S3 can't find and re-upload them?
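One way to narrow this down (a sketch, not a verified fix) is to ask S3 which parts it actually has, via the ListParts API, and compare that against the completedParts you are about to send to CompleteMultipartUpload. The question's code appears to be Go; purely for illustration, the same idea with the AWS SDK for JavaScript (v2), where bucket, key and uploadId are whatever you used for the upload and completedParts is an array of { PartNumber, ETag } objects:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Returns the entries from `completedParts` that S3 does not report
// (missing part number, or a different ETag), i.e. candidates for re-upload.
async function findMissingParts(bucket, key, uploadId, completedParts) {
    const listed = await s3.listParts({
        Bucket: bucket,
        Key: key,
        UploadId: uploadId
        // NOTE: ListParts is paginated (up to 1000 parts per page); follow
        // NextPartNumberMarker if your upload has more parts than that.
    }).promise();
    const onS3 = new Map(listed.Parts.map(p => [p.PartNumber, p.ETag]));
    return completedParts.filter(p => onS3.get(p.PartNumber) !== p.ETag);
}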

Redux Local Storage Workflow

I am using Redux and would like to store some state in local storage.
I only want to store the token which I receive from the server. There are other things in the store that I don't want to persist.
The workflow that I found on Google is to read from local storage when building the initial store state.
The author then uses store.subscribe to update local storage on a regular interval.
That is valid if we are storing the entire store. But in my case, the token is only updated when a user logs out or a new user logs in.
I think store.subscribe is overkill.
I also read that updating local storage in reducers is not the Redux way.
Currently, I am updating local storage in the action creator before the reducer runs.
Is this the correct flow, or is there a better way?
The example you found was likely about serializing the entire state tree into localStorage with every state change, allowing users to close the tab without worrying about constantly saving, since it will always be up to date in localStorage.
However, it's clear that this isn't what you are looking for, as you want to cache one specific piece of data in localStorage, not the entire state tree.
You are also correct that updating localStorage as part of a reducer is an anti-pattern, as all side effects are supposed to be localized to action creators.
Thus you should be reading from and writing to localStorage in your action creators.
For instance, your action creator for retrieving a token could look something like:
const TOKEN_STORAGE_KEY = 'TOKEN';

export function fetchToken() {
    // Assuming you are using redux-thunk for async actions
    return dispatch => {
        const token = localStorage.getItem(TOKEN_STORAGE_KEY);
        if (token && isValidToken(token)) {
            return dispatch(tokenRetrieved(token));
        }
        return doSignIn().then(token => {
            localStorage.setItem(TOKEN_STORAGE_KEY, token);
            dispatch(tokenRetrieved(token));
        });
    };
}

export function tokenRetrieved(token) {
    return {
        type: 'token.retrieved',
        payload: token
    };
}
And then somewhere early in your application boot, such as in one of your root component's componentWillMount lifecycle methods, you dispatch the fetchToken action.
fetchToken takes care of both checking localStorage for a cached token and storing a new token there when one is retrieved.
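A minimal sketch of that boot-time dispatch, assuming react-redux's connect() injects dispatch and that fetchToken is exported from a local actions module (the import path is a placeholder):

import React from 'react';
import { connect } from 'react-redux';
import { fetchToken } from './actions';   // placeholder path

class Root extends React.Component {
    componentWillMount() {
        // Kick off the localStorage check / sign-in flow as early as possible
        this.props.dispatch(fetchToken());
    }
    render() {
        return this.props.children || null;
    }
}

// connect() with no arguments still injects `dispatch` as a prop
export default connect()(Root);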

Link variable to data set in JMeter

I'm using JMeter 3.2 to create some performance tests.
I have a setup where a thread group has multiple threads (users) which perform multiple loops, requesting a resource from a server each time.
Each of the threads goes through a Once Only Controller which retrieves a token from a server; the token identifies the user and is required on all subsequent requests. The token is different every time it is generated, so I cannot store it in a data set (CSV) as it would later be invalid.
I have a data set (.csv file) containing the username and password of my test users.
So far so good. Now the threads need to request a resource on the server that requires the token to be sent. It goes well the first time, but the second time it starts messing up. It seems like each iteration uses data from the next row in the data set, but the token retrieved (from the Once Only Controller) is not linked to the row of data (username and password) used, so something like this happens:
thread1: data1/token1 - good
thread2: data2/token2 - good
Perhaps thread2 finishes first and starts a new iteration:
thread2: data1/token2 - error
thread1: data2/token1 - error
So my question is: how can I link the token retrieved to a row in data set (as a variable), so that the correct token will be sent every time that piece of data is used for a request?
Edit
I have an idea: create a Hashtable with some data from the data set as the key and the token as the value, but I have an issue. I've created the following code:
import java.util.Hashtable;
map = new Hashtable();
vars.putObject("map", map);
but it throws the following error:
java.util.Hashtable cannot be cast to java.lang.String
I finally figured it out, though it might not be the optimal solution. What I did was create a property (variables won't work), which is a JSONObject. In it I can store an id (for my data) and the token linked to it. I transform it to a string and store it in the property.
In a preprocessor to the HTTP request requiring the token, I retrieve the property, parse it back to a JSONObject, and look up the token using the id.
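A sketch of that idea using JSR223 elements with the language set to javascript (Nashorn); the property name "tokenMap" and the variable names "username" and "token" are placeholders, with "username" assumed to come from the CSV Data Set and "token" from an extractor on the login response. Because JMeter properties are shared across threads, each thread can look its own token up by the username of its current CSV row:

// JSR223 PostProcessor on the token request (inside the Once Only Controller):
var map = JSON.parse(String(props.get("tokenMap") || "{}"));     // global map of username -> token
map[String(vars.get("username"))] = String(vars.get("token"));   // key the token by the CSV username
props.put("tokenMap", JSON.stringify(map));                      // note: concurrent logins can race on this property

// JSR223 PreProcessor on each request that needs the token:
var lookup = JSON.parse(String(props.get("tokenMap") || "{}"));
vars.put("token", lookup[String(vars.get("username"))] || "");   // ${token} now matches the current CSV row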

Identify http request / response by id

I am building an extension with Firefox's Add-on SDK (v1.9) that will be able to read all HTTP requests / responses and calculate the time they took to load. This includes not only the main frame but any other loaded file (sub-frame, script, CSS, image, etc.).
So far, I am able to use the "observer-service" module to listen for:
"http-on-modify-request" when a HTTP request is created.
"http-on-examine-response" when a HTTP response is received
"http-on-examine-cached-response" when a HTTP response is received entirely from cache
"http-on-examine-merged-response" when a HTTP response is received partially from cache
My application follows this sequence:
A request is created and registered through the observer.
I save the current time and mark it as the start_time of the request load.
A response for a request is received and registered through one of the observers.
I save the current time and use the previously saved time to calculate the load time of the request.
Problem:
I am not able to link the start and end times of the load, since I cannot find a request ID (or other unique value) that ties the request to the response.
I am currently using the URL of the request / response to tie them together, but this is not correct since it creates a race condition if two or more identical URLs are loading at the same time. Google Chrome solves this issue by providing unique requestIds, but I have not been able to find similar functionality in Firefox.
I am aware of two ways to recognize a channel that you receive in this observer. The "old" solution is to use nsIWritablePropertyBag interface to attach data to the channel:
var {Ci} = require("chrome");
var channelId = 0;
...
// Attach channel ID to a channel
if (channel instanceof Ci.nsIWritablePropertyBag)
    channel.setProperty("myExtension-channelId", ++channelId);
...
// Read out channel ID for a channel
if (channel instanceof Ci.nsIPropertyBag)
    console.log(channel.getProperty("myExtension-channelId"));
The other solution would be using WeakMap API (only works properly starting with Firefox 13):
var channelMap = new WeakMap();
var channelId = 0;
...
// Attach channel ID to a channel
channelMap.set(channel, ++channelId);
...
// Read out channel ID for a channel
console.log(channelMap.get(channel));
I'm not sure whether WeakMap is available in the context of Add-on SDK modules; you might have to "steal" it from a regular JavaScript module:
var {Cu} = require("chrome");
var {WeakMap} = Cu.import("resource://gre/modules/FileUtils.jsm", null);
Obviously, in both cases you can attach more data to the channel than a simple number.
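Building on the property-bag variant, a sketch of the asker's actual timing use case (the property name is arbitrary; getProperty throws if a channel was never stamped, e.g. for requests that started before the observer was registered, so real code would guard that):

var {Ci} = require("chrome");

// "http-on-modify-request" observer: stamp the channel with a start time
function onRequest(subject) {
    var channel = subject.QueryInterface(Ci.nsIHttpChannel);
    if (channel instanceof Ci.nsIWritablePropertyBag)
        channel.setProperty("myExtension-startTime", Date.now());
}

// "http-on-examine-response" (and the cached/merged variants): compute the load time
function onResponse(subject) {
    var channel = subject.QueryInterface(Ci.nsIHttpChannel);
    if (channel instanceof Ci.nsIPropertyBag) {
        var started = channel.getProperty("myExtension-startTime");
        console.log(channel.URI.spec + " loaded in " + (Date.now() - started) + " ms");
    }
}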
Firebug does what you're thinking of by implementing a central observer for these events:
https://github.com/firebug/firebug/blob/master/extension/modules/firebug-http-observer.js
This might be a good place to start, although eventually Firefox will ship a more complete network monitor / debugger by default. I think I read somewhere that it will be based on Firebug's.
