Web API Request Content-Length Double the File Size

I upload a 3.5 MB file from my MVC app. When I go to send that file to my Web API endpoint, I notice the request's content-length is double the size of the file (7 MB).
I tested this with a 5 MB file, and sure enough the content-length when sending to the Web API was 10 MB.
Below is how I am sending the file to my Web API endpoint:
using (HttpClient client = new HttpClient())
{
    client.BaseAddress = new Uri(url);
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    return await client.PostAsync(requestUri, new StringContent(serializedContent, Encoding.Unicode, "application/json"));
}
I am calling this method from my MVC controller in the POST method. Why does my content-length get doubled?
UPDATE:
I should note that I am using JSON.NET's JsonConvert.SerializeObject method to convert the object containing the byte array to a string.

You're using Encoding.Unicode, which is UTF-16 and encodes every character as at least two bytes. Your serialized JSON is almost entirely ASCII (JSON.NET writes the byte array as Base64 text), so each character fits in a single byte. If you want to save roughly half the space, use Encoding.UTF8, which encodes ASCII characters as one byte each; characters that can't be expressed in a single byte will still use multiple bytes.
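Counting bytes makes the difference obvious. A minimal sketch (in Java, since .NET's Encoding.Unicode corresponds to UTF-16LE; the payload string is a stand-in for your serialized JSON):
import java.nio.charset.StandardCharsets;

public class EncodingSize {
    public static void main(String[] args) {
        // Base64 text (what JSON.NET emits for a byte[]) is pure ASCII.
        String payload = "SGVsbG8sIFdvcmxkIQ=="; // 20 characters
        // UTF-16LE (Encoding.Unicode): two bytes per character.
        System.out.println(payload.getBytes(StandardCharsets.UTF_16LE).length); // 40
        // UTF-8 (Encoding.UTF8): one byte per ASCII character.
        System.out.println(payload.getBytes(StandardCharsets.UTF_8).length); // 20
    }
}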

Related

Why is the signature verification not working when the signature is constructed by node-forge?

I have a Nuxt application that needs to retrieve some information from a Spring Boot-based auth service.
Right now I sign a text message on the Nuxt app using node-forge (the auth server knows that text message), and then I send it, encrypted and with the signature, to the auth service for verification.
The problem is that the auth service keeps telling me the size of the signature is wrong, with a java.security.SignatureException: Signature length not correct: got 3XX but was expecting 256.
Here is the code generating the encrypted message and signature on the Nuxt side:
var md = forge.md.sha256.create();
md.update("123"); // for example purposes
var sign = pPrivateKey.sign(md);
var digestBytes = md.digest().bytes();
console.log("Signature:", sign );
console.log("Encrypted:", digestBytes);
console.log("Encrypted B64:", Buffer.from(digestBytes).toString("base64"));
var keyAuthB64Url = Buffer.from(digestBytes).toString("base64url");
var signB64Url = Buffer.from(sign).toString("base64url");
var jwt = await axios.get(process.env.URL + "/auth", { params: { encrypted: keyAuthB64Url, signature: signB64Url } });
On the auth service I have the following code:
byte[] messageBytes = Base64.getUrlDecoder().decode(encryptedMessage);
byte[] signatureBytes = Base64.getUrlDecoder().decode(signature);
Signature sign = Signature.getInstance("SHA256withRSA");
sign.initVerify(certPublicKey);
sign.update(messageBytes);
boolean verified = sign.verify(signatureBytes);
if (!verified) {
    throw new Exception("Not verified!");
}
From all the debugging I have done, it seems the Spring Boot app has a problem with the signature generated by node-forge on the Nuxt side; with a signature generated in the Spring Boot app itself, the verification works.
There are several issues:
First, the bug already mentioned in the comments: the NodeJS code hashes the message explicitly (via md), while SHA256withRSA on the Java side hashes implicitly. The Java side must therefore be given the original message, not its hash:
byte[] messageBytes = "123".getBytes("utf-8");
...
sign.update(messageBytes); // Fix 1: Don't hash
Also, in the NodeJS code, sign() returns the signature as a byte string, which must therefore be imported into a NodeJS Buffer as 'binary':
var keyAuthB64Url = Buffer.from(digestBytes, "binary").toString("base64url"); // Fix 2: Import via 'binary' encoding
var signB64Url = Buffer.from(sign, "binary").toString("base64url"); // Fix 2: likewise for the signature
Without explicit specification of the encoding, a UTF-8 encoding is performed by default, which irreversibly corrupts the data.
And third, latin1 is implicitly used as the encoding when generating the hash in the NodeJS code. Other encodings must be specified explicitly, e.g. utf8 for the common UTF-8:
md.update("123", "utf8"); // Fix 3: Specify the encoding
For the example data 123 used here, this fix has no effect, which changes as soon as characters with a Unicode value larger than 0x7f are included, e.g. 123§. Note that there is little margin for error in the specification of the encoding: e.g. utf-8 would be ignored (because of the hyphen) and latin1 would be used silently.
With these fixes, verification with the Java code works.
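For reference, the fixed Java side in one piece (a minimal sketch; certPublicKey and the base64url signature parameter are assumed to be available as in the question):
import java.nio.charset.StandardCharsets;
import java.security.Signature;
import java.util.Base64;

byte[] messageBytes = "123".getBytes(StandardCharsets.UTF_8);     // Fix 1: the raw message, not its hash
byte[] signatureBytes = Base64.getUrlDecoder().decode(signature); // base64url as sent by the client

Signature sign = Signature.getInstance("SHA256withRSA"); // hashes the input internally
sign.initVerify(certPublicKey);
sign.update(messageBytes);
if (!sign.verify(signatureBytes)) {
    throw new Exception("Not verified!");
}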

JMeter: Keycloak "User session not found" error with the /protocol/openid-connect/token endpoint

I was trying to simulate the /protocol/openid-connect/token Keycloak endpoint using JMeter. Even though I have correlated the code parameter and am passing it properly, there is a parameter called code_verifier that is not found in any of the previous requests. I am providing the sample request and response below for reference. Can someone help me with any additional steps I have to take to overcome the issue shown in the response?
Request:
POST https://{HOST}/auth/realms/{Appname}/protocol/openid-connect/token
POST data:
code=f99e9da5-cfcf-4069-aaec-b53mee00af54.e46a981h-5291-4862-b6fd-abc7f2d222f2.87488f77-3b05-47b0-afd7-8a8c80b384e7%0AContent-Length%3A+0%0ADate%3A+Wed%2C+29+Dec+2021+18%3A30%3A26+GMT%0A&grant_type=authorization_code&redirect_uri=https%3A%2F%2Fwebclient-performance.appname.ad%2F&client_id=premium-web-client&code_verifier=YTlYTmoxZ2tXbzM1M0xkVkRfZXg0M280TUhDZXVMYVdIY2hoVzRqTE5ESXkw
Cookie Data:
AUTH_SESSION_ID=e46a61f9-5291-4862-b6fd-eff7f2d222f2.d306f6737649; KEYCLOAK_LOCALE=en; KEYCLOAK_IDENTITY=eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiZDBmYTc0Ni02Y2NmLTRiMjktYTBmZC1kOWMxMWNmY2RlM2UifQ.eyJleHAiOjE2NDA4ODkwMjYsImlhdCI6MTY0MDgwMjYyNiwianRpIjoiZWFjZDczNDctNDYyNC00Mjk0LWE4NjYtYzRiYmM1MjNiMDlhIiwiaXNzIjoiaHR0cHM6Ly8xNzIuMjYuMjMzLjE0NDoyODA4MC9hdXRoL3JlYWxtcy9uZXh0Z2VuLXNvbmV0Iiwic3ViIjoiOTE4MDcyNDktZWZlYi00ZWZlLWEwY2EtMGRlMTYxZWIzNTU5IiwidHlwIjoiU2VyaWFsaXplZC1JRCIsInNlc3Npb25fc3RhdGUiOiJlNDZhNjFmOS01MjkxLTQ4NjItYjZmZC1lZmY3ZjJkMjIyZjIiLCJzdGF0ZV9jaGVja2VyIjoiU3VmS2tOLXE0UTNDVUhvM2xFblhHZ3NFSWdWSS0wektFR2JKRENzZHpiYyJ9.4XA6eGrUB8HhhLTfNlhY9twiX3oJLQhlFlYDY3zYa6Q; KEYCLOAK_IDENTITY_LEGACY=eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiABCmYTc0Ni02Y2NmLTRiMjktYTBmZC1kOWMxMWNmY2RlM2UifQ.eyJleHAiOjE2NDA4ODkwMjYsImlhdCI6MTY0MDgwMjYyNiwianRpIjoiZWFjZDczNDctNDYyNC00Mjk0LWE4NjYtYzRiYmM1MjNiMDlhIiwiaXNzIjoiaHR0cHM6Ly8xNzIuMjYuMjMzLjE0NDoyODA4MC9hdXRoL3JlYWxtcy9uZXh0Z2VuLXNvbmV0Iiwic3ViIjoiOTE4MDcyNDktZWZlYi00ZWZlLWEwY2EtMGRlMTYxZWIzNTU5IiwidHlwIjoiU2VyaWFsaXplZC1JRCIsInNlc3Npb25fc3phdGUiOiJlNDZhNjFmOS01MjkxAAA4NjItYjZmZC1lZmY3ZjJkMjIyZjIiLCJzdGF0ZV9jaGVja2VyIjoiU3VmS2tOLXE0UTNDVUhvM2xFblhHZ3NFSWdWSS0wektFR2JKRENzZHpiYyJ9.4XA8bGrUB8HhhLTfNlhY9twiX3oJLQhlFlYDY3zYa6Q; KEYCLOAK_SESSION=appname/91807249-efeb-4abc-a0ca-0de161eb8741/e46a61f9-2147-4862-b6fd-eff7f2d222f2; KEYCLOAK_SESSION_LEGACY=name/85211234-efeb-4efe-a0ca-0de161eb1877/e46a78f9-5291-4862-b6fd-eff7f2d899f2
Response:
{"error":"invalid_grant","error_description":"User session not found"}
This code_verifier parameter needs to be generated, not correlated.
See the algorithm description in RFC 7636:
4. Protocol
4.1. Client Creates a Code Verifier
The client first creates a code verifier, "code_verifier", for each
OAuth 2.0 [RFC6749] Authorization Request, in the following manner:
code_verifier = high-entropy cryptographic random STRING using the
unreserved characters [A-Z] / [a-z] / [0-9] / "-" / "." / "_" / "~"
from Section 2.3 of [RFC3986], with a minimum length of 43 characters
and a maximum length of 128 characters.
ABNF for "code_verifier" is as follows.
code-verifier = 43*128unreserved
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
ALPHA = %x41-5A / %x61-7A
DIGIT = %x30-39
NOTE: The code verifier SHOULD have enough entropy to make it
impractical to guess the value. It is RECOMMENDED that the output of
a suitable random number generator be used to create a 32-octet
sequence. The octet sequence is then base64url-encoded to produce a
43-octet URL safe string to use as the code verifier.
A sample implementation is available in the Create Code Verifier section of the Auth0 manual:
// Uses java.util.Base64 (Java 8+); the Auth0 manual's variant relies on
// Apache Commons Codec (https://commons.apache.org/proper/commons-codec/) instead.
// import java.security.SecureRandom;
// import java.util.Base64;
SecureRandom sr = new SecureRandom();
byte[] code = new byte[32];
sr.nextBytes(code);
String verifier = Base64.getUrlEncoder().withoutPadding().encodeToString(code);
You can add the following line to store the generated value into a JMeter variable:
vars.put("code_verifier", verifier);
and use ${code_verifier} in your HTTP Request sampler instead of the hard-coded value. In the above snippet, vars stands for the JMeterVariables class instance; see the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more details if needed.
The code can be called from, for example, a JSR223 PreProcessor.
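Keep in mind that Keycloak only accepts the verifier if the matching code_challenge was sent with the earlier authorization request. Under RFC 7636's S256 method, it is derived from the verifier like this (a sketch for the same JSR223 element):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// code_challenge = BASE64URL-ENCODE(SHA256(ASCII(code_verifier))), RFC 7636 section 4.2
MessageDigest md = MessageDigest.getInstance("SHA-256");
byte[] digest = md.digest(verifier.getBytes(StandardCharsets.US_ASCII));
String challenge = Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
vars.put("code_challenge", challenge);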

Video stream slow using S3 content store

Video streaming works; however, on larger files it is slow.
How can I improve the S3 content store to deliver the content faster?
I have tried returning a byte array and copying to a buffer; everything loads, just slowly. I am not sure where the bottleneck is coming from.
Optional<File> f = filesRepo.findById(id);
if (f.isPresent()) {
    InputStreamResource inputStreamResource = new InputStreamResource(contentStore.getContent(f.get()));
    HttpHeaders headers = new HttpHeaders();
    headers.setContentLength(f.get().getContentLength());
    headers.set("Content-Type", f.get().getMimeType());
    return new ResponseEntity<Object>(inputStreamResource, headers, HttpStatus.OK);
}
I also get this warning:
Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
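The warning comes from the AWS SDK and names the two remedies itself. Roughly, with the plain SDK v1 client (a sketch; the bucket, key, and s3Client are placeholder assumptions, and this bypasses the content-store abstraction):
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient(); // placeholder client

// Remedy 1: request only the bytes you need via a ranged GET.
GetObjectRequest ranged = new GetObjectRequest("my-bucket", "videos/" + id)
        .withRange(0, 1024 * 1024 - 1); // first 1 MB only

// Remedy 2: drain the stream fully and close it, so the underlying
// HTTP connection can be reused instead of being aborted.
try (S3Object object = s3Client.getObject(ranged)) {
    byte[] chunk = IOUtils.toByteArray(object.getObjectContent());
    // ... write chunk to the HTTP response ...
}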

Apache HTTP Client forcing UTF-8 encoding

I'm making a REST call using the org.apache.http package as below. I'm expecting user profile details in the response in English and other international languages.
HttpGet req = new HttpGet(baseUrl + uri);
HttpResponse res = closeableHttpClient.execute(req);
The response has UTF-8 as its character set, which is what I wanted. From here, I used two approaches to unmarshal the response into a map.
Approach-1:
String response = EntityUtils.toString(res.getEntity(),"UTF-8");
// String response = EntityUtils.toString(httpResponse.getEntity(),Charset.forName("UTF-8"));
map = jsonConversionUtil.convertStringtoMap(response);
Issue:
httpResponse.getEntity() was returning a StringEntity object whose default charset was ISO_8859_1, and even when I forced conversion to UTF-8 (I tried both the uncommented line and the commented line above), I was not able to override it to UTF-8.
Approach-2:
HttpEntity responseEntity = res.getEntity();
if (responseEntity != null) {
    InputStream contentStream = responseEntity.getContent();
    if (contentStream != null) {
        String response = IOUtils.toString(contentStream, "UTF-8");
        map = jsonConversionUtil.convertStringtoMap(response);
    }
}
Issue:
IOUtils.toString(contentStream, "UTF-8") is not converting to UTF-8.
I am using the httpclient 4.3.2 and httpcore 4.3.1 jars. The Java version used is Java 6; I can't upgrade to a higher Java version.
Can you please guide me on how I can get the response as UTF-8?
If the StringEntity object has an ISO-8859-1 encoding, then the server has returned its response encoded as ISO-8859-1. Your assumption that "the response has UTF-8 as character set" is most likely wrong.
Since it's ISO-8859-1, neither of your approaches works:
Approach 1: The "UTF-8" parameter has no effect, as it only specifies the default encoding for the case where the server doesn't specify one (see EntityUtils.toString()). But the server obviously has specified one.
Approach 2: Reading the binary content as UTF-8 when it is in fact encoded in ISO-8859-1 will likely produce garbage (though many characters, in particular the ASCII range, have the same byte representation in UTF-8 and ISO-8859-1).
So try to ask the server to return UTF-8:
HttpGet req = new HttpGet(baseUrl + uri);
req.addHeader("Accept", "application/json");
req.addHeader("Accept-Charset", "utf-8");
HttpResponse res = closeableHttpClient.execute(req);
If it disregards the specified character set and still returns JSON in ISO-8859-1, then it will be unable to use characters outside the ISO-8859-1 range (unless it uses escaping within the JSON).
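Alternatively, decode with whatever charset the server actually declared instead of hard-coding one; a sketch against the httpcore 4.3 API already on your classpath (and Java 6 compatible):
import java.nio.charset.Charset;
import org.apache.http.HttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.util.EntityUtils;

HttpEntity entity = res.getEntity();
// getOrDefault() parses the Content-Type header; the charset may be absent.
ContentType contentType = ContentType.getOrDefault(entity);
Charset charset = contentType.getCharset();
// Fall back to the HTTP default (ISO-8859-1) when no charset is declared.
String response = EntityUtils.toString(entity,
        charset != null ? charset : Charset.forName("ISO-8859-1"));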

Importing binary data to parse.com

I'm trying to import data to parse.com so I can test my application (I'm new to Parse and I've never used JSON before).
Can you please give me an example of a JSON file that I can use to import binary files (images)?
NB: I'm trying to upload my data in bulk directly from the Data Browser. Here is a screencap: i.stack.imgur.com/bw9b4.png
In the Parse docs, two sections could help you out, depending on whether you want to use the REST API or the Android SDK.
REST API: see the section on POST, uploading files that can be uploaded to Parse using a REST POST.
SDK: see the section on "Files".
The code for REST includes the following: use an HttpClient implementation that has a ByteArrayEntity class (or something similar), map your image to the ByteArrayEntity, and POST it with the correct MIME-type headers in HttpClient:
case POST:
    HttpPost httpPost = new HttpPost(url); // url ends with "audio" or "pic"
    httpPost.setProtocolVersion(new ProtocolVersion("HTTP", 1, 1));
    httpPost.setConfig(this.config);
    if (mfile.canRead()) {
        // Get the file's size and map it into memory.
        FileInputStream fis = new FileInputStream(mfile);
        FileChannel fc = fis.getChannel();
        int sz = (int) fc.size();
        MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
        // Copy the mapped bytes into the request entity.
        byte[] data2 = new byte[bb.remaining()];
        bb.get(data2);
        ByteArrayEntity reqEntity = new ByteArrayEntity(data2);
        httpPost.setEntity(reqEntity);
        fis.close();
    }
    ...
    httpPost.addHeader("Content-Type", "image/*");
Then, in pseudocode: post the runnable that executes the HTTP request.
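Note that for the POST to be accepted by Parse's REST API, the request also needs the credential headers; a brief sketch (values are placeholders):
// Parse REST credential headers required on every API call.
httpPost.addHeader("X-Parse-Application-Id", "<APPLICATION_ID>");
httpPost.addHeader("X-Parse-REST-API-Key", "<REST_API_KEY>");
// Prefer the file's real MIME type over a wildcard.
httpPost.addHeader("Content-Type", "image/png");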
The only binary data allowed to be uploaded to parse.com is images. In other cases, such as files or streams, the most suitable solution is to store in Parse a link to the binary data, keeping the data itself in a dedicated external store for that type of content.
