I am writing a performance test script with JMeter, but the target site uses gzip to compress its responses, so JMeter can't display the response body and only shows an EOFException.
java.io.EOFException
at java.base/java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:269)
To solve this problem, I wrote a JSR223 PostProcessor to decompress the response, but my script always shows a "Not in GZIP format" message.
java.util.zip.ZipException: Not in GZIP format
I have no idea what this message means. Please help me.
This is my JSR223 PostProcessor script:
import java.util.zip.GZIPInputStream;
import java.io.InputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
// Raw (compressed) response body from the previous sampler
byte[] responseBody = prev.getResponseData();
ByteArrayOutputStream decompressBaos = new ByteArrayOutputStream();
// Wrap the body in a GZIPInputStream and copy it out byte by byte
try (InputStream gzip = new GZIPInputStream(new ByteArrayInputStream(responseBody))) {
    int b;
    while ((b = gzip.read()) != -1) {
        decompressBaos.write(b);
    }
} catch (Exception ex) {
    log.info("Exception:" + ex);
}
// Replace the sampler's response data with the decompressed bytes
byte[] decompressed = decompressBaos.toByteArray();
prev.setResponseData(decompressed);
Are you sure the response is actually in GZIP format? In order to request it in GZIP you need to add an HTTP Header Manager and configure it to send the Accept-Encoding header with a value of gzip,deflate
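As a quick sanity check (a minimal sketch of my own, reusing the JSR223 variables from the question's script), you can test for the gzip magic bytes 1f 8b before decompressing, so plain responses are left untouched:

// Only decompress when the body starts with the gzip magic bytes 1f 8b
byte[] body = prev.getResponseData();
if (body.length >= 2 && body[0] == (byte) 0x1f && body[1] == (byte) 0x8b) {
    // Looks like gzip; safe to hand to GZIPInputStream as in the script above
    log.info("Response is gzip-compressed");
} else {
    log.info("Response does not start with the gzip magic bytes; skipping decompression");
}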
Related
The application I am creating takes a gzipped file sent to a RESTful PUT, unzips the file and then does further processing like so:
public class Service {
    @PUT
    @Path("/{filename}")
    Response doPut(@Context HttpServletRequest request,
                   @PathParam("filename") String filename,
                   InputStream inputStream) {
        try {
            GZIPInputStream gzipInputStream = new GZIPInputStream(inputStream);
            // Do Stuff with GZIPInputStream
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
I am able to successfully send a gzipped file in a unit test like so:
InputStream inputStream = new FileInputStream("src/main/resources/testFile.gz");
Service service = new Service();
service.doPut(mockHttpServletRequest, "testFile.gz", inputStream);
// Verify processing stuff happens
But when I build the application and attempt to curl the same file from the src/main/resources dir with the following, I get a ZipException:
curl -v -k -X PUT --user USER:Password -H "Content-Type: application/gzip" --data-binary @testFile.gz https://myapp.dev.com/testFile.gz
The exception is:
java.util.zip.ZipException: Not in GZIP format
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:165)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
at Service.doPut(Service.java:23)
// etc.
So does anyone have any idea why sending the file via curl causes the ZipException?
Update:
I ended up taking a look at the actual bytes being sent via the InputStream and figured out where the ZipException: Not in GZIP format error was coming from. The first two bytes of a GZIP file are required to be 1F and 8B respectively for GZIPInputStream to recognize the data as GZIP. Instead, the 8B byte, along with every other byte in the stream that doesn't correspond to a valid UTF-8 character, was transformed into the bytes EF, BF, BD, which are the UTF-8 replacement-character bytes. Thus the server is reading the GZIP data as UTF-8 text rather than as binary and corrupting it in the process.
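A tiny snippet (my own illustration, not from the original post) reproduces the corruption: round-tripping the two magic bytes through a UTF-8 String mangles the 8B byte into the replacement sequence while 1F survives:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Utf8CorruptionDemo {
    public static void main(String[] args) {
        byte[] magic = {(byte) 0x1f, (byte) 0x8b}; // the required gzip header bytes
        // Decoding as UTF-8 replaces the invalid 0x8b byte with U+FFFD...
        String asText = new String(magic, StandardCharsets.UTF_8);
        // ...which re-encodes as EF BF BD, destroying the gzip header
        byte[] roundTripped = asText.getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.toString(roundTripped)); // [31, -17, -65, -67] = 1F EF BF BD
    }
}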
The issue I am having now is that I can't figure out where I need to change the configuration to get the server to treat the compressed data as binary rather than UTF-8. The application uses JAX-RS on a Jersey server with Spring Boot, deployed in a Kubernetes pod and run as a service, so something in the setup of one of those technologies needs to be tweaked to prevent the improper encoding from being applied to the data.
I have tried adding -H "Content-Encoding: gzip" to the curl command, registering the EncodingFilter.class and GZipEncoder.class in the Jersey ResourceConfig class, adding application/gzip to server.compression.mime-types in application.properties, adding the @Consumes("application/gzip") annotation to the doPut method above, and several other things I can't remember off the top of my head, but nothing seems to have any effect.
I am seeing the following in the verbose curl logs:
> PUT /src/main/resources/testFile.gz
> HOST: my.host.com
> Authorization: Basic <authorization stuff>
> User-Agent: curl/7.54.1
> Accept: */*
> Content-Encoding: gzip
> Content-Type: application/gzip
> Content-Length: 31
>
} [31 bytes data]
* upload completely sent off: 31 out of 31 bytes
< HTTP/1.1 500
< X-Application-Context: application
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: <date stuff>
...etc
Nothing I have done has affected the Content-Type: application/json;charset=UTF-8 portion on the receiving side, which I suspect is the issue.
I ran into the same problem and finally solved it by using -H 'Content-Type: application/json;charset=UTF-8'
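Applied to the curl command from the question, that would be (my adaptation, not part of the original answer):

curl -v -k -X PUT --user USER:Password -H "Content-Type: application/json;charset=UTF-8" --data-binary @testFile.gz https://myapp.dev.com/testFile.gz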
Use Charles to find the difference
I could successfully send the gzipped file using Postman, so I used Charles to capture the two requests sent by curl and Postman respectively. Comparing the two captures, I found that Postman used application/json as the Content-Type while curl used text/plain.
Spring docs: Content Type and Transformation
According to the Spring docs, if the content type is text/plain and the source payload is a byte[], Spring will convert the payload to a String using the charset specified in the content-type header. That's why the ZipException occurred: the original byte data had already been decoded into a string and was no longer in gzip format.
Spring source code
@Override
protected Object convertFromInternal(Message<?> message, Class<?> targetClass, @Nullable Object conversionHint) {
    Charset charset = getContentTypeCharset(getMimeType(message.getHeaders()));
    Object payload = message.getPayload();
    return (payload instanceof String ? payload : new String((byte[]) payload, charset));
}
I am working with an API server that implements a "server-push" feature by using an infinite chunked response. Each chunk in the response represents a message the server pushed to the client, and each chunk is actually a complete JSON object. Here is the code I am using as a client to receive the pushed messages.
Flux<JSONObject> jsonObjectFlux = client
        .post(uriBuilder.expand("/data/long_poll").toString(), request -> {
            String pollingRequest = createPollingRequest();
            return request
                    .failOnClientError(false)
                    .failOnServerError(false)
                    .addHeader("Authorization", host.getToken())
                    .addHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                    .addHeader(HttpHeaders.CONTENT_LENGTH,
                            String.valueOf(ByteBufUtil.utf8Bytes(pollingRequest)))
                    .sendString(Mono.just(pollingRequest));
        }).flatMapMany(response -> response.receiveContent().map(httpContent -> {
            ByteBuf byteBuf = httpContent.content();
            String source = new String(ByteBufUtil.getBytes(byteBuf), Charsets.UTF_8);
            return new JSONObject(source);
        }));

jsonObjectFlux.subscribe(jsonObject -> {
    logger.debug("JSON: {}", jsonObject);
});
However, I get an exception like:
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.json.JSONException: Unterminated string at 846 [character 847 line 1]
Caused by: org.json.JSONException: Unterminated string at 846 [character 847 line 1]
at org.json.JSONTokener.syntaxError(JSONTokener.java:433)
at org.json.JSONTokener.nextString(JSONTokener.java:260)
at org.json.JSONTokener.nextValue(JSONTokener.java:360)
at org.json.JSONObject.<init>(JSONObject.java:214)
at org.json.JSONTokener.nextValue(JSONTokener.java:363)
at org.json.JSONObject.<init>(JSONObject.java:214)
Obviously, I am not getting a whole JSON object in one piece. I am wondering whether response.receiveContent() is the right way to handle one chunk of data.
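For illustration (a made-up partial payload of my own), the failure mode is easy to reproduce in isolation: if a chunk boundary falls in the middle of a JSON string, JSONTokener raises exactly this "Unterminated string" error:

import org.json.JSONObject;

public class SplitChunkDemo {
    public static void main(String[] args) {
        // A message cut off mid-string, as a chunk boundary might leave it
        String partialChunk = "{\"event\":\"update\",\"payload\":\"hel";
        // Throws org.json.JSONException: Unterminated string at ...
        new JSONObject(partialChunk);
    }
}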
I am trying to create a connection but get an error. Below is the Beanshell sampler code:
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.ConnectionListener;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jivesoftware.smack.SASLAuthentication;
import org.jivesoftware.smack.SmackException;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.XMPPException.XMPPErrorException;
String jabberId = "admin";
String jabberPass = "12345";
String SERVER_ADDRESS = "xxx.xxx.xxx.xxx";
int PORT = 5222; // or any other port
String ServiceName = "Smack";
SASLAuthentication.blacklistSASLMechanism("DIGEST-MD5");
SASLAuthentication.unBlacklistSASLMechanism("PLAIN");
XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
        .setCompressionEnabled(false)
        .setHost(SERVER_ADDRESS)
        .setServiceName(ServiceName)
        .setPort(DEFAULT_PORT)
        .setSecurityMode(ConnectionConfiguration.SecurityMode.disabled)
        .setSendPresence(true)
        .setDebuggerEnabled(true)
        .build();
XMPPTCPConnection con = new XMPPTCPConnection(config);
int REPLY_TIMEOUT = 50000; // 50 seconds, but can be shorter
con.setPacketReplyTimeout(REPLY_TIMEOUT);
//con = getConnection();
con.connect();
//con.login(jabberId,jabberPass);
Below is the error:
Response code: 500 Response message:
org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval
In file: inline evaluation of: ``public XMPPConnection doConnect(String userName,String password) { XMPPConne . . . '' Encountered "( xxx.xxx .xxx" at line 7, column 60.
Please tell me what's wrong with it, or give me the correct code to connect JMeter to an XMPP server.
I need to transfer big files (at least 14MB) from the Cosmos instance of the FIWARE Lab to my backend.
I used the Spring RestTemplate as a client interface for the Hadoop WebHDFS REST API described here, but I ran into an IO exception:
Exception in thread "main" org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>?op=open&user.name=<user.name>":Truncated chunk ( expected size: 14744230; actual size: 11285103); nested exception is org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 14744230; actual size: 11285103)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:580)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:545)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:466)
This is the actual code that generates the Exception:
RestTemplate restTemplate = new RestTemplate();
restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());
HttpHeaders headers = new HttpHeaders(); // header contents elided in the original snippet
HttpEntity<?> entity = new HttpEntity<>(headers);
UriComponentsBuilder builder =
        UriComponentsBuilder.fromHttpUrl(hdfs_path)
                .queryParam("op", "OPEN")
                .queryParam("user.name", user_name);
ResponseEntity<byte[]> response =
        restTemplate.exchange(builder.build().encode().toUri(), HttpMethod.GET, entity, byte[].class);
FileOutputStream output = new FileOutputStream(new File(local_path));
IOUtils.write(response.getBody(), output);
output.close();
I think this is due to a transfer timeout on the Cosmos instance, so I tried sending a curl request against the path with the offset, buffer and length parameters specified, but they seem to be ignored: I got the whole file.
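A reconstruction of such a request (exact values are mine; offset, length and buffersize are documented query parameters of the WebHDFS OPEN operation):

curl -v "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>?op=OPEN&user.name=<user.name>&offset=0&length=1048576&buffersize=65536"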
Thanks in advance.
OK, I found a solution. I don't understand why, but the transfer succeeds if I use a Jetty HttpClient instead of the RestTemplate (and thus Apache HttpClient). This works now:
ContentExchange exchange = new ContentExchange(true) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();

    protected void onResponseContent(Buffer content) throws IOException {
        // Accumulate each chunk of the response body as it arrives
        bos.write(content.asArray(), 0, content.length());
    }

    protected void onResponseComplete() throws IOException {
        if (getResponseStatus() == HttpStatus.OK_200) {
            // Write the fully buffered file to disk
            FileOutputStream output = new FileOutputStream(new File(<local_path>));
            IOUtils.write(bos.toByteArray(), output);
            output.close();
        }
    }
};

UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(<hdfs_path>)
        .queryParam("op", "OPEN")
        .queryParam("user.name", <user_name>);

exchange.setURL(builder.build().encode().toUriString());
exchange.setMethod("GET");
exchange.setRequestHeader("X-Auth-Token", <token>);

HttpClient client = new HttpClient();
client.setConnectorType(HttpClient.CONNECTOR_SELECT_CHANNEL);
client.setMaxConnectionsPerAddress(200);
client.setThreadPool(new QueuedThreadPool(250));
client.start();
client.send(exchange);
exchange.waitForDone();
Is there any known bug in the Apache HttpClient for chunked file transfers?
Was I doing something wrong in my RestTemplate request?
UPDATE: I still don't have a solution
After a few tests I found that I haven't actually solved my problem. The Hadoop version installed on the Cosmos instance is quite old (Hadoop 0.20.2-cdh3u6), and I read that this WebHDFS doesn't support partial file transfer with the length parameter (introduced in v0.23.3).
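On a WebHDFS version that does honor those parameters (0.23.3 or later), a ranged download with the same RestTemplate setup could be sketched like this; the 4 MB chunk size is an arbitrary choice of mine:

// Sketch only: fetch the file in fixed-size ranges via the offset/length parameters
long offset = 0;
final long chunkSize = 4 * 1024 * 1024; // 4 MB per request
try (FileOutputStream output = new FileOutputStream(new File(local_path))) {
    while (true) {
        URI uri = UriComponentsBuilder.fromHttpUrl(hdfs_path)
                .queryParam("op", "OPEN")
                .queryParam("user.name", user_name)
                .queryParam("offset", offset)
                .queryParam("length", chunkSize)
                .build().encode().toUri();
        ResponseEntity<byte[]> part =
                restTemplate.exchange(uri, HttpMethod.GET, entity, byte[].class);
        byte[] body = part.getBody();
        if (body == null || body.length == 0) {
            break; // reached end of file
        }
        output.write(body);
        offset += body.length;
    }
}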
These are the headers I received from the server when I sent a GET request using curl:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
server: Apache-Coyote/1.1
set-cookie: hadoop.auth="u=<user>&p=<user>&t=simple&e=1448999699735&s=rhxMPyR1teP/bIJLfjOLWvW2pIQ="; Version=1; Path=/
Content-Type: application/octet-stream; charset=utf-8
content-length: 172934567
date: Tue, 01 Dec 2015 09:54:59 GMT
connection: close
As you can see, the Connection header is set to close. In fact, the connection is usually closed whenever the GET request lasts more than 120 seconds, even if the file transfer has not completed.
In conclusion, I can say that Cosmos is totally useless if it doesn't support large file transfers.
Please correct me if I'm wrong, or if you know a workaround.
Something is altering data written to TCP sockets on my Windows 7 machine: specifically, when the bytes follow a specific HTTP POST pattern, the pattern is repeated when the bytes are read from the corresponding listener socket side of the connection.
The following bytes are written to the client socket (note: each line ends with a carriage return and newline, and the two non-blank lines are followed by two blank lines):
POST / HTTP/1.1
Transfer-Encoding: chunked
What is read from the listener socket is:
POST / HTTP/1.1
Transfer-Encoding: chunked
POST / HTTP/1.1
Transfer-Encoding: chunked
I've tested this on the loopback (127.0.0.1) address on my machine, but I've also seen the modified bytes when the listener socket was on another machine, so it appears the bytes are modified on the client side. I've reproduced the problem using both netcat and a Java program (see below) on my machine, so the issue appears to be in the TCP stack. I've only been able to trigger it with a specific set of HTTP headers, so it appears that something is doing deep packet inspection on my TCP communication and altering it. If I alter the input bytes slightly so that they are not a valid HTTP request (for instance, changing "POST" to "QOST"), it works fine.
Below is a Java program I've written that demonstrates this, along with its output:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.Charset;
public final class Main {

    private static final String PAYLOAD
            = "POST / HTTP/1.1\r\n"
            + "Transfer-Encoding: chunked\r\n"
            + "\r\n"
            + "\r\n"
            ;

    private static final int PORT = 8080;

    public static final void main(final String[] args) throws Exception {
        final Thread serverThread = new Thread(new Server());
        serverThread.start();
        final byte[] payloadBytes = PAYLOAD.getBytes(Charset.forName("UTF-8"));
        int i = 0;
        try (final Socket socket = new Socket(InetAddress.getLoopbackAddress(), PORT)) {
            socket.setTcpNoDelay(true);
            final OutputStream os = socket.getOutputStream();
            for (final byte byteValue : payloadBytes) {
                os.write(byteValue);
                os.flush();
                i++;
            }
        }
        serverThread.join();
        System.out.println("bytes written: " + i);
    }

    private static final class Server implements Runnable {

        @Override
        public void run() {
            try (final ServerSocket serverSocket = new ServerSocket(PORT)) {
                // while (true) {
                final Socket socket = serverSocket.accept();
                socket.setTcpNoDelay(true);
                try (final InputStream is = socket.getInputStream()) {
                    int i = 0;
                    int byteValue;
                    while ((byteValue = is.read()) >= 0) {
                        System.out.print((char) byteValue);
                        System.out.flush();
                        i++;
                    }
                    System.out.println("----------------");
                    System.out.println("bytes read: " + i);
                }
                // }
            } catch (final Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
}
output:
POST / HTTP/1.1
Transfer-Encoding: chunked
POST / HTTP/1.1
Transfer-Encoding: chunked
----------------
bytes read: 96
bytes written: 49
Below is the same test using netcat (nc.exe from Cygwin) on Windows (note: the file test_payload.blob contains the bytes described above, derived from the PAYLOAD constant in the Java program):
Start the nc listener:
nc -l 8080 > nc_capture; more nc_capture
Run the nc client (in another shell from the listener):
nc -v 127.0.0.1 8080 < test_payload.blob
The output written to nc_capture:
POST / HTTP/1.1
Transfer-Encoding: chunked
POST / HTTP/1.1
Transfer-Encoding: chunked
My first thought was a buggy firewall, so I disabled it, but it still happens. I also tried resetting my Winsock and TCP/IP stacks, and it still happens. I tried disabling all of my network adapters (the tests above use the loopback IP address, so the adapters are not needed), and it still happens. At this point I am pretty much out of ideas, and I don't even know how I would go about debugging this at a lower level. Has anyone ever seen something like this before? Is there some low-level diagnostic tool on Windows that I can use to see what might have its hooks in my TCP stack?