Spring WebFlux not streaming response

I was expecting this code to stream events to the client (code is in Kotlin but Java is very similar)
@RestController
object CustomerController {

    @GetMapping("/load", produces = arrayOf("application/stream+json"))
    fun load(): Flux<String> {
        var flux = Flux.fromIterable(ResultIterable())
        flux.subscribe({ println(it) })
        return flux
    }
}
ResultIterable is an iterable that generates a string on regular intervals. An infinite stream basically.
I don't see any output, it hangs forever.
I do see the string being printed on regular intervals (println(it)).
I am using the following curl:
curl -X GET http://localhost:8080/load -H 'accept: application/stream+json' -H 'cache-control: no-cache' -H 'content-type: application/stream+json'

Your error is here:
flux.subscribe({println(it)})
You subscribe to the Flux and consume it directly in the method.
By the time this Flux reaches the Reactor Netty HTTP container, there is nothing left for it to consume.
If you really want to println() each item, use doOnNext() instead and leave the subscribe() to the container.
You also have to follow the Server-Sent Events rules:
The server-side event stream syntax is simple. Set the "Content-Type" header to "text/event-stream".
https://www.w3schools.com/html/html5_serversentevents.asp
So, when I do this:
@GetMapping("/load", produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
fun load() =
    Flux.just("foo", "bar", "baz")
        .doOnNext({ println(it) })
I start to get Server-Sent Events in my connected client:
C:\tmp\so50823339>curl -X GET http://localhost:8080/load
data:foo
data:bar
data:baz
C:\tmp\so50823339>
where at the same time I get logs on the server for the mentioned doOnNext():
2018-06-12 17:33:37.453 INFO 6800 --- [ main] c.e.s.s.So50823339ApplicationKt : Started So50823339ApplicationKt in 3.112 seconds (JVM running for 3.924)
foo
bar
baz
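
The question notes that the Java version is very similar; as a hedged sketch (the asker's ResultIterable isn't shown, so Flux.interval stands in for it here), the same fix applied to an infinite stream in Java could look like this:

import java.time.Duration;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class StreamingController {

    @GetMapping(value = "/load", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> load() {
        // Emits forever; the framework subscribes, and doOnNext() is only a logging side effect.
        return Flux.interval(Duration.ofSeconds(1))
                .map(i -> "tick " + i)
                .doOnNext(System.out::println);
    }
}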


rSocket websocket postman testing mime types and endpoints

I am using spring-boot-starter-webflux and spring-boot-starter-rsocket version 2.7.1
The rSocket transport is set to websocket like this:
spring.rsocket.server.transport=websocket
spring.rsocket.server.mapping-path=/rsocket
# this setting has no effect when transport==WEBSOCKET
spring.rsocket.server.port=7000
There's a Spring @Controller endpoint with a @MessageMapping set up for a simple string, like:
@MessageMapping("test")
String test() {
    Logs.Info("*** Received test ***");
    return "tested";
}
I want to get a successful test done with Postman: run the Spring Boot app locally and connect to ws://localhost:7000 using the MIME types
dataMimeType: 'application/json'
metadataMimeType: 'message/x.rsocket.routing.v0'
The RSocket WebSocket connects, but I can't hit the test endpoint.
It fails with the error: 1005 No Status Received: Missing status code even though one was expected.
On the server the error is
DEBUG [reactor-http-nio-2] debug: [c4e97d34-1, L:/127.0.0.1:7000 - R:/127.0.0.1:2051] Cancelling Websocket inbound. Closing Websocket
DEBUG [reactor-http-nio-2] debug: [c4e97d34, L:/127.0.0.1:7000 - R:/127.0.0.1:2051] Removed handler: PongHandler, pipeline: DefaultChannelPipeline{(wsencoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameEncoder), (wsdecoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameDecoder), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
DEBUG [reactor-http-nio-2] debug: [c4e97d34, L:/127.0.0.1:7000 ! R:/127.0.0.1:2051] An outbound error could not be processed
java.nio.channels.ClosedChannelException
at reactor.core.publisher.MonoErrorSupplied.call(MonoErrorSupplied.java:61)
at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.subscribeNext(MonoIgnoreThen.java:228)
at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.onComplete(MonoIgnoreThen.java:203)
at reactor.core.publisher.SinkEmptyMulticast$VoidInner.complete(SinkEmptyMulticast.java:238)
at reactor.core.publisher.SinkEmptyMulticast.tryEmitEmpty(SinkEmptyMulticast.java:70)
at reactor.core.publisher.SinkEmptySerialized.tryEmitEmpty(SinkEmptySerialized.java:46)
What's the incorrect setting in Postman?
The answer is: don't use Postman. RSocket is a binary protocol; even though it can run over WebSocket, there are other tools to test it.
Use Spring messaging and write a unit test:
@Autowired
private RSocketRequester rSocketRequester;

StepVerifier.create(rSocketRequester
        .route("test")
        .retrieveMono(String.class))
    .expectNext("tested")
    .verifyComplete();
Or use the RSocket Client CLI (RSC):
rsc --request --route=test --debug ws://localhost:7000/rsocket
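
For a programmatic client, note that spring-boot-starter-rsocket auto-configures an RSocketRequester.Builder rather than a ready-made RSocketRequester, so a requester for the WebSocket transport could be built roughly like this (a sketch; the URL mirrors the rsc command above, and the class name is made up):

import java.net.URI;

import org.springframework.messaging.rsocket.RSocketRequester;

class RequesterFactory {

    // Assumes the RSocketRequester.Builder auto-configured by spring-boot-starter-rsocket.
    static RSocketRequester connect(RSocketRequester.Builder builder) {
        // Matches spring.rsocket.server.transport=websocket and mapping-path=/rsocket.
        return builder.websocket(URI.create("ws://localhost:7000/rsocket"));
    }
}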
Actually the following message was received:
{
    "data": "test",
    "metadata": 4
}
But now the error on the server side is:
DEBUG [reactor-http-nio-6] lambda$receive$0: receiving ->
Frame => Stream ID: 2064452128 Type: REQUEST_N Flags: 0b100000 Length: 42
RequestN: 539124833
Data:
DEBUG [reactor-http-nio-6] sendErrorAndClose: sending -> InvalidSetupException: SETUP or RESUME frame must be received before any others

Is there a way to debug transactions in RSK network, which could be the best way?

We are running an RSK node. Some smart contract transactions show internal errors, but the message from the failed require condition doesn't appear in those error messages...
We only see "internal error" and are unable to see which specific error occurred.
If your contract emits messages in its reverts, you can retrieve them using debug_traceTransaction.
NOTE: The debug RPC module is part of the standard RSK config, but the RSK public nodes do not expose it, so you must run your own node in order to use this feature.
The following assumes that you have a local node running with RPC exposed on port 4444.
First, you need to enable the debug module in your config file:
modules = [
    ...
    {
        "name": "debug",
        "version": "1.0",
        "enabled": "true",
    },
    ...
]
Then, you can execute the RPC method, passing the transaction ID as a parameter, as in this example:
curl \
-X POST \
-H "Content-Type:application/json" \
--data '{"jsonrpc":"2.0","method":"debug_traceTransaction","params":["0xa9ae08f01437e32973649cc13f6db44e3ef370cbcd38a6ed69806bd6ea385e49"],"id":1}' \
http://localhost:4444
You will get the following response (truncated for brevity):
{
...
"result": "08c379a00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000001e536166654d6174683a207375627472616374696f6e206f766572666c6f770000",
"error": "",
"reverted": true,
...
}
Finally, convert the result from hexadecimal to ASCII to obtain a readable message:
Ãy SafeMath: subtraction overflow
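
That result value is the ABI encoding of Error(string): a 4-byte selector (08c379a0), a 32-byte offset, a 32-byte length, and then the UTF-8 string bytes, which is why a naive hex-to-ASCII conversion shows a few garbage characters before the message. A small decoding sketch (a hypothetical helper, not part of the RSK tooling):

import java.nio.charset.StandardCharsets;

class RevertReasonDecoder {

    // Decodes an ABI-encoded Error(string) payload such as the "result" value above:
    // 8 hex chars of selector + 64 of offset + 64 of length + the string bytes.
    static String decode(String hex) {
        if (hex.startsWith("0x")) {
            hex = hex.substring(2);
        }
        int length = Integer.parseInt(hex.substring(72, 136), 16); // 32-byte length word
        byte[] bytes = new byte[length];
        for (int i = 0; i < length; i++) {
            int pos = 136 + i * 2;
            bytes[i] = (byte) Integer.parseInt(hex.substring(pos, pos + 2), 16);
        }
        // For the response above this yields "SafeMath: subtraction overflow".
        return new String(bytes, StandardCharsets.UTF_8);
    }
}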

Couldn't make new request verification for Slack API

I'm trying the new request verification process for Slack API on AWS Lambda but I can't produce a valid signature from a request.
The example shown in https://api.slack.com/docs/verifying-requests-from-slack is for a slash command, but I'm using it for an event subscription, specifically a subscription to a bot event (app_mention). Does the new process support event subscriptions as well?
If so, am I missing something?
This is my mapping template for the Integration Request in API Gateway. I can't get the raw request as the Slack documentation says, but I did my best, like this:
{
    "body" : $input.body,
    "headers": {
        #foreach($param in $input.params().header.keySet())
        "$param": "$util.escapeJavaScript($input.params().header.get($param))" #if($foreach.hasNext),#end
        #end
    }
}
My function for verification:
def is_valid_request(headers, body):
    logger.info(f"DECODED_SECRET: {DECODED_SECRET}")
    logger.info(f"DECRYPTED_SECRET: {DECRYPTED_SECRET}")
    timestamp = headers.get(REQ_KEYS['timestamp'])
    logger.info(f"timestamp: {timestamp}")
    encoded_body = urlencode(body)
    logger.info(f"encoded_body: {encoded_body}")
    base_str = f"{SLACK_API_VER}:{timestamp}:{encoded_body}"
    logger.info(f"base_str: {base_str}")
    base_b = bytes(base_str, 'utf-8')
    dgst_str = hmac.new(DECRYPTED_SECRET, base_b, digestmod=sha256).hexdigest()
    sig_str = f"{SLACK_API_VER}={dgst_str}"
    logger.info(f"signature: {sig_str}")
    req_sig = headers.get(REQ_KEYS['sig'])
    logger.info(f"req_sig: {req_sig}")
    logger.info(f"comparing: {hmac.compare_digest(sig_str, req_sig)}")
    return hmac.compare_digest(sig_str, req_sig)
Lambda log in CloudWatch (I can't show the values for security reasons, but each variable/constant seems to have a reasonable value):
DECODED_SECRET: ...
DECRYPTED_SECRET: ...
timestamp: 1532011621
encoded_body: ...
base_str: v0:1532011621:token= ... &team_id= ... &api_app_id= ...
signature: v0=3 ...
req_sig: v0=1 ...
comparing: False
signature should match req_sig but it doesn't. I guess there is something wrong with base_str = f"{SLACK_API_VER}:{timestamp}:{encoded_body}", i.e. the concatenation or urlencoding of the request body, but I'm not sure. Thank you in advance!
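
For reference, the scheme in the linked Slack docs signs the raw request body exactly as received, using the base string v0:<timestamp>:<raw body> and HMAC-SHA256 with the signing secret. A minimal sketch of that computation (in Java rather than the Lambda's Python, with illustrative names only):

import java.nio.charset.StandardCharsets;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class SlackSignature {

    // Computes "v0=" + hex(HMAC-SHA256(signingSecret, "v0:" + timestamp + ":" + rawBody)),
    // as described in "Verifying requests from Slack". Names here are illustrative.
    static String compute(String signingSecret, String timestamp, String rawBody) throws Exception {
        String baseString = "v0:" + timestamp + ":" + rawBody;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(signingSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        StringBuilder signature = new StringBuilder("v0=");
        for (byte b : mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8))) {
            signature.append(String.format("%02x", b));
        }
        return signature.toString();
    }
}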

Sending .gz file via CURL to RESTful put creating ZipException in GZIPInputStream

The application I am creating takes a gzipped file sent to a RESTful PUT, unzips the file and then does further processing like so:
public class Service {
    @PUT
    @Path("/{filename}")
    Response doPut(@Context HttpServletRequest request,
                   @PathParam("filename") String filename,
                   InputStream inputStream) {
        try {
            GZIPInputStream gzipInputStream = new GZIPInputStream(inputStream);
            // Do stuff with GZIPInputStream
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
I am able to successfully send a gzipped file in a unit test like so:
InputStream inputStream = new FileInputStream("src/main/resources/testFile.gz");
Service service = new Service();
service.doPut(mockHttpServletRequest, "testFile.gz", inputStream);
// Verify processing stuff happens
But when I build the application and attempt to curl the same file from the src/main/resources dir with the following, I get a ZipException:
curl -v -k -X PUT --user USER:Password -H "Content-Type: application/gzip" --data-binary @testFile.gz https://myapp.dev.com/testFile.gz
The exception is:
java.util.zip.ZipException: Not in GZIP format
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:165)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
at Service.doPut(Service.java:23)
// etc.
So does anyone have any idea why sending the file via CURL causes the ZipException?
Update:
I ended up taking a look at the actual bytes being sent via the InputStream and figured out where the ZipException: Not in GZIP format error was coming from. The first two bytes of a GZIP file are required to be 1F and 8B respectively in order for GZIPInputStream to recognize the data as being in GZIP format. Instead, the 8B byte, along with every other byte in the stream that doesn't correspond to a valid UTF-8 character, was transformed into the bytes EF, BF, BD, which are the UTF-8 unknown-character replacement bytes. Thus the server is reading the GZIP data as UTF-8 characters rather than as binary and is corrupting the data.
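A simple way to confirm that kind of corruption on the server side (a debugging sketch, not part of the service code above) is to peek at the first two bytes before wrapping the stream:

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

class GzipMagicCheck {

    // Logs the first two bytes (a GZIP stream must start with 1F 8B) and pushes them
    // back so the stream can still be handed to GZIPInputStream afterwards.
    static InputStream peekMagic(InputStream in) throws IOException {
        PushbackInputStream pushback = new PushbackInputStream(in, 2);
        int b1 = pushback.read();
        int b2 = pushback.read();
        System.out.printf("first bytes: %02X %02X (expected 1F 8B)%n", b1, b2);
        if (b2 != -1) pushback.unread(b2);
        if (b1 != -1) pushback.unread(b1);
        return pushback;
    }
}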
The issue I am having now is that I can't figure out where I need to change the configuration in order to get the server to treat the compressed data as binary rather than UTF-8. The application uses JAX-RS on a Jersey server with Spring Boot, deployed in a Kubernetes pod and run as a service, so something in the setup of one of those technologies needs to be tweaked to prevent the wrong encoding from being applied to the data.
I have tried adding -H "Content-Encoding: gzip" to the curl command, registering the EncodingFilter.class and GZipEncoder.class in the Jersey ResourceConfig class, adding application/gzip to server.compression.mime-types in application.properties, adding the @Consumes("application/gzip") annotation to the doPut method above, and several other things I can't remember off the top of my head, but nothing seems to have any effect.
I am seeing the following in the verbose CURL logs:
> PUT /src/main/resources/testFile.gz
> HOST: my.host.com
> Authorization: Basic <authorization stuff>
> User-Agent: curl/7.54.1
> Accept: */*
> Content-Encoding: gzip
> Content-Type: application/gzip
> Content-Length: 31
>
} [31 bytes data]
* upload completely sent off: 31 out of 31 bytes
< HTTP/1.1 500
< X-Application-Context: application
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: <date stuff>
...etc
Nothing I have done has affected the receiving side's Content-Type: application/json;charset=UTF-8 portion, which I suspect is the issue.
I met the same problem and finally solved it by using -H 'Content-Type:application/json;charset=UTF-8'
Use Charles to find the difference
I can successfully send the gzipped file using Postman, so I used Charles to capture the two requests sent by curl and Postman respectively. After comparing them, I found that Postman used application/json as the Content-Type while curl used text/plain.
Spring docs: Content Type and Transformation
According to the Spring docs, if the content type is text/plain and the source payload is byte[], Spring will convert the payload to a String using the charset specified in the content-type header. That's why the ZipException occurred: the original byte data had already been decoded as text and was no longer in GZIP format.
Spring source code
@Override
protected Object convertFromInternal(Message<?> message, Class<?> targetClass, @Nullable Object conversionHint) {
    Charset charset = getContentTypeCharset(getMimeType(message.getHeaders()));
    Object payload = message.getPayload();
    return (payload instanceof String ? payload : new String((byte[]) payload, charset));
}

Grails 3 - Spring Rest Docs using Rest assured giving SnippetException when using JSON views

I am trying to integrate Spring REST Docs with REST Assured in a Grails 3.1.4 application. I am using JSON views.
Complete code is at https://github.com/rohitpal99/rest-docs
In NoteController when I use
List<Note> noteList = Note.findAll()
Map response = [totalCount: noteList.size(), type: "note"]
render response as grails.converters.JSON
Document generation works well.
But I want to use JSON views like
respond Note.findAll()
where I have _notes.gson and index.gson files in the /views directory. I get a SnippetException. The response of a usual GET request to /notes is correct.
rest.docs.ApiDocumentationSpec > test and document get request for /index FAILED
org.springframework.restdocs.snippet.SnippetException at ApiDocumentationSpec.groovy:54
with no message, and I am unable to track down why it occurs.
Please suggest.
Full stacktrace
org.springframework.restdocs.snippet.SnippetException: The following parts of the payload were not documented:
{
"instanceList" : [ {
"title" : "Hello, World!",
"body" : "Integration Test from Hello"
}, {
"title" : "Hello, Grails",
"body" : "Integration Test from Grails"
} ]
}
at org.springframework.restdocs.payload.AbstractFieldsSnippet.validateFieldDocumentation(AbstractFieldsSnippet.java:134)
at org.springframework.restdocs.payload.AbstractFieldsSnippet.createModel(AbstractFieldsSnippet.java:74)
at org.springframework.restdocs.snippet.TemplatedSnippet.document(TemplatedSnippet.java:64)
at org.springframework.restdocs.generate.RestDocumentationGenerator.handle(RestDocumentationGenerator.java:192)
at org.springframework.restdocs.restassured.RestDocumentationFilter.filter(RestDocumentationFilter.java:63)
at com.jayway.restassured.internal.filter.FilterContextImpl.next(FilterContextImpl.groovy:73)
at org.springframework.restdocs.restassured.RestAssuredRestDocumentationConfigurer.filter(RestAssuredRestDocumentationConfigurer.java:65)
at com.jayway.restassured.internal.filter.FilterContextImpl.next(FilterContextImpl.groovy:73)
at com.jayway.restassured.internal.RequestSpecificationImpl.applyPathParamsAndSendRequest(RequestSpecificationImpl.groovy:1574)
at com.jayway.restassured.internal.RequestSpecificationImpl.get(RequestSpecificationImpl.groovy:159)
at rest.docs.ApiDocumentationSpec.$tt__$spock_feature_0_0(ApiDocumentationSpec.groovy:54)
at rest.docs.ApiDocumentationSpec.test and document get request for /index_closure2(ApiDocumentationSpec.groovy)
at groovy.lang.Closure.call(Closure.java:426)
at groovy.lang.Closure.call(Closure.java:442)
at grails.transaction.GrailsTransactionTemplate$1.doInTransaction(GrailsTransactionTemplate.groovy:70)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
at grails.transaction.GrailsTransactionTemplate.executeAndRollback(GrailsTransactionTemplate.groovy:67)
at rest.docs.ApiDocumentationSpec.test and document get request for /index(ApiDocumentationSpec.groovy)
REST Docs will fail a test if you try to document something that isn't there or fail to document something that is there. You've documented two fields in your test:
responseFields(
    fieldWithPath('totalCount').description('Total count'),
    fieldWithPath('type').description("Type of result")
)))
REST Docs has failed the test because some parts of the response haven't been documented: specifically, an instanceList array that contains maps with two keys, title and body. You can document those along with the other two fields with something like this:
responseFields(
    fieldWithPath('totalCount').description('Total count'),
    fieldWithPath('type').description("Type of result"),
    fieldWithPath('instanceList[].title').description('Foo'),
    fieldWithPath('instanceList[].body').description('Bar')
)))
If you don't care about potentially missing fields, you can use relaxedResponseFields instead of responseFields:
relaxedResponseFields(
    fieldWithPath('totalCount').description('Total count'),
    fieldWithPath('type').description("Type of result")
))
This won't fail the test if some fields are not mentioned.
