Spring 5's Reactive WebClient not so asynchronous? - reactor-netty

I've run into strange behavior with Spring's WebClient. I have two URLs, slow and fast. Both do nothing, but the slow one waits ten seconds before responding. When I call them simultaneously using WebClient, I expect the fast URL to complete earlier than the slow one, but in fact they both complete at the same time. Worse, sometimes it works as expected. Does anybody have thoughts on why it acts this way, and how to make it work correctly? Here is my example:
fun main() {
    val webClient = WebClient.create()
    println("${LocalDateTime.now()} [${Thread.currentThread().name}] Start")
    webClient.get().uri("http://callback-mock/slow-url").exchange()
        .subscribe { response ->
            println("${LocalDateTime.now()} [${Thread.currentThread().name}] Executed callback slow URL with result ${response.statusCode()}")
        }
    webClient.get().uri("http://callback-mock/fast-url").exchange()
        .subscribe { response ->
            println("${LocalDateTime.now()} [${Thread.currentThread().name}] Executed callback fast URL with result ${response.statusCode()}")
        }
    println("${LocalDateTime.now()} [${Thread.currentThread().name}] Waiting for exit")
    Thread.sleep(15_000)
    println("${LocalDateTime.now()} [${Thread.currentThread().name}] Exit")
}
Result (in most cases)
2019-10-02T13:04:34.536 [main] Start
2019-10-02T13:04:35.173 [main] Waiting for exit
2019-10-02T13:04:44.791 [reactor-http-nio-4] Executed callback slow URL with result 200 OK
2019-10-02T13:04:44.791 [reactor-http-nio-2] Executed callback fast URL with result 200 OK
2019-10-02T13:04:50.193 [main] Exit
Process finished with exit code 0
In rare cases it works as expected
2019-10-02T13:23:35.619 [main] Start
2019-10-02T13:23:36.300 [main] Waiting for exit
2019-10-02T13:23:36.802 [reactor-http-nio-2] Executed callback fast URL with result 200 OK
2019-10-02T13:23:45.810 [reactor-http-nio-4] Executed callback slow URL with result 200 OK
2019-10-02T13:23:51.308 [main] Exit
Process finished with exit code 0

The following very simple test shows that fast is always returned first (Reactor Netty is used as the HTTP server):
@Test
public void test() throws InterruptedException {
    DisposableServer server =
            HttpServer.create()
                      .port(0)
                      .route(r -> r.get("/fast", (req, res) -> res.sendString(Mono.just("test")))
                                   .get("/slow", (req, res) -> res.sendString(Mono.just("test").delayElement(Duration.ofSeconds(10)))))
                      .bindNow();
    WebClient webClient = WebClient.create();
    System.out.println(LocalDateTime.now() + " " + Thread.currentThread().getName() + " Start");
    webClient.get().uri("http://localhost:" + server.port() + "/slow").exchange()
             .subscribe(response ->
                 System.out.println(LocalDateTime.now() + " " + Thread.currentThread().getName() +
                     " Executed callback slow URL with result " + response.statusCode()));
    webClient.get().uri("http://localhost:" + server.port() + "/fast").exchange()
             .subscribe(response ->
                 System.out.println(LocalDateTime.now() + " " + Thread.currentThread().getName() +
                     " Executed callback fast URL with result " + response.statusCode()));
    System.out.println(LocalDateTime.now() + " " + Thread.currentThread().getName() + " Waiting for exit");
    Thread.sleep(15_000);
    System.out.println(LocalDateTime.now() + " " + Thread.currentThread().getName() + " Exit");
    server.disposeNow();
}

Related

How do I loop through an array sequentially (not in parallel) in RxSwift?

I have a list of objects I need to send to a server, and I would like to send them one after the other (not in parallel). After all objects have been sent without error, I want to run additional Observables which do different things.
let objects = [1, 2, 3]
let _ = Observable.from(objects).flatMap { object -> Observable<Void> in
    return Observable.create { observer in
        print("Starting request \(object)")
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) { // one request takes ~2sec
            print("Request \(object) finished")
            observer.onNext(Void())
            observer.onCompleted()
        }
        return Disposables.create()
    }
}.flatMap { result -> Observable<Void> in
    print("Do something else (but only once)")
    return Observable.just(Void())
}.subscribe(
    onNext: {
        print("Next")
    },
    onCompleted: {
        print("Done")
    }
)
What I get is:
Starting request 1
Starting request 2
Starting request 3
Request 1 finished
Do something else (but only once)
Next
Request 2 finished
Do something else (but only once)
Next
Request 3 finished
Do something else (but only once)
Next
Done
The whole process ends after 2 seconds. What I want is:
Starting request 1
Request 1 finished
Starting request 2
Request 2 finished
Starting request 3
Request 3 finished
Do something else (but only once)
Next
Done
The whole sequence should end after 6 seconds (because it's not executed in parallel).
I got this to work with a recursive function, but with lots of requests that ends in a deep recursion stack, which I would like to avoid.
Use concatMap instead of flatMap in order to send them one at a time instead of all at once. Learn more here:
RxSwift’s Many Faces of FlatMap
Then to do something just once afterwards, use toArray(). Here is a complete example:
let objects = [1, 2, 3]
_ = Observable.from(objects)
    .concatMap { object -> Observable<Void> in
        return Observable.just(())
            .debug("Starting Request \(object)")
            .delay(.seconds(2), scheduler: MainScheduler.instance)
            .debug("Request \(object) finished")
    }
    .toArray()
    .flatMap { results -> Single<Void> in
        print("Do something else (but only once)")
        return Single.just(())
    }
    .subscribe(
        onSuccess: { print("done") },
        onError: { print("error", $0) }
    )

Minimizing ZeroMQ round trip latency

My question is about minimizing the latency between a ZMQ client and server.
I have the following modified Hello World ZMQ example (JeroMQ 0.5.1):
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class server {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // Socket to talk to clients
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");
            while (!Thread.currentThread().isInterrupted()) {
                byte[] reply = socket.recv(0);
                System.out.println(
                    "Received " + ": [" + reply.length + "]"
                );
                String response = "world";
                socket.send(response.getBytes(ZMQ.CHARSET), 0);
            }
        }
    }
}
and client:
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class client {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // Socket to talk to server
            System.out.println("Connecting to hello world server" + args[0] + args[1] + args[2]);
            ZMQ.Socket socket = context.createSocket(SocketType.REQ);
            socket.connect("tcp://" + args[0] + ":" + args[1]);
            for (int requestNbr = 1; requestNbr != 10; requestNbr++) {
                byte[] request = new byte[requestNbr * (Integer.parseInt(args[2]))];
                System.out.println("Sending Hello " + requestNbr);
                long time = System.nanoTime();
                socket.send(request, 0);
                byte[] reply = socket.recv(0);
                double restime = (System.nanoTime() - time) / 1000000.0;
                System.out.println(
                    "Received " + new String(reply, ZMQ.CHARSET) + " " +
                    requestNbr + " " + restime
                );
            }
        }
    }
}
I'm running the server and the client over a network with latency (160ms round trip). I create the latency using tc on both the client and the server:
tc qdisc del dev eth0 root
tc class add dev eth0 parent 1: classid 1:155 htb rate 1000mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 flowid 1:155 match ip dst 192.168.181.1/24
tc qdisc add dev eth0 parent 1:155 handle 155: netem delay $t1 $dt1 distribution normal
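One detail worth noting: after the `tc qdisc del ... root` above, the later commands reference `parent 1:`, which only exists once a root htb qdisc has been created. Presumably a line like the following (the handle shown is an assumption) was run before the class and filter commands:

```shell
# assumed root qdisc providing handle 1: for the htb class and filter above
tc qdisc add dev eth0 root handle 1: htb
```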
Now when I run java -jar client.jar 192.168.181.3 5555 100000 I get the following output:
Sending Hello 1
Received world 1 1103.392783
Sending Hello 2
Received world 2 322.553512
Sending Hello 3
Received world 3 478.10143
Sending Hello 4
Received world 4 606.396567
Sending Hello 5
Received world 5 641.465041
Sending Hello 6
Received world 6 772.961712
Sending Hello 7
Received world 7 910.848674
Sending Hello 8
Received world 8 966.694224
Sending Hello 9
Received world 9 940.645636
which means that as the message size increases, it takes more round trips to send the message and receive the reply (you can play with the message size to see for yourself). I was wondering what I need to do to prevent that from happening, that is: send everything in one go and keep the latency close to the round-trip time.
Note: In my original application I'm using a REQ-ROUTER pattern, as I have multiple clients, but the issue with latency and large messages persists.
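One way to check whether the extra round trips come from TCP itself (congestion-window growth under the injected delay) rather than from ZeroMQ is to time a plain-socket echo as a baseline. This is a minimal sketch using only the Python standard library; the port number and message sizes are illustrative choices, not taken from the question:

```python
# Baseline check: time a raw TCP echo round trip for growing message sizes,
# independent of ZeroMQ. The port and the sizes are illustrative choices.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5556  # assumption: any free local port works


def recv_exact(conn, n):
    """Read exactly n bytes (or fewer if the peer closes early)."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf


def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            header = recv_exact(conn, 4)          # 4-byte length prefix
            if len(header) < 4:
                break
            payload = recv_exact(conn, int.from_bytes(header, "big"))
            conn.sendall(header + payload)        # echo everything back


listener = socket.create_server((HOST, PORT))
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

results = []
with socket.create_connection((HOST, PORT)) as client:
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for size in (100_000, 200_000, 400_000):
        msg = bytes(size)
        start = time.perf_counter()
        client.sendall(len(msg).to_bytes(4, "big") + msg)
        reply = recv_exact(client, 4 + size)
        rtt_ms = (time.perf_counter() - start) * 1000
        results.append((size, len(reply) - 4, rtt_ms))
        print("size=%d echoed=%d rtt=%.3f ms" % (size, len(reply) - 4, rtt_ms))
```

If the plain-socket timings show the same growth with message size under the netem delay, the cause is at the TCP level rather than in JeroMQ.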

Running into rate limit for Boto3 EC2 create_snapshots

I ran the code at the bottom of this post in my environment and got the following error after a few successes:
An error occurred (SnapshotCreationPerVolumeRateExceeded) when calling the CreateSnapshot operation: The maximum per volume CreateSnapshot request rate has been exceeded. Use an increasing or variable sleep interval between requests.
I'm used to doing something like this to paginate my results using a MaxResults variable and the NextToken returned by the response:
maxResults = 100
result = ec2.describe_instances(MaxResults=maxResults)
nextToken = result['NextToken']
instance_ids = []
for reservation in result['Reservations']:
    for instance in reservation['Instances']:
        instance_ids.append(instance['InstanceId'])
size = len(instance_ids)
while size == maxResults:
    result = ec2.describe_instances(MaxResults=maxResults, NextToken=nextToken)
    nextToken = result['NextToken']
    size = len(instance_ids)
    # etc...
However, because I'm already filtering by tag in my describe_instances call, I'm not allowed to pass a maxResults parameter as well. Additionally, create_snapshot's call signature only allows me to specify a dry run, the volume ID, and a description of the snapshot, and does not return a nextToken or similar. How can I avoid this error - must I introduce a sleep like the error message suggests?
Lambda function code:
from __future__ import print_function
import boto3
import datetime
import time

ec2 = boto3.client('ec2')

def createScheduleSnapshots(event, context):
    errors = []
    try:
        print("Creating snapshots on " + str(datetime.datetime.today()) + ".")
        schedulers = ec2.describe_instances(Filters=[{'Name': 'tag:GL-sub-purpose', 'Values': ['Schedule']}])
        schedule_instances = []
        for reservation in schedulers['Reservations']:
            for instance in reservation['Instances']:
                schedule_instances.append(instance)
        print("Performing backup on " + str(len(schedule_instances)) + " schedules.")
        successful = []
        failed = []
        for s in schedule_instances:
            try:
                instanceId = s['InstanceId']
                blockDeviceMappings = s['BlockDeviceMappings']
                snapshotDescription = instanceId + "-" + str(datetime.date.today().strftime('%Y-%m-%d')) + "-46130e7ac954-automated"
                for bd_maps in blockDeviceMappings:
                    if (bd_maps['DeviceName'] == '/dev/sdf'):  # Don't back up OS
                        volumeId = bd_maps['Ebs']['VolumeId']
                        print("\tSnapshotting " + instanceId)
                        ec2.create_snapshot(
                            VolumeId=volumeId,
                            Description=snapshotDescription
                        )
                successful.append(instanceId)
            except Exception as e:
                print(e)
                errors.append(e)
                failed.append(instanceId + " :\t" + str(e))
        print("Performed backup on " + str(len(successful)) + " schedulers. Failed backup on " + str(len(failed)) + " schedulers. ")
    except Exception as e:
        print(e)
        errors.append(e)
    if len(errors) == 0:
        return "Success"
    else:
        raise Exception("Errors during invocation of Lambda. " + str(errors))
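The error message itself points at the usual remedy: retry the CreateSnapshot call with an increasing, ideally jittered, delay. Below is a minimal, generic retry helper in pure Python; the delay parameters and the string-matching predicate are assumptions for illustration. With boto3 you would instead catch `botocore.exceptions.ClientError` and inspect `e.response['Error']['Code']` for `SnapshotCreationPerVolumeRateExceeded`:

```python
# Sketch: retry a callable with exponential backoff plus full jitter.
# The defaults (0.5s base, factor 2, 5 attempts) are illustrative choices.
import random
import time


def retry_with_backoff(fn, should_retry, attempts=5, base_delay=0.5, factor=2.0):
    """Call fn(); on a retryable exception, sleep a jittered delay and try again."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as e:
            if attempt == attempts - 1 or not should_retry(e):
                raise
            # full jitter: sleep a random fraction of the current delay
            time.sleep(random.uniform(0, delay))
            delay *= factor


# Usage with the Lambda above would look roughly like (predicate is a sketch):
#   retry_with_backoff(
#       lambda: ec2.create_snapshot(VolumeId=volumeId, Description=snapshotDescription),
#       lambda e: 'SnapshotCreationPerVolumeRateExceeded' in str(e))
```

Wrapping only the `create_snapshot` call this way keeps the rest of the loop unchanged while spacing out requests per volume, which is what the error message asks for.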

F#: Breaking out of a loop

I am new to programming and F# is my first language.
I have a list of URLs that, when first accessed, either returned HTTP error 404 or experienced gateway timeout. For these URLs, I would like to try accessing them another 3 times. At the end of these 3 attempts, if a WebException error is still thrown, I will assume that the URL doesn't exist, and I will add it to a text file containing all of the invalid URLs.
Here is my code:
let tryAccessingAgain (url: string) (numAttempts: int) =
    async {
        for attempt = 1 to numAttempts do
            try
                let! html = fetchHtmlAsync url
                let name = getNameFromPage html
                let id = getIdFromUrl url
                let newTextFile = File.Create(htmlDirectory + "\\" + id.ToString("00000") + " " + name.TrimEnd([|' '|]) + ".html")
                use file = new StreamWriter(newTextFile)
                file.Write(html)
                file.Close()
            with
            | :? System.Net.WebException -> File.AppendAllText("G:\User\Invalid URLs.txt", url + "\n")
    }
I have tested fetchHtmlAsync, getNameFromPage and getIdFromUrl in F# Interactive. All of them work fine.
If I succeed in downloading the HTML contents of a URL without using all 3 attempts, obviously I want to break out of the for-loop immediately. My question is: How may I do so?
Use recursion instead of the loop:
let rec tryAccessingAgain (url: string) (numAttempts: int) =
    async {
        if numAttempts > 0 then
            try
                let! html = fetchHtmlAsync url
                let name = getNameFromPage html
                let id = getIdFromUrl url
                let newTextFile = File.Create(htmlDirectory + "\\" + id.ToString("00000") + " " + name.TrimEnd([|' '|]) + ".html")
                use file = new StreamWriter(newTextFile)
                file.Write(html)
                file.Close()
            with
            | :? System.Net.WebException ->
                File.AppendAllText("G:\User\Invalid URLs.txt", url + "\n")
                return! tryAccessingAgain url (numAttempts-1)
    }
Please note that I could not test it, so there might be some syntax errors - sorry if so.
While we are at it - you might want to rewrite the logging of the invalid URL like this:
let rec tryAccessingAgain (url: string) (numAttempts: int) =
    async {
        if numAttempts <= 0 then
            File.AppendAllText("G:\User\Invalid URLs.txt", url + "\n")
        else
            try
                let! html = fetchHtmlAsync url
                let name = getNameFromPage html
                let id = getIdFromUrl url
                let newTextFile = File.Create(htmlDirectory + "\\" + id.ToString("00000") + " " + name.TrimEnd([|' '|]) + ".html")
                use file = new StreamWriter(newTextFile)
                file.Write(html)
                file.Close()
            with
            | :? System.Net.WebException ->
                return! tryAccessingAgain url (numAttempts-1)
    }
This way the URL will only be logged once all the attempts have been made.

JMeter asserting a response has been successfully downloaded

I am using JMeter to test some of the functionality on my site. Through using the Save Responses to a file element, I have been able to successfully issue a request to download a pdf through JMeter. However, I am curious if there is an assertion to check that a file has actually downloaded (and if possible, is in the format I specified!). I know I can simply look at the file, but I'm hoping to make this more automated. I have checked "Save Successful Responses Only," but I want to ensure a response has actually been saved.
I think you need to use a Beanshell Assertion for this.
Example code to check file presence, size and content type is below:
File file = new File("/path/to/downloaded/file");

// check file existence
if (!file.exists())
{
    Failure = true;
    FailureMessage = "File " + file.getName() + " does not exist";
}

// check file size
long expectedSize = SampleResult.getBodySize();
long actualSize = file.length();
if (expectedSize != actualSize)
{
    Failure = true;
    FailureMessage = "Actual file size differs from expected. Expected: " + expectedSize + " and got: " + actualSize;
}

// check content type
String expectedType = SampleResult.getContentType();
String actualType = file.toURI().toURL().openConnection().getContentType();
if (!expectedType.equals(actualType))
{
    Failure = true;
    FailureMessage = "Response types are different. Expected: " + expectedType + " and got: " + actualType;
}
See the How to Use JMeter Assertions in 3 Easy Steps guide for more information on JMeter Assertions.
