Webclient Leaks Memory In Docker? - spring-boot

I'm trying to use WebClient in my project, but when I load test it, I notice the Docker memory usage never goes down until the instance dies.
@Component
public class Controller {

    // This is an endpoint to another simple api
    // I use my local IP instead of localhost in the container
    private static final String ENDPOINT = "http://localhost:9090/";

    private WebClient client;

    public Controller(WebClient.Builder client) {
        super();
        this.client = client.build();
    }

    @Bean
    public RouterFunction<ServerResponse> router() {
        return RouterFunctions.route(GET("helloworld"), this::handle);
    }

    Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> helloMono =
                client.get().uri(ENDPOINT + "/hello").retrieve().bodyToMono(String.class);
        Mono<String> worldMono =
                client.get().uri(ENDPOINT + "/world").retrieve().bodyToMono(String.class);
        return Mono.zip(helloMono, worldMono, (h, w) -> h + w)
                .flatMap(s -> ServerResponse.ok().bodyValue(s));
    }
}
Here's my Dockerfile as well.
FROM openjdk:8
ENV SERVICE_NAME reactive-hello-world
ADD target/reactive-hello-world-*.jar $APP_HOME/reactive-hello-world.jar
RUN mkdir /opt/reactor-netty/
EXPOSE 9010
CMD java \
-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=localhost \
-Dcom.sun.management.jmxremote.port=9010 \
-Dcom.sun.management.jmxremote.rmi.port=9010 \
-Xmx190M \
-jar reactive-hello-world.jar
EXPOSE 8080
Have I missed a step somewhere?
Edit: Here are some images.
Before Load Test:
After Load Test:
As you can see, GC is happening correctly, but the memory hasn't decreased. If I let the test continue, it kills the instance in a couple of minutes.
I've tried similar code using RestTemplate and I'm not experiencing any issues; the memory doesn't usually exceed 400 MB even when I run the JMeter test for an extended time. Can you help me understand what's happening?
Edit: I've also tried the deprecated AsyncRestTemplate and I'm not seeing a problem with that either.
Edit: I have created the repos for this example. Please check if you can reproduce the issue.
The Hello World Backend
The WebClient Hello World (JMX is inside this repo)
The RestTemplate Hello World
The AsyncRestTemplate Hello World

Nevermind lads, I was able to find the answer, see:
https://github.com/reactor/reactor-netty/issues/1304
Essentially, the reactor netty dependency was outdated.
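For anyone hitting the same thing: if the project builds with Maven, one way to force a newer Reactor Netty is to import a more recent reactor-bom in dependencyManagement, so that reactor-netty and reactor-core are upgraded together. This is only a sketch; the release-train version below is an example, not a recommendation.
<dependencyManagement>
    <dependencies>
        <!-- Example only: pick the release train that contains the fix -->
        <dependency>
            <groupId>io.projectreactor</groupId>
            <artifactId>reactor-bom</artifactId>
            <version>Dysprosium-SR12</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>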

I don't think your problem has anything to do with RestTemplate vs. WebClient. Your GC graph looks pretty normal. It doesn't look like there is a leak, because whenever a GC happens, the allocated memory seems to go back to its previous levels.
It is important to note that you won't always see a drop in the container's memory usage when a garbage collection happens. This is because the Java Virtual Machine does not necessarily return memory back to the system after a GC. So even if a portion of the memory is unused/freed after a GC in your process, it may still appear as "used" from outside the process.
To be clear: the JVM does return memory back to the system in some circumstances; it depends on various factors, including which garbage collector is used.
Looking at the GC graph in your second screenshot, it looks pretty normal, but it seems that only a small amount of heap was released back to the system (the orange area).
You can try switching to the G1 collector via the -XX:+UseG1GC JVM flag and tune it to release unallocated memory back to the system more aggressively by tweaking -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, in particular by reducing -XX:MaxHeapFreeRatio from its default of 70 percent to a lower value. See here.
Lowering -XX:MaxHeapFreeRatio to as low as 10% and -XX:MinHeapFreeRatio to 5% has been shown to successfully reduce the heap size without too much performance degradation; however, results may vary greatly depending on your application. Try different values for these parameters until they are as low as possible yet still retain acceptable performance.
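For example, the java command in the Dockerfile above could be extended along these lines (a sketch only; JMX flags omitted for brevity, and the ratio values are illustrative starting points, not recommendations):
CMD java \
    -XX:+UseG1GC \
    -XX:MinHeapFreeRatio=5 \
    -XX:MaxHeapFreeRatio=10 \
    -Xmx190M \
    -jar reactive-hello-world.jar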
For further information on the topic, see:
Does GC release back memory to OS?
JEP 346: Promptly Return Unused Committed Memory from G1

Related

How to demonstrate memory visibility problems in Go?

I'm giving a presentation about the Go Memory Model. The memory model says that without a happens-before relationship between a write in one goroutine, and a read in another goroutine, there is no guarantee that the reader will observe the change.
To have a bigger impact on the audience, instead of just telling them that bad things can happen if you don't synchronize, I'd like to show them.
When I run the below code on my machine (2017 MacBook Pro with 3.5GHz dual-core Intel Core i7), it exits successfully.
Is there anything I can do to demonstrate the memory visibility issues?
For example, are there any specific changes like the following I could make to demonstrate the issue:
use different compiler settings
use an older version of Go
run on a different operating system
run on different hardware (such as ARM or a machine with multiple NUMA nodes).
For example, in Java the -server and -client flags affect which optimizations the JVM applies, which can lead to visibility issues occurring.
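For reference, a classic Java version of this kind of demonstration looks roughly like this (just a sketch; whether it actually hangs depends on the JVM, the flags, and timing): a non-volatile flag read in a tight loop, which a server-class JIT may hoist out of the loop so the reader thread never sees the write.
public class Visibility {
    // Deliberately not volatile: there is no happens-before edge between
    // the write in main() and the read in the reader thread.
    static boolean done;
    static String a;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // A server-class JIT may hoist this read out of the loop,
            // so the thread can spin forever without seeing the update.
            while (!done) {
            }
            System.out.println(a);
        });
        reader.start();

        Thread.sleep(1000); // give the JIT time to compile the hot loop
        a = "hello, world";
        done = true;
        reader.join();
    }
}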
I'm aware that the answer may be no, and that the spec may have been written to give future maintainers more flexibility with optimization. I'm aware I can make the code never exit by setting GOMAXPROCS=1 but that doesn't demonstrate visibility issues.
package main

var a string
var done bool

func setup() {
	a = "hello, world"
	done = true
}

func main() {
	go setup()
	for !done {
	}
	print(a)
}

Actual memory used by web server. Memory not released to OS

I push a bunch of requests through the web server, and according to htop / Activity Monitor on my Mac, VIRT is 530 GB and RES is 247 MB.
The memory doesn't seem to be released to the OS. I tried adding the following to force memory to be returned to the OS, as a test, to no avail:
func freeMem() {
	tick := time.Tick(time.Second * 10)
	for range tick {
		debug.FreeOSMemory()
	}
}
and at the top of main, calling go freeMem(), but this seems to have no effect.
So I tried checking that the garbage collector is working properly, visualising it with Dave Cheney's gcvis https://github.com/davecheney/gcvis:
gcvis shows things working fine and dandy, but htop and Activity Monitor still show very high memory usage.
Do I have anything to worry about? One thing I did notice in gcvis is that while gc.heapinuse goes down to acceptable levels, scvg.released and scvg.sys seem to remain high.

JavaFX eats my memory?

Before getting frustrated about the title, I would like to make clear that I am new to JavaFX. I have been a developer for 9 years, using Swing, and I recently decided to give JavaFX a try. Examples on the net show that JavaFX really can create beautiful GUIs compared to Swing. Maybe I am creating and deploying GUIs the wrong way, but one thing is for sure: JavaFX panes load slower than Swing and consume a lot more memory. The same GUI redesigned with JavaFX takes almost 200 MB, while the Swing version takes only 50 MB.
Here is an example of how I create the GUIs programmatically using FXML.
public class PanelCreator {

    private FXMLPane<LoginPaneController> loginFXML;
    private FXMLPane<RegistrationPaneController> registerFXML;
    private FXMLPane<EmailValidationPaneController> emailValidationFXML;

    public PanelCreator() {
        this.rootPane = rootPane;
        try {
            loginFXML = new FXMLPane<LoginPaneController>("Login.fxml");
            registerFXML = new FXMLPane<RegistrationPaneController>("Register.fxml");
            emailValidationFXML = new FXMLPane<EmailValidationPaneController>("EmailValidation.fxml");
        } catch (IOException e) {
            e.printStackTrace();
        } // catch
    } // Constructor Method

    public Pane getLoginPane() {
        return loginFXML.getPane();
    } // getLoginPane()

    public Pane getRegisterPane() {
        return registerFXML.getPane();
    } // getRegisterPane

    public Pane getEmailValidationPane() {
        return emailValidationFXML.getPane();
    } // getEmailValidationPane

    public LoginPaneController getLoginPaneController() {
        return loginFXML.getController();
    } // getLoginPaneController()

    public RegistrationPaneController getRegistrationPaneController() {
        return registerFXML.getController();
    } // getRegistrationPaneController()
} // class PanelCreator
The constructor of PanelCreator creates three FXMLPane instances; FXMLPane is a class that combines an FXML pane and its controller. The code of the FXMLPane class is shown below.
public class FXMLPane<T> {

    private Pane pane;
    private T paneController;

    public FXMLPane(String url) throws IOException {
        URL location = getClass().getResource(url);
        FXMLLoader fxmlLoader = new FXMLLoader();
        fxmlLoader.setLocation(location);
        fxmlLoader.setBuilderFactory(new JavaFXBuilderFactory());
        pane = fxmlLoader.load(location.openStream());
        paneController = fxmlLoader.<T>getController();
    } // Constructor Method

    public Pane getPane() {
        return pane;
    } // getPane()

    public T getController() {
        return paneController;
    } // getController()
}
Through PanelCreator I can now use the getter methods to obtain each JavaFX pane and its controller, and I do not have to run the FXML load method every time I need a panel. Currently, what bothers me is not so much that creating FXML GUIs is slower than Swing, but that the RAM usage is 3x to 4x that of the corresponding Swing version.
Can someone explain to me what I am doing wrong? The FXML files have just basic components on a GridPane: buttons, labels and text fields.
The code for the above example can be found here
Summarizing the answers from the comment section:
JavaFX needs more memory in general. For example, JavaFX uses double precision for the properties of its UI components, while Swing mostly uses integer values. But that difference should not be noticeable.
Java consumes more memory than it needs to. By default, Java does not return memory back to your system even if you trigger the garbage collection. Thus, if a JavaFX program needs a lot of memory during initialization but frees it afterwards, the JRE keeps holding that maximum amount of memory forever (see picture 1). As a side effect, the GC will be triggered less often, because there is so much free unused memory (see picture 2). You can change this default by using the JVM option -XX:+UseG1GC, which changes how memory is allocated, how it is freed, and when the GC is triggered. With this option the allocated memory should track the used memory more closely. If you want more tuning, see Java Heap Tuning.
JavaFX is a newer framework compared to Swing. It will be improved over time in performance and resource consumption. As you can see in pictures 1 and 3, it has already improved. It now uses 8 to 9 MB of memory on a 64-bit Linux machine, which is even less memory than the Swing version. I used Oracle Java:
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
Picture 1: Memory consumption over time for the JavaFX example program. It shows a huge amount of free memory compared to the used memory. The GC was triggered manually multiple times to show used memory part without garbage.
Picture 2: Memory consumption over time for the JavaFX example program, but without manually triggering the GC. The used memory grows and grows because the GC isn't triggered.
Picture 3: Memory consumption over time for the JavaFX example program using the GC option -XX:+UseG1GC. After the first GC cycle the memory size was reduced to fit the real size of used memory.
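As an aside, the used-versus-committed distinction from the second point above can be observed from inside the JVM with plain Runtime calls; here is a small sketch (my own illustration, nothing JavaFX-specific):
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();        // heap currently claimed from the OS
        long used = committed - rt.freeMemory();  // portion of the committed heap actually in use
        System.out.printf("used: %d MB, committed: %d MB, max: %d MB%n",
                used / (1024 * 1024),
                committed / (1024 * 1024),
                rt.maxMemory() / (1024 * 1024));
    }
}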

ITextRenderer.createPDF stops after creating multiple PDF

I'm facing a problem I can't seem to fix, and I need your help.
I'm generating a list of PDFs that I write to the hard drive. Everything works fine for a small number of files, but when I start to generate more files (via a for loop), the creation stops and the remaining PDF files aren't created.
I'm using Play Framework with the PDF module, which relies on ITextRenderer to generate the PDFs.
I localized the problem (well, I believe so) by adding output statements to see where it stops, and the problem is the call to .createPDF(os);.
At first I was only able to create 16 files before it stopped, so I created a singleton that builds the renderer once and re-uses the same instance (to avoid adding the fonts and settings every time); that got me to 61 files, but no more.
I thought about a memory leak blocking the process, but I can't see where, nor how to track it down properly.
Here's the relevant part of my code:
List<MyModel> models; // I got a list of MyModel from a DB query; MyModel contains a path to a file
List<InputStream> files = new ArrayList<InputStream>();
for (MyModel model : models) {
    if (!model.getFile().exists()) {
        model.generatePdf();
    }
    files.add(new FileInputStream(model.getFile()));
}
// The generatePdf method:
public void generatePdf() {
    byte[] bytes = PDF.toBytes(views.html.invoices.pdf.invoice.render(this, due));
    FileOutputStream output;
    try {
        File file = getFile();
        if (!file.getParentFile().exists()) {
            file.getParentFile().mkdirs();
        }
        if (file.exists()) {
            file.delete();
        }
        output = new FileOutputStream(file);
        BufferedOutputStream bos = new BufferedOutputStream(output);
        bos.write(bytes);
        bos.flush();
        bos.close();
        output.flush();
        output.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
As you can see, I do my best to avoid memory leaks but this isn't enough.
In order to locate the problem, I replaced PDF.toBytes and all subsequent calls from that class with a copy/paste version inside my own class, and added output statements. That's how I found that the thread hangs at the createPDF call.
Update 1:
I have two (identical) Play Framework applications running with these parameters:
-Xms1024m -Xmx1024m -XX:PermSize=512m -XX:MaxPermSize=512m
I tried to stop one instance and re-run the PDF generation, but it didn't change the number of files generated; it stops at the same count.
I also tried to increase the allocated memory:
-Xms1536m -Xmx1536m -XX:PermSize=1024m -XX:MaxPermSize=1024m
No change at all either.
For information, the server has 16 GB of RAM.
cat /proc/cpuinfo :
model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
cpu MHz : 3101.000
cpu cores : 4
cache size : 6144 KB
Hope it helps.
Well, I'm really surprised: the bug has absolutely nothing to do with memory, memory leaks, or available memory.
I'm astonished.
It's related to an image that was loaded via a URL from the same (local) server, which was taking too long to load. Removing that image fixed the issue.
I will use a base64-encoded image instead, and that should fix it for good.
I still can't believe it!
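For anyone wanting to do the same: the idea is to inline the image as a data URI in the template instead of referencing it by URL. A rough sketch with plain JDK classes (the file path and MIME type are made up for the example):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class InlineImage {
    public static void main(String[] args) throws Exception {
        // Read the image from disk and turn it into a data URI so the
        // PDF renderer never has to make an HTTP call for it.
        byte[] bytes = Files.readAllBytes(Paths.get("public/images/logo.png"));
        String dataUri = "data:image/png;base64," + Base64.getEncoder().encodeToString(bytes);
        // Use dataUri as the src attribute of the <img> tag in the template.
        System.out.println(dataUri.substring(0, 50) + "...");
    }
}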
The module is developed by Jörg Viola, so I think it's safe to assume everything is fine on that side. The same goes for the iText library itself.
The bottleneck, as you guessed, was in your code. The interesting part is that it wasn't memory being managed improperly, but a network request that was making the PDF rendering slower and slower each time, until it ultimately failed.
It's nice that you finally made it work.

IronPython memory usage

I'm hosting IronPython in a C#-based web service to provide custom extension scripts. However, I'm finding that memory usage increases sharply when I do simple load testing by calling the web service repeatedly in a loop.
IronPython 1.1 implemented IDisposable on its objects so that you could dispose of them when they were done. The new IronPython 2 engine, based on the DLR, has no such concept.
From what I understand, every time you execute a script in the ScriptEngine, a new assembly is injected into the AppDomain and can't be unloaded.
Is there any way around this?
You could try creating a new AppDomain every time you run one of your IronPython scripts. Although assemblies cannot be unloaded from memory, you can unload an AppDomain, and this will allow you to get the injected assemblies out of memory.
You need to disable the optimized code generation:
var runtime = Python.CreateRuntime();
var engine = runtime.GetEngine("py");

PythonCompilerOptions pco = (PythonCompilerOptions)engine.GetCompilerOptions();
pco.Module &= ~ModuleOptions.Optimized;

// this shouldn't leak now
while (true) {
    var code = engine.CreateScriptSourceFromString("1.0+2.0").Compile(pco);
    code.Execute();
}
It turns out that after aspnet_wp reaches about 500 MB, the garbage collector kicks in and cleans out the mess. Memory usage then drops to about 20 MB and steadily starts increasing again during load testing.
So there's no memory 'leak' as such.
