Docker container with SWI-Prolog terminated with fatal error

I'm developing a Spring Boot web application, using SWI-Prolog's JPL interface to call Prolog from Java. In development mode everything runs OK.
When I deploy it to Docker, the first call to JPL through the API runs fine. When I try to call JPL again, the JVM crashes.
I use LD_PRELOAD to point to libswipl.so.
SWI_HOME_DIR is also set.
LD_LIBRARY_PATH is set to point to libjvm.so.
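For reference, a sketch of how these might be exported in the image; the exact paths are hypothetical and depend on where SWI-Prolog and the JDK are installed in the container (compare the env.sh linked in the update below):

# Hypothetical install locations; adjust to your image.
export SWI_HOME_DIR=/usr/lib/swi-prolog
export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server:$LD_LIBRARY_PATH
export LD_PRELOAD=/usr/lib/swi-prolog/lib/x86_64-linux/libswipl.so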
My Controller function:
@PostMapping("/rules/testAPI/")
@Timed
public List<String> insertRule() {
    String use_module_http = "use_module(library(http/http_open)).";
    JPL.init();
    Query q1 = new Query(use_module_http);
    if (!q1.hasNext()) {
        System.out.println("Failed to load HTTP Module");
    } else {
        System.out.println("Succeeded to load HTTP Module");
    }
    return null;
}
Console output
1st Call
Succeeded to load HTTP Module
2nd Call
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f31705294b2, pid=16, tid=0x00007f30d2eee700
#
# JRE version: OpenJDK Runtime Environment (8.0_191-b12) (build 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
# Java VM: OpenJDK 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libswipl.so+0xb34b2] PL_thread_attach_engine+0xe2
#
# Core dump written. Default location: //core or core.16
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
I uploaded the full error log file to Pastebin.
Has anyone faced the same problem? Is there a solution for this?
Note that I also tested with oracle-java-8, but the same error occurs.
UPDATE:
@CapelliC's answer didn't work.

I think I would try to 'consume' the term. For instance:
Query q1 = new Query(use_module_http);
if (!q1.hasNext()) {
    System.out.println("Failed to load HTTP Module");
} else {
    System.out.println("Succeeded to load HTTP Module: " + q1.next().toString());
    // remember q1.close() if there could be multiple solutions
}
or better:
if ((new Query(use_module_http)).oneSolution() == null) ...
or still better:
if ((new Query(use_module_http)).hasSolution() == false) ...

Not a direct answer, because it suggests a different approach. For a long time I ran a setup where a C++ program I wrote wrapped SWI-Prolog the way you're doing with Spring Boot, and it was very difficult to add features to and maintain. About a year ago I switched to a totally different approach: I added an MQTT plugin to SWI-Prolog so my Prolog code could run continuously and respond to and send MQTT messages. Now Prolog can interoperate with other modules in a variety of languages (mostly Java), but everything runs in its own process. This has worked out MUCH better for me, and I've got everything running in Docker containers, including the MQTT broker. I'm not firmly suggesting MQTT (though I like it), just suggesting you consider an approach where Java and Prolog are less tightly coupled.

Most likely it is failing the second time because you are calling JPL.init() again. It should be called only once.
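One way to guarantee a single initialization in a Spring app is to move the call out of the request handler, for instance into a @PostConstruct method. A minimal sketch, assuming the jpl7 API used in the question; the class name is illustrative:

import java.util.List;
import javax.annotation.PostConstruct;
import org.jpl7.JPL;
import org.jpl7.Query;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RuleController {

    @PostConstruct
    public void initJpl() {
        // Runs once when the bean is created, not once per request.
        JPL.init();
    }

    @PostMapping("/rules/testAPI/")
    public List<String> insertRule() {
        Query q1 = new Query("use_module(library(http/http_open)).");
        System.out.println(q1.hasSolution()
                ? "Succeeded to load HTTP Module"
                : "Failed to load HTTP Module");
        return null;
    }
}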

In the end it was a bug in the JPL package. After I contacted the SWI-Prolog developers, they pushed a fix to the SWI-Prolog Git repository, and now the error is gone!
The right configuration for the Docker container to find JPL is in this link: GitHub: env.sh

Related

Debugging Linux Kernel Module

I have built a Linux kernel module that helps migrate a TCP socket from one server to another. The module works perfectly, except that when the importing server tries to close the migrated socket, the whole server hangs and freezes.
I am not able to find the root of the problem; I believe it is something beyond my kernel module code, something I am missing when I recreate the socket on the importing machine and initialize its state. It seems the system enters an endless loop. But when I close the socket from the client side, this problem does not appear at all.
So my question is: what is the appropriate way to debug the kernel module and figure out what is going on and why it is freezing? How do I dump error messages, especially since in my case I cannot see anything; once I close the file descriptor of the migrated socket on the server side, the machine freezes.
Note: I used printk to print all the values, and I could not find anything wrong in the code.
Considering your system is freezing, have you checked whether it is under heavy load while migrating the socket? Have you looked into any sar reports to confirm this? See if you can take a vmcore (after configuring kdump) and use the crash tool to narrow down the problem. First, install and configure kdump; then you may need to add the following lines to /etc/sysctl.conf and run sysctl -p:
kernel.hung_task_panic=1
kernel.hung_task_timeout_secs=300
Next get a vmcore/dump of memory:
echo 'c' > /proc/sysrq-trigger # ===> 1
If you still have access to the terminal, use the sysrq trigger to dump all the kernel thread stack traces into the syslog:
echo 't' > /proc/sysrq-trigger
If your system is hung, try using the keyboard hot keys:
Alt+PrintScreen+'c' ====> same as 1
Other things you may want to try, assuming you haven't already:
1. dump_stack() in your code
2. printk(KERN_ALERT "Hello msg %ld", err); added at the suspect points in the code
3. dmesg -c; dmesg

HHVM 3.1 Stack Overflow

Running HHVM 3.1 on Ubuntu 14.04 Final Beta, using Nginx with FastCGI.
Fatal error: Stack overflow in
I've tried increasing the stack using -vEval.VMStackElms=524288, but it doesn't seem to make a difference. Any ideas?
I checked the version; it is actually HHVM 3.0, which I pulled and built from the 3.1 tag:
HipHop VM 3.0.0-dev (rel) Compiler: heads/HHVM-3.1-0-g3af5bc29494cedc3457f4d60f1afdd603337d08c Repo schema: 86fe165eb703fdba1680d5e43db3f5a3f836e504
I'm testing it with Magento. WordPress seems to work just fine, and it works fine with php5-fpm. It fails when trying to load the Magento config: /magento/app/code/core/Mage/Core/Model/Config.php on line 1407
If any of your database connections are configured with <use/>, change the line you mentioned in Mage/Core/Model/Config.php from
if (!empty($conn->use)) {
to
$use = (string)$conn->use;
if (!empty($use)) {
The "right way" to do this is, of course, to make a copy of that class in your local code pool.
This is nowhere near enough info. I also doubt you are using 3.1 since I didn't release it yet.
Try putting prints in your code to narrow down which function is being called recursively. Does the problem happen on PHP 5?
Removing <use/> in etc/local.xml fixed the problem.

java7 SecureRandom performance on MacOSX

I'm running Tomcat 7 with JDK 7 on MacOSX Mavericks DP3.
Everything was going well, and it took only 500 ms to start up.
But suddenly it slowed down to 35 seconds.
The log messages show SecureRandom is the root cause.
Thanks to Google, I found it's a JRE bug, with the following code to verify:
import java.security.SecureRandom;

class JRand {
    public static void main(String args[]) throws Exception {
        System.out.println("Ok: " +
                SecureRandom.getInstance("SHA1PRNG").nextLong());
    }
}
Yes. Even this simplest code takes 35 seconds.
But it seems that none of the related solutions work for me.
Neither /dev/random nor /dev/urandom is a blocking device on Mac:
cat /dev/random | hexdump -C
outputs very quickly!
When I switch back to JRE 6, generating random numbers is very fast again.
I downloaded the latest jdk8-ea; the problem still exists.
In fact, not only does Tomcat slow down significantly; NetBeans and GlassFish are all affected.
After struggling with it for hours, I gave up at last.
This morning, when I came to the office and plugged in Ethernet, guess what?
It recovered!
So my question is: what is happening behind the scenes? It's really weird.
Thanks.
Haha, resolved.
Get the InetAddress.java source code (you can copy it from your IDE);
modify the method getLocalHost from
String local = impl.getLocalHostName();
to:
String local = "localhost"; // impl.getLocalHostName();
recompile it, and add java.net.InetAddress.class back to JDK/jre/lib/rt.jar.
RESOLVED.
Don't change InetAddress; other code may rely on it. Instead, change sun.security.provider.SeedGenerator::getSystemEntropy() to not use the local IP address. (How secure is that anyway?) As an added bonus, you now become slightly more secure via obscurity :)
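Before patching anything, it's worth confirming that the stall really is in the local hostname lookup that getSystemEntropy() mixes in, rather than in the PRNG itself. A minimal sketch; the class name is made up, and the two timings are what you'd compare:

import java.net.InetAddress;
import java.security.SecureRandom;

class SeedDelayCheck {
    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        // If this call alone takes ~35 s, the hostname lookup is the culprit.
        System.out.println("getLocalHost(): " + InetAddress.getLocalHost()
                + " in " + (System.nanoTime() - t0) / 1_000_000 + " ms");

        long t1 = System.nanoTime();
        long r = SecureRandom.getInstance("SHA1PRNG").nextLong();
        System.out.println("nextLong(): " + r
                + " in " + (System.nanoTime() - t1) / 1_000_000 + " ms");
    }
}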

Debugging Node.js processes with cluster.fork()

I've got some code that looks very much like the sample in the Cluster documentation at http://nodejs.org/docs/v0.6.0/api/cluster.html, to wit:
var cluster = require('cluster');
var server = require('./mycustomserver');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    var i;
    // Master process
    for (i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    cluster.on('death', function (worker) {
        console.log('Worker ' + worker.pid + ' died');
    });
} else {
    // Worker process
    server.createServer({port: 80}, function (err, result) {
        if (err) {
            throw err;
        } else {
            console.log('Thread listening on port ' + result.port);
        }
    });
}
I've installed node-inspector and tried using both it and the Eclipse V8 plugin detailed at https://github.com/joyent/node/wiki/Using-Eclipse-as-Node-Applications-Debugger to debug my application, but it looks like I can't hook a debugger up to the forked cluster instances to put breakpoints in the interesting server logic; I can only debug the part of the application that spawns the cluster processes. Does anybody know if this is in fact possible, or am I going to have to refactor my application to use only a single thread in debugging mode?
I'm a Node.js newbie, so I'm hoping there's something obvious I'm missing here.
var fixedExecArgv = [];
fixedExecArgv.push('--debug-brk=5859');
cluster.setupMaster({
    execArgv: fixedExecArgv
});
Credit goes to Sergey's post.
I changed my server.js to fork only one worker, mainly to test this, then added the code above before the forking. This fixed the debugging issue for me. Thank you, Sergey, for explaining and providing the solution!
For those who wish to debug child processes in VS Code, just add this to the launch.json configuration:
"autoAttachChildProcesses": true
https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_remote-debugging
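For context, a minimal launch.json entry might look like this; the program path and name are placeholders:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Debug clustered app",
            "program": "${workspaceFolder}/server.js",
            "autoAttachChildProcesses": true
        }
    ]
}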
I already opened a ticket about this here: https://github.com/dannycoates/node-inspector/issues/130
Although it's not fixed yet, there's a workaround:
FWIW: The reason, I suspect, is that the node debugger needs to bind to the debug port (default: 5858). If you are using Cluster, I am guessing the master/controller binds first and succeeds, causing the bind in the children/workers to fail. While a port can be supplied to node --debug=N, there seems to be no easy way to do this when node is invoked within Cluster for the worker (it might be possible to programmatically set process.debug_port and then enable debugging, but I haven't got that working yet). Which leaves a couple of options:
1) Start node without the --debug option and, once it is running, find the pid of the worker process you want to debug/profile and send it a USR1 signal to enable debugging.
2) Write a wrapper for node that calls the real node binary with --debug set to a unique port each time.
There are possibly options in Cluster that let you pass such an arg as well.
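For example, option 1 might look like this from a shell (the pid is whatever ps reports for the worker you care about):

ps aux | grep node          # find the worker pid
kill -USR1 <worker-pid>     # tell V8 to enable the debugger in that process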
If you use VS Code to debug, you need to specify the port and "autoAttachChildProcesses": true in the launch.json file.
If you debug directly in DevTools, you need to add a connection to the corresponding port in the console.
For anyone looking at this in 2018+, no startup arguments are necessary.
From this Github issue:
Just a time saver for anyone who might have been in the same boat as me: the Node.js V8 --inspector Manager (NiM) seems to introduce this issue when it otherwise wouldn't be present. I spent about an hour banging my head before disabling the Chrome plugin and discovering that everything worked fine when opening from chrome://inspect.
I also spent hours reading GitHub posts, tweaking the settings of gulp-typescript and gulp-sourcemaps, etc., only to find that the plugin was the issue. Also worth noting: I had to add port N+1 to the remote targets of chrome://inspect, so localhost:9230, to debug my worker process.
Use the --inspect flag (for Node versions 7.7.0 and higher) to debug the Node.js process.
If you want more information on how to debug cluster processes and set up the Chrome debugger tools for Node.js, please follow my post here.
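As a quick illustration (server.js is a placeholder entry point): the master binds the given inspector port and each forked worker is auto-assigned the next one, which matches the N+1 observation above:

node --inspect=9229 server.js
# master debugger on 9229; first worker on 9230, and so on.
# Open chrome://inspect and add localhost:9230 as a remote target to reach the worker.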

How to use Java Service Wrapper for our java application

I'm trying to add a scheduler to my application. I use Spring and its Quartz support.
I have tested my component and it runs perfectly.
My main method is:
public class Main {
    public static void main(String[] args) {
        new ClassPathXmlApplicationContext("application-context.xml");
    }
}
I use wrapper-windows-x86-32-3.5.7; I configured wrapper.conf and ran it from the console using DemoApp.bat wrapper.
It works.
But when I install the service, I get the error message Startup failed: Timed out waiting for a signal from the JVM.
After it repeats 5 times, I get the error messages
JVM did not exit on request, terminated
There were 5 failed launches in a row, each lasting less than 300 seconds. Giving up.
Thanks for the help.
Because it is working fine for you in a console but not as a service, this is most likely a problem with the environment of the SYSTEM user. The most common cause is not being able to locate the java binary. The cause should be fairly obvious if you look in the wrapper.log file.
The default setting for the java binary is:
wrapper.java.command=java
This will cause it to be found on the PATH. To use a JAVA_HOME location, try the following:
wrapper.java.command=%JAVA_HOME%/bin/java
Then make sure you have declared the JAVA_HOME variable as a SYSTEM-WIDE variable, not just for your current user account.
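If the service environment can't be relied on to define JAVA_HOME at all, an absolute path sidesteps environment resolution entirely. An example (the path below is illustrative, not a recommendation):

wrapper.java.command=C:/Program Files/Java/jre7/bin/java
# The wrapper.log mentioned above is controlled by:
wrapper.logfile=../logs/wrapper.log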
Cheers,
Leif
