I'm running Tomcat 7 with JDK 7 on Mac OS X Mavericks DP3.
Everything went well at first, and startup took only 500 ms.
But suddenly it slowed down to 35 seconds.
The log messages showed SecureRandom was the root cause.
Thanks to Google, I found it's a JRE bug, and used the following code to verify it:
import java.security.SecureRandom;

class JRand {
    public static void main(String args[]) throws Exception {
        System.out.println("Ok: " +
            SecureRandom.getInstance("SHA1PRNG").nextLong());
    }
}
Yes, even this simplest of programs takes 35 seconds.
But it seems that none of the related solutions work for me.
Neither /dev/random nor /dev/urandom blocks on a Mac:
cat /dev/random | hexdump -C
outputs very quickly!
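For reference, these are the commonly suggested workarounds (none of which helped in my case); the jar name is just a placeholder:

java -Djava.security.egd=file:/dev/./urandom -jar myapp.jar

or, equivalently, setting this in JRE_HOME/lib/security/java.security:

securerandom.source=file:/dev/./urandom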
When I switched back to JRE 6, it was fast to generate randoms again.
I downloaded the latest JDK 8 EA build; the problem still existed.
In fact, it's not only Tomcat that slows down significantly: NetBeans and GlassFish are all affected.
After struggling with it for hours, I gave up at last.
This morning, when I came to the office and plugged in Ethernet, guess what?
It recovered!
So my question is: what is happening behind the scenes? It's really weird.
Thanks.
Haha, resolved.
Get the InetAddress.java source code (you can copy it from your IDE);
modify the getLocalHost method from
String local = impl.getLocalHostName();
to:
String local = "localhost"; // impl.getLocalHostName();
recompile it, and add java.net.InetAddress.class back into JDK/jre/lib/rt.jar.
RESOLVED.
Don't change InetAddress; other code may rely on it. Instead, change sun.security.provider.SeedGenerator::getSystemEntropy() to not use the local IP address. (How secure is that anyway?) As an added bonus, you now become slightly more secure via obscurity :)
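If you want to confirm that the blocking call really is the local hostname lookup, a minimal sketch like this (my own, not from the JRE sources) shows the delay directly; run it with and without the network attached:

import java.net.InetAddress;

class HostnameTimer {
    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // This is the lookup SeedGenerator.getSystemEntropy() performs;
        // if the local hostname cannot be resolved, it blocks until the
        // resolver times out.
        InetAddress local = InetAddress.getLocalHost();
        System.out.println(local + " resolved in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}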
Related
I am using the AT command binary (provided by Espressif) to interface with my Wi-Fi application. In order to identify the device on the network, I changed the hostname to a known name, but when I scan the network, the hostname is still "Espressif" instead of my own hostname.
Does anyone know how to fix that? I actually think it's an error in the AT command binary.
I've got the same issue.
Code looks like:
#include <Arduino.h>
#include "WiFi.h"

const char* ssid = "mySSID";         // network credentials (placeholders)
const char* password = "myPassword";

void setup() {
    Serial.begin(115200);
    // Start the WiFi connection ...
    WiFi.enableSTA(true);
    WiFi.begin(ssid, password);
    // TODO Hostname setting does not work. Always shows up as "espressif"
    if (WiFi.setHostname("myHostname")) {
        Serial.printf("\nHostname set!\n");
    } else {
        Serial.printf("\nHostname NOT set!\n");
    }
}

void loop() {}
There is a bug in the Espressif code. The workaround is to reset the WiFi before setting the hostname and starting the WiFi:
WiFi.disconnect(true);
WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE);
WiFi.setHostname(hostname);
Also note that OTA may make the issue even harder to fix. If you are using mDNS and OTA, please add the following code (after the WiFi setup) to ensure that the hostname gets correctly set:
#include <ESPmDNS.h>    // for MDNS
#include <ArduinoOTA.h> // for ArduinoOTA

MDNS.begin(hostname); // hostname: the same const char* passed to setHostname()
ArduinoOTA.setHostname(hostname);
ArduinoOTA.begin();
For details, please read issue 2537 in the Espressif GitHub repo.
Try waiting for the WiFi to finish setting up. The easiest (though never the best) way is with delay(150).
After literally more than 10 hours of research, trying to pinpoint, analyze and/or fix the issue one way or another, I gave up and accepted the workaround. You can find it here.
In short, what I do and what works for me is:
WiFi.disconnect();
WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE); // This is a MUST!
if (!WiFi.setHostname("myFancyESP32")) {
    Serial.println("Hostname failed to configure");
}
WiFi.begin(ssid, password);
This is really frustrating, but for the time being it seems that the issue comes from the ESP-IDF, and unless it is fixed there, it will not work.
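For what it's worth, the pieces above combine into a complete minimal sketch (credentials and hostname are placeholders):

#include <Arduino.h>
#include "WiFi.h"

const char* ssid = "mySSID";          // placeholder credentials
const char* password = "myPassword";

void setup() {
    Serial.begin(115200);
    // Reset the WiFi state first: setHostname() only takes effect when
    // it is called after a clean disconnect/config and before begin().
    WiFi.disconnect(true);
    WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE);
    if (!WiFi.setHostname("myFancyESP32")) {
        Serial.println("Hostname failed to configure");
    }
    WiFi.begin(ssid, password);
}

void loop() {}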
I'm developing a Spring Boot web application, using SWI-Prolog's JPL interface to call Prolog from Java. In development mode everything runs OK.
When I deploy it to Docker, the first call to JPL through the API runs fine. When I try to call JPL again, the JVM crashes.
I use LD_PRELOAD to point to libswipl.so.
SWI_HOME_DIR is set as well.
LD_LIBRARY_PATH is set to point to libjvm.so.
My controller method:
@PostMapping("/rules/testAPI/")
@Timed
public List<String> insertRule() {
    String use_module_http = "use_module(library(http/http_open)).";
    JPL.init();
    Query q1 = new Query(use_module_http);
    if (!q1.hasNext()) {
        System.out.println("Failed to load HTTP Module");
    } else {
        System.out.println("Succeeded to load HTTP Module");
    }
    return null;
}
Console output
1st Call
Succeeded to load HTTP Module
2nd Call
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f31705294b2, pid=16, tid=0x00007f30d2eee700
#
# JRE version: OpenJDK Runtime Environment (8.0_191-b12) (build 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
# Java VM: OpenJDK 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libswipl.so+0xb34b2] PL_thread_attach_engine+0xe2
#
# Core dump written. Default location: //core or core.16
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
I uploaded the error log file to Pastebin: click here.
Has anyone faced the same problem? Is there a solution for this?
Note that I checked with Oracle Java 8 as well, but the same error occurs.
UPDATE:
@CapelliC's answer didn't work.
I think I would try to 'consume' the term. For instance
Query q1 = new Query(use_module_http);
if (!q1.hasNext()) {
    System.out.println("Failed to load HTTP Module");
} else {
    System.out.println("Succeeded to load HTTP Module: " + q1.next().toString());
    // remember q1.close() if there could be multiple solutions
}
or better
if ((new Query(use_module_http)).oneSolution() == null) ...
or still better
if ((new Query(use_module_http)).hasSolution() == false) ...
Not a direct answer because it suggests a different approach, but for a long time I was running a setup where a C++ program I wrote would wrap SWI-Prolog the way you're doing with Spring Boot and it was very difficult to add features to/maintain. About a year ago I went to a totally different approach where I added a MQTT plugin to SWI-Prolog so my Prolog code could run continuously and respond to and send MQTT messages. So now Prolog can interoperate with other modules in a variety of languages (mostly Java), but everything runs in its own process. This has worked out MUCH better for me and I've got everything running in Docker containers - including the MQTT broker. I'm not firmly suggesting MQTT (though I like it), just to consider the approach of having Java and Prolog less tightly coupled.
Most likely the reason why it is failing the second time is because you are calling JPL.init() again. It should be called only once.
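A minimal sketch of one way to guard it (the helper class and names are my own, not part of JPL; the package name depends on your JPL version):

import java.util.concurrent.atomic.AtomicBoolean;
import org.jpl7.JPL; // older releases use the jpl package instead

class JplBootstrap {
    private static final AtomicBoolean STARTED = new AtomicBoolean(false);

    // Call this from the controller instead of JPL.init(); the guard
    // makes sure the native Prolog engine is initialized exactly once
    // per JVM, no matter how many requests come in.
    static void ensureInitialized() {
        if (STARTED.compareAndSet(false, true)) {
            JPL.init();
        }
    }
}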
Finally: it was a bug in the JPL package. After I contacted the SWI-Prolog developers, they pushed a fix to the SWI-Prolog Git repository, and now the error is gone!
The right configuration for the Docker container to be able to find JPL is in this link: GitHub: env.sh
I'm trying to do this:
import groovy.sql.Sql

def sql = Sql.newInstance(
    url: 'jdbc:sqlserver://localhost\\myDB',
    user: 'server\\user', // this I don't think I need because of SSPI
    password: 'password',
    driver: 'com.microsoft.sqlserver.jdbc.SQLServerDriver',
    SSPI: 'true'
)
The problem I'm having is that this connection just times out. I can ping the machine. I can also connect to the database with Management Studio, logged in as my SSPI user (or whatever you call it; I start Management Studio as a different user).
So I've tried the same with SoapUI: started the program as a different user, but I still time out when I initiate the connection. So something is very wrong with my connection string, and any help would be appreciated.
P.S. Yes, I don't know what's up with the backslash after the server URL; as far as I can tell it introduces a SQL Server named instance. If I don't use it, I get a message that I'm on the incorrect version.
And then we found the answer... First of all, I had the wrong JDBC driver installed. You need to head over to Microsoft to get the real deal:
https://www.microsoft.com/en-us/download/details.aspx?id=11774
Then you need to unpack it and place the 4 or 4.1 version in the bin directory of SoapUI. (You are apparently supposed to use lib/ext, but that didn't work for me.)
Then, since we are trying to use SSPI (Windows Authentication) to connect to the SQL Server, you need to take sqljdbc_auth.dll from the driver's enu/auth folder and place it somewhere on your PATH, or in SoapUI's lib folder. Remember to use the 32-bit DLL for 32-bit SoapUI! I did not, since my system is 64-bit...
After this, I used the string below. Now you have the setup correct, so it should work fine, as long as you remember to start SoapUI as the correct Windows user. (Shift + right-click - run as different user - use the same user the SQL Server was started with.)
Again, I wasn't completely aware of this from the start (yes, total newbie here) and it failed.
Finally, when you have done all this, this is the string that works - and probably a lot of derivatives, since the failing parts here were the driver and the DLL.
def sql = Sql.newInstance("jdbc:sqlserver://localhost;Database=myDB;integratedSecurity=true", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
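Once connected, a quick smoke test along these lines (the query is only an example) verifies that the integrated-security login actually works:

def sql = Sql.newInstance(
    "jdbc:sqlserver://localhost;Database=myDB;integratedSecurity=true",
    "com.microsoft.sqlserver.jdbc.SQLServerDriver")
// List a few tables to prove both the connection and the authentication.
sql.eachRow("SELECT TOP 5 name FROM sys.tables") { row ->
    println row.name
}
sql.close()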
I have Symfony2 running on an Ubuntu Server 12.04 (64-bit) VM (VirtualBox). The host is a MacBook Pro. For some reason I am getting really long request times in development mode (app_dev.php). I know it's slower in dev mode, but I'm talking 5-7 seconds per request (sometimes even slower). On my Mac I get request times of 200 ms or so in development mode.
After looking at the timeline in the Symfony2 profiler, I noticed that ~95% of the request time is "initialization time". What is this? What are some reasons it could be so slow?
This issue only applies to Symfony2 in dev mode, not any other sites I'm running on the VM, and not even to Symfony2 in production mode.
I saw this (http://stackoverflow.com/questions/11162429/whats-included-in-the-initialization-time-in-the-symfony2-web-profiler), but it doesn't seem to answer my questions.
I had 5-30 second responses from Symfony2 by default. Now it's ~500 ms in the dev environment.
I modified the following things in php.ini (consolidated in the snippet after this list):
set realpath_cache_size = 4M (or more)
set realpath_cache_ttl = 7200
disabled XDebug completely (verify with phpinfo)
enabled and configured OPcache (or APC) correctly
restarted Apache so that php.ini is reloaded
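Consolidated, those php.ini changes look roughly like this (a sketch; the OPcache line is a typical setting rather than a value from the list above):

realpath_cache_size = 4M
realpath_cache_ttl = 7200
opcache.enable = 1
; disable XDebug by commenting out its extension line:
; zend_extension = xdebug.so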
And voilá, responses went under 2 secs in dev mode!
Before: 6779 ms
After: 1587 ms
Symfony2 reads classes from thousands of files, and that's a slow process. With a small PHP realpath cache, file paths have to be resolved one by one on every request in the dev environment whenever they are not already in PHP's realpath cache. The default realpath cache is too small for Symfony2. In prod this is not a problem, of course.
Cache metadata:
Caching the metadata (e.g. mappings) is also very important for a further performance boost:
doctrine:
    orm:
        entity_managers:
            default:
                metadata_cache_driver: apc
                query_cache_driver: apc
                result_cache_driver: apc
You need to enable APCu for this. APCu is APC without the bytecode cache, which is fine because OPcache already does the opcode caching. OPcache has been built in since PHP 5.5.
---- After: 467 ms ----
(in prod environment the same response is ~80 ms)
Please note, this project uses 30+ bundles, has tens of thousands of lines of code and almost a hundred of its own services, so 0.5 s is quite good on a local Windows environment with just a few simple optimizations.
I figured out the cause of the problem (and it's not Symfony2). For some reason, on the Ubuntu VM, the modification times on certain files are incorrect (i.e. in the future, etc.). When Symfony2 checks these times using filemtime() against its registry, it decides the cache is no longer fresh and rebuilds the whole thing. I haven't been able to figure out why this happens yet.
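A quick way to spot the offending files is a small script like this (my own sketch; the app/cache path is an assumption for a standard Symfony2 layout):

<?php
// Scan the Symfony cache directory for files whose mtime is in the future.
$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('app/cache', FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
    if ($file->isFile() && $file->getMTime() > time()) {
        echo $file->getPathname(), ' is ',
            $file->getMTime() - time(), " s in the future\n";
    }
}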
I also needed to disable xdebug (v2.2.21) to stop Apache from hitting its max timeout on my MacBook. It was installed using MacPorts:
sudo port install php54-xdebug
With xdebug enabled, every page ran out the max loading time, and a fatal error about exceeding the max timeout was dispatched. With it disabled, everything loads fine in a reasonable, expected time. I noticed this because under MAMP, where xdebug is not enabled by default, Apache works as fast as usual. I may switch to another debugger; that's a pity, because xdebug worked fine before.
Config:
MacOSX 10.6.8
macports 2.1.3
Apache 2.2.24
php 5.4
We have the same problem.
Here we see 10 seconds and more for every request.
If I remove the following lines in bootstrap.php.cache, all times return to a normal state (298 ms):
foreach ($meta as $resource) {
    if (!$resource->isFresh($time)) {
        return false;
    }
}
It's possible that we have wrong modification times, but we don't know how to fix them. Does somebody know a solution?
As said at https://stackoverflow.com/a/12967229/6108843, the reason for this behavior might be the Ubuntu VM settings. You should sync the date and time between the host and guest OS, as explained at https://superuser.com/questions/463106/virtualbox-how-to-sync-host-and-guest-time.
The file modification date changes to the host's value when you upload a file to the VM via FTP. That's why filemtime() returns the wrong value.
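For example, with the Guest Additions installed, you can tell the VBoxService time-sync to correct even large offsets (the VM name and threshold are illustrative):

VBoxManage guestproperty set "UbuntuVM" "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 1000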
You can move APP/var/cache to /dev/shm/YourAppName/var/cache. But it's good to have the built container in local files too, for IDE autocomplete and code validation. In app/AppKernel.php:
public function getCacheDir()
{
    return $this->getVarOrShmDir('cache/' . $this->getEnvironment());
}

public function getLogDir()
{
    return $this->getVarOrShmDir('logs');
}

private function getVarOrShmDir($dir)
{
    $result = dirname(__DIR__) . '/var/' . $dir;
    if (
        in_array($this->environment, ['dev', 'test'], true) &&
        empty($_GET['warmup']) && // to force using the real directory, add ?warmup=1 to the URL
        is_dir($result) && // the first time, create the real directory; later use shm
        file_exists('/bin/mount') && shell_exec('mount | grep vboxsf') // only for VirtualBox
    ) {
        $result = '/dev/shm/' . 'YourAppName' . '/' . $dir . '/' . $this->getEnvironment();
    }
    return $result;
}
I disabled xdebug, and it resulted in a decrease in loading time from 17 s (yeah...) to 0.5 s.
I had problems as well with slow page loads in development, which can be extremely frustrating when you're tweaking CSS or something similar.
After a bit of digging I found that, for me, the problem was caused by Assetic, which was recompiling all assets on every page load:
http://symfony.com/doc/current/cookbook/assetic/asset_management.html#dumping-asset-files-in-the-dev-environment
By disabling the use of the Assetic controller I was able to drastically improve my page load time. However, as the link above states, this comes at the cost of having to regenerate your assets whenever you change them (or setting a watch on them).
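Concretely, the cookbook's change amounts to this in app/config/config_dev.yml (per the linked page, you also remove Assetic's route import from routing_dev.yml):

assetic:
    use_controller: false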
In app_dev, all the caches and autoloading start from scratch, and what I found to be slowest in dev is the ORM. I shy away from using the ORM and focus mainly on the DBAL because of it, although I probably shouldn't, since the ORM is used quite a bit in SF2. My guess is that the ORM is what's slowing you down most in dev. Look at the difference between your dev config and your prod config. Some tweaks to your dev config can make development much snappier and more enjoyable; just try to be aware of what you're doing. For example, turning off the Twig controller and then modifying a lot of templates will be kind of frustrating, because you'll need to keep clearing your cache. But like you mentioned, it's dev-only, and when it's time to go live, Symfony will speed up for you.
I'm getting an error while adding my public key; it shows this error code:
"Error updating settings on gear: 01779aa6c3e04c71be82fbaa10662fcf with status: -1 and output: "
Any idea why this shows up every time I register my public key?
We believe this is a problem that arose in our most recent update on Tuesday, and we are now investigating. When you add an SSH key, we copy it to each of your applications (so git will work); it looks like the copy process started failing.
EDIT: We fixed an issue in production that was affecting a small number of users and resulted in this symptom. Please let us know if the issue is not fixed, and we'll investigate further.
I am getting the same error. It looks like it's a problem on their internal servers.
EDIT: It seems you can't put security on the applications that are available for test in OpenShift, which makes sense. Remove the test applications that you got from OpenShift.
I got it solved. That number, 01779aa6c3e04c71be82fbaa10662fcf, is an application you currently have in your domain. I removed all the applications in there. Make backups first, then clean your domain. Update your public key again, and I am 100% sure it will work.
Please do this with care, and back up your application first. To remove an application:
rhc app destroy -a {appName} -d
It was my silly mistake: I just had to add my private key at the time of authentication in PuTTY.