Passenger crashing on startup

I'm a bit of a newbie to this. I'm updating Redmine / Ruby 2.4.0 / Passenger / Apache 2.4 on SLES Linux, and Passenger (5.3.4) seems to be crashing on startup. Can anyone work out what is going on from this crash dump? Here is a sample of the dump file.
passenger-config validate-install reports 'Everything looks good' for every check.
[ pid=20304, timestamp=1534916330 ] Process aborted! signo=SIGSEGV(11), reason=SI_KERNEL, si_addr=0x0, randomSeed=1534916327
[ pid=20304 ] Crash log dumped to /var/tmp/passenger-crash-log.1534916330
[ pid=20304 ] Date, uname and ulimits:
Wed Aug 22 15:08:50 ACST 2018
Linux 3.0.101-0.47.71-default #1 SMP Thu Nov 12 12:22:22 UTC 2015 (b5b212e) x86_64 x86_64
core file size (blocks, -c) 1
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63924
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 6964728
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63924
virtual memory (kbytes, -v) 13263520
file locks (-x) unlimited
[ pid=20304 ] Phusion Passenger version: 5.3.4
[ pid=20304 ] libc backtrace available!
--------------------------------------
[ pid=20304 ] Backtrace with 34 frames:
ERROR: cannot execute '/home/redmine/.rvm/gems/ruby-2.4.0/wrappers/ruby "/home/redmine/.rvm/gems/ruby-2.4.0/gems/passenger-5.3.4/src/helper-scripts/backtrace-sanitizer.rb"' for sanitizing the backtrace, writing to stderr directly...
Passenger core[0x5f61b0]
Passenger core[0x5f69e4]
Passenger core[0x5f72ab]
Passenger core[0x5f7cf1]
/lib64/libpthread.so.0(+0xf850)[0x7faa292ca850]
/var/appdevelop/openssl/lib/libcrypto.so.1.1(X509_PUBKEY_get0+0x20)[0x7faa287732e0]
/var/appdevelop/openssl/lib/libcrypto.so.1.1(X509_PUBKEY_get+0x6)[0x7faa287733c6]
/usr/lib64/libssl.so.0.9.8(+0x43550)[0x7faa27641550]
/usr/lib64/libssl.so.0.9.8(SSL_CTX_use_PrivateKey_file+0x92)[0x7faa276417f2]
/usr/lib64/libcurl.so.4(+0x2c7bb)[0x7faa28c597bb]
/usr/lib64/libcurl.so.4(+0x2e466)[0x7faa28c5b466]
/usr/lib64/libcurl.so.4(+0x310a4)[0x7faa28c5e0a4]
/usr/lib64/libcurl.so.4(+0x31312)[0x7faa28c5e312]
/usr/lib64/libcurl.so.4(+0x4cb31)[0x7faa28c79b31]
/usr/lib64/libcurl.so.4(+0x134be)[0x7faa28c404be]
/usr/lib64/libcurl.so.4(+0x27128)[0x7faa28c54128]
/usr/lib64/libcurl.so.4(+0x2a1aa)[0x7faa28c571aa]
/usr/lib64/libcurl.so.4(+0x2a334)[0x7faa28c57334]
/usr/lib64/libcurl.so.4(+0x3c062)[0x7faa28c69062]
/usr/lib64/libcurl.so.4(+0x3c2fb)[0x7faa28c692fb]
/usr/lib64/libcurl.so.4(curl_easy_perform+0x148)[0x7faa28c69f0d]
Passenger core[0x76b5db]
Passenger core[0x79e954]
Passenger core[0x7c577c]
Passenger core[0x6d2b40]
Passenger core[0x6d8b09]
Passenger core[0x6d8b46]
Passenger core[0x6d8b65]
Passenger core[0x5b71ab]
Passenger core[0x8f3733]
Passenger core[0x59a354]
Passenger core[0x8e6291]
/lib64/libpthread.so.0(+0x7806)[0x7faa292c2806]
/lib64/libc.so.6(clone+0x6d)[0x7faa27b669bd]
--------------------------------------
[ pid=20304 ] Dumping additional diagnostical information...
--------------------------------------
### Backtraces
Thread 'Main thread' (0x7faa298b8760, LWP 20304):
in 'void waitForExitEvent()' (CoreMain.cpp:1085)
in 'void mainLoop()' (CoreMain.cpp:973)
in 'int runCore()' (CoreMain.cpp:1216)
Thread 'Pool analytics collector' (0x7faa298b5700, LWP 20304):
in 'static void Passenger::ApplicationPool2::Pool::collectAnalytics(Passenger::ApplicationPool2::PoolPtr)' (AnalyticsCollection.cpp:52)
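One detail that stands out in the backtrace above is that two different OpenSSL builds appear in the same process, /var/appdevelop/openssl/lib/libcrypto.so.1.1 and /usr/lib64/libssl.so.0.9.8, with the crash happening inside libcurl's private-key/certificate handling. A hedged way to check which SSL libraries each component actually links against (the Passenger agent path below is only a guess based on the gem path in the log; adjust it for your install):
# see which libssl/libcrypto the system libcurl resolves to
ldd /usr/lib64/libcurl.so.4 | grep -Ei 'ssl|crypto'
# and the same for the Passenger agent binary (path is hypothetical; locate yours first)
ldd /home/redmine/.rvm/gems/ruby-2.4.0/gems/passenger-5.3.4/buildout/support-binaries/PassengerAgent | grep -Ei 'ssl|crypto'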

Related

Flask-SocketIO client connections failing at just over 1000 connections

Ubuntu 20.04 on EC2 instance
Flask-SocketIO 5.1.1
python-engineio 4.2.1
python-socketio 5.4.0
simple-websocket 0.3.0
I'm doing some load testing on this server to maximise the number of client connections I can get out of it. The server's Python code includes the following main elements:
from flask import Flask, request
from flask_socketio import SocketIO, emit, join_room, leave_room, send

app = Flask(__name__)
app.config['SECRET_KEY'] = appConfig.SECRET_KEY  # appConfig is the poster's own config module
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on('json')
def handle_json(json):
    emit("json", json, broadcast=True, include_self=False)

if __name__ == '__main__':
    socketio.run(
        app,
        host='0.0.0.0',
        port=443,
        keyfile='MYPATH/privkey.pem',
        certfile='MYPATH/fullchain.pem',
        max_size=100000
    )
Note the max_size argument in socketio.run(), which is passed through to eventlet's max_size parameter.
I have set the file descriptor limits in /etc/security/limits.conf (edited with sudo nano) as follows:
* soft nproc 100000
* hard nproc 100000
* soft nofile 100000
* hard nofile 100000
I can confirm this with ulimit -a which gives output:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1806
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 100000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
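Note that ulimit -a in an interactive shell shows that shell's limits, which may not match what the already-running server process actually got, since limits.conf is applied through PAM at login and may not cover processes started another way. A hedged way to check the live process directly (the pgrep pattern is a placeholder; adjust it to your start command):
# find the Socket.IO server process and read its effective limits from /proc
PID=$(pgrep -f "python.*myapp" | head -n 1)   # "myapp" is a placeholder for the real script name
grep -E 'Max open files|Max processes' /proc/$PID/limits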
I am load testing with the following code (reduced for brevity), run from my local machine, which creates 2000 connections:
import socketio

URL = "MY_SOCKETIO_SERVER_URL"
clients = []

for k in range(2000):
    sio = socketio.Client()
    sio.connect(URL)
    clients.append(sio)

for client in clients:
    client.disconnect()
When I run this, around 1176 connections are made successfully; after that point new connections fail to be created, /usr/lib/snapd/snapd shows up as a high-CPU process in htop, and after a few seconds the Socket.IO Python app terminates, printing the message "Killed".
I need to support a minimum of around 5000 connections on this server. I understand that this should be possible, but how do I do it?
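As an aside, a process that dies with just "Killed" on the console has usually been terminated by the kernel OOM killer rather than by the application itself; a quick, hedged way to confirm that on the server:
# look for OOM-killer activity in the kernel log right after the app dies
dmesg | grep -iE 'out of memory|oom' | tail -n 20
# or, via the systemd journal on Ubuntu 20.04
sudo journalctl -k --since "10 minutes ago" | grep -i oom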

npm run dev aborts on shared hosting server

I'm trying to set up a Git repository that contains a Laravel project on a server that uses cPanel. After copying the missing libraries and dependencies from both composer.json and package.json, the project asks me to run npm run dev in order to create the mix manifest file. However, whenever I enter that command this error keeps coming up:
> # dev /home3/regioye5/repositorios/region-admin
> npm run development
> # development /home3/regioye5/repositorios/region-admin
> mix
node[474]: ../src/node_platform.cc:61:std::unique_ptr<long unsigned int> node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start(): Assertion `(0) == (uv_thread_create(t.get(), start_thread, this))' failed.
1: 0xa04200 node::Abort() [node]
2: 0xa0427e [node]
3: 0xa7429e [node]
4: 0xa74366 node::NodePlatform::NodePlatform(int, v8::TracingController*) [node]
5: 0x9d1ae6 node::InitializeOncePerProcess(int, char**) [node]
6: 0x9d1d21 node::Start(int, char**) [node]
7: 0x7fbfeb70d555 __libc_start_main [/lib64/libc.so.6]
8: 0x9694cc [node]
Aborted
I've been looking for an answer on the internet, but nobody seems to have had this issue before. I read in other posts that it may be related to the process limits or something like that, so here is the result of ulimit -a on the server:
core file size (blocks, -c) 0
data seg size (kbytes, -d) 800000
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 178728
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 800000
open files (-n) 100
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 15240
cpu time (seconds, -t) unlimited
max user processes (-u) 25
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
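For context, the assertion that aborts node here is a failed uv_thread_create(), and the limits above show max user processes capped at 25, so node may simply be unable to spawn its worker threads on this shared host. A hedged thing to try in the same shell before the build (the host's hard limit may not allow it):
# try to raise the per-session process/thread cap, then re-check and re-run the build
ulimit -u 100 2>/dev/null || echo "raising nproc is not permitted on this account"
ulimit -u
npm run dev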
I visited the Laravel Mix documentation and found the following command. For me it worked like a charm!
node_modules/.bin/webpack --config=node_modules/laravel-mix/setup/webpack.config.js
It appears to go straight to the library and have it use the config file that contains the Mix settings. Hope it works for you.

What are the .out files in the Hadoop logs folder? Is it safe to delete them?

I manage a small, fully distributed Hadoop cluster, and I was doing my routine log cleanup and inspection. I see a bunch of files with the .out extension in the {HADOOP_HOME}/logs path that I configured. There are several, such as:
hadoop-<my-system-name>-namenode-<my-system-name>.out
hadoop-<my-system-name>-namenode-<my-system-name>.out.1
hadoop-<my-system-name>-namenode-<my-system-name>.out.2
hadoop-<my-system-name>-datanode-<my-system-name>.out
hadoop-<my-system-name>-historyserver-<my-system-name>.out
hadoop-<my-system-name>-historyserver-<my-system-name>.out.2
hadoop-<my-system-name>-historyserver-<my-system-name>.out.3
hadoop-<my-system-name>-resourcemanager-<my-system-name>.out
hadoop-<my-system-name>-resourcemanager-<my-system-name>.out.1
hadoop-<my-system-name>-secondarynamenode-<my-system-name>.out
hadoop-<my-system-name>-secondarynamenode-<my-system-name>.out.1
hadoop-<my-system-name>-secondarynamenode-<my-system-name>.out.2
etc. etc. etc.
When I look at one of them with an editor, such as the hadoop-<my-system-name>-namenode-<my-system-name>.out.1 file, I get:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 514997
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 16384
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 8092
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
What are these files? Do they serve a purpose, or can they be deleted?
As with all good applications, logs serve a great purpose: finding out what is happening with your service. You should probably be shipping the logs into something like Elasticsearch/Solr/Graylog to search and alert on them.
Anything that ends in a number can be safely deleted.
They are managed by the log4j.properties RollingFileAppender that is started with Hadoop.
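If the goal is just to reclaim space from the rotated copies, a hedged cleanup along the lines of the answer above (do the dry run first; the path is the {HADOOP_HOME}/logs directory from the question):
# dry run: list the rotated .out files that would be removed
find "$HADOOP_HOME/logs" -name '*.out.[0-9]*' -print
# delete them once the list looks right
find "$HADOOP_HOME/logs" -name '*.out.[0-9]*' -delete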

MongoDB insertion failure "error inserting documents: new file allocation failure"

I used a bash script to do the insertion:
for i in *.json
do
    mongoimport --db testdb --collection test --type json --file "$i" --jsonArray
done
Now my database testdb is 5.951GB and the terminal keeps giving me the error
error inserting documents: new file allocation failure
How much data can I hold in one collection? What is the best way for me to handle this? I currently have 20GB worth of data, and another 40GB will be added.
-UPDATE-
Here's my ulimit status:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31681
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 31681
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Mongo can handle billions of documents in one collection, but the maximum document size is 16MB.
When you create a collection you can set its size (this example creates a capped collection):
db.createCollection( "collection-name", { capped: true, size: 100000 } )
Mongo provides a bulk API if you have to insert multiple documents into a collection: bulk-write-operations
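Since the import in the question is driven by a plain bash loop, one small, hedged improvement is to stop on the first failure and check free disk space at that point, because "new file allocation failure" often points at the data volume filling up or hitting a quota (the dbpath below is the common Linux default and may differ):
for i in *.json
do
    mongoimport --db testdb --collection test --type json --file "$i" --jsonArray || {
        echo "import failed on $i" >&2
        df -h /var/lib/mongodb   # adjust to your dbpath; checks whether the data volume is full
        break
    }
done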

Puppet agent hangs and eventually gives a memory allocation error

I'm using puppet as a provisioner for Vagrant, and am coming across an issue where Puppet will hang for an extremely long time when I do a "vagrant provision". Building the box from scratch using "vagrant up" doesn't seem to be a problem, only subsequent provisions.
If I turn puppet debug on and watch where it hangs, it seems to stop at various, seemingly arbitrary points, the first of which is:
Info: Applying configuration version '1401868442'
Debug: Prefetching yum resources for package
Debug: Executing '/bin/rpm --version'
Debug: Executing '/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n''
Executing this command on the server myself returns immediately.
Eventually, it gets past this and continues. Using the summary option, I get the following, after waiting for a very long time for it to complete:
Debug: Finishing transaction 70191217833880
Debug: Storing state
Debug: Stored state in 9.39 seconds
Notice: Finished catalog run in 1493.99 seconds
Changes:
    Total: 2
Events:
    Failure: 2
    Success: 2
    Total: 4
Resources:
    Total: 18375
    Changed: 2
    Failed: 2
    Skipped: 35
    Out of sync: 4
Time:
    User: 0.00
    Anchor: 0.01
    Schedule: 0.01
    Yumrepo: 0.07
    Augeas: 0.12
    Package: 0.18
    Exec: 0.96
    Service: 1.07
    Total: 108.93
    Last run: 1401869964
    Config retrieval: 16.49
    Mongodb database: 3.99
    File: 76.60
    Mongodb user: 9.43
Version:
    Config: 1401868442
    Puppet: 3.4.3
This doesn't seem very helpful to me, as the listed times only total about 108 seconds, so where have the other 1385 seconds gone?
Throughout, Puppet seems to be hammering the box, using up a lot of CPU, but still doesn't seem to advance. The memory it uses seems to continually increase. When I kick off the command, top looks like this:
Cpu(s): 10.2%us, 2.2%sy, 0.0%ni, 85.5%id, 2.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 4956928k total, 2849296k used, 2107632k free, 63464k buffers
Swap: 950264k total, 26688k used, 923576k free, 445692k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28486 root 20 0 439m 334m 3808 R 97.5 6.9 2:02.92 puppet
22 root 20 0 0 0 0 S 1.3 0.0 0:07.55 kblockd/0
18276 mongod 20 0 788m 31m 3040 S 1.3 0.6 2:31.82 mongod
20756 jboss-as 20 0 3081m 1.5g 21m S 1.3 31.4 7:13.15 java
20930 elastics 20 0 2340m 236m 6580 S 1.0 4.9 1:44.80 java
266 root 20 0 0 0 0 S 0.3 0.0 0:03.85 jbd2/dm-0-8
22717 vagrant 20 0 98.0m 2252 1276 S 0.3 0.0 0:01.81 sshd
28762 vagrant 20 0 15036 1228 932 R 0.3 0.0 0:00.10 top
1 root 20 0 19364 1180 964 S 0.0 0.0 0:00.86 init
To me, this seems fine: there's over 2GB of available memory and plenty of available swap. I have a max open files limit of 1024.
About 10-15 minutes later, still no advance in the console output, but top looks like this:
Cpu(s): 11.2%us, 1.6%sy, 0.0%ni, 86.9%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%s
Mem: 4956928k total, 3834376k used, 1122552k free, 64248k buffers
Swap: 950264k total, 24408k used, 925856k free, 445728k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28486 root 20 0 1397m 1.3g 3808 R 99.6 26.7 15:16.19 puppet
18276 mongod 20 0 788m 31m 3040 R 1.7 0.6 2:45.03 mongod
20756 jboss-as 20 0 3081m 1.5g 21m S 1.3 31.4 7:25.93 java
20930 elastics 20 0 2340m 238m 6580 S 0.7 4.9 1:52.03 java
8486 root 20 0 308m 952 764 S 0.3 0.0 0:06.03 VBoxService
As you can see, puppet is now using a lot more memory, and it seems to continue in this fashion. The box it's building has 5GB of RAM, so I wouldn't have expected it to have memory issues. However, further down the line, after a long wait, I do get "Cannot allocate memory - fork(2)".
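A simple way to keep an eye on the Puppet process's memory while it appears to hang (a sketch; it assumes the process shows up as "puppet" in ps, as in the top output above):
# print the puppet process's resident and virtual memory every 5 seconds
watch -n 5 'ps -C puppet -o pid,rss,vsz,etime,cmd'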
Running ulimit -a, I get:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 38566
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Which, again, looks fine to me...
To be honest, I'm completely at a loss as to how to go about solving this, or what is causing it.
Any help or insight would be greatly appreciated!
EDIT:
So I managed to fix this eventually... It came down to using recurse with a file resource for a large directory. The target directory in question contained around 2GB worth of files, and Puppet took a huge amount of time loading this into memory and doing its hashes and comparisons. The first time I stood the server up the directory was relatively empty, so the check was quick, but then other resources were placed in it that increased its size massively, meaning subsequent runs took much longer.
The memory error that was eventually thrown was, I can only assume, because Puppet was loading the whole thing into memory in order to do its checks...
I found a way to avoid using recurse, and am now steering clear of it like the plague...
Yeah, the problem with the recurse parameter on the file type is that it checks every single file's checksum, which on a massive directory adds up real quick.
As Felix suggests, using checksum => none is one way to fix it; another is to accomplish the task you're trying to do (say, chmod or chown a whole directory) with an exec that performs the native task, with an unless condition to check whether it has already been done.
Something like:
define check_mode($mode) {
  exec { "/bin/chmod $mode $name":
    unless => "/bin/sh -c '[ $(/usr/bin/stat -c %a $name) == $mode ]'",
  }
}
Taken from http://projects.puppetlabs.com/projects/1/wiki/File_Permission_Check_Patterns
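Outside of Puppet, the guard in that exec boils down to a plain shell test; trying it by hand makes the unless logic easier to see (the path and mode here are placeholders):
# apply the mode only when the current one differs (same idea as the unless guard above)
[ "$(/usr/bin/stat -c %a /var/www/example)" = "755" ] || /bin/chmod 755 /var/www/example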
