I have a problem debugging my application in Docker. I think my setup is correct, because everything works fine without the debugger.
docker-compose.yml (relevant part):
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/usr/src/app
ports:
- "3000:3000"
- "1234:1234"
- "26162:26162"
- "26168:26168"
The error (the site is not responding):
Fast Debugger (ruby-debug-ide 0.7.0, debase 0.2.4.1, file filtering is supported) listens on 0.0.0.0:1234
=> Booting Puma
=> Rails 5.2.3 application starting in development
=> Run `rails server -h` for more startup options
[8] Puma starting in cluster mode...
[8] * Version 4.0.1 (ruby 2.6.2-p47), codename: 4 Fast 4 Furious
[8] * Min threads: 5, max threads: 5
[8] * Environment: development
[8] * Process workers: 2
[8] * Preloading application
[8] * Listening on tcp://0.0.0.0:3000
[8] Use Ctrl-C to stop
[8] ! Terminating timed out worker: 16
[8] ! Terminating timed out worker: 18
[8] ! Terminating timed out worker: 20
[8] ! Terminating timed out worker: 21
[8] ! Terminating timed out worker: 24
[8] ! Terminating timed out worker: 25
[8] ! Terminating timed out worker: 28
[8] ! Terminating timed out worker: 29
I just figured it out.
My solution is to set WEB_CONCURRENCY to 0 in config/puma.rb:
workers ENV.fetch("WEB_CONCURRENCY") { 0 }
For the production system I change the value in the .env file:
WEB_CONCURRENCY=2
Source:
* https://github.com/JetBrains/sample_rails_app/blob/master/config/puma.rb
# Note that workers are not supported for JRuby or Windows
#workers ENV.fetch("WEB_CONCURRENCY") { 2 }
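For context, here is a minimal config/puma.rb sketch along the lines of that sample, with the development default dropped to 0 workers (hedged: the surrounding settings are the standard Rails defaults and may differ from your file). With 0 workers Puma runs in single mode and never forks, which is likely why the cluster-mode worker timeouts seen in the log above stop happening under the debugger:

```ruby
# config/puma.rb -- sketch, assuming an otherwise default Rails 5 setup
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }

# 0 workers => single mode (no forking), which ruby-debug-ide needs in development;
# production sets WEB_CONCURRENCY=2 in .env to restore cluster mode.
workers ENV.fetch("WEB_CONCURRENCY") { 0 }

# Only preload the app when we actually run in cluster mode
preload_app! if ENV.fetch("WEB_CONCURRENCY") { 0 }.to_i > 0

plugin :tmp_restart
```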
Related
My Heroku app cannot be accessed after a build. The logs show that both the web server node and the worker node are listening.
It's a Flask app run by gunicorn, and it has two add-ons: newrelic and redistogo.
Error:
This site can’t be reached.
appname.herokuapp.com’s server IP address could not be found.
DNS_PROBE_FINISHED_NXDOMAIN
Logs:
```
2020-02-05T16:24:31.556201+00:00 heroku[worker.1]: Starting process with command `python worker.py`
2020-02-05T16:24:32.278863+00:00 heroku[worker.1]: State changed from starting to up
2020-02-05T16:24:33.363132+00:00 app[worker.1]: 16:24:33 RQ worker started, version 0.5.1
2020-02-05T16:24:33.364484+00:00 app[worker.1]: 16:24:33
2020-02-05T16:24:33.364574+00:00 app[worker.1]: 16:24:33 *** Listening on high, default, low...
2020-02-05T16:24:35.295791+00:00 heroku[web.1]: Starting process with command `newrelic-admin run-program gunicorn app:server`
2020-02-05T16:24:41.159117+00:00 heroku[web.1]: State changed from starting to up
2020-02-05T16:24:40.959907+00:00 app[web.1]: [2020-02-05 16:24:40 +0000] [4] [INFO] Starting gunicorn 20.0.4
2020-02-05T16:24:40.961836+00:00 app[web.1]: [2020-02-05 16:24:40 +0000] [4] [INFO] Listening at: http://0.0.0.0:21126 (4)
2020-02-05T16:24:40.962097+00:00 app[web.1]: [2020-02-05 16:24:40 +0000] [4] [INFO] Using worker: sync
2020-02-05T16:24:40.971809+00:00 app[web.1]: [2020-02-05 16:24:40 +0000] [12] [INFO] Booting worker with pid: 12
2020-02-05T16:24:41.143051+00:00 app[web.1]: [2020-02-05 16:24:41 +0000] [20] [INFO] Booting worker with pid: 20
```
EDIT:
Procfile:
web: newrelic-admin run-program gunicorn app:server
worker: python worker.py
Worker:
import os
import redis
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(list(map(Queue, listen)))
        worker.work()
The app is accessible now. It seems to have been downtime on Heroku's side, although they haven't reported it on their status page.
I am trying to install OpenProject using Docker on CentOS 7.6, but Phusion Passenger fails to start after installation. The error suggests it failed to parse a response:
The preloader process sent an unparseable response:. I don't know how to fix this issue.
stdout:
-----> Database setup finished.
On first installation, the default admin credentials are login: admin, password: admin
-----> Launching supervisord...
2019-05-08 08:14:46,313 CRIT Supervisor running as root (no user in config file)
2019-05-08 08:14:46,318 INFO supervisord started with pid 1
2019-05-08 08:14:47,321 INFO spawned: 'postgres' with pid 155
2019-05-08 08:14:47,325 INFO spawned: 'apache2' with pid 156
2019-05-08 08:14:47,328 INFO spawned: 'web' with pid 157
2019-05-08 08:14:47,331 INFO spawned: 'worker' with pid 158
2019-05-08 08:14:47,351 INFO spawned: 'postfix' with pid 159
2019-05-08 08:14:47,360 INFO spawned: 'memcached' with pid 160
2019-05-08 08:14:47.634 UTC [172] LOG: database system was shut down at 2019-05-08 08:14:44 UTC
2019-05-08 08:14:47,634 INFO success: postfix entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2019-05-08 08:14:47.649 UTC [172] LOG: MultiXact member wraparound protections are now enabled
2019-05-08 08:14:47.653 UTC [155] LOG: database system is ready to accept connections
2019-05-08 08:14:47.663 UTC [177] LOG: autovacuum launcher started
2019-05-08 08:14:48,670 INFO success: postgres entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: apache2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
2019-05-08 08:14:50,198 INFO exited: postfix (exit status 0; expected)
--> Downloading a Phusion Passenger agent binary for your platform
--> Installing Nginx 1.15.8 engine
--------------------------
[passenger_native_support.so] trying to compile for the current user (app) and Ruby interpreter...
(set PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0 to disable)
Compilation successful. The logs are here:
/tmp/passenger_native_support-15tsfhk.log
[passenger_native_support.so] successfully loaded.
=============== Phusion Passenger Standalone web server started ===============
PID file: /app/tmp/pids/passenger.8080.pid
Log file: /app/log/passenger.8080.log
Environment: production
Accessible via: http://0.0.0.0:8080/
You can stop Phusion Passenger Standalone by pressing Ctrl-C.
Problems? Check https://www.phusionpassenger.com/library/admin/standalone/troubleshooting/
===============================================================================
[ N 2019-05-08 08:15:01.7338 404/Tb age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours)
Forcefully loading the application. Use :environment to avoid eager loading.
[auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
hook registered
App 439 output: [auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
App 439 output: hook registered
Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
App 439 output: Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
[Worker(host:d0b3748f627a pid:158)] Starting job worker
2019-05-08T08:15:45+0000: [Worker(host:d0b3748f627a pid:158)] Starting job worker
App 439 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `handle_spawn_command'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:78:in `accept_and_process_next_client'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:167:in `run_main_loop'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:207:in `<module:App>'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:29:in `<main>'
[ E 2019-05-08 08:15:46.6971 404/Tc age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /app: The preloader process sent an unparseable response:
Error ID: d7825364
Error details saved to: /tmp/passenger-error-wjSTKF.html
[ E 2019-05-08 08:15:46.7028 404/T8 age/Cor/Con/CheckoutSession.cpp:276 ]: [Client 1-1] Cannot checkout session because a spawning error occurred. The identifier of the error is d7825364. Please see earlier logs for details about the error.
[ W 2019-05-08 08:34:24.7967 404/Tk age/Cor/Spa/SmartSpawner.h:572 ]: An error occurred while spawning an application process: Cannot connect to Unix socket '/tmp/passenger.PKROzbY/apps.s/preloader.hyl9g8': No such file or directory (errno=2)
[ W 2019-05-08 08:34:24.7968 404/Tk age/Cor/Spa/SmartSpawner.h:574 ]: The application preloader seems to have crashed, restarting it and trying again...
App 543 output: [auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
App 543 output: hook registered
App 543 output: Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
App 543 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `handle_spawn_command'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:78:in `accept_and_process_next_client'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:167:in `run_main_loop'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:207:in `<module:App>'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:29:in `<main>'
[ E 2019-05-08 08:34:52.2521 404/Tk age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /app: The preloader process sent an unparseable response:
Error ID: c2ce0823
Error details saved to: /tmp/passenger-error-bpsfAC.html
[ E 2019-05-08 08:34:52.2570 404/T8 age/Cor/Con/CheckoutSession.cpp:276 ]: [Client 1-2] Cannot checkout session because a spawning error occurred. The identifier of the error is c2ce0823. Please see earlier logs for details about the error.
Thanks.
The important line in the log is this one:
App 439 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
This means your container is unable to allocate the memory it needs. It could be that your system is in an OOM state and things are being killed, or that some other restriction on the daemon prevents it from allocating additional memory.
For reference:
https://success.docker.com/article/docker-daemon-error-cannot-allocate-memory
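In practice, a few quick checks along those lines (standard Docker/Linux commands; the run command and the limits shown are illustrative, not the exact OpenProject invocation):

```bash
# How much memory do the host and the Docker daemon see?
free -m
docker info | grep -i memory

# Current memory usage of running containers
docker stats --no-stream

# If the container is started with an explicit limit, give it enough headroom
# for Passenger's preloader to fork (image name and sizes are placeholders)
docker run -d -m 4g --memory-swap 4g -p 8080:80 <your-openproject-image>
```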
Version Info:
"org.apache.storm" % "storm-core" % "1.2.1"
"org.apache.storm" % "storm-kafka-client" % "1.2.1"
I have a Storm topology with 3 bolts (A, B, C), where the middle bolt takes around 450 ms mean time and the other two bolts take less than 1 ms.
I am running the topology with the following parallelism hint values on two machines (roughly as sketched after this list):
A: 4
B: 700
C: 10
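For reference, a hedged sketch of how those hints and the debug flag would typically be wired up with the Storm 1.2 API from Scala (MySpout/BoltA/BoltB/BoltC are hypothetical stand-ins, not the real topology components):

```scala
import org.apache.storm.{Config, StormSubmitter}
import org.apache.storm.topology.TopologyBuilder

object TopologySketch {
  def main(args: Array[String]): Unit = {
    val builder = new TopologyBuilder()
    builder.setSpout("spout", new MySpout(), 1)
    builder.setBolt("A", new BoltA(), 4).shuffleGrouping("spout")
    builder.setBolt("B", new BoltB(), 700).shuffleGrouping("A") // ~450 ms mean latency
    builder.setBolt("C", new BoltC(), 10).shuffleGrouping("B")

    val conf = new Config()
    conf.setDebug(true)    // already enabled per the question
    conf.setNumWorkers(2)  // assumed: one worker slot per machine

    StormSubmitter.submitTopology("myTopo", conf, builder.createTopology())
  }
}
```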
I am getting the following error a few minutes after the topology starts.
In the worker log:
2018-07-04T20:16:28.835+05:30 Client [ERROR] discarding 7 messages because the Netty client to Netty-Client-/ip:6700 is being closed
In the supervisor logs:
2018-07-04 20:16:29.468 o.a.s.d.s.BasicContainer [INFO] Worker Process 32bc11c0-a1d0-4593-a91a-3ff788ea041a exited with code: 20
2018-07-04 20:16:31.592 o.a.s.d.s.Slot [WARN] SLOT 6700: main process has exited
2018-07-04 20:16:31.592 o.a.s.d.s.Container [INFO] Killing 2825cbe9-aedd-4f10-a796-4f9dc30ae72f:32bc11c0-a1d0-4593-a91a-3ff788ea041a
2018-07-04 20:16:31.600 o.a.s.u.Utils [INFO] Error when trying to kill 7422. Process is probably already dead.
2018-07-04 20:16:32.600 o.a.s.d.s.Slot [INFO] STATE RUNNING msInState: 391195 topo:myTopo-1-1530715184 worker:32bc11c0-a1d0-4593-a91a-3ff788ea041a -> KILL_AND_RELAUNCH msInState: 0 topo:myTopo-1-1530715184 worker:32bc11c0-a1d0-4593-a91a-3ff788ea041a
2018-07-04 20:16:32.600 o.a.s.d.s.Container [INFO] GET worker-user for 32bc11c0-a1d0-4593-a91a-3ff788ea041a
I have seen similar questions asked here and here, and I have a few queries related to this:
Why is this error occurring, and how do I resolve it?
How can I get more debug information from Storm? I have already set conf.setDebug(true).
Are there any limitations/guidelines on how much parallelism is OK for a bolt on n machines?
Edit:
Logs for strace -fp PID -e trace=read,write,network,signal,ipc are in a gist. A relevant-looking part is from when the above happens, but I see such SIGSEGV entries in many places in the strace output:
[pid 23635] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_ACCERR, si_addr=0x7f83af6f1180} ---
[pid 23549] <... read resumed> "PK\3\4\n\0\0\0\10\0\364J\336F\222'\202\312\310\2\0\0\16\5\0\0\36\0\0\0", 30) = 30
[pid 23654] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_ACCERR, si_addr=0x7f83af6f1f80} ---
[pid 23549] read(23, "\235TmW\22A\24~\6\224\227u\vE4\255,JR\300WP\322\0245TH\23\313\3j\347"..., 712) = 712
[pid 23654] rt_sigreturn({mask=[QUIT]}) = 140203560738688
[pid 23635] rt_sigreturn({mask=[QUIT]}) = 140203560735104
The strace output of the worker process is here; the relevant-looking logs are:
[pid 24435] recvfrom(291, "HTTP/1.1 200 OK\r\nContent-Type: a"..., 8192, 0, NULL, NULL) = 544
[pid 23473] write(3, "Heap\n garbage-first heap total"..., 347) = 347
[pid 24434] +++ exited with 20 +++
[pid 24405] +++ exited with 20 +++
[pid 24435] +++ exited with 20 +++
[pid 24427] +++ exited with 20 +++
Edit 2:
There is this question as well: Connection refused error in worker logs - apache storm. As per its answer, not setting storm.local.hostname might cause this, but it is already set for me.
There is another bug filed here with a similar Netty error, which is also still unresolved.
After updating macOS Server to 5.3 (running on macOS 10.12.4) my Calendar & Contacts have stopped syncing.
It seems that it's having trouble starting Postgres for cluster /Library/Server/Calendar and Contacts/Data/Database.xpg/cluster.pg and possibly trouble with the agent too.
The GUI seems to think that the Calendar & Contacts services have started and are available, but when I run $ sudo serveradmin fullstatus calendar from the command line I get:
calendar:setStateVersion = 1
calendar:readWriteSettingsVersion = 1
calendar:state = "STARTING"
calendar:contactsState = "STARTING"
calendar:calendarState = "STARTING"
System log is being spammed with:
Apr 22 11:58:42 com.apple.xpc.launchd[1] (org.calendarserver.agent[44649]): Service exited with abnormal code: 1
Apr 22 11:58:42 com.apple.xpc.launchd[1] (org.calendarserver.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Apr 22 11:58:52 com.apple.xpc.launchd[1] (org.calendarserver.agent[44659]): Service exited with abnormal code: 1
Apr 22 11:58:52 com.apple.xpc.launchd[1] (org.calendarserver.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Apr 22 11:59:02 com.apple.xpc.launchd[1] (org.calendarserver.agent[44668]): Service exited with abnormal code: 1
Apr 22 11:59:02 com.apple.xpc.launchd[1] (org.calendarserver.agent): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Apr 22 11:59:07 com.apple.xpc.launchd[1] (org.calendarserver.calendarserver[44676]): Service exited with abnormal code: 1
Apr 22 11:59:07 com.apple.xpc.launchd[1] (org.calendarserver.calendarserver): Service only ran for 0 seconds. Pushing respawn out by 60 seconds.
Here's the output of $ sudo /Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_diagnose
Any ideas?
OS Build: 16E195
Server Build: 16S4123
/Library/Server/Preferences/Calendar.plist exists and can be parsed
Prefs plist says ServerRoot directory is: /Library/Server/Calendar and Contacts
ServerRoot volume ok
/Library/Server/Calendar and Contacts/Config/caldavd-system.plist exists and can be parsed
/Library/Server/Calendar and Contacts/Config/caldavd-user.plist does not exist
Configuration:
Calendar and Contacts service processes:
USER PID %CPU %MEM RSS ELAPSED STARTED COMMAND
root 42554 0.0 0.1 11072 07:49 Sat 22 Apr 11:32:16 2017 servermgr_calendar
Serverd status:
org.calendarserver.agent is enabled
org.calendarserver.calendarserver is enabled
org.calendarserver.relocate is enabled
Disk space on boot volume:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 999G 777G 222G 78% 8520180 4286447099 0% /
Disk space on service data volume:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 999G 777G 222G 78% 8520180 4286447099 0% /
Disk space used by Calendar and Contacts service:
20K /Library/Server/Calendar and Contacts/Config
1014M /Library/Server/Calendar and Contacts/Data
200M /Library/Server/Calendar and Contacts/Logs
Postgres status for cluster /Library/Server/Calendar and Contacts/Data/Database.xpg/cluster.pg:
pg_ctl: no server running
Agent:
Attempting to send a request to the agent...
Can't connect to agent: timed out
Server connection:
Traceback (most recent call last):
File "/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_diagnose", line 14, in <module>
load_entry_point('CalendarServer==9.1a1.dev0+56b4197875debefef19d9c19840f903a8e480c88.head', 'console_scripts', 'calendarserver_diagnose')()
File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/diagnose.py", line 145, in main
connectToCaldavd(keys)
File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/diagnose.py", line 584, in connectToCaldavd
url = "https://{host}/principals/".format(host=keys["ServerHostName"])
KeyError: 'ServerHostName'
I use Rails 5 with Action Cable on Heroku, and I have this error only in production:
WebSocket connection to 'wss://adham-chatty.heroku.com/cable' failed: WebSocket opening handshake was canceled
I think it is because of Puma:
2016-01-21T23:33:56.372977+00:00 heroku[web.1]: Starting process with command `bundle exec puma -t 5:5 -p ${PORT:-3000} -e ${RACK_ENV:-development}`
2016-01-21T23:33:57.651242+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-01-21T23:33:58.721808+00:00 app[web.1]: [3] - Gracefully shutting down workers...
2016-01-21T23:33:58.873303+00:00 app[web.1]: [3] === puma shutdown: 2016-01-21 23:33:58 +0000 ===
2016-01-21T23:33:58.873305+00:00 app[web.1]: [3] - Goodbye!
2016-01-21T23:33:58.910391+00:00 app[web.1]: [3] Puma starting in cluster mode...
2016-01-21T23:33:58.910405+00:00 app[web.1]: [3] * Version 2.15.3 (ruby 2.2.3-p173), codename: Autumn Arbor Airbrush
2016-01-21T23:33:58.910407+00:00 app[web.1]: [3] * Min threads: 5, max threads: 5
2016-01-21T23:33:58.910409+00:00 app[web.1]: [3] * Environment: production
2016-01-21T23:33:58.910429+00:00 app[web.1]: [3] * Process workers: 2
2016-01-21T23:33:58.910453+00:00 app[web.1]: [3] * Preloading application
2016-01-21T23:33:59.632680+00:00 heroku[web.1]: Process exited with status 0
I just spent an evening and got my Rails 5 ActionCable app working wonderfully on Heroku. I wrote up what I learned after jumping through a lot of hoops here: http://www.whodya.com/posts/19632. What I'd guess from looking at your error message above is, ahem, exactly the same problem I had for a bit: you're trying to use wss: to connect, but this only works if you're using SSL/HTTPS on your server. Try ws: instead, until you get HTTPS up and running. Again, my write-up is here (http://www.whodya.com/posts/19632), with my config/settings/etc.
-John
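(For reference, a minimal sketch of the kind of production configuration this answer implies, assuming a standard Rails 5 app; the hostname is a placeholder, not taken from the question:)

```ruby
# config/environments/production.rb -- sketch only; host is illustrative
Rails.application.configure do
  # Until SSL/HTTPS is in place, point clients at ws:// instead of wss://
  config.action_cable.url = "ws://your-app.herokuapp.com/cable"

  # Allow Action Cable requests from the app's own origin
  config.action_cable.allowed_request_origins = [
    "http://your-app.herokuapp.com"
  ]
end
```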