Passenger on EasyApache 4/CentOS/cPanel will not start Rails app

On a new server, CentOS/EasyApache 4, I created a single account and set up a new Rails 6.0.2 app. I first installed Passenger 6.0.4 using the EasyApache interface and was unable to start the Rails app.
Web application could not be started by the Phusion Passenger application server
But there was no error in the Apache error log. (And nothing in the Rails log, since the app never started.) I increased the log level but still found nothing I could identify; certainly no error matching the error ID shown on the page.
I removed that version of Passenger and instead installed it as a gem. I first tried installing it as my app user but got "not secure" permission errors on the application directories, so I installed RVM and Passenger as root.
Each time, no matter how I install, I still get the same error screen but no error in the logs. Passenger seems to be running, it seems to be playing nice with Apache. I can launch the Rails app in console in production, and I can make database calls.
Not sure where to look for details. Below is the Apache error log, starting from the last time I restarted Apache. I'm not sure if Passenger is starting up, shutting down, and starting again; I've never had to look before, so I don't know if this is normal. But in the end it looks to be running, and no error is logged after this, even though I've hit the site several times and gotten error screens. Thanks for any help.
[Fri May 01 14:07:43.442039 2020] [mpm_prefork:notice] [pid 12050] AH00169: caught SIGTERM, shutting down
[ N 2020-05-01 14:07:43.4543 12056/T8 age/Cor/CoreMain.cpp:671 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ N 2020-05-01 14:07:43.4543 12056/T1 age/Cor/CoreMain.cpp:1246 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ N 2020-05-01 14:07:43.4544 12056/T8 Ser/Server.h:902 ]: [ServerThr.1] Freed 0 spare client objects
[ N 2020-05-01 14:07:43.4545 12056/T8 Ser/Server.h:558 ]: [ServerThr.1] Shutdown finished
[ N 2020-05-01 14:07:43.4546 12056/Tc Ser/Server.h:902 ]: [ApiServer] Freed 0 spare client objects
[ N 2020-05-01 14:07:43.4546 12056/Tc Ser/Server.h:558 ]: [ApiServer] Shutdown finished
[ N 2020-05-01 14:07:43.4547 12056/T9 Ser/Server.h:902 ]: [ServerThr.2] Freed 0 spare client objects
[ N 2020-05-01 14:07:43.4547 12056/T9 Ser/Server.h:558 ]: [ServerThr.2] Shutdown finished
[ N 2020-05-01 14:07:43.9554 12056/T1 age/Cor/CoreMain.cpp:1325 ]: Passenger core shutdown finished
[ N 2020-05-01 14:07:44.0784 18605/T1 age/Wat/WatchdogMain.cpp:1373 ]: Starting Passenger watchdog...
[ N 2020-05-01 14:07:44.0950 18608/T1 age/Cor/CoreMain.cpp:1340 ]: Starting Passenger core...
[ N 2020-05-01 14:07:44.0951 18608/T1 age/Cor/CoreMain.cpp:256 ]: Passenger core running in multi-application mode.
[ N 2020-05-01 14:07:44.1015 18608/T1 age/Cor/CoreMain.cpp:1015 ]: Passenger core online, PID 18608
[Fri May 01 14:07:44.102969 2020] [:notice] [pid 18603] ModSecurity for Apache/2.9.3 (http://www.modsecurity.org/) configured.
[Fri May 01 14:07:44.103122 2020] [:notice] [pid 18603] ModSecurity: APR compiled version="1.7.0"; loaded version="1.7.0"
[Fri May 01 14:07:44.103127 2020] [:notice] [pid 18603] ModSecurity: PCRE compiled version="8.32 "; loaded version="8.32 2012-11-30"
[Fri May 01 14:07:44.103129 2020] [:notice] [pid 18603] ModSecurity: LUA compiled version="Lua 5.1"
[Fri May 01 14:07:44.103131 2020] [:notice] [pid 18603] ModSecurity: YAJL compiled version="2.0.4"
[Fri May 01 14:07:44.103133 2020] [:notice] [pid 18603] ModSecurity: LIBXML compiled version="2.9.7"
[Fri May 01 14:07:44.103141 2020] [:notice] [pid 18603] ModSecurity: Status engine is currently disabled, enable it by set SecStatusEngine to On.
[ N 2020-05-01 14:07:44.1052 18608/T8 age/Cor/CoreMain.cpp:671 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ N 2020-05-01 14:07:44.1052 18608/T1 age/Cor/CoreMain.cpp:1246 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ N 2020-05-01 14:07:44.1053 18608/Tb Ser/Server.h:902 ]: [ServerThr.2] Freed 0 spare client objects
[ N 2020-05-01 14:07:44.1053 18608/Tb Ser/Server.h:558 ]: [ServerThr.2] Shutdown finished
[ N 2020-05-01 14:07:44.1053 18608/T8 Ser/Server.h:902 ]: [ServerThr.1] Freed 0 spare client objects
[ N 2020-05-01 14:07:44.1053 18608/T8 Ser/Server.h:558 ]: [ServerThr.1] Shutdown finished
[ N 2020-05-01 14:07:44.1054 18608/Tc Ser/Server.h:902 ]: [ApiServer] Freed 0 spare client objects
[ N 2020-05-01 14:07:44.1054 18608/Tc Ser/Server.h:558 ]: [ApiServer] Shutdown finished
[Fri May 01 14:07:44.130060 2020] [:notice] [pid 18628] mod_ruid2/0.9.8 enabled
[ N 2020-05-01 14:07:44.1508 18631/T1 age/Wat/WatchdogMain.cpp:1373 ]: Starting Passenger watchdog...
[ N 2020-05-01 14:07:44.1733 18634/T1 age/Cor/CoreMain.cpp:1340 ]: Starting Passenger core...
[ N 2020-05-01 14:07:44.1735 18634/T1 age/Cor/CoreMain.cpp:256 ]: Passenger core running in multi-application mode.
[ N 2020-05-01 14:07:44.1800 18634/T1 age/Cor/CoreMain.cpp:1015 ]: Passenger core online, PID 18634
[Fri May 01 14:07:44.184432 2020] [mpm_prefork:notice] [pid 18628] AH00163: Apache/2.4.43 (cPanel) OpenSSL/1.1.1g mod_bwlimited/1.4 Phusion_Passenger/6.0.4 configured -- resuming normal operations
[Fri May 01 14:07:44.184457 2020] [core:notice] [pid 18628] AH00094: Command line: '/usr/sbin/httpd'
[ N 2020-05-01 14:07:44.6441 18608/T1 age/Cor/TelemetryCollector.h:531 ]: Message from Phusion: End time can not be before or equal to begin time
[ N 2020-05-01 14:07:44.6750 18608/T1 age/Cor/CoreMain.cpp:1325 ]: Passenger core shutdown finished
[ N 2020-05-01 14:07:46.7458 18634/T4 age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours)
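When the Apache error log stays quiet like this, one thing worth trying is raising Passenger's own log level and giving it a dedicated log file; the spawn error behind the on-page error ID usually lands there. A minimal sketch for the Apache configuration (PassengerLogLevel and PassengerLogFile are standard Passenger-for-Apache directives; the path is just an example):

```apache
# Passenger verbosity: 0 = crit only ... 7 = debug3. Higher levels include
# the application spawn output, which is where startup failures show up.
PassengerLogLevel 7

# Keep Passenger's output out of the main Apache log so it's easy to find.
PassengerLogFile /var/log/passenger.log
```

Restart Apache after changing these, then hit the site once and check that file for the error matching the ID on the error page.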

Related

docker installation of openproject: Phusion passenger fails to start after installation

I am trying to install OpenProject using Docker on CentOS 7.6, but Phusion Passenger fails to start after installation. The error suggests it failed to parse a response: The preloader process sent an unparseable response:. I don't know how to fix this issue.
stdout:
-----> Database setup finished.
On first installation, the default admin credentials are login: admin, password: admin
-----> Launching supervisord...
2019-05-08 08:14:46,313 CRIT Supervisor running as root (no user in config file)
2019-05-08 08:14:46,318 INFO supervisord started with pid 1
2019-05-08 08:14:47,321 INFO spawned: 'postgres' with pid 155
2019-05-08 08:14:47,325 INFO spawned: 'apache2' with pid 156
2019-05-08 08:14:47,328 INFO spawned: 'web' with pid 157
2019-05-08 08:14:47,331 INFO spawned: 'worker' with pid 158
2019-05-08 08:14:47,351 INFO spawned: 'postfix' with pid 159
2019-05-08 08:14:47,360 INFO spawned: 'memcached' with pid 160
2019-05-08 08:14:47.634 UTC [172] LOG: database system was shut down at 2019-05-08 08:14:44 UTC
2019-05-08 08:14:47,634 INFO success: postfix entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2019-05-08 08:14:47.649 UTC [172] LOG: MultiXact member wraparound protections are now enabled
2019-05-08 08:14:47.653 UTC [155] LOG: database system is ready to accept connections
2019-05-08 08:14:47.663 UTC [177] LOG: autovacuum launcher started
2019-05-08 08:14:48,670 INFO success: postgres entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: apache2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-08 08:14:48,670 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
2019-05-08 08:14:50,198 INFO exited: postfix (exit status 0; expected)
--> Downloading a Phusion Passenger agent binary for your platform
--> Installing Nginx 1.15.8 engine
--------------------------
[passenger_native_support.so] trying to compile for the current user (app) and Ruby interpreter...
(set PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0 to disable)
Compilation successful. The logs are here:
/tmp/passenger_native_support-15tsfhk.log
[passenger_native_support.so] successfully loaded.
=============== Phusion Passenger Standalone web server started ===============
PID file: /app/tmp/pids/passenger.8080.pid
Log file: /app/log/passenger.8080.log
Environment: production
Accessible via: http://0.0.0.0:8080/
You can stop Phusion Passenger Standalone by pressing Ctrl-C.
Problems? Check https://www.phusionpassenger.com/library/admin/standalone/troubleshooting/
===============================================================================
[ N 2019-05-08 08:15:01.7338 404/Tb age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours)
Forcefully loading the application. Use :environment to avoid eager loading.
[auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
hook registered
App 439 output: [auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
App 439 output: hook registered
Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
App 439 output: Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
[Worker(host:d0b3748f627a pid:158)] Starting job worker
2019-05-08T08:15:45+0000: [Worker(host:d0b3748f627a pid:158)] Starting job worker
App 439 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `handle_spawn_command'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:78:in `accept_and_process_next_client'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:167:in `run_main_loop'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:207:in `<module:App>'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
App 439 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:29:in `<main>'
[ E 2019-05-08 08:15:46.6971 404/Tc age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /app: The preloader process sent an unparseable response:
Error ID: d7825364
Error details saved to: /tmp/passenger-error-wjSTKF.html
[ E 2019-05-08 08:15:46.7028 404/T8 age/Cor/Con/CheckoutSession.cpp:276 ]: [Client 1-1] Cannot checkout session because a spawning error occurred. The identifier of the error is d7825364. Please see earlier logs for details about the error.
[ W 2019-05-08 08:34:24.7967 404/Tk age/Cor/Spa/SmartSpawner.h:572 ]: An error occurred while spawning an application process: Cannot connect to Unix socket '/tmp/passenger.PKROzbY/apps.s/preloader.hyl9g8': No such file or directory (errno=2)
[ W 2019-05-08 08:34:24.7968 404/Tk age/Cor/Spa/SmartSpawner.h:574 ]: The application preloader seems to have crashed, restarting it and trying again...
App 543 output: [auth_saml] Missing settings from '/app/config/plugins/auth_saml/settings.yml', skipping omniauth registration.
App 543 output: hook registered
App 543 output: Creating scope :order_by_name. Overwriting existing method Sprint.order_by_name.
App 543 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `handle_spawn_command'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:78:in `accept_and_process_next_client'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:167:in `run_main_loop'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:207:in `<module:App>'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:30:in `<module:PhusionPassenger>'
App 543 output: from /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/helper-scripts/rack-preloader.rb:29:in `<main>'
[ E 2019-05-08 08:34:52.2521 404/Tk age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /app: The preloader process sent an unparseable response:
Error ID: c2ce0823
Error details saved to: /tmp/passenger-error-bpsfAC.html
[ E 2019-05-08 08:34:52.2570 404/T8 age/Cor/Con/CheckoutSession.cpp:276 ]: [Client 1-2] Cannot checkout session because a spawning error occurred. The identifier of the error is c2ce0823. Please see earlier logs for details about the error.
Thanks.
The important line in the log is this one:
App 439 output: /app/vendor/bundle/ruby/2.6.0/gems/passenger-6.0.1/src/ruby_supportlib/phusion_passenger/preloader_shared_helpers.rb:108:in `fork': Cannot allocate memory - fork(2) (Errno::ENOMEM)
This means your container is unable to allocate the memory it needs. It could be that your system is in an OOM state and processes are being killed, or that some other restriction on the Docker daemon prevents it from allocating additional memory.
For reference:
https://success.docker.com/article/docker-daemon-error-cannot-allocate-memory
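To check whether that is the case, and to give the container more headroom, something along these lines may help (standard Docker CLI flags; IMAGE stands for whatever image and arguments you originally ran, and the values are examples):

```
# Show live memory usage and limits of running containers
docker stats --no-stream

# Host-side view of free memory
free -m

# Relaunch with a larger memory ceiling
docker run -d --memory=4g --memory-swap=4g IMAGE
```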

Heroku restarting with SIGTERM status 143

I have a scraper running on Heroku. It has been running for a while (about two months); some days it does great and reaches its 1,000 maximum, and other days it just magically restarts.
Does anyone know what the reason could be for such a restart? The scraper shows no errors; the only thing I can find is the message below in the Heroku logs:
Feb 05 03:02:55 scraper heroku/web.1: Cycling
Feb 05 03:02:55 scraper heroku/web.1: State changed from up to starting
Feb 05 03:02:57 scraper heroku/web.1: Stopping all processes with SIGTERM
Feb 05 03:02:57 scraper heroku/web.1: Process exited with status 143
Feb 05 03:03:16 scraper heroku/web.1: Starting process with command `npm start`
The Cycling bit of the log is the interesting one.
Heroku restarts dynos every 24 hours, a process it calls "cycling". That is what you're seeing here.
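Cycling can't be disabled on standard dynos; the usual approach is to trap the SIGTERM and checkpoint in-flight work before Heroku's grace period ends and the process is killed. A minimal sketch of the trap pattern (shown in Ruby for consistency with the rest of this page; the question's scraper runs under npm, where the equivalent is process.on('SIGTERM', ...)):

```ruby
# Flag flipped by the signal handler; the work loop checks it between jobs.
shutting_down = false

Signal.trap("TERM") do
  # Keep the handler tiny: just record that a shutdown was requested.
  shutting_down = true
end

# Simulate the platform's restart by sending SIGTERM to ourselves.
Process.kill("TERM", Process.pid)
sleep 0.1  # give the signal a moment to be delivered

# A real worker would checkpoint its state and exit here
# instead of starting new work.
puts shutting_down
```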

passenger fails to start when restarting web server

I am seeing a weird passenger error when restarting httpd in /var/log/httpd/error_log:
[ N 2019-01-14 13:34:38.1896 30817/T1 age/Wat/WatchdogMain.cpp:1366 ]: Starting Passenger watchdog...
[ N 2019-01-14 13:34:38.2390 30820/T1 age/Cor/CoreMain.cpp:1339 ]: Starting Passenger core...
[ N 2019-01-14 13:34:38.2393 30820/T1 age/Cor/CoreMain.cpp:256 ]: Passenger core running in multi-application mode.
[ N 2019-01-14 13:34:38.2991 30820/T1 age/Cor/CoreMain.cpp:1014 ]: Passenger core online, PID 30820
[Mon Jan 14 13:34:38.306645 2019] [mpm_prefork:notice] [pid 30775] AH00163: Apache/2.4.6 (CentOS) Phusion_Passenger/6.0.0 configured -- resuming normal operations
[Mon Jan 14 13:34:38.306705 2019] [core:notice] [pid 30775] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[ N 2019-01-14 13:34:38.7734 30780/T1 age/Cor/TelemetryCollector.h:531 ]: Message from Phusion: End time can not be before or equal to begin time
[ N 2019-01-14 13:34:38.8869 30780/T1 age/Cor/CoreMain.cpp:1324 ]: Passenger core shutdown finished
The second to last line appears to be what's giving me an issue.
My best guess is that this was caused by reverting my VM to a snapshot, but I can't find any way to get passenger to run.
Old question, but this is the only place I've seen the same error I got. In my case the problem was that I had turned on the force_ssl config in my Rails app while it was accepting requests on port 80. So, if you have a Rails app and see this message, the solution might be to comment out the following line in production.rb:
config.force_ssl = true
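For context, that line lives in config/environments/production.rb. A hedged sketch of the fix (the ENV-based guard is just one possible pattern, not something from the original answer):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # force_ssl redirects every plain-HTTP request to https://; if nothing
  # is serving TLS, the site becomes unreachable. Comment it out, or gate
  # it on the deployment actually terminating TLS:
  # config.force_ssl = true
  config.force_ssl = ENV["FORCE_SSL"] == "1"
end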

Getting Netty client related error in storm topology and worker restarting

Version Info:
"org.apache.storm" % "storm-core" % "1.2.1"
"org.apache.storm" % "storm-kafka-client" % "1.2.1"
I have a Storm topology with three bolts (A, B, C), where the middle bolt takes around 450 ms mean time and the other two take less than 1 ms.
I am running the topology with the following parallelism hint values on two machines:
A: 4
B: 700
C: 10
I am getting the following error a few minutes after the topology starts:
In the worker log:
2018-07-04T20:16:28.835+05:30 Client [ERROR] discarding 7 messages because the Netty client to Netty-Client-/ip:6700 is being closed
In the supervisor logs:
2018-07-04 20:16:29.468 o.a.s.d.s.BasicContainer [INFO] Worker Process 32bc11c0-a1d0-4593-a91a-3ff788ea041a exited with code: 20
2018-07-04 20:16:31.592 o.a.s.d.s.Slot [WARN] SLOT 6700: main process has exited
2018-07-04 20:16:31.592 o.a.s.d.s.Container [INFO] Killing 2825cbe9-aedd-4f10-a796-4f9dc30ae72f:32bc11c0-a1d0-4593-a91a-3ff788ea041a
2018-07-04 20:16:31.600 o.a.s.u.Utils [INFO] Error when trying to kill 7422. Process is probably already dead.
2018-07-04 20:16:32.600 o.a.s.d.s.Slot [INFO] STATE RUNNING msInState: 391195 topo:myTopo-1-1530715184 worker:32bc11c0-a1d0-4593-a91a-3ff788ea041a -> KILL_AND_RELAUNCH msInState: 0 topo:myTopo-1-1530715184 worker:32bc11c0-a1d0-4593-a91a-3ff788ea041a
2018-07-04 20:16:32.600 o.a.s.d.s.Container [INFO] GET worker-user for 32bc11c0-a1d0-4593-a91a-3ff788ea041a
I see similar questions asked here and here. I have a few queries related to this:
Why is this error occurring, and how do I resolve it?
How can I get more debug information from Storm? I have already set conf.setDebug(true).
Are there any limits or guidelines on how much parallelism is OK for a bolt across n machines?
Edit:
Logs for strace -fp PID -e trace=read,write,network,signal,ipc are in a gist. The relevant-looking part is from when the above happens, but I see such SIGSEGVs in many places in the strace output:
[pid 23635] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_ACCERR, si_addr=0x7f83af6f1180} ---
[pid 23549] <... read resumed> "PK\3\4\n\0\0\0\10\0\364J\336F\222'\202\312\310\2\0\0\16\5\0\0\36\0\0\0", 30) = 30
[pid 23654] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_ACCERR, si_addr=0x7f83af6f1f80} ---
[pid 23549] read(23, "\235TmW\22A\24~\6\224\227u\vE4\255,JR\300WP\322\0245TH\23\313\3j\347"..., 712) = 712
[pid 23654] rt_sigreturn({mask=[QUIT]}) = 140203560738688
[pid 23635] rt_sigreturn({mask=[QUIT]}) = 140203560735104
The strace output of the worker process is here; the relevant-looking lines:
[pid 24435] recvfrom(291, "HTTP/1.1 200 OK\r\nContent-Type: a"..., 8192, 0, NULL, NULL) = 544
[pid 23473] write(3, "Heap\n garbage-first heap total"..., 347) = 347
[pid 24434] +++ exited with 20 +++
[pid 24405] +++ exited with 20 +++
[pid 24435] +++ exited with 20 +++
[pid 24427] +++ exited with 20 +++
Edit 2:
There is this question as well: Connection refused error in worker logs - apache storm. As per its answer, not setting storm.local.hostname might cause it, but that is already set for me.
There is another bug filed here with a similar Netty error, which is also still unresolved.

Can't connect to a local apache installation under XAMPP on Win XP 64-bit. Help!

I'm using XAMPP v1.7 on a Win XP 64-bit machine; my Symantec AV is turned off, as is my Windows Firewall, and I can't connect to localhost from a browser.
I originally had these errors:
[Wed Jan 07 16:24:55 2009] [error] (OS 10038)An operation was attempted on something that is not a socket. : Child 2716: Encountered too many errors accepting client connections. Possible causes: dynamic address renewal, or incompatible VPN or firewall software. Try using the Win32DisableAcceptEx directive
These errors went away after I added the Win32DisableAcceptEx directive to httpd.conf, but the net result remains the same: no joy.
Now, I get these errors:
[Wed Jan 07 16:40:15 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i mod_autoindex_color PHP/5.2.8 configured -- resuming normal operations
[Wed Jan 07 16:40:15 2009] [notice] Server built: Dec 10 2008 00:10:06
[Wed Jan 07 16:40:15 2009] [notice] Parent: Created child process 5916
[Wed Jan 07 16:40:15 2009] [notice] Disabled use of AcceptEx() WinSock2 API
[Wed Jan 07 16:40:15 2009] [notice] Digest: generating secret for digest authentication ...
[Wed Jan 07 16:40:15 2009] [notice] Digest: done
[Wed Jan 07 16:40:15 2009] [notice] Child 5916: Child process is running
[Wed Jan 07 16:40:15 2009] [notice] Child 5916: Acquired the start mutex.
[Wed Jan 07 16:40:15 2009] [notice] Child 5916: Starting 250 worker threads.
[Wed Jan 07 16:40:15 2009] [notice] Child 5916: Listening on port 443.
[Wed Jan 07 16:40:15 2009] [notice] Child 5916: Listening on port 80.
[Wed Jan 07 16:40:15 2009] [error] (OS 10038)An operation was attempted on something that is not a socket. : Too many errors in select loop. Child process exiting.
[Wed Jan 07 16:40:15 2009] [notice] Child 5916: Exit event signaled. Child process is ending.
[Wed Jan 07 16:40:16 2009] [notice] Child 5916: Released the start mutex
[Wed Jan 07 16:40:17 2009] [notice] Child 5916: All worker threads have exited.
[Wed Jan 07 16:40:17 2009] [notice] Child 5916: Child process is exiting
[Wed Jan 07 16:40:17 2009] [notice] Parent: child process exited with status 0 -- Restarting.
[Wed Jan 07 16:40:17 2009] [notice] Digest: generating secret for digest authentication ...
[Wed Jan 07 16:40:17 2009] [notice] Digest: done
And Apache seems to be crashing (Windows tells me so, and I can see the crash in the system events).
I'm a n00b to apache, but need to get this running. Ideas?
Marcus
If anyone comes across this, try running
netsh winsock RESET
from the command line. It worked for me.
