Heroku app not staying up when started with forever (server.js)

I created a Heroku app on my local system and pushed it to Heroku, then I ran
heroku run node_modules/forever/bin/forever start server.js
and got this response:
warn: --minUptime not set. Defaulting to: 1000ms
warn: --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
info: Forever processing file: server.js
After that, if I run
heroku run node_modules/forever/bin/forever list
I get:
Running `node_modules/forever/bin/forever list` attached to terminal... up, run.5132
info: No forever processes running
The Heroku logs show:
Starting process with command `node_modules/forever/bin/forever start server.js` by harshitladdha93#gmail.com
2014-07-05T17:24:54.833343+00:00 heroku[run.5098]: State changed from starting to up
2014-07-05T17:24:58.695683+00:00 heroku[run.5098]: State changed from up to complete
2014-07-05T17:24:58.689043+00:00 heroku[run.5098]: Process exited with status 0
My server.js contains:
var async = require('async');
var shell = require('shelljs');

async.parallel([
  async.apply(shell.exec, './collect1.sh'),
  async.apply(shell.exec, './collect2.sh'),
  async.apply(shell.exec, './collect3.sh'),
  async.apply(shell.exec, './collect4.sh'),
  async.apply(shell.exec, './collect5.sh'),
  async.apply(shell.exec, './mi2.sh'),
],
function (err, results) {
  console.log(results);
});
These shell scripts are long-running and contain large delays, yet the logs show the state going straight from up to complete. I don't understand why: on my local system the same setup spawns additional processes (and sleep states) and runs fine.
So does Heroku not allow this, or am I making some mistake here?
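It looks like the problem is how the process is started rather than anything inside server.js: heroku run launches a one-off dyno that lives only as long as the command it runs, and forever start daemonizes and returns immediately, so the dyno reports status 0 and is shut down, taking the daemonized process with it. A common approach (an assumption here, not something verified against this app) is to skip forever, declare a worker dyno in a Procfile, and keep Node in the foreground. A minimal sketch, reusing the script names from the question:
// server.js (sketch) -- meant to run in the foreground from a worker dyno,
// e.g. via a hypothetical Procfile line:  worker: node server.js
var async = require('async');
var shell = require('shelljs');

var scripts = [
  './collect1.sh',
  './collect2.sh',
  './collect3.sh',
  './collect4.sh',
  './collect5.sh',
  './mi2.sh'
];

async.parallel(
  scripts.map(function (script) {
    return function (done) {
      // With a callback, shelljs runs the command asynchronously and
      // reports (exitCode, stdout, stderr) when it finishes.
      shell.exec(script, function (code, stdout) {
        done(code === 0 ? null : new Error(script + ' exited with code ' + code), stdout);
      });
    };
  }),
  function (err, results) {
    if (err) {
      console.error(err);
      process.exit(1);
    }
    console.log(results);
  }
);
With a Procfile like that committed, something along the lines of heroku ps:scale worker=1 (again, an assumption about the setup) would run the process on a long-lived worker dyno instead of a one-off run dyno.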

Related

How to get better error messaging from nightwatch when running tests in parallel

We have a problem when we run our Nightwatch tests in parallel and something is wrong with the setup, for example the Selenium grid is not available: the tests finish very quickly and we get no error messages.
Started child process for: folder1/test1
Started child process for: folder1/test2
Started child process for: folder1/test3
Started child process for: folder1/test4
>> folder1/test1 finished.
>> folder1/test2 finished.
>> folder1/test3 finished.
>> folder1/test4 finished.
But when I run the tests serially, I get a good error message like
Error retrieving a new session from the selenium server
Connection refused! Is selenium server started?
{ status: 13,
value: { message: 'Error forwarding the new session Empty pool of VM for setup Capabilities [{acceptSslCerts=true, name=Test1, browserName=chrome, javascriptEnabled=true, uuid=ab54872b-10ee-43a1-bf65-7676262fa647, platform=ANY}]',
class: 'org.openqa.grid.common.exception.GridException' } }
Why don't I get the good error message when running in parallel mode? Is there something I can change so I get the good error message in parallel mode?
By setting
live_output: true
in your nightwatch config file, you'll see logs while running in parallel.
More information: config-basic
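For reference, a minimal sketch of where that flag sits in a Nightwatch config. Only live_output (and test_workers, which enables parallel runs) matter here; src_folders, the Selenium host/port and the capabilities are placeholders, not values taken from the question.
// nightwatch.conf.js (sketch -- placeholder values except live_output / test_workers)
module.exports = {
  src_folders: ['folder1'],         // assumed from the test names above
  live_output: true,                // stream child-process output during parallel runs
  test_workers: {
    enabled: true,                  // run test files in parallel
    workers: 'auto'
  },
  test_settings: {
    default: {
      selenium_host: 'localhost',   // placeholder grid address
      selenium_port: 4444,
      desiredCapabilities: {
        browserName: 'chrome'
      }
    }
  }
};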

Jekyll Error - "serve" only works once

I started working on an update to my website. It was working fine the other day, but now I get an error.
I generally type "bundle exec jekyll serve --watch" in CMD (I'm on Windows 10) and the server starts. I can edit and save, and it's all reflected in the browser upon refresh.
I can still do this, but only the first change to a file works. After another change I get an error, and I have to terminate the process and run the command again.
Below is the error:
D:\Tristen Grant\Documents\GitHub\portfolio>bundle exec jekyll serve --watch
DL is deprecated, please use Fiddle
Configuration file: D:/Tristen Grant/Documents/GitHub/portfolio/_config.yml
Source: D:/Tristen Grant/Documents/GitHub/portfolio
Destination: D:/Tristen Grant/Documents/GitHub/portfolio/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.595 seconds.
Auto-regeneration: enabled for 'D:/Tristen Grant/Documents/GitHub/portfolio'
Configuration file: D:/Tristen Grant/Documents/GitHub/portfolio/_config.yml
Server address: http://127.0.0.1:3000//
Server running... press ctrl-c to stop.
Regenerating: 1 file(s) changed at 2016-09-06 16:16:28 ...done in 0.521498 seconds.
Regenerating: 1 file(s) changed at 2016-09-06 16:16:30 ...error:
Error: No such file or directory - git rev-parse HEAD
Error: Run jekyll build --trace for more information.
[2016-09-06 16:19:35] ERROR Errno::ENOTSOCK: An operation was attempted on something that is not a socket.
C:/Ruby21-x64/lib/ruby/2.1.0/webrick/server.rb:170:in `select'
Terminate batch job (Y/N)? y
Terminate batch job (Y/N)? y
I'm using Ruby 2.1.5 (64-bit) with RubyDevKit, Sass, and Bourbon.
Any ideas how to fix this? I don't know much about Jekyll or Ruby; I'm just starting out.
I also get this error in CMD. You should have GitHub's desktop app installed; try running the same commands from the Git Shell that comes with that app.
For me that works, and it saves me the trouble of installing Git globally on Windows and setting it up.

Unresponsive socket after x time (puma - ruby)

I'm experiencing an unresponsive socket with my Puma setup after a random amount of time. Up to this point I don't have a clue what's causing the issue, and I was hoping somebody here could help me with some answers or point me in the right direction. I have the following setup:
I'm using the official Docker ruby-2.2.3-slim image together with the latest Puma release, 2.15.3, and I've also installed Nginx as a reverse proxy. I'm already sure Nginx isn't the problem here, because I tried to verify whether the socket was working using this script, and the socket wasn't working; I got a timeout there as well, so I can rule out Nginx.
This is a testing environment, so the server isn't experiencing any extreme load. I've also checked memory consumption: there are still several GB free, so that couldn't be the issue either.
What triggered me to look at the Puma socket was the error message in my Nginx error log:
upstream timed out (110: Connection timed out) while reading response header from upstream
I also couldn't find anything in the Puma logs indicating what is going wrong. Here is my Puma config:
threads 0, 16
app_dir = ENV.fetch('APP_HOME')
environment ENV['RAILS_ENV']
daemonize
bind "unix://#{app_dir}/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log", "#{app_dir}/log/puma.stderr.log", true
pidfile "#{app_dir}/pids/puma.pid"
state_path "#{app_dir}/pids/puma.state"
activate_control_app
on_worker_boot do
  require 'active_record'
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[ENV['RAILS_ENV']])
end
And this is the content of my Puma state file:
---
pid: 43
config: !ruby/object:Puma::Configuration
  cli_options:
  conf:
  options:
    :min_threads: 0
    :max_threads: 16
    :quiet: false
    :debug: false
    :binds:
    - unix:///APP/sockets/puma.sock
    :workers: 1
    :daemon: true
    :mode: :http
    :before_fork: []
    :worker_timeout: 60
    :worker_boot_timeout: 60
    :worker_shutdown_timeout: 30
    :environment: staging
    :redirect_stdout: "/APP/log/puma.stdout.log"
    :redirect_stderr: "/APP/log/puma.stderr.log"
    :redirect_append: true
    :pidfile: "/APP/pids/puma.pid"
    :state: "/APP/pids/puma.state"
    :control_url: unix:///tmp/puma-status-1449260516541-37
    :config_file: config/puma.rb
    :control_url_temp: "/tmp/puma-status-1449260516541-37"
    :control_auth_token: cda8879717be7a645ea323d931b88d4b
    :tag: APP
The application itself is a Rails app on the latest version, 4.2.5, deployed on GCE (Google Container Engine).
If somebody could give me some pointers on how to debug this further, it would be very much appreciated, because right now I don't see any output anywhere that could help me.
EDIT
I replaced the Unix socket with a TCP connection to Puma, with the same result: it still hangs after some time.
I'd start with:
How many requests get processed successfully per instance of puma?
Make sure you log the beginning and end of each request, along with the id of the thread executing it. What do you see?
Not knowing more about your application, I'd say it's likely the threads get stuck doing some long/blocking calls without timeouts or spinning on some computation until the whole thread pool gets depleted.
We'll see.
I finally found out why my application was behaving the way it was.
After trying a TCP connection and switching to Unicorn, I started looking into other possible sources.
That's when I thought my connection to Google Cloud SQL could be the problem. The Cloud SQL FAQ mentions that you have to tweak your Compute Engine instances to ensure they keep your DB connections open. I performed the steps they recommend, and that solved the problem for me; I've added them here just in case:
# Display the current tcp_keepalive_time value.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time
# Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
$ echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
# Apply the change.
$ sudo /sbin/sysctl --load=/etc/sysctl.conf
# Display the tcp_keepalive_time value to verify the change was applied.
$ cat /proc/sys/net/ipv4/tcp_keepalive_time

Informatica error 1417 :: Task not yet registered with this service process

I am getting the following error while running a workflow in Informatica.
Session task instance [worklet.session] : [TM_6775 The master DTM process was unable to connect to the master service process to update the session status with the following message: error message [ERROR: The session run for [Session task instance [worklet.session]] and [ folder id = 206, workflow id = 16042, workflow run id = 65095209, worklet run id = 65095337, task instance id = 13272 ] is not yet registered with this service process.] and error code [1417].]
This error occurs randomly for many different sessions when they are run through the workflow as a whole. However, if I "start task" on the failed task afterwards, it runs successfully.
Any help is much appreciated.
Just an idea to try if you use versioning: check that everything is checked in correctly. If the mapping, workflow or worklet is checked out, then you and Informatica will run different versions, which may cause the behaviour to differ when you start it manually.
Informatica will always use the checked-in version and you will always use the checked-out version.

Celerybeat shuts down immediately after start

I have a Django app that uses celeryd and celerybeat. Both are set up to run as daemons.
The celerybeat tasks won't get executed because celerybeat does not start correctly. According to the logs it shuts down immediately:
[2012-05-04 13:02:49,055: WARNING/MainProcess] celerybeat v2.5.1 is starting.
[2012-05-04 13:02:49,122: INFO/MainProcess] process shutting down
[2012-05-04 13:02:49,122: DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[2012-05-04 13:02:49,134: DEBUG/MainProcess] running the remaining "atexit" finalizers
I'm starting it with /etc/init.d/celerybeat start
This is the /etc/default/celerybeat config:
# Where the Django project is.
CELERYBEAT_CHDIR="/var/www/path_to_app/cms/"
# Python interpreter from environment.
ENV_PYTHON="$CELERYBEAT_CHDIR/bin/python"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="cms.settings"
# Path to celerybeat
CELERYBEAT="$ENV_PYTHON $CELERYBEAT_CHDIR/cms/manage.py celerybeat"
# Extra arguments to celerybeat
CELERYBEAT_LOG_LEVEL="DEBUG"
CELERYBEAT_USER="www-data"
CELERYBEAT_GROUP="www-data"
The task schedule is set in settings.py:
CELERYBEAT_SCHEDULE = {
    # Executes every morning at 7:00 A.M.
    "every-morning": {
        "task": "cms.tasks.get_recent_posts_for_all_pages",
        "schedule": crontab(hour=7, minute=00)
    },
}
When I run celerybeat from the shell with ./manage.py celerybeat it seems to run fine.
There is also a celerybeat section in the celeryd config but I assume that one is ignored.
Regards
Simon
Maybe you're missing a broker like RabbitMQ:
https://web.archive.org/web/20180703074815/http://celery.readthedocs.io/en/latest/getting-started/brokers/rabbitmq.html
