I'm running a Rails 4 app with pry-remote and calling binding.pry_remote. stdout prints [pry-remote] Waiting for client on druby://127.0.0.1:9876, but when I then run pry-remote, the command just hangs.
Is there anything I can investigate to troubleshoot?
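A couple of things worth checking from the shell. This is a minimal sketch; the -s/-p flags are the standard pry-remote client options, but confirm them against pry-remote --help for your version:

# Is the DRb server actually listening where you expect?
lsof -nP -iTCP:9876 -sTCP:LISTEN

# Connect with the host and port spelled out instead of relying on defaults
pry-remote -s 127.0.0.1 -p 9876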
I am running a Flask/Python app for a website on an Ubuntu server, from my terminal, using:
gunicorn --workers 4 --bind :5000 app:app
It shows me the log, but my session eventually times out, even though the website is still up and running. Is there a way I can go back to viewing the log?
So far, the only way I can get back to the log is to run sudo reboot on the Ubuntu server, reconnect over ssh, and run the gunicorn command again.
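One way to avoid losing the log in the first place (a sketch, assuming you can restart gunicorn; the /var/log/app paths are placeholders to adjust) is to send it to files and tail them from any later session:

# Write logs to files and detach from the terminal; all flags are standard gunicorn options
gunicorn --workers 4 --bind :5000 --daemon \
    --access-logfile /var/log/app/access.log \
    --error-logfile /var/log/app/error.log app:app

# Re-attach to the live log from any later ssh session
tail -f /var/log/app/error.log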
I have an issue installing Kibana/Elasticsearch/Logstash.
I followed the download and install instructions for each. From an admin CMD prompt, I ran elasticsearch.bat. The script ran but just hung and never completed; however, testing localhost:9200 showed that Elasticsearch was installed correctly. I did the same with Kibana and had the same result: the script never completed, but I was able to open the web app on port 5601. I also installed Logstash, and again the script never completed. When I manually exited each script by closing the cmd window, I lost the connection to localhost: "This site can't be reached."
Has anyone had this issue or know what might be happening here?
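For what it's worth, those .bat files run the services in the foreground, so "never completing" is expected, and closing the window kills the service. A sketch of keeping Elasticsearch alive independently of the window, using the service wrapper that ships in the Windows zip distribution (check that elasticsearch-service.bat exists in your bin directory; Kibana and Logstash would need a separate service manager such as NSSM):

REM Install and start Elasticsearch as a Windows service
bin\elasticsearch-service.bat install
bin\elasticsearch-service.bat start

REM Verify it is still reachable after closing the original window
curl http://localhost:9200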
I am trying to run a Docker image on Amazon ECS. I am using a command that starts a shell script to boot up the program:
CMD ["sh","-c", "app/bin/app start; bash"]
in order to start it, because for some reason when I run the app (an Elixir/Phoenix app) in the background it crashes immediately, but if I run it in the foreground it is fine. If I run it this way locally, everything works fine, but when I try to run it in my cluster, it shuts down. Please help!
Docker keeps track of your running foreground process; if that process stops, the container stops. The reason your container works when the command ends with "bash" is that bash doesn't stop.
I guess you use a shell script to start an application that serves in the background, like nginx or a daemon. So try to find an option that makes the app run in the foreground; that will keep your container alive. E.g. nginx has a "daemon off" option for starting.
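For an Elixir release, the equivalent of nginx's "daemon off" is to run the release in the foreground rather than as a daemon. A sketch, assuming app/bin/app is a Distillery-style release script that supports a foreground command (check app/bin/app help on your build):

# Run the release in the foreground so Docker can track it as the main process
CMD ["app/bin/app", "foreground"]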
for some reason when I run the app (elixir/phoenix app) in the background it was crashing immediately
So you have a broken application and you are looking for a kludge to make it look like it somewhat works. This is not a reliable approach at all.
Instead you should:
make it work in the background
use systemctl or upstart to manage restarts of the Erlang VM on crashes (see the sketch after this list)
Please note that it matters where you compile your application. It must be exactly the same architecture/container as the production one, with the same Erlang, Elixir, and OS versions; otherwise nobody guarantees it will be robust or even work.
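For the systemctl route, a minimal sketch of a unit file; the service name, paths, and the foreground command are placeholders for a Distillery-style release:

# /etc/systemd/system/myapp.service (hypothetical name and paths)
[Unit]
Description=My Phoenix app

[Service]
ExecStart=/opt/app/bin/app foreground
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now myapp.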
I have a scrapy scraper running on Heroku. I want to restart the scraper as soon as it finishes. What is the best way to achieve this?
The obvious solution would be to kill or restart that dyno; Heroku would then rerun the scraper via the Procfile. However, it's not clear what the best way to do this is.
Related: does Heroku automatically detect that your process has stopped running and shut down the dyno for you, or does the dyno just sit there doing nothing?
Related #2: could you call a Python script/program from within your scraper just before it exits? That script could wait 5 seconds and then execute
scrapy runspider myspider
Would that work? Presumably trying to execute
scrapy runspider myspider
from within the spider itself would cause the universe to implode or something (a Twisted error, probably)?
The following code repeatedly runs a spider using subprocess, re-running it whenever it finishes. Assume this code is in a file called myScript.py:

import subprocess

# Re-run the spider as soon as the previous run exits
while True:
    subprocess.call(["scrapy", "runspider", "myspider.py"])
The Procfile should reference that script instead of the actual scraper, as follows:
myHerokuScrapingProcess: python myScript.py
I still don't know how to restart the actual dyno though.
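If restarting the dyno itself turns out to be acceptable, the Heroku CLI can do it from outside the app (there is also a Platform API equivalent); a sketch, with the app name as a placeholder:

# Restart a single dyno by name, or every dyno for the app
heroku ps:restart myHerokuScrapingProcess.1 --app my-app
heroku ps:restart --app my-app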
I am trying to debug a Meteor application on the server side.
I set an environment variable: export NODE_OPTIONS='--debug'.
I run the meteor (version 0.7.0.1) command. It tells me the debugger is listening on port 5858.
I start node-inspector (version v0.7.0-2) and point the browser to 127.0.0.1:8080/debug?port=5858, but I can see only a couple of strings, Source and Console, and a prompt > where I cannot write anything.
I get this error in the console:
"The connection to ws://127.0.0.1:8080/socket.io/1/websocket/Za… was interrupted while the page was loading".
The same happens if I use 0.0.0.0:8080: I can see a bit more of the debugger in the right panel, such as Watch Expressions and Call Stack, but the Source list is still empty.
Node-inspector should be listening, because if I stop it, meteor says that the remote debugging has been terminated. I cannot figure out what I am doing wrong.
Have a look at https://groups.google.com/forum/#!topic/meteor-talk/EG8pe7pF3f8 (the relevant reply is quoted below):
Just want to share some of my experience using node-inspector to debug server-side code:
1. When you run Meteor, it will spawn two processes on a Linux machine (note: I have not checked this on Windows or Mac):
process1: /usr/lib/meteor/bin/node /usr/lib/meteor/app/meteor/meteor.js
process2: /usr/lib/meteor/bin/node /home/paul/codes/bbtest_code/bbtest02/.meteor/local/build/main.js --keepalive
2. You need to send kill -s USR1 to process2.
3. Run node-inspector and you can see your server code.
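In concrete commands, steps 2 and 3 look roughly like this (a sketch; the pgrep pattern is an assumption, so match whatever your process2 command line actually shows):

# Find process2 (the built app, not the meteor wrapper)
pgrep -f 'local/build/main.js'

# Tell its V8 to start the debugger, then attach node-inspector
kill -s USR1 <pid-from-above>
node-inspector &
# then browse to http://127.0.0.1:8080/debug?port=5858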
On my first try, I modified the last line of the meteor startup script in /usr/lib/meteor/bin/meteor to
exec "$DEV_BUNDLE/bin/node" $NODE_DEBUG "$METEOR" "$@"
and ran NODE_DEBUG=--debug meteor at the command prompt. This only put the --debug flag on process1, so I only saw meteor files in node-inspector and could not find my code.
Any suggestion on how to modify the script so we can use the "--debug" flag on the meteor script?
Cheers,
Paul