I want to run a Faye server on system startup.
I'm trying with this in /etc/init/faye.conf:
description "faye"
author "email#gmail.com"
description "Chat Server for In-house chat rooms"
start on filesystem or runlevel [2345]
stop on runlevel [!2345]
cd /var/App
exec /usr/local/rvm/gems/ruby-2.0.0-p451/bin/rackup /var/App/faye.ru -s thin -E production
but it is not working.
When I execute the command
sh faye.conf
it works fine,
and it even works via the irb console:
`/usr/local/rvm/gems/ruby-2.0.0-p451/bin/rackup /var/App/faye.ru -s thin -E production -D`
Any idea where the problem is, and why the init script doesn't work on its own?
After some investigation I found this error:
/usr/bin/env: ruby_executable_hooks: No such file or directory
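This error usually means the RVM environment (PATH, GEM_HOME and friends) is not loaded when upstart runs the exec line, so the ruby_executable_hooks shim can't be resolved. A minimal sketch of one way the config could be rewritten, assuming RVM lives under /usr/local/rvm as the paths above suggest; the environments file name is inferred from the gem path and is an assumption, not a verified fix:

description "Chat Server for in-house chat rooms"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

script
  # assumption: RVM generates this environment file for the ruby in use;
  # sourcing it puts the gem bin directory (and ruby_executable_hooks) on PATH
  . /usr/local/rvm/environments/ruby-2.0.0-p451
  cd /var/App
  exec rackup /var/App/faye.ru -s thin -E production
end script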
I've recently updated Git Bash to 2.35.1. I do Minecraft plugin/server development, so I often use Git Bash to start and manage test servers on Windows. Unfortunately I don't know which version I was on before.
Now when I start a server using java -Xms4G -Xmx4G -jar server.jar it boots, but I can't type anything into the console; I am completely locked out. Using CTRL+C will shut down the server, but it says "do you want to terminate the batch job" and hangs, so I have to force-quit the java process.
Before, I could type game console commands in without issue.
I've tried running Git Bash as administrator, with no change. PowerShell doesn't work at all; it won't even show the console output after starting the jar.
When I stop the server from in-game, I get this error after it fully stops:
$ stop
bash: stop: command not found
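Not a confirmed fix for this setup, but a common workaround when interactive console programs lose stdin under Git Bash's mintty terminal is to launch them through winpty, which ships with Git for Windows; whether that is what changed between versions here is an assumption:

winpty java -Xms4G -Xmx4G -jar server.jar

If the console accepts input again when started this way, the problem is the terminal/pty layer rather than the server itself.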
I have a Docker container that handles an application. I am attempting to write tests for my system using npx and NightwatchJS.
I use CI, and to run the tests for my entire suite I run docker-compose build, then run commands from outside the container like so:
Example of backend python test being called (this works and is run as expected):
docker-compose run --rm web sh -c "pytest apps/login/tests -s"
Now I am trying to run an npx command to do some front-end testing but I am getting errors with something I cannot seem to diagnose:
Error while running .navigateTo() protocol action: An unknown server-side error occurred while processing the command. – unknown error: net::ERR_CONNECTION_REFUSED
Here is that command:
docker-compose run --rm web sh -c "npx nightwatch apps/login/tests/nightwatch/login_test.js"
The odd part of this is that if I go into bash:
docker-compose exec web bash
And then run:
npx nightwatch apps/login/tests/nightwatch/login_test.js
I don't get that error as I'm in bash.
This leads me to believe that I have an error somewhere in the command. Can somebody please help with this?
Think of containers as separate computers.
When you run pytest apps/login/tests -s on your computer and I run npx nightwatch apps/login/tests/nightwatch/login_test.js on my computer, my computer will obviously not connect to yours; I will get a "connection refused" kind of error.
With docker run you start a separate new "computer" that runs that command - it has its own pid space, its own network address, etc. Then, inside "that computer", you can execute another command with docker exec. To have your commands connect over localhost, you have to run them on the same "computer".
So when you run docker run with the client, it does not connect to a separate docker run. Either specify the correct IP address or run both commands inside the same container.
I suggest researching how Docker works; the above is a very crude oversimplification.
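A minimal sketch of the "same container" option, assuming the application is defined as the web service in docker-compose.yml (as the commands above suggest) and listens inside that container:

# start the service in the background, then run the tests in the *same* container
docker-compose up -d web
docker-compose exec web sh -c "npx nightwatch apps/login/tests/nightwatch/login_test.js"

docker-compose run creates a brand-new container for each invocation, so localhost inside it does not point at the container where the app is running; docker-compose exec reuses the already-running one, which is why the manual bash session above worked.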
I am running the latest version of macOS Sierra and I installed PostgreSQL via brew. Then I ran the command:
pg_ctl -D /Users/tmo/PSQL-data -l logfile start
but received for output:
waiting for server to start..../bin/sh: logfile: Permission denied
stopped waiting
pg_ctl: could not start server
Examine the log output.
EDIT: After restarting my operating system and rerunning the command... I'm now receiving a slightly modified output... but the modification is significant.
waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
Where is the "log output" stored?
How do I make this command work?
The problem could be one of two things, as far as I can see:
A typo in your database path:
/Users/tmo/PSQL-data --> /Users/tmp/PSQL-data
If the above was just a transcription error, I would guess that your postgres user doesn't have write access to the directory where you are setting the logfile. The argument following the -l switch tells PG where to save the logfile. When you don't provide the -l switch with a path, but just a filename, it will use the same dir you use to specify the database cluster (with the -D flag). So in this case, PG is trying to write to /Users/tmp/PSQL-data/logfile, and getting a permission error.
To fix this, I would try:
If the directory /Users/tmp/PSQL-data/ doesn't exist:
sudo mkdir /Users/tmp/PSQL-data
Then create the logfile manually:
sudo touch /Users/tmp/PSQL-data/logfile
Then make the postgres user own the file (I'm assuming the user is postgres here):
sudo chown postgres /Users/tmp/PSQL-data/logfile
Try again, and hopefully you can launch the server.
Caveat: I'm not a macOS user, so I'm not sure how the /tmp folder behaves. If it is periodically cleared, you may want to specify a different logfile location, so that you don't need to create and chown the file each time you need to launch the cluster.
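For example, pointing the logfile somewhere stable; the exact path here is an assumption, and any directory the server's user can write to will do:

pg_ctl -D /Users/tmo/PSQL-data -l /Users/tmo/PSQL-data/server.log start

With an absolute -l path inside a directory you own, there is nothing to recreate or chown between restarts.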
I have a Spring Boot application which runs on the embedded Tomcat servlet container via mvn spring-boot:run, and I don't want to deploy the project as a separate war to a standalone Tomcat.
Whenever I push code to BitBucket/GitHub, a hook runs and triggers a Jenkins job (running on Amazon EC2) to deploy the application.
The Jenkins job has a post-build action, mvn spring-boot:run; the problem is that the job hangs once the post-build action runs and never finishes.
There should be another way to do this. Any help would be appreciated.
The problem is that Jenkins doesn't handle spawning child processes from builds very well. The workaround suggested by #Steve in the comments (nohuping) didn't change the behaviour in my case, but a simple workaround was to schedule the app's start using the at Unix command:
> echo "mvn spring-boot:run" | at now + 1 minutes
This way Jenkins successfully completes the job without timing out.
If you end up running your application from a .jar file via java -jar app.jar, be aware that Spring Boot breaks if the .jar file is overwritten; you'll need to make sure the application is stopped before copying the artifact. If you're using ApplicationPidListener you can verify that the application is running (and stop it if it is) by adding execution of this command:
> test -f application.pid && xargs kill < application.pid || echo 'App was not running, nothing to stop'
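Putting the pieces of this answer together, a deployment step might look roughly like this; the artifact paths are assumptions, and the at trick is the one described above:

# stop the running app if its pid file exists
test -f application.pid && xargs kill < application.pid || echo 'App was not running, nothing to stop'
# copy the new artifact only after the old process is gone
cp target/app.jar /opt/myapp/app.jar
# start it again, detached from the Jenkins build so the job can finish
echo "java -jar /opt/myapp/app.jar" | at now + 1 minutes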
I find it very useful to first copy the artifacts to a dedicated area on the server, to keep track of the deployed artifacts and avoid starting the app from the Jenkins job folder. Then create a server log file there and listen to it in the Jenkins window until the server has started.
To do that I developed a small shell script that you can find here
You will also find a small article explaining how to configure the project on jenkins.
Please let me know if it worked for you. Thanks.
The nohup and the at now + 1 minutes approaches didn't work for me.
Since Jenkins was killing the process spawned in the background, I made sure the process would not be killed by setting a fake BUILD_ID for that Jenkins task. This is what the Jenkins "Execute shell" step looks like:
BUILD_ID=do_not_kill_me
java -jar -Dserver.port=8053 /root/Deployments/my_application.war &
exit
As discussed here.
I assume you have a jenkins user on the server and that this user is the owner of the Jenkins service:
Log in to the server as root.
Run sudo visudo.
Add "jenkins ALL=(ALL) NOPASSWD:ALL" at the end (jenkins = your Jenkins user).
Sign in to Jenkins, choose your job and click Configure.
Choose "Execute shell" in the "Post-build step".
Copy and paste this:
service=myapp
# check whether the service is already running
if ps ax | grep -v grep | grep -v $0 | grep $service > /dev/null
then
    # stop the running instance, re-link the new jar and restart
    sudo service $service stop
    sudo unlink /etc/init.d/$service
    sudo chmod +x /path/to/your/myapp.jar
    sudo ln -s /path/to/your/myapp.jar /etc/init.d/$service
    sudo service $service start
else
    # first deployment: link the jar as an init script and start it
    sudo chmod +x /path/to/your/myapp.jar
    sudo ln -s /path/to/your/myapp.jar /etc/init.d/$service
    sudo service $service start
fi
Save and run your job; the service should start automatically.
This worked for me with Jenkins on a Linux machine:
kill -9 $(lsof -t -i:8080) || echo "Process was not running."
mvn clean compile
echo "mvn spring-boot:run" | at now + 1 minutes
If there is no process on port 8080, it will print the message and continue.
Make sure that at is installed on your Linux machine. You can use
sudo apt-get install at
to install it.
I'm trying to debug a Sinatra app using RubyMine. I am using rackup to run the app on localhost and unicorn to run it on a remote host. My Ruby version is 1.9.3.
I should also note that the "run debug mode icon" is grayed out. I don't know what is missing from the configuration.
What gems do I need? What else do I need to do?
Update:
I have run the server process on localhost using rackup -p 9000. To start debugging I ran rdebug-ide --port 1234 -- rackup and got this message:
Fast Debugger (ruby-debug-ide 0.4.17.beta16, ruby-debug-base 0.10.5.rc1) listens on 127.0.0.1:1234
I still don't understand how to debug using RubyMine. I have opened http://0.0.0.0:1234 in the browser and I don't get any response (it keeps loading).
I run the remote host using unicorn like so:
unicorn -c etc/fin_srv_unicorn.conf -E staging
How should I set up remote debugging? I have also tried the Rack and Ruby remote configurations.
I tried connecting to the remote host, running the service (using the command listed above), and then running rdebug-ide like so:
rdebug-ide --port 1911 -- $SCRIPT$
where for $SCRIPT$ I have tried app/main.rb staging, unicorn -E staging, and unicorn -c etc/fin_srv_unicorn.conf -E staging.
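For reference, a sketch of how the remote side of this kind of setup is usually started; the port numbers, the use of the full rackup path, and the RubyMine configuration details are assumptions, not something confirmed in this thread:

# on the remote host, from the application directory
rdebug-ide --host 0.0.0.0 --port 1234 --dispatcher-port 26162 -- $(which rackup) -p 9000

# then create a "Ruby remote debug" run configuration in RubyMine pointing at the
# remote host and port 1234, with matching local and remote project roots;
# port 1234 speaks the debugger protocol, so opening it in a browser will just hang -
# browse to the app's own port (9000 here) to trigger breakpoints instead.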