Ruby on Rails 4: executing shell commands when running as a daemon

I deploy my Ruby on Rails app with the built-in options, meaning I start the server with
rails s -e production -p 80 -d
But as soon as I append the daemon flag -d to this command, I can no longer execute shell commands...
I have tried a wide range of ways to execute commands, e.g.:
system(cmd)
%x[ #{cmd} ]
`#{cmd}`
Process.detach(spawn(cmd))
Process.fork do
  p = spawn(cmd)
  Process.detach(p)
end
And I have no idea what else I could do...
I would be very grateful for a hint/solution...
Some information about the running system:
OS: Ubuntu 14.04 LTS
Rails version: 4.0.2
I log onto the machine via SSH and start the Rails server.
I've tested all the commands listed above; they all work without the daemon flag, but not with it...
Thanks in advance.
Greetings Alex

Failure found.
There is no connection between running the server as a daemon and shell commands not working.
I was simply looking in the wrong directories...
Note:
If you run the Rails server as a daemon, the process no longer runs in your user's working environment but relative to the root directory. Next time, just check that all the paths you use are bullet-proof (ideally absolute).
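A minimal sketch of what that means in practice when shelling out from a daemonized Rails app (the command and file names below are made up for illustration):

cmd = "some_tool --input #{Rails.root.join('tmp', 'data.txt')}"  # some_tool is a placeholder

ok = system(cmd,
            chdir: Rails.root.to_s,                        # don't trust the daemon's working directory
            out:   Rails.root.join('log', 'cmd.log').to_s, # capture output somewhere you can find it
            err:   Rails.root.join('log', 'cmd.err').to_s)
Rails.logger.warn("command failed: #{cmd}") unless ok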

Related

On Windows, how do I get the rubygem swipely/docker-api to connect with the local docker daemon (service)?

I have a Ruby script running on
ruby 1.9.3p545 (2014-02-24) [i386-mingw32]
all on a Windows 10 Pro 64-bit box.
The docker.exe client installed with Docker connects and runs properly with
DOCKER_HOST=tcp://localhost:2375
in the same shell that runs Ruby and the script.
The script at present is simply
require 'docker'
Docker.url = 'tcp://localhost:2375'  # I also tried http://localhost:2375; results were the same
Docker.options = {}
vers = Docker.version # this hangs for a very long timeout
Docker.version hangs and eventually times out due to a failure to connect to the daemon. I am stuck writing the script unless I can get it to connect to the local docker daemon.
Apparently the gem (or Ruby, or Excon, or whatever) does not resolve "localhost".
If I use this for Docker.url:
Docker.url = 'tcp://127.0.0.1:2375'
it works.
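Putting it together, the working version of the script above looks roughly like this (same calls as before, only the loopback address changes):

require 'docker'

Docker.url = 'tcp://127.0.0.1:2375'  # numeric loopback address instead of "localhost"
Docker.options = {}

puts Docker.version  # now returns the daemon's version info instead of hanging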

How to find the right version of Ruby

I'm currently working on a build pipeline that uses Jenkins and GitLab to trigger builds for the project. Basically, the build is triggered when someone pushes to the repository. Also, some Ruby scripts are executed as part of the build process. These scripts run some checks on the project and perform some fixes, like synchronizing an Xcode project with files added to and deleted from the source directory when the two have drifted apart.
I'm using several tools to configure the pipeline. The builds run on a physical machine that acts as the build slave, while Jenkins itself is deployed to an AWS machine. For this reason, I used pritunl to connect the two on a virtual network. I can use local IPs to communicate between the machines, and SSH works fine both ways.
When I push to the remote the build starts correctly on the slave, but it fails to complete. However, if I manually access using SSH through the terminal, the build performs fine. This is the output I get from Jenkins:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- xcodeproj (LoadError)
from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /Users/jenkins/workspace/Core/platform/ios/scripts/pbxsync.rb:58:in `<main>'
As you can see, it fails to require Xcodeproj, causing the build to fail. Still, this only happens if the build is triggered by Jenkins, not manually.
This makes me think that Jenkins is using some different installation of Ruby, or at least a different environment. Basically what I need is to install gems for the same Ruby environment that Jenkins is using, but I don't know which one that is. Any ideas?
Jenkins has a console that runs Groovy scripts on the remote slave. I've been playing with it a bit, but not many conclusions so far. Maybe that helps.
This may be important; this is the shebang I'm using for the Ruby scripts: #!/usr/bin/env ruby
In the terminal I access the slave machine with the same user Jenkins uses; it's called "jenkins".
One thing I forgot to mention is that the output is telling me the right version: /Users/jenkins/.rvm/rubies/ruby-2.4.0. At least that's the path it's indicating it's trying to load the gem from. So I tried the following:
/Users/jenkins/.rvm/rubies/ruby-2.4.0/bin/ruby
require 'xcodeproj'
Then I press Ctrl+D and get no output, so that installation of Ruby is finding the gem properly.
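One quick way to see which Ruby environment a Jenkins build step actually gets is to run a tiny diagnostic script both from a normal SSH login and from the job itself and compare the output. A minimal sketch (the file name diag.rb is just an example):

# diag.rb - print which interpreter and gem paths are in effect
require 'rbconfig'

puts RbConfig.ruby     # full path of the interpreter actually running
puts Gem.dir           # default gem installation directory
puts Gem.path.inspect  # every directory searched for gems
puts ENV['PATH']
puts ENV['GEM_HOME']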
If you are using the Jenkins Slave plugin to communicate between the Jenkins master and the Jenkins slave, every command you specify is run in a non-interactive shell. That means that, in your case, Jenkins only has access to the system Ruby.
So if you want to install something, you have to install it into the system Ruby. You are using RVM, so run rvm use system and then you can install the gem into the system Ruby.
If you want to use a Ruby version other than the system Ruby, you need to add RVM to $PATH for the non-interactive shell. Here is the basic setup that should help: https://rvm.io/rvm/basics
I finally managed it. As @Cosaquee indicated in another response, it's important to distinguish between interactive and non-interactive shells, because it makes a difference how you invoke SSH. As the man page indicates:
If command is specified, it is executed on the remote host instead of a login shell.
This is meaningful, because the Launch Command I have set for the Jenkins node is this one:
ssh jenkins@x.x.x.x java -jar ~/bin/slave.jar
Meanwhile, I was logging in with a standard ssh jenkins@x.x.x.x from the terminal, which starts a login shell. It makes sense that I was getting different results, because the two shells load different startup scripts. Basically, if you use ssh jenkins@x.x.x.x to log into the machine, ~/.bash_profile is loaded, while if you specify a command, such as ssh jenkins@x.x.x.x whatever, then ~/.bashrc is loaded instead. So I added this line to ~/.bashrc:
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
Without it I got:
RVM is not a function, selecting rubies with 'rvm use ...' will not work.
The advantage was that I could now use RVM from the same environment Jenkins was using. The rest is easy:
ssh jenkins@x.x.x.x rvm --default use 2.3
And:
ssh jenkins@x.x.x.x
rvm --default use 2.3
And both are now using the same version of ruby.

Vagrant shell provisioner hangs instead of exiting

My shell provisioner is a small bash script that apt-gets a few things, installs a few Perl modules through cpan, sets up Apache and MySQL, echoes some text, and exits.
Except that after printing its final message it seems not to exit, but hangs forever.
Am I forgetting to do something? How can I begin to debug this?
If I use the VirtualBox manager to close the VM, I get a stack trace whose head reads,
/Applications/Vagrant/embedded/gems/gems/net-ssh-2.6.7/lib/net/ssh/ruby_compat.rb:30:in `select': closed stream (IOError)
Host OS: OS X Snow Leopard
Guest OS: Ubuntu via precise32
TIA
This is really a comment but I don't have enough reputation to post it as a comment.
I would suggest two techniques to debug this problem.
1) Enable debugging in Vagrant like so:
VAGRANT_LOG=info vagrant up
2) Add set -x at the top of your shell script so each line is echoed as it runs, alongside the output it produces. This should allow you to see which line of your shell script is hanging.
Updating your question with the Vagrantfile will also help us guide you in the right direction.
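For reference, the shell-provisioner wiring in question usually looks something like the following Vagrantfile sketch (the box name and script path are assumptions, not taken from the question):

Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  # runs the provisioning script on `vagrant up`
  config.vm.provision :shell, path: "bootstrap.sh"
end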
This issue should be resolved in Vagrant 1.2.4 or newer, which includes a fix that closes the SSH channel when the shell provisioner exits.

System commands don't work when running under Passenger

I have a Sinatra app with a page that shows some information about the application, some of which is generated by running commands on page load. Everything works fine on my MacBook when running under Unicorn, and everything works fine on the production server when running under Unicorn, but switch to Apache/Passenger and suddenly the commands start returning nil.
For example to get a list of committers I use:
comitters = `cd /path/to/app && git shortlog -s -n`
This works perfectly until it is run in the Apache/Passenger setup.
Is there some option within passenger that disables system commands?
The problem lies in your $PATH environment variable, which the system uses to look for commands. You run Unicorn from the shell, don't you? So Unicorn inherits $PATH from your shell. But app processes started by Phusion Passenger are spawned by Apache/Nginx, which in turn is usually started by some system init service and therefore has completely different environment variables than your shell. Read http://blog.phusion.nl/2008/12/16/passing-environment-variables-to-ruby-from-phusion-passenger/ for a solution.
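Two common workarounds, shown here as a rough sketch (the paths are assumptions and need to match your server):

# Option 1: call the binary by absolute path so $PATH doesn't matter
committers = `cd /path/to/app && /usr/bin/git shortlog -s -n`

# Option 2: give the Passenger-spawned process a sane $PATH,
# e.g. near the top of config.ru
ENV['PATH'] = "/usr/local/bin:/usr/bin:/bin:#{ENV['PATH']}"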

I would like to find something like gkrellm for the Mac

I have a Linux dev server I watch, and lately it's chugging at some points, so I'd like to keep a better eye on it. I used to use GKrellM, but it's been a pain trying to get GKrellM to build on my Mac.
Besides serving X remotely (which would not be optimal), I guess I'm looking for alternatives to GKrellM.
I would like a program that will let me watch the I/O, CPU, memory, processes, etc. of a remote server running Linux. I am on a Mac.
If you're looking for something simple, and almost certainly already installed on the Linux box, you could SSH into the Linux machine and use tools like top, vmstat, and lsof to see what it's up to.
If you still want to test GKrellM on the Mac, you can follow this procedure:
# sudo port install gkrellm
If you have this error :
Error: Target org.macports.activate returned: Registry error: xorg-xproto 7.0.16_0 not registered as installed.
[...]
Error: Status 1 encountered during processing.
Do this
# sudo port clean xorg-xproto
# sudo port install xorg-xproto
And continue the install:
# sudo port install gkrellm
Now if you have this error :
Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_gnome_gtk-doc/work/gtk-doc-1.11" && ./configure --prefix=/opt/local --with-xml-catalog=/opt/local/etc/xml/catalog " returned error 1
[...]
Error: Status 1 encountered during processing.
Do this
# sudo port clean gtk-doc
# sudo port install gtk-doc
And last:
# sudo port install gkrellm
To start gkrellm:
# gkrellm
You could use Growl for this purpose. It's possible to send Growl messages from a Unix machine by using netgrowl.py, which masquerades as the growlnotify program but is written entirely in Python.
You could then have a process running on the server that monitors the other bits, and posts notifications when limits are exceeded, or whatever.
It would be a hand-coded solution, but we are on Stack Overflow, so programming-related stuff is the go :)
(Oh, and the netgrowl.py page has a few links to similar projects in other languages, if that's your thing, too).
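As a very rough sketch of that idea in Ruby (the load threshold is arbitrary and notify_mac is a placeholder for whatever network-Growl sender you end up using, e.g. a wrapper around netgrowl.py):

# Runs on the Linux server; checks the 1-minute load average once a minute.
loop do
  load1 = File.read("/proc/loadavg").split.first.to_f
  if load1 > 4.0
    # Placeholder: swap in your actual notification command or script.
    system("notify_mac", "load average is #{load1}")
  end
  sleep 60
end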
You are probably looking for a more robust monitoring tool like Zabbix: https://zabbix.org
