Executing 'tail -f' with Capistrano 3 pipes nothing into output

I have a Capistrano task that looks like this:
desc "tail log file"
task :tail do
on roles(:app) do
execute "tail -f #{shared_path}/log/#{fetch(:log_file)}.log"
end
end
When I run the task, it starts the blocking tail -f request but shows no output. I am one hundred percent sure the data simply isn't being piped through somehow (I've verified that the log file does get updated on the remote), which is why nothing appears. Did I miss something?
The app role is included in the stage config.

Mmmmm..., check the permissions in the file system. The user that runs the task needs permission to read the file.
You can try chmod o+r logfile.log
This grants read permission on the file to anybody (useful for debugging purposes).

I have found a workaround/solution to my problem. I don't remember where I found it, but executing the command raw, by opening an ssh connection with forced pseudo-tty allocation (that's the -t), got it working. That gets blocking commands like tail -f working. As the man page says about the -t option:
This can be used to execute arbitrary screen-based programs on a remote machine,
which can be very useful...
def execute_interactively(command)
  user = fetch(:user)
  port = fetch(:port)
  cmd = "ssh -l #{user} #{host} -p #{port} -t 'cd #{deploy_to}/current && #{command}'"
  exec cmd
end

You'll need to set the Capistrano verbosity level to DEBUG to see any streaming output. I found the solution in the capistrano3-taillog gem; see taillog.cap.
desc "tail log file"
task :tail do
on roles(:app) do
with_verbosity Logger::DEBUG do
execute "tail -f #{shared_path}/log/#{fetch(:log_file)}.log"
end
end
end
def with_verbosity(output_verbosity)
old_verbosity = SSHKit.config.output_verbosity
begin
SSHKit.config.output_verbosity = output_verbosity
yield
ensure
SSHKit.config.output_verbosity = old_verbosity
end
end

Related

Strange behavior with ruby_block resource in Chef

I have two ruby blocks at the end of a recipe:
ruby_block 'set permissions for app dir' do
  block do
    require 'fileutils'
    FileUtils.chown_R 'user01', 'user01', '/mnt/app/'
  end
  action :run
end

ruby_block 'configure node app session' do
  block do
    cmd = "sudo su - user01 -c \"/mnt/app/http-app-/bin/app create /mnt/app/http-app/#{node['hostname']}\" && sudo su -c 'systemctl enable app' && sudo su -c 'systemctl start app'"
    exec(cmd)
  end
  action :run
  not_if "stat -c %U /mnt/app/#{node['hostname']} |grep app"
end
A couple of strange things are happening. One, I cannot add any code after the last block; if I add any, it will not run. Two, when the cookbook runs, the recipe never finishes and reports whether the run failed or succeeded. Bootstrapping the system a second time does finish successfully, but ssh'ing to the box and running chef-client comes back with an empty run list.
Can anyone explain this behavior? How can I fix it?
exec() is not what you think. That's a Ruby core method which calls the actual exec() syscall, which replaces the current process with something new. What you want is our shell_out!() helper, which runs a subcommand and returns an object with the results.
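A minimal sketch of the second ruby_block rewritten around shell_out!, assuming the helper is available in the block's scope (it is mixed into recipe context in modern Chef) and reusing the command from the question:

ruby_block 'configure node app session' do
  block do
    # shell_out! raises if the command exits non-zero and returns a result object
    result = shell_out!("/mnt/app/http-app-/bin/app create /mnt/app/http-app/#{node['hostname']}",
                        user: 'user01')
    Chef::Log.info(result.stdout)
    shell_out!('systemctl enable app')
    shell_out!('systemctl start app')
  end
  action :run
  not_if "stat -c %U /mnt/app/#{node['hostname']} | grep app"
end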

Chef run sh script

I have a problem trying to run a shell script via Chef (with docker-provisioning).
This is how I try to execute my script:
bash 'shell_try' do
  user "root"
  run = "#{some_path_to_script}/my_script.sh some_params"
  code " #{run} > stdout.txt 2> stderr.txt"
end
(note that this script should run another scripts, processes and write logs)
There are no errors in the output, but when I log into the machine and run ps aux, the process isn't running.
I guess something is wrong with permissions (or env variables), because when I try the same command manually, it works.
A bash resource just runs the provided script text directly, if you wanted to run a long-running process generally you would set up an Upstart or systemd service and use the service resource to start it.
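For illustration, a hedged sketch of the service approach the answer describes, using Chef's systemd_unit resource to drop a unit file and the service resource to start it. The unit name, description, and paths are placeholders carried over from the question, not a definitive setup:

# drop a unit file for the long-running process
systemd_unit 'my_script.service' do
  content <<-UNIT
[Unit]
Description=Long-running process launched by my_script.sh

[Service]
ExecStart=/path/to/my_script.sh some_params
Restart=on-failure

[Install]
WantedBy=multi-user.target
  UNIT
  action :create
end

# then manage it with the service resource, as the answer suggests
service 'my_script' do
  action [:enable, :start]
end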
Finally found a solution (thanks to @coderanger):
Install supervisor:
Download the supervisor cookbook
Add:
include_recipe 'supervisor::default'
Add my service to supervisor:
supervisor_service "name" do
  action :enable
  # action :start
  command '/path/script.sh start'
end
Run the supervisor service.
All done!
Please see the Chef documentation for your resource: https://docs.chef.io/resource_bash.html. The bash resource does not support a run attribute. The text of the code attribute is run as a bash script. The default action is to run the script unless told otherwise by the resource.
bash 'shell_try' do
  user "root"
  code " #{run} > stdout.txt 2> stderr.txt"
  action :run
end
The code attribute is written to a temporary file where it is then run using the attributes specified in the resource.
The line run = "#{some_path_to_script}/my_script.sh some_params" at this point does nothing.
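For completeness, a variant that defines the command before the code string is built might look like this; some_path_to_script and the redirect targets are placeholders carried over from the question:

run = "#{some_path_to_script}/my_script.sh some_params"

bash 'shell_try' do
  user 'root'
  code "#{run} > /tmp/stdout.txt 2> /tmp/stderr.txt"
  action :run
end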

Chef: Read variable from file and use it in one converge

I have the following code, which downloads a file and then reads its contents into a variable. Using that variable, it executes a command. This recipe will not converge because /root/foo does not exist during the compile phase.
I can work around the issue with multiple converges and an
if File.exist?
guard, but I would like to do it in one converge. Any ideas on how to do it?
execute 'download_joiner' do
  command "aws s3 cp s3://bucket/foo /root/foo"
  not_if { ::File.exist?('/root/foo') }
end

password = ::File.read('/root/foo').chomp

execute 'join_domain' do
  command "net ads join -U joiner%#{password}"
end
The correct solution is to use a lazy property:
execute 'download_joiner' do
  command "aws s3 cp s3://bucket/foo /root/foo"
  creates '/root/foo'
  sensitive true
end

execute 'join_domain' do
  command lazy {
    password = IO.read('/root/foo').strip
    "net ads join -U joiner%#{password}"
  }
  sensitive true
end
That delays the file read until after the file has been written. I also included the sensitive property so the password is not displayed in the log output.
You can download the file at compile time by using run_action and wrapping the second part in a conditional block; the inner execute resource is then created only once the file exists, and runs at converge time.
execute 'download_joiner' do
  command "aws s3 cp s3://bucket/foo /root/foo"
  not_if { ::File.exist?('/root/foo') }
end.run_action(:run)

if File.exist?('/root/foo')
  password = ::File.read('/root/foo').chomp
  execute 'join_domain' do
    command "net ads join -U joiner%#{password}"
  end
end
You should read the file from the second command, so that reading happens during convergence too:
execute 'download_joiner' do
  command "aws s3 cp s3://bucket/foo /root/foo"
  not_if { ::File.exist?('/root/foo') }
end

execute 'join_domain' do
  command "net ads join -U joiner%`cat /root/foo`"
end
Note that both your approach and mine will break if the password contains special characters. If net lets you provide the password on stdin or in an env var, I'd do that instead.
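One way to harden the lazy-property variant above against special characters is to shell-escape the credential; a sketch using Shellwords from Ruby's standard library (not part of either original answer):

require 'shellwords'

execute 'join_domain' do
  command lazy {
    password = IO.read('/root/foo').strip
    # escape the whole user%password token so shell metacharacters survive
    "net ads join -U #{Shellwords.escape("joiner%#{password}")}"
  }
  sensitive true
end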

expect: launching scp after sftp

I could really use some help. I'm still pretty new to expect. I need to launch an scp command directly after I run sftp.
I got the first portion of this script working; my main concern is the bottom portion. I really need to launch a command after this command completes. I'd rather spawn another command than hack something up, like piping this into a sleep and running it after 10 s or something weird.
Any suggestions are greatly appreciated!
spawn sftp user@host
expect "password: "
send "123\r"
expect "$ "
sleep 2
send "cd mydir\r"
expect "$ "
sleep 2
send "get somefile\r"
expect "$ "
sleep 2
send "bye\r"
expect "$ "
sleep 2
spawn scp somefile user2@host2:/home/user2/
sleep 2
So I figured out I can actually get this to launch the subprocess if I use "exec" instead of spawn; in other words:
exec scp somefile user2@host2:/home/user2/
The only problem? It prompts me for a password! This shouldn't happen; I already have the ssh keys installed on both systems. (In other words, if I run the scp command from the host I'm running this expect script on, it runs without prompting me for a password.) The system I'm trying to scp to must be treating this newly spawned process as a new host, because it's not picking up my ssh key. Any ideas?
BTW, I apologize that I haven't actually posted a "working" script; I can't really do that without compromising the security of this server. I hope that doesn't detract from anyone's ability to assist me.
I think the problem lies with me not terminating the initially spawned process. I don't understand expect well enough to do it properly. If I try "close" or "eof", it simply kills the entire script, which I don't want to do just yet (because I still need to scp the file to the second host).
Ensure that your SSH private key is loaded into an agent, and that the environment variables pointing to that agent are active in the session where you're calling scp.
[[ $SSH_AUTH_SOCK ]] || {        # if no agent already running...
  eval "$(ssh-agent -s)"         # ...then start one...
  ssh-add /path/to/your/ssh/key  # ...load your key...
  started_ssh_agent=1            # ...and flag that we started it ourselves
}

# ...put your script here...

[[ $started_ssh_agent ]] && {    # if we started the agent ourselves...
  eval "$(ssh-agent -s -k)"      # ...then clean up nicely when done.
}
As an aside, I'd strongly suggest replacing the code given in the question with something like the following:
lftp -u user,123 -e 'get /mydir/somefile -o localfile' sftp://host </dev/null
lftp scp://user2@host2 -e 'put localfile -o /home/user2/somefile' </dev/null
Each connection handled in one line, and no silliness messing around with expect.
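If a scripting language is acceptable instead of expect, the same two hops can also be done in Ruby (the language used elsewhere on this page) with the net-sftp and net-scp gems; a sketch reusing the hosts and paths from the question, assuming both gems are installed:

require 'net/sftp'
require 'net/scp'

# fetch somefile from the first host over SFTP
Net::SFTP.start('host', 'user', password: '123') do |sftp|
  sftp.download!('mydir/somefile', 'somefile')
end

# then push it to the second host over SCP (key-based auth, no password prompt)
Net::SCP.upload!('host2', 'user2', 'somefile', '/home/user2/somefile')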

How do you prompt for a sudo password using Ruby?

Often I find myself needing to write scripts that execute some portions as a normal user and other portions as a super user. I am aware of one similar question on SO where the answer was to run the same script twice and execute it as sudo; however, that is not sufficient for me. Sometimes I need to revert to being a normal user after a sudo operation.
I have written the following in Ruby to do this
#!/usr/bin/ruby
require 'rubygems'
require 'highline/import'
require 'pty'
require 'expect'

def sudorun(command, password)
  `sudo -k`
  # PTY.spawn yields the child's output stream first, then its input stream
  PTY.spawn("sleep 1; sudo -u root #{command} 2>&1") { |stdin, stdout, pid|
    begin
      stdin.expect(/password/) {
        stdout.write("#{password}\n")
        puts stdin.read.lstrip
      }
    rescue Errno::EIO
    end
  }
end
Unfortunately, using that code, if the user enters the wrong password the script crashes. Ideally it should give the user three tries to get the sudo password right. How do I fix this?
I am running this on Ubuntu Linux, BTW.
In my opinion, running a script that does stuff internally with sudo is wrong. A better approach is to have the user run the whole script with sudo, and have the script fork lesser-privileged children to do stuff:
# Drops privileges to that of the specified user
def drop_priv user
  Process.initgroups(user.username, user.gid)
  Process::Sys.setegid(user.gid)
  Process::Sys.setgid(user.gid)
  Process::Sys.setuid(user.uid)
end

# Execute the provided block in a child process as the specified user.
# The parent blocks until the child finishes.
def do_as_user user
  unless pid = fork
    drop_priv(user)
    yield if block_given?
    exit! 0 # prevent remainder of script from running in the child process
  end
  puts "Child running as PID #{pid} with reduced privs"
  Process.wait(pid)
end

at_exit { puts 'Script finished.' }

User = Struct.new(:username, :uid, :gid)
user = User.new('nobody', 65534, 65534)

do_as_user(user) do
  sleep 1 # do something more useful here
  exit! 2 # optionally provide an exit code
end

puts "Child exited with status #{$?.exitstatus}"

puts 'Running stuff as root'
sleep 1

do_as_user(user) do
  puts 'Doing stuff as a user'
  sleep 1
end
This example script has two helper methods. #drop_priv takes an object with username, uid, and gid defined and properly reduces the permissions of the executing process. The #do_as_user method calls #drop_priv in a child process before yielding to the provided block. Note the use of #exit! to prevent the child from running any part of the script outside of the block while avoiding the at_exit hook.
Often overlooked security concerns to think about:
Inheritance of open file descriptors
Environment variable filtering
Run children in a chroot?
Depending on what the script is doing, any of these may need to be addressed. #drop_priv is an ideal place to handle all of them.
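A hedged sketch of how #drop_priv could additionally scrub the environment and confine the child; the variable whitelist and the chroot directory are illustrative assumptions, not part of the original answer:

def drop_priv user
  # keep only a minimal, known-safe set of environment variables
  ENV.keep_if { |name, _| %w[PATH LANG TERM].include?(name) }
  ENV['HOME'] = '/'
  # optionally jail the child before dropping root (chroot requires root)
  # Dir.chroot('/var/empty'); Dir.chdir('/')
  Process.initgroups(user.username, user.gid)
  Process::Sys.setegid(user.gid)
  Process::Sys.setgid(user.gid)
  Process::Sys.setuid(user.uid)
end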
If it is possible, you could move the stuff you want executed as root to a separate file and use the system() function to run it with sudo, including the sudo prompt etc.:
system("sudo ruby stufftorunasroot.rb")
The system() function is blocking, so the flow of your program doesn't need to be changed.
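Since system returns true or false and sets $?, the calling script can react to a failed privileged step; a minimal sketch:

unless system('sudo', 'ruby', 'stufftorunasroot.rb')
  abort "privileged step failed with status #{$?.exitstatus}"
end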
I do not know if this is what you want or need, but have you tried sudo -A? (Search the web or the man page for SUDO_ASKPASS, which might have a value like /usr/lib/openssh/gnome-ssh-askpass or similar.) This is what I use when I need to present a graphical password dialogue to users in GUI environments.
Sorry if this is the wrong answer; maybe you really want to remain on the console.
#!/usr/bin/ruby
# ... blabla, other code

# part which requires sudo:
system "sudo -p 'sudo password: ' #{command}"

# other stuff

# sudo again
system "sudo -p 'sudo password: ' #{command}"
# usually sudo 'remembers' that you just authenticated yourself successfully
# and doesn't ask for the PW again...

# some more code...
