How to find most recently modified file in a remote directory (via ssh)? - ruby

I found this answer helpful:
How can you find the most recently modified folder in a directory using Ruby?
But what I need is to do the same for a remote directory (via SSH). What is the easiest way to do this in Ruby?
Here's what I have so far:
paths = (IO.popen("ssh -A user@yo.mammas.house.com ls /install/")).read.split("\n")
I only want these folders:
if p =~ /^release-MC-.*$/
I'm currently parsing the result of the ls command: splitting on newlines, matching on the regex, and the next step is to build a hash of the date string embedded in each folder name. I really don't want to have to do this last step, but it will work.
Is there a better way?

This is less a Net::SSH question than it is "What command can I issue to find the most recently modified file?"
SSH connections can issue a command, so once you know what command to send, or execute, you're done. I'd look at:
ls -Alt path/to/files | sed -n '2p'
Fleshing out something more usable results in:
require 'net/ssh'

HOST = 'hostname.domain'
USER = 'user'
PASSWORD = "password"

output = Net::SSH.start(HOST, USER, :password => PASSWORD) { |ssh|
  ssh.exec!('ls -alt . | grep pattern_to_find')
}
puts output
Which, after filling in the fields with the right values and running it, connected to one of my hosts at work and returned something like:
drwxr-xr-x 11 xxxxxxxxxxxx xxxxxxxxx 4096 Oct 2 16:20 development
If you have multiple hits to retrieve, either expand the pattern after grep, or drop the pipe to grep and parse the resulting output in Ruby once the command returns. You can also drop the t flag from ls if you want to sort locally, though it's a better idea to offload as much of the processing as possible to the far-side host rather than have it return a huge glob of data to process locally. The less you return, the faster your overall code will be.
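Applying that to the original question, here is a minimal sketch, assuming the host, user, and /install/ path from the post (ls -1t lists entries newest first, so the first matching line is the most recently modified folder):

require 'net/ssh'

# Host and user are hypothetical, taken from the question; adjust to suit.
newest = Net::SSH.start('yo.mammas.house.com', 'user') do |ssh|
  # Newest-first listing, filtered to the release-MC-* folders, top hit only.
  ssh.exec!("ls -1t /install/ | grep '^release-MC-' | head -n 1").to_s.strip
end
puts newest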


How to deal with shell commands that never stop

Here is the case:
There is this app called "termux" on Android which gives me a terminal on Android, and one of its add-ons exposes Android APIs like sensors, TTS engines, etc.
I wanted to make a script in Ruby using this app, specifically this API, but there is a catch.
The script:
require('json')
JSON.parse(%x'termux-sensor -s "BMI160 Gyro" -n 1')
-s = name (or part of the name) of the sensor
-n = number of times the command will run
returns me:
{
  "BMI160 Gyroscope" => {
    "values" => [
      -0.03...,
      0.00...,
      1.54...
    ]
  }
}
I didn't copy and paste the exact values, but that's not the point. The point is that this command takes almost a full second to load, but there is a way to "make it faster".
If I use the argument "-d" and don't use "-n", I can specify the time in milliseconds to delay between data being sent to STDOUT. It still takes a full second to load, but once it loads, the delay works like a charm.
And since I didn't specify an 'n' number of times, it never stops, and there is the problem:
How can I retrieve the data continuously in Ruby?
I thought about using another thread so it won't stop my program, but how can I tell Ruby to return the last X lines of the STDOUT of a command that hasn't stopped and never will, given that "%x'command'" in Ruby waits for a return?
If I understood correctly, you need to connect to stdout of a long-running process.
See if this works for your scenario, using IO.popen:
# by running this program
# and open another terminal
# and start writing some data into data.txt
# you will see it appearing in this program output
# $ date >> data.txt
io_obj = IO.popen('tail -f ./data.txt')
while !io_obj.eof?
  puts io_obj.readline
end
I found a built-in module that saved me, called PTY: its PTY.spawn method, plus some thread management, helped me keep a variable updated with the command's values each time the command output new bytes.
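For anyone who lands here, this is roughly the shape it took; a minimal sketch, assuming the termux-sensor flags from the question (the -d 100 delay value is hypothetical) and Ruby's standard PTY module:

require 'pty'
require 'json'

latest_reading = nil

reader = Thread.new do
  # The command never exits; it emits one pretty-printed JSON object per reading.
  PTY.spawn('termux-sensor -s "BMI160 Gyro" -d 100') do |stdout, _stdin, _pid|
    buffer = +''
    begin
      stdout.each do |line|
        buffer << line
        # A reading is complete once the JSON braces balance; parse and reset.
        next if buffer.strip.empty? || buffer.count('{') != buffer.count('}')
        latest_reading = JSON.parse(buffer)
        buffer = +''
      end
    rescue Errno::EIO
      # The child closed its side of the pty; nothing more to read.
    end
  end
end

# The main thread stays free; peek at the freshest value whenever you need it.
5.times do
  sleep 1
  p latest_reading
end
reader.kill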

bash: wrong behavior in for... loop together with a test statement

I am trying to test whether certain files, called up in a list of text files, are in a certain directory. Every once in a while (and I am quite certain I use the same statements every time) I get an error complaining that the echo command cannot be found.
The text files in my directory /audio/playlists/ are named according to the date on which they are supposed to be used: for example, 20130715.txt for today:
me@computer:/some/dir# ls /audio/playlists/
20130715.txt 20130802.txt 20130820.txt 20130907.txt 20130925.txt
20130716.txt 20130803.txt 20130821.txt 20130908.txt 20130926.txt
(...)
me@computer:/some/dir# cat /audio/playlists/20130715.txt
#A Comment line goes here
00:00:00 141-751.mp3
00:03:35 141-704.mp3
00:06:42 140-417.mp3
00:10:46 139-808.mp3
00:15:13 136-126.mp3
00:20:26 071-007.mp3
(...)
23:42:22 136-088.mp3
23:46:15 128-466.mp3
23:50:15 129-592.mp3
23:54:29 129-397.mp3
So much for the facts. The following statement lets me test whether all the files called upon in all of the text files in the given directory actually exist in /audio/mp3s/; it produces an error:
me@computer:/some/dir# for i in $(cat /audio/playlists/*.txt|cut -c 10-16|sort|uniq); do [ -f "/audio/mp3s/$i.mp3" ] || echo $i; done
 echo: command not found
me@computer:/some/dir#
I would guess bash wants to complain about the "A Comment"-line (actually " line ") not being a file, but why would that cause echo not to be found? Again, mostly this works, but every so often I get this error. Any help is greatly appreciated.
That space before echo isn't U+0020, it's U+00A0. And indeed, the command " echo" doesn't exist.
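If you want to hunt down the offending byte, here is a quick sketch in Ruby (the filename script.sh is hypothetical; point it at wherever you saved the command):

# Print each line of the script that contains a non-breaking space (U+00A0).
File.foreach('script.sh').with_index(1) do |line, lineno|
  puts "#{lineno}: #{line.inspect}" if line.include?("\u00A0")
end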

Ruby Dump all cron jobs to text file

I want a Ruby script that will dump all the existing cron jobs to a text file using "crontab -l", or anything else that achieves the same objective. Also, it should be possible to use the text file with crontab txtfile to recreate the cron jobs.
Below is the code I already wrote:
def dump_pre_cron_jobs(file_path)
  begin
    cron_list = %x[crontab -l]
    if cron_list.size > 0
      cron_list.each_line do |crl|
        mymethod_that_writes_tofile(file_path, crl) unless crl.chomp.include?("myfilter")
      end
    end
  rescue Exception => e
    raise(e.message)
  end
end
Why does this need to be a Ruby script?
As you say, you can dump the crontab to a file with crontab -l > crontab.txt.
To read them back in again, simply use crontab crontab.txt, or cat crontab.txt | crontab -
I agree with @Vortura that you do not need to create a Ruby script to do this.
If you really want to, here is a possible way:
File.open('crontab.txt', 'w') do |crontab|
  crontab << `crontab -l`
end
NOTE: Running this as root, or using sudo, should capture all the cron jobs on a system, not just a single user's jobs. Run it as yourself, or as that user, and it might capture just that user's jobs. I haven't tested that aspect of it.
Trying to run crontab -l to capture the crontab files for all users and packages seems an indirect way to do the task, and you could have the hassle of password prompts hanging your code. I'd write code to comb through the directories that store them rather than mess with prompts. Run the code using sudo and you shouldn't have any problems accessing the files.
Take a look at the discussion at: http://www.linuxquestions.org/questions/linux-newbie-8/etc-crontab-vs-etc-cron-d-vs-var-spool-cron-crontabs-853881/ for information on where the actual cron tab files are stored on disk.
Also https://superuser.com/questions/389116/how-to-recover-crontab-jobs-from-filesystem/389137 has similar information.
Mac OS varies a little from Linux in where Apple puts the cron files. Run man cron at the command-line for the definitive details on either OS.
Here's slightly tested code for how I'd back up the files. How you restore them is for you to work out, but it shouldn't be hard:
require 'fileutils'

BACKUP_PATH = '/path/to/some/safe/storage/directory'
CRONTAB_DIRS = %w[
  /usr/lib/cron/tabs
  /var/spool/cron
  /etc/anacrontab
  /etc/cron.d
]
CRONTAB_FILES = %w[
  /etc/cron_list
]

def dump_pre_cron_jobs(file_path)
  # Mirror the file's original directory structure under BACKUP_PATH.
  full_backup_path = File.join(
    BACKUP_PATH,
    File.dirname(file_path)
  )
  FileUtils.mkdir_p(full_backup_path) unless Dir.exist?(full_backup_path)
  File.write(
    File.join(full_backup_path, File.basename(file_path)),
    File.read(file_path)
  )
rescue Exception => e
  STDERR.puts e.message
end

CRONTAB_DIRS.each do |ct|
  next unless Dir.exist?(ct)
  begin
    Dir.glob(File.join(ct, '*')).each { |fn| dump_pre_cron_jobs(fn) }
  rescue Errno::EACCES => e
    STDERR.puts e.message
  end
end

CRONTAB_FILES.each do |fn|
  dump_pre_cron_jobs(fn)
end
You'll need to run this as root via sudo to access the directories and files as they're usually locked down from unauthorized prying eyes.
The code creates a repository of crontabs, in BACKUP_PATH, based on their original file paths. No changes are made to the file contents so they can be restored as-is by copying them back via cp or writing code to reverse this process.
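For the restore direction, here is a hedged sketch under the same assumptions, i.e. that the backup tree under BACKUP_PATH mirrors the original absolute paths; run it as root via sudo, just like the backup:

require 'fileutils'

BACKUP_PATH = '/path/to/some/safe/storage/directory'

# Walk the backup tree and copy every file back to its original location.
Dir.glob(File.join(BACKUP_PATH, '**', '*')).each do |backup|
  next unless File.file?(backup)
  original = backup.sub(BACKUP_PATH, '')
  FileUtils.mkdir_p(File.dirname(original))
  FileUtils.cp(backup, original)
end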

How do I get this Capistrano task to run on the server instead of locally?

I want to pull the last modified file from a directory. This Capistrano task works fine locally, but how do I make it run on the server so I can pull the server's data?
namespace :pull do
  desc "Hello Pull data from the server"
  task :hello, roles: :db do
    ## Want this to return what's on the server. Not locally.
    puts "Getting filename of last created database backup"
    db_backups_directory_path = "/home/deployer/backups"
    last_db_backup_archived = Dir.glob(File.join(db_backups_directory_path, '*')).
      select { |f| File.file? f }.
      sort_by { |f| File.mtime f }.
      last
    puts last_db_backup_archived
  end
end
I'd just go with run. Capistrano executes commands in parallel across a bunch of servers, so you'll have to translate your Ruby into shell code. Thankfully, in your case it's a more or less straightforward translation.
task :hello, roles: :db do
  ## Want this to return what's on the server. Not locally.
  puts "Getting filename of last created database backup"
  db_backups_directory_path = "/home/deployer/backups"
  run <<-CMD
    find #{db_backups_directory_path} -type f -printf '%T@ %p\n' |
      sort -n | tail -n1 | cut -d" " -f2
  CMD
end
The capture command also runs on a remote server. In addition to running a command remotely, it writes the command's stdout to a Ruby variable, so you can manipulate it with Ruby methods and then pass it back in:
some_variable = capture("pwd")
capture("cd #{some_variable}/.. && ls -alh")
This isn't the best example, but you get the general idea. The second capture is obviously unnecessary; you could substitute run and it wouldn't make a difference.
However, you should know that this will not work if you are running this task against multiple servers.
From the documentation:
Executes the given command on the first server targeted by the current task, collects its stdout into a string, and returns the string. The command is invoked via #invoke_command.
http://rdoc.info/github/capistrano/capistrano/Capistrano/Configuration/Actions/Inspect#capture-instance_method
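Putting the two answers together, here is a hedged sketch (assuming Capistrano 2, where capture runs against the first server matching the role) that pulls the newest backup's filename back into a Ruby variable:

task :hello, roles: :db do
  db_backups_directory_path = "/home/deployer/backups"
  # Newest file by modification time, name only; stdout comes back as a string.
  last_db_backup_archived = capture(
    "find #{db_backups_directory_path} -type f -printf '%T@ %p\\n' | sort -n | tail -n1 | cut -d' ' -f2"
  ).strip
  puts last_db_backup_archived
end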

A ruby script to run tail on a log file?

I want to write a Ruby script that reads filenames from a config file; when I run the script, it should take the tail of each file and output it to the console.
What's the best way to go about doing this?
Take a look at the File::Tail gem.
You can invoke the Linux tail -n number_of_lines file_name command from your Ruby script and either let it print to the console, or capture the output and print it yourself (if you need to do something with the lines before printing them).
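For instance, a minimal sketch of that approach (the filenames here are hypothetical; in practice you'd read them from your config file):

# Shell out to tail for the last 20 lines of each file and print them.
filenames = ['/var/log/syslog', '/var/log/auth.log']
filenames.each do |fn|
  puts %x[tail -n 20 #{fn}]
end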
We have a configuration file that contains a list of the log files, for example like this:
---
- C:\fe\logs\front_end.log
- C:\mt\logs\middle_tier.log
- C:\be\logs\back_end.log
The format of the configuration file is a simple YAML sequence, so suppose we name this file 'settings.yaml'.
The Ruby script that takes the tail of each file and outputs it to the console could look like this:
require 'yaml'
require 'file-tail'

logs = YAML::load(File.open('settings.yaml'))
threads = []
logs.each do |the_log|
  threads << Thread.new(the_log) { |log_filename|
    File.open(log_filename) do |log|
      log.extend(File::Tail)
      log.interval = 10
      log.backward(10)
      log.tail { |line| p "#{File.basename(the_log, '.log')} - #{line}" }
    end
  }
end
threads.each { |the_thread| the_thread.join }
Note: when displaying each line, I prefix it with the name of the file it originates from; this is a good option for me, but you can edit the script to change it as you like, and the same goes for the tail parameters.
If file-tail is missing from your environment, follow the link @Mark Thomas posted in his answer; i.e. you need to:
> gem install file-tail
I found the file-tail gem to be a bit buggy. I would write to a file and it would read the entire file again instead of just the lines appended, even though I had log.backward set to 0. I ended up writing my own and figured I would share it here in case anyone else is looking for a Ruby alternative to the file-tail gem. You can find the repo here. It uses non-blocking IO, so it will catch amendments to the file immediately. There is one caveat that is easy to fix if you know Ruby: log.backward is hard-coded to -1.
