I'm working on backing up some accounts from my S3 bucket, where each account's prefix is its id.
When test.txt contains a single id everything works correctly, but with multiple ids the Ruby script quits after the first one. I checked, and the problem is where it runs the exec command. I've been trying to research why it breaks, but it's taking a long time. Can anybody help me understand why?
test.txt with 1 id:
1
test.txt with multiple ids:
1,2,3
My code:
file_names = ["test.txt"]
Dir.mkdir("logs") unless Dir.exist?("logs")
Dir.mkdir("data") unless Dir.exist?("data")
file_names.each do |file|
out_file = File.new("logs/#{file}", "w")
out_file.puts("Start read file #{file}")
member_ids = File.read("#{file}").strip!.split(",")
member_ids.each do |id|
Dir.mkdir("data/#{id}") unless Dir.exist?("data/#{id}")
command = "aws s3 sync s3://mybucket/#{id}/ data/#{id}/"
exec command
out_file.puts("#{id}")
end
out_file.puts("Finished read file #{file}")
out_file.close
end
That error comes from exec: Kernel#exec replaces the current Ruby process with the given command and never returns, so the first aws s3 sync takes over the process and the rest of the loop (and your logging) never runs. Change it to the system command, which runs the command in a child process and returns when it finishes, and it will work. :D
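For reference, a minimal sketch of the inner loop using system instead (same paths and bucket name as above; the exit-status handling is just an illustration):
member_ids.each do |id|
  Dir.mkdir("data/#{id}") unless Dir.exist?("data/#{id}")
  command = "aws s3 sync s3://mybucket/#{id}/ data/#{id}/"
  # system spawns a child process and waits for it, returning true/false,
  # so the loop continues; exec would replace this Ruby process entirely.
  if system(command)
    out_file.puts("#{id}")
  else
    out_file.puts("sync failed for #{id} (exit status #{$?.exitstatus})")
  end
end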
I am using Amazon OpsWorks and struggling to get this working as a single script. I have created a recipe named clamav.rb. The content of the script is:
yum_package 'clamav' do
  action :install
end
yum_package 'clamav-update' do
  action :install
end
file_names = ['/etc/freshclam.conf']
file_names.each do |file_name|
  text = File.read(file_name)
  replace = text.gsub("Example", "#Example")
  # To merely print the contents of the file, use:
  puts replace
  # To write changes to the file, use:
  File.open(file_name, "w") {|file| file.puts replace }
end
execute "Run Freshclam" do
  command "/usr/bin/freshclam"
end
When I run the above recipe it fails with an error:
[2016-08-01T13:02:36+00:00] ERROR: Running exception handlers
[2016-08-01T13:02:36+00:00] ERROR: Exception handlers complete
[2016-08-01T13:02:36+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
[2016-08-01T13:02:36+00:00] ERROR: No such file or directory - /etc/freshclam.conf
[2016-08-01T13:02:36+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
but when I split the script into two parts, one recipe for the yum packages and a separate one for the configuration change, it runs fine.
You're being bitten by Chef's two-pass loading model. Your File.read runs at compile time, before the package resources have converged, so /etc/freshclam.conf doesn't exist yet. Check out https://coderanger.net/two-pass/ for more details on that, but to fix your actual problem, use the line cookbook, which has resources for this kind of search-and-replace in files and will handle the sequencing correctly for you.
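If you'd rather not pull in an extra cookbook, a ruby_block resource is another option, because its block runs at converge time, after the yum_package resources have installed ClamAV and its config file. A rough sketch (untested; the resource name is arbitrary):
ruby_block 'comment out Example in freshclam.conf' do
  block do
    path = '/etc/freshclam.conf'
    text = File.read(path)
    # Disable the placeholder line that ships with clamav-update.
    File.write(path, text.gsub('Example', '#Example'))
  end
  # Guard so the block is skipped if the file still isn't there.
  only_if { ::File.exist?('/etc/freshclam.conf') }
end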
I resolved this problem; below is the solution.
Code: I replaced my older code with this:
File.open('/etc/freshclam.conf', "r") do |aFile|
  if aFile
    text = File.read('/etc/freshclam.conf')
    replace = text.gsub("Example", "#Example")
    # To merely print the contents of the file, use:
    puts replace
    # To write changes to the file, use:
    File.open('/etc/freshclam.conf', "w") {|file| file.puts replace }
  else
    puts "Unable to open file!"
  end
end
I want a Ruby script that will dump all the existing cron jobs to a text file using crontab -l, or anything else that achieves the same objective. It should also be possible to feed the text file back to crontab (crontab txtfile) to recreate the cron jobs.
Below is the code I already wrote:
def dump_pre_cron_jobs(file_path)
  begin
    cron_list = %x[crontab -l]
    if cron_list.size > 0
      cron_list.each_line do |crl|
        mymethod_that_writes_tofile(file_path, crl) unless crl.chomp.include?("myfilter")
      end
    end
  rescue Exception => e
    raise(e.message)
  end
end
Why does this need to be a Ruby script?
As you say, you can dump the crontab to a file with crontab -l > crontab.txt.
To read them back in again, simply use crontab crontab.txt, or cat crontab.txt | crontab -
I agree with @Vortura that you do not need to create a Ruby script to do this.
If you really want to, here is a probable way:
File.open('crontab.txt', 'w') do |crontab|
  crontab << `crontab -l`
end
NOTE: Running this as root, or using sudo, should capture all the cron jobs on a system, not just a single user's jobs. Run it as yourself or as that user and it might capture just those jobs. I haven't tested that aspect of it.
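If you want the restore step in Ruby as well, you can feed the saved file back to crontab on standard input; a minimal sketch, assuming the dump above produced crontab.txt:
# Pipe the saved crontab into `crontab -`, which installs it for the
# current user (run it as the same user whose jobs were dumped).
IO.popen(['crontab', '-'], 'w') do |io|
  io.write File.read('crontab.txt')
end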
Trying to run crontab -l to capture the crontab files for all the users and packages seems like an indirect way to do the task, and could leave you with the hassle of password prompts hanging your code. I'd write code to comb through the directories that store them rather than mess with prompts. Run the code using sudo and you shouldn't have any problems accessing the files.
Take a look at the discussion at: http://www.linuxquestions.org/questions/linux-newbie-8/etc-crontab-vs-etc-cron-d-vs-var-spool-cron-crontabs-853881/ for information on where the actual cron tab files are stored on disk.
Also https://superuser.com/questions/389116/how-to-recover-crontab-jobs-from-filesystem/389137 has similar information.
Mac OS varies a little from Linux in where Apple puts the cron files. Run man cron at the command-line for the definitive details on either OS.
Here's slightly-tested code for how I'd back up the files. How you restore them is up to you, but it shouldn't be hard to figure out:
require 'fileutils'

BACKUP_PATH = '/path/to/some/safe/storage/directory'
CRONTAB_DIRS = %w[
  /usr/lib/cron/tabs
  /var/spool/cron
  /etc/anacrontab
  /etc/cron.d
]
CRONTAB_FILES = %w[
  /etc/cron_list
]

def dump_pre_cron_jobs(file_path)
  full_backup_path = File.join(
    BACKUP_PATH,
    File.dirname(file_path)
  )
  FileUtils.mkdir_p(full_backup_path) unless Dir.exist?(full_backup_path)
  File.write(
    File.join(
      full_backup_path,
      File.basename(file_path)
    ),
    File.read(file_path)
  )
rescue Exception => e
  STDERR.puts e.message
end

CRONTAB_DIRS.each do |ct|
  next unless Dir.exist?(ct)
  begin
    Dir.glob(File.join(ct, '*')).each { |fn| dump_pre_cron_jobs(fn) }
  rescue Errno::EACCES => e
    STDERR.puts e.message
  end
end

CRONTAB_FILES.each do |fn|
  dump_pre_cron_jobs(fn)
end
You'll need to run this as root via sudo to access the directories and files as they're usually locked down from unauthorized prying eyes.
The code creates a repository of crontabs, in BACKUP_PATH, based on their original file paths. No changes are made to the file contents so they can be restored as-is by copying them back via cp or writing code to reverse this process.
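For what it's worth, restoring could be as simple as walking BACKUP_PATH and copying each file back to the absolute path it mirrors; a rough, untested sketch assuming the layout produced by dump_pre_cron_jobs above:
require 'fileutils'

# Walk the backup repository and copy every file back to its original
# location, which is encoded in its path relative to BACKUP_PATH.
Dir.glob(File.join(BACKUP_PATH, '**', '*')).each do |backup_file|
  next unless File.file?(backup_file)
  original_path = backup_file.sub(BACKUP_PATH, '')
  FileUtils.mkdir_p(File.dirname(original_path))
  FileUtils.cp(backup_file, original_path)
end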
I want to pull the last modified file from a directory. This Capistrano task works just fine locally, but how do I make it run on the server so I can pull the server's data?
namespace :pull do
  desc "Hello Pull data from the server"
  task :hello, roles: :db do
    ## Want this to return what's on the server. Not locally.
    puts "Getting filename of last created database backup"
    db_backups_directory_path = "/home/deployer/backups"
    last_db_backup_archived = Dir.glob(File.join(db_backups_directory_path, '*')).
      select {|f| File.file? f }.
      sort_by {|f| File.mtime f }.
      last
    puts last_db_backup_archived
  end
end
I'd just go with run. Capistrano executes commands in parallel over a bunch of servers, so you'll have to translate your ruby into shell code. Thankfully, in your case it's more or less a straightforward translation.
task :hello, roles: :db do
  ## Want this to return what's on the server. Not locally.
  puts "Getting filename of last created database backup"
  db_backups_directory_path = "/home/deployer/backups"
  run <<-CMD
    find #{db_backups_directory_path} -type f -printf '%T@ %p\n' |
      sort -n | tail -n1 | cut -d" " -f2
  CMD
end
The capture command will also run on a remote server. In addition to running a command remotely, it captures the command's stdout into a Ruby variable, so you can manipulate it with Ruby methods and then pass it back in:
some_variable = capture("pwd")
capture("cd #{some_variable}/.. && ls -alh")
This isn't the best example, but you get the general idea. The second capture is obviously not necessary, and you could substitute it with run and it wouldn't make a difference.
However, you should know that this will not work if you are running this task against multiple servers.
From the documentation:
Executes the given command on the first server targetted by the
current task, collects it's stdout into a string, and returns the
string. The command is invoked via #invoke_command.
http://rdoc.info/github/capistrano/capistrano/Capistrano/Configuration/Actions/Inspect#capture-instance_method
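Applied to the original task, a hedged sketch of using capture to fetch the newest backup's filename from the server might look like this (it assumes the usual ls and head are available on the remote host):
task :hello, roles: :db do
  db_backups_directory_path = "/home/deployer/backups"
  # capture runs the command on the remote server and returns its stdout,
  # so the filename ends up in a local Ruby variable.
  last_db_backup_archived = capture("ls -1t #{db_backups_directory_path} | head -n 1").strip
  puts last_db_backup_archived
end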
I want to write Ruby code with Net::SSH that runs commands one by one on a remote Linux machine and logs everything (the command that was run, plus its stdout and stderr on the Linux machine).
So I wrote this function:
def rs(ssh, cmds)
  cmds.each do |cmd|
    log.debug "[SSH>] #{cmd}"
    ssh.exec!(cmd) do |ch, stream, data|
      log.debug "[SSH:#{stream}>] #{data}"
    end
  end
end
For example, if I want to create new folders and a file, "./verylongdirname/anotherlongdirname/a.txt", on the remote machine, list the files in that directory, and look for firefox there (which is a little silly :P), I call the above procedure like this:
Net::SSH.start(host, user, :password => pass) do |ssh|
  cmds = ["mkdir verylongdirname",                                 #1
          "cd verylongdirname; mkdir anotherlongdirname",          #2
          "cd verylongdirname/anotherlongdirname; touch a.txt",    #3
          "cd verylongdirname/anotherlongdirname; ls -la",         #4
          "cd verylongdirname/anotherlongdirname; find ./ firefox" #5 this command sends an error to stderr
         ]
  rs(ssh, cmds) # HERE we call our function
  ssh.loop
end
After running the code above I get a full log with information about the execution of the commands in lines #1, #2, #3, #4 and #5. The problem is that the state on the Linux machine is not preserved between the commands in the cmds array (so I must repeat the "cd" statement before each proper command), and I'm not happy with that.
What I'd like is a cmds table like this:
cmds=["mkdir verylongdirname", \ #1
"cd verylongdirname", \
"mkdir anotherlongdirname", \ #2
"cd anotherlongdirname", \
"touch a.txt", \ #3
"ls -la", \ #4
"find ./ firefox"] #5
As you can see, the state is preserved on the Linux machine between commands (so we don't need to repeat the appropriate "cd" statement before each one). How do I change the rs(ssh, cmds) procedure to do this and still LOG EVERYTHING (command, stdout, stderr) as before?
Perhaps try opening a remote shell over an SSH channel instead. That should preserve state between your commands, since the connection is kept open:
http://net-ssh.github.com/ssh/v1/chapter-5.html
Here's also an article that does something similar with a slightly different approach:
http://drnicwilliams.com/2006/09/22/remote-shell-with-ruby/
Edit 1:
Ok. I see what you are saying. SyncShell was removed from Net::SSH 2.0. However I found this, which looks like it does pretty much what SyncShell did:
http://net-ssh-telnet.rubyforge.org/
Example:
s = Net::SSH.start(host, user)
t = Net::SSH::Telnet.new("Session" => s, "Prompt" => %r{^myprompt :})
puts t.cmd("cd /tmp")
puts t.cmd("ls") # <- Lists contents of /tmp
I.e. Net::SSH::Telnet is synchronous, and preserves state, because it runs in a pty with your remote shell environment. Remember to set the correct prompt detection, otherwise Net::SSH::Telnet will appear to hang once you call it (it's trying to find the prompt).
You can use a pipe instead:
require "open3"
SERVER = "..."
BASH_PATH = "/bin/bash"
BASH_REMOTE = lambda do |command|
  Open3.popen3("ssh #{SERVER} #{BASH_PATH}") do |stdin, stdout, stderr|
    stdin.puts command
    stdin.close_write
    puts "STDOUT:", stdout.read
    puts "STDERR:", stderr.read
  end
end
BASH_REMOTE["ls /"]
BASH_REMOTE["ls /no_such_file"]
OK, finally, with the help of @Casper, I got the procedure working (maybe someone can use it):
# Remote command execution
# t = Net::SSH::Telnet session, c = "command_string"
def cmd(t, c)
  first = true
  d = ''
  # We send the command via SSH and read the output piece by piece (into 'cm')
  t.cmd(c) do |cm|
    # Below we clean up the output piece (because it contains strange control chars)
    d << cm.gsub(/\e\].*?\a/, "").gsub(/\e\[.*?m/, "").gsub(/\r/, "")
    # When we have read an entire line (composed of many pieces) we write it to the log
    if d =~ /(^.*?)\n(.*)$/m
      if first
        # Instead of the first line (which just echoes the command) we log the command 'c'
        #log.info "[SSH]>"+c;
        first = false
      else
        #log.info "[SSH] "+$1;
      end
      d = $2
    end
  end
  # We print the lines that were left at the end (in the last piece)
  d.each_line do |l|
    #log.info "[SSH] "+l.chomp
  end
end
And we call it in code:
#!/usr/bin/env ruby
require 'rubygems'
require 'net/ssh'
require 'net/ssh/telnet'
require 'log4r'
...
...
...
Net::SSH.start(host, user, :password => pass) do |ssh|
  t = Net::SSH::Telnet.new("Session" => ssh)
  cmd(t, "cd /")
  cmd(t, "ls -la")
  cmd(t, "find ./ firefox")
end
Thanks, bye.
Here's a wrapper around Net::SSH; see the article at http://ruby-lang.info/blog/virtual-file-system-b3g
and the source at https://github.com/alexeypetrushin/vfs
To log all commands, just override the Box.bash method and add logging there.
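For example, something along these lines might do it (a rough sketch; the Box constant path and the bash signature are assumptions based on the article, so adjust to whatever the gem actually defines):
# Assumed monkey-patch: wrap the gem's Box#bash so every remote command
# is logged before it runs. Adjust the class name if the gem namespaces it.
class Box
  alias_method :bash_without_logging, :bash

  def bash(command, *args, &block)
    puts "[BASH] #{command}"
    bash_without_logging(command, *args, &block)
  end
end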
I want to be able to read a currently open file. The test.rb is sending its output to test.log which I want to be able to read and ultimately send via email.
I am running this using cron:
*/5 * * * * /tmp/test.rb > /tmp/log/test.log 2>&1
I have something like this in test.rb:
#!/usr/bin/ruby
def read_file(file_name)
  file = File.open(file_name, "r")
  data = file.read
  file.close
  return data
end
puts "Start"
puts read_file("/tmp/log/test.log")
puts "End"
When I run this code, it only gives me this output:
Start
End
I would expect the output to be something like this:
Start
Start (from the reading of the test.log since it should have the word start already)
End
Ok, you're trying to do several things at once, and I suspect you didn't systematically test before moving from one step to the next.
First we're going to clean up your code:
def read_file(file_name)
  file = File.open(file_name, "r")
  data = file.read
  file.close
  return data
end
puts "Start"
puts read_file("/tmp/log/test.log")
puts "End"
can be replaced with:
puts "Start"
puts File.read("./test.log")
puts "End"
It's plain and simple; there's no need for a method or anything complicated... yet.
Note that for ease of testing I'm working with a file in the current directory. To put some content in it I'll simply do:
echo "foo" > ./test.log
Running the test code gives me...
Greg:Desktop greg$ ruby test.rb
Start
foo
End
so I know the code is reading and printing correctly.
Now we can test what would go into the crontab, before we deal with its madness:
Greg:Desktop greg$ ruby test.rb > ./test.log
Greg:Desktop greg$
Hmm. No output. Something is broken with that. We knew there was content in the file previously, so what happened?
Greg:Desktop greg$ cat ./test.log
Start
End
Cat'ing the file shows it has the "Start" and "End" output of the code, but the part that should have been read and output is now missing.
What's happening is that the shell truncates "test.log" just before it passes control to Ruby, which then opens and runs the code, which opens the now-empty file to print it. In other words, you're asking the shell to truncate (empty) the file just before you read it.
The fix is to read from a different file than the one you're going to write to, if you're trying to do something with its contents. If you're not trying to do something with its contents, then there's no point in reading it with Ruby just to write it to a different file: we have cp and/or mv to do those things for us without Ruby being involved. So, this makes more sense if we're going to do something with the contents:
ruby test.rb > ./test.log.out
I'll reset the file contents using echo "foo" > ./test.log, and cat'ing it showed 'foo', so I'm ready to try the redirection test again:
Greg:Desktop greg$ ruby test.rb > ./test.log.out
Greg:Desktop greg$ cat test.log.out
Start
foo
End
That time it worked. Trying it again has the same result, so I won't show the results here.
If you're going to email the file you could add that code at this point. Replacing the puts in the puts File.read('./test.log') line with an assignment to a variable will store the file's content:
contents = File.read('./test.log')
Then you can use contents as the body of an email. (And, rather than use Ruby for all of this, I'd probably do it using mail or mailx, or pipe it directly to sendmail from the command line and shell, but that's your call.)
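If you do stay in Ruby for the mail step, a minimal sketch that pipes the contents to the local sendmail binary could look like this (the sendmail path and the address are placeholders):
contents = File.read('./test.log')

# Hypothetical recipient; sendmail -t reads the recipients from the headers.
IO.popen('/usr/sbin/sendmail -t', 'w') do |mail|
  mail.puts 'To: you@example.com'
  mail.puts 'Subject: test.log contents'
  mail.puts
  mail.puts contents
end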
At this point things are in a good position to add the command to crontab, using the same command as used on the command-line. Because it's running in cron, and errors can happen that we'd want to know about, we'd add the 2>&1 redirect to capture STDERR also, just as you did before. Just remember that you can NOT write to the same file you're going to read from or you'll have an empty file to read.
That's enough to get your app working.
class FileLineRead
  File.open("file_line_read.txt") do |file|
    file.each do |line|
      phone_number = line.gsub(/\n/, '')
      user = User.find_by_phone_number(phone_number)
      user.destroy unless user.nil?
    end
  end
end
open file
read line
DB Select
DB Update
In the cron job you have already opened and cleared test.log (via redirection) before you have read it in the Ruby script.
Why not do both the read and write in Ruby?
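For instance, a short sketch that does both sides in Ruby and skips the shell redirection entirely (paths as in the question; this is just one way to arrange it):
#!/usr/bin/ruby
# Read the previous run's log first, then write this run's output ourselves
# instead of letting the crontab redirection truncate the file we want to read.
log_path = '/tmp/log/test.log'
previous = File.exist?(log_path) ? File.read(log_path) : ''

File.open(log_path, 'w') do |log|
  log.puts 'Start'
  log.puts previous
  log.puts 'End'
end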
It may be a permissions issue or the file may not exist.
f = File.open("test","r")
puts f.read()
f.close()
The above will read the file test, if it exists in the current directory.
The problem is, as far as I can see, already solved by Slomojo. I'll only add:
to read and print a text file in Ruby, just:
puts File.read("/tmp/log/test.log")