How can I make a shell command execute synchronously in Ruby?

I want to extract zip files and delete them once they are extracted. Googling tells me that the easiest way to unzip files in Ruby is to shell out to unzip filename.zip. My next step is to delete the zip file.
The second step happens so fast that the unzip shell command does not even get a chance to see the file before it is deleted. It errors out saying
"unzip: cannot find either filename.zip or filename.zip.zip."
I just want to have the unzip... command complete before continuing execution of the ruby script. I want it to block synchronously. Is there a way to do that? I cannot use sleep because I cannot estimate how long it will take.

The usual ways to run an external program in Ruby are synchronous, so there should be no problem.
Try
`unzip filename.zip`
or
system("unzip filename.zip")
or
system("unzip filename.zip && rm filename.zip")
Backticks and system both block until the command has finished; system also returns true only if the command exited successfully, so you can check that before deleting the archive.

Related

Script piped into bash fails to expand globs during rm command

I am writing a script with the intention of being able to download and run it from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and, for the most part, it Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob as a result of the way I've connected the I/O of bash and curl, but I have been unable to find a way to get everything to play nice.
How can I meet all of the following requirements?
Download and run a script from a remote resource
Ensure that the user's keyboard input is connected for use in e.g. read calls within the script
Correctly expand the glob passed to rm
Bonus points: colorize output with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues)
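A sketch of one workaround, assuming the missing matches come from ** needing bash's globstar option (which is off by default) rather than from the curl pipe itself: fetch the script to a temporary file so it runs exactly like a local copy with stdin still on the terminal, and turn globstar on before the cleanup. The URL is the one from the question.
tmpscript=$(mktemp)
curl -s -o "$tmpscript" https://raw.githubusercontent.com/path/to/script.sh
bash "$tmpscript"    # stdin is still the terminal, so read works
rm -f "$tmpscript"
And inside script.sh itself, before the cleanup step:
shopt -s globstar    # bash needs this for ** to recurse into subdirectories
rm -f **/*.bkp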

Instead of giving a command for batch mode, can I give a .scm file path?

It is possible to supply batch commands directly with the -b flag, but if the commands become very long, this is no longer an option. Is there a way to give the path to an .scm script that was written to a file, without having to move the file into the scripts directory?
Not as far as I know. What you pass with the -b flag is a Scheme statement, which implies the function it calls has already been loaded by the script executor process. You can of course add more directories that are searched for scripts using Edit>Preferences>Folders>Scripts.
If you write your script in Python the problem is a bit different, since you can alter the Python path before loading the script code, but the command line remains a bit long.
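To illustrate, a typical batch invocation looks something like the line below; my-batch-function is a hypothetical Script-Fu function that must already be registered from one of the configured script folders, and -i just suppresses the UI.
gimp -i -b '(my-batch-function "input.png" "output.png")' -b '(gimp-quit 0)'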

How can I have a shell script copy back a file if there is a failure anywhere?

At the beginning of the file I do
cp ~/.bundle/config ~/.bundle/config_save
At the end of the file I restore it with
cp ~/.bundle/config_save ~/.bundle/config
and within the file I am issuing lots of different rspec spec/dir/file.rb commands.
How can I make it so that, if it is interrupted by the user (Ctrl-C), it does cleanup and restores the config_save file back to config?
I would like the processes to run in the foreground if possible, so that I can see the actual failures themselves. Failing this, perhaps another option might be to tail the logs/test.log in each repository.
Maybe I misunderstand your question, but can't you just "concatenate" the commands using &&:
cp ~/.bundle/config ~/.bundle/config_save
rspec spec/dir/file1.rb &&
rspec spec/dir/file2.rb &&
rspec spec/dir/file3.rb
cp ~/.bundle/config_save ~/.bundle/config
If one of the rspec commands fails, the remaining commands are skipped and the next (i.e. last) line is executed.
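An && chain alone does not cover the Ctrl-C case the question asks about, though; a trap is the usual tool for that. A minimal sketch, reusing the paths and commands from the question:
#!/bin/bash
cp ~/.bundle/config ~/.bundle/config_save
# Restore the saved config whenever the script exits, including after Ctrl-C
trap 'cp ~/.bundle/config_save ~/.bundle/config' EXIT
rspec spec/dir/file1.rb
rspec spec/dir/file2.rb
rspec spec/dir/file3.rb
In bash the EXIT trap also fires when the script is interrupted, so the restore happens whether the rspec runs finish, fail, or are cut short.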

Changing directory for User in Ruby script

I'm quite familiar with Dir.chdir("/xyz")
Unfortunately, this changes the directory of the process, but not actually the directory of the user. I'll make the following example to illustrate my need.
$~/: ruby my_script.rb
CHANGING TO PATH FOR USER NOT SCRIPT
$/Projects/Important/Path: pwd
$/Projects/Important/Path
See? I need the script to change the user's path. Performing system/backticks/Dir.chdir all adjust the process's path and leave the user sitting where they started, instead of in the path I want them in.
From what I've read exec was the way to go, since it takes over the existing process... but to no avail.
You can't, but you can do something which might be good enough. You can invoke another shell from ruby:
Dir.chdir("/xyz")
system("bash")
Running this will create a new bash process, which will start in the /xyz directory. The downside is that exiting this process will bring you back to the Ruby script and, assuming it ends right away, back to the bash process that started the Ruby script.
Another hack that might work is to use the prompt as a hackish hook that will be called after each command. In the Ruby script, you can write the new directory's path somewhere that can be read from both bash and Ruby (for example a file, but not an environment variable!). In your PROMPT_COMMAND, you check that file and cd to whatever is written there. Just make sure you delete that file, so you don't get automatically cd'ed there after every command you run.
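A minimal sketch of that hack, using /tmp/ruby_cd_target as an arbitrary file name: the Ruby script writes the target path to that file (e.g. with File.write), and the bash side goes in your ~/.bashrc:
# Before each prompt, follow any cd request left behind by the Ruby script
_follow_ruby_cd() {
  if [ -f /tmp/ruby_cd_target ]; then
    cd "$(cat /tmp/ruby_cd_target)"
    rm -f /tmp/ruby_cd_target   # delete it so you are not sent back there after every command
  fi
}
PROMPT_COMMAND=_follow_ruby_cd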

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution to the way you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
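A sketch of what such a wrapper might look like, using the copy-to-a-temporary-file method from the question (note that some kernels insist the #! interpreter be a compiled binary, in which case you would invoke it as fastbash script.sh instead):
#!/bin/bash
# Hypothetical /usr/local/fastbash: snapshot the target script into a temp file
# so later edits to the original cannot affect this run.
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
cp "$1" "$tmp" || exit 1
shift
bash "$tmp" "$@"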
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
  rm -f /tmp/copy-$$
  cp "$0" /tmp/copy-$$
  exec /tmp/copy-$$ "$@"
  echo "error copying and execing script"
  exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)
