Bash script calls vi for manual editing, then script resumes?

I wrote a script that creates a backup of a text file, and a second script that verifies some syntax in the text file using sed.
In the middle there is a manual step: users edit the original file, adding some strings. This step must remain manual.
I would like to merge my two scripts so that the backup is created, vi is opened for the user, and when the user is done editing the file, the script resumes with the syntax verification.
I am learning by doing, but I really don't know how to code the "open vi, wait for the user to do his editing, take control back and resume with verification" part.
I read there is a function called system (in Perl) that could be used, but my code is in BASH.
Any suggestions on how to get this done in BASH? Thanks!

In bash, each statement is essentially like an implicit call to system (unless it's a builtin shell command) since shell scripts are designed to make it easy to run other programs.
backup some_file.txt
vi some_file.txt # The script blocks until the user exits vi
verify_syntax some_file.txt
The only difference between using vi and a command like ls is that ls will do its thing and exit without user intervention, while vi (or any interactive command) will run until the user explicitly exits.
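Putting it together, a minimal sketch of the merged script (here cp stands in for your backup script and verify_syntax for your sed-based checker):
#!/bin/bash
file="some_file.txt"
cp "$file" "$file.bak"     # step 1: back up the original
vi "$file"                 # step 2: blocks here until the user quits vi
./verify_syntax "$file"    # step 3: runs only after vi has exited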

Related

"history -c" doesn't work when called inside a script?

I wrote a bash script to call a Python script that encrypts private data using AES, taking a filepath and a 256-bit password as the only arguments. After encryption is done, it clears the history so the password isn't sitting there in case I leave the terminal open. It looks something like this:
#!/bin/bash
python aesencrypt.py "$1" "$2"
history -c
echo "" > ~/.bash_history
The ~/.bash_history file is cleared just fine, but if I run history after running this script then all of my history is still there (until I exit the terminal). Is there anything I'm missing here?
Don't try to clear history. Even though history is the most obvious way that passing a password on the command line exposes it, clearing it gives a false sense of security: passwords given on the command line are trivial to capture by other processes running on the same machine (even under untrusted accounts!), with no history involved at all.
Moreover, as you note, a shell can only modify its own in-memory state, not the in-memory state of the separate process that started it (which may not even be the same shell, or a shell at all!).
Instead, modify your Python program's calling convention to read the password directly from the TTY (as SSH does), or from the environment. For the latter, usage might look like:
# assumes you renamed aesencrypt.py to aesencrypt, ran chmod +x, and gave a valid shebang
password="somePassword" aesencrypt outFile
...and you would want to modify your Python script to do something like:
#!/usr/bin/env python
import os, sys
filename = sys.argv[1]
password = os.environ['password']
# ...put the rest of your logic here.
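If you go the TTY route instead, a small bash wrapper sketch (reusing the hypothetical aesencrypt above) can prompt for the password so it never appears in argv or in history:
#!/bin/bash
# -s suppresses echo, so the password is neither displayed nor logged
read -rs -p "Password: " password
echo
password="$password" aesencrypt "$1"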

Calling another script to run in the current script

I'm writing a shell script. What it does is create a file from input received from the user. Now I want to add a "view a file" feature to my current script. It's unreasonable to retype it all again, since I already have a script that helps with this.
I know it sounds crazy when it's possible to do this with a normal shell command. I'm actually writing a script that helps me create pages generated from the touch command (these pages have an attached date, author name, subject, and title).
The question is: how do I call another script, or include another script, from this one?
There are a couple of ways to do this. My preferred way is using source.
You can:
Call your other script with the source command (its alias is .), like this: source /path/to/script
Make the other script executable, add the #!/bin/bash line at the top, and add the directory containing the file to the $PATH environment variable. Then you can call it as a normal command.
Use the bash command to execute it: /bin/bash /path/to/script
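For example, with a hypothetical helper view_file.sh, the three options look like this:
source ./view_file.sh notes.txt     # option 1: runs in the current shell (can set variables here)
view_file.sh notes.txt              # option 2: needs chmod +x, a shebang, and its directory in $PATH
/bin/bash ./view_file.sh notes.txt  # option 3: explicit interpreter; no execute bit required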

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed one command at a time (a command being a line, or several commands joined by ;), with the exception of compound blocks such as if ... fi, which are read in as a single chunk:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
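You can also watch the line-by-line reading happen. A throwaway experiment (demo.sh is just an illustrative name): append a command to a script while it is still running, and with a typical bash the running instance picks it up and executes it:
cat > demo.sh <<'EOF'
echo started
sleep 5
EOF
bash demo.sh &                                  # start it in the background
echo 'echo appended while running' >> demo.sh   # add a line while it sleeps
wait                                            # the running copy prints "appended while running"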
It's funny that most OSes I know do NOT read the entire content of a script into memory, but run it from disk. Doing otherwise would allow you to change a script safely while it runs. I don't understand why it's done this way, given that:
scripts are usually very small (and wouldn't take much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running
But, acknowledging this, here is something to think about: if you decide that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead and make your changes, save them, and ignore all the output and actions of the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run is misbehaving: it will typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave things in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can start making the changes already, just don't save them.
It's not like the old days of DOS, where you had only a single screen in front of you (one DOS screen), and actually had to wait for the run to complete before you could open the file again.
No, they are not, and there are good reasons for that.
One of the things you should keep in mind is that a shell is not an interpreter in the usual sense, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its input source line by line until the kernel returns EOF.
Most shells have no extra support for interpreting files; they work with a file the same way they would work with a terminal.
In fact, this is considered a nice feature, because you can do interesting things like this: How do Linux binary installers (.bin, .sh) work?
You can prepend a shell script to a binary file. You can't do this with a typical interpreter, because it would parse the whole file (or at least try to) and fail. A shell just interprets it line by line and doesn't care about the garbage at the end of the file. You only have to make sure the script's execution terminates before it reaches the binary part.
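A minimal sketch of that trick, assuming a gzipped tarball has been appended to the script after a marker line (the marker name is arbitrary):
#!/bin/bash
# everything below the marker is binary payload, not shell code
payload_start=$(awk '/^__PAYLOAD_BELOW__$/ { print NR + 1; exit }' "$0")
tail -n +"$payload_start" "$0" | tar xz   # unpack the appended archive
exit 0    # critical: stop before the shell tries to read the binary part
__PAYLOAD_BELOW__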

Can I use what I wrote on the shell (bash, cmd, irb, etc) in a script automatically?

The general idea is pretty simple: I want to make a script for a certain task, so I do the task in the shell (any shell), and then I want to copy the commands I used.
If I copy all the text in the window, then I have a lot of stuff to delete and correct (and it's not easy to copy from a shell window).
In summary: I want to capture everything I typed.
Is there an easy way to do this easy task?
Update: Partial solution
In bash, the solution is pretty simple: there is a history command, and the idea has been ported to other environments:
IRB: Tweaking IRB
Cmd: Use PowerShell -> Get-History (or use cygwin)
Another Update:
I found that doskey has a /history parameter that does this:
cmd: Doskey /history >> history.cmd
Yes, you can use:
history -w filename.sh
This will save your command history to filename.sh. You may need to edit that to keep just the lines at the end that are part of your command sequence.
NOTE: This is a bash command and will not work with all shells.
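For example (the number of lines to keep is arbitrary, adjust to your task):
history -w full_history.sh             # dump this shell's history to a file
tail -n 20 full_history.sh > task.sh   # keep just the last 20 commands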
script may help here. Typing script will throw you into a new shell and save all input and output to a file called typescript. When you're done with your interaction, exit the shell. The file typescript is then amenable to grep'ing. For example, you might grep for your prompt and save the output to a file. If you're a clumsy typist like me, you may need to do some cleanup work to remove backspaces. There used to be a program that did this, but I don't seem to find it right now. Here is one I found on the 'net: http://www.cat.pdx.edu/tutors/files/fixts.cpp
This approach is especially useful if you want to track an entire interactive session and post it on the web.
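A typical session might look like this (assuming a prompt ending in "$ "):
script                  # opens a new shell; everything is recorded to ./typescript
# ... do your interactive work here ...
exit                    # ends the recording
grep '\$ ' typescript   # e.g. pull out the lines containing your prompt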

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about changing the way you edit it?
If the script is running, before editing it, do this:
mv script script-old   # rename: the running shell still holds the old inode open
cp script-old script   # create a fresh copy (a new inode) under the original name
rm script-old          # safe to remove; the data persists until the shell closes it
Since the shell keeps the file open, as long as you don't change the contents of that open inode, everything will work okay.
The above works because mv preserves the old inode while cp creates a new one. Since a file's contents are not actually removed while it is open, you can remove it right away and it will be cleaned up once the shell closes the file.
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business. (Note that the closing SETVAR delimiter must sit on a line by itself; otherwise the here-document never terminates.)
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
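A sketch of such a wrapper using the copy-to-a-temp-file approach (the path and names are illustrative):
#!/bin/bash
# invoked via the shebang as: /usr/local/fastbash /path/to/script args...
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT     # clean up the private copy when we are done
cp -- "$1" "$tmp"            # snapshot the script before anyone can edit it
shift
bash "$tmp" "$@"             # run the snapshot; edits to the original are now harmless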
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self-contained way to make a script resistant to this problem is to have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]]; then
    rm -f "/tmp/copy-$$"
    cp "$0" "/tmp/copy-$$"      # copy ourselves to a private file
    exec "/tmp/copy-$$" "$@"    # re-run the copy with the original arguments
    echo "error copying and execing script"
    exit 1
fi
rm "$0"   # we are now running the copy; unlink it (it stays readable while the shell runs it)
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)
