Writing to an IO stream in Ruby instead of sanitizing user input and sending to the shell? - ruby

I was just reading a blog post about sanitizing user input in Ruby before sending it to the command line. The author's conclusion was: don't send user input to the shell in the first place.
In creating a contact form, he said he learned that
What I should do instead is open a pipe to the mail command as an IO stream and just write to it like any other file handle.
This is the code he used:
open( %Q{| mail -s "#{subject}" "#{recipient}" }, 'w' ) do |msg|
msg << body
end
(I actually added the quotes around recipient - they're needed, right?)
I don't quite understand how this works. Could anybody walk me through it?

OK, I'll explain it with the caveat that I don't think it's the best way to accomplish that task (see comments to your question).
open() with a pipe/vertical bar as the first character will spawn a shell, execute the command, and pass your input into the command through a unix-style pipe. For example the unix command cat file.txt | sort will send the contents of the file to the sort command. Similarly, open("| sort", 'w') {|cmd| cmd << file} will take the content of the file variable and send it to sort. (The 'w' means it is opened for writing).
%Q() is an alternative way to quote a Ruby string. It doesn't clash with literal quote characters inside the string, which would otherwise need ugly escaping. So the mail -s command is being executed with a subject and a recipient.
Quotes are needed around the subject because the mail command line is interpreted by the shell, and arguments are separated by spaces; if you want spaces inside an argument, you surround it with quotes. Since -s takes the subject, which will likely contain spaces, it needs quotes. The recipient, on the other hand, is an email address, and email addresses don't contain spaces, so quotes around it aren't strictly necessary.
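A quick sketch of what the shell does in each case, with hypothetical subject and recipient values:
subject="Meeting notes"
# Unquoted: the shell word-splits, so mail gets the subject "Meeting"
# plus a stray extra recipient argument "notes":
mail -s $subject user@example.com
# Quoted: mail receives the whole string as a single subject argument:
mail -s "$subject" user@example.com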
The block is providing the piped input to the command. Everything you add to the block variable (in this case msg) is sent into the pipe. Thus the email body is appended (via the << operator) to the message and thereby piped to the mail command. The unix equivalent is something like this: cat body.txt | mail -s "subject" recipient@a.com

Related

Sending script and file content via STDIN

I generate (dynamically) a script by concatenating the following files:
testscript1:
echo Writing File
cat > /tmp/test_file <<EOF
testcontent:
line1
second line
testscript2:
EOF
echo File is written
And I execute by calling
$ cat testscript1 testcontent testscript2 | ssh remote_host bash -s --
The effect is that the file /tmp/test_file is filled with the desired content.
Is there also a variant conceivable where binary files can be supplied in a similar fashion? Instead of cat, dd or other tools could of course be used, but the problem I see is 'telling' them that STDIN has now ended (can I send ^D through that stream?).
I am not able to get my head around that problem, and suspect there is no comparable solution. However, I might be wrong, so I'd be happy to hear from you.
can I send ^D through that stream
Yes but you don't want to.
Control+D, commonly notated ^D, is just a character -- or to be pedantic (as I often am), a codepoint in the usual character code (ASCII or a superset like UTF-8) that we treat as a character. You can send that character/byte by a number of methods, most simply printf '\004', but the receiving system won't treat it as end-of-file; it will instead be stored in the destination file, just like any other data byte, followed by the subsequent data that you meant to be a new command and file etc.
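You can see this for yourself with a local pipe (a quick sketch; /tmp/t is just a scratch file):
# A ^D byte written through a pipe is stored like any other byte:
printf 'before\004after\n' | cat > /tmp/t
od -c /tmp/t    # shows: b e f o r e 004 a f t e r \n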
^D only causes end-of-file when input from a terminal (more exactly, a 'tty' device) -- and then only in 'cooked' mode (which is why programs like vi and less can do things very different from ending a file when you type ^D). The form of ssh you used doesn't make the input a 'tty' device. ssh can make the input (and output) a 'tty' (more exactly a subclass of 'tty' called a pseudo-tty or 'pty', but that doesn't matter here) if you add the -t option (in some situations you may need to repeat it as -t -t or -tt). But then if your binary file contains any byte with the value \004 -- or several other special values -- which is quite possible, then your data will be corrupted and garbage commands executed (sometimes), which definitely won't do what you want and may damage your system.
The traditional approach to what you are trying to do, back in the 1980s and 1990s, was 'shar' (shell archive), and the usual solution for handling binary data was 'uuencode', which converts binary data into only printable characters that can safely go through a link like this, matched by 'uudecode', which converts it back. See this surviving example from GNU. uuencode and uudecode were part of the 'uucp' communication suite, used mostly for email and Usenet, all of which are now mostly obsolete and forgotten.
However, nearly all systems today contain a 'base64' program which provides equivalent (though not identical) functionality. Within a single system you can do:
base64 <infile | base64 -d >outfile
to get the same effect as cp infile outfile. In your case you can do something like:
{ echo "base64 -d <<END# >outfile"; base64 <infile; echo "END#"; otherstuff; } | ssh remote bash
You can also try:
cat testscript1 testcontent testscript2 | base64 | ssh <options> "base64 --decode | bash"
Don't worry about ^D: when your input is exhausted, the downstream processes of the pipeline will see that they have reached the end of their input.

Why does bash use/need so many input redirect symbols?

I am curious about the nature and purpose of using multiple "<" characters in certain bash redirections. When is each of the <, <<, <<< syntaxes correct/preferred, and under what conditions? Shouldn't a single "<" be sufficient for a properly written command, function, or subroutine? In unix 'everything' is a file, so why mask this with process substitution? Isn't that already just a mask for the natural (grouping) capability of any shell? Or is it in some cases just a matter of proper order of execution?
Efficiency and performance always have trade-offs, as do readability, writability, and ease of use. I'm an old dog trying to learn new tricks. Ten lines of code I understand are worth the trade-off to me over one line of code that I do not. In my years of scripting, I have had very few situations that required writing to non-volatile storage, unless the data was intended to be left there "permanently".
I have not seen a similar variety for output. A single ">" will create/overwrite a file, and a double ">>" will create/append to a file. Is there a ">>>" for output too? (That last question is redundant; I am only interested in the input redirects.)
In simple words, they all have different meanings.
< Redirection of input
<< Here Document
<<< Here String (variant of here document)
Examples
< Redirection of input
grep foo < a-file.txt
This redirects the contents of a-file.txt to grep's standard input. grep searches for occurrences of string 'foo' in file a-file.txt.
<< Here Document
grep foo <<EOF
foo
foobar
baz
bar
EOF
Notice the EOF right after << and in the last line. From man bash:
This type of redirection instructs the shell to read input from the current source until a line containing only delimiter (with no trailing blanks) is seen.
So effectively, grep gets the string enclosed by the two EOFs as input.
<<< Here String (variant of here document)
grep foo <<<"foobar"
You could see this as a "single line" here document (<<). grep gets the string "foobar" as input.
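All three ultimately arrange data on grep's standard input; a side-by-side sketch (a-file.txt is hypothetical):
grep foo < a-file.txt        # < : stdin comes from a file
grep foo <<EOF               # <<: stdin comes from the lines up to the sentinel
foobar
EOF
grep foo <<< "foobar"        # <<<: stdin comes from a single string
printf 'foobar\n' | grep foo # a pipe achieves the same effect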
Shouldn't a single "<" be sufficient for a properly written command, function, or subroutine?
So which variant is the correct one to use depends on your use case and is independent of the command you're using: your shell (most likely bash) takes care of the redirection before the command even runs.
I recommend section 3.6 Redirection of bash's manual for further reading. The sections concerning <, << and <<< are 3.6.1, 3.6.6, 3.6.7: https://www.gnu.org/software/bash/manual/bash.html#Redirections

Newlines in shell script variable not being replaced properly

Situation: Using a shell script (bash/ksh), there is a message that should be shown in the console log, and subsequently sent via email.
Problem: There are newline characters in the message.
Example below:
ErrMsg="File names must be unique. Please correct and rerun.
Duplicate names are listed below:
File 1.txt
File 1.txt
File 2.txt
File 2.txt
File 2.txt"
echo "${ErrMsg}"
# OK. After showing the message in the console log, send an email
Question: How can these newline characters be translated into HTML line breaks for the email?
Constraint: We must use HTML email. Downstream processes (such as Microsoft Outlook) are too inconsistent for anything else to be of use. Simple text email is usually a good choice, but off the table for this situation.
To be clear, the newlines do not need to be completely removed, but HTML line breaks must be inserted wherever there is a newline character.
This question is being asked because I have already attempted to use several commands, such as sed, tr, and awk with varying degrees of success.
TL;DR: The following snippet will do the job:
ErrMsg=`echo "$ErrMsg"|awk 1 ORS='<br/>'`
Just make sure there are double quotes around the variable when using echo.
This turned out to be a tricky situation. Some notes of explanation are below.
Using sed
It turns out that sed reads its input line by line and strips the newline from each line before applying the script, which puts finding and replacing those newlines somewhat outside the norm. There were several clever tricks that appeared to work, but I felt they were far too complicated to apply to this rather simple situation.
Using tr
According to this answer, the tr command should work. Unfortunately, tr only translates character by character: its two arguments are sets of characters, not strings, so a one-character set like '\n' can only be mapped to a single replacement character. I am limited to translating the newline into a space or some other single character.
For the following:
ErrMsg="Line 1
Line 2
"
ErrMsg=`echo "$ErrMsg" | tr '\n' 'BREAK'`
# You might expect:
# "Line 1BREAKLine 2BREAK"
# But instead you get:
# "Line 1BLine 2B"
echo "${ErrMsg}"
Using awk
Using awk according to this answer initially appeared to work, but due to some other circumstances with echo there was a subtle problem. The solution is noted in this forum.
You must have double quotes around your variable, or echo will strip out all the newlines. (Of course, awk will still receive the characters with a newline at the end, because that's what echo appends after it echoes its arguments.)
This snippet is good: (line breaks in the middle are preserved and replaced correctly)
ErrMsg=`echo "$ErrMsg"|awk 1 ORS='<br/>'`
This snippet is bad: (newlines converted to spaces by echo; a single line break at the end)
ErrMsg=`echo $ErrMsg|awk 1 ORS='<br/>'`
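As an aside (not part of the original answer): if you'd rather not spawn awk at all, bash and ksh93 can do the same substitution with parameter expansion:
# Replace every newline in ErrMsg with an HTML line break, entirely in-shell:
ErrMsg="${ErrMsg//$'\n'/<br/>}"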
You can wrap your message in HTML using <pre>, something like
<pre>
${ErrMsg}
and more.
</pre>

How does : <<'END' work in bash to create a multi-line comment block?

I found a great answer for how to comment in a bash script (by @sunny256):
#!/bin/bash
echo before comment
: <<'END'
bla bla
blurfl
END
echo after comment
The ' and ' around the END delimiter are important, otherwise things inside the block like for example $(command) will be parsed and executed.
This may be ugly, but it works and I'm keen to know what it means. Can anybody explain it simply? I already found an explanation for :, namely that it is a no-op or true. But it doesn't make sense to me to call no-op or true here anyway...
I'm afraid this explanation is less "simple" and more "thorough", but here we go.
The goal of a comment is to be text that is not interpreted or executed as code.
Originally, the UNIX shell did not have a comment syntax per se. It did, however, have the null command : (once an actual binary program on disk, /bin/:), which ignores its arguments and does nothing but indicate successful execution to the calling shell. Effectively, it's a synonym for true that looks like punctuation instead of a word, so you could put a line like this in your script:
: This is a comment
It's not quite a traditional comment; it's still an actual command that the shell executes. But since the command doesn't do anything, surely it's close enough: mission accomplished! Right?
The problem is that the line is still treated as a command beyond simply being run as one. Most importantly, lexical analysis - parameter substitution, word splitting, and such - still takes place on those destined-to-be-ignored arguments. Such processing means you run the risk of a syntax error in a "comment" crashing your whole script:
: Now let's see what happens next
echo "Hello, world!"
#=> hello.sh: line 1: unexpected EOF while looking for matching `''
That problem led to the introduction of a genuine comment syntax: the now-familiar # (which was first introduced in the C shell developed at Berkeley). Everything from # to the end of the line is completely ignored by the shell, so you can put anything you like there without worrying about syntactic validity:
# Now let's see what happens next
echo "Hello, world!"
#=> Hello, world!
And that's How The Shell Got Its Comment Syntax.
However, you were looking for a multi-line (block) comment, of the sort introduced by /* (and terminated by */) in C or Java. Unfortunately, the shell simply does not have such a syntax. The normal way to comment out a block of consecutive lines - and the one I recommend - is simply to put a # in front of each one. But that is admittedly not a particularly "multi-line" approach.
Since the shell supports multi-line string-literals, you could just use : with such a string as an argument:
: 'So
this is all
a "comment"
'
But that has all the same problems as single-line :. You could also use backslashes at the end of each line to build a long command line with multiple arguments instead of one long string, but that's even more annoying than putting a # at the front, and more fragile since trailing whitespace breaks the line-continuation.
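For illustration, the backslash variant looks something like this; a single space after any backslash silently breaks the continuation:
# Each trailing backslash joins the next line into one argument list for ':'
: this is line one of the comment \
  and line two \
  and line three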
The solution you found uses what is called a here-document. The syntax some-command <<whatever causes the following lines of text - from the line immediately after the command, up to but not including the next line containing only the text whatever - to be read and fed as standard input to some-command. Here's an alternate shell implementation of "Hello, world" which takes advantage of this feature:
cat <<EOF
Hello, world
EOF
If you replace cat with our old friend :, you'll find that it ignores not only its arguments but also its input: you can feed whatever you want to it, and it will still do nothing (and still indicate that it did that nothing successfully).
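You can verify that directly; a quick sketch:
: <<EOF; echo "status: $?"   # prints "status: 0"
any amount of
ignored text
EOF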
However, the contents of a here-document do undergo string processing. So just as with the single-line : comment, the here-document version runs the risk of syntax errors inside what is not meant to be executable code:
#!/bin/sh -e
: <<EOF
(This is a backtick: `)
EOF
echo 'In modern shells, $(...) is preferred over backticks.'
#=> ./demo.sh: line 2: bad substitution: no closing "`" in `
The solution, as seen in the code you found, is to quote the end-of-document "sentinel" (the EOF or END or whatever) on the line introducing the here document (e.g. <<'EOF'). Doing this causes the entire body of the here-document to be treated as literal text - no parameter expansion or other processing occurs. Instead, the text is fed to the command unchanged, just as if it were being read from a file. So, other than a line consisting of nothing but the sentinel, the here-document can contain any characters at all:
#!/bin/sh -e
: <<'EOF'
(This is a backtick: `)
EOF
echo 'In modern shells, $(...) is preferred over backticks.'
#=> In modern shells, $(...) is preferred over backticks.
(It is worth noting that the way you quote the sentinel doesn't matter - you can use <<'EOF', <<E"OF", or even <<EO\F; all have the same result. This is different from the way here-documents work in some other languages, such as Perl and Ruby, where the content is treated differently depending on the way the sentinel is quoted.)
Notwithstanding any of the above, I strongly recommend that you instead just put a # at the front of each line you want to comment out. Any decent code editor will make that operation easy - even plain old vi - and the benefit is that nobody reading your code will have to spend energy figuring out what's going on with something that is, after all, intended to be documentation for their benefit.
It is called a Here Document. It is a form of redirection that lets you feed a block of text as standard input to another command or program.
The string following the << is the marker that determines the end of the block. If you send that text to the no-op :, nothing happens, which is why you can use the construct as a comment block.
That's heredoc syntax. It's a way of defining multi-line string literals.
As the answer at your link explains, the single quotes around the END disables interpolation, similar to the way single-quoted strings disable interpolation in regular bash strings.

GoldParser: Accept programs not ending with an empty line

I'm rewriting a GoldParser grammar for VBScript. In VBScript, statements are terminated using either a newline or ':'. Therefore I use the following terminal:
NewLine = {All Newline}
| ':'
Because every statement has to end with the NewLine terminal, only programs ending with an empty line are accepted. How can I extend the NewLine terminal to also accept programs that do not end with an empty line? I tried the following:
NewLine = {All Newline}
| ':'
| {EOF}
This does not work because the {EOF} (End of File) group does not exist.
EOF is a special token and I'm not aware of any syntax allowing you to use it in a production rule. It is emitted when the tokenizer receives no more data, and as such it is not a control character you could use in a terminal definition either.
That being said, you have different possibilities to parse the (strictly speaking invalid) input. The simplest may be to just append a newline at the end of the string or text being tokenized. While this will not make it parse correctly in the GOLD Builder test window, it will make your code process the data as expected and it will not add complexity to the grammar.
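For example, if your host application reads the source text from a file before tokenizing, you could ensure a trailing newline along these lines (a shell sketch with a hypothetical program.vbs):
# Append a newline only if the file doesn't already end with one:
[ -n "$(tail -c 1 program.vbs)" ] && printf '\n' >> program.vbs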
