I am doing some port forwarding like this:
socat tcp-listen:8000,reuseaddr,fork tcp:localhost:9000
Data is ASCII.
Each line is CR/LF terminated.
I have header and trailer strings I want to wrap any passed strings in.
Example:
(header is "start," and trailer is ",end")
user sends "ABC<CR,LF>"
socat sends "start,ABC,end<CR,LF>"
Is something like that possible?
Socat can pipe each line entered through awk like this:
socat TCP-L:8000,reuseaddr,fork,nodelay SYSTEM:"gawk -f my.awk|socat - TCP\:localhost\:9000"
my.awk:
{
    # wrap each line; print already appends a newline, so no extra "\n" is needed
    print "start," $1 ",end"
    fflush()
}
Thanks to Gerhard, socat author.
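To sanity-check the forwarder once it is running, a quick test from another terminal might look like this (assuming the listener above is on localhost:8000; whether anything comes back depends on the target behind port 9000):
# send one CR/LF-terminated line into the forwarder and print any reply
printf 'ABC\r\n' | socat - TCP:localhost:8000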
I generate (dynamically) a script by concatenating the following files:
testscript1
echo Writing File
cat > /tmp/test_file <<EOF
testcontent
line1
second line
testscript2
EOF
echo File is written
And I execute it by calling
$ cat testscript1 testcontent testscript2 | ssh remote_host bash -s --
The effect is that the file /tmp/test_file is filled with the desired content.
Is there also a conceivable variant where binary files can be supplied in a similar fashion? Instead of cat, dd or other tools could of course be used, but the problem I see is 'telling' them that stdin has now ended (can I send ^D through that stream?)
I am not able to get my head around this problem, so there is likely no comparable solution. However, I might be wrong, so I'd be happy to hear from you.
Regards,
Mazze
can I send ^D through that stream
Yes but you don't want to.
Control+D, commonly notated ^D, is just a character -- or to be pedantic (as I often am), a codepoint in the usual character code (ASCII or a superset like UTF-8) that we treat as a character. You can send that character/byte by a number of methods, most simply printf '\004', but the receiving system won't treat it as end-of-file; it will instead be stored in the destination file, just like any other data byte, followed by the subsequent data that you meant to be a new command and file etc.
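A quick way to convince yourself of this is to push a \004 byte through an ordinary pipe and inspect it with od; it arrives as plain data:
# the 004 byte travels through the pipe like any other byte; nothing ends early
printf 'before\004after\n' | od -c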
^D only causes end-of-file when input from a terminal (more exactly, a 'tty' device) -- and then only in 'cooked' mode (which is why programs like vi and less can do things very different from ending a file when you type ^D). The form of ssh you used doesn't make the input a 'tty' device. ssh can make the input (and output) a 'tty' (more exactly a subclass of 'tty' called a pseudo-tty or 'pty', but that doesn't matter here) if you add the -t option (in some situations you may need to repeat it as -t -t or -tt). But then if your binary file contains any byte with the value \004 -- or several other special values -- which is quite possible, then your data will be corrupted and garbage commands executed (sometimes), which definitely won't do what you want and may damage your system.
The traditional approach to what you are trying to do, back in the 1980s and 1990s, was 'shar' (shell archive), and the usual solution for handling binary data was 'uuencode', which converts binary data into only printable characters that can safely go through a link like this, matched by 'uudecode', which converts it back. See this surviving example from GNU. uuencode and uudecode themselves were part of the 'uucp' communication suite used mostly for email and Usenet, which are (all) mostly obsolete and forgotten.
However, nearly all systems today contain a 'base64' program which provides equivalent (though not identical) functionality. Within a single system you can do:
base64 <infile | base64 -d >outfile
to get the same effect as cp infile outfile. In your case you can do something like:
{ echo "base64 -d <<END# >outfile"; base64 <infile; echo "END#"; otherstuff; } | ssh remote bash
You can also try:
cat testscript1 testcontent testscript2 | base64 | ssh <options> "base64 --decode | bash"
Don't worry about ^D, because when your input is exhausted, the next processes of the pipeline will notice that they have reached the end of the input file.
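Applied to your original example, a possible sketch (binary_file is a placeholder for the real file; the echoed lines mirror your testscript1 and testscript2 contents) would embed the binary payload base64-encoded inside a here-document:
{
  echo 'echo Writing File'
  echo "base64 -d > /tmp/test_file <<'EOF'"
  base64 < binary_file              # only printable characters go over the wire
  echo 'EOF'
  echo 'echo File is written'
} | ssh remote_host bash -s --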
I have a plain text file, that is formatted like:
10 https://google.com
22 https://facebook.com
I'd like to parse this file in a bash script and, for each line, use the number before the URL to make that many wget requests to that URL.
I know I can simply read in the file with:
URLS=$(cat ./urls)
But how do I split on the number and space and newlines and run the wget command for each line?
Use read to read each part into a variable, and while to loop through the lines.
while read prio url
do
...
done < ./urls
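A minimal sketch of the full loop, assuming the file is ./urls and the first column is a plain integer count:
while read -r prio url; do
    # make $prio requests to $url
    for ((i = 0; i < prio; i++)); do
        wget "$url"
    done
done < ./urls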
I'm working completely in theory here, so please excuse any misunderstandings. I need to run the same command on each line of a command's output, but using only a portion of each line. Note that this output comes from a custom command-line tool. Example:
// initial command
> ~$ device findAll
// returned data
Scanning ...
Network Name Hardware Address IPV4 Address Details
test1 CD:F8:D4:15:3B:AE 172.1.3.22 "Blah Blah Blah"
test1 AB:C1:D2:11:31:EF 192.15.31.2 "Blah Blah Blah"
...
test1 CE:A8:B4:16:3A:FD 172.1.6.21 "Blah Blah Blah"
test1 AC:B1:E2:16:21:DF 172.1.6.22 "Blah Blah Blah"
Total: 600 Devices
With this returned data I need to access only the IPV4 address section of each line so that I can ssh into the device and run an update. I know how to ssh into each device individually, but with 600 returned values, that would be a waste of time. I also do not know how to ignore the header lines and the total line of the returned data.
My question is this: How can I access only the IPv4 section of the returned data using only the command line?
End result would theoretically be something like:
> ~$ device findall | while read -r line ; do
//access device by ssh command
scp /current-firmware-pathway/firmware.bin user@[IPv4 value here]:/tmp/fwupdate.bin
done
If storing the return in a variable and iterating over the variable is more efficient, I'm also open to that result. Thank you in advance for your help.
This is fairly easy using awk:
device findall | awk '$3 ~ /^[0-9]+(\.[0-9]+){3}$/ { print $3 }' | while read -r ip; do
    scp /current-firmware-pathway/firmware.bin "user@$ip":/tmp/fwupdate.bin
done
With $3 ~ /^[0-9]+(\.[0-9]+){3}$/ we filter the lines, matching four non-empty numeric sequences separated by dots. This is not strictly the pattern of IPv4 addresses, but probably close enough. If there is a match, we print the 3rd column. The header and summary lines are ignored, as they don't match the IP address pattern in the 3rd column.
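As a quick sanity check using sample lines from the question, only the data rows make it through the filter:
printf '%s\n' \
  'Network Name Hardware Address IPV4 Address Details' \
  'test1 CD:F8:D4:15:3B:AE 172.1.3.22 "Blah Blah Blah"' \
  'Total: 600 Devices' \
  | awk '$3 ~ /^[0-9]+(\.[0-9]+){3}$/ { print $3 }'
# prints only: 172.1.3.22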
Suppose I have a bash script that goes through a file that contains a list of old URLs that have all been redirected.
curl --location http://destination.com will process a page by following a redirect. However, I'm interested not in the content, but in where the redirect points, so that I can update my records.
What is the command-line option for curl to output what that new location for the URL is?
You want to leave out the --location/-L flag, and use -w, checking the redirect_url variable. curl -s -o /dev/null -w "%{redirect_url}" http://someurl.com should do it; -o /dev/null discards the response body so that only the redirect URL is printed.
Used in a script:
REDIRECT=$(curl -s -o /dev/null -w "%{redirect_url}" http://someurl.com)
echo "http://someurl.com redirects to: ${REDIRECT}"
From the curl man page:
-w, --write-out <format>
Make curl display information on stdout after a completed transfer. The
format is a string that may contain plain text mixed with any number
of variables. The format can be specified as a literal "string", or
you can have curl read the format from a file with "@filename" and to
tell curl to read the format from stdin you write "@-".
The variables present in the output format will be substituted by the
value or text that curl thinks fit, as described below. All variables
are specified as %{variable_name} and to output a normal % you just
write them as %%. You can output a newline by using \n, a carriage
return with \r and a tab space with \t.
NOTE: The %-symbol is a special symbol in the win32-environment, where
all occurrences of % must be doubled when using this option.
The variables available are:
...
redirect_url When an HTTP request was made without -L to follow
redirects, this variable will show the actual URL a redirect would
take you to. (Added in 7.18.2)
...
This might work (as a starting point)
curl -sI google.com | head -1 | grep 301 | wc -l
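Along the same lines, a rough sketch that pulls out the Location header itself (it only inspects the first response's headers):
curl -sI google.com | awk 'tolower($1) == "location:" { sub(/\r$/, ""); print $2 }'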
man curl
then
search redirect_url
redirect_url When a HTTP request was made without -L to follow
redirects, this variable will show the actual URL a redirect would
take you to. (Added in 7.18.2)
The variable above is for -w/--write-out <format>.
I was just reading a blog post about sanitizing user input in Ruby before sending it to the command line. The author's conclusion was: don't send user input to the command line in the first place.
In creating a contact form, he said he learned that
What I should do instead is open a pipe to the mail command as an IO stream and just write to it like any other file handle.
This is the code he used:
open( %Q{| mail -s "#{subject}" "#{recipient}" }, 'w' ) do |msg|
  msg << body
end
(I actually added the quotes around recipient - they're needed, right?)
I don't quite understand how this works. Could anybody walk me through it?
OK, I'll explain it with the caveat that I don't think it's the best way to accomplish that task (see comments to your question).
open() with a pipe/vertical bar as the first character will spawn a shell, execute the command, and pass your input into the command through a unix-style pipe. For example the unix command cat file.txt | sort will send the contents of the file to the sort command. Similarly, open("| sort", 'w') {|cmd| cmd << file} will take the content of the file variable and send it to sort. (The 'w' means it is opened for writing).
The %Q() is an alternate way to quote a Ruby string. This way it doesn't interfere with literal quote characters in the string which can result in ugly escaping. So the mail -s command is being executed with a subject and a recipient.
Quotes are needed around the subject, because the mail command will be interpreted by the shell, and arguments are separated by spaces, so if you want spaces in an argument, you surround it with quotes. Since the -s argument is for the subject, it needs to be in quotes because it will likely contain spaces. On the other hand, the recipient is an email address and email addresses don't contain spaces, so they're not necessary.
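To see the word splitting in isolation, here is a small hypothetical shell demo using printf, which simply echoes each argument it receives:
printf '<%s>\n' Hello world        # two arguments: <Hello> then <world>
printf '<%s>\n' "Hello world"      # one argument: <Hello world>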
The block is providing the piped input to the command. Everything you add to the block variable (in this case msg) is sent into the pipe. Thus the email body is being appended (via the << operator) to the message, and therefore piped to the mail command. The unix equivalent is something like this: cat body.txt | mail -s "subject" recipient@a.com