I am sending emails through a shell script, which reads the message body from a text file. The issue is that when my disclaimer gets long, extra ! signs start appearing in the middle of it.
Please suggest how to remove them without shortening the disclaimer or adding extra lines to it. Let me know if you need any details.
Needed (for example):
I am sending emails through shell script, in which I get message body from a text file. Issue comes when my disclaimer goes long, it starts getting extra ! sign in between.
Please suggest me to remove that without shortening or giving extra lines between the disclaimer. Let me know if you need any details.
Output in mail:
I am sending emails through shell script, in which I get message body from a text file. Issue comes when my disclaimer goes long, it starts getting extra ! sign in between.
Please suggest me to remove that without shortening or giving e!xtra lines between the disclaimer. Let me know if you need any details.
Here you can see the ! sign inserted into a word. Just make the body long enough (sorry, I couldn't provide the actual text) and the ! sign will appear.
The code I am using to read from the text file:
EMAILMSG=$(cat $2) # $2 is path of text file
and to send the email:
(echo -e $EMAILMSG;) | mailx -s "$SUBJECT" -b "abc@abc.com" $EMAIL -r $MAILBCC
I think this will help you understand the situation. Please let me know if anyone needs further details.
I found this: Random exclamation mark in email body using CDO
It seems to indicate that long lines are the problem: sendmail-style MTAs split lines longer than about 1000 characters and can mark the split with a !. Try this, using fold to wrap the lines:
fold -s "$2" | mailx -s "$SUBJECT" -b "abc@abc.com" "$EMAIL" -r "$MAILBCC"
Style recommendations:
don't use $ALL_CAPS_VARS (here's why not)
always quote your "$vars" except when you know exactly when not to.
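For instance, a minimal sketch applying both recommendations to the command above (the lowercase names are hypothetical stand-ins for the original variables):
msgfile=$2    # path of the text file, as in the original script
fold -s "$msgfile" | mailx -s "$subject" -b "abc@abc.com" "$email" -r "$mailbcc"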
Trying to do the following on CentOS 7 works as I expect:
pod_in_question=$(curl -u uname:password -k very.cluster.com/api/v1/namespaces/default/pods/ | grep -i '"name": "myapp-' | cut -d '"' -f 4)
echo "$pod_in_question"
curl -u uname:password -k -X DELETE "very.cluster.com/api/v1/namespaces/default/pods/${pod_in_question}"
However, trying the same thing on macOS (10.12.1) yields:
curl: (3) [globbing] bad range in column 92
When I try to curl the last line with a -g option it substitutes with a malformed name such as: myapp-\\x1b[m\\x1b[Kl1eti\
The echo statement would always execute just fine and show something like myapp-v7454 which I later want to put into the last curl statement. So where are these other characters coming from?
A robust solution - Basic cURL CLI debugging.
This answer was revised after it was identified that the OP's issue relates to grep applying color to its output.
There's a proposed answer which explains clearly what the embedded special characters mean, with instructions to override the grep behaviour so it does not output color. That is certainly good practice when piping grep output. There are, however, a number of best practices that can help diagnose this or a similar issue with cURL, and ultimately lead to the most robust solution.
Re-creating the problem
Assuming it's a JSON Content-Type, we use echo {'"name": "myapp-7414"'} to simulate the output from cURL.
We filter the text and set a variable with it that we use in a cURL command.
We force grep to output color, since it normally doesn't when its output is piped rather than written to a tty.
Recreation:
myvar=$(echo {'"name": "myapp-7414"'} | grep --color=always -i '"name": "myapp-' | cut -d '"' -f 4)
curl "https://www.google.com/${myvar}"
Output:
curl: (3) [globbing] bad range in column 32
First up:
'{}' are special characters to cURL, period.
The best practice for URL syntax in cURL:
If Variable Expansion is required:
Apply the -g switch to disable potential globbing done by cURL
Otherwise:
Use $variable as part of a "quoted" url string, instead of ${variable}
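A small sketch of the two recommended forms (example.com and the id variable are hypothetical placeholders, not from the question):
id="myapp-7414"
curl -g "https://example.com/pods/${id}"   # -g rules out cURL globbing during expansion
curl "https://example.com/pods/$id"        # plain $id inside a quoted URL string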
Second: In addition to -g, we add --libcurl /tmp/libcurl so we can get some insight into what cURL is seeing.
Recreation with -g and --libcurl:
curl -g --libcurl /tmp/libcurl "https://www.google.com/${myvar}"
Output:
<p>Your client has issued a malformed or illegal request <ins>That’s all we know.
Perfect, at least now everything is getting to the server and back! Let's see what cURL sent out to the server:
cat /tmp/libcurl
Sure enough, we find this line (note the embedded escape sequence):
curl_easy_setopt(hnd, CURLOPT_URL, "https://www.google.com/myapp-\033[m7414");
So we know that:
The shell is doing something strange with our variable.
cURL knows not to try globbing once we send the -g switch. That way, if there is an error with the shell variable, we can actually see what it is. We shouldn't be debugging a globbing error if we're not trying to use URL ranges.
The special characters are colors. They come from the --color=always that we added to simulate the OP's environment.
At this point, since it looks like we're working with JSON data, why not just use a widely available, high-performance JSON parsing tool? That has a number of benefits, including:
Not relying on any environment that could affect string filtering
Being able to request exactly the data we want (i.e. "name")
The app name "myapp" can change and we won't have to re-write the code to retrieve it.
It's cleaner and accounts for things I haven't considered yet.
If we used jq, for example (while we're at it, we don't need the -g switch, because we don't need '{}' around the variable now that we're already double-quoting the URL):
myvar=$(echo {'"name": "myapp-7414"'} | jq -r .name)
curl --libcurl /tmp/libcurl "https://www.google.com/$myvar"
Now we get:
<p>The requested URL /myapp-7414 was not found on this server. That’s all we know.
Great, it's all working now. Obviously the test URL here, www.google.com, is not going to know what myapp-7414 is.
So we've gone from:
Globbing bad range, to:
Malformed URL, to:
URL not found on server.
We could also, as suggested elsewhere, override the grep output with --color=never (as I have noted: if grep has to be used, --color=never is good practice when piping strings, period). However, given the portability issues already experienced because of string filtering, and the fact that we are already handed structured data on a plate that can be parsed reliably, the more robust solution is to do just that, if possible.
The substitution you showed in the last part looks like one of your calls injected ANSI escape sequences. It's possible that grep isn't detecting non-TTY output and is colorizing it anyway.
On a terminal that supports ANSI escape sequences, your particular codes might not be visible. The codes ^[[m^[[K reset the character attributes and clear to the end of the line. That's why you thought the echo command proved your data was correct.
You can examine the raw data with:
echo "$pod_in_question" | hexdump -C
And you should see there are other characters in there which did not appear in your terminal before. When you put these "invisible" codes into the URL, curl tries to encode them and then fails when it encounters a control character (ESC).
The solution is to add the argument --color=never to your grep call, which will disable colorization.
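Applied to the command from the question, that would look like:
pod_in_question=$(curl -u uname:password -k very.cluster.com/api/v1/namespaces/default/pods/ | grep --color=never -i '"name": "myapp-' | cut -d '"' -f 4)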
I have a shell script like this. The purpose of this script is to tail+head a certain amount of data out of file.csv and then send it by email to Bob@123.com. DataFunction seems to work fine on its own; however, when I call DataFunction within the email function body, it sends an empty email with the correct title and destination. The body of the email, which should be the data from DataFunction, is missing. Is there a workaround for this? Thank you in advance.
#!/bin/bash
DataFunction()
{
tail -10 /folder/"file.csv" | head -19
}
fnEmailFunction()
{
echo ${DataFunction}| mail -s Title Bob@123.com
}
fnEmailFunction
You are echoing an unset variable, $DataFunction (written ${DataFunction}), not invoking the function.
You should use:
DataFunction | mail -s Title Bob@123.com
You may have been trying to use:
echo $(DataFunction) | mail -s Title Bob@123.com
but that is misguided for several reasons. The primary problem is that it converts the 10 lines of output from the DataFunction function into a single line of input to mail. If you enclosed the $(DataFunction) in double quotes, that would preserve the 'shape' of the input, but it wastes time and energy compared to running the command (function) directly as shown.
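To see the quoting point concretely (a standalone sketch; /tmp/demo is just a scratch file):
printf 'line1\nline2\n' > /tmp/demo
echo $(cat /tmp/demo)     # newlines collapsed: prints "line1 line2" on one line
echo "$(cat /tmp/demo)"   # quoted: the two lines are preserved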
Try this:
mail -s "Title" "bob#123com" <<EOF
${DataFunction}
EOF
I tried @0xAX's answer and couldn't get it to work properly within a bash script. You simply need to save the output of DataFunction() in a variable. You can use these three lines to achieve this:
#!/bin/bash
VAR1=`tail -10 "/folder/file.csv" | head -19`
echo "$VAR1" | mail -s Title Bob@123.com
I have a bash script that sends an email like so ...
/bin/mailx -s "Unsatisfied dependencies report for the [$lc] YUM repo" red@example.com < /tmp/$prog.output
... when the email arrives in Outlook, I get that stupid message about "Extra line breaks in this message were removed". I tried running unix2dos on the /tmp/$prog.output file, but that results in the report being sent as a binary attachment.
Is there anything I can do from my bash script to prevent the annoying "Extra line breaks in this message were removed" message?
From the Outlook documentation:
"By default, the Auto Remove Line Breaks feature in Outlook is enabled. This causes the line breaks to be removed. Any two or more successive line breaks are not removed."
So maybe doubling up your new lines would avoid that.
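For example, a hedged sketch based on the command from the question, using sed's G command (which appends a blank line after every line of the report):
sed G /tmp/$prog.output | /bin/mailx -s "Unsatisfied dependencies report for the [$lc] YUM repo" red@example.com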
In my situation I was able to use awk:
echo "$variable" | awk '{ print $0" " }' | mail -s ....
This reads every single line, and re-prints it with three spaces at the end. Outlook is now happy.
https://blog.dhampir.no/content/outlook-removes-extra-line-breaks-from-plain-text-emails-how-to-stop-it
I am trying to redirect emails that match a particular pattern to a shell script which will create files containing their text, with date-stamped filenames.
First, here is the routine from .procmailrc that hands the emails off to the script:
:0c:
* Subject: ^Ingest_q.*
| /home/myname/procmail/process
and here is the script 'process':
#!/bin/bash
DATE=`date +%F_%N`
FILE=/home/myname/procmail/${DATE}_email.txt
while read line
do
echo "$line" 1>>"$FILE";
done
I have gotten very frustrated with this because I can pipe text to this script on the command line and it works fine:
mybox-248: echo 'foo' | process
mybox-249: ls
2013-07-31_856743000_email.txt process
The file contains the word 'foo.'
I have been trying to get an email text to get output as a date-stamped file for hours now, and nothing has worked.
(I've also turned logging on in my .procmailrc and that isn't working either -- I'm not trying to ask a second question by mentioning that, just wondering if that might provide some hint as to what I might be doing wrong ...).
Thanks,
GB
Quoting your attempt:
:0c:
* Subject: ^Ingest_q.*
| /home/myname/procmail/process
The regex is wrong: ^ only matches at the beginning of a line, so it cannot occur after Subject:. Try this instead:
:0c:process.lock
* ^Subject: Ingest_q
| /home/myname/procmail/process
I also specified a named lockfile; I do not believe Procmail can infer a lock file name from just a script name. As you might have multiple email messages being delivered at the same time, and you don't want their logging intermingled in the log file, using a lock file is required here.
Finally, the trailing .* in the regex is completely redundant, so I removed it.
(The olde Procmail mini-FAQ also addresses both of these issues.)
I realize your recipe is probably just a quick test before you start on something bigger, but the entire recipe invoking the process script can be completely replaced by something like
MAILDIR=/home/myname/procmail
DATE=`date +%F_%N`
:0c:
${DATE}_email.txt
This will generate Berkeley mbox format, i.e. each message should have a From_ pseudo-header before the real headers; if you are not sure whether this is already the case, you should probably use procmail -Yf- to make it so (otherwise there is really no way to tell where one message ends and another begins; this applies both to your original solution and to this replacement).
Because Procmail sees the file name you are delivering to, it can infer a lockfile name now, as a minor bonus.
Using MAILDIR to specify the directory is the conventional way to do this, but you can specify a complete path to an mbox file if you prefer, of course.
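For example, a sketch of the same delivery using a complete path instead of MAILDIR:
DATE=`date +%F_%N`

:0c:
/home/myname/procmail/${DATE}_email.txt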
I need to read certain data using curl. I'm basically reading keywords from a file:
while read line
do
curl 'https://gdata.youtube.com/feeds/api/users/'"${line}"'/subscriptions?v=2&alt=json' \
> '/home/user/archive/'"$line"
done < textfile.txt
Anyway, I haven't found a way to form the URL for curl so that it works. I've tried just about every possible single- and double-quoted version. I've tried basically:
'...'"$line"'...'
"..."${line}"..."
'...'$line'...'
and so on. Just name it and I'm pretty sure that I've tried it.
When I print out the URL, in the best case it is formed as:
/subscriptions?v=2&alt=jsoneeds/api/users/KEYWORD FROM FILE
or something similar. If you know what could be the cause of this I would appreciate the information. Thanks!
It's not a quoting issue. The problem is that your keyword file is in DOS format -- that is, each line ends with carriage return & linefeed (\r\n) rather than just linefeed (\n). The carriage return is getting read into the line variable, and included in the URL. The giveaway is that when you echo it, it appears to print:
/subscriptions?v=2&alt=jsoneeds/api/users/KEYWORD FROM FILE
but it's really printing:
https://gdata.youtube.com/feeds/api/users/KEYWORD FROM FILE
/subscriptions?v=2&alt=json
...with just a carriage return between them, so the second overwrites the first.
So what can you do about it? Here's a fairly easy way to trim the CR from the end of each line:
cr=$'\r'
while read line
do
line="${line%$cr}"
curl "https://gdata.youtube.com/feeds/api/users/${line}/subscriptions?v=2&alt=json" \
> "/home/user/archive/$line"
done < textfile.txt
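Alternatively (a hedged sketch, not from the answer above), you could strip the carriage returns from the whole file once, before the loop:
tr -d '\r' < textfile.txt > textfile.unix && mv textfile.unix textfile.txt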
Your current version should work, I think. More elegant is to use a single pair of double quotes around the whole URL with the variable in ${}:
"https://gdata.youtube.com/feeds/api/users/${line}/subscriptions?v=2&alt=json"
Just use it like this; it should be sufficient:
curl "https://gdata.youtube.com/feeds/api/users/${line}/subscriptions?v=2&alt=json" > "/home/user/archive/${line}"
If your shell gives you issues with & just put \&, but it works fine for me without it.
If the data from the file can contain spaces and you have no objection to spaces in the file name in the /home/user/archive directory, then what you've got should be OK.
Given the contents of the rest of the URL, you could even just write:
while read line
do
curl "https://gdata.youtube.com/feeds/api/users/${line}/subscriptions?v=2&alt=json" \
> "/home/user/archive/${line}"
done < textfile.txt
where strictly the ${line} could be just $line in both places. This works because the strings are fixed and don't contain shell metacharacters.
Since your code is close to this, but you claim that you're seeing the keywords from the file in the wrong place, a little rewriting for ease of debugging may be in order:
while read line
do
url="https://gdata.youtube.com/feeds/api/users/${line}/subscriptions?v=2&alt=json"
file="/home/user/archive/${line}"
curl "$url" > "$file"
done < textfile.txt
Since the strings may end up containing spaces, it seems (do you need to encode spaces as + in the URL?), the quotes around the variables are strongly recommended. You can now run the script with sh -x (or add a set -x line to the script) and see what the shell thinks it is doing as it does it.