Calling another variable inside a variable assignment [duplicate] - bash

This question already has answers here:
Are shell scripts sensitive to encoding and line endings?
(14 answers)
Closed last month.
My line of code is:
while IFS= read -r line
do
echo "$line"
echo "**"
CURL_REQ=$(curl -is "${line}" -H 'Pragma: akamai-x-get-extracted-values' | grep 'AKA_PM_PROPERTY_NAME')
echo "$line"
echo "$CURL_REQ"
After doing some trial and error, I now understand that inside the variable assignment of CURL_REQ,
${line} is not being recognized.
However, echo "$line" works perfectly, without any issues, outside the variable assignment.
When I replace "${line}" with a static hostname, the curl works fine and I can see the value in echo "$CURL_REQ"
Reproducible example:
#!/bin/bash
INPUT_FILE="/domain.txt"
EXPECTED_HEADER="AKA_PM_PROPERTY_NAME"
declare -i COUNT=1
echo "*************************************************************************"
while IFS= read -r line
do
CURL_REQ=$(curl -is ${line} -H 'Pragma: akamai-x-get-extracted-values' | grep 'AKA_PM_PROPERTY_NAME')
echo "$line"
echo $CURL_REQ
PROP_VALUE=$(echo ${CURL_REQ} | cut -c 51-)
echo $PROP_VALUE
done < "$INPUT_FILE"
Input file domain.txt should contain the lines below :
myntra.com
ndtv.com

@mhs Based on your provided script, it seems to be working for me - I have only changed INPUT_FILE="/domain.txt" to INPUT_FILE="./domain.txt" to read the file from the same directory I am running the script in.
Here are the details; please compare them with yours and you'll find the issue on your side.
IMHO the domain.txt file's content or permissions (accessibility) are the problem.
Note: The script.sh and domain.txt files are both in the same directory (my user's home directory) and have the proper permissions for the user (my user) running the script.
If you can't find the issue, I suggest reformatting your question and providing a step-by-step description with details, as I have in the following.
The script.sh file content I'm running:
#!/bin/bash
INPUT_FILE="./domain.txt"
EXPECTED_HEADER="AKA_PM_PROPERTY_NAME"
declare -i COUNT=1
echo "*************************************************************************"
while IFS= read -r line
do
CURL_REQ=$(curl -is ${line} -H 'Pragma: akamai-x-get-extracted-values' | grep 'AKA_PM_PROPERTY_NAME')
echo "$line"
echo $CURL_REQ
PROP_VALUE=$(echo ${CURL_REQ} | cut -c 51-)
echo $PROP_VALUE
done < "$INPUT_FILE"
The domain.txt file content:
myntra.com
ndtv.com
Here are the files permissions:
-rw-r--r-- 1 vrej vrej 20 Jan 5 17:18 domain.txt
-rwxr-xr-x 1 vrej vrej 440 Jan 5 17:18 script.sh
The output that I get is:
*************************************************************************
myntra.com
X-Akamai-Session-Info: name=AKA_PM_PROPERTY_NAME; value=www.myntra.com_ssl
value=www.myntra.com_ssl
ndtv.com
X-Akamai-Session-Info: name=AKA_PM_PROPERTY_NAME; value=www.ndtv.com-EON-1501
value=www.ndtv.com-EON-1501
My BASH version is:
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
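Given the duplicate link at the top ("Are shell scripts sensitive to encoding and line endings?"), one more thing worth checking is the line endings of domain.txt: if it was saved with Windows (CRLF) line endings, each line read carries an invisible trailing carriage return that becomes part of the URL passed to curl, while echo "$line" still looks normal on screen. A minimal sketch of stripping it (the sample value simulates one such line):

```shell
#!/bin/bash
# Simulate a line read from a CRLF-terminated file: the \r is part of the value.
line=$'myntra.com\r'
line=${line%$'\r'}      # strip a trailing carriage return, if present
printf '%s\n' "$line"   # prints: myntra.com
```

Running cat -v domain.txt will show a ^M at the end of each line if this is the problem.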


How to move all files in a subfolder except certain files [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
I usually download with filezilla in the directory /Public/Downloads on my nas.
I made a script executed by filezilla when download queue is finished, so all my downloads are moved to /Public/Downloads/Completed. My directory /Public/Downloads contains also two files and three directories that must not be moved.
folder.jpg
log.txt
Temp
Cache
Completed
I tried this command:
find /Public/Downloads/* -maxdepth 1 | grep -v Completed | grep -v Cache | grep -v Temp | grep -v log.txt | grep -v folder.jpg | xargs -i mv {} /Public/Downloads/Completed
This works for downloaded files and folders named without special characters: they are moved to /Public/Downloads/Completed
But when a name contains a <space>, an à, or some other special character, xargs complains: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
I've searched for a solution myself but haven't found anything for my needs combining find, grep and xargs for files and directories.
How do I have to modify my command?
This is just a suggestion to change your strategy, and it is not about xargs. You only need the bash shell, with mv as the only external tool.
#!/usr/bin/env bash
shopt -s nullglob extglob
array=(
folder.jpg
log.txt
Temp
Cache
Completed
)
to_skip=$(IFS='|'; printf '%s' "*@(${array[*]})")
for item in /Public/Downloads/*; do
[[ $item == $to_skip ]] && continue
echo mv -v "$item" /Public/Downloads/Completed/ || exit
done
Remove the echo if you think that the output is correct.
Add set -x (after the shebang) to see what the code is doing, or run bash -x my_script, assuming my_script is the name of your script.
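A quick demo of how the skip pattern works (the paths below are illustrative): with extglob enabled, a pattern like *@(a|b|c) matches any path ending in one of the listed names.

```shell
#!/bin/bash
shopt -s extglob
array=(folder.jpg log.txt Temp Cache Completed)
# Join the array with | inside an extglob alternation group.
to_skip=$(IFS='|'; printf '%s' "*@(${array[*]})")
# to_skip is now: *@(folder.jpg|log.txt|Temp|Cache|Completed)
[[ /Public/Downloads/log.txt == $to_skip ]] && echo "skip"    # prints: skip
[[ /Public/Downloads/movie.mkv == $to_skip ]] || echo "move"  # prints: move
```

Note the right-hand side of == stays unquoted so it is treated as a pattern, not a literal string.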
So I've changed my strategy to a loop using a temporary text file:
#!/bin/bash
ls -p /Public/Downloads | grep -v "Cache/" | grep -v "Temp/" | grep -v "Completed/" | grep -v 'log.txt' | grep -v 'folder.jpg' >/Public/Downloads/Completed/temp.txt
cat /Public/Downloads/Completed/temp.txt |\
while IFS='' read -r CURRENT || [ -n "$CURRENT" ]; do
mv /Public/Downloads/"$CURRENT" /Public/Downloads/Completed
done
rm /Public/Downloads/Completed/temp.txt
1) I write a list of the directories and files to be moved, via ls, into temp.txt
2) Each line of temp.txt is read into the $CURRENT variable, so each file and directory is moved one by one with mv. $CURRENT is double-quoted in the mv command in case of a space character in the name of the directory or file
3) temp.txt is deleted
Based on OP input, the main issue is file names with "special" characters, including spaces. Two options:
If none of the input file names has an embedded newline (which is the case here), the problem can be addressed by explicitly setting the delimiter to newline (-d '\n'). See below.
If any file name can contain a newline, the whole pipeline has to use zero-terminated strings. That does not seem to be the case here.
find /Public/Downloads/* -maxdepth 1 |
grep -v Completed |
grep -v Cache |
grep -v Temp |
grep -v log.txt |
grep -v folder.jpg |
xargs -d '\n' -i mv {} /Public/Downloads/Completed
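For completeness, here is a sketch of the zero-terminated variant mentioned above (only needed if names may contain newlines). The grep -v filters are folded into find itself, since a plain grep pipeline cannot carry NUL-separated names; SRC stands in for the download directory.

```shell
#!/bin/bash
# NUL-separated pipeline: -print0 and xargs -0 keep each path intact no
# matter what characters it contains (spaces, quotes, even newlines).
SRC=${SRC:-/Public/Downloads}
find "$SRC" -mindepth 1 -maxdepth 1 \
    ! -name Completed ! -name Cache ! -name Temp \
    ! -name log.txt ! -name folder.jpg -print0 |
  xargs -0 -r -I{} mv {} "$SRC"/Completed/
```

The -r flag (GNU xargs) skips running mv entirely when nothing matches.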

How to write a Bash script to edit many text files using the same commands? [duplicate]

This question already has answers here:
Run script on multiple files
(3 answers)
Closed 3 years ago.
I'm very new to bash. I have ten text files that I want to edit with the same line of code.
#!/bin/bash
sed -i -e 's/.\{6\}/&\n/g' -e 's/edit/edit2/g' | tr -d "\n" | sed 's/edit2/edit/g'| grep -o "here.*there" | sed -r '/^.{,100}$/d'
< files 1-10
I know I could use sed -f sed.sh <file1 >file1 but that only works with sed commands and it only works one file at a time?
Do I have to run a loop?
There are some great existing answers on the Unix Stack Exchange that help with your problem. Specifically, this post uses a loop to recursively go through all the files in a particular directory, as follows:
( shopt -s globstar dotglob;
for file in **; do
if [[ -f $file ]] && [[ -w $file ]]; then
sed -i -- 's/foo/bar/g' "$file"
fi
done
)
Note the line shopt -s globstar dotglob;, which allows us to use globbing patterns in the for loop. We also enclose the code in parentheses, running it in a subshell so that the shopt -s globstar dotglob options do not leak into the rest of the session.
If you would like to apply this example to your file, you can just place your files in the current directory, and the code would probably look something like this:
( shopt -s globstar dotglob;
for file in **; do
if [[ -f $file ]] && [[ -w $file ]]; then
tmp=$(mktemp)
sed -e 's/.\{6\}/&\n/g' -e 's/edit/edit2/g' "$file" | tr -d "\n" | sed 's/edit2/edit/g' | grep -o "here.*there" | sed -r '/^.{,100}$/d' > "$tmp" && mv "$tmp" "$file"
fi
done
)
Note that the pipeline now reads "$file" in the first sed (so -i is dropped) and writes the result to a temporary file that is moved back over the original, since a pipeline cannot edit its input file in place.
There is another example given in the code that allows you to pick which files to run on, rather than all the files in a directory, which you can also re-purpose for your code, as given here:
( shopt -s globstar dotglob
sed -i -- 's/foo/bar/g' **baz*
sed -i -- 's/foo/bar/g' **.baz
)
To answer your question about processing each line, you would need a read loop inside your for loop, like so:
while IFS= read -r line; do
printf '%s\n' "$line" | sed -e 's/.\{6\}/&\n/g' -e 's/edit/edit2/g' | tr -d "\n" | sed 's/edit2/edit/g' | grep -o "here.*there" | sed -r '/^.{,100}$/d'
done < "$file"
Although the for loop can be useful for dealing with files in recursive directories, I would recommend against also using another loop to grab lines, since it muddies your code, and there is probably a better way to do it than parsing line by line.
The linked question is a fairly complete guide to many of the cases you may come across, and is also worth a read if you want to learn more.
Hope that helps!
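It is also worth knowing that for a plain substitution, no loop is strictly required: sed accepts several file names in one invocation. A small self-contained demo (file names and contents are made up here just to illustrate; the OP's full pipeline also uses tr and grep, which still needs per-file handling):

```shell
#!/bin/bash
# Demo: one sed command edits many files at once (GNU sed's -i edits in place).
cd "$(mktemp -d)"
printf 'please edit me\n' > file1.txt
printf 'please edit me\n' > file2.txt
sed -i 's/edit/edit2/g' file1.txt file2.txt   # one command, both files
cat file1.txt file2.txt                       # prints "please edit2 me" twice
```

On macOS/BSD sed the in-place flag needs an argument (sed -i '').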
You could use a for loop.
You could use the tool parallel.
Example
Create a set of test files using a for-loop
mkdir -p /tmp/so58333536
cd /tmp/so58333536
for i in 1.txt 2.txt 3.txt 4.txt 5.txt; do echo "The answer is 41" > "$i"; done
cat /tmp/so58333536/*
Now correct your mistake using parallel [1].
mkdir /tmp/so58333536.new
ls /tmp/so58333536/* |parallel "sed 's/41/42/' {} > /tmp/so58333536.new/{/}"
cat /tmp/so58333536.new/*
{} refers to the current file
{/} refers to the name of the current file (path removed)
Read it as: list all files in so58333536, apply the sed command to each file, and write the output to so58333536.new.
[1] Another option is to use sed -i for in-place editing.
Be very careful with this! Mistakes can cause serious damage!
# !! Do not use -i option regularly !!
ls /tmp/so58333536/* |parallel "sed -i 's/41/42/'"

Collecting Cron Tab entries across servers/users

Struggling with this for a couple of days. Trying to create a space delimited list of $host $useraccount $crontab entries.
I've tried a couple of different ways, each ending in a different level of disaster. The closest I've come is this - someone please point out the obvious thing I'm missing.
#!/usr/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=/pathto/LPAR.txt
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
for user in $(ssh -n $LPAR /opt/freeware/bin/ls /var/spool/cron/crontabs);
do
while read line;
do
echo "$LPAR $user $line"
done <"$(ssh -n "$LPAR" /opt/freeware/bin/tail -n +29 /var/spool/cron/crontabs/$user)"
done
fi
done <$LPARLIST
It seems to be treating the output of the tail as a file name to redirect from:
./crons.sh: line 11: (Several pages of cropped cron entries): File name too long
./crons.sh: line 11: : No such file or directory
./crons.sh: line 11: #
This is working for me.
#!/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=LPAR.txt
cat $LPARLIST |
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
for user in $(ssh -n root@$LPAR ls /var/spool/cron/crontabs);
do
ssh -n "root@$LPAR" tail -n +29 /var/spool/cron/crontabs/$user |
while read line;
do
echo "$LPAR $user $line"
done
done
fi
done
I prefer cat xxx | while ... instead of redirecting input as you did. In theory it should be the same. I did not spot anything specifically wrong and I did not really change anything -- just rearranged what you had.
The advantage of the cat xxx | while ... technique is that you can insert commands between the cat and the while. In this case, I would not do the tail -n +29, because you are guessing that the first 29 lines are junk, and that might not be true. Rather, I would just cat the file and then grep out the lines that start with a hash (#). Again, yes, the cat is redundant, but who really cares; it is more general and easier to add and delete things.
I don't have the /opt packages installed, and I would not depend on them unless absolutely necessary - you are increasing dependencies. So I just used the local "ls" and "tail". I also added an explicit root@, but you don't need that; it just simplified my testing.
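A sketch of that grep suggestion, with a sample string standing in for a real crontab file:

```shell
#!/bin/bash
# Instead of assuming a fixed-size header (tail -n +29), keep only the
# active entries: drop comment lines and blank lines.
crontab_text=$'# DO NOT EDIT THIS FILE\n\n0 2 * * * /usr/local/bin/backup.sh'
printf '%s\n' "$crontab_text" | grep -v '^#' | grep -v '^[[:space:]]*$'
# prints: 0 2 * * * /usr/local/bin/backup.sh
```

This survives crontabs whose headers are shorter or longer than 29 lines.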
Perhaps the ls is overflowing. Try this one:
#!/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=LPAR.txt
cat $LPARLIST |
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
ssh -n root@$LPAR ls /var/spool/cron/crontabs |
while read user
do
ssh -n "root@$LPAR" tail -n +29 /var/spool/cron/crontabs/$user |
while read line;
do
echo "$LPAR $user $line"
done
done
fi
done
Hope this helps...

Stale file descriptor with /dev/stdin

I'm attempting to write a script to loop over entries in .ssh/authorized_keys and do things with them, namely print their fingerprint and append them to a new place. This is what I have so far:
echo "$SSH_KEYS" | while read key ; do
ssh-keygen -lf /dev/stdin <<< "$key"
echo "$key" >> newplace
done
This unfortunately gives me the following error:
/dev/stdin: Stale file handle
I'm running Bash 4.3.11 on Ubuntu 14.04 kernel 3.13.0-24-generic.
On the same kernel running Bash 4.3.8, it works fine. Changing my version of Bash doesn't look to be an option at this point, this is an automated script for something in production.
I found this solution in another question here on Stack Overflow, but it seems not to work with this later version of Bash.
I think you're on the right track, but you want something like:
while read key; do
ssh-keygen -lf /dev/stdin <<< "$key"
echo "$key" >> newplace
done < .ssh/authorized_keys
As opposed to:
echo "$SSH_KEYS" | while read key ; do
ssh-keygen -lf /dev/stdin <<< "$key"
echo "$key" >> newplace
done
Note that instead of piping the output of echo, we simply feed the file directly into the stdin of the while loop.
This worked for me on:
$ bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
I'm also on Ubuntu 14.04, but it seems that someone has also maybe seen this problem: https://github.com/docker/docker/issues/2067
A work-around is to write each key to a tempfile, process it, then remove the file.
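A sketch of that tempfile work-around (the input path and the newplace destination are taken from the question; everything else is illustrative): each key gets written to a real file so ssh-keygen never has to touch /dev/stdin.

```shell
#!/bin/bash
# Work-around for the stale /dev/stdin handle: give ssh-keygen a regular
# file per key instead of a herestring on stdin.
keys_file=${KEYS_FILE:-$HOME/.ssh/authorized_keys}
while IFS= read -r key; do
  [ -z "$key" ] && continue
  tmp=$(mktemp)
  printf '%s\n' "$key" > "$tmp"
  ssh-keygen -lf "$tmp"            # fingerprint from a regular file
  printf '%s\n' "$key" >> newplace
  rm -f "$tmp"
done < "$keys_file"
```

mktemp creates the file with owner-only permissions, so the key material is not exposed more than it already is in authorized_keys.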

creating a file downloading script with checksum verification

I want to create a shell script that reads from a .diz file, which stores information about the various source files needed to compile a certain piece of software (ImageMagick in this case). I am using Mac OS X Leopard 10.5 for these examples.
Basically, I want an easy way to maintain these .diz files, which hold the information for up-to-date source packages. I would just need to update them with URLs, version information and file checksums.
Example line:
libpng:1.2.42:libpng-1.2.42.tar.bz2?use_mirror=biznetnetworks:http://downloads.sourceforge.net/project/libpng/00-libpng-stable/1.2.42/libpng-1.2.42.tar.bz2?use_mirror=biznetnetworks:9a5cbe9798927fdf528f3186a8840ebe
Script part:
while IFS=: read app version file url md5
do
echo "Downloading $app Version: $version"
curl -L -v -O $url 2>> logfile.txt
$calculated_md5=`/sbin/md5 $file | /usr/bin/cut -f 2 -d "="`
echo $calculated_md5
done < "files.diz"
Actually, I have more than just one question concerning this:
How do I best calculate and compare the checksums? I wanted to store MD5 checksums in the .diz file and compare with a string comparison after "cut"ting out the value.
Is there a way to tell curl another filename to save to? (In my case the filename gets ugly: libpng-1.2.42.tar.bz2?use_mirror=biznetnetworks.)
I seem to have issues with the backticks that should direct the output of the piped md5 and cut into the variable $calculated_md5. Is the syntax wrong?
Thanks!
The following is a practical one-liner:
curl -s -L <url> | tee <destination-file> |
sha256sum -c <(echo "a748a107dd0c6146e7f8a40f9d0fde29e19b3e8234d2de7e522a1fea15048e70 -") ||
rm -f <destination-file>
wrapping it up in a function taking 3 arguments:
- the url
- the destination
- the sha256
download() {
curl -s -L $1 | tee $2 | sha256sum -c <(echo "$3 -") || rm -f $2
}
while IFS=: read app version file url md5
do
echo "Downloading $app Version: $version"
#use -o for output file. define $outputfile yourself
curl -L -v $url -o $outputfile 2>> logfile.txt
# use $(..) instead of backticks.
calculated_md5=$(/sbin/md5 "$file" | /usr/bin/cut -f 2 -d "=")
# compare md5
case "$calculated_md5" in
"$md5" )
echo "md5 ok"
echo "do something else here";;
esac
done < "files.diz"
My curl has a -o (--output) option to specify an output file. There's also a problem with your assignment to $calculated_md5. It shouldn't have the dollar sign at the front when you assign to it. I don't have /sbin/md5 here so I can't comment on that. What I do have is md5sum. If you have it too, you might consider it as an alternative. In particular, it has a --check option that works from a file listing of md5sums that might be handy for your situation. HTH.
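A small self-contained demo of the md5sum --check workflow mentioned above (md5sum is GNU coreutils; macOS ships /sbin/md5 instead, so treat this as an alternative rather than the OP's exact tool - file names here are made up):

```shell
#!/bin/bash
# Record a checksum once, then verify the file against it later.
cd "$(mktemp -d)"
printf 'hello\n' > libpng.tar.bz2       # stand-in for a downloaded file
md5sum libpng.tar.bz2 > files.md5       # write "<hash>  libpng.tar.bz2"
md5sum -c files.md5                     # prints: libpng.tar.bz2: OK
```

md5sum -c exits non-zero on a mismatch, so it slots directly into an if or || chain in a download script.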
