if elif else multiple options error - bash

I have a .ini file, with one of the options being:
[LDdecay]
;determine if examining LD decay for SNPs
;"0" is false (no), "1" is true for GBS only, "2" is true for SoySNP50K only, "3" is true for Merge only, "4" is true for all (GBS, SoySNP50K and Merge)
decay=1
I am continuously getting the following error for lines 69, 72, 80, 88 and 96 (essentially anywhere there is an if or elif statement):
[: : integer expression expected
I'm clearly overlooking something, as I have worked with if, elif and else successfully in the past, so anyone who can catch the glitch would be greatly appreciated.
Thanks
#!/bin/bash
#################################################
# #
# A base directory must be created and #
# the config file must be placed in there #
# #
#################################################
#check if config file was supplied
if [ $# -ne 1 ]; then #if number of argument is different than 1
echo -e "Usage: run_pipeline_cfg.sh <configuration file>" #print error message
exit 1 #stop script execution
fi
#Parse the info from the ini file
ini=$(readlink -m "$1")
source <(grep="$ini" | \
sed -e "s/[[:space:]]*\=[[:space:]]*/=/g" \
-e "s/;.*$//" \
-e "s/[[:space:]]*$//" \
-e "s/^[[:space:]]*//" \
-e "s/^\(.*\)=\([^\"']*\)$/\1=\"\2\"/")
#ini="/home/directory/Initiator.ini" #debug
#Retrieve the base directory path
baseDir=$(dirname "$ini")
#Create required directory structure
logs="$baseDir/logs"
LDdecay="$baseDir/LDdecay"
imputed="$baseDir/imputed"
#dont create if already exists
[[ -d "$logs" ]] || mkdir "$logs"
[[ -d "$LDdecay" ]] || mkdir "$LDdecay"
[[ -d "$imputed" ]] || mkdir "$imputed"
#find imputed vcf files
if [ -e $imputed ] ; then
echo -e "Folder with imputed vcf files exists. Determining if calculating LD decay"
else
echo -e "Folder with imputed vcf files does not exist. Cannot calculate LD decay."
exit 1
fi
#######################################################
# #
# Create LD decay files for LD decay plots #
# #
#######################################################
#determine on which files to perform LD decay calculations
#"0" is false (no), "1" is true for GBS only, "2" is true for Microarray only, "3" is true for Integrated only, "4" is true for all (GBS, Microarray and Integrated)
if [ "$decay" -eq 0 ]; then
printf "LD decay rates not calculated" | tee -a $logs/log.txt;
elif [ "$decay" -eq 1 ]; then
#perform LD decay calculation for GBS only
zcat $imputed/GBS_MAF0.01.vcf.gz > $LDdecay/GBS_MAF0.01.vcf
plink --vcf $LDdecay/GBS_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/GBS_LD_decay
rm $LDdecay/GBS_MAF0.01.vcf
printf "LD decay for GBS dataset completed" | tee -a $logs/log.txt
elif [ "$decay" -eq 2 ]; then
#perform LD decay calculation for microarray only
zcat $imputed/microarray_MAF0.01.vcf.gz > $LDdecay/microarray_MAF0.01.vcf
plink --vcf $LDdecay/microarray_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/microarray_LD_decay
rm $LDdecay/microarray_MAF0.01.vcf
printf "LD decay for microarray dataset completed" | tee -a $logs/log.txt
elif [ "$decay" -eq 3 ]; then
#perform LD decay for Merged dataset only
zcat $imputed/Integrated_MAF0.01_sorted.vcf.gz > $LDdecay/Integrated_MAF0.01.vcf
plink --vcf $LDdecay/Integrated_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/Integrated_LD_decay
rm $LDdecay/Integrated_MAF0.01.vcf
printf "LD decay calculation for Integrated dataset complete" | tee -a $logs/log.txt
elif [ "$decay" -eq 4 ]; then
#perform LD decay for Merged, GBS and SoySNP50K datasets
zcat $imputed/Integrated_MAF0.01_sorted.vcf.gz > $LDdecay/Integrated_MAF0.01.vcf
plink --vcf $LDdecay/Integrated_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/Integrated_LD_decay
rm $LDdecay/Integrated_MAF0.01.vcf
zcat $imputed/microarray_MAF0.01.vcf.gz > $LDdecay/microarray_MAF0.01.vcf
plink --vcf $LDdecay/microarray_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/microarray_LD_decay
rm $LDdecay/microarray_MAF0.01.vcf
zcat $imputed/snp_imputed_GBS_MAF0.01.vcf.gz > $LDdecay/GBS_MAF0.01.vcf
plink --vcf $LDdecay/GBS_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/GBS_LD_decay
rm $LDdecay/GBS_MAF0.01.vcf
printf "LD decay calculation completed for Integrated, GBS and SoySNP50K datasets completed" | tee -a $logs/log.txt
else
echo "Wrong LD Decay calculation setup"
exit 1
fi
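For the record, `[: : integer expression expected` is what `test` prints when it is asked to compare an empty string as an integer, i.e. `$decay` is empty by the time the ifs run. As posted, the likely culprit is the `source <(grep="$ini" | ...)` line: `grep="$ini"` is parsed as a plain variable assignment rather than a grep invocation (presumably `grep = "$ini"`, searching for lines containing `=`, was intended), so nothing is sourced and `decay` is never set. A minimal reproduction of the failure mode:

```shell
#!/bin/bash
# An empty variable in an integer test reproduces the reported error.
decay=""
if [ "$decay" -eq 0 ] 2>/dev/null; then
    echo "decay is zero"
else
    # test(1) prints "[: : integer expression expected" on stderr
    # and returns false, so every -eq branch is skipped.
    echo "integer test failed for decay='$decay'"
fi
```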

Consider changing the elif chain to a case statement. It side-steps the need for the tests to have integer arguments, and it is both clearer and easier to extend:
case $decay in
0)
printf "LD decay rates not calculated" | tee -a $logs/log.txt;
;;
1)
#perform LD decay calculation for GBS only
zcat $imputed/GBS_MAF0.01.vcf.gz > $LDdecay/GBS_MAF0.01.vcf
plink --vcf $LDdecay/GBS_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/GBS_LD_decay
rm $LDdecay/GBS_MAF0.01.vcf
printf "LD decay for GBS dataset completed" | tee -a $logs/log.txt
;;
2)
#perform LD decay calculation for microarray only
zcat $imputed/microarray_MAF0.01.vcf.gz > $LDdecay/microarray_MAF0.01.vcf
plink --vcf $LDdecay/microarray_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/microarray_LD_decay
rm $LDdecay/microarray_MAF0.01.vcf
printf "LD decay for microarray dataset completed" | tee -a $logs/log.txt
;;
3)
#perform LD decay for Merged dataset only
zcat $imputed/Integrated_MAF0.01_sorted.vcf.gz > $LDdecay/Integrated_MAF0.01.vcf
plink --vcf $LDdecay/Integrated_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/Integrated_LD_decay
rm $LDdecay/Integrated_MAF0.01.vcf
printf "LD decay calculation for Integrated dataset complete" | tee -a $logs/log.txt
;;
4)
#perform LD decay for Merged, GBS and SoySNP50K datasets
zcat $imputed/Integrated_MAF0.01_sorted.vcf.gz > $LDdecay/Integrated_MAF0.01.vcf
plink --vcf $LDdecay/Integrated_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/Integrated_LD_decay
rm $LDdecay/Integrated_MAF0.01.vcf
zcat $imputed/microarray_MAF0.01.vcf.gz > $LDdecay/microarray_MAF0.01.vcf
plink --vcf $LDdecay/microarray_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/microarray_LD_decay
rm $LDdecay/microarray_MAF0.01.vcf
zcat $imputed/snp_imputed_GBS_MAF0.01.vcf.gz > $LDdecay/GBS_MAF0.01.vcf
plink --vcf $LDdecay/GBS_MAF0.01.vcf --r2 --ld-window 10000000 --ld-window-kb 10000 --ld-window-r2 0 --output $LDdecay/GBS_LD_decay
rm $LDdecay/GBS_MAF0.01.vcf
printf "LD decay calculation completed for Integrated, GBS and SoySNP50K datasets completed" | tee -a $logs/log.txt
;;
*)
echo "Wrong LD Decay calculation setup"
exit 1
;;
esac
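Since `decay` still arrives via the sourced ini file, a guard before the `case` (a hypothetical addition, not part of the original answer) makes an unset or malformed value fail loudly instead of silently falling through to the default branch:

```shell
#!/bin/bash
# Hypothetical guard (not in the original answer): reject anything but 0-4.
decay="3"   # stand-in for the value sourced from the ini file
if [[ ! $decay =~ ^[0-4]$ ]]; then
    echo "decay must be 0, 1, 2, 3 or 4 (got '${decay:-unset}')" >&2
    exit 1
fi
echo "decay=$decay accepted"
```

Validating up front also documents the legal values in one place.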

Related

Issue with Jenkins pipeline script

I am trying to build a pipeline for modifying Kafka topics. Part of the steps includes getting the current number of partitions of the topic to determine whether it can be updated or not. For some reason, Jenkins does not like my use of variables in the script. I am getting an error on --topic ${TOPIC}. The variable is correctly enclosed by {}, so I don't understand what is going on.
error being given:
WorkflowScript: 32: illegal string body character after dollar sign;
solution: either escape a literal dollar sign "\$5" or bracket the value expression "${5}" # line 32, column 36.
--topic ${TOPIC} | head -1)
snippet of pipeline:
script {
sh """
if [ -n "\$(printf '%s\n' ${NUM_PARTITIONS} | sed 's/[0-9]//g')" ]; then
echo 'Num partitions is not numeric'
exit 1
fi
"""
sh """
desc=\$(kafka-topics.sh \
--bootstrap-server ${KAFKA_SERVERS} \
--command-config sasl.properties \
--describe \
--topic ${TOPIC} | head -1)
if [ \$? -ne 0 ]; then
echo 'describe failed'
exit 1
fi
np=\$(echo desc | gawk 'match($0, /PartitionCount:\s*([[:digit:]]*)\s*/, a) {print a[1]}')
if [ ${NUM_PARTITIONS} -le np ]; then
echo 'num partitions <= configured partitions. alter skipped'
else
kafka-topics.sh \
--bootstrap-server ${KAFKA_SERVERS} \
--command-config sasl.properties \
--alter \
--topic ${TOPIC} \
--partitions ${NUM_PARTITIONS}
fi
"""
}
got it working. Weird escaping rules...
script {
sh """
if [ -n "\$(printf '%s\n' ${NUM_PARTITIONS} | sed 's/[0-9]//g')" ]; then
echo 'Num partitions is not numeric'
exit 1
fi
if [ -n "\$(printf '%s\n' ${RETENTION} | sed 's/[0-9]//g')" ]; then
echo 'Retention is not numeric'
exit 1
fi
"""
sh """
desc=\$(${KAFKA_SCRIPTS_LOC}/kafka-topics.sh \
--bootstrap-server ${KAFKA_SERVERS} \
--command-config ${SASL_LOC} \
--describe \
--topic $TOPIC)
if [ \$? -ne 0 ]; then
echo 'describe failed'
exit 1
fi
np=\$(echo \${desc} | head -1 | gawk 'match(\$0, /PartitionCount:\\s*([[:digit:]]*)\\s*/, a) {print a[1]}')
if [ ${NUM_PARTITIONS} -le \${np} ]; then
echo 'num partitions <= configured partitions. alter skipped'
else
${KAFKA_SCRIPTS_LOC}/kafka-topics.sh \
--bootstrap-server ${KAFKA_SERVERS} \
--command-config ${SASL_LOC} \
--alter \
--topic ${TOPIC} \
--partitions ${NUM_PARTITIONS}
fi
"""
}
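One detail both versions lean on: checking `\$?` immediately after `desc=\$(...)` is valid, because a bare assignment takes its exit status from the command substitution it contains. A plain-shell sketch of that behavior:

```shell
#!/bin/bash
# A bare assignment from $(...) passes the inner command's exit status through.
status=0
desc=$(false) || status=$?
echo "status after failing substitution: $status"
desc=$(true)
echo "status after succeeding substitution: $?"
```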

How to pipe aws s3 cp to gzip to be used with "$QUERY" | psql utility

I have following command
"$QUERY" | psql -h $DB_HOST -p $DB_PORT -U $DB_USERNAME $DB_NAME
Where $QUERY is a command that loads files from a bucket, unzip it, and put to the database. It looks like following:
COPY my_table
FROM PROGRAM 'readarray -t files <<<"$(aws s3 ls ${BUCKET_PATH} | tr [:space:] "\n")"; for (( n = ${#files[@]} - 1; n >= 0; n--)); do if [[ ${files[$n]} =~ .csv.gz$ ]]; then aws s3 cp ${BUCKET_PATH}${files[$n]} >(gzip -d -c); break; fi done'
WITH DELIMITER ',' CSV
Here is formatted bash code:
#!/usr/bin/env bash
raw_files=`aws s3 ls ${BUCKET_PATH} | tr [:space:] "\n"`
readarray -t files <<<"$raw_files"
for (( n = ${#files[@]} - 1; n >= 0; n--)); do
if [[ ${files[$n]} =~ .csv.gz$ ]];
then aws s3 cp ${BUCKET_PATH}${files[$n]} >(gzip -d -c);
break; # for test purposes, load just one file instead of all
fi
done
aws-CLI version
#: aws --version
#: aws-cli/1.11.13 Python/3.5.2 Linux/4.13.0-43-generic botocore/1.4.70
This script works. But when I try to use it with psql, it fails, and I cannot understand why.
How can I fix it?
Here is a script that loads data from s3 bucket and merges it to fat file:
#!/usr/bin/env bash
bucket_path=$1
limit_files=$2
target_file_name=$3
echo "Source bucket $bucket_path"
if [ -z $target_file_name ]; then
target_file_name="fat.csv.gz"
echo "Default target file $target_file_name"
fi
echo "Total files $(aws s3 ls $bucket_path | wc -l)"
readarray -t files <<<"$(aws s3 ls $bucket_path | tr [:space:] "\n")"
for (( n = ${#files[@]} - 1, i=1; n >= 0; n--)); do
if [[ ${files[$n]} =~ .csv.gz$ ]]; then
aws s3 cp --quiet $bucket_path${files[$n]} >(cat >> "$target_file_name");
echo "$((i++)), ${files[$n]}, current size: $(du -sh $target_file_name)"
if [ ! -z $limit_files ] && [ $i -gt $limit_files ]; then
echo "Final size $(du -sh $target_file_name)"
exit 0
fi
fi
done
exit 0
It works correctly.
But when I try pipe this fat.csv.gz to psql db using the following code
echo "COPY my_table
FROM PROGRAM 'gzip -d -c fat.csv.gz'
WITH DELIMITER ',' CSV" | psql -h $DB_HOST -p $DB_PORT -U $DB_USERNAME $DB_NAME
I am getting the error:
ERROR: must be superuser to COPY to or from a file
It looks like a peculiarity of how pg works (I guess it's due to security reasons) - link
So, the problem now is that I don't know how to rework my script to pipe the fat.csv.gz. I cannot get such a privilege and have to find a workaround.
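For reference, the standard workaround for that error is `COPY ... FROM STDIN`: the data is read from the client connection, so no superuser rights are needed, and the decompression happens on the client side of the pipe. A sketch using the table and connection variables from the question (wrapped in a function here purely for illustration):

```shell
#!/bin/bash
# Sketch: stream the decompressed CSV into COPY ... FROM STDIN, which runs
# with the client's privileges (no superuser needed, unlike COPY FROM PROGRAM).
# DB_HOST, DB_PORT, DB_USERNAME, DB_NAME and my_table are taken from the question.
load_fat_csv() {
    gzip -d -c fat.csv.gz |
        psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USERNAME" "$DB_NAME" \
             -c "COPY my_table FROM STDIN WITH DELIMITER ',' CSV"
}
```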
I finally wrote the following bash script, which downloads files from s3, merges them into 50MB archives and pipes them to pg in sub processes. Hope it will be helpful for somebody:
get_current_timestamp() (
date '+%s.%N'
)
execute_sql() (
write_log "Importing data from s3 to pg..."
import_data_from_s3 "$EVENTS_PATH"
write_log "Importing data from s3 to pg...done"
)
columns() (
local columns=`echo "SELECT array_to_string(
array(SELECT column_name::text
FROM information_schema.columns
WHERE table_name ILIKE '${TMP_TABLE}'
AND column_name NOT ILIKE '${DATE_FIELD}'), ',')" | \
psql --tuples-only -h $DB_HOST -p $DB_PORT -U $DB_USERNAME $DB_NAME`
echo -n "${columns}"
)
get_timestamp_difference() (
FROM=$1
TO=$2
echo $FROM $TO | awk '{
diff = $2-$1
if (diff >= 86400) {
printf "%i days ", diff/86400
}
if (diff >= 3600) {
printf "%i hours ", (diff/3600)%24
}
if (diff >= 60) {
printf "%i mins ", (diff/60)%60
}
printf "%f secs", diff%60
}'
)
pretty_size() (
if [ ! -z $1 ]; then
local size=$1;
else
local size=`cat <&0`;
fi
echo "${size}" | \
awk '{ \
split( "B KB MB GB" , v ); \
s=1; \
while( $1>=1024 ) { \
$1/=1024; s++ \
} \
printf "%.1f%s", $1, v[s] \
}' | \
add_missing_eol >&1
)
import_data_from_s3() (
local bucket_path=$1
local limit_files=$2
local target_file_name=$3
write_log "Source bucket $bucket_path"
if [ -z ${target_file_name} ]; then
target_file_name="fat.csv.gz"
write_log "Default target file $target_file_name"
fi
if [ ! -z ${limit_files} ]; then
write_log "Import ${limit_files} files"
else
write_log "Import all files"
fi
write_log "Total files $(aws s3 ls $bucket_path | wc -l)"
readarray -t files <<<"$(aws s3 ls $bucket_path | tr [:space:] "\n")"
write_log "Remove old data files..."
find . -maxdepth 1 -type f -name "*${target_file_name}" -execdir rm -f {} +;
write_log "Remove old data files...done"
TMP_TABLE_COLUMNS=$(columns)
write_log "Importing columns: ${TMP_TABLE_COLUMNS}"
declare -A pids
local total_data_amount=0
local file_size_bytes=0
local size_limit=$((50*1024*1024))
for (( n = ${#files[@]} - 1, file_counter=1, fat_file_counter=1; n >= 0; n--)); do
if [[ ! ${files[$n]} =~ .csv.gz$ ]]; then continue; fi
file="${fat_file_counter}-${target_file_name}"
aws s3 cp --quiet ${bucket_path}${files[$n]} >(cat >> "${file}");
file_size_bytes=$(stat -c%s "$file")
if [ $file_size_bytes -gt $size_limit ]; then
import_zip "${file}" "$(pretty_size ${file_size_bytes})" & pids["${file}"]=$!;
total_data_amount=$((total_data_amount+file_size_bytes))
write_log "Files read: ${file_counter}, total size(zipped): $(pretty_size ${total_data_amount})"
((fat_file_counter++))
fi
# write_log "${file_counter}, ${files[$n]}, current size: $(du -sh $file)"
if [ ! -z ${limit_files} ] && [ ${file_counter} -gt ${limit_files} ]; then
write_log "Final size $(du -sh ${file})"
if [ ! ${pids["${file}"]+0} ]; then
import_zip "${file}" "$(pretty_size ${file_size_bytes})" & pids["${file}"]=$!;
fi
break;
fi
((file_counter++))
done
# import rest file that can less than limit size
if [ ! ${pids["${file}"]+0} ]; then
import_zip "${file}" "$(pretty_size ${file_size_bytes})" & pids["${file}"]=$!;
fi
write_log "Waiting for all pids: ${pids[*]}"
for pid in ${pids[*]}; do
wait $pid
done
write_log "All sub process have finished. Total size(zipped): $(pretty_size ${total_data_amount})"
)
import_zip() (
local file=$1
local size=$2
local start_time=`get_current_timestamp`
write_log "pid: $!, size: ${size}, importing ${file}...";
gzip -d -c ${file} | \
psql --quiet -h ${DB_HOST} -p ${DB_PORT} -U ${DB_USERNAME} ${DB_NAME} \
-c "COPY ${TMP_TABLE}(${TMP_TABLE_COLUMNS})
FROM STDIN
WITH DELIMITER ',' CSV";
rm $file;
local end_time=`get_current_timestamp`
write_log "pid: $!, time: `get_timestamp_difference ${start_time} ${end_time}`, size: ${size}, importing ${file}...done";
)

Build a bash command with conditional parameters and quoted parameters

I need to build a bash command in a script depending on some quoted or normal parameters. For example:
BAYES)
class="weka.classifiers.bayes.BayesNet"
A="-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E"
B="weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5" ;;
LOGISTIC)
class="weka.classifiers.functions.Logistic"
A="-R 1.0E-8 -M -1 -num-decimal-places 4" ;;
SIMPLELOG)
class="weka.classifiers.functions.SimpleLogistic"
A="-I 0 -M 500 -H 50 -W 0.0" ;;
SMO)
class="weka.classifiers.functions.SMO"
A="-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K"
A1="weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0" ;;
IBK)
class="weka.classifiers.lazy.IBk"
A="-K 1 -W 0 -A "
A1="weka.core.neighboursearch.LinearNNSearch -A"
A2="weka.core.EuclideanDistance -R first-last" ;;
KSTAR)
class="weka.classifiers.lazy.KStar"
A="-B 20 -M a" ;;
...
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" $class -s $i -t "$file" $A "$A1" $B "$B1"
However, my problem is that in some conditions, when $A1 is empty, the "$A1" parameter is not valid. The same with "$B1". And the parameters could come in any combination ($A1 with $B1, $A1 without $B1, ...).
Also I've tried include $A1 in $A as following:
A="-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K \"weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0\""
and execute:
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" $class -s $i -t "$file" $A
but this doesn't work.
You cannot safely and reliably store multiple arguments in a single string; you need to use arrays; this is their intended use case. Make sure to initialize any arrays that won't be used, so that they "disappear" when expanded.
# If A is an undefined scalar, "$A" expands to one empty-string argument.
# But with A=(), "${A[@]}" simply disappears from the command line.
A=()
B=()
B1=()
A1=()
A2=()
case $something in
BAYES)
class="weka.classifiers.bayes.BayesNet"
A=(-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E)
B=(weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5);;
LOGISTIC)
class="weka.classifiers.functions.Logistic"
A=(-R 1.0E-8 -M -1 -num-decimal-places 4);;
SIMPLELOG)
class="weka.classifiers.functions.SimpleLogistic"
A=(-I 0 -M 500 -H 50 -W 0.0) ;;
SMO)
class="weka.classifiers.functions.SMO"
A=(-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K)
A1=(weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0) ;;
IBK)
class="weka.classifiers.lazy.IBk"
A=(-K 1 -W 0 -A)
A1=(weka.core.neighboursearch.LinearNNSearch -A)
A2=(weka.core.EuclideanDistance -R first-last);;
KSTAR)
class="weka.classifiers.lazy.KStar"
A=(-B 20 -M a) ;;
esac
and always quote parameter expansions.
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" \
"$class" -s "$i" -t "$file" "${A[@]}" "${A1[@]}" "${B[@]}" "${B1[@]}"
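The disappearing-when-empty behavior is easy to see with `printf`, which prints one bracketed line per argument it actually receives (sample values made up for the demo):

```shell
#!/bin/bash
# An empty array expands to zero words; a populated one contributes its elements.
A=()
B=(-x 1)
printf '<%s>\n' cmd "${A[@]}" "${B[@]}"
# prints <cmd>, <-x>, <1> -- nothing at all for A
```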
SOLUTION:
I solved all my problems using only a parameter A like this:
BAYES)
class="weka.classifiers.bayes.BayesNet"
A=(-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5);;
SMO)
class="weka.classifiers.functions.SMO"
A=(-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K "weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0");;
java -Xmx"$mem"m -cp "$WEKA_INSTALL_DIR/weka.jar" $class -s $i -t "$file" "${A[@]}"
From your question, I did:
Initialized the variables
Completed the case statement
Removed some not required double quotes
Defined some variables for which you did not provide values for
Backslashed your double quotes if you must have them in the java command
If you need double quotes for certain variables, put them in the variables. This way you will not have "" in your java command if the variable is empty. I did this for A1 in case IBK.
This will get you started, modify as required:
#!/bin/bash
#
mem="512"
WEKA_INSTALL_DIR='/opt/weka'
class=""
i="value-of-i"
A=""
A1=""
B=""
B1=""
file="SOMEFILE"
case $1 in
'BAYES')
class="weka.classifiers.bayes.BayesNet"
A="-D -Q weka.classifiers.bayes.net.search.local.K2 -- -P 1 -S BAYES -E"
B="weka.classifiers.bayes.net.estimate.SimpleEstimator -- -A 0.5"
;;
'LOGISTIC')
class="weka.classifiers.functions.Logistic"
A="-R 1.0E-8 -M -1 -num-decimal-places 4"
;;
'SIMPLELOG')
class="weka.classifiers.functions.SimpleLogistic"
A="-I 0 -M 500 -H 50 -W 0.0"
;;
'SMO')
class="weka.classifiers.functions.SMO"
A="-C 1.0 -L 0.001 -P 1.0E-12 -N 0 -V -1 -W 1 -K"
A1="weka.classifiers.functions.supportVector.PolyKernel -C 250007 -E 1.0"
;;
'IBK')
class="weka.classifiers.lazy.IBk"
A="-K 1 -W 0 -A "
A1="\"weka.core.neighboursearch.LinearNNSearch -A\""
A2="weka.core.EuclideanDistance -R first-last"
;;
'KSTAR')
class="weka.classifiers.lazy.KStar"
A="-B 20 -M a"
;;
*)
# default options
;;
esac
echo java -Xmx${mem}m -cp $WEKA_INSTALL_DIR/weka.jar $class -s $i -t $file $A $A1 $B $B1
Example:
./test.bash LOGISTIC
java -Xmx512m -cp /opt/weka/weka.jar weka.classifiers.functions.Logistic -s value-of-i -t SOMEFILE -R 1.0E-8 -M -1 -num-decimal-places 4
./test.bash IBK
java -Xmx512m -cp /opt/weka/weka.jar weka.classifiers.lazy.IBk -s value-of-i -t SOMEFILE -K 1 -W 0 -A "weka.core.neighboursearch.LinearNNSearch -A"
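One caveat on the embedded-quote trick: it only looks right because the final command is `echo`ed rather than executed. If the line were run directly, the backslashed quotes would reach java as literal characters and the string would still be split on spaces, which is exactly what the arrays in the earlier answer avoid. A quick illustration:

```shell
#!/bin/bash
# The embedded quotes are literal characters, and the unquoted expansion
# is still split on whitespace -- one printed line per resulting word.
A1="\"weka.core.neighboursearch.LinearNNSearch -A\""
printf '<%s>\n' $A1
# prints <"weka.core.neighboursearch.LinearNNSearch> and <-A">
```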

Bash: execute command repeatedly and write output tab seperated to file

I want my file to log like this:
Time_Namelookup: 0,1 0,2 0,12 0,45 ...
Time_Connect: 0,34 0,23 0,23 0,11 ...
Time_Starttransfer: 0,9 0,23 0,12 ...
I want values to be added to their specific line every n seconds.
I got a code like this:
while [ "true" ]
do
echo "Time_Namelookup:" >> $file
curl -w "%{time_namelookup}\t" -o /dev/null -s https://website.com/
echo "Time_connect" >> $file
curl -w "%{time_connect}\t" -o /dev/null -s https://website.com/
echo "Time_Starttransfer:" >> $file
curl -w "%{time_starttransfer}\t" -o /dev/null -s https://website.com/
sleep 5
done
But I get something like
Time_Namelookup: 0,1
Time_Connect: 0,34
Time_Starttransfer: 0,9
Time_Namelookup: 0,2
Time_Connect:0,23 0,23
Time_Starttransfer: 0,23
Time_Namelookup: 0,45
Time_Connect: 0,11
Time_Starttransfer: 0,12
Can you help me ?
You could put this inside your loop
if [ ! -f $file ]; then
echo "Time_Namelookup:" > $file
echo "Time_Connect:" >> $file
echo "Time_Starttransfer:" >> $file
fi
name_lookup=$(curl -w "%{time_namelookup}\t" -o /dev/null -s https://website.com/)
connect=$(curl -w "%{time_connect}\t" -o /dev/null -s https://website.com/)
starttransfer=$(curl -w "%{time_starttransfer}\t" -o /dev/null -s https://website.com/)
sed -i -e "s/\(Time_Namelookup:.*\)/\1 $name_lookup/" \
-e "s/\(Time_Connect:.*\)/\1 $connect/" \
-e "s/\(Time_Starttransfer:.*\)/\1 $starttransfer/" \
$file
You can try this:
file=yourFile
echo "Time_Namelookup :" >> $file
echo "Time_Connect :" >> $file
echo "Time_Starttransfer:" >> $file
while [ "true" ]
do
time_namelookup=$(curl -w "%{time_namelookup}\t" -o /dev/null -s https://website.com/)
time_connect=$(curl -w "%{time_connect}\t" -o /dev/null -s https://website.com/)
time_starttransfer=$(curl -w "%{time_starttransfer}\t" -o /dev/null -s https://website.com/)
sed -i "/Time_Namelookup :/s/$/\t$time_namelookup/" $file
sed -i "/Time_Connect :/s/$/\t$time_connect/" $file
sed -i "/Time_Starttransfer:/s/$/\t$time_starttransfer/" $file
sleep 5
done
sed -i : edit the file in-place.
/Time_Namelookup :/ : the address, which finds the line containing "Time_Namelookup :".
s/$/\t$time_namelookup/ : the substitution, where
s : is the substitute command,
/../../ : are the delimiters,
$ : is the end-of-line anchor, and
\t$time_namelookup : appends a tab and the value.
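The steps above can be seen end-to-end on a throwaway file (the `mktemp` path and the sample value stand in for the real log file and curl output):

```shell
#!/bin/bash
# Demo on a temporary file: append a tab-separated value to a labelled line.
file=$(mktemp)
printf 'Time_Namelookup :\n' > "$file"
time_namelookup="0.12"          # sample value instead of a live curl call
sed -i "/Time_Namelookup :/s/$/\t$time_namelookup/" "$file"
cat "$file"                     # Time_Namelookup :<TAB>0.12
rm -f "$file"
```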

result of "less than" operator in bash script

I'm trying to do a script which downloads a file, and if it takes less than 'x' seconds it's a pass, but for some reason I can't get it to work properly. I've tried '<=' and '-lt'; in the example below it's always 'FAST', i.e. 2.09 FAST
#!/bin/bash
file=$(time -p wget -O /dev/null -q http://site/file.iso ) 2>&1 | grep real | sed -e s/real//g -e s/' '//g
if [ $file <= 1 ]
then
echo "FAST"
else
echo "SLOW"
fi
The bash builtin time outputs directly to your terminal, not to a stdio channel.
You'll need to use /bin/time which uses stderr:
$ time -p sleep 1 >/dev/null 2>&1
real 1.00
user 0.00
sys 0.00
$ /bin/time -p sleep 1
real 1.00
user 0.00
sys 0.00
$ /bin/time -p sleep 1 2>/dev/null
$
So:
$ command time -p sleep 1 2>&1 | awk -v limit=0.5 '$1 == "real" {exit ($2 <= limit)}'
$ echo $?
0
$ command time -p sleep 1 2>&1 | awk -v limit=1.5 '$1 == "real" {exit ($2 <= limit)}'
$ echo $?
1
and then
limit=1 # 1 second
if command time -p wget -O /dev/null -q http://site/file.iso 2>&1 |
awk -v lmt=$limit '$1 == "real" {exit ($2 > lmt)}'
then
echo "FAST"
else
echo "SLOW"
fi
To capture the output of time you need curly braces:
file=$( { time -p wget -O /dev/null -q http://site/file.iso ; } 2>&1 | grep real | sed -e s/real//g -e s/' '//g )
Bash can't do floating-point calculations, so you can use bc for that:
[[ $( echo $file' <= 1.0' | bc ) == 1 ]] && echo FAST || echo SLOW
