How to round a float in Bash (to a given number of decimal places)?

I want to round my float variables so that the sum of these variables equals 1. Here is my program:
for float in 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25; do
    w1=`echo "1.0 - $float" | bc -l`
    w2=`echo "$w1/3" | bc -l`
    echo "$w2 0.0 $w2 0.0 0.0 0.0 $w2 $float 0.0 0.0 0.0 0.0"
done
Where the sum 3*$w2 + $float has to be 1.00.
I'm a beginner but I need this to compute some results.
I have already tried what I found on the internet to round w2, but I couldn't make it work. It has to be rounded, not truncated, for the final result to be 1.00.

bc lets you use variables, so you can say:
for float in 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25; do
    { read w2; read f; } < <(
        bc -l <<< "scale=5; w2=(1.0-$float)/3; w2; 1.0-3*w2"
    )
    echo "$w2 0.0 $w2 0.0 0.0 0.0 $w2 $f 0.0 0.0 0.0 0.0"
done
.33333 0.0 .33333 0.0 0.0 0.0 .33333 .00001 0.0 0.0 0.0 0.0
.33300 0.0 .33300 0.0 0.0 0.0 .33300 .00100 0.0 0.0 0.0 0.0
.33000 0.0 .33000 0.0 0.0 0.0 .33000 .01000 0.0 0.0 0.0 0.0
.32500 0.0 .32500 0.0 0.0 0.0 .32500 .02500 0.0 0.0 0.0 0.0
.31666 0.0 .31666 0.0 0.0 0.0 .31666 .05002 0.0 0.0 0.0 0.0
.30833 0.0 .30833 0.0 0.0 0.0 .30833 .07501 0.0 0.0 0.0 0.0
.30000 0.0 .30000 0.0 0.0 0.0 .30000 .10000 0.0 0.0 0.0 0.0
.29166 0.0 .29166 0.0 0.0 0.0 .29166 .12502 0.0 0.0 0.0 0.0
.28333 0.0 .28333 0.0 0.0 0.0 .28333 .15001 0.0 0.0 0.0 0.0
.27500 0.0 .27500 0.0 0.0 0.0 .27500 .17500 0.0 0.0 0.0 0.0
.26666 0.0 .26666 0.0 0.0 0.0 .26666 .20002 0.0 0.0 0.0 0.0
.25833 0.0 .25833 0.0 0.0 0.0 .25833 .22501 0.0 0.0 0.0 0.0
.25000 0.0 .25000 0.0 0.0 0.0 .25000 .25000 0.0 0.0 0.0 0.0
Adjust scale=? as required.
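A quick way to sanity-check one row (a sketch; the two values are taken from the first output line above):
w2=.33333; f=.00001
bc -l <<< "3*$w2 + $f"    # prints 1.00000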

From your comment on your question you say it's acceptable to alter the float variable so as to have a sum equal to 1. In that case, first compute w2 and then recompute float from it:
w2=$(bc -l <<< "(1-($float))/3")
float=$(bc -l <<< "1-3*($w2)")
The whole thing, written in a better style:
floats=( 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25 )
for float in "${floats[@]}"; do
    w2=$(bc -l <<< "(1-($float))/3")
    float=$(bc -l <<< "1-3*($w2)")
    printf "%s 0.0 %s 0.0 0.0 0.0 %s %s 0.0 0.0 0.0 0.0\n" "$w2" "$w2" "$w2" "$float"
done
This uses the precision provided by bc -l (20 digits after the decimal point). If you don't want that much precision, you may round w2 before recomputing float, like so:
floats=( 0.0 0.001 0.01 0.025 0.05 0.075 0.1 0.125 0.15 0.175 0.2 0.225 0.25 )
for float in "${floats[@]}"; do
    w2=$(bc -l <<< "scale=3; (1-($float))/3")
    float=$(bc <<< "1-3*($w2)")
    printf "%s 0.0 %s 0.0 0.0 0.0 %s %s 0.0 0.0 0.0 0.0\n" "$w2" "$w2" "$w2" "$float"
done
Note that the last bc isn't called with the -l option: it will use whatever number of significant digits w2 has. Change the scale to suit your needs. Proceeding this way guarantees that your numbers add up to 1, as you can check from the output of the previous snippet:
.333 0.0 .333 0.0 0.0 0.0 .333 .001 0.0 0.0 0.0 0.0
.333 0.0 .333 0.0 0.0 0.0 .333 .001 0.0 0.0 0.0 0.0
.330 0.0 .330 0.0 0.0 0.0 .330 .010 0.0 0.0 0.0 0.0
.325 0.0 .325 0.0 0.0 0.0 .325 .025 0.0 0.0 0.0 0.0
.316 0.0 .316 0.0 0.0 0.0 .316 .052 0.0 0.0 0.0 0.0
.308 0.0 .308 0.0 0.0 0.0 .308 .076 0.0 0.0 0.0 0.0
.300 0.0 .300 0.0 0.0 0.0 .300 .100 0.0 0.0 0.0 0.0
.291 0.0 .291 0.0 0.0 0.0 .291 .127 0.0 0.0 0.0 0.0
.283 0.0 .283 0.0 0.0 0.0 .283 .151 0.0 0.0 0.0 0.0
.275 0.0 .275 0.0 0.0 0.0 .275 .175 0.0 0.0 0.0 0.0
.266 0.0 .266 0.0 0.0 0.0 .266 .202 0.0 0.0 0.0 0.0
.258 0.0 .258 0.0 0.0 0.0 .258 .226 0.0 0.0 0.0 0.0
.250 0.0 .250 0.0 0.0 0.0 .250 .250 0.0 0.0 0.0 0.0
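To double-check that guarantee, here is a small sketch (assuming the loop above is saved as weights.sh; the file name is hypothetical) that sums the four non-zero columns of every row:
bash weights.sh | while read -r a _ b _ _ _ c f _; do
    bc <<< "$a + $b + $c + $f"    # prints 1.000 for every line
done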

You have to use the bc utility to do floating-point arithmetic in bash.
For example, consider the code below:
a=15
b=2
echo $(( a / b ))
will give you 7 as the result, because bash only does integer arithmetic. Whereas
a=15
b=2
echo "$a / $b" | bc -l
will give 7.50000000000000000000 as the result (bc -l sets the scale to 20).
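If you want a specific number of decimals instead, you can set bc's scale explicitly (a small sketch):
a=15
b=2
echo "scale=2; $a / $b" | bc    # prints 7.50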

You can use printf to round the output of bc:
printf '%.2f\n' $( bc -l <<< "3 * $w2 + $float" )
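For example, with rounded values like those produced by the loop above (the numbers here are just for illustration):
w2=.33333
float=.00001
printf '%.2f\n' $( bc -l <<< "3 * $w2 + $float" )    # prints 1.00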

Related

Aggregate Top CPU Using Processes

When I run top -n 1 -d 2 | head -n 12; it returns processor usage for some processes, sorted by %CPU descending as desired, but I'm not convinced that the results are aggregated as they should be. I want to put these results in a file, maybe like this:
while true; do
top -n 1 -d 2 | head -n 12;
done > top_cpu_users;
When I run top -d 2; interactively, I first see some results; two seconds later the results are updated and appear to be aggregated over the last two seconds. The first results do not appear to be aggregated in the same way.
How do I get the top CPU users every two seconds, aggregated over the previous two seconds?
top always captures a first full scan of process info for use as a baseline. It uses that to initialize its internal database of values used for later comparative reporting, and that is the basis of the first report presented on screen.
The follow-on reports are the true measures for the specified evaluation intervals.
Your code snippet will therefore never provide what you are really looking for.
You need to skip the results from the first scan and use only the follow-on reports, but the only way to do that is to generate them from a single command by specifying the number of scans desired, then parse the resulting combined report.
To that end, here is a proposed solution:
#!/bin/bash
output="top_cpu_users"
rm -f ${output} ${output}.tmp
snapshots=5
interval=2
process_count=6 ### Number of heavy hitter processes being monitored
top_head=7 ### Number of header lines in top report
lines=$(( ${process_count} + ${top_head} )) ### total lines saved from each report run
echo -e "\n Collecting process snapshots every ${interval} seconds ..."
top -b -n $(( ${snapshots} + 1 )) -d ${interval} > ${output}.tmp
echo -e "\n Parsing snapshots ..."
awk -v max="${lines}" 'BEGIN{
    doprint=0 ;            # set to 1 while copying lines from a report block
    first=1 ;              # used to skip the baseline (first) scan
}
{
    if( $1 == "top" ){               # each report block starts with a "top - ..." line
        if( first == 1 ){
            first=0 ;                # discard the baseline scan
        }else{
            print NR | "cat >&2" ;   # log the starting line number of the block to stderr
            print "" ;
            doprint=1 ;
            entry=0 ;
        } ;
    } ;
    if( doprint == 1 ){
        entry++ ;
        print $0 ;
        if( entry == max ){          # stop after the header plus the monitored processes
            doprint=0 ;
        } ;
    } ;
}' ${output}.tmp >${output}
more ${output}
The session output for that will look like this:
Collecting process snapshots every 2 seconds ...
Parsing snapshots ...
266
531
796
1061
1326
top - 20:14:02 up 8:37, 1 user, load average: 0.15, 0.13, 0.15
Tasks: 257 total, 1 running, 256 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.5 us, 1.0 sy, 0.0 ni, 98.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3678.9 total, 157.6 free, 2753.7 used, 767.6 buff/cache
MiB Swap: 2048.0 total, 1116.4 free, 931.6 used. 629.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31773 root 20 0 0 0 0 I 1.5 0.0 0:09.08 kworker/0:3-events
32254 ericthe+ 20 0 14500 3876 3092 R 1.0 0.1 0:00.04 top
1503 mysql 20 0 2387360 20664 2988 S 0.5 0.5 3:10.11 mysqld
2250 ericthe+ 20 0 1949412 130004 20272 S 0.5 3.5 0:46.16 caja
3104 ericthe+ 20 0 4837044 461944 127416 S 0.5 12.3 81:26.50 firefox
29998 ericthe+ 20 0 2636764 165632 54700 S 0.5 4.4 0:36.97 Isolated Web Co
top - 20:14:04 up 8:37, 1 user, load average: 0.14, 0.13, 0.15
Tasks: 257 total, 1 running, 256 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.5 us, 0.7 sy, 0.0 ni, 97.4 id, 0.4 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3678.9 total, 157.5 free, 2753.7 used, 767.6 buff/cache
MiB Swap: 2048.0 total, 1116.4 free, 931.6 used. 629.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3104 ericthe+ 20 0 4837044 462208 127416 S 3.0 12.3 81:26.56 firefox
1503 mysql 20 0 2387360 20664 2988 S 1.0 0.5 3:10.13 mysqld
32254 ericthe+ 20 0 14500 3876 3092 R 1.0 0.1 0:00.06 top
1489 root 20 0 546692 61584 48956 S 0.5 1.6 17:23.78 Xorg
2233 ericthe+ 20 0 303744 11036 7500 S 0.5 0.3 4:46.84 compton
7239 ericthe+ 20 0 2617520 127452 44768 S 0.5 3.4 1:41.13 Isolated Web Co
top - 20:14:06 up 8:37, 1 user, load average: 0.14, 0.13, 0.15
Tasks: 257 total, 1 running, 256 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.4 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3678.9 total, 157.5 free, 2753.7 used, 767.6 buff/cache
MiB Swap: 2048.0 total, 1116.4 free, 931.6 used. 629.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1489 root 20 0 546700 61584 48956 S 1.5 1.6 17:23.81 Xorg
3104 ericthe+ 20 0 4837044 462208 127416 S 1.5 12.3 81:26.59 firefox
1503 mysql 20 0 2387360 20664 2988 S 0.5 0.5 3:10.14 mysqld
2233 ericthe+ 20 0 303744 11036 7500 S 0.5 0.3 4:46.85 compton
2478 ericthe+ 20 0 346156 10368 8792 S 0.5 0.3 0:22.97 mate-cpufreq-ap
2481 ericthe+ 20 0 346540 11148 9168 S 0.5 0.3 0:41.73 mate-sensors-ap
top - 20:14:08 up 8:37, 1 user, load average: 0.14, 0.13, 0.15
Tasks: 257 total, 1 running, 256 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.5 sy, 0.0 ni, 98.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3678.9 total, 157.5 free, 2753.6 used, 767.7 buff/cache
MiB Swap: 2048.0 total, 1116.4 free, 931.6 used. 629.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32254 ericthe+ 20 0 14500 3876 3092 R 1.0 0.1 0:00.08 top
3104 ericthe+ 20 0 4837044 462208 127416 S 0.5 12.3 81:26.60 firefox
18370 ericthe+ 20 0 2682392 97268 45144 S 0.5 2.6 0:55.36 Isolated Web Co
19436 ericthe+ 20 0 2618496 123608 52540 S 0.5 3.3 1:55.08 Isolated Web Co
26630 ericthe+ 20 0 2690464 179020 56060 S 0.5 4.8 1:45.57 Isolated Web Co
29998 ericthe+ 20 0 2636764 165632 54700 S 0.5 4.4 0:36.98 Isolated Web Co
top - 20:14:10 up 8:37, 1 user, load average: 0.13, 0.13, 0.15
Tasks: 257 total, 1 running, 256 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.5 us, 0.9 sy, 0.0 ni, 96.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3678.9 total, 157.5 free, 2753.6 used, 767.7 buff/cache
MiB Swap: 2048.0 total, 1116.4 free, 931.6 used. 629.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3104 ericthe+ 20 0 4837076 463000 127416 S 7.5 12.3 81:26.75 firefox
1489 root 20 0 546716 61584 48956 S 1.5 1.6 17:23.84 Xorg
1503 mysql 20 0 2387360 20664 2988 S 1.0 0.5 3:10.16 mysqld
32254 ericthe+ 20 0 14500 3876 3092 R 1.0 0.1 0:00.10 top
2233 ericthe+ 20 0 303744 11036 7500 S 0.5 0.3 4:46.86 compton
2481 ericthe+ 20 0 346540 11148 9168 S 0.5 0.3 0:41.74 mate-sensors-ap
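For reference, a lighter-weight variant of the same idea (a sketch only, not the script above): run top in batch mode for two iterations, drop the baseline scan, and keep the first 12 lines of the second report.
top -b -n 2 -d 2 | awk '/^top -/{block++} block==2' | head -n 12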

How would I take the average of the first column of a file every 6 lines with bash?

I have a file that looks like this:
[root@localhost ~]# cat output.txt
0.0 709312 gnome-session-b dan
0.7 3662292 \_ gnome-shell dan
0.0 1157420 \_ gnome-softw dan
0.0 903172 gnome-shell-cal dan
0.0 286580 gnome-keyring-d dan
0.0 709312 gnome-session-b dan
0.7 3662292 \_ gnome-shell dan
0.0 1157420 \_ gnome-softw dan
0.0 903172 gnome-shell-cal dan
0.0 286580 gnome-keyring-d dan
0.0 709312 gnome-session-b dan
0.7 3662292 \_ gnome-shell dan
0.0 1157420 \_ gnome-softw dan
0.0 903172 gnome-shell-cal dan
0.0 286580 gnome-keyring-d dan
0.0 709312 gnome-session-b dan
0.7 3662292 \_ gnome-shell dan
0.0 1157420 \_ gnome-softw dan
0.0 903172 gnome-shell-cal dan
0.0 286580 gnome-keyring-d dan
0.0 709312 gnome-session-b dan
0.7 3662292 \_ gnome-shell dan
0.0 1157420 \_ gnome-softw dan
0.0 903172 gnome-shell-cal dan
0.0 286580 gnome-keyring-d dan
0.0 709312 gnome-session-b dan
0.7 3662292 \_ gnome-shell dan
0.0 1157420 \_ gnome-softw dan
0.0 903172 gnome-shell-cal dan
0.0 286580 gnome-keyring-d dan
How would I sort this and calculate the averages of the first 2 columns? I have a for loop that runs 6 times to populate this data so it would only have to calculate the averages every 6 lines.
Bash is the wrong tool -- it doesn't support floating-point math. Use awk instead:
awk '
    # Initialization: run once at startup
    BEGIN {
        i=0;
        sum1=0;
        sum2=0;
    }
    # Run once per line
    {
        sum1+=$1;
        sum2+=$2;
        if(++i >= 6) {
            print (sum1 / i) " " (sum2 / i);
            sum1=0;
            sum2=0;
            i=0;
        }
    }
    # Run once at the very end, in case the total number of lines was not divisible by 6
    END {
        if(i > 0) {
            print (sum1 / i) " " (sum2 / i)
        }
    }
'
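For example, if you save the awk program (the part between the single quotes) as avg6.awk — the name is just an assumption — you can run it against the file from the question:
awk -f avg6.awk output.txt
It prints one line per six input lines: the average of column 1 followed by the average of column 2.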

apache running slow without using all ram

I have a CentOS server running Apache with 8 GB of RAM. It is very slow loading simple PHP pages. I have set the following in my config file. I cannot see any httpd process using more than 100m. It usually slows down about 5 minutes after a restart.
<IfModule prefork.c>
StartServers 12
MinSpareServers 12
MaxSpareServers 12
ServerLimit 150
MaxClients 150
MaxRequestsPerChild 1000
</IfModule>
$ ps -ylC httpd | awk '{x += $8;y += 1} END {print "Apache Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/((y-1)*1024)}'
Apache Memory Usage (MB): 1896.09
Average Process Size (MB): 36.4633
What else can I do to make the pages load faster?
$ free -m
total used free shared buffers cached
Mem: 7872 847 7024 0 29 328
-/+ buffers/cache: 489 7382
Swap: 7999 934 7065
top - 15:42:17 up 545 days, 16:46, 2 users, load average: 0.05, 0.06, 0.
Tasks: 251 total, 1 running, 250 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 2.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.
Mem: 8060928k total, 909112k used, 7151816k free, 30216k buffers
Swap: 8191992k total, 956880k used, 7235112k free, 336612k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16544 apache 20 0 734m 47m 10m S 0.0 0.6 0:00.21 httpd
16334 apache 20 0 731m 45m 10m S 0.0 0.6 0:00.41 httpd
16212 apache 20 0 723m 37m 10m S 0.0 0.5 0:00.72 httpd
16555 apache 20 0 724m 37m 10m S 0.0 0.5 0:00.25 httpd
16347 apache 20 0 724m 36m 10m S 0.0 0.5 0:00.42 httpd
16608 apache 20 0 721m 34m 10m S 0.0 0.4 0:00.16 httpd
16088 apache 20 0 717m 31m 10m S 0.0 0.4 0:00.35 httpd
16012 apache 20 0 717m 30m 10m S 0.0 0.4 0:00.78 httpd
16338 apache 20 0 716m 30m 10m S 0.0 0.4 0:00.36 httpd
16336 apache 20 0 715m 29m 10m S 0.0 0.4 0:00.42 httpd
16560 apache 20 0 716m 29m 9.9m S 0.0 0.4 0:00.06 httpd
16346 apache 20 0 715m 28m 10m S 0.0 0.4 0:00.28 httpd
16016 apache 20 0 714m 28m 10m S 0.0 0.4 0:00.74 httpd
16497 apache 20 0 715m 28m 10m S 0.0 0.4 0:00.18 httpd
16607 apache 20 0 714m 27m 9m S 0.0 0.4 0:00.17 httpd
16007 root 20 0 597m 27m 15m S 0.0 0.3 0:00.13 httpd
16694 apache 20 0 713m 26m 10m S 0.0 0.3 0:00.10 httpd
16695 apache 20 0 712m 25m 9.9m S 0.0 0.3 0:00.04 httpd
16554 apache 20 0 712m 25m 10m S 0.0 0.3 0:00.15 httpd
16691 apache 20 0 598m 14m 2752 S 0.0 0.2 0:00.00 httpd
22613 root 20 0 884m 12m 6664 S 0.0 0.2 132:10.11 agtrep
16700 apache 20 0 597m 12m 712 S 0.0 0.2 0:00.00 httpd
16750 apache 20 0 597m 12m 712 S 0.0 0.2 0:00.00 httpd
16751 apache 20 0 597m 12m 712 S 0.0 0.2 0:00.00 httpd
2374 root 20 0 2616m 8032 1024 S 0.0 0.1 171:31.74 python
9699 root 0 -20 50304 6488 1168 S 0.0 0.1 1467:01 scopeux
9535 root 20 0 644m 6304 2700 S 0.0 0.1 21:01.24 coda
14976 root 20 0 246m 5800 2452 S 0.0 0.1 42:44.70 sssd_be
22563 root 20 0 825m 4704 2636 S 0.0 0.1 44:07.68 opcmona
22496 root 20 0 880m 4540 3304 S 0.0 0.1 13:54.78 opcmsga
22469 root 20 0 856m 4428 2804 S 0.0 0.1 1:18.45 ovconfd
22433 root 20 0 654m 4144 2752 S 0.0 0.1 10:45.71 ovbbccb
22552 root 20 0 253m 2936 1168 S 0.0 0.0 50:35.27 opcle
22521 root 20 0 152m 1820 1044 S 0.0 0.0 0:53.57 opcmsgi
14977 root 20 0 215m 1736 1020 S 0.0 0.0 15:53.13 sssd_nss
16255 root 20 0 254m 1704 1152 S 0.0 0.0 92:07.63 vmtoolsd
24180 root -51 -20 14788 1668 1080 S 0.0 0.0 9:48.57 midaemon
I do not have access to root
I have updated it to the following, which seems to be better, but I see an occasional 7 GB httpd process:
<IfModule prefork.c>
StartServers 12
MinSpareServers 12
MaxSpareServers 12
ServerLimit 150
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
top - 09:13:42 up 546 days, 10:18, 2 users, load average: 1.86, 1.51, 0.78
Tasks: 246 total, 2 running, 244 sleeping, 0 stopped, 0 zombie
Cpu(s): 28.6%us, 9.5%sy, 0.0%ni, 45.2%id, 16.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8060928k total, 7903004k used, 157924k free, 2540k buffers
Swap: 8191992k total, 8023596k used, 168396k free, 31348k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2466 apache 20 0 14.4g 7.1g 240 R 100.0 92.1 4:58.95 httpd
2285 apache 20 0 730m 31m 7644 S 0.0 0.4 0:02.37 httpd
2524 apache 20 0 723m 23m 7488 S 0.0 0.3 0:01.75 httpd
3770 apache 20 0 716m 21m 10m S 0.0 0.3 0:00.29 httpd
3435 apache 20 0 716m 20m 9496 S 0.0 0.3 0:00.60 httpd
3715 apache 20 0 713m 19m 10m S 0.0 0.2 0:00.35 httpd
3780 apache 20 0 713m 19m 10m S 0.0 0.2 0:00.22 httpd
3778 apache 20 0 713m 19m 10m S 0.0 0.2 0:00.28 httpd
3720 apache 20 0 712m 18m 10m S 0.0 0.2 0:00.21 httpd
3767 apache 20 0 712m 18m 10m S 0.0 0.2 0:00.21 httpd
3925 apache 20 0 712m 17m 10m S 0.0 0.2 0:00.11 httpd
2727 apache 20 0 716m 17m 7576 S 0.0 0.2 0:01.66 httpd
2374 root 20 0 2680m 14m 2344 S 0.0 0.2 173:44.40 python
9699 root 0 -20 50140 5556 624 S 0.0 0.1 1475:46 scopeux
3924 apache 20 0 598m 5016 2872 S 0.0 0.1 0:00.00 httpd
3926 apache 20 0 598m 5000 2872 S 0.0 0.1 0:00.00 httpd
14976 root 20 0 246m 2400 1280 S 0.0 0.0 42:51.54 sssd_be
9535 root 20 0 644m 2392 752 S 0.0 0.0 21:07.36 coda
22563 root 20 0 825m 2000 952 S 0.0 0.0 44:16.37 opcmona
22552 root 20 0 254m 1820 868 S 0.0 0.0 50:48.12 opcle
16255 root 20 0 254m 1688 1144 S 0.0 0.0 92:53.74 vmtoolsd
22536 root 20 0 282m 1268 892 S 0.0 0.0 24:21.73 opcacta
16784 root 20 0 597m 1236 180 S 0.0 0.0 0:02.16 httpd
14977 root 20 0 215m 1092 864 S 0.0 0.0 15:57.32 sssd_nss
22496 root 20 0 880m 1076 864 S 0.0 0.0 13:57.86 opcmsga
22425 root 20 0 1834m 944 460 S 0.0 0.0 74:12.96 ovcd
22433 root 20 0 654m 896 524 S 0.0 0.0 10:48.00 ovbbccb
2634 oiadmin 20 0 15172 876 516 R 9.1 0.0 0:14.78 top
2888 root 20 0 103m 808 776 S 0.0 0.0 0:00.19 sshd
16397 root 20 0 207m 748 420 S 0.0 0.0 32:52.23 ManagementAgent
2898 oiadmin 20 0 103m 696 556 S 0.0 0.0 0:00.08 sshd
22613 root 20 0 884m 580 300 S 0.0 0.0 132:34.94 agtrep
20886 root 20 0 245m 552 332 S 0.0 0.0 79:09.05 rsyslogd
2899 oiadmin 20 0 105m 496 496 S 0.0 0.0 0:00.03 bash
24180 root -51 -20 14788 456 408 S 0.0 0.0 9:50.43 midaemon
14978 root 20 0 203m 440 308 S 0.0 0.0 9:28.87 sssd_pam
14975 root 20 0 203m 432 288 S 0.0 0.0 21:45.01 sssd
8215 root 20 0 88840 420 256 S 0.0 0.0 3:28.13 sendmail
18909 oiadmin 20 0 103m 408 256 S 0.0 0.0 0:02.83 sshd
1896 root 20 0 9140 332 232 S 0.0 0.0 50:39.87 irqbalance
2990 oiadmin 20 0 98.6m 320 276 S 0.0 0.0 0:00.04 tail
4427 root 20 0 114m 288 196 S 0.0 0.0 8:58.77 crond
25628 root 20 0 4516 280 176 S 0.0 0.0 11:15.24 ndtask
4382 ntp 20 0 28456 276 176 S 0.0 0.0 0:28.61 ntpd
8227 smmsp 20 0 78220 232 232 S 0.0 0.0 0:05.09 sendmail
25634 root 20 0 6564 200 68 S 0.0 0.0 4:50.30 mgsusageag
4926 root 20 0 110m 188 124 S 0.0 0.0 3:23.79 abrt-dump-oops
9744 root 20 0 197m 180 136 S 0.0 0.0 1:46.59 perfalarm
22469 root 20 0 856m 128 128 S 0.0 0.0 1:18.65 ovconfd
4506 rpc 20 0 19036 84 40 S 0.0 0.0 1:44.05 rpcbind
32193 root 20 0 66216 68 60 S 0.0 0.0 4:54.51 sshd
18910 oiadmin 20 0 105m 52 52 S 0.0 0.0 0:00.11 bash
22521 root 20 0 152m 44 44 S 0.0 0.0 0:53.71 opcmsgi
18903 root 20 0 103m 12 12 S 0.0 0.0 0:00.22 sshd
1 root 20 0 19356 4 4 S 0.0 0.0 3:57.84 init
1731 root 20 0 105m 4 4 S 0.0 0.0 0:01.91 rhsmcertd
1983 dbus 20 0 97304 4 4 S 0.0 0.0 0:16.92 dbus-daemon
2225 root 20 0 4056 4 4 S 0.0 0.0 0:00.01 mingetty
Your server is slow because you are condemning it to a non-threaded, ever-reclaiming-children scenario.
That is, you use more than 12 processes but your MaxSpareServers is 12, so httpd is constantly spawning and despawning processes, and that is precisely the biggest weakness of a non-threaded MPM. Such a low MaxRequestsPerChild won't help either if you have a decent number of requests per second (understandable since you are using mod_php); that value will only increase the constant re-spawning.
In any OS, spawning processes uses much more CPU than spawning threads inside a process.
So either set MaxSpareServers to a very high number, so your server has plenty of spare children ready to serve your requests, or stop using mod_php + prefork (and probably .htaccess, which, contrary to popular belief, is not needed for Apache httpd to work) and move to a more reliable setup: httpd with mpm_event + mod_proxy_fcgi + php-fpm, where you can configure hundreds of threads that Apache will spawn and use in less than a blink, while all the PHP load runs under PHP's own daemon, php-fpm.
So it's not Apache; it is your ever-respawning process setup in a non-threaded MPM that is giving you trouble.
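A hedged sketch of the kind of configuration that advice points toward (the socket path and thread counts below are assumptions; adjust them for your distribution and load):
<IfModule mpm_event_module>
    StartServers            3
    MinSpareThreads        25
    MaxSpareThreads        75
    ThreadsPerChild        25
    MaxRequestWorkers     150
</IfModule>
# hand .php requests to the php-fpm daemon over its socket (path is an assumption)
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>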

awk - apply to all fields except one?

I have a file where I want to perform awk gsub on all fields except the first field. There are a variable number of fields, so I am trying to figure out whether I can write a conditional command that applies to all but $1.
It would even work if there were a way to say ${2-20}. However, I can't seem to find this type of command anywhere for awk. Thanks. Here's an example to practice on.
I am looking to do something like this:
EDIT
I tried this but it did not change anything.
awk 'x!=$1{gsub("C","g",x);gsub("G","c",x);gsub("T","a",x);gsub("A","t",x)}{print}' F1
F1
G 6472 193 0.0 0.0 193.0 0.0 0.0 C d
T 6482 91 91.0 0.0 0.0 0.0 0.0 T d
G 7482 187 0.0 0.0 187.0 0.0 0.0 C d
T 8860 74 0.0 0.0 0.0 74.0 0.0 A d
G 9254 52 0.0 0.0 52.0 0.0 0.0 C d
A 10059 78 78.0 0.0 0.0 0.0 0.0 T d
G 10476 757 0.0 1.0 755.0 1.0 0.0 C d
G 16122 125 0.0 1.0 124.0 0.0 0.0 C d
G 17053 316 0.0 0.0 316.0 0.0 0.0 C d
G 19312 56 0.0 0.0 55.0 1.0 0.0 C d
Desired output
G 6472 193 0.0 0.0 193.0 0.0 0.0 g d
T 6482 91 91.0 0.0 0.0 0.0 0.0 a d
G 7482 187 0.0 0.0 187.0 0.0 0.0 g d
T 8860 74 0.0 0.0 0.0 74.0 0.0 t d
G 9254 52 0.0 0.0 52.0 0.0 0.0 g d
A 10059 78 78.0 0.0 0.0 0.0 0.0 a d
G 10476 757 0.0 1.0 755.0 1.0 0.0 g d
G 16122 125 0.0 1.0 124.0 0.0 0.0 g d
G 17053 316 0.0 0.0 316.0 0.0 0.0 g d
G 19312 56 0.0 0.0 55.0 1.0 0.0 g d
Thanks.
Another way, going off your code:
awk '{ s=$1; sub($1,""); gsub("C","g"); gsub("G","c"); gsub("T","a"); gsub("A","t"); print s $0 }' filename
To preserve the whitespace I used sub($1,"") instead of $1="".
This line does what you want:
awk 'BEGIN{d["C"]="g";d["G"]="c";d["T"]="a";d["A"]="t"}
$(NF-1) in d{$(NF-1)=d[$(NF-1)]}7' file
(The trailing 7 is just an always-true pattern with no action, i.e. awk shorthand for printing every line.)
Just another option for translating characters, maybe overkill for this particular example:
$ cat tst.awk
function tr(old,new,str, oldA,newA,i) {
    split(old,oldA,"")
    split(new,newA,"")
    for (i=1;i in oldA;i++) {
        gsub(oldA[i],newA[i],str)
    }
    return str
}
{ print $1 tr("CGTA","gcat",substr($0,2)) }
$ awk -f tst.awk file
G 6472 193 0.0 0.0 193.0 0.0 0.0 g d
T 6482 91 91.0 0.0 0.0 0.0 0.0 a d
G 7482 187 0.0 0.0 187.0 0.0 0.0 g d
T 8860 74 0.0 0.0 0.0 74.0 0.0 t d
G 9254 52 0.0 0.0 52.0 0.0 0.0 g d
A 10059 78 78.0 0.0 0.0 0.0 0.0 a d
G 10476 757 0.0 1.0 755.0 1.0 0.0 g d
G 16122 125 0.0 1.0 124.0 0.0 0.0 g d
G 17053 316 0.0 0.0 316.0 0.0 0.0 g d
G 19312 56 0.0 0.0 55.0 1.0 0.0 g d
Combining Kent's answer and shellter's answer in the comments, I came up with this script, which lets me change capitals to capitals (uppercase to the uppercase complement) and maintain the whitespace as it originally was.
awk '
BEGIN{d["G"]="C";d["C"]="G";d["T"]="A";d["A"]="T";FS="";OFS=""}
{
    for(i=2;i<(NF+1);i++){
        if($i in d)
            $i=d[$i]
    }
}
{print}' $1
Output:
G 6472 193 0.0 0.0 193.0 0.0 0.0 G d
T 6482 91 91.0 0.0 0.0 0.0 0.0 A d
G 7482 187 0.0 0.0 187.0 0.0 0.0 G d
T 8860 74 0.0 0.0 0.0 74.0 0.0 T d
G 9254 52 0.0 0.0 52.0 0.0 0.0 G d
A 10059 78 78.0 0.0 0.0 0.0 0.0 A d
G 10476 757 0.0 1.0 755.0 1.0 0.0 G d
G 16122 125 0.0 1.0 124.0 0.0 0.0 G d
G 17053 316 0.0 0.0 316.0 0.0 0.0 G d
G 19312 56 0.0 0.0 55.0 1.0 0.0 G d

Join fails to join whole files

I am trying to use join to add a column to a file with about 4.5M lines. The files are sorted by their first column. All the numbers in the first column of file 1 are in the first column of file 2. When I use "join FILE1 FILE2 > output" it works for the first 1000 lines or so and then stops...
I am not married to the idea of join (the program never seems to work right) and am open to other ways to join these files. I tried grep, but doing this for 4*10^6 records is very slow. Below is a sample of the data I'm working with.
FILE 1
964 0 0.0 0.0 0.0 0.0 1.0 -
965 0 0.0 1.0 0.0 0.0 0.0 -
966 0 0.0 0.0 0.0 0.0 1.0 -
967 0 0.0 0.0 0.0 0.0 1.0 -
968 0 0.0 1.0 0.0 0.0 0.0 -
969 0 0.0 0.0 0.0 1.0 0.0 -
970 0 0.0 0.0 1.0 0.0 0.0 -
971 0 0.0 1.0 0.0 0.0 0.0 -
1075 3 4.0 0.0 0.0 0.0 0.0 -
1076 0 4.0 0.0 0.0 0.0 0.0 -
1077 0 0.0 0.0 4.0 0.0 0.0 -
1078 0 0.0 0.0 0.0 4.0 0.0 -
File 2
964 T
965 C
966 T
967 G
968 C
969 T
970 G
971 C
972 G
973 G
974 T
975 G
976 C
977 T
978 G
979 G
980 C
981 T
982 G
output (Last few lines)
965 0 0.0 1.0 0.0 0.0 0.0 - C
966 0 0.0 0.0 0.0 0.0 1.0 - T
967 0 0.0 0.0 0.0 0.0 1.0 - G
968 0 0.0 1.0 0.0 0.0 0.0 - C
969 0 0.0 0.0 0.0 1.0 0.0 - T
970 0 0.0 0.0 1.0 0.0 0.0 - G
971 0 0.0 1.0 0.0 0.0 0.0 - C
9990 0 0.0 0.0 0.0 0.0 0.0 - T
9991 0 0.0 0.0 0.0 0.0 0.0 - C
EDIT
Sorting in dictionary format works for all records after 463835. I think it is because it sorted the input files differently, likely due to the other columns???
FILE 1
466630 0 0.0 0.0 0.0 0.0 0.0 -
46663 0 0.0 0.0 0.0 3.0 0.0 -
466631 0 0.0 0.0 0.0 0.0 0.0 -
FILE 2
466639 C
46663 A
466640 G
Your files are sorted numerically, but join expects them to be sorted in dictionary order (1 < 10 < 2 < 200 < 3). Use join <(sort FILE1) <(sort FILE2). But (as suggested in the comments) do consider using a database.
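If you need the joined output back in numeric order afterwards, here is a sketch of the full pipeline (file names as in the question):
join <(sort FILE1) <(sort FILE2) | sort -n -k1,1 > output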
