I have written a Bash script to sync all files from a local folder to an S3 bucket, and all the logs are sent by email when the script is run through a cron job. The email output looks like this:
-----------------------Started at: Wed 3 Mar 10:56:01 +04 2021---------------------------
Start uploading to s3 bucket mohsin7007
Completed 1.8 KiB/15.5 KiB (5.2 KiB/s) with 3 file(s) remaining
upload: ../home/mohsin/Desktop/data2s3/README.md to s3://mohsin7007/README.md
Completed 1.8 KiB/15.5 KiB (5.2 KiB/s) with 2 file(s) remaining
Completed 12.9 KiB/15.5 KiB (15.0 KiB/s) with 2 file(s) remaining
upload: ../home/mohsin/Desktop/data2s3/LICENSE to s3://mohsin7007/LICENSE
Completed 12.9 KiB/15.5 KiB (15.0 KiB/s) with 1 file(s) remaining
Completed 15.5 KiB/15.5 KiB (18.0 KiB/s) with 1 file(s) remaining
------------------------Completed at: Wed 3 Mar 10:56:01 +04 2021---------------------------
I would like to make the "Started at" and "Completed at" lines bold, so it will be more readable when looking for the logs from a certain date, like this:
-------------------------Started at: Wed 3 Mar 10:56:01 +04 2021---------------------------
I have used the tput utility to make these lines bold. When I run the script in the terminal, the outcome is as expected; however, when I open the log file or view the logs in email, the lines are not bold.
Could you please help me get the above-mentioned lines in bold?
I am pasting my script here as well.
#!/bin/sh
DEST=mytestingbucket8719
SOURCE=/home/sham/Desktop/data2s3
Date=$(date)
bold=$(tput bold)
normal=$(tput sgr0)
echo " "
echo "$bold-----------------------Started at: $Date---------------------------$normal"
echo "Start uploading to s3 bucket $DEST"
aws s3 sync "$SOURCE" "s3://$DEST"
echo "Completed uploading to s3 bucket $DEST"
echo "$bold------------------------Completed at: $Date---------------------------$normal"
echo " "
Being a relative beginner, I can't figure this out. I have a script that is started via cron. Within this script is an if/fi where I check whether a (yearly archive) directory does not exist. If it does not, I create the directory and ATTEMPT to echo that to the cron log file that is created for each run. The directory is created, but the echo does not appear in the log file.
Here is a snippet of the code in question.
035: yyyy=`date +%Y`
036: today=`date +%m/%d/%Y`
037: time=`date +%r` #+%l:%M:%S%P`
038: dayofweek=`date +%A`
039: numDayOfWeek=`date +%u`
040:
041: echo "Run Date/Time: $today $time"
042:
043: WFADIR="/data/ssa1/home1/NEI/GAP-EFT-FLAT/$yyyy"
044: if [ ! -d $WFADIR ] ; then
045: mkdir /data/ssa1/home1/NEI/GAP-EFT-FLAT/$yyyy
046: chmod 777 /data/ssa1/home1/NEI/GAP-EFT-FLAT/$yyyy
047: echo ""
048: echo "New folder $yyyy created in GAP-EFT-FLAT"
049: fi
050:
051: #display test variables for output
052: echo ""
053: echo "HOSTNAME..........: ${HOSTNAME^^}"
054: echo ""
055:
And here is the FULL log file.
Run Date/Time: 01/03/2023 08:00:01 AM
HOSTNAME..........: BASYSPROD
EFT contribution file found...
Calling expect script to transmit contribution file...
spawn sftp -P 22 -i privatekey.pem username@domain.com:/inbound/NATIO080_ACH_3
Connected to domain.com.
Changing to: /inbound/NATIO080_ACH_3
sftp> put B06737_CON_20230103
Uploading B06737_CON_20230103 to /inbound/NATIO080_ACH_3/B06737_CON_20230103
B06737_CON_20230103 0% 0 0.0KB/s --:-- ETA
B06737_CON_20230103 100% 2470 70.0KB/s 00:00
sftp> Returned from contribution expect script...
Archiving sent contribution file...
Sending email confirmation...
Process completed...
EFT 401K file found...
Calling expect script to transmit 401K file...
spawn sftp -P 22 -i privatekey.pem username@domain.com:/inbound/NATIO080_ACH_4
Connected to domain.com.
Changing to: /inbound/NATIO080_ACH_4
sftp> put B06736_401K_20230103
Uploading B06736_401K_20230103 to /inbound/NATIO080_ACH_4/B06736_401K_20230103
B06736_401K_20230103 0% 0 0.0KB/s --:-- ETA
B06736_401K_20230103 100% 7980 216.4KB/s 00:00
sftp> Returned from 401K expect script...
Archiving sent 401K file...
Sending email confirmation...
As you can see, the echo from line 41 is in the log file. Then, as this was the first run for 2023, the 2023 directory did not yet exist. It WAS created and the permissions were changed as well, with lines 45 and 46, respectively.
drwxrwxrwx. 2 neiauto staff 61 Jan 3 08:00 2023
So why do lines 47 and 48 appear not to execute, and why is the next echo in the log file from lines 52, 53, and 54, with the hostname display surrounded by blank lines?
I was expecting a blank line, and "New folder 2023 created in GAP-EFT-FLAT" to be echoed after the Run date/time (first) line of the log file, and before the host name display.
Very likely your directory already existed. Add an else echo "$WFADIR already exists" branch to your code to have your answer next year :-). My guess would be that the same code was run twice (on the same host, or on another host if shared disk space was used).
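A minimal sketch of that suggestion (line numbers omitted, quoting added around the path):
WFADIR="/data/ssa1/home1/NEI/GAP-EFT-FLAT/$yyyy"
if [ ! -d "$WFADIR" ] ; then
    mkdir "$WFADIR"
    chmod 777 "$WFADIR"
    echo ""
    echo "New folder $yyyy created in GAP-EFT-FLAT"
else
    # records the case where the directory was already there,
    # which would explain the missing echo in the log
    echo ""
    echo "$WFADIR already exists"
fi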
I want a simple way to add 2 numbers taken from a text file. Details below:
Daily, I run clamscan against my /home/ folder, which generates a simple log along the lines of this:
Scanning 851M in /home/.
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.000 sec (0 m 0 s)
Start Date: 2021:11:27 06:25:02
End Date: 2021:11:27 06:25:02
Weekly, I scan both my /home/ folder and an external drive, so I get twice as much in the log:
Scanning 851M in /home/.
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.000 sec (0 m 0 s)
Start Date: 2021:11:28 06:25:02
End Date: 2021:11:28 06:25:02
Scanning 2.8T in /mnt/ext/.
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.005 sec (0 m 0 s)
Start Date: 2021:11:28 06:26:30
End Date: 2021:11:28 06:26:30
I don't email the log to myself, I just have a bash script that sends an email that (for the daily scan) reads the number that comes after "Infected files:" and says either "No infected files found" or "Infected files found, check log." (And, to be honest, once I'm 100% comfortable that it all works the way I want it to, I'll skip the "No infected files found" email.) The problem is, I don't know how to make that work for the weekly scan of multiple folders, because the summary I get doesn't combine those numbers.
I'd like the script to find both lines that start with "Infected files:", get the numbers that follow, and add them. I guess the ideal solution would use a loop, in case I ever need to scan more than two folders. I've taken a couple of stabs at it with grep and cut, but I'm just not an experienced enough coder to make it all work.
Thanks!
This bash script will print out the sum of infected files:
#!/bin/bash
# collect every number that follows "Infected files:" (one line per scan)
n=$(sed -n 's/^Infected files://p' logfile)
# join the numbers with "+" and let arithmetic expansion compute the sum
echo $((${n//$'\n'/+}))
or a one-liner:
echo $(( $(sed -n 's/^Infected files: \(.*\)/+\1/p' logfile) ))
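If you prefer the explicit loop you described, awk can accumulate the total one matching line at a time (an equivalent sketch; $3 is the number after "Infected files:"):
awk '/^Infected files:/ { sum += $3 } END { print sum+0 }' logfile
The +0 in the END block makes the script print 0 rather than an empty line when no summary lines are found.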
Use case: compare all the files and directories on the mounts (/apps, /logs, etc.) and work out which one is the latest, along with the size differences.
I am trying with the rsync command, but because of its limitations I am not achieving exactly what I need.
Under /tmp/test_ram I created two directories, dir1 and dir2, and created two files under dir1 as shown below.
drwxr-xr-x 2 chada users 4096 Nov 21 12:03 dir2
drwxr-xr-x 2 chada users 4096 Nov 21 12:03 dir1
cd dir1 ; ls -ltr
total 196
-rw-r--r-- 1 chada users 188510 Nov 21 12:03 file_man_rsync
-rw-r--r-- 1 chada users 6854 Nov 21 12:04 file_man_diff
With a dry run:
I see nothing happening, which is expected, but in the output the size shows as zero. That is not what I was expecting; I want to see the size difference of the files.
rsync -n -avrczP --out-format="%t %f %''b" --backup --backup-dir=/tmp/test_ram /tmp/test_ram/dir1/ /tmp/test_ram/dir2/
sending incremental file list
2018/11/21 12:04:55 tmp/test_ram/dir1/. 0
2018/11/21 12:04:55 tmp/test_ram/dir1/file_man_diff 0
2018/11/21 12:04:55 tmp/test_ram/dir1/file_man_rsync 0
sent 161 bytes received 25 bytes 372.00 bytes/sec
total size is 195,364 speedup is 1,050.34 (DRY RUN)
The actual run:
I see the file size showing up, which is what I expected. But I cannot take the chance of copying without checking. Yes, I do have a backup dir, but it still requires too much analysis.
rsync -avrczP --out-format="%t %f %''b" --backup --backup-dir=/tmp/test_ram/dir3 /tmp/test_ram/dir1/ /tmp/test_ram/dir2/
2018/11/21 12:05:52 tmp/test_ram/dir1/. 0
file_man_diff
6,854 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=1/3)
2018/11/21 12:05:52 tmp/test_ram/dir1/file_man_diff 2.48K
file_man_rsync
188,510 100% 16.34MB/s 0:00:00 (xfr#2, to-chk=0/3)
2018/11/21 12:05:52 tmp/test_ram/dir1/file_man_rsync 56.28K
sent 58,915 bytes received 57 bytes 117,944.00 bytes/sec
total size is 195,364 speedup is 3.31
This is an example I used to illustrate the problem, but my comparison would be between multiple servers.
The mount points can be the same, but the files and directories are what I need to compare.
Your help is much appreciated.
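One observation (not from the original post): rsync's %b escape is the number of bytes actually transferred, so a dry run will always report 0 there; %l, by contrast, is the file's length, which is known even without a transfer. A sketch of a non-destructive comparison over the same test directories:
# -n transfers nothing, -i itemizes what differs, and %l prints each
# differing file's size in bytes (available even in a dry run, unlike %b)
rsync -n -ai --out-format='%t %f %l' /tmp/test_ram/dir1/ /tmp/test_ram/dir2/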
I'd like to fetch results, let's say from 2017-12-19 19:14 until the end of that day, from a log file that looks like this:
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:00.723 Info: Saving /var/opt/MarkLogic/Forests/Meters/00001829
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:01.134 Info: Saved 9 MB at 22 MB/sec to /var/opt/MarkLogic/Forests/Meters/00001829
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:01.376 Info: Merging 19 MB from /var/opt/MarkLogic/Forests/Meters/0000182a and /var/opt/MarkLogic/Forests/Meters/00001829 to /var/opt/MarkLogic/Forests/Meters/0000182c, timestamp=15137318408510140
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:02.585 Info: Merged 18 MB in 1 sec at 15 MB/sec to /var/opt/MarkLogic/Forests/Meters/0000182c
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:05.200 Info: Deleted 15 MB at 337 MB/sec /var/opt/MarkLogic/Forests/Meters/0000182a
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:05.202 Info: Deleted 9 MB at 4274 MB/sec /var/opt/MarkLogic/Forests/Meters/00001829
I am new to Unix but familiar with the grep command. I tried the command below:
date="2017-12-19 [19-23]:[14-59]"
echo "$date"
grep "$date" $root_path_values
but it throws an "invalid range end" error. Any solution? The date is going to come from a variable, so it will be unpredictable; therefore, don't build a command with just this example in mind. $root_path_values is a sequence of error files, like errorLog.txt, errorLog_1.txt, errorLog_2.txt, and so on.
I'd like to fetch results, let's say from 2017-12-19 19:14 until the end of that day … The date is going to come from a variable …
This is not a job for regular expressions: a bracket expression such as [19-23] is a set of characters (1, the range 9-2, and 3), not the numbers 19 through 23, which is why grep complains about an invalid range end. Since the timestamp has a sensible form, we can simply compare it as a whole, e.g.:
start='2017-12-19 19:14'
end='2017-12-20'
awk -v start="$start" -v end="$end" 'start <= $0 && $0 < end' ErrorLog_1.txt
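Since $root_path_values expands to several log files, the same filter can run across all of them at once; a sketch (assuming each line in the files begins with its timestamp, as in the sample, and printing the filename the way grep does):
awk -v start="$start" -v end="$end" 'start <= $0 && $0 < end { print FILENAME ":" $0 }' errorLog*.txt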
Try this regexp:
egrep '2017-12-19 (19:(1[4-9]|[2-5][0-9])|2[0-3]:[0-5][0-9])' path/to/your/file
It matches 19:14 through 19:59, and then any time in hours 20 through 23.
In case you need the pattern in a variable:
#!/bin/bash
date='2017-12-19 (19:(1[4-9]|[2-5][0-9])|2[0-3]:[0-5][0-9])'
egrep "${date}" path/to/your/file
(The quotes around ${date} matter; without them the space in the pattern splits it into two arguments.)
Having no experience as a devops engineer, I've just been given a project where I have to do the whole thing.
So, how do I keep an eye on usage of disk, memory, database space and access time, API reply times, etc.?
It's practically impossible for any admin to keep an eye on running processes at all times; this is where server monitoring comes in handy.
Try Monit; it can be easily installed with:
apt-get install monit -y
Monitoring:
nano /etc/monit/monitrc
Use the example config to configure what you would like to monitor. The status page is accessible over HTTP or HTTPS as well, but you don't really need to access it, because Monit will alert you if anything goes wrong on your server. For example, you will get an email if your memory consumption gets higher than what you specified in the config file above, if the CPU is getting overloaded, or if a certain website is down. A sketch of what such rules can look like follows.
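A minimal monitrc sketch (the hostname, addresses, and thresholds are illustrative, not from the original answer):
set mailserver smtp.example.com                # relay for alert mail
set alert admin@example.com                    # recipient of all alerts

check system myhost.mydomain.tld
    if loadavg (5min) > 2 then alert
    if cpu usage (user) > 70% for 5 cycles then alert
    if memory usage > 80% then alert

check filesystem rootfs with path /
    if space usage > 80% then alert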
Let's dig into it a little bit.
Type monit status to get output like the following:
The Monit daemon 5.3.2 uptime: 1h 32m
System 'myhost.mydomain.tld'
status Running
monitoring status Monitored
load average [0.03] [0.14] [0.20]
cpu 3.5%us 5.9%sy 0.0%wa
memory usage 26100 kB [10.4%]
swap usage 0 kB [0.0%]
data collected Thu, 30 Aug 2017 18:35:00
You can monitor virtually anything: Apache, nginx, MySQL, disks, processes, etc.
Sample monit status:
File 'mysql_bin'
status Accessible
monitoring status Monitored
permission 755
uid 0
gid 0
timestamp Fri, 05 May 2017 22:33:39
size 16097088 B
checksum 6d7b5ffd8563f8ad44dde35ae4b8bd52 (MD5)
data collected Mon, 28 Aug 2017 06:21:02
File 'apache_rc'
status Accessible
monitoring status Monitored
permission 755
uid 0
gid 0
timestamp Fri, 05 May 2017 11:21:22
size 9974 B
checksum 55b2bc7ce5e4a0835877dbfd98c2646b (MD5)
data collected Mon, 28 Aug 2017 06:21:02
Filesystem 'Server01'
status Accessible
monitoring status Monitored
permission 660
uid 0
gid 6
filesystem flags 0x1000
block size 4096 B
blocks total 5006559 [19556.9 MB]
blocks free for non superuser 2615570 [10217.1 MB] [52.2%]
blocks free total 2875653 [11233.0 MB] [57.4%]
inodes total 1281120
inodes free 1085516 [84.7%]
data collected Mon, 28 Aug 2017 06:23:02
Filesystem 'Media'
status Accessible
monitoring status Monitored
permission 660
uid 0
gid 6
filesystem flags 0x1000
block size 4096 B
blocks total 4414923 [17245.8 MB]
blocks free for non superuser 3454811 [13495.4 MB] [78.3%]
blocks free total 3684839 [14393.9 MB] [83.5%]
inodes total 1130496
inodes free 1130384 [100.0%]
data collected Mon, 28 Aug 2017 06:23:02
System 'mywebsite.com'
status Resource limit matched
monitoring status Monitored
load average [0.01] [0.10] [0.61]
cpu 2.7%us 0.2%sy 0.0%wa
memory usage 1150372 kB [28.5%]
swap usage 184356 kB [35.2%]
data collected Mon, 28 Aug 2017 06:21:02
Set it up with alerts!
Don't forget that you will receive an email alert for every rule that you specified to be monitored, e.g. when your website "mywebsite" is down, when disk space drops below 20%, on a disk failure, when CPU usage is more than x%, etc.
Install monit and check its manual with man monit.
You can use Windows Performance Analyzer. Xperf is also helpful.
Here is the link: https://msdn.microsoft.com/en-us/library/windows/hardware/hh162945.aspx
#!/bin/sh
file="/var/www/html/index.html"
linebreak="--------------------------------------------------------------------------------------------"
while true
do
echo "<html>" > $file
echo "<head>" >> $file
echo "<meta http-equiv="refresh" content="100">" >> $file
echo "</head>" >> $file
echo "<body>" >> $file
echo "<pre>" >> $file
date >> $file
echo $linebreak >> $file
uptime >> $file
echo $linebreak >> $file
top -b -n1 -u nobody | sed -n '3p' >> $file
echo $linebreak >> $file
free -m >> $file
echo $linebreak >> $file
df -h >> $file
echo $linebreak >> $file
iptables -nL >> $file
echo $linebreak >> $file
echo "</pre>" >> $file
echo "</body>" >> $file
echo "</html>" >> $file
sleep 100
done
I use this script to monitor some information like temperature, disk usage, RAM, the firewall, and so on.
I put the results in the index page of an Apache server, so I can open the server's homepage and see everything.
The script refreshes the results every 100 seconds, and the meta refresh tag makes the webpage reload itself every 100 seconds too.
With this script and Apache you can monitor the server from anywhere in the world with mobile devices or a PC.
Mo 28. Aug 14:36:03 CEST 2017
--------------------------------------------------------------------------------------------
14:36:03 up 1:34, 4 users, load average: 0,10, 0,09, 0,11
--------------------------------------------------------------------------------------------
%Cpu(s): 14,8 us, 1,6 sy, 0,7 ni, 82,2 id, 0,5 wa, 0,0 hi, 0,1 si, 0,0 st
--------------------------------------------------------------------------------------------
total used free shared buff/cache available
Mem: 3949 1027 756 74 2165 2542
Swap: 4093 0 4093
--------------------------------------------------------------------------------------------
Filesystem Size Used Avail Use% Mounted on
udev 2,0G 0 2,0G 0% /dev
tmpfs 395M 6,0M 389M 2% /run
/dev/sda1 21G 6,2G 14G 32% /
tmpfs 2,0G 43M 1,9G 3% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 2,0G 0 2,0G 0% /sys/fs/cgroup
Sharepoint 476G 300G 176G 64% /media/sf_Sharepoint
tmpfs 395M 92K 395M 1% /run/user/1000
--------------------------------------------------------------------------------------------
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
--------------------------------------------------------------------------------------------
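Since the script loops forever, it has to be started once and then left running. A hedged sketch of one way to do that (the script path is a placeholder), using cron's @reboot so it comes back after a restart:
# hypothetical root crontab entry: launch the monitoring loop at boot
@reboot /usr/local/bin/status-page.sh >/dev/null 2>&1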