Merging bash commands / simplifying a long repeated command in bash

Hi, I'm writing a script in which I'd like a single command or function to stand in for a longer, more complex command. I have three docker containers that share the same name except for a trailing digit. Each container has exactly one entry in its logs containing the Cloudflared URL I need, so I have to run the same extraction more than once: three containers, each with a different URL in its logs.
I know the straightforward way works! The problem is that repeating the same long command makes the script a bit of a mess (at least that's how it feels to me).
#!/bin/bash
function test {
    docker_container_proxy_name="cloudflared"
    real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
    docker_container_1_uri_outside="$(docker logs ${docker_container_proxy_name}1 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
    docker_container_2_uri_outside="$(docker logs ${docker_container_proxy_name}2 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
    docker_container_3_uri_outside="$(docker logs ${docker_container_proxy_name}3 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
}
test
Is there any way to simplify this so the long command appears only once, with just the container digit changing? Something like this (cleaner to my eye):
#!/bin/bash
function test {
    docker_container_proxy_name="cloudflared"
    real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
    command="$(docker logs ${docker_container_proxy_name} 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
    docker_container_1_uri_outside="${command}1"
    docker_container_2_uri_outside="${command}2"
    docker_container_3_uri_outside="${command}3"
}
test
I'd be glad for an answer.
A similar question may already exist; I admit I didn't search before writing this one, and I apologize if this turns out to be a duplicate.
Docker containers:
$ docker container ps | grep "cloudflared"
<id_of_container> cloudflare/cloudflared:2022.4.1-amd64 "cloudflared --no-au…" 11 days ago Up 2 days cloudflared3
<id_of_container> cloudflare/cloudflared:2022.4.1-amd64 "cloudflared --no-au…" 11 days ago Up 2 days cloudflared2
<id_of_container> cloudflare/cloudflared:2022.4.1-amd64 "cloudflared --no-au…" 11 days ago Up 2 days cloudflared1
#1 for XFCE_noVNC
#2 for filebrowser
#3 for code-server
# I want these services available everywhere ("in my pocket") without exposing my public IPv4.

Variables are for storing data, not executable code. Generally the best way to store executable code is in a function:
#!/bin/bash
get_container_uri() {
docker_container_proxy_name="cloudflared"
real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
docker logs "${docker_container_proxy_name}${1}" 2>&1 | grep -Eo "$real_domain_of_cloudflared" | tail -n 1
}
docker_container_1_uri_outside="$(get_container_uri 1)"
docker_container_2_uri_outside="$(get_container_uri 2)"
docker_container_3_uri_outside="$(get_container_uri 3)"
Note that the function just runs the docker logs ... command directly rather than capturing its output; that way that command's output becomes the function's output, which is exactly what's wanted.
Also, some general scripting recommendations: you should (almost) always put variable references in double-quotes (e.g. grep -Eo "$real_domain_of_cloudflared") to keep the shell from parsing them in unexpected ways. The function keyword is nonstandard; the standard way to define a function is by putting () after the name (as I did here). Don't use test as the name of a function, since that's the name of a rather important built-in command, and overriding it might break other things.
shellcheck.net will point out some of these (and many other common mistakes). I recommend running your scripts through it and fixing what it points out.
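If even the three assignment lines feel repetitive, the function call can be driven from a loop. A sketch along those lines, collecting the URLs into an array (the array name `uris` is my invention; if docker or the containers aren't available, the entries simply come back empty):

```shell
#!/bin/bash
# Same extraction function as above, now driven by a loop.
get_container_uri() {
    docker_container_proxy_name="cloudflared"
    real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
    docker logs "${docker_container_proxy_name}${1}" 2>&1 |
        grep -Eo "$real_domain_of_cloudflared" | tail -n 1
}

uris=()                          # uris[1], uris[2], uris[3] after the loop
for i in 1 2 3; do
    uris[i]="$(get_container_uri "$i")"
done

printf 'container %s URL: %s\n' 1 "${uris[1]}" 2 "${uris[2]}" 3 "${uris[3]}"
```

Whether three named variables or one array is "cleaner" is a matter of taste; the array version scales if a fourth container ever appears.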

Your second attempt is almost correct. Suggestion (though note that command and test both shadow shell builtins, as mentioned above):
#!/bin/bash
docker_container_proxy_name="cloudflared"
real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
function command {
    echo "$(docker logs "${docker_container_proxy_name}$1" 2>&1 | grep -Eo "$real_domain_of_cloudflared" | tail -n 1)"
}
function test {
    docker_container_1_uri_outside=$(command 1)
    docker_container_2_uri_outside=$(command 2)
    docker_container_3_uri_outside=$(command 3)
}
test

Related

When I pipe the service --status-all command to grep, why do extra lines show up in the output?

For example,
sudo service --status-all | grep -oE 'php[0-9]+.[0-9]+'
generates the following output.
[ ? ] hwclock.sh
[ ? ] networking
php7.0
php7.3
My goal is to extract the version of another software package and put it into a configuration script so that the script won't break if that package gets upgraded or downgraded. If my understanding of regular expressions and the piping operator is correct, the first two lines shouldn't even show up in the output.
Can anyone explain to me why this is happening?
Redirecting stderr to /dev/null eliminated the unwanted lines. I also piped the output to tail -1 to keep only the last line, the one with the latest version.
sudo service --status-all 2>/dev/null | grep -oE 'php[0-9]+.[0-9]+' | tail -1
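To see why those lines got past grep: a pipe carries only stdout, so anything a command writes to stderr goes straight to the terminal, unfiltered. A minimal demonstration (`emit` is a throwaway function standing in for `service --status-all`):

```shell
# One line to stdout, one to stderr.
emit() { echo "on stdout"; echo "on stderr" >&2; }

echo "--- piped through grep (the stderr line slips past the pipe):"
emit | grep "stdout"

echo "--- stderr discarded first, as in the fix:"
emit 2>/dev/null | grep "stdout"
```

In the first case grep matches "on stdout", but "on stderr" still appears on the terminal because grep never saw it; in the second, the 2>/dev/null removes it before the pipe.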

Appending an Operator at the end of a Grep Command to Set as Bash Variable

I need to append an addition operation at the end of the command that parses the IP address out of a list of running VMs in our labs. The command is as follows:
$ dos.py net-list vanilla80 | grep -i 'fuelweb' | grep -Eo "[0-9][0-9][0-9].[0-9][0-9].[0-9][0-9][0-9].([0-9])"
I would like to add two to the last number of the IP address.
Example: 171.11.111.0 => 171.11.111.2
Any suggestions? I am trying to make our labs more efficient with alias commands that reference a script that matches running lab VMs and pushes their keys, allowing easy access to vanilla provisioned labs.
As a quick and dirty answer,
dos.py net-list vanilla80 |
grep -i 'fuelweb' |
grep -Eo "[0-9][0-9][0-9].[0-9][0-9].[0-9][0-9][0-9].([0-9])" |
awk -F . 'BEGIN { OFS=FS }
{ $NF += 2 }1'
However, a much better solution would be to refactor dos.py to produce machine-readable output, and/or perhaps turn it into a module you can import from another Python script which would replace this shell pipeline.
As an aside, your grep -Eo regex could be tightened into something like \<[0-9]{1,3}(\.[0-9]{1,3}){3}\>. And if you don't end up rewriting all of this logic in Python, perhaps the entire pipeline could be refactored into a single Awk script.
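The last-field increment can be checked in isolation by feeding the Awk program a sample address (the address here is made up for the demo):

```shell
# -F . splits on the dots; OFS=FS rebuilds the record with dots;
# $NF += 2 bumps the last octet; the trailing 1 prints the result.
echo "171.11.111.0" | awk -F . 'BEGIN { OFS=FS } { $NF += 2 } 1'
# prints 171.11.111.2
```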

Get Macbook screen size from terminal/bash

Does anyone know of any possible way to determine or glean this information from the terminal (in order to use in a bash shell script)?
On my Macbook Air, via the GUI I can go to "About this mac" > "Displays" and it tells me:
Built-in Display, 13-inch (1440 x 900)
I can get the screen resolution from the system_profiler command, but not the "13-inch" bit.
I've also tried with ioreg without success. Calculating the screen size from the resolution is not accurate, as this can be changed by the user.
Has anyone managed to achieve this?
I think you can only get the display model name, which encodes the size:
ioreg -lw0 | grep "IODisplayEDID" | sed "/[^<]*</s///" | xxd -p -r | strings -6 | grep '^LSN\|^LP'
will output something like:
LP154WT1-SJE1
which depends on the display manufacturer. But as you can see, the first three digits in this model name imply the display size: 154 == 15.4''
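As an illustration, the implied size can be sliced out of such a model string with plain bash substring expansion (a sketch; it assumes the three digits always sit right after the two-letter prefix, which is only a manufacturer convention):

```shell
# Model string from the example output above.
model="LP154WT1-SJE1"
digits=${model:2:3}                      # characters 3-5 -> "154"
echo "${digits:0:2}.${digits:2:1}\""     # prints: 15.4"
```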
EDIT
Found a neat solution but it requires an internet connection:
curl -s http://support-sp.apple.com/sp/product?cc=`system_profiler SPHardwareDataType | awk '/Serial/ {print $4}' | cut -c 9-` |
sed 's|.*<configCode>\(.*\)</configCode>.*|\1|'
hope that helps
The following script:
model=$(system_profiler SPHardwareDataType | \
/usr/bin/perl -MLWP::Simple -MXML::Simple -lane '$c=substr($F[3],8)if/Serial/}{
print XMLin(get(q{http://support-sp.apple.com/sp/product?cc=}.$c))->{configCode}')
echo "$model"
will print for example:
MacBook Pro (13-inch, Mid 2010)
Or the same without perl but more command forking:
model=$(curl -s http://support-sp.apple.com/sp/product?cc=$(system_profiler SPHardwareDataType | sed -n '/Serial/s/.*: \(........\)\(.*\)$/\2/p')|sed 's:.*<configCode>\(.*\)</configCode>.*:\1:')
echo "$model"
The model name is fetched online from Apple's site by serial number, so you need an internet connection.
I've found that there seem to be several different Apple URLs for checking this info. Some of them seem to work for some serial numbers, and others for other machines.
e.g:
https://selfsolve.apple.com/wcResults.do?sn=$Serial&Continue=Continue&num=0
https://selfsolve.apple.com/RegisterProduct.do?productRegister=Y&country=USA&id=$Serial
http://support-sp.apple.com/sp/product?cc=$serial (last 4 digits)
https://selfsolve.apple.com/agreementWarrantyDynamic.do
However, the first two URLs are the ones that seem to work for me. Maybe it's because the machines I'm looking up are in the UK and not the US, or maybe it's due to their age?
Anyway, due to not having much luck with curl on the command line (The Apple sites redirect, sometimes several times to alternative URLs, and the -L option doesn't seem to help), my solution was to bosh together a (rather messy) PHP script that uses PHP cURL to check the serials against both URLs, and then does some regex trickery to report the info I need.
Once on my web server, I can now curl it from the terminal command line and it's bringing back decent results 100% of the time.
I'm a PHP novice so I won't embarrass myself by posting the script in its current state, but if anyone's interested I'd be happy to tidy it up and share it here (though admittedly it's a rather long-winded solution to what should be a very simple query).
This info really should be simply made available in system_profiler. As it's available through System Information.app, I can't see a reason why not.
Hi there. For my bash script under GNU/Linux, I do the following to save the resolution:
# Resolution Fix
echo `xrandr --current | grep current | awk '{print $8}'` >> /tmp/width
echo `xrandr --current | grep current | awk '{print $10}'` >> /tmp/height
sed -i 's/,//g' /tmp/height
WIDTH=$(cat /tmp/width)
HEIGHT=$(cat /tmp/height)
rm /tmp/width /tmp/height
echo "$WIDTH"'x'"$HEIGHT" >> /tmp/Resolution
Resolution=$(cat /tmp/Resolution)
rm /tmp/Resolution
# Resolution Fix
and the following in the same script to restore the resolution after exiting some app/game on some OSes.
This executes the command directly:
ResolutionRestore=$(xrandr -s $Resolution)
But if it doesn't execute, call the variable like this to run its contents:
$($ResolutionRestore)
Another way you can try is the following, for example:
RESOLUTION=$(xdpyinfo | grep -i dimensions: | sed 's/[^0-9]*pixels.*(.*).*//' | sed 's/[^0-9x]*//')
VRES=$(echo $RESOLUTION | sed 's/.*x//')
HRES=$(echo $RESOLUTION | sed 's/x.*//')
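The two sed calls that split a WIDTHxHEIGHT string can also be done with bash parameter expansion, with no extra processes (a sketch; the sample resolution is hard-coded here, where the script would take it from xdpyinfo or xrandr):

```shell
RESOLUTION="1440x900"
HRES=${RESOLUTION%x*}    # strip from the "x" to the end -> 1440
VRES=${RESOLUTION#*x}    # strip up to and including "x" -> 900
echo "width=$HRES height=$VRES"
```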

So I'm trying to make a background process that 'espeak's specific log events

I'm relatively new to Linux, so please forgive me if the solution is simple or obvious.
I'm trying to set up a background running script that monitors a log file for certain keyword patterns with awk and tail, and then uses espeak to provide a simplified notification when these keywords appear in the log file (which uses sysklogd)
The concept is derived from this guide
This is a horrible example of what I'm trying to do:
#!/bin/bash
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session opened for user/{system("espeak \"Opening SSH session\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session closed/{system("espeak \"Session closed. Goodbye.\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/authentication failure/{system("espeak \"Warning: Authentication Faliure\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/authentication failure/{system("espeak \"Authentication Failure. I have denied access.\"")}'
The first tail command by itself works perfectly; it monitors the defined log file for 'example sshd' and 'session opened for user', then uses espeak to say 'Opening SSH session'. As you would expect given the above excerpt, the bash script will not run multiple tails simultaneously (or at least it stops after this first tail command).
I guess I have a few questions:
How should I set out this script?
What is the best way to constantly run this script in the background - e.g init?
Are there any tutorials/documentation somewhere that could help me out?
Is there already something like this available that I could use?
Thanks, any help would be greatly appreciated - sorry for the long post.
Personally, I would attempt to set each of these up as an individual cron job. This would allow you to run it at a specific time and at specified intervals.
For example, you could type crontab -e
Then inside, have each of these tail commands listed as such:
5 * * * * tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session opened for user/{system("espeak \"Opening SSH session\"")}'
That would run that one command at 5 minutes after the hour, every hour.
This was a decent guide I found: HowTo: Add Jobs To cron
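Another way to sidestep the blocking problem the question describes is to fold all the rules into a single awk program behind one tail, so no pipeline ever waits on another. A sketch, reusing the patterns from the question (log path as in the question; espeak assumed installed):

```shell
# One tail feeding one awk: every pattern/action rule lives in the same
# program. Matches are printed (and flushed, so the pipe isn't buffered)
# and a read loop hands each message to espeak.
tail -f -n1 /var/log/example_main | awk '
    /example sshd/ && /session opened for user/ { print "Opening SSH session"; fflush() }
    /example sshd/ && /session closed/          { print "Session closed. Goodbye."; fflush() }
    /example sshd/ && /authentication failure/  { print "Warning: Authentication Failure"; fflush() }
' | while IFS= read -r msg; do espeak "$msg"; done
```

Run in the background (append `&`) or under a process supervisor; only one tail process is needed no matter how many patterns are added.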

Is it possible to output the contents of more than one stream into separate columns in the terminal?

For work, I occasionally need to monitor the output logs of services I create. These logs are short lived, and contain a lot of information that I don't necessarily need. Up until this point I've been watching them using:
grep <tag> * | less
where <tag> is either INFO, DEBUG, WARN, or ERROR. There are about 10x as many warns as there are errors, and 10x as many debugs as warns, and so forth. It makes it difficult to catch one ERROR in a sea of relevant DEBUG messages. I would like a way to, for instance, make all 'WARN' messages appear on the left-hand side of the terminal, and all the 'ERROR' messages appear on the right-hand side.
I have tried using tmux and screen, but it doesn't seem to be working on my dev machine.
Try doing this:
FILE=filename.log
vim -O <(grep 'ERR' "$FILE") <(grep 'WARN' "$FILE")
Just use sed to indent the desired lines. Or, use colors. For example, to make ERRORS red, you could do:
$ r=$( printf '\033[1;31m' ) # escape sequence may change depending on the display
$ g=$( printf '\033[1;32m' )
$ echo $g # make green the baseline color for subsequent output
$ sed "/ERROR/ { s/^/$r/; s/$/$g/; }" *
If these are live logs, how about running these two commands in separate terminals:
Errors:
tail -f * | grep ERROR
Warnings:
tail -f * | grep WARN
Edit
To automate this you could start it in a tmux session. I tend to do this with a tmux script similar to what I described here.
In your case the script file could contain something like this:
monitor.tmux
send-keys "tail -f * | grep ERROR\n"
split
send-keys "tail -f * | grep WARN\n"
Then run like this:
tmux new -d \; source-file monitor.tmux; tmux attach
You could do this using screen. Simply split the screen vertically and run tail -f LOGFILE | grep KEYWORD on each pane.
As a shortcut, you can use the following rc file:
split -v
screen bash -c "tail -f /var/log/syslog | grep ERR"
focus
screen bash -c "tail -f /var/log/syslog | grep WARN"
then launch your screen instance using:
screen -c monitor_log_screen.rc
You can of course extend this concept much further by making more splits and using commands like tail -f and watch to get live updates of different output.
Also explore other screen features, such as multiple windows (with monitoring) and hardstatus, and you can put together quite a comprehensive "monitoring console".
