systemd text based console chronology

The systemd startup process is quite complex, so it would be useful to get a listing of started services in chronological order.
To this end, one can create an SVG file:
systemd-analyze plot > startup_order.svg
When analyzing systemd behaviour on a server, it would be useful to get a console-based version of this. Does anybody know how to do this?
Closest I came was
for i in $(systemctl --no-pager --no-legend --all -o short-precise | cut -f 1 -d " "); do
    printf "%s %s\n" "$(systemctl show $i -p ExecMainStartTimestampMonotonic 2>/dev/null)" "$i"
done | sed -n '/=/p' | sed 's/^ExecMainStartTimestampMonotonic=//' | sort -n
But I think ExecMainStartTimestampMonotonic is not the boot start time.
Any ideas?

The output of systemd-analyze plot is an SVG, which is just text (XML). You can parse it using sed to get what you want.
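For example, a rough sketch of that idea, assuming the unit labels sit in one-line <text> elements (the exact markup may vary between systemd versions):
systemd-analyze plot \
    | grep -o '<text[^>]*>[^<]*</text>' \
    | sed 's/<[^>]*>//g' \
    | grep -E '\.(service|target|socket|mount|device)'
The labels typically read like foo.service (123ms) and come out in the order they are drawn, which roughly follows activation order.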

Related

Merge bash command(s)/simplification of a long command in bash

Hi, I am writing a script in which I would like a single command (or function) to stand in for a more complex one. The problem is that I have three docker containers whose names are identical except for a trailing digit. Each container has one entry in its logs containing the right Cloudflared URL, and I need to extract it more than once, because each of the three containers has a different URL in its logs.
I know it works the classic way! The only problem is that repeating the same long command three times makes a bit of a mess in the script (at least that's how it feels to me).
#!/bin/bash
function test {
    docker_container_proxy_name="cloudflared"
    real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
    docker_container_1_uri_outside="$(docker logs ${docker_container_proxy_name}1 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
    docker_container_2_uri_outside="$(docker logs ${docker_container_proxy_name}2 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
    docker_container_3_uri_outside="$(docker logs ${docker_container_proxy_name}3 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
}
test
Is there any way I can simplify this so the whole long command is written once and only the docker container digit changes? Something like this (which looks cleaner to my eye):
#!/bin/bash
function test {
    docker_container_proxy_name="cloudflared"
    real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
    command="$(docker logs ${docker_container_proxy_name} 2>&1 | grep -Eo $real_domain_of_cloudflared | tail -n 1)"
    docker_container_1_uri_outside="${command}1"
    docker_container_2_uri_outside="${command}2"
    docker_container_3_uri_outside="${command}3"
}
test
I would be glad for an answer. Maybe a similar question already exists; I admit I didn't look for one and went straight to writing my own, so I apologize if this turns out to be a duplicate.
Docker containers:
$ docker container ps | grep "cloudflared"
<id_of_container> cloudflare/cloudflared:2022.4.1-amd64 "cloudflared --no-au…" 11 days ago Up 2 days cloudflared3
<id_of_container> cloudflare/cloudflared:2022.4.1-amd64 "cloudflared --no-au…" 11 days ago Up 2 days cloudflared2
<id_of_container> cloudflare/cloudflared:2022.4.1-amd64 "cloudflared --no-au…" 11 days ago Up 2 days cloudflared1
#1 for XFCE_noVNC
#2 for filebrowser
#3 for code-server
# I want these services available "in my pocket"/everywhere without exposing my public IPv4.
Variables are for storing data, not executable code. Generally the best way to store executable code is in a function:
#!/bin/bash
get_container_uri() {
    docker_container_proxy_name="cloudflared"
    real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
    docker logs "${docker_container_proxy_name}${1}" 2>&1 | grep -Eo "$real_domain_of_cloudflared" | tail -n 1
}
docker_container_1_uri_outside="$(get_container_uri 1)"
docker_container_2_uri_outside="$(get_container_uri 2)"
docker_container_3_uri_outside="$(get_container_uri 3)"
Note that the function just runs the docker logs ... command directly rather than capturing its output; that way that command's output becomes the function's output, which is exactly what's wanted.
Also, some general scripting recommendations: you should (almost) always put variable references in double-quotes (e.g. grep -Eo "$real_domain_of_cloudflared") to keep the shell from parsing them in unexpected ways. The function keyword is nonstandard; the standard way to define a function is by putting () after the name (as I did here). Don't use test as the name of a function, since that's the name of a rather important built-in command, and overriding it might break other things.
shellcheck.net will point out some of these (and many other common mistakes). I recommend running your scripts through it and fixing what it points out.
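If even the three near-identical assignment lines bother you, a loop that fills an indexed array is another option (a sketch building on the get_container_uri function above; the array name uris is just illustrative):
declare -a uris
for i in 1 2 3; do
    # Each call's output lands in one array slot instead of a numbered variable.
    uris[i]="$(get_container_uri "$i")"
done
echo "URL of container 1: ${uris[1]}"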
Your second snippet was almost correct. Suggesting:
#!/bin/bash
docker_container_proxy_name="cloudflared"
real_domain_of_cloudflared="https://([a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60})(-[a-z0-9]{1,60}).trycloudflare.com"
function command {
    echo "$(docker logs "${docker_container_proxy_name}$1" 2>&1 | grep -Eo "$real_domain_of_cloudflared" | tail -n 1)"
}
function test {
    docker_container_1_uri_outside=$(command 1)
    docker_container_2_uri_outside=$(command 2)
    docker_container_3_uri_outside=$(command 3)
}
test

"WRITE" command works manually but not via script

My co-workers and I use the screen program on our Linux jump server to make the most of the available screen space. We have multiple screens set up so that messages can go to one while we do work in another.
I have a script that verifies network device connectivity and sends messages to my co-workers regardless of whether anything is down or not.
The script first reads a file containing their usernames, then grabs the highest PTS number, which denotes the last screen session they activated, and writes it in the proper format to an external file, like so:
cat ./netops_techs | while read -r line; do
    temp=$(echo $line)
    temp2=$(who | grep $temp | sed 's/[^0-9]*//g' | sort -n -r | head -n1)
    if who | grep $temp; then
        echo "$temp pts/$temp2" >> ./tech_send
    fi
done
Once that is done, it scans our network every 5 minutes and sends updates to the folks listed in "./tech_send", like so:
Techs=$(cat ./tech_send)
if [ ! -f ./Failed.log ]; then
    echo -e "\nNo network devices down at this time."
    for d in $Techs
    do
        cat ./no-down | write $d
    done
else
    # Writes downed buildings locally to my terminal
    echo -e "\nThe following devices are currently down:"
    echo ""
    echo "IP Hostname Model Building Room Rack Users Affected" > temp_down.log
    grep -f <(sed 's/.*/\^&\\>/' Failed.log) Asset-Location >> temp_down.log
    cat temp_down.log | column -t > Down.log
    cat Down.log
    # This will send the downed buildings to the rest of NetOps
    for d in $Techs
    do
        cat Down.log | write $d
    done
fi
The issue is that when they are working in their main sectioned screen, the messages pop up in that active screen instead of the inactive one. If I send them a message manually, such as:
write jsmith pts/25
Test Test
followed by Ctrl+D, it works fine even if they are in a different session. Via the script, though, it gives an error stating:
write: jsmith is logged in more than once; writing to pts/23
write: jsmith/pts/25 is not logged in
I have verified the "tech_send" file and it has the correct format for them:
jsmith pts/25
I would appreciate any insight into why this is happening.
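One thing worth checking (a guess based only on the snippets above, not something confirmed in the post): for d in $Techs splits on whitespace, so "jsmith" and "pts/25" arrive as two separate values of $d, and write never receives the username and the tty in the same call. A sketch that keeps the two fields together by reading the file line by line:
while read -r user tty; do
    # One call per line of ./tech_send, e.g. `write jsmith pts/25`
    cat Down.log | write "$user" "$tty"
done < ./tech_send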

Get Macbook screen size from terminal/bash

Does anyone know of any possible way to determine or glean this information from the terminal (in order to use in a bash shell script)?
On my Macbook Air, via the GUI I can go to "About this mac" > "Displays" and it tells me:
Built-in Display, 13-inch (1440 x 900)
I can get the screen resolution from the system_profiler command, but not the "13-inch" bit.
I've also tried with ioreg without success. Calculating the screen size from the resolution is not accurate, as this can be changed by the user.
Has anyone managed to achieve this?
I think you can only get the display model name, which encodes the size:
ioreg -lw0 | grep "IODisplayEDID" | sed "/[^<]*</s///" | xxd -p -r | strings -6 | grep '^LSN\|^LP'
will output something like:
LP154WT1-SJE1
which depends on the display manufacturer. But as you can see, the first three digits in this model-name string imply the display size: 154 == 15.4''
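If you want to turn that into a number automatically, here is a small sketch; it assumes the first three-digit group encodes the diagonal in tenths of an inch, which is a manufacturer convention rather than a guarantee:
# Hypothetical example model string; in practice take it from the ioreg output above.
model="LP154WT1-SJE1"
digits=$(printf '%s\n' "$model" | grep -oE '[0-9]{3}' | head -n 1)
printf '%s.%s inch\n' "${digits:0:2}" "${digits:2:1}"   # -> 15.4 inch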
EDIT
Found a neat solution but it requires an internet connection:
curl -s http://support-sp.apple.com/sp/product?cc=`system_profiler SPHardwareDataType | awk '/Serial/ {print $4}' | cut -c 9-` |
sed 's|.*<configCode>\(.*\)</configCode>.*|\1|'
hope that helps
The following script:
model=$(system_profiler SPHardwareDataType | \
/usr/bin/perl -MLWP::Simple -MXML::Simple -lane '$c=substr($F[3],8)if/Serial/}{
print XMLin(get(q{http://support-sp.apple.com/sp/product?cc=}.$c))->{configCode}')
echo "$model"
will print for example:
MacBook Pro (13-inch, Mid 2010)
Or the same without perl but more command forking:
model=$(curl -s http://support-sp.apple.com/sp/product?cc=$(system_profiler SPHardwareDataType | sed -n '/Serial/s/.*: \(........\)\(.*\)$/\2/p')|sed 's:.*<configCode>\(.*\)</configCode>.*:\1:')
echo "$model"
The information is fetched online from Apple's site by serial number, so you need an internet connection.
I've found that there seem to be several different Apple URLs for checking this info. Some of them seem to work for some serial numbers, and others for other machines.
e.g:
https://selfsolve.apple.com/wcResults.do?sn=$Serial&Continue=Continue&num=0
https://selfsolve.apple.com/RegisterProduct.do?productRegister=Y&country=USA&id=$Serial
http://support-sp.apple.com/sp/product?cc=$serial (last 4 digits)
https://selfsolve.apple.com/agreementWarrantyDynamic.do
However, the first two URLs are the ones that seem to work for me. Maybe it's because the machines I'm looking up are in the UK and not the US, or maybe it's due to their age?
Anyway, due to not having much luck with curl on the command line (The Apple sites redirect, sometimes several times to alternative URLs, and the -L option doesn't seem to help), my solution was to bosh together a (rather messy) PHP script that uses PHP cURL to check the serials against both URLs, and then does some regex trickery to report the info I need.
Once on my web server, I can now curl it from the terminal command line and it's bringing back decent results 100% of the time.
I'm a PHP novice, so I won't embarrass myself by posting the script in its current state, but if anyone's interested I'd be happy to tidy it up and share it here (though admittedly it's a rather long-winded solution to what should be a very simple query).
This info really should be simply made available in system_profiler. As it's available through System Information.app, I can't see a reason why not.
Hi there. For my bash script under GNU/Linux, I do the following to save the resolution:
# Resolution Fix
echo `xrandr --current | grep current | awk '{print $8}'` >> /tmp/width
echo `xrandr --current | grep current | awk '{print $10}'` >> /tmp/height
sed -i 's/,//g' /tmp/height
WIDTH=$(cat /tmp/width)
HEIGHT=$(cat /tmp/height)
rm /tmp/width /tmp/height
echo "$WIDTH"'x'"$HEIGHT" >> /tmp/Resolution
Resolution=$(cat /tmp/Resolution)
rm /tmp/Resolution
# Resolution Fix
and the following in the same script to restore the resolution after exiting some app/game on some operating systems.
This executes the command directly:
ResolutionRestore=$(xrandr -s $Resolution)
But if it doesn't execute, you can call the variable like this to run the variable's content:
$($ResolutionRestore)
And another way you can try is the following, for example:
RESOLUTION=$(xdpyinfo | grep -i dimensions: | sed 's/[^0-9]*pixels.*(.*).*//' | sed 's/[^0-9x]*//')
VRES=$(echo $RESOLUTION | sed 's/.*x//')
HRES=$(echo $RESOLUTION | sed 's/x.*//')
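For what it's worth, the same save/restore idea can be written without the temporary files (a sketch, assuming an X session where xrandr's summary line has the usual "current WIDTH x HEIGHT," format):
# Save the current resolution in one step.
read -r WIDTH HEIGHT < <(xrandr --current | awk '/current/ {gsub(",",""); print $8, $10}')
Resolution="${WIDTH}x${HEIGHT}"

# ... run the app/game here ...

# Restore the saved resolution afterwards.
xrandr -s "$Resolution"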

How to determine the latest stable TuxOnIce version as compactly as possible

So what I'm intending to do here is to determine the latest stable version of TuxOnIce from http://tuxonice.net/downloads/all/ (currently tuxonice-for-linux-3.8.0-2013-02-24.patch.bz2).
What complicates things is that there's no "current" link, so we have to follow the versioning, which looks something like this (these particular files don't exist):
tuxonice-for-linux-3.8.0-2013-4-2.patch.bz2
tuxonice-for-linux-3.8-4-2013-4-16.patch.bz2
tuxonice-for-linux-3.8-11-2013-5-23.patch.bz2
The problem is that they will be listed in this order:
tuxonice-for-linux-3.8-11-2013-5-23.patch.bz2
tuxonice-for-linux-3.8-4-2013-4-16.patch.bz2
tuxonice-for-linux-3.8.0-2013-4-2.patch.bz2
My current implementation (which is garbage) is this. I thought about using the dates but couldn't figure out how to do that either (/tmp/tuxonice is the index file):
_major=3.8 # Auto-generated
_TOI=$(grep ${_major}-1[0-9] /tmp/tuxonice | cut -d '"' -f2 | tail -1)
[ ! $_TOI ] && _TOI=$(grep ${_major}- /tmp/tuxonice | cut -d '"' -f2 | tail -1)
[ ! $_TOI ] && _TOI=$(grep ${_major}.0-2 /tmp/tuxonice | cut -d '"' -f2 | tail -1)
Thanks.
Use the webserver's feature to sort the index page by modification date in reverse order, grab the page using lynx -dump, get the first line matching the filename you are interested in and print the respective column. This gives you the absolute URL to the file, from there you can tweak the command to give you the exact output you want (filename, just the version string, ...).
$ lynx -dump 'http://tuxonice.net/downloads/all/?C=M&O=D'|awk '/^[[:space:]]*[[:digit:]]+\..+\/tuxonice-for-linux/ { print $2; exit }'
http://tuxonice.net/downloads/all/tuxonice-for-linux-3.8.0-2013-02-24.patch.bz2
Still not super-robust and will obviously break if the modification dates are not as expected, and you probably also want to tweak the regex a bit to be more specific.
This isn't a real answer, but I thought this "one-liner"[1] was pretty cool:
HTML=$(wget -qO- http://tuxonice.net/downloads/all/ | grep tuxonice); TIMESTAMP=$(echo "$HTML" | sed 's/.*\([0-9]\{2\}-[A-Za-z]\{3\}-[0-9]\{4\} [0-9]\{2\}:[0-9]\{2\}\).*/\1/' | while read line; do echo $(date --date "$line" +%s) $line; done | sort | tail -n 1 | cut -d' ' -f2-3); LINK=$(echo "$HTML" | grep "$TIMESTAMP" | sed 's/.*href=\"\(.*\)\".*/\1/'); echo "http://tuxonice.net/downloads/all/${LINK}"
Prints:
http://tuxonice.net/downloads/all/tuxonice-for-linux-3.8.0-2013-02-24.patch.bz2
This approach is really just a joke though. Obviously, there are better ways to do this, perhaps using a scripting language that supports XML parsing.
At the very least, maybe this will give you some insight on how you can use the date/time values of the files to select the "newest". But I'd caution against using this (because upload dates may not coincide with version numbers), and suggest that your version-number idea was probably better, if you can somehow handle all of the various naming and version numbering schemes it looks like they've used.
[1] It's not a real one liner
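Building on that, here is a sketch that keys off the date embedded in each filename rather than the upload date. Assumptions on my part: every patch name ends in -YYYY-M-D.patch.bz2 (with or without zero padding), that embedded date tracks the release order, and /tmp/tuxonice is the saved index page as in the question:
awk -F'"' '/tuxonice-for-linux-.*\.patch\.bz2/ {
    f = $2
    # Pull the trailing YYYY-M-D date out of the filename.
    if (match(f, /[0-9][0-9][0-9][0-9]-[0-9]+-[0-9]+\.patch\.bz2$/)) {
        split(substr(f, RSTART), d, /[-.]/)
        key = sprintf("%04d%02d%02d", d[1], d[2], d[3])
        if (key > best) { best = key; latest = f }
    }
}
END { print latest }' /tmp/tuxonice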

Is it possible to output the contents of more than one stream into separate columns in the terminal?

For work, I occasionally need to monitor the output logs of services I create. These logs are short-lived, and contain a lot of information that I don't necessarily need. Up until this point I've been watching them using:
grep <tag> * | less
where <tag> is either INFO, DEBUG, WARN, or ERROR. There are about 10x as many warns as there are errors, and 10x as many debugs as warns, and so forth. It makes it difficult to catch one ERROR in a sea of relevant DEBUG messages. I would like a way to, for instance, make all 'WARN' messages appear on the left-hand side of the terminal, and all the 'ERROR' messages appear on the right-hand side.
I have tried using tmux and screen, but I haven't been able to get this working on my dev machine.
Try doing this:
FILE=filename.log
vim -O <(grep 'ERR' "$FILE") <(grep 'WARN' "$FILE")
Just use sed to indent the desired lines. Or, use colors. For example, to make ERRORS red, you could do:
$ r=$( printf '\033[1;31m' )   # bold red; the escape sequence may change depending on the display
$ g=$( printf '\033[1;32m' )   # bold green
$ echo $g                      # switch the terminal to green, used here as the "normal" colour
$ sed "/ERROR/ { s/^/$r/; s/$/$g/; }" *
If these are live logs, how about running these two commands in separate terminals:
Errors:
tail -f * | grep ERROR
Warnings:
tail -f * | grep WARN
Edit
To automate this you could start it in a tmux session. I tend to do this with a tmux script similar to what I described here.
In your case the script file could contain something like this:
monitor.tmux
send-keys "tail -f * | grep ERROR\n"
split
send-keys "tail -f * | grep WARN\n"
Then run like this:
tmux new -d \; source-file monitor.tmux; tmux attach
You could do this using screen. Simply split the screen vertically and run tail -f LOGFILE | grep KEYWORD on each pane.
As a shortcut, you can use the following rc file:
split -v
screen bash -c "tail -f /var/log/syslog | grep ERR"
focus
screen bash -c "tail -f /var/log/syslog | grep WARN"
then launch your screen instance using:
screen -c monitor_log_screen.rc
You can of course extend this concept much further by making more splits and using commands like tail -f and watch to get live updates of different outputs.
Do also explore other screen features, such as multiple windows (with monitoring) and hardstatus, and you can come up with quite a comprehensive "monitoring console".
