How to check whether the CPU serial is correct - bash

I have a question. I would like to check, when the Linux system starts, whether the CPU serial number is correct.
If it is not, the system should reboot, so it would end up in a reboot loop.
I found the command to read the CPU serial number:
cat /proc/cpuinfo | grep Serial | cut -d ' ' -f 2
How do I compare the result of this command to a value such as 000000ddd0d0d?
I also do not know what such a check script should look like or where to put it on an Ubuntu system (/etc/init.d/rc.local?).
Is this correct?:
#!/bin/bash
STR=$(cat /proc/cpuinfo | grep Serial | cut -d ' ' -f 2)
if [ "$STR" != '000000ddd0d0d' ]; then
    reboot
fi
Thank you for your help
Sorry for my English.

You probably should do the whole thing in one grep command:
grep -q '^Serial.*000000ddd0d0d' /proc/cpuinfo || reboot
This will reboot unless a line containing both Serial and 000000ddd0d0d is found in /proc/cpuinfo.
BUT this is questionable for several reasons.
What good would a reboot do in such a case? As this is supposed to happen during machine startup, you would enter an infinite loop of reboots which can only be stopped by switching the computer off. This is horrible! You probably never encountered such a problem (as admin) or you wouldn't produce it voluntarily. The only way to fix such a system is to boot it via some other medium (a USB thumb drive or similar).
Not all Linux distributions or kernel versions offer a CPU serial number in /proc/cpuinfo. My current system, for instance, doesn't. So on my computer this would not work at all.
The whole idea of reacting to a CPU serial number is highly questionable: a CPU might break (not likely, but possible) and then probably be replaced, and the new CPU, while surely having a different serial number, should not pose any trouble.
So, I think you might want to reconsider.

Related

xset in BASH script does not work under Cron

To learn about BASH scripting, I set myself the objective of writing a Cron script which shuts down a PC running Mint 20 when activity on the Ethernet interface drops below a threshold over one hour.
I mainly (but not exclusively) use the PC as a File/DLNA server. The script works, but now I find that it also shuts down the PC the rare times I'm using the front end. So I want my script to verify whether the screen has been blanked (as per the Power Management settings).
To test the principle I included this in my script:
screenon=$(/usr/bin/xset -q | grep 'Monitor is' | cut -d "s" -f 2)
which when run in a terminal window gives (debug: set -x)
screenon= On
but when run from cron gives (logger):
/usr/bin/xset: unable to open display ""
I have learned about similar problems, but cannot figure out how to solve this.
My script includes: PATH=$PATH:/usr/local/bin
and my PATH is: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
Thanks in advance for any help.
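The "unable to open display" error points to a missing X environment rather than a PATH problem: cron jobs do not inherit the desktop session's DISPLAY (or XAUTHORITY) variables. A hedged sketch of the usual workaround, assuming the desktop runs on display :0 and the default .Xauthority location:
# set the X environment explicitly before calling xset (values are assumptions, adjust to your session)
export DISPLAY=:0
export XAUTHORITY="$HOME/.Xauthority"
screenon=$(/usr/bin/xset -q | grep 'Monitor is' | cut -d "s" -f 2)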

What is causing Xorg high CPU usage?

I am running the feh image viewer on Debian, and after some hours of normal CPU usage (approx. 3%), Xorg suddenly starts using much more CPU (approx. 80%) and everything runs very slowly. I am not running anything else, so the bug should be either in feh or in the X server...
I am using the command "feh -z -q -D20 -R 1" (-z for random image, -q for quiet, -D20 to change the picture every 20 seconds and -R 1 to refresh the directory every second, as I erase and insert pictures pretty often)
When I run "free -m" with feh running, before the high CPU usage sets in, I get:
total used free shared buff/cache available
Mem: 923 117 474 19 331 735
Swap: 99 0 99
And after several hours I get the same for "mem" but the used amount of "swap" is 99.
The fact that your memory usage goes up (swap is full) points directly to a memory leak in some program on your system. Considering that feh is probably not designed for such a use case, I'd bet it's the cause of running out of memory.
The "everything runs slowly" part is caused by the kernel running out of memory and doing its best to keep the system running. If you insist on running feh, your choices are:
Triage the memory leak bug in feh and create a fix for it.
Try to get somebody else to do the same for you.
Periodically kill feh and restart it. Basically you can do (in bash):
while true; do timeout 120m feh -z -q -D20 -R 1; sleep 2s; done
which will kill feh every 120 minutes and restart it after a 2-second delay (which allows you to kill the while loop if needed). Another choice would be to use ulimit to set the maximum amount of memory you want to allow for feh; the process then probably simply dies once it uses too much.
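A minimal sketch of that ulimit variant (the 300000 KB cap is an arbitrary example, not a tested value; the subshell keeps the limit from affecting your interactive shell):
# cap feh's virtual memory (ulimit -v takes kilobytes); allocations beyond the cap fail, so a leaking feh eventually exits
( ulimit -v 300000; feh -z -q -D20 -R 1 )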
I solved this problem too, but I don't know why it works.
You can try running this command to kill the process:
ps -a | grep Xorg | awk '{print $1}' | xargs kill -9

Verify network identity through unix commands

First some background: I'm using the LaunchD feature in Mac OSX to periodically launch an application I'll call "AppX". Optimally I'd like to run this application nearly 24/7. But due to issues with memory leakage (that is my best guess), AppX closes periodically. To solve this, I've created and loaded a simple plist file to launch the application every 6 hours. This itself works perfectly and minimizes application downtime. However, AppX itself can be a drain on my battery, and I'd prefer that it only launch when I'm at home, connected to my wifi network.
Please be aware that while I have some experience with C++ and Java, I know very little in the way of Unix.
My question: I'd like to use an if statement to check whether the network I'm connected to is my home wifi network. If it is, the system should execute the command:
open -a AppX
So... How would I implement an if statement to accomplish this? Any help is appreciated.
There's an older SO question that gives part of the answer:
Get wireless SSID through shell script on Mac OS X
As for the if statement, the following should work:
homenet = "MyHomeNetwork"
netname = /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | awk '/ SSID/ {print substr($0, index($0, $2))}'
if [ "$netname" -eq "$homenet" ]
then
# Do fancy service startup here
else
# This is not my home network
fi

How do I programmatically detect if my laptop is plugged in or not? (OSX)

Exactly as the title says -- I'm looking for a way, in OSX, to tell me if my laptop is currently plugged in so that I can start/pause CPU intensive tasks as necessary.
Better yet, a way to get notifications whenever there is a change to the plugged in state.
You could use pmset:
-g ps / batt displays status of batteries and UPSs.
Saying:
pmset -g ps
would tell you if the laptop is running on AC power or using battery power.
To translate into a condition, something like the following should work:
if [[ $(pmset -g ps | head -1) =~ "AC Power" ]]; then
echo "power on!"
fi
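For the "notify me when it changes" part of the question, there is no pure-shell event hook here; a hedged polling sketch built on the same pmset output (the 30-second interval is an arbitrary choice) could be:
last=""
while true; do
    cur=$(pmset -g ps | head -1)        # e.g. "Now drawing from 'AC Power'"
    if [[ "$cur" != "$last" ]]; then
        echo "power source changed: $cur"
        last="$cur"
    fi
    sleep 30
done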
pmset(1) looks promising. Specifically:
-g ps / batt displays status of batteries and UPSs.
Looking through the source code for pmset, it seems that the key function you're looking for is IOPSGetProvidingPowerSourceType, which
Indicates the power source the computer is currently drawing from.

rsync suddenly hanging indefinitely during transfers

For the past few years, I have been using an rsync one-liner to back up important folders on my Mac Mini desktop (OSX 10.9, 2.5 GHz i5, 4 GB RAM) to a FreeNAS box (0.7.2 Sabanda revision 5266, Pentium D 2.66 GHz, 822MiB RAM [reported by the system, I think there's 1 GB in there]). I am running an rsync daemon on the FreeNAS box. Recently, these transfers have been hanging indefinitely. I have done the usual Google-fu and am unable to identify the source of the problem or a solution.
The one-liner is:
rsync -rvOlt --exclude '.DS_Store' \
--exclude '.com.apple.timemachine.supported' \
--delete /Volumes/Storage/Music/Albums/ 192.168.1.100::albums
I have tried enabling -vvv and --progress, but there is no pattern that I can discern between what hangs and what doesn't. Heck, if I retry, the same file might hang at a different point during the transfer or not at all. A dry run (-n) does not always succeed either. The only "success" I've had is implementing a timeout (--timeout=10) and rerunning the command over and over. Eventually, I creep along, but with no guarantee of success and at a pace that is unacceptable. I've reached a point where I have one file that I can't get past.
The Mac Mini is connected to my router via 5 GHz. The FreeNAS box is wired into that same router on a 100 mbit port. When transfers are actually going, rsync --progress reports 2.5-4 MB/s. According to --progress, a hang is literally just that—no data transfer is occurring as far as I can tell.
I need help with both the diagnostics and the solution.
I was having the same problem. Removing -v didn't work for me. My use-case is slightly different in that I'm going from source (EXT4) to ExFAT. The issue for me was that rsync was attempting to preserve device files and permissions, which ExFAT doesn't support. I was using the -hrltDvaP switches. The -D and -a switches seemed to be my problem. The -a switch translates to -rlptgoD (no -H,-A,-X). The -p, -g, and -o switches seemed to be my root cause as rsync was barfing on one or all of those during runtime. Removing -a and specifying -Prltvc switches explicitly is working for me.
bkupcmd="nice -n$nicelevel /usr/bin/rsync -Prltvc --exclude-from=/var/tmp/ignorelist "
I've been running into the same thing again and again and it seems to help if you drop the -v option (which is annoying if you need that output).
Try using --whole-file/-W.
This option disables the rsync delta-transfer algorithm.
That is what worked for us (WSL to OSX)
our full sync flags were -avWPle
(e was because we were using ssh, and that has to be the last flag)
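Applied to the one-liner from the question, that suggestion would look roughly like this (a sketch, not a tested command; it simply adds -W to the original flags):
rsync -rvOltW --exclude '.DS_Store' \
    --exclude '.com.apple.timemachine.supported' \
    --delete /Volumes/Storage/Music/Albums/ 192.168.1.100::albums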
This happened to me when the remote device ran out of space. The error wouldn't show when the --verbose option was used; turning that off yielded some STDERR output that explained that the remote device was out of space. When I freed some space, I was able to run rsync again with --verbose and everything went fine.
I am using openSUSE 13.2 Linux, rsync version 3.1.1-2.4.1.x86_64, and I experienced similar problems, doing an rsync between my laptop and an external hard disk, with the destination device definitively having enough free space.
I thought I got an improvement omitting option -v, but after 10 minutes it was hanging again: strace said:
select(5, [], [4], [], {60, 0}) = 0 (Timeout)
And with "iotop" I counld see confirm that the rsync processes did no significant disk IO any more.
Neither removing the -v option nor limiting the bandwidth using --bwlimit fixed the problem.
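For reference, the diagnostics described above can be reproduced with something like the following (a sketch; pgrep -n just picks the most recently started rsync process, and iotop usually needs root):
strace -p "$(pgrep -n rsync)"    # attach to the running rsync and watch its system calls
sudo iotop -o                    # show only processes that are currently doing disk IO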
Just had a similar problem while doing an rsync from a hard disk to a FAT32 USB drive. In my case rsync froze in less than a second and did not react at all after that... I had to stop it with CTRL+C.
It turned out the problem was a combination of the use of hardlinks on the hard disk and the FAT32 filesystem on the USB drive, which does not support hardlinks.
Formatting the USB drive with ext4 solved the problem for me.
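If you suspect the same cause, a quick hedged check for hardlinked files on the source (the path is a placeholder) is:
find /path/to/source -type f -links +1    # list regular files with more than one hard link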
In my situation rsync was not actually failing.
I have regular server backups which transfer large files of 500 GB+ and have --append-verify or --checksum specified over ssh.
What I have found upon analysis is that once the client side completes its file checks, the server-side checks start. This means that while the server is doing its checks, the client side will appear hung and frozen - run htop on the server to see rsync working away.
This is likely a non-issue if rsync is run in daemon mode on the server and the rsync protocol is used instead of ssh for transfers.
On a related note, this very LONG wait would trigger an SSH timeout and an "rsync: connection unexpectedly closed (254 bytes received so far) [sender]" error message; the solution is to add ClientAliveInterval 120 and ClientAliveCountMax 720 to /etc/ssh/sshd_config.
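A minimal sketch of that server-side change (assuming sudo access and a systemd-managed sshd; on some distributions the service is called ssh instead of sshd):
echo 'ClientAliveInterval 120' | sudo tee -a /etc/ssh/sshd_config
echo 'ClientAliveCountMax 720' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl reload sshd    # reload so the new keepalive settings take effect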
I've seen this quite often on 3.0.9 on a directory with hardlinks, but it also happened on 3.1.3.
There is a nice analysis in Debian bug 820916: when its internal sockets are congested with errors, rsync could go into a deadlock.
This might have been fixed in a 3.2 release just a few days ago (Jun 2020):
Avoid a hang when an overabundance of messages clogs up all the I/O buffers.
The only good workaround I can think of is, if the problem is not persistent, to put timeout in front of it: timeout rsync <args> <source> <destination>, and then retry. If it is persistent for you, you're the lucky one who can debug it :D
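That workaround might look roughly like this (a sketch; the 30-minute cap and the retry delay are arbitrary, and <args>, <source>, <destination> must be replaced with your own rsync invocation):
# keep retrying until one attempt completes within the time limit
until timeout 30m rsync <args> <source> <destination>; do
    echo "rsync timed out or failed, retrying..." >&2
    sleep 10
done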
It also happens when the user on the target machine does not have write permission on the target folder.
You can try giving others write permission on the target folder:
sudo chmod -R o+w /path/to/target-folder
In my case, it was the IPC (Intrusion Protection Component) in our firewall. It sees all the TCP SYN packets as a flood attack and kills the connection. I left an rsync-over-NFS session open and turned off the IPC in the server's firewall rule, and it started working again right away.
rsync -ravh /source /destination
When it happened I was not able to kill the rsync session. It locked up the NFS mount and I would have to reboot the client machine to get it to work again. The strange thing is it would copy some files over and then all of a sudden stop. It always seemed to stop on the same file, so I was looking at file issues, permission issues, TCP offloading issues, and tried removing the -v in the rsync call. At least in my case it even happened with a simple:
cp -rp /source /destination
So I knew then to start looking at other factors. So if you have any sort of intrusion protection on a firewall or router between the servers you can try turning that off temporarily to see if it solves your issue as well.
Most likely not "your" problem, but I stumbled upon this question when I was researching a similar behavior:
I'm observing "hanging" when the target site has too much io load. e.G. on one of my small business servers, when someone is resyncing his IMAP account and downloading large batchs of data and a backup job runs that writes his data.
In this situation I notice a steep drop in performance for rsync. Noticeable in a high load value in top on the target machine, even though CPU and Mem are fine.
Waiting for the process to finish has helped every time or interrupting and attempting the rsync at a later time again.
I was having the same problem, and it was because I was running out of memory during the rsync. I created a swap file and the problem was solved.
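For completeness, creating a swap file usually looks something like this (a sketch; the 1 GB size is an arbitrary example, and some filesystems need dd instead of fallocate):
sudo fallocate -l 1G /swapfile    # reserve 1 GB for the swap file
sudo chmod 600 /swapfile          # restrict access to root
sudo mkswap /swapfile             # format it as swap space
sudo swapon /swapfile             # enable it immediately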
Had an rsync hanging issue on Ubuntu 16. None of the options above helped. The problem was in the source drive (an external SSD) which had suddenly become faulty. I tried several disk checks, but all of them got stuck. I ended up rebooting the system and the disk suddenly became accessible again.
Holger Ohmacht aka h8ohmh / 8ohmh:
As far as I could investigate, the problem lies in the filesystem buffering and the interworking of the hard disk hardware.
A temporary solution for local drives (e.g. USB3 <-> HD): a script which polls for changing disk space. If the free disk space does not change, rsync has stalled and has to be restarted:
cmd="rsync -aW --progress --stats --preallocate --super \
<here your source dir> \
<here your dest dir>"
eval "$cmd" &
rm ./ndf.txt
rm ./odf.txt
while [[ 0 == 0 ]]; do
df > ./ndf.txt
cmp ./odf.txt ./ndf.txt
res="$?"
echo "$res"
if [[ $res == 0 ]]; then
echo "###########################################"
ls -al "./ndf.txt"
ls -al "./odf.txt"
killall rsync
eval "$cmd" &
else
cp ./ndf.txt ./odf.txt
fi
sleep 60
done
Change <source dir> etc to your paths!
In my case it always stalls when rsync's --preallocate option is used (normally chosen for better disk performance and to reserve contiguous blocks), so as long as the disk and filesystem drivers are not reworked, this is the only solution I have.
