rpm/yum/dnf one (or two) liner to determine whether a package needs updating? - makefile

Update: Working code to at least provide some info follows below, based on John Bollinger's answer....
As part of packaging/managing a complex project, I need to determine automagically whether a package needs to be installed or upgraded.
The first part, installation, is easy (example shown in bash; I'm using a very short form for illustration: the actual code will be more defensive and informative):
rpm -q ${package} >& /dev/null || sudo dnf -y install ${package}
The second part is determining whether or not there are upgrades available. My first thought was to use something like
dnf info ${package} | grep -q 'Available Packages' && sudo dnf upgrade ${package}
but that will trigger if there are packages of the same name but with different architectures. (Note the lack of -y here: in case of available upgrades, human review may be required; as with the first example, this is illustrative, and the actual implementation would likely involve a separate confirmation step.)
I may have no choice but to do some complex text, version, and architecture processing, but I'd like to a) leverage what others have done and b) make this as KISS and reliable as possible.
In case it's relevant, the overall motivation is primarily efficiency and partly DRY: The checks will go into Makefile prerequisites and/or recipes. I'm considering having something like a check target that will query whether or not to apply available upgrades, and I'd like that check to be as KISS and reliable as possible. For example, I might have something like this:
upgradables=()
for package in ${packages}; do
    # someEfficientCheck is the hypothetical test this question is asking for
    if someEfficientCheck "$package"; then
        upgradables+=("$package")
    fi
done
cat << EOM
The following packages are upgradable:
${upgradables[@]}
EOM
read -p 'Upgrade these? ' answer
case $answer in...
etc.
Many thanks!
UPDATE: WORKING CODE
The following at least tells me more or less reliably what the state of a given package is. I still need to find an example of a package that will return multiple architectures, e.g., to narrow things down a bit more....
#!/usr/bin/env bash

package=$1
[[ -z $package ]] && { printf "\n\tA package name is required.\n"; exit 1; }

isInstalled=no
isUpgradable=no

# Is the package installed at all?
rpm -q "$package" >& /dev/null && isInstalled=yes

if [[ $isInstalled == "yes" ]]; then
    # Pin the dnf query to the installed architecture so that same-named
    # packages for other architectures don't cause false positives.
    arch=$(rpm -q --queryformat "%{ARCH}" "$package" 2> /dev/null)
    dnf info "${package}.${arch}" | grep -q 'Available Packages' && isUpgradable=yes
fi

[[ $isInstalled == "yes" ]] && echo "'${package}' is installed"
[[ $isUpgradable == "yes" ]] && echo "'${package}' is upgradable"
I'm not crazy about calling rpm twice, but I can live with it (I'm working on a compact statement to get both the return value and the output value). The next step will be wrapping that to query the list I'm interested in.
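For what it's worth, one rpm call can do both jobs: command substitution preserves the exit status of the command inside it, so the assignment itself doubles as the installed/not-installed test. A minimal sketch using the same variable names as above:

# rpm exits non-zero if the package is not installed, and the $(...)
# substitution passes that status through to the if.
if arch=$(rpm -q --queryformat '%{ARCH}' "$package" 2>/dev/null); then
    isInstalled=yes
fi

(For a package installed for several architectures this concatenates them; a query format of '%{ARCH}\n' yields one per line instead.)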

My first thought was to use [dnf info] but that will trigger if there are packages of the same name but with different architectures.
DNF and Yum understand package specifications of the form <package>.<arch>, so you can say, for example,
dnf info my_package.x86_64 | grep -q '^Available Packages' && do_something
to avoid DNF reporting about packages for architectures other than the one you are interested in.
Of course, that requires you to know what architecture you're interested in, which is not necessarily straightforward. Some tools that may be helpful there include:
the uname -m command
(for packages that are already installed) the output of rpm -q <package>, perhaps with --queryformat (a.k.a. --qf) to specify a custom query format that makes the output contain more information and/or be easier to consume
Do not overlook noarch packages, whose existence means that you always have to accommodate at least two package architectures. Moreover, don't overlook that on a multilib system it might not be just that the same package is available for multiple architectures, but that it is installed for multiple architectures.
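To make that concrete, here is a sketch (assuming $package is set and the dnf repositories are reachable) that enumerates every installed <name>.<arch> instance, so noarch and multilib installs are both covered, and checks each one for available updates:

for na in $(rpm -q --queryformat '%{NAME}.%{ARCH}\n' "$package" 2>/dev/null); do
    # An 'Available Packages' section for an exact name.arch spec means a
    # newer version exists in the enabled repositories.
    dnf info "$na" | grep -q '^Available Packages' && echo "$na is upgradable"
done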
However, I'm inclined to be skeptical of the whole idea. Especially anything involving user interaction in a makefile smells bad to me, but more generally, if you're committed to (trying to) upgrade to the most recent versions of a set of packages, then I'd be inclined to just run the upgrade command instead of checking first. On the other hand, being committed to upgrading to the latest version also smells bad -- it makes sense to ensure that you satisfy at least some set of minimum requirements, but I'm having trouble seeing the justification for at all times demanding the latest available version of the packages of interest.
And if it would suffice to ensure at least a minimum version of one or more packages, then you might find that the yum-builddep command solves a lot of problems for you. Especially so if the context is building RPMs, but you could make broader use of yum-builddep, too, as long as you're willing to write enough of an RPM .spec file to be accepted by the tool and to convey the build requirements.
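For illustration, a stub along these lines might be enough; the package names and version bounds here are hypothetical placeholders:

# reqs.spec -- hypothetical stub, just enough to be parsed by
# yum-builddep (or dnf builddep)
Name:           myproject-reqs
Version:        1.0
Release:        1
Summary:        Build-requirement stub for myproject
License:        MIT
BuildRequires:  gcc >= 8
BuildRequires:  openssl-devel >= 1.1.1
%description
Placeholder spec used only to pull in minimum-version requirements.

Then yum-builddep reqs.spec (or dnf builddep reqs.spec) installs whatever is needed to satisfy the BuildRequires lines.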

Related

How to extract only version from "go version" command using sed or other bash command in bash shell script running on Debian 10

The command go version currently prints go version go1.13.6 linux/amd64. I installed from the go website rather than Debian packages as the version is old. Therefore traditional ways to extract the version number like dpkg -s cannot be used.
I've explored sed commands to extract only the number (1.13.6), like another question on this site, which is similar, I grant you; however, after reading various sources online about what's possible with sed, and with my limited knowledge, I've been unable to work out how to tell sed to find the starting point, let alone make it future-proof for new versions which may be slight alterations of this number format. I've tried to explore ways to say "find the 3rd-to-last number" so that I can then work backwards, or "find the 2nd word 'go'".
Current efforts have been purely theoretical, as I can't find where to begin, I've not included any attempts.
Can it be done?
$ v=`go version | { read _ _ v _; echo ${v#go}; }`
$ echo $v
1.13.6
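If you would rather stay with sed, an equivalent one-liner is possible; this is a sketch that assumes the version token always has the form goX.Y[.Z...]:

v=$(go version | sed -E 's/.*go([0-9]+(\.[0-9]+)+).*/\1/')
echo "$v"   # 1.13.6

Because the pattern requires digits right after "go", the leading words "go version" don't interfere, and it keeps working if Go adds or drops a patch component.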
Further reading:
Compound commands.
The read command.
Parameter expansion.
Command substitution.

How defensive should a Bash script be?

I recently stumbled over a very basic code snippet and began to wonder how defensive a Bash script needs to be.
[[ -f $file ]] && rm -- "$file" || echo "File does not exist."
What is the purpose of checking if the file exists when you could also simply pass the file to rm and let rm do the check itself? If the reason is that you can control what is shown as an error message, why isn't it then necessary to also check whether the file is writable in order to delete it? This leads me to believe that once you start to test for possible conditions before you call a command, then, in order to be consistent, you'd need to check for every eventuality, or you'd end up with a mix of error messages printed by yourself and those of external commands (rm in this case).
Another slightly more complex example I have wondered about would be
ffmpeg -n -i "./$file" -codec:a flac "./${file%.*}.flac"
This takes a file and transcodes it with the FLAC codec. Would it be sensible to first check the existence of the input file, whether that file is readable and also whether the location is writable for the output file? (Checking if the output file already exists is already done by providing the option -n to ffmpeg which means not to overwrite existing files.)
All of these would be checked by ffmpeg itself if I didn't add these checks.
How defensive your script should be depends on how granular a control you want over execution of your script.
If you are happy with how a command handles failure and the rest of your script can execute without any problem whether it fails or succeeds, you can execute it totally blindly, ignoring its output and return value...
command_to_be_ignored >/dev/null 2>&1 ||:
Maybe you are satisfied just knowing whether it succeeds or fails?
command_to_be_ignored >/dev/null 2>&1 || result=$?
If you run a mission-critical script and you want to react to many potential failure modes, then you will need to capture and test the output of the command, or test preconditions and do something different if they are not met.
By the way, checking preconditions to the execution of a command does not make error handling unnecessary, because the command may still fail due to things happening in parallel, or simply you not fully understanding (or being able to test) all potential failure modes. Checking pre-conditions is used to change what your program does in specific cases, not to avoid handling errors.
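A tiny illustration of that point, as a sketch: the precondition below can pass and rm can still fail, because another process may remove the file (or permissions may change) in between, so the error path still needs handling.

if [[ -f $file ]]; then
    # The test passed, but the world may have changed since (a TOCTOU
    # race), so rm's own failure still has to be handled.
    rm -- "$file" || echo "rm failed for '$file'" >&2
fi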
To sum it up, it is a matter of engineering: understanding the underlying mechanisms, knowing what you need to do, and making appropriate tradeoffs.

Is it possible to run Go code as a script?

As Go is becoming the language of the "system", I wonder if it's possible to run Go code as a script, without compiling it. Is there any way to do that?
The motivation (as there were questions as for motivation), taken from How to use Scala as a scripting language
Problem You want to use Scala as a scripting language on Unix systems,
replacing other scripts you’ve written in a Unix shell (Bourne Shell,
Bash), Perl, PHP, Ruby, etc.
UPDATE:
I wonder how much I can "abuse" go run so as to have Go code running as scripts even though it is compiled (which I like). It looks like go run can give me the opportunity to replace scripting: that is, keep source files on the servers and run them as source, with go run doing the compiling, so that I still manage sources rather than executables.
UPDATE:
In addition, I saw gorun:
gorun - Script-like runner for Go source files.
Though there is motivation, and tools which try to work around not being able to run Go as a script, I will not go for it: I've been told gorun is not production-ready and was advised not to use it in production, as it is outside its intended purpose, and I would have to give up scripting convenience in favour of Go anyway. (I don't have anything against compiling and static typing, I'm a big fan of both, but I wanted something with a similar convenience to scripting.)
I am not an expert in go, but you can do something like this. I know this is not elegant, but may be something to start with :)
~$ cat vikas.go
//usr/bin/env go run "$0" "$@"; exit
package main

import "fmt"

func main() {
    fmt.Printf("Hello World\n")
}
~$
~$ ./vikas.go
Hello World
~$
As others noted, Go is a compiled language. Programs written in the Go language rely on the Go runtime which is added to the executable binary.
The Go runtime provides certain vital features such as garbage collection, goroutine scheduling, runtime reflection etc. Without them a Go app cannot work as guaranteed by the language spec.
A theoretical Go interpreter would have to simulate those features, which would essentially mean to include a Go runtime and a Go compiler. There is no such thing, and there is no need for that.
Also note that if the code is not yet compiled, the Go interpreter would have to contain the entire standard library, because a Go "script" could legally refer to anything from it (when a Go app is compiled, only the things it uses / refers to get compiled into the executable binary).
To quickly test something, just use go run, which also compiles your app and builds an executable binary in a temporary folder, launches that temp file and cleans it when your app exits.
"Solutions" posted by others may "feel" like scripting, but they are nothing more than automating / hiding the process of compiling the Go source to an executable binary and then launching that binary. This is exactly what go run does (which also cleans up the temporary binary).
Absolutely possible on Linux; props to Cloudflare: https://blog.cloudflare.com/using-go-as-a-scripting-language-in-linux
In case you don't want to use the already mentioned 'shebang' which may have limitations:
//usr/bin/env go run "$0" "$@"; exit "$?"
Do this instead:
# check if binfmt_misc is already active
# (should already be done through systemd)
mount | grep binfmt_misc
# if not already mounted
sudo mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
# get gorun and place it so it will work
go get github.com/erning/gorun
sudo mv $GOPATH/bin/gorun /usr/local/bin
# make .go scripts being run with `gorun` wrapper script
echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register
This is just for one-off testing. To make it persistent, I have this as the second-to-last line in my /etc/rc.local:
echo ':golang:E::go::/usr/local/bin/gorun:OC' | tee /proc/sys/fs/binfmt_misc/register 1>/dev/null
If you experience something like 'caching' issues during developing scripts, have a look at /tmp/gorun-$(hostname)-USERID.
I love bash, but as soon as you need proper data structures (bash arrays and associative arrays don't count, due to their unusability), you need another language. For me, Go is faster to write than Python, and any Go setup as depicted above will do. There's no battling Python 2 vs 3, its libs, and pip horrors; you can have a distributable binary at will; and being able to instantly change the source for small one-off scripts is why this approach definitely has its upsides.
See Neugram, Go scripting. From its documentation:
#!/usr/bin/ng
// Start a web server to serve files.
import "net/http"
pwd := $$ echo $PWD $$
h := http.FileServer(http.Dir(pwd))
http.ListenAndServe(":8080", h)
No, this is neither possible nor desirable.

bug? in codesign --remove-signature feature

I would like to remove the digital signature from a Mac app that has been signed with codesign. There is an undocumented option to codesign, --remove-signature, which by its name seems to be what I need. However, I can't get it to work. I realize it is undocumented, but I could really use the functionality. Maybe I'm doing something wrong?
codesign -s MyIdentity foo.app
works normally, signing the app
codesign --remove-signature foo.app
does disk activity for several seconds, then says
foo.app: invalid format for signature
and foo.app has grown to 1.9 GB!!! (Specifically, it is the executable in foo.app/Contents/MacOS that grows, from 1.1 MB to 1.9 GB.)
The same thing happens when I try to sign/unsign a binary support tool instead of a .app.
Any ideas?
Background: This is my own app; I'm not trying to defeat copy protection or anything like that.
I would like to distribute a signed app so that each update to the app won't need user approval to read/write the app's entries in the Keychain. However, some people need to modify the app by adding their own folder to /Resources. If they do that, the signature becomes invalid, and the app can't use its own Keychain entries.
The app can easily detect if this situation has happened. If the app could then remove its signature, everything would be fine. Those people who make this modification would need to give the modified, now-unsigned app permission to use the Keychain, but that's fine with me.
A bit late, but I've updated a public-domain tool called unsign which modifies executables to clear out signatures.
https://github.com/steakknife/unsign
I ran into this issue today. I can confirm that the --remove-signature option to Apple's codesign is (and remains, six years after the OP asked this question) seriously buggy.
For a little background: Xcode (and Apple's command-line developer tools) includes the codesign utility, but no tool for removing signatures. However, as this is something that needs to be done pretty frequently in certain situations, there is a completely undocumented option:
codesign --remove-signature, which (one assumes, given the lack of documentation) should be fairly self-explanatory. Unfortunately, it rarely works as intended without some effort. So I ended up writing a script that should take care of the OP's problem, mine, and similar ones. If enough people find it here and find it useful, let me know and I'll put it on GitHub or something.
#!/bin/sh
# codesign_remove_for_real -- a working `codesign --remove-signature`
# (c) 2018 G. Nixon. BSD 2-clause minus retain/reproduce license requirements.

total_size(){
  # Why is it so damn hard to get a decent recursive filesize total in the shell?
  # - Darwin `du` doesn't do *bytes* (or anything less than 512B blocks)
  # - `find` size options are completely non-standardized and don't recurse
  # - `stat` is not in POSIX at all, and its options are all over the map...
  # - ... etc.
  # So: here we just use `find` for a recursive list of *files*, then wc -c
  # and total it all up. Which sucks, because we have to read in every bit
  # of every file. But it's the only truly portable solution, I think.
  find "$@" -type f -print0 | xargs -0n1 cat | wc -c | tr -d '[:space:]'
}

# Get an accurate byte count before we touch anything. Zero would be bad.
size_total=$(total_size "$@") && [ "$size_total" -gt 0 ] || exit 1

recursively_repeat_remove_signature(){
  # `codesign --remove-signature` randomly fails in a few ways.
  # If you're lucky, you'll get an error like:
  #   [...]/codesign_allocate: can't write output file: [...] (Invalid argument)
  #   [...] the codesign_allocate helper tool cannot be found or used
  # or something to that effect, in which case it will return non-zero.
  # So we'll try it (suppressing stderr), and if it fails we'll just try again.
  codesign --remove-signature --deep "$@" 2>/dev/null ||
    recursively_repeat_remove_signature "$@"

  # Unfortunately, the other very common way it fails is to do something? that
  # hugely increases the binary size(s) by a seemingly arbitrary amount and
  # then exits 0. `codesign -v` will tell you that there's no signature, but
  # there are other telltale signs it's not completely removed. For example,
  # if you try stripping an executable after this, you'll get something like
  #   strip: changes being made to the file will invalidate the code signature
  # So, the solution (well, my solution) is to do a file size check; once
  # we're finally getting the same result, we've probably been successful.
  # We could of course also use checksums, but it's much faster this way.
  [ "$size_total" -eq "$(total_size "$@")" ] ||
    recursively_repeat_remove_signature "$@"

  # Finally, remove any leftover _CodeSignature directories.
  find "$@" -type d -name _CodeSignature -print0 | xargs -0n1 rm -rf
}

signature_info(){
  # Get some info on code signatures. Not really required for anything here.
  for info in "-dr-" "-vv"; do codesign $info "$@"; done # "-dvvvv"
}

# If we want to be "verbose", check the signature before. Un/comment out:
# echo >&2; echo "Current Signature State:" >&2; echo >&2; signature_info "$@"

# So we first remove any extended attributes and/or ACLs (which are common,
# and tend to interfere with the process here), then run our repeat scheme.
xattr -rc "$@" && chmod -RN "$@" && recursively_repeat_remove_signature "$@"

# Done!
# That's it; at this point, the executable or bundle(s) should successfully
# have truly become stripped of any code-signing. To test, one could
# try re-signing with an ad-hoc signature, then removing it again:
# (un/comment out below, as you see fit)
# echo >&2 && echo "Testing..." >&2; codesign -vvvvs - "$@" &&
#   signature_info "$@" && recursively_repeat_remove_signature "$@"

# And of course, while it sometimes returns false positives, let's at least:
codesign -dvvvv "$@" || echo "Signature successfully removed!" >&2 && exit 0
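Usage is simply the script plus one or more bundles or binaries, e.g. (hypothetical path):

./codesign_remove_for_real foo.app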
Here's the source for codesign, which lists all options, including those not covered by the command-line -h output and the man page.
Also, here is Apple's tech note on recent changes in how code-signing works
I agree that there's something strange going on when you did --remove-signature.
However, instead of trying to un-code-sign, you should change the way your users put extra files into Resources. Instead, designate a certain path, usually
~/Library/Application Support/Name_Of_Your_App/
or maybe
~/Library/Application Support/Name_Of_Your_App/Resources/
and ask the user to put extra files there. Then, in your code, always check that directory in addition to the files in Resources when you need to read a file.
On a second reading of this question, another thought: perhaps a better approach to the question's ultimate goal would be not to remove the signatures, but to have users (via a script, transparently) re-sign the app after modification, using an ad-hoc signature. That is, codesign -fs - [app], I believe. See https://apple.stackexchange.com/questions/105588/anyone-with-experience-in-hacking-the-codesigning-on-os-x
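Spelled out, that re-signing step might look like the sketch below (the app path is illustrative; --force replaces the broken signature, and --deep also re-signs nested code):

codesign --force --deep --sign - "/Applications/MyApp.app"
codesign --verify --verbose "/Applications/MyApp.app"   # sanity check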

Compiling historical information (esp. SLOCs) about a project

I am looking for a tool that will help me to compile a history of certain code metrics for a given project.
The project is stored inside a mercurial repository and has about a hundred revisions. I am looking for something that:
checks out each revision
computes the metrics and stores them somewhere with an identifier of the revision
does the same with the next revisions
For a start, counting SLOCs would be sufficient, but it would also be nice to analyze # of Tests,TestCoverage etc.
I know such things are usually handled by a CI server; however, I am solo on this project and thus haven't bothered to set up a CI server (I'd like to use TeamCity, but I really didn't see the benefit of doing so in the beginning). If I set up my CI server now, could it handle that?
Following jitter's suggestion, I have written a small bash script, run inside Cygwin, that uses sloccount for counting the source lines. The output is simply dumped to a text file:
#!/bin/bash
COUNT=0       # start revision
STOPATREV=98  # last revision (note: no spaces around '=' in bash)
until [ $COUNT -gt $STOPATREV ]; do
    hg update -C -r $COUNT >> sloc.log  # update and log
    echo "" >> sloc.log                 # echo a newline
    rm -r lib                           # don't count the lib folder
    sloccount /thisIsTheSourcePath | print_sum  # print_sum: user-defined filter (not shown)
    let COUNT=COUNT+1
done
You could write e.g. a shell script which

1. checks out the first version
2. runs sloccount on it (saving the output)
3. checks out the next version
4. repeats steps 2-3
Or look into ohloh, which seems to have Mercurial support by now.
Otherwise I don't know of any SCM statistics tool which supports Mercurial. As Mercurial is relatively young (around since 2005), it might take some time until such "secondary use cases" are supported. (HINT: maybe provide an hgstat library yourself, as there are for svn and cvs.)
If it were me writing software to do that kind of thing, I think I'd dump the metrics results for the project into a single file and revision-control that. Then the "historical analysis" tool would only have to pull out old versions of that one file, rather than having to pull out every old copy of the entire repository and rerun all the tests every time.
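A hedged sketch of that approach (assuming Mercurial and sloccount; the file name is illustrative): refresh one tracked metrics file on each run and commit it, so historical queries reduce to hg cat.

sloccount src > metrics.txt
hg commit -m "Update SLOC metrics" metrics.txt
# Later, to read the metrics as of revision 42:
hg cat -r 42 metrics.txt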
