How to list installed Go packages

To my knowledge the Go distribution comes with some sort of package manager. After installing Go 1.4.1 I ran go help looking for a sub-command capable of listing locally installed Go packages, but unfortunately found none.
So how do I do it?

goinstall is now history
goinstall was replaced by go get. go get is used to manage external / 3rd-party libraries (e.g. to download, update, and install them).
Type go help get to see command line help, or check out these pages:
Command go
About the go command (blog post)
If you want to list installed packages, you can do that with the go list command:
Listing Packages
To list packages in your workspace, go to your workspace folder and run this command:
go list ./...
./ tells it to start from the current folder, and ... tells it to descend recursively. Of course this works in any other folder, not just in your Go workspace (but usually that is what you're interested in).
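For example, in a hypothetical workspace containing a single hello project with one sub-package (the package paths below are illustrative, not from the original post), the output might look like:
github.com/you/hello
github.com/you/hello/util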
List All Packages
Executing
go list ...
in any folder lists all packages: standard library packages first, followed by the external libraries in your Go workspace.
Packages and their Dependencies
If you also want to see the packages imported by each package, you can try this custom format:
go list -f "{{.ImportPath}} {{.Imports}}" ./...
-f specifies an alternate format for the list, using the syntax of package template. The struct whose fields can be referenced is documented by the go help list command.
If you want to see all dependencies transitively (the dependencies of imported packages, recursively), you can use this custom format:
go list -f "{{.ImportPath}} {{.Deps}}" ./...
But usually this is a long list, and just the direct imports ("{{.Imports}}") of each package are what you want.
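To illustrate, for a hypothetical package the {{.Imports}} format might print a line like this (names are made up):
github.com/you/hello [fmt github.com/you/hello/util]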
Also see related question: What's the Go (mod) equivalent of npm-outdated?

Start the Go documentation server:
godoc -http=:6060
Visit http://localhost:6060/pkg
There you will find a list of all your packages.
When you install new ones, they do not appear automatically; you need to restart godoc.

go list ... is quite useful, but there were two possible issues with it for me:
It lists all packages, including standard library packages. There is no way to get only the explicitly installed packages (which I assume is the more interesting inquiry).
A lot of the time I need only the packages used in my projects (i.e. those listed in the respective go.mod files), and I don't care about other packages lying around (which may have been installed just to try them out). go list ... doesn't help with that.
So here's a somewhat different take. Assuming all projects are under ~/work (a sample run follows the notes below):
find ~/work -type f -name go.mod \
-exec sed $'/^require ($/,/^)$/!d; /^require ($/d; /^)$/d; /\\/\\/ indirect$/d; s/^\t*//' {} \; \
| cut -d' ' -f1 \
| sort | uniq
A line by line explanation:
find all go.mod files
apply sed to each file to filter its content as follows (explained expression by expression):
extract just the require( ... ) chunks
remove the require( and ) lines, so just lines with packages remain
remove all indirect packages
remove leading tabs 1)
extract just the qualified package name (drop version information)
remove duplicate package names
1) Note the sed expression uses bash $'...' quoting so the TAB character can be written as "\t" for readability instead of a literal TAB.
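To illustrate, a sample run against a single hypothetical go.mod (module path and requirements are made up; the require lines are tab-indented, as gofmt writes them):
cat ~/work/hello/go.mod
module github.com/you/hello

require (
	github.com/pkg/errors v0.9.1
	golang.org/x/text v0.3.7 // indirect
)
The pipeline keeps only the direct requirement and drops its version:
github.com/pkg/errors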

On *nix systems (possibly on Windows with bash tools like msysgit or cmder), to see what packages I have installed, I can run:
history | grep "go get"
But that's messy output. For whatever reason I decided to see if I could clean that output up a little, so I made an alias for this command:
history | grep 'go get' | grep -v ' history ' | sed -e $'s/go get /\\\\\ngo get /g' | grep 'go get ' | sed -e $'s/-u //g' | sed -e $'s/-v //g' | sed -e $'s/ &&//g' | grep -v '\\\n' | egrep 'get [a-z]' | sed -e $'s/go get //g' | sed -e $'s/ //g' | sort -u
Please don't ask why I did this. Challenge, maybe? Let me explain the parts:
history the history
grep "go get" grep over history and only show lines where we went and got something
grep -v " history " and remove times when we have searched for "got get" in history
sed -e $'s/go get /\\\\\ngo get /g' Now we take any instances of "go get " and shove a new line in front of it. Now they're all at the beginning.
grep "go get " filter only lines that now start with "go get"
sed -e $'s/-u //g' and sed -e $'s/-v //g' remove flags we have searched for. You could possibly leave them in but may get duplicates when output is done.
sed -e $'s/ &&//g' some times we install with multiple commands using '&&' so lets remove them from the ends of the line.
grep -v "\\\n" my output had other lines with newlines printed I didnt need. So this got rid of them
egrep "get [a-z]" make sure to get properly formatted go package urls only.
sed -e $'s/go get //g' remove the "go get " text
sed -e $'s/ //g' strip any whitespace (needed to filter out duplicates)
sort -u now sort the remaining lines and remove duplicates.
This is totally untested on other systems. Again, I am quite sure there is a cleaner way to do this; I just thought it would be fun to try.
It would also probably be more fun to make a go ls command to show the actual packages you explicitly installed, but that's a lot more work, especially since I'm still only learning Go.
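For reference, a minimal sketch of wiring the pipeline up as the gols command used below. The original post defines it as an alias; a bash function (e.g. in ~/.bashrc) sidesteps the quoting headaches, with the function body being the pipeline verbatim:
gols() {
history | grep 'go get' | grep -v ' history ' | sed -e $'s/go get /\\\\\ngo get /g' | grep 'go get ' | sed -e $'s/-u //g' | sed -e $'s/-v //g' | sed -e $'s/ &&//g' | grep -v '\\\n' | egrep 'get [a-z]' | sed -e $'s/go get //g' | sed -e $'s/ //g' | sort -u
}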
Output:
> gols
code.google.com/p/go.crypto/bcrypt
github.com/golang/lint/golint
github.com/kishorevaishnav/revelgen
github.com/lukehoban/go-find-references
github.com/lukehoban/go-outline
github.com/newhook/go-symbols
github.com/nsf/gocode
github.com/revel/cmd/revel
github.com/revel/revel
github.com/rogpeppe/godef
github.com/tpng/gopkgs
golang.org/x/tools/cmd/goimports
golang.org/x/tools/cmd/gorename
gopkg.in/gorp.v1
sourcegraph.com/sqs/goreturns

Related

How can I scrape data from reddit (in bash)

I want to scrape titles and dates from http://www.reddit.com/r/movies.json in bash
wget -q -O - "http://www.reddit.com/r/movies.json" | grep -Po '(?<="title": ").*?(?=",)' | sed 's/\\"/"/g'
I have titles but I don't know how to add dates, can someone help?
As the extension suggests, it is a JSON (application/json) file, therefore grep and sed are poorly suited for working with it, as they are mainly built around regular expressions. If you are allowed to install tools, jq should be handy here. Try using your system package manager to install it; if that succeeds you should get a pretty-printed version of movies.json by doing
wget -q -O - "http://www.reddit.com/r/movies.json" | jq .
and then find where the interesting values are placed, which should allow you to grab them. See the jq Cheat Sheet for examples of jq usage. If you are limited to already installed tools, I suggest taking a look at the json module of Python.
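A sketch of pulling both fields, assuming reddit's usual listing layout where each post lives under .data.children[].data (the field names are from that layout, not from the question):
wget -q -O - "http://www.reddit.com/r/movies.json" | jq -r '.data.children[].data | "\(.created_utc)\t\(.title)"'
Each output line then carries the post's creation time (a Unix timestamp) and its title, tab-separated.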

Dynamically pipe user input into a variable for ls

I have the following one-liner that I use to prettify the output of listing all my .desktop files.
ls -1 | sed -e 's/\.desktop$//' -e 's/^org\.gnome\.//' | grep "$name" | head -1
Currently I have a read command that pipes into that $name variable for input. Can you give me some ideas on how to make it dynamically output or autocomplete what I'm typing, like app launchers such as Rofi do?
You could try the peco utility. (It's also packaged for several distributions.)
From its github page:
peco can be a great tool to filter stuff like logs, process stats, find files, because unlike grep, you can type as you think and look through the current results.
The fzf "commandline fuzzy finder" utility (repos) is similar:
It's an interactive Unix filter for command-line that can be used with any list; files, command history, processes, hostnames, bookmarks, git commits, etc.
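For instance, a minimal sketch of swapping the read/grep combination for an interactive fuzzy picker (assuming fzf is installed):
name=$(ls -1 | sed -e 's/\.desktop$//' -e 's/^org\.gnome\.//' | fzf)
fzf narrows the candidate list live as you type, and the final selection lands in $name.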
There's a list of other options at: https://alternativeto.net/software/peco/

How to get the highest numbered link from curl result?

I have created a small program consisting of a couple of shell scripts that work together. It's almost finished
and everything seems to work fine, except for one thing which I'm not really sure how to do,
and which I need in order to finish this project.
There seem to be many routes that can be taken, but I just can't get there.
I have some curl results with lots of unused data including different links, and among all that data there is a bunch of similar links.
I only need to get (into a variable) the link with the highest number (without the always-same text).
The links are all similar, and have this structure:
<a href="https://always/same/link/same-name_17.html">always same text</a>
<a href="https://always/same/link/same-name_18.html">always same text</a>
<a href="https://always/same/link/same-name_19.html">always same text</a>
I was thinking about something like:
content="$(curl -s "$url/$param")"
# linksArray: somehow collect from $content all links whose href
# contains "always same text" (pseudocode, this is the missing piece)
declare -i highestnumber=0
for file in "${linksArray[@]}"
do
href=${file##*/}
fullname=${href%.html}
OIFS="$IFS"
IFS='_'
read -a nameparts <<< "${fullname}"
IFS="$OIFS"
if (( ${nameparts[1]} > highestnumber ))
then
highestnumber=${nameparts[1]}
fi
done
echo "${nameparts[0]}_${highestnumber}.html"
Desired result:
https://always/same/link/same-name_19.html
This was just my guess; any working code that can be run from a bash script is okay.
Thanks!
Update
I found this nice program; it is easily installed by:
# 64bit version
wget -O xidel/xidel_0.9-1_amd64.deb https://sourceforge.net/projects/videlibri/files/Xidel/Xidel%200.9/xidel_0.9-1_amd64.deb/download
apt-get -y install libopenssl
apt-get -y install libssl-dev
apt-get -y install libcrypto++9
dpkg -i xidel/xidel_0.9-1_amd64.deb
It looks awesome, but I'm not really sure how to tweak it to my needs.
Based on that link and the answer below, I guess a possible solution would be:
use xidel, or use sed -n 's/.*href="\([^"]*\)".*/\1/p' file as suggested in this link, but tweaked to get the links with their HTML tags, like:
<a href="https://always/same/link/same-name_17.html">always same text</a>
then filter out everything that doesn't end with ">always same text</a>",
and then use the grep/sort as mentioned below.
Continuing from the comment, you can use grep, sort and tail to isolate the highest number in your list of similar links without too much trouble. For example, if your list of links is as you have described (I've saved them in a file dat/links.txt for the purpose of the example), you can easily isolate the highest number in a variable:
Example List
$ cat dat/links.txt
<a href="https://always/same/link/same-name_17.html">always same text</a>
<a href="https://always/same/link/same-name_18.html">always same text</a>
<a href="https://always/same/link/same-name_19.html">always same text</a>
Parsing the Highest Numbered Link
$ myvar=$(grep -o 'https:.*[.]html' dat/links.txt | sort | tail -n1); \
echo "myvar : '$myvar'"
myvar : 'https://always/same/link/same-name_19.html'
(note: the command above is all one line, separated by the line-continuation '\')
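One caveat worth adding: plain sort compares lexicographically, so same-name_9.html would sort after same-name_19.html and win the tail -n1. If the numbers can differ in digit count, GNU sort's version sort handles it:
$ printf 'same-name_19.html\nsame-name_9.html\n' | sort -V
same-name_9.html
same-name_19.html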
Applying Directly to Results of curl
Whether your list is in a file, or returned by curl -s, you can apply the same approach to isolate the highest number link in the returned list. You can use process substitution with the curl command alone, or you can pipe the results to grep. E.g. as noted in my original comment,
$ myvar=$(grep -o 'https:.*[.]html' < <(curl -s "$url/$param") | sort | tail -n1); \
echo "myvar : '$myvar'"
or pipe the result of curl to grep,
$ myvar=$(curl -s "$url/$param" | grep -o 'https:.*[.]html' | sort | tail -n1); \
echo "myvar : '$myvar'"
(same line continuation note.)
Why not use Xidel with XQuery to sort the links and return the last one?
xidel -q links.txt --xquery '(for $i in //@href order by $i return $i)[last()]' --input-format xml
The --input-format parameter makes sure you don't need any HTML tags at the start and end of your txt file.
If I'm not mistaken, in the latest Xidel the -q (quiet) param is replaced by -s (silent).

Bash: using the output of one command in another

I have the following requirement: fetch all the commits from our SVN from the last two years and list the titles of all the JIRA issues that had code committed. Our commit rules are pretty strict, so a commit must start with the JIRA code, like: COR-3123 Fixed the bug, introduced a new one
So, I wrote the following shell script to get this working:
svn log -r{2012-04-01}:{2014-04-01} | grep "COR-" | cut -f1 -d" " | sort -u
This gets me all the JIRA codes.
But now I want to use these in the following command:
wget --quiet --load-cookies cookies.txt -O - http://jira.example.com/browse/{HERE} | sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
I.e.: get the JIRA page via wget and parse out the title... (I have already cached my login credentials for wget in cookies.txt)
and obviously at the location {HERE} I want to insert the codes obtained from the first list. Doing this via a two-step script (step 1: get the list; step 2: iterate over it) in python, perl, ... is not a problem, but I'd like to know if it's possible to do it in ONE step, using bash :)
(Yes, I know there is JIRA rest API)
You can use xargs to pass the parameter to wget:
xargs -I {} wget http://jira.example.com/browse/{}
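Putting it together, a sketch composed entirely from the commands above (the svn, wget and sed invocations are the asker's own):
svn log -r{2012-04-01}:{2014-04-01} | grep "COR-" | cut -f1 -d" " | sort -u \
| xargs -I {} wget --quiet --load-cookies cookies.txt -O - http://jira.example.com/browse/{} \
| sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'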

SVN: How to know in which revision a file was deleted?

Given that I'm using the svn command line on Windows, how do I find the revision number in which a file was deleted? On Windows there is no fancy stuff like grep, and I am attempting to use the command line only, without TortoiseSVN. Thanks in advance!
EDIT:
I saw a few posts, like examining history of deleted file but it did not answer my question.
Is there any way other than svn log -v url > log.out and searching with Notepad?
Install Cygwin.
I use this:
svn log -v --limit <nr> | grep -E '<fileName>|^r' | grep -B 1 <fileName>
where
fileName - the name of the file or any pattern which matches it
nr - the number of latest revisions in which I want to look for
This will give you the revisions for all the actions (add, delete, replace, modify) concerning the file, but with a simple tweak of the final grep you can get the revisions only for deletions (see the sketch after the note below).
(Obviously, --limit is optional, however you usually have an overview about how deep you need to search which gains you some performance.)
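For example, a sketch of that tweak: svn log -v prefixes each changed path with its action letter, so anchoring the final grep on D restricts the hits to deletions (the exact indentation before the action letter may vary by svn version):
svn log -v --limit <nr> | grep -E '<fileName>|^r' | grep -B 1 'D .*<fileName>'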
The log is the place to look. I know you don't want to hear that answer, but that is where you need to look for deleted files in SVN.
The reason for this is simply that a deleted file is not visible after it's been deleted. The only place to find out about its existence at all is either in the logs, or by fetching out an earlier revision prior to it being deleted.
The easiest way I know of to deal with this problem is to move away from the command line and use a GUI tool such as TortoiseSVN.
TortoiseSVN hooks itself into the standard Windows file Explorer, so it's very easy to use. In the context of answering this question, you would still use it to look at the logs, but it becomes a much quicker exercise:
Browse to the SVN folder you want to examine. Then right-click on the folder icon and select TortoiseSVN -> View Logs from the context menu.
You'll now get a window showing all the revisions made in that folder. In particular, it is easy to see which revisions have had additions and deletions, because the list includes a set of Action icons for each revision. You can double-click on a revision to get a list of files that were changed (or straight into a diff view if only one file was changed)
So you can easily see which revisions have had deletions, and you can quickly click them to find out which files were involved. It really is that easy.
I know you're asking about the command-line, but for administrative tasks like this, a GUI browser really does make sense. It makes it much quicker to see what's happening compared with trying to read through pages of cryptic text (no matter how well versed you are at reading that text).
This question was posted and answered some time ago.
In this answer I'll try to show a flexible way to get the information asked for, and to extend it.
In cygwin, use svn log in combination with awk
REPO_URL=https://<hostname>/path/to/repo
FILENAME=/path/to/file
svn log ${REPO_URL} -v --search "${FILENAME}" | \
awk -v var="^ [D] ${FILENAME}$" \
'/^r[0-9]+/{rev=$1}; \
$0 ~ var {print rev $0}'
svn log ${REPO_URL} -v --search "${FILENAME}" asks svn log for a verbose log containing ${FILENAME}. This reduces the data transfer.
The result is piped to awk. awk gets the search pattern passed via -v into the variable var: var="^ [D] ${FILENAME}$".
In the awk program /^r[0-9]+/ {rev=$1} assigns the revision number to rev if line matches /^r[0-9]+/.
For every line matching ^ [D] ${FILENAME}$ awk prints the stored revision number rev and the line: $0 ~ var {print rev $0}
If you're interested not only in the deletion of the file but also in its creation, modification and replacement, change the D in var="^ [D] ${FILENAME}$" to DAMR.
The following will give you all the changes:
svn log ${REPO_URL} -v --search "${FILENAME}" | \
awk -v var="^ [DAMR] ${FILENAME}$" \
'/^r[0-9]+/ {rev=$1}; \
$0 ~ var {print rev $0}'
And if you're interested in username, date and time:
svn log ${REPO_URL} -v --search "${FILENAME}" | \
awk -v var="^ [DAMR] ${FILENAME}$" \
'/^r[0-9]+/ {rev=$1;user=$3;date=$5;time=$6}; \
$0 ~ var {print rev " | " user " | " date " | " time " | " $0}'
