For every project I create, I have to do export GOPATH={path_to_project} every time I cd into the project dir. There has to be an easier way. Isn't there some way I can create a .bashrc or .bash_profile file for a given directory to define the GOPATH for that project?
For example, I have two go projects A and B. If I have a singular GOPATH that isn't redefined when I move between projects, then binaries for both projects will be stored in the same place. More importantly, binaries for third party libraries will be stored in the same place, so I have no way of maintaining multiple versions of the same library on a per project basis.
However, if I am able to define GOPATH on a per project basis, then all binaries and third party libraries are project dependent. This seems to be the common way of handling package management in most other language environments (Ruby's rbenv, Python's virtualenv, etc.)
(Q2 2018: note that with the vgo (now "modules") project, GOPATH might end up being deprecated in favor of a project-based workflow, which would avoid the manual per-project GOPATH I proposed below two years ago.)
With Go 1.11 (August 2018), GOPATH becomes optional once you use modules.
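A minimal sketch of the module workflow (the project and module paths are illustrative):

cd ~/projects/A              # can live anywhere, outside any GOPATH
go mod init example.com/A    # creates go.mod; the module path is an example
go build                     # dependencies are fetched and versioned per module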
A similar idea is expressed in Manage multiple GOPATH dirs with ease by Herbert Fischer (hgfischer), for a Linux/Unix environment (based on the question already mentioned in the comments above):
Just include the following snippet in your ~/.bashrc (or ~/.bash_profile) and reload your shell environment with source ~/.bashrc.
This snippet creates a shell function that overrides the builtin command cd with a customized one that scans the entered directory, and every directory above it, for a file named .gopath.
cd () {
    builtin cd "$@"
    cdir=$PWD
    # walk up from the new directory looking for a .gopath marker file
    while [ "$cdir" != "/" ]; do
        if [ -e "$cdir/.gopath" ]; then
            export GOPATH=$cdir
            break
        fi
        cdir=$(dirname "$cdir")
    done
}
Now you just need to create a .gopath file in every directory you want as your GOPATH, and every time you enter such a directory, the redefined cd function will set the GOPATH of your current environment to that directory.
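For example, with two projects (the paths are illustrative):

$ touch ~/projects/A/.gopath ~/projects/B/.gopath
$ cd ~/projects/A && echo "$GOPATH"    # -> /home/you/projects/A
$ cd ~/projects/B && echo "$GOPATH"    # -> /home/you/projects/B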
Update 2017: if you don't want to modify your environment, you can still use one GOPATH per project by opening the src folder of that project in Visual Studio Code (vscode, a multi-platform IDE), combined with the extension "Go for Visual Studio Code".
In that IDE, you can:
keep your single global GOPATH in a setting called go.toolsGopath;
infer your current GOPATH with a setting called go.inferGopath.
That way, VSCode will install a collection of tools in your global GOPATH (for you to use outside VSCode as well).
See "Go tools that the Go extension depends on": godep, golint, guru, godoc, ...
Meanwhile, the GOPATH for your project will be the parent folder of src.
That works when you compile/install your project from the IDE.
If you want to do it from the command line, the original answer above would still apply.
You can use a tool like autoenv to set up a script that is automatically executed when you cd into a particular directory.
For your purposes, an example /happy/go/path/yay/.env file might look like:
export GOPATH="/happy/go/path/yay"
export PATH="$GOPATH/bin:$PATH"
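Once autoenv is hooked into your shell, entering the directory applies the file automatically; a session sketch:

$ cd /happy/go/path/yay    # autoenv sources .env on entry
$ echo "$GOPATH"
/happy/go/path/yay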
I would write a script which can infer the proper GOPATH from the current directory, and then alias the go command to first call this script. For example, a very simple implementation:
#!/bin/bash
# infer-gopath.sh
pwd
And then, in .bash_aliases (or wherever you keep your aliases):
alias go='GOPATH=$(infer-gopath.sh) go'
This sets GOPATH to whatever infer-gopath.sh outputs just for the invocation of the go command, so it won't have any lasting effect on your shell.
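If your projects keep the conventional src/pkg/bin layout, a slightly smarter infer-gopath.sh (my own sketch, extending the idea above) could walk upward until it finds a directory containing src:

#!/bin/bash
# infer-gopath.sh: print the nearest ancestor containing a src/ directory,
# falling back to the current directory
dir=$PWD
while [ "$dir" != "/" ]; do
    if [ -d "$dir/src" ]; then
        echo "$dir"
        exit 0
    fi
    dir=$(dirname "$dir")
done
pwd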
I know this is not very clever, but I find that if I simply go to the base directory of the Go project, where I have the src, pkg and bin folders, I can simply type:
export GOPATH=$(pwd)
and that's it, all good!
My impression is that the go tool actively discourages "maintaining multiple versions of the same library on a per project basis" for the precise reason that experience has shown that that strategy doesn't work on large codebases (such as Google's). There has been quite a lot of discussion about package versioning on golang-nuts: (search the list), and it seems that the discussion is still open, as indicated by Ian Lance Taylor in this June 6, 2013 interview (search for the word "versioning").
The go packaging system is designed to allow every project to have its own directory structure; the only restriction is that they all need to be children of (some) directory in GOPATH. This has the advantage that it interacts well with version control systems, as long as the VCS master always builds. In the blog interview referenced above, ILT suggests:
What we do internally is take a snapshot of the imported code, and update that snapshot from time to time. That way, our code base won't break unexpectedly if the API changes.
Substituting "my other libraries" for "the imported code", that seems like a possibility; you could have two go directories, production and development; for development, you could put the development directory first in the path so that development binaries and libraries don't pollute the production directories. I don't know if that is sufficient.
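GOPATH accepts a colon-separated list of directories, and go get installs into the first entry, so the two-tree setup might look like this (directory names are illustrative):

# development tree first: go get installs here, and its binaries win
export GOPATH="$HOME/go-dev:$HOME/go-prod"
export PATH="$HOME/go-dev/bin:$HOME/go-prod/bin:$PATH"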
If you really want to have a separate GOPATH for each project, I'd suggest the following:
1) Make every project's GOPATH end at a directory named go (or some such)
2) Deduce the GOPATH using something like the following shell function (almost totally untested):
gopath() {
    GOPATH="$(
        # walk up (inside the substitution's subshell) to a directory named "go"
        while [[ $PWD != / && $(basename "$PWD") != go ]]; do
            cd ..
        done
        if [[ $PWD == / ]]; then
            echo "$GOPATH"    # no "go" ancestor found; keep the old value
        else
            echo "$PWD"
        fi
    )" go "$@"
}
Then you can use gopath instead of go as long as your current working directory is somewhere inside the project's repository. (More sophisticated possibilities might include using the explicitly provided project path, if any, to deduce GOPATH.)
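A quick usage sketch (paths illustrative):

$ cd ~/projects/A/go/src/mytool
$ gopath build    # runs `go build` with GOPATH=~/projects/A/go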
I can't comment, but to build off the answer from @joshlf:
Alias go, prepending the GOPATH; my method doesn't require you to deal with or create an extra file:
alias go='GOPATH=$(pwd) go'
cheers
I like the guts of the gopath() answer above, but I don't like (a) having the GOPATH set only for the go command (I use it in vim plugins, etc), nor do I like (b) having to remember to type gopath instead of go :)
This is what I ultimately added to my ~/.bash_profile:
export GOPATH="... a default path ..."
function cd() {
    builtin cd "$@" &&
    export GOPATH="$(
        # walk up (inside the substitution's subshell) to a directory named "go";
        # use builtin cd so we don't recurse into this wrapper
        while [[ $PWD != / && $(basename "$PWD") != go ]]; do
            builtin cd ..
        done
        if [[ $PWD == / ]]; then
            echo "$GOPATH"    # no "go" ancestor found; keep the previous value
        else
            echo "$PWD"
        fi
    )"
}
(I also wrote up the above with a little extra discussion of requirements in this blog post)
I'm a Golang newb, but the best way to do this would probably be to create a shell script in each of your projects to build/run your project, and put this script in your project root :)
#!/usr/bin/env bash

# get the absolute path to the directory which contains this script
PROJECT_DIR=$(cd "$(dirname "$0")" && pwd)

# set GOPATH as you wish, then run go build
# (export it so the go tool actually sees it)
export GOPATH=${GOPATH}:${PROJECT_DIR}
cd "${PROJECT_DIR}" && go build
This script should work even if you execute it from a directory that is not the project root.
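For instance, if the script is saved as build.sh in the project root (the name is just an example):

$ cd /tmp
$ /path/to/project/build.sh    # still resolves and builds the project root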
Related
I'm studying shell scripting and I know that there is a way to skip "./" when you need to execute a shell script.
For example: after I made a script like this:
echo "hello world!"
I use the command "chmod +x" to make it executable. But to execute it in my terminal I need to type:
./helloworld.sh
I know that there is a way to skip this "./" if you're using bash: you go to .bashrc and write at the end of the file "PATH=PATH:.".
But since I'm using macOS, which uses zsh, I tried putting "PATH=PATH:." at the end of my .zshrc, but this didn't work.
So I would like to know if there is a way to remove the need for "./" for every shell script that I need to run.
Thank you people
P.S.: I have brew and ohmyzsh installed in my machine
What's Wrong with Your Code
The reason your example doesn't work is because PATH is a variable, and you need to expand it by prefixing it with the dollar sign to access its value, e.g. PATH=$PATH:. rather than just PATH=PATH:.. However, there are some other considerations too.
Prepending, Appending, and Exporting PATH
It's generally not recommended to treat your current working directory as part of your PATH for security reasons, but you can do it in any Bourne-like shell by prepending or appending . (which means any current working directory) to your PATH. Depending on where you call it and how you've initialized your shell, you may also need to export the PATH variable to your environment.
Some examples include:
# find the named executable in the current working directory first
export PATH=".:$PATH"
# find the named executable in the current working directory
# only if it isn't found elsewhere in your PATH
export PATH="$PATH:."
# append only the working directory you're currently in when you
# update the PATH variable, rather than *any* current working
# directory, to your PATH
export PATH="$PATH:$PWD"
Note that it's also generally a good idea to quote your variables, since spaces or other characters may cause problems when unquoted. For example, PATH=/tmp/foo bar:$PATH will either not work at all or not work as expected. So, wrap it up for safety (with quotes)!
Use Direnv for Project-Based PATH Changes
You might also consider using a utility like direnv that would enable you to add the current working directory to your PATH when you enter known-safe directories, and remove it from the PATH when leaving the directory. This is commonly used for development projects, including shell scripting ones.
For example, you could create the following ~/dev/foo/.envrc file that would prepend the current working directory to your PATH only while you're in ~/dev/foo, and remove it again when you leave the directory tree containing the .envrc:
# ~/dev/foo/.envrc
#
# prepend "$HOME/dev/foo:$HOME/dev/foo/bin" to your
# existing PATH when entering your project directory,
# and remove it from your PATH when you exit from the
# project
PATH_add "$PWD/bin"
PATH_add "$PWD"
Because direnv uses whitelisting and ensures that your items are prepended to PATH, it's often a safer and less error-prone way to manage project-specific modifications to your PATH or other environment variables.
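Note that direnv must be hooked into your shell, and each new or changed .envrc has to be approved before it loads; these are direnv's standard commands:

# hook direnv into your shell once, e.g. at the end of ~/.zshrc:
eval "$(direnv hook zsh)"

# then approve each .envrc after creating or editing it:
direnv allow ~/dev/foo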
Use "$HOME/bin" to Consolidate Scripts
Another option is to create a directory for shell scripts in your home directory, and add that to your PATH. Then, any script placed there will be accessible from anywhere on your filesystem. For example:
# add this line to .profile, .zprofile, .zshrc, or
# whatever login script your particular terminal calls;
# on macOS, this should usually be .zprofile, but YMMV
export PATH="$HOME/bin:$PATH"
# make a ~/bin directory and place your
# executables there
mkdir -p ~/bin
cp helloworld.sh ~/bin/
Assuming the shell scripts in your bin directory already have their executable bit set (e.g. chmod 755 ~/bin/*sh), you can run any shell script in that directory from anywhere on your filesystem.
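After that, a session from any directory looks like this:

$ cd /tmp
$ helloworld.sh
hello world!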
You need PATH=$PATH:. -- the $ to expand the (old) value of PATH is important.
I'm running IntelliJ 2018.3 on Windows 7, as well as openSUSE Leap 15.
Under Windows 7, I've configured IntelliJ to use Git Bash, i.e., in Settings, under Tools -> Terminal, I'm setting Shell path to:
C:\Program Files (x86)\Git_2.17.1\bin\bash.exe
One of IntelliJ's new features is the ability to save and reload terminal sessions (see this link).
It works perfectly on openSUSE; however, on Windows, while the terminal tab names are correctly restored, I always end up with a new shell.
Is there a way to make IntelliJ and Git Bash play well together so that I can retain the current working directory and shell history after restarting IntelliJ?
You can try to set up your Git for Windows bash to remember the last used path for you, as seen in "How can I open a new terminal in the same directory of the last used one from a window manager keybind?"
For instance:
So instead of storing the path at every invocation of cd, the last path can be saved at exit.
My ~/.bash_logout is very simple:
echo $PWD >~/.lastdir
And somewhere in my .bashrc I placed this line:
[ -r ~/.lastdir ] && cd $(<~/.lastdir)
That does not depend on IntelliJ IDEA directly, but on the underlying bash setup (here the Git for Windows bash referenced and used by IntelliJ IDEA).
Here's a possible workaround. It was heavily inspired by VonC's answer, as well as other answers to the question that he mentioned.
~/.bashrc
if [[ -v __INTELLIJ_COMMAND_HISTFILE__ ]]; then
__INTELLIJ_SESSION_LASTDIR__="$(cygpath -u "${__INTELLIJ_COMMAND_HISTFILE__%history*}lastdir${__INTELLIJ_COMMAND_HISTFILE__##*history}")"
# save path on cd
function cd {
    builtin cd "$@"
    pwd > "$__INTELLIJ_SESSION_LASTDIR__"
}
# restore the last saved path
[ -r "$__INTELLIJ_SESSION_LASTDIR__" ] && cd "$(<"$__INTELLIJ_SESSION_LASTDIR__")"
fi
I don't like that I had to wrap the cd command; however, Git Bash does not execute ~/.bash_logout unless I explicitly call exit or logout, so the .bash_logout variant is unfortunately inadequate for this scenario.
The workaround above also leaves small junk files inside the parent dir of __INTELLIJ_COMMAND_HISTFILE__; I couldn't do any better.
Additionally, I've opened a ticket in JetBrains' issue tracker. There are many different shells that could benefit from official support. It would be great if JetBrains could eventually support PowerShell and popular terminals like Windows Subsystem for Linux, Cygwin, and Git Bash. The only shell that currently works out of the box for me is cmd.
I have been putting together a makefile in a Windows environment for my team to use. I decided to use MinGW's version of make for Windows. I put that executable with its dependencies into a repository location that should be in everyone's PATH variable. The executable was renamed "make.exe" for simplicity.
Then I realized that I have to account for the case when someone has cygwin's bin folder in their path. Commands like echo, rmdir, and mkdir will call echo.exe, rmdir.exe, and mkdir.exe from cygwin's bin folder. This means that I need to appropriately catch this scenario and use different flags for each command.
I see three cases here:
Cygwin's bin path comes before the path where make.exe is located in the repository. When a team member executes make.exe, they will be executing cygwin's make. Unix-style commands must be used.
Cygwin's bin path comes after the path where make.exe is located in the repository. The correct make.exe will be executed, but I still have to use Unix-style commands.
Cygwin is not installed or not in the PATH. I can use all Windows commands in this case.
I am fine with treating cases 1 and 2 the same. Since MinGW's make and Cygwin's make are both based on GNU Make, I don't see this being much of an issue beyond incompatibilities between versions of GNU Make. Let's just assume that isn't a problem for now.
I have come up with the following check in my makefile.
ifneq (,$(findstring cygdrive,$(PATH))$(findstring cygwin,$(PATH))$(findstring Cygwin,$(PATH)))
#Use Unix style command variables
else
#Use Windows style command variables
endif
Finding "cygdrive" in the path variable means that we are most likely in case 1. Finding "cygwin" or "Cygwin" in the path variable most likely means that we are in case 2. Not finding either string in the path most likely means that we are in case 3.
I am not completely fond of this solution because Cygwin's folder can be renamed, or the string "cygwin" or "cygdrive" can be put in the PATH variable without Cygwin being installed. One team member is still having issues: he has Cygwin's bin path in his PATH variable, but the check above does not catch it. I am assuming that he renamed the folder, but I haven't been able to verify that.
So is there a better way to figure out what syntax that I should be using?
Here is another solution that I thought up.
ifeq (a,$(shell echo "a"))
#Use Unix style command variables
else
#Use Windows style command variables
endif
This is based on the fact that echo "a" under a Unix shell will print a (without quotes), but Windows's cmd.exe will print "a" (with the quotes). If I am using the Unix-style echo, then I can assume that I am using all Unix commands.
I don't find this solution very elegant, so I am not marking it as the answer to this question; I do think it is better than what I originally had, though.
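For reference, here is the behavioral difference the check relies on (a transcript sketch):

# under a POSIX shell (Cygwin's /bin/sh):
$ echo "a"
a

# under cmd.exe (what GNU Make uses when no POSIX shell is found):
C:\> echo "a"
"a"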
Cygwin make vs. MinGW make: does MinGW make support the jobserver, i.e., can you do make -j5? If not, ${.FEATURES} contains jobserver for Cygwin's make. Maybe load is a good test too.
Cygwin before non-Cygwin on PATH: cygpath.exe is unique to Cygwin, so you could just look for it in ${PATH}. Unfortunately, Windows users like using spaces in folder names, and there's no way of dealing with that in pure make. $(shell which make) will return /usr/bin/make for Cygwin, though a shell invocation on every make run is very smelly.
You don't install a compiler from a repository; isn't make a similar case? Just get your users to install Cygwin and be done with it.
At my work I do not have access to the vimrc file (yet?), so I was wondering if it is possible to make and run a script of vim commands to quickly get my vi workstation up and running.
I.e., all of the :set blahblah commands and whatnot.
Thank you!
Do you have your own /home/lilsheep/ directory? If yes, just put all your settings in ~/.vimrc and your plugins in ~/.vim/.
If you can't create that file and directory but are able to write somewhere on your machine, you can start vim with your own vimrc:
$ vim -u /path/to/your/vimrc
If you want to load your own plugins from your own vimruntime/ directory, place this line in the vimrc above:
set runtimepath+=/path/to/your/vimruntime
Be sure to add these two lines to your vimrc in order to reset any and all options set by other vimrcs and start in nocompatible "mode":
set all&
set nocompatible
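If you use this setup daily, a shell alias saves retyping the flag (a sketch; the vimrc path is yours to fill in):

# e.g. in your shell startup file
alias vim='vim -u /path/to/your/vimrc'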
You're looking for :source your-script.vim, I think?
:help source
I know what you mean/need, and I do have my own solution; not the best, but it works. :)
This is a more full-fledged solution, but I think it's better than having Vim do everything through VimL and stuff like that. ;)
Basically, what you need is a dotfiles folder and a dotfiles repo. I'm not gonna teach you how to make a dotfiles repo; you can find plenty of guides for that on Google.
Next, what you wanna do is have Git on every new machine; that's actually the only prerequisite for this method. Besides having the software, of course. :)
And then, all you gotta do is the following:
# I'll be using my own dotfiles directory for these examples
# Clone my dotfiles repo to the "~/dotfiles" directory
$ git clone git@github.com:Greduan/dotfiles.git ~/dotfiles
# Run my bootstrap file
$ sh ~/dotfiles/bootstrap.sh
OK, so that's easy enough, but what does the bootstrap.sh file do? It does a couple of things: mainly it runs other scripts that do other things, but it also sets my preferred default shell, for example. Here's a quick excerpt:
kernel=`uname -s` # Current Kernel name
time=`date +%H:%M` # Output current time
bootstrap=1 # Now scripts know they've been called by bootstrap file
echo "I'm gonna do some stuff for you, OK? I started doing stuff at [$time]..."
if [ "$kernel" = "Darwin" ]; then
# Set some OSX options
source ./scripts/set_osx_defaults.sh
fi
# Make the necessary symlinks
source ./scripts/make_symlinks.sh
# Install some Homebrew stuff
source ./scripts/brew_installs.sh
# Setup Vim for first use
source ./scripts/setup_vim.sh
# Set the default shell
chsh -s /bin/zsh
sudo chsh -s /bin/zsh
# Install all submodules
git submodule update --init
echo "All right! I finished! It's [$time] right now."
So you see what it does? All of these scripts do different things so that everything isn't packed into one single file, creating confusion. The files I want to direct your attention to are these three:
# Make the necessary symlinks
source ./scripts/make_symlinks.sh
# Install some Homebrew stuff
source ./scripts/brew_installs.sh
# Setup Vim for first use
source ./scripts/setup_vim.sh
Let me explain what each one does...
make_symlinks.sh does just that: it runs a bunch of ln -sfn -v commands on all the dotfiles I care about; in other words, it configures all the Unix stuff I use before I even use it, making it very easy to run that file and be set to go on a brand-new machine (it also makes backups of old dotfiles [COMING SOON]).
brew_installs.sh is also pretty easy to guess: it runs a bunch of carefully chosen brew install commands, or any commands related to Homebrew really, setting up my Unix environment with the tools I prefer over others.
setup_vim.sh is probably the most interesting to you... It just makes a symlink to a Python thing to silence some errors, installs Vundle, and installs all of Vundle's bundles.
Here's the URL to my dotfiles repo in case you need more references or ideas: https://github.com/Greduan/dotfiles
I hope this answers your question and gives you enough inspiration to come up with your own solution. :)
I have several compilers with the same name but of different versions or location.
The ./configure script seems to stop at the first one it finds in PATH. How can I tell Autoconf to choose one of them according to a custom rule? I already have a macro which can check the compiler version.
I would like to avoid setting the path by hand (with the FC variable), as it can be cumbersome to type the whole path each time.
In my case, several MPI wrapper compilers are located in different directories with the same name (and added to the PATH by the user).
The idea would be to use something like ./configure --with-intel to find and select the IntelMPI compiler.
My solution is to set up CC and other "precious" variables via shell scripts, lots of them for cross compilation. So I have a bunch of scripts sitting around with contents like:
export CROSS_COMPILE=arm-linux
export CC=${CROSS_COMPILE}-gcc
...
PATH=$PATH:/some/arm/compiler/bin:/some/arm/compiler/usr/bin # for arm compiler tools
export CFLAGS="..."
to set up the configure configuration. So at configure time I do:
source /path/to/configuration/some-arm-compiler.sh
./configure ...
It saves a lot of typing.
EDIT: For your particular case it could work with something like:
mpi-a.sh
export FC=mpif90
PATH=$PATH:/path/to/mpi-a/bin:/path/to/mpi-a/usr/bin
mpi-b.sh
export FC=mpif90
PATH=$PATH:/path/to/mpi-b/bin:/path/to/mpi-b/usr/bin
So for compiling with one of them:
source /path/to/mpi-a.sh
./configure ...
and the other:
source /path/to/mpi-b.sh
./configure ...
My solution: copy the search strategy of configure into a macro, with a custom matching criterion. Parsing PATH is done by setting the IFS variable (which is already defined in configure). In Bash, finding all the executables would look something like this:
#!/bin/bash
# split PATH on colons and test every candidate executable
IFS=":"
exe="mpif90"
for dir in $PATH
do
    if test -f "$dir/$exe"; then
        custom_test "$dir/$exe"    # your version-checking test goes here
    fi
done
Note: this is recommended in the Autoconf manual:
If you need to check the behavior of a program as well as find out whether it is present, you have to write your own test for it.
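To wire this into a ./configure --with-intel flag as the question suggests, a hedged configure.ac sketch might look like this (custom_test stands in for your existing version-checking macro; the body is plain shell, which configure.ac accepts):

AC_ARG_WITH([intel],
  [AS_HELP_STRING([--with-intel], [select the IntelMPI wrapper compiler])])

if test "x$with_intel" = xyes; then
  save_IFS=$IFS; IFS=:
  for dir in $PATH; do
    # custom_test: stand-in for your own version/vendor check
    if test -f "$dir/mpif90" && custom_test "$dir/mpif90"; then
      FC=$dir/mpif90
      break
    fi
  done
  IFS=$save_IFS
fi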