How to set path to kubectl when installed using gcloud components install? - shell

Ok, I installed kubectl in the following way on my Mac:
1) installed gcloud using homebrew
2) installed kubectl using gcloud components install.
I want to run a shell script that calls kubectl directly. However, I get an error.
$ kubectl version
-bash: kubectl: command not found
I expected gcloud components install to set the PATH so that I can call kubectl. Looks like that has not happened. I searched for kubectl on my Mac but was not able to find it.
How can I get kubectl to work from command line?

Short answer:
On macOS, you may need to add a symlink: sudo ln -s /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/kubectl /usr/local/bin/kubectl
Long answer:
I believe this is caused by installing kubectl via homebrew, then via gcloud, and then uninstalling the homebrew-managed tool. Homebrew will remove its symlink, but gcloud doesn't add it back even when you reinstall kubectl.
To see if this is affecting you on macOS:
See if gcloud has installed kubectl: gcloud info | grep -i kubectl
If you are having the problem I am, I'd expect to see the output look something like this:
kubectl: [2019.05.31]
Kubectl on PATH: [False]
When working you should see something like this:
kubectl: [2019.05.31]
Kubectl on PATH: [/usr/local/bin/kubectl]
Check for the symlink: ls -la /usr/local/bin | grep -i google-cloud-sdk. That will show your links to google cloud binaries.
If kubectl isn't on the list, run sudo ln -s /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/kubectl /usr/local/bin/kubectl
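Once the link exists, a quick sanity check (assuming the standard /usr/local/bin location) is:
which kubectl
kubectl version --client
The first command should print /usr/local/bin/kubectl and the second should report the client version instead of "command not found".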

The gcloud info command will tell you if and where kubectl is installed.
Per https://kubernetes.io/docs/tasks/tools/install-kubectl/, you can install kubectl with brew install kubernetes-cli. Alternatively, you can install the Google Cloud SDK per https://cloud.google.com/sdk/docs/quickstart-macos, and then install kubectl with gcloud components install kubectl.

Just to update and add more clarity around this:
If you did not install kubectl via Homebrew but chose to do
gcloud components install kubectl
instead, then the binary is installed inside the bin folder of your gcloud install folder. Even if your gcloud bin folder is already on your shell's PATH, it won't see kubectl right away unless you start a new shell or run hash -r (bash/zsh) to tell your shell to discard its cached paths.
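As a minimal sketch, assuming the SDK lives at the default ~/google-cloud-sdk (check gcloud info if yours is elsewhere):
export PATH="$HOME/google-cloud-sdk/bin:$PATH"
hash -r
which kubectl
The export is only needed if the bin folder isn't already on your PATH; hash -r just clears the shell's command cache so the newly installed kubectl is found.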

Thanks to @mark, I was able to sort this out, but I feel modifying the $PATH is a better approach. I also found that the Caskroom folder was in a different spot, so I recommend adding this to the relevant rc file:
export PATH="`brew --prefix`/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin:$PATH"
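Then reload the rc file and confirm the binary resolves (assuming zsh here; use ~/.bash_profile or ~/.bashrc for bash):
source ~/.zshrc
which kubectl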

Related

How do I install redis 5.0.14 using homebrew

I need to install redis version 5.0.14 on my mac using brew.
I have tried multiple ways like brew install redis#5.0.14, redis#5.0, redis#50, redis#5 but nothing seems to work!!
I was able to find out on https://formulae.brew.sh/ that the options that can be installed using brew are redis, redis#4.0, and redis#3.2. But I need to install Redis 5.0.14, or basically anything above 5.0.6, because that is the version we have in production. Can anyone help me out on this?
I have seen a way here that suggests checking out a specific Homebrew formula version, but that would become too messy if something goes wrong. I would prefer a straightforward way if there is one.
Given that the Redis version you require is not available via homebrew, your question is unanswerable. However, given how good docker is on macOS, I have taken to using that rather than homebrew for lots of version-related problems.
With docker:
I can pull any version I want,
it's all isolated from my core macOS,
just as performant,
readily deletable,
simple to have many versions,
switchable between versions,
repeatable across platforms and
configurable by script.
Official image here.
So, in concrete terms, you could run Redis 5.0.14 as a daemon like this:
docker run --name some-redis -d redis:5.0.14
and then connect to that same container and run redis-cli inside it like this:
docker exec -it some-redis redis-cli PING
PONG
Or you could run Redis in the container but expose its port 6379 as port 65000 to your regular macOS applications like this:
docker run --name some-redis -p 65000:6379 -d redis:5.0.14
Then it is accessible to your macOS applications, such as redis-cli like this:
redis-cli -p 65000 info | grep redis_version
redis_version:5.0.14
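And because the container is readily deletable, cleanup is just the standard Docker commands (shown only as a sketch):
docker stop some-redis
docker rm some-redis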
The version you're looking for is not available on brew unfortunately.
bruno@pop-os ~> brew info --json redis | jq -r '.[].versioned_formulae[]'
redis#4.0
redis#3.2
You could get the source code from here: https://github.com/redis/redis/releases/tag/5.0.14
extract it to some directory and build it with make. To run Redis with the default configuration, just type:
% cd src
% ./redis-server
If you want to provide your redis.conf, you have to run it using an additional
parameter (the path of the configuration file):
% cd src
% ./redis-server /path/to/redis.conf
It is possible to alter the Redis configuration by passing parameters directly
as options using the command line. Examples:
% ./redis-server --port 9999 --replicaof 127.0.0.1 6379
% ./redis-server /etc/redis/6379.conf --loglevel debug
All the options in redis.conf are also supported as options using the command
line, with exactly the same name.
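Putting those steps together, a rough sketch of fetching and building 5.0.14 from the GitHub tag (the archive URL follows GitHub's usual tag-tarball pattern; adjust paths as you see fit):
curl -LO https://github.com/redis/redis/archive/refs/tags/5.0.14.tar.gz
tar xzf 5.0.14.tar.gz
cd redis-5.0.14
make
src/redis-server --port 6379
make compiles redis-server, redis-cli and friends into the src/ directory, which is why the examples above run them from there.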
Optionally you could use Docker, docker run --name some-redis -d redis:5.0.14
brew update
brew install redis
To have launchd start redis now and restart at login:
brew services start redis
To stop it, just run:
brew services stop redis
Test if the Redis server is running:
redis-cli ping
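Keep in mind this installs whatever version is current in Homebrew rather than 5.0.14 specifically, so it's worth checking what you actually got:
redis-server --version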

Running gsutil on instance using python subprocess - access permissions?

I have a Python script that does calculations on Google Compute Engine instances. The code works fine in terms of doing the calculations, but at certain points it needs to add/delete files from a Cloud Storage bucket, which I do using gsutil. This works well when run from my local computer, but it isn't working when the same code is run from a Google Cloud instance. By "not working" I mean that an error message is reported at the offending line, but my code carries on running and just ignores the steps that involve gsutil.
My understanding from Google's documentation is that gcloud instances boot with the "gsutil" utility already installed. My instances boot running a script like this (where <xxxx> is my actual Google username):
#! /bin/bash
sudo apt-get update
sudo apt-get -yq install python-pip
sudo pip install --upgrade google-cloud
sudo pip install --upgrade google-cloud-storage
sudo pip install --upgrade google-api-python-client
sudo pip install --upgrade google-auth-httplib2
mkdir -p /home/<xxxx>/code
mkdir -p /home/<xxxx>/rawdata
mkdir -p /home/<xxxx>/processeddata
sudo chown -R <xxxx> /home/<xxxx>
gsutil cp gs://<codestorebucket>/worker-python-code/* /home/<xxxx>/code/
gsutil -m cp gs://<rawdatabucket>/* /home/<xxxx>/rawdata/
I don't run my code from the boot script yet, as I want to SSH into the instance and run it myself from the command line while I am still developing. When I SSH into the instance, the directories have all been created and all of the code and raw data files have been copied. I can run my ".py" file and it runs, but there are lines which use the Python command:
subprocess.call('gsutil -q rm gs://<mybuckname>/<myfilename>', shell=True)
This generates an error which reads:
ERROR: (gsutil) Failed to create the default configuration. Ensure you have the correct permissions on: [/home/<xxxx>/.config/gcloud/configurations].
Could not create directory [/home/<xxxx>/.config/gcloud/configurations]: Permission denied.
If it provides any clues, in the "daemon.log" file there is an error line which reads:
chown: invalid user: ‘<xxxxx>’
which is reported when the sudo chown... command line runs.
The instances have full access to all APIs. If I run
whoami
The response is "xxxxx". If I run
echo $UID
The response is 1000.
I am a Linux novice, as I have only "learnt" about it through needing to do stuff on Google instances. There is a link here where a user appears to have a similar problem. He fixes it using a sudo chown-type command, but when I run an equivalent command I am told that it "cannot access '/home/paulgarlick07/.config/': No such file or directory".
I'm really confused, and any help would be very much appreciated. If any additional info is required to help resolve this please let me know!
gsutil is not a program. It is a script. Therefore you need to execute a shell with gsutil as a command line argument. You will need to pass the full pathname for gsutil which might be different on your system.
subprocess.call('/bin/sh /usr/bin/gsutil -q rm gs://<mybuckname>/<myfilename>', shell=True)
If you are running gsutil from a service, then you will need to ensure that the user that the service is running under has gsutil setup. gsutil stores its configuration files based from the home directory of the user that it is executing under.
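If the underlying problem is that the boot script's chown failed (the "invalid user" error above) and the config directory is not writable, one hedged fix, reusing the question's <xxxx> placeholder, is to create it and hand ownership to your user before calling gsutil:
sudo mkdir -p /home/<xxxx>/.config/gcloud
sudo chown -R <xxxx> /home/<xxxx>/.config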

Run gcloud without sudo

I'm on Mac OSX and I've always had to run the gcloud command with sudo. I can usually work around it, but it has started to cause me some issues. I tried following this answer here, but I am not sure where the gcloud command gets called from. It's not in /usr/bin.
I have found that my gcloud sdk is installed at /Users/Max/Desktop/google-cloud-sdk/, and I have tried adding /bin/gcloud and '/lib/gcloud.py' from that path. No luck! Any idea how I can give NOPASSWD permissions to this command?
I'm on macOS and my issue was that my google-cloud-sdk install folder and its config folder at ~/.config/gcloud were owned by root. The fix is to sudo chown -R <your-username> google-cloud-sdk and sudo chown -R <your-username> ~/.config/gcloud. And done: no more sudo.
I was able to resolve this issue myself. This article was very helpful. Ultimately, you just have to grant the gcloud command passwordless (NOPASSWD) sudo privileges. You give those permissions by running sudo visudo and adding a line in the following format:
<yourusername> ALL=NOPASSWD: <command1>, <command2>
My line ended up looking like this:
Max ALL=NOPASSWD: /Users/Max/Desktop/google-cloud-sdk/bin/gcloud
The part that tripped me up was figuring out where the gcloud command was installed. You have to add that path at the end of the permissions. You can find out where it is installed by running which gcloud.
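To confirm the rule took effect, you can list your sudo privileges (plain sudo, nothing gcloud-specific):
sudo -l
The NOPASSWD entry for the gcloud path should show up in the output.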

CircleCI 2.0 -> /bin/bash: bash: command not found

In a CircleCI build I am trying to install nvm as follows:
- run:
    name: Install nvm
    command: curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
But I am getting this error:
/bin/bash: bash: command not found
How do I fix this issue?
Disclaimer: Developer Advocate at CircleCI
You didn't specify which Docker image (or executor) you are using. Most likely you're using a Docker image that doesn't include Bash. You can do one of three things:
Install Bash first in that Docker image.
Choose a Docker image with Bash already installed.
Use sh for the command instead of Bash.
Option 3 is the easiest, as long as the install script isn't using Bash-specific features. You can try it by replacing the end of the command like this:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | sh
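If you'd rather go with option 1 and your image happens to be Alpine-based (an assumption; use your base image's package manager), you could install Bash in a preceding step, for example:
- run:
    name: Install bash
    command: apk add --no-cache bash curl
After that, the original curl ... | bash command should work as written.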

Find out where Google Cloud SDK is installed

I need to add the Google Cloud SDK to the PATH, so I need the path to where it is installed. Is there any gcloud ... command which gives me this information?
If not, I have to follow the symlink from which gcloud, etc.
Any cleaner solution for this problem?
The following command will give you the information you're looking for:
$ gcloud info --format="value(installation.sdk_root)"
/path/to/google-cloud-sdk/
You need to append /bin.
You also have lots of other paths available: config.paths.global_config_dir, installation.sdk_root, and so on. Look at the output of gcloud info --format=json for all available properties to query.
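So, as a one-line sketch, you could put the bin directory on your PATH with:
export PATH="$(gcloud info --format='value(installation.sdk_root)')/bin:$PATH"
Add that to your shell's rc file to make it permanent.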
I used:
dirname $(which gcloud)
and it worked like a charm.
As of March 2022, the gcloud SDK link indicates you can get the SDK as client libraries for Java, Python, Node.js, Ruby, .NET and PHP. In addition, you can get the CLI too. If you have installed the CLI, part of the (optional) install process is to add the gcloud binary path to your PATH, so you can find it that way.
The script that does this is called path.bash.inc, path.zsh.inc, etc., and depending on what your default shell is, the installer runs the correct path.your_shell.inc file. If you have never run that step, then it is possible your gcloud was installed in a directory that you lost track of.
If that is the case, then you simply have to run a find from the root:
find / -name gcloud -type f -ls 2>/dev/null
This contrasts with:
1. the AWS CLI, which gets installed into a common system PATH directory like /usr/local/bin on a Mac, and
2. the Azure CLI, which gets installed using Homebrew on a Mac, under /opt/homebrew/bin, a path which, if you use a Mac, is already on your PATH.
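Once you have located the install directory, sourcing the bundled path script puts gcloud on your PATH (assuming here it ended up in ~/google-cloud-sdk and you use bash; use path.zsh.inc for zsh):
source ~/google-cloud-sdk/path.bash.inc
which gcloud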
