Bash scripts getting permission denied - bash

This problem has been a thorn in my side for months and I can't find a solution or a reason behind it. I work with bash scripts a lot, and I find that half the time when I execute a .sh script that calls another .sh script, I get permission denied during certain sequences of the execution. Sometimes it fails on cp, mkdir, or git commands, and sometimes it executes just fine. If I prepend sudo, the scripts work, but my coworker runs the same scripts and never has these issues. So what am I missing?
Another use case:
Running the command "sudo npm install go-npm", an error pops up:
npm ERR! Command failed: /usr/bin/git clone --depth=1 -q -b master git://github.com/brockwood/spawn-sync.git /home/epoauto/.npm/_cacache/tmp/git-clone-d4bf86cc
npm ERR! /home/epoauto/.npm/_cacache/tmp/git-clone-d4bf86cc/.git: Permission denied
I've given sudo permissions with 'sudo visudo' but no luck.
Please help!
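A few checks along these lines often narrow this down (the script names and paths below are placeholders, not from the post); comparing the output with the coworker's machine should show what differs:
ls -l ./main.sh ./helper.sh      # placeholder names: are the execute bits set for your user?
id                               # which user and groups are you running as?
ls -ld /path/to/output/dir       # placeholder: can this user write where cp/mkdir operate?
mount | grep -i noexec           # is the filesystem the scripts live on mounted noexec?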

Related

git bash does not recognize "environment path" when using package.json scripts

I am trying to run yarn start with Git Bash:
"start": "node scripts/start.js",
It always works when using PowerShell or CMD, but it does not work with Git Bash.
However, when I run node scripts/start.js directly instead of yarn start in Git Bash, it works!
I tested Git Bash:
yarn -v, node -v, npm -v
Every command works fine.
But not when run through package.json scripts...
This is the error message
'node' is not recognized as an internal or external command
And I tried
"startStart": "yarn start",
and this time Git Bash gives me this error message:
'yarn ' is not recognized as an internal or external command
I checked my env PATH but all is fine.
--- ENV
VS Code
OS: Windows 10
node: 13.5
npm: 6.13.4
I installed Git Bash with Git, and all of the install config is the default/standard.
Added:
I think Git Bash can find the path when a command is run on its own.
I think we should focus on the fact that it can't find the path only when it tries to trigger package.json scripts.
About .profile: I don't know what it is and I never created one. If it doesn't exist by default, then I don't have it.
Without getting into anything Windows-specific: npm executes the script commands specified in package.json under the default shell, but it does not perform a login to that shell (for instance, a bash login via bash --login) in order to pick up your custom system environment variables.
You can change this by using an .npmrc file and setting script-shell; see this answer for the solution.
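For example, assuming Git Bash is installed in its default location (adjust the path if yours differs), pointing npm's script-shell at bash looks like this; it writes a script-shell entry into your user-level .npmrc:
npm config set script-shell "C:\\Program Files\\Git\\bin\\bash.exe"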
I hope this is what you're suffering from :)

Running gsutil on instance using python subprocess - access permissions?

I have a Python script that does calculations on Google Compute Engine instances. The code works fine in terms of doing the calculations, but at certain points it needs to add/delete files from a cloud storage bucket, and I do this using gsutil. This works well when run from my local computer, but isn't working when the same code is run from a Google Cloud instance. By "not working" I mean an error message is reported at the offending line, but my code carries on running and simply skips the steps that involve gsutil.
My understanding from Google's documentation is that gcloud instances boot with the gsutil utility already installed. My instances boot running a script like this (where <xxxx> is my actual Google username):
#! /bin/bash
sudo apt-get update
sudo apt-get -yq install python-pip
sudo pip install --upgrade google-cloud
sudo pip install --upgrade google-cloud-storage
sudo pip install --upgrade google-api-python-client
sudo pip install --upgrade google-auth-httplib2
mkdir -p /home/<xxxx>/code
mkdir -p /home/<xxxx>/rawdata
mkdir -p /home/<xxxx>/processeddata
sudo chown -R <xxxx> /home/<xxxx>
gsutil cp gs://<codestorebucket>/worker-python-code/* /home/<xxxx>/code/
gsutil -m cp gs://<rawdatabucket>/* /home/<xxxx>/rawdata/
I don't run my code from the boot script yet, as I want to SSH into the instance and run it myself from the command line while I am still developing. When I SSH into the instance, the directories have all been created and all of the code and raw data files have been copied. I can run my ".py" file and it runs, but there are lines which use this Python call:
subprocess.call('gsutil -q rm gs://<mybuckname>/<myfilename>', shell=True)
This generates an error which reads:
ERROR: (gsutil) Failed to create the default configuration. Ensure your have the correct permissions on: [/home/<xxxx>/.config/gcloud/configurations].
Could not create directory [/home/<xxxx>/.config/gcloud/configurations]: Permission denied.
If it provides any clues, in the "daemon.log" file there an error line which reads:
chown: invalid user: ‘<xxxxx>’
which is reported when the sudo chown... command line runs.
The instances have full access to all APIs. If I run
whoami
The response is "xxxxx". If I run
echo $UID
The response is 1000.
I am a Linux novice, as I have only "learnt" about it through needing to do stuff on Google instances. There is a link here where a user appears to have a similar problem. He fixes it using a sudo chown-type command, but when I run an equivalent command I am told that it "cannot access '/home/paulgarlick07/.config/': No such file or directory".
I'm really confused, and any help would be very much appreciated. If any additional info is required to help resolve this please let me know!
gsutil is not a program. It is a script. Therefore you need to execute a shell with gsutil as a command line argument. You will need to pass the full pathname for gsutil which might be different on your system.
subprocess.call('/bin/sh /usr/bin/gsutil -q rm gs://<mybuckname>/<myfilename>', shell=True)
If you are running gsutil from a service, then you will need to ensure that the user the service runs under has gsutil set up. gsutil stores its configuration files under the home directory of the user it executes as.
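In this case (a sketch, reusing the <xxxx> placeholder from the question), since the boot-time chown failed with "invalid user", one thing worth trying is creating the gcloud config directory yourself and handing it to your user before running the script again:
sudo mkdir -p /home/<xxxx>/.config/gcloud
sudo chown -R <xxxx> /home/<xxxx>/.config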

Permission denied when installing ruby

I am working on installing another version of Ruby on another server, which runs CentOS 6.7. While trying to configure Ruby within the tmp directory I receive:
sudo: unable to execute ./configure: Permission denied
Here is exactly what I am doing leading up to this:
mkdir /tmp/ruby && cd /tmp/ruby
curl --progress ftp://ftp.ruby-lang.org/pub/ruby/2.1/ruby-2.1.2.tar.gz | tar xz
cd ruby-2.1.2
./configure --disable-install-rdoc   # <-- here is where we fail with Permission denied
I am currently logged in as root. I have played around with changing my file permissions and that did not seem to help at all.
Any suggestions?
From this link it appears /tmp is mounted with the noexec option.
Open /etc/fstab, find the line that mounts your /tmp dir, and remove the noexec flag. Then remount the filesystem (or simply restart your system).
As a side note, you will also want to avoid running ./configure and make as the root user. Only when it comes to running make install should you switch to root.
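For example, the relevant /etc/fstab entry might look something like the line below (devices and options vary by system); after removing noexec you can remount without rebooting:
# before: tmpfs   /tmp   tmpfs   defaults,noexec,nosuid   0 0
# after:  tmpfs   /tmp   tmpfs   defaults,nosuid          0 0
sudo mount -o remount,exec /tmp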

Why does npm need sudo for EVERYTHING?

I don't know how I've managed it, but npm seems to need sudo for absolutely every command; even npm help does not work without sudo. If I use a command without sudo, I do not see an EACCES error; instead my terminal session hangs and then just closes that tab (I use iTerm on Mac).
I have tried changing the ownership of my local .npm folder, as outlined here, and also done the same on my /usr/local/bin folder where node is installed, but none of these allow me to just run npm without sudo, even when installing local packages...! It seems to me that something has gotten screwed up along the way. Can anyone help?
Many thanks
I encountered the same error after a fresh install of 0.12.4 today; this solved the problem for me:
sudo chown -R $(whoami):admin /usr/local/lib/node_modules
In my particular case, I noticed that this folder was owned by '{some-large-integer-account}:wheel'...YMMV
If that doesn't solve it, take a look at the ownership of the folders that are being blocked, as mentioned in the EACCES error trace. If you're not sure what the ownership should be, you can usually infer it from the sibling dirs' ownership.
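For instance (the path here is just the usual global node_modules location on a Mac; use whatever path your EACCES trace actually names):
ls -ld /usr/local/lib/node_modules
ls -ld /usr/local/lib/*    # sibling dirs hint at what the ownership should be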
I had this as well on my machine. What I did to fix it (there are probably much less extreme ways) was to completely remove npm and then do a fresh installation of node.js (with which npm is included) from http://nodejs.org/, making sure I didn't install as root. That then allowed me to use npm without root (except for global installs).
Take an Ember project for example; I gave my user ownership of the whole project directory:
neil@neil-System-Product-Name:~/Projects/ember-quickstart$ sudo chown -R $(whoami) /home/neil/Projects/ember-quickstart/
neil@neil-System-Product-Name:~/Projects/ember-quickstart$ ember s
Could not start watchman
Visit https://ember-cli.com/user-guide/#watchman for more info.
Livereload server on http://localhost:7020
Build successful (10679ms) – Serving on http://localhost:4200/
Slowest Nodes (totalTime => 5% ) | Total (avg)
----------------------------------------------+---------------------
Babel (18) | 7561ms (420 ms)
Concat (8) | 1872ms (234 ms)
Rollup (1) | 629ms
Use the option below.
Open the terminal, cd to your home directory, and run the following command.
mkdir "${HOME}/.npm-packages"
Then this command after that.
npm config set prefix "${HOME}/.npm-packages"
Next, open your .zshrc file using the open -t .zshrc command and add the following to it.
NPM_PACKAGES="${HOME}/.npm-packages"
export PATH="$PATH:$NPM_PACKAGES/bin"
# Preserve MANPATH if you already defined it somewhere in your config.
# Otherwise, fall back to `manpath` so we can inherit from `/etc/manpath`.
export MANPATH="${MANPATH-$(manpath)}:$NPM_PACKAGES/share/man"
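After that, a quick sanity check (the package name is only an example) confirms that global installs now land in the new prefix without sudo:
source ~/.zshrc
npm install -g cowsay
which cowsay    # should resolve under ~/.npm-packages/bin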

error: cannot run ssh: No such file or directory when trying to clone on windows

I am trying to clone a remote repository on Windows, but when I did this:
git clone git@github.com:organization/xxx.git
I got this error:
error: cannot run ssh: No such file or directory
fatal: unable to fork
Am I missing something?
Check if you have the ssh client installed. This solves the problem on Docker machines, even when ssh keys are present:
apt-get install openssh-client
You don't have ssh installed (or don't have it within your search path).
You can clone from github via http, too:
git clone http://github.com/organization/xxx
Most likely your GIT_SSH_COMMAND is referencing the wrong private key.
Try:
export GIT_SSH_COMMAND="ssh -i /home/murphyslaw/.ssh/your-key.id_rsa"
then
git clone git@github.com:organization/xxx.git
I am aware that this is an old topic, but having had this problem recently, I want to share how I resolved my issue.
You might get this error under these conditions:
You use a URL like this: git@github.com:organization/repo.git
and you run that kind of command directly: git clone git@github.com/xxxxx.git while you don't have an ssh client (or it is not present on the PATH)
Or you have an ssh client installed (and git clone xxx.git works fine on the direct command line), but the same kind of command fails when run through a shell script file.
Here, I assume that you don't want to change the protocol from ssh (git@) to http://
(git@github.com:organization/repo.git -> http://github.com/organization/repo.git), as in my case, because I needed the ssh format.
So,
If you do not have an ssh client, first of all, you need to install it.
If you have this error only when you execute the command through a script, then you need to set the GIT_SSH_COMMAND variable with your private ssh key in front of your git command, like this:
GIT_SSH_COMMAND="/usr/bin/ssh -i ~/.ssh/id_rsa" git pull
(Feel free to change it depending on your context)
I had this issue right after my antivirus moved the Cygwin ssh binary to the virus vault and then restored it.
Symptoms:
SSH seems properly installed
SSH can be run from command line without problem
Another option before reinstalling ssh in this particular case: check the ssh command permissions
$ ls -al /usr/bin/ssh.exe
----rwxrwx+
$ chmod 770 /usr/bin/ssh.exe
You can try these as well
ssh-add ~/.ssh/identity_file
chmod 400 ~/.ssh/identity_file
It so happened in my case that the new pair of ssh keys linked with my git account were not accessible.
I had to sudo chmod 777 ~/.ssh/id_rsa.* to resolve the issue.
