I am trying to execute a command on a remote machine through a shell script. That command needs to be executed as the root user. After logging in to the remote machine with my normal ID, I need to execute the command (which lives in a different path) from a specific path as root. I am using the code below:
current_dir=$PWD;/usr/local/bin;sudo -u root drush data_export_import-export nodes --content-types=book;cd $current_dir;
I am getting the error below:
./test.sh: line 8: /usr/local/bin: is a directory
[sudo] password for s57232:
PHP Warning: Module 'pgsql' already loaded in Unknown on line 0
PHP Warning: Module 'pgsql' already loaded in Unknown on line 0
The drush command 'data_export_import-export nodes' could not be found. Run `drush cache-clear drush` to clear the commandfile cache if you have installed new extensions. [error]
It should not expect a password to be supplied, and the drush command (which lives in /usr/local/bin) needs to be executed from the /var/www/html path.
I also tried the following, but it did not work:
sudo -u root /var/www/html
sudo -u root /usr/local/bin/drush data_export_import-export nodes --content-types=book >> ${DATAPATH} 2>&1
Can anyone help me?
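For reference, here is a minimal sketch of what the script fragment was probably meant to do; the missing cd is what produces the "/usr/local/bin: is a directory" message. The sudoers rule at the end is an assumption: passwordless sudo has to be granted by an administrator, it cannot be requested from the script itself.

current_dir=$PWD
cd /var/www/html    # run drush from the site root
sudo -u root /usr/local/bin/drush data_export_import-export nodes --content-types=book
cd "$current_dir"

# Hypothetical sudoers entry (added via visudo) for the user shown in the error output:
# s57232 ALL=(root) NOPASSWD: /usr/local/bin/drush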
Related
I am trying to start Bitnami AWS with PuTTY on a Mac, but when I start Auth in SSH on both Catalina and Big Sur I get this error:
(putty: 3637): Gtk-WARNING **: Attempting to set the permissions of `/Users/daniele/.local/share/recently-used.xbel ', but failed: No such file or directory
I tried to create the folder:
sudo mkdir -p /root/.local/share
I get this error:
mkdir: /root/.local/share: Read-only file system
As per the error message, we should create the folder at the following path:
/Users/daniele/.local/share/
And not:
/root/.local/share
Therefore, the correct command is:
mkdir -p /Users/daniele/.local/share
First, check the result of the command csrutil status.
If the result is 'enabled', you need to restart the machine while holding Command+R, open the Terminal in Recovery mode, and run csrutil disable.
Restart, and check the status again: csrutil status.
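Put together, the sequence looks like this (the disable step must be run from the Terminal in Recovery mode, not from a normal session):

csrutil status # check whether System Integrity Protection is enabled
# restart while holding Command+R, open Terminal from the Utilities menu, then:
csrutil disable
# restart normally and verify:
csrutil status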
Here are two methods.
First, remount the root filesystem as writable:
sudo mount -uw /
Then you can mkdir or touch new files.
If you still can't read or write anything, try the second method:
cd ~ # cd home directory
sudo vim /etc/synthetic.conf # create new file if this doesn't exist
In the conf file, add a new line:
data /User/xx/data # note: the separator between these two strings is a tab
Restart, and you will find a link named /data at the root.
remote user: ab
escalated user: unix
When I use the copy module to copy a file to /etc/profile.d/, it throws a 'permission denied' error,
but with the shell and command modules,
sudo cp myscript.sh /etc/profile.d/
works from the unix user. I want to use an Ansible module rather than shell or command. The issue is executing the command with sudo privileges from the unix user. As the become user I can't use root directly; I don't have root access, but from the unix user I can use sudo.
I have already used the details below:
become=yes
become_method=sudo
become_user=unix
become_ask_pass=false
sudo cp means that the command runs as root, not as the user unix.
Try removing the line:
become_user=unix
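For clarity, this is a sketch of the become settings from the question with that suggestion applied; escalation then defaults to root, which matches what sudo cp was doing, so the copy module should be able to write to /etc/profile.d/:

become=yes
become_method=sudo
become_ask_pass=false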
I am using CD for deploying my code to a VPS. This VPS is running Ubuntu 16.04 and has a user 'deployer'.
Now when I use ssh deployer@server I get shell access to the server, and then when using cd /var/www I get into the /var/www directory.
When I do this from the deployment script defined in .gitlab-ci.yml, I get this error: /bin/bash: line 101: cd: /var/www/data/: No such file or directory. I also ran ls -al to view the directory structure of /var, which turned out not to contain the www directory. So clearly I now have no permission to the www directory.
- rsync -avz --exclude=.env . deployer@devvers.work:/var/www/data/staging/home
- ssh deployer@devvers.work
- cd /var
- ls -al
- cd /var/www
This is the part of the script where it fails. Does anyone know why my user has different permissions when using ssh from the terminal than when using ssh in this script? Copying the files with rsync went fine and all the files were copied.
My guess is that the cd and ls commands that you are trying are actually executed in the runner's environment (be it the host or a docker container, depending on your setup), not on the machine you ssh into.
I'd suggest you execute those commands through ssh instead. An example of creating a file and checking that it has been created:
ssh deployer@devvers.work "touch /var/www/test_file && ls -al /var/www/"
It is best to use an ssh executor, configured through a config.toml:
/etc/gitlab-runner/config.toml:
concurrent = 1
[[runners]]
url = "http://your_gitlab/ci"
token = "xxx..."
name = "yourGitLabCI"
executor = "ssh"
[runners.ssh]
user = "deployer"
host = "devvers.work"
port = "22"
identity_file = "/home/user/.ssh/id_rsa"
Then your .gitlab-ci.yml can simply include:
job:
  script:
    - "ls /var/www"
    - "cd /var/www"
    ...
See also this example.
If you encounter the line 101: cd: issue on a gitlab-runner that is configured as a shell executor, there might actually be a .bash_logout file in the gitlab-runner user's home directory that causes the issue, together with https://gitlab.com/gitlab-org/gitlab-runner/issues/3849
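If you suspect that case, a quick way to rule it out is to move the file aside and retry the job (assuming the runner executes as a user named gitlab-runner; adjust the home directory to your setup):

sudo ls -la /home/gitlab-runner # check whether .bash_logout exists
sudo mv /home/gitlab-runner/.bash_logout /home/gitlab-runner/.bash_logout.bak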
I'm on Mac OS X. I added these lines to my ~/.bash_profile:
PATH="/usr/local/stardog/bin:${PATH}"
export STARDOG_HOME=/data/stardog
export PATH
Then, on the command line, I execute
cp stardog-licence-key.bin $STARDOG_HOME
as the quick-start documentation states.
But this seems useless, because when I execute sudo stardog-admin server start, it says:
A Stardog license was not found.
The license file 'stardog-license-key.bin'
should be in your Stardog Home directory 'xx/xx'.
xx/xx is the current directory when I launch this command... but the Stardog home directory is supposed to be /data/stardog, not my working directory!
How do I tell Stardog its actual home directory?
Fine (and sorry), I did not mention some elements: I executed the command stardog-admin server start with sudo (as seen in the last edit of my question).
Reasons:
I launched this command with sudo because I needed some permissions to start Stardog properly.
Problem: with sudo, the Stardog home is no longer the one defined in my .bash_profile.
Solution: I gave myself owner permissions on the $STARDOG_HOME directory with the command sudo chown -R myUsername /data/stardog.
Then open a new bash, type stardog-admin server start without sudo, and it works.
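The underlying reason is that sudo resets the environment by default (the sudoers env_reset option), so exported variables like STARDOG_HOME are stripped. You can see this for yourself:

echo $STARDOG_HOME # prints /data/stardog in your own shell
sudo env | grep STARDOG # prints nothing: env_reset drops the variable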
I would like to write a shell script that sets up a Mercurial repository, and allow all users in the group "developers" to execute this script.
The script is owned by the user "hg" and works fine when run. The problem comes when I try to run it as another user using sudo: the execution halts with a "permission denied" error when it tries to source another file.
The script file in question:
create_repo.sh
#!/bin/bash
source colors.sh
REPOROOT="/srv/repository/mercurial/"
... rest of the script ....
Permissions of create_repo.sh and colors.sh:
-rwxr--r-- 1 hg hg 551 2011-01-07 10:20 colors.sh
-rwxr--r-- 1 hg hg 1137 2011-01-07 11:08 create_repo.sh
Sudoers setup:
%developer ALL = (hg) NOPASSWD: /home/hg/scripts/create_repo.sh
What I'm trying to run:
user@nebu:~$ id
uid=1000(user) gid=1000(user) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),113(sambashare),116(admin),1000(user),1001(developer)
user@nebu:~$ sudo -l
Matching Defaults entries for user on this host:
env_reset
User user may run the following commands on this host:
(ALL) ALL
(hg) NOPASSWD: /home/hg/scripts/create_repo.sh
user@nebu:~$ sudo -u hg /home/hg/scripts/create_repo.sh
/home/hg/scripts/create_repo.sh: line 3: colors.sh: Permission denied
So the script is executed, but halts when it tries to include the other script.
I have also tried using:
user@nebu:~$ sudo -u hg /bin/bash /home/hg/scripts/create_repo.sh
Which gives the same result.
What is the correct way to include another shell script, if the script may be run by a different user through sudo?
What is probably happening is that the script tries to source the file colors.sh in the current directory and fails because, under sudo, it doesn't have permission to read your current directory.
Try using source /home/hg/scripts/colors.sh.
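Alternatively, a common pattern is to resolve the path relative to the script itself, so it works no matter which directory the caller is in. A sketch using bash's BASH_SOURCE, applied to the script from the question:

#!/bin/bash
# Resolve the directory this script lives in, then source colors.sh from there.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/colors.sh"
REPOROOT="/srv/repository/mercurial/"
... rest of the script ....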