Receiving a "no such file or directory" error: gcloud compute copy-files onto instance from local machine - macOS

gcloud compute copy-files /Users/myusername/Pictures/IMG_0382.JPG myusername@my-instance:/var/www --zone asia-east1-c
/Users/myusername/Pictures/IMG_0382.JPG: No such file or directory
ERROR: (gcloud.compute.copy-files) [/usr/bin/scp] exited with return code [1].
I am copy-pasting the file path into my local terminal. What is the deal? Do I have to modify the .bash file or something?

So, it turns out I was not in the right terminal. Instead of ssh-ing into your instance and then issuing the command, you have to open up a fresh terminal window, cd to the appropriate directory (the folder with the desired upload file), and then issue the gcloud compute copy-files LOCAL... yada yada. If you are already connected to your instance, that's fine, just open up a new terminal window.
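For example (a sketch reusing the file, instance name, and zone from the question; adjust for your own setup), from a fresh terminal window on the local Mac:
cd ~/Pictures
gcloud compute copy-files IMG_0382.JPG myusername@my-instance:/var/www --zone asia-east1-c
The command runs on the local machine; gcloud opens the SSH connection to the instance for you.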

Does this command succeed or fail:
ls -ld /Users/myusername /Users/myusername/Pictures/IMG_0382.JPG
The copy and paste most likely did not do what you expected.
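If the second path is the problem (for example a stray character slipped in during the paste, or the file simply isn't there), ls will report it explicitly, along the lines of:
ls: /Users/myusername/Pictures/IMG_0382.JPG: No such file or directory
which confirms the issue is the local path, not the gcloud command itself.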

The GCP documentation can be found here.
To make it work I had to:
- include sudo for permissions
- delete the ~ specified in the documentation
- add the zone of my VM
Code:
sudo gcloud compute scp [LOCAL_FILE_PATH] [INSTANCE_NAME]:/ --zone [INSTANCE ZONE]
Example:
sudo gcloud compute scp /Users/jb/Downloads/cudnn-8.0-linux-x64-v7.tar instance-1:/home/jb --zone us-west
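If you're unsure of the zone (the example above uses us-west, which looks more like a region prefix; full zone names look like us-west1-b), you can look it up with:
gcloud compute instances list
and read the ZONE column for your instance.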

execute aws command in script with sudo

I am running a bash script with sudo and have tried the command below, but I am getting the error below from aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't the -E flag preserve the original environment? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export`ing the profile name and `source`-ing the path to the `config`.
You can run the command as the original user, like:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash -- using source causes the script to run in the same shell as your open terminal window, so it keeps the same environment (including the user) - though honestly, @Philippe's answer is the better, more correct one.
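As a minimal sketch of that wrapper script (the profile name comes from the error above; the bucket and file paths are made up for illustration, and the actual CLI subcommand is aws s3 cp rather than aws cp):
#!/bin/bash
# Drop back to the user who invoked sudo so their ~/.aws/config and credentials are found
sudo -u "$SUDO_USER" aws s3 cp s3://my-bucket/file.txt /tmp/file.txt --profile name
Invoke it with sudo as before; $SUDO_USER is set automatically whenever the script runs under sudo.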

GCP SDK changing cloud directories from local terminal

I'm trying to follow this tutorial. It seems that if I open a "cloud shell" in the browser, I can execute commands such as cd <cloud path> without any issues. However, when I try to do so from my laptop terminal I run into issues:
(base) user ~ % gsutil ls
gs://my_project/
(base) user ~ % cd gs://my_project/
cd: no such file or directory: gs://my_project/
(base) user ~ % gsutil cd gs://my_project/
CommandException: Invalid command "cd".
(base) user ~ % gcloud cd gs://my_project/
ERROR: (gcloud) Invalid choice: 'cd'.
How should one change cloud directories from a local terminal, using gcloud SDK?
The program gsutil does not support the concept of a current working directory. Objects are specified using the full path <protocol>://<bucket>/<object-name>.
If you install the package gcsfuse, you can mount a Google Cloud Storage bucket as a directory. Then you can use the Linux shell command cd to move around the bucket as a file system.
Cloud Storage FUSE
This article shows how to set up gcsfuse on Cloud Shell:
Unlimited persistent disk in google cloud shell
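A rough sketch of that workflow, assuming gcsfuse is installed and you are authenticated (the mount point is arbitrary):
mkdir -p ~/gcs/my_project             # create a local mount point
gcsfuse my_project ~/gcs/my_project   # mount the bucket (note: no gs:// prefix)
cd ~/gcs/my_project                   # now cd works, because this is a real directory
Unmount with fusermount -u ~/gcs/my_project on Linux (or umount on macOS) when you are done.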

Read-only file system when attempting mkdir /root/.local/share on Mac with Putty

I am trying to start Bitnami AWS with PuTTY on a Mac, but when I start Auth in SSH on both Catalina and Big Sur I get this error:
(putty: 3637): Gtk-WARNING **: Attempting to set the permissions of `/Users/daniele/.local/share/recently-used.xbel ', but failed: No such file or directory
I tried to create the folder:
sudo mkdir -p /root/.local/share
I get this error:
mkdir: /root/.local/share: Read-only file system
As per the error message, we should create the folder at the following path:
/Users/daniele/.local/share/
And not:
/root/.local/share
Therefore, the correct command is:
mkdir -p /Users/daniele/.local/share
Check the result of the command: csrutil status
If the result is enabled, you need to restart the machine and hold Command + R, open Terminal in Recovery, and run csrutil disable.
Restart, and check the status again: csrutil status.
Here are two methods:
If you are root, remount the root filesystem as writable:
sudo mount -uw /
After that, you can mkdir or touch new files.
If you still can't read or write anything, you can try this:
cd ~ # go to your home directory
sudo vim /etc/synthetic.conf # create this file if it doesn't exist
In the conf file, add a new line:
data /Users/xx/data # note: the separator between these two strings is a tab
Restart, and you will find a link named /data in the root.

File not found exception while starting Flume agent

I have installed Flume for the first time. I am using hadoop-1.2.1 and Flume 1.6.0.
I tried setting up a flume agent by following this guide.
I executed this command : $ bin/flume-ng agent -n $agent_name -c conf -f conf/flume-conf.properties.template
It says log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: ./logs/flume.log (No such file or directory)
Isn't the flume.log file generated automatically? If not, how can I rectify this error?
Try this:
mkdir -p ./logs
sudo chown `whoami` ./logs
bin/flume-ng agent -n $agent_name -c conf -f conf/flume-conf.properties.template
The first line creates the logs directory in the current directory if it does not already exist. The second one sets the owner of that directory to the current user (you) so that flume-ng running as your user can write to it.
Finally, please note that this is not the recommended way to run Flume, just a quick hack to try it.
You are probably getting this error because you are running the command from some other directory on the console; you have to first go to your Flume installation directory and run your command from there.
As @Botond says, you need to set the right permissions.
However, if you run Flume from another program, like supervisor, or with a custom script, you might want to change the default path, as it is relative to the launcher's working directory.
This path is defined in your /path/to/apache-flume-1.6.0-bin/conf/log4j.properties. There you can change the line
flume.log.dir=./logs
to an absolute path of your choice - you still need the right permissions, though.
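For instance (the directory below is only an illustration; pick any path the Flume user can write to):
flume.log.dir=/var/log/flume
and create it beforehand:
sudo mkdir -p /var/log/flume
sudo chown `whoami` /var/log/flume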

SSH remote command executing a script

I have two hosts, hosts A and B. A has a script (generate) that compiles my thesis:
#!/bin/sh
pdflatex Thesis.tex
When running this command on host A (console window) it works perfectly.
I am basically trying to connect from host B to A and run the generation command as an ssh remote command. All the keys are properly set. When I run the command, I get the following:
hostB> ssh user@hostA exec ~/Thesis/generate
This is pdfTeX, Version 3.1415926-1.40.10 (TeX Live 2009/Debian)
entering extended mode
! I can't find file `Thesis.tex'.
<*> Thesis.tex
I tried adjusting the script so that it considers the directory:
pdflatex ~/Thesis/Thesis.tex
But because Thesis.tex inputs some other files (images), I get an error message.
I presume the problem is some sort of environment that doesn't exist for remote commands. How do I fix this?
ssh will run your command in your home directory. You probably wanted to run it in your ~/Thesis directory.
Just cd first and it should be fine:
ssh user@hostA 'cd ~/Thesis && ./generate'
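Another option (my own suggestion, not from the answer above) is to make the generate script independent of the caller's working directory by cd-ing to its own location first:
#!/bin/sh
# change to the directory containing this script before compiling
cd "$(dirname "$0")" || exit 1
pdflatex Thesis.tex
Then ssh user@hostA '~/Thesis/generate' works from anywhere, and so does running it locally.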
