When I run gsutil rsync from the GCP Console or a .bat file, the full progress data does not display (it used to, I'm pretty sure). I'm on version 403.0.0.
Here is the command:
>gsutil rsync -r -n \\xxxx\WEBSITE\xxx\pages gs://xxx/pages
Building synchronization state...
Starting synchronization...
>
If I run the same command followed by a pause in a .bat file, the pause is never hit; the batch file terminates. glist (and maybe others) does output its data to the console and continues on to the balance of the batch commands.
c:
cd "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\"
gsutil rsync -r -d \\xxx\pages gs://xxx/pages
pause <<<<< never get here
If I use .NET Process I can capture the standard output, which does contain the progress data. However, StdOut seems to close well after rsync has finished.
Is this a bug? Or am I missing something?
I recommend that you update to the latest gsutil release.
I tested on my machine and also on Cloud Shell, both with gsutil version 4.31, and they display the progress of the command, listing the files being copied.
I ran the script in different environments and observed that the commands after the rsync are skipped only on Windows machines. I tested in the Google Cloud SDK Shell and also in Cygwin for Windows.
However, on a Linux machine and also in Cloud Shell, the same script works as expected and executes the subsequent commands after the rsync completes.
This behavior depends on the implementation of the individual shells. You will need to catch the unexpected behavior and handle the situation in the desired way, as the solution differs depending on the environment.
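One Windows-specific cause worth checking, though I can't confirm it from the question alone: in the Cloud SDK for Windows, gsutil is itself a .cmd batch wrapper, and cmd.exe transfers control permanently when one batch script invokes another without the call keyword, so the calling .bat never resumes. If that is what is happening here, prefixing the command with call should let the pause be reached:

c:
cd "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\"
call gsutil rsync -r -d \\xxx\pages gs://xxx/pages
pause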
Related
I have a batch script written to auto-start and capture traffic on a server for me, but for some reason, when I run it, Wireshark tells me it doesn't have permission to the folder where the script is trying to save the file. I have tried multiple different folders, on and off the server. I have tried giving everyone, including SYSTEM, full access to the folder. I have tried remaking the folder. I have tried running under and not under admin credentials, and I have tried letting the system task run it. I always get a permissions issue.
The weirdest part is that if I run Wireshark manually and save the data manually, there are no permission issues. The problem only occurs when I run the script, although both are run under the same admin account.
Here is the script in case you need to see the flags I used.
@echo off
cd C:\Program Files\Wireshark
Wireshark.exe -i 4 -k -a duration:10 -w C:\Temp
pause
I did try to use a PowerShell script I had found online, but it was pretty old and I couldn't get it to actually run. So any recommendations that include PowerShell or batch are welcome.
C:\Temp isn't a file; it's a folder. Try specifying an actual filename, like this:
@echo off
cd C:\Program Files\Wireshark
Wireshark.exe -i 4 -k -a duration:10 -w C:\Temp\foo.pcapng
pause
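Since you asked for batch or PowerShell recommendations: for unattended captures, the command-line dumpcap.exe that ships with Wireshark is usually a better fit than the GUI executable, because it captures without opening a window. A minimal batch sketch, reusing your interface number and duration (the output filename is just an example):

@echo off
cd /d "C:\Program Files\Wireshark"
dumpcap.exe -i 4 -a duration:10 -w C:\Temp\capture.pcapng
pause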
I am using Git Bash and I am downloading a file greater than 10 GB, and the download stopped halfway. I don't want to download the whole file again from the start. How can I resume the download from where it stopped with SFTP?
I have tried the reget command; it showed "cannot download non-regular file".
(Assuming you are using OpenSSH sftp,) use its reget command. It has the same syntax as get, except that it starts the transfer from the end of an existing local file.
The -a switch of the get command has the same effect, as does the global -a command-line switch of sftp.
You need OpenSSH 6.3 or later for these features.
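A hypothetical session, with the host and filename invented for illustration:

$ sftp user@example.com
sftp> reget remote/bigfile.tar.gz bigfile.tar.gz

Equivalently, start the client with sftp -a user@example.com and use a plain get, or run get -a remote/bigfile.tar.gz from within the session; all three resume the partial download.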
I have a remote script on a machine (B) which works perfectly when I run it from machine (B). I wanted to run the script via ssh from machine (A) using:
ssh usersm@${RHOST} './product/2018/requests/inbound/delDup.sh'
However, machine (A) complains about the contents of the remote script (2018req*.txt is a variable defined at the beginning of the script):
ls: cannot access 2018req*.txt: No such file or directory
From the information provided, it's hard to do more than guess, so here's a guess: when you run the script directly on machine B, do you run it from your home directory with ./product/2018/requests/inbound/delDup.sh, or do you cd into product/2018/requests/inbound and run it with ./delDup.sh? This matters because a pattern like 2018req*.txt is resolved relative to the directory you were in when you ran the script, not the directory the script lives in. If you cd to the inbound directory locally, the pattern matches files there; but running the script remotely does not change to that directory, so 2018req*.txt looks for files in your home directory.
If that's the problem, I'd rewrite the script to cd to the appropriate directory, either by hard-coding the absolute path directly in the script, or by detecting what directory the script is in (see "https://stackoverflow.com/questions/59895/getting-the-source-directory-of-a-bash-script-from-within" and BashFAQ #28: "How do I determine the location of my script? I want to read some config files from the same place").
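For instance, the "detect the script's directory" approach can be as small as this, assuming a bash script whose data files live alongside it:

cd "$(dirname "$0")" || exit 1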
BTW, anytime you use cd in a script, you should test the exit status of the cd command to make sure it succeeded; if it didn't, the rest of the script will execute in the wrong place and may do unexpected and unpleasant things. You can use || to run an error handler if it fails, like this:
cd somedir || {
    echo "Cannot cd to somedir" >&2
    exit 1
}
If that's not the problem, please supply more info about the script and the situation it's running in (i.e. location of files). The best thing to do would be to create a Minimal, Complete, and Verifiable example that shows the problem. Basically, make a copy of the script, remove everything that isn't relevant to the problem, make sure it still exhibits the problem (otherwise you removed something that was relevant), and add that (and file locations) to the question.
First of all, when you use SSH, instead of directly sending the output (stdout and stderr) to the monitor, the remote machine/SSH server sends the data back to the machine from which you started the SSH connection. The SSH client running on your local machine just displays it (unless you redirect it, of course).
Now, from the information you have provided, it looks like the files are not present on server (B), or not accessible (last but not least, are you sure your ls targets the proper directory?). You could display the current directory in your script before running the ls command, for debugging purposes.
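For example, inserting a line like this just before the ls in the script (the echo line is my addition, not part of the original script) would show where it actually runs:

echo "Running in: $(pwd)" >&2
ls 2018req*.txt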
I am using fswatch to monitor a directory and run a script when video files are copied into that directory:
fswatch -o /Path/To/Directory/Directory | xargs -n 1 sh /Path/To/Script/Script.sh
The problem is that the file has often not finished copying before the script is actioned. The files are video files of varying size. Small files are OK; larger files are not.
How can I delay the fswatch notification until the file has completed its copy?
First of all, the behaviour of the fswatch "monitors" is OS-specific: when asking a question about fswatch, you should specify the OS you use.
However, there's no way to do that using fswatch alone. A process may open a file for writing and keep it open for an amount of time sufficiently long for the OS to send multiple events. I'm afraid there is nothing fswatch can do about it.
An alternative approach may be to use another tool to check whether the modified file is currently open: if it is not, run your script; otherwise skip it and wait for the next event. Such tools are OS-specific: on OS X and Linux you may use lsof. Beware that this approach does not protect you from another process opening the file while your script is running.
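A rough sketch of that idea on OS X/Linux (the wrapper is hypothetical and assumes Script.sh can be adapted to take a single file argument; lsof exits 0 when at least one process has the file open):

#!/bin/sh
# Hypothetical wrapper: run Script.sh only for files no process has open.
for f in /Path/To/Directory/Directory/*; do
    if lsof -- "$f" >/dev/null 2>&1; then
        continue    # still open, probably still copying; a later event will retry
    fi
    sh /Path/To/Script/Script.sh "$f"
done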
I created this simple script that does a backup. I wrote and tested it on Linux, then I copied it to my web app's WEB-INF/scripts directory so that it could be run via Java Runtime.exec().
#!/bin/bash
JACCISE_FOLDER="/var/jaccise"
rm $JACCISE_FOLDER/jaccisebackup.zip
zip -r jaccisefolder.zip $JACCISE_FOLDER
mysqldump -ujacc -pxxx jacciseweb > jaccisewebdump.sql
zip jaccisebackup.zip jaccisewebdump.sql
zip jaccisebackup.zip jaccisefolder.zip
rm jaccisewebdump.sql
rm jaccisefolder.zip
cp jaccisebackup.zip $JACCISE_FOLDER
But it doesn't work there. So I copied it from WEB-INF/scripts to my user directory and ran it to troubleshoot. The result is that it fails with ": File o directory non esistente" (Italian for "No such file or directory"; notice the colon at the beginning). I created another file from scratch, copied and pasted the whole script into it, and that one works. I suspect this is related to:
Text encoding
\r\n vs. \n line-ending differences between Windows (I use Eclipse on Windows to edit everything) and Linux.
How do I solve this deployment problem?
You should check whether the file is executable (chmod +x). Then check whether your web server allows the execution of external programs; this can be a security problem, and it is likely that the web server prevents it. Check the logs of the web server. The encoding of the file can be changed with the dos2unix command. In order to debug your script you can add "set -x" at the beginning, but I think the script does not start at all.
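If the line endings are the culprit, that would also explain the leading colon: a carriage return in the interpreter name ("/bin/bash\r") moves the cursor back to the start of the line, hiding the part of the error message before the colon. A quick check and fix might look like this (the filename is assumed):

head -n 1 backup.sh | od -c    # a trailing \r confirms Windows line endings
dos2unix backup.sh             # convert CRLF to LF
chmod +x backup.sh             # make sure it is executable
bash -x backup.sh              # trace each command while debugging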