It seems like net/scp in Ruby (I'm using 1.8.7) only accepts a path, not binary data, as the "local_file" parameter.
In my case, I have the local file stored in a variable.
Am I required to save, upload, and then delete a local file, or is it possible to send the data "directly" to the remote server via SSH without temporarily creating a local file?
I'm open to other solutions than SCP.
What I tried so far is using normal SSH and then executing
echo 'binary here' > remote_file_name
however, I'm concerned about Unix command-length limits, and I ran into escaping problems and so forth...
While Net::SCP will interpret a string as a file name, it should recognise a StringIO object as the actual data to upload.
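A minimal sketch (the host, credentials, and remote path are hypothetical; the upload call is commented out so the sketch stands alone without a reachable server):

```ruby
require 'stringio'
# require 'net/scp'  # net-scp gem, needed for the actual upload

# The data you already hold in a variable:
data = "binary here"

# Net::SCP accepts an IO object in place of a local file name and uploads
# its contents, so no temporary local file is needed:
io = StringIO.new(data)

# Hypothetical upload call (uncomment with real host/credentials):
# Net::SCP.upload!("remote.example.com", "user", io, "/tmp/remote_file_name")
```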
Related
I'm trying to copy a zip file located on a server via an ssh2 library.
The way I'm about to do it is to use the less command and write its output down on the client side:
less -r -L -f zipfile
but the output file is bigger than the original.
I know this is not good practice, but I have to.
So how can I handle this so that I get my zip file on the client machine?
Is less a mandatory command for doing that?
You can simply use scp to achieve that: provide the user and host, then the remote file and the local directory to copy it to, as in the example below:
scp your_username@remotehost.edu:foobar.txt /some/local/directory
I want to run a Terminal command from within FileMaker. I use the Perform AppleScript script step with a native AppleScript:
do shell script "rsync -r Documents/Monturen/ fakeuser@fakeserverhosting.be:www/"
I installed a SSH Key on the remote server. The goal is to automate the sync of images.
I get an error 23. Any advice on what I'm doing wrong?
This is rsync error 23 - some files could not be transferred. Try transferring one file with an explicitly defined full file path.
I think there is a problem with the source filepath as well. Shouldn't this be
~/Documents/Monturen
or
~/Documents/Monturen/*
If you have any spaces in your file names or folder names they have to be escaped with \\. The same applies to any apostrophes.
Let's assume I have a file request.txt that looks like:
GET / HTTP/1.0
Some_header: value
text=blah
I tried:
cat request.txt | openssl s_client -connect server.com:443
Unfortunately, it didn't work, and I had to copy and paste the file contents manually. How can I do it from within a script?
cat is not suited to downloading remote files; it's best used for files local to the file system running the script. To download a remote file, there are other commands that handle this better.
If your environment has wget installed, you can download the file by URL. That would look like:
wget https://server.com/request.txt
If your environment has curl installed, you can download the file by URL. That would look like:
curl -O https://server.com/request.txt
Please note that if you want to store the response in a variable for further processing, you can do that as well with a bit more work.
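As a sketch of capturing the response into a variable (using a local file:// URL here so the example is self-contained; the path and contents are hypothetical stand-ins for your real https:// URL):

```shell
# Stand-in for the remote file:
printf 'GET / HTTP/1.0\n' > /tmp/request.txt

# curl -s writes the body to stdout; $(...) captures it into a variable:
response=$(curl -s "file:///tmp/request.txt")
echo "$response"
```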
Also worth noting: if you really must use cat to download a remote file, it is possible, but it requires ssh, and I'm not a fan of that method because it requires ssh access to a file that is already publicly available over HTTP(S). I can't think of a practical reason to go about it this way, but for the sake of completeness I wanted to mention that it can be done - though it probably shouldn't be.
Is there any way to write/append to a remote file via FTP? I need to append certain content to a file located on the server. Is there any way to do this with a shell script?
You can use cURL with the --append flag:
(FTP/SFTP) When used in an upload, this makes curl append to the target file instead of overwriting it. If the remote file doesn't exist, it will be created. Note that this flag is ignored by some SFTP servers (including OpenSSH).
See cURL man page.
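For example, a sketch of the upload command (the host, credentials, and paths are all hypothetical), wrapped in a function so the network call doesn't run until you invoke it:

```shell
# Append the contents of a local file to a remote file over FTP.
# -T uploads the given local file; --append appends instead of overwriting.
append_to_remote() {
  local_file=$1    # e.g. new_lines.txt
  remote_url=$2    # e.g. ftp://user:password@ftp.example.com/logs/app.log
  curl --append -T "$local_file" "$remote_url"
}

# Usage (hypothetical):
# append_to_remote new_lines.txt ftp://user:password@ftp.example.com/logs/app.log
```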
I want to ensure an authoritative remote file is in sync with a local file, without necessarily re-downloading the entire file.
I mistakenly used wget -c http://example.com/filename
If "filename" was appended to remotely, that works fine. But if "filename" is prepended to, e.g. "bar" is prepended to a file that just contained "foo", then in my test the downloaded file's contents were wrongly "foo\nfoo" instead of "bar\nfoo".
Can anyone suggest a different, efficient HTTP download tool? Something that looks at server caching headers or ETags?
I believe that wget -N is what you are looking for. It turns on timestamping and allows wget to compare the local file timestamp with the remote timestamp. Keep in mind that you might still encounter corruption if the local file timestamp cannot be trusted e.g. if your local clock is drifting too much.
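A minimal sketch, with a hypothetical URL, wrapped in a function so nothing is fetched until you call it:

```shell
# -N (--timestamping) makes wget compare the remote Last-Modified header
# with the local file's mtime and skip the download when the local copy
# is up to date.
fetch_if_newer() {
  wget -N "$1"
}

# Usage (hypothetical): fetch_if_newer http://example.com/filename
```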
You could very well use curl: http://linux.about.com/od/commands/l/blcmdl1_curl.htm