Is there a way to get an HTTP link to a file from a script?
For example:
I have a file at:
/home/User/video.mp4
Next, I would like to get the HTTP link for that file. For example:
http://192.168.1.5/video.mp4
I currently have nginx installed on the remote server, with a specific directory as the root of the web server.
On my server, you can get the base server URL like this:
echo "http://$(whoami).$(hostname -f)/path/to/file"
I can build the file link using the command above, but this breaks for files with spaces in their names.
I'm doing this so that I can send the link to Internet Download Manager under Windows, so using wget to download the files will not work for me.
I'm currently using Cygwin to create the script.
To solve the spaces problem, you can replace them with %20:
path="http://$(whoami).$(hostname -f)/path/to/file"
path=${path// /%20}
echo "$path"
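If the file names can contain other special characters besides spaces, a more general approach is to percent-encode every unsafe character. Here's a minimal sketch in Bash; the urlencode helper and the example path are my own illustration (and it only handles ASCII), not part of the original answer:

urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9./_-]) out+="$c" ;;                 # safe characters: keep as-is
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;   # anything else: %XX escape
    esac
  done
  printf '%s\n' "$out"
}

path="$(urlencode "path/to/my file.mp4")"
echo "http://$(whoami).$(hostname -f)/$path"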
Regards.
How can I download files from Artifactory? Is it possible to download them using a batch script? I used cURL commands to upload, so along the same lines, please provide suggestions for downloading. Appreciate your help.
You can use the JFrog CLI - a compact and smart client that provides a simple interface that automates access to JFrog products. The CLI works on both Windows and Linux.
For downloading files, take a look at the command for downloading files from Artifactory. This command allows you to download specific files, multiple files (using wildcards), or complete folders.
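For example, a download might look roughly like this (a sketch only; the server URL, repository name, paths, and credentials are placeholders, and the exact flags may differ between CLI versions):

jfrog rt config --url=https://myserver/artifactory --user=myuser --apikey=myapikey
jfrog rt download "my-repo/builds/*.zip" downloads/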
Use GNU Wget from here - http://gnuwin32.sourceforge.net/packages/wget.htm
It's a very small utility and supports showing the download percentage, plus a lot of other options, like overwriting, skipping the download if the file already exists, etc.
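For instance, downloading a single artifact with HTTP basic auth might look like this (a sketch; the server URL, repository path, and credentials are placeholders):

wget --user=myuser --password=mypassword "https://myserver/artifactory/my-repo/path/to/file.zip"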
Hi, I used the same cURL command with Ansible, but I had missed configuring the remote server for Ansible, so cURL was not working. After configuring the remote server, it was able to download. Thanks a lot for the response.
I need to download the following file to my remote computer using the command line:
download link
The point is that if I use wget or curl, I just get an HTML document. But if I enter this address in my browser (on my laptop), it simply starts downloading.
Now, my question is: since the only way to access my remote machine is through the command line, how can I download it directly on that machine using the command line?
Thanks
Assuming that you are using a Linux terminal, you can use a command-line browser like Lynx to click on links and download files.
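For example (the URL here is just a placeholder for the real download page):

lynx "http://example.com/download-page"

Lynx renders the page in the terminal; you can move to the link with the arrow keys, press Enter, and it will prompt you to save the file.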
The link you provided isn't a normal file link: it sends the filename as a GET variable, and the server responds to that request with another page containing a form. So wget and cURL will not work.
That website is likely tracking your session and checking whether you've submitted the form data and confirmed you're not a robot.
Try a different approach: copy the file from your local machine to the remote one via scp:
scp /localpath/to/file username@remotehost.com:/path/to/destination
Alternatively, you may export cookies from your local machine, copy them to the remote one, and then pass them to wget with the '--load-cookies file' option, but I can't guarantee it will work 100% if the site also ties the session ID to your IP.
Here's a Firefox extension for exporting cookies:
https://addons.mozilla.org/en-US/firefox/addon/export-cookies/
Once you have the cookies.txt file, just scp it to the remote machine and run wget with the '--load-cookies file' option.
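Put together, it looks something like this (a sketch; the hostname and download URL are placeholders):

scp cookies.txt username@remotehost.com:~/
wget --load-cookies cookies.txt "http://example.com/download?file=archive.zip"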
One of the authors of the corpus here.
As pointed out by a friend of mine, this tool solves all the problems.
https://addons.mozilla.org/en-GB/firefox/addon/cliget/
After installation, you just click the download link and copy the generated command to the remote machine. I just tried it; it works perfectly. We should put that info on the download page.
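For reference, the command cliget generates looks roughly like this (an illustration only; the cookie, user agent, and URL are made up):

wget --header 'Cookie: session=abc123' --user-agent 'Mozilla/5.0' 'http://example.com/download?file=archive.zip' -O archive.zip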
OK, so I need to download files from my server, since they are files that users upload. The reason I am using FTP with WAMP is just so I can test my script on my computer. Once I get it working, I can change the config files when the app is uploaded to a web server. I am using FileZilla.
MY application installation folder:
W:\wamp\www\idealeffort
The above directory is also set as my FileZilla user's home directory. There is a folder in my application called "_resumes". These are the files that I will need to download. So here is the script that I use to download these files.
$this->load->library('ftp');
$this->ftp->connect();
$this->ftp->download('/_resumes/'.$app['resume_filename'], FCPATH.'_resumes/'.$app['resume_filename']);
$app['resume_filename'] is a database result. I have checked to make sure that the file exists. I get the error "Unable to download the specified file. Please check your path."
Any ideas?
Note: FCPATH displays as "W:\wamp\www\idealeffort\ "
If you check the CodeIgniter docs, you'll see you have to give the path starting from /public_html/... (i.e., the path as the FTP server sees it, not relative to your application folder).
So I think it's a problem with the path.
Hope this helps.
I have created a Jekyll site. Regarding deployment, I don't want to host it on GitHub Pages. To host it on a private domain, I learned from the documentation that I should copy all the files from the _site folder. That all works fine.
Question:
Each time I add a new blog post, I run jekyll build from the command line and then copy the newly created HTML to the hosted domain. Is there any easy way to update it without compiling each time?
The reason I am asking is that it will be updated by a non-technical person.
Thanks for the help!!
If you don't want to use GitHub Pages, AFAIK there's no other way than to compile your site each time you make a change.
But of course you can script/automate as much as possible.
That's what I do with my own blog as well. I'm hosting it on my own webspace instead of GitHub Pages, so I need to do these steps for each update:
Compile on local machine
Upload via FTP
I can do this with a single click (okay, a single double-click).
Note: I'm on Windows, so the following solution is for Windows.
But if you're using Linux/MacOS/whatever, of course you can use the tools given there to build something similar.
I'm using a batch file (the Windows equivalent to a shell script) to compile my site and then call WinSCP, a free command-line FTP client.
WinSCP allows me to store session configurations, so I saved the connection to my server there once.
Because this saved session contains my server credentials, I didn't want to commit WinSCP to my (public) repository, so my script expects WinSCP in the parent folder.
The batch file looks like this:
call jekyll build
echo If the build succeeded, press RETURN to upload!
pause
set uploadpath=%~dp0\_site
%~dp0\..\winscp.com /script=build-upload.txt /xmllog=build-upload.log
pause
The first parameter in the WinSCP call (/script=build-upload.txt) specifies the script file that contains the actual WinSCP commands.
This is in the script file:
option batch abort
option confirm off
open blog
synchronize remote -delete "%uploadpath%"
close
exit
Some explanations:
%~dp0 (in the batch file) is the folder where the current batch file is located
The set uploadpath=... line (in the batch file) saves the complete path to the generated site into an environment variable
The open blog line (in the script file) opens a connection to the pre-saved session configuration (which I named blog)
The synchronize remote ... line (in the script file) uses the synchronize command to sync from the local folder (saved in %uploadpath%, the environment variable from step 2) to the server.
IMO this solution is suitable for non-technical persons as well.
If the technical person in your case doesn't know how to use source control, you could even script committing & pushing, too.
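Such a script could be as simple as this (a sketch; the commit message and branch name are placeholders):

git add -A
git commit -m "New blog post"
git push origin master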
There are a number of options available which are mentioned in the documentation: http://jekyllrb.com/docs/deployment-methods/
If you are using Git, I would recommend the Git Post-Receive Hook approach. It simply builds the site after the new code is received:
#!/bin/bash
GIT_REPO=$HOME/myrepo.git
TMP_GIT_CLONE=$HOME/tmp/myrepo
PUBLIC_WWW=/var/www/myrepo

# Clone the bare repo into a temporary directory, build the site
# straight into the web root, then remove the temporary clone.
git clone $GIT_REPO $TMP_GIT_CLONE
jekyll build -s $TMP_GIT_CLONE -d $PUBLIC_WWW
rm -Rf $TMP_GIT_CLONE
exit
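To wire this up, save the script on the server as $HOME/myrepo.git/hooks/post-receive (the file name post-receive is fixed by Git; the repo path matches the variable above) and make it executable:

chmod +x $HOME/myrepo.git/hooks/post-receive

After that, every git push to the server rebuilds the site automatically.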
Since you mentioned that it will be updated by a non-technical person, you might try something like rack-jekyll to automatically rebuild when new files are FTP'd.
I made a Ruby web application on nitrous.io. The tool is very nice and it helped a lot, but now I want to download the project to my computer, and I didn't find any option to do that...
You can download and upload projects by any of the following options:
Utilize Nitrous Desktop to Sync your files locally.
Upload your project to GitHub, and pull the project from there. Here is a guide on adding the SSH key to GitHub if needed.
Upload the content via SCP. To do this, you will need to add an SSH Key to your account.
Next, run this command on your local machine, replacing {PORT} with the port number assigned to your Nitrous.IO box, and replacing usw1 with the proper region found in the SSH URI on your boxes page.
To Upload:
scp -P{PORT} -r path/to/yourFolder action@usw1-2.nitrousbox.com:~/workspace
To Download:
scp -P{PORT} -r action@usw1-2.nitrousbox.com:~/workspace path/to/yourLocalFolder
I do not know the service, but apparently they offer SSH access. Then you can use scp to copy the files to your machine. Anyway, you should probably ask their support...
...post a summary of their answer here and close the question :)
The easiest way is to store your project in a Git repository and then push this repository to an external host. You will then be able to clone your project from the external repository to any machine you want.
Personally, I use Bitbucket, as it is free and very easy to set up. Have a look at the tutorials there.
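The workflow might look like this (a sketch; the repository URL is a placeholder):

# On the Nitrous box:
git init
git add -A
git commit -m "Initial commit"
git remote add origin git@bitbucket.org:username/myproject.git
git push -u origin master

# On your own computer:
git clone git@bitbucket.org:username/myproject.git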
OK, replying really late, but I hope this will help anyone still looking for this. Here is how I download stuff from Nitrous: no desktop utility needed, and no ssh/scp or adding keys.
What you do is simply make an archive of the folder you want to download:
tar -zcvf myarchive.tar.gz mydir/
Now you've got a *.tar.gz file, right? Change into whichever folder your .gz file is in, and type:
python3.3 -m http.server 8080
You've just started a cute little HTTP server ready to serve you your download. Now, from the Preview menu, click "Port 8080"; this opens a new browser tab showing your .gz file in the file listing (sample URL: http://yourboxes.apse1.nitrousbox.com:8080/). Now you can click your .gz file and it will start downloading. Once you're done with the download, press Ctrl+C in the terminal to terminate the HTTP server.
This is not limited to Nitrous; you can make this work on many online VMs, like Cloud9, etc.
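Once it has downloaded, you can unpack the archive on your own machine with the matching tar command:

tar -zxvf myarchive.tar.gz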