How to define the output folder for curl while running a script - bash

I have a command that executes when the script finishes and downloads files from a list. I use Termux on Android, and it says you can't use cd while a script is running.
xargs -n 1 curl -O -C - < url
But it downloads all the files to the folder where I ran the script. How can I change the output directory?
PS: Only curl, please. Answers using aria2c or wget will be ignored.

Okay. This is the script I use now:
while read -r url
do
    name=$(basename "$url")                            # file name taken from the URL
    curl --create-dirs -o "$target_dir/$name" "$url"   # $target_dir = your output folder
done < url
I use "basename" of url for name.
Please answer if you have better code.
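One possible improvement, sketched below: curl 7.73.0 and later has an --output-dir option that keeps the remote file names while placing everything in a chosen folder. This sketch assumes your URL list is in a file named url (one URL per line, as in your xargs command) and that your Termux curl is new enough:

#!/usr/bin/env bash
# Sketch: download every URL listed in the file "url" into $target_dir,
# keeping the remote file names and resuming partial downloads.
# Requires curl >= 7.73.0 for --output-dir.
target_dir="$HOME/downloads"   # assumption: change to your folder

while read -r u; do
    curl --create-dirs --output-dir "$target_dir" -O -C - "$u"
done < url

On an older curl, the -o "$target_dir/$(basename "$u")" form from your loop remains the portable fallback.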

Related

How to iterate over all files inside a folder and run a curl command to install AEM packages

I am trying to create a shell script that iterates over all the zip files and installs them in the AEM Package Manager using a curl command.
The single curl command below works; it properly installs the package on the respective AEM instance.
curl -u admin:admin -F file=@"content-ope.zip" -F name="content-ope.zip" -F force=true -F install=true http://localhost:4502/crx/packmgr/service.jsp
But we have to install many zip files, so we plan to keep them all in one folder, iterate over the zip files, and install each with curl. I tried with while and for loops but was unable to read all the .zip files from the shell script.
Does anyone have any ideas on this?
I wrote that exact thing, see here:
https://gist.github.com/ahmed-musallam/07fbf430168d4ac57bd8c89d8be9bca5
#!/bin/bash
# this script will install ALL zip packages in the current directory to the AEM instance at port 4502
for f in *.zip
do
    echo "installing: $f"
    curl -u admin:admin -F file=@"$f" -F name="$f" -F force=true -F install=true http://localhost:4502/crx/packmgr/service.jsp
    echo "done."
done
Instead of using curl, you can just copy the files over to the install folder of the AEM instance; they will get installed automatically. https://helpx.adobe.com/in/experience-manager/6-3/sites/administering/using/package-manager.html#FileSystemBasedUploadandInstallation
find . -maxdepth 1 -name "*.zip" -exec curl -u admin:admin -F file=@"{}" -F name="{}" -F force=true -F install=true http://localhost:4502/crx/packmgr/service.jsp ";"
Note: this will substitute ./foo.zip instead of foo.zip. If you need to strip the ./, you should probably write a shell script wrapping your curl command that accepts the zip file name as an argument and strips the ./ before passing it to curl, as sketched below.
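A minimal sketch of such a wrapper, assuming the same credentials and endpoint as above (install-pkg.sh is a hypothetical name):

#!/usr/bin/env bash
# install-pkg.sh - install one zip package into AEM, stripping any
# leading "./" from the file name that find passes in.
f="${1#./}"   # ${1#./} removes a leading "./" if present
curl -u admin:admin \
     -F file=@"$f" -F name="$f" -F force=true -F install=true \
     http://localhost:4502/crx/packmgr/service.jsp

Then, after chmod +x install-pkg.sh, invoke it from find:
find . -maxdepth 1 -name "*.zip" -exec ./install-pkg.sh {} \;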

wget does not download subdirectories, only the files in the specified directory [duplicate]

I am trying to download the files for a project using wget, as the SVN server for that project isn't running anymore and I can only access the files through a browser. The base URL for all the files is the same, like
http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/*
How can I use wget (or any other similar tool) to download all the files in this repository, where the "tzivi" folder is the root folder and there are several files and sub-folders (up to 2 or 3 levels) under it?
You may use this in shell:
wget -r --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
The parameters are:
-r // recursive download
and
--no-parent // don't download anything from the parent directory
If you don't want to download the entire content, you may use:
-l1 just download the directory (tzivi in your case)
-l2 download the directory and all level-1 subfolders ('tzivi/something' but not 'tzivi/something/foo')
And so on. If you give no -l option, wget will use -l 5 automatically.
If you use -l 0, you'll download the whole Internet, because wget will follow every link it finds.
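For example, to fetch tzivi and only its first level of subfolders, the depth-limited form would be:

wget -r -l 2 --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/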
You can use this in a shell:
wget -r -nH --cut-dirs=7 --reject="index.html*" \
http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
The parameters are:
-r recursively download
-nH (--no-host-directories) cuts out hostname
--cut-dirs=X (cuts out X directories)
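To illustrate the difference (this is my reading of the URL above, not part of the original answer):

wget -r --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
# saves files under abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/

wget -r -nH --cut-dirs=7 --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
# hostname dropped and the seven leading directories cut: files land in the current directory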
This link just gave me the best answer:
$ wget --no-clobber --convert-links --random-wait -r -p --level 1 -E -e robots=off -U mozilla http://base.site/dir/
Worked like a charm.
wget -r --no-parent URL --user=username --password=password
The last two options are only needed if the site requires a username and password for downloading; otherwise there is no need to use them.
You can also see more options in the link https://www.howtogeek.com/281663/how-to-use-wget-the-ultimate-command-line-downloading-tool/
Use the command
wget -m www.ilanni.com/nexus/content/
You can also use this command:
wget --mirror -pc --convert-links -P ./your-local-dir/ http://www.your-website.com
so that you get the exact mirror of the website you want to download
Try this working code (30-08-2021):
wget --no-clobber --convert-links --random-wait -r -p --level 1 -E -e robots=off -U mozilla "your web directory, in quotes"
I can't get this to work.
Whatever I try, I just get some HTML file.
Just looking at these commands for simply downloading a directory?
There must be a better way; wget seems the wrong tool for this task, unless it's simply failing here.
This works:
wget -m -np -c --no-check-certificate -R "index.html*" "https://the-eye.eu/public/AudioBooks/Edgar%20Allan%20Poe%20-%2"
This will help
wget -m -np -c --level 0 --no-check-certificate -R "index.html*" http://www.your-websitepage.com/dir

Using curl to download a file is having an issue

I am trying to download a file from a remote server using curl:
curl -u username:password -O https://remoteserver/filename.txt
In my case a file filename.txt is getting created, but the content of the file says "virtual user logged in". It is not downloading the actual file.
I am not sure why this is happening. Any help on why the download is not working would be appreciated.
Try this in terminal:
curl -u username:password -o filedownload.txt https://remoteserver/filename.txt
This command, with -o, will save the contents of filename.txt to filedownload.txt in the current working directory.
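As an extra debugging step (my suggestion, not part of the original answer): the "virtual user logged in" content suggests the server is answering with a login or landing page instead of the file, so inspect the exchange and follow any redirect:

# Show request/response headers to see what the server actually returns
curl -v -u username:password https://remoteserver/filename.txt -o /dev/null
# If a 3xx redirect is involved, follow it to the real file location
curl -L -u username:password -O https://remoteserver/filename.txt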

How to download a file using curl

I'm on Mac OS X and can't figure out how to download a file from a URL via the command line. It's from a static page, so I thought copying the download link and then using curl would do the trick, but it's not working.
I referenced this StackOverflow question but that didn't work. I also referenced this article which also didn't work.
What I've tried:
curl -o https://github.com/jdfwarrior/Workflows.git
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
wget -r -np -l 1 -A zip https://github.com/jdfwarrior/Workflows.git
zsh: command not found: wget
How can a file be downloaded through the command line?
The -o (--output) option means curl writes the output to the file you specify instead of stdout. Your mistake was putting the URL right after -o, so curl treated the URL as the file to write to and hence complained that no URL was specified. You need a file name after -o, then the URL:
curl -o ./filename https://github.com/jdfwarrior/Workflows.git
And wget is not available by default on OS X.
curl -OL https://github.com/jdfwarrior/Workflows.git
-O: This option is used to write the output to a file named like the remote file we get. In this curl command, that file would be Workflows.git.
-L: This option is used if the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code); it makes curl redo the request at the new place.
Ref: curl man page
The easiest solution for your question is to keep the original filename. In that case, you just need to use a capital O ("-O") as the option (the letter O, not the digit zero!). So it looks like:
curl -O https://github.com/jdfwarrior/Workflows.git
There are several options to make curl output to a file
# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt -L
# With a URL glob, #1 gets substituted with the current match, so each file gets its own name
curl "http://www.example.com/data[1-3].txt" -o "file_#1.txt" -L
# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O -L
# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J -L
# -O Write output to a local file named like the remote file we get
# -o <file> Write output to <file> instead of stdout (variable replacement performed on <file>)
# -J Use the Content-Disposition filename instead of extracting filename from URL
# -L Follow redirects

Save file to specific folder with curl command

In a shell script, I want to download a file from some URL and save it to a specific folder. What is the specific CLI flag I should use to download files to a specific folder with the curl command, or how else do I get that result?
I don't think you can give a path to curl that way, but you can cd to the location, download, and cd back.
cd target/path && { curl -O URL ; cd -; }
Or using a subshell:
(cd target/path && curl -O URL)
Both ways will only download if the path exists. -O keeps the remote file name. After the download, the first form returns to the original location.
If you need to set the filename explicitly, you can use the lowercase -o option:
curl -o target/path/filename URL
The --output-dir option is available since curl 7.73.0:
curl --create-dirs -O --output-dir /tmp/recipes https://example.com/pancakes.jpg
curl doesn't have an option for that (without also specifying the filename), but wget does. The directory can be relative or absolute. Also, the directory will automatically be created if it doesn't exist.
wget -P relative/dir "$url"
wget -P /absolute/dir "$url"
It works for me:
curl http://centos.mirror.constant.com/8-stream/isos/aarch64/CentOS-Stream-8-aarch64-20210916-boot.iso --output ~/Downloads/centos.iso
where:
--output lets me set the path, name, and extension of the file I want to save.
Use redirection:
This works to drop a curl-downloaded file into a specified path:
curl https://download.test.com/test.zip > /tmp/test.zip
Obviously "test.zip" is whatever arbitrary name you want to label the redirected file- could be the same name or a different name.
I actually prefer #oderibas solution, but this will get you around the issue until your distro supports curl version 7.73.0 or later-
For PowerShell on Windows, you can add a relative path + filename to the --output flag:
curl -L http://github.com/GorvGoyl/Notion-Boost-browser-extension/archive/master.zip --output build_firefox/master-repo.zip
Here build_firefox is a relative folder.
Use wget
wget -P /your/absolute/path "https://jdbc.postgresql.org/download/postgresql-42.3.3.jar"
For Windows, in PowerShell, curl is an alias of the cmdlet Invoke-WebRequest and this syntax works:
curl "url" -OutFile file_name.ext
For instance:
curl "https://airflow.apache.org/docs/apache-airflow/2.2.5/docker-compose.yaml" -OutFile docker-compose.yaml
Source: https://krypted.com/windows-server/its-not-wget-or-curl-its-iwr-in-windows/
Here is an example using Batch to create a safe filename from a URL and save the file to a folder named tmp/. I do think it's strange that this isn't an option in the Windows or Linux curl versions.
@echo off
set "url=%~1"
for %%f in ("%url%") do (
    rem %%~nxf expands to the name and extension parsed from the URL
    curl --create-dirs -L -v -o "tmp/%%~nxf.txt" "%url%"
)
The above Batch file takes a single input, a URL, and creates a filename from it. If the URL has no filename component, the file will be saved as tmp/.txt. So it's not all done for you, but it gets the job done on Windows.
