How to download and install in one line, with curl and dpkg - bash

I want to download and install a package in one line of bash, something like:
curl XXX.deb | dpkg -i
but dpkg reports a missing argument.
How can I get this to work?

I suggest adding -o to curl so the binary file is saved instead of being dumped to stdout, like:
curl http://security.ubuntu.com/ubuntu/pool/universe/e/eigen3/libeigen3-dev_3.3.2-1_all.deb -o libeigen3-dev_3.3.2-1_all.deb && dpkg -i libeigen3-dev_3.3.2-1_all.deb

You can't pipe information into dpkg like that. One possibility is combining the two commands with &&, meaning the first command must succeed for the next one to be executed:
curl XXX.deb && dpkg -i XXX.deb
This assumes you know the filename beforehand and can pass it to both commands.
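If you'd rather not type the filename twice, here is a sketch of the same idea using curl's -O (save under the remote name) and basename, with the eigen3 URL from the other answer as the example:
url=http://security.ubuntu.com/ubuntu/pool/universe/e/eigen3/libeigen3-dev_3.3.2-1_all.deb
curl -fsSLO "$url" && sudo dpkg -i "$(basename "$url")"
-f makes curl fail on HTTP errors, so dpkg never runs against an error page.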

You can use wget in a similar way.
wget https://example.com/path/someapp.deb -O app.deb && sudo dpkg -i app.deb && rm -f app.deb
Plus, wget shows a progress bar, and the local filename is forced (useful when you can't predict it from the URL).
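For reference, a curl version of the same one-liner (a sketch, with the same placeholder URL) would be:
curl -fL https://example.com/path/someapp.deb -o app.deb && sudo dpkg -i app.deb && rm -f app.deb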


wget does not download subdirectories, only all files in the specified directory [duplicate]

I am trying to download the files for a project using wget, as the SVN server for that project isn't running anymore and I am only able to access the files through a browser. The base URL for all the files is the same, like:
http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/*
How can I use wget (or any other similar tool) to download all the files in this repository, where the "tzivi" folder is the root folder and there are several files and sub-folders (up to 2 or 3 levels) under it?
You may use this in a shell:
wget -r --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
The parameters are:
-r // recursive download
and
--no-parent // don't download anything from the parent directory
If you don't want to download the entire content, you may use:
-l1 just download the directory (tzivi in your case)
-l2 download the directory and all level-1 subfolders ('tzivi/something' but not 'tzivi/something/foo')
And so on. If you give no -l option, wget will use -l 5 automatically.
If you pass -l 0 you'll download the whole Internet, because wget will follow every link it finds.
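For example, a sketch (not part of the original answer) that fetches tzivi and only its first level of subfolders, using the URL from the question:
wget -r -l2 --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/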
You can use this in a shell:
wget -r -nH --cut-dirs=7 --reject="index.html*" \
http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
The parameters are:
-r recursive download
-nH (--no-host-directories) cuts out the hostname directory
--cut-dirs=X cuts out X directories
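To illustrate what those two options do to the saved paths (the sub/file.c name here is hypothetical):
without them:          abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/sub/file.c
with -nH --cut-dirs=7: sub/file.c
-nH drops the hostname directory and --cut-dirs=7 strips the seven leading path components.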
This link just gave me the best answer:
$ wget --no-clobber --convert-links --random-wait -r -p --level 1 -E -e robots=off -U mozilla http://base.site/dir/
Worked like a charm.
wget -r --no-parent URL --user=username --password=password
The last two options are only needed if the download requires a username and password; otherwise there's no need to use them.
You can also see more options at https://www.howtogeek.com/281663/how-to-use-wget-the-ultimate-command-line-downloading-tool/
Use the command:
wget -m www.ilanni.com/nexus/content/
You can also use this command:
wget --mirror -pc --convert-links -P ./your-local-dir/ http://www.your-website.com
This gets you an exact mirror of the website you want to download.
Try this working command (30-08-2021), with your web directory URL in quotation marks:
wget --no-clobber --convert-links --random-wait -r -p --level 1 -E -e robots=off --adjust-extension -U mozilla "your-web-directory"
I can't get this to work. Whatever I try, I just get some HTML file.
Are these commands really just for downloading a directory? There must be a better way; wget seems like the wrong tool for this task, unless it's a complete failure.
This works:
wget -m -np -c --no-check-certificate -R "index.html*" "https://the-eye.eu/public/AudioBooks/Edgar%20Allan%20Poe%20-%2"
This will help:
wget -m -np -c --level 0 --no-check-certificate -R "index.html*" http://www.your-websitepage.com/dir

Bash script gets printed instead of being executed

This question is similar to this one: https://serverfault.com/questions/342697/prevent-sudo-apt-get-etc-from-swallowing-pasted-input-to-stdin, but the answer there is not satisfying (appending && to each line of a bash script is not elegant) and does not explain why some users can paste and execute multiple subsequent apt-get install -y commands while others can't, because the input is swallowed by the next command.
I have a script my_script.sh:
sudo apt-get install -y graphicsmagick
sudo apt-get install -y libgraphicsmagick++1-dev
...
It can have just two lines, or more, of sudo apt-get install commands. The specific libraries (graphicsmagick, etc.) don't matter; it can be any library.
When I copy this script and paste its contents into bash, or just execute it like this:
cat my_script.sh | sudo -i bash
then for some reason only the first line (graphicsmagick) gets executed and the rest is just printed to the console. This happens only with sudo apt-get install -y; other scripts, which don't contain this command, behave normally.
If I change bash to sh (which is dash), I get the expected behaviour:
cat my_script.sh | sudo -i sh
Can you explain why this happens?
When answering, can you please avoid these questions/comments:
Why are you doing it this way?
Piping to your bash is not safe
Some other aspects are not safe or hackish
I just want to know why bash doesn't work as I would expect and sh does.
PS. I'm using Ubuntu 14.04; sh is dash, as you can see here:
vagrant@vagrant-ubuntu-trusty-64:/tmp$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Feb 19 2014 /bin/sh -> dash
Bash and dash simply behave differently when given the -i flag.
Bash always goes into interactive mode, even when stdin is not a terminal.
Dash, on the other hand, will not go into interactive mode, even with the -i flag.
You probably need the -s option:
If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be set when invoking an interactive shell.
— Bash man page
Example:
curl -s http://foo.com/bar.sh | sudo -i bash -s
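Applied to the script from the question, the same approach (just the -s fix from above, nothing else changed) would be:
cat my_script.sh | sudo -i bash -s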

How can I use aria2 with pacman?

I want to make an alias for zsh to download packages with aria2 and install them with pacman.
I don't want to use aria2c by adding XferCommand to pacman.conf, for two reasons:
First, my internet connection is slow and I don't want pacman to stay locked for hours.
Second, XferCommand doesn't support multi-link downloads.
First off, I use this command to download, or to upgrade and update, with pacman:
sudo pacman -Sp [Package] > ~/Documents/.install && sudo aria2c -c -x16 -m16 -k1M -j10 -i ~/Documents/.install -d /var/cache/pacman/pkg
But I don't know how to make an alias for it in zsh.
Install aria2, then edit /etc/pacman.conf by adding the following line to the [options] section:
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u
Taken from the aria2 Arch Wiki: you don't need the intermediary install file, just use the flag -i -. I also had to add sudo to the aria2c command. It looks like this:
pacman -Sp [package] | sudo aria2c -d /var/cache/pacman/pkg/ -i -
I have an aria2 config file, so all my other options are set there.
From what I've seen, if you use aria2 in the XferCommand it won't do multiple downloads; it just uses aria2 one link at a time.
As for using a function, try
mypacman() {
pacman -Sp $1 | sudo aria2c -d /var/cache/pacman/pkg/ -i -
}
The $1 is replaced by the first argument given after the function name.
Use it like mypacman [package].
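If you want to pass more than one package at a time, a small variation (an untested sketch) forwards all arguments with "$@" instead of $1:
mypacman() {
    pacman -Sp "$@" | sudo aria2c -d /var/cache/pacman/pkg/ -i -
}
Then mypacman package1 package2 downloads both into the pacman cache (the package names here are just placeholders).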
Note: It seems the next version of pacman will do parallel downloads out of the box :)
http://allanmcrae.com/
But I won't risk using it right now...

How to download multiple URLs using wget using a single command?

I am using the following command to download a single webpage with all its images and JS using wget on Windows 7:
wget -E -H -k -K -p -e robots=off -P /Downloads/ http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html
It downloads the HTML as required, but when I tried to pass it a text file with a list of 3 URLs to download, it didn't give any output. Below is the command I am using:
wget -E -H -k -K -p -e robots=off -P /Downloads/ -i ./list.txt -B 'http://'
I tried this also:
wget -E -H -k -K -p -e robots=off -P /Downloads/ -i ./list.txt
This text file had the URLs with http:// prepended in it.
list.txt contains the list of 3 URLs which I need to download using a single command. Please help me resolve this issue.
From man wget:
2 Invoking
By default, Wget is very simple to invoke. The basic syntax is:
wget [option]... [URL]...
So, just use multiple URLs:
wget URL1 URL2
Or using the links from comments:
$ cat list.txt
http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html
http://www.verizonwireless.com/smartphones-2.shtml
http://www.att.com/shop/wireless/devices/smartphones.html
and your command line:
wget -E -H -k -K -p -e robots=off -P /Downloads/ -i ./list.txt
works as expected.
First create a text file with the URLs that you need to download,
e.g. download.txt.
download.txt will look like this:
http://www.google.com
http://www.yahoo.com
Then use the command wget -i download.txt to download the files. You can add many URLs to the text file.
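If you'd rather not edit the file by hand, a sketch that builds it and downloads in one go (same example URLs):
printf '%s\n' http://www.google.com http://www.yahoo.com > download.txt
wget -i download.txt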
If you have a list of URLs on separate lines like this:
http://example.com/a
http://example.com/b
http://example.com/c
but you don't want to create a file and point wget to it, you can do this:
wget -i - <<< 'http://example.com/a
http://example.com/b
http://example.com/c'
A pedantic version:
for x in {'url1','url2'}; do wget "$x"; done
The advantage is that you can treat it as a single wget URL command.

wget and run/remove bash script in one line

wget http://sitehere.com/install.sh -v -O install.sh; rm -rf install.sh
Does that run the script right after downloading and then remove it?
I like to pipe it into sh. There's no need to create and remove a file locally.
wget http://sitehere.com/install.sh -O - | sh
I think you might need to actually execute it:
wget http://sitehere.com/install.sh -v -O install.sh; ./install.sh; rm -rf install.sh
Also, if you want a little more robustness, you can use && to separate commands, which will only attempt to execute the next command if the previous one succeeds:
wget http://sitehere.com/install.sh -v -O install.sh && ./install.sh; rm -rf install.sh
I think this is the best way to do it:
wget -Nnv http://sitehere.com/install.sh && bash install.sh; rm -f install.sh
Breakdown:
-N or --timestamping will only download the file if it is newer on the server
-nv or --no-verbose minimizes the output; use -q / --quiet for no wget output at all
&& will only execute the second command if the first succeeds
use bash (or sh) to execute the script, assuming it is a shell script; no need to chmod +x
rm -f (or --force) removes the file regardless of what happens (even if it's not there)
It's not necessary to use the -O option with wget in this scenario; it is redundant unless you would like to use a different temporary file name than install.sh.
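To illustrate that last point with a sketch (not from the original answer), mktemp can supply such a temporary name, so nothing is left behind even if the script fails:
tmp=$(mktemp) && wget -nv -O "$tmp" http://sitehere.com/install.sh && bash "$tmp"; rm -f "$tmp"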
You are downloading in the first statement and removing in the last statement.
You need to add a line that executes the file:
./install.sh
