I want to make a zsh alias that downloads packages with aria2 and installs them with pacman.
I don't want to use aria2c by adding XferCommand to pacman.conf, for two reasons:
First, my internet connection is slow and I don't want pacman to hold its lock for hours.
Second, XferCommand doesn't support multi-link (parallel) downloads.
Currently I use this command to download, then upgrade and update with pacman:
sudo pacman -Sp [Package] > ~/Documents/.install && sudo aria2c -c -x16 -m16 -k1M -j10 -i ~/Documents/.install -d /var/cache/pacman/pkg
But I don't know how to turn this into a zsh alias.
Install aria2, then edit /etc/pacman.conf by adding the following line to the [options] section:
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u
Taken from the aria2 page on the Arch Wiki: you don't need the intermediary install file; just use the flag -i -. I also had to add sudo to the aria2c command. It looks like this:
pacman -Sp [package] | sudo aria2c -d /var/cache/pacman/pkg/ -i -
I have an aria2 config, so all other options are there.
From what I've seen, if you use aria2c in the XferCommand, it won't do multiple downloads in parallel: pacman invokes the XferCommand once per file, so aria2c only ever gets one link at a time.
As for using a function, try
mypacman() {
    pacman -Sp "$1" | sudo aria2c -d /var/cache/pacman/pkg/ -i -
}
The $1 means that the first argument given after the function name will be substituted in that place.
Use it like mypacman [package].
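If you want to pass several packages at once, a small variant using "$@" (all arguments) instead of $1 should work; this is just a sketch along the same lines:

mypacman() {
    # "$@" expands to every package name passed to the function
    pacman -Sp "$@" | sudo aria2c -d /var/cache/pacman/pkg/ -i -
}

Use it like mypacman package1 package2.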
Note: It seems the next version of pacman will do parallel downloads out of the box :)
http://allanmcrae.com/
But I won't risk using it right now...
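For reference, pacman 6.0 and later support this natively through a ParallelDownloads option in /etc/pacman.conf; a minimal sketch (5 is an arbitrary choice):

[options]
ParallelDownloads = 5

With that set, no XferCommand is needed at all.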
I am trying to download the files for a project using wget, as the SVN server for that project isn't running anymore and I am only able to access the files through a browser. The base URL for all the files is the same, like:
http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/*
How can I use wget (or any other similar tool) to download all the files in this repository, where the "tzivi" folder is the root folder and there are several files and sub-folders (up to 2 or 3 levels) under it?
You may use this in shell:
wget -r --no-parent http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
The parameters are:
-r // recursive download
--no-parent // don't download anything from the parent directory
If you don't want to download the entire content, you may use:
-l1 just download the directory (tzivi in your case)
-l2 download the directory and all level-1 subfolders ('tzivi/something' but not 'tzivi/something/foo')
And so on. If you give no -l option, wget uses -l 5 automatically.
If you pass -l 0, you'll download the whole Internet, because wget will follow every link it finds.
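For example, to download tzivi and its level-1 subfolders only, combine the flags from above (same URL as in the question):

wget -r --no-parent -l2 http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/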
You can use this in a shell:
wget -r -nH --cut-dirs=7 --reject="index.html*" \
http://abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/
The parameters are:
-r // recursive download
-nH (--no-host-directories) // don't create a top-level directory named after the host
--cut-dirs=X // cut out X leading directories; here X=7 strips projects/tzivi/repository/revisions/2/raw/tzivi, so files are saved without that prefix
--reject="index.html*" // skip the auto-generated directory-listing pages
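To make the effect of -nH and --cut-dirs=7 concrete, here is roughly where a file would end up with and without them (illustrative path only):

# without -nH --cut-dirs=7:
#   ./abc.tamu.edu/projects/tzivi/repository/revisions/2/raw/tzivi/sub/file.c
# with -nH --cut-dirs=7:
#   ./sub/file.c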
This gave me the best answer:
$ wget --no-clobber --convert-links --random-wait -r -p --level 1 -E -e robots=off -U mozilla http://base.site/dir/
Worked like a charm.
wget -r --no-parent URL --user=username --password=password
The last two options are only needed if the download requires a username and password; otherwise there is no need to use them.
You can also see more options in the link https://www.howtogeek.com/281663/how-to-use-wget-the-ultimate-command-line-downloading-tool/
Use the command:
wget -m www.ilanni.com/nexus/content/
You can also use this command:
wget --mirror -pc --convert-links -P ./your-local-dir/ http://www.your-website.com
This gives you an exact mirror of the website you want to download.
Try this working command (as of 2021-08-30); note that -E is just the short form of --adjust-extension, so one of the two suffices:
wget --no-clobber --convert-links --random-wait -r -p --level 1 --adjust-extension -e robots=off -U mozilla "your-web-directory"
I can't get this to work.
Whatever I try, I just get some HTML file.
Just look at these commands for simply downloading a directory!
There must be a better way.
wget seems the wrong tool for this task, unless it is a complete failure.
This works:
wget -m -np -c --no-check-certificate -R "index.html*" "https://the-eye.eu/public/AudioBooks/Edgar%20Allan%20Poe%20-%2"
This will help:
wget -m -np -c --level 0 --no-check-certificate -R "index.html*" http://www.your-websitepage.com/dir
I want to download and install a package in one line of bash, like:
curl XXX.deb | dpkg -i
but dpkg reports a missing argument.
How do I get this to work?
I suggest adding -o to curl so the binary file is saved to disk instead of being dumped to stdout, like:
curl http://security.ubuntu.com/ubuntu/pool/universe/e/eigen3/libeigen3-dev_3.3.2-1_all.deb -o libeigen3-dev_3.3.2-1_all.deb && dpkg -i libeigen3-dev_3.3.2-1_all.deb
You can't pipe information into dpkg like that; dpkg -i expects a filename argument. One possibility is combining the commands with &&, meaning the first command must succeed for the next to be executed:
curl -O XXX.deb && dpkg -i XXX.deb
(-O makes curl save the file under its remote name.) This assumes you know the filename beforehand and can pass it to both commands.
You can use wget in a similar way.
wget https://example.com/path/someapp.deb -O app.deb && sudo dpkg -i app.deb && rm -f app.deb
Plus, wget shows a progress bar, and -O forces the local filename (useful when you can't predict it from the URL).
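If you do this often, the pattern is easy to wrap in a small shell function. This is only a sketch (the function name, mktemp usage, and curl flags are my own choices, not from the answers above):

deb_install() {
    # download the .deb to a temporary file, install it, then clean up
    local tmp status
    tmp=$(mktemp) || return 1
    curl -fsSL "$1" -o "$tmp" && sudo dpkg -i "$tmp"
    status=$?
    rm -f "$tmp"
    return $status
}

Usage: deb_install https://example.com/path/someapp.deb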
I have several packages that provide the same functionality, and on the device 'there can be only one' at a time.
I have read about Provides, Conflicts, and Replaces in the Debian policy, but I have not found a way (using dpkg commands/switches) to automatically replace an already-installed virtual package without removing it manually first.
My package's control file specifies the following for all the packages in question:
Provides: myown-virtual-package
Conflicts: myown-virtual-package
Replaces: myown-virtual-package
Here is what I do. It seems to work, but I was wondering if there is a standard way using only dpkg:
# remove any conflicting virtual packages
for i in /tmp/upgrade_software/*.deb
do
    # find out the package name and what it provides
    provides_line=$(dpkg --info "$i" | grep "^ Provides: ")
    package_line=$(dpkg --info "$i" | grep "^ Package: ")
    virt_package=${provides_line##*: }
    this_package=${package_line##*: }

    # skip if it is not a virtual package
    [ -z "${virt_package}" ] && continue

    # remove any installed package that provides the same thing
    otherpackage_line=$(dpkg-query -W -f='${Provides}: ${Package}\n' \
        | grep "${virt_package}:" | grep -v "${this_package}")
    if [ -n "${otherpackage_line}" ]; then
        otherpackage=${otherpackage_line##*: }
        echo " ------ removing ${otherpackage} because of conflict -------"
        dpkg --purge "${otherpackage}"
        echo " -------------"
    fi
    echo "'${virt_package}' checked for conflicts"
done
Thanks in advance, jj
dpkg will not take this kind of automatic conflict-resolution measure. For these tasks, there are apt-get and aptitude. It may just work with:
dpkg -i package.deb ; apt-get -f install
The latter command is supposed to resolve the conflicts. If it opts to remove your own package during resolution, you may even want to try:
dpkg -i package.deb ; apt-get -f install <package>
That is, tell apt to install your package by name (without the .deb extension), as it should now be visible to apt.
This can be done with dpkg alone, by giving it enough information so that it can perform the operation. The way to prepare dpkg for this is via selections.
In this case you'd record that removing the old provider is OK; then, when you install the new one, dpkg will be able to remove the other package during the upgrade.
Try something like:
echo old-provider deinstall | dpkg --set-selections
dpkg -iB new-provider.deb
That should in principle do it, with no need for apt-get to fix things up (-f) or for prior purges (possibly combined with --force options if other packages depend on the virtuals).
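To double-check what dpkg has recorded before installing, you can dump the selections back out (old-provider is a placeholder name):

dpkg --get-selections | grep old-provider
# expected output once the selection is set:
# old-provider        deinstall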
I'm writing some small bash scripts for copying certain files/directories on GNU/Linux and Solaris. Everything is OK on Linux, but the cp command doesn't have the same options on Solaris.
The copy command is something like this:
cp -ruv $source $dest
Unfortunately I don't know how to achieve verbose and update-only copies on Solaris. Any ideas?
Thanks
Unfortunately, cp under Solaris doesn't have those options; the cp man page on Solaris should confirm that.
Are you comfortable making your script depend on rsync?
Or, if possible, you can install the coreutils package and use GNU's cp.
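If making the script depend on rsync is acceptable, a rough stand-in for cp -ruv would be the following sketch:

# -r recursive, -u skip files that are newer at the destination, -v verbose
rsync -ruv "$source" "$dest"

Note that rsync's trailing-slash semantics differ from cp's: add a trailing slash to $source to copy its contents rather than the directory itself.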
I ran into a similar issue myself and found that gcp takes care of it too. I've made installing coreutils part of my standard system setup.
I run these on a new Solaris install:
pkgadd -d http://get.opencsw.org/now
pkgutil -U
pkgutil -i -y coreutils
pkgutil -a vim
pkgutil -i -y vim
pkgutil -i -y findutils
Remember to add the path - and the documentation path - to your profile, and possibly to the system profile at /etc/profile:
# Set the program path
PATH=$PATH:/usr/sfw/bin:/usr/sfw/sbin:/usr/openwin/bin:/opt/csw/bin:/usr/ccs/bin:/usr/local/bin:/usr/local
export PATH
# Set the documentation path
MANPATH="$MANPATH:/usr/share/man:/opt/sfw/man:/opt/csw/man"
export MANPATH
It sounds like you might be new to Solaris, as I am relatively new myself. I also do the following, which shouldn't affect anything.
I set vim as the default editor instead of vi. It's compatible but has more features, including ANSI color, and some terminal emulators will pass your mouse clicks and scrolling through for even more flexibility:
# Set the default editor
EDITOR=vim
export EDITOR
Then if you are still using the default prompt that doesn't say anything, you might want to add some information - this version requires a Bash shell:
# Set the command prompt, which includes the username, host name, and the current path.
PS1='\u#\h:\w>'
export PS1
To recreate verbose mode, you can tee the output to the controlling terminal (/dev/tty) while the stdout of tee itself is passed to cp via xargs.
find /some/source/directory -type f | \
tee /dev/tty | xargs -I {} cp {} /copy/to/this-directory/
Replace the find with whatever you like, so long as it passes the paths to the files to be copied through the pipe to tee.
Tested on a standard Solaris 10 system without extra GNU utils.
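The question also asked about cp -u (update). If you need update-only behavior without GNU tools as well, something along these lines might work; this is only an untested sketch with placeholder directories, using find -newer to compare timestamps:

src=/some/source/directory
dst=/copy/to/this-directory
find "$src" -type f | while read -r f; do
    dest="$dst/${f#"$src"/}"               # rebuild the path under the destination
    # copy only if the destination is missing or older than the source
    if [ ! -e "$dest" ] || [ -n "$(find "$f" -newer "$dest")" ]; then
        echo "$f"                          # crude verbose output
        mkdir -p "$(dirname "$dest")"
        cp "$f" "$dest"
    fi
done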
Goal: when the user types 'make packages', automatically search for the package libx11-dev (required for my program to compile) and, if not found, install it. Here's a stripped-down version of my makefile:
PACKAGES = $(shell if [ -z $(dpkg -l | grep libx11-dev) ]; then sudo apt-get install libx11-dev; fi)
[other definitions and targets]
packages: $(PACKAGES)
When I type 'make packages', I'm prompted for the super-user password. If entered correctly, it then hangs indefinitely.
Is what I'm trying to do even possible from within the makefile? If so, how?
Thanks so much.
The problem is that the shell function acts like backticks in the shell: it captures the command's stdout and returns it as the value of the function. So apt-get is not hanging; it's waiting for you to enter a response to some question. But you cannot see the question, because make has captured the output.
The way you're doing this is not going to work. Why are you using shell instead of just writing it as a rule?
packages:
	[ -z `dpkg -l | grep libx11-dev` ] && sudo apt-get install libx11-dev
.PHONY: packages
I figured out a better way, which avoids the problem of having unexpected arguments to the if statement:
if ! dpkg -l | grep libx11-dev -c >/dev/null; then sudo apt-get install libx11-dev; fi
The -c flag makes grep print the number of lines of dpkg -l output that contain the string libx11-dev, which will be either 0 (not installed) or 1 (installed). More importantly, grep exits with success only when it finds at least one match, which is what if ! actually tests, allowing
dpkg -l | grep libx11-dev -c
to be treated like an ordinary boolean.
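Putting the pieces together, a minimal version of the whole target could look like this; the -y flag is my addition, telling apt-get to assume "yes" and thereby avoiding the interactive prompt that caused the original hang:

.PHONY: packages
packages:
	if ! dpkg -l | grep -c libx11-dev >/dev/null; then \
		sudo apt-get install -y libx11-dev; \
	fi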