I am using travis-ci to test my github repository and I found out that 3 to 10 minutes are spent on downloading deb packages. They are only 127 MB big.
I checked travis-ci/Caching Dependencies and Directories
but there is no support for apt-get.
How to do this?
It is possible to achieve this by caching the packages in a folder that is accessible to non-root users, moving every .deb in it into /var/cache/apt/archives/ before installation, and copying the downloaded packages back into that folder afterwards.
Note:
I recommend placing YOUR_DIR_FOR_DEB_PACKAGES somewhere under ~.
# .travis.yml
sudo: required
cache:
  directories:
    - $YOUR_DIR_FOR_DEB_PACKAGES # This must be accessible to non-root users
addons:
  apt:
    sources:
      # Whatever sources you need
# Download the dependencies if they are not cached.
# All the "echo" and "ls" commands in "before_script" can be removed; they are only used for debugging.
before_script:
  - echo "Print content of $YOUR_DIR_FOR_DEB_PACKAGES"
  - ls $YOUR_DIR_FOR_DEB_PACKAGES
  - echo "Check whether apt-get has no cache"
  - ls /var/cache/apt/archives
  - echo "Start loading cache"
  - |
    exist() {
      [ -e "$1" ]
    }
  - |
    if exist ~/$YOUR_DIR_FOR_DEB_PACKAGES/*.deb
    then
      sudo mv ~/$YOUR_DIR_FOR_DEB_PACKAGES/*.deb /var/cache/apt/archives/
      ls /var/cache/apt/archives
    fi
  - echo "Start to install software"
  - sudo apt-get update
  - sudo apt-get install -y --no-install-recommends --no-install-suggests $THE_PACKAGES_REQUIRED
  - echo "Start updating the cache"
  - cp /var/cache/apt/archives/*.deb ~/$YOUR_DIR_FOR_DEB_PACKAGES/
script:
  - # Do whatever you want here.
It seems to me that this is not a recommended practice, though.
According to the official documentation:
Large files that are quick to install but slow to download do not
benefit from caching, as they take as long to download from the cache
as from the original source:
Debian packages
After upgrading from Ubuntu 20.04 LTS to 22.04.1 LTS, I got a very persistent error:
(Reading database ... 350976 files and directories currently installed.)
Preparing to unpack .../firefox_1%3a1snap1-0ubuntu2_amd64.deb ...
=> Installing the firefox snap
==> Checking connectivity with the snap store
==> Installing the firefox snap
error: cannot perform the following tasks:
- Run hook connect-plug-host-hunspell of snap "firefox" (run hook "connect-plug-host-hunspell": cannot perform operation: mount --rbind /var/log /tmp/snap.rootfs_hE2Zj1//var/log: Permission denied)
dpkg: error processing archive /var/cache/apt/archives/firefox_1%3a1snap1-0ubuntu2_amd64.deb (--unpack):
new firefox package pre-installation script subprocess returned error exit status 1
Please restart all running instances of firefox, or you will experience problems.
Errors were encountered while processing:
/var/cache/apt/archives/firefox_1%3a1snap1-0ubuntu2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
This part:
- Run hook connect-plug-host-hunspell of snap "firefox" (run hook "connect-plug-host-hunspell": cannot perform operation: mount --rbind /var/log /tmp/snap.rootfs_hE2Zj1//var/log: Permission denied)
was very persistent and was blocking any installation that involved apt.
Thus, neither apt install nor apt upgrade was working.
After a long search and a lot of trial and error, I did:
sudo apt --fix-broken install
sudo rm /var/lib/dpkg/lock
sudo rm /var/lib/dpkg/lock-frontend
sudo rm /var/lib/apt/lists/lock
sudo rm /var/cache/apt/archives/lock
sudo dpkg --configure -a
And then, beware(!): the following command removes Firefox from your installed package list, so afterwards you cannot use Firefox on that computer until you reinstall it. I did run it, but I had a second computer where I could search the web while the first machine had no Firefox.
I couldn't install Chromium or any other browser either, because apt was not working! So run this command only if you have a second computer, or at least your mobile phone, to look up instructions:
sudo dpkg --force-depends -P firefox
I found a hint in
https://forums.mozillazine.org/viewtopic.php?f=38&t=3097766
My solution was:
# Add Mozilla Team PPA
sudo add-apt-repository ppa:mozillateam/ppa
# Set PPA priority
sudo gedit /etc/apt/preferences.d/mozillateamppa
# The command creates and opens an empty config file in the Gedit text editor.
# When it opens, add the lines below and save it:
Package: firefox*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 501
Save and close that file.
After that, I could finally run:
sudo apt --fix-broken install
# and then:
sudo apt update && sudo apt upgrade
After that, all apt and snap commands were working flawlessly again.
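If you want to double-check that the pin is actually being honoured, apt-cache policy (standard apt tooling, nothing extra to install) shows where the candidate version comes from:
apt-cache policy firefox
# The candidate should now be the PPA build (priority 501) rather than the Ubuntu package that pulls in the snap.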
Note:
Later, I encountered the same kind of problem when running:
sudo apt install chromium-browser
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
chromium-browser
0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
Need to get 0 B/48,4 kB of archives.
After this operation, 164 kB of additional disk space will be used.
Preconfiguring packages ...
(Reading database ... 313313 files and directories currently installed.)
Preparing to unpack .../chromium-browser_1%3a85.0.4183.83-0ubuntu2_amd64.deb ...
=> Installing the chromium snap
==> Checking connectivity with the snap store
==> Installing the chromium snap
error: cannot perform the following tasks:
- Run configure hook of "chromium" snap if present (run hook "configure": cannot perform operation: mount --rbind /var/log /tmp/snap.rootfs_Gg42mE//var/log: Permission denied)
dpkg: error processing archive /var/cache/apt/archives/chromium-browser_1%3a85.0.4183.83-0ubuntu2_amd64.deb (--unpack):
new chromium-browser package pre-installation script subprocess returned error exit status 1
Errors were encountered while processing:
/var/cache/apt/archives/chromium-browser_1%3a85.0.4183.83-0ubuntu2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
I tried:
sudo add-apt-repository ppa:xtradeb/apps
sudo gedit /etc/apt/preferences.d/xtradebppa
# content:
Package: chromium*
Pin: release o=LP-PPA-xtradeb
Pin-Priority: 501
But this didn't help.
Finally, I found the solution!
Back when my / system partition was too full, I had symlinked /var/log to somewhere in my home folder. (Later I moved snap back, but /var/log stayed a symlink.)
- Run hook connect-plug-host-hunspell of snap "firefox" (run hook "connect-plug-host-hunspell": cannot perform operation: mount --rbind /var/log /tmp/snap.rootfs_hE2Zj1//var/log: Permission denied)
The permission was denied because snap tried to mount onto a symlink.
All I had to do was:
sudo rm /var/log
sudo mkdir -p /var/log
Now /var/log is not a symlink any more, so snap can actually mount onto it.
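If you suspect the same root cause on your own system, a quick check with standard coreutils tells you whether /var/log is a real directory or a symlink:
ls -ld /var/log    # "drwx..." means a real directory, "lrwx... -> ..." means a symlink
test -L /var/log && echo "/var/log is a symlink" || echo "/var/log is a real directory"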
I've created a local apt repo on apache2 using this structure:
mkdir -p /var/www/html/repo/pool/main/
cp /home/xxx.deb /var/www/html/repo/pool/main/.
mkdir -p /var/www/html/repo/dists/focal/main/binary-amd64
cd /var/www/html/repo
dpkg-scanpackages --multiversion --arch amd64 pool/ > dists/focal/main/binary-amd64/Packages
cat dists/focal/main/binary-amd64/Packages | gzip -9 > dists/focal/main/binary-amd64/Packages.gz
I made a Release file and signed it in /var/www/html/repo/dists/focal.
I added an entry for the repository in /etc/apt/sources.list.d/gmss.list.
After all this I can install my debs from this repository. However, when I add a new version of my software to the pool, regenerate the Packages file, and run:
apt update
apt install softwarepkg
it says that the latest version is already installed.
How can I get this to update to the latest version of my software?
You may want to try running apt clean to clear the cache, which may have the older version of your package.
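If clearing the cache alone does not pick up the new version, it is also worth re-generating and re-signing the repository metadata every time a new .deb lands in the pool. Here is a sketch using the same layout as in the question; the apt-ftparchive/gpg lines are an assumption about how the Release file was originally produced, and YOUR_KEY_ID is a placeholder:
cd /var/www/html/repo
# Rebuild the package index so it lists the new version
dpkg-scanpackages --multiversion --arch amd64 pool/ > dists/focal/main/binary-amd64/Packages
gzip -9c dists/focal/main/binary-amd64/Packages > dists/focal/main/binary-amd64/Packages.gz
# Regenerate and re-sign the Release file (assumed workflow)
apt-ftparchive release dists/focal > Release.tmp && mv Release.tmp dists/focal/Release
gpg --yes --default-key "YOUR_KEY_ID" -abs -o dists/focal/Release.gpg dists/focal/Release
gpg --yes --default-key "YOUR_KEY_ID" --clearsign -o dists/focal/InRelease dists/focal/Release
# Then, on the client
apt update
apt install softwarepkg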
I have a CloudFormation template that I created in the hope of spinning up an EC2 instance with the dependencies needed (installed via a bash script in UserData) to leverage GPU hardware from within a Docker container. The main dependencies are: 1) the NVIDIA drivers, 2) Docker, and 3) nvidia-docker2.
The first two dependencies install as expected and, after several moments, can be verified with 1) nvidia-smi and 2) docker --version. The third dependency, however, consistently does not install.
For reference, here are the relevant parts of my UserData bash script:
# install gpu stuff
apt-get install linux-headers-$(uname -r)
distribution=$(. /etc/os-release;echo $ID$VERSION_ID | sed -e 's/\.//g')
wget https://developer.download.nvidia.com/compute/cuda/repos/$distribution/x86_64/cuda-$distribution.pin
mv cuda-$distribution.pin /etc/apt/preferences.d/cuda-repository-pin-600
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/$distribution/x86_64/7fa2af80.pub
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/$distribution/x86_64 /" | tee /etc/apt/sources.list.d/cuda.list
apt-get update
apt-get -y install cuda-drivers
# install docker on system
curl https://get.docker.com | sh
systemctl start docker && systemctl enable docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
apt-get -y install nvidia-docker2 > /var/log/mason
# add nvidia runtime stuff
# echo "{ \"runtimes\": { \"nvidia\": { \"path\": \"/usr/bin/nvidia-container-runtime\", \"runtimeArgs\": [] } } }" >> /etc/docker/daemon.json
systemctl restart docker
I have tried to redirect the stdout of apt-get -y install nvidia-docker2 to a log file, but the log only shows:
Reading package lists...
Building dependency tree...
Reading state information...
and it seems to be stuck there.
Other potentially helpful bits:
AMI: Ubuntu 18.04 image
I will also note that I am able to SSH into the instance and run apt-get -y install nvidia-docker2 in a terminal without a hitch (and without any user prompt).
Can anyone help me figure out how to troubleshoot this issue, or does anyone see any potential problems in what I have shared above? Redirecting stdout to a file is about the only trick I know for debugging an issue like this. Please let me know if I can update/edit this post to make the issue easier to debug.
Based on the comments:
The issue was caused by not updating Ubuntu's package index after adding the nvidia-docker2 repository.
The solution is to run apt-get update after adding the repo.
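A minimal sketch of the corrected fragment of the UserData script, with the extra apt-get update placed right after the nvidia-docker repository is added (URLs and package names are taken from the question):
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
# Refresh the package index so apt actually knows about nvidia-docker2
apt-get update
apt-get -y install nvidia-docker2
systemctl restart docker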
Replace:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID | sed -e 's/\.//g')
with:
distribution=ubuntu18.04
I have been using wkhtmltopdf to convert HTML to PDF documents on the fly on my Linux web server. The program originally needed X11 or a similar X server to run correctly, but after many requests from developers to make it run on servers without a GUI, I am pretty sure the static version runs a virtual X server. I have been using the static (stand-alone) version of the program and it works great! I would put the executable in a folder and run:
./wkhtmltopdf file1.html file2.pdf
However, I would like to install this program system-wide. I used apt-get install wkhtmltopdf (just installed yesterday) and, since I am running a 64-bit system, I also needed apt-get install ia32-libs. After installation I can check the version like this:
wkhtmltopdf --version
output:
Name:
wkhtmltopdf 0.9.9
License:
Copyright (C) 2008,2009 Wkhtmltopdf Authors.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it. There is NO
WARRANTY, to the extent permitted by law.
Authors:
Written by Jakob Truelsen. Patches by Mário Silva, Benoit Garret and Emmanuel
Bouthenot.
Now when I try to run the program installed via aptitude, I get the following error:
wkhtmltopdf: cannot connect to X server
Does anyone know how I can fix this? I guess this version is missing a virtual X server or something.
Or try this (from http://drupal.org/node/870058):
1. Download wkhtmltopdf. Or better, install it with a package manager:
sudo apt-get install wkhtmltopdf
2. Extract it and move it to /usr/local/bin/.
3. Rename it to wkhtmltopdf, so that you now have an executable at /usr/local/bin/wkhtmltopdf.
4. Set permissions: sudo chmod a+x /usr/local/bin/wkhtmltopdf
5. Install the required support packages:
sudo apt-get install openssl build-essential xorg libssl-dev
6. Check to see if it works: run
/usr/local/bin/wkhtmltopdf http://www.google.com test.pdf
If it works, you are done. If you get the error "Cannot connect to X server", continue to step 7.
7. We need to run it headless on a 'virtual' X server. We will do this with a package called xvfb:
sudo apt-get install xvfb
8. We need to write a little shell script to wrap wkhtmltopdf in xvfb. Make a file called wkhtmltopdf.sh and add the following:
xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf "$@"
9. Move this shell script to /usr/local/bin and set permissions:
sudo chmod a+x /usr/local/bin/wkhtmltopdf.sh
10. Check to see if it works once again: run
/usr/local/bin/wkhtmltopdf.sh http://www.google.com test.pdf
Note that http://www.google.com may throw an error like "A finished ResourceObject received a loading finished signal. This might be an indication of an iframe taking too long to load." You may want to test with a simpler page like http://www.example.com.
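For reference, the complete wrapper from steps 8 and 9 might look like this (a sketch; the shebang and comment are additions, the xvfb-run line is the one from step 8):
#!/bin/sh
# /usr/local/bin/wkhtmltopdf.sh -- run wkhtmltopdf inside a virtual X server
xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf "$@"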
This solved the issue for me:
sudo apt-get install xvfb
xvfb-run --server-args="-screen 0 1024x768x24" wkhtmltopdf file1.html file2.pdf
I tried sudo apt-get install wkhtmltopdf but without any success.
Instead, I recommend you try the following.
Download the latest static executable (0.11.0 rc1):
wget https://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.11.0_rc1-static-i386.tar.bz2
Uncompress it:
tar -vxf wkhtmltopdf-0.11.0_rc1-static-i386.tar.bz2
Rename it:
mv wkhtmltopdf-i386 wkhtmltopdf
Make it executable:
chmod a+x wkhtmltopdf
Place it into /usr/bin:
sudo mv wkhtmltopdf /usr/bin
This just worked for me:
1. Install the wkhtmltopdf dependencies:
# apt-get install wkhtmltopdf
2. Download the package from SourceForge and install it:
# wget http://downloads.sourceforge.net/project/wkhtmltopdf/xxx.deb
# dpkg -i xxx.deb
3. Try it:
# wkhtmltopdf http://google.com google.pdf
It works fine.
It works! I found a method to solve this problem without a fake X server.
The newest version of wkhtmltopdf doesn't need an X server to work, but it is not in the official Linux repositories.
Solution for Ubuntu 14.04.4 LTS (trusty) i386
$ sudo apt-get install xfonts-75dpi
$ wget http://download.gna.org/wkhtmltopdf/0.12/0.12.2/wkhtmltox-0.12.2_linux-trusty-i386.deb
$ sudo dpkg -i wkhtmltox-0.12.2_linux-trusty-i386.deb
$ wkhtmltopdf http://www.google.com test.pdf
Solution for Ubuntu 14.04.4 LTS (trusty) amd64
$ sudo apt-get install xfonts-75dpi
$ wget http://download.gna.org/wkhtmltopdf/0.12/0.12.2/wkhtmltox-0.12.2_linux-trusty-amd64.deb
$ sudo dpkg -i wkhtmltox-0.12.2_linux-trusty-amd64.deb
$ wkhtmltopdf http://www.google.com test.pdf
User felixhummel posted a very good solution, but the repository hosting the utility has changed.
Expanding on Timothy's answer...
If you're a web developer looking to use wkhtmltopdf as part of your web app, you can simply install it into your /usr/bin/ folder like so:
cd /usr/bin/
curl -C - -O http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.11.0_rc1-static-i386.tar.bz2
tar -xvjf wkhtmltopdf-0.11.0_rc1-static-i386.tar.bz2
mv wkhtmltopdf-i386 wkhtmltopdf
You can now run it anywhere using wkhtmltopdf.
I personally use the Snappy library in PHP. Here is an example of how easy it is to create a PDF:
<?php
// Load the Composer autoloader so the Snappy classes are available
require __DIR__ . '/vendor/autoload.php';
// Create new PDF
$pdf = new \Knp\Snappy\Pdf('wkhtmltopdf');
// Set output header
header('Content-Type: application/pdf');
// Generate PDF from HTML
echo $pdf->getOutputFromHtml('<h1>Title</h1><p>Your content goes here.</p>');
Update to the latest wkhtmltopdf version from SourceForge (0.12 as of this writing). It does not need an X server to run.
Example for Ubuntu 14.04:
$ cd /tmp/
$ wget -q http://downloads.sourceforge.net/project/wkhtmltopdf/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
$ dpkg -x wkhtmltox-0.12.2.1_linux-trusty-amd64.deb foo
$ echo '<p>hi</p>' | ./foo/usr/local/bin/wkhtmltopdf - /tmp/hi.pdf
Loading pages (1/6)
Counting pages (2/6)
Resolving links (4/6)
Loading headers and footers (5/6)
Printing pages (6/6)
Done
$ head -n3 /tmp/hi.pdf
%PDF-1.4
1 0 obj
<<
For Ubuntu 14.04.1, using https://wkhtmltopdf.org/downloads.html:
wget https://downloads.wkhtmltopdf.org/0.12/0.12.4/wkhtmltox-0.12.4_linux-generic-amd64.tar.xz -O mktemp.tar.xz
tar xf mktemp.tar.xz
sudo cp wkhtmltox/bin/wkhtmltopdf /usr/bin/wkhtmltopdf
sudo chmod +x /usr/bin/wkhtmltopdf
rm mktemp.tar.xz
rm -rf wkhtmltox
apt-get update
apt-get install -y libxrender1 libxtst6 libxi6
wkhtmltopdf http://www.google.com test.pdf
sudo -i
apt-get install wkhtmltopdf xvfb libicu48
mv /usr/bin/wkhtmltopdf /usr/bin/wkhtmltopdf-origin
touch /usr/bin/wkhtmltopdf && chmod +x /usr/bin/wkhtmltopdf && cat > /usr/bin/wkhtmltopdf << END
#!/bin/bash
/usr/bin/xvfb-run -a -s "-screen 0 1024x768x24" /usr/bin/wkhtmltopdf-origin "\$@"
END
The problem is probably an old version of wkhtmltopdf: version 0.9 from the distribution repository requires a running X server, but the current version (0.12.2.1) doesn't require one and can run headless.
Download the package for your distribution from http://wkhtmltopdf.org/downloads.html and install it. For Ubuntu:
sudo apt-get install xfonts-75dpi
sudo dpkg -i wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
wkhtmltopdf > 0.11 doesn't have this X server issue, so here is how to install 0.12.2.1 on a Linux server.
First, install the xvfb server:
sudo apt-get install xvfb
Get the needed version of wkhtmltopdf from http://wkhtmltopdf.org/downloads.html
Install wkhtmltopdf:
sudo dpkg -i wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
or install it with wget:
URL='http://download.gna.org/wkhtmltopdf/0.12/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb'; FILE=`mktemp`; wget "$URL" -qO $FILE && sudo dpkg -i $FILE; rm $FILE
Install dependencies (if needed):
sudo apt-get -f install
Create a wrapper script in /usr/local/bin/:
echo 'exec xvfb-run -a -s "-screen 0 640x480x16" wkhtmltopdf "$@"' | sudo tee /usr/local/bin/wkhtmltopdf.sh >/dev/null
sudo chmod a+x /usr/local/bin/wkhtmltopdf.sh
Now try the command below and it should work:
/usr/local/bin/wkhtmltopdf http://www.google.com test.pdf
I just figured out that I can simply move the static executable to the /usr/bin/ directory and execute it from anywhere.
Solution for CentOS 7:
yum -y install xorg-x11-fonts-75dpi \
xorg-x11-fonts-Type1 \
&& rpm -Uvh http://download.gna.org/wkhtmltopdf/0.12/0.12.2.1/wkhtmltox-0.12.2.1_linux-centos7-amd64.rpm
We ran into this problem inside Docker containers, and the install above provides wkhtmltopdf with patched Qt.
It is recommended to use at least 0.12.2.1.
Starting from wkhtmltopdf >= 0.12.2, it doesn't require an X server or emulation any more. You can download the new version from http://wkhtmltopdf.org/downloads.html
I followed the instructions here and made wkhtmltopdf work for me, but I would like to offer a bit of perspective that I discovered while doing my own little dance with wkhtmltopdf and xvfb.
This is important because the same reason that causes wkhtmltopdf to throw the infamous cannot connect to X server error also causes it to run with severe limitations even if you do provide it an X server. These limitations include not being able to take multiple input sources, set headers and footers, etc. (check the Reduced Functionality section of the manual).
wkhtmltox by itself doesn't require X11; however, it makes use of Qt libraries which do. In newer versions of wkhtmltox, the developers patched Qt so that it can run without an X11 server.
Currently some builds are compiled against the patched Qt and some are not. You can check your version by running wkhtmltopdf --version. There should be a line at the end saying Compiled against wkhtmltopdf patched qt.
So, to conclude, if you install and use a version built against the patched libraries, it will work on a Linux server without the xvfb server, as I can confirm.
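If you want to check your own binary, the version banner and the extended help are the places to look; here is a rough check (treat the exact wording as an assumption, since it differs between builds):
wkhtmltopdf --version
wkhtmltopdf --extended-help | grep -i "patched qt"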
Pay attention: your file could be named wkhtmltopdf.sh or just wkhtmltopdf; check which one you have in step 2.
You must copy it into the directory /usr/local/bin, make sure it is executable, and optionally add a symlink for wkhtmltopdf.sh, like this:
1. Install the package:
sudo apt-get install wkhtmltopdf
2. The package puts the binary in /usr/bin, where the web server does not have permission to execute it. Copy wkhtmltopdf.sh to /usr/local/bin, where the web server does have permission:
sudo cp /usr/bin/wkhtmltopdf.sh /usr/local/bin/wkhtmltopdf.sh
3. Then make sure the file has execute permission:
sudo chmod a+x /usr/local/bin/wkhtmltopdf.sh
4. Now you can test it; this downloads the PDF into the current directory of your terminal:
/usr/local/bin/wkhtmltopdf.sh http://www.google.com google.pdf
5. Optional: add a symlink in /usr/local/bin:
ln -s /usr/local/bin/wkhtmltopdf.sh /usr/local/bin/wkhtmltopdf
Just tell the Qt backend to not use X:
QT_QPA_PLATFORM=offscreen wkhtmltopdf <input> <outfile.pdf>
Download the file from this link.
Extract it and move the executable file (wkhtmltox/bin/wkhtmltopdf) to /usr/bin/.
Rename it to wkhtmltopdf if its current name is not wkhtmltopdf, so that you now have an executable at /usr/bin/wkhtmltopdf.
Set permissions: sudo chmod a+x /usr/bin/wkhtmltopdf
Install the required support packages: sudo apt-get install openssl build-essential xorg libssl-dev
Now check it with wkhtmltopdf http://www.google.com test.pdf
Hint: detailed information is available at this link.
Just install a version 0.12.4 or higher. This seems to solve the problem.
See How can I install the latest wkhtmltopdf on Ubuntu 16.04?.
If you are configuring wkhtmltopdf for Rails (or something similar) on CentOS, you can follow the steps below:
1. Go to https://wkhtmltopdf.org/downloads.html and copy the link of the rpm file.
2. In the CentOS server shell, download it:
wget link_of_wkhtmltopdf_rpm.rpm
3. Install it:
rpm -ivh link_of_wkhtmltopdf_rpm.rpm
4. Run which wkhtmltopdf
=> You will get the path of wkhtmltopdf.
5. Set up wicked_pdf or pdfkit with the path from step 4.
This is a sample config with wicked_pdf, in config/initializers/wicked_pdf.rb:
if Rails.env != "production"
path = %x[which wkhtmltopdf].gsub(/\n/, "")
else
path = "path_of_wkhtmltopdf_in_step_4"
end
WickedPdf.config = { exe_path: path }
Restart server.
DONE.
For 64-bit, use:
wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.9.9-static-amd64.tar.bz2
tar xvjf wkhtmltopdf-0.9.9-static-amd64.tar.bz2
sudo mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
sudo chmod +x /usr/bin/wkhtmltopdf
While running
./configure --prefix=/mingw
on a MinGW/MSYS system for a library for which I had previously run
./configure --prefix=/mingw && make && make install
I came across this message:
WARNING: A version of the Vamp plugin SDK is already installed. Expect worries and sorrows if you install a new version without removing the old one first. (Continuing)
This had me worried. What's the opposite of 'make install', i.e. how is a library uninstalled in Linux? Will 'make clean' do the job, or are there other steps involved?
make clean removes any intermediate or output files from your source / build tree. However, it only affects the source / build tree; it does not touch the rest of the filesystem and so will not remove previously installed software.
If you're lucky, running make uninstall will work. It's up to the library's authors to provide that, however; some authors provide an uninstall target, others don't.
If you're not lucky, you'll have to manually uninstall it. Running make -n install can be helpful, since it will show the steps that the software would take to install itself but won't actually do anything. You can then manually reverse those steps.
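A minimal sketch of that manual route (the file names under /usr/local are hypothetical; remove whatever the dry run actually shows being copied):
# Show the commands `make install` would run, without executing them
make -n install | tee install-steps.txt
# Then reverse the copies by hand, for example:
sudo rm /usr/local/bin/example-prog          # hypothetical installed binary
sudo rm -r /usr/local/include/example-lib    # hypothetical installed headers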
If sudo make uninstall is unavailable:
In a Debian based system, instead of (or after*) doing make install you can run sudo checkinstall to make a .deb file that gets automatically installed. You can then remove it using the system package manager (e.g. apt/synaptic/aptitude/dpkg). Checkinstall also supports creating other types of package, e.g. RPM.
See also http://community.linuxmint.com/tutorial/view/162 and some basic checkinstall usage and debian checkinstall package.
*: If you're reading this after having installed with make install you can still follow the above instructions and do a dpkg -r $PACKAGE_NAME_YOU_CHOSEN afterwards.
If you have a manifest file which lists all the files that were installed by make install, you can run this command, which I got from another answer:
cat install_manifest.txt | xargs echo rm | sh
If you used sudo make install, you will need to add sudo to the uninstall as well:
cat install_manifest.txt | xargs echo sudo rm | sh
How to uninstall after "make install"
Method #1 (make uninstall)
Step 1: You only need to follow this step if you've deleted/altered the build directory in any way: Download and make/make install using the exact same procedure as you did before.
Step 2: try make uninstall.
cd $SOURCE_DIR
sudo make uninstall
If this succeeds you are done. If you're paranoid you may also try the steps of "Method #3" to make sure make uninstall didn't miss any files.
Method #2 (checkinstall -- only for debian based systems)
Overview of the process
In Debian-based systems (e.g. Ubuntu) you can create a .deb package very easily by using a tool named checkinstall. You then install the .deb package (this will make your Debian system realize that all parts of your package have indeed been installed) and finally uninstall it to let your package manager properly clean up your system.
Step by step
sudo apt-get -y install checkinstall
cd $SOURCE_DIR
sudo checkinstall
At this point checkinstall will prompt for a package name. Enter something a bit descriptive and note it down, because you'll use it in a minute. It will also prompt for a few more pieces of information that you can ignore. If it complains about the version not being acceptable, just enter something reasonable like 1.0. When it completes, you can install and finally uninstall:
sudo dpkg -i $PACKAGE_NAME_YOU_ENTERED
sudo dpkg -r $PACKAGE_NAME_YOU_ENTERED
Method #3 (install_manifest.txt)
If a file install_manifest.txt exists in your source dir it should contain the filenames of every single file that the installation created.
So first check the list of files and their mod-time:
cd $SOURCE_DIR
sudo xargs -I{} stat -c "%z %n" "{}" < install_manifest.txt
You should get zero errors and the mod-times of the listed files should be on or after the installation time. If all is OK you can delete them in one go:
cd $SOURCE_DIR
mkdir deleted-by-uninstall
sudo xargs -I{} mv -t deleted-by-uninstall "{}" < install_manifest.txt
User Merlyn Morgan-Graham however has a serious notice regarding this method that you should keep in mind (copied here verbatim): "Watch out for files that might also have been installed by other packages. Simply deleting these files [...] could break the other packages.". That's the reason that we've created the deleted-by-uninstall dir and moved files there instead of deleting them.
99% of this post existed in other answers. I just collected everything useful in a (hopefully) easy-to-follow how-to and tried to give extra attention to important details (like quoting xargs arguments and keeping backups of deleted files).
Depending on how good the makefile/configure script/autofoo magic of the program in question is, the following might solve your problem:
make uninstall
The catch is that you should execute this on the source tree of the version you have installed, and with exactly the same configuration that you used for installing.
make clean generally only cleans built files in the directory containing the source code itself, and rarely touches any installed software.
Makefiles generally don't contain a target for uninstallation -- you usually have to do that yourself, by removing the files from the directory into which they were installed. For example, if you built a program and installed it (using make install) into /usr/local, you'd want to look through /usr/local/bin, /usr/local/libexec, /usr/local/share/man, etc., and remove the unwanted files. Sometimes a Makefile includes an uninstall target, but not always.
Of course, typically on a Linux system you install software using a package manager, which is capable of uninstalling software "automagically".
The "stow" utility was designed to solve this problem: http://www.gnu.org/software/stow/
Unfortunately there is no standard; this is one of the perils of installing from source. Some Makefiles will include an "uninstall" target, so
make uninstall
from the source directory may work. Otherwise, it may be a matter of manually undoing whatever the make install did.
make clean usually just cleans up the source directory - removing generated/compiled files and the like, probably not what you're after.
Make
Make is the program that's used to build and install programs compiled from source code. It is not the Linux package manager, so it doesn't keep track of the files it installs, which makes it difficult to uninstall them afterwards.
The make install command copies the built program and its support files into the library directories and the other locations specified by the makefile. These locations can vary based on the examination performed by the configure script.
CheckInstall
CheckInstall is the program that’s used to install or uninstall programs that are compiled from the source code. It monitors and copies the files that are installed using the make program. It also installs the files using the Linux package manager which allows it to be uninstalled like any regular package.
The CheckInstall command is used to call the Make Install command. It monitors the files that are installed and creates a binary package from them. It also installs the binary package with the Linux package manager.
Replace "source_location.deb" and "name" with your information from the Screenshot.
Execute the following commands in the source package directory:
Install CheckInstall sudo apt-get install checkinstall
Run the Configure script sudo ./configure
Run the Make command sudo make
Run CheckInstall sudo checkinstall
Reinstall the package sudo dpkg --install --force-overwrite source_location.deb
Remove the package sudo apt remove name
Here's an article I wrote that covers the whole process with explanations.
Method 1
From the source folder:
#make uninstall
Method 2
If there is no uninstall procedure:
open install_manifest.txt (created by #make install)
remove all the directories/files listed
remove any remaining files you missed:
#xargs rm < install_manifest.txt
remove any hidden directories/files:
$rm -rf ~/.packagename
Remove the source folder.
Method 3
If none of the above options work, view the install procedure:
#make -n install
and reverse the install procedure:
#rm -rf all directories/files created
Example
For example, this is how to uninstall nodejs, npm, and nvm from source:
How do I completely uninstall Node.js, and reinstall from beginning (Mac OS X)
which you can do using any of the above methods.
I know of few packages that support make uninstall, but many more support make install DESTDIR=xxx for staged installs.
You can use this to create a package which you install instead of installing directly from the source. I had no luck with checkinstall, but fpm works very well.
This can also help you remove a package previously installed using make install. You simply force-install your built package over the make-installed one and then uninstall it.
For example, I used this recently to deal with protobuf-3.3.0.
On RHEL7:
make install DESTDIR=dest
cd dest
fpm -f -s dir -t rpm -n protobuf -v 3.3.0 \
--vendor "You Not RedHat" \
--license "Google?" \
--description "protocol buffers" \
--rpm-dist el7 \
-m you@youraddress.com \
--url "http://somewhere/where/you/get/the/package/oritssource" \
--rpm-autoreqprov \
usr
sudo rpm -i -f protobuf-3.3.0-1.el7.x86_64.rpm
sudo rpm -e protobuf-3.3.0
Prefer yum to rpm if you can.
On Debian9:
make install DESTDIR=dest
cd dest
fpm -f -s dir -t deb -n protobuf -v 3.3.0 \
-C `pwd` \
--prefix / \
--vendor "You Not Debian" \
--license "$(grep Copyright ../../LICENSE)" \
--description "$(cat README.adoc)" \
--deb-upstream-changelog ../../CHANGES.txt \
--url "http:/somewhere/where/you/get/the/package/oritssource" \
usr/local/bin \
usr/local/lib \
usr/local/include
sudo apt install -f *.deb
sudo apt-get remove protobuf
Prefer apt to dpkg where you can.
I've also posted this answer here.
Make can tell you what it knows and what it will do.
Suppose you have an "install" target, which executes commands like:
cp <filelist> <destdir>/
In your generic rules, add:
uninstall :; MAKEFLAGS= ${MAKE} -j1 -spinf $(word 1,${MAKEFILE_LIST}) install \
| awk '/^cp /{dest=$$NF; for (i=NF; --i>0;) {print dest"/"$$i}}' \
| xargs rm -f
A similar trick can do a generic make clean.
Preamble
The below may or may not work; it is all given as-is, and you and only you are responsible in case of damage, data loss and so on. But I hope things go smoothly!
To undo make install I would do (and did) the following:
Idea: check what the install script installs, and undo it with a simple bash script.
1. Reconfigure your build dir to install to some custom dir. I usually do this: --prefix=$PWD/install. For CMake, you can go to your build dir, open CMakeCache.txt, and fix the CMAKE_INSTALL_PREFIX value.
2. Install the project into the custom directory (just run make install again).
3. Now we work from the assumption that make install puts into the custom dir exactly the same contents you want to remove from somewhere else (usually /usr/local). So we need a script.
3.1. The script should compare the custom dir with the dir you want to clean. I use this:
anti-install.sh
RM_DIR=$1
PRESENT_DIR=$2
echo "Remove files from $RM_DIR, which are present in $PRESENT_DIR"
pushd $RM_DIR
for fn in `find . -iname '*'`; do
  # echo "Checking $PRESENT_DIR/$fn..."
  if test -f "$PRESENT_DIR/$fn"; then
    # First run it like this and check that the printed commands look right.
    echo "rm $RM_DIR/$fn"
    # Then uncomment the following line (but check twice that it does what you want).
    # rm $RM_DIR/$fn
  fi
done
popd
3.2. Now just run this script (it does a dry run):
bash anti-install.sh <dir you want to clean> <custom installation dir>
E.g. you want to clean /usr/local, and your custom installation dir is /user/me/llvm.build/install; then it would be:
bash anti-install.sh /usr/local /user/me/llvm.build/install
3.3. Check the log carefully; if the commands look good to you, uncomment rm $RM_DIR/$fn and run it again. But stop! Did you really check carefully? Maybe check again?
Source of these instructions:
https://dyatkovskiy.com/2019/11/26/anti-make-install/
Good luck!