apt install fftw3 doesn't create necessary symbolic links, what to do?

System:
Ubuntu 21.04 inside Virtual Box.
Running sudo apt install libfftw3-3 installs the fftw3 libs, but doesn't seem to create the symbolic links for the system to find them.
In the terminal output below, I don't see libfftw3.so listed anywhere, and the linker can't find it either. Should I have expected the apt install command to take care of this for me? In the short term, should I just manually create a symbolic link? What link should I create? I presume it should be something like:
sudo ln -s /usr/lib/x86_64-linux-gnu/libfftw3.so.3 /usr/lib/libfftw3.so
Is there anything wrong with that?
Here's the terminal output:
>>> find /usr -name "*fftw*"
/usr/lib/x86_64-linux-gnu/libfftw3f.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3f_threads.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3f_omp.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3l.so.3
/usr/lib/x86_64-linux-gnu/libfftw3_omp.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3f_threads.so.3
/usr/lib/x86_64-linux-gnu/libfftw3f_omp.so.3
/usr/lib/x86_64-linux-gnu/libfftw3_threads.so.3
/usr/lib/x86_64-linux-gnu/libfftw3_omp.so.3
/usr/lib/x86_64-linux-gnu/libfftw3l_threads.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3f.so.3
/usr/lib/x86_64-linux-gnu/libfftw3.so.3
/usr/lib/x86_64-linux-gnu/libfftw3l_omp.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3l.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3_threads.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3.so.3.5.8
/usr/lib/x86_64-linux-gnu/libfftw3l_omp.so.3
/usr/lib/x86_64-linux-gnu/libfftw3l_threads.so.3
/usr/share/doc/libfftw3-3
/usr/share/doc/libfftw3-long3
/usr/share/doc/libfftw3-single3
/usr/share/doc/libfftw3-double3
>>> g++ test.cpp -o test -lfftw3 && ./test
/usr/bin/ld: cannot find -lfftw3
collect2: error: ld returned 1 exit status
>>> ls -l /usr/lib/x86_64-linux-gnu/*fftw*
lrwxrwxrwx 1 root root 22 Jul 3 10:40 libfftw3f_omp.so.3 -> libfftw3f_omp.so.3.5.8
-rw-r--r-- 1 root root 31176 Jun 5 2020 libfftw3f_omp.so.3.5.8
lrwxrwxrwx 1 root root 18 Jul 3 10:40 libfftw3f.so.3 -> libfftw3f.so.3.5.8
-rw-r--r-- 1 root root 2156872 Jun 5 2020 libfftw3f.so.3.5.8
lrwxrwxrwx 1 root root 26 Jul 3 10:40 libfftw3f_threads.so.3 -> libfftw3f_threads.so.3.5.8
-rw-r--r-- 1 root root 35368 Jun 5 2020 libfftw3f_threads.so.3.5.8
lrwxrwxrwx 1 root root 22 Jun 5 2020 libfftw3l_omp.so.3 -> libfftw3l_omp.so.3.5.8
-rw-r--r-- 1 root root 31176 Jun 5 2020 libfftw3l_omp.so.3.5.8
lrwxrwxrwx 1 root root 18 Jun 5 2020 libfftw3l.so.3 -> libfftw3l.so.3.5.8
-rw-r--r-- 1 root root 899392 Jun 5 2020 libfftw3l.so.3.5.8
lrwxrwxrwx 1 root root 26 Jun 5 2020 libfftw3l_threads.so.3 -> libfftw3l_threads.so.3.5.8
-rw-r--r-- 1 root root 35368 Jun 5 2020 libfftw3l_threads.so.3.5.8
lrwxrwxrwx 1 root root 21 Jun 5 2020 libfftw3_omp.so.3 -> libfftw3_omp.so.3.5.8
-rw-r--r-- 1 root root 31176 Jun 5 2020 libfftw3_omp.so.3.5.8
lrwxrwxrwx 1 root root 17 Jun 5 2020 libfftw3.so.3 -> libfftw3.so.3.5.8
-rw-r--r-- 1 root root 2115912 Jun 5 2020 libfftw3.so.3.5.8
lrwxrwxrwx 1 root root 25 Jun 5 2020 libfftw3_threads.so.3 -> libfftw3_threads.so.3.5.8
-rw-r--r-- 1 root root 35368 Jun 5 2020 libfftw3_threads.so.3.5.8
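For what it's worth, the usual Debian/Ubuntu convention is that the runtime package (libfftw3-3) ships only the versioned libraries, while the unversioned libfftw3.so symlink that the linker looks for comes from the development package. A minimal sketch, assuming that convention applies here:
# the development symlinks (and headers) are shipped by the -dev package
sudo apt install libfftw3-dev
# alternatively, the manual link proposed above should also satisfy -lfftw3;
# placing it next to the versioned library keeps it in the default search path,
# but it will not be tracked by dpkg
sudo ln -s /usr/lib/x86_64-linux-gnu/libfftw3.so.3 /usr/lib/x86_64-linux-gnu/libfftw3.so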

Related

Retiring the once only volume, holding important looking files

/volume1 was once my only volume, and it has been joined by /volume2 in preparation for retiring /volume1.
Having relocated all my content, I can see lots of files I cannot explain. Unusually, they are all prefixed with #, e.g.
/volume1$ ls -als
total 430144
0 drwxr-xr-x 1 root root 344 May 2 16:19 .
4 drwxr-xr-x 24 root root 4096 May 2 16:18 ..
0 drwxr-xr-x 1 root root 156 Jun 29 15:57 #appstore
0 drwx------ 1 root root 0 Apr 11 04:03 #autoupdate
0 drwxr-xr-x 1 root root 14 May 2 16:19 #clamav
332 -rw------- 1 root root 339245 Jan 23 13:50 #cnid_dbd.core.gz
0 drwxr-xr-x 1 admin users 76 Aug 19 2020 #database
0 drwx--x--x 1 root root 174 Jun 29 15:57 #docker
0 drwxrwxrwx+ 1 root root 24 Jan 23 15:27 #eaDir
420400 -rw------- 1 root root 430485906 Jan 4 05:06 #G1.core.gz
0 drwxrwxrwx 1 root root 12 Jan 21 13:47 #img_bkp_cache
0 drwxr-xr-x 1 root root 14 Dec 29 18:45 #maillog
0 drwxr-xr-x 1 root root 60 Dec 29 18:39 #MailScanner
0 drwxrwxr-x 1 root root 106 Oct 7 2018 #optware
7336 -rw------- 1 root root 7510134 Jan 24 01:33 #Plex.core.gz
0 drwxr-xr-x 1 postfix root 166 Oct 12 2020 #postfix
2072 -rw------- 1 root root 2118881 Jan 17 03:47 #rsync.core.gz
0 drwxr-xr-x 1 root root 88 May 2 16:19 #S2S
0 drwxr-xr-x 1 root root 0 Jan 23 13:50 #sharesnap
0 drwxrwxrwt 1 root root 48 Jun 29 15:57 #tmp
I have two questions:
what does the # prefix signify, and
how can I move/remove them, given that something is presumably going to miss these files?
From experimentation it seems the answers are:
Nothing - they're a convention used by the Synology packaging system, it appears.
With one exception, I didn't need to consider the consequences of removing the file system on which these stood. The #appstore directory clearly holds the installed Synology packages, and after pulling /volume1 they showed in the Package Center as "needing repair". Once they were repaired, the same #-prefixed directories appeared on the new volume, and the configuration was retained, so it appears these directories hold only the immutable software components.
The exception: I use ipkg mostly for fetchmail. I took a listing of the installed packages as well as the fetchmailrc, and then reinstalled the same packages once "Easy Bootstrap Installer" was ready for use (repair didn't work on this, but uninstall and reinstall worked fine).
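For reference, a rough sketch of the backup step described above, assuming Optware's ipkg and a fetchmailrc kept under /opt/etc (adjust the paths to wherever your configuration actually lives):
# record what ipkg has installed, plus the fetchmail configuration,
# somewhere that survives the retirement of /volume1
ipkg list_installed > /volume2/ipkg-packages.txt
cp /opt/etc/fetchmailrc /volume2/fetchmailrc.bak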

Developing inside docker on WSL2-Ubuntu from vscode

I am trying to run Docker inside WSL (I am running Ubuntu in WSL). I am also new to Docker. The doc says:
To get the best out of the file system performance when bind-mounting files:
Store source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path>) in the Linux filesystem, rather than the Windows filesystem.
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
I also came across following:
Run sudo docker run -v "$HOME:/host" --name "[name_work]" -it docker.repo/[name]. With, [$HOME:/host], you can access your home directory in /host dir in docker image. This allows you to access your files on the local machine inside the docker. So you can edit your source code in your local machine using your favourite editor and run them directly inside the docker. Make sure that you have done this correct. Otherwise, you may need to copy files from the local machine to docker, for each edit (a painful job).
I am not able to understand the format of the parameter passed to the -v option and what it does. I think it will allow me to access Ubuntu directories inside Docker, so $HOME:/host will map Ubuntu's home directory to /host inside the container.
Q1. But what is /host?
Q2. Can I do what is stated by the above two quotes together? I mean, is what they are saying compatible? I guess yes. All they are saying is that I should not mount from a Windows directory like /mnt/<driveletter>/...; if I mount a Linux directory like $USER/..., it will give better performance, right?
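For context, the argument to -v is simply host-path:container-path; a rough sketch of the two variants the documentation is contrasting (the user, project, and image names are placeholders):
# slower: the source lives on the Windows side and reaches the container through /mnt/c
docker run -v /mnt/c/Users/me/my-project:/sources my-image
# faster: the source lives in the WSL2 (Linux) filesystem
docker run -v ~/my-project:/sources my-image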
I tried running the command from the second quote to see what happens:
~$ docker run -v "$HOME:/host" --name "mydokr" -it docker.repo.in/dokrimg
root@f814974a1cfb:/home# ls
root@f814974a1cfb:/home# ll
total 8
drwxr-xr-x 2 root root 4096 Apr 15 11:09 ./
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ../
root@f814974a1cfb:/home# pwd
/home
root@f814974a1cfb:/home# cd ..
root@f814974a1cfb:/# ll
total 64
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ./
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ../
-rwxr-xr-x 1 root root 0 Sep 22 07:16 .dockerenv*
lrwxrwxrwx 1 root root 7 Jul 3 01:56 bin -> usr/bin/
drwxr-xr-x 2 root root 4096 Apr 15 11:09 boot/
drwxr-xr-x 5 root root 360 Sep 22 07:16 dev/
drwxr-xr-x 1 root root 4096 Sep 22 07:16 etc/
drwxr-xr-x 2 root root 4096 Apr 15 11:09 home/
drwxr-xr-x 5 1000 1001 4096 Sep 22 04:52 host/
lrwxrwxrwx 1 root root 7 Jul 3 01:56 lib -> usr/lib/
lrwxrwxrwx 1 root root 9 Jul 3 01:56 lib32 -> usr/lib32/
lrwxrwxrwx 1 root root 9 Jul 3 01:56 lib64 -> usr/lib64/
lrwxrwxrwx 1 root root 10 Jul 3 01:56 libx32 -> usr/libx32/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 media/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 mnt/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 opt/
dr-xr-xr-x 182 root root 0 Sep 22 07:16 proc/
drwx------ 1 root root 4096 Aug 24 03:54 root/
drwxr-xr-x 1 root root 4096 Aug 11 10:24 run/
lrwxrwxrwx 1 root root 8 Jul 3 01:56 sbin -> usr/sbin/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 srv/
dr-xr-xr-x 11 root root 0 Sep 22 03:32 sys/
-rw-r--r-- 1 root root 1610 Aug 24 03:56 test_logPath.log
drwxrwxrwt 1 root root 4096 Aug 24 03:57 tmp/
drwxr-xr-x 1 root root 4096 Aug 11 10:24 usr/
drwxr-xr-x 1 root root 4096 Jul 3 02:00 var/
root@f814974a1cfb:/home# cd ../host
root@f814974a1cfb:/host# ll
total 36
drwxr-xr-x 5 1000 1001 4096 Sep 22 04:52 ./
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ../
-rw-r--r-- 1 1000 1001 220 Sep 22 03:38 .bash_logout
-rw-r--r-- 1 1000 1001 3771 Sep 22 03:38 .bashrc
drwxr-xr-x 3 1000 1001 4096 Sep 22 04:56 .docker/
drwxr-xr-x 2 1000 1001 4096 Sep 22 03:38 .landscape/
-rw-r--r-- 1 1000 1001 0 Sep 22 03:38 .motd_shown
-rw-r--r-- 1 1000 1001 921 Sep 22 04:52 .profile
-rw-r--r-- 1 1000 1001 0 Sep 22 03:44 .sudo_as_admin_successful
drwxr-xr-x 5 1000 1001 4096 Sep 22 04:52 .vscode-server/
-rw-r--r-- 1 1000 1001 183 Sep 22 04:52 .wget-hsts
So I am not getting what's happening here. I know Docker has its own file system.
Q3. Is it that what I am finding at /home and /host is indeed the container's own file system?
Q4. Also, what happened to -v $HOME:/host here?
Q5. How can I do what is stated by the 2nd quote:
This allows you to access your files on the local machine inside the docker. So you can edit your source code in your local machine using your favourite editor and run them directly inside the docker.
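That appears to be exactly what the -v "$HOME:/host" mount already gives you: anything created under $HOME on the WSL side shows up under /host inside the container, and edits are visible on both sides. A rough illustration (the file name is a placeholder, and the interpreter depends on what the image contains):
# on the WSL/Ubuntu side
echo 'print("hello")' > ~/hello.py
# inside the running container, the same file is visible through the bind mount
cat /host/hello.py
python3 /host/hello.py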
Q6. How do I connect vscode to this container? From WSL-Ubuntu, I could just run code . to launch vscode. But the same does not seem to work here:
root@f814974a1cfb:/home# code .
bash: code: command not found
This link says:
A devcontainer.json file can be used to tell VS Code how to configure the development container, including the Dockerfile to use, ports to open, and extensions to install in the container. When VS Code finds a devcontainer.json in the workspace, it automatically builds (if necessary) the image, starts the container, and connects to it.
But I guess this describes VS Code building and starting a new container, not connecting to an already existing one. I am not able to find my devcontainer.json. I downloaded this container image using docker pull.
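As for Q6, code is not installed inside the container, which is why bash reports command not found. Two hedged options, reusing the container name from above: docker exec gives a plain shell in the running container, and VS Code's Dev Containers / Remote - Containers extension has an "Attach to Running Container..." command that does not require a devcontainer.json; the devcontainer.json workflow quoted above is for letting VS Code build and start the container for you.
# open an interactive shell in the already running container, from the WSL side
docker exec -it mydokr bash
# if the container has exited, start it again first
docker start mydokr && docker exec -it mydokr bash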

Force rotate a certain log file without using logrotate

I am trying to force rotate a specific log, e.g., /mroot/etc/mlog/sktrace.log.
For example, currently here are all the logs related to sktrace:
<machine_name>% ll /mroot/etc/mlog/sktrace*
-rw-r--r-- 2 root wheel 13276789 Oct 16 13:00 /mroot/etc/mlog/sktrace.log
-rw-r--r-- 1 root wheel 3063670 Oct 13 10:42 /mroot/etc/mlog/sktrace.log.0000000001
-rw-r--r-- 1 root wheel 44072508 Oct 14 10:42 /mroot/etc/mlog/sktrace.log.0000000002
-rw-r--r-- 1 root wheel 96622284 Oct 15 10:42 /mroot/etc/mlog/sktrace.log.0000000003
-rw-r--r-- 1 root wheel 104858396 Oct 16 08:54 /mroot/etc/mlog/sktrace.log.0000000004
-rw-r--r-- 1 root wheel 10466192 Oct 16 10:42 /mroot/etc/mlog/sktrace.log.0000000005
-rw-r--r-- 2 root wheel 13276789 Oct 16 13:00 /mroot/etc/mlog/sktrace.log.0000000006
By “force rotate”, I mean to copy the content of the current /mroot/etc/mlog/sktrace.log to /mroot/etc/mlog/sktrace.log.0000000007, and then truncate /mroot/etc/mlog/sktrace.log to 0 bytes.
The proper way is probably via logrotate, but it is not available on the system I am using:
<machine_name>% which logrotate
logrotate: Command not found.
<machine_name>% ll /usr/sbin/logrotate
ls: /usr/sbin/logrotate: No such file or directory
What's the best alternative in bash, please?
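One portable approach is to do what logrotate's copytruncate option does: copy the live file aside, then truncate it in place, so whatever process holds the log open keeps writing to the same file. A sketch in plain shell, with the destination suffix chosen to continue the existing numbering (note there is a small window in which lines written between the copy and the truncation are lost, just as with copytruncate):
# copy the current contents to the next numbered file
cp /mroot/etc/mlog/sktrace.log /mroot/etc/mlog/sktrace.log.0000000007
# truncate the live log to 0 bytes without replacing it
: > /mroot/etc/mlog/sktrace.log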

How to remove the static message that appears when opening a Linux shell?

How to remove the following message:
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
Every time I open a Terminal it appears. I upgraded from Ubuntu 14.04 LTS to 16.04 LTS and it seems that upgrade caused it.
I am using bash.
I googled that and found this command:
$ touch ~/.hushlogin
Execute the below command and close the terminal. The message will be removed from the terminal.
sudo apt-get update
At least, my Ubuntu 14.04 machine will display (or run) all the scripts in the /etc/update-motd.d directory (motd => message of the day).
ll /etc/update-motd.d/
total 40
drwxr-xr-x 2 root root 4096 Sep 27 2014 ./
drwxr-xr-x 109 root root 4096 Nov 30 10:27 ../
-rwxr-xr-x 1 root root 1220 Feb 20 2014 00-header*
-rwxr-xr-x 1 root root 1358 Feb 20 2014 10-help-text*
lrwxrwxrwx 1 root root 46 Sep 27 2014 50-landscape-sysinfo -> /usr/share/landscape/landscape-sysinfo.wrapper*
-rwxr-xr-x 1 root root 334 Sep 27 2014 51-cloudguest*
-rwxr-xr-x 1 root root 149 Aug 22 2011 90-updates-available*
-rwxr-xr-x 1 root root 299 Aug 21 2014 91-release-upgrade*
-rwxr-xr-x 1 root root 111 Mar 27 2014 97-overlayroot*
-rwxr-xr-x 1 root root 142 Aug 22 2011 98-fsck-at-reboot*
-rwxr-xr-x 1 root root 144 Aug 22 2011 98-reboot-required*
The script with the lowest number executes first, i.e. 00-header.
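For the specific sudo hint quoted in the question, a minimal sketch, assuming the stock Ubuntu /etc/bash.bashrc (which prints the hint only when neither of these marker files exists in your home directory); the chmod line is the usual way to silence an individual MOTD script instead:
# either of these suppresses the "use sudo <command>" hint for this user
touch ~/.sudo_as_admin_successful
touch ~/.hushlogin
# to disable a single message-of-the-day script, drop its execute bit, e.g.
sudo chmod -x /etc/update-motd.d/10-help-text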

cannot access $LD_LIBRARY_PATH

Without exporting $LD_LIBRARY_PATH anew, and without doing anything with the variable in .bashrc,
echo $LD_LIBRARY_PATH
returns
/usr/local/cuda/lib64
However,
$LD_LIBRARY_PATH
returns
-bash: /usr/local/cuda/lib64:: No such file or directory
Yet, the path does exist.
What could've gone wrong?
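The error itself is expected behaviour: typing $LD_LIBRARY_PATH on its own makes the shell expand the variable and then try to execute its value as a command, so bash complains that no such command exists (the trailing :: just shows the value ends with a colon). To inspect a variable, use echo or printenv instead:
echo "$LD_LIBRARY_PATH"        # print the value
printenv LD_LIBRARY_PATH       # prints it only if it is exported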
------EDIT-----
ls -ld /usr/local{,/cuda{,/*}}
returns
drwxr-xr-x 16 root root 4096 Apr 10 17:07 /usr/local
lrwxrwxrwx 1 root root 19 Sep 16 2015 /usr/local/cuda -> /usr/local/cuda-7.5
drwxr-xr-x 3 root root 4096 Sep 16 2015 /usr/local/cuda/bin
drwxr-xr-x 5 root root 4096 Sep 16 2015 /usr/local/cuda/doc
drwxr-xr-x 4 root root 4096 Sep 16 2015 /usr/local/cuda/extras
drwxr-xr-x 5 root root 4096 Sep 16 2015 /usr/local/cuda/include
drwxr-xr-x 5 root root 4096 Sep 16 2015 /usr/local/cuda/jre
drwxr-xr-x 2 root root 4096 Sep 16 2015 /usr/local/cuda/lib
drwxr-xr-x 3 root root 4096 Sep 16 2015 /usr/local/cuda/lib64
drwxr-xr-x 8 root root 4096 Sep 16 2015 /usr/local/cuda/libnsight
drwxr-xr-x 7 root root 4096 Sep 16 2015 /usr/local/cuda/libnvvp
drwxr-xr-x 7 root root 4096 Sep 16 2015 /usr/local/cuda/nvvm
drwxr-xr-x 2 root root 4096 Sep 16 2015 /usr/local/cuda/pkgconfig
drwxr-xr-x 11 root root 4096 Sep 16 2015 /usr/local/cuda/samples
drwxr-xr-x 3 root root 4096 Sep 16 2015 /usr/local/cuda/share
drwxr-xr-x 2 root root 4096 Sep 16 2015 /usr/local/cuda/src
drwxr-xr-x 2 root root 4096 Sep 16 2015 /usr/local/cuda/tools
-rw-r--r-- 1 root root 20 Sep 16 2015 /usr/local/cuda/version.txt
The problem was resolved by modifying the Makefile.config, changing /usr/local/cuda to /usr/local/cuda-7.5.
