Why is the Ubuntu dirmngr process consuming 100% of CPU?

Cross-posting from AskUbuntu:
I couldn't find this asked anywhere else, so apologies if it has already been asked. I've discovered a recent problem where dirmngr consumes 100% of my CPU for hours without stopping. I can't kill the process without shutting the machine down. It seems to be associated with JetBrains products (I usually hear my fan kick in during indexing), but I'm not sure about that. Does anyone have an idea what might be happening?
OS details:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic

I have a very similar problem on openSUSE, and from Googling I have found reports from Arch and Mint as well, so I think this is not specific to Ubuntu.
I could see that dirmngr was repeatedly accessing my GPG keys in ~/.ssh.
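One way to check what a running dirmngr is actually touching (these exact commands are an illustration on my part, not what was used above):
pid=$(pgrep -x dirmngr)
strace -p "$pid" -e trace=openat    # trace the file paths it opens
lsof -p "$pid"                      # or list the files it currently has open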
It is not a perfect solution, but I added "seahorse" and the necessary packages to use the GNOME keyring on my Xfce machine. I have configured the GNOME keyring to store my GPG passphrase in my login keyring and unlock it when I log in.
Now I can access services that use my stored GPG keys (such as GitHub) without entering my passphrase.
dirmngr is still running, but it now uses far less CPU and my machine runs faster.
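For anyone wanting to set up the same thing on Ubuntu or Debian, the packages involved would be roughly the ones below; my setup was on openSUSE, so these Debian-style names are an assumption:
sudo apt install seahorse gnome-keyring libpam-gnome-keyring
After that, Seahorse can be used to make the login keyring the default and store the GPG passphrase there, so it is unlocked automatically at login.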
I cannot imagine it is connected, but the problem first appeared for me when I added a second graphics card to supplement my NVIDIA Quadro K620 (GM107GL GPU). I now also have a GeForce GT 220 (GT216 GPU) with a third screen attached, for a triple-head setup using the Nouveau drivers.

Related

Substitute for running Vagrant and VirtualBox on M1 Apple Silicon

I currently have an almost perfect environment for local development using Vagrant and VirtualBox while working on an Apple M1 (ARM-based) machine. The configuration diagram is shown in the image below:
[Configuration diagram]
I am quite happy with this setup; it doesn't take long to prepare. What is most satisfying about this solution is the code synchronization time, which is instantaneous with no noticeable delay, and the code compilation time. The latter, in the case of preparing a Debian package, decreased from 12 minutes (with the standard local use of VirtualBox) to 2.5 minutes!
More details:
VPN service is running on Synology NAS
Code synchronization uses Synology Drive
Do you have any insights or tips?

Performance issues on WSL 2

For the last two months I've (tried to) embrace WSL2 as my main development environment. It works fine with most small projects, but when it comes to complex ones, things slow down to the point that working on WSL2 becomes impossible. By a complex project I mean a monorepo with React, Node, various libraries, etc. This same monorepo, on the same machine, works just fine when run from Windows itself.
Please note that when working on WSL2 all my files are in the Linux environment; I'm not trying to access Windows files from WSL2.
I have the latest Docker Desktop installed, with WSL2 integration and Kubernetes enabled, but the issue persists even with Docker completely stopped.
I've also tried limiting the memory consumption for WSL2, but that doesn't seem to fix the problem.
My machine is an Aero 15X with 16 GB of RAM. A colleague suggested upgrading to 32 GB, but before trying that, or "switching back" to Windows for now, I'd like to see if anyone has suggestions I could test out.
Thanks.
The recent kernel version (Linux MSI-wsl 5.10.16.3) starts slower overall than previous ones. But the root cause can be outside WSL: if you have a new NVIDIA GeForce card installed, Windows lets it claim as much memory as it can, i.e. 6-16 GB, without actually using it. I had to limit WSL memory to 8 GB to start the WSL service without an OOM error. Try playing with this parameter in .wslconfig in your home directory, and look at the WSL_Console_Log in the same place. If the timestamps in that file are in milliseconds, my kernel starts in 55 ms and then hangs on networking.
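A minimal .wslconfig sketch for that memory cap (it lives at %UserProfile%\.wslconfig on the Windows side; the 8GB value mirrors the limit that worked above, so adjust it to your machine):
[wsl2]
memory=8GB      # cap the RAM the WSL 2 VM can claim
processors=4    # optional: also cap the number of virtual CPUs
After saving it, run wsl --shutdown from a Windows prompt so the VM restarts with the new limits.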
I'm afraid that the WSL kernel network driver
lshw -c network
*-network
description: Ethernet interface
physical id: 1
logical name: eth0
serial: 00:15:5d:52:c5:c0
size: 10Gbit/s
capabilities: ethernet physical
configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.10.16.3-microsoft-standard-WS duplex=full firmware=N/A ip=172.20.186.108 link=yes multicast=yes speed=10Gbit/s
is not as fast as it is expected to be.

pfSense boot loops after installation

I'm trying to install pfSense 2.3.4 on a system with an Atom processor and a Seagate Barracuda 500 GB SATA HDD.
pfSense installs correctly, i.e. there are no errors during installation. However, after it reboots it gets stuck in a boot loop. All I'm able to see on the console is this:
F1 pfSense
F6 PXE
Boot: F1
after which the machine restarts and this continues indefinitely.
When installing, I made sure to uncheck 'Packet Mode', as that was causing this boot loop issue for many users, but that hasn't solved it.
I've tried 2 hard drives and a CF card but the issue persists.
My device is the Lanner FW-7525
Any help would be appreciated!
TIA xD
Did you use the serial console variant of pfSense?
Your system could be trying to load VGA framebuffers which don't exist on your hardware, failing, and then boot looping.
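One thing worth trying, assuming the stock FreeBSD loader variables (I don't know your exact image, so treat these as a sketch), is to force the serial console in /boot/loader.conf.local on the installed disk:
boot_serial="YES"
console="comconsole"
comconsole_speed="115200"
If the loader messages then appear over serial instead of the box resetting, the missing VGA framebuffer is the likely culprit.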
You may also be hitting the Atom C2000 bug: https://www.servethehome.com/intel-atom-c2000-series-bug-quiet/
I'm not sure if this applies to the issue you need to resolve, but a number of the symptoms seem similar.
My install was on a hypervisor, and I thought the installation went just fine, but upon powering on the VM I ran into issues booting from the designated storage device.
My problem, however, was resolved simply by using the previous release and nothing more.
It does seem strange that your storage is unusable too, even though you also appeared to have a smooth install process without errors.
Sorry, that's the best personal-experience knowledge I can offer; I hope you reach a solution.

Would TensorFlow utilize GPU on a Mac if installed on a VM?

From TensorFlow's "Getting Started" page:
# Only CPU-version is available at the moment.
$ pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
I'm not super familiar with using GPU or CUDA libraries, but if I installed TensorFlow inside a Linux VM (say the precise32 available through Vagrant), then would TensorFlow utilize the GPU when running inside that VM?
Probably not. VirtualBox, for example, does not support PCI Passthrough on a MacOS host, only a Linux host (and even then, I'd... uh, not get my hopes up). MacOS ends up so tightly integrated with its GPU(s) that I'd be very dubious that any VM can do it at this point.
As an update: TensorFlow can now use GPUs on Mac OS X. The relevant PR is https://github.com/tensorflow/tensorflow/pull/664, and after a brew install coreutils the Linux 'build from source' installation instructions should work. I see a 10x speedup compared to the CPU version with an NVIDIA GeForce 960 and an Intel i7-6700K.
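A quick way to confirm the GPU build is actually being used, assuming the Session-style (pre-2.x) Python API those builds shipped with:
python -c "import tensorflow as tf; s = tf.Session(config=tf.ConfigProto(log_device_placement=True)); print(s.run(tf.constant(2) * tf.constant(3)))"
The device placement log printed to stderr should include /gpu:0 entries when the CUDA build is in use.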
Edit (or downdate?): Starting with macOS Mojave, due to some API changes and what appears to be a long-standing feud between Apple and NVIDIA, drivers for NVIDIA graphics cards are no longer available. No NVIDIA means no CUDA, which means no TensorFlow, nor really any other respectable machine learning. It appears something like Google Colaboratory is the way to go for now.

Recover Windows 7

I have just started with Ubuntu and have hit my first serious problem. I'm looking for help.
I have an HP Pavilion dv6 with an i7. I had Windows 7 installed and decided to also install Ubuntu from a USB drive.
My first attempt was to install Ubuntu 11.10 following the instructions on the official Ubuntu website. When booting from the pendrive, my PC got stuck at the main Ubuntu menu; after searching, I found it could be due to a problem with my AMD Radeon graphics card (or not), but I decided to change versions.
Then I used Ubuntu 10.04. This time I could get from the start menu into the Ubuntu live session. There I decided to install it, because I liked it and I need to develop for Google TV (which is not possible on Windows).
And I failed in the partitioning step. I tried to follow the instructions on this page:
http://hadesbego.blogspot.com/2010/08/instalando-linux-en-hp-pavilion-dv6.html
but some things had changed a bit, so I improvised. I shrank the Windows partition from about 700,000 MB to 600,000 MB, leaving 100 GB free to install Linux there. The mistake was setting it to ext3 (it was NTFS). I thought only the new 100 GB partition would be set to ext3 and the Windows partition would stay NTFS, but that was not the case.
The upshot is that I can no longer boot Windows, and on top of that I cannot install Ubuntu on the 100 GB of free space.
Does anyone think they can help? Is there any easy way to convert the Windows partition back to NTFS without losing data?
Thank you very much.
You should be able to hit F11 when the machine is booting up and go to the HP recovery application. This should let you reset to factory default.
You should definitely be able to install Ubuntu on the new 100GB partition as well. Just make sure you choose the right partition to install it on.
You will need to recover using recovery CDs/DVDs. You must have been using the installer's GParted utility in Linux to "re-partition" your drive, and in doing so you scrubbed some boot files.
If you successfully recover using the recovery media, you can use Disk Management in Windows 7 to shrink or extend your volume. In your case you would shrink it by 100 GB; then, when installing Linux, GParted will see that available 100 GB and install there while Windows will still run.
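If you prefer a command line over the Disk Management GUI, the same shrink can be done with diskpart from an elevated prompt; the volume number below is just an example, so check the list volume output first:
diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> shrink desired=102400
The desired value is in MB, so 102400 corresponds to the 100 GB mentioned above.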
Also, you should probably be using the ext4 filesystem, not ext3; you would only want ext3 for compatibility reasons.
