Ubuntu 20.04 very slow to get shell

I've loaded Ubuntu 20.04 on a fast Xeon-based machine and it takes ~20 seconds to get a shell within a terminal window. I know I've seen something about which settings to check, but now I can't find it. Any suggestions on where to look or what to change are most welcome. Thx. J

I found an answer thanks to this post!
https://askubuntu.com/questions/717961/shell-very-slow-to-load-ubuntu-14-04
I had a line in my .bashrc, trying to get proper keyboard mapping:
xmodmap ~/.xmodmap-`uname -n`
It was holding up proceedings; as soon as I commented it out, essentially removing it, I now get a shell instantly.
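If you would rather keep the mapping than drop it, one option is to stop it from blocking the prompt; this is only a sketch, assuming the map file is named after the host as in the original line:
# in ~/.bashrc: only run xmodmap if the map file exists, and run it in the
# background so a slow or missing X connection cannot hold up the shell
if [ -f "$HOME/.xmodmap-$(uname -n)" ]; then
    xmodmap "$HOME/.xmodmap-$(uname -n)" &
fi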

Related

Mallet installation - Command Prompt error, environment variable

I am using a Windows computer to install Mallet. I've run into some difficulties when using the command prompt. I followed all the installation guidelines, but every time I run bin\mallet in cmd (Figure 2) after cd C:\mallet, it states that Mallet requires an environment variable MALLET_HOME. As you can see in Figure 1, I did that already, but it's still not working. What am I doing wrong? Thanks in advance.
Figure 1
Figure 2
Your command usage is odd, but it is working, since you get the correct reply!
This is one of those frustrating cases where installation instructions fail to consider first-time users of their software.
You start a command and it does not work, so you follow the instructions diligently and add an environment setting, then go back to where it did not work, and it is now actually running correctly, because you see the error message:
MALLET requires an environment variable MALLET_HOME.
But you already did that, so now what?
You need to restart the cmd processor. Usually that's easy: close the one you're in and start another. However, in some odd cases I have found that not to be enough, and a log out/log in has resolved things when an installer uses additional setx manipulation.
In short: if you are confident you installed into C:\mallet,
Start a NEW cmd shell
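For reference, here is a minimal sketch of the whole sequence (assuming Mallet is unpacked in C:\mallet, as above); note that setx only affects cmd windows opened afterwards, which is exactly why the restart matters:
setx MALLET_HOME "C:\mallet"
REM setx writes the variable for future sessions only, so close this window,
REM open a NEW cmd window, and then run:
cd C:\mallet
bin\mallet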
Odd computer that I have. Your answers were applicable and may be useful for other people experiencing the same problem; thanks for your help. In my case, I had to run cmd as administrator and it worked!!! Feel a bit silly now ahahah!

docker on OSX slow volumes

I'm trying to use Docker beta on OSX, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and it fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) drives as the means of communication from the Docker containers to your Mac, instead of the standard OSX file system, which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can also do this as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder, create a new file in its etc folder, and title it d4m-nfs-mounts.txt
3.) Add the following line to it.
/Users/yourusername:/Users/yourusername:0:0
What the above does is let you still use relative folders with docker-compose, and it allows all ports to connect on it, hence the 0:0.
EDIT
Do not put /Volumes here!!
4.) Go to your Docker preferences and do the following.
Make sure only /tmp is showing and NOTHING ELSE. I mean nothing else: it won't work if there is anything else, since anything else will create conflicts with the NFS shares that the script will set up for you later. Restart Docker and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
EDIT: The correct way to run the command above, as another user on GitHub (if-kenn) pointed out, is ./d4m-nfs.sh, which uses the shebang to pick the shell that should run it.
If done correctly, there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
This is what mine looks like.
EDIT: IMPORTANT: Remove the /private and /Volumes entries! This should only be /Users/username now!
If you see anything other than this, you were not running it with bash. If you make any errors, you can quickly get to the exports file on the Mac and just clear it out to start over.
In Finder, just select Go to Folder
and then type /etc/exports
This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
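If you prefer the Terminal, here is a quick sketch for backing up and emptying the exports file (it needs sudo):
sudo cp /etc/exports /etc/exports.bak   # keep a backup just in case
sudo sh -c '> /etc/exports'             # empty the file so you can start over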
Also make sure no containers are running, or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this (https://github.com/IFSight/d4m-nfs/issues/3)
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there, or this won't work either: chmod -R 777 /tmp
6.) If you did it right, the output will look like this when you run the script.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except with NO MORE slowdowns!
You will need to run this any time you restart your computer or Docker.
Also note: if you get mounting errors, you probably don't have your project stored in your /Users/username directory. Remember, that is where we mounted it. If your project is somewhere other than there, you will need to modify the d4m-nfs-mounts.txt file accordingly.
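For example, if your projects lived in a hypothetical /Users/username/code folder instead, the line in d4m-nfs-mounts.txt would follow the same format as above:
/Users/username/code:/Users/username/code:0:0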
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source of alternatives to OSXFS can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credits to Eugen Mayer for setting this up.
EDIT:
The first improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanation from Docker Team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It lets you use Docker volumes via NFS.
In my case it improved performance 16 times (1.8 sec vs ~30 sec)!
Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other googlers.
Normally, volumes should be fast.
But you cannot change anything to make them faster unless you are willing to change the format of your disk.
Maybe the bottleneck is the CPU or RAM, though.
You can check that with the command docker stats. Docker for Mac is by default limited to 2 cores and 2 GB of RAM; you can change this in the Docker for Mac GUI.
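For example, a one-shot snapshot of per-container CPU and memory usage:
docker stats --no-stream   # prints one snapshot instead of streaming continuously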
I had exactly the same thing. For me using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
Not as nice as just mounting the volume as you have to start a new container for every sync but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
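As a sketch (the image name and container path below are placeholders), the flag is simply appended to the volume mapping:
docker run -v "$(pwd)":/var/www/project:cached my-image
The same :cached suffix should also work on volume entries in a docker-compose.yml.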
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

Process running when starting terminal on Mac OSX

Whenever I start Terminal on my MacBook Pro, it is running a process. I have to use Ctrl+C to kill it. If I close the window directly, it warns me that the following processes are running: login, bash, bash, perl5.12.
Any idea what might be going on here and how I can get back to the normal state?
I personally had this issue a while ago. First, check to see whether it is coming from one of your profiles. Assuming you are using bash, we will look at your bash profile.
First, make sure the problem is actually stemming from your bash profile. Source the scripts as follows:
source ~/.bash_profile
source ~/.profile
If after running those you observe the same hanging problem where you have to Ctrl+C, then you know which script has the problem.
The best way to remedy the situation is to comment out different parts of your script to figure out where the problem is. Back up your profile, then comment out half of it and do a
source ~/.bash_profile
and whether it hangs or not will tell you which half the problem is in. Keep repeating this until you find the problem. It sounds longer than it actually is.
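Another way to narrow it down, as a sketch: start a traced login shell, and the last command printed before the hang is the one that is blocking (press Ctrl+C to get back).
bash -xl   # -x traces every command as it runs, -l makes it read ~/.bash_profile like a login shell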

swi-prolog aborts (after installation via homebrew)

For various reasons I had to uninstall/reinstall Homebrew on my MacBook Pro (OS X 10.9).
I wanted to reinstall swi-prolog via Homebrew (like I did the first time). The installation process worked without any visible issue, but now every time I want to run swi-prolog in my terminal this message appears: "Abort trap: 6"
I have no clue what that means. There are a lot of things about this message on the internet, but I can't relate them to my issue.
Could you help me?
For some reason it seems that the symbolic link doesn't work correctly. In my version of swi-prolog I had to type the full path to get it to run correctly, for example:
/usr/local/Cellar/swi-prolog/6.4.1/bin/swipl
Keep in mind that your version number could be different from what I have listed above.
However, this became extremely tedious to type every time I wanted to use Prolog, so I added it as an alias with this command:
alias prolog='/usr/local/Cellar/swi-prolog/6.4.1/bin/swipl'
From that point on in the current terminal session, I was able to open it by just typing:
prolog
This way is obviously much easier; however, you need to remember to change the alias if the version changes.
The command "prolog" can of course be exchanged with any command you wish to use.
Keep in mind, if you want this alias to be more permanent (as in, still there after you close the terminal window), you will also need to add the above alias command to your ~/.bash_profile file so it runs on startup.
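For example (the version number is just the one from above; adjust it to whatever Homebrew installed for you):
echo "alias prolog='/usr/local/Cellar/swi-prolog/6.4.1/bin/swipl'" >> ~/.bash_profile
source ~/.bash_profile   # reload so the alias is available in the current session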
Hope this helps!
If I am not mistaken, swi-prolog requires X11 to run, but on Mac 10.9 there is no X11 anymore, only XQuartz.
I am not sure if this is the real problem, though.

Using START in a cmd file starts more than 2K processes

I tried to wrap a little command in a batch file to save myself from typing it all the time. But the result was a mess! I ended up with thousands of cmd processes and was unable to stop them with CTRL+C.
The command was quite simple: START iisreset
System Win7 64bit
Why is that happening?
EDIT:
With some help and additional tests I can now say that the batch command START within a *.cmd file causes that mess. It opens a new command window from every window until it crashes. Maybe you get lucky and hit CTRL+C at exactly the right time, but that really takes luck. Anyway, I will not use this command in the future, and it also does not seem to happen on all machines. (Read the comments for the full history of this.)
It works OK on Windows 7 Pro, 64-bit, but based on the other stuff you've tried, it looks like it might be a bug... You could try raising a bug report
(although that seems like a non-trivial exercise).
