AWS: Can't Bundle AMI - amazon-ec2

I am trying to create an AMI bundle following these instructions, but am running into an error. When I get to
ec2-bundle-vol -d /mnt -k /mnt/pk-XXX.pem -c /mnt/cert-YYY.pem -u 123456789012 -r i386 -p rightscale_ami
and run it (using my correct variables, of course) I get:
ERROR: You need to be root to run /vol/downloads/ec2-ami-tools-1.3-66634//lib/ec2/amitools/bundlevol.rb
I am not sure what the problem is. I tried changing the permissions around, but to no avail.
I am running Ubuntu 11.04 Server on a large instance, have installed the EC2 AMI and EC2 API tools, added them to my PATH and set their respective environment variables, and have run sudo aptitude install ruby. Maybe I need something else with Ruby? Please help! Thanks.

I ended up installing the AMI and API tools from the multiverse repository via Ubuntu's apt package manager. When I installed the tools this way, I could correctly run the command as root via sudo, whereas with the original install it looked like the superuser couldn't get access to my environment variables.
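A minimal sketch of that approach, assuming multiverse is enabled in your apt sources and that the packages are named ec2-ami-tools and ec2-api-tools (sudo -E is one way to keep your environment variables visible to root):
sudo apt-get update
sudo apt-get install ec2-ami-tools ec2-api-tools
sudo -E ec2-bundle-vol -d /mnt -k /mnt/pk-XXX.pem -c /mnt/cert-YYY.pem -u 123456789012 -r i386 -p rightscale_ami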

Related

How can I install Heroku on my Kali Linux operating system?

I can't install Heroku on my Kali Linux operating system. How can I resolve this issue?
Is it not possible to run Heroku on Kali Linux?
When I try to install it, it says snap: command not found.
Heroku no longer supports Snap installs:
Snap installs are no longer supported. Please use another install method below.
Since Kali is derived from Debian, you should be able to use the Debian / Ubuntu method (which doesn't auto-update) or the standalone tarball method (which does). You can also use the NPM / Yarn package if you prefer, though Heroku recommends against it.
All of these options require some amount of trust in Heroku. The first two pull a script down from the Internet and pipe it into sh, which always makes me a bit uneasy. I suspect they both request elevated privileges during the install process. Instead of piping the file directly into sh as Heroku recommends, I suggest you download it and at least give it a quick read-through the first time.
In any case, here is the command that Heroku recommends to install the standalone version:
curl https://cli-assets.heroku.com/install.sh | sh
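If you'd rather not pipe straight into sh, a download-and-inspect variant might look like this (same script, just saved locally first):
curl -fsSL -o heroku-install.sh https://cli-assets.heroku.com/install.sh
less heroku-install.sh
sh heroku-install.sh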

pkill does not seem to remove dpkg process in Ubuntu 18.04 LTS

I'm currently using Ubuntu 18.04 LTS, and I am trying to install GitLab via the instructions on https://www.linode.com/docs/development/version-control/install-gitlab-with-docker/.
Initially, I was following the instructions to download and install GitLab on Ubuntu 18.04 LTS at: https://about.gitlab.com/install/#ubuntu, which led to a problem similar to the one posed here: https://askubuntu.com/questions/637962/gitlab-install-is-stuck-at-0-on-ubuntu.
I then tried removing the processes involving dpkg with sudo pkill gitlab, following the instructions at: https://unix.stackexchange.com/questions/94430/process-id-and-killing-process-ps-commmand.
However, I obtained an error. Is there a way to resolve this so that the message you must manually run 'sudo dpkg --configure -a' to correct the problem does not appear again?
To answer your specific question, running dpkg --configure -a once should resolve the issue, and you won't see the message again on future apt install executions. This problem arises because apt was killed in the middle of doing its work.
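For reference, that one-time recovery step is simply:
sudo dpkg --configure -a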
It seems like the root of the issue may be that you cannot access the GitLab package repository, or CloudFront, to pull the package?
Are you able to access https://packages.gitlab.com/gitlab/gitlab-ee from this system? i.e. curl https://packages.gitlab.com/gitlab/gitlab-ee
If the above works, can you try downloading an actual package manually to see if that works? i.e. wget --content-disposition https://packages.gitlab.com/gitlab/gitlab-ee/packages/ubuntu/xenial/gitlab-ee_12.2.4-ee.0_amd64.deb/download.deb
The image itself is served via CloudFront. So I wonder if you're able to connect to https://packages.gitlab.com but not cloudfront.net once the actual file is served.

How to install a GUI on Amazon AWS EC2 or EMR with the Amazon AMI

I have a need to run an application that requires a GUI interface to start and configure. I also need to be able to run this application on Amazon's EC2 service and EMR service. The EMR requirement means it has to run on Amazon's Linux AMI.
After extensive searching I've been unable to find any ready made solutions, in particular the requirement to run on Amazon's AMI. The closest match and most often referenced solution is here. Unfortunately it was developed on a RHEL6 instance which differs enough from Amazon's AMI that the solution does not work.
I'm posting my solution below. Hopefully it will save some others from the many hours of experimentation it took to come up with the right recipe.
Here is my solution to get a GUI running on Amazon's AMI. I used this post as a starting point, but had to make many changes to get it working on Amazon's AMI. I also added additional info to make this work in a reasonably automated way so an individual who needs to bring up this environment more than once could do it without too much hassle.
Note: I include a lot of commentary in this post. I apologize in advance, but I thought it might be helpful to someone needing to make modifications if they could understand why I made the various choices along the way.
The scripts included below install some files along the way. See section 4 for a list of the files and the directory structure used by these scripts.
Step 1. Install the Desktop
After performing a 'yum update', most solutions include a line like
sudo yum groupinstall -y "Desktop"
This deceptively simple step requires significantly more effort on the Amazon AMI. This group is not configured in the Amazon AMI (AAMI from here on out). The AAMI has Amazon's own repositories installed and enabled by default. Also installed is the epel repo, but it is disabled by default. After enabling epel I found the Desktop group, but it was not populated with packages. I also found Xfce (another desktop alternative), which was populated. Eventually I decided to install Xfce rather than Desktop. Still, that was not straightforward, but it eventually led to the solution.
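If you want to check what the repos expose before committing, yum's group commands can show whether the Xfce group is visible and roughly what it would pull in (a side check, not part of the recipe below):
sudo yum --enablerepo epel grouplist | grep -i xfce
sudo yum --enablerepo epel groupinfo Xfce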
Here it's worth noting that the first thing I tried was to install the centos repository and install the Desktop group from there. Initially this seemed promising. The group was fully populated with packages. However, after some effort I eventually decided there were simply too many version conflicts between the dependencies and packages that were already installed on the AAMI.
This led me to choose Xfce from the epel repo. Since the epel repo was already installed on the AAMI, I figured there would be better dependency version coordination with the Amazon repos. This was generally true. Many dependencies were found either in the epel repo or the Amazon repos. For the ones that weren't, I was able to find them in the centos repo, and in most cases those were leaf dependencies. So most of the trouble came from the few dependencies in the centos repo that had sub-dependencies which conflicted with the Amazon or epel repos. In the end a few hacks were required to bypass the dependency conflicts. I tried to minimize those as much as possible. Here is the script for installing Xfce.
installXfce.sh
#!/bin/bash
# echo each command
set -x
# assumes RSRC_DIR and IS_EMR set by parent script
YUM_RSRC_DIR=$RSRC_DIR/yum
sudo yum -y update
# Most info I've found on installing a GUI on AWS suggests to install using
#> sudo yum groupinstall -y "Desktop"
# This group is not available by default on the Amazon Linux AMI. The group
# is listed if the epel repo is enabled, but it is empty. I tried installing
# the centos repo, which does have support for this group, but it simply end
# up having to many dependency version conflicts with packages already installed
# by the Amazon repos.
#
# I found the path of least resistance to be installing the group Xfce from
# the epel repo. The epel repo is already included in amazon image, just not enabled.
# So I'm guessing there was at least some consideration by Amazon to align
# the dependency versions of this repo with the Amazon repos.
#
# My general approach to this problem was to start with the last command:
#> sudo yum groupinstall -y Xfce
# which will generate a list of missing dependencies. The script below
# essentially works backwards through that list to eliminate all the
# missing dependencies.
#
# In general, many of the dependencies required by Xfce are found in either
# the epel repo or the Amazon repos. Most of the remaining dependencies can be
# found in the centos repo, and either don't have any further dependencies, or if they
# do those dependencies are satisfied with the centos repo with no collisions
# in the epel or amazon repo. Then there are a couple of oddball dependencies
# to clean up.
# if yum-config-manager is not found then install yum-utils
#> sudo yum install yum-utils
sudo yum-config-manager --enable epel
# install centos repo
# place the repo config # /etc/yum.repos.d/centos.repo
sudo cp $YUM_RSRC_DIR/yum.repos.d/centos.repo /etc/yum.repos.d/
# The config centos.repo specifies the key with a URL. If for some reason the key
# must be in a local file, it can be found here: https://www.centos.org/keys/RPM-GPG-KEY-CentOS-6
# It can be installed to the right location in one step:
#> wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6 https://www.centos.org/keys/RPM-GPG-KEY-CentOS-6
# Note, a key file must also be installed in the system key ring. The docs are a bit confusing
# on this, I found that I needed to run both gpg AND then followed by rpm, eg:
#> sudo gpg --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#> sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
# I found there are a lot of version conflicts between the centos, Amazon and epel repos.
# So I did not enable the centos repo generally. Instead I used the --enablerepo switch
# enable it explicitly for each yum command that required it. This only works for yum. If
# rpm must be used, then yum-config-manager must be used to enable/disable repos as a
# separate step.
#
# Another problem I ran into was yum installing the 32-bit (*.i686) package rather than
# the 64-bit (*.x86_64) version of the package. I never figured out why. So I had
# to specify the *.x86_64 package explicitly. The search tools (eg. 'whatprovides')
# did not list the 64 bit package either even though a manual search through the
# package showed the 64 bit components were present.
#
# Sometimes it is difficult to determine which package must be in installed to satisfy
# a particular dependency. 'whatprovides' is a very useful tool for this
#> yum --enablerepo centos whatprovides libgdk_pixbuf-2.0.so.0
#> rpm -q --whatprovides libgdk_pixbuf
sudo yum --enablerepo centos install -y gdk-pixbuf2.x86_64
sudo yum --enablerepo centos install -y gtk2.x86_64
sudo yum --enablerepo centos install -y libnotify.x86_64
sudo yum --enablerepo centos install -y gnome-icon-theme
sudo yum --enablerepo centos install -y redhat-menus
sudo yum --enablerepo centos install -y gstreamer-plugins-base.x86_64
# problem when we get to libvte, installing libvte requires expat, which conflicts with amazon lib
# the centos package version was older and did not install the right lib version
# but … the expat dependency was coming from a dependency on python-libs.
# the easiest workaround was to install python using the amazon repo, that in turn
# installs a version of python libs that is compatible with the version of libexpat on the system.
sudo yum install -y python
sudo yum --enablerepo centos install -y vte.x86_64
sudo yum --enablerepo centos install -y libical.x86_64
sudo yum --enablerepo centos install -y gnome-keyring.x86_64
# another sticky point, xfdesktop requires desktop-backgrounds-basic, but ‘whatprovides’ does not
# provide any packages for this query (not sure why). It turns out this is provided by the centos
# repo, installing ‘desktop-backgrounds-basic’ will try to install the package redhat-logos, but
# unfortunately this is obsoleted by Amazon’s generic-logos package
# The only way I could find to get around this was to erase the generic logos package.
# This doesn't seem too risky since this is just images for the desktop and menus.
#
sudo yum erase -y generic-logos
# Amazon repo must be disabled to prevent interference with the install
# of redhat-logos
sudo yum --disablerepo amzn-main --enablerepo centos install -y redhat-logos
# next problem is a dependency on dbus. The dependency comes from dbus-x11 in
# centos repo. It requires dbus version 1.2.24, the amazon image already has
# version 1.6.12 installed. Since the dbus-x11 is only used by the GUI package,
# easiest way around this is to install dbus-x11 with no dependency checks.
# So it will use the newer version of dbus (should be OK). The main thing that could be a problem
# here is if it skips some other dependency. When doing manually, its possible to run the install until
# the only error left is the dbus dependency. It’s a bit risky running in a script since, basically it’s assuming
# all the dependencies are already in place.
yumdownloader --enablerepo centos dbus-x11.x86_64
sudo rpm -ivh --nodeps dbus-x11-1.2.24-8.el6_6.x86_64.rpm
rm dbus-x11-1.2.24-8.el6_6.x86_64.rpm
sudo yum install -y xfdesktop.x86_64
# We need the version of poppler-glib from centos repo, but it is found in several repos.
# Disable the other repos for this step.
# On EMR systems a newer version of poppler is already installed. So move up 1 level
# in dependency chain and force install of tumbler.
if [ $IS_EMR -eq 1 ]
then
yumdownloader --enablerepo centos tumbler.x86_64
sudo rpm -ivh --nodeps tumbler-0.1.21-1.el6.x86_64.rpm
else
sudo yum --disablerepo amzn-main --disablerepo amzn-updates --disablerepo epel --enablerepo centos install -y poppler-glib
fi
sudo yum install --enablerepo centos -y polkit-gnome.x86_64
sudo yum install --enablerepo centos -y control-center-filesystem.x86_64
sudo yum groupinstall -y Xfce
Here are the contents for the centos repository config file:
centos.repo
[centos]
name=CentOS mirror
baseurl=http://repo1.ash.innoscale.net/centos/6/os/x86_64/
failovermethod=priority
enabled=0
gpgcheck=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-6
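Before moving on, you can sanity-check that yum sees the new repo (a quick check, assuming centos.repo was copied into /etc/yum.repos.d/ as above):
sudo yum --enablerepo centos repolist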
If all you needed was a recipe to get a desktop package installed on the Amazon AMI, then you're done. The rest of this post covers how to configure VNC to access the desktop via an SSH tunnel, and how to package all of this so that the instance can be easily started from a script.
Step 2. Install and Configure VNC
Below is my top-level script for installing the GUI. After configuring a few variables, the first thing it does is call the script from Step 1 above. This script has some extra baggage since I've built it to work on a regular EC2 instance or on EMR, and as root or as ec2-user. The essential steps are
install libXfont
install tiger-vnc-server
install the VNC server config file
create a .vnc directory in the user home directory
install the xstartup file in the .vnc directory
install a dummy passwd file in the .vnc directory
start the VNC server
A few key points to note:
This assumes you will access the VNC server through an SSH tunnel. In the end this really seemed like the easiest and most reliably secure way to go. Since you probably already have a port for SSH open in your security group, you won't have to make any changes to it. Also, the encryption config for VNC clients and servers is not straightforward; it seemed easy to make a mistake and leave your communications unencrypted. The settings for this are in the vncservers file. The -localhost switch tells VNC to accept only local connections. The '-nolisten tcp' switch tells the associated X server modules to also not accept connections from the network. Finally, the '-SecurityTypes None' switch allows you to open your VNC session without typing a password; since the only way into the machine is through SSH, the additional password check seems redundant.
The xstartup file determines what will start when your VNC session is initiated the first time. I've noticed many posts on this subject skip this point. If you don't tell it to start the Xfce desktop, you will just get a blank window when you start VNC. The config I have here is very simple.
Even though I mentioned above that the VNC server is configured to not prompt for a password, it nevertheless requires a passwd file in the .vnc directory in order for the server to start. The first time you run the script it will fail when it tries to start the server. Log in to the machine via SSH and run 'vncpasswd'. It will create a passwd file in the .vnc directory that you can save and reuse as part of these scripts during install. Note, I've read that VNC does not do anything sophisticated to protect the passwd file, so I would not recommend using a password that you use for other, more important accounts.
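Here is a minimal sketch of that one-time passwd bootstrap, assuming the EFS layout described in Step 4 (the paths are illustrative):
# run once, as the user you will connect to VNC as (e.g. ec2-user)
vncpasswd
# stash the generated file where the install script expects to find it
cp ~/.vnc/passwd /mnt/YOUR_MOUNT_POINT_DIR/rsrc/vnc/passwd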
installGui.sh
#!/bin/bash
# echo each command
set -x
BIN_DIR="${BASH_SOURCE%/*}"
ROOT_DIR=$(dirname $BIN_DIR)
RSRC_DIR=$ROOT_DIR/rsrc
VNC_DIR=$RSRC_DIR/vnc
# Install user config files into ec2-user home directory
# if it is available. In practice, this should always
# be true
if [ -d "/home/ec2-user" ]
then
USER_ACCT=ec2-user
else
USER_ACCT=hadoop
fi
HOME_DIR="/home"
# Use existence of hadoop home directory as proxy to determine if
# this is an EMR system. Can be used later to differentiate
# steps on EC2 system vs EMR.
if [ -d "/home/hadoop" ]
then
IS_EMR=1
else
IS_EMR=0
fi
# execute Xfce desktop install
. "$BIN_DIR/installXfce.sh"
# now roughly follow the following from step 3: https://devopscube.com/setup-gui-for-amazon-ec2-linux/
sudo yum install -y pixman pixman-devel libXfont
sudo yum -y install tigervnc-server
# install the user account configuration file.
# This setup assumes the user will always connect to the VNC server
# through an SSH tunnel. This is generally more secure, easier to
# configure and easier to get correct than trying to allow direct
# connections via TCP.
# Therefore, config VNC server to only accept local connections, and
# no password required.
sudo cp $VNC_DIR/vncservers-$USER_ACCT /etc/sysconfig/vncservers
# install the user account, vnc config files
sudo mkdir $HOME_DIR/$USER_ACCT/.vnc
sudo chown $USER_ACCT:$USER_ACCT $HOME_DIR/$USER_ACCT/.vnc
# need xstartup file to tell vncserver to start the window manager
sudo cp $VNC_DIR/xstartup $HOME_DIR/$USER_ACCT/.vnc/
sudo chown $USER_ACCT:$USER_ACCT $HOME_DIR/$USER_ACCT/.vnc/xstartup
# Even though the VNC server is config'd to not require a passwd, the
# server still looks for the passwd file when it starts the session.
# It will fail if the passwd file is not found.
# The first time these scripts are run, the final step will fail.
# Then manually run
#> vncpasswd
# It will create the file ~/.vnc/passwd. Then save this file to persistent
# storage so that it can be installed to the user account during
# server initialization.
sudo cp $VNC_DIR/passwd $HOME_DIR/$USER_ACCT/.vnc/
sudo chown $USER_ACCT:$USER_ACCT $HOME_DIR/$USER_ACCT/.vnc/passwd
# This script will be running as root if called from the EC2 launch
# command. VNC server needs to be started as the user that
# you will connect to the server as (eg. ec2-user, hadoop, etc.)
sudo su -c "sudo service vncserver start" -s /bin/sh $USER_ACCT
# how to stop vncserver
# vncserver -kill :1
# On the remote client
# 1. start the ssh tunnel
#> ssh -i ~/.ssh/<YOUR_KEY_FILE>.pem -L 5901:localhost:5901 -N ec2-user@<YOUR_SERVER_PUBLIC_IP>
# for debugging connection use -vvv switch
# 2. connect to the vnc server using client on the remote machine. When
# prompted for the IP address, use 'localhost:5901'
# This connects to port 5901 on your local machine, which is where the ssh
# tunnel is listening.
vncservers
# The VNCSERVERS variable is a list of display:user pairs.
#
# Uncomment the lines below to start a VNC server on display :2
# as my 'myusername' (adjust this to your own). You will also
# need to set a VNC password; run 'man vncpasswd' to see how
# to do that.
#
# DO NOT RUN THIS SERVICE if your local area network is
# untrusted! For a secure way of using VNC, see this URL:
# http://kbase.redhat.com/faq/docs/DOC-7028
# Use "-nolisten tcp" to prevent X connections to your VNC server via TCP.
# Use "-localhost" to prevent remote VNC clients connecting except when
# doing so through a secure tunnel. See the "-via" option in the
# `man vncviewer' manual page.
# Use "-SecurityTypes None" to allow session login without a password.
# This should only be used in combination with "-localhost"
# Note: VNC server still looks for the passwd file in ~/.vnc directory
# when the session starts regardless of whether the user is
# required to enter a passwd.
# VNCSERVERS="2:myusername"
# VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -localhost"
VNCSERVERS="1:ec2-user"
VNCSERVERARGS[1]="-geometry 1280x1024 -nolisten tcp -localhost -SecurityTypes None"
xstartup
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
# exec /etc/X11/xinit/xinitrc
/usr/share/vte/termcap/xterm &
/usr/bin/startxfce4 &
Step 3. Connect to Your Instance
Once you've got the VNC server running on EC2 you can try connecting to it. First open an SSH tunnel to your instance. 5901 is the port where the VNC server listens for display 1 from the vncservers file. It will listen for display 2 on port 5902, etc. This command creates a tunnel from port 5901 on your local machine to port 5901 on the instance.
ssh -i ~/.ssh/<YOUR_KEY_FILE>.pem -L 5901:localhost:5901 -N ec2-user@<YOUR_SERVER_PUBLIC_IP>
Now open your preferred VNC client. Where it prompts for the IP address of the server enter:
localhost:5901
If nothing happens at all, then either there was a problem starting the VNC server, there is a connectivity problem preventing the client from reaching the server, or possibly there is a problem in the vncservers config file.
If a window comes up, but it is just blank then check that the Xfce install completed successfully and that the xstartup file is installed.
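If you need to debug, a few checks worth trying on the instance (these are general suggestions, not part of the original scripts):
# is the VNC server up and listening on 5901?
sudo service vncserver status
ss -tlnp | grep 5901        # or netstat -tlnp if ss is unavailable
# the server writes a per-display log in the user's .vnc directory
cat ~/.vnc/*:1.log
# a blank desktop often means xstartup didn't run; make sure it is executable
chmod +x ~/.vnc/xstartup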
Step 4. Simplify
If you just need to do this once then sftp'ing the scripts over to your instance and running manually is fine. Otherwise you're going to want to automate this as much as possible to make it faster and less error prone when you do need to fire up an instance with a GUI.
The first step to automating is to create an EFS volume containing the scripts and config files that can be mounted when the instance is started. Amazon has plenty of info on creating a network file system. A couple of points to pay attention to when creating the volume: if you don't want your volume to be open to the world, you may want to create a custom security group to use for your EFS volume. I created a security group for my EFS volume (call it NFS_Mount) that only allows inbound TCP traffic on port 2049 coming from one of my other security groups, call it MasterVNC. Then when you create an instance, make sure to associate the MasterVNC security group with that instance. Otherwise the EFS volume won't allow your instance to connect to it.
Now mount the EFS volume:
sudo mkdir /mnt/YOUR_MOUNT_POINT_DIR
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-YOUR_EFS_ID.efs.us-east-1.amazonaws.com:/ /mnt/YOUR_MOUNT_POINT_DIR
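A quick way to confirm the mount succeeded (just a sanity check, not required):
df -hT /mnt/YOUR_MOUNT_POINT_DIR    # should show an nfs4 filesystem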
Now populate /mnt/YOUR_MOUNT_POINT_DIR with the 6 files mentioned in steps 1 and 2 using the following directory structure. Recall that you must create the passwd file the first time using the command 'vncpasswd'. It will create the file at ~/.vnc/passwd.
/mnt/YOUR_MOUNT_POINT_DIR/bin/installGui.sh
/mnt/YOUR_MOUNT_POINT_DIR/bin/installXfce.sh
/mnt/YOUR_MOUNT_POINT_DIR/rsrc/vnc/vncservers-ec2-user
/mnt/YOUR_MOUNT_POINT_DIR/rsrc/vnc/xstartup
/mnt/YOUR_MOUNT_POINT_DIR/rsrc/vnc/passwd
/mnt/YOUR_MOUNT_POINT_DIR/rsrc/yum/yum.repos.d/centos.repo
At this point, setting up an instance with a GUI should be pretty easy. Create your instance as you normally would (make sure to include the MasterVNC security group), ssh to the instance, mount the EFS volume, and run the installGui.sh script.
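Assuming the layout above, the manual run on a fresh instance boils down to something like this (the paths are the placeholders used throughout this post):
sudo mkdir /mnt/YOUR_MOUNT_POINT_DIR
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-YOUR_EFS_ID.efs.us-east-1.amazonaws.com:/ /mnt/YOUR_MOUNT_POINT_DIR
. /mnt/YOUR_MOUNT_POINT_DIR/bin/installGui.sh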
Step 5. Automate
You can take things a step further and launch your instance in 1 step using the AWS CLI tools on your local machine. To do this you will need to mount the EFS volume and run the installGui.sh script using arguments to the AWS CLI commands. This just requires creating a top level script and passing it to the CLI command.
Of course there are a couple complications. EC2 and EMR use different switches and mechanisms to attach the script. And furthermore, on EMR I only want the GUI to be installed on the master node (not the core or task nodes).
Launching an EC2 instance requires embedding the script in the command with the --user-data switch. This is done easily by specifying the absolute path to the script file on your local machine.
aws ec2 run-instances --user-data file:///PATH_TO_YOUR_SCRIPT/top.sh ... other options
The EMR launch does not support embedding scripts from a local file. Instead you can specify an S3 URI in the bootstrap actions.
aws emr create-cluster --bootstrap-actions '[{"Path":"s3://YOUR_BUCKET/YOUR_DIR/top.sh","Name":"Custom action"}]' ... other options
Finally, you'll see in top.sh below that most of the script is a function to determine if the machine is a basic EC2 instance or an EMR master. If not for that, the script could be 3 lines. You may wonder why not just use the built-in 'run-if' bootstrap action rather than writing my own function: the built-in 'run-if' script has a bug and does not properly run scripts located in S3.
Debugging things once you put them in the init sequence can be a challenge. One thing that can help is the log file: /var/log/cloud-init-output.log. This captures all the console output from the scripts run during bootstrap initialization.
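One quick way to follow that log while the instance is bootstrapping (after SSHing in):
sudo tail -f /var/log/cloud-init-output.log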
top.sh
#!/bin/bash
# note: conditional bootstrap function run-if has a bug, workaround ...
# this function adapted from https://forums.aws.amazon.com/thread.jspa?threadID=222418
# Determine if we are running on the master node.
# exit 1 - running on the master node, or on a non-EMR machine
# exit 0 - running on a task or core node
check_if_master_or_non_emr() {
python - <<'__SCRIPT__'
import sys
import json
instance_file = "/mnt/var/lib/info/instance.json"
try:
    with open(instance_file) as f:
        props = json.load(f)
    is_master_or_non_emr = props.get('isMaster', False)
except IOError as ex:
    is_master_or_non_emr = True  # file will not exist when testing on a non-emr machine
if is_master_or_non_emr:
    sys.exit(1)
else:
    sys.exit(0)
__SCRIPT__
}
check_if_master_or_non_emr
IS_MASTER_OR_NON_EMR=$?
# If this machine is part of EMR cluster, then ONLY install on the MASTER node
if [ $IS_MASTER_OR_NON_EMR -eq 1 ]
then
sudo mkdir /mnt/YOUR_MOUNT_POINT_DIR
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-YOUR_EFS_ID.efs.us-east-1.amazonaws.com:/ /mnt/YOUR_MOUNT_POINT_DIR
. /mnt/YOUR_MOUNT_POINT_DIR/bin/installGui.sh
fi
exit 0

Installing OpenFOAM through Docker

I'm having a bad time trying to install OpenFOAM using Docker (on Mac OS X El Capitan). I've been following the official tutorial.
When I try to execute the first script (installOpenFOAM+), through the command line:
docker-machine ssh default $HOME/installOpenFOAM+ $HOME
I get the following result on the terminal screen:
machine does not exist
I've been looking for a solution online over and over but it seems nobody has had an issue like this. Has someone here faced the same problem?
Try making the install script executable before running it; that seems to work for some people. That is, use
chmod +x installOpenFOAM+
I also had a tough time installing OpenFOAM using Docker.
After you install Docker, you need to create a virtual machine (named default).
Once that is done, change the permissions of the install script, then try to install it.
docker-machine create --driver virtualbox default
chmod a+x installMacOpenFOAM+
docker-machine ssh default $HOME/installMacOpenFOAM+ $HOME
I am not able to start the application.

VirtualBox: mount.vboxsf: mounting failed with the error: No such device [closed]

I'm using VirtualBox with OS X as host and CentOS on the guest VM.
In OS X I created a folder myfolder, added it as a shared folder to the VM, turned on the VM, created the folder /home/user/myfolder in CentOS, and typed:
sudo mount -t vboxsf myfolder /home/user/myfolder
and got this output:
/sbin/mount.vboxsf: mounting failed with the error: No such device
What am I doing wrong?
UPDATED:
Guest Additions are installed.
My shared folder/clipboard stopped working for some reason (probably due to a patch installation on my virtual machine).
sudo mount -t vboxsf Shared_Folder ~/SF/
gave the following result:
VirtualBox: mount.vboxsf: mounting failed with the error: No such device
The solution for me was to stop vboxadd and do a setup after that:
cd /opt/VBoxGuestAdditions-*/init
sudo ./vboxadd setup
You're using shared folders, so you need to install VirtualBox Guest Additions inside your virtual machine to support that feature.
Vagrant
If you're using Vagrant (OS X: brew cask install vagrant), run:
vagrant plugin install vagrant-vbguest
vagrant vbguest
In case it fails, check the logs, e.g.
vagrant ssh -c "cat /var/log/vboxadd-install.log"
Maybe you're just missing the kernel header files.
VM
Inside the VM, you should install the Guest Additions and kernel headers, start the service, and double-check that the kernel extension is running.
This depends on the guest operating system, so here are brief steps:
Install kernel include headers (required by VBoxLinuxAdditions).
RHEL: sudo yum update && sudo yum install kernel-devel
CentOS: sudo yum update && sudo yum -y install kernel-headers kernel-devel
Install Guest Additions (this depends on the operating system).
Ubuntu: sudo apt-get -y install dkms build-essential linux-headers-$(uname -r) virtualbox-guest-additions-iso
If you can't find it, check by aptitude search virtualbox.
Debian: sudo apt-get -y install build-essential module-assistant virtualbox-ose-guest-utils
If you can't find it, check by dpkg -l | grep virtualbox.
manually by downloading the iso file inside VM (e.g. wget) and installing it, e.g.
wget http://download.virtualbox.org/virtualbox/5.0.16/VBoxGuestAdditions_5.0.16.iso -P /tmp
sudo mount -o loop /tmp/VBoxGuestAdditions_5.0.16.iso /mnt
sudo sh -x /mnt/VBoxLinuxAdditions.run # --keep
Extra debug: cd ~/install && sh -x ./install.sh /mnt/VBoxLinuxAdditions.run
Double check that kernel extensions are up and running:
sudo modprobe vboxsf
Start/restart the service:
manually: sudo /opt/VBoxGuestAdditions*/init/vboxadd setup (add sudo sh -x to debug)
Debian: sudo /etc/init.d/vboxadd-service start
Fedora: sudo /etc/init.d/vboxdrv setup
CentOS: sudo service VBoxService start
Building the main Guest Additions module
If the above didn't work, here are more involved steps to fix it. This assumes that you've already installed VBoxGuestAdditions (as shown above).
The most common reason why mounting a shared folder doesn't work is that building the Guest Additions module failed. If /var/log/vboxadd-install.log contains the following error:
The headers for the current running kernel were not found.
this means either you didn't install kernel sources, or they cannot be found.
If you installed them already as instructed above, run this command:
$ sudo sh -x /opt/VBoxGuestAdditions-5.0.16/init/vboxadd setup 2>&1 | grep KERN
+ KERN_VER=2.6.32-573.18.1.el6.x86_64
+ KERN_DIR=/lib/modules/2.6.32-573.18.1.el6.x86_64/build
So basically the vboxadd script expects your kernel sources to be available at the following directory:
ls -la /lib/modules/$(uname -r)/build
Check that the kernel dir exists (the symbolic link points to an existing folder). If it doesn't, install the sources to the right folder (e.g. in /usr/src/kernels), so that the vboxadd script can enter your kernel source directory, run make kernelrelease, get the value, and compare it with your current kernel version.
Here is the logic:
KERN_VER=`uname -r`
KERN_DIR="/lib/modules/$KERN_VER/build"
if [ -d "$KERN_DIR" ]; then
    KERN_REL=`make -sC $KERN_DIR --no-print-directory kernelrelease 2>/dev/null || true`
    if [ -z "$KERN_REL" -o "x$KERN_REL" = "x$KERN_VER" ]; then
        return 0
    fi
fi
If the kernel version doesn't match the sources, you may have to upgrade your Linux kernel (in case the sources are newer than your kernel).
CentOS
Try:
vagrant plugin install vagrant-vbguest
vagrant vbguest
If that doesn't work, try the following manual steps for CentOS:
$ sudo yum update
$ sudo yum install kernel-$(uname -r) kernel-devel kernel-headers # or: reinstall
$ rpm -qf /lib/modules/$(uname -r)/build
kernel-2.6.32-573.18.1.el6.x86_64
$ ls -la /lib/modules/$(uname -r)/build
$ sudo reboot # and re-login
$ sudo ln -sv /usr/src/kernels/$(uname -r) /lib/modules/$(uname -r)/build
$ sudo /opt/VBoxGuestAdditions-*/init/vboxadd setup
I was able to resolve this by running the command below:
modprobe -a vboxguest vboxsf vboxvideo
In addition to @Mats' answer, I'm adding some more info (it helped me on Debian 8).
My shared folder/clipboard stopped working for some reason (probably due to a patch installation on my virtual machine).
sudo mount -t vboxsf Shared_Folder ~/SF/
gave me the following result:
VirtualBox: mount.vboxsf: mounting failed with the error: No such device
The solution for me was to stop vboxadd and do a setup after that:
cd /opt/VBoxGuestAdditions-*/init
sudo ./vboxadd setup
At this point, if you still get the following error:
No such device. The Guest Additions installation may have failed. The error has been logged in /var/log/vboxadd-install.log
You need to install the Linux headers:
apt-get install linux-headers-$(uname -r)
then you can install Guest Additions:
sh /media/cdrom/VBoxLinuxAdditions.run --nox11
and restart your Linux by:
reboot
then you will be able to mount your shared folder!
mount -t vboxsf Shared_Folder ~/SF/
For more information (in French), check this page.
This was the only solution that worked for me:
Install Vagrant plugin: vagrant-vbguest, which can keep your VirtualBox Guest Additions up to date.
vagrant plugin install vagrant-vbguest
Source: https://github.com/aidanns/vagrant-reload/issues/4#issuecomment-230134083
This was resolved by:
yum install gcc kernel-devel make
workaround is here: https://gist.github.com/larsar/1687725
The shared folder was working for me earlier, but all of a sudden it stopped working (VirtualBox; host was Windows 7, guest was openSUSE).
modprobe -a vboxguest vboxsf vboxvideo
then
mount -t vboxsf testsf /opt/tsf (testsf was the folder on the Windows C drive which was added as a VirtualBox shared folder, and /opt/tsf is the folder in openSUSE)
My host is Windows 10 and my VM guest is Ubuntu built by Vagrant. This worked for me:
vagrant plugin install vagrant-winnfsd
The solution for me was to update guest additions
(click Devices -> Insert Guest Additions CD image)
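After inserting the CD image, a typical way to run the installer from inside a Linux guest looks like this (a sketch; the device name and mount point can vary):
sudo mount /dev/cdrom /mnt
sudo sh /mnt/VBoxLinuxAdditions.run
sudo reboot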
I also had a working system that suddenly stopped working with the described error.
After furtling around in my /lib/modules it would appear that the vboxvfs module is no more. Instead modprobe vboxsf was the required incantation to get things restarted.
Not sure when that change occurred, but it caught me out.
I am running VirtualBox 5.1.20, and had a similar issue. Here is a url to where I found the fix, and the fix I implemented:
# https://dsin.wordpress.com/2016/08/17/ubuntu-wrong-fs-type-bad-option-bad-superblock/
if [ "5.1.20" == "${VBOXVER}" ]; then
rm /sbin/mount.vboxsf
ln -s /usr/lib/VBoxGuestAdditions/mount.vboxsf /sbin/mount.vboxsf
fi
The link had something similar to /usr/lib/VBoxGuestAdditions/other/mount.vboxsf, rather than what I have in the script excerpt.
For a build script I use in vagrant for the additions:
https://github.com/rburkholder/vagrant/blob/master/scripts/additions.sh
Seems to be a fix at https://www.virtualbox.org/ticket/16670
For me, on a mac, it turned out I had an old VirtualBox image stored on my machine that didn't have metadata, so it wasn't being updated to the latest version.
That old image had an older version of the vbguest plugin installed in it, which the newer vbguest plugin on my machine couldn't work with.
So to fix it, I just removed the image that my Vagrant was based on, and then Vagrant downloaded the newer version and it worked fine.
# Remove an old version of the virtual box image that my vagrant was using
$ vagrant box remove centos/7
You can find out which boxes you have cached on your machine by running:
$ vagrant box list
I had also upgraded my vbguest plugin in my earlier attempts at getting this to work, using the following process, but I don't think this helped. FYI !
# Get rid of old plugins
vagrant plugin expunge
# Globally install the latest version of the vbguest plugin
vagrant plugin install vagrant-vbguest
If you find bringing the box up fails on guest additions, you can try the following to ensure the plugins install correctly. This downloads the latest base image for your system (for me CentOS), and may resolve the issue (it did for me!)
$ vagrant box update
There can be errors or an incorrect approach in two scenarios. Check both and figure out which applies.
SCENARIO 1 :
Once you are running VBoxLinuxAdditions.run, VBoxSolarisAdditions.pkg or VBoxWindowsAdditions.exe, check whether all the modules get installed properly.
1.1.a. In case of VBoxLinuxAdditions, if Building the VirtualBox Guest Additions kernel modules fails,
check the log file /var/log/vboxadd-install.log. If the error is due to the kernel version, update your kernel and reboot the VM. In case of Fedora:
1.1.b. yum update kernel*
1.1.c. reboot
1.2. If nothing fails, then all is fine; you already have the expected kernel version.
SCENARIO 2 :
If VBoxGuestAdditions is installed (check whether a folder /opt/VBoxGuestAdditions-* is present; * represents the version), you need to start it before mounting.
2.1. cd /opt/VBoxGuestAdditions-*/init && ./vboxadd start
You need to specify the user id and group id of your vm user as options to the mount command.
2.2.a. Getting uid and gid of a user:
id -u <'user'>
id -g <'user'>
2.2.b. Setting uid and gid in options of mount command:
mount -t vboxsf -o uid=x,gid=x shared_folder_name guest_folder
On Ubuntu this worked:
sudo apt-get install build-essential linux-headers-`uname -r` dkms
Had the same issue with VirtualBox 5.0.16/rXXX
I installed the latest VirtualBox 5.0.18 and the latest Vagrant 1.9.3, and the issue went away.
As the root user, I added the line
/root/mount-vboxsf.sh
to /etc/rc.d/rc.local, then ran
chmod +x /etc/rc.d/rc.local
and here is the sample script /root/mount-vboxsf.sh (set your own uid and gid options):
modprobe -a vboxguest vboxsf vboxvideo
mount -t vboxsf NAME_SHARED_DIRECTORY /media/sf_NAME_SHARED_DIRECTORY -o rw,uid=0,gid=0
You also need to make the script executable:
chmod +x /root/mount-vboxsf.sh
I had a similar issue. Check the kernel headers; if they don't match, run the command below:
CentOS: sudo yum update && sudo yum -y install kernel-headers kernel-devel
If you're on Debian:
1) remove all packages installed through the VirtualBox Guest Additions ISO file:
sh /media/cdrom/VBoxLinuxAdditions.run uninstall
2) install the VirtualBox packages:
apt-get install build-essential module-assistant virtualbox-guest-dkms virtualbox-guest-utils
Note that even with modprobe vboxsf returning nothing (so the module is correctly loaded), mounting a vboxsf filesystem calls an executable named mount.vboxsf, which is provided by virtualbox-guest-utils. Overlooking this will prevent you from understanding the real cause of the error.
strace mount /your-directory was a great help (No such file or directory on /sbin/mount.vboxsf).
An update did the trick for me !
$ vagrant box update
$ vagrant plugin install vagrant-vbguest
The two commands below worked for me.
vagrant ssh
sudo mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant
Okay, everyone is missing a basic fact.
mkdir test - makes a subdirectory in the current directory.
sudo mkdir /test - makes the directory under the filesystem root.
So if your shared directory name is shared and you do the following:
mkdir test
sudo mount -t vboxsf shared /test
It generates this error:
sbin/mount.vboxsf: mounting failed with the error: No such file or directory
Because the directory is in the wrong place! Yes that's what this error is saying. The error is not saying reload the VBOX guest options.
But if you do this:
sudo mkdir ~/test
sudo mount -t vboxsf shared ~/test
Then it works fine.
It really amazes me how many people suggest reloading the VBox Guest Additions to solve this error, or writing a complex program to fix a directory created in the wrong place.
