How to provision software using Vagrant without sudo

I'm trying to set up Vagrant virtual machines to support my learning through Seven Databases in Seven Weeks. I'm provisioning software with basic shell scripts that perform the appropriate actions in a sudo environment. However, I'm using the vagrant user to run the tutorials, and would like the provisioning to install the appropriate Node / npm modules as the vagrant user rather than through sudo.
My current npm command is the last line of this provisioning script, but the module is unavailable when the vagrant user tries to execute node scripts.
apt-get update
apt-get -y install build-essential
apt-get -y install tcl8.5
wget http://redis.googlecode.com/files/redis-2.6.0-rc3.tar.gz
tar xzf redis-2.6.0-rc3.tar.gz
cd redis-2.6.0-rc3
make
make install
make test
mkdir /etc/redis
mv redis.conf /etc/redis/redis.conf
sed -i.bak 's/127.0.0.1/0.0.0.0/g' /etc/redis/redis.conf
sed -i.bak 's/daemonize no/daemonize yes/g' /etc/redis/redis.conf
sed -i.bak 's/dir .\//dir \/var\/lib\/redis/g' /etc/redis/redis.conf
cd src/
wget https://raw.github.com/gist/1053791/880a4a046e06028e160055406d02bdc7c57f3615/redis-server
mv redis-server.1 /etc/init.d/redis-server
mv redis-cli /etc/init.d/redis-cli
chmod +x /etc/init.d/redis-server
sed -i.bak 's/DAEMON=\/usr\/bin\/redis-server/DAEMON=\/usr\/local\/bin\/redis-server/g' /etc/init.d/redis-server
useradd redis
mkdir -p /var/lib/redis
mkdir -p /var/log/redis
chown redis.redis /var/lib/redis
chown redis.redis /var/log/redis
update-rc.d redis-server defaults
/etc/init.d/redis-server start
cd /etc/init.d/
echo ./redis-cli
echo http://blog.hemantthorat.com/install-redis-2-6-on-ubuntu/
apt-get -y install python-software-properties python g++ make
add-apt-repository -y ppa:chris-lea/node.js
apt-get update
apt-get -y install nodejs
npm install hiredis redis csv

Simply set privileged to false in your Vagrantfile like this:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
...
config.vm.provision :shell, privileged: false, path: "script.sh"
...
end
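Since the apt/redis steps above still need root, one way to use this is to keep them in a privileged provisioner and move only the npm step into a second, unprivileged script. A minimal sketch of that second script (the file name and the use of the default /vagrant synced folder are assumptions):
#!/usr/bin/env bash
# node-modules.sh - provisioned with privileged: false, so it runs as the vagrant user
cd /vagrant                     # assumption: the book exercises live in the synced project folder
npm install hiredis redis csv   # creates ./node_modules owned by vagrant, visible to the tutorial scripts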

The shell provisioner runs as the root user by default. If you wish to run as the vagrant user, you can do something like this:
sudo -u vagrant npm install hiredis redis
...or for multiple lines:
sudo -u vagrant bash << EOF
[...]
npm install hiredis
npm install redis
EOF
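Adding -H makes sudo set $HOME to the target user's home as well, which keeps npm's cache under /home/vagrant instead of /root; a variant of the same idea (the cd target is an assumption):
sudo -H -u vagrant bash << 'EOF'
cd /vagrant
npm install hiredis redis csv
EOF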

Maybe use npm install -g to install it globally in the vm?
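If you do go global, note that require() does not search npm's global tree by default, so the provisioning script would also need to expose it to the vagrant user; a sketch:
npm install -g hiredis redis csv
echo "export NODE_PATH=$(npm root -g)" >> /home/vagrant/.profile   # npm root -g prints the global node_modules path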

If sudo complains about needing a tty when the provisioner runs commands, you can drop the requiretty default from /etc/sudoers:
sed -i 's/.*requiretty$/Defaults !requiretty/' /etc/sudoers
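A quick non-interactive check afterwards (this assumes the vagrant user already has passwordless sudo, which the standard base boxes do):
sudo -n true && echo "sudo no longer needs a tty"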

Related

The authenticity of host 'github.com (192.30.253.113)' can't be established: passing "yes" with a bash script

I'm working on a personal project that requires me to do some bash scripting. I am currently trying to write a script that pulls from my private Git repo. At the moment I am able to spin up my instance and install all my packages through a script, but when it comes to pulling from my private repo I get: The authenticity of host 'github.com (192.30.253.113)' can't be established.
I am trying to figure out a way to pass "yes" from my script. I know this is very bad practice, but for my current use case I'm not too concerned about security.
Running the ssh-keyscan github.com >> ~/.ssh/known_hosts command manually works, but when I put it in my script it does not seem to work.
Any help would be greatly appreciated.
My script:
echo "update install -start"
sudo yum -y update
sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
sudo yum install -y httpd mariadb-server
sudo yum install -y git
sudo systemctl start httpd
echo "end"
# file permissions
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
#pulling from my git repo
ssh-keyscan github.com >>~/.ssh/known_hosts
cd ../../var/www/html/
git clone git@github.com:jackbourkemckenna/testrepo
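One hedged sketch of how the known_hosts step could be made more robust inside the script (it assumes the script runs as the same user that performs the clone):
mkdir -p ~/.ssh && chmod 700 ~/.ssh              # make sure the directory exists before appending
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
# or, since security is not a concern here, accept the host key just for this clone
GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=no" git clone git@github.com:jackbourkemckenna/testrepo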

How to run Dockerfile commands in bash instead of sh; does a bash alias work for the sh shell?

I started by dockerizing my Python hello world program, and the Dockerfile looks something like this.
FROM ubuntu:16.04
MAINTAINER Bhavani Ravi
RUN apt-get update
RUN apt-get install -y software-properties-common vim
RUN add-apt-repository ppa:jonathonf/python-3.6
RUN apt-get update
RUN apt-get install -y build-essential python3.6 python3.6-dev python3-pip python3.6-venv
RUN apt-get install -y git
# update pip
RUN python3.6 -m pip install pip --upgrade
RUN python3.6 -m pip install wheel
RUN echo 'alias python=python3.6' >> ~/.bash_aliases
COPY hello.py .
ENTRYPOINT python hello.py
The problem is that when I run the image I get /bin/sh: 1: python: not found.
As you can see in the Dockerfile, I have set up the bash alias. When I override the entrypoint and run the image with /bin/bash, the python command works.
How do I make the alias work for all command environments?
Can you run Dockerfile commands in bash instead of sh?
If bash is available in your Docker image, you could try wrapping each command in /bin/bash -c "yourcommand" to force it.
Example
RUN /bin/bash -c "apt-get update"
...
ENTRYPOINT /bin/bash -c "python hello.py"
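Note that non-interactive bash -c shells do not read ~/.bash_aliases either, so an alias-free route may be simpler; a sketch, assuming the PPA puts the interpreter at /usr/bin/python3.6:
# point a plain `python` at the real interpreter instead of relying on an alias
RUN ln -s /usr/bin/python3.6 /usr/local/bin/python
# exec form runs the command directly, without the /bin/sh -c wrapper
ENTRYPOINT ["python", "hello.py"]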

ec2 launch bash command does not work

I am running this code while launching an EC2 instance; Python is installed, but the folder is not created.
#!/bin/bash
sudo yum update -y
sudo yum install python36 -y
mkdir venv
cd venv
virtualenv -p /usr/bin/pyton3.6 python36
echo "source /home/ec2-user/venv/python36/bin/activate" > /home/ec2-user/.bashrc
pip install boto3
A couple of things could go wrong with that script. I suggest a more robust way to write it:
#!/bin/bash
cd "$(dirname "$0")"
sudo yum update -y
sudo yum install python36 -y
if [ ! -d venv ]; then
mkdir venv
virtualenv -p /usr/bin/python3.6 venv/python36
echo "source venv/python36/bin/activate" >> ~/.bashrc
source venv/python36/bin/activate
pip install boto3
fi
Improved points:
Make sure we are in the right directory by cd'ing into the directory of the script
Do not hardcode the user home directory location; use ~
Do not truncate ~/.bashrc if it already exists
Before installing boto3, activate the virtual env; otherwise pip will not install it inside the virtual env (it will try to install system-wide). A quick check for this follows below.
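A quick check after the script runs, to confirm boto3 landed inside the venv rather than system-wide (a sketch; run it from the same directory as the script):
source venv/python36/bin/activate
python -c "import boto3, sys; print(boto3.__file__); print(sys.prefix)"   # both paths should point inside venv/python36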
Thank you for the inputs. This worked.
Mainly:
clear paths
activating the virtual environment before the boto3 install
#!/bin/bash
sudo yum update -y
sudo yum install python36 -y
mkdir /home/ec2-user/venv
cd /home/ec2-user/venv
virtualenv -p /usr/bin/python3.6 python36
echo "source /home/ec2-user/venv/python36/bin/activate" >> /home/ec2-user/.bashrc
source /home/ec2-user/venv/python36/bin/activate
pip install boto3

Docker network does not work with bash entrypoint

First, we have a Docker network like so:
docker network create cdt-net
Then I have this bash script which will start a selenium server:
cd $(dirname "$0")
./node_modules/.bin/webdriver-manager update
./node_modules/.bin/webdriver-manager start
The above bash script is called by this Dockerfile:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
I would build it like so:
docker build -t cdt-selenium .
and then run it like so:
docker run --network=cdt-net --name cdt-selenium -d cdt-selenium
The problem that I am having is that even though everything comes up clean with no errors, other processes in the same Docker network cannot talk to the Selenium server.
On the other hand, if I create a selenium server using a pre-existing image, like so:
docker run -d --network=cdt-net --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
then things are working as expected, and I can connect to the selenium server from other processes in the Docker network.
Anyone know what might be wrong with my bash script or Dockerfile? Perhaps my manually created Selenium server is not listening on the right host?
Here is the complete Dockerfile for reference:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y apt-utils
RUN sudo apt-get -y update
RUN sudo apt-get -y upgrade
RUN sudo apt-get purge nodejs npm
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -y nodejs
RUN echo "before nodejs => $(which nodejs)"
RUN echo "before npm => $(which npm)"
RUN sudo ln -s `which nodejs` /usr/bin/node || echo "ignore error"
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
RUN rm -rf node_modules > /dev/null 2>&1
RUN npm init -f || echo "ignore non-zero exit code" > /dev/null 2>&1
RUN npm install webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
You should use -d only when your Docker image runs fine. Before that, use -it.
Change your webdriver-manager install to a global one:
RUN npm install -g webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
Also change your start-selenium-server.sh to
webdriver-manager update
webdriver-manager start
And use the command below to run it and check whether there are any issues:
docker run --network=cdt-net --name cdt-selenium -it cdt-selenium
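If the container itself comes up cleanly, a quick reachability test from another container on the same network (this assumes webdriver-manager serves on Selenium's default port 4444):
docker run --rm --network=cdt-net busybox wget -qO- http://cdt-selenium:4444/wd/hub/status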

How to install Python 3, pip, setuptools, virtualenv and virtualenvwrapper in CentOS 7

So how do you install all this software on CentOS 7?
The code below needs to be run as root.
Just follow these simple steps.
sudo su -
nano script
paste the script and change the variable USER
chmod 755 script
./script
That's it.
Here is the code to solve all these issues.
If you need the gist, here is the link:
https://gist.github.com/edutopy/7f66a2b9522bec7aa4e4
#!/bin/bash
## IMPORTANT ##
# Run this script as root (sudo su -); it won't work if run with sudo.
# Change the variables as needed.
######################################################################
USER=sysadmin # User that will have ownership (chown) to /usr/local/bin and /usr/local/lib
USERHOME=/home/${USER} # The path to the users home, in this case /home/youruser
PYSHORT=3.5 # The Python short version, e.g. easy_install-${PYSHORT} = easy_install-3.5
PYTHONVER=3.5.1 # The actual version of python that you want to download from python.org
cd ${USERHOME}
# Install development tools and some misc. necessary packages
yum -y groupinstall "Development tools"
yum -y install zlib-devel # gen'l reqs
yum -y install bzip2-devel openssl-devel ncurses-devel # gen'l reqs
yum -y install mysql-devel # req'd to use MySQL with python ('mysql-python' package)
yum -y install libxml2-devel libxslt-devel # req'd by python package 'lxml'
yum -y install unixODBC-devel # req'd by python package 'pyodbc'
yum -y install sqlite sqlite-devel xz-devel
yum -y install readline-devel tk-devel gdbm-devel db4-devel
yum -y install libpcap-devel xz-devel # you will be sad if you don't install this before compiling python, and later need it.
# Alias shasum to == sha1sum (will prevent some people's scripts from breaking)
echo 'alias shasum="sha1sum"' >> ${USERHOME}/.bashrc
# Install Python ${PYTHONVER} (do NOT remove 2.7, by the way)
wget --no-check-certificate https://www.python.org/ftp/python/${PYTHONVER}/Python-${PYTHONVER}.tgz
tar -zxvf Python-${PYTHONVER}.tgz
cd ${USERHOME}/Python-${PYTHONVER}
./configure --prefix=/usr/local LDFLAGS="-Wl,-rpath /usr/local/lib" --with-ensurepip=install
make && make altinstall
# Install virtualenv and virtualenvwrapper
cd ${USERHOME}
chown -R ${USER} /usr/local/bin
chown -R ${USER} /usr/local/lib
easy_install-${PYSHORT} virtualenv
easy_install-${PYSHORT} virtualenvwrapper
echo "export WORKON_HOME=${USERHOME}/.virtualenvs" >> ${USERHOME}/.bashrc # Change this directory if you don't like it
echo "export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3.5" >> ${USERHOME}/.bashrc
echo "export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv" >> ${USERHOME}/.bashrc
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> ${USERHOME}/.bashrc # Important, don't change the order.
source ${USERHOME}/.bashrc
mkdir -p ${WORKON_HOME}
chown -R ${USER} ${WORKON_HOME}
chown -R ${USER} ${USERHOME}
# Done!
# Now you can do: `mkvirtualenv foo`
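After logging in again so the new ~/.bashrc lines are picked up, usage looks roughly like this:
mkvirtualenv foo
python --version      # should report the freshly built 3.5.x rather than the system Python
pip install requests  # installs into ~/.virtualenvs/foo only
deactivate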
