How to install composer-asset-plugin during 'vagrant up'? - shell

I generated my config with puphpet.com. I want to install composer-asset-plugin when I first run vagrant up.
I wrote a simple script, puphpet\files\exec-once\composer-asset-plugin.sh, which tries to do that:
#!/usr/bin/bash
echo "Installing Composer Asset Plugin"
composer global require "fxp/composer-asset-plugin:~1.0.0"
It installs the plugin into /root/.composer, so when I connect via vagrant ssh (as user vagrant) and try to use Composer I get an error: the plugin is missing, because nothing was installed into /home/vagrant/.composer. After I install the plugin manually, Composer works fine.
I tried to switch from root to the vagrant user before installing the plugin:
#!/usr/bin/bash
echo "Installing Composer Asset Plugin"
expect -c 'set timeout 3600; spawn su - vagrant; expect "Password:" {send -- "vagrant\r";}; exit 0'
composer global require fxp/composer-asset-plugin:~1.0.0;
It hangs on the expect command. What am I doing wrong?

The following might help:
# Install Composer
if [ ! -f /usr/local/bin/composer ]; then
cd /tmp
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
fi
# Install necessary plugin
sudo -H -u vagrant bash -c "composer global require fxp/composer-asset-plugin:~1.0.0"
In this case the command is executed in the vagrant user's context, so that user's home directory is used.
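As a quick sanity check (a hypothetical verification step, not part of the original answer; the paths assume Composer's default global home, ~/.composer), you can confirm after provisioning that the plugin ended up under the vagrant user's home rather than root's:
# run inside the box via `vagrant ssh`, as the vagrant user
ls ~/.composer/vendor/fxp/composer-asset-plugin
composer global show fxp/composer-asset-plugin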

Related

How to properly run entrypoint bash script on docker?

I would like to build a docker image for dumping large SQL Server tables into S3 using the bcp tool by combining this docker and this script. Ideally I could pass table, database, user, password and s3 path as arguments for the docker run command.
The script looks like
#!/bin/bash
TABLE_NAME=$1
DATABASE=$2
USER=$3
PASSWORD=$4
S3_PATH=$5
# read sqlserver...
# write to s3...
# .....
And the Dockerfile is:
# SQL Server Command Line Tools
FROM ubuntu:16.04
LABEL maintainer="SQL Server Engineering Team"
# apt-get and system utilities
RUN apt-get update && apt-get install -y \
curl apt-transport-https debconf-utils \
&& rm -rf /var/lib/apt/lists/*
# adding custom MS repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install SQL Server drivers and tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql mssql-tools awscli
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
RUN apt-get -y install locales
RUN locale-gen en_US.UTF-8
RUN update-locale LANG=en_US.UTF-8
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh", "DB.dbo.TABLE", "SQLSERVERDB", "USER", "PASSWORD", "S3PATH"]
If I replace the entrypoint with CMD /bin/bash and run the image with -it, I can manually run sql2sss.sh and it works properly, reading and writing to S3. However, if I try to use the ENTRYPOINT as shown, it yields bcp: command not found.
I also noticed that if I use CMD /bin/sh in interactive mode it produces the same error. Am I missing some configuration for the entrypoint to run the script properly?
Have you tried
ENV PATH="/opt/mssql-tools/bin:${PATH}"
instead of exporting it in .bashrc?
As David Maze pointed out, Docker doesn't read dot files such as .bashrc when it runs your ENTRYPOINT.
Basically, put your environment definitions in the ENV instruction.
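For illustration, here is a sketch of the relevant part of the Dockerfile with that change applied (only the PATH handling differs from the question; the other build steps stay as they are):
# install SQL Server drivers and tools, as in the question
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql mssql-tools awscli
# make the tools visible to every process, including the ENTRYPOINT;
# this replaces the `RUN echo ... >> ~/.bashrc` and `RUN source ~/.bashrc` lines
ENV PATH="/opt/mssql-tools/bin:${PATH}"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh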

Trying to set GOPATH and GOROOT in AWS EC2 user data, but it is not working

I am trying to set GOPATH and GOROOT in my AWS EC2 Ubuntu 20.04 user data, but it never works. Every time I connect to the EC2 instance and view the log in /var/log/cloud-init-output.log it says
go: not found, but if I type the echo/export part in by hand it works.
I am trying to set up multiple EC2 instances from this template, so I can't type it in on every instance myself.
The CloudFormation yaml user data part is below:
UserData:
  Fn::Base64: |
    #!/bin/bash
    wget https://dl.google.com/go/go1.14.4.linux-amd64.tar.gz
    tar -C /usr/local -zxvf go1.14.4.linux-amd64.tar.gz
    mkdir -p ~/go/{bin,pkg,src}
    echo 'export GOPATH=$HOME/go' >> ~/.bashrc
    echo 'export GOROOT=/usr/local/go' >> ~/.bashrc
    echo 'export PATH=$PATH:$GOPATH/bin:$GOROOT/bin' >> ~/.bashrc
    echo 'export GO111MODULE=auto' >> ~/.bashrc
    source ~/.bashrc
    apt -y update
    apt -y install mongodb wget git
    systemctl start mongodb
    apt -y install git gcc cmake autoconf libtool pkg-config libmnl-dev libyaml-dev
    go get -u github.com/sirupsen/logrus
    cd ~
    git clone --recursive https://github.com/williamlin0504/free5gcWithOCF.git
    cd free5gcWithOCF
    make
And here is the error inside /var/log/cloud-init-output.log:
[screenshot: error while user data runs]
Is anyone familiar with this? Please, I need some help.
In your error message, the Makefile at line 30 runs a program bin/amf.
That program appears to be a shell script with a problem on line 1.
The nature of the problem is "go: not found".
If you have the bare word "go" on line 1 of that shell script and the PATH cannot find it, then this is exactly what will happen.
You probably need to alter the last line of your user data shell script to say
PATH=/usr/local/go/bin:$PATH make
I know you have a source command earlier in the script that is supposed to set this up, but it doesn't do what you think it does.
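For example (a sketch based on this suggestion, assuming the tarball was extracted to /usr/local/go as in the question), the tail of the user data script could become:
# put go on the PATH for this script (and for make's sub-shells) directly,
# instead of appending exports to ~/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
go get -u github.com/sirupsen/logrus
cd ~
git clone --recursive https://github.com/williamlin0504/free5gcWithOCF.git
cd free5gcWithOCF
make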

Laravel Sail "build path ./vendor/laravel/sail/runtimes/8.0 either does not exist, is not accessible, or is not a valid URL."

I'm using Laravel 8.x with Sail on PHP 8.0. Recently I messed up my composer.json file, resulting in issues with the vendor directory, and while trying to recreate the project from scratch I deleted the vendor folder.
Normally, docker-compose would build and create the /path/to/project/vendor/laravel/sail/runtimes/ directory with its appropriate content, but for some reason, I keep getting the following error:
ERROR: build path /path/to/project/vendor/laravel/sail/runtimes/8.0 either does not exist, is not accessible, or is not a valid URL.
I tried using docker system prune and deleting the existing containers manually through the Docker Desktop interface, and I even tried running it with docker-compose build --no-cache, but I still get the same error.
Is there a way to fix this or should I just clone my project again and try to build it?
Note: I'm on an old Mac without the possibility of just running composer install manually, so all of my interactions with the instance rely on the Docker container working.
The standard procedure for setting up any Laravel project is running composer install, for example through Docker:
docker run --rm --interactive --tty --volume C:/path/to/project:/app composer install --ignore-platform-reqs --no-scripts
so an inability to do so really ties one's hands here.
However, in this case, where the only way for me to run Composer was through Docker, I elected to use the laravel.build website to create a new project and copy its vendor folder over. Here's the script:
docker info > /dev/null 2>&1
# Ensure that Docker is running...
if [ $? -ne 0 ]; then
echo "Docker is not running."
exit 1
fi
docker run --rm \
-v $(pwd):/opt \
-w /opt \
laravelsail/php80-composer:latest \
bash -c "laravel new example-app && cd example-app && php ./artisan sail:install --with=mysql,redis,meilisearch,mailhog,selenium"
cd example-app
CYAN='\033[0;36m'
LIGHT_CYAN='\033[1;36m'
WHITE='\033[1;37m'
NC='\033[0m'
echo ""
if sudo -n true 2>/dev/null; then
sudo chown -R $USER: .
echo -e "${WHITE}Get started with:${NC} cd example-app && ./vendor/bin/sail up"
else
echo -e "${WHITE}Please provide your password so we can make some final adjustments to your application's permissions.${NC}"
echo ""
sudo chown -R $USER: .
echo ""
echo -e "${WHITE}Thank you! We hope you build something incredible. Dive in with:${NC} cd example-app && ./vendor/bin/sail up"
fi
After that, running ./vendor/bin/sail up -d && ./vendor/bin/sail composer install fixed the problem.
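For completeness, a sketch of how the generated vendor folder could then be copied into the broken project (paths below are placeholders, not taken from the original post):
# copy the freshly generated vendor directory into the existing project
cp -R example-app/vendor /path/to/project/vendor
cd /path/to/project
# bring the containers up and let Composer finish the install inside them
./vendor/bin/sail up -d
./vendor/bin/sail composer install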

Docker ERROR: Container command not found or does not exist when running from Win10

This is driving me crazy...
I have Win10 and I have installed the Docker Toolbox with
Docker=1.10.2
Compose=1.6.0
VirtualBox=5.0.14
I have successfully launched the LAMP stack on Linux [Amazon Linux], but when I try to do the same from Windows the terminal responds with "ERROR: Container command not found or does not exist".
As I understand it, there is something wrong with the way Windows interprets the CMD syntax.
I have tried
- CMD ["/run.sh"]
- ENTRYPOINT ["/run.sh"]
- CMD /run.sh
- CMD '/run.sh'
- CMD run.sh
- CMD "/run.sh"
but nothing seems to work.
Note: When I run CMD /run.sh the error does not appear but the container exits immediately.
Note 2: I have exactly the same problem when trying to set up the LAMP stack with Docker Machine on AWS.
I have this DockerfileLamp:
FROM ubuntu
# -- Install needed packages --
ENV DEBIAN_FRONTEND noninteractive
# -- Install additional utilities --
RUN apt-get update && \
apt-get install -y supervisor git curl apache2 mcrypt cron wget nano unzip
# -- Install PHP 5.5 --
RUN apt-get -y update && \
apt-get -y install php5 libapache2-mod-php5 mysql-server-5.5 php5-mysql pwgen php-apc php5-mcrypt php5-xdebug php5-gd php5-curl php-pear openssh-server php5-cli php5-apcu php5-intl php5-imagick php5-json
# -- Set localhost to apache conf file --
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
# -- Add image configuration and scripts --
ADD ./lamp/start-apache2.sh /start-apache2.sh
ADD ./lamp/start-mysqld.sh /start-mysqld.sh
ADD ./lamp/run.sh /run.sh
RUN chmod 755 /*.sh
ADD ./lamp/my.cnf /etc/mysql/conf.d/my.cnf
ADD ./lamp/supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD ./lamp/supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
# -- Remove pre-installed database --
RUN rm -rf /var/lib/mysql/*
# -- Add MySQL utils --
ADD ./lamp/setup_MySQL.sh /setup_MySQL.sh
RUN chmod 755 /*.sh
# -- config to enable .htaccess --
##ADD apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite
# -- Environmental variables to configure php --
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# -- Add volumes for MySQL --
##VOLUME ["/etc/mysql", "/var/lib/mysql" ]
# -- Set up SSH server --
RUN mkdir /var/run/sshd
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
ADD ./lamp/supervisord-openssh-server.conf /etc/supervisor/conf.d/supervisord-openssh-server.conf
# -- Install Python & pip --
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python python-pip python-dev && \
pip install --upgrade pip
# -- Install xvfb --
RUN apt-get install -y xvfb
EXPOSE 80 3306 22
CMD /run.sh
and the run.sh:
#!/bin/bash
VOLUME_HOME="/var/lib/mysql"
sed -ri -e "s/^upload_max_filesize.*/upload_max_filesize = ${PHP_UPLOAD_MAX_FILESIZE}/" \
-e "s/^post_max_size.*/post_max_size = ${PHP_POST_MAX_SIZE}/" /etc/php5/apache2/php.ini
if [[ ! -d $VOLUME_HOME/mysql ]]; then
echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
echo "=> Installing MySQL ..."
mysql_install_db > /dev/null 2>&1
echo "=> Done!"
/setup_MySQL.sh
else
echo "=> Using an existing volume of MySQL"
fi
exec supervisord -n
and the docker-compose.yml:
lamp: # apache + mysql/php
  build: .
  dockerfile: DockerfileLamp
  ports:
    - "8181:80" # open apache to public
    - "3333:3306" # open mysql to public
    - "2222:22" # open SSH to public
Docker is process centric; in other words, your container dies when your CMD script dies. At the end of your script run ...
tail -f logfile (where logfile is some log file you are interested in)
This will
1 - stop your container from exiting
2 - allow you to do
docker logs -f containerName
to help you debug
3 - allow you to enter the container with
docker exec -it containerName bash
Then you can run the command that you think is failing inside the container and try to sort this out.
Whilst this doesn't directly answer your question, it should give you sufficient weaponry to attack this issue.
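As a concrete sketch of that advice (service names and the log path are illustrative, not taken from the question's run.sh):
#!/bin/bash
# start the services in the background
service apache2 start
service mysql start
# keep PID 1 alive and stream a log so `docker logs -f containerName` shows output
tail -f /var/log/apache2/error.log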
For another project that I tried to get working on Windows with Docker Machine, I ran into the same ambiguous docker-compose error message, Container command not found or does not exist.
Your comment about line endings prompted me to try dos2unix ./*/*.sh within git-bash (multiple scripts, in subfolders), which fixed the issue for me.
My suspicion is that git clone saves the files with DOS (CRLF) line endings, which results in incorrect syntax for the top line, #!/bin/bash.
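If CRLF endings from the Windows checkout are indeed the culprit, one way to keep the scripts LF-only (my suggestion, not part of the original answer) is:
# convert the existing shell scripts back to LF endings (run from git-bash)
dos2unix ./*/*.sh
# and pin LF endings for future clones
echo '*.sh text eol=lf' >> .gitattributes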
$ docker-compose -v
docker-compose version 1.6.2, build e80fc83
$ docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 21:49:11 2016
OS/Arch: windows/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 21:49:11 2016
OS/Arch: linux/amd64
I solved it by simplifying the file. I commented out all the control structures because, whatever I tried, it kept throwing syntax errors:
#!/bin/bash
VOLUME_HOME="/var/lib/mysql"
sed -ri -e "s/^upload_max_filesize.*/upload_max_filesize = ${PHP_UPLOAD_MAX_FILESIZE}/" \
-e "s/^post_max_size.*/post_max_size = ${PHP_POST_MAX_SIZE}/" /etc/php5/apache2/php.ini
#if [[ ! -d $VOLUME_HOME/mysql ]]; then
echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
echo "=> Installing MySQL ..."
mysql_install_db > /dev/null 2>&1
echo "=> Done!"
/setup_MySQL.sh
#else
# echo "=> Using an existing volume of MySQL"
#fi
exec supervisord -n
It works for my case so I am not going to investigate further. Cheers!
EDITED
The above solution was not complete.
It worked only because I was making changes from inside the container.
The permanent solution goes like this:
I migrated the run.sh file to a private Gist (it does not need to be private, but that's fine).
I think the problem is that when I build the Dockerfile from a Windows machine (either locally or on a cloud provider) it messes up the syntax, EOF markers, line breaks and whatnot.
So I worked around it by ADDing the Gist URL:
ADD http://gist_url/run.sh /run.sh
Note 1: You must use the raw file URL, otherwise you will get the complete HTML page.
Note 2: The private Gist is not protected; you don't need authentication to fetch the URL.
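For reference, a sketch of what the ADD line can look like with a raw Gist URL (the user and gist id below are placeholders):
# the raw URL serves the plain file; the normal gist page would give you HTML
ADD https://gist.githubusercontent.com/<user>/<gist-id>/raw/run.sh /run.sh
# files ADDed from a URL are not executable by default
RUN chmod 755 /run.sh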

Why is sudo: bundle command not found?

Why is command "bundle" not found when using sudo:
[root@desktop gitlab]# sudo -u git -H bundle exec rake gitlab:setup RAILS_ENV=production
sudo: bundle: command not found
[root@desktop gitlab]#
but does exist when not using sudo:
[root@desktop gitlab]# bundle exec rake gitlab:setup RAILS_ENV=production
Warning
You are running as user root, we hope you know what you are doing.
Things may work/fail for the wrong reasons.
For correct results you should run this as user git.
This will create the necessary database tables and seed the database.
You will lose any previous data stored in the database.
Do you want to continue (yes/no)? no
Quitting...
[root@desktop gitlab]#
The reason I ask is I am following https://github.com/gitlabhq/gitlab-recipes/tree/master/install/centos, and it states to use sudo.
I've tried adding a -i flag as described by Using $ sudo bundle exec ... raises 'bundle: command not found' error, but get "This account is currently not available.".
Check if the PATH has the same values both with and without sudo. Apparently it cannot find bundle simply because it is not listed in the PATH.
You can compare the outputs of following two lines
$ echo 'echo $PATH' | sh
$ echo 'echo $PATH' | sudo sh
Ideally sudo is supposed to leave PATH untouched, but this might be down to your distribution's sudo configuration.
Edit by original poster. Output is:
[root@desktop etc]# echo 'echo $PATH' | sh
/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@desktop etc]# echo 'echo $PATH' | sudo sh
/sbin:/bin:/usr/sbin:/usr/bin:/user/local/bin
[root@desktop etc]#
The user was created without a bash login shell. Change this in CentOS using system-config-users. Then su to git, go into /home/git and move to the gitlab directory. Execute the bundle commands without the sudo prefix. The next error you will encounter is the missing database.yml in the config dir; fix this with the correct password (i.e. copy the mysql or postgres sample and edit it).
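A sketch of those steps, run interactively on CentOS (paths assume a source install under /home/git/gitlab):
# give the git user a login shell (or do the same with system-config-users)
usermod -s /bin/bash git
# work as the git user directly instead of through sudo
su - git
cd /home/git/gitlab
bundle exec rake gitlab:setup RAILS_ENV=production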
I had this issue too; I thought it was because my GitLab was installed from source, and I got the same error, but after trying the Omnibus method for the backup my issue was solved,
with this command:
sudo gitlab-rake gitlab:backup:create
Try:
sudo -u git -H env PATH=$PATH bundle exec rake gitlab:check RAILS_ENV=production
to use the same PATH as the current user.
