Vagrant Error: File upload source file must exist

I'm trying to use a Vagrantfile I received to set up a VM on Ubuntu with VirtualBox.
After running the vagrant up command I get the following error:
File provisioner:
* File upload source file /home/c-server/tools/appDeploy.sh must exist
appDeploy.sh does exist in the correct location and looks like this:
#!/bin/bash
#
# Update the app server
#
/usr/local/bin/aws s3 cp s3://dev-build-ci-server/deploy.zip /tmp/.
cd /tmp
unzip -o deploy.zip vagrant/tools/deploy.sh
cp -f vagrant/tools/deploy.sh /tmp/.
rm -rf vagrant
chmod +x /tmp/deploy.sh
dos2unix /tmp/deploy.sh
./deploy.sh
rm -rf ./deploy.sh ./deploy.zip
#
sudo /etc/init.d/supervisor stop
sudo /etc/init.d/supervisor start
#
Since the script exists in the correct location, I'm assuming it's looking for something else (maybe something that should exist on my local computer). What that is, I am not sure.
I did some research into what the file provisioner is and what it does but I cannot find an answer to get me past this error.
It may very well be relevant that this Vagrantfile works correctly on Windows 10, but I need to get it working on Ubuntu.

In your Vagrantfile, check that the filenames are capitalized correctly. Windows isn't case-sensitive but Ubuntu is.
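A quick way to spot a mismatch (a sketch; the path is taken from the error message):
# List the exact name on disk and the name used in the Vagrantfile; a
# case difference such as appdeploy.sh vs appDeploy.sh passes on
# Windows but fails on Ubuntu:
ls /home/c-server/tools/
grep -n "appDeploy" Vagrantfile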

Related

WGET seems not to work with user data on AWS EC2 launch

I launch a CentOS AMI I created, and try to add user data as a file which looks like this:
#!/bin/bash
mkdir /home/centos/testing
cd testing
wget https://validlink
So simply, on launch, the user data should create a folder called testing and download from this valid URL, which I won't post since it links to my data; it is valid and accessible though.
When I launch the instance, the folder testing is created successfully; however, there is no file inside the directory.
When I SSH into the instance and run the wget command with sudo, the file is downloaded successfully inside the testing folder.
Why does the file not get downloaded on the ec2 launch through user data?
You have no way of knowing the current working directory when you execute the cd command, so specify the full path:
cd /home/centos/testing
Try this:
#!/bin/bash
mkdir /home/centos/testing
cd /home/centos/testing
wget https://validlink
Run it using the root user.
Try this instead:
#!/bin/bash
sudo su
yum -y install wget
mkdir /home/centos/testing
cd /home/centos/testing
wget https://validlink
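If the download still fails, cloud-init's logs usually show why. On a standard CentOS AMI (assuming user data is handled by cloud-init) you can inspect them after boot:
# The user-data script's output, including any wget errors, lands here:
sudo cat /var/log/cloud-init-output.log
sudo grep -i wget /var/log/cloud-init.log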

Bash on Windows - alias for exe files

I am using Bash on Ubuntu on Windows, the way to run bash on Windows 10. I have the Creators Update installed and the Ubuntu version is 16.04.
I was playing recently with things like npm, node.js and Docker. For Docker I found it is possible to install it and run it on Windows and just use the client part from bash, calling the docker.exe file directly from Windows's Program Files folder. I just updated my path variable to include the path to docker, PATH=$PATH:~/mnt/e/Program\ Files/Docker/ (put in .bashrc), and then I am able to run docker from bash by calling docker.exe.
But hey, this is bash, and I don't want to write .exe at the end of commands (programs). I can simply add an alias, alias docker="docker.exe", but then I want to use, let's say, docker-compose and I have to add another one. I was thinking about adding a script to .bashrc that would go over the path variable, search for .exe files in every path specified there, and add an alias for every occurrence, but it does not seem to be a very clean solution (though I guess it would serve its purpose quite well).
Is there a simple and clean solution to achieve this?
I've faced the same problem when trying to use Docker for Windows from WSL.
I had plenty of existing shell scripts that run fine under Linux, and mostly under WSL too, until they fail with docker: command not found. Changing docker to docker.exe everywhere would be too cumbersome and non-portable.
At first I tried the workaround with aliases in ~/.bashrc:
shopt -s expand_aliases
alias docker=docker.exe
alias docker-compose=docker-compose.exe
But it requires every script to be run in interactive mode and still doesn't work within backticks without script modification.
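For illustration, a plain script shows the limitation (a sketch; aliases defined in ~/.bashrc are not expanded in non-interactive shells):
#!/bin/bash
# Fails with "docker: command not found" unless the script itself
# enables expand_aliases and redefines the alias:
docker ps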
Then tried exported bash functions in ~/.bashrc:
docker() { docker.exe "$@"; }
export -f docker
docker-compose() { docker-compose.exe "$@"; }
export -f docker-compose
This works. But it's still too tedious to add every needed exe.
Finally I ended up with the easier symlink approach and a custom wslshim helper script.
Just add this once as ~/.local/bin/wslshim:
#!/bin/bash -x
cd ~/.local/bin && ln -s "`which $1.exe`" "$1" || ln -s "`which $1.ps1`" "$1" || ln -s "`which $1.cmd`" "$1" || ln -s "`which $1.bat`" "$1"
Make it executable: chmod +x ~/.local/bin/wslshim
Then adding any "alias" becomes as easy as typing two words:
$ wslshim docker
+ cd ~/.local/bin
++ which docker.exe
+ ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe' docker
$ wslshim winrm
+ cd ~/.local/bin
++ which winrm.exe
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.ps1
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.cmd
+ ln -s /mnt/c/Windows/System32/winrm.cmd winrm
The script automatically picks up the absolute path to any Windows executable found in $PATH and symlinks it, without the extension, into ~/.local/bin, which also resides in $PATH on WSL.
This approach can easily be extended to auto-link every exe in a given directory if needed, but linking the whole $PATH would be overkill.
You should be able to simply add the directory containing the executables to your PATH. Use export so the change is visible to child processes.
Command:
export PATH=$PATH:/path/to/directory/executable/is/located/in
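To keep it across new shells, the same line can go in ~/.bashrc (a sketch; WSL mounts Windows drives under /mnt, so the question's Docker directory is written as /mnt/e here):
# Appended once to ~/.bashrc; every new shell then picks it up:
export PATH=$PATH:/mnt/e/Program\ Files/Docker/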
On my Windows 10 machine the solution was to install Git Bash and Docker for Windows.
In this bash, when I type docker it works,
for example docker ps.
I didn't need to make an alias or change the path.
You can download Git Bash from https://git-scm.com/download/win,
then from the Start button, search for "git bash".
Hope this solution works for you.

boot2docker startup script to mount local shared folder with host

I'm running boot2docker 1.3 on Win7.
I want to connect a shared folder.
In the VirtualBox Manager, under the image properties -> shared folders, I've added the folder I want and named it "c/shared". The "auto-mount" and "make permanent" boxes are checked.
When boot2docker boots, it isn't mounted though. I have to do an additional:
sudo mount -t vboxsf c/shared /c/shared
for it to show up.
Since I need that every time I use docker, I'd like it to just run on boot, or to already be there. So I thought there might be some startup script I could add it to, but I can't seem to find where that would be.
Thanks
EDIT: It's flagging this as a duplicate of Boot2Docker on Mac - Accessing Local Files, which is a different question. I wanted to mount a folder that isn't one of the defaults, such as /Users on OS X or /c/Users on Windows, and I'm specifically asking about startup scripts.
/var/lib/boot2docker/bootlocal.sh probably fits your need; it is run by the initial script /opt/bootscripts.sh.
bootscripts.sh also redirects its output into /var/log/bootlocal.log; see the segment below (boot2docker 1.3.1):
# Allow local HD customisation
if [ -e /var/lib/boot2docker/bootlocal.sh ]; then
/var/lib/boot2docker/bootlocal.sh > /var/log/bootlocal.log 2>&1 &
fi
One use case of mine:
I usually set the shared directory as /c/Users/larry/shared, then I add this script:
#!/bin/bash
ln -s /c/Users/larry/shared /home/docker/shared
So each time I can access ~/shared in boot2docker just as on the host.
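Putting it together for the original question, a minimal bootlocal.sh could look like this (a sketch, assuming the share is named c/shared as in the question):
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh -- runs as root at every boot
mkdir -p /c/shared
mount -t vboxsf c/shared /c/shared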
See FAQ.md (provided by @KCD).
If using boot2docker (Windows) you should do the following:
First create a shared folder for the boot2docker VM:
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder add default -name some_shared_folder -hostpath /c/some/path/on/your/windows/box
#Then make this folder automount
docker-machine ssh
vi /var/lib/boot2docker/profile
Add the following at the end of the profile file:
sudo mkdir /windows_share
sudo mount -t vboxsf some_shared_folder /windows_share
Restart docker-machine
docker-machine restart
Verify that the folder content is visible in boot2docker:
docker-machine ssh
ls -al /windows_share
Now you can mount the folder using either docker run or docker-compose.
E.g.:
docker run -it --rm --volume /windows_share:/windows_share ubuntu /bin/bash
ls -al /windows_share
If the changes in the profile file are lost after a VM or Windows restart, do the following:
1) Edit the file C:\Program Files\Docker Toolbox\start.sh and comment out the following line:
#line number 44 (or somewhere around that)
yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
#change the line above to:
# yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
Thanks for your help with this. A few additional flags I needed to add in order for the new mount to be accessible by the boot2docker "docker" user:
sudo mount -t vboxsf -o umask=0022,gid=50,uid=1000 Ext-HD /Volumes/Ext-HD
With Docker 1.3 you no longer need to mount manually. Volumes should work properly as long as the source on the host VM is in your user directory.
https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
I couldn't make it work following Larry Cai's instructions. I figured I could make changes to "c:\Program Files\Boot2Docker for Windows\start.sh" instead, adding the lines below:
eval "$(./boot2docker.exe shellinit 2>/dev/null | sed 's,\\,\\\\,g')"
Then your mount command:
eval "$(./boot2docker ssh 'sudo mount -t vboxsf c/shared /c/shared')"
I also add the command to start my container here:
eval "$(docker start KDP)"

Why does the lftp mirror command chmod files

I'm very new to lftp, so forgive my ignorance.
I just ran a dry run of my lftp script, which consists basically of a line like this:
mirror -Rv -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir
When it prints what it's going to do, it wants to chmod a bunch of files - files which I grabbed from svn, never modified, and which should be identical to the ones on the server.
My local machine is Ubuntu, and the remote is a Windows server. I have a few questions:
Why is it trying to do that? Does it try to make the remote file permissions match the local ones?
What will happen when it tries to chmod the files? As I understand it, Windows doesn't support chmod - will it just fail gracefully and leave the files alone?
Many thanks!
Use the -p option and it shouldn't try to change permissions. I've never mirrored to a Windows host, but you are correct in that it shouldn't do anything to the permission levels on the Windows box.
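Applied to the command from the question, that would be (mirror's -p / --no-perms option tells it not to set file permissions):
mirror -Rv -p -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir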
I think that you should try
lftp -e "mirror -R $localPath $remotePath; chmod -R 777 $remotePath; bye" -u $username,$password $host

Tomcat startup script permission on Mac OS X

I'm struggling with a Mac OS X 10.5.8 machine that I've recently started using for development. I successfully installed Tomcat and created launchd.conf for my environment variables.
I believe it works fine, since I can build a project with NetBeans using the Maven and Cargo plugins successfully. So I found a script online to start and stop Tomcat:
#!/bin/bash
case $1 in
start)
sh /Library/apache-tomcat-6.0.20/bin/startup.sh
;;
stop)
sh /Library/apache-tomcat-6.0.20/bin/shutdown.sh
;;
restart)
sh /Library/apache-tomcat-6.0.20/bin/shutdown.sh
sh /Library/apache-tomcat-6.0.20/bin/startup.sh
;;
*)
echo "Usage :start|stop|restart"
;;
esac
exit 0
That script was created in nano from a sudo sh shell,
but when I try to run it, it spits out this:
sh: /usr/bin/tomcat: Permission denied
I've run chmod 755 on the *.sh and *.bat files inside /Library/apache-tomcat-6.0.20/bin.
Still permission denied, so how do I get around that? I have admin privileges on the machine.
Thanks for reading
Go to Tomcat bin directory and run the below command:
chmod +x *.sh
This worked for me.
Where did you install the tomcat script to? I'd recommend you install it to /usr/bin. Once installed, make sure the permissions are correct (i.e. chmod 755 /usr/bin/tomcat). You can then confirm with ls -l /usr/bin/tomcat.
If you still get errors once the permissions on /usr/bin/tomcat are correct, then you can add the following two lines after the #!/bin/bash line.
set -x
set -v
With the above lines, bash will output additional information that will allow you to tell what's being executed and where the error is happening.
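You can also get the same trace for a single run without editing the script (assuming it is installed as /usr/bin/tomcat, as suggested above):
# -x prints each command as it executes, -v echoes each line as read:
bash -xv /usr/bin/tomcat start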
1) Go to the tomcat directory, which preferably should be "/usr/local/folder-name"
2) Check for the permissions for the folder: ls -l
3) Change the permissions using: sudo chmod -R 755 folder-name
4) Change the owner to the current user: sudo chown -R user-name:group-name folder-name
e.g. sudo chown -R userName:admin folder-name
Try executing the script again and it should work.
