I have successfully installed Homestead on my Mac and set the path using the following command:
export PATH=~/.composer/vendor/bin:$PATH
But every time I restart my machine, the homestead command stops working, and I have to run export PATH=~/.composer/vendor/bin:$PATH again to fix it.
Please suggest a way to fix this. Thanks for your help.
Use a shell provisioner to put that line either in /etc/profile (for the whole system) or just ~vagrant/.profile (for just your user). Like so:
config.vm.provision "set-path", type: "shell" do |s|
s.inline = "grep "~/.composer/vendor/bin" ~vagrant/.profile &> /dev/null || echo "export PATH=~/.composer/vendor/bin:\$PATH" >> ~vagrant/.profile"
end
This greps to check whether that text is already in the file; if grep exits non-zero (the || branch), the line is appended to the end of the file.
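For readability, here is the same idempotent append unpacked as a plain shell snippet (just the contents of the Ruby string above, nothing new):
# Append the PATH export to ~vagrant/.profile only if it is not already there
grep '~/.composer/vendor/bin' ~vagrant/.profile &> /dev/null \
  || echo 'export PATH=~/.composer/vendor/bin:$PATH' >> ~vagrant/.profile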
I'm trying to create an EC2 User-data script to run other scripts on boot up. However, the scripts that I run fail to recognize some commands and variables that I'd already declared. I'm running the commands as the "ubuntu" user but it still isn't working.
My user-data script looks something like this:
export user="ubuntu"
sudo su $user -c ". ./run_script"
Within the script, I have these lines:
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> ~/.bashrc
source ~/.bashrc
However, the script can't run $SOME_PATH/application, and echo $SOME_PATH returns a blank line. I'm confused, because $SOME_PATH/application works when I log into the EC2 instance over SSH, and my debug logging with whoami returns "ubuntu".
Am I missing something here?
Your user-data script is executed as root, and the su command leaves $HOME and other environment variables intact (note that the sudo is redundant). su - does not help either.
So do not use ~ or $HOME; use the full path /home/ubuntu/.bashrc instead.
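For example, a minimal sketch of the user-data script with the full path spelled out (/some/path and SOME_PATH are just the placeholders from the question):
#!/bin/bash
# User-data runs as root at boot; ~ and $HOME point at root's home, so use the absolute path
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> /home/ubuntu/.bashrc
chown ubuntu:ubuntu /home/ubuntu/.bashrc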
I found out what the problem was. It seems that source ~/.bashrc isn't enough to refresh the shell -- the environment variables worked after I referenced them in another bash script.
I have set up the google assistant sdk on my Raspberry Pi as shown here: https://developers.google.com/assistant/sdk/prototype/getting-started-pi-python/run-sample
Now, in order to re-run the assistant, I have worked out that the two commands are
$ source env/bin/activate
and
(env) $ google-assistant-demo
However, I want to automate this process in a script that I can call from rc.local (followed by an &) so that the assistant starts at boot.
However, if I run a simple script
#!/bin/bash
source env/bin/activate
google-assistant-demo
the assistant does not run inside the environment.
My environment path is /home/pi/env/bin/activate.
How can I have it so the script starts the environment and then runs the assistant inside the virtual environment?
Found the solution here: https://raspberrypi.stackexchange.com/a/45089
Create a startup shell script in your root directory (I named mine "launch") and make it executable too (see the chmod step after the script below):
sudo nano launch.sh
I wrote it this way:
#!/bin/bash
source /home/pi/env/bin/activate
/home/pi/env/bin/google-assistant-demo
Save the file
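To make it executable, as mentioned above (a sketch assuming the script was saved as /home/pi/launch.sh; adjust the path to wherever you created it):
chmod +x /home/pi/launch.sh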
Edit the LXDE-pi autostart file:
sudo nano /home/pi/.config/lxsession/LXDE-pi/autostart
Add this to the bottom of that file:
./launch.sh
Then reboot.
Scripts run from rc.local execute with / as their working directory (or possibly the home directory of the root user, depending on the distro, I think).
The easy fix is to code the full path to the environment.
#!/bin/bash
source /home/pi/env/bin/activate
google-assistant-demo
# or maybe /home/pi/google-assistant-demo
There is no need to explicitly background anything in rc.local
In the end I went with the following method.
Using this as a base: https://youtu.be/ohUszBxuQA4?t=774 (thanks to Eric Parisot), but with a few changes.
You will need to download the src file he uses and extract its contents into /home/pi/src/.
I did not run gassist.sh as sudo, as it gave me the following error:
OpenAlsaHandle PcmOpen: No such file or directory
[7689:7702:ERROR:audio_input_processor.cc(756)] Input error
ON_MUTED_CHANGED:
{'is_muted': False}
ON_START_FINISHED
ON_ASSISTANT_ERROR:
{'is_fatal': True}
[7689:7704:ERROR:audio_input_processor.cc(756)] Input error
ON_ASSISTANT_ERROR:
{'is_fatal': True}
Fix: DO NOT run as sudo
If gassist.sh gives an error about RPi.GPIO, you need to do this (https://youtu.be/ohUszBxuQA4?t=580):
$ cd /home/pi/env/bin
$ source activate
(env) $ pip install RPi.GPIO
(env) $ deactivate
And then I ran sudo nano /etc/profile and appended this to the end:
# Harvs was here on 24/06/17
if pidof -x "gassist.sh" >/dev/null; then
    echo ""
    echo "/etc/profile says:"
    echo "An instance of Google Assistant is already running, will not start again"
    echo ""
else
    echo "Starting Google Assistant..."
    echo "If you are seeing this, perhaps you have SSH within seconds of reboot"
    /home/pi/src/gassist.sh &
fi
And now it works perfectly, inside the virtual environment, and when booting to CLI mode! :)
I would like to add an additional path to my VM's $PATH environment variable through my Puppet config.yaml or Vagrantfile (or some other VM-external mechanism that I don't know about).
Is this possible? If so, how?
In Vagrant you can easily provision things with a shell script. So first, create a script (in the same folder as your Vagrantfile) that adds the additional path to $PATH. For example, create a file called bootstrap.sh with this content:
export PATH=$PATH:/foo/bar
# Or, if you want it for all users:
echo 'PATH=$PATH:/foo/bar' >> /etc/profile
Then in your Vagrantfile, add this line to execute this script when the VM is booted :
config.vm.provision :shell, path: "bootstrap.sh"
This approach is partway there, but if for some reason you re-run provisioning on your Vagrant box, you'll end up with one of those lines in there for each time you run the provisioning. To avoid that:
grep -qsF 'PATH=$PATH:/foo/bar' /etc/profile || echo 'PATH=$PATH:/foo/bar' >> /etc/profile
I don't know much about Chef, but Salt does a great job of creating a managed section in such files and then maintaining it on its own. I'd be surprised if Chef doesn't do the same thing.
This is the contents of my /etc/rc.local file. It is supposed to run on login on my Raspberry Pi, yet it just logs in (as I'm using auto-login) and then does nothing, i.e. it just sits there at pi@raspberrypi ~ $ waiting for a command. I have no idea why it's not working, nor do I have any experience with bash scripts.
It should mount a USB drive and then run a file on that USB, but it doesn't.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo /bin/mount /dev/sda1 /media/robousb
sudo python media/robousb/Robopython/usercode_old.py
exit 0
I'm assuming you're running Raspbian, which is pretty much Debian.
rc.local runs as root before login, so you don't need or want sudo; it may be causing an error, hence nothing happening.
User-level commands that should run for any user when they log in (unlike rc.local, which runs before login) can be put into /etc/bash.bashrc. That may be more applicable to your situation, at least for the second command.
Login commands for the pi user only can be put into /home/pi/.bashrc.
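For example, a sketch of how the two commands from the question could be split up under this suggestion (the paths are the ones from the question):
# /etc/rc.local -- runs as root before login, so no sudo needed
/bin/mount /dev/sda1 /media/robousb

# /home/pi/.bashrc -- runs when the pi user logs in
python /media/robousb/Robopython/usercode_old.py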
I don't know much about the Raspberry Pi, but you could try writing something into a file to see whether rc.local is running at all. For example:
touch /tmp/test.txt
echo "$(date) => It's running" > /tmp/test.txt
If that doesn't work, I know that on some OSes (Fedora, RHEL, and CentOS, for example) the path of that file is /etc/init.d/rc.local. It doesn't cost anything to try that path ;)
I have the exact same problem with RPi3/Jessie.
I suggest launching your script from .bashrc by doing
sudo emacs /home/pi/.bashrc
In my case I wrote this at the end of the file:
bash /home/pi/jarvis/jarvis.sh -b &
And that works well at each startup.
I had the same problem. The solution is in the Raspbian forum:
Just change the first line from #!/bin/sh -e to
#!/bin/bash
Ivan X is right: you don't need the sudo command.
So, I've got a bunch of vagrant VMs running some flavor of Linux (centos, ubuntu, whatever). I would like to automatically ensure that a "vagrant ssh" will also "cd /vagrant" so that no-one has to remember to do that whenever they log in.
I've figured out (duh!) that echo "\n\ncd /vagrant" >> /home/vagrant/.bashrc will do the trick. What I don't know is how to ensure that this only happens if the cd command isn't already there. I'm not a shell expert, so I'm completely confused here. :)
You can do this by using the config.ssh.extra_args setting in your Vagrantfile:
config.ssh.extra_args = ["-t", "cd /vagrant; bash --login"]
Then anytime you run vagrant ssh you will be in the /vagrant directory.
I put
echo "cd /vagrant_projects/my-project" >> /home/vagrant/.bashrc
in my provision.sh, and it works like a charm.
cd is a Bash shell built-in; as long as a shell is installed, it should be there.
Also, be aware that ~/.bash_profile is for interactive login shells; if you add cd /vagrant to ~vagrant/.bashrc, it may NOT work.
This is because distros like Ubuntu do NOT have ~/.bash_profile by default and instead use ~/.bashrc and ~/.profile (which sources ~/.bashrc).
If someone creates a ~/.bash_profile for the vagrant user on Ubuntu, ~vagrant/.bashrc will not be read.
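If a ~/.bash_profile does exist, a common way to keep .bashrc additions working is to have it source ~/.bashrc (a generic sketch, not something specific to Vagrant):
# ~/.bash_profile
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi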
You need to add cd /vagrant to your .bashrc in the vm. The best way to do this is in your provisioner script.
If you don't have a provisioner script, make one by adding this line to your Vagrantfile before end:
config.vm.provision "shell", path: "scripts/vagrant/provisioner.sh", privileged: false
The path is relative to the project root where the Vagrantfile is, and privileged depends on your project and on what else is in your provisioner script, which might need to be privileged. I use privileged false and sudo explicitly when necessary.
And in the provisioner script:
if ! grep -q "cd /vagrant" ~/.bashrc ; then
echo "cd /vagrant" >> ~/.bashrc
fi
This will add cd /vagrant to .bashrc, but only if it isn't there already. This is useful if you reprovision, as it will prevent your .bashrc from getting cluttered.
Some answers mention a conflict with .bash_profile. If the above code doesn't work, you can try the same line with .bash_profile or .profile instead of .bashrc. However, I've been using Vagrant with Ubuntu guests; my Laravel/Homestead box based on Ubuntu has a .bash_profile and a .profile, but having cd /vagrant in .bashrc did work for me when using vagrant ssh, without changing or deleting the other files.
You can add cd /vagrant to your .bashrc and it will run the command when you ssh. The .bashrc you want is in /home/vagrant (the user you log in as when you vagrant ssh). You can just stick the new line at the bottom of the file.
You can also do it this way:
vagrant ssh -c "cd /vagrant && bash"
And you could include it in a script to launch it (like ./vagrant-ssh).
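Such a wrapper could look like this (a sketch; the vagrant-ssh filename is just the suggestion from the answer):
#!/bin/bash
# vagrant-ssh: open a shell inside the VM, already cd'd into /vagrant
vagrant ssh -c "cd /vagrant && bash"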
Maybe this can help. Edit the Vagrantfile, substituting your username for vagrant if it differs:
config.vm.provision "shell" do |s|
  s.inline = <<-SHELL
    # Change directory automatically on ssh login
    if ! grep -qF "cd /home/vagrant/ansible" /home/vagrant/.bashrc ;
    then echo "cd /home/vagrant/ansible" >> /home/vagrant/.bashrc ; fi
    chown vagrant. /home/vagrant/.bashrc
  SHELL
end
Ideally we just want to alter the vagrant ssh behaviour.
In my case, I wanted something that didn't affect any other processes in the environment, so you can do something like this in the Vagrantfile:
VAGRANT_COMMAND = ARGV[0]
if VAGRANT_COMMAND == "ssh"
config.ssh.extra_args = ["-t", "cd /vagrant; bash --login"]
end
You can use Ansible to assert that your .bashrc file contains cd /vagrant.
If you are not already using the Ansible provisioner for your VM, add the following lines to your Vagrantfile:
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "provisioning/playbook.yml"
end
And in your playbook, add the following task/play:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: chdir to vagrant directory
      ansible.builtin.lineinfile:
        path: /home/vagrant/.bashrc
        line: cd /vagrant
According to this Q&A, I would recommend modifying .bashrc instead of .profile or .bash_profile.
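To apply the change to a VM that is already up, re-run the provisioners (for example with vagrant provision):
vagrant provision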