I've been searching the net, but haven't found a proper solution so far.
Raspberry Pi: Launch Python Script on Startup
This guide shows a way to launch a Python script on startup.
The key to this guide is the following crontab command:
@reboot sh /home/pi/bbt/launcher.sh >/home/pi/logs/cronlog 2>&1
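For anyone reproducing this: assuming the standard cron setup on Raspbian, the line goes into the pi user's crontab, which you can edit like so:
crontab -e   # opens the current user's crontab; paste the @reboot line and save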
I tried it. It works well only when there is NO operation that opens a web page or any other display within the Python script.
If I want to launch a web browser using Python, this crontab does not work well.
I checked the log and it says:
xhost: unable to open display ":0.0"
no protocol specified
test.py :cannot connect to X server :0.0
So this is a DISPLAY problem.
This is my shell script (named launcher.sh), mainly used to launch the Python script:
#!/bin/bash
# Allow local root processes to connect to the X server
xhost +local:root
# Point graphical programs at the primary display
export DISPLAY=:0.0
python /home/pi/test.py
Does anyone know about this DISPLAY problem? Please help.
Thanks a lot!
Sincerely, Helen
I think crontab is not a very reliable way to open a DISPLAY on startup, because opening a DISPLAY requires the X server, and you don't know whether the X server is already up by the time the "@reboot" command runs during the boot process.
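One possible workaround, if you want to stick with crontab, is to make launcher.sh wait until the X server actually answers before launching the script. A minimal sketch (the xset-based polling loop is my own suggestion, not something I tested on every setup):
#!/bin/bash
# Hypothetical launcher.sh variant: poll the X server until it accepts
# connections, since @reboot may fire before X is up.
export DISPLAY=:0.0
until xset q >/dev/null 2>&1; do
    sleep 1
done
python /home/pi/test.py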
I solved this problem by doing the following configuration.
1. Enter these commands in a terminal:
cd ~/.config/lxsession/LXDE-pi
nano autostart
2. Add one line to autostart:
@python /home/pi/test.py
In this example I'm starting a Python program that opens a display on startup.
You may replace the line above with any application that opens a display. The exact syntax may differ slightly, but this works as a proof of concept that this is also a viable approach.
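For reference, the finished autostart file might look roughly like this; the first three lines are typical Raspbian LXDE-pi defaults, so yours may differ:
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
@python /home/pi/test.py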
Related
I have a deploy script in which I want to clear the cache of my CDN. When I am on the server and run my script, everything is fine; however, when I SSH in and run only that file (i.e. not actually getting into the server, cd-ing into the directory and running it), it fails and states that my doctl command cannot be found. This seems to be an issue only with this program over ssh, as running systemctl --help works fine.
Please note that I have installed Digital Ocean's doctl using sudo snap install doctl and it is there.
Here is the .sh file (minus comments):
#!/bin/sh
doctl compute cdn flush [MYID] --files [*] # static cache
So I am not sure what the issue is. Anybody have an idea?
Again, if I get into the server and run the file, all works, but here is the SSH command I use that returns the error:
ssh root@123.45.678.999 "/deploy/clear_digital_ocean_cache.sh"
And here is the error.
/deploy/clear_digital_ocean_cache.sh: 10: doctl: not found
Well, one solution was to change the command to use an absolute path inside my .sh file, like so:
#!/bin/sh
/snap/bin/doctl compute cdn flush [MYID] --files [*] # static cache
I realized that I could run other commands over ssh (like systemctl), so the fix was either to change where doctl was located (i.e. put it in a user bin directory) or to ensure the command was called with an absolute path by adding /snap/bin/ in front of it.
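Another option, assuming the cause is that snap installs its binaries into /snap/bin (which login shells add to PATH but bare ssh commands do not), is to extend PATH inside the script instead of hard-coding the binary's location:
#!/bin/sh
# Non-interactive ssh sessions skip the login files that normally put
# /snap/bin on PATH, so add it explicitly before calling doctl.
PATH="/snap/bin:$PATH"
export PATH
doctl compute cdn flush [MYID] --files [*] # static cache
You can confirm the difference by comparing the PATH a bare ssh command sees with the one a login shell gets:
ssh root@123.45.678.999 'echo $PATH'              # non-interactive: often missing /snap/bin
ssh root@123.45.678.999 "bash -lc 'echo \$PATH'"  # login shell: profile files sourced first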
I set up spark on a single EC2 machine and, when I am connected to it, I am able to use spark either with jupyter or spark-submit, without any issue. Unfortunately, though, I am not able to use spark-submit via ssh.
So, to recap:
This works:
ubuntu@ip-198-43-52-121:~$ spark-submit job.py
This does not work:
ssh -i file.pem ubuntu@blablablba.compute.amazon.com "spark-submit job.py"
Initially, I kept getting the following error message over and over:
'java.io.IOException: Cannot run program "python": error=2, No such file or directory'
After having read many articles and posts about this issue, I thought that the problem was due to some variables not having been set properly, so I added the following lines to the machine's .bashrc file:
export SPARK_HOME=/home/ubuntu/spark-3.0.1-bin-hadoop2.7 #(it's where i unzipped the spark file)
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=/usr/bin/python3
export PYSPARK_PYTHON=python3
(As the error message referenced python, I also tried adding the line "alias python=python3" to .bashrc, but nothing changed)
After all this, if I try to submit the spark job via ssh I get the following error message:
"command spark-submit not found".
As it looks like the system ignores all the environment variables when sending commands via SSH, I decided to source the machine's .bashrc file before trying to run the spark job. As I was not sure about the most appropriate way to send multiple commands via SSH, I tried all the following ways:
ssh -i file.pem ubuntu#blabla.compute.amazon.com "source .bashrc; spark-submit job.file"
ssh -i file.pem ubuntu@blabla.compute.amazon.com << HERE
source .bashrc
spark-submit job.file
HERE
ssh -i file.pem ubuntu@blabla.compute.amazon.com <<- HERE
source .bashrc
spark-submit job.file
HERE
(ssh -i file.pem ubuntu@blabla.compute.amazon.com "source .bashrc; spark-submit job.file")
All attempts worked with other commands like ls or mkdir, but not with source and spark-submit.
I have also tried providing the full path by running the following line:
ssh -i file.pem ubuntu#blabla.compute.amazon.com "/home/ubuntu/spark-3.0.1-bin-hadoop2.7/bin/spark-submit job.py"
In this case too I get, once again, the following message:
'java.io.IOException: Cannot run program "python": error=2, No such file or directory'
How can I tell spark which python to use if SSH seems to ignore all environment variables, no matter how many times I set them?
It's worth mentioning that I got into coding and data a bit more than a year ago, so I am really a newbie here and any help would be highly appreciated. The solution may be very simple, but I cannot get my head around it. Please help.
Thanks a lot in advance :)
The problem was indeed with the way I was expecting the shell to work (which was wrong).
My issue was solved by:
Setting my variables in .profile instead of .bashrc
Providing full path to python
Now I can launch spark jobs via ssh.
I found the solution in the answer @VinkoVrsalovic gave to this post:
Why does an SSH remote command get fewer environment variables then when run manually?
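For context, that answer explains that a bare ssh command runs a non-interactive shell. On Ubuntu the stock ~/.bashrc returns early for non-interactive shells, so exports placed below that guard never take effect over ssh; the guard looks roughly like this:
# Near the top of Ubuntu's default ~/.bashrc: stop sourcing the file
# when the shell is not interactive.
case $- in
    *i*) ;;
      *) return;;
esac
An alternative that sidesteps the profile files entirely is to set everything inline on the remote command; a sketch using the paths from my question:
ssh -i file.pem ubuntu@blabla.compute.amazon.com \
  "PYSPARK_PYTHON=/usr/bin/python3 /home/ubuntu/spark-3.0.1-bin-hadoop2.7/bin/spark-submit job.py"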
Cheers
For some weird reason, these two errors started occurring on ssh connection initiation:
-bash: id: command not found
-bash: [: : integer expression expected
I'm not sure how those errors affect me, but in the last few days my VNC connection to the Raspberry Pi also stopped working (I can see the login screen in the VNC viewer, but after I put in my credentials, the screen turns black for a moment and then returns to the same login screen, which I'm stuck on...).
I've tried updating my Pi through ssh and using some other commands I've found online, but nothing worked. Any idea how to solve these problems?
It looks like something is trying to load on login.
The places to check are as follows:
~/.bashrc
~/.bash_profile
~/.profile
~/.profile is run each time you log in to the shell, and the others run when bash starts.
By the looks of it, something is trying to run the command id, and as it's not installed, it's not running.
A quick test to see if this is in any of your files would be to run grep in your home area:
# Change to your home area
cd ~/
# Search the startup files listed above for lines mentioning "id"
grep -ns "id" .bashrc .bash_profile .profile
This could explain why VNC is not working: when you try to log in to VNC, it loads your config from those files, and if they error, VNC might not launch.
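If grep alone doesn't pinpoint it, another way (my suggestion, not part of the original answer) is to trace a fresh login shell so every sourced line is printed along with the file it came from:
# PS4 prefixes each traced command with its source file and line number,
# so the file producing "id: command not found" becomes obvious.
PS4='+ ${BASH_SOURCE}:${LINENO}: ' bash -xlic true 2>&1 | less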
I tried to put the following before exit 0 in rc.local:
/FolderToThePyFile/piProgram.py &
piProgram.py should start a local server for a web application. I tried to open the web app in the browser, but the usual web address doesn't work. Running the .py file only starts the server; it doesn't prompt for any user input. When I run 'jobs' I don't see it running.
What am I doing wrong, and is there any way to fix it?
I am running Raspbian OS on a Raspberry Pi 3 Model B+.
According to the Raspberry Pi docs:
First
sudo nano /etc/rc.local
Then you need to add the following before 'exit 0':
python /full/path/to/file/piProgram.py &
Raspberry Pi Docs: https://www.raspberrypi.org/documentation/linux/usage/rc-local.md
You need to specify 'python' before the path to the script.
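One further refinement worth considering (my suggestion, not from the docs): rc.local runs as root with no terminal attached, so redirecting the script's output to a log file makes boot-time failures visible:
# Run the script in the background and keep its output for debugging
python /full/path/to/file/piProgram.py >/var/log/piProgram.log 2>&1 &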
I've been stuck on this issue for the past 3 or 4 days. I am trying to run the following command in a Windows batch file in Jenkins. This causes it to hang, and it doesn't accept any further input:
knife winrm ec2-xx-xx-xx-xx.compute-1.amazonaws.com interactive -m -x Administrator -P xxxxxxxx
This works fine if run manually on a Windows machine, but I think the ruby.exe that is being opened is starting to cause Jenkins some problems.
Has anyone ever used knife winRM's interactive mode in such a way before? I'm at my wits end here and I really need this to work. Thank you for any help you could provide.
Have you tried running it as "call knife" instead of just "knife"? If, inside a batch file, you run another batch file (knife.bat, for example) without "call", the initial batch file run is terminated. There's a good explanation here: http://www.robvanderwoude.com/call.php
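Concretely, the suggested change would look like this inside the Jenkins batch step (assuming knife resolves to a knife.bat wrapper on the agent's PATH):
rem Without "call", control never returns to this script once knife.bat finishes.
call knife winrm ec2-xx-xx-xx-xx.compute-1.amazonaws.com interactive -m -x Administrator -P xxxxxxxx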