I wrote a bash script, run.sh, which contains a python command with multiple options:
python train.py --lr 0.01 \
--momentum 0.5 \
--num_hidden 3 \
--sizes 100,100,100 \
--activation sigmoid \
--loss sq \
--opt adam \
--batch_size 20 \
--anneal true
I tried running this command in IPython:
!./run.sh
However, in IPython I'm not able to access the variables of the python script train.py. Is there some way to run the bash script in IPython so that I can access those variables? I don't want to copy-paste the command from the bash script every time.
I'm currently using IPython 5.1.0 on macOS Sierra.
The python process that runs your script train.py and the python process you're using at the IPython command line are two separate processes. It makes sense that one doesn't know about the variables of the other. There is probably some fancy way to connect the two, but I suspect from the way you described the problem that it's not worth the work.
Here's an easier way to get access: you could replace python train.py in your script with python -i train.py. This way, after the script is done, you will go into interactive mode in the very process that ran it, and anything defined at the top level will be accessible. You could also insert a call to pdb.set_trace() in your script to stop at an arbitrary point.
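For example, run.sh would become the following (only the -i flag is added); when train.py finishes, you land at an interactive >>> prompt in the same process, where anything defined at the top level of train.py can be inspected:
# -i drops into the interactive interpreter once train.py completes
python -i train.py --lr 0.01 \
    --momentum 0.5 \
    --num_hidden 3 \
    --sizes 100,100,100 \
    --activation sigmoid \
    --loss sq \
    --opt adam \
    --batch_size 20 \
    --anneal true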
I have a WSL Ubuntu distro that I've set up so that when I log in, 4 services start, including a web API that I can test via Swagger to verify it is up and working.
I'm at the point where what I want to do now is start WSL via a script - that is, launch my distro, have all of the services start, and do it from Python. The problem is I cannot even figure out the correct syntax to get WSL to start from PowerShell in a manner where my services start.
Side note: "services" != systemctl (or similar) calls, but just executing bash CLI commands from either my .bashrc or .profile at login.
I've put the commands to execute in .profile & .bashrc. I've configured it both for root execution and non-root user execution. I've taken the commands out of those 2 files and put them into a script in the Windows file system that I pass in on the start of wsl. And I've put that shell script in the WSL file system as well. Nothing seems to work, and sometimes the distro starts and then stops after about 30 seconds.
Some of the PS CLI commands I've tried:
Start-Job -ScriptBlock{ wsl -d distro -u root }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c /root/bin/start.sh' }
Start-Job -ScriptBlock{ wsl -d distro -u root 'bash -i -l -c .\start.sh' }
wsl -d distro -u root -- bash -i -l -c /root/bin/start.sh
wsl -d distro -u root -- bash -i -l -c .\start.sh
wsl -d distro -u root -- /root/bin/start.sh
Permutations of the above that I've tried: replacing root with my default login, and turning all of the Start-Job bash options into a comma-separated list of single-quoted strings (Ex: 'bash', '-i', '-l', ... ). Nothing I launch from the CLI will allow me access to the web API that is supposed to be hosted on my distro.
Any advice on what to try next?
Not necessarily an answer here as much as troubleshooting tips which will hopefully lead to an answer:
First, most of the forms that you are using seem to be correct. The only ones that absolutely shouldn't work are those that attempt to run the script from the Windows filesystem.
Make sure that you have a shebang line starting your script. I'm assuming you do, but other readers may come across this as well. For the moment, try this form:
#!/usr/bin/env -S bash -li
That's going to have the same effect as the bash -li you tried: it will source both interactive startup files such as ~/.bashrc and login profiles such as ~/.bash_profile (and /etc/profile.d/*, etc.).
Note that preferably, you won't need the -li. Best practice would be to move anything necessary for the services over from the startup scripts to your start.sh script, and avoid parsing the profile and rc. I need to go update some of my answers, since I just realized I've been guilty of giving some potentially bad advice ...
Specifically, though, I'm wondering if your interactive Bash config has something truly, well, "interactive" in it that might be preventing the automatic running of the script itself. Again, best practice would be for ~/.bashrc to only hold configuration that is needed for interactive shell sessions.
Make sure the script is set as executable (chmod +x start.sh). Again, I'm assuming this is the case for you.
With a shebang line and an executable script, use something like:
wsl -d distro -u root -e /root/bin/start.sh
The -e tells WSL to launch the script directly. Since it has a shebang line, it will be parsed by Bash. Most of the other forms you used above actually run Bash twice: once when launching WSL, and again when it finds the shebang line in the script.
Try some basic troubleshooting for your script like:
Add set -x to the top (right under the shebang line) to turn on script debugging.
Add a ps -efH at the end to show the processes that are running when the script completes.
If needed, resort to quick-and-dirty echo statements to show where things have progressed in the script.
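Put together, a minimal start.sh with those debugging aids might look like this (the nginx line is only a placeholder for whatever your real service commands are):
#!/usr/bin/env -S bash -li
set -x                              # trace each command as it executes
echo "start.sh: launching services"
/usr/sbin/nginx &                   # placeholder: replace with your service startup commands
ps -efH                             # dump the process tree before the script exits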
I'm hopeful that the above will at least show you the problem, but if not, add the debugging info that you gain from this to your question, and we can troubleshoot further.
As a user I want to execute Robot Framework's robot command with some command line options. I put everything in a script to avoid retyping the long command each time - see example below. On Linux and macOS I can execute this script from any terminal emulator, i.e.
# Linux
. run_local_tests.sh
# Mac OS
./run_local_tests.sh
On Windows, the application associated with the .sh file type (the VSCode editor) is opened instead of the robot command being executed, or an error like robot: command not found is returned:
# Windows
.\run_local_tests.sh
# OR
run_local_tests.sh
# OR
bash run_local_tests.sh
shell script - filename: run_local_tests.sh
#!/bin/bash
# Set desired loglevel: NONE (less details), INFO, DEBUG, TRACE (most details)
export LOG_LEVEL=TRACE
# RUN CONTRIBUTION SERVICE TESTS
robot -i CONTRIBUTION -e circleci \
--outputdir results \
--log NONE \
--report NONE \
--output XML/CONTRIBUTION.xml \
--noncritical not-ready \
--flattenkeywords for \
--flattenkeywords foritem \
--flattenkeywords name:_resources.* \
--loglevel $LOG_LEVEL \
--name CONTRI \
robot/CONTRIBUTION_TESTS/
Renaming the script from .sh to .bat doesn't help :(
Entering bash, then activating the venv and calling the script doesn't work either.
What other options are there (without installing additional tools like Cygwin etc.)?
I'm actually trying to answer the same question in the opposite direction (how to trigger/run them on my machine as .sh). Looks like we may help each other out. 8)
I believe this is what you're looking for:
Your file would be run_local_tests.bat
Contents:
@echo off
cd C:\path\to\robot\project
call robot -d relative/path/to/test/output/dir relative/path/to/tests
Of course you can use any other valid robot cli syntax in the call also. You may have to make it executable too. I'm not sure.
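For reference, carrying the options over from run_local_tests.sh, a fuller batch version might look like this (a sketch; the cd path is a placeholder, and %LOG_LEVEL% replaces the exported shell variable):
@echo off
rem Set desired loglevel: NONE (less details), INFO, DEBUG, TRACE (most details)
set LOG_LEVEL=TRACE
cd C:\path\to\robot\project
rem RUN CONTRIBUTION SERVICE TESTS
call robot -i CONTRIBUTION -e circleci ^
    --outputdir results ^
    --log NONE ^
    --report NONE ^
    --output XML/CONTRIBUTION.xml ^
    --noncritical not-ready ^
    --flattenkeywords for ^
    --flattenkeywords foritem ^
    --flattenkeywords name:_resources.* ^
    --loglevel %LOG_LEVEL% ^
    --name CONTRI ^
    robot/CONTRIBUTION_TESTS/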
I am trying to use Valve's steamcmd package, an interactive shell that takes user input, in a bash script. Of course I'd like to be able to send it input as part of a docker build, but I'm running into some issues.
I started by installing steamcmd locally and running this script:
./steamcmd <<LOGIN
login anonymous
quit
LOGIN
Unsurprisingly, it works properly, yielding this output:
Steam Console Client (c) Valve Corporation
-- type 'quit' to exit --
Loading Steam API...OK.
Connecting anonymously to Steam Public...Logged in OK
Waiting for user info...OK
Steam>
The problems start when I try the same command in Docker:
RUN ./steamcmd <<LOGIN \
login anonymous \
quit \
LOGIN
During the build, it runs ./steamcmd but then hangs on the input prompt; none of the data in the heredoc is passed, and the build never completes. What am I doing wrong?
Extra Info:
Base Image: ubuntu:latest
User: non-root
The Docker build process is completely non-interactive; if you are looking for some input, you need to pass build-args and then reference those build-args in your subsequent RUN commands.
But as mentioned in the comment, anything truly interactive needs to run as a CMD, as it will otherwise get the build process stuck.
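As a sketch of the build-arg approach (the STEAM_USER name is made up), a value can be supplied with docker build --build-arg STEAM_USER=anonymous . and consumed in a RUN step, where + commands replace the interactive input:
# hypothetical build-arg with a default value
ARG STEAM_USER=anonymous
# + commands are executed by steamcmd itself, so no stdin is needed
RUN ./steamcmd +login ${STEAM_USER} +quit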
Here is a Dockerfile snippet with an example entrypoint that may help you:
# start steamcmd to force it to update itself
RUN ./steamcmd/steamcmd.sh +quit
# start the server main script
ENTRYPOINT ["bash", "/home/steam/server_scripts/server.sh"]
You can change it to
/home/steam/steamcmd/steamcmd.sh \
+login anonymous \
+exit
One example from the list above is cs_go.sh, which you would rename to server.sh:
#!/bin/bash
# update server's data
/home/steam/steamcmd/steamcmd.sh \
+login anonymous \
+force_install_dir /home/steam/server_data \
+app_update 740 \
+exit
# start the server
/home/steam/server_data/srcds_run \
-game csgo -console -usercon \
-secure -autoupdate -tickrate 64 +hostport 27015 \
+game_type 0 +game_mode 1 +mapgroup mg_active +map de_dust2 \
-port 27015 -console -secure -nohltv +sv_pure 0 +ip 0.0.0.0
exit 0
Update:
Automating SteamCMD
There are two ways to automate SteamCMD. (Replace steamcmd with ./steamcmd.sh on Linux/OS X.)
Command line
Note: When using the -beta option on the command line, it must be quoted in a special way, such as +app_update "90 -beta beta".
Note: If this does not work, try putting it like "+app_update 90 -beta beta" instead.
Append the commands to the command line prefixed with plus characters, e.g.:
steamcmd +login anonymous +force_install_dir ../csgo_ds +app_update 740 +quit
I want to run some code on a BeagleBone Black as soon as I apply power, without doing ssh.
I have tried putting the commands to run the code in the ~/.bashrc file, but they only run when I log in using ssh. I tried the same thing with the /etc/rc.local file, but it didn't work even after ssh.
I have also tried @reboot my_command in crontab -e, but it also requires me to log in using ssh.
Any suggestions?
EDIT:
root@beaglebone:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
root@beaglebone:~# ps aux | grep cron | grep -v grep
root 295 0.0 0.3 4428 1988 ? Ss 15:03 0:00 /usr/sbin/cron -f
Output of crontab -e: last few lines
root@beaglebone:~# crontab -e
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
#@reboot /root/wiringBone-master/library/main not working
#*/5 * * * * /root/wiringBone-master/library/main works
main is the script I want to run
/etc/rc.local is a quick way. Make sure to launch your command into the background, and don't prevent the script from finishing.
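A minimal /etc/rc.local along those lines (using the script path from the question) could be:
#!/bin/sh -e
# launch into the background so rc.local itself can finish
/root/wiringBone-master/library/main &
exit 0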
Writing a proper systemd service file would be better though.
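For example, a minimal unit file (a sketch; the unit name and description are made up) saved as /etc/systemd/system/wiringbone.service and activated with systemctl enable wiringbone.service:
[Unit]
Description=wiringBone main program
After=network.target

[Service]
Type=simple
ExecStart=/root/wiringBone-master/library/main

[Install]
WantedBy=multi-user.target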
The crontab -e method worked! My script required two overlays to execute the code, which I hadn't loaded; that's why my @reboot command didn't work. I solved my problem by adding the required overlays:
@reboot config-pin overlay cape-universaln
@reboot config-pin overlay BB-ADC
@reboot /root/wiringBone-master/library/main
And now my code works on reboot.
I don't know anything about BeagleBone, but on a normal Linux system you'd likely do this with either an init script or, more easily, a cron job set to run at boot.
You'll have to check whether your environment supports either of those. Even if it doesn't have cron, it is probably running some sort of init (likely to be the thing starting SSH on boot, but YMMV).
I've hit a snag with a shell script intended to run every 30 minutes in cron on a Red Hat 6 server. The shell script is basically just a command to run a Python script.
The native version of Python on the server is 2.6.6, but the version required by this particular script is Python 2.7+. I am able to easily run it on the command line by using the "scl" command (this example includes the python -V command to show the version change):
$ python -V
Python 2.6.6
$ scl enable python27 bash
$ python -V
Python 2.7.3
At this point I can run the python 2.7.3 scripts on the command line no problem.
Here's the snag.
When you issue the scl enable python27 bash command, it starts a new bash shell session, which (again) is fine for interactive command-line work. But when doing this inside a shell script, as soon as it runs the bash command, the script stops there because of the new session.
Here's the shell script that is failing:
#!/bin/bash
cd /var/www/python/scripts/
scl enable python27 bash
python runAllUpserts.py >/dev/null 2>&1
It simply stops as soon as it hits the scl line, because "bash" pops it out of the script and into a fresh bash shell. So it never sees the actual python command I need it to run.
Plus, if run every 30 minutes, this would add a new bash each time which is yet another problem.
I am reluctant to update the native Python version on the server to 2.7.3 right now for several reasons. The Red Hat yum repos don't yet have Python 2.7.3, and a manual install would be outside of the yum update system. From what I understand, yum itself runs on Python 2.6.x.
Here's where I found the method for using scl:
http://developerblog.redhat.com/2013/02/14/setting-up-django-and-python-2-7-on-red-hat-enterprise-6-the-easy-way/
Doing everything in one heredoc in the SCL environment is the best option, IMO:
scl enable python27 - << \EOF
cd /var/www/python/scripts/
python runAllUpserts.py >/dev/null 2>&1
EOF
Another way is to run just the second command (which is the only one that uses Python) in the scl environment directly:
cd /var/www/python/scripts/
scl enable python27 "python runAllUpserts.py >/dev/null 2>&1"
scl enable python27 bash starts a shell with the python27 software collection enabled, much like activating a Python virtual environment.
You can get the same effect from within a bash script by simply sourcing the SCL package's enable script, which is located at /opt/rh/python27/enable.
Example:
#!/bin/bash
cd /var/www/python/scripts/
source /opt/rh/python27/enable
python runAllUpserts.py >/dev/null 2>&1
Isn't it easiest to just run your Python script directly? test_python.py:
#!/usr/bin/env python
import sys
f = open('/tmp/pytest.log','w+')
f.write(sys.version)
f.write('\n')
f.close()
then in your crontab:
2 * * * * scl enable python27 $HOME/test_python.py
Make sure you make test_python.py executable.
Another alternative is to call a shell script that calls the Python script. test_python.sh:
#!/bin/bash
python test_python.py
in your crontab:
2 * * * * scl enable python27 $HOME/test_python.sh
One liner
scl enable python27 'python runAllUpserts.py >/dev/null 2>&1'
I use it also with the devtoolsets on the CentOS 6.x
me#my_host:~/tmp# scl enable devtoolset-1.1 'gcc --version'
gcc (GCC) 4.7.2 20121015 (Red Hat 4.7.2-5)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
scl is the dumbest "let us try and lock you in" nonsense I've seen in a while.
Here's how I made it so I could pass arguments to a series of scripts that all linked to a single skeleton file:
$ cat /usr/bin/skeleton
#!/bin/sh
tmp="$( mktemp )"
me="$( basename $0 )"
echo 'scl enable python27 - << \EOF' >> "${tmp}"
echo "python '/opt/rh/python27/root/usr/bin/${me}' $#" >> "${tmp}"
echo "EOF" >> "${tmp}"
sh "${tmp}"
rm "${tmp}"
So if there's a script you want to run that lives in, say, /opt/rh/python27/root/usr/bin/pepper you can do this:
# cd /usr/bin
# ln -s skeleton pepper
# pepper foo bar
and it should work as expected.
I've only seen this scl stuff once before and don't have ready access to a system with it installed. But I think it's just setting up PATH and some other environment variables in a way that's vaguely similar to how it's done under virtualenv.
Perhaps changing the script to have the bash subprocess call python would work:
#!/bin/bash
cd /var/www/python/scripts/
(scl enable python27 bash -c "python runAllUpserts.py") >/dev/null 2>&1
The instance of python found in the subprocess bash's shell should be your 2.7.x copy ... and all the other environment settings done by scl should be inherited thereby.