How can I prevent my terminal from breaking? - shell

I am using a watch command (in a shell script) in my Docker image.
Command:
watch -d -t -g ls -la ${DIR_TO_WATCH} && sleep 5 && ${COMMAND} | tee
This command watches a directory, and if there is any change in the directory structure, we perform certain actions.
I am using this docker image in my helm chart.
Now, when I deploy the chart and check the logs of that pod, my terminal breaks and is no longer usable.
Command:
kubectl logs -f pod-name -n name-space
After this, we need to reset the terminal settings to get the terminal to behave normally.
Is there anything that can be done to prevent this?
Best Regards,
Akshat

Solved this by sending output of watch to /dev/null.
watch -d -t -g ls -la ${DIR_TO_WATCH} > /dev/null && sleep 5 && ${COMMAND} | tee
The reason behind the broken terminal, according to my understanding, was:
Two different commands' logs (from watch and ${COMMAND}) were showing up on the same terminal at the same time, which resulted in a new terminal being created over the default one (which I am not sure how), causing the default terminal to break.
While the ${COMMAND} logs were crucial for me, I did not need to view or monitor the logs from watch. Hence, I sent the output of watch to /dev/null, and that solved my problem.
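Side note: if the terminal has already been garbled, it can usually be restored without closing it. These are standard terminal utilities, not anything specific to kubectl or watch:
reset        # reinitialize the terminal
stty sane    # or just restore sane line settings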
Please correct me if my understanding or approach is wrong.
Thank you.

Related

Bash exits after successfully running docker-compose

I can ssh onto a machine and run the following script
echo testing
docker-compose exec -T meteor php artisan down
echo done
which returns
testing
Application is now in maintenance mode.
done
However, if I try to run that command over ssh it exits immediately after the docker-compose call.
ssh me@me.com << EOF
echo testing
docker-compose exec -T meteor php artisan down
echo done
EOF
gives
testing
Application is now in maintenance mode.
i.e. done is missing
I can get it to continue by adding && after the docker-compose command, but I've got a long script and it makes it ugly and error-prone if I have to explicitly state this.
Any idea why this is happening and what I can change to fix it?
Update
I removed the -T from docker-compose and the script ran to completion; however, it gave the message "the input device is not a TTY". It appears it can't allocate the interactive console. After a bit more googling I found that I can call
export COMPOSE_INTERACTIVE_NO_CLI=1
And then it will run to completion without giving error messages.
Thanks all for the help :)
The issue was being caused by the -T flag to docker-compose.
This was added because an error message was being printed if it wasn't there: the input device is not a TTY.
I found you could prevent docker-compose from creating an interactive terminal if you use
export COMPOSE_INTERACTIVE_NO_CLI=1
Then the script runs correctly without the -T option.
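If you would rather keep the -T flag, another commonly suggested workaround (a sketch based on my assumption that docker-compose exec is swallowing the rest of the heredoc from stdin) is to redirect that command's stdin explicitly so the remaining lines still reach the remote shell:
ssh me@me.com << EOF
echo testing
docker-compose exec -T meteor php artisan down < /dev/null
echo done
EOF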

Is it possible to view docker-compose logs in the output window running in Windows?

docker-compose on Windows cannot be run in interactive mode.
ERROR: Interactive mode is not yet supported on Windows.
Please pass the -d flag when using `docker-compose run`.
When running docker-compose in detached mode, little is displayed to the console, and the only logs displayed under docker-compose logs appear to be:
Attaching to
which obviously isn't very useful.
Is there a way of accessing these logs for transient containers?
I've seen that it's possible to change the Docker daemon's logging to use a file (without the ability to select the log location). Following this as a solution, I could log to the predefined log location, then run a copy script to move the files to a mounted volume to be persisted before the container is torn down. This doesn't sound ideal.
The solution I've currently gone with (also not ideal) is to wrap the shell script parameter in a dynamically created proxy script which logs all output to the mounted volume.
tempFile=myproxy.sh
echo '#!/bin/bash' > $tempFile
echo 'do.the.thing.sh 2> /data/log.txt' >> $tempFile
echo 'echo finished >> /data/log.txt' >> $tempFile
Which I'd then call
docker-compose run -d doTheThing $tempFile
instead of
docker-compose run -d doTheThing do.the.thing.sh
docker-compose logs doTheThing
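One caveat with this approach (my own assumption, not something stated above): the generated proxy script has to be executable, and /data has to be a volume mounted into the doTheThing service, otherwise the log disappears along with the transient container. For example:
chmod +x $tempFile
docker-compose run -d -v "$(pwd)/data:/data" doTheThing $tempFile    # the ./data host path is only an example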

Running UIAutomation scripts from Xcode

Did anyone succeed in setting up automated UIAutomation tests in Xcode?
I'm trying to set up a target in my Xcode project that should run all the UIAutomation scripts I prepared. Currently, the only Build Phase of this target is this Run Script block:
TEMPLATE="/Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate"
MY_APP="/Users/Me/Library/Application Support/iPhone Simulator/6.0/Applications/564ED15A-A435-422B-82C4-5AE7DBBC27DD/MyApp.app"
RESULTS="/Users/Me/Projects/MyApp/Tests/UI/Traces/Automation.trace"
SCRIPT="/Users/Me/Projects/MyApp/Tests/UI/SomeTest.js"
instruments -t $TEMPLATE $MY_APP -e UIASCRIPT $SCRIPT -e UIARESULTSPATH $RESULTS
When I build this target it succeeds after a few seconds, but the script doesn't actually run. In the build log I get these errors:
instruments[7222:707] Failed to load Mobile Device Locator plugin
instruments[7222:707] Failed to load Simulator Local Device Locator plugin
instruments[7222:707] Automation Instrument ran into an exception while trying to run the script. UIATargetHasGoneAWOLException
+0000 Fail: An error occurred while trying to run the script.
Instruments Trace Complete (Duration : 1.077379s; Output : /Users/Me/Projects/MyApp/Tests/UI/Traces/Automation.trace)
I am pretty sure that my JavaScript and my run script are both correct, because if I run the exact same instruments command in bash it works as expected.
Could this be a bug in Xcode?
I finally found a solution for this problem. It seems like Xcode runs the Run Script phases with limited rights. I'm not entirely sure what causes the instruments command to fail, but using su to change to your user will fix it.
su $USER -l -c <instruments command>
Obviously, this will ask you for your password, but you can't enter it when running as a script. I didn't find a way to specify the password for su; however, if you run it as root, you don't have to specify one. Luckily, sudo can accept a password via a pipe:
echo <password> | sudo -S su $USER -l -c <instruments command>
If you don't want to hardcode your password (always a bad idea), you could use some AppleScript to ask for the password.
I posted the resulting script below. Copy it to a *.sh file in your project and run that script from a Run Script build phase.
#!/bin/bash
# This script should run all (currently only one) tests, independently of
# where it is called from (terminal, or Xcode Run Script).
# REQUIREMENTS: This script has to be located in the same folder as all the
# UIAutomation tests. Additionally, a *.tracetemplate file has to be present
# in the same folder. This can be created with Instruments (Save as template...)
# The following variables have to be configured:
EXECUTABLE="TestApp.app"
# Optional. If not set, you will be prompted for the password.
#PASSWORD="password"
# Find the test folder (this script has to be located in the same folder).
ROOT="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Prepare all the required args for instruments.
TEMPLATE=`find $ROOT -name '*.tracetemplate'`
EXECUTABLE=`find ~/Library/Application\ Support/iPhone\ Simulator | grep "${EXECUTABLE}$"`
SCRIPTS=`find $ROOT -name '*.js'`
# Prepare traces folder
TRACES="${ROOT}/Traces/`date +%Y-%m-%d_%H-%M-%S`"
mkdir -p "$TRACES"
# Get the name of the user we should use to run Instruments.
# Currently this is done by getting the owner of the folder containing this script.
USERNAME=`ls -l "${ROOT}/.." | grep \`basename "$ROOT"\` | awk '{print $3}'`
# Bring simulator window to front. Depending on the localization, the name is different.
osascript -e 'try
tell application "iOS Simulator" to activate
on error
tell application "iOS-Simulator" to activate
end try'
# Prepare an Apple Script that prompts for the password.
PASS_SCRIPT="tell application \"System Events\"
activate
display dialog \"Password for user $USER:\" default answer \"\" with hidden answer
text returned of the result
end tell"
# If the password is not set directly in this script, show the password prompt window.
if [ -z "$PASSWORD" ]; then
PASSWORD=`osascript -e "$PASS_SCRIPT"`
fi
# Run all the tests.
for SCRIPT in $SCRIPTS; do
echo -e "\nRunning test script $SCRIPT"
COMMAND="instruments -t \"$TEMPLATE\" \"$EXECUTABLE\" -e UIASCRIPT \"$SCRIPT\""
COMMAND="echo '$PASSWORD' | sudo -S su $USER -l -c '$COMMAND'"
echo "$COMMAND"
eval $COMMAND > results.log
SCRIPTNAME=`basename "$SCRIPT"`
TRACENAME=`echo "$SCRIPTNAME" | sed 's_\.js$_.trace_g'`
mv *.trace "${TRACES}/${TRACENAME}"
if [ `grep " Fail: " results.log | wc -l` -gt 0 ]; then
echo "Test ${SCRIPTNAME} failed. See trace for details."
open "${TRACES}/${TRACENAME}"
exit 1
fi
done
rm results.log
It seems as though this really might be an Xcode problem; at any rate, at least one person has filed a Radar report on it. Someone in this other thread claims you can work around this exception by disconnecting any iDevices that are currently connected to the computer, but I suspect that does not apply when you're trying to run the script as an Xcode target.
I would suggest filing a Radar report as well; you may get further details on the issue from Apple, or at least convince them that many people are having the problem and they ought to figure out what's going on.
Sorry for a not-terribly-helpful answer (should have been a comment, but comments and links/formatting do not mix very well). Please update this question with anything you find out on the issue.
Note: this is not a direct answer to the question, but it is an alternative solution to the underlying problem.
While searching for in-depth information about UIAutomation, I stumbled across a framework by Square called KIF (Keep It Functional). It is an integration testing framework that allows for many of the same features as UIAutomation, but the great thing about it is that you can just write your integration tests in Objective-C.
It is very easy to set up (via CocoaPods), they have good examples too, and the best thing is that it's a breeze to integrate with a CI system like Jenkins.
Have a look at: http://github.com/square/KIF
Late to the game, but I have a solution that works for Xcode 5.1. I don't know if that's what broke the above solution or not. With the old solution I was still getting:
Failed to load Mobile Device Locator plugin, etc.
However, this works for the release version of Xcode 5.1.
echo <password> | sudo -S -u username xcrun instruments
Notice I removed the unneeded su command and added the xcrun command. The xcrun was the magic that was needed.
Here is my complete command:
echo <password> | sudo -S -u username xcrun instruments\
-w "iPhone Retina (3.5-inch) - Simulator - iOS 7.1"\
-D "${PROJECT_DIR}/TestResults/Traces/Traces.trace"\
-t "${DEVELOPER_DIR}/Instruments.app/Contents/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate"\
"${BUILT_PRODUCTS_DIR}/MyApp.app"\
-e UIARESULTSPATH "${PROJECT_DIR}/TestResults"\
-e UIASCRIPT "${PROJECT_DIR}/UITests/main.js"
By the way if you type:
instruments -s devices
you will get a list of all the supported devices you can use for the -w option.
Edit: To make this work for different people checking out the project, replace the following:
echo <password> | sudo -S -u username xcrun instruments
with
sudo -u ${USER} xcrun instruments
Since you are just doing a sudo to the same user, no password is required.
Take a look at this tutorial, which explains how to set up automated UI testing with Jenkins (it also uses Jasmine): http://shaune.com.au/automated-ui-testing-for-ios-apps-uiautomation-jasmine-jenkins/ It has an example project file, so you can download that and use it as a template. Hope this helps.
In Xcode, if you load up Organizer (Xcode -> Window -> Organizer),
then select your machine under Devices -> 'Enable Developer Mode'.
This should remove the need for prompts with instruments.

Bash script: Turn on errors?

After designing a simple shell/bash-based backup script on my Ubuntu machine and making it work, I uploaded it to my Debian server, which outputs a number of errors while executing it.
What can I do to turn on "error handling" in my Ubuntu machine to make it easier to debug?
ssh into the server
run the script by hand with either -v or -x or both
try to duplicate the user, group, and environment of the error run in your terminal window. If necessary, run the program with something like su -c 'sh -v script' otheruser
You might also want to pipe the result of the bad command, particularly if run by cron(8), into /bin/logger, perhaps something like:
sh -v -x badscript 2>&1 | /bin/logger -t badscript
and then go look at /var/log/messages.
Bash lets you turn on debugging selectively, or completely with the set command. Here is a good reference on how to debug bash scripts.
The command set -x will turn on debugging anywhere in your script. Likewise, set +x will turn it off again. This is useful if you only want to see debug output from parts of your script.
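For example, a minimal sketch that traces only the suspect part of a script (the tar command is just a placeholder):
#!/bin/bash
echo "setup steps, not traced"
set -x                                     # from here on, print each command before executing it
tar czf /tmp/backup.tar.gz "$HOME/data"    # placeholder for the command being debugged
set +x                                     # stop tracing again
echo "cleanup steps, not traced"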
Change your shebang line to include the trace option:
#!/bin/bash -x
You can also have Bash scan the file for errors without running it:
$ bash -n scriptname

How to run a command in background using ssh and detach the session

I'm currently trying to ssh into a remote machine and run a script, then leave the node with the script running. Below is my script. However, when it runs, the script is successfully run on the machine but the ssh session hangs. What's the problem?
ssh -x $username@$node 'rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
exit;'
There are some situations when you want to execute/start some scripts on a remote machine/server (which will terminate automatically) and disconnect from the server.
e.g. a script running on a box which, when executed:
takes a model and copies it to a remote server
creates a script for running a simulation with the model and pushes it to the server
starts the script on the server and disconnects
The duty of the script thus started is to run the simulation on the server and, once completed (it will take days to complete), copy the results back to the client.
I would use the following command:
ssh remoteserver 'nohup /path/to/script </dev/null >nohup.out 2>&1 &'
@CKeven, you may put all those commands in one script, push it to the remote server, and initiate it as follows:
echo '#!/bin/bash
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh
chmod u+x script.sh
rsync -azvp script.sh remotehost:/tmp
ssh remotehost '/tmp/script.sh </dev/null >nohup.out 2>&1 &'
Hope this works ;-)
Edit:
You can also use
ssh user@host 'screen -S SessionName -d -m "/path/to/executable"'
which creates a detached screen session and runs the target command within it.
What do you think about using screen for this? You could run screen via ssh to start the command (concat.sh) and then you'd be able to return to the screen session if you wanted to monitor it (could be handy, depending on what concat does).
To be more specific, try this:
ssh -t $username@$node screen -dm -S testing ./monitor/concat.sh
You should find that the prompt returns immediately, and that concat.sh is running on the remote machine. I'll explain some of the options:
ssh -t makes a TTY. screen needs this.
screen -dm makes it start in "detached" mode. This is like "background" for your purposes.
-S testing gives your screen session a name. It is optional but recommended.
Now, once you've done this, you can go to the remote machine and run this:
screen -r testing
This will attach you to the screen session which contains your program. From there you can control it, kill it, see its output, and so on. Ctrl-A, then d will detach you from the screen session. screen -ls will list all running sessions.
It could be the standard input stream. Try ssh -n ... or ssh -f ....
For me, only this worked:
screen -dmS name sh my-script.sh
This, of course, depends on screen, and lets you attach later, if you ever want stdin or stdout. Screen will terminate itself when my-script.sh ends.
Below is a more involved solution that took some effort to find, and it really works for me:
#!/usr/bin/bash
theScreenSessionName="test"
theTabNumber="1"
theStuff="date; hostname; cd /usr/local; pwd; /usr/local/bin/top"
echo "this is a test"
ssh -f user@server "/usr/local/bin/screen -x $theScreenSessionName -p $theTabNumber -X stuff \"
$theStuff
\""
It sends the $theStuff list of commands to tab number $theTabNumber of the screen session $theScreenSessionName, which must have been created beforehand on 'server' on behalf of 'user'.
Please be aware of the trailing whitespace after
-X stuff \"
which is sent to work around a glitch in the 'stuff' option. The whitespace and the $theStuff line that follows are each terminated by 'Enter' (^M) keystrokes. Don't miss them!
The "this is a test" message is echoed in the initial terminal, and $theStuff commands are really executed inside the mentioned screen/tab.
