Running two C++ files in a Bash script - bash

I wrote a bash script to run both the client and the server.
The code is written in C++, and the client and server are executables.
port=8008
pack_rate=16
echo "Starting server"
./server -p $port -n 512 -e 0.001
echo "Starting client"
./client -p $port -n 512 -l 16 -s localhost -r $pack_rate -d
echo "end"
In the above case, the client will send data packets to the server and the server will process them.
So, both the client and server should run at the same time.
I tried to run the script, but as expected only
"Starting server"
is printed. So the server is running, and it will not terminate until it receives 512 packets from the client. But the client cannot start until the server exits, because the script runs the commands sequentially.
So, is there any way I can run both processes simultaneously using a single bash script?

Add & at the end of the ./server line; it will run the process in the background, and the script will keep executing the rest of its commands.

You need to add an &:
./server -p $port -n 512 -e 0.001 &
Thus, the script will not wait for the server program to finish before continuing.
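Putting it together, the asker's script would then look like this (a sketch; the final wait is optional but keeps the script alive until the backgrounded server has also exited):
#!/bin/bash
port=8008
pack_rate=16
echo "Starting server"
./server -p $port -n 512 -e 0.001 &    # & puts the server in the background
echo "Starting client"
./client -p $port -n 512 -l 16 -s localhost -r $pack_rate -d
echo "end"
wait    # block until the background server process exits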

Related

Using expect on stderr (sshuttle as an example)

There is this program called sshuttle that connects to a server and creates a tunnel.
I wish to create a bash function that sequentially:
opens a tunnel to a remote server (sshuttle -r myhost 0/0),
runs one arbitrary command line,
kill -s TERM <pidOfTheAboveTunnel>.
A basic idea (that works, but the 5-second delay is a problem) is something like sshuttle -r myhost 0/0 & sleep 5; mycommand; kill -s TERM $(pgrep sshuttle)
Could expect be used to wait for the string "c : Connected to server." that sshuttle prints to stderr here? My attempts as a newbie were met with nothing but failure, and the man page is quite impressive.
When you use expect to control another program, it connects to that program through a pseudo-terminal (pty), so expect sees the same output from the program as you would on a terminal; in particular, there is no distinction between stdout and stderr. Assuming that your mycommand is to be executed on the local machine, you could use something like this as an expect (not bash) script:
#!/usr/bin/expect
# Start sshuttle under expect's control (through a pty)
spawn sshuttle -r myhost 0/0
# Wait for the tunnel to report that it is up
expect "Connected to server."
# Run the arbitrary command on the local machine
exec mycommand
# Terminate the spawned sshuttle process
exec kill [exp_pid]
close
The exec kill may not be needed if sshuttle exits when its stdin is closed, which will happen on the next line.
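To use it from bash, save the script under some name, say tunnel.exp (the name is an arbitrary choice here), make it executable, and call it like any other command:
chmod +x tunnel.exp
./tunnel.exp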

Started JBOSS Service via Shell Script, but hanging

I managed to start the JBoss service through a shell script running locally on the server.
if [ $? -eq 0 ]; then
{
sh /jboss-6.1.0.Final/bin/run.sh -c server1 -g app1 -u x.x.x.x -b x.x.x.x -Djboss.messaging.ServerPeerID=1 &
}; fi
My problem is that I am able to start the service and the application works, but once the script finishes running, it does not return to the shell prompt ($) and hangs there forever. When I run the same command directly (without the script), after the command finishes, hitting the Enter key gets my $ prompt back and I can do other work.
Can someone tell me what I am missing in my code so that I can get back to my $ prompt?
Remove the & from the shell script. Also remove the {} from the if block; it is not needed.
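With both changes applied, the block would look like this (a sketch of the answer's suggestion, keeping the original command):
if [ $? -eq 0 ]; then
    sh /jboss-6.1.0.Final/bin/run.sh -c server1 -g app1 -u x.x.x.x -b x.x.x.x -Djboss.messaging.ServerPeerID=1
fi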

Terminate Curl Telnet session in bash script

I am working on automating telnet connectivity checks from various hosts, running the script from a specified host with a curl telnet call.
However, as you may be aware, once telnet reports a connected status for a host, an escape character has to be sent to terminate the telnet session. In the bash script I need to terminate the session as soon as we get a Connected/Refused response from the target endpoint, or after the telnet session has been running for a few seconds.
Below is the script, where telnet connectivity is checked through a curl call. Is there any way in curl to terminate the telnet session as soon as we get the response, or to terminate the session after some milliseconds/seconds?
Code:
#!/bin/bash
HOSTS='LPDOSPUT00100 LPDOSPUT00101'
for S in ${HOSTS}
do
echo "Checking Connectivity From Host : ${S}"
echo ""
ssh -q apigee@${S} "curl -v telnet://${TargetEndPoint}:${Port}"
done
You could run it under the timeout command to make it terminate after a certain amount of time.
ssh -q apigee@"$S" "timeout 5s curl -v telnet://${TargetEndPoint}:${Port}"
would terminate it after 5 seconds if it hadn't already exited on its own.
Perhaps curl isn't the right tool for this job though. Have you considered using nc instead?
ssh -q apigee#"$S" "nc -z ${TargetEndPoint} $Port"
will likely do what you want.
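For example, the loop from the question could report the result per host (a sketch; -w 3 asks nc for a 3-second connection timeout, though flag support varies between nc implementations):
#!/bin/bash
HOSTS='LPDOSPUT00100 LPDOSPUT00101'
for S in ${HOSTS}
do
    if ssh -q apigee@"${S}" "nc -z -w 3 ${TargetEndPoint} ${Port}"
    then
        echo "${S}: ${TargetEndPoint}:${Port} is reachable"
    else
        echo "${S}: ${TargetEndPoint}:${Port} is not reachable"
    fi
done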

run Terminal script that executes one command and while that command is running, opens a new tab and runs another command

At the moment I am making a Java application. To test it I have to run a server and then a client.
So I want to run this using a bash script:
#!/bin/bash
clear
gradle runServer
osascript -e 'tell application "Terminal" to activate' -e 'tell application "System Events" to tell process "Terminal" to keystroke "t" using command down'
gradle runClient
Problem: the server, once running, does not end until you close the game, so the next two commands will not execute. How can I run them concurrently/simultaneously?
Run the server in the background, then kill it when the script is done.
Here’s an example with a simple HTTP server and client.
#!/bin/bash
date > foo.txt
python -m SimpleHTTPServer 1234 &
SERVER_PID="${!}"
# Automatically terminate server on script exit
trap 'kill "${SERVER_PID}"' 0 1 2 3 15
# Wait for server to start
while ! netstat -an -f inet | grep -q '\.1234 '; do
sleep 0.05
done
# Run client
curl -s http://localhost:1234/foo.txt
Running in another tab gets a lot trickier; this interleaves the output from the client and the server.
$ ./doit.sh
127.0.0.1 - - [19/Sep/2014 23:00:32] "GET /foo.txt HTTP/1.1" 200 -
Fri 19 Sep 2014 23:00:32 MDT
Note the log output from the HTTP server, and the output from the HTTP client. The server is automatically killed afterwards.
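Adapted to the question's gradle tasks, the same pattern would look roughly like this (a sketch; the sleep is a crude stand-in for a real readiness check such as polling the server's port, since runServer's startup signal isn't specified):
#!/bin/bash
clear
gradle runServer &
SERVER_PID="${!}"
# Kill the server automatically when the script exits
trap 'kill "${SERVER_PID}"' EXIT
# Crude stand-in for a proper readiness check
sleep 5
gradle runClient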

Send command to a background process

I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $# vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
The first script is run at computer startup. It starts the server and configures it to receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# Keep the FIFO open for writing: at least one process must hold it open
# for writing, otherwise the server would receive an EOF.
cat > /tmp/srv-input &
# The PID of this `cat` command is saved in the /tmp/srv-input-cat-pid file
# for a later kill.
#
# To send an EOF to your server, kill the `cat > /tmp/srv-input` process
# whose PID has been saved in the /tmp/srv-input-cat-pid file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace `./hlds_run -console -game czero +port 27015` with your own application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
The second script is just a wrapper that lets me easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat $half_life_folder/$half_life_pid_tail_file_name)"
# If a tail of the server log is not already running, start one
if ps -p $half_life_pid_tail > /dev/null
then
    echo "$half_life_pid_tail is running"
else
    echo "Starting the tailing..."
    tail -2f $half_life_folder/my_logs.txt &
    echo $! > $half_life_folder/$half_life_pid_tail_file_name
fi
# `$*` expands to all the script's arguments, so `./send.sh mp_timelimit 30`
# writes `mp_timelimit 30` into the FIFO ($# would only send the argument count)
echo "$*" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do this in the terminal:
./send.sh mp_timelimit 30
The script also keeps tailing the server on the current terminal: every time I send a command, it checks whether a tail process is already running in the background. If not, it starts one, so whenever the server writes output I can see it on the terminal I used to send the command, just as with applications you run with the & operator appended.
You could also keep another terminal open just to watch the server console. To do that, use the tail command with the -f flag to follow the server's console output:
tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the inter-process communication (IPC) methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
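For illustration, a minimal sketch with socat (assuming socat is installed; the socket path /tmp/myprog.sock and the echo handler are made up for the example):
# Listener side: treat each line arriving on the socket as a command
socat UNIX-LISTEN:/tmp/myprog.sock,fork - | while read -r cmd
do
    echo "received: $cmd"
done

# Sender side, from another shell
echo "command option1 option2" | socat - UNIX-CONNECT:/tmp/myprog.sock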
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
You can use bash's coproc command (available only in bash 4.0+); it's like ksh's |&.
Check this page for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
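A minimal sketch of the idea, using bc as a stand-in for a long-running process you want to talk to:
#!/bin/bash
# Start bc as a coprocess; bash exposes its stdin/stdout as ${BC[1]}/${BC[0]}
coproc BC { bc -l; }
echo "2 + 3" >&"${BC[1]}"      # write a command to the coprocess
read -r result <&"${BC[0]}"    # read its reply
echo "bc says: $result"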
You can't send new arguments to a running process.
But if you are implementing this process yourself, or it's a process that can take commands from a pipe, then the other answers here would help.
