Cannot reach port of gcloud appengine devserver - bash

For testing, I try to run the gcloud devserver inside Docker with this command:
sudo /usr/local/gcloud/google-cloud-sdk/bin/java_dev_appserver.sh --disable_update_check --port=8888 --help /app/target/qdacity-war/ 2>&1 | sudo tee /app/logs/devserver.log > /dev/null &
To check if the devserver has started successfully, I use this script:
#!/bin/bash
# This script waits until the port 8888 is open.
SERVER=localhost
PORT=8888
TIMEOUT=180
TIME_INTERVAL=2
PORT_OPEN=1
PORT_CLOSED=0
time=0
isPortOpen=0
while [ $time -lt $TIMEOUT ] && [ $isPortOpen -eq $PORT_CLOSED ];
do
  # Connect to the port
  (echo > /dev/tcp/$SERVER/$PORT) >/dev/null 2>&1
  if [ $? -ne 0 ]; then
    isPortOpen=$PORT_CLOSED
  else
    isPortOpen=$PORT_OPEN
  fi
  time=$(($time+$TIME_INTERVAL))
  sleep $TIME_INTERVAL
done
if [ $isPortOpen -eq $PORT_OPEN ]; then
  echo "Port is open after ${time} seconds."
  # Give the server more time to properly start
  sleep 10
else
  echo "Reached the timeout (${TIMEOUT} seconds). The port ${SERVER}:${PORT} is not available."
  exit 1
fi
After running all the tests, I just got back:
Reached the timeout (180 seconds). The port localhost:8888 is not available.
I couldn't find out if there were any problems starting the devserver or querying the port.
Does anyone have an idea or solution?
Thanks!

By default, the dev server only accepts localhost/loopback traffic, so you're unable to access it remotely (for example, from outside the container).
Please try adding --address=0.0.0.0 (link) to your java_dev_appserver.sh command.
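Applied to the command from your question, that would look something like this (a sketch: I've added --address=0.0.0.0 and dropped the --help flag, which typically makes the launcher print its usage text and exit instead of serving):
sudo /usr/local/gcloud/google-cloud-sdk/bin/java_dev_appserver.sh \
  --disable_update_check \
  --address=0.0.0.0 \
  --port=8888 \
  /app/target/qdacity-war/ 2>&1 | sudo tee /app/logs/devserver.log > /dev/null &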
Example
Used a variant of Google's HelloWorld sample.
Ran this with mvn appengine:run (to confirm it works and build the WAR).
Then /path/to/bin/java_dev_appserver.sh ./target/hellofreddie-0.1 (to confirm it works with the local development server).
Then used Google's Cloud SDK container image (link), mounted the previously generated WAR directory into it, and ran the server on :9999:
docker run \
--interactive \
--tty \
--publish=9999:9999 \
--volume=${PWD}/target:/target \
google/cloud-sdk \
/usr/lib/google-cloud-sdk/bin/java_dev_appserver.sh \
--address=0.0.0.0 \
--port=9999 \
./target/hellofreddie-0.1
Am able to curl the endpoint:
curl \
--silent \
--location \
--write-out "%{http_code}" \
--output /dev/null \
localhost:9999
returns 200
And running your script, adjusted with PORT=9999, returns:
Port is open after 2 seconds.

Related

Mark Gitlab CI stage as passed based on condition in job log

I have a bash script that executes a Dataflow job like so:
./deploy.sh
python3 main.py \
--runner DataflowRunner \
--region us-west1 \
--job_name name \
--project project \
--autoscaling_algorithm THROUGHPUT_BASED \
--max_num_workers 10 \
--environment prod \
--staging_location gs staging loc \
--temp_location temp loc \
--setup_file ./setup.py \
--subnetwork subnetwork \
--experiments use_network_tags=internal-ssh-server
So I use gitlab ci to run this
.gitlab-ci.yml
Deploy Prod:
  stage: Deploy Prod
  environment: production
  script:
    - *setup-project
    - pip3 install --upgrade pip;
    - pip install -r requirements.txt;
    - chmod +x deploy.sh
    - ./deploy.sh
  only:
    - master
So now my code runs and logs in the gitlab pipeline AND in the logs viewer in dataflow. What I want to be able to do is that once gitlab sees JOB_STATE_RUNNING, it marks the pipeline as passed and stops outputting logs to gitlab. Maybe there's a way to do this in the bash script? Or can it be done in gitlab ci?
GitLab doesn't have this capability as a feature, so your only option is to script the solution.
Something like this should work to monitor the output and exit once that text is encountered.
python3 myscript.py > logfile.log &
( tail -f -n0 logfile.log & ) | grep -q "JOB_STATE_RUNNING"
echo "Job is running. We're done here"
exit 0
reference: https://superuser.com/a/900134/654263
You'd need to worry about some other things in the script, but that's the basic idea. GitLab unfortunately has no success condition based on trace output.
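For instance, here is a sketch that builds on the same idea but also fails the stage if the marker never shows up (the 10-minute timeout, the log redirection, and killing the deploy process are my own choices, not from the question):
./deploy.sh > logfile.log 2>&1 &
DEPLOY_PID=$!

# -n +1 reads the log from the start, so a marker written before tail attaches is not missed.
if timeout 600 bash -c '( tail -f -n +1 logfile.log & ) | grep -q "JOB_STATE_RUNNING"'; then
  echo "Job is running. We're done here"
else
  echo "Never saw JOB_STATE_RUNNING within 10 minutes, failing the stage" >&2
  kill "$DEPLOY_PID" 2>/dev/null
  exit 1
fi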
A little late, but I hope this helps someone. The way I solved it was by creating two scripts.
build.sh
#!/bin/bash
# This helps submit the beam dataflow job using nohup and parse the results for job submission
nohup stdbuf -oL bash ~/submit_beam_job.sh &> ~/output.log &
# Wait for log file to appear before checking on it
sleep 5
# Count to make sure the while loop is not stuck
cnt=1
while ! grep -q "JOB_STATE_RUNNING" ~/output.log; do
  echo "Job has not started yet, waiting for it to start"
  cnt=$((cnt+1))
  if [[ $cnt -gt 5 ]]; then
    echo "Job submission taking too long please check!"
    exit 1
  fi
  # For other errors in the log file, check the keywords
  if grep -q "Errno" ~/output.log || grep -q "Error" ~/output.log; then
    echo "Error submitting Dataflow job, please check!!"
    exit 1
  fi
  sleep 30
done
The submit_beam_job.sh script looks like this:
#!/bin/bash
# This submits individual beam jobs for each topic
export PYTHONIOENCODING=utf8
# Beam pipeline for carlistingprimaryimage table
python3 ~/dataflow.py \
--input_subscription=projects/data-warehouse/subscriptions/cars \
--window_size=2 \
--num_shards=2 \
--runner=DataflowRunner \
--temp_location=gs://bucket/beam-python/temp/ \
--staging_location=gs://bucket/beam-python/binaries/ \
--max_num_workers=5 \
--project=data-warehouse \
--region=us-central1 \
--gcs_project=data-warehouse \
--setup_file=~/setup.py

Running a bash script after the kafka-connect docker is up and running

I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
# script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
  echo "In"
  echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
  echo "Going to sleep for $sleep_second seconds"
  # sleep $sleep_second
  echo "Finished sleeping"
  # ((sleep_second_counter+=$sleep_second))
  echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I run the container and use docker logs to see what's happening. I expect the script to run and wait until Kafka Connect has started, but after a few seconds the script (or something, I don't know what) hangs and I don't see any console prints anymore.
I am a bit lost as to what is wrong, so I need some guidance on what I am missing, or whether this is not the correct way to do it.
What I am trying to do
I want logic that waits for Kafka Connect to start and then runs this curl command:
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use the docker-compose way of doing it, since there are places where I have to use docker run.
The problem here is that your ENTRYPOINT runs when the container starts and prevents the image's default CMD (which starts the Kafka Connect server) from running. The script loops waiting for a server that is never started, so it loops forever.
You need to do one of the following:
start the Kafka Connect server in your ENTRYPOINT and your script in CMD, or run your script outside the container ....
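A minimal sketch of a variant that folds both into the entrypoint script, so your Dockerfile stays as it is. I'm assuming the base image normally starts Kafka Connect via /etc/confluent/docker/run; verify that with docker inspect on the image, since it's an assumption, not something from your Dockerfile:
#!/bin/bash
# Start the server the base image would have started (assumed path; check
# with: docker inspect confluentinc/cp-kafka-connect:5.3.1).
/etc/confluent/docker/run &

# Wait until the REST API answers with 200 before configuring connectors.
until [[ $(curl -s -o /dev/null -w '%{http_code}' http://localhost:8083/connectors) -eq 200 ]]; do
  echo "Waiting for Kafka Connect to start..."
  sleep 5
done

# Your payload, unchanged from the question.
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors

# Keep the container alive for as long as the server process runs.
wait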

The vehicle lifecycle application exits as we run it on a server which does not have a browser. Any solution?

I am using a 16.04 Ubuntu server from DigitalOcean.
I am trying to run the vehicle example from the composer-sample-applications git project. Once you run build.sh and then execute install.sh, it does everything: downloading the Docker images and deploying the BNA. Everything runs fine.
So once the network is up, it starts the UI applications as follows:
# Start the VDA application.
docker run \
-d \
--network composer_default \
--name vda \
-e COMPOSER_BASE_URL=http://rest:3000 \
-e NODE_RED_BASE_URL=ws://node-red:1880 \
-p 6001:6001 \
hyperledger/vehicle-lifecycle-vda
2e0370a1d3694e6504d16fdf7b542f36bfb8c9a9e37f217ce35625968e772b52
# Start the manufacturing application.
docker run \
-d \
--network composer_default \
--name manufacturing \
-e COMPOSER_BASE_URL=http://rest:3000 \
-e NODE_RED_BASE_URL=ws://node-red:1880 \
-p 6002:6001 \
hyperledger/vehicle-lifecycle-manufacturing
09f641244ad91e410640450c55e8997fc0b60464c649180721465a63efbeb445
# Start the car-builder application.
docker run \
-d \
--network composer_default \
--name car-builder \
-e NODE_RED_BASE_URL=ws://node-red:1880 \
-p 8100:8100 \
hyperledger/vehicle-lifecycle-car-builder
4ef35265ee0f507ddfabcdc36ed6774fa8e0137808f7fd4b47c1a36ce74c4e10
but after this
# Open the playground in a web browser.
URLS="http://localhost:8080 http://localhost:3000/explorer/ http://localhost:1880 http://localhost:6001 http://localhost:6002 http://localhost:8100"
case "$(uname)" in
"Darwin") open ${URLS}
;;
"Linux") if [ -n "$BROWSER" ] ; then
$BROWSER http://localhost:8080 http://localhost:3000/explorer/ http://localhost:1880 http://localhost:6001 http://localhost:6002 http://localhost:8100
elif which x-www-browser > /dev/null ; then
nohup x-www-browser ${URLS} < /dev/null > /dev/null 2>&1 &
elif which xdg-open > /dev/null ; then
for URL in ${URLS} ; do
xdg-open ${URL}
done
elif which gnome-open > /dev/null ; then
gnome-open http://localhost:8080 http://localhost:3000/explorer/ http://localhost:1880 http://localhost:6001 http://localhost:6002 http://localhost:8100
#elif other types blah blah
else
echo "Could not detect web browser to use - please launch Composer Playground URL using your chosen browser ie: <browser executable name> http://localhost:8080 or set your BROWSER variable to the browser launcher in your PATH"
fi
;;
*) echo "Playground not launched - this OS is currently not supported "
;;
esac
uname
Could not detect web browser to use - please launch Composer Playground URL using your chosen browser ie: <browser executable name> http://localhost:8080 or set your BROWSER variable to the browser launcher in your PATH
# Exit; this is required as the payload immediately follows.
exit 0
So the server does not have a GUI or browser. I think the script (install.sh) is not able to load all of the payload that is required. Can this be fixed somehow?
All that section of the script does is try to open the various pages in a web browser. It should not have any effect on the Docker containers that were launched.
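On a headless server you can simply verify the applications yourself instead, for example with curl against the same URLs the script would have opened (a sketch, using the ports from your output):
for URL in http://localhost:8080 http://localhost:3000/explorer/ http://localhost:1880 \
           http://localhost:6001 http://localhost:6002 http://localhost:8100; do
  # Print each URL with the HTTP status code it returns.
  echo "$URL => $(curl -s -o /dev/null -w '%{http_code}' "$URL")"
done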

How to check whether a docker service is already running on UCP using shell script

I want to check whether a Docker service is running or not. If it's running, I want to remove that service and create a new one. I am doing this task with a shell script. Here is the snippet of my shell script where I am facing this error:
Error response from daemon: service data-mapper-service not found
if [[ "$(docker service inspect ${DOCKER_SERVICE_NAME} 2> /dev/null)" != "" ]]; then
docker service rm ${DOCKER_SERVICE_NAME}
else echo "service doesn't exist or may have been removed manually"
fi
docker service create \
--name ${DOCKER_SERVICE_NAME} \
--network ${OVERLAY_NETWORK} \
--reserve-memory ${10} \
--constraint node.labels.run_images==yes \
--mode global \
--with-registry-auth \
--restart-condition any \
-p ${EXTERNAL_PORT}:${INTERNAL_PORT} \
-e "SPRING_PROFILES_ACTIVE="${SPRING_PROFILE} \
-e "JAVA_OPTS: -Xms256m -Xmx512m" \
${DTR_HOST_NAME}/${DTR_ORG_NAME}/${DTR_REP_NAME}:${BUILD_NUMBER}
I am getting the error on the if statement line.
If the service is running and I trigger this shell script, everything runs fine. But if the service is not running and I trigger this shell script, I am facing the above-mentioned error.
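For what it's worth, a sketch of a check that relies on the command's exit code rather than its output may behave better here. My reading of the Docker CLI (worth verifying on your UCP version) is that docker service inspect exits non-zero when the service doesn't exist, while its stdout can still be a non-empty "[]", which would send your script down the rm branch:
if docker service inspect "${DOCKER_SERVICE_NAME}" > /dev/null 2>&1; then
  docker service rm "${DOCKER_SERVICE_NAME}"
else
  echo "service doesn't exist or may have been removed manually"
fi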

How can I know SSH is disconnected and retry with a bash script

I'm using reverse SSH for connecting to a remote client. An operator runs the reverse SSH once and leaves the client system.
How can I write a bash script so that, when the reverse SSH is disconnected from the server, it retries connecting to the server (via SSH)?
Use autossh. Autossh "automatically restart[s] SSH sessions and tunnels"
sudo apt-get install autossh
I use autossh to to keep open reverse tunnel that I depend on. It works very well, even with long periods of lost connection.
Here is the script I use to create the tunnel:
#!/bin/bash
AUTOSSH_GATETIME=0
export AUTOSSH_GATETIME
autossh -f -N -R 8022:localhost:22 username@host -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
I execute this script at boot with this cronjob:
@reboot /home/scripts/./persistent-tunnel.sh
If you simply want to retry a command until it succeeds, you can use this pattern:
while ! ssh [...]
do
echo "Command failed, retrying..." >&2
done
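If you use that pattern for a tunnel, you probably want a small delay between attempts so a hard failure doesn't spin in a tight loop, for example (the host and forwarding flags here are just placeholders):
until ssh -N -R 8022:localhost:22 user@server; do
  # ssh exited (connection lost or refused); wait a bit before retrying.
  echo "ssh exited ($?), retrying in 5 seconds..." >&2
  sleep 5
done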
I have a slightly different method.
My method always tries to reconnect you if you have a dirty disconnection: '~.' or 'Connection closed by remote host.'
But if you disconnect with 'Ctrl+D' or with 'exit' it just disconnects and shows you some info about the connections.
#!/bin/bash
if [ -z "$1" ]
then
  echo '''
Please also provide ssh connection details.
'''
  exit 1
fi
retries=0
repeat=true
today=$(date)
while "$repeat"
do
  ((retries+=1)) &&
    echo "Try number $retries..." &&
    today=$(date) &&
    ssh "$@" &&
    repeat=false
  sleep 5
done
echo """
Disconnected sshx after a successful login.
Total number of tries = $retries
Connected at:
$today
"""
You might want to take a look at the ssh options ServerAliveInterval, ServerAliveCountMax and TCPKeepAlive, because sometimes your line dies without making this obvious. Let me demonstrate:
#!/bin/sh
while true; do
  ssh -T user@host \
    -o IdentityFile=~/.ssh/tunnel \
    -o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
    pkill -f "^sshd:\ user\ \ \ \ $" # needs to be edited for nearly every case
  sleep 2
  ssh -T -N user@host \
    -o IdentityFile=~/.ssh/tunnel \
    -o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
    -o BatchMode=yes \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveCountMax=1 \
    -o ServerAliveInterval=60 \
    -o LocalForward=127.0.0.1:2501=127.0.0.1:25 \
    -o RemoteForward=127.0.0.1:2501=127.0.0.1:25
  sleep 60
done
You can use netstat -ntp | grep ":22" or ss -ntp | grep ":22" to see established connections to ssh port, then use grep to filter the ip address you're looking for. If you don't find a connection then reconnect the tunnel.
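A sketch of that check (the address 203.0.113.7 stands in for your server's IP, and the forwarding flags are just an example):
# Reconnect the tunnel if there is no established SSH session to the server.
if ! ss -ntp 2>/dev/null | grep ESTAB | grep -q "203.0.113.7:22"; then
  ssh -fN -R 8022:localhost:22 user@203.0.113.7
fi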
Use autossh if it works on your version of Linux. It did not on mine as it was an outdated Linux distribution for a custom NAS box.
The alternative is a simple bash script in crontab like this:
maintain_reverse_ssh_tunnel.sh
if ! netstat -planet | grep myserver_ip_or_name | grep ESTABLISHED > /dev/null; then
  echo "REVERSE SSH DOWN - Restarting the tunnels"
  ssh -fN -R 32999:localhost:22 -R 28080:localhost:80 myusername@myserver_ip_or_name
fi
Replace myusername and myserver_ip_or_name with those of your user and server.
Then add an entry to crontab by typing crontab -e and adding the following line:
1 * * * * /path_to_my_script/maintain_reverse_ssh_tunnel.sh
Make sure to have the execute permissions on the script:
chmod 755 maintain_reverse_ssh_tunnel.sh
