How to tail logs from an AppFog remote server using a shell script poll?

Testing out AppFog, and I've run into their tailing problem: they don't offer tailing of your server's logs. That's a big sticking point for me, as I prefer to work on a remote dev server, and without access to the logs it's very difficult to debug. Same with staging.
They do offer the following to pull the logs down:
af logs my-app-name --all
Which will dump whatever's in the log to your terminal. Not exactly elegant, but at least the info is there.
But it's not continuous. And having to type af logs my-app-name --all a million times will make me go out of my mind, especially while I'm trying to hunt down a bug.
So I thought I'd write a shell script that will fire the af logs command against my app server for me, and I came up with this:
#!/bin/bash
# Poll the AppFog logs for the given app every three seconds
while true; do
    af logs "$1"
    sleep 3
done
Where $1 is the name of my app. So I'd use it like so:
af-tail my-app-name
And then every three seconds I'd get a log dump from my app server. However, I keep getting the entire log each time; what I'd really like is for only the missing entries to be appended to the existing 'stream' in my terminal, but I'm not sure how to go about that. Any help?

Maybe this could help. I use it to monitor remote logs on my local machine.
https://gist.github.com/iolloyd/da60ef316643d7894bdf
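In case the gist moves, here's a minimal sketch of the same polling-diff idea, assuming af logs dumps the full log each time and the log is append-only (so comparing line counts is enough to find the new entries):

#!/bin/bash
# af-tail: poll "af logs" and print only the lines added since the last poll.
# Assumes the log is append-only, so a line-count diff is sufficient.
app="$1"
prev=$(mktemp)
trap 'rm -f "$prev"' EXIT
while true; do
    curr=$(mktemp)
    af logs "$app" --all > "$curr" 2>/dev/null
    # Print only the lines beyond what was shown last time
    new=$(( $(wc -l < "$curr") - $(wc -l < "$prev") ))
    if [ "$new" -gt 0 ]; then
        tail -n "$new" "$curr"
    fi
    mv "$curr" "$prev"
    sleep 3
done

If the platform ever truncates or rotates the log, the count goes negative and nothing is printed until it catches up, so treat this as a sketch rather than a robust tail.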

Related

Is it possible to send google clouds log stream to a local file from command line

Basically, when I run the command gcloud app logs tail >> logs.txt I would expect it to store the streamed logs in the output file.
I am guessing that since the command streams asynchronously, it ends up storing nothing and the process just stops.
If so, is there a way to store the logs without the need of a script?
You're not seeing anything because you are redirecting stdout, while gcloud, like many CLI tools, prints its streaming output to stderr. This seems to work fine:
gcloud app logs tail >> logs.txt 2>&1
In any case, except for something temporary (grabbing logs for a couple of minutes), you should be using sinks for this rather than a homebrew solution: https://cloud.google.com/logging/docs/export/configure_export_v2. Logs will be exported to GCS every hour.
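For reference, a sink exporting to a Cloud Storage bucket can be created from the command line; the sink name, bucket, and filter below are placeholders:

gcloud logging sinks create my-app-sink \
    storage.googleapis.com/my-log-bucket \
    --log-filter='resource.type="gae_app"'

You will also need to grant the sink's service account write access to the bucket, as described in the linked docs.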

Re-attempt docker-compose logs -f when services are down?

I commonly find myself making docker-compose environments for development purposes.
However, an annoying thing when I start up my services with docker-compose up -d is having to re-run docker-compose logs -f every time all the services come down.
I'd like a script that attempts to run docker-compose logs -f and, once it succeeds, behaves as it normally would; then, once the services come down, keeps retrying until they're up again.
Does this make sense? I've tried using watch, but that doesn't behave how I'd like, and a loop with a sleep in it doesn't produce useful results either.
If you want to run a command in a loop, just:
while sleep 1; do docker-compose logs -f; done
It will re-run docker-compose logs -f one second after it exits, i.e. whenever the services come down.
watch clears the screen each period, so you will not see all the messages then.
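One refinement worth noting: each re-attach replays the whole log history by default, which gets noisy. docker-compose logs supports a --tail option to limit that, so a sketch like this only shows recent lines on each reconnect:

while sleep 1; do docker-compose logs -f --tail=10; done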

How can I run a shell script when booting up?

I am configuring an app at work which is hosted on an Amazon Web Services server.
To get the app running, you have to run a shell script called "Start.sh".
I want this to be done automatically after the server boots up.
I have already tried the following bash script in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, they are executed once per instance ID.) This is done because the normal use-case is to install software, which should only be done once.
If you wish something to run on every boot, you could probably use the cloud-init once-per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
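For example, on Amazon Linux the per-boot directory is normally /var/lib/cloud/scripts/per-boot/ (path assumed from the default cloud-init layout), so you could drop a small wrapper there:

#!/bin/bash
# /var/lib/cloud/scripts/per-boot/start-app.sh
# cloud-init runs every executable script in per-boot/ on each boot.
cd /home/ec2-user/app_name/ || exit 1
sh Start.sh

Remember to chmod +x the script, or cloud-init will skip it.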

Shell timeout does not stop cloudfoundry app-nozzle, there is still new output

I would like to run the Cloud Foundry cf app-nozzle command for 10 seconds to gather some metrics about an application. Even though I stop the command with timeout, new output still appears in the output file afterwards. I have no idea what is happening.
My command (that would be run inside a script):
timeout 10s cf app-nozzle my_app --filter ContainerMetric > CF_nozzle.txt
It looks like it stopped and exited in Git Bash, and I can run other scripts, but even minutes later new lines keep appearing in the file. I closed the whole window, and it is still ongoing.
Update: I tried it directly on the command line, and after the timeout it still emits data, even to the terminal.
It seems this might be a bug in Git Bash on Windows. The same command works fine in an Ubuntu terminal.
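If the suspicion is right that timeout's signal never reaches the native Windows child process, one workaround sketch is to background the command and kill it by PID yourself (behaviour in Git Bash may still vary, since MSYS and native Windows process IDs differ):

cf app-nozzle my_app --filter ContainerMetric > CF_nozzle.txt &
nozzle_pid=$!
sleep 10
kill "$nozzle_pid" 2>/dev/null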

glassfish dies and does not start again

One of our application servers (GlassFish v3.0.1) keeps crashing for no apparent reason. Sometimes I am away from the Internet, so I cannot start it up again. Therefore, I wrote a simple bash script that waits 10 minutes and then runs asadmin. It looks like this:
#!/bin/bash
# Try to (re)start the GlassFish domain every 10 minutes
while true; do
    sleep 600
    sudo /home/ismetb/glassfishv3.0.1/glassfish/bin/asadmin start-domain
done
This seems to work fine however I have a couple of problems:
If I terminate the bash script (by pressing Ctrl+Z), the Java process (GlassFish) dies, and the start-domain and stop-domain commands stop working entirely. That means I can neither stop GlassFish nor access it. I don't know whether anybody else has experienced this problem. If the process dies, the only thing I can do is look up the ID of the Java process and kill it from the terminal, which is not desirable at all. Any idea why the Java process dies when I quit the script?
What I want to add to my script is a check of the port GlassFish is using. If the port is occupied, maybe I can assume that GlassFish is not down! (However, the port (8080 by default) might still appear to be in use even though GlassFish is dead; I am not sure.) If it is free, then with a simple bit of code I can get the ID of the Java process and kill it. Then the start-domain command will work. Any ideas or directions on how I can do this?
You can use a cron job instead. To install a cron job for root, enter
sudo crontab -e
and add this line
*/10 * * * * /home/ismetb/glassfishv3.0.1/glassfish/bin/asadmin start-domain
This will run asadmin every ten minutes.
If you're not comfortable with the command line, you might also try gnome-schedule, but I have no experience with that.
For your second problem, you can use curl or wget to access GlassFish. Try to fetch some URL, or even the administration interface, and if you don't get a response, assume GlassFish is down.
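For example, a small health-check script along those lines (the URL and port are assumptions based on the default GlassFish setup) could be run from the same cron table:

#!/bin/bash
# check-glassfish.sh: restart the domain if the app stops answering HTTP.
# -s: silent, -f: fail on HTTP errors, -m 5: give up after 5 seconds
if ! curl -sf -m 5 http://localhost:8080/ > /dev/null; then
    sudo /home/ismetb/glassfishv3.0.1/glassfish/bin/asadmin start-domain
fi

Scheduled with */10 * * * * /home/ismetb/check-glassfish.sh, this only restarts GlassFish when it actually stops responding, instead of blindly running start-domain every ten minutes.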
