Monitoring nginx on docker using stackdriver (gcloud hosted) - google-cloud-logging

In order to monitor nginx (as an application) using Stackdriver, is it sufficient to simply direct logging to the gcplogs driver, or does one have to install the monitoring agent as well?

To monitor nginx status page metrics:
Your server needs to be a Google Cloud instance or an AWS instance.
Yes, you need to install the monitoring agent on your server/instance/container; a minimal install sketch follows this list.
You need to add the nginx plugin configuration.
You have to modify your nginx configuration to allow the Stackdriver agent to access the status page.
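A minimal sketch of the agent install on a Debian/Ubuntu instance, based on the install script Google documented for the legacy Stackdriver agent (verify the URL against the current docs before running):
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh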
Nginx:
# Enable nginx-status for better monitoring with Stackdriver module.
location = /nginx-status {
    # Turn on nginx stats
    stub_status on;
    # I do not need logs for stats
    access_log off;
    # Security: Only allow access from localhost
    allow 127.0.0.1;
    # Send rest of the world to /dev/null
    deny all;
}
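After reloading nginx you can verify the status page locally; stub_status returns a short plain-text report with the active connection count and the accepts/handled/requests counters:
sudo nginx -s reload
curl http://127.0.0.1/nginx-status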
Example Nginx plugin Configuration:
https://raw.githubusercontent.com/Stackdriver/stackdriver-agent-service-configs/master/etc/collectd.d/nginx.conf
You can modify the stackdriver nginx plugin config to read nginx status metrics like this:
LoadPlugin nginx
<Plugin "nginx">
    URL "http://localhost/nginx-status"
</Plugin>
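After saving the plugin config (typically under /opt/stackdriver/collectd/etc/collectd.d/, depending on the agent version), restart the agent so it picks up the change:
sudo service stackdriver-agent restart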
More plugin configurations:
https://github.com/Stackdriver/stackdriver-agent-service-configs/tree/master/etc/collectd.d

Related

Not able to Configure relay sentry setup with proxy

I want to connect Relay to sentry.io via a proxy service/application.
Please help me with this; I am not able to find any way to put a proxy between Relay and Sentry.
Config.yml
relay:
  mode: managed
  upstream: "https://sentry.io/"
  host: 0.0.0.0
  port: 3000
  tls_port: ~
  tls_identity_path: ~
  tls_identity_password: ~
Where do I have to set the proxy in Relay?
You can point the upstream setting at your proxy service/application, and there you need to have another Relay which can upload the data to sentry.io; a sketch of both configs follows.
Warning: this will just forward the messages, so configure your first relay in proxy mode.
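A minimal sketch of the two config.yml files, assuming the proxy relay is reachable at your-proxy-host:3000 (a hypothetical address):
# First relay: runs next to your app and forwards everything to the proxy relay
relay:
  mode: proxy
  upstream: "https://your-proxy-host:3000/"
  host: 0.0.0.0
  port: 3000
# Second relay: runs on the proxy host and uploads to sentry.io
relay:
  mode: managed
  upstream: "https://sentry.io/"
  host: 0.0.0.0
  port: 3000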

Deploying gradle spring application on a 1and1 cloud server

I have an apache/2.4.18 ubuntu server and I want to host my spring application on it. I generated a JAR file and can run it on the server. It starts an embedded tomcat server on port 8090.
However, when I navigate to 'my-site-ip:8090' the connection times out.
I have zero experience deploying web applications so any help would be appreciated.
I've created a TCP rule for port 8090 and still no joy.
The solution was adding a proxy to the Myapp.conf file as below:
<VirtualHost *:80>
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://localhost:8090/
    ProxyPassReverse / http://localhost:8090/
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
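For the ProxyPass directives to work, the proxy modules have to be enabled; on Ubuntu that is typically:
sudo a2enmod proxy proxy_http
sudo systemctl restart apache2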
It's very hard to explain all the steps in one answer, but you can follow these steps to work out the full configuration on your own. I did the same on my 1&1 cloud server.
First of all you need root access to your server.
Normally, ports 80 and 443 should already be open on your server; otherwise you can open them in the 1&1 Admin Portal. If your server already has an Apache configuration, you should be able to see the default Apache page when you go to your server address. If you don't have Apache installed yet, you can find the details and the full setup for this step here:
How To Install the Apache Web Server on Ubuntu
The second step would be to configure a virtual host on your Apache web server.
This is useful because you can define multiple domains and their applications on your server, so http://yourServer.com (port 80 or 443 externally) goes to yourApp1 (port 8090 internally).
In this step you tell Apache to route requests for your URL to your app on port 8090.
How To Set Up Apache Virtual Hosts on Ubuntu
The last step would be to install your Spring Boot app as a service on your machine. The Spring docs describe it very well:
Installation as an init.d Service
If you install the app as a service you are able to start and stop the app with the service command.
service myapp start
And don't forget to add the Maven or Gradle plugin configuration to your pom.xml or build.gradle; this is necessary to build a fully executable jar that can run as a service. A sketch for Gradle follows.
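A sketch for Gradle, assuming the Spring Boot Gradle plugin is applied (on Boot 1.3+ this produces a fully executable jar; on Boot 2.x the equivalent is bootJar { launchScript() }):
springBoot {
    executable = true
}
Then, following the Spring docs, link the jar into init.d so it can be managed with the service command (the paths here are examples):
sudo ln -s /var/myapp/myapp.jar /etc/init.d/myapp
sudo service myapp start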
If you follow these steps you should be able to reach your app without specifying a port, and be ready to run your app in production if necessary.
The best approach for this would be to use the apache proxy. This should get it done.
https://blog.marcnuri.com/running-apache-tomcat-and-apache-httpd-on-port-80-simultaneously/

kibana.dev.yml is not applied in kibana development mode

I would appreciate it if someone could help me out with this issue.
I am starting development of a Kibana plugin and have installed all the necessary packages.
My environment is below.
Kibana 5.0.0 alpha5 (cloned from the git repository)
I want to start the development server on an address other than 127.0.0.1:5601, so I have created config/kibana.dev.yml as below:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# This setting specifies the IP address of the back end server.
server.host: "0.0.0.0"
However, this does not seem to be applied when I start the Kibana server via npm start; it keeps listening on 127.0.0.1:5601.
Do I need any other setting to read config/kibana.dev.yml?
Thanks,
Yu Watanabe
When started in dev mode, SSL is on by default. In that configuration, if no custom certificates have been specified, the server.host setting has no effect and is forced to localhost (to match the host name in the default provided certificates), as can be seen in the cli/serve/serve.js file:
if (opts.dev) {
  set('env', 'development');
  set('optimize.lazy', true);
  if (opts.ssl && !has('server.ssl.cert') && !has('server.ssl.key')) {
    set('server.host', 'localhost');
    set('server.ssl.cert', fromRoot('test/dev_certs/server.crt'));
    set('server.ssl.key', fromRoot('test/dev_certs/server.key'));
  }
}
You can start Kibana by specifying the --no-ssl switch in order for the server.host setting to be taken into account:
sh ./bin/kibana --dev --no-ssl

ElasticSearch: Allow only local requests

How can I allow only local requests to Elasticsearch?
So a command like:
curl -XGET 'http://localhost:9200/twitter/_settings'
can only be run on localhost, while a request like:
curl -XGET 'http://mydomain.com:9200/twitter/_settings'
would get rejected?
Because, from what I see, Elasticsearch allows it by default.
EDIT:
According to http://www.elasticsearch.org/guide/reference/modules/network.html
you can use the bind_host parameter to control the bind address, and by default it is set to anyLocalAddress.
For Elasticsearch prior to v2.0.0, if you want both the HTTP transport and the internal Elasticsearch transport to listen only on localhost, simply add the following line to the elasticsearch.yml file:
network.host: "127.0.0.1"
If you want only the HTTP transport to listen on localhost, add the following line instead:
http.host: "127.0.0.1"
Starting from v2.0, Elasticsearch listens only on localhost by default, so no additional configuration is needed.
If your final goal is to deny any requests from outside the host machine, the most reliable way would be to modify the host's iptables so that it denies any incoming requests to the service ports used by ElasticSearch (9200-9300).
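A minimal sketch of such rules (run as root; 9200-9300 are the ElasticSearch defaults, adjust if you changed them):
iptables -A INPUT -p tcp --dport 9200:9300 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200:9300 -j DROP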
If the end goal is to make sure that everyone refers to the service using an exclusive DNS, you're better off achieving this with an HTTP server that can proxy requests such as HTTPd or nginx.
I use this parameter:
http.host: "127.0.0.1"
With this setting, Elasticsearch does not accept HTTP requests from external hosts.

Protect Jenkins with nginx http auth except callback url

I installed Jenkins on my server and I want to protect it with nginx HTTP auth so that requests to:
http://my_domain.com:8080
http://ci.my_domain.com
will be protected except one location:
http://ci.my_domain.com/job/my_job/build
which is needed to trigger builds. I am kind of new to nginx, so I am stuck on the nginx config for that.
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen x.x.x.x:8080;
    server_name *.*;

    location '/' {
        proxy_pass http://jenkins;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        auth_basic "Restricted";
        auth_basic_user_file /path/.htpasswd;
    }
}
I tried something like the above config, but when I visit http://my_domain.com:8080 there is no HTTP auth.
Finally I figured out how to solve this problem. First we need to uncheck the "Enable security" option on the Manage Jenkins page. With security disabled we can trigger our jobs with requests like http://ci.your_domain.com/job/job_name/build.
If you want to add a token to the trigger URL, you need to enable security, choose "Project-based Matrix Authorization Strategy" and give Admin rights to the Anonymous user. After that, the Configure page of your project will show a "Trigger builds remotely" option where you can specify a token, so your request will look like JENKINS_URL/job/onru/build?token=TOKEN_NAME.
So with security disabled we need to protect http://ci.your_domain.com with nginx http_auth, except URLs like /job/job_name/build; a sketch of that exception follows.
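A sketch of the nginx location exception, reusing the jenkins upstream from the question; the regex location matches the build-trigger URLs and takes precedence over location /, and auth_basic off makes the exception explicit:
location ~ ^/job/[^/]+/build$ {
    auth_basic off;
    proxy_pass http://jenkins;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
}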
And of course we need to hide port 8080 from external requests. Since my server runs Ubuntu, I can use the iptables firewall:
iptables -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
But! On Ubuntu (I am not sure about other Linux OSes) the iptables rules will disappear after a reboot. So we need to save them with:
iptables-save
And that is not the end: this command only prints the current rules. On startup we need to load them again, and the easiest way is to use the iptables-persistent package:
sudo apt-get install iptables-persistent
iptables-save > /etc/iptables/rules
Take a closer look at iptables if needed https://help.ubuntu.com/community/IptablesHowTo#Saving_iptables and good luck with Jenkins!
And there is a good example of running Jenkins on a subdomain of your server: https://wiki.jenkins-ci.org/display/JENKINS/Running+Hudson+behind+Nginx
