Kibana connection failed to Elasticsearch - elasticsearch

I have installed Elasticsearch and Logstash 1.4 from the Debian repository. They are working and collecting logs from another device that forwards syslog.
I followed the Kibana install guide, but I am getting a Connection Failed error message, which tells me to check that ES is running or to ensure that http.cors.enabled: true is set.
In the browser console I am getting this error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://'127.0.0.1':9200/_nodes. This can be fixed by moving the resource to the same domain or enabling CORS.
I have added this to my elasticsearch.yml:
http.cors.allow-origin: "http://192.168.1.1"
http.cors.enabled: true
That IP is the host's own address, since all three ELK apps run on the same host.
Any suggestions?
EDIT:
I got it working by adding Header set Access-Control-Allow-Origin "*" right before the closing </VirtualHost> tag in sites-enabled.
I also had to link to the module:
ln -s /etc/apache2/mods-available/headers.load /etc/apache2/mods-enabled/

For these configs, you'll need to sudo or be root.
First, make sure you have the following lines in elasticsearch.yml (usually at /etc/elasticsearch/elasticsearch.yml):
http.cors.allow-origin: "http://192.168.1.1"
http.cors.enabled: true
(don't worry if the rest of the file is all commented out--the defaults should be fine)
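Once Elasticsearch has been restarted (e.g. sudo service elasticsearch restart), you can sanity-check the CORS settings by sending a request with an Origin header and looking for the matching Access-Control-Allow-Origin header in the response; a quick check using the IP and port from the question:
curl -i -H "Origin: http://192.168.1.1" http://127.0.0.1:9200/_nodes
The response headers should include Access-Control-Allow-Origin: http://192.168.1.1.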
The rest of the configs are for Apache, so go to the apache directory. For example:
cd /etc/apache2
In your enabled sites folder, add a "Header set" directive. On a simple system, this may be in the file linked at /etc/apache2/sites-enabled/000-default.conf. Inside the <VirtualHost> directive (perhaps after the line that sets DocumentRoot) add:
Header set Access-Control-Allow-Origin "*"
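For reference, here is a minimal sketch of what the edited virtual host could look like; the ServerAdmin and DocumentRoot values are just the Debian defaults and will differ on your system:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    # add the CORS header to every response served by this virtual host
    Header set Access-Control-Allow-Origin "*"
</VirtualHost>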
For this to work, you also need to enable the headers module. Do:
cd /etc/apache2/mods-enabled
ln -s ../mods-available/headers.load
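On Debian and Ubuntu, the a2enmod helper creates that same symlink for you, so an equivalent (and perhaps tidier) way to enable the module is:
sudo a2enmod headers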
Finally, don't forget to reload or restart the Apache server (reload if you can't stand a 1 second downtime). For example, on a sysvinit-style system:
service apache2 reload
or
service apache2 restart
Then don't forget to refresh the page in your browser.

Related

Laravel deployment to apache with php-fpm restart

I'm quite new to Laravel and the concept of CI/CD, but I have invested the last 24 hours in getting something up and running. I'm using gitlab.com as the repo, and I have configured the CI/CD functionality there.
Deployments go to SRV1, which has its corresponding user configured with a cert. SRV1 then clones the necessary files from the GitLab repo using Deployer. The GitLab repo also has the public key of the SRV1 user. This chain is working quite well.
The problem is that after deploying I need to restart php-fpm so that it reinitializes its symlinks and updates its absolute-path cache.
I saw various methods of overcoming this by setting some CGI settings in php-fpm, but these didn't work for me since they all use nginx, while I'm using Apache.
Is there any other way to tell php-fpm with apache to reinitialize its paths or reload after changes?
The approach of adding the deployer user to the sudoers list and calling service php-fpm restart looks quite hacky to me...
Thanks
UPDATE1:
Actually I found this: https://github.com/lorisleiva/laravel-deployer/blob/master/docs/how-to-reload-fpm.md
It looks like Deployer has a technique for doing this, but it requires the deployer user to have access to php-fpm reload. That looks a bit unsafe to me.
I didn't find any other solutions. There are some for nginx that tell it to always re-evaluate the real path. For Apache the equivalent should be "FollowSymLinks", but it was not working.
In the end I created a bash script that runs as root. The script checks the "current" symlink for changes every 10 seconds, and if it has changed it reloads php-fpm. Not nice, admittedly quite ugly, but it should work.
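A minimal sketch of that kind of watcher, assuming the release symlink lives at /var/www/app/current and the service is named php7.4-fpm (both are placeholders for your own path and PHP version):
#!/bin/bash
# Poll the "current" release symlink and reload php-fpm whenever its target changes.
LINK="/var/www/app/current"   # placeholder: path to the deployer "current" symlink
SERVICE="php7.4-fpm"          # placeholder: name of your php-fpm service
last_target="$(readlink -f "$LINK")"
while true; do
    sleep 10
    target="$(readlink -f "$LINK")"
    if [ "$target" != "$last_target" ]; then
        service "$SERVICE" reload
        last_target="$target"
    fi
done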
Still open for other proposals.
I solved this issue on my server by adding a PHP file that clears APCu and OPcache:
<?php
// Only allow the cache to be cleared from localhost.
if (in_array(@$_SERVER['REMOTE_ADDR'], ['127.0.0.1', '::1']))
{
    apcu_clear_cache(); // clears the whole APCu cache (takes no arguments, unlike the old apc_clear_cache())
    opcache_reset();    // resets OPcache, which also drops the cached realpaths
    echo "Cache cleared";
}
else
{
    die("You can't clear cache");
}
Then you have to call it with curl after you have updated your symlink:
/usr/bin/curl --silent https://domain.ext/clear_apc_cache.php
I use GitLab CI/CD and it works for me now.

Imagemagick - change policy.xml on Heroku

I'm trying to access images via https on Heroku with Imagemagick. How can I change the policies (in policy.xml) on Heroku?
Heroku made an "ImageMagick security update" in May, 2016: https://devcenter.heroku.com/changelog-items/891
I can see the policy list, after typing heroku run bash and convert -list policy:
Path: [built-in]
Policy: Undefined
rights: None
Path: /etc/ImageMagick/policy.xml
[...]
Policy: Coder
rights: None
pattern: HTTPS
[...]
How can I change the policy?
update 1: this is the error in the log file:
Command failed: convert.im6: not authorized `//scontent-fra3-1.xx.fbcdn.net/v/t1.0-9/13962741_132344500547278_4974691444630710043_n.jpg?oh=c169b4ffce9e5ce330ee99214cc6b8d5&oe=5880F245'
I’ve found a relatively simple solution.
Create a .magick directory in your app’s source, and add your policy.xml there. Then, you’ll have to set the environment variable MAGICK_CONFIGURE_PATH to /app/.magick in order to load your file with higher precedence than the default one.
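As a rough sketch, assuming the problem is the HTTPS coder policy shown in the question's listing, the policy.xml you drop into .magick only needs to re-grant read rights for that coder:
<policymap>
  <!-- allow ImageMagick to read remote images over HTTPS -->
  <policy domain="coder" rights="read" pattern="HTTPS" />
</policymap>
and the config var can be set with the Heroku CLI:
heroku config:set MAGICK_CONFIGURE_PATH=/app/.magick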
You need to install the third-party software ImageMagick on Heroku; I used this buildpack to install it: https://github.com/ello/heroku-buildpack-imagemagick
Inside bin/compile there is a policy file that restricts reading images over HTTPS; set the rights attribute to "read" so that reading over HTTPS is allowed.
Fork the repo, make your changes, commit, and add that repository URL to your Heroku buildpacks.
Read the warnings at ImageTragick, then make a backup and delete the line that restricts you.
You can find the file to edit in the same directory as the other XML config files by doing the following - the file is called policy.xml:
convert -debug configure -list font 2>&1 | grep -E "Searching|Loading"
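Once you have located the directory, a hedged sketch of making the backup and removing the offending coder line (the path here is the one from the question's listing; yours may differ):
sudo cp /etc/ImageMagick/policy.xml /etc/ImageMagick/policy.xml.bak
# delete the line that denies the HTTPS coder
sudo sed -i '/pattern="HTTPS"/d' /etc/ImageMagick/policy.xml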

Lets Encrypt Error "urn:acme:error:unauthorized"

I use Let's Encrypt and get this error:
urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Error parsing key authorization file: Invalid key authorization: malformed token
I try: sudo service nginx stop
but get error: nginx service not loaded
So I had a lot of trouble with this stuff. Basically, the error means that certbot was unable to find the file it was looking for when testing that you owned the site. This has a number of potential causes, so I'll try to summarize because I encountered most of them when I set this up. For more reference material, I found the github readme much more useful than the docs.
First thing to note is that the nginx service needs to be running for the acme authorization to work. It looks like you're saying it's not, so start by spinning that up.
sudo service nginx start
With that going, everything here is based on the file location of the website you're trying to create a certificate for. If you don't know where that is, it will be in the relevant configuration file under /etc/nginx; the exact location depends largely on your version of NGINX, but it is usually /etc/nginx/nginx.conf, /etc/nginx/sites-enabled/[site-name], or /etc/nginx/conf/[something].conf. Note that the configuration file (or at least its directory) should be listed in /etc/nginx/nginx.conf, so you might start there.
This is an important folder, because it is the folder certbot needs to modify. Certbot creates some files in a nested folder structure, then checks that the URL it requests returns the data from those files. The folder it tries to create will be under the root directory you give it, at:
/.well-known/acme-challenge
It will then try to create a file with an obscure name (I think it's a GUID), and read that file from the URL. Something like:
http://example.com/.well-known/acme-challenge/abcdefgh12345678
This is important, because if your root directory is poorly configured, the URL will not match the folder and the authorization will fail. And if certbot does not have write permission to the folders when you run it, the file will not be created, so the authorization will also fail. I encountered both of these issues.
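One way to catch both problems before involving certbot (a rough check, assuming the web root is /home/example as in the example configuration further down) is to drop a dummy file into the challenge path and fetch it over plain HTTP:
mkdir -p /home/example/.well-known/acme-challenge
echo test > /home/example/.well-known/acme-challenge/test
curl http://example.com/.well-known/acme-challenge/test   # should print "test"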
Additionally, you may have noticed that the above URL is http, not https. This is also important. I was using an existing encryption tool, so I had to configure NGINX to serve the /.well-known folder tree on port 80 instead of 443 while still keeping the rest of my data under the secure https URL. These two things make for a somewhat complicated NGINX file, so here is an example configuration to reference:
server {
    listen 80;
    server_name example.com;

    location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /home/example;
    }

    location '/' {
        return 301 https://$server_name$request_uri;
    }
}
This allows port 80 for everything related to the certbot challenges, while retaining security for the rest of my website. You can modify the directory permissions to ensure that certbot has access to write the files, or simply run it as root:
sudo ./certbot-auto certonly
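If you would rather be explicit about where the challenge files go, certbot's webroot plugin lets you name the directory and domain on the command line; a hedged example using the paths from the configuration above:
sudo ./certbot-auto certonly --webroot -w /home/example -d example.com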
After you get the certificate, you'll have to set it up in your config as well, but that's outside the scope of this question, so here's a link.

Laravel - Generic Apache 500 error with Envoyer directory structure

I am trying to deploy my Laravel 5 site to my VPS using Envoyer. I changed the document root in the site's Apache settings to /current/public (settings below); when I do this, I receive a generic Apache 500 error. If I use the old public directory, everything loads properly.
I also tried chmod 777 -R storage, with no luck. There are no entries in the Laravel log, and everything deploys fine without errors.
I did notice that if I create a plain HTML document and deploy it via Envoyer, I am able to access it directly with the /current/public document root; anything related to Laravel (and only when using current/public) results in the 500.
Ideas? Would a symlink be a possible solution? Oddly, my Forge configuration on my other Envoyer site has the document root set to public, yet there is no symlink to current/public that I can see. It may be set to current/public and just not displaying that for some reason.
customlog:
-
format: combined
target: /usr/local/apache/domlogs/mydomain.org
-
format: "\"%{%s}t %I .\\n%{%s}t %O .\""
target: /usr/local/apache/domlogs/mydomain.org-bytes_log
documentroot: /home/eyf/current/public
group: eyf
hascgi: 1
homedir: /home/eyf
ifmoduleconcurrentphpc: {}
ifmodulemodsuphpc:
group: eyf
ip: MY.IP.ADDR
owner: root
phpopenbasedirprotect: 1
port: 80
scriptalias:
-
path: /home/eyf/public/cgi-bin
url: /cgi-bin/
-
path: /home/eyf/public/cgi-bin/
url: /cgi-bin/
serveradmin: webmaster@mydomain.org
serveralias: www.mydomain.org
servername: mydomain.org
usecanonicalname: 'Off'
user: eyf
userdirprotect: ''
Okay, so I encountered two separate problems here.
The first problem was the fact that I was deploying code as root and trying to access a site owned by a cPanel user (eyf in this case). Because the files/directories were deployed as root, an ownership issue caused the generic 500 error page.
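In other words, handing the deployed tree back to the site owner fixes that first problem; a hedged example using the user and paths from the config above:
chown -R eyf:eyf /home/eyf/current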
I then tried to connect via Envoyer as eyf and there was some sort of SSH key issue; even though I added the key to eyf via cPanel, it did not seem to take. Repeated attempts to connect from Envoyer eventually led to the IP address being blacklisted.
In response to this, Envoyer simply said "Failed" when trying to connect to the server. Immediately after saying "Failed," a warning message would appear saying that there was a problem with PHP-FPM.
Taylor says that this PHP-FPM warning message appears because the connection was unsuccessful and Envoyer could not connect to PHP-FPM. Well, this is totally misleading because I do not have PHP-FPM installed on this server and it has absolutely nothing to do with why the connection failed (it was an SSH authentication problem).
I asked him to please improve the warnings/errors for things like this; it stretched what should have been a quick fix into a several-hour-long tail-chasing session. Dploy.io, a competitor, clearly showed an SSH connection issue when I first attempted to connect and had forgotten the SSH key: "d'oh! Let me fix that," problem solved in less than a minute.
Anyway, back to Envoyer bliss - just a bit ticked. ;) The IP addresses were whitelisted, I added the SSH key manually for the cPanel user (/.ssh/id_rsa), and now everything works.

ElasticSearch installed---but Installing kibana on localhost?

I'd like to view my machine's syslogs more beautifully on an ubuntu desktop. I notice that all the kibana documentation is oriented towards remote servers (which makes sense). However, how would I securely view the same information about my local machine?
Here are some things I've read that were not helpful because they were designed for remote access:
https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-7
Kibana deployment issue on server . . . client not able to access GUI
http://www.elasticsearch.org/overview/kibana/installation/ which has the following problems:
there is no config.js to open in an editor per step 2; you can see this very plainly on their github page: https://github.com/elasticsearch/kibana
running
~/kibana/src/server/bin$ bash kibana.sh
The Kibana Backend is starting up... be patient
Error: Unable to access jarfile ./../lib/kibana.jar
How do I install kibana locally?
Not sure if you're still looking for an answer, but for future searchers:
What you can do is download elasticsearch - http://www.elasticsearch.org/overview/elkdownloads/
Extract it, and create a plugins subdirectory. Then, within the /plugins directory create a /kibana/_site subdirectory.
Then download Kibana using the above-mentioned link. Extract the archive, then edit config.js to point to localhost as the elasticsearch host:
elasticsearch: "http://localhost:9200",
Copy the entire contents of the extracted Kibana folder into the kibana/_site directory you created inside the elasticsearch folder.
Then start elasticsearch:
within the elasticsearch directory -
bin/elasticsearch
Kibana will now run off of the same 'server' as elasticsearch, on your local host.
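Put together, the whole procedure looks roughly like this (a sketch; the archive name and version are placeholders for whatever you actually downloaded):
# run these from inside the extracted elasticsearch directory
mkdir -p plugins/kibana/_site
# copy the extracted Kibana files into the new plugin folder
cp -r ../kibana-3.x.x/* plugins/kibana/_site/
# edit plugins/kibana/_site/config.js so elasticsearch points at http://localhost:9200
# then start elasticsearch; Kibana is served from the same host
bin/elasticsearch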
UPDATE: Kibana 4 comes bundled with a web server now: see the docs
