I am trying to set up Elasticsearch APM; my OS is Ubuntu 16.04.
I installed Elasticsearch and Kibana on the system,
referring to the following site for the installation steps:
https://jee-appy.blogspot.com/2018/02/setup-kibana-elastisearch.html
The installation commands for Elasticsearch and Kibana are as follows:
# Install Elasticsearch-6
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.1.tar.gz
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch
ls /etc/init.d/elasticsearch
sudo service elasticsearch status
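To confirm Elasticsearch is up before changing any settings, a quick request to its default HTTP port should return the cluster info JSON:
curl http://localhost:9200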
Change the bind address and JVM heap options as required:
set network.host to 0.0.0.0 in elasticsearch.yml and set -Xms4g and -Xmx4g in jvm.options.
sudo vi /etc/elasticsearch/elasticsearch.yml
sudo vi /etc/elasticsearch/jvm.options
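For reference, these are the lines the two files end up with for the settings described above (the 4 GB heap is just the value used here; size it to your machine):
In elasticsearch.yml:
network.host: 0.0.0.0
In jvm.options:
-Xms4g
-Xmx4g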
Set the number of replicas to 0 if you are creating a single-node cluster:
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_all/_settings?preserve_existing=false' -d '{"index.number_of_replicas" : "0"}'
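To verify the setting took effect, the same _settings endpoint can be read back; each index should report "number_of_replicas" : "0":
curl 'http://localhost:9200/_all/_settings'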
Install Kibana
sudo apt-get update && sudo apt-get install kibana
sudo service kibana restart
Install nginx
sudo apt-get -y install nginx
Add nginx config file for kibana
sudo vi /etc/nginx/conf.d/kibana.conf
Replace mykibana.com with your server name or IP. We will set up auth in the next step, which is why kibana.conf already contains the auth_basic lines.
server {
    listen 80;
    server_name mykibana.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
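Before restarting nginx, it is worth validating the new config; nginx -t checks the syntax without touching the running server:
sudo nginx -t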
Setup auth
After installing apache2-utils, run htpasswd; it will ask for a password. This username and password are what you will use when accessing Kibana from the browser.
sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users efkadmin
sudo service nginx restart
Web view of Kibana
After a successful setup, hit http://localhost:5601, enter the username and password, and you will see the Kibana web UI.
APM setup
After installing Elasticsearch and Kibana, I am trying to install the APM server.
I used the following commands to install it:
curl -L -O https://artifacts.elastic.co/downloads/apm-server/apm-server-6.3.1-amd64.deb
sudo dpkg -i apm-server-6.3.1-amd64.deb
Import dashboards:
./apm-server setup
When I run the above command I get the following error:
bash: ./apm-server: No such file or directory
Please help me set up APM.
If the following command was executed successfully
> sudo dpkg -i apm-server-6.3.1-amd64.deb
then apm-server has been installed.
You are trying to run it as ./apm-server, which tells the shell that the apm-server binary is in the current directory.
But that is not the case: installing via dpkg puts the package somewhere under /usr and the binary on your PATH.
So you just need to run apm-server -e in the shell.
No need to add ./
If you get a permission denied error, run the command with sudo.
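Putting it together, a minimal sketch of the two commands once the .deb is installed (the binary is found via the PATH, so no ./ prefix; add sudo if you hit permission denied):
apm-server setup
apm-server -e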
Related
Installed JITSI but unable to enable audio and video; it throws an error saying an SSL certificate is required. Can I get the exact steps to install SSL on an Ubuntu 16.04 instance in AWS EC2?
On Ubuntu you can find the nginx conf file in the directory /etc/nginx/sites-available, where you will find a <your_domain>.conf file.
Edit the config file to point to the SSL certificate:
ssl_certificate /etc/ssl/<your_domain>.crt;
ssl_certificate_key /etc/ssl/<your_domain>.key;
More information on how to set up an SSL certificate with nginx: http://nginx.org/en/docs/http/configuring_https_servers.html
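For reference, a minimal sketch of the TLS part of an nginx server block, assuming the certificate paths above and <your_domain> as a placeholder (merge it into your existing Jitsi server block):
server {
    listen 443 ssl;
    server_name <your_domain>;
    ssl_certificate /etc/ssl/<your_domain>.crt;
    ssl_certificate_key /etc/ssl/<your_domain>.key;
}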
Install certbot; it will put your certificates in the right place.
sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --nginx
Jitsi only works if it has SSL.
I have a problem when I deploy my API to Azure App Service: a ping on port 80 does not find it and returns an error.
If I manually launch my Docker container on the machine via SSH with --network=host, a ping on port 80 works.
It may be a misconfiguration on my part in the Dockerfile, but I don't know what it is.
How can I fix this problem?
FROM ruby:2.4
# Node.js and Yarn repositories, needed for the Rails asset pipeline
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -qq -y build-essential libpq-dev nodejs yarn
WORKDIR /api
COPY . .
RUN bundle install
# Thin serves the API on port 80
CMD bundle exec thin -p 80 start
EXPOSE 80
In the Azure App Service portal, under Settings -> Configuration -> Application settings, check whether you have added a setting named PORT.
If not, add it with the value 80.
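If you prefer the CLI over the portal, the same application setting can be added with the Azure CLI; the resource group and app names here are placeholders:
az webapp config appsettings set --resource-group <rg> --name <app-name> --settings PORT=80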
A clean RHEL 6.0. Can you please tell me what I am missing?
I have done the following, but somewhere I am going wrong: I can't start Logstash for some reason.
I am not a newbie to Linux; I'm trying to learn as I go.
I have done the following to get the install working.
Elasticsearch and Kibana start fine; the problem is with Logstash.
I hope someone can help me out here. I'm really stuck getting this one working.
#Install Java
sudo su -c "yum install java-1.8.0-openjdk"
Installed:
java-1.8.0-openjdk.x86_64 1:1.8.0.45-28.b13.el6_6
Complete!
java -version
openjdk version "1.8.0_45"
#Install elasticsearch
sudo su -c "yum localinstall elasticsearch-5.0.1.rpm "
Running Transaction
Installing : elasticsearch-5.0.1-1.noarch 1/1
Creating elasticsearch group... OK
Creating elasticsearch user... OK
NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig
sudo chkconfig --add elasticsearch
You can start elasticsearch service by executing
sudo service elasticsearch start
Verifying : elasticsearch-5.0.1-1.noarch 1/1
Installed:
elasticsearch.noarch 0:5.0.1-1
Complete!
sudo chkconfig --add elasticsearch
sudo service elasticsearch start
elasticsearch Started
#Install kibana
sudo su -c "yum localinstall kibana-5.0.1-x86_64.rpm"
Downloading Packages:
Installed:
kibana.x86_64 0:5.0.1-1
Complete!
#Install logstash
sudo su -c "yum localinstall logstash-5.0.1.rpm"
Total size: 189 M
Installing : 1:logstash-5.0.1-1.noarch 1/1
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
Verifying : 1:logstash-5.0.1-1.noarch 1/1
Installed:
logstash.noarch 1:5.0.1-1
Complete!
#Query config values
rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/jvm.options
/etc/elasticsearch/log4j2.properties
/etc/elasticsearch/scripts
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
#Query config values
rpm -qc logstash
/etc/logstash/jvm.options
/etc/logstash/logstash.yml
/etc/logstash/startup.options
#Query config values
rpm -qc kibana
/etc/kibana/kibana.yml
#Permissions
sudo su -c "chmod -R 777 /etc/kibana/"
sudo su -c "chmod -R 777 /etc/logstash/"
sudo su -c "chmod -R 777 /etc/elasticsearch/"
sudo service kibana start
-- output kibana started
Grrr... this does not start:
sudo chkconfig --add logstash
sudo service logstash start
> Try this:
sudo su -c "/usr/share/logstash/bin/system-install"
Sending all logs to /tmp/tmp.ZNfIKnQxFh
Successfully created system startup script for Logstash
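Once system-install has generated the startup script, the earlier commands from the question should go through:
sudo chkconfig --add logstash
sudo service logstash start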
I'm playing with boot2docker (Docker 1.6) on Windows 8.1. I wanted to build myself a container to play with Ruby, and I want to be able to connect to the Rails server from my Windows host. To start with small steps, first I want to connect to my container from my boot2docker VM. I attach my Dockerfile below; it builds without a problem and I can run a container from it. I do it like so:
docker run -it -p 3000:3000 3564860f7afd /bin/bash
Then in this container I say:
cd ~/myapp && bundle exec rails server -d
And to see if everything is working I do:
~/myapp$ sudo apt-get install wget && wget localhost:3000
and I get HTTP 500, which is OK; I just wanted to check whether the server is running. Then I exit using ctrl+p, ctrl+q. But then on the boot2docker machine I again do
wget localhost:3000
and get
Connecting to localhost:3000 (127.0.0.1:3000)
wget: error getting response: Connection reset by peer
So it seems like port 3000 is not correctly forwarded to the boot2docker VM. What have I done wrong? What did I miss? I googled extensively and tried a couple of things, like explicitly exposing the port from the Dockerfile or adding the -P switch to run, but I always end up the same way: it's not working.
Any help will be greatly appreciated.
UPDATE 02.05.2015
I have also tried the things described in the comment from Markus W Mahlberg and the response from VonC. My VM configuration seems to be OK; I also checked in the VirtualBox GUI and it looks fine. Some more info: when I start
boot2docker ssh -vnNTL 3000:localhost:3000
and then open localhost:3000 on my Windows host, I see trace logs in the boot2docker console that look like this:
debug1: channel 1: free: direct-tcpip: listening port 3000 for localhost port 3000, connect from 127.0.0.1 port 50512 to 127.0.0.1 port 3000, nchannels 3
Chrome tells me that the response was empty. From checking the logs on the container I know that the request never reached it.
End of update
Update 03.05.2015
I think my problem has less to do with boot2docker or Docker than with my computer's configuration. I've been over my docker/boot2docker configuration so many times that it's rather unlikely I've made a mistake there.
In desperation I've reinstalled boot2docker and VirtualBox, still with no effect. Any ideas how to debug what might be wrong with my configuration? The only other idea I have is to try the same thing on another machine. But even if that works, my original problem is no less annoying.
End of update
Here is my Dockerfile:
FROM ubuntu
MAINTAINER anonymous <anonymous#localhost.com>
LABEL Description="Ruby container"
# based on https://gorails.com/setup/ubuntu/14.10
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd anonymous \
&& useradd anonymous -m -g anonymous -g sudo
ENV HOME /home/anonymous
USER anonymous
RUN git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
RUN echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
RUN echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN exec $SHELL
RUN git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
RUN echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
RUN exec $SHELL
RUN git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
ENV PATH "$HOME/.rbenv/bin:$HOME/.rbenv/plugins/ruby-build/bin:$PATH"
RUN rbenv install 2.2.1
RUN rbenv global 2.2.1
ENV PATH "$HOME/.rbenv/shims:$PATH"
RUN echo 'gem: --no-ri --no-rdoc' > ~/.gemrc
RUN gem install bundler
RUN git config --global color.ui true
RUN git config --global user.name "mindriven"
RUN git config --global user.email "3dcreator.pl#gmail.com"
RUN ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -C "3dcreator.pl#gmail.com"
RUN sudo apt-get -qy install software-properties-common python-software-properties
RUN sudo add-apt-repository ppa:chris-lea/node.js
RUN sudo apt-get -y install nodejs
RUN gem install rails -v 4.2.0
RUN ~/.rbenv/bin/rbenv rehash
RUN rails -v
RUN sudo apt-get -qy install mysql-server mysql-client
RUN sudo apt-get install libmysqlclient-dev
RUN rails new ~/myapp -d mysql
RUN sudo /etc/init.d/mysql start && cd ~/myapp && rake db:create
See Boot2docker workarounds:
You can use VBoxManage.exe commands to open those ports on the boot2docker VM level, in order for your actual VM host to access them.
By default, only the port 2222 is open, for boot2docker ssh to work and open an interactive ssh boot2docker session.
Just make sure VirtualBox is in your PATH.
VBoxManage modifyvm: works when the boot2docker VM isn't started yet, or after a boot2docker stop,
VBoxManage controlvm: works when the boot2docker VM is running, after a boot2docker start.
Let's say your Docker container exposes port 8000 and you want to access it from your other computers on your LAN. You can do it temporarily, using ssh:
Run following command (and keep it open):
$ boot2docker ssh -vnNTL 8000:localhost:8000
or you can set up a permanent VirtualBox NAT Port forwarding:
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port8000,tcp,,8000,,8000";
If the vm is already running, you should run this other command:
$ VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8000,tcp,,8000,,8000";
Now you can access your container from your host machine under
localhost:8000
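For the Rails server from the question the same commands apply with port 3000; for example, while the VM is running (assuming the default VM name boot2docker-vm):
$ VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port3000,tcp,,3000,,3000"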
That way, you don't have to mess around with the VirtualBox GUI: selecting the computer called boot2docker-vm from the list on the left, choosing Settings from the Machine menu (or pressing Command-S on a Mac), selecting the Network icon at the top, and finally clicking the Port Forwarding button.
boot2docker on Windows (and OSX) is running a VirtualBox VM with Linux in it. By default it exposes only the ports necessary to ssh into the VM. You'll need to modify the VM to get it to expose more ports.
Adding ports to the VM is more about configuring VirtualBox and less about boot2docker (it is a property of the VM, not the software running inside it). Please see the VirtualBox documentation for "port forwarding" and other network configuration. https://www.virtualbox.org/manual/ch06.html
Yes, you need to open the ports in the VirtualBox machine's port-forwarding settings.
When starting the latest (Oct 2014) Hadoop with start-dfs.sh, we are seeing:
connect to host localhost port 22: Connection refused
Install the OpenSSH server.
For Ubuntu the command is:
sudo apt-get install openssh-server
In the hadoop-env.sh file (present in /etc/hadoop), add the following line:
export HADOOP_SSH_OPTS="-p 22"
Configure "HADOOP_SSH_OPTS" in your hadoop-env.sh, to add any SSH CLI
options you need to always be present when the Hadoop scripts use SSH.
A line like 'export HADOOP_SSH_OPTS="-p <port>"' is perhaps what you are looking for.
Source: Interweb
Install and start the OpenSSH server. Here are the commands for CentOS:
Install the OpenSSH server:
sudo yum -y install openssh-server openssh-clients
Start SSH server:
sudo service sshd start
OS
On Ubuntu 20.04.1 LTS
Install OpenSSH
sudo apt install openssh-server openssh-client -y
Start SSH
sudo service ssh start
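A quick way to confirm the fix before rerunning start-dfs.sh is to check that a local SSH login is now accepted (assuming the default port 22):
ssh localhost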