
ELK Stack (Elasticsearch / Logstash / Kibana) Configuration

ELK Stack Installation Step By Step Guide
###########################


Make sure Java is installed
#################

Consider 1.2.3.4 as our ELK Stack server.

> cd /opt/
> wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u141-linux-x64.tar.gz"

> tar -xzf jdk-8u141-linux-x64.tar.gz
> cd jdk1.8.0_141/
> alternatives --install /usr/bin/java java /opt/jdk1.8.0_141/bin/java 2
> alternatives --config java
> alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_141/bin/jar 2
> alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_141/bin/javac 2
> alternatives --set jar /opt/jdk1.8.0_141/bin/jar
> alternatives --set javac /opt/jdk1.8.0_141/bin/javac
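
Once the alternatives are set, verify the active Java; for the 8u141 (b15) build downloaded above, the output should look roughly like this:

> java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)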



Install Elasticsearch 5.5.1 (Port 9200)
########################

Before installing Elasticsearch, add the elastic.co key to the server.

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.rpm
rpm -ivh elasticsearch-5.5.1.rpm
systemctl start elasticsearch.service
curl http://127.0.0.1:9200
{
  "name" : "7i9vR7V",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "1FmzIA8XST2iIkaaF-kIQA",
  "version" : {
    "number" : "5.5.1",
    "build_hash" : "19c13d0",
    "build_date" : "2017-07-18T20:44:24.823Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
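
The steps above only start the service; to have Elasticsearch come back after a reboot, enable it as well (same pattern as the Kibana step below):

> systemctl enable elasticsearch.service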

Installing Kibana 5.5.1 (Port 5601)
######################

> wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-x86_64.rpm
> rpm -ivh kibana-5.5.1-x86_64.rpm

Now edit the Kibana configuration file.

> vi /etc/kibana/kibana.yml

Uncomment the configuration lines for server.port, server.host and elasticsearch.url.

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save and exit.

Add Kibana to run at boot and start it.

> systemctl enable kibana
> systemctl start kibana
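
As a quick sanity check, Kibana 5.x exposes a status API on the same port; from the server itself:

> curl http://localhost:5601/api/status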


Installing Logstash 5.5.1 (Port 5443)
################

> curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-5.5.1.rpm
> rpm -ivh logstash-5.5.1.rpm


  Generating SSL certificate
  #################

  > vi /etc/pki/tls/openssl.cnf

  Add the line below under the [ v3_ca ] section, where the IP address is the IP of the ELK Stack server:

  subjectAltName = IP: 1.2.3.4

  Save and exit.

  Generate the certificate file with the openssl command.
  > openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
 
  The certificate files can be found in the '/etc/pki/tls/certs/' and '/etc/pki/tls/private/' directories.
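
  Optionally, inspect the generated certificate to confirm the subjectAltName and validity dates (plain openssl usage):

  > openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'
  > openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -dates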
 
  Next, we will create new configuration files for Logstash: a 'filebeat-input.conf' file to configure the log input from Filebeat, a 'syslog-filter.conf' file for syslog processing, and an 'output-elasticsearch.conf' file to define the Elasticsearch output.

  >  vi /etc/logstash/conf.d/filebeat-input.conf
 
  Input configuration: paste the configuration below.

    input {
      beats {
        port => 5443
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }

  Save and exit.

  Create the syslog-filter.conf file.

  > vi /etc/logstash/conf.d/syslog-filter.conf

  Paste the configuration below.

    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }

  Save and exit.

We use a filter plugin named 'grok' to parse the syslog files.
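
For example, a syslog line like this made-up one:

    Aug 15 06:25:01 client1 sshd[2983]: Failed password for invalid user admin from 10.0.0.5 port 51132 ssh2

would be parsed by the pattern above into syslog_timestamp = 'Aug 15 06:25:01', syslog_hostname = 'client1', syslog_program = 'sshd', syslog_pid = '2983' and syslog_message = 'Failed password for ...', with received_at and received_from added on top.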

  Create the output configuration file 'output-elasticsearch.conf'.

  > vi /etc/logstash/conf.d/output-elasticsearch.conf

  Paste the configuration below.

    output {
      elasticsearch {
        hosts => "localhost:9200"
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }

Save and exit.
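
  Note that the steps so far never start Logstash itself. Before moving on to the client, test the pipeline configuration and start the service (binary path as laid down by the 5.x RPM):

  > /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
  > systemctl enable logstash
  > systemctl start logstash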


In Client1 server
###########

Log in to the client1 server.

If you have firewall restrictions, make sure the elastic.co IP 184.72.218.26 is temporarily whitelisted before importing the key and downloading the packages.

Copy the certificate file with the scp command.

> scp -P 5252 root@1.2.3.4:/etc/pki/tls/certs/logstash-forwarder.crt .
> mv /root/logstash-forwarder.crt /etc/pki/tls/certs/

Next, import the elastic key on the client1 server.

> rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

Download Filebeat and install it with rpm.

> wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-x86_64.rpm
> rpm -ivh filebeat-5.5.1-x86_64.rpm

Go to the configuration directory and edit the file 'filebeat.yml'.

> vi /etc/filebeat/filebeat.yml

Add the new log file paths in the paths configuration section.

  paths:
    - /var/log/auth.log
    - /var/log/syslog

Set the document type to syslog.

  document_type: syslog

Disable the Elasticsearch output by commenting out the lines as shown below.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

Enable the Logstash output: uncomment the configuration and change the values as shown below.

output.logstash:
  # The Logstash hosts
  hosts: ["1.2.3.4:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

Save the file and exit vim.

Finally, before starting Filebeat, make sure the client server can reach the Logstash port 5443.
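
A quick way to test that from the client (assuming nc is installed):

> nc -zv 1.2.3.4 5443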

If CSF firewall is in use, add the below rules to /etc/csf/csf.allow on the respective servers.

>> In the ELK Stack server.
tcp|in|d=5443|s=cl.ie.nt.ip #"do not delete"
>> In the Client server.
tcp|out|d=5443|d=se.rv.er.ip #"do not delete"
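
After adding the rules, reload CSF on each server so they take effect:

> csf -r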

>  systemctl start filebeat
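
Also enable Filebeat at boot, and watch its log (path listed in the Log files section below) to confirm events are being published:

> systemctl enable filebeat
> tail -f /var/log/filebeat/filebeat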





Now you can open the Kibana URL at http://1.2.3.4:5601. Note that server.host was set to "localhost" in kibana.yml above, which makes Kibana listen locally only; set it to the server IP (or "0.0.0.0"), or tunnel port 5601 over SSH, to reach it remotely.

 Add the index pattern and there you go. Give the index name as filebeat-* and the time-field name as @timestamp.

You should get a result like the below for the curl command from the ELK Stack server.

[root@server ~]# curl -XGET 'localhost:9200/_cat/indices'
yellow open filebeat-2017.08.15 3l1ixdJsTEW0HJrXQpv91A 5 1    4926 0   6.3mb   6.3mb
yellow open filebeat-2017.10.03 YLvtSFZDRCWP0pENXt9AUQ 5 1      15 0 133.3kb 133.3kb
yellow open filebeat-2017.08.13 wIMvOjt-SVi2GrtmJKQA2Q 5 1    2758 0   3.4mb   3.4mb
yellow open filebeat-2017.08.14 GCxU-idESla0Yb5L9SfERw 5 1    4759 0   6.3mb   6.3mb
yellow open filebeat-2017.10.02 S9NPwJy6TyulwT_-k34TgQ 5 1       1 0  14.9kb  14.9kb
yellow open filebeat-2017.08.16 bS1mRy02TK64SnS_s4TDdA 5 1 1159458 0 303.3mb 303.3mb
yellow open .kibana             Z5qARpx0TJODKE91L9k4oA 1 1       2 0  11.5kb  11.5kb


Log files
######

/var/log/logstash/logstash-plain.log
/var/log/filebeat/filebeat
/var/log/elasticsearch/elasticsearch.log


Issues and Fixes
##########

1) ERR Connecting error publishing events (retrying): i/o timeout

fix:
##
vi /etc/systemd/system/logstash.service

Add the line below under the [Service] section (systemd requires the Environment= form):

Environment="LS_JAVA_OPTS=-Djava.io.tmpdir=${LS_HOME} -Djava.net.preferIPv4Stack=true"

Save and exit, run systemctl daemon-reload, then restart the service.

vi /usr/lib/systemd/system/elasticsearch.service

Add under the [Service] section:

Environment="ES_JAVA_OPTS=-Djava.net.preferIPv4Stack=true"

Save and exit, run systemctl daemon-reload, then restart the service.

If you are still having timeout issues, disable IPv6 in sysctl.conf:

echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf

Then run sysctl -p to apply the changes.

https://support.nagios.com/forum/viewtopic.php?f=37&t=33114

2) Filebeat service not starting, with an error that points to no related line numbers.

Fix:
##

This could be due to an indentation problem in the YAML config files. YAML is whitespace-sensitive: make sure every setting sits on its proper line and indentation level, use spaces rather than tabs, and remove or add leading spaces as needed.
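
Filebeat 5.x also has a -configtest flag you can use to validate the YAML before restarting the service (binary path per the RPM layout):

> /usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml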

3) There will be no authentication for Kibana by default.

Fix:
###
Install X-Pack for this.

Install X-Pack into Elasticsearch

> /usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack

> systemctl restart elasticsearch.service

Install X-Pack into Kibana

> /usr/share/kibana/bin/kibana-plugin install x-pack

> systemctl restart kibana.service

  • Navigate to Kibana at http://localhost:5601/
  • Log in as the built-in elastic user with the password changeme.

vi /etc/logstash/conf.d/output-elasticsearch.conf

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "elastic"
    password => "changeme"
  }
}



vi /etc/logstash/logstash.yml

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://localhost:9200


vi /etc/kibana/kibana.yml


elasticsearch.username: "kibana"
elasticsearch.password: "changeme"
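
After editing these files, restart Logstash and Kibana so the credentials take effect:

> systemctl restart logstash
> systemctl restart kibana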

4) X-Pack license expiring issue.

Your Kibana will show an error like the below:

[security_exception] current license is non-compliant for [security], with { license.expired.feature=security

You have to update your license by registering for the Basic plan at https://www.elastic.co/subscriptions

You will then get a mail with a link to download the license file and the steps to activate it.

Elasticsearch 5.x -- https://www.elastic.co/guide/en/x-pack/current/installing-license.html
Elasticsearch 2.x -- https://www.elastic.co/guide/en/marvel/current/license-management.html
 

Now run the below two commands from your ELK Stack server.

curl -XPUT -u elastic 'http://localhost:9200/_xpack/license' -H "Content-Type: application/json" -d @1c9033ef-73ee-413c-a688-98790009w34-v5.json


curl -XPUT -u elastic 'http://localhost:9200/_xpack/license?acknowledge=true' -H "Content-Type: application/json" -d @1c9033ef-73ee-413c-a688-98790009w34-v5.json

Where 1c9033ef-73ee-413c-a688-98790009w34-v5.json is the downloaded license file.
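
You can confirm the new license is active with a GET on the same endpoint (X-Pack 5.x API):

curl -XGET -u elastic 'http://localhost:9200/_xpack/license'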

You also have to know the password of the elastic user. I didn't know the password at first either; I later got it from the file /etc/logstash/conf.d/output-elasticsearch.conf.



Reference
#######
http://www.itzgeek.com/how-tos/linux/centos-how-tos/updated-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7-rhel-7.html
https://www.howtoforge.com/tutorial/how-to-install-elastic-stack-on-centos-7/
http://devopspy.com/devops/install-elk-stack-centos-7-logs-analytics/
https://www.elastic.co/guide/en/beats/filebeat/index.html
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
https://www.howmovileworks.com/infra/high-availability-with-logstash/
https://www.elastic.co/downloads/x-pack#ga-release
