Grok pattern for a custom Java log

Grok pattern for the log line below:

27-05-2020 06:44:33.476 [app-api-5bd9d99b8-sjql5-6f5bdf4a-f2c9-4a25-8fe6-031e9fa28cf0] DEBUG 1 [http-nio-8080-exec-4] c.w.w.m.customer.controllers.CustomerController [get-141] : Get all Customer request received.

This has to be added to the Logstash config file /usr/share/logstash/pipeline/logstash.conf:

filter {
  grok {
    match => { "message" => ["%{DATE_EU:date} %{TIME:logTime} *\[%{DATA:requestId}] %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:message}"] }
  }
}
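
Applied to the sample line above, this pattern should extract fields roughly like the following (an illustration of the captures, not verbatim Logstash output):

date       => "27-05-2020"
logTime    => "06:44:33.476"
requestId  => "app-api-5bd9d99b8-sjql5-6f5bdf4a-f2c9-4a25-8fe6-031e9fa28cf0"
logLevel   => "DEBUG"
processId  => "1"
threadName => "http-nio-8080-exec-4"
className  => "c.w.w.m.customer.controllers.CustomerController"
origin     => "get-141"
message    => " Get all Customer request received."

Note that capturing into message again makes grok append to the existing message field (turning it into an array); either add overwrite => ["message"] to the grok block or capture into a different field such as messagebody, as the patterns below do.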

Alternative grok pattern
#####################
(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{DATA:requestId}] %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}
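
To index events on the log's own time instead of ingest time, the captured timestamp can be handed to a date filter. A minimal sketch, assuming the logTimestamp field name used in the pattern above:

date {
  match => ["logTimestamp", "dd-MM-yyyy HH:mm:ss.SSS"]
  target => "@timestamp"
}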


Example Logstash file with grok patterns for parsing
########################################

sh-4.2$ cat /usr/share/logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}

## Add your filters / logstash plugins configuration here
## Rest API Log with requestId
# 12-06-2020 10:10:29.906 [wisilica-api-5bd9d99b8-tmc64-6cf4a2dc-bbbd-446b-8e44-b65e3b9bbfd0] DEBUG 1 [http-nio-8080-exec-4] c.w.w.commons.config.interceptor.CommonInterceptor [afterCompletion-54] : Response sent: org.springframework.security.web.header.HeaderWriterFilter$HeaderWriterResponse@6672779
##############################
filter {
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{DATA:requestId}] %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}"] }
  }
## Rest API Log with requestId null value
# 16-06-2020 04:14:02.971 [] INFO 1 [hive-pool connection adder] o.a.curator.framework.imps.CuratorFrameworkImpl [start-224] : Starting
#########################################
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{DATA:requestId}]  %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}"] }
  }
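## grok drops empty captures by default, so events with an empty [] request id will
## simply have no requestId field. To keep the field as an empty string instead, the
## keep_empty_captures option can be added (a sketch, not in the original file):
#  grok {
#    keep_empty_captures => true
#    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{DATA:requestId}]  %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}"] }
#  }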
## Java Services
# Archiver
# 08-06-2020 12:43:52.441 INFO 7630 [hive-pool connection adder-EventThread] o.a.curator.framework.state.ConnectionStateManager [postState-228] : State change: CONNECTED
##########
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT})  %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}"] }
  }

# Cache Initializer / Notification Engine
#########################################
#log.file.path /tmp/cacheInitialize/wiseconnect-cacheInitialize.log
#log.file.path  /tmp/notification/wiseconnect-notifications.log
# 08-06-2020 12:24:43.757 DEBUG 28268 [scheduling-1] org.hibernate.engine.jdbc.spi.SqlStatementLogger [logStatement-103] : select alertgroup0_.id as col_0_0_ from tbl_alert_rule_group alertgroup0_ where alertgroup0_.root_organisation_id=? and alertgroup0_.status_id=? and (alertgroup0_.is_all_tags<>? or alertgroup0_.tag_id is not null)

  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}"] }
  }


#CPP Services
############
##rtlsmaster.log
# 16-06-2020 04:07:25.109 debug 12862 12935 cb.cpp 99 MCB,DELIVERED,16791
################
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) %{LOGLEVEL:logLevel} %{NUMBER:processId} %{NUMBER:threadName} %{JAVACLASS:className} %{NUMBER:origin} %{GREEDYDATA:messagebody}"] }
  }

##triangulator.console.log
##########################
#16-06-2020 04:05:55.021 info 14244 14325 appinstance.cpp 255 DELAY,REDIS,GET,0,3
#4.11.1
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT})    %{LOGLEVEL:logLevel} %{NUMBER:processId} %{NUMBER:threadName}           %{JAVACLASS:className} %{NUMBER:origin} %{GREEDYDATA:messagebody}"] }
  }
#4.11.2
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT})    %{LOGLEVEL:logLevel} %{NUMBER:processId} %{NUMBER:threadName}           %{JAVACLASS:className}  %{NUMBER:origin} %{GREEDYDATA:messagebody}"] }
  }
#3.9.2
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT})   %{LOGLEVEL:logLevel} %{NUMBER:processId} %{NUMBER:threadName}         %{JAVACLASS:className}  %{NUMBER:origin} %{GREEDYDATA:messagebody}"] }
  }
#3.11.1
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT})   %{LOGLEVEL:logLevel} %{NUMBER:processId} %{NUMBER:threadName}           %{JAVACLASS:className} %{NUMBER:origin} %{GREEDYDATA:messagebody}"] }
  }
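## The four version-specific patterns above differ only in the number of literal
## spaces between fields; a single pattern using \s+ should cover all of them
## (a sketch, untested against each version's exact output):
#  grok {
#    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT})\s+%{LOGLEVEL:logLevel}\s+%{NUMBER:processId}\s+%{NUMBER:threadName}\s+%{JAVACLASS:className}\s+%{NUMBER:origin}\s+%{GREEDYDATA:messagebody}"] }
#  }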
#Backend Services
#
  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{LOGLEVEL:logLevel}].*\[%{DATA:threadName}].*(?<className>com.wisilica.[^.]*)\$-(?<origin>[^.]*)] -%{GREEDYDATA:messagebody}"] }
  }

#  mutate {
#    split => { "classOrigin" => "$-" }
#    add_field => {
#      "className" => "%{[classOrigin][0]}"
#      "origin"    => "%{[classOrigin][1]}"
#    }
#  }

#Controller
###########

  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND},%{NONNEGINT}) (?<logInfo>%{WORD} %{WORD}) %{GREEDYDATA:messagebody}"] }
  }
##Spark Container level log
#########################
#log.file.path: /hadoop/yarn/log/application_1591083059548_0002/container_e24_1591083059548_0002_02_000001/application.log

  grok {
    match => { "message" => ["(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{LOGLEVEL:logLevel}].*\[%{DATA:threadName}] .*(?<className>com.wisilica.[^.]*)\$.-(?<origin>[^.]*)] -%{GREEDYDATA:messagebody}"] }
  }

}
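## Note (a sketch, not part of the original file): each stacked grok above runs on
## every event, and every grok that does not match adds a _grokparsefailure tag.
## A common alternative is one grok listing several patterns; matching stops at the
## first hit because break_on_match defaults to true. E.g. for the two Rest API variants:
#  grok {
#    match => { "message" => [
#      "(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{DATA:requestId}] %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}",
#      "(?<logTimestamp>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND}.%{NONNEGINT}) *\[%{DATA:requestId}]  %{LOGLEVEL:logLevel} %{NUMBER:processId} *\[%{DATA:threadName}] %{JAVACLASS:className} *\[%{DATA:origin}] :%{GREEDYDATA:messagebody}"
#    ] }
#  }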



output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
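
The edited pipeline can be syntax-checked before restarting Logstash; in the stock Logstash container the binary lives under /usr/share/logstash/bin:

sh-4.2$ /usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit

While tuning patterns, temporarily swapping the elasticsearch output for stdout { codec => rubydebug } prints every parsed event with its extracted fields.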
