Monitoring with Logstash, Kibana and Grafana


Marek Wiewiórski

on 7 July 2014


Transcript of Monitoring with Logstash, Kibana and Grafana

Enter Kibana - features
Based on dashboard concept
Integrates with Logstash, Apache Flume and FluentD
Gives structured search via the Lucene query syntax, e.g.:
(html OR css) AND bytes:[100 TO 500]
Search format
Various options for visualization:
scatter, line plots
bar, pie charts
map available for events with localization data (GeoIP)
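The structured search above is Elasticsearch's Lucene query-string syntax (note that the range keyword TO must be uppercase). A few illustrative queries — the field names other than bytes are assumptions for illustration:

```
(html OR css) AND bytes:[100 TO 500]    boolean operators plus a numeric range
response:404 AND client:10.0.4.1        exact field matches
query:*api* AND NOT response:200        wildcards and negation
```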
Log analysis with Kibana - flow
Monitoring metrics with Grafana - flow
Enter Grafana
You can select a region with the mouse to zoom into a time range
Support for annotations on charts (e.g. deployment events)
Every chart can be resized separately
You can view a selected graph in fullscreen
Chart styling
lines, bars, points
staircase, line
max, min, avg
Visual Graphite expression editor
Dashboards can be exported to/imported from JSON for automated deployment
Enter Logstash
Tool for extracting, parsing and transforming logs from files and sockets in various formats
Input: text data
Output: event record in machine friendly form
Flexible parser with templating and regular expressions
Logstash event format
% bin/logstash -e 'output { stdout { codec => rubydebug } }'
hello world
{
       "message" => "hello world",
      "@version" => "1",
    "@timestamp" => "2014-04-22T23:03:14.111Z",
          "type" => "stdin",
          "host" => "Macintosh.local"
}
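The event above can be sketched as a plain dictionary in Python — not Logstash's implementation, just the same shape, with illustrative field values:

```python
from datetime import datetime, timezone

def make_event(message, host, event_type="stdin"):
    """Build a dict with the same shape as a Logstash event."""
    return {
        "message": message,
        "@version": "1",
        # ISO 8601 in UTC, millisecond precision, as in the rubydebug output
        "@timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z",
        "type": event_type,
        "host": host,
    }

event = make_event("hello world", "Macintosh.local")
```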
Parse syslog event:
<%{POSINT:syslog_pri}> log_nginx:%{IP:client}:%{POSINT:port} %{URIHOST}%{URIPATHPARAM:query} %{INT:response} %{INT:latency}
Parse http request from Apache access.log:
%{IP:client} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:response} %{INT:bytes} %{QS:http_user_agent}
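Grok patterns are named regular expressions: each %{PATTERN:field} compiles to a regex named group. A rough Python sketch of the Apache access-log pattern above — the simplified sub-patterns are assumptions, real grok's IP and HTTPDATE are stricter:

```python
import re

# Simplified stand-ins for the grok patterns used above.
GROK = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "HTTPDATE": r"[^\]]+",
    "QS": r'"[^"]*"',   # quoted string; the capture keeps the quotes
    "INT": r"-?\d+",
}

ACCESS_LOG = re.compile(
    rf'(?P<client>{GROK["IP"]}) \[(?P<time_local>{GROK["HTTPDATE"]})\] '
    rf'(?P<request>{GROK["QS"]}) (?P<response>{GROK["INT"]}) '
    rf'(?P<bytes>{GROK["INT"]}) (?P<http_user_agent>{GROK["QS"]})'
)

line = '1.2.3.4 [18/Jun/2014:15:41:54 +0000] "GET /api/query HTTP/1.1" 200 512 "curl/7.30"'
fields = ACCESS_LOG.match(line).groupdict()
print(fields["response"])   # → 200
```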
Multiple inputs and outputs
tcp, udp
Grafana - Tour
Logstash - drawbacks
Runs on JVM - big memory footprint (around 400MB)
Possible solution: logstash-forwarder (a.k.a. Lumberjack), implemented in Go for small memory footprint
Slow startup
Changes in newer versions can sometimes break older configurations
Grafana - drawbacks
Charts are rendered on the client side:
large amounts of data must be transferred, and drawing can be slow
Readability of charts can be poor when many points have to be rendered
Real life example
{
           "message" => "<150> log_nginx: af.opera.com/api/query?client=Opera 200 4\n",
          "@version" => "1",
        "@timestamp" => "2014-06-18T15:41:54.290Z",
              "host" => "",
        "syslog_pri" => "150",
            "client" => "",
              "port" => "1080",
             "query" => "/api/query?client=Opera",
          "response" => "200",
           "latency" => "4",
             "geoip" => {
                    "ip" => "",
         "country_code2" => "RO",
         "country_code3" => "ROU",
          "country_name" => "Romania",
        "continent_code" => "EU",
              "latitude" => 46.0,
             "longitude" => 25.0,
              "timezone" => "Europe/Bucharest",
              "location" => [
            [0] 25.0,
            [1] 46.0
        ]
    }
}

Configure parser expression for syslog events:
<%{POSINT:syslog_pri}> log_nginx:%{IP:client}:%{POSINT:port} %{URIHOST}%{URIPATHPARAM:query} %{INT:response} %{INT:latency}

Simulate a syslog event arriving on UDP port 514, where Logstash can be set up to listen for data:
$ echo '<150> log_nginx: af.opera.com/api/query?client=Opera 200 4' | nc -u localhost 514
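The same simulation can be done from Python with a plain UDP socket — the host, port and payload mirror the nc command; nothing here is Logstash-specific:

```python
import socket

# The same datagram the nc command sends, as bytes.
payload = b"<150> log_nginx: af.opera.com/api/query?client=Opera 200 4\n"

def send_syslog(data: bytes, host: str = "localhost", port: int = 514) -> int:
    """Fire one UDP datagram at host:port; returns the number of bytes sent."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(data, (host, port))
```

Sending needs no special privileges; it is the listening side (Logstash binding port 514, below 1024) that does.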

Kibana Backend - Elasticsearch
From creators of Logstash & Kibana
Written in Java
Based on Lucene
Schema-less JSON documents
Time machine - gateways
database/table/row/column = index/type/document/property
Using Elasticsearch for searchable logs
Every day has a separate index
facilitates sharding
URL to show all stored indices:
URL to show all events from Warsaw:
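The slide's URLs did not survive the transcript. As a hedged sketch: Logstash's default scheme names one index per day, logstash-YYYY.MM.DD, so per-day URLs can be built like this — the host and the city field in the comments are assumptions:

```python
from datetime import date

def daily_index(day: date) -> str:
    """Logstash's default one-index-per-day naming scheme."""
    return f"logstash-{day:%Y.%m.%d}"

# Hypothetical URLs in the spirit of the slide:
#   all stored indices: http://localhost:9200/_cat/indices?v
#   events from Warsaw: http://localhost:9200/<index>/_search?q=geoip.city_name:Warsaw
index = daily_index(date(2014, 6, 18))
print(index)   # → logstash-2014.06.18
```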
Kibana - a quick look
Kibana - filtering
Kibana - trends
Sample Logstash configuration
input {
  udp {
    port => 514
  }
}

filter {
  grok {
    match => ["message", "<%{POSINT:syslog_pri}> log_nginx:%{IP:client}:%{POSINT:port} %{URIHOST}%{URIPATHPARAM:query} %{INT:response} %{INT:latency}"]
  }
  geoip {
    source => "client"
    database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
  }
  if "location" not in [geoip] {
    drop { }
  }
}

output {
  elasticsearch {
    host => ""
    port => 9200
    protocol => "http"
  }
  statsd {
    port => 8125
    increment => ["requests.continent_origin.%{[geoip][continent_code]}"]
    namespace => ""
    sender => ""
  }
}
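The statsd output in the configuration above speaks StatsD's simple UDP text protocol: a counter increment is just name:value|c. A minimal sketch — the metric name mirrors the config, and the namespace/sender prefixes are left out:

```python
def statsd_counter(name: str, value: int = 1) -> bytes:
    """Encode a StatsD counter increment in the "name:value|c" wire format."""
    return f"{name}:{value}|c".encode()

packet = statsd_counter("requests.continent_origin.EU")
# To actually send it (port mirrors the config):
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("localhost", 8125))
print(packet)   # → b'requests.continent_origin.EU:1|c'
```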

Automatic configuration with puppet
include logstash
include elasticsearch
include kibana
include grafana

logstash::config {
  content => template('autofill/logstash/graphs.conf'),
}

kibana::dashboard { 'default':
  source => "puppet:///files/autofill/graphs/kibana/dashboard.json",
}

grafana::dashboard { 'default':
  source => "puppet:///files/autofill/graphs/grafana/dashboard.json",
}

Marek Wiewiórski, Opera Software

Brief history of logging
4000 BC - people learn how to log
642 AD - the biggest archive of logs destroyed
15th century - log format changes
2007 AD - the biggest archive of nothing launched
but it still has to be logged