how to customize fields

jamesoptima
Posts: 1
Joined: Fri Jan 22, 2016 4:08 am

how to customize fields

Post by jamesoptima »

I want to extract part of the message field as a new field, but I can't find anywhere to do this in the web UI. I also checked the help documentation, but still couldn't find any information about it. Does anyone have experience with this? Any help would be appreciated.
jolson
Attack Rabbit
Posts: 2560
Joined: Thu Feb 12, 2015 12:40 pm

Re: how to customize fields

Post by jolson »

What you're looking to do is a little complicated, but easy enough once you get the hang of it. You'll have to understand Logstash filters; there are many great tutorials on the internet. Here's an excerpt from one of my forum posts in the customer forum:

So, we have the following log lines to work with:

Code:

<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986550 for inside:142.52.197.162/41905 to outside:10.242.74.148/53 duration 0:00:00 bytes 126

<167>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-7-609002: Teardown local-host outside:10.242.74.148 duration 0:00:00

<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986546 for inside:142.52.197.162/60400 to outside:10.242.74.148/53 duration 0:00:00 bytes 125

<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986545 for inside:142.52.197.162/34654 to outside:10.242.74.148/53 duration 0:00:00 bytes 125

<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302015: Built inbound UDP connection 519986521 for inside:142.52.197.162/45440 (142.52.197.162/45440) to outside:10.242.74.148/53 (10.242.74.148/53)

<164>Original Address=10.242.23.196 Apr 06 2015 13:17:59 APP-1619 : %ASA-4-106023: Deny udp src inside:10.172.124.165/137 dst outside:142.52.245.190/137 by access-group "inside-in" [0x46668482, 0x0]

<164>Original Address=10.242.22.4 Apr 06 2015 13:17:59 EXT-31 : %ASA-4-106023: Deny tcp src outside:61.160.224.130/42467 dst inside:142.52.197.22/7001 by access-group "outside-in" [0x415d9b30, 0x0]

<164>Original Address=10.242.23.196 Apr 06 2015 13:17:59 APP-1619 : %ASA-4-106023: Deny udp src inside:10.172.121.230/137 dst outside:142.52.245.190/137 by access-group "inside-in" [0x46668482, 0x0]

<164>Original Address=10.242.22.4 Apr 06 2015 13:17:59 EXT-31 : %ASA-4-106023: Deny tcp src inside:10.173.144.226/62659 dst outside:173.222.117.94/80 by access-group "inside-in" [0x46668482, 0x0]
We need to generate a filter that will match them all. Are you familiar with regular expressions? They don't take very long to learn, and they are tremendously helpful when building your own filters. There are many free tutorials online if you are interested.
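
For a quick illustration (my own hedged example, not from the original post), a plain regular expression with capture groups could already pull the interesting numbers out of one of the teardown lines above:

Code:

Teardown UDP connection (\d+) for inside:([\d.]+)/(\d+) to outside:([\d.]+)/(\d+) duration (\d+:\d+:\d+) bytes (\d+)

Grok patterns, covered below, are essentially named, reusable versions of exactly these capture groups.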

First, let's quickly cover how Logstash processes information.

1. Logstash sets up 'inputs' which listen on ports. Some default input types are 'tcp', 'udp', and 'syslog'. While tcp and udp take in logs and pass the logs straight to step 2, the syslog input will parse log information before sending it to step 2. You will likely want to use a 'tcp' or 'udp' input here. Read more on inputs here: http://logstash.net/docs/1.4.2/

2. Logstash parses logs with 'filters' that you define. Before filters are applied, your logs are likely unstructured and have no 'fields' applied to them. Filters will define all of the fields for the data you're taking in, making it very easy to organize in Elasticsearch.

3. Logstash outputs the data to Elasticsearch, which will store it in a database and allow you to view all of those beautiful graphs.
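
Putting those three steps together, a minimal end-to-end configuration might look something like the sketch below. This is a hedged outline, not a drop-in config: the type name, port, and the placeholder grok pattern are my own examples, and the real pattern is developed later in this post.

Code:

input {
    # listen for raw log lines on TCP port 2056 (port is an example)
    tcp {
        type => 'ciscoasalog'
        port => 2056
    }
}

filter {
    # parse unstructured lines into named fields; only events tagged
    # with our type are touched
    if [type] == 'ciscoasalog' {
        grok {
            match => [ 'message', '%{GREEDYDATA:information}' ]
        }
    }
}

output {
    # hand the structured events to Elasticsearch for storage and graphing
    elasticsearch {
        host => 'localhost'
    }
}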

We will be using the 'grok' filter, which is very widely used. Feel free to read more about it here: http://logstash.net/docs/1.4.2/filters/grok I will be using the following utility to build this grok pattern: http://grokdebug.herokuapp.com/

Please note that grok has many built-in patterns: pre-defined regexes that we do not have to write ourselves. The full list is here: https://github.com/elastic/logstash/blo ... k-patterns
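
To give a sense of what those built-ins look like, here are two entries roughly as they appear in the grok patterns file (quoted from memory, so treat the exact definitions as approximate):

Code:

INT (?:[+-]?(?:[0-9]+))
IP (?:%{IPV6}|%{IPV4})

In other words, %{IP:devadd} is just shorthand for an IP-matching regex whose match gets stored in a field named devadd.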

Let's start with your first log line:
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986550 for inside:142.52.197.162/41905 to outside:10.242.74.148/53 duration 0:00:00 bytes 126

The question to ask is what information is relevant to you. Since I don't know this, I will define this filter by my standards - you are of course free to modify my work.

The following is what I came up with for a pattern:

Code:

^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information} duration %{TIME:duration} bytes %{NUMBER:bytes:int}$
A more basic pattern (that matches all of your logs):

Code:

^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information}$
In this pattern, I have tagged the important pieces as fields. Note that anything in CAPITALS is simply a pre-defined regex pattern that Logstash provides. After the colon comes the field name I want the match tagged as - this is how the information will show up in your web GUI. Does that make sense? Any integer that I want to graph needs the :int suffix, which is why some captures look like:
NUMBER:data1:int
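
To make that concrete, running the first log line through the full pattern in the grok debugger should produce fields roughly like the following (values read off that line; the debugger's exact output format may differ):

Code:

data1:       166
devadd:      10.242.23.172
month:       Apr
day:         06
year:        2015
time:        13:16:03
devicestuff: APP-1616 : %ASA-6-302016
information: Teardown UDP connection 519986550 for inside:142.52.197.162/41905 to outside:10.242.74.148/53
duration:    0:00:00
bytes:       126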

Please adapt the above pattern to suit your needs. In a filter, it would look something like this:

Code:

# only events whose program field matches are run through this grok
if [program] == 'ciscoasalog' {
    grok {
        match => [ 'message', '^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information} duration %{TIME:duration} bytes %{NUMBER:bytes:int}$' ]
    }
}
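As an optional follow-up (my own hedged sketch, not part of the original answer), you could stitch the separate date captures back into a proper event timestamp with the mutate and date filters:

Code:

# assumes the month/day/year/time fields captured by the grok above
mutate {
    add_field => { 'log_timestamp' => '%{month} %{day} %{year} %{time}' }
}
date {
    # e.g. "Apr 06 2015 13:16:03"
    match => [ 'log_timestamp', 'MMM dd yyyy HH:mm:ss' ]
}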
Please note that this is not a full solution - my hope is to help you understand filters better so that you can take it from here.

Let me know if you have questions - I am more than happy to answer them.

You may also find the following useful:
https://github.com/elastic/logstash/issues/1369
txavina
Posts: 1
Joined: Wed Apr 13, 2016 7:40 am

Re: how to customize fields

Post by txavina »

Really good answer, jolson!

There's one thing that I don't understand: where do you define 'program'?

As I understand it, when you define a filter, the first thing to check is who is sending the log, and you check this with this code:

Code:

if [program] == 'ciscoasalog' 
But where is 'program' defined? In the inputs?

Thanks,
hsmith
Agent Smith
Posts: 3539
Joined: Thu Jul 30, 2015 11:09 am
Location: 127.0.0.1

Re: how to customize fields

Post by hsmith »

The syslog input is doing that for you.

Code:

syslog {
    type => 'syslog'
    port => 5544
}
is applying a filter that is similar to this:

Code: Select all

if [type] == "syslog" {
    grok {
        match => { "message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    syslog_pri { }
}
Does that make sense?
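
One detail worth adding (my note, not part of the original reply): the syslog_pri filter takes the <166>-style priority captured above and splits it into facility and severity, since a syslog priority is facility * 8 + severity. For the <166> lines earlier in this thread:

Code:

166 = 20 * 8 + 6   ->  facility 20 (local4), severity 6 (informational)

which the filter exposes as fields such as syslog_facility and syslog_severity.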