I am looking to break down some firewall logs - basically to add additional fields so we can create some visualizations in our dashboard.
As an example, this is the kind of message coming in (these messages are redirected, so I suspect that LS will not sort them into the correct fields):
This is what's in the message field:
<165>Original Address=10.242.70.4 Apr 06 2015 09:24:33 EXT-11 : %ASA-5-106015: Deny TCP (no connection) from <SRCIP>/<PORT> to <DESIP>/<PORT> flags ACK on interface outside
I am trying to get filters in place so that this message can be split into related fields, i.e. Src IP/Port and Dst IP/Port, and also add a tag for Deny. Looking through some examples, I am having a hard time figuring out how to construct the filter. Looking for some guidance.
Thanks.
Filter Help
-
- Attack Rabbit
- Posts: 2560
- Joined: Thu Feb 12, 2015 12:40 pm
Re: Filter Help
No problem - can you give us a few more example logs so that we can be sure to have the filter syntax right?
I'll walk you through how to set this up in my next post after you get those additional logs to me - please let me know if you have any questions. Thanks!
-
- Posts: 146
- Joined: Mon Oct 27, 2014 10:08 pm
- Location: Canada
Re: Filter Help
Much appreciated for the assistance. Here are some additional log samples:
Code: Select all
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986550 for inside:142.52.197.162/41905 to outside:10.242.74.148/53 duration 0:00:00 bytes 126
<167>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-7-609002: Teardown local-host outside:10.242.74.148 duration 0:00:00
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986546 for inside:142.52.197.162/60400 to outside:10.242.74.148/53 duration 0:00:00 bytes 125
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986545 for inside:142.52.197.162/34654 to outside:10.242.74.148/53 duration 0:00:00 bytes 125
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302015: Built inbound UDP connection 519986521 for inside:142.52.197.162/45440 (142.52.197.162/45440) to outside:10.242.74.148/53 (10.242.74.148/53)
<164>Original Address=10.242.23.196 Apr 06 2015 13:17:59 APP-1619 : %ASA-4-106023: Deny udp src inside:10.172.124.165/137 dst outside:142.52.245.190/137 by access-group "inside-in" [0x46668482, 0x0]
<164>Original Address=10.242.22.4 Apr 06 2015 13:17:59 EXT-31 : %ASA-4-106023: Deny tcp src outside:61.160.224.130/42467 dst inside:142.52.197.22/7001 by access-group "outside-in" [0x415d9b30, 0x0]
<164>Original Address=10.242.23.196 Apr 06 2015 13:17:59 APP-1619 : %ASA-4-106023: Deny udp src inside:10.172.121.230/137 dst outside:142.52.245.190/137 by access-group "inside-in" [0x46668482, 0x0]
<164>Original Address=10.242.22.4 Apr 06 2015 13:17:59 EXT-31 : %ASA-4-106023: Deny tcp src inside:10.173.144.226/62659 dst outside:173.222.117.94/80 by access-group "inside-in" [0x46668482, 0x0]
-
- Attack Rabbit
- Posts: 2560
- Joined: Thu Feb 12, 2015 12:40 pm
Re: Filter Help
Thank you.
So, we have the following logs as pieces of information:
Code: Select all
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986550 for inside:142.52.197.162/41905 to outside:10.242.74.148/53 duration 0:00:00 bytes 126
<167>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-7-609002: Teardown local-host outside:10.242.74.148 duration 0:00:00
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986546 for inside:142.52.197.162/60400 to outside:10.242.74.148/53 duration 0:00:00 bytes 125
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986545 for inside:142.52.197.162/34654 to outside:10.242.74.148/53 duration 0:00:00 bytes 125
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302015: Built inbound UDP connection 519986521 for inside:142.52.197.162/45440 (142.52.197.162/45440) to outside:10.242.74.148/53 (10.242.74.148/53)
<164>Original Address=10.242.23.196 Apr 06 2015 13:17:59 APP-1619 : %ASA-4-106023: Deny udp src inside:10.172.124.165/137 dst outside:142.52.245.190/137 by access-group "inside-in" [0x46668482, 0x0]
<164>Original Address=10.242.22.4 Apr 06 2015 13:17:59 EXT-31 : %ASA-4-106023: Deny tcp src outside:61.160.224.130/42467 dst inside:142.52.197.22/7001 by access-group "outside-in" [0x415d9b30, 0x0]
<164>Original Address=10.242.23.196 Apr 06 2015 13:17:59 APP-1619 : %ASA-4-106023: Deny udp src inside:10.172.121.230/137 dst outside:142.52.245.190/137 by access-group "inside-in" [0x46668482, 0x0]
<164>Original Address=10.242.22.4 Apr 06 2015 13:17:59 EXT-31 : %ASA-4-106023: Deny tcp src inside:10.173.144.226/62659 dst outside:173.222.117.94/80 by access-group "inside-in" [0x46668482, 0x0]
We need to generate a filter that will match them all. Are you familiar with Regular Expressions? They don't take very long to learn, and they are tremendously helpful when writing your own filters. There are many free tutorials online if you are interested.
First, let's quickly cover how Logstash processes information.
1. Logstash sets up 'inputs', which listen on ports. Some default input types are 'tcp', 'udp', and 'syslog'. While tcp and udp take in logs and pass them straight to step 2, the syslog input will parse the log information before sending it on. You will likely want to use a 'tcp' or 'udp' input here. Read more on inputs here: http://logstash.net/docs/1.4.2/
2. Logstash parses logs with 'filters' that you define. Before filters are applied, your logs are likely unstructured and have no 'fields' attached to them. Filters define all of the fields for the data you're taking in, making it very easy to organize in Elasticsearch.
3. Logstash outputs the data to Elasticsearch, which stores it in a database and allows you to view all of those beautiful graphs.
We will be using the 'grok' filter - this is a widely used filter. Feel free to read more about it here: http://logstash.net/docs/1.4.2/filters/grok I will be using the following utility to build the grok pattern: http://grokdebug.herokuapp.com/
Please note that grok has many built-in patterns - regex patterns that are pre-defined so that we do not have to write them ourselves. The full list is here: https://github.com/elastic/logstash/blo ... k-patterns
Let's start with your first log line:
<166>Original Address=10.242.23.172 Apr 06 2015 13:16:03 APP-1616 : %ASA-6-302016: Teardown UDP connection 519986550 for inside:142.52.197.162/41905 to outside:10.242.74.148/53 duration 0:00:00 bytes 126
The question to ask is what information is relevant to you. Since I don't know this, I will define this filter by my own standards - you are of course free to modify my work.
The following is what I came up with for a pattern:
Code: Select all
^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information} duration %{TIME:duration} bytes %{NUMBER:bytes:int}$
A more basic pattern (that matches all of your logs):
Code: Select all
^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information}$
In this pattern, I have tagged what is important as fields. Note that anything in CAPITALS is simply a pre-defined regex pattern that Logstash provides. After the colon ':' comes the field name that I want the match tagged as - this is how the information will show up in your web GUI. Does that make sense? Any integer that I want to graph needs the ':int' suffix, which is why some pieces look like:
Code: Select all
NUMBER:data1:int
Please adjust the above pattern to suit your needs. In a filter, it would look something like this:
Code: Select all
if [program] == 'ciscoasalog' {
    grok {
        match => [ 'message', '^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information} duration %{TIME:duration} bytes %{NUMBER:bytes:int}$' ]
    }
}
Please note that this is not a full solution - my hope is to help you understand filters better so that you can take it from here.
Let me know if you have questions - I am more than happy to answer them.
You may also find the following useful:
https://github.com/elastic/logstash/issues/1369
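To tie this back to the original goal (splitting out Src IP/Port and Dst IP/Port and tagging denies), here is one possible pattern for the %ASA-4-106023 'Deny ... by access-group' samples above. This is only a sketch: the field names (action, protocol, src_ip, src_port, dst_ip, dst_port, access_group, and so on) are my own choices, and this pattern will only match the 106023-style lines - the other message types would need their own patterns:
Code: Select all
if [type] == 'cisco_asa_logs' {
    grok {
        match => [ 'message', '^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %ASA-%{NUMBER:severity:int}-%{NUMBER:msg_id}: %{WORD:action} %{WORD:protocol} src %{DATA:src_interface}:%{IP:src_ip}/%{NUMBER:src_port:int} dst %{DATA:dst_interface}:%{IP:dst_ip}/%{NUMBER:dst_port:int} by access-group "%{DATA:access_group}"' ]
    }
    # Tag denies so the dashboard can filter on them
    if [action] == 'Deny' {
        mutate { add_tag => [ 'deny' ] }
    }
}
With this, a line like the "Deny tcp src outside:61.160.224.130/42467 dst inside:142.52.197.22/7001" sample would come out with src_ip, src_port, dst_ip, and dst_port as separate fields plus a 'deny' tag.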
-
- Posts: 146
- Joined: Mon Oct 27, 2014 10:08 pm
- Location: Canada
Re: Filter Help
Thanks for the comprehensive reply. I will go through this and test out the filters. Will report back once I get this set up. Very helpful as always!
-
- Attack Rabbit
- Posts: 2560
- Joined: Thu Feb 12, 2015 12:40 pm
Re: Filter Help
Sounds good - thanks!
-
- Posts: 146
- Joined: Mon Oct 27, 2014 10:08 pm
- Location: Canada
Re: Filter Help
The debugger and the list of grok patterns are a great help. As a test, I am using your sample to see what results I would get in LS. I also added an "add_tag" line just to see if additional tags would show up.
So once the filter is applied, is it applied instantly? Will I be able to see the related fields/data in the dashboard?
-
- DevOps Engineer
- Posts: 19396
- Joined: Tue Nov 15, 2011 3:11 pm
- Location: Nagios Enterprises
Re: Filter Help
OptimusB wrote:
The debugger and the list of grok patterns are a great help. As a test, I am using your sample to see what results I would get in LS. I also added an "add_tag" line just to see if additional tags would show up.
So once the filter is applied, is it applied instantly? Will I be able to see the related fields/data in the dashboard?
It will only affect messages received AFTER applying the filter.
-
- Posts: 146
- Joined: Mon Oct 27, 2014 10:08 pm
- Location: Canada
Re: Filter Help
Thanks. Currently I have the filter in place, but it doesn't seem to have any effect - I am not seeing any additional fields. After reviewing additional documentation: do I need to use "add_field" before the fields are shown in the dashboard?
-
- Attack Rabbit
- Posts: 2560
- Joined: Thu Feb 12, 2015 12:40 pm
Re: Filter Help
No - the fields are generated as the data is parsed. A field name is the second piece after the colon delimiter ':' in:
Code: Select all
REGEXPATTERN:fieldname
If you used the filter I have described above, your filters are good to go. Your inputs may need a little work.
Note that the example filter states:
Code: Select all
if [program] == 'ciscoasalog' {
The question is how to make 'program' equal 'ciscoasalog'. What is 'program'? It's just another field. Let's try the following:
Define a new input:
Code: Select all
tcp {
    type => 'cisco_asa_logs'
    port => 9000
}
This assumes that all Cisco ASA logs are going to be sent to tcp port 9000 on NLS. You could also use 'udp' here. Note the 'type' field being defined above - 'type' will be added to all logs.
You can then define your filter as follows:
Code: Select all
if [type] == 'cisco_asa_logs' {
    grok {
        match => [ 'message', '^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information} duration %{TIME:duration} bytes %{NUMBER:bytes:int}$' ]
    }
}
Hope that helps. I should have put that in the original description.
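Putting the input and filter pieces together, a minimal end-to-end configuration might look like the sketch below. The port number (9000), the 'cisco_asa_logs' type, and the bare elasticsearch output (which uses the defaults) are assumptions carried over from the examples above - adjust them to your environment:
Code: Select all
input {
    # ASA devices (or the relay) send their syslog stream to tcp/9000
    tcp {
        type => 'cisco_asa_logs'
        port => 9000
    }
}
filter {
    # Only parse events that came in through the ASA input
    if [type] == 'cisco_asa_logs' {
        grok {
            match => [ 'message', '^\<%{NUMBER:data1:int}\>Original Address=%{IP:devadd} %{MONTH:month} %{MONTHDAY:day} %{YEAR:year} %{TIME:time} %{GREEDYDATA:devicestuff}: %{GREEDYDATA:information}$' ]
        }
    }
}
output {
    # Default Elasticsearch output; the parsed fields become searchable in the dashboard
    elasticsearch { }
}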