Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
-
- Support Tech
- Posts: 5045
- Joined: Tue Feb 07, 2017 11:26 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
Can you PM me the profiles? Do you know which index causes it to crash again? I'd run through the same thing, but hold off on reopening the problem index if possible, at least over the weekend.
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
I don't think it's an index that's causing it, because yesterday I ended with the oldest indexes and today I ended with the week after the oldest indexes.
I'll send you the profiles.
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
This morning I'm able to access the console, but I had a snapshot stall out last night, and even though logstash says it's running and active, all three nodes are barely collecting any logs.
Also, the environment is very, very unresponsive. More than usual.
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
Also, the graphs on the homepage aren't working.
-
- Support Tech
- Posts: 5045
- Joined: Tue Feb 07, 2017 11:26 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
Are the graphs throwing an error? Can you provide screenshots of these? I'd also like to get a fresh profile from the machines to see the state they're in now.
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
No errors, just blank.
The 'disk usage' graph just started working again, but as you can see the 'Logs Per 15 Minutes' graph is blank.
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
My PMs are not leaving the outbox again; I'm not sure why that happens. Can you please use the FTP credentials I sent you on Friday to access the System Profiles? Thank you!
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
I'm also not getting any results in my default dashboard. It's set up to show me all logs in the last 15 minutes, and right now it's completely blank.
-
- Posts: 733
- Joined: Wed Jul 11, 2018 11:37 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
My PMs don't seem to be going through for some reason.
In response to your last PM: I wasn't sure where to add the line you suggested, so I added it to the 'Index' section. I'm not sure if adding that or simply restarting the elasticsearch service did the trick, but the graphs are working again and it appears we are collecting logs.
It looks like my nodes keep trying to take the master role from each other. If there is a way to prevent this I would love to hear it. I think at this point that might be causing the biggest issue in my environment.
-
- Support Tech
- Posts: 5045
- Joined: Tue Feb 07, 2017 11:26 am
Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade
Thanks for the update. I just responded to your PM. NLS by default allows all nodes to become master with this config in /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml:
Code: Select all
...
# Allow this node to be eligible as a master node (enabled by default):
#
# node.master: true
...
You can change this to:
Code: Select all
...
# Allow this node to be eligible as a master node (enabled by default):
#
node.master: false
...
or just add this to the bottom of the file:
Code: Select all
node.master: false
You'll need to do this on each machine in the cluster you wish to exclude from being the master and restart the elasticsearch service.
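As a quick sketch of what that looks like on one node (assuming the service is named elasticsearch and Elasticsearch is listening on the default port 9200; adjust paths and names for your environment):
Code: Select all
# Exclude this node from master election by appending to elasticsearch.yml
echo 'node.master: false' >> /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml

# Restart the service so the change takes effect (assumes the service name is elasticsearch)
service elasticsearch restart

# Check which node currently holds the master role
curl -s 'localhost:9200/_cat/master?v'
Once the excluded nodes rejoin the cluster, _cat/master should report the same node consistently instead of bouncing between them.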