Assignee: (none)
Status: New
Target Version: (none)
Last Update: 2016-09-28 07:58:01 +00:00 (in Redmine)
Backport?: Not yet backported
Include in Changelog: 1
Hi,
While parsing icinga2.log with Logstash or check_logfiles works, an extra built-in check similar to `icinga` or `cluster` that reports issues such as connections from unconfigured endpoints, or commands received while `accept_commands = false` is set, would be a benefit because it would be easier to set up than log parsing.
I think separate extra checks would be a good idea: `cluster` and `icinga` are too important for the stable operation of Icinga 2 to be changed, while the new checks are only relevant for some users.
Background:
A customer of mine deploys Icinga 2 agents automatically, and sometimes it takes a while until the monitoring admins are informed about new agents. They get frequent log events about the new agents, but Icinga 2 does not accept checks from them until the corresponding endpoints are configured. One of the new checks could inform the monitoring admins that there are new agents that are still missing configuration on the masters.
During automatic deployment, errors in the configuration management can lead to misconfigurations such as `accept_commands = false` while the users actually want to execute commands from the masters. This would be obvious when using the command execution bridge, but it might not be so obvious for satellites or agents with a local scheduler, where commands are only used for rescheduling checks. A check that shows that an Icinga 2 node received a command but did not execute it could address this issue.
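For illustration, such a misconfiguration would sit in the agent's `api` feature. A minimal sketch (the file path and surrounding attributes are the usual defaults, not taken from this report):

```
/* /etc/icinga2/features-available/api.conf on the agent */
object ApiListener "api" {
  accept_config = true
  accept_commands = false  // commands sent from the master are rejected,
                           // so rescheduling requests quietly have no effect
}
```

With `accept_commands = false`, the rejection only shows up in the agent's local log, which is exactly why a check reporting "command received but not executed" would help here.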
If implementing extra checks is too complicated or not matching the benefit, being able to check these states via API would be fine for me.
Cheers,
Thomas
This boils down to keeping track of possibly unwanted connections and clients, which is not a good strategy imho. If we get a connection we do not trust, we need to bail out immediately and save the resources for other, more important tasks. If such a "connection tracker" were implemented, it could overload the core process - a thing we need to prevent at all cost.
Since there are methods with Logstash and the Elastic Stack to parse the Icinga 2 application log and aggregate those errors into a "fixable" task for other departments, I'll close here.
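A sketch of that log-parsing approach (the grok pattern, field names, and tag are illustrative and assume the default icinga2.log line format, e.g. `[2016-09-28 07:58:01 +0200] warning/ApiListener: ...`):

```
filter {
  # split an icinga2.log line into timestamp, severity, facility and message
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:timestamp} %{ISO8601_TIMEZONE:tz}\] %{WORD:severity}/%{NOTSPACE:facility}: %{GREEDYDATA:msg}"
    }
  }
  # flag the events this issue is about, e.g. cluster/endpoint warnings
  if [facility] == "ApiListener" and [severity] == "warning" {
    mutate { add_tag => [ "icinga2_cluster_warning" ] }
  }
}
```

Tagged events can then be aggregated and alerted on downstream (e.g. in Elasticsearch/Kibana), giving the monitoring admins the "new agent without endpoint config" signal without touching the core process.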
This issue has been migrated from Redmine: https://dev.icinga.com/issues/12815
Created by twidhalm on 2016-09-28 07:58:01 +00:00