[dev.icinga.com #7632] Please allow one to disable "repository" #2226

Closed
icinga-migration opened this issue Nov 11, 2014 · 21 comments
Labels
area/distributed Distributed monitoring (master, satellites, clients)

Comments

This issue has been migrated from Redmine: https://dev.icinga.com/issues/7632

Created by tgelf on 2014-11-11 10:15:17 +00:00

Assignee: (none)
Status: New (closed on 2015-01-12 10:24:00 +00:00)
Target Version: (none)
Last Update: 2017-01-14 13:05:53 +00:00 (in Redmine)

Icinga Version: 2.4.1

IMO "repository" should be a feature one could disable on demand. Reasons:

  • While it is a nice feature I currently have no use case for it
  • It pollutes my config directory and therefore conflicts with Puppet Modules "purging" config directories
  • It wastes resources. Didn't measure the impact on large environments, but given the current update frequency it's probably more than "nothing"
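
For illustration, this is the kind of Puppet pattern that conflicts with generated files (a minimal sketch; the resource is illustrative, and the path is where 'node update-config' drops its output):

```
file { '/etc/icinga2/repository.d':
  ensure  => directory,
  recurse => true,
  purge   => true,   # removes any file Puppet does not manage itself
}
```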

I'd really love to use live repository data via the API for a config tool or similar, but I have no need for those files and probably never will. The same goes for the related CLI commands: they are very nice and I like them, but people who don't use them should still be able to get a cleaner system.

Best,
Thomas


Parent Task: #13257

Relations:

Updated by mfriedrich on 2014-11-11 21:30:45 +00:00

Not sure what you mean by "disable on demand". If you don't want to use it, don't run 'node update-config' and don't invoke the CLI commands manually.
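
For reference, these are the commands meant here (a minimal sketch; exact subcommands vary by version):

```
# regenerate config from the collected node inventory
icinga2 node update-config

# inspect the repository via the manual CLI
icinga2 repository host list
```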

Updated by mfriedrich on 2014-11-11 21:30:58 +00:00

  • Status changed from New to Feedback

Updated by tgelf on 2014-11-11 21:42:53 +00:00

dnsmichi wrote:

Not sure what you mean by "disable on demand". If you don't want to use it, don't run 'node update-config' and don't invoke the CLI commands manually.

Icinga seems to dump inventory information to my disk twice a minute. As a tuning measure, and to get rid of useless files, I'd like to be able to handle this feature like every other core feature and enable/disable it on demand (see the comparison below). Btw, I'd really love to see an API version of this feature ;-) Especially once we can ask plugins about "instances" (e.g. available disks...). The current implementation is cute, but in most of my environments it is just useless overhead I'd like to avoid.
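
For comparison, this is how other components can already be toggled (existing commands; nothing comparable exists for the repository sync):

```
# existing feature toggles, applied with a reload
icinga2 feature disable compatlog
icinga2 feature enable api
systemctl reload icinga2
```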

Best,
Thomas

Updated by tgelf on 2014-11-11 22:12:11 +00:00

(corrected the last comment)

Updated by mfriedrich on 2014-11-11 23:14:50 +00:00

Ah, so you mean the integrated API repository sync, not the CLI command. Sorry, I got that wrong.

Updated by mfriedrich on 2015-01-12 10:24:00 +00:00

  • Category set to Cluster
  • Status changed from Feedback to Rejected

Won't happen.

Updated by tgelf on 2016-02-23 11:25:55 +00:00

  • Status changed from Rejected to Feedback
  • Assigned to set to mfriedrich
  • Icinga Version changed from 2 to 2

Hi Michi,

in relation to recent performance issues I'd like to re-open this for discussion. I'm still waiting for the day when I see anyone in a larger environment doing anything useful with this feature, yet I still have no way to disable it. It creates a lot of completely useless cluster messages, and a quick look at its behaviour suggests that it would dump 100 files to disk on my masters in a setup with as few as 3000 nodes.

Please correct me if I'm wrong on this; I haven't had time to investigate further. Looking at what happens on the filesystem and a quick grep through the code led me to the above conclusion.

So, what speaks against allowing one to disable this useless use of system resources?

Cheers,
Thomas

Updated by mfriedrich on 2016-02-24 23:27:45 +00:00

  • Relates set to 10054

Updated by mfriedrich on 2016-02-25 00:31:40 +00:00

  • Status changed from Feedback to New
  • Assigned to deleted mfriedrich
  • Target Version set to Backlog

As discussed offline, we'll think about it.

Updated by mfriedrich on 2016-03-04 15:54:11 +00:00

  • Parent Id set to 11313

Updated by tgelf on 2016-06-16 16:23:05 +00:00

bump

I just stumbled over this issue from 2014; the problem persists. Just for the record, here are some current numbers:

  • 2 masters
  • 1750 endpoints & agents
  • 1950 hosts
  • 1950 files in /var/lib/icinga2/api/repository
  • 500 file operations per second in the above directory alone: open, modify, write/close or create/move (one way to reproduce such a measurement is sketched below)
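
A minimal sketch of how such numbers can be gathered (assumes the inotify-tools package is installed; the path matches standard packaging):

```
# count inotify events in the repository directory over 10 seconds
timeout 10 inotifywait -m -r \
  -e create -e modify -e close_write -e moved_to \
  /var/lib/icinga2/api/repository 2>/dev/null | wc -l
# divide the result by 10 to get events per second
```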

I guess there are cluster messages involved with each change, so there must be at least 100-200 cluster-related messages a second, which sums up to quite some system load. So, in my view, allowing one to disable repository shipping (and/or processing) in the ApiListener object (or similar) could really be a quick win. One flag, less load.

It could initially default to "on", so there would be no behaviour change without manual interaction. A future major release might decide to switch the default to "off", as this feature probably isn't widely used. A rough sketch of what I have in mind:
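
(The attribute name below is invented for illustration; no such option exists in Icinga 2 today.)

```
object ApiListener "api" {
  // hypothetical flag: keep the listener as-is, but stop
  // shipping/processing repository updates
  enable_repository_sync = false
}
```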

Cheers,
Thomas

Updated by mfriedrich on 2016-11-09 14:52:12 +00:00

  • Parent Id deleted 11313

Updated by mfriedrich on 2016-11-18 17:10:54 +00:00

  • Relates set to 13255

Updated by mfriedrich on 2016-11-23 14:56:37 +00:00

  • Target Version deleted Backlog

Updated by tgelf on 2016-12-16 15:41:46 +00:00

Is there any chance we could finally tackle this issue? It has hurt for more than two years now. The feature we're talking about has since been deprecated. I have never used it, and still it is now responsible for 620 file operations every second on every master of the system mentioned above (10 months ago).

I didn't measure the network overhead it generates, but I guess that shouldn't be underestimated either.

Thanks,
Thomas

Updated by mfriedrich on 2016-12-16 16:01:08 +00:00

It will be gone when we finally remove the deprecated parts of the "bottom up" client mode.

Kind regards,
Michael

Updated by tgelf on 2016-12-16 16:44:40 +00:00

What's so difficult about adding a simple flag to toggle this? Sorry, I'm unable to follow your reasoning. This is an absolutely useless and insane waste of resources. Did you even look at the numbers? Why not add a little lever allowing those suffering from high load to lower it a bit? It wouldn't hurt, don't you think?

Updated by mfriedrich on 2017-01-14 13:05:53 +00:00

  • Parent Id set to 13257

Updated by mfriedrich on 2017-01-14 13:08:36 +00:00

  • Relates deleted 10054

@icinga-migration icinga-migration added bug Something isn't working area/distributed Distributed monitoring (master, satellites, clients) labels Jan 17, 2017

dnsmichi commented Feb 2, 2017

#4799

@dnsmichi dnsmichi added enhancement New feature or request wishlist and removed bug Something isn't working labels Mar 30, 2017
@dnsmichi

Will be removed with the referenced issue.

@dnsmichi dnsmichi removed enhancement New feature or request wishlist labels Aug 17, 2017