[dev.icinga.com #866] non-blocking icinga core while config dump at startup to ido2db #401
Comments
Updated by mfriedrich on 2010-10-10 11:34:10 +00:00
The main problem in this regard is the config dump during startup (see idomod.cfg). I've been thinking (a lot, and very often) about a second channel for that, pushing the configs aside. Next to that, ido2db should be able to write everything incoming into a message queue such as ZeroMQ provides, and then have a worker run the database inserts/updates. The problem would be a new dependency, so I was thinking of "forking" idoutils into a renamed module and doing a complete rewrite with non-blocking sockets, or even basing it on mod_gearman, and also changing the DB layout the way Merlin has it.
Updated by mfriedrich on 2011-02-24 13:45:13 +00:00
Updated by mfriedrich on 2011-02-24 14:12:25 +00:00
I've been playing around with a worker thread on the dbuf read from the socket, using a mutex to keep the buffers in sync. The main problem with this attempt: a full config dump with 200 hosts and 4000 services plus the rest takes <1 second, and the dbuf then holds ~600k lines which need to be worked through. So ido2db takes a lot of the available CPU (2 cores, one effectively "dedicated" to the ido2db process) to handle the data as usual. The consensus is: no more blocking core, but more CPU usage. Having that run on a single core is considered a fail.
The housekeeping thread interferes a bit with the config dump. Since the thread must get to know the instance_name (from the config dump!) in order to do housekeeping on the correct instance, I've now postponed it by 300 secs and made this the default (lower values are not possible). Furthermore, the housekeeping cycle has been extended from 60 to 3600 seconds. Normally people would set 50 to 60 minutes as the lowest on data housekeeping, so why check this every single minute? It also reduces the delete queries being fired at the database.
Overall, except for the CPU usage (and hacking the pthread scheduler via 'nice' priority does not seem to work), this is something I consider for further testing and productive deployments too, as it will be a huge enhancement, dropping long-grown problems.
Updated by mfriedrich on 2011-02-24 15:17:19 +00:00
Updated by mfriedrich on 2011-04-27 17:10:58 +00:00
I've got to resolve/debug several bugs in a circular buffer implementation in ido2db, and this can't be done for 1.4 right now without violating the 2-weeks-freeze policy - so postponing to 1.5.
Updated by mfriedrich on 2011-04-28 11:32:47 +00:00
Icinga 1.4 will feature a first implementation attempt while resolving #1410 - the two go hand in hand.
Updated by mfriedrich on 2011-05-02 12:14:38 +00:00
Postponed. Needs a rewritten solution.
Updated by mfriedrich on 2011-05-27 16:22:09 +00:00
I'll leave that until someone has a better idea.
Updated by mfriedrich on 2011-09-25 00:11:28 +00:00
Please check #1259 for future reference and/or solutions to both the buffering and the config dump blocking.
Updated by mfriedrich on 2014-12-08 14:34:41 +00:00
|
This issue has been migrated from Redmine: https://dev.icinga.com/issues/866
Created by kloppi on 2010-10-08 18:50:23 +00:00
Assignee: (none)
Status: Closed (closed on 2011-09-25 00:11:28 +00:00)
Target Version: (none)
Last Update: 2014-12-08 14:34:41 +00:00 (in Redmine)
Hi,
When I start my Icinga, it takes over 5 minutes until the first check starts up when idomod is integrated. It looks like ido makes the core wait until all SQL operations are done, executed in FIFO order. To speed this up, a simple option would be to cache all SQL statements from the Icinga process, either in memory or in a file, and have another process take the data and put it into the database. The effect would be a lag between icinga-web and icinga-cgi, but I think the difference would be minimized after some runtime because the queue would be processed permanently.
Changesets
2011-02-24 15:31:36 +00:00 by mfriedrich 0846bbc12d02f8d6667c657798ecf4296cf2c63f
2011-04-28 10:52:39 +00:00 by mfriedrich 39fa0ac
2011-09-25 00:12:03 +00:00 by mfriedrich a34a899
Relations: