I'm removing the default categories setting in a separate PR. If you come up with a solution for batched deletes, please send in a PR and we can reopen this issue.
This issue has been migrated from Redmine: https://dev.icinga.com/issues/13203
Created by winem_ on 2016-11-15 13:24:10 +00:00
Assignee: (none)
Status: New
Target Version: (none)
Last Update: 2016-11-15 13:24:10 +00:00 (in Redmine)
We have been running this Icinga instance for some months now and enabled the cleanup in the IdoMySqlConnection configuration for the first time today:
cleanup = {
  downtimehistory_age = 31d
  contactnotifications_age = 31d
  statehistory_age = 31d
  externalcommands_age = 14d
}
This was fine for contactnotifications, statehistory and downtimehistory but caused a high load when running the delete statement on the externalcommands table.
Query: DELETE FROM icinga_externalcommands WHERE instance_id = 1 AND entry_time < FROM_UNIXTIME(1477991035)
This is fine in principle, but in this case more than 50 million rows were affected. The delete statement had been running for almost an hour when I killed it; it drove up the load and held locks that blocked other queries on the table.
The solution that comes to mind is to run the delete query with a LIMIT in a loop, fetching the number of affected rows from each response, and to exit the loop once 0 rows have been deleted.
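The loop described above could be sketched roughly as follows. This is a minimal illustration, not the actual IDO cleanup code: the helper name `batched_delete` is hypothetical, and it runs against SQLite (using a rowid subquery, since stock SQLite lacks `DELETE … LIMIT`) purely so the example is self-contained; with MySQL the inner statement would simply be `DELETE FROM icinga_externalcommands WHERE instance_id = 1 AND entry_time < FROM_UNIXTIME(...) LIMIT n`.

```python
import sqlite3

def batched_delete(conn, cutoff, batch_size=10000):
    """Delete expired rows in small batches until none remain.

    Illustrative sketch of the proposed loop: delete up to
    batch_size rows per statement, stop when a batch deletes 0 rows.
    Table and column names mirror the IDO schema, but the backend
    here is SQLite, so LIMIT is emulated via a rowid subquery.
    """
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM icinga_externalcommands WHERE rowid IN ("
            "  SELECT rowid FROM icinga_externalcommands"
            "  WHERE instance_id = 1 AND entry_time < ?"
            "  LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()  # short transactions, so locks are released often
        if cur.rowcount == 0:  # nothing left to delete -> exit the loop
            break
        total += cur.rowcount
    return total
```

Because each batch is its own short transaction, other queries get a chance to acquire locks between iterations instead of waiting behind one hour-long DELETE.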