I have run into a performance problem that I don’t completely understand: an (LEI) scripted activity changes about 100k documents within a database of 200k documents.
What happens is:
Within the first two hours or so, about 300 documents per minute are processed.
After that, throughput degrades to a rate of about 30 documents per minute.
What I have done so far:
- I started two more Updaters, so now there are 3 running (on a machine with more than 4 cores).
- I sliced the set of documents into chunks of about 20k. When a job starts, performance is high – but then it fades away again. Between the processing of the chunks there is some idle time, so I assume that during this time some “tidy-up” task runs that restores performance. Which task? What does this task do?
Can anyone give me a clue or some advice on what I can do?
My next experiment:
a- Putting a sleep statement into my script after – let’s say – 10k documents
b- Monitoring I/O extensively (which is not easy in this environment, for organizational reasons). This should not be an issue, as temporary files and databases are already separated by controller and disk – but I have lost confidence in this…
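The throttling idea in experiment (a) can be sketched in a language-agnostic way. The snippet below is in Python rather than LotusScript, and the batch size, pause length, and function names are illustrative assumptions, not tuned values:

```python
import time

def process_in_batches(doc_ids, handle_doc, batch_size=10_000, pause_seconds=60):
    """Process documents in fixed-size batches, pausing between batches.

    The pause gives background server tasks (indexing, cleanup, etc.)
    a window to catch up, mimicking the idle time observed between
    the 20k-document chunks.  Returns the number of documents handled.
    """
    processed = 0
    for doc_id in doc_ids:
        handle_doc(doc_id)          # placeholder for the per-document update
        processed += 1
        if processed % batch_size == 0:
            time.sleep(pause_seconds)
    return processed
```

In LotusScript the same pattern would be a counter inside the update loop plus a `Sleep` call every N documents; the open question remains which server task actually needs that window.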
Remarks:
There are no other applications on this server that might cause this trouble, as it is a nearly dedicated server for this application (just a few users and about 10 infrequently used mail-in databases – no agents or other performance-killing tasks).
The database has about 40 views of medium complexity – none with @Now or @Today – so I don’t think that is where my problem starts.