Is there a limit to the number of views you should have in a database? Or a recommended number of views not to exceed? We have a few databases with a lot of views in them, and they are rather large databases (3-5 GB each), and performance on these databases is pretty bad. So I was just wondering if it could be too many views or something else causing the poor performance.
Mika’s suggestion is very good and focuses on some design changes.
Another approach may be to ‘tune’ your system if at all possible. This doesn’t solve design flaws and can’t help if the application is more popular than it was originally designed for. However, the benefit is that it can probably be achieved a lot faster, will have at least some benefit, and can be done mostly by administrators.
How would you tune your system for performance? There is a comprehensive list of recommendations in Admin Help - search for “server performance” or “performance tuning”. There is also another document about limits on various parameters, such as the number of views, index sizes, etc.
What IBM preaches is that views with a lot of documents take up more space and resources than any other design element in Notes. They recommend using small views with a small number of documents (naturally, that results in more views than using a few big views).
I would add that databases over 1 GB can be very slow too: not only does view indexing take a long time, but the server’s compact, fixup and other tasks also take a long time to complete.
To have big applications with a huge number of documents running smoothly even on Windows servers (as opposed to RS/6000 servers), I would recommend splitting the documents across several databases, as server-based agents can do unbelievably fast multi-database real-time relational joins, for example when you want to show all the fields on one form (on the Web it also works to build relational views). I have something like 10 lookups to 10 databases (usually one lookup per database, but sometimes multiple). Each database has hundreds of thousands of documents and is close to 1 GB, and using a key field to fetch the documents from the other databases (1-1 or 1-n joins), the agent runs on average in 0.15 seconds. This works both in Notes and on the Web.
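As a rough illustration of what such a keyed lookup agent can look like, here is a minimal LotusScript sketch. All server, file, view and field names below (orders.nsf, ByCustomerKey, CustomerKey, OrderTotal) are made-up assumptions for the example, not details from the setup described above; the point is that GetDocumentByKey works against the view index, which is what keeps the per-document join cheap:

```lotusscript
' Sketch: keyed lookup from the current document into a second database.
' Assumes "ByCustomerKey" is a view whose first column is sorted on the key.
Dim session As New NotesSession
Dim mainDoc As NotesDocument
Dim orderDb As NotesDatabase
Dim byKey As NotesView
Dim match As NotesDocument

Set mainDoc = session.DocumentContext
' Open one of the related databases on the same server (names are examples)
Set orderDb = session.GetDatabase("Server/Org", "orders.nsf")
Set byKey = orderDb.GetView("ByCustomerKey")
' GetDocumentByKey does an index lookup, not a full scan, so it is fast
Set match = byKey.GetDocumentByKey(mainDoc.CustomerKey(0), True)
If Not (match Is Nothing) Then
    ' Pull the fields you need onto the current document
    mainDoc.OrderTotal = match.OrderTotal(0)
End If
```

Repeating that pattern once per related database is what gives you the 1-1 or 1-n joins without ever opening a document collection.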
… but you’re on the right track. A better question is: how much load should any particular db be allowed to put on the updater process(es)? The answer depends on several factors, such as how many updater instances you are able to run, which is a function of how much box you have; how many other db’s are on the server and what updater demands they are making; etc.
Once you get inside a particular db, it’s as Mika already pointed out – indexer time is affected by the number of docs in the db and the number of docs in the different views. Each view has an index. Making a column sortable adds another index for that view. A two-way sortable column adds two indexes instead of one.
Then you get to the complexity of the views - how many columns there are and how complex the column formulas are. Whenever possible, a column formula should be a field name. Period. If you want to see that CPU smoke, then go ahead and put a half-page formula behind each column in a view that contains a zillion docs. Even better, include some date/time arithmetic.
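To make the difference concrete (the field names here are hypothetical): a column formula like the first one below is re-evaluated for every document every time the index is refreshed, while the second is just a field read:

```formula
REM "Slow: logic plus date/time arithmetic, run per doc on every refresh:";
@If(Status = "Closed"; "Closed after " + @Text(@Integer((ClosedDate - @Created) / 86400)) + " days"; "Open");

REM "Fast: the column formula is nothing but a field name:";
StatusDisplay
```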
Seriously though, I’m a huge fan of pulling all that complex logic out of the column formulas and sticking it into the form logic instead. Each time a user saves a form, let those formulas run and resolve to a hidden field somewhere on the document. Then, in the column formulas in the various views, just reference that new field. You’ll take a performance hit as the doc is being saved (usually on the client, though), but you’ll take that hit ONLY when a particular doc is updated, instead of on every doc every time a view index is re-evaluated.
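A minimal sketch of that approach, assuming a hidden computed field named StatusDisplay on the form (the field and formula here are invented for illustration):

```formula
REM "Value formula of the hidden computed field StatusDisplay on the form.";
REM "It is evaluated once, when the document is saved:";
@If(Status = "Closed"; "Closed " + @Text(ClosedDate); "Open");

REM "Every view column that needs this value then reduces to:";
StatusDisplay
```

The same trick works for any expensive derivation: pay for it once per document save, not once per document per index refresh.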