Advice sought on replicating a large database (2 GB)

I have a rather large database on Server A and wish to setup a replica of this database on Server B.

This database is domlog.nsf. Its current size is 2 GB, with around 25-30% of the space used - I've deleted over 1 million log entries, hence the large amount of free space.

I have tried to start the replication via the Notes client, but unfortunately it appears this task will take a day or two to complete.

Two questions:

  1. Is it possible to run the compact task on this large database while the server is still populating it with new data? If so, what parameters should be passed to the compact task?

  2. What is the best way to create a new replica of this database on Server B - without having any downtime?

  • Via Notes Client on my PC?

  • Via the server? Note: Server A does not initiate any connections to Server B; Server B does the calling to Server A. If via the server, how is this done?

  • Via Windows Explorer's file copy command?

  • Via some other method?

Subject: Compact won’t help

Compact eliminates unused space. Unused space doesn’t replicate. Neither do view indexes. I would guess you have no more than 1 GB of actual data that needs to replicate. Unfortunately, it does take a lot of time for that amount, though a day or two does seem like a bit much. I’ve seen more data than that replicate over trans-Atlantic lines in less time.

Subject: RE: Compact won’t help

Thanks for reminding me about what is and what is not replicated.

I estimated a day or two for the replication because the line between the two servers is quite busy and its capacity isn't that high - around 1 Mbit/s.
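As a quick back-of-envelope check, taking the thread's two estimates at face value (roughly 1 GB of actual data to replicate, over a 1 Mbit/s line) gives a theoretical best case of a few hours, so a day or two is plausible only once you factor in a busy, shared line:

```python
# Rough transfer-time estimate: ~1 GB of payload over a 1 Mbit/s line.
# Both figures are the thread's estimates, not measurements.
payload_bytes = 1 * 1024**3      # ~1 GB of actual data that replicates
link_bits_per_s = 1_000_000      # 1 Mbit/s line capacity
hours = payload_bytes * 8 / link_bits_per_s / 3600
print(f"{hours:.1f} hours")      # ~2.4 hours, assuming full utilisation
```

Protocol overhead and contention on a busy line can easily stretch this several-fold, but it suggests "a day or two" is pessimistic for the data volume alone.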

Subject: Advice sought on replicating a large database (2 GB)

Have you tried creating a replica stub on server B using your Notes client and then letting the next server to server replication between A and B fill out the stub? The file copy at the OS level should work too, but it would carry with it the large amount of unused disk space.

One other maintenance possibility… When faced with large Domino logs, it can be useful to set up a monthly or quarterly maintenance task where you shut down the server, rename domlog.nsf to domlog[date in ddmmyyyy format etc.], and then create a new instance of domlog.nsf from domlog.ntf. This eliminates the need to delete entries and keeps the database from growing prohibitively large.
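The rename step could be scripted; here is a minimal sketch, assuming the Domino server is already shut down and that `data_dir` points at the Domino data directory (a hypothetical path - adjust for your install). The server is then left to recreate domlog.nsf from the domlog.ntf template on its next startup:

```python
import datetime
from pathlib import Path

def rotate_domlog(data_dir: str) -> Path:
    """Rename domlog.nsf to a dated archive, e.g. domlog_01012024.nsf.

    Run only while the Domino server is stopped; on restart the server
    is expected to recreate domlog.nsf from the domlog.ntf template.
    """
    stamp = datetime.date.today().strftime("%d%m%Y")  # ddmmyyyy, as suggested
    src = Path(data_dir) / "domlog.nsf"
    dst = Path(data_dir) / f"domlog_{stamp}.nsf"
    src.rename(dst)  # same-volume rename, so no lengthy copy
    return dst
```

For example, `rotate_domlog("/local/notesdata")` - the data directory path here is only an illustration.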

Stephen

Subject: RE: Advice sought on replicating a large database (2 GB)

Thanks for reminding me about the replica stub option - I've never used it before, as I wasn't too sure how it worked, nor have I ever had the need for it.

I’ve now created the replica stub via the client and have the servers doing the full replication.

Subject: Advice sought on replicating a large database (2 GB)

  1. Run "load compact domlog.nsf -B" at the server console. If you use transaction logging, take a full backup after the compact completes.

  2. That depends on how the servers are connected. You say "Server B calls Server A" - do you literally mean it's on dial-up? If that's the case, burn it to a DVD or buy a USB HD and copy it off, then take it to the other server and copy it back. If there is a WAN connection, let the servers handle it. Search for "create replica" in the Domino Administrator help.
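Putting the two steps together at the consoles might look like the following sketch, assuming a replica stub for domlog.nsf already exists on Server B and that "ServerA/YourOrg" stands in for your actual server name. On Server A, compact in place to recover the deleted-document space:

```
load compact domlog.nsf -B
```

Then, because Server B is the side that initiates connections in the topology described, trigger a one-off pull replication from Server B's console:

```
pull ServerA/YourOrg domlog.nsf
```

The exact command syntax and available compact switches vary by Domino release, so check the Domino Administrator help for your version.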

Subject: RE: Advice sought on replicating a large database (2 GB)

The connection is not a dial-up. Just that Server A doesn’t know how to call home. Server B knows how to reach Server A.

Copying to a DVD or USB HD is not an option - the server is located offsite and physical access to it is restricted …

I’m now using the replica stub and full-replication as suggested by another poster.