Preventing old address books from replicating entries back into the main address book

We have had an ongoing problem in our environment for the past year with our Domino address book. The symptoms present themselves as deleted users and groups reappearing in the address book long after they have been deleted, sometimes more than a year later.

It appears that somewhere an old copy of the address book, still containing all of these deleted users and groups, is coming online and replicating back into our main address book.

What I am being told is that the deletion stubs for these users have long since expired by the time this old address book comes online, so when it replicates it doesn't receive the deletion stubs and pushes all of its entries back over rather than deleting the proper entries.
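The mechanics behind this can be sketched in a few lines. The following is a simplified, hypothetical model of how deletion stubs work (not the actual Domino implementation): while a stub exists, it suppresses incoming copies of the deleted document; once the purge interval expires and the stub is removed, a stale replica's copy is treated as a brand-new document and re-added.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """Toy model of a Notes replica: live documents plus deletion stubs."""
    docs: set = field(default_factory=set)
    stubs: set = field(default_factory=set)

    def delete(self, doc_id):
        self.docs.discard(doc_id)
        self.stubs.add(doc_id)      # a deletion stub is left behind

    def purge_stubs(self):
        self.stubs.clear()          # the purge interval has elapsed

    def replicate_from(self, other):
        for doc_id in other.docs:
            # A stub suppresses the incoming copy; without the stub,
            # the old document looks brand new and is re-added.
            if doc_id not in self.stubs:
                self.docs.add(doc_id)
        for doc_id in other.stubs:
            self.docs.discard(doc_id)
            self.stubs.add(doc_id)

main = Replica(docs={"alice", "bob"})
old_copy = Replica(docs={"alice", "bob"})   # stale replica taken offline

main.delete("bob")
main.replicate_from(old_copy)
print(sorted(main.docs))    # ['alice'] -- the stub blocks the re-add

main.purge_stubs()          # stub expires before the old copy comes back
main.replicate_from(old_copy)
print(sorted(main.docs))    # ['alice', 'bob'] -- bob is resurrected
```

This is exactly the window described above: as long as the old replica stays offline longer than the purge interval, nothing remains on the main replica to say the document was ever deleted.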

I am also being told that there is no way to prevent this from happening, or even to track down where this rogue copy of the address book is.

Has anyone else had this happen and found a solution?

Subject: Tool to find Old documents pushed back by replication

Just a quick one from OpenNTF.

Link : http://www.openntf.org/Projects/codebin/codebin.nsf/CodeByDate/300F25985BCB5CA38625737900608E54

Tool to find Old documents pushed back by replication

A big problem encountered by many users:

http://www-10.lotus.com/ldd/nd6forum.nsf/55c38d716d632d9b8525689b005ba1c0/1acb01c8dc57378785257377002dfd5f?OpenDocument

Description

Tool to find Old documents or deleted documents pushed back to server by replication

This db allows you to find the "Added to file" date of all Person documents in your NAB.

The search is done against the Mail Users views.

In your search, you have to specify which mail servers (from your Mail Users view) to look at.


Deleted documents are reappearing after replication

http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21098733

It’s possible to find them by script with the AddedToThisFile API.
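The detection logic such a script would implement can be sketched as follows. This is a hedged Python illustration of the idea, not Notes code: the document IDs, dates, and the `suspicious` helper are all hypothetical, standing in for data you would export from the NAB using the AddedToThisFile information. A document whose "added to this file" date is much later than its creation date is a typical signature of an old entry pushed back by replication.

```python
from datetime import datetime, timedelta

# Hypothetical export: (doc_id, created, added_to_file) tuples,
# as a script reading the AddedToThisFile data might dump them.
person_docs = [
    ("jsmith", datetime(2006, 3, 1),  datetime(2006, 3, 1)),
    ("mjones", datetime(2005, 7, 12), datetime(2007, 9, 30)),  # suspicious
    ("rbrown", datetime(2007, 9, 29), datetime(2007, 9, 30)),  # just created
]

def suspicious(docs, window=timedelta(days=30)):
    """Flag documents added to this replica long after they were created --
    the signature of an old document pushed back by replication."""
    return [doc_id for doc_id, created, added in docs
            if added - created > window]

print(suspicious(person_docs))   # ['mjones']
```

A freshly registered user is added to the file moments after creation, so only resurrected documents clear the window.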

Q&As about replication purge intervals and cutoff dates

http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21110117

How to track down where replication changes originate

http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg21225071

You can reuse this code to search other dbs, or other types of documents in the NAB (server documents, holiday documents, etc.).

JYR

Subject: Sometimes there is a difference between 'no solution' and 'no simple solution'.

For your system, you may actually have the problem that one of the servers has the directory (names.nsf) twice. The cause could be a) a bug, b) a client, or c) a server; or maybe you normally replicate only a few NSF files (listed in a replication document), but occasionally someone types a console command to replicate that server; or there is a test server in the production environment that is usually not active at all.

Ways to go:

a) Find what is going on and go for the source:

  • The document properties can tell you when a document was added to a replica. By looking at several replicas, you have a chance to find out which server was hit by the documents first, and by looking in that server's log you might be able to see which replication was active at that moment.
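Once you have collected the "added to replica" timestamp for one resurrected document from each server's document properties, narrowing down the entry point is a simple comparison. The server names and timestamps below are invented for illustration; the server with the earliest timestamp is the best lead for which replication to inspect in the logs.

```python
from datetime import datetime

# Hypothetical data: when the resurrected document was added to each
# server's replica of names.nsf (read from document properties).
added_on = {
    "Hub/Acme":    datetime(2007, 9, 30, 4, 15),
    "Mail01/Acme": datetime(2007, 9, 30, 2, 5),   # earliest: likely first hit
    "Mail02/Acme": datetime(2007, 9, 30, 6, 40),
}

first_server = min(added_on, key=added_on.get)
print(first_server)   # Mail01/Acme -- check that server's log around 02:05
```

From there, the replication events in that server's log around the earliest timestamp should point at the source of the rogue replica.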

b) You could do additional things. For example, if you are doing hub-and-spoke replication, allow only the hub to have more than Reader access in names.nsf, allow no replication between the spokes, and try to find out where the documents reappear first.

c) There are tools to collect the replica IDs of databases on your servers, which would help locate every replica of the directory.

=> If you were told there is no way to solve this, I think that was not a complete answer.

What I would do instead:

Replace names.nsf with a copy (File > Application > New Copy — not a replica, and not an OS-level copy). Generate this file locally, keep the ACL and the data, then swap it in while the server is down and restart the server. This way it gets a new replica ID. Replicas of this new names.nsf then need to be distributed to all other places to replace names.nsf there, too.

Obviously, do backups before you start. If possible, do it all in a very short time window (so that things like AdminP hit the same data content in all places). Expect some risk of minor problems for a short time.