How Core Group Policy Works
Updated: March 28, 2003
In this section
• Core Group Policy Architecture
• Core Group Policy Physical Structure
• Core Group Policy Processes and Interactions
• Network Ports Used by Group Policy
• Related Information
Core Group Policy, or the Group Policy engine, is the infrastructure that processes Group Policy
components including server-side snap-in extensions and client-side extensions. You use
administrative tools such as Group Policy Object Editor and Group Policy Management Console to
configure and manage policy settings.
At a minimum, Group Policy requires Windows 2000 Server with Active Directory installed and
Windows 2000 clients. Fully implementing Group Policy to take advantage of all available
functionality and the latest policy settings depends on a number of factors including:
• Windows Server 2003 with Active Directory installed and with DNS properly configured.
• Windows XP client computers.
• Group Policy Management Console (GPMC) for administration.
Core Group Policy Architecture
The Group Policy engine is a framework that handles client-side extension (CSE) processing and
interacts with other elements of Group Policy, as shown in the following figure:
Core Group Policy Architecture
The following table describes the components that interact with the Group Policy engine.
objectClass: The list of classes from which this class is derived. For a GPO, the objectClass is
Container, groupPolicyContainer, and top.
There are also a number of optional attributes inherited from the top class, and others that are
assigned directly to the Group Policy container. Many optional attributes are required in order for the
Group Policy container to function properly. For example, the GPCFileSysPath optional attribute must
be present or the Group Policy container will not be linked to its corresponding Group Policy
template.
GroupPolicyContainer Subcontainers
Within the GroupPolicyContainer there is a series of subcontainers. The first level of subcontainers,
User and Machine, belongs to the class Container. These two containers are used to separate
User-specific and Computer-specific Group Policy components.
Group Policy Container-Related Attributes of Domain, Site, and OU Containers
Windows Server 2003 uses the domainDNS, site, and organizationalUnit classes to create domain, site,
and OU container objects, respectively. These objects contain two optional Group Policy container-
related attributes, gPLink and gPOptions. The gPLink attribute contains the prioritized list of GPOs,
and the gPOptions attribute contains the Block Policy Inheritance setting.
The gPLink attribute holds a list of all Group Policy containers linked to the container, along with a
number for each listed Group Policy container that represents the Enforced (previously known as No
Override) and Disabled option settings. The list appears in order from lowest-priority to highest-
priority GPO.
The gPOptions attribute holds an integer value that indicates whether the Block Policy Inheritance
option of a domain or OU is enabled (1) or disabled (0).
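For illustration, the following minimal Python sketch parses a gPLink value into its component links
and option flags, on the assumption (consistent with the published gPLink format) that bit 0 of the
options number marks the link Disabled and bit 1 marks it Enforced. The sample value is hypothetical.

# A minimal sketch, not a supported API: split a gPLink value into its entries.
import re

gplink = ("[LDAP://CN={31B2F340-016D-11D2-945F-00C04FB984F9},"
          "CN=Policies,CN=System,DC=contoso,DC=com;2]"
          "[LDAP://CN={6AC1786C-016F-11D2-945F-00C04FB984F9},"
          "CN=Policies,CN=System,DC=contoso,DC=com;0]")

# Each link is "[<LDAP path to the Group Policy container>;<options>]".
# Entries appear from lowest to highest priority; bit 0 (value 1) of the
# options number marks the link Disabled, bit 1 (value 2) marks it Enforced.
for path, options in re.findall(r"\[LDAP://([^;]+);(\d+)\]", gplink):
    flags = int(options)
    print(path,
          "disabled" if flags & 1 else "enabled",
          "enforced" if flags & 2 else "not enforced")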
Managing Group Policy Links for a Site, Domain, or OU
To manage GPO links to a site, domain, or OU, you must have read and write access to the gPLink
and gPOptions properties. By default, Domain Admins have this permission for domains and
organizational units, and only Enterprise Admins and Domain Admins of the forest root domain can
manage links to sites. Active Directory supports security settings on a per-property basis. This
means that a non-administrator can be delegated read and write access to specific properties. In
this case, if non-administrators have read and write access to the gPLink and gPOptions properties,
they can manage the list of GPOs linked to that site, domain, or OU.
How WMIPolicy Objects are Stored and Associated with Group Policy Container Objects
A single WMI filter can be assigned to a Group Policy container. The Group Policy container stores
the distinguished name of the filter in the gPCWQLFilter attribute. The Group Policy container locates
the assigned filter in the System/WMIPolicy/SOM container. Each Windows Server 2003 domain
stores its WMI filters in this Active Directory container. Each WMI filter stored in the SOM container
lists the rules that define the WMI filter. Each rule is listed separately. For example, consider a WMI
filter containing the following three WQL queries:
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{5E076CF2-EFED-43A2-A623-13E0D62EC7E0}"
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{242365CD-80F2-11D2-989A-00C04F7978A9}"
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{00000409-78E1-11D2-B60F-006097C998E7}"
Three WMI rules are defined in the details of the filter. Each rule contains a number of attributes,
including the query language (WQL) and the WMI namespace queried by the rule.
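Conceptually, a client evaluates such a filter by running each WQL query locally and applying the GPO
only if every query returns at least one object. The following Python sketch (using the third-party
wmi package, Windows only) illustrates that evaluation; it is a simplified model, not the actual Group
Policy engine code.

# A hedged sketch of WMI filter evaluation: the filter passes only if every
# query returns at least one object. Requires "pip install wmi" on Windows.
import wmi

queries = [
    'SELECT * FROM Win32_Product WHERE IdentifyingNumber = '
    '"{5E076CF2-EFED-43A2-A623-13E0D62EC7E0}"',
    'SELECT * FROM Win32_Product WHERE IdentifyingNumber = '
    '"{242365CD-80F2-11D2-989A-00C04F7978A9}"',
]

conn = wmi.WMI(namespace="root\\cimv2")  # the namespace each rule names
filter_passes = all(len(conn.query(q)) > 0 for q in queries)
print("GPO applies" if filter_passes else "GPO filtered out")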
Group Policy Template
The majority of Group Policy settings are stored in the file system of the domain controllers. This
part of each GPO is known as the Group Policy template. The GroupPolicyContainer object for each
GPO has a property, GPCFileSysPath, which contains the UNC path to its related Group Policy
template.
All Group Policy templates in a domain are stored in the \\domain_name\Sysvol\domain_name\
Policies folder, where domain_name is the FQDN of the domain. For the most part, the Group Policy
template stores the actual data for the policy extensions: for example, the Security Settings .inf file,
the .adm and .pol files for Administrative Template-based policy settings, applications made available
through the Group Policy Software installation extension, and potentially scripts.
The Gpt.ini File
The Gpt.ini file is located at the root of each Group Policy template. Each Gpt.ini file contains the GPO
version number of the Group Policy template. Except for the Gpt.ini files created for the default GPOs,
a display name value is also written to the file. For example:
[General]
Version=65539
Normally, this is identical to the versionNumber property of the corresponding GroupPolicyContainer
object. It is encoded in the same way: as the decimal representation of a 4-byte number whose
upper two bytes contain the GPO user settings version and whose lower two bytes contain the
computer settings version. In this example, the version 65539 equals 0x00010003, giving a user
settings version of 1 and a computer settings version of 3.
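The encoding can be verified with simple arithmetic, as in this Python fragment:

# The split described above, as arithmetic: the upper 16 bits hold the user
# settings version and the lower 16 bits the computer settings version.
version = 65539                       # Version= value from Gpt.ini (0x10003)
user_version = version >> 16          # 0x0001 -> 1
computer_version = version & 0xFFFF   # 0x0003 -> 3
print(user_version, computer_version) # prints: 1 3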
Storing this version number in Gpt.ini allows the CSEs to check whether the client is out of date with
respect to the last processing of policy settings, or whether the currently applied policy settings
(cached policies) are up-to-date.
• Replace mode. In this mode, the user's list of GPOs is not gathered. Only the list of GPOs based
upon the computer object is used. In this example, the list is A3, A1, A2, A4, and A6.
The loopback feature can be enabled by using the User Group Policy loopback processing
mode policy under Computer Configuration\Administrative Templates\System\Group Policy.
The processing of the loopback feature is implemented in the Group Policy engine. When the Group
Policy engine is about to apply user policy, it looks in the registry for a computer policy, which
specifies which mode user policy should be applied in.
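For illustration, the mode can be inspected with a sketch like the following; the registry location and
the UserPolicyMode value name (1 = Merge, 2 = Replace) are stated here as assumptions rather than
taken from this document.

# A hedged sketch: read the loopback mode from the registry location where,
# to my knowledge, the policy is written. Treat the path as an assumption.
import winreg

try:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"Software\Policies\Microsoft\Windows\System")
    mode, _ = winreg.QueryValueEx(key, "UserPolicyMode")
    print({1: "Merge", 2: "Replace"}.get(mode, "unknown"))
except FileNotFoundError:
    print("loopback not configured")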
How the Group Policy Engine Processes Client-Side Extensions
Client-side extensions are the components running on the client system that process and apply the
Group Policy settings to that system. A number of extensions are pre-installed in
Windows Server 2003. Other Microsoft applications and third-party application vendors can also
write and install additional extensions to implement Group Policy management of their applications.
The default Windows Server 2003 CSEs are listed in the following table:
How Group Policy Processing History Is Maintained on the Client Computer
Each time GPOs are processed, a record of all of the GPOs applied to the user or computer is written
to the registry. GPOs applied to the local computer are stored in the following registry path:
Cross-references are stored as directory objects of the class crossRef that identify the existence
and location of all directory partitions, irrespective of location in the directory tree. In addition, these
objects contain information that Active Directory uses to construct the directory tree hierarchy.
Values for the following attributes are required for each cross-reference:
• nCName. The distinguished name of the directory partition that the crossRef object references.
(The prefix nC stands for naming context, which is a synonym for directory partition.) The
combination of all of the nCName properties in the forest defines the entire directory tree,
including the subordinate and superior relationships between partitions.
• dNSRoot. The DNS name of the domain where servers that store the particular directory partition
can be reached. This value can also be a DNS host name.
How Cross-Reference Information is Propagated Throughout the Domain and Forest Structure
For every directory partition in a forest, there is an internal cross-reference object stored in the
Partitions container (cn=Partitions,cn=Configuration,dc=ForestRootDomain). Because cross-
reference objects are located in the Configuration container, they are replicated to every domain
controller in the forest, and thus every domain controller has information about the name of every
partition in the forest. By virtue of this knowledge, any domain controller can generate referrals to
any other domain in the forest, as well as to the schema and configuration directory partitions.
When you create a new forest, the Active Directory Installation Wizard creates three directory
partitions: the first domain directory partition, the configuration directory partition, and the schema
directory partition. For each of these partitions, a cross-reference object is created automatically.
Thereafter, when a new domain is created in the forest, another directory partition is created and
the respective cross-reference object is created. When the configuration directory partition is
replicated to the new domain controller, a cross-reference object is created on the domain naming
master and is then replicated throughout the forest.
Note
• The state of cross-reference information at any specific time is subject to the effects of replication
latency.
For more information about cross-reference objects, see “How Active Directory Searches Work.”
Cross-reference objects can also be used to generate referrals to other directory partitions located in
another forest through external cross-references.
External Cross-References
An external cross-reference is a cross-reference object that can be created manually to provide the
location of an object that is not stored in the forest. If your Lightweight Directory Access Protocol
(LDAP) clients submit operations for an external portion of the global LDAP namespace against
servers in your forest, and you want servers in your forest to refer the client to the correct location,
you can create a cross-reference object for that directory in the Partitions container. There are two
ways that external cross-references are used:
• To reference external directories by their disjoint directory name (a name that is not contiguous
with the name of this directory tree). In this case, when you create the cross-reference, you
create a reference to a location that is not a child of any object in this directory.
• To reference external directories by a name that is within the Active Directory namespace (a
name that is contiguous with the name of this directory tree). In this case, when you create the
cross-reference, you create a reference to a location that is a child of a real object in this directory.
Because the domain component (dc=) portion of the distinguished names of all Active Directory
domains matches their DNS addresses, and because DNS is the worldwide namespace, all domain
controllers can generate external referrals to each other automatically.
implements a replication topology that is guaranteed to deliver the contents of every directory
partition to every global catalog server.
The attributes that are replicated to the global catalog by default include a base set defined by
Microsoft. Administrators can use the Microsoft Management Console (MMC) Active Directory
Schema snap-in to specify additional attributes to meet the needs of their installation. In the Active
Directory Schema snap-in, you can select the Replicate this attribute to the global catalog
check box to designate an attributeSchema object as a member of the partial attribute set (PAS),
which sets the value of the isMemberOfPartialAttributeSet attribute to TRUE.
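To see which attributes are currently in the PAS, a query like the following sketch (using the open
source ldap3 Python library) can be used; the server name, schema container DN, and credentials are
placeholders.

# A minimal ldap3 sketch: list schema attributes flagged as PAS members.
from ldap3 import Server, Connection, SUBTREE

server = Server("dc1.contoso.com")
conn = Connection(server, user="CONTOSO\\admin", password="...",
                  auto_bind=True)
conn.search("CN=Schema,CN=Configuration,DC=contoso,DC=com",
            "(&(objectClass=attributeSchema)"
            "(isMemberOfPartialAttributeSet=TRUE))",
            search_scope=SUBTREE,
            attributes=["lDAPDisplayName"])
for entry in conn.entries:
    print(entry.lDAPDisplayName)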
Domain Controller and Global Catalog Server Structure
The physical representation of global catalog data is the same as on all domain controllers: the Ntds.dit
database stores object attributes in a single file. On a domain controller that is not a global catalog
server, the Ntds.dit file contains a full, writable replica of every object in one domain directory
partition for its own domain, plus the writable configuration and schema directory partitions.
Note
• The schema directory partition is writable only on the domain controller that is the schema
operations master for the forest.
The following diagram shows the physical representations of the global catalog as a forestwide
resource that is distributed as a database on global catalog servers.
Global Catalog Physical Structure
As shown in the preceding diagram, a global catalog server stores a replica of its own domain (full
and writable) and a partial, read-only replica of all other domains in the forest. All directory
partitions on a global catalog server, whether full or partial, are stored in the directory database file
(Ntds.dit) on that server. That is, there is not a separate storage area for global catalog attributes;
they are treated as additional information in the directory database of the global catalog server.
The following describes the physical components of the diagram.
Global Catalog Server Physical Components
• Active Directory forest: The set of domains that comprise the Active Directory logical structure
and that are searchable in the global catalog.
• Domain controller: Server that stores one full, writable domain directory partition plus the
forestwide configuration and schema directory partitions. Global catalog servers are always
domain controllers.
• Global catalog server: Domain controller that stores one full, writable domain directory partition
plus the forestwide configuration and schema directory partitions, as well as a partial, read-only
replica of all other domains in the forest.
• Ntds.dit: Database file that stores replicas of the Active Directory objects held by any domain
controller, including global catalog servers.
Global Catalog Processes and Interactions
In addition to its activities as a domain controller, the global catalog server supports the following
special activities in the forest:
• User logon: Domain controllers must contact a global catalog server to retrieve any SIDs of
universal groups that the user is a member of. Additionally, if the user specifies a logon name in
the form of a UPN, the domain controller contacts a global catalog server to retrieve the domain
of the user.
• Universal and global group caching and updates: In sites where Universal Group Membership
Caching is enabled, domain controllers that are running Windows Server 2003 cache group
memberships and keep the cache updated by contacting a global catalog server.
• Global catalog searches: Clients can search the global catalog by specifying port 3268 or by using
search applications that use this port. Search activities include:
• Validation of references to non-local directory objects. When a domain controller holds a
directory object with an attribute that references an object in another domain, this reference is
validated by contacting a global catalog server.
• Exchange Address Book lookups: Exchange 2000 Server and Exchange Server 2003 use Active
Directory as the address book store. Outlook clients query the global catalog to locate Address
Book information.
• Global catalog server creation and advertisement: Global catalog servers register global-catalog-
specific service (SRV) resource records in DNS so that clients can locate them according to site. If
no global catalog server is available in the site of the user, a global catalog server is located in
the next closest site, according to the cost matrix that is generated by the KCC from site link cost
settings.
• Global catalog replication: Global catalog servers must either have replication partners for all
domains or be able to replicate with another global catalog server. When changes to the PAS
occur on, and are replicated between, domain controllers that are running Windows Server 2003,
To refresh the cache, domain controllers running Windows Server 2003 send a universal group
membership confirmation request to a global catalog server. There is no limit to the number of
accounts that can be cached, but a maximum of 500 account caches can be updated during any
cache refresh.
Note
• Universal Group Membership Caching can be enabled in a site that has domain controllers that
are running Windows 2000 Server. If Universal Group Membership Caching is enabled in such a
site, users might experience inconsistent group membership, depending on which domain
controller authenticates them. For this reason, it is recommended that you either upgrade all
domain controllers that are running Windows 2000 Server to Windows Server 2003 when group
caching is enabled in a site, or remove them.
Because the group memberships are cached, there is a period of latency before group membership
changes are reflected in an account’s access token. When group membership changes, the changes
are not reflected in the access token until the following events have occurred (in order):
1. The changes are replicated to the global catalog server that is used for the refresh of the cache.
2. The cache on the domain controllers in the account’s site is refreshed. Although the cache
refresh is not a replication event, the process uses the site link schedule. Therefore, a closed
site link schedule postpones the cache refresh until the schedule opens.
3. The user has logged off and back on again (user account is authenticated) or the computer has
restarted (computer account is authenticated).
When an access token is created during logon, the token contents are static until that user logs off
and logs on again. Furthermore, as long as Universal Group Membership Caching is enabled, an
account’s memberships are cached, and the cache entry has not expired, the cache entry is used
during logon. If changes have been made to group membership and the refresh task has not run, the
changes are not reflected until either the cache entry expires or the refresh task runs and processes
the cache entry.
The length of the latency period depends on when the next refresh task is scheduled to run. The
refresh task reschedules itself for its next refresh during each current refresh run, as follows (see the
sketch after this list):
• Beginning with the current time plus the registry-configured refresh interval, the domain
controller consults the replication schedule on the site link that connects its site to the site of the
closest (or designated) global catalog server.
• If the site link schedule allows replication at the projected time, the refresh task is scheduled to
run at this time.
• If the site link schedule does not allow replication at the projected time, the scheduling algorithm
steps forward one minimum replication interval (15 minutes) and checks the schedule again.
• This process is repeated until an open replication interval is found. If no open interval is found in
the seven-day schedule, the refresh task is scheduled to run by using a time calculated as the
current time plus the registry-configured refresh interval. In this case, event ID 1671 is logged as
a warning message that indicates the group membership cache refresh task was unable to find
the next available time slot of connectivity to the site of the global catalog server.
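To make the stepping rule concrete, here is a minimal Python sketch of the rescheduling loop. The
schedule_allows() predicate stands in for the site link's replication schedule; all names here are
illustrative, not Windows APIs.

# A pure-Python sketch of the rescheduling rule described above.
from datetime import timedelta

STEP = timedelta(minutes=15)   # minimum replication interval
WINDOW = timedelta(days=7)     # length of a site link schedule

def next_refresh(now, refresh_interval, schedule_allows):
    projected = now + refresh_interval
    probe = projected
    while probe < projected + WINDOW:
        if schedule_allows(probe):
            return probe       # first open replication interval
        probe += STEP          # step forward 15 minutes and check again
    # No open slot in the seven-day schedule: event ID 1671 is logged and
    # the fallback time (current time plus refresh interval) is used anyway.
    return projected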
If faster updates are required, an administrator can initiate a cache refresh manually on the domain
controllers in the user’s site. For more information about refreshing the user cache, see “Registry
Settings that Affect Cache Refresh and Site Affinity Limits” later in this subject.
Determining the Site to Use for Populating and Refreshing the Cache
You can designate a site from which to initially populate and subsequently refresh the group
membership cache. The Universal Group Membership Caching feature user interface (UI) contains an
option to select a site from the list of existing sites. When a site has been selected and the cache on
a domain controller must be populated for the first time or updated, the domain controller contacts
a global catalog server in the designated site. If no site is designated, site link costs are evaluated to
determine the lowest-cost site that contains a global catalog server. The site link cost matrix is
supplied by the Intersite Messenger (ISM) service.
The UI that you use to designate a preferred site for cache refresh does not check for the presence
of a global catalog server in the selected site. Therefore, it is possible to designate a refresh site that
does not contain a global catalog server. In this case, or in any case where a refresh site is
designated but a global catalog server does not respond, the domain controller uses the site link
cost matrix and logs event ID 1668 in the Directory Service event log, which indicates that the group
membership cache refresh task did not locate a global catalog in the preferred site, but was able to
find a global catalog in the following available site. The event lists the named preferred site and the
actual site that was used.
Group Cache Storage
Cached group membership is stored as additional attributes of user and computer objects. Three
new attributeSchema objects were added to the Windows Server 2003 schema for the user object
class (and inherited by the computer object class) to support this feature:
• msDS-Cached-Membership: (cached membership) A binary large object that contains both
universal and global group memberships (the group SIDs) for the user. This attribute has the
following characteristics:
• Is single valued.
• Is not indexed.
• Can be deleted.
• Cannot be written.
• Is not replicated.
• msDS-Cached-Membership-Time-Stamp: (last refresh time) Contains the time that the cached
membership was last updated, either by the first logon or by a refresh. This attribute is used for
the “staleness” check. The maximum period that is tolerated when using a cached group
membership is called the staleness interval. The staleness interval, set in the registry as 7 days, is
measured against the current time and the last refresh time. If the timestamp indicates that the
cache is older than the staleness interval allows, the cached membership is invalidated and a
global catalog server is required for logon. This attribute has the following characteristics:
• Is large integer, time valued.
• Is indexed.
• Can be deleted.
• Cannot be written.
• Is not replicated.
• msDS-Site-Affinity: Identifies the site(s) where the account has logged on plus a timestamp that
indicates the start time for the cached logon in the respective site. Presence of a value in this
attribute causes automatic population of group memberships and refresh at every refresh
interval. When a domain controller refreshes its cached memberships (every 8 hours by default),
the timestamp is used for removing accounts from the cache that have not logged on within the
site affinity time limit (the cache expires). To avoid replication of this attribute every time the
account logs on, the timestamp is updated only when the age exceeds 50 percent of the age limit
that is set in the registry (180 days, by default) and one of the following actions occurs:
• The account logs on and is authenticated by a domain controller.
• A user changes his or her account password. This update ensures that users who go for
extended periods without logging on have their site affinity values updated.
This attribute has the following characteristics:
• Is multivalued.
• Is indexed.
• Can be deleted.
• Can be written.
• Is replicated.
Note
• You can use ADSI Edit in Windows Support Tools to clear the cached entries for an account by
deleting the values in msDS-Cached-Membership and msDS-Cached-Membership-Time-
Stamp from the user or computer object. The attribute values are repopulated at the next logon
or cache refresh, whichever comes first.
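A scripted equivalent of this cleanup might look like the following sketch, which uses the open source
ldap3 Python library to delete all values of the two cache attributes on a user object; the DN, host,
and credentials are placeholders.

# A hedged ldap3 sketch mirroring the ADSI Edit step above; the attributes
# repopulate at the next logon or cache refresh.
from ldap3 import Server, Connection, MODIFY_DELETE

conn = Connection(Server("dc1.contoso.com"),
                  user="CONTOSO\\admin", password="...", auto_bind=True)
conn.modify("CN=Jane Doe,OU=Branch,DC=contoso,DC=com",
            {"msDS-Cached-Membership": [(MODIFY_DELETE, [])],
             "msDS-Cached-Membership-Time-Stamp": [(MODIFY_DELETE, [])]})
print(conn.result)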
Registry Settings that Affect Cache Refresh and Site Affinity Limits
Registry settings on each domain controller determine the limits that are imposed on group
membership caches. Entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Services\NTDS\Parameters\ can be used to manage the cache, as shown in the following table.
Changes to these registry settings take effect the next time the refresh task runs.
Note
• In the following list of registry entries, some of the entry names contain the string "(minutes)".
This string is part of the entry name and must be included when creating the entry. For example:
• The value name Cached Membership Refresh Interval (minutes) is correct.
• The value name Cached Membership Refresh Interval is incorrect.
Registry Entries Used to Configure Caching Behavior

• Cached Membership Site Stickiness (minutes)
Type: DWORD. Default value: 172800 (value is in minutes; this setting equals 180 days).
Defines how long the site affinity will remain in effect. The site affinity value is updated when half
of the period defined by this value has expired. If an account has not logged on with a domain
controller for a period of one half of this value or longer, the account is removed from the list of
accounts whose memberships are being refreshed. The default value is recommended.

• Cached Membership Staleness (minutes)
Type: DWORD. Default value: 10080 (value is in minutes; this setting equals 7 days).
Determines the maximum staleness value when using cached group membership. The account
cannot log on if the cached membership list is older than the staleness value and no global catalog
server is available. The default value is recommended.

• Cached Membership Refresh Interval (minutes)
Type: DWORD. Default value: 480 (value is in minutes; this setting equals 8 hours).
Defines the length of time between group membership cache refreshes. This value should be
changed to synchronize once a day (1440 minutes). For dial-up connections, you might want a
higher value than 24 hours. Lowering the value to increase the frequency of cache refresh is not
recommended because it causes increased WAN traffic, potentially defeating the purpose of
Universal Group Membership Caching.

• Cached Membership Refresh Limit
Type: DWORD. Default value: 500.
Defines the maximum number of user and computer accounts that are refreshed. Increase this
setting only if event ID 1669 occurs in the event log or you have more than 500 users and
computers in a branch. If the number of users and computers in a branch exceeds 500, a general
recommendation is to either place a global catalog server in the branch or increase the Cached
Membership Refresh Limit above 500. Be aware that increasing the limit might incur more WAN
traffic than that caused by global catalog update traffic.

• SamNoGcLogonEnforceKerberosIpCheck
Type: DWORD. Default value: 0.
By default (0), site affinity is tracked for Kerberos logons that originate outside the site. A value
of 1 configures SAM so that it does not give site affinity to Kerberos logon requests that originate
outside the current site. Configure this option to 1 on domain controllers in branch sites to prevent
logon requests from outside the site being given affinity for the local site. This setting prevents an
account's affinity from being changed during the logon process when connecting to a Kerberos key
distribution center (KDC) outside of the account's site.

• SamNoGcLogonEnforceNTLMCheck
Type: DWORD. Default value: 0.
A value of 1 configures SAM to not give site affinity to NTLM logon requests that are network logon
requests. This setting reduces the number of accounts with site affinity by excluding those that
simply accessed network resources (by using NTLM). Do not enable this option if you use older
clients that must authenticate from outside the branch to local resources in the branch.
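If the once-a-day synchronization suggested above is wanted, a change like the following sketch
(standard library winreg, run with administrative rights on the domain controller, registry backed up
first) would set it; the value name and path come from the list above, but treat the snippet as
illustrative rather than a supported tool.

# A sketch that sets the cache refresh interval to once a day (1440 minutes).
import winreg

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters",
    0, winreg.KEY_SET_VALUE)
# Note the "(minutes)" suffix: it is part of the value name.
winreg.SetValueEx(key, "Cached Membership Refresh Interval (minutes)",
                  0, winreg.REG_DWORD, 1440)
winreg.CloseKey(key)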
Methods of Refreshing the Cached Memberships
You can refresh cached memberships on a single domain controller.
For a one-time, immediate cache refresh:
• Use Ldp.exe (Windows Support Tools) to modify the operational attribute
updateCachedMemberships on the rootDSE with a value of 1. Adding a value of 1 to this
attribute instructs the local domain controller to perform the update. If the site link schedule
allows replication at the time you modify the attribute, this update occurs right away. This method
is the preferred method for updating a single domain controller because it does not require
restarting the domain controller. For information about using Ldp to modify this attribute, see the
Note below.
-or-
• Restart the domain controllers in the site to restart the cache refresh interval, which triggers a
cache refresh.
Note
• Use the following procedure to modify the updateCachedMemberships operational attribute.
To perform this operation, the user needs the control access right "Refresh Group Cache for
Logons" on the NTDS Settings object for the domain controller. Default access is granted to
System, Domain Admins, and Enterprise Admins.
1. Start Ldp.exe and bind to the target domain controller where the cache reset is to be performed.
(Do not select Tree view in the View menu.) When first binding to a domain controller with Ldp,
the default location is rootDSE. You can view the attributes for rootDSE in the details pane.
However, operational attributes are not listed.
2. On the Browse menu, click Modify.
3. In the Modify dialog box, in the Edit Entry Attribute box, type
updateCachedMemberships. In the Values box, type 1. Be sure to leave the Dn box blank.
4. Click Enter. The command should appear in the entry list.
5. Click Run. If the operation was successful, Ldp will report “Modified” in the output.
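The same modification can be made programmatically. The following hedged sketch uses the open
source ldap3 Python library to add the value 1 to the updateCachedMemberships operational
attribute on the rootDSE (an empty DN); the host and credentials are placeholders, and the caller
still needs the control access right described above.

# A hedged ldap3 equivalent of the Ldp.exe steps above.
from ldap3 import Server, Connection, MODIFY_ADD

conn = Connection(Server("dc1.contoso.com"),
                  user="CONTOSO\\admin", password="...", auto_bind=True)
# An empty DN targets the rootDSE, matching the blank Dn box in Ldp.
conn.modify("", {"updateCachedMemberships": [(MODIFY_ADD, ["1"])]})
print(conn.result)  # "success" corresponds to Ldp reporting "Modified"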
Method of Clearing the Cached Memberships
You can clear all cached memberships on all domain controllers in a site. However, doing so can
affect performance. The need to clear the cached memberships might arise when too many accounts
are cached, making it impossible to refresh all account caches during a single cache refresh. For
example, sites that have many transient accounts might exceed the 500-account refresh limit.
If you have more than 500 accounts cached and you want to clear them for all domain controllers in
the site, you can do so by editing the registry.
Note
• If you must edit the registry, use extreme caution and be sure that you back it up first. Registry
information is provided here as a reference for use by only highly skilled directory service
administrators. Do not directly edit the registry unless, as in this case, no Group Policy setting or
other Windows tools can accomplish the task. Modifications to the registry are not validated by
the registry editor or by Windows before they are applied, and as a result, incorrect values can be
stored. Storage of incorrect values can result in unrecoverable errors in the system.
On one domain controller, you can set the Cached Membership Site Stickiness (minutes)
registry entry to 0 and then initiate a cache refresh by using the operational attribute method on
that domain controller, as described in “Methods of Refreshing the Cached Memberships” earlier in
this subject. The 0 value in the setting causes the cache to be purged—values in all three attributes
(msDS-Cached-Membership, msDS-Cached-Membership-Time-Stamp, and msDS-Site-
Affinity) are cleared. After the site stickiness attribute deletion has replicated within the site, you
can then initiate cache refresh on the other domain controllers in the site. Remember to return the
value in Cached Membership Site Stickiness (minutes) to its default value of 172800.
Diagnostic Logging Levels and Events
Diagnostic logging for Universal Group Membership Caching can be set in the 20 Group Caching
registry entry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics.
Data type: REG_DWORD
Value range: 0-5
Default value: 0
Significant events are reported at logging level 2, with many additional events reported at logging
level 5. For troubleshooting, set the logging level to 5.
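A sketch of that troubleshooting change, again using the standard library winreg module with
administrative rights on the domain controller:

# Raise the 20 Group Caching logging level to 5; set it back to 0 afterwards.
import winreg

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics",
    0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "20 Group Caching", 0, winreg.REG_DWORD, 5)
winreg.CloseKey(key)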
Sample Events at Logging Level 0
Event ID 1667: The group membership cache refresh task detected that the following site in which a
global catalog was found is not one of the cheapest sites, as indicated by the published site link
information.
Event ID 1668: The group membership cache refresh task did not locate a global catalog server in
the preferred site, but was able to find a global catalog server in the following available site.
Preferred site: <site name> Available site: <site name>
Event ID 1669: The group membership cache refresh task has reached the maximum number of
users for the local domain controller.
Event ID 1670: The group membership cache refresh task is behind schedule. Consider forcing a
group membership cache update.
Sample Events at Logging Level 2
Event ID 1776 internal event: The group membership cache task is starting.
Event ID 1777 internal event: The group membership cache task has finished. The completion status
was 0, and the exit Internal ID was ######.
Event ID 1779 internal event: The Global Catalog Domain Controller <dcname> in site <site name>,
domain <domain name> will be used to update the group memberships.
Event ID 1781 internal event: By examining the published connectivity information, the group
membership cache task has determined site <site name> is a site with a low network cost to
contact. The task will schedule itself based on the schedule of network connectivity to this site.
Event ID 1782 internal event: By examining the published connectivity information, the group
membership cache task cannot find an efficient site to obtain group membership information. The
task will run using the global catalog server that is closest, as determined by the Net Logon locator,
and will schedule itself based on a fixed period.
Event ID 1842 internal event: The following site link will be used to schedule the group membership
cache refresh task. Site link: <distinguished name of site link>
Sample Events at Logging Level 5
Event ID 1778 internal event: The group membership cache task will run again in xx minutes.
Event ID 1784 internal event: The group membership cache task determined that site
<distinguished name of site> does not have a global catalog server.
How the Cache is Populated at First Logon
By default, the caching attributes on the user and computer objects are not populated. The following
diagram shows how the domain controller builds the list of SIDs to be cached and where in the
process the caching attributes are populated during the user’s first logon in the site. This example
assumes that the user is in a site that has Universal Group Membership Caching enabled, the
domain of the client workstation is the same as the domain of the user, and the domain has a
functional level that allows universal groups.
Universal Group Membership Caching Process at First Logon
The following events occur at each step in the preceding diagram:
1. A user logs on in a site where Universal Group Membership Caching is enabled. The user is
authenticated by the domain controller.
2. The domain controller checks the values of the three caching attributes of the user object.
3. Finding that the attributes are not populated, the domain controller checks its local directory
and retrieves the SID of the user (including SID history, if available) and the SIDs of all global
groups to which the user belongs.
4. The domain controller sends this list of SIDs to the global catalog server. The global catalog
server checks the universal group memberships of the user and all global groups in the list. The
global catalog server returns the list of combined universal group and global group SIDs to the
domain controller.
5. The user’s cache attributes are populated as follows:
a. The combined list of global group and universal group SIDs is recorded in the msDS-
Cached-Membership attribute.
b. The time is recorded in the msDS-Cached-Membership-Time-Stamp attribute (this
time indicates the last time the cache was updated; on the first logon, it also happens to
be the time the user logged on).
c. If SamNoGcLogonEnforceNTLMCheck or
SamNoGcLogonEnforceKerberosIpCheck, or both, are enabled on the domain
controller, the msDS-Site-Affinity attribute is ignored.
d. If the GUID for the local site exists in the msDS-Site-Affinity attribute and the settings
in step c are not enabled, the timestamp value in msDS-Site-Affinity is evaluated as
follows: If the value indicates an age that is less than half the value in Cached
Membership Site Stickiness (minutes), the logon proceeds. If the timestamp
indicates an age that is greater than half the value in Cached Membership Site
Stickiness (minutes), or if the attribute is not populated, the site GUID and time are
written to the msDS-Site-Affinity attribute, and the logon proceeds.
6. The domain controller checks its local directory for any domain local groups to which the user
belongs and adds domain local group SIDs to its list of global group and universal group SIDs.
Note
• The process for accomplishing Step 6 differs depending on whether the domain of the client
computer is the same as the domain of the user and, if not, whether the client computer is
joined to a domain that has a mixed domain mode or functional level, or a native domain
mode or functional level. For more information about how SIDs are retrieved and added to
access tokens, see “Access Tokens Technical Reference.”
7. The domain controller sends the entire list of SIDs to the client computer, where the LSA
retrieves SIDs of the user’s built-in group memberships and constructs the user’s access token.
Note
• Global catalog servers in a site where caching is enabled do not populate the msDS-Cached-
Membership and msDS-Cached-Membership-Time-Stamp attributes of users they
authenticate. Because global catalog servers are already aware of universal group memberships
throughout the forest and global group memberships for the domain, there is no need for them to
use these attributes.
How the Cache is Used for Subsequent Logons
When Universal Group Membership Caching is enabled in the site, the following sequence occurs
during account logon:
1. The account is authenticated by the contacted domain controller.
2. The domain controller checks for the presence of values in the caching attributes of the
respective user or computer object. If the attribute values are present, the domain controller
updates the values as follows:
a. If the value in the msDS-Cached-Membership-Time-Stamp attribute indicates an age
that is less than the staleness interval (Cached Membership Staleness (minutes),
default seven days), the domain controller reads the group SIDs from the msDS-
Cached-Membership attribute and the logon proceeds.
b. If the value in msDS-Cached-Membership-Time-Stamp indicates an age of greater
than the staleness interval, the domain controller contacts a global catalog server to
request the universal group membership. If a global catalog server cannot be contacted,
the logon is denied.
c. If the value of the timestamp in msDS-Site-Affinity is equal to or older than 50 percent
of the site stickiness setting, the timestamp is updated with the current time.
3. The domain controller returns the group SIDs from the cache plus any domain local group SIDs
to the client computer and the logon proceeds.
Note
• At no time during a successful logon does the local domain controller check with a global catalog
server to see if the account’s group membership has changed. Changes to an account’s group
membership are not reflected in the account’s access token until the local domain controller
performs a cache refresh. The default amount of time between cache refreshes is eight hours.
This interval could result in an inconsistent view of group membership if the account was
authenticated by a domain controller in a different site. This discrepancy might also confuse
administrators who are unfamiliar with how universal group membership caching works.
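As a summary of the decision in step 2 above, the following minimal Python sketch models the
staleness and site-stickiness checks. The function names and return strings are illustrative only; the
defaults mirror the registry values described earlier (7-day staleness, 180-day site stickiness).

# A pure-Python sketch of the subsequent-logon decision, not Windows code.
from datetime import timedelta

STALENESS = timedelta(days=7)
STICKINESS = timedelta(days=180)

def cached_logon(now, last_refresh, gc_reachable):
    if now - last_refresh < STALENESS:
        return "use cached group SIDs"   # step a: cache is fresh enough
    if gc_reachable:
        return "query global catalog"    # step b: cache too old
    return "logon denied"                # stale cache and no GC available

def should_update_affinity(now, affinity_stamp):
    # Step c: rewrite the timestamp only past 50% of the stickiness window,
    # to avoid replicating msDS-Site-Affinity on every logon.
    return now - affinity_stamp >= STICKINESS / 2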
How the Cache is Refreshed
The cache refresh process occurs automatically on every domain controller that is running Windows
Server 2003 and has received replication of the msDS-Site-Affinity attribute update for a user or
computer object or has already cached group memberships. The refresh operation occurs differently
depending on whether a site is selected for the preferred refresh site.
Cache Refresh Process When a Preferred Refresh Site is Not Selected
When the refresh interval expires, the domain controller proceeds as follows:
1. Lists all the site links that connect the domain controller’s site to a site that hosts a global
catalog server in increasing order of cost values on the site link objects.
2. Selects the lowest-cost site link and schedules the refresh by using the site link
schedule. If no site link schedule is set, then replication is always available.
Depending on the schedule, the refresh proceeds as follows:
• If the schedule currently allows replication, the domain controller begins the refresh.
• If the schedule does not currently allow replication, the domain controller schedules the
refresh to begin when the schedule allows replication.
Note
When the refresh is postponed according to the site link schedule, a random stagger in the range of
0-15 minutes is added to the computed start time. Schedule staggering has the effect of ensuring
that domain controllers begin their refresh at slightly different times, thereby improving load
balancing on the global catalog server.
3. When the schedule allows replication, begins the refresh by locating and binding to a global
catalog server in the next closest site.
4. Removes accounts that have a populated cache but no site affinity. Cached entries that do not
include a populated msDS-Site-Affinity value are purged at this time. A maximum of
64 entries are removed at a time. If more entries need to be removed, they are removed during
subsequent refreshes.
5. Removes any account whose site affinity matches the local site, but whose site affinity time
period has expired. In this case, the values in the three cache attributes are deleted and this
account no longer has a group membership cache on the domain controller.
6. Builds a list of accounts by querying the global catalog for all accounts that have GUIDs in their
msDS-Site-Affinity attribute that match the GUID of the domain controller’s site.
7. Updates cache attributes of the accounts in the list by querying the global catalog for
each account’s group membership, as follows:
• Update the msDS-Cached-Membership attribute with the account’s group membership
SIDs.
• Update the msDS-Cached-Membership-Time-Stamp attribute with the time of refresh.
8. Repeats the process for each account until all accounts are updated or until the refresh limit of
500 accounts is reached. If the refresh limit is reached, the domain controller logs event ID 1669
in the Directory Service event log, and the refresh stops.
9. Checks to ensure that the refresh task has not fallen behind in terms of the maximum period
allowed for an account’s cached membership list to be valid for logons. This step is
accomplished by locating the record with the oldest msDS-Cached-Membership-Time-Stamp
value and comparing the timestamp value to the staleness interval (seven days by default). If
the entry is more than seven days old, the domain controller logs event ID 1670, indicating that
the refresh task has fallen behind.
Note
• When the domain controller encounters the refresh limit, it stops updating cache entries.
Because the order in which the updates occur cannot be predicted, there is no way to ensure
that the caches of the most recent accounts are updated. The staleness check in step 9
checks all cached entries, even those excluded due to exceeding the refresh limit. After about
one week, the non-updated cache entries will become stale and cause the falling behind error
to be reported in the event log.
Cache Refresh Process When a Preferred Refresh Site is Selected
When a site is selected to always be used for refreshing the group membership cache, the domain
controller does not need to order the site links according to cost, but simply contacts a global
catalog server in the specified site. However, if no global catalog server is available at the time the
refresh is attempted, the domain controller logs event ID 1782, indicating that a domain controller
could not be found in the preferred site, and uses the site link cost to locate a global catalog server
in the next closest site.
Inconsistent Access to Domain-Based Objects on Global Catalog Servers
When specifying Read or List permissions for domain data that is also replicated to the global
catalog, use global groups rather than domain local groups because the access token that is created
for the user by the global catalog server does not necessarily contain information about domain
local groups to which the user belongs.
When a user connects to a global catalog server, an access token is created for the user that is used
in subsequent access decisions on the server. If the user is a member of a domain other than the
domain of the global catalog server, the global catalog server contacts a domain controller in the
user’s domain to authenticate the user and retrieve authorization data. The domain controller
returns information about the user, including the SIDs of global groups in the user’s domain to which
the user belongs. The domain controller from the user's domain does not return domain local group
SIDs to the global catalog server.
Universal group membership is retrieved from the global catalog, and the global catalog server looks
to its own domain (which is not necessarily the domain of the user) for any domain local groups to
which the user belongs. Thus the access token for the user on the global catalog server contains the
global groups and universal groups to which the user belongs, as well as any domain local groups to
which the user belongs in the domain of the global catalog server.
The effect of a missing domain local group SID in the user’s access token is that the user’s access to
global catalog data is unpredictable. For example, if read access to the homePhone attribute of a user
object is denied to a domain local group in the user's domain, and the user is a member of that
group, the user is nevertheless able to view that attribute in the global catalog when both of the
following conditions are true:
• The homePhone attribute is replicated to the global catalog.
• The global catalog server to which the user connects does not host a writable copy of the user’s
domain.
Similarly, if the user is a member of a domain local group in a domain other than the domain hosted
by the global catalog server, and that group is granted read access to the homePhone attribute,
the user cannot view that attribute in the global catalog.
Global Catalog Searches
The location of an object in Active Directory is provided by the distinguished name of the object,
which includes the full path to a replica of the object, culminating in the directory partition that holds
the object. However, the user or application does not always have the distinguished name of the
target object, or even the domain of the object. To locate objects without knowing the domain, the
most commonly used attributes of the object are replicated to the global catalog. By using these
object attributes and directing the search to the global catalog, requesters can find objects of
interest without having to know their directory location. For example, to locate a printer, you can
search according to the building of the printer. To locate a person, you can provide the name of the
person. To locate all people who are managed by someone, you can provide the manager’s name.
LDAP Search Ports
Active Directory uses the Lightweight Directory Access Protocol (LDAP) as its access protocol. LDAP
search requests can be sent and received by Active Directory on port 389 (the default LDAP access
port) and port 3268 (the default global catalog port). LDAP traffic that is protected by Secure Sockets
Layer (SSL) uses ports 636 and 3269, respectively. In this discussion, search behavior that applies to
ports 389 and 3268 also applies to the respective behavior of LDAP queries over ports 636 and 3269.
When a search request is sent to port 389, the search is conducted on a single domain directory
partition. If the object is not found in that domain or the schema or configuration directory partitions,
the domain controller refers the request to a domain controller in the domain that is indicated in the
distinguished name of the object.
When a search request is sent to port 3268, the search includes all directory partitions in the forest
— that is, the search is processed by a global catalog server. If the request specifies attributes that
are part of the PAS, the global catalog can return results for objects in any domain without
generating a referral to a domain controller in a different domain. Only global catalog servers
receive LDAP requests through port 3268. Certain LDAP client applications are programmed to use
port 3268. Even if the data that satisfies a search request is available on a regular domain controller,
if the application launching the search uses port 3268, the search necessarily goes to a global
catalog server.
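For illustration, the following minimal sketch (using the open source ldap3 Python library) performs a
forest-wide global catalog search over port 3268, here looking up a user by name; the host,
credentials, base DN, and filter are placeholders.

# A minimal ldap3 sketch of a global catalog search on port 3268. Binding
# to 3268 on a server that is not a global catalog would be refused.
from ldap3 import Server, Connection, SUBTREE

gc = Server("dc1.contoso.com", port=3268)
conn = Connection(gc, user="CONTOSO\\admin", password="...", auto_bind=True)
conn.search("DC=contoso,DC=com",  # on 3268, spans all directory partitions
            "(&(objectClass=user)(sAMAccountName=jdoe))",
            search_scope=SUBTREE,
            attributes=["displayName", "mail"])
print(conn.entries)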
Search Criteria That Target the Global Catalog
Searches are directed to a global catalog server under the following conditions:
• You specify port 3268 or 3269 in an LDAP search tool.
• You select Entire Directory in a search-scope list in an Active Directory snap-in or search tool,
such as Active Directory Users and Computers or the Search command on the Start menu.
• You write the distinguished name as an attribute value, where the distinguished name represents
a nonlocal object. For example, if you are adding a member to a group and the member that you
are adding is from a different domain, a global catalog server verifies that the user object
represented by the distinguished name exists and obtains its GUID.
Characteristics of a Global Catalog Search
The following characteristics differentiate a global catalog search from a standard LDAP search:
• Global catalog queries are directed to port 3268, which explicitly indicates that global catalog
semantics are required. By default, ordinary LDAP searches are received through port 389. If you
bind to port 389, even if you bind to a global catalog server, your search includes a single domain
directory partition. If you bind to port 3268, your search includes all directory partitions in the
forest. If the server you attempt to bind to over port 3268 is not a global catalog server, the
server refuses the bind.
• Global catalog searches can specify a non-instantiated search base, indicated as "com" or " "
(blank search base).
• Global catalog searches cross directory partition boundaries. The extent of the standard LDAP
search is the directory partition.
• Global catalog searches do not return subordinate referrals. If you use port 3268 to request an
attribute that is not in the global catalog, you do not receive a referral to it. Subordinate referrals
are an LDAP response; when you query over port 3268, you receive global catalog responses,
which are based solely on the contents of the global catalog. If you query the same server by
using port 389, you receive referrals for objects that are in the forest but whose attributes are not
referenced in the global catalog.
Note
• A referral to a directory that is external to Active Directory can be returned by the global
catalog if a base-level search for an external directory is submitted and if the distinguished
name of the external directory uses the domain component (dc=) naming attribute. This
referral is returned according to the ability of Active Directory to construct a Domain Name
System (DNS) name from the domain components of the distinguished name and is not based
on the presence of any cross-reference object. The same referral is returned by using the LDAP
port; it is not specific to the global catalog.
Because the member attribute is not replicated to the global catalog for all group types, and
because the memberOf attribute derives its value by referencing the member attribute (called
back links and forward links, respectively), the results of searches for members of groups and for
the groups to which a member belongs can vary, depending on whether you search the global catalog
(port 3268) or the domain (port 389), the kind of groups that the user belongs to (global groups or
domain local groups), and whether the user belongs to universal groups outside the local domain.
For more information about global catalog searches and the implications of searching on back links
and forward links, see “How Active Directory Searches Work.”
The Infrastructure Master and Phantom Records
An attribute that has a distinguished name as a value references (points to) the named object. When
the referenced object does not actually exist in the local directory database because it is in a
different domain, a placeholder record called a phantom is created in that database as the object
reference. Because there is a reference to it, the referenced object must exist in some form, either
as the full object (if the domain controller stores the respective domain directory partition) or as an
object reference (when the domain controller does not store that domain).
The infrastructure master is a single domain controller in each domain that tracks name changes of
referenced objects and updates the references on the referencing object. When a referenced object
is moved to a different domain (which effectively renames the object), the infrastructure master
updates the distinguished name of the phantom. The infrastructure master finds phantom records
by using a database index that is created only on domain controllers that hold the infrastructure
operations master role. When the reference count of the phantom falls to zero (no objects are
referencing the object that the phantom represents), garbage collection on each domain controller
removes the phantom.
Because objects can reference objects in different domains, the infrastructure operations master
role is not compatible with global catalog server status if more than one domain is in the forest. If a
global catalog server holds the infrastructure operations master role, phantom records are never
created because the referenced object is always located in the directory database on the global
catalog server.
For more information about the infrastructure operations master role, see “How Operations Masters
Work.”
Exchange Address Book Lookups
The Exchange Server directory service for Exchange 2000 Server and Exchange Server 2003 is
integrated with Active Directory. When mail users want to find a person within their organization,
they usually search the global address book (GAL), which is an aggregation of all messaging
recipients in the enterprise, including mailbox-enabled users, mail-enabled users, groups, and
contacts. The GAL is a virtual linked list of pointers to the mail recipient objects that comprise it. Mail
recipients can be user accounts (both enabled and disabled accounts), contacts, distribution lists,
security groups, and folders. The GAL is automatically populated by a service on the Exchange
server.
Note
• If you must edit the registry, use extreme caution and be sure that you back it up first. Registry
information is provided here as a reference for use by only highly skilled directory service
administrators. Do not directly edit the registry unless, as in this case, no Group Policy setting or
other Windows tools can accomplish the task. Modifications to the registry are not validated by
the registry editor or by Windows before they are applied, and as a result, incorrect values can be
stored. Storage of incorrect values can result in unrecoverable errors in the system.
Requirements for Global Catalog Readiness
By default, a global catalog server is not considered “ready” (that is, it does not yet advertise itself
in DNS as a global catalog server) until all read-only directory partitions have been fully replicated
to the new global catalog server. The Global Catalog Partition Occupancy registry entry under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters determines how many
read-only directory partitions must be present on a domain controller for it to be considered a
global catalog server, from no partitions (0) to all partitions (6).
The default occupancy value for Windows Server 2003 domain controllers requires that all read-only
directory partitions be replicated to the global catalog server before the Net Logon service registers
SRV resource records in DNS. For most conditions, this default provides the best option for ensuring
that a global catalog server provides a consistent view of the directory. In less common
circumstances, however, it might be useful to make the global catalog server available with an
incomplete set of partial domain directory partitions—for example, when delay of replication of a
domain that is not required by users is jeopardizing their ability to log on.
The Global Catalog Partition Occupancy entry can have the values shown in the following table.
Global Catalog Partition Occupancy Level Values
Value Description
0 No occupancy requirement. Removing the occupancy level requirement might be useful in a
scenario where domain controllers are being staged for deployment but are not yet in
production.
1 At least one read-only directory partition in the site has been added by the KCC. This level,
as well as levels 3 and 5, makes it possible to distinguish between a source for the
directory partition being reachable (at least one object has been returned) and the entire
directory partition having been replicated (as in levels 2, 4, and 6).
When the KCC can reach the first object, it can create a replica link, which is the agreement
between the source and destination domain controllers to replicate to the destination. If the
KCC cannot reach a source, the KCC logs event ID 1558 in the Directory Service log, which
indicates the distinguished name of the directory partition that has not been fully
synchronized. In this case, the KCC continues to try to replicate the partition each time it
runs (every 15 minutes by default).
When the KCC succeeds in creating the replica link, it passes responsibility for retrying and
completing the synchronization to the replication engine. The KCC then stops logging
events, after which the replication status can be checked by using the repadmin /showrepl
command.
2 At least one read-only directory partition in the site has been fully synchronized.
3 All read-only directory partitions in the site have been added by the KCC (at least one has
been fully synchronized). In this case, the KCC has been able to contact one source for every
directory partition in the site. This level is useful when you want to advertise a global
catalog server as soon as possible with a high likelihood of success.
4 All read-only directory partitions in the site have been fully synchronized. With this setting, if
a source for any directory partition is not available, DNS registrations cannot occur. On
domain controllers that are running Windows 2000 Server with Service Pack 1 (SP1) or
Windows 2000 Server with Service Pack 2 (SP2), this occupancy level is the default
requirement before the global catalog server is advertised in DNS.
5 All read-only directory partitions in the forest have been added by the KCC (at least one has
been fully synchronized).
6 All read-only directory partitions in the forest have been fully synchronized. On domain
controllers that are running Windows Server 2003 or Windows 2000 Server with SP3 or later,
this occupancy level is the default requirement before the global catalog server is
advertised in DNS. This setting ensures the highest level of consistency.
Event ID 1578 reports the level that is required and the level that the domain controller has
achieved.
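The occupancy requirement can be inspected and changed programmatically. The following Python sketch is a minimal illustration that assumes it is run locally on the domain controller with administrative rights; it uses the standard winreg module with the entry name given above, and the target level of 4 is only an example:

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        current, _ = winreg.QueryValueEx(key,
                                         "Global Catalog Partition Occupancy")
    except FileNotFoundError:
        current = None  # entry absent; the operating system default applies
    print("current occupancy requirement:", current)
    # Example only: require level 4 (all read-only directory partitions in
    # the site fully synchronized) instead of the default of 6.
    winreg.SetValueEx(key, "Global Catalog Partition Occupancy", 0,
                      winreg.REG_DWORD, 4)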
Advertising a Global Catalog Server Prior to Full Synchronization
By default, a domain controller checks every 30 minutes to see whether it has received all of the
read-only directory partitions that are required to be present before the server advertises itself in
DNS as a global catalog server. Event ID 1110 indicates that the promotion is being delayed because
the required directory partitions have not all been synchronized.
This delay is controlled by the Global Catalog Delay Advertisement (sec) registry entry under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. If you set a
value for Global Catalog Delay Advertisement (sec), it overrides the requirements set in Global
Catalog Partition Occupancy and allows global catalog advertisement without requiring full
synchronization of all read-only directory partitions.
When conditions preclude the successful synchronization of the new global catalog server, you can
force advertisement of the global catalog server and then remove the global catalog from the
server. Until the global catalog server is successfully advertised, you cannot remove it.
Replication Process for Global Catalog Creation
When you designate a domain controller to be a global catalog server, the Knowledge Consistency
Checker (KCC) on the domain controller runs immediately and updates the replication topology.
When the KCC runs, it checks to see whether the Global Catalog option is selected for any domain
controllers, and creates the replication topology accordingly. The KCC configures the newly selected
global catalog server to be the destination server for a read-only replica of every domain directory
partition in the forest except for the writable domain directory partition that the server already
holds. The KCC on the global catalog server must be able to reach a server that will be the source of
each read-only directory partition.
When the KCC locates an available source domain controller, it creates an inbound connection on
the new global catalog server and replication of that read-only partition takes place. If the source is
within the site, replication begins immediately. If the source is in a different site, replication begins
when it is next scheduled. Replication of all objects in the partial directory partition must complete
successfully before the directory partition is considered to be present on the global catalog server.
Successful Completion of Global Catalog Creation
When all directory partitions are present, the domain controller sets its rootDSE
isGlobalCatalogReady attribute to TRUE and the Net Logon service on the domain controller
registers SRV resource records that specifically advertise the global catalog server in DNS. At this
point, the global catalog is considered to be available, and event ID 1119 is logged in the Directory
Service event log.
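Readiness can also be confirmed by reading the rootDSE. The following minimal Python sketch uses the ldap3 library; the server name is a placeholder:

import ldap3

server = ldap3.Server("dc01.corp.contoso.com", get_info=ldap3.ALL)
conn = ldap3.Connection(server, auto_bind=True)

# isGlobalCatalogReady is TRUE once the domain controller advertises
# itself in DNS as a global catalog server.
print("isGlobalCatalogReady:", server.info.other.get("isGlobalCatalogReady"))
conn.unbind()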
Global Catalog Replication
Although read-only directory partitions on global catalog servers cannot be updated directly by
administrative LDAP writes, they are updated through Active Directory replication when changes
occur to their respective writable replicas.
The following diagram represents the Active Directory database on a global catalog server in the
corp.contoso.com forest root domain. A global catalog server has a single directory database.
However, to represent the different logical directory partitions in the forest, the diagram shows the
database divided into segments. The top three segments represent directory partitions that are
writable replicas for the domain controller (the domain, configuration, and schema directory
partitions). The bottom three segments represent directory partitions that are read-only replicas of
the other domains in the forest.
Writable and read-only replicas in the Active Directory database on a global catalog
server
The source domain controller for replication of a given directory partition to a global catalog server
can be either a non-global catalog domain controller or another global catalog server. In the
following diagram, each directory partition on the global catalog server is being updated by a non-
global catalog domain controller. The writable replicas on the global catalog server are updated by a
domain controller that is authoritative for the same domain, corp.contoso.com. The replication for
the corp.contoso.com domain and the configuration and schema directory partitions is two-way
because the replicas are all writable.
Each of the read-only replicas is updated by a source domain controller that is authoritative for the
respective directory partition. The replication is one-way because read-only replicas never update
writable replicas.
Direction of directory partition replica updates between a global catalog server and
other domain controllers
Replication Between Global Catalog Servers
As is true for all domain controllers, a global catalog server uses a single topology to replicate the
schema and configuration directory partitions, and it uses a separate topology, if needed, for each
domain directory partition. However, when a two-way connection exists between the servers, either
for replication of the schema and configuration directory partitions or for replication in opposite
directions of the two writable domain directory partitions, all replicas on each global catalog server
use the same connection to update their common replicas when changes are available.
The diagram below shows the directions of replication between directory partitions on two global
catalog servers that are in different sites and are authoritative for different domains. The writable
replicas of soam.corp.contoso.com and corp.contoso.com update the respective read-only replicas in
one direction only (a writable replica is never updated by a read-only replica). Because neither
domain controller is authoritative for the noam.corp.contoso.com and eur.corp.contoso.com domain
replicas, the global catalog servers can be sources for replication of these partial read-only replicas.
This replication is shown as two-way because a two-way connection already exists and these
replicas are each capable of updating the other.
Direction of directory partition replica updates between two global catalog servers in
different domains
In the preceding diagram, the read-only replicas can also be updated from other domain controllers.
In a forest that has a forest functional level of Windows Server 2003 or Windows Server 2003
interim, the intersite KCC algorithm avoids creating redundant connection objects by implementing
one-way replication where possible. For example, if the schema and configuration writable replicas
and the Corp and Eur read-only domain replicas on GC1 are all updated by a domain controller other
than GC2, replication of the Corp and Eur read-only replicas from GC1 to GC2 occurs in one direction
only, if it occurs at all. In this case, GC1 might not generate a connection object for replication from GC2.
Replication of Changes to the Global Catalog Partial Attribute Set
The default set of attributes that are replicated to the global catalog is identified by the schema.
These attributes are referred to as the partial attribute set (PAS) because they provide a replica of
every object in the directory, but the object includes only those attributes that are most likely to be
used for searches. If you want to add an attribute to the partial attribute set, you can mark the
attribute by using the Active Directory Schema snap-in to edit the
isMemberOfPartialAttributeSet value on the respective attributeSchema object. If the value is
set to TRUE, the attribute is replicated to the global catalog.
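The supported tool for this change is the Active Directory Schema snap-in, as noted above. For illustration only, the equivalent LDAP write might look like the following Python sketch using the ldap3 library. The attribute DN, server, and credentials are hypothetical; schema changes must be made on the schema operations master and require Schema Admins membership:

import ldap3

conn = ldap3.Connection(
    ldap3.Server("schema-master.corp.contoso.com"),
    user="CORP\\SchemaAdmin", password="password", auto_bind=True)

# Hypothetical attributeSchema object to mark for the partial attribute set.
attr_dn = ("CN=Employee-Number,CN=Schema,CN=Configuration,"
           "DC=corp,DC=contoso,DC=com")
conn.modify(attr_dn,
            {"isMemberOfPartialAttributeSet": [(ldap3.MODIFY_REPLACE,
                                                ["TRUE"])]})
print(conn.result["description"])  # "success" if the change was accepted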
When a schema change affects the set of attributes that are marked for inclusion in the global
catalog (an attribute is added to the partial attribute set), replication of the change occurs
differently on global catalog servers running Windows 2000 Server and those running
Windows Server 2003. Depending on the version of Windows that is running on the replication
partners, an update to the PAS can cause either a full synchronization of all directory partitions in
the global catalog or replication of only the updated attributes, as follows:
• Updates only: When both servers are running Windows Server 2003, only the changed
attributes are replicated to the global catalog servers. Full synchronization is not triggered, so
there is no additional replication impact.
• Full synchronization: When both servers are running Windows 2000, the global catalog server
initiates a full synchronization of all partial, read-only domain directory partition replicas to
become up-to-date with the extended replicas on other domain controllers. If the partial directory
partition replica can be synchronized over an RPC connection, the domain controller attempts a
full synchronization over the RPC connection before it uses a configured SMTP connection. If full
synchronization is completed, the up-to-dateness vector that it creates ensures that later
synchronization requests on other connections do not include data that has been received during
the initial full synchronization.
Full synchronization of a global catalog server causes increased network traffic while it is in
progress and can take from several minutes to hours, depending on the size of the directory.
Although interruption of service does not occur, this replication causes higher bandwidth
consumption than is required for usual day-to-day replication. The resulting bandwidth
consumption for each global catalog server is equivalent to that caused by adding the global
catalog to a domain controller. Whenever the isMemberOfPartialAttributeSet value of a new
attributeSchema object in the schema directory partition is set to TRUE, event ID 1575 occurs,
stating that full synchronization is required.
• Full synchronization: When one global catalog server is running Windows 2000 Server and the
other is running Windows Server 2003, and the server running Windows Server 2003 replicates
the change to the server running Windows 2000 Server, the server running Windows Server 2003
reverts to the Windows 2000 Server behavior described above.
Note
• The Windows Server 2003 schema contains new attributes that are marked for inclusion in the
partial attribute set. Replication of these new attributes to global catalog servers is triggered by
raising the forest functional level to Windows Server 2003. Therefore, upgrading the schema has
no impact on Windows 2000–based global catalog servers because the global catalog is updated
only when all domain controllers are running Windows Server 2003 (the requirement for raising
the forest functional level to Windows Server 2003). For more information about functional levels,
see “How Active Directory Functional Levels Work.”
Removing an attribute from the PAS does not involve replication of a deletion, but is handled locally.
If you set the isMemberOfPartialAttributeSet value to FALSE in the schema, the attribute is
removed from the directory of each global catalog server immediately after receiving the schema
update. This behavior is the same on global catalog servers running Windows Server 2003 and
Windows 2000 Server.
The Active Directory replication topology can use many different components. Some components
are required and others are not required but are available for optimization. The following diagram
illustrates most replication topology components and their place in a sample Active Directory
multisite and multidomain forest. The depiction of the intersite topology that uses multiple
bridgehead servers for each domain assumes that at least one domain controller in each site is
running Windows Server 2003. All components of this diagram and their interactions are explained
in detail later in this section.
Replication Topology Physical Structure
In the preceding diagram, all servers are domain controllers. They independently use global
knowledge of configuration data to generate one-way, inbound connection objects. The KCCs in a
site collectively create an intrasite topology for all domain controllers in the site. The ISTGs from all
sites collectively create an intersite topology. Within sites, one-way arrows indicate the inbound
connections by which each domain controller replicates changes from its partner in the ring. For
intersite replication, one-way arrows represent inbound connections that are created by the ISTG of
each site from bridgehead servers (BH) for the same domain (or from a global catalog server [GC]
acting as a bridgehead if the domain is not present in the site) in other sites that share a site link.
Domains are indicated as D1, D2, D3, and D4.
Each site in the diagram represents a physical LAN in the network, and each LAN is represented as a
site object in Active Directory. Heavy solid lines between sites indicate WAN links over which two-
way replication can occur, and each WAN link is represented in Active Directory as a site link object.
Site link objects allow connections to be created between bridgehead servers in each site that is
connected by the site link.
Although it is not shown in the diagram, replication between sites uses the RPC replication
transport where TCP/IP WAN links are available. RPC is always used within sites. The site link between Site A and
Site D uses the SMTP protocol for the replication transport to replicate the configuration and schema
directory partitions and global catalog partial, read-only directory partitions. Although the SMTP
transport cannot be used to replicate writable domain directory partitions, this transport is required
because a TCP/IP connection is not available between Site A and Site D. This configuration is
acceptable for replication because Site D does not host domain controllers for any domains that
must be replicated over the site link A-D.
By default, site links A-B and A-C are transitive (bridged), which means that replication of domain D2
is possible between Site B and Site C, although no site link connects the two sites. The cost values
on site links A-B and A-C are site link settings that determine the routing preference for replication,
which is based on the aggregated cost of available site links. The cost of a direct connection
between Site C and Site B is the sum of costs on site links A-B and A-C. For this reason, replication
between Site B and Site C is automatically routed through Site A to avoid the more expensive,
transitive route. Connections are created between Site B and Site C only if replication through Site A
becomes impossible due to network or bridgehead server conditions.
Top of page
Performance Limits for Replication Topology Generation
Active Directory topology generation performance is limited primarily by the memory on the domain
controller. KCC performance degrades at the physical memory limit. In most deployments, topology
size will be limited by the amount of domain controller memory rather than CPU utilization required
by the KCC.
Scaling of sites and domains is improved in Windows Server 2003 through an enhanced algorithm that
the KCC uses to generate the intersite replication topology. Because all domain controllers must use
the same algorithm to arrive at a consistent view of the replication topology, the improved algorithm
has a forest functional level requirement of Windows Server 2003 or Windows Server 2003 interim.
KCC scalability was tested on domain controllers with 1.8 GHz processor speed, 512 megabytes (MB)
RAM, and small computer system interface (SCSI) disks. KCC performance results at the Windows
Server 2003 forest functional level are described in the following table. The times shown are for the
KCC to run where all new connections are needed (maximum) and where no new connections are
needed (minimum). Because most organizations add domain controllers in increments, the minimum
generation times shown are closest to the actual runtimes that can be expected in deployments of
comparable sizes. The CPU and memory usage values for the Local Security Authority (LSA) process
(Lsass.exe) indicate the more significant impact of memory versus percent of CPU usage when the
KCC runs.
Note
• Active Directory runs as part of the LSA, which manages authentication packages and
authenticates users and services.
Therefore, for the preceding example, only one of the three domain controllers would be
designated by the ISTG as a bridgehead server for the domain, and all four connection objects from
the four other sites would be created on the single bridgehead server. In large hub sites, a single
domain controller might not be able to adequately respond to the volume of replication requests
from perhaps thousands of branch sites.
For more information about how the KCC selects bridgehead servers in Windows Server 2003, see
“Bridgehead Server Selection” later in this section.
Compression of Replication Data
Intersite replication is compressed by default. Compressing replication data allows the data to be
transferred over WAN links more quickly, thereby conserving network bandwidth. The cost of this
benefit is an increase in CPU utilization on bridgehead servers.
By default, replication data is compressed under the following conditions:
• Replication of updates between domain controllers in different sites.
• Replication of Active Directory to a newly created domain controller.
A new compression algorithm is employed by bridgehead servers that are running Windows
Server 2003. The new algorithm improves replication speed by operating between two and ten times
faster than the Windows 2000 Server algorithm.
Windows 2000 Server Compression
The compression algorithm that is used by domain controllers that are running Windows 2000
Server achieves a compression ratio of approximately 75% to 85%. The cost of this compression in
terms of CPU utilization can be as high as 50% for intersite Active Directory replication. In some
cases, the CPUs on bridgehead servers that are running Windows 2000 Server can become
overwhelmed with compression requests, compounded by the need to service outbound replication
partners. In a worst case scenario, the bridgehead server becomes so overloaded that it cannot keep
up with outbound replication. This scenario is usually coupled with a replication topology issue
where a domain controller has more outbound partners than necessary or the replication schedule
is overly aggressive for the number of direct replication partners.
Note
• If a bridgehead server has too many replication partners, the KCC logs event ID 1870 in the
Directory Service log, indicating the current number of partners and the recommended number of
partners for the domain controller.
Windows Server 2003 Compression
On domain controllers that are running Windows Server 2003, compression quality is comparable to
Windows 2000 but the processing burden is greatly decreased. The Windows Server 2003 algorithm
produces a compression ratio of approximately 60%, which is slightly less compression than is
achieved by the Windows 2000 Server algorithm, but which significantly reduces the processing load on
bridgehead servers. The new compression algorithm provides a good compromise by significantly
reducing the CPU load on bridgehead servers, while only slightly increasing the WAN traffic. The new
algorithm reduces the time taken by compression from approximately 60% of replication time to
20%.
The Windows Server 2003 compression algorithm is used only when both bridgehead servers are
running Windows Server 2003. If a bridgehead server that is running Windows Server 2003
replicates with a bridgehead server that is running Windows 2000 Server, then the Windows 2000
compression algorithm is used.
Reverting to Windows 2000 Compression
For slow WAN links (for example, 64 Kbps or less), if more compression is preferable to a decrease in
computation time, you can change the compression algorithm to the Windows 2000 algorithm. The
compression algorithm is controlled by the REG_DWORD registry entry Replicator compression
algorithm under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. By
editing this registry entry, you can change the algorithm that is used for compression to the
Windows 2000 algorithm.
Note
• If you must edit the registry, use extreme caution. Registry information is provided here as a
reference for use by only highly skilled directory service administrators. It is recommended that
you do not directly edit the registry unless, as in this case, there is no Group Policy or other
Windows tools to accomplish the task. Modifications to the registry are not validated by the
registry editor or by Windows before they are applied, and as a result, incorrect values can be
stored. Storage of incorrect values can result in unrecoverable errors in the system.
The default value is 3, which indicates that the Windows Server 2003 algorithm is in effect. If you
change the value to 2, the Windows 2000 algorithm is used for compression. However, switching
to the Windows 2000 algorithm is not recommended unless both bridgehead domain controllers
serve relatively few branches and have ample CPU capacity (for example, dual 850 megahertz
[MHz] processors or better).
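As a sketch only, assuming it is run on the bridgehead server with administrative rights and using the entry name given above, the change could be scripted with the standard winreg module:

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 2 = Windows 2000 algorithm (more compression, higher CPU cost)
    # 3 = Windows Server 2003 algorithm (the default)
    winreg.SetValueEx(key, "Replicator compression algorithm", 0,
                      winreg.REG_DWORD, 2)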
Site Link Settings and Their Effects on Intersite Replication
In Active Directory Sites and Services, the General tab of the site link Properties contains the
following options for configuring site links to control the replication topology:
• A list of two or more sites to be connected.
• A single numeric cost that is associated with communication over the link. The default cost is 100,
but you can assign higher cost values to represent more expensive transmission. For example,
sites that are connected by low-speed or dial-up connections would have high-cost site links
between them. Sites that are well connected through backbone lines would have low-cost site
links. Where multiple routes or transports exist between two sites, the least expensive route and
transport combination is used.
• A schedule that determines days and hours during which replication can occur over the link (the
link is available). For example, you might use the default (100 percent available) schedule on
most links, but block replication traffic during peak business hours on links to certain branches. By
blocking replication, you give priority to other traffic, but you also increase replication latency.
Note
• Scheduling information is ignored by site links that use SMTP transports; the mail is stockpiled
and then exchanged at the times that are configured for your mail infrastructure.
• An interval in minutes that determines how often replication can occur (default is every
180 minutes, or 3 hours). The minimum interval is 15 minutes. If the interval exceeds the time
allowed by the schedule, replication occurs once at the scheduled time.
A site can be connected to other sites by any number of site links. For example, a hub site has site
links to each of its branch sites. Each site that contains a domain controller in a multisite directory
must be connected to at least one other site by at least one site link; otherwise, it cannot replicate
with domain controllers in any other site.
The following diagram shows two sites that are connected by a site link. Domain controllers DC1 and
DC2 belong to the same domain and are acting as partner bridgehead servers. When topology
generation occurs, the ISTG in each site creates an inbound connection object on the bridgehead
server in its site from the bridgehead server in the opposite site. With these objects in place,
replication can occur according to the settings on the SB site link.
Connections Between Domain Controllers in Two Sites that Are Connected by a Site Link
Site Link Cost
The ISTG uses the cost settings on site links to determine the route of replication between three or
more sites that replicate the same directory partition. The default cost value on a site link object
is 100. You can assign lower or higher cost values to site links to favor inexpensive connections over
expensive connections. Certain applications and services, such as domain controller
Locator and DFS, also use site link cost information to locate nearest resources. For example, site
link cost can be used to determine which domain controller is contacted by clients located in a site
that does not include a domain controller for the specified domain. The client contacts the domain
controller in a different site according to the site link that has the lowest cost assigned to it.
Cost is usually assigned not only on the basis of the total bandwidth of the link, but also on the
availability, latency, and monetary cost of the link. For example, a 128-kilobits per second (Kbps)
permanent link might be assigned a lower cost than a dial-up 128-Kbps dual ISDN link because the
dial-up ISDN link introduces replication latency while the connection is being established or
dropped. Furthermore, in this example, the permanent link might have a fixed
monthly cost, whereas the ISDN line is charged according to actual usage. Because the company is
paying up-front for the permanent link, the administrator might assign a lower cost to the
permanent link to avoid the extra monetary cost of the ISDN connections.
The method used by the ISTG to determine the least-cost path from each site to every other site for
each directory partition is more efficient when the forest has a functional level of Windows
Server 2003 than it is at other levels. For more information about how the KCC computes replication
routes, see “Automated Intersite Topology Generation” later in this section. For more information
about domain controller location, see “How DNS Support for Active Directory Works.”
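The aggregated-cost idea can be illustrated with a shortest-path computation. The following Python sketch is not the ISTG's actual algorithm; it applies Dijkstra's algorithm to a set of invented site links to show that the least-cost route between two sites is the minimum sum of site link costs along a path:

import heapq

def least_cost(links, start, goal):
    # Build an undirected graph of sites from (siteA, siteB, cost) tuples.
    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue, visited = [(0, start)], set()
    while queue:
        cost, site = heapq.heappop(queue)
        if site == goal:
            return cost
        if site in visited:
            continue
        visited.add(site)
        for nxt, c in graph.get(site, ()):
            if nxt not in visited:
                heapq.heappush(queue, (cost + c, nxt))
    return None  # no route exists between the two sites

# Invented example: no direct B-C site link, so the B-C cost is the
# sum of the A-B and A-C costs.
links = [("A", "B", 100), ("A", "C", 100)]
print(least_cost(links, "B", "C"))  # 200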
Transitivity and Automatic Site Link Bridging
By default, site links are transitive, or “bridged.” If site A has a common site link with site B, site B
also has a common site link with site C, and the two site links are bridged, domain controllers in
site A can replicate directly with domain controllers in site C under certain conditions, even though
there is no site link between site A and site C. In other words, the effect of bridged site links is that
replication between sites in the bridge is transitive.
The setting that implements automatic site link bridges is Bridge all site links, which is found in
Active Directory Sites and Services in the properties of the IP or SMTP intersite transport containers.
The default bridging of site links occurs automatically and no directory object represents the default
bridge. Therefore, in the common case of a fully routed IP network, you do not need to create any
site link bridge objects.
Transitivity and Rerouting
For a set of bridged site links, where replication schedules in the respective site links overlap
(replication is available on the site links during the same time period), connection objects can be
automatically created, if needed, between sites that do not have site links that connect them
directly. All site links for a specific transport implicitly belong to a single site link bridge for that
transport.
Site link transitivity enables the KCC to re-route replication when necessary. In the next diagram, a
domain controller that can replicate the domain is not available in Seattle. In this case, because the
site links are transitive (bridged) and the schedules on the two site links allow replication at the
same time, the KCC can re-route replication through another site.
The KCC adjusts the replication topology in the local instance of the directory in response to
forest-wide changes, which are made known to the KCC by changes to data in the configuration
directory partition.
The KCC generates and maintains the replication topology for replication within sites and between
sites by converting KCC-defined and administrator-defined (if any) connection objects into a
configuration that is understood by the directory replication engine. By default, the KCC reviews and
makes modifications to the Active Directory replication topology every 15 minutes to ensure
propagation of data, either directly or transitively, by creating and deleting connection objects as
needed. The KCC recognizes changes that occur in the environment and ensures that domain
controllers are not orphaned in the replication topology.
Operating independently, the KCC on each domain controller uses its own view of the local replica of
the configuration directory partition to arrive at the same intrasite topology. One KCC per site, the
ISTG, determines the intersite replication topology for the site. Like the KCC that runs on each
domain controller within a site, the instances of the ISTG in different sites do not communicate with
each other. They independently use the same algorithm to produce a consistent, well-formed
spanning tree of connections. Each site constructs its own part of the tree and, when all have run, a
working replication topology exists across the enterprise.
The predictability of all KCCs allows scalability by reducing communication requirements between
KCC instances. All KCCs agree on where connections will be formed, ensuring that redundant
replication does not occur and that all parts of the enterprise are connected.
The KCC performs two major functions:
• Configures appropriate replication connections (connection objects) on the basis of the existing
cross-reference, server, NTDS settings, site, site link, and site link bridge objects and the current
status of replication.
• Converts the connection objects that represent inbound replication to the local domain controller
into the replication agreements that are actually used by the replication engine. These
agreements, called replica links, accommodate replication of a single directory partition from the
source to the destination domain controller.
Intervals at Which the KCC Runs
By default, the KCC runs its first replication topology check five minutes after the domain controller
starts. The domain controller then attempts initial replication with its intrasite replication partners. If
a domain controller is being used for multiple other services, such as DNS, WINS, or DHCP,
extending the replication topology check interval can ensure that all services have started before
the KCC begins using CPU resources.
You can edit the registry to modify the interval between startup and the time the domain controller
first checks the replication topology.
Note
• If you must edit the registry, use extreme caution. Registry information is provided here as a
reference for use by only highly skilled directory service administrators. It is recommended that
you do not directly edit the registry unless, as in this case, there is no Group Policy or other
Windows tools to accomplish the task. Modifications to the registry are not validated by the
registry editor or by Windows before they are applied, and as a result, incorrect values can be
stored. Storage of incorrect values can result in unrecoverable errors in the system.
Modifying the interval between startup and the time the domain controller first checks the
replication topology requires changing the Repl topology update delay (secs) entry in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters as appropriate:
• Value: Number of seconds to wait between the time Active Directory starts and the KCC runs for
the first time.
• Default: 300 seconds (5 minutes)
• Data type: REG_DWORD
Thereafter, as long as services are running, the KCC on each domain controller checks the
replication topology every 15 minutes and makes changes as necessary.
Modifying the interval at which the KCC performs topology review requires changing the Repl
topology update period (secs) entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\
Services\NTDS\Parameters as appropriate (a scripted sketch follows the list):
• Value: Number of seconds between KCC topology updates
• Default: 900 seconds (15 minutes)
• Data type: REG_DWORD
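As an illustration, both entries could be set with the standard winreg module; this assumes the script runs on the domain controller with administrative rights, and the chosen values are examples only:

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Wait 10 minutes after startup before the first topology check
    # (default is 300 seconds).
    winreg.SetValueEx(key, "Repl topology update delay (secs)", 0,
                      winreg.REG_DWORD, 600)
    # Review the topology every 30 minutes thereafter
    # (default is 900 seconds).
    winreg.SetValueEx(key, "Repl topology update period (secs)", 0,
                      winreg.REG_DWORD, 1800)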
Objects that the KCC Requires to Build the Replication Topology
The following objects, which are stored in the configuration directory partition, provide the
information required by the KCC to create the replication topology:
• Cross-reference. Each directory partition in the forest is identified in the Partitions container by
a cross-reference object. The attributes of this object are used by the replication system to locate
the domain controllers that store each directory partition.
• Server. Each domain controller in the forest is identified as a server object in the Sites container.
• NTDS Settings. Each server object that represents a domain controller has a child NTDS Settings
object. Its presence identifies the server as having Active Directory installed. The NTDS Settings
object must be present for the server to be considered by the KCC for inclusion in the replication
topology.
• Site. The presence of the above objects also indicates to the KCC the site in which each domain
controller is located for replication. For example, the distinguished name of the NTDS Settings
object contains the name of the site in which the server object that represents the domain
controller exists.
• Site link. A site link must be available between any set of sites, and its schedule and cost
properties are evaluated for routing decisions.
• Site link bridge. If they exist, site link bridge objects and properties are evaluated for routing
decisions.
If the domain controller is physically located in one site but its server object is configured in a
different site, the domain controller will attempt intrasite replication with a replication partner that is
in the site of its server object. In this scenario, the improper configuration of servers in sites can
affect network bandwidth.
If a site object exists for a site that has no domain controllers, the KCC does not consider the site
when generating the replication topology.
Topology Generation Phases
The KCC generates the replication topology in two phases:
• Evaluation. During the evaluation phase, the KCC evaluates the current topology, determines
whether replication failures have occurred with the existing connections, and constructs whatever
new connection objects are required to complete the replication topology.
• Translation. During the translation phase, the KCC implements, or “translates,” the decisions
that were made during the evaluation phase into agreements between the replication partners.
During this phase, the KCC writes to the repsFrom attribute on the local domain controller (for
intrasite topology) or on all bridgehead servers in a site (for intersite topology) to identify the
replication partners from which each domain controller pulls replication. For more information
about the information in the replication agreement, see “How the Active Directory Replication
Model Works.”
KCC Modes and Scopes
Because individual KCCs do not communicate directly to generate the replication topology, topology
generation occurs within the scope of either a single domain controller or a single site. In performing
the two topology generation phases, the KCC has three modes of operation. The following table
identifies the modes and scope for each mode.
Modes and Scopes of KCC Topology Generation
KCC Mode: Intrasite
Performing Domain Controllers: All
Scope: Local server
Description: Evaluate all servers in a site and create connection objects locally on this server from
servers in the same site that are adjacent to this server in the ring topology.

KCC Mode: Intersite
Performing Domain Controllers: One domain controller per site that has the ISTG role
Scope: Local site
Description: Evaluate the servers in all sites and create connection objects both locally and on other
servers in the site from servers in different sites.

KCC Mode: Link translation
Performing Domain Controllers: All
Scope: Local server
Description: Translate connection objects into replica links (partnerships) for each server relative to
each directory partition that it holds.
Topology Evaluation and Connection Object Generation
The KCC on a destination domain controller evaluates the topology by reading the existing
connection objects. For each connection object, the KCC reads attribute values of the NTDS Settings
object (class nTDSDSA) of the source domain controller (indicated by the fromServer value on the
connection object) to determine what directory partitions its destination domain controller has in
common with the source domain controller.
Topology evaluation for all domain controllers
To determine the connection objects that need to be generated, the KCC uses information stored in
the attributes of the NTDS Settings object that is associated with each server object, as follows:
• For all directory partitions, the multivalued attribute hasMasterNCs stores the distinguished
names of all directory partitions that are stored on that domain controller.
• For all domain controllers, the value of the options attribute indicates whether that domain
controller is configured to host the global catalog.
• The hasPartialReplicaNCs attribute contains the set of partial-replica directory partitions (global
catalog read-only domain partitions) that are located on the domain controller that is represented
by the server object.
Topology evaluation for domain controllers running Windows Server 2003
For all domain controllers that are running Windows Server 2003, the msDS-HasDomainNCs
attribute of the NTDS Settings object contains the name of the domain directory partition that is
hosted by the domain controller.
In forests that have the forest functional level of Windows Server 2003 or Windows Server 2003
interim, the following additional information is used by the KCC to evaluate the topology for
application directory partitions and to generate the needed connections:
Because a ring topology is created for each directory partition, the topology might look different if
domain controllers from a second domain were present in the site. The next diagram illustrates the
topology for domain controllers from two domains in the same site with no global catalog servers
defined in the site.
Ring Topology for Two Domains in a Site that Has No Global Catalog Server
The next diagram illustrates replication between a global catalog server and three domains to which
the global catalog server does not belong. When a global catalog server is added to the site in
DomainA, additional connections are required to replicate updates of the other domain directory
partitions to the global catalog server. The KCC on the global catalog server creates connection
objects to replicate from domain controllers for each of the other domain directory partitions within
the site, or from another global catalog server, to update the read-only partitions. Wherever a
domain directory partition is replicated, the KCC also uses the connection to replicate the schema
and configuration directory partitions.
Note
• Connection objects are generated independently for the configuration and schema directory
partitions (one connection) and for the separate domain and application directory partitions,
unless a connection from the same source to destination domain controllers already exists for one
directory partition. In that case, the same connection is used for all (duplicate connections are not
created).
Intrasite Topology for Site with Four Domains and a Global Catalog Server
Expanded Ring Topology Within a Site
When the number of servers in a site grows beyond seven, the KCC estimates the number of
connections that are needed so that if a change occurs at any one domain controller, there are as
many replication partners as needed to ensure that no domain controller is more than three
replication hops from another domain controller (that is, a change takes no more than three hops
before it reaches another domain controller that has not already received the change by another
path). These optimizing connections are created at random and are not necessarily created on every
third domain controller.
The KCC adds connections automatically to optimize a ring topology within a site, as follows:
• Given a set of nodes in a ring, create the minimum number of connections, n, that each server
must have to ensure a path of no more than three hops to another server.
Given n, topology generation proceeds as follows.
• If the local server does not have n extra connections, the KCC does the following:
• Chooses n other servers randomly in the site as source servers.
• For each of those servers, creates a connection object.
This approach approximates the minimum-hop goal of three hops. In addition, it grows well,
because as the site grows in server count, old optimizing connections are still useful and are not
removed. Also, every time an additional 9 to 11 servers are added, a connection object is deleted at
random; then a new one is created, ideally having one of the new servers as its source. This process
ensures that, over time, the additional connections are distributed well over the entire site.
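The effect of the optimizing connections can be simulated. The following Python sketch is an illustration of the idea rather than the KCC's actual procedure: it builds a ring, adds a number of random extra connections per server, and measures the worst-case hop count with a breadth-first search:

import random
from collections import deque

def max_hops(n_servers, extra_per_server, seed=0):
    rng = random.Random(seed)
    # Start with the bidirectional ring of immediate neighbors.
    neighbors = {i: {(i - 1) % n_servers, (i + 1) % n_servers}
                 for i in range(n_servers)}
    # Add random optimizing connections.
    for i in range(n_servers):
        for _ in range(extra_per_server):
            j = rng.randrange(n_servers)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    # Breadth-first search from every server to find the worst-case hops.
    worst = 0
    for start in range(n_servers):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in neighbors[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

# Eight domain controllers: the plain ring needs 4 hops in the worst case;
# a few random optimizing connections typically bring it to 3 or fewer.
print(max_hops(8, 0))  # 4
print(max_hops(8, 1))  # typically 2 or 3, depending on the random choices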
The following diagram shows an intrasite ring topology with optimizing connections in a site that has
eight domain controllers in the same domain. Without optimizing connections, the hop count from
DC1 to DC2 is more than three hops. The KCC creates optimizing connections to limit the hop count
to three hops. The two one-way inbound optimizing connections accommodate all directory
partitions that are replicated between the two domain controllers.
Intrasite Topology with Optimizing Connections
Excluded Nonresponding Servers
The KCC automatically rebuilds the replication topology when it recognizes that a domain controller
has failed or is unresponsive.
The criteria that the KCC uses to determine when a domain controller is not responsive depend upon
whether the server computer is within the site or not. Two thresholds must be reached before a
domain controller is declared “unavailable” by the KCC:
• The requesting domain controller must have made n attempts to replicate from the target domain
controller.
• For replication between sites, the default value of n is 1 attempt.
• For replication within a site, the following distinctions are made between the two immediate
neighbors (in the ring) and the optimizing connections:
For immediate neighbors, the default value of n is 0 failed attempts. Thus, as soon as an
attempt fails, a new server is tried.
For optimizing connections, the default value of n is 1 failed attempt. Thus, as soon as a second
failed attempt occurs, a new server is tried.
• A certain amount of time must have passed since the last successful replication attempt.
• For replication between sites, the default time is 2 hours.
• For replication within a site, a distinction is made between the two immediate neighbors (in the
ring) and the optimizing connections:
For immediate neighbors, the default time is 2 hours.
For optimizing connections, the default value is 12 hours.
You can edit the registry to modify the thresholds for excluding nonresponding servers.
Note
• If you must edit the registry, use extreme caution. Registry information is provided here as a
reference for use by only highly skilled directory service administrators. It is recommended that
you do not directly edit the registry unless, as in this case, there is no Group Policy or other
Windows tools to accomplish the task. Modifications to the registry are not validated by the
registry editor or by Windows before they are applied, and as a result, incorrect values can be
stored. Storage of incorrect values can result in unrecoverable errors in the system.
Modifying the thresholds for excluding nonresponding servers requires editing the following registry
entries in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters, with the
data type REG_DWORD. You can modify these values as follows; a scripted sketch appears after the list:
For replication between sites, use the following entries:
• IntersiteFailuresAllowed
Value: Number of failed attempts
Default: 1
• MaxFailureTimeForIntersiteLink (secs)
Value: Time that must elapse before being considered unavailable, in seconds
Default: 7200 (2 hours)
For optimizing connections within a site, use the following entries:
• NonCriticalLinkFailuresAllowed
Value: Number of failed attempts
Default: 1
• MaxFailureTimeForNonCriticalLink
Value: Time that must elapse before being considered unavailable, in seconds
Default: 43200 (12 hours)
For immediate neighbor connections within a site, use the following entries:
• CriticalLinkFailuresAllowed
Value: Number of failed attempts
Default: 0
• MaxFailureTimeForCriticalLink
Value: Time that must elapse before being considered unavailable, in seconds
Default: 7200 (2 hours)
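The current effective thresholds can be inspected with a short Python sketch using the standard winreg module; if an entry is absent, the built-in default listed above applies:

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"
DEFAULTS = {
    "IntersiteFailuresAllowed": 1,
    "MaxFailureTimeForIntersiteLink (secs)": 7200,
    "NonCriticalLinkFailuresAllowed": 1,
    "MaxFailureTimeForNonCriticalLink": 43200,
    "CriticalLinkFailuresAllowed": 0,
    "MaxFailureTimeForCriticalLink": 7200,
}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for name, default in DEFAULTS.items():
        try:
            value, _ = winreg.QueryValueEx(key, name)
        except FileNotFoundError:
            value = default  # entry absent; the built-in default applies
        print(name, "=", value)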
When the original domain controller begins responding again, the KCC automatically restores the
replication topology to its pre-failure condition the next time that the KCC runs.
Fully Optimized Ring Topology Generation
Taking the addition of extra connections, management of nonresponding servers, and growth-
management mechanisms into account, the KCC proceeds to fully optimize intrasite topology
generation. The appropriate connection objects are created and deleted according to the available
criteria.
Note
• Connection objects from nonresponding servers are not deleted because the condition is
expected to be transient.
Automated Intersite Topology Generation
To produce a replication topology for hundreds of domains and thousands of sites in a timely
manner and without compromising domain controller performance, the KCC must make the best
possible decision when confronted with the question of which network link to use to replicate a given
directory partition between sites. Ideally, connections occur only between servers that contain the
same directory partition(s), but when necessary, the KCC can also use network paths that pass
through servers that do not store the directory partition.
Intersite topology generation and associated processes are improved in Windows Server 2003 in the
following ways:
• Improved scalability: A new spanning tree algorithm achieves greater efficiency and scalability
when the forest has a functional level of Windows Server 2003. For more information about this
new algorithm, see “Improved KCC Scalability in Windows Server 2003 Forests” later in this
section.
• Less network traffic: A new method of communicating the identity of the ISTG reduces the amount
of network traffic that is produced by this process. For more information about this method, see
“Intersite Topology Generator” later in this section.
• Multiple bridgehead servers per site and domain, and initial bridgehead server load balancing: An
improved algorithm provides random selection of multiple bridgehead servers per domain and
transport (the Windows 2000 algorithm allows selection of only one). The load among bridgehead
servers is balanced the first time connections are generated. For more information about
bridgehead server load balancing, see “Windows Server 2003 Multiple Bridgehead Selection” later
in this section.
Factors Considered by the KCC
The spanning tree algorithm used by the KCC that is running as the ISTG to create the intersite
replication topology determines how to connect all the sites that need to be connected with the
minimum number of connections and the least cost. The algorithm must also consider the fact that
each domain controller has at least three directory partitions that potentially require synchronization
with other sites, not all domain controllers store the same partitions, and not all sites host the same
domains.
The ISTG considers the following factors to arrive at the intersite replication topology:
• Location of domain directory partitions (calculate a replication topology for each domain).
• Bridgehead server availability in each site (at least one is available).
• All explicit site links.
• With automatic site link bridging in effect, consider all implicit paths as a single path with a
combined cost.
• With manual site link bridging in effect, consider the implicit combined paths of only those site
links included in the explicit site link bridges.
• With no site link bridging in effect, where the site links represent hops between domain
controllers in the same domain, replication flows in a store-and-forward manner through sites.
Improved KCC Scalability in Windows Server 2003 Forests
KCC scalability is greatly improved in Windows Server 2003 forests over its capacity in
Windows 2000 forests. Windows 2000 forests scale safely to support 300 sites, whereas Windows
Server 2003 forests have been tested to 3,000 sites. This level of scaling is achieved when the forest
functional level is Windows Server 2003. At this forest functional level, the method for determining
the least-cost path from each site to every other site for each directory partition is significantly more
efficient than the method that is used in a Windows 2000 forest or in a Windows Server 2003 forest
that has a forest functional level of Windows 2000.
Windows 2000 Spanning Tree Algorithm
The ability of the KCC to generate the intersite topology in Windows 2000 forests is limited by the
amount of CPU time and memory that is consumed when the KCC computes the replication topology
in large environments that use transitive (bridged) site links. In a Windows 2000 forest, a potential
disadvantage of bridging all site links affects only very large networks (generally, greater than
100 sites) where periods of high CPU activity occur every 15 minutes when the KCC runs. By default,
the KCC creates a single bridge for the entire network, which generates more routes that must be
processed than if automatic site link bridging is not used and manual site link bridges are applied
selectively.
In a Windows 2000 forest, or in a Windows Server 2003 forest that has a forest functional level of
Windows 2000, the KCC reviews the comparison of multiple paths to and from every destination and
computes the spanning tree of the least-cost path. The spanning tree algorithm works as follows:
• Computes a cost matrix by identifying each site pair (that is, each pair of bridgehead servers in
different sites that store the directory partition) and the cost on the site link connecting each pair.
Note
• This matrix is actually computed by Intersite Messaging and used by the KCC.
• By using the costs computed in the matrix, builds a spanning tree between sites that store the
directory partition.
This method becomes inefficient when there are a large number of sites.
Note
• CPU time and memory are not an issue in a Windows 2000 forest as long as the following criteria
apply (a worked example follows the list):
• D is the number of domains in your network
• S is the number of sites in your network
• (1 + D) * S^2 <= 100,000
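For example, a forest with 3 domains stays within the limit at 100 sites but exceeds it at 200 sites, as this small Python check illustrates:

def win2000_kcc_within_limits(domains, sites):
    # Criterion quoted above: (1 + D) * S^2 <= 100,000
    return (1 + domains) * sites ** 2 <= 100_000

print(win2000_kcc_within_limits(3, 100))  # True:  4 * 10,000 = 40,000
print(win2000_kcc_within_limits(3, 200))  # False: 4 * 40,000 = 160,000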
Windows Server 2003 Spanning Tree Algorithm
A new spanning tree algorithm improves the efficiency and scalability of replication topology
generation in Windows Server 2003 forests. When the forest functional level is either Windows
Server 2003 or Windows Server 2003 interim, the improved algorithm takes effect and computes a
minimum-cost spanning tree of connections between the sites that host a particular directory
partition, but eliminates the inefficient cost matrix. Thus, the KCC directly determines the lowest-
cost spanning tree for each directory partition, considering the schema and configuration directory
partitions as a single tree. Where the spanning trees overlap, the KCC generates a single connection
between domain controllers for replication of all common directory partitions.
In a Windows Server 2003 forest, both versions of the KCC spanning tree algorithm are available.
The algorithm for Windows 2000 forests is retained for backwards compatibility with the
Windows 2000 KCC. It is not possible for the two algorithms to run simultaneously in the same
enterprise.
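By contrast, the Windows Server 2003 approach can be pictured as computing the minimum-cost spanning tree directly from the site links, without materializing the full cost matrix. The following sketch uses Kruskal's algorithm purely as a stand-in, since this document does not specify the KCC's internal algorithm; the sites and costs are again hypothetical.

    # Sketch of the Windows Server 2003 idea: derive a minimum-cost spanning
    # tree directly from the site links, skipping the all-pairs cost matrix.
    # Kruskal's algorithm is an illustrative stand-in, not the actual KCC code.
    def kruskal(sites, links):
        parent = {s: s for s in sites}

        def find(s):
            while parent[s] != s:
                parent[s] = parent[parent[s]]    # path compression
                s = parent[s]
            return s

        tree = []
        for cost, a, b in sorted(links):         # cheapest links first
            ra, rb = find(a), find(b)
            if ra != rb:                         # link joins two components
                parent[ra] = rb
                tree.append((a, b, cost))
        return tree

    links = [(100, "A", "B"), (100, "B", "C"), (100, "C", "D"), (400, "A", "D")]
    print(kruskal(["A", "B", "C", "D"], links))
    # [('A', 'B', 100), ('B', 'C', 100), ('C', 'D', 100)]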
DFS Site Costing and Windows Server 2003 SP1 Site Options
When the forest functional level is Windows Server 2003 or Windows Server 2003 interim and the
ISTG does not use Intersite Messaging to calculate the intersite cost matrix, DFS can still use
Intersite Messaging to compute the cost matrix for its site-costing functionality, provided that the
Bridge all site links option is not turned off. In branch office deployments, where the large number
of sites and site links makes automatic site link bridging too costly in terms of the replication
connections that are generated, the Bridge all site links option is usually turned off on the IP
container (CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomain). In
this case, DFS is unable to use Intersite Messaging to calculate site costs.
When the forest functional level is Windows Server 2003 or Windows Server 2003 interim and the
ISTG in a site is running Windows Server 2003 with SP1, you can use a site option to turn off
automatic site link bridging for KCC operation without hampering the ability of DFS to use Intersite
Messaging to calculate the cost matrix. This site option is set by running the command repadmin
/siteoptions W2K3_BRIDGES_REQUIRED. This option is applied to the NTDS Site Settings object
(CN=NTDS Site Settings,CN=SiteName,CN=Sites,CN=Configuration,DC=ForestRootDomain). When
this method is used to disable automatic site link bridging (as opposed to turning off Bridge all site
links), default Intersite Messaging options enable the site-costing calculation to occur for DFS.
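For reference, a typical invocation has the following form, where Branch01 is a hypothetical site name and the plus sign enables the option; verify the exact syntax with repadmin /? for your version:

    repadmin /siteoptions /site:Branch01 +W2K3_BRIDGES_REQUIRED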
Note
The site option on the NTDS Site Settings object can be set on any domain controller, but it does not
take effect until replication of the change reaches the ISTG role holder for the site.
Intersite Topology Generator
The KCC on the domain controller that has the ISTG role creates the inbound connections on all
domain controllers in its site that require replication with domain controllers in other sites. The sum
of these connections for all sites in the forest forms the intersite replication topology.
A fundamental concept in the generation of the topology within a site is that each server does its
part to create a site-wide topology. In a similar manner, the generation of the topology between
sites depends on each site doing its part to create a forest-wide topology between sites.
ISTG Role Ownership and Viability
The owner of the ISTG role is communicated through normal Active Directory replication. Initially,
the first domain controller in the site is the ISTG role owner. It communicates its role ownership to
other domain controllers in the site by writing the distinguished name of its child NTDS Settings
object to the interSiteTopologyGenerator attribute of the NTDS Site Settings object for the site.
As a change to the configuration directory partition, this value is replicated to all domain controllers
in the forest.
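To see which domain controller currently owns the role, you can read this attribute directly. The following is a hedged sketch using the third-party ldap3 Python package; the domain controller, credentials, and site name are hypothetical, and in practice you can view the same value with the administrative tools.

    # Sketch: read the current ISTG for a site from the
    # interSiteTopologyGenerator attribute of its NTDS Site Settings object.
    # Server name, credentials, and DNs are hypothetical.
    from ldap3 import Server, Connection, BASE

    settings_dn = ("CN=NTDS Site Settings,CN=Hub,CN=Sites,CN=Configuration,"
                   "DC=corp,DC=example,DC=com")

    conn = Connection(Server("dc01.corp.example.com"),
                      user="CORP\\admin", password="...", auto_bind=True)
    conn.search(settings_dn, "(objectClass=*)", search_scope=BASE,
                attributes=["interSiteTopologyGenerator"])
    # The value is the DN of the ISTG's NTDS Settings object.
    print(conn.entries[0].interSiteTopologyGenerator)
    conn.unbind()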
The ISTG role owner is selected automatically. The role ownership does not change unless:
• The current ISTG role owner becomes unavailable.
• All domain controllers in the site are running Windows 2000 and one of them is upgraded to
Windows Server 2003.
If at least one domain controller in a site is running Windows Server 2003, the ISTG role is assumed
by a domain controller that is running Windows Server 2003.
The viability of the current ISTG is assessed by all other domain controllers in the site. The need for
a new ISTG in a site is established differently, depending on the forest functional level that is in
effect.
• Windows 2000 functional level: At 30-minute intervals, the current ISTG notifies every other
domain controller of its existence and availability by writing the interSiteTopologyGenerator
attribute of the NTDS Site Settings object for the site. The change replicates to every domain
controller in the forest. The KCC on each domain controller monitors this attribute for its site to
verify that it has been written. If a period of 60 minutes elapses without a modification to the
attribute, a new ISTG declares itself.
• Windows Server 2003 or Windows Server 2003 interim functional level: Each domain controller
maintains an up-to-dateness vector, which contains an entry for each domain controller that holds
a full replica of any directory partition that the domain controller replicates. On domain controllers
that are running Windows Server 2003, this up-to-dateness vector contains a timestamp that
indicates the time at which the domain controller was last contacted by each of its replication
partners, including both direct and
indirect partners (that is, every domain controller that replicates a directory partition that is
stored by this domain controller). The timestamp is recorded whether or not the local domain
controller actually received any changes from the partner. Because all domain controllers store
the schema and configuration directory partitions, every domain controller is guaranteed to have
the ISTG for its site among the domain controllers in its up-to-dateness vector.
This timestamp eliminates the need to receive periodic replication of the updated
interSiteTopologyGenerator attribute from the current ISTG. When the timestamp indicates
that the current ISTG has not contacted the domain controller in the last 120 minutes, a new ISTG
declares itself.
The Windows Server 2003 method eliminates the network traffic that is generated by periodically
replicating the interSiteTopologyGenerator attribute update to every domain controller in the
forest.
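The failover thresholds described above (60 minutes at the Windows 2000 functional level, 120 minutes at the Windows Server 2003 functional levels) can be pictured with a minimal Python sketch; the function name and inputs are hypothetical.

    # Minimal sketch of the ISTG viability check described above: a domain
    # controller compares the ISTG's last observed activity against the
    # failover threshold for the forest functional level and declares itself
    # ISTG when the threshold is exceeded.
    from datetime import datetime, timedelta

    FAILOVER = {
        "Windows2000": timedelta(minutes=60),         # attribute written every 30 min
        "WindowsServer2003": timedelta(minutes=120),  # up-to-dateness vector timestamp
    }

    def istg_failed(last_seen, functional_level, now=None):
        """True when a new ISTG should declare itself."""
        now = now or datetime.utcnow()
        return now - last_seen > FAILOVER[functional_level]

    last_activity = datetime.utcnow() - timedelta(minutes=90)
    print(istg_failed(last_activity, "Windows2000"))         # True: 90 > 60
    print(istg_failed(last_activity, "WindowsServer2003"))   # False: 90 < 120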
ISTG Eligibility
Eligibility for the ISTG role depends on the operating systems of the domain controllers in the
site. When a new ISTG is required,
each domain controller computes a list of domain controllers in the site. All domain controllers in the
site arrive at the same ordered list. Eligibility is established as follows:
• If no domain controllers in the site are running Windows Server 2003, all domain controllers that
are running Windows 2000 Server are eligible. The list of eligible servers is ordered by GUID.
• If at least one domain controller in the site is running Windows Server 2003, all domain controllers
that are running Windows Server 2003 are eligible. In this case, the entries in the list are sorted
first by operating system and then by GUID. In a site in which some domain controllers are
running Windows 2000 Server, domain controllers that are running Windows Server 2003 remain
at the top of the list and use the GUID in the same manner to maintain the order.
The domain controller that is next in the list of servers after the current owner declares itself the
new ISTG by writing the interSiteTopologyGenerator attribute on the NTDS Site Settings object.
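The ordering and succession rules can be sketched in a few lines of Python. The GUID values are hypothetical, and the wrap-around at the end of the list is an assumption of this sketch rather than a behavior this document states.

    # Sketch of the ISTG eligibility ordering described above: when any domain
    # controller in the site runs Windows Server 2003, only those servers are
    # eligible; entries sort by operating system first, with GUID order
    # breaking ties. The server after the current owner becomes the new ISTG.
    def eligibility_list(servers):
        """servers: list of (guid, runs_ws2003) tuples for the site."""
        any_ws2003 = any(ws2003 for _, ws2003 in servers)
        eligible = [s for s in servers if s[1] or not any_ws2003]
        # Windows Server 2003 entries first, then ordered by GUID.
        return sorted(eligible, key=lambda s: (not s[1], s[0]))

    def next_istg(servers, current_guid):
        ordered = [guid for guid, _ in eligibility_list(servers)]
        i = ordered.index(current_guid)
        return ordered[(i + 1) % len(ordered)]   # wrap-around is an assumption

    dcs = [("guid-1", True), ("guid-2", True), ("guid-3", False)]
    print(eligibility_list(dcs))      # [('guid-1', True), ('guid-2', True)]
    print(next_istg(dcs, "guid-1"))   # 'guid-2'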
If the current ISTG is temporarily disconnected from the topology, as opposed to being shut down,
and a new ISTG declares itself in the interim, then two domain controllers can temporarily assume
the ISTG role. When the original ISTG resumes replication, it initially considers itself to be the current
ISTG and creates inbound replication connection objects, which results in duplicate intersite
connections. However, as soon as the two ISTGs replicate with each other, the last domain controller
to write the interSiteTopologyGenerator attribute continues as the single ISTG and removes the
duplicate connections.
Bridgehead Server Selection
Bridgehead servers can be selected in the following ways:
• Automatically by the ISTG from all domain controllers in the site.
• Automatically by the ISTG from all domain controllers that are identified as preferred bridgehead
servers, if any preferred bridgehead servers are assigned. Preferred bridgehead servers must be
assigned manually.
• Manually by creating a connection object on a domain controller in one site from a domain
controller in a different site.
By default, when at least one domain controller in a site is running Windows Server 2003 (regardless
of forest functional level), any domain controller that hosts a domain in the site is a candidate to be
an ISTG-selected bridgehead server. If preferred bridgehead servers are selected, candidates are
limited to this list. The connections from remote servers are distributed among the available
candidate bridgehead servers in each site. The selection of multiple bridgehead servers per domain
and transport is new in Windows Server 2003. The ISTG uses an algorithm to evaluate the list of
domain controllers in the site that can replicate each directory partition. This algorithm is improved
in Windows Server 2003 to randomly select multiple bridgehead servers per directory partition and
transport. In sites containing only domain controllers that are running Windows 2000 Server, the
ISTG selects only one bridgehead server per directory partition and transport.
When bridgehead servers are selected by the ISTG, the ISTG ensures that each directory partition in
the site that has a replica in any other site can be replicated to and from that site or sites. Therefore,
if a single domain controller hosts the only replica of a domain in a specific site and the domain has
domain controllers in another site or sites, that domain controller must be a bridgehead server in its
site. In addition, that domain controller must be able to connect to a bridgehead server in the other
site that also hosts the same domain directory partition.
Note
• If a site has a global catalog server but does not contain at least one domain controller of every
domain in the forest, then at least one bridgehead server must be a global catalog server.
Preferred Bridgehead Servers
Because bridgehead servers must be able to accommodate more replication traffic than non-
bridgehead servers, you might want to control which servers have this responsibility. To specify
servers that the ISTG can designate as bridgeheads, you can select domain controllers in the site
that you want the ISTG to always consider as preferred bridgehead servers for the specified
transport. These servers are used exclusively to replicate changes collected from the site to any
other site over that transport. Designating preferred bridgehead servers also serves to exclude
those domain controllers that, for reasons of capability, you do not want to be used as bridgehead
servers.
Depending on the available transports, the directory partitions that must be replicated, and the
availability of global catalog servers, multiple bridgehead servers might be required to replicate full
and partial copies of directory data from one site to another.
The ISTG recognizes preferred bridgehead servers by reading the bridgeheadTransportList
attribute of the server object. When this attribute has a value that is the distinguished name of the
transport container that the server uses for intersite replication (IP or SMTP), the KCC treats the
server as a preferred bridgehead server. For example, the value for the IP transport is
CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName. You can
use Active Directory Sites and Services to designate a preferred bridgehead server by opening the
server object properties and placing either the IP or SMTP transport into the preferred list, which
adds the respective transport distinguished name to the bridgeheadTransportList attribute of the
server.
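The following sketch shows what that administrative step amounts to at the directory level: adding the IP transport container's distinguished name to the server object's bridgeheadTransportList attribute. It uses the third-party ldap3 Python package; the server, credentials, and DNs are hypothetical, and in practice you would normally use Active Directory Sites and Services rather than a raw LDAP write.

    # Sketch: designate a preferred bridgehead server for the IP transport by
    # adding the transport DN to bridgeheadTransportList. All names are
    # hypothetical; requires appropriate permissions.
    from ldap3 import Server, Connection, MODIFY_ADD

    server_dn = ("CN=DC01,CN=Servers,CN=Hub,CN=Sites,CN=Configuration,"
                 "DC=corp,DC=example,DC=com")
    ip_transport_dn = ("CN=IP,CN=Inter-Site Transports,CN=Sites,"
                       "CN=Configuration,DC=corp,DC=example,DC=com")

    conn = Connection(Server("dc01.corp.example.com"),
                      user="CORP\\admin", password="...", auto_bind=True)
    conn.modify(server_dn,
                {"bridgeheadTransportList": [(MODIFY_ADD, [ip_transport_dn])]})
    print(conn.result)   # inspect 'description' for success
    conn.unbind()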
The bridgeheadServerListBl attribute of the transport container object is a backlink attribute of
the bridgeheadTransportList attribute of the server object. If the bridgeheadServerListBl
attribute contains the distinguished name of at least one server in a site, then the KCC uses only
preferred bridgehead servers to replicate changes for that site, according to the following rules:
• If at least one server is designated as a preferred bridgehead server, updates to the domain
directory partition hosted by that server can be replicated only from a preferred bridgehead
server. If at the time of replication no preferred bridgehead server is available for that directory
partition, replication of that directory partition does not occur.
• If any bridgehead servers are designated but no domain controller is designated as a preferred
bridgehead server for a specific directory partition that has replicas in another site or sites, the
KCC selects a domain controller to act as the bridgehead server, if one is available that can
replicate the directory partition to the other site or sites.
Therefore, to use preferred bridgehead servers effectively, be sure to:
• Assign at least two bridgehead servers for each of the following:
• Any domain directory partition that has a replica in any other site.
• Any application directory partition that has a replica in another site.
• The schema and configuration directory partitions (one bridgehead server replicates both) if no
domains in the site have replicas in other sites.
• If the site has a global catalog server, select the global catalog server as one of the preferred
bridgehead servers.
Windows 2000 Single Bridgehead Selection
In a Windows 2000 forest or in a Windows Server 2003 forest that has a forest functional level of
Windows 2000, the ISTG selects a single bridgehead server per directory partition and transport. The
selection changes only when the bridgehead server becomes unavailable. The following diagram
shows the automatic bridgehead server (BH) selection that occurs in the hub site where all domain
controllers host the same domain directory partition and multiple sites have domain controllers that
host that domain directory partition.
Windows 2000 Single Bridgehead Server in a Hub Site that Serves Multiple Branch Sites
Windows Server 2003 Multiple Bridgehead Selection
When at least one domain controller in a site is running Windows Server 2003 (and thereby becomes
the ISTG), the ISTG begins performing random load balancing of new connections. This load
balancing occurs by default, although it can be disabled.
When creating a new connection, the KCC must choose endpoints from the set of eligible
bridgeheads in the two endpoint sites. Whereas in Windows 2000 the ISTG always picks the same
bridgehead for all connections, in Windows Server 2003 it picks one randomly from the set of
possible bridgeheads.
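The selection behavior can be pictured with a short Python sketch. The candidate pool and the preferred-bridgehead shortcut mirror the rules described in this section; the server and site names are hypothetical.

    # Sketch of Windows Server 2003 random bridgehead selection: each new
    # inbound connection picks a random endpoint from the eligible candidates
    # instead of always reusing a single server, spreading load across the
    # hub site.
    import random

    def pick_bridgehead(candidates, preferred=None):
        """Choose among preferred bridgeheads when any are designated,
        otherwise among all candidate domain controllers."""
        pool = preferred if preferred else candidates
        return random.choice(pool)

    hub_candidates = ["HUB-DC1", "HUB-DC2", "HUB-DC3"]
    # One randomly chosen hub endpoint per new branch-site connection:
    for branch in ["Branch1", "Branch2", "Branch3", "Branch4"]:
        print(branch, "->", pick_bridgehead(hub_candidates))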
Assuming two of the three domain controllers have been added to the site since the ISTG was
upgraded to Windows Server 2003, the following diagram shows the connections that might exist on
domain controllers in the hub site to accommodate the four branch sites that have domain
controllers for the same domain.
Random Bridgehead Server Selection in a Hub Site in which the ISTG Runs Windows
Server 2003
If one or more new domain controllers are added to the hub site, the inbound connections on the
existing bridgehead servers are not automatically re-balanced. The next time it runs, the ISTG
considers the newly added server(s) as follows:
• If preferred bridgehead servers are not selected in the site, the ISTG considers the newly added
servers as candidate bridgehead servers and creates new connections randomly if new
connections are needed. It does not remove or replace the existing connections.
• If preferred bridgehead servers are selected in the site, the ISTG does not consider the newly
added servers as candidate bridgehead servers unless they are designated as preferred
bridgehead servers.
The initial connections remain in place until a bridgehead server becomes unavailable, at which
point the KCC randomly replaces the connection on any available bridgehead server. That is, the
endpoints do not change automatically for the existing bridgehead servers. In the following diagram,
two new domain controllers are added to the hub site, but the existing connections are not
redistributed.
New Domain Controllers with No New Connections Created
Although the ISTG does not rebalance the load among the existing bridgehead servers in the hub
site after the initial connections are created, it does consider the added domain controllers as
candidate bridgehead servers under either of the following conditions:
• A new site is added that requires a bridgehead server connection to the hub site.
• An existing connection to the hub site becomes unavailable.
The following diagram illustrates the inbound connection that is possible on a new domain controller
in the hub site to accommodate a failed connection on one of the existing hub site bridgehead
servers. In addition, a new branch site is added and a new inbound connection can potentially be
created on the new domain controller in the hub site. However, because the selection is random,
there is no guarantee that the ISTG creates the connections on the newly added domain controllers.
Possible New Connections for Added Site and Failed Connection
Using ADLB to Balance Hub Site Bridgehead Server Load
In large hub-and-spoke topologies, you can implement the redistribution of existing bridgehead
server connections by using the Active Directory Load Balancing (ADLB) tool (Adlb.exe), which is
included with the “Windows Server 2003 Active Directory Branch Office Deployment Guide.” ADLB
makes it possible to dynamically redistribute the connections on bridgehead servers. This
application works independently of the KCC, but uses the connections that are created by the KCC.
Connections that are manually created are not moved by ADLB. However, manual connections are
factored into the load-balancing equations that ADLB uses.
The ISTG is limited to making modifications in its site, but ADLB modifies both inbound and outbound
connections on eligible bridgehead servers and offers schedule staggering for outbound
connections.
For more information about how bridgehead server load balancing and schedule staggering work
with ADLB, see the “Windows Server 2003 Active Directory Branch Office Planning and Deployment
Guide” on the Web at http://go.microsoft.com/fwlink/?linkID=28523.
Top of page
Network Ports Used by Replication Topology
By default, RPC-based replication uses dynamic port mapping. When connecting to an RPC endpoint
during Active Directory replication, the RPC run time on the client contacts the RPC endpoint mapper
on the server at a well-known port (port 135). The client queries the RPC endpoint mapper on this
port to determine which port has been assigned for Active Directory replication on the server. This
query occurs whether the port assignment is dynamic (the default) or fixed, so the client never
needs to know in advance which port to use for Active Directory replication.
Note
• An endpoint comprises the protocol, local address, and port address.
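As a quick diagnostic, the following Python sketch checks whether the endpoint mapper port (TCP 135) on a domain controller is reachable, which is the first step of the dynamic port negotiation described above. It only tests reachability; it does not perform the actual endpoint-mapper query. The host name is hypothetical.

    # Sketch: confirm TCP connectivity to the RPC endpoint mapper (port 135).
    import socket

    def endpoint_mapper_reachable(host, timeout=3.0):
        try:
            with socket.create_connection((host, 135), timeout=timeout):
                return True
        except OSError:
            return False

    print(endpoint_mapper_reachable("dc01.corp.example.com"))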