In this article, we will take a quick look at containerized (contained, in BOL terminology) availability groups, which appeared in SQL Server 2022. You can read more about them in the documentation: What is a contained availability group? A number of articles from other sources have also been published on the topic.
This is a new mechanism for keeping a limited set of system objects and entities identical across replicas, something that was sorely lacking before. DBAs had to synchronize logins, Agent jobs, maintenance plans, and a host of other system entities described in the master and msdb system databases “by hand.” SQL Server 2022 finally provides a way to do this, although we had to wait about ten years for it…
So here is the definition from the documentation:
A contained availability group is an Always On availability group that supports:
managing metadata objects (users, logins, permissions, SQL Server Agent jobs, and so on) at the availability group level in addition to the instance level;
specialized contained system databases as part of the availability group.
In fact, a contained availability group is an add-on to the already familiar basic availability group that has its own master and msdb databases (these are not copies of the instance's databases; they are miraculously created from a template), and they are synchronized in the same way as the user databases in the group. To distinguish them from the databases that already exist in the SQL Server instance, their names are prefixed with the name of the availability group. If the group is re-created and the old databases are found, the sequence number of the attempt may be appended to the end of the new database names.
When creating a group, you need to select the checkbox marking it as a contained group, or add the keyword CONTAINED to the command. All of this is described in detail in the article CREATE AVAILABILITY GROUP (Transact-SQL). The REUSE_SYSTEM_DATABASES option can be useful when re-creating a group: it lets you reuse the contained system databases from the previous attempt. It is also worth noting that the documentation for ALTER AVAILABILITY GROUP (Transact-SQL) currently gives no way to convert an existing availability group into a contained one. You have to drop it and recreate it.
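As a rough sketch, creating a contained group with reuse of the previous attempt's system databases could look like this (the group, database, replica, and endpoint names here are hypothetical):

```sql
-- Hedged sketch: AG1, MyDB, SQL1/SQL2 and the endpoint URLs are hypothetical names.
CREATE AVAILABILITY GROUP [AG1]
WITH (CONTAINED, REUSE_SYSTEM_DATABASES)   -- REUSE_SYSTEM_DATABASES only matters on re-creation
FOR DATABASE [MyDB]
REPLICA ON
    N'SQL1' WITH (
        ENDPOINT_URL      = N'TCP://SQL1.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQL2' WITH (
        ENDPOINT_URL      = N'TCP://SQL2.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);
```

This is a configuration fragment, not a complete recipe: it assumes the WSFC cluster, endpoints, and a full backup of [MyDB] already exist.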
If you join a secondary replica manually (for example, because the database files are in a different location), you will first need to join the group's master database, then its msdb, and only then all the user databases for which the group provides HADR.
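The join order described above might look roughly like this on the secondary (the AG1 and MyDB names are hypothetical; note that the contained system databases carry the group-name prefix):

```sql
-- Hedged sketch: run on the secondary replica after restoring the databases WITH NORECOVERY.
ALTER DATABASE [AG1_master] SET HADR AVAILABILITY GROUP = [AG1];  -- first the group's master
ALTER DATABASE [AG1_msdb]   SET HADR AVAILABILITY GROUP = [AG1];  -- then its msdb
ALTER DATABASE [MyDB]       SET HADR AVAILABILITY GROUP = [AG1];  -- then the user databases
```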
If everything was done correctly when creating the contained group, sys.availability_groups will show 1 in the new column named is_contained.
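A quick way to check:

```sql
-- Run on any replica of the cluster; is_contained = 1 marks a contained availability group.
SELECT name, is_contained
FROM sys.availability_groups;
```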
An important point is that the contained group’s own system databases almost completely replace the metadata stored in the system databases of the cluster's instances. You can work with the group's system objects and msdb entities only by connecting through the group listener, which is a mandatory element for such a group. This applies not only to the primary replica but also to a readable replica reached with “ApplicationIntent=ReadOnly”. In the connection string, as before, you must specify a database from the availability group, and the login and password must exist among the logins of the contained group. Which tempdb is used depends on where the connection lands according to the group's routing rules; tempdb does not become contained, but it is accessible from the contained group.
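For example, a read-only connection through the listener with sqlcmd might look like this (the listener, database, and login names are hypothetical; the -K switch sets the application intent):

```shell
# Hedged sketch: AG1-listener, MyDB and ag_login are hypothetical names.
# The login must exist in the contained group's own master database,
# not (only) on the instance hosting the replica.
sqlcmd -S AG1-listener -d MyDB -U ag_login -P "$AG_PASSWORD" -K ReadOnly
```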
When connecting through the listener, a small “miracle” occurs: many components of the database server become inaccessible. For example, the availability group itself is not visible in SSMS (you can manage it only when connected to the cluster's instances), and many objects simply disappear like the Cheshire cat: you will not see instance logins, jobs, alerts, and so on there. But not everything disappears: all logins from the sysadmin server role of the parent instance end up in the contained master database.
What is even stranger: although databases that are not part of the contained group are not listed in its sys.databases and are therefore not visible in SSMS, they can still be accessed by a three-part fully qualified name or with the USE command.
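When connected through the listener, that looks roughly like this ([OtherDB] and its table are hypothetical names for a database outside the group):

```sql
-- Hedged sketch: [OtherDB] is a hypothetical database that is NOT in the contained group.
SELECT name FROM sys.databases;              -- [OtherDB] does not appear in the list
SELECT COUNT(*) FROM OtherDB.dbo.SomeTable;  -- yet a three-part name still resolves
USE OtherDB;                                 -- and so does USE
```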
This, of course, does not make server maintenance and administration easier. With a contained group, you need to carefully plan and distribute jobs, logins, and system objects between the instance hosts and the contained groups, so as not to miss something when setting up backups, for example, and not to forget to add the necessary logins to the SQL Server instances included in the cluster or to register the Database Mail profile.
Difficulties are also possible when sp_configure is used to enable options: they can only be changed directly on the instance, not in the contained group. The fact that you can still read sp_configure data from inside a contained group is another small miracle.
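In other words, assuming the behavior described above:

```sql
-- Reading the settings works even through the contained group's listener:
EXEC sp_configure;

-- Changing them must be done on an instance connection, not through the listener:
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```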
At the moment, there are limitations compared to basic availability groups. Contained groups cannot participate in distributed availability groups and cannot be used together with replication. Log shipping is also not supported when the target database is in a contained availability group; only a source database in a contained group is possible.
In addition to the limitations, you may encounter difficulties showing that the technology is new and still rough in places. I ran into the fact that the Agent jobs created automatically in the contained group to run maintenance plans use the wrong package path format: the leading slash is missing, so the path has to be edited manually after every plan change. Colleagues have also had trouble diagnosing Agent job errors; it seems that for a contained group's alerts you need to set up a dedicated mail profile that is distinguishable from those on the hosts.
You may also be puzzled that the system databases of a contained group do not completely cover the metadata of the instance's system databases; the boundary is a leaky sieve. I hope all of this will gradually become harmonious and understandable. In the meantime, much is still illogical, hard to explain, or simply unexplained. The documentation does not mention, for example, how to restore contained system databases from a backup. For now, it seems logical to restore them like user databases in an availability group, that is, by first removing them from the group.
Another oddity: on the Options tab of the backup task in a maintenance plan, the checkbox “For availability databases, ignore replica priority for backup and backup on primary settings” can no longer be disabled. Probably because it makes little sense to turn it off inside a contained group, given a possible role change.
Also, there is a warning in the documentation, which I will quote here:
Contained availability groups are a mechanism for ensuring consistency of run-time configuration across the replicas of an availability group. They are NOT a security boundary. There is no boundary that prevents access to databases outside of the availability group when connected to a contained availability group.
And indeed, the contained system databases behave like a sieve: sometimes their metadata shadows that of the instance's system databases, and sometimes the opposite happens: you configure something in the group, and a component of the primary instance ends up being used. And, as mentioned, access to instance-level system information is entirely possible. Everything needs to be checked and rechecked.
Consider what is written in the documentation:
In a contained availability group, you can perform some instance-level operations, such as shutting down SQL Server. However, most database-level, endpoint-level, or availability-group-level operations can only be performed from instance connections, not from a contained group connection.
Change data capture (CDC) must be configured in the contained availability group.
To use TDE with databases in a contained availability group, you must manually add the database master key of those databases to the contained master database.
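A hedged sketch of that step, following the documentation's statement above: connect through the group's listener (so that master resolves to the contained master database) and re-create the key with the same password it was originally created with. The password placeholder here is just that, a placeholder.

```sql
-- Hedged sketch: executed through the contained AG listener, so "master"
-- resolves to the group's own contained master database, not the instance's.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<same password as used on the instance>';
```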