The VNX File has built-in replication that allows you to create full copies of a file system, for disaster recovery or for migrations. Replicating the Virtual Data Movers as well maintains the CIFS share and CIFS server information. Here, we will briefly go over all of the steps required to replicate a NAS file system for CIFS.
If you are a regular at the Anexinet ISG blog, you may remember a post on a similar topic at http://anexinetisg.blogspot.com/2014/05/vnx-replicator-cli-replication-setup.html. We will be expanding on that topic here, going through setting up replication from beginning to end.
The first step is to confirm licensing for all of the required products.
You will need the SnapSure and ReplicatorV2 licenses to complete this task. Contact your sales representative if you need to acquire them.
To set up replication properly, you will need to follow these steps.
- Create a NAS-to-NAS relationship
- Create a Data Mover Interconnect
- Configure User Mapper
- Start VDM Replication
- Start File Systems Replication
How to create a NAS-to-NAS relationship.
The first step is to tell each VNX about the other. You will use the ‘nas_cel’ command to establish a connection that lets each VNX Control Station send administrative commands to the other. You will need to establish the connection in both directions, so that each VNX can query the other.
Here is the syntax for the command:
nas_cel -create <remote VNX name> -ip <IP address of the remote control station> -passphrase <passphrase for the relationship>
Breaking it down, here is what you need to create the relationship.
The remote VNX name doesn’t have to be the DNS name, but it is a very good idea to use the name the VNX is known by in DNS, just for simplicity’s sake.
The IP address is the IP of the Control Station. If you have two Control Stations in the VNX, the alias IP address can be used instead. This alias is set with the ‘nas_cs’ command.
The passphrase is a phrase used to match up the connections between the VNX systems. The passphrase must match when you create the NAS-to-NAS relationships on both sides of the replication. It is sent over the wire as plain text, so do not use one of your more secure passwords. As a rule, I have always used ‘nasadmin’ as my passphrase, and I have always encouraged my customers to do the same.
In practice, here is what you should see on the VNX.
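As an illustration (the system names and Control Station IP addresses below are hypothetical, and ‘nasadmin’ is my usual passphrase), the relationship is created in both directions like this:

```
# On the production VNX Control Station:
nas_cel -create VNXDR -ip 10.2.1.100 -passphrase nasadmin

# On the DR VNX Control Station:
nas_cel -create VNXPROD -ip 10.1.1.100 -passphrase nasadmin
```

You can confirm the entries on each side with ‘nas_cel -list’.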
Create the Data Mover to Data Mover Interconnect.
The next task is to create the Data Mover to Data Mover Interconnect. This link is the actual IP link that the data will flow over. When possible, I like to use a dedicated Interface (VNX-speak for an IP address) for replication. I like to call this Interface ‘rep’ on both sides; it makes the purpose of the Interface easy to identify.
So, let’s check the syntax of the command.
To create the Data Mover to Data Mover Interconnect, you will use the following:
nas_cel -interconnect -create <interconnect name> -source_server <local data mover name> -destination_system <remote VNX system name> -destination_server <remote data mover name> -source_interfaces ip=<ip of local interface> -destination_interfaces ip=<ip of remote interface>
Breaking down the command.
-create <interconnect name> is the human-friendly name of the interconnect, but it is still a good idea to use a name that is descriptive and makes sense to you. I like to use the format SourceSystem_DMx_DestSystem_DMx so that it is clear what the source and target systems and Data Movers are.
-source_server <local Data Mover name> is the local Data Mover you are configuring for IP Replication. This is server_2 in many environments, but if you have more than one active Data Mover, you may be creating Interconnects for server_3 or server_4.
-destination_system <remote VNX System name> is the name of the remote system you defined in the NAS-to-NAS relationship step.
-destination_server <remote data mover name> is the name of the Data Mover you are replicating to. Again, this is normally server_2, but may be server_3, server_4, etc., on larger arrays.
-source_interfaces ip=<ip of local interface> is where you define the IP address on the source array that will be used for the Interconnect. If you are typing this command out, remember that interfaces is plural in this case.
-destination_interfaces ip=<ip of remote Interface> is where you define the IP address on the remote data mover for the interconnect. Again, if you are typing the command out, make sure that you put the s on interfaces.
In action, we should see the following.
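A sketch of the command in practice, using hypothetical system names, Data Movers, and replication Interface IPs:

```
# On the production VNX:
nas_cel -interconnect -create VNXPROD_DM2_VNXDR_DM2 \
 -source_server server_2 -destination_system VNXDR \
 -destination_server server_2 \
 -source_interfaces ip=10.1.2.50 -destination_interfaces ip=10.2.2.50
```

A matching peer Interconnect is also created on the DR system, with the source and destination values reversed.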
To verify that all connectivity is in place, there is a validate command. It runs a few network checks over your Interconnect to ensure that data can flow across it, just as your actual replication sessions would. This is a great way to catch any routing issues or firewalls that may impact your replication sessions.
nas_cel -interconnect -validate <Interconnect name>
So, to check, we would run the following.
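Using a hypothetical Interconnect name:

```
nas_cel -interconnect -validate VNXPROD_DM2_VNXDR_DM2
```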
Usermapper is one of the lesser-known parts of the VNX File. It is not as critical to CIFS operations as it once was, since many of its duties were offloaded to Secmap, which is stored inside the VDM, but some VNX operations, such as user quotas, still use the Usermapper database.
EMC recommends only one active Usermapper database in a single environment; all other VNXs are set as secondary Usermappers, pointing to a single system as the primary. I go one step further and put my primary Usermapper on my DR site, and have all of my production Usermapper databases point to it. This includes setting my production site as a secondary Usermapper.
My reasoning is that in most cases, the Primary Usermapper in the DR site will be up and available. Any new users that are not in the Usermapper database at the production site will then communicate to the DR site, which has the Primary Usermapper running. The new user gets its SID to UID entry, and the database entry is then cached on the Secondary Usermapper site in production. All subsequent requests will then be satisfied with the local cached copy. If there is an event that requires the DR to come online, you just activate the DR side. New and existing users will be in the database and won’t notice any issues. When normal operation returns and the production VNX is back online, it should already be set (or in case of a full disaster, reconfigured) to secondary Usermapper. Data access and Usermapper entries will continue as normal. Simple failover and failback with no issues.
So, let’s talk about what happens if we let the source side be the primary Usermapper. Users will populate the primary Usermapper on the source side, while the remote Usermapper will not populate, since it is never queried. Come failover time, the DR Usermapper database will need to be converted to primary to give users access, but it is an empty database. In most cases (i.e., no quotas in use), this won’t cause any issues for the end users. Once you bring the source back online, though, you have two Usermapper databases with different entries for your SIDs. Your data will be fine, but you may have some access and security issues due to the mismatched databases. It is fixable via EMC support, but it will take some time.
This bad scenario can be 100% avoided if you use a single Usermapper database at the DR site.
These steps are best done during off hours, since there is a blip in service on the production side while you are between commands.
The steps for properly configuring Usermapper for DR are:
- Disable Usermapper on the DR side.
- Export the Usermapper group and user database to a file.
- Copy the files to the DR side.
- Disable the Usermapper database on the Source side. (This will cut access for a few moments.)
- Import and start the Usermapper database on the DR side.
- Point the Source Usermapper database to the DR side. (Services are restored at this point.)
Here is the syntax to run the Usermapper commands.
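As a sketch of the server_usermapper forms used in these steps (check the man page on your array, as options can vary by release):

```
server_usermapper <movername>                                 # show Usermapper status
server_usermapper <movername> -disable                        # stop the Usermapper service
server_usermapper <movername> -enable                         # start as primary
server_usermapper <movername> -enable primary=<ip>            # start as secondary, pointing at the primary
server_usermapper <movername> -Export {-user|-group} <file>   # dump the database to a file
server_usermapper <movername> -Import {-user|-group} <file>   # load the database from a file
```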
So, in practice, here is how we configure Usermapper.
Disable Usermapper on the DR side.
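On the DR Data Mover (server_2 here is an assumption; substitute your own Data Mover name):

```
server_usermapper server_2 -disable
```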
Next, we go to the source side and export the Usermapper user and group files. The E in Export has to be a capital E.
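A hypothetical export on the source side, writing the files to the nasadmin home directory:

```
server_usermapper server_2 -Export -user /home/nasadmin/usrmap.out
server_usermapper server_2 -Export -group /home/nasadmin/grpmap.out
```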
Now, copy the exports to the DR side. I use the SCP tool, but you are welcome to use any file copying software you like.
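For example, with scp from the source Control Station (the DR Control Station IP and file paths here are hypothetical):

```
scp /home/nasadmin/usrmap.out /home/nasadmin/grpmap.out nasadmin@10.2.1.100:/home/nasadmin/
```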
Disable Usermapper on the Source Side. If you are doing this on a production file system, then do this off hours, as this will cause an interruption until you finish the Usermapper configuration in the environment.
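Again on the source Data Mover:

```
server_usermapper server_2 -disable
```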
Now, import the Usermapper group and users to the database.
You can then start the Usermapper database on the DR side at this time.
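On the DR side (the file paths are hypothetical), the import followed by the enable looks like:

```
server_usermapper server_2 -Import -user /home/nasadmin/usrmap.out
server_usermapper server_2 -Import -group /home/nasadmin/grpmap.out
server_usermapper server_2 -enable
```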
Finally, to restore service and to get the Usermapper database set properly, point the Source Usermapper database to the DR side. The IP used here is the DR Data Mover IP. I normally use the IP used for replication, but any IP on the DM will work.
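On the source side, pointing at a hypothetical DR Data Mover IP:

```
server_usermapper server_2 -enable primary=10.2.2.50
```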
If you were using the source for production data, services are restored at this point.
To check your work, you can just type the following.
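Running server_usermapper with no options reports the service status on each Data Mover:

```
server_usermapper server_2
```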
Configure VDM Replication
Now we are all set to start copying our Virtual Data Movers. The Virtual Data Movers, or VDMs, store all of the CIFS configuration. This includes the CIFS servers themselves, the Interface names that the CIFS servers use (but not the actual IPs; more on that in a moment), the shares, local user and group databases, and other details that make the CIFS servers easily protected. They do not contain any of the file data, since that is stored in the file systems.
What is important is the Interface of the CIFS server. Let’s take for example our CIFS server, ‘bobstuff’.
In the CIFS configuration, this server is using the Interface ‘bobstuff’. Disregard what the IP address of the CIFS server is at the moment; just remember that the CIFS server needs the Interface ‘bobstuff’ to operate.
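If you want to check this yourself, the CIFS configuration can be listed with server_cifs; the CIFS server stanza shows which Interface it is bound to (the VDM name here is hypothetical):

```
server_cifs vdm_bobstuff
```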
In order to bring up the CIFS server on the DR side, we will need a matching environment there. The CIFS server details will come over in the VDM replication and the data will come over in the file system replication; it is our job to ensure that the DR VNX is in the proper state to let the VDM bring up the CIFS servers. We do this by creating the Interface ‘bobstuff’ on the DR side, obviously with a different IP address.
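A sketch of creating the matching Interface on the DR Data Mover (the device name, IP, mask, and broadcast are hypothetical; use values appropriate for your DR network):

```
server_ifconfig server_2 -create -Device cge0 -name bobstuff \
 -protocol IP 10.2.3.25 255.255.255.0 10.2.3.255
```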
On the DR side, the ‘bobstuff’ Interface exists but with a different IP address. What this means is that when we fail over the replication environment, the CIFS server will start on the DR side with this IP address. If we are using Active Directory DNS with dynamic updates enabled, the change in DNS will happen automatically, and most users won’t notice that CIFS services are on the DR side. They may need to run ‘ipconfig /flushdns’ or, at worst, reboot their workstation to pick up the new IP address, but in most cases they can reconnect once the failover is complete. If your DNS server does not support dynamic updates, then a DNS change will be required to allow users to reconnect to the share.
Once we have the Interface created on the target side, we are all set to start our VDM replication. The command to start it off is nas_replicate, and it has a lot of switches. Don’t concern yourself with all of them at this time; we are just going to use a few.
The syntax on our VDM replication is:
nas_replicate -create <replication name> -source -vdm <vdm to replicate> -destination -pool <destination pool name> -interconnect <interconnect name> -source_interface ip=<IP address of Source VNX Interconnect> -destination_interface ip=<IP address of Target VNX Interconnect>
So this command is not that complicated.
-create <replication name> is just the friendly name of our replication session. I tend to name all of my replication sessions rep_<thing I’m replicating>.
-vdm <vdm to replicate> is the name of the VDM we are replicating. With this command syntax, make sure not to have a VDM with the same name on the target side. If you do, replication will still work, but you will see ‘bobstuff_replica1’ on the target side, which does not look as clean.
-pool <destination pool name> is the storage pool on the target side in which the VDM will be carved from. It could be ‘Pool 0’ or ‘myPool’. Remember, if your pool name has a space, you may need to surround it with quotes for the command to work.
-interconnect <interconnect name> is the interconnect that we created in the earlier steps.
-source_interface ip=<IP address of Source VNX Interconnect> is the source Interconnect IP you wish to use. Please notice that the word interface is singular in this case.
-destination_interface ip=<IP address of Target VNX Interconnect> is the target Interconnect IP that we defined earlier. This one is also singular.
So, let’s run this against our source VDM.
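A hypothetical invocation for a VDM named vdm_bobstuff (the session name, pool, Interconnect name, and IPs are placeholders):

```
nas_replicate -create rep_vdm_bobstuff -source -vdm vdm_bobstuff \
 -destination -pool "Pool 0" -interconnect VNXPROD_DM2_VNXDR_DM2 \
 -source_interface ip=10.1.2.50 -destination_interface ip=10.2.2.50
```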
The OK means that it created the VDM on the target side successfully and started to copy the VDM over the Interconnect. VDMs are relatively small, so they do not take long to sync.
We can check the status of the VDM replication with this command.
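For example, against a hypothetical session name:

```
nas_replicate -info rep_vdm_bobstuff
```

You can also see all sessions at once with ‘nas_replicate -list’.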
If you see a time in the last sync time field, then the data is in sync, which defaults to 5 minutes or less on a VDM.
Now we are ready to start the replication of the actual data. The syntax will seem very similar.
nas_replicate -create <replication name> -source -fs <fs to replicate> -destination -pool <destination pool name> -vdm <vdm to mount the file system to> -interconnect <interconnect name> -source_interface ip=<IP address of Source VNX Interconnect> -destination_interface ip=<IP address of Target VNX Interconnect>
With minimal changes to the command we used for VDM replication, we can start up a file system replication.
-create <replication name> Again, this is where we have our friendly name for the replication session. I still start my replications with rep_.
-fs <fs to replicate> This is where we define the name of the file system we want to replicate.
-pool <destination pool name> Here is where we define the storage pool the target file system will be carved from. Make sure you have enough space on the target for the creation of the file system. If the source is thin, the target will be thin; if the source is thick, the target will be thick. It will also be exactly the same size, as this is a requirement for Replicator.
-vdm <vdm to mount the file system to> is where we tell the VNX which VDM to mount the file system on.
-interconnect <interconnect name> for the name of the Interconnect we wish to use.
-source_interface ip=<ip of the source VNX Interconnect> to enter the source IP of the Interconnect to use.
-destination_interface ip=<IP address of Target VNX Interconnect> to enter the target IP of the Interconnect to use.
So in practice, here is the File System replication in action.
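A hypothetical invocation for a file system named bobsdata, mounted to vdm_bobstuff (the session name, pool, Interconnect name, and IPs are placeholders):

```
nas_replicate -create rep_fs_bobsdata -source -fs bobsdata \
 -destination -pool "Pool 0" -vdm vdm_bobstuff \
 -interconnect VNXPROD_DM2_VNXDR_DM2 \
 -source_interface ip=10.1.2.50 -destination_interface ip=10.2.2.50
```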
When you run this command, do not panic if it takes some time to complete. Remember, to return the OK, the system needs to carve out the file system on the target side. If you have a large, thick file system, it can take a while to report back the OK. Afterwards, you can get the status of the replication with the nas_replicate -info command.
And now to check up on the status.
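Again with a hypothetical session name:

```
nas_replicate -info rep_fs_bobsdata
```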
From this we can see that our file system is in sync. If your file system is not in sync, don’t worry; it can take some time to move the data over the Interconnect, depending on the speed of the link and the amount of data to copy.
At this time, give yourself a pat on the back. You have successfully protected your file data to a remote VNX. I hope this guide gave you a glimpse of how VNX File replication works and how to configure it from the VNX command line.
There are many other features of IP replication, such as different failover types, local replication, NFS replication, bandwidth throttling, migrations, cascading replication, and DR testing, to name a few.
Many of these topics can be found in the EMC guide, Using VNX Replicator Release 8.1. Of course, feel free to reach out to Anexinet and one of our consultants would be happy to develop a VNX File DR plan to fit your business data needs.