HP MSA Remote Snap Software technical white paper

When the primary site is recovered, if the primary volume and replication set are intact, you can determine from the two snapshots what has changed, and copy those changes back to the primary site, either directly to the primary volume or through the primary server.

Figure 28. Resync to primary site by copying changes


If the primary volume or replication set cannot be recovered, if the amount of modified data is significant, or if the changes cannot easily be determined, create a new replication set with the read-write snapshot of the secondary volume as the new primary volume and replicate back to the primary site. Use the snapshot, rather than deleting the replication set and using the secondary volume as the primary volume of the new replication set, so that you retain flexibility if the primary volume becomes available and you want to copy the changes back to it while leaving the original replication set in place. Once you have performed the initial replication on the "resynchronizing" replication set, create a snapshot of its secondary volume, copy that snapshot to a new, independent volume, and create a new replication set using the new volume as its primary volume. Again, using a snapshot rather than deleting the replication set and using the secondary volume as the new primary volume preserves flexibility in case additional changes need to be replicated from the backup site to the primary site. Copying the snapshot to a new, independent volume, rather than using the snapshot itself, lets you clean up the resynchronizing replication set without consuming space for both the volume and the snapshot.


Figure 29. Resync to primary site by creating replication sets


Use cases

This white paper provides examples that demonstrate Remote Snap’s ability to replicate data in various situations.

Single office with a remote site for backup and disaster recovery using iSCSI to replicate data

Figure 30. Single office with a remote site for backup and disaster recovery (iSCSI)

# create vdisk disks 1.1-3 level raid5 vd-r5-a
Success: Command completed successfully.

# create master-volume reserve 20GB size 50GB vdisk vd-r5-a FSDATA
Info: The volume was created. (spFSDATA)
Info: The volume was created. (FSDATA)
Success: Command completed successfully.

# create remote-system user manage password !manage 10.10.5.170
Success: Command completed successfully. (10.10.5.170) - The remote system was created.

# create replication-set link-type iSCSI remote-system 10.10.5.170 remote-vdisk vd-r5-a FSDATA
Info: The secondary volume was created. (rFSDATA)
Info: The primary volume was prepared for replication. (FSDATA)
Info: Started adding the secondary volume to the replication set. (rFSDATA)
Info: Verifying that the secondary volume was added to the replication set. This may take a couple of minutes... (rFSDATA)
Info: The secondary volume was added to the replication set. (rFSDATA)
Info: The primary volume is ready for replication. (FSDATA)
Success: Command completed successfully.

# replicate volume FSDATA snapshot init-FSDATA
Info: The replication has started. (init-FSDATA)
Success: Command completed successfully.

Command example 20 Single office with a remote site for backup and disaster recovery (iSCSI) CLI output—linear replication


To configure a single office with a remote site for backup and disaster recovery (iSCSI):

1. Set up a P2000 G3 FC/iSCSI combo or iSCSI controller array, an MSA 1040 iSCSI array, or an MSA 2040 SAN array with iSCSI-configured
ports with enough disks (according to the application load and users), then configure the management ports and iSCSI ports with IP
addresses. Install the Remote Snap license if one has been purchased, or install the temporary license from the system’s Tools > Install
License page of the SMU (for the P2000 only). See the Setup requirements section above for additional license and other information.

2. Create the vdisks or pools, then the master or base volumes FS Data and App A Data; if using linear replication, enable snapshots when
creating the volumes. For linear replication, if an existing snap pool is not specified, a snap pool is automatically created with the default
policy and size, or you can adjust the settings as necessary. Make sure the volumes are in different vdisks or pools and that each vdisk or
pool has enough space to expand the snap pool or snapshot space in the future.

3. Connect your array to the WAN. If using iSCSI over the WAN as part of your disaster recovery solution, connect your file server and
application server to the WAN. Connecting the management port of an array to the WAN helps you to manage the array remotely and is
necessary when using the SMU to create linear replication sets.

4. Map the volumes to the file server and the application server.

5. Identify a remote location and set up a second P2000 G3 FC/iSCSI combo or iSCSI controller array, an MSA 1040 iSCSI array, or an MSA
2040 SAN array with iSCSI-configured ports and configure both management ports and the iSCSI ports. This is the remote system.
Configure the vdisks or pools to accommodate secondary volumes at a later stage.

6. Set up the connection with the remote system:

a. For linear replications, in both the local system and the remote system, add the other system using the system’s Configuration >
Remote Systems > Add Remote System page of the SMU or the create remote-system command in the CLI.

b. For virtual replications, create a peer connection between the systems using the Create Peer Connection action in the Replications topic
of the SMU or the create peer-connection command in the CLI.

7. Verify the data path between your local system and remote system. For linear replication, use the remote system’s Tools > Verify Remote
Link, or the verify links CLI command. For virtual replication, use the query peer-connection command in the CLI. Always
configure sufficient iSCSI ports to facilitate a working redundant connection to the WAN.

8. Set up the linear replication sets for the volumes FS Data and App A Data using the system’s Wizards > Replication Setup Wizard, using
the volume’s Provisioning > Replicate Volume, or using the create replication-set CLI command, and choose iSCSI as the link
type. Set up the virtual replication sets for the volumes FS Data and App A Data using the Create Replication Set action in the Replications
topic in the SMU, or the create replication-set command in the CLI (a CLI sketch of the virtual path follows this procedure).

9. After the setup is complete, schedule the replication in desired intervals, based on the application load, critical data, replication window
(the time it takes to perform a replication) and so on. This enables you to have a complete backup and disaster recovery setup.

10. Verify the progress of replications by checking the replication images for linear replications, or by checking the replication sets for virtual
replications. This will list the progress or a completed message.

11. Verify the data at the remote location by exporting the linear replication image to a snapshot, or by creating a snapshot of the secondary
volume of the virtual replication set, and mounting the snapshot to a host.
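
For the virtual replication path (steps 6b, 7, and 8 above), the CLI flow is broadly similar to Command example 20. The sketch below is illustrative only: the peer-connection name Peer1, the remote address 10.10.5.170, and the volume name FSDATA are placeholders, and exact parameter names and required arguments can vary by firmware release, so verify the syntax against the CLI Reference Guide for your array.

# create peer-connection remote-port-address 10.10.5.170 remote-username manage remote-password !manage Peer1
# query peer-connection Peer1
# create replication-set peer-connection Peer1 primary-volume FSDATA repFSDATA
# replicate repFSDATA

As with the linear example, the secondary volume is created automatically on the remote system when the replication set is created.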

In case of a failure at the local site, it is possible to switch the application to the remote site data by employing the procedures defined in the
Disaster recovery operations section above. Alternatives include the following:

Linear replications:

• Move the remote array to the local site, convert the secondary volumes to primary or delete the replication sets, and map the volumes to the
servers.

• Move disks or enclosures that contain the secondary volumes to the local site, install or attach to the local array, convert the secondary volumes
to primary, and map them to the servers.


• Replace the local array with a new array, convert the remote secondary volumes to primary, and then replicate the data to the new array.
Once done, convert the volumes of the new array to primary, map them to the servers, and convert the volumes on the remote array back to
secondary.

• Convert the secondary volumes at the remote array to primary or delete the replication sets and map the volumes to the servers.

Virtual replications:

• Move the remote array to the local site, create snapshots of the secondary volumes or delete the replication sets and map the volumes or
snapshots to the servers.

• Replace the local array with a new array, delete the replication sets, create new replication sets using the original secondary volumes as the
primary volumes, and then replicate the data to the new array. Once done, delete the new replication sets, map the new volumes to the
servers, create replication sets using the volumes at the primary site as the primary volumes, and replicate to the remote secondary array.

• Create snapshots of the secondary volumes or delete the replication sets on the remote array, and map the volumes to the servers.

For more information on disaster recovery, see Disaster recovery operations on page 47.

Single office with local site disaster recovery and backup using iSCSI and host access using FC

Figure 31. Single office with local site disaster recovery and backup using iSCSI and host access using FC

To configure a single office with local site disaster recovery and backup using iSCSI and host access using FC:

1. Set up two P2000 G3 combo arrays or MSA 2040 SAN arrays with host ports set to a combination of FC and iSCSI at the local site.
2. Connect the file servers and application servers to the arrays via an FC SAN.
3. Mount the volumes to the applications.
4. Create multiple replication sets with FS Data and App A Data as primary volumes and the secondary volumes on the second P2000 G3 or MSA 2040 system. Create a replication set with App B Data as the primary volume and the secondary volume on the first system. These replication sets are created using the iSCSI link type. We recommend that both P2000 G3 or MSA 2040 systems be connected by a dedicated Ethernet link (LAN).

Switch the applications to the other system if any failures occur on either of the two systems.


Single office with a local site disaster recovery and backup using FC (linear replications only)

Figure 32. Single office with a local site disaster recovery and backup using FC

To configure a single office with a local site disaster recovery and backup using FC:

1. Set up two MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 FC arrays at the local site. You can use any combination of the three
models—replication can occur between the three models as long as the P2000 G3 array has FW version TS250 or later.

2. Connect the file and application servers to these arrays via an FC SAN.
3. Mount the volumes to the applications.
4. Create multiple replication sets with FS Data and App A Data as primary volumes and the secondary volumes on the second storage system. Create a replication set with App B Data as the primary volume and the secondary volume on the first system. These sets are created using the FC link type (see the CLI sketch following this procedure).

Switch the applications to the other system if any failures occur on either of the two systems.
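
The following sketch shows how a replication set with the FC link type might be created from the CLI; it mirrors Command example 20, with only the link type changed. The management address 10.10.5.180, the vdisk name vd-r5-b, and the volume name FSDATA are placeholders for this illustration, so substitute the values for your own systems.

# create remote-system user manage password !manage 10.10.5.180
# create replication-set link-type FC remote-system 10.10.5.180 remote-vdisk vd-r5-b FSDATA
# replicate volume FSDATA snapshot init-FSDATA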


Two branch offices with disaster recovery and backup

Figure 33. Two branch offices with disaster recovery and backup

To configure two branch offices with disaster recovery and backup:

1. Set up two P2000 G3 FC/iSCSI combo controller arrays or MSA 2040 SAN arrays with host ports set to a combination of FC and iSCSI with
enough disks (according to the application load, users and secondary volumes) then configure the management ports and iSCSI ports with
IP addresses. Install the Remote Snap licenses if they have been purchased, or install the temporary licenses from the system’s Tools >
Install License page of the SMU (for the P2000 only).

2. On the array at site A, create the master or base volumes FS A Data and App A Data; if using linear replication, enable snapshots when
creating the volumes. For linear replications, if an existing snap pool is not specified, a snap pool is automatically created with the default
policy and size, or you can adjust the settings as necessary. Make sure the volumes are in different vdisks or pools and that each vdisk or
pool has enough space to expand the snap pool or snapshot space in the future.

3. On the array at site B, create the master or base volumes FS B Data and App B Data similar to the instructions above.

4. Connect both arrays to the WAN. If using iSCSI over the WAN as part of your disaster recovery solution, connect your file servers and
application servers to the WAN. Connecting the management ports of the arrays to the WAN helps you to manage either array remotely
and is necessary when using the SMU to create linear replication sets.

5. Map the volumes to the file servers and application servers.

6. At site A, create remote replication sets using the primary volumes FS A and App A. Corresponding secondary volumes are created
automatically on the array at site B.

7. Schedule replications at regular intervals. This ensures that data at the local site is backed up to the array at site B.

8. At site B, create remote replication sets using the primary volumes FS B and App B. Corresponding secondary volumes are created
automatically on the array at site A (a CLI sketch of steps 6 and 8 follows this procedure).

9. Schedule replications at regular intervals so that all data at site B is backed up to site A.
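
As an illustration of steps 6 and 8, the commands below follow the pattern of Command example 20, run once on each array so that replication flows in both directions. The addresses and vdisk names are placeholders, and the volume names FSADATA, APPADATA, and FSBDATA stand in for FS A Data, App A Data, and FS B Data; adjust them to your own configuration.

On the array at site A:
# create remote-system user manage password !manage <site B management IP>
# create replication-set link-type iSCSI remote-system <site B management IP> remote-vdisk <vdisk on array B> FSADATA
# create replication-set link-type iSCSI remote-system <site B management IP> remote-vdisk <vdisk on array B> APPADATA

On the array at site B:
# create remote-system user manage password !manage <site A management IP>
# create replication-set link-type iSCSI remote-system <site A management IP> remote-vdisk <vdisk on array A> FSBDATA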

In case of failure at either site, you can fail over the application and file servers to the available site.


Single office with a target model using FC and iSCSI ports

Figure 34. Single office with target model using FC and iSCSI ports

To configure a single office with a target model using FC and iSCSI ports:

1. Set up a P2000 G3 FC/iSCSI combo controller array or an MSA 2040 SAN array with host ports set to a combination of FC and iSCSI with
enough disks, according to the application load and number of users, and configure the management ports and iSCSI ports with IP addresses.

2. Create master or base volumes App A Data, App B Data, and FS Data in the array.
3. Map FS Data to the iSCSI ports so that the file server can use this volume via the iSCSI interface.
4. Map App A Data and App B Data to the FC ports so that the application servers can access these volumes via the FC SAN (see the CLI sketch after the note below).

Using the P2000 G3 FC/iSCSI combo controllers or MSA 2040 SAN arrays with host ports set to a combination of FC and iSCSI ports provides
several advantages:

• You can leverage both the FC and iSCSI ports for target-mode operations
• You can connect file servers and other application servers that are not part of the FC SAN to the array using the iSCSI ports via the LAN or WAN
• You can connect new servers with FC connectivity directly through the FC SAN
Note

Accessing a volume through both iSCSI and FC is not supported.
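
Steps 3 and 4 can also be performed from the CLI with the map volume command. The sketch below is illustrative only: the LUN numbers, port lists, and host identifiers are placeholders, and the exact parameter names (access, lun, ports, host) and their defaults depend on the firmware generation, so confirm them in the CLI Reference Guide before use.

# map volume access rw lun 1 ports a3,b3 host <file-server initiator> FSDATA
# map volume access rw lun 2 ports a1,b1 host <application-server-A initiator> APPADATA
# map volume access rw lun 3 ports a2,b2 host <application-server-B initiator> APPBDATA

Because a volume must not be accessed through both iSCSI and FC, make sure each volume is mapped only to ports of a single interface type.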


Multiple local offices with a centralized backup (linear replications only)

Remote Snap many-to-one replication (centralization) consolidates backup operations so they can be managed from a centralized location and enables geographic disaster recovery.

Figure 35. Multiple local offices with a centralized backup (linear replications only)

To configure multiple local offices with a centralized backup:

1. Set up P2000 G3 FC/iSCSI combo controller arrays or MSA 2040 SAN arrays with host ports set to a combination of FC and iSCSI with
sufficient storage, and configure the management and iSCSI ports with valid IP addresses. Install the Remote Snap license at remote sites A,
B, and C.

2. Create master volumes FS (A|B|C) Data and App (A|B|C) Data corresponding to the file server and application server at each remote site.
3. Set up a P2000 G3 iSCSI controller array or an MSA 2040 SAN array with host ports set to a combination of FC and iSCSI at the centralized location, make sure that it has enough disks to accommodate the data coming from remote sites A, B, and C, and install the Remote Snap license.
4. Connect sites A, B, and C with the central site using the WAN and make sure iSCSI ports are configured and connected to this WAN.
5. Make sure the iSCSI ports of the arrays at site A, B, and C can access the iSCSI ports of the array at the central site.
6. Create replication sets for volumes FS A Data and App A Data, specifying the central system and vdisks on it to allow automatic creation of
the secondary volumes at the central site (see the CLI sketch at the end of this procedure).

Repeat step 6 for sites B and C.

Schedule the replication at regular intervals so that data from sites A, B, and C replicates to the central site.
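
A CLI sketch of step 6, run on the array at site A, is shown below; it follows the same pattern as Command example 20. The central system’s management address and vdisk name are placeholders, and FSADATA stands in for FS A Data, so substitute your own values.

# create remote-system user manage password !manage <central site management IP>
# create replication-set link-type iSCSI remote-system <central site management IP> remote-vdisk <vdisk on central array> FSADATA
# replicate volume FSADATA snapshot init-FSADATA

Repeat the create replication-set and replicate commands for App A Data, and run the equivalent commands on the arrays at sites B and C.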


Replication of application-consistent snapshots (linear replications only)

You can replicate application-consistent snapshots on a local MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 array to a remote MSA
2040 Storage, MSA 1040 Storage, or P2000 G3 array. Use the SMU for manual operation and the CLI for scripted operation. Both options
require you to establish a mechanism that enables all application I/O to be suspended (quiesced) before the snapshot is taken and resumed
afterwards. Many applications enable this via a scripting method. For an illustration of the following steps, see Figure 36.

To create application-consistent snapshots for any supported OS and any application:

1. Create the application volume. When defining the volume names, use a string name variant that will help identify the volumes as a larger
managed group:

a. With the SMU:

I. Use the system’s Wizards > Provisioning Wizard to create the necessary vdisks and volumes.

b. With the CLI:

I. Use the create vdisk command.

II. Use the create master-volume command.

2. Create a replication set for each volume used by the application. Use a string name variant when defining the replication set name. This
helps identify each replication set as part of a larger managed group.

a. With the SMU:

I. Use the system’s Wizards > Replication Setup Wizard for each volume defined in step 1.

b. With the CLI:

I. Use the create replication-set command

3. When the application and its volumes are in a quiesced state, you can create I/O-consistent snapshots across all volumes at the same time.

a. With the SMU, use the Provisioning > Create Multiple Snapshots operation of the system or vdisk.

The SMU also enables scheduling snapshots one master volume at a time. For application-consistent snapshots across multiple master
volumes, we recommend server-based scheduling as explained in the next step, step 4.

4. For an automated solution, schedule scripts on the application server that coordinate the quiescing of I/O, invoking of the CLI snapshot
commands, and resuming I/O. Verify that you defined the desired snapshot retention count. See Command example 21 for an example of
CLI snapshot commands.

The time interval between these snapshot groups will be utilized in the following steps.

Note

To achieve application-consistent snapshots, you must ensure application I/O to all volumes at the server level is suspended prior to taking
snapshots, and then resumed after the snapshots are taken. The array firmware will only create point-in-time consistent snapshots of indicated
volumes.

Figure 36. Setup steps for replication of application-consistent snapshots

To replicate application-consistent snapshots:
1. Ensure all volumes have established recurring snapshots as detailed above.
2. Schedule each replication set. See Command example 21 for an example of CLI replication commands.

a. With the SMU do the following:
I. After the Replication Setup Wizard creates the replication set, the SMU will display the volume’s Provisioning > Replicate Volume
page. For the Initiate Replication option, select the Scheduled option.
II. Select Replicate Most Recent Snapshot for the Replication Mode so that the latest primary volume snapshot will be used.
III. Choose the remaining options on the page.


b. With the CLI do the following:

I. Use the create task type ReplicateVolume command and specify last-snapshot for the replication-mode parameter (a CLI sketch appears after this procedure).

II. Use the create schedule command

3. Ensure the recurring schedule for each replication set coordinates with the scheduled snapshots.
a. Calculate an appropriate time that falls between the application volumes’ snapshot times created in step 4 on page 61.
b. See the Best practices section on page 66 for additional information.

Create your own naming scheme to manage your application data volumes, snapshot volumes, and replication sets. In your naming scheme,
include the ability to establish a recognizable grouping of multiple replication sets. This will help with managing the instances of your
application-consistent snapshots and the application-consistent replication sets when restore or export operations are used.
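
A minimal CLI sketch of step 2b is shown below. The task and schedule names (taskRepAPP, schedRepAPP), the volume name APPDATA, and the start time are placeholders, and the exact set of required parameters for create task (for example, whether a snapshot prefix and retention count are also required) and the schedule-spec string format vary by firmware release, so confirm them in the CLI Reference Guide. Command example 21, which follows, shows the corresponding snapshot and replication commands.

# create task type ReplicateVolume volume APPDATA replication-mode last-snapshot taskRepAPP
# create schedule schedule-spec "start 2016-02-01 01:00 every 6 hours" task-name taskRepAPP schedRepAPP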

# create snapshots volume FSDATA,APPDATA,LOG fs1-snap,app1-snap,log1-snap
Success: Command completed successfully. (fs1-snap,app1-snap,log1-snap) - Snapshot(s) were created. (2016-02-01 17:24:33)

# show snapshots

Vdisk    Serial Number                     Name       Creation Date/Time   Status     Status-Reason  Source Volume  Snap-pool Name
  Snap Data  Unique Data  Shared Data  Priority  User Priority  Type
---------------------------------------------------------------------------------------------------------------------------------
vd-r5-a  00c0ffda02f30000cc94af5602000000  app1-snap  2016-02-01 17:24:29  Available  N/A            APPDATA        spAPPDATA
  0B         0B           0B           0x6000    0x0000         Standard snapshot
vd-r5-a  00c0ffda02f30000cc94af5601000000  fs1-snap   2016-02-01 17:24:29  Available  N/A            FSDATA         spFSDATA
  0B         0B           0B           0x6000    0x0000         Standard snapshot
vd-r5-a  00c0ffda02f30000cc94af5603000000  log1-snap  2016-02-01 17:24:29  Available  N/A            LOG            spLOG
  0B         0B           0B           0x6000    0x0000         Standard snapshot
---------------------------------------------------------------------------------------------------------------------------------

Success: Command completed successfully. (2016-02-01 17:24:39)

# replicate snapshot name repapp1 app1-snap
Info: The replication has started. (repapp1)
Success: Command completed successfully. (2016-02-01 17:26:06)

Command example 21 Examples of using the CLI for replication of application-consistent snapshots


Replication of the Microsoft VSS-based application-consistent snapshots (linear replications only)

You can replicate the Microsoft® VSS-based application-consistent snapshots on an MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 array
to the same array or another MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 array.

To create application-consistent snapshots using VSS:

1. Create the volumes for your application. When defining the volume names, use a string name variant that helps identify the volumes as a
larger managed group.
a. With the SMU, use the system’s Wizards > Provisioning Wizard to create the necessary vdisks and volumes.
b. With the CLI:
I. Use the create vdisk command.
II. Use the create master-volume command.
c. With a VDS client tool, refer to the vendor documentation to create the necessary volumes.

2. Create a replication set for each volume in the application. When defining the replication set name, use a string name variant that will help
identify each replication set as part of a larger managed group.
a. With the SMU:
I. Use the Replication Setup Wizard for each volume defined in step 1.
b. With the CLI:
I. Use the create replication-set command

3. Determine an appropriate Microsoft VSS backup application, or VSS requestor, that is certified to manage your VSS-compliant application.
a. The HP P2000 G3 and MSA 1040/2040 VSS hardware providers are compatible with Microsoft Windows® certified backup applications.
b. For a general scripted solution, see the Microsoft VSS documentation for usage of the Windows Server DiskShadow (Windows Server 2008
and later) or VShadow (Windows Server 2003 and later) tools.

4. Configure your VSS backup application to perform VSS snapshots for all of your application’s volumes. The VSS backup application uses
the Microsoft VSS framework for managed coordination of quiescence of VSS-compatible applications and the creation of volume
snapshots through the VSS hardware provider.
a. Establish a recurring snapshot schedule with your VSS backup application.
b. The time interval between these snapshot groups will be used in the following steps.

Note

The VSS framework, the VSS Backup application (requestor), the VSS-compliant Application writer, and the VSS hardware provider achieve
application-consistent snapshots. The MSA 2040 Storage, MSA 1040 Storage, or P2000 G3 firmware only creates point-in-time snapshots of
indicated volumes.

To replicate VSS-generated, application-consistent snapshots:

1. Ensure all volumes have established recurring snapshots as detailed above.
2. Schedule each replication set.


a. With the SMU do the following:

I. After the Replication Setup Wizard creates the replication set, the SMU will display the volume’s Provisioning > Replicate Volume
page. For the Initiate Replication option, select the Scheduled option.

II. Select Replicate Most Recent Snapshot for the Replication Mode so that the latest primary volume snapshot will be used.
III. Choose the remaining options on the page.
b. With the CLI:

I. Use the create task type ReplicateVolume command and specify last-snapshot for the replication-mode parameter.

II. Use the create schedule command
3. Ensure the recurring schedule for each replication set coordinates with the scheduled snapshots.

a. Calculate an appropriate time that falls between the application volumes’ snapshot times created in step 4 above.

b. See the Best practices on page 66 for additional information.

Figure 37. Setup steps for replication of the VSS-based application-consistent snapshots


Best practices

Fault tolerance

To achieve fault tolerance for Remote Snap setup, we recommend the following:

• For FC and iSCSI replications, the ports must be connected to at least one switch, but for better protection we recommend that half of the
ports (for example, the first port or first pair of ports on each controller) be connected to one switch and the other half be connected to a
second switch, with both switches connected to a single SAN. This avoids a single point of failure at the switch.

– Direct Attach configurations are not supported for replication over FC or iSCSI.

– The iSCSI ports must all be routable on a single network space.

In case of link failure, the replication operation re-initiates within a specified amount of time. For linear replications, the amount of time is defined
by the max-retry-time parameter of the set replication-volume-parameters command; the default value is 1800 seconds. Set this
time to a preferred value according to your setup. Once the retry time has passed, the replication goes into a suspended state and needs user
intervention to resume. For virtual replications, the system attempts to resume the replication every 10 minutes for the first hour, then every
hour until the replication resumes. You can attempt to resume the virtual replication manually, or abort it, as desired.

• For linear replications, during a single replication, we recommend setting the maximum replication retry time on the secondary volume to
either 0 (retry forever), or 60 minutes for every 10 GB increment in volume size, to prevent a replication set from suspending when multiple
errors occur. This can be done in the CLI by issuing the following command:

– set replication-volume-parameters max-retry-time <# in seconds>

• Replication services are supported in both single-controller and dual-controller environments. For the P2000 G3 array, replication is supported
only between similar environments: a single-controller system can replicate to a single-controller system, and a dual-controller system can
replicate to a dual-controller system, but replication between a single-controller system and a dual-controller system is not supported. For
MSA 1040 Storage and MSA 2040 Storage, this restriction does not apply; replication is supported between a single-controller system and a
dual-controller system.

• We recommend using a dual-controller array to try to avoid a failure of one controller. If one controller fails, replication continues through
the second controller.

Volume size and policy

For linear replications
• While setting up the master volumes, ensure the size of the vdisk and the volume is sufficient for current and future use. Once part of a
replication set, the sizes of the primary/secondary volumes cannot be changed.

• Every master volume must have a snap pool associated with it. If no space exists on the primary volume’s virtual disk to create the snap pool,
or insufficient space is available to create or maintain a sufficiently large snap pool for the snapshots to be retained, the snap pool should be
created on a separate vdisk that does have sufficient space.

To help you accurately set a snap pool’s size, consider the following:

a. What is the master volume size, and how much will the master volume data change between snapshots?
The amount of space needed by a snapshot in the snap pool depends on how much data changes in the master volume and the
interval at which snapshots are taken. The longer the interval, the more data will be written to the snap pool.

b. How many snapshots will be retained?
The more snapshots retained, the more space is occupied in the snap pool.

c. How many snapshots will be modified?
Regular snapshots mounted read/write will add more data to the snap pool.

d. How much modified (write) data will the snapshots have?
The more data modified for mounted snapshots, the more space is occupied in the snap pool.

Technical white paper Page 67

• Although the array allows for more, we recommend that no more than four volumes (master volumes or snap pools) be created on a
single vdisk when used for snapshots or replication.

• By default, a snap pool is created with a size equal to 20 percent of the volume size. An option is available to expand the snap pool size to
the desired value. By default, the snap pool policy is set to automatically expand when it reaches a threshold value of 90 percent. Note that
the expansion of a snap pool may take up the entire volume or vdisk, limiting the ability to put additional data on that vdisk. We recommend
that you set the auto expansion size to a value so that snap pools are not expanded too often. It is also important that you monitor the
threshold errors and ensure that you have free space to grow the snap pool as more data is retained.

• Snapshots can be manually deleted when they are no longer needed or automatically deleted through a snap pool policy. When a snapshot
is deleted, all data uniquely associated with that snapshot is deleted and the associated space in the snap pool is freed for use. To stay
within the volumes-per-vdisk limit, delete unnecessary snapshots.

• Scheduled replications have a retention count—setting this appropriately can help maintain the snap pool size and expansion. Once the
retention count has been met, new snapshots displace the oldest snapshot for the replication set.

For virtual replications
When setting up the pools for the primary and secondary volumes, consider the impact of the overcommit flag for the pool.

• If overcommit is disabled, the volume and associated two replication snapshots are fully provisioned, meaning the pool must be slightly over
three times the size of the volume.

• If overcommit is enabled, then the size of the pool should be sufficient for the amount of data in the volume and for two sets of data changes.
The snapshot space can be managed using the set snapshot-space CLI command (see the sketch below); the default snapshot space
limit is 10% of the pool, and the default limit policy is notify only.

The size of the primary volume can be increased after creating the replication set, and the size of the pools that contain the volumes can
change as well, increasing by adding disk groups, or decreasing by removing disk groups.
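
The following is a minimal sketch of managing snapshot space from the CLI. The pool name A and the 20% limit are placeholders, and the exact parameter names (for example, limit and the available limit-policy values) can differ between firmware releases, so check the CLI Reference Guide for your array before use.

# set snapshot-space pool A limit 20%
# show snapshot-space pool A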

License

• Use a temporary license to enable Remote Snap and get a hands-on experience. For live disaster recovery setups, we recommend upgrading
to a permanent license. A temporary license expires after 60 or 180 days, disabling further replications.

• With a temporary license, test local replications and gain experience before setting up remote replications with live systems.

• To set up remote replication, you must have a Remote Snap license for both the remote and local systems.

• Updating to a permanent license at a later stage preserves the replication images.

• By default, there is a 64-snapshot limit; this can be upgraded to a maximum of 512 snapshots.

• Exporting a replication image to a standard snapshot is subject to the licensed limit; replication snapshots are not counted against the
licensed limit. Install a license that allows for the appropriate number of snapshots.

• Enabling the temporary license directly from the SMU is available only on the P2000 G3 arrays.

Scheduling

Linear replications
• To ensure that replication schedules are successful, we recommend scheduling no more than three volumes to start replicating
simultaneously, although as many as 16 (as many as 8 for the MSA 1040 Storage array) can replicate at the same time. These and other
replications should not be scheduled to start or recur less than one hour apart. If you schedule replications more frequently, some scheduled
replications may not have time to start.

• The Replicate most recent snapshot option on the primary volume’s Provisioning > Replicate Volume page, or specifying
last-snapshot for the replication-mode parameter of the create task command, can be used when standard snapshots of the
primary volume are taken manually or by another tool such as Microsoft VSS and you want to replicate those snapshots. This helps in
achieving application-consistent snapshots.


• For P2000 G3 firmware versions older than TS240, when older linear replication images are deleted on the primary system based on the
retention count, the corresponding replication images on the secondary volume are retained until the total volume count reaches the
maximum volume limit per vdisk. For P2000 G3 firmware versions TS240 and newer, and all MSA 2040 Storage and MSA 1040 Storage
firmware versions, the retention count applies to both the primary and secondary systems.

• You can set the replication image retention count to a preferred value. A best practice is to set the count such that deleting replication
images beyond the retention count is acceptable.

• Schedules are only associated with the primary volume. You can see the status of a schedule by selecting it from the primary volume’s
View > Overview panel.

• You can modify an existing schedule to change any of the parameters such as interval time and retention count using the system’s or
primary volume’s Provisioning > Modify Schedule page of the SMU.

• For linear replications, when standard snapshots are taken at the primary volume in regular intervals (manually or using VSS), select the
proper time interval for the replication scheduler so that the latest snapshot is always replicated to the remote system. The system restricts
the minimum time interval between replications to 30 minutes.

The following table provides a summary example; a CLI sketch of such a schedule follows the table.

Table 2. Tabulation of resources used with replication of application-consistent snapshots

Suppose the application uses two MSA volumes, snapshots are taken every 2 hours with a retention of 32 instances for each volume, and replications run every 6 hours with a retention of 32 instances for each replication set.

Total hours before snapshot rollover: 64 hours (2 days 16 hours)
Total hours before replication rollover: 192 hours (8 days)
Total snapshots used by replication: 32 (per array)
Total volumes used by replication: 4 (per array)
Total vdisks used by replication: 2 (per array)
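
The schedule in Table 2 could be built from the CLI roughly as follows, offsetting the replication start time so it falls between snapshot times. The task, schedule, prefix, and volume names and the start times are placeholders, and the exact parameters of create task (for example, whether the retention parameter is named retention-count) and the schedule-spec string format depend on the firmware release, so verify them in the CLI Reference Guide.

# create task type TakeSnapshot source-volume FSDATA snapshot-prefix fssnap retention-count 32 taskSnapFS
# create schedule schedule-spec "start 2016-02-01 00:00 every 2 hours" task-name taskSnapFS schedSnapFS
# create task type ReplicateVolume volume FSDATA replication-mode last-snapshot taskRepFS
# create schedule schedule-spec "start 2016-02-01 01:00 every 6 hours" task-name taskRepFS schedRepFS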

Virtual replications
• Replications can be scheduled at most once per hour. However, replications are not queued: a replication is discarded if it is started while an
existing replication on that replication set is still running. Consider the rate of data change, the network capability, the number of replications,
and the host I/O rate when scheduling replications to avoid discarding a replication.

• Schedules are only associated with the primary volume. You can see the status of a schedule by selecting the primary volume from the
Volumes topic and hovering over the schedule in the Schedules tab.

• You can modify an existing schedule using the Manage Schedules action of the replication selected in the Replications topic.

Physical media transfer (linear replications only)

• Power down the enclosure or shut down the controllers before inserting the disks at the remote system.

• After you’ve stopped the vdisk(s), you don’t need to power down the local system to remove the disks while performing physical media transfer.

• While performing physical media transfer, it’s easier to have the snap pool of the secondary volume on the same vdisk as the secondary
volume. If the snap pool is on a different vdisk, you should first detach the secondary volume and stop the vdisk containing the secondary
volume before stopping the vdisk with the snap pool.

• Before moving the disks to a new system, ensure that the remote system does not have volumes or vdisks with the same names as volumes
on the newly inserted disks. After performing the transfer, the system should be checked for volumes and vdisks with the same names.
These should be renamed before performing any other operation on the volumes.


• Always check that the initial replication is completed before detaching the secondary volume from the replication set.

• After reattaching the secondary volume, initiate a replication from the primary volume to continue syncing the data between the local and
remote systems.

• When using Full Disk Encryption (FDE) on an MSA 2040 Storage array, it is a best practice to move media between systems that are identically
configured with FDE enabled or disabled. That is, move secured Self-Encrypting Drives (SED) to a secured FDE system, and unsecured SEDs or
non-SEDs to an unsecured FDE system or non-FDE system.

Replication setup wizard (linear replications only)

• The system’s Wizards > Replication Setup Wizard helps set up remote or local replication sets. Enable Check Links when performing
remote replication. This will validate the links between the local and remote systems.

• When prompted, manually initiate or schedule the replication after the setup wizard is completed.

Application-consistent snapshots (linear replications only)

• When snapshots are taken manually, no I/O quiescing is done at the array level.

• Use the create snapshots command to take time-consistent snapshots across multiple volumes once the associated applications have
been quiesced.

• Use the Scheduled option of the primary volume’s Provisioning > Replicate Volume page to initiate the replication of these snapshots.
Select the Replicate most recent snapshot option.

• Software such as the Microsoft VSS framework enables quiescing applications and taking application-consistent snapshots.

Max volume limits

• Replication images for linear replications and the two replication snapshots for virtual replications count against the volume per vdisk or
pool limit. Monitor the number of replication images created to avoid unexpectedly reaching this limit.

• For linear replications, you can restrict the number of replication images at the local system, where the primary volume is residing, by using
the retention count in the scheduler. This limit does not apply to remote systems with firmware versions prior to TS240.

• For linear replications, delete older replication images at the remote system as needed. Older replication images are deleted automatically
once the volume count of the vdisk reaches the maximum volume limit. For firmware versions TS240 and newer on P2000 G3 arrays, and all
firmware versions on MSA 2040 Storage or MSA 1040 Storage arrays, the retention count applies to both the primary volume snapshots
and the secondary volume snapshots.

• For linear replications, you can take snapshots of up to 16 volumes in a single operation by using the create snapshots command.

• For linear replications, a vdisk can accommodate only 128 volumes, including the replication images in that vdisk.

Table 3. Configuration limits for the P2000 G3 array
CONFIGURATION LIMITS

Property Value

Maximum Vdisks 32

Maximum Volumes 512

Maximum Volumes per Vdisk 128

Maximum Snapshots per volume 127

Maximum LUNs 512

Maximum Disks 149

Number of Host Ports 8

Table 4. Configuration limits for the MSA 2040 array
CONFIGURATION LIMITS

Property                          Linear Value    Virtual Value
Maximum Vdisks/Disk Groups        64              32
Maximum Volumes                   512             1024
Maximum Volumes per Vdisk/Pool    128             N/A
Maximum Snapshots per Volume      127             254
Maximum LUNs                      512             1024
Maximum Disks                     199             199
Number of Host Ports              8               8

Table 5. Configuration limits for the MSA 1040 array
CONFIGURATION LIMITS

Property                          Linear Value    Virtual Value
Maximum Vdisks/Disk Groups        64              32
Maximum Volumes                   512             1024
Maximum Volumes per Vdisk/Pool    128             N/A
Maximum Snapshots per Volume      127             254
Maximum LUNs                      512             1024
Maximum Disks                     99              99
Number of Host Ports              4               4

Replication limits

Table 6. Replication configuration limits for the P2000 G3 array
CONFIGURATION LIMITS

Property Value

Remote Systems 3
Replication Sets 16

Table 7. Replication configuration limits for the MSA 2040 array
CONFIGURATION LIMITS

Property                               Linear Value       Virtual Value
Remote Systems/Peer Connections        3 Remote Systems   1 Peer Connection
Replication Sets                       16                 32
Volumes per Volume Group Replicated    N/A                16

Table 8. Replication configuration limits for the MSA 1040 array
CONFIGURATION LIMITS

Property                               Linear Value       Virtual Value
Remote Systems/Peer Connections        3 Remote Systems   1 Peer Connection
Replication Sets                       8                  32
Volumes per Volume Group Replicated    N/A                16

Monitoring

Replication
For linear replications, you can monitor the progress of an ongoing replication by selecting the replication image listed in the navigation tree.
The right panel displays the status and percentage of progress. When the replication is completed, the status appears as Completed.

For virtual replications, you can check the status of an ongoing replication in the Replications topic and monitor its progress by hovering over
the replication set: the Replication Set Information panel shows the current run’s progress, the current and last run times, and the amount of data transferred.

Events
When monitoring the progress of an ongoing replication, watch the event log for the following events:

• Event code 316—Replication license expired—This indicates that the temporary license has expired. Remote Snap will no longer be
available until a permanent license is installed. All the replication data will be preserved even after the license has expired, but you cannot
create a new replication set or perform more replications.

• Event codes 229, 230, and 231—Snap pool threshold—The snap pool can fill up when there is steady I/O and replication snapshots are
taken at regular intervals. When the warning threshold is crossed (event code 229), consider taking action: either remove older snapshots
or expand the vdisk.

• Event codes 418, 431, and 581—Replication suspended—If the ongoing replication is suspended, an event is received. Any further linear
replication initiated is queued. Once the problem is identified and fixed, you can manually resume the replications.

For more related events, see the HPE MSA 2040 Event Descriptions Reference Guide, the HPE MSA 1040 Event Descriptions Reference Guide,
or the HP P2000 G3 MSA System Event Descriptions Reference Guide.

SNMP traps and email (SMTP) notifications

You can set up the array to send SNMP traps and email notifications for the events described above. Using the v2 SMU, use the system’s
Configuration > Services > SNMP Notification or Configuration > Services > Email Notification pages. Using the v3 SMU, select the Set Up
Notifications action from the Home topic. For the CLI, use the set snmp-parameters and set email-parameters commands.
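
As a minimal CLI sketch, the commands below enable SNMP traps for critical events and email notification for warnings and above; the trap host, SMTP server, domain, and recipient address are placeholders, and the available parameters and notification levels may vary by firmware, so check the CLI Reference Guide for the exact options.

# set snmp-parameters enable crit add-trap-host 10.10.5.200
# set email-parameters server 10.10.5.201 domain example.com email-list admin@example.com notification-level warn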


Performance tips

For a gain of up to 20 percent in replication and host I/O performance, enable jumbo frames on the iSCSI controller host ports and on all
infrastructure components in the path, provided every component supports them. Jumbo frames are disabled by default for the iSCSI host
ports. You can enable them using either the SMU or the CLI.

Note

If your infrastructure does not support jumbo frames, enabling them only on your controllers may actually lower performance or even prevent
the creation of replication sets or replications.

With the v2 SMU, enable jumbo frames by going to the system’s Configuration > System Settings > Host Interfaces.

With the v3 SMU, select the Set Up Host Ports action from the System topic, then select the Advanced Settings tab of the Host Ports Settings
panel.

With the CLI, enable jumbo frames by using the command set iscsi-parameters jumbo-frames enabled.
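
A minimal sketch of enabling and then checking the setting from the CLI is shown below; the show iscsi-parameters command is assumed here as the way to confirm the current jumbo-frames state, so verify the command name for your firmware in the CLI Reference Guide.

# set iscsi-parameters jumbo-frames enabled
# show iscsi-parameters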

Troubleshooting

Issue:

Replication enters a suspended state

Recommended actions:

If performing a local linear replication, ensure all the ports are configured and connected via the switch.

Check the Remote Snap license status at the local and remote site. If you are running a temporary license, the license may have expired. Install
a permanent license and manually resume replication.

The connectivity link may be broken:

For linear replications:

Use the remote system’s Tools > Check Remote System Link in the SMU to check the link connectivity between the local and remote systems.

For virtual replications:

Use the CLI command show peer-connections with the verify-links parameter to check the data link. Repair the link and make sure
all links are available between the systems, then manually resume the replication.

For virtual replications, the overcommit flag for the pool may be enabled and the pool’s high threshold may have been exceeded. Hover over
the pool in the Pools topic of the SMU, or use the show pools CLI command, to see whether overcommit is enabled, what percentage the high
threshold is set to, and how much space remains available. If there is insufficient available space to continue, add disk groups to the pool, or
remove volumes or snapshots, to increase the available size of the pool.

The CHAP settings may also be incorrect: check that the CHAP records exist and that the secrets are correct.
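
The checks above can also be run from the CLI. The sketch below lists a few commands for verifying licensing (show license), linear replication links (verify links), the virtual replication peer connection (query peer-connection), and pool space (show pools); the peer address is a placeholder, verify links may require additional parameters such as the link type, and exact output varies by firmware, so consult the CLI Reference Guide.

# show license
# verify links
# query peer-connection 10.10.5.170
# show pools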

Issue:

You cannot perform an action such as changing the schedule for a replication set.

Recommended actions:

For linear replications

Actions performed on a replication set, such as schedule creation or modification and adding or removing a secondary volume, must be
performed on the system where the primary volume resides.


Changing the primary volume is a coordinated effort between the local and remote systems. It must first be performed on the remote system,
and then on the local system. To help remember this, the secondary volume pulls data from the primary volume. To avoid a potential conflict,
do not attempt to have two secondary volumes.

Since the secondary volume cannot be mapped to the hosts, unmap a primary volume before converting it to a secondary volume.

For virtual replications

Actions that control replications, such as scheduling, initiating, suspending, resuming or aborting a replication, must be performed on the
system where the primary volume resides.

Deleting a replication set or changing its name can be performed on either the primary volume’s system or the secondary volume’s system.

Issue:

You can’t delete a linear replication set involving a secondary volume.

Recommended actions:

Convert the secondary volume to a primary volume. You can now delete the replication set.

FAQs

1. Do we support port failover?
Answer: Yes. See examples below to understand how it works.
Example
A dual-controller system where the primary volume is owned by controller A and ports A1, A2, B1, and B2 are connected and part of the
replication set’s primary addresses. (For linear replications, see the output of the show replication-sets command or the Replication
Addresses in the primary volume’s View > Overview panel to verify that a port is a primary address; for virtual replications, see the output of
show peer-connections or hover over the peer connection in the Replications topic of the SMU.)
• If port A1 fails, replication will go through A2 without any issues.

• If ports A1 and A2 fail, the replication will continue using ports B1 and B2 of controller B.
2. Do we support load balancing with multiple replications in progress?

Answer: Yes.
Example
Four primary volumes owned by controller A and both ports (A1 and A2) are connected and used for replication.

• All four sets will try to use both ports A1 and A2, unless the array doesn’t have sufficient resources to use both ports.
3. Can CHAP be added to a replication set at any time after it is created? For instance, if you have a local linear replication set for doing an initial replication and then media transfer, do you need to set up CHAP before the set creation?
Answer: CHAP is specific to a system, not to the replication set. CHAP applies to the local-to-remote system communication path and vice
versa. For linear replications, once you are done with the initial replication and physical media transfer, you can enable CHAP before
reattaching the secondary volume at the remote system; the reattach operation should go through fine.

4. Does using CHAP affect replication performance?
Answer: CHAP is just for initial authentication across nodes. Once a login is successful with another system, CHAP will not be involved in
further data transfer, so replication performance should not be affected.


5. I created a master volume as the primary and did a local linear replication. Can I now do a remote replication with the same primary
volume?

Answer: A volume can only be part of one replication set. You need to delete the set and create a new set or remove the secondary volume
from the set and add the other remote secondary volume to the set.

6. I initiated a remote linear replication, and now I am not getting an option to suspend the replication/abort the replication in the
local system.

Answer: By design, suspend and abort operations can only be performed on the secondary volume for linear replications. You can access
the secondary volume on the remote system; it has an option to suspend/resume replication.

7. I deleted the linear replication set using remove replication and all my replication images disappeared.

Answer: All the replication images are converted to standard snapshots and can be viewed under the volume in the Snapshots section of
the Configuration View panel of the SMU.

8. I see an option called Enable Snapshot when attempting to create a linear volume.

Answer: By selecting the box Enable Snapshot, you automatically create a snap pool for the volume. The created volume is now a master
volume.

9. I am not able to map to the secondary volume.

Answer: Secondary volumes cannot be presented to any hosts. For linear replications you can export a snapshot of the secondary volume,
and for virtual replications you can create a snapshot of the secondary volume. Then, map the snapshot to hosts.

10. I cannot remove the primary volume from a linear replication set.

Answer: Only a secondary volume can be removed. If you want to remove the primary volume, first make the other volume the primary
volume, and then make the original primary volume a secondary volume. You can then remove the volume.

11. I can’t expand a primary or secondary volume in the linear replication set.

Answer: Master volumes cannot be expanded. Because both the primary and the secondary volumes are master volumes, neither can be
expanded, even while in a replication-prepared state.

In the context of Remote Snap, volume expansion is also problematic because the primary and secondary volumes must be identical in size;
this further prohibits expanding volumes that are part of a replication set.

Note that for virtual replications, you can expand a primary volume—the secondary volume’s size will change on the next replication of the set.

12. I expanded the primary virtual volume, but the secondary virtual volume's size hasn’t changed.

Answer: The secondary volume’s size will increase on the next replication.
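
A hedged sketch of expanding a primary virtual volume from the CLI; the volume name and size are hypothetical, and the expand volume syntax should be confirmed in the CLI reference guide for your firmware.

# expand volume size 100GB FSDATA
Success: Command completed successfully.

The secondary volume is then resized to match when the next replication of the set runs.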

13. What is the Maximum Retry Time?

Answer: The Maximum Retry Time in the SMU (MaxRetryTime in the CLI) is the maximum time, in seconds, to retry a single linear
replication. If this value is zero, there is no limit on the number of retries. The setting applies only when the on-error policy is set to Retry.
Use the set replication-volume-parameters command to change these parameters; a hedged example follows.

If an error is encountered, the replication is retried every five minutes; that is, a five-minute delay occurs between retry attempts. If the
elapsed time from the initial retry to the current time exceeds the Maximum Retry Time, the replication is suspended.
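
As a sketch, assuming a secondary replication volume named FSDATA-r (hypothetical) and the max-retry-time and on-error parameter keywords described in the CLI reference guide:

# set replication-volume-parameters on-error retry max-retry-time 3600 FSDATA-r
Success: Command completed successfully.

With these settings, a failed replication would be retried every five minutes for up to an hour before being suspended.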

14. How does a virtual replication recover from a temporary peer connection failure?

Answer: The replication will be suspended and will attempt to resume every 10 minutes for the first hour, then once every hour until
successful or aborted by the user.
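
You can monitor this from the CLI while the automatic retries are in progress; a minimal sketch (output omitted), with a hypothetical set name RS-FSDATA. The resume replication-set command, useful if you prefer not to wait for the next automatic attempt once the peer connection is healthy again, should be confirmed in the CLI reference guide for your firmware.

# show replication-sets
# resume replication-set RS-FSDATA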


15. Can the iSCSI controller ports run concurrent Remote Snap and I/O?

Answer: You can use iSCSI host ports for both Remote Snap and I/O on any P2000 G3 iSCSI array running TS230 or later firmware.
Concurrent Remote Snap and host I/O on iSCSI host ports is not supported with pre-TS230 firmware. MSA 2040 Storage and MSA
1040 Storage arrays have no limitations on concurrent Remote Snap and I/O.

16. How do I delete a virtual replication set when the peer connection is down?

Answer: Use the local-only option of the delete replication-set CLI command, as sketched below.
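
A minimal sketch, with a hypothetical replication set name RS-FSDATA:

# delete replication-set local-only RS-FSDATA
Success: Command completed successfully.

This removes the set from the local system only; remember to clean up the other half of the set on the peer system once it is reachable (see the next question).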

17. I cannot delete a peer connection even though there is no virtual replication set present in the system. What can I do?

Answer: You may have deleted the replication set on the local system using the local-only option of the delete replication-set
CLI command while the peer connection was down, and the peer connection has since come back up. Delete the replication set from the
remote system; the peer connection can then be deleted. A hedged sketch follows.
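
A sketch of the cleanup, run on the remote system; the set and peer-connection names are hypothetical, and the delete peer-connection syntax should be confirmed in the CLI reference guide for your firmware.

# delete replication-set RS-FSDATA
Success: Command completed successfully.

# delete peer-connection peer-conn1
Success: Command completed successfully.

If deleting the set fails because it no longer exists on the other system, the local-only option described in the previous question can be used on the remote system as well.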

Summary

Remote Snap provides array-based remote replication with a flexible architecture and simple management, and supports both Ethernet and FC
connectivity. The software is designed to avoid detrimental impact on application performance, while its snapshot-based replication technology
minimizes the amount of data transferred. Remote Snap enables the use of multiple recovery points for daily backups (for linear replications),
access to data at remote sites, and business continuity when critical failures occur.

Glossary

Peer connection—A logical connection for virtual replications that defines the ports used to connect two systems. Virtual replication sets use a
peer connection to replicate from the primary to the secondary system.

Primary volume—The replication volume residing on the local system. It is the source from which replication snapshots are taken and copied.
It is externally accessible to host(s) and can be mapped for host I/O.

Replication-prepared volume—For linear volumes, the replication volume residing on a remote system that has not been added to a replication
set. You can create the replication-prepared volume using the SMU/CLI and can then use it as the secondary volume when creating a linear
replication set.

Remote system—A representation of a system which is added to the local system and which contains the address and authentication tokens
to access the remote system for linear replication management. The remote system may be queried for lists of vdisks, volumes, and host I/O
ports (used to replicate data) to aid in creating a linear replication set, for example.

Replication image (linear replications only)—The representation of a replication snapshot at both the local and remote systems. In essence,
it is the pair of replication snapshots that represents the point-in-time replication. In the SMU, clicking on the table shown in the right pane displays
both the primary and secondary volume snapshots associated with a particular replication image. In the Configuration View pane of the SMU
the image name is the time at which it was created; in the CLI and elsewhere in the SMU the image name is the name of the primary volume
replication snapshot.

Replication set—The association between the source volume (primary volume) and the destination volume (secondary volume). A replication
set is a set of volumes associated with one another for the purpose of replicating data. To replicate data from one volume to another, you must
create a replication set to associate the two volumes. A replication set is a concept that spans systems: the volumes that make up a replication
set are not necessarily located on the same system (typically they are not, and for virtual replications they cannot be). A replication set is not
a volume but an association of volumes, and a volume can belong to exactly one replication set.

Replication snapshot—Replication snapshots are a special form of the existing snapshot functionality. They are explicitly used in replication
and do not count against a snapshot license.

Secondary volume—The replication volume residing on a remote system. For linear replications, this volume is also a normal master volume
and appears as a secondary volume once it is part of a replication set. For virtual replications, it is a base volume. It is the destination for the
replication snapshot copies. It cannot be mapped to any hosts.


Sync points (linear replications only)—Replication snapshots are retained on both the primary volume and the secondary volume. When a
matching pair of snapshots is retained on both the primary and secondary volumes, they are referred to as sync points. There are four types of
sync points:

• "Only sync point": the only replication snapshot that is copy-complete on any secondary system.
• "Current sync point": the latest replication snapshot that is copy-complete on any secondary system.
• "Common sync point": the latest replication snapshot that is copy-complete on all secondary systems.
• "Old common sync point": a common sync point that has been superseded by a new common sync point.
VSS HW Provider—A software driver supplied by the storage array vendor that enables the vendor's storage array to interact with the
Microsoft Volume Shadow Copy Service (VSS) framework.

VSS Requestor—A software tool or application that manages the execution of user VSS commands.

VSS Writer—A software driver supplied by the Windows Server application vendor that enables the application to interact with the Microsoft
VSS framework.

For more information

• HPE MSA 2040 Storage
• HPE MSA 2040 Storage QuickSpecs
• HPE MSA 1040 Storage
• HPE MSA 1040 Storage QuickSpecs
• HPE MSA 2040 CLI Reference Guide
• HPE MSA 2040 SMU Reference Guide
• HPE MSA 1040 CLI Reference Guide
• HPE MSA 1040 SMU Reference Guide
• HP P2000 G3 MSA Array Systems
• HPE MSA P2000 G3 Modular Smart Array Systems QuickSpecs
• HP P2000 G3 MSA System CLI Reference Guide
• HP P2000 G3 MSA System SMU Reference Guide
• HPE MSA 2040 Event Descriptions Reference Guide
• HPE MSA 1040 Event Descriptions Reference Guide
• HP P2000 G3 MSA System Event Descriptions Reference Guide

Learn more at

hpe.com/storage/RemoteSnap


© Copyright 2010–2011, 2013–2014, 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to
change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States
and/or other countries.

4AA1-0977ENW, May 2016, rev. 5

