RecoverPoint
RecoverPoint is a continuous data protection product offered by EMC Corporation that supports asynchronous and synchronous replication of block-based storage. RecoverPoint was originally created by Kashya, which was acquired by EMC in 2006;[1] Kashya had been founded in February 2001.[2]
Description
RecoverPoint continuous data protection (CDP) tracks changes to data at a block level and journals these changes.[3] Each write can be journaled individually and stored as a separate snapshot, enabling any-point-in-time recovery; alternatively, consecutive writes can be aggregated according to configuration in order to reduce journal space, storage space and network traffic. The journal allows rolling data back to a previous point in time, for example to view the drive contents as they were before a data corruption occurred. CDP works only over a storage area network: the RecoverPoint appliances need to be configured with the replica and journal Logical Unit Numbers (LUNs).
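For illustration, the following minimal sketch shows the journaling idea described above: every block write is recorded together with the block's previous contents, so the volume can be rolled back to an earlier point in time. The class and field names are hypothetical and do not reflect RecoverPoint's actual implementation.

```python
# Minimal sketch of block-level write journaling with rollback to a previous
# point in time. Illustrative only; not RecoverPoint's actual data structures.
class WriteJournal:
    def __init__(self):
        self._seq = 0
        self.entries = []  # (seq, block_address, previous_data, new_data)

    def record_write(self, volume, block_address, new_data):
        """Journal the write, then apply it to the (simulated) volume."""
        self._seq += 1
        self.entries.append((self._seq, block_address, volume.get(block_address), new_data))
        volume[block_address] = new_data
        return self._seq  # the caller can remember this as a "point in time"

    def roll_back(self, volume, point_in_time):
        """Undo journaled writes made after point_in_time, newest first."""
        while self.entries and self.entries[-1][0] > point_in_time:
            _, block_address, previous, _ = self.entries.pop()
            if previous is None:
                volume.pop(block_address, None)
            else:
                volume[block_address] = previous

volume = {}
journal = WriteJournal()
checkpoint = journal.record_write(volume, 0x10, b"orders v1")
journal.record_write(volume, 0x10, b"corrupted block")
journal.roll_back(volume, checkpoint)
assert volume[0x10] == b"orders v1"
```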
RecoverPoint continuous remote replication (CRR) maintains a replica at a remote site. Such a setup requires RecoverPoint appliance clusters at both the local and the remote site. The two clusters communicate over either Fibre Channel (FC) or Internet Protocol (IP). RecoverPoint applies data compression and data de-duplication in order to reduce wide area network traffic. As of RecoverPoint 3.4, only one remote site is supported. CRR can be combined with CDP in order to provide concurrent local and remote (CLR) replication.
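As a hedged sketch of how a batch of journaled writes could be de-duplicated and compressed before crossing the wide area network, the example below hashes each block so repeated contents are sent only once and then compresses the resulting payload. The batch format is invented for illustration and is not RecoverPoint's wire protocol.

```python
# Illustrative de-duplication and compression of a batch of block writes
# before sending it to the remote cluster. Format is invented, not RecoverPoint's.
import hashlib, json, zlib

def prepare_batch(writes):
    """writes: list of (block_address, data) pairs. Returns compressed bytes."""
    blocks, refs = {}, []
    for block_address, data in writes:
        digest = hashlib.sha256(data).hexdigest()
        blocks.setdefault(digest, data.hex())  # each unique block payload sent once
        refs.append((block_address, digest))   # duplicates become references
    payload = json.dumps({"blocks": blocks, "refs": refs}).encode()
    return zlib.compress(payload)

batch = [(0x10, b"A" * 4096), (0x20, b"A" * 4096), (0x30, b"B" * 4096)]
wire = prepare_batch(batch)
print(len(wire), "bytes on the wire for", sum(len(d) for _, d in batch), "bytes of writes")
```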
A consistency group (CG) groups several LUNs together in order to ensure write-order consistency across multiple volumes. This is used, for example, with a database that stores its data and journal on different logical drives: these drives must be kept in sync on the replica if data consistency is to be preserved. Other examples are multi-volume file systems such as ZFS or Windows Dynamic Disks.[4]
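The sketch below illustrates the consistency-group idea: writes to any member LUN share a single global ordering, and the replica only ever applies a prefix of that ordering, so the data volume can never get ahead of the journal volume. All names are illustrative, not RecoverPoint's internals.

```python
# Minimal sketch of a consistency group: one global write order across member LUNs.
class ConsistencyGroup:
    def __init__(self, lun_ids):
        self._seq = 0
        self.log = []            # (seq, lun_id, block, data)
        self.luns = set(lun_ids)

    def write(self, lun_id, block, data):
        assert lun_id in self.luns
        self._seq += 1
        self.log.append((self._seq, lun_id, block, data))

    def replay_until(self, seq_limit, replica):
        """Apply writes to the replica strictly in global sequence order."""
        for seq, lun_id, block, data in self.log:
            if seq > seq_limit:
                break
            replica.setdefault(lun_id, {})[block] = data

cg = ConsistencyGroup(["db_data", "db_journal"])
cg.write("db_journal", 7, b"begin txn 42")
cg.write("db_data", 120, b"row update for txn 42")
cg.write("db_journal", 8, b"commit txn 42")

replica = {}
cg.replay_until(2, replica)  # replica holds a consistent prefix: begin + data, no commit yet
```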
Like other continuous data protection products, and unlike backup products, RecoverPoint needs to obtain a copy of every write in order to track data changes. EMC advertises RecoverPoint as heterogeneous because it supports servers, networks and storage arrays from multiple vendors.[5]
Host-based write splitting is done by a device driver installed on the server that accesses the storage volumes; a host-based splitter allows replication of selected non-EMC storage. Fabric-based splitters are available for Brocade Communications Systems SAN switches and for Cisco Systems SANTap, which requires investment in additional switch blades. This configuration allows splitting from all operating systems regardless of their version and is agnostic to the storage array vendor.
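Conceptually, a write splitter is a thin layer in the I/O path that forwards each write both to the production storage and to a replication appliance, as in the minimal sketch below. The class names are hypothetical and do not represent a real driver interface.

```python
# Sketch of the write-splitting idea: every write is duplicated to the
# production storage and to the replication appliance. Names are invented.
class WriteSplitter:
    def __init__(self, storage, appliance):
        self.storage = storage      # object exposing write(block, data)
        self.appliance = appliance  # same interface, receives the copy

    def write(self, block, data):
        self.appliance.write(block, data)  # copy used for journaling/replication
        self.storage.write(block, data)    # the "real" write the host expects

class InMemoryTarget:
    def __init__(self):
        self.blocks = {}
    def write(self, block, data):
        self.blocks[block] = data

storage, appliance = InMemoryTarget(), InMemoryTarget()
splitter = WriteSplitter(storage, appliance)
splitter.write(0x42, b"payload")
assert storage.blocks == appliance.blocks
```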
Storage array splitters are supported only on a subset of EMC storage products. This method allows write splitting from all operating systems and does not require special SAN switching hardware. RecoverPoint/SE is a slimmed-down version that supports only this type of splitter.
Each site requires installation of a cluster composed of two to eight RecoverPoint appliances, which work together as a high-availability cluster. Each appliance is connected via FC to the SAN and must access both the servers (SCSI initiators) and the storage (SCSI targets); each appliance must also be connected to an IP network for management. Replication takes place over either FC or standard Internet Protocol. One or more splitters copy write traffic to both the storage and the appliances.
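The connectivity requirements above can be summarised as a simple check, sketched below with hypothetical field names; this is not a RecoverPoint configuration format.

```python
# Illustrative validation of the per-site cluster requirements described above.
def validate_cluster(appliances):
    if not 2 <= len(appliances) <= 8:
        raise ValueError("a site cluster needs between 2 and 8 appliances")
    for a in appliances:
        if not (a.get("fc_sees_hosts") and a.get("fc_sees_storage")):
            raise ValueError(f"{a['name']}: needs FC access to both initiators and targets")
        if not a.get("mgmt_ip"):
            raise ValueError(f"{a['name']}: needs a management IP address")

validate_cluster([
    {"name": "rpa1", "fc_sees_hosts": True, "fc_sees_storage": True, "mgmt_ip": "10.0.0.11"},
    {"name": "rpa2", "fc_sees_hosts": True, "fc_sees_storage": True, "mgmt_ip": "10.0.0.12"},
])
```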
Besides integration with EMC products such as AppSync, ViPR, Replication Manager, Control Center and Unisphere, and with the CLARiiON, VNX, Symmetrix and VPLEX storage arrays, RecoverPoint integrates with several third-party products. Integration with VMware vSphere, VMware Site Recovery Manager and Microsoft Hyper-V allows protection to be specified per virtual machine (VM) instead of per volume available to the hypervisor. Integration with Microsoft Shadow Copy, Exchange, SQL Server and Oracle Database allows RecoverPoint to temporarily stop writes by the host in order to take consistent application-specific snapshots. Application programming interfaces (APIs) and command-line interfaces allow customers to integrate RecoverPoint with custom internal software.[3]
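The application integrations rely on a quiesce-then-snapshot pattern: the application briefly pauses and flushes its writes while a consistent point is marked. The sketch below illustrates the pattern with invented stand-in classes; it does not use real RecoverPoint, VSS or database APIs.

```python
# Illustrative quiesce-then-snapshot pattern for application-consistent copies.
# All classes and method names below are hypothetical stand-ins.
from contextlib import contextmanager

class FakeDatabase:
    def flush_and_pause_writes(self): print("writes paused, buffers flushed")
    def resume_writes(self): print("writes resumed")

class FakeReplication:
    def create_bookmark(self, label): print(f"application-consistent bookmark: {label}")

@contextmanager
def quiesced(app):
    app.flush_and_pause_writes()
    try:
        yield
    finally:
        app.resume_writes()

app, replication = FakeDatabase(), FakeReplication()
with quiesced(app):
    replication.create_bookmark("nightly")
```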
Notes
- ^ Robert E. Passmore, Dave Russell and Stanley Zaffos (May 16, 2006). "EMC Buys Kashya for Replication Technology Opportunities". Gartner. Retrieved October 31, 2016.
- ^ "Form D: Notice of Sale of Securities" (PDF). US SEC. June 9, 2004. Retrieved October 31, 2016.
- ^ a b http://www.emc.com/collateral/software/white-papers/h4175-recoverpoint-clr-operational-dr-wp.pdf
- ^ https://globalsp.ts.fujitsu.com/dmsp/docs/ss_recoverpoint.pdf
- ^ http://www.emc.com/collateral/software/data-sheet/h2769-emc-recoverpoint-family.pdf
External links
- EMC RecoverPoint