'''RecoverPoint''' is a [[continuous data protection]] product offered by [[Dell EMC]] which supports [[Replication (computing)#Disk storage replication|asynchronous]] and [[Replication (computing)#Disk storage replication|synchronous]] data [[Replication (computer science)|replication]] of block-based storage. RecoverPoint was originally created by a company called Kashya, which was bought by EMC in 2006.<ref>{{Cite web |title= EMC Buys Kashya for Replication Technology Opportunities |date= May 16, 2006 |publisher= Gartner |author= Robert E. Passmore, Dave Russell and Stanley Zaffos |url= https://www.gartner.com/doc/492305/emc-buys-kashya-replication-technology |archive-url= https://web.archive.org/web/20161101102140/https://www.gartner.com/doc/492305/emc-buys-kashya-replication-technology |url-status= dead |archive-date= November 1, 2016 |accessdate= October 31, 2016 }}</ref>


== Description ==
Kashya was founded in February 2001 and was originally located in [[Ramat Gan]], [[Israel]].<ref>{{Cite web |title= Form D: Notice of Sale of Securities |publisher= US SEC |date= June 9, 2004 |url= https://www.sec.gov/Archives/edgar/vprr/0403/04032077.pdf |accessdate= October 31, 2016 }}</ref>
Venture funding included [[Battery Ventures]] and Jerusalem Global Ventures.<ref>{{Cite web |title= Kashya in a nutshell |work= Old web site |url= http://www.kashya.com/ |archivedate= January 21, 2002 |archiveurl= https://web.archive.org/web/20020121002328/http://www.kashya.com/ |accessdate= October 31, 2016 }}</ref>
In 2003, additional operations in [[San Jose, California]] were announced along with $12 million in funding and a first product.<ref>{{Cite web |title= Kashya Makes Kopies |date= March 27, 2003 |work= Byte and Switch |author= Jo Maitland |url= http://www.networkcomputing.com/careers/kashya-makes-kopies/1071353198 |accessdate= October 31, 2016 }}</ref>
Kashya was acquired by [[EMC Corporation]] on May 9, 2006, for $153 million.<ref>{{Cite news |title= EMC Coughs Up for Kashya |work= Byte and Switch |author= Dave Raffo |date= May 9, 2006 |url= http://www.networkcomputing.com/data-centers/emc-coughs-kashya/341206221 |accessdate= October 31, 2016 }}</ref>
EMC had already announced a product named RecoverPoint in October 2005, adapted from a product called Recovery One from Mendocino Software.<ref>{{Cite web |title= EMC Shows Some CDP |work= Enterprise Storage Forum |date= October 24, 2005 |author= Clint Boulton |url= http://www.enterprisestorageforum.com/continuity/news/article.php/3558516/EMC-Shows-Some-CDP.htm |accessdate= October 31, 2016 }}</ref>
The Kashya product had been named KDX 5000.<ref>{{Cite news |title= EMC plans to use Kashya technologies in RecoverPoint |date= June 13, 2006 |author= Greg Meckbach |work= IT Business |url= http://www.itbusiness.ca/news/emc-plans-to-use-kashya-technologies-in-recoverpoint/936 |accessdate= October 31, 2016 }}</ref>
The EMC RecoverPoint product based on Kashya technology was released in 2007, and version 3.0 was released in 2008.<ref>{{Cite news |title= EMC RecoverPoint, a Single Solution for CDP and Disaster Recovery Using CLARiiON CX3 arrays |work= Press release (edited) |date= February 26, 2008 |url= http://www.storagenewsletter.com/rubriques/software/emc-recoverpoint-cdp-disaster-recovery/ |accessdate= October 31, 2016 }}</ref><ref>{{Cite web |title= EMC Updates RecoverPoint SAN CDP/Replication Engine |author= Howard Marks |date= February 25, 2008 |work= Information Week blog |url= http://www.informationweek.com/blog/main/archives/2008/02/emc_updates_rec.html |archivedate= May 2, 2008 |archiveurl= https://web.archive.org/web/20080502205249/http://www.informationweek.com/blog/main/archives/2008/02/emc_updates_rec.html |accessdate= October 31, 2016 }}</ref>


RecoverPoint continuous data protection (CDP) tracks changes to data at a block level and journals these changes.<ref name="h4175">http://www.emc.com/collateral/software/white-papers/h4175-recoverpoint-clr-operational-dr-wp.pdf {{Bare URL PDF|date=March 2022}}</ref>
Every [[Input/output|write]] is tracked and stored as a separate [[Snapshot (computer storage)|snapshot]], which enables recovery to any point in time; alternatively, consecutive writes can be aggregated according to configuration in order to reduce journal space and network traffic. The journal allows rolling data back to a previous point in time, for example to view the drive contents as they were before a data corruption occurred. Unlike mirroring, where logical corruption on the source volume is eventually propagated to the destination, the journal preserves a view of the volume from before the corruption. CDP works only over a [[storage area network]]; the RecoverPoint [[Computer appliance|appliances]] need to be configured for the [[Disk mirroring|replica]] and the journal [[Logical Unit Number]]s (LUNs).
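
The journaling idea can be illustrated with a minimal sketch. The Python code below is illustrative only, not RecoverPoint's actual implementation: every write is appended to a journal with a sequence number, and an earlier image of the volume is reconstructed by replaying entries up to the requested point.

<syntaxhighlight lang="python">
class WriteJournal:
    """Minimal sketch of block-level write journaling (illustrative only)."""

    def __init__(self):
        self.entries = []   # list of (sequence_number, block_address, data)
        self.sequence = 0

    def record_write(self, block_address, data):
        # Every write to the protected volume is also appended to the journal.
        self.sequence += 1
        self.entries.append((self.sequence, block_address, data))
        return self.sequence

    def image_at(self, point):
        # Rebuild the volume contents as they were after write number `point`
        # by replaying journal entries up to that point.
        blocks = {}
        for seq, addr, data in self.entries:
            if seq > point:
                break
            blocks[addr] = data
        return blocks

# Usage: roll back to the state before a corrupting write.
journal = WriteJournal()
good_point = journal.record_write(0, b"good data")
journal.record_write(0, b"corrupted")
print(journal.image_at(good_point))  # {0: b'good data'}
</syntaxhighlight>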


RecoverPoint continuous remote replication (CRR) enables a replica at a remote site. For such a setup, RecoverPoint appliance clusters are required at both the local and remote sites. The two clusters communicate over either [[Fibre Channel]] (FC) or [[Internet Protocol]]. RecoverPoint applies [[data compression]] and [[data de-duplication]] in order to reduce [[wide area network]] traffic. As of RecoverPoint 3.4, only one remote site is supported. CRR can be combined with CDP in order to provide concurrent local and remote (CLR) replication.
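
The following sketch shows, in generic terms, how a batch of changed blocks might be deduplicated and compressed before crossing a WAN. The hashing and zlib compression here are ordinary techniques chosen for illustration; they are not a description of RecoverPoint's internal wire format.

<syntaxhighlight lang="python">
import hashlib
import zlib

def prepare_batch(blocks, already_sent_hashes):
    """Deduplicate and compress a batch of blocks before WAN transfer (sketch).

    blocks: dict mapping block address -> bytes
    already_sent_hashes: set of content hashes the remote site already holds
    """
    payload = []
    for address, data in blocks.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in already_sent_hashes:
            # De-duplication: send only a reference to content the remote site has.
            payload.append((address, "ref", digest))
        else:
            # Compression reduces the remaining wide-area traffic.
            payload.append((address, "data", zlib.compress(data)))
            already_sent_hashes.add(digest)
    return payload

sent = set()
batch = {0: b"A" * 4096, 1: b"A" * 4096, 2: b"B" * 4096}
print([(addr, kind) for addr, kind, _ in prepare_batch(batch, sent)])
# [(0, 'data'), (1, 'ref'), (2, 'data')]
</syntaxhighlight>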


A consistency group (CG) groups several LUNs together in order to ensure [[Causal consistency|write-order consistency]] across several volumes. This is used, for example, with a database that stores its data and journal on different logical drives; these drives must be kept in sync on the replica if [[Consistency model|data consistency]] is to be preserved. Other examples are multi-volume file systems such as [[ZFS]] or Windows [[Logical Disk Manager|Dynamic Disks]].<ref>https://globalsp.ts.fujitsu.com/dmsp/docs/ss_recoverpoint.pdf {{Dead link|date=February 2022}}</ref>
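
The idea can be sketched as follows; this is hypothetical code, not the product's mechanism. Writes to all member LUNs pass through one shared sequence counter, so the replica can apply them across volumes in the order the application issued them.

<syntaxhighlight lang="python">
import itertools

class ConsistencyGroup:
    """Sketch: preserve write ordering across several LUNs (illustrative only)."""

    def __init__(self, lun_ids):
        self.lun_ids = set(lun_ids)
        self._sequence = itertools.count(1)  # one counter shared by all member LUNs
        self.log = []

    def write(self, lun_id, block_address, data):
        if lun_id not in self.lun_ids:
            raise ValueError(f"LUN {lun_id} is not a member of this consistency group")
        # Stamping every write with a group-wide sequence number lets the replica
        # replay data and database-journal writes in their original order.
        self.log.append((next(self._sequence), lun_id, block_address, data))

# A database keeps its data files and transaction journal on different LUNs.
cg = ConsistencyGroup(["data_lun", "journal_lun"])
cg.write("journal_lun", 10, b"begin txn")
cg.write("data_lun", 200, b"row update")
cg.write("journal_lun", 11, b"commit")
for entry in cg.log:
    print(entry)
</syntaxhighlight>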


Similar to other continuous data protection products, and unlike [[backup]] products, RecoverPoint needs to obtain a copy of every write in order to track data changes. Three methods of write splitting are supported: host-based, fabric-based, and in the storage array. EMC advertises RecoverPoint as heterogeneous due to its support of servers, networks and [[Disk array|storage arrays]] from multiple vendors.<ref>http://www.emc.com/collateral/software/data-sheet/h2769-emc-recoverpoint-family.pdf {{Bare URL PDF|date=March 2022}}</ref>


Host-based write splitting is done using a [[device driver]] that is installed on the server accessing the storage volumes. A host-based splitter allows replication of selected non-EMC storage, although splitters are not available for every operating system and version.
Fabric-based splitters are available for [[Brocade Communications Systems]] SAN switches and for [[Cisco Systems]] SANTap; these require investment in additional switch [[Blade server|blades]]. This configuration allows splitting from all operating systems regardless of version, and is agnostic to the storage array vendor.


Storage array splitters are only supported on a subset of EMC storage products. This method allows write splitting from all operating systems, and does not require special SAN switching hardware. RecoverPoint/SE is a slimmed-down version that supports only this type of splitter.
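
Regardless of where it runs, the role of a write splitter can be illustrated by the simplified sketch below. The code is hypothetical, with assumed write() interfaces standing in for the real block device and appliance; it is not an actual driver. The splitter sits in the I/O path and forwards a copy of every write to the appliance as well as to the storage array.

<syntaxhighlight lang="python">
class HostWriteSplitter:
    """Sketch of a write splitter in the I/O path (illustrative only).

    `storage` and `appliance` stand in for the real block device and the
    RecoverPoint appliance; both are assumed to expose write(address, data).
    """

    def __init__(self, storage, appliance):
        self.storage = storage
        self.appliance = appliance

    def write(self, block_address, data):
        # Mirror the write to the appliance so it can journal and replicate it...
        self.appliance.write(block_address, data)
        # ...then complete the original write against the storage array.
        self.storage.write(block_address, data)

class InMemoryTarget:
    """Stand-in for a block device or appliance endpoint."""
    def __init__(self):
        self.blocks = {}
    def write(self, block_address, data):
        self.blocks[block_address] = data

array, appliance = InMemoryTarget(), InMemoryTarget()
splitter = HostWriteSplitter(array, appliance)
splitter.write(42, b"payload")
assert array.blocks == appliance.blocks == {42: b"payload"}
</syntaxhighlight>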


Each site requires installation of a cluster composed of two to eight RecoverPoint appliances, which work together as a [[high availability]] [[Computer cluster|cluster]]. Each appliance is connected via FC to the SAN, and must access both the server ([[SCSI]] initiator) and the storage (SCSI target). Each appliance must also be connected to an IP network for management. Replication takes place over either FC or standard Internet Protocol, and one or more splitters split write traffic to both the storage and the appliances.


Besides integrating with EMC products such as AppSync, ViPR, Replication Manager, Control Center and Unisphere, and with the [[Clariion]], VNX, [[Symmetrix]] and [[VPLEX]] storage arrays, RecoverPoint integrates with several third-party products.
Integration with [[VMware]] [[VMware vSphere|vSphere]], VMware Site Recovery Manager and [[Microsoft]] [[Hyper-V]] allows protection to be specified per [[virtual machine]] rather than per volume available to the [[hypervisor]].

Integration with Microsoft [[Shadow Copy]], [[Microsoft Exchange Server|Exchange]], [[Microsoft SQL Server|SQL Server]] and [[Oracle Database]] allows RecoverPoint to temporarily stop writes by the host in order to take consistent application-specific snapshots.
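
The pattern behind these integrations can be sketched generically. The names below are hypothetical stand-ins, not the actual Shadow Copy or RecoverPoint APIs: the application is briefly quiesced so that the snapshot captures a state the application itself considers consistent.

<syntaxhighlight lang="python">
from contextlib import contextmanager

@contextmanager
def quiesced(application):
    """Temporarily hold new writes and flush pending ones (illustrative sketch).

    `application` is a hypothetical object; flush(), pause_writes() and
    resume_writes() stand in for mechanisms such as VSS freeze/thaw or a
    database backup mode, not real RecoverPoint calls.
    """
    application.flush()         # push buffered data to disk
    application.pause_writes()  # hold new writes while the snapshot is marked
    try:
        yield
    finally:
        application.resume_writes()

def take_consistent_snapshot(application, replication):
    # While the application is quiesced, ask the replication engine (a
    # hypothetical `replication` object) to mark the current point in the
    # journal as an application-consistent snapshot.
    with quiesced(application):
        replication.create_bookmark("application-consistent")
</syntaxhighlight>
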
[[Application programming interface]]s and [[command-line interface]]s allow customers to integrate RecoverPoint with custom internal software.<ref name="h4175"/>







== Notes ==


== References ==
*[http://www.processor.com/editorial/article.asp?article=articles/p2749/40p49/40p49.asp Continuous Data Protection With EMC RecoverPoint]
*[http://www.commoncriteriaportal.org/products/?expand#OD EAL2+ certification]


== External links ==
* [http://www.emc.com/products/detail/software/recoverpoint.htm EMC RecoverPoint]
{{EMC}}


[[Category:Backup software]]
[[Category:Dell EMC]]
