'''RecoverPoint''' is a [[continuous data protection]] product offered by [[Dell EMC]] which supports [[Replication (computing)#Disk storage replication|asynchronous]] and [[Replication (computing)#Disk storage replication|synchronous]] data [[Replication (computer science)|replication]] of block-based storage. RecoverPoint was originally created by a company called Kashya, which was bought by EMC in 2006.<ref>{{Cite web |title= EMC Buys Kashya for Replication Technology Opportunities |date= May 16, 2006 |publisher= Gartner |author= Robert E. Passmore, Dave Russell and Stanley Zaffos |url= https://www.gartner.com/doc/492305/emc-buys-kashya-replication-technology |archive-url= https://web.archive.org/web/20161101102140/https://www.gartner.com/doc/492305/emc-buys-kashya-replication-technology |url-status= dead |archive-date= November 1, 2016 |accessdate= October 31, 2016 }}</ref>


== Description ==
Kashya was founded in February 2001 and was originally located in [[Ramat Gan]], [[Israel]].<ref>{{Cite web |title= Form D: Notice of Sale of Securities |publisher= US SEC |date= June 9, 2004 |url= https://www.sec.gov/Archives/edgar/vprr/0403/04032077.pdf |accessdate= October 31, 2016 }}</ref>
Venture funding included [[Battery Ventures]] and Jerusalem Global Ventures.<ref>{{Cite web |title= Kashya in a nutshell |work= Old web site |url= http://www.kashya.com/ |archivedate= January 21, 2002 |archiveurl= https://web.archive.org/web/20020121002328/http://www.kashya.com/ |accessdate= October 31, 2016 }}</ref>
In 2003, additional operations in [[San Jose, California]] were announced along with $12 million in funding and a first product.<ref>{{Cite web |title= Kashya Makes Kopies |date= March 27, 2003 |work= Byte and Switch |author= Jo Maitland |url= http://www.networkcomputing.com/careers/kashya-makes-kopies/1071353198 |accessdate= October 31, 2016 }}</ref>
Kashya was acquired by [[EMC Corporation]] on May 9, 2006, for $153 million.<ref>{{Cite news |title= EMC Coughs Up for Kashya |work= Byte and Switch |author= Dave Raffo |date= May 9, 2006 |url= http://www.networkcomputing.com/data-centers/emc-coughs-kashya/341206221 |accessdate= October 31, 2016 }}</ref>
EMC had already announced a product named RecoverPoint in October 2005, adapted from a product called Recovery One from Mendocino Software.<ref>{{Cite web |title= EMC Shows Some CDP |work= Enterprise Storage Forum |date= October 24, 2005 |author= Clint Boulton |url= http://www.enterprisestorageforum.com/continuity/news/article.php/3558516/EMC-Shows-Some-CDP.htm |accessdate= October 31, 2016 }}</ref>
The Kashya product had been named KDX 5000.<ref>{{Cite news |title= EMC plans to use Kashya technologies in RecoverPoint |date= June 13, 2006 |author= Greg Meckbach |work= IT Business |url= http://www.itbusiness.ca/news/emc-plans-to-use-kashya-technologies-in-recoverpoint/936 |accessdate= October 31, 2016 }}</ref>
The EMC RecoverPoint product based on Kashya technology was released in 2007, and version 3.0 released in 2008.<ref>{{Cite news |title= EMC RecoverPoint, a Single Solution for CDP and Disaster Recovery Using CLARiiON CX3 arrays |work= Press release (edited) |date= February 26, 2008 |url= http://www.storagenewsletter.com/rubriques/software/emc-recoverpoint-cdp-disaster-recovery/ |accessdate= October 31, 2016 }}</ref><ref>{{Cite web |title= EMC Updates RecoverPoint SAN CDP/Replication Engine |author= Howard Marks |date= February 25, 2008 |work= Information Week blog |url= http://www.informationweek.com/blog/main/archives/2008/02/emc_updates_rec.html |archivedate= May 2, 2008 |archiveurl= https://web.archive.org/web/20080502205249/http://www.informationweek.com/blog/main/archives/2008/02/emc_updates_rec.html |accessdate= October 31, 2016 }}</ref>


RecoverPoint continuous data protection (CDP) tracks changes to data at a block level and journals these changes.<ref name="h4175">http://www.emc.com/collateral/software/white-papers/h4175-recoverpoint-clr-operational-dr-wp.pdf {{Bare URL PDF|date=March 2022}}</ref>
Every [[Input/output|write]] is tracked and stored as a separate [[Snapshot (computer storage)|snapshot]]; alternatively, groups of writes can be aggregated, according to configuration, to reduce storage space and network traffic. The journal then allows rolling data back to a previous point in time, to view the drive contents as they were before a given data corruption. CDP can journal each write individually, enabling any-point-in-time snapshots, or it can be configured to combine consecutive writes to reduce journal space and improve bandwidth. CDP works only over a [[storage area network]]; the RecoverPoint [[Computer appliance|appliances]] need to be configured with both the [[Disk mirroring|replica]] and the journal [[Logical Unit Number]]s (LUNs).
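The journaling scheme described above can be sketched in Python. This is a simplified conceptual illustration, not EMC's implementation; all names are hypothetical:

```python
class CdpJournal:
    """Toy block-level write journal: every write is kept,
    so any point in time can be reconstructed."""

    def __init__(self, base_image):
        self.base = dict(base_image)   # block number -> data at time 0
        self.entries = []              # (timestamp, block, data), append-only

    def record_write(self, timestamp, block, data):
        # Writes arrive from the splitter in time order.
        self.entries.append((timestamp, block, data))

    def image_at(self, timestamp):
        """Roll the base image forward, applying only the writes
        that happened at or before the requested point in time."""
        image = dict(self.base)
        for ts, block, data in self.entries:
            if ts > timestamp:
                break
            image[block] = data
        return image

# A corruption written at t=30 can be undone by viewing the volume at t=29.
j = CdpJournal({0: "good", 1: "good"})
j.record_write(10, 0, "update")
j.record_write(30, 1, "corrupt")
print(j.image_at(29))  # {0: 'update', 1: 'good'}
```

Aggregating consecutive writes, as described above, would simply merge adjacent journal entries before they are stored, trading any-point-in-time granularity for journal space.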


RecoverPoint continuous remote replication (CRR) maintains a replica at a remote site. Such a setup requires a RecoverPoint appliance cluster at both the local and the remote site. The two clusters communicate over either [[Fibre Channel]] (FC) or [[Internet Protocol]]. RecoverPoint applies [[data compression]] and [[data de-duplication]] to reduce [[wide area network]] traffic. As of RecoverPoint 3.4, only one remote site is supported. CRR can be combined with CDP to provide concurrent local and remote (CLR) replication.
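The bandwidth-reduction idea can be illustrated with a short Python sketch (purely conceptual; this is not RecoverPoint's actual wire format or algorithm): blocks the remote side has already seen are replaced by hash references, and the remainder is compressed before crossing the WAN.

```python
import hashlib
import zlib

def prepare_batch(writes, seen_hashes):
    """Reduce a batch of (block, data) writes before sending them
    over the WAN: skip blocks already known remotely (de-duplication),
    then compress what remains."""
    payload = []
    for block, data in writes:
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen_hashes:
            payload.append((block, "ref", digest))  # send a reference only
        else:
            seen_hashes.add(digest)
            payload.append((block, "data", zlib.compress(data)))
    return payload

seen = set()
batch = prepare_batch([(0, b"A" * 4096), (1, b"A" * 4096)], seen)
# Block 1 is byte-identical to block 0, so only a hash reference is sent.
print(batch[1][1])  # ref
```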


A consistency group (CG) groups several LUNs together to ensure [[Causal consistency|write-order consistency]] across several volumes. This is used, for example, with a database that stores its data and journal on different logical drives: these logical drives must be kept in sync on the replica if [[Consistency model|data-consistency]] is to be preserved. Other examples are multi-volume file systems such as [[ZFS]] or Windows' [[Logical Disk Manager|Dynamic Disks]].<ref>https://globalsp.ts.fujitsu.com/dmsp/docs/ss_recoverpoint.pdf {{Dead link|date=February 2022}}</ref>
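The shared write order that a consistency group enforces can be sketched as follows (a toy model with hypothetical names, not RecoverPoint's internals): every member LUN draws from one global sequence counter, so the replica replays writes across volumes in the original application order.

```python
import itertools

class ConsistencyGroup:
    """Toy consistency group: one global sequence counter spans all
    member LUNs, so a replica can replay writes in the exact order
    the application issued them, even across volumes."""

    def __init__(self, lun_names):
        self.luns = set(lun_names)
        self.counter = itertools.count()
        self.log = []

    def write(self, lun, block, data):
        assert lun in self.luns, f"{lun} is not in this group"
        seq = next(self.counter)
        self.log.append((seq, lun, block, data))

    def replay_order(self):
        # The replica applies writes sorted by the shared sequence
        # number, never reordering them across volumes.
        return [(lun, block) for _, lun, block, _ in sorted(self.log)]

cg = ConsistencyGroup(["db_data", "db_log"])
cg.write("db_log", 7, b"begin txn")    # the log record must land first
cg.write("db_data", 3, b"row update")
cg.write("db_log", 8, b"commit")
print(cg.replay_order())  # [('db_log', 7), ('db_data', 3), ('db_log', 8)]
```

If the data and log volumes were replicated independently, the replica could show a committed transaction whose row update never arrived; the shared counter rules that out.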


Similar to other continuous data protection products, and unlike [[backup]] products, RecoverPoint needs to obtain a copy of every write in order to track data changes. EMC advertises RecoverPoint as heterogeneous due to its support of multi-vendor servers, networks and [[Disk array|storage arrays]].<ref>http://www.emc.com/collateral/software/data-sheet/h2769-emc-recoverpoint-family.pdf {{Bare URL PDF|date=March 2022}}</ref>


Host-based write splitting is done using a [[device driver]] that is installed on the server accessing the storage volumes. The usage of a host-based splitter allows replication of selected non-EMC storage.
Fabric-based splitters are available for [[Brocade Communications Systems]] SAN switches and for [[Cisco Systems]] SANTap; these require investment in additional switch [[Blade server|blades]]. This configuration allows splitting from all operating systems, regardless of version, and is agnostic to the storage array vendor.


Storage-array splitters are supported only on a subset of EMC storage products. This method allows write splitting from all operating systems and does not require special SAN switching hardware. RecoverPoint/SE is a slimmed-down version that supports only this type of splitter.
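All three splitter types perform the same basic task at different layers: duplicating each write to both the production storage and the appliance. A minimal host-based sketch in Python (conceptual only; the interfaces are hypothetical, and a real splitter is a kernel-level driver):

```python
class WriteSplitter:
    """Toy host-based write splitter: every write goes to the
    production storage and, in addition, a copy goes to the
    RecoverPoint appliance for journaling and replication."""

    def __init__(self, storage, appliance):
        self.storage = storage       # callable: (block, data) -> None
        self.appliance = appliance   # callable: (block, data) -> None

    def write(self, block, data):
        self.appliance(block, data)  # copy for the journal
        self.storage(block, data)    # the real write
        return "ack"                 # acknowledge to the application

disk, journal = {}, []
s = WriteSplitter(lambda b, d: disk.__setitem__(b, d),
                  lambda b, d: journal.append((b, d)))
s.write(5, b"payload")
print(disk[5], len(journal))  # b'payload' 1
```

A fabric- or array-based splitter would do the same duplication inside the SAN switch or the storage controller instead of on the host, which is why those variants are operating-system agnostic.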


Each site requires installation of a cluster composed of two to eight RecoverPoint appliances. The appliances work together as a [[high availability]] [[Computer cluster|cluster]]. Each appliance is connected via FC to the SAN and must access both the server ([[SCSI]] initiator) and the storage (SCSI target). Each appliance must also be connected to an IP network for management.
Replication takes place over either FC or standard Internet Protocol.

One or more splitters would split traffic to both the storage and the appliances.
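The high-availability behavior of the appliance cluster can be illustrated conceptually (a toy model with hypothetical names, not EMC's actual placement algorithm): workload is assigned to live appliances, and reassigned when an appliance fails.

```python
class ApplianceCluster:
    """Toy 2-8 node appliance cluster: each consistency group is
    owned by one live appliance; ownership moves on failure."""

    def __init__(self, appliances):
        assert 2 <= len(appliances) <= 8, "cluster size is 2-8 appliances"
        self.live = list(appliances)

    def owner_of(self, group):
        # Deterministic placement over whichever appliances are alive.
        return self.live[hash(group) % len(self.live)]

    def fail(self, appliance):
        self.live.remove(appliance)
        assert self.live, "cluster lost all appliances"

c = ApplianceCluster(["rpa1", "rpa2"])
g = c.owner_of("cg_sales")
c.fail(g)                      # the owning appliance goes down...
print(c.owner_of("cg_sales"))  # ...a surviving appliance takes over
```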






When configuring a consistency group, one must select the source LUNs whose data will be monitored, target LUNs of the same size, and journal LUNs. The management GUI indicates when the target LUNs are identical to the source LUNs, and allows selecting an earlier timestamp in order to roll the target LUNs back to a historical state.
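The size-matching constraint described above can be checked mechanically. A hypothetical validation sketch (function and LUN names are illustrative, not RecoverPoint's API):

```python
def validate_group(source_luns, target_luns, journal_luns):
    """Check a consistency-group configuration: each source LUN needs
    a target LUN of the same size, plus at least one journal LUN.
    LUNs are modelled as (name, size_in_gb) pairs."""
    errors = []
    if len(source_luns) != len(target_luns):
        errors.append("source/target count mismatch")
    else:
        for (src, s_size), (tgt, t_size) in zip(source_luns, target_luns):
            if s_size != t_size:
                errors.append(f"{tgt} is {t_size} GB but {src} is {s_size} GB")
    if not journal_luns:
        errors.append("no journal LUN configured")
    return errors

errs = validate_group([("src0", 100)], [("tgt0", 50)], [])
print(errs)
# ['tgt0 is 50 GB but src0 is 100 GB', 'no journal LUN configured']
```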



Besides integration with EMC products such as AppSync, ViPR, Replication Manager, Control Center and Unisphere, and the [[CLARiiON]], VNX, [[Symmetrix]] and [[VPLEX]] storage arrays, RecoverPoint integrates with the following products:
Integration with [[VMware]] [[VMware vSphere|vSphere]], VMware Site Recovery Manager and [[Microsoft]] [[Hyper-V]] allows protection to be specified per [[virtual machine]] instead of per volumes that are available to the [[hypervisor]].
Integration with Microsoft [[Shadow Copy]], [[Microsoft Exchange Server|Exchange]] and [[Microsoft SQL Server|SQL Server]] and [[Oracle Database]] Server allows RecoverPoint to temporarily stop writes by the host in order to take consistent application-specific snapshots.
[[Application programming interface]]s and [[command-line interface]]s allow customers to integrate with custom internal software.<ref name="h4175"/>



== Notes ==
{{Reflist}}


== References ==
*[http://www.storagenewsletter.com/news/software/emc-recoverpoint-cdp-disaster-recovery EMC RecoverPoint, a Single Solution for CDP and Disaster Recovery]
*[http://www.itbusiness.ca/it/client/en/Home/News.asp?id=39762 EMC plans to use Kashya technologies in RecoverPoint]
*[http://www.processor.com/editorial/article.asp?article=articles/p2749/40p49/40p49.asp Continuous Data Protection With EMC RecoverPoint]
*[http://www.commoncriteriaportal.org/products/?expand#OD EAL2+ certification]
*[http://www.informationweek.com/blog/main/archives/2008/02/emc_updates_rec.html EMC Updates RecoverPoint SAN CDP/Replication Engine]


== External links ==
* [http://www.emc.com/products/detail/software/recoverpoint.htm EMC RecoverPoint]
{{EMC}}


[[Category:Backup software]]
[[Category:Dell EMC]]
