US11099956B1 - Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations - Google Patents
Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations
- Publication number
- US11099956B1 (application US16/831,562)
- Authority
- US
- United States
- Prior art keywords
- data
- failover
- storage
- virtual machine
- data storage
- Legal status: Active
Classifications
- G06F11/2094—Redundant storage or storage space
- G06F11/2028—Failover techniques eliminating a faulty processor or activating a spare
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
- G06F11/1469—Backup restoration techniques
- G06F11/2038—Active fault-masking with redundant processing functionality and a single idle spare processing component
- G06F11/2048—Active fault-masking with redundant processing functionality where the redundant components share neither address space nor persistent storage
- G06F11/2069—Management of state, configuration or failover (redundant persistent mass storage by mirroring)
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45583—Memory management, e.g. access or allocation
- G06F2201/815—Virtual
- G06F2201/82—Solving problems relating to consistency
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
Definitions
- Businesses recognize the commercial value of their data and seek reliable, cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity.
- a company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a maintenance program.
- companies also continue to seek innovative and robust techniques for ensuring disaster recovery will operate smoothly and reliably.
- the present inventors devised a scheme for disaster recovery (DR) orchestration of virtual machine (VM) failover and failback operations.
- An illustrative data storage management system deploys proprietary components (e.g., storage manager, data agents, media agents, backup nodes, etc.) at source data center(s) and at DR site(s).
- DR orchestration jobs are suitable for testing VM failover scenarios (“clone testing”), for conducting planned VM failovers, and for unplanned VM failovers.
- DR orchestration jobs also handle failback and integration of DR-generated data into the failback site.
- the illustrative approach is referred to herein as “snap-based DR orchestration.”
- the illustrative system exploits snapshot replication techniques.
- the system implements “snap backup jobs” that capture VM datastores at a source data center, in which so-called “hardware snapshots” are taken by the datastore's host storage device (e.g., a storage array, filer, and/or cloud storage resources).
- the system implements “auxiliary copy jobs” to replicate the snapshots to the DR site. Collectively, these jobs ensure that hardware snapshots regularly capture VM datastores at the source and that the DR site regularly receives snapshotted datastore data.
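The following sketch illustrates the two-job pattern just described: a snap backup job asks the datastore's host storage device for a hardware snapshot, and an auxiliary copy job replicates that snapshot to the DR site. All class, method, and object names here are hypothetical placeholders introduced for illustration; this is not the patented implementation or any vendor API.

```python
from dataclasses import dataclass


@dataclass
class SnapshotCopy:
    datastore: str          # source VM datastore (e.g., a LUN or NFS volume)
    snapshot_id: str        # identifier returned by the storage array/filer
    replicated_to_dr: bool  # True once the auxiliary copy job completes


def run_snap_backup_job(storage_array, datastore: str) -> SnapshotCopy:
    """Ask the datastore's host storage device for a hardware snapshot."""
    snapshot_id = storage_array.create_hardware_snapshot(datastore)  # hypothetical call
    return SnapshotCopy(datastore, snapshot_id, replicated_to_dr=False)


def run_auxiliary_copy_job(replication_link, copy: SnapshotCopy) -> SnapshotCopy:
    """Replicate the snapshot to the DR site's storage (mirror or vault copy)."""
    replication_link.replicate(copy.snapshot_id)  # hypothetical call
    copy.replicated_to_dr = True
    return copy


def protect_datastore(storage_array, replication_link, datastore: str) -> SnapshotCopy:
    # The two jobs run back to back so the DR site regularly receives
    # snapshotted datastore data while DR-site VMs stay powered off.
    copy = run_snap_backup_job(storage_array, datastore)
    return run_auxiliary_copy_job(replication_link, copy)
```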
- One of the advantages of the disclosed DR orchestration job is that it does not require that VMs or their corresponding datastores be actively operating at the DR site before the DR orchestration job is initiated, i.e., before failover.
- This approach is distinguishable from an alternative proprietary approach known as “Live Sync,” which relies on ongoing repetitive cycles of incremental backups at the source followed by restores at the DR site to maintain the DR site in a “warm” readiness state that can take over with minimal start-up effort.
- Live Sync requires VMs and their datastores to be actively operating (powered up) at the DR site in order to sustain the ongoing restore operations.
- With Live Sync, the DR site is operational after the first restore in a “warm” standby state.
- Live Sync can be relatively costly to operate and maintain as compared to the illustrative snap-based DR orchestration approach disclosed herein, because the Live Sync DR site must maintain actively operating VMs and datastores as well as data restoration infrastructure. In cloud computing environments, maintaining powered up VMs and data storage resources indefinitely can be very costly.
- the “warm” readiness of Live Sync is counter-balanced by relatively high costs of operation and maintenance of DR components and infrastructure.
- the illustrative snap-based DR orchestration takes a different approach that exploits snapshot techniques and other kinds of backup operations (e.g., auxiliary copy jobs) to feed data from source to DR site, and does not rely on Live Sync's ongoing cycles of backup and restore to maintain the DR site.
- the illustrative snap-based DR orchestration requires only minimal active resources at the DR site until such time as the DR orchestration job initiates a failover to the DR site. Accordingly, VMs are kept powered off at the DR site until failover.
- In certain embodiments, backup nodes that provide backup/restore infrastructure for completing the DR orchestration job execute on DR site VMs that are powered up on demand at failover.
- The cost and effort of maintaining active components at a “warm” DR site are thus avoided: snap-based DR orchestration instead relies on DR orchestration jobs to activate connections, establish datastores, and power up VMs as needed at the DR site, and to tear down those resources after failback completes. A simplified sketch of such an orchestration job follows.
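A hedged sketch of the failover and failback-teardown flow described above, assuming hypothetical `dr_site` and `failover_group` interfaces; the patent does not prescribe this code.

```python
def run_dr_orchestration_failover(dr_site, failover_group):
    """Bring the DR site up on demand; nothing needs to be 'warm' beforehand."""
    # 1. Power up backup nodes: DR-site VMs that host backup/restore infrastructure.
    nodes = dr_site.power_up_backup_nodes(failover_group)        # hypothetical call

    # 2. Activate connections between DR-site storage and the backup nodes.
    dr_site.activate_storage_connections(nodes)                  # hypothetical call

    # 3. Establish datastores from the latest replicated hardware snapshots.
    datastores = [dr_site.mount_datastore_from_snapshot(snap)
                  for snap in dr_site.latest_replicated_snapshots(failover_group)]

    # 4. Power up the failover VMs against the newly established datastores.
    return [dr_site.power_up_vm(vm, datastores) for vm in failover_group.vms]


def tear_down_after_failback(dr_site, failover_group):
    """Once failback to the source completes, release DR-site resources."""
    for vm in failover_group.vms:
        dr_site.power_down_vm(vm)
    dr_site.unmount_datastores(failover_group)
    dr_site.power_down_backup_nodes(failover_group)
```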
- the illustrative data storage management system is specially configured to track certain administrative information at source and DR sites, coordinate operations between the sites, and manage a number of operations at the DR site to ensure a successful failover, and conversely to ensure successful failbacks to the source.
- The source site, the DR site, or both can be a virtualized on-premises data center or a cloud computing environment, without limitation.
- Although many of the depicted scenarios illustrate a virtualized data center as the source production environment and a cloud computing environment as the failover/DR site, the embodiments are not so limited.
- FIG. 1A is a block diagram illustrating an exemplary information management system.
- FIG. 1B is a detailed view of a primary storage device, a secondary storage device, and some examples of primary data and secondary copy data.
- FIG. 1C is a block diagram of an exemplary information management system including a storage manager, one or more data agents, and one or more media agents.
- FIG. 1D is a block diagram illustrating a scalable information management system.
- FIG. 1E illustrates certain secondary copy operations according to an exemplary storage policy.
- FIGS. 1F-1H are block diagrams illustrating suitable data structures that may be employed by the information management system.
- FIG. 2A illustrates a system and technique for synchronizing primary data to a destination such as a failover site using secondary copy data.
- FIG. 2B illustrates an information management system architecture incorporating use of a network file system (NFS) protocol for communicating between the primary and secondary storage subsystems.
- FIG. 2C is a block diagram of an example of a highly scalable managed data pool architecture.
- FIG. 3A is a block diagram illustrating system 300 for snap-based disaster recovery orchestration of virtual machine failover and failback operations, according to an illustrative embodiment.
- FIG. 3B is a block diagram illustrating the system 300 , wherein the DR site is implemented in a cloud computing environment, according to an illustrative embodiment.
- FIG. 4 is a block diagram illustrating some salient components of system 300 , according to an illustrative embodiment.
- FIG. 5A is a block diagram illustrating some salient components of system 300 , wherein the source site and DR site are virtualized data centers, according to an illustrative embodiment.
- FIG. 5B is a block diagram illustrating some salient components of system 300 , wherein the DR site is implemented in a cloud computing environment, according to an illustrative embodiment.
- FIG. 5C is a block diagram illustrating some salient components involved in snap backup jobs and auxiliary copy jobs according to an illustrative embodiment.
- FIG. 6 is a flow chart that depicts some salient operations of a method 600 according to an illustrative embodiment.
- FIG. 7 depicts some salient operations of block 612 in method 600 .
- FIG. 8 depicts some salient operations of block 614 in method 600 .
- FIG. 9 depicts some salient operations of block 616 in method 600 .
- FIG. 10 depicts some salient operations of block 620 of method 600 .
- FIG. 11 depicts an illustrative screenshot of an administrative screen in system 300 for adding a failover group.
- FIG. 12 depicts an illustrative screenshot of an administrative screen in system 300 for editing a failover group and adding customization details for mapping source to destination relationships.
- FIG. 13 depicts an illustrative screenshot of an administration screen for defining how snapshot copies are to be replicated, showing a mirror copy option and an alternative vault copy option.
- FIG. 1A shows one such information management system 100 (or “system 100 ”), which generally includes combinations of hardware and software configured to protect and manage data and metadata that are generated and used by computing devices in system 100 .
- System 100 may be referred to in some embodiments as a “storage management system” or a “data storage management system.”
- System 100 performs information management operations, some of which may be referred to as “storage operations” or “data storage operations,” to protect and manage the data residing in and/or managed by system 100 .
- the organization that employs system 100 may be a corporation or other business entity, non-profit organization, educational institution, household, governmental agency, or the like.
- systems and associated components described herein may be compatible with and/or provide some or all of the functionality of the systems and corresponding components described in one or more of the following U.S. patents/publications and patent applications assigned to Commvault Systems, Inc., each of which is hereby incorporated by reference in its entirety herein:
- System 100 includes computing devices and computing technologies.
- system 100 can include one or more client computing devices 102 and secondary storage computing devices 106 , as well as storage manager 140 or a host computing device for it.
- Computing devices can include, without limitation, one or more: workstations, personal computers, desktop computers, or other types of generally fixed computing systems such as mainframe computers, servers, and minicomputers.
- Other computing devices can include mobile or portable computing devices, such as one or more laptops, tablet computers, personal data assistants, mobile phones (such as smartphones), and other mobile or portable computing devices such as embedded computers, set top boxes, vehicle-mounted devices, wearable computers, etc.
- Servers can include mail servers, file servers, database servers, virtual machine servers, and web servers.
- Any given computing device comprises one or more processors (e.g., CPU and/or single-core or multi-core processors), as well as corresponding non-transitory computer memory (e.g., random-access memory (RAM)) for storing computer programs which are to be executed by the one or more processors.
- Other computer memory for mass storage of data may be packaged/configured with the computing device (e.g., an internal hard disk) and/or may be external and accessible by the computing device (e.g., network-attached storage, a storage array, etc.).
- a computing device includes cloud computing resources, which may be implemented as virtual machines. For instance, one or more virtual machines may be provided to the organization by a third-party cloud service vendor.
- computing devices can include one or more virtual machine(s) running on a physical host computing device (or “host machine”) operated by the organization.
- the organization may use one virtual machine as a database server and another virtual machine as a mail server, both virtual machines operating on the same host machine.
- A virtual machine (“VM”) is a software implementation of a computer that does not physically exist and is instead instantiated in an operating system of a physical computer (or host machine) to enable applications to execute within the VM's environment, i.e., a VM emulates a physical computer.
- A VM includes an operating system and associated virtual resources, such as computer memory and processor(s).
- a hypervisor operates between the VM and the hardware of the physical host machine and is generally responsible for creating and running the VMs.
- Hypervisors are also known in the art as virtual machine monitors or virtual machine managers or “VMMs”, and may be implemented in software, firmware, and/or specialized hardware installed on the host machine. Examples of hypervisors include ESX Server, by VMware, Inc. of Palo Alto, Calif.; Microsoft Virtual Server and Microsoft Windows Server Hyper-V, both by Microsoft Corporation of Redmond, Wash.; Sun xVM by Oracle America Inc. of Santa Clara, Calif.; and Xen by Citrix Systems, Santa Clara, Calif. The hypervisor provides resources to each virtual operating system such as a virtual processor, virtual memory, a virtual network device, and a virtual disk. Each virtual machine has one or more associated virtual disks.
- The hypervisor typically stores the data of virtual disks in files on the file system of the physical host machine, called virtual machine disk files (“VMDK” in VMware lingo) or virtual hard disk image files (in Microsoft lingo). For example, VMware's ESX Server provides the Virtual Machine File System (“VMFS”) for the storage of virtual machine disk files.
- a virtual machine reads data from and writes data to its virtual disk much the way that a physical machine reads data from and writes data to a physical disk. Examples of techniques for implementing information management in a cloud computing environment are described in U.S. Pat. No. 8,285,681. Examples of techniques for implementing information management in a virtualized computing environment are described in U.S. Pat. No. 8,307,177.
- Information management system 100 can also include electronic data storage devices, generally used for mass storage of data, including, e.g., primary storage devices 104 and secondary storage devices 108 .
- Storage devices can generally be of any suitable type including, without limitation, disk drives, storage arrays (e.g., storage-area network (SAN) and/or network-attached storage (NAS) technology), semiconductor memory (e.g., solid state storage devices), network attached storage (NAS) devices, tape libraries, or other magnetic, non-tape storage devices, optical media storage devices, combinations of the same, etc.
- storage devices form part of a distributed file system.
- storage devices are provided in a cloud storage environment (e.g., a private cloud or one operated by a third-party vendor), whether for primary data or secondary copies or both.
- system 100 can refer generally to all of the illustrated hardware and software components in FIG. 1C , or the term may refer to only a subset of the illustrated components.
- system 100 generally refers to a combination of specialized components used to protect, move, manage, manipulate, analyze, and/or process data and metadata generated by client computing devices 102 .
- system 100 in some cases does not include the underlying components that generate and/or store primary data 112 , such as the client computing devices 102 themselves, and the primary storage devices 104 .
- Likewise, secondary storage devices 108 (e.g., a third-party provided cloud storage environment) may not be part of system 100 in some cases.
- “information management system” or “storage management system” may sometimes refer to one or more of the following components, which will be described in further detail below: storage manager, data agent, and media agent.
- One or more client computing devices 102 may be part of system 100 , each client computing device 102 having an operating system and at least one application 110 and one or more accompanying data agents executing thereon; and associated with one or more primary storage devices 104 storing primary data 112 .
- Client computing device(s) 102 and primary storage devices 104 may generally be referred to in some cases as primary storage subsystem 117 .
- data generation sources include one or more client computing devices 102 .
- a computing device that has a data agent 142 installed and operating on it is generally referred to as a “client computing device” 102 , and may include any type of computing device, without limitation.
- a client computing device 102 may be associated with one or more users and/or user accounts.
- a “client” is a logical component of information management system 100 , which may represent a logical grouping of one or more data agents installed on a client computing device 102 .
- Storage manager 140 recognizes a client as a component of system 100 , and in some embodiments, may automatically create a client component the first time a data agent 142 is installed on a client computing device 102 . Because data generated by executable component(s) 110 is tracked by the associated data agent 142 so that it may be properly protected in system 100 , a client may be said to generate data and to store the generated data to primary storage, such as primary storage device 104 .
- client computing device does not imply that a client computing device 102 is necessarily configured in the client/server sense relative to another computing device such as a mail server, or that a client computing device 102 cannot be a server in its own right.
- a client computing device 102 can be and/or include mail servers, file servers, database servers, virtual machine servers, and/or web servers.
- Each client computing device 102 may have application(s) 110 executing thereon which generate and manipulate the data that is to be protected from loss and managed in system 100 .
- Applications 110 generally facilitate the operations of an organization, and can include, without limitation, mail server applications (e.g., Microsoft Exchange Server), file system applications, mail client applications (e.g., Microsoft Exchange Client), database applications or database management systems (e.g., SQL, Oracle, SAP, Lotus Notes Database), word processing applications (e.g., Microsoft Word), spreadsheet applications, financial applications, presentation applications, graphics and/or video applications, browser applications, mobile applications, entertainment applications, and so on.
- Each application 110 may be accompanied by an application-specific data agent 142 , though not all data agents 142 are application-specific or associated with only one application.
- A file manager application, e.g., Microsoft Windows Explorer, may be considered an application 110 and may be accompanied by its own data agent 142 .
- Client computing devices 102 can have at least one operating system (e.g., Microsoft Windows, Mac OS X, iOS, IBM z/OS, Linux, other Unix-based operating systems, etc.) installed thereon, which may support or host one or more file systems and other applications 110 .
- a virtual machine that executes on a host client computing device 102 may be considered an application 110 and may be accompanied by a specific data agent 142 (e.g., virtual server data agent).
- Client computing devices 102 and other components in system 100 can be connected to one another via one or more electronic communication pathways 114 .
- a first communication pathway 114 may communicatively couple client computing device 102 and secondary storage computing device 106 ;
- a second communication pathway 114 may communicatively couple storage manager 140 and client computing device 102 ;
- a third communication pathway 114 may communicatively couple storage manager 140 and secondary storage computing device 106 , etc. (see, e.g., FIG. 1A and FIG. 1C ).
- a communication pathway 114 can include one or more networks or other connection types including one or more of the following, without limitation: the Internet, a wide area network (WAN), a local area network (LAN), a Storage Area Network (SAN), a Fibre Channel (FC) connection, a Small Computer System Interface (SCSI) connection, a virtual private network (VPN), a token ring or TCP/IP based network, an intranet network, a point-to-point link, a cellular network, a wireless data transmission system, a two-way cable system, an interactive kiosk network, a satellite network, a broadband network, a baseband network, a neural network, a mesh network, an ad hoc network, other appropriate computer or telecommunications networks, combinations of the same or the like.
- Communication pathways 114 in some cases may also include application programming interfaces (APIs) including, e.g., cloud service provider APIs, virtual machine management APIs, and hosted service provider APIs.
- the underlying infrastructure of communication pathways 114 may be wired and/or wireless, analog and/or digital, or any combination thereof; and the facilities used may be private, public, third-party provided, or any combination thereof, without limitation.
- a “subclient” is a logical grouping of all or part of a client's primary data 112 .
- a subclient may be defined according to how the subclient data is to be protected as a unit in system 100 .
- a subclient may be associated with a certain storage policy.
- a given client may thus comprise several subclients, each subclient associated with a different storage policy.
- some files may form a first subclient that requires compression and deduplication and is associated with a first storage policy.
- Other files of the client may form a second subclient that requires a different retention schedule as well as encryption, and may be associated with a different, second storage policy.
- Even though the primary data may be generated by the same application 110 and may belong to one given client, portions of the data may be assigned to different subclients for distinct treatment by system 100 (see the sketch below). More detail on subclients is given in regard to storage policies below.
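The sketch below is a hypothetical illustration of the subclient idea: one client's primary data split into two subclients, each bound to a different storage policy (one with compression and deduplication, one with encryption and a longer retention). The names and structure are assumptions introduced for illustration, not the system's actual configuration format.

```python
# One client, two subclients, two storage policies (all names are illustrative).
subclients = {
    "finance-databases": {
        "content": ["/data/finance/*.db"],
        "storage_policy": "policy-dedupe-compress",
    },
    "hr-documents": {
        "content": ["/data/hr/**"],
        "storage_policy": "policy-encrypt-long-retention",
    },
}

storage_policies = {
    "policy-dedupe-compress": {
        "compression": True, "deduplication": True, "retention_days": 90,
    },
    "policy-encrypt-long-retention": {
        "encryption": "AES-256", "retention_days": 7 * 365,
    },
}
```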
- Primary data 112 is generally production data or “live” data generated by the operating system and/or applications 110 executing on client computing device 102 .
- Primary data 112 is generally stored on primary storage device(s) 104 and is organized via a file system operating on the client computing device 102 .
- client computing device(s) 102 and corresponding applications 110 may create, access, modify, write, delete, and otherwise use primary data 112 .
- Primary data 112 is generally in the native format of the source application 110 .
- Primary data 112 is an initial or first stored body of data generated by the source application 110 .
- Primary data 112 in some cases is created substantially directly from data generated by the corresponding source application 110 . It can be useful in performing certain tasks to organize primary data 112 into units of different granularities.
- primary data 112 can include files, directories, file system volumes, data blocks, extents, or any other hierarchies or organizations of data objects.
- a “data object” can refer to (i) any file that is currently addressable by a file system or that was previously addressable by the file system (e.g., an archive file), and/or to (ii) a subset of such a file (e.g., a data block, an extent, etc.).
- Primary data 112 may include structured data (e.g., database files), unstructured data (e.g., documents), and/or semi-structured data. See, e.g., FIG. 1B .
- Metadata generally includes information about data objects and/or characteristics associated with the data objects. For simplicity herein, it is to be understood that, unless expressly stated otherwise, any reference to primary data 112 generally also includes its associated metadata, but references to metadata generally do not include the primary data.
- Metadata can include, without limitation, one or more of the following: the data owner (e.g., the client or user that generates the data), the last modified time (e.g., the time of the most recent modification of the data object), a data object name (e.g., a file name), a data object size (e.g., a number of bytes of data), information about the content (e.g., an indication as to the existence of a particular search term), user-supplied tags, to/from information for email (e.g., an email sender, recipient, etc.), creation date, file type (e.g., format or application type), last accessed time, application type (e.g., type of application that generated the data object), location/network (e.g., a current, past or future location of the data object and network pathways to/from the data object), geographic location (e.g., GPS coordinates), frequency of change (e.g., a period in which the data object is modified), business unit (e.g., a group or department associated with the data object), and the like.
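As a hypothetical illustration of the kinds of metadata enumerated above, the sketch below models a per-object metadata record and a simple index keyed by object name; field names and values are assumptions, not the system's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class DataObjectMetadata:
    name: str                         # data object name, e.g., a file name
    owner: str                        # client or user that generated the data
    size_bytes: int                   # data object size
    file_type: str                    # format or application type
    created: datetime
    last_modified: datetime
    last_accessed: datetime
    email_to_from: Optional[str] = None            # to/from information for email
    tags: List[str] = field(default_factory=list)  # user-supplied tags
    location: str = ""                # current location / network pathway
    business_unit: str = ""


# A simple metadata index of the kind an application or system component might keep:
metadata_index = {
    "Q3-report.docx": DataObjectMetadata(
        name="Q3-report.docx", owner="alice", size_bytes=48_213,
        file_type="Microsoft Word", created=datetime(2020, 3, 2),
        last_modified=datetime(2020, 3, 15), last_accessed=datetime(2020, 3, 20),
        tags=["finance"], business_unit="Finance"),
}
```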
- some applications 110 and/or other components of system 100 maintain indices of metadata for data objects, e.g., metadata associated with individual email messages.
- Primary storage devices 104 storing primary data 112 may be relatively fast and/or expensive technology (e.g., flash storage, a disk drive, a hard-disk storage array, solid state memory, etc.), typically to support high-performance live production environments. Primary data 112 may be highly changeable and/or may be intended for relatively short term retention (e.g., hours, days, or weeks). According to some embodiments, client computing device 102 can access primary data 112 stored in primary storage device 104 by making conventional file system calls via the operating system. Each client computing device 102 is generally associated with and/or in communication with one or more primary storage devices 104 storing corresponding primary data 112 .
- a client computing device 102 is said to be associated with or in communication with a particular primary storage device 104 if it is capable of one or more of: routing and/or storing data (e.g., primary data 112 ) to the primary storage device 104 , coordinating the routing and/or storing of data to the primary storage device 104 , retrieving data from the primary storage device 104 , coordinating the retrieval of data from the primary storage device 104 , and modifying and/or deleting data in the primary storage device 104 .
- a client computing device 102 may be said to access data stored in an associated storage device 104 .
- Primary storage device 104 may be dedicated or shared. In some cases, each primary storage device 104 is dedicated to an associated client computing device 102 , e.g., a local disk drive. In other cases, one or more primary storage devices 104 can be shared by multiple client computing devices 102 , e.g., via a local network, in a cloud storage implementation, etc. As one example, primary storage device 104 can be a storage array shared by a group of client computing devices 102 , such as EMC Clariion, EMC Symmetrix, EMC Celerra, Dell EqualLogic, IBM XIV, NetApp FAS, HP EVA, and HP 3PAR.
- System 100 may also include hosted services (not shown), which may be hosted in some cases by an entity other than the organization that employs the other components of system 100 .
- the hosted services may be provided by online service providers.
- Such service providers can provide social networking services, hosted email services, or hosted productivity applications or other hosted applications such as software-as-a-service (SaaS), platform-as-a-service (PaaS), application service providers (ASPs), cloud services, or other mechanisms for delivering functionality via a network.
- each hosted service may generate additional data and metadata, which may be managed by system 100 , e.g., as primary data 112 .
- the hosted services may be accessed using one of the applications 110 .
- a hosted mail service may be accessed via a browser running on a client computing device 102 .
- Primary data 112 stored on primary storage devices 104 may be compromised in some cases, such as when an employee deliberately or accidentally deletes or overwrites primary data 112 . Or primary storage devices 104 can be damaged, lost, or otherwise corrupted. For recovery and/or regulatory compliance purposes, it is therefore useful to generate and maintain copies of primary data 112 . Accordingly, system 100 includes one or more secondary storage computing devices 106 and one or more secondary storage devices 108 configured to create and store one or more secondary copies 116 of primary data 112 including its associated metadata. The secondary storage computing devices 106 and the secondary storage devices 108 may be referred to as secondary storage subsystem 118 .
- Secondary copies 116 can help in search and analysis efforts and meet other information management goals as well, such as: restoring data and/or metadata if an original version is lost (e.g., by deletion, corruption, or disaster); allowing point-in-time recovery; complying with regulatory data retention and electronic discovery (e-discovery) requirements; reducing utilized storage capacity in the production system and/or in secondary storage; facilitating organization and search of data; improving user access to data files across multiple computing devices and/or hosted services; and implementing data retention and pruning policies.
- a secondary copy 116 can comprise a separate stored copy of data that is derived from one or more earlier-created stored copies (e.g., derived from primary data 112 or from another secondary copy 116 ).
- Secondary copies 116 can include point-in-time data, and may be intended for relatively long-term retention before some or all of the data is moved to other storage or discarded.
- a secondary copy 116 may be in a different storage device than other previously stored copies; and/or may be remote from other previously stored copies.
- Secondary copies 116 can be stored in the same storage device as primary data 112 .
- a disk array capable of performing hardware snapshots stores primary data 112 and creates and stores hardware snapshots of the primary data 112 as secondary copies 116 .
- Secondary copies 116 may be stored in relatively slow and/or lower cost storage (e.g., magnetic tape).
- a secondary copy 116 may be stored in a backup or archive format, or in some other format different from the native source application format or other format of primary data 112 .
- Secondary storage computing devices 106 may index secondary copies 116 (e.g., using a media agent 144 ), enabling users to browse and restore at a later time and further enabling the lifecycle management of the indexed data.
- In some cases, a pointer or other location indicia (e.g., a stub) may be placed in primary data 112 , or be otherwise associated with primary data 112 , to indicate the current location of a particular secondary copy 116 .
- system 100 may create and manage multiple secondary copies 116 of a particular data object or metadata, each copy representing the state of the data object in primary data 112 at a particular point in time. Moreover, since an instance of a data object in primary data 112 may eventually be deleted from primary storage device 104 and the file system, system 100 may continue to manage point-in-time representations of that data object, even though the instance in primary data 112 no longer exists.
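A minimal sketch, assuming a simple in-memory catalog, of how several point-in-time secondary copies of one data object might be tracked and selected for a point-in-time restore; it is illustrative only and not the patented mechanism.

```python
from datetime import datetime

# object id -> list of (point in time, secondary copy location); the primary
# instance of "file-0042" may already be deleted, yet its copies remain managed.
point_in_time_copies = {
    "file-0042": [
        (datetime(2020, 3, 1, 2, 0), "secondary-store-A/copy-101"),
        (datetime(2020, 3, 2, 2, 0), "secondary-store-A/copy-117"),
        (datetime(2020, 3, 3, 2, 0), "secondary-store-B/copy-130"),
    ],
}


def copy_for_point_in_time(object_id: str, when: datetime) -> str:
    """Return the most recent secondary copy taken at or before the requested time."""
    candidates = [(t, loc) for t, loc in point_in_time_copies[object_id] if t <= when]
    return max(candidates)[1]
```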
- the operating system and other applications 110 of client computing device(s) 102 may execute within or under the management of virtualization software (e.g., a VMM), and the primary storage device(s) 104 may comprise a virtual disk created on a physical storage device.
- System 100 may create secondary copies 116 of the files or other data objects in a virtual disk file and/or secondary copies 116 of the entire virtual disk file itself (e.g., of an entire .vmdk file).
- Secondary copies 116 are distinguishable from corresponding primary data 112 .
- secondary copies 116 can be stored in a different format from primary data 112 (e.g., backup, archive, or other non-native format). For this or other reasons, secondary copies 116 may not be directly usable by applications 110 or client computing device 102 (e.g., via standard system calls or otherwise) without modification, processing, or other intervention by system 100 which may be referred to as “restore” operations.
- Secondary copies 116 may have been processed by data agent 142 and/or media agent 144 in the course of being created (e.g., compression, deduplication, encryption, integrity markers, indexing, formatting, application-aware metadata, etc.), and thus secondary copy 116 may represent source primary data 112 without necessarily being exactly identical to the source.
- secondary copies 116 may be stored on a secondary storage device 108 that is inaccessible to application 110 running on client computing device 102 and/or hosted service.
- Some secondary copies 116 may be “offline copies,” in that they are not readily available (e.g., not mounted to tape or disk). Offline copies can include copies of data that system 100 can access without human intervention (e.g., tapes within an automated tape library, but not yet mounted in a drive), and copies that the system 100 can access only with some human intervention (e.g., tapes located at an offsite storage site).
- Creating secondary copies can be challenging when hundreds or thousands of client computing devices 102 continually generate large volumes of primary data 112 to be protected. Also, there can be significant overhead involved in the creation of secondary copies 116 . Moreover, specialized programmed intelligence and/or hardware capability is generally needed for accessing and interacting with secondary storage devices 108 . Client computing devices 102 may interact directly with a secondary storage device 108 to create secondary copies 116 , but in view of the factors described above, this approach can negatively impact the ability of client computing device 102 to serve/service application 110 and produce primary data 112 . Further, any given client computing device 102 may not be optimized for interaction with certain secondary storage devices 108 .
- system 100 may include one or more software and/or hardware components which generally act as intermediaries between client computing devices 102 (that generate primary data 112 ) and secondary storage devices 108 (that store secondary copies 116 ).
- these intermediate components provide other benefits. For instance, as discussed further below with respect to FIG. 1D , distributing some of the work involved in creating secondary copies 116 can enhance scalability and improve system performance.
- the intermediate components can include one or more secondary storage computing devices 106 as shown in FIG. 1A and/or one or more media agents 144 .
- Media agents are discussed further below (e.g., with respect to FIGS. 1C-1E ).
- These special-purpose components of system 100 comprise specialized programmed intelligence and/or hardware capability for writing to, reading from, instructing, communicating with, or otherwise interacting with secondary storage devices 108 .
- Secondary storage computing device(s) 106 can comprise any of the computing devices described above, without limitation. In some cases, secondary storage computing device(s) 106 also include specialized hardware componentry and/or software intelligence (e.g., specialized interfaces) for interacting with certain secondary storage device(s) 108 with which they may be specially associated.
- client computing device 102 may communicate the primary data 112 to be copied (or a processed version thereof generated by a data agent 142 ) to the designated secondary storage computing device 106 , via a communication pathway 114 .
- Secondary storage computing device 106 in turn may further process and convey the data or a processed version thereof to secondary storage device 108 .
- One or more secondary copies 116 may be created from existing secondary copies 116 , such as in the case of an auxiliary copy operation, described further below.
- FIG. 1B is a detailed view of some specific examples of primary data stored on primary storage device(s) 104 and secondary copy data stored on secondary storage device(s) 108 , with other components of the system removed for the purposes of illustration.
- Stored on primary storage device(s) 104 are primary data 112 objects including word processing documents 119 A-B, spreadsheets 120 , presentation documents 122 , video files 124 , image files 126 , email mailboxes 128 (and corresponding email messages 129 A-C), HTML/XML or other types of markup language files 130 , databases 132 and corresponding tables or other data structures 133 A- 133 C.
- Some or all primary data 112 objects are associated with corresponding metadata (e.g., “Meta1-11”), which may include file system metadata and/or application-specific metadata.
- Stored on the secondary storage device(s) 108 are secondary copy 116 data objects 134 A-C which may include copies of or may otherwise represent corresponding primary data 112 .
- Secondary copy data objects 134 A-C can individually represent more than one primary data object.
- secondary copy data object 134 A represents three separate primary data objects 133 C, 122 , and 129 C (represented as 133 C′, 122 ′, and 129 C′, respectively, and accompanied by corresponding metadata Meta11, Meta3, and Meta8, respectively).
- secondary storage computing devices 106 or other components in secondary storage subsystem 118 may process the data received from primary storage subsystem 117 and store a secondary copy including a transformed and/or supplemented representation of a primary data object and/or metadata that is different from the original format, e.g., in a compressed, encrypted, deduplicated, or other modified format.
- Secondary storage computing devices 106 can generate new metadata or other information based on said processing, and store the newly generated information along with the secondary copies.
- Secondary copy data object 134 B represents primary data objects 120 , 133 B, and 119 A as 120 ′, 133 B′, and 119 A′, respectively, accompanied by corresponding metadata Meta2, Meta10, and Meta1, respectively.
- Secondary copy data object 134 C represents primary data objects 133 A, 119 B, and 129 A as 133 A′, 119 B′, and 129 A′, respectively, accompanied by corresponding metadata Meta9, Meta5, and Meta6, respectively.
- System 100 can incorporate a variety of different hardware and software components, which can in turn be organized with respect to one another in many different configurations, depending on the embodiment. There are critical design choices involved in specifying the functional responsibilities of the components and the role of each component in system 100 . Such design choices can impact how system 100 performs and adapts to data growth and other changing circumstances.
- FIG. 1C shows a system 100 designed according to these considerations and includes: storage manager 140 , one or more data agents 142 executing on client computing device(s) 102 and configured to process primary data 112 , and one or more media agents 144 executing on one or more secondary storage computing devices 106 for performing tasks involving secondary storage devices 108 .
- Storage manager 140 is a centralized storage and/or information manager that is configured to perform certain control functions and also to store certain critical information about system 100 —hence storage manager 140 is said to manage system 100 .
- the number of components in system 100 and the amount of data under management can be large. Managing the components and data is therefore a significant task, which can grow unpredictably as the number of components and data scale to meet the needs of the organization.
- responsibility for controlling system 100 , or at least a significant portion of that responsibility, is allocated to storage manager 140 .
- Storage manager 140 can be adapted independently according to changing circumstances, without having to replace or re-design the remainder of the system.
- a computing device for hosting and/or operating as storage manager 140 can be selected to best suit the functions and networking needs of storage manager 140 .
- Storage manager 140 may be a software module or other application hosted by a suitable computing device. In some embodiments, storage manager 140 is itself a computing device that performs the functions described herein. Storage manager 140 comprises or operates in conjunction with one or more associated data structures such as a dedicated database (e.g., management database 146 ), depending on the configuration. The storage manager 140 generally initiates, performs, coordinates, and/or controls storage and other information management operations performed by system 100 , e.g., to protect and control primary data 112 and secondary copies 116 . In general, storage manager 140 is said to manage system 100 , which includes communicating with, instructing, and controlling in some circumstances components such as data agents 142 and media agents 144 , etc.
- storage manager 140 may communicate with, instruct, and/or control some or all elements of system 100 , such as data agents 142 and media agents 144 . In this manner, storage manager 140 manages the operation of various hardware and software components in system 100 . In certain embodiments, control information originates from storage manager 140 and status as well as index reporting is transmitted to storage manager 140 by the managed components, whereas payload data and metadata are generally communicated between data agents 142 and media agents 144 (or otherwise between client computing device(s) 102 and secondary storage computing device(s) 106 ), e.g., at the direction of and under the management of storage manager 140 .
- Control information can generally include parameters and instructions for carrying out information management operations, such as, without limitation, instructions to perform a task associated with an operation, timing information specifying when to initiate a task, data path information specifying what components to communicate with or access in carrying out an operation, and the like.
- information management operations are controlled or initiated by other components of system 100 (e.g., by media agents 144 or data agents 142 ), instead of or in combination with storage manager 140 .
- storage manager 140 provides one or more of the functions described in the paragraphs that follow.
- Storage manager 140 may maintain an associated database 146 (or “storage manager database 146 ” or “management database 146 ”) of management-related data and information management policies 148 .
- Database 146 is stored in computer memory accessible by storage manager 140 .
- Database 146 may include a management index 150 (or “index 150 ”) or other data structure(s) that may store: logical associations between components of the system; user preferences and/or profiles (e.g., preferences regarding encryption, compression, or deduplication of primary data or secondary copies; preferences regarding the scheduling, type, or other aspects of secondary copy or other operations; mappings of particular information management users or user accounts to certain computing devices or other components, etc.); management tasks; media containerization; other useful data; and/or any combination thereof.
- Storage manager 140 may use index 150 to track logical associations between media agents 144 and secondary storage devices 108 and/or movement of data to/from secondary storage devices 108 .
- index 150 may store data associating a client computing device 102 with a particular media agent 144 and/or secondary storage device 108 , as specified in an information management policy 148 .
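- As a loose illustration only (not the patented implementation), the following Python sketch shows the kind of logical associations index 150 might track, e.g., which media agent 144 and secondary storage device 108 a given client computing device 102 is mapped to under a policy 148 . All structure and field names here are hypothetical.

```python
# Hypothetical, simplified sketch of associations like those index 150 might track.
from dataclasses import dataclass, field

@dataclass
class IndexEntry:
    client_id: str        # client computing device 102
    data_agent: str       # data agent 142 protecting that client's data
    media_agent: str      # media agent 144 assigned by an information management policy 148
    storage_device: str   # secondary storage device 108 holding the secondary copies 116

@dataclass
class ManagementIndex:
    entries: list = field(default_factory=list)

    def associate(self, client_id, data_agent, media_agent, storage_device):
        self.entries.append(IndexEntry(client_id, data_agent, media_agent, storage_device))

    def media_agents_for(self, client_id):
        """Look up which media agent(s) 144 handle a given client, per policy."""
        return [e.media_agent for e in self.entries if e.client_id == client_id]

index_150 = ManagementIndex()
index_150.associate("client-102a", "fs-agent-142", "media-agent-144a", "disk-library-108a")
print(index_150.media_agents_for("client-102a"))
```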
- Storage manager 140 can process an information management policy 148 and/or index 150 and, based on the results, identify an information management operation to perform, identify the appropriate components in system 100 to be involved in the operation (e.g., client computing devices 102 and corresponding data agents 142 , secondary storage computing devices 106 and corresponding media agents 144 , etc.), establish connections to those components and/or between those components, and/or instruct and control those components to carry out the operation. In this manner, system 100 can translate stored information into coordinated activity among the various computing devices in system 100 .
- Management database 146 may maintain information management policies 148 and associated data, although information management policies 148 can be stored in computer memory at any appropriate location outside management database 146 .
- an information management policy 148 such as a storage policy may be stored as metadata in a media agent database 152 or in a secondary storage device 108 (e.g., as an archive copy) for use in restore or other information management operations, depending on the embodiment.
- Information management policies 148 are described further below.
- management database 146 comprises a relational database (e.g., an SQL database) for tracking metadata, such as metadata associated with secondary copy operations (e.g., what client computing devices 102 and corresponding subclient data were protected and where the secondary copies are stored and which media agent 144 performed the storage operation(s)).
- management database 146 may comprise data needed to kick off secondary copy operations (e.g., storage policies, schedule policies, etc.), status and reporting information about completed jobs (e.g., status and error reports on yesterday's backup jobs), and additional information sufficient to enable restore and disaster recovery operations (e.g., media agent associations, location indexing, content indexing, etc.).
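- Purely for illustration, and assuming a relational layout as described above, the toy schema below suggests the kind of completed-job metadata such a database might record (table and column names are hypothetical, not taken from this disclosure):

```python
# Illustrative only: a toy relational schema roughly analogous to what
# management database 146 might track about secondary copy jobs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE secondary_copy_jobs (
        job_id INTEGER PRIMARY KEY,
        client_id TEXT,        -- client computing device 102 that was protected
        subclient TEXT,        -- subclient data set covered by the job
        media_agent TEXT,      -- media agent 144 that performed the storage operation
        storage_device TEXT,   -- secondary storage device 108 holding the copy
        status TEXT,           -- e.g., 'completed', 'failed'
        completed_at TEXT
    )
""")
conn.execute(
    "INSERT INTO secondary_copy_jobs VALUES "
    "(1, 'client-102a', 'file-system', 'media-agent-144a', "
    "'disk-library-108a', 'completed', '2020-03-25T02:00:00')"
)
# Example status/reporting query: list any failed jobs.
for row in conn.execute("SELECT job_id, client_id FROM secondary_copy_jobs WHERE status = 'failed'"):
    print(row)
```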
- Storage manager 140 may include a jobs agent 156 , a user interface 158 , and a management agent 154 , all of which may be implemented as interconnected software modules or application programs. These are described further below.
- Jobs agent 156 in some embodiments initiates, controls, and/or monitors the status of some or all information management operations previously performed, currently being performed, or scheduled to be performed by system 100 .
- a job is a logical grouping of information management operations such as daily storage operations scheduled for a certain set of subclients (e.g., generating incremental block-level backup copies 116 at a certain time every day for database files in a certain geographical location).
- jobs agent 156 may access information management policies 148 (e.g., in management database 146 ) to determine when, where, and how to initiate/control jobs in system 100 .
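- A minimal sketch of that idea, assuming a hypothetical in-memory representation of scheduling data drawn from policies 148 (the field names and interval logic are illustrative, not the patented scheduler):

```python
# Hypothetical sketch: a jobs agent consulting policy-derived schedules to
# decide which information management operations are due to be initiated.
from datetime import datetime, timedelta

policies_148 = [
    {"subclient": "exchange-mailboxes", "operation": "incremental_backup",
     "interval": timedelta(days=1), "last_run": datetime(2020, 3, 24, 2, 0)},
]

def due_jobs(now, policies):
    """Return the operations whose scheduled interval has elapsed."""
    return [p for p in policies if now - p["last_run"] >= p["interval"]]

for job in due_jobs(datetime(2020, 3, 25, 2, 30), policies_148):
    print("initiate", job["operation"], "for", job["subclient"])
```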
- User interface 158 may include information processing and display software, such as a graphical user interface (GUI), an application program interface (API), and/or other interactive interface(s) through which users and system processes can retrieve information about the status of information management operations or issue instructions to storage manager 140 and other components.
- users may issue instructions to the components in system 100 regarding performance of secondary copy and recovery operations. For example, a user may modify a schedule concerning the number of pending secondary copy operations.
- a user may employ the GUI to view the status of pending secondary copy jobs or to monitor the status of certain components in system 100 (e.g., the amount of capacity left in a storage device).
- Storage manager 140 may track information that permits it to select, designate, or otherwise identify content indices, deduplication databases, or similar databases or resources or data sets within its information management cell (or another cell) to be searched in response to certain queries. Such queries may be entered by the user by interacting with user interface 158 .
- Various embodiments of information management system 100 may be configured and/or designed to generate user interface data usable for rendering the various interactive user interfaces described.
- the user interface data may be used by system 100 and/or by another system, device, and/or software program (for example, a browser program), to render the interactive user interfaces.
- the interactive user interfaces may be displayed on, for example, electronic displays (including, for example, touch-enabled displays), consoles, etc., whether direct-connected to storage manager 140 or communicatively coupled remotely, e.g., via an internet connection.
- the present disclosure describes various embodiments of interactive and dynamic user interfaces, some of which may be generated by user interface 158 , and which are the result of significant technological development.
- User interfaces described herein may provide improved human-computer interactions, allowing for significant cognitive and ergonomic efficiencies and advantages over previous systems, including reduced mental workloads, improved decision-making, and the like.
- User interface 158 may operate in a single integrated view or console (not shown).
- the console may support a reporting capability for generating a variety of reports, which may be tailored to a particular aspect of information management.
- User interfaces are not exclusive to storage manager 140 and in some embodiments a user may access information locally from a computing device component of system 100 .
- some information pertaining to installed data agents 142 and associated data streams may be available from client computing device 102 .
- some information pertaining to media agents 144 and associated data streams may be available from secondary storage computing device 106 .
- Management agent 154 can provide storage manager 140 with the ability to communicate with other components within system 100 and/or with other information management cells via network protocols and application programming interfaces (APIs) including, e.g., HTTP, HTTPS, FTP, REST, virtualization software APIs, cloud service provider APIs, and hosted service provider APIs, without limitation.
- Management agent 154 also allows multiple information management cells to communicate with one another.
- system 100 in some cases may be one information management cell in a network of multiple cells adjacent to one another or otherwise logically related, e.g., in a WAN or LAN. With this arrangement, the cells may communicate with one another through respective management agents 154 . Inter-cell communications and hierarchy are described in greater detail in, e.g., U.S. Pat. No. 7,343,453.
- An “information management cell” may generally include a logical and/or physical grouping of a combination of hardware and software components associated with performing information management operations on electronic data, typically one storage manager 140 and at least one data agent 142 (executing on a client computing device 102 ) and at least one media agent 144 (executing on a secondary storage computing device 106 ).
- the components shown in FIG. 1C may together form an information management cell.
- a system 100 may be referred to as an information management cell or a storage operation cell.
- a given cell may be identified by the identity of its storage manager 140 , which is generally responsible for managing the cell.
- Multiple cells may be organized hierarchically, so that cells may inherit properties from hierarchically superior cells or be controlled by other cells in the hierarchy (automatically or otherwise).
- cells may inherit or otherwise be associated with information management policies, preferences, information management operational parameters, or other properties or characteristics according to their relative position in a hierarchy of cells.
- Cells may also be organized hierarchically according to function, geography, architectural considerations, or other factors useful or desirable in performing information management operations. For example, a first cell may represent a geographic segment of an enterprise, such as a Chicago office, and a second cell may represent a different geographic segment, such as a New York City office.
- Other cells may represent departments within a particular office, e.g., human resources, finance, engineering, etc.
- a first cell may perform one or more first types of information management operations (e.g., one or more first types of secondary copies at a certain frequency), and a second cell may perform one or more second types of information management operations (e.g., one or more second types of secondary copies at a different frequency and under different retention rules).
- the hierarchical information is maintained by one or more storage managers 140 that manage the respective cells (e.g., in corresponding management database(s) 146 ).
- a variety of different applications 110 can operate on a given client computing device 102 , including operating systems, file systems, database applications, e-mail applications, and virtual machines, just to name a few. And, as part of the process of creating and restoring secondary copies 116 , the client computing device 102 may be tasked with processing and preparing the primary data 112 generated by these various applications 110 . Moreover, the nature of the processing/preparation can differ across application types, e.g., due to inherent structural, state, and formatting differences among applications 110 and/or the operating system of client computing device 102 . Each data agent 142 is therefore advantageously configured in some embodiments to assist in the performance of information management operations based on the type of data that is being protected at a client-specific and/or application-specific level.
- Data agent 142 is a component of information management system 100 and is generally directed by storage manager 140 to participate in creating or restoring secondary copies 116 .
- Data agent 142 may be a software program (e.g., in the form of a set of executable binary files) that executes on the same client computing device 102 as the associated application 110 that data agent 142 is configured to protect.
- Data agent 142 is generally responsible for managing, initiating, or otherwise assisting in the performance of information management operations in reference to its associated application(s) 110 and corresponding primary data 112 which is generated/accessed by the particular application(s) 110 .
- data agent 142 may take part in copying, archiving, migrating, and/or replicating of certain primary data 112 stored in the primary storage device(s) 104 .
- Data agent 142 may receive control information from storage manager 140 , such as commands to transfer copies of data objects and/or metadata to one or more media agents 144 .
- Data agent 142 also may compress, deduplicate, and encrypt certain primary data 112 , as well as capture application-related metadata before transmitting the processed data to media agent 144 .
- Data agent 142 also may receive instructions from storage manager 140 to restore (or assist in restoring) a secondary copy 116 from secondary storage device 108 to primary storage 104 , such that the restored data may be properly accessed by application 110 in a suitable format as though it were primary data 112 .
- Each data agent 142 may be specialized for a particular application 110 .
- different individual data agents 142 may be designed to handle Microsoft Exchange data, Lotus Notes data, Microsoft Windows file system data, Microsoft Active Directory Objects data, SQL Server data, SharePoint data, Oracle database data, SAP database data, virtual machines and/or associated data, and other types of data.
- a file system data agent may handle data files and/or other file system information. If a client computing device 102 has two or more types of data 112 , a specialized data agent 142 may be used for each data type.
- the client computing device 102 may use: (1) a Microsoft Exchange Mailbox data agent 142 to back up the Exchange mailboxes; (2) a Microsoft Exchange Database data agent 142 to back up the Exchange databases; (3) a Microsoft Exchange Public Folder data agent 142 to back up the Exchange Public Folders; and (4) a Microsoft Windows File System data agent 142 to back up the file system of client computing device 102 .
- these specialized data agents 142 are treated as four separate data agents 142 even though they operate on the same client computing device 102 .
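- To make the dispatch idea concrete, here is a hypothetical (not patented) mapping from data type to a specialized data agent 142 , with a generic fallback as described below; the dictionary keys and class names are illustrative:

```python
# Illustrative mapping from data type to a specialized data agent 142.
DATA_AGENTS_142 = {
    "exchange_mailbox": "MicrosoftExchangeMailboxDataAgent",
    "exchange_database": "MicrosoftExchangeDatabaseDataAgent",
    "exchange_public_folder": "MicrosoftExchangePublicFolderDataAgent",
    "windows_file_system": "MicrosoftWindowsFileSystemDataAgent",
}

def agent_for(data_type: str) -> str:
    """Pick the specialized data agent for a given type of primary data 112."""
    # Fall back to a generic agent able to handle multiple data types.
    return DATA_AGENTS_142.get(data_type, "GenericDataAgent")

print(agent_for("exchange_mailbox"))   # -> MicrosoftExchangeMailboxDataAgent
print(agent_for("oracle_database"))    # -> GenericDataAgent
```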
- Other examples may include archive management data agents such as a migration archiver or a compliance archiver, Quick Recovery® agents, and continuous data replication agents.
- Application-specific data agents 142 can provide improved performance as compared to generic agents. For instance, because application-specific data agents 142 may only handle data for a single software application, the design, operation, and performance of the data agent 142 can be streamlined. The data agent 142 may therefore execute faster and consume less persistent storage and/or operating memory than data agents designed to generically accommodate multiple different software applications 110 .
- Each data agent 142 may be configured to access data and/or metadata stored in the primary storage device(s) 104 associated with data agent 142 and its host client computing device 102 , and process the data appropriately. For example, during a secondary copy operation, data agent 142 may arrange or assemble the data and metadata into one or more files having a certain format (e.g., a particular backup or archive format) before transferring the file(s) to a media agent 144 or other component.
- the file(s) may include a list of files or other metadata.
- a data agent 142 may be distributed between client computing device 102 and storage manager 140 (and any other intermediate components) or may be deployed from a remote location or its functions approximated by a remote process that performs some or all of the functions of data agent 142 .
- a data agent 142 may perform some functions provided by media agent 144 .
- Other embodiments may employ one or more generic data agents 142 that can handle and process data from two or more different applications 110 , or that can handle and process multiple data types, instead of or in addition to using specialized data agents 142 .
- one generic data agent 142 may be used to back up, migrate and restore Microsoft Exchange Mailbox data and Microsoft Exchange Database data, while another generic data agent may handle Microsoft Exchange Public Folder data and Microsoft Windows File System data.
- off-loading certain responsibilities from client computing devices 102 to intermediate components such as secondary storage computing device(s) 106 and corresponding media agent(s) 144 can provide a number of benefits including improved performance of client computing device 102 , faster and more reliable information management operations, and enhanced scalability.
- media agent 144 can act as a local cache of recently-copied data and/or metadata stored to secondary storage device(s) 108 , thus improving restore capabilities and performance for the cached data.
- Media agent 144 is a component of system 100 and is generally directed by storage manager 140 in creating and restoring secondary copies 116 . Whereas storage manager 140 generally manages system 100 as a whole, media agent 144 provides a portal to certain secondary storage devices 108 , such as by having specialized features for communicating with and accessing certain associated secondary storage device 108 . Media agent 144 may be a software program (e.g., in the form of a set of executable binary files) that executes on a secondary storage computing device 106 . Media agent 144 generally manages, coordinates, and facilitates the transmission of data between a data agent 142 (executing on client computing device 102 ) and secondary storage device(s) 108 associated with media agent 144 .
- other components of system 100 (e.g., data agents 142 ) may interact with media agent 144 to gain access to data stored on associated secondary storage device(s) 108 (e.g., to browse, read, write, modify, delete, or restore data).
- media agents 144 can generate and store information relating to characteristics of the stored data and/or metadata, or can generate and store other types of information that generally provides insight into the contents of the secondary storage devices 108 —generally referred to as indexing of the stored secondary copies 116 .
- Each media agent 144 may operate on a dedicated secondary storage computing device 106 , while in other embodiments a plurality of media agents 144 may operate on the same secondary storage computing device 106 .
- a media agent 144 may be associated with a particular secondary storage device 108 if that media agent 144 is capable of one or more of: routing and/or storing data to the particular secondary storage device 108 ; coordinating the routing and/or storing of data to the particular secondary storage device 108 ; retrieving data from the particular secondary storage device 108 ; coordinating the retrieval of data from the particular secondary storage device 108 ; and modifying and/or deleting data retrieved from the particular secondary storage device 108 .
- Media agent 144 in certain embodiments is physically separate from the associated secondary storage device 108 .
- a media agent 144 may operate on a secondary storage computing device 106 in a distinct housing, package, and/or location from the associated secondary storage device 108 .
- a media agent 144 operates on a first server computer and is in communication with secondary storage device(s) 108 operating in a separate rack-mounted RAID-based system.
- a media agent 144 associated with a particular secondary storage device 108 may instruct secondary storage device 108 to perform an information management task. For instance, a media agent 144 may instruct a tape library to use a robotic arm or other retrieval means to load or eject a certain storage media, and to subsequently archive, migrate, or retrieve data to or from that media, e.g., for the purpose of restoring data to a client computing device 102 .
- a secondary storage device 108 may include an array of hard disk drives or solid state drives organized in a RAID configuration, and media agent 144 may forward a logical unit number (LUN) and other appropriate information to the array, which uses the received information to execute the desired secondary copy operation.
- Media agent 144 may communicate with a secondary storage device 108 via a suitable communications link, such as a SCSI or Fibre Channel link.
- Each media agent 144 may maintain an associated media agent database 152 .
- Media agent database 152 may be stored to a disk or other storage device (not shown) that is local to the secondary storage computing device 106 on which media agent 144 executes. In other cases, media agent database 152 is stored separately from the host secondary storage computing device 106 .
- Media agent database 152 can include, among other things, a media agent index 153 (see, e.g., FIG. 1C ). In some cases, media agent index 153 does not form a part of and is instead separate from media agent database 152 .
- Media agent index 153 may be a data structure associated with the particular media agent 144 that includes information about the stored data associated with the particular media agent and which may be generated in the course of performing a secondary copy operation or a restore. Index 153 provides a fast and efficient mechanism for locating/browsing secondary copies 116 or other data stored in secondary storage devices 108 without having to access secondary storage device 108 to retrieve the information from there.
- index 153 may include metadata such as a list of the data objects (e.g., files/subdirectories, database objects, mailbox objects, etc.), a logical path to the secondary copy 116 on the corresponding secondary storage device 108 , location information (e.g., offsets) indicating where the data objects are stored in the secondary storage device 108 , when the data objects were created or modified, etc.
- index 153 includes metadata associated with the secondary copies 116 that is readily available for use from media agent 144 .
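- A minimal sketch of such an index entry, assuming hypothetical field names that mirror the kinds of metadata described above (paths, offsets, timestamps); this is illustrative, not the patented index format:

```python
# Hypothetical structure for an index 153 entry and a lookup against it.
from dataclasses import dataclass

@dataclass
class Index153Entry:
    data_object: str          # e.g., file path, mailbox object, database object
    secondary_copy_path: str  # logical path to the secondary copy 116 on device 108
    offset: int               # where the object is stored within the copy
    length: int
    modified_at: str          # when the source data object was created/modified

index_153 = [
    Index153Entry("/users/alice/report.docx", "/library-108a/chunk_0007",
                  4096, 20480, "2020-03-24T17:12:00"),
]

# Locating an object via the index avoids touching secondary storage device 108.
hits = [e for e in index_153 if e.data_object.endswith("report.docx")]
print(hits[0].secondary_copy_path, hits[0].offset)
```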
- some or all of the information in index 153 may instead or additionally be stored along with secondary copies 116 in secondary storage device 108 .
- a secondary storage device 108 can include sufficient information to enable a “bare metal restore,” where the operating system and/or software applications of a failed client computing device 102 or another target may be automatically restored without manually reinstalling individual software packages (including operating systems).
- Because index 153 may operate as a cache, it can also be referred to as an “index cache.”
- information stored in index cache 153 typically comprises data that reflects certain particulars about relatively recent secondary copy operations. After some triggering event, such as after some time elapses or index cache 153 reaches a particular size, certain portions of index cache 153 may be copied or migrated to secondary storage device 108 , e.g., on a least-recently-used basis. This information may be retrieved and uploaded back into index cache 153 or otherwise restored to media agent 144 to facilitate retrieval of data from the secondary storage device(s) 108 .
- the cached information may include format or containerization information related to archives or other files stored on storage device(s) 108 .
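- As a rough sketch of the eviction idea only (the triggering condition, cache size, and migration call are hypothetical stand-ins), least-recently-used index records might be migrated once the cache grows past a threshold:

```python
# Hypothetical sketch: migrate least-recently-used index cache entries to
# secondary storage once the cache exceeds a size threshold.
from collections import OrderedDict

def migrate_to_secondary_storage(key, record):
    # Stand-in for copying aged index data to secondary storage device 108.
    print("migrating", key)

class IndexCache153:
    def __init__(self, max_entries=2):
        self.max_entries = max_entries
        self.entries = OrderedDict()       # key -> index record, oldest first

    def touch(self, key, record):
        self.entries[key] = record
        self.entries.move_to_end(key)      # mark as most recently used
        while len(self.entries) > self.max_entries:
            old_key, old_record = self.entries.popitem(last=False)
            migrate_to_secondary_storage(old_key, old_record)

cache = IndexCache153()
for k in ("job-1", "job-2", "job-3"):
    cache.touch(k, {"copy": k})            # migrating job-1 once the limit is hit
```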
- media agent 144 generally acts as a coordinator or facilitator of secondary copy operations between client computing devices 102 and secondary storage devices 108 , but does not actually write the data to secondary storage device 108 .
- storage manager 140 (or media agent 144 ) may instruct a client computing device 102 and secondary storage device 108 to communicate with one another directly.
- client computing device 102 transmits data directly or via one or more intermediary components to secondary storage device 108 according to the received instructions, and vice versa.
- Media agent 144 may still receive, process, and/or maintain metadata related to the secondary copy operations, i.e., may continue to build and maintain index 153 .
- payload data can flow through media agent 144 for the purposes of populating index 153 , but not for writing to secondary storage device 108 .
- Media agent 144 and/or other components such as storage manager 140 may in some cases incorporate additional functionality, such as data classification, content indexing, deduplication, encryption, compression, and the like. Further details regarding these and other functions are described below.
- certain functions of system 100 can be distributed amongst various physical and/or logical components.
- one or more of storage manager 140 , data agents 142 , and media agents 144 may operate on computing devices that are physically separate from one another.
- This architecture can provide a number of benefits. For instance, hardware and software design choices for each distributed component can be targeted to suit its particular function.
- the secondary computing devices 106 on which media agents 144 operate can be tailored for interaction with associated secondary storage devices 108 and provide fast index cache operation, among other specific tasks.
- client computing device(s) 102 can be selected to effectively service applications 110 in order to efficiently produce and store primary data 112 .
- one or more of the individual components of information management system 100 can be distributed to multiple separate computing devices.
- database 146 may be migrated to or may otherwise reside on a specialized database server (e.g., an SQL server) separate from a server that implements the other functions of storage manager 140 .
- This distributed configuration can provide added protection because database 146 can be protected with standard database utilities (e.g., SQL log shipping or database replication) independent from other functions of storage manager 140 .
- Database 146 can be efficiently replicated to a remote site for use in the event of a disaster or other data loss at the primary site. Or database 146 can be replicated to another computing device within the same site, such as to a higher performance machine in the event that a storage manager host computing device can no longer service the needs of a growing system 100 .
- FIG. 1D shows an embodiment of information management system 100 including a plurality of client computing devices 102 and associated data agents 142 as well as a plurality of secondary storage computing devices 106 and associated media agents 144 . Additional components can be added or subtracted based on the evolving needs of system 100 . For instance, depending on where bottlenecks are identified, administrators can add additional client computing devices 102 , secondary storage computing devices 106 , and/or secondary storage devices 108 . Moreover, where multiple fungible components are available, load balancing can be implemented to dynamically address identified bottlenecks. As an example, storage manager 140 may dynamically select which media agents 144 and/or secondary storage devices 108 to use for storage operations based on a processing load analysis of media agents 144 and/or secondary storage devices 108 , respectively.
- a first media agent 144 may provide failover functionality for a second failed media agent 144 .
- media agents 144 can be dynamically selected to provide load balancing.
- Each client computing device 102 can communicate with, among other components, any of the media agents 144 , e.g., as directed by storage manager 140 .
- each media agent 144 may communicate with, among other components, any of secondary storage devices 108 , e.g., as directed by storage manager 140 .
- operations can be routed to secondary storage devices 108 in a dynamic and highly flexible manner, to provide load balancing, failover, etc.
- Further examples of scalable systems capable of dynamic storage operations, load balancing, and failover are provided in U.S. Pat. No. 7,246,207.
- certain components may reside and execute on the same computing device.
- one or more of the components shown in FIG. 1C may be implemented on the same computing device.
- a storage manager 140 , one or more data agents 142 , and/or one or more media agents 144 are all implemented on the same computing device.
- one or more data agents 142 and one or more media agents 144 are implemented on the same computing device, while storage manager 140 is implemented on a separate computing device, etc. without limitation.
- system 100 can be configured to perform a variety of information management operations, which may also be referred to in some cases as storage management operations or storage operations. These operations can generally include (i) data movement operations, (ii) processing and data manipulation operations, and (iii) analysis, reporting, and management operations.
- Data movement operations are generally storage operations that involve the copying or migration of data between different locations in system 100 .
- data movement operations can include operations in which stored data is copied, migrated, or otherwise transferred from one or more first storage devices to one or more second storage devices, such as from primary storage device(s) 104 to secondary storage device(s) 108 , from secondary storage device(s) 108 to different secondary storage device(s) 108 , from secondary storage devices 108 to primary storage devices 104 , or from primary storage device(s) 104 to different primary storage device(s) 104 , or in some cases within the same primary storage device 104 such as within a storage array.
- Data movement operations can include by way of example, backup operations, archive operations, information lifecycle management operations such as hierarchical storage management operations, replication operations (e.g., continuous data replication), snapshot operations, deduplication or single-instancing operations, auxiliary copy operations, disaster-recovery copy operations, and the like. As will be discussed, some of these operations do not necessarily create distinct copies. Nonetheless, some or all of these operations are generally referred to as “secondary copy operations” for simplicity, because they involve secondary copies. Data movement also comprises restoring secondary copies.
- a backup operation creates a copy of a version of primary data 112 at a particular point in time (e.g., one or more files or other data units). Each subsequent backup copy 116 (which is a form of secondary copy 116 ) may be maintained independently of the first.
- a backup generally involves maintaining a version of the copied primary data 112 as well as backup copies 116 .
- a backup copy in some embodiments is generally stored in a form that is different from the native format, e.g., a backup format. This contrasts to the version in primary data 112 which may instead be stored in a format native to the source application(s) 110 .
- backup copies can be stored in a format in which the data is compressed, encrypted, deduplicated, and/or otherwise modified from the original native application format.
- a backup copy may be stored in a compressed backup format that facilitates efficient long-term storage.
- Backup copies 116 can have relatively long retention periods as compared to primary data 112 , which is generally highly changeable. Backup copies 116 may be stored on media with slower retrieval times than primary storage device 104 . Some backup copies may have shorter retention periods than some other types of secondary copies 116 , such as archive copies (described below). Backups may be stored at an offsite location.
- Backup operations can include full backups, differential backups, incremental backups, “synthetic full” backups, and/or creating a “reference copy.”
- a full backup (or “standard full backup”) in some embodiments is generally a complete image of the data to be protected. However, because full backup copies can consume a relatively large amount of storage, it can be useful to use a full backup copy as a baseline and only store changes relative to the full backup copy afterwards.
- a differential backup operation tracks and stores changes that occurred since the last full backup. Differential backups can grow quickly in size, but can restore relatively efficiently because a restore can be completed in some cases using only the full backup copy and the latest differential copy.
- An incremental backup operation generally tracks and stores changes since the most recent backup copy of any type, which can greatly reduce storage utilization. In some cases, however, restoring can be lengthy compared to full or differential backups because completing a restore operation may involve accessing a full backup in addition to multiple incremental backups.
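- The difference in restore chains can be seen in a small, purely illustrative comparison (a sketch of the general concept, not the patented restore procedure):

```python
# Schematic comparison of which prior copies a restore needs.
def restore_chain(copies, mode):
    """copies: oldest-to-newest list of 'full' | 'incremental' | 'differential'."""
    last_full = max(i for i, c in enumerate(copies) if c == "full")
    if mode == "differential":
        # Full baseline plus only the latest differential copy.
        diffs = [i for i in range(last_full + 1, len(copies)) if copies[i] == "differential"]
        return [last_full] + diffs[-1:]
    # Incremental restores need the full baseline plus every incremental since it.
    return [last_full] + [i for i in range(last_full + 1, len(copies))
                          if copies[i] == "incremental"]

print(restore_chain(["full", "incremental", "incremental", "incremental"], "incremental"))
# -> [0, 1, 2, 3]: the full copy plus all three incrementals
print(restore_chain(["full", "differential", "differential"], "differential"))
# -> [0, 2]: the full copy plus only the latest differential
```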
- Synthetic full backups generally consolidate data without directly backing up data from the client computing device.
- a synthetic full backup is created from the most recent full backup (i.e., standard or synthetic) and subsequent incremental and/or differential backups. The resulting synthetic full backup is identical to what would have been created had the last backup for the subclient been a standard full backup.
- a synthetic full backup does not actually transfer data from primary storage to the backup media, because it operates as a backup consolidator.
- a synthetic full backup extracts the index data of each participating subclient. Using this index data and the previously backed up user data images, it builds new full backup images (e.g., bitmaps), one for each subclient. The new backup images consolidate the index and user data stored in the related incremental, differential, and previous full backups into a synthetic backup file that fully represents the subclient (e.g., via pointers) but does not comprise all its constituent data.
- volume level backup operations generally involve copying of a data volume (e.g., a logical disk or partition) as a whole.
- In a file-level backup, information management system 100 generally tracks changes to individual files and includes copies of files in the backup copy.
- In block-level backups, files are broken into constituent blocks, and changes are tracked at the block level.
- Upon restore, system 100 reassembles the blocks into files in a transparent fashion. Far less data may actually be transferred and copied to secondary storage devices 108 during a file-level copy than a volume-level copy.
- a block-level copy may transfer less data than a file-level copy, resulting in faster execution.
- restoring a relatively higher-granularity copy can result in longer restore times. For instance, when restoring a block-level copy, the process of locating and retrieving constituent blocks can sometimes take longer than restoring file-level backups.
- a reference copy may comprise copy(ies) of selected objects from backed up data, typically to help organize data by keeping contextual information from multiple sources together, and/or help retain specific data for a longer period of time, such as for legal hold needs.
- a reference copy generally maintains data integrity, and when the data is restored, it may be viewed in the same format as the source data.
- a reference copy is based on a specialized client, individual subclient and associated information management policies (e.g., storage policy, retention policy, etc.) that are administered within system 100 .
- an archive operation creates an archive copy 116 by both copying and removing source data. Or, seen another way, archive operations can involve moving some or all of the source data to the archive destination. Thus, data satisfying criteria for removal (e.g., data of a threshold age or size) may be removed from source storage.
- the source data may be primary data 112 or a secondary copy 116 , depending on the situation.
- archive copies can be stored in a format in which the data is compressed, encrypted, deduplicated, and/or otherwise modified from the format of the original application or source copy. In addition, archive copies may be retained for relatively long periods of time (e.g., years) and, in some cases are never deleted. In certain embodiments, archive copies may be made and kept for extended periods in order to meet compliance regulations.
- Archiving can also serve the purpose of freeing up space in primary storage device(s) 104 and easing the demand on computational resources on client computing device 102 . Similarly, when a secondary copy 116 is archived, the archive copy can therefore serve the purpose of freeing up space in the source secondary storage device(s) 108 . Examples of data archiving operations are provided in U.S. Pat. No. 7,107,298.
- Snapshot operations can provide a relatively lightweight, efficient mechanism for protecting data.
- a snapshot may be thought of as an “instant” image of primary data 112 at a given point in time, and may include state and/or status information relative to an application 110 that creates/manages primary data 112 .
- a snapshot may generally capture the directory structure of an object in primary data 112 such as a file or volume or other data set at a particular moment in time and may also preserve file attributes and contents.
- a snapshot in some cases is created relatively quickly, e.g., substantially instantly, using a minimum amount of file space, but may still function as a conventional file system backup.
- a “hardware snapshot” (or “hardware-based snapshot”) operation occurs where a target storage device (e.g., a primary storage device 104 or a secondary storage device 108 ) performs the snapshot operation in a self-contained fashion, substantially independently, using hardware, firmware and/or software operating on the storage device itself.
- the storage device may perform snapshot operations generally without intervention or oversight from any of the other components of the system 100 , e.g., a storage array may generate an “array-created” hardware snapshot and may also manage its storage, integrity, versioning, etc. In this manner, hardware snapshots can off-load other components of system 100 from snapshot processing.
- An array may receive a request from another component to take a snapshot and then proceed to execute the “hardware snapshot” operations autonomously, preferably reporting success to the requesting component.
- a “software snapshot” (or “software-based snapshot”) operation occurs where a component in system 100 (e.g., client computing device 102 , etc.) implements a software layer that manages the snapshot operation via interaction with the target storage device. For instance, the component executing the snapshot management software layer may derive a set of pointers and/or data that represents the snapshot. The snapshot management software layer may then transmit the same to the target storage device, along with appropriate instructions for writing the snapshot.
- a software snapshot product is Microsoft Volume Snapshot Service (VSS), which is part of the Microsoft Windows operating system.
- snapshots do not actually create another physical copy of all the data as it existed at the particular point in time, but may simply create pointers that map files and directories to specific memory locations (e.g., to specific disk blocks) where the data resides as it existed at the particular point in time.
- a snapshot copy may include a set of pointers derived from the file system or from an application.
- the snapshot may be created at the block-level, such that creation of the snapshot occurs without awareness of the file system.
- Each pointer points to a respective stored data block, so that collectively, the set of pointers reflect the storage location and state of the data object (e.g., file(s) or volume(s) or data set(s)) at the point in time when the snapshot copy was created.
- An initial snapshot may use only a small amount of disk space needed to record a mapping or other data structure representing or otherwise tracking the blocks that correspond to the current state of the file system. Additional disk space is usually required only when files and directories change later on. Furthermore, when files change, typically only the pointers which map to blocks are copied, not the blocks themselves. For example, for “copy-on-write” snapshots, when a block changes in primary storage, the block is copied to secondary storage or cached in primary storage before the block is overwritten in primary storage, and the pointer to that block is changed to reflect the new location of that block. The snapshot mapping of file system data may also be updated to reflect the changed block(s) at that particular point in time.
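- The copy-on-write behavior can be sketched in a few lines; the classes and block layout below are purely illustrative, not the snapshot mechanism claimed here:

```python
# Minimal copy-on-write snapshot sketch: the snapshot records only pointers,
# and a block is preserved just before it is overwritten.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))   # block number -> data
        self.snapshots = []

    def take_snapshot(self):
        # The snapshot initially stores no data, only pointers to live blocks.
        snap = {"pointers": dict.fromkeys(self.blocks, "live"), "preserved": {}}
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        for snap in self.snapshots:
            if snap["pointers"].get(block_no) == "live":
                # Copy-on-write: preserve the old block before overwriting it,
                # then repoint the snapshot at the preserved copy.
                snap["preserved"][block_no] = self.blocks[block_no]
                snap["pointers"][block_no] = "preserved"
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        if snap["pointers"][block_no] == "preserved":
            return snap["preserved"][block_no]
        return self.blocks[block_no]

vol = Volume(["A0", "B0", "C0"])
snap = vol.take_snapshot()
vol.write(1, "B1")                   # only now is the old block copied aside
print(vol.read_snapshot(snap, 1))    # -> "B0", the point-in-time contents
```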
- a snapshot includes a full physical copy of all or substantially all of the data represented by the snapshot. Further examples of snapshot operations are provided in U.S. Pat. No. 7,529,782.
- a snapshot copy in many cases can be made quickly and without significantly impacting primary computing resources because large amounts of data need not be copied or moved.
- a snapshot may exist as a virtual file system, parallel to the actual file system. Users in some cases gain read-only access to the record of files and directories of the snapshot. By electing to restore primary data 112 from a snapshot taken at a given point in time, users may also return the current file system to the state of the file system that existed when the snapshot was taken.
- Replication is another type of secondary copy operation.
- Some types of secondary copies 116 periodically capture images of primary data 112 at particular points in time (e.g., backups, archives, and snapshots). However, it can also be useful for recovery purposes to protect primary data 112 in a more continuous fashion, by replicating primary data 112 substantially as changes occur.
- a replication copy can be a mirror copy, for instance, where changes made to primary data 112 are mirrored or substantially immediately copied to another location (e.g., to secondary storage device(s) 108 ). By copying each write operation to the replication copy, two storage systems are kept synchronized or substantially synchronized so that they are virtually identical at approximately the same time. Where entire disk volumes are mirrored, however, mirroring can require a significant amount of storage space and utilize a large amount of processing resources.
- secondary copy operations are performed on replicated data that represents a recoverable state, or “known good state” of a particular application running on the source system.
- known good replication copies may be viewed as copies of primary data 112 . This feature allows the system to directly access, copy, restore, back up, or otherwise manipulate the replication copies as if they were the “live” primary data 112 . This can reduce access time, storage utilization, and impact on source applications 110 , among other benefits.
- system 100 can replicate sections of application data that represent a recoverable state rather than rote copying of blocks of data. Examples of replication operations (e.g., continuous data replication) are provided in U.S. Pat. No. 7,617,262.
- Deduplication or single-instance storage is useful to reduce the amount of non-primary data.
- some or all of the above-described secondary copy operations can involve deduplication in some fashion.
- New data is read, broken down into data portions of a selected granularity (e.g., sub-file level blocks, files, etc.), compared with corresponding portions that are already in secondary storage, and only new/changed portions are stored. Portions that already exist are represented as pointers to the already-stored data.
- a deduplicated secondary copy 116 may comprise actual data portions copied from primary data 112 and may further comprise pointers to already-stored data, which is generally more storage-efficient than a full copy.
- system 100 may calculate and/or store signatures (e.g., hashes or cryptographically unique IDs) corresponding to the individual source data portions and compare the signatures to already-stored data signatures, instead of comparing entire data portions.
- deduplication operations may therefore be referred to interchangeably as “single-instancing” operations.
- deduplication operations can store more than one instance of certain data portions, yet still significantly reduce stored-data redundancy.
- deduplication portions such as data blocks can be of fixed or variable length. Using variable length blocks can enhance deduplication by responding to changes in the data stream, but can involve more complex processing.
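- For illustration, a schematic signature-based deduplication routine over fixed-length blocks is shown below; the hash function, block size, and store layout are arbitrary choices for the sketch, not the deduplication design described in the cited patents:

```python
# Schematic signature-based deduplication at a fixed block size.
import hashlib

BLOCK_SIZE = 4
block_store = {}            # signature -> stored block (single instance)

def deduplicate(data: bytes):
    """Return a list of signatures that reconstructs `data` via block_store."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        sig = hashlib.sha256(block).hexdigest()
        if sig not in block_store:       # only new/changed portions are stored
            block_store[sig] = block
        recipe.append(sig)               # existing portions become pointers
    return recipe

def rehydrate(recipe):
    return b"".join(block_store[sig] for sig in recipe)

copy_1 = deduplicate(b"AAAABBBBAAAA")    # the repeated AAAA block is stored once
copy_2 = deduplicate(b"AAAACCCC")        # reuses the AAAA block already stored
print(len(block_store))                  # -> 3 unique blocks for both copies
print(rehydrate(copy_1) == b"AAAABBBBAAAA")
```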
- system 100 utilizes a technique for dynamically aligning deduplication blocks based on changing content in the data stream, as described in U.S. Pat. No. 8,364,652.
- System 100 can deduplicate in a variety of manners at a variety of locations. For instance, in some embodiments, system 100 implements “target-side” deduplication by deduplicating data at the media agent 144 after being received from data agent 142 .
- media agents 144 are generally configured to manage the deduplication process. For instance, one or more of the media agents 144 maintain a corresponding deduplication database that stores deduplication information (e.g., data block signatures). Examples of such a configuration are provided in U.S. Pat. No. 9,020,900.
- “source-side” (or “client-side”) deduplication can also be performed, e.g., to reduce the amount of data to be transmitted by data agent 142 to media agent 144 .
- Storage manager 140 may communicate with other components within system 100 via network protocols and cloud service provider APIs to facilitate cloud-based deduplication/single instancing, as exemplified in U.S. Pat. No. 8,954,446.
- Some other deduplication/single instancing techniques are described in U.S. Pat. Pub. No. 2006/0224846 and in U.S. Pat. No. 9,098,495.
- files and other data over their lifetime move from more expensive quick-access storage to less expensive slower-access storage.
- Operations associated with moving data through various tiers of storage are sometimes referred to as information lifecycle management (ILM) operations.
- One type of ILM operation is a hierarchical storage management (HSM) operation.
- an HSM operation may involve movement of data from primary storage devices 104 to secondary storage devices 108 , or between tiers of secondary storage devices 108 . With each tier, the storage devices may be progressively cheaper, have relatively slower access/restore times, etc. For example, movement of data between tiers may occur as data becomes less important over time.
- an HSM operation is similar to archiving in that creating an HSM copy may (though not always) involve deleting some of the source data, e.g., according to one or more criteria related to the source data.
- an HSM copy may include primary data 112 or a secondary copy 116 that exceeds a given size threshold or a given age threshold.
- HSM data that is removed or aged from the source is replaced by a logical reference pointer or stub.
- the reference pointer or stub can be stored in the primary storage device 104 or other source storage device, such as a secondary storage device 108 to replace the deleted source data and to point to or otherwise indicate the new location in (another) secondary storage device 108 .
- system 100 uses the stub to locate the data and can make recovery of the data appear transparent, even though the HSM data may be stored at a location different from other source data. In this manner, the data appears to the user (e.g., in file system browsing windows and the like) as if it still resides in the source location (e.g., in a primary storage device 104 ).
- the stub may include metadata associated with the corresponding data, so that a file system and/or application can provide some information about the data object and/or a limited-functionality version (e.g., a preview) of the data object.
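- A toy illustration of the stub-and-recall idea follows; the paths, age threshold, and dictionary-based "storage" are hypothetical stand-ins used only to show how a stub can make the recall appear transparent:

```python
# Toy sketch of stubbing aged data and transparently recalling it on access.
source_storage = {"/data/q1_report.pdf": b"...large, rarely used file..."}
secondary_storage_108 = {}

def hsm_archive(path, age_days, threshold_days=365):
    if age_days >= threshold_days:
        secondary_storage_108[path] = source_storage[path]
        # Replace the source data with a small stub pointing to the new location.
        source_storage[path] = {"stub": True, "location": ("device-108b", path)}

def read(path):
    obj = source_storage[path]
    if isinstance(obj, dict) and obj.get("stub"):
        # Recovery appears transparent: follow the stub and recall the data.
        return secondary_storage_108[obj["location"][1]]
    return obj

hsm_archive("/data/q1_report.pdf", age_days=400)
print(read("/data/q1_report.pdf"))   # recalled from secondary storage via the stub
```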
- An HSM copy may be stored in a format other than the native application format (e.g., compressed, encrypted, deduplicated, and/or otherwise modified).
- copies which involve the removal of data from source storage and the maintenance of stub or other logical reference information on source storage may be referred to generally as “on-line archive copies.”
- copies which involve the removal of data from source storage without the maintenance of stub or other logical reference information on source storage may be referred to as “off-line archive copies.” Examples of HSM and ILM techniques are provided in U.S. Pat. No. 7,343,453.
- An auxiliary copy is generally a copy of an existing secondary copy 116 .
- an initial secondary copy 116 may be derived from primary data 112 or from data residing in secondary storage subsystem 118 , whereas an auxiliary copy is generated from the initial secondary copy 116 .
- Auxiliary copies provide additional standby copies of data and may reside on different secondary storage devices 108 than the initial secondary copies 116 .
- auxiliary copies can be used for recovery purposes if initial secondary copies 116 become unavailable. Exemplary auxiliary copy techniques are described in further detail in U.S. Pat. No. 8,230,195.
- System 100 may also make and retain disaster recovery copies, often as secondary, high-availability disk copies.
- System 100 may create secondary copies and store them at disaster recovery locations using auxiliary copy or replication operations, such as continuous data replication technologies.
- disaster recovery locations can be remote from the client computing devices 102 and primary storage devices 104 , remote from some or all of the secondary storage devices 108 , or both.
- Data manipulation and processing may include encryption and compression as well as integrity marking and checking, formatting for transmission, formatting for storage, etc.
- Data may be manipulated “client-side” by data agent 142 as well as “target-side” by media agent 144 in the course of creating secondary copy 116 , or conversely in the course of restoring data from secondary to primary.
- System 100 in some cases is configured to process data (e.g., files or other data objects, primary data 112 , secondary copies 116 , etc.), according to an appropriate encryption algorithm (e.g., Blowfish, Advanced Encryption Standard (AES), Triple Data Encryption Standard (3-DES), etc.) to limit access and provide data security.
- System 100 in some cases encrypts the data at the client level, such that client computing devices 102 (e.g., data agents 142 ) encrypt the data prior to transferring it to other components, e.g., before sending the data to media agents 144 during a secondary copy operation.
- client computing device 102 may maintain or have access to an encryption key or passphrase for decrypting the data upon restore.
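- As a minimal client-side encryption sketch (assuming the third-party `cryptography` package and its AES-based Fernet recipe; the system described here may instead use Blowfish, 3-DES, or other ciphers and its own key management):

```python
# Minimal client-side encryption sketch using the cryptography package's Fernet recipe.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # retained by/for the client for use on restore
cipher = Fernet(key)

primary_data_112 = b"contents of a file to protect"
encrypted_payload = cipher.encrypt(primary_data_112)   # sent on to a media agent 144

# On restore, the client decrypts the secondary copy back to a usable form.
restored = cipher.decrypt(encrypted_payload)
assert restored == primary_data_112
```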
- Encryption can also occur when media agent 144 creates auxiliary copies or archive copies. Encryption may be applied in creating a secondary copy 116 of a previously unencrypted secondary copy 116 , without limitation.
- secondary storage devices 108 can implement built-in, high performance hardware-based encryption.
- system 100 may also or alternatively compress data in the course of generating a secondary copy 116 .
- Compression encodes information such that fewer bits are needed to represent the information as compared to the original representation.
- Compression techniques are well known in the art. Compression operations may apply one or more data compression algorithms. Compression may be applied in creating a secondary copy 116 of a previously uncompressed secondary copy, e.g., when making archive copies or disaster recovery copies. The use of compression may result in metadata that specifies the nature of the compression, so that data may be uncompressed on restore if appropriate.
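- A small illustration of compressing a payload while recording metadata about the compression used, so a restore knows how to reverse it (zlib is an arbitrary example algorithm; the dictionary layout is hypothetical):

```python
# Illustrative compression of a payload with metadata describing the scheme used.
import zlib

def compress_for_secondary_copy(payload: bytes):
    return {"compression": "zlib", "data": zlib.compress(payload)}

def restore_payload(copy):
    if copy.get("compression") == "zlib":
        return zlib.decompress(copy["data"])
    return copy["data"]

copy_116 = compress_for_secondary_copy(b"highly redundant data " * 100)
print(len(copy_116["data"]))                       # far fewer bytes than the original
assert restore_payload(copy_116) == b"highly redundant data " * 100
```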
- Data analysis, reporting, and management operations can differ from data movement operations in that they do not necessarily involve copying, migration or other transfer of data between different locations in the system.
- data analysis operations may involve processing (e.g., offline processing) or modification of already stored primary data 112 and/or secondary copies 116 .
- data analysis operations are performed in conjunction with data movement operations.
- Some data analysis operations include content indexing operations and classification operations which can be useful in leveraging data under management to enhance search and other features.
- information management system 100 analyzes and indexes characteristics, content, and metadata associated with primary data 112 (“online content indexing”) and/or secondary copies 116 (“off-line content indexing”).
- Content indexing can identify files or other data objects based on content (e.g., user-defined keywords or phrases, other keywords/phrases that are not defined by a user, etc.), and/or metadata (e.g., email metadata such as “to,” “from,” “cc,” “bcc,” attachment name, received time, etc.).
- Content indexes may be searched and search results may be restored.
- System 100 generally organizes and catalogues the results into a content index, which may be stored within media agent database 152 , for example.
- the content index can also include the storage locations of or pointer references to indexed data in primary data 112 and/or secondary copies 116 .
- Results may also be stored elsewhere in system 100 (e.g., in primary storage device 104 or in secondary storage device 108 ).
- Such content index data provides storage manager 140 or other components with an efficient mechanism for locating primary data 112 and/or secondary copies 116 of data objects that match particular criteria, thus greatly increasing the search speed capability of system 100 .
- search criteria can be specified by a user through user interface 158 of storage manager 140 .
- Because system 100 analyzes data and/or metadata in secondary copies 116 to create an “off-line content index,” this operation has no significant impact on the performance of client computing devices 102 and thus does not take a toll on the production environment.
- Examples of content indexing techniques are provided in U.S. Pat. No. 8,170,995.
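- A brief, assumed Python sketch of a content index follows: data objects are indexed by content keywords and e-mail-style metadata fields so that searches can locate matching primary data or secondary copies. The class, location strings, and schema are hypothetical, not the indexing format of the cited patent.

    from collections import defaultdict

    class ContentIndex:
        """Toy content index mapping search terms to copy locations."""

        def __init__(self):
            self._by_term = defaultdict(set)   # term -> set of data object locations

        def index_object(self, location: str, content: str, metadata: dict):
            for term in content.lower().split():
                self._by_term[term].add(location)
            for field in ("to", "from", "cc", "bcc"):      # e-mail metadata fields
                value = metadata.get(field)
                if value:
                    self._by_term[value.lower()].add(location)

        def search(self, term: str):
            return self._by_term.get(term.lower(), set())

    index = ContentIndex()
    index.index_object("disk_library_108A/chunk_0042",
                       "quarterly results confidential",
                       {"from": "cfo@example.com"})
    print(index.search("confidential"))    # -> {'disk_library_108A/chunk_0042'}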
- One or more components can be configured to scan data and/or associated metadata for classification purposes to populate a database (or other data structure) of information, which can be referred to as a “data classification database” or a “metabase.”
- the data classification database(s) can be organized in a variety of different ways, including centralization, logical sub-divisions, and/or physical sub-divisions.
- one or more data classification databases may be associated with different subsystems or tiers within system 100 . As an example, there may be a first metabase associated with primary storage subsystem 117 and a second metabase associated with secondary storage subsystem 118 .
- metabase(s) may be associated with individual components, e.g., client computing devices 102 and/or media agents 144 .
- a data classification database may reside as one or more data structures within management database 146 , may be otherwise associated with storage manager 140 , and/or may reside as a separate component.
- metabase(s) may be included in separate database(s) and/or on separate storage device(s) from primary data 112 and/or secondary copies 116 , such that operations related to the metabase(s) do not significantly impact performance on other components of system 100 .
- metabase(s) may be stored along with primary data 112 and/or secondary copies 116 .
- Files or other data objects can be associated with identifiers (e.g., tag entries, etc.) to facilitate searches of stored data objects.
- the metabase can also allow efficient, automatic identification of files or other data objects to associate with secondary copy or other information management operations.
- a metabase can dramatically improve the speed with which system 100 can search through and identify data as compared to other approaches that involve scanning an entire file system. Examples of metabases and data classification operations are provided in U.S. Pat. Nos. 7,734,669 and 7,747,579.
- Operations management can generally include monitoring and managing the health and performance of system 100 by, without limitation, performing error tracking, generating granular storage/performance metrics (e.g., job success/failure information, deduplication efficiency, etc.), generating storage modeling and costing information, and the like.
- storage manager 140 or another component in system 100 may analyze traffic patterns and suggest and/or automatically route data to minimize congestion.
- the system can generate predictions relating to storage operations or storage operation information. Such predictions, which may be based on a trending analysis, may predict various network operations or resource usage, such as network traffic levels, storage media use, use of bandwidth of communication links, use of media agent components, etc. Further examples of traffic analysis, trend analysis, prediction generation, and the like are described in U.S. Pat. No. 7,343,453.
- a master storage manager 140 may track the status of subordinate cells, such as the status of jobs, system components, system resources, and other items, by communicating with storage managers 140 (or other components) in the respective storage operation cells. Moreover, the master storage manager 140 may also track status by receiving periodic status updates from the storage managers 140 (or other components) in the respective cells regarding jobs, system components, system resources, and other items. In some embodiments, a master storage manager 140 may store status information and other information regarding its associated storage operation cells and other system information in its management database 146 and/or index 150 (or in another location).
- the master storage manager 140 or other component may also determine whether certain storage-related or other criteria are satisfied, and may perform an action or trigger event (e.g., data migration) in response to the criteria being satisfied, such as where a storage threshold is met for a particular volume, or where inadequate protection exists for certain data. For instance, data from one or more storage operation cells is used to dynamically and automatically mitigate recognized risks, and/or to advise users of risks or suggest actions to mitigate these risks.
- an information management policy may specify certain requirements (e.g., that a storage device should maintain a certain amount of free space, that secondary copies should occur at a particular interval, that data should be aged and migrated to other storage after a particular period, that data on a secondary volume should always have a certain level of availability and be restorable within a given time period, that data on a secondary volume may be mirrored or otherwise migrated to a specified number of other volumes, etc.). If a risk condition or other criterion is triggered, the system may notify the user of these conditions and may suggest (or automatically implement) a mitigation action to address the risk.
- the system may indicate that data from a primary copy 112 should be migrated to a secondary storage device 108 to free up space on primary storage device 104 .
- Examples of such triggering criteria include, but are not limited to, risk factors.
- system 100 may also determine whether a metric or other indication satisfies particular storage criteria sufficient to perform an action.
- a storage policy or other definition might indicate that a storage manager 140 should initiate a particular action if a storage metric or other indication drops below or otherwise fails to satisfy specified criteria such as a threshold of data protection.
- risk factors may be quantified into certain measurable service or risk levels. For example, certain applications and associated data may be considered to be more important relative to other data and services. Financial compliance data, for example, may be of greater importance than marketing materials, etc. Network administrators may assign priority values or “weights” to certain data and/or applications corresponding to the relative importance. The level of compliance of secondary copy operations specified for these applications may also be assigned a certain value.
- the health, impact, and overall importance of a service may be determined, such as by measuring the compliance value and calculating the product of the priority value and the compliance value to determine the “service level” and comparing it to certain operational thresholds to determine whether it is acceptable. Further examples of the service level determination are provided in U.S. Pat. No. 7,343,453.
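- The service-level arithmetic described above can be illustrated with a small, hypothetical example: a priority value (“weight”) is multiplied by a compliance value and the product is compared to an assumed operational threshold. The specific numbers below are made up for illustration.

    def service_level(priority_weight: float, compliance_value: float) -> float:
        return priority_weight * compliance_value

    ACCEPTABLE = 0.75   # assumed operational threshold

    financial = service_level(priority_weight=1.0, compliance_value=0.9)   # 0.90
    marketing = service_level(priority_weight=0.5, compliance_value=0.9)   # 0.45

    for name, level in (("financial compliance data", financial),
                        ("marketing materials", marketing)):
        status = "acceptable" if level >= ACCEPTABLE else "flag for mitigation"
        print(f"{name}: service level {level:.2f} -> {status}")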
- System 100 may additionally calculate data costing and data availability associated with information management operation cells. For instance, data received from a cell may be used in conjunction with hardware-related information and other information about system elements to determine the cost of storage and/or the availability of particular data. Exemplary information generated could include how fast a particular department is using up available storage space, how long data would take to recover over a particular pathway from a particular secondary storage device, costs over time, etc. Moreover, in some embodiments, such information may be used to determine or predict the overall cost associated with the storage of certain information. The cost associated with hosting a certain application may be based, at least in part, on the type of media on which the data resides, for example. Storage devices may be assigned to particular cost categories, for example. Further examples of costing techniques are described in U.S. Pat. No. 7,343,453.
- Report types may include: scheduling, event management, media management and data aging. Available reports may also include backup history, data aging history, auxiliary copy history, job history, library and drive, media in library, restore history, and storage policy, etc., without limitation. Such reports may be specified and created at a certain point in time as a system analysis, forecasting, or provisioning tool. Integrated reports may also be generated that illustrate storage and performance metrics, risks and storage costing information. Moreover, users may create their own reports based on specific needs.
- User interface 158 can include an option to graphically depict the various components in the system using appropriate icons. As one example, user interface 158 may provide a graphical depiction of primary storage devices 104 , secondary storage devices 108 , data agents 142 and/or media agents 144 , and their relationship to one another in system 100 .
- the operations management functionality of system 100 can facilitate planning and decision-making. For example, in some embodiments, a user may view the status of some or all jobs as well as the status of each component of information management system 100 . Users may then plan and make decisions based on this data. For instance, a user may view high-level information regarding secondary copy operations for system 100 , such as job status, component status, resource status (e.g., communication pathways, etc.), and other information. The user may also drill down or use other means to obtain more detailed information regarding a particular component, job, or the like. Further examples are provided in U.S. Pat. No. 7,343,453.
- System 100 can also be configured to perform system-wide e-discovery operations in some embodiments.
- e-discovery operations provide a unified collection and search capability for data in the system, such as data stored in secondary storage devices 108 (e.g., backups, archives, or other secondary copies 116 ).
- system 100 may construct and maintain a virtual repository for data stored in system 100 that is integrated across source applications 110 , different storage device types, etc.
- e-discovery utilizes other techniques described herein, such as data classification and/or content indexing.
- An information management policy 148 can include a data structure or other information source that specifies a set of parameters (e.g., criteria and rules) associated with secondary copy and/or other information management operations.
- a storage policy generally comprises a data structure or other information source that defines (or includes information sufficient to determine) a set of preferences or other criteria for performing information management operations.
- Storage policies can include one or more of the following: (1) what data will be associated with the storage policy, e.g., subclient; (2) a destination to which the data will be stored; (3) datapath information specifying how the data will be communicated to the destination; (4) the type of secondary copy operation to be performed; and (5) retention information specifying how long the data will be retained at the destination (see, e.g., FIG. 1E ).
- Data associated with a storage policy can be logically organized into subclients, which may represent primary data 112 and/or secondary copies 116 .
- a subclient may represent static or dynamic associations of portions of a data volume.
- Subclients may represent mutually exclusive portions. Thus, in certain embodiments, a portion of data may be given a label and the association is stored as a static entity in an index, database or other storage location.
- Subclients may also be used as an effective administrative scheme of organizing data according to data type, department within the enterprise, storage preferences, or the like. Depending on the configuration, subclients can correspond to files, folders, virtual machines, databases, etc. In one exemplary scenario, an administrator may find it preferable to separate e-mail data from financial data using two different subclients.
- a storage policy can define where data is stored by specifying a target or destination storage device (or group of storage devices). For instance, where the secondary storage device 108 includes a group of disk libraries, the storage policy may specify a particular disk library for storing the subclients associated with the policy. As another example, where the secondary storage devices 108 include one or more tape libraries, the storage policy may specify a particular tape library for storing the subclients associated with the storage policy, and may also specify a drive pool and a tape pool defining a group of tape drives and a group of tapes, respectively, for use in storing the subclient data. While information in the storage policy can be statically assigned in some cases, some or all of the information in the storage policy can also be dynamically determined based on criteria set forth in the storage policy.
- a particular destination storage device(s) or other parameter of the storage policy may be determined based on characteristics associated with the data involved in a particular secondary copy operation, device availability (e.g., availability of a secondary storage device 108 or a media agent 144 ), network status and conditions (e.g., identified bottlenecks), user credentials, and the like.
- Datapath information can also be included in the storage policy.
- the storage policy may specify network pathways and components to utilize when moving the data to the destination storage device(s).
- the storage policy specifies one or more media agents 144 for conveying data associated with the storage policy between the source and destination.
- a storage policy can also specify the type(s) of associated operations, such as backup, archive, snapshot, auxiliary copy, or the like.
- retention parameters can specify how long the resulting secondary copies 116 will be kept (e.g., a number of days, months, years, etc.), perhaps depending on organizational needs and/or compliance criteria.
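- As a rough sketch only, the five kinds of storage policy information enumerated above might be held in a record such as the following; the field names are assumptions for illustration and do not reflect the actual schema of information management policies 148.

    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        subclients: list        # (1) what data is associated with the policy
        destination: str        # (2) where the data will be stored
        datapath: list          # (3) e.g., which media agent(s) convey the data
        operation_type: str     # (4) backup, archive, snapshot, auxiliary copy, ...
        retention_days: int     # (5) how long copies are retained at the destination

    policy = StoragePolicy(
        subclients=["file system subclient", "email subclient"],
        destination="disk_library_108A",
        datapath=["media_agent_144A"],
        operation_type="backup",
        retention_days=30,
    )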
- system 100 automatically applies a default configuration to client computing device 102 .
- the installation script may register the client computing device 102 with storage manager 140 , which in turn applies the default configuration to the new client computing device 102 . In this manner, data protection operations can begin substantially immediately.
- the default configuration can include a default storage policy, for example, and can specify any appropriate information sufficient to begin data protection operations. This can include a type of data protection operation, scheduling information, a target secondary storage device 108 , data path information (e.g., a particular media agent 144 ), and the like.
- Another type of information management policy 148 is a “scheduling policy,” which specifies when and how often to perform operations. Scheduling parameters may specify with what frequency (e.g., hourly, weekly, daily, event-based, etc.) or under what triggering conditions secondary copy or other information management operations are to take place. Scheduling policies in some cases are associated with particular components, such as a subclient, client computing device 102 , and the like.
- an audit policy (or “security policy”), which comprises preferences, rules and/or criteria that protect sensitive data in system 100 .
- an audit policy may define “sensitive objects” which are files or data objects that contain particular keywords (e.g., “confidential,” or “privileged”) and/or are associated with particular keywords (e.g., in metadata) or particular flags (e.g., in metadata identifying a document or email as personal, confidential, etc.).
- An audit policy may further specify rules for handling sensitive objects.
- an audit policy may require that a reviewer approve the transfer of any sensitive objects to a cloud storage site, and that if approval is denied for a particular sensitive object, the sensitive object should be transferred to a local primary storage device 104 instead.
- the audit policy may further specify how a secondary storage computing device 106 or other system component should notify a reviewer that a sensitive object is slated for transfer.
- provisioning policy can include preferences, priorities, rules, and/or criteria that specify how client computing devices 102 (or groups thereof) may utilize system resources, such as available storage on cloud storage and/or network bandwidth.
- a provisioning policy specifies, for example, data quotas for particular client computing devices 102 (e.g., a number of gigabytes that can be stored monthly, quarterly or annually).
- Storage manager 140 or other components may enforce the provisioning policy. For instance, media agents 144 may enforce the policy when transferring data to secondary storage devices 108 . If a client computing device 102 exceeds a quota, a budget for the client computing device 102 (or associated department) may be adjusted accordingly or an alert may trigger.
- While information management policies 148 are described as separate policies, one or more of these can generally be combined into a single information management policy 148 .
- a storage policy may also include or otherwise be associated with one or more scheduling, audit, or provisioning policies or operational parameters thereof.
- While storage policies are typically associated with moving and storing data, other policies may be associated with other types of information management operations. The following is a non-exhaustive list of items that information management policies 148 may specify:
- Information management policies 148 can additionally specify or depend on historical or current criteria that may be used to determine which rules to apply to a particular data object, system component, or information management operation, such as:
- FIG. 1E includes a data flow diagram depicting performance of secondary copy operations by an embodiment of information management system 100 , according to an exemplary storage policy 148 A.
- System 100 includes a storage manager 140 , a client computing device 102 having a file system data agent 142 A and an email data agent 142 B operating thereon, a primary storage device 104 , two media agents 144 A, 144 B, and two secondary storage devices 108 : a disk library 108 A and a tape library 108 B.
- primary storage device 104 includes primary data 112 A, which is associated with a logical grouping of data associated with a file system (“file system subclient”), and primary data 112 B, which is a logical grouping of data associated with email (“email subclient”).
- the second media agent 144 B and tape library 108 B are “off-site,” and may be remotely located from the other components in system 100 (e.g., in a different city, office building, etc.).
- off-site may refer to a magnetic tape located in remote storage, which must be manually retrieved and loaded into a tape drive to be read.
- information stored on the tape library 108 B may provide protection in the event of a disaster or other failure at the main site(s) where data is stored.
- the file system subclient 112 A in certain embodiments generally comprises information generated by the file system and/or operating system of client computing device 102 , and can include, for example, file system data (e.g., regular files, file tables, mount points, etc.), operating system data (e.g., registries, event logs, etc.), and the like.
- the e-mail subclient 112 B can include data generated by an e-mail application operating on client computing device 102 , e.g., mailbox information, folder information, emails, attachments, associated database information, and the like.
- the subclients can be logical containers, and the data included in the corresponding primary data 112 A and 112 B may or may not be stored contiguously.
- the exemplary storage policy 148 A includes backup copy preferences or rule set 160 , disaster recovery copy preferences or rule set 162 , and compliance copy preferences or rule set 164 .
- Backup copy rule set 160 specifies that it is associated with file system subclient 166 and email subclient 168 . Each of subclients 166 and 168 is associated with the particular client computing device 102 .
- Backup copy rule set 160 further specifies that the backup operation will be written to disk library 108 A and designates a particular media agent 144 A to convey the data to disk library 108 A.
- backup copy rule set 160 specifies that backup copies created according to rule set 160 are scheduled to be generated hourly and are to be retained for 30 days. In some other embodiments, scheduling information is not included in storage policy 148 A and is instead specified by a separate scheduling policy.
- Disaster recovery copy rule set 162 is associated with the same two subclients 166 and 168 . However, disaster recovery copy rule set 162 is associated with tape library 108 B, unlike backup copy rule set 160 . Moreover, disaster recovery copy rule set 162 specifies that a different media agent, namely 144 B, will convey data to tape library 108 B. Disaster recovery copies created according to rule set 162 will be retained for 60 days and will be generated daily. Disaster recovery copies generated according to disaster recovery copy rule set 162 can provide protection in the event of a disaster or other catastrophic data loss that would affect the backup copy 116 A maintained on disk library 108 A.
- Compliance copy rule set 164 is only associated with the email subclient 168 , and not the file system subclient 166 . Compliance copies generated according to compliance copy rule set 164 will therefore not include primary data 112 A from the file system subclient 166 . For instance, the organization may be under an obligation to store and maintain copies of email data for a particular period of time (e.g., 10 years) to comply with state or federal regulations, while similar regulations do not apply to file system data. Compliance copy rule set 164 is associated with the same tape library 108 B and media agent 144 B as disaster recovery copy rule set 162 , although a different storage device or media agent could be used in other embodiments. Finally, compliance copy rule set 164 specifies that the copies it governs will be generated quarterly and retained for 10 years.
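- To make the three rule sets of exemplary storage policy 148 A concrete, they could be expressed as plain configuration data along the lines sketched below; the dictionary structure is an illustrative assumption, but the targets, schedules, and retention values mirror rule sets 160 , 162 , and 164 described above.

    STORAGE_POLICY_148A = {
        "backup_copy_rule_set_160": {
            "subclients": ["file_system_166", "email_168"],
            "target": "disk_library_108A",
            "media_agent": "144A",
            "schedule": "hourly",
            "retention_days": 30,
        },
        "disaster_recovery_copy_rule_set_162": {
            "subclients": ["file_system_166", "email_168"],
            "target": "tape_library_108B",
            "media_agent": "144B",
            "schedule": "daily",
            "retention_days": 60,
        },
        "compliance_copy_rule_set_164": {
            "subclients": ["email_168"],        # file system data is excluded
            "target": "tape_library_108B",
            "media_agent": "144B",
            "schedule": "quarterly",
            "retention_years": 10,
        },
    }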
- a logical grouping of secondary copy operations governed by a rule set and being initiated at a point in time may be referred to as a “secondary copy job” (and sometimes may be called a “backup job,” even though it is not necessarily limited to creating only backup copies). Secondary copy jobs may be initiated on demand as well. Steps 1 - 9 below illustrate three secondary copy jobs based on storage policy 148 A.
- storage manager 140 initiates a backup job according to the backup copy rule set 160 , which logically comprises all the secondary copy operations necessary to effectuate rules 160 in storage policy 148 A every hour, including steps 1 - 4 occurring hourly.
- a scheduling service running on storage manager 140 accesses backup copy rule set 160 or a separate scheduling policy associated with client computing device 102 and initiates a backup job on an hourly basis.
- storage manager 140 sends instructions to client computing device 102 (i.e., to both data agent 142 A and data agent 142 B) to begin the backup job.
- file system data agent 142 A and email data agent 142 B on client computing device 102 respond to instructions from storage manager 140 by accessing and processing the respective subclient primary data 112 A and 112 B involved in the backup copy operation, which can be found in primary storage device 104 .
- the data agent(s) 142 A, 142 B may format the data into a backup format or otherwise process the data suitable for a backup copy.
- client computing device 102 communicates the processed file system data (e.g., using file system data agent 142 A) and the processed email data (e.g., using email data agent 142 B) to the first media agent 144 A according to backup copy rule set 160 , as directed by storage manager 140 .
- Storage manager 140 may further keep a record in management database 146 of the association between media agent 144 A and one or more of: client computing device 102 , file system subclient 112 A, file system data agent 142 A, email subclient 112 B, email data agent 142 B, and/or backup copy 116 A.
- the target media agent 144 A receives the data-agent-processed data from client computing device 102 , and at step 4 generates and conveys backup copy 116 A to disk library 108 A to be stored as backup copy 116 A, again at the direction of storage manager 140 and according to backup copy rule set 160 .
- Media agent 144 A can also update its index 153 to include data and/or metadata related to backup copy 116 A, such as information indicating where the backup copy 116 A resides on disk library 108 A, where the email copy resides, where the file system copy resides, data and metadata for cache retrieval, etc.
- Storage manager 140 may similarly update its index 150 to include information relating to the secondary copy operation, such as information relating to the type of operation, a physical location associated with one or more copies created by the operation, the time the operation was performed, status information relating to the operation, the components involved in the operation, and the like. In some cases, storage manager 140 may update its index 150 to include some or all of the information stored in index 153 of media agent 144 A. At this point, the backup job may be considered complete. After the 30-day retention period expires, storage manager 140 instructs media agent 144 A to delete backup copy 116 A from disk library 108 A and indexes 150 and/or 153 are updated accordingly.
- storage manager 140 initiates another backup job for a disaster recovery copy according to the disaster recovery rule set 162 .
- this includes steps 5 - 7 occurring daily for creating disaster recovery copy 116 B.
- disaster recovery copy 116 B is based on backup copy 116 A and not on primary data 112 A and 112 B.
- the specified media agent 144 B retrieves the most recent backup copy 116 A from disk library 108 A .
- disaster recovery copy 116 B is a direct, mirror copy of backup copy 116 A, and remains in the backup format.
- disaster recovery copy 116 B may be further compressed or encrypted, or may be generated in some other manner, such as by using primary data 112 A and 112 B from primary storage device 104 as sources.
- the disaster recovery copy operation is initiated once a day and disaster recovery copies 116 B are deleted after 60 days; indexes 153 and/or 150 are updated accordingly when/after each information management operation is executed and/or completed.
- the present backup job may be considered completed.
- storage manager 140 initiates another backup job according to compliance rule set 164 , which performs steps 8 - 9 quarterly to create compliance copy 116 C. For instance, storage manager 140 instructs media agent 144 B to create compliance copy 116 C on tape library 108 B , as specified in the compliance copy rule set 164 .
- compliance copy 116 C is generated using disaster recovery copy 116 B as the source. This is efficient, because the disaster recovery copy resides on the same secondary storage device and thus no network resources are required to move the data.
- compliance copy 116 C is instead generated using primary data 112 B corresponding to the email subclient or using backup copy 116 A from disk library 108 A as source data.
- compliance copies 116 C are created quarterly, and are deleted after ten years, and indexes 153 and/or 150 are kept up-to-date accordingly.
- storage manager 140 may permit a user to specify aspects of storage policy 148 A.
- the storage policy can be modified to include information governance policies to define how data should be managed in order to comply with a certain regulation or business objective.
- the various policies may be stored, for example, in management database 146 .
- An information governance policy may align with one or more compliance tasks that are imposed by regulations or business requirements. Examples of information governance policies might include a Sarbanes-Oxley policy, a HIPAA policy, an electronic discovery (e-discovery) policy, and so on.
- Information governance policies allow administrators to obtain different perspectives on an organization's online and offline data, without the need for a dedicated data silo created solely for each different viewpoint.
- the data storage systems herein build an index that reflects the contents of a distributed data set that spans numerous clients and storage devices, including both primary data and secondary copies, and online and offline copies.
- An organization may apply multiple information governance policies in a top-down manner over that unified data set and indexing schema in order to view and manipulate the data set through different lenses, each of which is adapted to a particular compliance or business goal.
- An information governance policy may comprise a classification policy, which defines a taxonomy of classification terms or tags relevant to a compliance task and/or business objective.
- a classification policy may also associate a defined tag with a classification rule.
- a classification rule defines a particular combination of criteria, such as users who have created, accessed or modified a document or data object; file or application types; content or metadata keywords; clients or storage locations; dates of data creation and/or access; review status or other status within a workflow (e.g., reviewed or un-reviewed); modification times or types of modifications; and/or any other data attributes in any combination, without limitation.
- a classification rule may also be defined using other classification tags in the taxonomy.
- an e-discovery classification policy might define a classification tag “privileged” that is associated with documents or data objects that (1) were created or modified by legal department staff, or (2) were sent to or received from outside counsel via email, or (3) contain one of the following keywords: “privileged” or “attorney” or “counsel,” or other like terms. Accordingly, all these documents or data objects will be classified as “privileged.”
- An entity tag may be, for example, any content that matches a defined data mask format.
- entity tags might include, e.g., social security numbers (e.g., any numerical content matching the formatting mask XXX-XX-XXXX), credit card numbers (e.g., content having a 13-16 digit string of numbers), SKU numbers, product numbers, etc.
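- A hedged sketch of entity tagging by data mask format follows; the regular expressions are deliberately simplified illustrations (not production-grade validators) and the tag names are assumptions.

    import re

    ENTITY_MASKS = {
        "social_security_number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card_number": re.compile(r"\b\d{13,16}\b"),
    }

    def tag_entities(text: str) -> dict:
        """Return entity tags whose data mask matches content in the text."""
        return {tag: mask.findall(text)
                for tag, mask in ENTITY_MASKS.items() if mask.search(text)}

    print(tag_entities("SSN 123-45-6789 billed to card 4111111111111111"))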
- a user may define a classification policy by indicating criteria, parameters or descriptors of the policy via a graphical user interface, such as a form or page with fields to be filled in, pull-down menus or entries allowing one or more of several options to be selected, buttons, sliders, hypertext links or other known user interface tools for receiving user input, etc.
- a user may define certain entity tags, such as a particular product number or project ID.
- the classification policy can be implemented using cloud-based techniques.
- the storage devices may be cloud storage devices, and the storage manager 140 may execute a cloud service provider API over a network to classify data stored on cloud storage devices.
- a restore operation can be initiated involving one or more of secondary copies 116 A, 116 B, and 116 C.
- a restore operation logically takes a selected secondary copy 116 , reverses the effects of the secondary copy operation that created it, and stores the restored data to primary storage where a client computing device 102 may properly access it as primary data.
- a media agent 144 and an appropriate data agent 142 (e.g., executing on the client computing device 102 ) perform the tasks needed to complete a restore operation.
- data that was encrypted, compressed, and/or deduplicated in the creation of secondary copy 116 will be correspondingly rehydrated (reversing deduplication), uncompressed, and unencrypted into a format appropriate to primary data.
- Metadata stored within or associated with the secondary copy 116 may be used during the restore operation.
- restored data should be indistinguishable from other primary data 112 .
- the restored data has fully regained the native format that may make it immediately usable by application 110 .
- a user may manually initiate a restore of backup copy 116 A, e.g., by interacting with user interface 158 of storage manager 140 or with a web-based console with access to system 100 .
- Storage manager 140 may access data in its index 150 and/or management database 146 (and/or the respective storage policy 148 A) associated with the selected backup copy 116 A to identify the appropriate media agent 144 A and/or secondary storage device 108 A where the secondary copy resides.
- the user may be presented with a representation (e.g., stub, thumbnail, listing, etc.) and metadata about the selected secondary copy, in order to determine whether this is the appropriate copy to be restored, e.g., date that the original primary data was created.
- Storage manager 140 will then instruct media agent 144 A and an appropriate data agent 142 on the target client computing device 102 to restore secondary copy 116 A to primary storage device 104 .
- a media agent may be selected for use in the restore operation based on a load balancing algorithm, an availability based algorithm, or other criteria.
- the selected media agent e.g., 144 A, retrieves secondary copy 116 A from disk library 108 A. For instance, media agent 144 A may access its index 153 to identify a location of backup copy 116 A on disk library 108 A, or may access location information residing on disk library 108 A itself.
- a backup copy 116 A that was recently created or accessed may be cached to speed up the restore operation.
- media agent 144 A accesses a cached version of backup copy 116 A residing in index 153 , without having to access disk library 108 A for some or all of the data.
- the media agent 144 A communicates the data to the requesting client computing device 102 .
- file system data agent 142 A and email data agent 142 B may unpack (e.g., restore from a backup format to the native application format) the data in backup copy 116 A and restore the unpackaged data to primary storage device 104 .
- secondary copies 116 may be restored to the same volume or folder in primary storage device 104 from which the secondary copy was derived; to another storage location or client computing device 102 ; to shared storage, etc.
- the data may be restored so that it may be used by an application 110 of a different version/vintage from the application that created the original primary data 112 .
- The formatting and structure of secondary copies 116 can vary depending on the embodiment.
- secondary copies 116 are formatted as a series of logical data units or “chunks” (e.g., 512 MB, 1 GB, 2 GB, 4 GB, or 8 GB chunks). This can facilitate efficient communication and writing to secondary storage devices 108 , e.g., according to resource availability. For example, a single secondary copy 116 may be written on a chunk-by-chunk basis to one or more secondary storage devices 108 .
- users can select different chunk sizes, e.g., to improve throughput to tape storage devices.
- each chunk can include a header and a payload.
- the payload can include files (or other data units) or subsets thereof included in the chunk, whereas the chunk header generally includes metadata relating to the chunk, some or all of which may be derived from the payload.
- media agent 144 , storage manager 140 , or other component may divide files into chunks and generate headers for each chunk by processing the files.
- Headers can include a variety of information such as file and/or volume identifier(s), offset(s), and/or other information associated with the payload data items, a chunk sequence number, etc.
- chunk headers can also be stored to index 153 of the associated media agent(s) 144 and/or to index 150 associated with storage manager 140 . This can be useful for providing faster processing of secondary copies 116 during browsing, restores, or other operations.
- the secondary storage device 108 returns an indication of receipt, e.g., to media agent 144 and/or storage manager 140 , which may update their respective indexes 153 , 150 accordingly.
- chunks may be processed (e.g., by media agent 144 ) according to the information in the chunk header to reassemble the files.
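- The chunking behavior described above can be sketched as follows; the chunk size, header fields, and use of a digest are assumptions chosen for a small runnable example rather than the system's actual on-media format.

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024   # 4 MB for this sketch (real chunks may be far larger)

    def make_chunks(file_id: str, data: bytes):
        """Divide a file into chunks and generate a metadata header for each."""
        for seq, offset in enumerate(range(0, len(data), CHUNK_SIZE)):
            payload = data[offset:offset + CHUNK_SIZE]
            header = {
                "file_id": file_id,
                "chunk_sequence_number": seq,
                "offset": offset,
                "payload_length": len(payload),
                "payload_digest": hashlib.sha256(payload).hexdigest(),
            }
            yield header, payload

    # A media agent could write each (header, payload) pair to secondary storage
    # and also record the headers in its index for faster browse and restore.
    for header, _ in make_chunks("backup_copy_116A", b"x" * (9 * 1024 * 1024)):
        print(header["chunk_sequence_number"], header["payload_length"])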
- Data can also be communicated within system 100 in data channels that connect client computing devices 102 to secondary storage devices 108 .
- These data channels can be referred to as “data streams,” and multiple data streams can be employed to parallelize an information management operation, improving data transfer rate, among other advantages.
- Example data formatting techniques including techniques involving data streaming, chunking, and the use of other data structures in creating secondary copies are described in U.S. Pat. Nos. 7,315,923, 8,156,086, and 8,578,120.
- FIGS. 1F and 1G are diagrams of example data streams 170 and 171 , respectively, which may be employed for performing information management operations.
- data agent 142 forms data stream 170 from source data associated with a client computing device 102 (e.g., primary data 112 ).
- Data stream 170 is composed of multiple pairs of stream header 172 and stream data (or stream payload) 174 .
- Data streams 170 and 171 shown in the illustrated example are for a single-instanced storage operation, and a stream payload 174 therefore may include single-instance (SI) data and/or non-SI data.
- a stream header 172 includes metadata about the stream payload 174 .
- This metadata may include, for example, a length of the stream payload 174 , an indication of whether the stream payload 174 is encrypted, an indication of whether the stream payload 174 is compressed, an archive file identifier (ID), an indication of whether the stream payload 174 is single instanceable, and an indication of whether the stream payload 174 is a start of a block of data.
- data stream 171 has the stream header 172 and stream payload 174 aligned into multiple data blocks.
- the data blocks are of size 64 KB.
- the first two stream header 172 and stream payload 174 pairs comprise a first data block of size 64 KB.
- the first stream header 172 indicates that the length of the succeeding stream payload 174 is 63 KB and that it is the start of a data block.
- the next stream header 172 indicates that the succeeding stream payload 174 has a length of 1 KB and that it is not the start of a new data block.
- Immediately following stream payload 174 is a pair comprising an identifier header 176 and identifier data 178 .
- the identifier header 176 includes an indication that the succeeding identifier data 178 includes the identifier for the immediately previous data block.
- the identifier data 178 includes the identifier that the data agent 142 generated for the data block.
- the data stream 171 also includes other stream header 172 and stream payload 174 pairs, which may be for SI data and/or non-SI data.
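- The stream layout of FIG. 1G can be mimicked with the toy structures below: a 64 KB data block carried as two stream header/payload pairs (63 KB then 1 KB), followed by an identifier header and identifier data for that block. Field names are illustrative, and a hash digest stands in for whatever identifier the data agent actually generates.

    import hashlib

    def stream_pair(payload: bytes, start_of_block: bool) -> dict:
        return {"header": {"length": len(payload),
                           "start_of_block": start_of_block,
                           "single_instanceable": True},
                "payload": payload}

    first = stream_pair(b"a" * 63 * 1024, start_of_block=True)     # 63 KB payload
    second = stream_pair(b"b" * 1 * 1024, start_of_block=False)    #  1 KB payload
    block = first["payload"] + second["payload"]                   # one 64 KB data block

    identifier_pair = {
        "identifier_header": {"identifies": "previous_data_block"},
        "identifier_data": hashlib.md5(block).hexdigest(),         # stand-in identifier
    }
    print(len(block), identifier_pair["identifier_data"][:8])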
- FIG. 1H is a diagram illustrating data structures 180 that may be used to store blocks of SI data and non-SI data on a storage device (e.g., secondary storage device 108 ).
- data structures 180 do not form part of a native file system of the storage device.
- Data structures 180 include one or more volume folders 182 , one or more chunk folders 184 / 185 within the volume folder 182 , and multiple files within chunk folder 184 .
- Each chunk folder 184 / 185 includes a metadata file 186 / 187 , a metadata index file 188 / 189 , one or more container files 190 / 191 / 193 , and a container index file 192 / 194 .
- Metadata file 186 / 187 stores non-SI data blocks as well as links to SI data blocks stored in container files.
- Metadata index file 188 / 189 stores an index to the data in the metadata file 186 / 187 .
- Container files 190 / 191 / 193 store SI data blocks.
- Container index file 192 / 194 stores an index to container files 190 / 191 / 193 .
- container index file 192 / 194 stores an indication of whether a corresponding block in a container file 190 / 191 / 193 is referred to by a link in a metadata file 186 / 187 .
- data block B 2 in the container file 190 is referred to by a link in metadata file 187 in chunk folder 185 .
- the corresponding index entry in container index file 192 indicates that data block B 2 in container file 190 is referred to.
- data block B 1 in container file 191 is referred to by a link in metadata file 187 , and so the corresponding index entry in container index file 192 indicates that this data block is referred to.
- data structures 180 illustrated in FIG. 1H may have been created as a result of separate secondary copy operations involving two client computing devices 102 .
- a first secondary copy operation on a first client computing device 102 could result in the creation of the first chunk folder 184
- a second secondary copy operation on a second client computing device 102 could result in the creation of the second chunk folder 185 .
- Container files 190 / 191 in the first chunk folder 184 would contain the blocks of SI data of the first client computing device 102 .
- the second secondary copy operation on the data of the second client computing device 102 would result in media agent 144 storing primarily links to the data blocks of the first client computing device 102 that are already stored in the container files 190 / 191 . Accordingly, while a first secondary copy operation may result in storing nearly all of the data subject to the operation, subsequent secondary storage operations involving similar data may result in substantial data storage space savings, because links to already stored data blocks can be stored instead of additional instances of data blocks.
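- The space savings described above can be sketched with a tiny in-memory model of the FIG. 1H structures: container files hold single-instance (SI) data blocks, metadata files hold links to those blocks, and a container index records whether a block is still referred to. All names and structures here are simplified assumptions.

    import hashlib

    container_file = {}     # digest -> SI data block (stands in for container files 190/191)
    container_index = {}    # digest -> True if some metadata file links to the block

    def store_block(block: bytes, metadata_file: list):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in container_file:
            container_file[digest] = block           # first instance: store the data
        metadata_file.append({"link_to": digest})    # later instances: store only a link
        container_index[digest] = True

    metadata_file_186, metadata_file_187 = [], []
    store_block(b"data block B2", metadata_file_186)   # first client stores the block
    store_block(b"data block B2", metadata_file_187)   # second client stores only a link
    print(len(container_file), metadata_file_187)      # 1 stored block, 1 link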
- a sparse file is a type of file that may include empty space (e.g., a sparse file may have real data within it, such as at the beginning of the file and/or at the end of the file, but may also have empty space in it that is not storing actual data, such as a contiguous range of bytes all having a value of zero).
- Having container files 190 / 191 / 193 be sparse files allows media agent 144 to free up space in container files 190 / 191 / 193 when blocks of data in container files 190 / 191 / 193 no longer need to be stored on the storage devices.
- media agent 144 creates a new container file 190 / 191 / 193 when a container file 190 / 191 / 193 either includes 100 blocks of data or when the size of the container file 190 exceeds 50 MB.
- media agent 144 creates a new container file 190 / 191 / 193 when a container file 190 / 191 / 193 satisfies other criteria (e.g., it contains from approx.
- a file on which a secondary copy operation is performed may comprise a large number of data blocks.
- a 100 MB file may comprise 400 data blocks of size 256 KB. If such a file is to be stored, its data blocks may span more than one container file, or even more than one chunk folder.
- a database file of 20 GB may comprise over 40,000 data blocks of size 512 KB. If such a database file is to be stored, its data blocks will likely span multiple container files, multiple chunk folders, and potentially multiple volume folders. Restoring such files may require accessing multiple container files, chunk folders, and/or volume folders to obtain the requisite data blocks.
- FIG. 2A illustrates a system 200 configured to address these and other issues by using backup or other secondary copy data to synchronize a source subsystem 201 (e.g., a production site) with a destination subsystem 203 (e.g., a failover site).
- Such an arrangement may be referred to as “live synchronization” and/or “live synchronization replication.”
- the source client computing devices 202 a include one or more virtual machines (or “VMs”) executing on one or more corresponding VM host computers 205 a , though the source need not be virtualized.
- the destination site 203 may be at a location that is remote from the production site 201 , or may be located in the same data center, without limitation.
- One or more of the production site 201 and destination site 203 may reside at data centers at known geographic locations, or alternatively may operate “in the cloud.”
- FIG. 2A illustrates an embodiment of a data flow which may be orchestrated at the direction of one or more storage managers (not shown).
- the source data agent(s) 242 a and source media agent(s) 244 a work together to write backup or other secondary copies of the primary data generated by the source client computing devices 202 a into the source secondary storage device(s) 208 a .
- the backup/secondary copies are retrieved by the source media agent(s) 244 a from secondary storage.
- source media agent(s) 244 a communicate the backup/secondary copies across a network to the destination media agent(s) 244 b in destination subsystem 203 .
- the data can be copied from source to destination in an incremental fashion, such that only changed blocks are transmitted, and in some cases multiple incremental backups are consolidated at the source so that only the most current changed blocks are transmitted to and applied at the destination.
- An example of live synchronization of virtual machines using the “incremental forever” approach is found in U.S. Patent Application No. 62/265,339 entitled “Live Synchronization and Management of Virtual Machines across Computing and Virtualization Platforms and Using Live Synchronization to Support Disaster Recovery.”
- a deduplicated copy can be employed to further reduce network traffic from source to destination.
- the system can utilize the deduplicated copy techniques described in U.S. Pat. No. 9,239,687, entitled “Systems and Methods for Retaining and Using Data Block Signatures in Data Protection Operations.”
- destination media agent(s) 244 b write the received backup/secondary copy data to the destination secondary storage device(s) 208 b .
- the synchronization is completed when the destination media agent(s) and destination data agent(s) 242 b restore the backup/secondary copy data to the destination client computing device(s) 202 b .
- the destination client computing device(s) 202 b may be kept “warm” awaiting activation in case failure is detected at the source.
- This synchronization/replication process can incorporate the techniques described in U.S. patent application Ser. No. 14/721,971, entitled “Replication Using Deduplicated Secondary Copy Data.”
- the synchronized copies can be viewed as mirror or replication copies.
- the production site 201 is not burdened with the synchronization operations. Because the destination site 203 can be maintained in a synchronized “warm” state, the downtime for switching over from the production site 201 to the destination site 203 is substantially less than with a typical restore from secondary storage.
- the production site 201 may flexibly and efficiently fail over, with minimal downtime and with relatively up-to-date data, to a destination site 203 , such as a cloud-based failover site.
- the destination site 203 can later be reverse synchronized back to the production site 201 , such as after repairs have been implemented or after the failure has passed.
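- A heavily simplified, in-memory sketch of the live synchronization flow described above appears below: the destination is seeded once, and thereafter only blocks changed since the last synchronized copy are transmitted and applied to the warm destination. Every class and method is a hypothetical stand-in, not the interface of the actual data agents or media agents.

    class SourceMediaAgent:
        def __init__(self):
            self.copies = {0: {"blk0": b"base image"}}   # copy_id -> {block_id: data}
            self.latest = 0

        def take_backup(self, changed_blocks: dict):
            """Consolidate the previous copy with newly changed blocks."""
            self.latest += 1
            self.copies[self.latest] = dict(self.copies[self.latest - 1], **changed_blocks)

        def changed_blocks_since(self, copy_id: int) -> dict:
            """Return only the blocks that differ from an older, already-synced copy."""
            old, new = self.copies[copy_id], self.copies[self.latest]
            return {k: v for k, v in new.items() if old.get(k) != v}

    class DestinationSite:
        def __init__(self):
            self.warm_vm_blocks = {}     # destination VM kept "warm" awaiting activation

        def apply(self, blocks: dict):
            self.warm_vm_blocks.update(blocks)

    source, destination = SourceMediaAgent(), DestinationSite()
    destination.apply(source.copies[source.latest])              # initial full seeding
    last_synced = source.latest
    source.take_backup({"blk1": b"changed data"})                # next backup at the source
    destination.apply(source.changed_blocks_since(last_synced))  # only changed blocks move
    print(destination.warm_vm_blocks)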
- FIG. 2B illustrates an information management system 200 having an architecture that provides such advantages, and incorporates use of a standard file system protocol, such as network file system (NFS) or Common Internet File System (CIFS), between primary and secondary storage subsystems 217 , 218 .
- data agent 242 can co-reside with media agent 244 on the same server (e.g., a secondary storage computing device such as component 106 ), or in some other location in secondary storage subsystem 218 .
- secondary storage subsystem 218 allocates an NFS network path to the client computing device 202 or to one or more target applications 210 running on client computing device 202 .
- the client computing device 202 mounts the designated NFS path and writes data to that NFS path.
- the NFS path may be obtained from NFS path data 215 stored locally at the client computing device 202 , and which may be a copy of or otherwise derived from NFS path data 219 stored in the secondary storage subsystem 218 .
- Storage manager 240 can include a pseudo-client manager 217 , which coordinates the process by, among other things, communicating information relating to client computing device 202 and application 210 (e.g., application type, client computing device identifier, etc.) to data agent 242 , obtaining appropriate NFS path data from the data agent 242 (e.g., NFS path information), and delivering such data to client computing device 202 .
- client computing device 202 reads from the designated NFS network path, and the read request is translated by data agent 242 .
- the data agent 242 then works with media agent 244 to retrieve, re-process (e.g., re-hydrate, decompress, decrypt), and forward the requested data to client computing device 202 using NFS.
- the illustrative architecture effectively decouples the client computing devices 202 from the installed components of system 200 , improving both scalability and plug-ability of system 200 .
- the secondary storage subsystem 218 in such environments can be treated simply as a read/write NFS target for primary storage subsystem 217 , without the need for information management software to be installed on client computing devices 202 .
- an enterprise implementing a cloud production computing environment can add VM client computing devices 202 without installing and configuring specialized information management software on these VMs. Rather, backups and restores are achieved transparently, where the new VMs simply write to and read from the designated NFS path.
- FIG. 2C shows a block diagram of an example of a highly scalable, managed data pool architecture useful in accommodating such data growth.
- the illustrated system 200 , which may be referred to as a “web-scale” architecture according to certain embodiments, can be readily incorporated into both open compute/storage and common-cloud architectures.
- the illustrated system 200 includes a grid 245 of media agents 244 logically organized into a control tier 231 and a secondary or storage tier 233 .
- Media agents assigned to the control tier 231 can be configured to manage a secondary storage pool 208 as a deduplication store, and be configured to receive client write and read requests from the primary storage subsystem 217 , and direct those requests to the secondary tier 233 for servicing.
- media agents CMA 1 -CMA 3 in the control tier 231 maintain and consult one or more deduplication databases 247 , which can include deduplication information (e.g., data block hashes, data block links, file containers for deduplicated files, etc.) sufficient to read deduplicated files from secondary storage pool 208 and write deduplicated files to secondary storage pool 208 .
- system 200 can incorporate any of the deduplication systems and methods shown and described in U.S. Pat. No. 9,020,900, entitled “Distributed Deduplicated Storage System,” and U.S. Pat. Pub. No. 2014/0201170, entitled “High Availability Distributed Deduplicated Storage System.”
- Media agents SMA 1 -SMA 6 assigned to the secondary tier 233 receive write and read requests from media agents CMA 1 -CMA 3 in control tier 231 , and access secondary storage pool 208 to service those requests.
- Media agents CMA 1 -CMA 3 in control tier 231 can also communicate with secondary storage pool 208 , and may execute read and write requests themselves (e.g., in response to requests from other control media agents CMA 1 -CMA 3 ) in addition to issuing requests to media agents in secondary tier 233 .
- deduplication database(s) 247 can in some cases reside in storage devices in secondary storage pool 208 .
- each of the media agents 244 (e.g., CMA 1 -CMA 3 , SMA 1 -SMA 6 , etc.) in grid 245 can be allocated a corresponding dedicated partition 251 A- 251 I , respectively, in secondary storage pool 208 .
- Each partition 251 can include a first portion 253 containing data associated with (e.g., stored by) media agent 244 corresponding to the respective partition 251 .
- System 200 can also implement a desired level of replication, thereby providing redundancy in the event of a failure of a media agent 244 in grid 245 .
- each partition 251 can further include a second portion 255 storing one or more replication copies of the data associated with one or more other media agents 244 in the grid.
- System 200 can also be configured to allow for seamless addition of media agents 244 to grid 245 via automatic configuration.
- a storage manager (not shown) or other appropriate component may determine that it is appropriate to add an additional node to control tier 231 , and perform some or all of the following: (i) assess the capabilities of a newly added or otherwise available computing device as satisfying a minimum criteria to be configured as or hosting a media agent in control tier 231 ; (ii) confirm that a sufficient amount of the appropriate type of storage exists to support an additional node in control tier 231 (e.g., enough disk drive capacity exists in storage pool 208 to support an additional deduplication database 247 ); (iii) install appropriate media agent software on the computing device and configure the computing device according to a pre-determined template; (iv) establish a partition 251 in the storage pool 208 dedicated to the newly established media agent 244 ; and (v) build any appropriate data structures (e.g., an instance of deduplication database 247 ).
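- Steps (i)-(v) above might be orchestrated roughly as sketched below; the capability checks, thresholds, and helper structures are simplified assumptions used only to illustrate the sequence.

    def add_control_tier_node(candidate: dict, storage_pool: dict,
                              min_cpus: int = 8, min_free_gb: int = 500):
        # (i) assess whether the candidate meets minimum criteria for a control media agent
        if candidate["cpus"] < min_cpus:
            return None
        # (ii) confirm enough storage exists to support another deduplication database
        if storage_pool["free_gb"] < min_free_gb:
            return None
        # (iii) "install" media agent software and configure it from a template
        node = {"host": candidate["host"], "role": "control_media_agent"}
        # (iv) establish a dedicated partition in the storage pool for the new media agent
        storage_pool["partitions"].append({"owner": node["host"]})
        storage_pool["free_gb"] -= min_free_gb
        # (v) build the appropriate data structures, e.g., a deduplication database instance
        node["deduplication_database"] = {}
        return node

    pool = {"free_gb": 2_000, "partitions": []}
    print(add_control_tier_node({"host": "cma4", "cpus": 16}, pool))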
- The systems and components illustrated in FIGS. 2A, 2B, and 2C may be implemented in any combination and permutation to satisfy data storage management and information management needs at one or more locations and/or data centers.
- FIG. 3A is a block diagram illustrating system 300 for snap-based disaster recovery orchestration of virtual machine failover and failback operations, according to an illustrative embodiment.
- FIG. 3A depicts logical views of connections, relationships, and/or operations associated with system 300 ; the connections and operations are supported by a physical networking and communications infrastructure that is well known in the art.
- FIG. 3A depicts: data storage management system 300 in communication with virtualization manager (e.g., vCenter VM server manager) 303 and primary storage resources (e.g., storage array/filer) 304 at a virtualized data center, which is a source for failover and a destination for failback; system 300 in further communication with failover virtualization manager 383 D and failover storage resources 384 D, which are configured at a virtualized DR site, which is a failover destination.
- FIG. 3A further depicts VMs 302 , which are managed by manager 303 ; snapshot replication operation 305 ; and VMs 382 D, which are managed by manager 383 D.
- Data storage management system 300 is a system analogous to system 100 and further comprising additional functionality for snap-based DR orchestration, such as administrative features for defining and configuring source and failover components, failover groups, customization of failover components, mapping between source and failover VMs, scheduling and tracking of snapshot generation and snapshot replication, etc. More details are given in FIGS. 4, 5A, and 5B .
- VMs 302 are virtual machines that execute on one or more VM hosts (not shown in the present figure) and are managed by manager 303 . VMs 302 are said to be sources of data, because they operate in a production environment. Each VM 302 has a datastore (e.g., VMDK, virtual disk, etc.) that comprises the VM's data and is configured in an associated primary data storage 304 , such as the depicted storage array/filer or cloud storage resources (not shown here). There is no limit on how many VMs 302 can be failed over by system 300 using the illustrative snap-based DR orchestration techniques described herein. VMs 302 and their host computing devices are protected by but are not part of system 300 .
- Primary virtualization manager 303 is a computing device (e.g., a server) that provides a centralized platform for controlling any number of VM hosts and their VMs.
- An illustrative example of manager 303 is VMware vCenter Server from VMware, but the invention is not limited to VMware virtualization. A specialized data agent component of system 300 (e.g., virtual server agent 442 ) interoperates with manager 303 to ensure that VMs 302 and their datastores are protected by system 300 , e.g., making backup copies, replicating datastores, orchestrating failover, etc.
- Primary data storage 304 are one or more data storage devices that are configured to store primary data for and generated by VMs 302 , i.e., primary data storage 304 is where datastores 504 for VMs 302 reside. Examples of primary data storage 304 include SAN storage arrays, NAS filers/clusters, and/or cloud storage (not shown here). Primary data storage 304 are equipped with features for taking/making snapshots of their own data storage volumes, which are referred to herein as “hardware snapshots.” Primary data storage 304 are further equipped to replicate snapshots to another storage resource, e.g., 384 C, 384 D, etc. NetApp data storage appliances are an example storage array/filer 304 , but the invention is not limited to NetApp appliances or NetApp replication.
- Snapshot replication 305 represents a number of operations performed by data storage resources 304 / 384 and managed by components of system 300 (e.g., storage manager 440 , media agent 444 , etc.).
- As noted, primary data storage 304 are equipped to take hardware snapshots of their own data storage volumes. Each hardware snapshot is stored at the storage resource (e.g., array) that took the snapshot, e.g., 304 .
- System 300 manages the schedule for and initiates the creation of the snapshots by communicating instructions to primary data storage 304 .
- Primary data storage 304 are further equipped with features for replicating the snapshots to other like (or compatible) storage resources, e.g., 384 C, 384 D, etc.
- the source and destination storage resources sometimes maintain a so-called “mirror relationship” that ensures that snapshots at the destination are read-only in order to be available for DR as needed.
- these replication operations are referred to as “array-to-array” replication, because the arrays/filers communicate with each other to structure and transmit each snapshot, even though the operation is scheduled and initiated by system 300 (e.g., using an auxiliary copy job).
- Similar and equivalent techniques are used between arrays and cloud storage resources, or cloud-to-cloud.
- Some embodiments that use NetApp arrays use so-called “vault copy” features to replicate snapshots from source to DR site.
- Other embodiments that use NetApp arrays use so-called “mirror copy” features to replicate snapshots from source to destination. The embodiments are not limited to NetApp arrays or to these techniques for replicating snapshots.
- An auxiliary copy job as managed by system 300 comprises snapshot replication operation 305 (“array-to-array” replication or equivalent to/from/between cloud storage resources).
- the illustrative DR orchestration job as managed by system 300 comprises snapshot replication operation 305 (array-to-array or equivalent to/from/between cloud storage resources).
- VMs 382 D are VMs at a virtualized data center acting as a DR site. These VMs are managed by manager 383 D at the DR site. VMs 382 D are shown here in a dotted outline, because according to the illustrative snap-based DR orchestration approach, they are not active until failover. Each VM 382 D is pre-administered in system 300 to correspond (or map) to a source VM 302 . Thus, each source VM 302 maps to a DR VM 382 D (or 382 C in the next figure).
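- The pre-administered one-to-one correspondence between source VMs 302 and failover VMs 382 can be pictured as a simple lookup record kept in management database 146 ; the field names below are hypothetical and serve only to illustrate the mapping.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class FailoverMapping:
    """One pre-administered source-to-DR pairing (cf. VM 302 -> VM 382D or 382C)."""
    source_vm: str          # production VM at the source site
    failover_vm: str        # corresponding VM at the DR site, powered off until failover
    source_datastore: str   # datastore configured in primary data storage 304
    dr_site: str            # e.g., "virtualized-dc" or "cloud"

# Hypothetical failover group: each source VM maps to exactly one DR VM.
failover_group: Dict[str, FailoverMapping] = {
    "vm-app-01": FailoverMapping("vm-app-01", "dr-vm-app-01", "ds-app-01", "virtualized-dc"),
    "vm-db-01":  FailoverMapping("vm-db-01",  "dr-vm-db-01",  "ds-db-01",  "cloud"),
}

def dr_target(source_vm: str) -> str:
    """Return the DR VM that a given source VM is administered to fail over to."""
    return failover_group[source_vm].failover_vm
```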
- Failover virtualization manager 383 D (or “manager 383 D”) is analogous to manager 303 and operates at the DR site. Manager 383 D manages VMs 382 D and as noted above does not activate (power up) these VMs until failover. Likewise, manager 383 D does not activate datastores for VMs 382 D until failover.
- Failover storage resources 384 D are configured at a virtualized DR site, which is a failover destination.
- Failover storage 384 D (e.g., storage array, filer, filer cluster, etc.) are analogous to primary data storage 304 , but operate at the DR site.
- Failover storage 384 D comprises a number of data storage volumes (not shown here) for storing the replicated snapshots received from primary data storage 304 . However, these data storage volumes do not become associated with VMs 382 D until such time as manager 383 D establishes for each failover VM 382 D a corresponding datastore in one of the data storage volumes in failover storage resources 384 D.
- FIG. 3B is a block diagram illustrating the system 300 , wherein the DR site is implemented in a cloud computing environment, according to an illustrative embodiment.
- FIG. 3B depicts logical views of connections, relationships, and/or operations associated with system 300 ; the connections and operations are supported by a physical networking and communications infrastructure that is well known in the art.
- FIG. 3B depicts: data storage management system 300 in communication with primary virtualization manager (e.g., vCenter) 303 and primary storage (e.g., storage array/filer) 304 at a virtualized data center, which is a source for failover and a destination for failback; system 300 is in further communication with failover virtualization manager 383 C and cloud-based failover storage resources 384 C, which are configured at cloud computing environment 390 , which is a failover destination.
- FIG. 3B further depicts VMs 302 , which are managed by manager 303 ; snapshot replication operation 305 ; and VMs 382 C, which are managed by manager 383 C.
- VMs 382 C are analogous to VMs 382 D and are instantiated in cloud computing environment 390 . Collectively, VMs 382 C or VMs 382 D are referred to herein as “failover VMs 382 ” as a shorthand.
- Manager 383 C is functionally analogous to manager 383 D and is instantiated in cloud computing environment 390 . Managers 383 C and/or 383 D are referred to herein as “failover virtualization manager 383 ” as a shorthand.
- Cloud-based failover storage resources 384 C are functionally analogous to storage array/filer 384 D and are instantiated in cloud computing environment 390 .
- An illustrative example is Amazon AWS Elastic Block Store (“EBS”), which is well known in the art—but the invention is not so limited.
- cloud-based failover storage 384 C comprises data storage volumes (not shown here) that receive and store replicated snapshots from the source site. However, these data storage volumes do not become associated with failover VMs 382 C until such time as failover virtualization manager 383 C establishes for each failover VM 382 C a corresponding datastore in one of the data storage volumes in failover storage resources 384 C.
- Any cloud-based storage technology may be used as failover storage resources 384 C.
- storage resources 384 C and 384 D at the DR site are referred to herein as “failover storage 384 ” as a shorthand.
- some alternative embodiments comprise a cloud computing environment at the source and a virtualized data center at the DR site; other alternative embodiments comprise a cloud computing environment at both source and DR site, whether the cloud computing environments are from the same cloud service provider or different ones. The latter scenario enables cloud-to-cloud failovers.
- some alternative environments comprise more than one DR site, thus enabling a choice of DR sites for clone testing and planned failovers.
- FIG. 4 is a block diagram illustrating some salient components of system 300 , according to an illustrative embodiment.
- FIG. 4 depicts VMs 302 , manager 303 , storage 304 , snapshot replication 305 , failover storage 384 , failover virtualization manager 383 , and failover VMs 382 ; and components of data storage management system 300 including: storage manager 440 , virtual server agent 442 , media agent 444 , virtual server agent (VSA) 492 , and media agent 494 .
- Storage manager 440 is analogous to storage manager 140 and further comprises additional features for operating in system 300 , such as features for managing snap-based DR orchestration. More details are given in other figures.
- VSA 442 (or “VSA data agent 442 ”) is a data agent analogous to data agent 142 and additionally comprising features for operating in system 300 , such as interoperability with DR orchestration logic in storage manager 440 .
- VSA data agent 442 is generally responsible for taking part in snap backup jobs, e.g., triggering manager 303 to quiesce one or more source VMs 302 so that storage 304 can take a snapshot of the volumes hosting the datastore(s) corresponding to the source VM(s) 302 .
- VSA data agent 442 communicates with media agent 444 and with storage manager 440 , which manages snap backup jobs, auxiliary copy jobs, and DR orchestration jobs. More details are given in other figures.
- Media agent 444 is analogous to media agent 144 and additionally comprises features for operating in system 300 , such as interoperability with DR orchestration logic in storage manager 440 .
- Media agent 444 is generally responsible for instructing storage 304 to take a snapshot of the volumes hosting the datastore(s) corresponding to the source VM(s) 302 in a snap backup job, and is further responsible for instructing storage 304 to replicate snapshot(s) to failover storage 384 in an auxiliary copy job.
- Media agent 444 also maintains indexing information (e.g., in a media agent index 153 ) that tracks information about the snapshots generated and replicated by the snap backup jobs and auxiliary copy jobs.
- Media agent 444 communicates with VSA data agent 442 and with storage manager 440 , which manages snap backup jobs, auxiliary copy jobs, and DR orchestration jobs. More details are given in other figures.
- VSA 492 is analogous to VSA data agent 442 and is associated with failover virtualization manager 383 . Accordingly, in a DR orchestration job, VSA 492 instructs failover virtualization manager 383 when to create datastores for failover VMs 382 , causes failover virtualization manager 383 to register failover VMs 382 and implement customized parameters, and causes the failover VMs 382 to be powered on at the DR site. VSA 492 communicates with media agent 494 and storage manager 440 during DR orchestration jobs to perform failovers to the DR site and/or failbacks therefrom.
- Media agent 494 is analogous to media agent 444 and is associated with failover storage 384 . Accordingly, in a DR orchestration job, media agent 494 instructs failover storage 384 to bring online certain data storage volumes comprising replicated snapshots to be used in the failover. These data storage volumes will be configured as datastores for the failover VMs 382 . Media agent 494 communicates with VSA 492 and with storage manager 440 during DR orchestration jobs to perform failovers to the DR site and/or failbacks therefrom.
- System 300 is not limited to the depicted components shown in the present figure. One or more of the components shown in systems 100 and 200 herein also can be present in system 300 . Likewise, there is no limit to how many VSA data agents 442 , VSAs 492 , media agents 444 , and/or media agents 494 are configured in system 300 .
- FIG. 5A is a block diagram illustrating some salient components of system 300 , wherein the source site and DR site are virtualized data centers, according to an illustrative embodiment.
- The present figure depicts: virtualization manager 303 ; primary storage 304 comprising datastore 504 ; failover virtualization manager 383 ; failover storage 384 comprising datastore 584 ; storage manager 440 comprising management database 146 and DR orchestration logic 540 ; VM host 502 comprising VMs 302 managed by hypervisor 512 ; backup node 550 comprising VSA data agent 442 and media agent 444 ; VM host 552 comprising failover VMs 382 managed by hypervisor 553 ; and backup node 590 comprising VSA 492 and media agent 494 .
- Storage manager 440 , backup node 550 , and backup node 590 are components of data storage management system 300 .
- the computing devices 550 and 590 hosting VSAs and media agents are not part of system 300 , whereas VSAs, media agents, and storage manager 440 form a core portion of system 300 .
- Management database 146 is a logical component of storage manager 440 and comprises storage policies and schedules that govern snap backup copy jobs and auxiliary copy jobs, which may or may not be invoked by a DR orchestration job.
- the DR orchestration job uses snapshots that were previously generated and replicated in the ordinary course of snap backup jobs and auxiliary copy jobs, respectively.
- the DR orchestration job invokes a snap backup job and an auxiliary copy job on demand to ensure that the planned failover/clone testing uses the latest data snapshotted from datastore(s) 504 .
- VM host 502 is a computing device comprising one or more hardware processors and computer memory and is configured for hosting virtual machines 302 .
- the hosting is managed and controlled by hypervisor 512 , which is any kind of hypervisor and is well known in the art.
- Datastore 504 is a repository for storing data and/or metadata that is associated with, is used by, and is generated by a corresponding VM such as production VMs 302 .
- Each VM 302 has a corresponding datastore 504 ; the relationship between datastore 504 and its corresponding VM 302 is established by virtualization manager 303 .
- Each datastore 504 is configured in a data storage volume (physical volume or logical volume) 598 configured in a data storage resource such as primary storage 304 . More details are given in FIG. 5C .
- DR orchestration logic 540 is a functional component of storage manager 440 .
- DR orchestration logic 540 is generally responsible for performing a number of operations, at the source and at the DR site, that collectively ensure a successful failover occurs.
- DR orchestration logic also manages failbacks. More details are given in regard to method 600 .
- Backup node 550 is a computing device that comprises one or more hardware processors and computer memory for executing or hosting one or more VSA data agents 442 and one or more media agents 444 .
- Backup node 550 is illustratively configured in the same data center as the source data in datastores 504 associated with source/production VMs 302 .
- VM host 552 is a computing device comprising one or more hardware processors and computer memory and is configured for hosting failover virtual machines 382 .
- the hosting is managed and controlled by hypervisor 553 , which is any kind of hypervisor and is well known in the art.
- Hypervisor 553 may be the same as hypervisor 512 , but the invention is not so limited. Other compatible or like hypervisors also may be implemented without guaranteeing exact identity to hypervisor 512 .
- Datastore 584 is analogous to datastore 504 . Each datastore 584 has a corresponding failover VM 382 . The reason for the dotted arrow between datastore 584 and failover VM 382 is that this relationship is established at failover time, since failover VMs 382 and datastores 584 are not maintained in an active state prior to failover.
- Backup node 590 is a computing device that comprises one or more hardware processors and computer memory for executing or hosting one or more VSA 492 data agents and one or more media agents 494 .
- Backup node 590 is illustratively configured at the DR site.
- Co-location (physical, or logical in the same cloud computing account) provides improved performance between backup node 590 and the communicatively coupled components, failover storage 384 and failover virtualization manager 383 .
- the dotted arrows shown between certain components at the DR site illustrate that failover VMs 382 and their corresponding datastores 584 are not maintained in an active state prior to failover. See method 600 in FIG. 6 for more details.
- FIG. 5B is a block diagram illustrating some salient components of system 300 , wherein the DR site is implemented in a cloud computing environment, according to an illustrative embodiment. Most of the components were shown and described in earlier figures.
- the present figure depicts a cloud computing environment 390 (e.g., a customer's cloud computing account) that hosts failover components such as failover storage 384 , failover virtualization manager 383 , and failover VMs 382 .
- Cloud computing environment 390 also hosts one or more on-demand VMs that act as backup node(s) 592 , each backup node 592 hosting a VSA 492 data agent and/or a media agent 494 .
- the depicted DR site is hosted in a cloud service account that is fully equipped to act as a failover site using the illustrative snap-based DR orchestration approach.
- the source/failback site is also hosted by a cloud computing environment suitably equipped with all the depicted components.
- backup node 550 and storage manager 440 execute on VMs instantiated at the source/failback site.
- storage manager 440 is configured/instantiated at another, distinct data center or cloud service account, and need not be co-located (physically or logically) with the other components of system 300 , such as VSA data agent 442 , media agent 444 , VSA 492 , or media agent 494 .
- the dotted arrows shown between certain components at the DR site illustrate that failover VMs 382 and their corresponding datastores 584 are not maintained in an active state prior to failover. See method 600 in FIG. 6 for more details.
- FIG. 5C is a block diagram illustrating some salient components involved in snap backup jobs and auxiliary copy jobs, according to an illustrative embodiment.
- FIG. 5C depicts: primary storage 304 comprising datastore 504 configured in a data storage volume 598 , and snapshot S 598 taken from volume 598 ; failover storage 384 comprising replicated snapshot SR 598 , which is replicated from snapshot S 598 , and data storage volume 599 comprising datastore 584 ; backup node 550 comprising VSA data agent 442 and media agent 444 ; and backup node 590 / 592 comprising VSA 492 and media agent 494 .
- Data storage volume 598 is a physical volume or a logical volume (e.g., implemented using a logical volume manager) implemented in primary data storage 304 .
- Data storage volume 598 comprises one or more datastores 504 , each datastore 504 associated with a different source VM 302 .
- Snapshot S 598 is a hardware snapshot of data storage volume 598 taken by primary storage 304 as directed by media agent 444 , e.g., using APIs, using custom scripts, etc. Snapshot S 598 is taken in the course of a snap backup job managed by storage manager 440 . Snapshot S 598 is stored at primary storage 304 .
- Snapshot S 598 is replicated by primary storage 304 to failover storage 384 as directed by media agent 444 in the course of an auxiliary copy job managed by storage manager 440 .
- the auxiliary copy job generates a snapshot SR 598 that is a replica of snapshot S 598 .
- Snapshot SR 598 is stored at failover storage 384 in a data storage volume 599 .
- the DR orchestration job will create a relationship between data in snapshot SR 598 and a failover VM 382 and will establish datastore 584 corresponding to the failover VM 382 .
- FIG. 6 is a flow chart that depicts some salient operations of a method 600 according to an illustrative embodiment.
- Method 600 is performed by one or more components of system 300 , except as stated otherwise.
- Components of system 300 interoperate with each other and with other components described herein to successfully orchestrate DR failovers and failbacks using snap-based technologies.
- hardware components e.g., VM hosts, virtualization managers, storage resources, backup nodes, storage manager, etc.
- networking are configured at source virtualized data center and at DR/failover site. This initial set-up is well known in the art.
- storage manager 440 configures storage policies for snap backup jobs and auxiliary copy jobs.
- the storage policies govern when snap backup and auxiliary copy jobs are to run and which media agent(s) (e.g., 444 , 494 ) will be involved in each job, as well as specifying the data sources for the jobs, e.g., datastore 504 , data storage volume 598 , snapshot S 598 , etc.
- the storage policies are illustratively stored in management database 146 . See also FIG. 13 .
- storage manager 440 configures parameters for the source data and the failover destination.
- a number of administrative entries are configured here, e.g., failover group, VM host mapping, network settings, domain & IP address customization for DR site, etc.
- a failover group is defined, which specifies one or more source VMs 302 to be failed over by DR orchestration jobs, a mapping between source VM host 502 and DR VM host 552 , and an indication that the failover is to be made using the illustrative snap-based DR orchestration approach. See also FIG. 11 , FIG. 12 .
- Customization ensures that appropriate IP addresses and domain names are used at the DR site.
- block 606 ensures that there is a complete plan for selecting source VMs 302 and failing them over to appropriate entities at the DR site.
- block 606 provides the information to be used by the DR orchestration job in order to have a successful failover event.
- all the administrative parameters configured at block 606 are stored in management database 146 .
- one or more of these administrative parameters are communicated as needed by storage manager 440 to media agents and data agents when initiating the DR orchestration job.
- system 300 performs snap backup jobs.
- storage manager 440 instructs media agent 444 and VSA data agent 442 to launch a snap backup job for a certain source VM 302 .
- VSA data agent 442 reports to media agent 444 an identity of where the VM's datastore is located, e.g., in a data storage volume 598 .
- Media agent 444 instructs (e.g., using APIs, custom scripts, etc.) primary storage 304 to take a snapshot of data storage volume 598 , resulting in snapshot S 598 stored in primary storage 304 .
- the successful generation of snapshot S 598 is noted by media agent 444 and the snapshot is tracked in media agent index 153 at media agent 444 .
- These snap backup jobs are performed according to a plan (e.g., RPO plan, opportunistic plan, etc.), schedule, and/or storage policies, one or more of which are administered at storage manager 440 and illustratively stored in management database 146 . Job results and the location of media agent index 153 are reported back to storage manager 440 for future reference.
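- A minimal sketch of the snap backup flow described above follows, with toy dictionaries standing in for primary data storage 304 and media agent index 153 ; the function and key names are hypothetical, since real deployments drive the array through whatever APIs or custom scripts it exposes.

```python
import time
from typing import Dict

def run_snap_backup_job(primary_storage: Dict, media_agent_index: Dict,
                        vm_name: str, volume: str) -> str:
    """Hedged sketch of a snap backup job: snapshot the volume hosting the VM's datastore."""
    # Media agent 444 instructs primary storage (simulated here as a dict) to take a hardware snapshot.
    snapshot_id = f"S-{volume}-{int(time.time())}"
    primary_storage.setdefault("snapshots", {})[snapshot_id] = {"vm": vm_name, "volume": volume}
    # Media agent 444 records the snapshot in its index (cf. media agent index 153).
    media_agent_index[snapshot_id] = {"vm": vm_name, "volume": volume,
                                      "replica_id": None, "replicated_at": None}
    return snapshot_id

# Usage: snapshot the volume hosting vm-app-01's datastore.
array, index = {}, {}
snap = run_snap_backup_job(array, index, "vm-app-01", "vol-598")
```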
- system 300 performs auxiliary copy jobs to replicate snapshots from primary storage 304 to failover storage 384 (e.g., array, filer, cloud).
- storage manager 440 initiates an auxiliary copy job by instructing media agent 444 to replicate snapshot(s) in primary storage (e.g., snapshot S 598 ) to failover storage 384 .
- Media agent 444 in turn instructs primary storage 304 (e.g., using APIs, custom scripts, etc.) to begin an “array-to-array” snapshot replication operation.
- Array-to-array is used here as shorthand for hardware-to-hardware replication, which is handled by the storage resources themselves under the direction and instruction of media agent 444 as directed by storage manager 440 .
- system 300 is responsible for the auxiliary copy job, even if the replication operation itself is performed by the storage resources.
- the replicated snapshot SR 598 is stored at failover storage 384 .
- Media agent(s) 494 and/or 444 note the completion of the snapshot replication and update media agent index 153 with information about replicated snapshot SR 598 .
- These auxiliary copy jobs are performed according to a plan (e.g., RPO plan, opportunistic plan, etc.), schedule, and/or storage policies, one or more of which are administered at storage manager 440 and illustratively stored in management database 146 . Job results and the location of media agent index 153 are reported back to storage manager 440 for future reference. From block 610 , control passes to block 612 , block 614 , and/or block 616 .
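- Continuing the toy model from the snap backup sketch above, an auxiliary copy job might be outlined as below; again, every name is hypothetical, and the actual data movement is performed array-to-array by the storage resources themselves.

```python
import time
from typing import Dict

def run_auxiliary_copy_job(primary_storage: Dict, failover_storage: Dict,
                           media_agent_index: Dict, snapshot_id: str) -> str:
    """Hedged sketch of an auxiliary copy job: replicate a snapshot to failover storage."""
    source = primary_storage["snapshots"][snapshot_id]
    # The arrays/filers (or cloud resources) move the data themselves; system 300 only
    # schedules and initiates the operation through media agent 444.
    replica_id = f"SR-{snapshot_id}"
    failover_storage.setdefault("snapshots", {})[replica_id] = dict(source, read_only=True, online=False)
    # Media agent(s) note completion and update the index (cf. media agent index 153).
    media_agent_index[snapshot_id].update(replica_id=replica_id, replicated_at=time.time())
    return replica_id

# Usage with a toy snapshot already present at the primary array.
array = {"snapshots": {"S-vol-598": {"vm": "vm-app-01", "volume": "vol-598"}}}
index = {"S-vol-598": {"vm": "vm-app-01", "volume": "vol-598", "replica_id": None, "replicated_at": None}}
run_auxiliary_copy_job(array, {}, index, "S-vol-598")
```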
- system 300 performs an illustrative DR orchestration job to test the DR/failover site configuration, e.g., test clones.
- This operation is distinguishable from failover scenarios (blocks 614 , 616 ), because a replicated snapshot at the failover site is cloned there for test purposes without actually failing over source VMs 302 . More details are given in a subsequent figure.
- method 600 may end or control may pass (not shown here) to block 608 , 610 , 612 , 614 , or 616 , without limitation.
- system 300 performs an illustrative DR orchestration job to conduct a planned failover.
- This operation is distinguishable from unplanned failover scenarios (block 616 ), because it includes an on-demand snap backup job immediately followed by an auxiliary copy job to ensure that the latest source data from VMs 302 is captured in the planned failover.
- a so-called “mirror relationship” between primary storage and failover storage is affirmatively broken in order to stop further replication operations and to enable the failover site to take over in a production (data generation) mode in placed of the original site. More details are given in a subsequent figure.
- system 300 performs an illustrative DR orchestration job to conduct an unplanned failover.
- This operation is distinguishable from planned failover scenarios (block 614 ), because it relies on preceding snap backup and auxiliary copy job(s) that generated replicated snapshot(s) SR 598 at the failover storage.
- These previously generated replicated snapshots SR 598 will become datastores for the failover VMs 382 , thus capturing the most recently replicated data from source VMs 302 , though not necessarily the most recently generated data from source VMs 302 .
- the unplanned failure at the source data center breaks a so-called “mirror relationship” between primary storage and failover storage, which disables further replication operations. More details are given in a subsequent figure.
- system 300 uses another DR orchestration job to perform a failback operation and optionally to integrate DR site data generated after failover back into the original data sources. This operation is described in more detail in a subsequent figure.
- method 600 may end or control may pass (not shown here) to other blocks, e.g., 608 , 610 , 612 , 614 , 616 , without limitation.
- FIG. 7 depicts some salient operations of block 612 in method 600 .
- Block 612 is generally directed to performing a DR orchestration job to test the DR/failover site (test clone scenario).
- the DR orchestration job is initiated and managed by storage manager 440 and involves one or more components of system 300 , e.g., VSA data agent 442 , media agent 444 , data agent 492 , and/or media agent 494 as described in more detail below.
- system 300 optionally performs blocks 608 and 610 on demand if more recent replicated snapshots SR 598 are needed at the DR site for the test. In some cases, older replicated snapshots SR 598 are readily available at the DR site from earlier snap backup and auxiliary copy jobs.
- system 300 clones replicated snapshot(s) SR 598 into corresponding clone snapshots (not shown).
- the cloning operation is performed by failover storage 384 as instructed by media agent 494 , under the direction of storage manager 440 .
- Media agent 494 uses APIs, custom scripts, and/or other communication protocols to communicate with failover storage 384 .
- the cloned snapshots are stored at failover storage 384 .
- failover virtualization manager 383 creates a datastore for each failover VM 382 using the cloned snapshots. Illustratively, this operation is directed by VSA data agent 492 .
- VSA data agent 492 receives certain administrative parameters from storage manager 440 , e.g., mapping information administered for the failover group at block 606 . See also FIG. 12 .
- Information about the clone snapshots and the VM data therein (from source VMs 302 ) is obtained from media agent 494 and/or from storage manager 440 . Accordingly, VSA data agent 492 instructs manager 383 to designate a datastore 584 for each failover VM 382 , wherein datastore 584 comprises a certain clone snapshot generated at block 704 .
- failover virtualization manager 383 registers failover VMs 382 , configures customization, and powers on failover VMs 382 .
- this operation is directed by VSA data agent 492 .
- VSA data agent 492 receives certain administrative parameters from storage manager 440 , e.g., network settings, mapping information, and/or IP addresses that were administered for the failover group at block 606 . See also FIG. 12 .
- failover VMs 382 are active with connectivity and access to their respective datastores 584 .
- Block 710 confirms that failover VMs 382 are operational at the DR site as configured with access to each respective datastore 584 .
- Failover VMs 382 are up and running at the DR/failover site using the cloned snapshots as their datastores.
- Each failover VM 382 may read, write, change, and/or delete data in its respective/corresponding datastore 584 .
- Automatic and/or manual operations are executed at this stage to verify that one or more failover VMs 382 are operational at the DR site as configured by pre-failover administration and using the cloned replicated snapshots (hence the “clone testing” moniker).
- system 300 powers down failover VMs 382 , deletes their datastores, and deletes the cloned snapshots to “undo the test failover.”
- This operation is also initiated by storage manager 440 , which directs VSA data agent 492 to instruct failover virtualization manager 383 to power down failover VMs 382 , de-register VMs 382 , and sever the datastore relationship to the cloned snapshots.
- Storage manager 440 further directs media agent 494 to instruct failover storage 384 to delete the cloned snapshots. Block 612 ends.
- source/production VMs 302 continue operating at the source site; snap backup jobs are performed; auxiliary copy jobs are also performed—unfettered by the test failover (clone testing) operations at the DR site.
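- The clone-testing flow of block 612 can be summarized with the following hedged sketch; the dictionaries stand in for failover storage 384 and the failover virtualization manager 383 , and all identifiers are hypothetical.

```python
from typing import Dict, List

def run_clone_test(failover_storage: Dict, replicated_snapshot_ids: List[str]) -> None:
    """Hedged sketch of block 612: clone replicated snapshots, run test VMs, then undo everything."""
    clone_ids, test_vms = [], []
    for sr_id in replicated_snapshot_ids:
        # Media agent 494 has failover storage clone the replicated snapshot; the replica stays read-only.
        clone_id = f"CLONE-{sr_id}"
        failover_storage.setdefault("clones", {})[clone_id] = dict(failover_storage["snapshots"][sr_id])
        clone_ids.append(clone_id)
        # VSA 492 has the failover virtualization manager build a datastore on the clone
        # and register/power on the corresponding failover VM.
        test_vms.append({"vm": f"test-{sr_id}", "datastore": clone_id, "powered_on": True})
    # ... verification that the failover VMs are operational would run here ...
    # Undo the test failover: power down the VMs, drop their datastores, delete the clones.
    for vm in test_vms:
        vm["powered_on"] = False
    for clone_id in clone_ids:
        failover_storage["clones"].pop(clone_id)

# Usage with one toy replicated snapshot at the DR site.
dr_array = {"snapshots": {"SR-598": {"vm": "vm-app-01", "read_only": True}}}
run_clone_test(dr_array, ["SR-598"])
```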
- FIG. 8 depicts some salient operations of block 614 in method 600 .
- Block 614 is generally directed to performing a DR orchestration job for a planned failover to the DR site.
- the DR orchestration job is initiated and managed by storage manager 440 and involves one or more components of system 300 , e.g., VSA data agent 442 , media agent 444 , data agent 492 , and/or media agent 494 as described in more detail below.
- storage manager 440 selects one or more source VMs 302 for the present planned failover.
- system 300 powers off source VMs 302 .
- Illustratively storage manager 440 directs VSA data agent 442 to instruct virtualization manager 303 to power off the selected one or more VMs 302 .
- Manager 303 comprises features for causing VMs 302 to power off, e.g., commands to hypervisor 512 , commands to VM host 502 , etc., without limitation. This operation freezes any further data changes in the source VMs' datastores 504 .
- system 300 performs an on-demand snap backup job to take snapshots S 598 of datastores 504 corresponding to the one or more powered off source VMs. Snap backup jobs are described in more detail elsewhere herein, e.g., at block 608 .
- system 300 performs an on-demand auxiliary copy job to replicate snapshots S 598 to failover storage 384 at the DR site (e.g., array, filer, cloud), i.e., to generate replicated snapshot(s) SR 598 .
- auxiliary copy jobs are described in more detail elsewhere herein, e.g., at block 610 .
- system 300 breaks the so-called “mirror relationship” between primary storage 304 and failover storage 384 , which was previously established to enable “array-to-array” (or equivalent) replication jobs therebetween.
- One of the features of the mirror relationship is that it maintains the replicated snapshots at the failover storage 384 in a read-only state to prevent replicated data from being changed at the DR site.
- the DR orchestration job enables the replicated snapshots SR 598 to be activated into datastores for active failover VMs 382 .
- media agent 444 and/or media agent 494 as directed by storage manager 440 , cause the mirror relationship to break, e.g., by so instructing primary storage 304 and/or failover storage 384 , respectively.
- system 300 brings data storage volumes 599 online at the DR site.
- media agent 494 as directed by storage manager 440 , instructs failover storage 384 to bring online data storage volumes 599 comprising replicated snapshot(s) SR 598 .
- the replicated snapshots themselves will become datastores for failover VMs 382 .
- failover virtualization manager 383 creates a datastore for each VM to be failed over using the replicated snapshots in the volumes brought online in the preceding block. This block is similar to block 706 , except that here the replicated snapshots SR 598 become the failover datastores 584 .
- failover virtualization manager 383 registers failover VMs 382 , configures customization, and powers on failover VMs 382 .
- failover VMs 382 are operational at the DR site using datastores comprising data that was replicated from the source data center.
- the planned failover operation has successfully completed.
- the failover VMs 382 are now operating “live” and the selected VMs 302 are not operational.
- System 300 now treats VMs 382 as source/production VMs for future storage operations.
- Appropriate updates are entered into media agent indexes 153 at media agent 444 and media agent 494 for tracking the various snapshots.
- VSA data agent 492 tracks the failover datastores 584 as data sources for future backups of failover VMs 382 . Job completion is reported to storage manager 440 by data agents and media agents and the DR orchestration job ends here.
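- A hedged end-to-end sketch of the planned failover sequence just described (power off the sources, run an on-demand snap backup and auxiliary copy, break the mirror relationship, bring DR volumes online, create datastores, register and power on) follows; the structures are toy stand-ins and the names are hypothetical.

```python
from typing import Dict, List

def run_planned_failover(source_vms: List[Dict], primary_storage: Dict,
                         failover_storage: Dict, mirror: Dict) -> List[Dict]:
    """Hedged sketch of a planned-failover DR orchestration job (cf. block 614 / FIG. 8)."""
    failover_vms = []
    for vm in source_vms:
        vm["powered_on"] = False                      # freeze further changes to the source datastores
        # On-demand snap backup plus auxiliary copy, simulated here as a straight copy of the data.
        snapshot = {"vm": vm["name"], "data": vm["datastore_data"]}
        primary_storage.setdefault("snapshots", {})[vm["name"]] = snapshot
        failover_storage.setdefault("snapshots", {})[vm["name"]] = dict(snapshot)
    mirror["active"] = False                          # break the mirror relationship
    for vm in source_vms:
        replica = failover_storage["snapshots"][vm["name"]]
        replica["online"] = True                      # bring the DR data storage volume online
        # The replicated snapshot becomes the datastore; the failover VM is registered,
        # customized, and powered on at the DR site.
        failover_vms.append({"name": f"dr-{vm['name']}", "datastore": replica, "powered_on": True})
    return failover_vms

# Usage with one toy source VM.
vms = [{"name": "vm-app-01", "datastore_data": "latest-app-data", "powered_on": True}]
dr_vms = run_planned_failover(vms, {}, {}, {"active": True})
```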
- FIG. 9 depicts some salient operations of block 616 in method 600 .
- Block 616 is generally directed at performing a DR orchestration job for an unplanned VM failover.
- the DR orchestration job is initiated and managed by storage manager 440 and involves one or more components of system 300 , e.g., VSA data agent 442 , media agent 444 , data agent 492 , and/or media agent 494 as described in more detail below.
- only the source VMs 302 administered in one or more failover groups are subject to the DR orchestration job for the unplanned failover.
- Other failed source VMs 302 that are not in a failover group will not be failed over by the illustrative DR orchestration job. These other failed VMs may be restored in the future from a VM backup copy, but they are outside the scope of the snap-based DR orchestration job described here.
- an unplanned failure at the source data center causes source VMs 302 to power off and causes a break in the mirror relationship to the DR site.
- the unplanned failure is detected by one or more operating components of system 300 , which triggers a DR orchestration job to be initiated for failover to the DR site. If the mirror relationship between primary storage and failover storage has not been broken by the unplanned failure, system 300 breaks it here according to block 808 .
- the unplanned failover is made possible by all the operations performed by blocks 602 - 610 (and optionally 612 ), which set up all configurations and administration needed for the unplanned failover to succeed.
- storage manager 440 initiates and manages the DR orchestration job for such source VMs 302 that are part of one or more failover groups set up for snap-based DR orchestration.
- data storage volumes are brought online at the DR site.
- Media agent 494 (using information in its media agent index 153 ) identifies the appropriate replicated snapshots SR 598 (e.g., the most recently replicated) at failover storage 384 that are to be made into datastores for the failover VMs.
- Failover storage 384 may comprise any number of replicated snapshots SR 598 generated by countless auxiliary copy jobs, but the most recently created ones are most desirable for the present failover.
- Media agent 494 identifies the data storage volumes comprising these snapshots and instructs failover storage 384 to bring them online.
- failover virtualization manager 383 registers failover VMs 382 , configures customization, and powers on failover VMs 382 .
- failover VMs 382 are operational at the DR site using datastores comprising data that was replicated from the source data center, preferably by the most recent auxiliary copy job.
- the unplanned failover operation has successfully completed.
- the failover VMs 382 are now operating “live” and the failed source VMs 302 are not operational.
- System 300 now treats VMs 382 as source/production VMs for future storage operations.
- VSA data agent 492 tracks the failover datastores 584 as data sources for future backups of failover VMs 382 . Job completion is reported to storage manager 440 by data agents and media agents and the DR orchestration job ends here.
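- The unplanned-failover path can be sketched as below; the key difference from the planned case is that only previously replicated snapshots are available, so the media agent's index is consulted for the most recent replica per VM. All names are hypothetical.

```python
from typing import Dict, List

def run_unplanned_failover(media_agent_index: Dict[str, Dict], failover_group: List[str],
                           failover_storage: Dict) -> List[Dict]:
    """Hedged sketch of an unplanned-failover DR orchestration job (cf. block 616 / FIG. 9)."""
    failover_vms = []
    for vm_name in failover_group:
        # Media agent 494 picks the newest replicated snapshot recorded for this VM in its index.
        candidates = [entry for entry in media_agent_index.values()
                      if entry["vm"] == vm_name and entry.get("replica_id")]
        if not candidates:
            continue                                  # no replica means this VM cannot use snap-based DR
        newest = max(candidates, key=lambda entry: entry["replicated_at"])
        # Failover storage brings the volume holding that snapshot online ...
        failover_storage["snapshots"][newest["replica_id"]]["online"] = True
        # ... and the failover virtualization manager registers, customizes, and powers on the DR VM.
        failover_vms.append({"name": f"dr-{vm_name}",
                             "datastore": newest["replica_id"], "powered_on": True})
    return failover_vms

# Usage with one toy index entry and one replicated snapshot at the DR site.
index = {"S-598": {"vm": "vm-db-01", "replica_id": "SR-598", "replicated_at": 100.0}}
dr_array = {"snapshots": {"SR-598": {"online": False}}}
run_unplanned_failover(index, ["vm-db-01"], dr_array)
```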
- FIG. 10 depicts some salient operations of block 620 of method 600 .
- Block 620 is generally directed at using a DR orchestration job to failback from a DR site and optionally integrate DR data back into the source data center.
- the DR orchestration job is initiated and managed by storage manager 440 and involves one or more components of system 300 , e.g., VSA data agent 442 , media agent 444 , data agent 492 , and/or media agent 494 as described in more detail below.
- system 300 reverses steps of a planned failover—from DR site to failback site, resulting in VMs 302 at the failback site using datastores that are based on snapshots replicated from the DR site (see, e.g., FIG. 8 ).
- the failover VMs 382 are powered off and not operational.
- failed-back VMs 302 operate with the most recent data recovered from the DR site.
- the steps taken in block 1002 result in failed-back VMs 302 operating “live” at the failback site, which is the original source data center.
- system 300 determines whether any VMs at the source/failback site were powered off or failed but were not failed-over to the DR site at block 614 or block 616 (VMs 302 that were “left-behind” by the failover orchestrated by the DR orchestration job). For example, VMs 302 that are not administered into a failover group will be “left behind” in planned or unplanned failover. If not, control passes to block 1020 (failback complete). If yes, control passes to block 1008 . Illustratively, storage manager 440 consults management database 146 to determine failover status. Alternatively, failover status may be determined by storage manager 440 querying VSA data agent 442 or VSA data agent 492 .
- system 300 uses previously created backup copies to restore one or more left-behind VMs 302 at the source/failback site. For example, backup copies of these left-behind VMs 302 were created prior to the planned/unplanned failover and the accompanying DR orchestration job. Such backup copies (e.g., 116 ) are governed by storage policies and schedules configured by storage manager 440 in management database 146 . Such backup copies (e.g., 116 ) are stored locally at the source data center or elsewhere, without limitation. Such backup copies are well known in the art and are available at this point to be restored in order to re-activate the left-behind VMs 302 .
- storage manager 440 initiates one or more restore operations to restore backup copies 116 previously made for left-behind VMs 302 .
- VSA data agent 442 and media agent 444 (or another media agent 144 with access to storage media hosting backup copies 116 ) interoperate as directed by storage manager 440 to populate data storage volume(s).
- Virtualization manager 303 activates the data storage volume(s) into datastores for the left-behind VMs 302 , registers said VMs 302 , and powers up said VMs 302 .
- the restored VMs 302 operate with data recovered from previous backup copies 116 alongside the failed-back VMs that were failed back using a DR orchestration job at block 1002 .
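- The failback path, including re-activation of any left-behind VMs from ordinary backup copies, can be sketched as below; it is a toy illustration under the same hypothetical naming as the failover sketches.

```python
from typing import Dict, List

def run_failback(failover_vms: List[Dict], failover_group: List[str],
                 all_source_vms: List[str], backup_copies: Dict[str, Dict]) -> List[Dict]:
    """Hedged sketch of block 620: fail back the failover group, then restore left-behind VMs."""
    recovered = []
    # Reverse of the planned failover: snapshots are replicated DR -> source, the DR VMs are
    # powered off, and the failed-back VMs resume with the most recent data from the DR site.
    for vm in failover_vms:
        vm["powered_on"] = False
        recovered.append({"name": vm["name"].removeprefix("dr-"),
                          "datastore": vm["datastore"], "powered_on": True})
    # Left-behind VMs (never failed over because they were not in a failover group) are
    # re-activated from previously made backup copies (cf. backup copies 116).
    for name in all_source_vms:
        if name not in failover_group and name in backup_copies:
            recovered.append({"name": name,
                              "datastore": backup_copies[name]["restored_volume"],
                              "powered_on": True})
    return recovered

# Usage: one failed-over VM plus one left-behind VM with an available backup copy.
dr_vms = [{"name": "dr-vm-app-01", "datastore": "SR-598", "powered_on": True}]
backups = {"vm-util-01": {"restored_volume": "vol-restored-01"}}
run_failback(dr_vms, ["vm-app-01"], ["vm-app-01", "vm-util-01"], backups)
```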
- FIG. 11 depicts an illustrative screenshot of an administrative screen in system 300 for adding a failover group.
- One of the administrative options ( 1102 ) for the failover group is whether it will be subject to Live Sync or snap-based DR orchestration using “array based replication.”
- The terms "array-to-array replication" and "array-based replication" are used here as shorthand even when the source and/or destination of the replication operation are virtualized storage resources, such as in a cloud computing environment. See, e.g., FIGS. 3B and 5B .
- the present screenshot depicts the distinction between Live Sync and snap-based DR orchestration for how system 300 will fail over a certain VM failover group. Because this option is administered at the granularity of a failover group, these options are not mutually exclusive within the same system 300 .
- FIG. 12 depicts an illustrative screenshot of an administrative screen in system 300 for editing a failover group and adding customization details for mapping source to destination relationships.
- a number of administrative parameters are entered here to help identify the source VMs 302 , any VM groups they are part of, identify their respective datastores, map source to failover VM hosts, administer network settings and IP addresses, etc. without limitation. This information is used by the DR orchestration job for failing over the particular VMs in the failover group as described in more detail above.
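- As a hedged illustration of the kind of record such a screen might produce, the snippet below mirrors the administrative categories listed above (source VMs, datastores, host mapping, network settings, IP and domain customization, and the replication style from FIG. 11 ); every key and value is hypothetical.

```python
# Hypothetical failover-group definition; keys mirror the administrative categories above.
failover_group_config = {
    "name": "finance-apps",
    "replication": "array_based",                 # vs. "live_sync" (cf. option 1102 in FIG. 11)
    "vm_host_mapping": {"source-esx-01": "dr-esx-01"},
    "vms": [
        {
            "source_vm": "vm-app-01",
            "source_datastore": "ds-app-01",
            "failover_vm": "dr-vm-app-01",
            "network": {"dr_portgroup": "dr-vlan-200"},
            "ip_customization": {"source_ip": "10.0.1.10", "dr_ip": "192.168.1.10"},
            "dns_domain": "dr.example.com",
        },
    ],
}
```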
- FIG. 13 depicts an illustrative screenshot of an administration screen for defining how snapshot copies are to be replicated, showing a mirror copy option and an alternative vault copy option.
- One of the choices for the auxiliary copy jobs is what kind of "array-to-array" replication scheme to use.
- the illustrative “add snapshot copy” screen provides a choice of “mirror copy” ( 1302 ) or “vault copy” ( 1304 ) between source ( 304 ) and failover ( 384 ) NetApp arrays, without limitation. Again, the NetApp implementation is illustrative and not limiting.
- data storage management system for orchestrating virtual machine failover, the system comprising: a first computing device comprising one or more hardware processors and computer memory; wherein the first computing device is configured to: initiate a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor; initiate an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; initiate a disaster recovery orchestration job for the first virtual machine to fail over to a second virtual machine that is currently powered off, where
- the above-recited system wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
- the above-recited system wherein the snapshot-based disaster recovery (DR) job does not require that VMs or their corresponding datastores be actively operating at the DR site before the DR orchestration job is initiated, i.e., before failover, whether for clone testing, planned failover, or unplanned failover.
- the above-recited system wherein the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in a storage manager that executes on the first computing device.
- the above-recited system wherein the disaster recovery orchestration job is initiated based on detecting a failure at one or more of the first virtual machine host computing device, the primary data storage, and a first virtualization manager associated with the first virtual machine host computing device.
- the first computing device is further configured to activate, on-demand, a data agent associated with the failover virtualization manager and a media agent associated with the failover storage.
- the first virtual machine executes in one of: a first virtualized data center and a first cloud computing environment; and wherein after the disaster recovery orchestration job, the second virtual machine executes in one of: another distinct virtualized data center configured for disaster recovery and another cloud computing environment configured for disaster recovery.
- the first computing device is further configured to: initiate a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage.
- the above-recited system wherein the first computing device is further configured to, as part of the disaster recovery orchestration job for the first virtual machine: cause the failover storage to bring the second data storage volume online for access by the second virtual machine.
- the above-recited system wherein the first computing device is further configured to, as part of the disaster recovery orchestration job for the first virtual machine: cause the failover virtualization manager to register the second virtual machine with the failover virtualization manager.
- the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device.
- the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device by way of a first virtualization manager.
- system further comprises a second computing device that executes a first media agent, and wherein, as part of the snapshot backup job, the first media agent instructs the primary data storage to take the first snapshot.
- system further comprises a second computing device that executes a first media agent, and wherein, as part of the auxiliary copy job, the first media agent instructs the primary data storage to replicate the first snapshot to the failover storage.
- system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage.
- system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage, and wherein the failure comprises a break in the mirror-relationship to the failover storage.
- the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine.
- the above-recited system wherein the second data agent instructs the failover virtualization manager to create the second datastore.
- the system further comprises a second computing device that executes a first data agent associated with a first virtualization manager that manages the first virtual machine host computing device.
- system further comprises a second computing device that executes a first media agent associated with the primary data storage.
- system further comprises a second computing device that executes a second data agent associated with the failover virtualization manager.
- system further comprises a second computing device that executes a second media agent associated with the failover storage.
- a method for orchestrating virtual machine failover comprising: by a data storage management system, initiating a disaster recovery orchestration job for a first virtual machine that is included in a failover group administered in the data storage management system, wherein the disaster recovery orchestration job comprises: powering off the first virtual machine having a corresponding first datastore in a primary data storage; causing the primary data storage (i) to take a first snapshot of a first data storage volume hosting the first datastore, and (ii) to store the first snapshot at the primary data storage; causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; causing the mirror-relationship to break; causing the failover storage to bring the second data storage volume online; causing a failover virtualization manager to create, for
- the above-recited method wherein the disaster recovery orchestration job is for a planned failover of the first virtual machine to the second virtual machine, and wherein a storage manager initiates the disaster recovery orchestration job.
- a first data agent instructs a first virtualization manager to power off the first virtual machine.
- a first media agent associated with the primary data storage instructs the primary data storage to break the mirror-relationship to the failover storage.
- a storage manager that manages storage operations in the data storage management system activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage.
- the method above wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the method above wherein the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- a first media agent instructs the primary data storage to take the first snapshot.
- a first media agent instructs the primary data storage to replicate the first snapshot to the failover storage.
- the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine.
- the second data agent instructs the failover virtualization manager to create the second datastore.
- the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
- the above-recited method wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment.
- the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment.
- the data storage management system comprises a storage manager that manages storage operations in the data storage management system, including the disaster recovery orchestration job, and wherein the storage manager executes on one of: a computing device comprising one or more hardware processors and computer memory, and a virtual machine, distinct from the first virtual machine and the second virtual machine, that executes on a computing device comprising one or more hardware processors and computer memory.
- the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine.
- the data storage management system comprises a first media agent associated with the primary data storage.
- the data storage management system comprises a second data agent associated with the failover virtualization manager.
- the data storage management system comprises a second media agent associated with the failover storage.
- the above-recited method further comprising: by the data storage management system, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage.
- the second disaster recovery orchestration job comprises: determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; identifying a backup copy of the third virtual machine; and initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and causes the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
- a method for orchestrating virtual machine failover comprising: by a storage manager that manages storage operations in a data storage management system, initiating a disaster recovery orchestration job for a first virtual machine that is included in a failover group administered at the storage manager, wherein the disaster recovery orchestration job comprises: powering off the first virtual machine having a corresponding first datastore in a primary data storage; causing the primary data storage (i) to take a first snapshot of a first data storage volume hosting the first datastore, and (ii) to store the first snapshot at the primary data storage; causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; causing the mirror-relationship to break; causing the failover storage to bring the second data storage volume online; causing a failover
- the disaster recovery orchestration job is for a planned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager.
- the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage; wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine; wherein the second data agent instructs the failover virtualization manager to create the second datastore; and wherein the second data agent further instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
- the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine; wherein the data storage management system comprises a first media agent associated with the primary data storage; wherein the data storage management system comprises a second data agent associated with the failover virtualization manager; and wherein the data storage management system comprises a second media agent associated with the failover storage.
- the above-recited method further comprising: by the storage manager, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage.
- the second disaster recovery orchestration job comprises: by the storage manager, determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; by the storage manager, identifying a backup copy of the third virtual machine; and by the storage manager, initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and causes the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
- a first data agent instructs a first virtualization manager to power off the first virtual machine.
- a first media agent instructs the primary data storage to take the first snapshot.
- a first media agent instructs the primary data storage to replicate the first snapshot to the failover storage.
- a first media agent associated with the primary data storage instructs the primary data storage to break the mirror-relationship to the failover storage.
- the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage.
- the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine.
- the second data agent instructs the failover virtualization manager to create the second datastore.
- the above-recited method wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment.
- the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment.
- the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine.
- the data storage management system comprises a first media agent associated with the primary data storage.
- the data storage management system comprises a second data agent associated with the failover virtualization manager.
- the data storage management system comprises a second media agent associated with the failover storage.
- a method for orchestrating virtual machine failover comprising: by a storage manager that manages storage operations in a data storage management system, initiating a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor, and wherein the first virtual machine is included in a failover group administered at the storage manager; by the storage manager, initiating an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; based on a failure at
- the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager.
- a first data agent detects the failure at the first virtual machine host computing device.
- a first data agent detects the failure at the first virtual machine host computing device by way of a first virtualization manager.
- a first media agent instructs the primary data storage to take the first snapshot.
- a first media agent instructs the primary data storage to replicate the first snapshot to the failover storage.
- a first media agent detects the failure at the primary data storage.
- a first media agent detects the failure at the primary data storage, and wherein the failure comprises a break in the mirror-relationship to the failover storage.
- the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage.
- the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine.
- the second data agent instructs the failover virtualization manager to create the second datastore.
- the above-recited method wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment.
- the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment.
- the data storage management system comprises a first data agent associated with a first virtualization manager that manages the first virtual machine host computing device.
- the data storage management system comprises a first media agent associated with the primary data storage.
- the data storage management system comprises a second data agent associated with the failover virtualization manager.
- the data storage management system comprises a second media agent associated with the failover storage.
- the above-recited method further comprising: by the storage manager, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage.
- the second disaster recovery orchestration job comprises: by the storage manager, determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; by the storage manager, identifying a backup copy of the third virtual machine; and by the storage manager, initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and causes the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
- a data storage management system for orchestrating virtual machine failover comprising: a first computing device comprising one or more hardware processors and computer memory, wherein a storage manager executes on the first computing device; wherein the first computing device executing the storage manager is configured to: initiate a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor, and wherein the first virtual machine is included in a failover group administered at the storage manager; initiate an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage
- the above-recited system wherein the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager.
- the above-recited system wherein the disaster recovery orchestration job is initiated based on detecting a failure at one or more of the first virtual machine host computing device, the primary data storage, and a first virtualization manager associated with the first virtual machine host computing device.
- the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device.
- the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device by way of a first virtualization manager.
- the system further comprises a second computing device that executes a first media agent, and wherein, as part of the snapshot backup job initiated by the storage manager, the first media agent instructs the primary data storage to take the first snapshot.
- the system further comprises a second computing device that executes a first media agent, and wherein, as part of the auxiliary copy job initiated by the storage manager, the first media agent instructs the primary data storage to replicate the first snapshot to the failover storage.
- the system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage.
- the system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage, and wherein the failure comprises a break in the mirror-relationship to the failover storage.
- the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage.
- the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
- the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine.
- the second data agent instructs the failover virtualization manager to create the second datastore.
- the above-recited system wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center.
- the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment.
- the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment.
- the system further comprises a second computing device that executes a first data agent associated with a first virtualization manager that manages the first virtual machine host computing device.
- the system further comprises a second computing device that executes a first media agent associated with the primary data storage.
- the system further comprises a second computing device that executes a second data agent associated with the failover virtualization manager.
- the system further comprises a second computing device that executes a second media agent associated with the failover storage.
- the above-recited system wherein the first computing device executing the storage manager is further configured to: initiate a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage.
- the first computing device executing the storage manager is further configured to, while performing the second disaster recovery orchestration job: determine that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; identify a backup copy of the third virtual machine; and initiate a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
- a system or systems operates according to one or more of the methods and/or computer-readable media recited in the preceding paragraphs.
- a method or methods operates according to one or more of the systems and/or computer-readable media recited in the preceding paragraphs.
- a non-transitory computer-readable medium or media causes one or more computing devices having one or more processors and computer-readable memory to operate according to one or more of the systems and/or methods recited in the preceding paragraphs.
- Conditional language such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
- the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense, i.e., in the sense of “including, but not limited to.”
- the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof.
- the words “herein,” “above,” “below,” and words of similar import when used in this application, refer to this application as a whole and not to any particular portions of this application.
- words using the singular or plural number may also include the plural or singular number respectively.
- the word “or” in reference to a list of two or more items covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list.
- the term “and/or” in reference to a list of two or more items covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list.
- certain operations, acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all are necessary for the practice of the algorithms).
- operations, acts, functions, or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
- Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described.
- Software and other modules may reside and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein.
- Software and other modules may be accessible via local computer memory, via a network, via a browser, or via other means suitable for the purposes described herein.
- Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein.
- User interface elements described herein may comprise elements from graphical user interfaces, interactive voice response, command line interfaces, and other suitable interfaces.
- processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources. Two or more components of a system can be combined into fewer components.
- Various components of the illustrated systems can be implemented in one or more virtual machines, rather than in dedicated computer hardware systems and/or computing devices.
- the data repositories shown can represent physical and/or logical data storage, including, e.g., storage area networks or other distributed storage systems.
- the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any of the subset of the components shown can communicate with any other subset of components in various implementations.
- Embodiments are also described above with reference to flow chart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products.
- Each block of the flow chart illustrations and/or block diagrams, and combinations of blocks in the flow chart illustrations and/or block diagrams may be implemented by computer program instructions.
- Such instructions may be provided to a processor of a general purpose computer, special purpose computer, specially-equipped computer (e.g., comprising a high-performance database server, a graphics subsystem, etc.) or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the acts specified in the flow chart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flow chart and/or block diagram block or blocks.
- the computer program instructions may also be loaded to a computing device or other programmable data processing apparatus to cause operations to be performed on the computing device or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the acts specified in the flow chart and/or block diagram block or blocks.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Snapshot-based disaster recovery (DR) orchestration systems and methods for virtual machine (VM) failover and failback do not require that VMs or their corresponding datastores be actively operating at the DR site before a DR orchestration job is initiated, i.e., before failover. An illustrative data storage management system deploys proprietary components at source data center(s) and at DR site(s). The proprietary components (e.g., storage manager, data agents, media agents, backup nodes, etc.) interoperate with each other and with the source and DR components to ensure that VMs will successfully failover and/or failback. DR orchestration jobs are suitable for testing VM failover scenarios (“clone testing”), for conducting planned VM failovers, and for unplanned VM failovers. DR orchestration jobs also handle failback and integration of DR-generated data into the failback site, including restoring VMs that never failed over to fully re-populate the source/failback site.
Description
Any and all applications, if any, for which a foreign or domestic priority claim is identified in the Application Data Sheet of the present application are hereby incorporated by reference in their entireties under 37 CFR 1.57.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document and/or the patent disclosure as it appears in the United States Patent and Trademark Office patent file and/or records, but otherwise reserves all copyrights whatsoever.
Businesses recognize the commercial value of their data and seek reliable, cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity. A company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a maintenance program. Given the rapidly expanding volume of data under management, companies also continue to seek innovative and robust techniques for ensuring disaster recovery will operate smoothly and reliably.
The present inventors devised a scheme for disaster recovery (DR) orchestration of virtual machine (VM) failover and failback operations. An illustrative data storage management system deploys proprietary components at source data center(s) and at DR site(s). The proprietary components (e.g., storage manager, data agents, media agents, backup nodes, etc.) interoperate with each other and with the source and DR components to ensure that VMs will successfully failover and/or failback using so-called “DR orchestration jobs.” DR orchestration jobs are suitable for testing VM failover scenarios (“clone testing”), for conducting planned VM failovers, and for unplanned VM failovers. DR orchestration jobs also handle failback and integration of DR-generated data into the failback site. As a shorthand, the illustrative approach is referred to herein as “snap-based DR orchestration.”
The illustrative system exploits snapshot replication techniques. The system implements “snap backup jobs” that capture VM datastores at a source data center, in which so-called “hardware snapshots” are taken by the datastore's host storage device (e.g., a storage array, filer, and/or cloud storage resources). The system implements “auxiliary copy jobs” to replicate the snapshots to the DR site. Collectively, these jobs ensure that hardware snapshots regularly capture VM datastores at the source and that the DR site regularly receives snapshotted datastore data.
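To make the division of labor between the two job types concrete, the following is a minimal Python sketch. The StorageArray class and the snap_backup_job and auxiliary_copy_job functions are hypothetical stand-ins, not the system's actual interfaces: a snap backup job asks the source array to take a hardware snapshot of the datastore volume, and an auxiliary copy job asks the source array to replicate that snapshot to the DR array over an existing mirror-relationship.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class StorageArray:
    """Stands in for a storage array/filer that can snapshot its own volumes."""
    name: str
    snapshots: dict = field(default_factory=dict)

    def take_hardware_snapshot(self, volume: str) -> str:
        # The array itself creates the snapshot (a "hardware snapshot").
        snap_id = f"{volume}@{datetime.now(timezone.utc).isoformat()}"
        self.snapshots[snap_id] = volume
        return snap_id

    def replicate_snapshot(self, snap_id: str, target: "StorageArray") -> str:
        # Relies on an existing mirror-relationship between source and target.
        replica_id = f"{snap_id}#replica"
        target.snapshots[replica_id] = self.snapshots[snap_id]
        return replica_id


def snap_backup_job(array: StorageArray, datastore_volume: str) -> str:
    """Capture the VM datastore with a hardware snapshot kept on the source array."""
    return array.take_hardware_snapshot(datastore_volume)


def auxiliary_copy_job(source: StorageArray, dr: StorageArray, snap_id: str) -> str:
    """Replicate the source snapshot to the DR array over the mirror-relationship."""
    return source.replicate_snapshot(snap_id, dr)


if __name__ == "__main__":
    prod, dr_site = StorageArray("prod-array"), StorageArray("dr-array")
    snap = snap_backup_job(prod, "vol-datastore01")
    print("replicated snapshot:", auxiliary_copy_job(prod, dr_site, snap))
```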
One of the advantages of the disclosed DR orchestration job is that it does not require that VMs or their corresponding datastores be actively operating at the DR site before the DR orchestration job is initiated, i.e., before failover. This approach is distinguishable from an alternative proprietary approach known as “Live Sync,” which relies on ongoing repetitive cycles of incremental backups at the source followed by restores at the DR site to maintain the DR site in a “warm” readiness state that can take over with minimal start-up effort. (See, e.g., U.S. Pat. No. 10,228,962, which is incorporated by reference herein; see also FIG. 2A herein). Live Sync requires VMs and their datastores to be actively operating (powered up) at the DR site in order to sustain the ongoing restore operations. With Live Sync, the DR site is operational after the first restore in a “warm” standby state. However, Live Sync can be relatively costly to operate and maintain as compared to the illustrative snap-based DR orchestration approach disclosed herein, because the Live Sync DR site must maintain actively operating VMs and datastores as well as data restoration infrastructure. In cloud computing environments, maintaining powered up VMs and data storage resources indefinitely can be very costly. Thus the “warm” readiness of Live Sync is counter-balanced by relatively high costs of operation and maintenance of DR components and infrastructure.
In contrast to Live Sync, the illustrative snap-based DR orchestration takes a different approach that exploits snapshot techniques and other kinds of backup operations (e.g., auxiliary copy jobs) to feed data from source to DR site, and does not rely on Live Sync's ongoing cycles of backup and restore to maintain the DR site. In contrast to Live Sync, the illustrative snap-based DR orchestration requires only minimal active resources at the DR site until such time as the DR orchestration job initiates a failover to the DR site. Accordingly, VMs are kept powered off at the DR site until failover. Even though data storage is configured at the DR site to receive snapshots replicated from the source, no active connections are maintained to VM hosts and/or VM server management resources and thus no datastores are established until failover. In cloud-based data centers, backup nodes that provide backup/restore infrastructure for completing the DR orchestration job execute on DR site VMs that are powered up on demand at failover in certain embodiments.
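The sequence a DR orchestration job drives at failover time can be traced in a short sketch. The Python below is illustrative only, with a hypothetical Agent class standing in for the data agents and media agents; it simply walks the ordering described herein (power off the source VM, snapshot, replicate, break the mirror, bring the DR volume online, create the DR datastore, and power up the failover VM).

```python
class Agent:
    """Generic stand-in for a data agent or media agent; it just logs its actions."""

    def __init__(self, name: str):
        self.name = name

    def do(self, action: str, detail: str = "") -> None:
        print(f"[{self.name}] {action} {detail}".strip())


def dr_orchestration_job(vm: str, volume: str, datastore: str) -> None:
    src_da, src_ma = Agent("source data agent"), Agent("source media agent")
    dr_da, dr_ma = Agent("DR data agent"), Agent("DR media agent")

    src_da.do("power off VM", vm)                        # quiesce the production VM
    src_ma.do("take hardware snapshot of", volume)       # snapshot at the primary array
    src_ma.do("replicate snapshot to DR array for", volume)
    src_ma.do("break mirror-relationship for", volume)   # DR copy becomes independently writable
    dr_ma.do("bring replicated volume online:", volume)
    dr_da.do("create DR datastore", datastore)
    dr_da.do("power up failover VM", vm)                 # the VM now runs at the DR site


if __name__ == "__main__":
    dr_orchestration_job("vm-finance-01", "vol-datastore01", "ds-dr-01")
```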
Thus, the illustrative snap-based DR orchestration approach requires minimal active resources at the DR site until failover. The cost and effort of maintaining active components at a “warm” DR site are also avoided by snap-based DR orchestration. Instead, the illustrative snap-based DR orchestration approach relies on DR orchestration jobs to activate connections, establish datastores, and power up VMs as needed at the DR site, and to tear down appropriately after failback completes.
To implement DR orchestration jobs, the illustrative data storage management system is specially configured to track certain administrative information at source and DR sites, coordinate operations between the sites, and manage a number of operations at the DR site to ensure a successful failover, and conversely to ensure successful failbacks to the source. According to the illustrative snap-based DR orchestration approach, source or DR site or both can be a virtualized on-premises data center or a cloud computing environment, without limitation. Thus, although many of the depicted scenarios illustrate a virtualized data center as a source production environment and a cloud computing environment as a failover/DR site, the embodiments are not so limited.
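As an illustration of the kind of administrative information the storage manager might track for a failover group, the following sketch uses a hypothetical FailoverGroup record; the field names, endpoints, and values are invented for the example and are not the system's actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FailoverGroup:
    name: str
    vms: List[str]                        # VMs administered to fail over together
    source_virtualization_manager: str    # production-side VM server manager endpoint
    dr_virtualization_manager: str        # counterpart at the DR site
    volume_map: Dict[str, str]            # source volume -> DR replica volume
    planned: bool = True                  # planned vs. unplanned failover policy


group = FailoverGroup(
    name="finance-tier1",
    vms=["vm-finance-01", "vm-finance-02"],
    source_virtualization_manager="vcenter.prod.example.com",
    dr_virtualization_manager="vcenter.dr.example.com",
    volume_map={"vol-datastore01": "vol-datastore01-dr"},
)
print(group.name, "covers", group.vms)
```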
Detailed descriptions and examples of systems and methods according to one or more illustrative embodiments of the present invention may be found in the section entitled SNAPSHOT-BASED DISASTER RECOVERY ORCHESTRATION OF VIRTUAL MACHINE FAILOVER AND FAILBACK OPERATIONS, as well as in the section entitled Example Embodiments, and also in FIGS. 3A-13 herein. Furthermore, components and functionality for snap-based DR orchestration may be configured and/or incorporated into information management systems such as those described herein in FIGS. 1A-1H and 2A-2C .
Various embodiments described herein are intimately tied to, enabled by, and would not exist except for, computer technology. For example, taking snapshots, replicating snapshots, activating VMs, orchestrating failbacks, orchestrating and integrating failbacks, etc. described herein in reference to various embodiments cannot reasonably be performed by humans alone, without the computer technology upon which they are implemented.
Information Management System Overview
With the increasing importance of protecting and leveraging data, organizations simply cannot risk losing critical data. Moreover, runaway data growth and other modern realities make protecting and managing data increasingly difficult. There is therefore a need for efficient, powerful, and user-friendly solutions for protecting and managing data and for smart and efficient management of data storage. Depending on the size of the organization, there may be many data production sources which are under the purview of tens, hundreds, or even thousands of individuals. In the past, individuals were sometimes responsible for managing and protecting their own data, and a patchwork of hardware and software point solutions may have been used in any given organization. These solutions were often provided by different vendors and had limited or no interoperability. Certain embodiments described herein address these and other shortcomings of prior approaches by implementing scalable, unified, organization-wide information management, including data storage management.
Generally, the systems and associated components described herein may be compatible with and/or provide some or all of the functionality of the systems and corresponding components described in one or more of the following U.S. patents/publications and patent applications assigned to Commvault Systems, Inc., each of which is hereby incorporated by reference in its entirety herein:
- U.S. Pat. No. 7,035,880, entitled “Modular Backup and Retrieval System Used in Conjunction With a Storage Area Network”;
- U.S. Pat. No. 7,107,298, entitled “System And Method For Archiving Objects In An Information Store”;
- U.S. Pat. No. 7,246,207, entitled “System and Method for Dynamically Performing Storage Operations in a Computer Network”;
- U.S. Pat. No. 7,315,923, entitled “System And Method For Combining Data Streams In Pipelined Storage Operations In A Storage Network”;
- U.S. Pat. No. 7,343,453, entitled “Hierarchical Systems and Methods for Providing a Unified View of Storage Information”;
- U.S. Pat. No. 7,395,282, entitled “Hierarchical Backup and Retrieval System”;
- U.S. Pat. No. 7,529,782, entitled “System and Methods for Performing a Snapshot and for Restoring Data”;
- U.S. Pat. No. 7,617,262, entitled “System and Methods for Monitoring Application Data in a Data Replication System”;
- U.S. Pat. No. 7,734,669, entitled “Managing Copies Of Data”;
- U.S. Pat. No. 7,747,579, entitled “Metabase for Facilitating Data Classification”;
- U.S. Pat. No. 8,156,086, entitled “Systems And Methods For Stored Data Verification”;
- U.S. Pat. No. 8,170,995, entitled “Method and System for Offline Indexing of Content and Classifying Stored Data”;
- U.S. Pat. No. 8,230,195, entitled “System And Method For Performing Auxiliary Storage Operations”;
- U.S. Pat. No. 8,285,681, entitled “Data Object Store and Server for a Cloud Storage Environment, Including Data Deduplication and Data Management Across Multiple Cloud Storage Sites”;
- U.S. Pat. No. 8,307,177, entitled “Systems And Methods For Management Of Virtualization Data”;
- U.S. Pat. No. 8,364,652, entitled “Content-Aligned, Block-Based Deduplication”;
- U.S. Pat. No. 8,578,120, entitled “Block-Level Single Instancing”;
- U.S. Pat. No. 8,954,446, entitled “Client-Side Repository in a Networked Deduplicated Storage System”;
- U.S. Pat. No. 9,020,900, entitled “Distributed Deduplicated Storage System”;
- U.S. Pat. No. 9,098,495, entitled “Application-Aware and Remote Single Instance Data Management”;
- U.S. Pat. No. 9,239,687, entitled “Systems and Methods for Retaining and Using Data Block Signatures in Data Protection Operations”;
- U.S. Pat. No. 9,633,033, entitled “High Availability Distributed Deduplicated Storage System”;
- U.S. Pat. Pub. No. 2006/0224846, entitled “System and Method to Support Single Instance Storage Operations”;
- U.S. Pat. Pub. No. 2016-0350391, entitled “Replication Using Deduplicated Secondary Copy Data”;
- U.S. Pat. Pub. No. 2017-0168903 A1, entitled “Live Synchronization and Management of Virtual Machines across Computing and Virtualization Platforms and Using Live Synchronization to Support Disaster Recovery”;
- U.S. Pat. Pub. No. 2017-0185488 A1, entitled “Application-Level Live Synchronization Across Computing Platforms Including Synchronizing Co-Resident Applications To Disparate Standby Destinations And Selectively Synchronizing Some Applications And Not Others”;
- U.S. Pat. Pub. No. 2017-0192866 A1, entitled “System For Redirecting Requests After A Secondary Storage Computing Device Failure”;
- U.S. Pat. Pub. No. 2017-0235647 A1, entitled “Data Protection Operations Based on Network Path Information”; and
- U.S. Pat. Pub. No. 2017-0242871 A1, entitled “Data Restoration Operations Based on Network Path Information”.
In some embodiments, computing devices can include one or more virtual machine(s) running on a physical host computing device (or “host machine”) operated by the organization. As one example, the organization may use one virtual machine as a database server and another virtual machine as a mail server, both virtual machines operating on the same host machine. A virtual machine (“VM”) is a software implementation of a computer that does not physically exist and is instead instantiated in an operating system of a physical computer (or host machine) to enable applications to execute within the VM's environment, i.e., a VM emulates a physical computer. A VM includes an operating system and associated virtual resources, such as computer memory and processor(s). A hypervisor operates between the VM and the hardware of the physical host machine and is generally responsible for creating and running the VMs. Hypervisors are also known in the art as virtual machine monitors or virtual machine managers or “VMMs”, and may be implemented in software, firmware, and/or specialized hardware installed on the host machine. Examples of hypervisors include ESX Server, by VMware, Inc. of Palo Alto, Calif.; Microsoft Virtual Server and Microsoft Windows Server Hyper-V, both by Microsoft Corporation of Redmond, Wash.; Sun xVM by Oracle America Inc. of Santa Clara, Calif.; and Xen by Citrix Systems, Santa Clara, Calif. The hypervisor provides resources to each virtual operating system such as a virtual processor, virtual memory, a virtual network device, and a virtual disk. Each virtual machine has one or more associated virtual disks. The hypervisor typically stores the data of virtual disks in files on the file system of the physical host machine, called virtual machine disk files (“VMDK” in VMware lingo) or virtual hard disk image files (in Microsoft lingo). For example, VMware's ESX Server provides the Virtual Machine File System (VMFS) for the storage of virtual machine disk files. A virtual machine reads data from and writes data to its virtual disk much the way that a physical machine reads data from and writes data to a physical disk. Examples of techniques for implementing information management in a cloud computing environment are described in U.S. Pat. No. 8,285,681. Examples of techniques for implementing information management in a virtualized computing environment are described in U.S. Pat. No. 8,307,177.
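The VM-to-virtual-disk relationship described above can be pictured with a small data structure. This is only a sketch; the class and field names (VirtualDisk, VirtualMachine, image_file) are hypothetical and the datastore path is an invented example.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VirtualDisk:
    label: str
    image_file: str    # e.g. a .vmdk or virtual hard disk image file on the host
    size_gb: int


@dataclass
class VirtualMachine:
    name: str
    guest_os: str
    disks: List[VirtualDisk]


vm = VirtualMachine(
    name="mail-server-vm",
    guest_os="linux",
    disks=[VirtualDisk("system", "[datastore1] mail-server-vm/mail.vmdk", 80)],
)
print(vm.name, "reads and writes", [d.image_file for d in vm.disks])
```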
Depending on context, the term “information management system” can refer to generally all of the illustrated hardware and software components in FIG. 1C , or the term may refer to only a subset of the illustrated components. For instance, in some cases, system 100 generally refers to a combination of specialized components used to protect, move, manage, manipulate, analyze, and/or process data and metadata generated by client computing devices 102. However, system 100 in some cases does not include the underlying components that generate and/or store primary data 112, such as the client computing devices 102 themselves, and the primary storage devices 104. Likewise secondary storage devices 108 (e.g., a third-party provided cloud storage environment) may not be part of system 100. As an example, “information management system” or “storage management system” may sometimes refer to one or more of the following components, which will be described in further detail below: storage manager, data agent, and media agent.
One or more client computing devices 102 may be part of system 100, each client computing device 102 having an operating system and at least one application 110 and one or more accompanying data agents executing thereon; and associated with one or more primary storage devices 104 storing primary data 112. Client computing device(s) 102 and primary storage devices 104 may generally be referred to in some cases as primary storage subsystem 117.
Client Computing Devices, Clients, and Subclients
Typically, a variety of sources in an organization produce data to be protected and managed. As just one illustrative example, in a corporate environment such data sources can be employee workstations and company servers such as a mail server, a web server, a database server, a transaction server, or the like. In system 100, data generation sources include one or more client computing devices 102. A computing device that has a data agent 142 installed and operating on it is generally referred to as a “client computing device” 102, and may include any type of computing device, without limitation. A client computing device 102 may be associated with one or more users and/or user accounts.
A “client” is a logical component of information management system 100, which may represent a logical grouping of one or more data agents installed on a client computing device 102. Storage manager 140 recognizes a client as a component of system 100, and in some embodiments, may automatically create a client component the first time a data agent 142 is installed on a client computing device 102. Because data generated by executable component(s) 110 is tracked by the associated data agent 142 so that it may be properly protected in system 100, a client may be said to generate data and to store the generated data to primary storage, such as primary storage device 104. However, the terms “client” and “client computing device” as used herein do not imply that a client computing device 102 is necessarily configured in the client/server sense relative to another computing device such as a mail server, or that a client computing device 102 cannot be a server in its own right. As just a few examples, a client computing device 102 can be and/or include mail servers, file servers, database servers, virtual machine servers, and/or web servers.
Each client computing device 102 may have application(s) 110 executing thereon which generate and manipulate the data that is to be protected from loss and managed in system 100. Applications 110 generally facilitate the operations of an organization, and can include, without limitation, mail server applications (e.g., Microsoft Exchange Server), file system applications, mail client applications (e.g., Microsoft Exchange Client), database applications or database management systems (e.g., SQL, Oracle, SAP, Lotus Notes Database), word processing applications (e.g., Microsoft Word), spreadsheet applications, financial applications, presentation applications, graphics and/or video applications, browser applications, mobile applications, entertainment applications, and so on. Each application 110 may be accompanied by an application-specific data agent 142, though not all data agents 142 are application-specific or associated with only one application. A file manager application, e.g., Microsoft Windows Explorer, may be considered an application 110 and may be accompanied by its own data agent 142. Client computing devices 102 can have at least one operating system (e.g., Microsoft Windows, Mac OS X, iOS, IBM z/OS, Linux, other Unix-based operating systems, etc.) installed thereon, which may support or host one or more file systems and other applications 110. In some embodiments, a virtual machine that executes on a host client computing device 102 may be considered an application 110 and may be accompanied by a specific data agent 142 (e.g., virtual server data agent).
A “subclient” is a logical grouping of all or part of a client's primary data 112. In general, a subclient may be defined according to how the subclient data is to be protected as a unit in system 100. For example, a subclient may be associated with a certain storage policy. A given client may thus comprise several subclients, each subclient associated with a different storage policy. For example, some files may form a first subclient that requires compression and deduplication and is associated with a first storage policy. Other files of the client may form a second subclient that requires a different retention schedule as well as encryption, and may be associated with a different, second storage policy. As a result, though the primary data may be generated by the same application 110 and may belong to one given client, portions of the data may be assigned to different subclients for distinct treatment by system 100. More detail on subclients is given in regard to storage policies below.
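A minimal sketch of the subclient idea follows: one client's data is split into two subclients, each bound to a different storage policy. The class names, policy names, retention values, and paths are purely illustrative and are not the system's actual configuration schema.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class StoragePolicy:
    name: str
    retention_days: int
    compression: bool = False
    encryption: bool = False


@dataclass
class Subclient:
    name: str
    content_paths: List[str]
    policy: StoragePolicy


client_subclients = [
    Subclient("fs-default", ["/home", "/var"],
              StoragePolicy("dedupe-compress", retention_days=30, compression=True)),
    Subclient("fs-sensitive", ["/finance"],
              StoragePolicy("long-retention-encrypted", retention_days=2555,
                            encryption=True)),
]
for sc in client_subclients:
    print(sc.name, "->", sc.policy.name)
```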
Primary Data and Exemplary Primary Storage Devices
It can also be useful in performing certain functions of system 100 to access and modify metadata within primary data 112. Metadata generally includes information about data objects and/or characteristics associated with the data objects. For simplicity herein, it is to be understood that, unless expressly stated otherwise, any reference to primary data 112 generally also includes its associated metadata, but references to metadata generally do not include the primary data. Metadata can include, without limitation, one or more of the following: the data owner (e.g., the client or user that generates the data), the last modified time (e.g., the time of the most recent modification of the data object), a data object name (e.g., a file name), a data object size (e.g., a number of bytes of data), information about the content (e.g., an indication as to the existence of a particular search term), user-supplied tags, to/from information for email (e.g., an email sender, recipient, etc.), creation date, file type (e.g., format or application type), last accessed time, application type (e.g., type of application that generated the data object), location/network (e.g., a current, past or future location of the data object and network pathways to/from the data object), geographic location (e.g., GPS coordinates), frequency of change (e.g., a period in which the data object is modified), business unit (e.g., a group or department that generates, manages or is otherwise associated with the data object), aging information (e.g., a schedule, such as a time period, in which the data object is migrated to secondary or long term storage), boot sectors, partition layouts, file location within a file folder directory structure, user permissions, owners, groups, access control lists (ACLs), system metadata (e.g., registry information), combinations of the same or other similar information related to the data object. In addition to metadata generated by or related to file systems and operating systems, some applications 110 and/or other components of system 100 maintain indices of metadata for data objects, e.g., metadata associated with individual email messages. The use of metadata to perform classification and other functions is described in greater detail below.
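For illustration only, a metadata record of the kind described above might carry fields such as the following; the class and field names are hypothetical and show just a handful of the many attributes listed.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class DataObjectMetadata:
    name: str                      # data object name, e.g. a file name
    owner: str                     # user or client that generated the data
    size_bytes: int                # data object size
    last_modified: datetime        # time of the most recent modification
    file_type: str                 # format or application type
    tags: List[str] = field(default_factory=list)   # user-supplied tags


meta = DataObjectMetadata("q3-report.docx", "alice", 1_482_112,
                          datetime(2020, 3, 25, 14, 30), "docx", ["finance"])
print(meta.name, meta.size_bytes, meta.tags)
```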
Secondary Copies and Exemplary Secondary Storage Devices
A secondary copy 116 can comprise a separate stored copy of data that is derived from one or more earlier-created stored copies (e.g., derived from primary data 112 or from another secondary copy 116). Secondary copies 116 can include point-in-time data, and may be intended for relatively long-term retention before some or all of the data is moved to other storage or discarded. In some cases, a secondary copy 116 may be in a different storage device than other previously stored copies; and/or may be remote from other previously stored copies. Secondary copies 116 can be stored in the same storage device as primary data 112. For example, a disk array capable of performing hardware snapshots stores primary data 112 and creates and stores hardware snapshots of the primary data 112 as secondary copies 116. Secondary copies 116 may be stored in relatively slow and/or lower cost storage (e.g., magnetic tape). A secondary copy 116 may be stored in a backup or archive format, or in some other format different from the native source application format or other format of primary data 112.
Secondary storage computing devices 106 may index secondary copies 116 (e.g., using a media agent 144), enabling users to browse and restore at a later time and further enabling the lifecycle management of the indexed data. After creation of a secondary copy 116 that represents certain primary data 112, a pointer or other location indicia (e.g., a stub) may be placed in primary data 112, or be otherwise associated with primary data 112, to indicate the current location of a particular secondary copy 116. Since an instance of a data object or metadata in primary data 112 may change over time as it is modified by application 110 (or hosted service or the operating system), system 100 may create and manage multiple secondary copies 116 of a particular data object or metadata, each copy representing the state of the data object in primary data 112 at a particular point in time. Moreover, since an instance of a data object in primary data 112 may eventually be deleted from primary storage device 104 and the file system, system 100 may continue to manage point-in-time representations of that data object, even though the instance in primary data 112 no longer exists. For virtual machines, the operating system and other applications 110 of client computing device(s) 102 may execute within or under the management of virtualization software (e.g., a VMM), and the primary storage device(s) 104 may comprise a virtual disk created on a physical storage device. System 100 may create secondary copies 116 of the files or other data objects in a virtual disk file and/or secondary copies 116 of the entire virtual disk file itself (e.g., of an entire .vmdk file).
Second, secondary copies 116 may be stored on a secondary storage device 108 that is inaccessible to application 110 running on client computing device 102 and/or hosted service. Some secondary copies 116 may be “offline copies,” in that they are not readily available (e.g., not mounted to tape or disk). Offline copies can include copies of data that system 100 can access without human intervention (e.g., tapes within an automated tape library, but not yet mounted in a drive), and copies that the system 100 can access only with some human intervention (e.g., tapes located at an offsite storage site).
Using Intermediate Devices for Creating Secondary Copies—Secondary Storage Computing Devices
Creating secondary copies can be challenging when hundreds or thousands of client computing devices 102 continually generate large volumes of primary data 112 to be protected. Also, there can be significant overhead involved in the creation of secondary copies 116. Moreover, specialized programmed intelligence and/or hardware capability is generally needed for accessing and interacting with secondary storage devices 108. Client computing devices 102 may interact directly with a secondary storage device 108 to create secondary copies 116, but in view of the factors described above, this approach can negatively impact the ability of client computing device 102 to serve/service application 110 and produce primary data 112. Further, any given client computing device 102 may not be optimized for interaction with certain secondary storage devices 108.
Thus, system 100 may include one or more software and/or hardware components which generally act as intermediaries between client computing devices 102 (that generate primary data 112) and secondary storage devices 108 (that store secondary copies 116). In addition to off-loading certain responsibilities from client computing devices 102, these intermediate components provide other benefits. For instance, as discussed further below with respect to FIG. 1D , distributing some of the work involved in creating secondary copies 116 can enhance scalability and improve system performance. For instance, using specialized secondary storage computing devices 106 and media agents 144 for interfacing with secondary storage devices 108 and/or for performing certain data processing operations can greatly improve the speed with which system 100 performs information management operations and can also improve the capacity of the system to handle large numbers of such operations, while reducing the computational load on the production environment of client computing devices 102. The intermediate components can include one or more secondary storage computing devices 106 as shown in FIG. 1A and/or one or more media agents 144. Media agents are discussed further below (e.g., with respect to FIGS. 1C-1E ). These special-purpose components of system 100 comprise specialized programmed intelligence and/or hardware capability for writing to, reading from, instructing, communicating with, or otherwise interacting with secondary storage devices 108.
Secondary storage computing device(s) 106 can comprise any of the computing devices described above, without limitation. In some cases, secondary storage computing device(s) 106 also include specialized hardware componentry and/or software intelligence (e.g., specialized interfaces) for interacting with certain secondary storage device(s) 108 with which they may be specially associated.
To create a secondary copy 116 involving the copying of data from primary storage subsystem 117 to secondary storage subsystem 118, client computing device 102 may communicate the primary data 112 to be copied (or a processed version thereof generated by a data agent 142) to the designated secondary storage computing device 106, via a communication pathway 114. Secondary storage computing device 106 in turn may further process and convey the data or a processed version thereof to secondary storage device 108. One or more secondary copies 116 may be created from existing secondary copies 116, such as in the case of an auxiliary copy operation, described further below.
Exemplary Primary Data and an Exemplary Secondary Copy
Secondary copy data objects 134A-C can individually represent more than one primary data object. For example, secondary copy data object 134A represents three separate primary data objects 133C, 122, and 129C (represented as 133C′, 122′, and 129C′, respectively, and accompanied by corresponding metadata Meta11, Meta3, and Meta8, respectively). Moreover, as indicated by the prime mark (′), secondary storage computing devices 106 or other components in secondary storage subsystem 118 may process the data received from primary storage subsystem 117 and store a secondary copy including a transformed and/or supplemented representation of a primary data object and/or metadata that is different from the original format, e.g., in a compressed, encrypted, deduplicated, or other modified format. For instance, secondary storage computing devices 106 can generate new metadata or other information based on said processing, and store the newly generated information along with the secondary copies. Secondary copy data object 134B represents primary data objects 120, 133B, and 119A as 120′, 133B′, and 119A′, respectively, accompanied by corresponding metadata Meta2, Meta10, and Meta1, respectively. Also, secondary copy data object 134C represents primary data objects 133A, 119B, and 129A as 133A′, 119B′, and 129A′, respectively, accompanied by corresponding metadata Meta9, Meta5, and Meta6, respectively.
Exemplary Information Management System Architecture
Storage Manager
As shown by the dashed arrowed lines 114 in FIG. 1C , storage manager 140 may communicate with, instruct, and/or control some or all elements of system 100, such as data agents 142 and media agents 144. In this manner, storage manager 140 manages the operation of various hardware and software components in system 100. In certain embodiments, control information originates from storage manager 140 and status as well as index reporting is transmitted to storage manager 140 by the managed components, whereas payload data and metadata are generally communicated between data agents 142 and media agents 144 (or otherwise between client computing device(s) 102 and secondary storage computing device(s) 106), e.g., at the direction of and under the management of storage manager 140. Control information can generally include parameters and instructions for carrying out information management operations, such as, without limitation, instructions to perform a task associated with an operation, timing information specifying when to initiate a task, data path information specifying what components to communicate with or access in carrying out an operation, and the like. In other embodiments, some information management operations are controlled or initiated by other components of system 100 (e.g., by media agents 144 or data agents 142), instead of or in combination with storage manager 140.
According to certain embodiments, storage manager 140 provides one or more of the following functions:
- communicating with data agents 142 and media agents 144, including transmitting instructions, messages, and/or queries, as well as receiving status reports, index information, messages, and/or queries, and responding to same;
- initiating execution of information management operations;
- initiating restore and recovery operations;
- managing secondary storage devices 108 and inventory/capacity of the same;
- allocating secondary storage devices 108 for secondary copy operations;
- reporting, searching, and/or classification of data in system 100;
- monitoring completion of and status reporting related to information management operations and jobs;
- tracking movement of data within system 100;
- tracking age information relating to secondary copies 116, secondary storage devices 108, comparing the age information against retention guidelines, and initiating data pruning when appropriate;
- tracking logical associations between components in system 100;
- protecting metadata associated with system 100, e.g., in management database 146;
- implementing job management, schedule management, event management, alert management, reporting, job history maintenance, user security management, disaster recovery management, and/or user interfacing for system administrators and/or end users of system 100;
- sending, searching, and/or viewing of log files; and
- implementing operations management functionality.
Administrators and others may configure and initiate certain information management operations on an individual basis. But while this may be acceptable for some recovery operations or other infrequent tasks, it is often not workable for implementing on-going organization-wide data protection and management. Thus, system 100 may utilize information management policies 148 for specifying and executing information management operations on an automated basis. Generally, an information management policy 148 can include a stored data structure or other information source that specifies parameters (e.g., criteria and rules) associated with storage management or other information management operations. Storage manager 140 can process an information management policy 148 and/or index 150 and, based on the results, identify an information management operation to perform, identify the appropriate components in system 100 to be involved in the operation (e.g., client computing devices 102 and corresponding data agents 142, secondary storage computing devices 106 and corresponding media agents 144, etc.), establish connections to those components and/or between those components, and/or instruct and control those components to carry out the operation. In this manner, system 100 can translate stored information into coordinated activity among the various computing devices in system 100.
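The policy-driven workflow described above can be illustrated with a short sketch. The following Python is not taken from the patent; the class fields and the plan_operation helper are hypothetical names used only to show how a stored policy might be resolved into the components and instructions for a single secondary copy job.

```python
# Minimal sketch (not the patented implementation): resolving a stored
# information management policy into one coordinated secondary copy job.
# All class and field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    source_client: str        # client computing device generating primary data
    data_agent: str           # application-specific data agent to invoke
    media_agent: str          # media agent fronting the target secondary storage
    target_device: str        # secondary storage device identifier
    schedule_hours: int       # how often to run the operation

def plan_operation(policy: Policy) -> dict:
    """Identify the operation and the components that should carry it out."""
    return {
        "operation": "secondary_copy",
        "client": policy.source_client,
        "data_agent": policy.data_agent,
        "media_agent": policy.media_agent,
        "destination": policy.target_device,
        "next_run_in_hours": policy.schedule_hours,
    }

if __name__ == "__main__":
    p = Policy("exchange-daily", "client-02", "ExchangeMailboxAgent",
               "media-agent-1", "disk-library-A", 24)
    print(plan_operation(p))
```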
Storage Manager User Interfaces
Various embodiments of information management system 100 may be configured and/or designed to generate user interface data usable for rendering the various interactive user interfaces described. The user interface data may be used by system 100 and/or by another system, device, and/or software program (for example, a browser program), to render the interactive user interfaces. The interactive user interfaces may be displayed on, for example, electronic displays (including, for example, touch-enabled displays), consoles, etc., whether direct-connected to storage manager 140 or communicatively coupled remotely, e.g., via an internet connection. The present disclosure describes various embodiments of interactive and dynamic user interfaces, some of which may be generated by user interface agent 158, and which are the result of significant technological development. The user interfaces described herein may provide improved human-computer interactions, allowing for significant cognitive and ergonomic efficiencies and advantages over previous systems, including reduced mental workloads, improved decision-making, and the like. User interface 158 may operate in a single integrated view or console (not shown). The console may support a reporting capability for generating a variety of reports, which may be tailored to a particular aspect of information management.
User interfaces are not exclusive to storage manager 140 and in some embodiments a user may access information locally from a computing device component of system 100. For example, some information pertaining to installed data agents 142 and associated data streams may be available from client computing device 102. Likewise, some information pertaining to media agents 144 and associated data streams may be available from secondary storage computing device 106.
Storage Manager Management Agent
Information Management Cell
An “information management cell” (or “storage operation cell” or “cell”) may generally include a logical and/or physical grouping of a combination of hardware and software components associated with performing information management operations on electronic data, typically one storage manager 140 and at least one data agent 142 (executing on a client computing device 102) and at least one media agent 144 (executing on a secondary storage computing device 106). For instance, the components shown in FIG. 1C may together form an information management cell. Thus, in some configurations, a system 100 may be referred to as an information management cell or a storage operation cell. A given cell may be identified by the identity of its storage manager 140, which is generally responsible for managing the cell.
Multiple cells may be organized hierarchically, so that cells may inherit properties from hierarchically superior cells or be controlled by other cells in the hierarchy (automatically or otherwise). Alternatively, in some embodiments, cells may inherit or otherwise be associated with information management policies, preferences, information management operational parameters, or other properties or characteristics according to their relative position in a hierarchy of cells. Cells may also be organized hierarchically according to function, geography, architectural considerations, or other factors useful or desirable in performing information management operations. For example, a first cell may represent a geographic segment of an enterprise, such as a Chicago office, and a second cell may represent a different geographic segment, such as a New York City office. Other cells may represent departments within a particular office, e.g., human resources, finance, engineering, etc. Where delineated by function, a first cell may perform one or more first types of information management operations (e.g., one or more first types of secondary copies at a certain frequency), and a second cell may perform one or more second types of information management operations (e.g., one or more second types of secondary copies at a different frequency and under different retention rules). In general, the hierarchical information is maintained by one or more storage managers 140 that manage the respective cells (e.g., in corresponding management database(s) 146).
Data Agents
A variety of different applications 110 can operate on a given client computing device 102, including operating systems, file systems, database applications, e-mail applications, and virtual machines, just to name a few. And, as part of the process of creating and restoring secondary copies 116, the client computing device 102 may be tasked with processing and preparing the primary data 112 generated by these various applications 110. Moreover, the nature of the processing/preparation can differ across application types, e.g., due to inherent structural, state, and formatting differences among applications 110 and/or the operating system of client computing device 102. Each data agent 142 is therefore advantageously configured in some embodiments to assist in the performance of information management operations based on the type of data that is being protected at a client-specific and/or application-specific level.
Each data agent 142 may be specialized for a particular application 110. For instance, different individual data agents 142 may be designed to handle Microsoft Exchange data, Lotus Notes data, Microsoft Windows file system data, Microsoft Active Directory Objects data, SQL Server data, SharePoint data, Oracle database data, SAP database data, virtual machines and/or associated data, and other types of data. A file system data agent, for example, may handle data files and/or other file system information. If a client computing device 102 has two or more types of data 112, a specialized data agent 142 may be used for each data type. For example, to backup, migrate, and/or restore all of the data on a Microsoft Exchange server, the client computing device 102 may use: (1) a Microsoft Exchange Mailbox data agent 142 to back up the Exchange mailboxes; (2) a Microsoft Exchange Database data agent 142 to back up the Exchange databases; (3) a Microsoft Exchange Public Folder data agent 142 to back up the Exchange Public Folders; and (4) a Microsoft Windows File System data agent 142 to back up the file system of client computing device 102. In this example, these specialized data agents 142 are treated as four separate data agents 142 even though they operate on the same client computing device 102. Other examples may include archive management data agents such as a migration archiver or a compliance archiver, Quick Recovery® agents, and continuous data replication agents. Application-specific data agents 142 can provide improved performance as compared to generic agents. For instance, because application-specific data agents 142 may only handle data for a single software application, the design, operation, and performance of the data agent 142 can be streamlined. The data agent 142 may therefore execute faster and consume less persistent storage and/or operating memory than data agents designed to generically accommodate multiple different software applications 110.
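As a rough illustration of the application-specific agent selection described above, the following hypothetical Python registry maps data types found on a client to specialized data agents; the agent names mirror the Exchange example, but the registry structure is an assumption, not the patented design.

```python
# Illustrative sketch only: selecting specialized data agents per data type
# on a single client, as in the Exchange server example above.
AGENT_REGISTRY = {
    "exchange_mailbox": "MicrosoftExchangeMailboxDataAgent",
    "exchange_database": "MicrosoftExchangeDatabaseDataAgent",
    "exchange_public_folder": "MicrosoftExchangePublicFolderDataAgent",
    "windows_file_system": "MicrosoftWindowsFileSystemDataAgent",
}

def agents_for_client(data_types):
    """Return one specialized data agent per data type present on the client."""
    return [AGENT_REGISTRY[t] for t in data_types if t in AGENT_REGISTRY]

# An Exchange server with all four data types uses four separate data agents,
# even though they execute on the same client computing device.
print(agents_for_client(["exchange_mailbox", "exchange_database",
                         "exchange_public_folder", "windows_file_system"]))
```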
Each data agent 142 may be configured to access data and/or metadata stored in the primary storage device(s) 104 associated with data agent 142 and its host client computing device 102, and process the data appropriately. For example, during a secondary copy operation, data agent 142 may arrange or assemble the data and metadata into one or more files having a certain format (e.g., a particular backup or archive format) before transferring the file(s) to a media agent 144 or other component. The file(s) may include a list of files or other metadata. In some embodiments, a data agent 142 may be distributed between client computing device 102 and storage manager 140 (and any other intermediate components) or may be deployed from a remote location or its functions approximated by a remote process that performs some or all of the functions of data agent 142. In addition, a data agent 142 may perform some functions provided by media agent 144. Other embodiments may employ one or more generic data agents 142 that can handle and process data from two or more different applications 110, or that can handle and process multiple data types, instead of or in addition to using specialized data agents 142. For example, one generic data agent 142 may be used to back up, migrate and restore Microsoft Exchange Mailbox data and Microsoft Exchange Database data, while another generic data agent may handle Microsoft Exchange Public Folder data and Microsoft Windows File System data.
Media Agents
As noted, off-loading certain responsibilities from client computing devices 102 to intermediate components such as secondary storage computing device(s) 106 and corresponding media agent(s) 144 can provide a number of benefits including improved performance of client computing device 102, faster and more reliable information management operations, and enhanced scalability. In one example which will be discussed further below, media agent 144 can act as a local cache of recently-copied data and/or metadata stored to secondary storage device(s) 108, thus improving restore capabilities and performance for the cached data.
A media agent 144 may be associated with a particular secondary storage device 108 if that media agent 144 is capable of one or more of: routing and/or storing data to the particular secondary storage device 108; coordinating the routing and/or storing of data to the particular secondary storage device 108; retrieving data from the particular secondary storage device 108; coordinating the retrieval of data from the particular secondary storage device 108; and modifying and/or deleting data retrieved from the particular secondary storage device 108. Media agent 144 in certain embodiments is physically separate from the associated secondary storage device 108. For instance, a media agent 144 may operate on a secondary storage computing device 106 in a distinct housing, package, and/or location from the associated secondary storage device 108. In one example, a media agent 144 operates on a first server computer and is in communication with a secondary storage device(s) 108 operating in a separate rack-mounted RAID-based system.
A media agent 144 associated with a particular secondary storage device 108 may instruct secondary storage device 108 to perform an information management task. For instance, a media agent 144 may instruct a tape library to use a robotic arm or other retrieval means to load or eject a certain storage media, and to subsequently archive, migrate, or retrieve data to or from that media, e.g., for the purpose of restoring data to a client computing device 102. As another example, a secondary storage device 108 may include an array of hard disk drives or solid state drives organized in a RAID configuration, and media agent 144 may forward a logical unit number (LUN) and other appropriate information to the array, which uses the received information to execute the desired secondary copy operation. Media agent 144 may communicate with a secondary storage device 108 via a suitable communications link, such as a SCSI or Fibre Channel link.
Each media agent 144 may maintain an associated media agent database 152. Media agent database 152 may be stored to a disk or other storage device (not shown) that is local to the secondary storage computing device 106 on which media agent 144 executes. In other cases, media agent database 152 is stored separately from the host secondary storage computing device 106. Media agent database 152 can include, among other things, a media agent index 153 (see, e.g., FIG. 1C ). In some cases, media agent index 153 does not form a part of and is instead separate from media agent database 152.
Media agent index 153 (or “index 153”) may be a data structure associated with the particular media agent 144 that includes information about the stored data associated with the particular media agent and which may be generated in the course of performing a secondary copy operation or a restore. Index 153 provides a fast and efficient mechanism for locating/browsing secondary copies 116 or other data stored in secondary storage devices 108 without having to access secondary storage device 108 to retrieve the information from there. For instance, for each secondary copy 116, index 153 may include metadata such as a list of the data objects (e.g., files/subdirectories, database objects, mailbox objects, etc.), a logical path to the secondary copy 116 on the corresponding secondary storage device 108, location information (e.g., offsets) indicating where the data objects are stored in the secondary storage device 108, when the data objects were created or modified, etc. Thus, index 153 includes metadata associated with the secondary copies 116 that is readily available for use from media agent 144. In some embodiments, some or all of the information in index 153 may instead or additionally be stored along with secondary copies 116 in secondary storage device 108. In some embodiments, a secondary storage device 108 can include sufficient information to enable a “bare metal restore,” where the operating system and/or software applications of a failed client computing device 102 or another target may be automatically restored without manually reinstalling individual software packages (including operating systems).
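A simplified sketch of what index 153 enables follows; the dictionary layout and field names are assumptions chosen for illustration, showing how a media agent could answer "where is this object?" from the index alone, without reading the secondary storage device.

```python
# Hedged sketch of a media agent index (index 153): per-copy metadata that
# locates data objects on secondary storage. Field names are illustrative.
index_153 = {
    "secondary_copy_116_A": {
        "device": "tape-library-1",
        "path": "/archives/2020/03/copyA.bkf",
        "objects": {
            "/users/alice/report.docx": {"offset": 0,      "modified": "2020-03-20"},
            "/users/alice/budget.xlsx": {"offset": 524288, "modified": "2020-03-21"},
        },
    },
}

def locate(copy_id: str, object_name: str):
    """Resolve an object's location from the index, not from the device."""
    entry = index_153[copy_id]
    obj = entry["objects"][object_name]
    return entry["device"], entry["path"], obj["offset"]

print(locate("secondary_copy_116_A", "/users/alice/budget.xlsx"))
```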
Because index 153 may operate as a cache, it can also be referred to as an “index cache.” In such cases, information stored in index cache 153 typically comprises data that reflects certain particulars about relatively recent secondary copy operations. After some triggering event, such as after some time elapses or index cache 153 reaches a particular size, certain portions of index cache 153 may be copied or migrated to secondary storage device 108, e.g., on a least-recently-used basis. This information may be retrieved and uploaded back into index cache 153 or otherwise restored to media agent 144 to facilitate retrieval of data from the secondary storage device(s) 108. In some embodiments, the cached information may include format or containerization information related to archives or other files stored on storage device(s) 108.
In some alternative embodiments media agent 144 generally acts as a coordinator or facilitator of secondary copy operations between client computing devices 102 and secondary storage devices 108, but does not actually write the data to secondary storage device 108. For instance, storage manager 140 (or media agent 144) may instruct a client computing device 102 and secondary storage device 108 to communicate with one another directly. In such a case, client computing device 102 transmits data directly or via one or more intermediary components to secondary storage device 108 according to the received instructions, and vice versa. Media agent 144 may still receive, process, and/or maintain metadata related to the secondary copy operations, i.e., may continue to build and maintain index 153. In these embodiments, payload data can flow through media agent 144 for the purposes of populating index 153, but not for writing to secondary storage device 108. Media agent 144 and/or other components such as storage manager 140 may in some cases incorporate additional functionality, such as data classification, content indexing, deduplication, encryption, compression, and the like. Further details regarding these and other functions are described below.
Distributed, Scalable Architecture
As described, certain functions of system 100 can be distributed amongst various physical and/or logical components. For instance, one or more of storage manager 140, data agents 142, and media agents 144 may operate on computing devices that are physically separate from one another. This architecture can provide a number of benefits. For instance, hardware and software design choices for each distributed component can be targeted to suit its particular function. The secondary storage computing devices 106 on which media agents 144 operate can be tailored for interaction with associated secondary storage devices 108 and provide fast index cache operation, among other specific tasks. Similarly, client computing device(s) 102 can be selected to effectively service applications 110 in order to efficiently produce and store primary data 112.
Moreover, in some cases, one or more of the individual components of information management system 100 can be distributed to multiple separate computing devices. As one example, for large file systems where the amount of data stored in management database 146 is relatively large, database 146 may be migrated to or may otherwise reside on a specialized database server (e.g., an SQL server) separate from a server that implements the other functions of storage manager 140. This distributed configuration can provide added protection because database 146 can be protected with standard database utilities (e.g., SQL log shipping or database replication) independent from other functions of storage manager 140. Database 146 can be efficiently replicated to a remote site for use in the event of a disaster or other data loss at the primary site. Or database 146 can be replicated to another computing device within the same site, such as to a higher performance machine in the event that a storage manager host computing device can no longer service the needs of a growing system 100.
The distributed architecture also provides scalability and efficient component utilization. FIG. 1D shows an embodiment of information management system 100 including a plurality of client computing devices 102 and associated data agents 142 as well as a plurality of secondary storage computing devices 106 and associated media agents 144. Additional components can be added or subtracted based on the evolving needs of system 100. For instance, depending on where bottlenecks are identified, administrators can add additional client computing devices 102, secondary storage computing devices 106, and/or secondary storage devices 108. Moreover, where multiple fungible components are available, load balancing can be implemented to dynamically address identified bottlenecks. As an example, storage manager 140 may dynamically select which media agents 144 and/or secondary storage devices 108 to use for storage operations based on a processing load analysis of media agents 144 and/or secondary storage devices 108, respectively.
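As a hedged illustration of the load balancing just described, the following snippet picks the least-loaded media agent for the next operation; the load metric, agent names, and selection rule are hypothetical, and the patent does not prescribe any particular formula.

```python
# Sketch under assumptions: a storage manager choosing a media agent based
# on a processing load analysis. The numeric loads are illustrative only.
def select_media_agent(agent_loads: dict) -> str:
    """Return the media agent with the lowest current processing load."""
    return min(agent_loads, key=agent_loads.get)

loads = {"media-agent-1": 0.82, "media-agent-2": 0.35, "media-agent-3": 0.57}
print(select_media_agent(loads))  # -> media-agent-2
```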
Where system 100 includes multiple media agents 144 (see, e.g., FIG. 1D ), a first media agent 144 may provide failover functionality for a second failed media agent 144. In addition, media agents 144 can be dynamically selected to provide load balancing. Each client computing device 102 can communicate with, among other components, any of the media agents 144, e.g., as directed by storage manager 140. And each media agent 144 may communicate with, among other components, any of secondary storage devices 108, e.g., as directed by storage manager 140. Thus, operations can be routed to secondary storage devices 108 in a dynamic and highly flexible manner, to provide load balancing, failover, etc. Further examples of scalable systems capable of dynamic storage operations, load balancing, and failover are provided in U.S. Pat. No. 7,246,207.
While distributing functionality amongst multiple computing devices can have certain advantages, in other contexts it can be beneficial to consolidate functionality on the same computing device. In alternative configurations, certain components may reside and execute on the same computing device. As such, in other embodiments, one or more of the components shown in FIG. 1C may be implemented on the same computing device. In one configuration, a storage manager 140, one or more data agents 142, and/or one or more media agents 144 are all implemented on the same computing device. In other embodiments, one or more data agents 142 and one or more media agents 144 are implemented on the same computing device, while storage manager 140 is implemented on a separate computing device, etc. without limitation.
Exemplary Types of Information Management Operations, Including Storage Operations
In order to protect and leverage stored data, system 100 can be configured to perform a variety of information management operations, which may also be referred to in some cases as storage management operations or storage operations. These operations can generally include (i) data movement operations, (ii) processing and data manipulation operations, and (iii) analysis, reporting, and management operations.
Data Movement Operations, Including Secondary Copy Operations
Data movement operations are generally storage operations that involve the copying or migration of data between different locations in system 100. For example, data movement operations can include operations in which stored data is copied, migrated, or otherwise transferred from one or more first storage devices to one or more second storage devices, such as from primary storage device(s) 104 to secondary storage device(s) 108, from secondary storage device(s) 108 to different secondary storage device(s) 108, from secondary storage devices 108 to primary storage devices 104, or from primary storage device(s) 104 to different primary storage device(s) 104, or in some cases within the same primary storage device 104 such as within a storage array.
Data movement operations can include by way of example, backup operations, archive operations, information lifecycle management operations such as hierarchical storage management operations, replication operations (e.g., continuous data replication), snapshot operations, deduplication or single-instancing operations, auxiliary copy operations, disaster-recovery copy operations, and the like. As will be discussed, some of these operations do not necessarily create distinct copies. Nonetheless, some or all of these operations are generally referred to as “secondary copy operations” for simplicity, because they involve secondary copies. Data movement also comprises restoring secondary copies.
Backup Operations
A backup operation creates a copy of a version of primary data 112 at a particular point in time (e.g., one or more files or other data units). Each subsequent backup copy 116 (which is a form of secondary copy 116) may be maintained independently of the first. A backup generally involves maintaining a version of the copied primary data 112 as well as backup copies 116. Further, a backup copy in some embodiments is generally stored in a form that is different from the native format, e.g., a backup format. This contrasts to the version in primary data 112 which may instead be stored in a format native to the source application(s) 110. In various cases, backup copies can be stored in a format in which the data is compressed, encrypted, deduplicated, and/or otherwise modified from the original native application format. For example, a backup copy may be stored in a compressed backup format that facilitates efficient long-term storage. Backup copies 116 can have relatively long retention periods as compared to primary data 112, which is generally highly changeable. Backup copies 116 may be stored on media with slower retrieval times than primary storage device 104. Some backup copies may have shorter retention periods than some other types of secondary copies 116, such as archive copies (described below). Backups may be stored at an offsite location.
Backup operations can include full backups, differential backups, incremental backups, “synthetic full” backups, and/or creating a “reference copy.” A full backup (or “standard full backup”) in some embodiments is generally a complete image of the data to be protected. However, because full backup copies can consume a relatively large amount of storage, it can be useful to use a full backup copy as a baseline and only store changes relative to the full backup copy afterwards.
A differential backup operation (or cumulative incremental backup operation) tracks and stores changes that occurred since the last full backup. Differential backups can grow quickly in size, but can restore relatively efficiently because a restore can be completed in some cases using only the full backup copy and the latest differential copy.
An incremental backup operation generally tracks and stores changes since the most recent backup copy of any type, which can greatly reduce storage utilization. In some cases, however, restoring can be lengthy compared to full or differential backups because completing a restore operation may involve accessing a full backup in addition to multiple incremental backups.
Synthetic full backups generally consolidate data without directly backing up data from the client computing device. A synthetic full backup is created from the most recent full backup (i.e., standard or synthetic) and subsequent incremental and/or differential backups. The resulting synthetic full backup is identical to what would have been created had the last backup for the subclient been a standard full backup. Unlike standard full, incremental, and differential backups, however, a synthetic full backup does not actually transfer data from primary storage to the backup media, because it operates as a backup consolidator. A synthetic full backup extracts the index data of each participating subclient. Using this index data and the previously backed up user data images, it builds new full backup images (e.g., bitmaps), one for each subclient. The new backup images consolidate the index and user data stored in the related incremental, differential, and previous full backups into a synthetic backup file that fully represents the subclient (e.g., via pointers) but does not comprise all its constituent data.
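The consolidation idea behind a synthetic full backup can be sketched as follows; the data structures (file name mapped to a version tag) are deliberately simplified assumptions and are not the patented index format.

```python
# Illustrative sketch of synthetic-full consolidation: build a new full image
# by layering the last full backup's index with each subsequent incremental,
# without re-reading data from the client computing device.
def synthesize_full(last_full: dict, incrementals: list) -> dict:
    """Later backups win; deletions are marked with None."""
    image = dict(last_full)
    for inc in incrementals:
        for name, version in inc.items():
            if version is None:
                image.pop(name, None)   # file deleted since the last backup
            else:
                image[name] = version   # newer version supersedes the old
    return image

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [{"b.txt": "v2"}, {"c.txt": "v1", "a.txt": None}]
print(synthesize_full(full, incs))  # {'b.txt': 'v2', 'c.txt': 'v1'}
```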
Any of the above types of backup operations can be at the volume level, file level, or block level. Volume level backup operations generally involve copying of a data volume (e.g., a logical disk or partition) as a whole. In a file-level backup, information management system 100 generally tracks changes to individual files and includes copies of files in the backup copy. For block-level backups, files are broken into constituent blocks, and changes are tracked at the block level. Upon restore, system 100 reassembles the blocks into files in a transparent fashion. Far less data may actually be transferred and copied to secondary storage devices 108 during a file-level copy than a volume-level copy. Likewise, a block-level copy may transfer less data than a file-level copy, resulting in faster execution. However, restoring a relatively higher-granularity copy can result in longer restore times. For instance, when restoring a block-level copy, the process of locating and retrieving constituent blocks can sometimes take longer than restoring file-level backups.
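For block-level backups, the change tracking described above might look roughly like the following sketch, which hashes fixed-size blocks and reports only those whose content differs from the prior copy; the block size and hashing choice are assumptions for illustration.

```python
# Minimal sketch, not the patented method: block-level change detection that
# selects only changed blocks for copying to secondary storage.
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration

def changed_blocks(volume: bytes, previous_hashes: list) -> list:
    """Return indexes of blocks that differ from the prior backup."""
    changed = []
    for i in range(0, len(volume), BLOCK_SIZE):
        digest = hashlib.sha256(volume[i:i + BLOCK_SIZE]).hexdigest()
        idx = i // BLOCK_SIZE
        if idx >= len(previous_hashes) or previous_hashes[idx] != digest:
            changed.append(idx)
    return changed

old = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE
new = b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE
prior = [hashlib.sha256(old[i:i + BLOCK_SIZE]).hexdigest()
         for i in range(0, len(old), BLOCK_SIZE)]
print(changed_blocks(new, prior))  # [1]  only the second block changed
```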
A reference copy may comprise copy(ies) of selected objects from backed up data, typically to help organize data by keeping contextual information from multiple sources together, and/or help retain specific data for a longer period of time, such as for legal hold needs. A reference copy generally maintains data integrity, and when the data is restored, it may be viewed in the same format as the source data. In some embodiments, a reference copy is based on a specialized client, individual subclient and associated information management policies (e.g., storage policy, retention policy, etc.) that are administered within system 100.
Archive Operations
Because backup operations generally involve maintaining a version of the copied primary data 112 and also maintaining backup copies in secondary storage device(s) 108, they can consume significant storage capacity. To reduce storage consumption, an archive operation according to certain embodiments creates an archive copy 116 by both copying and removing source data. Or, seen another way, archive operations can involve moving some or all of the source data to the archive destination. Thus, data satisfying criteria for removal (e.g., data of a threshold age or size) may be removed from source storage. The source data may be primary data 112 or a secondary copy 116, depending on the situation. As with backup copies, archive copies can be stored in a format in which the data is compressed, encrypted, deduplicated, and/or otherwise modified from the format of the original application or source copy. In addition, archive copies may be retained for relatively long periods of time (e.g., years) and, in some cases are never deleted. In certain embodiments, archive copies may be made and kept for extended periods in order to meet compliance regulations.
Archiving can also serve the purpose of freeing up space in primary storage device(s) 104 and easing the demand on computational resources on client computing device 102. Similarly, when a secondary copy 116 is archived, the archive copy can therefore serve the purpose of freeing up space in the source secondary storage device(s) 108. Examples of data archiving operations are provided in U.S. Pat. No. 7,107,298.
Snapshot Operations
Snapshot operations can provide a relatively lightweight, efficient mechanism for protecting data. From an end-user viewpoint, a snapshot may be thought of as an “instant” image of primary data 112 at a given point in time, and may include state and/or status information relative to an application 110 that creates/manages primary data 112. In one embodiment, a snapshot may generally capture the directory structure of an object in primary data 112 such as a file or volume or other data set at a particular moment in time and may also preserve file attributes and contents. A snapshot in some cases is created relatively quickly, e.g., substantially instantly, using a minimum amount of file space, but may still function as a conventional file system backup.
A “hardware snapshot” (or “hardware-based snapshot”) operation occurs where a target storage device (e.g., a primary storage device 104 or a secondary storage device 108) performs the snapshot operation in a self-contained fashion, substantially independently, using hardware, firmware and/or software operating on the storage device itself. For instance, the storage device may perform snapshot operations generally without intervention or oversight from any of the other components of the system 100, e.g., a storage array may generate an “array-created” hardware snapshot and may also manage its storage, integrity, versioning, etc. In this manner, hardware snapshots can off-load other components of system 100 from snapshot processing. An array may receive a request from another component to take a snapshot and then proceed to execute the “hardware snapshot” operations autonomously, preferably reporting success to the requesting component.
A “software snapshot” (or “software-based snapshot”) operation, on the other hand, occurs where a component in system 100 (e.g., client computing device 102, etc.) implements a software layer that manages the snapshot operation via interaction with the target storage device. For instance, the component executing the snapshot management software layer may derive a set of pointers and/or data that represents the snapshot. The snapshot management software layer may then transmit the same to the target storage device, along with appropriate instructions for writing the snapshot. One example of a software snapshot product is Microsoft Volume Snapshot Service (VSS), which is part of the Microsoft Windows operating system.
Some types of snapshots do not actually create another physical copy of all the data as it existed at the particular point in time, but may simply create pointers that map files and directories to specific memory locations (e.g., to specific disk blocks) where the data resides as it existed at the particular point in time. For example, a snapshot copy may include a set of pointers derived from the file system or from an application. In some other cases, the snapshot may be created at the block-level, such that creation of the snapshot occurs without awareness of the file system. Each pointer points to a respective stored data block, so that collectively, the set of pointers reflect the storage location and state of the data object (e.g., file(s) or volume(s) or data set(s)) at the point in time when the snapshot copy was created.
An initial snapshot may use only a small amount of disk space needed to record a mapping or other data structure representing or otherwise tracking the blocks that correspond to the current state of the file system. Additional disk space is usually required only when files and directories change later on. Furthermore, when files change, typically only the pointers which map to blocks are copied, not the blocks themselves. For example, for “copy-on-write” snapshots, when a block changes in primary storage, the block is copied to secondary storage or cached in primary storage before the block is overwritten in primary storage, and the pointer to that block is changed to reflect the new location of that block. The snapshot mapping of file system data may also be updated to reflect the changed block(s) at that particular point in time. In some other cases, a snapshot includes a full physical copy of all or substantially all of the data represented by the snapshot. Further examples of snapshot operations are provided in U.S. Pat. No. 7,529,782. A snapshot copy in many cases can be made quickly and without significantly impacting primary computing resources because large amounts of data need not be copied or moved. In some embodiments, a snapshot may exist as a virtual file system, parallel to the actual file system. Users in some cases gain read-only access to the record of files and directories of the snapshot. By electing to restore primary data 112 from a snapshot taken at a given point in time, users may also return the current file system to the state of the file system that existed when the snapshot was taken.
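The copy-on-write behavior described in this paragraph can be sketched as follows, using an in-memory "volume" of numbered blocks as a simplifying assumption; real snapshot implementations operate on disk blocks and file-system metadata rather than Python dictionaries.

```python
# Sketch of copy-on-write snapshot behavior: the snapshot records only
# pointers until a block is about to be overwritten, at which point the
# original block is preserved.
class CopyOnWriteSnapshot:
    def __init__(self, volume: dict):
        self.volume = volume                 # live primary data: block -> bytes
        self.preserved = {}                  # blocks copied aside before overwrite

    def write(self, block: int, data: bytes):
        """Preserve the original block once, then let primary storage change."""
        if block in self.volume and block not in self.preserved:
            self.preserved[block] = self.volume[block]
        self.volume[block] = data

    def read_snapshot(self, block: int) -> bytes:
        """Snapshot view: preserved copy if the block changed, else the live block."""
        return self.preserved.get(block, self.volume[block])

vol = {0: b"alpha", 1: b"beta"}
snap = CopyOnWriteSnapshot(vol)
snap.write(1, b"gamma")
print(vol[1], snap.read_snapshot(1))  # b'gamma' b'beta'
```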
Replication Operations
Replication is another type of secondary copy operation. Some types of secondary copies 116 periodically capture images of primary data 112 at particular points in time (e.g., backups, archives, and snapshots). However, it can also be useful for recovery purposes to protect primary data 112 in a more continuous fashion, by replicating primary data 112 substantially as changes occur. In some cases a replication copy can be a mirror copy, for instance, where changes made to primary data 112 are mirrored or substantially immediately copied to another location (e.g., to secondary storage device(s) 108). By copying each write operation to the replication copy, two storage systems are kept synchronized or substantially synchronized so that they are virtually identical at approximately the same time. Where entire disk volumes are mirrored, however, mirroring can require a significant amount of storage space and utilize a large amount of processing resources.
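A minimal sketch of mirrored replication, under the assumption of a simple key/value "volume," shows the basic invariant stated above: every write applied to the primary is also applied to the replica.

```python
# Hedged sketch of mirroring: each write to primary data is also applied to
# a replica so the two stay substantially synchronized. Names are illustrative.
class MirroredVolume:
    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key: str, value: bytes):
        # Apply the same write to both copies so they remain virtually identical.
        self.primary[key] = value
        self.replica[key] = value

m = MirroredVolume()
m.write("/db/orders.log", b"order 1001 committed")
assert m.primary == m.replica
```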
According to some embodiments, secondary copy operations are performed on replicated data that represents a recoverable state, or “known good state” of a particular application running on the source system. For instance, in certain embodiments, known good replication copies may be viewed as copies of primary data 112. This feature allows the system to directly access, copy, restore, back up, or otherwise manipulate the replication copies as if they were the “live” primary data 112. This can reduce access time, storage utilization, and impact on source applications 110, among other benefits. Based on known good state information, system 100 can replicate sections of application data that represent a recoverable state rather than rote copying of blocks of data. Examples of replication operations (e.g., continuous data replication) are provided in U.S. Pat. No. 7,617,262.
Deduplication/Single-Instancing Operations
Deduplication or single-instance storage is useful to reduce the amount of non-primary data. For instance, some or all of the above-described secondary copy operations can involve deduplication in some fashion. New data is read, broken down into data portions of a selected granularity (e.g., sub-file level blocks, files, etc.), compared with corresponding portions that are already in secondary storage, and only new/changed portions are stored. Portions that already exist are represented as pointers to the already-stored data. Thus, a deduplicated secondary copy 116 may comprise actual data portions copied from primary data 112 and may further comprise pointers to already-stored data, which is generally more storage-efficient than a full copy.
In order to streamline the comparison process, system 100 may calculate and/or store signatures (e.g., hashes or cryptographically unique IDs) corresponding to the individual source data portions and compare the signatures to already-stored data signatures, instead of comparing entire data portions. In some cases, only a single instance of each data portion is stored, and deduplication operations may therefore be referred to interchangeably as “single-instancing” operations. Depending on the implementation, however, deduplication operations can store more than one instance of certain data portions, yet still significantly reduce stored-data redundancy. Depending on the embodiment, deduplication portions such as data blocks can be of fixed or variable length. Using variable length blocks can enhance deduplication by responding to changes in the data stream, but can involve more complex processing. In some cases, system 100 utilizes a technique for dynamically aligning deduplication blocks based on changing content in the data stream, as described in U.S. Pat. No. 8,364,652.
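The signature-based comparison described above can be illustrated with the following sketch; the single-instance store and layout list are hypothetical structures, not the patented deduplication database.

```python
# Illustrative sketch of signature-based deduplication: store a data portion
# once, and store only a reference (its hash) for repeated portions.
import hashlib

store = {}          # signature -> data block (single instance)
copy_layout = []    # ordered list of signatures composing the secondary copy

def dedupe_write(block: bytes):
    sig = hashlib.sha256(block).hexdigest()
    if sig not in store:           # new/changed portion: store the data
        store[sig] = block
    copy_layout.append(sig)        # existing portion: reference only

for chunk in [b"payroll", b"payroll", b"invoices", b"payroll"]:
    dedupe_write(chunk)

print(len(copy_layout), "references,", len(store), "stored blocks")  # 4 references, 2 stored blocks
```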
Information Lifecycle Management and Hierarchical Storage Management
In some embodiments, files and other data over their lifetime move from more expensive quick-access storage to less expensive slower-access storage. Operations associated with moving data through various tiers of storage are sometimes referred to as information lifecycle management (ILM) operations.
One type of ILM operation is a hierarchical storage management (HSM) operation, which generally automatically moves data between classes of storage devices, such as from high-cost to low-cost storage devices. For instance, an HSM operation may involve movement of data from primary storage devices 104 to secondary storage devices 108, or between tiers of secondary storage devices 108. With each tier, the storage devices may be progressively cheaper, have relatively slower access/restore times, etc. For example, movement of data between tiers may occur as data becomes less important over time. In some embodiments, an HSM operation is similar to archiving in that creating an HSM copy may (though not always) involve deleting some of the source data, e.g., according to one or more criteria related to the source data. For example, an HSM copy may include primary data 112 or a secondary copy 116 that exceeds a given size threshold or a given age threshold. Often, and unlike some types of archive copies, HSM data that is removed or aged from the source is replaced by a logical reference pointer or stub. The reference pointer or stub can be stored in the primary storage device 104 or other source storage device, such as a secondary storage device 108 to replace the deleted source data and to point to or otherwise indicate the new location in (another) secondary storage device 108.
For example, files are generally moved between higher and lower cost storage depending on how often the files are accessed. When a user requests access to HSM data that has been removed or migrated, system 100 uses the stub to locate the data and can make recovery of the data appear transparent, even though the HSM data may be stored at a location different from other source data. In this manner, the data appears to the user (e.g., in file system browsing windows and the like) as if it still resides in the source location (e.g., in a primary storage device 104). The stub may include metadata associated with the corresponding data, so that a file system and/or application can provide some information about the data object and/or a limited-functionality version (e.g., a preview) of the data object.
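A stub-based recall of HSM data might look roughly like the following sketch; the stub fields and the hsm:// location string are assumptions used only to show how a read can transparently follow the stub to the data's new location in secondary storage.

```python
# Minimal sketch, assuming a simple stub format: aged data is replaced by a
# stub that points to secondary storage, and reads transparently recall it.
secondary_store = {"hsm://tier2/projects/q1_report.pdf": b"<archived contents>"}

def read_with_stub(source_entry):
    """Return data directly, or follow the stub to secondary storage."""
    if isinstance(source_entry, dict) and source_entry.get("stub"):
        return secondary_store[source_entry["location"]]   # transparent recall
    return source_entry

stub = {"stub": True, "location": "hsm://tier2/projects/q1_report.pdf",
        "size": 1048576, "modified": "2019-11-02"}          # metadata kept locally
print(read_with_stub(stub))
```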
An HSM copy may be stored in a format other than the native application format (e.g., compressed, encrypted, deduplicated, and/or otherwise modified). In some cases, copies which involve the removal of data from source storage and the maintenance of stub or other logical reference information on source storage may be referred to generally as “on-line archive copies.” On the other hand, copies which involve the removal of data from source storage without the maintenance of stub or other logical reference information on source storage may be referred to as “off-line archive copies.” Examples of HSM and ILM techniques are provided in U.S. Pat. No. 7,343,453.
Auxiliary Copy Operations
An auxiliary copy is generally a copy of an existing secondary copy 116. For instance, an initial secondary copy 116 may be derived from primary data 112 or from data residing in secondary storage subsystem 118, whereas an auxiliary copy is generated from the initial secondary copy 116. Auxiliary copies provide additional standby copies of data and may reside on different secondary storage devices 108 than the initial secondary copies 116. Thus, auxiliary copies can be used for recovery purposes if initial secondary copies 116 become unavailable. Exemplary auxiliary copy techniques are described in further detail in U.S. Pat. No. 8,230,195.
Disaster-Recovery Copy Operations
Data Manipulation, Including Encryption and Compression
Data manipulation and processing may include encryption and compression as well as integrity marking and checking, formatting for transmission, formatting for storage, etc. Data may be manipulated “client-side” by data agent 142 as well as “target-side” by media agent 144 in the course of creating secondary copy 116, or conversely in the course of restoring data from secondary to primary.
Encryption Operations
Compression Operations
Similar to encryption, system 100 may also or alternatively compress data in the course of generating a secondary copy 116. Compression encodes information such that fewer bits are needed to represent the information as compared to the original representation. Compression techniques are well known in the art. Compression operations may apply one or more data compression algorithms. Compression may be applied in creating a secondary copy 116 of a previously uncompressed secondary copy, e.g., when making archive copies or disaster recovery copies. The use of compression may result in metadata that specifies the nature of the compression, so that data may be uncompressed on restore if appropriate.
Data Analysis, Reporting, and Management Operations
Data analysis, reporting, and management operations can differ from data movement operations in that they do not necessarily involve copying, migration or other transfer of data between different locations in the system. For instance, data analysis operations may involve processing (e.g., offline processing) or modification of already stored primary data 112 and/or secondary copies 116. However, in some embodiments data analysis operations are performed in conjunction with data movement operations. Some data analysis operations include content indexing operations and classification operations which can be useful in leveraging data under management to enhance search and other features.
Classification Operations/Content Indexing
In some embodiments, information management system 100 analyzes and indexes characteristics, content, and metadata associated with primary data 112 (“online content indexing”) and/or secondary copies 116 (“off-line content indexing”). Content indexing can identify files or other data objects based on content (e.g., user-defined keywords or phrases, other keywords/phrases that are not defined by a user, etc.), and/or metadata (e.g., email metadata such as “to,” “from,” “cc,” “bcc,” attachment name, received time, etc.). Content indexes may be searched and search results may be restored.
One or more components, such as a content index engine, can be configured to scan data and/or associated metadata for classification purposes to populate a database (or other data structure) of information, which can be referred to as a “data classification database” or a “metabase.” Depending on the embodiment, the data classification database(s) can be organized in a variety of different ways, including centralization, logical sub-divisions, and/or physical sub-divisions. For instance, one or more data classification databases may be associated with different subsystems or tiers within system 100. As an example, there may be a first metabase associated with primary storage subsystem 117 and a second metabase associated with secondary storage subsystem 118. In other cases, metabase(s) may be associated with individual components, e.g., client computing devices 102 and/or media agents 144. In some embodiments, a data classification database may reside as one or more data structures within management database 146, may be otherwise associated with storage manager 140, and/or may reside as a separate component. In some cases, metabase(s) may be included in separate database(s) and/or on separate storage device(s) from primary data 112 and/or secondary copies 116, such that operations related to the metabase(s) do not significantly impact performance on other components of system 100. In other cases, metabase(s) may be stored along with primary data 112 and/or secondary copies 116. Files or other data objects can be associated with identifiers (e.g., tag entries, etc.) to facilitate searches of stored data objects. Among a number of other benefits, the metabase can also allow efficient, automatic identification of files or other data objects to associate with secondary copy or other information management operations. For instance, a metabase can dramatically improve the speed with which system 100 can search through and identify data as compared to other approaches that involve scanning an entire file system. Examples of metabases and data classification operations are provided in U.S. Pat. Nos. 7,734,669 and 7,747,579.
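As an illustrative sketch (not the patented metabase schema), the following shows how tag entries associated with data objects let the system identify candidates for a secondary copy or other operation without scanning an entire file system.

```python
# Hypothetical metabase: tag entries per data object support fast selection
# of objects for information management operations.
metabase = [
    {"object": "/mail/ceo/2020-03.pst", "tags": {"email", "finance"}, "owner": "ceo"},
    {"object": "/docs/brochure.pdf",    "tags": {"marketing"},        "owner": "mktg"},
    {"object": "/db/ledger.mdf",        "tags": {"finance", "sql"},   "owner": "acct"},
]

def find_by_tag(tag: str):
    """Identify objects for a secondary copy or legal hold by tag, not by scan."""
    return [e["object"] for e in metabase if tag in e["tags"]]

print(find_by_tag("finance"))  # ['/mail/ceo/2020-03.pst', '/db/ledger.mdf']
```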
Management and Reporting Operations
Certain embodiments leverage the integrated ubiquitous nature of system 100 to provide useful system-wide management and reporting. Operations management can generally include monitoring and managing the health and performance of system 100 by, without limitation, performing error tracking, generating granular storage/performance metrics (e.g., job success/failure information, deduplication efficiency, etc.), generating storage modeling and costing information, and the like. As an example, storage manager 140 or another component in system 100 may analyze traffic patterns and suggest and/or automatically route data to minimize congestion. In some embodiments, the system can generate predictions relating to storage operations or storage operation information. Such predictions, which may be based on a trending analysis, may predict various network operations or resource usage, such as network traffic levels, storage media use, use of bandwidth of communication links, use of media agent components, etc. Further examples of traffic analysis, trend analysis, prediction generation, and the like are described in U.S. Pat. No. 7,343,453.
In some configurations having a hierarchy of storage operation cells, a master storage manager 140 may track the status of subordinate cells, such as the status of jobs, system components, system resources, and other items, by communicating with storage managers 140 (or other components) in the respective storage operation cells. Moreover, the master storage manager 140 may also track status by receiving periodic status updates from the storage managers 140 (or other components) in the respective cells regarding jobs, system components, system resources, and other items. In some embodiments, a master storage manager 140 may store status information and other information regarding its associated storage operation cells and other system information in its management database 146 and/or index 150 (or in another location). The master storage manager 140 or other component may also determine whether certain storage-related or other criteria are satisfied, and may perform an action or trigger event (e.g., data migration) in response to the criteria being satisfied, such as where a storage threshold is met for a particular volume, or where inadequate protection exists for certain data. For instance, data from one or more storage operation cells is used to dynamically and automatically mitigate recognized risks, and/or to advise users of risks or suggest actions to mitigate these risks. For example, an information management policy may specify certain requirements (e.g., that a storage device should maintain a certain amount of free space, that secondary copies should occur at a particular interval, that data should be aged and migrated to other storage after a particular period, that data on a secondary volume should always have a certain level of availability and be restorable within a given time period, that data on a secondary volume may be mirrored or otherwise migrated to a specified number of other volumes, etc.). If a risk condition or other criterion is triggered, the system may notify the user of these conditions and may suggest (or automatically implement) a mitigation action to address the risk. For example, the system may indicate that data from a primary copy 112 should be migrated to a secondary storage device 108 to free up space on primary storage device 104. Examples of the use of risk factors and other triggering criteria are described in U.S. Pat. No. 7,343,453.
In some embodiments, system 100 may also determine whether a metric or other indication satisfies particular storage criteria sufficient to perform an action. For example, a storage policy or other definition might indicate that a storage manager 140 should initiate a particular action if a storage metric or other indication drops below or otherwise fails to satisfy specified criteria such as a threshold of data protection. In some embodiments, risk factors may be quantified into certain measurable service or risk levels. For example, certain applications and associated data may be considered to be more important relative to other data and services. Financial compliance data, for example, may be of greater importance than marketing materials, etc. Network administrators may assign priority values or “weights” to certain data and/or applications corresponding to the relative importance. The level of compliance of secondary copy operations specified for these applications may also be assigned a certain value. Thus, the health, impact, and overall importance of a service may be determined, such as by measuring the compliance value and calculating the product of the priority value and the compliance value to determine the “service level” and comparing it to certain operational thresholds to determine whether it is acceptable. Further examples of the service level determination are provided in U.S. Pat. No. 7,343,453.
Any of the above types of information (e.g., information related to trending, predictions, job, cell or component status, risk, service level, costing, etc.) can generally be provided to users via user interface 158 in a single integrated view or console (not shown). Report types may include: scheduling, event management, media management and data aging. Available reports may also include backup history, data aging history, auxiliary copy history, job history, library and drive, media in library, restore history, and storage policy, etc., without limitation. Such reports may be specified and created at a certain point in time as a system analysis, forecasting, or provisioning tool. Integrated reports may also be generated that illustrate storage and performance metrics, risks and storage costing information. Moreover, users may create their own reports based on specific needs. User interface 158 can include an option to graphically depict the various components in the system using appropriate icons. As one example, user interface 158 may provide a graphical depiction of primary storage devices 104, secondary storage devices 108, data agents 142 and/or media agents 144, and their relationship to one another in system 100.
In general, the operations management functionality of system 100 can facilitate planning and decision-making. For example, in some embodiments, a user may view the status of some or all jobs as well as the status of each component of information management system 100. Users may then plan and make decisions based on this data. For instance, a user may view high-level information regarding secondary copy operations for system 100, such as job status, component status, resource status (e.g., communication pathways, etc.), and other information. The user may also drill down or use other means to obtain more detailed information regarding a particular component, job, or the like. Further examples are provided in U.S. Pat. No. 7,343,453.
Information Management Policies
An information management policy 148 can include a data structure or other information source that specifies a set of parameters (e.g., criteria and rules) associated with secondary copy and/or other information management operations.
One type of information management policy 148 is a “storage policy.” According to certain embodiments, a storage policy generally comprises a data structure or other information source that defines (or includes information sufficient to determine) a set of preferences or other criteria for performing information management operations. Storage policies can include one or more of the following: (1) what data will be associated with the storage policy, e.g., subclient; (2) a destination to which the data will be stored; (3) datapath information specifying how the data will be communicated to the destination; (4) the type of secondary copy operation to be performed; and (5) retention information specifying how long the data will be retained at the destination (see, e.g., FIG. 1E ). Data associated with a storage policy can be logically organized into subclients, which may represent primary data 112 and/or secondary copies 116. A subclient may represent static or dynamic associations of portions of a data volume. Subclients may represent mutually exclusive portions. Thus, in certain embodiments, a portion of data may be given a label and the association is stored as a static entity in an index, database or other storage location. Subclients may also be used as an effective administrative scheme of organizing data according to data type, department within the enterprise, storage preferences, or the like. Depending on the configuration, subclients can correspond to files, folders, virtual machines, databases, etc. In one exemplary scenario, an administrator may find it preferable to separate e-mail data from financial data using two different subclients.
A storage policy can define where data is stored by specifying a target or destination storage device (or group of storage devices). For instance, where the secondary storage device 108 includes a group of disk libraries, the storage policy may specify a particular disk library for storing the subclients associated with the policy. As another example, where the secondary storage devices 108 include one or more tape libraries, the storage policy may specify a particular tape library for storing the subclients associated with the storage policy, and may also specify a drive pool and a tape pool defining a group of tape drives and a group of tapes, respectively, for use in storing the subclient data. While information in the storage policy can be statically assigned in some cases, some or all of the information in the storage policy can also be dynamically determined based on criteria set forth in the storage policy. For instance, based on such criteria, a particular destination storage device(s) or other parameter of the storage policy may be determined based on characteristics associated with the data involved in a particular secondary copy operation, device availability (e.g., availability of a secondary storage device 108 or a media agent 144), network status and conditions (e.g., identified bottlenecks), user credentials, and the like.
Datapath information can also be included in the storage policy. For instance, the storage policy may specify network pathways and components to utilize when moving the data to the destination storage device(s). In some embodiments, the storage policy specifies one or more media agents 144 for conveying data associated with the storage policy between the source and destination. A storage policy can also specify the type(s) of associated operations, such as backup, archive, snapshot, auxiliary copy, or the like. Furthermore, retention parameters can specify how long the resulting secondary copies 116 will be kept (e.g., a number of days, months, years, etc.), perhaps depending on organizational needs and/or compliance criteria.
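By way of non-limiting illustration, the following Python sketch models the kinds of parameters enumerated above for a storage policy (associated data, destination, datapath, operation type, and retention). The class and field names are hypothetical and do not correspond to any actual product schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RuleSet:
    """One rule set within a storage policy (hypothetical field names)."""
    subclients: List[str]          # (1) what data is associated, e.g., subclients
    destination: str               # (2) target storage device or library
    media_agents: List[str]        # (3) datapath: components that convey the data
    copy_type: str                 # (4) type of secondary copy operation
    retention_days: int            # (5) how long copies are retained at the destination
    frequency: str = "daily"       # scheduling may instead live in a separate policy

@dataclass
class StoragePolicy:
    name: str
    rule_sets: List[RuleSet] = field(default_factory=list)

# Example: separating e-mail data from financial data using two subclients.
policy = StoragePolicy(
    name="example_policy",
    rule_sets=[
        RuleSet(["email_subclient"], "disk_library_A", ["media_agent_1"],
                copy_type="backup", retention_days=30, frequency="hourly"),
        RuleSet(["finance_subclient"], "tape_library_B", ["media_agent_2"],
                copy_type="archive", retention_days=3650, frequency="quarterly"),
    ],
)
```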
When adding a new client computing device 102, administrators can manually configure information management policies 148 and/or other settings, e.g., via user interface 158. However, this can be an involved process resulting in delays, and it may be desirable to begin data protection operations quickly, without awaiting human intervention. Thus, in some embodiments, system 100 automatically applies a default configuration to client computing device 102. As one example, when one or more data agent(s) 142 are installed on a client computing device 102, the installation script may register the client computing device 102 with storage manager 140, which in turn applies the default configuration to the new client computing device 102. In this manner, data protection operations can begin substantially immediately. The default configuration can include a default storage policy, for example, and can specify any appropriate information sufficient to begin data protection operations. This can include a type of data protection operation, scheduling information, a target secondary storage device 108, data path information (e.g., a particular media agent 144), and the like.
Another type of information management policy 148 is a “scheduling policy,” which specifies when and how often to perform operations. Scheduling parameters may specify with what frequency (e.g., hourly, weekly, daily, event-based, etc.) or under what triggering conditions secondary copy or other information management operations are to take place. Scheduling policies in some cases are associated with particular components, such as a subclient, client computing device 102, and the like.
Another type of information management policy 148 is an “audit policy” (or “security policy”), which comprises preferences, rules and/or criteria that protect sensitive data in system 100. For example, an audit policy may define “sensitive objects” which are files or data objects that contain particular keywords (e.g., “confidential,” or “privileged”) and/or are associated with particular keywords (e.g., in metadata) or particular flags (e.g., in metadata identifying a document or email as personal, confidential, etc.). An audit policy may further specify rules for handling sensitive objects. As an example, an audit policy may require that a reviewer approve the transfer of any sensitive objects to a cloud storage site, and that if approval is denied for a particular sensitive object, the sensitive object should be transferred to a local primary storage device 104 instead. To facilitate this approval, the audit policy may further specify how a secondary storage computing device 106 or other system component should notify a reviewer that a sensitive object is slated for transfer.
Another type of information management policy 148 is a “provisioning policy,” which can include preferences, priorities, rules, and/or criteria that specify how client computing devices 102 (or groups thereof) may utilize system resources, such as available storage on cloud storage and/or network bandwidth. A provisioning policy specifies, for example, data quotas for particular client computing devices 102 (e.g., a number of gigabytes that can be stored monthly, quarterly or annually). Storage manager 140 or other components may enforce the provisioning policy. For instance, media agents 144 may enforce the policy when transferring data to secondary storage devices 108. If a client computing device 102 exceeds a quota, a budget for the client computing device 102 (or associated department) may be adjusted accordingly or an alert may trigger.
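Purely as an illustrative sketch, quota enforcement under a provisioning policy might be evaluated as follows; the thresholds, helper name, and alerting behavior are assumptions for illustration only.

```python
def check_quota(client_id: str, bytes_this_period: int, quota_gb: int) -> str:
    """Return an action for a client based on a hypothetical provisioning policy.

    A real policy could adjust budgets, raise alerts, or block transfers differently;
    the 90% warning threshold here is an assumption.
    """
    used_gb = bytes_this_period / (1024 ** 3)
    if used_gb > quota_gb:
        return f"ALERT: client {client_id} exceeded quota ({used_gb:.1f}/{quota_gb} GB)"
    if used_gb > 0.9 * quota_gb:
        return f"WARN: client {client_id} nearing quota ({used_gb:.1f}/{quota_gb} GB)"
    return "OK"

print(check_quota("client_102", 950 * 1024 ** 3, 1000))
```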
While the above types of information management policies 148 are described as separate policies, one or more of these can be generally combined into a single information management policy 148. For instance, a storage policy may also include or otherwise be associated with one or more scheduling, audit, or provisioning policies or operational parameters thereof. Moreover, while storage policies are typically associated with moving and storing data, other policies may be associated with other types of information management operations. The following is a non-exhaustive list of items that information management policies 148 may specify:
- schedules or other timing information, e.g., specifying when and/or how often to perform information management operations;
- the type of secondary copy 116 and/or copy format (e.g., snapshot, backup, archive, HSM, etc.);
- a location or a class or quality of storage for storing secondary copies 116 (e.g., one or more particular secondary storage devices 108);
- preferences regarding whether and how to encrypt, compress, deduplicate, or otherwise modify or transform secondary copies 116;
- which system components and/or network pathways (e.g., preferred media agents 144) should be used to perform secondary storage operations;
- resource allocation among different computing devices or other system components used in performing information management operations (e.g., bandwidth allocation, available storage capacity, etc.);
- whether and how to synchronize or otherwise distribute files or other data objects across multiple computing devices or hosted services; and
- retention information specifying the length of time primary data 112 and/or secondary copies 116 should be retained, e.g., in a particular class or tier of storage devices, or within the system 100.

Information management policies 148 may also be based on, or refer to, criteria such as:
- frequency with which primary data 112 or a secondary copy 116 of a data object or metadata has been or is predicted to be used, accessed, or modified;
- time-related factors (e.g., aging information such as time since the creation or modification of a data object);
- deduplication information (e.g., hashes, data blocks, deduplication block size, deduplication efficiency or other metrics);
- an estimated or historic usage or cost associated with different components (e.g., with secondary storage devices 108);
- the identity of users, applications 110, client computing devices 102 and/or other computing devices that created, accessed, modified, or otherwise utilized primary data 112 or secondary copies 116;
- a relative sensitivity (e.g., confidentiality, importance) of a data object, e.g., as determined by its content and/or metadata;
- the current or historical storage capacity of various storage devices;
- the current or historical network capacity of network pathways connecting various components within the storage operation cell;
- access control lists or other security information; and
- the content of a particular data object (e.g., its textual content) or of metadata associated with the data object.
Exemplary Storage Policy and Secondary Copy Operations
As indicated by the dashed box, the second media agent 144B and tape library 108B are “off-site,” and may be remotely located from the other components in system 100 (e.g., in a different city, office building, etc.). Indeed, “off-site” may refer to a magnetic tape located in remote storage, which must be manually retrieved and loaded into a tape drive to be read. In this manner, information stored on the tape library 108B may provide protection in the event of a disaster or other failure at the main site(s) where data is stored.
The file system subclient 112A in certain embodiments generally comprises information generated by the file system and/or operating system of client computing device 102, and can include, for example, file system data (e.g., regular files, file tables, mount points, etc.), operating system data (e.g., registries, event logs, etc.), and the like. The e-mail subclient 112B can include data generated by an e-mail application operating on client computing device 102, e.g., mailbox information, folder information, emails, attachments, associated database information, and the like. As described above, the subclients can be logical containers, and the data included in the corresponding primary data 112A and 112B may or may not be stored contiguously.
The exemplary storage policy 148A includes backup copy preferences or rule set 160, disaster recovery copy preferences or rule set 162, and compliance copy preferences or rule set 164. Backup copy rule set 160 specifies that it is associated with file system subclient 166 and email subclient 168. Each of subclients 166 and 168 is associated with the particular client computing device 102. Backup copy rule set 160 further specifies that the backup operation will be written to disk library 108A and designates a particular media agent 144A to convey the data to disk library 108A. Finally, backup copy rule set 160 specifies that backup copies created according to rule set 160 are scheduled to be generated hourly and are to be retained for 30 days. In some other embodiments, scheduling information is not included in storage policy 148A and is instead specified by a separate scheduling policy.
Disaster recovery copy rule set 162 is associated with the same two subclients 166 and 168. However, disaster recovery copy rule set 162 is associated with tape library 108B, unlike backup copy rule set 160. Moreover, disaster recovery copy rule set 162 specifies that a different media agent, namely 144B, will convey data to tape library 108B. Disaster recovery copies created according to rule set 162 will be retained for 60 days and will be generated daily. Disaster recovery copies generated according to disaster recovery copy rule set 162 can provide protection in the event of a disaster or other catastrophic data loss that would affect the backup copy 116A maintained on disk library 108A.
Compliance copy rule set 164 is only associated with the email subclient 168, and not the file system subclient 166. Compliance copies generated according to compliance copy rule set 164 will therefore not include primary data 112A from the file system subclient 166. For instance, the organization may be under an obligation to store and maintain copies of email data for a particular period of time (e.g., 10 years) to comply with state or federal regulations, while similar regulations do not apply to file system data. Compliance copy rule set 164 is associated with the same tape library 108B and media agent 144B as disaster recovery copy rule set 162, although a different storage device or media agent could be used in other embodiments. Finally, compliance copy rule set 164 specifies that the copies it governs will be generated quarterly and retained for 10 years.
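As a non-limiting example, the three rule sets of exemplary storage policy 148A described above could be rendered declaratively as follows; this is a schematic summary of the stated preferences, not an actual configuration format.

```python
storage_policy_148A = {
    "backup_copy_rule_set_160": {
        "subclients": ["file_system_166", "email_168"],
        "destination": "disk_library_108A",
        "media_agent": "144A",
        "frequency": "hourly",
        "retention_days": 30,
    },
    "disaster_recovery_copy_rule_set_162": {
        "subclients": ["file_system_166", "email_168"],
        "destination": "tape_library_108B",
        "media_agent": "144B",
        "frequency": "daily",
        "retention_days": 60,
    },
    "compliance_copy_rule_set_164": {
        "subclients": ["email_168"],          # file system data is excluded
        "destination": "tape_library_108B",
        "media_agent": "144B",
        "frequency": "quarterly",
        "retention_years": 10,
    },
}
```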
Secondary Copy Jobs
A logical grouping of secondary copy operations governed by a rule set and being initiated at a point in time may be referred to as a “secondary copy job” (and sometimes may be called a “backup job,” even though it is not necessarily limited to creating only backup copies). Secondary copy jobs may be initiated on demand as well. Steps 1-9 below illustrate three secondary copy jobs based on storage policy 148A.
Referring to FIG. 1E , at step 1, storage manager 140 initiates a backup job according to the backup copy rule set 160, which logically comprises all the secondary copy operations necessary to effectuate rules 160 in storage policy 148A every hour, including steps 1-4 occurring hourly. For instance, a scheduling service running on storage manager 140 accesses backup copy rule set 160 or a separate scheduling policy associated with client computing device 102 and initiates a backup job on an hourly basis. Thus, at the scheduled time, storage manager 140 sends instructions to client computing device 102 (i.e., to both data agent 142A and data agent 142B) to begin the backup job.
At step 2, file system data agent 142A and email data agent 142B on client computing device 102 respond to instructions from storage manager 140 by accessing and processing the respective subclient primary data 112A and 112B involved in the backup copy operation, which can be found in primary storage device 104. Because the secondary copy operation is a backup copy operation, the data agent(s) 142A, 142B may format the data into a backup format or otherwise process the data suitable for a backup copy.
At step 3, client computing device 102 communicates the processed file system data (e.g., using file system data agent 142A) and the processed email data (e.g., using email data agent 142B) to the first media agent 144A according to backup copy rule set 160, as directed by storage manager 140. Storage manager 140 may further keep a record in management database 146 of the association between media agent 144A and one or more of: client computing device 102, file system subclient 112A, file system data agent 142A, email subclient 112B, email data agent 142B, and/or backup copy 116A.
The target media agent 144A receives the data-agent-processed data from client computing device 102, and at step 4 generates and conveys backup copy 116A to disk library 108A to be stored as backup copy 116A, again at the direction of storage manager 140 and according to backup copy rule set 160. Media agent 144A can also update its index 153 to include data and/or metadata related to backup copy 116A, such as information indicating where the backup copy 116A resides on disk library 108A, where the email copy resides, where the file system copy resides, data and metadata for cache retrieval, etc. Storage manager 140 may similarly update its index 150 to include information relating to the secondary copy operation, such as information relating to the type of operation, a physical location associated with one or more copies created by the operation, the time the operation was performed, status information relating to the operation, the components involved in the operation, and the like. In some cases, storage manager 140 may update its index 150 to include some or all of the information stored in index 153 of media agent 144A. At this point, the backup job may be considered complete. After the 30-day retention period expires, storage manager 140 instructs media agent 144A to delete backup copy 116A from disk library 108A and indexes 150 and/or 153 are updated accordingly.
At step 5, storage manager 140 initiates another backup job for a disaster recovery copy according to disaster recovery copy rule set 162. Illustratively, this includes steps 5-7, which occur daily to create disaster recovery copy 116B. By way of illustrating the scalable aspects and off-loading principles embedded in system 100, disaster recovery copy 116B is based on backup copy 116A and not on primary data 112A and 112B.
At step 6, illustratively based on instructions received from storage manager 140 at step 5, the specified media agent 144B retrieves the most recent backup copy 116A from disk library 108A.
At step 7, again at the direction of storage manager 140 and as specified in disaster recovery copy rule set 162, media agent 144B uses the retrieved data to create a disaster recovery copy 116B and store it to tape library 108B. In some cases, disaster recovery copy 116B is a direct, mirror copy of backup copy 116A, and remains in the backup format. In other embodiments, disaster recovery copy 116B may be further compressed or encrypted, or may be generated in some other manner, such as by using primary data 112A and 112B from primary storage device 104 as sources. The disaster recovery copy operation is initiated once a day and disaster recovery copies 116B are deleted after 60 days; indexes 153 and/or 150 are updated accordingly when/after each information management operation is executed and/or completed. The present backup job may be considered completed.
At step 8, storage manager 140 initiates another backup job according to compliance rule set 164, which performs steps 8-9 quarterly to create compliance copy 116C. For instance, storage manager 140 instructs media agent 144B to create compliance copy 116C on tape library 108B, as specified in the compliance copy rule set 164.
At step 9 in the example, compliance copy 116C is generated using disaster recovery copy 116B as the source. This is efficient, because disaster recovery copy 116B resides on the same secondary storage device and thus no network resources are required to move the data. In other embodiments, compliance copy 116C is instead generated using primary data 112B corresponding to the email subclient or using backup copy 116A from disk library 108A as source data. As specified in the illustrated example, compliance copies 116C are created quarterly and deleted after ten years, and indexes 153 and/or 150 are kept up-to-date accordingly.
Exemplary Applications of Storage Policies—Information Governance Policies and Classification
Again referring to FIG. 1E , storage manager 140 may permit a user to specify aspects of storage policy 148A. For example, the storage policy can be modified to include information governance policies to define how data should be managed in order to comply with a certain regulation or business objective. The various policies may be stored, for example, in management database 146. An information governance policy may align with one or more compliance tasks that are imposed by regulations or business requirements. Examples of information governance policies might include a Sarbanes-Oxley policy, a HIPAA policy, an electronic discovery (e-discovery) policy, and so on.
Information governance policies allow administrators to obtain different perspectives on an organization's online and offline data, without the need for a dedicated data silo created solely for each different viewpoint. As described previously, the data storage systems herein build an index that reflects the contents of a distributed data set that spans numerous clients and storage devices, including both primary data and secondary copies, and online and offline copies. An organization may apply multiple information governance policies in a top-down manner over that unified data set and indexing schema in order to view and manipulate the data set through different lenses, each of which is adapted to a particular compliance or business goal. Thus, for example, by applying an e-discovery policy and a Sarbanes-Oxley policy, two different groups of users in an organization can conduct two very different analyses of the same underlying physical set of data/copies, which may be distributed throughout the information management system.
An information governance policy may comprise a classification policy, which defines a taxonomy of classification terms or tags relevant to a compliance task and/or business objective. A classification policy may also associate a defined tag with a classification rule. A classification rule defines a particular combination of criteria, such as users who have created, accessed or modified a document or data object; file or application types; content or metadata keywords; clients or storage locations; dates of data creation and/or access; review status or other status within a workflow (e.g., reviewed or un-reviewed); modification times or types of modifications; and/or any other data attributes in any combination, without limitation. A classification rule may also be defined using other classification tags in the taxonomy. The various criteria used to define a classification rule may be combined in any suitable fashion, for example, via Boolean operators, to define a complex classification rule. As an example, an e-discovery classification policy might define a classification tag “privileged” that is associated with documents or data objects that (1) were created or modified by legal department staff, or (2) were sent to or received from outside counsel via email, or (3) contain one of the following keywords: “privileged” or “attorney” or “counsel,” or other like terms. Accordingly, all these documents or data objects will be classified as “privileged.”
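By way of non-limiting illustration, the exemplary "privileged" classification rule above combines its three criteria with Boolean OR, as in the following sketch; the document field names ("author_department", "email_addresses", "text") and the outside-counsel domain are assumptions, not part of any actual schema.

```python
PRIVILEGED_KEYWORDS = {"privileged", "attorney", "counsel"}

def is_privileged(doc: dict) -> bool:
    """Evaluate the example e-discovery classification rule for one document.

    Any one satisfied criterion results in the document being tagged "privileged".
    """
    created_by_legal = doc.get("author_department") == "legal"
    via_outside_counsel = any(addr.endswith("@outsidecounsel.example")
                              for addr in doc.get("email_addresses", []))
    has_keyword = any(kw in doc.get("text", "").lower()
                      for kw in PRIVILEGED_KEYWORDS)
    return created_by_legal or via_outside_counsel or has_keyword

print(is_privileged({"author_department": "sales",
                     "email_addresses": ["partner@outsidecounsel.example"],
                     "text": "quarterly forecast"}))
```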
One specific type of classification tag, which may be added to an index at the time of indexing, is an “entity tag.” An entity tag may be, for example, any content that matches a defined data mask format. Examples of entity tags might include, e.g., social security numbers (e.g., any numerical content matching the formatting mask XXX-XX-XXXX), credit card numbers (e.g., content having a 13-16 digit string of numbers), SKU numbers, product numbers, etc. A user may define a classification policy by indicating criteria, parameters or descriptors of the policy via a graphical user interface, such as a form or page with fields to be filled in, pull-down menus or entries allowing one or more of several options to be selected, buttons, sliders, hypertext links or other known user interface tools for receiving user input, etc. For example, a user may define certain entity tags, such as a particular product number or project ID. In some implementations, the classification policy can be implemented using cloud-based techniques. For example, the storage devices may be cloud storage devices, and the storage manager 140 may execute a cloud service provider API over a network to classify data stored on cloud storage devices.
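By way of non-limiting illustration, entity tags based on data mask formats lend themselves to simple pattern matching at indexing time; the regular expressions below approximate the masks mentioned above and are not production-grade validators.

```python
import re

ENTITY_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # mask XXX-XX-XXXX
    "credit_card": re.compile(r"\b\d{13,16}\b"),       # 13-16 digit string of numbers
}

def extract_entity_tags(text: str) -> dict:
    """Return entity tags found in `text`, keyed by tag name."""
    return {name: pat.findall(text)
            for name, pat in ENTITY_PATTERNS.items() if pat.search(text)}

print(extract_entity_tags("SSN 123-45-6789 charged to card 4111111111111111"))
```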
Restore Operations from Secondary Copies
While not shown in FIG. 1E, at some later point in time, a restore operation can be initiated involving one or more of secondary copies 116A, 116B, and 116C. A restore operation logically takes a selected secondary copy 116, reverses the effects of the secondary copy operation that created it, and stores the restored data to primary storage where a client computing device 102 may properly access it as primary data. A media agent 144 and an appropriate data agent 142 (e.g., executing on the client computing device 102) perform the tasks needed to complete a restore operation. For example, data that was encrypted, compressed, and/or deduplicated in the creation of secondary copy 116 will be correspondingly rehydrated (reversing deduplication), decompressed, and decrypted into a format appropriate to primary data. Metadata stored within or associated with the secondary copy 116 may be used during the restore operation. In general, restored data should be indistinguishable from other primary data 112. Preferably, the restored data has fully regained the native format that may make it immediately usable by application 110.
As one example, a user may manually initiate a restore of backup copy 116A, e.g., by interacting with user interface 158 of storage manager 140 or with a web-based console with access to system 100. Storage manager 140 may access data in its index 150 and/or management database 146 (and/or the respective storage policy 148A) associated with the selected backup copy 116A to identify the appropriate media agent 144A and/or secondary storage device 108A where the secondary copy resides. The user may be presented with a representation (e.g., stub, thumbnail, listing, etc.) and metadata about the selected secondary copy, in order to determine whether this is the appropriate copy to be restored, e.g., the date that the original primary data was created. Storage manager 140 will then instruct media agent 144A and an appropriate data agent 142 on the target client computing device 102 to restore secondary copy 116A to primary storage device 104. A media agent may be selected for use in the restore operation based on a load balancing algorithm, an availability-based algorithm, or other criteria. The selected media agent, e.g., 144A, retrieves secondary copy 116A from disk library 108A. For instance, media agent 144A may access its index 153 to identify a location of backup copy 116A on disk library 108A, or may access location information residing on disk library 108A itself.
In some cases, a backup copy 116A that was recently created or accessed may be cached to speed up the restore operation. In such a case, media agent 144A accesses a cached version of backup copy 116A residing in index 153, without having to access disk library 108A for some or all of the data. Once it has retrieved backup copy 116A, the media agent 144A communicates the data to the requesting client computing device 102. Upon receipt, file system data agent 142A and email data agent 142B may unpack (e.g., restore from a backup format to the native application format) the data in backup copy 116A and restore the unpackaged data to primary storage device 104. In general, secondary copies 116 may be restored to the same volume or folder in primary storage device 104 from which the secondary copy was derived; to another storage location or client computing device 102; to shared storage, etc. In some cases, the data may be restored so that it may be used by an application 110 of a different version/vintage from the application that created the original primary data 112.
Exemplary Secondary Copy Formatting
The formatting and structure of secondary copies 116 can vary depending on the embodiment. In some cases, secondary copies 116 are formatted as a series of logical data units or “chunks” (e.g., 512 MB, 1 GB, 2 GB, 4 GB, or 8 GB chunks). This can facilitate efficient communication and writing to secondary storage devices 108, e.g., according to resource availability. For example, a single secondary copy 116 may be written on a chunk-by-chunk basis to one or more secondary storage devices 108. In some cases, users can select different chunk sizes, e.g., to improve throughput to tape storage devices. Generally, each chunk can include a header and a payload. The payload can include files (or other data units) or subsets thereof included in the chunk, whereas the chunk header generally includes metadata relating to the chunk, some or all of which may be derived from the payload. For example, during a secondary copy operation, media agent 144, storage manager 140, or other component may divide files into chunks and generate headers for each chunk by processing the files. Headers can include a variety of information such as file and/or volume identifier(s), offset(s), and/or other information associated with the payload data items, a chunk sequence number, etc. Importantly, in addition to being stored with secondary copy 116 on secondary storage device 108, chunk headers can also be stored to index 153 of the associated media agent(s) 144 and/or to index 150 associated with storage manager 140. This can be useful for providing faster processing of secondary copies 116 during browsing, restores, or other operations. In some cases, once a chunk is successfully transferred to a secondary storage device 108, the secondary storage device 108 returns an indication of receipt, e.g., to media agent 144 and/or storage manager 140, which may update their respective indexes 153, 150 accordingly. During restore, chunks may be processed (e.g., by media agent 144) according to the information in the chunk header to reassemble the files.
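As a non-limiting sketch of the chunk layout described above, each chunk carries a header derived from its payload; the structure and field choices here are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ChunkHeader:
    chunk_sequence: int     # position of the chunk within the secondary copy
    file_ids: List[str]     # identifiers of files (or file subsets) in the payload
    offsets: List[int]      # byte offsets of each file within the payload
    payload_length: int     # total payload bytes

def make_chunk(chunk_sequence: int, files: List[Tuple[str, bytes]]):
    """Build (header, payload) for one chunk from (file_id, data) pairs."""
    payload = b"".join(data for _, data in files)
    offsets, pos = [], 0
    for _, data in files:
        offsets.append(pos)
        pos += len(data)
    header = ChunkHeader(chunk_sequence,
                         [fid for fid, _ in files],
                         offsets,
                         len(payload))
    return header, payload

header, payload = make_chunk(0, [("vol1/fileA", b"abc"), ("vol1/fileB", b"defgh")])
print(header, len(payload))
```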
Data can also be communicated within system 100 in data channels that connect client computing devices 102 to secondary storage devices 108. These data channels can be referred to as “data streams,” and multiple data streams can be employed to parallelize an information management operation, improving data transfer rate, among other advantages. Example data formatting techniques including techniques involving data streaming, chunking, and the use of other data structures in creating secondary copies are described in U.S. Pat. Nos. 7,315,923, 8,156,086, and 8,578,120.
Referring to FIG. 1G , data stream 171 has the stream header 172 and stream payload 174 aligned into multiple data blocks. In this example, the data blocks are of size 64 KB. The first two stream header 172 and stream payload 174 pairs comprise a first data block of size 64 KB. The first stream header 172 indicates that the length of the succeeding stream payload 174 is 63 KB and that it is the start of a data block. The next stream header 172 indicates that the succeeding stream payload 174 has a length of 1 KB and that it is not the start of a new data block. Immediately following stream payload 174 is a pair comprising an identifier header 176 and identifier data 178. The identifier header 176 includes an indication that the succeeding identifier data 178 includes the identifier for the immediately previous data block. The identifier data 178 includes the identifier that the data agent 142 generated for the data block. The data stream 171 also includes other stream header 172 and stream payload 174 pairs, which may be for SI data and/or non-SI data.
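By way of non-limiting illustration, the following sketch reassembles data blocks from a stream of header/payload and identifier pairs like those of FIG. 1G; the in-memory representation of the parsed stream is an assumption, since the on-media encoding is not detailed here.

```python
def reassemble_blocks(stream):
    """Reassemble data blocks from a parsed stream.

    `stream` is assumed to be an iterable of dicts with 'kind' of either
    'payload' (fields: 'data', 'new_block') or 'identifier' (field: 'id'),
    mirroring the header/payload and identifier pairs described above.
    """
    blocks, current, current_id = [], b"", None
    for item in stream:
        if item["kind"] == "payload":
            if item["new_block"] and current:
                blocks.append((current_id, current))
                current = b""
            current += item["data"]
        elif item["kind"] == "identifier":
            current_id = item["id"]        # identifier for the immediately previous block
    if current:
        blocks.append((current_id, current))
    return blocks

# Example mirroring FIG. 1G: a 63 KB payload plus a 1 KB payload form one 64 KB block.
example = [
    {"kind": "payload", "new_block": True, "data": b"a" * 63 * 1024},
    {"kind": "payload", "new_block": False, "data": b"b" * 1024},
    {"kind": "identifier", "id": "block-0001"},
]
print([(block_id, len(data)) for block_id, data in reassemble_blocks(example)])
```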
As an example, data structures 180 illustrated in FIG. 1H may have been created as a result of separate secondary copy operations involving two client computing devices 102. For example, a first secondary copy operation on a first client computing device 102 could result in the creation of the first chunk folder 184, and a second secondary copy operation on a second client computing device 102 could result in the creation of the second chunk folder 185. Container files 190/191 in the first chunk folder 184 would contain the blocks of SI data of the first client computing device 102. If the two client computing devices 102 have substantially similar data, the second secondary copy operation on the data of the second client computing device 102 would result in media agent 144 storing primarily links to the data blocks of the first client computing device 102 that are already stored in the container files 190/191. Accordingly, while a first secondary copy operation may result in storing nearly all of the data subject to the operation, subsequent secondary storage operations involving similar data may result in substantial data storage space savings, because links to already stored data blocks can be stored instead of additional instances of data blocks.
If the operating system of the secondary storage computing device 106 on which media agent 144 operates supports sparse files, then when media agent 144 creates container files 190/191/193, it can create them as sparse files. A sparse file is a type of file that may include empty space (e.g., a sparse file may have real data within it, such as at the beginning of the file and/or at the end of the file, but may also have empty space in it that is not storing actual data, such as a contiguous range of bytes all having a value of zero). Having container files 190/191/193 be sparse files allows media agent 144 to free up space in container files 190/191/193 when blocks of data in container files 190/191/193 no longer need to be stored on the storage devices. In some examples, media agent 144 creates a new container file 190/191/193 when an existing container file either includes 100 blocks of data or exceeds 50 MB in size. In other examples, media agent 144 creates a new container file 190/191/193 when a container file 190/191/193 satisfies other criteria (e.g., it contains from approximately 100 to approximately 1,000 blocks, or its size exceeds approximately 50 MB to 1 GB). In some cases, a file on which a secondary copy operation is performed may comprise a large number of data blocks. For example, a 100 MB file may comprise 400 data blocks of size 256 KB. If such a file is to be stored, its data blocks may span more than one container file, or even more than one chunk folder. As another example, a database file of 20 GB may comprise over 40,000 data blocks of size 512 KB. If such a database file is to be stored, its data blocks will likely span multiple container files, multiple chunk folders, and potentially multiple volume folders. Restoring such files may require accessing multiple container files, chunk folders, and/or volume folders to obtain the requisite data blocks.
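Purely as an illustrative sketch, the container-file rollover criteria mentioned above (e.g., 100 blocks or roughly 50 MB) might be applied as follows; the class name and bookkeeping are assumptions for illustration.

```python
MAX_BLOCKS_PER_CONTAINER = 100
MAX_CONTAINER_BYTES = 50 * 1024 * 1024   # ~50 MB; other examples allow up to ~1 GB

class ContainerWriter:
    """Append deduplicated data blocks to container files, rolling over as needed."""

    def __init__(self):
        self.containers = [[]]               # each container is modeled as a list of blocks

    def add_block(self, block: bytes):
        current = self.containers[-1]
        current_bytes = sum(len(b) for b in current)
        if len(current) >= MAX_BLOCKS_PER_CONTAINER or current_bytes >= MAX_CONTAINER_BYTES:
            self.containers.append([])       # start a new container file
            current = self.containers[-1]
        current.append(block)

writer = ContainerWriter()
for _ in range(250):                         # 250 blocks of 256 KB span 3 containers
    writer.add_block(b"\0" * 256 * 1024)
print(len(writer.containers))
```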
Using Backup Data for Replication and Disaster Recovery (“Live Synchronization”)
There is an increased demand to off-load resource intensive information management tasks (e.g., data replication tasks) away from production devices (e.g., physical or virtual client computing devices) in order to maximize production efficiency. At the same time, enterprises expect access to readily-available up-to-date recovery copies in the event of failure, with little or no production downtime.
The synchronization can be achieved by generally applying an ongoing stream of incremental backups from the source subsystem 201 to the destination subsystem 203, such as according to what can be referred to as an “incremental forever” approach. FIG. 2A illustrates an embodiment of a data flow which may be orchestrated at the direction of one or more storage managers (not shown). At step 1, the source data agent(s) 242 a and source media agent(s) 244 a work together to write backup or other secondary copies of the primary data generated by the source client computing devices 202 a into the source secondary storage device(s) 208 a. At step 2, the backup/secondary copies are retrieved by the source media agent(s) 244 a from secondary storage. At step 3, source media agent(s) 244 a communicate the backup/secondary copies across a network to the destination media agent(s) 244 b in destination subsystem 203.
As shown, the data can be copied from source to destination in an incremental fashion, such that only changed blocks are transmitted, and in some cases multiple incremental backups are consolidated at the source so that only the most current changed blocks are transmitted to and applied at the destination. An example of live synchronization of virtual machines using the “incremental forever” approach is found in U.S. Patent Application No. 62/265,339 entitled “Live Synchronization and Management of Virtual Machines across Computing and Virtualization Platforms and Using Live Synchronization to Support Disaster Recovery.” Moreover, a deduplicated copy can be employed to further reduce network traffic from source to destination. For instance, the system can utilize the deduplicated copy techniques described in U.S. Pat. No. 9,239,687, entitled “Systems and Methods for Retaining and Using Data Block Signatures in Data Protection Operations.”
At step 4, destination media agent(s) 244 b write the received backup/secondary copy data to the destination secondary storage device(s) 208 b. At step 5, the synchronization is completed when the destination media agent(s) and destination data agent(s) 242 b restore the backup/secondary copy data to the destination client computing device(s) 202 b. The destination client computing device(s) 202 b may be kept “warm” awaiting activation in case failure is detected at the source. This synchronization/replication process can incorporate the techniques described in U.S. patent application Ser. No. 14/721,971, entitled “Replication Using Deduplicated Secondary Copy Data.”
Where the incremental backups are applied on a frequent, on-going basis, the synchronized copies can be viewed as mirror or replication copies. Moreover, by applying the incremental backups to the destination site 203 using backup or other secondary copy data, the production site 201 is not burdened with the synchronization operations. Because the destination site 203 can be maintained in a synchronized “warm” state, the downtime for switching over from the production site 201 to the destination site 203 is substantially less than with a typical restore from secondary storage. Thus, the production site 201 may flexibly and efficiently fail over, with minimal downtime and with relatively up-to-date data, to a destination site 203, such as a cloud-based failover site. The destination site 203 can later be reverse synchronized back to the production site 201, such as after repairs have been implemented or after the failure has passed.
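By way of non-limiting illustration, one incremental pass of the "incremental forever" approach can be thought of as transmitting only changed blocks to the destination, as in the following sketch; the block-map representation is an assumption, and a real implementation would source the changed blocks from backup copies rather than from primary data so that production is not burdened.

```python
def live_sync_cycle(source_blocks: dict, destination_blocks: dict) -> int:
    """Apply only changed blocks to the destination and return how many were sent.

    Block maps are modeled as {block_id: content}; the destination copy is
    thereby kept "warm" and ready for activation on failover.
    """
    changed = {block_id: data for block_id, data in source_blocks.items()
               if destination_blocks.get(block_id) != data}
    destination_blocks.update(changed)
    return len(changed)

src = {"b1": b"alpha", "b2": b"bravo", "b3": b"charlie"}
dst = {"b1": b"alpha", "b2": b"old"}
print(live_sync_cycle(src, dst))   # only the 2 changed/new blocks are transmitted
```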
Integrating With the Cloud Using File System Protocols
Given the ubiquity of cloud computing, it can be increasingly useful to provide data protection and other information management services in a scalable, transparent, and highly plug-able fashion. FIG. 2B illustrates an information management system 200 having an architecture that provides such advantages, and incorporates use of a standard file system protocol between primary and secondary storage subsystems 217, 218. As shown, the use of the network file system (NFS) protocol (or any other appropriate file system protocol, such as the Common Internet File System (CIFS)) allows data agent 242 to be moved from the primary storage subsystem 217 to the secondary storage subsystem 218. For instance, as indicated by the dashed box 206 around data agent 242 and media agent 244, data agent 242 can co-reside with media agent 244 on the same server (e.g., a secondary storage computing device such as component 106), or in some other location in secondary storage subsystem 218.
Where NFS is used, for example, secondary storage subsystem 218 allocates an NFS network path to the client computing device 202 or to one or more target applications 210 running on client computing device 202. During a backup or other secondary copy operation, the client computing device 202 mounts the designated NFS path and writes data to that NFS path. The NFS path may be obtained from NFS path data 215 stored locally at the client computing device 202, and which may be a copy of or otherwise derived from NFS path data 219 stored in the secondary storage subsystem 218.
Write requests issued by client computing device(s) 202 are received by data agent 242 in secondary storage subsystem 218, which translates the requests and works in conjunction with media agent 244 to process and write data to a secondary storage device(s) 208, thereby creating a backup or other secondary copy. Storage manager 240 can include a pseudo-client manager 217, which coordinates the process by, among other things, communicating information relating to client computing device 202 and application 210 (e.g., application type, client computing device identifier, etc.) to data agent 242, obtaining appropriate NFS path data from the data agent 242 (e.g., NFS path information), and delivering such data to client computing device 202.
Conversely, during a restore or recovery operation client computing device 202 reads from the designated NFS network path, and the read request is translated by data agent 242. The data agent 242 then works with media agent 244 to retrieve, re-process (e.g., re-hydrate, decompress, decrypt), and forward the requested data to client computing device 202 using NFS.
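As a non-limiting example, from the client's perspective backup and restore in this architecture reduce to ordinary file I/O against the designated NFS path; the mount point and helper names below are assumptions for illustration.

```python
import shutil
from pathlib import Path

# Hypothetical NFS path allocated by the secondary storage subsystem and
# already mounted on the client computing device.
NFS_MOUNT = Path("/mnt/backup_nfs_share")

def backup_file(local_path: str) -> None:
    """Write data to the NFS path; data agent 242 and media agent 244 handle the rest."""
    shutil.copy(local_path, NFS_MOUNT / Path(local_path).name)

def restore_file(name: str, destination: str) -> None:
    """Read previously backed-up data back from the same NFS path."""
    shutil.copy(NFS_MOUNT / name, destination)
```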
By moving specialized software associated with system 200 such as data agent 242 off the client computing devices 202, the illustrative architecture effectively decouples the client computing devices 202 from the installed components of system 200, improving both scalability and plug-ability of system 200. Indeed, the secondary storage subsystem 218 in such environments can be treated simply as a read/write NFS target for primary storage subsystem 217, without the need for information management software to be installed on client computing devices 202. As one example, an enterprise implementing a cloud production computing environment can add VM client computing devices 202 without installing and configuring specialized information management software on these VMs. Rather, backups and restores are achieved transparently, where the new VMs simply write to and read from the designated NFS path. An example of integrating with the cloud using file system protocols or so-called “infinite backup” using NFS share is found in U.S. Patent Application No. 62/294,920, entitled “Data Protection Operations Based on Network Path Information.” Examples of improved data restoration scenarios based on network-path information, including using stored backups effectively as primary data sources, may be found in U.S. Patent Application No. 62/297,057, entitled “Data Restoration Operations Based on Network Path Information.”
Highly Scalable Managed Data Pool Architecture
Enterprises are seeing explosive data growth in recent years, often from various applications running in geographically distributed locations. FIG. 2C shows a block diagram of an example of a highly scalable, managed data pool architecture useful in accommodating such data growth. The illustrated system 200, which may be referred to as a “web-scale” architecture according to certain embodiments, can be readily incorporated into both open compute/storage and common-cloud architectures.
The illustrated system 200 includes a grid 245 of media agents 244 logically organized into a control tier 231 and a secondary or storage tier 233. Media agents assigned to the control tier 231 can be configured to manage a secondary storage pool 208 as a deduplication store, receive client write and read requests from the primary storage subsystem 217, and direct those requests to the secondary tier 233 for servicing. For instance, media agents CMA1-CMA3 in the control tier 231 maintain and consult one or more deduplication databases 247, which can include deduplication information (e.g., data block hashes, data block links, file containers for deduplicated files, etc.) sufficient to read deduplicated files from secondary storage pool 208 and write deduplicated files to secondary storage pool 208. For instance, system 200 can incorporate any of the deduplication systems and methods shown and described in U.S. Pat. No. 9,020,900, entitled “Distributed Deduplicated Storage System,” and U.S. Pat. Pub. No. 2014/0201170, entitled “High Availability Distributed Deduplicated Storage System.”
Media agents SMA1-SMA6 assigned to the secondary tier 233 receive write and read requests from media agents CMA1-CMA3 in control tier 231, and access secondary storage pool 208 to service those requests. Media agents CMA1-CMA3 in control tier 231 can also communicate with secondary storage pool 208, and may execute read and write requests themselves (e.g., in response to requests from other control media agents CMA1-CMA3) in addition to issuing requests to media agents in secondary tier 233. Moreover, while shown as separate from the secondary storage pool 208, deduplication database(s) 247 can in some cases reside in storage devices in secondary storage pool 208.
As shown, each of the media agents 244 (e.g., CMA1-CMA3, SMA1-SMA6, etc.) in grid 245 can be allocated a corresponding dedicated partition 251A-251I, respectively, in secondary storage pool 208. Each partition 251 can include a first portion 253 containing data associated with (e.g., stored by) media agent 244 corresponding to the respective partition 251. System 200 can also implement a desired level of replication, thereby providing redundancy in the event of a failure of a media agent 244 in grid 245. Along these lines, each partition 251 can further include a second portion 255 storing one or more replication copies of the data associated with one or more other media agents 244 in the grid.
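Purely as an illustrative sketch, the partition layout described above might assign each media agent in grid 245 a first portion for its own data and a second portion replicating data of one or more other media agents; the ring-style replica placement below is an assumption for illustration.

```python
GRID = ["CMA1", "CMA2", "CMA3", "SMA1", "SMA2", "SMA3", "SMA4", "SMA5", "SMA6"]

def partition_layout(agents, replicas=1):
    """Map each media agent to its own partition plus ring-placed replica portions."""
    layout = {}
    n = len(agents)
    for i, agent in enumerate(agents):
        replicated_from = [agents[(i - k) % n] for k in range(1, replicas + 1)]
        layout[agent] = {
            "first_portion": f"data_of_{agent}",
            "second_portion": [f"replica_of_{src}" for src in replicated_from],
        }
    return layout

for agent, partition in partition_layout(GRID).items():
    print(agent, partition["second_portion"])
```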
The embodiments and components thereof disclosed in FIGS. 2A, 2B, and 2C , as well as those in FIGS. 1A-1H , may be implemented in any combination and permutation to satisfy data storage management and information management needs at one or more locations and/or data centers.
Snapshot-Based Disaster Recovery Orchestration of Virtual Machine Failover and Failback Operations
Data storage management system 300 is a system analogous to system 100 and further comprising additional functionality for snap-based DR orchestration, such as administrative features for defining and configuring source and failover components, failover groups, customization of failover components, mapping between source and failover VMs, scheduling and tracking of snapshot generation and snapshot replication, etc. More details are given in FIGS. 4, 5A, and 5B .
Primary virtualization manager 303 (or “manager 303”) is a computing device (e.g., a server) that provides a centralized platform for controlling any number of VM hosts and their VMs. One illustrative example is VMware vCenter Server from VMware, but the invention is not limited to VMware virtualization. As shown later, a specialized data agent component of system 300 (e.g., virtual server agent 442) interoperates with manager 303 to ensure that VMs 302 and their datastores are protected by system 300, e.g., making backup copies, replicating datastores, orchestrating failover, etc.
In some embodiments, these replication operations are referred to as “array-to-array” replication, because the arrays/filers communicate with each other to structure and transmit each snapshot, even though the operation is scheduled and initiated by system 300 (e.g., using an auxiliary copy job). Of course, similar and equivalent techniques are used between arrays and cloud storage resources, or cloud-to-cloud. Some embodiments that use NetApp arrays use so-called “vault copy” features to replicate snapshots from source to DR site. Other embodiments that use NetApp arrays use so-called “mirror copy” features to replicate snapshots from source to destination. The embodiments are not limited to NetApp arrays or to these techniques for replicating snapshots.
An auxiliary copy job as managed by system 300 comprises snapshot replication operation 305 (“array-to-array” replication or equivalent to/from/between cloud storage resources). Alternatively, the illustrative DR orchestration job as managed by system 300 comprises snapshot replication operation 305 (array-to-array or equivalent to/from/between cloud storage resources).
Cloud-based failover storage resources 384C are functionally analogous to storage array/filer 384D and are instantiated in cloud computing environment 390. An illustrative example is Amazon AWS Elastic Block Store (“EBS”), which is well known in the art—but the invention is not so limited. Like array-based storage 384D, cloud-based failover storage 384C comprises data storage volumes (not shown here) that receive and store replicated snapshots from the source site. However, these data storage volumes do not become associated with failover VMs 382C until such time as failover virtualization manager 383C establishes for each failover VM 382C a corresponding datastore in one of the data storage volumes in failover storage resources 384C.
Any cloud-based storage technology may be used as failover storage resources 384C. Collectively, storage resources 384C and 384D at the DR site are referred to herein as “failover storage 384” as a shorthand.
Although not expressly depicted in FIGS. 3A and 3B, some alternative embodiments comprise a cloud computing environment at the source and a virtualized data center at the DR site; other alternative embodiments comprise a cloud computing environment at both source and DR site, whether the cloud computing environments are from the same cloud service provider or different ones. The latter scenario enables cloud-to-cloud failovers. Although not expressly depicted in FIGS. 3A and 3B, some alternative environments comprise more than one DR site, thus enabling a choice of DR sites for clone testing and planned failovers.
Virtual server agent (VSA) 442 (or “VSA data agent 442”) is a data agent analogous to data agent 142 and additionally comprising features for operating in system 300, such as interoperability with DR orchestration logic in storage manager 440. VSA data agent 442 is generally responsible for taking part in snap backup jobs, e.g., triggering manager 303 to quiesce one or more source VMs 302 so that storage 304 can take a snapshot of the volumes hosting the datastore(s) corresponding to the source VM(s) 302. VSA data agent 442 communicates with media agent 444 and with storage manager 440, which manages snap backup jobs, auxiliary copy jobs, and DR orchestration jobs. More details are given in other figures.
Virtual server agent (VSA) 492 is analogous to VSA data agent 442 and is associated with failover virtualization manager 383. Accordingly, in a DR orchestration job, VSA 492 instructs failover virtualization manager 383 when to create datastores for failover VMs 382, causes failover virtualization manager 383 to register failover VMs 382 and implement customized parameters, and causes the failover VMs 382 to be powered on at the DR site. VSA 492 communicates with media agent 494 and storage manager 440 during DR orchestration jobs to perform failovers to the DR site and/or failbacks therefrom.
The dotted arrows shown between certain components at the DR site illustrate that failover VMs 382 and their corresponding datastores 584 are not maintained in an active state prior to failover. See method 600 in FIG. 6 for more details.
Snapshot S598 is a hardware snapshot of data storage volume 598 taken by primary storage 304 as directed by media agent 444, e.g., using APIs, using custom scripts, etc. Snapshot S598 is taken in the course of a snap backup job managed by storage manager 440. Snapshot S598 is stored at primary storage 304.
Snapshot S598 is replicated by primary storage 304 to failover storage 384 as directed by media agent 444 in the course of an auxiliary copy job managed by storage manager 440. The auxiliary copy job generates a snapshot SR598 that is a replica of snapshot S598. Snapshot SR598 is stored at failover storage 384 in a data storage volume 599. At failover time, the DR orchestration job will create a relationship between data in snapshot SR598 and a failover VM 382 and will establish datastore 584 corresponding to the failover VM 382.
At block 602, hardware components (e.g., VM hosts, virtualization managers, storage resources, backup nodes, storage manager, etc.) and networking are configured at source virtualized data center and at DR/failover site. This initial set-up is well known in the art.
At block 604, storage manager 440 configures storage policies for snap backup jobs and auxiliary copy jobs. The storage policies govern when snap backup and auxiliary copy jobs are to run and which media agent(s) (e.g., 444, 494) will be involved in each job, as well as specifying the data sources for the jobs, e.g., datastore 504, data storage volume 598, snapshot S598, etc. The storage policies are illustratively stored in management database 146. See also FIG. 13.
At block 606, storage manager 440 configures parameters for the source data and the failover destination. A number of administrative entries are configured here, e.g., failover group, VM host mapping, network settings, domain & IP address customization for DR site, etc. For example, a failover group is defined, which specifies one or more source VMs 302 to be failed over by DR orchestration jobs, a mapping between source VM host 502 and DR VM host 552, and an indication that the failover is to be made using the illustrative snap-based DR orchestration approach. See also FIG. 11 , FIG. 12 . Customization ensures that appropriate IP addresses and domain names are used at the DR site. In effect, block 606 ensures that there is a complete plan for selecting source VMs 302 and failing them over to appropriate entities at the DR site. Thus, block 606 provides a font of information to be used by the DR orchestration job in order to have a successful failover event. Illustratively, all the administrative parameters configured at block 606 are stored in management database 146. Illustratively one or more of these administrative parameters are communicated as needed by storage manager 440 to media agents and data agents when initiating the DR orchestration job.
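By way of a non-limiting illustration, the following Python sketch shows the kinds of administrative entries that blocks 604 and 606 might store in management database 146, e.g., a storage policy governing snap backup and auxiliary copy jobs, and a failover group with VM host mapping and DR-site customization. The field names and values are assumptions chosen for readability and are not the system's actual schema.

# Hypothetical storage policy for blocks 604/608/610 (illustrative only).
storage_policy = {
    "name": "snap-and-replicate",
    "snap_backup": {"schedule": "every 4 hours", "media_agent": "MA-444",
                    "data_sources": ["datastore-504", "volume-598"]},
    "auxiliary_copy": {"follows": "snap_backup", "media_agents": ["MA-444", "MA-494"],
                       "destination": "failover-storage-384"},
}

# Hypothetical failover group for block 606 (illustrative only).
failover_group = {
    "name": "finance-vms",
    "source_vms": ["vm-302-a", "vm-302-b"],
    "vm_host_mapping": {"host-502": "host-552"},
    "method": "snap-based DR orchestration",
    "dr_customization": {
        "network": "dr-vlan-20",
        "domain": "dr.example.local",          # placeholder domain
        "ip_addresses": {"vm-302-a": "10.20.0.11", "vm-302-b": "10.20.0.12"},
    },
}

print(storage_policy["name"], "->", failover_group["name"])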
At block 608, system 300 performs snap backup jobs. For example, storage manager 440 instructs media agent 444 and VSA data agent 442 to launch a snap backup job for a certain source VM 302. VSA data agent 442 reports to media agent 444 the location of the VM's datastore, e.g., in a data storage volume 598. Media agent 444 instructs (e.g., using APIs, custom scripts, etc.) primary storage 304 to take a snapshot of data storage volume 598, resulting in snapshot S598 stored in primary storage 304. The successful generation of snapshot S598 is noted by media agent 444 and the snapshot is tracked in media agent index 153 at media agent 444. These snap backup jobs are performed according to a plan (e.g., RPO plan, opportunistic plan, etc.), schedule, and/or storage policies, one or more of which are administered at storage manager 440 and illustratively stored in management database 146. Job results and the location of media agent index 153 are reported back to storage manager 440 for future reference.
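The snap backup flow of block 608 may be visualized with the following simplified Python sketch. The VsaDataAgent, MediaAgent, and StorageArray classes are hypothetical stand-ins; in practice the media agent drives the storage array through vendor APIs or custom scripts rather than these methods.

class VsaDataAgent:
    def locate_datastore_volume(self, vm: str) -> str:
        # The VSA reports which data storage volume hosts the VM's datastore.
        return "volume-598"

class StorageArray:
    def take_snapshot(self, volume: str) -> str:
        # Hardware snapshot taken by the array itself.
        return f"S-{volume}"

class MediaAgent:
    def __init__(self):
        self.index = {}               # stands in for media agent index 153
    def run_snap_backup(self, vsa: VsaDataAgent, array: StorageArray, vm: str) -> str:
        volume = vsa.locate_datastore_volume(vm)
        snap = array.take_snapshot(volume)
        self.index[snap] = {"vm": vm, "volume": volume}   # track the snapshot
        return snap

snap_id = MediaAgent().run_snap_backup(VsaDataAgent(), StorageArray(), "vm-302-a")
print("snap backup complete:", snap_id)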
At block 610, system 300 performs auxiliary copy jobs to replicate snapshots from primary storage 304 to failover storage 384 (e.g., array, filer, cloud). Accordingly, storage manager 440 initiates an auxiliary copy job by instructing media agent 444 to replicate snapshot(s) in primary storage (e.g., snapshot S598) to failover storage 384. Media agent 444 in turn instructs primary storage 304 (e.g., using APIs, custom scripts, etc.) to begin an “array-to-array” snapshot replication operation. “Array-to-array” is used here as shorthand for hardware-to-hardware replication, which is handled by the storage resources themselves under the direction and instruction of media agent 444 as directed by storage manager 440. Thus, system 300 is responsible for the auxiliary copy job, even if the replication operation itself is performed by the storage resources. The replicated snapshot SR598 is stored at failover storage 384. Media agent(s) 494 and/or 444 note the completion of the snapshot replication and update media agent index 153 with information about replicated snapshot SR598. These auxiliary copy jobs are performed according to a plan (e.g., RPO plan, opportunistic plan, etc.), schedule, and/or storage policies, one or more of which are administered at storage manager 440 and illustratively stored in management database 146. Job results and the location of media agent index 153 are reported back to storage manager 440 for future reference. From block 610, control passes to block 612, block 614, and/or block 616.
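Similarly, the auxiliary copy flow of block 610 may be sketched as follows, again with hypothetical class and method names. The point illustrated is that the media agent initiates and tracks the replication, while the array-to-array data movement is performed by the storage hardware itself.

class PrimaryArray:
    def replicate_snapshot(self, snap_id: str, target: "FailoverArray") -> str:
        # Array-to-array replication initiated at the primary side.
        return target.receive_replica(snap_id)

class FailoverArray:
    def __init__(self):
        self.volumes = {}
    def receive_replica(self, snap_id: str) -> str:
        replica_id = f"R-{snap_id}"
        self.volumes[replica_id] = "volume-599"     # replica lands in a DR volume
        return replica_id

def auxiliary_copy(primary: PrimaryArray, failover: FailoverArray,
                   snap_id: str, index: dict) -> str:
    """Media agent-style bookkeeping around the hardware replication."""
    replica_id = primary.replicate_snapshot(snap_id, failover)
    index[replica_id] = {"replica_of": snap_id, "volume": failover.volumes[replica_id]}
    return replica_id

index = {}
print(auxiliary_copy(PrimaryArray(), FailoverArray(), "S-volume-598", index), index)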
At block 612, system 300 performs an illustrative DR orchestration job to test the DR/failover site configuration, e.g., test clones. This operation is distinguishable from failover scenarios (blocks 614, 616), because a replicated snapshot at the failover site is cloned there for test purposes without actually failing over source VMs 302. More details are given in a subsequent figure. After block 612, method 600 may end or control may pass (not shown here) to block 608, 610, 612, 614, or 616, without limitation.
At block 614, system 300 performs an illustrative DR orchestration job to conduct a planned failover. This operation is distinguishable from unplanned failover scenarios (block 616), because it includes an on-demand snap backup job immediately followed by an auxiliary copy job to ensure that the latest source data from VMs 302 is captured in the planned failover. In contrast to the test clone scenario (block 612), a so-called “mirror relationship” between primary storage and failover storage is affirmatively broken in order to stop further replication operations and to enable the failover site to take over in a production (data generation) mode in place of the original site. More details are given in a subsequent figure.
At block 616, system 300 performs an illustrative DR orchestration job to conduct an unplanned failover. This operation is distinguishable from planned failover scenarios (block 614), because it relies on preceding snap backup and auxiliary copy job(s) that generated replicated snapshot(s) SR598 at the failover storage. These previously generated replicated snapshots SR598 will become datastores for the failover VMs 382, thus capturing the most recently replicated data from source VMs 302, though not necessarily the most recently generated data from source VMs 302. In contrast to the test clone scenario (block 612), the unplanned failure at the source data center breaks a so-called “mirror relationship” between primary storage and failover storage, which disables further replication operations. More details are given in a subsequent figure.
At block 620, which follows a planned failover (block 614) and/or an unplanned failover (block 616), system 300 uses another DR orchestration job to perform a failback operation and optionally to integrate DR site data generated after failover back into the original data sources. This operation is described in more detail in a subsequent figure. After block 620, method 600 may end or control may pass (not shown here) to other blocks, e.g., 608, 610, 612, 614, 616, without limitation.
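The branching of method 600 after block 610 may be summarized by the following illustrative dispatch sketch, in which each DR orchestration job type (test clone, planned failover, unplanned failover, failback) is represented by a placeholder function. The structure and names are illustrative only; the actual job logic is described with reference to the accompanying figures.

def test_clone(): print("block 612: clone replicated snapshots, no failover")
def planned_failover(): print("block 614: on-demand snap + aux copy, break mirror, fail over")
def unplanned_failover(): print("block 616: reuse latest replicated snapshots, fail over")
def failback(): print("block 620: reverse the failover, restore left-behind VMs")

DR_JOBS = {
    "test": test_clone,
    "planned": planned_failover,
    "unplanned": unplanned_failover,
    "failback": failback,
}

def run_dr_orchestration_job(kind: str) -> None:
    DR_JOBS[kind]()          # the storage manager picks which branch to execute

for kind in ("test", "planned", "unplanned", "failback"):
    run_dr_orchestration_job(kind)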
At block 702, system 300 optionally performs blocks 608 and 610 on demand if more recent replicated snapshots SR598 are needed at the DR site for the test. In some cases, older replicated snapshots SR598 are readily available at the DR site from earlier snap backup and auxiliary copy jobs.
At block 704, system 300 clones replicated snapshot(s) SR598 into corresponding clone snapshots (not shown). Illustratively, the cloning operation is performed by failover storage 384 as instructed by media agent 494, under the direction of storage manager 440. Media agent 494 uses APIs, custom scripts, and/or other communication protocols to communicate with failover storage 384. The cloned snapshots are stored at failover storage 384.
At block 706, failover virtualization manager 383 creates a datastore for each failover VM 382 using the cloned snapshots. Illustratively, this operation is directed by VSA data agent 492. To properly direct manager 383, VSA data agent 492 receives certain administrative parameters from storage manager 440, e.g., mapping information administered for the failover group at block 606. See also FIG. 12. Information about the clone snapshots and the VM data therein (from source VMs 302) is obtained from media agent 494 and/or from storage manager 440. Accordingly, VSA data agent 492 instructs manager 383 to designate a datastore 584 for each failover VM 382, wherein datastore 584 comprises a certain clone snapshot generated at block 704.
At block 708, failover virtualization manager 383 registers failover VMs 382, configures customization, and powers on failover VMs 382. Illustratively, this operation is directed by VSA data agent 492. To properly direct manager 383, VSA data agent 492 receives certain administrative parameters from storage manager 440, e.g., network settings, mapping information, and/or IP addresses that were administered for the failover group at block 606. See also FIG. 12 . At this point, failover VMs 382 are active with connectivity and access to their respective datastores 584.
At block 712, system 300 powers down failover VMs 382, deletes their datastores, and deletes the cloned snapshots to “undo the test failover.” This operation is also initiated by storage manager 440, which directs VSA data agent 492 to instruct failover virtualization manager 383 to power down failover VMs 382, de-register VMs 382, and sever the datastore relationship to the cloned snapshots. Storage manager 440 further directs media agent 494 to instruct failover storage 384 to delete the cloned snapshots. Block 612 ends.
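The clone-test cycle of blocks 702-712 may be illustrated with the following simplified sketch, using hypothetical FailoverStorage and FailoverVirtualizationManager helpers in place of the media agent 494 and VSA data agent 492 interactions. It shows that the clone, not the replicated snapshot itself, backs the test datastore, and that the entire arrangement is undone at the end.

class FailoverStorage:
    def clone(self, replica_id: str) -> str:
        return f"clone-of-{replica_id}"
    def delete(self, clone_id: str) -> None:
        print("deleted", clone_id)

class FailoverVirtualizationManager:
    def create_datastore(self, vm: str, backing: str) -> str:
        return f"datastore-{vm}-on-{backing}"
    def register_and_power_on(self, vm: str, datastore: str) -> None:
        print("powered on", vm, "using", datastore)
    def power_off_and_deregister(self, vm: str) -> None:
        print("powered off and de-registered", vm)

def clone_test(storage: FailoverStorage, mgr: FailoverVirtualizationManager,
               replica_id: str, vm: str) -> None:
    clone_id = storage.clone(replica_id)                 # block 704
    ds = mgr.create_datastore(vm, clone_id)              # block 706
    mgr.register_and_power_on(vm, ds)                    # block 708
    mgr.power_off_and_deregister(vm)                     # block 712: undo the test
    storage.delete(clone_id)

clone_test(FailoverStorage(), FailoverVirtualizationManager(), "SR598", "vm-382-a")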
Throughout block 612, source/production VMs 302 continue operating at the source site; snap backup jobs are performed; auxiliary copy jobs are also performed—unfettered by the test failover (clone testing) operations at the DR site.
At block 802, at the source data center, system 300 powers off source VMs 302. Illustratively storage manager 440 directs VSA data agent 442 to instruct virtualization manager 303 to power off the selected one or more VMs 302. Manager 303 comprises features for causing VMs 302 to power off, e.g., commands to hypervisor 512, commands to VM host 502, etc., without limitation. This operation freezes any further data changes in the source VMs' datastores 504.
At block 804, at the source data center, system 300 performs an on-demand snap backup job to take snapshots S598 of datastores 504 corresponding to the one or more powered off source VMs. Snap backup jobs are described in more detail elsewhere herein, e.g., at block 608.
At block 806, system 300 performs an on-demand auxiliary copy job to replicate snapshots S598 to failover storage 384 at the DR site (e.g., array, filer, cloud), i.e., to generate replicated snapshot(s) SR598. Auxiliary copy jobs are described in more detail elsewhere herein, e.g., at block 610.
At block 808, system 300 breaks the so-called “mirror relationship” between primary storage 304 and failover storage 384, which was previously established to enable “array-to-array” (or equivalent) replication jobs therebetween. One of the features of the mirror relationship is that it maintains the replicated snapshots at the failover storage 384 in a read-only state to prevent replicated data from being changed at the DR site. By breaking the mirror relationship, the DR orchestration job enables the replicated snapshots SR598 to be activated into datastores for active failover VMs 382. Illustratively, media agent 444 and/or media agent 494, as directed by storage manager 440, cause the mirror relationship to break, e.g., by so instructing primary storage 304 and/or failover storage 384, respectively.
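The effect of breaking the mirror relationship in block 808 can be illustrated with the small sketch below, which models the relationship as a flag that keeps replicated snapshots read-only until it is broken. This is a conceptual illustration only; actual mirror semantics depend on the storage vendor.

class MirrorRelationship:
    def __init__(self):
        self.active = True
    def replicas_read_only(self) -> bool:
        return self.active          # while mirrored, DR copies stay read-only
    def break_mirror(self) -> None:
        self.active = False         # stops replication; DR copies become usable

mirror = MirrorRelationship()
assert mirror.replicas_read_only()
mirror.break_mirror()               # issued by a media agent at failover time
assert not mirror.replicas_read_only()
print("mirror broken; replicated snapshots can now back live datastores")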
At block 810, system 300 brings data storage volumes 599 online at the DR site. Illustratively, media agent 494, as directed by storage manager 440, instructs failover storage 384 to bring online data storage volumes 599 comprising replicated snapshot(s) SR598. In contrast to the test clone scenario in block 612, where clones of the replicated snapshots were used as datastores, here the replicated snapshots themselves will become datastores for failover VMs 382.
At block 812, failover virtualization manager 383 creates a datastore for each VM to be failed over using the replicated snapshots in the volumes brought online in the preceding block. This block is similar to block 706, except that here the replicated snapshots SR598 become the failover datastores 584.
At block 814, which is analogous to block 708, failover virtualization manager 383 registers failover VMs 382, configures customization, and powers on failover VMs 382.
At block 816, failover VMs 382 are operational at the DR site using datastores comprising data that was replicated from the source data center. The planned failover operation has successfully completed. The failover VMs 382 are now operating “live” and the selected VMs 302 are not operational. System 300 now treats VMs 382 as source/production VMs for future storage operations. Appropriate updates are entered into media agent indexes 153 at media agent 444 and media agent 494 for tracking the various snapshots. VSA data agent 492 tracks the failover datastores 584 as data sources for future backups of failover VMs 382. Job completion is reported to storage manager 440 by data agents and media agents and the DR orchestration job ends here.
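For convenience, the planned failover sequence of blocks 802-816 can be summarized by the following ordered list of steps; each entry stands in for the agent interactions described above and is illustrative rather than prescriptive.

PLANNED_FAILOVER_STEPS = [
    ("power off source VMs 302",                 "block 802"),
    ("on-demand snap backup of datastores",      "block 804"),
    ("on-demand auxiliary copy to DR storage",   "block 806"),
    ("break mirror relationship",                "block 808"),
    ("bring DR data storage volumes online",     "block 810"),
    ("create datastores from replicated snaps",  "block 812"),
    ("register, customize, power on DR VMs",     "block 814"),
    ("DR VMs live; update indexes and report",   "block 816"),
]

def run_planned_failover() -> None:
    for step, block in PLANNED_FAILOVER_STEPS:
        print(f"{block}: {step}")    # a real job would call the responsible agent here

run_planned_failover()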
At block 902, an unplanned failure at the source data center causes source VMs 302 to power off and causes a break in the mirror relationship to the DR site. The unplanned failure is detected by one or more operating components of system 300, which triggers a DR orchestration job to be initiated for failover to the DR site. If the mirror relationship between primary storage and failover storage has not been broken by the unplanned failure, system 300 breaks it here according to block 808. The unplanned failover is made possible by all the operations performed by blocks 602-610 (and optionally 612), which set up all configurations and administration needed for the unplanned failover to succeed. As before, storage manager 440 initiates and manages the DR orchestration job for such source VMs 302 that are part of one or more failover groups set up for snap-based DR orchestration.
At block 904, which is analogous to block 810, data storage volumes are brought online at the DR site. Media agent 494 (using information in its media agent index 153) identifies the appropriate replicated snapshots SR598 (e.g., the most recently replicated) at failover storage 384 that are to be made into datastores for the failover VMs. Failover storage 384 may comprise any number of replicated snapshots SR598 generated by countless auxiliary copy jobs, but the most recently created ones are most desirable for the present failover. Media agent 494 identifies the data storage volumes comprising these snapshots and instructs failover storage 384 to bring them online.
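The selection of the most recently replicated snapshot in block 904 may be sketched as follows, assuming (for illustration) that media agent index 153 records a creation timestamp for each replicated snapshot per source volume. The field names are assumptions.

from datetime import datetime

index = [
    {"replica": "SR598-1", "source_volume": "volume-598",
     "created": datetime(2020, 3, 1, 2, 0)},
    {"replica": "SR598-2", "source_volume": "volume-598",
     "created": datetime(2020, 3, 1, 6, 0)},
]

def latest_replica(index: list, source_volume: str) -> dict:
    candidates = [e for e in index if e["source_volume"] == source_volume]
    return max(candidates, key=lambda e: e["created"])   # most recently replicated wins

print(latest_replica(index, "volume-598")["replica"])     # -> SR598-2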
At block 906, which is similar to block 812, failover virtualization manager 383 creates a datastore for each VM to be failed over using the previously existing replicated snapshots SR598.
At block 908, which is similar to block 814, failover virtualization manager 383 registers failover VMs 382, configures customization, and powers on failover VMs 382.
At block 910, which is analogous to block 816, failover VMs 382 are operational at the DR site using datastores comprising data that was replicated from the source data center, preferably by the most recent auxiliary copy job. The unplanned failover operation has successfully completed. The failover VMs 382 are now operating “live” and the failed source VMs 302 are not operational. System 300 now treats VMs 382 as source/production VMs for future storage operations. VSA data agent 492 tracks the failover datastores 584 as data sources for future backups of failover VMs 382. Job completion is reported to storage manager 440 by data agents and media agents and the DR orchestration job ends here.
At block 1002, system 300 reverses steps of a planned failover—from DR site to failback site, resulting in VMs 302 at the failback site using datastores that are based on snapshots replicated from the DR site (see, e.g., FIG. 8 ). The failover VMs 382 are powered off and not operational.
At block 1004, as a result of block 1002, failed-back VMs 302 operate with the most recent data recovered from the DR site. The steps taken in block 1002 result in failed-back VMs 302 operating “live” at the failback site, which is the original source data center.
At block 1006, system 300 determines whether any VMs at the source/failback site were powered off or failed but were not failed over to the DR site at block 614 or block 616 (VMs 302 that were "left-behind" by the failover orchestrated by the DR orchestration job). For example, VMs 302 that are not administered into a failover group will be "left behind" in planned or unplanned failover. If not, control passes to block 1012 (failback complete). If yes, control passes to block 1008. Illustratively, storage manager 440 consults management database 146 to determine failover status. Alternatively, failover status may be determined by storage manager 440 querying VSA data agent 442 or VSA data agent 492.
At block 1008, system 300 uses previously created backup copies to restore one or more left-behind VMs 302 at the source/failback site. For example, backup copies of these left-behind VMs 302 were created prior to the planned/unplanned failover and the accompanying DR orchestration job. Such backup copies (e.g., 116) are governed by storage policies and schedules configured by storage manager 440 in management database 146. Such backup copies (e.g., 116) are stored locally at the source data center or elsewhere, without limitation. Such backup copies are well known in the art and are available at this point to be restored in order to re-activate the left-behind VMs 302. Accordingly, storage manager 440 initiates one or more restore operations to restore backup copies 116 previously made for left-behind VMs 302. VSA data agent 442 and media agent 444 (or another media agent 144 with access to storage media hosting backup copies 116) interoperate as directed by storage manager 440 to populate data storage volume(s). Virtualization manager 303 activates the data storage volume(s) into datastores for the left-behind VMs 302, registers said VMs 302, and powers up said VMs 302.
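Blocks 1006-1010 may be illustrated by the following sketch, which identifies "left-behind" VMs as those outside the administered failover group and restores each from a backup copy rather than from a replicated snapshot. The function names and the restore placeholder are hypothetical.

def left_behind_vms(all_source_vms: set, failover_group: set) -> set:
    """VMs that were powered off/failed but never failed over to the DR site."""
    return all_source_vms - failover_group

def restore_from_backup(vm: str) -> str:
    # Placeholder for a restore job over backup copies 116 (in the real system the
    # storage manager, VSA data agent, and media agent interoperate to do this).
    return f"{vm} restored from backup copy and powered on"

all_vms = {"vm-302-a", "vm-302-b", "vm-302-c"}
group = {"vm-302-a", "vm-302-b"}                  # administered failover group
for vm in sorted(left_behind_vms(all_vms, group)):
    print(restore_from_backup(vm))                # -> vm-302-c restored ...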
At block 1010, as a result of block 1008, the restored VMs 302 operate with data recovered from previous backup copies 116 alongside the failed-back VMs that were failed back using a DR orchestration job at block 1002.
At block 1012, the failback operation is complete and block 620 ends.
In regard to the figures described herein, other embodiments are possible within the scope of the present invention, such that the above-recited components, steps, blocks, operations, messages, requests, queries, and/or instructions are differently arranged, sequenced, sub-divided, organized, and/or combined. In some embodiments, a different component may initiate or execute a given operation. The screenshots are merely illustrative to help with the reader's understanding and are not to be considered limiting.
Some example enumerated embodiments of the present invention are recited in this section in the form of methods, systems, and non-transitory computer-readable media, without limitation.
According to an exemplary embodiment, a data storage management system for orchestrating virtual machine failover, the system comprising: a first computing device comprising one or more hardware processors and computer memory; wherein the first computing device is configured to: initiate a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor; initiate an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; initiate a disaster recovery orchestration job for the first virtual machine to fail over to a second virtual machine that is currently powered off, wherein the first virtual machine is included in a failover group administered in the data storage management system, and wherein the failover group maps the first virtual machine to fail over to the second virtual machine, and wherein the first computing device is further configured to: cause a failover virtualization manager to create, for the second virtual machine, a second datastore based on the second snapshot in the second data storage volume at the failover data storage; cause the failover virtualization manager to cause a second virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot.
The above-recited system wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job. The above-recited system wherein the snapshot-based disaster recovery (DR) job does not require that VMs or their corresponding datastores be actively operating at the DR site before the DR orchestration job is initiated, i.e., before failover, whether test clones, planned failover, or unplanned failover. The above-recited system wherein the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in a storage manager that executes on the first computing device.
The above-recited system wherein the disaster recovery orchestration job is initiated based on detecting a failure at one or more of the first virtual machine host computing device, the primary data storage, and a first virtualization manager associated with the first virtual machine host computing device. The above-recited system wherein as part of the disaster recovery orchestration job, the first computing device is further configured to activate, on-demand, a data agent associated with the failover virtualization manager and a media agent associated with the failover storage. The above-recited system wherein the first virtual machine executes in one of: a first virtualized data center and a first cloud computing environment; and wherein after the disaster recovery orchestration job, the second virtual machine executes in one of: another distinct virtualized data center configured for disaster recovery and another cloud computing environment configured for disaster recovery. The above-recited system wherein the first computing device is further configured to: initiate a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage. The above-recited system wherein the first computing device is further configured to, while performing the second disaster recovery orchestration job: determine that a third virtual machine not included in the failover group is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; identify a backup copy of the third virtual machine; and initiate a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
The above-recited system wherein the first computing device is further configured to, as part of the disaster recovery orchestration job for the first virtual machine: cause the failover storage to bring the second data storage volume online for access by the second virtual machine. The above-recited system wherein the first computing device is further configured to, as part of the disaster recovery orchestration job for the first virtual machine: cause the failover virtualization manager to register the second virtual machine with the failover virtualization manager. The above-recited system wherein the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device. The above-recited system wherein the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device by way of a first virtualization manager. The above-recited system wherein the system further comprises a second computing device that executes a first media agent, and wherein, as part of the snapshot backup job, the first media agent instructs the primary data storage to take the first snapshot. The above-recited system wherein the system further comprises a second computing device that executes a first media agent, and wherein, as part of the auxiliary copy job, the first media agent instructs the primary data storage to replicate the first snapshot to the failover storage. The above-recited system wherein the system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage. The above-recited system wherein the system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage, and wherein the failure comprises a break in the mirror-relationship to the failover storage.
The above-recited system wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited system wherein the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited system wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine. The above-recited system wherein the second data agent instructs the failover virtualization manager to create the second datastore. The above-recited system wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore. The above-recited system wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center. The above-recited system wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center. The above-recited system wherein the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment. The above-recited system wherein the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment. The above-recited system wherein the system further comprises a second computing device that executes a first data agent associated with a first virtualization manager that manages the first virtual machine host computing device. The above-recited system, wherein the system further comprises a second computing device that executes a first media agent associated with the primary data storage. The above-recited system, wherein the system further comprises a second computing device that executes a second data agent associated with the failover virtualization manager. The above-recited system, wherein the system further comprises a second computing device that executes a second media agent associated with the failover storage.
According to another embodiment, a method for orchestrating virtual machine failover, the method comprising: by a data storage management system, initiating a disaster recovery orchestration job for a first virtual machine that is included in a failover group administered in the data storage management system, wherein the disaster recovery orchestration job comprises: powering off the first virtual machine having a corresponding first datastore in a primary data storage; causing the primary data storage (i) to take a first snapshot of a first data storage volume hosting the first datastore, and (ii) to store the first snapshot at the primary data storage; causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; causing the mirror-relationship to break; causing the failover storage to bring the second data storage volume online; causing a failover virtualization manager to create, for a second virtual machine that is powered off, a second datastore based on the second snapshot in the second data storage volume at the failover data storage; causing the failover virtualization manager to register the second virtual machine with the failover virtualization manager; causing the failover virtualization manager to cause a virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
The above-recited method wherein the disaster recovery orchestration job is for a planned failover of the first virtual machine to the second virtual machine, and wherein a storage manager initiates the disaster recovery orchestration job. The above-recited method wherein a first data agent instructs a first virtualization manager to power off the first virtual machine. The above-recited method, wherein a first media agent associated with the primary data storage instructs the primary data storage to break the mirror-relationship to the failover storage. The above-recited method, wherein as part of the disaster recovery orchestration job, a storage manager that manages storage operations in the data storage management system activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage. The method above, wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The method above, wherein the second data agent and the second media agent execute on backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited method, wherein, as part of a snapshot backup job, a first media agent instructs the primary data storage to take the first snapshot. The above-recited method, wherein, as part of an auxiliary copy job, a first media agent instructs the primary data storage to replicate the first snapshot to the failover storage. The above-recited method wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine. The above-recited method wherein the second data agent instructs the failover virtualization manager to create the second datastore. The above-recited method wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore. The above-recited method, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center. The above-recited method, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center. The above-recited method, wherein the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment. 
The above-recited method, wherein the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment. The above-recited method, wherein the data storage management system comprises a storage manager that manages storage operations in the data storage management system, including the disaster recovery orchestration job, and wherein the storage manager executes on one of: a computing device comprising one or more hardware processors and computer memory, and a virtual machine, distinct from the first virtual machine and the second virtual machine, that executes on a computing device comprising one or more hardware processors and computer memory. The above-recited method, wherein the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine. The above-recited method, wherein the data storage management system comprises a first media agent associated with the primary data storage. The above-recited method, wherein the data storage management system comprises a second data agent associated with the failover virtualization manager. The above-recited method, wherein the data storage management system comprises a second media agent associated with the failover storage. The above-recited method further comprising: by the data storage management system, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage. The above-recited method, wherein the second disaster recovery orchestration job comprises: determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; identifying a backup copy of the third virtual machine; and initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
According to yet another exemplary embodiment, a method for orchestrating virtual machine failover, the method comprising: by a storage manager that manages storage operations in a data storage management system, initiating a disaster recovery orchestration job for a first virtual machine that is included in a failover group administered at the storage manager, wherein the disaster recovery orchestration job comprises: powering off the first virtual machine having a corresponding first datastore in a primary data storage; causing the primary data storage (i) to take a first snapshot of a first data storage volume hosting the first datastore, and (ii) to store the first snapshot at the primary data storage; causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; causing the mirror-relationship to break; causing the failover storage to bring the second data storage volume online; causing a failover virtualization manager to create, for a second virtual machine that is powered off, a second datastore based on the second snapshot in the second data storage volume at the failover data storage; causing the failover virtualization manager to register the second virtual machine with the failover virtualization manager; causing the failover virtualization manager to cause a virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
The above-recited method, wherein the disaster recovery orchestration job is for a planned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager. The above-recited method, wherein as part of the disaster recovery orchestration job, the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage; wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine; wherein the second data agent instructs the failover virtualization manager to create the second datastore; and wherein the second data agent further instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore. The above-recited method, wherein the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine; wherein the data storage management system comprises a first media agent associated with the primary data storage; wherein the data storage management system comprises a second data agent associated with the failover virtualization manager; and wherein the data storage management system comprises a second media agent associated with the failover storage. The above-recited method further comprising: by the storage manager, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage. The above-recited method, wherein the second disaster recovery orchestration job comprises: by the storage manager, determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; by the storage manager, identifying a backup copy of the third virtual machine; and by the storage manager, initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore. The above-recited method, wherein a first data agent instructs a first virtualization manager to power off the first virtual machine. The above-recited method, wherein, as part of a snapshot backup job initiated by the storage manager, a first media agent instructs the primary data storage to take the first snapshot. The above-recited method, wherein, as part of an auxiliary copy job initiated by the storage manager, a first media agent instructs the primary data storage to replicate the first snapshot to the failover storage. The above-recited method, wherein a first media agent associated with the primary data storage instructs the primary data storage to break the mirror-relationship to the failover storage. 
The above-recited method, wherein as part of the disaster recovery orchestration job, the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage. The above-recited method, wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited method, wherein the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited method wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine. The above-recited method wherein the second data agent instructs the failover virtualization manager to create the second datastore. The above-recited method wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore. The above-recited method, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center. The above-recited method, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center. The above-recited method, wherein the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment. The above-recited method, wherein the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment. The above-recited method, wherein the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine. The above-recited method, wherein the data storage management system comprises a first media agent associated with the primary data storage. The above-recited method, wherein the data storage management system comprises a second data agent associated with the failover virtualization manager. The above-recited method, wherein the data storage management system comprises a second media agent associated with the failover storage.
According to another illustrative embodiment, a method for orchestrating virtual machine failover, the method comprising: by a storage manager that manages storage operations in a data storage management system, initiating a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor, and wherein the first virtual machine is included in a failover group administered at the storage manager; by the storage manager, initiating an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; based on a failure at one or more of the first virtual machine host computing device and the primary data storage, initiating, by the storage manager, a disaster recovery orchestration job for the first virtual machine to fail over to a second virtual machine that is currently powered off, wherein the disaster recovery orchestration job comprises: causing the failover storage to bring the second data storage volume online for access by the second virtual machine; causing a failover virtualization manager to create, for the second virtual machine, a second datastore based on the second snapshot in the second data storage volume at the failover data storage; causing the failover virtualization manager to register the second virtual machine with the failover virtualization manager; causing the failover virtualization manager to cause a second virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
The above-recited method, wherein the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager. The above-recited method, wherein a first data agent detects the failure at the first virtual machine host computing device. The above-recited method, wherein a first data agent detects the failure at the first virtual machine host computing device by way of a first virtualization manager. The above-recited method, wherein, as part of the snapshot backup job initiated by the storage manager, a first media agent instructs the primary data storage to take the first snapshot. The above-recited method, wherein, as part of the auxiliary copy job initiated by the storage manager, a first media agent instructs the primary data storage to replicate the first snapshot to the failover storage. The above-recited method, wherein a first media agent detects the failure at the primary data storage. The above-recited method, wherein a first media agent detects the failure at the primary data storage, and wherein the failure comprises a break in the mirror-relationship to the failover storage. The above-recited method, wherein as part of the disaster recovery orchestration job, the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover storage. The above-recited method, wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited method, wherein the second data agent and the second media agent execute on backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited method wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine. The above-recited method wherein the second data agent instructs the failover virtualization manager to create the second datastore. The above-recited method wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore. The above-recited method, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center. The above-recited method, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center. The above-recited method, wherein the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment. 
The above-recited method, wherein the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment. The above-recited method, wherein the data storage management system comprises a first data agent associated with a first virtualization manager that manages the first virtual machine host computing device. The above-recited method, wherein the data storage management system comprises a first media agent associated with the primary data storage. The above-recited method, wherein the data storage management system comprises a second data agent associated with the failover virtualization manager. The above-recited method, wherein the data storage management system comprises a second media agent associated with the failover storage. The above-recited method further comprising: by the storage manager, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover storage. The above-recited method, wherein the second disaster recovery orchestration job comprises: by the storage manager, determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; by the storage manager, identifying a backup copy of the third virtual machine; and by the storage manager, initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
According to yet another illustrative embodiment, a data storage management system for orchestrating virtual machine failover, the system comprising: a first computing device comprising one or more hardware processors and computer memory, wherein a storage manager executes on the first computing device; wherein the first computing device executing the storage manager is configured to: initiate a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor, and wherein the first virtual machine is included in a failover group administered at the storage manager; initiate an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween; initiate a disaster recovery orchestration job for the first virtual machine to fail over to a second virtual machine that is currently powered off, wherein the first computing device executing the storage manager is further configured to: cause the failover storage to bring the second data storage volume online for access by the second virtual machine; cause a failover virtualization manager to create, for the second virtual machine, a second datastore based on the second snapshot in the second data storage volume at the failover data storage; cause the failover virtualization manager to register the second virtual machine with the failover virtualization manager; cause the failover virtualization manager to cause a second virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
The above-recited system, wherein the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager. The above-recited system, wherein the disaster recovery orchestration job is initiated based on detecting a failure at one or more of the first virtual machine host computing device, the primary data storage, and a first virtualization manager associated with the first virtual machine host computing device. The above-recited system, wherein the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device. The above-recited system, wherein the system further comprises a second computing device that executes a first data agent that detects the failure at the first virtual machine host computing device by way of a first virtualization manager. The above-recited system, wherein the system further comprises a second computing device that executes a first media agent, and wherein, as part of the snapshot backup job initiated by the storage manager, the first media agent instructs the primary data storage to take the first snapshot. The above-recited system, wherein the system further comprises a second computing device that executes a first media agent, and wherein, as part of the auxiliary copy job initiated by the storage manager, the first media agent instructs the primary data storage to replicate the first snapshot to the failover data storage. The above-recited system, wherein the system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage. The above-recited system, wherein the system further comprises a second computing device that executes a first media agent that detects the failure at the primary data storage, and wherein the failure comprises a break in the mirror-relationship to the failover data storage. The above-recited system, wherein as part of the disaster recovery orchestration job, the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover data storage. The above-recited system, wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited system, wherein the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager. The above-recited system, wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine. The above-recited system, wherein the second data agent instructs the failover virtualization manager to create the second datastore. The above-recited system, wherein the second data agent instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
The above-recited system, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in another distinct virtualized data center configured for disaster recovery from the first virtualized data center. The above-recited system, wherein the first virtual machine executes in a first virtualized data center and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a cloud computing environment configured for disaster recovery from the first virtualized data center. The above-recited system, wherein the first virtual machine executes in a cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a first virtualized data center configured for disaster recovery from the cloud computing environment. The above-recited system, wherein the first virtual machine executes in a first cloud computing environment and wherein, after the disaster recovery orchestration job, the second virtual machine executes in a second cloud computing environment configured for disaster recovery from the first cloud computing environment. The above-recited system, wherein the system further comprises a second computing device that executes a first data agent associated with a first virtualization manager that manages the first virtual machine host computing device. The above-recited system, wherein the system further comprises a second computing device that executes a first media agent associated with the primary data storage. The above-recited system, wherein the system further comprises a second computing device that executes a second data agent associated with the failover virtualization manager. The above-recited system, wherein the system further comprises a second computing device that executes a second media agent associated with the failover data storage. The above-recited system, wherein the first computing device executing the storage manager is further configured to: initiate a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery orchestration job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary data storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover data storage. The above-recited system, wherein the first computing device executing the storage manager is further configured to, while performing the second disaster recovery orchestration job: determine that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine; identify a backup copy of the third virtual machine; and initiate a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary data storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
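The on-demand failover flow recited in the system embodiment above (bring the replicated volume online, create the second datastore, register the second virtual machine, power it up) can be sketched in outline as follows. This is a minimal, hypothetical illustration; the classes FailoverArray and FailoverVCenter and all of their methods are assumed stand-ins, not any actual storage-array or virtualization-manager API.

```python
# Hypothetical sketch of the on-demand failover orchestration sequence above.
# FailoverArray and FailoverVCenter are illustrative stand-ins only.
class FailoverArray:
    """Stand-in for the failover data storage holding replicated snapshots."""
    def bring_volume_online(self, volume_id: str) -> str:
        print(f"volume {volume_id} brought online")
        return volume_id

class FailoverVCenter:
    """Stand-in for the failover virtualization manager."""
    def create_datastore(self, volume_id: str, name: str) -> str:
        print(f"datastore {name} created from {volume_id}")
        return name

    def register_vm(self, vm_name: str) -> None:
        print(f"{vm_name} registered with failover virtualization manager")

    def power_up(self, vm_name: str, datastore: str) -> None:
        print(f"{vm_name} powered up with access to {datastore}")

def run_failover(array: FailoverArray, vcenter: FailoverVCenter,
                 volume_id: str, vm_name: str) -> None:
    """On-demand failover: these steps run only after the orchestration job starts."""
    volume = array.bring_volume_online(volume_id)                   # second data storage volume
    datastore = vcenter.create_datastore(volume, f"{vm_name}-ds")   # second datastore from second snapshot
    vcenter.register_vm(vm_name)                                    # register second virtual machine
    vcenter.power_up(vm_name, datastore)                            # power up with datastore access

if __name__ == "__main__":
    run_failover(FailoverArray(), FailoverVCenter(), "vol-2", "vm1-failover")
```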
In other embodiments, a system or systems operates according to one or more of the methods and/or computer-readable media recited in the preceding paragraphs. In yet other embodiments, a method or methods operates according to one or more of the systems and/or computer-readable media recited in the preceding paragraphs. In yet more embodiments, a non-transitory computer-readable medium or media causes one or more computing devices having one or more processors and computer-readable memory to operate according to one or more of the systems and/or methods recited in the preceding paragraphs.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense, i.e., in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words using the singular or plural number may also include the plural or singular number respectively. The word “or” in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list. Likewise the term “and/or” in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list.
In some embodiments, certain operations, acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all are necessary for the practice of the algorithms). In certain embodiments, operations, acts, functions, or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described. Software and other modules may reside and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein. Software and other modules may be accessible via local computer memory, via a network, via a browser, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, interactive voice response, command line interfaces, and other suitable interfaces.
Further, processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources. Two or more components of a system can be combined into fewer components. Various components of the illustrated systems can be implemented in one or more virtual machines, rather than in dedicated computer hardware systems and/or computing devices. Likewise, the data repositories shown can represent physical and/or logical data storage, including, e.g., storage area networks or other distributed storage systems. Moreover, in some embodiments the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any of the subset of the components shown can communicate with any other subset of components in various implementations.
Embodiments are also described above with reference to flow chart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of the flow chart illustrations and/or block diagrams, and combinations of blocks in the flow chart illustrations and/or block diagrams, may be implemented by computer program instructions. Such instructions may be provided to a processor of a general purpose computer, special purpose computer, specially-equipped computer (e.g., comprising a high-performance database server, a graphics subsystem, etc.) or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the acts specified in the flow chart and/or block diagram block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flow chart and/or block diagram block or blocks. The computer program instructions may also be loaded to a computing device or other programmable data processing apparatus to cause operations to be performed on the computing device or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the acts specified in the flow chart and/or block diagram block or blocks.
Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention. These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain examples of the invention, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.
To reduce the number of claims, certain aspects of the invention are presented below in certain claim forms, but the applicant contemplates other aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. § 112(f) (AIA), other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words “means for,” but use of the term “for” in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f). Accordingly, the applicant reserves the right to pursue additional claims after filing this application, in either this application or in a continuing application.
Claims (20)
1. A data storage management system for orchestrating virtual machine failover, the system comprising:
a first computing device comprising one or more hardware processors and computer memory;
wherein the first computing device is configured to:
initiate a snapshot backup job by causing a primary data storage (i) to take a first snapshot of a first data storage volume hosting a first datastore for a first virtual machine, and (ii) to store the first snapshot at the primary data storage, wherein the first virtual machine executes on a first virtual machine host computing device comprising one or more hardware processors, computer memory, and a hypervisor;
initiate an auxiliary copy job by causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween;
initiate a disaster recovery orchestration job for the first virtual machine to fail over to a second virtual machine that is currently powered off, wherein the first virtual machine is included in a failover group administered in the data storage management system, and wherein the failover group maps the first virtual machine to fail over to the second virtual machine, and wherein the first computing device is further configured to:
cause a failover virtualization manager to create, for the second virtual machine, a second datastore based on the second snapshot in the second data storage volume at the failover data storage;
cause the failover virtualization manager to cause a second virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover data storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and
wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
2. The system of claim 1 , wherein the disaster recovery orchestration job is for an unplanned failover of the first virtual machine to the second virtual machine, based on using administrative settings in a storage manager that executes on the first computing device.
3. The system of claim 1 , wherein the disaster recovery orchestration job is initiated based on detecting a failure at one or more of: (i) the first virtual machine host computing device, (ii) the primary data storage, and (iii) a first virtualization manager associated with the first virtual machine host computing device.
4. The system of claim 1 , wherein as part of the disaster recovery orchestration job, the first computing device is further configured to activate, on-demand, a data agent associated with the failover virtualization manager and a media agent associated with the failover data storage.
5. The system of claim 1 , wherein the first virtual machine executes in one of: a first virtualized data center and a first cloud computing environment; and
wherein after the disaster recovery orchestration job, the second virtual machine executes in one of: another distinct virtualized data center configured for disaster recovery and another cloud computing environment configured for disaster recovery.
6. The system of claim 1 , wherein the first computing device is further configured to:
initiate a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery orchestration job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary data storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover data storage.
7. The system of claim 6 , wherein the first computing device is further configured to, while performing the second disaster recovery orchestration job:
determine that a third virtual machine not included in the failover group is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine;
identify a backup copy of the third virtual machine; and
initiate a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary data storage and cause the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
8. A method for orchestrating virtual machine failover, the method comprising:
by a data storage management system, initiating a disaster recovery orchestration job for a first virtual machine that is included in a failover group administered in the data storage management system, wherein the disaster recovery orchestration job comprises:
powering off the first virtual machine having a corresponding first datastore in a primary data storage;
causing the primary data storage (i) to take a first snapshot of a first data storage volume hosting the first datastore, and (ii) to store the first snapshot at the primary data storage;
causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween;
causing the mirror-relationship to break;
causing the failover data storage to bring the second data storage volume online;
causing a failover virtualization manager to create, for a second virtual machine that is powered off, a second datastore based on the second snapshot in the second data storage volume at the failover data storage;
causing the failover virtualization manager to register the second virtual machine with the failover virtualization manager;
causing the failover virtualization manager to cause a virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover data storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and
wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
9. The method of claim 8 , wherein the disaster recovery orchestration job is for a planned failover of the first virtual machine to the second virtual machine, and wherein a storage manager initiates the disaster recovery orchestration job.
10. The method of claim 8 , wherein a first data agent instructs a first virtualization manager to power off the first virtual machine.
11. The method of claim 8 , wherein a first media agent associated with the primary data storage instructs the primary data storage to break the mirror-relationship to the failover data storage.
12. The method of claim 8 , wherein as part of the disaster recovery orchestration job, a storage manager that manages storage operations in the data storage management system activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover data storage.
13. The method of claim 12 , wherein the second data agent and the second media agent execute on a backup node that comprises one or more hardware processors and computer memory, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
14. The method of claim 12 , wherein the second data agent and the second media agent execute on a backup node that comprises a virtual machine distinct from the second virtual machine, and wherein the backup node is communicatively coupled to the failover data storage and to the failover virtualization manager.
15. A method for orchestrating virtual machine failover, the method comprising:
by a storage manager that manages storage operations in a data storage management system, initiating a disaster recovery orchestration job for a first virtual machine that is included in a failover group administered at the storage manager, wherein the disaster recovery orchestration job comprises:
powering off the first virtual machine having a corresponding first datastore in a primary data storage;
causing the primary data storage (i) to take a first snapshot of a first data storage volume hosting the first datastore, and (ii) to store the first snapshot at the primary data storage;
causing the primary data storage to replicate the first snapshot to a failover data storage configured for disaster recovery, resulting in a second snapshot stored in a second data storage volume at the failover data storage, wherein the primary data storage and the failover data storage have a mirror-relationship that enables replication of snapshots therebetween;
causing the mirror-relationship to break;
causing the failover data storage to bring the second data storage volume online;
causing a failover virtualization manager to create, for a second virtual machine that is powered off, a second datastore based on the second snapshot in the second data storage volume at the failover data storage;
causing the failover virtualization manager to register the second virtual machine with the failover virtualization manager;
causing the failover virtualization manager to cause a virtual machine host computing device to power up the second virtual machine and to provide the second virtual machine with access to the second datastore at the failover data storage, wherein the second virtual machine operates with data in the second datastore replicated from the first snapshot; and
wherein the data storage management system is configured to create the second datastore and to power up the second virtual machine with access to the second datastore on-demand after initiating the disaster recovery orchestration job.
16. The method of claim 15 , wherein the disaster recovery orchestration job is for a planned failover of the first virtual machine to the second virtual machine, based on using administrative settings in the storage manager.
17. The method of claim 15 , wherein as part of the disaster recovery orchestration job, the storage manager activates a second data agent associated with the failover virtualization manager and further activates a second media agent associated with the failover data storage;
wherein the second media agent instructs the failover data storage to bring the second data storage volume online for access by the second virtual machine;
wherein the second data agent instructs the failover virtualization manager to create the second datastore; and
wherein the second data agent further instructs the failover virtualization manager to power up the second virtual machine with access to the second datastore.
18. The method of claim 15 , wherein the data storage management system comprises a first data agent associated with a first virtualization manager that powers off the first virtual machine;
wherein the data storage management system comprises a first media agent associated with the primary data storage;
wherein the data storage management system comprises a second data agent associated with the failover virtualization manager; and
wherein the data storage management system comprises a second media agent associated with the failover data storage.
19. The method of claim 15 further comprising:
by the storage manager, initiating a second disaster recovery orchestration job that causes the second virtual machine to fail back to the first virtual machine, wherein the second disaster recovery orchestration job causes a first virtualization manager to re-activate the first virtual machine and establishes in the primary data storage the first datastore of the re-activated first virtual machine based on a snapshot replicated from the failover data storage.
20. The method of claim 19 , wherein the second disaster recovery orchestration job comprises:
by the storage manager, determining that a third virtual machine not included in the failover group administered at the storage manager is powered off and did not fail over in the disaster recovery orchestration job for the first virtual machine;
by the storage manager, identifying a backup copy of the third virtual machine; and
by the storage manager, initiating a restore job that restores the backup copy of the third virtual machine to a third datastore at the primary data storage and causing the first virtualization manager to re-activate the third virtual machine with access to the third datastore.
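For readers tracing the claimed planned-failover sequence (claims 8 and 15), the ordered steps can be summarized in the short, hypothetical sketch below. The function name and step strings are illustrative assumptions, not claim language, and the sketch is not part of the claims.

```python
# Hypothetical outline of the planned-failover sequence recited in claims 8 and 15;
# all names here are illustrative assumptions only.
def planned_failover_steps(vm: str, failover_vm: str) -> list[str]:
    """Ordered steps a planned failover orchestration job would perform."""
    return [
        f"power off {vm} (first virtual machine) via the first virtualization manager",
        f"take first snapshot of the data storage volume hosting the datastore of {vm}",
        "replicate the first snapshot to the failover data storage (second snapshot)",
        "break the mirror-relationship between primary and failover data storage",
        "bring the second data storage volume online at the failover data storage",
        f"create second datastore for {failover_vm} from the second snapshot",
        f"register {failover_vm} with the failover virtualization manager",
        f"power up {failover_vm} with access to the second datastore",
    ]

if __name__ == "__main__":
    for step in planned_failover_steps("vm1", "vm1-dr"):
        print(step)
```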
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/831,562 US11099956B1 (en) | 2020-03-26 | 2020-03-26 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US17/377,877 US11663099B2 (en) | 2020-03-26 | 2021-07-16 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US18/135,639 US12235744B2 (en) | 2020-03-26 | 2023-04-17 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US18/918,981 US20250036535A1 (en) | 2020-03-26 | 2024-10-17 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/831,562 US11099956B1 (en) | 2020-03-26 | 2020-03-26 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/377,877 Continuation US11663099B2 (en) | 2020-03-26 | 2021-07-16 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
Publications (1)
Publication Number | Publication Date |
---|---|
US11099956B1 true US11099956B1 (en) | 2021-08-24 |
Family
ID=77389865
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/831,562 Active US11099956B1 (en) | 2020-03-26 | 2020-03-26 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US17/377,877 Active US11663099B2 (en) | 2020-03-26 | 2021-07-16 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US18/135,639 Active US12235744B2 (en) | 2020-03-26 | 2023-04-17 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US18/918,981 Pending US20250036535A1 (en) | 2020-03-26 | 2024-10-17 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/377,877 Active US11663099B2 (en) | 2020-03-26 | 2021-07-16 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US18/135,639 Active US12235744B2 (en) | 2020-03-26 | 2023-04-17 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US18/918,981 Pending US20250036535A1 (en) | 2020-03-26 | 2024-10-17 | Snapshot-based disaster recovery orchestration of virtual machine failover and failback |
Country Status (1)
Country | Link |
---|---|
US (4) | US11099956B1 (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200409803A1 (en) * | 2019-06-27 | 2020-12-31 | Netapp Inc. | Incremental restore of a virtual machine |
US20210342237A1 (en) * | 2020-03-26 | 2021-11-04 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11228552B1 (en) * | 2020-10-20 | 2022-01-18 | Servicenow, Inc. | Automatically handling messages of a non-operational mail transfer agent within a virtualization container |
US11314687B2 (en) | 2020-09-24 | 2022-04-26 | Commvault Systems, Inc. | Container data mover for migrating data between distributed data storage systems integrated with application orchestrators |
US11321189B2 (en) | 2014-04-02 | 2022-05-03 | Commvault Systems, Inc. | Information management by a media agent in the absence of communications with a storage manager |
US11327852B1 (en) * | 2020-10-22 | 2022-05-10 | Dell Products L.P. | Live migration/high availability system |
US11334450B1 (en) * | 2021-02-25 | 2022-05-17 | Qnap Systems, Inc. | Backup method and backup system for virtual machine |
US11429499B2 (en) | 2016-09-30 | 2022-08-30 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US11442768B2 (en) | 2020-03-12 | 2022-09-13 | Commvault Systems, Inc. | Cross-hypervisor live recovery of virtual machines |
US11449394B2 (en) | 2010-06-04 | 2022-09-20 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
US11461200B2 (en) * | 2020-11-19 | 2022-10-04 | Kyndryl, Inc. | Disaster recovery failback advisor |
US11467863B2 (en) | 2019-01-30 | 2022-10-11 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data |
US11467753B2 (en) | 2020-02-14 | 2022-10-11 | Commvault Systems, Inc. | On-demand restore of virtual machine data |
US11500669B2 (en) | 2020-05-15 | 2022-11-15 | Commvault Systems, Inc. | Live recovery of virtual machines in a public cloud computing environment |
US11500566B2 (en) | 2020-08-25 | 2022-11-15 | Commvault Systems, Inc. | Cloud-based distributed data storage system using block-level deduplication based on backup frequencies of incoming backup copies |
US11550680B2 (en) | 2018-12-06 | 2023-01-10 | Commvault Systems, Inc. | Assigning backup resources in a data storage management system based on failover of partnered data storage resources |
US11570243B2 (en) | 2020-09-22 | 2023-01-31 | Commvault Systems, Inc. | Decommissioning, re-commissioning, and commissioning new metadata nodes in a working distributed data storage system |
US20230315592A1 (en) * | 2022-03-30 | 2023-10-05 | Rubrik, Inc. | Virtual machine failover management for geo-redundant data centers |
US11789830B2 (en) | 2020-09-22 | 2023-10-17 | Commvault Systems, Inc. | Anti-entropy-based metadata recovery in a strongly consistent distributed data storage system |
US11960364B2 (en) * | 2020-04-14 | 2024-04-16 | Capital One Services, Llc | Event processing |
US11995042B1 (en) * | 2023-01-11 | 2024-05-28 | Dell Products L.P. | Fast recovery for replication corruptions |
US12019525B2 (en) | 2021-10-05 | 2024-06-25 | Commvault Systems, Inc. | Cloud-based recovery of backed up data using auxiliary copy replication and on-demand failover resources |
US12045147B2 (en) * | 2022-10-03 | 2024-07-23 | Rubrik, Inc. | Lossless failover for data recovery |
US12061524B2 (en) | 2019-06-24 | 2024-08-13 | Commvault Systems, Inc. | Content indexing of files in block-level backup copies of virtual machine data |
US20250021452A1 (en) * | 2023-07-14 | 2025-01-16 | Sap Se | Disaster recovery using incremental database recovery |
US12306725B2 (en) | 2023-12-15 | 2025-05-20 | Commvault Systems, Inc. | Cloud-based recovery of backed up data using auxiliary copy replication and on-demand failover resources |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10423588B2 (en) * | 2015-08-25 | 2019-09-24 | International Business Machines Corporation | Orchestrated disaster recovery |
US11797400B2 (en) * | 2020-05-19 | 2023-10-24 | EMC IP Holding Company LLC | Cost-optimized true zero recovery time objective for multiple applications based on interdependent applications |
US11934283B2 (en) | 2020-05-19 | 2024-03-19 | EMC IP Holding Company LLC | Cost-optimized true zero recovery time objective for multiple applications using failure domains |
US11899957B2 (en) | 2020-05-19 | 2024-02-13 | EMC IP Holding Company LLC | Cost-optimized true zero recovery time objective for multiple applications |
US11836512B2 (en) | 2020-05-19 | 2023-12-05 | EMC IP Holding Company LLC | Virtual machine replication strategy based on predicted application failures |
US11748166B2 (en) * | 2020-06-26 | 2023-09-05 | EMC IP Holding Company LLC | Method and system for pre-allocation of computing resources prior to preparation of physical assets |
US20230066137A1 (en) | 2021-08-19 | 2023-03-02 | Nutanix, Inc. | User interfaces for disaster recovery of distributed file servers |
US12117972B2 (en) | 2021-08-19 | 2024-10-15 | Nutanix, Inc. | File server managers and systems for managing virtualized file servers |
US12204923B2 (en) | 2021-10-21 | 2025-01-21 | EMC IP Holding Company LLC | Data center restoration |
US12001303B2 (en) * | 2021-10-21 | 2024-06-04 | EMC IP Holding Company LLC | Data center restoration and migration |
US11809292B2 (en) | 2021-12-10 | 2023-11-07 | Cisco Technology, Inc. | Adaptive application recovery |
US12008135B2 (en) * | 2021-12-21 | 2024-06-11 | Commvault Systems, Inc. | Controlling information privacy in a shared data storage management system |
US12153690B2 (en) | 2022-01-24 | 2024-11-26 | Nutanix, Inc. | Consistent access control lists across file servers for local users in a distributed file server environment |
US12255769B2 (en) * | 2022-07-28 | 2025-03-18 | Nutanix, Inc. | Disaster recovery pipeline for block storage and dependent applications |
US12189499B2 (en) | 2022-07-29 | 2025-01-07 | Nutanix, Inc. | Self-service restore (SSR) snapshot replication with share-level file system disaster recovery on virtualized file servers |
Citations (134)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4084231A (en) | 1975-12-18 | 1978-04-11 | International Business Machines Corporation | System for facilitating the copying back of data in disc and tape units of a memory hierarchial system |
US4267568A (en) | 1975-12-03 | 1981-05-12 | System Development Corporation | Information storage and retrieval system |
US4283787A (en) | 1978-11-06 | 1981-08-11 | British Broadcasting Corporation | Cyclic redundancy data check encoding method and apparatus |
US4417321A (en) | 1981-05-18 | 1983-11-22 | International Business Machines Corp. | Qualifying and sorting file record data |
US4641274A (en) | 1982-12-03 | 1987-02-03 | International Business Machines Corporation | Method for communicating changes made to text form a text processor to a remote host |
US4654819A (en) | 1982-12-09 | 1987-03-31 | Sequoia Systems, Inc. | Memory back-up system |
US4686620A (en) | 1984-07-26 | 1987-08-11 | American Telephone And Telegraph Company, At&T Bell Laboratories | Database backup method |
EP0259912A1 (en) | 1986-09-12 | 1988-03-16 | Hewlett-Packard Limited | File backup facility for a community of personal computers |
US4912637A (en) | 1988-04-26 | 1990-03-27 | Tandem Computers Incorporated | Version management tool |
EP0405926A2 (en) | 1989-06-30 | 1991-01-02 | Digital Equipment Corporation | Method and apparatus for managing a shadow set of storage media |
US4995035A (en) | 1988-10-31 | 1991-02-19 | International Business Machines Corporation | Centralized management in a computer network |
US5005122A (en) | 1987-09-08 | 1991-04-02 | Digital Equipment Corporation | Arrangement with cooperating management server node and network service node |
EP0467546A2 (en) | 1990-07-18 | 1992-01-22 | International Computers Limited | Distributed data processing systems |
US5093912A (en) | 1989-06-26 | 1992-03-03 | International Business Machines Corporation | Dynamic resource pool expansion and contraction in multiprocessing environments |
US5133065A (en) | 1989-07-27 | 1992-07-21 | Personal Computer Peripherals Corporation | Backup computer program for networks |
US5193154A (en) | 1987-07-10 | 1993-03-09 | Hitachi, Ltd. | Buffered peripheral system and method for backing up and retrieving data to and from backup memory device |
EP0541281A2 (en) | 1991-11-04 | 1993-05-12 | AT&T Corp. | Incremental-computer-file backup using signatures |
US5212772A (en) | 1991-02-11 | 1993-05-18 | Gigatrend Incorporated | System for storing data in backup tape device |
US5226157A (en) | 1988-03-11 | 1993-07-06 | Hitachi, Ltd. | Backup control method and system in data processing system using identifiers for controlling block data transfer |
US5239647A (en) | 1990-09-07 | 1993-08-24 | International Business Machines Corporation | Data storage hierarchy with shared storage level |
US5241668A (en) | 1992-04-20 | 1993-08-31 | International Business Machines Corporation | Method and system for automated termination and resumption in a time zero backup copy process |
US5241670A (en) | 1992-04-20 | 1993-08-31 | International Business Machines Corporation | Method and system for automated backup copy ordering in a time zero backup copy session |
US5276860A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data processor with improved backup storage |
US5276867A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data storage system with improved data migration |
US5287500A (en) | 1991-06-03 | 1994-02-15 | Digital Equipment Corporation | System for allocating storage spaces based upon required and optional service attributes having assigned piorities |
US5301286A (en) | 1991-01-02 | 1994-04-05 | At&T Bell Laboratories | Memory archiving indexing arrangement |
US5321816A (en) | 1989-10-10 | 1994-06-14 | Unisys Corporation | Local-remote apparatus with specialized image storage modules |
US5347653A (en) | 1991-06-28 | 1994-09-13 | Digital Equipment Corporation | System for reconstructing prior versions of indexes using records indicating changes between successive versions of the indexes |
US5410700A (en) | 1991-09-04 | 1995-04-25 | International Business Machines Corporation | Computer system which supports asynchronous commitment of data |
WO1995013580A1 (en) | 1993-11-09 | 1995-05-18 | Arcada Software | Data backup and restore system for a computer network |
US5420996A (en) | 1990-04-27 | 1995-05-30 | Kabushiki Kaisha Toshiba | Data processing system having selective data save and address translation mechanism utilizing CPU idle period |
US5454099A (en) | 1989-07-25 | 1995-09-26 | International Business Machines Corporation | CPU implemented method for backing up modified data sets in non-volatile store for recovery in the event of CPU failure |
EP0774715A1 (en) | 1995-10-23 | 1997-05-21 | Stac Electronics | System for backing up files from disk volumes on multiple nodes of a computer network |
US5642496A (en) | 1993-09-23 | 1997-06-24 | Kanfi; Arnon | Method of making a backup copy of a memory over a plurality of copying sessions |
EP0809184A1 (en) | 1996-05-23 | 1997-11-26 | International Business Machines Corporation | Availability and recovery of files using copy storage pools |
EP0899662A1 (en) | 1997-08-29 | 1999-03-03 | Hewlett-Packard Company | Backup and restore system for a computer network |
WO1999012098A1 (en) | 1997-08-29 | 1999-03-11 | Hewlett-Packard Company | Data backup and recovery systems |
EP0981090A1 (en) | 1998-08-17 | 2000-02-23 | Connected Place Limited | A method of producing a checkpoint which describes a base file and a method of generating a difference file defining differences between an updated file and a base file |
US6418478B1 (en) | 1997-10-30 | 2002-07-09 | Commvault Systems, Inc. | Pipelined high speed data transfer mechanism |
US6542972B2 (en) | 2000-01-31 | 2003-04-01 | Commvault Systems, Inc. | Logical view and access to physical storage in modular data and storage management system |
US6658436B2 (en) | 2000-01-31 | 2003-12-02 | Commvault Systems, Inc. | Logical view and access to data managed by a modular data and storage management system |
US6721767B2 (en) | 2000-01-31 | 2004-04-13 | Commvault Systems, Inc. | Application specific rollback in a computer system |
US6760723B2 (en) | 2000-01-31 | 2004-07-06 | Commvault Systems Inc. | Storage management across multiple time zones |
US7003641B2 (en) | 2000-01-31 | 2006-02-21 | Commvault Systems, Inc. | Logical view with granular access to exchange data managed by a modular data and storage management system |
US7035880B1 (en) | 1999-07-14 | 2006-04-25 | Commvault Systems, Inc. | Modular backup and retrieval system used in conjunction with a storage area network |
WO2006052872A2 (en) | 2004-11-05 | 2006-05-18 | Commvault Systems, Inc. | System and method to support single instance storage operations |
US7107298B2 (en) | 2001-09-28 | 2006-09-12 | Commvault Systems, Inc. | System and method for archiving objects in an information store |
US7130970B2 (en) | 2002-09-09 | 2006-10-31 | Commvault Systems, Inc. | Dynamic storage device pooling in a computer system |
US7162496B2 (en) | 2002-09-16 | 2007-01-09 | Commvault Systems, Inc. | System and method for blind media support |
US7174433B2 (en) | 2003-04-03 | 2007-02-06 | Commvault Systems, Inc. | System and method for dynamically sharing media in a computer network |
US7315923B2 (en) | 2003-11-13 | 2008-01-01 | Commvault Systems, Inc. | System and method for combining data streams in pipelined storage operations in a storage network |
US7343453B2 (en) | 2004-04-30 | 2008-03-11 | Commvault Systems, Inc. | Hierarchical systems and methods for providing a unified view of storage information |
US7346623B2 (en) | 2001-09-28 | 2008-03-18 | Commvault Systems, Inc. | System and method for generating and managing quick recovery volumes |
US7389311B1 (en) | 1999-07-15 | 2008-06-17 | Commvault Systems, Inc. | Modular backup and retrieval system |
US7395282B1 (en) | 1999-07-15 | 2008-07-01 | Commvault Systems, Inc. | Hierarchical backup and retrieval system |
US7440982B2 (en) | 2003-11-13 | 2008-10-21 | Commvault Systems, Inc. | System and method for stored data archive verification |
US7454569B2 (en) | 2003-06-25 | 2008-11-18 | Commvault Systems, Inc. | Hierarchical system and method for performing storage operations in a computer network |
US7490207B2 (en) | 2004-11-08 | 2009-02-10 | Commvault Systems, Inc. | System and method for performing auxillary storage operations |
US7529782B2 (en) | 2003-11-13 | 2009-05-05 | Commvault Systems, Inc. | System and method for performing a snapshot and for restoring data |
US7543125B2 (en) | 2005-12-19 | 2009-06-02 | Commvault Systems, Inc. | System and method for performing time-flexible calendric storage operations |
US7546324B2 (en) | 2003-11-13 | 2009-06-09 | Commvault Systems, Inc. | Systems and methods for performing storage operations using network attached storage |
US7568080B2 (en) | 2002-10-07 | 2009-07-28 | Commvault Systems, Inc. | Snapshot storage and management system with indexing and user interface |
US7606844B2 (en) | 2005-12-19 | 2009-10-20 | Commvault Systems, Inc. | System and method for performing replication copy storage operations |
US7613752B2 (en) | 2005-11-28 | 2009-11-03 | Commvault Systems, Inc. | Systems and methods for using metadata to enhance data management operations |
US7617253B2 (en) | 2005-12-19 | 2009-11-10 | Commvault Systems, Inc. | Destination systems and methods for performing data replication |
US7617262B2 (en) | 2005-12-19 | 2009-11-10 | Commvault Systems, Inc. | Systems and methods for monitoring application data in a data replication system |
US7620710B2 (en) | 2005-12-19 | 2009-11-17 | Commvault Systems, Inc. | System and method for performing multi-path storage operations |
US7636743B2 (en) | 2005-12-19 | 2009-12-22 | Commvault Systems, Inc. | Pathname translation in a data replication system |
US20090319534A1 (en) | 2008-06-24 | 2009-12-24 | Parag Gokhale | Application-aware and remote single instance data management |
US7651593B2 (en) | 2005-12-19 | 2010-01-26 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7661028B2 (en) | 2005-12-19 | 2010-02-09 | Commvault Systems, Inc. | Rolling cache configuration for a data replication system |
US7734669B2 (en) | 2006-12-22 | 2010-06-08 | Commvault Systems, Inc. | Managing copies of data |
US7734578B2 (en) | 2003-11-13 | 2010-06-08 | Comm Vault Systems, Inc. | System and method for performing integrated storage operations |
US8170995B2 (en) | 2006-10-17 | 2012-05-01 | Commvault Systems, Inc. | Method and system for offline indexing of content and classifying stored data |
US20120150818A1 (en) | 2010-12-14 | 2012-06-14 | Commvault Systems, Inc. | Client-side repository in a networked deduplicated storage system |
US20120150826A1 (en) | 2010-12-14 | 2012-06-14 | Commvault Systems, Inc. | Distributed deduplicated storage system |
US8285681B2 (en) | 2009-06-30 | 2012-10-09 | Commvault Systems, Inc. | Data object store and server for a cloud storage environment, including data deduplication and data management across multiple cloud storage sites |
US8307177B2 (en) | 2008-09-05 | 2012-11-06 | Commvault Systems, Inc. | Systems and methods for management of virtualization data |
US8364652B2 (en) | 2010-09-30 | 2013-01-29 | Commvault Systems, Inc. | Content aligned block-based deduplication |
US8370542B2 (en) | 2002-09-16 | 2013-02-05 | Commvault Systems, Inc. | Combined stream auxiliary copy system and method |
US8433682B2 (en) | 2009-12-31 | 2013-04-30 | Commvault Systems, Inc. | Systems and methods for analyzing snapshots |
US8504526B2 (en) | 2010-06-04 | 2013-08-06 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations |
US8578120B2 (en) | 2009-05-22 | 2013-11-05 | Commvault Systems, Inc. | Block-level single instancing |
US8595191B2 (en) | 2009-12-31 | 2013-11-26 | Commvault Systems, Inc. | Systems and methods for performing data management operations using snapshots |
US8706867B2 (en) | 2011-03-31 | 2014-04-22 | Commvault Systems, Inc. | Realtime streaming of multimedia content from secondary storage devices |
US20140196038A1 (en) | 2013-01-08 | 2014-07-10 | Commvault Systems, Inc. | Virtual machine management in a data storage system |
US20140201170A1 (en) | 2013-01-11 | 2014-07-17 | Commvault Systems, Inc. | High availability distributed deduplicated storage system |
US8959299B2 (en) | 2004-11-15 | 2015-02-17 | Commvault Systems, Inc. | Using a snapshot as a data source |
US9116633B2 (en) | 2011-09-30 | 2015-08-25 | Commvault Systems, Inc. | Information management of virtual machines having mapped storage devices |
US9239687B2 (en) | 2010-09-30 | 2016-01-19 | Commvault Systems, Inc. | Systems and methods for retaining and using data block signatures in data protection operations |
US9286110B2 (en) | 2013-01-14 | 2016-03-15 | Commvault Systems, Inc. | Seamless virtual machine recall in a data storage system |
US9298715B2 (en) | 2012-03-07 | 2016-03-29 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9311121B2 (en) | 2012-12-21 | 2016-04-12 | Commvault Systems, Inc. | Archiving virtual machines in a data storage system |
US9342537B2 (en) | 2012-04-23 | 2016-05-17 | Commvault Systems, Inc. | Integrated snapshot interface for a data storage system |
US9372827B2 (en) | 2011-09-30 | 2016-06-21 | Commvault Systems, Inc. | Migration of an existing computing system to new hardware |
US9378035B2 (en) | 2012-12-28 | 2016-06-28 | Commvault Systems, Inc. | Systems and methods for repurposing virtual machines |
US20160203060A1 (en) * | 2015-01-09 | 2016-07-14 | Vmware, Inc. | Client deployment with disaster recovery considerations |
US9448731B2 (en) | 2014-11-14 | 2016-09-20 | Commvault Systems, Inc. | Unified snapshot storage management |
US9461881B2 (en) | 2011-09-30 | 2016-10-04 | Commvault Systems, Inc. | Migration of existing computing systems to cloud computing sites or virtual machines |
US9483362B2 (en) | 2013-05-08 | 2016-11-01 | Commvault Systems, Inc. | Use of auxiliary data protection software in failover operations |
US9495251B2 (en) | 2014-01-24 | 2016-11-15 | Commvault Systems, Inc. | Snapshot readiness checking and reporting |
US9495404B2 (en) | 2013-01-11 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods to process block-level backup for selective file restoration for virtual machines |
US20160350391A1 (en) | 2015-05-26 | 2016-12-01 | Commvault Systems, Inc. | Replication using deduplicated secondary copy data |
US20160371127A1 (en) * | 2015-06-19 | 2016-12-22 | Vmware, Inc. | Resource management for containers in a virtualized environment |
US9588972B2 (en) | 2010-09-30 | 2017-03-07 | Commvault Systems, Inc. | Efficient data management improvements, such as docking limited-feature data management modules to a full-featured data management system |
US9639274B2 (en) | 2015-04-14 | 2017-05-02 | Commvault Systems, Inc. | Efficient deduplication database validation |
US9639426B2 (en) | 2014-01-24 | 2017-05-02 | Commvault Systems, Inc. | Single snapshot for multiple applications |
US20170168903A1 (en) * | 2015-12-09 | 2017-06-15 | Commvault Systems, Inc. | Live synchronization and management of virtual machines across computing and virtualization platforms and using live synchronization to support disaster recovery |
US20170185488A1 (en) | 2015-12-23 | 2017-06-29 | Commvault Systems, Inc. | Application-level live synchronization across computing platforms including synchronizing co-resident applications to disparate standby destinations and selectively synchronizing some applications and not others |
US20170193003A1 (en) | 2015-12-30 | 2017-07-06 | Commvault Systems, Inc. | Redundant and robust distributed deduplication data storage system |
US9710465B2 (en) | 2014-09-22 | 2017-07-18 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US20170235647A1 (en) | 2016-02-12 | 2017-08-17 | Commvault Systems, Inc. | Data protection operations based on network path information |
US20170242871A1 (en) | 2016-02-18 | 2017-08-24 | Commvault Systems, Inc. | Data restoration operations based on network path information |
US9886346B2 (en) | 2013-01-11 | 2018-02-06 | Commvault Systems, Inc. | Single snapshot for multiple agents |
US9898213B2 (en) | 2015-01-23 | 2018-02-20 | Commvault Systems, Inc. | Scalable auxiliary copy processing using media agent resources |
US9939981B2 (en) | 2013-09-12 | 2018-04-10 | Commvault Systems, Inc. | File manager integration with virtualization in an information management system with an enhanced storage manager, including user control and storage management of virtual machines |
US9965306B1 (en) * | 2012-06-27 | 2018-05-08 | EMC IP Holding Company LLC | Snapshot replication |
US9983936B2 (en) | 2014-11-20 | 2018-05-29 | Commvault Systems, Inc. | Virtual machine change block tracking |
US20180267861A1 (en) | 2017-03-15 | 2018-09-20 | Commvault Systems, Inc. | Application aware backup of virtual machines |
US10084873B2 (en) | 2015-06-19 | 2018-09-25 | Commvault Systems, Inc. | Assignment of data agent proxies for executing virtual-machine secondary copy operations including streaming backup jobs |
US10387073B2 (en) | 2017-03-29 | 2019-08-20 | Commvault Systems, Inc. | External dynamic virtual machine synchronization |
US10474542B2 (en) | 2017-03-24 | 2019-11-12 | Commvault Systems, Inc. | Time-based virtual machine reversion |
US10503753B2 (en) | 2016-03-10 | 2019-12-10 | Commvault Systems, Inc. | Snapshot replication operations based on incremental block change tracking |
US10592350B2 (en) | 2016-03-09 | 2020-03-17 | Commvault Systems, Inc. | Virtual server cloud file system for virtual machine restore to cloud operations |
US10650057B2 (en) | 2014-07-16 | 2020-05-12 | Commvault Systems, Inc. | Volume or virtual machine level backup and generating placeholders for virtual machine files |
US20200159627A1 (en) | 2010-06-04 | 2020-05-21 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
US10678758B2 (en) | 2016-11-21 | 2020-06-09 | Commvault Systems, Inc. | Cross-platform virtual machine data and memory backup and replication |
US20200183802A1 (en) | 2018-12-06 | 2020-06-11 | Commvault Systems, Inc. | Assigning backup resources based on failover of partnered data storage servers in a data storage management system |
US10732885B2 (en) | 2018-02-14 | 2020-08-04 | Commvault Systems, Inc. | Block-level live browsing and private writable snapshots using an ISCSI server |
US10747630B2 (en) | 2016-09-30 | 2020-08-18 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US20200265024A1 (en) | 2013-01-11 | 2020-08-20 | Commvault Systems, Inc. | Systems and methods for rule-based virtual machine data protection |
US10776209B2 (en) | 2014-11-10 | 2020-09-15 | Commvault Systems, Inc. | Cross-platform virtual machine backup and replication |
US10853195B2 (en) | 2017-03-31 | 2020-12-01 | Commvault Systems, Inc. | Granular restoration of virtual machine application data |
US10877928B2 (en) | 2018-03-07 | 2020-12-29 | Commvault Systems, Inc. | Using utilities injected into cloud-based virtual machines for speeding up virtual machine backup operations |
Family Cites Families (623)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5504873A (en) | 1989-11-01 | 1996-04-02 | E-Systems, Inc. | Mass data storage and retrieval system |
US5544347A (en) | 1990-09-24 | 1996-08-06 | Emc Corporation | Data storage system controlled remote data mirroring with respectively maintained data indices |
US5333315A (en) | 1991-06-27 | 1994-07-26 | Digital Equipment Corporation | System of device independent file directories using a tag between the directories and file descriptors that migrate with the files |
US5481694A (en) | 1991-09-26 | 1996-01-02 | Hewlett-Packard Company | High performance multiple-unit electronic data storage system with checkpoint logs for rapid failure recovery |
US5263154A (en) | 1992-04-20 | 1993-11-16 | International Business Machines Corporation | Method and system for incremental time zero backup copying of data |
US5530855A (en) | 1992-10-13 | 1996-06-25 | International Business Machines Corporation | Replicating a database by the sequential application of hierarchically sorted log records |
SE500656C2 (en) | 1992-12-08 | 1994-08-01 | Ellemtel Utvecklings Ab | Backup capture system in a distributed database |
JP2551312B2 (en) | 1992-12-28 | 1996-11-06 | NEC Corporation | Job step parallel execution method |
AU6092894A (en) | 1993-01-21 | 1994-08-15 | Apple Computer, Inc. | Apparatus and method for backing up data from networked computer storage devices |
DE69434311D1 (en) | 1993-02-01 | 2005-04-28 | Sun Microsystems Inc | ARCHIVING FILES SYSTEM FOR DATA PROVIDERS IN A DISTRIBUTED NETWORK ENVIRONMENT |
US5664204A (en) | 1993-03-22 | 1997-09-02 | Lichen Wang | Apparatus and method for supplying power and wake-up signal using host port's signal lines of opposite polarities |
US5544359A (en) | 1993-03-30 | 1996-08-06 | Fujitsu Limited | Apparatus and method for classifying and acquiring log data by updating and storing log data |
JPH0721135A (en) | 1993-07-02 | 1995-01-24 | Fujitsu Ltd | Data processing system with redundant monitoring function |
US5544345A (en) | 1993-11-08 | 1996-08-06 | International Business Machines Corporation | Coherence controls for store-multiple shared data coordinated by cache directory entries in a shared electronic storage |
US5495607A (en) | 1993-11-15 | 1996-02-27 | Conner Peripherals, Inc. | Network management system having virtual catalog overview of files distributively stored across network domain |
US5491810A (en) | 1994-03-01 | 1996-02-13 | International Business Machines Corporation | Method and system for automated data storage system space allocation utilizing prioritized data set parameters |
US5673381A (en) | 1994-05-27 | 1997-09-30 | Cheyenne Software International Sales Corp. | System and parallel streaming and data stripping to back-up a network |
US5638509A (en) | 1994-06-10 | 1997-06-10 | Exabyte Corporation | Data storage and protection system |
US5574906A (en) | 1994-10-24 | 1996-11-12 | International Business Machines Corporation | System and method for reducing storage requirement in backup subsystems utilizing segmented compression and differencing |
US5930831A (en) | 1995-02-23 | 1999-07-27 | Powerquest Corporation | Partition manipulation architecture supporting multiple file systems |
US5559957A (en) | 1995-05-31 | 1996-09-24 | Lucent Technologies Inc. | File system for a data storage device having a power fail recovery mechanism for write/replace operations |
US5699361A (en) | 1995-07-18 | 1997-12-16 | Industrial Technology Research Institute | Multimedia channel formulation mechanism |
US5813009A (en) | 1995-07-28 | 1998-09-22 | Univirtual Corp. | Computer based records management system method |
US5619644A (en) | 1995-09-18 | 1997-04-08 | International Business Machines Corporation | Software directed microcode state save for distributed storage controller |
US5819020A (en) | 1995-10-16 | 1998-10-06 | Network Specialists, Inc. | Real time backup system |
US5729743A (en) | 1995-11-17 | 1998-03-17 | Deltatech Research, Inc. | Computer apparatus and method for merging system deltas |
US5793867A (en) | 1995-12-19 | 1998-08-11 | Pitney Bowes Inc. | System and method for disaster recovery in an open metering system |
US5761677A (en) | 1996-01-03 | 1998-06-02 | Sun Microsystems, Inc. | Computer system method and apparatus providing for various versions of a file without requiring data copy or log operations |
US5889935A (en) | 1996-05-28 | 1999-03-30 | Emc Corporation | Disaster control features for remote data mirroring |
US5812398A (en) | 1996-06-10 | 1998-09-22 | Sun Microsystems, Inc. | Method and system for escrowed backup of hotelled world wide web sites |
US6006227A (en) | 1996-06-28 | 1999-12-21 | Yale University | Document stream operating system |
US5835906A (en) | 1996-07-01 | 1998-11-10 | Sun Microsystems, Inc. | Methods and apparatus for sharing stored data objects in a computer system |
US5758359A (en) | 1996-10-24 | 1998-05-26 | Digital Equipment Corporation | Method and apparatus for performing retroactive backups in a computer system |
US5875478A (en) | 1996-12-03 | 1999-02-23 | Emc Corporation | Computer backup using a file system, network, disk, tape and remote archiving repository media system |
US6131095A (en) | 1996-12-11 | 2000-10-10 | Hewlett-Packard Company | Method of accessing a target entity over a communications network |
AU5929398A (en) | 1997-01-23 | 1998-08-18 | Overland Data, Inc. | Virtual media library |
TW376542B (en) | 1997-03-04 | 1999-12-11 | Canon Kk | Exposure unit, exposure system and device manufacturing method |
US6658526B2 (en) | 1997-03-12 | 2003-12-02 | Storage Technology Corporation | Network attached virtual data storage subsystem |
US6286011B1 (en) | 1997-04-30 | 2001-09-04 | Bellsouth Corporation | System and method for recording transactions using a chronological list superimposed on an indexed list |
US5924102A (en) | 1997-05-07 | 1999-07-13 | International Business Machines Corporation | System and method for managing critical files |
US6094416A (en) | 1997-05-09 | 2000-07-25 | I/O Control Corporation | Multi-tier architecture for control network |
US6272631B1 (en) | 1997-06-30 | 2001-08-07 | Microsoft Corporation | Protected storage of core data secrets |
US5887134A (en) | 1997-06-30 | 1999-03-23 | Sun Microsystems | System and method for preserving message order while employing both programmed I/O and DMA operations |
US6073220A (en) | 1997-09-03 | 2000-06-06 | Duocor, Inc. | Apparatus and method for providing a transparent disk drive back-up |
US5950205A (en) | 1997-09-25 | 1999-09-07 | Cisco Technology, Inc. | Data transmission over the internet using a cache memory file system |
US6275953B1 (en) | 1997-09-26 | 2001-08-14 | Emc Corporation | Recovery from failure of a data processor in a network server |
US6199074B1 (en) | 1997-10-09 | 2001-03-06 | International Business Machines Corporation | Database backup system ensuring consistency between primary and mirrored backup database copies despite backup interruption |
US6052735A (en) | 1997-10-24 | 2000-04-18 | Microsoft Corporation | Electronic mail object synchronization between a desktop computer and mobile device |
US6021415A (en) | 1997-10-29 | 2000-02-01 | International Business Machines Corporation | Storage management system with file aggregation and space reclamation within aggregated files |
US7209972B1 (en) | 1997-10-30 | 2007-04-24 | Commvault Systems, Inc. | High speed data transfer mechanism |
US7581077B2 (en) | 1997-10-30 | 2009-08-25 | Commvault Systems, Inc. | Method and system for transferring data in a storage operation |
US6101585A (en) | 1997-11-04 | 2000-08-08 | Adaptec, Inc. | Mechanism for incremental backup of on-line files |
JPH11143754A (en) | 1997-11-05 | 1999-05-28 | Hitachi Ltd | Version information / configuration information display method and apparatus, and computer-readable recording medium recording version information / configuration information display program |
US6131190A (en) | 1997-12-18 | 2000-10-10 | Sidwell; Leland P. | System for modifying JCL parameters to optimize data storage allocations |
US6076148A (en) | 1997-12-26 | 2000-06-13 | Emc Corporation | Mass storage subsystem and backup arrangement for digital data processing system which permits information to be backed up while host computer(s) continue(s) operating in connection with information stored on mass storage subsystem |
US6078932A (en) | 1998-01-13 | 2000-06-20 | International Business Machines Corporation | Point-in-time backup utilizing multiple copy technologies |
US6154787A (en) | 1998-01-21 | 2000-11-28 | Unisys Corporation | Grouping shared resources into one or more pools and automatically re-assigning shared resources from where they are not currently needed to where they are needed |
US6260069B1 (en) | 1998-02-10 | 2001-07-10 | International Business Machines Corporation | Direct data retrieval in a distributed computing system |
DE69816415T2 (en) | 1998-03-02 | 2004-04-15 | Hewlett-Packard Co. (organized under the laws of the State of Delaware), Palo Alto | Data Backup System |
US6026414A (en) | 1998-03-05 | 2000-02-15 | International Business Machines Corporation | System including a proxy client to backup files in a distributed computing environment |
US7277941B2 (en) | 1998-03-11 | 2007-10-02 | Commvault Systems, Inc. | System and method for providing encryption in a storage network by storing a secured encryption key with encrypted archive data in an archive storage device |
US6161111A (en) | 1998-03-31 | 2000-12-12 | Emc Corporation | System and method for performing file-handling operations in a digital data processing system using an operating system-independent file map |
US6167402A (en) | 1998-04-27 | 2000-12-26 | Sun Microsystems, Inc. | High performance message store |
US6397242B1 (en) | 1998-05-15 | 2002-05-28 | Vmware, Inc. | Virtualization system including a virtual machine monitor for a computer with a segmented architecture |
US6421711B1 (en) | 1998-06-29 | 2002-07-16 | Emc Corporation | Virtual ports for data transferring of a data storage system |
US7756986B2 (en) | 1998-06-30 | 2010-07-13 | Emc Corporation | Method and apparatus for providing data management for a storage system coupled to a network |
US6269431B1 (en) | 1998-08-13 | 2001-07-31 | Emc Corporation | Virtual storage and block level direct access of secondary storage for recovery of backup data |
US6516327B1 (en) | 1998-12-24 | 2003-02-04 | International Business Machines Corporation | System and method for synchronizing data in multiple databases |
US6487561B1 (en) | 1998-12-31 | 2002-11-26 | Emc Corporation | Apparatus and methods for copying, backing up, and restoring data using a backup segment size larger than the storage block size |
US6212512B1 (en) | 1999-01-06 | 2001-04-03 | Hewlett-Packard Company | Integration of a database into file management software for protecting, tracking and retrieving data |
US7428212B2 (en) | 1999-01-15 | 2008-09-23 | Cisco Technology, Inc. | Best effort technique for virtual path restoration |
US6324581B1 (en) | 1999-03-03 | 2001-11-27 | Emc Corporation | File server system using file system storage, data movers, and an exchange of meta data among data movers for file locking and direct access to shared file systems |
JP3763992B2 (en) | 1999-03-30 | 2006-04-05 | Fujitsu Ltd. | Data processing apparatus and recording medium |
US6389432B1 (en) | 1999-04-05 | 2002-05-14 | Auspex Systems, Inc. | Intelligent virtual volume access |
DE60043873D1 (en) | 1999-06-01 | 2010-04-08 | Hitachi Ltd | Method for data backup |
US6519679B2 (en) | 1999-06-11 | 2003-02-11 | Dell Usa, L.P. | Policy based storage configuration |
US6538669B1 (en) | 1999-07-15 | 2003-03-25 | Dell Products L.P. | Graphical user interface for configuration of a storage system |
US6820214B1 (en) | 1999-07-26 | 2004-11-16 | Microsoft Corporation | Automated system recovery via backup and restoration of system state |
US7430670B1 (en) | 1999-07-29 | 2008-09-30 | Intertrust Technologies Corp. | Software self-defense systems and methods |
US6415323B1 (en) | 1999-09-03 | 2002-07-02 | Fastforward Networks | Proximity-based redirection system for robust and scalable service-node location in an internetwork |
US6343324B1 (en) | 1999-09-13 | 2002-01-29 | International Business Machines Corporation | Method and system for controlling access share storage devices in a network environment by configuring host-to-volume mapping data structures in the controller memory for granting and denying access to the devices |
US6564228B1 (en) | 2000-01-14 | 2003-05-13 | Sun Microsystems, Inc. | Method of enabling heterogeneous platforms to utilize a universal file system in a storage area network |
US6581076B1 (en) | 2000-03-29 | 2003-06-17 | International Business Machines Corporation | Method and system for efficient file archiving and dearchiving in a DMD system |
JP2001356945A (en) | 2000-04-12 | 2001-12-26 | Annex Systems Incorporated | Data backup recovery system |
EP1209569A1 (en) | 2000-04-12 | 2002-05-29 | Annex Systems Incorporated | Data backup/recovery system |
US6892221B2 (en) | 2000-05-19 | 2005-05-10 | Centerbeam | Data backup |
US6356801B1 (en) | 2000-05-19 | 2002-03-12 | International Business Machines Corporation | High availability work queuing in an automated data storage library |
KR100390853B1 (en) | 2000-06-07 | 2003-07-10 | Sang Kyun Cha | A Logging Method and System for Highly Parallel Recovery Operation in Main-Memory Transaction Processing Systems |
US6330642B1 (en) | 2000-06-29 | 2001-12-11 | Bull Hn Information Systems Inc. | Three interconnected raid disk controller data processing system architecture |
CA2414869C (en) | 2000-07-05 | 2010-08-17 | Ernst & Young Llp | Method and apparatus for providing computer services |
US6704885B1 (en) | 2000-07-28 | 2004-03-09 | Oracle International Corporation | Performing data backups with a stochastic scheduler in a distributed computing environment |
US7512894B1 (en) | 2000-09-11 | 2009-03-31 | International Business Machines Corporation | Pictorial-based user interface management of computer hardware components |
US7822967B2 (en) | 2000-09-27 | 2010-10-26 | Huron Ip Llc | Apparatus, architecture, and method for integrated modular server system providing dynamically power-managed and work-load managed network devices |
GB0025226D0 (en) | 2000-10-14 | 2000-11-29 | Ibm | Data storage system and method of storing data |
JP2002215597A (en) | 2001-01-15 | 2002-08-02 | Mitsubishi Electric Corp | Multiprocessor device |
US7076270B2 (en) | 2001-02-28 | 2006-07-11 | Dell Products L.P. | Docking station for wireless communication device |
US7756835B2 (en) | 2001-03-23 | 2010-07-13 | Bea Systems, Inc. | Database and operating system independent copying/archiving of a web base application |
US6912630B1 (en) | 2001-03-30 | 2005-06-28 | Emc Corporation | Method and apparatus for computing file storage elements for backup and restore |
EP1388266B1 (en) | 2001-05-14 | 2013-01-30 | TELEFONAKTIEBOLAGET LM ERICSSON (publ) | Method for protecting against overload in a mobile communication network |
JP4632574B2 (en) | 2001-05-25 | 2011-02-16 | Hitachi, Ltd. | Storage device, file data backup method, and file data copy method |
KR100625595B1 (en) | 2001-05-28 | 2006-09-20 | Electronics and Telecommunications Research Institute | Parallel Logging Method and Transaction Log Processing System of Transaction Processing System |
US20020194511A1 (en) | 2001-06-18 | 2002-12-19 | Swoboda Gary L. | Apparatus and method for central processing unit power measurement in a digital signal processor |
US7249150B1 (en) | 2001-07-03 | 2007-07-24 | Network Appliance, Inc. | System and method for parallelized replay of an NVRAM log in a storage appliance |
US7082464B2 (en) | 2001-07-06 | 2006-07-25 | Juniper Networks, Inc. | Network management system |
US7016299B2 (en) | 2001-07-27 | 2006-03-21 | International Business Machines Corporation | Network node failover using path rerouting by manager component or switch port remapping |
US6721851B2 (en) | 2001-08-07 | 2004-04-13 | Veritas Operating Corporation | System and method for preventing sector slipping in a storage area network |
US6922791B2 (en) | 2001-08-09 | 2005-07-26 | Dell Products L.P. | Failover system and method for cluster environment |
US6845465B2 (en) | 2001-09-17 | 2005-01-18 | Sun Microsystems, Inc. | Method and system for leveraging spares in a data storage system including a plurality of disk drives |
US6880101B2 (en) | 2001-10-12 | 2005-04-12 | Dell Products L.P. | System and method for providing automatic data restoration after a storage device failure |
JP4113352B2 (en) | 2001-10-31 | 2008-07-09 | Hitachi, Ltd. | Storage resource operation management method in storage network |
CN1591406A (en) | 2001-11-09 | 2005-03-09 | Wuxi Yongzhong Technology Co., Ltd. | Integrated multi-purpose data processing system |
US6990603B2 (en) | 2002-01-02 | 2006-01-24 | Exanet Inc. | Method and apparatus for securing volatile data in power failure in systems having redundancy |
JP4434543B2 (en) | 2002-01-10 | 2010-03-17 | Hitachi, Ltd. | Distributed storage system, storage device, and data copying method |
US20030149750A1 (en) | 2002-02-07 | 2003-08-07 | Franzenburg Alan M. | Distributed storage array |
US7178050B2 (en) | 2002-02-22 | 2007-02-13 | Bea Systems, Inc. | System for highly available transaction recovery for transaction processing systems |
US7043507B2 (en) | 2002-02-28 | 2006-05-09 | Veritas Operating Corporation | System and method for validated indirect data backup using operating system I/O Operations |
US7165154B2 (en) | 2002-03-18 | 2007-01-16 | Net Integration Technologies Inc. | System and method for data backup |
US7475098B2 (en) | 2002-03-19 | 2009-01-06 | Network Appliance, Inc. | System and method for managing a plurality of snapshots |
JP4173673B2 (en) | 2002-03-20 | 2008-10-29 | Hitachi, Ltd. | File backup method and storage device |
US6795904B1 (en) | 2002-03-28 | 2004-09-21 | Hewlett-Packard Development Company, L.P. | System and method for improving performance of a data backup operation |
US20050262033A1 (en) | 2002-03-29 | 2005-11-24 | Kazuhiko Yamashita | Data recording apparatus, data recording method, program for implementing the method, and program recording medium |
US7191357B2 (en) | 2002-03-29 | 2007-03-13 | Panasas, Inc. | Hybrid quorum/primary-backup fault-tolerance model |
US7577722B1 (en) | 2002-04-05 | 2009-08-18 | Vmware, Inc. | Provisioning of computer systems using virtual machines |
JP2003316522A (en) | 2002-04-26 | 2003-11-07 | Hitachi Ltd | Computer system and computer system control method |
US8291407B2 (en) | 2002-06-12 | 2012-10-16 | Symantec Corporation | Systems and methods for patching computer programs |
US6829688B2 (en) | 2002-06-20 | 2004-12-07 | International Business Machines Corporation | File system backup in a logical volume management data storage environment |
US7093230B2 (en) | 2002-07-24 | 2006-08-15 | Sun Microsystems, Inc. | Lock management thread pools for distributed data systems |
EP1387269A1 (en) | 2002-08-02 | 2004-02-04 | Hewlett Packard Company, a Delaware Corporation | Backup system and method of generating a checkpoint for a database |
US7107385B2 (en) | 2002-08-09 | 2006-09-12 | Network Appliance, Inc. | Storage virtualization by layering virtual disk objects on a file system |
US7873700B2 (en) | 2002-08-09 | 2011-01-18 | Netapp, Inc. | Multi-protocol storage appliance that provides integrated support for file and block access protocols |
JP4166056B2 (en) | 2002-08-16 | 2008-10-15 | Fujitsu Ltd. | Database operation history management device, database operation history management method, and database operation history management program |
EP1550053A4 (en) | 2002-09-18 | 2009-03-25 | Netezza Corp | Disk mirror architecture for database appliance |
US7707184B1 (en) | 2002-10-09 | 2010-04-27 | Netapp, Inc. | System and method for snapshot full backup and hard recovery of a database |
WO2004046971A1 (en) | 2002-11-14 | 2004-06-03 | Isilon Systems, Inc. | Systems and methods for restriping files in a distributed file system |
GB2411030B (en) | 2002-11-20 | 2006-03-22 | Filesx Ltd | Fast backup storage and fast recovery of data (FBSRD) |
US7219162B2 (en) | 2002-12-02 | 2007-05-15 | International Business Machines Corporation | System and method for accessing content of a web page |
US7484208B1 (en) | 2002-12-12 | 2009-01-27 | Michael Nelson | Virtual machine migration |
US7293201B2 (en) | 2003-01-17 | 2007-11-06 | Microsoft Corporation | System and method for active diagnosis and self healing of software systems |
JP2004302751A (en) | 2003-03-31 | 2004-10-28 | Hitachi Ltd | Performance management method for computer system and computer system for managing performance of storage device |
WO2004090675A2 (en) | 2003-04-03 | 2004-10-21 | Commvault Systems, Inc. | System and method for performing storage operations through a firewall |
WO2004090676A2 (en) | 2003-04-03 | 2004-10-21 | Commvault Systems, Inc. | Remote disaster data recovery system and method |
US7320083B2 (en) | 2003-04-23 | 2008-01-15 | Dot Hill Systems Corporation | Apparatus and method for storage controller to deterministically kill one of redundant servers integrated within the storage controller chassis |
US7181439B1 (en) | 2003-04-25 | 2007-02-20 | Network Appliance, Inc. | System and method for transparently accessing a virtual disk using a file-based protocol |
US7178059B2 (en) | 2003-05-07 | 2007-02-13 | Egenera, Inc. | Disaster recovery for processing resources using configurable deployment platform |
US20040230899A1 (en) | 2003-05-13 | 2004-11-18 | Pagnano Marco Aurelio De Oliveira | Arrangements, storage mediums and methods for associating an extensible stylesheet language device description file with a non-proprietary language device description file |
US7251745B2 (en) | 2003-06-11 | 2007-07-31 | Availigent, Inc. | Transparent TCP connection failover |
US7092976B2 (en) | 2003-06-24 | 2006-08-15 | International Business Machines Corporation | Parallel high speed backup for a storage area network (SAN) file system |
US7143121B2 (en) | 2003-06-27 | 2006-11-28 | Hewlett-Packard Development Company, L.P. | Method and system for archiving and restoring data from an operations center in a utility data center |
US8095511B2 (en) | 2003-06-30 | 2012-01-10 | Microsoft Corporation | Database data recovery system and method |
US7330859B2 (en) | 2003-09-10 | 2008-02-12 | International Business Machines Corporation | Database backup system using data and user-defined routines replicators for maintaining a copy of database on a secondary server |
JP4404246B2 (en) | 2003-09-12 | 2010-01-27 | Hitachi, Ltd. | Backup system and method based on data characteristics |
US7234073B1 (en) | 2003-09-30 | 2007-06-19 | Emc Corporation | System and methods for failover management of manageable entity agents |
US7177967B2 (en) | 2003-09-30 | 2007-02-13 | Intel Corporation | Chipset support for managing hardware interrupts in a virtual machine system |
US7613748B2 (en) | 2003-11-13 | 2009-11-03 | Commvault Systems, Inc. | Stored data reverification management system and method |
US7188273B2 (en) | 2003-11-24 | 2007-03-06 | Tsx Inc. | System and method for failover |
TWI248579B (en) | 2003-12-04 | 2006-02-01 | Wistron Corp | Method and system for restoring backup data |
US7584266B2 (en) | 2003-12-16 | 2009-09-01 | International Business Machines Corporation | Autonomous storage for backup, restore, and file access |
US20050149940A1 (en) | 2003-12-31 | 2005-07-07 | Sychron Inc. | System Providing Methodology for Policy-Based Resource Allocation |
US20050198303A1 (en) | 2004-01-02 | 2005-09-08 | Robert Knauerhase | Dynamic virtual machine service provider allocation |
US7596721B1 (en) | 2004-01-09 | 2009-09-29 | Maxtor Corporation | Methods and structure for patching embedded firmware |
US7246256B2 (en) | 2004-01-20 | 2007-07-17 | International Business Machines Corporation | Managing failover of J2EE compliant middleware in a high availability system |
US7168001B2 (en) | 2004-02-06 | 2007-01-23 | Hewlett-Packard Development Company, L.P. | Transaction processing apparatus and method |
US7418491B2 (en) | 2004-02-19 | 2008-08-26 | International Business Machines Corporation | Architecture for a centralized management system |
US7386744B2 (en) | 2004-03-15 | 2008-06-10 | Hewlett-Packard Development Company, L.P. | Rack equipment power pricing plan control system and method |
US7318134B1 (en) | 2004-03-16 | 2008-01-08 | Emc Corporation | Continuous data backup using distributed journaling |
JP5022030B2 (en) | 2004-03-19 | 2012-09-12 | International Business Machines Corporation | Computer system, server constituting the same, job execution control method thereof, and program |
US7277905B2 (en) | 2004-03-31 | 2007-10-02 | Microsoft Corporation | System and method for a consistency check of a database backup |
US8336040B2 (en) | 2004-04-15 | 2012-12-18 | Raytheon Company | System and method for topology-aware job scheduling and backfilling in an HPC environment |
US7424482B2 (en) | 2004-04-26 | 2008-09-09 | Storwize Inc. | Method and system for compression of data for block mode access storage |
US8266406B2 (en) | 2004-04-30 | 2012-09-11 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
US7502820B2 (en) | 2004-05-03 | 2009-03-10 | Microsoft Corporation | System and method for optimized property retrieval of stored objects |
US8108429B2 (en) | 2004-05-07 | 2012-01-31 | Quest Software, Inc. | System for moving real-time data events across a plurality of devices in a network for simultaneous data protection, replication, and access services |
JP2005332067A (en) | 2004-05-18 | 2005-12-02 | Hitachi Ltd | Backup acquisition method and disk array device |
US8949395B2 (en) | 2004-06-01 | 2015-02-03 | Inmage Systems, Inc. | Systems and methods of event driven recovery management |
US7676502B2 (en) | 2006-05-22 | 2010-03-09 | Inmage Systems, Inc. | Recovery point data view shift through a direction-agnostic roll algorithm |
US8055745B2 (en) | 2004-06-01 | 2011-11-08 | Inmage Systems, Inc. | Methods and apparatus for accessing data from a primary data storage system for secondary storage |
US7103432B2 (en) | 2004-06-02 | 2006-09-05 | Research In Motion Limited | Auto-configuration of hardware on a portable computing device |
US20050278397A1 (en) | 2004-06-02 | 2005-12-15 | Clark Kevin J | Method and apparatus for automated redundant data storage of data files maintained in diverse file infrastructures |
US7383462B2 (en) | 2004-07-02 | 2008-06-03 | Hitachi, Ltd. | Method and apparatus for encrypted remote copy for secure data backup and restoration |
EP1619898A1 (en) | 2004-07-19 | 2006-01-25 | Sony Deutschland GmbH | Method for operating in a home network |
JP4484618B2 (en) | 2004-07-30 | 2010-06-16 | Hitachi, Ltd. | Disaster recovery system, program, and data replication method |
US8224784B2 (en) | 2004-08-13 | 2012-07-17 | Microsoft Corporation | Combined computer disaster recovery and migration tool for effective disaster recovery as well as the backup and migration of user- and system-specific information |
US7650356B2 (en) | 2004-08-24 | 2010-01-19 | Microsoft Corporation | Generating an optimized restore plan |
US7330861B2 (en) | 2004-09-10 | 2008-02-12 | Hitachi, Ltd. | Remote copying system and method of controlling remote copying |
US20060058994A1 (en) | 2004-09-16 | 2006-03-16 | Nec Laboratories America, Inc. | Power estimation through power emulation |
JP4489550B2 (en) | 2004-09-30 | 2010-06-23 | Hitachi, Ltd. | Backup data creation management method |
US7536525B2 (en) | 2004-11-09 | 2009-05-19 | Dell Products L.P. | Virtual machine hot cloning including freezing and unfreezing memory in a distributed network |
US8775823B2 (en) | 2006-12-29 | 2014-07-08 | Commvault Systems, Inc. | System and method for encrypting secondary copies of data |
US7778984B2 (en) | 2004-11-19 | 2010-08-17 | Microsoft Corporation | System and method for a distributed object store |
CA2489619A1 (en) | 2004-12-07 | 2006-06-07 | Ibm Canada Limited - Ibm Canada Limitee | Method system and program product for managing a file system that includes an archive |
US7437388B1 (en) | 2004-12-21 | 2008-10-14 | Symantec Corporation | Protecting data for distributed applications using cooperative backup agents |
US7600125B1 (en) | 2004-12-23 | 2009-10-06 | Symantec Corporation | Hash-based data block processing with intermittently-connected systems |
US7475282B2 (en) | 2004-12-28 | 2009-01-06 | Acronis Inc. | System and method for rapid restoration of server from back up |
US7721138B1 (en) | 2004-12-28 | 2010-05-18 | Acronis Inc. | System and method for on-the-fly migration of server from backup |
US20060155594A1 (en) | 2005-01-13 | 2006-07-13 | Jess Almeida | Adaptive step-by-step process with guided conversation logs for improving the quality of transaction data |
US8918366B2 (en) | 2005-02-07 | 2014-12-23 | Mimosa Systems, Inc. | Synthetic full copies of data and dynamic bulk-to-brick transformation |
US8799206B2 (en) | 2005-02-07 | 2014-08-05 | Mimosa Systems, Inc. | Dynamic bulk-to-brick transformation of data |
US7506010B2 (en) | 2005-02-08 | 2009-03-17 | Pro Softnet Corporation | Storing and retrieving computer data files using an encrypted network drive file system |
US20060184935A1 (en) | 2005-02-11 | 2006-08-17 | Timothy Abels | System and method using virtual machines for decoupling software from users and services |
US7861234B1 (en) | 2005-02-23 | 2010-12-28 | Oracle America, Inc. | System and method for binary translation to improve parameter passing |
US7730486B2 (en) | 2005-02-28 | 2010-06-01 | Hewlett-Packard Development Company, L.P. | System and method for migrating virtual machines on cluster systems |
US7899788B2 (en) | 2005-04-01 | 2011-03-01 | Microsoft Corporation | Using a data protection server to backup and restore data on virtual servers |
US20060230136A1 (en) | 2005-04-12 | 2006-10-12 | Kenneth Ma | Intelligent auto-archiving |
US7480780B2 (en) | 2005-04-19 | 2009-01-20 | Hitachi, Ltd. | Highly available external storage system |
US7725893B2 (en) | 2005-04-28 | 2010-05-25 | Sap Aktiengesellschaft | Platform independent replication |
US8112605B2 (en) | 2005-05-02 | 2012-02-07 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
JP5420242B2 (en) | 2005-06-24 | 2014-02-19 | Syncsort Incorporated | System and method for high performance enterprise data protection |
US9418040B2 (en) | 2005-07-07 | 2016-08-16 | Sciencelogic, Inc. | Dynamically deployable self configuring distributed network management system |
US20070027999A1 (en) | 2005-07-29 | 2007-02-01 | Allen James P | Method for coordinated error tracking and reporting in distributed storage systems |
WO2007021836A2 (en) | 2005-08-15 | 2007-02-22 | Toutvirtual Inc. | Virtual systems management |
US20070043705A1 (en) | 2005-08-18 | 2007-02-22 | Emc Corporation | Searchable backups |
US7519859B2 (en) | 2005-08-30 | 2009-04-14 | International Business Machines Corporation | Fault recovery for transaction server |
JP2007065984A (en) | 2005-08-31 | 2007-03-15 | Hitachi Ltd | Storage control device and separation-type storage device |
TWI279726B (en) | 2005-09-28 | 2007-04-21 | Lite On Technology Corp | Method and computer system for securing backup data from damage by virus and hacker program |
US9774684B2 (en) | 2005-09-30 | 2017-09-26 | International Business Machines Corporation | Storing data in a dispersed storage network |
US7844251B2 (en) | 2005-10-12 | 2010-11-30 | Qualcomm Incorporated | Peer-to-peer distributed backup system for mobile devices |
US8069271B2 (en) | 2005-10-12 | 2011-11-29 | Storage Appliance Corporation | Systems and methods for converting a media player into a backup device |
JP4668763B2 (en) | 2005-10-20 | 2011-04-13 | Hitachi, Ltd. | Storage device restore method and storage device |
CN101346883A (en) | 2005-10-26 | 2009-01-14 | Storwize Ltd. | Method and system for compression of data for block mode access storage |
US7877517B2 (en) | 2005-11-09 | 2011-01-25 | International Business Machines Corporation | Determining whether to compress data transmitted over a network |
US8930496B2 (en) | 2005-12-19 | 2015-01-06 | Commvault Systems, Inc. | Systems and methods of unified reconstruction in storage systems |
US7962709B2 (en) | 2005-12-19 | 2011-06-14 | Commvault Systems, Inc. | Network redirector systems and methods for performing data replication |
JPWO2007077600A1 (en) | 2005-12-28 | 2009-06-04 | Fujitsu Ltd. | Operation management program, operation management method, and operation management apparatus |
US7512595B1 (en) | 2006-01-03 | 2009-03-31 | Emc Corporation | Methods and systems for utilizing configuration information |
US7552279B1 (en) | 2006-01-03 | 2009-06-23 | Emc Corporation | System and method for multiple virtual computing environments in data storage environment |
US7966513B2 (en) | 2006-02-03 | 2011-06-21 | Emc Corporation | Automatic classification of backup clients |
US7822717B2 (en) | 2006-02-07 | 2010-10-26 | Emc Corporation | Point-in-time database restore |
US7546484B2 (en) | 2006-02-08 | 2009-06-09 | Microsoft Corporation | Managing backup solutions with light-weight storage nodes |
US20070208918A1 (en) | 2006-03-01 | 2007-09-06 | Kenneth Harbin | Method and apparatus for providing virtual machine backup |
US8001342B2 (en) | 2006-03-29 | 2011-08-16 | International Business Machines Corporation | Method for storing and restoring persistent memory content and virtual machine state information |
US8843783B2 (en) | 2006-03-31 | 2014-09-23 | Emc Corporation | Failover to backup site in connection with triangular asynchronous replication |
US9547485B2 (en) | 2006-03-31 | 2017-01-17 | Prowess Consulting, Llc | System and method for deploying a virtual machine |
US9397944B1 (en) | 2006-03-31 | 2016-07-19 | Teradici Corporation | Apparatus and method for dynamic communication scheduling of virtualized device traffic based on changing available bandwidth |
US20070250365A1 (en) | 2006-04-21 | 2007-10-25 | Infosys Technologies Ltd. | Grid computing systems and methods thereof |
US7546432B2 (en) | 2006-05-09 | 2009-06-09 | Emc Corporation | Pass-through write policies of files in distributed storage management |
US7780079B2 (en) | 2006-05-22 | 2010-08-24 | Seagate Technology Llc | Data storage device with built-in data protection for ultra sensitive applications |
US8209434B2 (en) | 2006-06-22 | 2012-06-26 | Sony Ericsson Mobile Communications Ab | Continued transfer or streaming of a data file after loss of a local connection |
US7451286B2 (en) | 2006-07-18 | 2008-11-11 | Network Appliance, Inc. | Removable portable data backup for a network storage system |
US7434096B2 (en) | 2006-08-11 | 2008-10-07 | Chicago Mercantile Exchange | Match server for a financial exchange having fault tolerant operation |
US8041985B2 (en) | 2006-08-11 | 2011-10-18 | Chicago Mercantile Exchange, Inc. | Match server for a financial exchange having fault tolerant operation |
US8121977B2 (en) | 2006-08-30 | 2012-02-21 | Inmage Systems, Inc. | Ensuring data persistence and consistency in enterprise storage backup systems |
JP4236677B2 (en) | 2006-09-20 | 2009-03-11 | Hitachi, Ltd. | Recovery method using CDP |
US7640406B1 (en) | 2006-10-03 | 2009-12-29 | Emc Corporation | Detecting and managing orphan files between primary and secondary data stores for content addressed storage |
US7685177B1 (en) | 2006-10-03 | 2010-03-23 | Emc Corporation | Detecting and managing orphan files between primary and secondary data stores |
US8234641B2 (en) | 2006-10-17 | 2012-07-31 | ManageIQ, Inc. | Compliance-based adaptations in managed virtual systems |
CN101529419B (en) | 2006-10-17 | 2013-05-01 | Commvault Systems, Inc. | Method and system for offline indexing of content and classifying stored data |
US8655914B2 (en) | 2006-10-17 | 2014-02-18 | Commvault Systems, Inc. | System and method for storage operation access security |
US7702782B1 (en) | 2006-10-18 | 2010-04-20 | Emc Corporation | Using watermarks to indicate alerts in a storage area network management console |
US8185893B2 (en) | 2006-10-27 | 2012-05-22 | Hewlett-Packard Development Company, L.P. | Starting up at least one virtual machine in a physical machine by a load balancer |
CA2705379C (en) | 2006-12-04 | 2016-08-30 | Commvault Systems, Inc. | Systems and methods for creating copies of data, such as archive copies |
US8677091B2 (en) | 2006-12-18 | 2014-03-18 | Commvault Systems, Inc. | Writing data and storage system specific metadata to network attached storage device |
GB2444952A (en) | 2006-12-19 | 2008-06-25 | Pcme Ltd | Improvements in methods and apparatus for monitoring particles flowing in a stack |
US20080228771A1 (en) | 2006-12-22 | 2008-09-18 | Commvault Systems, Inc. | Method and system for searching stored data |
US7840537B2 (en) | 2006-12-22 | 2010-11-23 | Commvault Systems, Inc. | System and method for storing redundant information |
US20080162840A1 (en) | 2007-01-03 | 2008-07-03 | Oliver Augenstein | Methods and infrastructure for performing repetitive data protection and a corresponding restore of data |
JP4295326B2 (en) | 2007-01-10 | 2009-07-15 | Hitachi, Ltd. | Computer system |
US7797281B1 (en) | 2007-01-12 | 2010-09-14 | Symantec Operating Corporation | Granular restore of data objects from a directory service |
US7594138B2 (en) | 2007-01-31 | 2009-09-22 | International Business Machines Corporation | System and method of error recovery for backup applications |
US8554981B2 (en) | 2007-02-02 | 2013-10-08 | Vmware, Inc. | High availability virtual machine cluster |
US7783918B2 (en) | 2007-02-15 | 2010-08-24 | Inventec Corporation | Data protection method of storage device |
US7711712B2 (en) | 2007-03-12 | 2010-05-04 | Hitachi, Ltd. | System and method for managing consistency among volumes based on application information |
US8397038B2 (en) | 2007-03-22 | 2013-03-12 | Vmware, Inc. | Initializing file data blocks |
US7769971B2 (en) | 2007-03-29 | 2010-08-03 | Data Center Technologies | Replication and restoration of single-instance storage pools |
US8281301B2 (en) | 2007-03-30 | 2012-10-02 | Hitachi, Ltd. | Method and apparatus for controlling storage provisioning |
US20080250407A1 (en) | 2007-04-05 | 2008-10-09 | Microsoft Corporation | Network group name for virtual machines |
US7793307B2 (en) | 2007-04-06 | 2010-09-07 | Network Appliance, Inc. | Apparatus and method for providing virtualized hardware resources within a virtual execution environment |
US8225129B2 (en) | 2007-04-10 | 2012-07-17 | International Business Machines Corporation | Methods and apparatus for effective on-line backup selection for failure recovery in distributed stream processing systems |
US8479194B2 (en) | 2007-04-25 | 2013-07-02 | Microsoft Corporation | Virtual machine migration |
US7805631B2 (en) | 2007-05-03 | 2010-09-28 | Microsoft Corporation | Bare metal recovery from backup media to virtual machine |
US8752055B2 (en) | 2007-05-10 | 2014-06-10 | International Business Machines Corporation | Method of managing resources within a set of processes |
US8429425B2 (en) | 2007-06-08 | 2013-04-23 | Apple Inc. | Electronic backup and restoration of encrypted data |
US9354960B2 (en) | 2010-12-27 | 2016-05-31 | Red Hat, Inc. | Assigning virtual machines to business application service groups based on ranking of the virtual machines |
US9880906B2 (en) | 2007-06-27 | 2018-01-30 | Hewlett Packard Enterprise Development Lp | Drive resources in storage library behind virtual library |
US8554734B1 (en) | 2007-07-19 | 2013-10-08 | American Megatrends, Inc. | Continuous data protection journaling in data storage systems |
US8239646B2 (en) | 2007-07-31 | 2012-08-07 | Vmware, Inc. | Online virtual machine disk migration |
US9009327B2 (en) | 2007-08-03 | 2015-04-14 | Citrix Systems, Inc. | Systems and methods for providing IIP address stickiness in an SSL VPN session failover environment |
US7970903B2 (en) | 2007-08-20 | 2011-06-28 | Hitachi, Ltd. | Storage and server provisioning for virtualized and geographically dispersed data centers |
US8707070B2 (en) | 2007-08-28 | 2014-04-22 | Commvault Systems, Inc. | Power management of data processing resources, such as power adaptive management of data storage operations |
WO2009032710A2 (en) | 2007-08-29 | 2009-03-12 | Nirvanix, Inc. | Filing system and method for data files stored in a distributed communications network |
JP5156310B2 (en) | 2007-09-19 | 2013-03-06 | Hitachi, Ltd. | Method and computer for supporting construction of backup configuration |
US7822939B1 (en) | 2007-09-25 | 2010-10-26 | Emc Corporation | Data de-duplication using thin provisioning |
US8650389B1 (en) | 2007-09-28 | 2014-02-11 | F5 Networks, Inc. | Secure sockets layer protocol handshake mirroring |
US8396838B2 (en) | 2007-10-17 | 2013-03-12 | Commvault Systems, Inc. | Legal compliance, electronic discovery and electronic document handling of online and offline copies of data |
US8407518B2 (en) | 2007-10-26 | 2013-03-26 | Vmware, Inc. | Using virtual machine cloning to create a backup virtual machine in a fault tolerant system |
US8548953B2 (en) | 2007-11-12 | 2013-10-01 | F5 Networks, Inc. | File deduplication using storage tiers |
US9473598B2 (en) | 2007-12-18 | 2016-10-18 | International Business Machines Corporation | Network connection failover during application service interruption |
US20110040812A1 (en) | 2007-12-20 | 2011-02-17 | Virtual Computer, Inc. | Layered Virtual File System |
US8589909B2 (en) | 2008-01-10 | 2013-11-19 | Oracle International Corporation | Techniques for reducing down time in updating applications with metadata |
US20090210427A1 (en) | 2008-02-15 | 2009-08-20 | Chris Eidler | Secure Business Continuity and Disaster Recovery Platform for Multiple Protected Systems |
US7882069B2 (en) | 2008-02-19 | 2011-02-01 | Oracle International Corp. | Tag based backup and recovery |
US8458419B2 (en) | 2008-02-27 | 2013-06-04 | International Business Machines Corporation | Method for application backup in the VMware consolidated backup framework |
US20090228669A1 (en) | 2008-03-10 | 2009-09-10 | Microsoft Corporation | Storage Device Optimization Using File Characteristics |
US8291180B2 (en) | 2008-03-20 | 2012-10-16 | Vmware, Inc. | Loose synchronization of virtual disks |
US8438347B1 (en) | 2008-03-27 | 2013-05-07 | Symantec Corporation | Techniques for proactive synchronization of backups on replication targets |
US7953945B2 (en) | 2008-03-27 | 2011-05-31 | International Business Machines Corporation | System and method for providing a backup/restore interface for third party HSM clients |
JP5115272B2 (en) | 2008-03-28 | 2013-01-09 | Fujitsu Ltd. | An electronic device system in which a large number of electronic devices are rack-mounted, and an electronic device specific processing method for the electronic device system. |
US8199911B1 (en) | 2008-03-31 | 2012-06-12 | Symantec Operating Corporation | Secure encryption algorithm for data deduplication on untrusted storage |
JP5405320B2 (en) | 2008-04-28 | 2014-02-05 | Panasonic Corporation | Virtual machine control device, virtual machine control method, and virtual machine control program |
US8972978B2 (en) | 2008-05-02 | 2015-03-03 | Skytap | Multitenant hosted virtual machine infrastructure |
US8266099B2 (en) | 2008-05-29 | 2012-09-11 | Vmware, Inc. | Offloading storage operations to storage hardware using a third party server |
US8543998B2 (en) | 2008-05-30 | 2013-09-24 | Oracle International Corporation | System and method for building virtual appliances using a repository metadata server and a dependency resolution service |
US8121966B2 (en) | 2008-06-05 | 2012-02-21 | International Business Machines Corporation | Method and system for automated integrated server-network-storage disaster recovery planning |
US8230256B1 (en) | 2008-06-06 | 2012-07-24 | Symantec Corporation | Method and apparatus for achieving high availability for an application in a computer cluster |
US8577845B2 (en) | 2008-06-13 | 2013-11-05 | Symantec Operating Corporation | Remote, granular restore from full virtual machine backup |
US8015146B2 (en) | 2008-06-16 | 2011-09-06 | Hitachi, Ltd. | Methods and systems for assisting information processing by using storage system |
US8751629B2 (en) | 2008-06-18 | 2014-06-10 | Camber Defense Security And Systems Solutions, Inc. | Systems and methods for automated building of a simulated network environment |
US8769048B2 (en) | 2008-06-18 | 2014-07-01 | Commvault Systems, Inc. | Data protection scheduling, such as providing a flexible backup window in a data protection system |
US8484162B2 (en) | 2008-06-24 | 2013-07-09 | Commvault Systems, Inc. | De-duplication systems and methods for application-specific data |
US8219524B2 (en) | 2008-06-24 | 2012-07-10 | Commvault Systems, Inc. | Application-aware and remote single instance data management |
US7756964B2 (en) | 2008-06-26 | 2010-07-13 | Oracle America, Inc. | Modular integrated computing and storage |
US8229896B1 (en) | 2008-06-30 | 2012-07-24 | Symantec Corporation | Method and apparatus for identifying data blocks required for restoration |
US8135930B1 (en) | 2008-07-14 | 2012-03-13 | Vizioncore, Inc. | Replication systems and methods for a virtual computing environment |
US8046550B2 (en) | 2008-07-14 | 2011-10-25 | Quest Software, Inc. | Systems and methods for performing backup operations of virtual machine files |
US8060476B1 (en) | 2008-07-14 | 2011-11-15 | Quest Software, Inc. | Backup systems and methods for a virtual computing environment |
US8706694B2 (en) | 2008-07-15 | 2014-04-22 | American Megatrends, Inc. | Continuous data protection of files stored on a remote storage device |
US7913114B2 (en) | 2008-07-31 | 2011-03-22 | Quantum Corporation | Repair of a corrupt data segment used by a de-duplication engine |
US7913047B2 (en) | 2008-08-01 | 2011-03-22 | Disney Enterprises, Inc. | Method and system for optimizing data backup |
US8086799B2 (en) | 2008-08-12 | 2011-12-27 | Netapp, Inc. | Scalable deduplication of stored data |
US7917617B1 (en) | 2008-08-14 | 2011-03-29 | Netapp, Inc. | Mitigating rebaselining of a virtual machine (VM) |
US8037032B2 (en) | 2008-08-25 | 2011-10-11 | Vmware, Inc. | Managing backups using virtual machines |
US8495316B2 (en) | 2008-08-25 | 2013-07-23 | Symantec Operating Corporation | Efficient management of archival images of virtual machines having incremental snapshots |
US20100070474A1 (en) | 2008-09-12 | 2010-03-18 | Lad Kamleshkumar K | Transferring or migrating portions of data objects, such as block-level data migration or chunk-based data migration |
US9280335B2 (en) | 2010-09-30 | 2016-03-08 | International Business Machines Corporation | Semantically rich composable software image bundles |
US8307187B2 (en) | 2008-09-12 | 2012-11-06 | Vmware, Inc. | VDI Storage overcommit and rebalancing |
US20100070466A1 (en) | 2008-09-15 | 2010-03-18 | Anand Prahlad | Data transfer techniques within data storage devices, such as network attached storage performing data migration |
US8290915B2 (en) | 2008-09-15 | 2012-10-16 | International Business Machines Corporation | Retrieval and recovery of data chunks from alternate data stores in a deduplicating system |
US9798560B1 (en) | 2008-09-23 | 2017-10-24 | Gogrid, LLC | Automated system and method for extracting and adapting system configurations |
US8620845B2 (en) | 2008-09-24 | 2013-12-31 | Timothy John Stoakes | Identifying application metadata in a backup stream |
US8452731B2 (en) | 2008-09-25 | 2013-05-28 | Quest Software, Inc. | Remote backup and restore |
US9015181B2 (en) | 2008-09-26 | 2015-04-21 | Commvault Systems, Inc. | Systems and methods for managing single instancing data |
US8200637B1 (en) | 2008-09-30 | 2012-06-12 | Symantec Operating Corporation | Block-based sparse backup images of file system volumes |
JP5346536B2 (en) | 2008-10-02 | 2013-11-20 | Hitachi Solutions, Ltd. | Information backup / restore processing device and information backup / restore processing system |
US8200771B2 (en) | 2008-10-10 | 2012-06-12 | International Business Machines Corporation | Workload migration using on demand remote paging |
US8499297B2 (en) | 2008-10-28 | 2013-07-30 | Vmware, Inc. | Low overhead fault tolerance through hybrid checkpointing and replay |
US8386798B2 (en) | 2008-12-23 | 2013-02-26 | Unisys Corporation | Block-level data storage using an outstanding write list |
US8315992B1 (en) | 2008-11-26 | 2012-11-20 | Symantec Corporation | Affinity based allocation for storage implementations employing deduplicated data stores |
US8204859B2 (en) | 2008-12-10 | 2012-06-19 | Commvault Systems, Inc. | Systems and methods for managing replicated database data |
US20100162037A1 (en) | 2008-12-22 | 2010-06-24 | International Business Machines Corporation | Memory System having Spare Memory Devices Attached to a Local Interface Bus |
US9454368B2 (en) | 2009-01-21 | 2016-09-27 | Vmware, Inc. | Data mover permitting data transfer without transferring data between application and operating system |
US8243911B1 (en) | 2009-01-23 | 2012-08-14 | Sprint Communications Company L.P. | Failover mechanism based on historical usage data |
TWI526823B (en) | 2009-01-23 | 2016-03-21 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture |
US8990801B2 (en) | 2009-01-30 | 2015-03-24 | Hewlett-Packard Development Company, L.P. | Server switch integration in a virtualized system |
US8108638B2 (en) | 2009-02-06 | 2012-01-31 | International Business Machines Corporation | Backup of deduplicated data |
US8549364B2 (en) | 2009-02-18 | 2013-10-01 | Vmware, Inc. | Failure detection and recovery of host computers in a cluster |
US8645334B2 (en) | 2009-02-27 | 2014-02-04 | Andrew LEPPARD | Minimize damage caused by corruption of de-duplicated data |
US8321863B2 (en) | 2009-03-06 | 2012-11-27 | Hitachi, Ltd. | Security management device and method |
US8443166B2 (en) | 2009-03-06 | 2013-05-14 | Vmware, Inc. | Method for tracking changes in virtual disks |
US8099391B1 (en) | 2009-03-17 | 2012-01-17 | Symantec Corporation | Incremental and differential backups of virtual machine files |
US8434131B2 (en) | 2009-03-20 | 2013-04-30 | Commvault Systems, Inc. | Managing connections in a data storage system |
US8401996B2 (en) | 2009-03-30 | 2013-03-19 | Commvault Systems, Inc. | Storing a variable number of instances of data objects |
US8479304B1 (en) | 2009-03-31 | 2013-07-02 | Symantec Corporation | Selectively protecting against chosen plaintext attacks in untrusted storage environments that support data deduplication |
US8191065B2 (en) | 2009-04-06 | 2012-05-29 | Red Hat Israel, Ltd. | Managing virtual machine images |
US20100262797A1 (en) | 2009-04-10 | 2010-10-14 | PHD Virtual Technologies | Virtual machine data backup |
US8205050B2 (en) | 2009-04-14 | 2012-06-19 | Novell, Inc. | Data backup for virtual machines |
US8108640B1 (en) | 2009-04-16 | 2012-01-31 | Network Appliance, Inc. | Reserving a thin provisioned space in a storage system |
US8327351B2 (en) | 2009-04-30 | 2012-12-04 | Sap Ag | Application modification framework |
US8156301B1 (en) | 2009-05-13 | 2012-04-10 | Symantec Corporation | Method and apparatus for synchronizing a physical machine with a virtual machine while the virtual machine is operational |
US8307258B2 (en) | 2009-05-18 | 2012-11-06 | Fusion-io, Inc | Apparatus, system, and method for reconfiguring an array to operate with less storage elements |
JP5227887B2 (en) | 2009-05-21 | 2013-07-03 | Hitachi, Ltd. | Backup management method |
US8689211B2 (en) | 2009-05-25 | 2014-04-01 | International Business Machines Corporation | Live migration of virtual machines in a computing environment |
US20100306486A1 (en) | 2009-05-29 | 2010-12-02 | Sridhar Balasubramanian | Policy-based application aware storage array snapshot backup and restore technique |
US8527466B2 (en) | 2009-05-31 | 2013-09-03 | Red Hat Israel, Ltd. | Handling temporary files of a virtual machine |
WO2010140264A1 (en) | 2009-06-04 | 2010-12-09 | Hitachi,Ltd. | Storage subsystem and its data processing method, and computer system |
US20100332629A1 (en) | 2009-06-04 | 2010-12-30 | Lauren Ann Cotugno | Secure custom application cloud computing architecture |
US8135985B2 (en) | 2009-06-17 | 2012-03-13 | International Business Machines Corporation | High availability support for virtual machines |
US8955108B2 (en) | 2009-06-17 | 2015-02-10 | Microsoft Corporation | Security virtual machine for advanced auditing |
US8271443B1 (en) | 2009-06-29 | 2012-09-18 | Symantec Operating Corporation | Backup system including a privately accessible primary backup server and a publicly accessible alternate backup server |
US9146755B2 (en) | 2009-07-08 | 2015-09-29 | Kaseya Limited | System and method for transporting platform independent power configuration parameters |
US8930306B1 (en) | 2009-07-08 | 2015-01-06 | Commvault Systems, Inc. | Synchronized data deduplication |
US8234469B2 (en) | 2009-07-09 | 2012-07-31 | Microsoft Corporation | Backup of virtual machines using cloned virtual machines |
US10120767B2 (en) | 2009-07-15 | 2018-11-06 | Idera, Inc. | System, method, and computer program product for creating a virtual database |
US8578374B2 (en) | 2009-07-16 | 2013-11-05 | Ca, Inc. | System and method for managing virtual machines |
US8613085B2 (en) | 2009-07-22 | 2013-12-17 | Broadcom Corporation | Method and system for traffic management via virtual machine migration |
US8566650B2 (en) | 2009-08-04 | 2013-10-22 | Red Hat Israel, Ltd. | Virtual machine infrastructure with storage domain monitoring |
US9239762B1 (en) | 2009-08-11 | 2016-01-19 | Symantec Corporation | Method and apparatus for virtualizing file system placeholders at a computer |
US8176360B2 (en) | 2009-08-11 | 2012-05-08 | Texas Memory Systems, Inc. | Method and apparatus for addressing actual or predicted failures in a FLASH-based storage system |
US8719767B2 (en) | 2011-03-31 | 2014-05-06 | Commvault Systems, Inc. | Utilizing snapshots to provide builds to developer computing devices |
US8769535B2 (en) | 2009-09-24 | 2014-07-01 | Avaya Inc. | Providing virtual machine high-availability and fault tolerance via solid-state backup drives |
US9311378B2 (en) | 2009-10-09 | 2016-04-12 | International Business Machines Corporation | Data synchronization between a data management system and an external system |
US8578126B1 (en) | 2009-10-29 | 2013-11-05 | Netapp, Inc. | Mapping of logical start addresses to physical start addresses in a system having misalignment between logical and physical data blocks |
US8417907B2 (en) | 2009-10-29 | 2013-04-09 | Symantec Corporation | Synchronizing snapshot volumes across hosts |
US8621460B2 (en) | 2009-11-02 | 2013-12-31 | International Business Machines Corporation | Endpoint-hosted hypervisor management |
US8793222B1 (en) | 2009-11-06 | 2014-07-29 | Symantec Corporation | Systems and methods for indexing backup content |
US20110153570A1 (en) | 2009-12-18 | 2011-06-23 | Electronics And Telecommunications Research Institute | Data replication and recovery method in asymmetric clustered distributed file system |
US7996723B2 (en) | 2009-12-22 | 2011-08-09 | Xerox Corporation | Continuous, automated discovery of bugs in released software |
US8189001B2 (en) | 2010-01-04 | 2012-05-29 | Adshir Ltd. | Method and apparatus for parallel ray-tracing employing modular space division |
US8346935B2 (en) | 2010-01-15 | 2013-01-01 | Joyent, Inc. | Managing hardware resources by sending messages amongst servers in a data center |
US8473947B2 (en) | 2010-01-18 | 2013-06-25 | Vmware, Inc. | Method for configuring a physical adapter with virtual function (VF) and physical function (PF) for controlling address translation between virtual disks and physical storage regions |
US8131681B1 (en) | 2010-01-21 | 2012-03-06 | Netapp, Inc. | Backup disk-tape integration method and system |
US9477531B2 (en) | 2010-01-27 | 2016-10-25 | Vmware, Inc. | Accessing virtual disk content of a virtual machine without running a virtual desktop |
CN102141928A (en) | 2010-01-29 | 2011-08-03 | 国际商业机器公司 | Data processing method and system in virtual environment and deployment method of system |
US8117492B1 (en) | 2010-01-29 | 2012-02-14 | Symantec Corporation | Techniques for backup error management |
US8397039B2 (en) | 2010-02-12 | 2013-03-12 | Symantec Corporation | Storage systems and methods |
US20110202728A1 (en) | 2010-02-17 | 2011-08-18 | Lsi Corporation | Methods and apparatus for managing cache persistence in a storage system using multiple virtual machines |
US8495317B2 (en) | 2010-02-22 | 2013-07-23 | Ca, Inc. | System and method for improving performance of data container backups |
US8458131B2 (en) | 2010-02-26 | 2013-06-04 | Microsoft Corporation | Opportunistic asynchronous de-duplication in block level backups |
US20110218967A1 (en) | 2010-03-08 | 2011-09-08 | Microsoft Corporation | Partial Block Based Backups |
US8478878B2 (en) | 2010-03-11 | 2013-07-02 | International Business Machines Corporation | Placement of virtual machines based on server cost and network cost |
US9710294B2 (en) | 2010-03-17 | 2017-07-18 | Zerto Ltd. | Methods and apparatus for providing hypervisor level data services for server virtualization |
US8560788B1 (en) | 2010-03-29 | 2013-10-15 | Emc Corporation | Method of performing backups using multiple streams |
US8352422B2 (en) | 2010-03-30 | 2013-01-08 | Commvault Systems, Inc. | Data restore systems and methods in a replication environment |
US20110252208A1 (en) | 2010-04-12 | 2011-10-13 | Microsoft Corporation | Express-full backup of a cluster shared virtual machine |
US8751857B2 (en) | 2010-04-13 | 2014-06-10 | Red Hat Israel, Ltd. | Monitoring of highly available virtual machines |
US8219769B1 (en) | 2010-05-04 | 2012-07-10 | Symantec Corporation | Discovering cluster resources to efficiently perform cluster backups and restores |
US8453145B1 (en) | 2010-05-06 | 2013-05-28 | Quest Software, Inc. | Systems and methods for instant provisioning of virtual machine files |
US8966027B1 (en) | 2010-05-24 | 2015-02-24 | Amazon Technologies, Inc. | Managing replication of computing nodes for provided computer networks |
US8244992B2 (en) | 2010-05-24 | 2012-08-14 | Spackman Stephen P | Policy based data retrieval performance for deduplicated data |
US8667171B2 (en) | 2010-05-28 | 2014-03-04 | Microsoft Corporation | Virtual data center allocation with bandwidth guarantees |
US8489676B1 (en) | 2010-06-30 | 2013-07-16 | Symantec Corporation | Technique for implementing seamless shortcuts in sharepoint |
US8868726B1 (en) | 2010-07-02 | 2014-10-21 | Symantec Corporation | Systems and methods for performing backups |
US8954669B2 (en) | 2010-07-07 | 2015-02-10 | Nexenta Systems, Inc. | Method and system for heterogeneous data volume |
US9135171B2 (en) | 2010-07-13 | 2015-09-15 | Vmware, Inc. | Method for improving save and restore performance in virtual machine systems |
US10162722B2 (en) | 2010-07-15 | 2018-12-25 | Veritas Technologies Llc | Virtual machine aware replication method and system |
US8566640B2 (en) | 2010-07-19 | 2013-10-22 | Veeam Software Ag | Systems, methods, and computer program products for instant recovery of image level backups |
US8291170B1 (en) | 2010-08-19 | 2012-10-16 | Symantec Corporation | System and method for event driven backup data storage |
TWI505189B (en) | 2010-08-27 | 2015-10-21 | Ibm | A method, computer program, and system for automatic upgrade of virtual appliances |
US20120072685A1 (en) | 2010-09-16 | 2012-03-22 | Hitachi, Ltd. | Method and apparatus for backup of virtual machine data |
US8838624B2 (en) | 2010-09-24 | 2014-09-16 | Hitachi Data Systems Corporation | System and method for aggregating query results in a fault-tolerant database management system |
US9304867B2 (en) | 2010-09-28 | 2016-04-05 | Amazon Technologies, Inc. | System and method for providing flexible storage and retrieval of snapshot archives |
US8799226B2 (en) | 2010-09-28 | 2014-08-05 | International Business Machines Corporation | Prioritization of data items for backup in a computing environment |
US9235474B1 (en) | 2011-02-17 | 2016-01-12 | Axcient, Inc. | Systems and methods for maintaining a virtual failover volume of a target computing system |
WO2012042509A1 (en) | 2010-10-01 | 2012-04-05 | Peter Chacko | A distributed virtual storage cloud architecture and a method thereof |
US20120084272A1 (en) | 2010-10-04 | 2012-04-05 | International Business Machines Corporation | File system support for inert files |
US8909767B2 (en) | 2010-10-13 | 2014-12-09 | Rackware, Inc. | Cloud federation in a cloud computing environment |
US9015119B2 (en) | 2010-10-26 | 2015-04-21 | International Business Machines Corporation | Performing a background copy process during a backup operation |
WO2012057942A1 (en) | 2010-10-27 | 2012-05-03 | High Cloud Security, Inc. | System and method for secure storage of virtual machines |
JP5606293B2 (en) | 2010-11-22 | 2014-10-15 | キヤノン株式会社 | Data processing apparatus, access control method and program |
US8910157B2 (en) | 2010-11-23 | 2014-12-09 | International Business Machines Corporation | Optimization of virtual appliance deployment |
US8688645B2 (en) | 2010-11-30 | 2014-04-01 | Netapp, Inc. | Incremental restore of data between storage systems having dissimilar storage operating systems associated therewith |
US8392378B2 (en) | 2010-12-09 | 2013-03-05 | International Business Machines Corporation | Efficient backup and restore of virtual input/output server (VIOS) cluster |
US9253100B2 (en) | 2010-12-10 | 2016-02-02 | Alcatel Lucent | Asynchronous virtual machine replication |
JP5555780B2 (en) | 2010-12-17 | 2014-07-23 | 株式会社日立製作所 | Information processing service failure recovery method and virtual machine image generation apparatus |
US9038066B2 (en) | 2010-12-22 | 2015-05-19 | Vmware, Inc. | In-place snapshots of a virtual disk configured with sparse extent |
US9020895B1 (en) | 2010-12-27 | 2015-04-28 | Netapp, Inc. | Disaster recovery for virtual machines across primary and secondary sites |
US8738883B2 (en) | 2011-01-19 | 2014-05-27 | Quantum Corporation | Snapshot creation from block lists |
US8832029B2 (en) | 2011-02-16 | 2014-09-09 | Microsoft Corporation | Incremental virtual machine backup supporting migration |
US8694764B2 (en) | 2011-02-24 | 2014-04-08 | Microsoft Corporation | Multi-phase resume from hibernate |
US9542215B2 (en) | 2011-09-30 | 2017-01-10 | V3 Systems, Inc. | Migrating virtual machines from a source physical support environment to a target physical support environment using master image and user delta collections |
JP5724477B2 (en) | 2011-03-10 | 2015-05-27 | 富士通株式会社 | Migration program, information processing apparatus, migration method, and information processing system |
US8849762B2 (en) | 2011-03-31 | 2014-09-30 | Commvault Systems, Inc. | Restoring computing environments, such as autorecovery of file systems at certain points in time |
US9990253B1 (en) | 2011-03-31 | 2018-06-05 | EMC IP Holding Company LLC | System and method for recovering file systems without a replica |
US8825720B1 (en) | 2011-04-12 | 2014-09-02 | Emc Corporation | Scaling asynchronous reclamation of free space in de-duplicated multi-controller storage systems |
US8938643B1 (en) | 2011-04-22 | 2015-01-20 | Symantec Corporation | Cloning using streaming restore |
CN102761566B (en) | 2011-04-26 | 2015-09-23 | 国际商业机器公司 | Method and device for migrating virtual machine |
US9519496B2 (en) | 2011-04-26 | 2016-12-13 | Microsoft Technology Licensing, Llc | Detecting and preventing virtual disk storage linkage faults |
US8924967B2 (en) | 2011-04-28 | 2014-12-30 | Vmware, Inc. | Maintaining high availability of a group of virtual machines using heartbeat messages |
US9785523B2 (en) | 2011-06-20 | 2017-10-10 | Microsoft Technology Licensing, Llc | Managing replicated virtual storage at recovery sites |
WO2012176307A1 (en) | 2011-06-23 | 2012-12-27 | 株式会社日立製作所 | Storage administration system and storage administration method |
US9020987B1 (en) | 2011-06-29 | 2015-04-28 | Emc Corporation | Managing updating of metadata of file systems |
US8689047B2 (en) | 2011-07-22 | 2014-04-01 | Microsoft Corporation | Virtual disk replication using log files |
US8671249B2 (en) | 2011-07-22 | 2014-03-11 | Fusion-Io, Inc. | Apparatus, system, and method for managing storage capacity recovery |
US8533715B2 (en) | 2011-08-09 | 2013-09-10 | International Business Machines Corporation | Virtual machine management |
US20130054533A1 (en) | 2011-08-24 | 2013-02-28 | Microsoft Corporation | Verifying a data recovery component using a managed interface |
US20130074181A1 (en) | 2011-09-19 | 2013-03-21 | Cisco Technology, Inc. | Auto Migration of Services Within a Virtual Data Center |
US20130080841A1 (en) | 2011-09-23 | 2013-03-28 | Sungard Availability Services | Recover to cloud: recovery point objective analysis tool |
US9021459B1 (en) | 2011-09-28 | 2015-04-28 | Juniper Networks, Inc. | High availability in-service software upgrade using virtual machine instances in dual control units of a network device |
US8776043B1 (en) | 2011-09-29 | 2014-07-08 | Amazon Technologies, Inc. | Service image notifications |
US9069587B2 (en) | 2011-10-31 | 2015-06-30 | Stec, Inc. | System and method to cache hypervisor data |
US20130117744A1 (en) | 2011-11-03 | 2013-05-09 | Ocz Technology Group, Inc. | Methods and apparatus for providing hypervisor-level acceleration and virtualization services |
US9372707B2 (en) | 2011-11-18 | 2016-06-21 | Hitachi, Ltd. | Computer, virtual machine deployment method and program |
WO2013080254A1 (en) | 2011-11-30 | 2013-06-06 | Hitachi, Ltd. | Storage system and method for controlling storage system |
US9280378B2 (en) | 2011-11-30 | 2016-03-08 | Red Hat, Inc. | Adjustment during migration to a different virtualization environment |
CA2862596A1 (en) | 2011-12-05 | 2013-06-13 | Persistent Telecom Solutions Inc. | Universal pluggable cloud disaster recovery system |
US9292350B1 (en) | 2011-12-15 | 2016-03-22 | Symantec Corporation | Management and provisioning of virtual machines |
US9703647B2 (en) | 2011-12-30 | 2017-07-11 | Veritas Technologies Llc | Automated policy management in a virtual machine environment |
US8930542B2 (en) | 2012-01-23 | 2015-01-06 | International Business Machines Corporation | Dynamically building a set of compute nodes to host the user's workload |
US9268590B2 (en) | 2012-02-29 | 2016-02-23 | Vmware, Inc. | Provisioning a cluster of distributed computing platform based on placement strategy |
US9047133B2 (en) | 2012-03-02 | 2015-06-02 | Vmware, Inc. | Single, logical, multi-tier application blueprint used for deployment and management of multiple physical applications in a cloud environment |
US20130232215A1 (en) | 2012-03-05 | 2013-09-05 | Riverbed Technology, Inc. | Virtualized data storage system architecture using prefetching agent |
US9124633B1 (en) | 2012-03-29 | 2015-09-01 | Infoblox Inc. | IP address and domain name automation of virtual infrastructure |
US9286327B2 (en) | 2012-03-30 | 2016-03-15 | Commvault Systems, Inc. | Data storage recovery automation |
US9110604B2 (en) | 2012-09-28 | 2015-08-18 | Emc Corporation | System and method for full virtual machine backup using storage system functionality |
CN104520873A (en) | 2012-04-06 | 2015-04-15 | 安全第一公司 | Systems and methods for securing and restoring virtual machines |
US8966318B1 (en) | 2012-04-27 | 2015-02-24 | Symantec Corporation | Method to validate availability of applications within a backup image |
US9311248B2 (en) | 2012-05-07 | 2016-04-12 | Raytheon Cyber Products, Llc | Methods and apparatuses for monitoring activities of virtual machines |
US9246996B1 (en) | 2012-05-07 | 2016-01-26 | Amazon Technologies, Inc. | Data volume placement techniques |
US8984134B2 (en) | 2012-05-07 | 2015-03-17 | International Business Machines Corporation | Unified cloud computing infrastructure to manage and deploy physical and virtual environments |
US8904081B1 (en) | 2012-05-08 | 2014-12-02 | Vmware, Inc. | Composing a virtual disk using application delta disk images |
US20130311429A1 (en) | 2012-05-18 | 2013-11-21 | Hitachi, Ltd. | Method for controlling backup and restoration, and storage system using the same |
TWI610166B (en) | 2012-06-04 | 2018-01-01 | 飛康國際網路科技股份有限公司 | Automated disaster recovery and data migration system and method |
US9218375B2 (en) | 2012-06-13 | 2015-12-22 | Commvault Systems, Inc. | Dedicated client-side signature generator in a networked storage system |
US8954796B1 (en) | 2012-06-26 | 2015-02-10 | Emc International Company | Recovery of a logical unit in a consistency group while replicating other logical units in the consistency group |
US8909980B1 (en) | 2012-06-29 | 2014-12-09 | Emc Corporation | Coordinating processing for request redirection |
US20140007097A1 (en) | 2012-06-29 | 2014-01-02 | Brocade Communications Systems, Inc. | Dynamic resource allocation for virtual machines |
US9230096B2 (en) | 2012-07-02 | 2016-01-05 | Symantec Corporation | System and method for data loss prevention in a virtualized environment |
US9612966B2 (en) | 2012-07-03 | 2017-04-04 | Sandisk Technologies Llc | Systems, methods and apparatus for a virtual machine cache |
US8850146B1 (en) | 2012-07-27 | 2014-09-30 | Symantec Corporation | Backup of a virtual machine configured to perform I/O operations bypassing a hypervisor |
US9026498B2 (en) | 2012-08-13 | 2015-05-05 | Commvault Systems, Inc. | Lightweight mounting of a secondary copy of file system data |
US8938481B2 (en) | 2012-08-13 | 2015-01-20 | Commvault Systems, Inc. | Generic file level restore from a block-level secondary copy |
US9141529B2 (en) | 2012-08-14 | 2015-09-22 | OCZ Storage Solutions Inc. | Methods and apparatus for providing acceleration of virtual machines in virtual environments |
US9268642B2 (en) | 2012-08-24 | 2016-02-23 | Vmware, Inc. | Protecting paired virtual machines |
US20140082128A1 (en) | 2012-09-18 | 2014-03-20 | Netapp, Inc. | Dynamic detection and selection of file servers in a caching application or system |
WO2014049691A1 (en) | 2012-09-25 | 2014-04-03 | 株式会社東芝 | Information processing system |
US9712402B2 (en) | 2012-10-10 | 2017-07-18 | Alcatel Lucent | Method and apparatus for automated deployment of geographically distributed applications within a cloud |
GB2507261A (en) | 2012-10-23 | 2014-04-30 | Ibm | Reverting to a snapshot of a VM by modifying metadata |
CN103810058B (en) | 2012-11-12 | 2017-02-22 | 华为技术有限公司 | Backup method, equipment and system for virtual machine |
US9063971B2 (en) | 2012-12-03 | 2015-06-23 | Red Hat Israel, Ltd. | Schema and query abstraction for different LDAP service providers |
US10162873B2 (en) | 2012-12-21 | 2018-12-25 | Red Hat, Inc. | Synchronization of physical disks |
US20140181046A1 (en) | 2012-12-21 | 2014-06-26 | Commvault Systems, Inc. | Systems and methods to backup unprotected virtual machines |
US9678978B2 (en) | 2012-12-31 | 2017-06-13 | Carbonite, Inc. | Systems and methods for automatic synchronization of recently modified data |
US9372726B2 (en) | 2013-01-09 | 2016-06-21 | The Research Foundation For The State University Of New York | Gang migration of virtual machines using cluster-wide deduplication |
US9646039B2 (en) | 2013-01-10 | 2017-05-09 | Pure Storage, Inc. | Snapshots in a storage system |
US9804930B2 (en) | 2013-01-11 | 2017-10-31 | Commvault Systems, Inc. | Partial file restore in a data storage system |
US9483489B2 (en) | 2013-01-14 | 2016-11-01 | Commvault Systems, Inc. | Partial sharing of secondary storage files in a data storage system |
US9300726B2 (en) | 2013-01-15 | 2016-03-29 | International Business Machines Corporation | Implementing a private network isolated from a user network for virtual machine deployment and migration and for monitoring and managing the cloud environment |
WO2014115188A1 (en) | 2013-01-28 | 2014-07-31 | Hitachi, Ltd. | Storage system and method for allocating resource |
EP2951963B1 (en) | 2013-01-30 | 2019-04-24 | Hewlett-Packard Enterprise Development LP | Failover in response to failure of a port |
US9372865B2 (en) | 2013-02-12 | 2016-06-21 | Atlantis Computing, Inc. | Deduplication metadata access in deduplication file system |
US9165150B2 (en) | 2013-02-19 | 2015-10-20 | Symantec Corporation | Application and device control in a virtualized environment |
US9594837B2 (en) | 2013-02-26 | 2017-03-14 | Microsoft Technology Licensing, Llc | Prediction and information retrieval for intrinsically diverse sessions |
US9244777B2 (en) | 2013-03-01 | 2016-01-26 | International Business Machines Corporation | Balanced distributed backup scheduling |
US9558199B2 (en) | 2013-03-07 | 2017-01-31 | Jive Software, Inc. | Efficient data deduplication |
US9959190B2 (en) | 2013-03-12 | 2018-05-01 | International Business Machines Corporation | On-site visualization of component status |
US9720717B2 (en) | 2013-03-14 | 2017-08-01 | Sandisk Technologies Llc | Virtualization support for storage devices |
US9235582B1 (en) | 2013-03-14 | 2016-01-12 | Emc Corporation | Tracking files excluded from backup |
US11188589B2 (en) | 2013-03-15 | 2021-11-30 | Wits(Md), Llc. | Associating received medical imaging data to stored medical imaging data |
US20140344323A1 (en) | 2013-03-15 | 2014-11-20 | Reactor8 Inc. | State-based configuration management for distributed systems |
US9823955B2 (en) | 2013-04-23 | 2017-11-21 | Hitachi, Ltd. | Storage system which is capable of processing file access requests and block access requests, and which can manage failures in A and storage system failure management method having a cluster configuration |
US9405767B2 (en) | 2013-05-01 | 2016-08-02 | Microsoft Technology Licensing, Llc | Streaming content and placeholders |
US9424136B1 (en) | 2013-06-12 | 2016-08-23 | Veritas Technologies Llc | Systems and methods for creating optimized synthetic backup images |
US9213706B2 (en) | 2013-06-13 | 2015-12-15 | DataGravity, Inc. | Live restore for a data intelligent storage system |
US8849764B1 (en) | 2013-06-13 | 2014-09-30 | DataGravity, Inc. | System and method of data intelligent storage |
US10089192B2 (en) | 2013-06-13 | 2018-10-02 | Hytrust, Inc. | Live restore for a data intelligent storage system |
US9208015B2 (en) | 2013-06-18 | 2015-12-08 | Vmware, Inc. | Hypervisor remedial action for a virtual machine in response to an error message from the virtual machine |
US9575789B1 (en) | 2013-06-26 | 2017-02-21 | Veritas Technologies | Systems and methods for enabling migratory virtual machines to expedite access to resources |
US9235485B2 (en) | 2013-07-22 | 2016-01-12 | International Business Machines Corporation | Moving objects in a primary computer based on memory errors in a secondary computer |
US9043576B2 (en) | 2013-08-21 | 2015-05-26 | Simplivity Corporation | System and method for virtual machine conversion |
US9298386B2 (en) | 2013-08-23 | 2016-03-29 | Globalfoundries Inc. | System and method for improved placement of blocks in a deduplication-erasure code environment |
US9336076B2 (en) | 2013-08-23 | 2016-05-10 | Globalfoundries Inc. | System and method for controlling a redundancy parity encoding amount based on deduplication indications of activity |
US9858154B1 (en) | 2013-08-23 | 2018-01-02 | Acronis International Gmbh | Agentless file backup of a virtual machine |
US20150067393A1 (en) | 2013-08-27 | 2015-03-05 | Connectloud, Inc. | Method and apparatus to remotely take a snapshot of a complete virtual machine from a software defined cloud with backup and restore capacity |
US9792170B2 (en) | 2013-08-30 | 2017-10-17 | Cisco Technology, Inc. | Correcting operational state and incorporating additional debugging support into an online system without disruption |
US20150089185A1 (en) | 2013-09-23 | 2015-03-26 | International Business Machines Corporation | Managing Mirror Copies without Blocking Application I/O |
US9405628B2 (en) | 2013-09-23 | 2016-08-02 | International Business Machines Corporation | Data migration using multi-storage volume swap |
US9201736B1 (en) * | 2013-09-30 | 2015-12-01 | Emc Corporation | Methods and apparatus for recovery of complex assets in distributed information processing systems |
US9727357B2 (en) | 2013-10-01 | 2017-08-08 | International Business Machines Corporation | Failover detection and treatment in checkpoint systems |
CN104123186B (en) | 2013-10-15 | 2015-09-16 | 腾讯科技(深圳)有限公司 | Method for distributing business and device |
US10193963B2 (en) | 2013-10-24 | 2019-01-29 | Vmware, Inc. | Container virtual machines for hadoop |
US9098457B2 (en) | 2013-10-31 | 2015-08-04 | Vmware, Inc. | Visualizing disaster recovery plan execution for the cloud |
US9230001B2 (en) | 2013-11-14 | 2016-01-05 | Vmware, Inc. | Intelligent data propagation using performance monitoring |
US9904603B2 (en) | 2013-11-18 | 2018-02-27 | Actifio, Inc. | Successive data fingerprinting for copy accuracy assurance |
JP2015103092A (en) | 2013-11-26 | 2015-06-04 | 株式会社日立製作所 | Fault recovery system and method of constructing fault recovery system |
US10241709B2 (en) | 2013-12-09 | 2019-03-26 | Vmware, Inc. | Elastic temporary filesystem |
US9442966B2 (en) | 2014-01-15 | 2016-09-13 | Ca, Inc. | Extending the recovery and reporting ranges of objects |
US11194667B2 (en) | 2014-02-07 | 2021-12-07 | International Business Machines Corporation | Creating a restore copy from a copy of a full copy of source data in a repository that is at a different point-in-time than a restore point-in-time of a restore request |
US20150227600A1 (en) | 2014-02-13 | 2015-08-13 | Actifio, Inc. | Virtual data backup |
US9959177B2 (en) | 2014-02-27 | 2018-05-01 | Red Hat Israel, Ltd. | Backing up virtual machines |
US10216585B2 (en) | 2014-02-28 | 2019-02-26 | Red Hat Israel, Ltd. | Enabling disk image operations in conjunction with snapshot locking |
US11243707B2 (en) | 2014-03-12 | 2022-02-08 | Nutanix, Inc. | Method and system for implementing virtual machine images |
US10380072B2 (en) | 2014-03-17 | 2019-08-13 | Commvault Systems, Inc. | Managing deletions from a deduplication database |
US20150268876A1 (en) | 2014-03-18 | 2015-09-24 | Commvault Systems, Inc. | Efficient information management performed by a client in the absence of a storage manager |
US9588847B1 (en) | 2014-03-25 | 2017-03-07 | EMC IP Holding Company LLC | Recovering corrupt virtual machine disks |
US9582373B2 (en) | 2014-03-31 | 2017-02-28 | Vmware, Inc. | Methods and systems to hot-swap a virtual machine |
US9811427B2 (en) | 2014-04-02 | 2017-11-07 | Commvault Systems, Inc. | Information management by a media agent in the absence of communications with a storage manager |
US9697228B2 (en) | 2014-04-14 | 2017-07-04 | Vembu Technologies Private Limited | Secure relational file system with version control, deduplication, and error correction |
US9280430B2 (en) | 2014-05-13 | 2016-03-08 | Netapp, Inc. | Deferred replication of recovery information at site switchover |
US10203975B2 (en) | 2014-05-28 | 2019-02-12 | Red Hat Israel, Ltd. | Virtual machine template management |
US9785554B2 (en) | 2014-05-30 | 2017-10-10 | International Business Machines Corporation | Synchronizing updates of page table status indicators in a multiprocessing environment |
US9594636B2 (en) | 2014-05-30 | 2017-03-14 | Datto, Inc. | Management of data replication and storage apparatuses, methods and systems |
US9477683B2 (en) | 2014-05-30 | 2016-10-25 | International Business Machines Corporation | Techniques for enabling coarse-grained volume snapshots for virtual machine backup and restore |
US9819750B2 (en) | 2014-06-03 | 2017-11-14 | Qualcomm Incorporated | Neighbor aware network cluster topology establishment based on proximity measurements |
US9619342B2 (en) | 2014-06-24 | 2017-04-11 | International Business Machines Corporation | Back up and recovery in virtual machine environments |
US10097410B2 (en) | 2014-06-26 | 2018-10-09 | Vmware, Inc. | Methods and apparatus to scale application deployments in cloud computing environments |
US9626254B2 (en) | 2014-06-26 | 2017-04-18 | Hewlett Packard Enterprise Development Lp | Backup and non-staged recovery of virtual environment data |
US9430284B2 (en) | 2014-06-26 | 2016-08-30 | Vmware, Inc. | Processing virtual machine objects through multistep workflows |
US9367414B2 (en) | 2014-06-27 | 2016-06-14 | Vmware, Inc. | Persisting high availability protection state for virtual machines stored on distributed object-based storage |
US9898320B2 (en) | 2014-06-28 | 2018-02-20 | Vmware, Inc. | Using a delta query to seed live migration |
CN105446826A (en) | 2014-06-30 | 2016-03-30 | 国际商业机器公司 | Virtual machine backup and recovery method and device |
US10073649B2 (en) | 2014-07-24 | 2018-09-11 | Hewlett Packard Enterprise Development Lp | Storing metadata |
US9852026B2 (en) | 2014-08-06 | 2017-12-26 | Commvault Systems, Inc. | Efficient application recovery in an information management system based on a pseudo-storage-device driver |
US10360110B2 (en) | 2014-08-06 | 2019-07-23 | Commvault Systems, Inc. | Point-in-time backups of a production application made accessible over fibre channel and/or iSCSI as data sources to a remote application by representing the backups as pseudo-disks operating apart from the production application and its host |
US9684567B2 (en) | 2014-09-04 | 2017-06-20 | International Business Machines Corporation | Hypervisor agnostic interchangeable backup recovery and file level recovery from virtual disks |
US20160085606A1 (en) | 2014-09-19 | 2016-03-24 | Netapp Inc. | Cluster-wide outage detection |
US9417968B2 (en) | 2014-09-22 | 2016-08-16 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US9436555B2 (en) | 2014-09-22 | 2016-09-06 | Commvault Systems, Inc. | Efficient live-mount of a backed up virtual machine in a storage management system |
US20160094649A1 (en) | 2014-09-30 | 2016-03-31 | Code 42 Software, Inc. | Node-to-node data distribution |
US10102218B2 (en) | 2014-09-30 | 2018-10-16 | Microsoft Technology Licensing, Llc | File system with per-extent checksums |
US9444811B2 (en) | 2014-10-21 | 2016-09-13 | Commvault Systems, Inc. | Using an enhanced data agent to restore backed up data across autonomous storage management systems |
US9575673B2 (en) | 2014-10-29 | 2017-02-21 | Commvault Systems, Inc. | Accessing a file system using tiered deduplication |
US9804927B2 (en) | 2014-12-27 | 2017-10-31 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Virtual machine distributed checkpointing |
US10108687B2 (en) | 2015-01-21 | 2018-10-23 | Commvault Systems, Inc. | Database protection using block-level mapping |
CN104679907A (en) | 2015-03-24 | 2015-06-03 | 新余兴邦信息产业有限公司 | Realization method and system for high-availability and high-performance database cluster |
US10339106B2 (en) | 2015-04-09 | 2019-07-02 | Commvault Systems, Inc. | Highly reusable deduplication database after disaster recovery |
US10140189B2 (en) | 2015-04-28 | 2018-11-27 | International Business Machines Corporation | Database recovery and index rebuilds |
US9715347B2 (en) | 2015-05-14 | 2017-07-25 | Netapp, Inc. | Virtual disk migration |
US10108502B1 (en) * | 2015-06-26 | 2018-10-23 | EMC IP Holding Company LLC | Data protection using checkpoint restart for cluster shared resources |
US9760398B1 (en) | 2015-06-29 | 2017-09-12 | Amazon Technologies, Inc. | Automatic placement of virtual machine instances |
US9766825B2 (en) | 2015-07-22 | 2017-09-19 | Commvault Systems, Inc. | Browse and restore for block-level backups |
US10089183B2 (en) | 2015-07-31 | 2018-10-02 | Hiveio Inc. | Method and apparatus for reconstructing and checking the consistency of deduplication metadata of a deduplication file system |
US10129357B2 (en) | 2015-08-21 | 2018-11-13 | International Business Machines Corporation | Managing data storage in distributed virtual environment |
US10628194B2 (en) | 2015-09-30 | 2020-04-21 | Netapp Inc. | Techniques for data migration |
US9836368B2 (en) | 2015-10-22 | 2017-12-05 | Netapp, Inc. | Implementing automatic switchover |
US9747179B2 (en) | 2015-10-29 | 2017-08-29 | Netapp, Inc. | Data management agent for selective storage re-caching |
US9892276B2 (en) | 2015-11-11 | 2018-02-13 | International Business Machines Corporation | Verifiable data destruction in a database |
US10481984B1 (en) | 2015-11-23 | 2019-11-19 | Acronis International Gmbh | Backup of virtual machines from storage snapshot |
US10372904B2 (en) | 2016-03-08 | 2019-08-06 | Tanium Inc. | Cost prioritized evaluations of indicators of compromise |
US10296368B2 (en) | 2016-03-09 | 2019-05-21 | Commvault Systems, Inc. | Hypervisor-independent block-level live browse for access to backed up virtual machine (VM) data and hypervisor-free file-level recovery (block-level pseudo-mount) |
US10445188B2 (en) | 2016-04-04 | 2019-10-15 | Vmware, Inc. | Method and system for virtualizing guest-generated file system snapshots |
US10255147B2 (en) * | 2016-04-14 | 2019-04-09 | Vmware, Inc. | Fault tolerance for containers in a virtualized computing environment |
US10417098B2 (en) | 2016-06-28 | 2019-09-17 | International Business Machines Corporation | File level access to block level incremental backups of a virtual disk |
US10564996B2 (en) * | 2016-08-28 | 2020-02-18 | Vmware, Inc. | Parentless virtual machine forking |
CN107885622B (en) * | 2016-09-30 | 2021-03-09 | 伊姆西Ip控股有限责任公司 | Handling Virtual Data Mover (VDM) failover conditions |
US10210048B2 (en) | 2016-10-25 | 2019-02-19 | Commvault Systems, Inc. | Selective snapshot and backup copy operations for individual virtual machines in a shared storage |
US10162528B2 (en) | 2016-10-25 | 2018-12-25 | Commvault Systems, Inc. | Targeted snapshot based on virtual machine location |
US10152251B2 (en) | 2016-10-25 | 2018-12-11 | Commvault Systems, Inc. | Targeted backup of virtual machine |
US20180143880A1 (en) | 2016-11-21 | 2018-05-24 | Commvault Systems, Inc. | Cross-platform virtual machine data and memory backup and resumption |
US10261719B2 (en) * | 2017-01-31 | 2019-04-16 | Hewlett Packard Enterprise Development Lp | Volume and snapshot replication |
US10776329B2 (en) | 2017-03-28 | 2020-09-15 | Commvault Systems, Inc. | Migration of a database management system to cloud storage |
US20180285202A1 (en) | 2017-03-29 | 2018-10-04 | Commvault Systems, Inc. | External fallback system for local computing systems |
US11074140B2 (en) | 2017-03-29 | 2021-07-27 | Commvault Systems, Inc. | Live browsing of granular mailbox data |
US10496547B1 (en) | 2017-05-10 | 2019-12-03 | Parallels International Gmbh | External disk cache for guest operating system in a virtualized environment |
US10664352B2 (en) | 2017-06-14 | 2020-05-26 | Commvault Systems, Inc. | Live browsing of backed up data residing on cloned disks |
US10417096B2 (en) | 2017-07-20 | 2019-09-17 | Vmware, Inc. | Multi-virtual machine time consistent snapshots |
US20190108341A1 (en) | 2017-09-14 | 2019-04-11 | Commvault Systems, Inc. | Ransomware detection and data pruning management |
US20190090305A1 (en) | 2017-09-20 | 2019-03-21 | Unisys Corporation | System and method for providing secure and redundant communications and processing for a collection of multi-state Internet of Things (IoT) devices |
US10592145B2 (en) | 2018-02-14 | 2020-03-17 | Commvault Systems, Inc. | Machine learning-based data object storage |
US10628267B2 (en) | 2018-05-02 | 2020-04-21 | Commvault Systems, Inc. | Client managed data backup process within an enterprise information management system |
US10673943B2 (en) | 2018-05-02 | 2020-06-02 | Commvault Systems, Inc. | Network storage backup using distributed media agents |
US10365964B1 (en) | 2018-05-31 | 2019-07-30 | Capital One Services, Llc | Data processing platform monitoring |
US11016696B2 (en) | 2018-09-14 | 2021-05-25 | Commvault Systems, Inc. | Redundant distributed data storage system |
US10996974B2 (en) | 2019-01-30 | 2021-05-04 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data, including management of cache storage for virtual machine data |
US10768971B2 (en) | 2019-01-30 | 2020-09-08 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data |
US11099956B1 (en) * | 2020-03-26 | 2021-08-24 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
2020
- 2020-03-26 US US16/831,562 patent/US11099956B1/en Active
2021
- 2021-07-16 US US17/377,877 patent/US11663099B2/en Active
2023
- 2023-04-17 US US18/135,639 patent/US12235744B2/en Active
2024
- 2024-10-17 US US18/918,981 patent/US20250036535A1/en Pending
Patent Citations (156)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4267568A (en) | 1975-12-03 | 1981-05-12 | System Development Corporation | Information storage and retrieval system |
US4084231A (en) | 1975-12-18 | 1978-04-11 | International Business Machines Corporation | System for facilitating the copying back of data in disc and tape units of a memory hierarchial system |
US4283787A (en) | 1978-11-06 | 1981-08-11 | British Broadcasting Corporation | Cyclic redundancy data check encoding method and apparatus |
US4417321A (en) | 1981-05-18 | 1983-11-22 | International Business Machines Corp. | Qualifying and sorting file record data |
US4641274A (en) | 1982-12-03 | 1987-02-03 | International Business Machines Corporation | Method for communicating changes made to text from a text processor to a remote host |
US4654819A (en) | 1982-12-09 | 1987-03-31 | Sequoia Systems, Inc. | Memory back-up system |
US4686620A (en) | 1984-07-26 | 1987-08-11 | American Telephone And Telegraph Company, At&T Bell Laboratories | Database backup method |
EP0259912A1 (en) | 1986-09-12 | 1988-03-16 | Hewlett-Packard Limited | File backup facility for a community of personal computers |
US5193154A (en) | 1987-07-10 | 1993-03-09 | Hitachi, Ltd. | Buffered peripheral system and method for backing up and retrieving data to and from backup memory device |
US5005122A (en) | 1987-09-08 | 1991-04-02 | Digital Equipment Corporation | Arrangement with cooperating management server node and network service node |
US5226157A (en) | 1988-03-11 | 1993-07-06 | Hitachi, Ltd. | Backup control method and system in data processing system using identifiers for controlling block data transfer |
US4912637A (en) | 1988-04-26 | 1990-03-27 | Tandem Computers Incorporated | Version management tool |
US4995035A (en) | 1988-10-31 | 1991-02-19 | International Business Machines Corporation | Centralized management in a computer network |
US5093912A (en) | 1989-06-26 | 1992-03-03 | International Business Machines Corporation | Dynamic resource pool expansion and contraction in multiprocessing environments |
EP0405926A2 (en) | 1989-06-30 | 1991-01-02 | Digital Equipment Corporation | Method and apparatus for managing a shadow set of storage media |
US5454099A (en) | 1989-07-25 | 1995-09-26 | International Business Machines Corporation | CPU implemented method for backing up modified data sets in non-volatile store for recovery in the event of CPU failure |
US5133065A (en) | 1989-07-27 | 1992-07-21 | Personal Computer Peripherals Corporation | Backup computer program for networks |
US5321816A (en) | 1989-10-10 | 1994-06-14 | Unisys Corporation | Local-remote apparatus with specialized image storage modules |
US5276867A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data storage system with improved data migration |
US5276860A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data processor with improved backup storage |
US5420996A (en) | 1990-04-27 | 1995-05-30 | Kabushiki Kaisha Toshiba | Data processing system having selective data save and address translation mechanism utilizing CPU idle period |
EP0467546A2 (en) | 1990-07-18 | 1992-01-22 | International Computers Limited | Distributed data processing systems |
US5239647A (en) | 1990-09-07 | 1993-08-24 | International Business Machines Corporation | Data storage hierarchy with shared storage level |
US5301286A (en) | 1991-01-02 | 1994-04-05 | At&T Bell Laboratories | Memory archiving indexing arrangement |
US5212772A (en) | 1991-02-11 | 1993-05-18 | Gigatrend Incorporated | System for storing data in backup tape device |
US5287500A (en) | 1991-06-03 | 1994-02-15 | Digital Equipment Corporation | System for allocating storage spaces based upon required and optional service attributes having assigned priorities |
US5347653A (en) | 1991-06-28 | 1994-09-13 | Digital Equipment Corporation | System for reconstructing prior versions of indexes using records indicating changes between successive versions of the indexes |
US5410700A (en) | 1991-09-04 | 1995-04-25 | International Business Machines Corporation | Computer system which supports asynchronous commitment of data |
EP0541281A2 (en) | 1991-11-04 | 1993-05-12 | AT&T Corp. | Incremental-computer-file backup using signatures |
US5559991A (en) | 1991-11-04 | 1996-09-24 | Lucent Technologies Inc. | Incremental computer file backup using check words |
US5241670A (en) | 1992-04-20 | 1993-08-31 | International Business Machines Corporation | Method and system for automated backup copy ordering in a time zero backup copy session |
US5241668A (en) | 1992-04-20 | 1993-08-31 | International Business Machines Corporation | Method and system for automated termination and resumption in a time zero backup copy process |
US5642496A (en) | 1993-09-23 | 1997-06-24 | Kanfi; Arnon | Method of making a backup copy of a memory over a plurality of copying sessions |
WO1995013580A1 (en) | 1993-11-09 | 1995-05-18 | Arcada Software | Data backup and restore system for a computer network |
EP0774715A1 (en) | 1995-10-23 | 1997-05-21 | Stac Electronics | System for backing up files from disk volumes on multiple nodes of a computer network |
EP0809184A1 (en) | 1996-05-23 | 1997-11-26 | International Business Machines Corporation | Availability and recovery of files using copy storage pools |
EP0899662A1 (en) | 1997-08-29 | 1999-03-03 | Hewlett-Packard Company | Backup and restore system for a computer network |
WO1999012098A1 (en) | 1997-08-29 | 1999-03-11 | Hewlett-Packard Company | Data backup and recovery systems |
US6418478B1 (en) | 1997-10-30 | 2002-07-09 | Commvault Systems, Inc. | Pipelined high speed data transfer mechanism |
EP0981090A1 (en) | 1998-08-17 | 2000-02-23 | Connected Place Limited | A method of producing a checkpoint which describes a base file and a method of generating a difference file defining differences between an updated file and a base file |
US7035880B1 (en) | 1999-07-14 | 2006-04-25 | Commvault Systems, Inc. | Modular backup and retrieval system used in conjunction with a storage area network |
US7395282B1 (en) | 1999-07-15 | 2008-07-01 | Commvault Systems, Inc. | Hierarchical backup and retrieval system |
US7389311B1 (en) | 1999-07-15 | 2008-06-17 | Commvault Systems, Inc. | Modular backup and retrieval system |
US6542972B2 (en) | 2000-01-31 | 2003-04-01 | Commvault Systems, Inc. | Logical view and access to physical storage in modular data and storage management system |
US6658436B2 (en) | 2000-01-31 | 2003-12-02 | Commvault Systems, Inc. | Logical view and access to data managed by a modular data and storage management system |
US6721767B2 (en) | 2000-01-31 | 2004-04-13 | Commvault Systems, Inc. | Application specific rollback in a computer system |
US6760723B2 (en) | 2000-01-31 | 2004-07-06 | Commvault Systems Inc. | Storage management across multiple time zones |
US7003641B2 (en) | 2000-01-31 | 2006-02-21 | Commvault Systems, Inc. | Logical view with granular access to exchange data managed by a modular data and storage management system |
US7107298B2 (en) | 2001-09-28 | 2006-09-12 | Commvault Systems, Inc. | System and method for archiving objects in an information store |
US7346623B2 (en) | 2001-09-28 | 2008-03-18 | Commvault Systems, Inc. | System and method for generating and managing quick recovery volumes |
US7130970B2 (en) | 2002-09-09 | 2006-10-31 | Commvault Systems, Inc. | Dynamic storage device pooling in a computer system |
US7162496B2 (en) | 2002-09-16 | 2007-01-09 | Commvault Systems, Inc. | System and method for blind media support |
US7603386B2 (en) | 2002-09-16 | 2009-10-13 | Commvault Systems, Inc. | Systems and methods for managing location of media in a storage system |
US8370542B2 (en) | 2002-09-16 | 2013-02-05 | Commvault Systems, Inc. | Combined stream auxiliary copy system and method |
US7568080B2 (en) | 2002-10-07 | 2009-07-28 | Commvault Systems, Inc. | Snapshot storage and management system with indexing and user interface |
US7174433B2 (en) | 2003-04-03 | 2007-02-06 | Commvault Systems, Inc. | System and method for dynamically sharing media in a computer network |
US7246207B2 (en) | 2003-04-03 | 2007-07-17 | Commvault Systems, Inc. | System and method for dynamically performing storage operations in a computer network |
US7454569B2 (en) | 2003-06-25 | 2008-11-18 | Commvault Systems, Inc. | Hierarchical system and method for performing storage operations in a computer network |
US7539707B2 (en) | 2003-11-13 | 2009-05-26 | Commvault Systems, Inc. | System and method for performing an image level snapshot and for restoring partial volume data |
US7546324B2 (en) | 2003-11-13 | 2009-06-09 | Commvault Systems, Inc. | Systems and methods for performing storage operations using network attached storage |
US7734578B2 (en) | 2003-11-13 | 2010-06-08 | Commvault Systems, Inc. | System and method for performing integrated storage operations |
US8156086B2 (en) | 2003-11-13 | 2012-04-10 | Commvault Systems, Inc. | Systems and methods for stored data verification |
US7529782B2 (en) | 2003-11-13 | 2009-05-05 | Commvault Systems, Inc. | System and method for performing a snapshot and for restoring data |
US7440982B2 (en) | 2003-11-13 | 2008-10-21 | Commvault Systems, Inc. | System and method for stored data archive verification |
US7315923B2 (en) | 2003-11-13 | 2008-01-01 | Commvault Systems, Inc. | System and method for combining data streams in pipelined storage operations in a storage network |
US7343453B2 (en) | 2004-04-30 | 2008-03-11 | Commvault Systems, Inc. | Hierarchical systems and methods for providing a unified view of storage information |
US20060224846A1 (en) | 2004-11-05 | 2006-10-05 | Amarendran Arun P | System and method to support single instance storage operations |
WO2006052872A2 (en) | 2004-11-05 | 2006-05-18 | Commvault Systems, Inc. | System and method to support single instance storage operations |
US7500053B1 (en) | 2004-11-05 | 2009-03-03 | Commvault Systems, Inc. | Method and system for grouping storage system components |
US7809914B2 (en) | 2004-11-05 | 2010-10-05 | Commvault Systems, Inc. | Methods and system of pooling storage devices |
US7536291B1 (en) | 2004-11-08 | 2009-05-19 | Commvault Systems, Inc. | System and method to support simulated storage operations |
US8230195B2 (en) | 2004-11-08 | 2012-07-24 | Commvault Systems, Inc. | System and method for performing auxiliary storage operations |
US7490207B2 (en) | 2004-11-08 | 2009-02-10 | Commvault Systems, Inc. | System and method for performing auxiliary storage operations |
US8959299B2 (en) | 2004-11-15 | 2015-02-17 | Commvault Systems, Inc. | Using a snapshot as a data source |
US7660807B2 (en) | 2005-11-28 | 2010-02-09 | Commvault Systems, Inc. | Systems and methods for cataloging metadata for a metabase |
US7747579B2 (en) | 2005-11-28 | 2010-06-29 | Commvault Systems, Inc. | Metabase for facilitating data classification |
US7613752B2 (en) | 2005-11-28 | 2009-11-03 | Commvault Systems, Inc. | Systems and methods for using metadata to enhance data management operations |
US7657550B2 (en) | 2005-11-28 | 2010-02-02 | Commvault Systems, Inc. | User interfaces and methods for managing data in a metabase |
US7801864B2 (en) | 2005-11-28 | 2010-09-21 | Commvault Systems, Inc. | Systems and methods for using metadata to enhance data identification operations |
US7620710B2 (en) | 2005-12-19 | 2009-11-17 | Commvault Systems, Inc. | System and method for performing multi-path storage operations |
US7661028B2 (en) | 2005-12-19 | 2010-02-09 | Commvault Systems, Inc. | Rolling cache configuration for a data replication system |
US7636743B2 (en) | 2005-12-19 | 2009-12-22 | Commvault Systems, Inc. | Pathname translation in a data replication system |
US7617262B2 (en) | 2005-12-19 | 2009-11-10 | Commvault Systems, Inc. | Systems and methods for monitoring application data in a data replication system |
US7617253B2 (en) | 2005-12-19 | 2009-11-10 | Commvault Systems, Inc. | Destination systems and methods for performing data replication |
US7543125B2 (en) | 2005-12-19 | 2009-06-02 | Commvault Systems, Inc. | System and method for performing time-flexible calendric storage operations |
US7606844B2 (en) | 2005-12-19 | 2009-10-20 | Commvault Systems, Inc. | System and method for performing replication copy storage operations |
US7651593B2 (en) | 2005-12-19 | 2010-01-26 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US8170995B2 (en) | 2006-10-17 | 2012-05-01 | Commvault Systems, Inc. | Method and system for offline indexing of content and classifying stored data |
US7734669B2 (en) | 2006-12-22 | 2010-06-08 | Commvault Systems, Inc. | Managing copies of data |
US8229954B2 (en) | 2006-12-22 | 2012-07-24 | Commvault Systems, Inc. | Managing copies of data |
US20090319534A1 (en) | 2008-06-24 | 2009-12-24 | Parag Gokhale | Application-aware and remote single instance data management |
US9098495B2 (en) | 2008-06-24 | 2015-08-04 | Commvault Systems, Inc. | Application-aware and remote single instance data management |
US8307177B2 (en) | 2008-09-05 | 2012-11-06 | Commvault Systems, Inc. | Systems and methods for management of virtualization data |
US20200334221A1 (en) | 2008-09-05 | 2020-10-22 | Commvault Systems, Inc. | Classification of virtualization data |
US8578120B2 (en) | 2009-05-22 | 2013-11-05 | Commvault Systems, Inc. | Block-level single instancing |
US8285681B2 (en) | 2009-06-30 | 2012-10-09 | Commvault Systems, Inc. | Data object store and server for a cloud storage environment, including data deduplication and data management across multiple cloud storage sites |
US8433682B2 (en) | 2009-12-31 | 2013-04-30 | Commvault Systems, Inc. | Systems and methods for analyzing snapshots |
US8595191B2 (en) | 2009-12-31 | 2013-11-26 | Commvault Systems, Inc. | Systems and methods for performing data management operations using snapshots |
US20200159627A1 (en) | 2010-06-04 | 2020-05-21 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
US8504526B2 (en) | 2010-06-04 | 2013-08-06 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations |
US9239687B2 (en) | 2010-09-30 | 2016-01-19 | Commvault Systems, Inc. | Systems and methods for retaining and using data block signatures in data protection operations |
US9588972B2 (en) | 2010-09-30 | 2017-03-07 | Commvault Systems, Inc. | Efficient data management improvements, such as docking limited-feature data management modules to a full-featured data management system |
US8364652B2 (en) | 2010-09-30 | 2013-01-29 | Commvault Systems, Inc. | Content aligned block-based deduplication |
US8954446B2 (en) | 2010-12-14 | 2015-02-10 | Commvault Systems, Inc. | Client-side repository in a networked deduplicated storage system |
US20120150818A1 (en) | 2010-12-14 | 2012-06-14 | Commvault Systems, Inc. | Client-side repository in a networked deduplicated storage system |
US9020900B2 (en) | 2010-12-14 | 2015-04-28 | Commvault Systems, Inc. | Distributed deduplicated storage system |
US20120150826A1 (en) | 2010-12-14 | 2012-06-14 | Commvault Systems, Inc. | Distributed deduplicated storage system |
US8706867B2 (en) | 2011-03-31 | 2014-04-22 | Commvault Systems, Inc. | Realtime streaming of multimedia content from secondary storage devices |
US9461881B2 (en) | 2011-09-30 | 2016-10-04 | Commvault Systems, Inc. | Migration of existing computing systems to cloud computing sites or virtual machines |
US9372827B2 (en) | 2011-09-30 | 2016-06-21 | Commvault Systems, Inc. | Migration of an existing computing system to new hardware |
US9116633B2 (en) | 2011-09-30 | 2015-08-25 | Commvault Systems, Inc. | Information management of virtual machines having mapped storage devices |
US9298715B2 (en) | 2012-03-07 | 2016-03-29 | Commvault Systems, Inc. | Data storage system utilizing proxy device for storage operations |
US9342537B2 (en) | 2012-04-23 | 2016-05-17 | Commvault Systems, Inc. | Integrated snapshot interface for a data storage system |
US9965306B1 (en) * | 2012-06-27 | 2018-05-08 | EMC IP Holding Company LLC | Snapshot replication |
US9311121B2 (en) | 2012-12-21 | 2016-04-12 | Commvault Systems, Inc. | Archiving virtual machines in a data storage system |
US9378035B2 (en) | 2012-12-28 | 2016-06-28 | Commvault Systems, Inc. | Systems and methods for repurposing virtual machines |
US20190324791A1 (en) | 2012-12-28 | 2019-10-24 | Commvault Systems, Inc. | Systems and methods for repurposing virtual machines |
US20140196038A1 (en) | 2013-01-08 | 2014-07-10 | Commvault Systems, Inc. | Virtual machine management in a data storage system |
US20200265024A1 (en) | 2013-01-11 | 2020-08-20 | Commvault Systems, Inc. | Systems and methods for rule-based virtual machine data protection |
US9495404B2 (en) | 2013-01-11 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods to process block-level backup for selective file restoration for virtual machines |
US20140201170A1 (en) | 2013-01-11 | 2014-07-17 | Commvault Systems, Inc. | High availability distributed deduplicated storage system |
US9633033B2 (en) | 2013-01-11 | 2017-04-25 | Commvault Systems, Inc. | High availability distributed deduplicated storage system |
US9886346B2 (en) | 2013-01-11 | 2018-02-06 | Commvault Systems, Inc. | Single snapshot for multiple agents |
US9286110B2 (en) | 2013-01-14 | 2016-03-15 | Commvault Systems, Inc. | Seamless virtual machine recall in a data storage system |
US9483362B2 (en) | 2013-05-08 | 2016-11-01 | Commvault Systems, Inc. | Use of auxiliary data protection software in failover operations |
US9939981B2 (en) | 2013-09-12 | 2018-04-10 | Commvault Systems, Inc. | File manager integration with virtualization in an information management system with an enhanced storage manager, including user control and storage management of virtual machines |
US9495251B2 (en) | 2014-01-24 | 2016-11-15 | Commvault Systems, Inc. | Snapshot readiness checking and reporting |
US9639426B2 (en) | 2014-01-24 | 2017-05-02 | Commvault Systems, Inc. | Single snapshot for multiple applications |
US10650057B2 (en) | 2014-07-16 | 2020-05-12 | Commvault Systems, Inc. | Volume or virtual machine level backup and generating placeholders for virtual machine files |
US9710465B2 (en) | 2014-09-22 | 2017-07-18 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US10776209B2 (en) | 2014-11-10 | 2020-09-15 | Commvault Systems, Inc. | Cross-platform virtual machine backup and replication |
US9448731B2 (en) | 2014-11-14 | 2016-09-20 | Commvault Systems, Inc. | Unified snapshot storage management |
US9983936B2 (en) | 2014-11-20 | 2018-05-29 | Commvault Systems, Inc. | Virtual machine change block tracking |
US20160203060A1 (en) * | 2015-01-09 | 2016-07-14 | Vmware, Inc. | Client deployment with disaster recovery considerations |
US9898213B2 (en) | 2015-01-23 | 2018-02-20 | Commvault Systems, Inc. | Scalable auxiliary copy processing using media agent resources |
US9639274B2 (en) | 2015-04-14 | 2017-05-02 | Commvault Systems, Inc. | Efficient deduplication database validation |
US20160350391A1 (en) | 2015-05-26 | 2016-12-01 | Commvault Systems, Inc. | Replication using deduplicated secondary copy data |
US20160371127A1 (en) * | 2015-06-19 | 2016-12-22 | Vmware, Inc. | Resource management for containers in a virtualized environment |
US10084873B2 (en) | 2015-06-19 | 2018-09-25 | Commvault Systems, Inc. | Assignment of data agent proxies for executing virtual-machine secondary copy operations including streaming backup jobs |
US20170168903A1 (en) * | 2015-12-09 | 2017-06-15 | Commvault Systems, Inc. | Live synchronization and management of virtual machines across computing and virtualization platforms and using live synchronization to support disaster recovery |
US20170185488A1 (en) | 2015-12-23 | 2017-06-29 | Commvault Systems, Inc. | Application-level live synchronization across computing platforms including synchronizing co-resident applications to disparate standby destinations and selectively synchronizing some applications and not others |
US20170192866A1 (en) | 2015-12-30 | 2017-07-06 | Commvault Systems, Inc. | System for redirecting requests after a secondary storage computing device failure |
US20170193003A1 (en) | 2015-12-30 | 2017-07-06 | Commvault Systems, Inc. | Redundant and robust distributed deduplication data storage system |
US20170235647A1 (en) | 2016-02-12 | 2017-08-17 | Commvault Systems, Inc. | Data protection operations based on network path information |
US20170242871A1 (en) | 2016-02-18 | 2017-08-24 | Commvault Systems, Inc. | Data restoration operations based on network path information |
US10592350B2 (en) | 2016-03-09 | 2020-03-17 | Commvault Systems, Inc. | Virtual server cloud file system for virtual machine restore to cloud operations |
US10503753B2 (en) | 2016-03-10 | 2019-12-10 | Commvault Systems, Inc. | Snapshot replication operations based on incremental block change tracking |
US10747630B2 (en) | 2016-09-30 | 2020-08-18 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US10678758B2 (en) | 2016-11-21 | 2020-06-09 | Commvault Systems, Inc. | Cross-platform virtual machine data and memory backup and replication |
US20180267861A1 (en) | 2017-03-15 | 2018-09-20 | Commvault Systems, Inc. | Application aware backup of virtual machines |
US10474542B2 (en) | 2017-03-24 | 2019-11-12 | Commvault Systems, Inc. | Time-based virtual machine reversion |
US10387073B2 (en) | 2017-03-29 | 2019-08-20 | Commvault Systems, Inc. | External dynamic virtual machine synchronization |
US10853195B2 (en) | 2017-03-31 | 2020-12-01 | Commvault Systems, Inc. | Granular restoration of virtual machine application data |
US10732885B2 (en) | 2018-02-14 | 2020-08-04 | Commvault Systems, Inc. | Block-level live browsing and private writable snapshots using an ISCSI server |
US10877928B2 (en) | 2018-03-07 | 2020-12-29 | Commvault Systems, Inc. | Using utilities injected into cloud-based virtual machines for speeding up virtual machine backup operations |
US20200183802A1 (en) | 2018-12-06 | 2020-06-11 | Commvault Systems, Inc. | Assigning backup resources based on failover of partnered data storage servers in a data storage management system |
Non-Patent Citations (5)
Title |
---|
Arneson, "Mass Storage Archiving in Network Environments," IEEE, Oct. 31-Nov. 3, 1988, pp. 45-50. |
Cabrera, et al. "ADSM: A Multi-Platform, Scalable, Back-up and Archive Mass Storage System," Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA. |
Eitel, "Backup and Storage Management in Distributed Heterogeneous Environments," IEEE, 1994, pp. 124-126. |
Huff, KL, "Data Set Usage Sequence Number," IBM Technical Disclosure Bulletin, vol. 24, No. 5, Oct. 1981 New York, US, pp. 2404-2406. |
Rosenblum et al., "The Design and Implementation of a Log-Structured File System," Operating Systems Review SIGOPS, vol. 25, No. 5, May 1991, New York, US, pp. 1-15. |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12001295B2 (en) | 2010-06-04 | 2024-06-04 | Commvault Systems, Inc. | Heterogeneous indexing and load balancing of backup and indexing resources |
US11449394B2 (en) | 2010-06-04 | 2022-09-20 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
US11321189B2 (en) | 2014-04-02 | 2022-05-03 | Commvault Systems, Inc. | Information management by a media agent in the absence of communications with a storage manager |
US11429499B2 (en) | 2016-09-30 | 2022-08-30 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US11550680B2 (en) | 2018-12-06 | 2023-01-10 | Commvault Systems, Inc. | Assigning backup resources in a data storage management system based on failover of partnered data storage resources |
US11467863B2 (en) | 2019-01-30 | 2022-10-11 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data |
US11947990B2 (en) | 2019-01-30 | 2024-04-02 | Commvault Systems, Inc. | Cross-hypervisor live-mount of backed up virtual machine data |
US12061524B2 (en) | 2019-06-24 | 2024-08-13 | Commvault Systems, Inc. | Content indexing of files in block-level backup copies of virtual machine data |
US11977461B2 (en) | 2019-06-27 | 2024-05-07 | Netapp, Inc. | Incremental restore of a virtual machine |
US11650886B2 (en) | 2019-06-27 | 2023-05-16 | Netapp, Inc. | Orchestrator for orchestrating operations between a computing environment hosting virtual machines and a storage environment |
US11615001B2 (en) * | 2019-06-27 | 2023-03-28 | Netapp, Inc. | Incremental restore of a virtual machine |
US12045144B2 (en) | 2019-06-27 | 2024-07-23 | Netapp, Inc. | Orchestrator for orchestrating operations between a computing environment hosting virtual machines and a storage environment |
US20200409803A1 (en) * | 2019-06-27 | 2020-12-31 | Netapp Inc. | Incremental restore of a virtual machine |
US11853104B2 (en) | 2019-06-27 | 2023-12-26 | Netapp, Inc. | Virtual machine backup from computing environment to storage environment |
US11467753B2 (en) | 2020-02-14 | 2022-10-11 | Commvault Systems, Inc. | On-demand restore of virtual machine data |
US11714568B2 (en) | 2020-02-14 | 2023-08-01 | Commvault Systems, Inc. | On-demand restore of virtual machine data |
US11442768B2 (en) | 2020-03-12 | 2022-09-13 | Commvault Systems, Inc. | Cross-hypervisor live recovery of virtual machines |
US20210342237A1 (en) * | 2020-03-26 | 2021-11-04 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11663099B2 (en) * | 2020-03-26 | 2023-05-30 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US12235744B2 (en) | 2020-03-26 | 2025-02-25 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11960364B2 (en) * | 2020-04-14 | 2024-04-16 | Capital One Services, Llc | Event processing |
US11748143B2 (en) | 2020-05-15 | 2023-09-05 | Commvault Systems, Inc. | Live mount of virtual machines in a public cloud computing environment |
US12086624B2 (en) | 2020-05-15 | 2024-09-10 | Commvault Systems, Inc. | Live recovery of virtual machines in a public cloud computing environment based on temporary live mount |
US11500669B2 (en) | 2020-05-15 | 2022-11-15 | Commvault Systems, Inc. | Live recovery of virtual machines in a public cloud computing environment |
US11513708B2 (en) | 2020-08-25 | 2022-11-29 | Commvault Systems, Inc. | Optimized deduplication based on backup frequency in a distributed data storage system |
US11693572B2 (en) | 2020-08-25 | 2023-07-04 | Commvault Systems, Inc. | Optimized deduplication based on backup frequency in a distributed data storage system |
US11500566B2 (en) | 2020-08-25 | 2022-11-15 | Commvault Systems, Inc. | Cloud-based distributed data storage system using block-level deduplication based on backup frequencies of incoming backup copies |
US11789830B2 (en) | 2020-09-22 | 2023-10-17 | Commvault Systems, Inc. | Anti-entropy-based metadata recovery in a strongly consistent distributed data storage system |
US11570243B2 (en) | 2020-09-22 | 2023-01-31 | Commvault Systems, Inc. | Decommissioning, re-commissioning, and commissioning new metadata nodes in a working distributed data storage system |
US11647075B2 (en) | 2020-09-22 | 2023-05-09 | Commvault Systems, Inc. | Commissioning and decommissioning metadata nodes in a running distributed data storage system |
US12294622B2 (en) | 2020-09-22 | 2025-05-06 | Commvault Systems, Inc. | Commissioning and decommissioning metadata nodes in a running distributed data storage system |
US12063270B2 (en) | 2020-09-22 | 2024-08-13 | Commvault Systems, Inc. | Commissioning and decommissioning metadata nodes in a running distributed data storage system |
US11314687B2 (en) | 2020-09-24 | 2022-04-26 | Commvault Systems, Inc. | Container data mover for migrating data between distributed data storage systems integrated with application orchestrators |
US12007940B2 (en) | 2020-09-24 | 2024-06-11 | Commvault Systems, Inc. | Container data mover for migrating data between distributed data storage systems integrated with application orchestrators |
US11228552B1 (en) * | 2020-10-20 | 2022-01-18 | Servicenow, Inc. | Automatically handling messages of a non-operational mail transfer agent within a virtualization container |
US11327852B1 (en) * | 2020-10-22 | 2022-05-10 | Dell Products L.P. | Live migration/high availability system |
US11461200B2 (en) * | 2020-11-19 | 2022-10-04 | Kyndryl, Inc. | Disaster recovery failback advisor |
US11334450B1 (en) * | 2021-02-25 | 2022-05-17 | Qnap Systems, Inc. | Backup method and backup system for virtual machine |
US12019525B2 (en) | 2021-10-05 | 2024-06-25 | Commvault Systems, Inc. | Cloud-based recovery of backed up data using auxiliary copy replication and on-demand failover resources |
US20230315592A1 (en) * | 2022-03-30 | 2023-10-05 | Rubrik, Inc. | Virtual machine failover management for geo-redundant data centers |
US11921596B2 (en) * | 2022-03-30 | 2024-03-05 | Rubrik, Inc. | Virtual machine failover management for geo-redundant data centers |
US12045147B2 (en) * | 2022-10-03 | 2024-07-23 | Rubrik, Inc. | Lossless failover for data recovery |
US11995042B1 (en) * | 2023-01-11 | 2024-05-28 | Dell Products L.P. | Fast recovery for replication corruptions |
US20250021452A1 (en) * | 2023-07-14 | 2025-01-16 | Sap Se | Disaster recovery using incremental database recovery |
US12306725B2 (en) | 2023-12-15 | 2025-05-20 | Commvault Systems, Inc. | Cloud-based recovery of backed up data using auxiliary copy replication and on-demand failover resources |
Also Published As
Publication number | Publication date |
---|---|
US11663099B2 (en) | 2023-05-30 |
US20250036535A1 (en) | 2025-01-30 |
US20230251945A1 (en) | 2023-08-10 |
US12235744B2 (en) | 2025-02-25 |
US20210342237A1 (en) | 2021-11-04 |
Similar Documents
Publication | Title |
---|---|
US12235744B2 (en) | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11816005B2 (en) | Systems and methods for change block tracking for backing up changed data |
US11573862B2 (en) | Application aware backup of virtual machines |
US11520755B2 (en) | Migration of a database management system to cloud storage |
US11544155B2 (en) | Granular restoration of virtual machine application data |
US11467863B2 (en) | Cross-hypervisor live mount of backed up virtual machine data |
US12199952B2 (en) | Data protection component scaling in a cloud-based data storage system |
US20210374016A1 (en) | Synchronization of a database by restoring copies of changed database objects |
US20220035559A1 (en) | Managing subordinate storage operation pod cells using a global repository cell or master storage operation cell |
US10996974B2 (en) | Cross-hypervisor live mount of backed up virtual machine data, including management of cache storage for virtual machine data |
US10824459B2 (en) | Targeted snapshot based on virtual machine location |
US20210334174A1 (en) | Dynamically allocating streams during restoration of data |
US11609826B2 (en) | Multi-streaming backup operations for mailboxes |
US10152251B2 (en) | Targeted backup of virtual machine |
US20220043727A1 (en) | Assigning backup resources in a data storage management system based on failover of partnered data storage resources |
US11308034B2 (en) | Continuously run log backup with minimal configuration and resource usage from the source machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
 | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
 | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4 |