US20120011500A1 - Managing a memory segment using a memory virtual appliance - Google Patents
Managing a memory segment using a memory virtual appliance
- Publication number
- US20120011500A1 (application US12/833,438)
- Authority
- US
- United States
- Prior art keywords
- memory
- virtual appliance
- data
- segment
- memory segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/14—Protection against unauthorised use of memory or access to memory
- G06F12/1458—Protection against unauthorised use of memory or access to memory by checking the subject access rights
- G06F12/1491—Protection against unauthorised use of memory or access to memory by checking the subject access rights in a hierarchical protection system, e.g. privilege levels, memory rings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
- Increasingly, modern data centers are designed with a heterogeneous mixture of computing nodes including “fat” computing nodes, “thin” computing nodes, and dedicated nodes to accelerate important functions. Fat computing nodes are nodes with multiple sockets of high-end symmetrical multiprocessors (SMPs) with large memory spaces, while thin computing nodes are relatively low-power and low-cost processors with reduced memory. The dedicated nodes are nodes that are limited in purpose and/or functionality and include nodes that are used for memory. These memory-purposed nodes are known to be designated as memory appliances.
- Memory appliances are useful in several environments in the datacenter, such as acceleration of transaction processing, storing metadata for fast locking, in-memory databases for analytics and business intelligence (BI), storage caching or tier-0 storage. When used as memory expanders, memory appliances have also been shown to be effective as a remote paging device under hypervisor control. Additionally, when used to encapsulate high level abstractions (such as memcached) memory appliances are known to significantly accelerate dynamic web serving.
- However, these approaches represent ad-hoc solutions that only address limited aspects at a time of memory usage in modern data centers. In other words, the conventional approaches to memory usage tend to be directed towards a single functionality and/or rely on some combination of special-purpose hardware and software. Additionally, these approaches also do not provide a uniform way of covering centralized and peer-to-peer approaches, whose combination is becoming increasingly common as the modern data center evolves and gradually introduces new functionalities.
- Features of the present invention will become apparent to those skilled in the art from the following description with reference to the figures, in which:
- FIG. 1 depicts a block diagram of a computing ensemble comprised of a mixture of computing nodes, in which various embodiments of the invention may be implemented, according to an example embodiment of the invention;
- FIG. 2 depicts a simplified block diagram of a data processing infrastructure configured to be implemented in a computing environment, according to an example embodiment of the invention;
- FIG. 3A depicts a schematic diagram of a portion of a node depicted in FIG. 2, according to an example embodiment of the invention;
- FIG. 3B depicts a schematic diagram of a portion of a plurality of nodes including the node depicted in FIG. 3A, according to an example embodiment of the invention;
- FIG. 4 shows a flow diagram of a method for managing a memory segment by using a memory virtual appliance, according to an example embodiment of the invention; and
- FIG. 5 illustrates a computer system, which may be employed to perform various functions of the nodes depicted in FIGS. 2, 3A and 3B in performing some or all of the steps contained in the flow diagram depicted in FIG. 4, according to an example embodiment of the invention.
- For simplicity and illustrative purposes, the present invention is described by referring mainly to an example embodiment thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one of ordinary skill in the art that the present invention may be practiced without limitation to these specific details. In other instances, well-known methods and structures have not been described in detail so as not to unnecessarily obscure the present invention.
- Disclosed herein are embodiments directed to a method and node for managing a memory segment through use of a memory virtual appliance. The memory virtual appliance comprises a virtual machine configured to manage a memory segment in a physical memory and is configured to encapsulate the data. The memory virtual appliance is implemented using a virtualization wrapper that comprises computer readable code that enables the encapsulated data to be shared among a plurality of clients. In one regard, the memory virtual appliance enables the encapsulated data to be disaggregated from a client, such as a computing device or a virtual machine operating on the computing device, to thus be accessed by multiple clients. In another regard, the memory virtual appliance is able to actively manage the memory segment containing the data independent of instructions from a client. As such, encapsulation of data with the memory virtual appliance as disclosed herein offers a relatively rich access interface to clients. For instance, because the memory virtual appliances disclosed herein are not tied to a particular device, the memory virtual appliances may be stored in devices having excess memory, to thereby substantially maximize usage of available memory capacities.
- The memory virtual appliances additionally provide an abstraction that may be used to implement a variety of applications. In one example, the memory virtual appliances may be used to encapsulate data and to offer a rich access interface to clients that require additional memory. In addition, the memory virtual appliances may be relatively lightweight and may be employed under hypervisor control to transparently implement resilience and distributed replication functionalities without compromising performance. The memory virtual appliances discussed herein are hardware independent and may be used to interface to a wide variety of configurations of physical memory (including non-volatile memory) and to also expose excess dynamic random access memory (DRAM) capacity of computing nodes.
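- To make the abstraction above concrete, the following is a minimal sketch, in Python, of how a memory virtual appliance might be modeled: a lightweight wrapper that owns a memory segment, encapsulates the data it holds, and mediates access for several clients. The class and method names (MemoryVirtualAppliance, authorize, read, write) are illustrative assumptions for this sketch, not part of the patent or of any real hypervisor API.

```python
# Minimal, hypothetical sketch of the "memory virtual appliance" idea:
# a lightweight wrapper that owns a memory segment and mediates all
# client access to the data it encapsulates. Names are illustrative only.

class MemoryVirtualAppliance:
    def __init__(self, segment_size):
        self._segment = bytearray(segment_size)   # the hosted memory segment
        self._authorized = set()                  # clients allowed to access the data
        self._metadata = {}                       # encapsulation metadata (owner, policies, ...)

    def authorize(self, client_id):
        """Grant a client (a VM, node, or device) access to the encapsulated data."""
        self._authorized.add(client_id)

    def write(self, client_id, offset, payload):
        if client_id not in self._authorized:
            raise PermissionError(f"{client_id} may not access this segment")
        self._segment[offset:offset + len(payload)] = payload

    def read(self, client_id, offset, length):
        if client_id not in self._authorized:
            raise PermissionError(f"{client_id} may not access this segment")
        return bytes(self._segment[offset:offset + length])


# Usage: two clients share the same virtually stored data through the MVA.
mva = MemoryVirtualAppliance(segment_size=4096)
mva.authorize("vm-220a")
mva.authorize("node-210b")
mva.write("vm-220a", 0, b"shared state")
print(mva.read("node-210b", 0, 12))   # b'shared state'
```

- Because both clients address the data through the appliance rather than through a physical address, the appliance (and the data) can later be relocated without the clients noticing, which is the disaggregation property described above.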
- With reference first to FIG. 1, there is shown a block diagram of a computing ensemble 100 comprised of a mixture of computing nodes, in which various embodiments of the invention may be implemented, according to an example embodiment. It should be understood that the computing ensemble 100 may include additional components and that one or more of the components described herein may be removed and/or modified without departing from a scope of the computing ensemble 100.
- As shown in FIG. 1, the computing ensemble 100 includes a high-radix switch 110, a plurality of fat computing nodes 112 a-112 n, a plurality of thin computing nodes 114 a-114 n and a plurality of memory nodes 116 a-116 n. Although not shown, all of the nodes in the computing ensemble 100 may be connected through a fast and flat optical fabric.
- In any regard, as shown in FIG. 1, the fat computing nodes 112 a-112 n comprise nodes with central processing units (CPUs) 120 and physical memories (PMs) 122 a-122 n. The CPUs 120 may comprise, for example, multiple sockets of high-end symmetrical multiprocessors (SMPs) and the PMs 122 a-122 n may comprise memories with relatively large memory capacities. The thin computing nodes 114 a-114 n are also depicted as including CPUs 120 and PMs 122 a. The CPUs 120 of the thin computing nodes 114 a-114 n generally comprise relatively low-power and low-cost processors and the PMs 122 a comprise relatively low capacity memories. The memory nodes 116 a-116 n may comprise memory appliances having a plurality of PMs 122 a-122 n and non-volatile memory cells (NVs) 124 a-124 n, along with one or more CPUs 120 to control the memory appliances. Although not explicitly shown, the computing nodes 112 a-112 n and 114 a-114 n may also include NVs 124 a-124 n.
- As discussed in greater detail herein below, specialized virtual machines, which are termed “memory virtual appliances” (MVAs) throughout the present disclosure, may operate on one or more of the nodes 112 a-112 n, 114 a-114 n, and 116 a-116 n to enable data to be stored on the node(s) 112 a-112 n, 114 a-114 n, and 116 a-116 n virtually. In addition, the MVAs are associated with, assigned to, or host respective memory segments in the PMs 122 a-122 n or NVs 124 a-124 n and make those memory segments visible to clients, such as, virtual machines, servers, client devices, etc., regardless of whether the clients are located in the same node, the same network, etc., as the MVAs. In this regard, the clients may interact with the MVAs to store and access data virtually as if the data were stored locally on the clients. In addition, because the MVAs operate with hypervisors that are typically employed with conventional virtual machines, the clients may access the virtually stored data through either or both the MVAs or the hypervisor, which may be necessary, for instance, when the MVAs are not running.
- As such, instead of being tied directly to any one of the PMs 122 a-122 n or NVs 124 a-124 n, the data may be virtually stored on one or more of the PMs 122 a-122 n or NVs 124 a-124 n through implementation of the memory virtual appliances discussed herein. Because the data may be disassociated from the physical memory upon which the data is stored, the data may be manipulated in various manners that are unavailable to data that are tied to particular physical memories.
- Turning now to FIG. 2, there is shown a simplified block diagram of a data processing infrastructure 200 configured to be implemented in a computing environment, such as, the computing ensemble 100 depicted in FIG. 1, according to an example. It should be understood that the data processing infrastructure 200 may include additional components and that one or more of the components described herein may be removed and/or modified without departing from a scope of the data processing infrastructure 200.
- Generally speaking, the data processing infrastructure 200 comprises a plurality of nodes 210 a-210 n, where n is a value greater than 1. The plurality of nodes 210 a-210 n may comprise a homogenous set or a heterogeneous mixture of computing nodes. Thus, for instance, the nodes 210 a-210 n may comprise the fat computing nodes 112 a-112 n, the thin computing nodes 114 a-114 n, and/or the memory nodes 116 a-116 n depicted in FIG. 1. In this regard, each of the nodes 210 a-210 n comprises a computing device, such as, a server or a memory node having a processor 214 for implementing and/or executing various instructions in each of the nodes 210 a-210 n.
- In one particular implementation, one or more of the nodes 210 a-210 n comprise servers upon which one or more virtual machines (VMs) 220 a-220 n are run. As is generally known to those of ordinary skill in the art, the VMs 220 a-220 n comprise software implementations of machines, such as, computers, that execute programs similar to a physical machine. In addition, the nodes 210 a-210 n include respective hypervisors 230, which may comprise a software layer or hardware that provides virtualization to the VMs 220 a-220 n. The hypervisors 230, or virtual machine monitors, generally operate to provide the VMs 220 a-220 n with a virtualization platform upon which the VMs 220 a-220 n operate and to monitor the execution of the VMs 220 a-220 n. Examples of suitable virtualization platforms include those available from XEN, VirtualBox, and VMware.
- According to an embodiment, one or more of the VMs 220 a-220 n in one or more of the nodes 210 a-210 n comprise specialized VMs configured to be implemented as memory virtual appliances (MVAs) configured to host or control respective memory segments in the physical memories 212 a-212 n of the one or more nodes 210 a-210 n. The physical memories 212 a-212 n may comprise any of a variety of storage devices, such as, solid-state disks, disk caches, flash memories, etc. In addition, the physical memories 212 a-212 n may be volatile or non-volatile, replaceable or irreplaceable, storage devices. Moreover, the physical memories 212 a-212 n may be homogeneous with respect to each other or two or more of the physical memories 212 a-212 n may be heterogeneous with respect to each other.
- The remaining VMs 220 a-220 n in this embodiment may comprise system VMs or other types of process virtual machines. The MVA(s) 220 a-220 n are generally implemented using a light-weight operating system and comprise computer readable code that hosts or controls respective memory segments in the physical memories 212 a-212 n. The operating system is considered to be “light-weight” because its sole function may be to manage the data stored in the memory segments under the control of the MVAs 220 a-220 n. In any regard, the virtualization wrapper generally enables data stored in the memory segment controlled by the memory virtual appliance 220 a-220 n to be shared among a plurality of clients, which may comprise other nodes 210 a-210 n, input/output node(s) 260, or nodes located outside of the data processing infrastructure 200. The input/output nodes 260 may comprise computing devices, such as, servers, user terminals, etc., configured to communicate data with the nodes 210 a-210 n over a network 240. The network 240 may comprise a local area network, a wide area network, the Internet, etc.
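- The dual access path described above, in which clients normally reach the hosted segment through the MVA but the hypervisor retains enough bookkeeping to serve the data when the MVA is not running, might be pictured as in the sketch below. This is a hypothetical illustration; the registry layout and method names are assumptions made for the example, not the interfaces of XEN, VirtualBox, or VMware mentioned in the text.

```python
# Hypothetical sketch: a hypervisor-side registry that records which MVA
# hosts which segment, so a client request can be served through the MVA
# when it is running or directly by the hypervisor when it is not.

class HypervisorRegistry:
    def __init__(self):
        self._backing = {}     # segment_id -> bytearray (stand-in for physical memory 212a-212n)
        self._hosted_by = {}   # segment_id -> MVA id
        self._running = set()  # MVA ids currently scheduled by the hypervisor

    def register(self, mva_id, segment_id, size):
        self._backing[segment_id] = bytearray(size)
        self._hosted_by[segment_id] = mva_id
        self._running.add(mva_id)

    def pause(self, mva_id):
        self._running.discard(mva_id)

    def write(self, segment_id, offset, payload):
        self._backing[segment_id][offset:offset + len(payload)] = payload

    def read(self, segment_id, offset, length):
        mva_id = self._hosted_by[segment_id]
        path = "MVA" if mva_id in self._running else "hypervisor fallback"
        return path, bytes(self._backing[segment_id][offset:offset + length])


registry = HypervisorRegistry()
registry.register("mva-220b", "segment-320b", size=1024)
registry.write("segment-320b", 0, b"virtually stored")
print(registry.read("segment-320b", 0, 16))   # ('MVA', b'virtually stored')
registry.pause("mva-220b")
print(registry.read("segment-320b", 0, 16))   # ('hypervisor fallback', b'virtually stored')
```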
- Turning now to FIG. 3A, there is shown a schematic diagram 300 of a portion of a node 210 a depicted in FIG. 2, according to an example. More particularly, FIG. 3A depicts an example of various manners in which VMs 220 a-220 n manage respectively controlled or hosted memory (MEM) segments 320 a-320 n of a physical memory 212 a. It should be understood that the node 210 a may include additional components and that one or more of the components described herein may be removed and/or modified without departing from a scope of the node 210 a. For instance, although the node 210 a has been depicted as having a single physical memory 212 a, the node 210 a may include additional physical memories. In this regard, one or more of the memory segments 320 b-320 n may be located on one or more physical memories of the node 210 a.
- The VMs 220 a-220 n, the hypervisor 230, and the physical memory 212 a of the node 210 a have been depicted in FIG. 3A. The first VM 220 a has also been depicted as a conventional type of virtual machine, which is, for instance, configured to perform various functions other than MVA functions. In this regard, the first VM 220 a may comprise a conventional system VM or a process VM other than an MVA. In addition, the remaining VMs 220 b-220 n have been depicted as comprising MVAs 220 b-220 n. Each of the VMs 220 a-220 n has been depicted as having control over or hosting a particular memory segment 320 a-320 n of the physical memory 212 a. Although particular reference is made throughout to the physical memory 212 a being composed of a plurality of memory segments 320 a-320 n, the memory segments 320 a-320 n may equivalently be termed bits, memory blocks, or other equivalent elements for defining discrete locations in the physical memories 212 a-212 n in which data is at least one of stored, read, erased, rewritten, etc. In addition, each of the memory segments 320 a-320 n is formed of a plurality of memory cells or blocks upon which data may be stored and from which data may be read.
- As also shown in FIG. 3A, each of the VMs 220 a-220 n is configured to cause respective data 310 a-310 n to be accessed in the respective memory segments 320 a-320 n. Each of the data 310 a-310 n may comprise a standalone set of information, such as, independently executable applications, photo files, audio files, video files, etc. In addition, two or more of the data 310 a-310 n may be interoperable.
- During operation, the MVAs 220 b-220 n are configured to actively manage the respective memory segments 320 a-320 n. In one regard, the MVAs 220 b-220 n actively manage the memory segments 320 a-320 n independent of the clients that requested storage or other manipulation of the data 310 b-310 n and the physical memories 212 a-212 n on which the data 310 a-310 n are stored. More particularly, for instance, the MVAs 220 b-220 n comprise computer readable code that enables the MVAs 220 b-220 n to control how the memory segments 320 a-320 n are to be managed. By way of example, the MVAs 220 b-220 n may define one or more policies with respect to access, duplication, erasure, etc., of the data 310 b-310 n stored on the memory segments 320 a-320 n and may be able to cause any of those policies to be implemented without requiring receipt of additional instructions from a client. Thus, for instance, the MVAs 220 b-220 n may control one or more of: which clients, such as, nodes or VMs operating on the nodes, are able to access the data 310 b-310 n stored in their respective memory segments 320 b-320 n; when the data 310 b-310 n are to be duplicated in another memory segment; whether, and the number of times, the data 310 a-310 n may be duplicated; whether and when the data 310 b-310 n are to be migrated; etc.
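- As a rough illustration of this kind of policy-driven, client-independent management, the sketch below attaches a small policy record to a hosted segment and applies it on a periodic "tick" with no client involvement. The policy fields (allowed_clients, max_replicas, migrate_after_s) and the location labels are invented for the example and are not terminology from the patent.

```python
# Hypothetical sketch: an MVA applying its own policies (replication,
# migration, access control) to its hosted segment without client input.
import time

class SegmentPolicy:
    def __init__(self, allowed_clients, max_replicas=1, migrate_after_s=None):
        self.allowed_clients = set(allowed_clients)
        self.max_replicas = max_replicas
        self.migrate_after_s = migrate_after_s

class ManagedSegment:
    def __init__(self, data, policy):
        self.data = bytearray(data)
        self.policy = policy
        self.replicas = []                 # copies made under policy control
        self.created = time.monotonic()
        self.location = "volatile:node-210a"

    def tick(self):
        """Run one round of autonomous management (no client instructions)."""
        if len(self.replicas) < self.policy.max_replicas:
            self.replicas.append(bytes(self.data))          # duplicate the data
        if (self.policy.migrate_after_s is not None
                and time.monotonic() - self.created >= self.policy.migrate_after_s):
            self.location = "persistent:nv-124a"            # migrate to persistent memory

policy = SegmentPolicy(allowed_clients={"vm-220a"}, max_replicas=2, migrate_after_s=0.0)
segment = ManagedSegment(b"business data", policy)
segment.tick()
print(len(segment.replicas), segment.location)   # 1 persistent:nv-124a  (after one tick)
```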
- In addition, or alternatively, the MVAs 220 b-220 n also enable bookkeeping and access functions to be performed with respect to the data 310 b-310 n. The bookkeeping functions may include tracking which memory segments 320 a-320 n are hosted or controlled by which of the MVAs 220 b-220 n, as well as other life cycle management information of the MVAs 220 b-220 n. The access functions are generally configured to enable sharing of the data 310 b-310 n among multiple clients, to enforce security authorization requirements for access to the data 310 b-310 n, etc. Furthermore, the MVAs 220 b-220 n may ensure atomicity of the copy operation so that no concurrent updates are allowed to occur. Moreover, the MVAs 220 b-220 n may support different client services, such as fast paging, Tier-0 storage, or remote direct memory access (RDMA)-based object replication.
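- A minimal sketch of the bookkeeping and atomic-copy behavior mentioned above follows. The lock-based approach is only one way to rule out concurrent updates during a copy and is an assumption of this example, not a mechanism specified in the patent.

```python
# Hypothetical sketch: bookkeeping of which MVA hosts which segment, plus a
# copy operation made atomic with respect to writes by holding a lock.
import threading

class Bookkeeper:
    def __init__(self):
        self.hosted_by = {}    # segment_id -> MVA id (life cycle info)
        self._locks = {}       # segment_id -> lock guarding updates
        self._contents = {}    # segment_id -> bytearray

    def track(self, mva_id, segment_id, size):
        self.hosted_by[segment_id] = mva_id
        self._locks[segment_id] = threading.Lock()
        self._contents[segment_id] = bytearray(size)

    def update(self, segment_id, offset, payload):
        with self._locks[segment_id]:          # writers and copiers exclude each other
            self._contents[segment_id][offset:offset + len(payload)] = payload

    def atomic_copy(self, segment_id):
        with self._locks[segment_id]:          # no concurrent update can interleave
            return bytes(self._contents[segment_id])

books = Bookkeeper()
books.track("mva-220b", "segment-320b", size=64)
books.update("segment-320b", 0, b"replica source")
snapshot = books.atomic_copy("segment-320b")
print(books.hosted_by["segment-320b"], snapshot[:14])   # mva-220b b'replica source'
```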
- Unless the VM 220 a accesses the data 310 b-310 n under the control of the MVAs 220 b-220 n, the VM 220 a is configured to access the data to which it has access rights, such as the data 310 a stored in the memory segment 320 a. The memory segment 320 a may comprise, for instance, the available memory segments other than those under the control of the MVAs 220 b-220 n. The memory segment 320 a in which the VM 220 a stores data 310 a differs from the other memory segments 320 b-320 n because the memory segment 320 a does not store the data virtually. In other words, the data 310 a stored in the memory segment 320 a is not controlled by an MVA and thus may not be actively managed. In this regard, the memory segment 320 a is tied directly to the physical memory 212 a.
- Turning now to FIG. 3B, there is shown a schematic diagram 350 of a plurality of nodes including the node 210 a depicted in FIG. 3A, according to an example. As shown in FIG. 3B, a first MVA 220 b of the first node 210 a has control over a first memory segment 320 b and a second MVA 220 c of the first node 210 a has control over a second memory segment 320 c. The physical memory 212 a containing the memory segments 320 b and 320 c may be physically located on the first node 210 a or on another node. Generally speaking, FIG. 3B is intended to show that the memory segments 320 a-320 n may be accessed by other nodes 210 b-210 c.
- As shown in FIG. 3B, a second node 210 b includes a VM 220 a that is configured to access the memory segment 320 b. In addition, a third node 210 c does not include a VM, but instead comprises, for instance, a conventional computing device configured to access the memory segment 320 c. In the example depicted in FIG. 3B, the MVA 220 b of the first node 210 a controls access to the data stored in the memory segment 320 b and may thus prevent the processor 214 of the third node 210 c from accessing that data, while allowing access to the VM 220 a running on the second node 210 b.
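- The access decision illustrated in FIG. 3B can be reduced to a per-segment allow-list check, sketched below. The identifiers mirror the figure's reference numerals for readability; the check itself is an illustrative assumption about how an MVA might enforce its access policy.

```python
# Hypothetical sketch of the FIG. 3B scenario: MVA 220b allows the VM on
# node 210b to reach segment 320b while denying the processor of node 210c.

ACCESS_LISTS = {
    "segment-320b": {"vm-220a@node-210b"},       # clients allowed by MVA 220b
    "segment-320c": {"processor-214@node-210c"}, # clients allowed by MVA 220c
}

def may_access(client_id, segment_id):
    return client_id in ACCESS_LISTS.get(segment_id, set())

print(may_access("vm-220a@node-210b", "segment-320b"))        # True
print(may_access("processor-214@node-210c", "segment-320b"))  # False (denied by MVA 220b)
```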
- Various manners in which the MVAs 220 b-220 n may function are discussed in greater detail herein below with respect to FIG. 4. FIG. 4, more particularly, shows a flow diagram of a method 400 for managing a memory segment by using a memory virtual appliance, according to an example. It should be understood that the method 400 may include additional steps and that one or more of the steps described herein may be removed and/or modified without departing from a scope of the method 400.
- The description of the method 400 is made with reference to the computing ensemble 100 and the data processing infrastructure 200 depicted in FIGS. 1 and 2, respectively, and thus makes particular reference to the elements contained in those figures. It should, however, be understood that the method 400 may be implemented in an infrastructure that differs from the computing ensemble 100 and the data processing infrastructure 200 depicted in FIGS. 1 and 2 without departing from a scope of the method 400.
- As shown in FIG. 4, at step 402, data 310 b to be stored on a physical memory 212 a of a node 210 a is identified. For instance, the MVA 220 b may receive a request from a client, such as, a VM 220 a, another node 210 b, an input/output node 260, etc., for the data 310 b to be stored on the physical memory 212 a. According to an example, the request to store the data 310 b on the physical memory 212 a may comprise a broadcast request by the client for the data 310 b to be stored. In another example, the request to store the data 310 b may be responsive to an indication by the MVA 220 b that the MVA 220 b has available storage capacity for the data 310 b. In this example, the MVA 220 b, once initiated, may broadcast an indication to the nodes 210 a-210 n in the infrastructure 200 that it is ready to receive data. Alternatively, the MVA 220 b may register itself with a management network to inform the nodes 210 a-210 n that it is able to store data. It should be understood that the MVA 220 b may receive the data to be stored in any other suitable manner. In any regard, for instance, the memory segments 320 b-320 n hosted by the MVAs 220 b-220 n are made visible to the clients and the clients are thus able to store the data 310 b-310 n in the hosted memory segments 320 b-320 n.
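- One plausible reading of step 402, in which an MVA announces spare capacity so that clients can direct storage requests to it, is sketched below. The broadcast is simulated with an in-process registry; a real system would use the management network mentioned in the text, and all names here are assumptions made for the example.

```python
# Hypothetical sketch of step 402: an MVA advertises free capacity, and a
# client picks an advertising MVA for the data it wants stored virtually.

class ManagementNetwork:
    def __init__(self):
        self.advertisements = {}                  # MVA id -> free bytes

    def advertise(self, mva_id, free_bytes):
        self.advertisements[mva_id] = free_bytes  # stand-in for broadcast / registration

    def find_host(self, needed_bytes):
        for mva_id, free in self.advertisements.items():
            if free >= needed_bytes:
                return mva_id
        return None

network = ManagementNetwork()
network.advertise("mva-220b", free_bytes=4096)
network.advertise("mva-220c", free_bytes=128)

data_310b = b"data to be stored on physical memory 212a"
host = network.find_host(len(data_310b))
print(host)   # mva-220b
```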
- In any regard, at step 404, the data 310 b is encapsulated with the MVA 220 b, which is implemented using a virtualization wrapper. As discussed above, the MVA 220 b comprises computer readable code that enables the data 310 b to be stored virtually and to be shared among a plurality of clients. In addition, prior to or during the encapsulation process, the MVA 220 b may be programmed with one or more policies to actively manage the data and/or the hosted memory segment 320 b in one or more manners. For instance, the MVA 220 b may control credential requirements of clients for access to the data, migration schedules, duplication schedules, etc. The active management may be based upon the data 310 b itself, the physical memory 212 a on which the data 310 b is stored, a combination of the data 310 b and the physical memory, etc. In any regard, the MVA 220 b may encapsulate the data 310 b, such as, by adding or modifying metadata of the data 310 b. In addition, or alternatively, the MVA 220 b may perform various other operations to ensure that client access to the memory segment 320 b containing the data 310 b is mediated, for instance, by the MVA 220 b or by the local hypervisor 230, such that, for instance, migration, access control, memory addressing changes, etc., are transparent to the client.
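- Step 404's encapsulation, wrapping the payload with metadata that the MVA adds or modifies, might look like the sketch below. The particular metadata fields (owner, policy, checksum, timestamp) are assumptions chosen for illustration, not fields required by the patent.

```python
# Hypothetical sketch of step 404: encapsulating data 310b by attaching
# MVA-maintained metadata before it is placed in the hosted segment.
import hashlib
import time

def encapsulate(payload, owner, policy):
    return {
        "payload": payload,
        "metadata": {
            "owner": owner,                                   # requesting client
            "policy": policy,                                 # e.g. access / migration rules
            "checksum": hashlib.sha256(payload).hexdigest(),  # integrity check on access
            "encapsulated_at": time.time(),
        },
    }

wrapped = encapsulate(b"data 310b", owner="vm-220a",
                      policy={"credential_required": True, "duplicates": 2})
print(sorted(wrapped["metadata"]))   # ['checksum', 'encapsulated_at', 'owner', 'policy']
```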
- At step 406, the data 310 b is stored in a memory segment 320 b that is hosted by the MVA 220 b that encapsulates the data 310 b. In this regard, the client that instructed the MVA 220 b to store the data 310 b and other clients that are authorized to access the data 310 b have access to the memory segment 320 b upon which the data 310 b is stored through the MVA 220 b. In addition, the client(s) may access the stored data 310 b through the hypervisor 230. In this regard, for example, the MVA 220 b may communicate with the hypervisor 230 using an interface that indicates that the data 310 b is stored in the memory segment 320 b and is available to specific clients. The hypervisor 230 may then employ mechanisms to facilitate remote and local client access to the data 310 b.
- In one regard, because the MVAs 220 b-220 n enable the data 310 b-310 n to be stored virtually on respective memory segments 320 b-320 n, and because the MVAs 220 b-220 n are not necessarily tied to any particular node 210 a-210 n or physical memory 212 a-212 n, the MVAs 220 b-220 n may be moved within and among different ones of the nodes 210 a-210 n. In addition, the clients may access the data 310 b, even in instances where the MVA 220 b hosting the memory segment 320 b is not operational, through interaction with the hypervisor 230 of the node 210 b containing the physical memory 212 a upon which the data 310 b is stored.
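- For step 406, the interface between the MVA and the hypervisor can be pictured as a simple notification carrying the segment identity and the clients entitled to reach it, as in the sketch below; the notification format and class names are assumptions made for the example.

```python
# Hypothetical sketch of step 406: the MVA stores the encapsulated data in
# its segment and tells the hypervisor which clients may be given access.

class LocalHypervisor:
    def __init__(self):
        self.exports = {}   # segment_id -> set of clients allowed to access it

    def notify_stored(self, segment_id, authorized_clients):
        self.exports[segment_id] = set(authorized_clients)

    def grant(self, client_id, segment_id):
        return client_id in self.exports.get(segment_id, set())

class SimpleMVA:
    def __init__(self, segment_id, hypervisor):
        self.segment_id = segment_id
        self.segment = bytearray(256)
        self.hypervisor = hypervisor

    def store(self, data, authorized_clients):
        self.segment[:len(data)] = data
        # Expose the segment through the hypervisor as well, so clients can
        # still reach the data if this MVA is later paused or migrated.
        self.hypervisor.notify_stored(self.segment_id, authorized_clients)

hv = LocalHypervisor()
mva = SimpleMVA("segment-320b", hv)
mva.store(b"data 310b", authorized_clients=["vm-220a", "node-210b"])
print(hv.grant("vm-220a", "segment-320b"), hv.grant("node-210c", "segment-320b"))  # True False
```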
- At step 408, following storage of the data 310 b, the MVA 220 b may manage the memory segment 320 b based upon one or more policies of the MVA 220 b. More particularly, for instance, the MVA 220 b may be programmed with code that causes the data 310 b to be stored in a persistent memory location, migrated, duplicated, etc. In this regard, the MVA 220 b may manage the memory segment 320 b independently from and without receiving any instructions from a client. In addition, the hypervisor 230 is configured to track the manipulations to thus enable the data 310 b to be later located and accessed.
- In one particular example, the MVA 220 b initially stores the data 310 b in a volatile memory location, such as, RAM, and the MVA 220 b may include code that causes the MVA 220 b to migrate the data 310 b to a different memory location that is persistent. In another example, the MVA 220 b may automatically and transparently migrate to another node 210 b, for instance, in response to the node 210 a undergoing a failure or scheduled maintenance. In this example, the MVA 220 b may migrate in manners similar to conventional VMs and may cause the data 310 b stored on the memory segment 320 b to also be migrated to a memory segment in the other node 210 b. In one regard, therefore, clients may continue to access the data 310 b regardless of which node 210 a-210 n the data 310 b is stored in, because the clients access the data 310 b through the MVAs 220 b-220 n. In addition, clients may continue to access the data 310 b in instances where the MVAs 220 b-220 n are not operating by accessing the data 310 b through the hypervisor 230.
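- The policy-driven moves described for step 408, from a volatile to a persistent location and from one node to another, are sketched below. The location labels and the track callback are illustrative assumptions; the point being shown is that the hypervisor records each move so the data can be found afterwards.

```python
# Hypothetical sketch of step 408: the MVA migrates its data (or itself)
# according to its own policy, and the hypervisor tracks every move.

class TrackingHypervisor:
    def __init__(self):
        self.locations = {}   # segment_id -> current location

    def track(self, segment_id, location):
        self.locations[segment_id] = location

class MigratingMVA:
    def __init__(self, segment_id, hypervisor):
        self.segment_id = segment_id
        self.hypervisor = hypervisor
        self.location = "ram@node-210a"            # initially volatile storage
        self.hypervisor.track(segment_id, self.location)

    def persist(self):
        """Policy-driven move of the data to persistent memory (e.g. an NV 124a stand-in)."""
        self.location = "nv-124a@node-210a"
        self.hypervisor.track(self.segment_id, self.location)

    def migrate(self, target_node):
        """Move the appliance (and its segment contents) to another node."""
        self.location = f"nv@{target_node}"
        self.hypervisor.track(self.segment_id, self.location)

hv = TrackingHypervisor()
mva = MigratingMVA("segment-320b", hv)
mva.persist()
mva.migrate("node-210b")                           # e.g. node-210a enters maintenance
print(hv.locations["segment-320b"])                # nv@node-210b
```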
- At step 410, following storage, and optionally, manipulation, of the data 310 b, access to the virtually stored data 310 b is provided, for instance, through the MVA 220 b and/or the hypervisor 230 as discussed above. The data 310 b may be accessed, for instance, when a client seeks to read and/or manipulate, such as, duplicate, move, erase, re-write, etc., the data 310 b.
- According to an embodiment, a control domain may be configured to access page tables of all of the clients and, assuming that the control domain is pinned to a single core, the control domain may perform a remote direct memory access (RDMA) request on behalf of a “dormant” virtual machine without a latency hit. In that instance, the MVAs 220 b-220 n may export their visible segments/policies to the control domain, thereby ensuring that the MVAs 220 b-220 n do not receive partial updates (by controlling a scheduler or changing page protection). The MVAs 220 b-220 n may thereby read their own memory segment 320 b-320 n contents without being concerned about partial, on-going updates from the network 260 and may, for instance, implement their own backup or versioning to the NVs 124 a-124 n.
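- The control-domain behavior described above is hardware- and hypervisor-specific, but the invariant it provides, namely that an MVA can read its own segment without seeing a partially applied update and can take versioned backups, can be sketched generically. The lock-plus-version approach below is an assumption made for illustration; it does not model RDMA, scheduler control, or page-protection changes.

```python
# Hypothetical sketch: incoming updates are applied whole under a lock, so a
# reader (or a backup to non-volatile storage) never observes a partial write.
import threading

class ConsistentSegment:
    def __init__(self, size):
        self._data = bytearray(size)
        self._lock = threading.Lock()
        self._version = 0
        self.backups = {}                      # version -> snapshot (NV 124a stand-in)

    def apply_update(self, offset, payload):
        with self._lock:                       # the whole update becomes visible, or none of it
            self._data[offset:offset + len(payload)] = payload
            self._version += 1

    def backup(self):
        with self._lock:                       # consistent, versioned snapshot
            self.backups[self._version] = bytes(self._data)
            return self._version

seg = ConsistentSegment(32)
seg.apply_update(0, b"update from network")
print(seg.backup(), len(seg.backups))          # 1 1
```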
method 400 may be contained as one or more utilities, programs, or subprograms, in any desired computer accessible or readable medium. In addition, themethod 400 may be embodied by a computer program, which may exist in a variety of forms both active and inactive. For example, it can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats. Any of the above can be embodied on a computer readable medium, which include storage devices and signals, in compressed or uncompressed form. - Exemplary computer readable storage devices include conventional computer system RAM, ROM, EPROM, EEPROM, phase change RAM (PCRAM), Memristor, and magnetic or optical disks or tapes. Exemplary computer readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running the computer program can be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of the programs on a CD ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general. It is therefore to be understood that any electronic device capable of executing the above-described functions may perform those functions enumerated above.
-
FIG. 5 illustrates a computer system 500, which may be employed to perform the various functions of the nodes 210 a-210 n depicted in FIGS. 2, 3A, and 3B in performing some or all of the steps contained in the flow diagram depicted in FIG. 4, according to an example. In this respect, the computer system 500 may be used as a platform for executing one or more of the functions described hereinabove with respect to the method 400. More particularly, for instance, the computer system 500 may be used as a platform for executing one or more of the MVAs 220 b-220 n discussed above. - The
computer system 500 includes a processor 502, which may be used to execute some or all of the steps described in the methods herein. Commands and data from the processor 502 are communicated over a communication bus 504. The computer system 500 also includes a main memory 506, such as a random access memory (RAM), where the program code may be executed during runtime, and a secondary storage 510. The secondary storage 510 may comprise, for example, a hard drive or other non-volatile memory, where a copy of the program code for the virtual machines 220 a-220 n, including the MVAs 220 b-220 n, may be stored. - The
computer system 500 may comprise a server having a web interface. In addition, or alternatively, the computer system 500 may be configured with user input and output devices including a keyboard 516, a mouse 518, and a display 520. A display adaptor 522 may interface with the communication bus 504 and the display 520, and may receive display data from the processor 502 and convert the display data into display commands for the display 520. In addition, the processor 502 may communicate over a network, for instance, the Internet, a LAN, etc., through a network adaptor 524. - It will be apparent to one of ordinary skill in the art that other known electronic components may be added or substituted in the
computer system 500. In addition, the computer system 500 may include a system board or blade used in a rack in a data center, a conventional “white box” server or computing device, etc. Also, one or more of the components in FIG. 5 may be optional (for instance, user input devices, secondary memory, etc.).
- What has been described and illustrated herein is a preferred embodiment of the invention along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the scope of the invention, which is intended to be defined by the following claims—and their equivalents—in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/833,438 US8812400B2 (en) | 2010-07-09 | 2010-07-09 | Managing a memory segment using a memory virtual appliance |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120011500A1 (en) | 2012-01-12 |
US8812400B2 (en) | 2014-08-19 |
Family
ID=45439493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/833,438 Active 2032-04-18 US8812400B2 (en) | Managing a memory segment using a memory virtual appliance | 2010-07-09 | 2010-07-09 |
Country Status (1)
Country | Link |
---|---|
US (1) | US8812400B2 (en) |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7356665B2 (en) | 2003-12-17 | 2008-04-08 | International Business Machines Corporation | Method and system for machine memory power and availability management in a processing system supporting multiple virtual machines |
US7478204B2 (en) | 2004-04-29 | 2009-01-13 | International Business Machines Corporation | Efficient sharing of memory between applications running under different operating systems on a shared hardware system |
US7412705B2 (en) | 2005-01-04 | 2008-08-12 | International Business Machines Corporation | Method for inter partition communication within a logical partitioned data processing system |
JP2007004661A (en) | 2005-06-27 | 2007-01-11 | Hitachi Ltd | Virtual computer control method and program |
US8694712B2 (en) | 2006-12-05 | 2014-04-08 | Microsoft Corporation | Reduction of operational costs of virtual TLBs |
WO2009032446A1 (en) | 2007-08-01 | 2009-03-12 | Devicevm, Inc. | Diagnostic virtual appliance |
US7689801B2 (en) | 2007-08-29 | 2010-03-30 | International Business Machines Corporation | Method for distributing hypervisor memory requirements across logical partitions |
US8156492B2 (en) | 2007-09-07 | 2012-04-10 | Oracle International Corporation | System and method to improve memory usage in virtual machines running as hypervisor guests |
US8261264B2 (en) | 2008-01-03 | 2012-09-04 | Dell Products L.P. | Accessing a network |
US8156503B2 (en) | 2008-02-12 | 2012-04-10 | International Business Machines Corporation | System, method and computer program product for accessing a memory space allocated to a virtual machine |
US20090210888A1 (en) | 2008-02-14 | 2009-08-20 | Microsoft Corporation | Software isolated device driver architecture |
GB2460393B (en) | 2008-02-29 | 2012-03-28 | Advanced Risc Mach Ltd | A data processing apparatus and method for controlling access to secure memory by virtual machines executing on processing circuitry |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6263421B1 (en) * | 1994-03-10 | 2001-07-17 | Apple Computer, Inc. | Virtual memory system that is portable between different CPU types |
US20050183088A1 (en) * | 1999-11-12 | 2005-08-18 | National Instruments Corporation | Method for the direct call of a function by a software module by means of a processor with a memory-management unit (MMU) |
US20040168030A1 (en) * | 2000-06-02 | 2004-08-26 | Sun Microsystems, Inc. | Caching mechanism for a virtual heap |
US20050187786A1 (en) * | 2000-09-22 | 2005-08-25 | Tsai Daniel E. | Electronic commerce using personal preferences |
US7817038B2 (en) * | 2007-01-22 | 2010-10-19 | Microsoft Corporation | Object detection framework for set of related objects |
US20120246478A1 (en) * | 2007-05-23 | 2012-09-27 | Nec Corporation | Information sharing system, computer, project managing server, and infomation sharing method used in them |
US20100030742A1 (en) * | 2008-07-30 | 2010-02-04 | John Steven Surmont | System and method for capturing, storing, retrieving, and publishing data |
US20100083274A1 (en) * | 2008-09-30 | 2010-04-01 | Microsoft Corporation | Hardware throughput saturation detection |
US20100138831A1 (en) * | 2008-12-02 | 2010-06-03 | Hitachi, Ltd. | Virtual machine system, hypervisor in virtual machine system, and scheduling method in virtual machine system |
Cited By (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12124878B2 (en) | 2004-03-13 | 2024-10-22 | Iii Holdings 12, Llc | System and method for scheduling resources within a compute environment using a scheduler process with reservation mask function |
US11467883B2 (en) | 2004-03-13 | 2022-10-11 | Iii Holdings 12, Llc | Co-allocating a reservation spanning different compute resources types |
US11960937B2 (en) | 2004-03-13 | 2024-04-16 | Iii Holdings 12, Llc | System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter |
US12009996B2 (en) | 2004-06-18 | 2024-06-11 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US11652706B2 (en) | 2004-06-18 | 2023-05-16 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US11630704B2 (en) | 2004-08-20 | 2023-04-18 | Iii Holdings 12, Llc | System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information |
US11709709B2 (en) | 2004-11-08 | 2023-07-25 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11537435B2 (en) | 2004-11-08 | 2022-12-27 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12039370B2 (en) | 2004-11-08 | 2024-07-16 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11886915B2 (en) | 2004-11-08 | 2024-01-30 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11861404B2 (en) | 2004-11-08 | 2024-01-02 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11494235B2 (en) | 2004-11-08 | 2022-11-08 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11762694B2 (en) | 2004-11-08 | 2023-09-19 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12008405B2 (en) | 2004-11-08 | 2024-06-11 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11537434B2 (en) | 2004-11-08 | 2022-12-27 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11656907B2 (en) | 2004-11-08 | 2023-05-23 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11658916B2 (en) | 2005-03-16 | 2023-05-23 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US12120040B2 (en) | 2005-03-16 | 2024-10-15 | Iii Holdings 12, Llc | On-demand compute environment |
US12155582B2 (en) | 2005-04-07 | 2024-11-26 | Iii Holdings 12, Llc | On-demand access to compute resources |
US12160371B2 (en) | 2005-04-07 | 2024-12-03 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11765101B2 (en) | 2005-04-07 | 2023-09-19 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11533274B2 (en) | 2005-04-07 | 2022-12-20 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11522811B2 (en) | 2005-04-07 | 2022-12-06 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11496415B2 (en) | 2005-04-07 | 2022-11-08 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11831564B2 (en) | 2005-04-07 | 2023-11-28 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11650857B2 (en) | 2006-03-16 | 2023-05-16 | Iii Holdings 12, Llc | System and method for managing a hybrid computer environment |
US11522952B2 (en) | 2007-09-24 | 2022-12-06 | The Research Foundation For The State University Of New York | Automatic clustering for self-organizing grids |
US9465771B2 (en) | 2009-09-24 | 2016-10-11 | Iii Holdings 2, Llc | Server on a chip and node cards comprising one or more of same |
US9454403B2 (en) | 2009-10-30 | 2016-09-27 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9866477B2 (en) | 2009-10-30 | 2018-01-09 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric |
US9876735B2 (en) | 2009-10-30 | 2018-01-23 | Iii Holdings 2, Llc | Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect |
US9929976B2 (en) | 2009-10-30 | 2018-03-27 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging managed server SOCs |
US9262225B2 (en) | 2009-10-30 | 2016-02-16 | Iii Holdings 2, Llc | Remote memory access functionality in a cluster of data processing nodes |
US9311269B2 (en) | 2009-10-30 | 2016-04-12 | Iii Holdings 2, Llc | Network proxy for high-performance, low-power data center interconnect fabric |
US9977763B2 (en) | 2009-10-30 | 2018-05-22 | Iii Holdings 2, Llc | Network proxy for high-performance, low-power data center interconnect fabric |
US9405584B2 (en) | 2009-10-30 | 2016-08-02 | Iii Holdings 2, Llc | System and method for high-performance, low-power data center interconnect fabric with addressing and unicast routing |
US9479463B2 (en) | 2009-10-30 | 2016-10-25 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging managed server SOCs |
US10050970B2 (en) | 2009-10-30 | 2018-08-14 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US10135731B2 (en) | 2009-10-30 | 2018-11-20 | Iii Holdings 2, Llc | Remote memory access functionality in a cluster of data processing nodes |
US10140245B2 (en) | 2009-10-30 | 2018-11-27 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9509552B2 (en) | 2009-10-30 | 2016-11-29 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US11526304B2 (en) | 2009-10-30 | 2022-12-13 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US10877695B2 (en) | 2009-10-30 | 2020-12-29 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9680770B2 (en) | 2009-10-30 | 2017-06-13 | Iii Holdings 2, Llc | System and method for using a multi-protocol fabric module across a distributed server interconnect fabric |
US9749326B2 (en) | 2009-10-30 | 2017-08-29 | Iii Holdings 2, Llc | System and method for data center security enhancements leveraging server SOCs or server fabrics |
US20120331243A1 (en) * | 2011-06-24 | 2012-12-27 | International Business Machines Corporation | Remote Direct Memory Access ('RDMA') In A Parallel Computer |
US20120331065A1 (en) * | 2011-06-24 | 2012-12-27 | International Business Machines Corporation | Messaging In A Parallel Computer Using Remote Direct Memory Access ('RDMA') |
US20130091236A1 (en) * | 2011-06-24 | 2013-04-11 | International Business Machines Corporation | Remote direct memory access ('rdma') in a parallel computer |
US8490113B2 (en) * | 2011-06-24 | 2013-07-16 | International Business Machines Corporation | Messaging in a parallel computer using remote direct memory access (‘RDMA’) |
US8495655B2 (en) * | 2011-06-24 | 2013-07-23 | International Business Machines Corporation | Messaging in a parallel computer using remote direct memory access (‘RDMA’) |
US8874681B2 (en) * | 2011-06-24 | 2014-10-28 | International Business Machines Corporation | Remote direct memory access (‘RDMA’) in a parallel computer |
US20130018507A1 (en) * | 2011-07-13 | 2013-01-17 | Kuka Roboter Gmbh | Control System Of A Robot |
US9114528B2 (en) * | 2011-07-13 | 2015-08-25 | Kuka Roboter Gmbh | Control system of a robot |
US9585281B2 (en) | 2011-10-28 | 2017-02-28 | Iii Holdings 2, Llc | System and method for flexible storage and networking provisioning in large scalable processor installations |
US10021806B2 (en) | 2011-10-28 | 2018-07-10 | Iii Holdings 2, Llc | System and method for flexible storage and networking provisioning in large scalable processor installations |
US9792249B2 (en) | 2011-10-31 | 2017-10-17 | Iii Holdings 2, Llc | Node card utilizing a same connector to communicate pluralities of signals |
US9965442B2 (en) | 2011-10-31 | 2018-05-08 | Iii Holdings 2, Llc | Node card management in a modular and large scalable server system |
WO2013138587A1 (en) * | 2012-03-14 | 2013-09-19 | Convergent .Io Technologies Inc. | Systems, methods and devices for management of virtual memory systems |
US10019159B2 (en) * | 2012-03-14 | 2018-07-10 | Open Invention Network Llc | Systems, methods and devices for management of virtual memory systems |
US20130282994A1 (en) * | 2012-03-14 | 2013-10-24 | Convergent.Io Technologies Inc. | Systems, methods and devices for management of virtual memory systems |
US9390055B2 (en) | 2012-07-17 | 2016-07-12 | Coho Data, Inc. | Systems, methods and devices for integrating end-host and network resources in distributed memory |
US10979383B1 (en) | 2012-07-17 | 2021-04-13 | Open Invention Network Llc | Systems, methods and devices for integrating end-host and network resources in distributed memory |
US11271893B1 (en) | 2012-07-17 | 2022-03-08 | Open Invention Network Llc | Systems, methods and devices for integrating end-host and network resources in distributed memory |
US10341285B2 (en) | 2012-07-17 | 2019-07-02 | Open Invention Network Llc | Systems, methods and devices for integrating end-host and network resources in distributed memory |
US9684589B2 (en) | 2012-11-29 | 2017-06-20 | Hewlett-Packard Development Company, L.P. | Memory module including memory resistors |
US9648102B1 (en) * | 2012-12-27 | 2017-05-09 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US9304945B2 (en) * | 2013-01-24 | 2016-04-05 | Raytheon Company | Synchronizing parallel applications in an asymmetric multi-processing system |
US20140208043A1 (en) * | 2013-01-24 | 2014-07-24 | Raytheon Company | Synchronizing parallel applications in an asymmetric multi-processing system |
US9940240B2 (en) * | 2013-10-23 | 2018-04-10 | International Business Machines Corporation | Persistent caching for operating a persistent caching system |
US20150113088A1 (en) * | 2013-10-23 | 2015-04-23 | International Business Machines Corporation | Persistent caching for operating a persistent caching system |
US9619155B2 (en) | 2014-02-07 | 2017-04-11 | Coho Data Inc. | Methods, systems and devices relating to data storage interfaces for managing data address spaces in data storage devices |
US10268390B2 (en) | 2014-02-07 | 2019-04-23 | Open Invention Network Llc | Methods, systems and devices relating to data storage interfaces for managing data address spaces in data storage devices |
US10891055B2 (en) | 2014-02-07 | 2021-01-12 | Open Invention Network Llc | Methods, systems and devices relating to data storage interfaces for managing data address spaces in data storage devices |
WO2016033691A1 (en) * | 2014-09-04 | 2016-03-10 | Iofabric Inc. | Application centric distributed storage system and method |
WO2017019001A1 (en) * | 2015-07-24 | 2017-02-02 | Hewlett Packard Enterprise Development Lp | Distributed datasets in shared non-volatile memory |
Also Published As
Publication number | Publication date |
---|---|
US8812400B2 (en) | 2014-08-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8812400B2 (en) | Managing a memory segment using a memory virtual appliance | |
EP3762826B1 (en) | Live migration of virtual machines in distributed computing systems | |
US10120711B2 (en) | Rapid suspend/resume for virtual machines via resource sharing | |
JP6798960B2 (en) | Virtual Disk Blueprint for Virtualized Storage Area Networks | |
US9619270B2 (en) | Remote-direct-memory-access-based virtual machine live migration | |
EP2985702B1 (en) | Data processing method and device, and computer system | |
US9875122B2 (en) | System and method for providing hardware virtualization in a virtual machine environment | |
US10592434B2 (en) | Hypervisor-enforced self encrypting memory in computing fabric | |
JP6488296B2 (en) | Scalable distributed storage architecture | |
US20180004555A1 (en) | Provisioning executable managed objects of a virtualized computing environment from non-executable managed objects | |
US8464253B2 (en) | Apparatus and method for providing services using a virtual operating system | |
CN111880891B (en) | Microkernel-based scalable virtual machine monitor and embedded system | |
KR20060071307A (en) | System and method for exposing a processor topology for virtual devices | |
US20170206104A1 (en) | Persistent guest and software-defined storage in computing fabric | |
GB2506684A (en) | Migration of a virtual machine between hypervisors | |
US10061701B2 (en) | Sharing of class data among virtual machine applications running on guests in virtualized environment using memory management facility | |
US12131075B2 (en) | Implementing coherency and page cache support for a storage system spread across multiple data centers | |
JP2015517159A (en) | Method for controlling the use of hardware resources in a computer system, system and piece of code method | |
EP3786797A1 (en) | Cloud resource marketplace | |
US20230127061A1 (en) | Standby Data Center as a Service | |
US20210326253A1 (en) | Computer memory management in computing devices | |
US10942761B2 (en) | Migrating a virtual machine in response to identifying an unsupported virtual hardware component | |
US12056514B2 (en) | Virtualization engine for virtualization operations in a virtualization system | |
US10228859B2 (en) | Efficiency in active memory sharing | |
US12169730B2 (en) | Handling memory accounting when suspending and resuming virtual machines to/from volatile memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FARABOSCHI, PAOLO;MCLAREN, MORAY;LAIN, ANTONIO;AND OTHERS;SIGNING DATES FROM 20100708 TO 20100709;REEL/FRAME:024716/0408 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001 Effective date: 20151027 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |