US11157304B2 - System for peering container clusters running on different container orchestration systems - Google Patents
- Publication number
- US11157304B2 (application US16/671,361 / US201916671361A)
- Authority
- US
- United States
- Prior art keywords
- container
- cluster
- service
- endpoint
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- the present disclosure relates to information handling systems. More specifically, embodiments of the disclosure relate to a system for peering container clusters running on different container orchestration systems.
- An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information.
- information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated.
- the variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications.
- information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
- a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to peer container clusters running on different container orchestration systems.
- One general aspect includes a computer-implemented method for moving an endpoint service container between clusters of a cluster mesh, the cluster mesh including an original cluster and a target cluster, the method including: operating respective container orchestration systems at the original cluster and the target cluster, where the original cluster includes at least one endpoint service container and a dependent container configured to consume a service available at the endpoint service container; moving the endpoint service container from the original cluster to the target cluster; updating service registry information relating to moving the endpoint service container from the original cluster to the target cluster, where the service registry information includes a list of services that are globally available in the cluster mesh, where the list of services includes at least one service available at the endpoint service container in the target cluster; accessing the service registry information at the original cluster using a mesh operator executed by the container orchestration system of the original cluster; establishing a remote service endpoint container at the original cluster using service registry information accessed by the mesh operator of the original cluster; accessing the service registry information at the target cluster using a mesh operator executed by the container orchestration system of the target cluster; and configuring an ingress container at the target cluster to allow passage of calls from the remote service endpoint container to the endpoint service container.
- Another general aspect includes a system including: a processor; a data bus coupled to the processor; and a non-transitory, computer-readable storage medium embodying computer program code, the non-transitory, computer-readable storage medium being coupled to the data bus, the computer program code interacting with a plurality of computer operations and including instructions executable by the processor and configured for: operating respective container orchestration systems at an original cluster and a target cluster, where the original cluster includes at least one endpoint service container and a dependent container configured to consume a service available at the endpoint service container; moving the endpoint service container from the original cluster to the target cluster; updating service registry information relating to moving the endpoint service container from the original cluster to the target cluster, where the service registry information includes a list of services that are globally available in a cluster mesh that includes the original cluster and target cluster, where the list of services includes at least one service available at the endpoint service container in the target cluster; accessing the service registry information at the original cluster using a mesh operator executed by the container orchestration system of the original cluster; establishing a remote service endpoint container at the original cluster using service registry information accessed by the mesh operator of the original cluster; accessing the service registry information at the target cluster using a mesh operator executed by the container orchestration system of the target cluster; and configuring an ingress container at the target cluster to allow passage of calls from the remote service endpoint container to the endpoint service container.
- Another general aspect includes a non-transitory, computer-readable storage medium embodying computer program code, the computer program code including computer executable instructions configured for: operating respective container orchestration systems at an original cluster and a target cluster, where the original cluster includes at least one endpoint service container and a dependent container configured to consume a service available at the endpoint service container; moving the endpoint service container from the original cluster to the target cluster; updating service registry information relating to moving the endpoint service container from the original cluster to the target cluster, where the service registry information includes a list of services that are globally available in a cluster mesh that includes the original cluster and target cluster, where the list of services includes at least one service available at the endpoint service container in the target cluster; accessing the service registry information at the original cluster using a mesh operator executed by the container orchestration system of the original cluster; establishing a remote service endpoint container at the original cluster using service registry information accessed by the mesh operator of the original cluster; accessing the service registry information at the target cluster using a mesh operator executed by the container orchestration system of the target cluster; and configuring an ingress container at the target cluster to allow passage of calls from the remote service endpoint container to the endpoint service container.
- FIG. 1 is a generalized illustration of an information handling system that is configured to implement certain embodiments of the system and method of the present disclosure.
- FIG. 2 shows one embodiment of a Kubernetes container orchestration system.
- FIG. 3 depicts an electronic environment in which certain embodiments of the disclosed system may operate.
- FIG. 4 depicts an electronic environment showing a standard local use case in which a container orchestration system manages all containers in a single cluster.
- FIG. 5 depicts an electronic environment of the prior art.
- FIG. 6 depicts a system architecture for implementing certain embodiments of the disclosed system.
- FIG. 7 depicts a system environment in which communications of calls between a remote endpoint service container and a target cluster are secured.
- FIG. 8 is a flowchart showing exemplary operations that may be executed in certain embodiments of the disclosed system.
- Microservice architectures are increasingly being used to deploy services in local and cloud-based information handling systems.
- a microservice is an independent, stand-alone capability designed as an executable or a process that communicates with other microservices through standard but lightweight interprocess communications such as Hypertext Transfer Protocol (HTTP), RESTful web services (built on the Representational State Transfer architecture), message queues, and the like.
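The lightweight HTTP-based interprocess communication described above can be sketched in a few lines. This is an illustrative toy, not part of the patent: the "inventory" service, its endpoint, and the returned fields are all hypothetical.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-alone "inventory" microservice exposing one HTTP endpoint.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "A-100", "in_stock": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A dependent service consumes the capability with a plain HTTP call.
url = f"http://127.0.0.1:{server.server_port}/inventory/A-100"
with urllib.request.urlopen(url) as resp:
    stock = json.load(resp)

server.shutdown()
print(stock["in_stock"])  # 7
```

Because each side speaks only HTTP and JSON, the two processes can be developed and deployed independently, which is the property the passage above emphasizes.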
- Microservices are unique when compared to standard monolithic applications in that each microservice is developed, tested, and deployed on demand, independently of other microservices.
- Microservices are often deployed as container applications (such as, for example, Docker containers) that operate in a cluster under the management of a container orchestration system (such as, for example, Kubernetes, DockerSwarm, etc.).
- a system, method, and computer-readable medium are disclosed for peering clusters in a cluster mesh when a container is moved from its original cluster to a target cluster.
- the cluster mesh includes a collection of container orchestration systems, which share a networking convention or implementation.
- code for a dependent container that is written to consume services at the original cluster need not be rewritten if the services are moved to the target cluster, even if the target cluster executes a different container orchestration system.
- applications running in any cluster of the cluster mesh have direct access to any services of the mesh that have been flagged for global use.
- the dependent application containers are able to access the flagged services as if the flagged services are local (e.g., in the host name/path of the dependent application container).
- the dependent application containers remain unaware of the running locations or external fully qualified domain names (FQDNs) of any of the flagged services on which the dependent application containers depend.
- dependent services do not need to be directly exposed to insecure clients in order to run in clusters that are remote or cloud-based.
- dependent services can be executed remotely in datacenters that are separated by strict firewall processes using secured connections, such as mutual TLS (mTLS) tunneling, which only require single port access between the clusters for peering.
- multicluster functionality is simplified and thinned by removing any need of sidecar injection.
- an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes.
- an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price.
- the information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of non-volatile memory.
- Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display.
- the information handling system may also include one or more buses operable to transmit communications between the various hardware components.
- FIG. 1 is a generalized illustration of an information handling system 100 that is configured to implement certain embodiments of the system and method of the present disclosure.
- the information handling system 100 includes a processor (e.g., central processor unit or “CPU”) 102 , input/output (I/O) devices 104 , such as a display, a keyboard, a mouse, and associated controllers, a hard drive or disk storage 106 , and various other subsystems 108 .
- the information handling system 100 also includes network port 110 operable to connect to a network 140 , which is likewise accessible by a service provider server 142 .
- the information handling system 100 likewise includes system memory 112 , which is interconnected to the foregoing via one or more buses 114 .
- System memory 112 further comprises an operating system 116 and in various embodiments may also comprise other software modules and engines configured to implement certain embodiments of the disclosed system.
- Memory 112 includes storage for a plurality of software containers 118 that may be used to implement certain embodiments of the disclosed system.
- Containers A-D each include a respective application, App A-D, and corresponding dependencies, Depend A-D.
- Containers A-D may be instantiated by a container engine such as Docker.
- in a Docker architecture, Docker runs on top of the operating system 116.
- containers, such as Containers A-D, are created on top of Docker.
- the containers include the application, together with the binary files and library files required to run the application.
- Containers A-D share the kernel of the same operating system 116 .
- containers may be executed on separate, different operating system kernels.
- the example shown in FIG. 1 also includes a container orchestration system 122 that may be executed in certain embodiments of the disclosed system.
- the container orchestration system 122 manages the lifecycles of containers, especially in large, dynamic environments.
- Software teams may use the container orchestration system 122 to control and automate many tasks such as, for example, 1) provisioning and deployment of containers, 2) redundancy and availability of containers, 3) scaling up or removing containers to spread application load evenly across host infrastructure, 4) movement of containers from one host to another if there is a shortage of resources in a host, or if a host dies, 5) allocation of resources between containers, 6) external exposure of services running in a container, 7) load balancing of service discovery between containers, 8) health monitoring of containers and hosts, and 9) configuration of an application in relation to the containers running the application.
- there are several different container orchestration systems that may be employed in the information handling system 100.
- Two such container orchestration systems include Kubernetes and DockerSwarm. Although the embodiments disclosed herein are discussed for a Kubernetes container orchestration system, it will be recognized, given the teachings of the present disclosure, that the disclosed system is extensible to other container orchestration systems.
- FIG. 2 shows one embodiment of a Kubernetes container orchestration system 200 .
- the container orchestration system 200 includes a master node 202 and a plurality of worker nodes 204 .
- the master node 202 includes an etcd cluster 206 .
- the etcd cluster 206 is a simple, distributed key-value store which is used to store Kubernetes cluster data (such as the number of pods, the state of the pods, namespaces, etc.), API objects, and service discovery details.
- the etcd cluster 206 is only accessible from an API server 208 for security reasons.
- the etcd cluster 206 enables notifications to a cluster about configuration changes with the help of watchers. Watchers are API requests on each etcd cluster node that trigger the update of information in the node's storage.
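The watcher mechanism described above can be sketched as a key-value store that fires registered callbacks on every change. This is a minimal illustrative model, not the real etcd API; all names here are invented for the sketch.

```python
from collections import defaultdict

# Minimal sketch of a watched key-value store, loosely modeled on how the
# etcd cluster 206 notifies components of configuration changes.
class WatchedStore:
    def __init__(self):
        self._data = {}
        self._watchers = defaultdict(list)  # key -> list of callbacks

    def watch(self, key, callback):
        """Register a callback fired whenever `key` changes."""
        self._watchers[key].append(callback)

    def put(self, key, value):
        self._data[key] = value
        for cb in self._watchers[key]:  # notify every interested watcher
            cb(key, value)

    def get(self, key):
        return self._data.get(key)

store = WatchedStore()
events = []
store.watch("pods/web", lambda k, v: events.append((k, v)))
store.put("pods/web", {"replicas": 3})
print(events)  # [('pods/web', {'replicas': 3})]
```

A component such as the API server would register a watch once and then be pushed every subsequent change, rather than polling the store.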
- the API server 208 of the master node 202 shown in FIG. 2 is a central management entity that receives all REST requests for modifications to pods (a co-located group of containers inside which application processes are running). The requests may also relate to modifying services, replication sets/controllers, and others.
- the API server 208 serves as a frontend to a container cluster. Certain embodiments of the API server 208 may communicate with the etcd cluster 206 , making sure data is stored in the etcd cluster 206 and is in agreement with the service details of the deployed pods.
- the master node 202 shown in FIG. 2 also includes Kube-controller-manager 210 , which runs several distinct controller processes in the background to regulate the shared state of the cluster and to perform routine tasks.
- the Kube-controller-manager 210 may, for example, include 1) a replication controller configured to control the number of replicas in a pod, and 2) an endpoints controller configured to populate endpoint objects such as services and pods.
- when there is a change in a service configuration (for example, replacing the image from which the pods are running, or changing parameters in a configuration YAML file), the Kube-controller-manager 210 spots the change and starts working towards the new desired state.
- the embodiment shown in FIG. 2 also includes a cloud-controller-manager 212 .
- the cloud-controller-manager 212 is responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, the cloud-controller-manager 212 may check if a worker node has been terminated. In some examples, the cloud-controller-manager 212 is used to set up routes, load balancers, or volumes in the cloud infrastructure.
- the embodiment shown in FIG. 2 also includes a scheduler 214 .
- the scheduler 214 helps schedule the pods on the various nodes based on resource utilization.
- the scheduler 214 reads the operational requirements of a service and schedules the service to run on the best fit node. For example, if the application needs 1 GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources.
- the scheduler 214 runs each time there is a need to schedule pods.
- the scheduler 214 knows the total resources available on the worker nodes 204 as well as resources allocated to existing workloads on each worker node 204 .
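The best-fit placement described in the preceding bullets can be sketched as follows. The node inventory and field names are hypothetical; this is an illustrative simplification of what a real scheduler considers.

```python
# Sketch of best-fit placement: pick the node with the least spare capacity
# that still satisfies the pod's request (e.g., 1 GB memory, 2 CPU cores).
def schedule(pod, nodes):
    """Return the name of the best-fit node, or None if nothing fits."""
    def spare(node):
        return (node["mem_gb"] - node["used_mem_gb"],
                node["cpus"] - node["used_cpus"])

    candidates = [n for n in nodes
                  if spare(n)[0] >= pod["mem_gb"] and spare(n)[1] >= pod["cpus"]]
    if not candidates:
        return None
    # Best fit: the feasible node that leaves the least slack behind.
    return min(candidates, key=spare)["name"]

nodes = [
    {"name": "worker-1", "mem_gb": 16, "used_mem_gb": 15, "cpus": 8,  "used_cpus": 7},
    {"name": "worker-2", "mem_gb": 16, "used_mem_gb": 8,  "cpus": 8,  "used_cpus": 2},
    {"name": "worker-3", "mem_gb": 32, "used_mem_gb": 4,  "cpus": 16, "used_cpus": 2},
]
pod = {"mem_gb": 1, "cpus": 2}
print(schedule(pod, nodes))  # worker-2 (tightest node that still fits)
```

worker-1 is excluded because it has only one spare CPU core; of the remaining nodes, worker-2 leaves the least slack, so it is chosen.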
- the worker nodes 204 of the example shown in FIG. 2 include a plurality of pods 216 operating on top of a Docker container engine 218 .
- Each worker node 204 may include a kubelet 220 .
- the kubelet 220 operates as the main service on a worker node 204 , regularly taking in new or modified pod specifications (primarily through the API server 208 ) and ensuring that pods and the containers of the pod are healthy and running in the desired state.
- the kubelet 220 also reports to the master node 202 on the health of the host where the worker node 204 is running.
- the worker nodes 204 of the example shown in FIG. 2 also include a kube-proxy 222 .
- the kube-proxy 222 is a proxy service that runs on each worker node 204 to deal with individual host subnetting and expose services to the external world.
- the kube-proxy 222 performs request forwarding to the correct pods/containers across various isolated networks in a cluster.
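The forwarding role just described can be sketched as a service-to-backends map with simple round-robin selection. The service name and pod addresses are made up for illustration; real kube-proxy programs iptables/IPVS rules rather than running Python.

```python
import itertools

# Toy sketch of kube-proxy 222's request-forwarding role: map a service
# name to its backend pod addresses and rotate between them per request.
class ServiceProxy:
    def __init__(self):
        self._backends = {}  # service name -> round-robin iterator

    def register(self, service, pod_addresses):
        self._backends[service] = itertools.cycle(pod_addresses)

    def forward(self, service):
        """Pick the next backend pod for `service`."""
        return next(self._backends[service])

proxy = ServiceProxy()
proxy.register("service-b", ["10.0.1.5:8080", "10.0.2.9:8080"])
print(proxy.forward("service-b"))  # 10.0.1.5:8080
print(proxy.forward("service-b"))  # 10.0.2.9:8080
print(proxy.forward("service-b"))  # 10.0.1.5:8080 again
```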
- although container orchestration systems may have characteristics which simplify the provisioning and deployment of the systems at a given system provider, application teams often deploy application containers across multiple providers having different container orchestration systems. Once an application is deployed on a system provider, developers often have to resolve connectivity issues that arise from executing the same container on disparate platforms and clusters that have different container orchestration systems.
- FIG. 3 depicts an electronic environment 300 in which certain embodiments of the disclosed system may operate.
- user devices 302 are configured to access services provided on one or more cloud platforms over the Internet 322 .
- the cloud platforms in this example include an Amazon platform 304 , a Microsoft platform 306 , and a Google platform 308 .
- the Amazon platform 304 includes a plurality of clusters 310 that are each orchestrated with a corresponding version of the Kubernetes (AWS) container orchestration system 312 .
- the Microsoft platform 306 includes a plurality of clusters 314 that are each orchestrated with a corresponding version of the Kubernetes (Azure) container orchestration system 316 .
- the Google platform 308 includes a plurality of clusters 318 that are each orchestrated with a corresponding version of the Kubernetes (Google) container orchestration system 320 .
- although the Amazon platform 304, Microsoft platform 306, and Google platform 308 are each shown with corresponding Kubernetes container orchestration systems 312, 316, and 320, the different platforms often execute unique versions of Kubernetes container orchestration. Developers writing a container application for one platform do not have a guarantee that the same container application can be executed at another platform without problems. As such, developers often write different versions of container applications depending on the platform on which the container will run. Incompatibility can be particularly problematic when, for example, a container running on a cluster at one platform must be provisioned for execution at a cluster on another platform in response to a scale-up of the need for the container's services.
- FIG. 4 depicts an electronic environment 400 showing a standard local use case in which a container orchestration system manages all containers in a single cluster.
- container application D is self-contained and not dependent on other container applications in cluster 402 .
- dependent container A is configured to consume a service provided by container C.
- service endpoint container B is created to make the container service hosted on C available to dependent container A. Since the container service is in the same cluster 402 as container A, the container service may be consumed with cluster-private addressing in the same host name/path (such as, for example, service-b.c-hostname/path.svc.cluster.local).
- container C may be moved from the original cluster 402 to a target cluster that, for example, executes a different container orchestration system. Being on a different cluster, container C now operates in a new host name/path.
- an ingress container D is created at the target cluster and inserted to expose service endpoint B.
- the configuration of container A must be updated to the FQDN exposed by an ingress container at the target cluster (such as, for example, service-b.c-hostname/path.external-cluster-name.datacenter.region.company.com).
- dependent container A is provided with a sidecar container.
- the sidecar for container A proxies all the traffic for container A.
- container A can access ingress container D through an alternate FQDN that is more convenient but still not within the same local hostname/path or address space as container A.
- FIG. 5 depicts an electronic environment of the prior art.
- FIG. 6 depicts a system architecture 600 for implementing certain embodiments of the disclosed system.
- the system architecture 600 includes a service registry system 602 that is accessible to mesh operators F 1 and F 2 disposed respectively in the original cluster 402 and target cluster 502 .
- the mesh operators F 1 and F 2 are configured as software that encodes domain knowledge for a cluster and extends the Kubernetes API to enable users to create, configure, and manage stateful applications within the respective clusters 402 and 502 .
- an operator does not merely manage a single instance of the application container, but manages multiple instances of the application container across the cluster.
- the service registry system 602 maintains a list of services and corresponding IP addresses that are to be globally available in the cluster mesh.
- the service registry system 602 can reside in one of the container orchestration systems of the original cluster and/or target cluster. Additionally, or in the alternative, the service registry system 602 can be a stand-alone deployment elsewhere, as long as the service registry system 602 can be contacted by all of the peered container orchestration systems.
- the service registry system 602 is populated with IP addresses of the globally available services when the services are discovered by the cluster orchestration system in a managed cluster.
- the service registry system 602 is maintained using cluster orchestration system operators when services are added, deleted, or moved.
- the service registry system 602 may be directly updated during application deployments.
- the service registry system 602 supports create, read, update, and delete operations.
- service endpoint container B and service container C have been moved from the original cluster 402 to the target cluster 502 .
- This action may result in the following exemplary update to the service registry data:
- Service Name: Service-b; Hostname/path: c-hostname/path; Discovered Address: 10.1.2.3
- The principal change before and after a cluster move in this embodiment is an update of the ‘discovered address’ field.
- the service registry data is the ‘source of truth’ location for all peered services in certain embodiments.
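The create/read/update/delete behavior of the registry, and the fact that a cluster move only rewrites the 'discovered address' field, can be sketched as follows. The class, its method names, and the addresses are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the service registry system 602: a CRUD store keyed by service
# name, where moving a container between clusters only updates the
# 'discovered address' field. Field names mirror the entry shown above.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def create(self, name, hostname_path, discovered_address):
        self._services[name] = {"hostname/path": hostname_path,
                                "discovered address": discovered_address}

    def read(self, name):
        return self._services[name]

    def update_address(self, name, discovered_address):
        """Called by a mesh operator after a container moves clusters."""
        self._services[name]["discovered address"] = discovered_address

    def delete(self, name):
        del self._services[name]

registry = ServiceRegistry()
registry.create("Service-b", "c-hostname/path", "10.0.0.7")  # original cluster
registry.update_address("Service-b", "10.1.2.3")             # after the move
print(registry.read("Service-b")["discovered address"])  # 10.1.2.3
```

Because the hostname/path entry is unchanged by the move, dependent containers keyed on that name never see the relocation; only the operators consult the updated address.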
- the mesh operators F 1 and F 2 may support different functions inside the respective clusters 402 and 502 .
- the mesh operator F 1 accesses information relating to service endpoint container B, now operating at the target cluster 502 , from the service registry system data.
- the mesh operator F 1 uses the service registry data to configure a remote service endpoint container B′, which operates as a shim that transparently intercepts calls from the dependent container A and redirects the calls to the service endpoint container B at the target cluster 502 .
- the remote service endpoint B′ receives calls from dependent container A in the hostname/path of original cluster 402 and redirects the calls to the new IP address associated with service endpoint container B, now operating in the target cluster 502.
- the remote service endpoint container B′ redirects the calls by changing the hostname/path and/or address space of the calls made by dependent container A to the hostname/path and/or address space associated with the service endpoint container B at the target cluster 502 .
- the dependent container A may access the services available at service endpoint container B as though service endpoint container B is present in the same cluster (e.g., in original cluster 402 ).
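The redirection performed by remote service endpoint B′ amounts to rewriting the network location of a call while preserving its path and query. The sketch below illustrates this; the cluster-local name and the discovered address are invented for the example (with hyphens in place of the slash that appears in the document's hostname/path notation, since a URL host cannot contain one).

```python
from urllib.parse import urlsplit, urlunsplit

# Sketch of the shim behavior of remote service endpoint B': intercept a
# call addressed to the cluster-local name and rewrite its host to the
# discovered address of service endpoint B in the target cluster.
def redirect_call(url, discovered_address):
    parts = urlsplit(url)
    # Keep scheme, path, query, and fragment; swap only the network location.
    return urlunsplit((parts.scheme, discovered_address,
                       parts.path, parts.query, parts.fragment))

local_call = "http://service-b.c-hostname-path.svc.cluster.local/orders?id=42"
print(redirect_call(local_call, "10.1.2.3"))
# http://10.1.2.3/orders?id=42
```

Dependent container A keeps issuing the cluster-local URL; only the shim knows the remote address, which is what keeps A's code unchanged after the move.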
- the target cluster 502 includes an ingress container E.
- the mesh operator F 2 accesses service registry information from the service registry system 602 and configures the ingress container E to allow passage of calls from the remote service endpoint B′ to the service endpoint container B.
- Ingress container E must know the hostname/path it is listening for (i.e., service-b.hostname/path.svc.cluster.local) and which backend pods to direct traffic to in the target cluster 502.
- the ingress container E uses the configuration information of the service registry system 602 to listen for traffic directed to that hostname/path and direct the call to the proper container/pod.
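Ingress container E's job reduces to a hostname-keyed routing table populated from the registry. The entries below are hypothetical; a real ingress would also terminate TLS and balance across the pods.

```python
# Sketch of ingress container E: match the hostname a request was sent for
# and hand it to the backend pods registered for that hostname.
class Ingress:
    def __init__(self):
        self._routes = {}  # listened hostname -> backend pods

    def add_route(self, hostname, backend_pods):
        self._routes[hostname] = backend_pods

    def route(self, hostname):
        """Return the backend pods for a hostname, or None if not listened for."""
        return self._routes.get(hostname)

ingress = Ingress()
ingress.add_route("service-b.hostname-path.svc.cluster.local",
                  ["pod-b-1:8080", "pod-b-2:8080"])
print(ingress.route("service-b.hostname-path.svc.cluster.local"))
# ['pod-b-1:8080', 'pod-b-2:8080']
print(ingress.route("unknown.example.com"))  # None
```

Traffic for a hostname that was never configured is dropped (None here), which is what prevents unrelated callers from reaching the backend pods.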
- the remote service endpoint container B′ is configured to communicate with ingress container E of the target cluster 502 either directly on the same machine or over a network such as the Internet.
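A minimal sketch of the host-based dispatch ingress container E performs, assuming a hypothetical routing table derived from the service registry configuration (the hostnames and pod addresses are illustrative):

```python
import itertools

# Hypothetical routing table: the hostname/path the ingress listens for,
# mapped to backend pods for service endpoint container B in the target cluster.
ROUTES = {
    "service-b.svc.cluster.local": ["10.1.2.3:8080", "10.1.2.4:8080"],
}
# Round-robin iterator over the backend pods for each configured host.
_backends = {host: itertools.cycle(pods) for host, pods in ROUTES.items()}

def route(host_header):
    """Pick the next backend pod for a call whose Host header matches a
    configured hostname/path; reject hosts the ingress is not listening for."""
    if host_header not in _backends:
        raise LookupError("no route for " + host_header)
    return next(_backends[host_header])

print(route("service-b.svc.cluster.local"))  # first backend pod
```

Calls for hostnames the registry has not configured are rejected, which is what confines the ingress to peered services only.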
- FIG. 7 depicts a system environment 700 in which communications of calls between the remote endpoint service B′ and the target cluster 502 are secured.
- original cluster 402 employs an egress proxy container G, which is used to establish an mTLS connection between clusters 402 and 502 , thereby providing bidirectional trust and ensuring that no external actors have access to ingress container E.
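The bidirectional trust provided by egress proxy G rests on mutual TLS: each side verifies the other's certificate before any cross-cluster call is forwarded. A hedged sketch of the client side using Python's standard `ssl` module (the certificate file paths would come from the deployment and are left optional here):

```python
import ssl

def mtls_client_context(ca_file=None, cert_file=None, key_file=None):
    """Client-side context for an egress proxy: verify the ingress container's
    certificate AND present our own certificate, so each cluster authenticates
    the other before any call crosses the cluster boundary."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    if cert_file:
        # Presenting a client certificate is what turns one-way TLS into mTLS.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated peers
    return ctx
```

On the ingress side, the mirror-image server context would set `verify_mode = ssl.CERT_REQUIRED` as well, which is what keeps external actors without a trusted client certificate away from ingress container E.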
- FIG. 8 is a flowchart 800 showing exemplary operations that may be executed in certain embodiments of the disclosed system.
- a request to move a service endpoint container from an original cluster on a first platform to a target cluster on a different platform is received at operation 802 .
- an instance of the service endpoint container is created at the target cluster, and the instance of the service endpoint container is removed from the original cluster at operation 806 .
- the service registry information associated with the creation of the service endpoint container at the target cluster and removal of the service endpoint container from the original cluster is updated at operation 808 .
- the registry information for the service endpoint container running at the target cluster is communicated to a mesh operator at the original cluster.
- when a container is moved between clusters, the original cluster is updated with a new remote service endpoint that points to the target cluster for the container.
- the mesh operator F 1 in the original cluster identifies the new service location of the moved container and deploys the remote service endpoint B′ with the new information.
- in certain embodiments, the service registry information accessed by the mesh operator at the original cluster is used by that mesh operator to create a remote service endpoint at the original cluster.
- registry information associated with the original cluster and/or remote service endpoint container is communicated to a mesh operator at the target cluster at operation 814 .
- certain embodiments of the mesh operator configure an ingress container at the target cluster using the registry information accessed by the mesh operator.
- a secure communication container is instantiated at the original cluster at operation 818 to secure communications between the original cluster and target cluster.
- Certain embodiments use the remote service endpoint at the original cluster to redirect calls made by a dependent endpoint container at the original cluster to the service endpoint container running at the target cluster.
- the remote service endpoint redirects calls by forwarding the calls received from the dependent endpoint container to the IP address associated with the hostname/path of the service endpoint container in the address space of the target cluster.
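The reconciliation the mesh operator at the original cluster performs after a move can be sketched as follows. The record layout, field names, and the `RemoteServiceEndpoint` kind are hypothetical, chosen only to illustrate reading the registry and emitting the redirect configuration:

```python
# Hypothetical registry entry after the move: the service now resolves to an
# address in the target cluster's address space, while its hostname/path is
# unchanged from the original cluster's point of view.
registry = {
    "service-b": {
        "hostname_path": "service-b.svc.cluster.local",
        "discovered_address": "10.1.2.3",
    }
}

def reconcile_remote_endpoint(service_name):
    """Sketch of the mesh operator's job at the original cluster: deploy a
    remote service endpoint that listens on the original hostname/path and
    forwards to the service's discovered address at the target cluster."""
    entry = registry[service_name]
    return {
        "kind": "RemoteServiceEndpoint",
        "listens_for": entry["hostname_path"],
        "forwards_to": entry["discovered_address"],
    }

print(reconcile_remote_endpoint("service-b")["forwards_to"])
# → 10.1.2.3
```

The resulting spec is exactly the pairing the flowchart requires: the old name on the listening side, the new address on the forwarding side.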
- Embodiments of the disclosure are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Stored Programmes (AREA)
Abstract
Description
Service Name | Hostname/path | Discovered Address
---|---|---
Service-b | c-hostname/path | 10.1.2.3
The principal change before and after a cluster move in this embodiment is an update of the ‘discovered address’ field. The service registry data is the ‘source of truth’ location for all peered services in certain embodiments.
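That single-field update can be illustrated with a small sketch. The record values come from the table above; the new address `10.9.8.7` is a hypothetical target-cluster address used only for the example:

```python
# Service registry record for the peered service; only 'discovered_address'
# changes when the container is moved between clusters.
record = {
    "service_name": "Service-b",
    "hostname_path": "c-hostname/path",
    "discovered_address": "10.1.2.3",
}

def move_to(record, new_address):
    """Apply the post-move update: the hostname/path stays stable (so
    dependents keep working), while the discovered address now points at
    the target cluster."""
    updated = dict(record)  # copy; the registry stays the source of truth
    updated["discovered_address"] = new_address
    return updated

after = move_to(record, "10.9.8.7")
print(after["discovered_address"])
# → 10.9.8.7
```

Keeping the hostname/path stable while swapping the discovered address is what lets every peered cluster treat the registry as the single source of truth.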
Claims (18)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/671,361 US11157304B2 (en) | 2019-11-01 | 2019-11-01 | System for peering container clusters running on different container orchestration systems |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210132974A1 US20210132974A1 (en) | 2021-05-06 |
US11157304B2 true US11157304B2 (en) | 2021-10-26 |
Family
ID=75687315
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/671,361 Active 2040-05-22 US11157304B2 (en) | 2019-11-01 | 2019-11-01 | System for peering container clusters running on different container orchestration systems |
Country Status (1)
Country | Link |
---|---|
US (1) | US11157304B2 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11588693B2 (en) * | 2020-02-26 | 2023-02-21 | Red Hat, Inc. | Migrating networking configurations |
US11474851B2 (en) * | 2020-04-08 | 2022-10-18 | Open Text Holdings, Inc. | Systems and methods for efficient scalability and high availability of applications in container orchestration cloud environment |
US11561802B2 (en) * | 2020-05-19 | 2023-01-24 | Amdocs Development Limited | System, method, and computer program for a microservice lifecycle operator |
US12218942B2 (en) * | 2020-08-14 | 2025-02-04 | VMware LLC | Methods and apparatus for automatic configuration of a containerized computing namespace |
US11334334B2 (en) * | 2020-08-27 | 2022-05-17 | Red Hat, Inc. | Generating a software release based on controller metadata |
US11245748B1 (en) * | 2021-02-24 | 2022-02-08 | International Business Machines Corporation | Proxied nodes in a container orchestration environment for scalable resource allocation |
US11303712B1 (en) * | 2021-04-09 | 2022-04-12 | International Business Machines Corporation | Service management in distributed system |
CN113254156B (en) * | 2021-05-31 | 2024-04-09 | 深信服科技股份有限公司 | Container group deployment method and device, electronic equipment and storage medium |
US11750710B2 (en) | 2021-11-30 | 2023-09-05 | Hewlett Packard Enterprise Development Lp | Management cluster with integration service for deploying and managing a service in tenant clusters |
US12147846B2 (en) | 2021-12-13 | 2024-11-19 | International Business Machines Corporation | Clustered container protection |
CN114153566B (en) * | 2021-12-20 | 2025-04-29 | 广东浪潮智慧计算技术有限公司 | Cross-processor architecture multi-container cluster service discovery method, device and equipment |
CN114201245B (en) * | 2021-12-22 | 2025-02-14 | 杭州数政科技有限公司 | A resource dynamic arrangement system, method and application thereof |
CN114640618B (en) * | 2022-03-15 | 2024-03-12 | 平安国际智慧城市科技股份有限公司 | Cluster route scheduling method and device, electronic equipment and readable storage medium |
CN115086321B (en) * | 2022-06-14 | 2024-04-05 | 京东科技信息技术有限公司 | Multi-cluster traffic forwarding method and device and electronic equipment |
US11914637B2 (en) * | 2022-07-25 | 2024-02-27 | Sap Se | Image scaling cloud database |
CN115834668B (en) * | 2022-11-08 | 2024-05-07 | 中国工商银行股份有限公司 | Cluster node control method, device, equipment, storage medium and program product |
CN117061338B (en) * | 2023-08-16 | 2024-06-07 | 中科驭数(北京)科技有限公司 | Service grid data processing method, device and system based on multiple network cards |
-
2019
- 2019-11-01 US US16/671,361 patent/US11157304B2/en active Active
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020173984A1 (en) * | 2000-05-22 | 2002-11-21 | Robertson James A. | Method and system for implementing improved containers in a global ecosystem of interrelated services |
US20100312809A1 (en) * | 2009-06-05 | 2010-12-09 | Microsoft Corporation | Geographic co-location service for cloud computing |
US20110060821A1 (en) * | 2009-09-10 | 2011-03-10 | Sun Microsystems, Inc. | System and method for determining affinity groups and co-locating the affinity groups in a distributing network |
US20140101300A1 (en) * | 2012-10-10 | 2014-04-10 | Elisha J. Rosensweig | Method and apparatus for automated deployment of geographically distributed applications within a cloud |
US10191778B1 (en) * | 2015-11-16 | 2019-01-29 | Turbonomic, Inc. | Systems, apparatus and methods for management of software containers |
US10469574B1 (en) * | 2016-04-20 | 2019-11-05 | EMC IP Holding Company LLC | Incremental container state persistency and replication for containerized stateful applications |
US20180013636A1 (en) * | 2016-07-07 | 2018-01-11 | Cisco Technology, Inc. | System and method for scaling application containers in cloud environments |
US20180205759A1 (en) * | 2017-01-18 | 2018-07-19 | International Business Machines Corporation | Reconfiguration of security requirements for deployed components of applications |
US20180309630A1 (en) * | 2017-04-21 | 2018-10-25 | Microsoft Technology Licensing, Llc | Automated constraint-based deployment of microservices to cloud-based server sets |
US20180349199A1 (en) * | 2017-05-30 | 2018-12-06 | Red Hat, Inc. | Merging scaled-down container clusters using vitality metrics |
US20190050272A1 (en) * | 2017-08-14 | 2019-02-14 | International Business Machines Corporation | Container based service management |
US20190102157A1 (en) * | 2017-09-30 | 2019-04-04 | Oracle International Corporation | Optimizing redeployment of functions and services across multiple container platforms and installations |
US20190102280A1 (en) * | 2017-09-30 | 2019-04-04 | Oracle International Corporation | Real-time debugging instances in a deployed container platform |
US10936717B1 (en) * | 2018-01-30 | 2021-03-02 | EMC IP Holding Company LLC | Monitoring containers running on container host devices for detection of anomalies in current container behavior |
US20210042151A1 (en) * | 2018-02-01 | 2021-02-11 | Siemens Aktiengesellschaft | Method and system for migration of containers in a container orchestration platform between compute nodes |
US20190332421A1 (en) * | 2018-04-25 | 2019-10-31 | Dell Products, L.P. | Secure delivery and deployment of a virtual environment |
US20200104161A1 (en) * | 2018-09-28 | 2020-04-02 | Juniper Networks, Inc. | Migrating workloads in multicloud computing environments |
US20200112487A1 (en) * | 2018-10-05 | 2020-04-09 | Cisco Technology, Inc. | Canary release validation mechanisms for a containerized application or service mesh |
US20200314173A1 (en) * | 2019-04-01 | 2020-10-01 | Google Llc | Multi-cluster Ingress |
US20200379812A1 (en) * | 2019-05-31 | 2020-12-03 | Hewlett Packard Enterprise Development Lp | Unified container orchestration controller |
US20210004267A1 (en) * | 2019-07-04 | 2021-01-07 | Guangdong University Of Petrochemical Technology | Cooperative scheduling method and system for computing resource and network resource of container cloud platform |
US20210034423A1 (en) * | 2019-08-01 | 2021-02-04 | International Business Machines Corporation | Container orchestration in decentralized network computing environments |
US20210124603A1 (en) * | 2019-10-24 | 2021-04-29 | Dell Products L.P. | Software container replication using geographic location affinity in a distributed computing environment |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11438267B2 (en) | 2013-05-09 | 2022-09-06 | Nicira, Inc. | Method and system for service switching using service tags |
US11805056B2 (en) | 2013-05-09 | 2023-10-31 | Nicira, Inc. | Method and system for service switching using service tags |
US12068961B2 (en) | 2014-09-30 | 2024-08-20 | Nicira, Inc. | Inline load balancing |
US11496606B2 (en) | 2014-09-30 | 2022-11-08 | Nicira, Inc. | Sticky service sessions in a datacenter |
US11722367B2 (en) | 2014-09-30 | 2023-08-08 | Nicira, Inc. | Method and apparatus for providing a service with a plurality of service nodes |
US11405431B2 (en) | 2015-04-03 | 2022-08-02 | Nicira, Inc. | Method, apparatus, and system for implementing a content switch |
US11750476B2 (en) | 2017-10-29 | 2023-09-05 | Nicira, Inc. | Service operation chaining |
US11805036B2 (en) | 2018-03-27 | 2023-10-31 | Nicira, Inc. | Detecting failure of layer 2 service using broadcast messages |
US11595250B2 (en) | 2018-09-02 | 2023-02-28 | Vmware, Inc. | Service insertion at logical network gateway |
US12177067B2 (en) | 2018-09-02 | 2024-12-24 | VMware LLC | Service insertion at logical network gateway |
US11467861B2 (en) | 2019-02-22 | 2022-10-11 | Vmware, Inc. | Configuring distributed forwarding for performing service chain operations |
US11354148B2 (en) | 2019-02-22 | 2022-06-07 | Vmware, Inc. | Using service data plane for service control plane messaging |
US11609781B2 (en) | 2019-02-22 | 2023-03-21 | Vmware, Inc. | Providing services with guest VM mobility |
US12254340B2 (en) | 2019-02-22 | 2025-03-18 | VMware LLC | Providing services with guest VM mobility |
US11397604B2 (en) | 2019-02-22 | 2022-07-26 | Vmware, Inc. | Service path selection in load balanced manner |
US11604666B2 (en) | 2019-02-22 | 2023-03-14 | Vmware, Inc. | Service path generation in load balanced manner |
US11722559B2 (en) | 2019-10-30 | 2023-08-08 | Vmware, Inc. | Distributed service chain across multiple clouds |
US12132780B2 (en) | 2019-10-30 | 2024-10-29 | VMware LLC | Distributed service chain across multiple clouds |
US12231252B2 (en) | 2020-01-13 | 2025-02-18 | VMware LLC | Service insertion for multicast traffic at boundary |
US11659061B2 (en) | 2020-01-20 | 2023-05-23 | Vmware, Inc. | Method of adjusting service function chains to improve network performance |
US11743172B2 (en) | 2020-04-06 | 2023-08-29 | Vmware, Inc. | Using multiple transport mechanisms to provide services at the edge of a network |
US11528219B2 (en) | 2020-04-06 | 2022-12-13 | Vmware, Inc. | Using applied-to field to identify connection-tracking records for different interfaces |
US11792112B2 (en) | 2020-04-06 | 2023-10-17 | Vmware, Inc. | Using service planes to perform services at the edge of a network |
US11438257B2 (en) | 2020-04-06 | 2022-09-06 | Vmware, Inc. | Generating forward and reverse direction connection-tracking records for service paths at a network edge |
US11368387B2 (en) | 2020-04-06 | 2022-06-21 | Vmware, Inc. | Using router as service node through logical service plane |
US11734043B2 (en) | 2020-12-15 | 2023-08-22 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US20220191304A1 (en) * | 2020-12-15 | 2022-06-16 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US11611625B2 (en) * | 2020-12-15 | 2023-03-21 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
Also Published As
Publication number | Publication date |
---|---|
US20210132974A1 (en) | 2021-05-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11157304B2 (en) | System for peering container clusters running on different container orchestration systems | |
US12001884B2 (en) | Remote management of distributed datacenters | |
US11507364B2 (en) | Cloud services release orchestration with a reusable deployment pipeline | |
CN109196474B (en) | Distributed operation control in a computing system | |
JP7461471B2 (en) | Cloud Services for Cross-Cloud Operations | |
US9942273B2 (en) | Dynamic detection and reconfiguration of a multi-tenant service | |
JP2021518018A (en) | Function portability for service hubs with function checkpoints | |
US12073258B2 (en) | Configuration map based sharding for containers in a machine learning serving infrastructure | |
US20220382529A1 (en) | Systems and methods for managing releases of global services in a controlled manner | |
KR20220057631A (en) | Codeless specification of software-as-a-service integrations | |
US10983770B2 (en) | Efficient bundling and delivery of client-side scripts | |
US20200218566A1 (en) | Workload migration | |
US11706162B2 (en) | Dynamic, distributed, and scalable single endpoint solution for a service in cloud platform | |
US11474980B1 (en) | Cloud resource tagging and inventory | |
US20210034431A1 (en) | Discovery and mapping of a platform-as-a-service environment | |
US20230385121A1 (en) | Techniques for cloud agnostic discovery of clusters of a containerized application orchestration infrastructure | |
WO2024118056A1 (en) | Cloud initiated bare metal as a service for on-premises servers | |
US10824476B1 (en) | Multi-homed computing instance processes | |
Xiong et al. | Amino-a distributed runtime for applications running dynamically across device, edge and cloud | |
US12287989B2 (en) | Multi-interface container storage interface driver deployment model | |
US11968278B2 (en) | Method and system for decentralized message handling for distributed computing environments | |
US20230269290A1 (en) | Nested Request-Response Protocol Network Communications | |
JP2024106320A (en) | A unified framework for configuring and deploying Platform Intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: DELL PRODUCTS L. P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATT, JAMES S., JR.;DIROSA, FRANK, IV;REEL/FRAME:050901/0335 Effective date: 20191028 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;WYSE TECHNOLOGY L.L.C.;AND OTHERS;REEL/FRAME:051302/0528 Effective date: 20191212 |
|
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;WYSE TECHNOLOGY L.L.C.;AND OTHERS;REEL/FRAME:051449/0728 Effective date: 20191230 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001 Effective date: 20200409 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:053311/0169 Effective date: 20200603 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST AT REEL 051449 FRAME 0728;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058002/0010 Effective date: 20211101 Owner name: SECUREWORKS CORP., DELAWARE Free format text: RELEASE OF SECURITY INTEREST AT REEL 051449 FRAME 0728;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058002/0010 Effective date: 20211101 Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST AT REEL 051449 FRAME 0728;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058002/0010 Effective date: 20211101 Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 051449 FRAME 0728;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058002/0010 Effective date: 20211101 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 051449 FRAME 0728;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058002/0010 Effective date: 20211101 |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: SECUREWORKS CORP., DELAWARE Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (051302/0528);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0593 Effective date: 20220329 Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO WYSE TECHNOLOGY L.L.C.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (051302/0528);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0593 Effective date: 20220329 Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (051302/0528);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0593 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (051302/0528);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0593 Effective date: 20220329 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |