WO2002033564A1 - Distributed multiprocessing system - Google Patents
- Publication number
- WO2002033564A1 (PCT/US2001/032528)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processors
- processor
- set forth
- processed information
- hub
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
Definitions
- the subject invention relates to a multiprocessing system which distributes data and processes between a number of processors.
- Data processing and distribution are utilized in a number of different manufacturing and business related applications for accomplishing a virtually unlimited variety of tasks.
- the systems implemented to accomplish these tasks utilize different design configurations and are typically organized in a network fashion.
- Networks may be arranged in a variety of configurations such as a bus or linear topology, a star topology, ring topology, and the like.
- Within the network there are typically a plurality of nodes and communication links which interconnect each of the nodes.
- the nodes may be computers, terminals, workstations, actuators, data collectors, sensors, or the like.
- the nodes typically have a processor, a memory, and various other hardware and software components.
- the nodes communicate with each other over the communication links within the network to obtain and send information.
- a primary deficiency in the prior art systems is in the manner in which nodes communicate with other nodes.
- a first node will send a signal to a second node requesting information.
- the second node is already processing information such that the first node must wait for a response.
- the second node will at some time recognize the request by the first node and access the desired information.
- the second node then sends a response signal to the first node with the attached information.
- the second node maintains a copy of the information which it may need for its own processing purposes.
- the second node may also send a verification to ensure that the information data was received by the first node. This type of communication may be acceptable in a number of applications where the time lost between the communications of the first and second nodes is acceptable.
- the subject invention overcomes the deficiencies in the prior art by providing a distributed multiprocessing system comprising a first processor for processing information at a first station and for assigning a first address to a first processed information.
- a second processor processes information at a second station and assigns a second address to a second processed information.
- a central signal routing hub is interconnected between the first and second processors. Specifically, a first communication link interconnects the first processor and the hub for transmitting the first processed information between the first processor and the hub.
- a second communication link interconnects the second processor and the hub for transmitting the second processed information between the second processor and the hub.
- the central routing hub includes a sorter for receiving at least one of the first and second processed information from at least one of the first and second processors, thereby defining at least one sending processor.
- the hub and sorter also identify a destination of at least one of the first and second addresses of the first and second processed information, respectively.
- the hub and sorter send at least one of the first and second processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
- the subject invention also includes a method of communicating across the distributed multiprocessing system having the first processor and the second processor.
- the method comprises the steps of: processing information within at least one of the first and second processors; addressing the processed information; transmitting the processed information from at least one of the first and second processors across at least one of the communication links toward the hub, thereby defining at least one sending processor; receiving the processed information along with the address within the hub; identifying the destination of the address for the transmitted processed information within the hub; and sending the processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
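The method recited above can be sketched in miniature. The following Python fragment is purely illustrative; the `Packet` and `Hub` names, their fields, and the queue-based links are assumptions made for the example, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    source: int       # code of the sending processor
    destination: int  # code of the addressed processor
    payload: bytes    # processed information, forwarded unmodified

class Hub:
    """Central signal routing hub: receives a packet from a sending
    processor, identifies the destination code, and forwards the
    packet without modification over the addressed processor's link."""
    def __init__(self):
        self.links = {}  # node code -> outgoing transmission queue

    def connect(self, code, queue):
        self.links[code] = queue

    def route(self, packet):
        # identify the destination of the address, then send the
        # processed information unmodified to the addressed processor
        self.links[packet.destination].append(packet)

hub = Hub()
node1_rx, node2_rx = [], []
hub.connect(1, node1_rx)
hub.connect(2, node2_rx)
hub.route(Packet(source=1, destination=2, payload=b"test data"))
```

Note that `route` never touches `payload`, which mirrors the "without modification" limitation recited in the method.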
- first and second memory locations are connected to the first and second processors, respectively, for storing received processed information.
- An indexer is provided for indexing said first and second processors to define a different code for each of said processors for differentiating said processors.
- said first and second processors each include virtual memory maps of each code such that said first and second processors can address and forward processed information to each of said indexed processors within said system.
- the subject invention eliminating the hub also includes the steps of indexing the first and second processors to define a different code for each of the processors for differentiating the processors; creating a virtual memory map of each of the codes within each of the first and second processors such that the first and second processors can address and forward processed information to each of the indexed processors within the system; and storing the processed information within the memory location of the addressed processor.
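The indexing and virtual-memory-map steps just described can be sketched as follows. This is a minimal Python illustration under stated assumptions: the `Indexer` and `Processor` class names, and the use of a plain dictionary as the "virtual memory map", are inventions for the example.

```python
class Indexer:
    """Assigns a different code to each processor as it joins the
    system, then rebuilds every processor's virtual memory map so
    each one can address every other indexed processor."""
    def __init__(self):
        self.next_code = 1
        self.processors = []

    def register(self, processor):
        processor.code = self.next_code
        self.next_code += 1
        self.processors.append(processor)
        # each processor holds a map of every code in the system,
        # so it can address and forward to any indexed processor
        codes = [p.code for p in self.processors]
        for p in self.processors:
            p.virtual_memory_map = {c: f"node{c}" for c in codes}

class Processor:
    def __init__(self):
        self.code = None
        self.virtual_memory_map = {}

idx = Indexer()
p1, p2 = Processor(), Processor()
idx.register(p1)
idx.register(p2)
```

Every processor ends up with an identical map of codes, which is what lets a sending processor address any node without reading another node's memory.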
- the subject invention therefore provides a data processing system which operates in a virtually instantaneous manner while reducing or eliminating unnecessary redundancies.
- Figure 1 is a schematic view of the distributed multiprocessing system utilizing six nodes interconnected to a single hub;
- Figure 2 is another view of the system of Figure 1 illustrating possible paths of data flow between the nodes and the hub;
- Figure 3 is a detailed schematic view of node 1 and node 2 as connected to the hub;
- Figure 4 is a detailed schematic view of a memory space for node 1;
- Figure 5 is a detailed schematic view of a processor for node 1 ;
- Figure 6 is a detailed schematic view of a memory space for node 2;
- Figure 7 is a detailed schematic view of a processor for node 2;
- Figure 8 is an alternative embodiment illustrating only two nodes without a hub;
- Figure 9 is a schematic view of two multiprocessing systems each having a hub with the hubs interconnected by a hub link;
- Figure 10 is a schematic view of the two multiprocessing systems of Figure 9 before the hubs are interconnected;
- Figure 11 is a schematic view of two multiprocessing systems each having a hub with the hubs interconnected by a common node;
- Figure 12 is another schematic view of two multiprocessing systems interconnected by a common node;
- Figure 13 is yet another schematic view of two multiprocessing systems interconnected by a common node;
- Figure 14 is a schematic view of three multiprocessing systems each having a hub with the hubs interconnected by two common nodes;
- Figure 15 is a schematic view of the system of Figure 1 illustrating another example of data flow between the nodes and the hub;
- Figure 16 is a detailed schematic view of the processor and memory space of node 1 as node 1 processes information;
- Figure 17 is a schematic view of the system of Figure 14 illustrating an incoming transmission of information;
- Figure 18 is a schematic view of the system of Figure 14 illustrating an outgoing transmission of information;
- Figure 19 is a schematic view of the memory space of node 2 as the processed information of node 1 is stored into a real memory location of node 2;
- Figure 20 is a schematic view of the system of Figure 1 illustrating yet another example of data flow between a node and the hub;
- Figure 21 is a schematic view of the system of Figure 1 illustrating an incoming transmission from node 6;
- Figure 22 is a schematic view of the system of Figure 20 illustrating a broadcast which sends outgoing transmissions to all nodes;
- Figure 23 is a schematic view of five systems interconnected by four common nodes illustrating a broadcast through the system.
- a distributed multiprocessing system is generally shown at 30 in Figure 1.
- the system 30 comprises a plurality of modules or nodes 1-6 interconnected by a central signal routing hub 32 to preferably create a star topology configuration.
- numerical indicators 1 through 6 are illustrated.
- any suitable alpha/numeric indicator may be used to differentiate one node from another.
- the shape, configuration, and orientation of the hub 32 which is shown as an octagon shape, is purely illustrative and may be altered to meet any desired need.
- the nodes 1-6 may be part of a workstation or may be the workstation itself.
- node 6 is part of a host computer 34, nodes 1, 2, 4, and 5 are connected to actuators 36 and node 3 is unconnected. It should be appreciated that the nodes 1-6 can be connected to any type of peripheral device or devices including multiple computers, actuators, hand held devices, and the like. For example, node 6 is shown also connected to a hand held device 35. Alternatively, none of the nodes 1-6 could be connected to a peripheral device which would create a completely virtual system.
- the host computer 34 has a digital signal processing card 38 and preferably at least one peripheral device.
- the peripheral devices may be any suitable device as is known in the computer art such as a monitor, a printer, a key board, a mouse, etc.
- the nodes 1-6 preferably communicate with each other through the hub 32.
- node 5 is shown communicating with node 6 through the hub 32 which in turn communicates with node 1 through the hub 32.
- node 4 is shown communicating with node 3 through the hub 32.
- the subject invention is extremely versatile in the number of nodes which can be connected to the hub 32. There may be ten, one hundred, or thousands of nodes connected to the hub 32, or only a pair of nodes or even a single node connected to the hub 32. As will be discussed in greater detail below, the nodes 1-6 can operate independently of each other. In the preferred embodiment, the nodes 1-6 of the subject invention are utilized to compile data during a testing of a vehicle, in particular during servo-hydraulic testing of a vehicle on a testing platform. Of course, the subject invention is in no way limited to this envisioned application.
- the distributed multiprocessing system 30 of the subject invention can be used in virtually any industry to perform virtually any type of computer calculation or processing of data.
- nodes 1 and 2 and the hub 32 are shown in greater detail.
- Each of the nodes 1-6 is virtually identical. Accordingly, nodes 3 through 6 can be understood as having substantially the same features illustrated in the detail of nodes 1 and 2.
- Each of the nodes 1-6 includes a processor and a number of other components which will be outlined individually below.
- the processors may be of different sizes and speeds.
- node 6 may have a 1,500 MFbps processor and the remaining nodes may have 300 MFbps processors.
- the size and speed of the processor may be varied to satisfy a multitude of design criteria.
- the processor will only be of a size and speed to support the tasks or operations which are associated with the node 1-6.
- the processors can be of different types which recognize different computer formats and languages.
- the first node, node 1 includes a first processor 40 and the second node, node 2, includes a second processor 42.
- the first 40 and second 42 processors are indexed in concert with nodes 1 and 2 to define a different code for each of the processors 40, 42 for differentiating the processors 40, 42 in the same fashion as the nodes 1-6 are differentiated.
- an indexer 73 which is discussed in greater detail below, is included for indexing the first 40 and second 42 processors to define the different code for each of the processors 40, 42 for differentiating the processors 40, 42 and the nodes 1-6.
- the first processor 40 processes information at a first station, i.e., node 1's location, and assigns a first address to a first processed information.
- a second processor 42 processes information at a second station, i.e., node 2's location, and assigns a second address to a second processed information.
- the addresses are indexed to correlate to the indexing of the processors 40, 42 and the nodes 1-6.
- First and second actuators 36 are connected to the first 40 and second 42 processors, respectively, for performing the testing operation during an operation of the system 30.
- There are additional components included within each of the nodes 1-6 such as a chipset 44 which interconnects the hub 32 and the processors 40, 42 and a buffer 46 disposed between each of the processors 40, 42 and the chipsets 44. Chipsets 44 were chosen for their transparent handling of data streams.
- the first 40 and second 42 processors further include a hardware portion 48 for assigning the first and second addresses to the first and second processed information, respectively.
- the hardware portion 48 assigns a destination address onto the processed information indicative of the code of an addressed processor.
- the hardware portion 48 also conforms or rearranges the data or information to an appropriate format.
- the processors 40, 42 can be of different types which recognize different computer formats.
- the hardware portion 48 ensures that the proper format is sent to the addressed processor.
- the addresses are preferably of a common format such that the hub 32 commonly recognizes these signals. Examples of the processors 40, 42 operation are discussed below in greater detail.
- a first memory space 50 is connected to the first processor 40 and a second memory space 52 is connected to the second processor 42. As shown in Figures 4 and 6, the first 50 and second 52 memory spaces are shown in greater detail, respectively.
- a first real memory location 54 is disposed within the first memory space 50 and is connected to the hardware portion 48 of the first processor 40.
- a second real memory location 56 is disposed within the second memory space 52 and is connected to the hardware portion 48 of the second processor 42.
- the hardware portion 48 assigns a memory address onto the processed information indicative of the memory location of an addressed processor.
- the first 54 and second 56 real memory locations can therefore store received processed information, which is also discussed in greater detail below.
- the first 54 and second 56 real memory locations are not capable of reading the memory of another processor. In other words, the processor of a particular node 1-6 can read its own memory within its own memory locations but cannot read the memory stored within a memory location of another processor.
- the first 54 and second 56 real memory locations may also have categorized message areas (not shown) such that multiple data inputs will not be overwritten.
- the categorized message areas could correlate to the memory addresses.
- the first 54 and second 56 real memory locations are of a size commensurate with the needs of the associated node 1-6.
- first 58 and second 60 virtual memory maps are also illustrated within the first 50 and second 52 memory spaces in Figures 4 and 6.
- the first 40 and second 42 processors each include virtual memory maps 58, 60 of each code disposed within each of the first 40 and second 42 processors for each node 1-6 such that the first 40 and second 42 processors can address and forward processed information to each of the indexed processors within the system 30.
- the virtual memory maps 58, 60 are essentially a means for the processors 40, 42 to be able to address each other processor or node 1-6 within the system 30. The operation and specifics of the virtual memory maps 58, 60 will be discussed in greater detail below. Referring back to Figures 5 and 7, each of the first 40 and second 42 processors further include at least one task 62.
- Each of the first 40 and second 42 processors will typically include a plurality of tasks 62 which can be performed in any order.
- a task 62 is a generic term for a specific operation or function being performed by a processor.
- the processors 40, 42 will include executable code for performing the tasks 62 which may be of different complexities. No one process or output associated with a task 62 is unique to any one node 1-6. In fact, many nodes 1-6 may have the same task 62 or tasks 62 for producing similar data.
- the task 62 may be any suitable type of calculation, data collection, classification, or any other desired operation.
- each task 62 includes at least a pair of pointers 64, 66 for directing a flow of data from a sending processor to a destination processor.
- the pointers 64, 66 are illustrated as branching off of the fourth task 62 in Figure 5 and the third task 62 in Figure 7.
- there are pointers 64, 66 associated with each of the tasks 62 such that there is a continuous stream of information.
- Each pair of pointers 64, 66 includes a next task pointer 64 for directing the sending processor to a subsequent task 62 to be performed, and at least one data destination pointer 66 for sending the processed information to the hub 32.
- preferably, there is only one next task pointer 64 such that there is a clear order of operation for the processors 40, 42.
- there may be any number of data destination pointers 66 such that the sending processor may simultaneously forward processed information to a multitude of addressed processors. Further, the processed information sent to each of the addressed processors may be different.
- the next task 64 and data destination 66 pointers do not necessarily have to be operational for each task 62. For example, there may not be a need to send the particular information that the fourth task 62 has produced, in which case the data destination pointer 66 will not be operational. Conversely, the fourth task 62 may be the final task to be performed such that the next task pointer 64 will not be operational. Typically, at least one of the pointers 64, 66 will be operational such that, at a minimum, the information will be sent to the hub 32 or a subsequent task 62 will be performed.
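The next-task and data-destination pointer scheme described in the passages above can be illustrated with a small Python sketch. The `Task` structure and `run` loop are assumptions for the example; the patent describes the pointers abstractly, not any particular data structure.

```python
class Task:
    """A task with the pair of pointers described above: a next-task
    pointer (None when the task is final) and zero or more data
    destination pointers naming addressed processors."""
    def __init__(self, work, next_task=None, destinations=None):
        self.work = work                        # the operation itself
        self.next_task = next_task              # next task pointer
        self.destinations = destinations or []  # data destination pointers

def run(task, hub_outbox):
    """Follow next-task pointers in order; after each task, forward
    its result toward the hub once per operational destination pointer."""
    while task is not None:
        result = task.work()
        for dest in task.destinations:
            hub_outbox.append((dest, result))
        task = task.next_task

# first -> second -> fourth, skipping the third task, as in the
# example flow described later in the specification
fourth = Task(lambda: "d4", destinations=[2])   # sends to node 2
second = Task(lambda: "d2", next_task=fourth)
first = Task(lambda: "d1", next_task=second)
outbox = []
run(first, outbox)
```

Only the fourth task has an operational data destination pointer, so only its result reaches the outbox; the earlier tasks merely hand control onward.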
- a first communication link 68 interconnects the first processor 40 of node 1 and the hub 32 for transmitting the first processed information between the first processor 40 and the hub 32.
- a second communication link 70 interconnects the second processor 42 of node 2 and the hub 32 for transmitting the second processed information between the second processor 42 and the hub 32.
- the hub 32 is capable of receiving processed information from all of the nodes 1-6 simultaneously and then forwarding the processed information to the correct destinations.
- communication links (not numbered) interconnecting each of the remaining processors of the remaining nodes 3-6 to the hub 32.
- the number of communication links is directly dependent upon the number of processors and nodes 1-6.
- an indexer 73 is provided for indexing or organizing the first 40 and second 42 processors to define the different codes for each of the processors 40, 42, which differentiates the processors 40, 42 and the nodes 1-6.
- the indexer 73 is disposed within the hub 32. Hence, when the nodes 1-6 are initially connected to the hub 32, the indexer 73 within the hub 32 begins to organize the nodes 1-6 in a particular order. This is how the entire organization of the system 30 begins.
- the hub 32 and indexer 73 also create the mapping within the processors 40, 42 as part of this organization.
- the mapping includes the first 58 and second 60 virtual memory maps of the first 40 and second 42 processors.
- the virtual memory maps 58, 60 outline each code disposed within each of the processors for each node 1-6 such that the processors can address and forward processed information to each of the indexed processors within the system 30.
- the central routing hub 32 includes a sorter 72 for receiving at least one of the first and second processed information from at least one of the first 40 and second 42 processors.
- at least one sending processor is defined.
- Each of the first 40 and second 42 processors may send processed information, or only one of the first 40 and second 42 processors may send processed information. In any event, at least one of the first 40 and second 42 processors will be deemed a sending processor.
- the hub 32 and sorter 72 also identify a destination of at least one of the first and second addresses of the first and second processed information, respectively. Finally, the hub 32 and sorter 72 send at least one of the first and second processed information without modification over at least one of the communication links 68, 70 to at least one of the first 40 and second 42 processors.
- the processor to which the information is being sent defines at least one addressed processor.
- the sorter 72 includes hardware 74 for determining the destination addresses of the addressed processors.
- the first communication link 68 preferably includes first incoming 76 and first outgoing 78 transmission lines.
- the second communication link 70 preferably includes second incoming 80 and second outgoing 82 transmission lines.
- the first 76 and second 80 incoming transmission lines interconnect the first 40 and second 42 processors, respectively, to the hub 32 for transmitting signals in only one direction from the first 40 and second 42 processors to the hub 32 to define a send-only system 30.
- the first 78 and second 82 outgoing transmission lines interconnect the first 40 and second 42 processors, respectively, to the hub 32 for transmitting signals in only one direction from the hub 32 to the first 40 and second 42 processors to further define the send-only system 30.
- the chipsets 44 are designed to interconnect each of the incoming 76, 80 and outgoing 78, 82 transmission lines and the corresponding processors 40, 42 for creating a virtually transparent connection therebetween.
- the send-only system 30 eliminates the duplication of stored data.
- the first 76 and second 80 incoming transmission lines and the first 78 and second 82 outgoing transmission lines are unidirectional optical fiber links.
- the optical fiber links are particularly advantageous in that the information is passed under high speeds and becomes substantially generic. Further, the unidirectional optical fiber links prevent the possibility of data collision.
- the first 76 and second 80 incoming and the first 78 and second 82 outgoing transmission lines may be of any suitable design without deviating from the scope of the subject invention.
- the distributed multiprocessing system 30 can include any number of additional features for assisting in the uninterrupted flow of data through the system 30.
- a counter may be included to determine and control a number of times processed information is sent to an addressed processor.
- a sequencer may also be included to monitor and control a testing operation as performed by the system 30. In particular, the sequencer may be used to start the testing, perform the test, react appropriately to limits and events, establish that the test is complete, and switch off the test.
- a single communication link 68 interconnects the first processor 40 with the second processor 42 for transmitting the first and second processed information between the first 40 and second 42 processors.
- An indexer indexes the first 40 and second 42 processors to define a different code for each of the processors 40, 42 in a similar manner as above.
- the first 40 and second 42 processors also each include virtual memory maps of each code such that the first 40 and second 42 processors can address and forward processed information to each other.
- the unique architecture allows the two nodes 1, 2 to communicate in a virtually seamless manner.
- the method of communicating between the first 40 and second 42 processors includes the steps of initially indexing the first 40 and second 42 processors to differentiate the processors 40, 42. Then the virtual memory maps of each of the codes is created within each of the first 40 and second 42 processors such that the first 40 and second 42 processors can address and forward processed information to each other.
- the processed information is transmitted by utilizing the virtual memory map of the sending processor, which may be from either node 1, 2, from the sending processor across the communication link toward the addressed processor, which is the corresponding opposite node 1, 2.
- the processed information is then received along with the address in the addressed processor and the processed information is stored within the memory location of the addressed processor.
- a second hub 84 having nodes 7 and 8 with seventh and eighth processors, is interconnected to the first hub 32 by a hub link 86.
- the connection of one hub to another is known as cascading.
- before being connected to the first hub 32, the second hub 84 had indexed its two nodes 7 and 8 as node 1 and node 2.
- the nodes 1-8 of the two hubs 32, 84 must therefore be re-indexed such that there are not two node 1s and two node 2s.
- the indexer first indexes the first 32 and second 84 hubs to define a master hub 32 and secondary hub 84.
- hub number 1 is the master hub 32 and hub number 2 is the secondary hub 84.
- a key 88 is disposed within one of the first 32 and second 84 hubs to determine which of the hubs 32, 84 will be defined as the master hub. As illustrated, the key 88 is within the first hub 32.
- the indexer also indexes the nodes 1-8 and processors to redefine the codes for each of the nodes 1-8 for differentiating the processors and nodes 1-8.
- each hub 32, 84 can write to all of the nodes 1-8 in the new combined or cascaded system 30 as shown in Figure 9.
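The re-indexing performed when two hubs are cascaded can be sketched as follows. The offset policy shown (secondary codes renumbered to follow the master's) is an assumption; the patent only requires that the combined codes all differ.

```python
def cascade(master_nodes, secondary_nodes):
    """Re-index the cascaded system so no node code repeats: the
    master hub keeps its codes, and the secondary hub's nodes are
    renumbered to follow them (an assumed policy for illustration)."""
    offset = max(master_nodes)
    return list(master_nodes) + [n + offset for n in secondary_nodes]

# master hub 32 has nodes 1-6; secondary hub 84 had independently
# indexed its two nodes as node 1 and node 2
combined = cascade([1, 2, 3, 4, 5, 6], [1, 2])
```

After re-indexing, every node in the combined system has a unique code, so either hub can address all eight nodes.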
- referring to Figures 11 through 13, various configurations are illustrated for combining two hubs, each having a plurality of nodes. These examples illustrate that the hubs can be attached through a node as opposed to utilizing the hub link 86. Further, as shown in Figure 11, a node may be connected to more than one hub and the hubs may be connected to more than one common node.
- there may be a third or more hubs interconnected to the system 30 through either a node (as shown) or by hub links 86.
- node 1 is shown again in greater detail.
- the method comprises the steps of processing information within at least one of the first 40 and second 42 processors.
- the information is processed within the first processor 40 by proceeding through a number of tasks 62 in node 1.
- the tasks 62 may be any suitable type of calculation, compilation or the like.
- the processing of the information is further defined as creating data within the first processor 40.
- the creating of the data is further defined as compiling the data within the first processor 40.
- many of the processors of the nodes 1-6, including node 1 in this example, will obtain and compile testing data.
- the system 30 further includes the step of directing the sending processor, which in this example is the first processor 40 of node 1, to a subsequent task 62 to be performed within the first processor 40 while simultaneously sending the processed information across one of the communication links 68, 70 to the hub 32.
- This step is accomplished by the use of the tasks 62 and pointers 64, 66.
- the first task 62 is first completed and then the first processor 40 proceeds to the second task 62.
- the pointers 64, 66 within the first task 62 direct the flow of the first processor 40 to the second task 62.
- the data destination pointer 66 is silent and the next task pointer 64 indicates that the second task 62 should be the next task to be completed.
- the second task 62 is then completed and the first processor 40 proceeds to the fourth task 62.
- the next task pointer 64 of the second task 62 indicates to the first processor 40 that the fourth task 62 should be next, thereby skipping over the third task 62.
- the fourth task 62 is completed and the next task pointer 64 directs the flow to another task 62.
- the data destination pointer 66 of the fourth task 62 indicates that the information as processed after the fourth task 62 should be sent to the hub 32.
- the flow of information from the first task 62 to the second task 62 to the fourth task 62 is purely illustrative and is in no way intended to limit the subject application.
- the processed information from the fourth task 62 is then addressed and transmitted from the first processor 40 across at least one of the communication links 68, 70 toward the hub 32.
- the communication links 68, 70 are preferably unidirectional.
- the step of transmitting the processed information is further defined as transmitting the processed information across the first incoming transmission line 76 in only one direction from the first processor 40 to the hub 32 to define a send-only system 30.
- the transmitting of the processed information is also further defined by transmitting the data along with executable code from the sending processor to the addressed processor.
- the first 40 and second 42 processors initially do not have any processing capabilities.
- the executable code for the processors 40, 42 is preferably sent to the processors 40, 42 over the same system 30.
- the executable code will include a command to instruct the processors 40, 42 to process the forwarded data in a certain fashion.
- the transmitting of the processed information may be a command to rearrange or reorganize the pointers of the addressed processor. This in turn may change the order of the tasks which changes the processing of the addressed processor.
- the transmitted processed data may include any combination of all or other like features.
- the processed information is preferably addressed by the data destination pointer 66 directing the flow to the first virtual memory map 58 of node 1 and pointing to a destination node.
- the step of addressing the processed information is further defined as assigning a destination address onto the processed information indicative of a code of an addressed processor.
- the step of addressing the processed information is further defined as assigning a memory address onto the processed information indicative of the memory location of the addressed processor, i.e., node 2. In this example the destination node, destination address, and memory address will be node 2 while the originating node will be node 1.
- the virtual memory map 58, 60 of each of the codes is created within each of the first 40 and second 42 processors such that the first 40 and second 42 processors can address and forward processed information to each of the indexed processors within the system 30.
- the virtual memory map 58, 60 is a means by which the processor can recognize and address each of the other processors in the system 30.
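One way to picture the virtual memory map is as a per-processor table indexing every other processor in the system. The node codes and address values below are hypothetical, chosen purely for illustration:

```python
# Hypothetical per-processor table: each entry indexes another processor in
# the system so that outgoing information can be addressed to it.
virtual_memory_map = {
    "node1": {"destination_address": 0x01, "memory_base": 0x1000},
    "node2": {"destination_address": 0x02, "memory_base": 0x2000},
}

def address_information(vmm, destination_node, payload):
    """Assign a destination address and a memory address onto the payload."""
    entry = vmm[destination_node]
    return {
        "destination_address": entry["destination_address"],
        "memory_address": entry["memory_base"],
        "payload": payload,
    }

# node 1 (the sending processor) addresses information to node 2
packet = address_information(virtual_memory_map, "node2", "processed data")
```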
- node 1 is then defined as a sending processor.
- the data destination pointer 66 directs the processed information to node 2 in the first virtual memory map 58 such that the destination address of node 2 will be assigned to this information.
- the processed information is sent across the first incoming transmission line 76 of the first communication link 68.
- the processed information, along with the addresses, is then received within the hub 32.
- the destination of the address for the transmitted processed information is identified within the hub 32 and the processed information is sent without modification over the second communication link 70 to, in this example, the second processor 42 of node 2.
- the step of sending the processed information without modification is further defined as sending the processed information over the second outgoing transmission line 82 in only one direction from the hub 32 to the second processor 42 to further define the send-only system 30.
- the hub 32 determines that the destination of the address is for node 2 which defines node 2 as an addressed processor with the destination address.
- the processed information is then stored within the second real memory location 56 of the addressed second processor 42 wherein the second processor 42 can utilize the information as needed.
- the processed information may be stored within the categorized message areas of the second real memory location 56 in accordance with the associated memory address.
- the destination address (of node 2) may be stripped from sent processed information before the information is stored in the second real memory location 56.
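The hub's forwarding step and the optional address stripping can be sketched as below. The queue-per-transmission-line model is an assumption made for the example, not part of the patented design:

```python
def hub_route(packet, outgoing_lines):
    """The hub reads only the destination address and forwards the packet
    unmodified onto the matching outgoing transmission line."""
    outgoing_lines[packet["destination_address"]].append(packet)

def store_at_node(packet, real_memory):
    """The addressed node stores the payload at the given memory address;
    the destination address itself is stripped rather than stored."""
    real_memory[packet["memory_address"]] = packet["payload"]

outgoing_lines = {0x02: []}  # one queue standing in for node 2's line
hub_route({"destination_address": 0x02, "memory_address": 0x2000,
           "payload": "processed data"}, outgoing_lines)

node2_memory = {}
store_at_node(outgoing_lines[0x02].pop(), node2_memory)
```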
- the method of operation for the subject invention eliminates unnecessary duplication of information.
- when node 1 sends the processed information to the hub 32, which then forwards it to node 2, the information, which can include data, executable code, or both, is not saved at node 1 and is stored only at node 2.
- Node 2 does not send a confirmation and node 1 does not request a confirmation.
- Node 1 assumes that the information arrived at node 2.
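The fire-and-forget behavior, in which the sender neither requests nor waits for confirmation, amounts to the following minimal sketch:

```python
def send_only(transmission_line, packet):
    """Place the packet on the incoming transmission line and return at once;
    no acknowledgement is requested, and delivery is simply assumed."""
    transmission_line.append(packet)
    # nothing is awaited, and no copy is retained at the sender

line_to_hub = []
send_only(line_to_hub, {"payload": "processed data"})
```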
- the subject system 30 is used to transport data to desired real memory locations where the data can be used during subsequent processing or evaluation.
- the flow of communication across the system 30 will be precisely controlled such that the nodes 1-6, i.e., node 2, will not receive unnecessary information, or processed information before it is needed.
- the processing at node 1 and the data destination pointer 66 at node 1 will be precisely timed to send the processed information across the system 30 to node 2 only moments before node 2 requires this information.
- node 2 will require the processed information of node 1 during its own processing of tasks.
- the system 30 of the subject invention is therefore virtually seamless and does not suffer from the deficiencies of requesting information from other nodes.
- Another example of communicating across the subject system 30 is illustrated in Figure 20 wherein node 2 communicates with itself. The information is processed within the second processor 42 of node 2 by proceeding through a number of tasks 62.
- the processed information is then addressed and transmitted from the second processor 42 across the second incoming transmission line 80 toward the hub 32.
- the processed information is addressed by the data destination pointer 66 directing the flow to the second virtual memory map 60 and pointing to the destination node.
- a destination address and a memory address are then assigned to the information.
- the destination node, destination address, and memory address will be node 2 while the originating node will also be node 2.
- node 2 is defined as a sending processor.
- the processed information, along with the address, is then received within the hub 32.
- the destination of the address for the transmitted processed information is identified within the hub 32 and the processed information is sent without modification over the second outgoing transmission line 82 to the designated processor.
- the hub 32 determines that the destination of the address is for node 2 which defines node 2 as an addressed processor with the destination address.
- the processed information is sent across the second outgoing transmission line 82 back to the second processor 42 within node 2.
- the processed information is then stored within the second real memory location 56 of the addressed second processor 42 of node 2. Node 2 has now successfully written information to itself.
- the nodes 1-6 can perform self-tests.
- a node, such as node 2 above, can send data and address the data using the second virtual memory map 60 and then later check to ensure that the data was actually received into the second real memory location 56 of node 2. This would test the hub 32 and the communication links 68, 70.
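The self-test can be sketched as a loopback check. The hub is modeled here as an identity forwarder, and the test value and addresses are invented for illustration:

```python
def self_test(node_id, hub_forward, real_memory,
              memory_address=0x2000, test_value=0xA5):
    """Send data addressed to ourselves through the hub, then verify that it
    actually arrived in our own real memory location."""
    packet = {"destination": node_id, "memory_address": memory_address,
              "payload": test_value}
    delivered = hub_forward(packet)  # the hub forwards without modification
    real_memory[delivered["memory_address"]] = delivered["payload"]
    return real_memory.get(memory_address) == test_value

node2_memory = {}
ok = self_test("node2", lambda p: p, node2_memory)  # identity hub stand-in
```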
- the system 30 also includes the step of simultaneously sending the processed information to all of the indexed processors by simultaneously placing the destination addresses of each of the indexed processors onto the sent information. This is also known as broadcasting a message through the system 30.
- node 6 originates a message which is addressed to each of the nodes 1-6 in the system 30.
- the message or information is sent to the hub 32 across the associated incoming transmission line in the same manner as outlined above.
- the hub 32 determines that there are destination addresses for all of the nodes 1-6. This may be accomplished by choosing a special node number or ID which, if selected, automatically distributes the data to all nodes 1-6.
- the hub 32 then distributes the message or information to each of the nodes 1-6 as shown in Figure 22.
- the broadcasting is typically utilized for sending universally needed information, a shut down or start up message, an identify yourself message, or any like message or information.
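The broadcast dispatch via a reserved node number can be sketched as follows. The reserved ID value is an assumption made for the example; the patent does not specify one:

```python
BROADCAST_ID = 0xFF  # hypothetical reserved node number meaning "all nodes"

def hub_distribute(packet, attached_nodes):
    """Expand a broadcast-addressed packet into one copy per attached node;
    any other destination yields a single routed copy."""
    if packet["destination"] == BROADCAST_ID:
        return {n: dict(packet, destination=n) for n in attached_nodes}
    return {packet["destination"]: packet}

nodes = [1, 2, 3, 4, 5, 6]
copies = hub_distribute({"destination": BROADCAST_ID,
                         "payload": "shut down"}, nodes)
```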
- Figure 23 illustrates the broadcasting of information from node 4 in a multi-system 30, i.e., multi-hub, configuration.
- the information is sent from node 4 to each hub in which node 4 is connected.
- the hubs, which are shown as hub numbers 1, 2, and 3, in turn broadcast the information to each of their attached nodes 1-6. It should be appreciated that a broadcast can be accomplished regardless of the configuration of the system 30.
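A multi-hub broadcast of the kind Figure 23 describes can be sketched as below; the hub-to-node attachments are invented for the example:

```python
def multi_hub_broadcast(message, hubs):
    """Each hub rebroadcasts the message to every node attached to it; a node
    attached to several hubs receives one copy per hub."""
    delivered = {}
    for hub_id, attached_nodes in hubs.items():
        for node in attached_nodes:
            delivered.setdefault(node, []).append((hub_id, message))
    return delivered

hubs = {1: ["node1", "node2"], 2: ["node2", "node3"], 3: ["node4"]}
received = multi_hub_broadcast("identify yourself", hubs)
```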
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2002213378A AU2002213378A1 (en) | 2000-10-18 | 2001-10-18 | Distributed multiprocessing system |
MXPA03003361A MXPA03003361A (en) | 2000-10-18 | 2001-10-18 | Distributed multiprocessing system. |
EP01981756.8A EP1328870B1 (en) | 2000-10-18 | 2001-10-18 | Distributed multiprocessing system |
KR1020037005396A KR100851618B1 (en) | 2000-10-18 | 2001-10-18 | Distributed multiprocessing system |
JP2002536882A JP2004526221A (en) | 2000-10-18 | 2001-10-18 | Distributed multiprocessing system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US24123300P | 2000-10-18 | 2000-10-18 | |
US60/241,233 | 2000-10-18 | ||
US09/692,852 | 2000-10-20 | ||
US09/692,852 US7328232B1 (en) | 2000-10-18 | 2000-10-20 | Distributed multiprocessing system |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2002033564A1 true WO2002033564A1 (en) | 2002-04-25 |
WO2002033564B1 WO2002033564B1 (en) | 2002-09-06 |
Family
ID=26934111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2001/032528 WO2002033564A1 (en) | 2000-10-18 | 2001-10-18 | Distributed multiprocessing system |
Country Status (7)
Country | Link |
---|---|
US (1) | US7328232B1 (en) |
EP (1) | EP1328870B1 (en) |
JP (2) | JP2004526221A (en) |
KR (1) | KR100851618B1 (en) |
AU (1) | AU2002213378A1 (en) |
MX (1) | MXPA03003361A (en) |
WO (1) | WO2002033564A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100412849C (en) * | 2005-01-11 | 2008-08-20 | UTStarcom Telecom Co., Ltd. | Distributed multi-processor system and communication method between equivalently relevant state machines on the system |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9137179B2 (en) * | 2006-07-26 | 2015-09-15 | Hewlett-Packard Development Company, L.P. | Memory-mapped buffers for network interface controllers |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5448698A (en) * | 1993-04-05 | 1995-09-05 | Hewlett-Packard Company | Inter-processor communication system in which messages are stored at locations specified by the sender |
EP0844559A2 (en) | 1996-11-22 | 1998-05-27 | MangoSoft Corporation | Shared memory computer networks |
US5884046A (en) * | 1996-10-23 | 1999-03-16 | Pluris, Inc. | Apparatus and method for sharing data and routing messages between a plurality of workstations in a local area network |
US5905725A (en) * | 1996-12-16 | 1999-05-18 | Juniper Networks | High speed switching device |
US6148379A (en) * | 1997-09-19 | 2000-11-14 | Silicon Graphics, Inc. | System, method and computer program product for page sharing between fault-isolated cells in a distributed shared memory system |
Family Cites Families (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4276594A (en) | 1978-01-27 | 1981-06-30 | Gould Inc. Modicon Division | Digital computer with multi-processor capability utilizing intelligent composite memory and input/output modules and method for performing the same |
US4402082A (en) * | 1980-10-31 | 1983-08-30 | Foster Wheeler Energy Corporation | Automatic line termination in distributed industrial process control system |
US4435780A (en) | 1981-06-16 | 1984-03-06 | International Business Machines Corporation | Separate stack areas for plural processes |
US4517637A (en) * | 1983-04-21 | 1985-05-14 | Inconix Corporation | Distributed measurement and control system for industrial processes |
US4862350A (en) * | 1984-08-03 | 1989-08-29 | International Business Machines Corp. | Architecture for a distributive microprocessing system |
US4724520A (en) | 1985-07-01 | 1988-02-09 | United Technologies Corporation | Modular multiport data hub |
US4777487A (en) | 1986-07-30 | 1988-10-11 | The University Of Toronto Innovations Foundation | Deterministic access protocol local area network |
CA1293819C (en) * | 1986-08-29 | 1991-12-31 | Thinking Machines Corporation | Very large scale computer |
US4757497A (en) | 1986-12-03 | 1988-07-12 | Lan-Tel, Inc. | Local area voice/data communications and switching system |
CA2011935A1 (en) | 1989-04-07 | 1990-10-07 | Desiree A. Awiszio | Dual-path computer interconnect system with four-ported packet memory control |
US5276789A (en) | 1990-05-14 | 1994-01-04 | Hewlett-Packard Co. | Graphic display of network topology |
US5296936A (en) | 1991-07-22 | 1994-03-22 | International Business Machines Corporation | Communication apparatus and method for transferring image data from a source to one or more receivers |
GB2263988B (en) | 1992-02-04 | 1996-05-22 | Digital Equipment Corp | Work flow management system and method |
JPH0690695B2 (en) * | 1992-06-24 | 1994-11-14 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Computer system and system expansion device |
EP0596648A1 (en) | 1992-11-02 | 1994-05-11 | National Semiconductor Corporation | Network link endpoint capability detection |
EP0674790B1 (en) | 1992-12-21 | 2002-03-13 | Apple Computer, Inc. | Method and apparatus for transforming an arbitrary topology collection of nodes into an acyclic directed graph |
US5513325A (en) | 1992-12-23 | 1996-04-30 | Unisys Corporation | Technique for coupling CTOS units to non-CTOS host |
JP3266351B2 (en) * | 1993-01-20 | 2002-03-18 | 株式会社日立製作所 | Database management system and query processing method |
US5802391A (en) | 1993-03-16 | 1998-09-01 | Ht Research, Inc. | Direct-access team/workgroup server shared by team/workgrouped computers without using a network operating system |
JPH06348658A (en) | 1993-06-03 | 1994-12-22 | Nec Corp | Memory management system for multiprocessor system |
US5933607A (en) | 1993-06-07 | 1999-08-03 | Telstra Corporation Limited | Digital communication system for simultaneous transmission of data from constant and variable rate sources |
US5596723A (en) | 1994-06-23 | 1997-01-21 | Dell Usa, Lp | Method and apparatus for automatically detecting the available network services in a network system |
FR2722597B1 (en) * | 1994-07-18 | 1996-08-14 | Kodak Pathe | DEVICE FOR MONITORING THE PARAMETERS OF A MANUFACTURING PROCESS |
US5557778A (en) | 1994-11-07 | 1996-09-17 | Network Devices, Inc. | Star hub connection device for an information display system |
SE514798C2 (en) | 1994-11-23 | 2001-04-23 | Ericsson Telefon Ab L M | Systems and methods for providing a management system with information and a telecommunications system |
US5630059A (en) * | 1995-02-06 | 1997-05-13 | International Business Machines Corporation | Expedited message transfer in a multi-nodal data processing system |
JP2736237B2 (en) * | 1995-03-06 | 1998-04-02 | 技術研究組合新情報処理開発機構 | Remote memory access controller |
JP3303045B2 (en) * | 1995-06-09 | 2002-07-15 | 日本電信電話株式会社 | Network distributed processing system |
US5742602A (en) | 1995-07-12 | 1998-04-21 | Compaq Computer Corporation | Adaptive repeater system |
JP2000514967A (en) | 1996-07-10 | 2000-11-07 | レクロイ・コーポレーション | Method and system for characterizing a terminal in a local area network |
US6021495A (en) | 1996-12-13 | 2000-02-01 | 3Com Corporation | Method and apparatus for authentication process of a star or hub network connection ports by detecting interruption in link beat |
US6052380A (en) | 1996-11-08 | 2000-04-18 | Advanced Micro Devices, Inc. | Network adapter utilizing an ethernet protocol and utilizing a digital subscriber line physical layer driver for improved performance |
US5937388A (en) | 1996-12-05 | 1999-08-10 | Hewlett-Packard Company | System and method for performing scalable distribution of process flow activities in a distributed workflow management system |
US6098091A (en) * | 1996-12-30 | 2000-08-01 | Intel Corporation | Method and system including a central computer that assigns tasks to idle workstations using availability schedules and computational capabilities |
US6269391B1 (en) | 1997-02-24 | 2001-07-31 | Novell, Inc. | Multi-processor scheduling kernel |
US5964832A (en) * | 1997-04-18 | 1999-10-12 | Intel Corporation | Using networked remote computers to execute computer processing tasks at a predetermined time |
US5937168A (en) * | 1997-05-30 | 1999-08-10 | Bellsouth Corporation | Routing information within an adaptive routing architecture of an information retrieval system |
US5991808A (en) * | 1997-06-02 | 1999-11-23 | Digital Equipment Corporation | Task processing optimization in a multiprocessor system |
US6067585A (en) | 1997-06-23 | 2000-05-23 | Compaq Computer Corporation | Adaptive interface controller that can operate with segments of different protocol and transmission rates in a single integrated device |
US5905868A (en) * | 1997-07-22 | 1999-05-18 | Ncr Corporation | Client/server distribution of performance monitoring data |
US6173207B1 (en) * | 1997-09-22 | 2001-01-09 | Agilent Technologies, Inc. | Real-time control system with non-deterministic communication |
US6067595A (en) | 1997-09-23 | 2000-05-23 | Icore Technologies, Inc. | Method and apparatus for enabling high-performance intelligent I/O subsystems using multi-port memories |
US6002996A (en) * | 1997-11-26 | 1999-12-14 | The Johns Hopkins University | Networked sensor system |
US6067477A (en) * | 1998-01-15 | 2000-05-23 | Eutech Cybernetics Pte Ltd. | Method and apparatus for the creation of personalized supervisory and control data acquisition systems for the management and integration of real-time enterprise-wide applications and systems |
US6012101A (en) | 1998-01-16 | 2000-01-04 | Int Labs, Inc. | Computer network having commonly located computing systems |
US6233611B1 (en) * | 1998-05-08 | 2001-05-15 | Sony Corporation | Media manager for controlling autonomous media devices within a network environment and managing the flow and format of data between the devices |
JP3720981B2 (en) | 1998-06-15 | 2005-11-30 | 日本電気株式会社 | Multiprocessor system |
JP2000039383A (en) * | 1998-07-22 | 2000-02-08 | Nissan Motor Co Ltd | Fault diagnosing device for automobile |
EP1171823A4 (en) * | 1999-03-03 | 2006-10-04 | Cyrano Sciences Inc | Apparatus, systems and methods for detecting and transmitting sensory data over a computer network |
US6261103B1 (en) * | 1999-04-15 | 2001-07-17 | Cb Sciences, Inc. | System for analyzing and/or effecting experimental data from a remote location |
US6405337B1 (en) * | 1999-06-21 | 2002-06-11 | Ericsson Inc. | Systems, methods and computer program products for adjusting a timeout for message retransmission based on measured round-trip communications delays |
US6421676B1 (en) * | 1999-06-30 | 2002-07-16 | International Business Machines Corporation | Scheduler for use in a scalable, distributed, asynchronous data collection mechanism |
US6125420A (en) * | 1999-11-12 | 2000-09-26 | Agilent Technologies Inc. | Mechanisms for determining groupings of nodes in a distributed system |
US6871211B2 (en) * | 2000-03-28 | 2005-03-22 | Ge Medical Systems Information Technologies, Inc. | Intranet-based medical data distribution system |
US20020019844A1 (en) * | 2000-07-06 | 2002-02-14 | Kurowski Scott J. | Method and system for network-distributed computing |
2000
- 2000-10-20 US US09/692,852 patent/US7328232B1/en not_active Expired - Lifetime

2001
- 2001-10-18 WO PCT/US2001/032528 patent/WO2002033564A1/en active Application Filing
- 2001-10-18 KR KR1020037005396A patent/KR100851618B1/en not_active Expired - Fee Related
- 2001-10-18 MX MXPA03003361A patent/MXPA03003361A/en active IP Right Grant
- 2001-10-18 EP EP01981756.8A patent/EP1328870B1/en not_active Expired - Lifetime
- 2001-10-18 AU AU2002213378A patent/AU2002213378A1/en not_active Abandoned
- 2001-10-18 JP JP2002536882A patent/JP2004526221A/en active Pending

2008
- 2008-07-30 JP JP2008196242A patent/JP5599139B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP1328870A4 |
Also Published As
Publication number | Publication date |
---|---|
JP2004526221A (en) | 2004-08-26 |
EP1328870A1 (en) | 2003-07-23 |
MXPA03003361A (en) | 2004-12-02 |
KR20040018244A (en) | 2004-03-02 |
EP1328870B1 (en) | 2017-11-22 |
AU2002213378A1 (en) | 2002-04-29 |
JP2008269651A (en) | 2008-11-06 |
EP1328870A4 (en) | 2008-03-12 |
JP5599139B2 (en) | 2014-10-01 |
US7328232B1 (en) | 2008-02-05 |
KR100851618B1 (en) | 2008-08-12 |
WO2002033564B1 (en) | 2002-09-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 290/DELNP/2003 Country of ref document: IN |
|
REEP | Request for entry into the european phase |
Ref document number: 2001981756 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2001981756 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 018167829 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: PA/a/2003/003361 Country of ref document: MX |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020037005396 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2002536882 Country of ref document: JP |
|
WWP | Wipo information: published in national office |
Ref document number: 2001981756 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 1020037005396 Country of ref document: KR |